\section{Introduction} \par Due to the advances of trigger-action platforms (e.g., IFTTT \cite{IFTTT2020}) in the IoT domain, IoT networks have become more vulnerable to malicious event injection attacks. Since IoT devices create a chain of interactions maintaining functional dependencies between entities and actions \cite{Celik2019b} \cite{Alam2021}, it is possible for adversaries to remotely inject malicious events somewhere in the interaction chain using a ghost device and activate a critical action by exploiting an autonomous trigger-action scenario. For instance, an adversary can inject a fake thermometer reading of 110\textdegree F into the chain to initiate a critical \textit{window opening} action. \par A number of research efforts in the existing literature attempt to address the vulnerabilities caused by trigger-actions in an IoT network. Most of them are designed to validate security properties by identifying unsafe or insecure state transitions in the network \cite{Celik2019b} \cite{Nguyen-IoTSan-2018} \cite{Leonardo2018}. Another line of research addresses policy violations by checking for sensitive user actions that may violate security policies \cite{Leonardo2018}. The research closest to our proposition is PEEVES \cite{sbirnbach2019}, where physical fingerprints of the devices are extracted using machine learning techniques to verify whether or not a certain event actually occurs. \par In this paper, we propose IoTMonitor, a security system that adopts a Hidden Markov Model-based approach to determine the optimal attack path an attacker may follow to implement a trigger-action based attack, thus providing suggestions for subsequent patching and security measures. 
Our system examines the physical changes happening in an IoT environment due to event occurrences, discovers the probabilistic relation between physical evidence and underlying events using the Baum-Welch algorithm \cite{Baum1967} \cite{Baum1968}, and discerns the optimal attack path using the Viterbi algorithm \cite{viterbi1967}. Once the optimal attack path is determined, IoTMonitor identifies the crucial nodes in the path that the attacker must compromise to carry out the attack. Such information can be used to prioritize security measures for IoT platforms. \par The contributions of the paper can be summarized as follows: \vspace{-5pt} \begin{itemize} \item We propose IoTMonitor, a Hidden Markov Model-based system that identifies the optimal attack path in a trigger-action IoT environment based on the probabilistic relation between actual IoT events and corresponding physical evidence; \item We implement the Baum-Welch algorithm to estimate transition and emission probabilities, and the Viterbi algorithm to discern the attack path; \item We propose an algorithm to detect the crucial nodes in an extracted optimal attack path, thus providing guidelines for subsequent security measures; \item We thoroughly evaluate the performance of IoTMonitor in detecting the optimal attack path and achieve high accuracy scores. \end{itemize} \vspace{-5pt} \par The rest of the paper is organized into four sections. In Section II, we define the attack landscape, discuss an attack scenario, and present the threat model. In Section III, we present IoTMonitor and discuss each of its components in detail. In Section IV, we present the evaluation results of our approach. Finally, in Section V, we conclude the paper by summarizing the methodology and outputs of our experiments and presenting future extensions. 
\section{Attack Landscape} \subsection{A Sample Attack Scenario} \par Assume that Alice has a limited number of trigger-action enabled IoT devices, including a Smart Lock, Motion Detector, Accelerometer, Smart Light, Coffee Machine, and Smart Window. Alice controls each device through a mobile application on her cell phone. The devices communicate with each other through a hub. Since the platform supports trigger-action functionality, a device has the capability to trigger an action of another device. \par Alice sets up the trigger events as follows. When she unlocks the smart lock of the front door and walks in, the motion sensor in the living room detects the motion and activates ``home-mode''. The home-mode activation event automatically turns on the smart light. When the light is turned on, the events associated with coffee grinding and window opening are triggered. When the coffee is ready, Alice takes the coffee and enters her bedroom by opening and subsequently closing the door. The vibration generated by the opening and closing of the door is measured by an accelerometer. Thus, a chain of events is triggered by the initial action. \par Now, Bob, an attacker, wants to compromise the smart window remotely when Alice is not at home and the front door is locked. His objective is to inject malicious events into the network to create a chain of interactions that eventually triggers the events associated with the window. \subsection{Threat Model} \par We assume that the attacker knows the locations of the IoT devices in the target system but does not have physical access to the home. He can eavesdrop on the wireless communication taking place between devices and the hub. His goal is to perform a trigger-action attack by injecting fake events into the IoT network through ghost applications. The ghost applications impersonate target devices by mimicking their characteristics and functionalities. 
Therefore, he does not need to deploy any real IoT devices to conduct the attack. \section{The IoTMonitor System} \par Since the attacker exploits the trigger-action functionality of the IoT network to generate a chain of interactions by injecting fake events, we can thwart a trigger-action attack effectively if we can identify the optimal attack path the attacker may follow and perform security hardening on the crucial nodes in that path. In this work, we propose \textit{IoTMonitor}, a system that discerns optimal attack paths by analyzing the physical evidence generated during the attack cycle, which is probabilistically correlated with the actual underlying events. IoTMonitor formulates the attack as a Hidden Markov Model (HMM) problem and solves it to determine the most likely sequence of events occurring during an attack cycle, and further identifies the crucial nodes in that sequence. In this paper, a \textit{node} represents an event occurring at a particular device. \subsection{Our Assumption} \par We assume that a configured trigger-action sequence contains $N$ events: $d_1, d_2,..., d_N$. The attacker injects fake events $\{d_i\}$ into the chain to achieve his final goal. Note that the attacker does not necessarily have to inject $d_1$, since he can wait for the occurrence of some real event to trigger the automatic chain occurrence of the rest of the events required to implement the attack. When an event is triggered, it causes physical changes in the environment, which can be perceived as corresponding physical evidence $\{ph_i\}$ captured by an array of sensors and harnessed to verify the occurrence of that specific event. Note that some events may produce no observable evidence, while others may produce more than one piece of evidence. \par Given this assumption, IoTMonitor models the trigger-action scenario as an HMM problem, where the physical evidence is visible to the analysis agent, but the actual events remain hidden. 
The tasks of the agent are to determine the probabilistic relation between events and evidence, and to employ it to figure out the optimal attack path and diagnose the crucial nodes in that path. \begin{figure} \centering \includegraphics[scale=0.35]{figures/IoTMonitor_Framework_Updated.png} \caption{IoTMonitor System} \label{fig:IoTMonitor_system} \vspace{-10pt} \end{figure} \subsection{IoTMonitor} The proposed IoTMonitor comprises three main components: 1) a state machine generator, 2) a sequence extractor, and 3) a crucial node detector. Fig.~\ref{fig:IoTMonitor_system} shows the architecture of IoTMonitor. We discuss each component in detail below. \subsubsection{\textbf{State Machine Generator}} \par When events are triggered in the environment and the deployed sensors capture corresponding evidence per event occurrence, this component constructs a \textit{state machine} to represent how the environment's state changes due to the exploitation of trigger-action functionalities across a series of time instances $t=1, 2,..., T$. Here, \textit{states} delineate useful information regarding the occurrence of different events $d_i$ and the corresponding evidence $\{ph_i\}$. \par The state machine accommodates two types of states: 1) \textit{true states}, which correspond to the actual event occurrences, and 2) \textit{observation states}, which represent the physical evidence. The true states remain hidden, and the analysis agent leverages the observation states to infer the hidden true state sequence. 
We define our state space as follows: \begin{itemize} \item true state, $x_i$: the state corresponding to the occurrence of $d_i$ \item observation state, $y_j$: a subset of the physical evidence $\{ph_1, ph_2,..., ph_M\}$, which is emitted when the environment transitions to a new state \end{itemize} \begin{figure} \centering \includegraphics[scale=0.35]{figures/State_Machine_Updated.png} \caption{A Sample State Machine} \label{fig:State_machine} \vspace{-10pt} \end{figure} \par We assume that there are $N$ true states $X=\{x_1, x_2,..., x_N\}$ and $T$ observation states $Y = \{y_1, y_2,..., y_T\}$ in the state machine, where $X_t$ and $Y_t$, respectively, denote the true state and observation state at time $t$. Each $y_j$ contains a subset of the physical evidence $\{ph_1, ph_2,..., ph_M\}$, where the total number of pieces of evidence is $M$. Note that each observation state $Y_t$ in our experiment is determined with the help of a \textit{sliding window} function, which is discussed in detail in Section IV. 
\vspace{1pt} \par When the environment is in $x_i$ at time instance $t$ and makes a transition to any $x_j \in X$ at time instance $t+1$, it changes its true state with a \textit{transition probability} $q_{ij} \in Q$, which can be defined as: \vspace{-5pt} \small \begin{equation} \label{define-state-transition-probability} q_{ij} = Pr (X_{t+1}=x_j | X_t=x_i), \hspace{0.3cm} 1 \leq i,j \leq N \end{equation} \normalsize Suppose that, because of this state transition, the environment emits a new observation $y_k \in Y$ with an \textit{emission probability} $\mu_j(y_k) \in E$, which can be defined as: \vspace{-2pt} \small \begin{equation} \label{define-emission-probablility} \begin{split} \mu_j(y_k) = Pr (Y_{t+1}=y_k | X_{t+1}=x_j), \hspace{0.3cm} & 1 \leq j \leq N \\ & 1 \leq k \leq T \end{split} \end{equation} \normalsize \par In equation \eqref{define-state-transition-probability}, $Q = \{q_{ij}\}$ is termed the \textit{state transition probability distribution}, while $E = \{\mu_j(y_k)\}$ in equation \eqref{define-emission-probablility} is termed the \textit{emission probability distribution}. \par To model the attack as an HMM, we also need an \textit{initial state distribution} $\sigma = \{\sigma_i\}$, defined as: \vspace{-3pt} \small \begin{equation} \begin{aligned} \label{initial-state-probability} \sigma_i = Pr(X_1 = x_i), \hspace{0.3cm} 1 \leq i \leq N \end{aligned} \end{equation} \normalsize Hence, $\sigma_i$ is the probability that the system is in the true state $x_i$ at time instance $t=1$. \par Combining the five aforementioned elements, IoTMonitor models the trigger-action attack as an HMM problem $\big \langle N, M, Q, E, \sigma \big \rangle$ and solves it to determine the optimal attack path given a sequence of observation states. IoTMonitor also defines a parameter $\theta = (\sigma, Q, E)$, which is called the \textit{current model} of the HMM. 
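To make the model concrete, the HMM tuple $\big \langle N, M, Q, E, \sigma \big \rangle$ can be represented with a few lines of NumPy. This is an illustrative sketch, not the authors' implementation; the sizes and the random initialization below are placeholders:

```python
import numpy as np

# Illustrative sketch of the HMM tuple <N, M, Q, E, sigma>.
# N hidden true states (events); T_obs distinct observation states (evidence sets).
rng = np.random.default_rng(0)

N, T_obs = 4, 6                            # placeholder sizes
Q = rng.random((N, N))                     # q_ij = Pr(X_{t+1}=x_j | X_t=x_i)
Q /= Q.sum(axis=1, keepdims=True)          # each row must sum to 1
E = rng.dirichlet(np.ones(T_obs), size=N)  # mu_j(y_k) = Pr(Y=y_k | X=x_j)
sigma = rng.random(N)
sigma /= sigma.sum()                       # initial state distribution

theta = (sigma, Q, E)                      # the "current model" of the HMM
```

The row normalizations encode the stochasticity constraints: each row of $Q$ and $E$, and $\sigma$ itself, must be a probability distribution.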
Figure \ref{fig:State_machine} shows a sample state machine where \textit{blue} circles represent the true states and \textit{yellow} circles represent the observation states. \textbf{Note:} For the rest of the paper, we sometimes refer to an \textit{observation state} simply as an \textit{observation}, and we use the terms \textit{true state} and \textit{state} interchangeably. \subsubsection{\textbf{Sequence Extractor}} \par Once the trigger-action sequence is modeled as an HMM problem, IoTMonitor attempts to estimate the probability values and retrieve the optimal hidden state sequence from the observations. First, it estimates the converged state distributions, transition probabilities, and emission probabilities. Then, it seeks the underlying state sequence that maximizes the probability of obtaining a certain observation sequence. To accomplish these tasks, the \textit{sequence extractor} employs two subcomponents: a) a probability estimator, and b) a sequence retriever. Both are described below. \par a) \textbf{Probability Estimator}: Given a complete observation sequence $\langle Y_1, Y_2,..., Y_T \rangle$, the goal of this component is to determine the following: \vspace{-2pt} \small \begin{equation} \begin{split} \label{eqn1-baum-welch-general} \theta^* & = \underset{\theta}{argmax} \ Pr(Y_1, Y_2,..., Y_T | \theta) \end{split} \end{equation} \normalsize \vspace{-20pt} \par We use the Baum-Welch algorithm \cite{Baum1967} \cite{Baum1968} to iteratively update the current model $\theta$ and solve equation \eqref{eqn1-baum-welch-general}. It uses a \textit{forward-backward procedure} to find the maximum likelihood estimate of $\theta$ given a certain set of observations. We assume that each observation $Y_t$ is emitted by the environment at one discrete time instance $t=1,2,...,T$. 
\vspace{3pt} \par \textbf{Forward-backward Procedure}: Let $\alpha_t(i)$ be the probability of observing the sequence $\langle Y_1, Y_2,..., Y_t \rangle$ and the system being in the true state $x_i$ at time $t$, and let $\beta_t(i)$ be the probability of observing $\langle Y_{t+1}, Y_{t+2},..., Y_T \rangle$ given that the system is in the true state $x_i$ at time $t$. So, \vspace{-4pt} \small \begin{equation} \begin{aligned} \label{forward-backward-procedure} \alpha_t(i) &= Pr(Y_1, Y_2, ..., Y_t, X_t = x_i | \theta) \\ \beta_t(i) &= Pr(Y_{t+1}, Y_{t+2}, ..., Y_T | X_t = x_i, \theta) \end{aligned} \end{equation} \normalsize \par We can compute $\alpha_t(i)$ and $\beta_t(i)$ using the following steps: 1. Initialization \small \begin{equation} \begin{aligned} \label{forward-backward-procedure-initialization} \alpha_1(i) &= \sigma_i \mu_i(y_1), \hspace{0.3cm} 1 \leq i \leq N \\ \beta_T(i) &= 1, \hspace{0.3cm} 1 \leq i \leq N \end{aligned} \end{equation} \normalsize \vspace{-5pt} 2. Induction \vspace{-10pt} \small \begin{equation} \begin{aligned} \label{forward-backward-procedure-induction} & \alpha_{t+1}(j) = \mu_j(y_{t+1}) \sum_{i=1}^N \alpha_t(i) q_{ij}, \hspace{0.2cm} 1 \leq t \leq T-1, \hspace{0.2cm} 1 \leq j \leq N \\ & \beta_t(i) = \sum_{j=1}^N q_{ij} \mu_j(y_{t+1}) \beta_{t+1}(j), \hspace{0.1cm} t = T-1, ...,2, 1, \hspace{0.2cm} 1 \leq i \leq N \\ \end{aligned} \end{equation} \normalsize \par These two steps together constitute the \textit{forward-backward procedure}, and $\alpha_t(i)$ and $\beta_t(i)$ are termed the \textit{forward variable} and \textit{backward variable}, respectively. \par Now, suppose $\delta_t(i)$ is the probability of the system being in the true state $x_i$ at time instance $t$ given the complete observation sequence $\langle Y_1, Y_2,..., Y_T \rangle$ and the current model $\theta$. 
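The initialization and induction steps above can be sketched as follows. This is an illustrative NumPy version, not the paper's code; \texttt{obs} is a sequence of observation indices:

```python
import numpy as np

def forward_backward(sigma, Q, E, obs):
    """Illustrative forward-backward procedure.

    alpha[t, i] = Pr(Y_1..Y_t, X_t = x_i | theta)   (forward variable)
    beta[t, i]  = Pr(Y_{t+1}..Y_T | X_t = x_i, theta) (backward variable)
    """
    N, T = len(sigma), len(obs)
    alpha = np.zeros((T, N))
    beta = np.ones((T, N))                # initialization: beta_T(i) = 1
    alpha[0] = sigma * E[:, obs[0]]       # initialization: alpha_1(i) = sigma_i mu_i(y_1)
    for t in range(T - 1):                # forward induction
        alpha[t + 1] = E[:, obs[t + 1]] * (alpha[t] @ Q)
    for t in range(T - 2, -1, -1):        # backward induction
        beta[t] = Q @ (E[:, obs[t + 1]] * beta[t + 1])
    return alpha, beta
```

A useful sanity check: $\sum_i \alpha_t(i)\beta_t(i)$ equals the sequence likelihood $Pr(Y_1,...,Y_T \mid \theta)$ for every $t$.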
We can define this probability in terms of the forward and backward variables $\alpha_t(i)$ and $\beta_t(i)$, i.e., \small \begin{equation} \label{eqn1-update-delta} \begin{split} \delta_t(i) & = Pr(X_t = x_i | Y_1, Y_2, ..., Y_T, \theta) \\ & = \frac{Pr(X_t = x_i, Y_1, Y_2, ..., Y_T | \theta)}{Pr(Y_1, Y_2, ..., Y_T | \theta)} \\ & = \frac{\alpha_t(i)\beta_t(i)}{\sum_{j=1}^N \alpha_t(j)\beta_t(j)} \end{split} \end{equation} \normalsize \par Again, given the complete observation sequence $\langle Y_1, Y_2,..., Y_T \rangle$ and the current model $\theta$, suppose $\xi_t(i,j)$ is the probability of the system being in the true states $x_i$ and $x_j$ at time instances $t$ and $t+1$, respectively. So, \vspace{-3pt} \small \begin{equation} \label{eqn2-update-xi} \begin{split} \xi_t(i,j) & = Pr(X_t = x_i, X_{t+1} = x_j | Y_1, Y_2, ..., Y_T, \theta) \\ & = \frac{Pr(X_t = x_i, X_{t+1} = x_j, Y_1, Y_2, ..., Y_T | \theta)}{Pr(Y_1, Y_2, ..., Y_T | \theta)} \\ & = \frac{\alpha_t(i) q_{ij} \beta_{t+1}(j) \mu_j(y_{t+1})}{\sum_{i=1}^N \sum_{j=1}^N \alpha_t(i) q_{ij} \beta_{t+1}(j) \mu_j(y_{t+1})} \end{split} \end{equation} \normalsize \par Now, we can update the initial state distribution $\Bar{\sigma}_i$, transition probability $\Bar{q}_{ij}$, and emission probability $\Bar{\mu}_j(y_k)$ using the two quantities $\delta_t(i)$ and $\xi_t(i,j)$. The state distribution can be updated as: \small \begin{equation} \label{eqn1-update-final-state-distribution} \Bar{\sigma}_i = \delta_1(i) \vspace{-3pt} \end{equation} \normalsize \par where $\delta_1(i)$ is the expected relative frequency of the system being in the true state $x_i$ at time instance $t=1$. 
\par To update the transition probabilities, we compute the ratio of \textit{the expected number of state transitions from $x_i$ to only $x_j$} (the numerator of equation \eqref{eqn2-update-final-transition-probabalities}) and \textit{the expected number of transitions from $x_i$ to all other true states} (the denominator of equation \eqref{eqn2-update-final-transition-probabalities}). \small \begin{equation} \label{eqn2-update-final-transition-probabalities} \Bar{q}_{ij} = \frac{\sum_{t=1}^{T-1} \xi_t(i,j)}{\sum_{t=1}^{T-1} \delta_t(i)} \end{equation} \normalsize \par To update the emission probabilities, we take the ratio of two other quantities: \textit{the expected number of times being in state $x_j$ and observing the observation $y_k$} (the numerator of equation \eqref{eqn3-update-final-emission probabilities}), and \textit{the expected number of times being in state} $x_j$ (the denominator of equation \eqref{eqn3-update-final-emission probabilities}). \small \vspace{-5pt} \begin{equation} \label{eqn3-update-final-emission probabilities} \Bar{\mu}_j(y_k) = \frac{\sum_{t=1}^{T} 1_{(Y_t = y_k)} \delta_t(j)}{\sum_{t=1}^{T} \delta_t(j)} \end{equation} \normalsize where \small \begin{equation} \label{eqn2-getting-observation_y_k} 1_{(Y_t = y_k)} = \left\{ \begin{array}{@{}ll@{}} 1, \hspace{0.5cm} \text{if } \hspace{0.1cm} Y_t = y_k \\ 0, \hspace{0.5cm} \text{otherwise} \\ \end{array}\right. \end{equation} \normalsize \par The updated parameters $\Bar{\sigma} = \{\Bar{\sigma}_i\}$, $\Bar{Q} = \{ \Bar{q}_{ij} \}$, and $\Bar{E} = \{ \Bar{\mu}_j(y_k) \}$ now constitute the new model $\Bar{\theta} = (\Bar{\sigma}, \Bar{Q}, \Bar{E})$. We iterate the updates in equations \eqref{eqn1-update-final-state-distribution}, \eqref{eqn2-update-final-transition-probabalities}, and \eqref{eqn3-update-final-emission probabilities} until $\Bar{\theta} \approx \theta$. 
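One full re-estimation pass, computing $\delta_t(i)$ and $\xi_t(i,j)$ and then the updated model $\Bar{\theta} = (\Bar{\sigma}, \Bar{Q}, \Bar{E})$, can be sketched as follows. This is an illustrative NumPy version, not the authors' code; \texttt{alpha} and \texttt{beta} are the forward and backward variables for \texttt{obs} under the current model:

```python
import numpy as np

def baum_welch_step(sigma, Q, E, obs, alpha, beta):
    """Illustrative single Baum-Welch re-estimation pass."""
    T = len(obs)
    obs = np.asarray(obs)
    likelihood = alpha[-1].sum()            # Pr(Y_1..Y_T | theta)
    delta = alpha * beta / likelihood       # delta_t(i): state posterior
    xi = np.zeros((T - 1,) + Q.shape)       # xi_t(i, j): transition posterior
    for t in range(T - 1):
        xi[t] = alpha[t][:, None] * Q * E[:, obs[t + 1]] * beta[t + 1]
        xi[t] /= xi[t].sum()
    new_sigma = delta[0]                    # updated initial distribution
    # updated q_ij: expected transitions i->j over expected visits to i
    new_Q = xi.sum(axis=0) / delta[:-1].sum(axis=0)[:, None]
    # updated mu_j(y_k): expected visits to j emitting y_k over visits to j
    new_E = np.zeros_like(E)
    for k in range(E.shape[1]):
        new_E[:, k] = delta[obs == k].sum(axis=0)
    new_E /= delta.sum(axis=0)[:, None]
    return new_sigma, new_Q, new_E
```

By construction, the updated $\Bar{\sigma}$, and every row of $\Bar{Q}$ and $\Bar{E}$, remain valid probability distributions.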
This convergence is guaranteed by Baum et al. \cite{Baum1968}, who show that either 1) the initial model $\theta$ defines a critical point of the likelihood function, where $\Bar{\theta}=\theta$, or 2) $\Bar{\theta}$ explains the observation sequence $\langle Y_1, Y_2,..., Y_T \rangle$ better than $\theta$, i.e., $Pr(Y_1, Y_2,..., Y_T | \Bar{\theta}) > Pr(Y_1, Y_2,..., Y_T | \theta)$ \cite{Rabiner1989}. \par b) \textbf{Sequence Retriever}: Once the probability estimator determines the converged HMM model $\theta^*$, the \textit{sequence retriever} extracts the optimal sequence of hidden events using the Viterbi algorithm \cite{viterbi1967}. Given a particular observation sequence $\langle Y_1, Y_2,..., Y_t \rangle$ at time instance $t$ with $Y_t = y_k$, the goal here is to determine the following: \small \begin{equation} \label{viterbi-objective-eqn} \begin{split} \omega _t(i) &= \underset{x_1,..., x_{t-1}}{max} \Big \{ Pr(X_1 = x_1,...,X_t = x_i, Y_1,..., Y_t = y_k| \theta) \Big \} \\ & = \underset{1 \leq j \leq N}{max} \Big \{\omega _{t-1}(j) \, q_{ji} \Big \} \mu_i (y_k), \\ & \hspace{4.5cm} 2 \leq t \leq T, \hspace{0.1cm} 1 \leq i \leq N \end{split} \end{equation} \normalsize Hence, $\omega _t(i)$ represents the maximum probability of the occurrence of a particular state sequence $\langle x_1, x_2,..., x_i \rangle$ ending in $x_i$ at time instance $t$ that corresponds to the aforementioned observation sequence $\langle Y_1, Y_2,..., Y_t \rangle$. \par Equation \eqref{viterbi-objective-eqn} can be solved recursively to determine the highest probability of the occurrence of a complete state sequence for the time instances $2 \leq t \leq T$, given that $\omega _1(i) = \sigma_i \mu_i(y_1)$. 
The recursion stops after computing $\omega _T(i)$, at which point: \small \begin{equation} \label{viterbi-omegha_final_instant} \omega _T^* = \underset{1 \leq i \leq N}{max} \ \omega _T(i) \end{equation} \normalsize \par To obtain the optimal hidden sequence, we must trace the arguments that maximize equation (\ref{viterbi-objective-eqn}) during each recursion. To achieve that, we introduce a variable $\chi$ to hold all the traces: \small \begin{equation} \label{viterbi-trace-arguments} \chi _t(i) = \underset{1 \leq j \leq N }{argmax} \Big \{ \omega _{t-1}(j) \, q_{ji} \Big \}, \hspace{0.2cm} 2 \leq t \leq T, \hspace{0.1cm} 1\leq i \leq N \end{equation} \normalsize Note that $\chi _1(i) = 0$ because we start tracing the states at time instance $t=2$, once we have at least one previous state. \par Once we have $\chi _T(i)$, all we need is to backtrack through the traces to discern the optimal hidden sequence: \small \begin{equation} \label{viterbi-backtracking} \psi _t ^ * = \chi_{t+1} (\psi_{t+1} ^ *), \hspace{0.2cm} t = T-1, \dots, 2, 1 \end{equation} \normalsize Here, $\psi _T ^ * = \underset{1 \leq i \leq N}{argmax} \ \omega _T(i)$, and $\Upsilon = \{\psi_1 ^*, \psi_2 ^*,..., \psi_T ^*\}$ is the extracted optimal sequence. Note that each $\psi_t ^* \in \Upsilon$ represents a true state in $X$. \vspace{5pt} \normalsize \subsubsection{\textbf{Crucial Node Detector}} \par After the \textit{sequence retriever} extracts the hidden optimal sequence $\Upsilon = \{\psi_1 ^*, \psi_2 ^*,..., \psi_T ^* \}$, the \textit{crucial node detector} applies Algorithm \ref{algo:crucial_node_detection} to detect the crucial events in the attack chain that the attacker must compromise to successfully implement the attack. Here, the most frequently triggered events are defined as \textit{crucial events}. 
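The recursion and backtracking steps above can be sketched as a standard Viterbi decoder. This is an illustrative NumPy version, not the authors' implementation; \texttt{obs} is a sequence of observation indices:

```python
import numpy as np

def viterbi(sigma, Q, E, obs):
    """Illustrative Viterbi decoder: returns the most likely hidden
    state index sequence for an observation index sequence."""
    N, T = len(sigma), len(obs)
    omega = np.zeros((T, N))                  # omega_t(i)
    chi = np.zeros((T, N), dtype=int)         # chi_t(i): argmax back-pointers
    omega[0] = sigma * E[:, obs[0]]           # omega_1(i) = sigma_i mu_i(y_1)
    for t in range(1, T):
        scores = omega[t - 1][:, None] * Q    # omega_{t-1}(j) * q_ji
        chi[t] = scores.argmax(axis=0)        # best predecessor for each i
        omega[t] = scores.max(axis=0) * E[:, obs[t]]
    path = [int(omega[-1].argmax())]          # psi_T^* = argmax_i omega_T(i)
    for t in range(T - 1, 0, -1):             # backtracking
        path.append(int(chi[t][path[-1]]))
    return path[::-1]
```

For long sequences, a production implementation would work in log-space to avoid underflow; the sketch omits that for clarity.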
\par If there are $p$ different extracted sequences $\Upsilon_1, \Upsilon_2,..., \Upsilon_p$ for $p$ different attempts, Algorithm \ref{algo:crucial_node_detection} first determines the \textit{longest common subsequence} $S_i$ between each $\Upsilon_i$ and the original sequence $X = \{x_1, x_2, ..., x_N\}$. It then computes a \textit{SCORE} value for each pair of consecutive states in the subsequence: \small \begin{equation} \label{equation:SCORE_definition} \begin{split} & SCORE \Big[S_i[j],S_i[j+1] \Big] = \text{number of times a pair} \\ & \big \{ S_i[j],S_i[j+1] \big \} \text{ is present in the subsequence} \ \end{split} \end{equation} \normalsize \vspace{-5pt} \vspace{-5pt} \begin{algorithm}[h] \footnotesize \caption{Crucial node detection algorithm} \label{algo:crucial_node_detection} \KwIn{$X, \Upsilon_1, \Upsilon_2, ...., \Upsilon_p$} \KwOut{ Pairs of true states corresponding to the most frequently triggered events} \BlankLine \begin{algorithmic}[1] \STATE $i \gets 1$ \WHILE{$i \leq p$} \STATE $S_i \gets$ LCS between $X$ and $\Upsilon_i$ \tcp{\footnotesize LCS = Longest Common Subsequence} \FOR{$j \gets 1$ \KwTo $(|S_i|-1)$} \STATE $E[i, j] \gets \{S_i[j], S_i[j+1]\}$ \\ \IF{$E[i, j]$ not in $SCORE.Keys()$} \STATE $SCORE[E[i,j]] \gets 1$ \\ \ELSE \STATE $SCORE[E[i,j]] \gets SCORE[E[i,j]] + 1$ \ENDIF \ENDFOR \STATE $i \gets i + 1$ \ENDWHILE \RETURN $\underset{E[i,j]}{argmax} \ (SCORE[E[i,j]])$ \end{algorithmic} \end{algorithm} \vspace{-5pt} \par Finally, the algorithm updates the \textit{SCORE} values based on the presence of pairs in all subsequences and retrieves the pairs with the maximum \textit{SCORE} value. It may output a number of pairs of states $\{ x_{c_i}, x_{c_j} \}$, where there is a crucial state transition in the state machine from $x_{c_i}$ to $x_{c_j}$. Our goal is to identify the events (we call them \textit{nodes}) associated with such transitions, which are exploited by the attackers to compromise the chain. 
\vspace{5pt} \par \textbf{A Simple Example} \par Suppose there is a sequence of states (corresponding to some triggered events): \{\textbf{door-opened, light-on, camera-on, fan-on, window-opened}\}. After making three separate attempts, the sequence retriever returns the following three sequences: Sequence-1: \{door-opened, light-on, light-on, camera-on, fan-on\}, Sequence-2: \{fan-on, light-on, camera-on, fan-on, window-opened\}, Sequence-3: \{door-opened, light-on, camera-on, window-opened, fan-on\}. \par Now, if we apply Algorithm \ref{algo:crucial_node_detection} to this scenario, we find that the pair \{\textbf{light-on, camera-on}\} obtains the highest score. Consequently, we can conclude that the transition from the state \textbf{light-on} to \textbf{camera-on} is the most vital one in the state machine, and the nodes associated with those states are the most crucial ones in the chain. IoTMonitor identifies these crucial nodes so that we can perform security hardening to minimize the attacker's chance of compromising an IoT network. The security hardening itself is out of the scope of this paper; we plan to incorporate such capability in an extended version of IoTMonitor in the near future. \section{Results and Evaluation} \par To evaluate the performance of IoTMonitor, we utilize the PEEVES dataset \cite{sbirnbach2019}, which records IoT event occurrences from 12 different IoT devices and measurements from 48 deployed sensors to verify those events. We use 24 hours of data for our experiment, which executes on a system with 16 GB RAM and 4 CPU cores. \subsection{Dataset Processing} \par Our experiment mainly deals with three types of data: 1) event data (used as true states), 2) sensor measurements (used as observations), and 3) timestamps. We concentrate only on those event occurrences that can be verified by the sensor measurements. 
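The example above can be reproduced with a short sketch of Algorithm \ref{algo:crucial_node_detection}. This is an illustrative Python version, not the authors' code; the LCS tie-breaking is one possible choice:

```python
from collections import Counter

def lcs(a, b):
    """Longest common subsequence of two event lists (dynamic programming)."""
    m, n = len(a), len(b)
    dp = [[[] for _ in range(n + 1)] for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + [a[i]]
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j], key=len)
    return dp[m][n]

def crucial_pairs(X, sequences):
    """Score consecutive state pairs across the LCS of each extracted
    sequence with X; return the pair(s) with the maximum score."""
    score = Counter()
    for seq in sequences:
        s = lcs(X, seq)
        for j in range(len(s) - 1):
            score[(s[j], s[j + 1])] += 1
    best = max(score.values())
    return [pair for pair, v in score.items() if v == best]
```

On the example above, \{light-on, camera-on\} is always among the top-scoring pairs; depending on how LCS ties are broken, \{camera-on, fan-on\} may tie with it.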
Since sensor measurements capture the physical changes that happen in the environment due to event occurrences, they can be used to crosscheck whether a certain event has occurred. We define a \textit{sliding window} function to determine whether an event is verifiable. The function provides a time window (in milliseconds) $w_i$ that starts at the timestamp of a particular event occurrence. After an event occurrence is recorded at time instance $t_i$, if we find the necessary sensor measurements to verify that occurrence within the time instance $t_i + w_i$, we consider the event verifiable and keep it in the sequence of occurred events. Otherwise, we discard it from the sequence. In our experiment, we consider 20 such sliding windows, with sizes ranging from 105 milliseconds to 200 milliseconds in increments of 5 milliseconds. \subsection{Experiment Setting} \par At the beginning of our experiment, we use a Gaussian distribution to randomly assign the transition probabilities and initial state probabilities for each true state, and a Dirichlet distribution to assign the emission probabilities. We use the same seed value for each execution. \subsection{Probability Estimation Time} \par Probability estimation time represents the time required to estimate the converged transition probability distribution $Q$ and emission probability distribution $E$. Figure \ref{fig:estimation_time_decoding_time}(a) presents the estimation time for four event sequences of different lengths (5, 10, 15, and 20) against a range of sliding windows. The figure shows the average estimation time over 10 executions. \par As we can see from Figure \ref{fig:estimation_time_decoding_time}(a), the longest estimation time is $<4$ seconds for the sequence of length 20, while in most cases it is $<0.5$ seconds. As the window size increases, the estimation time starts to decrease and stabilize. 
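The sliding-window verification step can be sketched as follows. The data layout (\texttt{events}, \texttt{evidence}) is an assumption for illustration only, not the PEEVES dataset schema:

```python
from bisect import bisect_left

def verifiable_events(events, evidence, window_ms):
    """Illustrative sliding-window check: keep an event recorded at time
    t_i only if supporting sensor evidence appears within [t_i, t_i + w_i].

    events: list of (timestamp_ms, event_id) tuples.
    evidence: dict mapping event_id to a sorted list of timestamps at
              which verifying sensor measurements occurred (hypothetical
              layout, not the real dataset format).
    """
    kept = []
    for t_i, ev in events:
        ts = evidence.get(ev, [])
        k = bisect_left(ts, t_i)                  # first measurement at/after t_i
        if k < len(ts) and ts[k] <= t_i + window_ms:
            kept.append((t_i, ev))                # verifiable: keep in sequence
    return kept
```

Sweeping \texttt{window\_ms} over 105, 110, ..., 200 reproduces the 20 window sizes used in the experiment.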
There are a few exceptional cases where the estimation time increases sharply for an increase in window size. For example, when the window size increases from 105 to 110 for the sequence of length 20, we see a sudden spike. We examine the source data and find that this spike is caused by the appearance of two new events that were not present earlier. Since the number of unique events increases and the repetition of the same events decreases in the sequence, the initial state distribution and transition probabilities need to be adjusted, which adds to the total estimation time. However, this type of exception is transient, and the graph eventually stabilizes. We do not present the estimation time for sequences of lengths $>20$ in Figure \ref{fig:estimation_time_decoding_time}(a) since we observe very little change in pattern for those sequences. \vspace{-8pt} \begin{figure}[h] \centering \includegraphics[scale=0.33]{figures/v2/EST_DCD_in_ms_final_v2.png} \vspace{-10pt} \caption{a) Probability estimation time with respect to sliding window size and length of the event sequence; b) Decoding time with respect to sliding window size and length of the event sequence } \label{fig:estimation_time_decoding_time} \vspace{-15pt} \end{figure} \subsection{Decoding Time} \par Decoding time represents the time required to extract the hidden sequence once we have the converged $\theta^*$. As with the probability estimation time, we take the average decoding time over 10 executions. Figure \ref{fig:estimation_time_decoding_time}(b) presents the decoding time for four event sequences with lengths 5, 10, 15, and 20 against a range of sliding windows. \par Looking at the graph in Figure \ref{fig:estimation_time_decoding_time}(b), we see that the decoding time decreases as the window size increases. The longest decoding time we observe is $<2.5$ milliseconds, which is very fast for the retrieval of hidden event sequences. 
Although we see a few transient spikes for the sequence of length 15 after window size 150, we still achieve a decoding time of $<2.0$ milliseconds. \begin{figure}[h] \centering \includegraphics[scale=0.40]{figures/v2/iteration_to_convergence_ratio_final_version.png} \caption{Number of iterations to estimate the converged transition probabilities and emission probabilities with respect to the ratio between number of observation states and number of true states} \label{fig:computational overhead} \vspace{-10pt} \end{figure} \vspace{-3pt} \subsection{Computational Overhead} \par Since our experiment dedicates most of the computation time to estimating the probabilities, we measure \textit{computational overhead} as the total number of iterations of the \textit{forward-backward procedure} required to reach convergence of the transition and emission probabilities. In Figure \ref{fig:computational overhead}, we present the required total number of iterations (on the y-axis) with respect to the ratio between \textit{the total number of unique observation states} and \textit{the total number of unique true states} (on the x-axis). We can see that the computational overhead increases roughly linearly with this ratio. \begin{figure}[h] \centering \includegraphics[scale=0.32]{figures/v2/Accuracy_score_heatmap_updated_v2.png} \vspace{-15pt} \caption{Accuracy score vs Sliding window size vs Length of the event sequence} \label{fig:accuracy_score} \vspace{-5pt} \end{figure} \subsection{Accuracy Score} \par To determine how accurately the extracted hidden sequence of events represents the real events, we compute the f-score for 29 different sequences of events with lengths ranging from 2 to 30. We do not consider sequences of length 1 because they offer no uncertainty in terms of transition and emission probabilities. We present a heatmap to visually show the correlation among accuracy score, sliding window size, and length of the event sequence. 
In Figure \ref{fig:accuracy_score}, the accuracy scores are presented as colors. \par As we can see, when the length of the event sequence is $<15$, increasing the window size beyond 160 ensures a very high accuracy score; we even achieve an accuracy score of 1.0 on several occasions. There is only one exception, for the sequence of length 5: the accuracy score decreases after window size 105 because a completely new sequence appears for window sizes 110 to 200. A similar pattern also arises, although to a lesser extent, for the sequence of length 7. Still, it is quite evident that increasing the window size for the smaller lengths ensures a higher accuracy score (equal or close to 1.0). When the length increases considerably, the impact of the sliding window on the accuracy score slowly diminishes. Since our system relies on the functional dependencies (in terms of transition probabilities) of the events to extract the hidden sequence, the longer the sequence becomes, the looser those dependencies are. \vspace{-5pt} \section{Conclusion} \par In this work, we propose IoTMonitor, a system that extracts the underlying event sequence using a Hidden Markov Model given the physical evidence emitted during a trigger-action attack in an IoT environment. We use the Baum-Welch algorithm to estimate the transition and emission probabilities, and the Viterbi algorithm to extract the underlying event sequence. Our experiments show that both the probability estimation and the sequence extraction operations converge reasonably fast. In terms of accuracy score, IoTMonitor achieves 100\% in multiple cases and $\geq 90\%$ in a number of others. We draw a heatmap to visually show the correlation among accuracy score, sliding window size, and length of the event sequence. We also present an algorithm to identify the crucial events in the extracted sequence that an attacker must compromise to implement a trigger-action attack.
Immediate extensions to our approach include the following efforts. First, we currently focus on the crucial nodes that appear in multiple attack paths. If we extend our research to an attack graph, we can identify crucial node pairs on different attack paths. Second, the physical evidence collected by sensors could contain noise or even inaccurate data. We will improve our algorithm to provide more robust attack detection capability for IoT platforms. \bibliographystyle{./bibliography/IEEEtran}
\section{Proof of Lemma \ref{local_approximation_2}}\label{plocal_approximation_2} Taking the first-order Taylor series expansion of $\mathbf h(\mathbf d)$ at $\tilde{\mathbf d}$, we get \begin{align}\label{e1} \mathbf h(\mathbf d)=\mathbf h(\tilde{\mathbf d})+\tilde{\mathbf J}(\mathbf d-\tilde{\mathbf d})+o(\|\mathbf d-\tilde{\mathbf d}\|_2). \end{align} Let $\mathbf d=\mathbf W^{\operatorname{T}}\mathbf f(x)+\mathbf b$. Then, by using \eqref{constraint1}, we obtain \begin{align}\label{e2} \mathbf h(\mathbf W^{\operatorname{T}}\mathbf f(x)+\mathbf b)=\mathbf h(\tilde{\mathbf d})+\tilde{\mathbf J}(\mathbf W^{\operatorname{T}}\mathbf f(x)+\mathbf b-\tilde{\mathbf d})+o(\epsilon). \end{align} If $\mathbf d=\mathbf W^{\operatorname{T}}\bm \sigma(\tilde{\mathbf b}^{(1)})+\tilde {\mathbf d}$, we can expand $\sum_{x \in \mathcal X} P_X(x) \mathbf a_{P_{Y|X=x}}$ in \eqref{bias2} as \begin{align}\label{e3} &\sum_{x \in \mathcal X} P_X(x) \mathbf a_{P_{Y|X=x}}\nonumber\\ =&\mathbf h(\mathbf W^{\operatorname{T}}\bm \sigma(\tilde{\mathbf b}^{(1)})+\tilde {\mathbf d})\nonumber\\ \!\!\!=&\mathbf h(\tilde{\mathbf d})+\tilde{\mathbf J}\mathbf W^{\operatorname{T}}\bm \sigma(\tilde{\mathbf b}^{(1)})+o(\|\mathbf W^{\operatorname{T}}\bm \sigma(\tilde{\mathbf b}^{(1)})\|_2), \end{align} where we can design $\tilde{\mathbf d}$ and $\tilde{\mathbf b}^{(1)}$ such that $o(\|\mathbf W^{\operatorname{T}}\bm \sigma(\tilde{\mathbf b}^{(1)})\|_2)$ becomes $o(\epsilon)$. By a similar Taylor series expansion, using \eqref{constraint2}, we can get \begin{align}\label{e4} &\bm \sigma({\mathbf W^{(1)}}^{\operatorname{T}} \mathbf f^{(1)}(x)+\mathbf b^{(1)})\nonumber\\ =&\bm \sigma(\tilde{\mathbf b}^{(1)})+\mathbf J_1({\mathbf W^{(1)}}^{\operatorname{T}}\mathbf f^{(1)}(x)+\mathbf b^{(1)}-\tilde{\mathbf b}^{(1)})+o(\epsilon).
\end{align} Substituting \eqref{e4} and \eqref{e3} into \eqref{e2}, we obtain \begin{align}\label{e5} \mathbf h(\mathbf W^{\operatorname{T}}\mathbf f(x)+\mathbf b)=&\sum_{x \in \mathcal X} P_X(x) \mathbf a_{P_{Y|X=x}}\nonumber\\ &+\tilde{\mathbf J}(\mathbf W^{\operatorname{T}}\mathbf J_1({\mathbf W^{(1)}}^{\operatorname{T}}\mathbf f^{(1)}(x)+\mathbf b^{(1)}-\tilde{\mathbf b}^{(1)}))\nonumber\\ &+\tilde{\mathbf J}(\mathbf b-\tilde{\mathbf d})+o(\epsilon). \end{align} By substituting \eqref{e5} into \eqref{Lemma1proofeq5} and performing computations similar to those in Appendix \ref{plemma2}, we obtain \eqref{approximation1}. This concludes the proof. \section{Proof of Lemma \ref{assumption6to1}}\label{passumption6to1} Because $h$ is strictly increasing, $h'(\tilde b_i)>0$. Since $h$ is continuously differentiable, there exists a $\delta > 0$ such that for all $z \in (\tilde b_i - \delta, \tilde b_i +\delta)$ \begin{align}\label{eq_continuouslydifferentiable1} |h'(z)-h'(\tilde b_i)| \leq \frac{h'(\tilde b_i)}{2}. \end{align} It follows from \eqref{eq_continuouslydifferentiable1} that for all $z \in (\tilde b_i - \delta, \tilde b_i +\delta)$ \begin{align}\label{lemma1_ed} h'(z) \geq \frac{h'(\tilde b_i)}{2}. \end{align} By \eqref{lemma1_ed} and the mean value theorem, we get that for all $z \in (\tilde b_i - \delta, \tilde b_i +\delta)$ with $z \neq \tilde b_i$ \begin{align}\label{lemma1_ecompare1} \frac{|h(z) - h(\tilde b_i)|}{|z-\tilde b_i|} &\geq \frac{h'(\tilde b_i)}{2}. \end{align} Letting $K = \frac{h'(\tilde b_i)}{2}$, \eqref{lemma1_ecompare1} implies \eqref{eq_assumption1}, which completes the proof. \subsection{Two Examples}\label{examples} \subsubsection{Neural Network based Maximum Likelihood Classification (Softmax Regression)} The Bayes actions $\mathbf a_{P_Y}$ associated to the loss function \eqref{log-loss} are non-unique. The set of all Bayes actions is $\mathcal A_{P_Y}=\{\alpha P_Y: \alpha >0\}$, which satisfies Assumption \ref{assumption5}.
By choosing one Bayes action $\mathbf a_{P_Y}=P_Y$, one can derive the matrices $\mathbf M_L$ and $\mathbf B$ used in Theorems \ref{theorem1}-\ref{theorem3}: The $(y, y')$-th element of $\mathbf M_L$ is \begin{align}\label{example111} (\mathbf M_L)_{y, y'}=\frac{\delta(y, y')}{P_Y(y)}-1, \end{align} where $\delta(y, y')=1$, if $y=y'$; and $\delta(y, y')=0$, if $y\neq y'$. The $(y, x)$-th element of $\mathbf B$ is \begin{align}\label{example112} (\mathbf B)_{y, x}=\sqrt{P_X(x)}(P_{Y|X=x}(y|x)-P_Y(y)). \end{align} To make our analysis applicable to the softmax activation function \cite[Eq. (6.29)]{goodfellow2016deep}, we have used a loss function \eqref{log-loss} that is different from the log-loss function in \cite{huang2019information,farnia2016minimax}. As a result, our local geometric analysis with \eqref{example111} and \eqref{example112} is different from the results in \cite{huang2019information}. \subsubsection{Neural Network based Minimum Mean-square Estimation} Consider the minimum mean-square estimation of a random vector $\mathbf Y=[ Y_1,\ldots, Y_n]^{\operatorname{T}}$. The Bayes action associated to the loss function \eqref{mean-square-error} is $\mathbf a_{P_{\mathbf Y}}=\mathbb E[\mathbf Y]$, which satisfies Assumption \ref{assumption5} because $\mathbb E[\mathbf Y]$ is a linear function of $P_{\mathbf Y}$. One can show that $\mathbf M_{L}=\mathbf I$ is an identity matrix and the $(j,x)$-th element of $\mathbf B$ is \begin{align} (\mathbf B)_{j,x}=\sqrt{P_X(x)}(\mathbb E[ Y_j|X=x]-\mathbb E[ Y_j]). \end{align} \section{Conclusion} In this paper, we have analyzed feature extraction in deep feedforward neural networks in a local region. We will conduct experiments to verify these results in our future work. \subsection{Local Geometric Analysis of Hidden Layers} Next, we provide a local geometric analysis for each hidden layer. To that end, let us consider the training of the $i$-th hidden layer for fixed weights and biases in the subsequent layers. 
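As a small numeric illustration of the softmax-regression example above, the matrices in \eqref{example111} and \eqref{example112} can be computed directly from a toy joint distribution (all numbers are illustrative):

```python
import numpy as np

# Toy joint distribution with |X| = 3 and |Y| = 2 (numbers are illustrative).
P_X = np.array([0.5, 0.3, 0.2])
P_YgX = np.array([[0.9, 0.1],   # row x: conditional distribution P_{Y|X=x}
                  [0.4, 0.6],
                  [0.2, 0.8]])
P_Y = P_X @ P_YgX               # marginal distribution of Y

# (M_L)_{y,y'} = delta(y,y')/P_Y(y) - 1: diagonal 1/P_Y(y) - 1, off-diagonal -1.
M_L = np.diag(1.0 / P_Y) - 1.0

# (B)_{y,x} = sqrt(P_X(x)) * (P_{Y|X=x}(y) - P_Y(y))
B = np.sqrt(P_X)[None, :] * (P_YgX.T - P_Y[:, None])

print(M_L)
print(B.sum(axis=0))  # each column sums to zero
```

Each column of $\mathbf B$ sums to zero because both $P_{Y|X=x}$ and $P_Y$ are probability vectors scaled by the same factor $\sqrt{P_X(x)}$.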
Define a loss function $L^{(i)}$ for the $i$-th hidden layer \begin{align} L^{(i)}(y, \mathbf a^{(i)})=&L\left(y, \mathbf g \circ \mathbf g^{(m)}\circ \ldots \circ \mathbf g^{(i+1)}(\mathbf a^{(i)})\right), \end{align} where for $k=i,\ldots, m-1$ \begin{align} \mathbf g^{(k+1)}(\mathbf a^{(k)})&=\mathbf h^{(k+1)}(\mathbf W^{(k+1)\operatorname{T}}\mathbf a^{(k)}+\mathbf b^{(k+1)}),\\ \mathbf g(\mathbf a^{(m)})&=\mathbf h(\mathbf W^{\operatorname{T}}\mathbf a^{(m)}+\mathbf b). \end{align} Given $(\mathbf W^{(k)}, \mathbf b^{(k)})$ for $k=i+1,\ldots,m$ and $(\mathbf W, \mathbf b)$, the training problem of the $i$-th hidden layer is formulated as \begin{align} \label{new_obj_func} \min_{\substack{\mathbf{W}^{(i)},\\\mathbf b^{(i)},\\\mathbf f^{(i-1)}\in \Lambda^{i-1}}}\!\!\!\!\!\!\sum_{x \in \mathcal X}\!\! P_X(x) D_{\!L^{\!(i)}}\!(\mathbf a^{(i)}_{P_{Y\!|\!X=x}}\! || \mathbf h^{(i)}\!(\mathbf{W}^{(i)\!\operatorname{T}}\mathbf f^{(i-1)}\!(x)\!+\!\mathbf b^{\!(i)})), \end{align} where $\Lambda^{i-1}$ is the set of all feature functions that can be created by the first $(i-1)$ hidden layers. We adopt several assumptions for the $i$-th hidden layer that are similar to Assumptions \ref{assumption1}-\ref{assumption2}. Let $\mathbf a^{(i)}_{P_Y}$ denote the Bayes action associated to the loss function $L^{(i)}$ and distribution $P_Y$. According to Lemma \ref{lemma1}, there exist a bias $\tilde{\mathbf b}^{(i)}$ and a tuple $(\mathbf f^{(i-1)}, \mathbf{W}^{(i)}, \mathbf b^{(i)})$ such that (i) $\mathbf h^{(i)}(\tilde{\mathbf b}^{(i)})=\mathbf a^{(i)}_{P_Y}$ is a Bayes action associated to the loss function $L^{(i)}$ and distribution $P_Y$, (ii) $(\mathbf f^{(i-1)}, \mathbf{W}^{(i)}, \mathbf b^{(i)})$ is an optimal solution to \eqref{new_obj_func}, and (iii) for all $x\in\mathcal X$ and $j=1, \ldots, k_i$ \begin{align}\label{constraint2} {\mathbf w_j}^{(i)\operatorname{T}}\mathbf f^{(i-1)}(x)+ b_j^{(i)}-\tilde b_j^{(i)}=O(\epsilon).
\end{align} \ifreport Define \begin{align} & \mathbf \Xi_{\mathbf f^{(i)}}=[\bm \xi_{\mathbf f^{(i)}}(1), \ldots, \bm \xi_{\mathbf f^{(i)}}(|\mathcal X|)], \label{matrixf2} \\ & \mathbf{B}^{(i)}=[\bm{\beta}_Y^{(i)}(1), \ldots, \bm \beta_Y^{(i)}(|\mathcal X|)], \label{matrixB2} \end{align} where in \eqref{matrixf2}, \begin{align}\label{vectorf2} \bm \xi_{\mathbf f^{(i)}}(x)=&\sqrt{P_X(x)} \left(\mathbf f^{(i)}(x)-\bm \mu_{\mathbf f^{(i)}}\right), \\ \label{meanf2} \bm \mu_{\mathbf f^{(i)}}=&\sum_{x \in \mathcal X} P_X(x) \mathbf f^{(i)}(x), \end{align} and in \eqref{matrixB2}, \begin{align}\label{vectorB} \bm \beta_Y^{(i)}(x)=&\sqrt{P_X(x)} \left(\mathbf a^{(i)}_{P_{Y|X=x}}-\bm \mu_{\mathbf a^{(i)}}\right),\\ \bm \mu_{\mathbf a^{(i)}}=&\sum_{x \in \mathcal X} P_X(x) \mathbf a^{(i)}_{P_{Y|X=x}}. \end{align} Similar to \eqref{matrixM} and \eqref{matrixJ}, let us define the following two matrices for the $i$-th hidden layer \begin{align}\label{matrixM2} \mathbf{M}_{L^{(i)}}=\frac{\partial^2 \mathbb E_{Y \sim P_Y}[L^{(i)}(Y, \mathbf a)]}{\partial \mathbf a \partial \mathbf a^{\operatorname{T}}}\bigg|_{\mathbf a=\mathbf a^{(i)}_{P_Y}}, \end{align} where the matrix $\mathbf{M}_{L^{(i)}}$ has a Cholesky decomposition $\mathbf{M}_{L^{(i)}}=\mathbf{R}_{L^{(i)}}^{\operatorname{T}}\mathbf{R}_{L^{(i)}}$ and \begin{align} \mathbf J^{(i)}=\frac{\partial \mathbf h^{(i)}(\mathbf b^{(i)})}{\partial \mathbf b^{(i)\operatorname{T}}}\bigg|_{\mathbf b^{(i)}=\tilde{\mathbf b}^{(i)}}. \end{align} \else Similar to \eqref{matrixf}-\eqref{matrixJ}, we define ${\mathbf \Xi}_{\mathbf f^{(i)}}$, $\mathbf B^{(i)}$, ${\bm \mu}_{\mathbf f^{(i)}}$, ${\bm \mu}_{\mathbf a^{(i)}}$, ${\mathbf M}_{L^{(i)}}$, ${\mathbf R}_{L^{(i)}}$, and $\mathbf J^{(i)}$ for the $i$-th hidden layer. Due to space limitations, these definitions are relegated to our technical report \cite{TechnicalReport}. \fi The following result is an immediate corollary of Lemma \ref{lemma2}.
\begin{corollary}\label{corollary1} In the local analysis regime \eqref{constraint2}, the objective function in \eqref{new_obj_func} can be expressed as \begin{align} &\sum_{x \in \mathcal X} \!P_X(x) D_{\!L^{(i)}}\!(\mathbf a^{(i)}_{P_{Y\!|\!X=x}}\! || \mathbf h^{(i)}(\mathbf{W}^{(i)\!\operatorname{T}}\mathbf f^{(i-1)}(x)\!+\!\mathbf b^{\!(i)}))\nonumber\\ =&\frac{1}{2} \|\tilde{\mathbf B}^{(i)}- {\mathbf \Xi}_{\mathbf W^{(i)}} {\mathbf{\Xi}}_{\mathbf f^{(i-1)}}\|_{F}^2+\frac{1}{2}\eta({\mathbf d}^{(i)},{\mathbf f}^{(i-1)})+o(\epsilon^2), \end{align} where $\tilde{\mathbf B}^{(i)}=\mathbf R_{L^{(i)}}{\mathbf B}^{(i)}$, ${\mathbf \Xi}_{\mathbf W^{(i)}}=\mathbf R_{L^{(i)}}\mathbf J^{(i)}\mathbf W^{(i)\operatorname{T}}$, $\mathbf d^{(i)}=\mathbf b^{(i)}-\tilde{\mathbf b}^{(i)}$, and \begin{align} &\eta(\mathbf d^{(i)}, \mathbf f^{(i-1)})\nonumber\\ =&(\mathbf a^{(i)}_{P_Y}-\bm \mu_{\mathbf a^{(i)}}+\mathbf J^{(i)} \mathbf d^{(i)}+\mathbf J^{(i)} \mathbf W^{(i)\operatorname{T}}\bm \mu_{\mathbf f^{(i-1)}})^{\operatorname{T}} \mathbf{M}_{L^{(i)}}\nonumber\\ &\times (\mathbf a^{(i)}_{P_Y}-\bm \mu_{\mathbf a^{(i)}}+\mathbf J^{(i)} \mathbf d^{(i)}+\mathbf J^{(i)} \mathbf W^{(i)\operatorname{T}}\bm \mu_{\mathbf f^{(i-1)}}). \end{align} \end{corollary} In the local analysis regime, the training of $(\mathbf \Xi_{\mathbf W^{(i)}}, \mathbf{\Xi}_{\mathbf f^{(i-1)}}, \mathbf d^{(i)}, \bm \mu_{{\mathbf f}^{(i-1)}})$ in \eqref{new_obj_func} can be expressed as the following optimization problem: \begin{align}\label{approximate_problem1} \min_{\substack{\mathbf \Xi_{\mathbf W^{(i)}}, \mathbf{\Xi}_{\mathbf f^{(i-1)}}\\ \mathbf d^{(i)}, \bm \mu_{{\mathbf f}^{(i-1)}}}} \frac{1}{2} \|\tilde{\mathbf B}^{(i)}- \mathbf \Xi_{\mathbf W^{(i)}} \mathbf{\Xi}_{\mathbf f^{(i-1)}}\|_{F}^2+\frac{1}{2}\eta(\mathbf d^{(i)}, \mathbf f^{(i-1)}).
\end{align} Similar to Theorems \ref{theorem1}-\ref{theorem3}, we can obtain the following corollaries. \begin{corollary}\label{corollary2} For fixed $\mathbf{\Xi}_{\mathbf f^{(i-1)}}$ and $\bm \mu_{\mathbf f^{(i-1)}}$, the optimal ${\mathbf \Xi}_{\mathbf W^{(i)}}^*$ to minimize \eqref{approximate_problem1} is given by \begin{align}\label{optimal_weight_2} {\mathbf \Xi}_{\mathbf W^{(i)}}^*=\tilde{\mathbf B}^{(i)} \mathbf {\Xi}_{\mathbf f^{(i-1)}}^{\operatorname{T}}(\mathbf \Xi_{\mathbf f^{(i-1)}}\mathbf \Xi_{\mathbf f^{(i-1)}}^{\operatorname{T}})^{\operatorname{-1}}, \end{align} and the optimal bias ${\mathbf d}^{(i)*}$ is expressed as \begin{align}\label{optimal_bias2} {\mathbf d^{(i)*}}=-\bar{\mathbf W}^{(i)\operatorname{T}}\bm \mu_{\mathbf f^{(i-1)}}+(\mathbf{J}^{(i)})^{\operatorname{-1}}(\bm \mu_{\mathbf a^{(i)}}-\mathbf a^{(i)}_{P_Y}).\end{align} \end{corollary} \begin{corollary}\label{corollary3} For fixed $\mathbf \Xi_{\mathbf W^{(i)}}$ and $\mathbf d^{(i)}$, the optimal ${\mathbf{\Xi}}_{\mathbf f^{(i-1)}}^*$ to minimize \eqref{approximate_problem1} is given by \begin{align}\label{optimal_feature2} {\mathbf \Xi}_{\mathbf f^{(i-1)}}^*=(\mathbf \Xi_{\mathbf W^{(i)}}^{\operatorname{T}}\mathbf \Xi_{{\mathbf W}^{(i)}})^{\operatorname{-1}}\mathbf \Xi_{\mathbf W^{(i)}}^{\operatorname{T}}\tilde{\mathbf B}^{(i)}, \end{align} and the optimal mean ${\bm \mu}_{\mathbf f^{(i-1)}}^*$ is given by \begin{align}\label{optimal_mean2} {\bm \mu}_{\mathbf f^{(i-1)}}^*=-(\mathbf \Xi_{\mathbf W^{(i)}}^{\operatorname{T}}\mathbf \Xi_{{\mathbf W}^{(i)}})^{\operatorname{-1}}\mathbf \Xi_{\mathbf W^{(i)}}^{\operatorname{T}} (\mathbf a^{(i)}_{P_Y}-\bm \mu_{\mathbf a^{(i)}}+\mathbf J^{(i)}\mathbf d^{(i)}).
\end{align} \end{corollary} \begin{corollary}\label{corollary4} If $k_{i-1}\leq \min(k_i, |\mathcal X|)$, then any $({\mathbf{\Xi}}_{\mathbf f^{(i-1)}}^*, {\mathbf \Xi}_{\mathbf W^{(i)}}^*)$ satisfying \eqref{optimal_pair2} jointly minimizes \eqref{approximate_problem1}: \begin{align}\label{optimal_pair2} {\mathbf \Xi}_{\mathbf W^{(i)}}^*{\mathbf{\Xi}}_{\mathbf f^{(i-1)}}^*=\mathbf{U}^{(i)}_{k_{i-1}}\mathbf{\Sigma}^{(i)}_{k_{i-1}}\mathbf V_{k_{i-1}}^{(i)\operatorname{T}}, \end{align} where $\mathbf{\Sigma}^{(i)}_{k_{i-1}}=\textbf{\normalfont{Diag}}(\sigma^{(i)}_1, \ldots, \sigma^{(i)}_{k_{i-1}})$ is a diagonal matrix containing the $k_{i-1}$ leading singular values of $\tilde{\mathbf B}^{(i)}$, and $\mathbf{U}^{(i)}_{k_{i-1}}$ and $\mathbf V_{k_{i-1}}^{(i)}$ are composed of the corresponding left and right singular vectors of $\tilde{\mathbf B}^{(i)}$, respectively. Moreover, any bias ${\mathbf d}^{(i)*}$ and mean ${\bm \mu}_{\mathbf f^{(i-1)}}^*$ satisfying \eqref{optimal_bias_mean2} jointly minimize \eqref{approximate_problem1}\normalfont{:} \begin{align}\label{optimal_bias_mean2} \mathbf{J}^{(i)}\left({\mathbf d}^{(i)*}+{\mathbf W^{(i)}}^{\operatorname{T}}{\bm \mu}_{\mathbf f^{(i-1)}}^*\right)=\bm \mu_{\mathbf a^{(i)}}-\mathbf a^{(i)}_{P_Y}.\end{align} \end{corollary} Compared to the local geometric analysis for softmax regression in \cite{huang2019information}, Theorems \ref{theorem1}-\ref{theorem3} and Corollaries \ref{corollary2}-\ref{corollary4} can handle more general loss functions and activation functions. In addition, our results can be applied to multi-layer neural networks in the following iterative manner: for fixed $(\mathbf W, \mathbf b)$ in the output layer, the Bayes action $\mathbf a^{(m)}_{P_{Y|X=x}}$ needed for analyzing the $m$-th hidden layer is the optimal feature $\mathbf f^*(x)$ provided by Theorem \ref{theorem2}. Similar results hold for the $i$-th hidden layer.
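The closed-form solutions in Corollaries \ref{corollary2} and \ref{corollary3} are ordinary least-squares projections. A small numeric check of the structure of \eqref{optimal_weight_2} against a generic least-squares solver, using random toy matrices (illustrative only; shapes and seed are arbitrary assumptions of this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
B_tilde = rng.standard_normal((4, 6))   # plays the role of tilde-B^(i)
F = rng.standard_normal((3, 6))         # plays the role of Xi_{f^(i-1)}

# Closed form from Corollary 2: Xi_W* = B_tilde F^T (F F^T)^{-1}
W_closed = B_tilde @ F.T @ np.linalg.inv(F @ F.T)

# Same problem posed as ordinary least squares:
# min_W ||B_tilde - W F||_F^2  <=>  F^T W^T ~ B_tilde^T
W_lstsq = np.linalg.lstsq(F.T, B_tilde.T, rcond=None)[0].T

print(np.allclose(W_closed, W_lstsq))  # → True
```

The two solutions agree whenever $\mathbf F$ has full row rank, which holds almost surely for the random toy matrices above.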
For fixed weights and biases in subsequent layers, the Bayes action $\mathbf a^{(i-1)}_{P_{Y|X=x}}$ needed for analyzing the $(i-1)$-th hidden layer is the optimal feature $\mathbf f^{(i-1)*}(x)$ in Corollary \ref{corollary3}. Hence, the optimal features obtained in Theorem \ref{theorem2} and Corollary \ref{corollary3} are useful for the local geometric analysis of earlier layers. \ignore{In addition to the assumptions of the previous section, we consider the following assumptions. \ignore{\begin{assumption}\label{assumption6} The activation function $\psi$ is strictly increasing and continuously differentiable. \end{assumption}} \ignore{\begin{assumption}\label{assumption7} For a sufficiently small $\epsilon >0$ and all $\mathbf a_{P_{Y|X=x}}$, there exists an optimal solution $(\bar{\mathbf f}^{(1)}, \bar{\mathbf{W}}^{(1)}, \bar{\mathbf b}^{(1)})$ to \eqref{reformed_problem} that closely approximates $\mathbf a_{P_{Y|X=x}}$ such that \begin{align} \!\!\!\!\| \mathbf a_{P_{Y|X=x}} - \mathbf h({\mathbf{W}}^{\operatorname{T}}\bm \psi({\bar{\mathbf W}^{(1){\operatorname{T}}}}\bar{\mathbf f}^{(1)}(x)+\bar{\mathbf b}^{(1)})+\mathbf b)\|_2^2\! \leq \epsilon^2. \end{align} \end{assumption}} There exists vectors $\hat{\mathbf b}^{(1)}\in \mathbb R^{k}$ and $\hat {\mathbf b} \in \mathbb R^n$ that satisfies \begin{align}\label{bias2} \mathbf h(\mathbf W^{\operatorname{T}}\bm \psi(\hat{\mathbf b}^{(1)})+\hat {\mathbf b})=\mathbf a_{P_Y}. \end{align} For a given $(\mathbf W, \mathbf b)$, we focus on the following local region, where $(\mathbf f^{(1)}, \mathbf W^{(1)}, \mathbf b^{(1)})$ satisfies \begin{align}\label{constraint1} \|{\mathbf{W}}^{\operatorname{T}}\bm \psi({\mathbf W}^{(1){\operatorname{T}}}{\mathbf f}^{(1)}(x)+{\mathbf b}^{(1)})+\mathbf b-\hat{\mathbf b}\|_2=O(\epsilon), \end{align} \begin{align}\label{constraint2} \|{{\mathbf W}^{(1){\operatorname{T}}}}{\mathbf f}^{(1)}(x)+{\mathbf b}^{(1)}-\hat{\mathbf b}^{(1)}\|_2=O(\epsilon). 
\end{align} For differentiable activation functions $h$ and $\psi$, define Jacobian matrices \begin{align} \hat{\mathbf J}=\frac{\partial \mathbf h(\mathbf b)}{\partial \mathbf b^{\operatorname{T}}}\bigg|_{\mathbf b=\hat{\mathbf b}},~~ \mathbf J_1=\frac{\partial \bm \psi(\mathbf b^{(1)})}{\partial {\mathbf b^{(1)}}^{\operatorname{T}}}\bigg|_{\mathbf b^{(1)}=\hat{\mathbf b}^{(1)}}. \end{align} Also, define a bias vector $\mathbf c \in \mathbb R^n$ \begin{align} \mathbf c=\mathbf W^{\operatorname{T}}\mathbf J_1(\mathbf b^{(1)}-\hat{\mathbf b}^{(1)})+\mathbf b-\hat{\mathbf b} \end{align} and a matrix \begin{align} \mathbf \Xi_{\mathbf f^{(1)}}=[\bm \xi_{\mathbf f^{(1)}}(1), \ldots, \bm \xi_{\mathbf f^{(1)}}(|\mathcal X|)], \end{align} where $\bm \xi_{\mathbf f^{(1)}}(x)=\sqrt{P_X(x)}(\mathbf f^{(1)}(x)-\sum_{x \in \mathcal X}P_X(x) \mathbf f^{(1)}(x))$. \begin{lemma}\label{local_approximation_2} Given \eqref{constraint1} and \eqref{constraint2}, the minimum excess risk \eqref{excess_risk_1} can be expressed as \begin{align}\label{approximation1} &\!\!\!\!\!\!\!\!\!\!\sum_{x \in \mathcal X} P_X(x) D_L(\mathbf a_{P_{Y|X=x}} || \mathbf h(\mathbf{W}^{\operatorname{T}}\bm \psi({\mathbf W}^{(1)\operatorname{T}} {\mathbf f}^{(1)}(x)+{\mathbf b}^{(1)})+\mathbf b))\nonumber\\ =&\frac{1}{2} \|\mathbf{\tilde B}- {\mathbf \Xi}_{\mathbf W^{(1)}} {\mathbf{\Xi}}_{\mathbf f^{(1)}}\|_{F}^2+\frac{1}{2}\kappa({\mathbf c}, {\mathbf f}^{(1)})+o(\epsilon^2), \end{align} where ${\mathbf \Xi}_{\mathbf W}=\mathbf{R}_L\tilde{\mathbf{J}}\mathbf{W}^{\operatorname{T}}\mathbf J_1 {{\mathbf W}}^{(1)\operatorname{T}},$ \begin{align} \kappa({\mathbf c}, {\mathbf f}^{(1)})=&(\mathbf R_L(\mathbf a_{P_Y}-\bm \mu_{\mathbf a}+\tilde{\mathbf J} {\mathbf c})+{\mathbf \Xi}_{\mathbf W^{(1)}}{\bm \mu}_{\mathbf f^{(1)}})^{\operatorname{T}} \nonumber\\ &\times(\mathbf R_L(\mathbf a_{P_Y}-\bm \mu_{\mathbf a}+\tilde{\mathbf J} {\mathbf c})+{\mathbf \Xi}_{\mathbf W^{(1)}}{\bm \mu}_{\mathbf f^{(1)}}), \end{align} 
\begin{align} {\bm \mu}_{\mathbf f^{(1)}}&=\sum_{x \in \mathcal X} P_X(x) {\mathbf f}^{(1)}(x). \end{align} \end{lemma} \begin{proof} See Appendix \ref{plocal_approximation_2}. \end{proof} We solve the following problem \begin{align}\label{approximate_problem1} \min_{\substack{\mathbf \Xi_{\mathbf W}, \mathbf \Xi_{\mathbf f^{(1)}}, \\ \mathbf c, \bm \mu_{\mathbf f^{(1)}}}} \frac{1}{2} \|\mathbf{\tilde B}- \mathbf \Xi_{\mathbf W^{(1)}} \mathbf{\Xi}_{\mathbf f^{(1)}}\|_{F}^2+\frac{1}{2}\kappa(\mathbf c, \mathbf f^{(1)}), \end{align} which is similar to \eqref{approximate_problem}. \begin{theorem}\label{theorem4} For fixed $\mathbf \Xi_{\mathbf f^{(1)}}$ and $\bm \mu_{\mathbf f^{(1)}}$, the optimal $\bar{\mathbf \Xi}_{\mathbf W^{(1)}}$ to minimize \eqref{approximate_problem1} is given by \begin{align}\label{optimal_weight_2} \bar{\mathbf \Xi}_{\mathbf W^{(1)}}=\tilde{\mathbf B} \mathbf {\Xi}_{\mathbf f^{(1)}}^{\operatorname{T}}(\mathbf \Xi_{\mathbf f^{(1)}}\mathbf \Xi_{\mathbf f^{(1)}}^{\operatorname{T}})^{\operatorname{-1}}, \end{align} and the optimal bias $\bar{\mathbf c}$ is expressed as \begin{align}\label{optimal_bias2} \bar{\mathbf c}=-\bar{\mathbf W}^{\operatorname{T}}\mathbf J_{1}{\bar{\mathbf W}}^{(1)\operatorname{T}}\bm \mu_{\mathbf f^{(1)}}+\tilde{\mathbf{J}}^{\operatorname{-1}}(\bm \mu_{\mathbf a}-\mathbf a_{P_Y}). \end{align} \end{theorem} \begin{proof} See Appendix \eqref{ptheorem4} \end{proof} \begin{theorem}\label{theorem5} For fixed $\mathbf W$, $\mathbf W^{(1)}$, and $\mathbf c$, the optimal $\bar{\mathbf{\Xi}}_{\mathbf f^{(1)}}$ to minimize \eqref{approximate_problem1} is given by \begin{align}\label{optimal_feature2} \bar{\mathbf \Xi}_{\mathbf f^{(1)}}=(\mathbf \Xi_{\mathbf W}^{(1)\operatorname{T}}\mathbf \Xi_{\mathbf W^{(1)}})^{\operatorname{-1}}\mathbf \Xi_{\mathbf W^{(1)}}^{\operatorname{T}}\tilde{\mathbf B}. 
\end{align} and the optimal mean $\bar{\bm \mu}_{\mathbf f^{(1)}}$ is given by \begin{align}\label{optimal_mean2} \mathbf J_{1}{\bar{\mathbf W}}^{(1)\operatorname{T}}\bar{\bm \mu}_{\mathbf f^{(1)}}=-(\mathbf \Xi_{\mathbf W}^{\operatorname{T}}\mathbf \Xi_{\mathbf W})^{\operatorname{-1}}\mathbf \Xi_{\mathbf W}^{\operatorname{T}}(\mathbf a_{P_Y}-\bm \mu_{\mathbf a}+\mathbf J\mathbf c). \end{align} \end{theorem} \begin{proof} See Appendix \ref{ptheorem5}. \end{proof} \begin{theorem}\label{theorem6} If $k_1\leq K$, then an optimal $(\bar{\mathbf{\Xi}}_{\mathbf f^{(1)}}, \bar{\mathbf \Xi}_{\mathbf W^{(1)}})$ to minimize \eqref{approximate_problem1} is given by \begin{align}\label{optimal_pair2} \bar{\mathbf{\Xi}}_{\mathbf f^{(1)}}&=\mathbf V_{k_1}^{\operatorname{T}} \\ \bar{\mathbf \Xi}_{\mathbf W^{(1)}}&=\mathbf{U}_{k_1}\mathbf{\Sigma}_{k_1}, \end{align} and the optimal $\bar{\mathbf c}$ and $\bar{\bm \mu}_{\mathbf f^{(1)}}$ should satisfy \begin{align} \bar{\mathbf c}=-\bar{\mathbf W}^{\operatorname{T}}\mathbf J_{1}{\bar{\mathbf W}}^{(1)\operatorname{T}}\bm \bar{\mu}_{\mathbf f^{(1)}}+\tilde{\mathbf{J}}^{\operatorname{-1}}(\bm \mu_{\mathbf a}-\mathbf a_{P_Y}). \end{align} \end{theorem} \begin{proof} See Appendix \ref{ptheorem6}. \end{proof}} \section{Proof of Lemma \ref{lemma1}}\label{plemma1} \begin{lemma}\label{norm_B} If Assumptions \ref{assumption5} and \ref{assumption4} hold, then for any action $\mathbf a_{P_Y} \in \mathcal A_{P_Y}$ and any $x \in \mathcal X$, there exists an $\mathbf a_{P_{Y|X=x}} \in \mathcal A_{P_{Y|X=x}}$ such that \begin{align}\label{lemma1e1} \|\mathbf a_{P_{Y|X=x}}-\mathbf a_{P_Y}\|_2=O(\epsilon). \end{align} \end{lemma} \begin{proof} By Assumption \ref{assumption4} and \eqref{weak-dependent}, for all $x \in \mathcal X$ \begin{align} \sum_{y \in \mathcal Y} \frac{(P_{Y|X}(y|x)-P_Y(y))^2}{P_Y(y)} \leq \epsilon^2, \end{align} where $P_Y(y)>0$ for all $y \in \mathcal Y$. 
This implies that for all $x \in \mathcal X$ and $y \in \mathcal Y$ \begin{align}\label{lemma1e3} |{P}_{Y|X}(y| x)-{P}_Y(y)|\leq \sqrt{P_Y(y)}\epsilon. \end{align} From \eqref{lemma1e3}, we obtain that for all $x \in \mathcal X$ \begin{align}\label{lemma1e4} \sum_{y \in \mathcal Y} ({P}_{Y|X}(y| x)-{P}_Y(y))^2 \leq \epsilon^2. \end{align} Using \eqref{lemma1e4} and Assumption \ref{assumption5}, we get that for any action $\mathbf a_{P_Y} \in \mathcal A_{P_Y}$, there exists an action $\mathbf a_{P_{Y|X=x}} \in \mathcal A_{P_{Y|X=x}}$ such that \begin{align} \|\mathbf a_{P_{Y|X=x}}-\mathbf a_{P_Y}\|_2 =O(\epsilon). \end{align} This concludes the proof. \end{proof} The Bayes action $\mathbf a_{P_Y}$, as an optimal solution to \eqref{bayes}, is determined only by the marginal distribution $P_Y$ and the loss function $L$. Hence, $\mathbf a_{P_Y}$ is independent of the parameter $\epsilon$ in Assumptions \ref{assumption3}-\ref{assumption4}. Recall that the bias $\tilde{\mathbf b}=[\tilde b_1,\ldots,\tilde b_n]^{\operatorname{T}}\in \mathbb R^n$ satisfies \eqref{BiasVector}. Hence, the bias $\tilde{\mathbf b}$ is also independent of $\epsilon$. Due to Assumption \ref{assumption1}, there exist $\delta > 0$ and $K>0$ such that for all $z\in (\tilde b_i-\delta, \tilde b_i+\delta)$ \begin{align}\label{lemma1_ecompare111} \frac{|h(z) - h(\tilde b_i)|}{|z-\tilde b_i|} \geq K. \end{align} Hence, if $|z-\tilde b_i| \geq \delta$, then \begin{align}\label{lemma1_ecompare} |h(z) - h(\tilde b_i)| \geq K \delta. \end{align} We note that $\delta$ and $K$ depend only on the function $h$ and the bias $\tilde{\mathbf b}$. Hence, $\delta$ and $K$ are independent of $\epsilon$.
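The existence of the constants $\delta$ and $K$ in \eqref{lemma1_ecompare111} can be illustrated numerically for a concrete activation; a small sketch with $h=\tanh$, an illustrative choice of a strictly increasing, continuously differentiable function (the point $\tilde b_i=0.5$ and $\delta=0.3$ are arbitrary):

```python
import numpy as np

h = np.tanh
b = 0.5                      # plays the role of tilde-b_i
dh = 1.0 - np.tanh(b) ** 2   # h'(b) for h = tanh
K = dh / 2.0                 # constant from the proof: K = h'(b)/2

# Pick delta so that h'(z) >= K on (b - delta, b + delta); since h' is
# continuous and h'(b) > 0, a small enough delta works. Here delta = 0.3.
delta = 0.3
z = np.linspace(b - delta, b + delta, 1001)
z = z[np.abs(z - b) > 1e-6]           # exclude z = b itself
ratio = np.abs(h(z) - h(b)) / np.abs(z - b)
print(ratio.min() >= K)               # difference quotient stays above K
```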
On the other hand, by using \eqref{BiasVector}, Assumption \ref{assumption5}, Assumption \ref{assumption4}, and Lemma \ref{norm_B}, for any $x \in \mathcal X$ there exists an $\mathbf a_{P_{Y|X=x}}\in \mathcal A_{P_{Y|X=x}}$ that satisfies \begin{align}\label{bayes_approx} \|\mathbf a_{P_{Y|X=x}}-\mathbf h(\tilde{\mathbf b})\|_2=O(\epsilon). \end{align} In addition, due to Assumption \ref{assumption3}, there exists an optimal solution $(\mathbf f, \mathbf W, \mathbf b)$ to \eqref{reformed_problem} such that \begin{align}\label{action_approx} \|\mathbf a_{P_{Y|X=x}}-\mathbf h(\mathbf {{ W}}^{\operatorname{T}}\mathbf { f}(x)+\mathbf { b})\|_2=O(\epsilon). \end{align} Combining \eqref{bayes_approx} and \eqref{action_approx} yields \begin{align}\label{lemma1e6} &\|\mathbf h(\tilde{\mathbf b}) -\mathbf h(\mathbf {{ W}}^{\operatorname{T}}\mathbf { f}(x)+\mathbf { b})\|_2\nonumber\\ =&\|\mathbf h(\tilde{\mathbf b})-\mathbf a_{P_{Y|X=x}}+\mathbf a_{P_{Y|X=x}}-\mathbf h(\mathbf {{ W}}^{\operatorname{T}}\mathbf { f}(x)+\mathbf { b})\|_2\nonumber\\ \leq& \|\mathbf h(\tilde{\mathbf b})-\mathbf a_{P_{Y|X=x}}\|_2+\|\mathbf a_{P_{Y|X=x}}-\mathbf h(\mathbf {{ W}}^{\operatorname{T}}\mathbf { f}(x)+\mathbf { b})\|_2\nonumber\\ =&O(\epsilon). \end{align} Hence, for all $x\in\mathcal X$ and $i=1,2,\ldots,n$ \begin{align}\label{lemma1_finishing} h(\mathbf{ w}_i^{\operatorname{T}}\mathbf { f}(x)+{ b}_i)-h(\tilde b_i)=O(\epsilon). \end{align} Define $\alpha_i(x)=\mathbf { w}_i^{\operatorname{T}}\mathbf { f}(x)+{ b}_i-\tilde b_i$. According to \eqref{lemma1_finishing}, there exists a constant $C >0$, independent of $\epsilon$, such that \begin{align}\label{lemma1_finishing11} |h(\tilde b_i+\alpha_i(x))-h(\tilde b_i)| \leq C\epsilon. \end{align} We choose a sufficiently small $\epsilon>0$ such that $0<\epsilon< \frac{K\delta}{C}$, where $K$ and $\delta$ are given by \eqref{lemma1_ecompare111}.
Then, \eqref{lemma1_finishing11} leads to \begin{align}\label{lemma1_finishing2} |h(\tilde b_i+\alpha_i(x))-h(\tilde b_i)| < K\delta. \end{align} By comparing \eqref{lemma1_ecompare} and \eqref{lemma1_finishing2}, it follows that $|\alpha_i(x)| < \delta$. Then, by invoking \eqref{lemma1_ecompare111} again, we can get \begin{align} \frac{|h(\tilde b_i+\alpha_i(x))-h(\tilde b_i)|}{|\alpha_i(x)|} \geq K. \end{align} Hence, \begin{align}\label{eq_alpha_i_bound} |\alpha_i(x)| \leq& \frac{|h(\tilde b_i+\alpha_i(x))-h(\tilde b_i)|}{K}\nonumber\\ \leq &\frac{C\epsilon}{K}. \end{align} This implies $\alpha_i(x)=O(\epsilon)$ for all $x\in\mathcal X$ and $i=1,\ldots,n$. This completes the proof of Lemma \ref{lemma1}. \ignore{ \subsection{Case 2: Activation Function \eqref{case2} of the Output Layer} We now consider the activation function in $\eqref{case2}$ and prove the claimed result in three steps: \emph{Step 1: We will find an appropriate bias $\hat{\mathbf b}=[\hat b_1,\ldots,\hat b_n]^{\operatorname{T}}$ that satisfies $\mathbf h(\hat{\mathbf b})=\mathbf a_{P_Y}$.} Similar to Case 1, because $\mathcal A\subseteq \mathcal H$ and $\mathbf a_{P_Y}\in\mathcal A$, there exists a bias $\tilde{\mathbf b}=[\tilde b_1,\ldots,\tilde b_n]^{\operatorname{T}}$ such that \begin{align}\label{BiasVector1} \mathbf h(\tilde{\mathbf b})=\left[ \frac{g(\tilde b_1)}{\sum_{j=1}^n g(\tilde b_j)},\ldots,\frac{g(\tilde b_n)}{\sum_{j=1}^n g(\tilde b_j)}\right]^{\operatorname{T}}=\mathbf a_{P_Y}. \end{align} We note that the choice of bias $\tilde{\mathbf b}$ in \eqref{BiasVector1} is irrelevant of $\epsilon$. By substituting \eqref{BiasVector1} into \eqref{lemma1e6}, we can obtain that for all $i=1,2,\ldots,n$ \begin{align}\label{lemma1_finishing3} \frac{g(\mathbf { w}_i^{\operatorname{T}}\mathbf { f}(x)+{ b}_i)}{\sum_{j=1}^n g(\mathbf { w}_j^{\operatorname{T}}\mathbf { f}(x)+{ b}_j)}-\frac{g(\tilde b_i)}{\sum_{j=1}^n g(\tilde b_j)}=O(\epsilon). 
\end{align} The bias $\tilde{\mathbf b}$ that satisfies \eqref{BiasVector1} and \eqref{lemma1_finishing3} is non-unique. In the sequence, we will find a bias that not only satisfies \eqref{BiasVector1} and \eqref{lemma1_finishing3}, but also has a desirable orthogonality property \eqref{eq_orthogonal}. Denote \begin{align} \mathbf{g}&=[g(\tilde b_1), \ldots,g(\tilde b_n)]^{\operatorname{T}},\\ \mathbf{r}&=[g(\mathbf { w}_1^{\operatorname{T}}\mathbf { f}(x)+{ b}_1), \ldots,g(\mathbf { w}_n^{\operatorname{T}}\mathbf { f}(x)+{ b}_n)]^{\operatorname{T}} \end{align} and let $v = \mathbf{g}^{\operatorname{T}}\mathbf{r}/\mathbf{g}^{\operatorname{T}} \mathbf g>0$. Because the image set of function $g$ is $[0,\infty)$, for the $v >0$ and $\tilde{\mathbf b}$ given above, there exists a bias $\mathbf{\hat b}=[\hat b_1,\ldots,\hat b_n]^{\operatorname{T}}$ that satisfies $g(\hat b_i) = vg(\tilde b_i)$ for all $i=1,2,\ldots,n$ and \begin{align}\label{eq_scaled} \frac{g(\hat b_i)}{\sum_{j=1}^n g(\hat b_j)} = \frac{v g(\tilde b_i)}{\sum_{j=1}^n v g(\tilde b_j)} = \frac{ g(\tilde b_i)}{\sum_{j=1}^n g(\tilde b_j)}. \end{align} Let us denote \begin{align} \mathbf{\hat g}&=[g(\hat b_1), \ldots,g(\hat b_n)]^{\operatorname{T}}, \end{align} then $\mathbf{\hat g} = v \mathbf{g}$ and \begin{align}\label{eq_orthogonal} &(\mathbf{r} - \mathbf{\hat g})^{\operatorname{T}}\mathbf{\hat g}\nonumber\\ =&(\mathbf{r} - v\mathbf{g})^{\operatorname{T}}(v\mathbf{g})\nonumber\\ =&v \left(\mathbf{r}^{\operatorname{T}}\mathbf{g} - \frac{\mathbf{g}^{\operatorname{T}}\mathbf{r}}{\mathbf{g}^{\operatorname{T}} \mathbf g} \mathbf{g}^{\operatorname{T}} \mathbf g\right)\nonumber\\ =&0. 
\end{align} By substituting \eqref{eq_scaled} into \eqref{BiasVector1} and \eqref{lemma1_finishing3}, the bias $\hat{\mathbf b}$ satisfies \eqref{eq_orthogonal} and \begin{align}\label{eq_same_action} \mathbf h(\hat{\mathbf b})=\left[ \frac{g(\hat b_1)}{\sum_{j=1}^n g(\hat b_j)},\ldots,\frac{g(\hat b_n)}{\sum_{j=1}^n g(\hat b_j)}\right]^{\operatorname{T}}=\mathbf a_{P_Y},\\ \frac{g(\mathbf { w}_i^{\operatorname{T}}\mathbf { f}(x)+{ b}_i)}{\sum_{j=1}^n g(\mathbf { w}_j^{\operatorname{T}}\mathbf { f}(x)+{ b}_j)}-\frac{g(\hat b_i)}{\sum_{j=1}^n g(\hat b_j)}=O(\epsilon). \label{lemma1_finishing1} \end{align} By this, we have found a desirable bias $\hat{\mathbf b}$. \emph{Step 2: We will use \eqref{eq_orthogonal} and \eqref{lemma1_finishing1} to show $\mathbf{r} - \mathbf{\hat g}=O(\epsilon \mathbf 1)$.} Define $\mathbf a(\mathbf z) = [a_1(\mathbf z),\ldots,a_n(\mathbf z)]^{\operatorname{T}}$, where \begin{align} a_i(\mathbf z)= \frac{z_i}{\sum_{j=1}^n z_j}. \end{align} Then, $\mathbf h(\hat{\mathbf b})=\mathbf a(\mathbf{\hat g})$ and $\mathbf h({\mathbf{W}}^{\operatorname{T}} {\mathbf f} + {\mathbf b})=\mathbf a(\mathbf{ r})$. Let $\mathbf J_{\mathbf a}(\mathbf z) = \frac{\partial \mathbf a(\mathbf z)}{\partial \mathbf z^{\operatorname{T}}}$ be the Jacobian matrix of $\mathbf a(\mathbf z)$. The $(i,j)$th element of $\mathbf J_{\mathbf a}(\mathbf z)$ is \begin{align} \left(\mathbf J_{\mathbf a}(\mathbf z)\right)_{i,j} = \frac{\partial a_i}{\partial z_j}=\left\{\begin{array}{l l} \frac{\sum_{k=1}^n z_k - z_i}{\left(\sum_{k=1}^n z_k\right)^2}, & \text{ if } i=j\\ -\frac{z_i}{\left(\sum_{k=1}^n z_k\right)^2}, & \text{ if } i\neq j. \end{array}\right. \end{align} It is easy to check $\mathbf J_{\mathbf a}(\mathbf z) \mathbf z = \mathbf 0$. Hence, the directional gradient of $\mathbf a(\mathbf z)$ on the direction $\mathbf z$ is $\nabla_\mathbf z(\mathbf a(\mathbf z))=\mathbf J_{\mathbf a}(\mathbf z) \mathbf z/\|\mathbf z\|_2=\mathbf 0$. 
The largest and smallest singular values of $\mathbf J_{\mathbf a}(\mathbf z)$ are $\sigma_1(\mathbf J_{\mathbf a}(\mathbf z)) = \sqrt{n}\|z\|_2/\left(\sum_{k=1}^n z_k\right)^2$ and $\sigma_n(\mathbf J_{\mathbf a}(\mathbf z)) = 0$, respectively. If $n\geq 3$, the other singular values are $\sigma_2(\mathbf J_{\mathbf a}(\mathbf z))=\ldots=\sigma_{n-1}(\mathbf J_{\mathbf a}(\mathbf z)) = 1/\sum_{j=1}^n z_j$. The left and right singular vectors of $\mathbf J_{\mathbf a}(\mathbf z)$ associated to the smallest singular value $\sigma_n(\mathbf J_{\mathbf a}(\mathbf z))=0$ are $\mathbf 1/\|\mathbf 1\|_2$ and $\mathbf z/\|\mathbf z\|_2$, respectively. The directional gradient of $\mathbf a(\mathbf z)$ at the point $\mathbf z=\mathbf{\hat g}$ on the direction $(\mathbf{r} - \mathbf{\hat g})$ is \begin{align} \nabla_{\mathbf{r} - \mathbf{\hat g}}(\mathbf a(\mathbf z))\big|_{\mathbf z=\mathbf{\hat g}}=\frac{\mathbf J_{\mathbf a}(\mathbf{\hat g}) (\mathbf{r} - \mathbf{\hat g})}{\|\mathbf{r} - \mathbf{\hat g}\|_2}. \end{align} Notice that $\mathbf{\hat g}/\|\mathbf{\hat g}\|_2$ is the right singular vector of $\mathbf J_{\mathbf a}(\mathbf {\hat g})$ associated to the smallest singular value $\sigma_n(\mathbf J_{\mathbf a}(\mathbf {\hat g}))=0$. Taking \eqref{eq_orthogonal} into consideration, we can get \begin{align} &\nabla_{\mathbf{r} - \mathbf{\hat g}}(\mathbf a(\mathbf z))\big|_{\mathbf z=\mathbf{\hat g}} \geq \min_{\mathbf z: \mathbf z^{\operatorname{T}}\mathbf{\hat g}=0}\frac{\mathbf J_{\mathbf a}(\mathbf{\hat g}) \mathbf z}{\|\mathbf z\|_2} =\sigma_{n-1}(\mathbf J_{\mathbf a}(\mathbf {\hat g})) \geq \frac{1}{\|\mathbf {\hat g}\|_1}, \end{align} where $\sigma_{n-1}(\mathbf J_{\mathbf a}(\mathbf {\hat g}))$ is the second smallest singular value of $\mathbf J_{\mathbf a}(\mathbf {\hat g})$. 
The directional gradient $\nabla \mathbf h(\mathbf{\hat b};\bm\alpha)$ of $\mathbf h({\mathbf b})$ at the point ${\mathbf b}=\mathbf{\hat b}$ on the direction $\bm\alpha$ is \begin{align} \nabla \mathbf h(\mathbf{\hat b};\bm\alpha) = \frac{\mathbf J_{\mathbf a}(\mathbf z(\mathbf{\hat b})) \mathbf J_{\mathbf z}(\mathbf {\hat b}) \bm\alpha}{\|\bm\alpha\|_2}. \end{align} Denote $\alpha_i=\mathbf { w}_i^{\operatorname{T}}\mathbf { f}(x)+{ b}_i-\hat b_i$ and $\bm \alpha = [\alpha_1,\ldots,\alpha_n]^{\operatorname{T}}$. In the following, we will use Assumption \ref{assumption1} and \eqref{lemma1_finishing1} to show $\bm\alpha=O(\epsilon \mathbf 1)$. } \section{Proof of Lemma \ref{lemma2}}\label{plemma2} Let us define big-O and little-o notations for vectors and matrices, which will be used in the proof. \begin{definition}[Big-O and Little-o Notations for Vectors] Consider two vector functions $\mathbf f: \mathbb{R}\mapsto \mathbb{R}^{n}$ and $\mathbf g: \mathbb{R}\mapsto \mathbb{R}^{n}$. We say $\mathbf f(x) = O(\mathbf g(x))$, if there exist constants $M > 0$ and $d > 0$ such that \begin{align} \|\mathbf f(x)\|_2 \leq M \|\mathbf g(x)\|_2, ~ \text{for all }x\text{ with }|x| < d, \end{align} where $\|\mathbf f\|_2= (\sum_{i=1}^n f_i^2)^{1/2}$ is the $l_2$ norm of vector $\mathbf f$. We say $\mathbf f(x) = o(\mathbf g(x))$, if for each $M > 0$ there exists a real number $d > 0$ such that \begin{align}\label{eq_little_o} \|\mathbf f(x)\|_2 \leq M \|\mathbf g(x)\|_2, ~ \text{for all }x\text{ with }|x| < d. \end{align} If $\|\mathbf g(x)\|_2\neq 0$, then \eqref{eq_little_o} is equivalent to \begin{align} \lim_{x\rightarrow0} \|\mathbf f(x)\|_2/\|\mathbf g(x)\|_2 = 0. \end{align} \end{definition} \begin{definition}[Big-O and Little-o Notations for Matrices] Consider two matrix functions $\mathbf F: \mathbb{R}\mapsto \mathbb{R}^{n}\times\mathbb{R}^{n}$ and $\mathbf G: \mathbb{R}\mapsto \mathbb{R}^{n}\times\mathbb{R}^{n}$. 
We say $\mathbf F(x) = O(\mathbf G(x))$, if there exist constants $M > 0$ and $d > 0$ such that \begin{align} \|\mathbf F(x)\|_2 \leq M \|\mathbf G(x)\|_2, ~ \text{for all }x\text{ with }|x| < d, \end{align} where $\|\mathbf A\|_2 =\sigma_1 (\mathbf A)$ is the spectral norm of matrix $\mathbf A$. In addition, we say $\mathbf F(x) = o(\mathbf G(x))$, if for every $M > 0$ there exists a real number $d > 0$ such that \begin{align}\label{eq_little_o_matrix} \|\mathbf F(x)\|_2 \leq M \|\mathbf G(x)\|_2, ~ \text{for all }x\text{ with }|x| < d. \end{align} If $\|\mathbf G(x)\|_2\neq 0$, then \eqref{eq_little_o_matrix} is equivalent to \begin{align} \lim_{x\rightarrow0} \|\mathbf F(x)\|_2/\|\mathbf G(x)\|_2 = 0. \end{align} \end{definition} Let $\mathbf M_L(x)$ denote the Hessian matrix \begin{align} \mathbf M_L(x)= \frac{\partial^2 E_{Y \sim P_{Y|X=x}}[L(Y, \mathbf a)]}{\partial \mathbf a \partial \mathbf a^{\operatorname{T}}}\bigg|_{\mathbf a=\mathbf a_{P_{Y|X=x}}}, \end{align} The $(i, j)$-th element of $\mathbf M_L(x)$ is \begin{align} (\mathbf M_L(x))_{i,j}=\frac{\partial^2 E_{Y \sim P_{Y|X=x}}[L(Y, \mathbf a)]}{\partial a_i \partial a_j}\bigg|_{\mathbf a=\mathbf a_{P_{Y|X=x}}}. \end{align} \begin{lemma}\label{Hessian_lemma} If Assumptions \ref{assumption5}, \ref{assumption2}, and \ref{assumption4} hold, then \begin{align}\label{hessianlocal} (\mathbf M_L(x))_{i,j}=(\mathbf M_L)_{i,j}+o(1). \end{align} \end{lemma} \begin{proof} \ignore{{\red In the next equation, $P_Y$ occurs twice, in the inner product with loss function $L$ and in the Bayes action (I hope you have multivariable calculus book for knowledge). Because the inner product is linear wrt $P_Y$, continuity wrt $P_Y$ follows. The continuity of $g$ on the action, and the continuity of Bayes action on $P_Y$ (continuity of composite function) are needed. 
All these should be discussed in the proof.}} Consider the function \begin{align} g(P_Y)=\frac{\partial^2 \mathbb E_{Y \sim P_Y}[L(Y, \mathbf a)]}{\partial a_i \partial a_j}\bigg|_{\mathbf a=\mathbf a_{P_Y}}, \end{align} where the Bayes action $\mathbf a_{P_Y}$ satisfies Assumption \ref{assumption5}. Because of Assumption \ref{assumption5}, the Bayes action $\mathbf a_{P_Y}$ is a continuous function of $P_Y$. In addition, due to Assumption \ref{assumption2} and by using the continuity property of a composite function, we obtain that $g(P_Y)$ is a continuous function of $P_Y$. Due to Assumption \ref{assumption4}, we obtain that for all $x\in\mathcal X$ and $y\in\mathcal Y$ \begin{align}\label{HL1} P_{Y|X}(y|x)=P_Y(y)+O(\epsilon). \end{align} In addition, because $g$ is continuous, \eqref{HL1} implies \begin{align} g(P_{Y|X=x})=g(P_Y)+o(1). \end{align} This concludes the proof. \end{proof} It is known that \begin{align}\label{Lemma1proofeq1} D_L(\mathbf a_{P_{Y|X=x}}||\mathbf a)\geq 0, \end{align} where equality is achieved at $\mathbf a = \mathbf a_{P_{Y|X=x}}$, i.e., \begin{align}\label{stationary_point} D_L(\mathbf a_{P_{Y|X=x}}||\mathbf a_{P_{Y|X=x}})=0. \end{align} In addition, the function $\mathbf a \mapsto L(y, \mathbf a)$ is twice differentiable for all $y\in\mathcal Y$. Because of these properties, by taking the second order Taylor series expansion of the function $\mathbf a \mapsto D_L(\mathbf a_{P_{Y|X=x}}||\mathbf a)$ at the point $\mathbf a=\mathbf a_{P_{Y|X=x}},$ we obtain \begin{align}\label{Lemma1proofeq3} D_L(\mathbf a_{P_{Y|X=x}}||\mathbf a)=&\frac{1}{2}(\mathbf a-\mathbf a_{P_{Y|X=x}})^{\operatorname{T}} \mathbf M_L(x) (\mathbf a-\mathbf a_{P_{Y|X=x}}) \nonumber\\ \!\!\!\!\!&+o(\|\mathbf a-\mathbf a_{P_{Y|X=x}}\|^2_2).
\end{align} Letting $\mathbf a=\mathbf h({{\mathbf W}}^{\operatorname{T}}{\mathbf f}(x)+{\mathbf b})$ in \eqref{Lemma1proofeq3}, we obtain \begin{align}\label{Lemma1proofeq4} &D_L(\mathbf a_{P_{Y|X=x}}||\mathbf h({{\mathbf W}}^{\operatorname{T}}{\mathbf f}(x)+{\mathbf b}))\nonumber\\ =&\frac{1}{2}(\mathbf h({{\mathbf W}}^{\operatorname{T}}{\mathbf f}(x)+{\mathbf b})-\mathbf a_{P_{Y|X=x}})^{\operatorname{T}} \mathbf M_L(x) \nonumber\\ & \times (\mathbf h({{\mathbf W}}^{\operatorname{T}}{\mathbf f}(x)+{\mathbf b})-\mathbf a_{P_{Y|X=x}}) \nonumber\\ \!\!\!\!\!&+o\left(\|\mathbf h({{\mathbf W}}^{\operatorname{T}}{\mathbf f}(x)+{\mathbf b})-\mathbf a_{P_{Y|X=x}}\|^2_2\right). \end{align} Due to Assumption \ref{assumption3}, \eqref{Lemma1proofeq4} can be reduced to \begin{align}\label{Lemma1proofeq5} &D_L(\mathbf a_{P_{Y|X=x}}||\mathbf h({{\mathbf W}}^{\operatorname{T}}{\mathbf f}(x)+{\mathbf b}))\nonumber\\ =&\frac{1}{2}(\mathbf h({{\mathbf W}}^{\operatorname{T}}{\mathbf f}(x)+{\mathbf b})-\mathbf a_{P_{Y|X=x}})^{\operatorname{T}} \mathbf M_L(x) \nonumber\\ & \times (\mathbf h({{\mathbf W}}^{\operatorname{T}}{\mathbf f}(x)+{\mathbf b})-\mathbf a_{P_{Y|X=x}}) +o(\epsilon^2). \end{align} Because $h$ is continuously differentiable, we take the first order Taylor series expansion of $\mathbf h(\mathbf b)$ at the point $\mathbf b=\tilde{\mathbf b}$, which yields \begin{align}\label{activationlocal1} \mathbf h(\mathbf b)=\mathbf h(\tilde{\mathbf b})+\mathbf J(\mathbf b-\tilde{\mathbf b})+o(\mathbf b-\tilde{\mathbf b}). \end{align} \ignore{{\red Note: $\mathbf J$ is not of full rank for the second activation function of the output layer?
This is a good reason to abandon the second activation function.}} In \eqref{activationlocal1}, by using Lemma \ref{lemma1} and letting $\mathbf b={{\mathbf W}}^{\operatorname{T}}{\mathbf f}(x)+{\mathbf b}$, we can get \begin{align}\label{activationlocal} &\mathbf h({{\mathbf W}}^{\operatorname{T}}{\mathbf f}(x)+{\mathbf b})\nonumber\\ =&\mathbf h(\tilde{\mathbf b})+\mathbf J{{\mathbf W}}^{\operatorname{T}}{\mathbf f}(x)+\mathbf J {\mathbf d}+o(\epsilon \bm 1), \end{align} where ${\mathbf d}={\mathbf b} - \tilde{\mathbf b}$ and $\bm 1=[1, \ldots, 1]^{\operatorname{T}} \in \mathbb R^n$. Define \begin{align} \mathbf q_1&=\mathbf a_{P_{Y|X=x}}-\bm \mu_{\mathbf a}, \\ \mathbf q_2&=\mathbf J{{\mathbf W}}^{\operatorname{T}}({\mathbf f}(x)-{\bm \mu}_{\mathbf f}),\\ \mathbf q_3&=\mathbf a_{P_Y}-\bm \mu_{\mathbf a}+\mathbf J {\mathbf d}+\mathbf J{{\mathbf W}}^{\operatorname{T}}{\bm \mu}_{\mathbf f}. \end{align} By using \eqref{constraint} and Lemma \ref{norm_B}, we get \begin{align}\label{q1q2q3} &\|\mathbf q_1-\mathbf q_2-\mathbf q_3\|_2 \nonumber\\ =&\|\mathbf a_{P_{Y|X=x}}-\mathbf a_{P_Y}-\mathbf J({{\mathbf W}}^{\operatorname{T}}{\mathbf f}(x)+{\mathbf d})\|_2\nonumber\\ \leq&\|\mathbf a_{P_{Y|X=x}}-\mathbf a_{P_Y}\|_2+\|\mathbf J({{\mathbf W}}^{\operatorname{T}}{\mathbf f}(x)+{\mathbf d})\|_2 \nonumber\\ \leq&\|\mathbf a_{P_{Y|X=x}}-\mathbf a_{P_Y}\|_2+\|\mathbf J\|_2\|({{\mathbf W}}^{\operatorname{T}}{\mathbf f}(x)+{\mathbf d})\|_2 \nonumber\\ \leq&\|\mathbf a_{P_{Y|X=x}}-\mathbf a_{P_Y}\|_2+\sigma_{\text{max}}(\mathbf J)\|({{\mathbf W}}^{\operatorname{T}}{\mathbf f}(x)+{\mathbf d})\|_2 \nonumber\\ =&O(\epsilon), \end{align} where $\sigma_{\text{max}}(\mathbf J)=\max_{i}h'(\tilde b_i)$. 
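As a quick numerical sanity check (illustrative, not part of the proof), the spectral-norm step above can be verified for a concrete activation: since $\mathbf h$ acts elementwise, $\mathbf J$ is diagonal, so $\sigma_{\text{max}}(\mathbf J)=\max_i h'(\tilde b_i)$ and $\|\mathbf J\mathbf v\|_2\leq \sigma_{\text{max}}(\mathbf J)\|\mathbf v\|_2$. The sigmoid activation and the bias vector below are arbitrary choices.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)

# Jacobian of the elementwise map h at b = b_tilde is diagonal
b_tilde = np.array([-1.0, 0.3, 2.0])
J = np.diag(sigmoid_prime(b_tilde))

# spectral norm of a nonnegative diagonal matrix = largest diagonal entry
sigma_max = np.linalg.norm(J, 2)
assert np.isclose(sigma_max, sigmoid_prime(b_tilde).max())

# the bound ||J v||_2 <= sigma_max ||v||_2 used in the display above
rng = np.random.default_rng(0)
for _ in range(100):
    v = rng.standard_normal(3)
    assert np.linalg.norm(J @ v) <= sigma_max * np.linalg.norm(v) + 1e-12
```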
Substituting \eqref{hessianlocal} and \eqref{activationlocal} to \eqref{Lemma1proofeq5}, we obtain \begin{align} \!\!\!&D_L(\mathbf a_{P_{Y|X=x}}||\mathbf h({{\mathbf W}}^{\operatorname{T}}{\mathbf f}(x)+{\mathbf b}))\nonumber\\ =&\frac{1}{2}(\mathbf q_1-\mathbf q_2-\mathbf q_3+o(\epsilon\bm 1))^{\operatorname{T}} \nonumber\\ &\times (\mathbf M_L+o(\mathbf I))(\mathbf q_1-\mathbf q_2-\mathbf q_3+o( \epsilon \bm 1))\nonumber\\ =&\frac{1}{2}\bigg[(\mathbf q_1-\mathbf q_2-\mathbf q_3)^{\operatorname{T}}\mathbf M_L(\mathbf q_1-\mathbf q_2-\mathbf q_3)\nonumber\\ &~~~+2(\mathbf q_1-\mathbf q_2-\mathbf q_3)^{\operatorname{T}}\mathbf M_L ~o(\epsilon \bm 1)\nonumber\\ &~~~+2(\mathbf q_1-\mathbf q_2-\mathbf q_3)^{\operatorname{T}}o(\mathbf I)~o(\epsilon \bm 1)\nonumber\\ &~~~+o(\epsilon \bm 1^{\operatorname{T}})~(\mathbf M_L+o(\mathbf I))~o(\epsilon \bm 1)\nonumber\\ &~~~+(\mathbf q_1-\mathbf q_2-\mathbf q_3)^{\operatorname{T}}o(\mathbf I) (\mathbf q_1-\mathbf q_2-\mathbf q_3)\bigg]+o(\epsilon^2).\!\! \end{align} Using \eqref{q1q2q3}, we can write \begin{align}\label{L2eImp} &D_L(\mathbf a_{P_{Y|X=x}}||\mathbf h({{\mathbf W}}^{\operatorname{T}}{\mathbf f}(x)+{\mathbf b}))\nonumber\\ =&\frac{1}{2}(\mathbf q_1-\mathbf q_2-\mathbf q_3)^{\operatorname{T}} \mathbf M_L(\mathbf q_1-\mathbf q_2-\mathbf q_3)+o(\epsilon^2). \end{align} Because $\mathbf M_L=\mathbf R_L^{\operatorname{T}}\mathbf R_L$, we get \begin{align} &\!\!\!\!\!\!\!D_L(\mathbf a_{P_{Y|X=x}}||\mathbf h({{\mathbf W}}^{\operatorname{T}}{\mathbf f}(x)+{\mathbf b}))\nonumber\\ &\!\!\!\!\!\!\!\!=\frac{1}{2}(\mathbf R_L(\mathbf q_1\!-\!\mathbf q_2\!-\!\mathbf q_3))^{\operatorname{T}} (\mathbf R_L(\mathbf q_1\!-\!\mathbf q_2\!-\!\mathbf q_3))\!+\!o(\epsilon^2).\!\! 
\end{align} Multiplying the above equation by $P_X(x)$ yields \begin{align}\label{lastdecomposed} & P_X(x) D_L(\mathbf a_{P_{Y|X=x}}||\mathbf h({{\mathbf W}}^{\operatorname{T}}{\mathbf f}(x)+{\mathbf b}))\nonumber\\ =&\frac{1}{2}(\mathbf R_L\sqrt{P_X(x)}(\mathbf q_1-\mathbf q_2-\mathbf q_3))^{\operatorname{T}} \nonumber\\ &~~~~~~~~~~\times (\mathbf R_L\sqrt{P_X(x)}(\mathbf q_1-\mathbf q_2-\mathbf q_3))+o(\epsilon^2)\nonumber\\ =&\frac{1}{2}(\mathbf R_L\sqrt{P_X(x)}(\mathbf q_1-\mathbf q_2))^{\operatorname{T}}(\mathbf R_L\sqrt{P_X(x)}(\mathbf q_1-\mathbf q_2))\nonumber\\ &-P_X(x)(\mathbf q_1-\mathbf q_2)^{\operatorname{T}}\mathbf M_L\mathbf q_3+\frac{1}{2}P_X(x)\mathbf q_3^{\operatorname{T}}\mathbf M_L\mathbf q_3+o(\epsilon^2). \end{align} By substituting \eqref{matrixf}-\eqref{matrixJ} into \eqref{lastdecomposed} and taking the summation over $x \in \mathcal X$, we derive \begin{align} &\sum_{x \in \mathcal X} P_X(x) D_L(\mathbf a_{P_{Y|X=x}}||\mathbf h({{\mathbf W}}^{\operatorname{T}}{\mathbf f}(x)+{\mathbf b}))\nonumber\\ =&\frac{1}{2} \|\mathbf{R}_L\mathbf{B}-\mathbf{R}_L\mathbf{J}{\mathbf{W}}^{\operatorname{T}} {\mathbf{\Xi}}_{\mathbf f}\|_{F}^2\nonumber\\ &+\frac{1}{2}\sum_{x \in \mathcal X}P_X(x)(\mathbf q_3-2\mathbf q_1+2\mathbf q_2)^{\operatorname{T}}\mathbf M_L\mathbf q_3+o(\epsilon^2) \nonumber\\ =&\frac{1}{2} \|\mathbf{R}_L\mathbf{B}-\mathbf{R}_L\mathbf{J}{\mathbf{W}}^{\operatorname{T}} {\mathbf{\Xi}}_{\mathbf f}\|_{F}^2\nonumber\\ &+\frac{1}{2}\bigg(\mathbf q_3-2\sum_{x \in \mathcal X}P_X(x) \mathbf q_1+2 \sum_{x \in \mathcal X}P_X(x)\mathbf q_2\bigg)^{\operatorname{T}}\mathbf M_L {\mathbf q}_3 \nonumber\\ &+o(\epsilon^2) \nonumber\\ =&\frac{1}{2} \|\mathbf{R}_L\mathbf{B}-\mathbf{R}_L\mathbf{J}{\mathbf{W}}^{\operatorname{T}} {\mathbf{\Xi}}_{\mathbf f}\|_{F}^2+\frac{1}{2}\mathbf q_3^{\operatorname{T}}\mathbf M_L{\mathbf q}_3 +o(\epsilon^2) \nonumber\\ =&\frac{1}{2} \|\mathbf{R}_L\mathbf{B}-\mathbf{R}_L\mathbf{J}{\mathbf{W}}^{\operatorname{T}}
{\mathbf{\Xi}}_{\mathbf f}\|_{F}^2+\frac{1}{2}\eta({\mathbf d}, {\mathbf f})+o(\epsilon^2), \end{align} where the second equality holds because $\mathbf q_3$ and $\mathbf{M}_L$ do not change with respect to $x$, and the third equality holds because $\sum_{x \in \mathcal X}P_X(x)\mathbf q_1=\bm 0$ and $\sum_{x \in \mathcal X}P_X(x)\mathbf q_2=\bm 0$. This completes the proof. \section{Proof of Lemma \ref{local_approximation_2}}\label{plocal_approximation_2} We take the first order Taylor series expansion of $\mathbf h(\mathbf z)$ at the point $\mathbf z=\hat{\mathbf b}$, which yields \begin{align}\label{T4e1} \mathbf h(\mathbf z)=\mathbf h(\hat{\mathbf b})+\tilde{\mathbf J}(\mathbf z-\hat{\mathbf b})+o(\mathbf z-\hat{\mathbf b}). \end{align} Let $\mathbf z={\mathbf{W}}^{\operatorname{T}}\bm \psi({{\mathbf W}^{(1){\operatorname{T}}}}{\mathbf f}^{(1)}(x)+{\mathbf b}^{(1)})+\mathbf b$. Then, by using \eqref{constraint1}, we obtain \begin{align}\label{T4e2} &\mathbf h({\mathbf{W}}^{\operatorname{T}}\bm \psi({{\mathbf W}^{(1){\operatorname{T}}}}{\mathbf f}^{(1)}(x)+{\mathbf b}^{(1)})+\mathbf b)\nonumber\\ =&\mathbf h(\hat{\mathbf b})+\tilde{\mathbf J}\left({\mathbf{W}}^{\operatorname{T}}\bm \psi({{\mathbf W}^{(1){\operatorname{T}}}}{\mathbf f}^{(1)}(x)+{\mathbf b}^{(1)})+\mathbf b-\hat{\mathbf b}\right)\nonumber\\ &+o({\mathbf{W}}^{\operatorname{T}}\bm \psi({{\mathbf W}^{(1){\operatorname{T}}}}{\mathbf f}^{(1)}(x)+{\mathbf b}^{(1)})+\mathbf b-\hat{\mathbf b})\nonumber\\ =&\mathbf h(\hat{\mathbf b})+\tilde{\mathbf J}\left({\mathbf{W}}^{\operatorname{T}}\bm \psi({{\mathbf W}^{(1){\operatorname{T}}}}{\mathbf f}^{(1)}(x)+{\mathbf b}^{(1)})+\mathbf b-\hat{\mathbf b}\right)+o(\epsilon \mathbf 1).
\end{align} Next, we take the first order Taylor series expansion of $\bm \psi(\mathbf z^{(1)})$ at the point $\mathbf z^{(1)}=\hat{\mathbf b}^{(1)}$ and get \begin{align}\label{T4e3} \!\!\bm \psi(\mathbf z^{(1)})=\bm \psi(\hat{\mathbf b}^{(1)})+\mathbf J_1(\mathbf z^{(1)}-\hat{\mathbf b}^{(1)})+o(\mathbf z^{(1)}-\hat{\mathbf b}^{(1)}).\!\! \end{align} Let $\mathbf z^{(1)}={{\mathbf W}^{(1){\operatorname{T}}}}{\mathbf f}^{(1)}(x)+{\mathbf b}^{(1)}$; then, by using \eqref{constraint2}, we get \begin{align}\label{T4e4} &\bm \psi({{\mathbf W}^{(1){\operatorname{T}}}}{\mathbf f}^{(1)}(x)+{\mathbf b}^{(1)})\nonumber\\ =&\bm \psi(\hat{\mathbf b}^{(1)})+\mathbf J_1\left({{\mathbf W}^{(1){\operatorname{T}}}}{\mathbf f}^{(1)}(x)+{\mathbf b}^{(1)}-\hat{\mathbf b}^{(1)}\right)+o(\epsilon\mathbf1). \end{align} Now, if we choose $\mathbf z=\mathbf W^{\operatorname{T}}\bm \psi(\hat{\mathbf b}^{(1)})+\hat {\mathbf b}$ in \eqref{T4e1} such that \begin{align}\label{T4e5} \mathbf W^{\operatorname{T}}\bm \psi(\hat{\mathbf b}^{(1)})=o(\epsilon\mathbf 1), \end{align} we get \begin{align}\label{T4e6} \!\!\mathbf h(\mathbf W^{\operatorname{T}}\bm \psi(\hat{\mathbf b}^{(1)})+\hat {\mathbf b})=\mathbf h(\hat{\mathbf b})+\tilde{\mathbf J}(\mathbf W^{\operatorname{T}}\bm \psi(\hat{\mathbf b}^{(1)}))+o(\epsilon\mathbf 1).\!\! \end{align} Substituting \eqref{bias2}, \eqref{T4e4}, and \eqref{T4e6} into \eqref{T4e2}, we get \begin{align}\label{T4e7} &\mathbf h({\mathbf{W}}^{\operatorname{T}}\bm \psi({{\mathbf W}^{(1){\operatorname{T}}}}{\mathbf f}^{(1)}(x)+{\mathbf b}^{(1)})+\mathbf b)\nonumber\\ =&\mathbf a_{P_Y}+\tilde{\mathbf J}\left({\mathbf{W}}^{\operatorname{T}}\mathbf J_1\left({{\mathbf W}^{(1){\operatorname{T}}}}{\mathbf f}^{(1)}(x)+{\mathbf b}^{(1)}-\hat{\mathbf b}^{(1)}\right)+\mathbf b-\hat{\mathbf b}\right)\nonumber\\ &+o(\epsilon \mathbf 1).
\end{align} Define \begin{align}\label{T4e8} \mathbf q_1^{(1)}=&\mathbf a_{P_{Y|X=x}}-\bm \mu_{\mathbf a},\\ \mathbf q_2^{(1)}=&\tilde{\mathbf J}{{\mathbf W}}^{\operatorname{T}}\mathbf J_1{{\mathbf W}^{(1){\operatorname{T}}}}({\mathbf f}^{(1)}(x)-{\bm \mu}_{\mathbf f^{(1)}}), \\ \mathbf q_3^{(1)}=&\mathbf a_{P_Y}-\bm \mu_{\mathbf a}+\mathbf J {\mathbf c}+\mathbf J{{\mathbf W}}^{\operatorname{T}}\mathbf J_1{{\mathbf W}^{(1){\operatorname{T}}}}{\bm \mu}_{\mathbf f^{(1)}}. \end{align} By using \eqref{norm_B}, \eqref{constraint1}, and \eqref{constraint2}, we get \begin{align} &\|\mathbf q^{(1)}_1-\mathbf q^{(1)}_2-\mathbf q^{(1)}_3\|_2 \nonumber\\ =&\|\mathbf a_{P_{Y|X=x}}-\mathbf a_{P_Y}\nonumber\\ &+\tilde{\mathbf J}\left({\mathbf{W}}^{\operatorname{T}}\mathbf J_1\left({{\mathbf W}^{(1){\operatorname{T}}}}{\mathbf f}^{(1)}(x)+{\mathbf b}^{(1)}-\hat{\mathbf b}^{(1)}\right)+\mathbf b-\hat{\mathbf b}\right)\|_2\nonumber\\ \leq& \|\mathbf a_{P_{Y|X=x}}-\mathbf a_{P_Y}\|_2 \nonumber\\ &+\|\tilde{\mathbf J}\left({\mathbf{W}}^{\operatorname{T}}\mathbf J_1\left({{\mathbf W}^{(1){\operatorname{T}}}}{\mathbf f}^{(1)}(x)+{\mathbf b}^{(1)}-\hat{\mathbf b}^{(1)}\right)+\mathbf b-\hat{\mathbf b}\right)\|_2 \nonumber\\ \leq &O(\epsilon)+\big\|\tilde{\mathbf J}\big\|_2 \nonumber\\ &\times \big\|\left({\mathbf{W}}^{\operatorname{T}}\mathbf J_1\left({{\mathbf W}^{(1){\operatorname{T}}}}{\mathbf f}^{(1)}(x)+{\mathbf b}^{(1)}-\hat{\mathbf b}^{(1)}\right)+\mathbf b-\hat{\mathbf b}\right)\big\|_2\nonumber\\ =&O(\epsilon). \end{align} The last equality holds because \eqref{constraint1}, \eqref{constraint2}, and \eqref{T4e5} imply \begin{align} &\big\|\left({\mathbf{W}}^{\operatorname{T}}\mathbf J_1\left({{\mathbf W}^{(1){\operatorname{T}}}}{\mathbf f}^{(1)}(x)+{\mathbf b}^{(1)}-\hat{\mathbf b}^{(1)}\right)+\mathbf b-\hat{\mathbf b}\right)\big\|_2 \nonumber\\ =&O(\epsilon).
\end{align} In addition, using \eqref{T4e7} yields \begin{align}\label{T4e9} &D_L(\mathbf a_{P_{Y|X=x}} || \mathbf h(\mathbf{W}^{\operatorname{T}}\bm \psi({\mathbf W}^{(1)\operatorname{T}} {\mathbf f}^{(1)}(x)+{\mathbf b}^{(1)})+\mathbf b)) \nonumber\\ =&\frac{1}{2}(\mathbf R_L(\mathbf q^{(1)}_1-\mathbf q^{(1)}_2-\mathbf q^{(1)}_3))^{\operatorname{T}}(\mathbf R_L(\mathbf q^{(1)}_1-\mathbf q^{(1)}_2-\mathbf q^{(1)}_3))+o(\epsilon^2). \end{align} Now, we perform the same computation as in Appendix \ref{plemma2}: multiplying the above equation by $P_X(x)$ and taking the summation over $x \in \mathcal X$, we obtain \eqref{local_approximation_2}. \section{Main Results: Feature Extraction in \\ Deep Feedforward Neural Networks}\label{sec_3} \subsection{Local Geometric Analysis of the Output Layer}\label{feature_extraction} We consider the following reformulation of \eqref{reformed_problem_old} that focuses on the training of the output layer: \begin{align}\label{reformed_problem} \min_{\substack{\mathbf{W} \in \mathbb R^{k \times n},\\ \mathbf b \in \mathbb R^n,\mathbf f \in \Lambda}} ~\sum_{x \in \mathcal X} P_X(x) D_L(\mathbf a_{P_{Y|X=x}} || \mathbf h(\mathbf{W}^{\operatorname{T}}\mathbf f(x)+\mathbf b)), \end{align} where $\Lambda$ is the set of feature functions created by the input and hidden layers of the neural network. Recall that $\mathcal H^n$ is the image set of the vector-valued activation function $\mathbf h(\mathbf b)$ of the output layer. Because $\mathcal A_{P_Y}\subseteq \mathcal A\subseteq \mathcal H^n$, for any Bayes action $\mathbf a_{P_Y}\in\mathcal A_{P_Y}$ that solves \eqref{bayes}, there exists a bias $\tilde{\mathbf b}=[\tilde b_1,\ldots,\tilde b_n]^{\operatorname{T}}\in \mathbb R^n$ such that \begin{align}\label{BiasVector} \mathbf h(\tilde{\mathbf b})=[ h(\tilde b_1),\ldots,h(\tilde b_n)]^{\operatorname{T}}=\mathbf a_{P_Y}. \end{align} The following assumption is needed in our study.
\ignore{ \begin{assumption}\label{assumption1} The activation function $h$ is strictly increasing and continuously differentiable. \end{assumption} Moreover, according to the above arguments in \eqref{eq_continuouslydifferentiable}-\eqref{eq_alpha_i_bound}, if Assumption \ref{assumption1} is replaced by the following weaker Assumption \ref{assumption6}, then Lemma \ref{lemma1} can be also proven: } \begin{assumption}\label{assumption1} For each $i=1,\ldots, n$, there exist $\delta>0$ and $K>0$ such that for all $z\in (\tilde b_i-\delta, \tilde b_i+\delta)$, the activation function $h$ satisfies \begin{align}\label{eq_assumption1} \left| h(z) - h(\tilde b_i)\right|\geq K \left|z-\tilde b_i\right|. \end{align} \end{assumption} \begin{lemma}\label{assumption6to1} If $h$ is strictly increasing and continuously differentiable, then $h$ satisfies Assumption \ref{assumption1}. \end{lemma} \ifreport \begin{proof} See Appendix \ref{passumption6to1}. \end{proof} \else Due to space limitation, all the proofs are relegated to our technical report \cite{TechnicalReport}.\!\! \fi It is easy to see that the leaky ReLU activation function \cite[pp. 187-188]{goodfellow2016deep} satisfies Assumption \ref{assumption1}. In addition, the hyperbolic tangent function and the sigmoid function \cite[p. 189]{goodfellow2016deep} also satisfy Assumption \ref{assumption1}, because they are strictly increasing and continuously differentiable. Let $\mathcal P^{\mathcal Y}$ be the set of all probability distributions on $\mathcal Y$ and $\text{relint}(\mathcal P^{\mathcal Y})$ be the relative interior of the set $\mathcal P^{\mathcal Y}$. 
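As a quick numerical aside (illustrative, not part of the analysis), the claim that the leaky ReLU satisfies Assumption \ref{assumption1} can be checked directly: with the smaller of its two slopes playing the role of $K$, the bound \eqref{eq_assumption1} holds on a whole neighborhood of any reference point. The slope $0.01$ and the reference point below are arbitrary choices.

```python
import numpy as np

def leaky_relu(z, alpha=0.01):
    # slope 1 for z > 0, slope alpha for z <= 0
    return np.where(z > 0, z, alpha * z)

K = 0.01                 # the smaller slope serves as the constant K
b_tilde = -0.5           # an arbitrary reference point
zs = np.linspace(b_tilde - 1.0, b_tilde + 1.0, 2001)

lhs = np.abs(leaky_relu(zs) - leaky_relu(b_tilde))
rhs = K * np.abs(zs - b_tilde)
# |h(z) - h(b_tilde)| >= K |z - b_tilde| on the whole neighborhood
assert np.all(lhs >= rhs - 1e-12)
```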
\begin{assumption}\label{assumption5} If two distributions $P_Y, Q_Y \in \text{\normalfont{relint}}(\mathcal P^{\mathcal Y})$ are close to each other such that \begin{align} \sum_{y \in \mathcal Y} (P_Y(y)-Q_Y(y))^2\leq \gamma^2, \end{align} then for any $\mathbf a_{P_Y} \in \mathcal A_{P_Y}$, there exists an $\mathbf a_{Q_Y} \in \mathcal A_{Q_Y}$ such that \begin{align} \|\mathbf a_{P_Y}-\mathbf a_{Q_Y}\|_2=O(\gamma). \end{align} \end{assumption} Assumption \ref{assumption5} characterizes the differentiability of the Bayes action $\mathbf a_{P_Y}$ with respect to $P_Y$. The loss functions in \eqref{log-loss} and \eqref{mean-square-error} satisfy Assumption \ref{assumption5}, as explained later in Section \ref{examples}. Because of the universal function approximation properties of deep feedforward neural networks \cite{cybenko1989approximation, hornik1989multilayer, goodfellow2016deep}, we make the following assumption. \begin{assumption}\label{assumption3} For given $\epsilon >0$ and $\mathbf a_{P_{Y|X=x}}\in\mathcal A_{P_{Y|X=x}}$ with $x\in \mathcal X$, there exists an optimal solution $(\mathbf f, \mathbf W, \mathbf b)$ to \eqref{reformed_problem} such that for all $x\in \mathcal X$ \begin{align}\label{eq_assumption3} \| \mathbf a_{P_{Y|X=x}} - \mathbf h({\mathbf{W}}^{\operatorname{T}}{\mathbf f}(x)+{\mathbf b})\|_2^2 \leq \epsilon^2. \end{align} \end{assumption} By Assumption \ref{assumption3}, the neural network can closely approximate the vector-valued function $x\mapsto \mathbf a_{P_{Y|X=x}}$. 
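For intuition on Assumption \ref{assumption5} (an illustrative sketch, not part of the analysis), consider the mean-square-error case, where the Bayes action is the mean of $Y$: by the Cauchy--Schwarz inequality, $|\mathbb E_{P}[Y]-\mathbb E_{Q}[Y]|\leq \|\mathbf y\|_2\,\gamma$ whenever the distributions are within $l_2$ distance $\gamma$. The support and distributions below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
y_vals = np.array([0.0, 1.0, 2.0, 3.0])    # support of Y (illustrative)
P = np.array([0.1, 0.4, 0.3, 0.2])

# a nearby distribution Q with ||P - Q||_2 = gamma
delta = rng.standard_normal(4)
delta -= delta.mean()                      # keeps Q summing to 1
delta *= 1e-3 / np.linalg.norm(delta)
Q = P + delta
gamma = np.linalg.norm(P - Q)

# under squared loss the Bayes action is the mean of Y
a_P, a_Q = P @ y_vals, Q @ y_vals
# Cauchy-Schwarz gives |a_P - a_Q| <= ||y||_2 * gamma, i.e. O(gamma)
assert abs(a_P - a_Q) <= np.linalg.norm(y_vals) * gamma + 1e-15
```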
\begin{definition}For a given $\epsilon >0$, two random variables $X$ and $Y$ are called {$\epsilon$-dependent}, if the $\chi^2$-mutual information $I_{\chi^2}(X;Y)$ is no more than $\epsilon^2$, given by \begin{align}\label{weak-dependent} I_{\chi^2}(X;Y)=D_{\chi^2}(P_{X,Y} || P_X \otimes P_Y)\leq \epsilon^2, \end{align} where \begin{align} D_{\chi^2}(P_X||Q_X)=\int_{\mathcal X} \frac{(P(x)-Q(x))^2}{Q^2(x)} d Q(x) \end{align} is Neyman's $\chi^2$-divergence \cite{polyanskiy2014lecture}. \end{definition} Motivated by the seminal work \cite{huang2019information} and \cite{huang2019universal}, we consider the following assumption. \begin{assumption}\label{assumption4} For a given $\epsilon >0$, $X$ and $Y$ are $\epsilon$-dependent. \end{assumption} By using the above assumptions, we can find a local geometric region \eqref{constraint} that is useful for our analysis. \begin{lemma}\label{lemma1} For a sufficiently small $\epsilon > 0$, if Assumptions \ref{assumption1}-\ref{assumption4} hold, then there exists an optimal solution $(\mathbf f, \mathbf{W}, \mathbf b)$ to \eqref{reformed_problem} such that for all $x\in\mathcal X$ and $i=1, \ldots, n$ \begin{align}\label{constraint} {\mathbf w_i}^{\operatorname{T}}\mathbf f(x)+ b_i-\tilde b_i=O(\epsilon). \end{align} \end{lemma} \ifreport \begin{proof} See Appendix \ref{plemma1}. \end{proof} \else \fi \ignore{[Proof Sketch] By Assumptions \ref{assumption5} and \ref{assumption4}, there exists $a_{P_Y}\in \mathcal A_{P_Y}$ and $a_{P_{Y|X=x}}\in \mathcal A_{P_Y}$ such that $a_{P_Y}$ and $a_{P_{Y|X=x}}$ are within an $O(\epsilon)$ distance from each other for all $x\in \mathcal X$. By Assumption \eqref{assumption3}, there exists an optimal solution to \eqref{reformed_problem} such that $\mathbf h({\mathbf{W}}^{\operatorname{T}}{\mathbf f}(x)+{\mathbf b})$ and $\mathbf a_{P_{Y|X=x}}$ are within an $O(\epsilon)$ distance from each other for all $x\in \mathcal X$. 
Hence, $\mathbf h({\mathbf{W}}^{\operatorname{T}}{\mathbf f}(x)+{\mathbf b})$ and $\mathbf a_{P_Y}$ are within an $O(\epsilon)$ distance. Because $\mathcal A \subseteq H^n$, there exists a bias $\tilde{\mathbf b}$ such that $\mathbf h(\tilde{\mathbf{b}}) = \mathbf a_{P_Y}$. Finally, by Assumption \ref{assumption1}, we can find a lower bound of the gradient $h'(\tilde{b}_i)$ in the neighborhood of $\mathbf b= \tilde{\mathbf b}$, which implies \eqref{constraint}. \ifreport For details, see Appendix \ref{plemma1}.} For any feature $\mathbf f(x)\in\mathbb R^{k}$, define a matrix $\mathbf{\Xi}_{\mathbf f} \in \mathbb R^{k \times |\mathcal X|}$ as \begin{align}\label{matrixf} \mathbf{\Xi}_{\mathbf f}=[\bm \xi_{\mathbf f}(1), \ldots, \bm \xi_{\mathbf f}(|\mathcal X|)], \end{align} where \begin{align}\label{vectorf} \bm \xi_{\mathbf f}(x)=&\sqrt{P_X(x)} \left(\mathbf f(x)-\bm \mu_{\mathbf f}\right), \\ \label{meanf} \bm \mu_{\mathbf f}=&\sum_{x \in \mathcal X} P_X(x) \mathbf f(x). \end{align} In addition, define the following matrix $\mathbf B \in \mathbb R^{n \times |\mathcal X|}$ based on the Bayes actions $\mathbf a_{P_{Y|X=x}}$ for $x\in \mathcal X$: \begin{align}\label{matrixB} \mathbf{B}=[\bm{\beta}_Y(1), \ldots, \bm \beta_Y(|\mathcal X|)], \end{align} where \begin{align}\label{vectorB} \bm \beta_Y(x)=&\sqrt{P_X(x)} \left(\mathbf a_{P_{Y|X=x}}-\bm \mu_{\mathbf a}\right),\\ \label{mu_a} \bm \mu_{\mathbf a}=&\sum_{x \in \mathcal X} P_X(x) \mathbf a_{P_{Y|X=x}}. \end{align} \begin{assumption}\label{assumption2} The function $\mathbf a \mapsto L(y, \mathbf a)$ is twice continuously differentiable. \end{assumption} The Hessian matrix $\mathbf{M}_L$ of the function $\mathbf a \mapsto \mathbb E_{Y \sim P_Y}[L(Y, \mathbf a)]$ at the point $\mathbf a= \mathbf a_{P_Y}$ is \begin{align}\label{matrixM} \mathbf{M}_L=\frac{\partial^2 \mathbb E_{Y \sim P_Y}[L(Y, \mathbf a)]}{\partial \mathbf a \partial \mathbf a^{\operatorname{T}}}\bigg|_{\mathbf a=\mathbf a_{P_Y}}. 
\end{align} Because $\mathbf a_{P_Y}$ is an optimal solution to \eqref{bayes}, $\mathbf{M}_L$ is positive semi-definite. Hence, it has a Cholesky decomposition $\mathbf{M}_L=\mathbf{R}_L^{\operatorname{T}}\mathbf{R}_L.$ The Jacobian matrix of $\mathbf h(\mathbf b)$ at the point $\mathbf b=\tilde{\mathbf b}$ is \begin{align}\label{matrixJ} \mathbf J= \frac{\partial \mathbf h(\mathbf b)}{\partial \mathbf b^{\operatorname{T}}}\bigg|_{\mathbf b=\tilde{\mathbf b}}. \end{align} \begin{lemma}\label{lemma2} If Assumptions \ref{assumption5}, \ref{assumption4}, and \ref{assumption2} are satisfied, then in the local analysis regime \eqref{constraint}, the objective function in \eqref{reformed_problem} can be expressed as \begin{align}\label{approximation1} & \sum_{x \in \mathcal X} P_X(x) D_L(\mathbf a_{P_{Y|X=x}}||\mathbf h(\mathbf{W}^{\operatorname{T}}\mathbf f(x)+\mathbf b))\nonumber\\ =&\frac{1}{2} \|\mathbf{\tilde B}- \mathbf \Xi_{\mathbf W} \mathbf{\Xi}_{\mathbf f}\|_{F}^2+\frac{1}{2}\eta(\mathbf d, \mathbf f)+o(\epsilon^2), \end{align} where $\mathbf{\tilde B}=\mathbf{R}_L\mathbf{B}$, \begin{align}\label{weight_change} \mathbf \Xi_{\mathbf W}&=\mathbf{R}_L\mathbf{J}\mathbf{W}^{\operatorname{T}},\\\label{bias_change} \mathbf d&=\mathbf b-\tilde{\mathbf b}, \\ \label{eta} \eta(\mathbf d, \mathbf f)&=(\mathbf a_{P_Y}-\bm \mu_{\mathbf a}+\mathbf J \mathbf d+\mathbf J \mathbf W^{\operatorname{T}}\bm \mu_{\mathbf f})^{\operatorname{T}}\mathbf{M}_L\nonumber\\ &~~~\times (\mathbf a_{P_Y}-\bm \mu_{\mathbf a}+\mathbf J \mathbf d+\mathbf J \mathbf W^{\operatorname{T}}\bm \mu_{\mathbf f}). \end{align} \end{lemma} \ifreport \begin{proof} See Appendix \ref{plemma2}. 
\end{proof} \else {} \fi In the local analysis regime, the training of $(\mathbf f, \mathbf W, \mathbf b)$ in \eqref{reformed_problem} can be expressed as the following optimization problem of $(\mathbf \Xi_{\mathbf W}, \mathbf{\Xi}_{\mathbf f}, \bm \mu_{\mathbf f}, \mathbf d)$: \begin{align}\label{approximate_problem} \min_{\substack{\mathbf \Xi_{\mathbf W}, \mathbf{\Xi}_{\mathbf f}, \bm \mu_{\mathbf f}, \mathbf d}} \frac{1}{2} \|\mathbf{\tilde B}- \mathbf \Xi_{\mathbf W} \mathbf{\Xi}_{\mathbf f}\|_{F}^2+\frac{1}{2}\eta(\mathbf d, \mathbf f). \end{align} When $(\mathbf{\Xi}_{\mathbf f}, \bm \mu_{\mathbf f})$ are fixed, the optimal $({\mathbf \Xi}_{\mathbf W}^*, {\mathbf d}^*)$ are determined by the following theorem. \begin{theorem}\label{theorem1} For fixed $\mathbf{\Xi}_{\mathbf f}$ and $\bm \mu_{\mathbf f}$, the optimal ${\mathbf \Xi}_{\mathbf W}^*$ to minimize \eqref{approximate_problem} is given by \begin{align}\label{optimal_weight_1} {\mathbf \Xi}_{\mathbf W}^*=\tilde{\mathbf B} \mathbf {\Xi}_{\mathbf f}^{\operatorname{T}}(\mathbf \Xi_{\mathbf f}\mathbf \Xi_{\mathbf f}^{\operatorname{T}})^{\operatorname{-1}}, \end{align} and the optimal bias ${\mathbf d}^*$ is expressed as \begin{align}\label{optimal_bias} {\mathbf d}^*=-{\mathbf W}^{\operatorname{T}}\bm \mu_{\mathbf f}+\mathbf{J}^{\operatorname{-1}}(\bm \mu_{\mathbf a}-\mathbf a_{P_Y}).\end{align} \end{theorem} \ifreport \begin{proof} See Appendix \ref{ptheorem1}. \end{proof} \else {} \fi By Theorem \ref{theorem1}, the rows of ${\mathbf \Xi}_{\mathbf W}^*$ are obtained by projecting the rows of $\tilde{\mathbf B}$ onto the subspace spanned by the rows of $\mathbf \Xi_{\mathbf f}$. The optimal bias $\mathbf d^*$ cancels out the effects of the mean feature $\bm \mu_{\mathbf f}$ and the mean difference $\bm \mu_{\mathbf a} -\mathbf a_{P_Y}$ between the Bayes actions $\mathbf a_{P_{Y|X=x}}$ and $\mathbf a_{P_Y}$.
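As an informal numerical sanity check of the least-squares structure in \eqref{optimal_weight_1} (not part of the formal development; the random matrices below merely stand in for $\tilde{\mathbf B}$ and $\mathbf \Xi_{\mathbf f}$), the closed form can be compared against a generic least-squares solver:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, card_X = 4, 2, 6                      # output dim, feature dim, |X|
B_tilde = rng.standard_normal((n, card_X))  # stand-in for R_L B
Xi_f = rng.standard_normal((k, card_X))     # stand-in for the feature matrix Xi_f

# Closed form of Theorem 1: Xi_W* = B_tilde Xi_f^T (Xi_f Xi_f^T)^{-1}
Xi_W_star = B_tilde @ Xi_f.T @ np.linalg.inv(Xi_f @ Xi_f.T)

# The same minimizer via a direct least-squares solve:
# min_W ||B_tilde - W Xi_f||_F^2  <=>  min_W ||Xi_f^T W^T - B_tilde^T||_F^2
Xi_W_lstsq = np.linalg.lstsq(Xi_f.T, B_tilde.T, rcond=None)[0].T

assert np.allclose(Xi_W_star, Xi_W_lstsq)
```

Both routes produce the projection of the rows of $\tilde{\mathbf B}$ onto the row space of $\mathbf \Xi_{\mathbf f}$.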
The optimal weight $\mathbf W^*$ and bias $\mathbf b^*$ can be derived by using \eqref{weight_change}-\eqref{bias_change} and \eqref{optimal_weight_1}-\eqref{optimal_bias}. When $(\mathbf \Xi_{\mathbf W}, \mathbf d)$ are fixed and the hidden layers have sufficient expression power, the optimal $({\mathbf \Xi}_{\mathbf f}^*, \bm \mu_{\mathbf f}^*)$ are given by the following theorem. \begin{theorem}\label{theorem2} For fixed $\mathbf \Xi_{\mathbf W}$ and $\mathbf d$, the optimal ${\mathbf{\Xi}}_{\mathbf f}^*$ to minimize \eqref{approximate_problem} is given by \begin{align}\label{optimal_feature} {\mathbf \Xi}_{\mathbf f}^*=(\mathbf \Xi_{\mathbf W}^{\operatorname{T}}\mathbf \Xi_{\mathbf W})^{\operatorname{-1}}\mathbf \Xi_{\mathbf W}^{\operatorname{T}}\tilde{\mathbf B}, \end{align} and the optimal mean ${\bm \mu}_{\mathbf f}^*$ is given by \begin{align}\label{optimal_mean} {\bm \mu}_{\mathbf f}^*=-(\mathbf \Xi_{\mathbf W}^{\operatorname{T}}\mathbf \Xi_{\mathbf W})^{\operatorname{-1}}\mathbf \Xi_{\mathbf W}^{\operatorname{T}}(\mathbf a_{P_Y}-\bm \mu_{\mathbf a}+\mathbf J\mathbf d). \end{align} \end{theorem} \ifreport \begin{proof} See Appendix \ref{ptheorem2}. \end{proof} \else {} \fi By Theorem \ref{theorem2}, the columns of ${\mathbf \Xi}_{\mathbf f}^*$ are obtained by projecting the columns of $\tilde{\mathbf B}$ onto the subspace spanned by the columns of ${\mathbf \Xi}_{\mathbf W}$. The optimal mean $\bm \mu_{\mathbf f}^*$ cancels out the effects of $\mathbf d$ and $\bm \mu_{\mathbf a} -\mathbf a_{P_Y}$. The optimal feature $\mathbf f^*(x)$ can be derived by using \eqref{matrixf}-\eqref{meanf} and \eqref{optimal_feature}-\eqref{optimal_mean}.
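Theorems \ref{theorem1} and \ref{theorem2} can be executed alternately for fixed data. The following informal sketch (numpy, random stand-ins for the paper's matrices) illustrates that this alternation converges to the best rank-$k$ approximation error of $\tilde{\mathbf B}$, i.e., the sum of the squared tail singular values:

```python
import numpy as np

rng = np.random.default_rng(1)
n, card_X, k = 5, 8, 2
B_tilde = rng.standard_normal((n, card_X))   # random stand-in for R_L B

Xi_f = rng.standard_normal((k, card_X))      # random initial feature matrix
for _ in range(500):
    # Theorem 1: optimal Xi_W for fixed Xi_f
    Xi_W = B_tilde @ Xi_f.T @ np.linalg.inv(Xi_f @ Xi_f.T)
    # Theorem 2: optimal Xi_f for fixed Xi_W
    Xi_f = np.linalg.inv(Xi_W.T @ Xi_W) @ Xi_W.T @ B_tilde

# The alternating updates approach the best rank-k approximation error,
# which equals the sum of the squared tail singular values of B_tilde.
s = np.linalg.svd(B_tilde, compute_uv=False)
als_err = np.linalg.norm(B_tilde - Xi_W @ Xi_f, 'fro')**2
assert abs(als_err - (s[k:]**2).sum()) < 1e-6
```

With a generic random initialization, each alternating step is a least-squares projection, so the objective is non-increasing along the iterations.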
The singular value decomposition of $\tilde{\mathbf B}$ can be written as \begin{align} \tilde{\mathbf B}=\mathbf U \mathbf \Sigma \mathbf V^{\operatorname{T}}, \end{align} where $\mathbf \Sigma=\text{Diag}(\sigma_1, \ldots, \sigma_K)$ is a diagonal matrix with $K=\min(n, |\mathcal X|)$ singular values $\sigma_1\geq \sigma_2 \geq \ldots \geq \sigma_K=0$, and $\mathbf U$ and $\mathbf V$ are composed of the $K$ leading left and right singular vectors of $\tilde{\mathbf B}$, respectively. Denote \begin{align}\label{px} \sqrt{\mathbf p_X}=[\sqrt{P_X(1)}, \ldots, \sqrt{P_X(|\mathcal X|)}]^{\operatorname{T}}. \end{align} Because $\tilde{\mathbf B} \sqrt{\mathbf p_X}=0$ and $\|\sqrt{\mathbf p_X}\|_2=1$, $\sqrt{\mathbf p_X}$ is the right singular vector of $\tilde{\mathbf B}$ for the singular value $\sigma_K=0$. When $({\mathbf{\Xi}}_{\mathbf f}, \bm \mu_{\mathbf f}, {\mathbf \Xi}_{\mathbf W}, \mathbf d)$ are all designable, the optimal solutions are characterized in the following theorem. \begin{theorem}\label{theorem3} If $k\leq \min(n, |\mathcal X|)$, then any $({\mathbf{\Xi}}_{\mathbf f}^*, {\mathbf \Xi}_{\mathbf W}^*)$ satisfying \eqref{optimal_pair1} jointly minimizes \eqref{approximate_problem}: \begin{align}\label{optimal_pair1} {\mathbf \Xi}_{\mathbf W}^*{\mathbf{\Xi}}_{\mathbf f}^*=\mathbf{U}_{k}\mathbf{\Sigma}_{k}\mathbf V_k^{\operatorname{T}}, \end{align} where $\mathbf{\Sigma}_{k}=\textbf{\normalfont{Diag}}(\sigma_1, \ldots, \sigma_k)$, $\mathbf U_k=[\mathbf u_1, \ldots, \mathbf u_k]$, and $\mathbf V_k=[\mathbf v_1, \ldots, \mathbf v_k]$.
Moreover, any bias ${\mathbf d}^*$ and mean ${\bm \mu}_{\mathbf f}^*$ satisfying \eqref{optimal_bias_mean} jointly minimizes \eqref{approximate_problem}\normalfont{:} \begin{align}\label{optimal_bias_mean} \mathbf{J}\left({\mathbf d}^*+{\mathbf W}^{\operatorname{T}}{\bm \mu}_{\mathbf f}^*\right)=\bm \mu_{\mathbf a}-\mathbf a_{P_Y}.\end{align} \end{theorem} \ifreport \begin{proof} See Appendix \ref{ptheorem3}. \end{proof} \else {} \fi According to Theorem \ref{theorem3}, the optimal $({\mathbf{\Xi}}_{\mathbf f}^*, {\mathbf \Xi}_{\mathbf W}^*)$ are given by the low-rank approximation of $\tilde{\mathbf B}$, which can be derived by using the power iteration algorithm \cite{bulirsch2002introduction}, or equivalently, by executing \eqref{optimal_weight_1} and \eqref{optimal_feature} iteratively. The optimal $(\mathbf d^*,\bm \mu_{\mathbf f}^*)$ cancel out the effect of $\bm \mu_{\mathbf a} - \mathbf a_{P_Y}$. The optimal ${\mathbf \Xi}_{\mathbf f}^*$ in Theorems \ref{theorem2}-\ref{theorem3} can be achieved only when the hidden layers have sufficient expression power. Nonetheless, ${\mathbf{\Xi}}_{\mathbf f}^*$ plays an important role in the analysis of the hidden layers, as explained in the next subsection. \section{Model and Problem} \subsection{Deep Feedforward Neural Network Model} Consider the deep feedforward neural network illustrated in Figure \ref{fig:DNN}, which consists of one input layer, $m$ hidden layers, and one output layer. The input layer admits an input variable $x\in \mathcal X$ and feeds a vector \begin{align}\label{input} \mathbf f^{(0)}(x) = [h^{(0)}_1(x), \ldots, h_{k_0}^{(0)}(x)]^{\operatorname{T}} \in \mathbb R^{k_0} \end{align} to the first hidden layer, where $k_0$ is the number of neurons in the input layer and $h^{(0)}_j: \mathcal X\mapsto \mathbb R$ is the activation function of the $j$-th neuron in the input layer. 
For all $i=1,\ldots,m$, the $i$-th hidden layer admits $\mathbf f^{(i-1)}(x)\in \mathbb R^{k_{i-1}}$ from the previous layer and constructs a vector $\mathbf f^{(i)}(x)\in \mathbb R^{k_{i}}$, usually called a \emph{feature}, given by \begin{align}\label{featurei} &\mathbf f^{(i)}(x)\nonumber\\ =&\!\left[h^{\!(i)}\!(\mathbf w_1^{(i)\!\operatorname{T}}\mathbf f^{(i-1)}\!(x)\!+\!b^{(i)}_1), \ldots, h^{\!(i)}\!(\mathbf w_{k_i}^{(i)\!\operatorname{T}}\mathbf f^{(i-1)}\!(x)\!+\!b^{(i)}_{k_i})\right]^{\operatorname{T}}\!\!, \end{align} where $k_i$ is the number of neurons in the $i$-th hidden layer, $h^{(i)}: \mathbb R\mapsto \mathbb R$ is the activation function of each neuron in the $i$-th hidden layer, $\mathbf w_j^{(i)}\in \mathbb R^{k_{i-1}}$ and $b_j ^{(i)}\in \mathbb R$ are the weight vector and bias of the $j$-th neuron in the $i$-th hidden layer, respectively. Denote $\mathbf W^{(i)} =[\mathbf w^{(i)}_1, \ldots, \mathbf w^{(i)}_{k_{i}}]$ and $\mathbf b^{(i)}=$ $[b^{(i)}_1, \ldots, b^{(i)}_{k_i}]^{\operatorname{T}}$, then \eqref{featurei} can be expressed compactly as \begin{align} \mathbf f^{(i)}(x)=\mathbf h^{(i)} \left(\mathbf W^{(i)\operatorname{T}} \mathbf f^{(i-1)}(x) + \mathbf b^{(i)}\right), \end{align} where $\mathbf h^{(i)}: \mathbb R^{k_i}\mapsto \mathbb R^{k_i}$ is a vector-valued function determined by \eqref{featurei}. For notational simplicity, let us denote $k= k_m$ and $\mathbf f(x)=\mathbf f^{(m)}(x)$. 
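The layer-by-layer recursion above can be sketched in a few lines; this is an illustrative numpy implementation (with $\tanh$ as an assumed activation and arbitrary layer widths), not the paper's training procedure:

```python
import numpy as np

def hidden_features(f0, weights, biases, act=np.tanh):
    """Apply f^(i)(x) = h^(i)(W^(i)^T f^(i-1)(x) + b^(i)) for i = 1, ..., m."""
    f = f0
    for W, b in zip(weights, biases):
        f = act(W.T @ f + b)
    return f  # f(x) = f^(m)(x), the feature passed to the output layer

rng = np.random.default_rng(2)
k0, k1, k2 = 6, 4, 3                         # widths of input and two hidden layers
weights = [rng.standard_normal((k0, k1)), rng.standard_normal((k1, k2))]
biases = [rng.standard_normal(k1), rng.standard_normal(k2)]

f0 = rng.standard_normal(k0)                 # f^(0)(x) produced by the input layer
f = hidden_features(f0, weights, biases)
assert f.shape == (k2,)                      # feature dimension k = k_m
```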
The output layer admits $\mathbf f(x)\in \mathbb R^{k}$ from the last hidden layer and generates an output vector $\mathbf a(x) \in \mathcal H^n$, called an \emph{action}, which is determined by \begin{align}\label{case1} \mathbf a(x) =& {\mathbf h}(\mathbf {W}^{\operatorname{T}}\mathbf f(x)+ \mathbf b)\nonumber\\ =& [h(\mathbf w_{1}^{\operatorname{T}} \mathbf f(x) + b_1), \cdots, h(\mathbf w_{n}^{\operatorname{T}} \mathbf f(x) + b_n)]^{\operatorname{T}}, \end{align} where $n$ is the number of neurons in the output layer, $h: \mathbb R \mapsto \mathcal H$ is the activation function of each neuron in the output layer, $\mathcal H$ is the image set of $h$ with $\mathcal H\subseteq \mathbb R$, $\mathbf w_j\in \mathbb R^{k}$ and $b_j \in \mathbb R$ are the weight vector and bias of the $j$-th neuron in the output layer, respectively, $\mathbf W =[\mathbf w_1, \ldots, \mathbf w_n]$ and $\mathbf b = [b_1, \ldots, b_n]^{\operatorname{T}}$. \subsection{Neural Network based Supervised Learning Problem} The above deep feedforward neural network is used to solve a supervised learning problem. We focus on a class of popular supervised learning algorithms called \emph{empirical risk minimization (ERM)}. In ERM algorithms, the weights and biases of the neural network are trained to construct a vector-valued function $\mathbf a: \mathcal X \mapsto \mathcal A$ that outputs an action $\mathbf a(x)\in \mathcal A$ for each $x \in \mathcal X$, where $\mathcal A \subseteq \mathcal H^n \subseteq \mathbb R^n$. Consider two random variables $X\in\mathcal X$ and $Y\in\mathcal Y$, where $\mathcal X$ and $\mathcal Y$ are finite sets. The performance of an ERM algorithm is measured by a loss function $L: \mathcal Y \times \mathcal A \mapsto \mathbb R$, where $L(y,\mathbf a(x))$ is the incurred loss if action $\mathbf a(x)$ is generated by the neural network when $Y=y$. For example, in neural network based maximum likelihood classification, also known as \emph{softmax regression}, the loss function is \begin{align}\label{log-loss} L_{\log}(y,\mathbf a) = -\log\left( \frac{a_y}{\sum_{y'\in \mathcal Y} a_{y'}}\right), \end{align} which is the negative log-likelihood of a distribution $Q_Y$ generated by the neural network, where $Q_Y(y)= {a_y}/{\sum_{y'\in \mathcal Y} a_{y'}}$, $a_y>0$ for all $y\in \mathcal Y$, and the dimension of $\mathbf a$ is $n=|\mathcal Y|$. In neural network based minimum mean-square estimation, the loss function is one half of the mean-square error between $\mathbf y \in \mathbb R^n$ and an estimate $\hat{\mathbf y}=\mathbf a(x) \in \mathbb R^n$ constructed by the neural network, i.e., \begin{align}\label{mean-square-error} L_2(\mathbf y,\hat{\mathbf y}) = \frac{1}{2} \|\mathbf y-\hat{\mathbf y} \|_2^2.
\end{align} Let $P_{X,Y}$ be the empirical joint distribution of $X$ and $Y$ in the training data, $P_X$ and $P_Y$ be the associated marginal distributions, which satisfy $P_X(x)>0$ for all $x\in\mathcal X$ and $P_Y(y)>0$ for all $y\in\mathcal Y$. The objective of ERM algorithms is to solve the following neural network training problem: \begin{align}\label{main_problem} \min_{\substack{(\mathbf W, \mathbf b),\\~(\mathbf W^{(i)}, \mathbf b^{(i)}), i=1, \ldots, m}} \mathbb E_{X,Y \sim P_{X, Y}}[L\left(Y, \mathbf a(X)\right)], \end{align} where $\mathbf a(x)$ is subject to \eqref{input}-\eqref{case1}, because $\mathbf a(x)$ is the action generated by the neural network. \subsection{Problem Reformulation} Denote $\Phi = \{f: \mathcal X\mapsto \mathcal A\}$ as the set of all functions from $\mathcal X$ to $\mathcal A$. Any action function $\mathbf a(x)$ produced by the neural network, i.e., any function satisfying \eqref{input}-\eqref{case1}, belongs to $\Phi$, whereas some functions in $\Phi$ cannot be constructed by the neural network. By relaxing the set of feasible action functions in \eqref{main_problem} to $\Phi$, we derive the following lower bound of \eqref{main_problem}: \begin{align}\label{lower_bound1} &\min_{\mathbf a \in \Phi} \mathbb E_{X,Y \sim P_{X, Y}}[L(Y, \mathbf a(X))] \\ =&\sum_{x \in \mathcal X}P_X(x) \min_{\mathbf a(x) \in \mathcal A} \mathbb E_{Y \sim P_{Y|X=x}}[L(Y, \mathbf a(x))],\label{lower_bound} \end{align} where \eqref{lower_bound1} is decomposed into a sequence of separable optimization problems in \eqref{lower_bound}, each optimizing the action $\mathbf a(x)\in \mathcal A$ for a given $x\in\mathcal X$. Let $\mathcal A_{P_Y}\subseteq \mathcal A$ denote the set of optimal solutions to the following problem: \begin{align}\label{bayes} \mathcal A_{P_Y} =\arg\min_{\mathbf a \in \mathcal A} \mathbb E_{Y \sim P_Y}[L(Y, \mathbf a)] \end{align} and use $\mathbf a_{P_Y}$ to denote an element of $\mathcal A_{P_Y}$, which is usually called a \emph{Bayes action}. Define the discrepancy \begin{align}\label{L-divergence} D_L(\mathbf a_{P_Y}||\mathbf a)=\mathbb E_{Y \sim P_{Y}}[L(Y, \mathbf a)] -\mathbb E_{Y \sim P_{Y}}[L(Y, \mathbf a_{P_Y})]. \end{align} According to \eqref{bayes} and \eqref{L-divergence}, $D_L(\mathbf a_{P_Y}||\mathbf a)\geq 0$ for all $\mathbf a\in \mathcal A$, where equality is achieved if and only if $\mathbf a \in \mathcal A_{P_Y}$.
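For the log-loss \eqref{log-loss}, any action proportional to $P_Y$ is a Bayes action, and the discrepancy \eqref{L-divergence} specializes to the KL divergence. A brief numerical check (numpy, with an arbitrary pair of distributions chosen for illustration):

```python
import numpy as np

def log_loss(y, a):
    # L_log(y, a) = -log(a_y / sum_{y'} a_{y'})
    return -np.log(a[y] / a.sum())

def expected_loss(P, a):
    # E_{Y ~ P}[L_log(Y, a)]
    return sum(P[y] * log_loss(y, a) for y in range(len(P)))

P = np.array([0.5, 0.3, 0.2])   # empirical distribution P_Y
Q = np.array([0.4, 0.4, 0.2])   # distribution induced by some action a

# D_L(a_P || a_Q) with a_P = P (a Bayes action) and a_Q = Q
D = expected_loss(P, Q) - expected_loss(P, P)
assert abs(D - (P * np.log(P / Q)).sum()) < 1e-12   # equals KL(P || Q)
assert D >= 0

# Scaling an action leaves the log-loss unchanged, so 3*P is also Bayes
assert abs(expected_loss(P, 3 * P) - expected_loss(P, P)) < 1e-12
```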
{When $\mathbf a=\mathbf a_{Q_Y}$, $D_L(\mathbf a_{P_Y}||\mathbf a_{Q_Y})$ is a generalized divergence between $P_Y$ and $Q_Y$ \cite{farnia2016minimax,grunwald2004game}.} By subtracting the lower bound \eqref{lower_bound} from \eqref{main_problem}, we obtain the following problem that is equivalent to \eqref{main_problem}: \begin{align}\label{reformed_problem_old} \min_{\substack{(\mathbf W, \mathbf b),\\~(\mathbf W^{(i)}, \mathbf b^{(i)}), i=1, \ldots, m}} \sum_{x \in \mathcal X} P_X(x) D_L(\mathbf a_{P_{Y|X=x}} || \mathbf a(x)), \end{align} where $\mathbf a_{P_{Y|X=x}}\in \mathcal A_{P_{Y|X=x}}$ is a Bayes action associated with the conditional empirical distribution $P_{Y|X=x}$ and $\mathbf a(x)$ is subject to \eqref{input}-\eqref{case1}. \section{Interpretation of Feature Generated in Neural Network} In practice, hidden layers of the neural network may not generate the optimal feature $f^*(x)$ obtained in \eqref{optimal_feature}. If $t(x)=[t_1(x), \ldots, t_m(x)] \in \mathbb R^m$ is the input of the last hidden layer, then the feature is $f(x)=g_{\mathbf{V}, c}(t(x))=[g(t(x)^{\operatorname{T}}v_1+c_1), \ldots, g(t(x)^{\operatorname{T}}v_k+c_k)]$, where $g(\cdot)$ is the activation function of the last hidden layer, and $v_i=[v_{i, 1}, \ldots, v_{i, m}] \in \mathbb R^m$ and $c_i$ are the weight vector and bias, respectively, that connect the $i$-th node of the last hidden layer to the input $t(x)$.
To quantify the closeness between any feature $f(x)$ and the optimal feature $f^*(x)$ obtained from \eqref{optimal_feature}, we consider the excess loss \begin{align} \mathbb E_{X,Y \sim P_{X, Y}}\bigg[L\big(Y, h_{W, b}(f(x))\big)-L\big(Y, h_{W, b}(f^*(x))\big)\bigg]. \end{align} To interpret the feature generated by the neural network, we fix the weight and bias $(W, b)$ of the output layer and consider the problem of designing the weights $\mathbf{V}=[v_1, \ldots, v_k]$ and bias $c=[c_1, \ldots, c_k]$ that minimize this excess loss for any given input $t(x)$. We assume that $\mathbb E[t(X)]=0$. Define matrices \begin{align} \mathbf{\Xi}_{t}=&[\sqrt{P_X(1)}t(1), \ldots, \sqrt{P_X(|\mathcal X|)}t(|\mathcal X|)], \\ \mathbf{J}_1=&\text{diag}(g'(c_1), \ldots, g'(c_k)). \end{align} \begin{theorem}\label{theorem4} If the conditions of Lemma \ref{lemma2} hold and $\mathbb E[t(X)]=0$, then for fixed weights and bias $(W, b)$ at the output layer \begin{align}\label{ScoreApproximation} & \mathbb E_{X,Y \sim P_{X, Y}}\bigg[L\big(Y, h_{W, b}(f(x))\big)-L\big(Y, h_{W, b}(f^*(x))\big)\bigg]\nonumber\\ =&\frac{1}{2}\|\mathbf{\Xi}_{\mathbf{W}}\mathbf{\Xi}_{f^*}-\mathbf{\Xi}_{\mathbf{V}}(\mathbf{\Xi}_{t})^{\operatorname{T}}\|_{F}^2+\frac{1}{2}\kappa_{W,b}(\mu_f)+o(\epsilon^2), \end{align} where $\mathbf{\Xi}_{\mathbf{V}}=\mathbf{\Xi}_{\mathbf{W}}\mathbf{J}_1\mathbf{V}$, \begin{align} &\kappa_{W,b}(\mu_f) \nonumber\\ =&(2h_{0,\tilde b}(0)-2\mu_{a^*}+\mathbf{JW}\mu_f+d)^{\operatorname{T}}\mathbf{M}_L(\mathbf{JW}\mu_f+d)\nonumber\\ &-\eta_{W, d}(f^*), \end{align} and $\mu_f$ is given by \begin{align} \mu_f=g_{0,c}(0)+o(\epsilon). \end{align} \end{theorem} \begin{proof} See Appendix \ref{ptheorem4}. \end{proof} Theorem \ref{theorem4} quantifies the closeness of the feature $f^*(x)$ and $g_{V, c}(t(x))$.
To design the weights and bias $(V, c)$, we consider minimizing \eqref{ScoreApproximation}, which can be separated into two optimization problems, given by \begin{align}\label{weight_h} \mathbf{\Xi}_{\mathbf{V}^*}=\arg \min_{\mathbf{\Xi}_{\mathbf{V}}} \frac{1}{2}\|\mathbf{\Xi}_{\mathbf{W}}\mathbf{\Xi}_{f^*}-\mathbf{\Xi}_V\mathbf{\Xi}_t^{\operatorname{T}}\|_{F}^2, \end{align} \begin{align}\label{bias_h} \mu_f^*=\arg\min_{\mu_f} \kappa_{\mathbf{W},b}(\mu_f). \end{align} \begin{theorem}\label{theorem5} An optimal solution $\mathbf{\Xi}_{\mathbf{V}^*}$ to \eqref{weight_h} is \begin{align}\label{optimal_weight_h} \mathbf{\Xi}_{\mathbf{V}^*}=\mathbf{\Xi}_{\mathbf{W}}\mathbf{\Xi}_{f^*} \mathbf{\Xi}_t (\mathbf{\Xi}_t^{\operatorname{T}}\mathbf{\Xi}_t)^{-1}, \end{align} the corresponding weight matrix $V$ is expressed as \begin{align}\label{optimal_weight_h_1} V=\mathbf{J}_1^{-1}\mathbf{\Xi}_{f^*} \mathbf{\Xi}_t (\mathbf{\Xi}_t^{\operatorname{T}}\mathbf{\Xi}_t)^{-1}, \end{align} the optimal solution $\mu_f^*$ to \eqref{bias_h} is given by \begin{align}\label{optimal_bias_h} \mu_f^*=\mu_{f^*}, \end{align} and the corresponding bias vector $c$ is given by \begin{align} c=g^{-1}(\mu_{f^*})+o(\epsilon). \end{align} \end{theorem} Theorem \ref{theorem5} illustrates that the optimal weight matrix $V$ is obtained by projecting the optimal feature $f^*(x)$ onto the subspace of feature functions spanned by $t(x)$. This argument can be generalized to the weights of any intermediate layer of a deep neural network. In the learning process of a deep neural network, the backpropagation algorithm alternately updates the weight matrices $\mathbf{W}$ and $\mathbf{V}$. This is equivalent to alternately executing \eqref{optimal_weight_1} and \eqref{optimal_weight_h_1}. \section{Proof of Theorem \ref{theorem1}}\label{ptheorem1} Notice that $\mathbf d$ only affects the second term of \eqref{approximate_problem}.
To optimize $\mathbf d$, we take the derivative \begin{align} \frac{\partial \eta(\mathbf d, \mathbf f)}{\partial \mathbf d}=2\mathbf M_L(\mathbf J \mathbf d+\mathbf J\mathbf W^{\operatorname{T}}\bm \mu_{\mathbf f}+\mathbf a_{P_Y}-\bm \mu_{\mathbf a}). \end{align} Equating the derivative to zero, we get \eqref{optimal_bias}. Substituting the optimal bias into \eqref{eta}, we get \begin{align} \eta(\mathbf d, \mathbf f)=0, \end{align} which is the minimum value of the function $\eta(\mathbf d, \mathbf f)$. Next, for fixed $\mathbf \Xi_{\mathbf f}$, we need to optimize $\mathbf \Xi_{\mathbf W}$ by solving \begin{align} \min_{\mathbf \Xi_{\mathbf W}}\|\tilde{\mathbf B}-\mathbf \Xi_{\mathbf W} \mathbf \Xi_{\mathbf f}\|_{F}^2, \end{align} which is a convex optimization problem. By setting the derivative \begin{align} \frac{\partial}{\partial \mathbf \Xi_{\mathbf W}}\|\tilde{\mathbf B}-\mathbf \Xi_{\mathbf W} \mathbf \Xi_{\mathbf f}\|_{F}^2=2(\mathbf \Xi_{\mathbf W}\mathbf \Xi_{\mathbf f}\mathbf \Xi_{\mathbf f}^{\operatorname{T}}-\tilde{\mathbf B}\mathbf {\Xi}_{\mathbf f}^{\operatorname{T}}) \end{align} to zero, we find the optimal solution \begin{align} \mathbf \Xi_{\mathbf W}^*=\mathbf R_L \mathbf B \mathbf {\Xi}_{\mathbf f}^{\operatorname{T}}(\mathbf \Xi_{\mathbf f}\mathbf \Xi_{\mathbf f}^{\operatorname{T}})^{\operatorname{-1}}. 
\end{align} \section{Proof of Theorem \ref{theorem2}}\label{ptheorem2} To optimize $\mathbf \Xi_{\mathbf f}$, we set \begin{align} \frac{\partial}{\partial \mathbf \Xi_{\mathbf f}}\|\tilde{\mathbf B}-\mathbf \Xi_{\mathbf W} \mathbf \Xi_{\mathbf f}\|_{F}^2=2(\mathbf \Xi_{\mathbf f}^{\operatorname{T}}\mathbf \Xi_{\mathbf W}^{\operatorname{T}}\mathbf \Xi_{\mathbf W}-\tilde{\mathbf B}^{\operatorname{T}}\mathbf \Xi_{\mathbf W}) \end{align} to zero and get \begin{align} \mathbf \Xi_{\mathbf f}^*=(\mathbf \Xi_{\mathbf W}^{\operatorname{T}}\mathbf \Xi_{\mathbf W})^{\operatorname{-1}}\mathbf \Xi_{\mathbf W}^{\operatorname{T}}\tilde{\mathbf B}, \end{align} which satisfies \begin{align} \mathbf \Xi_{\mathbf f}^*\sqrt{\mathbf p_X}=\mathbf 0, \end{align} where the vector $\sqrt{\mathbf p_X}$ is defined in \eqref{px}. Because $\bm \mu_{\mathbf f}$ only affects the second term of \eqref{approximate_problem}, we set the derivative \begin{align} &\frac{\partial}{\partial \bm \mu_{\mathbf f}}\eta(\mathbf d, \mathbf f) \nonumber\\ =&2\mathbf \Xi_{\mathbf W}^{\operatorname{T}} \mathbf R_L(\mathbf a_{P_Y}-\bm \mu_{\mathbf a}+\mathbf J\mathbf d)+2 \mathbf \Xi_{\mathbf W}^{\operatorname{T}}\mathbf \Xi_{\mathbf W} \bm \mu_{\mathbf f} \end{align} to zero and obtain \eqref{optimal_mean}. \section{Proof of Theorem \ref{theorem3}}\label{ptheorem3} One lower bound of the first term in \eqref{approximate_problem} is given by \begin{align}\label{theorem2e1} \|\mathbf{\tilde B}- \mathbf \Xi_{\mathbf W} \mathbf{\Xi}_{\mathbf f}\|_{F}^2\geq \sum_{i=k+1}^{K} \sigma_i^2. \end{align} By using the Eckart–Young–Mirsky theorem \cite{eckart1936approximation}, if we substitute the values of $\mathbf{\Xi}_{\mathbf f}^*$ and $\mathbf \Xi_{\mathbf W}^*$ from \eqref{optimal_pair1} into \eqref{theorem2e1}, equality with the lower bound is achieved in \eqref{theorem2e1}. If the optimal bias $\mathbf d^*$ and the optimal mean $\bm \mu_{\mathbf f}^*$ satisfy \eqref{optimal_bias_mean}, we get the minimum of $\eta(\mathbf d, \mathbf f)$.
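The Eckart–Young–Mirsky bound used above can be checked numerically; the following informal sketch (random matrix, numpy) verifies that the rank-$k$ SVD truncation meets the lower bound with equality:

```python
import numpy as np

rng = np.random.default_rng(3)
B_tilde = rng.standard_normal((4, 7))
k = 2

# Rank-k truncation of the SVD, as in the optimal pair of Theorem 3
U, s, Vt = np.linalg.svd(B_tilde)
B_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The truncation meets the bound: ||B - B_k||_F^2 = sum_{i>k} sigma_i^2
err = np.linalg.norm(B_tilde - B_k, 'fro')**2
assert abs(err - (s[k:]**2).sum()) < 1e-10

# A generic rank-k factorization does no better
W = rng.standard_normal((4, k))
F = rng.standard_normal((k, 7))
assert np.linalg.norm(B_tilde - W @ F, 'fro')**2 >= err
```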
\section{Proof of Theorem \ref{theorem4}}\label{ptheorem4} For fixed $\mathbf \Xi_{\mathbf f^{(1)}}$ and $\bm \mu_{\mathbf f^{(1)}}$, the problem \eqref{approximate_problem1} can be decomposed into two separate optimization problems: \begin{align}\label{RTe1} \min_{\mathbf \Xi_{\mathbf W^{(1)}}}\frac{1}{2} \|\mathbf{\tilde B}- \mathbf \Xi_{\mathbf W^{(1)}} \mathbf{\Xi}_{\mathbf f^{(1)}}\|_{F}^2, \end{align} \begin{align}\label{RTe2} \min_{\mathbf c} \frac{1}{2}\kappa(\mathbf c, \mathbf f^{(1)}), \end{align} which are both convex optimization problems. Taking the derivative of $\kappa(\mathbf c, \mathbf f^{(1)})$ with respect to $\mathbf c$, we obtain \begin{align}\label{RTe3} \frac{\partial \kappa(\mathbf c, \mathbf f^{(1)})}{\partial \mathbf c}=2\mathbf M_L(\tilde{\mathbf J} \mathbf c+\tilde{\mathbf J}\mathbf W^{\operatorname{T}}\mathbf J_{1}{\mathbf W}^{(1)\operatorname{T}}\bm \mu_{\mathbf f^{(1)}}+\mathbf a_{P_Y}-\bm \mu_{\mathbf a}). \end{align} By setting \eqref{RTe3} to zero, we obtain \eqref{optimal_bias2}. Similarly, by setting the derivative \begin{align} \frac{\partial}{\partial \mathbf \Xi_{\mathbf W^{(1)}}}\|\tilde{\mathbf B}-\mathbf \Xi_{\mathbf W^{(1)}} \mathbf \Xi_{\mathbf f^{(1)}}\|_{F}^2=2(\mathbf \Xi_{\mathbf W^{(1)}}\mathbf \Xi_{\mathbf f^{(1)}}\mathbf \Xi_{\mathbf f^{(1)}}^{\operatorname{T}}-\tilde{\mathbf B}\mathbf {\Xi}_{\mathbf f^{(1)}}^{\operatorname{T}}) \end{align} to zero, we find \eqref{optimal_weight_2}. \section{Proof of Theorem \ref{theorem5}}\label{ptheorem5} For fixed $\mathbf \Xi_{\mathbf W^{(1)}}$ and $\mathbf c$, the problem \eqref{approximate_problem1} can be decomposed into two separate optimization problems: \begin{align}\label{T5e1} \min_{\mathbf{\Xi}_{\mathbf f^{(1)}}}\frac{1}{2} \|\mathbf{\tilde B}- \mathbf \Xi_{\mathbf W^{(1)}} \mathbf{\Xi}_{\mathbf f^{(1)}}\|_{F}^2, \end{align} \begin{align}\label{T5e2} \min_{\bm \mu_{\mathbf f^{(1)}}} \frac{1}{2}\kappa(\mathbf c, \mathbf f^{(1)}). \end{align} Setting the derivative \begin{align} &\frac{\partial \kappa(\mathbf c, \mathbf f^{(1)})}{\partial \bm \mu_{\mathbf f^{(1)}}} \nonumber\\ =&2\mathbf \Xi_{\mathbf W^{(1)}}^{\operatorname{T}} \mathbf R_L(\mathbf a_{P_Y}-\bm \mu_{\mathbf a}+\mathbf J\mathbf c)+2 \mathbf \Xi_{\mathbf W^{(1)}}^{\operatorname{T}}\mathbf \Xi_{\mathbf W^{(1)}} \bm \mu_{\mathbf f^{(1)}} \end{align} to zero, we get \eqref{optimal_mean2}. Similarly, by setting the derivative \begin{align} &\frac{\partial}{\partial \mathbf \Xi_{\mathbf f^{(1)}}}\|\tilde{\mathbf B}-\mathbf \Xi_{\mathbf W^{(1)}} \mathbf \Xi_{\mathbf f^{(1)}}\|_{F}^2 \nonumber\\ =&2(\mathbf \Xi_{\mathbf f^{(1)}}^{\operatorname{T}}\mathbf \Xi_{\mathbf W^{(1)}}^{\operatorname{T}}\mathbf \Xi_{\mathbf W^{(1)}}-\tilde{\mathbf B}^{\operatorname{T}}\mathbf \Xi_{\mathbf W^{(1)}}) \end{align} to zero, we get \eqref{optimal_feature2}. \section{Proof of Theorem \ref{theorem6}}\label{ptheorem6} Similar to Appendix \ref{ptheorem3}, one lower bound of the first term in \eqref{approximate_problem1} is given by \begin{align}\label{T6e1} \|\mathbf{\tilde B}- \mathbf \Xi_{\mathbf W^{(1)}} \mathbf{\Xi}_{\mathbf f^{(1)}}\|_{F}^2\geq \sum_{i=k_1+1}^{K} \sigma_i^2. \end{align} By substituting \eqref{optimal_pair2} into \eqref{T6e1}, equality is achieved in \eqref{T6e1}. The optimal bias $\bar{\mathbf c}$ follows from the same argument as in Appendix \ref{ptheorem4}.
\section{Introduction} In recent years, neural network based supervised learning has received extensive attention due to its emerging applications in a wide range of inference problems, such as image classification, DNA sequencing, and natural language processing. The success of deep neural networks depends heavily on their capability to extract good low-dimensional features from high-dimensional data. Due to the complexity of deep neural networks, the theoretical interpretation of feature extraction in these networks has been challenging, with some recent progress reported in, e.g., \cite{bartlett2019nearly, zhang2021understanding, karakida2019universal, mei2018mean, goldt2020modeling, jacot2018neural, lei2020geometric, geiger2021landscape, quinn2021information, huang2019information, tishby2015deep, yu2020learning, arora}. In this paper, we analyze the training of deep feedforward neural networks for a class of empirical risk minimization (ERM) based supervised learning algorithms. A local geometric analysis is conducted for feature extraction in deep feedforward neural networks. Specifically, the technical contributions of this paper are summarized as follows: \begin{itemize} \item We first analyze the design of (i) the weights and biases in the output layer and (ii) the feature constructed by the last hidden layer. In a local geometric region, this design problem is converted into a low-rank matrix approximation problem, where the matrix is characterized by the Bayes action of the supervised learning problem. Optimal designs of the weights, biases, and feature are derived in the local geometric region (see Theorems \ref{theorem1}-\ref{theorem3}). \item The above local geometric analysis can be readily applied to a hidden layer (see Corollaries \ref{corollary2}-\ref{corollary4}), by considering another supervised learning problem for the hidden layer.
The local geometric analyses of different layers are related to each other in an iterative manner: the optimal feature obtained from the analysis of one layer is the Bayes action needed for analyzing the previous layer. We use two supervised learning problems to illustrate our results. \end{itemize} \subsection{Related Work} Due to the practical success of deep neural networks, there have been numerous efforts \cite{bartlett2019nearly, zhang2021understanding, karakida2019universal, mei2018mean, goldt2020modeling, jacot2018neural, lei2020geometric, geiger2021landscape, quinn2021information, huang2019information, tishby2015deep, yu2020learning, arora} to explain the feature extraction procedure of deep neural networks. Towards this end, researchers have used different approaches, for example, the statistical learning theory approach \cite{bartlett2019nearly, zhang2021understanding}, the information geometric approach \cite{karakida2019universal, mei2018mean, goldt2020modeling, jacot2018neural, lei2020geometric, geiger2021landscape, quinn2021information, huang2019information}, and the information theoretic approach \cite{huang2019information, tishby2015deep, yu2020learning}. The information bottleneck formulation in \cite{tishby2015deep} suggested that the role of the deep neural network is to learn minimal sufficient statistics of the data for an inference task. The authors in \cite{yu2020learning} proposed that maximal coding rate reduction is a fundamental principle in deep neural networks. In \cite{huang2019information}, the authors formulated the problem of feature extraction by using the KL-divergence, and provided a local geometric analysis by considering a weak dependency between the data and the label. Motivated by \cite{huang2019information}, we also consider this weak dependency. Compared to \cite{huang2019information}, our local geometric analysis can handle more general supervised learning problems and neuron activation functions, as explained in Section \ref{sec_3}.
\begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{DNN6.eps} \caption{\small A deep feedforward neural network. \label{fig:DNN} } \end{figure}
\section{Introduction} As a wave crashes in the ocean it entrains air below the surface. After a turbulent break-up cascade \cite{Garrett2000} a population of bubbles is produced \cite{Deane2002, Prather2013, Deike2016} and, while small bubbles may dissolve into the water, larger bubbles rise back to the surface and collapse \cite{Deike2021}. The bursting of a bubble starts with the break-up of the thin liquid film that separates the air cavity from the atmosphere and ends with the fragmentation of a rising jet. Through these two fragmentation events, bubble bursting produces film drops \cite{Blanchard1988, Lhuissier2011b} and jet drops \cite{Spiel1997, Ghabache2014, Ghabache2016a, Brasz2018a, Ganan-Calvo2017, Blanco--Rodriguez2020} and constitutes one of the main sources of ocean spray \cite{Veron2012}. As it evaporates, the sea spray transports into the atmosphere water vapor, which is important for the thermodynamics of the atmosphere, and salt crystals, which affect the radiative balance of the atmosphere and form cloud condensation nuclei \cite{Lewis2004, Veron2015}. Significantly, it also carries heat, dissolved gases, surfactants, and biological materials \cite{Deike2021}. Finally, uncertainties in predicting the characteristics of sea spray aerosols directly impact our ability to perform weather prediction and earth system modeling \cite{Leeuw2011, Deike2018a}. Since the pioneering work of D. Blanchard \cite{Blanchard1963}, a number of combined experimental, numerical, and theoretical studies of a single bursting bubble have provided comprehensive data on the size and speed of the jet drops produced by bubble bursting in water \cite{Seon2017, Duchemin2002, Berny2020, Ganan-Calvo2018}. Applying these results to the bubble size distribution produced under a breaking wave enabled a rough estimation of the statistics of jet drop production \cite{Berny2021}.
However, the ocean surface is partly covered by a biofilm, which can be modeled with surfactants \cite{Wurl2011}. Surface-active contaminations are known to modify the static and dynamic behaviors of bubbles, including their coalescence, lifetimes, and bursting \cite{Poulain2018a, Shaw2021, Neel2021}. Consequently, the influence of the physicochemistry of the interface has to be taken into account in the process of bubble collapse at the interface and in the subsequent drop production. Néel \& Deike (2021) \cite{Neel2021} considered a monodisperse assembly of millimetric air bubbles produced identically in the bulk for a wide range of surface contamination and showed that, depending on the contamination, the bubble distribution that bursts can be very distinct from the initial distribution. Various experiments have attempted to describe the role of the physicochemical parameters in the production of droplets by bursting bubbles \cite{Modini2013, Prather2013, Quinn2015, Deike2021}, but there are large variations in protocols, and the influence of surfactants on drop production remains largely unclear. All these experiments were performed on large collections of bubbles, with different size distributions, suggesting the need for a study of a single bursting bubble. Recently, Constante-Amores {\it et al.} (2021) \cite{Constante-Amores2021} studied the effect of surfactant on the dynamics of a bursting bubble, using numerical simulations accounting for sorption kinetics and diffusive effects. At one fixed bubble size and one surface contamination, they showed that the presence of surfactant affects the dynamics of the system through Marangoni-induced flow and is responsible for delaying the collapse and generating slower and fewer drops. In this article, we study experimentally the effect of sodium dodecyl sulfate (SDS) surfactant on the dynamics of a bubble bursting through an interface.
After describing the experimental setup, we show qualitatively that the surfactants have an astonishing influence on the jet dynamics subsequent to the bubble collapse, and on the jet drop production. The rest of the article quantifies this effect by varying the surfactant concentration and the bubble size. We start by studying the influence of the surfactants on the bubble collapse time, before characterizing the variation of the number, size, and speed of the ejected drops as a function of the control parameters. Finally, we focus on the influence of the surfactant concentration on the cavity collapse and the capillary wave dynamics. \begin{figure}[b!] \centering \includegraphics[width=1.\linewidth]{FIG1_2.pdf} \caption{Sequences of bursting bubbles (bubbles start to burst at 0~ms) with comparable radius in three different liquids: (a) water, $\gamma$ = 70 mN.m$^{-1}$, R = 830 $\mu$m; (b) water-ethanol solution with 89.5\% water and 10.5\% ethanol by weight, $\gamma$ = 48 mN.m$^{-1}$, R = 830 $\mu$m; (c) water-SDS solution with 3.4 mM of SDS, $\gamma$ = 40 mN.m$^{-1}$, R = 840 $\mu$m. The surfactants have a strong influence on the jet dynamics, independently of their influence on the surface tension.} \label{Qualitatif} \end{figure} \section{Experimental setup} The experiment consists of releasing a single air bubble from a submerged needle in a liquid and recording the upward jet and released drops after the bubble bursts at the free surface. Air bubbles are generated in a parallelepipedic glass tank (20~cm length, 14~cm width, 9.5~cm depth) filled with either tap water or an aqueous solution of SDS (sodium dodecyl sulfate, purchased from Sigma Aldrich) surfactant with a mass concentration ranging from 0.5~g/L to 10~g/L, i.e. 1.7\,mM to 34.7\,mM.
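The conversion between the mass concentrations and the molar concentrations quoted above can be sketched as follows (the SDS molar mass of about 288.4 g/mol is a tabulated value; the CMC of 8 mM is the value used in the text):

```python
# Convert the SDS mass concentrations used in the experiments to molarity
# and to CMC units. Molar mass of SDS ~ 288.38 g/mol (tabulated value);
# CMC ~ 8 mM, as quoted in the text.
M_SDS = 288.38   # g/mol
CMC = 8.0        # mM

def to_mM(c_g_per_L):
    """Mass concentration (g/L) -> molar concentration (mmol/L)."""
    return c_g_per_L / M_SDS * 1e3

for c in (0.5, 10.0):
    c_mM = to_mM(c)
    print(f"{c:5.1f} g/L -> {c_mM:5.1f} mM -> {c_mM / CMC:.1f} CMC")
# 0.5 g/L gives ~1.7 mM (~0.2 CMC); 10 g/L gives ~34.7 mM (~4.3 CMC)
```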
For SDS at ambient temperature the critical micelle concentration (CMC) is found to be around 8~mM \cite{Thominet1987}, which means that the SDS concentration in our solutions varies from C = 0.2 CMC to 4.3 CMC. Bubbles are generated using a syringe pump filled with air. Three different needles are used, with internal diameters varying from 0.08~mm to 1.5~mm, enabling the creation of bubbles with three different radii: 0.8, 1.1, and 1.7 $\pm$ 0.1~mm. The bubbles rise to the surface and briefly float before bursting. Considering the elliptic shape of the floating bubble, we define an equivalent radius $R=(a^2b)^{1/3}$, with $a$ and $b$ respectively the semi-major and semi-minor axes of the ellipse. The surface tension of each solution is measured using the pendant drop technique~\cite{Berry2015}. In all experiments a digital high speed camera ({\it Phantom} V2511) is used to image the rising jet and released drops from the side, above the free surface. In a few experiments a second digital high speed camera ({\it Photron} SA-5) is added to image the collapse of the submerged cavity below the free surface. \section{Qualitative description} Figure~\ref{Qualitatif} presents three sequences of bubble bursting. In each case the bubble radius is almost constant and the liquid differs: (a) tap water ($\gamma$ = 70 mN.m$^{-1}$), (b) a water-ethanol mixture with a surface tension $\gamma$ = 48 mN.m$^{-1}$, and (c) a water-SDS solution with a surface tension $\gamma$ = 40 mN.m$^{-1}$. We observe in (a) and (b) that the decrease of surface tension does not much affect the drop size and velocity. These observations have been reported quantitatively in the literature \cite{Duchemin2002, Ghabache2014, Seon2017, Brasz2018a, Berny2020}. In sequence (c), the bubble bursts in a liquid with surfactants concentrated at 0.4~CMC and with a surface tension very close to that of sequence (b). The result is remarkable.
The presence of surfactants completely changes the jet dynamics; the jet velocity is so low that the jet can barely reach the free surface and cannot produce any droplet. In the following, our goal is to examine more quantitatively the influence of the surfactant concentration on the drop dynamics, and to identify where and how, in the cavity collapse process, the surfactants can have such a strong influence. \section{The bubble collapse} \begin{figure}[h!] \centering \includegraphics[width=0.8\linewidth]{FIG2.pdf} \caption{ (a) Bubble collapsing time $\Delta t$ as a function of the surfactant concentration, nondimensionalized by the critical micelle concentration (CMC). The CMC is taken equal to 8~mM \cite{Thominet1987}. Inset: for each solution the surface tension is measured using the pendant drop technique and reported as a function of the surfactant concentration. (b) Collapsing time, $\Delta t$, normalized by the capillaro-inertial time scale, $\sqrt{\rho R^3/\gamma}$, as a function of the dimensionless surfactant concentration, for three different bubble radii R: 0.8, 1.1 and 1.7 mm. The 0.3 prefactor is added in the normalization of $\Delta t$ in order to have the dimensionless collapsing time equal to one. The color of the markers is associated with the surfactant concentration in the liquid. } \label{CollapseTime} \end{figure} Before the cavity collapses, the bubble floats at the free surface. As it is static, its shape results from an equilibrium between capillarity and gravity and is obtained by integration of the Young–Laplace equation \cite{Toba1959, Lhuissier2011b, Poujol2021}. Surfactants influence the static bubble shape only through their modification of the surface tension. The static bubble shape before bursting therefore cannot be responsible for the modification of the jet dynamics between figure~\ref{Qualitatif}(b) and (c). We then focus on the influence of the surfactant concentration on the bubble collapse duration $\Delta t$.
$\Delta t$ is defined as the time elapsed between the hole nucleation in the cap film and the cavity reversal, when the depth of the immersed cavity starts to decrease. Figure~\ref{CollapseTime}(a) presents $\Delta t$ as a function of the surfactant concentration $C$ nondimensionalized by the CMC. Note that the color of the markers is associated with the surfactant concentration in the liquid. The different markers at one concentration show $\Delta t$ for experiments in similar conditions; they therefore only reflect the dispersion of the results. We observe that, independently of the dispersion, the bubble collapsing time increases with the surfactant concentration, reaches a maximum close to the CMC, and then decreases. In other words, up to the CMC, the cavity is slower to collapse as the surfactants become more concentrated and, above the CMC, it becomes faster again. This non-monotonic variation of the collapsing time with the surfactant concentration is surprising, in particular because the variation of the surface tension $\gamma$ with the dimensionless surfactant concentration $C$ is monotonic, as verified in the inset, which presents the variation of $\gamma$ with $C$. In this inset $\gamma$ expectedly decreases with $C$, from the water surface tension, until it reaches a plateau, around 36~mN/m, beyond the CMC. Consequently, the non-monotonic variation of $\Delta t$ with $C$ indicates that the surfactant dynamics should play a role in the cavity collapse. In this capillaro-inertial collapse, $\Delta t$ is expected to scale as the capillaro-inertial time $\sqrt{\rho R^3/\gamma}$, with $\rho$ the liquid density and $R$ the bubble radius \cite{Poujol2021}. Figure~\ref{CollapseTime}(b) presents the collapsing time $\Delta t$ normalized by this capillaro-inertial time scale as a function of the dimensionless surfactant concentration $C$, for the three different bubble radii $R =$ 0.8, 1.1 and 1.7~mm.
As expected, without surfactant ($C=0$), the normalized collapsing times for all bubble radii collapse onto a single value, demonstrating that this nondimensionalization is relevant. The prefactor 0.3 is added to the capillaro-inertial time so that the normalized times collapse around 1. However, as surfactants are added to the solution, the dimensionless collapsing time increases and becomes more dispersed with increasing bubble size. The dimensionless time and its dispersion both reach a maximum between half the CMC and the CMC (C = 0.5-1) and then decrease. As we approach the CMC, the experiments with different bubble radii no longer collapse onto one curve, and the small bubbles are relatively slower to collapse compared to the larger ones. By normalizing $\Delta t$ by the relevant capillaro-inertial collapsing time, figure~\ref{CollapseTime}(b) makes it possible to separate the respective influences of the surface tension and the surfactant dynamics. As the data do not rescale in the presence of the surfactants, this behavior is expected to be a consequence of the particular dynamics of the surfactants, independently of their influence on the measured liquid surface tension $\gamma$ displayed in the inset. Indeed, in processes that involve surface stretching and/or capillary waves, as is the case in our experiment, gradients of surfactants can appear, generating Marangoni stresses that affect the dynamics \cite{Kamat2018, Manikantan2020}. Constante-Amores {\it et al.} \cite{Constante-Amores2021} showed that in the insoluble surfactant limit, the collapse leads to an over-concentration of surfactants at the apex of the cavity when the capillary waves focus, which is the source of a strong Marangoni stress that can delay the cavity collapse. The presence of insoluble surfactants is also known to influence the dispersion of surface waves and to enhance capillary wave damping due to interfacial rigidification \cite{Lucassen1966, Asaki1995}.
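As an order-of-magnitude check (an editorial illustration; clean-water properties at room temperature are assumed), the capillaro-inertial time $\sqrt{\rho R^3/\gamma}$ for the three bubble radii is a few milliseconds, so with the 0.3 prefactor the collapse time $\Delta t$ is of order one millisecond:

```python
import math

rho = 1000.0     # water density, kg/m^3
gamma = 70e-3    # clean-water surface tension, N/m

for R_mm in (0.8, 1.1, 1.7):
    R = R_mm * 1e-3
    tau = math.sqrt(rho * R**3 / gamma)   # capillaro-inertial time, s
    print(f"R = {R_mm} mm: tau = {tau*1e3:.1f} ms, 0.3*tau = {0.3*tau*1e3:.2f} ms")
```

For R = 1.1 mm this gives tau of roughly 4.4 ms, i.e. a collapse time of order 1.3 ms once the 0.3 prefactor is included.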
To contextualize our results within the existing literature, we need to discuss the surfactant dynamics in our experiment. Two kinetics must be considered: the surface diffusivity and the surfactant rearrangement between the surface and the bulk. First, the surface diffusivity $D_s$ of SDS is around 10$^{-9}$ m$^2$.s$^{-1}$ \cite{McGough2006}. Thus, the Peclet number $\text{Pe} = \sqrt{\gamma R /\rho}/D_s$, which measures the relative importance of surface convection of surfactant to its diffusion, is $\mathcal{O}(10^5)$ in our experiment. The surfactant surface diffusion is therefore not strong enough to mitigate its advection. Second, to estimate the adsorption-desorption dynamics of the surfactants one can compare the characteristic time of the sorption kinetics to the diffusion time of the surfactants from the liquid-gas interface to the bulk. The Langmuir model gives the characteristic time for the sorption kinetics: $\tau_b= \Gamma_m /(k_\text{des}\Gamma_m+k_\text{ads}C)$, with $\Gamma_m = 4\times10^{-6}$ mol.m$^{-2}$ \cite{Lu1995} the maximum surface packing concentration of SDS at the air-water interface and $k_\text{des}$ and $k_\text{ads}$ respectively the desorption and adsorption rates \cite{Chang1995}. The typical diffusive time scale can be expressed as $t_\text{diff} = \Gamma^2/(D_v C^2)$, with $\Gamma$ the surface concentration at equilibrium \cite{Cantat2013} and $D_v = 5\times10^{-10}$ m$^2$.s$^{-1}$ \cite{Kinoshita2017} the diffusion coefficient of the surfactants in the bulk liquid. Even if the characteristic time $\tau_b$ is not easy to estimate, in particular due to the lack of precision in the sorption rates $k_\text{des}$ and $k_\text{ads}$, the diffusion time, of the order of a millisecond, seems to remain larger than the time of the adsorption-desorption kinetics. This indicates that the dynamics of the surfactants is limited by diffusion, which is of the same order of magnitude as the collapse time $\Delta t$.
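These order-of-magnitude estimates can be reproduced directly from the values quoted above (the surface tension and radius below are representative picks; the conclusions, Pe much larger than one and a millisecond-scale diffusion time, are insensitive to the exact choice):

```python
import math

rho = 1000.0    # kg/m^3
gamma = 40e-3   # N/m, representative SDS-solution surface tension
R = 1e-3        # m, representative bubble radius
D_s = 1e-9      # m^2/s, SDS surface diffusivity
D_v = 5e-10     # m^2/s, SDS bulk diffusivity

# Peclet number: surface convection vs surface diffusion of surfactant.
Pe = math.sqrt(gamma * R / rho) / D_s
print(f"Pe = {Pe:.0e}")   # >> 1: diffusion cannot mitigate advection

# Bulk diffusion time at the CMC (Gamma ~ 4e-6 mol/m^2, C = 8 mol/m^3).
Gamma, C = 4e-6, 8.0
t_diff = Gamma**2 / (D_v * C**2)
print(f"t_diff = {t_diff*1e3:.1f} ms")   # of the order of a millisecond
```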
Thus, we expect the surfactant dynamics to occur on a time comparable to that of the bubble collapse and, consequently, solubility effects to be present during the collapse. However, as the surprising variation of the collapsing time $\Delta t$ with the surfactant concentration $C$ is interpreted as a consequence of the variation of the local concentration of the surfactants, we expect the solubility not to be dominant. Consequently, the characteristic times of the surfactant dynamics might be underestimated, or the dynamics might be influenced by the presence of dodecanol, an insoluble impurity which may be either present from the original preparation or produced by hydrolysis of the aqueous SDS on standing \cite{Lu1995}. The use of numerical simulation would now be an asset for interpreting this complex situation \cite{Kamat2018,Constante-Amores2021}. \section{Jet drops characteristics} \begin{figure}[h!] \centering \includegraphics[width=\linewidth]{FIG3.pdf} \caption{(a) Number of ejected drops as a function of the SDS concentration. (b) Capillary number of the first ejected drop as a function of La(1+2.2Bo). (c) Laplace number of the first ejected drop as a function of the Laplace number of the initial bubble. In all graphs color codes for the SDS concentration and symbol codes for the bubble size. In (b) and (c) the dashed lines represent the unified relations from~\cite{Deike2018}. The gray shadows represent the dispersion of the experimental and numerical values reported in \cite{Berny2020}.} \label{DropCharac} \end{figure} As shown in figure~\ref{Qualitatif}, after a bubble has collapsed, an upward jet usually rises and produces the so-called jet drops. Figure~\ref{DropCharac}(a) presents the number of these drops produced when a bubble bursts, as a function of the surfactant concentration, for three different bubble radii.
For bubbles bursting in water (C = 0~CMC, empty markers) the three different bubbles produce droplets and the smaller the bubble, the more drops are produced \cite{Berny2020}. When surfactants are added to the water, our largest bubble size (triangle) no longer produces drops, regardless of the surfactant concentration tested. This is a strong result: for this bubble size, surface contamination completely suppresses the drop production. Then, for smaller bubbles, we observe some dispersion in the number of produced droplets. Indeed, for C $\simeq$ 0.2~CMC (yellow markers), R = 0.8 and 1.1~mm are superimposed and produce either 1, 2 or 3 droplets. There is the same kind of dispersion for C $\geq$ 0.9~CMC. Nevertheless, despite this dispersion, the trend is clear: \textit{(i)} surface contamination can prevent the drop production and \textit{(ii)} when droplets are produced they are less numerous, and their number seems to decrease down to a minimum around half the CMC, before increasing again. These results are crucial: they signify that the size distribution of ejected jet drops produced in pure water \cite{Berny2021} might be very different from the one produced in water with surfactant. They refine and experimentally validate the recent numerical results of Constante-Amores \textit{et al.} \cite{Constante-Amores2021}, which show that a reduction in the number of ejected droplets arises in surfactant-laden flow due to Marangoni flow. To further probe the influence of the surfactant contamination on the jet drop production, the speed and size of the first ejected droplet are quantified as a function of the surfactant concentration.
Based on a large amount of numerical and experimental results, previous studies have demonstrated that the problem has two control parameters: the main one, the Laplace number (La), which compares the capillaro-inertial forces with the viscous forces, and the Bond number (Bo), which compares the gravitational forces with the capillary ones \cite{Duchemin2002, Ganan-Calvo2017, Deike2018, Berny2020}. They are defined as: \begin{align} \text{La} & = \frac{\rho \gamma R}{\mu^2} \\ \text{Bo} & = \frac{\rho g R^2}{\gamma} \end{align} where $R$ is the bubble radius, $\mu$ the water viscosity, $\rho$ the water density, $\gamma$ the surface tension and $g$ the acceleration of gravity. The first drop speed $V_d$ and size $R_d$ are also nondimensionalized using, respectively, the visco-capillary velocity $V_\mu=\gamma/\mu$ and length $l_\mu = \mu^2/(\rho \gamma)$, yielding the dimensionless drop speed and size: \begin{align} \text{Ca}_d & = \frac{V_d \mu}{\gamma} \\ \text{La}_d & = \frac{\rho\gamma R_d}{\mu^2} \end{align} Within this dimensionless framework, previous studies~\cite{Deike2018, Berny2020} have proposed universal rescalings able to fully describe the first drop velocity and size. These scalings are represented with dashed lines in figures~\ref{DropCharac}(b) and (c). They gather a large range of bubble sizes and liquid parameters ($\rho$, $\mu$, $\gamma$). The grey zone around the dashed line represents the error bar including all the experiments. The Bond number appears in the x-axis as a correction term for the drop velocity (Ca$_d$) and plays no role for the drop size (La$_d$). On these plots we add the values measured for bubbles bursting in our solutions of SDS mixed with water. The surfactant concentration is represented using the same colors as in figures~\ref{CollapseTime} and \ref{DropCharac}(a).
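For concreteness, these dimensionless groups can be evaluated for a millimetric bubble in clean water (an illustration; room-temperature water properties are assumed, and the drop speed and radius below are hypothetical example values, not measurements from this work):

```python
rho = 1000.0    # kg/m^3, water density
mu = 1e-3       # Pa.s, water viscosity
gamma = 70e-3   # N/m, water surface tension
g = 9.81        # m/s^2
R = 1e-3        # m, bubble radius

La = rho * gamma * R / mu**2   # capillaro-inertial vs viscous forces
Bo = rho * g * R**2 / gamma    # gravitational vs capillary forces
print(f"La = {La:.2e}, Bo = {Bo:.2f}")   # La ~ 7e4, Bo ~ 0.14

# A measured first-drop speed V_d and radius R_d would be nondimensionalized as:
V_d, R_d = 10.0, 50e-6            # hypothetical example values
Ca_d = V_d * mu / gamma           # dimensionless drop speed
La_d = rho * gamma * R_d / mu**2  # dimensionless drop size
```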
First, as expected, the drop velocity and size for bubbles bursting in water (empty markers) fall onto the universal scalings, and many markers lie on the x-axis because no drops are produced for the largest bubble and for concentrations close to half of the CMC, as shown in figure~\ref{DropCharac}(a). Secondly, in figure~\ref{DropCharac}(b), we observe that the more concentrated the solution, the further the drop velocity lies above the scaling. In figure~\ref{DropCharac}(c), even with a small amount of surfactant, the drop size falls quite far below the universal scaling. It also seems that the drop size is less affected by the Laplace number when the surfactant concentration is high. These variations indicate that the influence of the surfactants is highly nontrivial, undoubtedly dependent on the local concentration gradient along the jet. Finally, we observe again that the dispersion is considerably larger than without surfactant, possibly because the dynamics is very sensitive to the balance between the coupled dynamics of the surfactants and the jet. Consequently, the next step of this study will probably require a statistical characterization to properly capture the influence of the surfactant concentration on the drop production \cite{Berny2021a}. \section{Capillary waves focusing} \begin{figure}[h!] \centering \includegraphics[width=1\linewidth]{FIG4.pdf} \caption{Sequences showing the collapse of the submerged cavity for bubbles with almost the same radius in three different liquids: (a) water, $\gamma=70\,$mN.m$^{-1}$, $R=1.1\,$mm, (b) water-SDS solution at 0.4\,CMC, $R=1.1\,$mm, (c) water-SDS solution at 4.3\,CMC, $R=1.0\,$mm.} \label{SeqCollapse} \end{figure} The jet dynamics strongly depends on the capillary waves focusing at the bottom of the cavity \cite{Ghabache2014, Gordillo2019}.
Figure~\ref{SeqCollapse} presents three sequences of cavity collapse with almost the same bubble radius and three different surfactant concentrations: (a) no surfactant, (b) 0.4~CMC and (c) 4.3~CMC. The capillary wave propagation differs in these three sequences. The clearest difference appears between (a) and (b), and lies in the wave shape, in particular in the second half of the collapse sequences. The shape of the lowest collapsing cavity, shown in the second-to-last images, is completely different, and undoubtedly explains the strong difference in the drop production dynamics (see $C=0.4$~CMC in figure~\ref{DropCharac}). As shown by Constante-Amores \textit{et al.} \cite{Constante-Amores2021} for precise values of the control parameters, the interfacial surfactant concentration reaches its maximum value as the surfactant-laden capillary waves converge on the cavity apex. The Marangoni stresses that drive motion from high to low surface concentration regions can explain the shape of the cavity. For the highest concentration (c), the shape of the capillary waves, and of the lowest cavity, looks quite similar to that in water. This seems to indicate that for concentrations higher than the CMC the Marangoni stresses are weaker, due to a lower concentration gradient; this can be explained by a shorter diffusive time of the surfactants at high concentration, giving the collapse a soluble-surfactant behavior. This similarity in the cavity collapse between no contamination and high contamination probably explains why the drop characteristics are closer between water and the highest concentration than between water and 0.4~CMC, for which no drop is produced. \section{Conclusion} Surfactants are most often present in the liquids where bubbles burst (ocean, sparkling wine, soda...) and have barely been taken into account in experiments and models.
In particular, to our knowledge, experiments on a single bubble bursting in a surfactant-laden liquid had never been carried out. Here, we have shown how SDS strongly influences the cavity collapse and the drop production, for different values of the bubble size and of the surfactant concentration. In particular, we highlight that the contamination induces: (i) a maximum in the bubble collapse duration around the CMC, (ii) smaller and faster drops, and (iii) fewer drops, with no drops at all at a particular concentration of half the CMC. We also show that these effects are a consequence of surface tension gradients (Marangoni stresses) and not just of the surface tension lowering. The exact role of the Marangoni flows is not known and needs to be clarified by quantifying the surface tension gradients appearing during the bubble collapse. Motivated by this study, more experiments should now be carried out, in particular with insoluble surfactants, to examine the influence of solubility. As it is difficult to change only one parameter at a time experimentally, it would undoubtedly be interesting to carry out a large campaign of numerical simulations. Indeed, the cavity collapse, cavity reversal, jet dynamics and end pinching are very complex phenomena and their dynamics involves a strong dependence on the local concentration gradient. In simulations, Marangoni stresses can be turned off while surfactant-induced lowering of surface tension is retained, thereby determining which of the two effects is the dominant mechanism by which surfactants affect the flow \cite{Kamat2018}. On the other hand, simulations would also enable a statistical characterization of the drop production \cite{Berny2021a}, as it seems to be very sensitive to the experimental conditions and probably to the initial conditions.
Finally, in any case, the surfactant concentration needs to be taken into account in experiments and simulations to improve the prediction of real bubble-bursting sprays such as the sea spray.
\section{Introduction} We recall that a $t$-$(v,k,\lambda)$ design is a pair ${\cal D}=(V,{\cal B})$ with $V$ a set of $v$ {\it points} and ${\cal B}$ a collection of $k$-subsets of $V$, called {\it blocks}, such that any $t$-subset of $V$ is contained in exactly $\lambda$ blocks. It is understood that $1\leq t\leq k\leq v$. The design is said to be {\it simple} if $\cal B$ is a set, i.e., if it does not have repeated blocks. When $v=k$ we necessarily have only one block, coincident with the whole point set $V$, repeated $\lambda$ times; in this case the design is said to be {\it trivial}. In the important case of $\lambda=1$ one speaks of a {\it Steiner $t$-design} and the notation $S(t,k,v)$ is often used in place of ``$t$-$(v,k,1)$ design". An isomorphism between two designs $(V,{\cal B})$ and $(V',{\cal B}')$ is a bijection $f: V \longrightarrow V'$ turning ${\cal B}$ into ${\cal B}'$. Of course the study of $t$-designs is done up to isomorphism. An {\it automorphism group} of a design ${\cal D}=(V,{\cal B})$ is a group $G$ of permutations on $V$ leaving ${\cal B}$ invariant, i.e., a group of isomorphisms of $\cal D$ with itself. If $G$ acts regularly -- i.e., sharply transitively -- on the points, then $\cal D$ is said to be {\it regular} under $G$ (briefly {\it $G$-regular}). Up to isomorphism, a $G$-regular design has point set $G$ and any translate $B+g$ of any block $B$ is a block as well. For general background on $t$-designs we refer to \cite{BJL} and \cite{CD}. Throughout the paper, every group will be assumed finite and abelian unless specified otherwise. A subset $B$ of a group $G$ will be said {\it zero-sum} if its elements sum up to zero. Representing the blocks of a design as zero-sum subsets of a commutative group turned out to provide an effective algebraic tool for studying their automorphisms (see, e.g., Example 3.7 in \cite{CFP2}). 
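As a concrete instance of a $G$-regular design (a sketch for illustration, not drawn from the paper's own examples), the translates of the base block $\{1,2,4\}$ modulo $7$ form a cyclic $2$-$(7,3,1)$ design on $\mathbb{Z}_7$, since $\{1,2,4\}$ is a planar difference set; $\mathbb{Z}_7$ acting on itself by translation is then a regular automorphism group:

```python
from itertools import combinations

v = 7
base = {1, 2, 4}   # a planar difference set mod 7
blocks = [frozenset((b + g) % v for b in base) for g in range(v)]

# Every translate of a block is again a block: Z_7 is a regular
# automorphism group of the design.
assert all(frozenset((x + g) % v for x in B) in blocks
           for B in blocks for g in range(v))

# Every pair of points lies in exactly one block: a 2-(7,3,1) design
# (the cyclic Steiner triple system of order 7).
for pair in combinations(range(v), 2):
    assert sum(set(pair) <= B for B in blocks) == 1
```

Note that the base block is zero-sum in $\mathbb{Z}_7$ ($1+2+4\equiv 0$), but its translates are not, so regularity alone does not yield additivity under this embedding.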
Also, some recent literature provides remarkable examples of the use of zero-sum blocks in the construction of combinatorial designs (see, e.g., \cite{BCHW, K}). This gives even more value to the interesting theory of {\it additive designs} introduced in \cite{CFP} by Caggegi, Falcone and Pavone. Other papers on the same subject by some of these authors are \cite{C, CF,FP,Pavone}. They say that a design is additive if it is embeddable into an abelian group in such a way that the sum of the elements in any block is zero. We slightly reformulate the terminology as follows. \begin{defn} A design $(V,{\cal B})$ is {\it additive under an abelian group $G$} (or briefly {\it $G$-additive}) if, up to isomorphism, we have: \begin{itemize} \item[$(1)$] $V\subset G$; \item[$(2)$] $B$ is zero-sum $\forall B\in{\cal B}$. \end{itemize} If in place of condition (2) we have the much stronger condition \begin{itemize} \item[$(2)_s$] ${\cal B}$ is precisely the set of all zero-sum $k$-subsets of $V$, \end{itemize} then the design is {\it strongly $G$-additive}. \end{defn} By saying that a design is additive (resp., strongly additive) we will mean that it is $G$-additive (resp., strongly $G$-additive) for at least one abelian group $G$. Note that we may have designs which are $G$-additive and $H$-additive at the same time even though neither group is isomorphic to a subgroup of the other. For instance, it is proved in \cite{CFP} that if $p$ is a prime, then the point-line design of the affine plane over $\mathbb{Z}_p$, which is obviously $\mathbb{Z}_p^2$-additive, is also strongly $\mathbb{Z}_p^{p(p-1)/2}$-additive. In general, to establish whether an additive design is also strongly additive appears to be hard. Examples of additive $2$-$(v,k,\lambda)$ designs which are not strongly additive are given for $2\leq \lambda\leq 6$ in \cite{Pavone}. The question of whether there exists an additive Steiner 2-design which is not strongly additive is still open.
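The affine example above is easy to verify by machine; the following sketch (our own check, with the odd prime $p=5$ chosen purely for illustration) confirms that every line of $AG(2,p)$ is zero-sum in $\mathbb{Z}_p^2$, as each line $\{a+td : t\in\mathbb{Z}_p\}$ sums to $pa+\frac{p(p-1)}{2}d\equiv(0,0)$ for odd $p$:

```python
from itertools import product

# Our own check (not part of the cited papers): the point-line design of
# AG(2,p) is strictly Z_p^2-additive for an odd prime p, here p = 5.
p = 5
points = list(product(range(p), repeat=2))

# p+1 direction representatives: (1,0) and (m,1) for m = 0,...,p-1
directions = [(1, 0)] + [(m, 1) for m in range(p)]

lines = set()
for a in points:
    for d in directions:
        line = frozenset(((a[0] + t * d[0]) % p, (a[1] + t * d[1]) % p)
                         for t in range(p))
        lines.add(line)

assert len(lines) == p * (p + 1)  # 30 lines in AG(2,5)
for line in lines:
    # every block is zero-sum in Z_p^2
    assert (sum(x for x, _ in line) % p, sum(y for _, y in line) % p) == (0, 0)
```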
We propose to consider the $G$-additive designs whose set of points is precisely $G$ or $G\setminus\{0\}$. \begin{defn} An additive design is {\it strictly $G$-additive} or {\it almost strictly $G$-additive} if its point set is precisely $G$ or $G\setminus\{0\}$, respectively. \end{defn} Of course strictly additive (resp., almost strictly additive) means strictly $G$-additive (resp., almost strictly $G$-additive) for a suitable $G$. As is standard, $\mathbb{F}_q$ will denote the field of order $q$ and also, by abuse of notation, its additive group. It is quite evident that the $2$-$(q^n,q,1)$ design of points and lines of $AG(n,q)$ (the $n$-dimensional affine geometry over $\mathbb{F}_q$) is strictly $\mathbb{F}_{q^{n}}$-additive. As observed in \cite{CF}, every $2$-$(2^v-1,2^k-1,\lambda)$ design over $\mathbb{F}_2$ is almost strictly $\mathbb{F}_{2^v}$-additive\footnote{A 2-design is {\it over $\mathbb{F}_q$} if its points are those of a projective geometry over $\mathbb{F}_q$ and the blocks are suitable subspaces of this geometry.}. Thus there exists an almost strictly $\mathbb{Z}_2^v$-additive $2$-$(2^v-1,7,7)$ design for any odd $v\geq3$ in view of the main results in \cite{BN,Thomas}. Also, there is an almost strictly $\mathbb{Z}_2^{v+1}$-additive $2$-$(2^{v+1}-1,3,1)$ design, namely the point-line design of $PG(v,2)$ (the $v$-dimensional projective geometry over $\mathbb{F}_2$). Finally, each of the celebrated designs found in \cite{BEOVW} and revisited in \cite{BNW} is an almost strictly $\mathbb{Z}_2^{13}$-additive $2$-$(8191,7,1)$ design. Almost all known additive designs have quite large values of $\lambda$. For instance, it is proved in \cite{Pavone3} that if $p$ is an odd prime and $k=mp$ does not exceed $p^n$, then all zero-sum $k$-subsets of $\mathbb{F}_{p^n}$ form the block-set of a strongly additive $2$-$(p^n,k,\lambda)$ design with $\lambda={1\over p^n}{p^n-2\choose k-2}+{k-1\over p^n}{p^{n-1}-1\choose m-1}$.
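As a numerical sanity check of this formula (ours, not taken from \cite{Pavone3}), an exact rational computation shows that it yields an integer in the smallest interesting case $p=3$, $n=4$, $k=6$:

```python
from fractions import Fraction
from math import comb

# lambda = (1/p^n) C(p^n-2, k-2) + ((k-1)/p^n) C(p^(n-1)-1, m-1), with k = mp
p, n, k = 3, 4, 6
m = k // p
v = p**n
lam = (Fraction(comb(v - 2, k - 2), v)
       + Fraction((k - 1) * comb(p**(n - 1) - 1, m - 1), v))
assert lam.denominator == 1 and lam == 18551
```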
Applying this with $p=3$, $n=4$ and $k=6$, one finds a strongly additive $2$-$(81,6,18551)$ design. A sporadic example with $\lambda=2$ is the strictly $\mathbb{Z}_3^4$-additive $2$-$(81,6,2)$ design given in \cite{N} and some more classes with a relatively small $\lambda$ will be given in \cite{BN2}. In any case, what is most striking is the shortage of additive Steiner 2-designs. Up to now, only three classes were known: \begin{itemize} \item[C1.] the designs of points and lines of the affine geometries over any field $\mathbb{F}_q$ (which are strictly additive); \item[C2.] the designs of points and lines of the projective geometries over $\mathbb{F}_2$ (which are almost strictly additive); \item[C3.] the designs of points and lines of the projective planes over any field $\mathbb{F}_q$ (which are strongly additive under a ``big'' group \cite{CFP}). \end{itemize} Nothing else was known, except for the sporadic example of the $2$-$(8191,7,1)$ design mentioned above. Hence finding additive Steiner 2-designs with new parameters, in particular with a block size which is neither a prime power nor a prime power plus one, appears to be challenging. Note that the $2$-$(q^n,q,1)$ designs mentioned above are also $\mathbb{F}_{q^n}$-regular. This fact suggests that a natural approach for reaching our target is to look for strictly $G$-additive Steiner 2-designs which are also $G$-regular. Let us give a name to the designs with these properties. \begin{defn} A design is {\it super-regular} under an abelian group $G$ (or briefly {\it $G$-super-regular}) if it is $G$-regular and strictly $G$-additive at the same time. \end{defn} As above, super-regular will mean $G$-super-regular for a suitable $G$. Super-regular Steiner 2-designs will be the central topic of this paper. Our main result will be the following.
\begin{thm}\label{main} Given $k\geq3$, there are infinitely many values of $v$ for which there exists a super-regular $2$-$(v,k,1)$ design with the genuine exceptions of the singly even values of $k$ and the possible exceptions of all $k=2^n3\geq12$. \end{thm} As an immediate consequence, we have the existence of a strictly additive Steiner 2-design with block size $k$ for any $k$ with the same exceptions as in the above statement. A major disappointment is that, for fixed $k$, the smallest $v$ for which we are able to say that a super-regular $2$-$(v,k,1)$ design exists is huge. Suffice it to say that for $k=15$ this value is $3\cdot5^{31}$. Consider, however, that there are several asymptotic results proving the existence of some designs as soon as the number of points is admissible and greater than a bound which is not even quantified. This happens, for instance, in the outstanding achievement by P. Keevash \cite{K} on the existence of Steiner $t$-designs. Usually, these asymptotic results are obtained via probabilistic methods and are not constructive. Our methods are algebraic and ``half constructive''. We actually give a complete recipe for building a super-regular $2$-$(kq,k,1)$ design under $G\times\mathbb{F}_q$ (with $G$ a suitable group of order $k$) whenever $q$ is a sufficiently large admissible power of a prime divisor of $k$. Yet, in building every {\it base block} we have to pick the second coordinates of its elements, one by one, in a way that suitable cyclotomic conditions are satisfied and these choices are not ``concrete''; they are realizable only in view of some theoretical arguments deriving from the {\it theorem of Weil on multiplicative character sums}. In the penultimate section we will have a look at the super-regular non-Steiner 2-designs. The paper will be organized as follows.
In the next section we first prove two elementary necessary conditions for the existence of a strictly $G$-additive $2$-$(v,k,1)$ design: $G$ cannot have exactly one involution, and every prime factor of $v$ must divide $k$. In Section 3 we recall some basic facts on regular designs and show that any super-regular design can be completely described in terms of differences. In particular, we prove that a sufficient condition for the existence of a $(G\times\mathbb{F}_q)$-super-regular design with $G$ a non-binary group of order $k$ and $q$ a power of a prime divisor of $k$ is the existence of an additive $(G\times\mathbb{F}_q,G\times\{0\},k,1)$ difference family. This is a set ${\cal F}$ of zero-sum $k$-subsets of $G\times \mathbb{F}_q$ whose list of differences is $(G\times\mathbb{F}_q)\setminus(G\times\{0\})$. In Section 4 we prove that such an $\cal F$ cannot exist for $k=2^n3\geq12$, clarifying in this way why this case is so hard. In Section 5 it is shown that a difference family as above can be realized by suitably lifting the blocks of an additive $(G,k,\lambda)$ strong difference family, that is, a collection of zero-sum $k$-multisets on $G$ whose list of differences is $\lambda$ times $G$. In Section 6, as a first application of the method of strong difference families, we construct an $\mathbb{F}_{p^n}$-super-regular $2$-$(p^n,p,1)$ design not isomorphic to the point-line design of $AG(n,p)$ for $p\in\{5,7\}$ and every integer $n\geq3$. In Section 7 a combined use of strong difference families and cyclotomy leads to a very technical asymptotic result. As a consequence of this result, the crucial ingredient for proving the main theorem is an additive $(G,k,\lambda)$ strong difference family with $G$ a non-binary group of order $k$ and $\gcd(k,\lambda)=1$. In Section 8 this ingredient is finally obtained, also via difference matrices, for all the relevant values of $k$ and then the main theorem is proved.
As mentioned above, the final construction leads to super-regular Steiner 2-designs with a huge number of points. In Section 9 it is shown that when $k=15$ the smallest $v$ given by this construction is $3\cdot5^{9565939}$. On the other hand we also show that a clever use of strong difference families and cyclotomy allows us to obtain smaller values of $v$. Still in the case $k=15$, we first obtain $v=3\cdot5^{187}$ and then $v=3\cdot5^{31}$ by means of two variations of the main construction. We also suggest a possible attempt to obtain $v=3\cdot5^7$ by means of a computer search. In Section 10 we sketch how the same tools used with so much labor to construct ``huge'' super-regular Steiner 2-designs allow us to rapidly obtain super-regular non-Steiner 2-designs with a ``reasonably small'' $v$ at the expense of a possibly large $\lambda$ and the loss of simplicity (each of them has ${v\over k}$ blocks repeated $\lambda$ times). For instance, we will show the existence of a non-simple super-regular 2-design with block size $15$ having only $3\cdot5^3$ points and with $\lambda=21$. In the last section we list some open questions. \section{Elementary facts about strictly additive Steiner 2-designs} In these preliminaries we establish some constraints on the parameters of a strictly additive Steiner 2-design. First, it is useful to show two very elementary facts which we believe are folklore. \begin{fact}\label{fact1} Every non-trivial subgroup of $\mathbb{F}_q^*$ (the multiplicative group of $\mathbb{F}_q$) is zero-sum. \end{fact} \begin{proof} Let $B\neq\{1\}$ be a subgroup of $\mathbb{F}_q^*$ and let $n$ be its order. Then, if $b$ is a generator of $B$, we have $b^n-1=0$, i.e., $(b-1)(\sum_{i=0}^{n-1}b^i)=0$ in $\mathbb{F}_q$. Thus $\sum_{i=0}^{n-1}b^i$, which is the sum of all elements of $B$, is equal to zero. \end{proof} The subgroup of an abelian group $G$ consisting of all the involutions of $G$ and zero will be denoted by $I(G)$, i.e., $I(G)=\{g\in G : 2g=0\}$.
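Fact \ref{fact1} is also easy to confirm by machine; a minimal sketch over the prime field $\mathbb{F}_{13}$ (our own check; $2$ is a primitive root modulo $13$, so every subgroup order divides $12$):

```python
p, g = 13, 2  # 2 generates F_13^*, which has order 12
for n in (2, 3, 4, 6, 12):          # orders of the non-trivial subgroups
    b = pow(g, (p - 1) // n, p)     # generator of the subgroup of order n
    H = {pow(b, i, p) for i in range(n)}
    assert len(H) == n
    assert sum(H) % p == 0          # the subgroup is zero-sum (Fact 1)
```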
We say that $G$ is {\it binary} when $I(G)$ has order 2, i.e., when $G$ has exactly one involution. \begin{fact}\label{fact2} An abelian group $G$ is not zero-sum if and only if it is binary. \end{fact} \begin{proof} The elements of $G\setminus I(G)$ are partitionable into 2-subsets consisting of opposite elements $\{g,-g\}$ so that $G\setminus I(G)$ is zero-sum. Then the sum of all elements of $G$ is equal to the sum of all elements of $I(G)$. Now note that either $I(G)=\{0\}$ or $I(G)$ is isomorphic to $\mathbb{Z}_2^n$ for some $n$. If $n=1$, then $G$ is binary and the sum of all elements of $G$ is the non-zero element of $I(G)$, namely the only involution of $G$. If $n>1$, then $G$ is not binary and $I(G)\setminus\{0\}$ can be viewed as the multiplicative group $\mathbb{F}_{2^n}^*$ of $\mathbb{F}_{2^n}$, hence it is zero-sum by Fact \ref{fact1}. \end{proof} From the above fact we immediately establish when the trivial $S(2,k,k)$ is strictly additive. \begin{prop}\label{trivial} The trivial $2$-$(k,k,1)$ design is strictly additive if and only if $k\not\equiv2$ $($mod $4)$. \end{prop} \begin{proof} It is evident that the trivial $2$-$(k,k,1)$ design is strictly additive if and only if there exists an abelian zero-sum group of order $k$. Then we get the assertion from Fact \ref{fact2} and the following observations. Every group of odd order $k$ is not binary. Every group of singly even order $k$ is binary. Among the groups of doubly even order $k$ we have $G=\mathbb{Z}_2^2\times \mathbb{Z}_{k/4}$ which is not binary. \end{proof} We recall that the {\it radical} of an integer $n$, denoted by $rad(n)$, is the product of all prime factors of $n$. Thus, the fact that a finite field $\mathbb{F}_q$ has characteristic $p$ can also be expressed by saying that $rad(q)=p$. The following property significantly reduces the admissible parameters for a strictly additive $2$-$(v,k,1)$ design.
\begin{prop}\label{rad} If a strictly $G$-additive $2$-$(v,k,1)$ design exists, then $G$ is zero-sum and the radical of $v$ is a divisor of $k$. \end{prop} \begin{proof} Let ${\cal D}=(G,{\cal B})$ be a $2$-$(v,k,1)$ design which is strictly additive under $G$. For any fixed element $g$ of $G$, let ${\cal B}_g$ be the set of blocks through $g$ and recall that its size $r$ (the so-called {\it replication number} of $\cal D$) does not depend on $g$. Now consider the double sum $$\sigma_g=\sum_{B\in{\cal B}_g}(\sum_{b\in B}b).$$ We have $\sum_{b\in B}b=0$ for every $B\in{\cal B}_g$ because $\cal D$ is strictly additive, hence $\sigma_g$ is null. Also note that in the expansion of $\sigma_g$ the fixed element $g$ appears as an addend exactly $r$ times whereas any other element $h$ of $G$ appears as an addend exactly once. Thus $\sigma_g$ can also be expressed as $(r-1)g+\sum_{h\in G}h$. We conclude that we have $$(r-1)g+\sum_{h\in G}h=0 \quad \forall g\in G.$$ Specializing this to the case $g=0$ we get $\sum_{h\in G}h=0$ which means that $G$ is zero-sum. Hence the first assertion is proved and we can write $$(r-1)g=0 \quad \forall g\in G.$$ This means that the order of every element of $G$ is a divisor of $r-1$. Let $p$ be a prime divisor of $v$, set $v=pw$, and take an element $g$ of $G$ of order $p$ (which exists by the theorem of Cauchy). By what we said, $p$ divides $r-1$. Now recall that $r={v-1\over k-1}$, hence $r-1={v-k\over k-1}$. Thus we can write ${pw-k\over k-1}=pn$ for some integer $n$ which gives $pw-k=(k-1)pn$. This equality implies that $p$ divides $k$. Thus every prime factor of $v$ is also a prime factor of $k$ and the second assertion follows. \end{proof} In particular, considering that every abelian group of singly even order is binary, we can state the following. \begin{cor}\label{v singly even} A strictly additive $2$-$(v,k,1)$ design with $v$ singly even cannot exist.
\end{cor} In the next section we will see that in a super-regular $2$-$(v,k,1)$ design the radicals of $v$ and $k$ are actually equal. \section{Difference families} We need to recall some classic results on regular designs. The {\it list of differences} of a subset $B$ of a group $G$ is the multiset $\Delta B$ of all possible differences $x-y$ with $(x,y)$ an ordered pair of distinct elements of $B$. More generally, if ${\cal F}$ is a set of subsets of $G$, the list of differences of $\cal F$ is the multiset union $\Delta{\cal F}=\biguplus_{B\in {\cal F}}\Delta B$. Let $H$ be a subgroup of a group $G$. A set ${\cal F}$ of $k$-subsets of $G$ is a $(G,H,k,1)$ difference family (briefly DF) if $\Delta{\cal F}=G\setminus H$. The members of such a DF are called {\it base blocks} and their number is clearly equal to ${v-h\over k(k-1)}$ where $v$ and $h$ are the orders of $G$ and $H$, respectively. Thus a necessary condition for its existence is that $v-h$ is divisible by $k(k-1)$. It is also necessary that $I(G)$ is a subgroup of $H$ since in a list of differences every involution necessarily appears an even number of times. If $G$ has order $v$ and $H=\{0\}$, one usually speaks of an {\it ordinary} $(v,k,1)$-DF in $G$. Instead, when $|H|=h>1$ one speaks of a $(v,h,k,1)$-DF in $G$ {\it relative to $H$} or, more briefly, of a {\it relative} $(v,h,k,1)$-DF. For general background on difference families as above we refer to \cite{BJL,CD}. More generally, one can speak of a {\it difference family relative to a partial spread of $G$}, a notion introduced by the first author in \cite{JSPI}. A {\it partial spread} of a group $G$ is a set $\cal H$ of subgroups of $G$ whose mutual intersections are trivial. It is a {\it spread} of $G$ when the union of its members is the whole $G$. Also, it is said {\it of type $\tau$} to express that the multiset of the orders of its members is $\tau$.
In particular, to say that $\cal H$ is of type $\{k^s\}$ means that $\cal H$ has exactly $s$ members and all of them have order $k$. Given a partial spread $\cal H$ of a group $G$, a set $\cal F$ of $k$-subsets of $G$ is said to be a $(G,{\cal H},k,1)$ difference family if $\Delta{\cal F}$ is the set of all elements of $G$ not belonging to any member of $\cal H$. If $G$ has order $v$ and $\cal H$ is of type $\tau$, one also speaks of a $(v,\tau,k,1)$-DF in $G$ relative to $\cal H$. If $\tau=\{k^s\}$ for some $s$, the obvious necessary conditions for its existence are the following: \begin{equation}\label{necessaryspread} k \ | \ v;\quad {v\over k}\equiv 1 \ ({\rm mod} \ k-1);\quad s\equiv1 \ ({\rm mod} \ k);\quad I(G)\subset\bigcup_{H\in{\cal H}}H \end{equation} Clearly, a $(G,H,k,1)$-DF can be seen as a difference family relative to a partial spread of size 1. The following theorem is a special case of a general result concerning regular {\it linear spaces} \cite{JSPI}. \begin{thm}\label{regular} Let $G$ be an abelian group of order $v$. A $G$-regular $2$-$(v,k,1)$ design may exist only for $v\equiv1$ or $k$ $($mod $k(k-1))$ and it is equivalent to: \begin{itemize} \item an ordinary $(v,k,1)$-DF in $G$ when $v\equiv1$ $($mod $k(k-1))$; \item a $(v,\{k^s\},k,1)$-DF in $G$ for some $s$ when $v\equiv k$ $($mod $k(k-1))$. \end{itemize} \end{thm} We remark that the above theorem is false when $G$ is non-abelian. \begin{rem}\label{rem} It is useful to recall the constructive part of the proof of the above theorem (which also works when $G$ is not abelian). \begin{itemize} \item[(r1)] The translates of the base blocks of an ordinary $(v,k,1)$-DF in $G$ form the block-set of a $G$-regular $2$-$(v,k,1)$ design.
\item[(r2)] If ${\cal F}$ is a $(v,\{k^s\},k,1)$-DF in $G$ relative to $\cal H$, then the translates of the base blocks of $\cal F$ together with the right cosets of the members of $\cal H$ form the block-set of a $G$-regular $2$-$(v,k,1)$ design. \end{itemize} \end{rem} It is immediate from Theorem \ref{regular} that any $G$-super-regular $2$-$(v,k,1)$ design is generated by a suitable difference family. Let us see some other consequences. \begin{prop}\label{k=4n+2} If there exists a $G$-super-regular $2$-$(v,k,1)$ design, then we have: \begin{itemize} \item[(i)] the order of every element of $G$ is a divisor of $k$; \item[(ii)] $v\equiv k$ $($mod $k(k-1))$; \item[(iii)] $rad(v)=rad(k)$; \item[(iv)] $k$ is not singly even. \end{itemize} \end{prop} \begin{proof} Let ${\cal D}$ be a $G$-super-regular $2$-$(v,k,1)$ design. (i).\quad Take any element $g$ of $G$ and any block $B$ of ${\cal D}$. By definition of a $G$-regular design $B+g$ is a block of ${\cal D}$ as well. Also, by definition of strictly $G$-additive design both $B$ and $B+g$ are zero-sum. Thus, considering that the elements of $B+g$ sum up to $(\sum_{b\in B} b)+kg$, we deduce that $kg=0$, i.e., the order of $g$ divides $k$. (ii).\quad If $v\equiv1$ (mod $k(k-1))$, then $k$ divides $v-1$. By (i), the order of any $g\in G$ divides $k$, hence it also divides $v-1$. On the other hand $ord(g)$ divides $v$ by Lagrange's theorem. Thus $ord(g)$ would be a common divisor of $v$ and $v-1$ for every $g\in G$. This would imply $v=1$ which is absurd. We conclude, by Theorem \ref{regular}, that we have $v\equiv k$ (mod $k(k-1))$. (iii).\quad We already know from Proposition \ref{rad} that $rad(v)$ divides $rad(k)$. On the other hand $k$ divides $v$ because of condition (ii) proved above, hence $rad(k)$ divides $rad(v)$ and the assertion follows. (iv).\quad $\cal D$ has at least one block $B$ which is a subgroup of $G$ of order $k$ in view of (ii) and Remark \ref{rem}(r2).
Considering that $B$ is zero-sum by assumption, the group $B$ is not binary by Fact \ref{fact2}, hence $k\not\equiv2$ (mod 4). \end{proof} Note that condition (i) of the above proposition implies, in particular, that if $p$ is a prime factor of $k$ but $p^2$ does not divide $k$, then the Sylow $p$-subgroup of $G$ is elementary abelian. Hence, when $k$ is square-free, $G$ is necessarily a direct product of elementary abelian groups. In the following a $(G,{\cal H},k,1)$-DF will be said to be {\it additive} if all its base blocks are zero-sum and all the members of $\cal H$ are zero-sum (i.e., not binary) as well. The above results (Theorem \ref{regular}, Remark \ref{rem} and Proposition \ref{k=4n+2}) allow us to state the following. \begin{lem}\label{additiveDF} There exists a $G$-super-regular $2$-$(v,k,1)$ design if and only if $G$ satisfies conditions (i), (ii) of Proposition \ref{k=4n+2} and there exists an additive $(G,{\cal H},k,1)$-DF of type $\{k^s\}$ for some $s$. \end{lem} The next lemma will be our main tool to construct super-regular Steiner 2-designs. \begin{lem}\label{additive(kq,k,k,1)} Let $G$ be a zero-sum group of order $k$ and let $q\equiv1$ $($mod $k-1)$ be a power of a prime divisor $p$ of $k$. If there exists an additive $(G\times\mathbb{F}_q,G\times\{0\},k,1)$-DF, then there exists a $(G\times\mathbb{F}_{q^n})$-super-regular $2$-$(kq^n,k,1)$ design for every $n\geq1$. \end{lem} \begin{proof} The hypotheses easily imply that $G\times\mathbb{F}_{q^n}$ satisfies conditions (i), (ii) of Proposition \ref{k=4n+2} for every $n$. Let $\cal F$ be an additive $(G\times\mathbb{F}_q,G\times\{0\},k,1)$-DF, and let $S$ be a complete system of representatives for the cosets of $\mathbb{F}_q^*$ in $\mathbb{F}_{q^n}^*$. For every base block $B$ of $\cal F$ and every $s\in S$, let $B\circ s$ be the subset of $G\times\mathbb{F}_{q^n}$ obtained from $B$ by multiplying the second coordinates of all its elements by $s$.
It is easy to see that ${\cal F}\circ S:=\{B\circ s \ | \ B\in{\cal F}; s\in S\}$ is an additive $(G\times\mathbb{F}_{q^n},G\times\{0\},k,1)$-DF. The assertion then follows from Lemma \ref{additiveDF}. \end{proof} Recall that, for the time being, only three classes of non-trivial additive Steiner 2-designs are known, namely classes C1, C2, C3 mentioned in the introduction. The set of their block sizes clearly coincides with the set $Q \ \cup \ (Q+1)$ where $Q$ is the set of all prime powers. Thus, for now, we do not have any example of an additive non-trivial Steiner 2-design whose block size is neither a prime power nor a prime power plus one. Also, for $k\in(Q+1)\setminus Q$ we have only one example, namely the projective plane of order $k-1$. Let us examine the very first possible attempt to fill these gaps using the above lemma. The first $k$ which is neither a prime power nor a prime power plus one is 15. We can try to find a super-regular $2$-$(15q,15,1)$ design using Lemma \ref{additive(kq,k,k,1)}, i.e., via an additive $(\mathbb{Z}_{15}\times\mathbb{F}_q,\mathbb{Z}_{15}\times\{0\},15,1)$-DF with $q$ a power of 3 or a power of 5. The first case is ruled out by Theorem \ref{nonexistence} in the next section. Thus $q$ has to be taken among the powers of 5. More precisely, in view of the condition $q\equiv1$ (mod 14), we have to take $q=5^{6n}$ for some $n$. We conclude that $2$-$(15\cdot5^6,15,1)$ is the first parameter set of a super-regular Steiner 2-design with block size not belonging to $Q\cup(Q+1)$ potentially obtainable via Lemma \ref{additive(kq,k,k,1)}. Unfortunately, we are not able to construct a design with these parameters. In the penultimate section we indicate a possible attempt to get it by means of a computer search. In that same section we will prove the existence of a super-regular $2$-$(15\cdot5^{30},15,1)$ design.
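The condition on $q$ can be verified directly: the multiplicative order of $5$ modulo $14$ is $6$, so a power of $5$ is $\equiv1\pmod{14}$ exactly when its exponent is a multiple of $6$ (a quick check of ours):

```python
def mult_order(a, m):
    """Multiplicative order of a modulo m (gcd(a, m) = 1 assumed)."""
    x, n = a % m, 1
    while x != 1:
        x = x * a % m
        n += 1
    return n

k = 15
assert mult_order(5, k - 1) == 6      # hence q = 5^(6n) for some n
assert pow(5, 6, 14) == 1 and all(pow(5, j, 14) != 1 for j in range(1, 6))
```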
\section{One more necessary condition and the hard case $k=2^n3$ with $n\geq2$}\label{2^n3} The following result will lead to one more condition on the parameters of a super-regular Steiner 2-design. This result will also imply that a non-trivial super-regular 2-$(v,k,1)$ design with $k=2^n3$ may be generated by an additive $(v,k,k,1)$-DF only if a very strong condition on $n$ holds. As a matter of fact we suspect that this condition is never satisfied. This is why in our main result we are not able to say anything about the case $k=2^n3$ which appears to us to be very hard. \begin{thm}\label{nonexistence} If a super-regular $2$-$(v,k,1)$ design is generated by an additive $(v,k,k,1)$-DF and $k\equiv\pm3$ $($mod $9)$, then ${v\over k}\equiv1$ $($mod $3)$. \end{thm} \begin{proof} Let $\cal F$ be an additive $(G,H,k,1)$-DF generating a $G$-super-regular $2$-$(v,k,1)$ design with $k\equiv\pm3$ $($mod $9)$. Thus $G$ is a group of order $v\equiv k$ (mod $k(k-1))$, say $v=kv_1$, and $H$ is a subgroup of $G$ of order $k=3k_1$ with $k_1$ not divisible by 3. By what we observed immediately after Proposition \ref{k=4n+2}, the Sylow 3-subgroup of $G$ is elementary abelian. For this reason, for every two subgroups of $G$ of order 3 there exists an automorphism of $G$ mapping one onto the other. Then, up to isomorphism, we may assume that $G=\mathbb{Z}_3\times G_1$ with $G_1$ of order $k_1v_1$ and $H=\mathbb{Z}_3\times H_1$ with $H_1$ a subgroup of $G_1$ of order $k_1$. For each $B\in{\cal F}$, let $\overline{B}$ be the $k$-multiset on $\mathbb{Z}_3$ that is the projection of $B$ on $\mathbb{Z}_3$ and set $\overline{\cal F}=\{\overline{B} \ | \ B\in{\cal F}\}$. It is clear that $\Delta\overline{\cal F}$ is the projection of $\Delta{\cal F}$ on $\mathbb{Z}_3$.
Thus, considering that $\Delta{\cal F}=(\mathbb{Z}_3\times G_1)\setminus(\mathbb{Z}_3\times H_1)$ by assumption, it is clear that $\Delta\overline{\cal F}$ is $\lambda$ times $\mathbb{Z}_3$ with $\lambda$ equal to the size of $G_1\setminus H_1$, i.e., $\lambda=k_1(v_1-1)$. Using some terminology that we will recall in the next section, $\overline{\cal F}$ is essentially a $(\mathbb{Z}_3,k,\lambda)$ {\it strong difference family}. Take any block $\overline{B}$ of $\overline{\cal F}$ and for $i=0,1,2$, let $\mu_i$ be the multiplicity of $i$ in $\overline{B}$. Clearly, we have $\mu_0+\mu_1+\mu_2=k$, hence $\mu_0+\mu_1+\mu_2\equiv0$ (mod 3). Considering that $\cal F$ is additive, $B$ is zero-sum and then $\overline{B}$ is zero-sum as well. It follows that $\mu_1+2\mu_2=0$ in $\mathbb{Z}_3$, i.e., $\mu_1\equiv\mu_2$ (mod 3). We easily conclude that \begin{equation}\label{mu} \mu_0\equiv\mu_1\equiv\mu_2 \quad {\rm (mod \ 3)} \end{equation} Now, let $\nu$ be the multiplicity of zero in $\Delta\overline{B}$ and note that we have $$\nu=\mu_0(\mu_0-1)+\mu_1(\mu_1-1)+\mu_2(\mu_2-1),$$ hence $\nu\equiv0$ (mod 3) in view of (\ref{mu}). Note that $\lambda$ can be seen as the sum of the multiplicities of zero in the lists of differences of the blocks of $\overline{\cal F}$. By what we just saw, all these multiplicities are zero (mod 3) and then $\lambda\equiv0$ (mod 3). Recalling that $\lambda=k_1(v_1-1)$ we conclude that $v_1\equiv1$ (mod 3), which is the assertion. \end{proof} As a consequence, we get the following non-existence result. \begin{thm} A super-regular $2$-$(v,k,1)$ design with $k\equiv\pm3$ $($mod $6)$ and ${v\over k}\equiv2$ $($mod $3)$ cannot exist. \end{thm} \begin{proof} Assume that there exists a $G$-super-regular $2$-$(v,k,1)$ design $\cal D$ with $v$ and $k$ as in the statement. Then $\cal D$ cannot be generated by an additive $(v,k,k,1)$-DF by Theorem \ref{nonexistence}.
It follows that $\cal D$ is generated by an additive $(v,\{k^s\},k,1)$-DF for a suitable $s>1$ by Lemma \ref{additiveDF}. On the other hand the hypotheses obviously imply that $v$, as $k$, is divisible by 3 but not by 9. Thus $G$ necessarily has only one subgroup of order 3, hence it cannot have a partial spread with two distinct members of order $k$. We have reached a contradiction. \end{proof} Each of the following pairs $(v,k)$ satisfies the admissibility conditions $v\equiv k$ (mod $k(k-1))$ and $rad(v)=rad(k)$ given by Proposition \ref{k=4n+2}. Yet, for each of them no super-regular 2-$(v,k,1)$ design exists in view of the above theorem. \begin{center} \begin{tabular}{|c|c|} \hline $v$ & $k$\\ \hline $3\cdot2^6\cdot5^{10}$ & $3\cdot2^2\cdot5$ \\ \hline $3\cdot2^{18}\cdot11^{10}$ & $3\cdot2^2\cdot11$\\ \hline $3\cdot5\cdot11^7$ & $3\cdot5\cdot11$\\ \hline $3\cdot2^{21}\cdot7^{3}$ & $3\cdot2^3\cdot7$\\ \hline $3\cdot5^{22}\cdot13^{4}$ & $3\cdot5\cdot13$\\ \hline $3\cdot2^{26}\cdot5^{6}$ & $3\cdot2^4\cdot5$\\ \hline \end{tabular} \end{center} Another consequence of Theorem \ref{nonexistence} is the following. \begin{thm} Let $k=2^n3$ and assume that there exists a super-regular $2$-$(v,k,1)$ design generated by an additive $(v,k,k,1)$-DF. Then $v=2^{oi+n}3$ where $o$ is the order of $2$ in the group of units $($mod $k-1)$ and $0\leq i\leq \lfloor{n^2-n\over o}\rfloor$. \end{thm} \begin{proof} Let $\cal D$ be a $G$-super-regular $2$-$(v,k,1)$ design with $k=2^n3$ and assume that $\cal D$ is generated by an additive $(G,H,k,1)$-DF so that $G$ has order $v$ and $H$ is a subgroup of $G$ of order $k$. By Proposition \ref{k=4n+2} (ii) and (iii) we have $v=2^a3^b\equiv k$ (mod $k(k-1))$. Thus, reducing mod $k$ and mod $k-1$ we respectively get \begin{equation}\label{mod k & mod k-1} 2^a3^b\equiv 2^n3 \ (mod \ 2^n3) \quad {\rm and}\quad 2^a3^b\equiv 1 \ (mod \ 2^n3-1) \end{equation} From the first of the above congruences we deduce that $a\geq n$ and $b\geq 1$.
By Theorem \ref{nonexistence} we must have $2^{a-n}3^{b-1}\equiv1$ (mod 3) which implies $b=1$. Hence $v=2^a3$ with $a\geq n$. Multiplying the second congruence in (\ref{mod k & mod k-1}) by $2^n$ (which is the inverse of $3$ mod $k-1$), we get $2^a\equiv2^n$ (mod $k-1$), i.e., $2^{a-n}\equiv1$ (mod $k-1$). This, by definition of $o$, means that $a=oi+n$ for some integer $i$. Hence we have \begin{equation}\label{v1} v=2^{oi+n}3 \end{equation} Now let $2^t$ be the order of $I(G)$ and recall that $I(G)$ is necessarily contained in $H$ so that we have $t\leq n$. Up to isomorphism, by the fundamental theorem on abelian groups, we have $G=\mathbb{Z}_{2^{\alpha_1}}\times\dots\times\mathbb{Z}_{2^{\alpha_t}}\times\mathbb{Z}_3$ for a suitable $t$-tuple $(\alpha_1,\dots,\alpha_t)$ of positive integers summing up to $a$. For $i=1,\dots,t$, there are elements of $G$ of order $2^{\alpha_i}$; for instance the element whose $i$th coordinate is 1 and all the other coordinates are zero. Hence $2^{\alpha_i}$ divides $k$ by Proposition \ref{k=4n+2}(i) and then $\alpha_i\leq n$ for $i=1,\dots,t$. We deduce that we have \begin{equation}\label{v2} v=|G|\leq(2^n)^t3\leq 2^{n^2}3 \end{equation} Comparing (\ref{v1}) and (\ref{v2}) we get $oi+n\leq n^2$, i.e., $i\leq \lfloor{n^2-n\over o}\rfloor$ and the assertion follows. \end{proof} \begin{cor} If $k=2^n3$ and there exists a non-trivial super-regular $2$-$(v,k,1)$ design generated by an additive $(v,k,k,1)$-DF, then the order of $2$ in the group of units of $\mathbb{Z}_{k-1}$ is less than $n^2-n$. \end{cor} We suspect that the order of 2 in the group of units of $\mathbb{Z}_{2^n3-1}$ is always greater than $n^2-n$ but we are not able to prove it. 
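For small $n$ the suspected inequality is easy to test; a minimal sketch (our own check, which of course falls far short of a proof):

```python
def mult_order(a, m):
    """Multiplicative order of a modulo m (gcd(a, m) = 1 assumed)."""
    x, n = a % m, 1
    while x != 1:
        x = x * a % m
        n += 1
    return n

for n in range(2, 16):
    m = 2**n * 3 - 1
    assert mult_order(2, m) > n * n - n   # the suspected inequality, small n only
```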
For now, we are able to say that it is true for $n\leq1000$ (checked by computer) and whenever $2^n3-1$ has a prime factor greater than $(n^2-n)^2$; this is a consequence of a result proved in \cite{M} according to which the order of 2 modulo an odd prime $p$ is almost always at least as large as the square root of $p$. Thus, for now, we can state the following. \begin{rem} Let $k=2^n3$ with $n\leq1000$, or such that $k-1$ has a prime factor greater than $(n^2-n)^2$. Then there is no value of $v$ for which a putative non-trivial super-regular $2$-$(v,k,1)$ design may be generated by an additive $(v,k,k,1)$-DF. \end{rem} The above leads us to believe that the existence of a non-trivial super-regular $2$-$(v,2^n3,1)$ design generated by an additive $(v,k,k,1)$-DF is highly unlikely. On the other hand such a design might be obtained via a difference family relative to a partial spread of size greater than 1. For instance, we cannot rule out that there exists a $G$-super-regular $2$-$(3^{9}4^4,12,1)$ design generated by an additive $(G,{\cal H},12,1)$-DF with $G=\mathbb{F}_{3^{9}}\times\mathbb{F}_{4^4}$ and $\cal H$ a partial spread of $G$ of type $\{12^{85}\}$. Indeed $G$ satisfies conditions (ii), (iii) of Proposition \ref{k=4n+2} and the necessary conditions (\ref{necessaryspread}) are also satisfied with an $\cal H$ constructible as follows. Take a (full) spread ${\cal H}_1$ of $\mathbb{F}_{4^4}$ consisting of subgroups of $\mathbb{F}_{4^4}$ of order 4 and note that it has size ${4^4-1\over3}=85$. Now take the (full) spread ${\cal H}_2$ of $\mathbb{F}_{3^{9}}$ consisting of all subgroups of $\mathbb{F}_{3^{9}}$ of order 3 which has size ${3^{9}-1\over2}>85$. Thus it is possible to choose an injective map $f:{\cal H}_1\longrightarrow{\cal H}_2$ and we can take ${\cal H}:=\{H\times f(H) \ | \ H\in{\cal H}_1\}$. On the other hand, to realize an additive $(G,{\cal H},12,1)$-DF with $G$ and ${\cal H}$ as above appears to be infeasible; suffice it to say that it would have 38,166 base blocks.
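The numerology of this example is easily double-checked by machine; a small computational aside:

```python
# Sanity checks for the hypothetical 2-(3^9*4^4, 12, 1) example above.
v = 3**9 * 4**4                 # |G| with G = F_{3^9} x F_{4^4}
k = 12
h1 = (4**4 - 1) // 3            # size of the spread of F_{4^4} by subgroups of order 4
h2 = (3**9 - 1) // 2            # size of the spread of F_{3^9} by subgroups of order 3
covered = h1 * (k - 1)          # nonzero elements lying in the partial spread {12^85}
# each base block contributes k(k-1) differences, covering G minus the spread once:
base_blocks = (v - 1 - covered) // (k * (k - 1))
```

Here `base_blocks` recovers the figure of 38,166 base blocks quoted above, and `h2 > h1` confirms that an injective map $f:{\cal H}_1\longrightarrow{\cal H}_2$ exists.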
Also, the fact that the literature completely lacks constructions for $(v,\{k^s\},k,1)$ difference families with $s>1$ further underlines the difficulty of the problem. \section{Strong difference families}\label{sectionSDF} In view of Lemma \ref{additive(kq,k,k,1)} our target will be the construction of additive $(G\times\mathbb{F}_q,G\times\{0\},k,1)$ difference families with $G$ of order $k$ and $q$ a power of a prime divisor of $k$. For this, we need one more variant of a difference family, that is, a {\it strong difference family}. The notion of the list of differences of a subset of a group $G$ can be naturally generalized to that of the list of differences of a multiset on $G$ as follows. If $B=\{b_1,\dots,b_k\}$ is a multiset on a group $G$, then the list of differences of $B$ is the multiset $\Delta B$ of all possible differences $b_i-b_j$ with $(i,j)$ an ordered pair of distinct elements of $\{1,\dots,k\}$. It is evident that the multiplicity of zero in $\Delta B$ is even. Indeed if $b_i-b_j=0$, then $b_j-b_i=0$ as well. It is also evident that this multiplicity is equal to zero if and only if $B$ does not have repeated elements, i.e., $B$ is a set. By the list of differences of a collection ${\cal F}$ of multisets on $G$ one means the multiset union $\Delta{\cal F}=\biguplus_{B\in {\cal F}}\Delta B$. \begin{defn}\label{SDF} Let $G$ be a group of order $v$ and let ${\cal F}$ be a collection of $k$-multisets on $G$. One says that ${\cal F}$ is a $(v,k,\lambda)$ strong difference family in $G$ (or briefly a $(G,k,\lambda)$-SDF) if $\Delta{\cal F}$ covers every element of $G$ ($0$ included) exactly $\lambda$ times. \end{defn} Note that if $s$ is the number of blocks of a $(G,k,\lambda)$-SDF, then we necessarily have $\lambda|G|=sk(k-1)$. A SDF with only one block is called a {\it difference multiset} \cite{B99} or also a {\it difference cover} \cite{Arasu}. \begin{ex}\label{5,5,4} Take the $5$-multiset $B=\{0,1,1,4,4\}$ on $\mathbb{Z}_5$.
Looking at its ``difference table'' \begin{center} \begin{tabular}{|c||c|c|c|c|c|} \hline  & {\scriptsize$0$} & {\scriptsize$1$} & {\scriptsize$1$} & {\scriptsize$4$} & {\scriptsize$4$} \\ \hline\hline {\scriptsize$0$} & $\bullet$ & $\bf4$ & $\bf4$ & $\bf1$ & $\bf1$ \\ \hline {\scriptsize$1$} & $\bf1$ & $\bullet$ & $\bf0$ & $\bf2$ & $\bf2$ \\ \hline {\scriptsize$1$} & $\bf1$ & $\bf0$ & $\bullet$ & $\bf2$ & $\bf2$ \\ \hline {\scriptsize$4$} & $\bf4$ & $\bf3$ & $\bf3$ & $\bullet$ & $\bf0$ \\ \hline {\scriptsize$4$} & $\bf4$ & $\bf3$ & $\bf3$ & $\bf0$ & $\bullet$ \\ \hline \end{tabular} \end{center} we see that the singleton $\{B\}$ is a $(5,5,4)$-SDF in $\mathbb{Z}_5$. \end{ex} Throughout the paper, the union of $n$ copies of a set or multiset $S$ will be denoted by $\underline{n}S$. Thus the difference multiset of the previous example can be denoted as $\{0\} \ \cup \ \underline{2}\{1,4\}$. More generally, we recall that if $q$ is an odd prime power and $\mathbb{F}_q^\Box$ is the set of non-zero squares of $\mathbb{F}_q$, then $\{0\} \ \cup \ \underline{2}\mathbb{F}_q^\Box$ is the so-called {\it $(q,q,q-1)$ Paley difference multiset of the first type} \cite{B99}. We will say that a multiset on a group $G$ is zero-sum if the sum of all its elements (counting their multiplicities) is zero. A SDF in $G$ is said to be {\it additive} if all its members are zero-sum. In view of Fact \ref{fact1} the Paley $(q,q,q-1)$ difference multisets of the first type are additive provided that $q\neq3$. \medskip Strong difference families are a very useful tool to construct relative difference families. Even though they were implicitly considered in some older literature, they were formally introduced by the first author in \cite{B99}. After that, they turned out to be crucial in many constructions in design theory (see, e.g., \cite{BBGRT,BuGio,BP,BYW,CCFW,CFW1,CFW2,FW,Momihara,YYL}).
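The defining property in Example \ref{5,5,4} is immediate to confirm mechanically; a throwaway check (a computational aside, not part of the formal development):

```python
from collections import Counter

def delta(block, v):
    """List of differences of a multiset on Z_v: all b_i - b_j over ordered
    pairs of distinct positions (i, j)."""
    return Counter((block[i] - block[j]) % v
                   for i in range(len(block))
                   for j in range(len(block)) if i != j)

B = [0, 1, 1, 4, 4]      # the 5-multiset of Example above
diffs = delta(B, 5)      # every element of Z_5 should appear exactly 4 times
```

The multiset $B$ is also zero-sum ($0+1+1+4+4\equiv0$ mod 5), in agreement with the additivity of the Paley difference multisets noted above.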
The following construction explains how to use strong difference families in order to construct relative difference families. \begin{constr}\label{constr} Let $\Sigma=\{B_1,\dots,B_s\}$ be a $(G,k,\lambda)$-SDF and let $q\equiv1$ (mod $\lambda$) be a prime power. Lift each block $B_h=\{b_{h1},\dots,b_{hk}\}$ of $\Sigma$ to a subset $\ell(B_h)=\{(b_{h1},\ell_{h1}),\dots,(b_{hk},\ell_{hk})\}$ of $G\times\mathbb{F}_q$. By definition of a strong difference family, we have $\biguplus_{h=1}^s\Delta\ell(B_h)=\biguplus_{g\in G}\{g\}\times\Delta_g$ where each $\Delta_g$ is a $\lambda$-multiset on $\mathbb{F}_q$. Hence, if the liftings have been done appropriately, it may happen that there exists a ${q-1\over\lambda}$-subset $M$ of $\mathbb{F}_q^*$ such that $\Delta_g\cdot M=\mathbb{F}_q^*$ for each $g\in G$. In this case, it is easy to see that $${\cal F}=\bigl\{\{(b_{h1},\ell_{h1}m),\dots,(b_{hk},\ell_{hk}m)\} \ | \ 1\leq h\leq s; m\in M\bigr\}$$ is a $(G\times\mathbb{F}_q,G\times\{0\},k,1)$-DF. This DF is clearly additive under the additional hypothesis that $\Sigma$ is additive and each $\ell(B_h)$ is zero-sum. \end{constr} In most cases the above construction is applied when each $\Delta_g$ is a complete system of representatives for the cosets of the subgroup $C^\lambda$ of $\mathbb{F}_q^*$ of index $\lambda$, that is the group of non-zero $\lambda$-th powers of $\mathbb{F}_q$. Indeed in this case we have $\Delta_g\cdot M=\mathbb{F}_q^*$ for each $g\in G$ with $M=C^\lambda$. Note, however, that $\Delta_g$ is of the form $\{1,-1\}\cdot\overline{\Delta}_g$ for every $g\in I(G)$, hence it contains pairs $\{x,-x\}$ of opposite elements. Thus, if the elements of $\Delta_g$ belong to pairwise distinct cosets of $C^\lambda$, we necessarily have $-1\notin C^\lambda$, i.e., $q\equiv \lambda+1$ (mod $2\lambda$). This explains why in the next Theorems \ref{SDF->DF} and \ref{additive version} we require that this congruence holds.
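The closing observation can be illustrated on a toy case; the sketch below takes $q=13$ and $\lambda=4$ (so $q\equiv\lambda+1$ (mod $2\lambda$)) and assumes, as is easily verified, that 2 is a primitive root modulo 13 — a computational aside only:

```python
# Cyclotomic classes C^lam_i = r^i * C^lam of F_q for q = 13, lam = 4,
# illustrating that -1 lies in the class of index lam/2 (so -1 is not
# a lam-th power) exactly as the congruence q = lam+1 (mod 2*lam) predicts.
q, lam = 13, 4
r = 2                                    # assumed primitive root mod 13
powers = [pow(r, e, q) for e in range(q - 1)]
classes = {i: {pow(r, lam * j + i, q) for j in range((q - 1) // lam)}
           for i in range(lam)}          # the lam cyclotomic classes of order lam
minus_one_class = next(i for i, c in classes.items() if q - 1 in c)
```

Here `minus_one_class` comes out as $\lambda/2$, in accordance with $-1\in C^\lambda_{\lambda/2}$.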
\section{Anomalous $2$-$(q^n,q,1)$ designs} Let us say that a $2$-$(q^n,q,1)$ design is {\it anomalous} if it is $\mathbb{F}_{q^n}$-super-regular but not isomorphic to the design of points and lines of AG$(n,q)$. \begin{prop} If there exists an anomalous $2$-$(q^n,q,1)$ design, then there exists an anomalous $2$-$(q^m,q,1)$ design for any $m\geq n$. \end{prop} \begin{proof} Let $V$ be the $n$-dimensional subspace of AG$(m,q)$ defined by the equations $x_i=0$ for $n+1\leq i\leq m$. Take the {\it standard} $2$-$(q^m,q,1)$ design $(\mathbb{F}_q^m,{\cal B})$ and replace all its blocks contained in $V$ with the blocks of an anomalous $2$-$(q^n,q,1)$ design. We get, in this way, the block-set of an anomalous $2$-$(q^m,q,1)$ design. \end{proof} In the next theorem we put into practice Lemma \ref{additive(kq,k,k,1)} and Construction \ref{constr} to get an anomalous $2$-$(p^3,p,1)$ design for $p=5$ and $p=7$. Our proof is a slight modification of the construction for regular $2$-$(pq,p,1)$ designs in \cite{B&B} (improved in \cite{cyclotomic}) with $p$ and $q$ prime powers, $q\equiv1$ (mod $p-1$). In our construction below $q$ coincides with $p^2$. \begin{thm}\label{125,5,1} There exists an anomalous $2$-$(p^3,p,1)$ design for $p=5$ and $p=7$. \end{thm} \begin{proof} By Lemma \ref{additive(kq,k,k,1)} a super-regular $2$-$(5^3,5,1)$ design can be realized by means of an additive $(\mathbb{Z}_5\times\mathbb{F}_{25},\mathbb{Z}_5\times\{0\},5,1)$-DF. We can obtain several DFs of the required kind using Construction \ref{constr} with $\Sigma$ the additive $(5,5,4)$ difference multiset $B=\{0,1,1,4,4\}$ of Example \ref{5,5,4}. For instance, let us lift $B$ to the subset $\ell(B)$ of $\mathbb{Z}_5\times\mathbb{F}_{25}$ $$\ell(B)=\{(0,0),(1,1),(1,-1),(4,\ell),(4,-\ell)\}$$ with $\ell$ a root of the primitive polynomial $x^2+x+2$. It is readily seen that $\ell(B)$ is zero-sum. 
Looking at its difference table \begin{center} \begin{tabular}{|c||c|c|c|c|c|} \hline  & {\scriptsize$(0,0)$} & {\scriptsize$(1,1)$} & {\scriptsize$(1,-1)$} & {\scriptsize$(4,\ell)$} & {\scriptsize$(4,-\ell)$} \\ \hline\hline {\scriptsize$(0,0)$} & $\bullet$ & $\bf(4,-1)$ & $\bf(4,1)$ & $\bf(1,-\ell)$ & $\bf(1,\ell)$ \\ \hline {\scriptsize$(1,1)$} & $\bf(1,1)$ & $\bullet$ & $\bf(0,2)$ & $\bf(2,1-\ell)$ & $\bf(2,1+\ell)$ \\ \hline {\scriptsize$(1,-1)$} & $\bf(1,-1)$ & $\bf(0,-2)$ & $\bullet$ & $\bf(2,-1-\ell)$ & $\bf(2,-1+\ell)$ \\ \hline {\scriptsize$(4,\ell)$} & $\bf(4,\ell)$ & $\bf(3,\ell-1)$ & $\bf(3,1+\ell)$ & $\bullet$ & $\bf(0,2\ell)$ \\ \hline {\scriptsize$(4,-\ell)$} & $\bf(4,-\ell)$ & $\bf(3,-1-\ell)$ & $\bf(3,-\ell+1)$ & $\bf(0,-2\ell)$ & $\bullet$ \\ \hline \end{tabular} \end{center} we see that $\displaystyle\Delta\ell(B)=\bigcup_{g=0}^4 \{g\}\times\Delta_g$ with \smallskip $\Delta_0=\{1,-1\}\cdot\{2,2\ell\};$ \smallskip $\Delta_1=\Delta_4=\{1,-1\}\cdot\{1,\ell\};$ \smallskip $\Delta_2=\Delta_3=\{1,-1\}\cdot\{\ell-1,\ell+1\}.$ Now note that each of the $2$-sets $\overline{\Delta}_0=\{2,2\ell\}$, $\overline{\Delta}_1=\{1,\ell\}$ and $\overline{\Delta}_2=\{\ell-1,\ell+1\}$ contains a non-zero square and a non-square of $\mathbb{F}_{25}$. Thus, if $M$ is a complete system of representatives for the cosets of $\{1,-1\}$ in $\mathbb{F}_{25}^\Box$, we clearly have $\Delta_g\cdot M=\mathbb{F}_{25}^*$. Hence $${\cal F}=\bigl\{\{(0,0),(1,m),(1,-m),(4,\ell m),(4,-\ell m)\} \ | \ m\in M\bigr\}$$ is an additive $(\mathbb{Z}_5\times\mathbb{F}_{25},\mathbb{Z}_5\times\{0\},5,1)$-DF.
If we take, for instance, $M=\{\ell^{2i} \ | \ 0\leq i\leq 5\}$ then the blocks of $\cal F$, written in additive notation, are the following: $$B_1=\{(0,0,0),(1,0,1),(1,0,4),(4,1,0),(4,4,0)\};$$ $$B_2=\{(0,0,0),(1,4,3),(1,1,2),(4,4,2),(4,1,3)\};$$ $$B_3=\{(0,0,0),(1,3,2),(1,2,3),(4,4,4),(4,1,1)\};$$ $$B_4=\{(0,0,0),(1,0,2),(1,0,3),(4,2,0),(4,3,0)\};$$ $$B_5=\{(0,0,0),(1,3,1),(1,2,4),(4,3,4),(4,2,1)\};$$ $$B_6=\{(0,0,0),(1,1,4),(1,4,1),(4,3,3),(4,2,2)\}.$$ We can check, by hand, that the super-regular $2$-$(125,5,1)$ design $\cal D$ generated by $\cal F$ is anomalous. Assume for contradiction that it is isomorphic to the point-line design of AG$(3,5)$. It is then natural to speak of {\it lines} of $\cal D$ rather than blocks. Also, it makes sense to speak of the {\it planes} of $\cal D$ and a line containing two distinct points of a plane $\pi$ is clearly contained in $\pi$. Let $\pi$ be the plane of $\cal D$ containing the two lines through the origin $B_0=\{(0,0,0),(1,0,0),(2,0,0),(3,0,0),(4,0,0)\}$ and $B_1$. Of course, if ${\cal B}_\pi$ is the set of lines of $\cal D$ contained in $\pi$, then $(\pi,{\cal B}_\pi)$ is isomorphic to the affine plane over $\mathbb{F}_5$. The line through $(1,0,0)\in B_0$ and $(1,0,1)\in B_1$ is $$C=B_4+(0,0,3)=\{(0,0,3),{\bf(1,0,0)},{\bf(1,0,1)},(4,2,3),(4,3,3)\}$$ and belongs to ${\cal B}_\pi$ since it joins two points of $\pi$. The line through $(1,0,4)\in B_1$ and $(0,0,3)\in C$ is $$D=B_1+(0,0,3)=\{{\bf(0,0,3)},{\bf(1,0,4)},(1,0,2),(4,1,3),(4,4,3)\}.$$ The line through $(1,0,4)\in B_1$ and $(4,2,3)\in C$ is $$D'=B_6+(0,4,0)=\{(0,4,0),{\bf(1,0,4)},(1,3,1),{\bf(4,2,3)},(4,1,2)\}.$$ These two lines $D$ and $D'$ also belong to ${\cal B}_\pi$ since they also join two points of $\pi$. We also note that they are both disjoint with the line $B_0 \in {\cal B}_\pi$. 
This contradicts Euclid's parallel axiom: there is a point of $\pi$ (that is $(1,0,4)$) and two distinct lines of $\pi$ through this point ($D$ and $D'$) which are both disjoint with a line of $\pi$ (that is $B_0$). \medskip Now consider the $(7,7,6)$ Paley difference multiset of the first type, that is $\{0\} \cup \underline{2}\{1,2,4\}$, and apply Construction \ref{constr} lifting it to a suitable 7-subset of $\mathbb{F}_{49}$. Without entering into all the details, we just list the base blocks of the resultant $(\mathbb{Z}_7^3,\mathbb{Z}_7\times\{0\}\times\{0\},7,1)$-DF. $$\{(0,0,0),(1,1,0),(1,6,0),(2,2,1),(2,5,6),(4,2,0),(4,5,0)\}$$ $$\{(0,0,0),(1,2,4),(1,5,3),(2,0,3),(2,0,4),(4,4,1),(4,3,6)\}$$ $$\{(0,0,0),(1,2,2),(1,5,5),(2,2,6),(2,5,1),(4,4,4),(4,3,3)\}$$ $$\{(0,0,0),(1,3,5),(1,4,2),(2,1,6),(2,6,1),(4,6,3),(4,1,4)\}$$ $$\{(0,0,0),(1,0,1),(1,0,6),(2,6,2),(2,1,5),(4,0,2),(4,0,5)\}$$ $$\{(0,0,0),(1,3,2),(1,4,5),(2,4,0),(2,3,0),(4,6,4),(4,1,3)\}$$ $$\{(0,0,0),(1,5,2),(1,2,5),(2,1,2),(2,6,5),(4,3,4),(4,4,3)\}$$ $$\{(0,0,0),(1,2,3),(1,5,4),(2,1,1),(2,6,6),(4,4,6),(4,3,1)\}$$ One can check that the design generated by the above DF is anomalous with the same isomorphism test used for getting the anomalous $2$-$(5^3,5,1)$ design. \end{proof} The above results allow us to state the following. \begin{cor} There exists an anomalous $2$-$(p^n,p,1)$ design for $p\in\{5,7\}$ and any integer $n\ge3$. \end{cor} We tried to get an anomalous $2$-$(11^3,11,1)$ design with the same method used in the proof of Theorem \ref{125,5,1}, i.e., by means of a suitable lifting of the $(11,11,10)$ Paley difference multiset $\{0\}\cup\underline{2}\{1,3,4,5,9\}$, but without success. \section{Cyclotomy} Starting from the fundamental paper of Wilson \cite{W}, cyclotomy has very often been crucial in the construction of many classes of difference families. Here it is also crucial for getting a good lifting of a SDF as required by Construction \ref{constr}.
Given a prime power $q\equiv1$ (mod $\lambda$), let $C^\lambda$ be the subgroup of $\mathbb{F}_q^*$ of index $\lambda$. If $r$ is a fixed primitive element of $\mathbb{F}_q$, then $\{r^iC^\lambda \ | \ 0\leq i\leq \lambda-1\}$ is the set of cosets of $C^\lambda$ in $\mathbb{F}_q^*$. For $i=0,1,\dots,\lambda-1$, the coset $r^iC^\lambda$ will be denoted by $C^\lambda_i$ and it is called {\it the $i$-th cyclotomic class of order $\lambda$}. Note that we have $C^\lambda_i\cdot C^\lambda_j=C^\lambda_{i+j \ (mod \lambda)}$. We will need the following lemma deriving from the theorem of Weil on multiplicative character sums (see \cite{LN}, Theorem 5.41). \begin{lem}\label{BP} {\rm\cite{BP}} Let $q\equiv 1 \pmod{\lambda}$ be a prime power and let $t$ be a positive integer. Then, for any $t$-subset $C=\{c_1,\dots,c_t\}$ of $\mathbb{F}_q$ and for any ordered $t$-tuple $(\gamma_1,\dots,\gamma_t)$ of $\mathbb{Z}_\lambda^t$, the set $X:=\{x\in \mathbb{F}_q: x-c_i\in C_{\gamma_i}^\lambda \,\,{\rm for }\,\, i=1,\dots,t \}$ has arbitrarily large size provided that $q$ is sufficiently large. In particular, we have $|X|>2\lambda^{t-1}$ for $q>t^2\lambda^{2t}$. \end{lem} In most cases the above lemma has been used to prove that the set $X$ is not empty. But this is not enough for our purposes. The last sentence in the above statement is formula (2) in \cite{BP}. The following theorem is essentially Corollary 5.3 in \cite{BP} where it appeared as a special consequence of a more general result. Here, for convenience of the reader, it is better to show its proof directly. Then we will see how this proof can be modified in order to get its {\it additive version}. \begin{thm}\label{SDF->DF} If there exists a $(G,k,\lambda)$-SDF, then there exists a $(G\times\mathbb{F}_q,G\times\{0\},k,1)$-DF for every prime power $q\equiv\lambda+1$ $($mod $2\lambda)$ provided that $q>(k-1)^2\lambda^{2k-2}$. 
\end{thm} \begin{proof} Let $\Sigma=\{B_1,\dots,B_s\}$ be a $(G,k,\lambda)$-SDF with $B_h=\{b_{h1},\dots,b_{hk}\}$ for $1\leq h\leq s$. Let $T$ be the set of all triples $(h,i,j)$ with $h\in\{1,\dots,s\}$ and $i$, $j$ distinct elements of $\{1,\dots,k\}$. For every $g\in G$, let $T_g$ be the set of triples $(h,i,j)$ of $T$ such that $b_{h,i}-b_{h,j}=g$. Note that $\bigcup_{g\in G}T_g$ is a partition of $T$ and that each $T_g$ has size $\lambda$ by definition of a $(G,k,\lambda)$-SDF. Thus it is possible to choose a map $\psi: T \longrightarrow \mathbb{Z}_\lambda$ satisfying the following conditions: 1) the restriction $\psi |_{T_g}$ is bijective for any $g\in G$; 2) $\psi(h,j,i)=\psi(h,i,j)+\lambda/2$ for every pair of distinct $i$, $j$. As a matter of fact the number $\Psi$ of all maps $\psi$ satisfying the above conditions is huge. If $\lambda=2\mu$ and $|G|=2^nm$ where $2^n$ is the order of $I(G)$, it is easy to see that $\Psi=\lambda!^{2^{n-1}(m-1)}(2^\mu\mu!)^{2^n}$. Now lift each $B_h$ to a subset $\ell(B_h)=\{(b_{h1},\ell_{h1}),\dots,(b_{hk},\ell_{hk})\}$ of $G\times \mathbb{F}_q$ by taking the first element $\ell_{h,1}$ arbitrarily and then by taking the other elements $\ell_{h,2}$, $\ell_{h,3}$, \dots, $\ell_{h,k}$ iteratively, one by one, according to the rule that once $\ell_{h,i-1}$ has been chosen, we pick $\ell_{h,i}$ arbitrarily in the set $$X_{h,i}=\{x\in \mathbb{F}_q \ : \ x-\ell_{h,j}\in C^\lambda_{\psi(h,i,j)} \quad {\rm for} \ 1\leq j\leq i-1\}.$$ Note that $\{\ell_{h,1},...,\ell_{h,i-1}\}$ is actually a set, i.e., it does not have repeated elements. Indeed given two elements $j_1<j_2$ in $\{1,\dots,i-1\}$, we have $\ell_{h,j_2}-\ell_{h,j_1} \in C^\lambda_{\psi(h,j_2,j_1)}$ since $\ell_{h,j_2}$ has been picked in $X_{h,j_2}$. Thus we cannot have $\ell_{h,j_2}=\ell_{h,j_1}$. It follows that $X_{h,i}$ is not empty by Lemma \ref{BP}, hence an element $\ell_{h,i}$ with the above requirement can actually be chosen.
Also note that we have \begin{equation}\label{ell-ell} \ell_{h,i}-\ell_{h,j}\in C^\lambda_{\psi(h,i,j)}\quad \forall (h,i,j)\in T \end{equation} This is clear if $i>j$ considering the rule that we followed for selecting the $\ell_{h,i}$'s. If $i<j$, for the same reason, we have $\ell_{h,j}-\ell_{h,i}\in C^\lambda_{\psi(h,j,i)}$, i.e., $\ell_{h,j}-\ell_{h,i}\in C^\lambda_{\psi(h,i,j)+\lambda/2}$ in view of the second property of $\psi$. Multiplying by $-1$ and considering that $-1\in C^\lambda_{\lambda/2}$ since $q\equiv\lambda+1$ (mod $2\lambda$), we get (\ref{ell-ell}) again. We finally note that we have $\biguplus_{h=1}^s\Delta\ell(B_h)=\biguplus_{g\in G}\{g\}\times\Delta_g$ with $\Delta_g=\{\ell_{h,i}-\ell_{h,j} \ | \ (h,i,j)\in T_g\}$. Thus, in view of (\ref{ell-ell}) and the first property of $\psi$, we see that $\Delta_g$ is a complete system of representatives for the cyclotomic classes of order $\lambda$ for every $g\in G$. At this point we get the required $(G\times\mathbb{F}_q,G\times\{0\},k,1)$-DF by applying Construction \ref{constr} as pointed out at the end of Section \ref{sectionSDF}. \end{proof} The additive version of the above theorem is straightforward in the case that $rad(q)$ is not a divisor of $k$. On the contrary, if $rad(q)$ divides $k$, which in view of Lemma \ref{additive(kq,k,k,1)} is the case we are interested in, we have to lift the base blocks of the given additive SDF much more carefully. Also, we need to raise the bound on $q$ significantly, and to ensure that the order of $G$ is not too large. \begin{thm}\label{additive version} Assume that there exists an additive $(G,k,\lambda)$-SDF of size $s$ with $k\neq3$ and let $q\equiv\lambda+1$ (mod $2\lambda$) be a prime power.
Then there exists an additive $(G\times\mathbb{F}_q,G\times\{0\},k,1)$-DF in each of the following cases: \begin{itemize} \item[(i)] $rad(q)$ does not divide $k$ and $q>(k-1)^2\lambda^{2k-2}$; \item[(ii)] $rad(q)$ divides $k$, $|G|<2\lambda^{2k-5}s$ and $q>(2k-3)^2\lambda^{4k-6}$. \end{itemize} \end{thm} \begin{proof} Let $\Sigma=\{B_1,\dots,B_s\}$ be a $(G,k,\lambda)$-SDF as in the proof of the previous theorem and let $q\equiv\lambda+1$ (mod $2\lambda$) be a prime power. \smallskip (i) $k$ is not divisible by $rad(q)$, and $q>(k-1)^2\lambda^{2k-2}$.\\ Take a $(G\times\mathbb{F}_q,G\times\{0\},k,1)$-DF, say $\cal F$, which exists by Theorem \ref{SDF->DF}. For every block $B\in{\cal F}$, let $\sigma_B$ be the sum of the second coordinates of all elements of $B$ and set $B'=B+(0,-{\sigma_B\over k})$. It is evident that $\{B' \ | \ B \in{\cal F}\}$ is an additive $(G\times\mathbb{F}_q,G\times\{0\},k,1)$-DF. \smallskip (ii) $rad(q)$ divides $k$, $|G|<2\lambda^{2k-5}s$, and $q>(2k-3)^2\lambda^{4k-6}$.\\ We keep the same notation as in the proof of the above theorem and the procedure for getting $\ell(B_h)$ will be exactly the same until determining the element $\ell_{h,k-4}$. After that we have to be much more careful in picking the last four elements $\ell_{h,k-3}$, $\ell_{h,k-2}$, $\ell_{h,k-1}$ and $\ell_{h,k}$. In the following, we set $\sigma_{h,i}=\sum_{j=1}^i\ell_{h,j}$ once all $\ell_{h,j}$'s with $1\leq j\leq i$ have been chosen. \eject Choice of $\ell_{h,k-3}$. \noindent If $rad(q)\neq 3$, just proceed as in the proof of Theorem \ref{SDF->DF}; we can take $\ell_{h,k-3}$ in $X_{h,k-3}$ arbitrarily. If $rad(q)=3$ we take it in $X_{h,k-3}\setminus\{-\sigma_{h,k-4}\}$. Note that $rad(q)=3$ implies $k>4$ since we have $k\neq3$ by assumption, hence it makes sense to consider the sum $\sigma_{h,k-4}$. \smallskip Choice of $\ell_{h,k-2}$.
\noindent We pick this element in $X_{h,k-2}\setminus Y$, where $Y$ is the union of the sets $$Y_1=\{-\sigma_{h,k-3}-\ell_{h,i}-\ell_{h,j} \ | \ 1\leq i\leq j\leq k-3\},$$ $$Y_2=\{- \sigma_{h,k-3} - \ell_{h,i} \ | \ 1\leq i\leq k-3\},\quad\quad Y_3={1\over2}Y_2,$$ and, only in the case that $rad(q)\neq3$, the singleton $Y_4=\{-{\sigma_{h,k-3}\over3}\}$. Note that this selection can be done since $|X_{h,k-2}|>|Y|$. Indeed we have $|X_{h,k-2}|>2\lambda^{k-4}$ by Lemma \ref{BP} and $2\lambda^{k-4}>{\lambda|G|\over s}$ in view of the upper bound on the order of $G$. Also, we have ${\lambda|G|\over s}=k(k-1)$ since, as observed after Definition \ref{SDF}, we have $\lambda|G|=sk(k-1)$. Finally, it is evident that $Y$ has size less than $k(k-1)$. \smallskip Choice of $\ell_{h,k-1}$. \noindent We pick this element in the set $$X'_{h,k-1}=\{x\in \mathbb{F}_q \ : \ x-c_{h,j}\in C^\lambda_{\gamma_{h,j}} \ {\rm for} \ 1\leq j\leq 2k-3\}$$ with the pairs $(c_{h,j},\gamma_{h,j})$ defined as follows: $$c_{h,j}=\ell_{h,j} \quad{\rm and}\quad \gamma_{h,j}=\psi(h,k-1,j) \quad {\rm for} \ 1\leq j\leq k-2;$$ $$c_{h,k-2+j}=-\sigma_{h,k-2}-\ell_{h,j} \quad{\rm and}\quad \gamma_{h,k-2+j}=\psi(h,k,j)+{\lambda\over2} \quad{\rm for} \ 1\leq j\leq k-2;$$ $$c_{h,2k-3}=-{\sigma_{h,k-2}\over2} \quad{\rm and}\quad \gamma_{h,2k-3}=\psi(h,k,k-1)-\alpha$$ where $C^\lambda_\alpha$ is the cyclotomic class of order $\lambda$ containing $-2$. Note that the first $k-2$ conditions required for the generic element of $X'_{h,k-1}$ are exactly the conditions for the generic element of $X_{h,k-1}$. Thus $X'_{h,k-1}$ is a subset of $X_{h,k-1}$. Assume that $c_{h,j_1}=c_{h,j_2}$ with $1\leq j_1< j_2\leq 2k-3$. If $j_2\leq k-2$, then we have $\ell_{h,j_1}=\ell_{h,j_2}$ which contradicts the fact that $\ell_{h,j_2}-\ell_{h,j_1} \in C^\lambda_{\psi(h,j_2,j_1)}$ (recall indeed that $\ell_{h,j_2}$ is in $X_{h,j_2}$). For the same reason, we cannot have $k-1\leq j_1< j_2\leq 2k-4$. 
If $j_1=k-2$ and $j_2=2k-3$ we get $3\ell_{h,k-2}=-\sigma_{h,k-3}$. If $rad(q)=3$, this means $\sigma_{h,k-3}=0$, hence $\ell_{h,k-3}=-\sigma_{h,k-4}$ contradicting the choice of $\ell_{h,k-3}$ in this case. If $rad(q)\neq3$, then we would have $\ell_{h,k-2}=-{\sigma_{h,k-3}\over3}$ contradicting the choice of $\ell_{h,k-2}$ in this case. In all the remaining cases the reader can check that we would get $\ell_{h,k-2}\in Y$. On the other hand, $\ell_{h,k-2}$ had been picked out of $Y$ on purpose. We conclude that the $c_{h,j}$'s ($j=1,2,\dots,2k-3$) are pairwise distinct. Thus, Lemma \ref{BP} and the assumption $q>(2k-3)^2\lambda^{4k-6}$ guarantee that $X'_{h,k-1}$ is not empty and the selection of $\ell_{h,k-1}$ described above can actually be done. \smallskip Choice of $\ell_{h,k}$. \noindent Take $\ell_{h,k}=-\sigma_{h,k-1}$. This last (obligatory) choice ensures that $\ell(B_h)$ is zero-sum; the sum of the first coordinates of all its elements is zero because $\Sigma$ is additive, and the sum of the second coordinates of all its elements is $\sigma_{h,k}=\sigma_{h,k-1}+\ell_{h,k}=0$. \smallskip It is evident that $\ell_{h,i}\in X_{h,i}$ for $1\leq i\leq k-1$. As a consequence of the fact that $\ell_{h,k-1}\in X'_{h,k-1}$, we now show that this is true also for $i=k$, i.e., that we have $\ell_{h,k}-\ell_{h,j}\in C^\lambda_{\psi(h,k,j)}$ for $1\leq j\leq k-1$. $1\leq j\leq k-2$: by definition of $X'_{h,k-1}$, we have \begin{equation}\label{ell_{h,k}} \ell_{h,k-1}-c_{h,k-2+j}\in C^\lambda_{\psi(h,k,j)+\lambda/2}. \end{equation} Now note that $\ell_{h,k-1}-c_{h,k-2+j}=-\ell_{h,k}+\ell_{h,j}$ by the definitions of $c_{h,k-2+j}$ and $\ell_{h,k}$. Thus, multiplying (\ref{ell_{h,k}}) by $-1$ and recalling that $-1\in C^\lambda_{\lambda/2}$, we actually get $\ell_{h,k}-\ell_{h,j}\in C^\lambda_{\psi(h,k,j)}$.
$j=k-1$: considering the last condition required for the generic element of $X'_{h,k-1}$, we have $\ell_{h,k-1}+{\sigma_{h,k-2}\over2}\in C^\lambda_{\psi(h,k,k-1)-\alpha}$. Multiplying by $-2$ and remembering that $-2\in C^\lambda_\alpha$ we get $-2\ell_{h,k-1}-\sigma_{h,k-2}\in C^\lambda_{\psi(h,k,k-1)}$ which is what we wanted. Indeed, by definition of $\ell_{h,k}$, we have $-2\ell_{h,k-1}-\sigma_{h,k-2}=\ell_{h,k}-\ell_{h,k-1}$. \smallskip We conclude that the above constructed liftings are in the same situation as the liftings constructed in the proof of Theorem \ref{SDF->DF}, i.e., (\ref{ell-ell}) holds. Thus, reasoning as at the end of that proof, we can say that they form a $(G\times\mathbb{F}_q,G\times\{0\},k,1)$-DF. The assertion follows considering that each of these liftings is zero-sum. \end{proof} We are going to see that the above theorem allows us to obtain a difference family as required in Lemma \ref{additive(kq,k,k,1)} as soon as one has an additive $(G,k,\lambda)$-SDF with $G$ a zero-sum group of order $k$ and $\lambda$ not divisible by $rad(k)$. This will be the crucial ingredient for proving our main result. \begin{lem}\label{crucial} Assume that there exists an additive $(G,k,\lambda)$-SDF with $G$ a zero-sum group of order $k$ and assume that $k$ has a prime divisor not dividing $\lambda$. Then there exists a $G$-super-regular $2$-$(v,k,1)$ design for infinitely many values of $v$. \end{lem} \begin{proof} Let $\Sigma$ be a SDF as in the statement and let $p$ be a prime divisor of $k$ not dividing $\lambda$. Let $n$ be the order of $p$ in the group of units of $\mathbb{Z}_{\lambda}$, let $2^e$ be the largest power of $2$ dividing ${p^n-1\over \lambda}$, and set $\lambda_1=2^e\lambda$. Clearly, $\underline{2^e}\Sigma$ is an additive $(G,k,\lambda_1)$-SDF. We have $p^n-1=2^e\lambda\mu$ with $\mu$ odd, hence $p^n\equiv\lambda_1+1$ (mod $2\lambda_1$).
It easily follows, by induction on $i$, that $p^{ni}\equiv\lambda_1+1$ (mod $2\lambda_1$) for every odd $i$. It is obvious that $|G|=k<2\lambda_1^{2k-5}$ and of course there are infinitely many odd values of $i$ for which $p^{ni}>(2k-3)^2\lambda_1^{4k-6}$. Hence, by Theorem \ref{additive version}, there exists an additive $(G\times\mathbb{F}_{p^{ni}},G\times\{0\},k,1)$-DF for each of these odd values of $i$. The assertion then follows from Lemma \ref{additive(kq,k,k,1)}. \end{proof} \section{The main result} For the proof of the main result we need one more ingredient, that is the notion of a {\it difference matrix}. If $G$ is an additive group of order $v$, a $(v,k,\lambda)$ difference matrix in $G$ (or briefly a $(G,k,\lambda)$-DM) is a $(k\times \lambda v)$-matrix with entries in $G$ such that the difference of any two distinct rows contains every element of $G$ exactly $\lambda$ times. For general background on difference matrices we refer to \cite{BJL,CD}. We will say that a DM is {\it additive} if each of its columns is zero-sum. An adaptation of an old construction for ordinary difference families by Jungnickel \cite{J} allows us to prove the following. \begin{lem}\label{dieter} If $\Sigma$ is an additive $(G,k,\lambda)$-SDF and $M$ is an additive $(H,k,\mu)$-DM, then there exists an additive $(G\times H,k,\lambda\mu)$-SDF. \end{lem} \begin{proof} Let $\Sigma$ be a $(G,k,\lambda)$-SDF and let $M=(m_{rc})$ be an $(H,k,\mu)$-DM. For each block $B=\{b_1,\dots,b_k\}\in\Sigma$ and each column $M^c=(m_{1c},\dots,m_{kc})^T$ of $M$, consider the $k$-multiset $B\circ M^c$ defined as follows: $$B\circ M^c=\{(b_1,m_{1c}),\dots,(b_k,m_{kc})\}.$$ It is straightforward to check that $$\Sigma\circ M:=\{B\circ M^c \ | \ B\in\Sigma; 1\leq c\leq \mu|H|\}$$ is a $(G\times H,k,\lambda\mu)$-SDF. It is clearly additive under the hypothesis that both $\Sigma$ and $M$ are additive.
\end{proof} In the proof of the following theorem we construct the crucial ingredient considered in Lemma \ref{crucial}. \begin{thm}\label{goodSDF} Let $k$ be a positive integer which is neither a prime power, nor singly even, nor of the form $2^n3$. Then there exists an additive $(G,k,\lambda)$-SDF in a suitable zero-sum group of order $k$ such that $k$ has a prime divisor not dividing $\lambda$. \end{thm} \begin{proof} Let $q$ be the largest odd prime power factor of $k$ and set $k=qr$. The hypotheses on $k$ guarantee that $q$ is greater than 3. Now consider the $k$-multiset $A$ on $\mathbb{F}_q$ which is the union of $r$ copies of the $(q,q,q-1)$ Paley difference multiset of the first type: $$A=\underline{r}\{0\} \ \uplus \ \underline{2r}\mathbb{F}_q^\Box.$$ Let $\alpha: \mathbb{F}_q \longrightarrow \mathbb{N}$ be the map where $\alpha(x)$ is the multiplicity of $x$ in $\Delta A$ for every $x\in \mathbb{F}_q$. We have $$\alpha(0)=r(r-1)+{q-1\over2}2r(2r-1)=(2q-1)r^2-qr.$$ Now let $x$ be an element of $\mathbb{F}_q^*$ and distinguish two cases according to whether $q\equiv1$ or 3 (mod 4). \underline{1st case}: $q\equiv1$ (mod 4).\quad In this case it is well-known that $\mathbb{F}_q^\Box$ is a partial $(q,{q-1\over2},{q-5\over4},{q-1\over4})$ difference set\footnote{A $k$-subset $B$ of an additive group $G$ of order $v$ is a $(v, k, \lambda,\mu)$ {\it partial difference set} if $\Delta B=\underline{\lambda}(B\setminus\{0\}) \ \cup \ \underline{\mu}(G\setminus (B\cup\{0\}))$. If $\lambda=\mu$ then $B$ is a $(v,k,\lambda)$ {\it difference set}.}. If $x\in \mathbb{F}_q^\Box$, there are ${q-5\over4}$ representations of $x$ as a difference from $\mathbb{F}_q^\Box$. Each of them has to be counted $(2r)^2$ times in the number of representations of $x$ as a difference from $A$. The remaining representations of $x$ as a difference from $A$ are $x=x-0$ ($2r\cdot r$ times) and $x=0-(-x)$ ($r\cdot2r$ times). Thus we have $\alpha(x)=(4r^2){q-5\over4}+2r^2+2r^2=(q-1)r^2$.
If $x\in \mathbb{F}_q^{\not\Box}$, there are ${q-1\over4}$ representations of $x$ as a difference from $\mathbb{F}_q^\Box$. Each of them has to be counted $(2r)^2$ times in the number of representations of $x$ as a difference from $A$. There is no other representation of $x$ as a difference from $A$. Hence we have $\alpha(x)=(4r^2){q-1\over4}=(q-1)r^2$. \underline{2nd case}: $q\equiv3$ (mod 4).\quad Here, $\mathbb{F}_q^\Box$ is a $(q,{q-1\over2},{q-3\over4})$ difference set. Every $x\in \mathbb{F}_q^*$ admits precisely ${q-3\over4}$ representations as a difference from $\mathbb{F}_q^\Box$. Each of them has to be counted $(2r)^2$ times in the number of representations of $x$ as a difference from $A$. The remaining representations of $x$ as a difference from $A$ are $x=x-0$ ($2r\cdot r$ times) if $x$ is a square, or $x=0-(-x)$ ($r\cdot2r$ times) if $x$ is not a square. Thus, for every $x\in \mathbb{F}_q^*$ we have $\alpha(x)=(4r^2){q-3\over4}+2r^2=(q-1)r^2$. In summary, we have: \begin{equation}\label{alpha} \alpha(0)=(2q-1)r^2-qr\quad{\rm and}\quad\alpha(x)=(q-1)r^2 \ \forall x\in\mathbb{F}_q^* \end{equation} Now let $B=\underline{r}\mathbb{F}_q$ be the $k$-multiset which is union of $r$ copies of $\mathbb{F}_q$ and let $\beta: \mathbb{F}_q \longrightarrow \mathbb{N}$ be the map of multiplicities of $\Delta B$. It is quite evident that we have: \begin{equation}\label{beta} \beta(0)=qr(r-1) \quad{\rm and}\quad \beta(x)=qr^2 \ \forall x\in\mathbb{F}_q^* \end{equation} We claim that $$\Sigma=\{A,\underbrace{B,\dots,B}_{r-1 \ {\rm times}}\}$$ is a $(q,k,(k-1)r^2)$-SDF in $\mathbb{F}_q$. 
Indeed, if $\sigma$ is the map of multiplicities of $\Delta \Sigma$, in view of (\ref{alpha}) and (\ref{beta}) we have: \medskip $\sigma(0)=\alpha(0)+(r-1)\beta(0)=(2q-1)r^2-qr+qr(r-1)^2=(qr-1)r^2;$ \medskip $\sigma(x)=(q-1)r^2+qr^2(r-1)=(qr-1)r^2\quad\forall x\in\mathbb{F}_q^*.$ \medskip Considering that $\mathbb{F}_q^\Box$ is a zero-sum subset of $\mathbb{F}_q$ for $q>3$ (see Fact \ref{fact2}), the multiset $A$ is zero-sum. Also, considering that $\mathbb{F}_q$ is zero-sum, $B$ is zero-sum as well. We conclude that $\Sigma$ is additive. The hypothesis that $k$ is not singly even implies that $r$ is also not singly even. Hence we can take an abelian zero-sum group $H$ of order $r$. Let $M$ be the matrix whose columns are all possible zero-sum $k$-tuples of elements of $H$. Let $(i,j)$ be any pair of distinct elements of $\{1,\dots,k\}$ and let $h$ be any element of $H$. The number of zero-sum $k$-tuples $(m_1,\dots,m_k)$ of elements of $H$ such that $m_i-m_j=h$ is equal to $r^{k-2}$. Indeed, each of these $k$-tuples can be constructed as follows. Fix any element $\ell$ in $\{1,\dots,k\}\setminus\{i,j\}$, take $m_x$ arbitrarily for $x \notin\{i,\ell\}$, and then we are forced to take $m_i=m_j+h$ and $m_\ell=-\sum_{x\neq \ell}m_x$. The above means that there are exactly $r^{k-2}$ columns $(m_{1,c},\dots,m_{k,c})^T$ of $M$ such that $m_{i,c}-m_{j,c}=h$. Equivalently, the difference between the $i$th row and the $j$th row of $M$ covers the element $h$ exactly $r^{k-2}$ times. Thus, in view of the arbitrariness of $i$, $j$ and $h$, $M$ is an $(r,k,r^{k-2})$ difference matrix. Of course it is additive by construction. Thus, applying Lemma \ref{dieter}, we can say that $\Sigma\circ M$ is an additive $(k,k,\lambda)$-SDF in $G:=\mathbb{F}_q\times H$ with $\lambda=(k-1)r^k$. Recall that $q$ is the largest odd prime power factor of $k$ so that $q$ is coprime with both $k-1$ and $r={k\over q}$. Thus $\lambda$ is coprime with $q$ and the assertion follows.
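Both counting steps of this proof lend themselves to brute-force verification on small instances; the sketch below (illustrative only) checks the multiplicities of Eq.~(\ref{alpha}) for $q=5$, $r=3$, and the count $r^{k-2}$ of suitable zero-sum tuples for the toy sizes $r=3$, $k=4$.

```python
from collections import Counter
from itertools import product

# Brute-force check of two counting claims in the proof, for the small
# instance q = 5, r = 3 (so k = qr = 15).  Illustrative only.
q, r = 5, 3
squares = {(x * x) % q for x in range(1, q)}               # {1, 4}
A = [0] * r + [s for s in squares for _ in range(2 * r)]   # r{0} plus 2r copies of each square
assert len(A) == q * r

# Multiplicities alpha(x) of Delta(A), as in Eq. (alpha):
alpha = Counter((A[i] - A[j]) % q
                for i in range(len(A)) for j in range(len(A)) if i != j)
assert alpha[0] == (2 * q - 1) * r ** 2 - q * r            # = 66
assert all(alpha[x] == (q - 1) * r ** 2 for x in range(1, q))  # = 36

# Columns of M: the number of zero-sum k-tuples over H = Z_r with
# m_i - m_j = h is r**(k-2); checked here for toy sizes r = 3, k = 4.
rr, kk = 3, 4
for h in range(rr):
    count = sum(1 for m in product(range(rr), repeat=kk)
                if sum(m) % rr == 0 and (m[0] - m[1]) % rr == h)
    assert count == rr ** (kk - 2)
print("counting claims verified on small instances")
```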
\end{proof} {\bf Proof of Theorem \ref{main}.}\quad If $k$ is a prime power we have the super-regular $2$-$(k^n,k,1)$ designs associated with $AG(n,k)$. The singly even values of $k$ are genuine exceptions in view of Proposition \ref{k=4n+2}(iv). Finally, if $k$ is neither a prime power, nor singly even, nor of the form $2^n3$, then the assertion follows from Theorem \ref{goodSDF} and Lemma \ref{crucial}. \hfill$\Box$ \section{A huge number of points} As already mentioned in the introduction, the super-regular Steiner 2-designs obtainable by means of the main construction (Theorem \ref{goodSDF} combined with Lemma \ref{crucial}) have a huge number of points. On the other hand, there is some hope of finding more manageable super-regular Steiner 2-designs. We discuss this for the first relevant value of $k$, that is $k=15$. Let us first examine the smallest $v$ for which the main construction leads to a non-trivial super-regular $2$-$(v,15,1)$ design. Keeping the same notation as in Theorem \ref{goodSDF}, we have $q=5$, $r=3$ and $\Sigma\circ M$ is a $(15,15,\lambda)$-SDF in $\mathbb{Z}_3\times\mathbb{Z}_5\simeq\mathbb{Z}_{15}$ with $\lambda=14\cdot3^{15}$. Now proceed as in the proof of Lemma \ref{crucial} taking $p=5$. The multiplicative order of $5$ modulo $\lambda$ is $n=2\cdot3^{14}=9565938$ and the largest power of 2 in ${q^n-1\over\lambda}$ is 4. Thus $\underline{4}(\Sigma\circ M)$ is a $(15,15,\lambda_1)$-SDF with $\lambda_1=4\lambda$ and we have $5^{ni}\equiv \lambda_1+1$ (mod $2\lambda_1$) for every odd $i$. One can check that $5^n>(2k-3)^2\lambda_1^{4k-6}=27^2\cdot(56\cdot3^{15})^{54}$. Hence we have an additive $(\mathbb{Z}_{15}\times\mathbb{F}_{5^{n}},\mathbb{Z}_{15}\times\{0\},15,1)$-DF. In conclusion, the first $v$ for which the application of Lemma \ref{crucial} with the use of $\Sigma\circ M$ leads to a super-regular $2$-$(v,15,1)$ design is $3\cdot5^{9565939}$.
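The arithmetic claims entering this computation are easy to double-check by machine; for instance, the order $n=2\cdot3^{14}$ of $5$ modulo $\lambda=14\cdot3^{15}$ can be confirmed with modular exponentiation (since $n=2\cdot3^{14}$, testing $n$ together with its maximal divisors $n/2$ and $n/3$ suffices).

```python
# Check that the multiplicative order of 5 modulo lam = 14 * 3**15
# is n = 2 * 3**14 = 9565938, as claimed.  Because n = 2 * 3**14, it
# is enough to test n itself and its two maximal divisors n/2 and n/3.
lam = 14 * 3**15
n = 2 * 3**14
assert n == 9565938
assert pow(5, n, lam) == 1
assert pow(5, n // 2, lam) != 1
assert pow(5, n // 3, lam) != 1
print("order of 5 modulo", lam, "is", n)
```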
On the other hand, in this specific case, we can find a much lower $v$ with the use of another SDF. Consider the following three 15-multisets on $\mathbb{Z}_{15}$ $$B=\{0\} \ \cup \ \underline{2}\{1,2,3,7,9,11,12\};$$ $$B'=\{0\} \ \cup \ \underline{2}\{1,3,4,5,7,12,13\};$$ $$B''=\{0\} \ \cup \ \underline{2}\{1,5,8,10,11,12,13\}.$$ It is straightforward to check that $\Sigma'=\{B,B',B''\}$ is an additive $(15,15,\lambda')$-SDF with $\lambda'=42$. Let us apply Lemma \ref{crucial} using $\Sigma'$ rather than $\Sigma\circ M$. The order of $q=5$ in $\mathbb{Z}_{\lambda'}$ is $n=6$ and the largest power of 2 in ${q^n-1\over\lambda'}$ is $4$. Thus $\underline{4}\Sigma'$ is a $(15,15,\lambda'_1)$-SDF with $\lambda'_1=4\lambda'$ and we have $5^{6i}\equiv \lambda'_1+1$ (mod $2\lambda'_1$) for every odd $i$. The first odd $i$ for which $5^{ni}>(2k-3)^2{\lambda'_1}^{4k-6}$ is $31$. Hence, the first $v$ for which the use of $\Sigma'$ in Lemma \ref{crucial} gives a super-regular $2$-$(v,15,1)$ design is $3\cdot5^{187}$. Now we show a more clever use of $\Sigma'$ which exploits its nice form (every base block is of the form $\{0\} \ \cup \ \underline{2}A$ with $A$ a 7-subset of $\mathbb{Z}_{15}\setminus\{0\}$). Let $q\equiv1$ (mod 42) be a prime power and lift the blocks of $\Sigma'$ to three zero-sum 15-subsets of $\mathbb{Z}_{15}\times \mathbb{F}_q$ of the form \small $$\ell(B)=\{(0,0),(1,\pm\ell_1),(2,\pm\ell_2),(3,\pm\ell_3),(7,\pm\ell_4),(9,\pm\ell_5),(11,\pm\ell_6),(12,\pm\ell_7)\},$$ $$\ell(B')=\{(0,0),(1,\pm\ell'_1),(3,\pm\ell'_2),(4,\pm\ell'_3),(5,\pm\ell'_4),(7,\pm\ell'_5),(12,\pm\ell'_6),(13,\pm\ell'_7)\},$$ $$\ell(B'')=\{(0,0),(1,\pm\ell''_1),(5,\pm\ell''_2),(8,\pm\ell''_3),(10,\pm\ell''_4),(11,\pm\ell''_5),(12,\pm\ell''_6),(13,\pm\ell''_7)\},$$ \normalsize where, to save space, we have written $(x,\pm y)$ to mean the two pairs $(x,y)$ and $(x,-y)$. 
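Returning to the claim that $\Sigma'=\{B,B',B''\}$ is an additive $(15,15,42)$-SDF, and that the order of $5$ modulo $\lambda'=42$ is $6$: both facts can be verified by brute force (an illustrative check, not part of the construction).

```python
from collections import Counter

# Verify that Sigma' = {B, B', B''} is an additive (15, 15, 42)-SDF.
supports = [
    [1, 2, 3, 7, 9, 11, 12],      # B   = {0} plus two copies of each
    [1, 3, 4, 5, 7, 12, 13],      # B'
    [1, 5, 8, 10, 11, 12, 13],    # B''
]
blocks = [[0] + [a for a in sup for _ in range(2)] for sup in supports]

diffs = Counter()
for blk in blocks:
    assert sum(blk) % 15 == 0                       # zero-sum (additive)
    diffs.update((blk[i] - blk[j]) % 15
                 for i in range(15) for j in range(15) if i != j)
assert all(diffs[x] == 42 for x in range(15))       # lambda' = 42

# The order of 5 modulo lambda' = 42 is 6, as used in the text.
order = next(e for e in range(1, 43) if pow(5, e, 42) == 1)
assert order == 6
print("additive (15,15,42)-SDF verified; order of 5 mod 42 is", order)
```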
We have $\displaystyle\Delta\ell(B) \ \cup \ \Delta\ell(B') \ \cup \ \Delta\ell(B'')=\bigcup_{i=0}^{14}\{i\}\times\Delta_i$ with $\Delta_i=\{1,-1\}\cdot\overline{\Delta}_i$ where each $\overline{\Delta}_i$ is a list of 21 elements of $\mathbb{F}_q$. For instance, it is readily seen that $\overline{\Delta}_0=\{\ell_i, \ell'_i, \ell''_i \ | \ 1\leq i\leq 7\}$. Assume that the above liftings are done in such a way that each $\overline{\Delta}_i$ is a complete system of representatives for the cyclotomic classes of order 21. In this case we have $\Delta_i\cdot M=\mathbb{F}_q^*$ with $M$ a system of representatives for the cosets of $\{1,-1\}$ in $C^{21}$ and then, by Construction \ref{constr}, we get an additive $(\mathbb{Z}_{15}\times\mathbb{F}_{q},\mathbb{Z}_{15}\times\{0\},15,1)$-DF. Reasoning as in the proof of Theorem \ref{SDF->DF}, one can see that the required liftings certainly exist by Lemma \ref{BP} provided that $q>6^2\cdot21^{12}$. Now note that we have $5^{6i}\equiv1$ (mod 42) for every $i\geq0$ and $5^{6i}>6^2\cdot21^{12}$ as soon as $i\geq5$. Thus we have an additive $(\mathbb{Z}_{15}\times\mathbb{F}_{5^{30}},\mathbb{Z}_{15}\times\{0\},15,1)$-DF. So the first $v$ for which this construction leads, theoretically, to a strictly additive $2$-$(v,15,1)$ design is $3\cdot5^{31}$, which is dramatically smaller than the value obtained before by applying the main construction ``with the blinkers''. Yet, it is still huge! We cannot exclude, however, that by means of (probably heavy) computer work one may realize a good lifting of $\Sigma'$ with $q=5^6$. In this case we would have a $2$-$(3\cdot5^7,15,1)$ design. \section{Super-regular non-Steiner 2-designs} As underlined in the introduction, the paper is focused on super-regular Steiner 2-designs since their construction appears to be challenging.
Here we just sketch how the methods used in the previous sections allow one to obtain super-regular non-Steiner 2-designs much more easily and with a relatively ``small'' number of points. In particular, without any need of cyclotomy (the heaviest tool used above), it is possible to show that every additive $(k,k,\lambda)$-SDF with $k$ not singly even gives rise to a super-regular $2$-$(kq,k,\lambda)$ design for any power $q>k$ of a prime divisor of $k$. First, we need to recall the following well known fact. \begin{prop}\label{rem2} Let ${\cal F}$ be a $(v,k,k,\lambda)$-DF in $G$ relative to $H$, let $\cal C$ be the set of right cosets of $H$ in $G$, and set $${\cal B}=\{B + g \ | \ B\in{\cal F}; g\in G\} \ \cup \ \underline{\lambda}{\cal C}.$$ Then $(G,{\cal B})$ is a $G$-regular $2$-$(v,k,\lambda)$ design. \end{prop} The above is contained in Remark \ref{rem} (r2) for $\lambda=1$ and produces a non-simple design for $\lambda>1$. \begin{lem}\label{additive(kq,k,k,lambda)} If there exists an additive $(G\times\mathbb{F}_q,G\times\{0\},k,\lambda)$-DF ${\cal F}$ with $G$ a zero-sum group of order $k$, then there exists a super-regular $2$-$(kq,k,\lambda)$ design. \end{lem} \begin{proof} The $(G\times\mathbb{F}_{q})$-regular $2$-$(kq,k,\lambda)$ design obtainable from $\cal F$ using Proposition \ref{rem2} is clearly additive. The assertion follows. \end{proof} \begin{thm} If there exists an additive $(k,k,\lambda)$-SDF with $k$ not singly even, then there exists a super-regular $2$-$(kq,k,\lambda)$ design for every power $q>k$ of a prime divisor of $k$. \end{thm} \begin{proof} Let $\Sigma=\{B_1,\dots,B_s\}$ be an additive $(k,k,\lambda)$-SDF in $G$ and let $q$ be a prime power as in the statement.
Take a zero-sum $k$-subset $L=\{\ell_1,\dots,\ell_k\}$ of $\mathbb{F}_q$ whose existence is almost evident\footnote{It is also an immediate consequence of a formula giving the precise number of $k$-subsets of $\mathbb{F}_q$ whose sum is an assigned $b\in\mathbb{F}_q$ (see Theorem 1.2 in \cite{LW} or, for an easier proof, Theorem 1.1(3) in \cite{Pavone2}).}. Lift each block $B_h=\{b_{h1},\dots,b_{hk}\}$ of $\Sigma$ to the subset $L_h=\{(b_{h1},\ell_{1}),\dots,(b_{hk},\ell_{k})\}$ of $G\times\mathbb{F}_q$. By definition of a strong difference family, we have $\Delta\{L_1,\dots,L_s\}=\biguplus_{g\in G}\{g\}\times\Delta_g$ where each $\Delta_g$ is a $\lambda$-multiset on $\mathbb{F}_q^*$ so that we have \begin{equation}\label{penultimate} \Delta_g\cdot\mathbb{F}_q^*=\underline{\lambda}\mathbb{F}_q^*\quad\forall g\in G. \end{equation} Given $m\in\mathbb{F}_q^*$, denote by $L_h\circ m$ the subset of $G\times\mathbb{F}_q$ obtained from $L_h$ by multiplying the second coordinates of all its elements by $m$. Taking (\ref{penultimate}) into account, it is easily seen that \begin{equation}\label{last} {\cal F}=\{L_h\circ m \ | \ 1\leq h\leq s; m\in \mathbb{F}_q^*\} \end{equation} is a $(G\times\mathbb{F}_q,G\times\{0\},k,\lambda)$-DF. Also, we note that ${\cal F}$ is additive since $\Sigma$ is additive and $L$ is zero-sum. The assertion then follows from Lemma \ref{additive(kq,k,k,lambda)}. \end{proof} Applying the above theorem using the $(15,15,42)$-SDF given in the previous section, we find a super-regular $2$-$(15q,15,42)$ design for every power $q$ of 3 or 5 not smaller than 25.
Here, however, in view of the special form of the $(15,15,42)$-SDF used, one could see that if $L$ is chosen more carefully as in Section 9 and if in (\ref{last}) we make $m$ vary in a system of representatives for the cosets of $\{1,-1\}$ in $\mathbb{F}_q^*$ rather than in the whole $\mathbb{F}_q^*$, we get an additive $(\mathbb{Z}_{15}\times\mathbb{F}_q,\mathbb{Z}_{15}\times\{0\},15,\lambda)$-DF with $\lambda=21$ rather than 42. Thus we can say that there exists a super-regular $2$-$(15q,15,21)$ design for every power $q$ of 3 or 5 not smaller than 25. In particular, using $q=25$, we can say that there exists a super-regular $2$-$(375,15,21)$ design. \section{Open questions} Our research leaves open several questions. The most intriguing is probably the following. \begin{itemize} \item[(Q1)] Does there exist a strictly $G$-additive Steiner 2-design which is not $G$-regular? \end{itemize} Here are some other questions which naturally arise. \begin{itemize} \item[(Q2)] Do there exist strictly additive $2$-$(v,k,1)$ designs with $k$ singly even? \item[(Q3)] Do there exist super-regular Steiner 2-designs with block size $k=2^n3\geq12$? \end{itemize} Finally, it would be desirable to solve the following problem. \begin{itemize} \item[(P)] Find an additive Steiner 2-design with a non-prime-power block size and a ``reasonably small'' number of points. \end{itemize} \section*{Acknowledgements} The authors wish to thank the anonymous referees for their careful reading and some helpful comments. This work has been performed under the auspices of the G.N.S.A.G.A. of the C.N.R. (National Research Council) of Italy. The second author is supported in part by the Croatian Science Foundation under the projects 9752 and 6732.
\section{Introduction} Magnonics, i.e.\ the generation, control and detection of collective spin excitations (or magnons), has been considered for possible information storage and processing applications, due to the promise of higher data density and more energy-efficient data processing \cite{Demokritov2013,Chumak2015,Tannous2015,Zakeri2018,Mahmoud2020,Xu2020}. This area is rapidly advancing, from first proposals of memory devices to more recent examples concerning the implementation of logical operations \cite{Kostylev2005,Guo2018,Wang2020}. Various groups have studied how an external electric field can be used to modify features of the magnon spectra and to potentially realize these functionalities. An early example has been the measurement of proportionality between magnetic resonance shifts and an applied electric field in lithium ferrite \cite{Rado1979}. This observation has been explained as a consequence of a voltage-controlled magneto-crystalline anisotropy (VCMA) variation, and deemed small for practical applications \cite{Liu2013}. Subsequently, multiferroic materials have been found to offer a stronger response in their magnon spectrum through the coupling between their intrinsic electric polarization and the externally applied perturbation \cite{Rovillain2010,Risinggard2016}. More recently, Liu {\it et al.} have discussed yet a different theoretical mechanism, not restricted to this class of materials, capable of producing effective Dzyaloshinskii-Moriya interactions (DMI) proportional to the field \cite{Liu2011b}. This has prompted investigations of the implications for magnon spectra \cite{Zhang2014a,Krivoruchko2017a,Krivoruchko2018,Rana2019,Savchenko2019b,Krivoruchko2020}, most frequently adopting as reference material the ferrimagnetic insulator yttrium iron garnet (YIG). In this work we are interested in the possible control of magnons by an applied electric field acting, across a dielectric barrier, on a two-dimensional (2D) heterostructure.
We deal with the idealized layout of magnetic/non-magnetic layers of simple transition metals, e.g.\ Fe and Cu. Similarly to the case of YIG, the absence of electric current due to the insulating barrier precludes energy dissipation into Joule heating (Ohmic losses). The gating E$_{\textrm{field}}$ acts by controlling the hybridization between electronic states. We study how this can offer another avenue for controlled variation of the magnon dispersion relation and lifetime. The latter aspect complements previous theoretical studies, which have typically examined only the adiabatic or infinitely long-lived limit of collective spin excitations. This paper is structured as follows. We first describe a reference device layout and introduce the theoretical scheme adopted to study its magnon spectrum from \textit{first principles} (Sec.~\ref{sec:problem-statement}). We then present numerical results, for an Fe monolayer and an Fe bilayer either suspended in vacuum or deposited on a Cu substrate. We show how the magnon lifetime and the gap between low- and high-energy eigenmodes depend on the external electric field and how this can be traced back to changes of the underlying electronic structure (Sec.~\ref{sec:numerical-results}). We summarize salient aspects of the results in Sec.~\ref{sec:discuss} and offer our conclusions in Sec.~\ref{sec:conclusions}. \section{\label{sec:theory}Computational strategy} \label{sec:problem-statement} We consider a metallic 2D heterostructure which contains a thin magnetic region on top of a non-magnetic substrate and which is furthermore capped by a dielectric layer. A steady voltage between the substrate and an electrode located atop the dielectric barrier sets up a constant electric field $E_{\text{field}}$ (Fig.~\ref{fig:device-layout}). For the sake of clarity and simplicity, we model the dielectric barrier by a spacing vacuum gap, and we choose respectively Fe and Cu as the material of the magnetic and non-magnetic layers.
\begin{figure}[htb] \centering \includegraphics[width=8.5cm]{schematic_geometry.png} \caption{\label{fig:device-layout}Schematic device layout. Precessing magnetic moments (red arrows) that compose a magnon mode (blue wave) are studied as a function of an external electric field acting along the stacking direction, across a dielectric barrier (green region) which prevents charge transport. } \end{figure} Our interest lies in how the applied voltage can control the spectrum of transverse spin-wave excitations or magnons. The magnons are confined within the magnetic layers because of the negligible proximity-induced spin polarization in copper. However, their dispersion relation $\omega_n(\bm{q})$, with $\bm{q}$ being the wave vector confined to the 2D Brillouin zone $\Omega_{BZ}$ and $n$ labeling distinct eigenmodes, as well as their lifetime, depend significantly on the underlying substrate already in the absence of any applied \mbox{E$_{\text{field}}$}. Various dissipation mechanisms can be responsible for the finite lifetime of magnons that manifests itself through the $\bm q$-dependent broadening of the above dispersion relation $\omega_n(\bm q)$. Here we consider a 2D periodic, perfectly long-range ordered (LRO) scenario in the zero temperature limit, and we therefore neglect Bloch damping from disorder \cite{Dean1972,Buczek2018}. We also neglect dissipation through magnon-magnon scattering \cite{Azevedo2000,Landeros2008,Xue2017}. On the other hand, we consider Landau damping, which is due to the competition between magnons and single-particle Stoner spin-flip excitations with the same energy and momentum, and which is deemed to be a dominant attenuation mechanism for magnon propagation in transition metals \cite{Costa2003a}.
\subsection{General approximation strategy \label{sec:approx-strategy} } In the limit of sufficient time-scale separation between fast electrons and the slow precession of atomic magnetic moments, we can adopt as starting point the Heisenberg Hamiltonian \begin{equation} \label{eq:Heisenberg_Hamiltonian} H = - \sum_{i \neq j} J_{ij} \hat{\bm e}_i \cdot \hat{\bm e}_j \; , \end{equation} where $\hat{\bm e}_{i}$ is the direction of the magnetic moment of the atom at position $\bm{R}_{i}$~\cite{Halilov1998}. The exchange coupling parameters $J_{ij}$ can be calculated at a \textit{first principles} electronic structure level by employing, for instance, the magnetic force theorem \cite{Liechtenstein1984,Liechtenstein1987}. Extensions of the basic scheme \cite{Udvardi2003,Mankovsky2017a} can be used to obtain the full tensor form, $J_{ij}^{\mu\nu}$ with $\mu(\nu)=x,y,z$, which can be of particular relevance in connection with relativistic effects such as spin-orbit coupling. Considering for instance ferromagnetic order along $z$, one can then identify the isotropic exchange interactions of Eq.~\eqref{eq:Heisenberg_Hamiltonian} with $J_{ij} = \frac{1}{2} ( J_{ij}^{xx} + J_{ij}^{yy} )$, and can analogously define a DMI vector $\bm D_{ij} = ( D_{ij}^x, D_{ij}^y, D_{ij}^z )$ with components $D_{ij}^x = \frac{1}{2} ( J_{ij}^{yz} - J_{ij}^{zy} )$, $D_{ij}^y = \frac{1}{2} ( J_{ij}^{xz} - J_{ij}^{zx} )$ and $D_{ij}^z = \frac{1}{2} ( J_{ij}^{xy} - J_{ij}^{yx} )$. Liu {\it et al.}\ \cite{Liu2011b} discussed how an applied electric field can produce an additional DMI term $H_{DM} = \bm D_{ij} \cdot (\bm S_i \times \bm S_j)$, proportional to the perturbation and to the spin-orbit coupling strength. Although reduced dimensionality can have a significant impact on spin-orbit coupling, magnetism in thin films is known to heavily depend on the interplay between substrate and magnetic layers already at the level of the isotropic exchange interactions $J_{ij}$.
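For concreteness, the decomposition of the exchange tensor into its isotropic part and the DMI vector can be spelled out on toy numbers (the tensor entries below are hypothetical, chosen only to illustrate the formulas):

```python
# Decompose a toy exchange tensor J^{mu nu} (rows/columns ordered x, y, z)
# into the isotropic coupling and the DMI vector, using the definitions
# J = (J^xx + J^yy)/2, D^x = (J^yz - J^zy)/2, D^y = (J^xz - J^zx)/2,
# D^z = (J^xy - J^yx)/2.  All numbers are made up for illustration.
J = [[1.00,  0.10, 0.02],
     [-0.10, 1.00, 0.05],
     [0.04, -0.05, 0.90]]

J_iso = 0.5 * (J[0][0] + J[1][1])
D = (0.5 * (J[1][2] - J[2][1]),    # D^x
     0.5 * (J[0][2] - J[2][0]),    # D^y
     0.5 * (J[0][1] - J[1][0]))    # D^z

print("J_iso =", J_iso, " D =", D)
```

The antisymmetric part of the tensor carries the DMI; a purely symmetric toy tensor would give $\bm D_{ij}=0$.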
Our goal is to explore to what extent the layout of Fig.~\ref{fig:device-layout} could be used to control magnon spectral features by exploiting field-dependent hybridization of electronic states, without depending on more subtle relativistic effects. We remain, therefore, within the description of Eq.~\eqref{eq:Heisenberg_Hamiltonian}, and we neglect other features such as magneto-crystalline anisotropy or Gilbert damping \cite{Kunes2002,Udvardi2003,Hickey2009,He2013}. The precession of atomic magnetic moments around their ground state direction in the effective magnetic field generated by all their neighbors, $\bm B_i^{\textrm{eff}} = \sum_{j\neq i} J_{ij} \hat{\bm e}_j$, follows the Landau-Lifschitz equation of motion and can be studied as a secular equation problem. In particular, the adiabatic magnon spectrum is given by the eigenvalues of the lattice Fourier-transformed expression \cite{Halilov1998,Etz2015} \begin{equation} \label{eq:Landau-Lifschitz-secular} \widehat{N}(\bm{q}) | \omega_n(\bm{q}) \rangle = \omega_n(\bm{q}) | \omega_n(\bm{q}) \rangle \;\; , \end{equation} with explicit matrix elements $\left[ \underline{N}(\bm{q}) \right]_{s,s'} = \langle s | \widehat{N}(\bm{q}) | s' \rangle $. The subscript $s=1,\ldots,N_{\textrm{sub}}$ labels the (magnetic) sublattices with origin $\bm{b}_{s}$. Each atom lies therefore at position $\bm{R}_{i} = \bm{R}_{I} + \bm{b}_{s}$, where $\bm{R}_{I}$ is a vector of the periodic lattice. For a long-range ordered ground state with atomic magnetic moments $\bm m_s = (0,0,m_s^z)$ the matrix $\underline{N}(\bm{q})$ has elements \cite{Pajda2001,Rusz2006,Jacobsson2013,Bergqvist2013} \begin{equation} \label{eq:revised-LL-matrix} \left[ \underline{N}(\bm{q}) \right]_{s,s'} = \frac{4}{m_s^z} \Big[ J_{ss'}(\bm{0}) - J_{ss'}(\bm{q}) \Big] \;\; . 
\end{equation} The Fourier transformation in Eq.~(\ref{eq:revised-LL-matrix}) is performed over all displacements $\bm{R}_{IJ} = \bm R_I - \bm{R}_J$ between unit cells $I$ and $J$: \begin{equation} \label{eq:jij-fourier} \begin{split} J_{ss'}(\bm{0}) \: =& \: \delta_{s,s'} \sum\limits_{\bm{R}_{IJ}} \sum\limits_{s''=1}^{N_{\textrm{sub}}} J_{IsJs''} \;\; , \\ J_{ss'}(\bm{q}) \: =& \: \sum\limits_{\bm{R}_{IJ}} J_{IsJs'} \, e^{-i \bm{q} \cdot (\bm{R}_{IJ} + \bm b_s - \bm b_{s'}) } \;\; . \end{split} \end{equation} The above approach towards studying magnon spectra is intuitive, computationally expedient, and typically offers good agreement with experiment. However, it does not account for Landau damping. Physically, Landau damping originates from the competition of collective transverse spin-wave excitations with single-particle spin-flip excitations \cite{Yosida1991,Kubler2000,Kakehashi2012}. A comprehensive scheme to account for both collective and single-particle magnetic excitations is provided by the linear response formalism in the framework of time-dependent density functional theory (TDDFT). This approach focuses on the dynamic transverse susceptibility $\underline{\chi}^{+(-)}(\bm{q},\omega)$ which describes the response of spin-polarized electrons to a magnetic field precessing clockwise $(+)$ or anticlockwise $(-)$ with frequency $\omega$.
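The adiabatic step of Eqs.~\eqref{eq:Landau-Lifschitz-secular}--\eqref{eq:jij-fourier} can be illustrated on the simplest possible toy model, a single-sublattice square-lattice ferromagnet with nearest-neighbor coupling $J$ (all parameters below are hypothetical; in the actual calculations the $J_{ij}$ come from first principles):

```python
import math

# Toy adiabatic dispersion: one sublattice on a square lattice with
# nearest-neighbour exchange J, so N(q) is 1x1 and
#   omega(q) = (4/m) * [J(0) - J(q)],   J(q) = J * sum_nn cos(q . R).
J, m = 10.0, 2.2                         # toy exchange (meV) and moment (mu_B)
nn = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # nearest-neighbour shell

def omega(qx, qy):
    J0 = J * len(nn)
    Jq = J * sum(math.cos(qx * rx + qy * ry) for rx, ry in nn)
    return 4.0 * (J0 - Jq) / m

print(omega(0.0, 0.0))                   # Goldstone mode: zero at q = 0
print(omega(math.pi, math.pi))           # zone-corner maximum: 32 J / m
```

For several sublattices, `omega` would instead return the eigenvalues of the $N_{\textrm{sub}} \times N_{\textrm{sub}}$ matrix of Eq.~\eqref{eq:revised-LL-matrix}.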
This susceptibility is determined by the Dyson-like equation \begin{equation} \label{eq:interacting-susceptibility} \underline{\chi}^{+(-)}(\bm{q},\omega) = \left[ \underline{1} - \underline{\mathring{\chi}}^{+(-)}(\bm{q},\omega) \underline{f}_{xc}(\bm{q}) \right]^{-1} \underline{\mathring{\chi}}^{+(-)}(\bm{q},\omega) \;\; , \end{equation} where the kernel $\underline{f}_{xc}(\bm{q})$ is the second derivative of the exchange-correlation energy with respect to local magnetic moment \cite{Katsnelson2004a,Buczek2011}, and $\underline{\mathring{\chi}}^{+(-)}(\bm{q},\omega)$ is the transverse susceptibility of non-interacting electrons. This quantity can be given at the scalar-relativistic level in terms of Kohn-Sham eigenstates $\phi_{\nu}$ and eigenvalues $\epsilon_{\nu}$ solving the spin-polarized Schr\"{o}dinger problem. Simplifying for a moment the notation through restriction to the $N_{\textrm{sub}}=1$ case, we have \cite{Kubler2000} \begin{widetext} \begin{equation} \label{eq:Kohn-Sham-susceptibility} \begin{split} \mathring{\chi}^{+(-)}(\bm{r},\bm{r'},\bm{q},\omega) \: = \: \lim\limits_{\eta \to 0^+} \sum\limits_{\nu,\nu'} \int_{\Omega_{BZ}} \mathrm{d} \bm{k} \, & \frac{ \phi_{\nu}^{\uparrow(\downarrow),*}(\bm{k},\bm{r}) \, \phi_{\nu'}^{\downarrow(\uparrow)}(\bm{k}+\bm{q},\bm{r}) \, \phi_{\nu'}^{\downarrow(\uparrow),*}(\bm{k}+\bm{q},\bm{r}') \, \phi_{\nu}^{\uparrow(\downarrow)}(\bm{k},\bm{r}') }{ \omega \, + \, \mathrm{i} \eta \, + \, \epsilon_{\nu}^{\uparrow(\downarrow)}(\bm{k}) \, - \, \epsilon_{\nu'}^{\downarrow(\uparrow)}(\bm{k} + \bm{q}) } \: \times \\ & \left\{ \theta\left[ E_F - \epsilon_{\nu}^{\uparrow(\downarrow)}(\bm{k}) \right] \, - \, \theta\left[ E_F - \epsilon_{\nu'}^{\downarrow(\uparrow)}(\bm{k} + \bm{q}) \right] \right\} \;\; , \end{split} \end{equation} \end{widetext} with the Heaviside step function $\theta(x)=1$ for $x>0$, $\theta(x)=0$ for $x \leq 0$. 
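The structure of Eq.~\eqref{eq:interacting-susceptibility} can be caricatured in a scalar toy model, where it reduces to the familiar RPA-like enhancement $\chi = \mathring{\chi}/(1-\mathring{\chi} f_{xc})$: a single Stoner-like pole in $\mathring{\chi}$ gives rise to a sharp collective resonance where $1-\mathring{\chi} f_{xc}$ vanishes (all numbers below are toy values, purely illustrative):

```python
# Scalar caricature of the Dyson-like equation: chi = chi0/(1 - chi0*fxc).
# A single Stoner-like pole at energy 2.0 in chi0 produces, once enhanced
# by the kernel fxc, a collective (magnon-like) resonance at 2.0 - fxc.
eta = 1e-3                      # infinitesimal positive broadening
fxc = 1.0                       # toy exchange-correlation kernel

def chi0(omega):
    return 1.0 / (2.0 - omega - 1j * eta)

def chi(omega):
    return chi0(omega) / (1.0 - chi0(omega) * fxc)

# The loss function Im(chi) peaks at the collective-mode energy 1.0,
# not at the bare Stoner energy 2.0 and not at a generic energy:
assert chi(1.0).imag > 1e4 * abs(chi(0.5).imag)
assert chi(1.0).imag > 1e4 * abs(chi(2.0).imag)
```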
The left (right) arrow selects the spin polarization relevant for the clockwise (anticlockwise) precession of the moments in response to the infinitesimal perturbation of the rotating magnetic field. The wave vectors for $\bm{k}$, $\bm{k} +\bm{q}$ are considered within the Brillouin zone $\Omega_{BZ}$, and the positions $\bm{r}$, $\bm{r}'$ are restricted to the Wigner-Seitz cells around sites $\bm{R}_I,\bm{R}_J$, respectively. The quantities in Eqs.~\eqref{eq:interacting-susceptibility} and \eqref{eq:Kohn-Sham-susceptibility} can be cast in matrix form by adopting, e.g., a combined basis set of spherical harmonics and orthogonal polynomials to represent the $\bm r$, $\bm{r}'$ dependence \cite{Staunton2000,Buczek2011}. Thanks to the fluctuation-dissipation theorem \cite{Kubo1957}, the propensity of a material to host a magnetic excitation with wave vector $\bm{q}$ and energy $\omega$ is marked by large values in the loss matrix $\Im \underline{\chi}^{+(-)} (\bm{q},\omega)$. Technically, this is due to zeros from the first term, $\underline{1} - \underline{\mathring{\chi}}^{+(-)}(\bm{q},\omega) \underline{f}_{xc}(\bm{q})$, as well as to singularities from the second term, $\underline{\mathring{\chi}}^{+(-)}(\bm{q},\omega)$, in Eq.~\eqref{eq:interacting-susceptibility}. The outcome can be studied by examining the eigenvalues of $\Im \underline{\chi}^{+(-)} (\bm{q},\omega)$ as a function of $\bm q$ and $\omega$ \cite{Antropov2003,Buczek2011}. Long-lived collective excitations (magnons) are characterized by the occurrence, at each energy and wave vector, of as many sharply defined eigenvalues as the number of magnetic sublattices in the unit cell \cite{Buczek2011}. By following the sequence of such peaks one can reconstruct their dispersion relation and compare it for instance with the simpler $\omega_n(\bm q)$ outcome from Eq.~\eqref{eq:Landau-Lifschitz-secular}.
Landau damping instead manifests itself through the emergence of multiple, no longer well-separated eigenvalues which lead in practice to a broadened magnon dispersion. The broadening can be interpreted as inversely proportional to a finite magnon lifetime due to competition with Stoner single-particle excitations. These spin-flip transitions are described in particular by the non-interacting susceptibility $\mathring{\chi}^{+(-)}(\bm{r},\bm{r'},\bm{q},\omega)$ \cite{Buczek2011} and are entirely neglected in the secular equation problem of Eq.~\eqref{eq:Landau-Lifschitz-secular}. To account approximately for this aspect of the magnon physics, we apply here, at a \textit{first principles} level, an approximate procedure that has been proposed, among others, by Yosida \cite{Yosida1991} for simplified theoretical models, and adopted, e.g., by Kirschner et al.~\cite{Kirschner1986,Venus1988,Vollmer2003} for the interpretation of spin-polarized electron energy loss experiments in metallic thin films. The procedure consists of two steps. First we obtain the adiabatic dispersion relation $\omega_n(\bm{q})$ from Eq.~\eqref{eq:Landau-Lifschitz-secular}. This involves diagonalizing for each $\bm q$ the real $N_{\textrm{sub}} \times N_{\textrm{sub}}$ matrix defined in Eq.~\eqref{eq:revised-LL-matrix}. Such a procedure is much simpler than dealing with the complex matrices of Eqs.~\eqref{eq:interacting-susceptibility} and \eqref{eq:Kohn-Sham-susceptibility}, which need to be dealt with not only for each $\bm{q}$ but also for every trial energy $\omega$, and which are also much bigger, depending on the sampling in $\bm r$ and $\bm{r}'$. Subsequently, the intensity of single-particle excitations $S^{+(-)}_n(\bm{q})$ is obtained by considering only Stoner spin-flip transitions between occupied and unoccupied Kohn-Sham states, such that their difference in energy and momentum corresponds to the magnon eigenmode under consideration $|\omega_n(\bm q) \rangle$.
The number of relevant transitions is estimated by convoluting the spin-polarized electronic Bloch spectral functions $A^{\uparrow(\downarrow)}(\bm{k},s,E) = -\frac{1}{\pi} \Im \, G^{\uparrow(\downarrow)}(\bm{k}, s,E)$ where the electronic Green's function $G^{\uparrow(\downarrow)}(\bm{k}, s,E)$ is the Lehmann resummation of Kohn-Sham eigenstates and eigenvalues already appearing in Eq.~\eqref{eq:Kohn-Sham-susceptibility}. In practice we adopt the KKR construction to directly obtain these Green functions \cite{Ebert2011}, calculate the Heisenberg exchange parameters $J_{ij}$ \cite{Liechtenstein1987} and solve the secular equation problem of Eq.~\eqref{eq:Landau-Lifschitz-secular}, and then we evaluate the expression \begin{widetext} \begin{equation} \label{eq:Stoner-convolution} \begin{split} S^{+(-)}_n(\bm{q}) =& \int_{E_{\textrm{min}}}^{E_{\textrm{max}}} \mathrm{d} E \int_{\Omega_{BZ}} \mathrm{d}^3 k \sum\limits_{s=1}^{N_{\textrm{sub}}} A^{\uparrow(\downarrow)}(\bm{k},s,E) \, \theta(E_{F}-E) \: A^{\downarrow(\uparrow)}(\bm{k}+\bm{q},s,E + \omega_n(\bm{q})) \, \theta(E + \omega_n(\bm{q}) - E_{F}) \, \times \\ & \times \sqrt{ \Re[v_{n,s}(\bm{q})]^2 + \Im[v_{n,s}(\bm{q})]^2 }, \end{split} \end{equation} \end{widetext} where the double integration samples the full Brillouin zone $\Omega_{BZ}$ and the energy interval $E_{\textrm{min}} = E_F - \max[ \omega_n(\bm{q})]$, $E_{\textrm{max}} = E_F + \max[ \omega_n(\bm{q})]$ around the Fermi level $E_F$. Occupied and unoccupied states are selected via the Heaviside step function, similarly to Eq.~\eqref{eq:Kohn-Sham-susceptibility}. Finally, the last term in Eq.~\eqref{eq:Stoner-convolution} is the sublattice-projected magnitude of the complex-valued eigenvector $ | \omega_n(\bm q) \rangle := ( v_{n,1}(\bm q),v_{n,2}(\bm q),\ldots, v_{n,N_{\textrm{sub}}}(\bm q) )^{\dagger}$ from Eq.~\eqref{eq:Landau-Lifschitz-secular}. 
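A heavily simplified, discretized caricature of the convolution in Eq.~\eqref{eq:Stoner-convolution} may help to see its content: below, Gaussian-broadened spectral functions of a one-dimensional exchange-split cosine band stand in for $A^{\uparrow(\downarrow)}$, and all parameters are toy values (the real calculation uses the KKR Green functions and the full Brillouin zone):

```python
import math

# Toy discretization of the Stoner convolution: count spin-flip
# transitions (occupied majority -> unoccupied minority) matching a
# magnon of energy w_mag and momentum q on a 1D exchange-split band.
def A_spec(k, E, shift, width=0.05):
    eps = -math.cos(k) + shift                 # toy band center
    return math.exp(-((E - eps) / width) ** 2) / (width * math.sqrt(math.pi))

def stoner_weight(q, w_mag, E_F=0.0, ex=1.0, nk=200, nE=200):
    dk, dE, S = 2 * math.pi / nk, 2.0 / nE, 0.0
    for ik in range(nk):
        k = -math.pi + ik * dk
        for iE in range(nE):
            E = E_F - 1.0 + iE * dE
            # occupied initial state, unoccupied final state
            if E < E_F < E + w_mag:
                S += (A_spec(k, E, -ex / 2)
                      * A_spec(k + q, E + w_mag, +ex / 2)) * dk * dE
    return S

# A magnon degenerate with the exchange splitting finds many Stoner
# pairs to decay into; a low-energy magnon does not:
assert stoner_weight(0.0, 1.0) > 100 * stoner_weight(0.0, 0.2)
```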
In general, this quantity describes how the $n$-th magnon mode involves deviations from the ground state at each magnetic sublattice \cite{Halilov1998}. In this context, it is used to perform a weighted sum of Stoner spin-flip transitions which also originate from that sublattice, and which are assumed to compete proportionally more with the specific magnon mode, depending on how it involves the same atoms. Compared to Eq.~\eqref{eq:Kohn-Sham-susceptibility}, the energy and momentum convolution of Eq.~\eqref{eq:Stoner-convolution} only involves real quantities. We use the result to produce a magnon spectral function which includes the finite lifetime \begin{equation} \label{eq:approximated_magnon_spectral_function} A_{\textrm{mag}}(\bm{q},n,\omega) \: = \: - \lim_{\eta \to 0^{+}} \frac{ |\omega_n(\bm{q})\rangle \: \langle \omega_n(\bm{q})| } { \omega \, + \, \mathrm{i} [ \eta \, + \, S_n^{+(-)}(\bm{q}) ] - \, \omega_n(\bm{q}) } \;\; . \end{equation} We note that the approach is not as robust as the more rigorous but demanding formulation in terms of the loss matrix $\Im \underline{\chi}^{+(-)} (\bm{q},\omega)$ from Eq.~\eqref{eq:interacting-susceptibility}. Among various simplifications behind it, we deem as most severe the separate evaluation of the adiabatic dispersion $\omega_n(\bm q)$ and of the broadening function $S^{+(-)}_n(\bm{q})$. These quantities are used within Eq.~\eqref{eq:approximated_magnon_spectral_function} to approximate complex magnon poles which would, in an exact treatment, follow from analyzing the dynamic transverse susceptibility. The TDDFT Eq.~\eqref{eq:interacting-susceptibility} construction of the magnon spectral function evaluates collective and single-particle spin-flip excitations on equal footing, meaning that their relative spectral weights get redistributed, depending for instance on the location of the wave vector $\bm q$ within the Brillouin zone, but their sum remains on the whole conserved. 
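For a single, isolated mode, the diagonal weight of Eq.~\eqref{eq:approximated_magnon_spectral_function} reduces to a Lorentzian centred at $\omega_n(\bm q)$ whose half-width at half-maximum equals $\eta + S_n^{+(-)}(\bm q)$, so a larger Stoner intensity directly yields a broader, lower peak and hence a shorter lifetime. The following minimal numerical sketch illustrates this; the mode energy and Stoner intensities are hypothetical toy values, not computed ones.

```python
import numpy as np

def magnon_spectral_weight(omega, omega_n, stoner, eta=1e-6):
    """-Im of 1/(omega + i*(eta + S) - omega_n): a Lorentzian of
    half-width eta + S centred at the adiabatic magnon energy omega_n."""
    return -np.imag(1.0 / (omega + 1j * (eta + stoner) - omega_n))

# Toy single mode at 100 meV, probed on a fine energy grid (meV).
omega = np.linspace(0.0, 200.0, 4001)
weakly_damped = magnon_spectral_weight(omega, omega_n=100.0, stoner=2.0)
strongly_damped = magnon_spectral_weight(omega, omega_n=100.0, stoner=10.0)
```

The peak height scales as $1/(\eta + S)$, so the more strongly damped mode appears as a lower and broader feature, consistent with interpreting the broadening as an inverse lifetime.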
The approximated construction of Eq.~\eqref{eq:approximated_magnon_spectral_function} reproduces some of the same features, but does not guarantee conservation of the total spectral weight \cite{Edwards1978,Buczek2011}. However, our aim is not to obtain absolute values for the Landau damping but rather to efficiently investigate its relative changes as a function of the externally applied electric field. As long as the inaccuracies of the more expedient but less robust approach depend only weakly on this perturbation, we can expect reasonable trends for the ratio between lifetime estimated with $E_{\textrm{field}}=0$ and $E_{\textrm{field}} \neq 0$. \subsection{Finite electric field and other technical aspects} The results discussed in the following have been produced using the {\em ab~initio} spin-polarized multiple-scattering or Korringa-Kohn-Rostoker (KKR) Green function formalism \cite{Ebert2011} as implemented in the {\sc SPRKKR} code \cite{sprkkr2022}. The self-consistent field (SCF) ground state for the 2D heterostructure of Fig.~\ref{fig:device-layout} was obtained by solving the DFT problem in fully relativistic mode, relying on the local spin density approximation (LSDA) with the Vosko, Wilk and Nusair parametrisation for the exchange and correlation term \cite{Vosko1980}. To deal with systems with only 2D periodicity, we used the tight-binding or screened KKR method \cite{Zeller1995}. Fe monolayers and bilayers suspended in vacuum were modeled by slabs consisting of one or two Fe layers embedded in vacuum represented by four layers of empty sites on each side. Fe monolayers or bilayers deposited on Cu(001) were treated as truly semi-infinite systems: the electronic structure was reconverged within the topmost eleven or ten substrate layers, while at the bottom of this interaction zone the electronic structure was matched to the bulk. 
For all our systems we used experimental unit cell parameters of bulk copper, neglecting lattice relaxations, and assuming out-of-plane easy axis of magnetization \cite{Allenspach1992,Vaz2008a}. The geometry of Fe layers suspended in vacuum is taken to be the same as the geometry of the layers deposited on Cu(001). The external electric field is introduced similarly to Refs.~\cite{Simon2021,Mankovsky2021a}, namely, by considering above the Fe layers an auxiliary array of point charges, separated from the surface by vacuum, during calculation of the SCF solutions and all other quantities. For sufficient areal density and vertical separation, this layer generates an electric field which can be considered constant \cite{Zhang2009d,Ignatiev2011}, with intensity \begin{equation} \label{eq:Efield} E_{\textrm{field}} \: = \: \frac{ Q_{\textrm{aux}} }{ 2 \epsilon_0 A } \;\; , \end{equation} where $Q_{\textrm{aux}}$ is the point charge (positive for a field oriented antiparallel to the surface normal $\widehat{z}$) per area of the 2D unit cell $A$, and $\epsilon_0$ is the vacuum permittivity. For the multipole expansion of the Green function, the angular momentum cutoff $\ell_{\text{max}}=3$ was used. The energy integrals to obtain the SCF-DFT solutions, as well as the isotropic Heisenberg exchange interactions from the magnetic force theorem \cite{Liechtenstein1987}, were evaluated by contour integration on a semicircular path within the complex energy plane using 32 Gauss-Legendre abscissae. The Brillouin zone integrals used an equispaced mesh with 16000 $\bm k$-points or more, over the whole $\Omega_{BZ}$. The Stoner expression Eq.~\eqref{eq:Stoner-convolution} was evaluated by sampling energy points parallel and near to the real axis. 
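As a quick numerical cross-check of Eq.~\eqref{eq:Efield} in SI units: the sketch below assumes a Cu(001) surface cell built from the experimental lattice parameter $a \simeq 3.615$~\AA\ (an assumption, since the value is not quoted above), and the auxiliary charge is only an illustrative input.

```python
# Schematic check of E_field = Q_aux / (2 * eps0 * A) in SI units.
EPS0 = 8.8541878128e-12      # vacuum permittivity (F/m)
E_CHARGE = 1.602176634e-19   # elementary charge (C)

def e_field(q_aux_in_e, area_m2):
    """Constant field (V/m) generated by a charge q_aux (units of e)
    per 2D unit cell of area area_m2 (m^2)."""
    return q_aux_in_e * E_CHARGE / (2.0 * EPS0 * area_m2)

# Assumed Cu(001) surface cell: in-plane lattice constant a/sqrt(2),
# i.e. area a^2 / 2 with a = 3.615 Angstrom.
area = (3.615e-10) ** 2 / 2.0
field = e_field(0.0375, area)   # q_aux = 0.0375 e per surface cell
print(f"E_field = {field / 1e9:.2f} V/nm")   # about 5.2 V/nm
```

With these inputs, a few hundredths of an elementary charge per surface cell already generates a field of a few V/nm, the order of magnitude explored in Sec.~\ref{sec:numerical-results}.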
For the ferromagnetic ground states studied in Sec.~\ref{sec:numerical-results} we only need to consider one chirality, meaning that we restrict ourselves to the $(+)$ variant of Eqs.~\eqref{eq:interacting-susceptibility}-\eqref{eq:Stoner-convolution} \cite{Yosida1991,Kakehashi2012,Buczek2011}. \section{\label{sec:numerical-results}Results} We discuss here results for a Fe monolayer and a Fe bilayer, both suspended in vacuum as well as deposited on Cu(001) surface. \subsection{\label{sec:suspend-mono-bilay}Fe monolayer and Fe bilayer in vacuum} \begin{figure}[htb] \centering \includegraphics[width=9.5cm]{VIV_Fe_1ml-DOS_Fe_vs_QCHRLAY-converted-eps-converted-to.pdf} \caption{\label{fig:DOS-QCHRLAY-Fe-mono}DOS of a Fe monolayer suspended in vacuum for different values of \mbox{E$_{\text{field}}$}. All the curves fall essentially on top of each other, with no discernible effects from the electric field. } \end{figure} \begin{figure}[htb] \centering \includegraphics[width=9.5cm]{VIV_Fe_2ml-Delta_DOS_Fe_vs_QCHRLAY-converted-eps-converted-to.pdf} \caption{\label{fig:delta-DOS-QCHRLAY-Fe-bilay}Difference between the DOS projected on individual layers of a Fe bilayer as a function of E$_{\textrm{field}}$.} \end{figure} We begin examining how the external electric field influences the spin-polarized density of states (DOS). Results for a Fe monolayer are shown in Fig.~\ref{fig:DOS-QCHRLAY-Fe-mono}, with no visible effects. Magnon spectra appear similarly robust with respect to the perturbation and are therefore not shown. If a second iron sheet is added, changes in the layer-resolved DOS start to appear but they are still very small. Therefore, to highlight the influence of the external perturbation E$_{\textrm{field}}$, we consider the difference between the DOS projected on individual layers, \[ \Delta n^{\uparrow(\downarrow)}(E) \: = \: n^{\uparrow(\downarrow)}_{\text{Fe}_{1}}(E) \, - \, n^{\uparrow(\downarrow)}_{\text{Fe}_{2}}(E) \; . 
\] The outcome is shown in Fig.~\ref{fig:delta-DOS-QCHRLAY-Fe-bilay}. If there is no external field, this difference is obviously zero because the bilayer is symmetric. With a finite \mbox{E$_{\text{field}}$}, the symmetry is removed and a small energy- and spin-dependent transfer of electronic states between both layers occurs. This transfer is more pronounced for the minority states. Swapping the polarity of the perturbation, or the labeling of Fe$_1$ and Fe$_2$ layers, is equivalent to the $z \to -z$ coordinate transformation and leads to identical results. This will only change in the presence of a substrate which lifts the symmetry, as discussed in Sec.~\ref{sec:mono-Cu001} below. \begin{figure}[htb] \centering \includegraphics[width=8.5cm]{suspended_Fe_2ml_eigenmodes_projection.png} \caption{\label{fig:suspend-bilay-eigenmodes}Adiabatic magnon spectrum for the Fe bilayer suspended in vacuum with $E_{\textrm{field}}=0$. The $\omega_2(\bm q)$ solution is plotted with an artificial offset of +10 meV, to allow visualization where the two branches are energy degenerate. The color coding represents the magnitude of the corresponding complex eigenvectors, projected on the Fe$_2$ layer. 
} \end{figure} With only two magnetic layers, the secular equation problem expressed by Eqs.~\eqref{eq:Landau-Lifschitz-secular} and \eqref{eq:revised-LL-matrix} reduces to diagonalizing the matrix \begin{equation} \label{eq:Landau-Lifschitz-bilayer} \underline{N}(\bm{q}) \hspace{-.05cm} = \hspace{-.05cm} 4 \hspace{-.05cm} \sum_{\bm{R}_{IJ}} \hspace{-.075cm} \left(\begin{array}{cc} \hspace{-.075cm} \frac{ J_{IJ}^{11} + J_{IJ}^{12} - J_{IJ}^{11} e^{-i \bm{q} \cdot \bm{R}_{IJ} } } { m_1^z } & \hspace{-.075cm} \frac{ - J_{IJ}^{12} e^{-i \bm{q} \cdot (\bm{R}_{IJ} + \bm b_1 - \bm b_2)} } { m_1^z } \hspace{-.075cm} \\[1ex] \hspace{-.075cm} \frac{ - J_{IJ}^{21} e^{-i \bm{q} \cdot (\bm{R}_{IJ} + \bm b_2 - \bm b_1)} } { m_2^z } & \hspace{-.075cm} \frac{ J_{IJ}^{21} + J_{IJ}^{22} - J_{IJ}^{22} e^{-i \bm{q} \cdot \bm{R}_{IJ} } } { m_2^z } \hspace{-.075cm}\\ \end{array}\right) \end{equation} Results are shown in Fig.~\ref{fig:suspend-bilay-eigenmodes}. We observe that eigenvalues are distinct between the $\overline{\Gamma}$ and the $\overline{X}$ point and between the $\overline{M}$ and the $\overline{\Gamma}$ point, i.e., when going from the center of the 2D Brillouin zone to its corners. For these portions of the spectrum, magnetic precession involves atoms from both layers. On the contrary, along the $\overline{X}$--$\overline{M}$ segment, i.e., at the Brillouin zone edge, eigenvalues are degenerate but precession involves exclusively one or the other iron sheet. \begin{figure}[htb] \centering \includegraphics[width=8.5cm]{acoustic-optic_gap_vs_QCHRLAY_Fe_2ml-VIV.png} \caption{\label{fig:gap-bilay-suspended}Energy gap between the high- and low-energy magnon branches at $\bm{q} = \overline{\Gamma}$ for an iron bilayer suspended in vacuum (cf.~Fig.~\protect\ref{fig:suspend-bilay-eigenmodes}) evaluated as a function of E$_{\textrm{field}}$. 
} \end{figure} The effect of the external electric field on the magnon spectra is again very weak for this suspended Fe bilayer, so that it would be hardly visible in a plot. Therefore we focus just on the gap between the high- and low-energy branches at the $\overline{\Gamma}$ point (see Fig.~\ref{fig:suspend-bilay-eigenmodes}). This gap can be evaluated as \[ \Delta E \: = \: \omega_2(\overline{\Gamma}) - \omega_1(\overline{\Gamma}) \: = \: 4 \sum_{\bm{R}_{IJ}} J_{IJ}^{12} \, \frac{ m_1^z + m_2^z }{ m_1^z \, m_2^z } \;\; . \] The dependence of this gap on E$_{\textrm{field}}$ is shown in Fig.~\ref{fig:gap-bilay-suspended}. We observe a very small variation for the considered range of E$_{\textrm{field}}$, just about 0.05~\%. Similarly as for Fig.~\ref{fig:delta-DOS-QCHRLAY-Fe-bilay}, the graph in Fig.~\ref{fig:gap-bilay-suspended} is symmetric with respect to the polarity of the external field, in accordance with the interchangeable role of layer~1 and layer~2 in the absence of a substrate. \subsection{\label{sec:mono-Cu001}Fe monolayer on Cu(001) substrate} \begin{figure}[htb] \centering \includegraphics[width=9.5cm]{CuFe_1ml-DOS_Fe_vs_QCHRLAY-converted-eps-converted-to.pdf} \caption{\label{fig:DOS-QCHRLAY-CuFe001}Spin-polarized Fe-projected DOS for a Fe monolayer on Cu(001) for different intensities and polarities of the external electric field. } \end{figure} \begin{figure}[htb] \centering \includegraphics[width=8.cm]{SMT_vs_QCHRLAY_Fe_1ml_on_Cu001.png} \caption{\label{fig:SMT-QCHRLAY-CuFe001}Dependence of the magnetic moments at Fe sites on the external electric field for a Fe monolayer on Cu(001). } \end{figure} Larger effects can be expected for supported iron sheets, because here the asymmetry introduced by the external field couples with the asymmetry stemming from the substrate. Fig.~\ref{fig:DOS-QCHRLAY-CuFe001} shows how the spin-polarized Fe-projected DOS varies with \mbox{E$_{\text{field}}$}\ for a Fe monolayer on Cu(001). 
The changes are now clearly visible, contrary to the situation for layers suspended in vacuum investigated in Figs.~\ref{fig:DOS-QCHRLAY-Fe-mono} and \ref{fig:delta-DOS-QCHRLAY-Fe-bilay}. The corresponding change of the magnetic moment with E$_{\textrm{field}}$ is shown in Fig.~\ref{fig:SMT-QCHRLAY-CuFe001}. The presence of the substrate means that the polarity of the external electric field matters this time --- unlike in the case of suspended layers, as evidenced e.g. in Fig.~\ref{fig:gap-bilay-suspended}. Overall, the variation in the magnetic moment is quite small, about 0.5~\%. \begin{figure} \centering \begin{tabular}{cc} \rotatebox{90}{\hspace{1.65cm}E$_{\textrm{field}}$= -5.2 V/nm} \includegraphics[width=8.cm]{Cu11FeVc6_BSF_Fe-QCHRLAY_-0375_pz.png} \\ \rotatebox{90}{\hspace{1.65cm}E$_{\textrm{field}}$= 0 V/nm} \includegraphics[width=8.cm]{Cu11FeVc6_BSF_Fe-QCHRLAY_0_pz.png} \\ \rotatebox{90}{\hspace{1.65cm}E$_{\textrm{field}}$= +5.2 V/nm} \includegraphics[width=8.cm]{Cu11FeVc6_BSF_Fe-QCHRLAY_+0375_pz.png} \\ \end{tabular} \caption{\label{fig:BSF-Fe-mono-Cu001}Fe-projected Bloch spectral function for a Fe monolayer on Cu(001), color-coded to indicate the predominantly down spin-polarization of electronic states at the Fermi level. From top to bottom: results for E$_{\textrm{field}}$= -5.2, 0, or +5.2 (V/nm). } \end{figure} A more detailed view can be obtained by inspecting the projection of the Bloch spectral function at the Fe site. Its dependence on \mbox{E$_{\text{field}}$}\ is outlined in Fig.~\ref{fig:BSF-Fe-mono-Cu001}. We show an interval around the Fermi level, which corresponds to the $\max[\omega_n(\bm q)]=0.5$ eV energy range of magnons in iron thin films. Note that the Bloch spectral function exhibits the characteristic broadening from lack of periodicity along the $z$ direction. 
Even though the general look of all three graphs is the same in Fig.~\ref{fig:BSF-Fe-mono-Cu001}, a systematic dependence of the position of certain features on \mbox{E$_{\text{field}}$}\ is evident: for example, the energy positions of the local maximum within 0.3~eV below $E_{F}$ for $\bm{k}$ between $\overline{\Gamma}$ and $\overline{X}$ or the energy positions of the inflection point within 0.3~eV below $E_{F}$ for $\bm{k}$ between $\overline{M}$ and $\overline{\Gamma}$. \begin{figure}[htb] \centering \includegraphics[width=8.5cm]{magnons_1_Fe_on_Cu001-vs_QCHRLAY-converted.png} \caption{\label{fig:magnons-QCHRLAY-mono-Cu001}Adiabatic magnon spectrum of a Fe monolayer on Cu(001) for selected values of E$_{\textrm{field}}$= -5.2, 0, and +5.2 (V/nm). } \end{figure} \begin{figure}[htb] \centering \begin{tabular}{c} \includegraphics[width=8.cm]{CuFe_1_ml_spinwaves_with_Stoner.png} \\ \includegraphics[width=8.5cm]{CuFe_1_ml_relative_Stoner-converted.png} \\ \end{tabular} \caption{\label{fig:damping-QCHRLAY-38-CuFe}Top panel: Magnon spectrum for a Fe monolayer on Cu(001) for $E_{\textrm{field}}=0$, depicting eigenvalues according to Eq.~\eqref{eq:Landau-Lifschitz-secular} (darker line) together with the corresponding intensity of Stoner excitations obtained by evaluating Eq.~\eqref{eq:Stoner-convolution} (lighter shaded area, in arbitrary units). Bottom panel: Relative change of the magnon lifetime (obtained as the inverse of the Stoner intensity) with \mbox{E$_{\text{field}}$}, for three choices of the $\bm{q}$-vector indicated in the top graph by differently dashed vertical lines of matching colors. } \end{figure} We show in Fig.~\ref{fig:magnons-QCHRLAY-mono-Cu001} the dispersion relation $\omega(\bm q)$ obtained according to Eq.~\eqref{eq:Landau-Lifschitz-secular} for the same three values of \mbox{E$_{\text{field}}$}\ considered in Fig.~\ref{fig:BSF-Fe-mono-Cu001}. We observe a very limited dependence. 
However, the situation is different for the Stoner spectrum estimated by means of Eq.~\eqref{eq:Stoner-convolution}. Results for \mbox{E$_{\text{field}}$}=0 are first illustrated in the top graph of Fig.~\ref{fig:damping-QCHRLAY-38-CuFe} as a broadening of the dispersion $\omega(\bm{q})$. The qualitative outcome of increasing Landau damping as we move away from the $\overline{\Gamma}$ point compares well both with experiments and with more comprehensive TDDFT calculations \cite{Buczek2011}. We interpret this broadening as inversely proportional to the magnon lifetime. The bottom graph of Fig.~\ref{fig:damping-QCHRLAY-38-CuFe} shows the relative change of this quantity with \mbox{E$_{\text{field}}$}. Results are depicted for three choices of the $\bm{q}$-vector, indicated by dashed lines in the top graph of the same figure. It is evident that varying E$_{\textrm{field}}$ leads to significant changes in the Stoner spectrum and, consequently, to a different magnon lifetime. The general trend is that a positive \mbox{E$_{\text{field}}$}\ decreases the Landau damping thereby extending the magnon lifetime, whereas a negative \mbox{E$_{\text{field}}$}\ increases the damping and therefore reduces the magnon lifetime. The effect of a negative \mbox{E$_{\text{field}}$}, generated by having negative point charges above the Fe/Cu(001) semi-infinite system, appears to be larger than the effect of a positive \mbox{E$_{\text{field}}$}. \subsection{\label{sec:bilay-Cu001}Fe bilayer on Cu(001)} \begin{figure}[htb] \centering \includegraphics[width=8.5cm]{CuFe_2ml-SMT_vs_QCHRLAY.png} \caption{\label{fig:SMT-QCHRLAY-bilay-Cu001}Spin magnetic moment vs. E$_{\textrm{field}}$ for the exposed Fe$_2$ (brown full circles, left scale) and subsurface Fe$_1$ (blue empty circles, right scale) for an iron bilayer over Cu(001) substrate.} \end{figure} In the previous Sec.~\ref{sec:mono-Cu001} we investigated a system with a single magnon eigenmode. 
In order to have more eigenmodes, it is necessary to consider more than a single Fe sheet. The Cu substrate has only a negligible induced magnetic moment and thus cannot host magnons. We consider in this part an iron bilayer on Cu(001), again assuming out-of-plane easy axis of magnetization and the same unrelaxed lattice parameters as in the previous sections, to facilitate comparison. We first examine the dependence of the magnetic moments in both Fe layers on E$_{\textrm{field}}$. For the upper Fe$_2$ layer, exposed to the vacuum, this dependence has a similar nonmonotonic profile as for the iron monolayer on Cu(001) (compare the line with full circles in Fig.~\ref{fig:SMT-QCHRLAY-bilay-Cu001} with Fig.~\ref{fig:SMT-QCHRLAY-CuFe001}). On the other hand, the magnetic moments decrease almost linearly with increasing \mbox{E$_{\text{field}}$}\ for the subsurface Fe$_1$ layer (blue line with empty circles in Fig.~\ref{fig:SMT-QCHRLAY-bilay-Cu001}). The total change of the magnetic moment across the investigated range of \mbox{E$_{\text{field}}$}\ is about 0.5~\% for both layers, similarly as in the case of a Fe monolayer on Cu(001). \begin{figure}[htb] \centering \includegraphics[width=8.5cm]{CuFe_2ml_eigenmodes_projection.png} \caption{\label{fig:CuFe-bilay-eigenmodes}Adiabatic magnon spectrum for a Fe bilayer on Cu(001) and with E$_{\textrm{field}}=0$. The color coding represents the magnitude of the corresponding complex eigenvectors, projected on the Fe$_2$ layer (as in Fig.~\ref{fig:suspend-bilay-eigenmodes}).} \end{figure} The adiabatic magnon dispersion is shown in Fig.~\ref{fig:CuFe-bilay-eigenmodes}. Some qualitative differences appear with respect to the case of a Fe bilayer suspended in vacuum. In particular, the substrate removes the energy degeneracy also for $\bm{q}$ points along the $\overline{X}$--$\overline{M}$ path. 
On the other hand, the suspended bilayer and the bilayer deposited on Cu(001) exhibit a similar involvement of the individual iron sheets' moments in hosting the magnons. The two eigenmodes involve precession of magnetic moments equally from both iron sheets near to $\overline{\Gamma}$, and from only one or the other layer away from the origin of the Brillouin zone. The high-energy branch involves only the subsurface Fe$_1$ atoms along the $\overline{X}$--$\overline{M}$ path, whereas the low-energy branch involves only the surface Fe$_2$ atoms. A similar $\bm q$-resolved decomposition can be observed for the suspended bilayer of Fig.~\ref{fig:suspend-bilay-eigenmodes}. \begin{figure}[htb] \centering \includegraphics[width=8.5cm]{acoustic-optic_gap_vs_QCHRLAY_Fe_2ml.png} \caption{\label{fig:gap-bilay-Cu001}Energy gap between the high- and low-energy magnon branches at $\bm{q} = \overline{\Gamma}$ for an iron bilayer on Cu(001) (cf.~Fig.~\protect\ref{fig:CuFe-bilay-eigenmodes}) evaluated as a function of E$_{\textrm{field}}$. } \end{figure} \begin{figure}[htb] \centering \includegraphics[width=8.5cm]{jxc-on-Efield-eps-converted-to.pdf} \caption{\label{fig:CuFe-bilay-J12-QCHRLAY}Inter-layer Heisenberg exchange couplings $J^{12}_{IJ}$ for a Fe bilayer on Cu(001) plotted as a function of the $|\bm R_I - \bm R_J|$ distance, for E$_{\textrm{field}}$= -5.2, 0, and +5.2 (V/nm). } \end{figure} We then evaluate again the gap $\Delta E = \omega_2(\overline{\Gamma})-\omega_1(\overline{\Gamma})$ between the high- and low-energy magnon branches as a function of E$_{\textrm{field}}$. For the suspended bilayer its influence was symmetric with respect to the polarity and quite small (Fig.~\ref{fig:gap-bilay-suspended}). 
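It is instructive to verify numerically that Eq.~\eqref{eq:Landau-Lifschitz-bilayer} at $\bm q = \overline{\Gamma}$ always yields a zero-energy Goldstone mode together with the closed-form gap $\Delta E = 4 \sum_{\bm{R}_{IJ}} J_{IJ}^{12} \, (m_1^z + m_2^z)/(m_1^z m_2^z)$ quoted in Sec.~\ref{sec:suspend-mono-bilay}. The coupling sum and layer moments below are hypothetical inputs, used only to exercise the algebra.

```python
import numpy as np

# Hypothetical inputs: summed inter-layer coupling (meV) and moments (mu_B).
J12 = 10.0            # sum over R_IJ of J^{12}_{IJ} (= J^{21}_{IJ})
m1, m2 = 2.6, 2.8     # layer-resolved spin magnetic moments m_1^z, m_2^z

# At q = Gamma all phase factors equal 1, so the intra-layer terms of
# Eq. (Landau-Lifschitz-bilayer) cancel and only J12 survives.
N_gamma = 4.0 * np.array([[ J12 / m1, -J12 / m1],
                          [-J12 / m2,  J12 / m2]])
eigvals = np.sort(np.linalg.eigvals(N_gamma).real)

# Closed-form gap: Delta E = 4 * J12 * (m1 + m2) / (m1 * m2).
gap_analytic = 4.0 * J12 * (m1 + m2) / (m1 * m2)
```

The lower eigenvalue vanishes (Goldstone mode) while the upper one reproduces $\Delta E$, confirming that at $\overline{\Gamma}$ the gap is controlled solely by the inter-layer couplings and the layer moments, i.e. precisely the quantities modified by \mbox{E$_{\text{field}}$}.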
The presence of the substrate changes the situation dramatically, as can be seen in Fig.~\ref{fig:gap-bilay-Cu001}: the total variation of $\Delta E$ is now about 30~\% (in contrast with 0.05~\% for the case of a bilayer suspended in vacuum, see Sec.~\ref{sec:suspend-mono-bilay}) and it is asymmetric with respect to \mbox{E$_{\text{field}}$}. This outcome is not only due to the different effect of the perturbation on the magnetic moments for Fe$_1$ and Fe$_2$ atoms (see Fig.~\ref{fig:SMT-QCHRLAY-bilay-Cu001}) but it is also due to the E$_{\textrm{field}}$-induced modifications of the interlayer Heisenberg exchange couplings \cite{Mankovsky2021a}. This can be seen in Fig.~\ref{fig:CuFe-bilay-J12-QCHRLAY} where we present the inter-layer coupling constants $J^{12}_{IJ}$, for different values of the external electric field. The largest variation occurs for the nearest neighbors and then decays rapidly with the distance $|\bm R_I - \bm R_J|$. \section{\label{sec:discuss}Discussion} The calculations presented in Sec.~\ref{sec:numerical-results} reveal that certain features of magnon spectra can be controlled by an applied electric field, besides aspects already considered in the literature as a consequence of voltage-controlled magneto-crystalline anisotropy \cite{Rado1979,Liu2013}, multiferroic coupling \cite{Rovillain2010,Risinggard2016}, induced effective DMI \cite{Liu2011b,Zhang2014a,Krivoruchko2017a,Krivoruchko2018,Rana2019,Savchenko2019b,Krivoruchko2020}, or strain from a piezoelectric substrate \cite{Qin2021}. In particular, we see that a finite E$_{\textrm{field}}$ perturbation may lead to sizable changes in the magnon lifetime, even in a case for which the adiabatic dispersion $\omega(\bm{q})$ is fairly unaffected (compare Fig.~\ref{fig:magnons-QCHRLAY-mono-Cu001} with Fig.~\ref{fig:damping-QCHRLAY-38-CuFe}). 
The stability of this latter quantity can be linked to the balance between the tiny asymmetric increase of the spin magnetic moment for $|E_{\textrm{field}}| > 0$ on the one hand (Fig.~\ref{fig:SMT-QCHRLAY-CuFe001}), and the strengthening of Heisenberg $J_{ij}$ parameters (by a few tenths of meV) for nearest-neighbor Fe atoms on the other hand. The robustness of $\omega(\bm{q})$ against \mbox{E$_{\text{field}}$}\ suggests that the main reason why the magnon lifetime changes with \mbox{E$_{\text{field}}$}\ is that the Bloch spectral functions entering Eq.~\eqref{eq:Stoner-convolution} are significantly modified by the electric field. A negative E$_{\textrm{field}}$ couples mainly with minority electronic states, just below the Fermi level (Fig.~\ref{fig:BSF-Fe-mono-Cu001} top). This results in more minority states appearing closer to the Fermi level, with a shift of the $n^{\downarrow}_{\textrm{Fe}}(E)$ bump toward higher energy from its original position at around $E=-250$ meV (Fig.~\ref{fig:DOS-QCHRLAY-CuFe001}). The net result is an increase in Stoner intensity, which is shown in Fig.~\ref{fig:damping-QCHRLAY-38-CuFe} (bottom) as a noteworthy enhancement of Landau damping at every depicted $\bm q$-point. An opposite shift of the electronic spectral weight, i.e., to lower energies, takes place for $E_{\textrm{field}} > 0$. This results in longer magnon lifetimes due to the repulsion to deeper energies of the same minority electronic states discussed above, until they are pushed below the $[E_{\textrm{min}},E_{\textrm{max}}]$ energy interval sampled by Eq.~\eqref{eq:Stoner-convolution}, progressively allowing fewer competing Stoner excitations. For both electric field polarities, saturation of the change in Landau damping appears when the perturbation can no longer redistribute spin-polarized spectral weight within the energy interval spanned by the magnon. 
The scenario of a Fe bilayer on Cu(001) shows E$_{\textrm{field}}$-induced changes in the magnon dispersion relations even before considering finite lifetime effects. Interestingly, the dependence of the magnetic moments on \mbox{E$_{\text{field}}$}\ exhibits different trends for each of the two iron sheets (see Fig.~\ref{fig:SMT-QCHRLAY-bilay-Cu001}). In both cases, the magnetic moment is larger than in bulk bcc Fe, as it is common for surfaces. This is a consequence of the thin film straining to follow the different lattice parameters of the substrate. In addition, the reduced dimensionality, or more specifically, the reduced number of Fe atoms with alike neighbours also plays a role. However, whereas the surface Fe$_2$ layer shows an approximately parabolic and slightly asymmetric variation of the spin magnetic moment with \mbox{E$_{\text{field}}$}, similar to the case of a monolayer (cf.~Fig.~\ref{fig:SMT-QCHRLAY-CuFe001}), the sub-surface Fe$_1$ layer contiguous to copper shows a monotonic quasilinear dependence instead. For both layers, the change spans a similar interval of about $0.012$ $\mu_B$. It seems that exposure to the electric field perturbation with or without an in-between layer that can provide metallic screening is more important than the proximity to the non-magnetic substrate, in governing these trends. 
For all cases under consideration we find a $\omega_1(\bm q)$ solution to Eq.~\eqref{eq:Landau-Lifschitz-secular} that has zero energy at the $\overline{\Gamma}$ point, i.e. a Goldstone mode. The second eigenmode $\omega_2(\bm q)$, when present, starts from the origin of the Brillouin zone in a similar quadratic fashion, which is a consequence of the ferromagnetic ground state order. While long-wavelength magnons are equally hosted by both layers, in the presence of a copper substrate the two modes are neither degenerate in energy, nor in the way that they involve Fe atoms from one or the other sheet at large $\bm q$. Upon including a finite electric field, the Goldstone theorem continues to apply and the lower-energy $|\omega_1(\bm q)\rangle$ branch continues to start from zero energy. The $\Delta E$ gap at $\overline{\Gamma}$ strongly depends on the presence of the non-magnetic substrate (cf. Fig.~\ref{fig:gap-bilay-suspended} vs. Fig.~\ref{fig:gap-bilay-Cu001}). In this case the applied perturbation significantly modifies the higher-energy $\omega_2(\bm q = \overline{\Gamma})$ solution, by changing both the inter-layer Heisenberg exchange parameters $J_{IJ}^{12}$, and the layer-resolved magnetic moments $m_1^z$, $m_2^z$ that enter Eq.~\eqref{eq:Landau-Lifschitz-bilayer}. The resulting energy difference gets wider for negative E$_{\textrm{field}}$, and shrinks but remains open when inverting the sign of the perturbation. A negative electric field not only increases the spin magnetic moment of both Fe$_1$ and Fe$_2$ atoms which are equally involved in the $\omega_n(\bm q \to \overline{\Gamma})$ limit, but it also strengthens the $J_{IJ}^{12}$ inter-layer interaction (Fig.~\ref{fig:CuFe-bilay-J12-QCHRLAY}). The opposite happens for $E_{\textrm{field}} > 0$. In summary, the electric field perturbation acts across the dielectric barrier of Fig.~\ref{fig:device-layout} by modulating the influence of the non-magnetic substrate. 
This mechanism provides different Landau damping even for limited changes in the purely adiabatic dispersion relation of magnons in simple metallic thin films. The same mechanism also offers possible routes to engineer specific changes in the magnon spectrum of more complex, thicker 2D systems, such as the energy gap at the $\overline{\Gamma}$ point. We have focused here on simple examples with a ferromagnetic ground state. However, analogous considerations should apply to more complex scenarios, such as antiferromagnets \cite{Cheng2016,Wang2020a,Kim2018a}, skyrmion lattices \cite{Chen2019c}, rare earths \cite{Leon2017}, or cases where the applied electric field is spatially inhomogeneous \cite{Krivoruchko2019,Krivoruchko2019a}. \section{\label{sec:conclusions}Conclusions} Magnon spectra of magnetic/non-magnetic metallic heterostructures can be manipulated by an external gating electric field. Our {\em ab~initio} calculations for test systems of a Fe monolayer and a Fe bilayer, both suspended in vacuum and deposited on Cu(001), demonstrate that this perturbation can induce sizable modifications in finite magnon lifetimes from Landau damping, besides possible changes in the purely adiabatic dispersion relations already considered in the literature. The changes in magnon lifetimes can be related to modifications of the electronic structure, in particular in the layer-resolved spin-polarized Bloch spectral functions. For systems with more magnon dispersion branches, variation of the gap between high- and low-energy eigenmodes with the external field \mbox{E$_{\text{field}}$}\ can be expected. As the E$_{\textrm{field}}$ perturbation controls the degree of hybridization among magnetic/non-magnetic layers, one can expect considerable variability in how the magnon spectra are affected by the external field, depending on the choice of the substrate and the thickness of the magnetic film. 
\section{Acknowledgments} We gratefully acknowledge computational resources from the Information Technology for Innovation (IT4I) grants: OPEN-19-45 and OPEN-22-40 (Czech National Computing Centre, Ostrava, Czech Republic). Part of this work was supported by the Deutsche Forschungsgemeinschaft via the grant: DFG EB 154/35, by the Czech Science Foundation via the grant EXPRO no. 19-28375X, and by the Czech Ministry of Education, Youth and Sports via the grant: CEDAMNF CZ.02.1.01/0.0/0.0/15\_003/0000358 (Computational and Experimental Design of Advanced Materials with New Functionalities).
\section{Introduction} Google Play lists over 2.89 million apps on its platform~\cite{stat1}. In the last year alone, these apps collectively accounted for over 111 billion installs by users worldwide~\cite{stat2}. Given the magnitude of this scale, there is tremendous competition amongst developers to boost the visibility of their apps. As a result, developers spend considerable budgets on advertising, with expenditure reaching 96.4 billion USD on app installs in 2021~\cite{stat3}. Owing to this competitiveness, certain developers resort to inflating the reviews, ratings, and installs of their apps. The legitimacy of these means is determined by Google Play’s policy, under which the use of incentivized installs is strictly forbidden~\cite{google}. Some apps violate this policy by offering users incentives in the form of gift cards, coupons, and other monetary rewards in return for installing other apps; we refer to these as \textit{install-incentivizing apps}. Past work~\cite{farooqi2020} found that apps promoted on install-incentivizing apps are twice as likely to appear in the top charts and at least six times more likely to witness an increase in their install counts. While their work focuses on measuring the impact of incentivized installs on Google Play, our work aims to develop an understanding of how it affects the \textit{users} of install-incentivizing apps. To this end, we perform a mixed-methods analysis of the reviews and permissions of install-incentivizing apps. Our ongoing work makes the following contributions: \begin{enumerate} \item We provide a detailed overview of various dark patterns present in install-incentivizing apps and highlight several normative concerns that disrupt the welfare of users on Google Play.
\item We examine different types of permissions requested by install-incentivizing apps to discover similarities with dark patterns, with 95\% of apps requesting permissions that access restricted data or perform restricted actions. \item We show promising preliminary results in algorithmic detection of fraud and lockstep behaviors in reviews that boost overall rating of install-incentivizing apps, detecting near-identical review pairs in 94\% of the 50 most suspicious review clusters. \item We release our dataset comprising 319K reviews written by 301K reviewers over a period of five months and 1,825 most relevant reviews with corresponding qualitative codes across 60 install-incentivizing apps~\cite{Ashwin}. \end{enumerate} \begin{figure}[!h] \centering \begin{minipage}{.48\textwidth} \centering \includegraphics[width=\textwidth]{figure/cdf.pdf} \caption{\textbf{Distribution and CDF plot of install count for the 60 shortlisted install-incentivizing apps that collectively account for over 160.5M installs. Eighty-five percent of these apps have 100K or more installs, demonstrating their popularity.}} \label{fig:cdf_installs} \end{minipage} \hfill \begin{minipage}{.48\textwidth} \centering \includegraphics[width=0.75\textwidth]{figure/appnetwork.pdf} \caption{\textbf{Network of apps showing labels of five apps that share the most reviewers with other apps. App `us.current.android' shares 6.4K reviewers with other install-incentivizing apps.}} \label{fig:appnetwork} \end{minipage}% \vspace{-0.5mm} \end{figure} \section{Dataset} We created queries by prefixing “install apps” to phrases like “earn money”, “win prizes”, “win rewards”, etc., and searched them on Google Play to curate a list of potentially install-incentivizing apps. Then, we proceeded to install the apps from this list on our mobile devices to manually verify whether these apps incentivized installs for other apps; we discarded the apps that did not fit this criterion.
Following this process, we shortlisted 60 \textit{install-incentivizing} apps. In Figure~\ref{fig:cdf_installs}, we plot a distribution and CDF of their installs, finding that most apps (85\%) have more than 100K installs. We used a scraper to collect reviews written daily on these apps, over a period of 5 months from November 1, 2021 to April 8, 2022. Reviews were collected daily to avoid over-sampling of reviews from certain temporal periods over others. This resulted in 319,198 reviews from 301,188 reviewers. Figure~\ref{fig:appnetwork} shows a network of apps where edges denote the number of reviewers shared by any two apps. We observe that certain apps share more reviewers with some apps over others, hinting at the possibility of collusion. Lastly, we also collected the permissions requested by apps on users’ devices. \section{Qualitative Analysis} To understand the various ways in which install-incentivizing apps affect their users, we performed qualitative analysis of their reviews. Unless a user expands the list of reviews, Google Play displays only the top four “most relevant” reviews under its apps. Owing to their default visibility, we sampled these reviews for all 60 apps over a one-month period, obtaining 1,825 unique reviews. Then, we adopted an inductive open coding approach to thematically code~\cite{miles1994qualitative} these reviews. In the first iteration, all researchers independently worked on identifying high-level codes for these reviews which were then compared and discussed. During this process, we defined the `completion of offers on install-incentivizing apps' as an act of \textit{labor} by users and the `incentive promised for their labor' as \textit{value}. Then, we reached a consensus on four high-level themes: \textit{exploitation, UI challenges, satisfaction}, and \textit{promotion}, which we define below: \begin{enumerate} \item \textbf{Exploitation:} User invests \textit{labor} but is unable to gain \textit{value}. 
\item \textbf{UI challenges:} User invests \textit{labor} but the app's UI makes it challenging for them to gain \textit{value}. \item \textbf{Satisfaction:} User invests \textit{labor} and is able to gain \textit{value}. \item \textbf{Promotion:} User invests \textit{labor} in promoting an app through their review, rating or a referral code to gain \textit{value}. \end{enumerate} While all themes were useful for capturing the inter-relationship between a user's \textit{labor} and its \textit{value}, the first three themes were relatively more prevalent in our data. Next, we performed two iterations of line-by-line coding of reviews within the high-level themes where the researchers identified emerging patterns under each theme until the principle of saturation was established. \begin{table*} \footnotesize \caption{\textbf{Different types of dark patterns mapped to their individual \{Financial Loss (\textbf{I1}), Invasion of Privacy (\textbf{I2}), Cognitive Burden (\textbf{I3})\} and collective \{Competition (\textbf{C1}), Price Transparency (\textbf{C2}), Trust in the Market (\textbf{C3})\} normative concerns.}} \label{darkpatterns} \centering \resizebox{\textwidth}{!}{% \begin{tabular}{|c|c|l|clllll|} \hline \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}High-Level \\ Code\end{tabular}}} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}Low-Level \\ Code\end{tabular}}} & \multicolumn{1}{c|}{\multirow{2}{*}{\textbf{Review}}} & \multicolumn{6}{c|}{\textbf{Normative Concerns}} \\ \cline{4-9} & & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{\textbf{I1}} & \multicolumn{1}{c|}{\textbf{I2}} & \multicolumn{1}{c|}{\textbf{I3}} & \multicolumn{1}{c|}{\textbf{C1}} & \multicolumn{1}{c|}{\textbf{C2}} & \multicolumn{1}{c|}{\textbf{C3}} \\ \hline \multirow{6}{*}{Exploitation} & Withdrawal Limit & {\begin{tabular}[c]{@{}l@{}}\textit{100000 is equal to 10 dollars.
Just a big waste of time.} \\ \textit{You can not reach the minimum cashout limit.}\end{tabular}} & \multicolumn{1}{c|}{\checkmark} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\checkmark} & \multicolumn{1}{l|}{\checkmark} & \checkmark \\ \cline{2-9} & Cannot Redeem & {\begin{tabular}[c]{@{}l@{}}\textit{Absolute scam. Commit time and even made in app} \\ \textit{purchases to complete tasks ... I have over 89k points} \\ \textit{that it refuses to cash out!} \end{tabular}} & \multicolumn{1}{c|}{\checkmark} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\checkmark} & \multicolumn{1}{l|}{\checkmark} & \checkmark \\ \cline{2-9} & Only Initial Payouts & {\begin{tabular}[c]{@{}l@{}}\textit{Good for the first one week then it will take forever to} \\ \textit{earn just a dollar. So now I quit this app ...}\end{tabular}} & \multicolumn{1}{c|}{\checkmark} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\checkmark} & \multicolumn{1}{l|}{\checkmark} & \checkmark \\ \cline{2-9} & Paid Offers & {\begin{tabular}[c]{@{}l@{}}\textit{In the task I had to deposit 50 INR in an app and I} \\ \textit{would receive 150 INR as a reward in 24 hrs. 5 days}\\ \textit{have passed and I get no reply to mail.}\end{tabular}} & \multicolumn{1}{c|}{\checkmark} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\checkmark} & \multicolumn{1}{l|}{\checkmark} & \checkmark \\ \cline{2-9} & Hidden Costs & {\begin{tabular}[c]{@{}l@{}}\textit{Most surveys say that the user isn’t eligible for them,} \\ \textit{after you complete them! 
Keep in mind you may not} \\ \textit{be eligible for 90\% of the surveys.}\end{tabular}} & \multicolumn{1}{c|}{\checkmark} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\checkmark} & \multicolumn{1}{l|}{\checkmark} & \checkmark \\ \cline{2-9} & Privacy Violations & {\begin{tabular}[c]{@{}l@{}}\textit{Enter your phone number into this app and you’ll be} \\ \textit{FLOODED with spam texts and scams. I might have} \\ \textit{to change my phone number because I unwittingly ...}\end{tabular}} & \multicolumn{1}{c|}{} & \multicolumn{1}{l|}{\checkmark} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \checkmark \\ \hline \multirow{3}{*}{UI Challenges} & Too Many Ads & {\begin{tabular}[c]{@{}l@{}}\textit{Pathetic with the dam ads! Nothing but ads!!! Money} \\ \textit{is coming but only pocket change. It’ll be 2022 before} \\ \textit{i reach \$50 to cashout, if then.}\end{tabular}} & \multicolumn{1}{c|}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\checkmark} & \multicolumn{1}{l|}{\checkmark} & \multicolumn{1}{l|}{} & \\ \cline{2-9} & Progress Manipulation & {\begin{tabular}[c]{@{}l@{}}\textit{I redownload the app since the app would crash all the} \\ \textit{time ... I logged in and guess what?? ALL MY POINTS} \\ \textit{ARE GONE.. 
12k points all gone...}\end{tabular}} & \multicolumn{1}{c|}{\checkmark} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\checkmark} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\checkmark} & \checkmark \\ \cline{2-9} & Permission Override & {\begin{tabular}[c]{@{}l@{}}\textit{When you give it permission to go over other apps it} \\ \textit{actually blocks everything else on your phone from} \\ \textit{working correctly including Google to leave this review.}\end{tabular}} & \multicolumn{1}{c|}{} & \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{\checkmark} & \multicolumn{1}{l|}{\checkmark} & \multicolumn{1}{l|}{} & \checkmark \\ \hline \end{tabular}} \end{table*} \subsection{How Install-Incentivizing Apps affect Users} In this section, we describe our findings from the qualitative analysis to shed light on how install-incentivizing apps affect their users. More specifically, we elaborate on the commonalities and differences of patterns within high-level codes that we discovered using line-by-line coding to depict how labor invested by users in these apps is not only exploited but also leads to negative consequences for them as well as the platform. \subsubsection{Dark Patterns}\label{dp}\hfill\\ Dark patterns can be defined as tricks embedded in apps that make users perform unintended actions~\cite{brignull2020types}. We find comprehensive descriptions of dark patterns present within install-incentivizing apps in reviews coded as `exploitation' and `UI challenges'. These patterns make it difficult for users to redeem value for their labor. First, our low-level codes uncover the different types of dark patterns present in reviews of install-incentivizing apps. Then, we ground these types in prior literature~\cite{Mathur2021dark} by utilizing lenses of both individual and collective welfare to highlight their normative concerns.
The individual lens focuses on dark patterns that allow developers to benefit at the expense of users whereas the collective lens looks at users as a collective entity while examining expenses. In our case, the former comprises three normative concerns. First, patterns that enable developers to extract labor from users without compensating them cause \textbf{financial loss (I1)} to users. Second, cases where the data of users is shared with third parties without prior consent, leading to \textbf{invasion of privacy (I2)}. Third, when the information architecture of apps manipulates users into making certain choices due to the induced \textbf{cognitive burden (I3)}. The lens of collective welfare facilitates understanding of the bigger picture of install-incentivizing apps on Google Play by listing three additional concerns. Due to high \textbf{competition (C1)}, some developers incorporate dark patterns in apps that empower them to `extract wealth and build market power at the expense of users’~\cite{day2020dark} on the platform. In conjunction with their concerns at the individual level, they also pose a serious threat to the \textbf{price transparency (C2)} and \textbf{trust in the market (C3)} of Google Play. In Table~\ref{darkpatterns}, we show these different types of dark patterns mapped to their individual and collective normative concerns using sample reviews from our data. \subsubsection{Evidence of Fraudulent Reviews and Ratings}\label{evidence}\hfill\\ During qualitative analysis, we found that most reviews coded as `satisfaction' were relatively shorter and lacked sufficient context to explain how the app benefitted the user, e.g., \textit{``Good app”, ``Nice App”, ``Very easy to buy money.”, ``Nice app for earning voucher”}. We performed Welch's \textit{t}-test to validate that the number of words in reviews coded as satisfaction was significantly lower than in reviews coded as exploitation or UI challenges ($p<0.001,t=-11.41$).
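Such a length comparison can be sketched in a few lines; the statistic below is the standard Welch's \textit{t} for unequal variances, and the word-count samples are hypothetical stand-ins for the coded reviews rather than our actual data.

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t-statistic for two samples with unequal variances."""
    n1, n2 = len(sample_a), len(sample_b)
    m1, m2 = sum(sample_a) / n1, sum(sample_b) / n2
    # Unbiased sample variances
    v1 = sum((x - m1) ** 2 for x in sample_a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in sample_b) / (n2 - 1)
    return (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)

# Hypothetical word counts: 'satisfaction' vs. 'exploitation' reviews
satisfaction = [2, 3, 2, 4, 3, 2, 3]
exploitation = [25, 40, 33, 52, 28, 61, 37]
print(welch_t(satisfaction, exploitation))  # large negative t: satisfaction reviews are shorter
```

In practice one would also compute the Welch--Satterthwaite degrees of freedom and a $p$-value, e.g. via \texttt{scipy.stats.ttest\_ind(..., equal\_var=False)}.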
The shorter length of reviews, along with the excessive use of adjectives and unrelatedness to the apps represented key spam-detection signals~\cite{shojaee2015framework}, raising suspicions about their fraudulence. We discovered evidence of the same in reviews coded as `promotion’ -- \textit{``Gets high rating because it rewards people to rate it so’’, ``I rated it 5 stars to get credits”}, thus finding that install-incentivizing apps also violate Google Play’s policy by incentivizing users to boost their ratings and reviews. Other reviews coded as `promotion’ involved users promoting other competitor apps (\textit{``No earning 1 task complete not give my wallet not good ! CASHADDA App is good fast earning is good go install now thanks”}) or posting their referral codes to get more credits within the install-incentivizing app (\textit{``The app is Awesome. Use My Referral Code am****02 to get extra coin”}). \section{Quantitative Analysis} In this section, we corroborate findings from our qualitative analysis as well as reveal more characteristics about the behavior of install-incentivizing apps and their reviews. For the same, we examine the permissions requested by these apps to establish their relevance to the dark patterns discussed in Section~\ref{dp}, and perform anomaly detection on their reviews to build upon the evidence of fraud from Section~\ref{evidence}. \subsection{Permissions in Install-Incentivizing Apps} App permissions support user privacy by protecting access to restricted data and restricted actions on a user’s device \cite{android}. Most permissions fall into two protection levels as determined by Android, namely \textit{normal} and \textit{dangerous}, based on the risk posed to user privacy. Similarly, another distinction can be made between permissions that access \textit{user information} and permissions that only \textit{control device hardware} \cite{pew}.
We leverage these categories in our analysis to identify types of permissions prominent across install-incentivizing apps. Figure~\ref{fig:upset} shows an UpSet plot~\cite{2014_infovis_upset} of different types of permissions present in install-incentivizing apps. First, we observe that over 92\% of apps comprise \textit{dangerous} permissions that access user information. The most popular permissions in this category include `modify or delete the contents of your USB storage’ (41 apps), `read phone status and identity’ (24 apps), `access precise location’ (19 apps) and `take pictures and videos’ (14 apps). Second, despite being requested by relatively fewer apps, some permissions in this category enable an alarming degree of control over user information; e.g., `create accounts and set passwords’ (5 apps), `add or modify calendar events and send email to guests without owners' knowledge’ (3 apps) and `read your contacts’ (2 apps). Third, 34\% of install-incentivizing apps contain permissions that access dangerous hardware-level information, the most prominent one being `draw over other apps’ (14 apps). Fourth, we note that all but three apps request at least one dangerous permission. Lastly, permissions requested by install-incentivizing apps share common characteristics with the dark patterns discussed above, thus validating their qualitative discovery. \begin{figure}[!h] \centering \begin{minipage}{.49\textwidth} \centering \includegraphics[width=\textwidth]{figure/upset.pdf} \caption{\textbf{UpSet plot demonstrating different types of permissions present in install-incentivizing apps. Over ninety-two percent of apps request permissions that access sensitive user information.}} \label{fig:upset} \end{minipage} \hfill \begin{minipage}{.49\textwidth} \centering \includegraphics[width=0.8\textwidth]{figure/edgestream.pdf} \caption{\textbf{Reviews are modelled as an edge-stream in a dynamic bipartite graph of apps and reviewers.
Each edge $e \in E$ represents a tuple $(r,a,t)$ where $r$ is a reviewer who reviews an app $a$ at time $t$.}} \label{fig:edgestream} \end{minipage}% \end{figure} \subsection{Lockstep Behaviors} In Section~\ref{evidence}, we found evidence of install-incentivizing apps indulging in review and rating fraud. Thus, we build upon the same to investigate reviews of these apps for anomalous behaviors such as lockstep that are indicative of fraud. Specifically, we focus on detecting groups of reviews that exhibit similar temporal and rating patterns; e.g., bursts of reviews on an app within a short period of time to boost its overall rating. \subsubsection{Modelling and Experimental Setup}\hfill\\ Given that reviews are a temporal phenomenon, we model them as an edge-stream $E = \{e_{1},e_{2},...\}$ of a dynamic graph $G$. Each edge $e_{i} \in E$ represents a tuple $(r_{i},a_{i},t_{i})$ where $r_{i}$ is a reviewer who reviews an app $a_{i}$ at time $t_{i}$ (see Fig~\ref{fig:edgestream}). Groups of fraudulent reviewers may either aim to boost the overall rating of an install-incentivizing app or sink the rating of a competitor app. Thus, we partition our edge stream into two sub-streams as follows: \begin{enumerate} \item \textbf{$E_{boost}$} $= \{(r_{i},a_{i},t_{i}) \in E\:| \text{ Score}(r_{i},a_{i}) \geq R_{a_{i}} \} \text{, } |E_{boost}|=215,759$ \item \textbf{$E_{sink}$} $= \{(r_{i},a_{i},t_{i}) \in E\:| \text{ Score}(r_{i},a_{i}) < R_{a_{i}} \} \text{, } |E_{sink}|=103,439$ \end{enumerate} where $\text{Score}(r_{i},a_{i}) \in \{1,2,3,4,5\}$ is the score assigned by reviewer $r_{i}$ to the app $a_{i}$ and $R_{a_{i}}$ denotes the overall rating of app $a_{i}$. Next, we reconfigure a state-of-the-art microcluster anomaly detection algorithm \textsc{Midas-F}~\cite{bhatia2020realtime} for our use. In particular, we modify the definition of a microcluster to accommodate the bipartite nature of our dynamic graph.
Given an edge $e \in E$, a detection period $T \geq 1$ and a threshold $\beta > 1$, there exists a microcluster of reviews on an app $a$ if it satisfies the following equation: \begin{equation} \begin{split} \MoveEqLeft \frac{c(e,(n+1)T)}{c(e,nT)} > \beta \text{ where }c(e,nT) = \\ & \bigl\lvert\{(r_{i},a,t_{i}) \mid (r_{i},a,t_{i}) \in E_{boost} \land (n-1)T < t_{i} \leq nT\}\bigr\rvert \end{split} \end{equation} if $e \in E_{boost}$ and vice versa for $E_{sink}$. Depending on whether $e$ is a boosting or sinking edge, $c(e,nT)$ counts similar edges for the app $a$ within consecutive detection periods $(n-1)T$ and $nT$. Values recommended by the authors are used for the remaining parameters $\alpha$ and $\theta$. It is worth noting that our modification preserves its properties of (i) theoretical guarantees on false positive probability, and (ii) constant-time and constant-memory processing of new edges~\cite{bhatia2020realtime}. \subsubsection{Analysis and Preliminary Results}\hfill\\ \textsc{Midas-F} follows a streaming hypothesis testing approach that determines whether the observed and expected mean number of edges for a node at a given timestep are significantly different. Based on a chi-squared goodness-of-fit test, the algorithm provides anomaly scores $\text{S}(e)$ for each edge $e$ in a streaming setting. Upon computing anomaly scores for both sub-streams $E_{boost}$ and $E_{sink}$, we visualize their CDF with an inset box plot in Fig~\ref{fig:cdfbox}. It can be observed that $E_{boost}$ exhibits more anomalous behavior than $E_{sink}$. To ascertain statistical significance of the same, we make use of Welch's t-test for the hypothesis $H_{1}: \text{S}_{\mu}(E_{boost}) > \text{S}_{\mu}(E_{sink})$. We infer that reviews that aim to boost the rating of an install-incentivizing app show significantly more anomalous behavior ($t = 157.23$, $p \approx 0$) than reviews that aim to bring it down.
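A minimal offline sketch of this setup is shown below: it partitions a toy edge stream into $E_{boost}$ and $E_{sink}$ and applies the count-ratio condition above per app and detection period. The data are hypothetical, and, unlike \textsc{Midas-F}, the counts are recomputed in full rather than maintained in constant memory.

```python
from collections import defaultdict

# Hypothetical edge stream: (reviewer, app, day, score), plus overall app ratings.
edges = [
    ("r1", "appA", 1, 5), ("r2", "appA", 1, 5),
    ("r3", "appA", 2, 5), ("r4", "appA", 2, 5), ("r5", "appA", 2, 5),
    ("r9", "appA", 2, 5), ("r10", "appA", 2, 5), ("r6", "appA", 2, 4),
    ("r7", "appB", 1, 1), ("r8", "appB", 2, 2),
]
overall = {"appA": 4.2, "appB": 3.9}

# Partition into boosting / sinking sub-streams (score vs. overall rating).
E_boost = [e for e in edges if e[3] >= overall[e[1]]]
E_sink = [e for e in edges if e[3] < overall[e[1]]]

def microclusters(stream, T=1, beta=2.0):
    """Flag (app, period) pairs whose edge count grows by a factor > beta
    between consecutive detection periods of length T."""
    counts = defaultdict(int)  # (app, period index) -> number of edges
    for _reviewer, app, t, _score in stream:
        counts[(app, (t - 1) // T + 1)] += 1
    return {(app, n) for (app, n), c in counts.items()
            if counts.get((app, n - 1), 0) > 0 and c / counts[(app, n - 1)] > beta}

print(microclusters(E_boost))  # appA jumps from 2 boosting edges to 5 -> flagged
```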
Next, we examine fraud across anomalous microclusters detected by the algorithm. Figure~\ref{fig:toy} shows one such microcluster anomaly where the algorithm detects reviews from three reviewers boosting the overall rating of two install-incentivizing apps on the same day. We extract the 50 most suspicious clusters of reviews from both sub-streams $E_{boost}$ and $E_{sink}$ based on their average anomaly scores. For each pair of reviews $(r_{i},r_{j})$ within these clusters, we compute their cosine similarity $CS(r_{i},r_{j})$ using embeddings generated by Sentence-BERT~\cite{reimers-2019-sentence-bert}. Over 35\% of reviews (1,687 of 4,717) from the suspicious clusters in $E_{boost}$ form at least one pair of highly identical reviews i.e., $CS(r_{i},r_{j}) = 1$. However, this percentage drops to 10\% (45 of 432 reviews) in case of $E_{sink}$. On closer inspection, we find that these are all extremely short reviews with at most three to four words that consist mostly of adjectives; e.g., $E_{boost}$: (`good app', `very good app'), (`good earning app', `very good for earning app'), (`best app', `very best app') and $E_{sink}$: (`bad', `very bad'), (`super', `super'), (`nice', `very nice'). It is surprising to see that all but four identical pairs from $E_{sink}$ contain only positive adjectives considering they assign the app a low rating. A potential reason for this dissonance can be that reviewers writing these reviews want to camouflage as normal users in terms of their rating patterns. Lastly, from the fifty most suspicious clusters, we find such pairs across 47 (94\%) clusters from $E_{boost}$ and 21 (42\%) clusters from $E_{sink}$. This demonstrates that the efficacy of our approach towards detecting lockstep behaviors is not only limited to the temporal and rating dimensions, but also extends to the content present in reviews.
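The pairing step can be illustrated as follows. The pipeline above embeds reviews with Sentence-BERT; to keep this sketch self-contained we substitute simple bag-of-words count vectors, so the similarity values differ from the SBERT ones, and the example cluster is hypothetical.

```python
import math
from collections import Counter

def cosine(u, v):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(u[w] * v[w] for w in u if w in v)
    norm_u = math.sqrt(sum(c * c for c in u.values()))
    norm_v = math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def identical_pairs(reviews, threshold=0.999):
    """Index pairs of near-identical reviews within a suspicious cluster."""
    vecs = [Counter(r.lower().split()) for r in reviews]
    return [(i, j)
            for i in range(len(vecs))
            for j in range(i + 1, len(vecs))
            if cosine(vecs[i], vecs[j]) >= threshold]

cluster = ["good app", "Good app", "very good for earning app", "bad"]
print(identical_pairs(cluster))  # only the first two reviews match exactly
```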
\begin{figure} \includegraphics[width=0.45\textwidth]{figure/CDFbox.pdf} \caption{\textbf{CDF plot of anomaly scores for the two edge streams $E_{boost}$ and $E_{sink}$. Reviews that boost the overall rating of an install-incentivizing app exhibit significantly more anomalous behavior than reviews that aim to bring it down.}} \label{fig:cdfbox} \end{figure} \begin{figure} \includegraphics[width=0.44\textwidth]{figure/toy.pdf} \caption{\textbf{A microcluster anomaly detected by the algorithm where three reviewers are boosting the overall rating of two install-incentivizing apps `Cashyy' and `Appflame' on the same day.}} \label{fig:toy} \end{figure} \section{Discussion and Future Work} Our current work sheds light on how lax implementation of Google Play’s policy on fraudulent installs, ratings and reviews empowers developers of install-incentivizing apps to deplete the trust and transparency of the platform. Through use of permissions that access restricted data and perform restricted actions, developers incorporate dark patterns in these apps to deceive users and extort labor from them in the form of offers. The second form of labor that we study in our work is the writing of fraudulent reviews. We find evidence of their presence qualitatively and show promising results in detecting them algorithmically. Both types of fraud (incentivized installs and reviews) are only made possible by the labor of users who are vulnerable or crowd-workers who are underpaid~\cite{aso2019}. This enables developers to extract profits as they get away with violating Google Play’s policies without any consequences or accountability. However, a question that remains unanswered is, if reviews under these apps describe exploitative experiences of users, what is it that facilitates their continued exploitation? For now, we can only conjecture that fraudulent positive reviews on install-incentivizing apps suppress ranks of reviews containing exploitative experiences of users.
Whether the same holds true or not is a question that remains to be explored in our future work. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} Machine learning has been widely adopted in real-world applications. Although remarkable results have been achieved in prediction and decision-making scenarios, unexpected bias occurs regularly \cite{stoica2018algorithmic,besse2018confidence,friedler2019comparative}. For example, the new media company ProPublica found that black defendants were far more likely than white defendants to be incorrectly judged as having a higher risk of recidivism by the COMPAS system~\cite{angwin2016machine}. Amazon found that the AI hiring tool it developed to automate the hiring process was biased against women~\cite{lauret2019amazon}. Many works have emerged that design algorithms to avoid such biases and aim to obtain {\em{fair}} machine learning models. This work focuses on achieving fairness in link prediction tasks. The link prediction task is a fundamental and essential problem in modern machine learning applications, including, but not limited to, recommendation systems and knowledge graph completion. The main goal is to predict whether a link between two nodes exists in a graph. Many popular algorithms, e.g., Node2Vec~\cite{grover2016node2vec} and GCN~\cite{kipf2016semi}, have been proposed to solve the link prediction task with superior performance in many scenarios. However, the dataset collected for the model training procedure usually carries various unexpected biases, which lead to unfair results from the trained link prediction model. For instance, after collecting data from social media platforms, early works highlighted that users were more interested in conversing with others of the same race and gender~\cite{khanam2020homophily}. Link prediction models trained on such unfair data will also tend to predict the existence of links between nodes with the same sensitive information. This will unfairly disadvantage some users.
To formally define such an unfair phenomenon, \cite{li2021on,DBLP:conf/aaai/MasrourWYTE20} introduced dyadic fairness for link prediction on graphs. The dyadic fairness criterion expects the prediction results to be independent of the sensitive attributes of the given two nodes. Recently, several works have been proposed to achieve dyadic fairness in link prediction tasks, which can be roughly divided into three categories: 1) the in-processing scheme\cite{li2021on} modifies the learning algorithm to eliminate bias; 2) the post-processing scheme\cite{DBLP:conf/aaai/MasrourWYTE20} attempts to directly debias the model's output after training; 3) the pre-processing scheme\cite{DBLP:journals/corr/abs-2104-14210} repairs the graph data before the training procedure and ensures that the link prediction results satisfy dyadic fairness. In this paper, our proposed method is established under the pre-processing scheme. Compared to the in-processing and post-processing schemes, the pre-processing scheme is the most flexible fairness intervention\cite{nielsen2020practical}. If the discriminating information is removed from the data during the pre-processing stage, the processed data can be utilized to solve arbitrary downstream tasks without concern about the fairness issue. Few works have studied obtaining dyadic fairness through a pre-processing scheme. FairDrop\cite{DBLP:journals/corr/abs-2104-14210} proposed a heuristic repairing method that masks out edges based on the dyadic sensitive attributes. It is easy to implement but lacks a theoretical guarantee of achieving fairness. To design a theoretically sound pre-processing scheme, FairEdge\cite{DBLP:journals/corr/abs-2010-16326} first adopts Optimal Transport (OT) theory\cite{villani2009optimal} to justify whether dyadic fairness can be obtained through a repairing scheme.
FairEdge focuses on the plain graph (where nodes have no attributes) and proposes to repair the adjacency information distributions (conditioned on the sensitive attribute) to their Wasserstein barycenter. Dyadic fairness is obtained once the adjacency information distributions are all repaired to the obtained Wasserstein barycenter. Unlike the previous approach, we focus on attributed graphs (where each node carries attributes), which are more general in the real world. Node attributes can introduce bias even after the bias in the adjacency information has been removed, so algorithms that only consider plain graphs cannot solve this problem; achieving dyadic fairness on attributed graphs remains underexplored. \section{Related Works} \subsection{Fairness in Link Prediction} Link prediction is a well-researched problem in applications related to graph data\cite{al2006link,masrour2015network}. Since fairness in graph-structured data is a relatively new research topic, only a few works have investigated fairness issues in link prediction. In \cite{DBLP:journals/corr/abs-2104-14210}, the authors proposed a biased dropout strategy that forces the graph topology to reduce the homophily of sensitive attributes. Meanwhile, to measure the improvements for the link prediction, they also defined a novel group-based fairness metric on dyadic level groups. In contrast, \cite{DBLP:conf/aaai/MasrourWYTE20} considered generating more heterogeneous links to alleviate the filter bubble problem. In addition, they further presented a novel framework that combines adversarial network representation learning with supervised link prediction. Following the idea of adversarially removing unfair effects, \cite{li2021on} proposes the algorithm FairAdj to empirically learn a fair adjacency matrix with proper graph structural constraints for fair link prediction while simultaneously preserving predictive accuracy as much as possible.
Most similar to our method, \cite{DBLP:journals/corr/abs-2010-16326} formulated the problem of fair edge prediction and proposed an embedding-agnostic repairing procedure for the adjacency matrix with a trade-off between group and individual fairness. However, they still ignore the node attributes, which impact both the prediction and fairness performance. \subsection{Fairness with Optimal Transport} In the context of ML fairness, several works have proposed using the capacity of optimal transport to align probability distributions, overcoming the limitation of most approaches that approximate fairness by imposing constraints on the lower-order moments. Along with this motivation, most of the existing methods consider using optimal transport theory to match distributions corresponding to different sensitive attributes in the model input space or the model output space, which corresponds to pre-processing\cite{pmlr-v97-gordaliza19a, feldman2015certifying, DBLP:journals/corr/abs-2010-16326} and post-processing\cite{jiang2019wasserstein, chzhen2020fair} methods, respectively. In addition, the in-processing\cite{jiang2019wasserstein, chiappa2021fairness} methods based on optimal transport achieve fairness by imposing constraints in terms of the Wasserstein distance in the objective function. \section{Dyadic Fairness in Link Prediction} In this section, we formulate dyadic fairness in the link prediction task and define two metrics (dyadic disparate impact and dyadic balanced error rate) to quantify dyadic fairness. Then we identify two desired properties for our repairing algorithm that aims to obtain dyadic fairness, i.e., flexibility and unambiguity. We further theoretically discuss how these properties can be achieved and prove that aligning conditional attribute and adjacency distributions to the same distribution can obtain dyadic fairness with these properties.
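To build intuition for such distribution alignment, the sketch below computes a one-dimensional Wasserstein-2 barycenter of two equally sized empirical distributions by interpolating matched order statistics (sorted samples). This is only a toy illustration of OT-based repair, not our actual procedure for attributed graphs, and the sample values are hypothetical.

```python
def barycenter_1d(samples_a, samples_b, weight=0.5):
    """Wasserstein-2 barycenter of two equally sized 1-D empirical
    distributions: interpolate matched order statistics (quantiles)."""
    qa, qb = sorted(samples_a), sorted(samples_b)
    assert len(qa) == len(qb), "equal sample sizes assumed for simplicity"
    return [(1 - weight) * x + weight * y for x, y in zip(qa, qb)]

# Toy feature values conditioned on a binary sensitive attribute
group0 = [0.1, 0.4, 0.9]
group1 = [0.3, 0.6, 1.1]
# Repair: map both groups onto the common barycenter distribution
print(barycenter_1d(group0, group1))  # [0.2, 0.5, 1.0]
```

Once both conditional distributions are transported onto this common barycenter, a downstream predictor can no longer distinguish the groups from the repaired feature alone.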
\subsection{Problem Formulation} Consider a graph $\mathcal{G}:= \left( \mathcal{V}, \mathcal{E} \right)$ with node set $\mathcal{V}:= \{v_1, \dots, v_N\}$ and edge set $\mathcal{E}$. Each node $v_i$ is endowed with an attribute vector $\mathbf{x}_i\in\mathbb{R}^M$, and $e_i$ denotes the $i$th row of the adjacency matrix $A\in\{0,1\}^{N\times N}$, which summarizes the connectivity of the graph: if nodes $v_i$ and $v_{j}$ are connected, then $A_{ij}=1$; otherwise, $A_{ij}=0$. A link prediction model identifies whether the link between two nodes $(i,j)$ exists based on their node representations, i.e., $g: \boldsymbol{z}_i \times \boldsymbol{z}_j \mapsto \{0, 1\}$, where $\boldsymbol{z}_i$ denotes node $i$'s representation. The representation $\boldsymbol{z}_i$ is usually obtained by random walks or graph convolutions on the whole graph: $\boldsymbol{z}_i = f(\mathcal{G})[i]$, where $f: \mathcal{G} \mapsto \mathbb{R}^{N\times d}$ is called the embedding function, $d$ is the dimension of the node representation, and $f$ can be Node2Vec, GCN, GAT, etc. The link predictor $g$ takes the representations of two nodes and directly outputs whether a link exists between them. To study the fairness of link prediction tasks, we assume that every node has one sensitive feature $S: \mathcal{V} \rightarrow \mathcal{S}$. We first take the sensitive feature to be binary, $\mathcal{S}=\{0, 1\}$, and let $S(i)$ denote the sensitive feature of node $i$; the binary restriction will be relaxed later. Before proposing our algorithm, we make the following two assumptions: \smallskip \noindent{1). {\bf{Equivalence assumption}}} $$\mathbb{P}\left(S\oplus S^\prime=1 \right)=\mathbb{P} \left(S\oplus S^\prime=0\right)=\frac{1}{2},$$which is based on the fact that each node has the same chance of being sampled regardless of its sensitive attribute value.
For instance, $\mathbb{P}(S=man)=\mathbb{P}(S=woman)$ holds independently of the sampling process and of the obtained graph data itself; \smallskip \noindent{2). {\bf{Propensity assumption}}} \begin{eqnarray} &&\mathbb{P}\left(g(\boldsymbol{z}_u,\boldsymbol{z}_v) = 1\,\big| \,S(u)\oplus S(v)=0\right)\nonumber \\ & & \quad\quad \geq\mathbb{P} \left(g(\boldsymbol{z}_u,\boldsymbol{z}_v)=1 \,\big| \,S(u)\oplus S(v)=1 \right),\nonumber \end{eqnarray}which states that the classifiers we consider tend to predict the existence of links between nodes with the same sensitive attribute. For link prediction problems, the main unfairness phenomenon is assigning high link probability to node pairs with the same sensitive feature while assigning low probability to pairs with different sensitive features. For example, a user may be treated unfairly on social platforms because they are rarely recommended to users of a different gender or race. This unfairness can be defined mathematically as in \cite{li2021on}. \begin{definition} [Dyadic Fairness] A link predictor $g$ obtains dyadic fairness if for node representations $\boldsymbol{z}_i$ and $\boldsymbol{z}_j$ \begin{equation}\label{eq: dp} \mathbb{P}\left( g(\boldsymbol{z}_i, \boldsymbol{z}_j)\, \big| \, S(i)\oplus S(j)=1 \right) = \mathbb{P} \left( g(\boldsymbol{z}_i, \boldsymbol{z}_j) \, \big|\, S(i) \oplus S(j)=0 \right). \end{equation} \end{definition}When the link predictor decides links between two nodes in the same proportion regardless of whether they share the same sensitive attribute, the predictor is said to obtain dyadic fairness. In practice, the exact dyadic fairness described in \eqref{eq: dp} is difficult to achieve on real data.
Therefore, to better quantify fairness, we adopt two other essential fairness metrics, i.e., {\em{dyadic disparate impact}} (DDI) and {\em{dyadic balanced error rate}} (DBER), which are defined as follows: \begin{definition}[DDI: Dyadic Disparate Impact] Given a graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$ and a function $g(\boldsymbol{z}_u,\boldsymbol{z}_v):\mathbb{R}^d\times\mathbb{R}^d\rightarrow\{0,1\}$, we say the link prediction function $g$ has disparate impact at level $\tau\in (0,1]$ on $S(u)\oplus S(v)$ w.r.t.\ $\boldsymbol{Z}$ if: \begin{equation} \mathrm{DDI}\left(g,\boldsymbol{Z},\mathcal{S}\right)=\frac{\mathbb{P}\left( g(\boldsymbol{z}_u,\boldsymbol{z}_v)=1\,\big|\,S(u)\oplus S(v)=1 \right)}{\mathbb{P}\left( g(\boldsymbol{z}_u,\boldsymbol{z}_v)=1\,\big|\, S(u)\oplus S(v)=0 \right)}\leq\tau. \label{eq: di} \end{equation} \end{definition} DDI measures the fairness level of the predictor: the closer its value is to $1$, the fairer the predictor. Ideally, when DDI reaches $1$, the link predictor achieves dyadic fairness. \begin{definition}[DBER: Dyadic Balanced Error Rate] For a graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$ and a function $g(\boldsymbol{z}_u,\boldsymbol{z}_v):\mathbb{R}^d\times\mathbb{R}^d\rightarrow\{0,1\}$, we define the dyadic balanced error rate of the predictor $g$ as the average class-conditional error: \begin{equation} \begin{aligned} & \mathrm{DBER}\left(g,\boldsymbol{Z},\mathcal{S} \right)= \frac{1}{2} \left[\mathbb{P} \left( g(\boldsymbol{z}_u,\boldsymbol{z}_v)=0\, \big|\, S(u)\oplus S(v)=1 \right)\right.\\ &\qquad \left. + \mathbb{P}\left( g(\boldsymbol{z}_u,\boldsymbol{z}_v)=1\, \big|\, S(u)\oplus S(v)=0 \right)\right]. \end{aligned} \label{eq:ber} \end{equation} \end{definition} DBER measures the general misclassification error of the dyadic sensitive relation by $g$ in the particular case of $\mathbb{P}(S\oplus S^\prime=1)=\mathbb{P}(S\oplus S^\prime=0)=\frac{1}{2}$.
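Both metrics can be computed directly from a predictor's outputs on sampled node pairs. A minimal numpy sketch with hypothetical toy values (a predictor that links $80\%$ of same-group pairs but only $40\%$ of mixed pairs):

```python
import numpy as np

def ddi_dber(pred, s_u, s_v):
    """Dyadic disparate impact and dyadic balanced error rate from
    binary link predictions `pred` on node pairs whose endpoint
    sensitive attributes are `s_u` and `s_v`."""
    xor = s_u ^ s_v                         # 1: mixed pair, 0: same-group pair
    p_mixed = pred[xor == 1].mean()         # P(g=1 | S(u) xor S(v) = 1)
    p_same = pred[xor == 0].mean()          # P(g=1 | S(u) xor S(v) = 0)
    ddi = p_mixed / p_same                  # 1 means dyadic fairness
    dber = 0.5 * ((1.0 - p_mixed) + p_same)
    return ddi, dber

# 10 same-group pairs (8 linked) and 10 mixed pairs (4 linked)
s_u = np.zeros(20, dtype=int)
s_v = np.array([0] * 10 + [1] * 10)
pred = np.array([1] * 8 + [0] * 2 + [1] * 4 + [0] * 6)
ddi, dber = ddi_dber(pred, s_u, s_v)   # ddi = 0.5, dber = 0.7
```

Here the mixed-pair link rate is half the same-group rate, so DDI is $0.5$, far from the fair value of $1$.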
The minimal DBER over predictors is guaranteed to be at most $\frac{1}{2}$; the larger this DBER, the fairer the data and predictor $g$. If DBER equals $\frac{1}{2}$, then DDI is $1$ and dyadic fairness is achieved. \subsection{Obtaining Dyadic Fairness} In this paper, we consider establishing dyadic fairness by pre-processing the graph data. Due to the nature of pre-processing, our repairing procedure is independent of the embedding function $f$ and the predictor $g$. As a result, it becomes important to ensure that the repaired data achieve dyadic fairness for an arbitrary embedding function and predictor; we refer to this requirement as \textbf{flexibility}. Furthermore, another straightforward requirement needs to be emphasized, namely \textbf{unambiguity}: after repairing, the attribute and adjacency information of each node should be determined without ambiguity. To obtain wide applicability across predictors (flexibility), we consider optimizing the DBER of the most unfair predictor on the repaired data, i.e., \begin{equation} \boldsymbol{Z}^* = \arg\max_{\boldsymbol{Z}} \min_{g}\ \mathrm{DBER}\left(g, \boldsymbol{Z}, \mathcal{S}\right). \end{equation} If the repaired data $\boldsymbol{Z}^*$ ensure a high DBER under the most unfair predictor, then they obtain dyadic fairness with wide applicability across predictors. Although this makes the problem a bi-level optimization, the closed form of $g$ can be obtained with the Bayes formula as in \cite{pmlr-v97-gordaliza19a}.
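When the representations are discrete, this Bayes-optimal classifier can be written out explicitly. A small numpy check of the resulting identity $\min_g \mathrm{BER} = \frac{1}{2}(1-\mathrm{TV})$, where TV is the total-variation distance between the two conditional distributions (TV is the optimal transport cost under the 0/1 Hamming cost; the constant factor in the theorem below depends on the normalization chosen for $W_{1.\neq}$):

```python
import numpy as np

# Conditional distributions of a discrete representation:
# gamma0 = P(z | xor = 0), gamma1 = P(z | xor = 1), equal priors 1/2.
gamma0 = np.array([0.5, 0.3, 0.2])
gamma1 = np.array([0.2, 0.3, 0.5])

# The Bayes classifier predicts the class with the larger conditional
# mass; its balanced error rate averages the two class-conditional errors.
predict_one = gamma1 > gamma0
ber_star = 0.5 * (gamma0[predict_one].sum() + gamma1[~predict_one].sum())

# Total-variation distance = optimal transport cost under the 0/1 cost.
tv = 0.5 * np.abs(gamma0 - gamma1).sum()
# ber_star equals 0.5 * (1 - tv): the closer the two conditionals,
# the closer the best achievable BER is to the fair optimum 1/2.
```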
\begin{theorem} The smallest DBER for the data $\boldsymbol{Z}$ is equal to: \begin{equation} \min_{g}\ \mathrm{DBER}\left(g, \boldsymbol{Z}, \mathcal{S}\right)=\frac{1}{2}\left(1-\frac{1}{2} W_{1.\neq}\left(\boldsymbol{\hat{\gamma}}_{0}, \boldsymbol{\hat{\gamma}}_{1}\right)\right), \label{eq:rel} \end{equation} where $W_{1.\neq}$ denotes the Wasserstein distance between the conditional joint distributions of the node representation with the Hamming cost function. $\boldsymbol{\hat{\gamma}}_0$ and $\boldsymbol{\hat{\gamma}}_1$ are conditional distributions over $\boldsymbol{Z}\times \boldsymbol{Z}$ given $S(u)\oplus S(v)=0$ and $S(u)\oplus S(v)=1$. \end{theorem} A detailed proof of this theorem is given in \cite{pmlr-v97-gordaliza19a}. As the theorem shows, the dyadic balanced error rate of the most unfair predictor depends on the Wasserstein distance between the two conditional dyadic node representation distributions $(\boldsymbol{\hat{\gamma}}_0, \boldsymbol{\hat{\gamma}}_1)$. When $W_{1.\neq}(\boldsymbol{\hat{\gamma}}_0,\boldsymbol{\hat{\gamma}}_1)=0$, the two conditional distributions are identical, i.e., \begin{equation} \mathbb{P} \left( \boldsymbol{z}_u,\boldsymbol{z}_v\, \big|\, S(u)\oplus S(v)=1 \right) = \mathbb{P} \left( \boldsymbol{z}_u,\boldsymbol{z}_v\, \big|\, S(u)\oplus S(v)=0 \right). \label{eq: xor} \end{equation} In this case the DBER attains the optimum $\frac{1}{2}$, and $\boldsymbol{Z}$ is regarded as dyadically fair with respect to the sensitive feature $S$. Ensuring \eqref{eq: xor} therefore makes the repaired data achieve dyadic fairness with wide applicability to an arbitrary predictor $g$. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{./fairillu.pdf} \caption{Illustration of the ambiguity in dyadic repairing. The pairs $(A,C)$ and $(A,D)$ are repaired respectively to the pairs connected by the black lines.
$A$'s original attribute is yellow, while in the repaired data it has multiple values (``yellow'' and ``purple''), which leads to ambiguity.} \label{fig: illu} \end{figure} One straightforward repairing scheme is to directly move the two conditional distributions to the same distribution. However, the representation $\boldsymbol{z}_i$ of node $i$ often appears multiple times in $\boldsymbol{\hat{\gamma}}_0$ and $\boldsymbol{\hat{\gamma}}_1$. When repairing $\boldsymbol{\hat{\gamma}}_0$ and $\boldsymbol{\hat{\gamma}}_1$, $\boldsymbol{z}_i$ may therefore be assigned multiple values. For example, as shown in Figure~\ref{fig: illu}, direct repairing leads to ambiguity in $A$'s attribute. To achieve unambiguous repairing, we propose the following proposition. \begin{prop} The dyadic fairness \eqref{eq: xor} is satisfied if and only if the following equation is satisfied: \begin{equation} \mathbb{P} \left( \boldsymbol{z}_u\, \big|\, S(u)=0 \right) = \mathbb{P} \left( \boldsymbol{z}_v\, \big|\, S(v)=1 \right). \label{eq: emb} \end{equation} \end{prop} \noindent{\bf{Proof}}: For the sufficient part, if \eqref{eq: emb} is satisfied, then for arbitrary representations $a$ and $b$, \begin{equation} \begin{aligned} &\ \mathbb{P} \left( \boldsymbol{z}_u = a, \boldsymbol{z}_v=b\, \big|\, S(u)\oplus S(v)=0 \right) \\ = &{\textstyle{\sum_{i=0}^1}} \mathbb{P} \left( \boldsymbol{z}_u = a\, \big|\, S(u)=i \right) \times \mathbb{P} \left( \boldsymbol{z}_v=b \,\big| \, S(v)=i \right) \\ = &{\textstyle{\sum_{i=0}^1}} \mathbb{P} \left( \boldsymbol{z}_u = a \,\big|\, S(u)=i \right) \times \mathbb{P} \left( \boldsymbol{z}_v=b \, \big| \, S(v)=1-i \right)\\ = &\ \mathbb{P} \left( \boldsymbol{z}_u = a, \boldsymbol{z}_v=b \, \big| \, S(u)\oplus S(v)=1 \right), \end{aligned} \nonumber \end{equation}which shows that \eqref{eq: xor} is satisfied. The necessary part can easily be proved by contradiction.
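The sufficiency direction can also be verified numerically by enumerating a discrete representation space: when the two per-group conditionals coincide, the pairwise conditionals given $S(u)\oplus S(v)$ coincide as well, and they differ once the per-group conditionals differ (equal group priors are assumed, per the equivalence assumption):

```python
import numpy as np

p0 = np.array([0.6, 0.3, 0.1])   # P(z | S = 0)
p1 = p0.copy()                   # P(z | S = 1): aligned, as eq. (emb) requires

# Joint conditionals over pairs (z_u, z_v), assuming independently
# sampled endpoints and equal group priors.
joint_same  = 0.5 * (np.outer(p0, p0) + np.outer(p1, p1))  # xor = 0
joint_mixed = 0.5 * (np.outer(p0, p1) + np.outer(p1, p0))  # xor = 1
# with p0 == p1 the two joints coincide, i.e. eq. (xor) holds

# A misaligned group-1 conditional breaks the equality:
q1 = np.array([0.1, 0.3, 0.6])
diff_same  = 0.5 * (np.outer(p0, p0) + np.outer(q1, q1))
diff_mixed = 0.5 * (np.outer(p0, q1) + np.outer(q1, p0))
```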
The above proposition implies that a fair representation of nodes is sufficient to achieve dyadic fairness in the optimal case. Repairing based on \eqref{eq: emb} satisfies both the dyadic fairness and the unambiguity requirements, since each node's representation is repaired only once. Having achieved wide applicability across predictors together with unambiguity, we now consider obtaining wide applicability with respect to the embedding function $f$. The embedding function takes the whole graph $\mathcal{G}$ as input and outputs the node representation $\boldsymbol{z}_i$ based on the graph. \begin{prop} For any nodes $u$, $v$ in the graph $\mathcal{G}$, if they have the same node attributes and adjacency status, i.e., \begin{equation} \mathbf{x}_u = \mathbf{x}_v \quad \mathrm{and} \quad \mathbf{e}_u = \mathbf{e}_v, \end{equation} then for any embedding function $f$, $f(\mathcal{G})[u] =f(\mathcal{G})[v]$. Here $\mathbf{x}_u$, $\mathbf{x}_v$ denote the attributes of node $u$ and node $v$, respectively, and $\mathbf{e}_u$, $\mathbf{e}_v$ denote the 1-hop adjacency information, i.e., the local topology structure of node $u$ and node $v$. \end{prop} This proposition enables us to transform \eqref{eq: emb} into the following one: \begin{equation} \mathbb{P} \left( \mathbf{x}_u, \mathbf{e}_u \, \big| \, S(u)=0 \right) = \mathbb{P} \left( \mathbf{x}_v, \mathbf{e}_v \, \big| \, S(v)=1 \right). \label{eq: final} \end{equation} Based on \eqref{eq: final}, the dyadic fairness \eqref{eq: dp} can then be satisfied for arbitrary predictors. In the following, we propose an efficient algorithm to guarantee \eqref{eq: final}. \section{Algorithmic Framework} In this section, we introduce a practical and efficient algorithm called {\bf{DyadicOT}} that achieves dyadic fairness in link prediction tasks based on optimal transport theory. It can easily be extended to multi-valued sensitive attributes, which relaxes the binary constraint on the sensitive value.
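Before detailing the algorithm, the embedding-invariance property underlying \eqref{eq: final} can be sanity-checked numerically. The sketch below uses a one-layer mean-aggregation GCN as a stand-in for $f$ (a hypothetical choice; the property holds for such permutation-equivariant embedding functions): two nodes with identical attributes and identical 1-hop adjacency receive identical representations.

```python
import numpy as np

# 4-node graph: nodes 0 and 1 have identical attributes and identical
# 1-hop adjacency (both connect to nodes 2 and 3, not to each other).
A = np.array([[0, 0, 1, 1],
              [0, 0, 1, 1],
              [1, 1, 0, 1],
              [1, 1, 1, 0]], dtype=float)
X = np.array([[1.0, 2.0],
              [1.0, 2.0],   # same attributes as node 0
              [0.5, 0.1],
              [3.0, 1.0]])

def gcn_embed(A, X, W):
    """One GCN-style layer: mean aggregation over self-loop-augmented
    neighborhoods followed by a linear map and tanh nonlinearity."""
    A_hat = A + np.eye(len(A))
    A_norm = A_hat / A_hat.sum(axis=1, keepdims=True)
    return np.tanh(A_norm @ X @ W)

W = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, -0.5]])
Z = gcn_embed(A, X, W)
# Z[0] and Z[1] coincide, so any downstream link predictor treats
# nodes 0 and 1 identically.
```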
\subsection{Dyadic fairness with optimal transport} To achieve dyadic fairness through \eqref{eq: final}, we first represent the graph $\mathcal{G}$ as a matrix in $\mathbb{R}^{N\times(d+N)}$, where each row collects the attributes of one node ($\mathbf{x}_u$) and its adjacency information ($\mathbf{e}_u$). According to the sensitive feature of each node, we further split $\mathcal{G}$ into $\mathcal{G}_0\in\mathbb{R}^{N_0\times (d+N)}$ and $\mathcal{G}_1\in\mathbb{R}^{N_1\times (d+N)}$, where $N_0$ and $N_1$ are the numbers of nodes with $S=0$ and $S=1$, respectively. To bridge this with optimal transport theory, we assume the groups $\mathcal{G}_0$ and $\mathcal{G}_1$ form uniform distributions $\boldsymbol{\hat{\gamma}}_0$ and $\boldsymbol{\hat{\gamma}}_1$. Our goal can be explicitly described as $\min_{\mathcal{G}} W_{1.\neq}(\boldsymbol{\hat{\gamma}}_0, \boldsymbol{\hat{\gamma}}_1)$. To achieve that goal, we solve the following optimal transport problem: \begin{equation} \boldsymbol{\Gamma}^* = \min_{\boldsymbol{\Gamma}\in\Pi \left( 1/N_0, 1/N_1 \right)} \left\langle \boldsymbol{\Gamma}, \mathbf{C} \right\rangle, \label{eq:opt} \end{equation} where, for $s\in\{0,1\}$, $N_s$ is the number of nodes in group $s$ and $1/N_s$ denotes the uniform vector with $N_s$ elements. \subsubsection{Define the cost matrix ${\mathbf{C}}$} Since the distributions $\boldsymbol{\hat{\gamma}}_0$ and $\boldsymbol{\hat{\gamma}}_1$ encode two important pieces of information about each node, i.e., the features $\mathbf{x}_u$ and the local topology structure $\mathbf{e}_u$, our cost matrix $\mathbf{C}$ consists of two components, with a hyperparameter $\eta$ trading off the feature term against the structure term: \begin{equation} \mathbf{C}_{ij}=\eta \left\| \mathbf{x}_i-\mathbf{x}_j \right\|_2^2 + (1 - \eta) \left\|\mathbf{e}_i-\mathbf{e}_j\right\|_2^2.
\label{eq:cost} \end{equation} We emphasize that although the Hamming distance is used in the above theoretical results, we practically employ the squared Euclidean distance. \subsubsection{The DyadicOT algorithm} Once the optimal transport plan $\boldsymbol{\Gamma}^*$ is obtained, it can be utilized to repair the node features and the adjacency information by mapping both $\mathcal{G}_0\in\mathbb{R}^{N_0\times (N+d)}$ and $\mathcal{G}_1\in\mathbb{R}^{N_1\times (N+d)}$ to the mid-point of the geodesic path between them \cite{villani2009optimal}, i.e., \begin{equation} \left\{\begin{array}{l}\tilde{{\mathcal{G}}_{0}}=\pi_{0} {\mathcal{G}}_{0}+\pi_{1} \boldsymbol{\Gamma}^{*} {\mathcal{G}}_{1}, \smallskip \\ \tilde{{\mathcal{G}}_{1}}=\pi_{1} {\mathcal{G}}_{1}+\pi_{0} \boldsymbol{\Gamma}^{* \top} {\mathcal{G}}_{0}.\end{array}\right. \label{eq:repair} \end{equation} Following the above schemes \eqref{eq:opt}-\eqref{eq:repair}, the proposed DyadicOT algorithm is summarized as follows. \begin{algorithm}[ht] \label{Algo1} \caption{DyadicOT: Dyadic fairness with OT} \begin{algorithmic}[1] \STATE Initialize $\eta$ and $\boldsymbol{\Gamma}^0\in\Pi\left( 1/N_0,1/N_1\right)$; \STATE Split the graph $\mathcal{G}\in\mathbb{R}^{N\times (d+N)}$ into $\mathcal{G}_0\in\mathbb{R}^{N_0\times (d+N)}$ and $\mathcal{G}_1\in\mathbb{R}^{N_1\times (d+N)}$; \STATE Compute the cost matrix $\mathbf{C}$ with \eqref{eq:cost}; \STATE Transform the distributions to their Wasserstein barycenter by solving \eqref{eq:opt}; \STATE Repair $\mathcal{G}_0$ and $\mathcal{G}_1$ with \eqref{eq:repair}. \end{algorithmic} \end{algorithm} \subsubsection{Multi-class extension} To extend our approach to the case of a non-binary sensitive attribute, it is necessary to compute the Wasserstein barycenter\cite{barycenter} of the conditional distributions.
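Before turning to the multi-class extension, the binary repairing steps of the DyadicOT algorithm above can be sketched end-to-end on toy data. The sketch solves \eqref{eq:opt} exactly as a linear program via `scipy.optimize.linprog` (in place of a dedicated OT library) and assumes equal group sizes, so the geodesic mid-point weights are taken as $\pi_0=\pi_1=1/2$ and $N_0\boldsymbol{\Gamma}^*$, $N_1\boldsymbol{\Gamma}^{*\top}$ act as the barycentric maps:

```python
import numpy as np
from scipy.optimize import linprog

def solve_ot(C, a, b):
    """Exact optimal transport plan between histograms a and b under
    cost matrix C, solved as a linear program (fine for toy sizes)."""
    n0, n1 = C.shape
    A_eq = np.zeros((n0 + n1, n0 * n1))
    for i in range(n0):
        A_eq[i, i * n1:(i + 1) * n1] = 1      # row-marginal constraints
    for j in range(n1):
        A_eq[n0 + j, j::n1] = 1               # column-marginal constraints
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
                  bounds=(0, None), method="highs")
    return res.x.reshape(n0, n1)

# Toy attributed graph split by the binary sensitive attribute: each row
# of G_s is [attributes | adjacency row] for one node in group s.
d = 2                                          # number of node attributes
G0 = np.array([[1.0, 0.0, 1.0, 0.0],
               [0.8, 0.1, 1.0, 0.0]])
G1 = np.array([[0.1, 1.0, 0.0, 1.0],
               [0.0, 0.9, 0.0, 1.0]])
eta = 0.5                                      # attribute/structure trade-off
Cx = ((G0[:, None, :d] - G1[None, :, :d]) ** 2).sum(-1)
Ce = ((G0[:, None, d:] - G1[None, :, d:]) ** 2).sum(-1)
C = eta * Cx + (1 - eta) * Ce

N0, N1 = len(G0), len(G1)
Gamma = solve_ot(C, np.full(N0, 1.0 / N0), np.full(N1, 1.0 / N1))

# Repair both groups to the mid-point of the geodesic; after repairing,
# the two group distributions coincide.
G0_rep = 0.5 * G0 + 0.5 * (N0 * Gamma) @ G1
G1_rep = 0.5 * G1 + 0.5 * (N1 * Gamma.T) @ G0
```

On this toy example the optimal plan matches each group-0 node to its nearest group-1 node, and the repaired rows of the two groups coincide at the mid-points.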
Specifically, since each node takes one of $|S|$ possible values of the sensitive attribute, we first divide the graph $\mathcal{G}$ into $|S|$ sensitive attribute groups $\mathcal{G}_k\in\mathbb{R}^{N_k\times (d+N)}$, where $N_k$ is the number of nodes with $S=k$. Then, we compute the Wasserstein barycenter $\bar{\mathcal{G}}^*$ of these groups as follows: \begin{equation} \bar{\mathcal{G}}^*=\underset{\bar{\mathcal{G}} \in \mathbb{R}^{N \times (N+d)}}{\operatorname{argmin}}\frac{1}{|S|}\sum_{k=1}^{|S|}\min _{\boldsymbol{\Gamma}_k \in \Pi\left(\frac{1}{N}, \frac{1}{N_{k}}\right)}\langle\boldsymbol{\Gamma}_k, \mathbf{C}_k\rangle, \end{equation} where $\mathbf{C}_k$ is the cost matrix between $\mathcal{G}_k$ and $\bar{\mathcal{G}}$. Once we have the Wasserstein barycenter $\bar{\mathcal{G}}^*$ and the optimal transport plan $\boldsymbol{\Gamma}_k^*$ between the barycenter and each sensitive attribute group, we repair each sensitive attribute group $\mathcal{G}_k$ as follows: \begin{equation} \tilde{\mathcal{G}_k}=N_k{\boldsymbol{\Gamma}_{k}^{*}}^\top \bar{\mathcal{G}}^*. \end{equation} \section{Experiment Results} This section specifies the experimental procedure of our approach on link prediction tasks and summarizes the analysis of the experimental results. \subsection{Experiment Setup} We first describe the experimental setup, including the real-world datasets, baselines, evaluation metrics, and experiment details. \smallskip \noindent{\bf{Datasets}}. Our proposed algorithm is evaluated on two real-world network datasets. Statistics of the two datasets are summarized in Table~\ref{dataset}.
\begin{table}[ht] \begin{center} \caption{Statistics of the datasets used in the experiments} \label{dataset} \begin{tabular}{@{}c|c|c|c|c@{}} \midrule Dataset & \#Nodes & \#Edges & \#Node attributes & $|S|$ \\ \midrule CORA & $2708$ & $5278$ & $2879$ & $7$ \\ \midrule CiteSeer & $2110$ & $3668$ & $3703$ & $6$ \\ \midrule \end{tabular} \end{center} \end{table} \noindent{-} {CORA}\footnote{{https://networkrepository.com/cora.php}} is a citation network consisting of $2708$ scientific publications classified into seven classes. Each node in the network is a publication, characterized by a bag-of-words representation of its abstract. Links between nodes represent undirected citations, and the sensitive attribute is set to the category of the publication; \noindent{-} {CiteSeer}\footnote{https://networkrepository.com/citeseer.php} consists of $2110$ scientific publications classified into one of six classes. Similar to the CORA dataset, each node in the CiteSeer network is a publication, and its sensitive attribute is set to the publication's category. \smallskip \noindent{\bf{Baselines}}. We compare against the following two pre-processing dyadic fairness baselines: \noindent{-} FairDrop\cite{DBLP:journals/corr/abs-2104-14210} is a biased dropout strategy that forces the graph topology to reduce the homophily of sensitive attributes. Specifically, it generates a fairer random copy of the original adjacency matrix to reduce the number of connections between nodes sharing the same sensitive attributes; \noindent{-} FairEdge\cite{DBLP:journals/corr/abs-2010-16326} is a theoretically sound embedding-agnostic method for group and individually fair edge prediction. It repairs the adjacency matrix of plain graphs based on optimal transport theory and directly ignores the influence of node attributes. \smallskip \noindent{\bf{Evaluation metrics}}.
To measure the structural changes between the repaired and the original graph for the pre-processing mechanism, we use the Assortativity Coefficient (AC)\cite{DBLP:journals/corr/abs-2010-16326} to evaluate the correlation between the sensitive attributes of every pair of connected nodes. AC always lies in $[-1,1]$, and a value close to $0$ indicates no strong association between the sensitive attributes of connected nodes. To evaluate fairness, which is the main concern of our work, Representation Bias (RB)~\cite{buyl2020debayes} is employed to measure whether the embedding is well obfuscated, i.e., contains no sensitive information. Furthermore, we introduce a new dyadic fairness evaluation metric, called DyadicRB, by extending the classical RB metric. Similar to RB, DyadicRB is calculated from the accuracy of the dyadic sensitive-feature classification problem: $$ {\hbox{DyadicRB}} = \sum_{s=0}^{1}\frac{\big|\mathcal{E}_{s}\big|}{\big| \mathcal{E} \big|}\hbox{Accuracy} \left( S(u)\oplus S(v)\, \big|\, \boldsymbol{Z}_{u,v} \right), $$ where $\boldsymbol{Z}_{u,v}$ is the edge embedding, formed as the concatenation of the embeddings of the two nodes $u$ and $v$ connected by the link, and $\hbox{Accuracy}(\cdot)$ is the accuracy of predicting the dissimilarity of the sensitive information $S(u)\oplus S(v)$ from the edge embedding $\boldsymbol{Z}_{u,v}$. Without limiting ourselves to unbiased embeddings, we utilize DDI \eqref{eq: di} to measure the fairness properties of the predictions themselves. We further evaluate the effectiveness of our method on link prediction tasks from both the utility and fairness perspectives; as the utility index, Accuracy (ACC) measures the predictor's performance. \smallskip \noindent{\bf{Experiment Details}}.
Node2Vec\cite{grover2016node2vec} and a support vector classifier are employed for all experiments as our embedding function and link predictor, respectively. The dimension of the node embedding is $128$, and all values are collected with $5$ different random seeds. For easy reproduction of the results, our code is open-sourced on GitHub\footnote{https://github.com/mail-ecnu/OTDyadicFair}, where more details can be found. \subsection{Experiment Results} In this section, we evaluate and compare the effectiveness of our proposed DyadicOT method against other SOTA algorithms on real-world datasets at different stages of the link prediction pipeline. \smallskip \noindent{\bf{Impact on the graph structure}}. Table \ref{ac} shows that the AC values of the two original graphs are relatively high, indicating that links often appear between nodes with the same sensitive attributes. This leads to discrimination against nodes with different sensitive attributes. All three repairing methods reduce the assortativity coefficient relative to the original graph. Specifically, DyadicOT achieves a smaller AC than FairEdge, which indicates the effectiveness of DyadicOT. FairDrop achieves a much smaller AC, and the resulting negative AC indicates that nodes with different sensitive attributes are more likely to connect. However, FairDrop's prediction accuracy may be strongly affected, as shown in Table \ref{cora} and Table \ref{citeseer}. \begin{table}[tb!] \centering \caption{Assortativity Coefficient} \label{ac} \begin{tabular}{@{}c|c|c|c|c@{}} \midrule Dataset & Original & FairEdge & FairDrop & {\bf{DyadicOT}} \\ \midrule CORA & $.771$ & $.668$ & $-.089$ & $.397$ \\ \midrule CiteSeer & $.673$ & $.645$ & $-.065$ & $.567$ \\ \midrule \end{tabular} \end{table} \smallskip \noindent{\bf{Impact on node embeddings}}. Comparing the impact of the different repairing methods on node embeddings is another important concern.
The two aforementioned metrics, RB and DyadicRB, are used to quantify the fairness of the node embeddings. As shown in Tables~\ref{cora} and \ref{citeseer}, DyadicOT achieves the best scores on both RB and DyadicRB. These results indicate that both sensitive attribute prediction and dyadic sensitive attribute relation prediction become difficult after repairing with DyadicOT. \begin{table}[tb!] \centering \caption{Results on CORA. $\uparrow$ ($\downarrow$) denotes that higher (lower) is better.} \label{cora} \scalebox{0.79}{ \begin{tabular}{@{}c|c|c|c|c@{}} \midrule & ACC $\uparrow$& DDI$\uparrow$ & RB $\downarrow$& DyadicRB $\downarrow$ \\ \midrule Original & $\textbf{.829}\pm.007$ & $.266\pm.012$ & $.834\pm.004$ & $.726\pm.009$ \\ \midrule FairEdge & $.663\pm.008$ & $.393\pm.073$ & $.655\pm.004$ & $.596\pm.031$ \\ \midrule FairDrop & $.533\pm.019$ & $.657\pm.087$ & $.467\pm.015$ & $\textbf{.522}\pm.018$ \\ \midrule DyadicOT & $.614\pm.006$ & $\textbf{.836}\pm.106$ & $\textbf{.172}\pm.018$ & $\textbf{.522}\pm.013$ \\ \midrule \end{tabular} } \end{table} \begin{table}[tb!]
\centering \caption{Results on CiteSeer.} \label{citeseer} \scalebox{0.79}{ \begin{tabular}{@{}c|c|c|c|c@{}} \midrule & ACC $\uparrow$& DDI$\uparrow$ & RB $\downarrow$& DyadicRB $\downarrow$ \\ \midrule Original & $.820\pm.011$ & $.372\pm.019$ & $.661\pm.005$ & $.658\pm.009$ \\ \midrule FairEdge & $\textbf{.821}\pm.013$ & $.389\pm.018$ & $.655\pm.004$ & $.623\pm.023$ \\ \midrule FairDrop & $.532\pm.024$ & $\textbf{.717}\pm.081$ & $.493\pm.021$ & $.510\pm.037$ \\ \midrule DyadicOT & $.585\pm.014$ & $.653\pm.181$ & $\textbf{.211}\pm.027$ & $\textbf{.506}\pm.036$ \\ \midrule \end{tabular} } \end{table} \begin{figure}[htbp]% \centering \subfloat[Original Embedding]{ \includegraphics[width=0.2\textwidth]{./ori-pca2-com} \label{fig:ori} }\hfill \subfloat[FairEdge's Embedding]{ \includegraphics[width=0.2\textwidth]{./emd-pca2-com} \label{fig:fairedge} }\\ \subfloat[FairDrop's Embedding]{ \includegraphics[width=0.2\textwidth]{./drop-pca2-com} \label{fig:fairdrop} }\hfill \subfloat[DyadicOT's Embedding]{ \includegraphics[width=0.2\textwidth]{./sym-pca2-com} \label{fig:sym} } \caption{Visualization of node embedding learned by Node2Vec on CORA. Different colors indicate different sensitive attributes. Panels (a)--(d) show the node embeddings learned from the original graph and from the graphs repaired by FairEdge, FairDrop, and DyadicOT, respectively.} \label{fig: sig} \end{figure} \begin{figure}[htbp]% \centering \subfloat[Original Embedding]{ \includegraphics[width=0.2\textwidth]{./ori-pca-com} \label{fig:ori-pair} }\hfill \subfloat[FairEdge's Embedding]{ \includegraphics[width=0.2\textwidth]{./emd-pca-com} \label{fig:fairedge-pair} } \\ \subfloat[FairDrop's Embedding]{ \includegraphics[width=0.2\textwidth]{./drop-pca-com} \label{fig:fairdrop-pair} }\hfill \subfloat[DyadicOT's Embedding]{ \includegraphics[width=0.2\textwidth]{./sym-pca-com} \label{fig:sym-pair} } \caption{Visualization of dyadic node embedding learned by Node2Vec on CORA.
Here, red represents embeddings of node pairs with different sensitive attributes, while blue indicates pairs with the same sensitive attribute.} \label{fig: dyadic} \end{figure} To better understand the impact of our repairing on node embeddings, we use PCA to project the learned embeddings into a $2$-dimensional space. As shown in Figure~\ref{fig: sig}, the embedding learned from the original graph is highly correlated with the node's sensitive feature, which corresponds to a higher RB. The embedding learned from the graph repaired by DyadicOT is less correlated with the sensitive features than the baselines, corresponding to a lower RB. The comparison of dyadic embeddings is shown in Figure~\ref{fig: dyadic}. The dyadic embedding learned by DyadicOT is less correlated than that of the original graph, indicating less predictability of the relationship between the dyadic sensitive features (lower DyadicRB). \smallskip \noindent{\bf{Impact on link prediction}}. Finally, we compare performance on the link prediction task through two basic metrics, i.e., ACC and DDI. ACC indicates the utility of the predictor, while DDI quantifies the dyadic fairness the predictor achieves. For the CORA dataset, all three repairing methods lose ACC while obtaining dyadic fairness. Compared with FairEdge and FairDrop, DyadicOT achieves the best dyadic fairness (DDI), and the decrease in ACC remains within a tolerable range. On the CiteSeer dataset, FairEdge yields almost no fairness improvement. Compared to FairDrop, however, DyadicOT achieves a higher DDI with a smaller decrease in accuracy, which indicates the better performance of DyadicOT. \section{Conclusion} This paper proposes a pre-processing method to achieve dyadic fairness in link prediction tasks. By transforming the problem of obtaining dyadic fairness into a conditional distribution alignment problem, dyadic fairness can be obtained with flexibility and unambiguity.
Furthermore, a practical repairing method is introduced based on optimal transport theory. Experiments on CORA and CiteSeer show that the proposed DyadicOT method is effective at obtaining dyadic fairness in link prediction.
\section{Analysis for regularized offline RL (Theorem~\ref{thm:sample regularized})} \label{sec:analysis} In this section we present the analysis for our main result in Theorem~\ref{thm:sample regularized}. \subsection{Intuition: invariance of saddle points} \label{sec:intuition} First, we provide an intuitive explanation of why optimizing over ${\mathcal{V}}\times {\mathcal{W}}$ instead of $\mathbb{R}^{|\mathcal{S}|}\times\mathbb{R}^{|\mathcal{S}||\mathcal{A}|}_+$ can still bring us close to $({v^*_{\alpha}},{w^*_{\alpha}})$. More specifically, we have the following lemma: \begin{lemma}[Invariance of saddle points] \label{lem:minimax} Suppose $(x^*,y^*)$ is a saddle point of $f(x,y)$ over $\mathcal{X}\times\mathcal{Y}$. Then for any $\mathcal{X}'\subseteq\mathcal{X}$ and $\mathcal{Y}'\subseteq\mathcal{Y}$, if $(x^*,y^*)\in\mathcal{X}'\times\mathcal{Y}'$, we have: \begin{align} &(x^*,y^*)\in\arg\min_{x\in\mathcal{X}'}\arg\max_{y\in\mathcal{Y'}}f(x,y),\\ &(x^*,y^*)\in\arg\max_{y\in\mathcal{Y'}}\arg\min_{x\in\mathcal{X}'}f(x,y). \end{align} \end{lemma} \begin{proof} See Appendix~\ref{proof:lemma minimax}. \end{proof} Lemma~\ref{lem:minimax} shows that as long as a subset contains the saddle point of the original set, the saddle point remains a minimax and maximin point with respect to the subset. We apply this to \eqref{prob:maximin2}: the saddle point $(v^*_{\alpha},w^*_{\alpha})$ of \eqref{prob:maximin2}, which is also the solution to the regularized MDP without any restriction on function classes, is also a solution of $\max_{w\in {\mathcal{W}}}\min_{{v}\in {\mathcal{V}}}{L_{\alpha}}({v},w)$. We now give a brief sketch. Since $\hat L_{\alpha}$ is unbiased for $L_{\alpha}$, uniform convergence yields $\hat{L}_{\alpha}({v},w)\approx {L_{\alpha}}({v},w)$ with high probability. Next, we use the strong concavity of ${L_{\alpha}}({v},w)$ with respect to $w$ to show that $\hat{w}\approx {w^*_{\alpha}}$.
This implies that $\hat{\pi}\approx {\pi^*_{\alpha}}$, which is exactly Theorem~\ref{thm:sample regularized}. \subsection{Preparation: boundedness of ${v^*_{\alpha}}$} \label{sec:nu bound} Before proving Theorem~\ref{thm:sample regularized}, an important ingredient is to bound ${v^*_{\alpha}}$, since $\mathcal{V}$ is assumed to be a bounded set (Assumption \ref{ass:V bound}). The key idea is to utilize the KKT conditions and the fact that for each $s\in\mathcal{S}$ there exists $a\in\mathcal{A}$ such that ${w^*_{\alpha}}(s,a)>0$. The resulting bound is given in Lemma~\ref{lem:bound tilde nu}. \begin{lemma}[Boundedness of ${v^*_{\alpha}}$] \label{lem:bound tilde nu} Suppose Assumptions~\ref{ass:concentrability} and \ref{ass:f prop} hold. Then we have: \begin{equation} \Vert{v^*_{\alpha}}\Vert_{\infty}\leq {B_{v,\alpha}}:=\frac{\alpha {B_{f',\alpha}}+1}{1-\gamma}. \end{equation} \end{lemma} \begin{proof} See Appendix~\ref{proof:lemma bound tilde nu}. \end{proof} \subsection{Proof sketch of Theorem~\ref{thm:sample regularized}} As stated in Section~\ref{sec:intuition}, our proof consists of (1) using concentration inequalities to bound $|{L_{\alpha}}({v},w)-\hat{L}_{\alpha}({v},w)|$, (2) using the invariance of saddle points and the concentration bounds to characterize the error $\Vert\hat{w}-{w^*_{\alpha}}\Vert_{2,d^D}$, and (3) analyzing the difference between $\hat{\pi}$ and ${\pi^*_{\alpha}}$. We elaborate on each of these steps in this section.
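The invariance property in Lemma~\ref{lem:minimax} can be sanity-checked numerically on a toy saddle problem over finite grids; a minimal sketch (the function and grids are illustrative, not from the paper):

```python
import numpy as np

def f(x, y):
    """f(x, y) = x^2 - y^2 has its saddle point at (0, 0) on [-1, 1]^2."""
    return x ** 2 - y ** 2

def minimax(xs, ys):
    """Return the minimax point and value argmin_x max_y f on a grid."""
    F = f(xs[:, None], ys[None, :])
    inner = F.max(axis=1)            # max over y for each x
    return xs[inner.argmin()], inner.min()

X = np.linspace(-1.0, 1.0, 21)       # grids containing the saddle point 0
Y = np.linspace(-1.0, 1.0, 21)
x_full, v_full = minimax(X, Y)

# Restrict to subsets X' x Y' that still contain the saddle point:
Xs, Ys = X[np.abs(X) <= 0.5], Y[np.abs(Y) <= 0.5]
x_sub, v_sub = minimax(Xs, Ys)
# the minimax point and value are unchanged on the subset
```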
\paragraph{Concentration of $\hat{L}_{\alpha}({v},w)$.} First, it can be observed that $\hat{L}_{\alpha}({v},w)$ is an unbiased estimator of ${L_{\alpha}}({v},w)$, as shown in the following lemma: \begin{lemma} \label{lem:unbiased hat L} \begin{equation} \mathbb{E}_{\mathcal{D}}[\hat{L}_{\alpha}({v},w)]={L_{\alpha}}({v},w),\quad\forall{v}\in {\mathcal{V}},w\in {\mathcal{W}}, \end{equation} where $\mathbb{E}_{\mathcal{D}}[\cdot]$ is the expectation with respect to the samples in $\mathcal{D}$, i.e., $(s_i,a_i)\sim d^D, s'_i\sim P(\cdot|s_i,a_i)$. \end{lemma} \begin{proof} See Appendix~\ref{proof:lemma unbiased hat L}. \end{proof} On the other hand, note that from the boundedness of ${\mathcal{V}},{\mathcal{W}}$ and $f$ (Assumptions~\ref{ass:V bound}, \ref{ass:W bound}, \ref{ass:f prop}), $\hat{L}_{\alpha}({v},w)$ is also bounded. Combining this with Lemma~\ref{lem:unbiased hat L}, we have the following lemma: \begin{lemma} \label{lem:hat L conc} Suppose Assumptions~\ref{ass:W bound}, \ref{ass:f prop} and \ref{ass:V bound} hold. Then with probability at least $1-\delta$, for all ${v}\in {\mathcal{V}}$ and $w\in {\mathcal{W}}$ we have: \begin{equation} |\hat{L}_{\alpha}({v},w)-{L_{\alpha}}({v},w)|\leq\mathcal{E}_{n,n_0,\alpha}(B_{w,\alpha},B_{f,\alpha},B_{v,\alpha},B_{e,\alpha}):={\epsilon_{stat}}. \end{equation} \end{lemma} \begin{proof} See Appendix~\ref{proof:hat L conc}. \end{proof} \paragraph{Bounding $\Vert\hat{w}-{w^*_{\alpha}}\Vert_{2,d^D}$.} To bound $\Vert\hat{w}-{w^*_{\alpha}}\Vert_{2,d^D}$, we first need to characterize ${L_{\alpha}}({v^*_{\alpha}},{w^*_{\alpha}})-{L_{\alpha}}({v^*_{\alpha}},\hat{w})$.
Inspired by Lemma~\ref{lem:minimax}, we carefully decompose ${L_{\alpha}}({v^*_{\alpha}},{w^*_{\alpha}})-{L_{\alpha}}({v^*_{\alpha}},\hat{w})$ and utilize the concentration result in Lemma~\ref{lem:hat L conc}, which leads us to the following lemma: \begin{lemma} \label{lem:hat L tilde L} Suppose Assumptions~\ref{ass:concentrability}, \ref{ass:V realize}, \ref{ass:W realize}, \ref{ass:W bound}, \ref{ass:f prop} and \ref{ass:V bound} hold. Then with probability at least $1-\delta$, \begin{equation} {L_{\alpha}}({v^*_{\alpha}},{w^*_{\alpha}})-{L_{\alpha}}({v^*_{\alpha}},\hat{w})\leq 2{\epsilon_{stat}}. \end{equation} \end{lemma} \begin{proof} See Appendix~\ref{proof:hat L tilde L}. \end{proof} Then, since the strong convexity of $f$ makes ${L_{\alpha}}({v},w)$ strongly concave in $w$, $\Vert\hat{w}-{w^*_{\alpha}}\Vert_{2,d^D}$ can be naturally bounded via Lemma~\ref{lem:hat L tilde L}: \begin{lemma} \label{lem: hat w error} Suppose Assumptions~\ref{ass:concentrability}, \ref{ass:V realize}, \ref{ass:W realize}, \ref{ass:W bound}, \ref{ass:f prop} and \ref{ass:V bound} hold. Then with probability at least $1-\delta$, \begin{equation} \label{eq:lem hat w 1} \Vert\hat{w}-{w^*_{\alpha}}\Vert_{2,d^D}\leq \sqrt{\frac{4{\epsilon_{stat}}}{\alpha M_f}}, \end{equation} which implies that \begin{equation} \label{eq:lem hat w 2} \Vert \hat{d}-{d^*_{\alpha}}\Vert_{1}\leq \sqrt{\frac{4{\epsilon_{stat}}}{\alpha M_f}}, \end{equation} where $\hat{d}(s,a)=\hat{w}(s,a)d^D(s,a),\forall s,a$. \end{lemma} \begin{proof} See Appendix~\ref{proof:hat w error}. \end{proof} This proves the third part of (\ref{eq:thm1 1}) in Theorem~\ref{thm:sample regularized}. \paragraph{Bounding $\mathbb{E}_{s\sim {d^*_{\alpha}}}[\Vert{\pi^*_{\alpha}}(s,\cdot)-\hat{\pi}(s,\cdot)\Vert_1]$.} To obtain the second part of (\ref{eq:thm1 1}), we notice that ${\pi^*_{\alpha}}$ (or $\hat{\pi}$) can be derived explicitly from ${w^*_{\alpha}}$ (or $\hat{w}$) by (\ref{eq:tilde d tilde w}) (or (\ref{eq:hat pi})).
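Concretely, writing $\hat{d}(s)=\sum_{a}\hat{w}(s,a)d^D(s,a)$ and ${d^*_{\alpha}}(s)=\sum_{a}{w^*_{\alpha}}(s,a)d^D(s,a)$, these mappings take the normalized form used in the proof of Lemma~\ref{lem:tilde w tilde pi}:
\begin{equation*}
\hat{\pi}(a|s)=\frac{\hat{w}(s,a)d^D(s,a)}{\hat{d}(s)}\quad\text{whenever }\hat{d}(s)>0,\qquad {\pi^*_{\alpha}}(a|s)=\frac{{w^*_{\alpha}}(s,a)d^D(s,a)}{{d^*_{\alpha}}(s)}.
\end{equation*}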
However, the mapping ${w^*_{\alpha}}\mapsto{\pi^*_{\alpha}}$ (or $\hat{w}\mapsto\hat{\pi}$) is nonlinear, and it is discontinuous at states where ${d^*_{\alpha}}(s)=0$ (or $\hat{d}(s)=0$), which makes it complicated to analyze. To tackle this problem, we first decompose the error $\Vert\hat{w}-{w^*_{\alpha}}\Vert_{2,d^D}$ and assign it to each state $s\in\mathcal{S}$, and then treat the cases $\hat{d}(s)>0$ and $\hat{d}(s)=0$ separately. Consequently, we obtain the following lemma: \begin{lemma} \label{lem:tilde w tilde pi} \begin{equation} \label{eq:tilde w tilde pi} \mathbb{E}_{s\sim {d^*_{\alpha}}}[\Vert{\pi^*_{\alpha}}(s,\cdot)-\hat{\pi}(s,\cdot)\Vert_1]\leq2\Vert\hat{w}-{w^*_{\alpha}}\Vert_{2,d^D}. \end{equation} \end{lemma} \begin{proof} See Appendix~\ref{proof:tilde w tilde pi}. \end{proof} Combining \eqref{eq:lem hat w 1}, \eqref{eq:tilde w tilde pi} and the definition of ${\epsilon_{stat}}$ from Lemma~\ref{lem:hat L conc} gives us the second part of Theorem~\ref{thm:sample regularized}. \paragraph{Bounding $J(\pi^*_{\alpha})-J(\hat{\pi})$.} To complete the proof of Theorem~\ref{thm:sample regularized}, it only remains to bound $J(\pi^*_{\alpha})-J(\hat{\pi})$ via the bound on $\mathbb{E}_{s\sim {d^*_{\alpha}}}[\Vert{\pi^*_{\alpha}}(s,\cdot)-\hat{\pi}(s,\cdot)\Vert_1]$, which is done in the following lemma: \begin{lemma} \label{lem:tilde pi performance} \begin{equation} \label{eq:tilde pi performance} J(\pi^*_{\alpha})-J(\hat{\pi})\leq\frac{1}{1-\gamma}\mathbb{E}_{s\sim {d^*_{\alpha}}}[\Vert{\pi^*_{\alpha}}(s,\cdot)-\hat{\pi}(s,\cdot)\Vert_1]. \end{equation} \end{lemma} \begin{proof} See Appendix~\ref{proof:tilde pi performance}. \end{proof} This concludes the proof of Theorem~\ref{thm:sample regularized}. \section{Proofs of Lemmas for Theorem~\ref{thm:sample regularized}} \subsection{Proof of Lemma~\ref{lem:minimax}} \label{proof:lemma minimax} We first prove that $(x^*,y^*)\in\arg\min_{x\in\mathcal{X}'}\arg\max_{y\in\mathcal{Y'}}f(x,y)$.
Since $(x^*,y^*)$ is a saddle point \citep{sion1958general}, we have \begin{equation} \label{eq:nash} x^*=\arg\min_{x\in\mathcal{X}}f(x,y^*),\quad y^*=\arg\max_{y\in\mathcal{Y}}f(x^*,y). \end{equation} Since $\mathcal{Y}'\subseteq\mathcal{Y}$ and $y^*\in\mathcal{Y}'$, we have: \begin{equation} f(x^*,y^*)=\max_{y\in\mathcal{Y}'}f(x^*,y). \end{equation} On the other hand, because $\mathcal{X}'\subseteq\mathcal{X}$ and $y^*\in\mathcal{Y}'$, \begin{equation} f(x^*,y^*)\leq f(x,y^*)\leq \max_{y\in\mathcal{Y}'}f(x,y), \forall x\in\mathcal{X}'. \end{equation} Notice that $x^*\in\mathcal{X}'$, so we have: \begin{equation} \max_{y\in\mathcal{Y}'}f(x^*,y)=\min_{x\in\mathcal{X}'}\max_{y\in\mathcal{Y}'}f(x,y), \end{equation} or equivalently, \begin{equation} \label{eq:lem1 eq1} (x^*,y^*)\in\arg\min_{x\in\mathcal{X}'}\arg\max_{y\in\mathcal{Y'}}f(x,y). \end{equation} Symmetrically, by a similar argument we have \begin{equation} f(x^*,y^*)\geq f(x^*,y)\geq\min_{x\in\mathcal{X}'}f(x,y),\quad\forall y\in\mathcal{Y'}, \end{equation} which implies that \begin{equation} (x^*,y^*)\in\arg\max_{y\in\mathcal{Y'}}\arg\min_{x\in\mathcal{X}'}f(x,y). \end{equation} \subsection{Proof of Lemma~\ref{lem:bound tilde nu}} \label{proof:lemma bound tilde nu} From the strong duality of the regularized problem (\ref{eq:constrained})(\ref{eq:bellman flow 1}), when $d^D(s,a)\neq0$, we have ${w^*_{\alpha}}=\arg\max_{w\geq 0}{L_{\alpha}}({v^*_{\alpha}},w)$, or \begin{equation} \label{eq:lem2 eq1} {w^*_{\alpha}}(s,a)=\max\left(0,(f')^{-1}\left(\frac{e_{{v^*_{\alpha}}}(s,a)}{\alpha}\right)\right).
\end{equation} Note that ${d^*_{\alpha}}(s,a)={w^*_{\alpha}}(s,a)d^D(s,a)$ satisfies the Bellman flow constraint (\ref{eq:bellman flow 1}), therefore \begin{equation} {d^*_{\alpha}}(s)\geq(1-\gamma)\mu_0(s)>0,\quad\forall s\in\mathcal{S}, \end{equation} which implies that for any $s\in\mathcal{S}$, $\exists a_s\in\mathcal{A}$ such that \begin{equation} {d^*_{\alpha}}(s,a_s)>0, \end{equation} or equivalently \begin{equation} {w^*_{\alpha}}(s,a_s)>0, \quad d^D(s,a_s)>0. \end{equation} Thus from (\ref{eq:lem2 eq1}) we know that \begin{equation} e_{{v^*_{\alpha}}}(s,a_s)=\alpha f'({w^*_{\alpha}}(s,a_s)). \end{equation} From Assumption~\ref{ass:concentrability}, ${w^*_{\alpha}}(s,a_s)\leq {B_{w,\alpha}}$, and thus by Assumption~\ref{ass:f prop}, \begin{equation} \label{eq:lem2 eq2} |e_{{v^*_{\alpha}}}(s,a_s)|\leq \alpha {B_{f',\alpha}}, \forall s\in\mathcal{S}. \end{equation} On the other hand, let $s_m$ be such that $|{v^*_{\alpha}}(s_m)|=\Vert {v^*_{\alpha}}\Vert_{\infty}$; then from the definition of $e_{{v}}$ we have: \begin{equation} e_{{v^*_{\alpha}}}(s_m,a_{s_m})=r(s_m,a_{s_m})+\gamma\mathbb{E}_{s'\sim P(\cdot|s_m,a_{s_m})}{v^*_{\alpha}}(s')-{v^*_{\alpha}}(s_m), \end{equation} which implies that: \begin{align} |e_{{v^*_{\alpha}}}(s_m,a_{s_m})-r(s_m,a_{s_m})|&=|{v^*_{\alpha}}(s_m)-\gamma\mathbb{E}_{s'\sim P(\cdot|s_m,a_{s_m})}{v^*_{\alpha}}(s')|\\ &\geq|{v^*_{\alpha}}(s_m)|-\gamma|\mathbb{E}_{s'\sim P(\cdot|s_m,a_{s_m})}{v^*_{\alpha}}(s')|\\ &\geq|{v^*_{\alpha}}(s_m)|-\gamma\mathbb{E}_{s'\sim P(\cdot|s_m,a_{s_m})}|{v^*_{\alpha}}(s')|\\ &\geq(1-\gamma)|{v^*_{\alpha}}(s_m)|\label{eq:lem2 eq3}. \end{align} Combining (\ref{eq:lem2 eq2}) and (\ref{eq:lem2 eq3}) with $|r(s_m,a_{s_m})|\leq1$, we have \begin{equation} \Vert{v^*_{\alpha}}\Vert_{\infty}\leq\frac{\alpha {B_{f',\alpha}}+1}{1-\gamma}.
\end{equation} \subsection{Proof of Lemma~\ref{lem:unbiased hat L}} \label{proof:lemma unbiased hat L} First, by the tower rule, we have: \begin{equation} \mathbb{E}_{\mathcal{D}}\left[\hat{L}_{\alpha}({v},w)\right]=\mathbb{E}_{(s_i,a_i)\sim d^D, s_{0,j}\sim\mu_0}\left[\mathbb{E}_{s'_i\sim P(\cdot|s_i,a_i)}\left[\hat{L}_{\alpha}({v},w)\vert s_i,a_i\right]\right]. \end{equation} Note that \begin{align} &\mathbb{E}_{s'_i\sim P(\cdot|s_i,a_i)}\left[\hat{L}_{\alpha}({v},w)\vert s_i,a_i\right]\\ =&(1-\gamma)\frac{1}{n_0}\sum_{j=1}^{n_0}[{v}(s_{0,j})]+\frac{1}{n}\sum_{i=1}^n[-\alpha f(w(s_i,a_i))]\\ &+\frac{1}{n}\sum_{i=1}^n[w(s_i,a_i)\mathbb{E}_{s'_i\sim P(\cdot|s_i,a_i)}\left[e_{{v}}(s_i,a_i,r_i,s'_i)\vert s_i,a_i\right]]\\ =&(1-\gamma)\frac{1}{n_0}\sum_{j=1}^{n_0}[{v}(s_{0,j})]+\frac{1}{n}\sum_{i=1}^n[-\alpha f(w(s_i,a_i))]+\frac{1}{n}\sum_{i=1}^n[w(s_i,a_i)e_{{v}}(s_i,a_i)]. \end{align} Therefore, \begin{align} &\mathbb{E}_{\mathcal{D}}\left[\hat{L}_{\alpha}({v},w)\right]\\ =&(1-\gamma)\mathbb{E}_{s\sim \mu_0}[{v}(s)]-\alpha\mathbb{E}_{(s,a)\sim d^D}[f(w(s,a))]+\mathbb{E}_{(s,a)\sim d^D}[w(s,a)e_{{v}}(s,a)]\\ =&{L_{\alpha}}({v},w). \end{align} \subsection{Proof of Lemma~\ref{lem:hat L conc}} \label{proof:hat L conc} Let $l^{{v},w}_{i}=-\alpha f(w(s_i,a_i))+w(s_i,a_i)e_{{v}}(s_i,a_i,r_i,s'_i)$. From Assumption~\ref{ass:V bound}, we know \begin{equation} |e_{{v}}(s,a,r,s')|=|r(s,a)+\gamma{v}(s')-{v}(s)|\leq(1+\gamma){B_{v,\alpha}}+1={B_{e,\alpha}}. \end{equation} Therefore, by Assumptions~\ref{ass:W bound} and \ref{ass:f prop}, we have: \begin{equation} |l^{{v},w}_{i}|\leq \alpha {B_{f,\alpha}}+{B_{w,\alpha}}{B_{e,\alpha}}. \end{equation} Notice that the $l^{{v},w}_i$ are independent of each other, so we can apply Hoeffding's inequality: for any $t>0$, \begin{equation} \text{Pr}[|\frac{1}{n}\sum_{i=1}^nl^{{v},w}_i-\mathbb{E}[l^{{v},w}_i]|\leq t]\geq 1-2\mathrm{exp}\left(\frac{-nt^2}{2(\alpha {B_{f,\alpha}}+{B_{w,\alpha}}{B_{e,\alpha}})^2}\right).
\end{equation} Setting $t=(\alpha {B_{f,\alpha}}+{B_{w,\alpha}}{B_{e,\alpha}})\sqrt{\frac{2\log\frac{4|{\mathcal{V}}||{\mathcal{W}}|}{\delta}}{n}}$, we have that with probability at least $1-\frac{\delta}{2|{\mathcal{V}}||{\mathcal{W}}|}$, \begin{equation} |\frac{1}{n}\sum_{i=1}^nl^{{v},w}_i-\mathbb{E}[l^{{v},w}_i]|\leq(\alpha {B_{f,\alpha}}+{B_{w,\alpha}}{B_{e,\alpha}})\sqrt{\frac{2\log\frac{4|{\mathcal{V}}||{\mathcal{W}}|}{\delta}}{n}}. \end{equation} Therefore, by the union bound, with probability at least $1-\frac{\delta}{2}$ we have, for all ${v}\in {\mathcal{V}}$ and $w\in {\mathcal{W}}$, \begin{equation} |\frac{1}{n}\sum_{i=1}^nl^{{v},w}_i-\mathbb{E}[l^{{v},w}_i]|\leq (\alpha {B_{f,\alpha}}+{B_{w,\alpha}}{B_{e,\alpha}})\sqrt{\frac{2\log\frac{4|{\mathcal{V}}||{\mathcal{W}}|}{\delta}}{n}}. \end{equation} Similarly, we have that with probability at least $1-\frac{\delta}{2}$, for all ${v}\in {\mathcal{V}}$, \begin{equation} |\frac{1}{n_0}\sum_{j=1}^{n_0}v(s_{0,j})-\mathbb{E}_{s\sim\mu_0}[v(s)]|\leq B_{v,\alpha}\sqrt{\frac{2\log\frac{4|{\mathcal{V}}|}{\delta}}{n_0}}. \end{equation} Therefore, with probability at least $1-\delta$ we have \begin{equation} |\hat{L}_{\alpha}({v},w)-{L_{\alpha}}({v},w)|\leq(\alpha {B_{f,\alpha}}+{B_{w,\alpha}}{B_{e,\alpha}})\sqrt{\frac{2\log\frac{4|{\mathcal{V}}||{\mathcal{W}}|}{\delta}}{n}}+(1-\gamma)B_{v,\alpha}\sqrt{\frac{2\log\frac{4|{\mathcal{V}}|}{\delta}}{n_0}}.
\end{equation} \subsection{Proof of Lemma~\ref{lem:hat L tilde L}} \label{proof:hat L tilde L} First we decompose ${L_{\alpha}}({v^*_{\alpha}},\hat{w})-{L_{\alpha}}({v^*_{\alpha}},{w^*_{\alpha}})$ into the following terms: \begin{align} &{L_{\alpha}}({v^*_{\alpha}},\hat{w})-{L_{\alpha}}({v^*_{\alpha}},{w^*_{\alpha}})=(\underbrace{{L_{\alpha}}({v^*_{\alpha}},\hat{w})-\hat{L}_{\alpha}({v^*_{\alpha}},\hat{w})}_{(1)}) +(\underbrace{\hat{L}_{\alpha}({v^*_{\alpha}},\hat{w})-\hat{L}_{\alpha}(\hat{{v}},\hat{w})}_{(2)}) \notag\\ &+(\underbrace{\hat{L}_{\alpha}(\hat{{v}},\hat{w})-\hat{L}_{\alpha}(\hat{{v}}({w^*_{\alpha}}),{w^*_{\alpha}})}_{(3)}) + (\underbrace{\hat{L}_{\alpha}(\hat{{v}}({w^*_{\alpha}}),{w^*_{\alpha}})-{L_{\alpha}}(\hat{{v}}({w^*_{\alpha}}),{w^*_{\alpha}})}_{(4)})\\& + (\underbrace{{L_{\alpha}}(\hat{{v}}({w^*_{\alpha}}),{w^*_{\alpha}})-{L_{\alpha}}({v^*_{\alpha}},{w^*_{\alpha}})}_{(5)}), \end{align} where $\hat{{v}}(w)=\arg\min_{{v}\in {\mathcal{V}}}\hat{L}_{\alpha}({v},w)$. For terms (1) and (4), we can apply Lemma~\ref{lem:hat L conc}, and thus \begin{equation} (1)\geq-{\epsilon_{stat}},\quad(4)\geq-{\epsilon_{stat}}. \end{equation} For term (2), since $\hat{{v}}=\arg\min_{{v}\in {\mathcal{V}}}\hat{L}_{\alpha}({v},\hat{w})$ and ${v^*_{\alpha}}\in {\mathcal{V}}$, we have \begin{equation} (2)\geq0. \end{equation} For term (3), since $\hat{w}=\arg\max_{w\in {\mathcal{W}}}\hat{L}_{\alpha}(\hat{{v}}(w),w)$ and ${w^*_{\alpha}}\in {\mathcal{W}}$, \begin{equation} (3)\geq0. \end{equation} For term (5), note that due to the strong duality of the regularized problem~(\ref{eq:constrained})(\ref{eq:bellman flow 1}), $({v^*_{\alpha}},{w^*_{\alpha}})$ is a saddle point of ${L_{\alpha}}({v},w)$ over $\mathbb{R}^{|\mathcal{S}|}\times\mathbb{R}_+^{|\mathcal{S}||\mathcal{A}|}$. Therefore, \begin{equation} {v^*_{\alpha}}=\arg\min_{{v}\in\mathbb{R}^{|\mathcal{S}|}}{L_{\alpha}}({v},{w^*_{\alpha}}).
\end{equation} Since $\hat{{v}}({w^*_{\alpha}})\in\mathbb{R}^{|\mathcal{S}|}$, we have: \begin{equation} (5)\geq0. \end{equation} Combining the above inequalities, it is obvious that \begin{equation} {L_{\alpha}}({v^*_{\alpha}},\hat{w})-{L_{\alpha}}({v^*_{\alpha}},{w^*_{\alpha}})\geq-2{\epsilon_{stat}}. \end{equation} \subsection{Proof of Lemma~\ref{lem: hat w error}} \label{proof:hat w error} First we need to show that ${L_{\alpha}}({v^*_{\alpha}},w)$ is $\alpha M_f$-strongly-concave with respect to $w$ in the norm $\Vert\cdot\Vert_{2,d^D}$. Consider ${\tilde{L}_{\alpha}}(w)={L_{\alpha}}({v^*_{\alpha}},w)+\frac{\alpha M_f}{2}\Vert w\Vert_{2,d^D}^2$; then we know that \begin{equation} {\tilde{L}_{\alpha}}(w)=(1-\gamma)\mathbb{E}_{s\sim \mu_0}[{v^*_{\alpha}}(s)]-\alpha\mathbb{E}_{(s,a)\sim d^D}[f(w(s,a))-\frac{M_f}{2}w(s,a)^2]+\mathbb{E}_{(s,a)\sim d^D}[w(s,a)e_{{v^*_{\alpha}}}(s,a)]. \end{equation} Since $f$ is $M_f$-strongly-convex, we know ${\tilde{L}_{\alpha}}(w)$ is concave, which implies that ${L_{\alpha}}({v^*_{\alpha}},w)$ is $\alpha M_f$-strongly-concave with respect to $w$ in the norm $\Vert\cdot\Vert_{2,d^D}$. On the other hand, since $({v^*_{\alpha}},{w^*_{\alpha}})$ is a saddle point of ${L_{\alpha}}({v},w)$ over $\mathbb{R}^{|\mathcal{S}|}\times\mathbb{R}_+^{|\mathcal{S}||\mathcal{A}|}$, we have ${w^*_{\alpha}}=\arg\max_{w\geq0}{L_{\alpha}}({v^*_{\alpha}},w)$. Then, by strong concavity and the optimality of ${w^*_{\alpha}}$ over the feasible set containing $\hat{w}$, we have: \begin{equation} \Vert\hat{w}-{w^*_{\alpha}}\Vert_{2,d^D}\leq \sqrt{\frac{2({L_{\alpha}}({v^*_{\alpha}},{w^*_{\alpha}})-{L_{\alpha}}({v^*_{\alpha}},\hat{w}))}{\alpha M_f}}. \end{equation} Substituting Lemma~\ref{lem:hat L tilde L} into the above inequality, we obtain (\ref{eq:lem hat w 1}). For (\ref{eq:lem hat w 2}), it can be observed that \begin{equation} \Vert \hat{d}-{d^*_{\alpha}}\Vert_{1}=\Vert \hat{w}-{w^*_{\alpha}}\Vert_{1,d^D}\leq\Vert\hat{w}-{w^*_{\alpha}}\Vert_{2,d^D}\leq\sqrt{\frac{4{\epsilon_{stat}}}{\alpha M_f}}.
\end{equation} \subsection{Proof of Lemma~\ref{lem:tilde w tilde pi}} \label{proof:tilde w tilde pi} First note that $\Vert \hat{w}-{w^*_{\alpha}}\Vert_{1,d^D}\leq\Vert\hat{w}-{w^*_{\alpha}}\Vert_{2,d^D}$, which implies that \begin{equation} \sum_s\epsilon_{\hat{w},s}\leq\Vert\hat{w}-{w^*_{\alpha}}\Vert_{2,d^D}, \end{equation} where \begin{equation} \epsilon_{\hat{w},s}=\sum_{a}|\hat{w}(s,a)d^D(s,a)-{w^*_{\alpha}}(s,a)d^D(s,a)|. \end{equation} If $\hat{d}(s)>0$, then we have: \begin{align} &{d^*_{\alpha}}(s)\sum_a|\hat{\pi}(s,a)-{\pi^*_{\alpha}}(s,a)|\\ =&\sum_a|\frac{{d^*_{\alpha}}(s)}{\hat{d}(s)}\hat{w}(s,a)d^D(s,a)-{w^*_{\alpha}}(s,a)d^D(s,a)|\label{eq:importance_weight}\\ \leq&\sum_a(|\frac{{d^*_{\alpha}}(s)}{\hat{d}(s)}-1|\hat{w}(s,a)d^D(s,a))+\sum_a|\hat{w}(s,a)d^D(s,a)-{w^*_{\alpha}}(s,a)d^D(s,a)|\\ \leq&\epsilon_{\hat{w},s}+\sum_a(|\frac{{d^*_{\alpha}}(s)}{\hat{d}(s)}-1|\hat{w}(s,a)d^D(s,a)). \end{align} Notice that $|\hat{d}(s)-{d^*_{\alpha}}(s)|\leq\epsilon_{\hat{w},s}$, which implies $|\frac{{d^*_{\alpha}}(s)}{\hat{d}(s)}-1|\leq\frac{\epsilon_{\hat{w},s}}{\hat{d}(s)}$, and therefore: \begin{equation} {d^*_{\alpha}}(s)\sum_a|\hat{\pi}(s,a)-{\pi^*_{\alpha}}(s,a)|\leq\epsilon_{\hat{w},s}(1+\sum_a\frac{\hat{w}(s,a)d^D(s,a)}{\hat{d}(s)})=2\epsilon_{\hat{w},s}. \end{equation} If $\hat{d}(s)=0$, then $\hat{w}(s,a)d^D(s,a)=0$ for all $a$, so $\sum_{a}{w^*_{\alpha}}(s,a)d^D(s,a)=\epsilon_{\hat{w},s}$. Therefore \begin{align} {d^*_{\alpha}}(s)\sum_a|\hat{\pi}(s,a)-{\pi^*_{\alpha}}(s,a)|\leq 2{d^*_{\alpha}}(s)=2\epsilon_{\hat{w},s}. \end{align} Thus in both cases ${d^*_{\alpha}}(s)\sum_a|\hat{\pi}(s,a)-{\pi^*_{\alpha}}(s,a)|\leq 2\epsilon_{\hat{w},s}$, from which we can easily obtain: \begin{equation} \mathbb{E}_{s\sim {d^*_{\alpha}}}[\Vert{\pi^*_{\alpha}}(s,\cdot)-\hat{\pi}(s,\cdot)\Vert_1]\leq2\sum_{s}\epsilon_{\hat{w},s}\leq2\Vert\hat{w}-{w^*_{\alpha}}\Vert_{2,d^D}.
\end{equation} \subsection{Proof of Lemma~\ref{lem:tilde pi performance}} \label{proof:tilde pi performance} To bound $J({\pi^*_{\alpha}})-J(\hat{\pi})$, we introduce the performance difference lemma, which was previously derived in \citet{Kakade02approximatelyoptimal,kakade2003sample}: \begin{lemma}[Performance Difference] \label{lem:performance difference} For arbitrary policies $\pi,\pi'$ and initial distribution $\mu_0$, we have \begin{equation} V^{\pi'}(\mu_0)-V^{\pi}(\mu_0)=\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^{\pi'}}[\langle Q^{\pi}(s,\cdot), \pi'(\cdot|s)-\pi(\cdot|s)\rangle]. \end{equation} \end{lemma} The proof of Lemma~\ref{lem:performance difference} is deferred to Appendix~\ref{proof:lem performance difference}. With Lemma~\ref{lem:performance difference}, we have \begin{align} &J({\pi^*_{\alpha}})-J(\hat{\pi})\\ =&(1-\gamma)(V^{{\pi^*_{\alpha}}}(\mu_0)-V^{\hat{\pi}}(\mu_0))\\ =&\mathbb{E}_{s\sim {d^*_{\alpha}}}[\langle Q^{\hat{\pi}}(s,\cdot), {\pi^*_{\alpha}}(\cdot|s)-\hat{\pi}(\cdot|s)\rangle]\\ \leq&\frac{1}{1-\gamma}\mathbb{E}_{s\sim {d^*_{\alpha}}}[\Vert{\pi^*_{\alpha}}(s,\cdot)-\hat{\pi}(s,\cdot)\Vert_1].
\end{align} Here the last inequality follows from Hölder's inequality and the fact that $\Vert Q^{\hat{\pi}}(s,\cdot)\Vert_{\infty}\leq\frac{1}{1-\gamma}$. \subsection{Proof of Lemma~\ref{lem:performance difference}} \label{proof:lem performance difference} For any two policies $\pi'$ and $\pi$, it follows from the definition of $V^{\pi'}(\mu_0)$ that \begin{align} &V^{\pi'}(\mu_0)-V^{\pi}(\mu_0)\\ =&\mathbb{E}_{\pi'}\left[\sum_{t=0}^{\infty}\gamma^tr(s_t,a_t)\Big\vert\, s_0\sim \mu_0\right]-V^{\pi}(\mu_0) \notag\\ =&\mathbb{E}_{\pi'}\left[\sum_{t=0}^{\infty}\gamma^t\Big[r(s_t,a_t)+V^{\pi}(s_t)-V^{\pi}(s_t)\Big] \,\Big\vert\, s_0\sim\mu_0\right]-V^{\pi}(\mu_0) \notag\\ =&\mathbb{E}_{\pi'}\left[\sum_{t=0}^{\infty}\gamma^t\Big[r(s_t,a_t)+\gamma V^{\pi}(s_{t+1})-V^{\pi}(s_t)\Big] \,\Big\vert\, s_0\sim\mu_0\right]\notag\\ =&\mathbb{E}_{\pi'}\left[\sum_{t=0}^{\infty}\gamma^t\Big[r(s_t,a_t)+\gamma \mathbb{E}_{s_{t+1}\sim P(\cdot|s_t,a_t)}[V^{\pi}(s_{t+1})|s_t,a_t]-V^{\pi}(s_t)\Big] \,\Big\vert\, s_0\sim\mu_0\right] \notag\\ =&\mathbb{E}_{\pi'}\left[\sum_{t=0}^{\infty}\gamma^t\Big[Q^{\pi}(s_t,a_t)-V^{\pi}(s_t)\Big] \,\Big\vert\, s_0\sim\mu_0\right]\notag \\ =&\frac{1}{1-\gamma}\mathbb{E}_{(s,a)\sim d^{\pi'}}\left[Q^{\pi}(s,a)-V^{\pi}(s)\right]\notag \\ =&\frac{1}{1-\gamma}\mathbb{E}_{s\sim d^{\pi'}}\left[\langle Q^{\pi}(s,\cdot),\pi'(\cdot|s)-\pi(\cdot|s)\rangle\right], \label{eq:Vpiprime-Vpi-diff} \end{align} where the second-to-last step comes from the definition of $d^{\pi'}$ and the last step from the fact that $V^{\pi}(s)=\mathbb{E}_{a\sim\pi(\cdot|s)}[Q^{\pi}(s,a)]$. \section{Proof of Corollary~\ref{cor:sample unregularized}} \label{proof:cor sample unregularized} The proof consists of two steps. We first show that $J({\pi^*_0})-J({\pi^*_{\alpha_{\epsilon}}})\leq\frac{\epsilon}{2}$ and then we bound $J({\pi^*_{\alpha_{\epsilon}}})-J(\hat{\pi})$ by utilizing Theorem~\ref{thm:sample regularized}.
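The two steps then combine through the decomposition
\begin{equation*}
J({\pi^*_0})-J(\hat{\pi})=\underbrace{J({\pi^*_0})-J({\pi^*_{\alpha_{\epsilon}}})}_{\text{regularization bias}}+\underbrace{J({\pi^*_{\alpha_{\epsilon}}})-J(\hat{\pi})}_{\text{statistical error}}\leq\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon.
\end{equation*}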
\paragraph{Step 1: Bounding $J({\pi^*_0})-J({\pi^*_{\alpha_{\epsilon}}})$.} Notice that ${\pi^*_{\alpha_{\epsilon}}}$ is the solution to the regularized problem \eqref{eq:constrained}\eqref{eq:bellman flow 1} with $\alpha=\alpha_{\epsilon}$, therefore we have: \begin{equation} \mathbb{E}_{(s,a)\sim {d^*_{\alpha_{\epsilon}}}}[r(s,a)]-\alpha_{\epsilon}\mathbb{E}_{(s,a)\sim d^D}[f({w^*_{\alpha_{\epsilon}}}(s,a))]\geq\mathbb{E}_{(s,a)\sim {d^*_0}}[r(s,a)]-\alpha_{\epsilon}\mathbb{E}_{(s,a)\sim d^D}[f({w^*_{0}}(s,a))], \end{equation} which implies that \begin{align} J({\pi^*_0})-J({\pi^*_{\alpha_{\epsilon}}})&=\mathbb{E}_{(s,a)\sim {d^*_0}}[r(s,a)]-\mathbb{E}_{(s,a)\sim {d^*_{\alpha_{\epsilon}}}}[r(s,a)]\\ &\leq\alpha_{\epsilon}\mathbb{E}_{(s,a)\sim d^D}[f({w^*_{0}}(s,a))]-\alpha_{\epsilon}\mathbb{E}_{(s,a)\sim d^D}[f({w^*_{\alpha_{\epsilon}}}(s,a))]\\ &\leq\alpha_{\epsilon}\mathbb{E}_{(s,a)\sim d^D}[f({w^*_{0}}(s,a))]\label{cor1-proof-eq1}\\ &\leq\alpha_{\epsilon} B^0_{f}\label{cor1-proof-eq2}, \end{align} where (\ref{cor1-proof-eq1}) comes from the non-negativity of $f$ and (\ref{cor1-proof-eq2}) from the boundedness of $f$ when $\alpha=0$ (Assumption~\ref{ass:f prop}). Thus, by the choice of $\alpha_{\epsilon}$, which ensures $\alpha_{\epsilon}B^0_{f}\leq\frac{\epsilon}{2}$, we have \begin{equation} \label{eq:cor1-proof-3} J({\pi^*_0})-J({\pi^*_{\alpha_{\epsilon}}})\leq\frac{\epsilon}{2}. \end{equation} \paragraph{Step 2: Bounding $J({\pi^*_{\alpha_{\epsilon}}})-J(\hat{\pi})$.} Using Theorem~\ref{thm:sample regularized}, we know that if \begin{align} &n\geq\frac{131072\left(\epsilon {B_{f,\alpha_{\epsilon}}}+2{B_{w,\alpha_{\epsilon}}}{B_{e,\alpha_{\epsilon}}}{B_{f,0}}\right)^2}{\epsilon^6M_f^2(1-\gamma)^4}\cdot\log\frac{4|\mathcal{V}||\mathcal{W}|}{\delta}, \\ &n_0\geq\frac{131072\left(2{B_{v,\alpha_{\epsilon}}}{B_{f,0}}\right)^2}{\epsilon^6M_f^2(1-\gamma)^2}\cdot\log\frac{4|\mathcal{V}|}{\delta}, \end{align} then with probability at least $1-\delta$, \begin{equation} \label{eq:cor1-proof-5} J({\pi^*_{\alpha_{\epsilon}}})-J(\hat{\pi})\leq\frac{\epsilon}{2}.
\end{equation} Using (\ref{eq:cor1-proof-3}) and (\ref{eq:cor1-proof-5}), we conclude that \begin{equation} J({\pi^*_0})-J(\hat{\pi})\leq\epsilon \end{equation} holds with probability at least $1-\delta$. This finishes our proof. \section{Proof of Proposition~\ref{prop:LP stability}} \label{proof:prop LP stability} This proof largely follows \citet{mangasarian1979nonlinear}. First note that the regularized problem~\eqref{eq:constrained}\eqref{eq:bellman flow 1} has another, more commonly used form of the Lagrangian function: \begin{equation} \label{eq:Langrangian} \bar{L}_{\alpha}(\lambda,\eta,w)=(1-\gamma)\mathbb{E}_{s\sim\mu_0}[\lambda(s)]-\alpha\mathbb{E}_{(s,a)\sim d^D}[f(w(s,a))]+\mathbb{E}_{(s,a)\sim d^D}[w(s,a)e_{\lambda}(s,a)]+\eta^{\top}w, \end{equation} where $\lambda\in\mathbb{R}^{|\mathcal{S}|}$, $\eta\in\mathbb{R}^{|\mathcal{S}||\mathcal{A}|}$ with $\eta\geq0$, and $w\in\mathbb{R}^{|\mathcal{S}||\mathcal{A}|}$. Let $(\lambda^*_{\alpha},\eta^*_{\alpha})=\arg\min_{\eta\geq0,\lambda\in\mathbb{R}^{|\mathcal{S}|}}\max_{w\in\mathbb{R}^{|\mathcal{S}||\mathcal{A}|}}\bar{L}_{\alpha}(\lambda,\eta,w)$; then we have the following lemma: \begin{lemma} \label{lem:Lagrangian equivalence} \begin{equation} \lambda^*_{\alpha}={v^*_{\alpha}}. \end{equation} \end{lemma} \begin{proof} See Appendix~\ref{proof:lem Lagrangian equivalence}. \end{proof} Due to Lemma~\ref{lem:Lagrangian equivalence}, we need only consider the primal optimum ${w^*_{\alpha}}$ and the dual optimum $(\lambda^*_{\alpha},\eta^*_{\alpha})$ of the Lagrangian function (\ref{eq:Langrangian}).
Let $w^*$ be the solution to the following optimization problem: \begin{equation} \max_{w\in\mathcal{W}^*_0}-\alpha\mathbb{E}_{(s,a)\sim d^D}[f(w(s,a))]. \end{equation} Then, since $w^*\in\mathcal{W}^*_0$, we know that $({{w^*}},\lambda^*_{0},\eta^*_{0})$ is the primal and dual optimum of the following constrained optimization problem, which is equivalent to the unregularized problem \eqref{prob:original problem}: \begin{align} &\max_{w}\sum_{s,a}[r(s,a)d^D(s,a)w(s,a)]\label{eq:constrained-alpha-0}\\ &\text{s.t. }\sum_{a}d^D(s,a)w(s,a)=(1-\gamma)\mu_0(s)+\gamma\sum_{s',a'}P(s|s',a')d^D(s',a')w(s',a')\label{eq:bellman-flow-alpha-0-1}\\ &\quad w(s,a)\geq0,\forall s,a\label{eq:bellman-flow-alpha-0-2}. \end{align} Let $p(s,a)$ denote $r(s,a)d^D(s,a)$ and let $Aw=b$ denote the equality constraint (\ref{eq:bellman-flow-alpha-0-1}); then we obtain the following LP: \begin{align} &\min_{w} -p^{\top}w\label{eq:LP-0}\\ &\text{s.t. }Aw=b\label{eq:LP-0-constraint-1}\\ &\quad w(s,a)\geq0,\forall s,a\label{eq:LP-0-constraint-2}. \end{align} By the KKT conditions of the above problem, we obtain: \begin{align} &A^{\top}\lambda^*_{0}-p-\eta^*_{0}=0,\\ &A{{w^*}}=b, {{w^*}}\geq0,\\ &\eta^*_{0}\geq0,\\ &\eta^*_{0}(s,a){{w^*}}(s,a)=0,\forall s,a. \end{align} Let $c=-p^{\top}{w^*}$. Next we construct an auxiliary constrained optimization problem: \begin{align} &\min_{w} \mathbb{E}_{(s,a)\sim d^D}[f(w(s,a))]\label{eq:auxiliary}\\ &\text{s.t. }Aw=b, \label{eq:lp1}\\ &\quad w(s,a)\geq0,\forall s,a, \label{eq:lp2}\\ &-p^{\top}w\leq c \label{eq:lp3}. \end{align} Then the corresponding Lagrangian function is \begin{equation} \mathbb{E}_{(s,a)\sim d^D}[f(w(s,a))]+\lambda_{aux}^{\top}(Aw-b)-\eta_{aux}^{\top}w+\xi_{aux}(-p^{\top}w-c). \end{equation} Denote the primal and dual optimum of the auxiliary problem by $({w}^*_{aux},\lambda^*_{aux},\eta^*_{aux},\xi^*_{aux})$.
Then obviously the constraints \eqref{eq:lp1}\eqref{eq:lp2}\eqref{eq:lp3} are equivalent to $w\in\mathcal{W}^*_0$, and therefore ${w}^*_{aux}={w^*}$, implying that $({w^*},\lambda^*_{aux},\eta^*_{aux},\xi^*_{aux})$ satisfies the following KKT conditions: \begin{align} &d^D\circ\nabla f({w^*})+A^{\top}\lambda^*_{aux}-\eta^*_{aux}-\xi^*_{aux}p=0,\\ &A{w^*}=b, {w^*}\geq0, -p^{\top}{w^*}=c,\\ &\eta^*_{aux}\geq0,\xi^*_{aux}\geq0,\\ &\eta^*_{aux}(s,a){w^*}(s,a)=0,\forall s,a, \end{align} where $d^D\circ\nabla f({w^*})$ denotes the element-wise product. Now we look at the KKT conditions of (\ref{eq:Langrangian}): \begin{align} &A^{\top}\lambda^*_{\alpha}-p-\eta^*_{\alpha}+\alpha d^D\circ\nabla f({w^*_{\alpha}})=0,\\ &A{w^*_{\alpha}}=b, {w^*_{\alpha}}\geq0,\\ &\eta^*_{\alpha}\geq0,\\ &\eta^*_{\alpha}(s,a){w^*_{\alpha}}(s,a)=0,\forall s,a. \end{align} \begin{itemize} \item \textbf{When $\mathbf{\xi^*_{aux}=0}$.} It can be easily checked that $({w^*_{\alpha}}={w^*},\lambda^*_{\alpha}=\lambda^*_{0}+\alpha\lambda^*_{aux},\eta^*_{\alpha}=\eta^*_{0}+\alpha\eta^*_{aux})$ satisfies the KKT conditions of (\ref{eq:Langrangian}) for all $\alpha\geq0$. \item \textbf{When $\mathbf{\xi^*_{aux}>0}$.} It can be easily checked that $({w^*_{\alpha}}={w^*},\lambda^*_{\alpha}=(1-\alpha\xi^*_{aux})\lambda^*_{0}+\alpha\lambda^*_{aux},\eta^*_{\alpha}=(1-\alpha\xi^*_{aux})\eta^*_{0}+\alpha\eta^*_{aux})$ satisfies the KKT conditions of (\ref{eq:Langrangian}) for $\alpha\in[0,\bar{\alpha}]$, where $\bar{\alpha}=\frac{1}{\xi^*_{aux}}$. \end{itemize} Therefore, when $\alpha\in[0,\bar{\alpha}]$, $({w^*_{\alpha}}={w^*},\lambda^*_{\alpha}=(1-\alpha\xi^*_{aux})\lambda^*_{0}+\alpha\lambda^*_{aux},\eta^*_{\alpha}=(1-\alpha\xi^*_{aux})\eta^*_{0}+\alpha\eta^*_{aux})$ is the primal and dual optimum of (\ref{eq:Langrangian}).
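For instance, in the case $\xi^*_{aux}>0$, the stationarity condition can be verified by expanding and regrouping:
\begin{align*}
&A^{\top}\lambda^*_{\alpha}-p-\eta^*_{\alpha}+\alpha d^D\circ\nabla f({w^*_{\alpha}})\\
=&(1-\alpha\xi^*_{aux})\left(A^{\top}\lambda^*_{0}-p-\eta^*_{0}\right)+\alpha\left(d^D\circ\nabla f({w^*})+A^{\top}\lambda^*_{aux}-\eta^*_{aux}-\xi^*_{aux}p\right)=0,
\end{align*}
where both parentheses vanish by the KKT conditions of (\ref{eq:LP-0}) and (\ref{eq:auxiliary}), respectively; complementary slackness and $\eta^*_{\alpha}\geq0$ follow from $\eta^*_{0}(s,a){w^*}(s,a)=\eta^*_{aux}(s,a){w^*}(s,a)=0$ and $1-\alpha\xi^*_{aux}\geq0$ for $\alpha\in[0,\bar{\alpha}]$.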
Then by Lemma~\ref{lem:Lagrangian equivalence}, we know that for $\alpha\in[0,\bar{\alpha}]$, \begin{equation} {w^*_{\alpha}}={w^*}\in \mathcal{W}^*_0, \quad {v^*_{\alpha}}=(1-\alpha\xi^*_{aux})\lambda^*_{0}+\alpha\lambda^*_{aux}. \end{equation} Let $\alpha=\bar{\alpha}=\frac{1}{\xi^*_{aux}}$; then, since $\Vert w^*_{\bar{\alpha}}\Vert_{\infty}=\Vert {w^*}\Vert_{\infty}\leq B^0_w$, by Lemma~\ref{lem:bound tilde nu} we have: \begin{equation} \Vert\bar{\alpha}\lambda^*_{aux}\Vert_{\infty}=\Vert v^*_{\bar{\alpha}}\Vert_{\infty}\leq\frac{\bar{\alpha}{B_{f',0}}+1}{1-\gamma}, \end{equation} which implies that \begin{equation} \Vert\lambda^*_{aux}\Vert_{\infty}\leq\frac{{B_{f',0}}+\xi^*_{aux}}{1-\gamma}. \end{equation} Therefore, combining with $\Vert{v^*_{0}}\Vert_{\infty}\leq\frac{1}{1-\gamma}$, we have \begin{equation} \Vert{v^*_{\alpha}}-{v^*_{0}}\Vert_{\infty}\leq\alpha\cdot\frac{{B_{f',0}}+2\xi^*_{aux}}{1-\gamma},\quad\forall \alpha\in[0,\bar{\alpha}], \end{equation} which concludes our proof. \subsection{Proof of Lemma~\ref{lem:Lagrangian equivalence}} \label{proof:lem Lagrangian equivalence} From the KKT conditions of $\bar{L}_{\alpha}(\lambda,\eta,w)$, we have \begin{align} &{w^*_{\alpha}}(s,a)=(f')^{-1}(\frac{e_{\lambda^*_{\alpha}}(s,a)+\eta^*_{\alpha}(s,a)}{\alpha}),\forall s,a,\\ &{w^*_{\alpha}}\geq0,\\ &\sum_{a}{w^*_{\alpha}}(s,a)d^D(s,a)=(1-\gamma)\mu_0(s)+\gamma\sum_{s',a'}P(s|s',a'){w^*_{\alpha}}(s',a')d^D(s',a'),\forall s,\\ &\eta^*_{\alpha}\geq0,\\ &\eta^*_{\alpha}(s,a){w^*_{\alpha}}(s,a)=0,\forall s,a.
\end{align} Therefore, we can see that $\lambda^*_{\alpha}$ is the solution of the following equations: \begin{align} &e_{\lambda^*_{\alpha}}(s,a)=\alpha f'({w^*_{\alpha}}(s,a)),\text{ for }s,a \text{ such that }{w^*_{\alpha}}(s,a)\neq0,\label{eq:lagrange-1}\\ &e_{\lambda^*_{\alpha}}(s,a)\leq\alpha f'(0),\text{ for }s,a \text{ such that }{w^*_{\alpha}}(s,a)=0.\label{eq:lagrange-2} \end{align} Besides, from the KKT conditions of ${L_{\alpha}}({v},w)$, we have \begin{align} &{w^*_{\alpha}}(s,a)=\max\{0,(f')^{-1}(\frac{e_{{v^*_{\alpha}}}(s,a)}{\alpha})\},\forall s,a,\\ &{w^*_{\alpha}}\geq0,\\ &\sum_{a}{w^*_{\alpha}}(s,a)d^D(s,a)=(1-\gamma)\mu_0(s)+\gamma\sum_{s',a'}P(s|s',a'){w^*_{\alpha}}(s',a')d^D(s',a'),\forall s. \end{align} Therefore, ${v^*_{\alpha}}$ is the solution of the following equations: \begin{align} &e_{{v^*_{\alpha}}}(s,a)=\alpha f'({w^*_{\alpha}}(s,a)),\text{ for }s,a \text{ such that }{w^*_{\alpha}}(s,a)\neq0,\label{eq:lagrange-3}\\ &e_{{v^*_{\alpha}}}(s,a)\leq\alpha f'(0),\text{ for }s,a \text{ such that }{w^*_{\alpha}}(s,a)=0.\label{eq:lagrange-4} \end{align} It is observed that (\ref{eq:lagrange-1})(\ref{eq:lagrange-2}) are the same as (\ref{eq:lagrange-3})(\ref{eq:lagrange-4}), which implies that $\lambda^*_{\alpha}={v^*_{\alpha}}$. \section{Proof of Corollary~\ref{cor:sample unregularized 0}} \label{proof:cor sample unregularized 0} First, by Lemma~\ref{lem:hat L tilde L} applied with $\alpha=0$, we know that \begin{equation} {L_0}({v^*_{0}},{w^*_{0}})-{L_0}({v^*_{0}},\hat{w})\leq \frac{2{B_{w,0}}}{1-\gamma}\sqrt{\frac{2\log\frac{4|{\mathcal{V}}||{\mathcal{W}}|}{\delta}}{n}}+\sqrt{\frac{2\log\frac{4|{\mathcal{V}}|}{\delta}}{n_0}}.
\end{equation} Substituting the definition (\ref{prob:maximin2}) of ${L_0}({v^*_{0}},w)=(1-\gamma)\mathbb{E}_{s\sim\mu_0}[{v^*_{0}}(s)]+\mathbb{E}_{(s,a)\sim d^D}[w(s,a)e_{{v^*_{0}}}(s,a)]$ into the above inequality (the $\mu_0$ terms cancel), we have \begin{equation} \label{eq:cor-2-eq2} \sum_{s,a}\left({d^*_0}(s,a)e_{{v^*_{0}}}(s,a)\right)-\sum_{s,a}\left(\hat{d}(s,a)e_{{v^*_{0}}}(s,a)\right)\leq \frac{2{B_{w,0}}}{1-\gamma}\sqrt{\frac{2\log\frac{4|{\mathcal{V}}||{\mathcal{W}}|}{\delta}}{n}}+\sqrt{\frac{2\log\frac{4|{\mathcal{V}}|}{\delta}}{n_0}}. \end{equation} Note that ${v^*_{0}}$ is the optimal value function of the unregularized MDP $\mathcal{M}$ and ${d^*_0}$ is the discounted state visitation distribution of the optimal policy ${\pi^*_0}$ \citep{puterman1994markov}. Therefore, invoking Lemma~\ref{lem:performance difference}, we have \begin{align} J(\pi)-J({\pi^*_0})&=\mathbb{E}_{(s,a)\sim d^{\pi}}[r(s,a)+\gamma\mathbb{E}_{s'\sim P(\cdot|s,a)}{v^*_{0}}(s')- {v^*_{0}}(s)]\notag\\ &=\sum_{s,a}d^{\pi}(s,a)e_{{v^*_{0}}}(s,a)\label{eq:cor-2-eq1}. \end{align} Let $\pi={\pi^*_0}$ in (\ref{eq:cor-2-eq1}); then we obtain \begin{equation} \sum_{s,a}{d^*_0}(s,a)e_{{v^*_{0}}}(s,a)=0. \end{equation} Substituting it into (\ref{eq:cor-2-eq2}) gives \begin{equation} \label{eq:cor-2-eq3} \sum_{s,a}\left(\hat{d}(s,a)(-e_{{v^*_{0}}}(s,a))\right)\leq \frac{2{B_{w,0}}}{1-\gamma}\sqrt{\frac{2\log\frac{4|{\mathcal{V}}||{\mathcal{W}}|}{\delta}}{n}}+\sqrt{\frac{2\log\frac{4|{\mathcal{V}}|}{\delta}}{n_0}}. \end{equation} Notice that since ${v^*_{0}}$ is the optimal value function, $-e_{{v^*_{0}}}(s,a)\geq0$ for all $s,a$.
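Indeed, this follows from the Bellman optimality equation for ${v^*_{0}}$:
\begin{equation*}
{v^*_{0}}(s)=\max_{a\in\mathcal{A}}\left\{r(s,a)+\gamma\mathbb{E}_{s'\sim P(\cdot|s,a)}[{v^*_{0}}(s')]\right\}\geq r(s,a)+\gamma\mathbb{E}_{s'\sim P(\cdot|s,a)}[{v^*_{0}}(s')],\quad\forall a\in\mathcal{A},
\end{equation*}
so that $e_{{v^*_{0}}}(s,a)\leq0$ for all $s,a$.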
Therefore, we have: \begin{align} J({\pi^*_0})-J(\hat{\pi})&=\sum_{s,a}d^{\hat{\pi}}(s,a)(-e_{{v^*_{0}}}(s,a))\\ &=\sum_{s,a}d^{\hat{\pi}}(s)\hat{\pi}(a|s)(-e_{{v^*_{0}}}(s,a))\\ &\leq{{B_{w,u}}}\sum_{s,a}d^D(s)\hat{\pi}(a|s)(-e_{{v^*_{0}}}(s,a))\\ &={{B_{w,u}}}\sum_{s,a}d^D(s)\frac{\hat{w}(s,a)\pi_D(a|s)}{\sum_{a'}\hat{w}(s,a')\pi_D(a'|s)}(-e_{{v^*_{0}}}(s,a))\\ &\leq\frac{{B_{w,u}}}{{B_{w,l}}}\sum_{s,a}d^D(s)\pi_D(a|s)\hat{w}(s,a)(-e_{{v^*_{0}}}(s,a))\\ &=\frac{{B_{w,u}}}{{B_{w,l}}}\sum_{s,a}\hat{d}(s,a)(-e_{{v^*_{0}}}(s,a))\\ &\leq\frac{2{B_{w,0}}{B_{w,u}}}{(1-\gamma){B_{w,l}}}\sqrt{\frac{2\log\frac{4|{\mathcal{V}}||{\mathcal{W}}|}{\delta}}{n}}+\frac{{B_{w,u}}}{{B_{w,l}}}\sqrt{\frac{2\log\frac{4|{\mathcal{V}}|}{\delta}}{n_0}}, \end{align} where the first step comes from (\ref{eq:cor-2-eq1}), the third step is due to Assumption~\ref{ass:strong conc}, the fifth step comes from Assumption~\ref{ass:W good}, and the last step comes from (\ref{eq:cor-2-eq3}). This concludes our proof. \subsection{Proof of Lemma~\ref{lem:ergodic}} \label{proof:lem ergodic} First notice that $d^D(s)\geq(1-\gamma)\mu_0(s)$. Then, since $d^{\pi}(s)\leq B_{erg,2}\mu_0(s),\forall s,\pi$, we have for any policy $\pi$: \begin{equation} \frac{d^{\pi}(s)}{d^D(s)}\leq\frac{1}{1-\gamma}\frac{d^{\pi}(s)}{\mu_0(s)}\leq\frac{B_{erg,2}}{1-\gamma}. \end{equation} On the other hand, ${d^*_0}(s)\geq(1-\gamma)\mu_0(s)$; therefore, similarly we have: \begin{equation} \frac{{d^*_0}(s)}{d^D(s)}\geq\frac{(1-\gamma)\mu_0(s)}{d^D(s)}\geq\frac{1-\gamma}{B_{erg,2}}. \end{equation} \section{Proof of Theorem~\ref{thm:sample regularized approximate}} \label{proof:thm sample regularized approximate} Our proof follows a procedure similar to that of Theorem~\ref{thm:sample regularized} and also consists of (1) bounding $|{L_{\alpha}}({v},w)-\hat{L}_{\alpha}({v},w)|$, (2) characterizing the error $\Vert\hat{w}-{w^*_{\alpha}}\Vert_{2,d^D}$, and (3) analyzing $\hat{\pi}$ and ${\pi^*_{\alpha}}$.
The first and third steps are exactly the same as in Theorem~\ref{thm:sample regularized}, but the second step is more involved; we elaborate on it in this section. We will use the following notations for brevity throughout the discussion: \begin{align} &{v^*_{\alpha,\mathcal{V}}}=\arg\min_{{v}\in {\mathcal{V}}}\Vert {v}-{v^*_{\alpha}}\Vert_{1,\mu_0}+\Vert {v}-{v^*_{\alpha}}\Vert_{1,d^D}+\Vert {v}-{v^*_{\alpha}}\Vert_{1,d^{D'}},\\ &{w^*_{\alpha,\mathcal{W}}}=\arg\min_{w\in {\mathcal{W}}}\Vert w-{w^*_{\alpha}}\Vert_{1,d^D},\\ &\hat{{v}}(w)=\arg\min_{{v}\in {\mathcal{V}}}\hat{L}_{\alpha}({v},w),\forall w. \end{align} We first need to characterize ${L_{\alpha}}({v^*_{\alpha}},\hat{w})-{L_{\alpha}}({v^*_{\alpha}},{w^*_{\alpha}})$, which we decompose into the following terms: \begin{align} &{L_{\alpha}}({v^*_{\alpha}},\hat{w})-{L_{\alpha}}({v^*_{\alpha}},{w^*_{\alpha}})=(\underbrace{{L_{\alpha}}({v^*_{\alpha}},\hat{w})-{L_{\alpha}}({v^*_{\alpha,\mathcal{V}}},\hat{w})}_{(1)})+(\underbrace{{L_{\alpha}}({v^*_{\alpha,\mathcal{V}}},\hat{w})-\hat{L}_{\alpha}({v^*_{\alpha,\mathcal{V}}},\hat{w})}_{(2)})\\ &+(\underbrace{\hat{L}_{\alpha}({v^*_{\alpha,\mathcal{V}}},\hat{w})-\hat{L}_{\alpha}(\hat{{v}},\hat{w})}_{(3)})+(\underbrace{\hat{L}_{\alpha}(\hat{{v}},\hat{w})-\hat{L}_{\alpha}(\hat{{v}}({w^*_{\alpha,\mathcal{W}}}),{w^*_{\alpha,\mathcal{W}}})}_{(4)})\\ &+(\underbrace{\hat{L}_{\alpha}(\hat{{v}}({w^*_{\alpha,\mathcal{W}}}),{w^*_{\alpha,\mathcal{W}}})-{L_{\alpha}}(\hat{{v}}({w^*_{\alpha,\mathcal{W}}}),{w^*_{\alpha,\mathcal{W}}})}_{(5)})+ (\underbrace{{L_{\alpha}}(\hat{{v}}({w^*_{\alpha,\mathcal{W}}}),{w^*_{\alpha,\mathcal{W}}})-{L_{\alpha}}(\hat{{v}}({w^*_{\alpha,\mathcal{W}}}),{w^*_{\alpha}})}_{(6)})\\ &+ (\underbrace{{L_{\alpha}}(\hat{{v}}({w^*_{\alpha,\mathcal{W}}}),{w^*_{\alpha}})-{L_{\alpha}}({v^*_{\alpha}},{w^*_{\alpha}})}_{(7)}).
\end{align} For terms (2) and (5), we can apply Lemma~\ref{lem:hat L conc} and thus \begin{equation} (2)\geq-{\epsilon_{stat}},(5)\geq-{\epsilon_{stat}}. \end{equation} For term (3), since $\hat{L}_{\alpha}(\hat{{v}},\hat{w})-\min_{{v}\in {\mathcal{V}}}\hat{L}_{\alpha}({v},\hat{w})\leq\epsilon_{o,{v}}$ and ${v^*_{\alpha,\mathcal{V}}}\in {\mathcal{V}}$, we have \begin{equation} (3)\geq-\epsilon_{o,{v}}. \end{equation} For term (4), since $\max_{w\in {\mathcal{W}}}\min_{{v}\in {\mathcal{V}}}\hat{L}_{\alpha}({v},w)-\min_{{v}\in {\mathcal{V}}}\hat{L}_{\alpha}({v},\hat{w})\leq\epsilon_{o,w}$ and ${w^*_{\alpha,\mathcal{W}}}\in {\mathcal{W}}$, \begin{equation} \hat{L}_{\alpha}(\hat{{v}},\hat{w})\geq\min_{{v}\in {\mathcal{V}}}\hat{L}_{\alpha}({v},\hat{w})\geq\max_{w\in {\mathcal{W}}}\min_{{v}\in {\mathcal{V}}}\hat{L}_{\alpha}({v},w)-\epsilon_{o,w}\geq\hat{L}_{\alpha}(\hat{{v}}({w^*_{\alpha,\mathcal{W}}}),{w^*_{\alpha,\mathcal{W}}})-\epsilon_{o,w}, \end{equation} or \begin{equation} (4)\geq-\epsilon_{o,w}. \end{equation} For term (7), since ${v^*_{\alpha}}=\arg\min_{{v}\in\mathbb{R}^{|\mathcal{S}|}}{L_{\alpha}}({v},{w^*_{\alpha}})$, we have: \begin{equation} (7)\geq0. \end{equation} Only terms (1) and (6) remain to be bounded; for these we introduce the following lemma on the continuity of ${L_{\alpha}}({v},w)$. \begin{lemma} \label{lem:f continuity} Suppose Assumptions~\ref{ass:W bound}, \ref{ass:f prop} and \ref{ass:V bound} hold. Then for any ${v},{v}_1,{v}_2\in {\mathcal{V}}$ and $w,w_1,w_2\in {\mathcal{W}}$, we have: \begin{align} &|{L_{\alpha}}({v}_1,w)-{L_{\alpha}}({v}_2,w)|\leq\left({B_{w,\alpha}}+1\right)\left(\Vert{v}_1-{v}_2\Vert_{1,\mu_0}+\Vert{v}_1-{v}_2\Vert_{1,d^D}+\Vert{v}_1-{v}_2\Vert_{1,d^{D'}}\right),\\ &|{L_{\alpha}}({v},w_1)-{L_{\alpha}}({v},w_2)|\leq({B_{e,\alpha}}+\alpha {B_{f',\alpha}})\Vert w_1-w_2\Vert_{1,d^D}. \end{align} \end{lemma} The proof is in Section~\ref{proof:lem f continuity}.
Using Lemma~\ref{lem:f continuity}, we can bound terms (1) and (6) easily: \begin{equation} (1)\geq-\left({B_{w,\alpha}}+1\right)\epsilon_{\alpha,r,v}, (6)\geq-({B_{e,\alpha}}+\alpha {B_{f',\alpha}})\epsilon_{\alpha,r,w}. \end{equation} Combining the above inequalities, we obtain \begin{equation} {L_{\alpha}}({v^*_{\alpha}},\hat{w})-{L_{\alpha}}({v^*_{\alpha}},{w^*_{\alpha}})\geq-2{\epsilon_{stat}}-(\epsilon_{o,{v}}+\epsilon_{o,w})-\left(\left({B_{w,\alpha}}+1\right)\epsilon_{\alpha,r,v}+({B_{e,\alpha}}+\alpha {B_{f',\alpha}})\epsilon_{\alpha,r,w}\right). \end{equation} Let $\epsilon_{\alpha,{app}}$ denote $\left({B_{w,\alpha}}+1\right)\epsilon_{\alpha,r,v}+({B_{e,\alpha}}+\alpha {B_{f',\alpha}})\epsilon_{\alpha,r,w}$ and $\epsilon_{opt}$ denote $\epsilon_{o,{v}}+\epsilon_{o,w}$, then \begin{equation} {L_{\alpha}}({v^*_{\alpha}},\hat{w})-{L_{\alpha}}({v^*_{\alpha}},{w^*_{\alpha}})\geq-2{\epsilon_{stat}}-\epsilon_{opt}-\epsilon_{\alpha,{app}}. \end{equation} Further, utilizing the strong convexity of $f$ and Lemma~\ref{lem:tilde w tilde pi}, we have: \begin{align} &\mathbb{E}_{s\sim {d^*_{\alpha}}}[\Vert{\pi^*_{\alpha}}(\cdot|s)-\hat{\pi}(\cdot|s)\Vert_1]\leq 2\Vert \hat{w}-{w^*_{\alpha}}\Vert_{2,d^D}\leq4\sqrt{\frac{\epsilon_{stat}}{\alpha M_f}}+2\sqrt{\frac{2(\epsilon_{opt}+\epsilon_{\alpha,{app}})}{\alpha M_f}}, \end{align} which completes the proof. \subsection{Proof of Lemma~\ref{lem:f continuity}} \label{proof:lem f continuity} First, by the definition of ${L_{\alpha}}({v},w)$ (\ref{prob:maximin2}) we have \begin{align} &|{L_{\alpha}}({v}_1,w)-{L_{\alpha}}({v}_2,w)|\\ =&|(1-\gamma)\mathbb{E}_{s\sim \mu_0}[{v}_1(s)-{v}_2(s)]+\mathbb{E}_{(s,a)\sim d^D}[w(s,a)(e_{{v}_1}(s,a)-e_{{v}_2}(s,a))]|\\ \leq&(1-\gamma)\mathbb{E}_{s\sim \mu_0}[|{v}_1(s)-{v}_2(s)|]+\mathbb{E}_{(s,a)\sim d^D}[w(s,a)|e_{{v}_1}(s,a)-e_{{v}_2}(s,a)|]\\ =&(1-\gamma)\Vert{v}_1-{v}_2\Vert_{1,\mu_0}+\mathbb{E}_{(s,a)\sim d^D}[w(s,a)|e_{{v}_1}(s,a)-e_{{v}_2}(s,a)|].
\end{align} For $\mathbb{E}_{(s,a)\sim d^D}[w(s,a)|e_{{v}_1}(s,a)-e_{{v}_2}(s,a)|]$, notice that from Assumption~\ref{ass:W bound}, \begin{align} &\mathbb{E}_{(s,a)\sim d^D}[w(s,a)|e_{{v}_1}(s,a)-e_{{v}_2}(s,a)|]\\ \leq&{B_{w,\alpha}}\mathbb{E}_{(s,a)\sim d^D}\left[|\gamma\mathbb{E}_{s'\sim P(s'|s,a)}[{v}_1(s')-{v}_2(s')]+\left({v}_2(s)-{v}_1(s)\right)|\right]\\ \leq&{B_{w,\alpha}}\mathbb{E}_{(s,a)\sim d^D}\left[|\gamma\mathbb{E}_{s'\sim P(s'|s,a)}[{v}_1(s')-{v}_2(s')]|\right]+{B_{w,\alpha}}\mathbb{E}_{s\sim d^D}\left[|{v}_2(s)-{v}_1(s)|\right]\\ \leq&\gamma {B_{w,\alpha}}\mathbb{E}_{(s,a)\sim d^D,s'\sim P(s'|s,a)}[|{v}_1(s')-{v}_2(s')|]+{B_{w,\alpha}}\Vert{v}_2-{v}_1\Vert_{1,d^D}\\ \leq& {B_{w,\alpha}}\left(\Vert{v}_1-{v}_2\Vert_{1,d^D}+\Vert{v}_1-{v}_2\Vert_{1,d^{D'}}\right). \end{align} Thus we have \begin{equation} |{L_{\alpha}}({v}_1,w)-{L_{\alpha}}({v}_2,w)|\leq\left({B_{w,\alpha}}+1\right)\left(\Vert{v}_1-{v}_2\Vert_{1,\mu_0}+\Vert{v}_1-{v}_2\Vert_{1,d^D}+\Vert{v}_1-{v}_2\Vert_{1,d^{D'}}\right). \end{equation} Next we bound $|{L_{\alpha}}({v},w_1)-{L_{\alpha}}({v},w_2)|$: \begin{align} &|{L_{\alpha}}({v},w_1)-{L_{\alpha}}({v},w_2)|\\ =&|\alpha\mathbb{E}_{(s,a)\sim d^D}[f(w_2(s,a))-f(w_1(s,a))]+\mathbb{E}_{(s,a)\sim d^D}[(w_1(s,a)-w_2(s,a))e_{{v}}(s,a)]|\\ \leq&\alpha\mathbb{E}_{(s,a)\sim d^D}[|f(w_1(s,a))-f(w_2(s,a))|]+\mathbb{E}_{(s,a)\sim d^D}[|w_1(s,a)-w_2(s,a)||e_{{v}}(s,a)|]. \end{align} For $\alpha\mathbb{E}_{(s,a)\sim d^D}[|f(w_1(s,a))-f(w_2(s,a))|]$, from Assumption~\ref{ass:f prop} we know \begin{align} &\alpha\mathbb{E}_{(s,a)\sim d^D}[|f(w_1(s,a))-f(w_2(s,a))|]\\ \leq&\alpha {B_{f',\alpha}}\mathbb{E}_{(s,a)\sim d^D}[|w_1(s,a)-w_2(s,a)|]\\ =&\alpha {B_{f',\alpha}}\Vert w_1-w_2\Vert_{1,d^D}.
\end{align} For $\mathbb{E}_{(s,a)\sim d^D}[|w_1(s,a)-w_2(s,a)||e_{{v}}(s,a)|]$, from Assumption~\ref{ass:V bound} we know \begin{align} &\mathbb{E}_{(s,a)\sim d^D}[|w_1(s,a)-w_2(s,a)||e_{{v}}(s,a)|]\\ \leq&{B_{e,\alpha}}\mathbb{E}_{(s,a)\sim d^D}[|w_1(s,a)-w_2(s,a)|]\\ =&{B_{e,\alpha}}\Vert w_1-w_2\Vert_{1,d^D}. \end{align} Therefore we have \begin{equation} |{L_{\alpha}}({v},w_1)-{L_{\alpha}}({v},w_2)|\leq({B_{e,\alpha}}+\alpha {B_{f',\alpha}})\Vert w_1-w_2\Vert_{1,d^D}. \end{equation} \section{Proofs for \texttt{PRO-RL-BC}\xspace} \subsection{Proof of Lemma~\ref{lem:variation}} \label{proof:lem variation} Notice that by the variational form of total variation, we have for any policies $\pi,\pi'$ and $s\in\mathcal{S}$, \begin{align} \Vert\pi(\cdot|s)-\pi'(\cdot|s)\Vert_1&=\max_{h:\Vert h\Vert_{\infty}\leq 1}[\mathbb{E}_{a\sim\pi(\cdot|s)} h(a)-\mathbb{E}_{a\sim\pi'(\cdot|s)} h(a)]\\ &=\mathbb{E}_{a\sim\pi(\cdot|s)}[ h^s_{\pi,\pi'}(a)]-\mathbb{E}_{a\sim\pi'(\cdot|s)}[h^s_{\pi,\pi'}(a)], \end{align} which implies that \begin{align} \mathbb{E}_{s\sim d}[\Vert\pi(\cdot|s)-\pi'(\cdot|s)\Vert_1]&=\mathbb{E}_{s\sim d}\left[\mathbb{E}_{a\sim\pi(\cdot|s)}[h^s_{\pi,\pi'}(a)]-\mathbb{E}_{a\sim\pi'(\cdot|s)}[h^s_{\pi,\pi'}(a)]\right]\\ &=\mathbb{E}_{s\sim d}\left[\mathbb{E}_{a\sim\pi(\cdot|s)}[h_{\pi,\pi'}(s,a)]-\mathbb{E}_{a\sim\pi'(\cdot|s)}[h_{\pi,\pi'}(s,a)]\right], \end{align} where the last step comes from the definition of $h_{\pi,\pi'}$. \subsection{Proof of Theorem~\ref{thm:sample regularized BC}} \label{proof:thm sample regularized BC} Let $\epsilon_{UO}$ denote $\left(\frac{4(\alpha {B_{f,\alpha}}+{B_{w,\alpha}}{B_{e,\alpha}})}{\alpha M_f}\right)^{\frac{1}{2}}\cdot\left(\frac{2\log\frac{8|{\mathcal{V}}||{\mathcal{W}}|}{\delta}}{n_1}\right)^{\frac{1}{4}}+\left(\frac{4(1-\gamma)B_{v,\alpha}}{\alpha M_f}\right)^{\frac{1}{2}}\cdot\left(\frac{2\log\frac{8|{\mathcal{V}}|}{\delta}}{n_0}\right)^{\frac{1}{4}}$.
Let $E$ denote the event \begin{equation} \Vert \hat{w}-{w^*_{\alpha}}\Vert_{2,d^D}\leq\epsilon_{UO}; \end{equation} then by Theorem~\ref{thm:sample regularized approximate}, we have \begin{equation} \text{Pr}(E)\geq 1-\frac{\delta}{2}. \end{equation} The following discussion is all conditioned on $E$. Let $l'_{i,\pi,h}$ denote $\hat{w}(s_i,a_i)(h^{\pi}(s_i)-h(s_i,a_i))$; then we know: \begin{align} \mathbb{E}_{\mathcal{D}_2}[l'_{i,\pi,h}]&=\mathbb{E}_{(s,a)\sim d^D}[\hat{w}(s,a)(h^{\pi}(s)-h(s,a))]\\ &=\left(\sum_{s,a}d^D(s,a)\hat{w}(s,a)\right)\mathbb{E}_{s\sim\hat{d}'}\left[\mathbb{E}_{a\sim\pi(\cdot|s)}[h(s,a)]-\mathbb{E}_{a\sim\hat{\pi}(\cdot|s)}[h(s,a)]\right], \end{align} where $\hat{d}'(s)=\frac{\sum_{a'}d^D(s,a')\hat{w}(s,a')}{\sum_{s',a'}d^D(s',a')\hat{w}(s',a')}$. Notice that $0\leq\hat{w}(s,a)\leq {B_{w,\alpha}}$ and $|h(s,a)|\leq1$; then by Hoeffding's inequality, for any $\pi\in\Pi$ and $h\in\mathcal{H}$ we have, with probability at least $1-\frac{\delta}{2}$, \begin{align} &\bigg|\frac{1}{n_2}\sum_{i=1}^{n_2}l'_{i,\pi,h}-\left(\sum_{s',a'}d^D(s',a')\hat{w}(s',a')\right)\mathbb{E}_{s\sim\hat{d}'}\left[\mathbb{E}_{a\sim\pi(\cdot|s)}[h(s,a)]-\mathbb{E}_{a\sim\hat{\pi}(\cdot|s)}[h(s,a)]\right]\bigg|\notag\\ \leq&2{B_{w,\alpha}}\sqrt{\frac{2\log\frac{4|\mathcal{H}||\Pi|}{\delta}}{n_2}}\leq2{B_{w,\alpha}}\sqrt{\frac{6\log\frac{4|\Pi|}{\delta}}{n_2}}:={\epsilon_{stat,2}}.\label{eq:thm3-4} \end{align} Besides, the following lemma shows that $\hat{d}'$ is close to ${d^*_{\alpha}}$ and $\left(\sum_{s',a'}d^D(s',a')\hat{w}(s',a')\right)$ is close to 1 conditioned on $E$: \begin{lemma} \label{lem:importance sampling close} Conditioned on $E$, we have \begin{align} \Vert\hat{d}'-{d^*_{\alpha}}\Vert_1\leq 2\epsilon_{UO},\label{eq:thm3-1}\\ \bigg|\left(\sum_{s',a'}d^D(s',a')\hat{w}(s',a')\right)-1\bigg|\leq\epsilon_{UO}.\label{eq:thm3-2} \end{align} \end{lemma} The proof of the above lemma is in Appendix~\ref{proof:lem importance sampling close}.
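The first display above is a finite-sum identity: the weighted loss $\mathbb{E}_{d^D}[\hat{w}(h^{\pi}-h)]$ factors into the normalizer $\sum_{s,a}d^D\hat{w}$ times a comparison of $\pi$ and $\hat{\pi}$ under the reweighted state distribution $\hat{d}'$. It can be sanity-checked numerically; the following sketch (NumPy) uses arbitrary illustrative values for $d^D$, $\hat{w}$, $h$, and $\pi$, not quantities from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
nS, nA = 4, 3

dD = rng.random((nS, nA)); dD /= dD.sum()                  # offline occupancy d^D(s,a)
w_hat = rng.random((nS, nA)) * 2.0                         # candidate weights \hat{w}(s,a)
h = rng.uniform(-1, 1, (nS, nA))                           # discriminator h(s,a), |h| <= 1
pi = rng.random((nS, nA)); pi /= pi.sum(1, keepdims=True)  # comparison policy pi(a|s)

# induced policy \hat{pi}(a|s) proportional to \hat{w}(s,a) d^D(s,a)
pi_hat = w_hat * dD
pi_hat /= pi_hat.sum(1, keepdims=True)

Z = (dD * w_hat).sum()                 # normalizer sum_{s,a} d^D(s,a) \hat{w}(s,a)
d_prime = (dD * w_hat).sum(1) / Z      # reweighted state distribution \hat{d}'(s)

h_pi = (pi * h).sum(1)                 # h^pi(s) = E_{a~pi}[h(s,a)]
h_pihat = (pi_hat * h).sum(1)          # E_{a~\hat{pi}}[h(s,a)]

lhs = (dD * w_hat * (h_pi[:, None] - h)).sum()   # E_{d^D}[\hat{w} (h^pi - h)]
rhs = Z * (d_prime * (h_pi - h_pihat)).sum()     # factored form

assert np.isclose(lhs, rhs)
```

The check confirms that estimating the weighted loss from samples of $d^D$ implicitly compares $\pi$ against $\hat{\pi}$ under $\hat{d}'$, which is exactly how the concentration bound is used in the proof.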
With concentration result (\ref{eq:thm3-4}) and Lemma~\ref{lem:importance sampling close}, we can bound $\mathbb{E}_{s\sim{d^*_{\alpha}}}[\Vert\bar{\pi}(\cdot|s)-{\pi^*_{\alpha}}(\cdot|s)\Vert_1]$. To facilitate our discussion, we will use the following notations: \begin{align} &\bar{h}:=h_{\bar{\pi},{\pi^*_{\alpha}}}\in\mathcal{H},\\ &\bar{h}':=\arg\max_{h\in\mathcal{H}}\sum_{i=1}^{n_2}\hat{w}(s_i,a_i)[h^{\bar{\pi}}(s_i)-h(s_i,a_i)],\\ &\tilde{h}:=\arg\max_{h\in\mathcal{H}}\sum_{i=1}^{n_2}\hat{w}(s_i,a_i)[h^{{\pi^*_{\alpha}}}(s_i)-h(s_i,a_i)]. \end{align} Then we have \begin{align} &\mathbb{E}_{s\sim{d^*_{\alpha}}}[\Vert\bar{\pi}(\cdot|s)-{\pi^*_{\alpha}}(\cdot|s)\Vert_1]\\ \leq&\mathbb{E}_{s\sim\hat{d}'}[\Vert\bar{\pi}(\cdot|s)-{\pi^*_{\alpha}}(\cdot|s)\Vert_1]+4\epsilon_{UO}\\ =&\mathbb{E}_{s\sim\hat{d}'}[\mathbb{E}_{a\sim\bar{\pi}(\cdot|s)}[\bar{h}(s,a)]-\mathbb{E}_{a\sim{\pi^*_{\alpha}}(\cdot|s)}[\bar{h}(s,a)]]+4\epsilon_{UO}\\ =&\mathbb{E}_{s\sim\hat{d}'}[\mathbb{E}_{a\sim\bar{\pi}(\cdot|s)}[\bar{h}(s,a)]-\mathbb{E}_{a\sim\hat{\pi}(\cdot|s)}[\bar{h}(s,a)]]\notag\\ &+\mathbb{E}_{s\sim\hat{d}'}[\mathbb{E}_{a\sim\hat{\pi}(\cdot|s)}[\bar{h}(s,a)]-\mathbb{E}_{a\sim{\pi^*_{\alpha}}(\cdot|s)}[\bar{h}(s,a)]]+4\epsilon_{UO}\\ =&\mathbb{E}_{s\sim\hat{d}',a\sim\hat{\pi}(\cdot|s)}[\bar{h}^{\bar{\pi}}(s)-\bar{h}(s,a)]+\mathbb{E}_{s\sim\hat{d}',a\sim\hat{\pi}(\cdot|s)}[(-\bar{h}^{{\pi^*_{\alpha}}}(s))-(-\bar{h}(s,a))]+4\epsilon_{UO}\\ \leq&\mathbb{E}_{s\sim\hat{d}',a\sim\hat{\pi}(\cdot|s)}[\bar{h}^{\bar{\pi}}(s)-\bar{h}(s,a)]+\mathbb{E}_{s\sim\hat{d}'}[\Vert{\pi^*_{\alpha}}(\cdot|s)-\hat{\pi}(\cdot|s)\Vert_1]+4\epsilon_{UO}\\ \leq&\mathbb{E}_{s\sim\hat{d}',a\sim\hat{\pi}(\cdot|s)}[\bar{h}^{\bar{\pi}}(s)-\bar{h}(s,a)]+\mathbb{E}_{s\sim{d^*_{\alpha}}}[\Vert{\pi^*_{\alpha}}(\cdot|s)-\hat{\pi}(\cdot|s)\Vert_1]+8\epsilon_{UO}\\ \leq&\mathbb{E}_{s\sim\hat{d}',a\sim\hat{\pi}(\cdot|s)}[\bar{h}^{\bar{\pi}}(s)-\bar{h}(s,a)]+10\epsilon_{UO},\label{eq:thm3-5} \end{align} where the 
first and sixth steps come from (\ref{eq:thm3-1}), the fifth step is due to $\Vert\bar{h}\Vert_{\infty}\leq1$, and the last step follows from Theorem~\ref{thm:sample regularized approximate}. For $\mathbb{E}_{s\sim\hat{d}',a\sim\hat{\pi}(\cdot|s)}[\bar{h}^{\bar{\pi}}(s)-\bar{h}(s,a)]$, we utilize the concentration result (\ref{eq:thm3-4}) and have, with probability at least $1-\delta$: \begin{align} &\mathbb{E}_{s\sim\hat{d}',a\sim\hat{\pi}(\cdot|s)}[\bar{h}^{\bar{\pi}}(s)-\bar{h}(s,a)]\\ \leq&\left(\sum_{s',a'}d^D(s',a')\hat{w}(s',a')\right)\mathbb{E}_{s\sim\hat{d}'}\left[\mathbb{E}_{a\sim\bar{\pi}(\cdot|s)}[\bar{h}(s,a)]-\mathbb{E}_{a\sim\hat{\pi}(\cdot|s)}[\bar{h}(s,a)]\right]+2\epsilon_{UO}\\ \leq&\frac{1}{n_2}\sum_{i=1}^{n_2}[\hat{w}(s_i,a_i)(\bar{h}^{\bar{\pi}}(s_i)-\bar{h}(s_i,a_i))]+{\epsilon_{stat,2}}+2\epsilon_{UO}\\ \leq&\frac{1}{n_2}\sum_{i=1}^{n_2}[\hat{w}(s_i,a_i)(\bar{h}'^{\bar{\pi}}(s_i)-\bar{h}'(s_i,a_i))]+{\epsilon_{stat,2}}+2\epsilon_{UO}\\ \leq&\frac{1}{n_2}\sum_{i=1}^{n_2}[\hat{w}(s_i,a_i)(\tilde{h}^{{\pi^*_{\alpha}}}(s_i)-\tilde{h}(s_i,a_i))]+{\epsilon_{stat,2}}+2\epsilon_{UO}\\ \leq&\mathbb{E}_{s\sim\hat{d}',a\sim\hat{\pi}(\cdot|s)}[\tilde{h}^{{\pi^*_{\alpha}}}(s)-\tilde{h}(s,a)]+2{\epsilon_{stat,2}}+4\epsilon_{UO}\\ \leq&\mathbb{E}_{s\sim\hat{d}'}[\Vert{\pi^*_{\alpha}}(\cdot|s)-\hat{\pi}(\cdot|s)\Vert_1]+2{\epsilon_{stat,2}}+4\epsilon_{UO}\\ \leq&2{\epsilon_{stat,2}}+10\epsilon_{UO},\label{eq:thm3-6} \end{align} where the first step comes from (\ref{eq:thm3-2}), the second step is due to (\ref{eq:thm3-4}), the third and fourth steps are from the definitions of $\bar{h}'$ and $\bar{\pi}$, the fifth step utilizes (\ref{eq:thm3-2}) and (\ref{eq:thm3-4}), the sixth step is due to $\Vert\tilde{h}\Vert_{\infty}\leq1$, and the last step is from (\ref{eq:thm3-1}) and Theorem~\ref{thm:sample regularized approximate}.
Combining (\ref{eq:thm3-5}) and (\ref{eq:thm3-6}), conditioned on $E$, with probability at least $1-\frac{\delta}{2}$ we have \begin{equation} \mathbb{E}_{s\sim{d^*_{\alpha}}}[\Vert\bar{\pi}(\cdot|s)-{\pi^*_{\alpha}}(\cdot|s)\Vert_1]\leq2{\epsilon_{stat,2}}+20\epsilon_{UO}. \end{equation} Notice that $\epsilon_{UO}\leq 2^{\frac{5}{4}}\sqrt{\frac{\mathcal{E}_{n_1,n_0,\alpha}(B_{w,\alpha},B_{f,\alpha},B_{v,\alpha},B_{e,\alpha})}{\alpha M_f}}$. Therefore, with probability at least $1-\delta$, we have: \begin{align} &\mathbb{E}_{s\sim {d^*_{\alpha}}}[\Vert{\pi^*_{\alpha}}(\cdot|s)-\bar{\pi}(\cdot|s)\Vert_1]\leq4{B_{w,\alpha}}\sqrt{\frac{6\log\frac{4|\Pi|}{\delta}}{n_2}}+50\sqrt{\frac{\mathcal{E}_{n_1,n_0,\alpha}(B_{w,\alpha},B_{f,\alpha},B_{v,\alpha},B_{e,\alpha})}{\alpha M_f}}. \end{align} This finishes our proof. \subsection{Proof of Lemma~\ref{lem:importance sampling close}} \label{proof:lem importance sampling close} The proof is similar to that of Lemma~\ref{lem:tilde w tilde pi}. First notice that \begin{align} &\bigg|\left(\sum_{s',a'}d^D(s',a')\hat{w}(s',a')\right)-1\bigg|\\ =&\bigg|\left(\sum_{s',a'}d^D(s',a')\hat{w}(s',a')\right)-\left(\sum_{s',a'}d^D(s',a'){w^*_{\alpha}}(s',a')\right)\bigg|\\ =&\bigg|\sum_{s',a'}d^D(s',a')\left(\hat{w}(s',a')-{w^*_{\alpha}}(s',a')\right)\bigg|\\ \leq&\sum_{s',a'}d^D(s',a')|\hat{w}(s',a')-{w^*_{\alpha}}(s',a')|\\ \leq&\Vert\hat{w}-{w^*_{\alpha}}\Vert_{2,d^D}\\ \leq&\epsilon_{UO},\label{eq:thm3-3} \end{align} which proves the second part of the lemma.
For the first part, we have \begin{align} &\Vert \hat{d}'-{d^*_{\alpha}}\Vert_1\\ =&\sum_s\bigg|\frac{1}{\sum_{s',a'}d^D(s',a')\hat{w}(s',a')}\sum_{a'}d^D(s,a')\hat{w}(s,a')-{d^*_{\alpha}}(s)\bigg|\\ \leq&\sum_s\left(\bigg|\frac{1}{\sum_{s',a'}d^D(s',a')\hat{w}(s',a')}-1\bigg|\sum_{a'}d^D(s,a')\hat{w}(s,a')\right)\notag\\ &+\sum_s\bigg|\sum_{a'}d^D(s,a')\hat{w}(s,a')-{d^*_{\alpha}}(s)\bigg|\\ =&\underbrace{\sum_s\left(\bigg|\frac{1}{\sum_{s',a'}d^D(s',a')\hat{w}(s',a')}-1\bigg|\sum_{a'}d^D(s,a')\hat{w}(s,a')\right)}_{(1)}\notag\\ &+\underbrace{\sum_s\bigg|\sum_{a'}d^D(s,a')\hat{w}(s,a')-\sum_{a'}d^D(s,a'){w^*_{\alpha}}(s,a')\bigg|}_{(2)}. \end{align} For term (1), notice that \begin{equation} \bigg|\frac{1}{\sum_{s',a'}d^D(s',a')\hat{w}(s',a')}-1\bigg|=\frac{\big|1-\sum_{s',a'}d^D(s',a')\hat{w}(s',a')\big|}{\sum_{s',a'}d^D(s',a')\hat{w}(s',a')}\leq\frac{\epsilon_{UO}}{\sum_{s',a'}d^D(s',a')\hat{w}(s',a')}. \end{equation} Therefore, \begin{align} &\sum_s\left(\bigg|\frac{1}{\sum_{s',a'}d^D(s',a')\hat{w}(s',a')}-1\bigg|\sum_{a'}d^D(s,a')\hat{w}(s,a')\right)\\ \leq&\epsilon_{UO}\sum_{s}\frac{\sum_{a'}d^D(s,a')\hat{w}(s,a')}{\sum_{s',a'}d^D(s',a')\hat{w}(s',a')}\\ =&\epsilon_{UO}. \end{align} For term (2), \begin{align} &\sum_s\bigg|\sum_{a'}d^D(s,a')\hat{w}(s,a')-\sum_{a'}d^D(s,a'){w^*_{\alpha}}(s,a')\bigg|\\ \leq&\sum_{s,a'}d^D(s,a')|\hat{w}(s,a')-{w^*_{\alpha}}(s,a')|\\ \leq&\epsilon_{UO}. \end{align} Thus we have \begin{equation} \Vert \hat{d}'-{d^*_{\alpha}}\Vert_1\leq2\epsilon_{UO}. \end{equation} \section{Proof of Lemmas in Theorem~\ref{thm:sample regularized constrained}} \subsection{Proof of Lemma~\ref{lem:bound tilde nu 2}} \label{proof:lem bound tilde nu 2} From KKT conditions of the maximin problem (\ref{prob:maximin2 constrained}), we have \begin{equation} {w^*_{\alpha,B_w}}(s,a)=\min\left(\max\left(0,(f')^{-1}\left(\frac{e_{{v^*_{\alpha,B_w}}}(s,a)}{\alpha}\right)\right),B_w\right). 
\end{equation} Suppose $|{v^*_{\alpha,B_w}}(s_m)|=\Vert {v^*_{\alpha,B_w}}\Vert_{\infty}$. Then we can consider the following two cases separately. \begin{itemize} \item \textbf{If there exists $a_{s_m}\in\mathcal{A}$ such that $0<{w^*_{\alpha,B_w}}(s_m,a_{s_m})<B_w$.} In this case, we know that \begin{equation} |e_{{v^*_{\alpha,B_w}}}(s_m,a_{s_m})|=\alpha |f'({w^*_{\alpha,B_w}}(s_m,a_{s_m}))|\leq\alpha B_{f'}. \end{equation} Then we can follow the arguments in Appendix~\ref{proof:lemma bound tilde nu} to obtain: \begin{equation} \Vert{v^*_{\alpha,B_w}}\Vert_{\infty}\leq\frac{\alpha B_{f'}+1}{1-\gamma}. \end{equation} \item \textbf{If for all $a\in\mathcal{A}$, ${w^*_{\alpha,B_w}}(s_m,a)\in\{0,B_w\}$.} In this case, we first introduce the following lemma: \begin{lemma} \label{lem:bound nu bridge} If for all $a\in\mathcal{A}$, ${w^*_{\alpha,B_w}}(s_m,a)\in\{0,B_w\}$, then there exist $a_1,a_2\in\mathcal{A}$ such that ${w^*_{\alpha,B_w}}(s_m,a_1)=0,{w^*_{\alpha,B_w}}(s_m,a_2)=B_w$. \end{lemma} See Appendix~\ref{proof:lem bound nu bridge} for proof. With Lemma~\ref{lem:bound nu bridge}, we can bound $|{v^*_{\alpha,B_w}}(s_m)|$ as follows. If ${v^*_{\alpha,B_w}}(s_m)\geq0$, then since ${w^*_{\alpha,B_w}}(s_m,a_2)=B_w$, we know $e_{{v^*_{\alpha,B_w}}}(s_m,a_2)\geq \alpha f'(B_w)$. Therefore we have: \begin{equation} \alpha f'(B_w)\leq e_{{v^*_{\alpha,B_w}}}(s_m,a_2)\leq r(s_m,a_2)-(1-\gamma){v^*_{\alpha,B_w}}(s_m), \end{equation} which implies: \begin{equation} \label{eq:bound nu 1} {v^*_{\alpha,B_w}}(s_m)\leq\frac{1}{1-\gamma}\left(|r(s_m,a_2)|+\alpha|f'(B_w)|\right)\leq\frac{\alpha B_{f'}+1}{1-\gamma}. \end{equation} If ${v^*_{\alpha,B_w}}(s_m)<0$, then since ${w^*_{\alpha,B_w}}(s_m,a_1)=0$, we know $e_{{v^*_{\alpha,B_w}}}(s_m,a_1)\leq \alpha f'(0)$.
Therefore we have: \begin{equation} \alpha f'(0)\geq e_{{v^*_{\alpha,B_w}}}(s_m,a_1)\geq r(s_m,a_1)-(1-\gamma){v^*_{\alpha,B_w}}(s_m), \end{equation} which implies: \begin{equation} \label{eq:bound nu 2} {v^*_{\alpha,B_w}}(s_m)\geq-\frac{1}{1-\gamma}(|r(s_m,a_1)|+|\alpha f'(0)|)\geq-\frac{\alpha B_{f'}+1}{1-\gamma}. \end{equation} Combining (\ref{eq:bound nu 1}) and (\ref{eq:bound nu 2}), we have $\Vert{v^*_{\alpha,B_w}}\Vert_{\infty}=|{v^*_{\alpha,B_w}}(s_m)|\leq\frac{\alpha B_{f'}+1}{1-\gamma}$. \end{itemize} In conclusion, we have: \begin{equation} \Vert{v^*_{\alpha,B_w}}\Vert_{\infty}\leq\frac{\alpha B_{f'}+1}{1-\gamma}. \end{equation} \subsection{Proof of Lemma~\ref{lem:bound nu bridge}} \label{proof:lem bound nu bridge} First note that it is impossible to have ${w^*_{\alpha,B_w}}(s_m,a)=0,\forall a$. This is because ${d^*_{\alpha,B_w}}(s_m,a)={w^*_{\alpha,B_w}}(s_m,a)d^D(s_m,a)$ satisfies Bellman flow constraint (\ref{eq:bellman flow 1}). Therefore \begin{equation} {d^*_{\alpha,B_w}}(s_m)=\sum_a{w^*_{\alpha,B_w}}(s_m,a)d^D(s_m,a)\geq(1-\gamma)\mu_0(s_m)>0. \end{equation} On the other hand, if ${w^*_{\alpha,B_w}}(s_m,a)=B_w,\forall a$, then from Bellman flow constraints we have: \begin{equation} \label{eq:bound nu 3} B_wd^D(s_m)={d^*_{\alpha,B_w}}(s_m)=(1-\gamma)\mu_0(s_m)+\sum_{s',a'}P(s_m|s',a'){w^*_{\alpha,B_w}}(s',a')d^D(s',a'). \end{equation} Notice that from Assumption~\ref{ass:dataset} $d^D$ is the discounted visitation distribution of $\pi_D$ and thus also satisfies Bellman flow constraints: \begin{equation} d^D(s_m)=(1-\gamma)\mu_0(s_m)+\sum_{s',a'}P(s_m|s',a')d^D(s',a'), \end{equation} which implies \begin{equation} \label{eq:bound nu 4} B_wd^D(s_m)=(1-\gamma)B_w\mu_0(s_m)+\sum_{s',a'}B_wP(s_m|s',a')d^D(s',a'). \end{equation} Combining (\ref{eq:bound nu 3}) and (\ref{eq:bound nu 4}), we have \begin{equation} (1-\gamma)(B_w-1)\mu_0(s_m)=\sum_{s',a'}({w^*_{\alpha,B_w}}(s',a')-B_w)P(s_m|s',a')d^D(s',a').
\end{equation} However, since $B_w>1$, $\mu_0(s_m)>0$, and ${w^*_{\alpha,B_w}}(s',a')-B_w\leq0$ for all $s',a'$, we have: \begin{equation} (1-\gamma)(B_w-1)\mu_0(s_m)>0,\sum_{s',a'}({w^*_{\alpha,B_w}}(s',a')-B_w)P(s_m|s',a')d^D(s',a')\leq0, \end{equation} which is a contradiction. Therefore, there must exist $a_1,a_2\in\mathcal{A}$ such that ${w^*_{\alpha,B_w}}(s_m,a_1)=0,{w^*_{\alpha,B_w}}(s_m,a_2)=B_w$. \section{Preliminaries} \label{sec:setting} \paragraph{Markov decision process (MDP).} We consider an infinite-horizon discounted MDP $\mathcal{M}=(\mathcal{S},\mathcal{A},P,r,\gamma,\mu_{0})$ \citep{bertsekas2017dynamic}, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $\gamma \in [0,1)$ is the discount factor, $P: \mathcal{S}\times\mathcal{A}\to\Delta(\mathcal{S})$ is the transition function, $\mu_0\in\Delta(\mathcal{S})$ is the initial state distribution, and $r:\mathcal{S}\times\mathcal{A}\to[0,1]$ is the reward function. Here, we assume $\mathcal{S}$ and $\mathcal{A}$ to be finite, but our results will not depend on their cardinalities and can be extended to the infinite case naturally. We also assume $\mu_0(s)>0$ for all $s\in\mathcal{S}$; since our analysis and results do not depend on $\min_{s\in\mathcal{S}}\mu_0(s)$, the probability $\mu_0(s)$ of any particular $s$ can be arbitrarily small, so this is a mild assumption made for technical convenience. A policy $\pi:\mathcal{S}\to\Delta(\mathcal{A})$ specifies the action-selection probabilities in each state $s$, and the associated discounted state-action occupancy is defined as $d^{\pi}(s,a) \coloneqq (1-\gamma) \sum_{t=0}^{\infty} \gamma^t \text{Pr}_\pi( s_t = s, a_t = a),$ where the subscript of $\pi$ in $\text{Pr}_{(\cdot)}$ or $\mathbb{E}_{(\cdot)}$ refers to the distribution of trajectories generated as $ s_0\sim \mu_0 $, $a_t\sim\pi(\cdot|s_t)$, $s_{t+1}\sim P(\cdot|s_t,a_t)$ for all $t\geq0$. For brevity, let $d^{\pi}(s)$ denote the discounted state occupancy $\sum_{a\in\mathcal{A}}d^{\pi}(s,a)$.
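In the tabular case, the occupancy just defined can be computed exactly by solving the Bellman flow equations $d(s')=(1-\gamma)\mu_0(s')+\gamma\sum_{s,a}P(s'|s,a)\pi(a|s)d(s)$ as a linear system. A minimal sketch (NumPy); the small MDP instance below is arbitrary and purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 3, 2, 0.9

P = rng.random((nS, nA, nS)); P /= P.sum(-1, keepdims=True)  # P(s'|s,a)
mu0 = rng.random(nS); mu0 /= mu0.sum()                       # initial distribution mu_0
pi = rng.random((nS, nA)); pi /= pi.sum(1, keepdims=True)    # policy pi(a|s)

# state-to-state kernel under pi: P_pi[s, s'] = sum_a pi(a|s) P(s'|s,a)
P_pi = np.einsum('sa,sat->st', pi, P)

# Bellman flow: (I - gamma * P_pi^T) d = (1 - gamma) * mu0
d_state = np.linalg.solve(np.eye(nS) - gamma * P_pi.T, (1 - gamma) * mu0)
d_sa = d_state[:, None] * pi                                 # d^pi(s,a) = d^pi(s) pi(a|s)

assert np.isclose(d_sa.sum(), 1.0)                           # occupancy is a distribution
assert np.all(d_sa >= 0)
```

Since $(I-\gamma P_\pi^{\top})^{-1}=\sum_{t\geq0}\gamma^t (P_\pi^{\top})^t$ is entrywise non-negative, the solution is automatically a valid (non-negative, normalized) occupancy, mirroring the claim that the flow constraints characterize exactly the achievable $d^\pi$.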
A policy $\pi$ is also associated with a value function $V^{\pi}:\mathcal{S}\to\mathbb{R}$ and an action-value (or Q) function $Q^\pi: \mathcal{S}\times\mathcal{A}\to\mathbb{R}$ as follows: $\forall s\in\mathcal{S}, a\in\mathcal{A}$, $\textstyle V^{\pi}(s):=\mathop{\mathbb{E}}_{\pi} \left[\sum_{t=0}^{\infty}\gamma^t r(s_t,a_t) ~\Big\vert~ s_0=s\right], ~~ Q^\pi(s,a) := \mathop{\mathbb{E}}_{\pi}\left[\sum_{t=0}^{\infty}\gamma^t r(s_t,a_t) ~\Big\vert~ s_0=s, a_0 = a\right].$ The goal of RL is to find a policy that maximizes the expected discounted return: \begin{align} \label{prob:original problem} \max_{\pi} J(\pi)= (1-\gamma)\mathbb{E}_{\pi}\left[\sum_{t=0}^{\infty}\gamma^tr(s_t,a_t)\right] = \mathop{\mathbb{E}}_{(s,a)\sim d^{\pi}}[r(s,a)]. \end{align} Alternatively, $J(\pi) =(1-\gamma)V^{\pi}(\mu_0) :=(1-\gamma) \mathbb{E}_{s\sim\mu_0}[V^{\pi}(s)]$. Let $\pi^*$ denote the optimal policy of this unregularized problem (\ref{prob:original problem}). \paragraph{Offline RL.} In offline RL, the agent cannot interact with the environment directly and only has access to a pre-collected dataset $\mathcal{D}=\{(s_i,a_i,r_i,s'_i)\}_{i=1}^n$. We further assume each $(s_i,a_i,r_i,s'_i)$ is i.i.d.~sampled from $(s_i,a_i)\sim d^{D},r_i=r(s_i,a_i),s'_i\sim P(\cdot|s_i,a_i)$ as a standard simplification in theory \citep{nachum2019dualdice,nachum2019algaedice,xie2021bellman,pmlr-v139-xie21d}. We denote the conditional probability $d^D(a|s)$ by $\pi_D(a|s)$ and call $\pi_D$ the behavior policy. However, we do not assume $d^D=d^{\pi_D}$ in most of our results for generality (except for Section~\ref{sec:results constrained}). We also use $d^D(s)$ to represent the marginal distribution of state, i.e., $d^D(s)=\sum_{a\in\mathcal{A}}d^D(s,a)$. In addition, we assume access to a batch of i.i.d. samples $\mathcal{D}_0=\{s_{0,j}\}^{n_0}_{j=1}$ from the initial distribution $\mu_0$.
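The data-generation model above is easy to simulate, which is convenient for testing estimators built on it. A minimal sketch (NumPy) under an arbitrary illustrative MDP and data distribution $d^D$; none of these values come from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, n = 3, 2, 1000

P = rng.random((nS, nA, nS)); P /= P.sum(-1, keepdims=True)  # transition P(s'|s,a)
r = rng.random((nS, nA))                                     # reward in [0, 1]
dD = rng.random((nS, nA)); dD /= dD.sum()                    # data distribution d^D(s,a)

# each tuple: (s_i, a_i) ~ d^D, r_i = r(s_i, a_i), s'_i ~ P(.|s_i, a_i), drawn i.i.d.
idx = rng.choice(nS * nA, size=n, p=dD.ravel())
s, a = np.divmod(idx, nA)
s_next = np.array([rng.choice(nS, p=P[si, ai]) for si, ai in zip(s, a)])
D = list(zip(s.tolist(), a.tolist(), r[s, a].tolist(), s_next.tolist()))

# behavior policy pi_D(a|s) = d^D(s,a) / d^D(s)
pi_D = dD / dD.sum(1, keepdims=True)

assert len(D) == n and np.allclose(pi_D.sum(1), 1.0)
```

Note that nothing here requires $d^D$ to itself be an occupancy $d^{\pi_D}$, matching the generality discussed above.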
\input{notations} \section{Algorithm: \texttt{PRO-RL}\xspace} \label{sec:algorithm} Our algorithm builds on a regularized version of the celebrated LP formulation of MDPs \citep{puterman2014markov}. In particular, consider the following problem: \begin{samepage} \begin{problem*}[Regularized LP]\label{prob:regularized_lp} \vspace{-10pt} \begin{align} &\max_{d \ge 0}\mathbb{E}_{(s,a)\sim d}[r(s,a)]-\alpha\mathbb{E}_{(s,a)\sim d^D}\left[f\left(\frac{d(s,a)}{d^D(s,a)}\right)\right]\label{eq:constrained}\\ &\text{s.t. }d(s)=(1-\gamma)\mu_0(s)+\gamma\sum_{s',a'}P(s|s',a')d(s',a'), \forall s\in\mathcal{S}\label{eq:bellman flow 1} \end{align} where $d\in \mathbb{R}^{|\mathcal{S}\times\mathcal{A}|}$, $d(s) = \sum_{a} d(s, a)$, and $f:\mathbb{R}\to\mathbb{R}$ is a strongly convex and continuously differentiable function serving as a regularizer. \end{problem*} \end{samepage} Without the regularization term, this problem is exactly equivalent to the unregularized problem (\ref{prob:original problem}), as (\ref{eq:bellman flow 1}) characterizes the space of possible discounted occupancies $d^\pi$ that can be induced in this MDP and is often known as the Bellman flow equations. Any non-negative $d$ that satisfies such constraints corresponds to $d^\pi$ for some stationary policy $\pi$. Therefore, once we have obtained the optimum $d^*_{\alpha}$ of the above problem, we can extract the regularized optimal policy $\pi^*_{\alpha}$ via \begin{equation} \label{eq:tilde d tilde w} {\pi^*_{\alpha}}(a|s):= \begin{cases} \frac{{d^*_{\alpha}}(s,a)}{\sum_a {d^*_{\alpha}}(s,a)}, & \text{for } \sum_a d^*_{\alpha}(s,a)>0,\\ \frac{1}{|\mathcal{A}|}, & \text{else.} \end{cases} ~~ \forall s\in\mathcal{S},a\in\mathcal{A}. \end{equation} Turning to the regularizer, $D_f(d\Vert d^D):=\mathbb{E}_{(s,a)\sim d^D}\left[f\left(\frac{d(s,a)}{d^D(s,a)}\right)\right]$ is the $f$-divergence between $d^{\pi}$ and $d^D$.
This practice, often known as behavioral regularization, encourages the learned policy $\pi$ to induce an occupancy $d = d^\pi$ that stays within the data distribution $d^D$, and we will motivate it further using a counterexample against the unregularized algorithm \& analysis at the end of this section. To convert the regularized problem \eqref{eq:constrained}\eqref{eq:bellman flow 1} into a learning algorithm compatible with function approximation, we first introduce the Lagrange multiplier ${v}\in\mathbb{R}^{|\mathcal{S}|}$ to (\ref{eq:constrained})(\ref{eq:bellman flow 1}), and obtain the following maximin problem: \begin{align} \label{prob:maximin} \max_{d\geq0}\min_{{v}}~&\mathbb{E}_{(s,a)\sim d}[r(s,a)]-\alpha\mathbb{E}_{(s,a)\sim d^D}\left[f\left(\frac{d(s,a)}{d^D(s,a)}\right)\right]\notag\\ &+\sum_{s\in\mathcal{S}}{v}(s)\left((1-\gamma)\mu_0(s)+\gamma\sum_{s',a'}P(s|s',a')d(s',a')-d(s)\right). \end{align} Then, by the variable substitution $w(s,a)=\frac{d(s,a)}{d^D(s,a)}$ and replacing summations with the corresponding expectations, we obtain the following problem: \begin{equation} \label{prob:maximin2} \max_{w\geq0}\min_{{v}}{L_{\alpha}}({v},w):=(1-\gamma)\mathbb{E}_{s\sim \mu_0}[{v}(s)]-\alpha\mathbb{E}_{(s,a)\sim d^D}[f(w(s,a))]+\mathbb{E}_{(s,a)\sim d^D}[w(s,a)e_{{v}}(s,a)], \end{equation} where $e_{{v}}(s,a)=r(s,a)+\gamma\sum_{s'}P(s'|s,a){v}(s')-{v}(s)$. The optimum of (\ref{prob:maximin2}), denoted by $({v^*_{\alpha}},{w^*_{\alpha}})$, will be of vital importance later, as our main result relies on the realizability of these two functions $v^*_{\alpha}$ and $w^*_{\alpha}$. When $\alpha= 0$, $v_{0}^*$ is the familiar optimal state-value function $V^{\pi^*}$, and $d_0^* := w_0^* \cdot d^D$ is the discounted occupancy of an optimal policy. Note that optimal policies in MDPs are generally not unique and thus $w^*_0,d^*_0$ are not unique either. We denote the optimal set of $w^*_0$ and $d^*_0$ by $\mathcal{W}^*_0$ and $D^*_0$, respectively.
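For intuition, with the common choice $f(x)=\tfrac12(x-1)^2$ (so $f'(x)=x-1$ and $(f')^{-1}(y)=y+1$), the inner maximization over $w\geq0$ in $L_\alpha(v,w)$ decouples across $(s,a)$ and the pointwise maximizer has the closed form $w(s,a)=\max\{0,\,1+e_v(s,a)/\alpha\}$. A numerical sanity check of this maximizer against a grid search (NumPy; the values of $\alpha$ and $e_v(s,a)$ are arbitrary illustrative choices):

```python
import numpy as np

alpha = 0.5
f = lambda x: 0.5 * (x - 1.0) ** 2        # strongly convex regularizer f

for e in [-2.0, -0.1, 0.3, 1.5]:          # a few values of e_v(s,a)
    g = lambda w: w * e - alpha * f(w)    # pointwise objective in w for fixed (s,a)
    w_closed = max(0.0, 1.0 + e / alpha)  # closed-form maximizer over w >= 0
    grid = np.linspace(0.0, 10.0, 100001)
    w_grid = grid[np.argmax(g(grid))]     # brute-force maximizer on a fine grid
    assert abs(w_closed - w_grid) < 1e-3
```

Note how negative temporal-difference errors $e_v(s,a)$ drive $w(s,a)$ to zero, i.e., the corresponding $(s,a)$ pairs are dropped from the learned occupancy, which is the mechanism behind the KKT case analysis in the appendix.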
Finally, our algorithm simply uses function classes ${\mathcal{V}}\subseteq\mathbb{R}^{|\mathcal{S}|}$ and ${\mathcal{W}}\subseteq\mathbb{R}^{|\mathcal{S}|\times|\mathcal{A}|}_{+}$ to approximate ${v}$ and $w$, respectively, and optimizes the empirical version of $L_{\alpha}(v, w)$ over ${\mathcal{W}} \times \mathcal{V}$. Concretely, we solve for \begin{align} \label{prob:empirical} \textbf{\texttt{PRO-RL}\xspace:} \qquad (\hat{{w}},\hat{v})=\arg\max_{w\in {\mathcal{W}}}\arg\min_{{v}\in {\mathcal{V}}}\hat{L}_{\alpha}({v},w) , \end{align} where $\hat{L}_{\alpha}({v},w):=$ \begin{align}\label{hat L} (1-\gamma)\frac{1}{n_0}\sum_{j=1}^{n_0}[{v}(s_{0,j})]+\frac{1}{n}\sum_{i=1}^n[-\alpha f(w(s_i,a_i))] +\frac{1}{n}\sum_{i=1}^n[w(s_i,a_i)e_{{v}}(s_i,a_i,r_i,s'_i)], \end{align} and $e_{{v}}(s,a,r,s')=r+\gamma {v}(s')-{v}(s)$. The final policy we obtain is \begin{align} \label{eq:hat pi} \hat\pi(a|s)= \begin{cases} \frac{\hat{w}(s,a)\pi_D(a|s)}{\sum_{a'}\hat{w}(s,a')\pi_D(a'|s)}, & \text{for } \sum_{a'}\hat{w}(s,a')\pi_D(a'|s)>0,\\ \frac{1}{|\mathcal{A}|}, & \text{else.} \end{cases} \end{align} We call this algorithm \textbf{P}rimal-dual \textbf{R}egularized \textbf{O}ffline \textbf{R}einforcement \textbf{L}earning (\texttt{PRO-RL}\xspace). For now we assume the behavior policy $\pi_D$ is known; Section~\ref{sec:results BC} extends the main results to the unknown $\pi_D$ setting via behavior cloning. \paragraph{Why behavioral regularization?} While behavioral regularization (the $f$ term) is frequently used in MIS (especially in DICE algorithms \citep{nachum2019algaedice,lee2021optidice}), its theoretical role has been unclear and finite-sample guarantees can often be obtained without it \citep{jiang2020minimax}. For us, however, the use of regularization is crucial in proving our main result (Corollary~\ref{cor:sample unregularized}). Below we construct a counterexample against the unregularized algorithm under the natural ``unregularized'' assumptions.
\begin{figure}[t] \centering \input{fig.tikz} \caption{Construction against the unregularized algorithm under $w_0^* \in \mathcal{W}$ and $v_0^* \in \mathcal{V}$. The construction is given as a 2-stage finite-horizon MDP, and adaptation to the discounted setting is trivial. State A is the initial state with no intermediate rewards. The offline data does not cover state C. Nature can choose between two MDPs that differ in the rewards for state C, and only one of the two actions has a $+1$ reward. \label{fig:counter}} \end{figure} \begin{example} Figure~\ref{fig:counter} shows a counterexample where the unregularized algorithm fails even with infinite data and the natural assumptions that (1) there exists a $w_0^*\in\mathcal{W}^*_0$ such that $w_0^*\in \mathcal{W}$, (2) $v_0^* \in \mathcal{V}$, and (3) the data covers the optimal policy induced by $w_0^*$. In state A, both actions are equally optimal. However, since the data does not cover the actions of state C, the learner should not take \texttt{R} in state A, as it can end up choosing a highly suboptimal action in state C with constant probability if nature randomizes over the two possible MDP instances. We now show that the unregularized algorithm (\eqref{prob:empirical} with $\alpha=0$) can choose \texttt{R} in state A, even with infinite data and ``nice'' $d^D$, $\mathcal{V}$, $\mathcal{W}$. In particular, the two possible MDPs share the same optimal value function $v_0^*(A) = v_0^*(B) = v_0^*(C) = 1$, which is the only function in $\mathcal{V}$, so we always have $v_0^* \in \mathcal{V}$. $d^D$ covers state-action pairs (A, \texttt{L}), (A, \texttt{R}), B. $\mathcal{W}$ also contains two functions: $w_1$ is such that $w_1 \cdot d^D$ is uniform over (A, \texttt{L}), B, which is the occupancy of the optimal policy $\pi^*(A) = \texttt{L}$. $w_2$ is such that $w_2 \cdot d^D$ is uniform over (A, \texttt{R}), B, which induces a policy that chooses \texttt{R} in state A.
However, the unregularized algorithm cannot distinguish between $w_1$ and $w_2$ even with infinite data (i.e., with the population objective $L_0(v, w)$). This is because $w_1$ and $w_2$ only differ in the action choice in state A, but $v_0^*(B) = v_0^*(C) = 1$, so the unregularized objective is the same for $w_1$ and $w_2$. \end{example} \section{Main results} \label{sec:results} In this section we present the main sample-complexity guarantees of our algorithm under only realizability assumptions for $\mathcal{V}$ and $\mathcal{W}$ and single-policy concentrability of data. We will start with the analyses that assume perfect optimization and that the behavior policy $\pi_D$ is known (Section~\ref{sec:results regularized}), allowing us to present the result in a clean manner. We then extend our analyses in several directions: Section~\ref{sec:results agnostic} handles approximation and optimization errors; Section~\ref{sec:results constrained} removes the concentrability assumption altogether and allows us to compete with the best covered policy; Section~\ref{sec:results BC} uses behavior cloning to handle an unknown behavior policy. \subsection{Sample-efficiency with only realizability and weak concentrability} \label{sec:results regularized} We introduce the needed assumptions before stating the sample-efficiency guarantees for our algorithm. The first assumption concerns data coverage: the data should cover the occupancy induced by a (possibly regularized) optimal policy. \begin{assumption}[$\pi^*_{\alpha}$-concentrability] \label{ass:concentrability} \begin{equation} \frac{{d^*_{\alpha}}(s,a)}{d^D(s,a)}\leq B_{w,\alpha}, \forall s\in\mathcal{S},a\in\mathcal{A}.
\end{equation} \end{assumption} Two remarks are in order: \begin{enumerate}[leftmargin=*,itemsep=0pt] \item Assumption~\ref{ass:concentrability} is parameterized by $\alpha$, and we will bind it to specific values when we state the guarantees. \item This assumption is necessary if we want to compete with the optimal policy of the MDP, $\pi^*$, and is already much weaker than all-policy concentrability \citep{munos2008finite, farahmand2010error, chen2019information}. That said, ideally we should not even need such an assumption, as long as we are willing to compete with the best policy covered by data instead of the truly optimal policy \citep{liu2020provably, xie2021bellman}. We will actually show how to achieve this in Section~\ref{sec:results constrained}. \end{enumerate} We then introduce the realizability assumptions on our function approximators $\mathcal{V}$ and $\mathcal{W}$, which are very straightforward. For now we assume exact realizability, and Section~\ref{sec:results agnostic} handles misspecification errors. \begin{assumption}[Realizability of ${\mathcal{V}}$] \label{ass:V realize} Suppose ${v^*_{\alpha}}\in {\mathcal{V}}$. \end{assumption} \begin{assumption}[Realizability of ${\mathcal{W}}$] \label{ass:W realize} Suppose ${w^*_{\alpha}}\in {\mathcal{W}}$. \end{assumption} The above three assumptions are the major assumptions we need. (The rest are standard technical assumptions on boundedness.) Comparing them to existing results, we emphasize that all existing analyses require ``$\forall$'' quantifiers in the assumptions, either about the data (e.g., all-policy concentrability) or about the function classes (e.g., Bellman-completeness). See Table~\ref{tab:main results} for a comparison to various approaches considered in the literature. Having stated the major assumptions, we now turn to the routine ones on function boundedness.
\begin{assumption}[Boundedness of ${\mathcal{W}}$] \label{ass:W bound} Suppose $0\leq w(s,a)\leq {B_{w,\alpha}}$ for any $s\in\mathcal{S}, a\in\mathcal{A}, w\in {\mathcal{W}}$. \end{assumption} Here we reuse $B_{w,\alpha}$ from Assumption~\ref{ass:concentrability}. Since $d^*_{\alpha}/d^D = w^*_{\alpha} \in \mathcal{W}$ by Assumption~\ref{ass:W realize}, in general the magnitude of $\mathcal{W}$ should be larger than that of $d^*_{\alpha}/d^D$, and we use the same upper bound to eliminate unnecessary notation and improve readability. The next assumption characterizes the regularizer $f$. These are not really assumptions, as we can make concrete choices of $f$ that satisfy them (e.g., a simple quadratic function; see Remark~\ref{rem:sample regularized}), but for now we leave them as assumptions to keep the analysis general. \begin{assumption}[Properties of $f$] \label{ass:f prop} Suppose $f$ satisfies the following properties: \begin{itemize} \item \textbf{Strong Convexity}: $f$ is $M_f$-strongly-convex. \item \textbf{Boundedness}: \begin{align} &|f'(x)|\leq {B_{f',\alpha}},\quad\forall\, 0\leq x\leq {B_{w,\alpha}},\\ &|f(x)|\leq {B_{f,\alpha}},\quad\forall\, 0\leq x\leq {B_{w,\alpha}}. \end{align} \item \textbf{Non-negativity}: $f(x)\geq0$ for any $x\in\mathbb{R}$. \end{itemize} \end{assumption} \begin{remark} Non-negativity is easy to ensure: since $f$ is strongly convex, we can always add a constant to make it hold. Moreover, we can dispense with non-negativity altogether using the results in Section~\ref{sec:results constrained}.
\end{remark} Assumption~\ref{ass:f prop} allows us to bound $\|v^*_{\alpha}\|_\infty \le \frac{\alpha {B_{f',\alpha}}+1}{1-\gamma}$ (see Lemma~\ref{lem:bound tilde nu} in Section~\ref{sec:analysis}); in the same spirit as Assumption~\ref{ass:W bound}, we assume: \begin{assumption}[Boundedness of ${\mathcal{V}}$] \label{ass:V bound} Suppose $\Vert v\Vert_{\infty}\leq {B_{v,\alpha}}:=\frac{\alpha {B_{f',\alpha}}+1}{1-\gamma}$ for any $v\in {\mathcal{V}}$. \end{assumption} With the above assumptions, we have Theorem~\ref{thm:sample regularized} to show that \texttt{PRO-RL}\xspace can learn the optimal density ratio and policy for the regularized problem \eqref{eq:constrained}\eqref{eq:bellman flow 1} with polynomial samples, whose proof is deferred to Section~\ref{sec:analysis}. To simplify writing, we introduce the following notation for the statistical error term that arises purely from concentration inequalities: \begin{definition} \begin{align} &\mathcal{E}_{n,n_0,\alpha}(B_w,B_f,B_v,B_e)=(1-\gamma)B_v\cdot\left(\frac{2\log\frac{4|{\mathcal{V}}|}{\delta}}{n_0}\right)^{\frac{1}{2}}+\left(\alpha B_f+{B_w}B_e\right)\cdot\left(\frac{2\log\frac{4|{\mathcal{V}}||{\mathcal{W}}|}{\delta}}{n}\right)^{\frac{1}{2}}. \end{align} \end{definition} $\mathcal{E}$ characterizes the statistical error $\hat{L}_{\alpha}(v,w)-L_{\alpha}(v,w)$ based on concentration inequalities, and the two terms in its definition correspond to using $\mathcal{D}_0$ to approximate $(1-\gamma)\mathbb{E}_{s\sim \mu_0}[{v}(s)]$ and $\mathcal{D}$ for $-\alpha\mathbb{E}_{(s,a)\sim d^D}[f(w(s,a))]+\mathbb{E}_{(s,a)\sim d^D}[w(s,a)e_{{v}}(s,a)]$, respectively. Using this shorthand, we state our first guarantee, that the learned $\hat w$ and the extracted policy $\hat\pi$ will be close to the solution of the regularized problem \eqref{eq:constrained}\eqref{eq:bellman flow 1}, $w^*_{\alpha}$ and $\pi^*_{\alpha}$, respectively. 
\begin{theorem}[Sample complexity of learning $\pi_\alpha^*$] \label{thm:sample regularized} Fix $\alpha>0$. Suppose Assumptions~\ref{ass:concentrability},\ref{ass:V realize},\ref{ass:W realize},\ref{ass:W bound},\ref{ass:f prop},\ref{ass:V bound} hold for the said $\alpha$. Then with probability at least $1-\delta$, the output of \texttt{PRO-RL}\xspace satisfies: \begin{align} &J(\pi^*_{\alpha})-J(\hat{\pi})\leq\frac{1}{1-\gamma}\mathbb{E}_{s\sim {d^*_{\alpha}}}[\Vert{\pi^*_{\alpha}}(\cdot|s)-\hat{\pi}(\cdot|s)\Vert_1]\notag\\ &\leq\frac{2}{1-\gamma}\Vert \hat{w}-{w^*_{\alpha}}\Vert_{2,d^D}\leq\frac{4}{1-\gamma}\sqrt{\frac{\mathcal{E}_{n,n_0,\alpha}(B_{w,\alpha},B_{f,\alpha},B_{v,\alpha},B_{e,\alpha})}{\alpha M_f}},\label{eq:thm1 1} \end{align} where ${B_{e,\alpha}}:=(1+\gamma){B_{v,\alpha}}+1$. \end{theorem} \begin{remark}[Sample complexity for quadratic regularization] \label{rem:sample regularized} Theorem~\ref{thm:sample regularized} shows that \texttt{PRO-RL}\xspace can obtain a near-optimal policy for the regularized problem \eqref{eq:constrained}\eqref{eq:bellman flow 1} with sample complexity $O(n_0+n)=\tilde{O}\left(\frac{(\alpha {B_{f,\alpha}}+{B_{w,\alpha}}{B_{e,\alpha}})^2}{(1-\gamma)^4(\alpha M_f)^2\epsilon^4}\right)$. However, there might be implicit dependence on $1-\gamma,\alpha M_f,{B_{w,\alpha}}$ in the constant ${B_{e,\alpha}}$. To reveal these terms, we consider the simple choice $f(x)=\frac{M_f}{2}x^2$. Then we have ${B_{e,\alpha}}=O(\frac{\alpha M_f({B_{w,\alpha}})^2+{B_{w,\alpha}}}{1-\gamma}),{B_{f,\alpha}}=O(\alpha M_f({B_{w,\alpha}})^2)$, leading to a sample complexity of $\tilde{O}\Big(\frac{({B_{w,\alpha}})^2}{(1-\gamma)^6(\alpha M_f)^2\epsilon^4}+\frac{({B_{w,\alpha}})^4}{(1-\gamma)^6\epsilon^4}\Big)$. \end{remark} Moreover, \texttt{PRO-RL}\xspace can even learn a near-optimal policy for the unregularized problem (\ref{prob:original problem}) efficiently by controlling the magnitude of $\alpha$.
Corollary~\ref{cor:sample unregularized} characterizes the sample complexity of \texttt{PRO-RL}\xspace for the unregularized problem \eqref{prob:original problem} without any approximation/optimization error: \begin{corollary}[Sample complexity of competing with $\pi_0^*$] \label{cor:sample unregularized} Fix any $\epsilon>0$. Suppose there exists $d^*_0\in D^*_0$ that satisfies Assumption~\ref{ass:concentrability} with $\alpha=0$. Besides, assume that Assumptions~\ref{ass:concentrability},\ref{ass:V realize},\ref{ass:W realize},\ref{ass:W bound},\ref{ass:f prop},\ref{ass:V bound} hold for $\alpha=\alpha_{\epsilon}:=\frac{\epsilon}{2{B_{f,0}}}$. Then if \begin{align} &n\geq\frac{C_1\left(\epsilon {B_{f,\alpha_{\epsilon}}}+2{B_{w,\alpha_{\epsilon}}}{B_{e,\alpha_{\epsilon}}}{B_{f,0}}\right)^2}{\epsilon^6M_f^2(1-\gamma)^4}\cdot\log\frac{4|\mathcal{V}||\mathcal{W}|}{\delta}, \\ &n_0\geq\frac{C_1\left(2{B_{v,\alpha_{\epsilon}}}{B_{f,0}}\right)^2}{\epsilon^6M_f^2(1-\gamma)^2}\cdot\log\frac{4|\mathcal{V}|}{\delta}, \end{align} the output of \texttt{PRO-RL}\xspace with input $\alpha=\alpha_{\epsilon}$ satisfies \begin{equation} J(\pi^*_0)-J(\hat{\pi})\leq\epsilon, \end{equation} with probability at least $1-\delta$, where $C_1$ is a universal positive constant and $\pi^*_0$ is the optimal policy inducing $d^*_0$. \end{corollary} \begin{proof}[Proof sketch] The key idea is to let $\alpha$ be sufficiently small so that $J({\pi^*_{\alpha}})$ and $J({\pi^*_0})$ are close. Then we can simply apply Theorem~\ref{thm:sample regularized} and bound $J(\hat{\pi})-J({\pi^*_{\alpha}})$. See Appendix~\ref{proof:cor sample unregularized} for details. \end{proof} \begin{remark}[Quadratic regularization]\label{rem:sample_unreg} Similarly to Remark~\ref{rem:sample regularized}, the sample complexity of competing with $\pi^*_{\alpha_{\epsilon}}$ under quadratic $f$ is $\tilde{O}\left(\frac{({B_{w,0}})^4({B_{w,\alpha_{\epsilon}}})^2}{\epsilon^6(1-\gamma)^{6}}\right)$.
\end{remark} \texttt{PRO-RL}\xspace is originally designed for the regularized problem. Therefore, when applying it to the unregularized problem, the sample complexity degrades from $\tilde{O}\left(\frac{1}{\epsilon^4}\right)$ to $\tilde{O}\left(\frac{1}{\epsilon^6}\right)$. However, the sample complexity remains polynomial in all relevant quantities. Compared to Theorem~\ref{thm:sample regularized}, Corollary~\ref{cor:sample unregularized} requires concentrability for policy $\pi^*_0$ in addition to $\pi^*_{\alpha_{\epsilon}}$, so technically we require ``two-policy'' instead of single-policy concentrability for now. While this is still much weaker than all-policy concentrability \citep{chen2019information}, we show in Section~\ref{sec:results constrained} how to compete with $\pi^*_0$ with only single-policy concentrability. \begin{remark} When $\epsilon$ shrinks, the realizability assumptions for Corollary~\ref{cor:sample unregularized} also need to hold for regularized solutions with smaller $\alpha$. That said, in the following discussion (Proposition~\ref{prop:LP stability}), we will show that when $\epsilon$ is sufficiently small, the realizability assumptions are with respect to the unregularized solutions. \end{remark} \paragraph{Comparison with existing algorithms.} Theorem~\ref{thm:sample regularized} and Corollary~\ref{cor:sample unregularized} show that \texttt{PRO-RL}\xspace obtains a near-optimal policy for the regularized problem \eqref{eq:constrained}\eqref{eq:bellman flow 1} and the unregularized problem \eqref{prob:original problem} using polynomially many samples, with only realizability and weak data-coverage assumptions. The literature has demonstrated the hardness of offline RL, and existing algorithms either rely on completeness assumptions \citep{xie2020q,xie2021bellman,du2021bilinear} or on extremely strong data assumptions \citep{pmlr-v139-xie21d}.
Our results show for the first time that offline RL problems can be solved using a polynomial number of samples without these assumptions. \paragraph{High accuracy regime ($\epsilon\to0$).} Corollary~\ref{cor:sample unregularized} requires weak concentrability and realizability with respect to the optimizers of the regularized problem \eqref{eq:constrained}\eqref{eq:bellman flow 1}. A natural question is whether concentrability and realizability can instead be with respect to the optimizer of the unregularized problem, $(v^*_{0},w^*_{0})$. Inspired by the stability of linear programming \citep{mangasarian1979nonlinear}, we identify the high accuracy regime ($\epsilon\to0$) in which concentrability and realizability with respect to $w^*_{0}$ guarantee that \texttt{PRO-RL}\xspace outputs an $\epsilon$-optimal policy, as shown in the following proposition: \begin{proposition} \label{prop:LP stability} There exists $\bar{\alpha}>0$ and $w^*\in \mathcal{W}^*_0$ such that when $\alpha\in[0,\bar{\alpha}]$ we have \begin{equation} w^*_{\alpha}=w^*,\quad\Vert v^*_{\alpha}-v^*_{0}\Vert_{2,d^D}\leq C\alpha, \end{equation} where $C=\frac{B_{f',0}+\frac{2}{\bar{\alpha}}}{1-\gamma}$. \end{proposition} \begin{proof} $w^*$ is indeed the solution of $\arg\max_{w\in {\mathcal{W}^*_0}}-\alpha\mathbb{E}_{(s,a)\sim d^D}[f(w(s,a))]$, and it can be shown that $w^*$ satisfies the KKT conditions of the regularized problem \eqref{eq:constrained}\eqref{eq:bellman flow 1} for sufficiently small $\alpha$. See Appendix~\ref{proof:prop LP stability} for details. \end{proof} Here $\bar{\alpha}$ is a value that depends only on the underlying MDP, not on $\epsilon$. Proposition~\ref{prop:LP stability} essentially indicates that when $\epsilon\to0$, $w^*_{\alpha_{\epsilon}}$ is exactly the unregularized optimum $w^*$, and $v^*_{\alpha_{\epsilon}}$ is $O(\epsilon)$ away from $v^*_{0}$.
Combining with Corollary~\ref{cor:sample unregularized}, we know that an $\epsilon$-optimal policy can be learned by \texttt{PRO-RL}\xspace if concentrability holds for $\pi^*_0$ and $\mathcal{W}$ contains $w^*$. \subsection{Policy extraction via behavior cloning} \label{sec:results BC} In this section we consider an \textit{unknown} behavior policy $\pi_D$. Notice that the only place we require $\pi_D$ in our algorithm is the policy extraction step, where we compute $\hat\pi$ from $\hat w$ using knowledge of $\pi_D$. Inspired by the imitation learning literature \citep{pomerleau1989alvinn,ross2014reinforcement,agarwal2020flambe}, we will use behavior cloning to compute a policy $\bar{\pi}$ to approximate $\hat{\pi}$, where $\hat\pi$ is not directly available and only implicitly defined via $\hat{w}$ and the data. As is standard in the literature \citep{ross2014reinforcement,agarwal2020flambe}, we utilize a policy class $\Pi$ to approximate the target policy. We suppose $\Pi$ is realizable: \begin{assumption}[Realizability of $\Pi$] \label{ass:pi realize} Assume $\pi^*_{\alpha}\in\Pi$. \end{assumption} One may be tempted to assume $\hat\pi \in \Pi$, since $\hat\pi$ is the target of imitation, but $\hat{\pi}$ is a function of the data and hence random. A standard way of ``determinizing'' such an assumption is to assume the realizability of $\Pi$ for \textit{all possible} $\hat\pi$ that can be induced by any $w\in \mathcal{W}$, which leads to a prohibitive ``completeness''-type assumption. Fortunately, as we have seen in previous sections, $\hat{\pi}$ will be close to ${\pi^*_{\alpha}}$ when learning succeeds, so the realizability of $\pi^*_{\alpha}$---a policy whose definition does not depend on data randomness---suffices for our purposes.
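Since the extraction step \eqref{eq:hat pi} is the only place $\pi_D$ enters, it is worth seeing how little it involves; a minimal numpy sketch with made-up toy arrays (the uniform fallback handles states where the normalizer vanishes):

```python
import numpy as np

def extract_policy(w_hat, pi_D):
    """pi_hat(a|s) proportional to w_hat(s,a) * pi_D(a|s), with a uniform
    fallback on states where the normalizer is zero. Shapes: (S, A)."""
    unnorm = w_hat * pi_D
    z = unnorm.sum(axis=1, keepdims=True)
    uniform = np.full_like(pi_D, 1.0 / pi_D.shape[1])
    return np.where(z > 0, unnorm / np.where(z > 0, z, 1.0), uniform)

w_hat = np.array([[2.0, 0.0],    # state 0: all density ratio on action 0
                  [0.0, 0.0]])   # state 1: zero normalizer
pi_D = np.array([[0.5, 0.5],
                 [0.5, 0.5]])
print(extract_policy(w_hat, pi_D))  # [[1.0, 0.0], [0.5, 0.5]]
```

Without $\pi_D$ (or an estimate of it), the normalization above cannot be carried out, which is what motivates the behavior cloning route below.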
In the rest of this section, we design a novel behavior cloning algorithm which is more robust than the classic maximum likelihood estimation procedure \citep{pomerleau1989alvinn,ross2014reinforcement,agarwal2020flambe}. In MLE behavior cloning, the KL divergence between the target policy and the policy class needs to be bounded, while in our algorithm we only require the weighted $\ell_1$ distance to be bounded. This property is important in our setting, as \texttt{PRO-RL}\xspace can only guarantee a small weighted $\ell_2$ distance between $\pi^*_{\alpha}$ and $\hat{\pi}$; the $\ell_2$ distance is stronger than $\ell_1$ but weaker than KL divergence. Our behavior cloning algorithm is inspired by the algorithms in \cite{sun2019provably,agarwal2019reinforcement}, which require access to $d^{\pi}$ for all $\pi\in\Pi$---a requirement not satisfied in our setting. However, the idea of estimating the total variation distance via its variational form turns out to be useful. More concretely, for any two policies $\pi$ and $\pi'$, define: \begin{equation} \label{defn:f pi pi'} h^s_{\pi,\pi'}\coloneqq\arg\max_{h:\Vert h\Vert_{\infty}\leq 1}[\mathbb{E}_{a\sim\pi(\cdot|s)} h(a)-\mathbb{E}_{a\sim\pi'(\cdot|s)} h(a)]. \end{equation} Let $h_{\pi,\pi'}(s,a)=h^s_{\pi,\pi'}(a),\forall s,a$. Note that the function $h_{\pi, \pi'}$ is purely a function of $\pi$ and $\pi'$ and does not depend on the data or the MDP, and hence can be computed exactly even before we see the data. Such a function witnesses the $\ell_1$ distance between $\pi$ and $\pi'$, as shown in the following lemma; see proof in Appendix~\ref{proof:lem variation}: \begin{lemma} \label{lem:variation} For any distribution $d$ on $\mathcal{S}$ and policies $\pi,\pi'$, we have: \begin{equation} \mathbb{E}_{s\sim d}[\Vert\pi(\cdot|s)-\pi'(\cdot|s)\Vert_1]=\mathbb{E}_{s\sim d}\left[\mathbb{E}_{a\sim\pi(\cdot|s)}[h_{\pi,\pi'}(s,a)]-\mathbb{E}_{a\sim\pi'(\cdot|s)}[h_{\pi,\pi'}(s,a)]\right].
\end{equation} \end{lemma} Inspired by Lemma~\ref{lem:variation}, we can estimate the total variation distance between $\pi$ and $\pi'$ by evaluating $\mathbb{E}_{a\sim\pi(\cdot|s)}[h_{\pi,\pi'}(s,a)]-\mathbb{E}_{a\sim\pi'(\cdot|s)}[h_{\pi,\pi'}(s,a)]$ empirically. Let $\mathcal{H} := \{h_{\pi,\pi'}: \pi, \pi' \in \Pi\}$, so that $|\mathcal{H}|\leq|\Pi|^2$. We divide $\mathcal{D}$ into $\mathcal{D}_1$ and $\mathcal{D}_2$, where $\mathcal{D}_1$ is utilized for estimating $\hat{w}$ and $\mathcal{D}_2$ for obtaining $\bar{\pi}$. Let $n_1$ and $n_2$ denote the number of samples in $\mathcal{D}_1$ and $\mathcal{D}_2$. Then our behavior cloning algorithm is based on the following objective function, whose expectation is $\mathbb{E}_{s\sim \hat{d},a\sim\hat{\pi}}[h^{\pi}(s)-h(s,a)]$, which by Lemma~\ref{lem:variation} is exactly the total variation distance between $\hat{\pi}$ and $\pi$: \begin{equation} \label{BC obj} \bar{\pi}=\arg\min_{\pi\in\Pi}\max_{h\in\mathcal{H}}[\sum_{i=1}^{n_2}\hat{w}(s_i,a_i)\left(h^{\pi}(s_i)-h(s_i,a_i)\right)], \end{equation} where $(s_i,a_i)\in\mathcal{D}_2,\forall 1\leq i\leq n_2$, $h^{\pi}(s)=\mathbb{E}_{a\sim\pi(\cdot|s)}[h(s,a)]$, and $\bar{\pi}$ is the ultimate output policy. It can be observed that (\ref{BC obj}) is the importance-sampling version of \begin{equation} \mathbb{E}_{s\sim \hat{d}}[\mathbb{E}_{a\sim\pi(\cdot|s)}[h(s,a)]-\mathbb{E}_{a\sim\hat{\pi}(\cdot|s)}[h(s,a)]]. \end{equation} Since $\hat{d}$ is close to $d^*_{\alpha}$, by minimizing (\ref{BC obj}) we can find a policy that approximately minimizes $\mathbb{E}_{s\sim d^*_{\alpha}}[\Vert\pi(\cdot|s)-\hat{\pi}(\cdot|s)\Vert_1]$. We call \texttt{PRO-RL}\xspace equipped with this behavior cloning step \texttt{PRO-RL-BC}\xspace. Theorem~\ref{thm:sample regularized BC} shows that \texttt{PRO-RL-BC}\xspace attains almost the same sample complexity as \texttt{PRO-RL}\xspace in Theorem~\ref{thm:sample regularized}, where $\pi_D$ is known.
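The witness in \eqref{defn:f pi pi'} has the closed form $h^s_{\pi,\pi'}(a)=\operatorname{sign}(\pi(a|s)-\pi'(a|s))$, and the identity in Lemma~\ref{lem:variation} is easy to check numerically; a quick illustrative sketch for a single state:

```python
import numpy as np

rng = np.random.default_rng(0)
num_actions = 5
pi = rng.dirichlet(np.ones(num_actions))    # pi(.|s) for a fixed state s
pi2 = rng.dirichlet(np.ones(num_actions))   # pi'(.|s)

h = np.sign(pi - pi2)             # the maximizing witness, |h(a)| <= 1
variational = pi @ h - pi2 @ h    # E_{a~pi}[h] - E_{a~pi'}[h]
ell1 = np.abs(pi - pi2).sum()     # || pi(.|s) - pi'(.|s) ||_1
print(variational, ell1)          # the two quantities coincide
```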
\begin{theorem}[Sample complexity of learning $\pi^*_{\alpha}$ with unknown behavior policy] \label{thm:sample regularized BC} Fix $\alpha>0$. Suppose Assumptions~\ref{ass:concentrability},\ref{ass:V realize},\ref{ass:W realize},\ref{ass:W bound},\ref{ass:f prop},\ref{ass:V bound} and \ref{ass:pi realize} hold. Then with probability at least $1-\delta$, the output of \texttt{PRO-RL-BC}\xspace satisfies: \begin{align} &J(\pi^*_{\alpha})-J(\bar{\pi})\leq\frac{1}{1-\gamma}\mathbb{E}_{s\sim {d^*_{\alpha}}}[\Vert{\pi^*_{\alpha}}(\cdot|s)-\bar{\pi}(\cdot|s)\Vert_1]\notag\\ &\leq\frac{4{B_{w,\alpha}}}{1-\gamma}\sqrt{\frac{6\log\frac{4|\Pi|}{\delta}}{n_2}}+\frac{50}{1-\gamma}\sqrt{\frac{\mathcal{E}_{n_1,n_0,\alpha}(B_{w,\alpha},B_{f,\alpha},B_{v,\alpha},B_{e,\alpha})}{\alpha M_f}}, \end{align} where ${B_{e,\alpha}}$ is defined as in Theorem~\ref{thm:sample regularized}. \end{theorem} \begin{proof} See Appendix~\ref{proof:thm sample regularized BC} for details. \end{proof} \begin{remark} Notice that the error scales with $O(\frac{1}{\sqrt{n_2}})$ and $O(\frac{1}{n_1^{\frac{1}{4}}})$, which means that the extra samples required by behavior cloning only affect the higher-order terms. Therefore the total sample complexity $n=n_1+n_2$ is dominated by $n_1$, which coincides with the sample complexity in Theorem~\ref{thm:sample regularized}. \end{remark} Similarly, behavior cloning can be extended to the unregularized setting where we compete with $\pi^*_0$, and the sample complexity remains almost the same as in Corollary~\ref{cor:sample unregularized}: \begin{corollary} \label{cor:sample unregularized BC} Fix any $\epsilon>0$. Suppose there exists $d^*_0\in D^*_0$ that satisfies Assumption~\ref{ass:concentrability} with $\alpha=0$. Besides, assume that Assumptions~\ref{ass:concentrability},\ref{ass:V realize},\ref{ass:W realize},\ref{ass:W bound},\ref{ass:f prop},\ref{ass:V bound} and \ref{ass:pi realize} hold for $\alpha=\alpha_{\epsilon}$.
Then if \begin{align} &n_0\geq C_2\cdot\frac{\left(2{B_{v,\alpha_{\epsilon}}}{B_{f,0}}\right)^2}{\epsilon^6M_f^2(1-\gamma)^2}\cdot\log\frac{4|\mathcal{V}|}{\delta},\\ &n_1\geq C_3\cdot\frac{\left(\epsilon B_{f,\alpha_{\epsilon}}+2B_{w,\alpha_{\epsilon}}B_{e,\alpha_{\epsilon}}B_{f,0}\right)^2}{\epsilon^6M_f^2(1-\gamma)^4}\cdot\log\frac{|{\mathcal{V}}||{\mathcal{W}}|}{\delta}, \\ &n_2\geq C_4\cdot\frac{(B_{w,\alpha_{\epsilon}})^2}{(1-\gamma)^2\epsilon^2}\log\frac{|\Pi|}{\delta}, \end{align} where $C_2,C_3,C_4$ are universal positive constants, the output of \texttt{PRO-RL-BC}\xspace with input $\alpha=\alpha_{\epsilon}$ satisfies \begin{equation} J(\pi^*_0)-J(\bar{\pi})\leq \epsilon, \end{equation} with probability at least $1-\delta$. \end{corollary} \begin{proof} The proof is the same as in Appendix~\ref{proof:cor sample unregularized}. The only difference is that we replace the result of Theorem~\ref{thm:sample regularized} with Theorem~\ref{thm:sample regularized BC}. \end{proof} \begin{remark} The sample complexity to obtain an $\epsilon$-optimal policy is still $\tilde{O}\left(\frac{(B_{w,0})^4(B_{w,\alpha_{\epsilon}})^2}{\epsilon^6(1-\gamma)^{6}}\right)$, since $n_2$ is negligible compared to $n_1$. \end{remark} \begin{remark} Similar to Corollary~\ref{cor:sample unregularized}, the concentrability assumptions in Corollary~\ref{cor:sample unregularized BC} can be reduced to single-policy concentrability with the help of Corollary~\ref{cor:sample unregularized constrained}.
\end{remark} \subsection{Robustness to approximation and optimization errors} \label{sec:results agnostic} In this section we consider the setting where ${\mathcal{V}}\times {\mathcal{W}}$ may not contain $({v^*_{\alpha}},{w^*_{\alpha}})$ and measure the approximation errors as follows: \begin{align} &\epsilon_{\alpha,r,v}=\min_{{v}\in {\mathcal{V}}}\Vert {v}-{v^*_{\alpha}}\Vert_{1,\mu_0}+\Vert {v}-{v^*_{\alpha}}\Vert_{1,d^D}+\Vert {v}-{v^*_{\alpha}}\Vert_{1,d^{D'}},\\ &\epsilon_{\alpha,r,w}=\min_{w\in {\mathcal{W}}}\Vert w-{w^*_{\alpha}}\Vert_{1,d^D}, \end{align} where $d^{D'}(s)=\sum_{s',a'}d^D(s',a')P(s|s',a'),\forall s\in\mathcal{S}$. Notice that our definitions of approximation errors are all in the $\ell_1$ norm, which is weaker than the $\ell_\infty$ norm. Besides, to make our algorithm practical, we only assume $(\hat{{v}},\hat{w})$ is an approximate solution of $\hat{L}_{\alpha}({v},w)$: \begin{align} &\hat{L}_{\alpha}(\hat{{v}},\hat{w})-\min_{{v}\in {\mathcal{V}}}\hat{L}_{\alpha}({v},\hat{w})\leq\epsilon_{o,{v}},\label{AU-requirement-1}\\ &\max_{w\in {\mathcal{W}}}\min_{{v}\in {\mathcal{V}}}\hat{L}_{\alpha}({v},w)-\min_{{v}\in {\mathcal{V}}}\hat{L}_{\alpha}({v},\hat{w})\leq\epsilon_{o,w}.\label{AU-requirement-2} \end{align} Equation \eqref{AU-requirement-1} says that $\hat L_\alpha (\hat v, \hat w) \approx \min_v\hat L_\alpha (v, \hat w)$. Equation \eqref{AU-requirement-2} says that $\min_v \hat L_\alpha(v, \hat w)\approx \max_{w\in {\mathcal{W}}}\min_{{v}\in {\mathcal{V}}}\hat{L}_{\alpha}({v},w)$. Combining these gives $\hat L_\alpha( \hat v, \hat w) \approx \max_{w\in\mathcal{W}}\min_{v\in\mathcal{V}}\hat L_\alpha (v,w)$, so $(\hat v, \hat w)$ is approximately a max-min point. In this case we call the algorithm \texttt{Inexact-PRO-RL}\xspace.
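For finite classes, the gaps in \eqref{AU-requirement-1} and \eqref{AU-requirement-2} can be computed by enumeration; the following toy sketch (the payoff matrix is made up) confirms that the exact max--min solution has $\epsilon_{o,v}=\epsilon_{o,w}=0$:

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.normal(size=(4, 3))   # L[i, j] stands in for hat L_alpha(v_j, w_i)

i_hat = int(np.argmax(L.min(axis=1)))   # w_hat: maximizes the inner minimum
j_hat = int(np.argmin(L[i_hat]))        # v_hat: best response to w_hat

eps_o_v = L[i_hat, j_hat] - L[i_hat].min()      # gap in (AU-requirement-1)
eps_o_w = L.min(axis=1).max() - L[i_hat].min()  # gap in (AU-requirement-2)
print(eps_o_v, eps_o_w)   # both vanish for the exact solution
```

Any optimizer that leaves these two gaps small (rather than exactly zero) satisfies the requirements above.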
Theorem~\ref{thm:sample regularized approximate} shows that \texttt{Inexact-PRO-RL}\xspace is also capable of learning a near-optimal policy with polynomial sample size: \begin{theorem}[Error-robust version of Theorem~\ref{thm:sample regularized}] \label{thm:sample regularized approximate} Fix $\alpha>0$. Suppose Assumptions~\ref{ass:concentrability},\ref{ass:W bound},\ref{ass:f prop},\ref{ass:V bound} hold. Then with probability at least $1-\delta$, the output of \texttt{Inexact-PRO-RL}\xspace satisfies: \begin{align} &J(\pi^*_{\alpha})-J(\hat{\pi})\leq\frac{1}{1-\gamma}\mathbb{E}_{s\sim {d^*_{\alpha}}}[\Vert{\pi^*_{\alpha}}(\cdot|s)-\hat{\pi}(\cdot|s)\Vert_1]\leq \frac{2}{1-\gamma}\Vert \hat{w}-{w^*_{\alpha}}\Vert_{2,d^D}\notag\\ &\leq\frac{4}{1-\gamma}\sqrt{\frac{\mathcal{E}_{n,n_0,\alpha}(B_{w,\alpha},B_{f,\alpha},B_{v,\alpha},B_{e,\alpha})}{\alpha M_f}}+\frac{2}{1-\gamma}\sqrt{\frac{2(\epsilon_{opt}+\epsilon_{\alpha,{app}})}{\alpha M_f}}, \end{align} where ${B_{e,\alpha}}$ is defined as in Theorem~\ref{thm:sample regularized}, $\epsilon_{{opt}}=\epsilon_{o,{v}}+\epsilon_{o,w}$ and $\epsilon_{\alpha,{app}}=\left({B_{w,\alpha}}+1\right)\epsilon_{\alpha,r,v}+({B_{e,\alpha}}+\alpha {B_{f',\alpha}})\epsilon_{\alpha,r,w}$. \end{theorem} \begin{proof}[Proof sketch] The proof follows similar steps to the proof of Theorem~\ref{thm:sample regularized}. See Appendix~\ref{proof:thm sample regularized approximate} for details. \end{proof} \begin{remark}[Optimization] When ${\mathcal{W}}$ and ${\mathcal{V}}$ are convex sets,\footnote{In this case they are infinite classes, and we can simply replace the concentration bound in Lemma~\ref{lem:hat L conc} with a standard covering argument} a line of algorithms \citep{nemirovski2004prox,nesterov2007dual,lin2020near} is known to attain an $\tilde{\epsilon}$-saddle point with gradient complexity $\tilde{O}(\frac{1}{\tilde{\epsilon}})$.
Notice that an approximate saddle point automatically satisfies our requirements (\ref{AU-requirement-1})(\ref{AU-requirement-2}); therefore, we can use these algorithms to compute $(\hat{v},\hat{w})$. In more general cases, ${\mathcal{W}}$ and ${\mathcal{V}}$ might be parameterized by $\theta$ and $\phi$. As long as the corresponding maximin problem~(\ref{prob:empirical}) is still concave-convex (e.g., ${\mathcal{W}}$ and ${\mathcal{V}}$ are linear function classes), these algorithms still work efficiently. \end{remark} Similar to Corollary~\ref{cor:sample unregularized}, we can extend Theorem~\ref{thm:sample regularized approximate} to compete with $\pi_0^*$. Suppose we select $\alpha=\alpha_{un}>0$ in \texttt{Inexact-PRO-RL}\xspace and let $\epsilon_{{un}}=\alpha_{un}B_{f,0}+\frac{2}{1-\gamma}\sqrt{\frac{2(\epsilon_{{opt}}+\epsilon_{\alpha_{un},{app}})}{\alpha_{un} M_f}}$. Then we have the following corollary: \begin{corollary}[Error-robust version of Corollary~\ref{cor:sample unregularized}] \label{cor:sample unregularized approximate} Fix $\alpha_{un} > 0$. Suppose there exists $d^*_0\in D^*_0$ that satisfies Assumption~\ref{ass:concentrability} with $\alpha=0$. Besides, assume that Assumptions~\ref{ass:concentrability},\ref{ass:W bound},\ref{ass:f prop},\ref{ass:V bound} hold for $\alpha=\alpha_{un}$. Then the output of \texttt{Inexact-PRO-RL}\xspace with input $\alpha=\alpha_{un}$ satisfies \begin{align} &J(\pi^*_0)-J(\hat{\pi})\leq\frac{4}{1-\gamma}\sqrt{\frac{\mathcal{E}_{n,n_0,\alpha_{un}}(B_{w,\alpha_{un}},B_{f,\alpha_{un}},B_{v,\alpha_{un}},B_{e,\alpha_{un}})}{\alpha_{un}M_f}}+\epsilon_{{un}}, \end{align} with probability at least $1-\delta$. \end{corollary} \begin{proof}[Proof sketch] The proof largely follows that of Corollary~\ref{cor:sample unregularized} and thus is omitted here.
\end{proof} \paragraph{The selection of $\alpha_{un}$} The best $\alpha_{un}$ we can expect (i.e., with the lowest error floor) is \begin{equation} \alpha_{un}:=\arg\min_{\alpha>0}\left(\alpha B_{f,0}+\frac{2}{1-\gamma}\sqrt{\frac{2(\epsilon_{{opt}}+\epsilon_{\alpha,{app}})}{\alpha M_f}}\right). \end{equation} However, this requires knowledge of $\epsilon_{\alpha,{app}}$, which is often unknown in practice. One alternative is to assume that $\epsilon_{\alpha,{app}}$ is upper bounded by some $\epsilon_{{app}}$ for $\alpha\in I_{\alpha}$; then $\alpha_{un}$ can be chosen as \begin{equation} \alpha_{un}:=\arg\min_{\alpha\in I_{\alpha}}\left(\alpha B_{f,0}+\frac{2}{1-\gamma}\sqrt{\frac{2(\epsilon_{{opt}}+\epsilon_{{app}})}{\alpha M_f}}\right). \end{equation} Notice that $B_{f,0}$ is known and $\epsilon_{{opt}}$ can be controlled by adjusting the parameters of the optimization algorithm; therefore the above $\alpha_{un}$ can be calculated easily. \paragraph{Higher error floor} In the ideal case of no approximation/optimization errors, Corollary~\ref{cor:sample unregularized} (which competes with $\pi_0^*$) has a worse sample complexity than Theorem~\ref{thm:sample regularized} (which only competes with $\pi^*_{\alpha}$). However, in the presence of approximation and optimization errors, the sample complexities become the same in Theorem~\ref{thm:sample regularized approximate} and Corollary~\ref{cor:sample unregularized approximate}, but the latter has a higher error floor. To see this, suppose $\epsilon_{\alpha,{app}}$ is uniformly upper bounded by $\epsilon_{{app}}$; then $\alpha_{un}=O((\epsilon_{{opt}}+\epsilon_{{app}})^{\frac{1}{3}})$ by the AM-GM inequality and $\epsilon_{{un}}=O((\epsilon_{{opt}}+\epsilon_{{app}})^{\frac{1}{3}})$, which is larger than the $O((\epsilon_{{opt}}+\epsilon_{{app}})^{\frac{1}{2}})$ floor in Theorem~\ref{thm:sample regularized approximate}.
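The grid-search choice of $\alpha_{un}$ and the AM-GM scaling $\alpha_{un}=O((\epsilon_{{opt}}+\epsilon_{{app}})^{1/3})$ can be checked numerically. The following is a minimal Python sketch; all constants ($B_{f,0}$, $M_f$, $\gamma$, and the error levels) are illustrative placeholders, not values from the paper:

```python
import math

def error_floor(alpha, B_f0, eps_opt, eps_app, M_f, gamma):
    # Known part of the error floor:
    # g(alpha) = alpha * B_{f,0} + (2/(1-gamma)) * sqrt(2(eps_opt + eps_app) / (alpha * M_f)).
    return alpha * B_f0 + (2.0 / (1.0 - gamma)) * math.sqrt(
        2.0 * (eps_opt + eps_app) / (alpha * M_f)
    )

def select_alpha_un(grid, **kw):
    # Grid-search surrogate for argmin_{alpha in I_alpha} g(alpha).
    return min(grid, key=lambda a: error_floor(a, **kw))

# Illustrative constants (placeholders).
kw = dict(B_f0=1.0, eps_opt=1e-3, eps_app=1e-3, M_f=1.0, gamma=0.9)
grid = [10 ** (k / 10.0) for k in range(-60, 1)]  # alpha in [1e-6, 1]
alpha_un = select_alpha_un(grid, **kw)

# Closed-form minimizer of a*alpha + b/sqrt(alpha): alpha* = (b/(2a))^(2/3),
# so alpha* scales as (eps_opt + eps_app)^(1/3), matching the AM-GM argument.
b = (2.0 / (1.0 - kw["gamma"])) * math.sqrt(2.0 * (kw["eps_opt"] + kw["eps_app"]) / kw["M_f"])
alpha_star = (b / (2.0 * kw["B_f0"])) ** (2.0 / 3.0)
```

The grid minimizer agrees with the closed-form minimizer up to the grid resolution, and rerunning with smaller error levels shows the cube-root shrinkage of $\alpha_{un}$.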
\subsection{Handling an arbitrary data distribution} \label{sec:results constrained} In the previous sections, our goal is to compete with policy ${\pi^*_{\alpha}}$ and we require the data to provide sufficient coverage over such a policy. Despite being weaker than all-policy concentrability, this assumption can still be violated in practice, since we have no control over the distribution of the offline data. In fact, recent works such as \citet{xie2021bellman} are able to compete with the best policy covered by data (under strong function-approximation assumptions such as Bellman-completeness), thus providing guarantees for \textit{arbitrary} data distributions: when the data does not cover any good policies, the guarantee is vacuous; however, as long as a good policy is covered, the guarantee will be competitive with such a policy. In this section we show that we can achieve similar guarantees for \texttt{PRO-RL}\xspace with a suitably modified analysis. First let us define the notion of covered policies. \begin{definition} Let ${\Pi_{B_w}}$ denote the $B_w$-covered policy class of $d^D$ for $B_w>1$, defined as: \begin{align} {\Pi_{B_w}}\coloneqq\{\pi:\frac{d^{\pi}(s,a)}{d^D(s,a)}\leq B_w,\forall s\in\mathcal{S},a\in\mathcal{A}\}. \end{align} \end{definition} Here, $B_w$ is a hyperparameter chosen by the practitioner, and our goal in this section is to compete with policies in $\Pi_{B_w}$. The key idea is to extend the regularized LP \eqref{eq:constrained} by introducing an additional upper-bound constraint on $d$, namely $d(s,a) \le B_w d^D(s,a)$, so that we only search for a good policy within $\Pi_{B_w}$. The policy we will compete with, $\pi^*_{\alpha,B_w}$, and the corresponding value and density-ratio functions, $v^*_{\alpha,B_w}$ and $w^*_{\alpha,B_w}$, will all be defined based on this constrained LP.
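In the tabular case, membership in ${\Pi_{B_w}}$ is a direct elementwise check on the occupancy ratio. A minimal sketch (the helper name and array layout are our own assumptions):

```python
import numpy as np

def is_covered(d_pi, d_D, B_w):
    """Check membership in the B_w-covered class: d_pi(s,a)/d_D(s,a) <= B_w
    for all (s,a), where d_pi and d_D are (S, A) arrays of discounted
    state-action occupancies."""
    d_pi, d_D = np.asarray(d_pi, float), np.asarray(d_D, float)
    # Any occupancy mass placed where the data has none makes the ratio unbounded.
    if ((d_D == 0) & (d_pi > 0)).any():
        return False
    ratio = np.divide(d_pi, d_D, out=np.zeros_like(d_pi), where=d_D > 0)
    return bool((ratio <= B_w).all())
```

For example, with a uniform data occupancy over two states and two actions, a policy whose occupancy doubles the mass on one pair is covered for $B_w=2$ but not for $B_w=1.5$.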
In the rest of this section, we show that if we make realizability assumptions similar to those in Section~\ref{sec:results} but w.r.t.~$v^*_{\alpha,B_w}$ and $w^*_{\alpha,B_w}$ (instead of $v^*_{\alpha}$ and $w^*_{\alpha}$), then we can compete with $\pi^*_{\alpha,B_w}$ without needing to make any coverage assumption on the data distribution $d^D$. \begin{samepage} \begin{problem*}[Constrained $\&$ regularized LP]\label{prob:constrained_lp} \vspace{-10pt} \begin{align} &\max_{0\le d \le B_wd^D}\mathbb{E}_{(s,a)\sim d}[r(s,a)]-\alpha\mathbb{E}_{(s,a)\sim d^D}\left[f\left(\frac{d(s,a)}{d^D(s,a)}\right)\right]\label{eq:constrained 2}\\ &\text{s.t. }d(s)=(1-\gamma)\mu_0(s)+\gamma\sum_{s',a'}P(s|s',a')d(s',a')\label{eq:bellman flow 2} \end{align} \vspace{-15pt} \end{problem*} \end{samepage} Following an argument similar to the derivation of \texttt{PRO-RL}\xspace, we can show that Problem~\eqref{eq:constrained 2} is equivalent to the maximin problem: \begin{equation} \label{prob:maximin2 constrained} \max_{0\leq w\leq B_w}\min_{{v}}{L_{\alpha}}({v},w):=(1-\gamma)\mathbb{E}_{s\sim \mu_0}[{v}(s)]-\alpha\mathbb{E}_{(s,a)\sim d^D}[f(w(s,a))]+\mathbb{E}_{(s,a)\sim d^D}[w(s,a)e_{{v}}(s,a)]. \end{equation} Denote the optimum of (\ref{prob:maximin2 constrained}) by $({v^*_{\alpha,B_w}},{w^*_{\alpha,B_w}})$; then the optimal policy and its associated discounted state occupancy can be recovered as follows: \begin{equation} \label{eq:tilde d tilde w constrained} {\pi^*_{\alpha,B_w}}(a|s):= \begin{cases} \frac{{w^*_{\alpha,B_w}}(s,a)\pi_D(a|s)}{\sum_a {w^*_{\alpha,B_w}}(s,a)\pi_D(a|s)}, & \text{for } \sum_a {w^*_{\alpha,B_w}}(s,a)\pi_D(a|s)>0,\\ \frac{1}{|\mathcal{A}|}, & \text{else,} \end{cases} \quad \forall s\in\mathcal{S},a\in\mathcal{A}, \end{equation} \begin{equation} {d^*_{\alpha,B_w}}(s,a)={w^*_{\alpha,B_w}}(s,a)d^D(s,a).
\end{equation} We now state the realizability and boundedness assumptions, which are similar to those in Section~\ref{sec:results regularized}. \begin{assumption}[Realizability of ${\mathcal{V}}$ II] \label{ass:V realize 2} Suppose ${v^*_{\alpha,B_w}}\in {\mathcal{V}}$. \end{assumption} \begin{assumption}[Realizability of ${\mathcal{W}}$ II] \label{ass:W realize 2} Suppose ${w^*_{\alpha,B_w}}\in {\mathcal{W}}$. \end{assumption} \begin{assumption}[Boundedness of ${\mathcal{W}}$ II] \label{ass:W bound 2} Suppose $0\leq w(s,a)\leq B_w$ for any $s\in\mathcal{S}, a\in\mathcal{A}, w\in {\mathcal{W}}$. \end{assumption} \begin{assumption}[Boundedness of $f$ II] \label{ass:f bound 2} Suppose that \begin{align} &|f'(x)|\leq B_{f'},\quad\forall\, 0\leq x\leq B_w,\\ &|f(x)|\leq B_f,\quad\forall\, 0\leq x\leq B_w. \end{align} \end{assumption} Next we consider the boundedness of $\mathcal{V}$. Similar to Assumption~\ref{ass:V bound}, we will decide the appropriate bound on functions in $\mathcal{V}$ based on that of $v^*_{\alpha,B_w}$, which needs to be captured by $\mathcal{V}$. It turns out that the additional constraint $w\leq B_w$ makes it difficult to derive an upper bound on $v^*_{\alpha,B_w}$. However, we are able to do so under a common and mild assumption, that the data distribution $d^D$ is a valid occupancy \citep{liu2018breaking,tang2019doubly,levine2020offline}: \begin{assumption} \label{ass:dataset} Suppose $d^D=d^{\pi_D}$, i.e., the discounted occupancy of the behavior policy $\pi_D$. \end{assumption} With Assumption~\ref{ass:dataset}, we have $\Vert{v^*_{\alpha,B_w}}\Vert_{\infty}\leq B_{v}$ from Lemma~\ref{lem:bound tilde nu 2}, and therefore the following assumption is reasonable: \begin{assumption}[Boundedness of ${\mathcal{V}}$ II] \label{ass:V bound 2} Suppose $\Vert{v}\Vert_{\infty}\leq B_{v}:=\frac{\alpha B_{f'}+1}{1-\gamma}$ for any ${v}\in {\mathcal{V}}$.
\end{assumption} With the above assumptions, we have the following theorem showing that \texttt{PRO-RL}\xspace is able to learn $\pi^*_{\alpha,B_w}$: \begin{theorem} \label{thm:sample regularized constrained} Assume $\alpha>0$. Suppose Assumptions~\ref{ass:V realize 2},\ref{ass:W realize 2},\ref{ass:W bound 2},\ref{ass:f bound 2},\ref{ass:dataset},\ref{ass:V bound 2} and the strong convexity in Assumption~\ref{ass:f prop} hold. Then with at least probability $1-\delta$, the output of \texttt{PRO-RL}\xspace satisfies: \begin{align} &J(\pi^*_{\alpha,B_w})-J(\hat{\pi})\leq\frac{1}{1-\gamma}\mathbb{E}_{s\sim {d^*_{\alpha,B_w}}}[\Vert{\pi^*_{\alpha,B_w}}(\cdot|s)-\hat{\pi}(\cdot|s)\Vert_1]\notag\\ &\leq \frac{2}{1-\gamma}\Vert \hat{w}-{w^*_{\alpha,B_w}}\Vert_{2,d^D}\leq \frac{4}{1-\gamma}\sqrt{\frac{\mathcal{E}_{n,n_0,\alpha}(B_w,B_f,B_v,B_e)}{\alpha M_f}}, \label{eq:thm4 1} \end{align} where $B_{e}:=(1+\gamma)B_{v}+1$. \end{theorem} \begin{proof}[Proof sketch] The proof largely follows Theorem~\ref{thm:sample regularized} except for the derivation of the bound on ${v^*_{\alpha,B_w}}$, which is characterized in the following lemma: \begin{lemma} \label{lem:bound tilde nu 2} Suppose Assumption~\ref{ass:f bound 2} holds; then we have: \begin{equation} \Vert{v^*_{\alpha,B_w}}\Vert_{\infty}\leq B_{v}. \end{equation} \end{lemma} The proof of Lemma~\ref{lem:bound tilde nu 2} is deferred to Appendix~\ref{proof:lem bound tilde nu 2}. The rest of the proof of Theorem~\ref{thm:sample regularized constrained} is the same as in Section~\ref{sec:analysis} and thus omitted here. \end{proof} As before, we obtain the following corollary for competing with the best policy in $\Pi_{B_w}$: \begin{corollary} \label{cor:sample unregularized constrained} For any $\epsilon>0$, assume that Assumptions~\ref{ass:V realize 2},\ref{ass:W realize 2},\ref{ass:W bound 2},\ref{ass:f bound 2},\ref{ass:dataset},\ref{ass:V bound 2} and the strong convexity in Assumption~\ref{ass:f prop} hold for $\alpha=\alpha'_{\epsilon}:=\frac{\epsilon}{4{B_f}}$.
Then if \begin{align} &n\geq\frac{C_1\left(\epsilon {B_f}+4{B_w}{B_e}{B_f}\right)^2}{\epsilon^6M_f^2(1-\gamma)^4}\cdot\log\frac{4|\mathcal{V}||\mathcal{W}|}{\delta}, \\ &n_0\geq\frac{C_1\left(4{B_v}{B_f}\right)^2}{\epsilon^6M_f^2(1-\gamma)^2}\cdot\log\frac{4|\mathcal{V}|}{\delta}, \end{align} the output of \texttt{PRO-RL}\xspace with input $\alpha=\alpha'_{\epsilon}$ satisfies \begin{equation} J(\pi^*_{0,B_w})-J(\hat{\pi})\leq\epsilon, \end{equation} with at least probability $1-\delta$, where $C_1$ is the same constant in Corollary~\ref{cor:sample unregularized}. \end{corollary} \begin{proof} First notice that \begin{equation} \mathbb{E}_{(s,a)\sim {{d^*_{\alpha'_{\epsilon},B_w}}}}[r(s,a)]-\alpha'_{\epsilon}\mathbb{E}_{(s,a)\sim d^D}[f({w^*_{\alpha'_{\epsilon},B_w}}(s,a))]\geq\mathbb{E}_{(s,a)\sim d^*_{0,B_w}}[r(s,a)]-\alpha'_{\epsilon}\mathbb{E}_{(s,a)\sim d^D}[f({w^*_{0,B_w}}(s,a))], \end{equation} which implies that \begin{align} J({\pi^*_{0,B_w}})-J({\pi^*_{\alpha'_{\epsilon},B_w}}) &\leq\alpha'_{\epsilon}\left(\mathbb{E}_{(s,a)\sim d^D}[f({w^*_{0,B_w}}(s,a))]-\mathbb{E}_{(s,a)\sim d^D}[f({w^*_{\alpha'_{\epsilon},B_w}}(s,a))]\right)\\ &\leq 2\alpha'_{\epsilon}B_f=\frac{\epsilon}{2}. \end{align} On the other hand, by Theorem~\ref{thm:sample regularized constrained} we have with probability at least $1-\delta$, \begin{equation} \mathbb{E}_{s\sim {d^*_{\alpha'_{\epsilon},B_w}}}[\Vert{\pi^*_{\alpha'_{\epsilon},B_w}}(\cdot|s)-\hat{\pi}(\cdot|s)\Vert_1]\leq\frac{(1-\gamma)\epsilon}{2}. \end{equation} Using the performance difference lemma as in Appendix~\ref{proof:cor sample unregularized}, this implies \begin{equation} J({\pi^*_{\alpha'_{\epsilon},B_w}})-J(\hat{\pi})\leq\frac{\epsilon}{2}. \end{equation} Therefore, we have $J({\pi^*_{0,B_w}})-J(\hat{\pi})\leq\epsilon$ with at least probability $1-\delta$. \end{proof} \begin{remark} Corollary~\ref{cor:sample unregularized constrained} does not need the assumption of non-negativity of $f$. 
The reason is that we are already considering a bounded space ($0\leq w\leq B_w$), and thus $f$ must be lower bounded on this space. \end{remark} \paragraph{Resolving two-policy concentrability of Corollary~\ref{cor:sample unregularized}} \label{rem:sample unregularized constrained} As we have commented below Corollary~\ref{cor:sample unregularized}, to compete with $\pi^*_0$ we need ``two-policy'' concentrability, i.e., Assumption~\ref{ass:concentrability} for both $\alpha=0$ and $\alpha= \alpha_\epsilon$. Here we resolve this issue in Corollary~\ref{cor:single policy} below, by invoking Corollary~\ref{cor:sample unregularized constrained} with $B_w$ set to $B_{w,0}$. This way, we obtain the coverage over the regularized optimal policy $\pi^*_{\alpha,B_w}$ (i.e., the counterpart of $\pi^*_{\alpha}$ in Corollary~\ref{cor:sample unregularized}) \textit{for free}, and thus only need the concentrability w.r.t.~$\pi^*_0$. \begin{corollary} \label{cor:single policy} Suppose there exists $d^*_0\in D^*_0$ that satisfies Assumption~\ref{ass:concentrability} with $\alpha=0$. For any $\epsilon>0$, assume that Assumptions~\ref{ass:V realize 2},\ref{ass:W realize 2},\ref{ass:W bound 2},\ref{ass:f bound 2},\ref{ass:dataset},\ref{ass:V bound 2} and the strong convexity in Assumption~\ref{ass:f prop} hold for $B_w=B_{w,0}$ and $\alpha=\alpha'_{\epsilon}:=\frac{\epsilon}{4{B_{f,0}}}$. Then if \begin{align} &n\geq\frac{C_1\left(\epsilon {B_{f,0}}+4{B_{w,0}}{B_{e,0}}{B_{f,0}}\right)^2}{\epsilon^6M_f^2(1-\gamma)^4}\cdot\log\frac{4|\mathcal{V}||\mathcal{W}|}{\delta}, \\ &n_0\geq\frac{C_1\left(4{B_{v,0}}{B_{f,0}}\right)^2}{\epsilon^6M_f^2(1-\gamma)^2}\cdot\log\frac{4|\mathcal{V}|}{\delta}, \end{align} the output of \texttt{PRO-RL}\xspace with input $\alpha=\alpha'_{\epsilon}$ satisfies \begin{equation} J(\pi^*_0)-J(\hat{\pi})\leq\epsilon, \end{equation} with at least probability $1-\delta$, where $C_1$ is the same constant as in Corollary~\ref{cor:sample unregularized}.
\end{corollary} \begin{proof} Let $B_w=B_{w,0}$ in Corollary~\ref{cor:sample unregularized constrained}, then we know $\pi^*_{0,B_w}=\pi^*_0$ and Corollary~\ref{cor:single policy} follows directly. \end{proof} Corollary~\ref{cor:single policy} shows that our algorithm is able to compete with $\pi^*_0$ under concentrability with respect to ${\pi^*_0}$ alone. In addition, a version of Proposition~\ref{prop:LP stability} applies to Corollary~\ref{cor:sample unregularized constrained}, which indicates that $w^*_{\alpha'_{\epsilon},B_w}={w^*_{0}}$ for sufficiently small $\epsilon$. \begin{remark} Corollary~\ref{cor:single policy} still holds when we set $B_w\geq B_{w,0}$ in case $B_{w,0}$ is unknown. However the realizability assumptions will depend on the choice of $B_w$ and change accordingly. \end{remark} \subsection{\texttt{PRO-RL}\xspace with $\alpha=0$} \label{sec:results unregularized} From the previous discussions, we notice that when $\alpha>0$, extending from regularized problems to unregularized problems will cause worse sample complexity in \texttt{PRO-RL}\xspace (Remark~\ref{rem:sample regularized},\ref{rem:sample unregularized constrained}). Also, the realizability assumptions are typically with respect to the regularized optimizers rather than the more natural $(v^*_{0},w^*_{0})$. In this section we show that by using stronger concentrability assumptions, \texttt{PRO-RL}\xspace can still have guarantees with $\alpha=0$ under the realizability w.r.t.~$(v^*_{0},w^*_{0})$ and attain a faster rate. 
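Whatever the value of $\alpha$, the final step of \texttt{PRO-RL}\xspace converts the learned ratio $\hat{w}$ into a policy via $\hat{\pi}(a|s)\propto \hat{w}(s,a)\pi_D(a|s)$ with a uniform fallback off the support, as in \eqref{eq:tilde d tilde w constrained}. In the tabular case this is a short computation; a minimal sketch (function name and array layout are our own assumptions):

```python
import numpy as np

def extract_policy(w, pi_D):
    """Recover pi(a|s) proportional to w(s,a) * pi_D(a|s), with a uniform
    fallback on states where the normalizer sum_a w(s,a) pi_D(a|s) is zero.

    w, pi_D: (S, A) arrays; each row of pi_D sums to 1.
    """
    w, pi_D = np.asarray(w, float), np.asarray(pi_D, float)
    unnorm = w * pi_D
    z = unnorm.sum(axis=1)
    n_states, n_actions = w.shape
    pi = np.full((n_states, n_actions), 1.0 / n_actions)  # uniform fallback
    covered = z > 0
    pi[covered] = unnorm[covered] / z[covered, None]
    return pi
```

For instance, if $\hat{w}$ puts all its mass on one action of the first state and vanishes on the second state, the recovered policy is deterministic on the first state and uniform on the second.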
More specifically, we need the following strong concentrability assumption: \begin{assumption}[Strong concentrability] \label{ass:strong conc} Suppose the dataset distribution $d^D$ and some $d^*_0\in D^*_0$ satisfy \begin{align} &\frac{d^{\pi}(s)}{d^D(s)}\leq {B_{w,u}},\forall \pi, s\in\mathcal{S},\label{eq:str conc 2-1}\\ &\frac{{d^*_0}(s)}{d^D(s)}\geq {B_{w,l}}>0,\forall s\in\mathcal{S}.\label{eq:str conc 2-2} \end{align} \end{assumption} \begin{remark} Eq.~\eqref{eq:str conc 2-1} is the standard all-policy concentrability assumption in offline RL \citep{chen2019information,nachum2019algaedice,xie2020q}. In addition, Assumption~\ref{ass:strong conc} requires that the density ratio of the optimal policy be lower bounded, which is related to an ergodicity assumption used in some previous works in the simulator setting~\citep{wang2017primal,wang2020randomized}. \end{remark} \begin{remark} Recall the counterexample in Section~\ref{sec:algorithm}. It can be observed that $B_{w,l}=0$ in that case and thus the counterexample does not satisfy Assumption~\ref{ass:strong conc}. \end{remark} In the following discussion, $w^*_0$ and $\pi^*_0$ are specified as the optimal density ratio and policy with respect to the $d^*_0$ in Assumption~\ref{ass:strong conc}. We need to impose some constraints on the function classes ${\mathcal{W}}$ and ${\mathcal{V}}$ so that $d^{\hat{\pi}}$ can be upper bounded by $\hat{w} \cdot d^D$. \begin{assumption} \label{ass:W good} Suppose \begin{align} &{\mathcal{W}}\subseteq\bar{{\mathcal{W}}}:=\notag\\ &\left\{w: w(s,a)\geq0 \text{ and } \sum_{a}\pi_D(a|s)w(s,a)\geq {B_{w,l}},\ \forall s\in\mathcal{S}, a\in\mathcal{A}\right\}. \end{align} \end{assumption} Given a function class $\mathcal{W}$, this assumption is trivially satisfied by removing the $w \in \mathcal{W}$ that are not in $\bar{\mathcal{W}}$ when $\pi_D$ is known. \begin{assumption} \label{ass:V good} Suppose \begin{align} 0\leq{v}(s)\leq\frac{1}{1-\gamma},\forall s\in\mathcal{S},{v}\in {\mathcal{V}}.
\end{align} \end{assumption} By Assumption~\ref{ass:strong conc}, ${w^*_{0}}\in\bar{{\mathcal{W}}}$ and $0\leq{v^*_{0}}\leq\frac{1}{1-\gamma}$. Therefore Assumptions~\ref{ass:W good} and \ref{ass:V good} are reasonable. With strong concentrability, we can show that \texttt{PRO-RL}\xspace with $\alpha=0$ can learn an $\epsilon$-optimal policy with sample complexity $n=\tilde{O}\left(\frac{1}{\epsilon^2}\right)$: \begin{corollary} \label{cor:sample unregularized 0} Suppose Assumptions~\ref{ass:concentrability},\ref{ass:V realize},\ref{ass:W realize},\ref{ass:W bound}, \ref{ass:W good}, \ref{ass:V good} and \ref{ass:strong conc} hold for $\alpha=0$. Then with at least probability $1-\delta$, the output of \texttt{PRO-RL}\xspace with input $\alpha=0$ satisfies: \begin{equation} J({\pi^*_0})-J(\hat{\pi})\leq\frac{2{B_{w,0}}{B_{w,u}}}{(1-\gamma){B_{w,l}}}\sqrt{\frac{2\log\frac{4|{\mathcal{V}}||{\mathcal{W}}|}{\delta}}{n}}+\frac{{B_{w,u}}}{{B_{w,l}}}\sqrt{\frac{2\log\frac{4|{\mathcal{V}}|}{\delta}}{n_0}}. \end{equation} \end{corollary} \begin{proof} The key idea is to utilize Lemma~\ref{lem:hat L tilde L} to bound ${L_0}({v^*_{0}},{w^*_{0}})-{L_0}({v^*_{0}},\hat{w})$ and then quantify the performance difference $J({\pi^*_0})-J(\hat{\pi})$. See Appendix~\ref{proof:cor sample unregularized 0} for details. \end{proof} \paragraph{Comparison between $\alpha=0$ and $\alpha>0$} When solving the unregularized problem, \texttt{PRO-RL}\xspace with $\alpha=0$ has a better sample complexity than that of Corollary~\ref{cor:sample unregularized}. Also, the realizability assumptions in Corollary~\ref{cor:sample unregularized 0} are with respect to the optimizers of the unregularized problem itself, which is not the case in Corollary~\ref{cor:sample unregularized} when $\epsilon$ is large.
However, \texttt{PRO-RL}\xspace with $\alpha=0$ only works under a very strong concentrability assumption (Assumption~\ref{ass:strong conc}) and thus is less general than \texttt{PRO-RL}\xspace with $\alpha>0$. \section{Discussion} \subsection{Comparison with OptiDICE \citep{lee2021optidice}} Our algorithm is inspired by OptiDICE \citep{lee2021optidice}, but with several crucial modifications necessary to obtain the desired sample-complexity guarantees. OptiDICE starts with the problem of $\min_{{v}}\max_{w\geq0}{L_{\alpha}}({v},w)$, and then uses the closed-form maximizer ${w^*_{\alpha}}({v}):=\arg\max_{w\geq0}{L_{\alpha}}({v},w)$ for arbitrary ${v}$ \citep[Proposition 1]{lee2021optidice}: \begin{equation} {w^*_{\alpha}}({v})=\max\left(0,(f')^{-1}\left(\frac{e_{{v}}(s,a)}{\alpha}\right)\right), \end{equation} and then solves $\min_{{v}}{L_{\alpha}}({v},{w^*_{\alpha}}({v}))$. Unfortunately, the $e_v(s,a)$ term in the expression requires knowledge of the transition function $P$, causing the infamous double-sampling difficulty \citep{baird1995residual, farahmand2011model}, a major obstacle in offline RL with only realizability assumptions \citep{chen2019information}. OptiDICE deals with this by optimizing an upper bound of $\max_{w\geq0}L_{\alpha}({v},w)$ which does not lend itself to theoretical analysis. Alternatively, one can fit $e_v$ using a separate function class. However, since ${v}$ is arbitrary in the optimization, the function class needs to approximate $e_v$ for all ${v}$, requiring a completeness-type assumption in theory~\citep{xie2020q}. In contrast, \texttt{PRO-RL}\xspace optimizes over ${\mathcal{V}}\times {\mathcal{W}}$ and thus $\arg\max_{w\in {\mathcal{W}}}{L_{\alpha}}({v},w)$ is naturally contained in ${\mathcal{W}}$, and our analyses show that this circumvents the completeness-type assumptions and only requires realizability. Another important difference is the policy extraction step. 
OptiDICE uses a heuristic behavior cloning algorithm without any guarantees. We develop a new behavior cloning algorithm that only requires realizability of the policy and does not increase the sample complexity. \subsection{Discussion about Assumption~\ref{ass:strong conc}} \label{sec:discussion cor2 concentration} The following ergodicity assumption has been introduced in some online reinforcement learning works \citep{wang2017primal,wang2020randomized}: \begin{assumption} \label{ass:ergodicity} Assume \begin{equation} B_{\texttt{erg},1}\mu_0(s)\leq d^{\pi}(s)\leq B_{\texttt{erg},2}\mu_0(s),\forall s,\pi. \end{equation} \end{assumption} \begin{remark} The original definition of ergodicity in \citet{wang2017primal,wang2020randomized} is targeted at the stationary distribution induced by policy $\pi$ rather than the discounted visitation distribution. However, this is not an essential difference, and it can be shown that Corollary~\ref{cor:sample unregularized 0} still holds under the definition in \citet{wang2017primal,wang2020randomized}. Here we define ergodicity with respect to the discounted visitation distribution for the purpose of comparing Assumptions~\ref{ass:strong conc} and \ref{ass:ergodicity}. \end{remark} In fact, our Assumption~\ref{ass:strong conc} is weaker than Assumption~\ref{ass:ergodicity}, as shown in the following lemma: \begin{lemma} \label{lem:ergodic} Suppose $d^{\pi}(s)\leq B_{\texttt{erg},2}\mu_0(s),\forall s,\pi$ and Assumption~\ref{ass:dataset} holds; then we have: \begin{align} &\frac{d^{\pi}(s)}{d^D(s)}\leq\frac{B_{\texttt{erg},2}}{1-\gamma},\forall \pi,s,\\ &\frac{d^*_0(s)}{d^D(s)}\geq\frac{1-\gamma}{B_{\texttt{erg},2}},\forall s. \end{align} \end{lemma} The proof is deferred to Appendix~\ref{proof:lem ergodic}. Lemma~\ref{lem:ergodic} shows that the upper bound in Assumption~\ref{ass:ergodicity}, together with Assumption~\ref{ass:dataset}, implies Assumption~\ref{ass:strong conc}. Therefore, our strong concentrability assumption is a weaker version of the ergodicity assumption.
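The key step behind Lemma~\ref{lem:ergodic} is that any discounted occupancy assigns at least $(1-\gamma)\mu_0(s)$ mass to each state (the $t=0$ term alone contributes this much), so $d^D=d^{\pi_D}\geq(1-\gamma)\mu_0$, which converts the ergodicity bounds into the two concentrability bounds. This step can be checked numerically on a toy chain; a minimal sketch (the transition matrix and constants are illustrative, not from the paper):

```python
import numpy as np

def discounted_occupancy(P, mu0, gamma, horizon=2000):
    """d(s) = (1 - gamma) * sum_t gamma^t Pr(s_t = s), truncated at `horizon`,
    for a policy-induced state transition matrix P and initial distribution mu0."""
    d = np.zeros_like(mu0)
    p_t = mu0.copy()
    for t in range(horizon):
        d += (1.0 - gamma) * gamma**t * p_t
        p_t = p_t @ P  # one-step state distribution update
    return d

# Toy 2-state chain under some fixed policy (illustrative numbers).
P = np.array([[0.9, 0.1], [0.2, 0.8]])
mu0 = np.array([0.5, 0.5])
gamma = 0.9
d = discounted_occupancy(P, mu0, gamma)
```

The computed occupancy sums to (approximately) one and dominates $(1-\gamma)\mu_0$ elementwise, as the lemma's proof requires.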
\subsection{Combination of different practical factors} In Section~\ref{sec:results}, we generalized \texttt{PRO-RL}\xspace to several more realistic settings (approximation and optimization error, poor coverage, unknown behavior policy). In fact, \texttt{PRO-RL}\xspace with $\alpha>0$ can even be generalized to all three settings at once by combining Theorems~\ref{thm:sample regularized},\ref{thm:sample regularized approximate},\ref{thm:sample regularized constrained},\ref{thm:sample regularized BC} and Corollaries~\ref{cor:sample unregularized},\ref{cor:sample unregularized approximate},\ref{cor:sample unregularized constrained},\ref{cor:sample unregularized BC}. For brevity, we do not list all the combinations separately and only illustrate how to handle each factor individually. For \texttt{PRO-RL}\xspace with $\alpha=0$, it is easy to extend Corollary~\ref{cor:sample unregularized 0} to approximation and optimization errors, but relaxing the concentrability assumption or handling an unknown behavior policy is difficult. This is because the analysis of Corollary~\ref{cor:sample unregularized 0} relies on the fact that $v^*_{0}$ is the optimal value function of the unregularized problem~\eqref{prob:original problem}. Consequently, the same analysis is not applicable to $(w^*_{0,B_w},v^*_{0,B_w})$. Furthermore, Assumption~\ref{ass:W good} requires knowing $\pi_D$ and is thus hard to enforce when the behavior policy is unknown. \section{Introduction} \label{sec:intro} Offline (or batch) reinforcement learning (RL) learns decision-making strategies using solely historical data, and is a promising framework for applying RL to many real-world applications. Unfortunately, offline RL training is known to be difficult and unstable \citep{fujimoto19off,wang2020statistical,wang2021instabilities}, primarily due to two fundamental challenges.
The first challenge is distribution shift, that the state distributions induced by the candidate policies may deviate from the offline data distribution, creating difficulties in accurately assessing the performance of the candidate policies. The second challenge is the sensitivity to function approximation, that errors can amplify exponentially over the horizon even with good representations \citep{du2019good,weisz2020exponential,wwk21}. These challenges not only manifest themselves as degenerate behaviors of practical algorithms, but are also reflected in the strong assumptions needed for providing sample-efficiency guarantees to classical algorithms. (In this paper, by sample-efficiency we mean a sample complexity that is polynomial in the relevant parameters, including the horizon, the capacities of the function classes, and the degree of data coverage.) As an example, the guarantees of the popular Fitted-Q Iteration \citep{ernst05tree, munos2008finite, pmlr-v120-yang20a, chen2019information} require the following two assumptions: \begin{itemize}[leftmargin=*, itemsep=0pt] \item \textbf{(Data) All-policy concentrability: } The offline data distribution provides good coverage (in a technical sense) over the state distributions induced by \textit{all} candidate policies. \item \textbf{(Function Approximation) Bellman-completeness: } The value-function class is \textit{closed} under the Bellman optimality operator.\footnote{Approximate policy iteration algorithms usually require a variant of this assumption, that is, the closure under the policy-specific Bellman operator for \textit{every} candidate policy \citep{munos2003error,antos08learning}.} \end{itemize} Both assumptions are very strong and may fail in practice, and algorithms whose guarantees rely on them naturally suffer from performance degradation and instability \citep{fujimoto19off,wang2020statistical,wang2021instabilities}. 
On one hand, \textit{all-policy concentrability} not only requires a highly exploratory dataset (even though historical data in real applications often lacks exploration), but also implicitly imposes structural assumptions on the MDP dynamics \citep[Theorem 4]{chen2019information}. On the other hand, \textit{Bellman-completeness} is much stronger than realizability (that the optimal value function is simply contained in the function class), and is \textit{non-monotone} in the function class, in the sense that the assumption can be violated more severely when a richer function class is used. To address these challenges, a significant amount of recent effort in offline RL has been devoted to relaxing these strong assumptions via novel algorithms and analyses. Unfortunately, these efforts are only able to address either the data or the function-approximation assumption, and no existing works address both simultaneously. For example, \cite{liu2020provably, rajaraman2020towards, rashidinejad2021bridging, jin2020pessimism, xie2021bellman, uehara2021pessimistic} show that pessimism is an effective mechanism for mitigating the negative consequences of a lack of data coverage, and provide guarantees under \textit{single-policy concentrability}, that the data only covers a single good policy (e.g., the optimal policy). However, they require completeness-type assumptions on the value-function classes or \textit{model} realizability.\footnote{When a model class that contains the true MDP model is given, value-function classes that satisfy a version of Bellman-completeness can be automatically induced from the model class \citep{chen2019information}, so model realizability is even stronger than Bellman-completeness. Therefore, in this work we aim at only making a constant number of realizability assumptions of real-valued functions.} \citet{xie2020batch} only require realizability of the optimal value-function, but their data assumption is even stronger than all-policy concentrability.
To this end, we want to ask: \begin{center} \textbf{\textit{Is sample-efficiency possible with realizability and single-policy concentrability?}} \end{center} In this work, we answer the question in the positive by proposing the first model-free algorithm that only requires relatively weak assumptions on both data coverage and function approximation. The algorithm is based on the primal-dual formulation of linear programming (LP) for MDPs \citep{puterman2014markov,wang2017primal}, where we use marginalized importance weight (or density ratio) to model the dual variables which correspond to the discounted occupancy of the learned policy, a practice commonly found in the literature of off-policy evaluation (OPE) \citep[e.g.,][]{liu2018breaking}. Our main result (Corollary~\ref{cor:sample unregularized}) provides polynomial sample-complexity guarantees when the density ratio and (a regularized notion of) the value function of the regularized optimal policy are realizable, and the data distribution covers such an optimal policy. We also provide a number of extensions and alternative analyses to complement the main result and provide deeper understanding of the behavior of primal-dual algorithms in the offline learning setting: (see also Table~\ref{tab:main results} for a summary of the results) \begin{enumerate}[leftmargin=*, itemsep=0pt] \item Section~\ref{sec:results agnostic} extends the main result to account for approximation and optimization errors, and Section~\ref{sec:results constrained} handles the scenario where the optimal policy is not covered and we need to compete with the best policy supported on data. \item Section~\ref{sec:results BC} handles the case where the behavior policy is unknown (which the main algorithm needs) and estimated by behavior cloning. \item Our main result crucially relies on the use of regularization. 
In Section~\ref{sec:results unregularized} we study the unregularized algorithm, and provide performance guarantees under alternative assumptions. \end{enumerate} \begin{table}[h!] \begin{center} \caption{Assumptions required by existing algorithms and our algorithms to learn an $\epsilon$-optimal policy efficiently. Here $\pi$ is a policy and $d^{\pi}$ is the associated discounted state-action occupancy. $d^*_{\alpha}=d^{\pi^*_{\alpha}}$ where $\pi^*_{\alpha}$ is the $\alpha$-regularized optimal policy (defined in Section~\ref{sec:algorithm}). In particular, $d^*_0$ is the discounted state-action occupancy of the unregularized optimal policy. $d^D$ is the distribution of the offline dataset. $\mathcal{F},\Pi,\mathcal{W},\mathcal{V}$ are the approximation function classes and $\mathcal{T}$ is the Bellman operator. $Q^{\pi}$ is the action value function of $\pi$ and $Q^*$ is the unregularized optimal action value function. $v^*_{\alpha}$ ($v^*_{\alpha'_{\epsilon},B_w}$) is the $\alpha$-regularized optimal value function (with respect to the covered policy class), defined in Section~\ref{sec:algorithm} (Section~\ref{sec:results constrained}), and in particular $v^*_{0}$ is the unregularized optimal value function. $w^*_{\alpha}$ ($w^*_{\alpha'_{\epsilon},B_w}$) is the optimal density ratio $\frac{d^*_{\alpha}}{d^D}$ (with respect to the covered policy class), as stated in Section~\ref{sec:algorithm} (Section~\ref{sec:results constrained}).
Here we compete with the unregularized optimal policy by default and will mark when competing against the regularized optimal policy.} \label{tab:main results} \begin{tabular}{c|c|c} Algorithm & Data & Function Class\\ \hline AVI & \multirow{2}{*}{$\Vert\frac{d^{\pi}}{d^D}\Vert_{\infty}\leq B_w,\textcolor{red}{\forall\pi}$} & $\mathcal{T} f\in\mathcal{F},\textcolor{red}{\forall f\in\mathcal{F}}$ {\scriptsize \citep{munos2008finite}}\\ \cline{1-1}\cline{3-3} API & &$\mathcal{T}^{\pi}f\in\mathcal{F},\textcolor{red}{\forall f\in\mathcal{F},\pi\in\Pi}$ {\scriptsize\citep{antos2008learning}}\\ \hline BVFT & \textcolor{red}{Stronger} than above & $Q^*\in\mathcal{F}${\scriptsize\citep{pmlr-v139-xie21d}}\\ \hline \multirow{2}{*}{Pessimism} & \multirow{2}{*}{$\Vert\frac{d^*_0}{d^D}\Vert_{\infty}\leq B_w$} & $\mathcal{T}^{\pi}f\in\mathcal{F},\textcolor{red}{\forall f\in\mathcal{F},\pi\in\Pi}$ {\scriptsize\citep{xie2021bellman}}\\ \cline{3-3} & & $w^*_{0}\in\mathcal{W}, Q^{\pi}\in\mathcal{F}, \textcolor{red}{\forall\pi\in\Pi}${\scriptsize\citep{jiang2020minimax}}\\ \hline \textbf{\texttt{PRO-RL}\xspace} & \multirow{2}{*}{$\Vert\frac{d^*_{\alpha}}{d^D}\Vert_{\infty}\leq B_w$} & \multirow{2}{*}{$w^*_{\alpha}\in\mathcal{W},v^*_{\alpha}\in\mathcal{V}$ \scriptsize(\textcolor{blue}{Theorem~\ref{thm:sample regularized}})}\\ (against $\pi^*_{\alpha}$) & & \\ \hline \multirow{2}{*}{\textbf{\texttt{PRO-RL}\xspace}} & \multirow{2}{*}{$\Vert\frac{d^*_0}{d^D}\Vert_{\infty}\leq B_w$} & \multirow{2}{*}{$w^*_{\alpha'_{\epsilon},B_w}\in\mathcal{W},v^*_{\alpha'_{\epsilon},B_w}\in\mathcal{V}$ \scriptsize(\textcolor{blue}{Corollary~\ref{cor:sample unregularized constrained}})}\\ &&\\ \hline & \multirow{2}{*}{$\Vert\frac{d^*_0}{d^D}\Vert_{\infty}\leq B_w,\frac{d^*_0(s)}{d^D(s)}\geq B_{w,l},\forall s$} & \multirow{4}{*}{$w^*_{0}\in\mathcal{W},v^*_{0}\in\mathcal{V}$ \scriptsize(\textcolor{blue}{Corollary~\ref{cor:sample unregularized 0}})}\\ \textbf{\texttt{PRO-RL}\xspace}&&\\ with $\alpha=0$ 
&\multirow{2}{*}{$\frac{d^{\pi}(s)}{d^D(s)}\leq B_{w,u},\textcolor{red}{\forall\pi},s$}& \\ & & \\ \hline \end{tabular} \end{center} \end{table} \subsection{Related works} Section~\ref{sec:intro} has reviewed the analyses of approximate value/policy iteration, and we focus on other related works in this section. \paragraph{Lower bounds} When we only assume the realizability of the optimal value-function, a number of recent works have established information-theoretic hardness for offline learning under relatively weak data coverage assumptions \citep{wang2020statistical,amortila2020variant,zanette2021exponential,chen2021infinite}. A very recent result by \citet{foster2021offline} shows a stronger barrier, that even with \textit{all-policy concentrability} and the realizability of the value functions of \textit{all policies}, it is still impossible to obtain polynomial sample complexity in the offline learning setting. These works do not contradict our results, as we also assume the realizability of the density-ratio function, which circumvents the existing lower bound constructions. In particular, as \citet[Section 1.3]{foster2021offline} have commented, their lower bound no longer holds if the realizability of importance weight is assumed, as a realizable weight class would have too large of a capacity in their construction and would explain away the sample-complexity lower bound that scales with $|\mathcal{S}|$. \paragraph{Marginalized importance sampling (MIS)} As mentioned above, a key insight that enables us to break the lower bounds against value-function realizability is the use of marginalized importance weights (or density ratio). Modeling such functions is a common practice in MIS, a recently popular approach in the OPE literature \citep{liu2018breaking, uehara2019minimax, kostrikov2019imitation,nachum2020reinforcement,zhang2020dendice}, though most of the works focus exclusively on policy evaluation. 
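The core MIS identity, that a policy's discounted return can be written as an expectation over the data distribution reweighted by the density ratio, $J(\pi)=\frac{1}{1-\gamma}\mathbb{E}_{(s,a,r)\sim d^D}\left[w(s,a)\,r\right]$, can be illustrated with a minimal sketch. This is plain Python for illustration only; the \texttt{weights} argument stands in for the output of some estimated ratio model, which is an assumption of the sketch, and this is not the estimator analyzed in this paper:

```python
def mis_value_estimate(weights, rewards, gamma):
    """Marginalized importance sampling estimate of a policy's value.

    Approximates J(pi) = (1/(1-gamma)) * E_{(s,a,r)~d^D}[ w(s,a) * r ]
    by the empirical mean over an offline dataset, where
    weights[i] is an estimate of w(s_i, a_i) = d^pi(s_i,a_i) / d^D(s_i,a_i)
    and rewards[i] is the observed reward on that transition.
    """
    n = len(rewards)
    return sum(w * r for w, r in zip(weights, rewards)) / (n * (1.0 - gamma))
```

Note that no value function of any policy other than $\pi$ is evaluated here; all the modeling burden sits in the weights, which is exactly what lets the realizability assumption shift from value functions to density ratios.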
Among the few works that consider policy optimization, AlgaeDICE \citep{nachum2019algaedice} optimizes the policy using MIS as a subroutine for policy evaluation, and \citet{jiang2020minimax} analyze AlgaeDICE under the realizability of \textit{all} candidate policies' value functions. Similarly, MABO \citep{xie2020q} only needs realizability of the optimal value function, but the weight class needs to realize the density ratio of \textit{all} candidate policies. The key difference in our work is the use of the LP formulation of MDPs \citep{puterman2014markov} to directly solve for the optimal policy, without trying to evaluate other policies. This idea has been recently explored by OptiDICE \citep{lee2021optidice}, which is closely related to and has inspired our work. However, \citet{lee2021optidice} focuses on developing an empirical algorithm, and as we will see, multiple design choices in our algorithms deviate from those of OptiDICE and are crucial to obtaining the desired sample-complexity guarantees.
\section{Introduction} A (pseudo)-Riemannian $n$-manifold $(M,g)$ is said to have \emph{orthogonal coordinates around a point} if in the neighborhood of the point there is a chart such that the metric $g$ with respect to it is in diagonal form, i.e. \[ g=\sum_{i=1}^nf_idx^i\otimes dx^i. \] If the manifold satisfies this property at every point, then we will say that it admits an \emph{orthogonal atlas}. Any surface has an orthogonal atlas since there always exist isothermal coordinates (see \cite{Deturck1981SomeGeometry}), with respect to which the metric assumes the particular diagonal form \[ g=f(x,y)\big(dx\otimes dx+dy\otimes dy\big). \] In the Riemannian setting, D. DeTurck and D. Yang in \cite{Deturck1984} proved that any 3-dimensional smooth manifold $(M,g)$ has an orthogonal atlas using the technique of moving frames. In this case the metric assumes the more general form \begin{equation*} g=f_1(x, y,z)dx\otimes dx+f_2(x, y,z)dy\otimes dy+f_3(x, y,z)dz\otimes dz. \end{equation*} In their paper they point out that in higher dimensions the situation changes, because the existence of the orthogonal atlas is subject to a condition on the Weyl tensor. Subsequently, P. Tod in \cite{Tod1992OnCC} studied the problem in the same setting but in dimension $n\ge4$, where he found necessary algebraic conditions for the existence of the orthogonal atlas. In that paper the cases of dimensions $n=4$, $n=5$ and $n\ge6$ are studied separately, as each requires a different condition on the Weyl tensor of the manifold; more precisely, the restriction involves the third derivatives of the tensor for $n=4$, the first derivative for $n=5$, and the tensor itself for $n\ge6$. However, J. Grant and J. A. Vickers in \cite{Grant2009BlockMetrics} proved that something can be said even in dimension 4: in particular, they showed that in the analytic setting, both in the Riemannian and in the Lorentzian case, one can find a chart such that the metric $g$ is block diagonal, i.e.
\[ g_{ij}=\begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix} \] where $A$ and $B$ are $2\times2$ blocks, even when no assumptions are made on the Weyl tensor (or its derivatives). More recently, O. Kowalski and M. Sekizawa in \cite{KOWALSKI2013251} proved the existence of an orthogonal atlas in the real analytic Lorentzian setting by applying the Cauchy-Kovalevskaya Theorem. On the other hand, P. Gauduchon and A. Moroianu proved in \cite{Gauduchon2020Non-existenceSpaces} that one cannot find orthogonal coordinates in the neighborhood of any point for the complex and quaternionic projective spaces $\mathbb{C}\mathbb{P}^m$ and $\mathbb{HP}^q$. In this paper we will prove that all smooth Lorentzian 3-manifolds admit an orthogonal atlas by following the same method used by DeTurck and Yang. The technique is the following: the problem is initially shifted from a PDE system for the coordinates to a PDE system for a coframe in the cotangent bundle with respect to a fixed coframe; one then proves that the Cauchy problem associated to the second PDE system admits a solution. \textbf{Acknowledgments} This paper was written as part of the author's PhD thesis for the joint PhD program in Mathematics Università di Milano Bicocca - University of Surrey, and it is based on a suggestion of his supervisor James Grant. The author also thanks his other supervisor Diego Conti for some useful advice. The author acknowledges GNSAGA of INdAM. \section{Orthogonal coordinates on Lorentzian manifolds} We proceed to illustrate the proof of the following \begin{theorem}\label{teo:main} Let $(M,g)$ be a smooth Lorentzian 3-manifold. Then $M$ admits an orthogonal atlas. \end{theorem} Let $(\bar e_1,\bar e_2,\bar e_3)$ be an orthonormal frame of $(M,g)$ and $(\bar\omega_1,\bar\omega_2,\bar\omega_3)$ the corresponding coframe.
We want to find a triplet of coordinate functions $(x_1,x_2,x_3)$ such that, if $e_i=\partial_i$ is the coordinate frame of $(x_1,x_2,x_3)$, then $g(e_i,e_j)=0$ whenever $i\ne j$. The first difficulty we find, both in the Riemannian and in the Lorentzian setting, is the following. Assume $(y^1,y^2,y^3)$ are fixed coordinates and $g(y)$ is the metric tensor w.r.t. this chart; then the coordinate frame $\{e_i\}$ can be written as \[ e_i=\frac{\partial y^\alpha}{\partial x^i}\frac{\partial}{\partial y^\alpha} \] and hence the PDE system to be solved is \begin{equation} 0=g(\partial_i,\partial_j)=\sum_{\alpha,\beta=1}^3\frac{\partial y^\alpha}{\partial x^i}\,\frac{\partial y^\beta}{\partial x^j}\,g_{\alpha\beta}(y)\quad\text{for }i\ne j. \end{equation} This system is nonlinear, and its linearization is not symmetric hyperbolic, which means that the standard existence results do not apply. Furthermore, there is an invariance in the solution if the unknowns are the coordinates: assume $(\tilde x^1,\tilde x^2,\tilde x^3)$ are other coordinates such that $\tilde x^i=f^i(x^i)$ and each $f^i$ is a strictly monotone function. Then \[ 0=g\left(\frac{\partial}{\partial x^i},\frac{\partial}{\partial x^j}\right)=\frac{\partial f^i}{\partial x^i}\frac{\partial f^j}{\partial x^j}g\left(\frac{\partial}{\partial \tilde x^i},\frac{\partial}{\partial \tilde x^j}\right) \] and hence $(\tilde x^1,\tilde x^2,\tilde x^3)$ are also orthogonal coordinates. For this reason it works best if one does not set the unknowns to be the coordinate functions $(x_1,x_2,x_3)$, but the normalized coframe $(\omega^1,\omega^2,\omega^3)$, where $\omega^i=f_idx^i$ (no sum intended) and $f_i=1/|dx^i|$. Applying the Frobenius Theorem, it is easy to get an equivalent condition for the existence of the coordinate charts in terms of the coframe, namely that \begin{equation}\label{eqn:FrobeniusCondition} \omega^i\wedge d\omega^i=0 \end{equation} must hold for $i=1,2,3$.
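One direction of this equivalence is immediate and may help intuition: if the orthogonal chart exists, then $\omega^i=f_i\,dx^i$ (no sum) and
```latex
\[
\omega^i\wedge d\omega^i
  = f_i\,dx^i\wedge\big(df_i\wedge dx^i\big)
  = -f_i\,\big(dx^i\wedge dx^i\big)\wedge df_i
  = 0,
\]
```
since $dx^i\wedge dx^i=0$. The substance of the argument is the converse, which occupies the rest of the proof.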
Now, as the coframe has to be orthonormal, it has to satisfy the first Cartan structure equation \[ d\omega^i=\sum_j\omega^j\wedge\omega_j^i \] where $(\omega^j_i)$ is the connection matrix 1-form. Here appears the first difference between the Riemannian and Lorentzian case, although it does not yield any actual change in the proof: in the first case $\omega_i^j=-\omega_j^i$ for any $i,j$, but in the second we have \begin{equation}\label{eqn:firstDifference} \omega_1^2=-\omega_2^1,\quad\omega_1^3=\omega_3^1,\quad\omega_2^3=\omega_3^2. \end{equation} Hence \eqref{eqn:FrobeniusCondition} becomes \begin{align*} \omega^1\wedge\omega^2\wedge\omega_2^1+\omega^1\wedge\omega^3\wedge\omega_3^1&=0\\ \omega^1\wedge\omega^2\wedge\omega_2^1+\omega^2\wedge\omega^3\wedge\omega_3^2&=0\\ \omega^1\wedge\omega^3\wedge\omega_3^1+\omega^2\wedge\omega^3\wedge\omega_3^2&=0, \end{align*} thus, by alternatively subtracting one and adding the other we get the system \begin{equation}\label{eqn:equationSystem} \omega^1\wedge\omega^2\wedge\omega_2^1=0,\quad\omega^2\wedge\omega^3\wedge\omega_3^2=0,\quad\omega^1\wedge\omega^3\wedge\omega_3^1=0. \end{equation} We now write $\omega^i$ with respect to $\bar\omega^j$ and vice-versa as \[ \omega^i=b^i_j\bar\omega^j,\quad\bar\omega^j=\bar b^i_j\omega^i \] and we will solve for the $b^j_i$. We will solve \eqref{eqn:equationSystem}, hence we need $\omega^i_j$ and we start by noting that \[ \begin{split} \omega^l\wedge\omega^i_l=d\omega^i&=d\sum_jb^i_j\bar\omega^j=\sum_j\left(\sum_k\bar e_k(b^i_j)\bar\omega^k\wedge\bar\omega^j+b^i_j\bar\omega^k\wedge\bar\omega^i_k\right)\\ &=\sum_{j,k}\bar\omega^k\wedge\big(\bar e_k(b^i_j)\bar\omega^j+b^i_j\bar\omega^i_k\big)\\ &=\sum_{j,k,l}\omega^l\wedge\big(b^l_k\bar e_k(b^i_j)\bar\omega^j+b^l_kb^i_j\bar\omega^i_k\big). 
\end{split} \] As a consequence of the first difference we find a second one here: while \[ \omega^1_2=\sum_{j,k}\frac{1}{2}\big\{b^2_k\bar e_k(b^1_j)-b^1_k\bar e_k(b^2_j)\big\}\bar\omega^j+b^2_kb^i_j\bar\omega^1_k \] remains as in \cite{Deturck1984}, the other two differ due to \eqref{eqn:firstDifference} as follows: \[ \omega^1_3=\sum_{j,k}\frac{1}{2}\big\{b^3_k\bar e_k(b^1_j)+b^1_k\bar e_k(b^3_j)\big\}\bar\omega^j+b^3_kb^i_j\bar\omega^1_k \] and \[ \omega^2_3=\sum_{j,k}\frac{1}{2}\big\{b^3_k\bar e_k(b^2_j)+b^2_k\bar e_k(b^3_j)\big\}\bar\omega^j+b^3_kb^i_j\bar\omega^1_k. \] Again by following \cite{Deturck1984} we rewrite \eqref{eqn:equationSystem} substituting $\omega^i_j$ and obtaining \begin{gather*} 0=\sum_{i,l,j,k}b^1_ib^2_l\bar\omega^i\wedge\bar\omega^l\wedge\left[\frac{1}{2}\big\{b^2_k\bar e_k(b^1_j)-b^1_k\bar e_k(b^2_j)\big\}\bar\omega^j+b^2_kb^i_j\bar\omega^1_k\right]\\ 0=\sum_{i,l,j,k}b^1_ib^3_l\bar\omega^i\wedge\bar\omega^l\wedge\left[\frac{1}{2}\big\{b^3_k\bar e_k(b^1_j)+b^1_k\bar e_k(b^3_j)\big\}\bar\omega^j+b^3_kb^i_j\bar\omega^1_k\right]\\ 0=\sum_{i,l,j,k}b^2_ib^3_l\bar\omega^i\wedge\bar\omega^l\wedge\left[\frac{1}{2}\big\{b^3_k\bar e_k(b^2_j)+b^2_k\bar e_k(b^3_j)\big\}\bar\omega^j+b^3_kb^i_j\bar\omega^1_k\right] \end{gather*} The unknowns of the system are $(b_i^j)\in C^\infty(M,\text{SO}(2,1))$. We are going to prove that the linearization of this system is diagonal hyperbolic. Consider the linearization $\beta_j^i=(\delta b)_j^i$ and notice that we can assume that $\{\bar\omega^i\}=\{\omega^i\}$ when we linearize around $\{\omega^i\}$, as such $b^i_j(x)=\delta^i_j$. 
Thus, the linearized system is \begin{gather*} 0=\delta^1_i\delta^2_l\frac{1}{2}\big(\delta^2_k\bar e_k(\beta^1_j)-\delta^1_k\bar e_k(\beta^2_j)\big)\bar\omega^i\wedge\bar\omega^l\wedge\bar\omega^j+\text{ lower order terms in }\beta\\ 0=\delta^1_i\delta^3_l\frac{1}{2}\big(\delta^3_k\bar e_k(\beta^1_j)+\delta^1_k\bar e_k(\beta^3_j)\big)\bar\omega^i\wedge\bar\omega^l\wedge\bar\omega^j+\text{ lower order terms in }\beta\\ 0=\delta^2_i\delta^3_l\frac{1}{2}\big(\delta^3_k\bar e_k(\beta^2_j)+\delta^2_k\bar e_k(\beta^3_j)\big)\bar\omega^i\wedge\bar\omega^l\wedge\bar\omega^j+\text{ lower order terms in }\beta \end{gather*} in which the only non-zero elements are \begin{gather*} \frac{1}{2}\big(\bar e_2(\beta^1_3)-\bar e_1(\beta^2_3)\big)=\text{ terms of order 0 in }\beta,\\ \frac{1}{2}\big(\bar e_3(\beta^1_2)+\bar e_1(\beta^3_2)\big)=\text{ terms of order 0 in }\beta,\\ \frac{1}{2}\big(\bar e_3(\beta^2_1)+\bar e_2(\beta^3_1)\big)=\text{ terms of order 0 in }\beta. \end{gather*} As $(b^i_j(x))\in\text{SO}(2,1)$ we have that $(\beta^i_j)\in\mathfrak{so}(2,1)$, hence we can rewrite everything as \begin{gather*} \bar e_1(\beta^2_3)=\text{ terms of order 0 in }\beta\\ \bar e_2(\beta^1_3)=\text{ terms of order 0 in }\beta\\ \bar e_3(\beta^1_2)=\text{ terms of order 0 in }\beta. \end{gather*} The differential operator is thus \[ A(u)=\bar e_1(u)+\bar e_2(u)+\bar e_3(u) \] that is in diagonal form, and its symbol is \[ \sigma(\xi)=\sum_{i=1}^3\xi^i. \] To finally prove that the metric is diagonalizable we have to find a solution to the Cauchy problem given by the differential operator $A$ and a set of initial data to be chosen. To do so, we need these data to not be characteristic of the operator. By the form of $A$ we deduce that the characteristics of the system are the covectors that annihilate $e_1,e_2$ and $e_3$. 
Hence, the initial data for the Cauchy problem associated to the system can be given as the coframe $\{\omega^i\}$ on a surface $\Sigma\subset M$ with $e_i\notin T\Sigma$ for $i=1,2,3$ since we need $\omega^i(v)\ne0$ for all $v\in T\Sigma$. This concludes the proof of Theorem \ref{teo:main}. \bibliographystyle{acm}
\section{Introduction} A key feature of cognitive systems is their reliance on structured representations of knowledge \citep{langley2012cognitive}. A well-known structured representation is hierarchical task networks (HTNs) \citep{erol1994umcp}. HTNs represent a series of tasks at different levels that explicitly encode two relations: \begin{itemize} \item Task-subtask: each task (unless it is at the root) has a parent task and one or more children (unless it is a leaf). \item Order: sibling tasks that are at the root or ones that are children of the same parent have an ordered relation between them. \end{itemize} Because of its expressivity, the HTN framework has been used in many applications \citep{nau2004applications} and cognitive architectures \citep{choi2018evolution,laird2019soar}. Another recurring research topic in cognitive systems is goal reasoning, which is based on the following idea: an agent supervises its own actions in an attempt to achieve some goals in an environment; when the agent encounters discrepancies between its expectations and actual observations about the environment, new goals are generated as a result. \cite{cox2016midca} use a mechanism based on cause and effect. For example, $c \to d$ indicates a cause $c$ (e.g., there is a minefield) for a discrepancy $d$ (e.g., a mine is encountered). When the discrepancy $d$ is encountered, the goal $\neg c$ is generated (e.g., clear the minefield) to obviate the discrepancy $d$. Goal reasoning uses the notion of goal. A goal is a set of ascertainable conditions (i.e., either true or false) in a state. For example, the condition ``the agent encounters a mine'' can be determined conclusively. In contrast, HTN planning is concerned with tasks, which may or may not be goals\footnote{Some approaches reason with goals only. For example, hierarchical goal network (HGN) planning \citep{shivashankar2013hierarchical}, a variant of HTN, uses goals in hierarchies instead of tasks.
Others have combined tasks and goals \citep{nau2021gtpyhop}. }. In general, a task is symbolic: it is defined by the methods that describe how to achieve it. Hence tasks are (ambiguously) considered ``activities'' to be performed. ``Seek nearby mines'' is an example of a task that does not represent a specific state condition and therefore it is not a goal. Despite the dichotomy between tasks and goals, there is no theoretical constraint we know of that hinders the application of goal reasoning techniques to systems that reason with tasks. Along these lines, we introduce a modification of the HTN planning algorithm SHOP \citep{nau1999shop}. The original algorithm and variants are widely adopted \citep{nau2004applications,cox2016midca}. Our modification enhances HTN planning in two ways: (1) planning and execution of actions are interleaved to handle interaction in nontraditional HTN domains; (2) the agent can change its tasks in response to discrepancies encountered during execution with the use of task modifiers, functions that receive a task list and a state and produce a new task list. \section{Preliminaries} Our proposed approach is based on HTN planning, although we use more general definitions than the nomenclature used by \cite{nau1999shop} to better match our implementation. A state variable describes an attribute of a domain world. For example, $\mathit{loc}(\mathit{agent})=(0,0)$ indicates that the agent is at location $(0,0)$. A state $s$ is a set of all state variables. The set of all states is denoted $S$. Tasks symbolically represent activities in a domain. A task $t$ consists of a name and a list of arguments and can be either primitive or compound. The set of all tasks is denoted $T$. A task list is a sequence of tasks $\tilde{t}=(t_1,\dots,t_n)$. The set of all possible task lists is denoted $\tilde{T}$ (excluding the empty list). A primitive task can be achieved by an action. 
An \textit{action} is a 2-argument function $a:S\times\{t\}\to S \cup \{\texttt{nil}\}$, \begin{equation} a(s,t) = s', \end{equation} where $s$ is a state and $t$ is a primitive task. If $a$ is applicable to $t$ in $s$, $s'\in S$. Otherwise, $s' = \texttt{nil}$. A method describes how to decompose a compound task into subtasks. A \textit{method} is a 2-argument function $m:S\times\{t\}\to \tilde{T} \cup \{\texttt{nil}\}$, \begin{equation*} m(s,t) = \tilde{t}, \end{equation*} where $s$ is a state and $t$ is a compound task. If $m$ is applicable to $t$ in $s$, $ \tilde{t}\in \tilde{T}$. Otherwise, $\tilde{t} = \texttt{nil}$. An HTN planning problem is a 3-tuple $(s, \tilde{t}, D)$, where $s$ is a state, $\tilde{t}=(t_1, \dots,t_n)$ is a task list, and $D$ is the set of all actions and methods. A plan is a sequence of actions. Solutions (plans) for HTN planning problems are defined recursively. A plan $\pi=(a_1,\dots,a_m)$ is a solution for the HTN planning problem $(s, \tilde{t}, D)$ if one of the following cases is true: \begin{enumerate} \item If $\tilde{t}=\emptyset$, then $\pi=\emptyset$ (the empty plan). \item If $\tilde{t}\neq \emptyset$: \begin{enumerate} \item If $t_1$ is primitive, $a_1$ is applicable (i.e., $a_1(s,t_1) \neq \texttt{nil}$), and $(a_2,\dots,a_m)$ is a solution for $(a(s,t_1),(t_2, \dots,t_n),D)$. \item If $t_1$ is compound, there is an applicable method $m(s,t_1) \neq \texttt{nil}$, and $\pi$ is a solution for $(s, (m(s,t_1), t_2, \dots, t_n),D)$. \end{enumerate} \end{enumerate} In Case 2 (a), after applying $a_1$, the remainder plan $(a_2,\dots,a_m)$ is found to be a solution for the remaining tasks $(t_2 \dots,t_n)$. In Case 2 (b), the compound task $t_1$ is replaced with $m(s,t_1)$, and $\pi$ is found to be a solution for the new task list $(m(s,t_1), t_2, \dots, t_n)$. \section{Task Modifiers} In this section, we describe an extension to HTN called task modifiers. 
We provide an example of a task modifier in a maritime vehicle simulation domain. Then we describe an algorithm that integrates task modifiers and SHOP. The motivation for task modifiers is to provide a mechanism that handles unexpected events in some domains. Notably, this type of domain has the following characteristics: \begin{itemize} \item The agent observes an external environment and interacts with it through actions. The dynamics (state transitions) of the environment are defined by a set of Equations~(1). In contrast, traditional HTN planning domains use operators to transform states. \item Actions are irreversible in the sense that the environment cannot revert to a previous state by editing state variables. \item The agent does not have full knowledge of the environment's dynamics. After an action is applied, the environment transitions to a new state. The agent needs to observe and acquire information about the state. This necessitates interleaved planning and execution of actions. \item The agent's observations are partial. The agent needs to make inferences about the values of variables not observed. \item The environment is episodic. A terminal signal is sent when an episode ends. \end{itemize} Traditional HTN planners recursively decompose high-level tasks into simpler ones. As discussed in Section~2, the agent's task list $\tilde{t}=(t_1,\dots,t_n)$ can only be modified in one of two ways: \begin{enumerate} \item If $t_1$ is primitive and an applicable action for $t_1$ exists, the new task list is $(t_2,\dots,t_n)$. \item If $t_1$ is compound and an applicable method $m(s,t_1)$ exists, the new task list is \\$(m(s,t_1), t_2, \dots, t_n)$. \end{enumerate} Since the environment's dynamics are unknown to the agent, unexpected events may occur. For instance, the agent may encounter environmental hazards during a navigation task. This demands greater flexibility in terms of addition, deletion, and reordering of the tasks in $\tilde{t}$.
For this reason, we introduce task modifiers that provide another way to modify a task list: replace $\tilde{t}$ with a new task list $\tilde{t}'$. A \textit{task modifier} is a 2-argument function $\mathit{TM}:S \times \tilde{T} \to \tilde{T}$, \begin{equation*} \mathit{TM}(s,\tilde{t}) = \tilde{t}'. \end{equation*} The definition of task modifiers is abstract. Any procedure that receives an observation $s$ and a task list $\tilde{t}$ and outputs another task list $\tilde{t}'$ can be considered a task modifier. (When no changes are made to the task list, $\tilde{t} = \tilde{t}'$.) An abstract task modifier is used as part of an algorithm based on SHOP in Section~3.2. The implementations of two domain-specific task modifiers are described in Section~4.2. \subsection{Task Modifier Example} We show a use case of task modifiers in a maritime vehicle simulation domain called Minefield, where the agent is tasked with maximizing the survival of some transport ships that traverse through an area. Further details of this domain are described in Section~4.1. At the beginning of an episode (shown in Figure~\ref{fig:moos}), 10 transport ships are placed on the left side of the central region. The agent, the pirate boat, and three fishing boats are randomly placed in the upper and lower regions. After 20 seconds, the transport ships start to move to the right side. The pirate continuously moves to random grid cells in the central region and places mines along the way. A transport ship is destroyed when it touches a mine. The agent has no direct knowledge of which boat is the pirate. The mines in a cell are automatically cleared as the agent moves to the cell. The initial task list contains $\mathit{random\_moves}$, which keeps the agent in constant patrol of the central region. The episode terminates when the remaining transport ships reach the right side or all the ships have been destroyed.
The objective of the agent is to maximize the number of transport ships that survive. The agent uses a task modifier that modifies the task list in response to unexpected events in the environment. For instance, after encountering a mine in a cell $c$, the agent decides to search nearby cells for more mines to clear: \begin{align*} (\mathit{random\_moves}) \Rightarrow (\mathit{search\_near}(c),\mathit{random\_moves}) \end{align*} Alternatively, the agent may decide to pursue a suspect boat $b$ and abandon other tasks: \begin{align*} (\mathit{search\_near}(c),\mathit{random\_moves}) \Rightarrow (\mathit{follow}(b)) \end{align*} To achieve a similar capability without a task modifier, each method has to be rewritten to handle dynamic events the same way a task modifier would. The code would be more complicated than just using a task modifier. Additionally, a method replaces a single task and is agnostic of other tasks in a task list. In contrast, a task modifier is more flexible and can change any part of the task list. \begin{figure}[h] \centering \includegraphics[width=.5\textwidth]{moos.png} \caption{A view of the Minefield domain at the beginning of an episode.} \label{fig:moos} \end{figure} \subsection{Integrating Task Modifiers and SHOP} We now describe an algorithm that integrates SHOP with task modifiers and interleaves planning and execution. The pseudocode is shown in Algorithm~1. The type of task modifier described in the algorithm is abstract; the details of two domain-specific task modifiers are discussed in Section~4.2. The differences between the original SHOP algorithm and our algorithm are underlined. At each time step, the agent observes the state of the environment and executes an action without full knowledge of state variables. The algorithm starts with the procedure PLAN-ACT-TM (line 1) as an episode begins. The agent observes the current state $s$ (line 2). The procedure SEEK-PLAN-ACT-TM is called (line 3). 
SEEK-PLAN-ACT-TM (line 4) has a recursive structure similar to the HTN solution cases described in Section~2, but it does not maintain a plan because planning and execution are interleaved. If the current task list is empty or the episode terminates (line 5), the procedure returns (line 6). If the first task $t$ in the task list is primitive (line 8) and an applicable action exists (line 9), the action is executed (line 10). Then the agent observes the next state $s'$ (line 11). The task modifier receives $s'$ and the remaining tasks $R$ and updates the task list (line 12). The updated task list is passed to a recursive call of SEEK-PLAN-ACT-TM (line 13). If $t$ is compound and an applicable method $m$ is found (line 17), $t$ is replaced by its reduction (subtasks) by $m$ (line 18). When no applicable action or method is found, the procedure returns $\texttt{nil}$, indicating a failure (lines 15 and 21). The task modifier $\mathit{TM}$ is only called (line 12) after an action is applied (line 10) but not after a method decomposes the first task in the task list (line 18). This is a design choice based on the experimental domains, not a theoretical limitation. After an action is executed, unexpected changes (from the agent's perspective) may occur in the environment. In contrast, task decomposition is nearly instantaneous because the task list and the set of methods are internal to the agent.
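The control flow of Algorithm~1 can be sketched in plain Python. Here \texttt{env}, \texttt{actions}, \texttt{methods}, and \texttt{tm} are hypothetical simplified interfaces of our own (not the Pyhop API), standing in for the environment, the domain description $D$, and the task modifier:

```python
class ToyEnv:
    """A toy non-terminating environment, for illustration only."""
    def terminated(self, state):
        return False

def seek_plan_act_tm(env, state, tasks, actions, methods, tm):
    # Mirrors SEEK-PLAN-ACT-TM: interleaves decomposition, action
    # execution, and task-list modification after every executed action.
    if not tasks or env.terminated(state):
        return state
    t, rest = tasks[0], tasks[1:]
    if t in actions:                        # primitive task
        s_next = actions[t](state, t)       # apply the action
        if s_next is None:                  # no applicable action: failure
            return None
        rest = tm(s_next, rest)             # task modifier rewrites the list
        return seek_plan_act_tm(env, s_next, rest, actions, methods, tm)
    for m in methods.get(t, []):            # compound task: try each method
        subtasks = m(state, t)
        if subtasks is not None:
            result = seek_plan_act_tm(env, state, list(subtasks) + rest,
                                      actions, methods, tm)
            if result is not None:
                return result
    return None
```

With the identity modifier `tm = lambda s, tasks: tasks` this reduces to ordinary interleaved SHOP-style decomposition and execution; a non-trivial `tm` can insert, delete, or reorder the remaining tasks after each action, exactly as in line 12 of the pseudocode.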
\begin{algorithm}[!htbp] \caption{SHOP with Task Modifier} \label{algo_task} \begin{algorithmic}[1] \Procedure{plan-act-tm}{$\tilde{t},D$} \State \underline{observe $s$} \State \Return \Call{seek-plan-act-tm}{$s,\tilde{t},D$} \EndProcedure \Statex \Procedure{seek-plan-act-tm}{$s,\tilde{t},D$} \If{$\tilde{t} =\emptyset$ \textbf{or} \underline{the episode terminates} } \State \Return $s$ \EndIf \State $t\gets$ the first task in $\tilde{t}$; $R\gets$ the remaining tasks \If{$t$ is primitive} \If{there is an action $a(s,t) \neq \texttt{nil}$ } \State \underline{apply $a$} \State \underline{observe $s'$} \State \underline{$R\gets \mathit{TM}(s',R)$} \State \Return \Call{seek-plan-act-tm}{$s',R,D$} \Else \State \Return \texttt{nil} \EndIf \Else \For{every method $m(s,t) \neq \texttt{nil}$} \State $s\gets$ \Call{seek-plan-act-tm}{$s,(m(s,t),R),D$} \If{$s \neq $ \texttt{nil}} \State \Return $s$ \EndIf \EndFor \State \Return \texttt{nil} \EndIf \EndProcedure \end{algorithmic} \end{algorithm} \section{Experiments} To demonstrate the usage of task modifiers, we tested our implementation in two domains.\footnote{The code is available at \texttt{https://github.com/ospur/htn-tm}.} Both domains have some of the characteristics described in Section~3 that are atypical of traditional HTN domains. The intent of the experiments is to provide a qualitative comparison of our implementation and two simple baselines. \subsection{Domains} \paragraph{Rainy Grid.} In a $10 \times 10$ grid, the agent and a beacon randomly start at different locations and neither is at the exit, which is always in the bottom right corner. Each action produces a numerical reward. Rain occurs with a probability of $p$ and affects an action's reward. The agent knows the locations of the beacon and the exit but not $p$. If the agent reaches the beacon, the rain stops until the end of the current episode. The episode terminates when the agent reaches the exit. 
The objective is to maximize the episodic cumulative reward. Rainy Grid has the following tasks: \begin{itemize} \item $\mathit{move}(\mathit{dir})$. (Primitive) The agent moves one step right, up, left, or down. If it is not rainy, the action for this task has a reward of $-1$. If it is rainy, the action does nothing and has a reward of $-5$. \item $\mathit{go\_to}(\mathit{dest})$. (Compound) $\mathit{dest}$ is either the beacon or the exit. The method for this task recursively decomposes it into $(\mathit{move}(\mathit{dir}),\mathit{go\_to}(\mathit{dest}))$, where $\mathit{dir}$ is the direction toward $\mathit{dest}$. \end{itemize} \paragraph{Minefield.}\footnote{The domain was created using Mission Oriented Operating Suite Interval Programming (MOOS-IvP).} Continuing the description in Section 3.1, the entire area is a $20\times20$ grid; the central region is $20 \times 10$. The pirate continuously selects a random cell in the central region and moves toward it. At each step, with a probability of $p$, the pirate places 20 mines according to a multivariate Gaussian distribution. Minefield has the following tasks ($c$ is a grid cell and $b$ is a boat): \begin{itemize} \item $\mathit{move}(c)$. (Primitive) The action for this task is applicable if the agent is adjacent to $c$. (The agent can move diagonally.) \item $\mathit{arrest}(b)$. (Primitive) The action for this task is applicable if the agent is adjacent to $b$. If $b$ is the pirate, it ceases any activity for the rest of the episode. Otherwise, nothing happens. \item $\mathit{random\_moves}$. (Compound) This task is to randomly patrol the central region. The method for this task recursively decomposes it into $(\mathit{move\_diag}(c_1),\mathit{random\_moves})$, where $c_1$ is a random cell. \item $\mathit{move\_diag}(c)$. (Compound) The task is to move along the shortest path to $c$. 
The method for this task recursively decomposes it into $(\mathit{move}(c_1),\mathit{move\_diag}(c))$, where $c_1$ is a cell adjacent to the agent and in the shortest path to $c$. \item $\mathit{search\_near}(c)$. (Compound) The task is to clear the mines in the adjacent cells of $c$. Note that the mines in a cell are removed once the agent reaches it. The method for this task decomposes it into $(\mathit{move\_diag}(c_1),\dots,\mathit{move\_diag}(c_8))$, where $c_1,\dots,c_8$ are the 8 cells adjacent to $c$ in counterclockwise order. \item $\mathit{follow}(b)$. (Compound) The task is to follow $b$ until the agent is in the same cell as $b$. The method for this task recursively decomposes it into $(\mathit{move}(c_1),\mathit{follow}(b))$, where $c_1$ is one step closer to the current location of $b$. \end{itemize} \subsection{Implementation of Task Modifiers} Based on Algorithm~1, we created a modified version of Pyhop (a Python version of SHOP). We then implemented a task modifier for each domain. The following is a high-level description of the task modifiers and the intuition behind them. \paragraph{Rainy Grid task modifier.} The agent does not know the true probability of rain. Instead, it assumes a probability $p'$. In our experiments, we set $p'$ to 0.5. The expected cost of a single move action is computed: $E(\mathit{cost}) = \frac{1 + p'}{1 - p'}$. Then the distances between its current location, the beacon, and the exit are computed. Multiplying $E(\mathit{cost})$ by the corresponding distance produces the expected cost of each task. The agent then decides whether to (1) directly go to the exit or (2) go to the beacon first and then the exit.
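Under the assumed rain probability $p'$, this decision reduces to comparing two expected costs. The sketch below is our illustration, not the actual implementation; it assumes Manhattan distance (consistent with four-directional moves) and that moves after reaching the beacon cost 1 each, since the rain stops:

```python
def expected_move_cost(p_rain):
    # Expected cost of one successful move under the assumed rain
    # probability p', using the paper's formula E(cost) = (1+p')/(1-p').
    return (1.0 + p_rain) / (1.0 - p_rain)

def choose_task_list(agent, beacon, exit_pos, p_rain=0.5):
    def dist(a, b):  # Manhattan distance (assumption: 4-directional moves)
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    e = expected_move_cost(p_rain)
    direct_cost = e * dist(agent, exit_pos)
    # After reaching the beacon the rain stops for the rest of the
    # episode, so the remaining moves cost 1 each.
    via_beacon_cost = e * dist(agent, beacon) + dist(beacon, exit_pos)
    if via_beacon_cost < direct_cost:
        return [("go_to", "beacon"), ("go_to", "exit")]
    return [("go_to", "exit")]
```

For $p'=0.5$ the expected cost per move is 3, so the detour through the beacon pays off whenever the beacon lies close enough to the agent's route to the exit.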
The possible modifications are as follows: \begin{enumerate} \item $ (\mathit{go\_to}(\mathit{exit})) \Rightarrow (\mathit{go\_to}(\mathit{beacon}),\mathit{go\_to}(\mathit{exit}))$ \item $ (\mathit{go\_to}(\mathit{beacon}),\mathit{go\_to}(\mathit{exit})) \Rightarrow (\mathit{go\_to}(\mathit{exit})) $ \end{enumerate} \paragraph{Minefield task modifier.} Assume that the current task list is $(t_1,\dots,t_n)$. The following modifications are considered: \begin{enumerate} \item When the agent encounters one or more mines in a cell $c$, the task $\mathit{search\_near}(c)$ is inserted: \begin{align*} (t_1,\dots,t_n) \Rightarrow (\mathit{search\_near}(c),t_1,\dots,t_n) \end{align*} \item The agent estimates which boat is the pirate and decides to follow the boat $b$. This modification consists of two steps: (1) all previous $\mathit{follow}$ tasks are removed from the task list; (2) then a new one is added. This modification triggers only if the pirate has not been arrested. \begin{align*} (t_1,\dots,t_n) & \Rightarrow (t_1',\dots,t_m') \quad \text{remove all $\mathit{follow}$ tasks} \\ (t_1',\dots,t_m') & \Rightarrow (\mathit{follow}(b),t_1',\dots,t_m') \quad \text{add a new $\mathit{follow}$ task} \end{align*} \item When the agent is adjacent to a suspect boat $\mathit{b}$ after following it and the pirate has not been arrested, the task $\mathit{arrest}(b)$ is added to the task list: \begin{align*} (t_1,\dots,t_n) \Rightarrow (\mathit{arrest}(b),t_1,\dots,t_n) \end{align*} \end{enumerate} The identity of the pirate and the probability of it placing mines are both unknown, although the agent knows whether the pirate has been arrested. Under such conditions, the agent must balance two objectives: \begin{itemize} \item Clear as many mines as possible: this objective directly impacts the number of transport ships that survive. A reduced number of mines in the central region means that the transport ships are more likely to survive.
\item Arrest the pirate as soon as possible: once completed, this objective prevents the pirate from placing any more mines. However, solely focusing on this objective may cause many transport ships to be destroyed in the process. \end{itemize} \subsection{Baselines} \paragraph{Rainy Grid baselines.} The purpose is to compare an agent that uses a task modifier with two baselines that have fixed task lists. The task list of Baseline~1 is $(\mathit{go\_to}(\mathit{exit}))$. The task list of Baseline~2 is $(\mathit{go\_to}(\mathit{beacon}),\mathit{go\_to}(\mathit{exit}))$. In other words, Baseline 1 always goes to the exit directly; Baseline 2 always goes to the beacon first (turning off the rain) and then the exit. \paragraph{Minefield baselines.} Since the domain is complex and has many variables, it is useful to establish a minimal performance baseline without any agent, the purpose of which is to show that an agent with our task modifier indeed positively impacts the survival of the transport ships. The other baseline has a random task modifier, which inserts a random task at the beginning of the task list when invoked (Algorithm~\ref{algo_task} line 12). The purpose of the random baseline is similar: it shows whether our task modifier performs better than modifying the task list randomly. \subsection{Results} Since each domain has an objective for the agent to achieve, the performance metric is based on that objective. In the Rainy Grid domain, each move has some cost associated with it, and therefore the performance metric is the total cost. In the Minefield domain, the objective is to ensure as many transport ships survive as possible, and thus the performance metric is the number of transport ships that survive at the end of an episode. Figure~\ref{fig:exp} (a) shows the results of the task modifier agent and two baselines in the Rainy Grid domain. The vertical axis is the cumulative reward. The horizontal axis is the true probability of rain.
Each point is the average of 2000 runs. When the probability of rain is low, the task modifier agent performs similarly to the baselines. As the probability increases, the task modifier agent begins to outperform the baselines. Figure~\ref{fig:exp} (b) shows that in the Minefield domain, the task modifier agent outperforms the two baselines in all the probability configurations tested. The vertical axis is the number of transport ships that survive until the end of an episode. The horizontal axis is the probability of the pirate boat placing mines at each step. Each point is the average of 50 runs. We perform statistical significance tests on the data from both domains. Table~\ref{tab} shows the results. The first number is the $t$-statistic and the second number is the $p$-value. For the Rainy Grid domain, due to the small difference in reward when the probability of rain is low, we run $t$-tests only on the data where the probability is at least 0.6. We find a statistically significant difference in the average reward between the task modifier agent and the baselines. For the Minefield domain, we corroborate that the task modifier agent outperforms the baselines through $t$-tests on the data where the probability of placing mines ranges from 0.2 (low) to 0.5 (high).
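The entries of Table~\ref{tab} come from two-sample $t$-tests; the statistic can be reproduced with a short sketch. The Welch (unequal-variance) form shown here is one plausible variant, and the reward samples are synthetic placeholders, not the experimental data.

```python
# Two-sample t-statistic, Welch form (unequal variances).
# The reward samples below are synthetic placeholders, not the paper's data.
from math import sqrt
from statistics import mean, variance

def welch_t(xs, ys):
    """t = (mean(xs) - mean(ys)) / sqrt(var(xs)/n + var(ys)/m)."""
    vx = variance(xs) / len(xs)
    vy = variance(ys) / len(ys)
    return (mean(xs) - mean(ys)) / sqrt(vx + vy)

tm_rewards = [-12.0, -10.5, -11.0, -9.5]         # task modifier agent (synthetic)
baseline_rewards = [-15.0, -14.0, -16.5, -13.5]  # baseline (synthetic)
t_stat = welch_t(tm_rewards, baseline_rewards)   # positive: TM agent is better
```

A positive $t$-statistic paired with a small $p$-value (computed from the $t$ distribution with the Welch degrees of freedom) is what the table reports for each comparison.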
\begin{figure} \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{rainy.png} \caption{Rainy Grid} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\textwidth]{minefield.png} \caption{Minefield} \end{subfigure} \caption{Comparison of the task modifier (TM) agent and baselines.} \label{fig:exp} \end{figure} \begin{table}[h] \small \centering \begin{tabular}{ |c|c|c|c|c| } \hline \multirow{2}{*}{Rainy Grid} & \multicolumn{4}{|c|}{Probability of Rain}\\\cline{2-5} & 0.6 & 0.7 & 0.8 & 0.9 \\ \hline TM Agent and Baseline 1 & 14.10, 4.38e-44 & 14.31, 2.46e-45 & 16.74, 7.84e-61 & 17.55, 1.83e-66 \\ TM Agent and Baseline 2 & 6.01, 2.01e-9 & 5.21, 2.04e-7 & 6.66, 3.05e-11 & 4.89, 1.03e-6 \\ \hline \multirow{2}{*}{Minefield} & \multicolumn{4}{|c|}{Probability of Placing Mines}\\\cline{2-5} & 0.2 & 0.3 & 0.4 & 0.5 \\ \hline TM Agent and No Agent & 3.86, 0.0002 & 3.03, 0.003 & 3.97, 0.0001 & 5.10, 1.63e-6\\ TM Agent and Random Agent & 4.37, 3.10e-5 & 2.34, 0.021 & 3.25, 0.0016 & 5.78, 8.93e-8 \\ \hline \end{tabular} \caption{$t$-statistics and $p$-values of select data from the experiments.} \label{tab} \end{table} \section{Related Work} The idea of organizing tasks hierarchically was originally proposed by \cite{sacerdoti1974planning}. Works on HTN planning established the theoretical underpinnings of tasks, methods, and actions \citep{erol1994umcp,wilkins2000using}. The dominant HTN planning paradigm is ordered task decomposition as implemented in SHOP and SHOP2. It commits to a total order of the tasks unlike earlier planners \citep{wilkins2000using}, which use partial-order planning. SHOP2 allows partial order between tasks in methods, but when tasks are committed they need to be totally ordered. 
While losing the flexibility of partial-order plan representations, ordered task decomposition has resulted in significant running-time gains, one of the key reasons why this paradigm has become dominant. Although our extension is based on SHOP, the same design principle works for SHOP2 as the only difference between SHOP and SHOP2 is how the first task is selected; SHOP2 maintains a partially ordered list and hence selects a task that has no predecessor \citep{nau2001total}. HTN planning is shown to be strictly more expressive than STRIPS planning \citep{erol1994htn}. Our work is based on SHOP and has the same expressive power as an HTN planner. This type of hierarchical representation has been used by several cognitive architectures. For example, SOAR \citep{laird1990integrating} integrates hierarchical execution and learning to handle dynamic environments. It dynamically selects actions based on its production memory. When a solution is successfully found, SOAR learns the decisions that led to the solution. ICARUS \citep{choi2018evolution} uses a teleoreactive process to learn hierarchical planning knowledge whereby gaps in planning knowledge trigger a procedure to learn targeted HTNs. Our work is closely related to the actor view of planning \citep{ghallab2014actors}, which emphasizes the need for interleaved planning and execution. It advocates hierarchical online plan generation; as stated by \cite{ghallab2014actors}, the actor should "refine, extend, update, change, and repair its plans throughout the acting process." This builds on previous studies of interleaving planning and execution. For example, \cite{fikes1971strips} propose adding inhibitors as an execution strategy for plans to ensure that invalid actions are not executed. Cognitive architectures including SOAR, ICARUS, and MIDCA use a variety of mechanisms to identify gaps in their planning knowledge detected during execution and learn to fill those gaps.
\cite{sirin2004htn} combine HTN planning and execution to generate semantic web service composition plans. Our use of methods is identical to that of \cite{sirin2004htn}: to decompose tasks into subtasks. However, task modifiers can replace task sequences. Task insertion \citep{xiao2020refining} is used in domains where HTN methods are incomplete. Primitive tasks are inserted to fill the gaps in HTN methods. In our work, task modifiers are used in response to unexpected events in the environment, not to fulfill incomplete HTN methods. Replanning is needed when the execution of the current plan fails in a state that prevents the remaining portion of the plan from being executed. For instance, \cite{fox2006plan} examine two plan completion strategies: adapting the remaining plan or generating a new plan from scratch starting from the failing state. In general, adapting the remaining plan is known to be computationally harder than planning from scratch \citep{nebel1995plan}. \cite{warfield2007adaptation} present a plan adaptation algorithm for HTN replanning. The difference between replanning and our work is that task modifiers can change tasks, including the input task list, whereas in replanning the input task list remains the same. The partially observable Markov decision process (POMDP) formulation can be used for planning when states are partially observable. The formulation enables planning in advance for any contingency the agent may encounter \citep{kaelbling1998planning}. The observation function $O(s,a,o)$ indicates the probability of making an observation $o$ when executing an action $a$ in a state $s$. For instance, an agent might know the layout of a labyrinth it is navigating but not its exact location within the labyrinth. This means the agent can partially observe its whereabouts.
When the agent moves forward it may encounter a red wall (i.e., an observation $o$), which indicates the agent is in either of the two possible locations that have a red wall (i.e., the probability is 0.5). Using this probabilistic information, a POMDP agent can plan for every circumstance ahead of time. However, our work and goal reasoning in general deal with potentially unforeseen and unmodeled situations (e.g., no prior knowledge of $O(s,a,o)$) and the overall goal can be changed. \section{Conclusions} In this paper we describe an extension of the HTN planning paradigm called task modifiers. A task modifier receives a task list and a state and produces a new task list. We describe a SHOP-based algorithm that combines task modifiers and interleaved planning and execution, which provides greater flexibility when handling unexpected events in the environment. We implemented this algorithm and two domain-specific task modifiers. Our implementation is shown to be effective in two domains: a stochastic grid environment and a maritime vehicle simulation, where the agent is tasked with protecting transport ships. For future work, we want to create a task modifier procedure that is more domain-independent. In addition to being a function, this version of the task modifier has access to a set of modifications. A modification consists of a task list and a set of conditions; it is applicable to a task list given an observation if the conditions hold. Applying the modification results in a new task list. The task modifier will select appropriate modifications based on some criteria that can be learned. \section*{Acknowledgements} This research is supported by the Office of Naval Research grants N00014-18-1-2009 and N68335-18-C-4027 and the National Science Foundation grant 1909879. \bibliographystyle{cogsysapa}
\section{Introduction} In 1935, Stefan Banach wrote down Problem 1 in the Scottish Book: {\it when does a metric (possibly Banach) space $X$ admit a condensation (i.e. a bijective continuous mapping) onto a compactum (= compact metric space)}? The answer to this problem for Banach spaces can be found in \cite{bp,banakh,os,pytk}. Independently of S. Banach, problems concerning condensations were posed by P.S. Alexandroff (\cite{arh}, Problem 2.7): {\it when does a Hausdorff space $X$ admit a condensation onto a Hausdorff compact space?} \medskip The main result of this paper is the following theorem that answers Banach's Problem in the class of metric spaces of weight continuum and Alexandroff's Problem in the class of metric spaces of weight $\lambda=\lambda^{\aleph_0}$. \begin{theorem} Let $\lambda$ be a cardinal number such that $\lambda=\lambda^{\aleph_0}$. If $(X,\tau)$ is a metric space of weight $\lambda$ then $\tau$ is the supremum of two topologies $\tau_1$ and $\tau_2$, where $(X,\tau_i)$ is homeomorphic to a Banach space of weight $\lambda$ for $i=1,2$. \end{theorem} The motivation and the idea of the proof of the main result of this paper follow from Theorem 1 in \cite{pytk}: {\it Every separable absolute Borel space $X$ condenses onto the Hilbert cube, whenever $X$ is not $\sigma$-compact}. \section{Main definitions and notation} Recall that a topology $\tau$ on $X$ is the supremum of topologies $\tau_1$ and $\tau_2$ on $X$ if it is the coarsest topology on $X$ that is finer than $\tau_i$ for every $i=1,2$. Let $\lambda$ be an infinite cardinal, $S$ a set of cardinality $\lambda$, and let $I=[0,1]$ be the closed unit interval. Define an equivalence relation $E$ on $I\times S$ by $(x, \alpha)E(y, \beta)$ if either $x=0=y$ or $(x,\alpha)=(y,\beta)$. Let $H(\lambda)$ be the set of all equivalence classes of $E$; in other words, $H(\lambda)$ is the quotient set obtained from $I\times S$ by collapsing the subset $\{0\}\times S$ to a point.
For each $x\in I$ and each $\alpha\in S$, $\langle x,\alpha\rangle$ denotes the element of $H(\lambda)$ corresponding to $(x, \alpha)\in I\times S$. Consider the topology $\tau_{\rho}$ induced by the metric $\rho$ on $H(\lambda)$ defined by $\rho(\langle x,\alpha\rangle, \langle y,\beta\rangle)=|x - y|$ if $\alpha=\beta$ and $\rho(\langle x,\alpha\rangle, \langle y,\beta\rangle)=x+y$ if $\alpha\neq \beta$. The set $H(\lambda)$ with this topology is called {\it the metrizable hedgehog of spininess $\lambda$} and is often denoted by $(J(\lambda),\tau_{\rho})$ (\cite{H10}, 4.1.5). The space $(J(\lambda),\tau_{\rho})$ is a complete, non-compact, metric space of weight $\lambda$. Moreover, the product $J(\lambda)^{\aleph_0}$ is a universal space for metrizable spaces of weight $\lambda$. Every Banach space of weight $\lambda\geq\aleph_0$ is homeomorphic to $J(\lambda)^{\aleph_0}$ \cite{tur}. Fix $d\in S$. The second topology $\tau_d$ on $H(\lambda)$ is the topology generated by the following neighborhood base: $\{O(z)\in \tau_{\rho}\}$ for each $z\in J(\lambda)\setminus ((0,1]\times \{d\})$ and $\{O(K,\epsilon, z)=(x-\epsilon,x+\epsilon)\times (S\setminus K): K\in [S]^{<\omega}, d\not\in K, \epsilon>0\}$ for $z=(x,d)\in (0,1]\times \{d\}$. The set $H(\lambda)$ with this topology is denoted by $(J(\lambda),\tau_{d})$. The space $(J(\lambda),\tau_{d})$ is a Hausdorff compact space. Note that if $d_1,d_2\in S$ and $d_1\neq d_2$ then $\tau_{\rho}=\sup \{\tau_{d_1},\tau_{d_2}\}$. For the terms and symbols that we do not define, we follow \cite{H10}. \section{Proof of Theorem 1.1} \begin{proof} Let $(X,\tau)$ be a metric space with the metric $\rho$ of weight $\lambda$. Because $\lambda=\lambda^{\aleph_0}$, $\lambda$ is not sequential (K\"{o}nig's theorem; see e.g. (\cite{sirp}, p. 181)), so that (\cite{stone}, Th.
2.7) $X$ has a metrically discrete subset $D$ of cardinality $\lambda$ ($D$ is metrically discrete if there is $\epsilon>0$ such that every two distinct points $x,y\in D$ satisfy $\rho(x,y)\geq \epsilon$). Note that any subset of $D$ is a closed subset of $X$. Let $D=\bigcup\limits_{i=1}^{4}D_i$ where $D_i\cap D_j=\emptyset$ for $i\neq j$ and $|D_i|=\lambda$ for $i,j\in \{1,2,3,4\}$. Let $V_i=\{x\in X: \rho(x,D_i)<\frac{\epsilon}{3}\}$. Clearly $\overline{V_i}\cap \overline{V_j}=\emptyset$ for $i\neq j$ and $D_i\subseteq V_i$ for $i\in \{1,2,3,4\}$. Let $S$ be a set of cardinality $\lambda$ and fix distinct points $d_0, d_1, d_2\in S$. Then $I'=((0,1]\times \{d_0\})\cup \{0\}\subset J(\lambda)$ is homeomorphic to $I=[0,1]$. We will construct two condensations $f,g: X\rightarrow (J(\lambda)\times J(\lambda)^{\aleph_0}\times J(\lambda)^{\aleph_0})\cong J(\lambda)^{\aleph_0}$ where $J(\lambda)$ is the metrizable hedgehog of spininess $\lambda$ (i.e. $J(\lambda)=(J(\lambda),\tau_{\rho})$) such that no matter which topology $\tau_{\rho}$, $\tau_{d_1}$ or $\tau_{d_2}$ is considered in the space $J(\lambda)$, the following properties are satisfied: I) $f\upharpoonright (X\setminus (V_3\cup V_4))$ is a homeomorphism, II) $g\upharpoonright (X\setminus (V_1\cup V_2))$ is a homeomorphism, III) $\overline{f(V_4)}\cap \overline{f(V_1)}=\emptyset$ and $\overline{f(V_3)}\cap \overline{f(V_2)}=\emptyset$, IV) $\overline{g(V_2)}\cap \overline{g(V_4)}=\emptyset$ and $\overline{g(V_1)}\cap \overline{g(V_3)}=\emptyset$. \medskip {\it Construction of $f$}. Since $X$ is a normal space, there is a continuous mapping $\varphi_0: X\rightarrow I'$ such that $\varphi_0(\overline{V_1}\cup \overline{V_3})=0$ and $\varphi_0(\overline{V_2}\cup \overline{V_4})=1$. Let $F_1=\varphi_0^{-1}([0,\frac{1}{2}])$ and $F_2=\varphi_0^{-1}([\frac{1}{2},1])$. By Kowalsky's Theorem (\cite{kow} or (\cite{H10}, Th.
4.4.9)), the space $X\setminus(V_3\cup V_4)$ is embedded in $(J(\lambda)\setminus ((0,1]\times \{d_1, d_2\}))^{\aleph_0}$. Let $\varphi_1: X\setminus(V_3\cup V_4)\rightarrow (J(\lambda)\setminus ((0,1]\times \{d_1, d_2\}))^{\aleph_0}$ be an embedding. Then $\varphi_2: X\setminus(V_3\cup V_4)\rightarrow I'\times J(\lambda)^{\aleph_0}\times \{a^0\}$ where $a^0\in (J(\lambda))^{\aleph_0}$ and $\varphi_2(x)=(\varphi_0(x),\varphi_1(x),a^0)$ is also an embedding. Since $\varphi_0(F_1)\subseteq [0,\frac{1}{2}]$ and $\varphi_0(V_4)=1$, it follows that $F_1\cap V_4=\emptyset$. Hence $(X\setminus (V_3\cup V_4))\cap F_1=F_1\setminus V_3$. Since $\varphi_0(V_3)=0$, we have $V_3\subseteq F_1$. Similarly $(X\setminus (V_3\cup V_4))\cap F_2=F_2\setminus V_4$ and $V_4\subseteq F_2$. \medskip {\it Construction of $f_1$ and $f_2$} where $f_1: F_1\rightarrow ((0,\frac{1}{2}]\times \{d_0\}\cup \{0\}\cup ((0,1]\times (S\setminus \{d_0\})))\times J(\lambda)^{\aleph_0}\times J(\lambda)^{\aleph_0}$, $f_2: F_2\rightarrow (([\frac{1}{2},1]\times \{d_0\})\cup \{0\})\times J(\lambda)^{\aleph_0}\times J(\lambda)^{\aleph_0}$ such that $f_1\upharpoonright (F_1\setminus V_3)=\varphi_2\upharpoonright (F_1\setminus V_3)$ and $f_2\upharpoonright (F_2\setminus V_4)=\varphi_2\upharpoonright (F_2\setminus V_4)$. The mappings $f_1$ and $f_2$ are constructed similarly, but the image of $f_2$ is more complex, so we will construct only the mapping $f_2$. Note that $D_4\subseteq V_4\subseteq F_2$. Let $D_4=\bigcup \{T_i: i\in \omega\}$ where $T_i\cap T_j=\emptyset$ for $i\neq j$ and $|T_i|=\lambda$ for each $i,j\in \omega$. The family $\{F_2\setminus V_4, T_i, i\in\omega\}$ is discrete. Define the mapping $\varphi: (F_2\setminus V_4)\cup D_4\rightarrow \mathbb{N}$ by $\varphi((F_2\setminus V_4)\cup T_1)=1$ and $\varphi(T_i)=i$ for $i>1$. Let $\varphi^*: F_2\rightarrow \mathbb{R}$ be a continuous extension of $\varphi$.
Let $\Phi_1=(\varphi^*)^{-1}((-\infty, \frac{3}{2}])$ and $\Phi_i=(\varphi^*)^{-1}([i-\frac{1}{2},i+\frac{1}{2}])$ for $i>1$. Then the family $\{\Phi_i : i\in\omega\}$ is a closed locally-finite cover of $F_2$ and $\Phi_i\setminus \bigcup \{\Phi_j: j\in \omega$, $j\neq i\}\supseteq T_i$ for each $i\in \omega$. Let $a=(a_j)\in J(\lambda)^{\aleph_0}=\{b=(b_j): b_j\in J(\lambda)$ for $j\in \omega\}$. Denote $J(\lambda)^{\aleph_0}[a]=\{b=(b_j)\in J(\lambda)^{\aleph_0}: |\{j\in \omega : b_j\neq a_j\}|<\aleph_0\}$. Choose $a^i=(a^i_j)\in J(\lambda)^{\aleph_0}$ such that $J(\lambda)^{\aleph_0}[a^k]\cap J(\lambda)^{\aleph_0}[a^m]=\emptyset$ for $k\neq m$, $k,m\in\omega$. Let $\Phi_0=F_2\setminus V_4$. {\it Construction of continuous mappings $\psi_k$}, $\psi_k: \bigcup \{\Phi_i: i=0,...,k\}\rightarrow ((\frac{1}{2},1]\times J(\lambda)^{\aleph_0}\times J(\lambda)^{\aleph_0})\cup \varphi_2(F_1\cap F_2)$ for each $k=0,1,...$ such that 1). $\psi_k$ is an injection, 2). $\psi_{k+1}$ is an extension of $\psi_k$, 3). $((\frac{1}{2},1]\times J(\lambda)^{\aleph_0}\times J(\lambda)^{\aleph_0}\setminus \bigcup\{J(\lambda)^{\aleph_0}[a^i]: i=k,...\})\cup \varphi_2(F_1\cap F_2)\subseteq \psi_{k+1}(\bigcup\{\Phi_i: i=0,...,k\})\subseteq ((\frac{1}{2},1]\times J(\lambda)^{\aleph_0}\times J(\lambda)^{\aleph_0}\setminus \bigcup\{J(\lambda)^{\aleph_0}[a^i]: i=k+1,...\})\cup \varphi_2(F_1\cap F_2)$, 4). $\psi_{k+1}\upharpoonright (\bigcup\limits_{i=0}^{k+1} \Phi_i\setminus \bigcup\limits_{i=0}^{k} \Phi_i)$ is a homeomorphism for each $k\in \omega$. Let $\psi_0=\varphi_2\upharpoonright \Phi_0$. Assume that $\psi_0,...,\psi_m$ are constructed. Let $C_0=\{(\frac{1}{2},1]\times J(\lambda)^{\aleph_0}\times (J(\lambda)^{\aleph_0}\setminus \bigcup\{J(\lambda)^{\aleph_0}[a^i]: i=m+1,...\})\setminus \psi_m(\bigcup\limits_{i=0}^m \Phi_i)\}$. Since $\lambda=\lambda^{\aleph_0}$, $|C_0|=\lambda$. Let $\eta_{m+1}:T_{m+1}\rightarrow C_0$ be a condensation.
For $[\frac{1}{2},1]\times J(\lambda)^{\aleph_0}\times J(\lambda)^{\aleph_0}$ we write as $I_0\times \prod\limits_{i=1}^{\infty} J^1_i\times \prod\limits_{i=1}^{\infty} J^2_i$. Let $\pi_i^j$ be the projection $[\frac{1}{2},1]\times J(\lambda)^{\aleph_0}\times J(\lambda)^{\aleph_0}$ onto $J_i^j$ for $j=1,2$ and $i\in \omega$ and $\pi_0$ be the projection $[\frac{1}{2},1]\times J(\lambda)^{\aleph_0}\times J(\lambda)^{\aleph_0}$ onto $I_0$. The set $\bigcup \{\Phi_i: i=0,...,m\}\cup T_{m+1}$ is closed in $\bigcup \{\Phi_i: i=0,...,m+1\}$. Then $\bigcup \{\Phi_i: i=0,...,m+1\}\setminus (\bigcup \{\Phi_i: i=0,...,m\}\cup T_{m+1})=\bigcup\{M_j: j\in\omega\}$ where $M_j\subseteq M_{j+1}$ and $M_j$ is closed in $\bigcup \{\Phi_i: i=0,...,m+1\}$ for each $j\in \omega$. Define $f^2_j: (\bigcup \{\Phi_i: i=0,...,m\}\cup T_{m+1}\cup M_j)\rightarrow J^2_j$ for $j\in \omega$ as follows $$ f^2_j(x)= \left\{ \begin{array}{lcr} \pi^2_j\psi_m(x)$, \ \ \ \ if $x\in \bigcup\limits_{i=0}^m \Phi_i, \\ \pi^2_j\eta_{m+1}(x)$, \ if $x\in T_{m+1}, \\ a^j_{m+1}$ , \ \ \ \ \ \ \ if $x\in M_j.\\ \end{array} \right. $$ Let $f^{2,*}_j: \bigcup\limits_{i=0}^{m+1} \Phi_i \rightarrow J^2_j$ be a continuous extension of $f^2_j$. Fix a homeomorphic embedding $\Omega_j: M_j\rightarrow \prod\limits_{i=1}^{\infty} J^1_i$ for each $j\in \omega$. Define $f^1_j:(\bigcup\limits_{i=0}^m \Phi_i\cup T_{m+1}\cup M_j)\rightarrow J^1_j$ for $j\in \omega$ as follows $$ f^1_j(x)= \left\{ \begin{array}{lcr} \pi^1_j\psi_m(x)$, \ \ \ \ if $x\in \bigcup\limits_{i=0}^m \Phi_i, \\ \pi^1_j\eta_{m+1}(x)$, \ if $x\in T_{m+1}, \\ \pi^1_j \Omega_j(x)$ , \ \ \ \ \ \ \ if $x\in M_j.\\ \end{array} \right. $$ Let $f^{1,*}_j: \bigcup\limits_{i=0}^{m+1} \Phi_i \rightarrow J^1_j$ be a continuous extension of $f^1_j$. 
Consider $f^0: \bigcup\limits_{i=0}^{m} \Phi_i\cup T_{m+1}\rightarrow I_0$ as follows: $$ f^0(x)= \left\{ \begin{array}{lcr} \pi_0 \psi_m(x)$, \ \ \ \ if $x\in \bigcup\limits_{i=0}^m \Phi_i, \\ \pi_0 \eta_{m+1}(x)$, \ if $x\in T_{m+1}.\\ \end{array} \right. $$ By properties (1)-(4) and the definition of $\eta_{m+1}$, $(f^0)^{-1}(\frac{1}{2})=F_1\cap F_2$. Let $f^{0,*}: \bigcup\limits_{i=0}^{m+1} \Phi_i \rightarrow I_0$ be a continuous extension of $f^0$ such that $(f^{0,*})^{-1}(\frac{1}{2})=F_1\cap F_2$. Define $\psi_{m+1}: \bigcup\limits_{i=0}^{m+1} \Phi_i \rightarrow (((\frac{1}{2},1]\times \{d_0\})\times J(\lambda)^{\aleph_0}\times J(\lambda)^{\aleph_0})\cup \varphi_2(F_1\cap F_2)$ as follows: $\psi_{m+1}(x)=f^{0,*}(x)\times \{f^{1,*}_j(x)\}_{j=1}^{\infty}\times \{f^{2,*}_j(x)\}_{j=1}^{\infty}$. Note that the system of functions $\psi_0, ..., \psi_{m+1}$ is such that properties (1)-(4) are satisfied. Thus the function $\psi_k$ is constructed for each $k\in \omega$. Define $f_2: F_2 \rightarrow (([\frac{1}{2},1]\times \{d_0\})\cup \{0\})\times J(\lambda)^{\aleph_0}\times J(\lambda)^{\aleph_0}$ as follows: $f_2(x)=\psi_n(x)$ where $n=\min \{k: x\in \Phi_k\}$. Because $\{\Phi_i\}$ is a closed locally-finite family, $f_2$ is a continuous function. By properties (1) and (2), $f_2$ is an injection. By property (3), $f_2$ is a surjection. Once $f_1$ and $f_2$ are constructed, define $f: X \rightarrow (J(\lambda)\times J(\lambda)^{\aleph_0}\times J(\lambda)^{\aleph_0})\cong J(\lambda)^{\aleph_0}$ as follows: $f(x)=f_i(x)$, if $x\in F_i$, $i=1,2$. This definition is correct because $F_1\cap F_2\subseteq (F_1\setminus V_3)\cap (F_2\setminus V_4)$ and, hence, $f_1\upharpoonright (F_1\cap F_2)=\varphi_2\upharpoonright (F_1\cap F_2)=f_2\upharpoonright (F_1\cap F_2)$. Because $f_1$ and $f_2$ are injections, $f$ is an injection, too. Since $f\upharpoonright (X\setminus (V_3\cup V_4))=\varphi_2\upharpoonright (X\setminus (V_3\cup V_4))$, property (I) is satisfied.
We claim that property (III) is satisfied. Since $F_1\setminus V_3\supseteq V_1$ and $f\upharpoonright (F_1\setminus V_3)=f_1 \upharpoonright (F_1\setminus V_3)= \varphi_2\upharpoonright (F_1\setminus V_3)$, we have $f(V_1)=\varphi_2(V_1)\subseteq \varphi_0(V_1)\times J(\lambda)^{\aleph_0}\times J(\lambda)^{\aleph_0}=\{0\}\times J(\lambda)^{\aleph_0}\times J(\lambda)^{\aleph_0}$. On the other hand, $f(V_4)\subseteq f(F_2)=f_2(F_2)\subseteq [\frac{1}{2},1]\times J(\lambda)^{\aleph_0}\times J(\lambda)^{\aleph_0}$. It follows that $\overline{f(V_1)}\cap \overline{f(V_4)}=\emptyset$. Similarly, it is proved that $\overline{f(V_3)}\cap \overline{f(V_2)}=\emptyset$. Then the condensation $f: X \rightarrow (J(\lambda)\times J(\lambda)^{\aleph_0}\times J(\lambda)^{\aleph_0})$, satisfying properties (I) and (III), is constructed. To construct the function $g$, it suffices to swap $V_1$ with $V_3$ and $V_2$ with $V_4$ in the construction of $f$. With $f$ and $g$ constructed, define $\tau_1$ ($\tau_2$) as the initial topology on the set $X$ generated by the map $f$ ($g$, respectively), i.e. $\tau_1$ ($\tau_2$) is the coarsest topology on $X$ making $f$ ($g$, respectively) continuous. Note that $(X\setminus (V_3\cup V_4))\cup (X\setminus (V_1\cup V_2))=X$, $\tau\upharpoonright (X\setminus (V_3\cup V_4))=\tau_1\upharpoonright (X\setminus (V_3\cup V_4))$ and $\tau\upharpoonright (X\setminus (V_1\cup V_2))=\tau_2\upharpoonright (X\setminus (V_1\cup V_2))$. In order to prove that $\tau=\sup\{\tau_1,\tau_2\}$, it is necessary to show that $X\setminus (V_3\cup V_4)$ and $X\setminus (V_1\cup V_2)$ are closed in $\sup\{\tau_1,\tau_2\}$. It is enough to prove that $V_i$ ($i=1,2,3,4$) is an open set in $\sup\{\tau_1,\tau_2\}$. Since $V_1\subseteq X\setminus (V_3\cup V_4)$ then, by property (I), there is an open set $U$ in $J(\lambda)\times J(\lambda)^{\aleph_0}\times J(\lambda)^{\aleph_0}$ such that $V_1=(X\setminus (V_3\cup V_4))\cap f^{-1}(U)$.
By property (III), $X\setminus f^{-1}(\overline{f(V_4)})\supseteq V_1$. By property (IV), $X\setminus g^{-1}(\overline{g(V_3)})\supseteq V_1$. Then $V_1=(X\setminus (V_3\cup V_4))\cap f^{-1}(U)\supseteq (X\setminus f^{-1}(\overline{f(V_4)}))\cap (X\setminus g^{-1}(\overline{g(V_3)}))\cap f^{-1}(U)\supseteq V_1\cap f^{-1}(U)$. Hence, $V_1=(X\setminus f^{-1}(\overline{f(V_4)}))\cap (X\setminus g^{-1}(\overline{g(V_3)}))\cap f^{-1}(U)$, i.e., $V_1$ is an open set in $\sup\{\tau_1,\tau_2\}$. Similarly, we can prove that $V_i$ is an open set in $\sup\{\tau_1,\tau_2\}$ for $i=2,3,4$. Thus $\tau=\sup\{\tau_1,\tau_2\}$. In particular, the space $(X,\tau)$ admits a condensation onto $J(\lambda)^{\aleph_0}$ where $J(\lambda)=(J(\lambda), \tau_{\rho})$ is the metrizable hedgehog of spininess $\lambda$. Since $\tau_{\rho}$ is the supremum of two topologies $\tau_{d_1}$ and $\tau_{d_2}$, where $(J(\lambda),\tau_{d_i})$ is a Hausdorff compact space for $i=1,2$, the space $X$ admits a bijective continuous mapping onto a Hausdorff compact space $(J_{d_1}(\lambda))^{\aleph_0}$ where $J_{d_1}(\lambda)=(J(\lambda),\tau_{d_1})$. \end{proof} \begin{corollary} {\it Every metric space of weight $\lambda=\lambda^{\aleph_0}$ admits a bijective continuous mapping onto a Banach space of weight $\lambda$.} \end{corollary} \begin{corollary} {\it Every metric space of weight $\lambda=\lambda^{\aleph_0}$ admits a bijective continuous mapping onto a Hausdorff compact space.} \end{corollary} \begin{corollary} {\it If $X$ is a metric space of weight $\lambda$ and $D$ is a discrete space of cardinality $\lambda^{\aleph_0}$ then $X\times D$ admits a condensation onto a Banach space of weight $\lambda^{\aleph_0}$.} \end{corollary} In \cite{bp}, it is proved that every Banach space of weight $\mathfrak{c}$ admits a condensation onto the Hilbert cube. Since $\mathfrak{c}=\mathfrak{c}^{\aleph_0}$, we have the following results.
\begin{theorem}{\it If $X$ is a metric space of weight $\mathfrak{c}$ then $X$ admits a condensation onto the Hilbert cube.} \end{theorem} \begin{theorem} {\it If $X$ is a metric space of weight $\lambda\leq\mathfrak{c}$ and $D$ is a discrete space of cardinality $\mathfrak{c}$ then $X\times D$ admits a condensation onto the Hilbert cube.} \end{theorem} \bibliographystyle{model1a-num-names}
\section{Introduction\label{sect1}} Hub location lies in the intersection of Location Analysis and Network Design, and produces challenging optimization problems with multiple applications, mostly in the fields of distribution/logistics (parcel delivery, air transportation, etc.) and telecommunications \citep{farahani2013hub}. The increasing attention they have received in the last decades is thus not surprising \citep[see e.g.][]{Campbell25,ContrerasOkelly}. One of the current trends in hub location is the search for models suitable for dealing with different sources of uncertainty \citep{Alumur}. While some models in the literature consider uncertainty in demand \citep{Contreras1,Contreras2}, other models are concerned with the robustness of solution hub networks, by associating uncertainty with the possibility (probability) of disruption of the involved elements of the solution networks and looking for solutions that are \emph{robust} under disruptions. Some works have studied models in which it is assumed that activated hub nodes may (totally or partially) fail with a certain probability. \cite{An} propose a model in which two backup hub nodes are determined for each commodity. \cite{Rostami} assume that a finite set of hub breakdown scenarios is known and provide a two-stage formulation for the single allocation hub location problem with possible hub breakdown. \cite{cui2010reliable} provide a stochastic model to determine a subset of the activated hub nodes of given size through which each commodity can be routed, in such a way that if the cheapest route fails, the commodity can be routed through the second cheapest, and so on, or through an emergency facility. The authors provide a Mixed Integer Linear Programming (MILP) formulation for the problem as well as an approximate Lagrangean relaxation scheme for its resolution based on the ideas in \cite{snyder2005reliability} for the $p$-median problem.
The planar version of this model is also analyzed there.\\ \cite{kim2009reliable} propose the reliable $p$-hub location problem (PHMR) and the $p$-hub mandatory dispersion (PHMD) problem. In the PHMR the goal is to determine the location of $p$ nodes, based on their level of reliability, so as to maximize the completed flows among the set of nodes. The PHMD imposes a certain minimum separation between the $p$ selected hub nodes and maximizes the reliability of the network. MILP formulations and heuristic approaches are provided for both problems, in both the single and the multiple-allocation frameworks. Reliability of hub backbone networks has also been studied in \citep{zeng2010reliable,korani2021bi,LiLiShuSongZhang_2021IJOC}.\\ \cite{Parvaresh:2013wc} consider the multiple allocation $p$-hub median problem under intentional disruptions, in which the goal is to identify the optimal strategy for the location of $p$ hub nodes so that the expected transportation cost under worst-case disruptions is minimized. A bilevel mixed integer formulation is provided, as well as a simulated annealing heuristic for its resolution. The problem of designing robust networks under edge/arc failures has also been studied in the literature under different settings. \cite{aneja2001maximizing} study the single-commodity maximum flow problem for the case when edge failures may occur by means of the maximal residual flow problem, whose goal is to determine the maximal flow in which the largest arc flow is as small as possible. This problem is closely related to the network interdiction problem, which consists of determining a certain number of arcs whose removal from the network minimizes the maximum amount of flow that one can send through the network \citep[see e.g.][]{altner2010maximum,cormican1998stochastic,royset2007solving,wood1993deterministic}.
\cite{ma2016minimum} propose the Conditional Value-at-Risk Constrained Minimum Spanning $k$-Core Problem, in which the possibility of edge disruptions is accounted for when constructing minimum cost subgraphs of a network with a minimum number of incident edges at each node. \cite{andreas2008mathematical} study shortest path problems under link failures by imposing that the probability that all arcs in at least one path operate successfully is greater than a certain threshold value. However, existing work dealing with the potential failure of inter-hub links is very scarce \citep{Mohammadi}. This is precisely the focus of this work, where we introduce the Hub Location Problem under Link Failures (HLPLF), a hub location problem in which activated inter-hub links may fail with a given probability. This can be very useful in typical hub location applications in which the total failure of a hub is highly unlikely, whereas partial failures do occur, affecting only some of the links incident to the hubs (certain air connections, train lines, etc.). We point out that by protecting inter-hub links under failure we also partially protect hub nodes under failures. For dealing with the HLPLF we propose two alternative models, which guarantee that solution networks are protected under disruption of inter-hub links, in the sense that for each commodity at least one alternative (backup) routing path exists. The main difference between the two models is how backup paths are enforced. In both cases we consider set-up costs for both activated hubs and activated inter-hub edges. Thus, we do not fix the number of hubs to activate, allowing for incomplete backbone networks. As usual, the routing costs apply a discount factor $\alpha$ to inter-hub arcs. We assume multiple allocation of nodes to activated hubs, although the allocation may be different in the original and backup paths.
We further impose that the original routing path of each commodity $r=(o_r, d_r)$ contains exactly one inter-hub arc, which can be a loop. That is, the original routing path is of the form $(o_r, k, m, d_r)$, where $k$ and $m$ are activated hubs, and $(k, m)$ is an inter-hub arc, which reduces to a loop when $k=m$. We are also given the failure probabilities of each potential inter-hub edge. Then, the objective is to minimize the sum of the set-up costs of the activated hubs and inter-hub edges, plus the expected routing costs. Our models can be seen as two-stage stochastic programming models in which the \emph{a priori} solution is determined by the strategic decisions associated with the selection of activated hub nodes and inter-hub edges, together with an \emph{original plan} given by a set of feasible routing paths, one for each commodity, whereas the recourse action determines a \emph{backup plan}, given by a set of alternative routing paths for the commodities, that can be used in case the inter-hub edge of the original plan fails. As already mentioned, the models that we propose differ in the way backup paths are constructed. The first model imposes that the alternative routing path of each commodity contains exactly one inter-hub arc (as in the original plan), which can be a loop, and builds it explicitly. The second model is more flexible, in the sense that it allows arbitrarily large sequences of inter-hub arcs to be used in the alternative routing paths, although such paths are not built explicitly. This is achieved with a set of exponentially many constraints (in the number of nodes of the network), by imposing that every cut-set of the backbone network contains at least $\lambda$ edges, for a given integer value of $\lambda \geq 2$. We study some properties of both models and propose a MILP formulation in each case. For the second model, since it has exponentially many constraints, we also propose a branch-and-cut solution algorithm.
Extensive computational experiments have been carried out on a large set of benchmark instances based on the well-known CAB~\citep{OKelly_EJOR87}, AP~\citep{EK96}, and TR~\citep{tan2007hub} datasets, for varying settings of the failure probabilities and other cost parameters. The obtained results are summarized and analyzed, comparing the computational performance of each of the models and the effect of the different parameters. Managerial insights are derived from the analysis of the characteristics of the solutions produced by each of the models and their comparison. In particular, we analyze the distribution of the costs among the different elements considered (hub and link set-up costs and routing costs), the number of activated hubs and links, and the density of the obtained backbone networks. Finally, an empirical analysis of the two proposed models has been carried out. We compare solutions obtained with the different models in terms of efficiency and robustness. For this analysis, multiple failure scenarios have been generated from optimal solutions to the underlying deterministic hub location model, and their \emph{a posteriori} capability to re-route the commodities is tested against that of the proposed models. The obtained results assess the validity of the proposal. The remainder of this paper is structured as follows. In Section \ref{sect1.1} we introduce the notation that we will use and formally define the HLPLF from a general perspective. Section \ref{sect:unrestricted} is devoted to the first HLPLF model that we study, in which it is assumed that the alternative paths for the commodities contain exactly one inter-hub arc. We study some of its properties and propose a MILP formulation for it. The model in which we impose that the backbone network is $\lambda$-connected is studied in Section \ref{sect:unrestricted2}, where we also present a MILP formulation for it.
Section \ref{sec:comput} describes the computational experiments we have carried out and summarizes the obtained results. Some managerial insights from the analysis of the structure of the solution networks produced by each of the models are also derived in this section. Finally, Section \ref{sec:comput_simul} describes the empirical analysis that has been carried out, in which multiple failure scenarios have been generated from optimal solutions to the underlying deterministic hub location model and their \emph{a posteriori} capability of re-routing the commodities is tested against that obtained with the proposed models. The paper closes in Section \ref{sec:conclu} with some conclusions. \section{Notation and definition of the problem}\label{sect1.1} Consider a graph $N=(V, E)$, where the node set $V=\{1, 2,\dots, n\}$ represents a given set of users and the edge set $E$ the existing connections between pairs of users. We assume that $N$ is a complete graph and that $E$ contains loops, i.e. for all $i\in V$, edge $\{i,i\}\in E$. We further assume that potential locations for hubs are placed at nodes of the graph and that the set of potential locations coincides with $V$. For each potential location $k\in V$, we denote by $f_k$ the set-up cost for activating a hub at node $k$. Any pair of hub nodes can be connected by means of an \emph{inter-hub} edge, provided that both endnodes $k$ and $l$ are activated as hub nodes. The set $E$ will be referred to as the set of potential inter-hub edges, or simply as the set of potential hub edges. Activated hub edges incur set-up costs as well; let $h_{kl}\geq0$ be the set-up cost for activating hub edge $\{k, l\}\in E$. A set of activated hubs will be denoted by $H \subseteq V$, and a set of activated hub edges for $H$ by $E_H \subseteq E[H]$, where $E[H]$ is the set of edges with both endnodes in $H$.
Note that the assumption that $N$ is a complete graph implies no loss of generality, since ($i$) arbitrarily large set-up costs can be associated with nodes that are not potential hubs; and, ($ii$) arbitrarily large activation costs can be associated with non-existing hub edges. Activating a hub edge allows flows to be sent through it in either direction. Let $A=\{(i,j), (j,i): \{i, j\}\in E\}$ be the arc set. Arcs of the form $(i,i)$ will also be called \emph{loops} and we will use $A_H\subseteq A$ to denote the set of inter-hub arcs induced by $E_H$. Service demand is given by a set of commodities defined over pairs of users, indexed in a set $R$. Let $\mathcal{D}=\{(o_r, d_r, w_r): r\in R\}$ denote the set of commodities, where the triplet $(o_r, d_r, w_r)$ indicates that an amount of flow $w_r\ge 0$ must be routed from origin $o_r\in V$ to destination $d_r\in V$. The origin/destination pair associated with a given commodity will also be referred to as its OD pair. Commodities must be routed via paths of the form $\pi=(o_r, k_1, \ldots, k_{s}, d_r)$ with $k_i\in H$, $1\le i\le s$. Similarly to most Hub Location Problems (HLPs), a routing path $\pi=(o_r, k_1, \ldots, k_{s}, d_r)$ is \emph{feasible} if $(1)$ it includes at least one hub node, i.e. $s\geq 1$, and $(2)$ the underlying edges of all traversed arcs other than the access and delivery arcs, $(o_r, k_1)$ and $(k_{s}, d_r)$, respectively, are activated inter-hub edges, i.e., $\{k_i, k_{i+1}\}\in E_H$, $1\le i\le s-1$. Note that this implies that all intermediate nodes of any feasible path are activated as hubs, i.e. $k_i\in H$, $1\leq i\leq s$. In the following, the set of feasible paths for a given commodity $r\in R$ will be denoted by $\Pi_{(H,E_H)}(r)$. Most HLPs studied in the literature do not consider loops explicitly. Then, only \emph{proper} arcs $(k, l)$ with $k\ne l$ can be considered as hub arcs.
In this work we follow a slightly more general setting in which loops of the form $(k, k)$ can also be considered as hub arcs. Then, if a loop $(k, k)$ is used in a routing path, it is required not only that $k$ is activated as a hub node, but also that the loop $\{k, k\}$ is activated as a hub edge. Throughout we assume multiple allocation of commodities to open hubs. That is, it is possible that two commodities with the same origin are routed using different access hubs. Routing flows through the arcs of a hub-and-spoke network incurs different types of costs. These costs, which may depend on the type of arc, account for transportation costs as well as for some additional collection/handling/distribution costs at the endnodes of the arcs. As usual in the literature, we assume that transportation costs of flows routed through inter-hub arcs are subject to a discount factor $0\leq \alpha \leq 1$. In this work we will denote by $c_{ij}\geq 0$ the unit routing cost through inter-hub arc $(i,j)\in A_H$, which includes discounted transportation costs and handling costs, and we will denote by $\bar{c}_{ij}\geq 0$ the unit routing cost for an access or a delivery arc $(i,j)\in A\setminus A_H$, which could also incorporate different discounted access or delivery costs. Thus, with the above notation, the routing cost of commodity $r\in R$ through a feasible path $\pi=(o_r, k_1, \ldots, k_{s}, d_r)\in \Pi_{(H,E_H)}(r)$ is: $$ C^r_{\pi}=w_r\left(\overline c_{o_rk_1}+\sum_{i=1}^{s-1}c_{k_ik_{i+1}} + \overline c_{k_{s}d_r}\right), $$ where the first and last addends correspond to the access and delivery arcs, respectively, and the intermediate ones are the service costs through the backbone network $(H,E_{H})$.
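The path cost $C^r_{\pi}$ above can be sketched in code. The following is a minimal illustrative snippet (the function name and toy cost dictionaries are ours, not from the paper); the inter-hub unit costs $c_{ij}$ are assumed to already incorporate the discount factor $\alpha$ and the handling costs, as stated in the text:

```python
# Sketch: routing cost of a feasible path pi = (o_r, k_1, ..., k_s, d_r).
# c[(i, j)]: unit cost of inter-hub arc (i, j), discount alpha already included;
# c_bar[(i, j)]: unit cost of an access or delivery arc; w_r: flow of commodity r.

def path_routing_cost(pi, w_r, c, c_bar):
    """C^r_pi = w_r * (c_bar[access] + sum of inter-hub arc costs + c_bar[delivery])."""
    o_r, d_r = pi[0], pi[-1]
    hubs = list(pi[1:-1])                # k_1, ..., k_s
    assert len(hubs) >= 1, "a feasible path visits at least one hub (s >= 1)"
    inter_hub = sum(c[(hubs[i], hubs[i + 1])] for i in range(len(hubs) - 1))
    return w_r * (c_bar[(o_r, hubs[0])] + inter_hub + c_bar[(hubs[-1], d_r)])
```

Note that a single-hub path obtained through a loop, e.g. $(o_r, k, k, d_r)$, is handled by the same expression with the loop cost $c_{kk}$ as the only inter-hub term.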
Broadly speaking, under the above assumptions, the goal of a HLP is to decide the location of the hub nodes $H$ and to select a suitable subset of hub edges $E_H$, so as to \textit{optimally} route the commodities through the backbone network $(H,E_{H})$ induced by the activated hub nodes and hub edges, minimizing the sum of the overall set-up costs for activating hub nodes and hub links, plus the routing costs of the commodities. With the above notation, this problem can be stated as: \begin{align}\label{p0}\tag{\rm HLP} \min_{H \subseteq V, E_H\subseteq E[H]} \sum_{k \in H} f_k + \sum_{e \in E_H} h_{e}+ \sum_{r \in R}\min_{\pi \in \Pi_{(H,E_H)}(r)} C^r_{\pi}. \end{align} Most HLPs studied in the literature (which do not consider loops explicitly) restrict the set of potential paths for routing the commodities to those using at most one hub arc. When loops are also considered as potential hub arcs, as we do, the analogous set of potential paths for routing the commodities is restricted to those containing exactly one hub arc. Thus, for $H$ and $E_{H}$ given, the set of potential paths for routing commodity $r\in R$ is given by $\Pi_{(H,E_H)}(r) =\{\pi=(o_r, k, l, d_r): \, k, l \in H, \, \{k,l\}\in E_{H}\}$. In such a case the routing cost of commodity $r$ through path $\pi=(o_r,k,l,d_r)$ reduces to $$ C_{kl}^r := C^r_{\pi}=w_r(\overline c_{o_rk}+ c_{kl}+ \overline c_{ld_r}), $$ \noindent and the HLP simplifies to: \begin{align}\label{p0_1}\tag{\rm HLP$^1$} \min_{H \subseteq V, E_{H}\subseteq E[H]} \sum_{k \in H} f_k + \sum_{e \in E_{H}} h_{e} + \sum_{r \in R}\min_{(k,l) \in A_{H}} C^r_{k l}. \end{align} {Most hub networks are sensitive to failures in their links, and the impact of some of these failures can be particularly harmful for the users.
Examples of potential applications of the models that we study include the management of airlines and airport industries~\citep{campbell2005a}, in which breakdowns in certain flight connections may occur, and passengers are directly affected. Also, rapid delivery packing systems~\citep{ccetiner2010hubbing}, where the users pay for fast services and failures in the hub network cause large delays.} In the remainder of this work we consider HLPs in which the (\emph{original}) routing paths consist of exactly one hub arc (possibly a loop), and we assume that activated hub edges in $E_{H}$ may fail. In case hub edge $\{k,l\}\in E_{H}$ fails, the inter-hub arcs $(k, l), (l, k)\in A_H$ can no longer be used for routing the commodities. In order to protect solution networks from failure we follow a policy that does not alter the strategic decisions on the activated hubs and inter-hub edges, and focuses solely on the operational decisions concerning the re-routing of affected commodities. Accordingly, we impose that, for each commodity, the backbone network $(H,E_{H})$ contains, in addition to the original routing path, some substitute path connecting $o_r$ and $d_r$. Such a substitute path will be referred to as a \emph{backup} or \emph{alternative} path. For each edge $e=\{k,l\} \in E$, let $X_{kl}$ denote the random variable modeling whether $e$ fails. We assume that the random variable $X_{kl}$ follows a Bernoulli distribution with probability $p_{kl}$, for each $\{k,l\}\in E$. When $k\ne l$, failure of edge $e$ arises not only when the link $\{k,l\}$ can no longer be used, but also when, for any reason, the collection and redistribution services at any endnode of edge $\{k,l\}$ cannot be carried out. Thus $p_{kl}$ represents the probability that any of these events happen.
In case edge $e$ is a loop, i.e., $e=\{k,k\}$ with $k\in H$, then $p_{kk}$ represents the probability that the handling process carried out when $k$ is used as the unique intermediate hub fails. Observe that when edges may fail with a given probability, the costs of feasible routing paths are also random variables, which will be denoted by $\mathcal{C}_r$, $r\in R$. Furthermore, the probability distribution of $\mathcal{C}_r$, $r\in R$, is dictated by the failure probability distribution of the involved inter-hub edges. In particular, when $\pi_0 = (o_r,k,l,d_r)$ is the original routing path of a given commodity $r\in R$, the expected routing cost of commodity $r$ can be calculated as: $$ E[\mathcal{C}_r| \pi_0] = (1-p_{kl}) C^r_{kl} + p_{kl} C^r_{\overline \pi_0}, $$ where $\overline \pi_0 \in \Pi_{(H,E_H)}(r) \backslash\{\pi_0\}$ is the backup path in case $\{k,l\}$ fails. In the following we deal with the problem of finding hub networks protected against inter-hub edge failures under the above assumptions. For this, in the objective function, instead of considering the costs of the routing paths, we will consider their expected routing costs. That is, the HLPLF can be stated as: \begin{align}\label{p1}\tag{HLPLF} \min_{H \subseteq V, E_H\subseteq E[H]} \sum_{k \in H} f_k + \sum_{e \in E_H} h_{e}+ \sum_{r\in R} \min_{\pi_0\in\Pi_{(H,E_H)}(r)} E[\mathcal{C}_r| \pi_0]. \end{align} Indeed, multiple alternatives fall within the above generic framework, differing from each other in how the alternative routing paths are obtained. In the following sections we propose two alternative models for determining such backup paths, based on different assumptions, and provide mathematical programming formulations for each of them. \section{HLPLF with single inter-hub arc backup paths}\label{sect:unrestricted} The \ref{p1} model that we address in this section enforces that the alternative paths for routing the commodities have the same structure as the original ones.
That is, we assume that backup paths contain at least one hub and have exactly one inter-hub arc (possibly a loop). This avoids having to use many transshipment points in case of failure. This model will be referred to as (HLPLF-1BP). Given a commodity $r\in R$, the backbone network $(H,E_{H})$ and the original routing path $\pi_0 = (o_r,k,l,d_r)$, we assume that the backup path is of the form $\overline \pi_0 = (o_r,\bar k,\bar l,d_r)$, with $\{k, l\}\neq\{\bar k, \bar l\}$. Thus, the expected routing cost of commodity $r$ is: $$ E[\mathcal{C}_r|\pi_0] = (1-p_{kl}) C^r_{kl} + p_{kl} C^r_{\bar k \bar l}. $$ Figure \ref{fig00} illustrates the different situations that may arise in case a hub link fails. Figure \ref{fig0:0} shows a backbone network with four hub nodes ($k$, $l$, $m$ and $q$) and five hub links, two of them corresponding to loops, $\{m,m\}$ and $\{l,l\}$, and the remaining ones corresponding to edges $\{k,q\}$, $\{k,l\}$ and $\{q,m\}$. The figure also depicts the origin ($o_r$) and destination ($d_r$) of a given commodity $r\in R$ and a possible path for this commodity through hub arc $(k,l)$. We assume that it is the \emph{original path} for the commodity $r\in R$. Access/distribution arcs are depicted as dashed lines and hub edges as solid lines. Figures \ref{fig0:a} and \ref{fig0:b} show different single inter-hub arc backup paths for commodity $r\in R$ in case the original one fails. In Figure \ref{fig0:a}, the backup path uses hub arc $(q,m)$ to re-route the commodity, while in Figure \ref{fig0:b} the backup path uses loop arc $(l,l)$. \begin{figure}[h] \centering \begin{subfigure}[b]{0.31\textwidth} \centering \includegraphics[width=\textwidth]{fig3a} \caption{\scriptsize Original path via $(k,l)$. \label{fig0:0}} \end{subfigure}\quad \begin{subfigure}[b]{0.31\textwidth} \centering \includegraphics[width=\textwidth]{fig3b} \caption{\scriptsize Backup path via $(q,m)$.
\label{fig0:a}} \end{subfigure}\quad \begin{subfigure}[b]{0.31\textwidth} \centering \includegraphics[width=\textwidth]{fig3d} \caption{\scriptsize Backup path via $(l,l)$. \label{fig0:b}} \end{subfigure} \caption{A network with four nodes and one commodity $r=(o_r,d_r,w_r)$: Original path, $\pi_0 = (o_r,k,l,d_r)$, and different backup paths in case the original hub arc $(k,l)$ fails. \label{fig00}} \end{figure} Next we develop a mathematical programming formulation for the above problem, first introducing the decision variables. We use the following variables associated with the design decisions on the elements of the network that are activated, hubs and edges: $$ z_k = \left\{\begin{array}{cl} 1 & \mbox{if a hub is opened at the potential hub node $k$,}\\ 0 & \mbox{otherwise,} \end{array}\right. \quad \text{ for $k \in V$.} $$ $$ y_{kl} = \left\{\begin{array}{cl} 1 & \mbox{if hub edge $\{k, l\}$ is activated,}\\ 0 & \mbox{otherwise,} \end{array}\right. \quad \text{ for $\{k,l\} \in E$.} $$ The formulation uses two additional sets of variables, which respectively represent the original and the alternative routing path for each commodity. In particular, for $r\in R$ and $(k,l) \in A$: $$ x^r_{kl} = \left\{\begin{array}{cl} 1 & \mbox{ if the original routing path for commodity $r$ is $(o_r,k,l,d_r)$,}\\ 0 & \mbox{otherwise,} \end{array}\right. $$ $$ \bar x^r_{kl} = \left\{\begin{array}{cl} 1 & \mbox{ if the alternative path for commodity $r$ is $(o_r,k,l,d_r)$,}\\ 0 & \mbox{otherwise.} \end{array}\right.
$$ With these sets of decision variables, the expected routing cost of commodity $r\in R$ can be expressed as: $$ \sum_{(k,l)\in A} x_{kl}^r \Big[C_{kl}^r (1-p_{kl}) + p_{kl} \sum_{(k^\prime, l^\prime)\in A \setminus\{(k, l), (l,k)\}} C_{k^\prime l^\prime}^r\bar x^r_{k^\prime l^\prime}\Big], $$ \noindent where the two addends in each term of the above expression correspond to the expected routing cost of the original and backup plan of commodity $r$, respectively, both of which only apply if, in the original plan, the commodity is routed through the inter-hub edge corresponding to the term. In particular, the first addend gives the overall routing cost for commodity $r$ in case the arc of the backbone network used for routing $r$ in the original plan does not fail (multiplied by the probability of not failing). The second addend computes the cost of the alternative routing path, multiplied by the probability of failure of the inter-hub edge of the original plan. Observe that in case $(k,l)$ is the arc used initially by commodity $r\in R$ and $(k^\prime, l^\prime)$ is the backup arc for $r$, one obtains the cost $(1-p_{kl}) C^r_{kl} + p_{kl} C^r_{k^\prime l^\prime}$. Rearranging terms in the above expression, one can rewrite the overall routing cost function for commodity $r$ as: $$ \sum_{(k,l)\in A} C_{kl}^r \Big[(1-p_{kl}) x_{kl}^r + \bar x_{kl}^r \sum_{(k^\prime,l^\prime)\in A \setminus\{(k, l), (l,k)\}} p_{k^\prime l^\prime} x^r_{k^\prime l^\prime}\Big], $$ \noindent where it can be observed that the impact of a given arc $(k,l)\in A$ on the routing cost of commodity $r$ is either $0$ (if it is used neither in the original nor in the alternative path); $(1-p_{kl}) C_{kl}^r$ if it is used in the original path; or $p_{k^\prime l^\prime} C_{kl}^r$ in case arc $(k,l)$ is used in the alternative path and arc $(k^\prime,l^\prime)$ in the original one.
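The rearranged expression can be verified numerically: for one-hot original and backup indicators it must reproduce the direct expected cost $(1-p_{kl}) C^r_{kl} + p_{kl} C^r_{k^\prime l^\prime}$. The following minimal sketch (toy data and function name are ours, not from the paper) evaluates the rearranged per-arc form for a single commodity:

```python
# Sketch: rearranged expected routing cost of one commodity.
# arcs: candidate inter-hub arcs; C[a]: routing cost C^r_a; p[a]: failure
# probability of the underlying edge; x, x_bar: one-hot dicts selecting the
# original and the backup arc (the backup excludes the original edge and its reverse).

def expected_cost_rearranged(arcs, C, p, x, x_bar):
    total = 0.0
    for (k, l) in arcs:
        # arcs sharing the original edge {k, l} are excluded from the inner sum
        others = [a for a in arcs if a not in ((k, l), (l, k))]
        total += C[(k, l)] * ((1 - p[(k, l)]) * x[(k, l)]
                              + x_bar[(k, l)] * sum(p[a] * x[a] for a in others))
    return total
```

With the original plan routed through $(k,l)$ and the backup through $(q,m)$, the function returns exactly $(1-p_{kl})C^r_{kl}+p_{kl}C^r_{qm}$, as expected.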
The above decision variables together with this routing cost function lead to the following Integer Nonlinear Programming formulation for (HLPLF-1BP): \begin{subequations} \makeatletter \def\@currentlabel{${\rm HLPLF-1BP}$} \makeatother \label{HLPLF-1BP} \renewcommand{\theequation}{$1.{\arabic{equation}}$} \begin{align} \min \ & \sum_{k \in V} f_k z_k + \sum_{\{k,l\} \in E} h_{kl} y_{kl} + \sum_{r\in R}\sum_{(k,l)\in A} C_{kl}^r \Big[(1-p_{kl}) x_{kl}^r + \bar x_{kl}^r \sum_{(k^\prime,l^\prime)\in A \setminus\{(k, l), (l,k)\}} p_{k^\prime l^\prime} x^r_{k^\prime l^\prime}\Big]\nonumber\\ \mbox{s.t. } & \sum_{(k, l)\in A}x^{r}_{kl} =1\, && \forall r\in R\label{model1:1}\\ & \sum_{(k, l)\in A} \bar x^{r}_{kl} =1\, && \forall r\in R\label{model1:2}\\ &x^{r}_{kl}+x^{r}_{lk}+\bar x^{r}_{kl}+\bar x^{r}_{lk}\le y_{kl}\, \qquad && \forall r\in R, \{k, l\}\in E, k\ne l\label{model1:3}\\ &x^{r}_{kk}+\bar x^{r}_{kk}\leq y_{kk}\,\qquad && \forall r\in R, \{k, k\}\in E \label{model1:4}\\ &y_{kl}\leq z_k\,\qquad && \forall \{k, l\}\in E\label{model1:5}\\ &y_{kl}\leq z_l\,\qquad && \forall \{k, l\}\in E, k\ne l\label{model1:6}\\ &x^r_{kk}+\sum_{\substack{l\in V\\l\ne k}} (x^r_{kl}+x^r_{lk})\le z_k && \forall r\in R, \forall k\in V\label{model1:7}\\ &\bar x^r_{kk}+\sum_{\substack{l\in V\\l\ne k}} (\bar x^r_{kl}+\bar x^r_{lk})\le z_k &&\forall r\in R, \forall k\in V\label{model1:8}\\ & x^{r}_{kl}, \bar x^{r}_{kl}\in\{0, 1\} && \forall r\in R, (k, l)\in A\label{int_x}\\ & z_k\in\{0, 1\} && \forall k\in V\label{int_z}\\ & y_{kl}\in\{0, 1\} && \forall \{k, l\}\in E.\label{int_y} \end{align} \end{subequations} where constraints \eqref{model1:1} and \eqref{model1:2} enforce that each commodity uses exactly one inter-hub arc both in the original and in the backup path. Constraints \eqref{model1:3} and \eqref{model1:4} impose that the original and the backup path do not coincide. These constraints also guarantee that any used inter-hub edge is activated.
Constraints \eqref{model1:5} and \eqref{model1:6} ensure that both endnodes of an activated inter-hub edge are activated as hubs. Constraints \eqref{model1:7} and \eqref{model1:8} are valid inequalities, already proposed in \cite{MARIN2006274}, which reinforce the relationship between the routing variables and the hub activation variables. Finally, \eqref{int_x}--\eqref{int_y} are the domains of the decision variables. \subsection{Linearization of the objective function} The reader may have observed the non-linearity of the objective function term corresponding to the expected routing cost. As we explain below, this term can be suitably linearized by introducing a new auxiliary variable $P^r_{kl}\in \mathbb{R}_+$ associated with each commodity $r\in R$ and each arc $(k, l)\in A$. Let $P^r_{kl}= \bar x^r_{kl} \sum_{(k^\prime,l^\prime)\in A \setminus\{(k, l), (l,k)\}} p_{k^\prime l^\prime} x^r_{k^\prime l^\prime}$ denote the probability of using inter-hub arc $(k, l)$ in the alternative path of commodity $r$.
Observe that, because of the minimization criterion, the nonnegativity of the routing costs, and constraints \eqref{model1:1}-\eqref{model1:4}, the value of $P_{kl}^r$ can be determined by the following set of constraints: \begin{align} P^r_{kl} &\geq \sum_{(k^\prime,\,l^\prime)\in A} p_{k^\prime l^\prime} x^r_{k^\prime l^\prime} + (\bar x^r_{kl} -1), && \forall r\in R,\; \forall\, (k,l)\in A, \label{Pkl}\tag{$1.{12}$}\\ P_{kl}^r & \geq 0, &&\forall r\in R, \; \forall\, (k, l)\in A,\label{Const_last}\tag{$1.{13}$} \end{align} and the objective function can be rewritten as: \begin{align*} \sum_{k\in V}f_kz_k+ \sum_{\{k,l\}\in E}h_{kl}y_{kl}+ \sum_{r\in R}\sum_{(k,l)\in A} C^r_{kl} \Big((1-p_{kl})x^{r}_{kl} + P_{kl}^r\Big). \end{align*} We can also incorporate the following valid inequalities to reinforce our formulation: \begin{align} \sum_{(k,l)\in A} P^r_{kl}\leq \max_{\{k,l\}\in E} p_{kl}, && \forall r\in R.\label{dv_Pkl}\tag{$1.{14}$} \end{align} Therefore, we have the following MILP formulation for the problem: \begin{align}\label{M1}\tag{HLPLF-1BP} \min & \sum_{k\in V}f_kz_k+ \sum_{\{k,l\}\in E}h_{kl}y_{kl} + \sum_{r\in R}\sum_{(k,l)\in A} C^r_{kl} \Big((1-p_{kl})x^{r}_{kl} + P_{kl}^r\Big)\\ \mbox{s.t. }& \eqref{model1:1}-\eqref{dv_Pkl}.\nonumber \end{align} Below we state some simple optimality conditions that can be used to reduce the set of decision variables. \begin{prop} There is an optimal solution to \eqref{M1} such that $x^r_{lk}=\bar x^r_{lk}=P^r_{lk} = 0$ for all $r\in R$, $(l,k)\in A$, with $C^r_{kl}\leq C^r_{lk}$. \end{prop} \begin{proof} The proof is straightforward. Indeed, the value of any solution with $x^r_{lk}=1$ where $C^r_{kl}< C^r_{lk}$ will improve by changing the direction in which edge $\{k,l\}$ is traversed, i.e., by setting $x^r_{lk}=0$, $x^r_{kl}=1$. When $C^r_{kl}= C^r_{lk}$ the value of the new solution will not change. The same argument can be applied for setting $\bar x^r_{lk}=0$ and thus $P^r_{lk}=0$.
\end{proof} Note that the above result allows one to halve the number of decision variables. In practice, it is likely that there are few possible values for the failure probabilities, and that the edges of the network are clustered in groups such that, within each group, all edges have the same failure probability. We next analyze such a situation. \begin{rmk}[Clustered sets of edges] Let us assume that the edges in $E$ are clustered in $K$ groups $E_1, \ldots, E_K$ such that all edges in $E_s$ have the same failure probability $\rho_s \in [0,1]$, for $s=1, \ldots, K$. Then, in the term of the objective function of \eqref{M1} corresponding to the expected cost of the commodities, variables $P_{kl}^r$ can be substituted by a new set of variables as follows. For $r\in R$, $(k,l)\in A$, and $s=1, \ldots, K$, where $A_s$ denotes the arc set induced by $E_s$, let $\xi_{kls}^r$ be a binary variable that takes value one if and only if the original route of commodity $r$ uses some hub arc in the $s$-th cluster (with failure probability $\rho_s$) and the backup route uses hub arc $(k,l)$. Then, the expected routing cost of commodity $r\in R$ can be rewritten as: $$ \displaystyle\sum_{s=1}^K \Big[(1-\rho_s) \displaystyle\sum_{(k,l)\in A_s} C_{kl}^r x_{kl}^r + \rho_s \displaystyle\sum_{(k,l)\in A} C_{kl}^r \xi_{kls}^r\Big].
$$ Using similar arguments as for the linearization of variables $P_{kl}^r$, the values of the $\xi_{kls}^r$ variables can be determined by the following sets of constraints: \begin{align} &\xi_{kls}^r \geq \displaystyle\sum_{(k^\prime,l^\prime)\in A_s} x_{k^\prime l^\prime}^r + (\bar x_{kl}^r -1)&& \forall r\in R, (k,l) \in A, s=1, \ldots, K\label{1_xi}\\ &\displaystyle\sum_{s=1}^K\xi_{kls}^r=\bar x^r_{kl} && \forall r\in R, (k,l) \in A\label{last_xi}\\ &\xi_{kls}^r \geq 0 &&\forall r\in R, (k,l) \in A, s=1, \ldots, K.\label{2_xi} \end{align} The particular case of one single cluster ($K=1$), where all edges have the same failure probability, i.e., $p_{kl}=\rho$ for all $\{k, l\}\in E$, allows us to further simplify the above formulation. Now the index $s$ can be dropped from variables $\xi$ and Constraints \eqref{1_xi}-\eqref{2_xi} are no longer needed, as Constraints \eqref{last_xi} reduce to $\xi_{kl}^r=\bar x^r_{kl}$, $(k,l)\in A$. Then, the expected routing cost of commodity $r\in R$ simplifies to: $$ \displaystyle\sum_{(k,l)\in A} C_{kl}^r \Big((1-\rho) x_{kl}^r + \rho \overline x_{kl}^r\Big). $$ \end{rmk} \section{HLPLF with $\lambda$-connected backbone networks}\label{sect:unrestricted2} In this section we introduce a different model for the HLPLF, which will be referred to as the $\lambda$-connected HLPLF (HLPLF-$\lambda$). Again we make the assumption that the original routing paths contain at least one hub node and exactly one inter-hub arc, although we follow a different modeling approach regarding how to protect the backbone network $(H, E_H)$ against potential failures. On the one hand, we extend the set of alternative paths that can be used when the hub edges in the original paths fail, and allow for any arbitrarily long chain of arcs connecting the OD pair of each commodity, provided that all its intermediate arcs are activated inter-hub arcs. On the other hand, we no longer make the alternative routing paths for the commodities explicit.
Instead, we impose that the backbone network is $\lambda$-connected, in the sense that it must contain at least $\lambda$ routing paths connecting any pair of activated hubs $k$, $l\, \in H$ with $k\neq l$, where $\lambda\geq 2$ is a given {integer} parameter. This implies that if some hub arc of the original path fails, then the backbone network contains at least $\lambda-1$ alternative paths connecting the activated hubs. Note that this forces the backbone network to have at least $\lambda$ activated hub nodes. The particular case of HLPLF-$\lambda$ with $\lambda=2$ extends the HLPLF-1BP studied in the previous section, as it enforces at least one backup path in the backbone network in addition to the original one, which can be arbitrarily long. We recall that for any non-empty subset of nodes $S\subset V$, the cutset associated with $S$ is precisely the set of edges connecting $S$ and $V\setminus S$, namely: $$ \delta(S)=\{\{k, l\} \in E | k \in S, l \in V\setminus S\}. $$ Observe that the backbone network $(H, E_H)$ depicted in Figure \ref{fig0:0} is 2-connected since any cutset has at least two edges (possibly one of them being a loop). Let us introduce the following additional notation. For a given indicator vector $\bar y\in\{0, 1\}^{|E|}$: $$ \bar y(\delta(S))=\sum_{\{k,l\}\in \delta(S)} \bar y_{kl}. $$ That is, $\bar y(\delta(S))$ gives the number of edges in the cutset $\delta(S)$ that are activated relative to vector $\bar y$. When $\bar y=y$, with $y$ being the vector of hub edge decision variables as defined in HLPLF-1BP, then $y(\delta(S))$ gives precisely the number of inter-hub edges in the cutset $\delta(S)$. Figure \ref{fig11} shows different choices for backup paths in case the hub arc $(k, l)$, used in the original path for commodity $r=(o_r,d_r,w_r)$, fails.
Note that while the backup paths drawn in Figures \ref{fig4b} and \ref{fig4d} are also valid for HLPLF-1BP, the backup path shown in Figure \ref{fig4c} uses two inter-hub arcs, and is thus not valid for HLPLF-1BP. Similarly to HLPLF-1BP, loops are also counted for the $\lambda$-connectivity, as they can be used both in original and backup paths. Note also that in case the original path for commodity $r=(o_r,d_r,w_r)$ uses the loop $(m,m)$, the backup paths in Figure \ref{fig11} are also feasible. \begin{figure}[h] \centering \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{fig3b} \caption{\scriptsize Backup path via $(q,m)$. \label{fig4b}} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{fig3c} \caption{\scriptsize Backup path via $(k,q)$ and $(q,m)$. \label{fig4c}} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \centering \includegraphics[width=\textwidth]{fig3d} \caption{\scriptsize Backup path via $(l,l)$. \label{fig4d}} \end{subfigure} \caption{Alternative possibilities for backup paths in the $\lambda$-connected model. \label{fig11}} \end{figure} With the above notation, and taking into account that, by definition, $\delta(S)$ contains no loop hub edges, the $\lambda$-connectivity of the backbone network can be stated by means of the following constraints, associated with each subset $S\subset V$ and each pair of potential hubs $k, l\in V$ with $k\in S$, $l\notin S$: \begin{align} y(\delta(S))+y_{kk}\geq \lambda \left(z_{k}+z_{l}-1\right). \label{no-single} \end{align} The right-hand side of the above constraint can take a strictly positive value only when $k, l$ are activated hub nodes, that is, when the cutset $\delta(S)$ is a cutset of the backbone network. In this case, the inequality imposes that $\delta(S)$ contains at least $\lambda-y_{kk}$ activated hub edges.
As indicated, the loop $\{k, k\}$ has also been taken into account as a potential hub edge since it can be used in routing paths. Hence, if the loop $\{k, k\}$ is activated as hub edge, it should be discounted from the number of hub edges in $\delta(S)$ that must be activated. Summarizing, the above constraint imposes that, if nodes $k, l$, $k\in S$, $l\notin S$, are activated hubs, then the number of hub edges in the cutset $\delta(S)$ must be at least $\lambda-1$ if the loop $\{k, k\} $ is activated as a hub edge or $\lambda$ otherwise. The $\lambda$-connectivity of singletons can be imposed by means of constraints $y(\delta(k))+y_{kk}\geq \lambda z_k$, $k\in V$, which have an analogous interpretation. Below we develop a MILP formulation for the HLPLF-$\lambda$, which incorporates $\lambda$-connectivity by means of the family of constraints \eqref{no-single} introduced above. The formulation uses the same $z$, $y$, and $x$ variables as before. Still, since backup paths are no longer made explicit, variables $\bar x$ used in formulation HLPLF-1BP of the previous section are no longer needed. As explained, the $y$ variables will be used to impose the $\lambda$-connectivity condition, which will be stated by means of an exponential set of constraints. Given that backup routes are no longer made explicit, we no longer have closed expressions for their expected routing costs and we must estimate their values. Let us denote by $\bar{C}^r_{kl}$ an estimation of the backup routing cost of commodity $r$ when the hub arc $(k,l)$ of its original routing path fails. 
The resulting formulation for the HLPLF-$\lambda$ is: \begin{subequations} \makeatletter \def\@currentlabel{${\rm HLPLF-}\lambda$} \makeatother \label{HLPLF-lambda} \renewcommand{\theequation}{$2.{\arabic{equation}}$}
\begin{align}
\min &\sum_{k\in V}f_kz_k+ \sum_{\{k,l\}\in E}h_{kl} y_{kl} +& \sum_{r\in R}\sum_{(k,l)\in A} & \left[(1-p_{kl}) C^r_{kl}+ p_{kl} \bar{C}^r_{kl}\right] x^{r}_{kl} \nonumber\\
\mbox{s.t. } & \sum_{(k,l)\in A}x^{r}_{kl} =1\, \qquad && \forall {r\in R} \label{c1}\\
& x^{r}_{kl}+x^{r}_{lk}\le y_{kl} && \forall r\in R, \{k, l\}\in E, k\neq l \label{c2}\\
& x^{r}_{kk}\le y_{kk} && \forall r\in R, k \in V\label{c3}\\
& y_{kl}\leq z_k && \forall \{k, l\}\in E \label{c5}\\
& y_{kl}\leq z_l && \forall \{k, l\}\in E, k\ne l\label{c6}\\
& y(\delta(S))+y_{kk}\geq \lambda (z_{k}+z_l-1) && \forall S\subset V,\, |S|\geq 2,\, k\in S,\, l \notin S \label{cutset}\\
& y(\delta(k)) + y_{kk} \geq \lambda z_k && \forall k\in V \label{cutset2}\\
& x^{r}_{kl}\in\{0, 1\} && \forall r\in R, (k,l)\in A\label{c7}\\
& z_k\in\{0, 1\} && \forall k\in V\label{c8}\\
& y_{kl}\in\{0, 1\} && \forall \{k, l\}\in E. \label{c9}
\end{align}
\end{subequations} where constraints \eqref{c1}-\eqref{c6} are similar to \eqref{model1:1}-\eqref{model1:6} but refer to the original path only, and \eqref{cutset} and \eqref{cutset2} are the $\lambda$-connectivity constraints described above.
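To fix ideas, the objective of \eqref{HLPLF-lambda} can be evaluated for a candidate solution in a few lines of Python. The sketch below is purely illustrative (the data containers and names are our own choices, not part of the formulation); it also checks the algebraic identity $(1-p)C + p(1+\beta)C = (1+\beta p)C$, which is used later when the backup costs are estimated as $\bar C^r_{kl}=(1+\beta)C^r_{kl}$.

```python
def hlplf_lambda_objective(f, h, C, Cbar, p, z, y, x):
    """Evaluate the HLPLF-lambda objective for a candidate solution.

    f[k]: hub set-up costs; h[(k, l)]: hub-edge set-up costs;
    C[r][(k, l)] / Cbar[r][(k, l)]: original / estimated backup routing
    costs; p[(k, l)]: failure probabilities; z, y, x: 0/1 solution dicts.
    Illustrative helper only, not part of the paper's formulation.
    """
    hub_setup = sum(f[k] * z[k] for k in f)
    edge_setup = sum(h[e] * y[e] for e in h)
    routing = sum(((1 - p[a]) * C[r][a] + p[a] * Cbar[r][a]) * x[r][a]
                  for r in C for a in C[r])
    return hub_setup + edge_setup + routing

# With Cbar = (1 + beta) * C the expected-cost bracket collapses to
# (1 + beta * p) * C:
beta, prob, cost = 0.5, 0.2, 10.0
assert abs((1 - prob) * cost + prob * (1 + beta) * cost
           - (1 + beta * prob) * cost) < 1e-12
```

For instance, with one commodity routed on a single hub arc of cost 10, backup estimate 15 and failure probability 0.2, the expected routing term contributes $0.8\cdot 10 + 0.2\cdot 15 = 11$.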
{Note that once the hub backbone network $(H,E_H)$ is obtained by solving the above problem, one can explicitly compute a backup path for a commodity $r\in R$, whose original path is $(o_r, k_r,l_r, d_r)$, by solving (in polynomial time) a shortest path problem from source $o_r$ to destination $d_r$ on the graph $G_r=(V_r,E_r)$ with nodes $V_r=\{o_r, d_r\} \cup H$ and edges $E_r=\big(\{\{o_r, h\}: h \in H\} \cup E_H \cup \{\{h, d_r\}: h\in H\}\big)\setminus \{\{k_r,l_r\}\}$.} \begin{prop} Let $S\subset V$ be a nonempty subset of nodes with $|S|\leq \lambda-1$. Then, the set of constraints \eqref{cutset} is dominated by the following set of constraints, which are also valid for \eqref{HLPLF-lambda}: \begin{align}\tag{$2.6'$} &y(\delta(S)) + y_{kk} \geq \lambda z_k && \forall k \in S.\label{singlelambda} \end{align} \end{prop} \begin{proof} Indeed, \eqref{singlelambda} dominate \eqref{cutset}, since $z_l \leq 1$ implies that $\lambda z_k \ge \lambda (z_k+z_l-1)$. We now show that \eqref{singlelambda} are valid for \eqref{HLPLF-lambda}. Taking into account that $\lambda$-connectivity with $\lambda\geq 2$ implies that any feasible solution has at least $\lambda$ open hubs and that $|S|\leq \lambda-1$, when $k\in S$ is activated as a hub node (i.e. $z_k=1$), there will be at least one more open hub $\bar l\notin S$ (i.e. $z_{\bar l}=1$). That is, when $z_k=1$ there will be at least one active constraint in the set \eqref{cutset} with right-hand-side value $\lambda(z_k+z_{\bar l}-1)=\lambda z_k$. When $k$ is not activated as a hub node (i.e. $z_k=0$), none of the constraints \eqref{cutset} nor \eqref{singlelambda} will be active. Therefore, the result follows. \end{proof} Note that when $S$ is a singleton, i.e., $S=\{k\}$, the set of constraints \eqref{singlelambda} reduces precisely to \eqref{cutset2}. Thus, in what follows, we replace in \eqref{HLPLF-lambda} both constraints \eqref{cutset} and \eqref{cutset2} by \eqref{singlelambda}.
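The shortest-path computation of backup paths noted above can be sketched with a minimal pure-Python Dijkstra. The graph encoding below (frozenset edge keys, the helper name, and the cost convention) is our own illustrative assumption; loops are omitted, since a self-loop never shortens a path.

```python
import heapq

def backup_path_cost(o, d, hubs, hub_edges, cost, failed):
    """Cheapest o-d route on the auxiliary graph G_r described above:
    access edges {o, h} and {h, d} for every activated hub h, plus all
    activated inter-hub edges except the failed one.  'cost' maps
    frozenset edges to nonnegative routing costs (illustrative)."""
    edges = {frozenset((o, h)) for h in hubs} | {frozenset((h, d)) for h in hubs}
    edges |= {frozenset(e) for e in hub_edges if e[0] != e[1]}
    edges.discard(frozenset(failed))
    adj = {}
    for e in edges:
        u, v = tuple(e)
        adj.setdefault(u, []).append((v, cost[e]))
        adj.setdefault(v, []).append((u, cost[e]))
    dist, pq = {o: 0.0}, [(0.0, o)]
    while pq:
        du, u = heapq.heappop(pq)
        if u == d:
            return du                     # settled: shortest o-d cost
        if du > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in adj.get(u, []):
            if du + w < dist.get(v, float("inf")):
                dist[v] = du + w
                heapq.heappush(pq, (du + w, v))
    return float("inf")                   # no backup route exists
```

When the backbone is $\lambda$-connected with $\lambda\geq 2$, removing a single hub edge leaves the backbone connected, so such a call returns a finite backup cost for any single hub-edge failure.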
\subsection{Incorporation of $\lambda$-cutset constraints: a branch-and-cut approach} \label{Algorithm} As already mentioned, in \eqref{HLPLF-lambda} the size of the family of constraints \eqref{singlelambda} is exponential in the number of potential hub nodes, $n$. It is thus not possible to solve the formulation directly with an off-the-shelf solver, even for medium-size instances. In this section we present an exact branch-and-cut algorithm for this formulation in which, as usual, the family of constraints of exponential size \eqref{singlelambda} is initially relaxed. The strategy that we describe below is embedded within an enumeration tree and is applied not only at the root node but also at all explored nodes. Our separation procedure is an adaptation of the separation procedure for classical connectivity constraints \citep{PG-1985}, and is in the same vein as those applied to more general connectivity inequalities in node and arc routing problems (see, e.g., \citep{BB-COA-1998,AFF09,RPFLBM19} for further details). The initial formulation includes all constraints \eqref{c1}-\eqref{c6}, and the singleton version of \eqref{singlelambda}. Furthermore, all integrality conditions are relaxed. Let $(\overline{x},\overline{y} ,\overline{z})$ be the solution to the current LP and let $G(\overline{y})=(V(\overline{y}), E(\overline{y}))$ denote its associated support graph, where $E(\overline{y})$ consists of all the edges of $E$ such that $\bar y_{kl}>0$ and $V(\overline{y})$ is the set of endnodes of the edges of $E(\overline{y})$. Each edge $(k,l)\in E(\overline{y})$ is associated with a capacity $\bar y_{kl}$. The separation problem for inequalities \eqref{singlelambda} is to find $S \subset V$ and $k\in S$ with $\overline{y}(\delta(S))<\lambda\overline z_{k} - \overline y_{kk}$, or to prove that no such inequality exists.
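For intuition, on tiny support graphs this separation problem can be solved by direct enumeration of subsets. The following sketch (our own illustrative helper, exponential in $|V|$ and therefore not the procedure used in practice) simply checks the violation condition just stated for every pair $(S, k)$.

```python
from itertools import combinations

def separate_lambda_cut(nodes, ybar, ybar_loop, zbar, lam, tol=1e-9):
    """Brute-force separation of the lambda-connectivity cuts: find
    S subset of V and k in S with ybar(delta(S)) < lam*zbar[k] - ybar_loop[k].

    ybar maps frozenset edges {k, l}, k != l, to LP values; ybar_loop[k]
    is the LP value of the loop edge {k, k}.  Exponential enumeration,
    intended only for tiny instances (illustrative sketch)."""
    nodes = list(nodes)
    for size in range(1, len(nodes)):
        for S in combinations(nodes, size):
            Sset = set(S)
            # value of the cutset delta(S) under the LP solution ybar
            cut = sum(v for e, v in ybar.items() if len(e & Sset) == 1)
            for k in S:
                if cut + ybar_loop.get(k, 0.0) < lam * zbar[k] - tol:
                    return Sset, k        # violated inequality found
    return None                           # no violated inequality exists
```

A path $1-2-3$ with all $\bar y$ values equal to one, for example, violates the $\lambda=2$ cut for $S=\{1\}$, whereas the triangle on the same nodes does not.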
Note that, when they exist, violated $\lambda$-connectivity constraints \eqref{singlelambda} can be identified from a tree of min-cuts $T(\overline{y})$ associated with $G(\overline{y})$ relative to the capacity vector $\overline{y}$. Therefore, to solve the above separation problem we proceed as follows. For each min-cut $\delta(S)$ of $T(\overline{y})$ of value $\overline y(\delta(S))$, we identify $\overline k \in\arg\max \{\lambda\;\overline z_{k} - \overline y_{kk} : k\in S\}$. Then, if $\overline y(\delta(S))<\lambda\,\overline z_{\overline k} - \overline y_{\overline k\overline k}$, the inequality \eqref{singlelambda} associated with $S$ and $\overline k$ is violated by $\overline{y}$. We use the procedure proposed by \cite{G-SJAM-1990} to identify $T(\overline{y})$. This algorithm computes $|V(\overline{y})|-1$ max-flows in $G(\overline{y})$, so its overall complexity is $\mathcal{O}(|V(\overline{y})|^4)$. \section{Computational Experience}\label{sec:comput} In this section we report the results of an extensive battery of computational tests, which have been carried out to analyze the performance of the two modeling approaches, discussed in the previous sections, for obtaining robust hub networks protected against inter-hub failures. For the experiments, we have used a large set of benchmark instances based on the well-known CAB (\cite{OKelly_EJOR87}), AP (\cite{EK96}) and TR (\cite{tan2007hub}) datasets (taken from the \texttt{phub} datasets in ORLIB~\url{http://people.brunel.ac.uk/~mastjjb/jeb/orlib/} and \url{https://ie.bilkent.edu.tr/~bkara/dataset.php}), for varying settings of the failure probabilities and other parameters as described below. All instances were solved with the Gurobi 9.1.1 optimizer, under a Windows 10 environment on an Intel(R) Core(TM) i7-6700K CPU @ 4.00 GHz processor with 32 GB of RAM.
Default values were used for all parameters of the Gurobi solver and a computing time limit of 7200 seconds was set. \subsection{Instances generation} \label{sec:genera} We have generated several instances based on the entire CAB, AP and TR datasets with a number of nodes ($n$) initially ranging in $\{10, 15, 20, 25\}$ for the instances based on the CAB and TR datasets, and in $\{10, 20, 25\}$ for the instances based on the AP dataset. Let $c^\prime_{kl}$ be the standard unit transportation costs provided in ORLIB for the CAB and AP instances, or the travel distances provided for the TR instances. The unit routing costs for access/distribution arcs ($\overline c_{ij}$) and the inter-hub routing costs ($c_{kl}$) have been obtained as follows. We take the original costs as the unit routing costs through the access and distribution arcs, i.e., $\overline c_{ij}=c^\prime_{ij}$. For the routing costs through the inter-hub arcs, we assume that these costs include not only transportation costs but also some additional handling costs at the endnodes of the traversed arcs, associated with the collection (at the entering node) and redistribution (at the leaving node) of the routed commodity. Then, we define the unit routing costs through arc $(k,l) \in A$ as: $$ c_{kl} = \alpha (a_k + c^\prime_{kl} + d_l), $$ where: \begin{itemize} \item $\alpha \in [0,1]$ is the usual discount factor applied to routing costs through inter-hub arcs due to economies of scale. Three values for the discount factor, $\alpha\in\{0.2, 0.5, 0.8\}$, have been considered in our study. \item $a_k\geq 0$ and $d_k\geq 0$ are the unit collection and redistribution costs at node $k$, respectively. Note that applying the discount factor $\alpha$ to these terms implies no loss of generality. Note also that with this choice of costs, in case $k=l$, the unit routing (service) cost through the loop $(k, k)$ reduces to $c_{kk}=\alpha(a_k+d_k)$.
In our computational study we define $a_k=d_k=\min\{\min_{j\neq k} c^\prime_{kj}, \min_{j\neq k} c^\prime_{jk}\}$. \end{itemize} As usual in the literature~\citep{OKelly_PRS92}, we have considered the same {set-up} costs for all {potential hubs} $k \in V$: $f_k=100$ for the CAB {dataset}, two types of set-up costs ($T$ and $L$) for the hub nodes provided with the AP dataset, and the set-up costs provided in the TR dataset. Service demand, $w_r$, $r\in R$, was also taken from the provided datasets. As considered in the literature (see e.g., \cite{alumur2009design}, \cite{calik2009tabu}), the set-up costs for activating hub edges for the CAB and the AP datasets were set as: $$h_{kl} = \left\{\begin{array}{cl} 100 \dfrac{c_{kl}/\textsc{w}_{kl}}{\textsc{maxw}} & \mbox{if $k\neq l$},\vspace*{0.2cm}\\ 100 \dfrac{c_{kl}/\bar{\textsc{w}}}{\textsc{maxw}} & \mbox{if $k=l$}, \end{array}\right.$$ where \textsc{w} is the normalized vector of flows, $\bar{\textsc{w}}$ is the mean of \textsc{w} and \textsc{maxw}$=\max\{\frac{c_{ij}}{\textsc{w}_{ij}}\, :\, i,j\in V, \, \textsc{w}_{ij}>0\}$; for the TR dataset, the hub-edge set-up costs provided with the original data were used.\\ In formulation (HLPLF-$\lambda$), we have estimated the costs of backup paths as $\bar C_{kl}^r = (1+\beta) C_{kl}^r$ for two different values of $\beta \in \{0.5, 1\}$. Observe that in this case the expected routing cost simplifies to: $$ \sum_{r\in R}\sum_{(k,l)\in A} (1+\beta p_{kl}) C^r_{kl} x^{r}_{kl}. $$ As for the failure probabilities $p_{kl}$, $\{k,l\} \in E$, we have considered three different scenarios: \begin{itemize} \item[\textbf{RP:}] Random probabilities. The failure probability of each edge is randomly generated from a uniform distribution, i.e. $p_{kl}\sim U[0,\rho]$ for all $\{k,l\} \in E$. \item[\textbf{CP:}] Clustered probabilities. Edges are clustered into three groups, each of them with a different failure probability.
For this, each edge $\{k, l\} \in E$ is randomly assigned a failure probability $p_{kl}\in\{0.1, 0.2, 0.3\}$. \item[\textbf{SP:}] Same probability. All edges have the same failure probability, i.e. $p_{kl}=\rho$, for all $\{k,l\} \in E$. \end{itemize} The values of the parameter $\rho$ used in the \textit{RP} and \textit{SP} scenarios are $\rho \in \{0.1, 0.3\}$. The files of the randomly generated probabilities are available in the Github repository \url{https://github.com/vblancoOR/HLPLF}. For each combination of parameters $n,\, \alpha, \, \rho$, and each {dataset} (CAB, AP with type T and L fixed set-up costs for hub nodes, and TR), five different instances have been generated for the failure probability scenario RP, and one instance has been considered for scenario SP. Five instances have also been generated for scenario {CP} and each combination of $n,\, \alpha$, and each {dataset}. Thus, (HLPLF-1BP), {hereafter called} M1, has been solved on a total of 714 instances. Concerning formulation (HLPLF-$\lambda$), we considered three different values for the parameter $\lambda$, namely $\lambda=2$, $\lambda=3$ and $\lambda=4$ (we call {the corresponding} models M2\_2, M2\_3 and M2\_4, respectively). Thus, (HLPLF-$\lambda$) has been solved on a total of 4284 instances. Additionally, for comparative purposes, we have solved 42 instances of the Uncapacitated Hub Location Problem, in which no protection under failures is considered. This model will be referred to as M0. Finally, to test the scalability of our formulations, a second experiment was carried out on {a set of} larger instances ($n \in \{ 40, 50\}$) based on the AP and TR {datasets} considering only (HLPLF-$\lambda$), which, as we will see, is the most promising formulation, for $\lambda \in\{2,4\}$ and $\beta=1$. We have solved a total of {612} instances in this second study. Overall, 5652 instances have been solved.
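The cost and probability generation just described can be reproduced with a short Python sketch. The function and container names, the seed, and the assumption $c^\prime_{kk}=0$ (so that the loop cost reduces to $\alpha(a_k+d_k)$) are our own illustrative choices.

```python
import random

def interhub_costs(cprime, alpha):
    """c_kl = alpha * (a_k + c'_kl + d_l) with a_k = d_k equal to the
    minimum over j != k of c'_kj and c'_jk, as described above.
    Assumes c'_kk = 0, so the loop cost reduces to alpha*(a_k + d_k)."""
    nodes = sorted(cprime)
    a = {k: min(min(cprime[k][j] for j in nodes if j != k),
                min(cprime[j][k] for j in nodes if j != k))
         for k in nodes}
    return {(k, l): alpha * (a[k] + cprime[k][l] + a[l])
            for k in nodes for l in nodes}

def failure_probs(edges, scenario, rho=0.3, seed=0):
    """RP: p_kl ~ U[0, rho];  CP: p_kl drawn from {0.1, 0.2, 0.3};
    SP: p_kl = rho for every edge.  Seeding is an illustrative choice."""
    rng = random.Random(seed)
    if scenario == "RP":
        return {e: rng.uniform(0.0, rho) for e in edges}
    if scenario == "CP":
        return {e: rng.choice([0.1, 0.2, 0.3]) for e in edges}
    return {e: rho for e in edges}
```

For a symmetric $3\times 3$ cost matrix with $c^\prime_{12}=4$, $c^\prime_{13}=6$, $c^\prime_{23}=2$ and $\alpha=0.5$, this yields, e.g., $c_{12}=0.5\,(4+4+2)=5$ and $c_{11}=0.5\,(4+0+4)=4$.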
Table \ref{tab:summary} summarizes the main characteristics of the testing instances and the selected parameters. \begin{table} \begin{center} \hspace*{-1cm} \renewcommand{\arraystretch}{2.2} \adjustbox{scale=0.8}{ \begin{tabular}{c|c|c|c|c|c|c} {Instances} & $n$ & $\alpha$ & $c_{kl}$ & $f_k$& $h_{kl}$ & $\lambda$\\%\vspace{5pt}\\
\hline CAB & $\{10, 15, 20, 25\}$ & \multirow{3}{*}{\{0.2, 0.5, 0.8\}} & \multirow{3}{*}{$a_k + c^\prime_{kl} + d_l$} & 100 & \multirow{2}{*}{$\left\{\begin{array}{cl} 100 \dfrac{c_{kl}/\textsc{w}_{kl}}{\textsc{maxw}} & \mbox{ if } k\neq l,\vspace*{0.2cm}\\ 100 \dfrac{c_{kl}/\bar{\textsc{w}}}{\textsc{maxw}} & \mbox{ if } k=l. \end{array}\right.$} & \multirow{3}{*}{$\{2, 3, 4\}$}\\ AP & $\{10, 20, 25, 40, 50\}$ & & & \multirow{2}{*}{Data file} & &\\ TR & $\{10, 15, 20, 25, 40, 50\}$ & & & &Data file &\vspace{7pt}\\ \hline \hline \multicolumn{7}{c}{Failure probabilities}\\ \hline \multicolumn{3}{c|}{Random probabilities (RP)} & \multicolumn{2}{l}{$\qquad p_{kl}\sim U[0,\rho],$} & \multicolumn{1}{l}{$\rho\in\{0.1, 0.3\}$ }&\\ \multicolumn{3}{c|}{Clustered probabilities (CP)} & \multicolumn{2}{l}{$\qquad p_{kl}\in\{0.1, 0.2, 0.3\}$}&\multicolumn{1}{l}{}&\\ \multicolumn{3}{c|}{Same probability (SP)} & \multicolumn{2}{l}{$\qquad p_{kl}=\rho,$} &\multicolumn{1}{l}{ $\rho\in\{0.1, 0.3\}$}&\\ \hline \end{tabular}}%
\vspace*{0.25cm}\\ \caption{Summary of instances and parameters.\label{tab:summary}} \end{center} \end{table} \begin{table} \begin{center} \scriptsize \adjustbox{scale=0.8}{\begin{tabular}{ccc|rrrrr|rrrrr|rrrrr} & & & \multicolumn{5}{c|}{CPUTime} & \multicolumn{5}{c|}{MIPGAP} & \multicolumn{5}{c}{\%Solved} \\ & & & \multicolumn{2}{c}{RP} & \multicolumn{1}{c}{\multirow{2}[3]{*}{CP}} & \multicolumn{2}{c|}{SP} & \multicolumn{2}{c}{RP} & \multicolumn{1}{c}{\multirow{2}[3]{*}{CP}} & \multicolumn{2}{c|}{SP} & \multicolumn{2}{c}{RP} & \multicolumn{1}{c}{\multirow{2}[3]{*}{CP}} & \multicolumn{2}{c}{SP} \\ \cline{4-5}\cline{7-10}\cline{12-15}\cline{17-18} n & $\alpha$
& Data & \multicolumn{1}{c}{0.1} & \multicolumn{1}{c}{0.3} & & \multicolumn{1}{c}{0.1} & \multicolumn{1}{c|}{0.3} & \multicolumn{1}{c}{0.1} & \multicolumn{1}{c}{0.3} & & \multicolumn{1}{c}{0.1} & \multicolumn{1}{c|}{0.3} & \multicolumn{1}{c}{0.1} & \multicolumn{1}{c}{0.3} & & \multicolumn{1}{c}{0.1} & \multicolumn{1}{c}{0.3} \\ \hline \multirow{12}[5]{*}{10} & \multirow{4}[2]{*}{0.2} &${\rm AP}_T$& 4 & 7 & 15 & 1 & 1 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 100 & 100 & 100 & 100 & 100 \\ & & ${\rm AP}_L$ & 7 & 93 & 13 & 2 & 3 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 100 & 100 & 100 & 100 & 100 \\ & & CAB & 143 & 5198 & \texttt{TL} & 1 & 1 & 0.00 & 0.45 & 1.08 & 0.00 & 0.00 & 100 & 40 & 0 & 100 & 100 \\ & & TR & 4 & 9 & 44 & 0 & 1 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 100 & 100 & 100 & 100 & 100 \\ \cline{2-18} & \multirow{4}[2]{*}{0.5} &${\rm AP}_T$& 6 & 21 & 14 & 1 & 1 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 100 & 100 & 100 & 100 & 100 \\ & & ${\rm AP}_L$ & 10 & 112 & 11 & 2 & 3 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 100 & 100 & 100 & 100 & 100 \\ & & CAB & 62 & 5391 & 1864 & 1 & 1 & 0.00 & 0.95 & 0.00 & 0.00 & 0.00 & 100 & 40 & 100 & 100 & 100 \\ & & TR & 5 & 11 & 63 & 0 & 1 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 100 & 100 & 100 & 100 & 100 \\ \cline{2-18} & \multirow{4}[1]{*}{0.8} &${\rm AP}_T$& 7 & 13 & 15 & 1 & 1 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 100 & 100 & 100 & 100 & 100 \\ & & ${\rm AP}_L$ & 9 & 99 & 12 & 1 & 3 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 100 & 100 & 100 & 100 & 100 \\ & & CAB & 25 & \texttt{TL} & 727 & 1 & 1 & 0.00 & 0.61 & 0.00 & 0.00 & 0.00 & 100 & 0 & 100 & 100 & 100 \\ & & TR & 5 & 19 & 53 & 0 & 1 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 100 & 100 & 100 & 100 & 100 \\ \hline \multirow{6}[4]{*}{15} & \multirow{2}[1]{*}{0.2} & CAB & \texttt{TL} & \texttt{TL} & \texttt{TL} & 2 & 7 & 0.60 & 8.35 & 9.37 & 0.00 & 0.00 & 0 & 0 & 0 & 100 & 100 \\ & & TR & 51 & 71 & 287 & 4 & 6 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 100 & 100 & 100 & 100 & 100 \\ \cline{2-18} & 
\multirow{2}[2]{*}{0.5} & CAB & 4577 & \texttt{TL} & \texttt{TL} & 5 & 8 & 0.37 & 9.85 & 7.62 & 0.00 & 0.00 & 40 & 0 & 0 & 100 & 100 \\ & & TR & 52 & 89 & 619 & 3 & 5 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 100 & 100 & 100 & 100 & 100 \\ \cline{2-18} & \multirow{2}[1]{*}{0.8} & CAB & 511 & \texttt{TL} & \texttt{TL} & 4 & 6 & 0.00 & 8.76 & 3.65 & 0.00 & 0.00 & 100 & 0 & 0 & 100 & 100 \\ & & TR & 55 & 138 & 412 & 3 & 4 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 100 & 100 & 100 & 100 & 100 \\ \hline \multirow{12}[4]{*}{20} & \multirow{4}[1]{*}{0.2} &${\rm AP}_T$& 1812 & \texttt{TL} & 6336 & 43 & 30 & 0.00 & 5.92 & 1.38 & 0.00 & 0.00 & 100 & 0 & 20 & 100 & 100 \\ & & ${\rm AP}_L$ & \texttt{TL} & \texttt{TL} & \texttt{TL} & 3443 & 2080 & 14.80 & 20.53 & 17.81 & 0.00 & 0.00 & 0 & 0 & 0 & 100 & 100 \\ & & CAB & \texttt{TL} & \texttt{TL} & \texttt{TL} & 13 & 121 & 3.10 & 14.43 & 15.38 & 0.00 & 0.00 & 0 & 0 & 0 & 100 & 100 \\ & & TR & 360 & 1402 & 4131 & 32 & 32 & 0.00 & 0.00 & 1.07 & 0.00 & 0.00 & 100 & 100 & 80 & 100 & 100 \\ \cline{2-18} & \multirow{4}[2]{*}{0.5} &${\rm AP}_T$& 1520 & \texttt{TL} & 6611 & 53 & 34 & 0.00 & 2.92 & 1.29 & 0.00 & 0.00 & 100 & 0 & 20 & 100 & 100 \\ & & ${\rm AP}_L$ & \texttt{TL} & \texttt{TL} & \texttt{TL} & 4308 & 1257 & 13.11 & 20.30 & 17.19 & 0.00 & 0.00 & 0 & 0 & 0 & 100 & 100 \\ & & CAB & \texttt{TL} & \texttt{TL} & \texttt{TL} & 20 & 51 & 3.01 & 15.65 & 15.55 & 0.00 & 0.00 & 0 & 0 & 0 & 100 & 100 \\ & & TR & 363 & 1850 & 5497 & 21 & 36 & 0.00 & 0.00 & 3.60 & 0.00 & 0.00 & 100 & 100 & 60 & 100 & 100 \\ \cline{2-18} & \multirow{4}[1]{*}{0.8} &${\rm AP}_T$& 1693 & 7137 & 5780 & 40 & 33 & 0.00 & 3.82 & 1.61 & 0.00 & 0.00 & 100 & 20 & 40 & 100 & 100 \\ & & ${\rm AP}_L$ & \texttt{TL} & \texttt{TL} & \texttt{TL} & 2343 & 1809 & 12.60 & 18.98 & 15.63 & 0.00 & 0.00 & 0 & 0 & 0 & 100 & 100 \\ & & CAB & \texttt{TL} & \texttt{TL} & \texttt{TL} & 12 & 27 & 2.85 & 16.39 & 16.22 & 0.00 & 0.00 & 0 & 0 & 0 & 100 & 100 \\ & & TR & 422 & 1864 & 5625 & 19 & 33 & 
0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 100 & 100 & 100 & 100 & 100 \\ \hline \multirow{12}[4]{*}{25} & \multirow{4}[1]{*}{0.2} &${\rm AP}_T$& 4774 & 6835 & 4731 & 128 & 129 & 2.35 & 12.50 & 0.00 & 0.00 & 0.00 & 80 & 20 & 100 & 100 & 100 \\ & & ${\rm AP}_L$ & \texttt{TL} & \texttt{TL} & \texttt{TL} & \texttt{TL} & \texttt{TL} & 16.58 & 22.81 & 19.76 & 11.46 & 14.21 & 0 & 0 & 0 & 0 & 0 \\ & & CAB & \texttt{OoM} & \texttt{TL} & \texttt{TL} & 62 & 3208 & 4.33 & 19.52 & 19.21 & 0.00 & 0.00 & 0 & 0 & 0 & 100 & 100 \\ & & TR & 2332 & \texttt{TL} & \texttt{TL} & 119 & 169 & 0.00 & 11.98 & 17.25 & 0.00 & 0.00 & 100 & 0 & 0 & 100 & 100 \\ \cline{2-18} & \multirow{4}[2]{*}{0.5} &${\rm AP}_T$& 6389 & \texttt{TL} & 4789 & 129 & 179 & 4.97 & 12.60 & 0.00 & 0.00 & 0.00 & 40 & 0 & 80 & 100 & 100 \\ & & ${\rm AP}_L$ & \texttt{TL} & \texttt{TL} & \texttt{TL} & \texttt{TL} & \texttt{TL} & 14.72 & 20.82 & 18.12 & 8.58 & 13.39 & 0 & 0 & 0 & 0 & 0 \\ & & CAB & \texttt{OoM} & \texttt{OoM} & \texttt{TL} & 98 & 2403 & 4.69 & 23.21 & 20.37 & 0.00 & 0.00 & 0 & 0 & 0 & 100 & 100 \\ & & TR & 1935 & \texttt{TL} & \texttt{TL} & 82 & 128 & 0.00 & 11.54 & 13.29 & 0.00 & 0.00 & 100 & 0 & 0 & 100 & 100 \\ \cline{2-18} & \multirow{4}[1]{*}{0.8} &${\rm AP}_T$& 6465 & 6789 & 3490 & 125 & 316 & 5.06 & 10.28 & 0.00 & 0.00 & 0.00 & 40 & 20 & 100 & 100 & 100 \\ & & ${\rm AP}_L$ & \texttt{TL} & \texttt{TL} & \texttt{TL} & 2927 & \texttt{TL} & 13.77 & 20.52 & 17.54 & 0.00 & 11.90 & 0 & 0 & 0 & 100 & 0 \\ & & CAB & \texttt{OoM} & \texttt{OoM} & \texttt{TL} & 71 & 2740 & 7.07 & 23.99 & 18.90 & 0.00 & 0.00 & 0 & 0 & 0 & 100 & 100 \\ & & TR & 2483 & \texttt{TL} & \texttt{TL} & 53 & 80 & 0.00 & 12.68 & 14.23 & 0.00 & 0.00 & 100 & 0 & 0 & 100 & 100 \\ \end{tabular}}%
\vspace*{0.25cm}\\ \caption{Average Results for (HLPLF-1BP).\label{tab:m1}} \end{center} \end{table}%
\subsection{{Numerical results with (HLPLF-1BP) and (HLPLF-$\lambda$)}} The results obtained in our first computational study are summarized in {Tables
\ref{tab:m1} and \ref{tab:m2} for (HLPLF-1BP) and (HLPLF-$\lambda$), respectively}. In both tables, ``RP'', ``CP'' and ``SP'' stand for the scenarios with random failure probabilities (with $\rho= 0.1$ and $\rho= 0.3$), clustered failure probabilities, and same failure probability (with $\rho= 0.1$ and $\rho= 0.3$), respectively, {as described in Section \ref{sec:genera}}. The values of $n$, $\alpha$ and ``Data'' {in both tables} indicate the number of nodes in the network, the {value} of the discount factor applied to the routing cost through inter-hub arcs, and the dataset that {has} been used to {obtain} the costs and the flows, respectively. ``AP$_T$'' and ``AP$_L$'' refer to the AP dataset using type T and type L set-up costs for the hub nodes, respectively. In Table \ref{tab:m1}, for scenarios RP and CP, the information contained in each row refers to average values over the five instances with the corresponding combination of parameters, whereas for scenario SP the values of the entries correspond to the unique instance with this combination of parameters. In Table \ref{tab:m1} the numerical results of (HLPLF-1BP) are summarized in three blocks of columns. Block ``CPUTime'' gives the computing times, in seconds, required to solve the instances, block ``MIPGap'' the percentage MIP gaps returned by Gurobi at termination, and block ``\%Solved'' the percentage of instances solved to proven optimality within the time limit. An entry ``\texttt{TL}'' in the CPUTime block means that the time limit of 7200 seconds was reached in all five instances of the group. An ``\texttt{OoM}'' entry indicates that the solver reported an ``Out of memory'' flag for at least one of the instances in the row; in that case, the remaining information in the row refers to average values over the solved instances only (even if none of these instances could be solved to proven optimality).
Table \ref{tab:m2} is organized in three blocks, $\lambda =2$, $\lambda =3$ and $\lambda=4$, one for each of the three considered values of $\lambda$ in (HLPLF-$\lambda$). We have observed that the value of the parameter $\beta$ does not affect the results; thus, in this table the information contained in each row refers to average values over 10 instances ($5\times 2$ different values of $\beta$) for the RP and CP scenarios, and to average values over the two instances (one per value of $\beta$) for the SP scenario. Using formulation (HLPLF-$\lambda$), all instances have been solved to proven optimality for all three considered values of $\lambda$. For this reason, blocks ``MIPGap'' and ``\%Solved'' {have been omitted} in Table \ref{tab:m2}, since the MIPGap is 0.00 for all the instances and the percentage of solved instances is always $100\%$. In Table \ref{tab:m1} we observe a different performance of (HLPLF-1BP) among the instances corresponding to the different scenarios. Based on the computing times, MIPGaps, and percentage of solved instances, scenario SP produces the easiest instances, for all configurations of parameters{, as expected}. One can observe that, for instances with the same probability, all instances generated from the CAB, AP$_T$, and TR datasets with up to $n=25$, as well as the instances generated from AP$_L$ with up to $n=20$, have been optimally solved within the time limit. On the other hand, note that instances based on the CAB dataset are more difficult to solve than instances based on the TR and AP datasets. TR-based instances are the easiest to solve: all instances with up to $n=15$, $95\%$ of those with $n=20$ and $35\%$ of those with $n=25$ have been optimally solved within the time limit. Regarding AP-based instances, all instances with $n=10$ have been optimally solved within the time limit, although AP$_T$ instances consumed, in general, less computing time.
The difference between AP$_T$ and AP$_L$ instances becomes more evident for $n>10$, since approximately $50\%$ of the AP$_T$ instances were optimally solved whereas none of the AP$_L$ instances with random and clustered probabilities {(scenarios RP and CP)} was solved to proven optimality within the time limit. As mentioned before, CAB instances are the most difficult ones. For $n=10$, $30\%$ of these instances could not be optimally solved within the time limit. This percentage increases up to $90\%$ for $n=20$ and up to $100\%$ for $n=25$. Additionally, for $n=25$, the execution was stopped due to an Out of Memory flag with $30\%$ of the CAB instances under the random probabilities (RP) scenario. \begin{table} \centering \scriptsize \adjustbox{scale=0.8}{\begin{tabular}{ccc|rrrrr|rrrrr|rrrrr} & & & \multicolumn{5}{c|}{$\lambda=2$} & \multicolumn{5}{c|}{$\lambda=3$} & \multicolumn{5}{c}{$\lambda=4$} \\ & & & \multicolumn{2}{c}{RP} & & \multicolumn{2}{c|}{SP} & \multicolumn{2}{c}{RP} & & \multicolumn{2}{c|}{SP} & \multicolumn{2}{c}{RP} & & \multicolumn{2}{c}{SP} \\ \cline{4-5}\cline{7-10}\cline{12-15}\cline{17-18} n & $\alpha$ & Data & \multicolumn{1}{c}{0.1} & \multicolumn{1}{c}{0.3} & \multicolumn{1}{c}{CP} & \multicolumn{1}{c}{0.1} & \multicolumn{1}{c|}{0.3} & \multicolumn{1}{c}{0.1} & \multicolumn{1}{c}{0.3} & \multicolumn{1}{c}{CP} & \multicolumn{1}{c}{0.1} & \multicolumn{1}{c|}{0.3} & \multicolumn{1}{c}{0.1} & \multicolumn{1}{c}{0.3} & \multicolumn{1}{c}{CP} & \multicolumn{1}{c}{0.1} & \multicolumn{1}{c}{0.3} \\ \hline \multirow{12}[6]{*}{10} & \multirow{4}[2]{*}{0.2} & ${\rm AP}\_T$ & 2 & 2 & 3 & 2 & 2 & 3 & 5 & 5 & 3 & 5 & 3 & 3 & 2 & 3 & 2 \\ & & ${\rm AP}\_L$ & 6 & 3 & 6 & 7 & 6 & 2 & 2 & 2 & 2 & 2 & 3 & 4 & 3 & 3 & 5 \\ & & CAB & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 \\ & & TR & 1 & 1 & 1 & 1 & 1 & 5 & 4 & 5 & 6 & 2 & 8 & 7 & 6 & 14 & 9 \\ \cline{2-18} & \multirow{4}[2]{*}{0.5} & ${\rm AP}\_T$ & 2 & 2 & 3 & 4 & 2 & 3 & 4 & 4 & 4
& 4 & 3 & 3 & 2 & 2 & 2 \\ & & ${\rm AP}\_L$ & 6 & 3 & 8 & 5 & 3 & 2 & 2 & 2 & 3 & 2 & 3 & 3 & 2 & 2 & 4 \\ & & CAB & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 3 & 2 & 2 & 4 & 3 \\ & & TR & 3 & 1 & 2 & 4 & 2 & 4 & 4 & 4 & 2 & 3 & 8 & 7 & 6 & 6 & 9 \\ \cline{2-18} & \multirow{4}[2]{*}{0.8} & ${\rm AP}\_T$ & 2 & 3 & 3 & 3 & 2 & 3 & 3 & 3 & 4 & 2 & 2 & 2 & 3 & 2 & 2 \\ & & ${\rm AP}\_L$ & 3 & 3 & 5 & 7 & 2 & 2 & 2 & 2 & 2 & 1 & 2 & 3 & 2 & 2 & 3 \\ & & CAB & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 0 & 2 & 2 & 2 & 2 & 2 \\ & & TR & 3 & 3 & 3 & 2 & 4 & 4 & 3 & 3 & 4 & 2 & 7 & 8 & 6 & 7 & 6 \\ \hline \multicolumn{1}{r}{\multirow{6}[6]{*}{15}} & \multirow{2}[2]{*}{0.2} & CAB & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ & & TR & 9 & 6 & 8 & 6 & 7 & 22 & 17 & 25 & 40 & 17 & 42 & 26 & 31 & 40 & 24 \\ \cline{2-18} & \multirow{2}[2]{*}{0.5} & CAB & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 2 & 2 & 1 & 3 & 4 & 11 & 3 & 1 \\ & & TR & 31 & 13 & 19 & 26 & 17 & 18 & 23 & 23 & 19 & 18 & 35 & 36 & 31 & 33 & 24 \\ \cline{2-18} & \multirow{2}[2]{*}{0.8} & CAB & 1 & 1 & 1 & 1 & 1 & 3 & 2 & 3 & 3 & 3 & 12 & 10 & 12 & 12 & 10 \\ & & TR & 23 & 18 & 16 & 24 & 15 & 17 & 20 & 17 & 14 & 16 & 26 & 23 & 25 & 25 & 17 \\ \hline \multirow{12}[6]{*}{20} & \multirow{4}[2]{*}{0.2} & ${\rm AP}\_T$ & 56 & 42 & 52 & 58 & 35 & 133 & 93 & 132 & 112 & 88 & 134 & 119 & 95 & 98 & 113 \\ & & ${\rm AP}\_L$ & 95 & 69 & 99 & 179 & 49 & 102 & 137 & 90 & 81 & 105 & 111 & 97 & 98 & 163 & 137 \\ & & CAB & 1 & 1 & 1 & 1 & 1 & 1 & 2 & 1 & 1 & 1 & 2 & 2 & 1 & 2 & 1 \\ & & TR & 25 & 26 & 23 & 17 & 15 & 98 & 66 & 77 & 50 & 121 & 139 & 133 & 127 & 130 & 95 \\ \cline{2-18} & \multirow{4}[2]{*}{0.5} & ${\rm AP}\_T$ & 51 & 55 & 61 & 40 & 42 & 119 & 109 & 134 & 73 & 99 & 114 & 111 & 121 & 201 & 81 \\ & & ${\rm AP}\_L$ & 101 & 60 & 52 & 36 & 51 & 101 & 82 & 71 & 90 & 117 & 81 & 140 & 89 & 130 & 78 \\ & & CAB & 2 & 2 & 2 & 1 & 1 & 2 & 2 & 2 & 1 & 1 & 2 & 2 & 4 & 2 & 1 \\ & & TR & 46 & 43 & 44 & 28 & 70 & 91 & 81 & 58 & 154 & 98 & 
198 & 122 & 141 & 116 & 150 \\ \cline{2-18} & \multirow{4}[2]{*}{0.8} & ${\rm AP}\_T$ & 41 & 54 & 49 & 27 & 59 & 106 & 91 & 137 & 112 & 114 & 81 & 146 & 97 & 168 & 105 \\ & & ${\rm AP}\_L$ & 44 & 77 & 41 & 46 & 47 & 74 & 109 & 90 & 78 & 81 & 95 & 112 & 102 & 103 & 105 \\ & & CAB & 1 & 2 & 1 & 1 & 1 & 1 & 3 & 1 & 1 & 1 & 27 & 40 & 57 & 21 & 21 \\ & & TR & 49 & 42 & 46 & 64 & 34 & 101 & 82 & 52 & 132 & 76 & 168 & 144 & 117 & 163 & 239 \\ \hline \multirow{12}[5]{*}{25} & \multirow{4}[2]{*}{0.2} & ${\rm AP}\_T$ & 301 & 242 & 238 & 111 & 144 & 179 & 205 & 161 & 182 & 251 & 481 & 550 & 412 & 402 & 523 \\ & & ${\rm AP}\_L$ & 205 & 246 & 249 & 228 & 105 & 360 & 434 & 356 & 451 & 289 & 628 & 723 & 704 & 537 & 596 \\ & & CAB & 4 & 4 & 4 & 4 & 4 & 4 & 4 & 4 & 4 & 4 & 4 & 8 & 4 & 4 & 4 \\ & & TR & 316 & 173 & 246 & 204 & 304 & 608 & 545 & 557 & 298 & 486 & 1364 & 1383 & 1018 & 1544 & 922 \\ \cline{2-18} & \multirow{4}[2]{*}{0.5} & ${\rm AP}\_T$ & 187 & 179 & 165 & 108 & 64 & 163 & 177 & 167 & 151 & 150 & 330 & 386 & 277 & 326 & 421 \\ & & ${\rm AP}\_L$ & 178 & 403 & 122 & 329 & 250 & 387 & 341 & 329 & 529 & 338 & 543 & 674 & 490 & 676 & 738 \\ & & CAB & 4 & 4 & 4 & 5 & 4 & 4 & 4 & 5 & 4 & 4 & 135 & 220 & 236 & 71 & 11 \\ & & TR & 458 & 425 & 282 & 410 & 596 & 790 & 749 & 974 & 558 & 1003 & 1795 & 1650 & 1376 & 1016 & 2003 \\ \cline{2-18} & \multirow{4}[1]{*}{0.8} & ${\rm AP}\_T$ & 180 & 121 & 142 & 165 & 40 & 142 & 158 & 141 & 124 & 139 & 266 & 369 & 338 & 303 & 365 \\ & & ${\rm AP}\_L$ & 125 & 87 & 153 & 86 & 130 & 271 & 258 & 296 & 181 & 283 & 589 & 383 & 370 & 414 & 396 \\ & & CAB & 3 & 10 & 11 & 4 & 4 & 4 & 8 & 7 & 4 & 4 & 90 & 255 & 150 & 202 & 131 \\ & & TR & 396 & 421 & 348 & 216 & 499 & 848 & 1250 & 1036 & 606 & 928 & 1870 & 1593 & 1599 & 2062 & 2323 \\ \end{tabular}}% \vspace*{0.25cm}\\ \caption{Average CPU times for (HLPLF-$\lambda$).} \label{tab:m2} \end{table}% Comparing Table \ref{tab:m1} with Table \ref{tab:m2} we observe that (HLPLF-$\lambda$) is 
notably easier to solve than (HLPLF-1BP), which can be explained by its smaller number of decision variables. The difficulty of (HLPLF-$\lambda$) increases with the value of $\lambda$: the formulation becomes more restrictive as $\lambda$ grows, and its performance degrades accordingly. When $\lambda=2$, the average computing time over all the instances is approximately 64 seconds: two seconds for the CAB instances, 70 seconds for the AP$_T$ instances, 89 seconds for the AP$_L$ instances, and 102 seconds for the TR instances. Note that, unlike with (HLPLF-1BP), the CAB instances are less computationally demanding than the AP and TR instances. This behavior was also observed for $\lambda = 3$ and $\lambda = 4$. For $\lambda=4$, the average computing time over all the instances is approximately 218 seconds: 30 seconds for the CAB instances, 168 seconds for the AP$_T$ instances, 225 seconds for the AP$_L$ instances, and 438 seconds for the TR instances. The value of $\alpha$ also affects the performance of (HLPLF-$\lambda$), with instances becoming more difficult for smaller $\alpha$ values, especially the AP instances. Finally, unlike for (HLPLF-1BP), there seem to be no noticeable differences among scenarios for (HLPLF-$\lambda$).
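The key driver of this behavior is the $\lambda$-connectivity requirement: the backbone network must remain connected after the failure of any $\lambda-1$ inter-hub edges. Assuming $\lambda$ refers to edge-connectivity of the backbone (as the backup-path discussion suggests), a brute-force feasibility check for a candidate backbone can be sketched as follows; this is a verification sketch, not part of the formulation, and the function names and the convention of discarding loops are ours:

```python
from itertools import combinations

def is_connected(nodes, edges):
    """Depth-first connectivity check for an undirected graph."""
    nodes = set(nodes)
    if not nodes:
        return True
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for w in adj[u] - seen:
            seen.add(w)
            stack.append(w)
    return seen == nodes

def is_lambda_edge_connected(hubs, inter_hub_edges, lam):
    """Check that the backbone stays connected after removing any set of
    at most lam-1 inter-hub edges.  Loops are discarded first, since they
    cannot contribute to connectivity."""
    edges = [e for e in inter_hub_edges if e[0] != e[1]]
    return all(
        is_connected(hubs, [e for i, e in enumerate(edges) if i not in drop])
        for k in range(lam)
        for drop in combinations(range(len(edges)), k)
    )

# A 4-hub cycle survives any single edge failure (2-edge-connected)
# but not every pair of failures (not 3-edge-connected):
cycle = [(1, 2), (2, 3), (3, 4), (4, 1)]
print(is_lambda_edge_connected({1, 2, 3, 4}, cycle, 2))  # True
print(is_lambda_edge_connected({1, 2, 3, 4}, cycle, 3))  # False
```

The exhaustive enumeration is exponential in $\lambda$, which mirrors why the formulation itself grows more demanding as $\lambda$ increases; it is only practical here for the small illustrative backbones.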
\begin{table}[htbp] \centering\scriptsize \adjustbox{scale=0.8}{\begin{tabular}{cccc|rrrrr|rrrrr|rrrrr} & & & & \multicolumn{5}{c|}{CPU Time} & \multicolumn{5}{c|}{MIP Gap} & \multicolumn{5}{c}{\%Solved} \\ & & & & \multicolumn{2}{c}{RP} & & \multicolumn{2}{c|}{SP} & \multicolumn{2}{c}{RP} & & \multicolumn{2}{c|}{SP} & \multicolumn{2}{c}{RP} & & \multicolumn{2}{c}{SP} \\ \cline{5-6}\cline{8-11}\cline{13-16}\cline{18-19} & \multicolumn{1}{l}{n} & $\alpha$ & \multicolumn{1}{l|}{Data} & \multicolumn{1}{c}{0.1} & \multicolumn{1}{c}{0.3} & \multicolumn{1}{c}{CP} & \multicolumn{1}{c}{0.1} & \multicolumn{1}{c|}{0.3} & \multicolumn{1}{c}{0.1} & \multicolumn{1}{c}{0.3} & \multicolumn{1}{c}{CP} & \multicolumn{1}{c}{0.1} & \multicolumn{1}{c|}{0.3} & \multicolumn{1}{c}{0.1} & \multicolumn{1}{c}{0.3} & \multicolumn{1}{c}{CP} & \multicolumn{1}{c}{0.1} & \multicolumn{1}{c}{0.3} \\ \hline \multirow{18}[12]{*}{$\lambda=2$} & \multirow{9}[6]{*}{40} & \multirow{3}[2]{*}{0.2} & ${\rm AP}_T$ & 371 & 363 & 382 & 361 & 310 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 100 & 100 & 100 & 100 & 100 \\ & & & ${\rm AP}_L$ & 1608 & 1689 & 5418 & 1181 & 1059 & 0.00 & 0.00 & 4.43 & 0.00 & 0.00 & 100 & 100 & 60 & 100 & 100 \\ & & & TR & 870 & 775 & 773 & 666 & 537 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 100 & 100 & 100 & 100 & 100 \\ \cline{3-19} & & \multirow{3}[2]{*}{0.5} & ${\rm AP}_T$ & 344 & 479 & 313 & 322 & 432 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 100 & 100 & 100 & 100 & 100 \\ & & & ${\rm AP}_L$ & 3579 & 3155 & 2487 & 2602 & 1351 & 0.00 & 3.16 & 0.00 & 0.00 & 0.00 & 100 & 80 & 100 & 100 & 100 \\ & & & TR & 434 & 600 & 601 & 337 & 414 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 100 & 100 & 100 & 100 & 100 \\ \cline{3-19} & & \multirow{3}[2]{*}{0.8} & ${\rm AP}_T$ & 331 & 376 & 325 & 303 & 337 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 100 & 100 & 100 & 100 & 100 \\ & & & ${\rm AP}_L$ & 2067 & 1663 & 2126 & 1604 & 692 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 100 & 100 & 100 & 100 & 100 \\ & & & TR & 499 & 489 & 430 & 512 &
390 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 100 & 100 & 100 & 100 & 100 \\ \cline{2-19} & \multirow{9}[6]{*}{50} & \multirow{3}[2]{*}{0.2} & ${\rm AP}_T$ & 3986 & 3470 & 5128 & 4147 & 3651 & 0.00 & 0.00 & 2.46 & 0.00 & 0.00 & 100 & 100 & 80 & 100 & 100 \\ & & & ${\rm AP}_L$ & 6427 & 5362 & 6472 & 3273 & \texttt{TL} & 13.79 & 6.56 & 12.40 & 0.00 & 15.64 & 20 & 60 & 40 & 100 & 0 \\ & & & TR & 2877 & 2520 & 3385 & 2180 & 3817 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 100 & 100 & 100 & 100 & 100 \\ \cline{3-19} & & \multirow{3}[2]{*}{0.5} & ${\rm AP}_T$ & 2635 & 3789 & 4319 & 4823 & 1590 & 0.00 & 0.00 & 2.02 & 0.00 & 0.00 & 100 & 100 & 80 & 100 & 100 \\ & & & ${\rm AP}_L$ & 3819 & 3406 & \texttt{TL} & 2761 & 2131 & 0.00 & 3.45 & 17.09 & 0.00 & 0.00 & 100 & 80 & 0 & 100 & 100 \\ & & & TR & 1296 & 3037 & 3264 & 1201 & 1349 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 100 & 100 & 100 & 100 & 100 \\ \cline{3-19} & & \multirow{3}[2]{*}{0.8} & ${\rm AP}_T$ & 2862 & 3732 & 2836 & 3736 & 2474 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 100 & 100 & 100 & 100 & 100 \\ & & & ${\rm AP}_L$ & 5872 & 5151 & 7183 & 6279 & 1172 & 9.93 & 2.99 & 14.02 & 0.00 & 0.00 & 40 & 80 & 20 & 100 & 100 \\ & & & TR & 1010 & 1495 & 1356 & 852 & 1050 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 100 & 100 & 100 & 100 & 100 \\ \hline \multirow{18}[12]{*}{$\lambda=4$} & \multirow{9}[6]{*}{40} & \multirow{3}[2]{*}{0.2} & ${\rm AP}_T$ & 2919 & 2907 & 2300 & 2582 & 6060 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 100 & 100 & 100 & 100 & 100 \\ & & & ${\rm AP}_L$ & \texttt{TL} & \texttt{TL} & 5643 & 5427 & 5674 & 49.57 & 50.23 & 19.51 & 0.00 & 0.00 & 0 & 0 & 60 & 100 & 100 \\ & & & TR & 6958 & 7044 & 6002 & \texttt{TL} & \texttt{TL} & 13.64 & 9.16 & 5.89 & 7.14 & 10.61 & 20 & 20 & 60 & 0 & 0 \\ \cline{3-19} & & \multirow{3}[2]{*}{0.5} & ${\rm AP}_T$ & 2807 & 2335 & 2293 & 2186 & 2939 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 100 & 100 & 100 & 100 & 100 \\ & & & ${\rm AP}_L$ & 5589 & 4865 & 3875 & 5535 & 2960 & 10.03 & 10.03 & 0.00 & 0.00 & 0.00 & 80 
& 80 & 100 & 100 & 100 \\ & & & TR & \texttt{TL} & 7030 & 7024 & \texttt{TL} & \texttt{TL} & 20.15 & 14.00 & 12.62 & 22.24 & 13.99 & 0 & 20 & 20 & 0 & 0 \\ \cline{3-19} & & \multirow{3}[2]{*}{0.8} & ${\rm AP}_T$ & 2315 & 2400 & 2272 & 1640 & 1750 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 100 & 100 & 100 & 100 & 100 \\ & & & ${\rm AP}_L$ & 4959 & 5691 & 3512 & 6253 & 2767 & 19.83 & 9.12 & 0.00 & 0.00 & 0.00 & 60 & 80 & 100 & 100 & 100 \\ & & & TR & \texttt{TL} & 6991 & 7139 & \texttt{TL} & \texttt{TL} & 17.73 & 13.56 & 12.25 & 17.51 & 14.20 & 0 & 20 & 20 & 0 & 0 \\ \cline{2-19} & \multirow{9}[6]{*}{50} & \multirow{3}[2]{*}{0.2} & ${\rm AP}_T$ & 7089 & 7198 & 6966 & 7134 & 6004 & 30.03 & 40.96 & 27.71 & 0.00 & 0.00 & 40 & 20 & 40 & 100 & 100 \\ & & & ${\rm AP}_L$ & \texttt{TL} & \texttt{TL} & \texttt{TL} & \texttt{TL} & \texttt{TL} & 49.99 & 49.23 & 50.31 & 48.24 & 46.73 & 0 & 0 & 0 & 0 & 0 \\ & & & TR & \texttt{TL} & \texttt{TL} & \texttt{TL} & \texttt{TL} & \texttt{TL} & 17.12 & 13.76 & 16.23 & 17.77 & 16.54 & 0 & 0 & 0 & 0 & 0 \\ \cline{3-19} & & \multirow{3}[2]{*}{0.5} & ${\rm AP}_T$ & 6482 & 6995 & 6711 & 5310 & \texttt{TL} & 20.67 & 39.79 & 25.85 & 0.00 & 49.76 & 60 & 20 & 40 & 100 & 0 \\ & & & ${\rm AP}_L$ & \texttt{TL} & \texttt{TL} & \texttt{TL} & \texttt{TL} & \texttt{TL} & 49.68 & 48.38 & 50.03 & 48.53 & 48.19 & 0 & 0 & 0 & 0 & 0 \\ & & & TR & \texttt{TL} & \texttt{TL} & \texttt{TL} & \texttt{TL} & \texttt{TL} & 24.59 & 16.95 & 17.05 & 22.29 & 20.88 & 0 & 0 & 0 & 0 & 0 \\ \cline{3-19} & & \multirow{3}[2]{*}{0.8} & ${\rm AP}_T$ & 6933 & 7081 & 6259 & 7202 & 7201 & 28.66 & 39.67 & 0.00 & 29.26 & 46.94 & 40 & 20 & 100 & 0 & 0 \\ & & & ${\rm AP}_L$ & \texttt{TL} & \texttt{TL} & \texttt{TL} & \texttt{TL} & \texttt{TL} & 48.91 & 47.81 & 47.93 & 48.24 & 46.94 & 0 & 0 & 0 & 0 & 0 \\ & & & TR & \texttt{TL} & \texttt{TL} & \texttt{TL} & \texttt{TL} & \texttt{TL} & 25.94 & 21.03 & 22.78 & 27.43 & 23.51 & 0 & 0 & 0 & 0 & 0 \\ \end{tabular}}% \vspace*{0.25cm} 
\caption{Average results for (HLPLF-$\lambda$) for $n\geq 40$.} \label{tab:m240} \end{table}% Finally, Table \ref{tab:m240} summarizes the results of our second set of computational experiments, which was carried out considering only (HLPLF-$\lambda$) for $\lambda \in\{2,4\}$ and $\beta=1$ on larger instances ($n \in \{40, 50\}$) based on the AP and TR datasets. In this second set of experiments we did not consider (HLPLF-1BP), as most instances could not be optimally solved within the time limit already for $n=25$. We can observe that, for $\lambda=2$, all the TR instances, $99\%$ of the ${\rm AP}_T$ instances, and $80\%$ of the ${\rm AP}_L$ instances were solved to proven optimality within the time limit, whereas for $\lambda=4$ the percentages of solved instances were $70\%$ for the ${\rm AP}_T$ instances, $40\%$ for the ${\rm AP}_L$ instances, and $6\%$ for the TR instances. This shows that (HLPLF-$\lambda$) is able to solve larger instances with up to $n=50$, even if instances become more challenging as the value of the parameter $\lambda$ increases. \subsection{Managerial Insight \label{sec:comput_manag}} In this section we derive some managerial insight from the results obtained in our first set of experiments (i.e., $n\leq 25$, where the instances were solved with both formulations), as well as from the solutions of these instances for M0 (the uncapacitated HLP with no protection under failures). Figure \ref{fig:cost} shows the percentage contribution to the objective function value of the different types of costs: routing costs, set-up costs for activating hubs (Hubs\_Costs), and set-up costs for activating inter-hub edges (Links\_Costs). We have observed that results are similar for the AP$_L$ and AP$_T$ datasets and thus, for each formulation, we differentiate between the CAB, AP, and TR datasets, as well as among the three values of the $\alpha$ parameter.
\begin{figure}[h] \centering\includegraphics[scale=0.55]{Propor_CostesTodo.png} \caption{Percent contribution to the objective function of the different types of costs. \label{fig:cost}} \end{figure} We can observe that the percent contribution of the set-up costs for activating inter-hub edges varies from $0.1\%$ for the CAB instances to $13\%$ for the TR instances in M0. The percent contribution of hub set-up costs depends, as expected, on the value of the parameter $\alpha$, but mainly on the dataset and on the model. For the instances based on the CAB dataset, it varies from $20\%$ with M0, M1, and M2\_2 for $\alpha=0.8$, to $40\%$ with M2\_4 for $\alpha=0.2$. For the instances based on the AP dataset, it varies from $45\%$ with M0 and $\alpha\in\{0.5, 0.8\}$ to $75\%$ with M2\_4. For the instances based on the TR dataset, it varies from $21\%$ with M0 and $\alpha\in\{0.5, 0.8\}$ to $69\%$ with M2\_4 for $\alpha=0.2$. The percent contribution of routing costs likewise depends on the value of the parameter $\alpha$, on the dataset, and on the model. For the instances based on the CAB dataset, this percentage varies from $55\%$ to $80\%$, with the highest values for M0, M1, and M2\_2 with $\alpha=0.8$. For the instances based on the AP dataset, the percent contribution of routing costs varies from $15\%$ to $55\%$, the highest values corresponding to M0 with $\alpha=0.8$. Finally, for the instances based on the TR dataset, the percent contribution of routing costs varies from $22\%$ to $68\%$, again with the highest values for M0 with $\alpha=0.8$.
\begin{table}[htbp] \scriptsize\centering \adjustbox{scale=0.8}{\begin{tabular}{ccc|rrr|rrr|rrr|rrr|rrr} & & & \multicolumn{3}{c|}{M0} & \multicolumn{3}{c|}{M1} & \multicolumn{3}{c|}{M2\_2} & \multicolumn{3}{c|}{M2\_3} & \multicolumn{3}{c}{M2\_4} \\ n & $\alpha$ & Data & \multicolumn{1}{l}{\# H} & \multicolumn{1}{l}{\# Lk} & \multicolumn{1}{l|}{\# Lp} & \multicolumn{1}{l}{\# H} & \multicolumn{1}{l}{\# Lk} & \multicolumn{1}{l|}{\# Lp} & \multicolumn{1}{l}{\# H} & \multicolumn{1}{l}{\# Lk} & \multicolumn{1}{l|}{\# Lp} & \multicolumn{1}{l}{\# H} & \multicolumn{1}{l}{\# Lk} & \multicolumn{1}{l|}{\# Lp} & \multicolumn{1}{l}{\# H} & \multicolumn{1}{l}{\# Lk} & \multicolumn{1}{l}{\# Lp} \\ \hline \multirow{9}[6]{*}{10} & \multirow{3}[2]{*}{0.2} & AP & 1.00 & 1.00 & 1.00 & 2.00 & 2.00 & 2.00 & 2.00 & 3.00 & 2.00 & 3.00 & 6.00 & 3.00 & 4.00 & 10.00 & 4.00 \\ & & CAB & 2.00 & 3.00 & 2.00 & 2.94 & 5.47 & 2.59 & 2.53 & 4.12 & 2.12 & 3.00 & 6.00 & 3.00 & 4.00 & 10.00 & 4.00 \\ & & TR & 1.00 & 1.00 & 1.00 & 2.00 & 2.00 & 1.00 & 3.00 & 3.06 & 0.06 & 4.00 & 6.00 & 0.00 & 5.00 & 10.00 & 0.00 \\ \cline{2-18} & \multirow{3}[2]{*}{0.5} & AP & 1.00 & 1.00 & 1.00 & 2.00 & 2.00 & 2.00 & 2.00 & 3.00 & 2.00 & 3.00 & 6.00 & 3.00 & 4.00 & 10.00 & 4.00 \\ & & CAB & 2.00 & 3.00 & 2.00 & 2.35 & 3.71 & 2.35 & 2.03 & 3.09 & 2.03 & 3.00 & 6.00 & 3.00 & 4.00 & 10.00 & 4.00 \\ & & TR & 1.00 & 1.00 & 1.00 & 2.00 & 2.00 & 1.00 & 3.00 & 3.00 & 0.00 & 4.00 & 6.00 & 0.00 & 5.00 & 10.00 & 0.00 \\ \cline{2-18} & \multirow{3}[2]{*}{0.8} & AP & 1.00 & 1.00 & 1.00 & 2.00 & 2.00 & 2.00 & 2.00 & 3.00 & 2.00 & 3.00 & 6.00 & 3.00 & 4.00 & 10.00 & 4.00 \\ & & CAB & 2.00 & 2.00 & 2.00 & 2.29 & 2.41 & 2.29 & 2.00 & 3.00 & 2.00 & 3.00 & 6.00 & 3.00 & 4.00 & 10.00 & 4.00 \\ & & TR & 1.00 & 1.00 & 1.00 & 2.00 & 2.00 & 1.00 & 3.00 & 3.65 & 0.65 & 4.00 & 6.12 & 0.12 & 5.00 & 10.00 & 0.00 \\ \hline \multirow{6}[6]{*}{15} & \multirow{2}[2]{*}{0.2} & CAB & 4.00 & 6.00 & 2.00 & 4.18 & 8.06 & 2.94 & 4.24 & 7.56 & 2.71 & 4.21 
& 8.59 & 2.88 & 4.71 & 11.41 & 3.29 \\ & & TR & 1.00 & 1.00 & 1.00 & 2.00 & 2.00 & 1.00 & 3.00 & 3.53 & 0.53 & 4.00 & 6.00 & 0.00 & 5.00 & 10.00 & 0.00 \\ \cline{2-18} & \multirow{2}[2]{*}{0.5} & CAB & 2.00 & 3.00 & 2.00 & 2.71 & 4.47 & 2.71 & 2.15 & 3.29 & 2.03 & 3.03 & 6.06 & 3.00 & 4.00 & 10.00 & 4.00 \\ & & TR & 1.00 & 1.00 & 1.00 & 2.00 & 2.00 & 1.00 & 3.00 & 3.12 & 0.12 & 4.00 & 6.00 & 0.00 & 5.00 & 10.00 & 0.00 \\ \cline{2-18} & \multirow{2}[2]{*}{0.8} & CAB & 2.00 & 2.00 & 2.00 & 2.18 & 2.65 & 2.18 & 2.00 & 3.00 & 2.00 & 3.00 & 6.00 & 3.00 & 4.00 & 10.00 & 4.00 \\ & & TR & 1.00 & 1.00 & 1.00 & 2.00 & 2.00 & 1.00 & 3.00 & 3.41 & 0.41 & 4.00 & 6.00 & 0.00 & 5.00 & 10.00 & 0.00 \\ \hline \multirow{9}[6]{*}{20} & \multirow{3}[2]{*}{0.2} & AP & 1.00 & 1.00 & 1.00 & 2.00 & 2.26 & 1.50 & 2.00 & 3.00 & 2.00 & 3.00 & 6.00 & 3.00 & 4.00 & 10.00 & 4.00 \\ & & CAB & 5.00 & 13.00 & 5.00 & 4.29 & 10.88 & 4.24 & 4.76 & 12.12 & 4.59 & 4.76 & 12.29 & 4.74 & 4.74 & 12.94 & 4.74 \\ & & TR & 1.00 & 1.00 & 1.00 & 2.00 & 2.00 & 1.00 & 3.00 & 3.00 & 0.00 & 4.00 & 6.00 & 0.00 & 5.00 & 10.00 & 0.00 \\ \cline{2-18} & \multirow{3}[2]{*}{0.5} & AP & 1.00 & 1.00 & 1.00 & 2.00 & 2.21 & 2.00 & 2.00 & 3.00 & 2.00 & 3.00 & 6.00 & 3.00 & 4.00 & 10.00 & 4.00 \\ & & CAB & 4.00 & 9.00 & 4.00 & 3.76 & 8.94 & 3.76 & 3.76 & 8.47 & 3.76 & 3.76 & 8.53 & 3.76 & 4.00 & 10.00 & 4.00 \\ & & TR & 1.00 & 1.00 & 1.00 & 2.00 & 2.00 & 1.00 & 3.00 & 3.41 & 0.41 & 4.00 & 6.06 & 0.06 & 5.00 & 10.00 & 0.00 \\ \cline{2-18} & \multirow{3}[2]{*}{0.8} & AP & 1.00 & 1.00 & 1.00 & 2.00 & 2.12 & 2.00 & 2.00 & 3.00 & 2.00 & 3.00 & 6.00 & 3.00 & 4.00 & 10.00 & 4.00 \\ & & CAB & 3.00 & 5.00 & 3.00 & 3.18 & 5.41 & 3.18 & 2.97 & 4.94 & 2.97 & 3.00 & 6.00 & 3.00 & 4.00 & 10.00 & 4.00 \\ & & TR & 1.00 & 1.00 & 1.00 & 2.00 & 2.00 & 1.00 & 3.00 & 3.76 & 0.76 & 4.00 & 6.06 & 0.06 & 5.00 & 10.06 & 0.06 \\ \hline \multirow{9}[5]{*}{25} & \multirow{3}[2]{*}{0.2} & AP & 1.00 & 1.00 & 1.00 & 2.00 & 2.18 & 1.82 & 2.00 & 3.00 & 2.00 & 
3.00 & 6.00 & 3.00 & 4.00 & 10.00 & 4.00 \\ & & CAB & 4.00 & 10.00 & 4.00 & 3.75 & 8.56 & 3.75 & 4.03 & 9.88 & 4.03 & 4.03 & 9.88 & 4.03 & 4.03 & 10.12 & 4.03 \\ & & TR & 1.00 & 1.00 & 1.00 & 2.06 & 2.06 & 0.94 & 3.00 & 3.76 & 0.76 & 4.00 & 6.00 & 0.00 & 5.00 & 10.00 & 0.00 \\ \cline{2-18} & \multirow{3}[2]{*}{0.5} & AP & 1.00 & 1.00 & 1.00 & 2.00 & 2.18 & 1.94 & 2.00 & 3.00 & 2.00 & 3.00 & 6.00 & 3.00 & 4.00 & 10.00 & 4.00 \\ & & CAB & 3.00 & 6.00 & 3.00 & 3.62 & 7.62 & 3.62 & 3.00 & 5.94 & 3.00 & 3.00 & 6.00 & 3.00 & 4.00 & 10.00 & 4.00 \\ & & TR & 1.00 & 1.00 & 1.00 & 2.00 & 2.00 & 1.00 & 3.00 & 4.00 & 1.00 & 4.00 & 6.65 & 0.65 & 5.00 & 11.00 & 1.00 \\ \cline{2-18} & \multirow{3}[1]{*}{0.8} & AP & 1.00 & 1.00 & 1.00 & 2.00 & 2.15 & 1.97 & 2.00 & 3.00 & 2.00 & 3.00 & 6.00 & 3.00 & 4.00 & 10.00 & 4.00 \\ & & CAB & 3.00 & 5.00 & 3.00 & 3.46 & 6.15 & 3.46 & 2.94 & 4.91 & 2.94 & 3.00 & 6.00 & 3.00 & 4.00 & 10.00 & 4.00 \\ & & TR & 1.00 & 1.00 & 1.00 & 2.00 & 2.00 & 1.00 & 3.00 & 4.00 & 1.00 & 4.00 & 7.00 & 1.00 & 5.00 & 11.00 & 1.00 \\ \end{tabular}}% \vspace*{0.25cm} \caption{Average number of open hubs, links and loops} \label{tab:nhub}% \end{table}% Information about the structure of the optimal backbone network can be found in Table \ref{tab:nhub}, {and Figures \ref{fig:densCAB}, \ref{fig:densAP} and \ref{fig:densTR}}. 
\begin{figure}[htbp] \begin{center} \includegraphics[scale=0.55]{Indice1y2CABnuevo.png} \caption{Backbone network density for CAB-based instances. \label{fig:densCAB}} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.55]{Indice1y2APnuevo.png} \caption{Backbone network density for AP-based instances. \label{fig:densAP}} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.55]{Indice1y2_TR.png} \caption{Backbone network density for TR-based instances. \label{fig:densTR}} \end{center} \end{figure} In Table \ref{tab:nhub}, ``$\#$H'', ``$\#$Lk'', and ``$\#$Lp'' stand for the average number of hubs activated in the optimal backbone network, the average number of activated inter-hub edges (including loops), and the average number of activated loops, respectively. Two density indices have been studied. While the first index ($I_1$) measures the density of the backbone network including loops, the second one ($I_2$) measures the density of the backbone network when loops are not considered. That is, $I_1$ is the ratio between the number of activated inter-hub edges and the total number of links in a complete graph with $\#H$ nodes when loops are included: $$I_1=\frac{2 \# Lk}{\#H (\#H+1)},$$ and $I_2$ is the ratio between the number of activated non-loop inter-hub edges and the number of links in a complete graph with $\#H$ nodes when loops are excluded: $$I_2=\frac{2 (\# Lk-\#Lp)}{\#H (\#H-1)}.$$ Note that both indices take values in $[0, 1]$, i.e., $0\leq I_1, I_2\leq 1$. Figures \ref{fig:densCAB}, \ref{fig:densAP}, and \ref{fig:densTR} show, for each formulation and each value of the parameter $\alpha$, the percentage of instances that reach the corresponding index value for the CAB, AP, and TR instances, respectively. We can observe in Table \ref{tab:nhub} that the number of open hubs depends mainly on the model and also on the dataset.
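As a small worked example of the two indices, the following sketch computes $I_1$ and $I_2$ from an activated hub set and inter-hub edge list; the four-hub backbone with two loops is illustrative only (it mirrors the small network used later when discussing the failure scenarios):

```python
def density_indices(hubs, inter_hub_edges):
    """Backbone density indices: I1 counts all activated inter-hub edges
    (loops included) against the complete graph with loops, while I2
    discards loops.  Returns the pair (I1, I2)."""
    H = len(hubs)
    Lk = len(inter_hub_edges)                          # edges, loops included
    Lp = sum(1 for u, v in inter_hub_edges if u == v)  # loops only
    I1 = 2 * Lk / (H * (H + 1))
    I2 = 2 * (Lk - Lp) / (H * (H - 1)) if H > 1 else 0.0
    return I1, I2

# Four hubs, six inter-hub edges, two of which are loops:
edges = [("k", "l"), ("k", "q"), ("l", "m"), ("q", "m"), ("l", "l"), ("m", "m")]
I1, I2 = density_indices({"k", "l", "m", "q"}, edges)
print(round(I1, 2), round(I2, 2))  # 0.6 0.67
```

Here $I_1 = 12/20 = 0.6$ and $I_2 = 8/12 \approx 0.67$; a backbone whose activated edges are mostly loops would instead show $I_1$ close to 1 with $I_2$ close to 0, which is exactly the pattern discussed below for M0 and M1.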
For all the AP instances, the number of open hubs is always one for M0, two for M1 and M2\_2, three for M2\_3, and four for M2\_4. For the CAB instances, this number ranges on average in $[2, 5]$ for M0, in $[2.18, 4.29]$ for M1, in $[2, 4.76]$ for M2\_2, in $[3, 4.76]$ for M2\_3, and in $[4, 4.74]$ for M2\_4. For the TR instances, the number of open hubs is always one for M0, two for M1, three for M2\_2, four for M2\_3, and five for M2\_4. On the other hand, we can observe that the number of activated inter-hub edges is smaller for M0 than for M1 and that, for M1, this number is similar to that for M2\_2 but smaller than that for M2\_$\lambda$ with $\lambda>2$, since, as expected, this number increases with the value of $\lambda$. Additionally, we can observe that with M0 most of the activated inter-hub edges are loops, mainly for the AP and TR instances. This fact can also be observed in Figure \ref{fig:densAP} for the AP instances, where for M0 and M1 the density index $I_1$ is close to 1 whereas $I_2$ is close to 0, indicating that most of the activated inter-hub edges are loops. Figure \ref{fig:densTR} also shows that, for the TR instances and M0, the optimal backbone network has density indices $I_1=1$ and $I_2=0$. Finally, we can observe in Figures \ref{fig:densCAB}, \ref{fig:densAP}, and \ref{fig:densTR} that the density of the backbone network for M2\_$\lambda$ increases, as expected, with the value of the parameter $\lambda$. \section{The price of robustness \label{sec:comput_simul}} To assess the robustness and reliability of the hub network models proposed in this paper, we evaluate the so-called price of robustness (see \cite{bertsimas2004price}), defined as the extra cost incurred to design a robust network.
In our case robustness translates into protecting the backbone network under inter-hub edge failures, which, essentially, is attained by incorporating additional inter-hub edges into the backbone network. We thus start our analysis by comparing the overall set-up cost of the activated inter-hub edges for each of the proposed models M1, M2\_2, M2\_3, and M2\_4 with that of the \emph{unprotected} network obtained with M0. This information is summarized in Table \ref{t:PoR}, which, for each of the models M1, M2\_2, M2\_3, and M2\_4, gives the average percent deviation of the inter-hub set-up costs with respect to those of M0. For the sake of simplicity, in this table we only show the results for the TR dataset, although the behavior on the CAB and AP datasets is similar. \begin{table}[htbp] \centering \begin{tabular}{c|c|rrrr} $n$& $\alpha$ & M1 & M2\_2 & M2\_3 & M2\_4 \\\hline \multirow{3}[0]{*}{10} & 0.2 & 68.57\% & 136.86\% & 235.11\% & 361.15\% \\ & 0.5 & 68.57\% & 134.70\% & 235.11\% & 360.23\% \\ & 0.8 & 68.57\% & 148.34\% & 240.17\% & 358.39\% \\\hline \multirow{3}[0]{*}{15} & 0.2 & 68.57\% & 147.33\% & 233.01\% & 358.39\% \\ & 0.5 & 68.57\% & 134.45\% & 233.01\% & 358.39\% \\ & 0.8 & 68.57\% & 143.17\% & 234.23\% & 358.39\% \\\hline \multirow{3}[0]{*}{20} & 0.2 & 61.52\% & 98.44\% & 191.38\% & 302.97\% \\ & 0.5 & 61.93\% & 113.60\% & 184.87\% & 279.68\% \\ & 0.8 & 61.52\% & 126.21\% & 184.87\% & 280.18\% \\\hline \multirow{3}[0]{*}{25} & 0.2 & 65.80\% & 139.29\% & 231.72\% & 347.47\% \\ & 0.5 & 61.52\% & 134.65\% & 223.84\% & 314.85\% \\ & 0.8 & 61.52\% & 134.52\% & 219.54\% & 314.85\% \\\hline \end{tabular}% \caption{Average percent deviation of the inter-hub set-up costs relative to those of M0 (TR instances). \label{t:PoR}} \end{table}% As expected, constructing networks that are robust under inter-hub edge failures has a significant impact on the design cost of the network.
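The exact formula behind the entries of Table \ref{t:PoR} is not spelled out above; a natural reading (our assumption) is the relative increase of each model's inter-hub set-up cost over that of M0:

```python
def percent_deviation(setup_cost_model, setup_cost_m0):
    """Percent deviation of a model's inter-hub set-up cost relative to
    the unprotected network M0 (assumed formula: 100 * (c - c0) / c0)."""
    return 100.0 * (setup_cost_model - setup_cost_m0) / setup_cost_m0

# Hypothetical per-instance costs, for illustration only:
print(round(percent_deviation(168.57, 100.0), 2))  # 68.57
```

Under this reading, a value of $68.57\%$ in the M1 column means the protected backbone's inter-hub set-up cost is roughly $1.69$ times that of the unprotected one.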
For the $\lambda$-connected models, the reported percent deviations increase with the value of $\lambda$, which is easily explained, since larger backbone networks are required as $\lambda$ increases. Nevertheless, as shown by the results of the experiment that we report next, in case of failure this increase in the design costs strengthens the possibility of re-routing all the commodities in the \emph{a posteriori} solutions (after the occurrence of a failure in the inter-hub edges). \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.8]{simulations} \caption{Illustration of the four failure scenarios that we consider.\label{t:simulation}} \end{center} \end{figure} For this experiment, we have used all the $n=20$ instances of the TR dataset, for which optimality is guaranteed for all the models. For each of the 255 instances generated for the TR dataset with $n=20$, we have simulated the following scenarios for potential failures of the inter-hub edges of the backbone networks produced by the different models: \begin{itemize} \item {\bf Failure scenario 1 (FS1):} Only activated inter-hub edges may fail. Each activated hub edge $\{k,l\}$ in a backbone network fails (and is removed from the backbone network) according to a Bernoulli distribution with probability $p_{kl}$. \item {\bf Failure scenario 2 (FS2):} In this scenario failures are associated with hub nodes. The failure of a hub node implies the failure of all the inter-hub edges incident to that hub. Thus, each activated hub fails with probability $p_{kk}$, and then all its incident inter-hub edges are removed from the backbone network. \item {\bf Failure scenario 3 (FS3):} Failures are simulated for the inter-hub edges of the backbone network as in FS1. When an inter-hub edge fails, the failure probability of the loops at the endpoints of the edge is increased by $50\%$. Then, failures in the loop edges are simulated.
\item {\bf Failure scenario 4 (FS4):} First, failures are simulated for the inter-hub edges of the backbone network as in FS1. The difference is that we now assume that the failure of a considerable number of inter-hub edges incident to an activated hub node provokes the failure of the hub node as well, and thus the failure of all its incident inter-hub edges. That is, for each activated hub node, we assume that if at least a given percentage $\gamma$ of its incident inter-hub edges has failed, then the whole hub node fails, causing its remaining incident inter-hub edges to fail as well and to be removed from the network. In our study we fix the value of the parameter $\gamma$ to $75\%$; i.e., if $75\%$ or more of the inter-hub edges incident to a hub node fail, then the hub (and its remaining incident inter-hub edges) can no longer be used for routing the commodities. \end{itemize} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{FS1}~\includegraphics[width=0.5\textwidth]{FS2}\\ \includegraphics[width=0.5\textwidth]{FS3}~\includegraphics[width=0.5\textwidth]{FS4} \caption{Average percentage of \emph{after-failure} networks for which commodities can no longer be routed (from top left to bottom right: FS1, FS2, FS3, FS4). Light blue bars represent model $M0$, dark blue model $M1$, orange model $M2\_2$, gray model $M2\_3$, and yellow model $M2\_4$.\label{fig:FS}} \end{figure} Figure \ref{t:simulation} illustrates the four failure scenarios on a simple backbone network $(H,E_H)$ with four hub nodes, $H=\{k,l,m,q\}$, and six inter-hub edges, of which two are loops: $E_H=\{\{k,l\}, \{k,q\}, \{l,m\}, \{q,m\}, \{l,l\}, \{m,m\}\}$. In FS1, hub edges $\{k,l\}$ and $\{m,m\}$ are chosen to fail, both depicted with dashed lines in the left picture. The network resulting after removing these inter-hub edges is shown in the right picture.
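A single draw of an \emph{after-failure} network under these scenarios can be sketched as follows. For brevity the sketch covers only FS1 and FS4 (FS2 and FS3 follow the same pattern), and the edge labels, probabilities, and the convention of counting each incident edge once (loops included) are ours rather than the paper's implementation:

```python
import random

def simulate_failures(edges, p_fail, scenario="FS1", gamma=0.75, rng=None):
    """Draw one after-failure network from a backbone edge list.
    FS1: each inter-hub edge e fails independently with probability p_fail[e].
    FS4: after FS1, any hub for which at least a fraction gamma of its
    incident edges failed also fails, dropping its surviving incident edges.
    Returns the list of surviving edges."""
    rng = rng or random.Random()
    failed = {e for e in edges if rng.random() < p_fail[e]}
    if scenario == "FS4":
        hubs = {v for e in edges for v in e}
        for h in hubs:
            incident = [e for e in edges if h in e]
            if incident and sum(e in failed for e in incident) >= gamma * len(incident):
                failed.update(incident)  # the whole hub fails
    return [e for e in edges if e not in failed]

# Illustrative draw on the four-hub example backbone:
edges = [("k", "l"), ("k", "q"), ("l", "m"), ("q", "m"), ("l", "l"), ("m", "m")]
p = {e: 0.3 for e in edges}
survivors = simulate_failures(edges, p, "FS1", rng=random.Random(1))
print(set(survivors) <= set(edges))  # True
```

In the actual experiment, each such draw would be followed by an attempt to re-route all commodities on the surviving edges, repeated $10000$ times per instance.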
In FS2, hub node $m$ (in a gray circle) is chosen to fail, and then all the hub edges incident to it (depicted with dashed lines), namely $\{q,m\}$, $\{m,m\}$, and $\{l,m\}$, are removed from the network. In FS3, the inter-hub edge $\{q,m\}$ is chosen to fail, which increases by $50\%$ the failure probability of the loop $\{m,m\}$, which is then also randomly chosen to fail. Thus, $\{q,m\}$ and $\{m,m\}$ (depicted with dashed lines) are removed from the network. Finally, in FS4, hub edges $\{k,l\}$, $\{l,m\}$, and $\{m,m\}$ are chosen to fail. Then, since the percentage of inter-hub edges incident to $m$ that fail reaches the threshold $\gamma=75\%$, all edges incident to $m$ are removed. For the remaining hub nodes this threshold is not reached, so no further inter-hub edges are removed. We have carried out simulations for each of the above failure scenarios, all of which follow the same general structure for a given backbone network: ($i$) we randomly generate the links that fail according to the corresponding failure scenario and obtain the \emph{after-failure} network by removing from the backbone network the edges that fail; ($ii$) we try to re-route all the commodities through the \emph{after-failure} network; ($iii$) since it may happen that some of the commodities can no longer be routed in the \emph{after-failure} network, we record this circumstance in our study. For each failure scenario, the simulation is repeated $10000$ times over each instance. The average results obtained over all the instances are reported in Figure \ref{fig:FS}, where light blue bars represent the results for model $M0$, dark blue for $M1$, orange for $M2\_2$, gray for $M2\_3$, and yellow for $M2\_4$. As one can observe, the networks obtained with the proposed models (M1 and M2\_$\lambda$) are clearly more robust under inter-hub failures than M0.
Specifically, on average, our models allow re-routing all the involved commodities in more than $90\%$ of the failure occurrences, while M0 was only able to re-route $78\%$ of them. The robustness of model M2\_4 is even more remarkable: re-routing is possible in $99.5\%$ of its simulations. On the other hand, analyzing the results of failure scenario FS2, one can observe that the models M2\_$\lambda$ are robust not only under inter-hub edge failures but also under failures of the hub nodes. In contrast, $M0$ and $M1$ exhibit a similar behavior under this scenario, with close to $20\%$ of the simulations, on average, in which the commodities could not be routed. This highlights the performance of $M2\_2$, $M2\_3$, and $M2\_4$, for which the percentage of simulations where some commodity could not be re-routed decreases to $12\%$, $5\%$, and $2\%$, respectively. In each simulation in which all commodities can be routed in the \emph{after-failure} network, we compute the overall \emph{a posteriori} routing cost $R^F$. We denote by $\tau_F \in [0,1]$ the proportion of simulations for which this cost can be computed. In case a commodity $r \in R$ cannot be routed through the \emph{after-failure} backbone network, we assume that its routing cost is proportional to the \emph{cost of the direct connection} $\bar c_{o(r) d(r)}$, i.e., the overall routing cost is $(1+q) \displaystyle\sum_{r\in R} w_r \bar c_{o(r) d(r)}$. The parameter $q>0$ represents the \emph{extra percent cost} (over the cost of the direct connection) incurred when re-routing a commodity in case the backbone network can no longer be used to route it. Such a cost may represent the outsourcing cost of a direct delivery between the origin and destination of the commodity, or the opportunity cost lost due to an unsatisfied user for which the service could not be provided.
With this information, we compute the average set-up and routing cost for the network as: $$ \Phi(q) = \displaystyle\sum_{h \in H} f_h + \displaystyle\sum_{e\in E_H} h_e + \tau_F R^F+(1-\tau_F) (1+q) \displaystyle\sum_{r\in R} w_r c_{o(r) d(r)}. $$ \begin{figure} \centering\includegraphics[scale=0.7]{figq_all} \centering\caption{Routing costs + set-up costs on the after-failure network as a function of the parameter $q$.\label{fig:qall}} \end{figure} We summarize in Figure \ref{fig:qall} the average behavior of this cost over all the simulations and all the failure scenarios. Each line represents the above cost, as a function of the parameter $q$, for the \emph{after-failure} networks produced by the simulations constructed with the five different models (M0, M1, M2\_2, M2\_3, and M2\_4). One can observe that for small values of $q$ (when the re-routing costs are a small factor of the direct costs from origin to destination) $M0$ is more convenient. This is clear, since when the re-routing costs are not very high, one may undertake these costs even if the failures occur very often. As $q$ increases, the most convenient models are M1, M2\_2, and M2\_3 (in this order). Model M2\_4 is clearly the most robust one, since the parameter $q$ barely affects the cost (in this case the percentage of simulations for which the commodities cannot be routed is tiny), but its set-up costs are very high. \begin{figure} \centering\includegraphics[width=0.5\textwidth]{figq_1}~\includegraphics[width=0.5\textwidth]{figq_2}\\ \centering\includegraphics[width=0.5\textwidth]{figq_3}~\includegraphics[width=0.5\textwidth]{figq_4}\\ \centering\caption{Routing costs on the after-failure backbone network as a function of the parameter $q$ for each failure scenario.\label{fig:q}} \end{figure} In Figure \ref{fig:q} we show the results disaggregated by failure scenario (FS1, FS2, FS3, and FS4).
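The cost $\Phi(q)$ is straightforward to evaluate once the simulation outputs are aggregated. In this minimal sketch the argument names are assumptions, and the direct-connection term $\sum_{r\in R} w_r c_{o(r) d(r)}$ is passed in pre-computed:

```python
def phi(q, hub_setup_costs, hub_edge_setup_costs, tau_F, R_F, direct_cost):
    """Average set-up plus routing cost Phi(q): hub and hub-edge set-up
    costs, plus the a-posteriori routing cost R^F weighted by the feasible
    proportion tau_F, plus the penalized direct-connection cost for the
    infeasible proportion (1 - tau_F)."""
    return (sum(hub_setup_costs) + sum(hub_edge_setup_costs)
            + tau_F * R_F
            + (1 - tau_F) * (1 + q) * direct_cost)
```

For example, with total set-up costs $15$, $\tau_F=0.8$, $R^F=100$, an aggregated direct cost of $200$, and $q=0.5$, the infeasible $20\%$ of simulations contribute $0.2\cdot 1.5\cdot 200=60$ to the total.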
There, we observe that the behavior of $\Phi(q)$ differs for the failure scenario FS2, where we simulate failures in hub nodes. In FS2, models M0, M2\_3, and M2\_4 outperform M1 and M2\_2 on average, while in the remaining failure scenarios models M0, M1, and M2\_2 are more convenient for reasonable values of $q$. We conclude this section by highlighting that the study we have carried out allows the decision maker to determine the best model to construct the backbone network based on the expected extra cost to be paid for not providing the service to commodities due to failures in the network. \section{Conclusions \label{sec:conclu}} In this paper we propose different models to construct hub networks that are robust under inter-hub link failures. The models that we develop ensure that, for every commodity, an additional routing path exists besides its original routing path. In the first model, an explicit backup path using at most one inter-hub edge is constructed, to be used in case of failure of the original path through which each commodity is routed. The second model assures the existence of backup paths (using an arbitrary number of inter-hub links) in case of failure of the original inter-hub edges, by imposing $\lambda$-connectivity of the backbone network for a given value of $\lambda\ge 2$. Both models present advantages from the point of view of the robustness of the hub network. On the one hand, the first model guarantees that the backup paths for the commodities are of the same nature as those in the original non-failing network, although the computational difficulty of obtaining solutions is high. On the other hand, the second model, while also ensuring the construction of backup paths, exhibits a lower computational load than the first model. Both models have been computationally tested on an extensive battery of experiments with three hub location benchmarks, namely AP, CAB and TR, from which several conclusions are derived.
Furthermore, we have analyzed the robustness of the models by simulating different types of failures on the TR network, confirming the applicability of our approach. Future research on the topic includes the study of valid inequalities for both models in order to alleviate the computational complexity of their exact resolution. For larger instances, it would be helpful to design heuristic approaches that provide good-quality solutions in shorter computing times. \section*{Acknowledgements} The authors of this research acknowledge financial support by the Spanish Ministerio de Ciencia y Tecnolog\'ia, Agencia Estatal de Investigaci\'on and Fondos Europeos de Desarrollo Regional (FEDER) via projects PID2020-114594GB-C21 and MTM2019-105824GB-I00. The authors also acknowledge partial support from projects FEDER-US-1256951, Junta de Andaluc\'ia P18-FR-422, P18-FR-2369, B-FQM-322-UGR20 (COXMOS), and NetmeetData: Ayudas Fundaci\'on BBVA a equipos de investigaci\'on cient\'ifica 2019. \bibliographystyle{apalike}
\section{Introduction and motivation} Cumulant mapping is an extension of covariance mapping \cite{frasinski1989covariance} to more than two correlated variables. The covariance mapping technique in turn is an extension of the coincidence method \cite{eland1986photoelectron, frasinski1986dissociative} to high counting rates, where several fragmentation events may occur in an elementary sample. From its invention covariance mapping has been used mostly in studies of ionization and fragmentation of small molecules, with some notable exceptions, such as x-ray scattering or brain studies \cite[see reviews][]{frasinski2016covariance, vallance2021covariance}. Since covariance mapping requires extensive data processing, two-dimensional maps have been most practical. With continued progress in computational power an extension of covariance mapping to higher dimensions is a timely proposition. Two recent developments have motivated this work. One is a successful application of covariance mapping to two-dimensional mass spectrometry of large biomolecules \cite{driver2020two, driver2021two}, with good prospects for extending this technique to higher dimensions. The second development is the emergence of x-ray free-electron lasers (XFELs), which are powerful research tools for studying atomic and molecular dynamics on the femtosecond and attosecond timescales \cite{huang2021features}. The unprecedented intensity of x-rays in the XFEL pulses induces a large number of fragmentation events, which leaves covariance mapping as the only practical method for correlating the fragments. Moreover, recent XFEL upgrades to high repetition rates, including fast data acquisition, \cite{huang2021features} make the extension of covariance mapping to higher dimensions feasible. \section{Fragmentation scenario} The scheme for cumulant mapping is outlined in Fig.~\ref{fig_principle}. A random sample of unknown objects is drawn from a Poisson distribution. 
The objects are fragmented, and the fragments are detected. To understand the basic principle, it is helpful to consider initially an ideal scenario where the objects are identical and always break up in the same way into distinguishable fragments. The fragments of each kind are collected in separate bins, and their number is recorded as \(Z, Y, X,\) etc. The sampling is repeated many times and the fragment numbers are used to reconstruct the parent objects. \begin{figure}[t] \includegraphics[width=7cm]{Fig1.png} \caption{\label{fig_principle} \textbf{The principle of reconstruction from fragments.} A Poissonian sample of parent objects is fragmented and the fragments are detected with efficiency \(\eta\), including some uncorrelated background in a mean proportion \(\zeta\). The detected fragments (filled circles) are counted and stored in discrete random variables \(Z,Y,X,...\), which are processed over many samples using the cumulant formula. The cumulant value, \(\varkappa_n(Z,Y,X,...)\), measures the number of parent objects at a single point in an \textit{n}-dimensional spectral map.} \end{figure} \subsection{Realistic conditions} The reconstruction would be trivial under the ideal conditions outlined above. In practice, however, the collected fragment numbers require extensive statistical processing for several reasons. Firstly, each of the \(Z, Y, X, ...\) random variables is usually measured at just one point on a fluctuating spectrum of mass, energy, or other quantity characterising the fragments, and normally it is not obvious in advance into which parts of the spectra the fragments may fall. Therefore it is necessary to calculate the \(\varkappa_n\) cumulant for each possible \textit{n}-tuple of spectral points and display the reconstructed parent objects in an \textit{n}-dimensional map, as shown at the bottom of Fig.~\ref{fig_principle}.
Moreover, the spectra of the \((Z, Y, X, ...)\) variables could be more than 1-dimensional, for example, they could be sourced from a position-sensitive timing detector, which effectively resolves fragments in 3D momentum space \cite{allum2021multi}, and in principle would require mapping in 3\textit{n}-dimensional space, unless the fragmentation kinematics can be used to reduce the dimensionality. Secondly, in most experiments the fragments are detected with quantum efficiency \(\eta < 100\%\). This means that some of the fragments are undetected, as indicated by empty circles in Fig.~\ref{fig_principle}. (The colours mark different detection patterns to enable the reader to trace them from the sample to the bins.) Therefore, the number of tuples \((Z_\text{c}, Y_\text{c}, X_\text{c}, ...)\) of only collectively-correlated detected fragments (black filled circles) is smaller than the number of all detected fragments \(Z, Y, X, ...\) (all filled circles). And thirdly, there may be a Poissonian background of fragments (magenta circles) completely unrelated to the sample of interest. This background is characterised by parameter \(\zeta\), which is the ratio of the mean number of the background fragments to the mean number of the sample fragments. Hence, \(\zeta = 0\) means no background, \(\zeta = 1\) means as much background as the sample signal, etc. Taking into account reduced detection efficiencies and the presence of background fragments relaxes the idealized requirements of identical parent objects and a single fragmentation pattern, which makes cumulant mapping applicable to studies of mixtures and multiplicity of fragmentation channels. The aim of this work is to estimate the cumulant value and noise when the fragments are detected under the non-ideal conditions of \(\eta < 100\%\) and \(\zeta > 0\).
\subsection{Statistical concepts and notation} Statistics, like all natural sciences, encompasses two realms: the factual world of tangible measurements and the Platonic world of mathematical abstractions. The objects spanning both worlds are random variables, such as \(Z\), denoted in this work with Roman capitals. On one hand, a random variable can be measured yielding sample values, which I denote by giving it an index, e.g. \(Z_i\). On the other hand, the general properties of the random variables, such as probability distributions, moments, etc., are mathematical abstractions that cannot be known with absolute certainty. Nevertheless, we can infer these abstract properties by repetitive sampling of the random variable. I assume that in an experiment the conditions stay constant and that we draw \(N\) samples of \(Z\), which are indexed by \(i = 1, 2, 3, ... N\). The parameters describing the properties of a random variable are denoted with Greek letters, such as \(\varkappa\). To infer a parameter value we can use the samples to construct an estimator denoted by a hat, e.g. \(\widehat{\varkappa}\). For example, to estimate the first moment of variable \(Z\), we use the sample average indicated by an overline: \begin{equation} \widehat{\varkappa_1}(Z) = \overline{Z} = \frac{1}{N} \sum_{i=1}^{N}{Z_i}. \label{kappa-1-estim} \end{equation} Since repeating the experiment gives us a new set of samples, the sample averages and the estimators are also random variables. However, calculating their expected values (or variance, or higher-order moments) fixes them to the theoretical limit when \(N \rightarrow \infty\). Angular brackets are used to denote the expected values: \begin{multline} \varkappa_1(Z) = \langle \widehat{\varkappa_1} \rangle = \big\langle \overline{Z} \big\rangle = \Big\langle \frac{1}{N} \sum_{i=1}^{N}{Z_i} \Big\rangle \\ = \frac{1}{N} \sum_{i=1}^{N}{\langle Z_i \rangle} = \frac{1}{N} N{\langle Z \rangle} = {\langle Z \rangle}.
\label{kappa-1} \end{multline} The bulk of this work is dedicated to such calculations. \section{Fragment correlations} In general, \(\widehat{\varkappa_n}\) stands for an estimator of collective correlations among \(n\) fragments. When \(n = 1\) the problem is degenerate and the best we can do is to estimate the mean number of only one kind of a fragment, \(Z\), using the sample average according to Eq.~(\ref{kappa-1-estim}). \subsection{Covariance} When \(n = 2\) the appropriate estimator is the sample covariance of the two fragments \(Z\) and \(Y\): \begin{equation*} \widehat{\varkappa_2}(Z,Y) = \overline{(Z - \overline{Z})(Y - \overline{Y})}. \end{equation*} It is worth noting that this estimator is biased. In principle, the bias can be removed by using Bessel's correction factor \(N/(N-1)\) each time a degree of freedom of the sample has been used to calculate an inner average. In practice however, the bias is insignificant for \(N \gg 1\) and can be ignored where appropriate. Calculating the expected value of this estimator leads to the well known formula for covariance: \begin{multline*} \varkappa_2(Z,Y) = \langle \widehat{\varkappa_2} \rangle = \Big\langle \overline{(Z - \overline{Z})(Y - \overline{Y})} \Big\rangle \\ = \big\langle (Z - \overline{Z})(Y - \overline{Y}) \big\rangle = \langle (Z - \langle Z \rangle)(Y - \langle Y \rangle) \rangle \\ = \langle ZY \rangle - \langle Z \rangle \langle Y \rangle = \text{cov}(Z, Y). \end{multline*} Introducing mean-centered variables \begin{equation} z_0 = Z -\langle Z \rangle, \;\; y_0 = Y -\langle Y \rangle , \label{z0-y0} \end{equation} gives us a compact version of the formula: \begin{equation} \varkappa_2(Z,Y) = \langle z_0 y_0 \rangle . \label{kappa-2} \end{equation} (Symbols \(z\), \(y\), \(x\), etc. are reserved for later use.)
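The estimator and its Bessel correction can be written out directly. This is a minimal sketch with an illustrative function name, not code from the paper:

```python
def cov_hat(Z, Y):
    """Sample covariance of two equal-length series: average the products
    of mean-centred samples, using Bessel's correction N/(N-1) to undo the
    bias introduced by estimating the two inner means."""
    n = len(Z)
    z_bar = sum(Z) / n
    y_bar = sum(Y) / n
    return sum((z - z_bar) * (y - y_bar) for z, y in zip(Z, Y)) / (n - 1)
```

For \(N \gg 1\) the corrected and uncorrected versions differ by a factor \(N/(N-1)\), which is why the bias can usually be ignored.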
\subsection{The problem of more than two fragments} When there are three fragments, an extension of Eq.~(\ref{kappa-2}) has been proposed \cite{frasinski1991multiphoton}: \begin{equation} \varkappa_3(Z,Y,X) = \langle z_0 y_0 x_0 \rangle , \label{kappa-3} \end{equation} and the suitability of this ``3-fold covariance'' formula has been demonstrated experimentally \cite{frasinski1991multiphoton, bryan2006observation} and theoretically \cite{mikosch2013coincidence}. One may expect that this method of extending the covariance formula works for four fragments: \[ \varkappa_4^\text{trial}(Z,Y,X,W) = \langle z_0 y_0 x_0 w_0 \rangle. \] Unfortunately, this trial formula is unsuitable \cite{zhaunerchyk2014theory}. The reason is that if we have only pairwise correlations, e.g. \(Z\) with \(Y\) and \(X\) with \(W\), then \[ \varkappa_4^\text{trial} = \langle z_0 y_0\rangle \langle x_0 w_0 \rangle \neq 0,\] but we want \(\varkappa_4 = 0\) because there is no collective correlation among all four fragments. \subsection{The solution} To find the correct formula for \(n \geq 4\), I start by listing the desired properties of \(\varkappa_n = \varkappa_n(Z,Y,X, \ldots)\): \begin{itemize} \item \(\varkappa_n \neq 0\) only if all arguments are collectively correlated; \item \(\varkappa_n\) has units of the product of all arguments; \item \(\varkappa_n\) is linear in the arguments; \item \(\varkappa_n\) is invariant under interchange of any two arguments. \end{itemize} It turns out that these properties uniquely determine the formula.
For example, the reader is invited to check that the following formula has the desired properties: \begin{multline*} \varkappa_4(Z,Y,X,W) = \langle z_0 y_0 x_0 w_0 \rangle \\ - (\langle z_0 y_0 \rangle \langle x_0 w_0 \rangle + \langle z_0 x_0 \rangle \langle y_0 w_0 \rangle + \langle z_0 w_0 \rangle \langle y_0 x_0 \rangle), \end{multline*} and that other products of expected values, such as \(\langle z_0 \rangle \langle y_0 x_0 w_0 \rangle\) cannot contribute to the formula because \[\langle z_0 \rangle = \langle Z - \langle Z \rangle \rangle = \langle Z \rangle - \langle Z \rangle = 0.\] The formula for \(\varkappa_4\) can be simplified by writing \begin{equation} \varkappa_4(Z,Y,X,W) = \langle z_0 y_0 x_0 w_0 \rangle - \sum^3 \langle z_0 y_0 \rangle \langle x_0 w_0 \rangle, \label{kappa-4} \end{equation} where \(\sum^3\) denotes a sum over all ways of pairing the four variables. Similarly, \begin{multline} \varkappa_5(Z,Y,X,W,V) = \langle z_0 y_0 x_0 w_0 v_0 \rangle \\ - \sum^{10} \langle z_0 y_0 \rangle \langle x_0 w_0 v_0 \rangle, \label{kappa-5} \end{multline} and \begin{multline} \varkappa_6(Z,Y,X,W,V,U) = \langle z_0 y_0 x_0 w_0 v_0 u_0 \rangle \\ - \sum^{15} \langle z_0 y_0 \rangle \langle x_0 w_0 v_0 u_0 \rangle - \sum^{10} \langle z_0 y_0 x_0 \rangle \langle w_0 v_0 u_0 \rangle \\ + 2 \sum^{15} \langle z_0 y_0 \rangle \langle x_0 w_0 \rangle \langle v_0 u_0 \rangle. \label{kappa-6} \end{multline} Formulae for collective correlations among more variables can be constructed in a similar manner. \subsection{Cumulants} In statistics, the \(\varkappa_n\) parameter that measures the collective correlations of \textit{n} random variables is known as the \textit{n}-variate joint cumulant of the first order \cite[Ref.][chapters 3, 12 and 13]{kendall94vol1}. In this work I shall shorten this name and call it simply the \textit{n}\textsuperscript{th} cumulant.
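The rejection of pairwise-only correlations by Eq.~\ref{kappa-4} is easy to check numerically. In this sketch (illustrative names, uniform variates) two internally duplicated but mutually independent series are used, so that \(Z=Y\) and \(X=W\) are correlated in pairs but not collectively: the naive 4-fold moment stays near \(\mathrm{var}(U_1)\,\mathrm{var}(U_2) = 1/144\), while the cumulant is driven towards zero.

```python
import random

def centred(V):
    """Mean-centre a list of samples."""
    m = sum(V) / len(V)
    return [v - m for v in V]

def avg_prod(*cols):
    """Sample average of the element-wise product of the given columns."""
    n = len(cols[0])
    total = 0.0
    for vals in zip(*cols):
        p = 1.0
        for v in vals:
            p *= v
        total += p
    return total / n

def kappa4_hat(Z, Y, X, W):
    """Fourth-cumulant estimator: the 4-fold moment of mean-centred samples
    minus the three pairings of 2-fold covariances (Eq. kappa-4)."""
    z, y, x, w = centred(Z), centred(Y), centred(X), centred(W)
    return (avg_prod(z, y, x, w)
            - avg_prod(z, y) * avg_prod(x, w)
            - avg_prod(z, x) * avg_prod(y, w)
            - avg_prod(z, w) * avg_prod(y, x))

rng = random.Random(0)
N = 200_000
U1 = [rng.random() for _ in range(N)]  # Z = Y: one correlated pair
U2 = [rng.random() for _ in range(N)]  # X = W: an independent second pair

naive = avg_prod(centred(U1), centred(U1), centred(U2), centred(U2))
k4 = kappa4_hat(U1, U1, U2, U2)
# naive stays near var(U1)*var(U2) = 1/144, whereas k4 is close to zero.
```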
Cumulants are useful in seemingly disjoint areas of science, such as light--matter interactions \cite{sanchez2020cumulant}, quantum theory of multi-electron correlations \cite{kutzelnigg1999cumulant}, bond breaking of diatomic molecules \cite{brea2013behavior}, neural network theory \cite{helias2020statistical}, financial data analysis \cite{domino2020multivariate}, and gravitational interaction of dark matter \cite{uhlemann2018finding}. In physics multivariate cumulants are also known as the Ursell functions \cite{ursell1927evaluation}. \subsection{Physical interpretation of \(\varkappa_n\)} So far, the cumulants have been introduced as parameters describing multivariate distributions. Now we want to apply them to the problem of recovering the parent objects from their fragments. Let us suppose that we repetitively gather a sample of objects in a random manner, so the number of objects \(S\) in the sample follows the Poisson distribution \begin{equation} S \sim \text{Pois}(\lambda) \equiv P(S=k) = \frac{\lambda^k}{k!}e^{-\lambda}, \label{pois} \end{equation} where \(P(S=k)\) is the probability of having exactly \(k\) objects in the sample and parameter \(\lambda\) is the expected number of the objects in a sample. Next, we fragment the objects and detect only the fragments. And from the detected fragments we want to infer the identity of the undetected parent objects. To understand why cumulants are useful in this task, we first consider ideal conditions: there is only one way of fragmenting the parent, we detect every fragment, and there is no background of fragments from other processes. Such a fragmentation process can be written as \[S \rightarrow (Z, Y, X, ...),\] where \((Z, Y, X, ...)\) is a tuple containing \(n\) fragments. In such a simple process the number of fragments of each kind matches the number of parent objects. Hence, the random variables are equal: \[Z = Y = X = ... = S\] and \[z_0 = y_0 = x_0 = ...
= s_0,\] which reduces the expected values in Eqs. \ref{kappa-1} and \ref{kappa-2}--\ref{kappa-6} to the central moments of the Poisson distribution: \[\langle z_0 y_0 x_0 ... \rangle = \langle s_0^n \rangle = \langle (S - \langle S \rangle)^n \rangle = \mu_n .\] Since these moments are known polynomials of \(\lambda\) \cite[Ref.][Section 5.9]{kendall94vol1}, the calculation of cumulants is straightforward: \begin{subequations} \begin{align} \varkappa_1(S) &= \langle S \rangle = \lambda, \\ \varkappa_2(S,S) &= \mu_2 = \lambda, \\ \varkappa_3(S,S,S) &= \mu_3 = \lambda, \\ \varkappa_4(S,S,S,S) &= \mu_4 - 3\mu_2^2 \nonumber \\ &= (\lambda + 3\lambda^2) - 3\lambda^2 = \lambda. \end{align} \label{moments} \end{subequations} Note how the sum in Eq.~\ref{kappa-4} cancels out the higher powers of \(\lambda\) leaving just the linear term. In fact this is the general property of the Poisson-distribution cumulants \cite[Ref.][Example 3.10]{kendall94vol1}: \[\varkappa_n(S, S, S, ...) = \lambda.\] We conclude that under the ideal fragmentation scenario cumulant mapping of fragments gives us a statistical estimate of the mean number of parent objects. \section{Uncorrelated background} Ideal conditions are rarely met in practice. We should estimate how a reduced detection efficiency and a background from uncorrelated fragments affect cumulant mapping. Both effects influence the statistics of the cumulant estimator in a similar manner; those fragments that are detected but have undetected siblings effectively contribute to the uncorrelated background, which is depicted in Fig.~\ref{fig_principle} using non-black filled circles. When \(\eta < 100\%\) or \(\zeta > 0\), then the random variables \(S, Z, Y, ...\) are only partially correlated. To find the formula for \(\varkappa_n\) we need to separate the correlated and uncorrelated parts of these variables. 
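Before treating the background, the ideal-scenario result \(\varkappa_n(S,\ldots,S) = \lambda\) can be checked numerically through the central-moment polynomials of Eqs.~\ref{moments}. The sketch below draws Poisson variates with Knuth's multiplication method; the helper names and \(\lambda = 1\) are illustrative choices:

```python
import math
import random

def poisson_draw(lam, rng):
    """Draw one Poisson(lam) variate via Knuth's multiplication method."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(1)
lam = 1.0
S = [poisson_draw(lam, rng) for _ in range(100_000)]

n = len(S)
mean = sum(S) / n
mu = {r: sum((s - mean) ** r for s in S) / n for r in (2, 3, 4)}

k1 = mean                      # kappa_1 = lambda
k2 = mu[2]                     # kappa_2 = mu_2 = lambda
k3 = mu[3]                     # kappa_3 = mu_3 = lambda
k4 = mu[4] - 3 * mu[2] ** 2    # kappa_4 = mu_4 - 3 mu_2^2 = lambda
```

All four estimates should scatter around \(\lambda\), with the higher cumulants noticeably noisier, foreshadowing the noise analysis below.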
\subsection{Correlated and uncorrelated parts} While the parent objects in the sample still follow the Poisson distribution given by Eq.~\ref{pois}, the reduced detection efficiency effectively combines the parent distribution with a binomial distribution of the partial detection giving another Poisson distribution: \begin{equation*} Z \sim \text{Pois}(\lambda) \ast \text{Binom}(\eta_Z) \rightarrow \text{Pois}(\eta_Z \lambda). \end{equation*} When a Poissonian background from other, uncorrelated fragments is added, the compound probability distribution continues to be Poissonian with a modified expected value: \begin{equation*} Z \sim \text{Pois}(\eta_Z \lambda) \diamond \text{Pois}(\eta_Z \zeta_Z \lambda) \rightarrow \text{Pois}(\eta_Z (1 + \zeta_Z) \lambda). \end{equation*} Similarly, \(Y \sim \text{Pois}(\eta_Y (1 + \zeta_Y) \lambda)\), etc. Since the binomial sampling of each of the \(Z, Y, X, ...\) fragments is independent, their joint detection efficiency is a product of the individual efficiencies. Therefore the probability distribution of the detected fragments \(Z\) correlated with all the other detected fragments is given by \begin{equation*} Z_\text{c} \sim \text{Pois}(\theta_n \lambda), \end{equation*} where \[\theta_n = \eta_Z \eta_Y \eta_X ... \; .\] By the same reasoning \(Y_\text{c} \sim \text{Pois}(\theta_n \lambda), X_\text{c} \sim \text{Pois}(\theta_n \lambda),\) etc. Moreover, the correlated parts are present in each kind of fragment to the same extent, therefore: \begin{equation} Z_\text{c} = Y_\text{c} = X_\text{c} = ... = S_\text{c} \sim \text{Pois}(\theta_n \lambda) \label{pois-corr}. \end{equation} Since the number of detected fragments is the sum of the correlated and uncorrelated parts: \begin{equation} Z = Z_\text{c} + Z_\text{u}, \;\; Y = Y_\text{c} + Y_\text{u}, \;\; X = X_\text{c} + X_\text{u}, \;\; ... 
\label{Z-Y-X} \end{equation} and a sum of Poisson distributions is a Poisson distribution, the distributions of uncorrelated parts can be found as follows: \begin{align*} Z_\text{u} &= Z - Z_\text{c} \sim \text{Pois}(\eta_Z (1 + \zeta_Z) \lambda - \theta_n \lambda) \\ &= \text{Pois}((\eta_Z (1 + \zeta_Z) - \theta_n) \lambda), \\ Y_\text{u} & \sim \text{Pois}((\eta_Y (1 + \zeta_Y) - \theta_n) \lambda), \\ X_\text{u} & \sim \text{Pois}((\eta_X (1 + \zeta_X) - \theta_n) \lambda), ... \end{align*} Further calculations are significantly simplified when mean-centered variables are introduced for the correlated and uncorrelated parts: \begin{align} s &= S_\text{c} - \langle S_\text{c} \rangle, \nonumber \\ z &= Z_\text{u} - \langle Z_\text{u} \rangle, \nonumber \\ y &= Y_\text{u} - \langle Y_\text{u} \rangle, \nonumber \\ x &= X_\text{u} - \langle X_\text{u} \rangle, \; ... \; . \label{s-z-y-x} \end{align} Using Eqs. \ref{z0-y0}, \ref{Z-Y-X}, and \ref{pois-corr} we obtain \begin{align} z_0 &= s + z, \nonumber \\ y_0 &= s + y, \nonumber \\ x_0 &= s + x, ... \; . \label{z0-y0-x0} \end{align} One useful implication of Eq.~\ref{s-z-y-x} is that the expected values of the mean-centered variables vanish: \begin{equation} \langle s \rangle = \langle z \rangle = \langle y \rangle = \langle x \rangle = ... = 0. \label{vanish} \end{equation} The second useful property is that they can be regarded as independent, which is exactly true if only one or two detectable fragments are produced. For three or more fragments it may happen that one fragment that is not detected relegates the other fragments to the background in spite of their correlation. It can be shown that these residual correlations do not affect the expected values of cumulants, nor the variance of the first and second cumulant. \subsection{Expected values of cumulants} The first cumulant is unusual because formally it cannot distinguish between the sample fragments and the uncorrelated background. 
To deal with this ambiguity I shall redefine the first cumulant given by Eq.~\ref{kappa-1}, so it measures only the fragments coming from the sample: \begin{equation} \varkappa_1(Z) = \langle Z_\text{c} \rangle = \langle S_\text{c} \rangle = \eta_Z \lambda = \theta_1 \lambda . \label{kappa-1-bgnd} \end{equation} This definition not only makes \(\varkappa_1\) consistent with the higher cumulants but also gives it the meaning of a signal that is separate from the background. As illustrated in Fig.~\ref{fig_kappa1}, when in spectral analysis the sample fragments form a peak riding on a broad background given by \(\langle Z_\text{u} \rangle = \eta_Z \zeta_Z\lambda\), then \(\langle Z_\text{c} \rangle = \eta_Z \lambda\) is just the peak height. \begin{figure}[t] \includegraphics[width=7cm]{Fig2.png} \caption{\label{fig_kappa1} \textbf{How to separate the first cumulant from the uncorrelated background.} Cumulant \(\varkappa_1\) measures only the correlated fragments \(Z_\text{c}\) that form a peak on a spectrum. The peak rides on a background of uncorrelated fragments \(Z_\text{u}\). This definition of \(\varkappa_1\) keeps the background parameter \(\zeta_Z =\langle Z_\text{u} \rangle / \langle Z_\text{c} \rangle\) consistent with the higher cumulants.} \end{figure} The higher cumulants can be calculated using the mean-centered variables, \(z_0 = s + z, \; y_0 = s + y, \:\) etc. For example, \begin{multline} \varkappa_2(Z,Y) \overset{\ref{kappa-2}}{=} \langle z_0 y_0 \rangle \ \overset{\ref{z0-y0-x0}}{=} \langle (s + z)(s + y) \rangle \\ = \langle s^2 \rangle + \langle s \rangle (\langle z \rangle + \langle y \rangle) + \langle z \rangle \langle y \rangle \\ \overset{\ref{vanish}}{=} \langle s^2 \rangle \overset{\ref{s-z-y-x},\ref{pois-corr}}{=} \theta_2 \lambda, \label{kappa-2-bgnd} \end{multline} where the numbers above equality signs refer to the equations used. We notice that the formulae for all higher cumulants can be derived in a similar manner. 
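The result \(\varkappa_2 = \theta_2 \lambda\) of Eq.~\ref{kappa-2-bgnd} can be verified by simulating the fragmentation scenario directly: each parent's two fragments are detected independently with efficiencies \(\eta_Z\) and \(\eta_Y\), and Poissonian backgrounds of mean \(\eta\zeta\lambda\) are added on top. The parameter values and function names in this sketch are illustrative assumptions:

```python
import math
import random

def poisson_draw(lam, rng):
    """Knuth's multiplication method for one Poisson(lam) variate."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_shots(lam, eta_z, eta_y, zeta_z, zeta_y, n_shots, seed=2):
    """Per shot: S ~ Pois(lam) parents; each parent's Z- and Y-fragment is
    detected independently (efficiencies eta_z, eta_y); uncorrelated
    Poissonian backgrounds of means eta*zeta*lam are added on top."""
    rng = random.Random(seed)
    Z, Y = [], []
    for _ in range(n_shots):
        s = poisson_draw(lam, rng)
        Z.append(sum(rng.random() < eta_z for _ in range(s))
                 + poisson_draw(eta_z * zeta_z * lam, rng))
        Y.append(sum(rng.random() < eta_y for _ in range(s))
                 + poisson_draw(eta_y * zeta_y * lam, rng))
    return Z, Y

lam, eta_z, eta_y = 2.0, 0.5, 0.6
Z, Y = simulate_shots(lam, eta_z, eta_y, 1.0, 1.0, 40_000)
n = len(Z)
z_bar, y_bar = sum(Z) / n, sum(Y) / n
cov_zy = sum((z - z_bar) * (y - y_bar) for z, y in zip(Z, Y)) / n
# cov_zy should approach theta_2 * lam = 0.5 * 0.6 * 2.0 = 0.6,
# unaffected by the zeta = 1 backgrounds.
```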
When expanding \(\langle z_0 y_0 x_0 ... \rangle\), most of the terms vanish because of Eq.~\ref{vanish}, and we are left only with polynomials of moments of \(s\). These polynomials are the same as in Eqs.~\ref{moments} except that now the moments are those of \(\text{Pois}(\theta_n \lambda)\) rather than \(\text{Pois}(\lambda)\). Therefore, we obtain a general result: \begin{equation} \varkappa_n(Z,Y,X,...) = \theta_n \lambda. \label{kappa-n} \end{equation} The simplicity of this result is remarkable. Despite the extensive data processing required to estimate cumulants, their meaning is simple: cumulants reconstruct objects from partially detected fragments disregarding any background of other fragments. \subsection{Estimators of cumulants} To construct cumulant estimators, the expected values in Eqs.~\ref{kappa-2}--\ref{kappa-6} should be replaced with sample averages. The simplest action is to use Eq.~\ref{kappa-1-estim} everywhere. If, however, unbiased estimators are desired, factor \(1/N\) should be replaced with \(1/(N-1)\) whenever a degree of freedom has already been used, for example \begin{multline} \widehat{\varkappa_2}(Z,Y) = \overline{(Z - \overline{Z})(Y - \overline{Y})} \\ = \frac{1}{N-1} \sum_{i=1}^{N}{ \Big (Z_i - \frac{1}{N} \sum_{j=1}^{N}{Z_j} \Big ) \Big (Y_i - \frac{1}{N} \sum_{j=1}^{N}{Y_j} \Big )}. \label{estim-2} \end{multline} These estimators can be plotted as 2-dimensional maps \cite{frasinski1989covariance} or slices of higher-dimensional maps \cite{frasinski1991multiphoton}. \subsection{Noise of estimators} Due to the finite number of samples collected, cumulant estimators are noisy. When assessing the feasibility of an experiment involving a cumulant map, the expected noise on the map is of primary concern. It is known that with increasing dimensionality of the map, the noise-to-signal ratio (N/S) increases \cite{frasinski1991multiphoton}. There are two sources of this deterioration.
Firstly, each time the map dimensionality is increased, the signal decreases because it is multiplied by the detection efficiency according to Eq.~\ref{kappa-n}. And secondly, the higher the cumulant, the more subtraction of lower correlations is needed, which contributes more noise from the subtrahends. These effects can be quantified by calculating the variance of the cumulant estimator, \(\text{var}(\widehat{\varkappa_n})\), and finding the noise-to-signal ratio: \begin{equation} \text{N/S} = \sigma_n / \varkappa_n \text{, where } \sigma_n = \sqrt{\text{var}(\widehat{\varkappa_n})}. \label{NS} \end{equation} Since the calculations of the variance are lengthy, they are relegated to the appendix. As usual, we find that the standard deviation \(\sigma_n \propto 1/\sqrt{N}\), therefore, once we know \(\sigma_n\) we can estimate the number of samples, \(N\), needed for the required noise-to-signal ratio and assess the experimental feasibility. \section{Summary of analytical results} \label{summary} The values and variances of the cumulants are quite complicated functions of the counting rate, \(\lambda\), the detection efficiency, \(\eta\), and the relative background, \(\zeta\). Rather than inspecting the analytical formulae, it is more informative to plot the results for some chosen argument ranges. A Matlab code that calculates the values and variances of cumulants up to the 4\textsuperscript{th} one is included in Supplemental Material. When reading and using the code, it is helpful to refer to the equations written in normal mathematical notation. The equations give cumulant estimates \(\widehat{\varkappa_n}\) constructed from the samples according to Eq.~\ref{kappa-1-estim}, the cumulant values \(\varkappa_n\), and the cumulant variances \(\text{var}(\widehat{\varkappa_n})\). 
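The sample-size estimate implied by \(\sigma_n \propto 1/\sqrt{N}\) amounts to a one-liner: writing \(\text{var}(\widehat{\varkappa_n}) = v_n/N\), the \(N\) required for a target noise-to-signal ratio follows directly from Eq.~\ref{NS}. The function name and arguments in this sketch are illustrative:

```python
def samples_needed(var_times_n, kappa_n, target_ns):
    """N such that sqrt(var_times_n / N) / kappa_n <= target_ns, where
    var(kappa_hat_n) = var_times_n / N is the 1/N scaling of the noise."""
    return var_times_n / (kappa_n * target_ns) ** 2
```

For example, with \(v_2 = 5.76\) and \(\varkappa_2 = 0.6\), a \(10\%\) noise-to-signal ratio requires \(N = 1600\) samples.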
The auxiliary quantities are the central moments of the correlated parts, \(\langle s^n \rangle\), and of the uncorrelated parts, \(\langle z^2 \rangle\), \(\langle y^2 \rangle\), \(\langle x^2 \rangle\), and \(\langle w^2 \rangle\) (see Eq.~\ref{s-z-y-x}). \subsection{1D spectrum} \label{summ1D} Note that the expected value of the first cumulant depends only on the correlated part, which is justified in the discussion of Eq.~\ref{kappa-1-bgnd}. Hence \begin{align*} \widehat{\varkappa_1} &= \overline{Z} = \overline{Z_\text{c}} + \overline{Z_\text{u}}, \\ \varkappa_1 &= \big\langle \overline{Z_\text{c}} \big\rangle = \langle S_\text{c} \rangle = \theta_1 \lambda, \\ \text{var}(\widehat{\varkappa_1}) &= \frac{1}{N} \big( \langle s^2 \rangle + \langle z^2 \rangle \big), \\ \end{align*} where \begin{align*} \langle s^2 \rangle &= \theta_1 \lambda, \\ \langle z^2 \rangle &= \big((1 + \zeta_Z) \eta_Z - \theta_1 \big) \lambda, \\ \theta_1 &= \eta_Z. \end{align*} \subsection{2D covariance map} \label{summ2D} The second cumulant is commonly known as covariance. \begin{align*} \widehat{\varkappa_2} &= \overline{(Z - \overline{Z})(Y - \overline{Y})}, \\ \varkappa_2 &= \langle \widehat{\varkappa_2} \rangle = \langle s^2 \rangle = \theta_2 \lambda, \\ \text{var}(\widehat{\varkappa_2}) &\approx \frac{1}{N} \Big( \langle s^4 \rangle - \langle s^2 \rangle^2 \\ &+ \langle s^2 \rangle \big(\langle z^2 \rangle + \langle y^2 \rangle \big) \\ &+ \langle z^2 \rangle \langle y^2 \rangle \Big), \\ \end{align*} where \begin{align*} \langle s^2 \rangle &= \theta_2 \lambda, \\ \langle s^4 \rangle &= \theta_2 \lambda + 3 \,\theta_2^2 \lambda^2, \\ \langle z^2 \rangle &= \big((1 + \zeta_Z) \eta_Z - \theta_2 \big) \lambda, \\ \langle y^2 \rangle &= \big((1 + \zeta_Y) \eta_Y - \theta_2 \big) \lambda, \\ \theta_2 &= \eta_Z \eta_Y. \end{align*} \subsection{3D cumulant map} \label{summ3D} The third cumulant is sometimes called 3-fold covariance or 3-dimensional covariance. 
\begin{align*} \widehat{\varkappa_3} &= \overline{(Z - \overline{Z})(Y - \overline{Y})(X - \overline{X})}, \\ \varkappa_3 &= \langle \widehat{\varkappa_3} \rangle = \langle s^3 \rangle = \theta_3 \lambda, \\ \text{var}(\widehat{\varkappa_3}) &\approx \frac{1}{N} \Big( \langle s^6 \rangle - \langle s^3 \rangle^2 \\ &+ \langle s^4 \rangle \sum^3 \langle z^2 \rangle \\ &+ \langle s^2 \rangle \sum^3 \langle z^2 \rangle \langle y^2 \rangle \\ &+ \langle z^2 \rangle \langle y^2 \rangle \langle x^2 \rangle \Big), \\ \end{align*} where \begin{align*} \langle s^2 \rangle &= \theta_3 \lambda, \\ \langle s^3 \rangle &= \theta_3 \lambda, \\ \langle s^4 \rangle &= \theta_3 \lambda + 3\,\theta_3^2 \lambda^2, \\ \langle s^6 \rangle &= \theta_3 \lambda + 25\,\theta_3^2 \lambda^2 + 15\,\theta_3^3 \lambda^3, \\ \langle z^2 \rangle &= \big((1 + \zeta_Z) \eta_Z - \theta_3 \big) \lambda, \\ \langle y^2 \rangle &= \big((1 + \zeta_Y) \eta_Y - \theta_3 \big) \lambda, \\ \langle x^2 \rangle &= \big((1 + \zeta_X) \eta_X - \theta_3 \big) \lambda, \\ \theta_3 &= \eta_Z \eta_Y \eta_X. \end{align*} \subsection{4D cumulant map} \label{summ4D} The fourth cumulant formulae are the main result of this work. 
\begin{align*} \widehat{\varkappa_4} &= \overline{ (Z - \overline{Z})(Y - \overline{Y})(X - \overline{X})(W - \overline{W})} \\ &- \sum^3 \overline{(Z - \overline{Z})(Y - \overline{Y})} \:\: \overline{(X - \overline{X})(W - \overline{W})}, \\ \varkappa_4 &= \langle \widehat{\varkappa_4} \rangle = \langle s^4 \rangle - 3 \langle s^2 \rangle^2 = \theta_4 \lambda, \\ \text{var}(\widehat{\varkappa_4}) &\approx \frac{1}{N} \Big( \langle s^8 \rangle - \langle s^4 \rangle^2 \\ &+ 48\langle s^4 \rangle \langle s^2 \rangle^2 - 12\langle s^6 \rangle \langle s^2 \rangle - 36\langle s^2 \rangle^4 \\ &+ \big(\langle s^6 \rangle - 6\langle s^4 \rangle \langle s^2 \rangle + 9\langle s^2 \rangle^3 \big) \sum^4 \langle z^2 \rangle \\ &+ \big(\langle s^4 \rangle - \langle s^2 \rangle^2 \big) \sum^6 \langle z^2 \rangle \langle y^2 \rangle \\ &+ \langle s^2 \rangle \sum^4 \langle z^2 \rangle \langle y^2 \rangle \langle x^2 \rangle \\ &+ \langle z^2 \rangle \langle y^2 \rangle \langle x^2 \rangle \langle w^2 \rangle \Big), \\ \end{align*} where \begin{align*} \langle s^2 \rangle &= \theta_4 \lambda, \\ \langle s^4 \rangle &= \theta_4 \lambda + 3\,\theta_4^2 \lambda^2, \\ \langle s^6 \rangle &= \theta_4 \lambda + 25\,\theta_4^2 \lambda^2 + 15\,\theta_4^3 \lambda^3, \\ \langle s^8 \rangle &= \theta_4 \lambda + 119\,\theta_4^2 \lambda^2 + 490\,\theta_4^3 \lambda^3 + 105\,\theta_4^4 \lambda^4, \\ \langle z^2 \rangle &= \big((1 + \zeta_Z) \eta_Z - \theta_4 \big) \lambda, \\ \langle y^2 \rangle &= \big((1 + \zeta_Y) \eta_Y - \theta_4 \big) \lambda, \\ \langle x^2 \rangle &= \big((1 + \zeta_X) \eta_X - \theta_4 \big) \lambda, \\ \langle w^2 \rangle &= \big((1 + \zeta_W) \eta_W - \theta_4 \big) \lambda, \\ \theta_4 &= \eta_Z \eta_Y \eta_X \eta_W. \end{align*} \section{Discussion of the results} The figures in this section have been drawn using the Matlab code given in Supplemental Material.
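The \(\widehat{\varkappa_4}\) estimator can be transcribed directly from its definition: the 4-fold centred product minus the three pairings of 2-fold covariances, \((ZY)(XW)+(ZX)(YW)+(ZW)(YX)\). A sketch (ours, not the Supplemental code); for four identical, fully correlated Poisson channels the estimate should approach the 4\textsuperscript{th} Poisson cumulant, \(\lambda\), while for independent channels it should vanish:

```python
import numpy as np

def kappa4_hat(Z, Y, X, W):
    """4th-cumulant estimator: centred 4-fold product minus the three
    pairings of 2-fold covariances."""
    z, y, x, w = (a - a.mean() for a in (Z, Y, X, W))
    return ((z * y * x * w).mean()
            - (z * y).mean() * (x * w).mean()
            - (z * x).mean() * (y * w).mean()
            - (z * w).mean() * (y * x).mean())

rng = np.random.default_rng(1)
S = rng.poisson(1.0, 400_000)                 # fully correlated: Z = Y = X = W
A, B, C, D = rng.poisson(1.0, (4, 400_000))   # four independent channels
print(kappa4_hat(S, S, S, S))   # ~ 1 (4th cumulant of Poisson(1))
print(kappa4_hat(A, B, C, D))   # ~ 0 (no genuine 4-fold correlation)
```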
For a detailed inspection of the results, it is recommended to run the code and vary the figure options, such as rotating the 3D plots or changing the argument ranges. \subsection{Ideal conditions} It is instructive first to look at the results under the ideal conditions of full detection efficiency and no background. Substituting \(\eta = 100\%\) and \(\zeta = 0\) into the equations in Section \ref{summary} we find that there is no contribution from the uncorrelated parts and the expressions for the noise-to-signal ratio (N/S) calculated from Eq.~\ref{NS} are relatively simple functions of \(\lambda\). These functions are plotted in Fig.~\ref{fig_noiseLambda} for \(N = 1\). \begin{figure}[b] \includegraphics[width=\linewidth]{Fig3.png} \caption{\label{fig_noiseLambda} \textbf{Cumulant noise as a function of the number of parent objects under the ideal conditions.} Unlike for the first and second cumulants, the noise for the higher cumulants is minimal at low counting rates.} \end{figure} With increasing counting rate, i.e. increasing \(\lambda\), the noise of the first cumulant approaches zero as \(1/\sqrt{\lambda}\), which reflects the well-known fact that a high counting rate is always advantageous in collecting 1D spectra. The N/S of the second cumulant (covariance) approaches a constant value of \(\sqrt{2}\) with an increasing counting rate. This tells us that for covariance mapping there is little advantage in increasing \(\lambda\) beyond 1 or 2, unless we want to accommodate weak and strong features on the same map. (In fact a high counting rate exacerbates map distortions due to fluctuations in experimental conditions, which induce common-mode fragment correlations \cite{frasinski2013dynamics, kornilov2013coulomb}.) Cumulants higher than the second one have N/S increasing at high counting rates due to the higher powers of \(\lambda\) present in the expressions for \(\text{var}(\widehat{\varkappa_n})\).
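Under these ideal conditions the curves of Fig.~\ref{fig_noiseLambda} are easy to reproduce; a sketch (our grid evaluation, assuming \(\eta = 1\), \(\zeta = 0\), \(N = 1\)) for the second and third cumulants:

```python
import numpy as np

lam = np.linspace(0.01, 3.0, 3000)

# Ideal conditions: the uncorrelated terms vanish and the per-sample
# variances reduce to polynomials in lambda (Sec. "Summary"):
ns2 = np.sqrt(lam + 2 * lam**2) / lam                  # <s^4> - <s^2>^2
ns3 = np.sqrt(lam + 24 * lam**2 + 15 * lam**3) / lam   # <s^6> - <s^3>^2

print(np.sqrt(100 + 2 * 100**2) / 100)   # ~1.418, approaching sqrt(2)
print(lam[np.argmin(ns3)])               # minimum near 1/sqrt(15) ~ 0.26
```

The analytic minimum of \(\text{N/S}_3^2 = 1/\lambda + 24 + 15\lambda\) sits at \(\lambda = 1/\sqrt{15}\), consistent with the green curve of Fig.~\ref{fig_noiseLambda}.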
Therefore, for \(n \geq 3\) there is an optimal counting rate at low values of \(\lambda\) as the minima of the green and orange curves show in Fig.~\ref{fig_noiseLambda}. \subsection{Reduced detection efficiency} To assess how a reduced detection efficiency affects the noise, we plot N/S as a function of \(\eta\) and \(\eta \lambda\), as shown in Fig.~\ref{fig_noiseEta}. The reason for choosing the latter argument rather than just \(\lambda\) is that \(\eta \lambda\) is the mean number of only the detected fragments, which is what is observed experimentally on 1D spectra. For simplicity, it is assumed that \(\eta\) is the same for every fragment. As expected, the noise of the 2\textsuperscript{nd} and higher cumulants substantially increases when the detection efficiency is very low. However, when the detection efficiency is reduced only moderately, to around 50\%, the increase in the noise is also moderate, even for the 4\textsuperscript{th} cumulant. Since 50\% detection efficiency is within the reach of modern particle detectors, cumulant mapping is a feasible proposition. \begin{figure}[b] \includegraphics[width=\linewidth]{Fig4.png} \caption{\label{fig_noiseEta} \textbf{Cumulant noise as in Fig.~\ref{fig_noiseLambda} but resolved for \(\eta\).} When detection efficiency is reduced to about 50\%, there is only a modest increase in the noise.} \end{figure} \subsection{Background fragments} The next correction to the ideal conditions worth considering is a background of uncorrelated fragments (magenta circles in Fig.~\ref{fig_principle}). This is done using a realistic value of \(\eta = 50\%\) and plotting N/S as a function of \(\zeta\) and \(\eta \lambda\) in Fig.~\ref{fig_noiseZeta}. This choice of arguments means that they are proportional, respectively, to the relative background level and the height of a peak on a 1D spectrum, as shown in Fig.~\ref{fig_kappa1}.
For simplicity, we assume the same background level, \(\zeta\), for each kind of fragment, which makes the noise of the higher cumulants grow faster with increasing \(\zeta\) than that of the lower ones because of the higher powers of \(\zeta\) present in the expressions for \(\text{var}(\widehat{\varkappa_n})\). In some experiments this may be an over-pessimistic assumption because some of the \(Z, Y, X, \text{ or } W\) fragments may experience little or no background at all. The code given in Supplemental Material accepts \(\zeta\) and \(\eta\) tailored to each kind of fragment. The optimum counting rate is broadly the same as for no background shown in Fig.~\ref{fig_noiseEta}. With an increasing background level, however, the optima for the higher cumulants shift to even lower counting rates. \begin{figure}[t] \includegraphics[width=\linewidth]{Fig5.png} \caption{\label{fig_noiseZeta} \textbf{Cumulant noise as in Fig.~\ref{fig_noiseLambda} but at a reduced \(\eta\) and resolved for \(\zeta\).} The noise increases with increasing background, especially for the higher cumulants and higher counting rates.} \end{figure} \subsection{Number of samples needed} When planning an experiment, the calculated noise is used to estimate the number of samples, \(N\), needed to suppress N/S to an acceptable level. We can use Eq.~\ref{NS} to calculate \(N\) for a fixed noise level, e.g. N/S = 0.1. Taking \(\eta = 50\%\), the result is shown in Fig.~\ref{fig_samplesZeta} on a logarithmic scale. The need to reduce the counting rate is clearly visible for the higher cumulants, especially at high background levels. The optimal \(\eta \lambda\) for the 3\textsuperscript{rd} and 4\textsuperscript{th} cumulants is below 0.05 at \(\zeta > 5\), which means that the observed fragments should be detected in less than 1 in 20 single-sample spectra. Such a low counting rate is comparable to the requirement of coincidence experiments.
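The quantities plotted in Fig.~\ref{fig_samplesZeta} follow by inverting Eq.~\ref{NS} with the Section~\ref{summ4D} expressions. A sketch (taking, for simplicity, the same \(\eta\) and \(\zeta\) for all four fragments; the parameter values are illustrative):

```python
import math

def samples_for_kappa4(lam, eta, zeta, target_ns=0.1):
    """N needed for a target N/S on the 4th cumulant, from var(kappa_4-hat)
    with equal eta and zeta for the Z, Y, X, W fragments."""
    th = eta ** 4                        # theta_4 = eta_Z eta_Y eta_X eta_W
    m = th * lam                         # mean of the correlated Poisson part
    s2 = m
    s4 = m + 3 * m**2
    s6 = m + 25 * m**2 + 15 * m**3
    s8 = m + 119 * m**2 + 490 * m**3 + 105 * m**4
    u = ((1 + zeta) * eta - th) * lam    # <z^2> (identical for y, x, w)
    var1 = (s8 - s4**2 + 48 * s4 * s2**2 - 12 * s6 * s2 - 36 * s2**4
            + (s6 - 6 * s4 * s2 + 9 * s2**3) * 4 * u
            + (s4 - s2**2) * 6 * u**2
            + s2 * 4 * u**3
            + u**4)
    kappa4 = th * lam
    return math.ceil(var1 / (kappa4 * target_ns) ** 2)

print(samples_for_kappa4(lam=0.2, eta=0.5, zeta=0.0))  # roughly a few 10^4
```

Raising \(\zeta\) pushes the required \(N\) further up, towards the \(10^5\) regime discussed below.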
Unlike coincidences, however, cumulant mapping can accommodate higher counting rates if necessary, albeit at an increased noise. \subsection{Practical implications} Fig.~\ref{fig_samplesZeta} tells us that at least on the order of 10\textsuperscript{5} samples will be needed to obtain a good 4\textsuperscript{th} cumulant map. If we want to complete data collection in 15--20 minutes, then the sampling rate should be at least 100 Hz, and 1 kHz or more is desirable. Such sampling rates are now routinely available from femtosecond lasers and becoming available from XFELs \cite{huang2021features}. For example, the LCLS-II XFEL will be operating at up to 1 MHz repetition rate, enabling researchers to probe over 10\textsuperscript{9} samples in a single experimental run. In principle, such a large number of samples makes it possible to build clear cumulant maps of even higher order than the 4\textsuperscript{th} one. In practice, however, the data acquisition speed is likely to be the limiting factor, since every single-sample spectrum needs to be recorded. \begin{figure}[t] \includegraphics[width=\linewidth]{Fig6.png} \caption{\label{fig_samplesZeta} \textbf{Number of samples needed for a fixed noise-to-signal ratio.} In realistic experimental conditions about 10\textsuperscript{5} samples are needed to build a clear 4\textsuperscript{th} cumulant map.} \end{figure} Cumulant mapping can significantly enhance conventional mass spectrometry, whose main application is in identifying large biomolecules. The conventional approach is to obtain a high-quality mass spectrum of fragments and search for a match in a large database of molecular spectra. Therefore, the development of commercial spectrometers is driven towards high mass resolution at the expense of the repetition rate, which is normally below 1 Hz. Recently, covariance mapping has been successfully applied to analyse mass spectra obtained from a commercial spectrometer \cite{driver2020two, driver2021two}.
Rather than relying on the high mass resolution, the technique resolves the spectra in the second dimension and partially reconstructs the parent objects on a 2D map. Such a reconstruction allows some parent identifications that would be impossible using only a 1D spectrum of any quality. Clearly, this technique can utilise higher-cumulant mapping to perform a more complete parent reconstruction. Since many samples are needed to build cumulant maps, the spectrometer should operate at high repetition rates, for example, by employing the time-of-flight technique. The volume of a multi-dimensional cumulant map can be very large and cumbersome to explore. In the simplest approach, cross-sections and projections of the map can be used to visualise the reconstructed molecules \cite{frasinski1991multiphoton}. If the locations of the reconstructed molecules are to be found, computational methods of artificial intelligence can be used to discover and identify them. Cumulant mapping spectrometry can be combined with laser-induced fragmentation. On the one hand, such a combination allows us to elucidate the dynamics of molecular ionization and fragmentation \cite{allum2021multi}; on the other hand, it makes it possible to tune the fragmentation to specific bonds in large biomolecules \cite{ayers2022covariance}. \section{Conclusions and outlook} Cumulant mapping forms a firm theoretical basis for the concept of `multi-fold covariance'. The derived formulae enable the experimentalist to assess quantitatively the feasibility of studying multiple correlations in a fragmentation experiment. The key requirements are detection efficiency of around 50\%, and a sufficient number of samples, which in practice translates to sampling at high repetition rates. Studies of molecular fragmentations induced by femtosecond or x-ray lasers are obvious areas for applying cumulant mapping.
The technique can be used to extend conventional mass spectrometry to multiple dimensions and substantially enhance its selectivity. Extension to particles other than electrons or ions should be straightforward. In particular, photons in a wide spectral range from near infrared to hard gamma rays are promising candidates. Since cumulant mapping relies on the Poisson distribution of the samples, in principle, it is applicable to any repetitive Poissonian process. For example, the neuronal spike trains closely follow Poisson point processes \cite{gardella2019modeling, campo2020inferring}. It could be speculated that cumulants represent the high-level brain functions emerging from correlations in low-level neuronal activity. \bibsection \section*{Appendix: Variance calculations} \section*{Popular summary} Experimental sciences often use a crude but powerful method to study an unknown object by breaking it into pieces, identifying the fragments, and reconstructing the object. Unfortunately, the reconstruction is often ambiguous because some fragments are lost, or other, unrelated fragments are inadvertently collected. Cumulant mapping not only performs the reconstruction correctly, but also overcomes the problem of lost and unrelated fragments, albeit at the cost of sampling the objects many times. This work describes the theory of cumulant mapping and estimates how many samples are needed for a reliable reconstruction under given experimental conditions. \begin{wrapfigure}{L}{6.2cm} \includegraphics[width=5.7cm]{FigPS.png} \centering \end{wrapfigure} The objects do not need to be fragmented one by one. In fact, each sample should contain a random number of objects drawn from the Poisson distribution. Such samples are common in studies at the molecular level. Molecules are inserted into vacuum, ionized and fragmented.
The ionization and fragmentation method could be as simple as colliding the molecules with helium gas, or as elaborate as exposing them to intense x-ray pulses from a free-electron laser. Spectra of ions, electrons, or photons emerging from each sample are recorded, digitally processed, and the cumulant values are displayed on a multi-dimensional map showing the reconstructed molecules. This method could be applied to quite long biomolecular chains, such as fragments of proteins or DNA. Cumulant mapping forms a natural basis of multi-dimensional spectrometry. The technique substantially enhances the selectivity of analytical mass spectrometry and can detect variations in the structure of biomolecules inaccessible to the conventional analysis. When combined with short laser pulses, the technique can be used to elucidate ultra-fast molecular processes such as solar energy harvesting or radiation damage of the DNA. In principle, cumulant mapping can be applied to any repetitive Poissonian process. Intriguingly, since the firing of neurons closely follows a Poisson probability distribution, one could speculate that cumulants represent the high-level mental functions emerging from low-level neuronal activity. \bibsection \end{document}
\section{Introduction} Massive black holes (MBHs) are thought to be ubiquitous in the centre of massive galaxies at all redshifts \citep[][and references therein]{Kormendy_Ho_2013}. Through gas accretion, they are the engines responsible for the huge emission of galactic centres as active galactic nuclei (AGN). Extremely bright quasars are AGN with enormous luminosities ($L_{\rm bol} > 10^{46}$~erg~s$^{-1}$), likely powered by the most massive MBHs, with masses $M_{\rm BH}$ ranging from $10^{8}$ to $10^{10}~\!{\rm M}_\odot$. The number of observed bright quasars at $z>6$ has substantially grown in recent years \citep[e.g.,][]{Wu_et_al_2015, Venemans_et_al_2015, Jiang_et_al_2015, Carnall_et_al_2015, Reed_et_al_2015, Matsuoka_et_al_2016, Reed_et_al_2017, Banados_et_al_2018}, proving that MBHs with $M_{\rm BH} \sim 10^{9}~\!{\rm M}_\odot$ were already in place when the Universe was less than $950$~Myr old. The fast growth of these compact objects represents an exciting and fully open problem in modern astrophysics and various models have been proposed to solve the issue. For a long time, it was assumed that MBH seeds were provided by ($i$) the massive remnants of Pop III stars \citep[see, e.g.][]{Madau_Rees_2001, Haiman_Loeb_2001, Heger_et_al_2003, Volonteri_et_al_2003, Madau_et_al_2004}, formed at $z\gsim20$. However, this scenario is problematic, since the accretion would have to either continue unperturbed at the Eddington rate, maintaining a very low radiative efficiency \citep[not to excessively hinder the feeding process; see, e.g.][]{Tanaka_Haiman_2009}, or ($ii$) exceed the Eddington limit \citep[see, e.g.][]{Madau_et_al_2014, Volonteri_et_al_2015, Lupi_et_al_2016, Pezzulli_et_al_2017}. Other models predict ($iii$) the existence of very massive MBH seeds ($M_{\rm BH} \mathrel{\rlap{\lower 3pt \hbox{$\sim$}} \raise 2.0pt \hbox{$>$}} 10^{4}~\!{\rm M}_\odot$, possibly originating from direct collapse; see e.g.
\citealt{Loeb_Rasio_1994, Bromm_Loeb_2003, Koushiappas_et_al_2004, Spaans_Silk_2006, Mayer_et_al_2010, Mayer_et_al_2015}). The steady and efficient gas inflow, necessary for the fast MBH growth, would be strongly favoured within highly massive dark matter (DM) halos \citep[see][]{Efstathiou_Rees_1988}. Accordingly, theoretical models including numerical simulations with MBH accretion and feedback prescriptions \citep[see e.g.][]{Barkana_Loeb_2001, Springel_et_al_2005a, Sijacki_et_al_2009, Di_Matteo_et_al_2012, Costa_et_al_2014, Weinberger_et_al_2018, Ni_et_al_2018, Marshall_et_al_2019, Ni_et_al_2020} predict high-$z$ MBHs to reside in the most massive DM halos with $M_{\rm vir} > 10^{12}-10^{13}~\!{\rm M}_\odot$, corresponding to fluctuations above $3-4\sigma$ in the cosmic density field \citep{Barkana_Loeb_2001}. A connection between MBHs and their host DM halo has also been studied via clustering measurements of AGN. Various large surveys, from the optical to the X-ray band, have shown that AGN reside in massive halos with masses $M_{\rm vir} = 10^{12}-10^{13}~\!{\rm M}_\odot$ at low ($z\sim0.1$) and higher (up to $z\approx2$) redshift \citep[e.g.][]{Hickox_et_al_2009, Ross_et_al_2009, Cappelluti_et_al_2010, Allevato_et_al_2011, Allevato_et_al_2012}. As a consequence, quasars at high redshift should be part of large scale structures marked by a significant over-density in galaxy number counts, which may extend over Mpc scales \citep[e.g.,][]{Costa_et_al_2014}. However, this expectation has not been conclusively confirmed by observations. On the one hand, various observations corroborated the high-density scenario (see, e.g. \citealt{Steidel_et_al_2005, Capak_et_al_2011, Swinbank_et_al_2012, Husband_et_al_2013} for $2<z<5$ and \citealt{Kim_et_al_2009, Morselli_et_al_2014, Balmaverde_et_al_2017, Decarli_et_al_2017, Decarli_et_al_2018, Mignoli_et_al_2020, Venemans_et_al_2020}, for $z \sim 6$ and above).
In particular, \citet{Venemans_et_al_2020} followed up, with the Atacama Large Millimeter Array (ALMA), a sample of 27 $z\sim 6$ quasars, previously detected in [CII], discovering 17 [CII] bright galaxies and finding that some of the quasars present multiple (2-3) companions. Furthermore, whereas no dual AGN has been confirmed so far at $z>6$ \citep[][]{Connor_et_al_2019, Connor_et_al_2020, Vito_et_al_2019a, Vito_et_al_2021}, several of them have been detected up to ${z \sim 5}$ (see \citealt{Koss_et_al_2012, Vignali_et_al_2018, Silverman_et_al_2020}). On the other hand, more than a few observations did not reveal significant galaxy count over-densities around quasars (see, e.g., \citealt{Francis_Bland-Hawthorn_2004} and \citealt{Simpson_et_al_2014a} at $z\sim 2$, \citealt{Uchiyama_et_al_2018} at $z=4$, \citealt{Kashikawa_et_al_2007} and \citealt{Kikuta_et_al_2017} at $z\sim 5$, \citealt{Banados_et_al_2013} and \citealt{Mazzucchelli_et_al_2017b} at $z>5.7$) and some theoretical studies consistently suggested that MBHs do not necessarily lie in the most massive halos \citep[see, for instance][]{Fanidakis_et_al_2013, Orsi_et_al_2016, Di_Matteo_et_al_2017, Habouzit_et_al_2019}. These conflicting observational results could be explained by the difficulty in detecting the satellites in the neighborhood of a luminous quasar, where the strong AGN feedback can play a fundamental role in shaping their baryonic component. AGN feedback could affect the MBH environment well beyond the host galaxy scale radius and significantly modify the star formation (SF) activity of the orbiting companions \citep[see][]{Martin-Navarro_et_al_2019, Martin-Navarro_et_al_2021}. \citet{Dashyan_et_al_2019} explicitly investigated, in cosmological hydrodynamical simulations, the AGN-driven quenching effect within galactic satellites at low redshift ($z<3$).
The authors claim that AGN winds can suppress SF in AGN companions, by sweeping away their gas, out to five times the virial radius of the central galaxy. Conversely, \citet{Gilli_et_al_2019} observed a radio AGN at $z=1.7$, finding a possible positive effect of its feedback on the star formation of its companion, over a scale $\gsim450$~kpc. Furthermore, \citet{Fragile_et_al_2017} performed an isolated simulation in order to explain the positive feedback observed by \citet{Croft_et_al_2006} in a star-forming galaxy, where the star formation is triggered by the radio-jets of the nearby NGC--541. At higher redshift, the direct effect of AGN feedback on the satellite galaxies remains unclear. Several studies found, both through observations \citep[][]{Kashikawa_et_al_2007} and numerically \citep[][]{Efstathiou_1992, Thoul_Weinberg_1996, Okamoto_et_al_2008}, that the ionizing background produced by the quasar is able to inhibit SF in satellites, or even suppress their assembly. In the most extreme cases, satellite halos would still populate the quasar environment, without being detectable because of their low gas and stellar content, especially given the limited sensitivity of modern instruments. In this context, the upcoming James Webb Space Telescope (JWST, \citealt{Gardner_et_al_2006}) will allow us to observe fainter AGN companions, hopefully alleviating our sensitivity bias. With a primary mirror of about 6.5 meters, JWST reaches a sensitivity 3-5 times higher than that of the Hubble Space Telescope at 1$\mu$m, enabling the detection of the rest-frame UV emission that arises from distant faint star-forming galaxies with relatively short exposure times. Numerical simulations are powerful tools to study the AGN effect on the surrounding galaxies, for they provide the complete spatial and time distribution of matter, as well as fundamental self-consistent sub-grid models, such as SF, stellar feedback, cooling processes, etc.
Recently, enormous steps forward have been made in this field and several works successfully portrayed the formation and evolution of bright quasar hosts at high redshift. For instance, \citet{Di_Matteo_et_al_2017} investigated the accretion efficiency of high-$z$ MBHs, by employing advanced refinement techniques, focussing on the effects of AGN feedback on the host galaxy. \citet{Curtis_Sijacki_2016} and \citet{Van_der_Vlugt_Costa_2019} studied how AGN feedback shapes the dynamical components of the galaxy. \citet{Costa_et_al_2014} and \citet{Smidt_et_al_2018} analysed how AGN X-ray luminosity and feedback negatively affect SF in the host system. \citet{Lupi_et_al_2019} and \citet{Lupi_et_al_2021} detailed the consequences of AGN thermal feedback on the host interstellar medium (ISM). By contrast, \citet{Richardson_et_al_2016}, \citet{Barai_et_al_2018}, and \citet{Valentini_et_al_2021} studied the effect of AGN feedback on the formation and evolution of a proto-cluster, using either different numerical methods or different feedback prescriptions. Finally, \citet{Costa_et_al_2020} developed a state-of-the-art model to describe AGN-driven small-scale winds and tested it in isolated simulations. In this work, we analyse the environment of a powerful quasar at $z\gsim6$, resulting from the simulations by \citet{Barai_et_al_2018}. To assess the possible effect of the AGN feedback on the surrounding companion galaxies, we take advantage of the \citet{Barai_et_al_2018} suite and compare the quasar environment with a control run in which MBHs are not seeded. In detail, our goal is to ($i$) evaluate the impact of quasar feedback on its environment, far beyond its host galaxy radius, and ($ii$) provide theoretical predictions on the expected UV and rest-frame FIR luminosities of the neighbouring satellites.
This paper is organized as follows: in Section~\ref{hydro_sim}, we describe the numerical model adopted in this work and introduce the runs with and without AGN, hereafter called \texttt{AGNcone}{} and \texttt{noAGN}{}, respectively; in Section~\ref{sec:sample} we present the sample of satellites and statistically analyse their redshift evolution and how their properties (e.g. number of satellites, star formation rate, stellar mass, gas mass, metallicity) are affected by their position in the proto-cluster; in Section~\ref{sec:effect_individual}, we focus on the effect of AGN feedback on individual satellites, and present an interpretation of our results in Section~\ref{sec:discussion}; in Section~\ref{sec:observations} we discuss the observational properties of the galaxy group; finally, we summarise our findings and draw our conclusions in Section~\ref{sec:conclusions}. \section{Hydrodynamic simulations} \label{hydro_sim} The two runs analysed in this work belong to a suite of zoom-in cosmological simulations \citep[][]{Barai_et_al_2018} built to follow the formation of a massive galaxy proto-cluster at $z\simeq6$ through the smoothed particle hydrodynamics $N$-body code {\textsc{gadget-3}} \citep{Springel_2005, Springel_et_al_2008}. Both the simulations share the same initial conditions, generated through the code \textsc{music} \citep[see][]{Hahn_Abel_2011} and assume the same recipe for the sub-grid physics, with the exception of the MBH prescription. In particular, the cosmological parameter set refers to a flat $\Lambda$CDM Universe with $\Omega_{\rm M,0} = 0.3089$, $\Omega_{\Lambda,0} = 1-\Omega_{\rm M,0} = 0.6911$, $\Omega_{\rm b,0} = 0.0486$, and $H_{0} = 67.74$~km~s$^{-1}$~Mpc$^{-1}$ \citep[][results XIII]{Plank_2015}. The simulated box of side $500$~comoving Mpc has been evolved with only DM particles from $z=100$, until $z\lesssim6$, with an initial mass resolution of $2\times10^{10}~\!{\rm M}_\odot$ per particle and a softening length of $48.72$ comoving kpc.
There, the Lagrangian volume of the most massive proto-cluster has been identified (through a \textit{Friends-of-Friends} algorithm) with a virial mass $M_{\rm vir} \simeq 10^{12}\, \!{\rm M}_\odot$ and a comoving virial radius $r_{\rm vir}\simeq 511$~kpc, traced back to $z=100$, refined and re-simulated along with baryons. In the most refined region -- set to be, originally, a cube of side $5.21$~Mpc --, the mass resolution is given by $m_{\rm DM} = 7.54 \times 10^6~\!{\rm M}_\odot$ for 591408 DM particles, and $m_{\rm gas} = 1.41 \times 10^6~\!{\rm M}_\odot$ for the same number of gas particles, whereas the spatial resolution is set by the gravitational softening length of all particle species ($\epsilon\simeq 1.476$ comoving kpc). The adaptive smoothing length is computed according to the standard prescription by \citet{Springel_et_al_2008} and its minimum value is set to $0.001\epsilon$. The whole suite implements radiative heating and cooling using the rates provided in the tables of \citet{Wiersma_et_al_2009} in ionization equilibrium. Metal-line cooling is also considered. Eleven element abundances (H, He, C, Ca, O, N, Ne, Mg, S, Si, Fe) are followed according to the recipe of \citet{Tornatore_et_al_2007} in the presence of a redshift-dependent cosmic ionizing background \citep{Haardt_Madau_2012}. SF is modelled following the multiphase recipes by \citet{Springel_Hernquist_2003}. More specifically, gas particles denser than $n_{\rm SF} = 0.13$~cm$^{-3}$ are converted into collisionless star particles according to the stochastic scheme of \citet{Katz_et_al_1996}. Each spawned star particle represents a stellar population described by a \citet{Chabrier_2003} initial mass function in the mass range $0.1-100~\!{\rm M}_\odot$. Stars are allowed to explode as supernovae (SN) releasing kinetic energy via a constant-velocity outflow with $v_{\rm SN} = 350$~km~s$^{-1}$ \citep[see][]{Barai_et_al_2015, Biffi_et_al_2016}.
Metal enrichment of the interstellar medium is provided by Type Ia SN ($0.8<M /\!{\rm M}_\odot<8$), assuming a binary fraction of $1/10$, according to \citet{Thielemann_et_al_2003}, Type II SN ($M /\!{\rm M}_\odot>8$) according to \citet{Woosley_Weaver_1995}, and winds from asymptotic giant branch stars following \citet{van_den_Hoek_Groenewegen_1997}. The two simulations differ only in terms of the prescription used to describe massive black holes. In detail, in \texttt{noAGN}{} only cooling, metal enrichment, star-formation, and SN feedback are included, with no prescription for MBHs. By contrast, in the \texttt{AGNcone}{} run MBHs are represented as sink particles that can form and grow both via accretion of gas and through mergers with other MBHs. In particular, a MBH is seeded in a halo when: ($i$) the halo does not host any other MBH, ($ii$) the halo virial mass is $M_{\rm h} \geq 10^{9}~\!{\rm M}_\odot$ (i.e. the halo is properly resolved). When these conditions are satisfied, a $M_{\rm BH} = 10^{5}$~$\!\!{\rm M}_\odot$ MBH is placed at the centre of mass of the halo. The simulation implements a \textit{repositioning} algorithm as in \citet{Springel_et_al_2005b, Schaye_et_al_2015}. Every MBH can accrete mass from the surrounding medium via the classical Bondi--Hoyle--Lyttleton accretion rate \citep{Hoyle_Lyttleton_1939, Bondi_Hoyle_1944, Bondi_1952}: \begin{equation} \dot{M}_{\rm Bondi} = \frac{4 \pi G^2 M^2_{\rm BH} \rho}{(c^2_{\rm s}+v^2)^{3/2}}, \label{eq:Mbondi} \end{equation} where $G$ is the gravitational constant, $\rho$ is the gas density, $c_{\rm s}$ the sound speed, and $v$ the gas relative velocity with respect to the MBH\footnote{In the code, gas particles are swallowed according to a stochastic method described in \citet{Springel_et_al_2005b}.}.
The accretion rate is multiplied by a boost factor of $100$, analogously to what has been done, e.g., in \citet{Springel_et_al_2005b}, and it is capped at the Eddington limit, \begin{equation} \dot{M}_{\rm Edd} = \frac{4 \pi G M_{\rm BH} m_{\rm p}}{\epsilon_{r} \sigma_{\rm T} c}, \label{eq:Medd} \end{equation} where $m_{\rm p}$ is the proton mass, $\sigma_{\rm T}$ the Thomson cross section, and $c$ the speed of light in vacuum. $\epsilon_{\rm r}$ is the radiative efficiency set equal to $0.1$ \citep[see the average efficiency for an optically thick and geometrically thin accretion disk by][]{Shakura_Sunyaev_1973}. During the accretion process a MBH radiates a fraction $\epsilon_{\rm r}$ of the accreted rest-mass energy \begin{equation} L_{\rm rad} = \epsilon_{\rm r} \dot{M}_{\rm acc} c^2, \label{eq:Lrad} \end{equation} where $\dot{M}_{\rm acc}$ is the rate of the inflowing gas onto the MBH, and a fraction $\epsilon_{\rm f}$ of this luminosity is coupled to the interstellar medium as feedback energy: \begin{equation} \dot{E}_{f} = \epsilon_{\rm f} L_{\rm rad}, \label{eq:Efeed} \end{equation} where $\epsilon_{\rm f} = 0.05$ as in, e.g., \citet{Di_Matteo_et_al_2008}. Feedback is kinetic and is modelled through the ejection, in a bi-cone with a half-opening angle of $45$~degrees, of a certain mass of gas $M_{\rm w}$ with a fixed initial velocity of $v_{\rm w} = 10^{4}$~km s$^{-1}$, such that the kinetic luminosity $\frac{1}{2}\dot{M_{\rm w}}v_{\rm w}^2$ is equal to $\dot{E}_{f}$. The direction of emission is random and is assigned to the MBH when it is seeded. We note that this assumption is supported by several studies showing little or no alignment between the outflow/jet axis and the large-scale angular momentum of the host galaxy (see, e.g., \citealt{Hopkins_et_al_2012}, and references therein). To summarise, in the control run \texttt{noAGN}{} only SN feedback is included, whereas \texttt{AGNcone}{} incorporates both SN and AGN feedback.
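The accretion and feedback model above can be condensed into a short numerical sketch (cgs constants; the gas density and sound speed below are arbitrary illustrative values, not taken from the simulation):

```python
import math

G, c, m_p = 6.674e-8, 2.998e10, 1.6726e-24    # cgs units
sigma_T, Msun, yr = 6.652e-25, 1.989e33, 3.156e7

def accretion_rate(M_bh, rho, c_s, v, eps_r=0.1, boost=100.0):
    """Boosted Bondi-Hoyle-Lyttleton rate capped at the Eddington rate (g/s)."""
    mdot_bondi = boost * 4 * math.pi * G**2 * M_bh**2 * rho / (c_s**2 + v**2)**1.5
    mdot_edd = 4 * math.pi * G * M_bh * m_p / (eps_r * sigma_T * c)
    return min(mdot_bondi, mdot_edd)

# A 1e8 Msun MBH embedded in dense cold gas is Eddington-limited:
mdot = accretion_rate(1e8 * Msun, rho=1e-22, c_s=1e6, v=0.0)
print(mdot * yr / Msun)          # ~2.2 Msun/yr

# Kinetic feedback: 0.5 * mdot_w * v_w^2 = eps_f * eps_r * mdot * c^2
eps_f, eps_r, v_w = 0.05, 0.1, 1e9   # v_w = 1e4 km/s in cm/s
mdot_w = 2 * eps_f * eps_r * mdot * (c / v_w)**2
print(mdot_w / mdot)             # ~9: wind mass loading per unit accreted mass
```

With these efficiencies the bi-conical wind carries roughly nine times the accreted mass, which illustrates why the outflow can affect gas well beyond the host galaxy.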
Different feedback prescriptions result in different host galaxy properties \citep[see][for a detailed discussion]{Barai_et_al_2018}. Here, we focus our analysis on the quasar environment. The properties of the two runs at their last snapshot are outlined in Table~\ref{tab:summary}. \begin{table*} \centering \caption{Summary table for the two analysed runs at $z=6$. From left to right: ($i$) name of the simulation, ($ii$) presence of SN feedback; ($iii$) presence of AGN feedback, ($iv$) total number of MBHs, ($v$) accretion rate of the most accreting MBH, ($vi$) mass of the most massive MBH, ($vii$) stellar mass and ($viii$) SFR of the central dominant galaxy. At $z=6$ in \texttt{AGNcone}{}, the most massive MBH is also the most accreting one, even if this is not always true at higher redshift.} \label{tab:summary} \begin{tabular}{cccccccc} \hline Run & SF/SN feedback & AGN feedback & \# MBH & $\dot{M}_{\rm acc} [\!{\rm M}_\odot$ yr$^{-1}]$ & $M_{\rm BH} [\!{\rm M}_\odot]$ & $M_{*}^{\rm cD} [\!{\rm M}_\odot]$ & $SFR^{\rm cD} [\!{\rm M}_\odot$ yr$^{-1}]$\\ \hline \hline \texttt{noAGN}{} & yes & no & 0 & 0 & 0 & 1.5$\times 10^{11}$ & 664\\ \hline \texttt{AGNcone}{} & yes & yes & 723 & 57.6 & 4.85$\times 10^9$ & 6.5$\times 10^{10}$ & 116\\ \hline \end{tabular} \end{table*} \section{The satellite sample} \label{sec:sample} \begin{figure} \includegraphics[width=0.49\textwidth]{Mvir_distr_histo_cumulative.pdf} \caption{{\it Left panel}: virial mass distribution of the galaxy samples in \texttt{noAGN}{} (blue) and \texttt{AGNcone}{} (red) at $z=6$. {\it Right panel}: cumulative virial mass function for the same samples. Note the highest mass bin, containing the cD galaxy of the proto-cluster.} \label{fig:Mvir_PDFs} \end{figure} We identify galaxies and their related dark matter halos through the \textsc{AMIGA} halo finder code \citep[see,][]{Knollmann_et_al_2009}, using a minimum of 20 bound particles to define a halo. 
The merger tree for each halo at $z\simeq6$ is built by tracing back in time the constituent DM particles: their IDs are matched in the progenitor structures in the previous snapshots. Baryons are assigned to their related galaxy ($i$) when a gas or a stellar particle is found within $\beta r_{\rm vir}$ of a given halo, where $r_{\rm vir}$ is the virial radius of the halo and $\beta=0.3$ (similarly to what has been done in \citealt{Rosdahl_et_al_2018} and in \citealt{Costa_et_al_2019}), and ($ii$) when the velocity of the considered particles is lower than the escape velocity, determined by the DM potential well\footnote{The gravitational potential here is evaluated through the analytical integration of a Navarro-Frenk-White profile \citep[][]{Navarro_et_al_1996}.}. In the following analysis we select only those satellites with a contamination from low-resolution DM\footnote{We cannot exclude the possibility of contamination-induced effects on the galactic satellites that orbit farther out in the refined zone of the simulation volume with respect to the main galaxy. For this reason we impose such a constraint. The number of excluded galaxies is never larger than $1-2$ per snapshot.} particles lower than 20 percent in mass. We consider only galaxies with $M_{\rm vir} > 10^{9}~\!{\rm M}_\odot$ and with a minimum stellar mass of $M_{*} = 10^{7}~\!{\rm M}_\odot$. These thresholds are a good compromise to minimize numerical errors while maximizing the number of objects for statistical significance. The selected samples at $z=10$ consist of 35 galaxies for run \texttt{noAGN}{} and 36 for run \texttt{AGNcone}{}. At $z\simeq6$ the difference between the number of satellites in the two runs becomes notable, the number in \texttt{noAGN}{} being about 30~percent larger than in \texttt{AGNcone}{} (82 versus 56). We detail this difference in \S~\ref{subsec:redshift_evolution_number}. 
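As a minimal illustration of the selection cuts just described (contamination below 20~percent, $M_{\rm vir} > 10^{9}~\!{\rm M}_\odot$, and $M_{*} \geq 10^{7}~\!{\rm M}_\odot$), one could filter a halo catalogue as in the following sketch; the dictionary keys are hypothetical names of our own and do not reflect the actual \textsc{AMIGA} output format.

```python
# Hypothetical satellite-selection sketch; field names are illustrative,
# not the actual AMIGA halo-finder output. Masses are in solar masses.
def select_satellites(halos):
    """Apply the contamination, virial-mass, and stellar-mass cuts."""
    return [h for h in halos
            if h["contamination"] < 0.20     # <20% low-res DM mass
            and h["M_vir"] > 1e9             # M_vir > 1e9 Msun
            and h["M_star"] >= 1e7]          # M_star >= 1e7 Msun
```
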
We find that at $z\simeq6$, the virial radii of satellites range from about 2 to about 21~kpc and masses from $10^{9}$ to $10^{11}~\!{\rm M}_\odot$. The distribution of the galactic virial masses at $z=6$ is shown in the left panel of Figure~\ref{fig:Mvir_PDFs} for both the analysed runs. Even though the initial conditions are identical and the virial mass is dominated by the DM component, which is far less sensitive to the feedback prescription than baryons, small but appreciable variations are present. Feedback from MBHs seems to have a remarkable effect both in redistributing the baryonic component among the companions and in driving several satellites to coalescence, resulting in larger systems. After almost 1~Gyr from the beginning of the simulation, \texttt{AGNcone}{} galaxies are less peaked around $\sim 10^{9}~\!{\rm M}_\odot$ than \texttt{noAGN}{} ones, whereas the cumulative mass of the galaxy populations (right panel of Figure~\ref{fig:Mvir_PDFs}) shows that the total mass is conserved. This result demonstrates that no mass is lost in \texttt{AGNcone}{} through tidal disruption processes of smaller halos. At $z\simeq6$, about half of the galaxies in the \texttt{AGNcone}{} sample host at least one MBH, whose mass ranges from $\sim10^{5}~\!{\rm M}_\odot$ to $4.8\times10^9~\!{\rm M}_\odot$. There are both quiescent MBHs (or MBHs with a negligible accretion rate) and strongly accreting MBHs with $\sim60-70~\!{\rm M}_\odot$~yr$^{-1}$ (see Table~\ref{tab:summary}). In particular, the most accreting MBH does not remain in the same object during the whole evolution of the galactic proto-cluster, and it is not always the most massive one. To summarise, the galaxies examined here exhibit a complex network of AGN, whose emitted energy varies with time and whose geometrical distribution, along with the preferred direction of emission of feedback, requires a proper model to study their influence on the satellite population. 
\subsection{Redshift evolution of companions} \label{subsec:redshift_evolution_number} \begin{figure} \centering \includegraphics[width=0.47\textwidth]{gal_count.pdf} \caption{Number counts of satellites as a function of redshift. Solid, dotted, and dashed lines indicate the number count evaluated within 1, 3 and 5 virial radii, respectively. Blue lines mark the trend for \texttt{noAGN}{}, whereas red lines refer to \texttt{AGNcone}{}. Differences between the runs significantly increase after about $z=7$, highlighted here with a vertical black line.} \label{fig:gal_count} \end{figure} \begin{figure*} \centering \includegraphics[width=\textwidth]{median_prop.pdf} \caption{Redshift evolution of satellite median properties (\texttt{noAGN}{} in blue, \texttt{AGNcone}{} in red). Clockwise from top left: gas mass, $M_{\rm gas}$, star formation rate, SFR, total gas metallicity, $Z$, in solar units ($Z_{\odot} = 0.0196$, according to \citealt{Vagnozzi_2019}), and stellar mass, $M_{*}$. The error bars represent the $30$th (lower end) and the $70$th (upper end) percentiles of the distribution. We note that the median stellar mass in the \texttt{AGNcone}{} run has a trend compatible with a higher cumulative SF history with respect to \texttt{noAGN}.} \label{fig:redshift_evolution} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{Mass_metallicity.pdf} \caption{Stellar mass-metallicity relation for \texttt{noAGN}{} ({\it left panel}; blue, light blue, and cyan points for $z=6, 7, 8$, respectively) and \texttt{AGNcone}{} ({\it right panel}; red, light red, and pink points for $z=6, 7, 8$, respectively). A linear fit of the same colour is superimposed on each distribution. The best-fit equations for the $z=6$ populations are $y=0.33x+5.6$ and $y=0.25x+6.2$ for \texttt{noAGN}{} and \texttt{AGNcone}{}, respectively. 
For comparison, we plot over each panel some $M_*-Z$ relations from the gas phase in star-forming galaxies: \citet{Mannucci_et_al_2010} at $z=0$ (light green line with a shaded region representing $90$~percent of the SDSS galaxies), \citet{Cullen_et_al_2014} at $z\gsim2$ (triangles), \citet{Maiolino_et_al_2008} at $z\sim3.5$ (squares), \citet{Faisst_et_al_2016} at $z\sim5$ (stars), and \citet{Harikane_et_al_2020} at $z\sim6$ (crosses).} \label{fig:Mstar_Z} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{intrinsic_vs_dist.pdf} \caption{Satellite intrinsic properties as a function of the distance from the accretion-weighted centre ${\bf c}_{aw}$ in the runs \texttt{noAGN}{} ({\it left column}) and \texttt{AGNcone}{} ({\it right column}) at $z=6$. The main, central galaxy is not shown. From top to bottom: gas mass, stellar mass, star formation rate, and metallicity. Brown stars refer to the most active AGN, here defined as those galaxies hosting at least a MBH with a total accretion rate $\dot{M}_{\rm acc}>1~M_{\sun}$~yr$^{-1}$. The medians of the distributions are superimposed in each panel as filled squares. Uncertainties are quantified as the 30th and 70th percentiles of each distribution. The values of the median points and the lower bounds of the error bars not shown in the plots are equal to zero.} \label{fig:intrinsic_vs_dist} \end{figure*} To start investigating the effect of AGN feedback on the quasar environment, we compute the redshift evolution of the number of companions, as shown in Figure~\ref{fig:gal_count}. The three families of lines refer to those objects enclosed within spheres centred on the ``accretion-weighted centre'', ${\bf c}_{aw}$ (see Appendix~\ref{sec:AccretionCentre}), with increasing radii: $1$ central dominant (cD) galaxy virial radius (about $66$~kpc at $z=6$), 3 $r_{\rm vir}$, and 5 $r_{\rm vir}$. 
The number of satellites in the two simulations (red and blue lines) is very similar at all redshifts within 1 virial radius, starting from a few sources at $z=10$ and reaching about 10 objects at $z=6$. At larger distances from the cD, an increasing difference between the runs starts to arise: for $z\lesssim7$, i.e. once the outflow starts affecting the properties of the host galaxy\footnote{See Figure~8 in \citealt{Barai_et_al_2018}: $\dot{M}_{\rm out}$ increases substantially after $z\sim 8$ near the cD virial radius, and by $z\sim7$, the outflows reach the outskirts of the cluster.}, the number of satellites in the outer regions of \texttt{AGNcone}{} becomes smaller than the number in \texttt{noAGN}{}, whereas a substantial agreement is maintained at smaller radii. For $r>r_{\rm vir}$, the \texttt{AGNcone}{} satellite number reaches a peak at $z\sim6.7$ and steadily decreases at lower $z$. By contrast, in \texttt{noAGN}{} the satellite number is always a decreasing function of redshift. The highest discrepancy between the two simulations, 24 satellites, is reached at $z=6$ within 5 $r_{\rm vir}$. \subsection{Redshift evolution: satellite properties} \label{subsec:redshift_evolution} Figure~\ref{fig:redshift_evolution} shows the redshift evolution ($6 \lesssim z \lesssim 10$; for a sub-sample of snapshots) of several satellite\footnote{In each snapshot, we exclude from the computation the most massive galaxy in the two runs. 
Even if the proper way to identify a quasar companion would require identifying those galaxies which orbit around an accreting MBH host (see the method adopted in \S~\ref{subsec:EAGN}) in \texttt{AGNcone}{} and looking for their counterparts in \texttt{noAGN}{}, our approximation is adequate for the present analysis.} properties: gas mass ($M_{\rm gas}$), star formation rate (SFR), stellar mass ($M_{*}$), and gas metallicity ($Z$) in solar units. The median gas content of our satellite population slowly decreases with time, from $\sim6\times 10^{8}~\!{\rm M}_\odot$ to $\sim10^{8}~\!{\rm M}_\odot$ in about $0.5$~Gyr. As a consequence, the star-forming capability of the satellites decreases: the median SFR, in fact, varies from $\sim$1 to $0.1~\!{\rm M}_\odot$~yr$^{-1}$. The median stellar mass fluctuates around $M_{*}\sim3 \times10^{7}~\!{\rm M}_\odot$, with a shallow increase from $z=10$ to $z=6$. The ISM is consequently gradually enriched in metals as stars form and explode as SNe, with the metallicity varying between $\sim0.05$ and $0.1$~$Z_{\odot}$. The differences between the runs are minimal and well within their error bars, thus suggesting that AGN feedback only plays a minor role in shaping the overall evolution of satellite properties. We note, however, that the values of $M_{\rm gas}$ and SFR are higher in \texttt{AGNcone}{} until $z=7$, whereas $M_{*}$ and $Z$ are almost always larger in \texttt{AGNcone}{} with respect to \texttt{noAGN}{}. \subsubsection{Stellar mass-metallicity relation} \label{subsubsec:mstar-met} In Figure~\ref{fig:Mstar_Z}, we compare our results with observational data concerning the $M_*-Z$ relation in isolated galaxies. In particular, we show the gas-phase metallicity from the oxygen vs hydrogen abundance ratio for redshifts $z=6,7,8$, along with a linear fit of all these satellite populations. As suggested by observations, also in our simulations systems with increasing stellar mass are progressively more polluted in metals. 
The normalization of the sub-linear $M_*-Z$ relation increases with decreasing redshift, indicating that the overall metallicity floor of the galaxy group increases through cosmic time. Similarly to the other intrinsic properties, the general trends of the run \texttt{noAGN}{} are almost indistinguishable from the \texttt{AGNcone}{} results. If we linearly fit the $M_*-Z$ distributions at $z=6$, we notice that the slope of the best-fit relation is only slightly steeper in \texttt{noAGN}{} than in \texttt{AGNcone}{}, being $0.33\pm0.03$ versus $0.25\pm0.05$, thus consistent within the errors. This indicates that AGN feedback does not significantly change the process of gas pollution in the galaxy group as a whole. However, we also note that the difference between the zero-points is marginally larger, with $5.6\pm0.2$ versus $6.2\pm0.3$, which is compatible with a higher SF activity in the past evolution of \texttt{AGNcone}{}. The comparison between observations and our results is however not straightforward. Due to sensitivity limitations, observations are still unable to probe the low-mass end ($M_*\lesssim 10^9~\!{\rm M}_\odot$) of the relation, which is instead statistically covered by our simulated data. Future deeper observational surveys are therefore necessary to reduce this bias. In general, we note that the level of metal enrichment reached by the galaxy groups of both runs seems to agree with the extrapolation from the data of $z=6$ galaxies (\citealt{Harikane_et_al_2020}; green crosses). \subsection{Spatial distribution of companions} In this Section, we analyse how satellite properties are spatially distributed within the galaxy group, to check for any possible correlation with the AGN activity occurring in the \texttt{AGNcone}{} simulation. The kinetic feedback, as modelled in our simulations, might remove gas from galaxies close to ${\bf c}_{aw}$ and transfer it to more peripheral systems. 
The way it ultimately affects the gas distribution of the galaxy group is, however, complex. In the dark matter distribution, high density peaks tend to be clustered \citep{Bardeen_et_al_1986}, implying that the most massive halos are closer to the centre of mass of the galaxy proto-cluster (``mass segregation''). Since the centre of mass almost coincides with ${\bf c}_{aw}$ at $z=6$, one should expect the effect of quasar feedback to be higher in the closest, more massive satellites. However, as a consequence of their deeper potential wells, massive systems might retain their gas content more easily than less massive ones, thus being more resilient to the possible passage of outflows launched from the galaxy itself or coming from close companions. To study the effect of quasar feedback on the spatial distribution of satellites, in Figure~\ref{fig:intrinsic_vs_dist} we report the satellite intrinsic properties ($M_{\rm gas}$, SFR, $M_*$, $Z$) as a function of their distance from ${\bf c}_{aw}$. Both runs show a highly dense environment, where the most gas- and metal-rich, star-forming galaxies are preferentially located at small distances from the centre of the galaxy group, independent of the presence of quasar feedback. This suggests that the satellite distribution within the galaxy proto-cluster is dominated by mass segregation and quasar feedback plays, at most, a second-order role. In general, we find the distribution of satellites to be quite flat at distances larger than $100$~kpc. Still, in \texttt{AGNcone}{} there is a larger number of star-forming (${\rm SFR}\gsim1~\!{\rm M}_\odot~{\rm yr}^{-1}$) massive ($M_*\gsim10^9~\!{\rm M}_\odot$) galaxies with respect to the \texttt{noAGN}{} run. This result, along with the one reported in \S~\ref{subsec:redshift_evolution} (larger values of $M_{\rm gas}$, $M_{*}$, SFR, and $Z$ in \texttt{AGNcone}{} with respect to \texttt{noAGN}{}), suggests refining the comparison between the runs by focusing on individual satellites. 
In fact, the effect of quasar feedback on the environment could be washed out by averaging the satellite properties over the entire population. Different galaxies may indeed perceive feedback effects at different times, depending on their relative position with respect to the most accreting MBH and on the variability of the MBH itself. \section{Individual companions} \label{sec:effect_individual} \begin{figure*} \includegraphics[width=\textwidth]{stars_match.pdf} \caption{$z=6.3$ stellar surface density maps of the central $270$~kpc of \texttt{noAGN}{} ({\it left panel}) and \texttt{AGNcone}{} ({\it right panel}). Matched galaxies are highlighted in the panels with a circle of the same colour.} \label{fig:stars_match} \end{figure*} \begin{figure*} \includegraphics[width=\textwidth]{Gal_matching_SFR_Mstar.pdf} \caption{Redshift evolution of the six galaxies in the sample, as described in Section~\ref{sec:effect_individual}. From top to bottom: stellar mass, stellar mass relative difference (i.e. $[M_{*, \rm AGNcone}-M_{*, \rm noAGN}]/M_{*, \rm noAGN}$), SFR, SFR relative difference (i.e. $[\rm{SFR}_{\rm AGNcone}-\rm{SFR}_{\rm noAGN}]/\rm{SFR}_{\rm noAGN}$), distance from ${\bf c}_{aw}$, and $\mathcal{E}_{\rm AGN}^{\rm cum}$. The same colour coding of Figure~\ref{fig:redshift_evolution} is applied for the two runs. Horizontal lines mark the zero-level for the relative differences (second and fourth rows), the time-averaged position of the cD virial radius (fifth row), and the threshold $10^{51}$~erg for the cumulative integrated flux (solid lines in sixth row; see \S~\ref{subsec:EAGN}).} \label{fig:Gal_matching_SFR_Mstar} \end{figure*} The goal of this Section is to compare the SFR and $M_*$ redshift evolution of each satellite in the \texttt{AGNcone}{} run with that of the corresponding satellite in the \texttt{noAGN}{} run. 
Finding the same satellite available in both runs in the same redshift interval is not trivial, since different feedback prescriptions result in different merger rates (see \S~\ref{subsec:redshift_evolution_number}): in \texttt{AGNcone}{}, galaxies merge faster and more easily than in \texttt{noAGN}{}, possibly because of a more diffuse and massive stellar component around the galaxies (see Figure~\ref{fig:stars_match}). We thus describe in detail the procedure we follow to set up our sample of galaxy pairs. We select a sample of galaxies that are characterised by ($i$) the same position (within $5$~comoving kpc) and ($ii$) the same virial mass (within $10$\%) in both runs. Among the possible candidates from this first selection, ($iii$) we select galaxies that, at $z=6$, have different distances from ${\bf c}_{aw}$ and different masses. This condition allows us to probe different regions of the quasar environment in terms of mass segregation. Furthermore, we choose galaxies whose ($iv$) merger tree starts at least\footnote{This requirement removes from the selection those objects which suffer violent gravitational stripping processes or merge with a galaxy of equal or higher mass long before their counterparts in the other run.} at $z=9$ and reaches $z=6$. Finally, ($v$) we exclude from our sample those galaxies that have hosted a powerful AGN for a significant fraction of their evolutionary history. \noindent Although only a small fraction of satellites hosts a powerful AGN, this last condition is essential to disentangle the effect of external AGN feedback on satellites from that of internal AGN feedback. AGN-driven outflows may, in fact, subtract part of the cold gas component from the interstellar medium of the host galaxies, affecting their SFRs and stellar masses. 
To select galaxies hosting powerful AGN, we first walk backwards through the merger tree of each galaxy (selected at redshift $z$) and then we sum, at each redshift up to the formation redshift\footnote{We define the formation redshift $z_{\rm form}$ of a galaxy as the first (i.e. highest) redshift at which the first ancestor of the galaxy in the merger tree is identified.} $z_{\rm form}$, the accretion rates $\dot{M}_{\rm acc}$ of all the MBHs located within $\beta r_{\rm vir}$. After this, we compute the mean $\langle\dot{M}\rangle_{\rm acc}$ of the cumulative accretion rate of the selected galaxy over the whole redshift range and exclude from the analysis those galaxies for which ${\langle\dot{M}\rangle_{\rm acc}>0.1}$~$M_{\sun}$~yr$^{-1}$. Via this method, we always exclude at least the cD galaxy, which hosts some of the most active and massive MBHs during the whole evolution.\footnote{We note that, in principle, this criterion might miss MBHs that are highly accreting at low mass, possibly neglecting relevant effects in the case where the MBH is hosted in a low-mass galaxy.} The final selection, composed of 6 galaxies with virial masses\footnote{With an average contamination from low-resolution DM particles of a few percent for 1 out of the 6 galaxies, and null for the remaining 5 objects.} $\sim 10^{10}-10^{11}~\!{\rm M}_\odot$, is highlighted in Figure~\ref{fig:stars_match}; in Figure~\ref{fig:Gal_matching_SFR_Mstar}, we show the redshift evolution of stellar mass, SFR, and distance from ${\bf c}_{aw}$ for each galaxy of the sample. At higher redshift the values of $M_{*}$ and SFR (first and third rows) are almost identical in the two simulations, for all the galaxies. After $z\sim8$, the trends start to diverge, always resulting in higher stellar mass and SFR in \texttt{AGNcone}{} satellites (the three galaxies on the left -- i.e. A, B, and C -- have larger differences with respect to the three rightmost galaxies D, E, and F, as will be discussed in \S~\ref{subsec:EAGN}). 
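The exclusion criterion for AGN hosts can be sketched as follows; this is a simplified illustration of the procedure described above, and the function name and input format are our own assumptions.

```python
# Sketch of the powerful-AGN exclusion criterion. mdot_history holds, for each
# snapshot from z_form down to z, the total accretion rate (in Msun/yr) of all
# MBHs within beta*r_vir of the galaxy's ancestor in the merger tree.
def hosts_powerful_agn(mdot_history, threshold=0.1):
    """True if the mean cumulative accretion rate exceeds 0.1 Msun/yr."""
    mean_mdot = sum(mdot_history) / len(mdot_history)
    return mean_mdot > threshold
```
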
The sudden fluctuations observable in the evolution of some satellites (e.g. object A at $z\sim7$) are due to minor mergers or close fly-bys which temporarily increase the mass within $\beta r_{\rm vir}$. The abrupt decrease of the red line of C is due to the upcoming coalescence of the object with the cD at the very last snapshot. Hence, system C is largely stripped of both its gaseous and stellar components. \noindent The increasing differences are more easily noticeable in the relative difference panels (second and fourth rows), where positive values represent higher quantities in the \texttt{AGNcone}{} run. As is clear from the fifth row, satellite distances evolve quite differently among the selected objects, further increasing the generality of the sample: while B, C, and F approach the centre, galaxy D orbits increasingly far from the cD, and galaxies A and E do not significantly change their relative position during the entire simulation time. \begin{figure*} \includegraphics[width=0.8\textwidth]{mollview.pdf} \caption{Mollweide view of the gas radial velocity within $10$~kpc of the most accreting MBH in \texttt{AGNcone}{}, at $z=6.3$. The map shows the effect of the bi-conical feedback, resulting in two visible gaseous outflows. In the \texttt{AGNcone}{} run, the orientation of the outflow is randomly assigned to each MBH at the seeding time, and kept fixed for the whole MBH lifespan. The gas velocity distribution, however, also depends on the activity of the other MBHs present in the simulation. The direction of the emission axis for the most accreting MBHs is recovered by fitting the radial velocity map of the gas. The black stars mark the position of the opposite outflow axes, as derived from the velocity distribution, whereas the black lines represent the boundaries of the outflows, ideally confined within $45^{\circ}$ from the axis. Black dots show the angular position of the satellites at $z=6.3$. 
Larger coloured dots mark the galaxies selected for the matching study of Section~\ref{sec:effect_individual}. The colour coding is the same as in Figure~\ref{fig:stars_match}.} \label{fig:mollview} \end{figure*} To visualise the relative position of the selected satellites with respect to the geometry of the bi-conical outflows, we show in Figure~\ref{fig:mollview} a Mollweide map, where the location of each galaxy at $z=6.3$ is superimposed on the radial velocity map of the gas within a sphere of radius $10$~kpc centred on the most accreting MBH in that snapshot. \subsection{AGN feedback on individual companions} \label{subsec:EAGN} Given the non-isotropic emission of the out-flowing gas in \texttt{AGNcone}{}, the evaluation of the feedback effect cannot rely only on the distance between the source and the target galaxy. In addition, galaxies experience the influence of different AGN during distinct and discontinuous accretion periods. It is therefore necessary to define a function of both distance and emitted power that also takes into account the relative position of a target galaxy with respect to the emission axes. Let us consider a galaxy (top right in Figure~\ref{fig:toymodel}) at a distance $d$ from an accreting MBH (bottom left), subtending an angle $\Omega_{\rm gal}$. The MBH launches an outflow (dashed lines) towards the galaxy in an opening angle $\Omega_{\rm f}$ (identified by the dotted lines). 
We quantify the energy received by the target galaxy at a given redshift $z$, from any $i$-th MBH which has been accreting with $\dot{M}_{{\rm acc}, i}$ in the time interval $\Delta t_{\rm snap}$, and whose outflow cone encompasses the galaxy centre.\footnote{In the following derivation, we drop the subscript $i$ on the MBH properties to lighten the formalism.} The total gas mass involved in the feedback emission process around the MBH is $M_{\rm tot}=M_{\rm w}+M_{\rm e}$, where $M_{\rm w}$ is the mass of the ejected wind and $M_{\rm e}$ is the mass of the gas in the environment surrounding the MBH and entrained within the outflow. Given the geometry of the feedback prescription, $M_{\rm e}$ can be written as \begin{equation} M_{\rm e} = \frac{\Omega_{\rm f}}{4\pi}\frac{4\pi}{3}\rho d^3, \label{eq:menv} \end{equation} where $\rho$ is the average density around the MBH. If $v_{\rm f}$ is the velocity of the gas when it hits the target galaxy, then momentum conservation\footnote{We assume that the energy of the outflow can be radiated away during the gas migration, because of gas heating and shock fronts.} allows us to write \begin{equation} M_{\rm w}v_{\rm w}+M_{\rm e}v_{\rm e} = v_{\rm f}M_{\rm tot}, \label{eq:momcons} \end{equation} where we assume $v_{\rm e}=0$ when the outflow is launched. Neglecting internal energy\footnote{The energy of the outflow is almost completely kinetic, also according to the feedback recipe.}, the energy deposited by the $i$-th MBH on the target galaxy ($\mathcal{E}_{\rm AGN, i}$) is related to the final kinetic energy $E_{\rm f}$ through the relation: \begin{equation} \mathcal{E}_{\rm AGN, i} \equiv E_{\rm f} \frac{\Omega_{\rm gal}}{\Omega_{\rm f}} = \frac{1}{2}M_{\rm tot}v_{\rm f}^{2} \frac{\Omega_{\rm gal}}{\Omega_{\rm f}}, \label{eq:Edep1} \end{equation} where the factor $\Omega_{\rm gal}/\Omega_{\rm f}$ accounts for the fact that only a fraction of the total mass ejected by the MBH actually intercepts the target galaxy. 
Isolating $v_{\rm f}$ from Equation~\ref{eq:momcons} and considering that the envelope mass in Equation~\ref{eq:menv} fully dominates over the wind mass (i.e. $M_{\rm tot} \simeq M_{\rm e}$), Equation~\ref{eq:Edep1} becomes \begin{equation} \mathcal{E}_{\rm AGN, i} = \frac{3}{2}v_{\rm w}^2 \frac{M_{\rm w}^2}{\rho d^3} \frac{\Omega_{\rm gal}}{\Omega_{\rm f}^2}. \label{eq:Edep2} \end{equation} In Equation~\ref{eq:Edep2}, the solid angle $\Omega_{\rm gal}$ -- subtended by the target galaxy with respect to the MBH position -- can be approximated as \begin{equation} \Omega_{\rm gal} = \frac{A_{\rm gal}}{d^2}, \label{eq:SA} \end{equation} where $A_{\rm gal}$ is the projected area of the galaxy, as seen from the AGN.\footnote{We note that the correct formula for a curved surface subtending the solid angle $\Omega_{\rm gal}$ is $\Omega_{\rm gal}=2\pi(1-\frac{d}{\sqrt{r_{\rm gal}^2+d^2}})$. Our approximation fails only when $d \sim r_{\rm gal}$, which occurs in a negligible number of cases; in those few cases, our method overestimates the solid angle by at most a factor of 1.7.} In our analysis we use $A_{\rm gal} = \pi \beta^2 r_{\rm vir}^2$, with $\beta = 0.3$, in agreement with our choice to compute the properties of the halos (see Section~\ref{sec:sample}). At the same time, $\Omega_{\rm f} = 2\cdot2\pi(1-\cos \alpha)$, with $\alpha = \pi/4$ in \texttt{AGNcone}{}, accounting for the two cones over which the energy is distributed. \begin{figure} \includegraphics[width=0.48\textwidth]{toymodel1.pdf} \centering \caption{Schematic representation of the model applied to \texttt{AGNcone}{}. From an accreting MBH, a gaseous outflow with mass $M_{\rm w}$ is ejected with velocity $v_{\rm w}$ in the cone with aperture $\Omega_{f}$ and intercepts a satellite at the distance $d$, subtending an angle $\Omega_{\rm gal}$. 
$M_{\rm e}$ is the gas envelope mass encountered by the outflow when it is ejected.} \label{fig:toymodel} \end{figure} In order to estimate $M_{\rm w}$, we multiply the outflow rate $\dot{M}_{\rm w}$ by the snapshot time interval $\Delta t_{\rm snap}$: \begin{equation} M_{\rm w} = \dot{M}_{\rm w} \Delta t_{\rm snap}, \label{eq:Mdot_Deltat} \end{equation} where $\dot{M}_{\rm w}$ is obtained from the accretion rate $\dot{M}_{\rm acc}$ via equations~\ref{eq:Lrad} and \ref{eq:Efeed}. Accordingly, the energy-conservation equation can be written as \begin{equation} \dot{E}_{f} = \frac{1}{2}\dot{M}_{\rm w}v_{\rm w}^2 = \epsilon_{\rm f}\epsilon_{\rm r}\dot{M}_{\rm acc}c^2. \label{eq:Erad1} \end{equation} Finally, we can write down the form of the deposited energy by substituting $M_{\rm w}$ in Equation~\ref{eq:Edep2}: \begin{equation} \mathcal{E}_{\rm AGN, i} = 6 \pi \epsilon_{\rm r}^2 \epsilon_{\rm f}^2 c^4 \beta^2 \frac{(\Delta t_{\rm snap})^2}{v_{\rm w}^2} \frac{\dot{M}_{\rm acc}^2 r_{\rm vir}^2}{\Omega_{\rm f}^2 \rho d^5}. \label{eq:Edep3} \end{equation} Operatively, we evaluate the gas density $\rho$ by summing up all the gas particles in the cone subtended by the solid angle $\Omega_{\rm gal}$ with respect to the $i$-th MBH, and the accretion rate $\dot{M}_{\rm acc}$ at the time $t-\Delta t_{\rm snap}$, in agreement with what is done in Appendix~\ref{sec:AccretionCentre}. The derived Equation~\ref{eq:Edep3} has a quite intuitive dependence on the MBH accretion rate, $\dot{M}_{\rm acc}^2$ (the stronger the AGN, the higher the effect), and on the size of the galaxy, proportional to $r_{\rm vir}^2$ (the larger the galaxy projected area with respect to the AGN, the higher the energy harvested by the system). The inverse dependencies on the MBH--galaxy distance ($d^5$) and on the density of the circumgalactic medium (CGM) are also expected. 
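To make the model concrete, Equation~\ref{eq:Edep3} can be evaluated numerically as in the following sketch; this is our own illustrative implementation in cgs units, using the parameter values of the \texttt{AGNcone}{} run ($\epsilon_{\rm r}=0.1$, $\epsilon_{\rm f}=0.05$, $\beta=0.3$, $v_{\rm w}=10^4$~km~s$^{-1}$, $45$-degree half-opening bi-cone).

```python
import math

# Illustrative evaluation of the deposited-energy model (Eq. Edep3), cgs units.
C = 2.998e10                                   # speed of light [cm/s]
EPS_R, EPS_F, BETA = 0.1, 0.05, 0.3            # efficiencies and aperture factor
V_W = 1.0e9                                    # wind velocity: 1e4 km/s in cm/s
# Solid angle of the bi-cone: two cones with 45-degree half-opening angle.
OMEGA_F = 2.0 * 2.0 * math.pi * (1.0 - math.cos(math.pi / 4.0))

def deposited_energy(mdot_acc, r_vir, rho, d, dt_snap):
    """E_AGN,i = 6*pi*eps_r^2*eps_f^2*c^4*beta^2*(dt^2/v_w^2)
                 * Mdot_acc^2*r_vir^2 / (Omega_f^2*rho*d^5)  [erg]."""
    return (6.0 * math.pi * EPS_R**2 * EPS_F**2 * C**4 * BETA**2
            * dt_snap**2 / V_W**2
            * mdot_acc**2 * r_vir**2 / (OMEGA_F**2 * rho * d**5))
```

As a consistency check, the same number is obtained by combining the intermediate steps of the derivation: $M_{\rm w} = 2\epsilon_{\rm f}\epsilon_{\rm r}\dot{M}_{\rm acc}c^2\Delta t_{\rm snap}/v_{\rm w}^2$ from Equations~\ref{eq:Mdot_Deltat}--\ref{eq:Erad1}, $\Omega_{\rm gal} = \pi\beta^2 r_{\rm vir}^2/d^2$, and Equation~\ref{eq:Edep2}.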
As a matter of fact, the density of the environment and the length of the path that the outflow has to travel both contribute to lowering the final energy. In a purely momentum-conserving scenario the kinetic energy of the ejected gas is continuously distributed over the surrounding medium and, in very dense environments, can be completely lost in the CGM before reaching the target system. A final consideration concerns the solid angle $\Omega_{\rm f}$, which favours more collimated outflows. On the one hand, this factor reduces the volume over which the feedback energy is diluted; on the other hand, it strongly affects the probability that a galaxy is reached by the outflow at all. In order to quantify the total effect of the MBH population on each galaxy, we define the total $\mathcal{E}_{\rm AGN}$ as \begin{equation} \mathcal{E}_{\rm AGN} = \sum_{i=1}^8 \mathcal{E}_{{\rm AGN}, i}, \end{equation} where the summation is carried out over the eight most accreting MBHs, as discussed in Appendix~\ref{sec:AccretionCentre}. In our model, for a given target galaxy, we consider the contribution of the $i$-th accreting MBH only if the galaxy centre of mass is enclosed within the boundaries of the outflow launched by the AGN (see the black solid lines in Figure~\ref{fig:mollview}). This equation provides a rough estimate of the energy deposited in a time interval $\Delta t_{\rm snap}$ by the most accreting AGN on a target galaxy at redshift $z$. At any generic redshift $z$, the cumulative energy deposited on a target galaxy ($\mathcal{E}_{\rm AGN}^{\rm cum}$) is given by the sum of $\mathcal{E}_{\rm AGN}$ over all snapshots between the galaxy formation redshift $z_{\rm form}$ and $z$: \begin{equation} \mathcal{E}_{\rm AGN}^{\rm cum} \equiv \kern-1em \sum_{\quad z=z_{\rm form}}^{z} \kern-1em \mathcal{E}_{\rm AGN}. 
\label{eq:Edep4} \end{equation} The results of this model are shown in the bottom row of Figure~\ref{fig:Gal_matching_SFR_Mstar}, where the horizontal line at $\mathcal{E}_{\rm AGN}^{\rm cum}=10^{51}$~erg corresponds to the energy injected into the medium by a single SN event, and serves only to help the reader distinguish the objects in relation to the energy they received. According to the values of $\mathcal{E}_{\rm AGN}^{\rm cum}$, we separate the galaxies more affected by the AGN feedback (A, B, C, on the left) from the less affected ones (D, E, F, on the right). The SFR of galaxies in the first group is more enhanced (up to a factor of 3) with respect to the second group (less than a factor of 2). We note that in A, B, and C, $\mathcal{E}_{\rm AGN}^{\rm cum} \gtrsim 10^{51}$~erg from $z\sim 8$ up to the end of the simulation, while in D, E, and F $\mathcal{E}_{\rm AGN}^{\rm cum} \gtrsim 10^{51}$~erg only for $6<z<7.5$. In other words, it seems that galaxies receiving energy from AGN feedback for a longer time are more strongly affected, in terms of SFR and $M_*$. We discuss in the next section a possible interpretation of these findings. \section{Interpretation} \label{sec:discussion} In this section, we first summarise the main findings obtained from the comparison between the runs \texttt{noAGN}{} and \texttt{AGNcone}{}, and we then propose an interpretation of our results. In both runs, we have identified a sample of galaxies in the redshift range $6<z<10$ and characterized their properties (distance from the center of the galaxy groups, SFR, $M_*$, $M_{\rm gas}$, $Z$).
From the comparison between the samples extracted from the two simulations we can conclude that: \begin{enumerate}[label=(\alph*)] \item \label{itm:sat_num} In the \texttt{AGNcone}{} run satellites are less numerous, especially in the outer regions, and they are hosted by more massive DM halos (see Section~\ref{sec:sample}); \item \label{itm:sat_sfr} Although the differences between the median properties of the two samples at all redshifts are not noteworthy, individual \texttt{AGNcone}{} companions are more star-forming and massive; the SFR enhancement in those satellites that are influenced the most by the surrounding active AGN reaches a factor of up to 4 (see Sections~\ref{sec:sample} and \ref{sec:effect_individual}). \end{enumerate} We suggest that point \ref{itm:sat_num} is due to the different merger rates in the two simulations. Figure~\ref{fig:stars_match} shows the presence of a diffuse stellar component in the proto-cluster environment of \texttt{AGNcone}{}, absent in \texttt{noAGN}{}. This is due to the combined activity of the dimly accreting MBHs, hosted in the satellites orbiting in the outskirts of the group, and of the powerful quasars located at the centre. The dispersed gas and stars can boost the effect of dynamical friction when two galaxies approach, thus lowering the dynamical timescale for a merger to occur. DM halos follow a similar trend, suggesting that differences in the baryonic component are transferred to DM structures that, in the \texttt{AGNcone}{} run, become more massive and therefore fewer in number. Interestingly, the effect is only relevant far from the cD and almost absent near the centre, where the dynamics is completely dominated by the cD environment. Strong feedback could thus intrinsically favour coalescence, decrease the number of systems, and speed up their bottom-up growth. These phenomena play a fundamental role in the number count of satellites around $z=6$ quasars, as discussed in the next section.
Concerning point \ref{itm:sat_sfr}, we note that AGN feedback does not significantly affect the evolution of the satellite population as a whole. The medians of the satellite properties show only minor differences, which are nonetheless compatible with a scenario where, in \texttt{noAGN}{}, a higher $M_{\rm gas}$ results in a higher SFR, until the exhaustion of the gas reservoirs. We note that the cumulative effect on the SFR and $M_{*}$ is due both to the higher merger rate (more likely to occur far from the cD) and to the direct effect of the outflow on the satellite (stronger at smaller distances from the cD). We suggest that the enhanced SF in those satellites directly invested by the AGN outflows can be explained by ($i$) an increase of the fuel available for the SF process, ($ii$) a boost in its efficiency due to the induced shocks, ($iii$) the formation of stars within the outflow itself \citep[][]{Maiolino_et_al_2017,Gallagher_et_al_2019}. Several pieces of evidence of positive AGN feedback on galaxy satellites have been reported in the literature. \citet{Gilli_et_al_2019} analyse the deep multi-band field of a type II radio galaxy at $z=1.7$ and discover an over-density of galaxies. These authors suggest that the star formation in satellites is promoted by the compression of their cold interstellar medium around the AGN-inflated bubbles. The possibility of jet-induced star formation is also consistent with the SF measurement of a galaxy near the local radio galaxy NGC--541 \citep{Croft_et_al_2006}, and supported by dedicated numerical simulations \citep{Fragile_et_al_2017}. Recently, \citet{Martin-Navarro_et_al_2021} measured $M_*$ and SFR in a large sample of satellites of SDSS galaxies, finding that their SFR is modulated by their relative position with respect to the central galaxies, being higher in those objects presumably affected by the outflows from the central galaxies.
These authors concluded that AGN-driven outflows can influence the environment well beyond the host galaxy ($1-2~R_{\rm vir}$), preserving (or even enhancing) the SF of those satellites along the direction of the injected energy. There are some unavoidable limitations associated with our model for quantifying $\mathcal{E}_{\rm AGN}$. On the one hand, being based only on momentum conservation, it would underestimate the final energy deposited into the target galaxy by a fully energy-conserving outflow. On the other hand, the outflow launched by the AGN may interact with intervening ISM and CGM material, potentially losing a significant amount of energy, or even being stopped in the most extreme cases. This would instead lead us to overestimate the final impact on the satellite. In addition, the limited spatial and temporal resolution of the simulation suite, the random explosion of SNe, and the possible accretion episodes of MBHs within the satellites themselves\footnote{We remark though that we limit the impact of AGN hosted in satellites by excluding those objects with ${\langle\dot{M}\rangle_{\rm acc}>0.1}$~$M_{\sun}$~yr$^{-1}$.}, may dilute the effect of feedback from external AGN. All these caveats prevent us from entirely disentangling the effects of the several physical processes affecting the CGM and enhancing the SF in satellite galaxies. \section{Comparison with observations} \label{sec:observations} \begin{figure*} \centering \includegraphics[width=\textwidth]{observative_vs_dist_2.pdf} \caption{[CII] emission (top row) and UV emission (bottom row) as a function of distance from ${\bf c}_{aw}$, for the same $z=6$ sample of Figure~\ref{fig:intrinsic_vs_dist}: \texttt{noAGN}{} on the {\it left} and \texttt{AGNcone}{} on the {\it right}. Horizontal dashed lines provide an estimate of the current instrumental sensitivities, derived from some of the deepest observations in the various bands.
As in Figure~\ref{fig:intrinsic_vs_dist}, where the brown stars mark the hosts of the most active AGN in \texttt{AGNcone}{}, the filled squares show the medians of the distributions and the error bars refer to the 30th and 70th percentiles. The values of the median points and the lower bounds of the error bars not shown in the plots are equal to zero.} \label{fig:observative_vs_dist} \end{figure*} In order to evaluate how AGN feedback affects our capability to detect an over-density through the number counts of galaxies, we compute both the [CII] and UV luminosity of all the simulated satellites, as detailed in Appendix~\ref{sec:observational_properties}. Figure~\ref{fig:observative_vs_dist} shows the satellite luminosity as a function of the distance from the center of the system at $z=6$. Horizontal dashed lines mark the limiting luminosities $L_{\rm lim}$ of current observational campaigns at high redshift: $L_{\rm lim, [CII]} = 10^{8} L_{\odot}$ \citep[][hereafter \citetalias{Venemans_et_al_2020}]{Venemans_et_al_2020} and $L_{\rm lim, UV} = 10^{10} L_{\odot}$ \citep[][]{Marshall_et_al_2020}. In agreement with our previous results, \texttt{AGNcone}{} produces more luminous, and thus more easily detectable, galaxies than the control simulation. In detail, above $L_{\rm lim, [CII]}$ ($L_{\rm lim, UV}$), there are 1 and 3 (1 and 5) satellites in \texttt{noAGN}{} and \texttt{AGNcone}{}, respectively.\footnote{The same result holds even if $L_{\rm lim, [CII]}=10^{7}~L_{\odot}$ ($L_{\rm lim, UV}=10^{9}~L_{\odot}$) - a luminosity threshold reasonably achievable with current facilities - where the median [CII] (UV) luminosities of the satellites are $3.6 \times 10^{6}$ and $7.8\times 10^{6}~L_{\odot}$ ($4.2 \times 10^{9}$ and $1.8\times 10^{10}~L_{\odot}$) in \texttt{noAGN}{} and \texttt{AGNcone}{}, respectively.} For a comparison with UV data we refer the reader to \citet[][]{Di_Mascia_et_al_2021a}, where radiative transfer calculations are fully accounted for.
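The threshold counting used here, together with the 68 percent Poissonian bands adopted later for the number counts, can be sketched in a few lines of Python. The confidence bounds use the common approximate Gehrels (1986) formulae, and the toy luminosities below are invented for illustration:

```python
import numpy as np

def n_detectable(luminosities, l_lim):
    """Number of satellites brighter than the threshold l_lim."""
    return int(np.sum(np.asarray(luminosities) > l_lim))

def poisson_68(n):
    """Approximate 68% (1-sigma) Poisson confidence bounds on a count n,
    following the widely used approximate Gehrels (1986) formulae."""
    upper = n + np.sqrt(n + 0.75) + 1.0
    if n > 0:
        lower = n * (1.0 - 1.0 / (9.0 * n) - 1.0 / (3.0 * np.sqrt(n)))**3
    else:
        lower = 0.0
    return lower, upper

# Toy example: three satellites against a [CII] threshold of 1e8 L_sun
lums_cii = [3.6e6, 5.0e8, 2.0e9]      # invented values
n_det = n_detectable(lums_cii, 1e8)   # -> 2
```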
Here, we focus on the comparison between our predictions and currently available ALMA data of $z\sim 6$ quasars. In particular, we consider the results of a recent high-resolution ALMA survey of 27 (previously [CII]-detected) quasars at $z\sim6$ by \citetalias{Venemans_et_al_2020} \citep[see also][]{Decarli_et_al_2017, Decarli_et_al_2018}. The authors detected 17 companions\footnote{This number refers to satellites observed at $\Delta v \leq 1000$~km~s$^{-1}$ from the central quasar. The same number increases up to 19 for $\Delta v \leq 2000$~km~s$^{-1}$.} with $L_{\rm [CII]}\gtrsim10^{8} L_{\odot}$, corresponding to an average of $0.6$ companions per field. We further notice that some of the \citetalias{Venemans_et_al_2020} observed quasars present multiple (2-3) companion galaxies. \begin{figure*} \centering \includegraphics[width=\textwidth]{sat_counts_vs_CII.pdf} \caption{Number of satellites in \texttt{AGNcone}{} (red) and \texttt{noAGN}{} (blue) with $L_{\rm [CII]}>L_{\rm lim}$, as a function of $L_{\rm lim}$, in a volume of about $680^{3}$~kpc$^{3}$. Shaded areas show the related Poissonian errors \citep[with a 68~percent confidence level;][]{Gehrels_1986}. From left to right, the vertical dotted lines mark the [CII] sensitivity of an ALMA observing program of 10 and 1 hour on source, respectively. Green symbols show the latest observational data from \citetalias{Venemans_et_al_2020}: filled circles refer to the whole sample of 17 satellites around 27 quasars, filled squares refer only to the quasar population with $L_{\rm FIR} \gtrsim 10^{13}L_{\odot}$, and Xs mark the densities around the most FIR-luminous quasar, i.e. J0305-3150.} \label{fig:satnum_vs_thre} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{JWST.pdf} \caption{Analogously to Figure~\ref{fig:satnum_vs_thre}, we show the number of observable satellites in the UV band as a function of the luminosity threshold $L_{\rm lim, UV}$.
Solid lines refer to the same UV luminosities, corrected for dust extinction, of Figure~\ref{fig:observative_vs_dist}. The uncertainties, shown by the shaded areas, are estimated by summing the contribution of the Poissonian noise (evaluated as in Figure~\ref{fig:satnum_vs_thre}) and the variation range from the minimum to the maximum optical depth considered. From left to right, the dotted vertical lines show the sensitivity thresholds that JWST can reach with 10~hr and 1~hr of observing time, respectively.} \label{fig:JWST} \end{figure*} In Figure~\ref{fig:satnum_vs_thre} we show the number of detectable satellites in our simulations as a function of $L_{\rm lim}$. We select all those satellites above the luminosity threshold $L_{\rm lim}$ and within a spherical volume of about $680^{3}$~kpc$^{3}$, equivalent to the average volume\footnote{The companions in \citetalias{Venemans_et_al_2020} are observed in a field of view of $\pi95^{2}$~kpc$^{2}$ on the sky plane, i.e., where the sensitivity of the primary beam equals $0.2$ times the peak. We consider here the companions observed within $\pm 1000$~km~s$^{-1}$ from the quasar, corresponding to about $2.8$~Mpc at $z\sim6$.} observed by \citetalias{Venemans_et_al_2020}. Three subsets of the \citetalias{Venemans_et_al_2020} companion list are also shown: the mean satellite number of the whole 27-quasar sample (green circles), the mean satellite number of the 4 most FIR-luminous quasars ($L_{\rm FIR} \gtrsim 10^{13}~L_{\odot}$; green squares), and the satellites of the most FIR-luminous quasar (J0305-3150, with $L_{\rm FIR} = 1.2\times10^{13}~L_{\odot}$; green crosses). Figure~\ref{fig:satnum_vs_thre} shows that the number of detected satellites in \citetalias{Venemans_et_al_2020} increases with the FIR luminosity of the central quasar. A possible explanation for this trend is that more FIR-luminous quasars are likely hosted by galaxies with higher SFR, and therefore in more massive halos.
As a result, they tend to be more biased. From a comparison between our results and the most FIR-luminous quasar in the ALMA sample, we note that \texttt{AGNcone}{} \citep[where the cD has $L_{\rm FIR} \sim 5\times 10^{13}L_{\odot}$;][]{Di_Mascia_et_al_2021a} agrees quite well with ALMA data, whereas \texttt{noAGN}{} predicts a number of satellites lower than observed. This implies that, for the most FIR-luminous source, the positive feedback of quasar outflows on galaxy companions is required to reproduce observational data, even though more simulations are required to verify this effect with better statistics. In contrast, although the average observed number count is well within the Poissonian noise (shaded areas), it is systematically lower than the predictions from both our runs. This can be explained in two ways: (i) not all the quasars, but only the most FIR-luminous ones, live in over-dense regions; (ii) the number of satellites detected by \citetalias{Venemans_et_al_2020} only provides a lower limit to the actual number. The latter hypothesis can be related to observational artifacts. First of all, the finite angular resolution of real data can prevent us from correctly quantifying the actual number of satellites around the quasar. In fact, [CII] emission in the bright, central regions of $z\sim 6$ quasars has typical sizes of about $1-5$~kpc and may therefore arise from multiple, unresolved sources. While counting satellites in simulations, we are instead assuming that all of them, even the closest to the central source (within $\sim1$~kpc), are spatially resolved. The inclusion of this observational artifact in our simulations would therefore automatically improve the agreement with observations. We further note that, while ALMA observations probe distances up to $2-3$~Mpc from the central quasar, the high-resolution volume of our simulations is limited to the inner $200-300$~kpc.
If we consider ALMA observations only in a region within $\pm 200$~km~s$^{-1}$ from the center (corresponding to about $280$~kpc at $z=6$), the average number of satellites is $0.3$ (i.e., 9 galaxies around 27 quasars), resulting\footnote{These are the objects in a volume $V_{\rm A}=\pi R_{fov}^2 L \approx (200)^{3}$~kpc$^{3}$, where $R_{fov} = 95$~kpc is the radius of the ALMA field of view (fov) and $L = 280$~kpc is the displacement along the line of sight.} in a mean density of about $4 \times 10^{-8}$ satellites per kpc$^{3}$. In the \texttt{noAGN}{} and \texttt{AGNcone}{} simulations we retrieve mean densities of $2\times 10^{-8}$ and $6\times 10^{-8}$ satellites per kpc$^{3}$, respectively. This suggests that ALMA observations in this restricted volume are in remarkable agreement with the mean density expected from our simulations. Finally, the large asymmetry of the volume probed by ALMA ($\lesssim 100$~kpc on the sky plane versus $\gtrsim 2-3$~Mpc along the line of sight) could introduce a further bias. In some quasars observed by \citetalias{Venemans_et_al_2020} the number of satellites is larger than the average (e.g. J0305-3150 in Figure~\ref{fig:satnum_vs_thre}). We can explain these cases as ``fortunate sources'', where the distribution of satellites is somewhat aligned with the ALMA volume. In these sources, ALMA observations are able to detect the real and total amount of companions, which is also perfectly compatible with the number count predicted in \texttt{AGNcone}{}, where the density is computed without assuming a preferred line of sight. A more quantitative statistical analysis of this hypothesis will be the focus of future work. \subsection{Predictions for ALMA} \label{subsec:ALMA} Most of the ALMA data available so far to study $z\sim 6$ quasars are limited to shallow observations ($\lesssim 1$~hr per source). It is therefore interesting to investigate what we can learn from deeper observations.
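As a sanity check, the ALMA volume and mean-density figures quoted above follow from a few lines of arithmetic (a minimal Python sketch):

```python
import math

# ALMA field-of-view radius and line-of-sight depth from the footnote (kpc)
R_fov = 95.0
L = 280.0

V_A = math.pi * R_fov**2 * L     # volume, approximately (200 kpc)^3
n_mean = (9 / 27) / V_A          # 9 companions around 27 quasars

print(round(V_A ** (1 / 3)))     # 199, i.e. ~200
print(f"{n_mean:.1e}")           # 4.2e-08 satellites per kpc^3
```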
Figure~\ref{fig:satnum_vs_thre} shows that, if quasars are hosted in over-dense environments (as predicted, by construction, by our simulations), we would be able to detect up to 6 satellites (10 as an upper limit) in the neighbourhood of a luminous quasar with $10$~hours of ALMA observing time. In the case of much deeper observing programs ($L_{\rm lim} <10^{6.5}~L_{\odot}$), the number of observable satellites in \texttt{AGNcone}{} is smaller than in \texttt{noAGN}{}, contrary to what occurs for shallower observations. This inverted trend is possibly due to the two-fold effect of quasar feedback on the surrounding satellites: enhancing the SFR (and therefore the luminosity) of luminous satellites and lowering their intrinsic number. Although such deep observations are beyond the capabilities of current (sub-)millimeter observatories, the discussed trend still implies that quasar feedback may leave signatures on the slope of the satellite number counts, thus motivating further investigations on this topic. However, we remark that the limited resolution of our simulation could play a role in this effect, since low-mass galaxies could be differently impacted by quasar feedback in simulations with higher resolution, leading to different results. \subsection{Predictions for JWST} Finally, we focus on JWST in order to understand how this mission will improve our knowledge of high-$z$ quasar properties and their environment. Figure~\ref{fig:JWST} shows the expected number of satellites emitting in the UV as a function of their luminosity $L_{\rm UV}>L_{\rm lim, UV}$. The UV emission of satellites is corrected for dust attenuation with a dust-to-metal ratio $f_{d} = 0.08$, according to the results of \citet[][see Appendix~\ref{sec:observational_properties} for further details]{Di_Mascia_et_al_2021b}, and the shaded area takes into account both the minimum and maximum optical depth resulting from radiative transfer calculations, and the Poissonian uncertainty.
According to \texttt{AGNcone}{}, we would be able to detect between 3 and 8 satellites with 1~hr of observing time, and between 4 and 10 satellites via a 10~hr observing program. We also report our predictions for the \texttt{noAGN}{} case. Although we demonstrated that the absence of AGN feedback would decrease our capability of detecting quasar companions, this represents a lower limit ($1-2$ satellites with $1$~hr of observing time and $2-6$ objects with $10$~hr) for a deep JWST observing program on a single quasar. These calculations, along with the ones presented in Figure~\ref{fig:satnum_vs_thre}, demonstrate that both ALMA and JWST will be essential to improve our understanding of the environment of high-$z$ quasars, by increasing the number of observed satellites and probing different scales and emission processes. In particular, the synergy between these two observatories will be of utmost importance, since each instrument will compensate for the limitations of the other: on the one hand, the larger field of view of JWST with respect to ALMA ($\sim$ $10$~arcmin$^2$ versus $1$~arcmin$^2$) will allow us to probe broader regions around quasars; on the other hand, ALMA is able to reveal even those satellites that are dust-obscured and may elude JWST observations. \section{Summary and conclusions} \label{sec:conclusions} We have investigated the effects of quasar outflows (i.e. feedback) on the visibility of companion galaxies by comparing two cosmological zoom-in simulations of a $z\sim6$ quasar, in which AGN feedback is either included or turned off. We have identified satellites in both runs and determined their key physical properties, such as gas and stellar mass, star formation rate, and metallicity. \noindent Our findings can be summarized as follows.
\begin{itemize} \item Within the virial radius of the central galaxy, the number of satellites in the two runs increases with time in a very similar way, resulting in 10 companions at $z \approx 6$. At larger distances, the number of satellites in \texttt{AGNcone}{} is 30 percent smaller than in \texttt{noAGN}{}; we suggest that this effect is due to AGN-driven outflows that, by dispersing stars and gas in the surrounding region, boost the dynamical friction and, consequently, increase the galaxy merger rate. The effect is negligible at smaller distances because the orbital dynamics and the CGM density are fully dominated by the presence of the central galaxy. % \item The redshift evolution of the satellite median properties does not show striking differences between the runs. % $M_{\rm gas}$ decreases from $10^9~\!{\rm M}_\odot$ at $z\sim 10$ to $10^8~\!{\rm M}_\odot$ at $z\sim 6$; the SFR decreases from 1 to $0.1~\!{\rm M}_\odot$~yr$^{-1}$; $M_*$ does not evolve appreciably from $\sim 3\times 10^7~\!{\rm M}_\odot$; $Z$ increases from $0.05$ to $0.2~\rm Z_{\odot}$. % To get a deeper insight into the effect of AGN feedback on its environment, we have thus considered a sub-sample of satellites, located at different distances and positions with respect to the center of the group, and followed the redshift evolution of their intrinsic properties ($M_*$ and SFR) in both runs. % For all these satellites, we found that both $M_*$ and SFR grow faster in \texttt{AGNcone}{} than in \texttt{noAGN}{}. % We argue that the SFR enhancement in those satellites engulfed by the quasar outflow can be due to an increase of the fuel available for the SF process and/or a boost in the SF efficiency due to the induced shocks. Both possibilities are linked to the kind of feedback and SF recipes implemented in the simulation: gas particles are moved away from the galaxies hosting the most accreting MBHs towards the surrounding satellites.
% \item We have developed a semi-analytical model based on momentum conservation to quantify the total energy deposited on a target galaxy by the surrounding active AGN, taking into account their duty cycles and accretion rates, the outflow orientation, the circumgalactic medium (CGM) density, and their relative distance from the target galaxy. We found positive feedback in numerous satellites, in proportion to the total energy received from the accreting MBHs over their whole evolutionary history. % \item We have computed the [CII]158$\mu$m emission of satellites and studied the effect of quasar feedback on their observed number count. When compared with the most FIR-luminous quasar of the ALMA sample, the \texttt{noAGN}{} run predicts a number of satellites lower than observed, while the \texttt{AGNcone}{} run agrees quite well with ALMA data. This implies that, for the most FIR-luminous source, the positive feedback of quasar outflows on the SFR of galaxy companions is instrumental in reproducing observational data. However, the average number of satellites observed in the whole quasar sample is lower than our predictions for both runs. This can be explained in two ways: (i) not all the quasars, but only the most FIR-luminous ones, live in over-dense regions; (ii) the number of satellites reported by \citetalias{Venemans_et_al_2020} only provides a lower limit to the actual number, as a consequence of observational artifacts (e.g. finite angular resolution and the anisotropic volume probed by ALMA). % \item We predict that JWST will double the current ALMA detections after just $1$~hr of observing time and will allow us to directly observe a major part of the quasar neighbourhood, avoiding any bias introduced by the small ALMA field of view. Still, ALMA will be necessary to reveal those satellites around $z\sim 6$ quasars that are dust-obscured and may elude JWST observations.
We find that with $10$~hr of observing time, ALMA and JWST will detect up to $10$ satellites per source. \end{itemize} In conclusion, we did not find any evidence of AGN-driven quenching of the star formation of satellites surrounding high-$z$ quasars. Thus, we rule out a scenario in which the detection of companions can be undermined by the external feedback. AGN feedback could even be instrumental in explaining the high satellite number count observed around the most FIR-luminous quasars and -- after correcting data for ALMA observational biases -- in the whole population observed so far. \section*{Acknowledgements} The authors thank Bram Venemans and Roberto Decarli for helpful insights on ALMA [CII] data. SG acknowledges support from the ASI-INAF n. 2018-31-HH.0 grant and PRIN-MIUR 2017. AF and SC acknowledge support from the ERC Advanced Grant INTERSTELLAR H2020/740120. Partial support from the Carl Friedrich von Siemens-Forschungspreis der Alexander von Humboldt-Stiftung Research Award (AF) is kindly acknowledged. We gratefully acknowledge computational resources of the Center for High Performance Computing (CHPC) at SNS. AL acknowledges funding from MIUR under the grant PRIN 2017-MB8AEZ. PB acknowledges support from the Brazilian Agency FAPESP (grants 2016/01355-5, 2016/22183-8). The authors greatly thank the anonymous referee for useful comments which improved the quality of this manuscript. \section*{Data Availability Statement} The data underlying this article will be shared on reasonable request to the corresponding author.
\section{} Repulsive interactions in Fermi systems are at the heart of some of the most interesting phenomena in quantum many-body physics. For instance, the interplay between the spin and orbital degrees of freedom gives rise to Stoner's itinerant ferromagnetism in the continuum~\cite{Stoner_1933} and to the complex phases of the repulsive Hubbard model on a lattice~\cite{mielke_1993}. The dilute repulsive spin-1/2 Fermi gas, where the interactions between two spin states $\uparrow$ and $\downarrow$ are described by a positive $s$-wave scattering length $a$, is one of the most fundamental quantum many-body models~\cite{Huang_1957,lee_1957,Galitskii_1958}. Among its important features, it is amenable to first-principles perturbative calculations (for $k_\text{F}a\ll 1$, where $k_\text{F}$ is the Fermi wavenumber). In that limit, its properties (e.g. ground-state energy, Landau parameters, etc.) are universal, i.e. they depend on $a$ alone, not on the details of short-range physics~\cite{Galitskii_1958,Landau_1957,Efremov_2000}. Ultracold atomic gases have emerged as a powerful platform for studying this model, because effective repulsion can be implemented on the so-called `upper' (repulsive) branch using short-range attractive potentials~\cite{Jo_2009,Sanner_2012,Lee_2012,Valtolina_2017,Scazza_2017,Amico_2018,Scazza_2020}. This implementation is particularly interesting because it can realize the regime of strong ($k_\text{F}a\gtrsim 1$) yet short-range interactions ($k_\text{F}r_0\ll 1$, where $r_0$ is the potential range), see e.g.~\cite{Pricoupenko_2004,Shenoy_2011}. However, the repulsive Fermi gas with short-range attractive potentials is intrinsically metastable. This originates from the existence of a universal bound state in the two-body problem for $a>0$, with a binding energy $\epsilon_\text{b} = \frac{\hbar^2}{m a^2}$, where $m$ is the mass of the atom.
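To put a number on this metastability, a quick estimate (a Python sketch using standard CODATA-style constants and the $^6$Li atomic mass) shows that at $a = 500~a_0$ the pair binding energy vastly exceeds a typical Fermi energy of $k_{\mathrm{B}} \times 0.5~\mu$K:

```python
hbar = 1.054571817e-34        # reduced Planck constant, J s
k_B = 1.380649e-23            # Boltzmann constant, J/K
a0 = 5.29177e-11              # Bohr radius, m
m_Li6 = 6.015 * 1.66054e-27   # 6Li atomic mass, kg

a = 500 * a0
eps_b = hbar**2 / (m_Li6 * a**2)   # binding energy eps_b = hbar^2 / (m a^2)

print(eps_b / k_B * 1e6)           # ~115 micro-K, i.e. >> E_F ~ 0.5 micro-K
```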
The pairing instability of the repulsive branch of the many-body system towards the lower (attractive) branch of bound pairs, depicted in Fig.~\ref{FIG:1}(a), is a complex problem; it is expected to evolve from an instability driven by \emph{universal} three-body recombination for $\epsilon_\text{b}\gg E_\text{F}$~\cite{Petrov_2003,Esry_2001}, to many-body pairing effects when $\epsilon_\text{b}\lesssim E_\text{F}$~\cite{Petrov_2003,Pekker_2011,He_2016,Amico_2018} where $E_\text{F}$ is the Fermi energy. This pairing instability has played a central role in the study of the strongly repulsive Fermi gas and the search for the itinerant-ferromagnet phase~\cite{Duine_2005,LeBlanc_2009,conduit_2009,conduit_2009_2,conduit_2009_3,chang_2010,schmidt_2011,pilati_2010,von_2011,chang_2011,Shenoy_2011,massignan_2013,pilati_2014,zintchenko_2016,He_2016}. Pioneering experiments have shown decreased lifetime of the gas with increasing interactions~\cite{Jo_2009,Sanner_2012} and larger initial rate of reduction of repulsive correlations (possibly due to the ferromagnetic instability) compared to the initial pairing rate~\cite{Amico_2018,Scazza_2020}. However, complex dynamics arising from the in-trap density inhomogeneity as well as the far-from-equilibrium nature of the initial quenched states have hindered the study of the homogeneous system's stability~\cite{Jo_2009,Amico_2018}. The advent of homogeneous gases prepared in optical box traps~\cite{Gaunt_2013,Chomaz_2015,Mukherjee_2017,navon_2021} has enabled the investigation of complex stability problems in clean settings~\cite{eigen_2017,bause_2021,shkedrov_2022}. Here, we revisit the fundamental problem of the stability of the repulsive Fermi gas by measuring the three-body recombination law in a homogeneous gas. 
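The three-body recombination law central to this measurement has a simple closed-form consequence for a uniform gas: $\dot{n} = -L_3 n^3$ integrates to $n(t) = n_0/\sqrt{1 + 2 L_3 n_0^2 t}$, so the loss rate scales with the cube of the density. A minimal numerical sketch (illustrative values and units) recovers the exponent:

```python
import numpy as np

L3, n0 = 1e-24, 1e12          # illustrative values in arbitrary units
t = np.linspace(0.0, 1.0, 200)

# Closed-form solution of dn/dt = -L3 n^3 at uniform density
n = n0 / np.sqrt(1.0 + 2.0 * L3 * n0**2 * t)
ndot = -L3 * n**3

# Slope of log(-dn/dt) versus log(n) gives the scaling exponent gamma
gamma = np.polyfit(np.log(n), np.log(-ndot), 1)[0]
print(gamma)  # -> 3.0 up to floating-point error
```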
The experiment starts with a weakly attractive gas of $^6$Li atoms in a balanced mixture of the first and third lowest Zeeman sublevels (respectively labeled as $\uparrow$ and $\downarrow$), trapped in a red-detuned optical dipole trap. The gas is evaporatively cooled at a bias magnetic field $B= 287$~G. It is then loaded in a blue-detuned (at a wavelength of $639$~nm) cylindrical box trap constructed by intersecting a `tube' beam (produced with a set of axicons) with two thin sheets, see Fig.~\ref{FIG:1}(b). The magnetic field is then ramped to $B=597$~G where the interactions are weakly repulsive ($a \approx 500~a_0$, where $a_0$ is the Bohr radius~\cite{zurn_2013_2}). At this stage, we typically have $N_{\uparrow} \approx N_{\downarrow} \approx 6 \times 10^5$ atoms per spin state at $T \approx 0.3~T_\text{F}$, with $E_\text{F} \approx k_{\mathrm{B}} \times 0.5\;\mu\mathrm{K}$ and a spin imbalance of $\frac{N_{\downarrow}-N_{\uparrow}}{N_{\downarrow} + N_{\uparrow}} = 0.2(3)\%$. The interaction field is then ramped to its final value over $100$~ms, and left to settle for an additional $25$~ms. We then hold the atoms for a variable duration $t_\text{hold}$. We image the gas near the zero crossing of $a$ ($|a| \le 50~a_0$) by quickly ramping the field to $B= 569$~G, so that trapped pairs are converted into tightly bound molecules and are thus detuned from the atomic imaging resonance~\cite{imaging,SuppMat}. \begin{figure}[!h] \includegraphics[width=1\columnwidth]{figure1} \caption{A homogeneous repulsive Fermi gas prepared in an optical box. (a) Sketch of the two lowest energy branches of a Fermi gas with a positive scattering length $a$; the `upper' (repulsive) branch is shown in red, the `lower' branch (a gas of fermion pairs) is shown in blue. The red dashed line is the repulsive Fermi gas energy up to second order in $k_\text{F}a$~\cite{Huang_1957,lee_1957}; the red shaded area depicts the energy width associated with the finite lifetime of the upper branch.
(b) \emph{In-situ} imaging of the box-trapped Fermi gas. Gravity, here oriented along $-\mathbf{\hat{y}}$, is compensated by magnetic levitation. The image on the left is the column-integrated optical density (OD). The plots on the right are cuts along the white dashed lines of the image. The solid lines are derived from the fit used to extract the volume of the box; $V=7.3(6) \times 10^{-4}~\mathrm{mm}^3$. The slanted profile in the horizontal cut is caused by the slightly conical shape of our cylindrical box~\cite{SuppMat}.} \label{FIG:1} \end{figure} We show in Fig.~\ref{FIG:2}(a) examples of time evolution of the atom number $N$ per spin state for different values of $a$, normalized to the initial atom number $N_0$. Qualitatively, the gas lifetime decreases with increasing $a$, even though $N_0$ also decreases (because of losses during the interaction field ramp and the settling time~\cite{SuppMat}). The average kinetic energy per particle $\epsilon_\text{kin}$, measured after time-of-flight expansion and shown in Fig.~\ref{FIG:2}(b), also slowly decreases with $t_\text{hold}$. The origin of the decay is model-independently revealed by plotting the atom loss rate $\dot{N}/N_0$ versus $N/N_0$ (Fig.~\ref{FIG:2}(c)). The examples shown follow a scaling relation of the rate $\dot{N} \propto -N^{\gamma}$ (fits are shown as solid lines, and fitted values of $\gamma$ are in legend). We observe that $\gamma\approx 1$ at weak interactions ($a\ll 10^3~a_0$) where the losses are caused by density-independent collisions with the residual background gas. For stronger interactions, we observe $\gamma\approx 3$, consistent with an atom loss rate per unit volume \begin{equation}\label{eq:loss} \dot{n} = -L_3 n^3 \end{equation} due to three-body collisions, with a constant loss coefficient $L_3$ and a uniform density $n=N/V$, where $V$ is the volume of the box. \begin{figure}[!h] \includegraphics[width=1\columnwidth]{{figure2}} \caption{Decay of a uniform repulsive Fermi gas. 
(a) Evolution of atom numbers for different interaction strengths, normalized to the initial atom numbers $N_0$. The solid blue, yellow, and red lines are fits to a three-body loss model that includes a one-body loss rate determined from the green-line fit~\cite{onebody}. The three-body loss fits are limited to the region where $\epsilon_\text{kin}$ changes by less than $20\%$ of its initial value, indicated by solid circles; open circles are not used in the fit. The same marker style is used in (b) and (c). Dotted lines are extensions of the fits beyond the fitting range. (b) Evolution of the average kinetic energy per particle during atom losses. (c) Scaling relation between atom loss rate and atom number. Solid lines are power law fits and the extracted exponents $\gamma$ are listed in the legend. } \label{FIG:2} \end{figure} \begin{figure*}[!hbt] \centerline{\includegraphics[width=\textwidth]{{figure3}}} \caption{Threshold law of three-fermion recombination. (a) Scaling relation between $L_3$ and the (time-averaged) kinetic energy $\bar{\epsilon}_\text{kin}$. The solid line shows the power law fit to three sets of data, rescaled by a factor for clarity (see legend). The dashed line is the fit assuming $\lambda = 1$~\cite{SuppMat}. (b) Temperature evolution during three-body losses. The dashed lines are theoretical predictions without adjustable parameters, given the initial measured $(T/T_\text{F})_0$ (see legend). The solid lines are linear fits to extract the coefficient $\theta$; the dotted lines show an estimate of the uncertainty of $\theta$; see panel (c). (c) Temperature-change coefficient $\theta$ versus $T/T_\text{F}$.
The vertical dashed line marks the critical $(T/T_\text{F})^*$ at which $\theta$ changes sign, and the horizontal dashed line shows the asymptotic value of $\theta$ in the classical limit.} \label{FIG:3} \end{figure*} Now that we have established a range over which losses are dominated by three-body recombination, we quantitatively characterize the process. The event rate per unit volume for each type of event is $\Omega \equiv K_3 n^3$ ($=\Omega_{\uparrow\uparrow\downarrow} = \Omega_{\uparrow\downarrow\downarrow}$), where $K_3$ is the recombination coefficient; $K_3$ can be studied through losses, since $K_3 = L_3/d$, where $d$ is the average number of atoms lost per event (either because their release energy from recombination exceeds the trap depth or because they form molecules that are optically detuned). We obtain $L_3$ by fitting $N(t)$ to the solution of Eq.~(\ref{eq:loss})~\cite{onebody} (solid lines in Fig.~\ref{FIG:2}(a)). To ensure that $L_3$ is approximately constant with $t_\text{hold}$, the fits are restricted to a range where ${\epsilon}_\text{kin}$ changes by at most $20\%$ of the initial value; see solid points in Fig.~\ref{FIG:2}~\cite{SuppMat}. We examine this assumption more carefully by studying the relationship between $L_3$ and $\epsilon_\text{kin}$. We control $\epsilon_{\text{kin}}$ by varying the box depth at an intermediate evaporative cooling stage, keeping the final box depth $U_\text{box}$ the same. As shown in Fig.~\ref{FIG:3}(a) for three different values of $a$, we observe that $L_3$ scales as a power law of $\epsilon_\text{kin}$ averaged over time, $\bar\epsilon_\text{kin}$. Theoretically, $K_3\propto \epsilon_\text{kin}^\lambda$, where the exponent $\lambda$ is determined by the three-body threshold laws, which crucially depend on the symmetries imposed by the quantum statistics of the collision participants~\cite{Esry_2001}.
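To make the fitting logic concrete, here is a minimal numerical sketch (not the analysis code used for this work; the values of $L_3$, $V$, and $N_0$ below are arbitrary illustrative numbers): a decay curve generated from the closed-form solution of Eq.~(\ref{eq:loss}) reproduces the model-independent exponent $\gamma \approx 3$ from the log-log slope of the loss rate versus atom number.

```python
import numpy as np

# Pure three-body loss in a box (uniform density n = N/V):
#   dn/dt = -L3 n^3  =>  n(t) = n0 / sqrt(1 + 2 L3 n0^2 t).
# All numbers below are hypothetical, chosen only for illustration.
L3 = 1e-24          # loss coefficient, cm^6 / s (illustrative)
V = 7.3e-7          # box volume, cm^3 (order of the measured value)
N0 = 6e5            # initial atom number per spin state
n0 = N0 / V

t = np.linspace(0.0, 5.0, 400)                    # hold time, s
N = V * n0 / np.sqrt(1.0 + 2.0 * L3 * n0**2 * t)  # closed-form solution

# Model-independent exponent: slope of log(-dN/dt) versus log(N)
dNdt = np.gradient(N, t)
gamma = np.polyfit(np.log(N), np.log(-dNdt), 1)[0]
print(round(gamma, 1))  # 3.0
```

The exponent extraction mirrors the procedure of Fig.~\ref{FIG:2}(c): no loss model is assumed at this stage, only a power-law relation between $\dot N$ and $N$.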
For instance, for three distinguishable particles or indistinguishable bosons, there is no energy dependence ($\lambda=0$); for three indistinguishable fermions, $\lambda=2$~\cite{Yoshida_2018,top_2021}. The generic process in the spin-$1/2$ Fermi gas corresponds to the previously unverified case of collisions involving two indistinguishable fermions. The three-body event rate in a unit volume $\omega_3$ depends on the momenta $\mathbf{k}_1$ and $\mathbf{k}_2$ of the indistinguishable fermions, and is independent of the third participant's momentum $\mathbf{k}'$~\cite{petrov}: \begin{equation}\label{eq:omega} \omega_3(\mathbf{k}_1,\mathbf{k}_2,\mathbf{k}')\propto (\mathbf{k}_1-\mathbf{k}_2)^2. \end{equation} Integrating Eq.~(\ref{eq:omega}) over the phase space density of the three participants, one finds $\lambda=1$. Experimentally, we measure $\lambda_\text{exp} = 1.36(14)$~\cite{epsilonkin} (solid line in Fig.~\ref{FIG:3}(a)), in reasonable agreement with the theoretical prediction. The dependence of $\omega_3$ on momentum has interesting implications for the temperature dynamics of the gas during decay. In Fig.~\ref{FIG:3}(b), we show $T/T_0$ versus $N/N_0$ (where $T_0$ is the initial $T$). Depending on $T/T_\text{F}$, the system either cools down or heats up. This effect results from an interplay between Fermi correlations and the momentum dependence of $\omega_3$. The cooling effect from the preferential removal of particles with large momenta (without spatial selectivity)~\cite{SuppMat}, strongest for $T\gg T_\text{F}$, competes with the heating from the perforation of the Fermi sea, which dominates in the deeply degenerate regime~\cite{timmermans_2001}. A theoretical model for a closed system, shown as colored dashed lines in Fig.~\ref{FIG:3}(b), yields good agreement with the observed evolution of the temperature for $N/N_0\gtrsim0.7$~\cite{SuppMat}.
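In the classical (Boltzmann) limit, two consequences of the momentum dependence of Eq.~(\ref{eq:omega}) can be checked with a short Monte Carlo sketch (illustrative only, in units $m = k_{\mathrm B} = \hbar = 1$): the event weight $(\mathbf{k}_1-\mathbf{k}_2)^2$ averages to a value linear in $T$, giving $\lambda = 1$, and the mean energy removed per event is $11 k_{\mathrm B} T/2$ rather than the unweighted $9 k_{\mathrm B} T/2$, which with the energy balance $E = \tfrac32 N k_{\mathrm B} T$ reproduces $\theta \to 2/9$:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_momenta(T, n_events):
    # Three collision participants with classical Maxwell--Boltzmann
    # momenta (units m = kB = hbar = 1): each Cartesian component of k
    # is Gaussian with variance T.
    return rng.normal(0.0, np.sqrt(T), size=(3, n_events, 3))

def event_stats(T, n_events=400_000):
    k = sample_momenta(T, n_events)
    w = np.sum((k[0] - k[1])**2, axis=1)   # event weight (k1 - k2)^2
    e = 0.5 * np.sum(k**2, axis=2)         # kinetic energy of each atom
    mean_w = np.mean(w)                    # analytically 6 T
    # mean energy removed per event, weighted by w; analytically 11 T / 2
    e_removed = np.mean((e[0] + e[1] + e[2]) * w) / mean_w
    return mean_w, e_removed

w1, e1 = event_stats(1.0)
w2, _ = event_stats(2.0)

# (i) the momentum-averaged rate is linear in T (hence in eps_kin): lambda = 1
print(round(w2 / w1, 1))   # 2.0

# (ii) energy balance E = (3/2) N T gives theta = (2/3)(e_removed / 3)/T - 1
theta = (2.0 / 3.0) * (e1 / 3.0) / 1.0 - 1.0
print(round(theta, 1))     # 0.2, i.e. theta -> 2/9
```

This toy calculation captures only the nondegenerate limit; the full interplay with Fermi statistics at low $T/T_\text{F}$ requires the closed-system model of~\cite{SuppMat}.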
We attribute the discrepancy at late times for low $(T/T_\text{F})_0$ to additional cooling from plain evaporation. Quantitatively, we define the coefficient $\theta \equiv \frac{N}{T}\left(\frac{\partial T}{\partial N}\right)_V$ under this rarefaction, and measure it at $t_{\text{hold}}=0$ for various $T/T_\text{F}$ (Fig.~\ref{FIG:3}(c)). We observe that the transition from heating to cooling occurs at a critical degeneracy $(T/T_\text{F})^* \approx 0.7$. The measurements are in excellent agreement with the theoretical prediction (solid line in Fig.~\ref{FIG:3}(c))~\cite{SuppMat}, which establishes the crossing at $(T/T_\text{F})^* = 0.71$ (vertical dashed line). For $T\gg T_\text{F}$, $\theta$ approaches $2/9$, where the cooling effect is most pronounced. Note that for all $T$, $\theta<2/3$, so that this process does not increase the quantum degeneracy of the gas (see related scenarios for bosons~\cite{schemmer_2018,Dogra_2019}, and fermions near a narrow Feshbach resonance~\cite{Peng_2021}). We now turn to the dependence of $L_3$ on interactions. In Fig.~\ref{FIG:4}(a), we display $\gamma$ versus $a$; the solid points are data where losses are three-body dominated (see Fig.~\ref{FIG:4} and caption). We subsequently extract $L_3$ for all interactions by fixing $\gamma=3$ and taking one-body decay into account~\cite{onebody}; to factor out the effect of the threshold law, we display $L_3/\bar \epsilon_{\text{kin}}$, see Fig.~\ref{FIG:4}(b). We observe that over more than four orders of magnitude, $L_3/\bar \epsilon_{\text{kin}}$ follows a power law of $a$. Fitting the data in the three-body-dominated region (solid blue points in Fig.~\ref{FIG:4}(b)), we find $L_3/\bar \epsilon_{\text{kin}} \propto a^{6.1(2)}$ (solid blue line). The fact that $L_3$ scales precisely as $a^6$ is strong evidence for the universality of this process. Indeed, should three-body recombination be universal, i.e. 
be independent of short-range physics, the threshold law implies the scaling of $K_3$ with interaction strength~\cite{DIncao_2005}. Specifically, if $K_3\propto \epsilon_\text{kin}^\lambda$, then on dimensional grounds one should have $K_3\propto \epsilon_\text{kin}^\lambda \frac{m^{\lambda-1}}{\hbar^{2\lambda-1}}a^{4+2\lambda}$. For two identical fermions ($\lambda=1$), one finds $K_3\propto a^6$, in excellent agreement with our measurements. It is interesting to note that the $a^4$ scaling for bosons is not universal, due to effects related to Efimov physics~\cite{braaten_2007,naidon_2017}. Compared to the bosonic case, an additional factor $\epsilon_{\text{kin}}/\epsilon_\text{b}$, which is $\propto(k_\text{F}a)^2$ at low $T$, can be interpreted as a suppression factor due to Pauli blocking, which arises as two identical fermions need to come within $\approx a$ of each other to form a final bound state. Now that we have established $L_3 \propto \epsilon_\text{kin} a^6$, we can extract the dimensionless constant $A$ in $L_3 = d A \epsilon_{\text{kin}} a^6/\hbar$, predicted to be universal~\cite{Auniv}. As some or all products of the recombination can be lost, $d$, the link between losses and recombinations, depends on the box depth $U_\text{box}$ and $\epsilon_\text{b}$. To gain insight into this link, we implement a second imaging protocol where we image the atoms directly at the interaction field (depicted in the top left inset of Fig.~\ref{FIG:4}(b)); in our range of $a$, molecules and atoms are optically unresolved~\cite{imaging}. The measurements are displayed as red circles in Figs.~\ref{FIG:4}(a)-(b). \begin{figure}[!t] \includegraphics[width=1\columnwidth]{{figure4}} \caption{Universality of three-body recombination. (a) Atom-loss scaling exponent $\gamma$. Blue and red circles are respectively imaged near the zero crossing of $a$ or directly at the interaction field.
Data in the three-body-dominated region, selected by $|\gamma - 3| \le 0.5$ (blue band) and with a relative uncertainty $\le 20\%$, are shown by solid points and left open otherwise, in all panels. (b) Universal scaling of $L_3$ with $a$. The experiment sequence is shown in the upper insets. The blue line is the power law fit to the solid blue points. Vertical grey dashed lines mark the threshold values of $a$ such that $\epsilon_\text{b}/3 = 2 U_{\text{box}}$ and $2\epsilon_\text{b}/3 = U_{\text{box}}$, and the bands include an average over initial energies~\cite{SuppMat}. Bottom cartoons depict imaging and trapping regimes after recombinations for the atoms and molecules. (c) Universal constant $A$. Data points are the experimental values of $A = \hbar L_3/(3 \bar\epsilon_{\text{kin}} a^6)$, and the solid purple line is derived from a global $a^6$ fit to the data in (b) (not shown). The systematic error from the volume calibration is shown by the light purple band~\cite{SuppMat}.} \label{FIG:4} \end{figure} At low $a$, the values of $L_3$ measured by both imaging methods coincide, as $d=3$ in both cases. The separation at $a \gtrsim 1300~a_0$ occurs close to the condition $\epsilon_\text{b}/3 \approx 2 U_{\text{box}}$ at which the molecules remain trapped (see cartoons at the bottom of Fig.~\ref{FIG:4}(b))~\cite{deposit}. For larger $a$, $d<3$ for the `interaction field' imaging. For the `zero-crossing' imaging, $d=3$ still holds; the $a^6$ scaling extends up to the point where $2\epsilon_\text{b}/3 < U_{\text{box}}$, beyond which all recombination products may be trapped~\cite{Petrov_2003,unitary}. The maximum of $L_3(a)$ is located marginally beyond this threshold. Fixing $d=3$, we fit $L_3/\bar \epsilon_{\text{kin}}$ (solid blue points) and find $A = 143(16)_{\mathrm{stat.}}(24)_{\mathrm{sys.}}$. To examine more closely the quality of the $a^6$ scaling, we extract $A$ without free parameters from $\hbar L_3/(3 \bar\epsilon_{\text{kin}} a^6)$ (Fig.~\ref{FIG:4}(c)).
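As an aside, the dimensional-grounds argument quoted above can be made mechanical. The following symbolic check (a sketch, not taken from this work) confirms that $\epsilon_\text{kin}^\lambda\, m^{\lambda-1} \hbar^{1-2\lambda} a^{4+2\lambda}$ carries the dimensions of $K_3$, namely $\mathrm{length}^6/\mathrm{time}$ as required by $\dot n = -K_3 n^3$, for \emph{any} threshold exponent $\lambda$:

```python
import sympy as sp

# Base dimensions: mass M, length L, time T (declared positive so that
# powers of products may be expanded); lam is the threshold exponent.
M, L, T = sp.symbols('M L T', positive=True)
lam = sp.symbols('lam', real=True)

eps = M * L**2 / T**2      # [energy]
m = M                      # [mass]
hbar = M * L**2 / T        # [action]
a = L                      # [length]

K3 = eps**lam * m**(lam - 1) / hbar**(2*lam - 1) * a**(4 + 2*lam)

# From dn/dt = -K3 n^3 with [n] = L^-3, K3 must carry dimensions L^6 / T,
# independently of lam; the ratio below must reduce to 1.
result = sp.powsimp(sp.expand_power_base(K3 * T / L**6))
print(result)  # 1
```

The check makes explicit that the threshold exponent $\lambda$ fixes the power of $a$ to $4+2\lambda$, so the fermionic case $\lambda=1$ forces the $a^6$ law.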
Our measurements are in excellent agreement with the theoretical prediction $A=148$ for the mass-balanced three-fermion problem~\cite{Petrov_2003}. The range over which the $a^6$ scaling law applies is surprisingly large. First, it extends even at large $a$ where the measured $\gamma$ is only marginally close to $3$ (see open circles in Fig.~\ref{FIG:4}). Secondly, at the highest $a$ for which we observe $a^6$ scaling, $\epsilon_{\text{kin}}\gtrsim k_\text{B}\times 0.5$~$\mu$K is only slightly smaller than $\epsilon_\text{b}\approx k_\text{B}\times 1.2$~$\mu$K, even though the condition for the universal scaling is expected to be valid for $\epsilon_{\text{kin}} \ll \epsilon_\text{b}$~\cite{Petrov_2003}. Finally, our measurement of $K_3$ provides an important ingredient for assessing the limits of equilibrium for a strongly interacting repulsive Fermi gas. To ensure equilibrium, $\Gamma_3 \equiv 3 K_3 n^2$~\cite{gamma3} must be significantly slower than $\Gamma_2$, the two-body elastic collision rate. We find $\Gamma_2/\Gamma_3 = (k_{\mathrm{F}} a)^{-4} I(T/T_\mathrm{F})$ where $I(T/T_\mathrm{F})$ is a universal function that reaches its maximum at $T \approx 1.2~T_{\mathrm{F}}$. At this temperature, $\Gamma_2=\Gamma_3$ at $k_\text{F} a \approx 1.3$, providing an upper bound to the interaction strength of a repulsive Fermi gas in equilibrium~\cite{SuppMat,kFalim}. This limit is close to the predicted point for the ferromagnetic transition, $k_\text{F} a=\pi/2$ in the mean-field approximation~\cite{houbiers_1997} and $\approx 1$ in quantum Monte Carlo simulations~\cite{pilati_2010,chang_2011,He_2016}. In conclusion, we studied the stability of the repulsive Fermi gas with short-range interactions. We measured the universal recombination law for three particles of equal mass involving two identical fermions. This work paves the way for the study of complex stability problems of Fermi systems in clean uniform settings, e.g. 
multi-component gases~\cite{ottenstein_2008,huckans_2009,nakajima_2010}, mass-imbalanced mixtures~\cite{Taglieber_2008,Wille_2008,Barontini_2009,Pires_2014,Tung_2014}, and molecules~\cite{hoffmann_2018,duda_2022}. A future work could leverage uniform Fermi gases to explore the regime $\epsilon_\text{b}\lesssim\epsilon_{\text{kin}}$, where $K_3\propto \epsilon_{\text{kin}} a^6$ should no longer hold, and at low temperature many-body pairing mechanisms are expected to take over~\cite{Pekker_2011,He_2016}. To access the shorter time scales expected, fast state preparation and probing techniques such as internal state manipulation could be useful~\cite{Amico_2018,Scazza_2020}. We thank D.S. Petrov, F. Scazza, M. Zaccanti, and G. Roati for fruitful discussions. We also thank Z. Hadzibabic, F. Werner, and L. Chambard for comments on the manuscript. This work was supported by the NSF, DARPA, the David and Lucile Packard Foundation, and the Alfred P. Sloan Foundation.
\section{Introduction} Topological field theory has offered a rich domain of common interest for mathematicians and theoretical physicists over the last few decades. In this paper we examine how and when a constructive method from physics -- the Batalin--Vilkovisky (BV) formalism in conjunction with rigorous renormalization techniques of Axelrod--Singer and Kontsevich for Chern--Simons-type theories -- produces an algebra over the {\em framed} little $n$-disks operad. Our work here builds upon and extends prior work by the first author \cite{ElliottSafronov} that explains how this constructive method can produce algebras over the framed little $n$-disks operad in general. We will see that the obstruction to lifting from the unframed to the framed setting, or {\em framing anomaly}, is always expressed in terms of Pontryagin classes, suitably interpreted. Our methods are an analog, for a class of theories we will refer to as topological AKSZ theories, of the formalism for anomalies associated with Stora, Wess, and Zumino \cite{Zumino, Stora}, but in this topological setting, we can relate directly to the obstruction theory for algebras over these operads. Here we focus on an explicit computation within the BV framework as articulated by Costello \cite{CostelloBook} and developed further in~\cite{ElliottSafronov}. Let us describe concisely some concrete consequences of the results proved here. Our results apply to theories like Chern--Simons theory, topological BF theories, and topological AKSZ theories in general. Using a simple point-splitting regularization (sometimes called the ``configuration space method''), one can handle divergences in such theories; the only obstruction to quantization is whether the quantized action satisfies the quantum master equation. When this obstruction vanishes, the results of \cite{ElliottSafronov} show that the observables of the theory provide an ${\bb E}_n$ algebra. 
Here we compute the obstruction-deformation complex describing the ability to lift such an ${\bb E}_n$ algebra structure to a framed ${\bb E}_n$ algebra structure; we also explain how the obstruction to lifting can be seen as arising from a kind of equivariant quantum master equation. Why bother to make such a lift? And how do these algebras relate to more conventional approaches to topological field theory? We will offer answers aimed at topologists and then at physicists. Functorial field theories, in the style of Atiyah--Segal--Lurie, arise from (framed) ${\bb E}_n$ algebras via factorization homology (see \S4.1 of \cite{LurieTFT} or \cite{ScheimbauerThesis}). Briefly, an ${\bb E}_n$ algebra $A$ determines a {\em framed} fully extended $n$-dimensional topological field theory with values in a ``higher Morita category'' built from ${\bb E}_n$-algebras. A $k$-manifold $X$ with a framing of the bundle $TX \times \RR^{n-k}$ is assigned the invariant \[ Z(X) = \int_{X \times \RR^{n-k}} A, \] the factorization homology over the $n$-dimensional manifold made by thickening $X$. The functor $Z$ offers a sophisticated invariant of such $n$-framed manifolds, but such manifolds are relatively rare. (Think of the $n=2$ case: the only closed 2-manifolds admitting a framing have genus 1.) On the other hand, a framed ${\bb E}_n$ algebra $A$ determines an {\em oriented} fully extended $n$-dimensional topological field theory with values in this ``higher Morita category'' built from ${\bb E}_n$-algebras. We now ask that a $k$-manifold $X$ admits an orientation on $TX \times \RR^{n-k}$. Such manifolds are much more abundant. Our results thus show how a large class of TFTs -- in the physicist's sense -- determine extended oriented TFTs in the sense of Baez--Dolan and Lurie. This rather abstract formulation can be expressed in more concrete, physical terms. The ${\bb E}_n$ algebra of a TFT encodes the operator product expansion of the local operators, in exhaustive detail.
Think of the local operator that arises from picking a configuration of $k$ distinct, ordered points in $\RR^n$ and inserting a local operator at each point. Although the value itself is essentially independent of the location of the insertions (you can wiggle the points without changing the output, up to exact terms), the topology of the configurations of points is quite rich, and the ${\bb E}_n$ algebra keeps track of how the OPE depends on that topology. In other words, it encodes Witten descent and related manipulations. The associated functorial TFT associates to a $k$-manifold $X$ the ${\bb E}_{n-k}$-algebra encoding the OPE of the full theory dimensionally reduced along~$X$. Our results explain the conditions under which you can implement this construction -- the OPE algebras and their dimensional reductions -- on oriented manifolds. In other words, one needs to know how to encode descent given an orientation, and the anomaly to such descent lies in our obstruction-deformation complex. In this paper we do not compute any explicit anomalies, leaving that for a forthcoming companion paper \cite{EGWfr}, but we do note theories for which the anomaly must vanish because the relevant cohomology group vanishes. The following is a concrete example. \begin{example} Consider a topological BF theory on $\RR^n$ for $n \ge 3$, with gauge Lie algebra $\gg$ a simple Lie algebra. The results in this paper demonstrate that the only possible framing anomaly for such a theory lies in the Lie algebra cohomology group \[\bigoplus_{i =1}^{n-1}\mr H^i(\so(n)) \otimes \mr H^{n-i}(\gg).\] Above degree zero, the cohomology of $\so(n)$ is supported in degrees $3 \text{ mod } 4$, and the cohomology of $\gg$ is supported in odd degrees $\ge 3$. Thus, we can conclude: \begin{prop} For a topological BF theory as above: \begin{enumerate} \item[(1)] The framing anomaly vanishes when the dimension $n$ is odd. 
\item[(2)] For $\gg = \so(k)$ or $\gg = \sp(k)$, the framing anomaly vanishes when the dimension $n$ is not equal to $2 \text{ mod } 4$. \end{enumerate} \end{prop} \end{example} \begin{example} Let us now consider the example of 3-dimensional Chern--Simons theory with an arbitrary semisimple gauge group, a theory with a well-known framing anomaly \cite{WittenJones, AxelrodSinger}. Although classical Chern--Simons theory can be defined on any oriented 3-manifold, its quantization depends on a choice of framing for the 3-manifold. The quantization of Chern--Simons theory, including the framing anomaly, is discussed in the language of the BV formalism by Iacovino \cite{Iacovino}. \begin{prop} There is no obstruction to quantizing the $\mf{iso}(3)_\mr{dR}$ action for Chern--Simons theory on $\RR^3$. However, there \emph{is} a potential obstruction to this quantization as an \emph{inner} action. \end{prop} \end{example} \begin{example} In higher dimensions, there are \emph{abelian} Chern--Simons theories on $\RR^n$ for any odd integer $n \ge 3$, having to do with connection-type data on higher $U(1)$-gerbes. Concretely, we consider the perturbative theories expressed in terms of the formal mapping space $\mr{Map}(\RR^n_{\mr{dR}}, B^{\frac{n-1}2}\mf u(1))$. Our methods let us understand the possible obstructions to Chern--Simons theories of this type. \begin{prop} There is no obstruction to quantizing the $\mf{iso}(n)_\mr{dR}$ action for Chern--Simons theory on $\RR^n$ with gauge Lie algebra $\mf u(1)$, for any odd integer $n \ge 3$. However, there is a potential obstruction to this quantization as an inner action whenever $n \equiv 3 \text{ mod } 4$. \end{prop} \end{example} \subsection{Overview of the Paper} We begin in Section \ref{AKSZ_section} by discussing the class of field theories to which our results apply: topological AKSZ theories.
These are topological field theories whose fields can be described in terms of mapping spaces, with BV action functional generated by the AKSZ approach \cite{AKSZ}, via transgression of a shifted symplectic structure on the target of the mapping space. While these theories make sense on any smooth manifold, in Section \ref{dR_iso_section} we specialize to theories defined on a vector space $\RR^n$, and begin to incorporate the action of the group of isometries. At the classical level, topological AKSZ theories admit not only an action of the isometry group, but also a trivialization of this action up to homotopy. The main results of the present paper concern the lift of this homotopy trivialization to the quantum level. We discuss the implications of such a lift in Section \ref{En_section}, in which we recall results from \cite{ElliottSafronov} that allow for the realization of a framed $\bb E_n$-algebra structure on the observables of a quantum field theory on $\RR^n$, provided we can define a quantization of the homotopy trivialization of the isometry action. Such a structure permits the application of the tool of factorization homology to extend such a quantum field theory on $\RR^n$ to more general oriented smooth $n$-manifolds. In the final section, Section \ref{anomaly_section}, we characterize exactly when it is possible to quantize the homotopically trivial isometry action. There is a potential anomaly (the framing anomaly) obstructing this quantization, and we explicitly compute the cohomology group in which the obstruction lives. In many examples, as discussed above, this immediately tells us that the framing anomaly vanishes, so that there is no obstruction to quantization. \subsection{Acknowledgements} The authors would like to thank Pavel Safronov and Brian Williams for helpful comments and conversations during the preparation of this paper. The National Science Foundation supported O.G. through DMS Grants No. 1812049 and 2042052. 
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. \section{Topological AKSZ Theories} \label{AKSZ_section} In this paper we will focus on a natural class of topological field theories that can be defined in any dimension, which we will refer to as \emph{topological AKSZ theories}. \begin{remark} In this paper we will model classical and quantum field theories in terms of the Batalin--Vilkovisky (BV) formalism \cite{BatalinVilkovisky}. More specifically, we will be using the model for perturbative classical field theory described in \cite{CostelloBook, Book2}. See also \cite[Section 1]{ESW} for a summary of the definitions that we will be using when we define a classical field theory. \end{remark} \begin{definition}\label{def E_L} Let $M$ be an oriented $n$-manifold, and let $L$ denote an $L_\infty$ algebra equipped with a cyclic pairing of degree $n-3$. We view $L$ as presenting a formal moduli space $\mr B L$ equipped with a shifted symplectic form of degree $n-1$. The \emph{topological AKSZ theory} on $M$ with target $\mr B L$ is the classical BV theory whose underlying graded space of fields is \[ \Omega^\bullet(M) \otimes L[1] \] and whose dynamics are encoded by an $L_\infty$ structure on the cochain complex \[\mc E_L = (\Omega^\bullet(M) \otimes L, \d_{\mr{dR}} \otimes 1 + 1 \otimes \d_L)\] arising from the wedge product of forms and the brackets on $L$. \end{definition} The pairing on $L$ and integration over $M$ provide a local shifted symplectic structure, or, more accurately, the antibracket on observables (i.e., the Chevalley--Eilenberg cochains of~$\mc E_L$). We remark that these theories are also often called {\em generalized Chern--Simons theories} \cite{SchwarzGCS, MovshevSchwarz}.
\begin{remark} If $M$ is compact and $X$ is an $(n-1)$-shifted symplectic derived stack, then there is a $(-1)$-shifted symplectic structure on the derived mapping stack \[\mc M_X(M) = \mr{Map}(M_{\mr{dR}}, X)\] given by the AKSZ construction of Pantev, To\"en, Vaqui\'e and Vezzosi \cite{PTVV}. The shifted tangent complex $L = T_x[-1]X$ at a closed point $x$ of $X$ has the structure of an $L_\infty$ algebra with a degree $n-3$ symplectic pairing. We can identify $\mc E_L$ with the shifted tangent complex of the mapping stack $\mc M_X(M)$ at the constant map with value $x$. \end{remark} \begin{examples} Many standard examples fit inside this framework. \begin{enumerate} \item For $n=3$ and $L = \gg$ a reductive Lie algebra equipped with an invariant pairing, the topological AKSZ theory describes perturbative Chern--Simons theory on $M$ with gauge Lie algebra~$\gg$. \item For general $n$, let $L = \gg \oplus \gg^*[n-3]$ where $\gg$ is a finite-dimensional Lie algebra acting on $\gg^*[n-3]$ by its coadjoint representation. In this case the topological AKSZ theory describes perturbative topological BF theory on $M$ with gauge Lie algebra~$\gg$. \item More generally, we can replace $\gg$ in the above example by the shifted tangent space $T_y[-1]Y$ to a complex manifold $Y$, and consider \[L = T_y[-1]Y \oplus T^*_y[n-2]Y \iso T_{(y,0)}[-1](T^*[n-1]Y).\] We can now identify the topological AKSZ theory with the perturbation theory around a constant map of the derived mapping space~$T^*[-1]\mr{Map}(M_{\mr{dR}},Y)$. \end{enumerate} \end{examples} Topological AKSZ theories are extremely amenable to quantization, using techniques developed by Axelrod and Singer \cite{AxelrodSinger} and Kontsevich \cite{KontsevichECM}. (See also the summary of Costello, written in language closer to that used in this article~\cite[Section 15]{CostelloBVR}).
We use the term \emph{prequantization} to mean the construction of a family of effective action functionals compatible under the renormalization group flow. In this terminology, to provide a \emph{quantization}, these effective action functionals must also satisfy the quantum master equation. \begin{theorem} Any topological AKSZ theory can be prequantized to all orders, uniquely up to a contractible choice. This prequantization can be computed explicitly, and there are no counter-terms. \end{theorem} The explicit computation involves a nice description of the propagator, and consequently a computation of the Feynman weights, using partial compactifications of the configuration spaces $\mr{Conf}_m(\RR^n)$ first constructed by Kontsevich \cite{KontsevichECM}. It will be useful to concretely describe the ring of local functionals associated to the topological AKSZ theory~$\mc E_L$. See \cite[Chapter 5, Section 10]{CostelloBook} for more. \begin{lemma} \label{AKSZ_functionals_lemma} The ring $\OO_{\mr{loc}}(\mc E_L)$ of local functionals for the theory $\mc E_L$ on an $n$-manifold $M$ is quasi-isomorphic to the shifted de Rham complex $(\Omega^\bullet(M) \otimes \mr C^\bullet_{\mr{red}}(L)[n], \d_{\mr{dR}} \otimes 1 + 1 \otimes \d_{\mr{CE}})$. \end{lemma} For $M = \RR^n$, the Poincar\'e lemma then ensures that \[ \OO_{\mr{loc}}(\mc E_L) \simeq C^\bullet_{\mr{red}}(L)[n]. \] In particular, for topological BF theories or Chern--Simons theories with gauge Lie algebra $\gg$, deformations and anomalies correspond to cocycles of Lie algebra cohomology groups for~$\gg$. These are well-known for semisimple Lie algebras. \begin{proof} See \cite[Lemma 3.5.4.1]{Book2}. \end{proof} \section{The de Rham Isometry Action} \label{dR_iso_section} From now on, let $M = \RR^n$. We will study anomalies for the action of the isometry group $\ISO(n) = \SO(n) \ltimes \RR^n$ of~$\RR^n$. Let $\mf{iso}(n)$ denote the Lie algebra of~$\ISO(n)$.
\begin{definition} If $\gg$ is a Lie algebra, define $\gg_{\mr{dR}}$ to be the dg Lie algebra whose underlying graded vector space is $\gg[1] \oplus \gg$, with differential given by the identity, and Lie bracket given by the bracket on $\gg$ and the adjoint action of $\gg$ on $\gg[1]$. \end{definition} \begin{remark} This dg Lie algebra $\gg_{\mr{dR}}$ is homotopy equivalent to a trivial Lie algebra. On the other hand, it has an important interpretation from the point of view of moduli spaces: its associated formal moduli space offers a useful model of the {\em de Rham space} $\mr B\gg_{\mr{dR}}$ of the formal moduli space $\mr B\gg$. In more explicit terms, note that there is a natural map of dg Lie algebras $\gg \to \gg_{\mr{dR}}$. A representation of $\gg_{\mr{dR}}$ pulls back to a representation of $\gg$, but with an explicit trivialization (up to chain homotopy). Indeed, we can view the representations of $\gg_{\mr{dR}}$ as the representations of $\gg$ equipped with a homotopical trivialization. \end{remark} Every topological AKSZ theory on $\RR^n$ has a natural action of $\mf{iso}(n)$ by the Lie derivative action of vector fields, since this Lie algebra acts canonically on the de Rham complex. This action extends canonically to an action by $\mf{iso}(n)_{\mr{dR}}$, where the component $\mf{iso}(n)[1]$ acts by contraction of vector fields with differential forms, thanks to Cartan's formula. This action can be encoded by a current, in the sense of Noether, as follows. Consider the degree $-1$ local functional \[S_{\mr{eq}} \in \mr C^\bullet(\mf{iso}(n)_{\mr{dR}}, \OO_{\mr{loc}}(\mc E_L))\] defined by the formula \[S_{\mr{eq}}(\wt X, X)(A) = S(A) - \int \left(\langle A \wedge \iota_{\wt X} A \rangle + \langle A \wedge L_{X} A \rangle \right).\] Here $(\wt X, X)$ is an element of $\mf{iso}(n)_{\mr{dR}}$, $A$ is an element of $\mc E_L$, and $\langle-,- \rangle$ denotes the symplectic pairing on $L$.
This current determines a derivation $\{S_{\mr{eq}}, -\}$ acting on the classical observables, as we now verify. \begin{prop} \label{isometry_action_prop} There is a map of dg Lie algebras from $\mf{iso}(n)_{\mr{dR}}$ to vector fields on the formal moduli space $\mc E_L$ given by $\{S_{\mr{eq}},-\}$. In particular, this local functional $S_{\mr{eq}}$ satisfies the equivariant classical master equation. \end{prop} \begin{proof} The functional $S$ satisfies the classical master equation by assumption, so we only need to consider terms in the equivariant classical master equation with a non-trivial dependence on the auxiliary (or background) fields $X$ or $\wt X$ from~$\mf{iso}(n)_{\mr{dR}}$. Write $I_X(A) = -\langle A \wedge L_{X} A \rangle$, and $J_{\wt X}(A) = -\langle A \wedge \iota_{\wt X} A \rangle$. We will show that the following equations hold: \begin{align} \{S, I_X\} &= 0 \\ \frac 12 \{I_X, I_X\} + \d_{\mr{CE}}I_{X} &= 0 \\ \{S, J_{\wt X}\} + \{I_X, J_{\wt X}\} + \d_{\mr{CE}}J_{\wt X} &= 0 \\ \frac 12 \{J_{\wt X}, J_{\wt X}\} &= 0. \end{align} Equations (1) and (2) together say that $\mc E_L$ is $\mf{iso}(n)$-equivariant, which follows by observing that there is a smooth action of the \emph{Lie group} $\mr{ISO}(n)$ on $\mc E_L$ by isometries of $\RR^n$, which is infinitesimally generated by the functional~$I_X$. Equation (4) is straightforward: it follows from the fact that $\iota_{\wt X}^2 = 0$. It remains to deduce equation~(3), which is a consequence of Cartan's formula. Indeed, \begin{align*} \d_{\mr{CE}}J_{\wt X}(A) &= -\left\langle A \wedge (L_{\wt X}(A) + \iota_{[X,\wt X]}(A)) \right\rangle \\ &= -\left\langle A \wedge ([\d, \iota_{\wt X}](A) + [L_X,\iota_{\wt X}](A)) \right\rangle \\ &= -\{S, J_{\wt X}\}(A) - \{I_X, J_{\wt X}\}(A). \end{align*} \end{proof} Let us restrict this action to an action of the ordinary algebra of isometries $\mf{iso}(n)$ alone, acting by the Lie derivative. 
This action can be defined at the quantum level, and it naturally comes from a smooth action of the \emph{group}~$\mr{ISO}(n)$. \begin{prop} \label{quantum_iso_action_prop} There is a smooth classical action of the Lie group $\mr{ISO}(n)$ on the topological AKSZ theory~$\mc E_L$. This action can be lifted to an action at the quantum level. \end{prop} We applied this result in a specific family of examples in \cite[Proposition 5.10]{EGWHigherDef}, using the same argument. \begin{proof} This claim follows from the result \cite[Proposition 9.1.1.2]{Book2}. This proposition proves the given claim for the group of translations, but as remarked following the result in {\it loc. cit.}, the same argument works for the full group of isometries. According to the cited result, it suffices to prove that the effective interaction associated to any parametrix is isometry invariant. In turn, it is enough for the classical interaction, along with the choice $\d^*$ of gauge-fixing operator, to be isometry invariant. \end{proof} Likewise, let us restrict the $\mf{iso}(n)_{\mr{dR}}$ action to an action of $\RR^n_{\mr{dR}}$. We can, again, define this action at the quantum level. \begin{prop} \label{translation_dR_no_obstruction_prop} There is no anomaly obstructing the lift of the classical action of $\RR^n_{\mr{dR}}$ on the topological AKSZ theory $\mc E_L$ to the quantum level. \end{prop} \begin{proof} We will prove this claim by thinking about the weights of Feynman diagrams that would generate an anomaly obstructing the quantization of such an action. Consider a Feynman diagram of shape $\Gamma$ containing a vertex at position $x \in \RR^n$ labelled by the interaction $J_{\wt X}(A)$, where $\wt X$ is the vector field generating a translation. The weight of the diagram $\Gamma$ can be decomposed as a sum of weights $W_{\Gamma, e}$, where a single internal edge $e$ of $\Gamma$ is labelled by the heat kernel, and the remaining edges are all labelled by the propagator.
Let us show that this sum vanishes. There are two classes of summand: \begin{enumerate} \item Suppose we label $\Gamma$ so that the special edge $e$ labelled by $K$ is not adjacent to the vertex at $x$. Then the associated Feynman weight is a limit of terms of the form \[\int_{t \in [\eps, \Lambda]} \int_{(x_1, \ldots, x_{N-1}) \in (\RR^n)^{N-1}} \left(\int_{x \in \RR^n} \d^*K_t(x-x_1) \wedge \iota_{\dd_j} \d^*K_t(x-x_2)\right) \wedge F(x_1, \ldots, x_{N-1}),\] where $F$ is some differential form (we won't need its explicit form, only the fact that it is independent of the location $x$). Because $\d^*$ and $\iota_{\dd_j}$ commute, the term inside the parentheses vanishes, so $W_{\Gamma, e}=0$. \item There are two labellings where the special edge $e$ labelled by $K$ is adjacent to $x$, say $e=e_1$ or $e=e_2$. The weights of these two labelled diagrams differ by a sign, and therefore they cancel when we sum over all labellings. Indeed, by integration by parts in the $x$ variable, \begin{align*} W_{\Gamma, e_1} &= \int_{t \in [\eps, \Lambda]} \int_{(x_1, \ldots, x_{N-1}) \in (\RR^n)^{N-1}} \left(\int_{x \in \RR^n} \d^*K_t(x-x_1) \wedge \iota_{\dd_j} K_t(x-x_2)\right) \wedge F(x_1, \ldots, x_{N-1}) \\ & = - \int_{t \in [\eps, \Lambda]} \int_{(x_1, \ldots, x_{N-1}) \in (\RR^n)^{N-1}} \left(\int_{x \in \RR^n} K_t(x-x_1) \wedge \iota_{\dd_j} \d^*K_t(x-x_2)\right) \wedge F(x_1, \ldots, x_{N-1}) \\ &= - W_{\Gamma, e_2}. \end{align*} \end{enumerate} As a result, the sum of the weights $W_{\Gamma, e}$ over all edges vanishes, which implies that the anomaly for the $\RR^n_{\mr{dR}}$ action vanishes as claimed. \end{proof} So, putting this together, we find it is always possible to quantize a topological AKSZ theory on $\RR^n$ equivariantly for the action of the dg Lie algebra $\so(n) \ltimes (\RR^n_{\mr{dR}})$. In the next section we will fix an equivariant quantization for this dg Lie algebra, and study lifts to $\mf{iso}(n)_{\mr{dR}}$-equivariant quantizations.
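For concreteness, we record the shape of the kernels used in the proof above (a standard recollection, stated up to normalization conventions; see \cite{CostelloBook}). On flat $\RR^n$ the heat kernel for the Hodge Laplacian on $\Omega^\bullet(\RR^n)$ is the scalar Gaussian tensored with the identity on form indices,
\[
K_t(x, y) = (4 \pi t)^{-n/2}\, e^{-|x-y|^2/4t} \sum_{I} \d x^I \otimes \d y^I,
\]
and the propagator with cutoffs is obtained by applying the gauge-fixing operator and integrating over the scale parameter, $P_{\eps, \Lambda} = \int_\eps^\Lambda (\d^* \otimes 1) K_t \, \d t$. Both mechanisms in the proof are visible in these formulas: $\d^*$ and $\iota_{\dd_j}$ are each built from contractions and so commute, and moving $\d^*$ from one kernel factor to the other across the pairing costs only a sign.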
\section{\texorpdfstring{${\bb E}_n$}{En}-Algebras from Topological AKSZ theories} \label{En_section} Let us briefly review the relationship between topological field theories and $\bb E_n$ algebras as described in \cite{ElliottSafronov}. Consider a classical field theory $\mc E$ on $\RR^n$, and suppose that $\mc E$ admits a smooth action of $\RR^n_{\mr{dR}}$ as discussed in the previous section. For example, $\mc E$ might be a topological AKSZ theory. We can often describe either the classical or the quantum \emph{observables} of the field theory using the language of homotopical algebra. Recall that an \emph{$\bb E_n$-algebra} is defined as an algebra, in the category of cochain complexes, over the operad of little $n$-disks. A \emph{framed} $\bb E_n$-algebra is an $\bb E_n$-algebra equipped with a compatible action of the group $\SO(n)$ of rotations. In this section we will discuss the realization of $\bb E_n$-algebras as a special case of the theory of factorization algebras, as developed in \cite{Book1, Book2} in the context of quantum field theory. Let us write $\obscl(\mc E)$ for the factorization algebra of classical observables of the theory $\mc E$. This factorization algebra inherits a smooth action of $\RR^n_{\mr{dR}}$ from the action on the classical fields. If, furthermore, there is no anomaly obstructing the action of $\RR^n_{\mr{dR}}$ at the quantum level -- for instance, for topological AKSZ theories by Proposition \ref{translation_dR_no_obstruction_prop} -- then there is an action of $\RR^n_{\mr{dR}}$ on the factorization algebra $\obsq(\mc E)$ of \emph{quantum} observables. This is exactly the context in which we can invoke the following result. \begin{definition} Let $\obs$ be a factorization algebra on $\RR^n$ with a smooth action of $\RR^n_{\mr{dR}}$. It is \emph{rescaling-invariant} if the structure map \[ \obs(B_r(0)) \to \obs(B_R(0)) \] for the inclusion of concentric balls is a quasi-isomorphism for any $r < R$.
\end{definition} \begin{theorem}[{\cite[Corollary 2.30]{ElliottSafronov}}] Let $\obs$ be a rescaling-invariant factorization algebra on $\RR^n$ with a smooth action of $\RR^n_{\mr{dR}}$. Then the cochain complex $\obs(B_1(0))$ of observables on the unit ball can be canonically equipped with the structure of an $\bb E_n$ algebra. \end{theorem} \begin{remark} We will apply this result to the factorization algebra of quantum observables of a topological AKSZ theory, where the condition of rescaling invariance is automatically satisfied. At the classical level it is immediate from Lemma \ref{AKSZ_functionals_lemma}, since the de Rham complex is locally constant. When we quantize, as a graded vector space the quantum observables are isomorphic to \[(\Omega^\bullet(U) \otimes \mr C_{\mr{red}}^\bullet(L)[n])[\![\hbar]\!],\] and we need to observe that the quantum corrections to the differential on the factorization algebra of observables do not violate rescaling invariance. We can see this using the spectral sequence associated to the filtration by $\hbar$ degree, whose $E_2$ page recovers the factorization algebra of classical observables. The rescaling map is a map of filtered complexes and induces a quasi-isomorphism on the $E_2$ page of this spectral sequence, so we obtain a quasi-isomorphism at the $E_\infty$ page. \end{remark} If we can promote the smooth action of translations to a smooth action of rotations, then we can strengthen this result to provide a \emph{framed} $\bb E_n$ algebra structure. \begin{theorem}[{\cite[Corollary 2.39]{ElliottSafronov}}] Let $\obs$ be a rescaling-invariant factorization algebra on $\RR^n$ with a smooth action of $\mr{ISO}(n)_{\mr{dR}}$. Then the cochain complex $\obs(B_1(0))$ of observables on the unit ball can be canonically equipped with the structure of an $\bb E_n^{\mr{fr}}$ algebra.
\end{theorem} Field theories provide our main source of factorization algebras, by the central result of \cite{Book2}: a BV theory on $\RR^n$ determines a factorization algebra on $\RR^n$. Hence a deformation of the theory determines a deformation of the factorization algebra, and in fact there is a map from the deformation complex of the theory to the deformation complex of its factorization algebra of observables. For this reason, if we want to show that a group acts smoothly on the observables, it suffices to understand how it acts on the theory. In particular, for this paper, we want to characterize when a quantization is $\mr{ISO}(n)_{\mr{dR}}$-equivariant. For a topological theory, as given by Definition~\ref{def E_L}, we have seen that the theory (and its usual quantization) is rescaling-invariant and $\mr{ISO}(n)$-equivariant, and thus so is the factorization algebra of observables. In fact, we have also shown that the translation action is homotopically trivial, so what remains is to trivialize homotopically the $\so(n)$-action. \section{Computation of Framing Anomalies} \label{anomaly_section} Let us follow the procedure we just outlined in Section \ref{dR_iso_section}, using the classical action of $\mf{iso}(n)_\mr{dR}$ of Proposition \ref{isometry_action_prop} and the quantization of the algebras $\mf{iso}(n)$ and $\RR^n_{\mr{dR}}$ that we have already constructed in Proposition \ref{quantum_iso_action_prop} and Proposition \ref{translation_dR_no_obstruction_prop}, respectively. We would like to lift this to an action of all of $\mf{iso}(n)_{\mr{dR}}$ at the quantum level. Our main result is the following. \begin{theorem} \label{obstruction_theorem} Fix an $\so(n) \ltimes (\RR^n_{\mr{dR}})$-equivariant quantization of the topological AKSZ theory $\mc E_L$ associated to a cyclic $L_\infty$ algebra $L$, as described in Definition~\ref{def E_L}.
The obstruction to lifting this quantization to an $\mf{iso}(n)_{\mr{dR}}$-equivariant quantization is given by an element in \begin{equation} \label{anomaly_cohomology_eqn} \bigoplus_{i+j=n}\mr H^i_{\mr{red}}(\so(n)) \otimes \mr H^j_{\mr{red}}(L). \end{equation} The obstruction to lifting to an \emph{inner} $\mf{iso}(n)_{\mr{dR}}$-equivariant quantization is given by an element~of \begin{equation} \bigoplus_{i+j=n}\mr H^i_{\mr{red}}(\so(n)) \otimes \mr H^j(L). \end{equation} \end{theorem} \begin{corollary} If the cohomology group \eqref{anomaly_cohomology_eqn} vanishes then the factorization algebra of quantum observables for the topological AKSZ theory $\mc E_L$ can be canonically equipped with the structure of a framed $\bb E_n$-algebra. \end{corollary} We will outline the argument and then prove the intermediate results that realize it. The reader should pay attention to how Pontryagin classes can be seen as labeling obstruction classes. First, we identify the obstruction-deformation complex where the obstruction to our quantization will live. Let \[\mr{Act}_{\gg}(\mc E_L)= \mr C^\bullet_{\mr{red}}(\gg, \OO_{\mr{loc}}(\mc E_L))\] denote the formal moduli space describing $\gg$-equivariant deformations of a classical theory $\mc E_L$. (For an overview, see Chapter 11 of \cite{Book2}, and for extensive discussion, see Section 2, Chapter 12 and Section 2, Chapter 13 of \cite{Book2}.) Its tangent complex is a cochain complex, and the obstruction to $\gg$-equivariant quantization is a degree 1 cocycle in that complex. These results lead to equivariant refinements of Lemma~\ref{AKSZ_functionals_lemma}, which characterize the equivariant local functionals up to equivalence: \[ \mr{Act}_{\gg}(\mc E_L) \simeq \mr C^\bullet_{\mr{red}}(\gg, C^\bullet_{\mr{red}}(L)[n]). \] In this paper, $\gg$ will be $\mf{iso}(n)_{\mr{dR}}$ or~$\so(n) \ltimes \RR^n_{\mr{dR}}$.
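Before entering the proof, it may help to see what the theorem predicts in the smallest case; this is an orientation for the reader, and the identification of the anomaly with a gravitational term is an expectation rather than a computation carried out here. Take $n = 3$. Since $\mr H^\bullet_{\mr{red}}(\so(3))$ is spanned by a single class $\eta_1$ in degree 3, the inner obstruction group reduces to
\[
\mr H^3_{\mr{red}}(\so(3)) \otimes \mr H^0(L),
\]
which is nonzero whenever $\mr H^0(L) \neq 0$ -- for instance, it contains the constants when $L$ is an ordinary cyclic Lie algebra defining a Chern--Simons-type theory. This is consistent with the familiar framing anomaly of three-dimensional Chern--Simons theory, which one expects to be measured by a gravitational Chern--Simons term attached to the first Pontryagin class $p_1$.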
By hypothesis, we have an $\so(n) \ltimes \RR^n_{\mr{dR}}$-equivariant quantization and we are asking to lift to an $\mf{iso}(n)_{\mr{dR}}$-equivariant quantization. Hence we need to describe the fiber of the map \[ \mr{Act}_{\mf{iso}(n)_{\mr{dR}}} \to \mr{Act}_{\so(n) \ltimes \RR^n_{\mr{dR}}} \] to characterize the lifting problem. This fiber is derived in nature; it has an explicit cochain model $\mc C_{n,L}$ that we describe in Lemma~\ref{description_of_fiber_lemma} below. \begin{remark} The case of an inner action is quite similar. Here we extend $\mr{Act}_{\gg}(\mc E_L)$ to $\mr{InnerAct}_{\gg}(\mc E_L)$ by adjoining the summand $\mr C^\bullet_{\mr{red}}(\gg)$ of functionals of the background fields alone. (See Lemma 12.2.3.2 of \cite{Book2}.) Concretely, we are asking for the $\gg$-action to be inner, i.e., realized by local functionals. \end{remark} Next, we begin to calculate the cohomology of $\mc C_{n,L}$ by using a spectral sequence arising from a natural filtration on the complex $\mc C_{n, L}$. The first pages of this spectral sequence reduce to computations well-known from topology. Recall that the cohomology of the classifying space $\mr{BSO}(n)$ is a graded polynomial ring with even generators given by the Pontryagin classes (and when $n$ is even, an extra generator called the Pfaffian). Explicitly, \begin{equation}\label{hbso} \mr H^\bullet(\mr{BSO}(n), \CC) \iso \begin{cases} \CC[p_1, p_2, \ldots, p_{k}] & \text{$n =2k+1 $ odd} \\ \CC[p_1, p_2, \ldots, p_{k-1}, p'_k] & \text{$n = 2k$ even} \\ \end{cases} \end{equation} where each generator $p_j$ has degree $4j$ and, for $n=2k$ even, the generator $p'_k$ has degree $2k=n$. Recall as well that the Lie algebra cohomology $\mr H^\bullet(\so(n))$ equals the cohomology $\mr H^\bullet(\SO(n),\CC)$; these are graded polynomial rings with odd generators, where each generator's degree is one less than that of the corresponding Pontryagin class.
Explicitly, \begin{equation}\label{hso} \mr H^\bullet(\so(n)) \iso \begin{cases} \CC[\eta_1, \eta_2, \ldots, \eta_{k}] & \text{$n =2k+1 $ odd} \\ \CC[\eta_1, \eta_2, \ldots, \eta_{k-1}, \eta_{k}'] & \text{$n = 2k$ even} \\ \end{cases} \end{equation} where $|\eta_j| = 4j-1$ and, for $n=2k$ even, $|\eta'_k| = |p'_k|-1$. Thus, there are generators in degrees 3, 7, and so on. In terms of those cohomology rings, the $E_2$-page of the spectral sequence computing $\mr H^\bullet(\mc C_{n,L})$ is \[ E_2^i \iso\bigoplus_{\substack{j+k+\ell = n+i \\ k>0}} \mr H^j(\so(n)) \otimes \mr H^{k}(\mr{BSO}(n)) \otimes \mr H^\ell_{\mr{red}}(L). \] This isomorphism is the content of Proposition~\ref{E2_page_prop}. The differential on this $E_2$-page sends each $\eta_j$ to $p_j$. As shown in Lemma~\ref{d3_differential_lemma}, the $E_3$-page is then isomorphic to \[ \mr H^i(\mc C_{n,L}) = \bigoplus_{\substack{j+\ell=n+i-1\\ j>0}} \mr H^j(\so(n)) \otimes \mr H^\ell_{\mr{red}}(L) \] and the spectral sequence collapses on this page. Thus, we know that anomalies obstructing the $\so(n)_{\mr{dR}}$ action live in \[ \bigoplus_{\substack{j+\ell=n\\j>0}} \mr H^j(\so(n)) \otimes \mr H^\ell_{\mr{red}}(L) \] (with $\mr H^\ell_{\mr{red}}(L)$ replaced by $\mr H^\ell(L)$ in the inner case), as claimed. \begin{remark} We have identified the space of possible anomalies abstractly in terms of classes in $\mr H^{>0}(\so(n))$, but we can make our description more explicit. That is, we can describe where these classes came from in $\mr H^{>0}(\mr{BSO}(n)) \otimes \mr H^\bullet(\so(n))$. The classes that survive to the $E_\infty$ page of the spectral sequence are all linear in the Pontryagin classes $p_k$. If we identify $\mr H^\bullet(\mr{BSO}(n)) \otimes \mr H^\bullet(\so(n))$ as the polynomial algebra in the classes $p_j$ and $\eta_j$, we can identify the factor in our spectral sequence surviving to the $E_\infty$ page as the image of the monomials in $\RR[\eta_1, \ldots, \eta_k]$ under the differential induced by the map sending $\eta_i$ to $p_i$ for all $i$.
In other words, the classes that survive take the form \[\sum_{j=1}^\ell \eta_{i_1} \eta_{i_2} \cdots \eta_{i_{j-1}} p_{i_j} \eta_{i_{j+1}} \cdots \eta_{i_\ell},\] for any sequence $1 \le i_1 < i_2 < \cdots < i_\ell \le k$. This description will follow directly from our proof of Theorem~\ref{obstruction_theorem}. We discuss the simplest two examples, where the dimension $n$ is equal to 3 or 4, in Examples \ref{3d_example} and \ref{4d_example}. \end{remark} Now that we have traced the path, we will begin with the first step. \begin{lemma} \label{description_of_fiber_lemma} The fiber of the map \begin{equation}\label{htpyfib} \mr{Act}_{\mf{iso}(n)_{\mr{dR}}} \to \mr{Act}_{\so(n) \ltimes \RR^n_{\mr{dR}}} \end{equation} is quasi-isomorphic to $\mc C_{n,L}$, whose underlying graded vector space agrees with that of \[ \mr C^\bullet(\so(n), \sym^{>0}(\so(n)^*[-2]) \otimes \Omega^\bullet(\RR^n)) \otimes \mr C^\bullet_\mr{red}(L)[n], \] but whose differential is \begin{equation} \label{fib_dif} (\d_{\mr{CE}}+ \d_{\mr{dR}} + \d') \otimes 1 + 1 \otimes \d_{\mr{CE}}, \end{equation} where $\d_{\mr{CE}}$ is the Chevalley--Eilenberg differential (for the relevant Lie algebra acting on the relevant module), $\d_{\mr{dR}}$ is the de Rham differential on $\Omega^\bullet(\RR^n)$, and $\d'$ is the operator extended as a derivation from the identity map $\so(n)^*[-1] \to \so(n)^*[-2]$. \end{lemma} Observe that this model of the fiber is, in fact, a dg commutative algebra: the tensor factors are dg commutative algebras and the differential can be checked to be a derivation. \begin{proof} We can describe the complexes $\mr{Act}_{\mf{iso}(n)_{\mr{dR}}}$ and $\mr{Act}_{\so(n) \ltimes \RR^n_{\mr{dR}}}$ directly as in \cite[Section 11.2]{Book2}. This complex $\mc C_{n,L}$ is the set-theoretic fiber product (i.e., kernel of the map of cochain complexes). 
But, using the projective model structure \cite{HinichHAHA} on cochain complexes (or dg Lie algebras), we see the map \eqref{htpyfib} is a fibration and so the kernel provides the homotopy fiber product. \end{proof} As discussed in the outline, we now consider the spectral sequence associated to the filtration that turns on the term $\d'$ in the differential. \begin{prop}\label{E2_page_prop} Consider the spectral sequence associated to the filtration $F_p$ on the complex $\mc C_{n,L}$ with \[ F_p\, \mc C_{n,L} = \bigoplus_{a \ge p}\mr C^\bullet(\so(n), {\sym}^{a}(\so(n)^*[-2]) \otimes \Omega^\bullet(\RR^n)) \otimes \mr C^\bullet_\mr{red}(L)[n] \] for $p \ge 1$, where the right hand side is equipped with the differential~\eqref{fib_dif}, and $F_p \, \mc C_{n,L} = \mc C_{n,L}$ for $p \le 0$. The $E_2$-page of this spectral sequence is equivalent to \begin{align*} E_2^i &\iso \bigoplus_{\substack{j+k+\ell = n+i \\ k>0}} \mr H^j(\so(n)) \otimes \mr H^{k}(\mr{BSO}(n)) \otimes \mr H^\ell_{\mr{red}}(L). \end{align*} \end{prop} This filtration produces a spectral sequence of graded commutative algebras. This isomorphism on the $E_2$-page is, in fact, a map of graded commutative algebras. To compute the $E_2$-page of this spectral sequence, the following result is useful. \begin{lemma}\label{invariant_coeff_prop} If $V$ is a finite-dimensional $\so(n)$-representation, then there is a natural isomorphism \[\mr H^\bullet(\so(n), V \otimes \Omega^\bullet(\RR^n)) \iso \mr H^\bullet(\so(n)) \otimes (V \otimes \Omega^\bullet(\RR^n))^{\so(n)}.\] \end{lemma} In words, computing the cohomology decouples into knowing $\mr H^\bullet(\so(n))$ and knowing the strict invariants of $V$-valued differential forms. \begin{proof}[Proof of Lemma~\ref{invariant_coeff_prop}] The complex $\mr C^\bullet(\so(n), V \otimes \Omega^\bullet(\RR^n))$ is the totalization of a double complex where one differential is the exterior derivative and the other is the Chevalley--Eilenberg differential.
Consider the spectral sequence of this double complex, where we take the exterior derivative first. Then the $E_2$-page is $\mr H^\bullet(\so(n), V)$. For any finite-dimensional representation $W$ of a semisimple Lie algebra $\gg$, there is a natural isomorphism \[ H^\bullet(\gg, W) \iso H^\bullet(\gg) \otimes W^\gg \] so the $E_2$-page is isomorphic to $\mr H^\bullet(\so(n)) \otimes V^{\so(n)}$. The sequence collapses on this page, so the claim is shown. \end{proof} That lemma makes the proof of the proposition straightforward. \begin{proof}[Proof of Proposition~\ref{E2_page_prop}] By examining the filtration, one finds that computing the $E_2$-page boils down to computing the cohomology of the double complex \[ \mr C^\bullet(\so(n), {\sym}^{a}(\so(n)^*[-2]) \otimes \Omega^\bullet(\RR^n)) \] for each natural number~$a$. But then \[ \left( {\sym}(\so(n)^*[-2]) \right)^{\so(n)} \iso \mr H^\bullet(\mr{BSO}(n), \CC), \] by Chern--Weil theory, as in~\cite{Chern}. \end{proof} \begin{lemma}\label{d3_differential_lemma} The differential $\d_3$ on the $E_2$-page \[ E_2^i \iso \bigoplus_{\substack{j+k+\ell = n+i\\k>0}} \mr H^j(\so(n)) \otimes \mr H^{k}(\mr{BSO}(n)) \otimes \mr H^\ell_{\mr{red}}(L) \] of our spectral sequence is induced, as a module over $\mr H^\bullet(\mr{BSO}(n))$, by the map sending $\eta_j \otimes \alpha$ to $1 \otimes (p_j \wedge \alpha)$, where $1 \otimes (p_j \wedge \alpha) \in \mr H^\bullet(\so(n)) \otimes \mr H^\bullet(\mr{BSO}(n))$. \end{lemma} \begin{proof} This statement follows immediately from the definition of the spectral sequence of a filtered complex. The differential on the $E_2$ page is inherited from the restriction of the differential \eqref{fib_dif} to the space of 2-almost cycles, i.e. those terms closed for the piece of the differential that does not raise filtered degree. 
This restricted differential is identical to the restriction of the summand $\d' \otimes 1$ of the differential, and $\d'$ acts on generators of $\sym^\bullet(\so(n)^*)$ exactly as stated. \end{proof} \begin{example} \label{3d_example} It may be useful to the reader to understand the cohomology of the differential $\d_3$ in some small examples. Let us consider the example where $n=3$, and where $L$ is trivial. So $\mr H^\bullet(\mr{BSO}(3)) \iso \RR[p]$ is a polynomial ring in a single variable of degree 4, and $\mr H^\bullet(\so(3)) \iso \RR[\eta]$ is an exterior algebra in a single variable of degree 3. Our $E_2$ page is therefore identified with the ideal $I$ in the ring $\RR[\eta, p]$ generated by $p$, and the differential $\d_3$ sends the generator $\eta$ to $p$. In terms of a linear basis we can illustrate the complex $(E_2, \d_3)$ pictorially as \[\xymatrix{ p & p^2 & p^3 & p^4 & \cdots \\ p\eta \ar[ur] &p^2\eta \ar[ur] &p^3\eta \ar[ur] &\cdots & }\] where the arrows all represent isomorphisms between one-dimensional summands. So, when we compute the cohomology with respect to $\d_3$ the result is the one-dimensional vector space generated by $p$. \end{example} \begin{example} \label{4d_example} If we consider the next simplest example, where $n=4$, we now have a pair of even generators for $\mr H^\bullet(\mr{BSO}(4))$, namely the first Pontryagin class $p$ and the Pfaffian $p'$, and we have a corresponding pair of odd generators $\eta, \eta'$ for $\mr H^\bullet(\so(4))$. When we compute the cohomology with respect to the differential $\d_3$, we find a three-dimensional vector space, spanned by the classes $p$, $p'$, and $p\eta' - p'\eta$. \end{example} Now we are finally ready to prove the main theorem of this section. \begin{proof}[Proof of Theorem \ref{obstruction_theorem}] It is sufficient to identify the $E_3$-page of our spectral sequence with the desired expression, and to verify that the higher differentials all vanish. 
First, we reinterpret the $E_2$-page in more algebraic terms. Observe that the graded vector space \[\mr H^\bullet(\so(n)) \otimes \mr H^\bullet(\mr{BSO}(n)) \otimes \mr H^\bullet_{\mr{red}}(L)\] naturally forms a graded commutative algebra, as it is the tensor product of three graded commutative algebras. Let $A = \mr H^\bullet_{\mr{red}}(L)$ denote the nonunital algebra of the third tensor factor. Then we can write the full algebra as \[R = A[\{\eta_j\}, \{p_j\}],\] the free graded commutative algebra over $A$ with the generators $\eta_j$ and $p_j$ from the algebras $\mr H^\bullet(\so(n))$ and $\mr H^\bullet(\mr{BSO}(n))$. More useful for us is this algebra's maximal ideal \[\overline{R} = \bigoplus_{m+n > 0} A\, \eta^m p^n ,\] spanned by monomials other than the unit monomial $\eta^0 p^0$. (Here $m$ and $n$ are vectors that encode multiple exponents. For instance, $m = (m_1, m_2,\ldots)$ and $\eta^m = \eta_1^{m_1} \eta_2^{m_2} \cdots$.) But the $E_2$-page corresponds to the subcomplex \[\mr H^\bullet(\so(n)) \otimes \mr H^\bullet_{\mr{red}}(\mr{BSO}(n)) \otimes \mr H^\bullet_{\mr{red}}(L),\] which is the ideal $I \sub \overline{R}$ generated by~$(\{p_j\})$. Note that the quotient algebra is \[\overline{R}/I =\bigoplus_{m> 0} A \,\eta^m,\] the maximal ideal of the polynomial ring over $A$ generated by all the~$\eta$s. In fact, $\overline{R}/I \iso A \otimes \mr H^\bullet_{\mr{red}}(\so(n))$. Now we turn to the differential, which has a convenient description in terms of these algebraic structures. The filtration from Proposition~\ref{E2_page_prop} has an analog where we work with all of $\sym(\so(n)^*[-2])$ and do not keep only positive symmetric powers. Its $E_2$-page can be identified with $R$. The differential on that page makes $R$ into a dg algebra over $A$ with derivation $\d$ sending $\eta_j$ to $p_j$.
This differential makes both $\overline{R}$ and $I$ into {\em dg} ideals; and this dg ideal $(I, \d)$ is precisely the $E_2$-page of our spectral sequence, as can be verified by unwinding the construction. Hence, the $E_3$-page is the cohomology of this dg ideal $(I, \d)$. To compute its cohomology, we use the long exact sequence associated to the short exact sequence \[0 \to I \to \overline{R} \to \overline{R}/I \to 0,\] where we mean the cochain complexes with differential $\d$. For the middle term, observe that $\mr H^\bullet(\overline{R},\d) = 0$: the derivation $h$ with $h(p_j) = \eta_j$ and $h(\eta_j) = 0$ satisfies that $[\d, h]$ acts on a monomial as multiplication by the total number of $\eta$ and $p$ factors, which is positive on $\overline{R}$, so $h$ furnishes a contracting homotopy. For $\overline{R}/I$, the differential inherited from $\overline{R}$ is zero, so its cohomology in each degree is simply the corresponding graded component of $\overline{R}/I$. Hence, the connecting homomorphism of the long exact sequence provides an isomorphism \[ \mr H^i(I) \iso (\overline{R}/I)^{i-1}\] for all $i$. We conclude that \[ E_3^i \iso (\overline{R}/I)^{i-1} = \bigoplus_{\substack{j+k=n+i-1\\ j>0}}\mr H^j_{\mr{red}}(\so(n)) \otimes \mr H^k_{\mr{red}}(L). \] These cohomology classes are all represented by elements in the $E_2$-page $\mr H^\bullet(\so(n)) \otimes \mr H^\bullet(\mr{BSO}(n)) \otimes \mr H^\bullet_{\mr{red}}(L)$ that are linear in the $\mr H^\bullet(\mr{BSO}(n))$ factor: they are the images $\d(\eta^m)\, a$ of the monomials spanning $\overline{R}/I$, linear in the generators $p_j$, exactly as in the remark above. There are no higher differentials in our spectral sequence between terms of this type, and so the spectral sequence collapses at the $E_3$-page. We have, therefore, obtained the equivalence we desired. The case of inner actions remains, but the proof carries over easily. We need to replace the (homotopy) fiber of the map \eqref{htpyfib} by the (homotopy) fiber of the analogous map.
That is, the fiber of the map \begin{equation} \mr{InnerAct}_{\mf{iso}(n)_{\mr{dR}}} \to \mr{InnerAct}_{\so(n) \ltimes \RR^n_{\mr{dR}}} \end{equation} where $\mr{InnerAct}$ means we allow local functionals purely of the background fields (as in the non-inner case, we refer to \cite[Section 11.2]{Book2}). Concretely, that means the fiber is quasi-isomorphic to $\mr {Inner}\, \mc C_{n,L}$, whose underlying graded vector space agrees with that of \[ \mr C^\bullet(\so(n), \sym^{>0}(\so(n)^*[-2]) \otimes \Omega^\bullet(\RR^n)) \otimes \mr C^\bullet(L)[n], \] with differential as in~\eqref{fib_dif}. Note that the only change is in the far right term: $\mr C^\bullet_{\mr{red}}(L)[n]$ is replaced by $\mr C^\bullet(L)[n]$. Such a change does not affect the proof above, which focuses on the $\so(n)$ contributions. \end{proof} \begin{remark} Our main theorem establishes a sufficient condition for the vanishing of the framing anomaly of a topological AKSZ theory, namely the triviality of the cohomology groups $\mr H^i(\so(n)) \otimes \mr H^{n-i}_{\mr{red}}(L)$ for all $i > 0$. We believe that this condition is also necessary. In a follow-up paper \cite{EGWfr} we will establish the non-vanishing of the framing anomaly at the one-loop level in the case where this cohomology is non-trivial. This is possible by evaluating the appropriate one-loop Feynman diagrams using a maximally holomorphic gauge fixing condition, by applying the results of \cite{BWhol}. In this way it is possible to obtain a concrete identification of the one-loop framing anomaly in terms of characteristic classes. \end{remark} \printbibliography \textsc{University of Massachusetts, Amherst}\\ \textsc{Department of Mathematics and Statistics, 710 N Pleasant St, Amherst, MA 01003}\\ \texttt{[email protected]}\\ \texttt{[email protected]} \end{document}
proofpile-arXiv_065-5805
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Conclusion} \vspace{-2mm} In this paper, we propose an unsupervised prompt learning (UPL) framework to avoid time-consuming prompt engineering and simultaneously facilitate the zero-shot transfer of vision-language models. In contrast to prior supervised approaches such as CoOp, CLIP-Adapter, and TIP-Adapter, our UPL is the first unsupervised framework to better adapt pre-trained vision-language models to the downstream image recognition task. We conduct abundant experiments on ImageNet as well as 10 widely used image classification datasets. Our UPL outperforms the original CLIP with prompt engineering on Imagenet and other 10 datasets. Furthermore, our unsupervised method also outperforms 2-shot CoOp in terms of the averaged accuracy across 11 datasets, and an enhanced version of UPL is even on part with the 8-shot CoOp and the 8-shot Tip-Adapter on most datasets. \section{Experiment} \vspace{-2mm} \subsection{Implementation Details} \label{experiment_settings} \subsubsection{Vision-Language Models.} We use CLIP~\cite{radford2021learning} as our pre-trained vision-language model. UPL applied on CLIP with ResNet-50~\cite{he2016deep} is served as our baseline. As described in Section~\ref{sec:pseudo} and illustrated in Figure~\ref{fig:model_class_aware}, we observe that CLIP models with different vision encoders have preferences for different categories. Therefore, we propose an enhanced version named UPL* which leverages additional CLIP models with various vision architectures including ResNet-101, ResNet50x4, ResNet50x16, ResNet50x64, ViT-B/32, ViT-B/16 and ViT-L/14~\cite{dosovitskiy2020image} to improve the quality of pseudo labels. Notice that these additional models are only used for the purpose of pseudo-labeling, UPL* still has the same network architecture (CLIP with ResNet-50) with UPL. \vspace{-3mm} \subsubsection{Pseudo Label Generation.} CLIP provides a series of prompt templates for zero-shot inference. 
For example, 80 hand-crafted prompts are used for zero-shot inference on ImageNet. Involving all prompts provided by CLIP for pseudo label generation would violate our intention of avoiding laborious prompt engineering. Thus, we only use the simplest prompt to generate pseudo labels, e.g., ``a photo of a [CLS]'' for ImageNet and ``a photo of a [CLS], a type of pet'' for OxfordPets. Please refer to the supplementary materials for more details about the prompt templates used in pseudo label generation. Unless otherwise specified, we select the top-$16$ confident samples per class and generate a pseudo-labeled set for prompt representation optimization. \vspace{-3mm} \subsubsection{Learnable Prompt Representations.} The prompt representations are randomly initialized by drawing from a zero-mean Gaussian distribution with standard deviation 0.02. We set the length $L=16$ in Eq.~\ref{eq:prompt_def} by default. We use $16$ prompt representations for ensemble in the system-level comparison with prior methods. Unless otherwise specified, for all ablation studies, we use a single prompt representation for efficiency. \vspace{-3mm} \subsubsection{Training Details.} We use SGD with an initial learning rate of 0.002 and a cosine decay learning rate scheduler for optimization. We train for 50 epochs on all datasets with a batch size of 32. We warm up the training in the first epoch with a fixed learning rate of 1e-5.
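For concreteness, the learning-rate schedule above can be sketched as follows. This is a hypothetical reconstruction under the stated hyper-parameters (the function and constant names are ours, not the authors' code): a fixed 1e-5 rate during the first warmup epoch, then cosine decay from 0.002 over the remaining epochs.

```python
import math

# Assumed constants from the training details above.
BASE_LR = 0.002
WARMUP_LR = 1e-5
TOTAL_EPOCHS = 50

def learning_rate(epoch):
    """Return the learning rate for a 0-indexed epoch."""
    if epoch == 0:
        # Warmup: the first epoch uses a small constant rate.
        return WARMUP_LR
    # Cosine decay over the remaining epochs, ending near zero.
    progress = (epoch - 1) / (TOTAL_EPOCHS - 1)
    return 0.5 * BASE_LR * (1.0 + math.cos(math.pi * progress))
```

In a framework like PyTorch the same behavior would typically come from a cosine annealing scheduler attached to the SGD optimizer; the closed-form version here just makes the shape of the schedule explicit.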
\vspace{-2mm} \subsection{Datasets} \vspace{-2mm} Following CLIP~\cite{radford2021learning} and CoOp~\cite{zhou2021learning}, we use 11 publicly available image classification datasets including ImageNet~\cite{deng2009imagenet}, Caltech101~\cite{fei2004learning}, DTD~\cite{cimpoi2014describing}, EuroSAT~\cite{helber2019eurosat}, FGVCAircraft~\cite{maji2013fine}, Food101~\cite{bossard2014food}, Flowers102~\cite{nilsback2008automated}, OxfordPets~\cite{parkhi2012cats}, SUN397~\cite{xiao2010sun}, StanfordCars~\cite{krause20133d}, and UCF101~\cite{soomro2012ucf101}. These datasets cover a variety of visual classification tasks, including general objects, fine-grained categories, and even textures, constituting a comprehensive benchmark. \begin{table}[t] \centering \caption{Main results of UPL and UPL* on 11 datasets. We compare our unsupervised approach with: 1) zero-shot CLIP with prompt engineering; 2) supervised methods including CoOp and Tip-Adapter. Both UPL and UPL* boost the performance of zero-shot CLIP.
UPL outperforms 2-shot CoOp and UPL* is on par with 8-shot CoOp and 8-shot Tip-Adapter on most datasets.} \setlength{\tabcolsep}{2.8pt} \begin{tabular}{@{}l|ccc|cccccc@{}} \toprule \multirow{3}{*}{Datasets} & \multicolumn{3}{c|}{Unsupervised} & \multicolumn{6}{c}{Supervised} \\ \cmidrule{2-10} & \multicolumn{1}{c}{CLIP~\cite{radford2021learning}} & \multicolumn{1}{c}{UPL} & UPL* & \multicolumn{3}{c}{CoOp~\cite{zhou2021learning}} & \multicolumn{3}{c}{Tip-Adapter~\cite{zhang2021tip}} \\ \cmidrule{2-10} & \multicolumn{1}{c}{\textbf{0}-shot} & \multicolumn{1}{c}{\textbf{0}-shot} & \textbf{0}-shot & \textbf{2}-shot & \textbf{4}-shot & \textbf{8}-shot &\textbf{2}-shot & \textbf{4}-shot & \textbf{8}-shot \\ \midrule ImageNet & 60.34 & 60.51 & 61.09 & 57.13 & 59.72 & 61.52 & 60.96 &60.98 & 61.45 \\ Caltech101 & 86.09 & 89.94 & 91.40 & 87.76 & 89.67 & 90.14 & 89.25 &89.41 & 89.94 \\ DTD & 41.61 & 46.57 & 55.08 & 47.48 & 54.19 & 58.65 & 49.76 &54.14 & 57.33 \\ EuroSAT & 38.23 & 54.83 & 71.04 & 59.98 & 62.17 & 68.73 & 61.10 &65.30 & 66.89 \\ FGVCAircraft & 16.92 & 17.34 & 21.75 & 20.36 & 22.10 & 24.99 & 21.25 &21.54 & 24.48 \\ Food101 & 77.33 & 77.58 & 77.93 & 72.92 & 73.74 & 76.28 & 77.58 &77.60 & 77.79 \\ Flowers102 & 66.06 & 68.90 & 76.65 & 76.58 & 84.59 & 88.27 & 76.82 &81.53 & 85.95 \\ OxfordPets & 85.83 & 88.28 & 89.51 & 84.53 & 87.11 & 87.71 & 87.38 &87.67 & 87.87 \\ SUN397 & 60.18 & 63.98 & 66.42 & 61.35 & 65.08 & 67.47 & 62.82 &64.32 & 65.57 \\ StanfordCars & 55.64 & 62.13 & 70.97 & 59.49 & 61.92 & 65.25 & 59.86 &62.03 & 63.35 \\ UCF101 & 62.70 & 67.17 & 70.18 & 65.06 & 68.26 & 71.67 & 66.59 &67.51 & 69.10 \\ \midrule \textbf{Average} & {\textbf{59.18}} & {\textbf{63.38}} &{\textbf{68.37}}& \textbf{62.97} & \textbf{66.23} & \textbf{69.15} & \textbf{64.85} & \textbf{66.55} & \textbf{68.16} \\ \bottomrule \end{tabular} \label{Table:SOTA} \vspace{-5mm} \end{table} \vspace{-2mm} \subsection{Main Results} \label{performance_analysis} \vspace{-2mm} The main results across
11 datasets are reported in Table \ref{Table:SOTA}. We compare our UPL and UPL* with: 1) zero-shot CLIP with prompt engineering; 2) supervised methods including CoOp~\cite{zhou2021learning} and Tip-Adapter~\cite{zhang2021tip}. Original CLIP defines a series of prompt templates to enhance the zero-shot transfer performance. Our UPL not only avoids such prompt engineering, but also outperforms CLIP by $+4.2$ points in terms of averaged accuracy. Our UPL*, which involves different CLIP models for pseudo-labeling while using the single CLIP with ResNet-50 for training and zero-shot inference, further boosts the averaged accuracy to $68.37$. Compared with supervised methods, UPL surpasses 2-shot CoOp, and UPL* is even on par with 8-shot CoOp and 8-shot Tip-Adapter on most datasets (ImageNet, Caltech101, EuroSAT, Food101, OxfordPets, SUN397 and StanfordCars), though our method does not need any labels from the target datasets. \vspace{-3mm} \subsection{Ablation Study} \label{saming_strategies} \subsubsection{Different Pseudo-Labeling Strategies.} Traditional self-training and semi-supervised learning approaches usually select unlabeled samples whose confidence scores are higher than a pre-defined threshold as pseudo-labeled data. We discuss the motivations for using the top-$K$ pseudo-labeling strategy for CLIP in Section~\ref{sec:pseudo}. We compare this strategy with threshold-based strategies in Table~\ref{ablation:sample_strategy}. We also visualize the pseudo label accuracy and the number of correct pseudo labels for different strategies in Figure~\ref{fig:ablation_sampling}. Using a higher threshold (0.9) results in an imbalanced distribution of pseudo-labeled data, while a lower threshold (0.3) introduces too much noise, which disturbs the training. In contrast, our default top-$K$ pseudo-labeling strategy guarantees a balanced distribution of pseudo-labeled data and prevents the vast number of samples of specific categories from overwhelming the model during training.
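The difference between the two strategies can be sketched in a few lines. This is a minimal illustration with invented toy data, assuming each prediction is a (sample id, pseudo label, confidence) triple; the function name is ours, not the authors' code.

```python
from collections import defaultdict

def select_top_k(predictions, k):
    """Keep the k most confident samples per pseudo-label class.

    predictions: iterable of (sample_id, pseudo_label, confidence).
    Returns {class: [sample_id, ...]} sorted by descending confidence.
    """
    per_class = defaultdict(list)
    for sample_id, label, conf in predictions:
        per_class[label].append((conf, sample_id))
    selected = {}
    for label, items in per_class.items():
        items.sort(reverse=True)  # most confident first
        selected[label] = [sid for _, sid in items[:k]]
    return selected

# Toy predictions: every "dog" score falls below a 0.9 threshold.
preds = [
    (0, "cat", 0.95), (1, "cat", 0.40), (2, "cat", 0.70),
    (3, "dog", 0.30), (4, "dog", 0.25),
]
picked = select_top_k(preds, k=2)
# A 0.9 confidence threshold would keep only sample 0 and leave "dog"
# empty; top-K keeps a balanced 2 samples per class instead.
```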
\begin{table}[!t] \centering \caption{Ablation study of different pseudo-labeling strategies on UCF101 dataset. We compare our top-$K$ strategy with confidence threshold strategies which are commonly used in self-training and semi-supervised learning.} \begin{tabular}{l|c} \toprule Strategy & Zero-shot accuracy \\ \midrule Confidence$>$ 0.3 & 63.83 \\ Confidence$>$ 0.9 & 57.32 \\ Top-16 (default) & \textbf{64.84} \\ \bottomrule \end{tabular} \vspace{-1mm} \label{ablation:sample_strategy} \end{table} \begin{comment} \begin{table}[!t] \centering \caption{Ablation study of different pseudo-labeling strategies on UCF101 dataset. Compared to threshold-based strategies, our top-16 based sampling strategy can achieve the better performance.} \begin{tabular}{c|c|c|c} \toprule Strategy & Pseudo label accuracy & \# Selected images & Transfer accuracy \\ \midrule Conf Threshold $>$ 0.3 & 71.96 & 5912 & 63.83 \\ Conf Threshold $>$ 0.9 & 97.84 & 1206 & 57.32 \\ Top-16 (default) & 79.34 & 1597 & \textbf{64.84} \\ \bottomrule \end{tabular} \vspace{-5mm} \label{ablation:sample_strategy} \end{table} \end{comment} \begin{figure}[t] \centering \includegraphics[width=0.6\linewidth]{figures/num_shots.pdf} \vspace{-2mm} \caption{Ablation study on top-$K$ pseudo-labeling strategy with varying $K$. With the increase of pseudo-labeled samples, the performance increases.} \vspace{-5mm} \label{fig:nums_shots} \end{figure} \begin{comment} \begin{figure}[t] \centering \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/num_shots.pdf} \caption{Study on top-$K$ pseudo-labeling strategy with varying $K$.} \label{fig:nums_shots} \end{subfigure} \hfill \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/num_prompts.pdf} \caption{Performance under different number of prompts.} \label{fig:nums_prompts} \end{subfigure} \caption{(a) The influence of using different amount of pseudo-labeled data per class ($K$). 
With the increase of pseudo-labeled samples, the performance increases. (b) Multi-prompt representations boost the transfer performance of CLIP.} \label{fig:ablation_nums} \end{figure} \end{comment} \vspace{-3mm} \subsubsection{Select Top-$K$ Confident Samples as Pseudo-Labeled Data.} We have shown the superiority of the top-$K$ pseudo-labeling strategy; here we study how many pseudo-labeled samples ($K$) of each class should be used. Concretely, we vary the value of $K$ ($K=2,4,8,16$) and train our UPL with different amounts of pseudo-labeled data. The results are visualized in Figure~\ref{fig:nums_shots}. The performance increases with the number of pseudo-labeled samples, so we set $K=16$. \vspace{-3mm} \subsubsection{Pseudo Label Ensemble.} As discussed in Section~\ref{sec:pseudo} and illustrated in Figure~\ref{fig:model_class_aware}, CLIP models with various vision architectures have preferences for different classes. Here we quantitatively evaluate the pseudo label accuracy as well as the zero-shot transfer accuracy of using different CLIP models for pseudo-labeling. Table~\ref{ablation:model_ensemble} shows the results. \begin{table}[!t] \centering \caption{Ablation study of the pseudo label ensemble strategy on UCF101, DTD and SUN397 datasets. For UPL*, we use different CLIP models with various vision encoders to generate pseudo labels and evaluate the pseudo label accuracy as well as the zero-shot transfer accuracy.
Notice that different CLIP models are only used for pseudo-labeling; the training and inference are still performed on CLIP with ResNet-50.} \begin{tabular}{c|c|c|c} \toprule Dataset & Model & Pseudo label accuracy & Zero-shot accuracy \\ \midrule \multirow{4}{*}{UCF101} & ResNet-50 & 79.34 & 64.84 \\ & ResNet-50x64 & 88.38 & 68.71 \\ & ViT-L/14 & 90.50 & 67.72 \\ & Ensemble & \textbf{90.51} & \textbf{68.86}\\ \midrule \multirow{4}{*}{DTD} & ResNet-50 & 61.67 & 45.98 \\ & ResNet-50x64 & 69.25 & 48.15 \\ & ViT-L/14 & 77.46 & 52.63 \\ & Ensemble & \textbf{79.33} & \textbf{53.94}\\ \midrule \multirow{4}{*}{SUN397} & ResNet-50 & 78.87 & 62.83 \\ & ResNet-50x64 & 83.47 & 64.87 \\ & ViT-L/14 & 83.95 & 64.77 \\ & Ensemble & \textbf{87.18} & \textbf{65.02}\\ \bottomrule \end{tabular} \label{ablation:model_ensemble} \vspace{-4mm} \end{table} \vspace{-3mm} \subsubsection{The Length of Prompt Representation.} We define the prompt representation in Section~\ref{sec:prompt_opt}. Now we study the hyper-parameter $L$, i.e., the length of the prompt representation. Table~\ref{ablation:prompt_representation_length} shows the results. From the table we can see that our UPL is less sensitive to the change of $L$. \begin{table}[!t] \centering \caption{Study of the effects of the length $L$ of the prompt representation on Caltech101, DTD and StanfordCars.
UPL is less sensitive to the change of $L$.} \begin{subtable}[t]{0.3\linewidth} \centering \begin{tabular}{c|c} \toprule Length & Accuracy \\ \midrule 4 & 89.81 \\ 8 & 89.58 \\ 16 & 89.79 \\ \bottomrule \end{tabular} \caption{Caltech101.} \end{subtable} \begin{subtable}[t]{0.3\linewidth} \centering \begin{tabular}{c|c} \toprule Length & Accuracy \\ \midrule 4 & 46.02 \\ 8 & 46.28 \\ 16 & 45.98 \\ \bottomrule \end{tabular} \caption{DTD.} \end{subtable} \begin{subtable}[t]{0.3\linewidth} \centering \begin{tabular}{c|c} \toprule Length & Accuracy \\ \midrule 4 & 59.36 \\ 8 & 60.61 \\ 16 & 60.75 \\ \bottomrule \end{tabular} \caption{StanfordCars.} \end{subtable} \vspace{-6mm} \label{ablation:prompt_representation_length} \end{table} \vspace{-3mm} \subsubsection{Prompt Representation Ensemble.} We propose a prompt representation ensemble strategy in Section~\ref{sec:prompt_opt}. CLIP designs a series of prompt templates to promote zero-shot transfer, inspiring us to separately learn multiple prompt representations and ensemble them in the inference stage. Here we study the effects of ensembling different numbers of prompt representations. Concretely, we ensemble $N=2,4,8,16$ well-optimized prompt representations; the results are shown in Figure~\ref{fig:nums_prompts}. We find the performance is almost saturated at $N=16$. Next, we further explore why the prompt representation ensemble is effective. To be specific, we randomly initialize three prompt representations (namely PR-1, 2 and 3) and optimize them separately via our UPL on the UCF101 dataset. For each of the three well-learned prompt representations, we compute the per-class accuracy on the UCF101 test set. We use PR-1 as our baseline and calculate the per-class accuracy difference with the other two (PR-2 and PR-3). The results are visualized in Figure~\ref{fig:prompt_class_aware}.
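The ensemble step itself is simple averaging over the per-prompt predictions. The sketch below is a hypothetical illustration with toy numbers (the function name and probability values are ours): each of the $N$ independently optimized prompt representations yields a class-probability vector for a test image, and the element-wise average gives the final prediction.

```python
def ensemble_predict(probs_per_prompt):
    """probs_per_prompt: list of N probability vectors (length C each).

    Returns (predicted class index, averaged probability vector).
    """
    n = len(probs_per_prompt)
    num_classes = len(probs_per_prompt[0])
    avg = [sum(p[c] for p in probs_per_prompt) / n for c in range(num_classes)]
    return max(range(num_classes), key=avg.__getitem__), avg

# Toy example: two prompt representations disagree on their top class;
# the ensemble resolves the disagreement.
probs = [
    [0.50, 0.45, 0.05],  # PR-1 favors class 0
    [0.30, 0.65, 0.05],  # PR-2 favors class 1
]
label, avg = ensemble_predict(probs)
```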
Though the overall accuracies of the three prompt representations are very similar (64.58, 64.66 and 64.39 for PR-1, 2 and 3, respectively), the per-class accuracy varies a lot. We conjecture the root cause lies in the different initializations. \begin{figure}[t] \centering \hfill \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=0.98\textwidth]{figures/num_prompts.pdf} \caption{Study of ensembling different numbers of prompt representations.} \label{fig:nums_prompts} \end{subfigure} \hfill \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/UCF101_prompt_class_aware.pdf} \caption{Well-learned prompt representations show preferences for different classes.} \label{fig:prompt_class_aware} \end{subfigure} \vspace{-2mm} \caption{(a) We study the effects of ensembling different numbers of prompt representations. (b) We optimize three prompt representations (PR-1, 2 and 3) with different initializations on the UCF101 dataset. We use PR-1 as the baseline and calculate the per-class accuracy difference with PR-2 and PR-3. We find the well-learned prompt representations have biased preferences for different classes, inspiring us to ensemble them to facilitate the zero-shot transfer.} \label{fig:ablation_nums} \vspace{-6mm} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.6\linewidth]{figures/UCF101_pseudo_acc.pdf} \vspace{-2mm} \caption{We calculate per-class pseudo label accuracy and per-class zero-shot transfer improvement on the UCF101 dataset. There are no obvious correspondences between pseudo label accuracy and zero-shot transfer accuracy.
We still see significant zero-shot transfer improvement for the classes with low pseudo label accuracy, indicating our UPL is robust to noisy pseudo-labeled samples.} \label{fig:acc_vs_conf} \vspace{-6mm} \end{figure} \vspace{-3mm} \subsubsection{Robustness to Noisy Pseudo Labels.} Pseudo labels play an important role in self-training and semi-supervised learning. The crucial principle of pseudo-labeling is to generate a sufficient amount of pseudo-labeled samples of high quality. Some recent works~\cite{hu2021semi,rizve2021defense,xu2021end} have explored directions for reducing the negative impacts of noisy pseudo labels. As shown in Figure~\ref{fig:sampling_acc}, we find CLIP has preferences for different classes, resulting in biased per-class accuracies, i.e., the performance of some classes is worse than others. This indicates that the pseudo labels predicted by CLIP are noisy for these under-performing categories. It is natural to raise a question: are there any correspondences between per-class pseudo label accuracy and per-class zero-shot transfer accuracy? To answer this question, we calculate: 1) per-class pseudo label accuracy; 2) per-class accuracy improvement of zero-shot transfer on the UCF101 dataset. Figure~\ref{fig:acc_vs_conf} visualizes the results. We observe no obvious correspondence between pseudo label accuracy and zero-shot transfer accuracy. Actually, for some categories with low pseudo label accuracy caused by high noise, we still see significant zero-shot transfer improvement, indicating our UPL is robust to the noise of pseudo labels. Now we briefly analyze the causes. As described in Section~\ref{sec:prompt_opt}, all classes share the identical prompt representation (see Eq.~\ref{eq:prompt_def}). Therefore, the prompt representations are optimized by the pseudo-labeled samples from all classes.
Though noisy pseudo labels exist in certain categories, we can still optimize a favorable shared prompt representation with a large amount of qualified pseudo-labeled samples. \section{Introduction} \label{sec:intro} Recently, vision-language models such as CLIP~\cite{radford2021learning}, ALIGN~\cite{jia2021scaling} and FILIP~\cite{yao2021filip} have achieved promising progress in visual representation learning. In contrast to traditional visual frameworks trained with a fixed set of discrete labels, vision-language models use large-scale image-text pairs for training and align images with raw texts in the embedding space via a two-tower architecture, i.e., an image encoder and a text encoder. For zero-shot transfer, one needs to carefully design the text description, known as the \textit{prompt}, to classify the target images. For example, one of the prompt templates used in CLIP is ``a photo of a [CLS]'' (Figure~\ref{fig:teaser_a}). However, identifying the proper prompt is non-trivial, which often requires domain knowledge and laborious prompt engineering. To avoid hand-crafted prompt design and improve transfer performance, some supervised methods, e.g. CoOp~\cite{zhou2021learning}, CLIP-Adapter~\cite{gao2021clip} and Tip-Adapter~\cite{zhang2021tip}, propose to use a small set of labeled images from the target dataset to better adapt vision-language models for downstream image recognition tasks: CoOp learns a continuous prompt representation to replace hand-crafted prompts (Figure~\ref{fig:teaser_b}); CLIP-Adapter adopts additional networks to learn new features (Figure~\ref{fig:teaser_c}); Tip-Adapter further extends CLIP-Adapter by constructing a query-key cache model from few-shot supervision of the target datasets to obtain the weights of the newly added networks (Figure~\ref{fig:teaser_c}). Nevertheless, all these methods rely on the annotations of the target datasets.
\begin{figure}[t] \centering \begin{subfigure}[b]{0.22\textwidth} \centering \includegraphics[width=\textwidth]{figures/teaser_a.pdf} \caption{CLIP.} \label{fig:teaser_a} \end{subfigure} \hfill \begin{subfigure}[b]{0.22\textwidth} \centering \includegraphics[width=\textwidth]{figures/teaser_b.pdf} \caption{CoOp.} \label{fig:teaser_b} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{figures/teaser_c.pdf} \caption{CLIP/TIP-Adapter.} \label{fig:teaser_c} \end{subfigure} \hfill \begin{subfigure}[b]{0.22\textwidth} \centering \includegraphics[width=\textwidth]{figures/teaser_d.pdf} \caption{UPL (ours).} \label{fig:teaser_d} \end{subfigure} \vspace{-2mm} \caption{Illustration of CLIP and prompt learning methods: (a) zero-shot inference of CLIP; (b) few-shot prompt representation learning of CoOp; (c) few-shot adapter optimization of CLIP-Adapter and Tip-Adapter; the query-key cache model proposed in Tip-Adapter makes it training-free; (d) our UPL facilitates zero-shot transfer of the original CLIP while having no dependency on labeled images from the target datasets.} \label{fig:teaser} \vspace{-5mm} \end{figure} One of the most attractive advantages of CLIP-like vision-language models is their zero-shot transfer capacity to downstream tasks such as image classification~\cite{radford2021learning,jia2021scaling,yao2021filip,yuan2021florence}, object detection~\cite{gu2021open,du2022learning} and semantic segmentation~\cite{xu2021simple}. Using labeled data from the target dataset to facilitate transfer performance may violate the intention of zero-shot transfer. In this work, we propose an unsupervised prompt learning (UPL) framework to better adapt vision-language models for the downstream image recognition task (Figure~\ref{fig:teaser_d}). Unlike existing works, UPL has no dependency on labeled images of the target dataset.
Concretely, UPL first utilizes a pre-trained vision-language model, e.g., CLIP, to generate pseudo labels for target images; then a self-training procedure is performed to optimize the continuous prompt representations. As a result, UPL greatly boosts CLIP's zero-shot transfer performance. In contrast to threshold-based self-training methods, we select the top-$K$ confident samples per class for self-training according to two observations: 1) vision-language models have biased preferences for different classes, so using a pre-defined threshold to filter out unconfident samples leads to an imbalanced data distribution; 2) there is no obvious correlation between confidence scores and pseudo label accuracy, which means the confidence may not be a good indicator of the quality of pseudo labels. Though noisy pseudo labels may be introduced simultaneously, we experimentally find our method is robust to the noise since all classes share the identical prompt representations. Motivated by the prompt ensemble strategy proposed by CLIP, we introduce pseudo label ensemble and prompt representation ensemble to further boost the zero-shot transfer of our UPL. Our contributions can be summarized as follows: \begin{itemize} \item We present an unsupervised prompt learning (UPL) framework to avoid time-consuming prompt engineering and better adapt vision-language models (e.g. CLIP) for the downstream image recognition task. In contrast to existing supervised methods, UPL does not rely on any labeled images of the target datasets, which conforms with the intention of zero-shot transfer. As far as we know, UPL is the first work to introduce unsupervised learning into prompt learning of vision-language models. \item We thoroughly analyze the characteristics of CLIP for pseudo-labeling.
Based on these observations, we propose a series of techniques including the top-$K$ pseudo-labeling strategy, pseudo label ensemble, and prompt representation ensemble to facilitate the zero-shot transfer. \item Our UPL outperforms CLIP with prompt engineering on ImageNet and 10 other image classification datasets. An enhanced version of UPL is even on par with the 8-shot CoOp and the 8-shot Tip-Adapter on most datasets. \end{itemize} \section{Method} \vspace{-2mm} In this section, we introduce our unsupervised prompt learning (UPL) framework for vision-language models, especially for CLIP~\cite{radford2021learning}. UPL aims to avoid laborious prompt engineering while improving the zero-shot transfer performance of vision-language models. Unlike prior supervised methods~\cite{zhou2021learning,gao2021clip,zhang2021tip}, UPL does not need any annotations of the target datasets. We first present an overview of our UPL in Section~\ref{sec:overview}. Next, we introduce the process of generating pseudo labels for target images in Section~\ref{sec:pseudo}. Finally, the details of prompt representation optimization via a well-designed self-training approach are described in Section~\ref{sec:prompt_opt}. \begin{figure}[t] \centering \includegraphics[width=0.98\linewidth]{figures/overview.pdf} \vspace{-2mm} \caption{Overview of the proposed unsupervised prompt learning (UPL) framework. Our UPL mainly contains two parts, namely pseudo label generation and prompt representation optimization. We first use CLIP with a simple prompt (e.g., ``a photo of a [CLS]'') to generate pseudo labels for target datasets and select the top-$K$ confident samples per class for subsequent training. Then we define a learnable prompt representation which is optimized on the selected pseudo-labeled samples.
For inference, we simply replace the hand-crafted prompts with the well-optimized prompt representations.} \vspace{-6mm} \label{fig:overview} \end{figure} \subsection{Overview of UPL} \label{sec:overview} Our UPL aims to avoid prompt engineering and boost the zero-shot transfer performance of vision-language models in an unsupervised manner. Figure~\ref{fig:overview} shows an overview. UPL mainly consists of two modules, namely pseudo label generation and prompt representation optimization. In the first stage, we utilize a pre-trained vision-language model (e.g. CLIP) to generate pseudo labels for unlabeled images from the target dataset. Based on the observations that: 1) the correlation between confidence scores and pseudo label accuracy is relatively low; 2) vision-language models have biased per-class accuracy, we select the top-$K$ confident samples for each class, instead of keeping all samples with confidence scores higher than a pre-defined threshold, for the subsequent prompt representation optimization. In the second stage, we define a learnable prompt representation similar to that of CoOp~\cite{zhou2021learning}. The prompt representation is shared across all categories and optimized on the selected unlabeled samples with generated pseudo labels. For zero-shot transfer, we simply replace the hand-crafted prompts with the well-optimized prompt representations and use the inference pipeline of CLIP for image recognition. \vspace{-2mm} \subsection{Pseudo Label Generation} \label{sec:pseudo} \vspace{-2mm} \subsubsection{Zero-Shot Inference of CLIP.} CLIP is pre-trained on large-scale image-text pairs to align images to raw texts in a common embedding space. Hence, it naturally fits zero-shot recognition. We first revisit the zero-shot inference of CLIP. Given a target dataset containing $C$ classes, CLIP converts the prompt, e.g.
``a photo of a [CLS]\footnote{We use [CLS] to denote the class token; [CLS] is replaced by a specific class name, such as ``cat'' or ``car'', in zero-shot inference.}'', into a lower-cased byte pair encoding (BPE) representation~\cite{sennrich2015neural}, which is subsequently fed into CLIP's text encoder to generate class embeddings for each category. We use $\{\boldsymbol{f}_c^{text}\}_{c=1}^C$ to denote the set of class embeddings, where $\boldsymbol{f}_c^{text}$ denotes the class embedding of the $c$-th category. Meanwhile, for an image $I$, we use $\boldsymbol{f}^{image}$ to denote the visual feature extracted by the image encoder. The prediction probability of class $c$ is then computed as: \begin{equation} p_c=\frac{\exp \left(<\boldsymbol{f}_c^{text}, \boldsymbol{f}^{image}>/ \tau\right)}{\sum_{j=1}^{C} \exp \left(<\boldsymbol{f}_j^{text}, \boldsymbol{f}^{image}>/ \tau\right)}, \label{eq:prob} \end{equation} where $\tau$ is a temperature parameter learned by CLIP, and $<\cdot,\cdot>$ denotes cosine similarity. We can then identify the prediction $\hat{y}$ by: \begin{equation} \hat{y} = \argmax_c p_c, \label{eq:pseudo_label} \end{equation} \begin{figure}[t] \centering \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/UCF101_pseudo_acc_conf.pdf} \caption{Per-class pseudo label accuracy of different pseudo-labeling strategies.} \label{fig:sampling_acc} \end{subfigure} \hfill \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/UCF101_pseudo_samples_conf.pdf} \caption{The number of correct pseudo labels of different pseudo-labeling strategies.} \label{fig:sampling_nums} \end{subfigure} \vspace{-2mm} \caption{We analyze two types of pseudo-labeling strategies on the UCF-101 dataset. We observe CLIP shows biased preferences for different classes in zero-shot transfer.
Classical self-training uses a pre-defined threshold to select samples of high probability, resulting in an imbalanced distribution of pseudo-labeled data (orange and green lines). Instead, we advocate selecting the top-$K$ confident samples per class to generate a balanced set of pseudo-labeled data for self-training (blue line). } \label{fig:ablation_sampling} \vspace{-6mm} \end{figure} \vspace{-5mm} \subsubsection{Pseudo Label Generation.} Given a pre-trained vision-language model, e.g. CLIP, we can use Eq.~\ref{eq:prob} and Eq.~\ref{eq:pseudo_label} to generate pseudo labels for unlabeled samples from the target dataset. Self-training and semi-supervised learning methods usually keep the confident samples whose scores are higher than a pre-defined threshold for optimization. However, we find it is non-trivial to directly apply this strategy to CLIP. The reasons are two-fold: \begin{itemize} \item We observe CLIP shows biased preferences for different classes in zero-shot transfer, which is mainly caused by the domain gap between the pre-training dataset and the target dataset. We show this phenomenon in Figure~\ref{fig:ablation_sampling}. Using a fixed pre-defined threshold to filter out unconfident samples results in an imbalanced distribution of pseudo-labeled data, which further hinders optimization. \item Self-training assumes that confidence (probability) can well reflect the quality of pseudo labels and thus a pre-defined threshold (e.g. 0.9) can be used to select high-quality samples. Nevertheless, we observe that the correlation between confidence scores and pseudo label accuracy in CLIP is relatively low. Figure~\ref{fig:pseudo_vs_acc} illustrates this phenomenon. \end{itemize} \begin{figure}[t] \centering \includegraphics[width=0.6\linewidth]{figures/UCF101_acc_conf.pdf} \vspace{-2mm} \caption{We select the top-$16$ confident samples per class on the UCF-101 dataset and compute the averaged probability and pseudo label accuracy for each class.
We observe that probability (confidence) cannot completely reflect the quality of pseudo labels. Categories with low averaged probability may achieve high pseudo label accuracy. } \vspace{-6mm} \label{fig:pseudo_vs_acc} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.6\linewidth]{figures/UCF101_model_class_aware.pdf} \vspace{-2mm} \caption{CLIP models with different vision encoders have preferences for different classes. We study this phenomenon on the UCF-101 dataset. We compare three CLIP models, namely ResNet-50, ResNet-50x64 and ViT-L/14, and compute the class-wise pseudo label accuracy for each model. We show the accuracy gap between ResNet-50x64 and ResNet-50 (blue line), and the accuracy gap between ViT-L/14 and ResNet-50 (orange line).} \vspace{-6mm} \label{fig:model_class_aware} \end{figure} Therefore, we advocate selecting the top-$K$ confident samples per class by Eq.~\ref{eq:prob} and Eq.~\ref{eq:pseudo_label} for subsequent optimization, which prevents the vast number of samples of specific categories from overwhelming the model during training. We set $K=16$ experimentally. \vspace{-3mm} \subsubsection{Pseudo Label Ensemble.} CLIP provides a series of vision models including ResNet-50, ResNet-101, ResNet-50x4, ResNet-50x16, ResNet-50x64, ViT-B/32, ViT-B/16 and ViT-L/14. We observe that CLIP models with different vision architectures have biased class-wise accuracy, as shown in Figure~\ref{fig:model_class_aware}. Motivated by this finding, we propose a simple pseudo label ensemble strategy to further enhance the quality of pseudo labels. Specifically, given $M$ CLIP models with different vision architectures, we use Eq.~\ref{eq:prob} to obtain the probability $p_i^m$ predicted by the $m$-th CLIP model; the final probability is then averaged as $\overline{p}_i= \sum_{m=1}^{M} p_i^m / M$. Similarly, Eq.~\ref{eq:pseudo_label} can be applied to $\overline{p}_i$ to generate enhanced pseudo labels.
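The scoring and ensemble steps above can be sketched as follows. This is a hypothetical, minimal illustration with toy embeddings (all names and values are ours, not CLIP's code): each model scores an image against the $C$ class embeddings via a temperature-scaled softmax over cosine similarities, and the $M$ probability vectors are averaged before taking the argmax as the enhanced pseudo label.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def class_probs(class_embs, image_emb, tau=0.01):
    """Temperature-scaled softmax over cosine similarities."""
    logits = [cosine(c, image_emb) / tau for c in class_embs]
    m = max(logits)                       # subtract max for stability
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def ensemble_pseudo_label(prob_vectors):
    """Average M per-model probability vectors, then take the argmax."""
    m = len(prob_vectors)
    num_classes = len(prob_vectors[0])
    avg = [sum(p[c] for p in prob_vectors) / m for c in range(num_classes)]
    return max(range(num_classes), key=avg.__getitem__)

# Toy class embeddings: three orthogonal directions; the image feature
# lies closest to class 2, so the sharp softmax concentrates there.
class_embs = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
image_emb = [0.1, 0.0, 0.9]
p = class_probs(class_embs, image_emb)
pseudo = ensemble_pseudo_label([p, p])    # M = 2 identical models here
```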
Once this process is finished, we use the pseudo-labeled data for unsupervised prompt representation optimization. \subsection{Unsupervised Prompt Representation Optimization} \label{sec:prompt_opt} The original CLIP defines various prompt templates, e.g. ``a photo of a [CLS]'', for zero-shot image recognition. However, identifying the proper prompt is non-trivial, since it often requires domain knowledge and laborious prompt engineering. A slight change in the prompt may lead to a huge difference in performance. CoOp~\cite{zhou2021learning} presents a framework to avoid hand-crafted prompt design by optimizing a continuous prompt representation on a small set of labeled data. Our UPL is similar to CoOp; however, our method does not need any labeled data from the target dataset. \vspace{-3mm} \subsubsection{Learnable Prompt Representation.} Our goal is to learn a prompt representation on pseudo-labeled data to facilitate the zero-shot transfer of CLIP. Formally, we define the learnable prompt representation $\boldsymbol{V} \in \mathcal{R}^{D \times L}$, where $D$ denotes the dimension of word embeddings (512 for CLIP) and $L$ is a hyper-parameter which is set to 16 by default. Given a target dataset containing $C$ classes, we define the continuous prompt $\boldsymbol{V}_c \in \mathcal{R}^{D \times (L+1)}$ for class $c~(1 \leq c \leq C)$ as: \begin{equation} \boldsymbol{V}_c=[\boldsymbol{V}][\boldsymbol{w}_c], \label{eq:prompt_def} \end{equation} where $\boldsymbol w_{c} \in \mathcal{R}^D$ represents the fixed word embedding of class $c$. Note that all classes share the identical prompt representation $\boldsymbol{V}$. The training is extremely simple, as shown in Figure~\ref{fig:overview} (right part). For each pseudo-labeled image, we extract its vision feature $\boldsymbol{f}^{image}$ by feeding the input image into CLIP's vision encoder; meanwhile, class embeddings can be generated by feeding $\{\boldsymbol{V}_c\}_{c=1}^{C}$ into CLIP's text encoder $g(\cdot)$.
The probability of the $c$-th class is computed as \begin{equation} p_c=\frac{\exp \left(<g(\boldsymbol{V}_c), \boldsymbol{f}^{image}>/ \tau\right)}{\sum_{j=1}^{C} \exp \left(<g(\boldsymbol{V}_j), \boldsymbol{f}^{image}>/ \tau\right)}, \label{eq:UPL_prob} \end{equation} where $\tau$ is the temperature parameter. For a training image, we calculate the probabilities of all classes by Eq.~\ref{eq:UPL_prob} and minimize the cross-entropy loss with its pseudo label. The gradients back-propagate through the text encoder $g(\cdot)$, which takes advantage of the wealth of information encoded in the text encoder, and finally update the learnable prompt representation $\boldsymbol{V}$. Note that the weights of both the image encoder and the text encoder are kept unchanged during training. \vspace{-3mm} \subsubsection{Inference.} Once the optimization of the prompt representation $\boldsymbol{V}$ is finished, we feed $\{\boldsymbol{V}_c\}_{c=1}^{C}$ into CLIP's text encoder to generate the class embeddings of all categories of the target dataset. For a test image, we simply feed it into CLIP's image encoder to extract its vision feature and apply Eq.~\ref{eq:UPL_prob} to compute the class probabilities for zero-shot image recognition. \vspace{-3mm} \subsubsection{Prompt Representation Ensemble.} The original CLIP defines various prompts to enhance zero-shot recognition, which inspires us to learn multiple prompt representations with different initializations. Concretely, in the training stage, we independently optimize $N$ randomly initialized prompt representations. In the inference stage, we compute the probabilities predicted by all prompt representations and use the averaged probability as the final prediction. \label{sec:inference} \section{Related Work} \vspace{-1mm} \subsection{Vision-Language Models} Vision-language models pre-trained on large-scale image-text pairs have demonstrated great potential in visual representation learning.
CLIP~\cite{radford2021learning} creates a 400-million-pair dataset, ALIGN~\cite{jia2021scaling} exploits 1.8 billion noisy image-text pairs, FILIP~\cite{yao2021filip} collects a set of 300 million paired data for fine-grained vision-language pre-training, Wukong~\cite{gu2022wukong} presents a large-scale Chinese cross-modal dataset containing 100 million pairs for benchmarking different multi-modal pre-training methods, and Florence~\cite{yuan2021florence} constructs a 900 million image-text-pair dataset called FLD-900M and achieves new state-of-the-art results on the majority of 44 representative benchmarks. These vision-language models all utilize a two-tower architecture consisting of a vision (image) encoder based on ResNet~\cite{he2016deep}, ViT~\cite{dosovitskiy2020image} or Swin Transformer~\cite{liu2021swin} and a language (text) encoder based on standard Transformers~\cite{vaswani2017attention}. To align images with raw texts in the embedding space, text-to-image and image-to-text contrastive learning~\cite{van2018representation} is adopted. In contrast to self-supervised pre-training approaches~\cite{grill2020bootstrap,chen2020simple,he2020momentum,chen2021exploring} for visual representation learning, vision-language models have an inherent zero-shot image recognition capacity. Moreover, the representative framework CLIP has been adapted to a series of vision tasks, such as object detection~\cite{gu2021open,du2022learning}, semantic segmentation~\cite{xu2021simple}, action recognition~\cite{wang2021actionclip}, video clip retrieval~\cite{luo2021clip4clip}, video captioning~\cite{tang2021clip4caption} and 3D recognition~\cite{zhang2021pointclip}. \vspace{-1mm} \subsection{Prompt Learning} Pre-trained vision-language models use prompts (e.g., ``a photo of a [CLS]'') to generate class embeddings for image recognition. Identifying the proper prompt is non-trivial and often takes a significant amount of time for prompt engineering.
Inspired by the progress of prompt learning in NLP~\cite{zhong2021factual,li2021prefix,lester2021power,shin2020autoprompt,jiang2020can}, CoOp~\cite{zhou2021learning} proposes a continuous prompt optimization strategy to avoid hand-crafted prompt design. CLIP-Adapter~\cite{gao2021clip} trains additional adapter networks that map the text and image features to a new embedding space to better adapt to the target dataset. Tip-Adapter~\cite{zhang2021tip} further extends CLIP-Adapter by creating the adapter weights from a key-value cache model constructed from the few-shot training set. However, all these methods rely on few-shot labeled data, which may violate the intention of zero-shot transfer of vision-language models. In contrast, our proposed UPL improves the transfer performance of pre-trained vision-language models while having no dependency on annotations of the target dataset. \vspace{-2mm} \subsection{Self-Training} \vspace{-1mm} Self-training~\cite{scudder1965probability,yarowsky1995unsupervised,riloff1996automatically} is a simple semi-supervised learning approach. In this paradigm, a well-trained model first generates pseudo labels on an unlabeled dataset, and then the model is fine-tuned using both labeled and pseudo-labeled data. Recently, self-training has shown significant progress in deep learning, e.g., image classification~\cite{yalniz2019billion,xie2020self}, object detection~\cite{xu2021end,sohn2020simple}, semantic segmentation~\cite{hu2021semi}, speech recognition~\cite{kahn2020self,parthasarathi2019lessons}, action recognition~\cite{xu2021cross} and machine translation~\cite{he2019revisiting}. Vision-language models are usually pre-trained on large-scale image-text pairs (e.g., 400 million pairs for CLIP) and show promising zero-shot transfer performance via prompting. Our proposed UPL generates pseudo labels for the target datasets and optimizes the continuous prompt representations via a well-designed self-training approach.
This process greatly boosts the performance of zero-shot transfer. In contrast to traditional self-training, which fine-tunes all layers in a network, UPL only optimizes the continuous prompt representations while keeping the whole network (i.e., the image encoder and the text encoder) fixed. As far as we know, this is the first work to introduce self-training into prompt learning for vision-language models. \section{Dataset Details and Hand-Crafted Prompts} The details of each dataset and the corresponding hand-crafted prompts for pseudo-labeling are provided in Table \ref{datasets_datails}. Note that we only use the simplest prompt to generate pseudo labels, without complex prompt engineering. \vspace{-5mm} \begin{table}[!h] \centering \caption{We show the class number, the size of train/test set and the hand-crafted prompt for pseudo-labeling for each dataset.} \begin{tabular}{l|cccc} \toprule Dataset & Classes & Train & Test & Hand-crafted prompt for pseudo-labeling \\ \midrule ImageNet & 1,000 & $1.28 \mathrm{M}$ & 50,000 & ``a photo of a [CLASS].'' \\ Caltech101 & 100 & 4,128 & 2,465 & ``a photo of a [CLASS].'' \\ OxfordPets & 37 & 2,944 & 3,669 & ``a photo of a [CLASS], a type of pet.'' \\ StanfordCars & 196 & 6,509 & 8,041 & ``a photo of a [CLASS].'' \\ Flowers102 & 102 & 4,093 & 2,463 & ``a photo of a [CLASS], a type of flower.'' \\ Food101 & 101 & 50,500 & 30,300 & ``a photo of [CLASS], a type of food.'' \\ FGVCAircraft & 100 & 3,334 & 3,333 & ``a photo of a [CLASS], a type of aircraft.'' \\ SUN397 & 397 & 15,880 & 19,850 & ``a photo of a [CLASS].'' \\ DTD & 47 & 2,820 & 1,692 & ``[CLASS] texture.'' \\ EuroSAT & 10 & 13,500 & 8,100 & ``a centered satellite photo of [CLASS].'' \\ UCF101 & 101 & 7,639 & 3,783 & ``a photo of a person doing [CLASS].'' \\ \bottomrule \end{tabular} \label{datasets_datails} \end{table} \section{Nearest Words of Well-Optimized Prompts} Interpreting the well-optimized prompt representations is not easy since they are optimized in a
continuous space. Following CoOp~\cite{zhou2021learning}, we search within the vocabulary for words that are closest to the optimized prompt representations (each prompt representation consists of 16 learnable vectors) based on the Euclidean distance. Note that CLIP~\cite{radford2021learning} utilizes the BPE representation~\cite{sennrich2015neural} for tokenization, so the vocabulary includes subwords that frequently appear in text. We first compare the nearest words of well-optimized prompt representations trained by our UPL for five datasets (ImageNet, Food101, OxfordPets, DTD and UCF101) in Table~\ref{words_different_datasets}. Although we observe no obvious correspondence between the optimized prompt representations and subwords, it is surprising to find that some nearest subwords show very strong correlations with the corresponding dataset, such as ``pie'' for Food101 and ``cat'' for OxfordPets. Then, we report the nearest words of four well-optimized prompts with different initializations on ImageNet in Table~\ref{words_different_prompts} to explore the relationship among different optimized prompt representations. \begin{table}[!t] \centering \setlength{\tabcolsep}{4pt} \caption{We search within the vocabulary for words that are closest to the well-optimized representations learned by UPL according to the Euclidean distance on ImageNet, Food101, OxfordPets, DTD, and UCF101.
N/A denotes non-Latin characters.} \begin{tabular}{c|lc|lc|lc|lc|lc} \toprule ID & \multicolumn{2}{c|}{ImageNet} & \multicolumn{2}{c|}{Food101} & \multicolumn{2}{c|}{OxfordPets} & \multicolumn{2}{c|}{DTD} & \multicolumn{2}{c}{UCF101} \\ \midrule 1& grp&1.20 & kc&0.53 & milo&1.85 & kle&0.80 & chillin&0.86 \\ 2& shows&0.94 & led&0.52 & pi&1.09 & cutout&0.79 & N/A & 0.74\\ 3& beh&1.07 & ila&0.53 & calls&1.30 & con&0.72 & u&0.86 \\ 4& b&0.92 & detri&0.56 & N/A & 1.12 & boston&0.79 & alter&0.63 \\ 5& listing&1.27 & 2&0.49 & cat&1.32 & favorite&0.63 & criti&0.66 \\ 6& on&1.03 & pie&0.59 & *&0.83 & ist&0.86 & starting&0.80 \\ 7& did&0.91 & join&0.63 & zes&1.29 & bod&0.67 & presents&0.82 \\ 8& N/A & 0.96 & a&0.62 & radi&2.38 & roe&1.06 & ranked&0.98 \\ 9& then&1.04 & N/A & 0.57 & sey&2.30 & spring&0.90 & pattern&1.04 \\ 10& it&0.98 & over&0.63 & usage&1.29 & acmilan&0.93 & hookah&0.77 \\ 11& tweeter&1.12 & ...:&0.64 & courtesy&1.17 & dusk&0.79 & decorate&0.85 \\ 12& ds&1.19 & at&0.60 & xx&1.28 & nap&0.69 & siddi&1.00 \\ 13& exp&1.10 & on&0.56 & nad&1.18 & celebrations&0.86 & ates&0.83 \\ 14& held&1.19 & N/A & 0.64 & gp&1.48 & they&0.87 & coming&0.74 \\ 15& N/A & 0.99 & yemen&0.74 & N/A & 2.10 & N/A & 1.00 & ayyy&1.09 \\ 16& a&1.26 & minute&0.58 & 3&1.24 & ley&1.05 & only&1.07 \\ \bottomrule \end{tabular} \label{words_different_datasets} \end{table} \begin{table}[!h] \centering \setlength{\tabcolsep}{5.5pt} \caption{We show the nearest words of four well-optimized prompt representations (PR-1, 2, 3 and 4) on ImageNet. 
N/A denotes non-Latin characters.} \begin{tabular}{c|lc|lc|lc|lc} \toprule ID & \multicolumn{2}{c|}{PR-1} & \multicolumn{2}{c|}{PR-2} & \multicolumn{2}{c|}{PR-3} & \multicolumn{2}{c}{PR-4} \\ \midrule 1& grp&1.20 & rt&1.24 & algorithms&1.15 & relaxing&1.43 \\ 2& shows&0.94 & around&0.89 & zana&1.36 & azu&1.15 \\ 3& beh&1.07 & once&1.11 & dome&1.09 & .(&1.27 \\ 4& b&0.92 & N/A & 0.88 & much&1.13 & N/A & 1.18 \\ 5& listing&1.27 & than&0.76 & !!!!!!!!!!&0.99 & hanging&1.11 \\ 6& on&1.03 & trajec&1.08 & now&0.78 & look&1.01 \\ 7& did&0.91 & N/A & 0.91 & i&1.01 & nin&1.16 \\ 8& N/A & 0.96 & \& &1.20 & N/A & 0.99 & 2&1.10 \\ 9& then&1.04 & dt&0.98 & cant&0.86 & unit&0.94 \\ 10& it&0.98 & N/A & 0.99 & thepersonalnetwork&1.07 & probab&1.04 \\ 11& tweeter&1.12 & aw&0.91 & ,&1.17 & ard&0.78 \\ 12& ds&1.19 & pushing&0.96 & pride&1.18 & :...&1.10 \\ 13& exp&1.10 & whom&1.02 & -(&0.92 & epp&1.08 \\ 14& held&1.19 & t&0.98 & who&1.26 & no&1.11 \\ 15& N/A & 0.99 & a&1.46 & todo&1.12 & a&1.19 \\ 16& a&1.26 & a&1.16 & a&1.11 & ;;&1.06 \\ \bottomrule \end{tabular} \label{words_different_prompts} \end{table} \section{Visualization Results} As described in Section 3.2 of the main paper, the probability (confidence) cannot fully reflect the quality of pseudo labels, i.e., unlabeled samples with low confidence scores may also be correctly predicted. Thus we advocate adopting a top-$K$ pseudo labeling strategy. Here we show some visualization results on the DTD and FGVCAircraft datasets in Figure~\ref{fig:sup_vis}. We observe that this phenomenon is more obvious on fine-grained classification datasets, e.g., FGVCAircraft. \begin{comment} \begin{itemize} \item In the low-confidence category, the predictions of CLIP are somewhat reasonable. The actions in the left three figures look like the ``hockey penalty'' or ``hammer throw'' even by humans, leading the confidence predicted by CLIP of the correct category to decrease.
Hence, the strategy that utilizes a pre-defined high confidence threshold to filter the training samples cannot handle these categories. \item In the high-confidence category, the CLIP model knows the general semantics of figures (e.g., swimming), but lacks the ability to classify fine-grained actions, causing pseudo-labeling based solely on confidence to be inaccurate. \end{itemize} \end{comment} \vspace{-3mm} \begin{figure}[h] \centering \vspace{-5mm} \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/supp/DescribableTextures_p0.23_correct_grooved_1047.pdf} \label{fig:c1} \end{subfigure} \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/supp/FGVCAircraft_p0.13_correct_747-300_426.pdf} \label{fig:w1} \end{subfigure} \vspace{-5mm} \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/supp/DescribableTextures_p0.31_correct_wrinkled_2752.pdf} \label{fig:c2} \end{subfigure} \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/supp/FGVCAircraft_p0.15_correct_A330-300_942.pdf} \label{fig:w2} \end{subfigure} \vspace{-5mm} \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/supp/DescribableTextures_p0.38_correct_woven_2670.pdf} \label{fig:c3} \end{subfigure} \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{figures/supp/FGVCAircraft_p0.19_correct_MD-87_2919.pdf} \label{fig:w3} \end{subfigure} \vspace{-6mm} \caption{The probability (confidence) cannot fully reflect the quality of pseudo labels. Predictions with low confidence may also be correct. We study this phenomenon on the DTD (left) and FGVCAircraft (right) datasets.} \label{fig:sup_vis} \end{figure}
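The nearest-word search used for Tables~\ref{words_different_datasets} and \ref{words_different_prompts} (following CoOp) can be sketched as follows; the toy vocabulary embeddings below stand in for CLIP's actual BPE token embedding table:

```python
import numpy as np

def nearest_words(prompt_vectors, vocab_emb, vocab):
    """For each learned prompt vector, return the closest vocabulary word
    and its Euclidean distance (the interpretation procedure of CoOp).

    prompt_vectors: (L, D) learned vectors; vocab_emb: (V, D); vocab: list of V words.
    """
    out = []
    for v in prompt_vectors:
        d = np.linalg.norm(vocab_emb - v, axis=1)  # distance to every word
        j = int(d.argmin())
        out.append((vocab[j], float(d[j])))
    return out
```

As the tables show, the resulting words are often subwords with no obvious joint meaning, which is why the distances are reported alongside them.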
\section{Experiments} \subsection{Audio Captioning} Our basic workflow was as follows: we used the baseline model as a starting point, and in each experiment we made one or more modifications to the architecture. We compared the performance of our modifications against Table~\ref{table:baseline} to gauge whether our changes improved on the baseline model's performance. Given limited resources and GPU power, we halted training for an experiment if its performance was not comparable to or better than that of the baseline model trained from scratch. Our code can be found here: \url{https://github.com/PatrickKollman/Deep-Learning-Final-Project}. \subsubsection{BART vs CRNN Encoder} The first modification we experimented with was choosing a Convolutional Recurrent Neural Network (CRNN) as our encoder instead of the Bart encoder. We chose to experiment with this encoder because it was the second-best encoder available to the team and felt like a good starting point for understanding the behavior of the model in relation to the data. We expected the results to vary because the data we are dealing with is time-sensitive. The results in Table~\ref{table:crnn} confirm this speculation. The CRNN most likely struggled on the Clotho dataset because it does not propagate information across time steps efficiently: it contains only one GRU encoder, which is much simpler than the Bart transformer model. \vspace{-1em} \begin{table}[H] \caption{CRNN Encoder} \label{table:crnn} \centering \resizebox{100pt}{!}{% \begin{tabular}{ll} \toprule \cmidrule(r){1-2} Metric & Epoch 1 \\ \midrule BLEU$_1$ & 0.318 \\ BLEU$_2$ & 0.142 \\ BLEU$_3$ & 0.065 \\ BLEU$_4$ & 0.022 \\ METEOR$_1$ & 0.100 \\ ROUGE$_L$ & 0.253 \\ CIDE$_r$ & 0.031 \\ SPICE$_1$ & 0.035 \\ SPICE$_r$ & 0.033 \\ \bottomrule \end{tabular}% } \end{table} \subsubsection{Encoder Hidden State Output} The second modification we examined was the intersection of the encoder and decoder.
Instead of only passing the last hidden state of the encoder to the decoder, we took the mean of all of the hidden states of the encoder and passed this average to the decoder. This modification was motivated by the concern that if the time sequences of our data were too long, a single hidden state may be insufficient to capture all the information contained in the sequence. The results seen in Table~\ref{table:avg} argued otherwise. Since this modification yielded poor results, we hypothesized that the average of all the hidden states may have performed worse because the earlier hidden states contain little to no information. As a result, we continued by only averaging the last two hidden states of the encoder. The results seen in Table~\ref{table:twoavg} show a minor improvement in performance over the baseline model. \begin{table}[H] \centering \begin{minipage}[b]{0.4\linewidth} \centering \caption{Hidden State Averaging} \label{table:avg} {\begin{tabular}{lll} \toprule \cmidrule(r){1-3} Metric & Epoch 1 & Epoch 2 \\ \midrule BLEU$_1$ & 0.480 & 0.486 \\ BLEU$_2$ & 0.260 & 0.315 \\ BLEU$_3$ & 0.144 & 0.211 \\ BLEU$_4$ & 0.058 & 0.127 \\ METEOR$_1$ & 0.121 & 0.134 \\ ROUGE$_L$ & 0.312 & 0.341 \\ CIDE$_r$ & 0.087 & 0.198 \\ SPICE$_1$ & 0.060 & 0.088 \\ SPICE$_r$ & 0.074 & 0.143 \\ \bottomrule \end{tabular}% } \end{minipage} \begin{minipage}[b]{0.5\linewidth} \centering \caption{2 Hidden States Averaged} \label{table:twoavg} \begin{tabular}{llll} \toprule \cmidrule(r){1-3} Metric & Epoch 1 & Epoch 2 & Epoch 3 \\ \midrule BLEU$_1$ & 0.474 & 0.497 & 0.508 \\ BLEU$_2$ & 0.268 & 0.323 & 0.333 \\ BLEU$_3$ & 0.153 & 0.217 & 0.221 \\ BLEU$_4$ & 0.069 & 0.132 & 0.136 \\ METEOR$_1$ & 0.119 & 0.138 & 0.143 \\ ROUGE$_L$ & 0.313 & 0.348 & 0.349 \\ CIDE$_r$ & 0.100 & 0.212 & 0.244 \\ SPICE$_1$ & 0.067 & 0.090 & 0.094 \\ SPICE$_r$ & 0.084 & 0.151 & 0.169 \\ \bottomrule \end{tabular}% \end{minipage} \end{table} \subsubsection{Increasing the Size of the Model} We then explored the
possibility that making the network deeper and more complex would improve performance. To do so, we ran one experiment which added a layer to the audio adapter, and another separate experiment that increased the number of layers of the encoder and decoder to 7. We initially expected this not to improve performance because we expected the creators of the baseline model to have optimized these hyper-parameters, but the results argued otherwise. Adding layers to both the audio adapter and the transformer provided a significant increase in performance. These results can be seen in Table~\ref{table:adapter} and Table~\ref{table:transformer}. \begin{table}[H] \centering \begin{minipage}[b]{0.4\linewidth} \centering \caption{2 Layer Audio Adapter} \label{table:adapter} \resizebox{160pt}{!} {\begin{tabular}{llll} \toprule \cmidrule(r){1-3} Metric & Epoch 1 & Epoch 2 & Epoch 3 \\ \midrule BLEU$_1$ & 0.497 & 0.501 & 0.509 \\ BLEU$_2$ & 0.278 & 0.324 & 0.336 \\ BLEU$_3$ & 0.155 & 0.216 & 0.225 \\ BLEU$_4$ & 0.071 & 0.131 & 0.139 \\ METEOR$_1$ & 0.124 & 0.136 & 0.141 \\ ROUGE$_L$ & 0.322 & 0.343 & 0.348 \\ CIDE$_r$ & 0.106 & 0.213 & 0.248 \\ SPICE$_1$ & 0.070 & 0.091 & 0.093 \\ SPICE$_r$ & 0.088 & 0.152 & 0.171 \\ \bottomrule \end{tabular}% } \end{minipage} \begin{minipage}[b]{0.5\linewidth} \centering \caption{7 Layer Encoder/Decoder} \label{table:transformer} \resizebox{160pt}{!} {\begin{tabular}{llll} \toprule \cmidrule(r){1-3} Metric & Epoch 1 & Epoch 2 & Epoch 3 \\ \midrule BLEU$_1$ & 0.503 & 0.500 & 0.507 \\ BLEU$_2$ & 0.280 & 0.325 & 0.337 \\ BLEU$_3$ & 0.159 & 0.217 & 0.226 \\ BLEU$_4$ & 0.070 & 0.132 & 0.141 \\ METEOR$_1$ & 0.125 & 0.139 & 0.141 \\ ROUGE$_L$ & 0.324 & 0.346 & 0.351 \\ CIDE$_r$ & 0.118 & 0.220 & 0.248 \\ SPICE$_1$ & 0.072 & 0.091 & 0.091 \\ SPICE$_r$ & 0.095 & 0.156 & 0.170 \\ \bottomrule \end{tabular}% } \end{minipage} \end{table} \subsubsection{Final Architecture} \label{sec:final} Pooling the conclusions of all the experiments, we chose a final
architecture of 2 linear layers for the audio adapter, 7 layers for the encoder and decoder of the Bart model, and averaging of the last two hidden states of the encoder. We trained this architecture for 20 epochs and the results can be seen in Table~\ref{table:final}. Our training showed great success in the earlier epochs, producing scores that were better than the scores of the baseline model trained from scratch. Unfortunately, this success plateaued in the later epochs, and our final architecture could not reach the performance of the baseline model with pre-trained weights. Even though our model's performance plateaued during training, we hypothesize that the model has not actually reached convergence yet. Given more time and resources, our team would train the model even further to find out whether convergence had occurred. \begin{table}[h] \caption{Final Architecture} \label{table:final} \centering {% \begin{tabular}{lllll} \toprule \cmidrule(r){1-5} Metric & Epoch 1 & Epoch 2 & Epoch 3 & Epoch 20 \\ \midrule BLEU$_1$ & 0.492 & 0.499 & 0.511 & 0.553 \\ BLEU$_2$ & 0.283 & 0.325 & 0.336 & 0.357 \\ BLEU$_3$ & 0.172 & 0.218 & 0.225 & 0.236 \\ BLEU$_4$ & 0.086 & 0.131 & 0.139 & 0.147 \\ METEOR$_1$ & 0.128 & 0.139 & 0.141 & 0.159 \\ ROUGE$_L$ & 0.330 & 0.343 & 0.349 & 0.361 \\ CIDE$_r$ & 0.120 & 0.216 & 0.245 & 0.321 \\ SPICE$_1$ & 0.070 & 0.094 & 0.092 & 0.104 \\ SPICE$_r$ & 0.095 & 0.155 & 0.168 & 0.213 \\ \bottomrule \end{tabular}% } \end{table} \subsection{Language-Based Audio Retrieval} Similar to the Audio Captioning task, we conducted various experiments by modifying the baseline architecture and comparing our performance against the initial results. Each model was trained for up to 50 epochs, and the best scores were noted (i.e., we used early stopping based on validation scores). Results for all experiments are summarized in Table~\ref{table:two}. Plots for the four metrics (R1, R5, R10 and mAP10) are also shown in Figure~\ref{figure:metrics}.
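For reference, these four retrieval metrics can be computed from the rank of the ground-truth item for each query. A minimal sketch, assuming one relevant item per query (as in Clotho's caption-to-audio setup):

```python
import numpy as np

def recall_at_k(ranks, k):
    """Fraction of queries whose ground-truth item ranks within the top k.

    ranks: 1-based rank of the correct item for each query.
    """
    ranks = np.asarray(ranks)
    return float(np.mean(ranks <= k))

def map_at_10(ranks):
    """mAP@10 with one relevant item per query: average precision reduces to
    1/rank if the item appears in the top 10, else 0."""
    ranks = np.asarray(ranks, dtype=float)
    return float(np.mean(np.where(ranks <= 10, 1.0 / ranks, 0.0)))
```

With a single relevant item, R1 is simply top-1 accuracy, and mAP10 rewards ranking the correct item higher within the top 10.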
Our code can be found here: \url{https://github.com/PatrickKollman/Deep-Learning-Final-Project} \begin{table}[H] \caption{Evaluation Metrics for Task 2 Audio Retrieval} \label{table:two} \centering {% \begin{tabular}{lllll} \toprule \cmidrule(r){1-3} Model & R1 & R5 & R10 & mAP10 \\ \midrule Baseline & 0.02 & 0.09 & 0.16 & 0.05 \\ LSTM & 0.02 & 0.08 & 0.15 & 0.04 \\ VGGish features & 0.02 & 0.08 & 0.14 & 0.04 \\ Shared embedding space & \textbf{0.02} & \textbf{0.10} & \textbf{0.17} & \textbf{0.06} \\ Sentence Transformer & \textbf{0.04}& \textbf{0.16} & \textbf{0.25} & \textbf{0.09} \\ \bottomrule \end{tabular}% } \end{table} \begin{figure}[H] \centering \includegraphics[width=6.6cm, height=4cm]{r1.png} \includegraphics[width=6.6cm, height=4cm]{r5.png} \includegraphics[width=6.6cm, height=4cm]{r10.png} \includegraphics[width=6.6cm, height=4cm]{map.png} \caption{Evaluation Metrics for first 30 epochs (Audio Retrieval)} \label{figure:metrics} \end{figure} \subsubsection{GRU vs LSTM Cells} First, we tried a very simple modification: the baseline CRNN network used a single GRU cell at the end of the CNN network; we simply replaced it with an LSTM cell and evaluated the resulting performance. The idea was that, since an LSTM cell has three rather than two gates, it could perform better than a GRU cell (at the cost of more training). However, we didn't observe any significant gains from using an LSTM (as seen in Table~\ref{table:two}). We believe this could be due to one of two reasons: (1) the baseline model is not complex enough to show an improvement (if any) from using an LSTM; or (2) the audio features used were encoded sufficiently well using a GRU and the added complexity of an LSTM was not required for this task. \subsubsection{Log-Mel vs VGGish features} Apart from the standard log-mel features, we also tried using the VGGish features used in the Audio Captioning task as the input to our CRNN encoder. 
Since these features were specifically trained for audio tasks, we expected to obtain better performance than with log-mel features. Unfortunately, the model performed slightly worse than the baseline. We believe this may be because the CRNN encoder is not complex enough to make proper use of the richer VGGish features and may need a transformer-like architecture, as in the Audio Captioning task, to get better results. \subsubsection{Shared Embedding Space} \label{sec:shared} In the baseline implementation, we use fixed text embeddings (i.e., Word2Vec) and learn embeddings for audio files that are close to those of the text that describes them. The inherent issue here is that these audio and text embeddings, though close in value for similar pairs, live in different embedding spaces. Therefore, as proposed by \cite{elizalde2019cross}, we added an additional fully-connected layer at the end of our baseline model that takes the previous output text and audio embeddings and learns new embeddings for both in a shared space (we projected the 300-dimensional embeddings to 1024 dimensions). As expected, we did indeed get an improvement, though a modest one. We believe that using a more complex architecture for the shared-embeddings model may lead to even better performance. \subsubsection{Using Sentence Transformers for Text Embedding} The Word2Vec embedding used in the previous experiments has shown great success, as it can describe word similarities beyond simple syntactic regularities. However, there is one major limitation of Word2Vec vectors: the representation of a single word is fixed once training is done, i.e., the word embedding is static. Many words have different meanings in different contexts, which suggests that the word embedding should be dynamic and change according to the context. Transformers, owing to their self-attention mechanism, can generate word embeddings that are fully contextualized.
Popular transformer models such as BERT~\cite{devlin2018bert} are often pre-trained on an enormously large amount of unstructured text data using two training tasks: masked language modeling (where a portion of the words are masked randomly and the model is trained to predict the masked words) and next sentence prediction. \\ \\ A large disadvantage of BERT is that it does not compute an independent sentence embedding, making it difficult to derive a good sentence-level embedding from BERT-based models. A better method called S-BERT (sentence transformer) was proposed by \cite{reimers2019sentence}, which uses a Siamese network architecture to fine-tune a pre-trained BERT-like model on Natural Language Inference data. The sentence transformer has shown a great improvement over BERT-based transformers at modeling sentences: two sentences with similar meaning will have similar embeddings. We believe using embeddings generated by the sentence transformer would yield much better results in comparison to the Word2Vec vectors, as the embeddings capture the meaning of the entire sentence. \\ \\ Since the competition allows us to use any pre-trained model, we used a pre-trained sentence transformer from the Python sentence-transformers library. In this approach, the text encoder is not trainable (its weights were frozen), and we used the triplet ranking loss to train the audio encoder so that it generates audio embeddings that follow the distribution of the sentence embeddings of the captions. As expected, the model performed significantly better than all previous experiments (as seen in Table~\ref{table:two}). The scores of R1, R5, R10, and mAP10 are improved by 50$\%$, 77.8$\%$, 56.3$\%$, and 80$\%$, respectively. \subsubsection{Binary Cross Entropy Loss with Exponential Negative Euclidean Distance} The baseline implementation uses a dot product as the similarity score between audio and text embeddings.
\cite{elizalde2019cross} proposed a binary cross entropy loss with an exponential negative Euclidean distance as an alternative for the task of learning joint embeddings. The distance $d$ for an audio-text embedding pair $(a_i, t_i)$ is: \begin{align*} d = \exp\left(-\sqrt{\sum_{j=1}^{D} (a_{i,j}-t_{i,j})^2}\right) \;, \end{align*} where $D$ is the dimension of the embedding vectors. We think this loss could be better than the dot product because the magnitude of a dot product is unbounded, while this score function computed from the exponential Euclidean distance is bounded between 0 and 1. However, there wasn't sufficient time to test this in practice. \section{Baseline Implementation} \subsection{Audio Captioning} The baseline model for Audio Captioning is a seq2seq transformer model consisting of 6 encoder and 6 decoder layers. The baseline repository can be found here: \url{https://github.com/felixgontier/dcase-2022-baseline}. The model begins with an audio adapter, an affine layer that is used to extract embeddings from the audio samples. This audio adapter takes as input embeddings of dimension 128 from a pre-trained VGGish model, and outputs embeddings of dimension 768. The VGGish model is a feature embedding model for audio classification tasks, and the code for this model can be found here: \url{https://github.com/harritaylor/torchvggish/}. The transformer in the model is a Bart seq2seq transformer. The encoder of the transformer takes as input these embeddings of dimension 768 and outputs a sequence of embeddings of the same length as the input sequence. Each transformer layer of the encoder outputs 768 features for each time step of the input representation. The performance of the pre-trained weights for the baseline model can be seen in Table~\ref{table:pretrained}, and the performance of the baseline model trained from scratch can be seen in Table~\ref{table:baseline}.
\begin{table}[H] \centering \begin{minipage}[b]{0.4\linewidth} \centering \caption{Pre-trained Baseline} \label{table:pretrained} \begin{tabular}{ll} \toprule Metric & Score \\ \midrule BLEU$_1$ & 0.555 \\ BLEU$_2$ & 0.358 \\ BLEU$_3$ & 0.239 \\ BLEU$_4$ & 0.156 \\ METEOR$_1$ & 0.164 \\ ROUGE$_L$ & 0.364 \\ CIDE$_r$ & 0.358 \\ SPICE$_1$ & 0.109 \\ SPICE$_r$ & 0.233 \\ \bottomrule \end{tabular}% \end{minipage} \begin{minipage}[b]{0.5\linewidth} \centering \caption{Baseline trained from Scratch} \label{table:baseline} \begin{tabular}{llll} \toprule \cmidrule(r){1-3} Metric & Epoch 1 & Epoch 2 & Epoch 3 \\ \midrule BLEU$_1$ & 0.479 & 0.494 & 0.506 \\ BLEU$_2$ & 0.272 & 0.322 & 0.331 \\ BLEU$_3$ & 0.155 & 0.216 & 0.219 \\ BLEU$_4$ & 0.069 & 0.132 & 0.135 \\ METEOR$_1$ & 0.122 & 0.138 & 0.140 \\ ROUGE$_L$ & 0.314 & 0.347 & 0.346 \\ CIDE$_r$ & 0.103 & 0.212 & 0.237 \\ SPICE$_1$ & 0.069 & 0.090 & 0.093 \\ SPICE$_r$ & 0.086 & 0.151 & 0.165 \\ \bottomrule \end{tabular}% \end{minipage} \end{table} The Clotho dataset is given as WAV and CSV files. Preprocessing is required to extract the VGGish embeddings from the Clotho dataset. This preprocessing code is also provided and can be found at \url{https://github.com/felixgontier/dcase-2022-preprocessing}. The decoder employs encoder outputs to generate the caption in an autoregressive manner. To do so, previously generated words are tokenized and transformed into embeddings as inputs to the decoder. In the baseline model, the tokenizer is pre-trained with a byte-pair-encoding process, i.e., each token corresponds to a sub-word from the model vocabulary instead of a full English word. This tokenizer has a vocabulary size of 50265 tokens. Each token in the past sequence is then associated with a feature vector through an embedding map and input to the decoder. Each layer of the decoder attends on the previously generated tokens with self-attention, as well as on the entire encoder output sequence with cross-attention.
The embedding dimension of each decoder layer is 768. Lastly, a classifier head consisting of one linear layer with softmax activation outputs a probability for each token of the vocabulary. Caption evaluation is then performed using a version of the caption evaluation tools used for the MS COCO challenge; these tools can be found online at \url{https://github.com/audio-captioning/caption-evaluation-tools}. At evaluation, generation is performed through beam search, although greedy decoding is also provided in the code. \subsection{Audio Retrieval} The baseline model for this task involves an audio-to-text aligning framework (\cite{xie2021unsupervised}); the baseline repository can be found at \url{https://github.com/xieh97/dcase2022-audio-retrieval}. The system uses a CRNN encoder (a set of convolutional layers followed by one or more recurrent layers, as shown in Figure~\ref{figure:crnn}) to generate frame-level acoustic embeddings from audio clips (\cite{xu2020crnn}); 64 log mel features were used as the input, computed on a sliding window of 40 ms with a hop size of 20 ms. For the text (i.e., captions), a pre-trained Word2Vec model (trained on the Google News dataset) was used to transform textual descriptions into sequences of word embeddings \cite{mikolov2013efficient}. Both the acoustic embeddings and word embeddings were 300-dimensional vectors and were averaged across time steps to produce a single vector each. \begin{figure}[H] \centering \includegraphics[width=\linewidth]{crnn.png} \caption{CRNN Encoder Architecture (\cite{xu2020crnn})}% \label{figure:crnn} \end{figure} The idea behind this approach is to learn audio and text embeddings that have a high similarity score (the dot product was used here) for matching pairs of audio and text descriptions, and lower scores elsewhere.
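The clip-level scoring just described (per-frame embeddings averaged over time, then compared by dot product) can be sketched as follows; shapes and function names are our own toy assumptions, not the baseline's actual code:

```python
import numpy as np

def clip_embedding(frame_embeddings: np.ndarray) -> np.ndarray:
    """Average a (time_steps, 300) embedding sequence into one 300-d clip vector."""
    return frame_embeddings.mean(axis=0)

def similarity_scores(audio_embs: np.ndarray, text_embs: np.ndarray) -> np.ndarray:
    """Dot-product score matrix: entry (i, j) scores audio clip i against caption j."""
    return audio_embs @ text_embs.T
```

For a batch of matching audio-caption pairs, training pushes the diagonal of this score matrix above the off-diagonal entries.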
Triplet margin loss was used to optimize the baseline, where the goal was to increase the margin between the similarity scores of matching pairs versus impostors (an impostor is a random audio clip or caption drawn from within the training batch). The baseline model was trained using the Adam optimizer and a ReduceLROnPlateau scheduler for 50 epochs. Performance was measured in terms of recall at ranks 1, 5, and 10 (R1, R5, R10) and mAP10 (mean average precision at the top 10 results). For the development-evaluation split of the Clotho dataset (Section~\ref{sec:data}), the results of our baseline model can be seen in Table~\ref{table:retrival}. As can be seen, our baseline results were lower than those listed in the original reference (*Add reference). This could be due to various reasons: (1) the model was not trained long enough; (2) the hyperparameters used were not the best; (3) more regularization was needed to reduce overfitting; and so on. Nonetheless, we used the metrics obtained in our 50-epoch training as the baseline performance for the experiments we ran. \begin{table}[h] \caption{Evaluation Metrics} \label{table:retrival} \centering \begin{tabular}{lll} \toprule Metric & Benchmark & Our Baseline \\ \midrule R1 & 0.03 & 0.02 \\ R5 & 0.11 & 0.09 \\ R10 & 0.19 & 0.16 \\ mAP10 & 0.07 & 0.05 \\ \bottomrule \end{tabular} \end{table} \section{Conclusion} Our experiments for both Audio Captioning and Language-Based Audio Retrieval yielded promising results. The \hyperref[sec:final]{final architecture} chosen for the Audio Captioning task (increased model size and a modified encoder output in the BART transformer) yielded metrics that approached the baseline, while the \hyperref[sec:shared]{Shared-Embedding Space and Sentence Transformer} models surpassed the performance of the baseline for Language-Based Audio Retrieval. In doing so, we have successfully addressed the problem of Automatic Audio Captioning and Language-Based Audio Retrieval.
Our final model for the Automatic Audio Captioning task automatically generates a language description for sounds. As for the Language-Based Audio Retrieval task, we have successfully ranked audio files based on textual descriptions, improving R1, R5, R10, and mAP10 by 50\%, 77.8\%, 56.3\%, and 80\% respectively. Given more time and resources, our team would have trained the final architecture for the Audio Captioning task even further to see whether the plateau in scores was a false sign of convergence. We would also have pursued the use of sound event vectors. As in \cite{eren2021audio}, we would extract sound event features using the Cnn14\_DecisionLevelMax pretrained model. For the Audio Captioning task, we would build a vector with one entry per time step, storing the class of the sound occurring at each step. For the Audio Retrieval task, we would build a binary vector with one entry per class, indicating whether each class occurs in the corresponding audio file. Findings in the literature suggest that including sound event vectors as an additional input to our models would provide valuable semantic information and improve performance. Therefore, the performance of both tasks could be further improved by using sound event vectors. \section{Dataset} \label{sec:data} The primary dataset for training and evaluation of both tasks is the Clotho dataset (\cite{drossos2020clotho}). This dataset contains captions for 6974 audio files (5 captions per audio); the durations of these audio clips vary between 15 and 30 seconds, while captions are 8 to 20 words long. These captions describe the events taking place in the audio (e.g., “a person is turning a map over and over”), which is the desired output of our first task. The dataset has a development, validation, and evaluation split: 3840 audio files are used in the development split, 1046 in the validation split, and 2088 in the evaluation split.
The captions for each split can be found in a respective CSV file. The audio samples are available as WAV files, while the captions are available as CSV files. For use in the development of an audio captioning method, features have to be extracted from the WAV audio clips, and the captions in the CSV files have to be pre-processed (e.g., punctuation removal). The extracted features and processed words are then matched to form input-output pairs. Besides the Clotho dataset, we have also located a few other audio datasets, specifically for sound event detection (which we can use for our second approach to Audio Captioning). These include the Freesound Audio Tagging dataset[20] from Kaggle (80 categories of everyday sounds like “slam”, “squeak”, etc.), the Urban Sound 8K dataset[21] (10 classes of urban sounds like “dog bark”), and the Environmental Sounds dataset[22] (50 classes of sounds like “rain”, “crickets”, and so on). We may find and use other datasets as we go through research papers over the course of this project. \section{Introduction} Automatic Audio Captioning (AAC) is defined as the problem of automatically generating a language description from sounds (\cite{drossos2017automated}); this description is called a caption. In this task, we aim to generate a caption as close as possible to human-perceived information for audio signals. Methods can model concepts such as loudness, physical descriptions of objects and environments, and higher-level descriptions such as the frequency of events, e.g., “a large car honks three times as it passes an empty road”. We also address language-based audio retrieval. Here, the goal is to find audio signals within a fixed dataset that match a given textual description. For both tasks, our goal is to develop robust models that can handle audio clips of varying lengths. The Clotho dataset will be used for training and evaluation of both tasks (\cite{drossos2020clotho}).
For audio captioning, model outputs will be scored on the following metrics: BLEU1, BLEU2, BLEU3, ROUGEL, METEOR, CIDEr, SPICE, and SPIDEr; of these, the main metric is SPIDEr, a linear combination of SPICE and CIDEr that is optimized using a policy gradient method (\cite{liu2017improved}). The R1, R5, R10, and mAP10 metrics are used for audio retrieval. \section{Literature Review} \subsection{Audio Captioning} Compared to video and image captioning, audio captioning has only recently begun to receive attention in the area of intelligent audio research, particularly since the 2020 DCASE challenge (\cite{gebhard1automated}). In order to improve the performance of audio captioning models, several studies have used the seq2seq approach. In a recent paper, \cite{weck2021evaluating} looked into the use of off-the-shelf models for the audio captioning task. They evaluated four embedding models (VGGish, YAMNet, OpenL3, and COALA) as encoders, three variations of embedding adapters (identity function, multi-layer perceptron, and multi-head attention), four word embeddings (Word2Vec, GloVe, fastText, and BERT), a transformer-based decoder, and their combinations in various settings. Their results show that YAMNet performed best as an encoder and can be improved using a multi-head attention-based adapter. As for word embeddings, BERT provided the best results. Among works that build an audio captioning model from the ground up, key variations include the choice of architecture for the encoder and decoder, feature extraction techniques, the use of overlapping versus non-overlapping time segments, and the use of regularization. The approach of \cite{wu2020audio} during the 2020 DCASE Challenge involved the use of a 10-layer CNN encoder with a Transformer as the decoder. \cite{xu2021sjtu} modified this architecture and used a single-layer GRU with temporal attention in place of the Transformer.
More recent studies (\cite{mei2021audio}, \cite{labbeirit}) have attempted to use a fully recurrent encoder and decoder, owing to the limitation of CNNs in modeling long-range temporal information. Others have tried to use richer features to improve the performance of the audio captioning system. For one, Tran et al. make use of both RNN and CNN blocks to capture temporal information as well as time-frequency patterns. \cite{kim2019audiocaps} presents another encoder-decoder model, one that includes semantic attention in the encoder phase. Using a large audio caption dataset, they extract words from the captions and apply a nearest-neighbor approach to retrieve the closest labels as attribute words. These attribute words are then included in the encoder phase as additional semantic information. There are various other ways to include semantic information in a model. \cite{eren2020audio} offer the idea that subject-verb embeddings can be extracted from the captions and included in the encoder phase as well. For both models, the additional semantic information was shown to enhance the performance of the model. In conjunction with semantic embeddings, \cite{eren2021audio} incorporate sound event detection into their audio captioning model. They apply pretrained audio neural networks (PANNs) to generate PANN features and sound event vectors, which are concatenated as an input to the encoder. They use a combination of GRU and bidirectional GRU for the encoder-decoder architecture. \cite{xu2021text} create an audio-grounding dataset, which offers a corresponding series of \textit{audio -- caption -- sound event phrase -- sound event timestamp segmentation}. Based on this, they propose the text-to-audio grounding task, which classifies and localizes particular sound events in an audio clip in a cross-modal manner. \subsection{Language-Based Audio Retrieval} The task of Audio Retrieval has received limited attention in the literature.
As such, only a handful of papers on this topic have been published. One such work is the study by \cite{koepke2022audio}, which proposed two new benchmarks for text-based audio retrieval on the AudioCaps and Clotho datasets. These benchmarks were used to build baselines, and the benefits of multiple pre-trained audio expert networks were demonstrated. Realizing the limitation of keyword-matching retrieval, \cite{song2020agricultural} focused on the inverted index of silence words, and designed a hybrid model with representation retrieval and semantic retrieval. Another interesting approach was proposed by \cite{oncescu2021audio}, which involves learning cross-modal embeddings for the audio samples and text descriptions such that captions that describe the audio well are closer to the audio sample in this shared embedding space. A more recent study, \cite{xie2021unsupervised}, proposed an unsupervised audio-text aligning framework between unaligned, annotated audio embeddings and word embeddings. The audio embeddings are extracted from the audio clips using a convolutional recurrent neural network (CRNN), and the word embeddings are extracted from the captions using Word2Vec. Event-phrase correspondences can be predicted by calculating the similarity of clip-caption pairs, obtained by averaging frame-word similarities. \section{Model description} \subsection{Audio Captioning} Overall, our task is to accept an audio sample as input and output a suitable caption for that sample. The input can be denoted as $X \in \mathbb{R}^{T \times N_{features}}$, where $T$ is the number of frames and $N_{features}$ is the number of features. The output can be denoted as $Y \in \mathbb{R}^{I \times L_{words}}$, where $I$ is the number of captions and $L_{words}$ is the caption length in words. We will use a Seq2Seq approach to model the relationships between the audio clips and text descriptions.
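The token-level cross-entropy objective used to train such a Seq2Seq captioner can be sketched as follows (array shapes are hypothetical toy values):

```python
import numpy as np

def caption_cross_entropy(step_probs: np.ndarray, target_ids: np.ndarray) -> float:
    """Average negative log-likelihood of the ground-truth caption.

    step_probs: (T, V) predicted distributions p(y_t | y_{1:t-1}), one row per step.
    target_ids: (T,) ground-truth token ids y_t.
    """
    steps = np.arange(len(target_ids))
    return float(-np.log(step_probs[steps, target_ids]).mean())

# A model that puts all probability mass on the correct tokens has zero loss.
perfect = np.eye(4)[[2, 0, 1]]  # 3 decoding steps over a 4-token vocabulary
loss = caption_cross_entropy(perfect, np.array([2, 0, 1]))
```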
Similar to \cite{ikawa2019neural}, we intend to use RNNs for the encoder-decoder framework, where the encoder takes in features extracted from the audio, while the decoder converts them into a sequence of words. To improve model performance, we will likely use pre-trained models such as YAMNet to extract audio features. To train the model, we use the cross-entropy loss, where $y_{t}$ is the ground-truth word at time step $t$: $$ L_{CE}=-\frac{1}{T}\sum^{T}_{t=1}\log p(y_{t}\mid y_{1:t-1},\theta) $$ Model outputs will be scored on the following metrics: BLEU1, BLEU2, BLEU3, ROUGE$_{l}$, METEOR, CIDE$_{r}$, SPICE, and SPIDE$_{r}$ (\cite{mei2021audio}). BLEU$_{n}$ calculates the weighted geometric mean over n-grams of different lengths. ROUGE$_{l}$ computes F-measures by counting the longest common subsequence. METEOR calculates the harmonic mean of precision and recall based on explicit word-to-word matches between the caption and given references. CIDE$_{r}$ measures the cosine similarity between term frequency and inverse document frequency. SPICE creates scene graphs for captions and calculates an F-score based on tuples in the scene graphs. Proposed by \cite{liu2017improved}, SPIDE$_{r}$ combines the semantic stability of SPICE and the fluency of CIDEr, which enables easier optimization by using Monte Carlo rollouts instead of mixed MLE training. \subsection{Language-Based Audio Retrieval} Audio retrieval involves retrieving audio signals with content that matches a given textual description. This task has two inputs: (1) a set of audio samples of varying lengths; and (2) a free-form text $X \in \mathbb{R}^{L_{words}}$, where $L_{words}$ is the length of the description. The output is a vector $Y \in \mathbb{R}^{N}$ containing a score for each audio sample (i.e., $N$ is the number of audio clips). To perform language-based audio retrieval, we would use our model from the previous audio captioning task to first create captions for all audio files in our database.
We would then use a similarity metric to score these descriptions based on the free-form input text. These scores would be used to rank the audio files and return the top-k results. We use the following evaluation metrics: R1 (recall at the top 1 retrieved result); R5 (recall at the top 5 retrieved results); R10 (recall at the top 10 retrieved results); and mAP10 (mean average precision at the top 10 retrieved results).
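These metrics can be sketched as follows, assuming each caption has exactly one relevant audio clip (helper names are ours):

```python
def recall_at_k(ranked_ids: list[int], relevant_id: int, k: int) -> float:
    """1.0 if the relevant clip appears in the top-k results, else 0.0."""
    return float(relevant_id in ranked_ids[:k])

def average_precision_at_10(ranked_ids: list[int], relevant_id: int) -> float:
    """AP@10 with a single relevant item: 1/rank if within the top 10, else 0."""
    for rank, clip_id in enumerate(ranked_ids[:10], start=1):
        if clip_id == relevant_id:
            return 1.0 / rank
    return 0.0

# One query whose relevant clip (id 7) is ranked third.
ranking = [4, 9, 7, 1, 0]
```

mAP10 is then the mean of AP@10 over all query captions.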
\section{Introduction} \label{sec:intro} \begin{figure*}[t] \centering \begin{subfigure}[b]{0.2285\textwidth} \includegraphics[width=\textwidth]{jpgs/teaser_a.jpg} \caption{Input handheld burst} \end{subfigure} \begin{subfigure}[b]{0.22\textwidth} \includegraphics[width=\textwidth]{jpgs/teaser_b.jpg} \caption{Refocusing on books} \end{subfigure} \begin{subfigure}[b]{0.22\textwidth} \includegraphics[width=\textwidth]{jpgs/teaser_c.jpg} \caption{Refocusing on reflection} \end{subfigure} \begin{subfigure}[b]{0.22\textwidth} \includegraphics[width=\textwidth]{jpgs/teaser_d.jpg} \caption{Magnified view} \end{subfigure} \vspace{-0.2cm} \caption{Our method takes an input handheld burst and directly generates the shallow DOF output. Our method preserves fine silhouette details and correctly defocuses through transparent or reflective surfaces. We demonstrate this on image regions of reflected vegetation, which exhibit both detailed silhouettes and layered depth.} \label{fig:intro} \vspace{-0.3cm} \end{figure*} Shallow depth-of-field (DOF) is an aesthetically pleasing effect in photography. Defocus blur suppresses details in the foreground and background which are outside the DOF, while viewer attention is directed to the subject which is within the DOF. Large-aperture lenses are necessary to produce shallow DOF with strong defocus blur in foreground and background regions. However, ubiquitous smartphone cameras often have small apertures, 1 to 2 mm in diameter. They are insufficient for producing strong defocus blur naturally, and can only produce sharp images with large DOF. For cameras with small-aperture lenses, shallow DOF images are therefore synthesized from sharp images in post-processing.
The conventional approach to synthetic defocus blur is depth-based image blurring. The input is a color+depth (RGB-D) image where the depth image is acquired by depth sensors, recovered using structure from motion (SfM) or multi-view stereo (MVS) techniques~\cite{Schonberger_2016_CVPR,seitz2006comparison,6909904, Luo_2020_CVPR}, or estimated from the monocular color image itself~\cite{saxena2005learning,garg2016unsupervised}. The blur is weaker for pixels within the DOF and stronger outside the DOF. The spatially varying blur strength is called the defocus map. Alternatively, the defocus map can be estimated from any existing defocus blur in the image, then magnified to compress the original DOF~\cite{ZHUO20111852,bae2007defocus}. The defocus map can also be approximated from image segmentation such that the blurring is applied only to regions outside the subject's image segment. A hybrid approach applies depth-based blurring in context regions while keeping the subject's image segment in perfect focus~\cite{10.1145/3197517.3201329}. However, some image features remain challenging for this conventional approach due to limitations in depth estimation. For example, silhouette edges have sub-pixel depth features and suffer from depth estimation errors; transparent and reflective surfaces do not admit a single depth value, hence blur strength, per pixel~\cite{10.1145/3272127.3275032}. Light field rendering is another approach for synthesizing shallow DOF images~\cite{10.1145/237170.237199}. The 4D light field can be acquired manually using a single lens --- the user captures images of the same scene from an array of viewpoints. However, the user has to laboriously scan multiple viewpoints within the simulated aperture using 2D hand motion~\cite{levoy2012synthcam}, while a simpler 1D trajectory only simulates an elongated synthetic aperture~\cite{5559009}. 
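A toy sketch of the conventional depth-based blurring described above: the defocus map is quantized into a few Gaussian blur levels and each pixel takes its value from the nearest level. (Production renderers instead use disc kernels and occlusion-aware gathering; function and parameter names here are our own.)

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def depth_based_blur(image: np.ndarray, defocus_map: np.ndarray,
                     n_levels: int = 4) -> np.ndarray:
    """Blur each pixel of a grayscale image by its defocus-map strength."""
    levels = np.linspace(defocus_map.min(), defocus_map.max(), n_levels)
    stack = [gaussian_filter(image, sigma=max(s, 1e-6)) for s in levels]
    # Assign every pixel the output of its nearest blur level.
    idx = np.abs(defocus_map[..., None] - levels[None, None, :]).argmin(axis=-1)
    out = np.zeros_like(image)
    for i, blurred in enumerate(stack):
        out[idx == i] = blurred[idx == i]
    return out
```

The hard per-pixel assignment is exactly what breaks down on sub-pixel silhouettes and on transparent or reflective surfaces, where no single blur level per pixel is correct.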
Viewpoint interpolation techniques can generate the dense light field from sparse acquisition viewpoints, but these viewpoints must align with predefined positions, necessitating a calibrated multi-lens arrangement~\cite{10.1145/2980179.2980251}. A learning-based approach estimates the dense light field from a single color image, but it does not generalize well to image categories unseen during training~\cite{srinivasan2017learning}. Overall, light fields are powerful intermediate representations for generating defocus effects, but they are challenging to acquire. We introduce a method for simulating defocus blur directly from a burst of sharp images taken with a small aperture camera, without explicit depth-based image blurring. Our method takes a short handheld burst of images as input. The images are first aligned and refocused conventionally at the focus depth. Our deep learning model then generates a shallow DOF output image, where the simulated aperture size equals the user's lateral hand translation during burst acquisition. In other words, the simulated defocus blur for each region is as strong as its disparity. Our method does not require multiple lenses, depth sensing capability, or laborious manual acquisition. Compared with conventional depth-based image blurring, our method successfully tackles challenging image regions such as transparent or reflective surfaces, as well as fine object silhouettes (Fig.~\ref{fig:intro}). \section{Proposed Method} \begin{figure*} \centering \vspace{-0.6cm} \includegraphics[width=\linewidth]{jpgs/merged_model.jpg} \vspace{-0.6cm} \caption{From a handheld burst, the Blur Prediction Network (BPN) infers defocus blur images $I^{def}_{c,s}$ and corresponding disparity maps $I^{disp}_{c,s}$ in parallel. The Multi-scale Merging Network (MMN) merges $I^{def}_{c,s}$ from different scales with blending weights $W_s$.
In the absence of MMN, the full-scale BPN output $I^{def}_{c,s=1}$ shows significant ghosting artifacts in high disparity regions.} \vspace{-0.6cm} \label{figure:full_setup} \end{figure*} A handheld burst is captured for each scene. During burst acquisition, the camera moves laterally over a small distance that equals the diameter of the simulated aperture. We assume the trajectory is from top to bottom in the camera's reference frame. In general, any captured burst can be re-oriented to achieve a top-to-bottom trajectory. The trajectory need not be perfectly linear, as slight deviations are expected and leveraged by the algorithm. In the case of spontaneous hand motion over a short burst, the trajectory still approximates straight lines or gentle curves~\cite{10.1145/3306346.3323024,Park_2014_CVPR}. We assume the longitudinal translation is much less than scene depth, and the lateral translation is much greater than the diameter of the real aperture. The burst undergoes two preprocessing steps using classical computer vision algorithms. First, the images are warped to align all camera orientations. This warping is a homography transformation calculated by aligning distant feature points or using onboard gyroscopes~\cite{6831799,Park_2014_CVPR}. Second, the warped images are refocused by shifting in image X-Y until the in-focus region is aligned across all frames. This is similar to auto focusing by contrast detection. For portraits, it is facilitated by face detection. The aligned and refocused burst of images is the input to our ML model (Fig.~\ref{figure:full_setup}). The aligned and refocused images can be considered an incomplete light field $L(x,y,u,v)$ with only a subset of viewpoints $(u,v)$ available. However, we don't perform camera localization and therefore don't know the exact $(u,v)$ coordinates. We do assume the image sequence to have decreasing $v$ coordinates due to the top-to-bottom trajectory. 
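The refocusing half of this preprocessing can be sketched as follows, assuming orientation alignment is already done and per-frame focus-plane shifts have been estimated (e.g., by contrast detection); integer-pixel shifts are a simplifying assumption:

```python
import numpy as np

def refocus_burst(frames: np.ndarray, shifts: list[tuple[int, int]]) -> np.ndarray:
    """Shift each frame so the in-focus region aligns across the burst.

    frames: (N, H, W) orientation-aligned burst.
    shifts: per-frame (dy, dx) offsets of the focus plane.
    """
    out = np.empty_like(frames)
    for i, (dy, dx) in enumerate(shifts):
        out[i] = np.roll(frames[i], shift=(dy, dx), axis=(0, 1))
    return out
```

Applying the negated shifts recovers the original burst, which makes the alignment step easy to sanity-check.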
Fig.~\ref{fig:49views} (e) shows an example viewpoint trajectory for a 9-frame handheld burst, represented as an incomplete light field. The blur prediction network (BPN) is employed on one color channel at a time. BPN is a U-Net~\cite{RFB15a} that directly predicts the defocus blur image for each color channel. The 3 output channels combine to produce the full RGB output. In addition to the defocus decoder $D_{def}$, the U-Net has a second decoder $D_{disp}$ whose purpose is twofold: first, it generates an estimated disparity output to facilitate \emph{multi-task training} (Sec. \ref{sec:multitask}); second, the additional disparity output provides a reliability measure for merging BPN results during \emph{multi-scale inference} (Sec. \ref{sec:multiscale}). \subsection{Multi-task Training} \label{sec:multitask} The second decoder $D_{disp}$ is appended to the BPN bottleneck layer. $D_{disp}$ is tasked with estimating the scene disparity, and it is supervised by the ground truth disparity during training. This supervision ensures that the encoder $E$ is attentive to the disparity across the input frames. However, the disparity output is not used to generate defocus blur: all defocus blur is generated in parallel with the estimated disparity. \subsection{Multi-scale Inference} \label{sec:multiscale} In some scenes, image regions can have disparities high enough to exceed the receptive field of BPN, in which case BPN cannot observe the disparity and output the correct defocus blur image. This is caused by the foreground or background context being too far from the plane of focus, or excessive camera translation during burst acquisition. Deepening the BPN U-Net or using larger convolutional filters can expand the receptive field to cover larger disparities, but we choose a multi-scale approach that is more efficient.
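The per-pixel weighted merge at the heart of this multi-scale approach, $I^{out}_c=\sum_s W_s\times I^{def}_{c,s}$, can be sketched as follows (toy NumPy; we assume all per-scale outputs have already been upsampled to full resolution and the weights sum to 1 per pixel):

```python
import numpy as np

def merge_scales(defocus_images: list[np.ndarray],
                 weights: list[np.ndarray]) -> np.ndarray:
    """Per-pixel weighted sum of per-scale defocus predictions."""
    out = np.zeros_like(defocus_images[0])
    for w, img in zip(weights, defocus_images):
        out += w * img
    return out
```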
First, BPN is employed on inputs $I^{burst}_{c,s}$ at original, half, and quarter scales $s$, producing both defocus blur image $I^{def}_{c,s}$ and disparity $I^{disp}_{c,s}$ at each of the corresponding scales. Then, the multi-scale merging network (MMN) predicts the per-pixel blending weight $W_s$. The merged output is thus a weighted sum of BPN's outputs: $I^{out}_c=\sum_s W_s\times I^{def}_{c,s}$. MMN predicts blending weights $W_s$ from the magnitude and consistency of the disparity maps: high or inconsistent disparity across color channels indicates that the scene disparity has reached or exceeded BPN's receptive field, and $I^{def}_{c,s}$ at a lower scale should be given more weight. Low disparity indicates the scene region is sharp, and $I^{def}_{c,s}$ at a higher scale should be given more weight. \section{Experiments} We train the BPN and MMN models separately, since BPN by itself should correctly defocus images as long as the scene disparity is not excessive. Therefore, BPN is trained first, then its parameters are frozen while MMN is trained second within the full pipeline (Fig.~\ref{figure:full_setup}). Light field datasets are leveraged for model training and evaluation, as they support straightforward simulation of both ground truth and burst inputs for supervised learning. Specifically, both models are trained using the DeepFocus synthetic light field dataset~\cite{10.1145/3272127.3275032}. This dataset consists of rendered 3D scenes of various 3D models, posed randomly, and colored by random textures. For quantitative evaluation, our pipeline is employed on the Stanford Lytro Light Field dataset~\cite{raj2016stanford}, a collection of natural scenes acquired using the Lytro Illum camera. We also acquire Lytro Illum images of certain challenging scenes in order to highlight the improvement over conventional algorithms (Fig.~\ref{fig:visual}).
Additionally, we capture real handheld burst images using a smartphone camera (Fig.~\ref{fig:intro}, ~\ref{figure:full_setup}, and~\ref{fig:additional}), and we also extract consecutive frames from computer game footage (Fig.~\ref{fig:additional}, right). These images are only for visual evaluation as they lack ground truths for quantitative evaluation. \subsection{Training Setup} The input to our pipeline is a sequence of light field sub-aperture images corresponding to the viewpoint trajectory of a simulated top-to-bottom handheld burst. The trajectory deviates from a perfect line with randomly selected viewpoints slightly offset to the left or right. The ground truth for both BPN and MMN supervision is produced by averaging all sub-aperture images whose viewpoints lie within a circle representing the shape of the circular simulated aperture. Within the light field consisting of $9\times9=81$ sub-aperture viewpoints, the circular subset covers $49$ viewpoints (Fig.~\ref{fig:49views}). \begin{figure} \centering \begin{subfigure}[b]{0.19\linewidth} \centering \includegraphics[width=\textwidth]{jpgs/reflective_13_square.jpg} \caption{} \end{subfigure} \hfill \begin{subfigure}[b]{0.19\linewidth} \centering \includegraphics[width=\textwidth]{jpgs/49view_b.jpg} \caption{} \end{subfigure} \hfill \begin{subfigure}[b]{0.19\linewidth} \centering \includegraphics[width=\textwidth]{jpgs/49view_c.jpg} \caption{} \end{subfigure} \hfill \begin{subfigure}[b]{0.19\linewidth} \centering \includegraphics[width=\textwidth]{jpgs/49view_d.jpg} \caption{} \end{subfigure} \hfill \begin{subfigure}[b]{0.19\linewidth} \centering \includegraphics[width=\textwidth]{jpgs/49view_e.jpg} \caption{} \end{subfigure} \vspace{-0.3cm} \caption{a) A dense light field consists of $9\times9$ sub-aperture images. b) The simulated aperture. c) The 49 sub-aperture images that average to the simulated aperture ground truth. d) Camera viewpoints during handheld burst acquisition. 
e) The 9 sub-aperture images that simulate the handheld burst.} \label{fig:49views} \vspace{-0.3cm} \end{figure} Specifically, the ground truth $I^{gt}_c$ is calculated from the parameterized 4D light field $L_c(x,y,u,v)$ where $(x,y)$ denotes the 2D ray direction, and $(u,v)$ denotes the sub-aperture viewpoint coordinate. This parameterization maps straightforwardly to burst acquisition where $(x,y)$ is pixel coordinate within a frame, and $(u,v)$ denotes its camera viewpoint. With $x,y,u,v$ all in discrete pixel and viewpoint coordinates: \begin{equation} I^{gt}_c(x,y) = \frac{1}{|A|}\sum_{(u,v)\in A}^{} L_c(x-\alpha u, y-\alpha v, u, v) \label{eq:photography_op} \end{equation} where A is the subset of viewpoints making up the simulated aperture (Fig.~\ref{fig:49views} (c)), and $\alpha$ is the refocusing factor. The input images are similarly refocused by $(x,y)\leftarrow(x-\alpha u, y-\alpha v)$. We vary $\alpha$ between $0$ and $4$ during training as data augmentation but keep $\alpha=0$ for inference. $I^{disp\_gt}$ is lifted directly from the dataset then biased according to the $\alpha$ setting. Other augmentations include color inversion and image scaling. \begin{figure} \centering \includegraphics[width=\linewidth]{jpgs/Picture6-camera-ready.jpg} \vspace{-0.7cm} \caption{Our method defocuses correctly around detailed silhouettes and within areas of layered depth (top rows). 
Our method is well-suited to portrait photography (bottom rows).} \label{fig:visual} \vspace{-0.6cm} \end{figure} BPN training is supervised through this loss function: \begin{equation} \label{eq:bnp_loss} \begin{split} {\cal L}_{bpn} = & {\cal L}_{ssim}(I^{def}_{c,1}, I^{gt}_c) + \lambda_c \lVert I^{def}_{c,1} - I^{gt}_c \rVert_{1} \\ + & \lambda_d \lVert I^{disp}_{c,1} - I^{disp\_gt}_c \rVert_{1} \end{split} \end{equation} where ${\cal L}_{ssim}(x,y)$ is negative multiscale structural similarity~\cite{1292216} with two scales weighted $[0.9, 0.1]$ and window size $7$; the weights $\lambda_c$ and $\lambda_d$ are experimentally set at $0.5$ and $0.1$. The MMN loss function is defined as: \begin{equation} \label{eq:msn_loss} {\cal L}_{mmn} = {\cal L}_{ssim}(I^{out}_c, I^{gt}_c) \end{equation} where ${\cal L}_{ssim}$ uses the same parameters, but is applied between the merged image $I^{out}_c$ and the ground truth $I^{gt}_c$. \subsection{Evaluation} \textbf{Qualitative} Fig.~\ref{fig:visual} shows a direct visual comparison with the conventional method based on defocus maps, here represented by the Lens Blur feature in the Adobe Photoshop application. The direct monocular to DOF method of Dutta et al.~\cite{Dutta_2021_CVPR} is also shown for comparison. Each scene is acquired by the Lytro Illum camera, whose light field images simulate handheld bursts for our method according to Fig.~\ref{fig:49views} (e), as well as provide RGB-D images for the conventional method. Our method avoids artifacts due to limitations in the depth-based defocus map. Particularly, our method successfully defocuses sub-pixel features and pixels with more than one depth value. Common portrait artifacts occurring near loose hairs are significantly reduced with our method. Fig.~\ref{fig:additional} shows additional results in other natural and synthetic scenes. \begin{table}[b] \vspace{-4mm} \centering \caption{Performance on the Stanford Lytro Light Field dataset. 
"Center view only" is provided as a baseline.} \vspace{-0.3cm} \label{table:quantitative} \begin{tabular}{l c c} \toprule {} & {SSIM$\uparrow$} & {LPIPS$\downarrow$} \\ \midrule {Kalantari et al. (4 corner viewpoints)} & {0.941} & {0.0580} \\ {Ours (4-frame burst)} & {\textbf{0.972}} & {\textbf{0.0491}} \\ \midrule {Ours (9-frame burst)} & {0.980} & {0.0373} \\ {Center view only, no defocus} & {0.894} & {0.1425} \\ \bottomrule \end{tabular} \end{table} \textbf{Quantitative} The light field rendering technique synthesizes defocus effects free from the limitations of conventional defocus maps. Our method is compared with the viewpoint interpolation technique of Kalantari et al.~\cite{10.1145/2980179.2980251}, which reconstructs dense light fields from only 4 corner viewpoints. The Stanford Lytro Light Field Archive~\cite{raj2016stanford} is leveraged to simulate burst images for our algorithm according to Fig.~\ref{fig:49views} (e); it also provides the 4-viewpoint sub-aperture images for the prior work's viewpoint interpolation algorithm. We choose random handheld trajectories where the center of mass of the viewpoints coincides with the center of the full aperture; we also set our inputs to 4-frame bursts to align with the 4 corner viewpoint requirement of the prior work. The prior work interpolates $8\times8$ viewpoints instead of $9\times9$, missing the topmost and leftmost viewpoints in Fig.~\ref{fig:49views} (c). Therefore, these 2 viewpoints are substituted by sub-aperture images directly lifted from the dataset. Eq.~\ref{eq:photography_op} is used to produce the defocus blur images from the prior work's interpolated viewpoints; it is also used to produce ground truth images from the dataset's raw light fields. Image quality is measured in SSIM~\cite{wang2004image} and LPIPS~\cite{Zhang_2018_CVPR}. The results are shown in Table~\ref{table:quantitative}. Our method produces superior results to the prior work at the same number of input frames.
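As a concrete illustration of Eq.~\ref{eq:photography_op}, the photography operator is a shift-and-average over the simulated aperture. The following toy NumPy sketch is our illustration only (integer shifts via \texttt{np.roll} stand in for proper sub-pixel warping, and all function names are hypothetical):

```python
import numpy as np

def refocus_average(L, views, alpha=0.0):
    """Toy photography operator of Eq. (photography_op): shift each
    sub-aperture image by (alpha*u, alpha*v) and average over the
    simulated aperture A (here `views`). Integer shifts via np.roll
    approximate the sub-pixel warp of the real pipeline."""
    acc = np.zeros(L[0].shape, dtype=np.float64)
    for img, (u, v) in zip(L, views):
        du, dv = int(round(alpha * u)), int(round(alpha * v))
        # I(x, y) <- L(x - alpha*u, y - alpha*v, u, v)
        acc += np.roll(img, shift=(du, dv), axis=(0, 1))
    return acc / len(views)
```

With $\alpha=0$ this reduces to a plain mean of the aligned sub-aperture images, matching the inference setting described above.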
Notably, our algorithm only requires loosely structured handheld bursts from a single camera, while the prior work algorithm requires multiple cameras at exactly arranged viewpoints. Furthermore, our algorithm scales well with additional input frames, thus it can take advantage of increased burst acquisition frame rates in real-world camera systems. Our algorithm is efficient. On a computer with Intel Core i7-8700K and Nvidia RTX 2080, our algorithm processes a benchmark scene with 4-frame input in 1.97s, while the prior work requires 487s to interpolate all 49 needed viewpoints. \begin{figure}[h] \centering \vspace{-0.2cm} \includegraphics[width=\linewidth]{jpgs/additional.jpg} \vspace{-0.6cm} \caption{Our method generalizes well to diverse natural and synthetic scenes. Top row: one of the input frames. Bottom row: our results. Left to right: Lytro Illum acquisition, handheld burst acquisition, computer game sequence.} \label{fig:additional} \vspace{-0.7cm} \end{figure} \section{Conclusion} We address the problem of simulating shallow DOF by proposing a novel approach that produces defocus blur directly from observable scene disparity in a handheld burst.
Our method does not require any depth sensing capabilities; it leverages existing image warping and alignment infrastructure, and it is agnostic to image content. By eschewing the defocus map used in the conventional approach, our method succeeds in image regions that are otherwise challenging, such as transparencies, reflections, and fine silhouette details. Compared with existing light field acquisition, completion, and rendering techniques, our method produces superior results yet requires simple monocular burst imaging instead of multi-lens setups. Our method generalizes well to diverse scenes. Indeed, our model is trained on a fully synthetic dataset, yet produces satisfactory results on natural and synthetic scenes alike. \bibliographystyle{IEEEbib}
\section{Introduction} There has been a growing demand for video conferencing solutions to provide stable and high-quality video communication with low latency. Due to limited bandwidth resources, it is critical to effectively reduce the video bit-rate to ensure a smooth user experience with little transmission lag or artifacts. Traditional video coding tools such as H.264/H.265 aim at compressing general video content; although they may be improved through continuous study, they are suboptimal for the video conferencing scenario with its unique characteristics. Human faces are especially important in video conferences: they usually occupy a large portion of the frames and are also the main focus of the frames. Therefore, it is particularly important to improve the compression quality of the human faces in this scenario. With the great success of the generative adversarial network (GAN), significant progress has been made in human face generation. These methods \cite{nirkin2019fsgan,siarohin2019first} usually generate face images based upon face-related semantic information such as facial landmarks~\cite{wang2020deep}, estimated poses~\cite{ruiz2018fine}, and segmentation masks~\cite{ronneberger2015u}. Such face-related data have a much smaller scale compared to the original frames, which makes it possible to build an extremely low bit-rate and high-quality coding framework for enhancing visual quality in video conferencing. For example, NVIDIA's video conferencing solution \cite{wang2021one} transfers only a keypoint representation of faces, which is used to reconstruct the source frames at the decoder.
Although theoretically appealing, such methods suffer from several challenges in real applications, such as (complex) background generation, large pose differences for pose transfer, large illumination mismatch, facial occlusions, \textit{etc.} Targeting robust performance in real applications, we propose {FAIVConf}, a video compression framework specially designed for practical video conferences. Our contribution can be summarized as follows: First, we propose a simple yet effective facial blurring mechanism to decrease the bit-rate of transmitting the facial area while keeping the essence of facial features, since blurring can effectively reduce the blocky artifacts caused by heavy compression, which affect the accuracy of facial landmark extraction. To ensure high-quality face generation with a large range of head poses in real applications, we propose a dynamic updating mechanism for the view interpolation method in the face reenactment module. Based on the above improvements, {FAIVConf} reduces the bit-rate and optimizes the video transmission quality in the video conference scenario. Our proposed framework achieves significant bit-rate reduction, \textit{i.e.}, only 0.001875 bits per pixel for transmitting a conference video with 800$\times$800 pixels, which is about 0.8\% of the bit-rate of streaming the original video. Compared with the commercial H.264 and H.265, our method gives much better visual quality under the same bit-rate. \section{Motivation} \label{sec:motivation} GAN-based guided face generation enables highly efficient compression in the scenario of video conference. Guided by only facial landmarks, one can achieve an optimal compression rate, since only one source image and successive keypoints need to be transferred to the decoder for face generation. Such methods assume a simple clean background and limited pose difference between the source and the drive image, without significant occlusions over the face area.
Unfortunately, these assumptions are often violated in practical video conference sessions, and these methods suffer from severe artifacts. To avoid modeling unconstrained backgrounds, we confine the face generation only to the true face area and compress facial and non-facial areas separately. To avoid the difficulty of face reenactment with large pose and illumination differences, we also transfer a low-quality (\textit{e.g.}, highly blurred) version of the original frame to guide the face generation. Fig.~\ref{fig:gaussianblur} gives an example of face generation with or without using the low-quality face area in the original frame. The face generation result with a Gaussian-blurred face (bottom right) simulates the lighting condition much better than the result using only the background (top right). In addition, we propose a dynamically updated face view interpolation mechanism using multiple source images to further improve the robustness of the generated face with regard to flexible head poses in real video conferences. One caveat here is the increased transmission overhead. The background is compressed and transferred by itself to maintain the fidelity of the video background. Since the face is the main changing part in video conferences, it usually occupies the majority of bits, while the background can in general be compressed efficiently. Regarding the detected face area, to reduce the transmission overhead as much as possible, we perform facial landmark extraction and face segmentation entirely on the decoder side, and only transfer a blurred version of an extended face area. We conduct Gaussian blur before encoding the face area to achieve a good balance between reduced bit-rate and quality of extracted landmarks. We observe that the accuracy of facial landmark extraction is quite sensitive to the blocky artifacts from compression.
Gaussian blur can not only perform low-pass filtering to reduce the blocky artifacts for a high compression ratio, but also lead to a better bit-rate due to the blurred content. \begin{figure}[t] \centering \includegraphics[width=0.75\columnwidth]{Figs/blur_vs_erase.pdf} \\ \caption{Example comparison of face generation by: 1. only using the background information; 2. using both background information and a low-quality face of the original frame.} \vspace{-0.5cm} \label{fig:gaussianblur} \end{figure} \section{The FAIVConf~Framework} As shown in Fig.~\ref{fig:framework}, the proposed {FAIVConf} framework mainly consists of two parts: the \textbf{Encoder} and the \textbf{Decoder}. \begin{figure*}[tb] \centering \includegraphics[width=2.0\columnwidth]{Figs/framework_AIV_v3.pdf} \\ \caption{The overall architecture of our proposed {FAIVConf} framework, which is separated into encoder and decoder sides. The blue arrows indicate the data flow in the framework, and the red arrows indicate the control signals for the data flow.} \vspace{-0.5cm} \label{fig:framework} \end{figure*} \noindent{\bf Encoding Process.} Let $\{D_1, D_2,D_3...D_i...D_n\}$ be the input conference video frames (driving video) to be transferred, where $D_i$ is the $i$-th driving frame in a total number of $n$ frames. Let $\{S_1, S_2, S_3...S_m\}$ be a set of source frames which have been collected on the encoder side and stored on the decoder side. The goal of the encoding process is to extract the feature of each frame, which can be transferred to the decoder with small bandwidth. The transferred feature should be sufficient for the decoder to generate the output video $\{O_1, O_2, O_3...O_n\}$, which is highly similar to the driving video. Specifically, we first apply face detection on the driving frame to get the bounding box of the face position $B_i$. Then we compute the head pose $P_i$ and perform face segmentation in parallel.
The head pose information is compared with that of each source frame to check whether a similar pose exists. If it does, Gaussian blur is applied on the segmented face area to decrease the video compression load. Then, the blurred frame is encoded by the H.265 codec and transferred to the decoder side together with the face bounding box and head pose. If it does not, the whole frame $D_i$ is transferred to the decoder side directly, and the face area in $B_i$ is stored at the decoder as a new source face. \noindent{\bf Decoding Process.} The decoding process is executed at the user terminal. After receiving the transferred data of the $i$-th driving frame, \textit{i.e.}, the recovered frame $D^{\prime}_i$ (after H.265 decoding), the head pose $P_i$ (expressed as Euler angles), and the face bounding box $B_i$, the decoder extracts both the facial landmarks $L_i$ and the segmented face of $D^{\prime}_i$ in parallel. Then one or several source frames are chosen to perform the face reenactment according to the head pose similarity, as shown in Sec.~\ref{Face View Interpolation}. The landmarks $L_i$ and the reenacted output $O_{i(k)}$ from the $k$-th source frame ($1 \leq k \leq m$) are computed as: \begin{equation} \begin{aligned} L_i = G_l(B_i; D^{ \prime}_i); \\ O_{i(k)}=G_r(S_k; L_i) \end{aligned} \end{equation} where $G_l$ and $G_r$ are the facial landmark detector and reenactment generator, respectively. We then compute the interpolated reenacted output $O_i$ through face view interpolation, which is discussed in detail in Sec.~\ref{Face View Interpolation}. Finally, inpainting~\cite{yeh2017semantic} and blending~\cite{wu2019gp} are performed to swap $O_i$ back into the transferred driving frame $D^{\prime}_i$ within the bounding box area defined by $B_i$ to generate the final output frame.
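The encoder-side blurring step above can be sketched as follows. This is a minimal NumPy stand-in of ours, not the actual implementation: a separable Gaussian applied to an axis-aligned box instead of the segmented face area, on a grayscale frame, with all names hypothetical:

```python
import numpy as np

def gaussian_kernel1d(sigma):
    """Normalized 1-D Gaussian kernel with radius about 3*sigma."""
    radius = int(3 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def blur_face_region(frame, box, sigma=4.0):
    """Blur only the face region of a grayscale frame before encoding.
    `box` = (x0, y0, x1, y1) is a toy stand-in for the segmented face
    area; the blur acts as the low-pass filter described in the text."""
    x0, y0, x1, y1 = box
    face = frame[y0:y1, x0:x1].astype(np.float64)
    k = gaussian_kernel1d(sigma)
    # separable convolution: rows first, then columns
    face = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, face)
    face = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, face)
    out = frame.astype(np.float64).copy()
    out[y0:y1, x0:x1] = face
    return out
```

The region outside the box is left untouched, mirroring the separate treatment of facial and non-facial areas.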
\noindent{\bf Dynamic Source Pool Update for Face View Interpolation.} \label{Face View Interpolation} To overcome the difficulty of image animation posed by a large range of facial orientations in real applications, we incorporate the face view interpolation technique~\cite{nirkin2019fsgan} in our framework. This method allows using multiple source frames rather than a single image to generate the face reenactment result, and makes it possible to synthesize faces with different poses with little distortion. We further improve this method by dynamically updating the source frames throughout the video conference session to ensure small pose differences for quality reenactment. For the source frames set (SFS) $\{S_1, S_2,...S_m\}$, the corresponding Euler angles are represented as $\{e_1, e_2,...e_m\}$. Each angle $e_j$ consists of yaw $y_j$, pitch $p_j$ and roll $r_j$. We first project the Euler angle into a plane by dropping roll, \textit{i.e.}, $\{e_{1(y, p)}, e_{2(y, p)},...e_{m(y, p)}\}$, and then remove points in the angular domain that are too close to each other with a distance threshold $\mathcal{L}$. A mesh can be built based upon the remaining points in the angular domain by Delaunay Triangulation. For the $i$-th driving frame with head pose $P_i$, the corresponding projected point in the angular domain is $e^d_{i(y, p)}$. We find the three most related source frames $S_{k_1}$, $S_{k_2}$, $S_{k_3}$ at the vertices of the triangle which contains $e^d_{i(y, p)}$. With the barycentric coordinates $(\lambda_{k_1}, \lambda_{k_2}, \lambda_{k_3})$ of $e^d_{i(y, p)}$ in the triangle, the face view interpolation result $O_{i}$ can be calculated as: \begin{equation} O_{i}=\sum\nolimits_{r=1}^{3} \lambda_{k_r} O_{i(k_r)}, \end{equation} where $O_{i(k_r)}$ is the reenacted output from the source frame $S_{k_r}$ corresponding to the $k_r$-th vertex. In addition, we update the set of source frames dynamically.
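The interpolation and update steps can be sketched in NumPy. This is a toy illustration of ours (one fixed triangle, poses as 2-D (yaw, pitch) points, a simplified nearest-source update rule instead of the full triangle-containment test; all names are hypothetical):

```python
import numpy as np

def barycentric_weights(p, tri):
    """Barycentric coordinates (lambda_1, lambda_2, lambda_3) of the
    projected driving pose p inside the pose-space triangle tri (3x2)."""
    a, b, c = np.asarray(tri, dtype=np.float64)
    T = np.column_stack((b - a, c - a))
    l2, l3 = np.linalg.solve(T, np.asarray(p, dtype=np.float64) - a)
    return np.array([1.0 - l2 - l3, l2, l3])

def interpolate_views(p, tri, reenacted):
    """O_i = sum_r lambda_{k_r} * O_{i(k_r)} over the triangle vertices."""
    lam = barycentric_weights(p, tri)
    return sum(l * O for l, O in zip(lam, reenacted))

def pool_update(pool_poses, p, L=15.0):
    """Toy pool-update rule: if the driving pose is farther than L from
    every source pose, request it as a new source frame (mesh rebuild);
    otherwise fall back to the closest source (the lambda = 1 case)."""
    d = np.linalg.norm(np.asarray(pool_poses, float) - np.asarray(p, float), axis=1)
    return ('add_source', None) if d.min() > L else ('reuse', int(d.argmin()))
```

At a triangle vertex the weights reduce to one-hot, so the interpolation returns that vertex's reenacted output exactly.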
When the projected point of a driving frame in the angular domain is not in any of the existing triangles, we only use the source frame of the closest existing vertex and set $\lambda$ to $1$. If a driving frame is far from all source frames in the angular domain, \textit{i.e.}, the distances between the projected points of the driving frame and the source frames are all above a threshold $\mathcal{L}$, this driving frame will be added into the set of source frames and transferred to the decoder. The Delaunay Triangulation mesh will be rebuilt to include the new source frame to enable a more comprehensive range of head poses. \section{Experiments} \noindent{\bf Experiment Setup.} We used the mixed video sequences of the IJB-C~\cite{maze2018iarpa} and Human-centric video matting~\cite{humancentricmatting} datasets to train our reenactment model, and used the WFLW dataset~\cite{wu2018look} to train the facial landmark detector. Hopenet~\cite{ruiz2018fine} was used as the pose estimator in our framework. For the reenactment model training, we used two VGG-19 models~\cite{simonyan2014very} pretrained on the VGGFace2~\cite{cao2018vggface2} and CelebA~\cite{liu2018large} datasets to compute the perceptual loss~\cite{johnson2016perceptual} towards face recognition and face attribute classification, respectively. This perceptual loss has been widely used recently~\cite{nirkin2019fsgan,wang2021one}. Besides that, we also used the multi-scale GAN loss~\cite{wang2019few} and the pixelwise loss in the training process to enhance the generator's performance. \noindent{\bf Landmark Detection of Blurred Face.} We first preprocessed the WFLW dataset with Gaussian blur within the face area to simulate the face blurring of frames transmitted between the encoder and decoder. Then, we trained HRNetV2 to extract facial landmarks with the same settings as in the work of~\cite{wang2020deep}.
As shown in Table~\ref{tab:landmark}, we compare the normalized mean error (NME) of HRNetV2 (trained on the original WFLW) and HRNetV2-blur (trained on blurred WFLW data). We can see that training the landmark detector with blurred data significantly decreases the NME of all the subsets in the Gaussian blur situation, and the error is just a little higher than that of the original model on the non-blurred dataset. This means that HRNetV2-blur on blurred images has the same accuracy level as the original model on the same images without blurring. Therefore, HRNetV2-blur is good enough to handle transmitted frames at the decoder. \begin{table} \centering \caption{Facial landmark detection results (NME) on the original WFLW dataset and the preprocessed WFLW dataset (Gaussian blur) with the test set and 5 subsets: large pose, expression, illumination, makeup, and occlusion. Lower is better.} \scalebox{0.55}{ \begin{tabular}{llrrrrrr} \toprule Model & Dataset & Test & Large-Pose & Expression & Illumination & Makeup & Occlusion \\ \midrule HRNetV2~\cite{wang2020deep} & WFLW & 4.60 & 7.94 & 4.85 & 4.55 & 4.29 & 5.44 \\ HRNetV2~\cite{wang2020deep} & WFLW-blur& 13.22 & 27.15 & 11.81 & 13.50 & 12.33 & 16.24 \\ HRNetV2-blur & WFLW & 5.26 & 8.87 & 5.63 & 5.11 & 5.12 & 6.20 \\ HRNetV2-blur & WFLW-blur & \textbf{5.76} & \textbf{10.01} & \textbf{6.03} & \textbf{5.84} & \textbf{5.77} & \textbf{7.07} \\ \bottomrule \end{tabular} } \vspace{-0.5cm} \label{tab:landmark} \end{table} \noindent{\bf Performance on Conference Video.} To evaluate our performance in video conferencing, we propose a High-resolution Talking Head (HSTH) dataset, which contains scenes where a single person is talking, with high resolution up to 800$\times$800, collected from YouTube. Our method can re-create a talking-head video at the decoder side based on the transmitted encoded frame. Fig.~\ref{fig:H264} compares the final result of our approach with that of directly using the commercial H.264/H.265 coding schemes.
To ensure fair comparison, we encoded the intermediate transmitted data in~FAIVConf~ with the same total bit-rate, \textit{i.e.}, the total summed bit-rate of $D^{\prime}_i$, $P_i$, and $B_i$ was equal to that of the encoded frame in the H.264/H.265 coding scheme. The leftmost image is the original frame (driving frame) at the encoder. On the right side, the three rows show the results of H.264/H.265 coding and~FAIVConf. From left to right, the bit-rate of the transferred data decreases one after another. We also list the bpp (bits per pixel) of each level. Our approach achieves consistently good performance at different bit-rate levels, while H.264/H.265 show larger compression artifacts as bpp gets lower. FAIVConf~can generate acceptable output even when the transmission is only 0.001875 bpp, which is 0.8\% of the bit-rate of the original video. We test the PSNR and SSIM values on our proposed HSTH dataset. At the bits-per-pixel level of 0.002, the average PSNR and SSIM achieve 26.3dB and 0.89, respectively. \begin{figure}[t] \centering \includegraphics[width=1.0\columnwidth]{Figs/final_concat026.pdf} \\ \caption{Comparison of results with H.264/H.265 at the same bit-rate.} \vspace{-0.5cm} \label{fig:H264} \end{figure} \begin{figure}[t] \centering \includegraphics[width=1.0\columnwidth]{Figs/oneshot-vs-FAIVConf.pdf} \\ \caption{Comparison of OSFV with~FAIVConf~under different head rotation angles. Our approach shows better performance especially when the angle is larger than $45^{\circ}$.} \label{fig:oneshot-vs-FAIVConf} \vspace{-0.5cm} \label{fig:angle} \end{figure} Fig.~\ref{fig:angle} compares OSFV (One-shot Free-view~\cite{wang2021one}) with our framework~FAIVConf. The first row shows the transferred video frames with the head rotation angle increasing from left to right. The second and third rows show the corresponding face generation results using OSFV and~FAIVConf, respectively.
Here, the leftmost driving frame with $0^{\circ}$ rotation angle is used as the source image of OSFV. When the head rotation angle is small, both frameworks provide good results. However, as the angle increases, especially when it is greater than $45^{\circ}$, the face generated by OSFV gradually deforms and becomes flattened, while~FAIVConf~can still generate high-quality images that maintain identity and fidelity similar to the original frames. \section{Conclusion} We introduce FAIVConf, a framework dedicated to highly efficient video compression for the video conference scenario. Compared with the commercial H.264/H.265 coding schemes, our approach achieves much better visual video quality with less transmission data. We achieve significant bit-rate reduction under similar visual quality, using only $0.8\%$ of the bit-rate of streaming the original video at a resolution of 800$\times$800 pixels. \bibliographystyle{IEEEbib}
\section{Introduction} In this work we study the \emph{Fractional Zakharov-Kuznetsov (FZK)} equation \begin{equation}\label{zk4} \left\{ \begin{array}{ll} \partial_{t}u-\partial_{x_{1}}(-\Delta)^{\frac{\alpha}{2}}u+u\partial_{x_{1}}u=0, & 0<\alpha< 2, \\ u(x,0)=u_{0}(x), \quad x=(x_{1},x_{2},\dots,x_{n})\in \mathbb{R}^{n},& t\in\mathbb{R}, \\ \end{array} \right. \end{equation} where $u=u(x,t)$ is a real valued function and $\left(-\Delta\right)^{\frac{\alpha}{2}}$ stands for the \emph{fractional Laplacian}, whose description in the Fourier space is given by \begin{equation*} \mathcal{F}\left((-\Delta)^{\alpha/2}f\right)(\xi):=|2\pi\xi|^{\alpha}\mathcal{F}(f)(\xi),\quad f\in\mathcal{S}(\mathbb{R}^{n}). \end{equation*} The FZK equation formally satisfies the following conservation laws, at least for smooth solutions: \begin{equation*} \mathcal{I}u(t)= \int_{\mathbb{R}^{n}}u(x,t)\, dx=\mathcal{I}(0), \end{equation*} \begin{equation*} \mathcal{M}(t) =\int_{\mathbb{R}^{n}}u^{2}(x,t)\, dx=\mathcal{M}(0), \end{equation*} and the Hamiltonian \begin{equation*} \mathcal{H}(t)=\frac{1}{2}\int_{\mathbb{R}^{n}}\left(\left(-\Delta\right)^{\frac{\alpha}{4}}u(x,t)\right)^{2} \, dx -\frac{1}{6}\int_{\mathbb{R}^{n}}u^{3}(x,t) \, dx=\mathcal{H}(0). \end{equation*} In the case $\alpha=1$, the equation \eqref{zk4} can be formally rewritten as \begin{equation}\label{shrira} \partial_{t}u-\mathcal{R}_{1}\Delta u+u\partial_{x_{1}}u=0, \end{equation} where $\mathcal{R}_{1}$ denotes the \emph{Riesz transform} in the $x_{1}$-variable, that is, \begin{equation*} \mathcal{R}_{1}(f)(x):=\frac{\Gamma\left(\frac{n+1}{2}\right)}{\pi^{\frac{n+1}{2}}}\,\mathrm{p.v.}\int_{\mathbb{R}^{n}}\frac{x_{1}-y_{1}}{|x-y|^{n+1}}f(y)\,dy, \end{equation*} whenever $f$ belongs to a suitable class of functions. The model \eqref{shrira} appears to have been first derived by Shrira \cite{Schira} to describe the bi-dimensional long-wave perturbations in a boundary-layer type shear flow.
More precisely, \eqref{shrira} represents the equation of the longitudinal velocity of the fluid under certain conditions (see Shrira \cite{Schira} for a more detailed description). We shall also refer to the works \cite{astep}, \cite{Dyachenko}, \cite{Scrira pelynosvky} and \cite{PSTEP}, where several variants (either extensions or reductions) of the model \eqref{shrira} have been studied. We shall also point out that the equation in \eqref{shrira} represents a higher dimensional extension of the famous \emph{Benjamin-Ono} equation \begin{equation*} \partial_{t}u-\mathcal{H}\partial_{x}^{2}u+u\partial_{x}u=0, \end{equation*} where $\mathcal{H}$ denotes the Hilbert transform. The equation \eqref{shrira} has attracted attention in recent years, and several results concerning local well-posedness have been established. In this sense, we can mention the work of Hickman, Linares, Ria\~{n}o, Rogers, Wright \cite{HLKW}, who establish local well-posedness of the IVP associated to \eqref{shrira} in the Sobolev space $H^{s}(\mathbb{R}^{n}),$ where $s>\frac{5}{3}$ in the bi-dimensional case and $s>\frac{n}{2}+\frac{1}{2}$ whenever $n\geq 3.$ Also, Schippa \cite{RS} improves the work in \cite{HLKW} by showing that \eqref{zk4} is locally well-posed in $H^{s}(\mathbb{R}^{n})$ for $s>\frac{n+3}{2}-\alpha$ and $1\leq \alpha<2.$ See also Riaño \cite{oscarmari}, where some results of local well-posedness in weighted Sobolev spaces are established, as well as some unique continuation principles for equation \eqref{shrira}. The case $\alpha=2$ in \eqref{zk4} corresponds to the \emph{Zakharov-Kuznetsov (ZK)} equation \begin{equation}\label{zkeq} \partial_{t}u+\partial_{x_{1}}\Delta u+u\partial_{x_{1}}u=0. \end{equation} Originally \eqref{zkeq} was derived by Zakharov, Kuznetsov \cite{ZAKHARIV} in the three-dimensional case as a model to describe the propagation of ion-acoustic waves in magnetized plasma.
More recently, Lannes, Linares, and Saut \cite{lls} justify that the ZK equation can be formally deduced as a long-wave small-amplitude limit of the Euler-Poisson system in the \textquotedblleft cold plasma\textquotedblright\ approximation. From the physical point of view, the ZK model is not only interesting in the 3-dimensional case but also in the 2-dimensional one, since, \textit{e.g.}, it describes under certain conditions the amplitude of long waves on the free surface of a thin film in a specific fluid with particular parameters of viscosity (see Melkonian, Maslowe \cite{MM} for more details). The FZK equation has been little studied except in the distinguished cases we pointed out above, that is, $\alpha=1$ and $\alpha=2.$ Nevertheless, results of local and global well-posedness of the FZK equation in the range $\alpha\in(0,2)\setminus\{1\}$ are scarce. To our knowledge, we can only mention the work of Schippa \cite{RS}, where the well-posedness problem in $H^{s}(\mathbb{R}^{n})$ is addressed. In \cite{RS} it is proved that the IVP \eqref{zk4} is locally well-posed in $H^{s}(\mathbb{R}^{n})$ for $s>\frac{n+3}{2}-\alpha$ whenever $n>2$ and $ \alpha\in [1,2).$ See also \cite{RS} for additional results of local well-posedness on $H^{s}(\mathbb{T}^{n}).$ In contrast, the ZK equation has been the object of intense study in recent years; this has led to enormous improvements concerning local and global well-posedness. In this sense we could mention the work of Faminski\u{\i} \cite{FAMI1}, who shows local well-posedness in $H^{m}(\mathbb{R}^{2}),m\in\mathbb{N}.$ Also in the $2d$ case, Linares and Pastor \cite{LIPAS} prove local well-posedness in $H^{s}(\mathbb{R}^{2})$ for $s>\frac{3}{4}.$ Later, Molinet and Pilod \cite{Molipilo} and Gr\"{u}nrock and Herr \cite{GRUHER} extend the local well-posedness to $H^{s}(\mathbb{R}^{2}), \, s>\frac{1}{2},$ by using quite similar arguments based on the Fourier restriction method.
Regarding the $3d$ case, Molinet, Pilod \cite{Molipilo} and also Ribaud and Vento \cite{RIBAUDVENT} prove local well-posedness in $H^{s}(\mathbb{R}^{3}),\, s>1.$ In this direction, the most recent work of Kinoshita \cite{KINO} and Herr, Kinoshita \cite{herkino} establish well-posedness in the best possible Sobolev range where the Picard iteration scheme can be applied, that is, $H^{s}(\mathbb{R}^{2}),\, s>-\frac{1}{4}$ and $H^{s}(\mathbb{R}^{n}),\, s>\frac{n-4}{2}$ when $n>2.$ In this work, we do not pursue improving results concerning local or global well-posedness; rather, we need to establish local well-posedness in a certain Sobolev space that allows us to describe particular properties of the solutions. The energy method of Bona and Smith \cite{Bonasmith} yields local well-posedness in $H^{s}(\mathbb{R}^{n})$ for $s>\frac{n+2}{2}.$ More precisely, the following result holds: \begin{thm}\label{lwp} Let $s>s_{n}$ where $s_{n} := \frac{n+2}{2} $. Then, for any $u_{0} \in H^{s}(\mathbb{R}^{n}),$ there exist $T = T(\|u_{0}\|_{H^{s}})>0$ and a unique solution $u$ to the IVP \eqref{zk4} such that \begin{equation}\label{conditions} u\in C\left([0,T]: H^{s_{n}}(\mathbb{R}^{n})\right)\quad \mbox{and}\quad \nabla u\in L^{1}\left((0,T): L^{\infty}(\mathbb{R}^{n})\right). \end{equation} Moreover, the flow map $u_{0}\longmapsto u$ defines a continuous application from $H^{s_{n}}(\mathbb{R}^{n})$ into $H^{s_{n}}(\mathbb{R}^{n})$. \end{thm} The energy method in this case does not consider the effects of the dispersion and is mainly based on \emph{a priori estimates} for smooth solutions, which, combined with the Kato-Ponce commutator estimate (see Theorem \ref{KPDESI}), give \begin{equation*} \|u\|_{L^{\infty}_{T}H^{s}_{x}}\lesssim \|u_{0}\|_{H^{s}_{x}}e^{\|\nabla u\|_{L^{1}_{T}L^{\infty}_{x}}}. \end{equation*} The condition on the gradient in \eqref{conditions}, as the reader will see later, is fundamental in the solution of the problems we address in this work.
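For the reader's convenience, we sketch the classical energy computation behind this estimate, writing $J^{s}:=(1-\Delta)^{s/2}$ and using that the dispersive part of the equation is skew-adjoint:
\begin{equation*}
\begin{split}
\frac{1}{2}\frac{d}{dt}\|J^{s}u(t)\|_{L^{2}}^{2}
&= -\int_{\mathbb{R}^{n}}J^{s}u\, J^{s}\left(u\partial_{x_{1}}u\right)dx\\
&= -\int_{\mathbb{R}^{n}}J^{s}u\,[J^{s};u]\partial_{x_{1}}u\, dx
+\frac{1}{2}\int_{\mathbb{R}^{n}}\partial_{x_{1}}u\left(J^{s}u\right)^{2}dx\\
&\lesssim \|\nabla u(t)\|_{L^{\infty}}\|u(t)\|_{H^{s}}^{2},
\end{split}
\end{equation*}
where the first equality uses $\int_{\mathbb{R}^{n}} J^{s}u\,\partial_{x_{1}}(-\Delta)^{\frac{\alpha}{2}}J^{s}u\, dx=0$, the second writes $J^{s}(u\partial_{x_{1}}u)=[J^{s};u]\partial_{x_{1}}u+uJ^{s}\partial_{x_{1}}u$ and integrates by parts in $x_{1}$, and the last step applies the Kato-Ponce commutator estimate. Gronwall's inequality then yields the bound above.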
A quite remarkable property that dispersive equations satisfy is \emph{Kato's smoothing effect}, a property found by Kato \cite{KATO1} in the context of the \emph{Korteweg-de Vries} (KdV) equation \begin{equation}\label{kdv} \partial_{t}u+\partial_{x}^{3}u+u\partial_{x}u=0. \end{equation} In \cite{KATO1} Kato proves that solutions of the KdV equation \eqref{kdv} become locally more regular by one derivative with respect to the initial data, that is, if $u$ is a solution of \eqref{kdv} in a suitable Sobolev space, then for any $r>0,$ \begin{equation}\label{smoothingkdv} \int_{0}^{T}\int_{-r}^{r}\left(\partial_{x}u(x,t)\right)^{2}\, dx\, dt< c(T,r)\|u_{0}\|_{L^{2}_{x}}^{2}. \end{equation} Independently, Kruzhkov, Faminski\u{\i} \cite{KF} obtained a result quite similar to \eqref{smoothingkdv}. As was shown later, and almost simultaneously, by Constantin, Saut \cite{CONSAUT}, Sj\"{o}lin \cite{SJOLin} and Vega \cite{VEGA}, the local smoothing is an intrinsic property of linear dispersive equations (see \cite{Lipolibro}, Chapter 4, and the references therein). A question that arises naturally is to determine whether the solutions $u$ of the IVP \eqref{zk4} have a local smoothing effect similar to the one satisfied by the solutions of the KdV equation \eqref{smoothingkdv}. Certainly, this is not an easy question to answer in the full range $\alpha\in (0,2).$ In the case $\alpha=2,$ the operator that provides the dispersion in the linear part of the equation \eqref{zk4} is a local operator, and it is possible to obtain, by performing energy estimates, that solutions of the ZK equation in a suitable Sobolev space satisfy an inequality similar in spirit to \eqref{smoothingkdv} with $\nabla$ instead of $\partial_{x},$ but on another class of subsets of the Euclidean space. However, when we turn our attention to the case $\alpha\in (0,2),$ the situation is not so easy to address, since the operator $\left(-\Delta\right)^{\alpha/2}$ is fully non-local.
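For later comparison, we recall the formal weighted-energy computation behind \eqref{smoothingkdv}: multiplying \eqref{kdv} by $2u\varphi,$ with $\varphi$ a smooth function, and integrating by parts, one obtains the identity
\begin{equation*}
\frac{d}{dt}\int_{\mathbb{R}} u^{2}\varphi\, dx+3\int_{\mathbb{R}}\left(\partial_{x}u\right)^{2}\varphi'\, dx=\int_{\mathbb{R}} u^{2}\varphi'''\, dx+\frac{2}{3}\int_{\mathbb{R}} u^{3}\varphi'\, dx.
\end{equation*}
Choosing $\varphi$ non-decreasing with $\varphi'\equiv 1$ on $(-r,r)$ and integrating in time, the second term on the left-hand side produces the local gain of one derivative.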
Nevertheless, as is indicated in the work of Constantin, Saut \cite{CONSAUT}, we expect a local gain of $\alpha/2$ of a derivative whether the operator is local or not. One of the main goals of this work is to prove \emph{à la Kato} that solutions of the IVP \eqref{zk4} gain locally $\alpha/2$ of a spatial derivative. Certainly, this problem has been addressed previously in the one-dimensional case, \emph{e.g.} by Ponce \cite{GP} and Ginibre, Velo \cite{Ginibrev1, Gnibrev2} for solutions of the Benjamin-Ono equation. We shall also mention the work of Kenig, Ponce and Vega \cite{KPVOS}, where it is proved that solutions of the IVP \begin{equation}\label{zk4.111} \left\{ \begin{array}{ll} \mathrm{i} \partial_{t}u+P(D)u=0, & \\ u(x,0)=u_{0}(x), \quad x\in \mathbb{R}^{n},& t\in\mathbb{R}, \\ \end{array} \right. \end{equation} where $$P(D)f(x):=\int_{\mathbb{R}^{n}}e^{ix\cdot \xi}P(\xi)\mathcal{F}(f)(\xi)\, d\xi,$$ with $P$ satisfying certain conditions, enjoy a local smoothing effect. Also, in \cite{KPVOS} it is shown that solutions of the IVP \eqref{zk4.111} satisfy a global smoothing effect (see Sections 3 and 4 in \cite{KPVOS}). Their proofs are mainly based on estimates of oscillatory integrals, as well as the use of the Fourier restriction method. The results in \cite{Ginibrev1} and its extension in \cite{Gnibrev2} are quite versatile and allow us to obtain the smoothing effect in the desired range $\alpha\in(0,2).$ The main idea behind these arguments relies on obtaining a pointwise decomposition for the commutator \begin{equation}\label{commu1} [(-\partial_{x}^{2})^{\frac{\alpha}{2}}\partial_{x_{1}}; \varphi ], \end{equation} where $\varphi$ is a real valued smooth function with certain decay at infinity. Heuristically, the idea is to decouple \eqref{commu1} into lower-order pieces plus some non-localized error term that is easy to handle.
However, in higher dimensions no pointwise decomposition formula is known for the commutator \begin{equation}\label{formula3} \left[\left(-\Delta\right)^{\frac{\alpha}{2}}\partial_{x_{1}}; \varphi \right], \end{equation} similar to the one in \cite{Ginibrev1}, so that new ideas are required to obtain the desired smoothing. To obtain a decomposition of the commutator \eqref{formula3} into pieces of lower order, we replace the operator $(-\Delta)^{\frac{\alpha}{2}}$ by $(I-\Delta)^{\frac{\alpha}{2}};$ for our purposes, the main difference between both operators relies on the fact that $(I-\Delta)^{\frac{\alpha}{2}}$ is a pseudo-differential operator, whereas $(-\Delta)^{\frac{\alpha}{2}}$ is not, for $\alpha$ in the indicated range. A few years ago, Bourgain and Li \cite{BL} established the pointwise formula \begin{equation}\label{formula2} \left(I-\Delta \right)^{\frac{\alpha}{2}}=\left(-\Delta \right)^{\frac{\alpha}{2}}+ \mathcal{K}_{\alpha},\quad 0<\alpha\leq 2, \end{equation} where $\mathcal{K}_{\alpha}$ is an integral operator that maps $L^{p}(\mathbb{R}^{n})$ into $L^{p}(\mathbb{R}^{n})$ for all $p\in[1,\infty].$ The expression above allows us to obtain, after replacing in \eqref{formula3}, \begin{equation}\label{formula4} \left[\left(-\Delta\right)^{\frac{\alpha}{2}}\partial_{x_{1}}; \varphi \right]= \left[\left(I-\Delta\right)^{\frac{\alpha}{2}}\partial_{x_{1}}; \varphi \right]+ \left[\mathcal{K}_{\alpha}\partial_{x_{1}}; \varphi \right]. \end{equation} At this point, the situation is easier to handle, since for the first expression on the r.h.s we use pseudo-differential calculus to decouple pointwise the commutator expression into terms of lower order, in the spirit of the Ginibre and Velo decomposition \cite{Ginibrev1}.
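At a purely formal level, and ignoring the $2\pi$ normalization of the Fourier transform, the decomposition \eqref{formula2} can be read on the frequency side: for $|\xi|$ large,
\begin{equation*}
\left(1+|\xi|^{2}\right)^{\frac{\alpha}{2}}-|\xi|^{\alpha}=|\xi|^{\alpha}\left(\left(1+|\xi|^{-2}\right)^{\frac{\alpha}{2}}-1\right)=\frac{\alpha}{2}|\xi|^{\alpha-2}+O\left(|\xi|^{\alpha-4}\right),
\end{equation*}
so the symbol of $\mathcal{K}_{\alpha}$ has order $\alpha-2,$ which explains why $\mathcal{K}_{\alpha}$ is bounded on $L^{p}(\mathbb{R}^{n})$ for $\alpha$ in the range $(0,2].$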
Even though the situation is more manageable in comparison to \eqref{formula3}, the decomposition produces a considerable amount of error terms that are not easy to handle due to the interactions between terms of higher regularity versus lower regularity. If we had to summarize in a few words the arguments used to estimate the first term in \eqref{formula4}, we could refer to a maxim attributed to Julius Caesar: \textquotedblleft \emph{divide et vinces}\textquotedblright. The situation is quite different for the term involving $\mathcal{K}_{\alpha};$ this one is by far the hardest to deal with. It contains a sum that requires information about the behavior of Bessel potentials at the origin and at infinity (see Appendix A), and several cases depending on the dimension have to be examined. In this process, the Gamma function and its properties are fundamental to guarantee control in certain Sobolev norms. After decomposing the first term on the r.h.s of \eqref{formula4}, we obtain that solutions of the IVP \eqref{zk4} gain locally in space the expected $\frac{\alpha}{2}$ of a derivative in the full range $\alpha\in (0,2)$ without restrictions. This constitutes our first main result, whose statement we present below.
\begin{lem}\label{main2} Let $u\in C \left((0, T) : H^{s}(\mathbb{R}^{n})\right),\, s>\frac{n}{2}+1,$ be a solution of \eqref{zk4} with $0< \alpha<2$ and $n\geq 2.$ If $\varphi:\mathbb{R}^{n}\longrightarrow \mathbb{R}$ is a $C^{\infty}(\mathbb{R}^{n})$ function satisfying: \begin{itemize} \item[(i)] There exists a non-decreasing smooth function $\phi:\mathbb{R}\longrightarrow\mathbb{R}, $ and $\nu=(\nu_{1},\nu_{2},\dots,\nu_{n})\in\mathbb{R}^{n}$ such that \begin{equation*} \varphi(x)=\phi\left(\nu\cdot x+\delta\right)\quad x\in\mathbb{R}^{n}, \end{equation*} for some $\delta\in\mathbb{R}.$ The vector $\nu$ is taken in such a way that it satisfies one and only one of the following conditions: \begin{itemize} \item[\sc Case 1:] $\nu_{1}>0$ and $\nu_{2}=\nu_{3}=\dots=\nu_{n}=0.$ \item[\sc Case 2:] $\nu_{1}>0$ and $(\nu_{2},\nu_{3},\dots,\nu_{n})\neq 0,$ verifying the inequality \begin{equation}\label{condi1} 0< \sqrt{\nu_{2}^{2}+\nu_{3}^{2}+\dots+\nu_{n}^{2}}<\min\left\{ \frac{2\nu_{1}}{C\sqrt{\alpha(n-1)}},\frac{\nu_{1}(1+\alpha)}{\alpha\epsilon\sqrt{n-1}}\right\}, \end{equation} with \begin{equation}\label{condi2} 0<\epsilon<\frac{\nu_{1}}{|\overline{\nu}|\sqrt{n-1}}-\frac{\alpha\sqrt{n-1}|\overline{\nu}|}{4\nu_{1}}C^{2}, \end{equation} where $$C:=\inf_{f\in L^{2}(\mathbb{R}^{n}), f\neq 0}\frac{\|J^{-1}\partial_{x_{j}}f\|_{L^{2}}}{\|f\|_{L^{2}}},\quad j=2,3,\dots,n,$$ and \begin{equation*} |\overline{\nu}|:=\sqrt{\nu_{2}^{2}+\nu_{3}^{2}+\dots+\nu_{n}^{2}}. \end{equation*} \end{itemize} \item[(ii)] The function $\phi$ satisfies: \begin{equation*} \phi'\equiv 1\quad \mbox{on}\quad [0,1]. \end{equation*} \item[(iii)] There exists a positive constant $c$ such that \begin{equation*} \sup_{0\leq j\leq 4}\sup_{x\in\mathbb{R}}\left|(\partial_{x}^{j}\phi)(x)\right|\leq c.
\end{equation*} \item[(iv)] For all $x\in\mathbb{R},$ $$\phi'(x)\geq0.$$ \item[(v)] The function $\phi^{1/2}$ satisfies \begin{equation*} \sup_{x\in\mathbb{R}} \left|\partial_{x}^{j}\left(\sqrt{\phi(x)}\right)\right|\leq c\quad \mbox{for}\quad j=1,2. \end{equation*} \end{itemize} Then \begin{equation*} \begin{split} &\int_{0}^{T}\int_{\mathbb{R}^{n}} \left(J^{s+\frac{\alpha}{2}}u(x,t)\right)^{2}\partial_{x_{1}}\varphi(x)\,dx\,dt+\int_{0}^{T}\int_{\mathbb{R}^{n}} \left(\partial_{x_{1}}J^{s+\frac{\alpha-2}{2}}u(x,t)\right)^{2}\partial_{x_{1}}\varphi(x)\,dx\,dt\\ &\lesssim_{n,\alpha}\left(1+ T+ \left\|\nabla u\right\|_{L^{1}_{T}L^{\infty}_{x}}+ T\left\|u\right\|_{L^{\infty}_{T}H^{r}_{x}}\right)^{1/2}\|u\|_{L^{\infty}_{T}H^{s_{n}^{+}}_{x}}, \end{split} \end{equation*} whenever $r>\frac{n}{2}.$ \end{lem} The reader should keep in mind that the result in Lemma \ref{main2} shows a strong dependence on the variable $x_{1}:$ this can be observed in the condition on $\nu_{1},$ which can never be null, unlike the other coordinates, which may be null, though not all of them simultaneously, as pointed out in Case 2 of Lemma \ref{main2}. After translating $\varphi$ properly, it is possible to describe some regions where the smoothing effect is valid. We show that, depending on the dimension and the possible signs of the coordinates of the vector $\nu$ in \eqref{condi1}, the geometry of the regions might change; see Figures \ref{fig:2}-\ref{fig:3}. A question that arises naturally from Lemma \ref{main2} is to determine whether a homogeneous version of the smoothing also holds. Indeed, it holds, and its proof is a consequence of combining Lemma \ref{main2} and formula \eqref{formula2}. \begin{figure}[h!]
\begin{center} \begin{tikzpicture}[scale=0.7] \filldraw[thick, color=gray!30](-4,3.5) -- (-4,5.6)--(8.5,-1) -- (4.5,-1); \filldraw[thick, color=gray!30](4,3.5) -- (4,5.6)--(-8.5,-1) -- (-4.5,-1); \draw[thick,dashed] (-4,5.6) -- (8.5,-1); \draw[thick,dashed] (4.5,-1)--(-4,3.5) ; \draw[thick,dashed] (4,5.6) -- (-8.5,-1); \draw[thick,dashed] (-4.5,-1)--(4,3.5) ; \draw[->] (-9,0) -- (9,0) node[below] {$x$}; \draw[->] (0,-1) -- (0,5) node[right] {$y$}; \node at (3,1){$\footnotesize{\mathcal{Q}_{\{ \epsilon,\tau,\nu \}}} $}; \node at (-3,1){$\footnotesize{\mathcal{Q}_{\{ \epsilon,\tau,\nu \}}} $}; \end{tikzpicture} \qquad \end{center} \caption{\label{fig:2} \emph{Regions where the smoothing effect occurs. The region on the left corresponds to $\nu_{1}>0, \nu_{2}<0.$ \\ The region on the right corresponds to the case $\nu_{1}>0,\nu_{2}>0.$}} \end{figure} \begin{figure}[h!] \begin{center} \begin{tikzpicture}[scale=0.8] \filldraw[thick, color=gray!30] (2,-1) -- (2,5) -- (5.8,5) -- (5.8,-1); \draw[thick,dashed] (2,-1) -- (2,5); \draw[thick,dashed] (5.8,-1) -- (5.8,5); \node at (4,1){$\footnotesize{\mathcal{Q}_{\{ \tau_{1},\tau_{2},\nu \}}} $}; \node at (1.7,1.5){$\footnotesize{\frac{\tau_{1}}{\nu_{1}}} $}; \node at (6.2,1.5){$\footnotesize{\frac{\tau_{2}}{\nu_{1}}} $}; \draw[->] (-2,2) -- (9,2) node[below] {$x$}; \draw[->] (1,-1) -- (1,5) node[right] {$y$}; \end{tikzpicture} \qquad \end{center} \caption{Particular region where the smoothing effect is valid:$ \mathcal{Q}_{\{\nu, \tau_{1},\tau_{2}\}}$ where $0<\tau_{1}<\tau_{2}$ and $\nu_{1}>0,\,\nu_{2}=0.$}\label{fig:1} \end{figure} \begin{center} \begin{figure}[h] \begin{tikzpicture}[x=0.5cm,y=0.5cm,z=0.3cm,>=stealth] \draw[->] (xyz cs:x=-5) -- (xyz cs:x=9) node[above] {$x$}; \draw[->] (xyz cs:y=-2) -- (xyz cs:y=9) node[right] {$z$}; \draw[->] (xyz cs:z=-2) -- (xyz cs:z=9) node[above] {$y$}; \filldraw[thick, color=gray!30](0,3,0) -- (4,0,0) -- (0,0,4)--(0,3,0); \draw[thick,dashed] (0,3,0) -- (4,0,0)--(0,0,4)--(0,3,0); 
\filldraw[thick, color=gray!30](0,6,0) -- (7,0,0) -- (0,0,7)--(0,6,0); \draw[thick,dashed] (0,6,0) -- (7,0,0)--(0,0,7)--(0,6,0); \node[align=center] at (4,-4) (ori) {\\$\mathcal{Q}_{\{\tau_{1},\tau_{2},\nu \}} $}; \draw[->,help lines,shorten >=3pt] (ori) .. controls (1,-2) and (1.2,-1.5) .. (5,2,-1); \end{tikzpicture} \caption{\emph{Description of the region where the smoothing takes place in dimension 3 with $0<\tau_{1}<\tau_{2}$ and $\nu_{1},\nu_{2},\nu_{3}>0.$}} \label{fig:3} \end{figure} \end{center} \begin{cor} Under the hypotheses of Lemma \ref{main2}, the solution of the IVP \eqref{zk4} satisfies \begin{equation}\label{energyasympt} \begin{split} &\int_{0}^{T}\int_{\mathbb{R}^{n}} \left(\left(-\Delta\right)^{s+\frac{\alpha}{2}}u(x,t)\right)^{2}\partial_{x_{1}}\varphi\,dx\,dt\\ &\lesssim_{n,\alpha }\left(1+ T+ \left\|\nabla u\right\|_{L^{1}_{T}L^{\infty}_{x}}+ T\left\|u\right\|_{L^{\infty}_{T}H^{s_{n}^{+}}_{x}}\right)^{1/2}\|u\|_{L^{\infty}_{T}H^{s_{n}^{+}}_{x}}. \end{split} \end{equation} \end{cor} In the study of the asymptotic behavior of the solutions of the Zakharov-Kuznetsov equation in the energy space, it is required to know the behavior of the function and its derivatives on certain subsets of the plane, \emph{e.g.} channels, squares. In this sense, estimates such as \eqref{energyasympt} are quite useful in the description of such behavior (see Mendez, Mu\~{n}oz, Poblete and Pozo \cite{MMPP} for more details). \begin{cor}\label{cor1} Let $u\in C \left([0, T] : H^{s}(\mathbb{R}^{n})\right),\,s>\frac{n}{2}+1$ with $n\geq 2,$ be a solution of \eqref{zk4}. Let $\vec{\kappa}=\left(\kappa_{1},\kappa_{2},\dots,\kappa_{n}\right)\in\mathbb{Z}^{n}.$ For $\vec{\kappa}\neq 0$ we define \begin{equation*} \mathfrak{P}_{\vec{\kappa}}:=\left\{x\in\mathbb{R}^{n}\,|\,\kappa_{j}< x_{j}\leq \kappa_{j}+1, j=1,2,\dots, n \right\}.
\end{equation*} Then \begin{equation}\label{eq1.1} \begin{split} &\int_{0}^{T}\int_{\mathfrak{P}_{\vec{\kappa}}} \left(J^{s+\frac{\alpha}{2}}u(x,t)\right)^{2}\,dx\,dt+\int_{0}^{T}\int_{\mathfrak{P}_{\vec{\kappa}}} \left(\partial_{x_{1}}J^{s+\frac{\alpha-2}{2}}u(x,t)\right)^{2}\,dx\,dt\\ &\lesssim_{n,\alpha}\left(1+ T+ \left\|\nabla u\right\|_{L^{1}_{T}L^{\infty}_{x}}+ T\left\|u\right\|_{L^{\infty}_{T}H^{r}_{x}}\right)^{1/2}\|u\|_{L^{\infty}_{T}H^{s_{n}^{+}}_{x}}, \end{split} \end{equation} whenever $r>\frac{n}{2}.$ \end{cor} Also, in the homogeneous case it is possible to describe the smoothing effect in the same region as above. \begin{cor} Let $u\in C \left([0, T] : H^{s}(\mathbb{R}^{n})\right),\,s>\frac{n}{2}+1,$ with $n\geq 2,$ be a solution of \eqref{zk4}. For $\vec{\kappa}$ and $\mathfrak{P}_{\vec{\kappa}}$ as in Corollary \ref{cor1}, the solution $u$ associated to \eqref{zk4} satisfies: \begin{equation*} \begin{split} &\int_{0}^{T}\int_{\mathfrak{P}_{\vec{\kappa}}} \left(\left(-\Delta\right)^{s+\frac{\alpha}{2}}u\right)^{2}(x,t)\,dx\,dt\\ &\lesssim_{n,\alpha}\left(1+ T+ \left\|\nabla u\right\|_{L^{1}_{T}L^{\infty}_{x}}+ T\left\|u\right\|_{L^{\infty}_{T}H^{r}_{x}}\right)^{1/2}\|u\|_{L^{\infty}_{T}H^{s_{n}^{+}}_{x}}, \end{split} \end{equation*} whenever $r>\frac{n}{2}.$ \end{cor} Kato's smoothing effect has found diverse applications in the field of dispersive equations. Our intention in this part of the work is to present to the reader an application of Kato's smoothing effect for the solutions of the IVP \eqref{zk4}.
The question addressed is the following: \emph{If the initial data $u_{0}$ in the IVP \eqref{zk4} is provided with extra regularity in the half space $\mathcal{H}_{\{\epsilon,\nu\}}$ where \begin{equation*} \mathcal{H}_{\{\epsilon,\nu \}}:=\left\{x\in\mathbb{R}^{n}\, |\, \nu\cdot x>\epsilon\right\}, \end{equation*} $\epsilon>0$ and $\nu$ is a non-null vector in $\mathbb{R}^{n},$ does the solution $u$ preserve the same regularity for almost all time $t>0$?} Surprisingly, this extra regularity is propagated by the solution flow with infinite speed, and this property has been shown to be true in several nonlinear dispersive models; in fact, it is known nowadays as the \emph{principle of propagation of regularity}. The description of such a phenomenon depends strongly on Kato's smoothing effect. Indeed, the method to establish this particular property is mainly based on weighted energy estimates, where it is also possible to show that the localized regularity entails the gain of extra derivatives on the channel $\mathcal{Q}_{\{\epsilon,\tau,\nu\}}$ traveling in some specific direction, where \begin{equation*} \mathcal{Q}_{\{\epsilon,\tau,\nu\}}:=\left\{x\in\mathbb{R}^{n}\,|\, \epsilon<\nu\cdot x<\tau\right\}, \end{equation*} with $\tau>\epsilon.$ Figure \ref{fig:5} below intends to describe this particular phenomenon in dimension $2.$ \begin{figure}[h!] \begin{center} \begin{tikzpicture}[scale=0.7] \filldraw[thick, color=gray!30](-4,7) -- (-4,3.5)--(4.3,-0.8) -- (10,-0.8); \draw[thick,dashed] (-4,7) -- (8.5,0); \draw[thick,dashed] (4.5,-1)--(-4,3.5) ; \draw[->] (-6,0) -- (9,0) node[below] {$x$}; \draw[->] (0,-1) -- (0,6) node[right] {$y$}; \node[align=center] at (4,-4) (ori) {\\$\mathcal{H}_{\{\epsilon,\nu \}} $}; \draw[->,help lines,shorten >=3pt] (ori) .. controls (1,-2) and (1.2,-1.5) .. (5.2,4); \node[align=center] at (-1,-4) (ori) {\\$\mathcal{Q}_{\{\epsilon,\tau,\nu \}} $}; \draw[->,help lines,shorten >=3pt] (ori) .. controls (0,-2) and (-2.5,-1) ..
(1,3); \draw[thick, dashed,->] (-2,2) -- (-3.2,0.2); \draw[thick, dashed,->] (2,2) -- (0.7,0.2); \end{tikzpicture} \qquad \end{center} \caption{\label{fig:5} \emph{Direction of propagation of regularity in the two-dimensional case with $\nu_{1}>0,\nu_{2}>0$. The dashed arrows denote the direction of propagation.}} \end{figure} This question was originally addressed by Isaza, Linares and Ponce \cite{ILP1,ILP2,ILP3} for solutions of the KdV equation, and later it was studied for solutions of the Benjamin-Ono equation and the Kadomtsev-Petviashvili equation. In the case of one-dimensional models where the dispersion is weak, this was established in \cite{AM1,AM2} for solutions of the dispersive generalized Benjamin-Ono and the fractional Korteweg-de Vries equations, respectively. In the case of quasi-linear type equations, Linares, Smith and Ponce showed, under certain conditions, that this principle also holds. In the case of higher dispersion, Segata and Smith showed that it also holds for the fifth-order KdV equation. The extension to the case where the extra regularity of the initial data is of fractional order was fully addressed by Kenig, Linares, Ponce and Vega \cite{KLPV} for solutions of the KdV equation. The results in \cite{KLPV} were later extended in \cite{AMZK} to the $n$-dimensional case, and subsequently these techniques were applied by Freire, Mendez and Riaño \cite{FMR} for solutions of the dispersive \emph{generalized Benjamin-Ono-Zakharov-Kuznetsov} equation, that is, \begin{equation}\label{propaail} \partial_{t}u-\partial_{x_{1}}(-\partial_{x_{1}}^{2})^{\frac{\alpha+1}{2}}u+\partial_{x_{2}}^{2}\partial_{x_{1}}u+u\partial_{x_{1}}u=0,\quad 0\leq \alpha\leq 1. \end{equation} The case $\alpha=0$ in \eqref{propaail} was first addressed by Nascimento in \cite{ailton} in the spirit of \cite{ILP2}. For the most recent compendium of propagation of regularity results we refer to Linares and Ponce \cite{LPPROPA} and the references therein.
Our second main result is devoted to showing that solutions of the FZK equation also satisfy the propagation of regularity principle; it is summarized in the following theorem: \begin{thm}\label{zk9} Let $u_{0}\in H^{s}(\mathbb{R}^{n})$ with $s>s_{n}.$ Let $\nu=(\nu_{1},\nu_{2},\dots,\nu_{n})\in \mathbb{R}^{n},\, n\geq 2,$ with $\nu$ satisfying \eqref{condi1}-\eqref{condi2}. If for some $\beta\in\mathbb{R}$ the initial data $u_{0}$ satisfies \begin{equation}\label{e1.1} \int_{\mathcal{H}_{\{\beta, \nu\}}} \left(J^{\widetilde{s}}u_{0}(x)\right)^{2}\,dx<\infty, \end{equation} then the corresponding solution $u=u(x,t)$ of the IVP \eqref{zk4} with $1\leq \alpha <2$ satisfies the following: for any $\omega > 0,\, \epsilon>0$ and $\tau\geq 5\epsilon,$ \begin{equation}\label{g1} \sup_{0\leq t\leq T}\int_{\mathcal{H}_{\{\beta+\epsilon-\omega t,\nu\}}}\left(J^{r}u(x,t)\right)^{2}dx\leq c^{*}, \end{equation} for any $r\in (0,\widetilde{s}]$ with $c^{*}=c^{*}\left(\epsilon; T; \omega ; \|u_{0}\|_{H^{s_{n}+}}; \|J^{\widetilde{s}}u_{0}\|_{L^{2}\left(\mathcal{H}_{\{\beta,\nu \}}\right)}\right).$ In addition, for any $\omega> 0,\, \epsilon>0$ and $\tau\geq 5\epsilon,$ \begin{equation}\label{g2.1} \int_{0}^{T}\int_{\mathcal{Q}_{\{\epsilon-\omega t+\beta ,\beta-\omega t+\tau,\nu }\}}\left(J^{\widetilde{s}+1}u\right)^{2}(x,t)\,dx\,dt\leq c^{*}, \end{equation} with $c^{*}=c^{*}\left(\epsilon;\tau; T; \omega ; \|u_{0}\|_{H^{s_{n}{+}}}; \|J^{\widetilde{s}}u_{0}\|_{L^{2}(\mathcal{H}_{\{\beta,\nu \}})}\right).$ If in addition to \eqref{e1.1} there exists $\beta>0,$ such that \begin{equation}\label{clave1} J^{\widetilde{s}+\frac{2-\alpha}{2}}u_{0}\in L^{2}\left(\mathcal{H}_{\{\beta,\nu\}}\right), \end{equation} then for any $\omega > 0,\,\epsilon>0$ and $\tau\geq 5\epsilon,$ \begin{equation}\label{l1} \begin{split} &\sup_{0\leq t\leq T}\int_{\mathcal{H}_{\{\beta+\epsilon-\omega t,\nu\}}}\left(J^{\widetilde{s}+\frac{1-\alpha}{2}}u\right)^{2}(x,t)\,dx\\ &\qquad +\int_{0}^{T}\int_{\mathcal{Q}_{\{\epsilon-\omega t+\beta ,\beta-\omega t+\tau,\nu
}\}}\left(J^{\widetilde{s}+1}u\right)^{2}(x,t)\,dx\,dt\leq c, \end{split} \end{equation} with $c=c\left(T;\epsilon;\omega ;\alpha;\|u_{0}\|_{H^{\widetilde{s}}};\left\|J^{\widetilde{s}+\frac{1-\alpha}{2}}u_{0}\right\|_{L^{2}\left(\mathcal{H}_{\{\beta,\nu\}}\right)}\right)>0.$ \end{thm} The proof of Theorem \ref{zk9} is based on weighted energy estimates combined with an inductive argument which, due to the weak effects of the dispersion, has to be carried out in two steps. \begin{rem} The result in Theorem \ref{zk9} is also true in the case where the dispersion is even weaker, \emph{e.g.} $0<\alpha<1;$ the proof in this case follows by combining the ideas of the proof of Theorem \ref{zk9} and the bi-inductive argument applied in \cite{AM2,AMTHESIS} for solutions of the fKdV. \end{rem} As a corollary, we obtain that the result also holds true in the case where the extra regularity of the initial data is provided on an integer scale. \begin{cor} Let $u_{0}\in H^{s}(\mathbb{R}^{n})$ with $s>s_{n},$ and let $\nu=(\nu_{1},\nu_{2},\dots,\nu_{n})\in \mathbb{R}^{n},\, n\geq 2,$ satisfy \eqref{condi1}-\eqref{condi2}.
If there exists $m\in\mathbb{N},\, m> 1+ \left\lceil\frac{n}{2}\right\rceil,$ such that the initial data $u_{0}$ satisfies \begin{equation}\label{intscale} \partial_{x}^{\gamma}u_{0}\in L^{2}\left(\mathcal{H}_{\{\beta,\nu\}}\right)\quad\mbox{for every multi-index}\quad \gamma\quad\mbox{with}\quad |\gamma|=m, \end{equation} then the corresponding solution $u=u(x,t)$ of the IVP \eqref{zk4} satisfies: for any $\omega > 0,\, \epsilon>0$ and $\tau\geq 5\epsilon,$ \begin{equation} \sup_{0\leq t\leq T}\int_{\mathcal{H}_{\{\beta+\epsilon-\omega t,\nu\}}}\left(J^{r}u(x,t)\right)^{2}dx\leq c^{*}, \end{equation} for any $r\in(0,m]$ with $c^{*}=c^{*}\left(\epsilon; T; \omega ; \|u_{0}\|_{H^{s_{n}+}}; \max_{|\gamma|=m}\|\partial_{x}^{\gamma} u_{0}\|_{L^{2}\left(\mathcal{H}_{\{\beta,\nu\}}\right)}\right).$ In addition, for any $\omega > 0,\, \epsilon>0$ and $\tau\geq 5\epsilon,$ \begin{equation} \int_{0}^{T}\int_{\mathcal{Q}_{\{\epsilon-\omega t+\beta ,\beta-\omega t+\tau,\nu }\}}\left(J^{m+\frac{\alpha}{2}} u\right)^{2}(x,t)\,dx\,dt\leq c^{*}, \end{equation} with $c^{*}=c^{*}\left(\epsilon;\tau; T; \omega ; \|u_{0}\|_{H^{s_{n}{+}}}; \max_{|\gamma|=m}\|\partial_{x}^{\gamma}u_{0}\|_{L^{2}(\mathcal{H}_{\{\beta,\nu \}})}\right).$ If in addition to \eqref{intscale} the initial data satisfies \begin{equation} J^{\frac{2-\alpha}{2}}\partial_{x}^{\gamma}u_{0}\in L^{2}\left(\mathcal{H}_{\{\beta,\nu\}}\right)\quad\mbox{for every}\quad |\gamma|=m, \end{equation} then for any $\omega> 0,\,\epsilon>0$ and $\tau\geq 5\epsilon,$ \begin{equation} \begin{split} &\sup_{0\leq t\leq T}\int_{\mathcal{H}_{\{\beta+\epsilon-\omega t,\nu\}}}\left(J^{r}u\right)^{2}(x,t)\,dx\\ &\qquad +\int_{0}^{T}\int_{\mathcal{Q}_{\{\epsilon-\omega t+\beta ,\beta-\omega t+\tau,\nu }\}}\left(J^{m+1}u\right)^{2}(x,t)\,dx\,dt\leq c, \end{split} \end{equation} with $r\in \left(0, m+\frac{1-\alpha}{2}\right]$ and $c=c\left(T;\epsilon;\omega ;\alpha;\|u_{0}\|_{H^{s_{n}^{+}}};\left\|J^{m+\frac{1-\alpha}{2}}u_{0}\right\|_{L^{2}\left(\mathcal{H}_{\{\beta,\nu \}}\right)}\right)>0.$ \end{cor} Also, the solutions associated to the IVP \eqref{zk4} satisfy
the following transversal propagation of regularity, which is summarized in the following corollary. \begin{cor}\label{zk10} Let $u_{0}\in H^{s_{n}}(\mathbb{R}^{n}).$ If for some $\nu=(\nu_{1},\nu_{2},\dots,\nu_{n})\in \mathbb{R}^{n}$ satisfying \eqref{condi1}-\eqref{condi2}, and for some $s\in \mathbb{R},\, s>s_{n},$ \begin{equation*} \int_{\mathcal{H}_{\{\beta,\nu\}}}\left(J^{s}_{x_{1}}u_{0}(x)\right)^{2}\, dx<\infty, \end{equation*} then the corresponding solution $u=u(x,t)$ of the IVP provided by Theorem \ref{lwp} satisfies that for any $\omega>0,\, \epsilon>0$ and $\tau \geq 5\epsilon,$ \begin{equation*} \sup_{0< t<T}\int_{\mathcal{H}_{\{\beta+\epsilon-\omega t,\nu \}}}\left(J^{r}_{x_{1}}u\right)^{2}(x,t)dx\leq c^{*}, \end{equation*} for any $r\in (0,s]$ with $c^{*}=c^{*}\left(\epsilon; T; \omega; \|u_{0}\|_{H^{s_{n}}_{x_{1}}}; \|J^{s}_{x_{1}}u_{0}\|_{L^{2}\left(\mathcal{H}_{ \{\beta,\nu\}}\right)}\right).$ In addition, for any $\omega > 0,\, \epsilon>0$ and $\tau\geq 5\epsilon,$ \begin{equation*} \int_{0}^{T}\int_{\mathcal{Q}_{\{\beta-\omega t+\epsilon,\beta-\omega t+\tau,\nu }\}}\left(J^{s+1}_{x_{1}}u\right)^{2}(x,t)\,dx\,dt\leq c \end{equation*} with $c=c\left(\epsilon;\tau; T; \omega ; \|u_{0}\|_{H^{s_{n}+}}; \|J^{s}_{x_{1}}u_{0}\|_{L^{2}\left(\mathcal{H}_{\{\beta,\nu\}}\right)}\right)>0.$ \end{cor} \subsection{Organization of the paper} In Section \ref{seccion2} we introduce the notation to be used in this work. In Section \ref{prooflema1} we provide, in a detailed manner, the arguments of the proof of our first main result. Finally, in Section \ref{seccion5} we provide an application of Lemma \ref{main2} by proving that solutions of the IVP \eqref{zk4} satisfy the propagation of regularity principle.
\subsection{Notation}\label{seccion2} For two quantities $A$ and $B$, we denote $A\lesssim B$ if $A\leq cB$ for some constant $c>0.$ Similarly, $A\gtrsim B$ if $A\geq cB$ for some $c>0.$ Also, for two positive quantities $A$ and $B$, we say that they are \emph{comparable} if $A\lesssim B$ and $B\lesssim A;$ when such conditions are satisfied we indicate it by writing $A\approx B.$ The dependence of the constant $c$ on other parameters or constants is usually clear from the context, and we will often suppress this dependence whenever possible. For any pair of quantities $X$ and $Y,$ we denote $X\ll Y$ if $X\leq cY$ for some sufficiently small positive constant $c.$ The smallness of such a constant is usually clear from the context. The notation $X\gg Y$ is defined similarly. For $f$ in a suitable class, the \emph{Fourier transform} of $f$ is defined as \begin{equation*} (\mathcal{F}f)(\xi ):=\int_{\mathbb{R}^{n}}e^{2\pi \mathrm{i} x\cdot \xi} f(x)\, dx. \end{equation*} For $x\in \mathbb{R}^{n}$ we denote $$\langle x\rangle :=\left(1+|x|^{2}\right)^{\frac{1}{2}}.$$ For $s\in \mathbb{R},$ the \emph{Bessel potential of order $-s$} is defined as $J^{s}:=(1-\Delta)^{\frac{s}{2}};$ following this notation, the operator $J^{s}$ admits a representation via the Fourier transform: \begin{equation*} \mathcal{F}(J^{s}f)(\xi)=\langle 2\pi\xi\rangle^{s}\mathcal{F}(f)(\xi). \end{equation*} We denote by $\mathcal{S}(\mathbb{R}^{n})$ the \emph{Schwartz space} and by $\mathcal{S}'(\mathbb{R}^{n})$ the space of \emph{tempered distributions}. Additionally, for $s\in\mathbb{R}$ we consider the \emph{Sobolev spaces} $H^{s}(\mathbb{R}^{n})$ defined as \begin{equation*} H^{s}(\mathbb{R}^{n}):=J^{-s}L^{2}(\mathbb{R}^{n}). \end{equation*} For $p\in[1,\infty]$ we consider the classical Lebesgue spaces $L^{p}(\mathbb{R}^{n}).$ Also, we shall often use mixed-norm notation.
For example, for $f:\mathbb{R}^{n}\times[0,T]\longrightarrow \mathbb{R},$ we will denote \begin{equation*} \|f\|_{L^{p}_{T}L^{q}_{x}}:=\left(\int_{0}^{T}\|f(\cdot, t)\|_{L^{q}_{x}}^{p}\, dt\right)^{\frac{1}{p}}, \end{equation*} with the obvious modifications in the cases $p=\infty$ or $q=\infty$. Analogously, we define the \emph{mixed Sobolev norms} \begin{equation*} \|f\|_{L^{p}_{T}H^{s}_{x}}:=\left(\int_{0}^{T}\|f(\cdot, t)\|_{H^{s}_{x}}^{p}\, dt\right)^{\frac{1}{p}}. \end{equation*} For operators $A$ and $B,$ the \emph{commutator} between $A$ and $B$ is defined as $[A;B] := AB -BA.$ Let $\epsilon\in\mathbb{R}.$ For $\nu=\left(\nu_{1},\nu_{2},\dots,\nu_{n}\right)\in\mathbb{R}^{n}$ we define the \emph{half space} \begin{equation}\label{half} \mathcal{H}_{\{\epsilon,\nu \}}:=\left\{x\in\mathbb{R}^{n}\, |\, \nu\cdot x>\epsilon\right\}, \end{equation} where $\cdot$ denotes the canonical inner product in $\mathbb{R}^{n}.$ Let $\tau>\epsilon.$ For $\nu=\left(\nu_{1},\nu_{2},\dots,\nu_{n}\right)\in\mathbb{R}^{n}$ we define the \emph{channel} $\mathcal{Q}_{\{\epsilon,\tau,\nu\}}$ as \begin{equation}\label{channel} \mathcal{Q}_{\{\epsilon,\tau,\nu\}}:=\left\{x\in\mathbb{R}^{n}\,|\, \epsilon<\nu\cdot x<\tau\right\}. \end{equation} \section{Proof of Lemma \ref{main2}}\label{prooflema1} In this section we describe in detail the proof of the main lemma of this paper.
\begin{proof} Let $\varphi:\mathbb{R}^{n}\longrightarrow\mathbb{R}$ be a smooth function such that \begin{equation}\label{weight} \sup_{\gamma\in(\mathbb{N}_{0})^{n},|\gamma|\leq 4}\sup_{x\in\mathbb{R}^{n}} |\partial_{x}^{\gamma}\varphi(x)|\leq c, \end{equation} for some positive constant $c.$ By standard arguments we obtain \begin{equation*} \begin{split} &\frac{1}{2}\frac{d}{dt}\int_{\mathbb{R}^{n}} \left(J^{s}u\right)^{2} \varphi\,dx\underbrace{-\int_{\mathbb{R}^{n}}J^{s}\partial_{x_{1}}(-\Delta)^{\alpha/2}uJ^{s}u\varphi(x)\,dx}_{\Theta_{1}(t)}\\ &+\underbrace{\int_{\mathbb{R}^{n}}J^{s}\left(u\partial_{x_{1}}u\right)J^{s}u \varphi\,dx}_{\Theta_{2}(t)}=0. \end{split} \end{equation*} First, we handle the term providing the dispersive part of the equation, by noticing that after applying integration by parts we obtain \begin{equation*} \Theta_{1}(t)=\frac{1}{2}\int_{\mathbb{R}^{n}} J^{s}u \left[(-\Delta)^{\alpha/2}\partial_{x_{1}}; \varphi\right]J^{s}u\,dx. \end{equation*} The next step is crucial in our argument, mainly because we will replace the operator $(-\Delta)^{\frac{\alpha}{2}}$ by \begin{equation}\label{decomposition} (-\Delta)^{\frac{\alpha}{2}}=\underbrace{(I-\Delta)^{\frac{\alpha}{2}}}_{J^{\alpha}}+\mathcal{K}_{\alpha}, \end{equation} where $\mathcal{K}_{\alpha}$ is an operator that satisfies the following properties: \begin{itemize} \item[(i)] There exists a kernel $k_{\alpha} $ such that \begin{equation}\label{bo1} (\mathcal{K}_{\alpha}f)(x):=\int_{\mathbb{R}^{n}} k_{\alpha}(x,x-y)f(y)\,dy,\quad f\in\mathcal{S}(\mathbb{R}^{n}). \end{equation} \item[(ii)] For $1\leq p\leq \infty,$ \begin{equation}\label{bo2} \mathcal{K}_{\alpha}: L^{p}(\mathbb{R}^{n})\longrightarrow L^{p}(\mathbb{R}^{n}), \end{equation} \end{itemize} with \begin{equation}\label{bo3} \left\|\mathcal{K}_{\alpha}f\right\|_{L^{p}}\lesssim \left\|J^{\alpha-2}f\right\|_{L^{p}}. \end{equation} For more details on the decomposition \eqref{decomposition} see Bourgain, Li \cite{BL}.
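For the sake of clarity, we indicate the computation leading to the commutator expression for $\Theta_{1}.$ Writing $A:=(-\Delta)^{\alpha/2}\partial_{x_{1}}$ and $f:=J^{s}u,$ and using that $A$ is skew-symmetric in $L^{2}(\mathbb{R}^{n})$ and commutes with $J^{s},$ we have
\begin{equation*}
\int_{\mathbb{R}^{n}} (Af)f\varphi\,dx=-\int_{\mathbb{R}^{n}} f A(\varphi f)\,dx=-\int_{\mathbb{R}^{n}} f\varphi Af\,dx-\int_{\mathbb{R}^{n}} f\left[A;\varphi\right]f\,dx,
\end{equation*}
so that $2\int_{\mathbb{R}^{n}} (Af)f\varphi\,dx=-\int_{\mathbb{R}^{n}} f\left[A;\varphi\right]f\,dx,$ which, recalling the minus sign in the definition of $\Theta_{1},$ gives the identity above.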
Thus, replacing \eqref{decomposition} yields \begin{equation*} \begin{split} \Theta_{1}(t)&=\frac{1}{2}\int_{\mathbb{R}^{n}} J^{s}u \left[J^{\alpha}\partial_{x_{1}}; \varphi\right]J^{s}u\,dx+\frac{1}{2}\int_{\mathbb{R}^{n}} J^{s}u \left[\mathcal{K}_{\alpha}\partial_{x_{1}}; \varphi\right]J^{s}u\,dx\\ &=\Theta_{1,1}(t)+\Theta_{1,2}(t). \end{split} \end{equation*} To handle the term $\Theta_{1,1}$ we define \begin{equation}\label{comono1} \Psi_{c_{\alpha}}:=\left[J^{\alpha}\partial_{x_{1}}; \varphi\right]. \end{equation} Clearly $\Psi_{c_{\alpha}}\in\mathrm{OP}\mathbb{S}^{\alpha}.$ By virtue of the pseudo-differential calculus (see Appendix \ref{apendice1}), its symbol admits the following decomposition \begin{equation} \begin{split}\label{comono1.1} c_{\alpha}(x,\xi)&=\sum_{|\beta|=1}\frac{ 1}{2\pi \mathrm{i}}\partial_{\xi}^{\beta}\left(2\pi \mathrm{i}\xi_{1} \langle 2\pi\xi\rangle^{\alpha}\right)\partial_{x}^{\beta}\varphi+ \sum_{|\beta|=2}\frac{1}{(2\pi \mathrm{i})^{2}}\partial_{\xi}^{\beta}\left(2\pi \mathrm{i}\xi_{1} \langle 2\pi\xi\rangle^{\alpha}\right)\partial_{x}^{\beta}\varphi\\ &\quad +r_{\alpha-2}(x,\xi)\\ &=p_{\alpha}(x,\xi)+p_{\alpha-1}(x,\xi)+r_{\alpha-2}(x,\xi), \end{split} \end{equation} where $r_{\alpha-2}\in\mathbb{S}^{\alpha-2}\subset \mathbb{S}^{0}.$ After rearranging the expressions for $p_{\alpha}$ and $p_{\alpha-1},$ we get \begin{equation}\label{comono2} \begin{split} p_{\alpha}(x,\xi) &= \langle2\pi \xi\rangle^{\alpha} \partial_{x_{1}}\varphi- \alpha \sum_{|\beta|=1}\langle 2\pi\xi \rangle^{\alpha-2}(2\pi\mathrm{i}\xi)^{\beta+\mathrm{e}_{1}}\partial_{x}^{\beta} \varphi \end{split} \end{equation} and \begin{equation}\label{comono3} \begin{split} p_{\alpha-1}(x,\xi) &=-\alpha\sum_{|\beta|=1}(2\pi \mathrm{i}\xi_{1})\langle 2\pi\xi\rangle^{\alpha-2}\partial_{x_{1}}\partial_{x}^{\beta}\varphi-\frac{\alpha}{2\pi}\sum_{|\beta|=1}(2\pi \mathrm{i}\xi)^{\beta}\langle 2\pi \xi\rangle ^{\alpha-2}\partial_{x}^{\beta}\partial_{x_{1}}\varphi\\ &\quad
-\frac{\alpha}{2\pi}\sum_{|\beta|=1}(2\pi\mathrm{i}\xi_{1})\langle2\pi \xi \rangle ^{\alpha-2}\partial_{x}^{2\beta}\varphi\\ &\quad +\frac{\alpha(\alpha-2)}{2\pi}\sum_{|\beta_{2}|=1}\sum_{|\beta_{1}|=1}(2\pi\mathrm{i}\xi_{1})(2\pi\mathrm{i}\xi)^{\beta_{1}}(2\pi\mathrm{i}\xi)^{\beta_{2}}\langle 2\pi\xi \rangle^{\alpha-4}\partial_{x}^{\beta_{2}}\partial_{x}^{\beta_{1}} \varphi. \end{split} \end{equation} Therefore, \begin{equation}\label{comono4} \Psi_{c_{\alpha}}=p_{\alpha}(x,D)+p_{\alpha-1}(x,D)+r_{\alpha-2}(x,D), \end{equation} where \begin{equation}\label{energy} p_{\alpha}(x,D)=\partial_{x_{1}}\varphi J^{\alpha}-\alpha\partial_{x_{1}}\varphi J^{\alpha-2}\partial_{x_{1}}^{2}-\alpha\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\partial_{x}^{\beta}\varphi J^{\alpha-2}\partial_{x}^{\beta}\partial_{x_{1}}, \end{equation} \begin{equation}\label{energy2.1.1} \begin{split} p_{\alpha-1}(x,D) &=-\alpha\sum_{|\beta|=1}\partial_{x_{1}}\partial_{x}^{\beta}\varphi \partial_{x_{1}}J^{\alpha-2}-\frac{\alpha}{2\pi}\sum_{|\beta|=1}\partial_{x}^{\beta}\partial_{x_{1}}\varphi\partial_{x}^{\beta}J^{\alpha-2}\\ &-\frac{\alpha}{2\pi}\sum_{|\beta|=1}\partial_{x}^{2\beta}\varphi\partial_{x_{1}}J^{\alpha-2} +\frac{\alpha(\alpha-2)}{2\pi}\sum_{|\beta_{2}|=1}\sum_{|\beta_{1}|=1}\partial_{x}^{\beta_{2}}\partial_{x}^{\beta_{1}}\varphi\partial_{x_{1}}\partial_{x}^{\beta_{2}}\partial_{x}^{\beta_{1}}J^{\alpha-4}, \end{split} \end{equation} and $r_{\alpha-2}(x,D)\in\mathrm{OP}\mathbb{S}^{\alpha-2}\subset \mathrm{OP}\mathbb{S}^{0}.$ Thus, after replacing the operators $p_{\alpha}(x,D), p_{\alpha-1}(x,D)$ and $r_{\alpha-2}(x,D)$ in $\Theta_{1,1}$ we get \begin{equation*} \begin{split} &\Theta_{1,1}(t)\\ &=\frac{1}{2}\int_{\mathbb{R}^{n}}J^{s}uJ^{s+\alpha}u\partial_{x_{1}}\varphi\,dx-\frac{\alpha}{2}\int_{\mathbb{R}^{n}}J^{s}u J^{s+\alpha-2}\partial_{x_{1}}^{2}u\partial_{x_{1}}\varphi\,dx\\ &\quad -\frac{\alpha}{2}\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq
\mathrm{e}_{1}}}}\,\int_{\mathbb{R}^{n}}J^{s}uJ^{s+\alpha-2}\partial_{x_{1}}\partial_{x}^{\beta}u \partial_{x}^{\beta}\varphi\,dx-\frac{\alpha}{2}\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\int_{\mathbb{R}^{n}}J^{s}uJ^{s+\alpha-2}\partial_{x_{1}}u \partial_{x}^{\beta}\partial_{x_{1}}\varphi\,dx\\ &\quad -\frac{\alpha}{4\pi}\sum_{|\beta|=1}\int_{\mathbb{R}^{n}} J^{s}u\partial_{x}^{\beta}J^{\alpha-2+s}u \partial_{x_{1}}\partial_{x}^{\beta}\varphi\,dx-\frac{\alpha}{4\pi}\sum_{|\beta|=1}\int_{\mathbb{R}^{n}}J^{s}u\partial_{x_{1}}J^{s+\alpha-2}u\partial_{x}^{2\beta}\varphi\,dx\\ &\quad +\frac{\alpha(\alpha-2)}{4\pi}\sum_{|\beta_{2}|=1}\sum_{|\beta_{1}|=1}\int_{\mathbb{R}^{n}}J^{s}u\partial_{x_{1}}\partial_{x}^{\beta_{2}}\partial_{x}^{\beta_{1}}J^{\alpha-4+s}u \partial_{x}^{\beta_{2}}\partial_{x}^{\beta_{1}}\varphi\,dx\\ &\quad +\frac{1}{2}\int_{\mathbb{R}^{n}}J^{s}ur_{\alpha-2}(x,D)J^{s}u\,dx\\ &=\Theta_{1,1,1}(t)+\Theta_{1,1,2}(t)+\Theta_{1,1,3}(t)+\Theta_{1,1,4}(t)+\Theta_{1,1,5}(t)+\Theta_{1,1,6}(t)\\ &\quad +\Theta_{1,1,7}(t)+\Theta_{1,1,8}(t). \end{split} \end{equation*} In the first place, we rewrite $\Theta_{1,1,1}$ as \begin{equation}\label{r1} \begin{split} \Theta_{1,1,1}(t)&=\frac{1}{2}\int_{\mathbb{R}^{n}}\left(J^{s+\frac{\alpha}{2}}u\right)^{2}\partial_{x_{1}}\varphi\,dx+\frac{1}{2}\int_{\mathbb{R}^{n}}J^{s+\frac{\alpha}{2}}u\left[J^{\frac{\alpha}{2}};\partial_{x_{1}}\varphi\right]J^{s}u\,dx\\ &=\Theta_{1,1,1,1}(t)+\Theta_{1,1,1,2}(t). \end{split} \end{equation} After integrating in time, the first expression on the r.h.s. above yields the smoothing effect. Nevertheless, the price to pay for it is reflected in the estimate of $\Theta_{1,1,1,2},$ which is not an easy task to tackle.
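To make the smoothing mechanism behind $\Theta_{1,1,1,1}$ explicit, we record a sketch, anticipating the representation $\varphi(x)=\phi(\nu\cdot x+\delta)$ with $\phi'\geq 0$ introduced in \eqref{function1}: since $\partial_{x_{1}}\varphi=\nu_{1}\phi'(\nu\cdot x+\delta),$
\begin{equation*}
\int_{0}^{T}\Theta_{1,1,1,1}(t)\,dt=\frac{\nu_{1}}{2}\int_{0}^{T}\int_{\mathbb{R}^{n}}\left(J^{s+\frac{\alpha}{2}}u\right)^{2}\phi'\left(\nu\cdot x+\delta\right)dx\,dt,
\end{equation*}
so that, for $\nu_{1}>0,$ the time integral of $\Theta_{1,1,1,1}$ controls a local gain of $\frac{\alpha}{2}$ derivatives on any region where $\phi'$ is bounded from below.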
For $s\in\mathbb{R}$ we set \begin{equation}\label{eq1} a_{s}(x,D):=\left[J^{s}; \phi\right],\quad \phi\in C^{\infty}(\mathbb{R}^{n}) \end{equation} with $$\sup_{x}|\partial_{x}^{\gamma}\phi(x)|\leq c_{\gamma}\quad \mbox{ for all}\quad \gamma\in (\mathbb{N}_{0})^{n}.$$ By the pseudo-differential calculus, this operator admits the decomposition \begin{equation}\label{eq2} a_{s}(x,D)=-s\sum_{|\beta|=1}\partial_{x}^{\beta}\phi \partial_{x}^{\beta}J^{s-2} +r_{s-2}(x,D), \end{equation} where $r_{s-2}(x,D)\in\mathrm{OP}\mathbb{S}^{s-2}.$ Returning to our case, \begin{equation*} \begin{split} &\Theta_{1,1,1,2}(t)\\ &=-\frac{\alpha}{4}\sum_{|\beta|=1}\int_{\mathbb{R}^{n}}J^{s}u\left[J^{\frac{\alpha}{2}}; \partial_{x}^{\beta}\partial_{x_{1}}\varphi\right]\partial_{x}^{\beta}J^{s+\frac{\alpha-4}{2}}u\,dx-\frac{\alpha}{4}\sum_{|\beta|=1}\int_{\mathbb{R}^{n}}J^{s}u\partial_{x}^{\beta}\partial_{x_{1}}\varphi\partial_{x}^{\beta}J^{s+\alpha-2}u\,dx\\ &\quad +\frac{1}{2}\int_{\mathbb{R}^{n}}J^{s}ur_{\frac{\alpha-4}{2}}(x,D)J^{s}u\,dx\\ &=-\frac{\alpha}{4}\sum_{|\beta|=1}\int_{\mathbb{R}^{n}}J^{s}u\left[J^{\frac{\alpha}{2}}; \partial_{x}^{\beta}\partial_{x_{1}}\varphi\right]\partial_{x}^{\beta}J^{s+\frac{\alpha-4}{2}}u\,dx\\ &\quad -\frac{\alpha}{4}\sum_{|\beta|=1}\int_{\mathbb{R}^{n}}J^{s+\frac{\alpha-2}{2}}u\left[J^{\frac{2-\alpha}{2}}; \partial_{x}^{\beta}\partial_{x_{1}}\varphi \right]\partial_{x}^{\beta} J^{s+\frac{\alpha-2}{2}}u\,dx\\ &\quad +\frac{\alpha}{8}\sum_{|\beta|=1}\int_{\mathbb{R}^{n}}\left(J^{s+\frac{\alpha-2}{2}}u\right)^{2}\partial_{x}^{2\beta}\partial_{x_{1}}\varphi\, dx +\frac{1}{2}\int_{\mathbb{R}^{n}}J^{s}ur_{\frac{\alpha-4}{2}}(x,D)J^{s}u\,dx. \end{split} \end{equation*} At this point we have by Theorem \ref{continuity} that \begin{equation}\label{contieq} \int_{0}^{T} |\Theta_{1,1,1,2}(t)|\, dt\lesssim_{\alpha} T\|u\|_{L^{\infty}_{T}H^{s}_{x}}^{2}.
\end{equation} On the other hand, after rearranging we get \begin{equation*} \begin{split} \Theta_{1,1,2}(t)&=\frac{\alpha}{2}\int_{\mathbb{R}^{n}}\partial_{x_{1}}J^{s+\frac{\alpha-2}{2}}u\left[J^{\frac{\alpha-2}{2}};\partial_{x_{1}}\varphi\right] \partial_{x_{1}}J^{s}u\,dx+\frac{\alpha}{2}\int_{\mathbb{R}^{n}}\left(\partial_{x_{1}}J^{s+\frac{\alpha-2}{2}}u\right)^{2}\partial_{x_{1}}\varphi\,dx\\ &\quad +\frac{\alpha}{2}\int_{\mathbb{R}^{n}}J^{s}u J^{s+\alpha-2}\partial_{x_{1}}u\partial_{x_{1}}^{2}\varphi\,dx\\ &=\frac{\alpha}{2}\int_{\mathbb{R}^{n}}\partial_{x_{1}}J^{s+\frac{\alpha-2}{2}}u\left[J^{\frac{\alpha-2}{2}};\partial_{x_{1}}\varphi\right] \partial_{x_{1}}J^{s}u\,dx+\frac{\alpha}{2}\int_{\mathbb{R}^{n}}\left(\partial_{x_{1}}J^{s+\frac{\alpha-2}{2}}u\right)^{2}\partial_{x_{1}}\varphi\,dx\\ &\quad +\frac{\alpha}{2}\int_{\mathbb{R}^{n}}J^{s}u J^{s+\alpha-2}\partial_{x_{1}}u\partial_{x_{1}}^{2}\varphi\,dx +\frac{\alpha}{2}\int_{\mathbb{R}^{n}}\partial_{x_{1}}J^{s+\frac{\alpha-2}{2}}u\left[J^{\frac{\alpha-2}{2}};\partial_{x_{1}}^{2}\varphi\right]J^{s}u\,dx\\ &\quad -\frac{\alpha}{4}\int_{\mathbb{R}^{n}}\left(J^{s+\frac{\alpha-2}{2}}u\right)^{2}\partial_{x_{1}}^{3}\varphi\,dx\\ &=\Theta_{1,1,2,1}(t)+\Theta_{1,1,2,2}(t)+\Theta_{1,1,2,3}(t)+\Theta_{1,1,2,4}(t)+\Theta_{1,1,2,5}(t). \end{split} \end{equation*} We shall point out that an argument like the one used in \eqref{eq1}-\eqref{contieq} allows us to estimate $\Theta_{1,1,2,1},\Theta_{1,1,2,3}$ and $\Theta_{1,1,2,4}$ with the bound \begin{equation*} \int_{0}^{T}\max\left\{ |\Theta_{1,1,2,1}(t)|, |\Theta_{1,1,2,3}(t)|,|\Theta_{1,1,2,4}(t)|\right\}\, dt\lesssim_{\alpha}T\|u\|_{L^{\infty}_{T}H^{s}_{x}}^{2}. \end{equation*} Notice that $\Theta_{1,1,2,2}$ has the correct sign in front and the desired regularity. More precisely, it provides the smoothing effect after integrating in time.
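The arguments "as in \eqref{eq1}-\eqref{contieq}" invoked here and below rest on a single principle, which we record as a sketch: commuting $J^{\sigma}$ with multiplication by a smooth function with bounded derivatives lowers the order by one, that is,
\begin{equation*}
\left[J^{\sigma};\psi\right]\in\mathrm{OP}\mathbb{S}^{\sigma-1},
\end{equation*}
so that for $\sigma\leq 1$ the commutator has order at most zero and Theorem \ref{continuity} gives $\left\|\left[J^{\sigma};\psi\right]f\right\|_{L^{2}_{x}}\lesssim_{\psi}\|f\|_{L^{2}_{x}}.$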
Finally, by the $L^{2}-$continuity (see Theorem \ref{continuity}) \begin{equation*} \int_{0}^{T} |\Theta_{1,1,2,5}(t)|\, dt \lesssim T\|u\|_{L^{\infty}_{T}H^{s}_{x}}^{2}\|\partial_{x_{1}}^{3}\varphi\|_{L^{\infty}_{x}}. \end{equation*} For the term $\Theta_{1,1,3}$ we have \begin{equation*} \begin{split} \Theta_{1,1,3}(t) &= \frac{\alpha}{2}\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\,\int_{\mathbb{R}^{n}}\partial_{x}^{\beta}J^{s}uJ^{s+\alpha-2}\partial_{x_{1}}u \partial_{x}^{\beta}\varphi\,dx+\frac{\alpha}{2}\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\,\int_{\mathbb{R}^{n}}J^{s}uJ^{s+\alpha-2}\partial_{x_{1}}u \partial_{x}^{2\beta}\varphi\,dx\\ &=\frac{\alpha}{2}\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\,\int_{\mathbb{R}^{n}}\partial_{x}^{\beta}J^{s+\frac{\alpha-2}{2}}u\left[ J^{\frac{2-\alpha}{2}};\partial_{x}^{\beta}\varphi\right]\partial_{x_{1}}J^{s+\alpha-2}u\,dx\\ &\quad +\frac{\alpha}{2}\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\,\int_{\mathbb{R}^{n}}\partial_{x}^{\beta}J^{s+\frac{\alpha-2}{2}}u\partial_{x_{1}}J^{s+\frac{\alpha-2}{2}}u\partial_{x}^{\beta}\varphi\,dx\\ &\quad +\frac{\alpha}{2}\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\,\int_{\mathbb{R}^{n}}\partial_{x_{1}}J^{s+\frac{\alpha-2}{2}}u\left[J^{\frac{\alpha-2}{2}};\partial_{x}^{2\beta}\varphi\right]J^{s}u\,dx\\ &\quad -\frac{\alpha}{4}\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\int_{\mathbb{R}^{n}}\left(J^{s+\frac{\alpha-2}{2}}u\right)^{2}\partial_{x_{1}}\partial_{x}^{2\beta}\varphi\,dx\\ &=\Theta_{1,1,3,1}(t)+\Theta_{1,1,3,2}(t)+\Theta_{1,1,3,3}(t)+\Theta_{1,1,3,4}(t). 
\end{split} \end{equation*} Thus, after applying the decomposition \eqref{eq2} it is clear that \begin{equation*} \begin{split} \Theta_{1,1,3,1}(t)&=\frac{\alpha(\alpha-2)}{4}\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\,\,\sum_{|\gamma|=1}\int_{\mathbb{R}^{n}}\partial_{x}^{\beta}J^{s+\frac{\alpha-2}{2}}u\partial_{x}^{\gamma}\partial_{x_{1}}J^{s+\frac{\alpha-6}{2}}u\partial_{x}^{\beta}\partial_{x}^{\gamma}\varphi\,dx\\ &\quad +\frac{\alpha}{2}\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\,\int_{\mathbb{R}^{n}}\partial_{x}^{\beta}J^{s+\frac{\alpha-2}{2}}u r_{\frac{\alpha-6}{2}}(x,D)\partial_{x_{1}}J^{s}u\,dx\\ &=\frac{\alpha(\alpha-2)}{4}\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\,\,\sum_{|\gamma|=1}\int_{\mathbb{R}^{n}}\partial_{x}^{\beta}J^{s+\frac{\alpha-2}{2}}u\partial_{x}^{\gamma}\partial_{x_{1}}J^{s+\frac{\alpha-6}{2}}u\partial_{x}^{\beta}\partial_{x}^{\gamma}\varphi\,dx\\ &\quad -\frac{\alpha}{2}\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\,\int_{\mathbb{R}^{n}}J^{s+\frac{\alpha-2}{2}}u \partial_{x}^{\beta}r_{\frac{\alpha-6}{2}}(x,D)\partial_{x_{1}}J^{s}u\,dx\\ &= \Theta_{1,1,3,1,1}(t)+ \Theta_{1,1,3,1,2}(t), \end{split} \end{equation*} where $r_{\frac{\alpha-6}{2}}(x,D)\in \mathrm{OP}\mathbb{S}^{\frac{\alpha-6}{2}}\subset \mathrm{OP}\mathbb{S}^{0}.$ Observe that \begin{equation*} \begin{split} \Theta_{1,1,3,1,1}(t)&= \frac{\alpha(\alpha-2)}{4}\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\quad\sum_{\mathclap{\substack{|\gamma|=1\\\gamma\neq \mathrm{e}_{1}}}}\,\int_{\mathbb{R}^{n}}\partial_{x}^{\beta}J^{s+\frac{\alpha-2}{2}}u\partial_{x}^{\gamma}\partial_{x_{1}}J^{s+\frac{\alpha-6}{2}}u\partial_{x}^{\beta}\partial_{x}^{\gamma}\varphi\,dx\\ &\quad +\frac{\alpha(\alpha-2)}{4}\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq
\mathrm{e}_{1}}}}\int_{\mathbb{R}^{n}}\partial_{x}^{\beta}J^{s+\frac{\alpha-2}{2}}u\partial_{x_{1}}\partial_{x_{1}}J^{s+\frac{\alpha-6}{2}}u\partial_{x}^{\beta}\partial_{x_{1}}\varphi\\ &=\Xi_{1}(t)+\Xi_{2}(t). \end{split} \end{equation*} The next terms are quite important since they determine the kind of sets where the smoothing can take place. More precisely, they force us to impose conditions on the weight function $\varphi$ in order to decouple certain terms. To handle $\Xi_{1}$ notice that there exists a skew-symmetric operator $\Psi_{-1}\in \mathrm{OP}\mathbb{S}^{-1},$ such that \begin{equation*} \begin{split} \Xi_{1}(t)= \frac{\alpha(\alpha-2)}{8}\int_{\mathbb{R}^{n}}\partial_{x}^{\beta}\partial_{x}^{\gamma}\varphi f\,\Psi_{-1}f\,dx, \end{split} \end{equation*} where \begin{equation*} f:=\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\partial_{x}^{\beta}J^{s+\frac{\alpha-2}{2}}u. \end{equation*} If we assume that $\varphi$ has the representation \begin{equation}\label{function1} \varphi(x)=\phi\left(\nu\cdot x+\delta\right)\quad x\in\mathbb{R}^{n},\delta\in\mathbb{R}, \end{equation} where $\nu=(\nu_{1},\nu_{2},\dots, \nu_{n}) \in\mathbb{R}^{n}$ is a nonzero vector and $\phi:\mathbb{R}\longrightarrow\mathbb{R}$ satisfies \eqref{weight}, then we may write \begin{equation*} \begin{split} \Xi_{1}(t)&= \frac{\alpha(2-\alpha)}{16}\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\quad \sum_{\mathclap{\substack{|\gamma|=1\\\gamma\neq \mathrm{e}_{1}}}}\,\,\int_{\mathbb{R}^{n}}\partial_{x}^{\beta}J^{s+\frac{\alpha-2}{2}}u\left[\Psi_{-1};\phi\right]\partial_{x}^{\gamma}J^{s+\frac{\alpha-2}{2}}u \,dx\\ &=\frac{\alpha(\alpha-2)}{16}\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\quad \sum_{\mathclap{\substack{|\gamma|=1\\\gamma\neq \mathrm{e}_{1}}}}\,\,\int_{\mathbb{R}^{n}}J^{s+\frac{\alpha-2}{2}}u\,\partial_{x}^{\beta}\left[\Psi_{-1};\phi\right]\partial_{x}^{\gamma}J^{s+\frac{\alpha-2}{2}}u
\,dx. \end{split} \end{equation*} Hence, combining the pseudo-differential calculus with Theorem \ref{continuity} implies that \begin{equation*} \begin{split} \int_{0}^{T} |\Xi_{1}(t)|\, dt \lesssim_{n}T\left(\frac{\alpha|\alpha-2|}{16}+1\right)\|u\|_{L^{\infty}_{T}H^{s}_{x}}^{2}, \end{split} \end{equation*} and \begin{equation*} \begin{split} \int_{0}^{T}| \Theta_{1,1,3,1,2}(t)|\,dt \lesssim_{\alpha}T(n-1)\|u\|_{L^{\infty}_{T}H^{s}_{x}}^{2}. \end{split} \end{equation*} Providing some control over $\Theta_{1,1,3,1,1}$ is not an easy task at all, since several interactions between the variables have to be taken into consideration. Thus, using \eqref{eq2} yields \begin{equation*} \begin{split} \Theta_{1,1,3,3}(t)&=\frac{\alpha(2-\alpha)}{8}\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\,\sum_{|\gamma|=1}\int_{\mathbb{R}^{n}}\partial_{x}^{2\beta}\partial_{x}^{\gamma}\varphi \partial_{x_{1}}J^{s+\frac{\alpha-2}{2}}u\partial_{x}^{\gamma}J^{s+\frac{\alpha-6}{2}}u\,dx\\ &\quad -\frac{\alpha}{8\pi}\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\,\int_{\mathbb{R}^{n}}\partial_{x_{1}}J^{s+\frac{\alpha-2}{2}}ur_{\frac{\alpha-6}{2}}(x,D)J^{s}u\,dx\\ &=\frac{\alpha(2-\alpha)}{8}\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\,\sum_{|\gamma|=1}\int_{\mathbb{R}^{n}}\partial_{x}^{2\beta}\partial_{x}^{\gamma}\varphi \partial_{x_{1}}J^{s+\frac{\alpha-2}{2}}u\partial_{x}^{\gamma}J^{s+\frac{\alpha-6}{2}}u\,dx\\ &\quad +\frac{\alpha}{8\pi}\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\,\int_{\mathbb{R}^{n}}J^{s+\frac{\alpha-2}{2}}u \partial_{x_{1}}r_{\frac{\alpha-6}{2}}(x,D)J^{s}u\,dx, \end{split} \end{equation*} where $\partial_{x_{1}}r_{\frac{\alpha-6}{2}}(x,D)\in \mathrm{OP}\mathbb{S}^{\frac{\alpha-4}{2}}\subset \mathrm{OP}\mathbb{S}^{0}.$ Since $0<\alpha<2,$ we obtain by the $L^{2}-$continuity (Theorem \ref{continuity}) \begin{equation*} \begin{split} \int_{0}^{T} |\Theta_{1,1,3,3}(t)|\,dt &\lesssim
T\|u\|_{L^{\infty}_{T}H^{s}_{x}}^{2}\left\{\alpha(2-\alpha)\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\,\sum_{|\gamma|=1}\left\|\partial_{x}^{2\beta}\partial_{x}^{\gamma}\varphi\right\|_{L^{\infty}_{x}}+\alpha(n-1)\right\} \end{split} \end{equation*} and \begin{equation*} \int_{0}^{T} |\Theta_{1,1,3,4}(t)|\, dt\lesssim_{\alpha}T\|u\|_{L^{\infty}_{T}H^{s}_{x}}^{2}\,\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\left\|\partial_{x}^{2\beta}\partial_{x_{1}}\varphi\right\|_{L^{\infty}_{x}}. \end{equation*} The term $\Theta_{1,1,3,2}$ will be crucial in our analysis, since from it we shall extract the desired smoothing effect after integrating in time. Nevertheless, we postpone this tedious task to the next section, where we unify all the estimates. Next, we rewrite $\Theta_{1,1,4}$ as follows \begin{equation*} \begin{split} \Theta_{1,1,4}(t)&= -\frac{\alpha}{2}\,\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\, \int_{\mathbb{R}^{n}} J^{s+\frac{\alpha-2}{2}}u\left[J^{\frac{2-\alpha}{2}}; \partial_{x_{1}}\partial_{x}^{\beta}\varphi\right]\partial_{x_{1}}J^{\alpha-2+s}u\,dx\\ &\quad +\frac{\alpha}{4}\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\, \int_{\mathbb{R}^{n}}\left(J^{s+\frac{\alpha-2}{2}}u\right)^{2}\partial_{x_{1}}^{2}\partial_{x}^{\beta} \varphi\,dx\\ &=\Theta_{1,1,4,1}(t)+\Theta_{1,1,4,2}(t).
\end{split} \end{equation*} By Theorem \ref{continuity} it follows \begin{equation*} \int_{0}^{T} |\Theta_{1,1,4,2}(t)|\,dt\lesssim_{\alpha}T\|u\|_{L^{\infty}_{T}H^{s}_{x}}^{2}\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\left\|\partial_{x_{1}}^{2}\partial_{x}^{\beta}\varphi\right\|_{L^{\infty}_{x}}, \end{equation*} and for the remainder term we use decomposition \eqref{eq1}-\eqref{eq2} to obtain \begin{equation*} \begin{split} \Theta_{1,1,4,1}(t)&=\frac{\pi\alpha(2-\alpha)}{2}\sum_{|\beta|=1}\sum_{|\gamma|=1}\int_{\mathbb{R}^{n}}J^{s+\frac{\alpha-2}{2}}u \partial_{x}^{\gamma}\partial_{x_{1}}\partial_{x}^{\beta} \varphi\partial_{x}^{\gamma}\partial_{x_{1}}J^{s+\frac{\alpha-6}{2}}u\,dx\\ &\quad-\frac{\alpha}{2}\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\,\int_{\mathbb{R}^{n}} J^{s+\frac{\alpha-2}{2}}ur_{-\left(\frac{\alpha+2}{2}\right)}(x,D)\partial_{x}^{\beta}J^{\alpha-2+s}u\,dx, \end{split} \end{equation*} where $r_{-\left(\frac{\alpha+2}{2}\right)}(x,D)\in \mathrm{OP}\mathbb{S}^{-\left(\frac{\alpha+2}{2}\right)}\subset \mathrm{OP}\mathbb{S}^{0}.$ By virtue of Theorem \ref{continuity} \begin{equation*} \begin{split} \int_{0}^{T}|\Theta_{1,1,4,1}(t)|\, dt &\lesssim T \|u\|_{L^{\infty}_{T}H^{s}_{x}}^{2} \left\{\frac{\alpha(2-\alpha)}{2}\sum_{|\beta|=1}\sum_{|\gamma|=1} \left\|\partial_{x_{1}}\partial_{x}^{\beta}\partial_{x}^{\gamma}\varphi\right\|_{L^{\infty}_{x}} +\frac{(n-1)\alpha}{4\pi}\right\}. \end{split} \end{equation*} On the other hand \begin{equation*} \begin{split} \Theta_{1,1,5}(t)&= -\frac{\alpha}{4\pi}\sum_{|\beta|=1}\int_{\mathbb{R}^{n}} J^{s+\frac{\alpha-2}{2}}u\left[J^{\frac{2-\alpha}{2}}; \partial_{x_{1}}\partial_{x}^{\beta}\varphi\right]\partial_{x}^{\beta}J^{\alpha-2+s}u\,dx\\ &\quad +\frac{\alpha}{8\pi}\sum_{|\beta|=1}\int_{\mathbb{R}^{n}}\left(J^{s+\frac{\alpha-2}{2}}u\right)^{2}\partial_{x_{1}}\partial_{x}^{2\beta} \varphi\,dx\\ &=\Theta_{1,1,5,1}(t)+\Theta_{1,1,5,2}(t).
\end{split} \end{equation*} For the second term above we have by Theorem \ref{continuity} \begin{equation*} \int_{0}^{T}|\Theta_{1,1,5,2}(t)|\,dt \lesssim T\|u\|_{L^{\infty}_{T}H^{s}_{x}}^{2}\left(\frac{\alpha}{8\pi}\sum_{|\beta|=1}\left\|\partial_{x_{1}}\partial_{x}^{2\beta}\varphi\right\|_{L^{\infty}_{x}}\right). \end{equation*} Instead, $\Theta_{1,1,5,1}$ requires extra work, which can be handled by using \eqref{eq1}-\eqref{eq2}. More precisely, \begin{equation}\label{eq3} \begin{split} \Theta_{1,1,5,1}(t)&=\frac{\alpha(2-\alpha)}{4}\sum_{|\beta|=1}\sum_{|\gamma|=1}\int_{\mathbb{R}^{n}}J^{s+\frac{\alpha-2}{2}}u \partial_{x}^{\gamma}\partial_{x_{1}}\partial_{x}^{\beta} \varphi\partial_{x}^{\gamma}\partial_{x}^{\beta}J^{s+\frac{\alpha-6}{2}}u\,dx\\ &\quad -\frac{\alpha}{4\pi}\sum_{|\beta|=1}\int_{\mathbb{R}^{n}} J^{s+\frac{\alpha-2}{2}}ur_{-\left(\frac{\alpha+2}{2}\right)}(x,D)\partial_{x}^{\beta}J^{\alpha-2+s}u\,dx, \end{split} \end{equation} where $r_{-\left(\frac{\alpha+2}{2}\right)}(x,D)\in \mathrm{OP}\mathbb{S}^{-\left(\frac{\alpha+2}{2}\right)}\subset \mathrm{OP}\mathbb{S}^{0}.$ At this point we obtain by Theorem \ref{continuity} \begin{equation}\label{eq4} \begin{split} \int_{0}^{T}|\Theta_{1,1,5,1}(t)|\,dt &\lesssim T \|u\|_{L^{\infty}_{T}H^{s}_{x}}^{2} \left\{\frac{\alpha(2-\alpha)}{4}\sum_{|\beta|=1}\sum_{|\gamma|=1} \left\|\partial_{x_{1}}\partial_{x}^{\beta}\partial_{x}^{\gamma}\varphi\right\|_{L^{\infty}_{x}} +\frac{n\alpha}{4\pi}\right\}. \end{split} \end{equation} Notice that the next term, $\Theta_{1,1,6},$ is quite similar to the previous one, and the way to bound it follows the same lines.
Indeed, \begin{equation*} \begin{split} \Theta_{1,1,6}(t)&=-\frac{\alpha}{4\pi}\sum_{|\beta|=1}\int_{\mathbb{R}^{n}} J^{s+\frac{\alpha-2}{2}}u\left[J^{\frac{2-\alpha}{2}}; \partial_{x}^{2\beta}\varphi\right]\partial_{x_{1}}J^{\alpha-2+s}u\,dx\\ &\quad +\frac{\alpha}{8\pi}\sum_{|\beta|=1}\int_{\mathbb{R}^{n}}\left(J^{s+\frac{\alpha-2}{2}}u\right)^{2}\partial_{x_{1}}\partial_{x}^{2\beta} \varphi\,dx\\ &=\Theta_{1,1,6,1}(t)+\Theta_{1,1,6,2}(t). \end{split} \end{equation*} Applying an argument similar to the one in \eqref{eq3}-\eqref{eq4} \textit{mutatis mutandis} yields \begin{equation*} \int_{0}^{T} |\Theta_{1,1,6,2}(t)|\, dt \lesssim T\|u\|_{L^{\infty}_{T}H^{s}_{x}}^{2}\left(\frac{\alpha}{8\pi}\sum_{|\beta|=1}\left\|\partial_{x_{1}}\partial_{x}^{2\beta}\varphi\right\|_{L^{\infty}_{x}}\right) \end{equation*} and \begin{equation*} \begin{split} \int_{0}^{T} |\Theta_{1,1,6,1}(t)|\, dt &\lesssim T \|u\|_{L^{\infty}_{T}H^{s}_{x}}^{2} \left\{\frac{\alpha(2-\alpha)}{4}\sum_{|\beta|=1}\sum_{|\gamma|=1} \left\|\partial_{x}^{2\beta}\partial_{x}^{\gamma}\varphi\right\|_{L^{\infty}_{x}} +\frac{n\alpha}{4\pi}\right\}.
\end{split} \end{equation*} After rewriting \begin{equation*} \begin{split} \Theta_{1,1,7}(t)&= \frac{\alpha(\alpha-2)}{4\pi}\sum_{|\beta_{2}|=1}\sum_{|\beta_{1}|=1}\int_{\mathbb{R}^{n}}J^{s}u\partial_{x_{1}}\partial_{x}^{\beta_{2}}\partial_{x}^{\beta_{1}}J^{\alpha-4+s}u \partial_{x}^{\beta_{2}}\partial_{x}^{\beta_{1}}\varphi\,dx\\ &=-\frac{\alpha(\alpha-2)}{4\pi}\sum_{|\beta_{2}|=1}\sum_{|\beta_{1}|=1}\int_{\mathbb{R}^{n}}\partial_{x}^{\beta_{1}}J^{s}u\partial_{x}^{\beta_{2}}\partial_{x_{1}}J^{\alpha-4+s}u \partial_{x}^{\beta_{2}}\partial_{x}^{\beta_{1}}\varphi\,dx\\ &\quad -\frac{\alpha(\alpha-2)}{4\pi}\sum_{|\beta_{2}|=1}\sum_{|\beta_{1}|=1}\int_{\mathbb{R}^{n}}J^{s}u\partial_{x}^{\beta_{2}}\partial_{x_{1}}J^{\alpha-4+s}u \partial_{x}^{\beta_{2}}\partial_{x}^{2\beta_{1}}\varphi\,dx\\ &=-\frac{\alpha(\alpha-2)}{4\pi}\sum_{|\beta_{2}|=1}\sum_{|\beta_{1}|=1}\int_{\mathbb{R}^{n}}J^{s+\frac{\alpha-4}{2}}\partial_{x}^{\beta_{1}}u\partial_{x_{1}}\partial_{x}^{\beta_{2}}J^{s+\frac{\alpha-4}{2}}u\partial_{x}^{\beta_{1}}\partial_{x}^{\beta_{2}}\varphi\,dx\\ &\quad-\frac{\alpha(\alpha-2)}{4\pi}\sum_{|\beta_{2}|=1}\sum_{|\beta_{1}|=1}\int_{\mathbb{R}^{n}}J^{s+\frac{\alpha-4}{2}}\partial_{x}^{\beta_{1}}u\left[J^{\frac{4-\alpha}{2}};\partial_{x}^{\beta_{1}}\partial_{x}^{\beta_{2}}\varphi\right]\partial_{x_{1}}J^{s+\alpha-4}\partial_{x}^{\beta_{2}}u\,dx\\ &\quad-\frac{\alpha(\alpha-2)}{4\pi}\sum_{|\beta_{2}|=1}\sum_{|\beta_{1}|=1}\int_{\mathbb{R}^{n}}J^{s}u\partial_{x}^{\beta_{2}}\partial_{x_{1}}J^{\alpha-4+s}u \partial_{x}^{\beta_{2}}\partial_{x}^{2\beta_{1}}\varphi\,dx\\ &=\Theta_{1,1,7,1}(t)+\Theta_{1,1,7,2}(t)+\Theta_{1,1,7,3}(t). 
\end{split} \end{equation*} By virtue of \eqref{function1} \begin{equation*} \begin{split} \Theta_{1,1,7,1}(t)&=-\frac{\alpha(\alpha-2)}{4\pi}\sum_{|\beta_{2}|=1}\sum_{|\beta_{1}|=1}\int_{\mathbb{R}^{n}}J^{s+\frac{\alpha-4}{2}}\partial_{x}^{\beta_{1}}u\partial_{x_{1}}\partial_{x}^{\beta_{2}}J^{s+\frac{\alpha-4}{2}}u \nu^{\beta_{1}}\nu^{\beta_{2}}\phi''\, dx\\ &=-\frac{\alpha(\alpha-2)}{4\pi}\int_{\mathbb{R}^{n}}\left(\sum_{|\beta_{1}|=1}\partial_{x}^{\beta_{1}}J^{s+\frac{\alpha-4}{2}}u\, \nu^{\beta_{1}}\right)\partial_{x_{1}}\left(\sum_{|\beta_{2}|=1}\partial_{x}^{\beta_{2}}J^{s+\frac{\alpha-4}{2}}u\, \nu^{\beta_{2}}\right)\,\phi''\, dx\\ &=\frac{\alpha(2-\alpha)\nu_{1}}{8\pi}\int_{\mathbb{R}^{n}}\left(\sum_{|\beta|=1}\partial_{x}^{\beta}J^{s+\frac{\alpha-4}{2}}u\, \nu^{\beta}\right)^{2}\,\phi'''\, dx, \end{split} \end{equation*} which by continuity implies \begin{equation*} \begin{split} \int_{0}^{T}|\Theta_{1,1,7,1}(t)|\, dt &\lesssim_{\alpha,n}\int_{0}^{T}\sum_{|\beta|=1}\int_{\mathbb{R}^{n}}\nu^{2\beta}\left(\partial_{x}^{\beta}J^{s+\frac{\alpha-4}{2}}u\right) ^{2}|\phi'''|\,dx\, dt\\ &\lesssim_{\alpha,n,\nu}T\|u\|_{L^{\infty}_{T}H^{s}_{x}}^{2}\|\phi'''\|_{L^{\infty}_{x}}, \end{split} \end{equation*} and \begin{equation*} \int_{0}^{T} |\Theta_{1,1,7,3}(t)|\, dt \lesssim T\frac{\alpha(2-\alpha)}{4\pi}\|u\|_{L^{\infty}_{T}H^{s}_{x}}^{2}\sum_{|\beta_{2}|=1}\sum_{|\beta_{1}|=1}\left\|\partial_{x}^{2\beta_{1}}\partial_{x}^{\beta_{2}}\varphi\right\|_{L^{\infty}_{x}}.
\end{equation*} After adapting the decomposition \eqref{eq1}-\eqref{eq2} to our case we can rewrite $\Theta_{1,1,7,2}$ as \begin{equation*} \begin{split} &\Theta_{1,1,7,2}(t)\\ &=\frac{\alpha(\alpha-2)(4-\alpha)}{4}\sum_{|\gamma|=1}\sum_{|\beta_{2}|=1}\sum_{|\beta_{1}|=1}\int_{\mathbb{R}^{n}}\partial_{x}^{\gamma}\partial_{x}^{\beta_{1}}\partial_{x}^{\beta_{2}}\varphi J^{s+\frac{\alpha-4}{2}}\partial_{x}^{\beta_{1}}uJ^{s+\frac{\alpha-8}{2}}\partial_{x_{1}}\partial_{x}^{\beta_{2}}\partial_{x}^{\beta_{1}}u\,dx\\ &\quad -\frac{\alpha(\alpha-2)}{4\pi}\sum_{|\gamma|=1}\sum_{|\beta_{2}|=1}\sum_{|\beta_{1}|=1}\int_{\mathbb{R}^{n}}J^{s+\frac{\alpha-4}{2}}\partial_{x}^{\beta_{1}}u\, r_{-\frac{\alpha}{2}}(x,D)\partial_{x_{1}}J^{s+\alpha-4}\partial_{x}^{\beta_{2}}u\,dx. \end{split} \end{equation*} Hence, by continuity \begin{equation*} \begin{split} &\int_{0}^{T} |\Theta_{1,1,7,2}(t)|\,dt\\ &\lesssim T \|u\|_{L^{\infty}_{T}H^{s}_{x}}^{2}\left(\frac{\alpha(2-\alpha)(4-\alpha)}{4}\sum_{|\gamma|=1}\sum_{|\beta_{2}|=1}\sum_{|\beta_{1}|=1}\left\|\partial_{x}^{\gamma}\partial_{x}^{\beta_{1}}\partial_{x}^{\beta_{2}}\varphi\right\|_{L^{\infty}_{x}}+\frac{\alpha(2-\alpha)}{4\pi}n^{3}\right). \end{split} \end{equation*} Since $r_{\alpha-2}(x,D)\in \mathrm{OP}\mathbb{S}^{\alpha-2}\subset \mathrm{OP}\mathbb{S}^{0},$ the $L^{2}-$continuity of order-zero pseudo-differential operators implies \begin{equation*} \int_{0}^{T}|\Theta_{1,1,8}(t)|\,dt\lesssim T\|u\|_{L^{\infty}_{T}H^{s}_{x}}^{2}. \end{equation*} \begin{flushleft} {\sc \underline{Claim 1:}} \end{flushleft} There exists a constant $\lambda=\lambda(n,\nu,\alpha)>0$ such that \begin{equation}\label{claim1} \begin{split} &\lambda\left(\int_{\mathbb{R}^{n}}\left(J^{s+\frac{\alpha}{2}}u\right)^{2}\partial_{x_{1}}\varphi\,dx+\int_{\mathbb{R}^{n}}\left(\partial_{x_{1}}J^{s+\frac{\alpha-2}{2}}u\right)^{2}\partial_{x_{1}}\varphi\,dx\right)\\ &\leq \Theta_{1,1,1,1}(t)+\Theta_{1,1,2,2}(t)+\Theta_{1,1,3,2}(t).
\end{split} \end{equation} We recall from \eqref{function1} that \begin{equation*} \varphi(x)=\phi\left(\nu\cdot x+\delta\right)\quad x\in\mathbb{R}^{n}. \end{equation*} Therefore, \begin{equation}\label{ineq1} \begin{split} | \Theta_{1,1,3,2}(t)| &\leq\frac{\alpha}{2}\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\,\nu^{\beta}\int_{\mathbb{R}^{n}} \left| \partial_{x}^{\beta}J^{s+\frac{\alpha-2}{2}}u\partial_{x_{1}}J^{s+\frac{\alpha-2}{2}}u\right| \,\phi' \,dx\\ &\leq\frac{\alpha}{2}\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\,\nu^{\beta}\left\|\eta\Psi_{\beta}J^{s+\frac{\alpha}{2}}u\right\|_{L^{2}_{x}}\left\|\eta\partial_{x_{1}}J^{s+\frac{\alpha-2}{2}}u\right\|_{L^{2}_{x}}\\ &=\frac{\alpha}{2}\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\,\nu^{\beta}\left(\left\|\Psi_{\beta}\left(J^{s+\frac{\alpha}{2}}u\eta\right)\right\|_{L^{2}_{x}}+\left\|\left[\Psi_{\beta};\eta\right]J^{s+\frac{\alpha}{2}}u\right\|_{L^{2}_{x}}\right)\left\|\eta\partial_{x_{1}}J^{s+\frac{\alpha-2}{2}}u\right\|_{L^{2}_{x}}, \end{split} \end{equation} where $\eta:=\left(\phi'\right)^{\frac{1}{2}}$ and $\Psi_{\beta}:=\partial_{x}^{\beta}J^{-1}.$ Additionally, \begin{equation}\label{sharp} \left\|\Psi_{\beta}\left(J^{s+\frac{\alpha}{2}}u\eta\right)\right\|_{L^{2}_{x}}\leq C \left\|J^{s+\frac{\alpha}{2}}u\eta\right\|_{L^{2}_{x}}, \end{equation} where $$C:=\sup_{f\in L^{2}(\mathbb{R}^{n}),f\neq 0}\frac{\|J^{-1}\partial_{x_{j}}f\|_{L^{2}}}{\|f\|_{L^{2}}},\quad j=2,3,\dots,n.$$ Also, \begin{equation*} \left\|\left[\Psi_{\beta};\eta\right]J^{s+\frac{\alpha}{2}}u\right\|_{L^{2}_{x}}\leq c(\alpha,\beta)\|J^{s}u(t)\|_{L^{2}_{x}}.
\end{equation*} Thus, for $\epsilon>0,$ \begin{equation}\label{ineq2} \begin{split} &\frac{\alpha}{2}\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\,\nu^{\beta}\left(\left\|\Psi_{\beta}\left(J^{s+\frac{\alpha}{2}}u\eta\right)\right\|_{L^{2}_{x}}+\left\|\left[\Psi_{\beta};\eta\right]J^{s+\frac{\alpha}{2}}u\right\|_{L^{2}_{x}}\right)\left\|\eta\partial_{x_{1}}J^{s+\frac{\alpha-2}{2}}u\right\|_{L^{2}_{x}}\\ &\leq \frac{\alpha \sqrt{n-1}|\overline{\nu}|C}{2}\left\|J^{s+\frac{\alpha}{2}}u\eta\right\|_{L^{2}_{x}}\left\|\eta\partial_{x_{1}}J^{s+\frac{\alpha-2}{2}}u\right\|_{L^{2}_{x}}+\frac{\alpha}{8\epsilon}\sum_{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}\,\nu^{\beta} \left\|\left[\Psi_{\beta};\eta\right]J^{s+\frac{\alpha}{2}}u\right\|_{L^{2}_{x}}^{2} \\ &\quad +\frac{\alpha\epsilon\sqrt{n-1} |\overline{\nu}|}{2}\left\|\eta\partial_{x_{1}}J^{s+\frac{\alpha-2}{2}}u\right\|_{L^{2}_{x}}^{2}, \end{split} \end{equation} where $|\overline{\nu}|:=\sqrt{\nu_{2}^{2}+\nu_{3}^{2}+\dots+\nu_{n}^{2}}.$ Hence, \begin{equation}\label{ineq3} \begin{split} & \frac{\nu_{1}}{2}\left\|J^{s+\frac{\alpha}{2}}u\eta\right\|_{L^{2}_{x}}^{2}+ \frac{\alpha\nu_{1}}{2}\left\|\eta\partial_{x_{1}}J^{s+\frac{\alpha-2}{2}}u\right\|_{L^{2}_{x}}^{2}-\frac{\alpha |\overline{\nu}|\sqrt{n-1}C}{2}\left\|J^{s+\frac{\alpha}{2}}u\eta\right\|_{L^{2}_{x}}\left\|\eta\partial_{x_{1}}J^{s+\frac{\alpha-2}{2}}u\right\|_{L^{2}_{x}}\\ &\quad -\frac{\alpha\epsilon\sqrt{n-1} |\overline{\nu}|}{2}\left\|\eta\partial_{x_{1}}J^{s+\frac{\alpha-2}{2}}u\right\|_{L^{2}_{x}}^{2}\\ &\leq \frac{\alpha}{8\epsilon}\sum_{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}\,\nu^{\beta} c(\alpha,\beta)^{2}\|J^{s}u(t)\|_{L^{2}_{x}}^{2}+\Theta_{1,1,1,1}(t)+\Theta_{1,1,2,2}(t)+\Theta_{1,1,3,2}(t). \end{split} \end{equation} At this point we shall emphasize two possible situations, which we proceed to discuss below.
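Before doing so, note that the passage from \eqref{ineq1} to \eqref{ineq2} uses only two elementary facts, which we record for the reader's convenience (assuming, as in the cases below, that $\nu_{2},\dots,\nu_{n}\geq 0$): Young's inequality and a Cauchy--Schwarz bound on the components of $\nu,$ namely
\begin{equation*}
ab\leq\frac{a^{2}}{4\epsilon}+\epsilon b^{2}\quad (a,b\geq 0,\ \epsilon>0)\qquad\text{and}\qquad \sum_{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}\nu^{\beta}=\nu_{2}+\dots+\nu_{n}\leq\sqrt{n-1}\,|\overline{\nu}|.
\end{equation*}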
\begin{itemize} \item[(i)] If $\nu_{1}>0$ and $\nu_{2}=\nu_{3}=\dots=\nu_{n}=0,$ then \eqref{claim1} holds with $$\lambda(\alpha,\nu,n)=\frac{\alpha\nu_{1}}{2}>0.$$ We call the reader's attention to the dependence on the dispersion, as well as on the direction $x_{1}.$ \item[(ii)] If $\nu_{1}>0$ and \begin{equation*} 0< \sqrt{\nu_{2}^{2}+\nu_{3}^{2}+\dots+\nu_{n}^{2}}<\min\left\{ \frac{2\nu_{1}}{C\sqrt{\alpha(n-1)}},\frac{\nu_{1}(1+\alpha)}{\alpha\epsilon\sqrt{n-1}}\right\}, \end{equation*} with $\epsilon$ satisfying \begin{equation*} 0<\epsilon<\frac{\nu_{1}}{|\overline{\nu}|\sqrt{n-1}}-\frac{\alpha\sqrt{n-1}|\overline{\nu}|}{4\nu_{1}}C^{2}, \end{equation*} then \eqref{claim1} holds with \begin{equation*} \begin{split} \lambda&=\lambda(\alpha,\nu,n,\epsilon)\\ &=\frac{\nu_{1}(1+\alpha)}{4}-\frac{\alpha\epsilon\sqrt{n-1}|\overline{\nu}|}{4}\\ &\quad -\frac{1}{2}\sqrt{\nu_{1}^{2}\frac{(1-\alpha)^{2}}{4}+|\overline{\nu}|\nu_{1}\frac{\alpha(1-\alpha)\epsilon\sqrt{n-1}}{2}+|\overline{\nu}|^{2}\frac{\alpha^{2}(n-1)(\epsilon^{2}+C^{2})}{4}}. \end{split} \end{equation*} \end{itemize} In some sense we recover the results obtained by Linares \& Ponce in \cite{LPZK} for the Zakharov--Kuznetsov equation in the $2d$ and $3d$ cases. More precisely, in \cite{LPZK} it is shown that inequality \eqref{claim1} holds true whenever \begin{equation*} \sqrt{3}\nu_{1}>\sqrt{\nu_{2}^{2}+\nu_{3}^{2}+\dots+\nu_{n}^{2}}>0,\quad\nu_{1}>0,\nu_{2},\nu_{3},\dots,\nu_{n}\geq0, \end{equation*} for dimensions $n=2$ and $n=3.$ Nevertheless, a quick inspection shows that for $\alpha=2$ the value $\sqrt{3}$ is not obtained directly from our calculations. We shall point out that this particular number is closely related to a specific cone into which the radiation part of the solutions falls (see \cite{CMPS}, and also the recent work \cite{RRY}, which provides numerical simulations for solutions of \eqref{zk4} where this situation is described).
Now we show how to deal with the term that presents the main difficulties: \begin{equation}\label{commutatrot1} \begin{split} \Theta_{1,2}(t)&=\frac{1}{2}\int_{\mathbb{R}^{n}} J^{s}u \left[\mathcal{K}_{\alpha}\partial_{x_{1}}; \varphi\right]J^{s}u\,dx. \end{split} \end{equation} As pointed out by Bourgain \& Li \cite{BL}, the operator $\mathcal{K}_{\alpha}$ can be rewritten as \begin{equation}\label{decompo} \begin{split} \widehat{\mathcal{K}_{\alpha}f}(\xi)&=\left(\langle 2\pi\xi \rangle^{\alpha}-|2\pi\xi|^{\alpha}\right)\widehat{f}(\xi)\\ &=\langle2\pi \xi\rangle^{\alpha-2}\psi(\xi)\widehat{f}(\xi), \end{split} \end{equation} where \begin{equation}\label{a1} \psi(\xi):=\sum_{j=1}^{\infty} {\alpha/2 \choose j}\langle 2\pi\xi\rangle ^{2-2j}. \end{equation} For non-integer $\beta>0,$ the binomial coefficient has the following asymptotic equivalence \begin{equation}\label{asym1} {\beta \choose k}=\frac{(-1)^{k}}{\Gamma(-\beta)k^{1+\beta}}\left(1+o(1)\right)\quad\mbox{as}\quad k\rightarrow \infty. \end{equation} More precisely, \begin{equation}\label{asym2} \left| {\beta \choose k}\right|\approx\frac{1}{k^{\beta+1}}\quad \mbox{for}\quad k\gg 1. \end{equation} From \eqref{asym1}-\eqref{asym2} it is clear that \begin{equation*} |\psi(\xi)|<\infty,\, \forall\, \xi \in \mathbb{R}^{n}.
\end{equation*} The decomposition \eqref{decompo} allows us to write \begin{equation}\label{expre1} \mathcal{K}_{\alpha}f(x)=\left(\mathcal{T}_{\psi}J^{\alpha-2}\right)f(x), \end{equation} where $\left(\mathcal{T}_{\psi}f\right)^{\widehat{}}(\xi):=\psi(\xi)\widehat{f}(\xi),\, f\in\mathcal{S}(\mathbb{R}^{n}).$ From \eqref{expre1} it is clear that \begin{equation*} \begin{split} \Theta_{1,2}(t)&=\frac{1}{2}\int_{\mathbb{R}^{n}} J^{s}u \left[\mathcal{K}_{\alpha}\partial_{x_{1}}; \varphi\right]J^{s}u\,dx\\ &= \frac{1}{2}\int_{\mathbb{R}^{n}} J^{s}u \mathcal{T}_{\psi}\left[J^{\alpha-2};\varphi\right]\partial_{x_{1}}J^{s}u\,dx+ \frac{1}{2}\int_{\mathbb{R}^{n}} J^{s}u\left[\mathcal{T}_{\psi}; \varphi\right]J^{\alpha-2}\partial_{x_{1}}J^{s}u\, dx\\ &=\Theta_{1,2,1}(t)+\Theta_{1,2,2}(t). \end{split} \end{equation*} After combining Plancherel's Theorem, Theorem \ref{continuity} and \eqref{eq1}-\eqref{eq2} we obtain \begin{equation*} \begin{split} \int_{0}^{T}|\Theta_{1,2,1}(t)|\,dt&\lesssim T \|u\|_{L^{\infty}_{T}H^{s}_{x}}\left\|\left[J^{\alpha-2};\varphi\right]\partial_{x_{1}}J^{s}u\right\|_{L^{\infty}_{T}L^{2}_{x}}\\ &\lesssim_{\alpha,T} \|u\|_{L^{\infty}_{T}H^{s}_{x}}^{2}. \end{split} \end{equation*} At this point it only remains to estimate $\Theta_{1,2,2}.$ To this end, we consider, for $b>0,$ the function \begin{equation*} \mathcal{B}_{b}(y):=\frac{1}{(4\pi)^{\frac{b}{2}}\Gamma\left(\frac{b}{2}\right)}\int_{0}^{\infty} e^{-\frac{\delta}{4\pi}} e^{-\frac{\pi|y|^{2}}{\delta}}\delta^{\frac{b-n}{2}}\,\frac{d\delta}{\delta},\quad y\in\mathbb{R}^{n},\quad b>0. \end{equation*} It is well known that for any $b>0$ the following properties are true: \begin{enumerate} \item[(i)] $\mathcal{B}_{b}\in L^{1}(\mathbb{R}^{n})$ with $\|\mathcal{B}_{b}\|_{L^{1}}=1.$ \item[(ii)] $\mathcal{B}_{b}$ has Fourier transform and it is given by the formula \begin{equation*} \widehat{\mathcal{B}_{b}}(\xi)=\frac{1}{\langle 2\pi\xi\rangle^{b}}. 
\end{equation*} \item[(iii)] $\mathcal{B}_{b}$ is smooth in $\mathbb{R}^{n}- \{0\}.$ \item[(iv)] $\mathcal{B}_{b}$ is strictly positive. \end{enumerate} For the proof of these and other properties see Stein \cite{stein1}, Chapter V. The consideration of this function allows us to write \begin{equation*} \begin{split} &\left[\mathcal{T}_{\psi}; \varphi\right]J^{\alpha-2}\partial_{x_{1}}J^{s}u\\ &\quad =\sum_{j=2}^{\infty}{\alpha/2 \choose j}\left(\mathcal{B}_{2j-2}*(\varphi J^{\alpha-2}\partial_{x_{1}}J^{s}u)-\varphi\mathcal{B}_{2j-2}* J^{\alpha-2}\partial_{x_{1}}J^{s}u\right). \end{split} \end{equation*} Thus, we restrict our attention to estimating the commutator in the expression above. More precisely, for $x\in\mathbb{R}^{n},$ \begin{equation*} \begin{split} &\left(\mathcal{B}_{2j-2}*(\varphi J^{\alpha-2}\partial_{x_{1}}J^{s}u)-\varphi\mathcal{B}_{2j-2}* J^{\alpha-2}\partial_{x_{1}}J^{s}u\right)(x)\\ &=-\int_{\mathbb{R}^{n}}\mathcal{B}_{2j-2}(y)\left(\varphi(x-y)-\varphi(x)\right)\left(\partial_{y_{1}}J^{s+\alpha-2}u\right)(x-y)\,dy. 
\end{split} \end{equation*} Now, for our purpose we make use of the following smooth partition of unity: let $\rho\in C^{\infty}_{0}(\mathbb{R}^{n})$ be such that $\rho(y)=1$ on $\{|y|\leq \frac{1}{2}\}$ and $\rho(y)=0$ on $\{|y|\geq 1\}.$ Setting $\chi(y)=\rho(y/2)-\rho(y),$ we have, for all $y\in \mathbb{R}^{n},$ \begin{equation}\label{sum1} 1=\rho\left(\frac{y}{2}\right)+\sum_{p\geq 0}\chi_{p}(y), \end{equation} where $\chi_{p}(y):=\chi\left(\frac{y}{2^{p+1}}\right),y\in\mathbb{R}^{n}.$ Thus, \begin{equation}\label{i1} \begin{split} &\left(\mathcal{B}_{2j-2}*(\varphi J^{\alpha-2}\partial_{x_{1}}J^{s}u)-\varphi\mathcal{B}_{2j-2}* J^{\alpha-2}\partial_{x_{1}}J^{s}u\right)(x)\\ &=-\int_{\mathbb{R}^{n}}\rho\left(\frac{y}{2}\right)\mathcal{B}_{2j-2}(y)\left(\varphi(x-y)-\varphi(x)\right)\left(\partial_{y_{1}}J^{s+\alpha-2}u\right)(x-y)\,dy\\ &\quad -\sum_{p\geq 0}\int_{\mathbb{R}^{n}}\chi_{p}(y)\mathcal{B}_{2j-2}(y)\left(\varphi(x-y)-\varphi(x)\right)\left(\partial_{y_{1}}J^{s+\alpha-2}u\right)(x-y)\,dy\\ &=I+II. \end{split} \end{equation} At this stage we shall take into consideration the different behaviors of $\mathcal{B}_{2j-2}$ depending on $j.$ \begin{flushleft} {\sc \fbox{Case $j=2:$}} \end{flushleft} \begin{flushleft} {\sc \underline{Sub case: $n=2:$}} \end{flushleft} By Lemma \ref{b1}, the function $\mathcal{B}_{2}$ can be represented as \begin{equation*} \mathcal{B}_{2}(y)=\frac{e^{-|y|}}{2\sqrt{2}\pi}\int_{0}^{\infty}e^{-|y|s}\left(s+\frac{s^{2}}{2}\right)^{-\frac{1}{2}}\,ds. \end{equation*} In addition, Lemma \ref{b1} provides the following estimate: for a multi-index $\beta$ with $|\beta|=1,$ \begin{equation}\label{e1} \left|\partial_{y}^{\beta}\mathcal{B}_{2}(y)\right|\leq c_{\beta} e^{-|y|}\left(1+|y|^{-1}\right), \quad y\in\mathbb{R}^{2}-\{0\}. 
\end{equation} Hence \begin{equation*} \begin{split} I&= \lim_{\epsilon\downarrow 0}\int_{B(0;2)\setminus B(0;\epsilon)}\rho\left(\frac{y}{2}\right)\mathcal{B}_{2}(y)\left(\varphi(x-y)-\varphi(x)\right)\left(\partial_{y_{1}}J^{s+\alpha-2}u\right)(x-y)\,dy\\ &=-\frac{1}{2}\int_{B(0;2)}(\partial_{y_{1}}\rho)\left(\frac{y}{2}\right)\mathcal{B}_{2}(y)\left(\varphi(x-y)-\varphi(x)\right)\left(J^{s+\alpha-2}u\right)(x-y)\,dy\\ &\quad - \lim_{\epsilon\downarrow 0}\int_{B(0;2)\setminus B(0;\epsilon)}\rho\left(\frac{y}{2}\right)(\partial_{y_{1}}\mathcal{B}_{2})(y)\left(\varphi(x-y)-\varphi(x)\right)\left(J^{s+\alpha-2}u\right)(x-y)\,dy\\ &\quad +\int_{B(0;2)}\rho\left(\frac{y}{2}\right)\mathcal{B}_{2}(y)(\partial_{y_{1}}\varphi)(x-y)\left(J^{s+\alpha-2}u\right)(x-y)\,dy\\ &\quad + \lim_{\epsilon\downarrow 0}\int_{\partial B(0;\epsilon)} \vartheta_{1}\rho\left(\frac{y}{2}\right)\mathcal{B}_{2}(y)\left(\varphi(x-y)-\varphi(x)\right)\left(J^{s+\alpha-2}u\right)(x-y)\,dS_{y}\\ &=I_{1}+I_{2}+I_{3}+I_{4}, \end{split} \end{equation*} where $\vartheta=(\vartheta_{1},\vartheta_{2},\dots,\vartheta_{n})$ denotes the inward-pointing unit normal along $\partial B(0;\epsilon).$ Moreover, by the mean value theorem \begin{equation}\label{mvt} |\varphi(x-y)-\varphi(x)|\leq \int_{0}^{1}\left|\nabla\varphi(x+(\theta-1)y)\cdot y\right|\,d\theta \leq \left\|\nabla \varphi\right\|_{L^{\infty}}|y|,\,x,y\in\mathbb{R}^{n}. 
\end{equation} Hence, combining \eqref{e1} and \eqref{mvt} we get \begin{equation*} \begin{split} |I_{1}|&\leq\frac{1}{2}\int_{B(0;2)}\left|(\partial_{y_{1}}\rho)\left(\frac{y}{2}\right)\right|\mathcal{B}_{2}(y)\left|\varphi(x-y)-\varphi(x)\right|\left|J^{s+\alpha-2}u(x-y)\right|\,dy\\ &\lesssim\|\nabla\varphi\|_{L^{\infty}_{x}} \int_{B(0;2)}\left|\partial_{y_{1}}\rho \left(\frac{y}{2}\right)\right|\mathcal{B}_{2}(y)\left|J^{s+\alpha-2}u(x-y)\right|\,dy, \end{split} \end{equation*} which by Young's inequality allows us to obtain \begin{equation*} \int_{0}^{T} \|I_{1}(t)\|_{L^{2}_{x}}\,dt\lesssim_{T} \|\nabla\varphi\|_{L^{\infty}_{x}}\|u\|_{L^{\infty}_{T}H^{s}_{x}}. \end{equation*} For the second term $I_{2},$ by virtue of \eqref{e1} we have \begin{equation*} \begin{split} |I_{2}(t)|&\lesssim \|\nabla\varphi\|_{L^{\infty}_{x}}\lim_{\epsilon\downarrow 0}\int_{B(0;2)\setminus B(0;\epsilon)}\rho\left(\frac{y}{2}\right)e^{-|y|}\left(|y|+1\right) \left|J^{s+\alpha-2}u\right|(x-y)\,dy\\ &\lesssim \|\nabla\varphi\|_{L^{\infty}_{x}}\int_{B(0;2)}\rho\left(\frac{y}{2}\right)e^{-|y|}\left(|y|+1\right) \left|J^{s+\alpha-2}u\right|(x-y)\,dy. \end{split} \end{equation*} Then by Young's inequality \begin{equation*} \int_{0}^{T} \|I_{2}(t)\|_{L^{2}_{x}}\,dt\lesssim T \|\nabla\varphi\|_{L^{\infty}_{x}}\|u\|_{L^{\infty}_{T}H^{s}_{x}}. \end{equation*} The term $I_{3}$ is quite straightforward to handle, since a direct application of Young's inequality yields \begin{equation*} \int_{0}^{T}\|I_{3}(t)\|_{L^{2}_{x}}\,dt \lesssim T \|\partial_{x_{1}}\varphi\|_{L^{\infty}_{x}}\|u\|_{L^{\infty}_{T}H^{s}_{x}}. 
\end{equation*} On the other hand, Lemma \ref{b2} implies \begin{equation*} \begin{split} |I_{4}|&=\left| \lim_{\epsilon\downarrow 0}\int_{\partial B(0;\epsilon)}\vartheta_{1}\rho\left(\frac{y}{2}\right)\mathcal{B}_{2}(y)\left(\varphi(x-y)-\varphi(x)\right)\left(J^{s+\alpha-2}u\right)(x-y)\,dS_{y}\right|\\ &\lesssim \|\nabla\varphi\|_{L^{\infty}_{x}}\lim_{\epsilon\downarrow 0}\epsilon^{2} e^{-\epsilon}\left(1+\log^{+}\left(\frac{1}{\epsilon} \right)\right) \int_{\partial B(0;\epsilon)}\frac{\left|J^{s+\alpha-2}u(x-y)\right|}{|y|}\,dS_{y} \end{split} \end{equation*} and since \begin{equation*} \lim_{\epsilon\downarrow 0} \fint_{\partial B(x;\epsilon)}\left|J^{s+\alpha-2}u(y)\right|\,dS_{y}= \left|J^{s+\alpha-2}u(x)\right|, \end{equation*} whenever $x \in\mathbb{R}^{n}$ is a Lebesgue point\footnote{ If $\omega_{n}$ denotes the surface measure of the unit sphere in $\mathbb{R}^{n},$ we set $\fint_{\partial B(x,r)}f(y)dS_{y}:=\frac{1}{\omega_{n}r^{n-1}}\int_{\partial B(x,r)}f(y) dS_{y}.$}, we conclude that $I_{4}=0.$ Next, \begin{equation*} \begin{split} II &=\sum_{p\geq 0}\frac{1}{2^{p+1}}\int_{\mathbb{R}^{n}}(\partial_{y_{1}}\chi_{p})(y)\mathcal{B}_{2}(y)\left(\varphi(x-y)-\varphi(x)\right)\left(J^{s+\alpha-2}u\right)(x-y)\,dy\\ &\quad +\sum_{p\geq 0}\int_{\mathbb{R}^{n}}\chi_{p}(y)(\partial_{y_{1}}\mathcal{B}_{2})(y)\left(\varphi(x-y)-\varphi(x)\right)\left(J^{s+\alpha-2}u\right)(x-y)\,dy\\ &\quad -\sum_{p\geq 0}\int_{\mathbb{R}^{n}}\chi_{p}(y)\mathcal{B}_{2}(y)(\partial_{y_{1}}\varphi)(x-y)\left(J^{s+\alpha-2}u\right)(x-y)\,dy\\ &=II_{1}+II_{2}+II_{3}. \end{split} \end{equation*} The terms $II_{1}$ and $II_{2}$ satisfy \begin{equation*} |II_{1}|\lesssim \|\varphi\|_{L^{\infty}_{x}}\sum_{p\geq 0}\frac{1}{2^{p}}\left(\mathcal{B}_{2}*\left|J^{s+\alpha-2}u\right|\right)(x), \end{equation*} and \begin{equation*} | II_{2}|\lesssim\|\varphi\|_{L^{\infty}_{x}}\sum_{p\geq 0}\left(\chi_{p}e^{-|\cdot|}*\left|J^{s+\alpha-2}u\right|\right)(x), \end{equation*} where we have used \eqref{e1}. 
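The estimates in this step lean on properties (i)-(ii) of the kernel $\mathcal{B}_{b}$ stated above. As an illustrative numerical sanity check, independent of the proof, one can evaluate the subordination integral defining $\mathcal{B}_{b}$ in the one situation where the kernel is elementary, namely $n=1$ and $b=2,$ where $\widehat{\mathcal{B}_{2}}(\xi)=(1+4\pi^{2}\xi^{2})^{-1}$ gives $\mathcal{B}_{2}(y)=\frac{1}{2}e^{-|y|}$ (the helper `bessel_kernel` below is our own, not part of the text):

```python
import math

def bessel_kernel(y, b, n, t_lo=-40.0, t_hi=40.0, steps=4000):
    """Evaluate Stein's subordination formula for the Bessel kernel B_b on R^n
    at radius y, using the substitution delta = exp(t) and a trapezoidal rule."""
    pref = 1.0 / ((4.0 * math.pi) ** (b / 2.0) * math.gamma(b / 2.0))
    h = (t_hi - t_lo) / steps
    total = 0.0
    for k in range(steps + 1):
        delta = math.exp(t_lo + k * h)
        # integrand e^{-delta/4pi} e^{-pi y^2/delta} delta^{(b-n)/2 - 1}, times
        # the Jacobian delta of the substitution delta = exp(t)
        val = math.exp(-delta / (4.0 * math.pi) - math.pi * y * y / delta) \
            * delta ** ((b - n) / 2.0)
        w = 0.5 if k in (0, steps) else 1.0
        total += w * val
    return pref * h * total

# Closed form in dimension n = 1 with b = 2: B_2(y) = (1/2) e^{-|y|}
approx = bessel_kernel(1.0, b=2, n=1)
exact = 0.5 * math.exp(-1.0)
print(approx, exact)
```

The double-exponential decay of the integrand in the variable $t$ makes the trapezoidal rule essentially exact here.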
On the other hand, after rewriting we have the following bound for $II_{3}$ \begin{equation*} \begin{split} | II_{3}|\leq\sum_{p\geq 0}\left(\left(\chi_{p}\mathcal{B}_{2}\right)*\left|\partial_{x_{1}}\varphi J^{s+\alpha-2}u\right| \right)(x). \end{split} \end{equation*} Therefore, Young's inequality ensures that \begin{equation*} \int_{0}^{T}\max\left( \|II_{1}(t)\|_{L^{2}_{x}},\, \|II_{2}(t)\|_{L^{2}_{x}},\, \|II_{3}(t)\|_{L^{2}_{x}}\right)\, dt \lesssim_{T} \|u\|_{L^{\infty}_{T}H^{s}_{x}}\|\nabla\varphi\|_{L^{\infty}_{x}}. \end{equation*} \begin{flushleft} {\sc \underline{Sub case: $n>2:$}} \end{flushleft} This case is quite similar to the sub-case $j>1+\frac{n}{2}$ below, so for the sake of brevity we omit it here and refer the reader to that case, where all the details are indicated. \begin{flushleft} {\sc \fbox{Case $j>2:$}} \end{flushleft} The following sub-cases are examined according to the condition satisfied by the integer $j.$ The reader should notice that in lower dimensions, \emph{e.g.} $n=2,3,4,$ some of these sub-cases are empty. We start with the easiest case. 
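The $j$-dependence tracked through the cases below ultimately feeds the summability of the coefficients ${\alpha/2 \choose j}$ via the asymptotics \eqref{asym1}-\eqref{asym2}. A quick numerical illustration of that asymptotic (our own check; the helper `binom` is ours):

```python
import math

def binom(beta, k):
    """Generalized binomial coefficient C(beta, k) = prod_{i<k} (beta - i) / (i + 1)."""
    out = 1.0
    for i in range(k):
        out *= (beta - i) / (i + 1)
    return out

# Asymptotics: C(beta, k) ~ (-1)^k / (Gamma(-beta) k^{1+beta}) as k -> infinity,
# hence |C(beta, k)| * k^{1+beta} * |Gamma(-beta)| -> 1.
beta = 0.75   # plays the role of alpha/2 for alpha = 1.5
k = 400
ratio = abs(binom(beta, k)) * k ** (1.0 + beta) * abs(math.gamma(-beta))
print(ratio)
```

At $k=400$ the $O(1/k)$ correction in \eqref{asym1} is already below one percent, consistent with \eqref{asym2}.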
\begin{flushleft} {\sc \underline{Sub case: $j>1+\frac{n}{2}:$}} \end{flushleft} \begin{equation*} \begin{split} I&= \lim_{\epsilon\downarrow 0}\int_{B(0;2)\setminus B(0;\epsilon)}\rho\left(\frac{y}{2}\right)\mathcal{B}_{2j-2}(y)\left(\varphi(x-y)-\varphi(x)\right)\left(\partial_{y_{1}}J^{s+\alpha-2}u\right)(x-y)\,dy\\ &=- \lim_{\epsilon\downarrow 0}\int_{B(0;2)\setminus B(0;\epsilon)}\partial_{y_{1}}\left(\rho\left(\frac{y}{2}\right)\mathcal{B}_{2j-2}(y)\left(\varphi(x-y)-\varphi(x)\right)\right) \left(J^{s+\alpha-2}u\right)(x-y)\,dy\\ &\quad +\lim_{\epsilon\downarrow 0}\int_{\partial B(0;\epsilon)}\vartheta_{1}\rho\left(\frac{y}{2}\right)\mathcal{B}_{2j-2}(y)\left(\varphi(x-y)-\varphi(x)\right)\left(J^{s+\alpha-2}u\right)(x-y)\,dS_{y}\\ &=-\frac{1}{2}\int_{B(0;2)}(\partial_{y_{1}}\rho)\left(\frac{y}{2}\right)\mathcal{B}_{2j-2}(y)\left(\varphi(x-y)-\varphi(x)\right)\left(J^{s+\alpha-2}u\right)(x-y)\,dy\\ &\quad - \lim_{\epsilon\downarrow 0}\int_{B(0;2)\setminus B(0;\epsilon)}\rho\left(\frac{y}{2}\right)(\partial_{y_{1}}\mathcal{B}_{2j-2})(y)\left(\varphi(x-y)-\varphi(x)\right)\left(J^{s+\alpha-2}u\right)(x-y)\,dy\\ &\quad +\int_{B(0;2)}\rho\left(\frac{y}{2}\right)\mathcal{B}_{2j-2}(y)(\partial_{y_{1}}\varphi)(x-y)\left(J^{s+\alpha-2}u\right)(x-y)\,dy\\ &\quad + \lim_{\epsilon\downarrow 0}\int_{\partial B(0;\epsilon)}\vartheta_{1}\rho\left(\frac{y}{2}\right)\mathcal{B}_{2j-2}(y)\left(\varphi(x-y)-\varphi(x)\right)\left(J^{s+\alpha-2}u\right)(x-y)\,dS_{y}\\ &=I_{1}+I_{2}+I_{3}+I_{4}. \end{split} \end{equation*} The terms $I_{1}$ and $I_{3}$ can be easily bounded by using Young's inequality. Indeed, \begin{equation*} \int_{0}^{T} \|I_{1}(t)\|_{L^{2}_{x}}\, dt \lesssim_{T}\|\varphi\|_{L^{\infty}_{x}}\|u\|_{L^{\infty}_{T}H^{s}_{x}}, \end{equation*} and \begin{equation*} \int_{0}^{T}\|I_{3}(t)\|_{L^{2}_{x}}\, dt\lesssim_{T}\|\partial_{x_{1}}\varphi\|_{L^{\infty}_{x}}\|u\|_{L^{\infty}_{T}H^{s}_{x}}. 
\end{equation*} Since \begin{equation*} \left| \partial_{y_{1}}\mathcal{B}_{2j-2}(y)\right| \lesssim \frac{\Gamma\left(j-1-\frac{n}{2}\right)}{\Gamma(j-1)},\quad \mbox{for}\quad 0< |y|\leq 2, \end{equation*} then \begin{equation*} \int_{0}^{T}\|I_{2}(t)\|_{L^{2}_{x}}\, dt\lesssim_{T}\frac{\Gamma\left(j-1-\frac{n}{2}\right)}{\Gamma(j-1)}\|\varphi\|_{L^{\infty}_{x}}\|u\|_{L^{\infty}_{T}H^{s}_{x}}. \end{equation*} By \eqref{mvt} we obtain the following upper bound \begin{equation}\label{p11} \begin{split} |I_{4}|&\lesssim\left\| \partial_{x}\varphi\right\|_{L^{\infty}_{x}} \lim_{\epsilon\downarrow 0}\int_{\partial B(0;\epsilon)}\rho\left(\frac{y}{2}\right)\mathcal{B}_{2j-2}(y)|y|\left|J^{s+\alpha-2}u\right|(x-y)\,dS_{y}\\ &\approx\frac{\Gamma\left(j-1-\frac{n}{2}\right)}{\Gamma\left(j-1\right)} \left\|\partial_{x}\varphi\right\|_{L^{\infty}_{x}} \lim_{\epsilon\downarrow 0}\int_{\partial B(0;\epsilon)}\rho\left(\frac{y}{2}\right)|y|\left|J^{s+\alpha-2}u\right|(x-y)\,dS_{y}, \end{split} \end{equation} where we have used that \begin{equation}\label{p12} \mathcal{B}_{2j-2}(y)\approx\frac{\pi^{\frac{n}{2}}\Gamma\left(j-1-\frac{n}{2}\right)}{\Gamma(j-1)},\quad \mbox{as}\,\, y\rightarrow 0, \end{equation} see Lemma \ref{Lemmasimpt}. Notice that \begin{equation}\label{p13} \begin{split} &\lim_{\epsilon\downarrow 0}\int_{\partial B(0;\epsilon)}\rho\left(\frac{y}{2}\right)|y|\left|J^{s+\alpha-2}u\right|(x-y)\,dS_{y}\\ &\lesssim \lim_{\epsilon\downarrow 0}\epsilon^{n}\int_{\partial B(0;\epsilon)}\frac{\left|J^{s+\alpha-2}u\right|(x-y)}{|y|^{n-1}}\,dS_{y}, \end{split} \end{equation} where \begin{equation}\label{p14} \lim_{\epsilon\downarrow 0}\fint_{\partial B(x;\epsilon)}\left|J^{s+\alpha-2}u(y)\right|\,dS_{y}=|J^{s+\alpha-2}u(x)|, \end{equation} whenever $x$ is a Lebesgue point. 
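The small-$y$ behavior \eqref{p12} can be read off from the subordination formula: at $y=0$ the $\delta$-integral reduces to a Gamma integral, giving $\mathcal{B}_{2j-2}(0)=\Gamma\left(j-1-\frac{n}{2}\right)/\big((4\pi)^{\frac{n}{2}}\Gamma(j-1)\big)$ whenever $2j-2>n,$ which is comparable, up to the fixed dimensional constant, to the right-hand side of \eqref{p12}. A numerical confirmation of this value (our own quadrature sketch, not part of the text):

```python
import math

def bessel_kernel_at_zero(b, n, t_lo=-40.0, t_hi=40.0, steps=4000):
    """Stein's subordination integral for B_b(0) on R^n (requires b > n),
    computed with the substitution delta = exp(t) and a trapezoidal rule."""
    pref = 1.0 / ((4.0 * math.pi) ** (b / 2.0) * math.gamma(b / 2.0))
    h = (t_hi - t_lo) / steps
    total = 0.0
    for k in range(steps + 1):
        delta = math.exp(t_lo + k * h)
        # e^{-delta/4pi} delta^{(b-n)/2 - 1} times the Jacobian delta
        val = math.exp(-delta / (4.0 * math.pi)) * delta ** ((b - n) / 2.0)
        w = 0.5 if k in (0, steps) else 1.0
        total += w * val
    return pref * h * total

# With b = 2j - 2, the Gamma integral gives
#   B_{2j-2}(0) = Gamma(j - 1 - n/2) / ((4 pi)^{n/2} Gamma(j - 1)).
n, j = 3, 4                      # so b = 6 > n
b = 2 * j - 2
approx = bessel_kernel_at_zero(b, n)
exact = math.gamma(j - 1 - n / 2) / ((4 * math.pi) ** (n / 2) * math.gamma(j - 1))
print(approx, exact)
```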
Therefore, the estimates \eqref{p11}-\eqref{p14} imply $I_{3}=0.$ An argument quite similar to the one used above also applies to prove that $I_{4}=0,$ and to avoid repeating the same arguments we will omit the details. On the other hand \begin{equation*} \begin{split} II &=\sum_{p\geq 0}\frac{1}{2^{p+1}}\int_{\mathbb{R}^{n}}(\partial_{y_{1}}\chi_{p})(y)\mathcal{B}_{2j-2}(y)\left(\varphi(x-y)-\varphi(x)\right)\left(J^{s+\alpha-2}u\right)(x-y)\,dy\\ &\quad +\sum_{p\geq 0}\int_{\mathbb{R}^{n}}\chi_{p}(y)(\partial_{y_{1}}\mathcal{B}_{2j-2})(y)\left(\varphi(x-y)-\varphi(x)\right)\left(J^{s+\alpha-2}u\right)(x-y)\,dy\\ &\quad -\sum_{p\geq 0}\int_{\mathbb{R}^{n}}\chi_{p}(y)\mathcal{B}_{2j-2}(y)(\partial_{y_{1}}\varphi)(x-y)\left(J^{s+\alpha-2}u\right)(x-y)\,dy\\ &=II_{1}+II_{2}+II_{3}. \end{split} \end{equation*} The expression $II_{1}$ clear that it satisfies the inequality \begin{equation}\label{p1} \begin{split} |II_{1}|\lesssim \|\varphi\|_{L^{\infty}_{x}}\sum_{p\geq 0}\frac{1}{2^{p}}\left(\mathcal{B}_{2j-2}*\left|J^{s+\alpha-2}u\right| \right)(x). \end{split} \end{equation} Since $\mathcal{B}_{2j-2}$ is smooth away from zero, the following identity holds: for $y\in \mathbb{R}^{n}-\{0\},$ \begin{equation*} (\partial_{y_{1}}\mathcal{B}_{2j-2})(y)=-\frac{y_{1}}{2(j-2)}\mathcal{B}_{2j-4}(y),\quad \mbox{whenever}\quad j>2. \end{equation*} If we set \begin{equation*} \widetilde{\mathcal{B}_{j}}(y):=y_{1}\mathcal{B}_{j-2}(y), \quad \mbox{for}\quad j>2, \end{equation*} then \begin{equation}\label{p2} \begin{split} | II_{2}|\lesssim\frac{\|\varphi\|_{L^{\infty}_{x}}}{j-2}\sum_{p\geq 0}\left(\chi_{p}\widetilde{\mathcal{B}_{2j}}*\left|J^{s+\alpha-2}u\right|\right)(x), \end{split} \end{equation} and \begin{equation}\label{p3} \begin{split} | II_{3}|\leq\sum_{p\geq 0}\left(\left(\chi_{p}\mathcal{B}_{2j-2}\right)*\left|\partial_{x_{1}}\varphi J^{s+\alpha-2}u\right|\right)(x), \quad \forall x\in\mathbb{R}^{n}. 
\end{split} \end{equation} Finally, gathering \eqref{p1}, \eqref{p2} and \eqref{p3}, and taking into consideration \eqref{sum1}, we get \begin{equation*} \begin{split} \| II\|_{L^{2}_{x}}&\lesssim \|\varphi\|_{L^{\infty}_{x}}\left\|\mathcal{B}_{2j-2}*\left|J^{s+\alpha-2}u\right|\right\|_{L^{2}_{x}}+ \frac{\|\varphi\|_{L^{\infty}_{x}}}{2(j-2)} \left\|\widetilde{\mathcal{B}_{2j}}*\,\left|J^{s+\alpha-2}u\right| \,\right\|_{L^{2}_{x}}\\ &\quad +\left\|\mathcal{B}_{2j-2}*\left|\partial_{x_{1}}\varphi J^{s+\alpha-2}u\right|\right\|_{L^{2}_{x}}, \end{split} \end{equation*} which by Young's inequality allows us to obtain the bound \begin{equation*} \begin{split} \int_{0}^{T} \| II(t)\|_{L^{2}_{x}}\, dt&\lesssim\int_{0}^{T}\left\{ \|\varphi\|_{L^{\infty}_{x}}\|J^{s}u(t)\|_{L^{2}_{x}}+\frac{\|\nabla_{x}\varphi\|_{L^{\infty}_{x}}}{2(j-2)} \left\|\widetilde{\mathcal{B}_{2j}}\right\|_{L^{1}}\|J^{s}u(t)\|_{L^{2}_{x}}\right\}\, dt\\ &\quad +T\left\|\partial_{x_{1}}\varphi\right\|_{L^{\infty}_{x}}\|u\|_{L^{\infty}_{T}H^{s}_{x}}\\ &\lesssim_{n,T}\|u\|_{L^{\infty}_{T}H^{s}_{x}}\|\nabla\varphi\|_{W^{1,\infty}_{x}}, \end{split} \end{equation*} where we have used that \begin{equation*} \left\|\widetilde{\mathcal{B}_{2j}}\right\|_{L^{1}}\approx_{n} \frac{(j-1)(j-2)}{j^{\frac{3}{2}}},\quad \mbox{whenever }\, j>2. \end{equation*} \begin{flushleft} {\sc \underline{Sub case: $j=1+\frac{n}{2}:$}} \end{flushleft} This case is quite similar to the sub-case $n=2$ above, so for the sake of brevity we omit the details. \begin{flushleft} {\sc \underline{Sub case: $j<1+\frac{n}{2}:$}} \end{flushleft} We start by specifying the implicit constant in inequality (c) in Lemma \ref{b2}. 
More precisely, for any multi-index $\beta\in(\mathbb{N}_{0})^{n}$ with $|\beta|=1,$ the following identity holds for $y\in\mathbb{R}^{n}\setminus \{0\},$ \begin{equation}\label{ineq1.1} \begin{split} \left(\partial_{y}^{\beta}\mathcal{B}_{2j-2}\right)(y)&=-\frac{y^{\beta}}{|y|}\left(\mathcal{B}_{2j-2}(y)+\gamma_{j}e^{-|y|}\int_{0}^{\infty}e^{-s|y|}\, s\left(s+\frac{s^{2}}{2}\right)^{\frac{n-(2j-2)-1}{2}}\,ds\right) \end{split} \end{equation} where \begin{equation*} \gamma_{j}:=\frac{1}{(2\pi)^{\frac{n-1}{2}}2^{j-1}\Gamma(j-1)\Gamma\left(\frac{n-2j+3}{2}\right)}. \end{equation*} Therefore, a rough upper bound for \eqref{ineq1.1} is \begin{equation}\label{bound1} \begin{split} &\left| \left(\partial_{y}^{\beta}\mathcal{B}_{2j-2}\right)(y)\right|\\ &\leq \mathcal{B}_{2j-2}(y)+e^{-|y|}\gamma_{j}\left(\frac{3}{2}\right)^{\frac{n+1}{2}}\left(\left(\frac{2}{3}\right)^{j}+2^{j}\frac{\Gamma(n-2j+3)}{|y|^{n-2j+3}}\right),\, y\in\mathbb{R}^{n}\setminus \{0\}. \end{split} \end{equation} Now, we shall pay attention to the terms on the right-hand side above. By Legendre's\footnote{ For $z\in\mathbb{C}$ the \emph{Legendre duplication formula} for the Gamma function is given by $$ \sqrt{\pi}\Gamma(2z)=2^{2z-1}\Gamma(z)\Gamma\left(z+\frac{1}{2}\right).$$} duplication formula for the Gamma function we obtain \begin{equation*} \sup_{j\gg 1}\left\{\gamma_{j}\left(\frac{2}{3}\right)^{j}, \gamma_{j}2^{j}\Gamma(n-2j+3)\right\}\lesssim_{n} 1. 
\end{equation*} Next, we turn our attention to $I,$ \begin{equation*} \begin{split} I&= \lim_{\epsilon\downarrow 0}\int_{B(0;2)\setminus B(0;\epsilon)}\rho\left(\frac{y}{2}\right)\mathcal{B}_{2j-2}(y)\left(\varphi(x-y)-\varphi(x)\right)\left(\partial_{y_{1}}J^{s+\alpha-2}u\right)(x-y)\,dy\\ &=-\frac{1}{2}\int_{B(0;2)}(\partial_{y_{1}}\rho)\left(\frac{y}{2}\right)\mathcal{B}_{2j-2}(y)\left(\varphi(x-y)-\varphi(x)\right)\left(J^{s+\alpha-2}u\right)(x-y)\,dy\\ &\quad - \lim_{\epsilon\downarrow 0}\int_{B(0;2)\setminus B(0;\epsilon)}\rho\left(\frac{y}{2}\right)(\partial_{y_{1}}\mathcal{B}_{2j-2})(y)\left(\varphi(x-y)-\varphi(x)\right)\left(J^{s+\alpha-2}u\right)(x-y)\,dy\\ &\quad +\int_{B(0;2)}\rho\left(\frac{y}{2}\right)\mathcal{B}_{2j-2}(y)(\partial_{y_{1}}\varphi)(x-y)\left(J^{s+\alpha-2}u\right)(x-y)\,dy\\ &\quad + \lim_{\epsilon\downarrow 0}\int_{\partial B(0;\epsilon)}\vartheta_{1} \rho\left(\frac{y}{2}\right)\mathcal{B}_{2j-2}(y)\left(\varphi(x-y)-\varphi(x)\right)\left(J^{s+\alpha-2}u\right)(x-y)\,dS_{y}\\ &=I_{1}+I_{2}+I_{3}+I_{4}. \end{split} \end{equation*} After incorporating the bound obtained in \eqref{bound1}, combined with the arguments already described in the case $n=2,$ we obtain that $I_{4}=0,$ \begin{equation*} \int_{0}^{T} \max\left( \|I_{1}(t)\|_{L^{2}_{x}},\, \|I_{2}(t)\|_{L^{2}_{x}},\, \|I_{3}(t)\|_{L^{2}_{x}}\right)\, dt \lesssim_{T} \|u\|_{L^{\infty}_{T}H^{s}_{x}}\|\varphi\|_{W^{1,\infty}_{x}}, \end{equation*} and \begin{equation*} \int_{0}^{T} \max\left( \|II_{1}(t)\|_{L^{2}_{x}},\, \|II_{2}(t)\|_{L^{2}_{x}},\, \|II_{3}(t)\|_{L^{2}_{x}}\right)\, dt\lesssim_{T} \|u\|_{L^{\infty}_{T}H^{s}_{x}}\|\varphi\|_{W^{1,\infty}_{x}}. 
\end{equation*} In summary, we have obtained \begin{equation*} \begin{split} \int_{0}^{T} |\Theta_{1,2}(t)|\, dt&\lesssim_{\alpha,n,T,\nu}\|u\|_{L^{\infty}_{T}H^{s}_{x}}\|\varphi\|_{W^{1,\infty}_{x}}\left(\sum_{j=2}^{\infty}\frac{1}{j^{\alpha+1}}\right)\\ &\lesssim_{\alpha} \|u\|_{L^{\infty}_{T}H^{s}_{x}}\|\varphi\|_{W^{1,\infty}_{x}}, \end{split} \end{equation*} for any $\alpha\in (0,2).$ \begin{rem} \textit{Notice that the last inequality above is quite instructive for understanding the effects of the dispersion on the solutions, since the argument above suggests that for $\alpha\leq 0$ the pursued smoothing effect does not hold.} \end{rem} Finally, by inequality \eqref{KPDESI} \begin{equation*} \begin{split} \Theta_{2}(t)&=\int_{\mathbb{R}^{n}} J^{s}\left(u\partial_{x_{1}}u\right) J^{s}u \varphi\,dx\\ &=\int_{\mathbb{R}^{n}} J^{s}u \varphi \left[J^{s}; u\right]\partial_{x_{1}}u\,dx-\frac{1}{2}\int_{\mathbb{R}^{n}}\left(J^{s}u\right)^{2}\left(\partial_{x_{1}}u\varphi+u\partial_{x_{1}}\varphi\right)\,dx\\ &\lesssim \|\varphi\|_{L^{\infty}_{x}} \left\|J^{s}u(t)\right\|_{L^{2}_{x}}^{2}\left\|\nabla u(t)\right\|_{L^{\infty}_{x}}+\left\|J^{s}u(t)\right\|_{L^{2}_{x}}^{2}\left(\|\partial_{x_{1}} u(t)\|_{L^{\infty}_{x}}+\|\partial_{x_{1}}\varphi\|_{L^{\infty}_{x}}\|u(t)\|_{L^{\infty}_{x}}\right). 
\end{split} \end{equation*} Finally, gathering the estimates above and integrating the expression \eqref{energy} in the time variable, we obtain \begin{equation*} \begin{split} &\int_{0}^{T}\int_{\mathbb{R}^{n}}\left(\left(J^{s+\frac{\alpha}{2}}u(x,t)\right)^{2}+\left(\partial_{x_{1}}J^{s+\frac{\alpha-2}{2}}u(x,t)\right)^{2}\right)\partial_{x_{1}}\varphi\,dx\,dt\\ &\lesssim_{n,\alpha,\nu,T} \left(1+T+\|\nabla u\|_{L^{1}_{T}L^{\infty}_{x}}+T\|u\|_{L^{\infty}_{T}H^{r}_{x}}\right)\|u\|_{L^{\infty}_{T}H^{s}_{x}}^{2}, \end{split} \end{equation*} whenever $r>\frac{n}{2}.$ \end{proof} \section{Proof of Theorem \ref{zk9}}\label{seccion5} In this section we focus our attention on providing some immediate applications of the smoothing effect deduced in the previous section. In this sense, we prove that solutions of the IVP \eqref{zk4} satisfy the principle of propagation of regularity for dispersive equations. More precisely, we prove Theorem \ref{zk9}, whose method of proof follows the ideas from \cite{ILP1}, \cite{KLPV}, \cite{AM1}, and \cite{AMZK}. For the proof we consider the following standard notation, which has proven to be versatile. 
In detail, for $\epsilon>0$ and $\tau\geq 5\epsilon$ we consider the following families of functions \begin{equation*} \chi_{\epsilon, \tau},\widetilde{\phi_{\epsilon,\tau}}, \phi_{\epsilon,\tau},\psi_{\epsilon}\in C^{\infty}(\mathbb{R}), \end{equation*} satisfying the conditions indicated below: \begin{itemize} \item[(i)] $ \chi_{\epsilon, \tau}(x)= \begin{cases} 1 & x\geq \tau \\ 0 & x\leq \epsilon \end{cases}, $ \item[(ii)] $\supp(\chi_{\epsilon, \tau}')\subset[\epsilon,\tau],$ \item[(iii)] $\chi_{\epsilon, \tau}'(x)\geq 0,$ \item[(iv)] $ \chi_{\epsilon, \tau}'(x)\geq\frac{1}{10(\tau-\epsilon)}\mathbb{1}_{[2\epsilon,\tau-2\epsilon]}(x),$ \item[(v)] $\supp\left(\widetilde{\phi_{\epsilon,\tau}}\right),\supp(\phi_{\epsilon,\tau})\subset \left[\frac{\epsilon}{4},\tau\right],$ \item[(vi)] $\phi_{\epsilon,\tau}(x)=\widetilde{\phi_{\epsilon,\tau}}(x)=1,\,\mbox{if}\quad x\in \left[\frac{\epsilon}{2},\epsilon\right],$ \item[(vii)] $\supp(\psi_{\epsilon})\subset \left(-\infty,\frac{\epsilon}{4}\right].$ \item[(viii)] For all $x\in\mathbb{R}$ the following quadratic partition of unity holds: $$ \chi_{\epsilon, \tau}^{2}(x)+\widetilde{\phi_{\epsilon,\tau}}^{2}(x)+\psi_{\epsilon}(x)=1,$$ \item[(ix)] similarly, for all $x\in\mathbb{R},$ $$ \chi_{\epsilon, \tau}(x)+\phi_{\epsilon,\tau}(x)+\psi_{\epsilon}(x)=1. $$ \end{itemize} For a more detailed construction of these families of weight functions see \cite{ILP1}. \begin{proof} In order not to overload the notation, the weight functions $\chi_{\epsilon, \tau},\phi_{\epsilon, \tau},\widetilde{\phi_{\epsilon,\tau}}$ and $\psi_{\epsilon}$ will be considered as functions of the variable $\nu\cdot x+\omega t;$ \emph{e.g.}, in the case of $\chi_{\epsilon, \tau}$ we understand it as \begin{equation} \chi_{\epsilon, \tau}(x,t):=\chi_{\epsilon, \tau}(\nu\cdot x+\omega t), \end{equation} and the dependence on $x,t$ will be suppressed. 
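For concreteness, a cutoff with the flavor of properties (i)-(iii) can be produced by mollifying a step function. The sketch below (our own illustration, using a crude Riemann-sum mollification; it is not the construction of \cite{ILP1}) builds a smooth nondecreasing `chi` that vanishes for $x\leq\epsilon$ and equals $1$ for $x\geq\tau$:

```python
import math

def make_chi(eps, tau):
    """Smooth nondecreasing cutoff: convolve the indicator of
    [eps + 2d, infinity) with a bump of width d, d = (tau - eps)/4.
    The result is 0 on (-inf, eps] and 1 on [tau, inf)."""
    d = (tau - eps) / 4.0

    def bump(t):  # smooth bump supported in (-d, d)
        return math.exp(-1.0 / (1.0 - (t / d) ** 2)) if abs(t) < d else 0.0

    m = 2000
    h = 2.0 * d / m
    nodes = [-d + (k + 0.5) * h for k in range(m)]
    z = sum(bump(t) for t in nodes) * h  # normalization: bump integrates to 1

    def chi(x):
        # (indicator * bump)(x) = integral of the bump over {t : x - t >= eps + 2d}
        return sum(bump(t) for t in nodes if x - t >= eps + 2.0 * d) * h / z

    return chi

chi = make_chi(eps=1.0, tau=5.0)  # tau >= 5 * eps, as in the text
print(chi(0.5), chi(3.0), chi(6.0))
```

Since the bump has width $d=(\tau-\epsilon)/4,$ the derivative of this `chi` is supported inside $[\epsilon+d,\epsilon+3d]\subset[\epsilon,\tau],$ consistent with (ii).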
Performing energy estimates allows us to obtain \begin{equation}\label{energy1} \begin{split} &\frac{d}{dt}\int_{\mathbb{R}^{n}}\left(J^{s}u\right)^{2}\chi_{\epsilon,\tau}^{2}\,dx\underbrace{-\frac{\omega }{2}\int_{\mathbb{R}^{n}}\left(J^{s}u\right)^{2}\chi_{\epsilon,\tau}\chi_{\epsilon,\tau}'\,dx}_{\Theta_{1}(t)}\underbrace{-\int_{\mathbb{R}^{n}}J^{s}u \partial_{x_{1}}(-\Delta)^{\frac{\alpha}{2}}J^{s}u \chi_{\epsilon,\tau}^{2}\,dx}_{\Theta_{2}(t)}\\ &\underbrace{+\int_{\mathbb{R}^{n}}J^{s}uJ^{s}(u\partial_{x_{1}}u)\chi^{2}_{\epsilon,\tau}\,dx}_{\Theta_{3}(t)}=0. \end{split} \end{equation} The proof follows from an inductive argument that we describe below. \begin{figure}[h] \begin{tikzcd}[column sep=large] J^{s}\arrow{r}\arrow{d} &J^{s+\frac{2-\alpha}{2}}\arrow{r}{\text{P.R}}\arrow{d} &J^{s+1}\arrow{d}\\ J^{s+\frac{\alpha}{2}}&J^{s+1}&J^{s+1+\frac{\alpha}{2}} \end{tikzcd} \caption{Description of the argument used to reach more regularity in steps of order $\alpha/2$ in the inductive process. The abbreviation P.R stands for \emph{propagated regularity}, and the downward arrows indicate the local gain of regularity at the corresponding step.}\label{figure1.1} \end{figure} The idea is to start by showing that the regularity of the solutions, close to the Sobolev index where local well-posedness holds, is propagated with infinite speed over the moving half-spaces. In this process, the smoothing effect is fundamental to estimate several terms arising when we perform the energy estimate \eqref{energy1}; \emph{e.g.}, to show that $\Theta_{1}\in L^{1}_{T}$ we require the extra regularity provided by the smoothing. Since the smoothness provided is too weak, we require two steps (compare with the ZK equation; see \cite{LPZK} and \cite{AMZK}) to reach one extra derivative. Figure \ref{figure1.1} describes the two-step inductive process. 
\begin{flushleft} {\sc Case : $s\in (s_{n},s_{n}+\frac{\alpha}{2})$} \end{flushleft} \begin{flushleft} {\sc \underline{Step 1} } \end{flushleft} Notice that \begin{equation}\label{teta1} \begin{split} \int_{0}^{T}|\Theta_{1}(t)|\, dt&\lesssim_{\nu}\int_{0}^{T}\int_{\mathbb{R}^{n}}(J^{s}u)^{2}\chi_{\epsilon, \tau}\chi_{\epsilon, \tau}'\, dx\, dt\\ &\lesssim \int_{0}^{T}\int_{\mathbb{R}^{n}}\mathbb{1}_{\mathcal{H}_{\{\epsilon-\omega t,\nu\}}\cap\mathcal{H}_{\{\tau-\omega t,\nu\}}^{c}}(J^{s}u)^{2}\, dx\, dt. \end{split} \end{equation} However, by Lemma \ref{main2} the solutions enjoy extra regularity on the channel $$\mathcal{H}_{\{\epsilon-\omega t,\nu\}}\cap\mathcal{H}_{\{\tau-\omega t,\nu\}}^{c},\quad \mbox{for}\, \epsilon>0,\, \tau>5\epsilon.$$ More precisely, we can choose $\varphi$ in Lemma \ref{main2} properly to obtain \begin{equation} \int_{0}^{T}\int_{\mathcal{H}_{\{\epsilon-\omega t,\nu\}}\cap\mathcal{H}_{\{\tau-\omega t,\nu\}}^{c}}\left(J^{s+\frac{\alpha}{2}}u\right)^{2}\,dx\, dt\lesssim c. \end{equation} By virtue of Lemma \ref{zk37}, \emph{mutatis mutandis} in the weight functions, we get \begin{equation}\label{smoot} \int_{0}^{T}\int_{\mathcal{H}_{\{\epsilon-\omega t,\nu\}}\cap\mathcal{H}_{\{\tau-\omega t,\nu\}}^{c}}\left(J^{r}u\right)^{2}\,dx\, dt\lesssim c, \end{equation} for $r\in(0,s_{n}+\frac{\alpha}{2}),$ $\epsilon>0$ and $\tau\geq 5\epsilon.$ Hence $\Theta_{1}\in L^{1}_{T}.$ The arguments in \eqref{decomposition}-\eqref{bo3} allow us to rewrite $\Theta_{2}$ as follows: \begin{equation}\label{comm1.1} \begin{split} \Theta_{2}(t)&=\frac{1}{2}\int_{\mathbb{R}^{n}} J^{s}u \left[J^{\alpha}\partial_{x_{1}}; \chi_{\epsilon,\tau}^{2}\right]J^{s}u\,dx+\frac{1}{2}\int_{\mathbb{R}^{n}} J^{s}u \left[\mathcal{K}_{\alpha}\partial_{x_{1}}; \chi_{\epsilon,\tau}^{2}\right]J^{s}u\,dx\\ &=\Theta_{2,1}(t)+\Theta_{2,2}(t). 
\end{split} \end{equation} Next, we consider the operator \begin{equation}\label{comm1.2} {c_{\alpha}}(x,D):=\left[J^{\alpha}\partial_{x_{1}}; \chi_{\epsilon,\tau}^{2}\right]. \end{equation} Thus, by using pseudo-differential calculus there exist operators $p_{\alpha-j}(x,D),$ \linebreak $\,j\in \{1,2,\dots,m\},$ for some $m\in \mathbb{N},$ such that \begin{equation}\label{comm1.3} \begin{split} c_{\alpha}(x,D)=p_{\alpha}(x,D)+p_{\alpha-1}(x,D)+\dots+p_{\alpha-m}(x,D)+r_{\alpha-m-1}(x,D), \end{split} \end{equation} where $p_{\alpha-j}\in \mathrm{OP}\mathbb{S}^{\alpha-j}$ and $r_{\alpha-m-1}\in \mathrm{OP}\mathbb{S}^{\alpha-1-m}.$ The representation above presents two main difficulties. The first one consists in describing accurately the terms $p_{\alpha-j}(x,D)$ for each $j.$ The second one consists in determining $m$ adequately. We will show later that it is only required to estimate $p_{\alpha}(x,D)$ and $p_{\alpha-1}(x,D).$ According to \eqref{comono1}-\eqref{energy2.1.1} \begin{equation}\label{disperive} p_{\alpha}(x,D)=\partial_{x_{1}}(\chi_{\epsilon, \tau}^{2})J^{\alpha}-\alpha\partial_{x_{1}}(\chi_{\epsilon, \tau}^{2}) J^{\alpha-2}\partial_{x_{1}}^{2}-\alpha\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\partial_{x}^{\beta}(\chi_{\epsilon, \tau}^{2})J^{\alpha-2}\partial_{x}^{\beta}\partial_{x_{1}}, \end{equation} and \begin{equation*} \begin{split} &p_{\alpha-1}(x,D) =-\alpha\sum_{|\beta|=1}\partial_{x_{1}}\partial_{x}^{\beta}(\chi_{\epsilon, \tau}^{2})\partial_{x_{1}}J^{\alpha-2}-\frac{\alpha}{2\pi}\sum_{|\beta|=1}\partial_{x}^{\beta}\partial_{x_{1}}(\chi_{\epsilon, \tau}^{2})\partial_{x}^{\beta}J^{\alpha-2}\\ &-\frac{\alpha}{2\pi}\sum_{|\beta|=1}\partial_{x}^{2\beta}(\chi_{\epsilon, \tau}^{2})\partial_{x_{1}}J^{\alpha-2} +\frac{\alpha(\alpha-2)}{2\pi}\sum_{|\beta_{2}|=1}\sum_{|\beta_{1}|=1}\partial_{x}^{\beta_{2}}\partial_{x}^{\beta_{1}}(\chi_{\epsilon, \tau}^{2})\partial_{x_{1}}\partial_{x}^{\beta_{2}}\partial_{x}^{\beta_{1}}J^{\alpha-4}. 
\end{split} \end{equation*} The remainder terms are obtained by using pseudo-differential calculus, and they can be rewritten as \begin{equation}\label{decomp1.2} \begin{split} p_{\alpha-j}(x,D)=\sum_{|\beta|=j}c_{\beta,j}\partial_{x}^{\beta}(\chi_{\epsilon,\tau}^{2})\Psi_{\beta,j}J^{\alpha-j},\quad j\geq 2, \end{split} \end{equation} where $\Psi_{\beta,j}\in \mathrm{OP}\mathbb{S}^{0}$ for $j\in \{2,\dots,m\}.$ We choose $m$ to be \begin{equation*} m=\lceil2s+\alpha-1-s_{n}\rceil. \end{equation*} Thus, \begin{equation*} \begin{split} \int_{0}^{T}|\Theta_{2,1,m+1}(t)|\, dt& \leq T\|u_{0}\|_{L^{2}_{x}}\left\|J^{s}r_{\alpha-m-1}(x,D)J^{s}u\right\|_{L^{\infty}_{T}L^{2}_{x}}\\ &\lesssim_{\epsilon,\tau,\alpha,n}T\|u_{0}\|_{L^{2}_{x}}\|u\|_{L^{\infty}_{T}H^{s_{n}+}_{x}}. \end{split} \end{equation*} Replacing \eqref{disperive} into $\Theta_{2,1}$ we obtain \begin{equation}\label{disppart} \begin{split} \Theta_{2,1}(t) &=\frac{1}{2}\int_{\mathbb{R}^{n}}J^{s}uJ^{s+\alpha}u\partial_{x_{1}}(\chi_{\epsilon,\tau}^{2})\,dx-\frac{\alpha}{2}\int_{\mathbb{R}^{n}}J^{s}u J^{s+\alpha-2}\partial_{x_{1}}^{2}u\partial_{x_{1}}(\chi_{\epsilon,\tau}^{2})\,dx\\ &\quad -\frac{\alpha}{2}\sum_{\mathclap{\substack{|\beta|=1\\\beta\neq \mathrm{e}_{1}}}}\,\int_{\mathbb{R}^{n}}J^{s}uJ^{s+\alpha-2}\partial_{x_{1}}\partial_{x}^{\beta}u \partial_{x}^{\beta}(\chi_{\epsilon,\tau}^{2})\,dx+\frac{1}{2}\sum_{j=2}^{m}\int_{\mathbb{R}^{n}}J^{s}up_{\alpha-j}(x,D)J^{s}u\,dx\\ &\quad +\frac{1}{2}\int_{\mathbb{R}^{n}}J^{s}ur_{\alpha-m-1}(x,D)J^{s}u\, dx\\ &=\Theta_{2,1,1}(t)+\Theta_{2,1,2}(t)+\Theta_{2,1,3}(t)+\sum_{j=2}^{m}\Theta_{2,1,j+2}(t)\\ &\quad +\frac{1}{2}\int_{\mathbb{R}^{n}}J^{s}ur_{\alpha-m-1}(x,D)J^{s}u\, dx. 
\end{split} \end{equation} By using an argument similar to the one described in \eqref{r1}-\eqref{eq2}, there exists $r_{\frac{\alpha}{2}-2}(x,D)\in\mathrm{OP}\mathbb{S}^{\frac{\alpha}{2}-2}$ such that \begin{equation}\label{primersmoo} \begin{split} \Theta_{2,1,1}(t)&= \nu_{1}\int_{\mathbb{R}^{n}}\left(J^{s+\frac{\alpha}{2}}u\right)^{2}\chi_{\epsilon,\tau}\chi_{\epsilon,\tau}'\,dx+\frac{1}{2}\int_{\mathbb{R}^{n}}J^{s+\frac{\alpha}{2}}u\left[J^{\frac{\alpha}{2}};\partial_{x_{1}}(\chi_{\epsilon,\tau}^{2})\right]J^{s}u\,dx\\ &=\Theta_{2,1,1,1}(t)+\Theta_{2,1,1,2}(t). \end{split} \end{equation} The term containing the commutator expression is considerably more complicated to handle since, at first sight, an upper bound would seem to require more regularity. However, we will show that this is not the case: several cancellations allow us to close the argument without any additional assumption. First, we rewrite $\Theta_{2,1,1,2}$ as follows \begin{equation}\label{commudecomp1..1} \begin{split} \Theta_{2,1,1,2}(t)&=\frac{1}{2}\int_{\mathbb{R}^{n}}J^{s}u\left[J^{\alpha}; \partial_{x_{1}}\left(\chi_{\epsilon, \tau}^{2}\right)\right]J^{s}u\, dx\\ &\quad -\frac{1}{2}\int_{\mathbb{R}^{n}}J^{s}u\left[ J^{\frac{\alpha}{2}}; \partial_{x_{1}}\left(\chi_{\epsilon, \tau}^{2}\right)\right] J^{s+\frac{\alpha}{2}}u\, dx\\ &=\Lambda_{1}(t)+\Lambda_{2}(t). \end{split} \end{equation} We focus our attention on $\Lambda_{1}$.
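Before proceeding, let us record the elementary operator identity behind \eqref{commudecomp1..1}. Writing $a$ for the operator of multiplication by $\partial_{x_{1}}(\chi_{\epsilon,\tau}^{2}),$
\begin{equation*}
J^{\frac{\alpha}{2}}\left[J^{\frac{\alpha}{2}};a\right]=J^{\alpha}a-J^{\frac{\alpha}{2}}aJ^{\frac{\alpha}{2}}=\left[J^{\alpha};a\right]-\left[J^{\frac{\alpha}{2}};a\right]J^{\frac{\alpha}{2}},
\end{equation*}
so that, by the self-adjointness of $J^{\frac{\alpha}{2}}$,
\begin{equation*}
\int_{\mathbb{R}^{n}}J^{s+\frac{\alpha}{2}}u\left[J^{\frac{\alpha}{2}};a\right]J^{s}u\,dx=\int_{\mathbb{R}^{n}}J^{s}u\left[J^{\alpha};a\right]J^{s}u\,dx-\int_{\mathbb{R}^{n}}J^{s}u\left[J^{\frac{\alpha}{2}};a\right]J^{s+\frac{\alpha}{2}}u\,dx.
\end{equation*}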
In the same spirit as the decomposition used in \eqref{decomp1.2}, it is clear that for some $m_{1}\in\mathbb{N}$ there exist operators\\ $q_{\alpha-1}(x,D),q_{\alpha-2}(x,D), \dots, q_{\alpha-m_{1}}(x,D),r_{\alpha-m_{1}-1}(x,D)$ such that \begin{equation}\label{commudecomp1..2} \left[J^{\alpha}; \partial_{x_{1}}\left(\chi_{\epsilon, \tau}^{2}\right)\right]=q_{\alpha-1}(x,D)+q_{\alpha-2}(x,D)+\dots+q_{\alpha-m_{1}}(x,D)+ r_{\alpha-m_{1}-1}(x,D), \end{equation} where \begin{equation}\label{commudecomp1..3} q_{\alpha-j}(x,D)=\sum_{|\beta|=j}c_{\beta,j}\partial_{x}^{\beta}\partial_{x_{1}}(\chi_{\epsilon,\tau}^{2})\Psi_{\beta,j}J^{\alpha-j},\quad j\geq 1,\quad r_{\alpha-m_{1}-1}\in \mathrm{OP}\mathbb{S}^{\alpha-m_{1}-1}, \end{equation} and $\Psi_{\beta,j}\in\mathrm{OP}\mathbb{S}^{0}$ for all $\beta.$ Hence, \begin{equation*} \begin{split} \Lambda_{1}(t)&=\frac{1}{2}\sum_{j=1}^{m_{1}}\int_{\mathbb{R}^{n}}J^{s}uq_{\alpha-j}(x,D)J^{s}u\, dx+\frac{1}{2}\int_{\mathbb{R}^{n}}J^{s}u r_{\alpha-m_{1}-1}(x,D)J^{s}u\, dx\\ &=\sum_{j=1}^{m_{1}}\Lambda_{1,j}(t)+\frac{1}{2}\int_{\mathbb{R}^{n}}J^{s}u r_{\alpha-m_{1}-1}(x,D)J^{s}u\, dx. \end{split} \end{equation*} A straightforward calculation shows that \begin{equation}\label{com1} q_{\alpha-{1}}(x,D)=-\frac{\alpha}{2}\sum_{|\beta|=1}\partial_{x}^{\beta}\partial_{x_{1}}\left(\chi_{\epsilon, \tau}^{2}\right)J^{\alpha-2}\partial_{x}^{\beta}, \end{equation} which, after being replaced into \eqref{commudecomp1..1}, yields \begin{equation*} \begin{split} \Lambda_{1,1}(t) &=-\frac{\alpha}{2}\sum_{|\beta|=1}\int_{\mathbb{R}^{n}}J^{s}u J^{s+\alpha-2}\partial_{x}^{\beta}u \partial_{x}^{\beta}\partial_{x_{1}}(\chi_{\epsilon, \tau}^{2})\, dx\\ &=-\frac{\alpha}{2}\sum_{|\beta|=1}\int_{\mathbb{R}^{n}}J^{s+\frac{\alpha-2}{2}}u J^{s+\frac{\alpha-2}{2}}\partial_{x}^{\beta}u \partial_{x}^{\beta}\partial_{x_{1}}(\chi_{\epsilon, \tau}^{2})\, dx\\ &\quad -\frac{\alpha}{2}\sum_{|\beta|=1}\int_{\mathbb{R}^{n}}J^{s+\frac{\alpha-2}{2}}u\left[J^{\frac{2-\alpha}{2}};
\partial_{x}^{\beta}\partial_{x_{1}}(\chi_{\epsilon, \tau}^{2})\right] J^{s+\frac{\alpha-2}{2}}\partial_{x}^{\beta}u \, dx\\ &=\frac{\alpha}{4}\sum_{|\beta|=1}\int_{\mathbb{R}^{n}}\left(J^{s+\frac{\alpha-2}{2}}u\right)^{2} \partial_{x}^{2\beta}\partial_{x_{1}}(\chi_{\epsilon, \tau}^{2})\, dx\\ &\quad -\frac{\alpha}{2}\sum_{|\beta|=1}\int_{\mathbb{R}^{n}}J^{s+\frac{\alpha-2}{2}}u\left[J^{\frac{2-\alpha}{2}}; \partial_{x}^{\beta}\partial_{x_{1}}(\chi_{\epsilon, \tau}^{2})\right] J^{s+\frac{\alpha-2}{2}}\partial_{x}^{\beta}u \, dx\\ &=\Lambda_{1,1,1}(t)+\Lambda_{1,1,2}(t). \end{split} \end{equation*} From \eqref{smoot} we obtain, after fixing properly $\epsilon$ and $\tau,$ that \begin{equation*} \begin{split} \int_{0}^{T}|\Lambda_{1,1,1}(t)|\, dt <\infty. \end{split} \end{equation*} In the case of $\Lambda_{1,1,2}$ we avoid a new commutator decomposition as follows \begin{equation*} \begin{split} &\Lambda_{1,1,2}(t)=\\ &-\frac{\alpha}{2}\sum_{|\beta|=1}\left\{\int_{\mathbb{R}^{n}}J^{s+\frac{\alpha-2}{2}}(u\chi_{\epsilon, \tau})\left[J^{\frac{2-\alpha}{2}}; \partial_{x}^{\beta}\partial_{x_{1}}(\chi_{\epsilon, \tau}^{2})\right] J^{s+\frac{\alpha-2}{2}}\partial_{x}^{\beta}(u\chi_{\epsilon, \tau}+u\phi_{\epsilon, \tau}+u\psi_{\epsilon}) \, dx\right.\\ &\quad \left.+\int_{\mathbb{R}^{n}}J^{s+\frac{\alpha-2}{2}}(u\phi_{\epsilon, \tau})\left[J^{\frac{2-\alpha}{2}}; \partial_{x}^{\beta}\partial_{x_{1}}(\chi_{\epsilon, \tau}^{2})\right] J^{s+\frac{\alpha-2}{2}}\partial_{x}^{\beta}(u\chi_{\epsilon, \tau}+u\phi_{\epsilon, \tau}+u\psi_{\epsilon}) \, dx\right.\\ &\quad \left.+\int_{\mathbb{R}^{n}}J^{s+\frac{\alpha-2}{2}}(u\psi_{\epsilon})\left[J^{\frac{2-\alpha}{2}}; \partial_{x}^{\beta}\partial_{x_{1}}(\chi_{\epsilon, \tau}^{2})\right] J^{s+\frac{\alpha-2}{2}}\partial_{x}^{\beta}(u\chi_{\epsilon, \tau}+u\phi_{\epsilon, \tau}+u\psi_{\epsilon}) \, dx\right\}. \end{split} \end{equation*} In virtue of Theorem \ref{continuity} and Lemma \ref{zk19} we obtain \begin{equation*} \begin{split} | \Lambda_{1,1,2}(t)|&\lesssim \|J^{s}(u\chi_{\epsilon,\tau})\|_{L^{2}_{x}}^{2}+\left\|J^{s}(u\phi_{\epsilon,\tau})\right\|_{L^{2}_{x}}^{2}+\|u_{0}\|_{L^{2}_{x}}^{2}. \end{split} \end{equation*} Note that \begin{equation}\label{partition1.1} J^{s}(u\chi_{\epsilon, \tau})=\chi_{\epsilon, \tau}J^{s}u+[J^{s}; \chi_{\epsilon, \tau}](u\chi_{\epsilon, \tau}+u\phi_{\epsilon, \tau}+u\psi_{\epsilon}), \end{equation} where the first term in the r.h.s. is the quantity to estimate after applying Gronwall's inequality. The remaining terms are of order $s-1$ and these are estimated by using \eqref{smoot}, Lemma \ref{lemm} and Lemma \ref{zk19} after integrating in time. The terms $\Lambda_{1,j}$ with $j>1$ are easily handled since the regularity required for such terms is less than $s,$ and therefore, after integrating in time, the inequality \eqref{smoot} is the key part. More precisely, if we omit the constants in front and replace \eqref{commudecomp1..3}, we obtain for each $j\in\{2,3,\dots,m_{1}\}$ \begin{equation*} \begin{split} \Lambda_{1,j}(t)= \sum_{|\beta|=j}\int_{\mathbb{R}^{n}}J^{s}(u\chi_{\epsilon, \tau}+u\phi_{\epsilon, \tau}+u\psi_{\epsilon})\partial_{x}^{\beta}\partial_{x_{1}}(\chi_{\epsilon,\tau}^{2})\Psi_{\beta,j}J^{\alpha-j+s}(u\chi_{\epsilon, \tau}+u\phi_{\epsilon, \tau}+u\psi_{\epsilon})\, dx. \end{split} \end{equation*} Thus, combining Lemma \ref{lem1} and Theorem \ref{continuity} produces \begin{equation*} | \Lambda_{1,j}(t)|\lesssim \|J^{s}(u\chi_{\epsilon,\tau})\|_{L^{2}_{x}}^{2}+\left\|J^{s}(u\phi_{\epsilon,\tau})\right\|_{L^{2}_{x}}^{2}+\|u_{0}\|_{L^{2}_{x}}^{2}, \end{equation*} for $j=2,3,\dots, m_{1}.$ At this point we apply \eqref{partition1.1} as we did above to bound $\|J^{s}(u\chi_{\epsilon, \tau})\|_{L^{2}_{x}},$ and for $\|J^{s}(u\phi_{\epsilon, \tau})\|_{L^{2}_{x}}$ it is only required to combine Lemma \ref{lemm} together with \eqref{smoot}.
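Let us schematically record how such bounds close (the function $F$ below is merely a bookkeeping device collecting the contributions that have already been shown to be integrable in time): setting $y(t):=\|\chi_{\epsilon,\tau}J^{s}u(t)\|_{L^{2}_{x}}^{2},$ the estimates collected so far combine with the energy identity into a differential inequality of the form
\begin{equation*}
y'(t)\lesssim y(t)+F(t),\qquad F\in L^{1}((0,T)),
\end{equation*}
where $F$ gathers the localized lower-order terms and the quantities controlled by \eqref{smoot}; Gronwall's inequality then yields
\begin{equation*}
\sup_{0<t<T}y(t)\lesssim \left(y(0)+\|F\|_{L^{1}_{T}}\right)e^{cT}.
\end{equation*}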
In addition, \begin{equation*} \begin{split} \frac{1}{2}\int_{\mathbb{R}^{n}}J^{s}u r_{\alpha-m_{1}-1}(x,D)J^{s}u\, dx&=\frac{1}{2}\int_{\mathbb{R}^{n}}uJ^{s}r_{\alpha-m_{1}-1}(x,D)J^{s}u\, dx, \end{split} \end{equation*} which implies, after setting \begin{equation*} m_{1}= \left\lceil 2s+\alpha-1-s_{n}\right\rceil, \end{equation*} that \begin{equation*} \int_{0}^{T} \left|\frac{1}{2}\int_{\mathbb{R}^{n}}uJ^{s}r_{\alpha-m_{1}-1}(x,D)J^{s}u\, dx\, dt \right|\lesssim T\|u_{0}\|_{L^{2}_{x}}\|u\|_{L^{\infty}_{T}H^{s_{n}+}_{x}}<\infty, \end{equation*} where we have used Theorem \ref{continuity} in the last inequality above. A quite similar argument applies to $\Lambda_{2}$ in \eqref{commudecomp1..1}, although for the sake of brevity we omit the details. An idea quite similar to the one used to bound the term $\Lambda_{1}$ also applies for $\Theta_{2,1,j+2}$ for $j=2,3,\dots,m$ in \eqref{disppart}. Indeed, \begin{equation*} |\Theta_{2,1,j+2}(t)|\lesssim\|J^{s}(u\chi_{\epsilon,\tau})\|_{L^{2}_{x}}^{2}+\left\|J^{s}(u\phi_{\epsilon,\tau})\right\|_{L^{2}_{x}}^{2}+\|u_{0}\|_{L^{2}_{x}}^{2},\quad \mbox{for}\quad j=2,3,\dots, m. \end{equation*} Next we turn to $\Theta_{2,1,2},$ which is quite important since it contains part of the smoothing effect we desire to obtain. In the first place, we rewrite the term as \begin{equation*} \begin{split} \Theta_{2,1,2}(t)&=\frac{\alpha}{2}\int_{\mathbb{R}^{n}}\left(J^{s+\frac{\alpha-2}{2}}\partial_{x_{1}}u\right)^{2}\partial_{x_{1}}\left(\chi_{\epsilon, \tau}^{2}\right)\, dx\\ &\quad +\frac{\alpha}{2}\int_{\mathbb{R}^{n}}J^{s+\frac{\alpha-2}{2}}\partial_{x_{1}}u\left[J^{\frac{2-\alpha}{2}}; \partial_{x_{1}}(\chi_{\epsilon, \tau}^{2})\right] \partial_{x_{1}}J^{s}u\, dx\\ &=\Lambda_{3}(t)+\Lambda_{4}(t). \end{split} \end{equation*} Notice that $\Lambda_{3}$ is the term to be estimated after integrating in time (it contains the desired smoothing). On the other hand, $\Lambda_{4}$ does not contain terms that provide useful information in our analysis.
In fact, to provide upper bounds for $\Lambda_{4}$ we need to apply a decomposition of the commutator expression quite similar to that in \eqref{commudecomp1..1}, following the arguments used to bound $\Lambda_{1}.$ The expression of $\Theta_{2,1,3}$ contains several interactions that make up the smoothing effect. Nevertheless, we need to decouple such interactions to close the argument. In this sense, we claim that there exists $\lambda>0$ such that \begin{equation*} \begin{split} &\lambda\left(\int_{\mathbb{R}^{n}}\left(J^{s+\frac{\alpha}{2}}u\right)^{2}\partial_{x_{1}}\left(\chi_{\epsilon, \tau}^{2}\right)\, dx+\int_{\mathbb{R}^{n}}\left(J^{s+\frac{\alpha-2}{2}}\partial_{x_{1}}u\right)^{2}\partial_{x_{1}}\left(\chi_{\epsilon, \tau}^{2}\right)\, dx\right)\\ &\leq \Theta_{2,1,1,1}(t)+\Lambda_{3}(t)+ \Theta_{2,1,3}(t), \end{split} \end{equation*} whenever the following conditions hold: $\nu_{1}>0$ and \begin{equation}\label{cond11} 0< \sqrt{\nu_{2}^{2}+\nu_{3}^{2}+\dots+\nu_{n}^{2}}<\min\left\{ \frac{2\nu_{1}}{C\sqrt{\alpha(n-1)}},\frac{\nu_{1}(1+\alpha)}{\alpha\epsilon\sqrt{n-1}}\right\}, \end{equation} with $\epsilon$ satisfying \begin{equation}\label{cond12} 0<\epsilon<\frac{\nu_{1}}{|\overline{\nu}|\sqrt{n-1}}-\frac{\alpha\sqrt{n-1}|\overline{\nu}|}{4\nu_{1}}C^{2}, \end{equation} where $|\overline{\nu}|:=\sqrt{\nu_{2}^{2}+\nu_{3}^{2}+\dots+\nu_{n}^{2}}$ and $$C:=\inf_{f\in L^{2}(\mathbb{R}^{n}),f\neq 0}\frac{\|J^{-1}\partial_{x_{j}}f\|_{L^{2}}}{\|f\|_{L^{2}}},\quad j=2,3,\dots,n.$$ The reader can check that the proof is analogous to the one furnished in {\sc Claim 1} in the proof of Lemma \ref{main2} (see \eqref{claim1} for more details). Next, we consider \begin{equation*} \begin{split} \Theta_{2,2}(t)&= \frac{1}{2}\int_{\mathbb{R}^{n}} J^{s}(u\chi_{\epsilon,\tau}+u\phi_{\epsilon,\tau}+u\psi_{\epsilon}) \left[\mathcal{K}_{\alpha}\partial_{x_{1}}; \chi_{\epsilon,\tau}^{2}\right]J^{s}(u\chi_{\epsilon,\tau}+u\phi_{\epsilon,\tau}+u\psi_{\epsilon})\,dx.
\end{split} \end{equation*} An argument similar to the one used to handle \eqref{commutatrot1}, combined with Lemma \ref{lem1} and Corollary \ref{separated}, implies that \begin{equation}\label{kernelruin} \begin{split} |\Theta_{2,2}(t)| &\lesssim \left\{\|J^{s}(u\chi_{\epsilon,\tau})\|_{L^{2}_{x}}^{2}+\left\|J^{s}(u\phi_{\epsilon,\tau})\right\|_{L^{2}_{x}}^{2}+\|u_{0}\|_{L^{2}_{x}}^{2}\right\}. \end{split} \end{equation} Notice that \begin{equation*} J^{s}(u\chi_{\epsilon,\tau})=\chi_{\epsilon,\tau}J^{s}u+\left[J^{s};\chi_{\epsilon,\tau}\right] (u\chi_{\epsilon,\tau}+u\phi_{\epsilon,\tau}+u\psi_{\epsilon}), \end{equation*} where the first term in the r.h.s. above is the quantity to be estimated after taking the $L^{2}$-norm, and the remaining terms are of order $s-1.$ To control $\|J^{s}(u\phi_{\epsilon,\tau})\|_{L^{2}_{x}}$ we only require Lemma \ref{lemm} combined with \eqref{smoot}; we skip the details. In such a case we obtain \begin{equation*} \int_{0}^{T}\|J^{s}(u(\cdot, t)\phi_{\epsilon, \tau}(\cdot, t))\|_{L^{2}_{x}}^{2}\, dt<c, \end{equation*} for some positive constant $c.$ We decompose the nonlinear term as follows \begin{equation*} \begin{split} \Theta_{3}(t)&=-\int_{\mathbb{R}^{n}}\chi_{\epsilon,\tau}J^{s}u\, \left[J^{s}; \chi_{\epsilon,\tau}\right]u\partial_{x_{1}}u\, dx +\int_{\mathbb{R}^{n}}\chi_{\epsilon,\tau}J^{s}u\left[J^{s}; u\chi_{\epsilon,\tau}\right]\partial_{x_{1}}u\, dx\\ &\quad -\frac{1}{2}\int_{\mathbb{R}^{n}}(J^{s}u)^{2}\partial_{x_{1}}u\,\chi_{\epsilon,\tau}^{2}\, dx-\nu_{1}\int_{\mathbb{R}^{n}}u(J^{s}u)^{2}\chi_{\epsilon,\tau}\chi_{\epsilon,\tau}'\, dx\\ &=\Theta_{3,1}(t)+\Theta_{3,2}(t)+\Theta_{3,3}(t)+\Theta_{3,4}(t). \end{split} \end{equation*} The term $\Theta_{3,1}$ requires describing the commutator $\left[J^{s}; \chi_{\epsilon,\tau}\right].$ Obtaining such an expression has been a recurrent argument in this paper, so for the sake of brevity we only show the crucial steps.
Thus, if we set \begin{equation}\label{main} q_{s-1}(x,D):=\left[J^{s}; \chi_{\epsilon,\tau}\right], \end{equation} then $q_{s-1}\in\mathrm{OP}\mathbb{S}^{s-1},$ and its symbol admits the following decomposition \begin{equation*} \begin{split} q_{s-1}(x,\xi) &=\sum_{1\leq|\alpha|\leq l}\frac{(2\pi i)^{-|\alpha|}}{\alpha!}\left\{\partial^{\alpha}_{\xi}\left(\langle \xi\rangle^{s}\right)\partial_{x}^{\alpha}\left(\chi_{\epsilon,\tau}(x,t)\right)\right\}+\kappa_{s-l-1}(x,\xi)\\ &=\sum_{j=1}^{l}\sum_{|\alpha|= j}\frac{(2\pi i)^{-|\alpha|}}{\alpha!}\left\{\partial^{\alpha}_{\xi}\left(\langle \xi\rangle^{s}\right)\partial_{x}^{\alpha}\left(\chi_{\epsilon,\tau}(x,t)\right)\right\}+\kappa_{s-l-1}(x,\xi),\\ \end{split} \end{equation*} where we choose $l>\lceil s-2-\frac{n}{2}\rceil.$ Additionally, for multi-indices $\alpha,\beta$ with $\beta\leq \alpha$ we define \begin{equation*} \eta_{\alpha,\beta}(x,\xi):=\frac{(2\pi i\xi)^{\beta}}{\left(1+|\xi|^{2}\right)^{\frac{|\alpha|}{2}}},\qquad x,\xi\in\mathbb{R}^{n}. \end{equation*} Thus, $ \eta_{\alpha,\beta}\in \mathbb{S}^{|\beta|-|\alpha|}\subset\mathbb{S}^{0},$ and \begin{equation}\label{e11.1} \Psi_{\eta_{\alpha,\beta}}g(x):=\int_{\mathbb{R}^{n}}e^{2\pi ix\cdot \xi}\eta_{\alpha,\beta}(x,\xi)\widehat{g}(\xi)\,d\xi,\quad g\in \mathcal{S}(\mathbb{R}^{n}). \end{equation} With this at hand we rearrange the terms in the decomposition of the symbol $q_{s-1}$ to obtain \begin{equation*} \begin{split} \Psi_{q_{s-1}}f(x)=\sum_{j=1}^{l}\sum_{|\alpha|=j}\sum_{\beta\leq\alpha} \omega_{\alpha,\beta,\nu,s} \,\partial_{x}^{\alpha}\chi_{\epsilon, \tau}(x,t)\Psi_{\eta_{\alpha,\beta}}J^{s-|\alpha|}f(x)+ \Psi_{\kappa_{s-l-1}}f(x), \end{split} \end{equation*} where $f\in \mathcal{S}(\mathbb{R}^{n})$ and $\omega_{\alpha,\beta,\nu,s}$ denotes a constant depending on the parameters indicated.
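To illustrate the rearrangement above with a concrete instance (the precise constants play no role in the argument), take $|\alpha|=1,$ say $\alpha=\beta=\mathrm{e}_{k}$: since
\begin{equation*}
\partial_{\xi_{k}}\langle\xi\rangle^{s}=s\,\xi_{k}\langle\xi\rangle^{s-2}=\frac{s}{2\pi i}\,\eta_{\mathrm{e}_{k},\mathrm{e}_{k}}(\xi)\,\langle\xi\rangle^{s-1},
\end{equation*}
the corresponding contribution to $\Psi_{q_{s-1}}$ is, up to a constant, $\partial_{x_{k}}\chi_{\epsilon,\tau}\,\Psi_{\eta_{\mathrm{e}_{k},\mathrm{e}_{k}}}J^{s-1},$ which is precisely of the form appearing in the sum above with $j=1.$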
Now, we turn back to $\Theta_{3,1},$ for which we get, after combining Theorem \ref{continuity}, Lemma \ref{lem1} and Theorem \ref{leibniz}, \begin{equation*} \begin{split} & |\Theta_{3,1}(t)|\\ &\lesssim \|\chi_{\epsilon,\tau}J^{s}u \|_{L^{2}_{x}}\sum_{j=1}^{l}\sum_{|\alpha|=j}\sum_{\beta\leq\alpha} |\omega_{\alpha,\beta,\nu,s}| \left\|\partial_{x}^{\alpha} \chi_{\epsilon, \tau}\Psi_{\eta_{\alpha,\beta}}J^{s-|\alpha|}\partial_{x_{1}}\left(\left(u\chi_{\epsilon, \tau}\right)^{2}\right)\right\|_{L^{2}_{x}}\\ &\quad +\|\chi_{\epsilon,\tau}J^{s}u \|_{L^{2}_{x}}\sum_{j=1}^{l}\sum_{|\alpha|=j}\sum_{\beta\leq\alpha} |\omega_{\alpha,\beta,\nu,s}| \left\|\partial_{x}^{\alpha} \chi_{\epsilon, \tau}\Psi_{\eta_{\alpha,\beta}}J^{s-|\alpha|}\partial_{x_{1}}\left(\left(u\widetilde{\phi_{\epsilon, \tau}}\right)^{2}\right)\right\|_{L^{2}_{x}}\\ &\quad +\|\chi_{\epsilon,\tau}J^{s}u\|_{L^{2}_{x}}\sum_{j=1}^{l}\sum_{|\alpha|=j}\sum_{\beta\leq\alpha} |\omega_{\alpha,\beta,\nu,s}| \left\|\partial_{x}^{\alpha} \chi_{\epsilon, \tau}\Psi_{\eta_{\alpha,\beta}}J^{s-|\alpha|}\partial_{x_{1}}\left(\left(u\psi_{\epsilon}\right)^{2}\right)\right\|_{L^{2}_{x}}\\ &\lesssim \|\chi_{\epsilon,\tau}J^{s}u\|_{L^{2}_{x}}\left\{ \left\|J^{s}\left(\left(u\chi_{\epsilon, \tau}\right)^{2}\right)\right\|_{L^{2}_{x}}+\left\|J^{s}\left(\left(u\widetilde{\phi_{\epsilon, \tau}}\right)^{2}\right)\right\|_{L^{2}_{x}}+\|u\|_{L^{4}}^{2}\right\}\\ &\lesssim \|u\|_{L^{\infty}_{x}}\|\chi_{\epsilon,\tau}J^{s}u\|_{L^{2}_{x}}\left\{ \left\|J^{s}\left(u\chi_{\epsilon, \tau}\right)\right\|_{L^{2}_{x}}+\left\|J^{s}\left(u\widetilde{\phi_{\epsilon, \tau}}\right)\right\|_{L^{2}_{x}}+\|u_{0}\|_{L^{2}}\right\}. \end{split} \end{equation*} From this last inequality, $\|\chi_{\epsilon,\tau}J^{s}u\|_{L^{2}_{x}}$ is the quantity to be estimated; if additionally we make use of \eqref{main}, it follows that \begin{equation}\label{m2} J^{s}(u\chi_{\epsilon, \tau})=\chi_{\epsilon, \tau}J^{s}u+q_{s-1}(x,D)(u\chi_{\epsilon, \tau}+u\phi_{\epsilon,\tau}+u\psi_{\epsilon}), \end{equation} where the first term in the r.h.s. above is the quantity to be estimated by Gronwall's inequality, and the remaining terms are of lower order and localized. To handle the expression $\left\|J^{s}\left(u\widetilde{\phi_{\epsilon, \tau}}\right)\right\|_{L^{2}_{x}}$ we only require to consider a suitable cut-off function and combine Lemma \ref{lemm} together with \eqref{smoot} to show that \begin{equation}\label{m1} \int_{0}^{T}\left\|J^{s}\left(u(\cdot,t)\widetilde{\phi_{\epsilon, \tau}(\cdot,t)}\right)\right\|_{L^{2}_{x}}^{2}\, dt<\infty. \end{equation} A quite similar analysis is used to show that \begin{equation}\label{m3} \int_{0}^{T}\left\|J^{s}\left(u(\cdot,t)\phi_{\epsilon, \tau}(\cdot,t)\right)\right\|_{L^{2}_{x}}^{2}\, dt<\infty. \end{equation} Next, we write \begin{equation*} \begin{split} \Theta_{3,2}(t)&=\int_{\mathbb{R}^{n}}\chi_{\epsilon,\tau}J^{s}u\left[J^{s}; u\chi_{\epsilon,\tau}\right]\partial_{x_{1}}u\, dx\\ &=\int_{\mathbb{R}^{n}}\chi_{\epsilon,\tau}J^{s}u\left[J^{s}; u\chi_{\epsilon,\tau}\right]\partial_{x_{1}}\left(u\chi_{\epsilon, \tau}+u\phi_{\epsilon, \tau}+u\psi_{\epsilon}\right)\, dx\\ &=\Theta_{3,2,1}(t)+\Theta_{3,2,2}(t)+\Theta_{3,2,3}(t). \end{split} \end{equation*} The arguments to bound $\Theta_{3,2,1}$ and $\Theta_{3,2,2}$ are quite similar and are obtained by using Theorem \ref{KPDESI}: \begin{equation*} |\Theta_{3,2,1}(t)|\lesssim \|\chi_{\epsilon, \tau} J^{s}u \|_{L^{2}_{x}}\|J^{s}(u\chi_{\epsilon, \tau})\|_{L^{2}_{x}}\|\nabla u \|_{L^{\infty}_{x}} \end{equation*} and \begin{equation*} |\Theta_{3,2,2}(t)|\lesssim \|\nabla u \|_{L^{\infty}_{x}}\left\{\|\chi_{\epsilon, \tau}J^{s}u\|_{L^{2}_{x}}^{2}+\|J^{s}(u\phi_{\epsilon, \tau})\|_{L^{2}_{x}}^{2}+\|J^{s}(u\chi_{\epsilon, \tau})\|_{L^{2}_{x}}^{2}\right\}.
\end{equation*} The arguments used to control the terms $\|J^{s}(u\chi_{\epsilon, \tau})\|_{L^{2}_{x}}^{2}$ and $\|J^{s}(u\phi_{\epsilon, \tau})\|_{L^{2}_{x}}^{2}$ were already described in \eqref{m2} and \eqref{m3}, respectively. For $\Theta_{3,2,3},$ Lemma \ref{lem1} yields \begin{equation*} |\Theta_{3,2,3}(t)|\lesssim \|\chi_{\epsilon, \tau} J^{s}u \|_{L^{2}_{x}} \|u\|_{L^{\infty}_{x}}\|u_{0}\|_{L^{2}_{x}}. \end{equation*} Notice that $\Theta_{3,3}$ is the term to estimate by Gronwall's inequality and $\Theta_{3,4}$ can be bounded above by using Sobolev embedding and \eqref{smoot}. Finally, gathering the information corresponding to this step, an application of Gronwall's inequality combined with integration in the time variable implies that for any $\epsilon>0$ and $\tau\geq 5\epsilon,$ \begin{equation}\label{final1} \begin{split} &\sup_{0<t<T}\int_{\mathbb{R}^{n}}(J^{s}u(x,t))^{2}\chi_{\epsilon, \tau}^{2}(\nu\cdot x+\omega t )\, dx\\ &\qquad +\int_{0}^{T}\int_{\mathbb{R}^{n}}(J^{s+\frac{\alpha}{2}}u(x,t))^{2}\left(\chi_{\epsilon, \tau}\chi_{\epsilon, \tau}'\right)(\nu\cdot x+\omega t)\, dx\, dt\leq c. \end{split} \end{equation} This estimate finishes Step 1. \begin{flushleft} {\sc \underline{Step 2:}} \end{flushleft} In this case we consider $s\in (s_{n},s_{n}+\frac{\alpha}{2})$. Thus, \begin{equation*} s_{n}+1-\frac{\alpha}{2}<s+1-\frac{\alpha}{2}<s_{n}+1.
\end{equation*} As in Step 1 we perform energy estimates; the main difference is that they are now at level $s+\frac{2-\alpha}{2}.$ More precisely, \begin{equation}\label{energy2} \begin{split} &\frac{d}{dt}\int_{\mathbb{R}^{n}}\left(J^{s+\frac{2-\alpha}{2}}u\right)^{2}\chi_{\epsilon,\tau}^{2}\,dx\underbrace{-\frac{\omega }{2}\int_{\mathbb{R}^{n}}\left(J^{s+\frac{2-\alpha}{2}}u\right)^{2}\chi_{\epsilon,\tau}\chi_{\epsilon,\tau}'\,dx}_{\Theta_{1}(t)}\\ & \underbrace{-\int_{\mathbb{R}^{n}}J^{s+\frac{2-\alpha}{2}}u \partial_{x_{1}}(-\Delta)^{\frac{\alpha}{2}}J^{s+\frac{2-\alpha}{2}}u \chi_{\epsilon,\tau}^{2}\,dx}_{\Theta_{2}(t)} \underbrace{+\int_{\mathbb{R}^{n}}J^{s+\frac{2-\alpha}{2}}uJ^{s+\frac{2-\alpha}{2}}(u\partial_{x_{1}}u)\chi^{2}_{\epsilon,\tau}\,dx}_{\Theta_{3}(t)}=0. \end{split} \end{equation} In this step we will focus only on the more difficult terms to handle; these correspond to $\Theta_{1}$ and $\Theta_{2}.$ The term $\Theta_{3}$ can be estimated by using arguments similar to the ones described in the previous step. Nevertheless, for the reader's convenience we will indicate the steps that are needed to provide the desired bounds. An argument similar to the one used in \eqref{teta1}-\eqref{smoot} applied to the second term in \eqref{final1} produces: for $\epsilon>0$ and $\tau\geq 5\epsilon,$ \begin{equation}\label{eqinter1} \int_{0}^{T}\int_{\mathcal{H}_{\{\epsilon-\omega t,\nu\}}\cap\mathcal{H}^{c}_{\{\tau-\omega t,\nu\}}} \left(J^{r}u(x,t)\right)^{2}\, dx\, dt\leq c_{r}, \end{equation} for some positive constant $c_{r}$ and $r\in\left(0, s+\frac{\alpha}{2}\right].$ We need to take into account that $$s+\frac{\alpha}{2}\geq s+\frac{2-\alpha}{2},$$ so that the regularity obtained in the previous step is enough to control the terms with localized regularity.
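For the reader's convenience, we record the elementary count behind the last assertion: since $\alpha\in[1,2),$
\begin{equation*}
\left(s+\frac{\alpha}{2}\right)-\left(s+\frac{2-\alpha}{2}\right)=\alpha-1\geq 0,
\end{equation*}
so the level $s+\frac{2-\alpha}{2}$ appearing in \eqref{energy2} is indeed covered by the range $r\in\left(0,s+\frac{\alpha}{2}\right]$ in \eqref{eqinter1}.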
Thus, in virtue of \eqref{eqinter1} we obtain for $\epsilon>0$ and $\tau\geq 5\epsilon;$ \begin{equation}\label{interp1.2.1} \begin{split} \int_{0}^{T}|\Theta_{1}(t)|\, dt \lesssim\int_{\mathcal{H}_{\{\epsilon-\omega t,\nu\}}\cap\mathcal{H}^{c} _{\{\tau-\omega t,\nu\}}} \left(J^{s+\frac{2-\alpha}{2}}u(x,t)\right)^{2}\, dx\, dt\leq c_{s,\alpha}. \end{split} \end{equation} A procedure similar to the one used for $\Theta_{2}$ in the step 1 implies that \begin{equation}\label{comm1.1.1} \begin{split} \Theta_{2}(t)&=\frac{1}{2}\int_{\mathbb{R}^{n}} J^{s+\frac{2-\alpha}{2}}u \left[J^{\alpha}\partial_{x_{1}}; \chi_{\epsilon,\tau}^{2}\right]J^{s+\frac{2-\alpha}{2}}u\,dx\\ &\quad +\frac{1}{2}\int_{\mathbb{R}^{n}} J^{s+\frac{2-\alpha}{2}}u \left[\mathcal{K}_{\alpha}\partial_{x_{1}}; \chi_{\epsilon,\tau}^{2}\right]J^{s+\frac{2-\alpha}{2}}u\,dx\\ &=\Theta_{2,1}(t)+\Theta_{2,2}(t). \end{split} \end{equation} We shall remind from \eqref{comm1.2} that there exist pseudo-differential operators $p_{\alpha-j}(x,D),$ where $\,j\in \{1,2\cdots,m\}$ for some $m\in \mathbb{N}$ satisfying \begin{equation}\label{comm1.3} \begin{split} c_{\alpha}(x,D)=p_{\alpha}(x,D)+p_{\alpha-1}(x,D)+\dots+p_{\alpha-m}(x,D)+r_{\alpha-m-1}(x,D), \end{split} \end{equation} where $p_{\alpha-j}\in \mathrm{OP}\mathbb{S}^{\alpha-j}$ and $r_{\alpha-m-1}\in \mathrm{OP}\mathbb{S}^{\alpha-1-m}.$ We choose $m$ as being \begin{equation}\label{control commu} m=\left\lceil 2s+1-s_{n}\right\rceil. 
\end{equation} Thus, \begin{equation}\label{decomp1,11,2} \begin{split} \Theta_{2,1}(t)&=\frac{1}{2}\sum_{j=0}^{m}\int_{\mathbb{R}^{n}}J^{s+\frac{2-\alpha}{2}}u p_{\alpha-j}(x,D)J^{s+\frac{2-\alpha}{2}}u\, dx\\ &\quad +\frac{1}{2}\int_{\mathbb{R}^{n}}J^{s+\frac{2-\alpha}{2}}u r_{\alpha-1-m}(x,D)J^{s+\frac{2-\alpha}{2}}u\, dx\\ &=\frac{1}{2}\sum_{j=0}^{m}\int_{\mathbb{R}^{n}}J^{s+\frac{2-\alpha}{2}}u p_{\alpha-j}(x,D)J^{s+\frac{2-\alpha}{2}}u\, dx\\ &\quad +\frac{1}{2}\int_{\mathbb{R}^{n}}uJ^{s+\frac{2-\alpha}{2}} r_{\alpha-1-m}(x,D)J^{s+\frac{2-\alpha}{2}}u\, dx\\ &=\sum_{j=0}^{m}\Theta_{2,1,j}(t)+\Theta_{2,1,m+1}(t). \end{split} \end{equation} We recall from \eqref{decomp1.2} that for each $j\in\{1,2,\dots,m\},$ \begin{equation}\label{psudoe} \begin{split} p_{\alpha-j}(x,D)=\sum_{|\beta|=j}c_{\beta,j}\partial_{x}^{\beta}(\chi_{\epsilon,\tau}^{2})\Psi_{\beta,j}J^{\alpha-j}, \end{split} \end{equation} with $\Psi_{\beta,j}\in\mathrm{OP}\mathbb{S}^{0}.$ The decomposition above allows us to estimate $\Theta_{2,1,j}$ for all $j>1.$ Indeed, \begin{equation}\label{equiva} \begin{split} \int_{0}^{T}|\Theta_{2,1,j}(t)|\, dt &\lesssim \sum_{|\beta|=j}|c_{\beta,j}|\int_{0}^{T} \int_{\mathbb{R}^{n}}\left| J^{s+\frac{2-\alpha}{2}}u\partial_{x}^{\beta}\left(\chi_{\epsilon, \tau}^{2}\right)\Psi_{\beta,j}J^{s+1-j+\frac{\alpha}{2}}u\,\right|\, dx\, dt\\ &\lesssim \sum_{|\beta|=j}|c_{\beta,j}|\int_{0}^{T} \int_{\mathbb{R}^{n}}\left| J^{s+\frac{2-\alpha}{2}}u\, \theta_{1}^{2}\, \Psi_{\beta,j}J^{s+1-j+\frac{\alpha}{2}}u\,\right|\, dx\, dt\\ &\lesssim \int_{0}^{T} \left\|\theta_{1}J^{s+\frac{2-\alpha}{2}}u\right\|_{L^{2}_{x}}^{2}\, dt +\sum_{|\beta|=j}|c_{\beta,j}| \int_{0}^{T}\left\|\theta_{1}\Psi_{\beta,j}J^{s+1-j+\frac{\alpha}{2}}u\right\|_{L^{2}_{x}}^{2}\, dt, \end{split} \end{equation} where $\theta_{1}\in C^{\infty}(\mathbb{R}^{n})$ is such that for all multi-indices $\beta$ with $|\beta|=j,$ the following relationship holds \begin{equation*} \theta_{1}\equiv 1 \quad \mbox{on}\,
\supp_{x}\left(\partial_{x}^{\beta}(\chi_{\epsilon, \tau}^{2}(\cdot,t))\right) \end{equation*} and \begin{equation*} \supp_{x}\theta_{1}\subset \mathcal{H}_{\left\{\frac{51\epsilon}{16}-\omega t ,\nu\right\}}\cap \mathcal{H}^{c}_{\left\{\tau+\frac{51\epsilon}{16}-\omega t ,\nu\right\}},\, \forall\, t>0, \end{equation*} whenever $\epsilon>0$ and $\tau\geq 5\epsilon.$ Then, \begin{equation*} \begin{split} \int_{0}^{T} \left\|\theta_{1}J^{s+\frac{2-\alpha}{2}}u\right\|_{L^{2}_{x}}^{2}\, dt & \lesssim \int_{0}^{T}\int_{\mathcal{H}_{\left\{\frac{51\epsilon}{16}-\omega t ,\nu\right\}}\cap \mathcal{H}^{c}_{\left\{\tau+\frac{51\epsilon}{16}-\omega t ,\nu\right\}}}\!\!\left(J^{s+\frac{2-\alpha}{2}}u\right)^{2}\, dx\, dt<c, \end{split} \end{equation*} after choosing $(\epsilon,\tau)=(\epsilon',\tau')$ properly in \eqref{interp1.2.1}. We consider $\theta_{2}\in C^{\infty}(\mathbb{R}^{n})$ satisfying \begin{equation*} \theta_{2}\equiv 1 \quad \mbox{on}\quad \supp \theta_{1} \end{equation*} and \begin{equation*} \supp \theta_{2}\subset \mathcal{H}_{\left\{\frac{17\epsilon}{16}-\omega t ,\nu\right\}}\cap \mathcal{H}^{c}_{\left\{2\tau+\frac{51\epsilon}{16}-\omega t ,\nu\right\}},\, \forall\, t>0, \end{equation*} whenever $\epsilon>0$ and $\tau\geq 5\epsilon.$ Thus, combining Lemma \ref{zk19} and \eqref{eqinter1} implies that \begin{equation}\label{equiva2} \begin{split} \sum_{|\beta|=j}|c_{\beta,j}| \int_{0}^{T}\left\|\theta_{1}\Psi_{\beta,j}J^{s+1-j+\frac{\alpha}{2}}u\right\|_{L^{2}_{x}}^{2}\, dt<c , \end{split} \end{equation} for all $j\geq 2.$ From \eqref{control commu} we obtain \begin{equation*} \int_{0}^{T} |\Theta_{2,1,m+1}(t)|\, dt \lesssim T\|u_{0}\|_{L^{2}_{x}}\|u\|_{L^{\infty}_{T}H^{s_{n}+}_{x}}. \end{equation*} It remains to estimate $\Theta_{2,1,1},$ which did not fall within the scope of the previous analysis.
When we replace \eqref{psudoe} in \eqref{decomp1,11,2} we find \begin{equation}\label{karine1} \begin{split} \Theta_{2,1,1}(t)&=\frac{1}{2}\sum_{|\beta|=1}c_{\beta,1}\int_{\mathbb{R}^{n}}J^{s+\frac{2-\alpha}{2}}u \partial_{x}^{\beta}(\chi_{\epsilon,\tau}^{2})\Psi_{\beta,1}J^{s+\frac{\alpha}{2}}u\, dx\\ &=\frac{1}{2}\sum_{|\beta|=1}c_{\beta,1}\int_{\mathbb{R}^{n}}J^{s+\frac{1}{2}}u \partial_{x}^{\beta}(\chi_{\epsilon,\tau}^{2})\Psi_{\beta,1}J^{s+\frac{1}{2}}u\, dx\\ &\quad+\frac{1}{2}\sum_{|\beta|=1}c_{\beta,1}\int_{\mathbb{R}^{n}}J^{s+\frac{1}{2}}u\left[J^{\frac{\alpha-1}{2}};\partial_{x}^{\beta}(\chi_{\epsilon,\tau}^{2})\right] \Psi_{\beta,1}J^{s+\frac{1}{2}}u\, dx\\ &=\Theta_{2,1,1,1}(t)+\Theta_{2,1,1,2}(t). \end{split} \end{equation} Since $s+\frac{1}{2}\leq s+\frac{\alpha}{2}$ for $\alpha\in[1,2)$ we get, up to constants, \begin{equation}\label{karine2} \begin{split} \int_{0}^{T} |\Theta_{2,1,1,1}(t)|\,dt&\lesssim\int_{0}^{T}\int_{\mathbb{R}^{n}}\left(J^{s+\frac{1}{2}}u\right)^{2} \chi_{\frac{\epsilon}{3},\tau+\epsilon}' \,dx\, dt\\ &\quad + \sum_{|\beta|=1} \int_{0}^{T}\int_{\mathbb{R}^{n}}\left(\Psi_{\beta,1}J^{s+\frac{1}{2}}u\right)^{2}\left|\partial_{x}^{\beta}(\chi_{\epsilon,\tau}^{2})\right| \, dx\, dt<\infty, \end{split} \end{equation} where we have used \eqref{eqinter1} for the first term in the r.h.s. above, and for the second one we combined Lemma \ref{zk19} together with \eqref{eqinter1}.
Instead, for $\Theta_{2,1,1,2}$ we decompose the commutator expression as \begin{equation}\label{karine3} \begin{split} \Theta_{2,1,1,2}(t)&=\frac{1}{2}\sum_{|\beta|=1}c_{\beta,1}\int_{\mathbb{R}^{n}}J^{s}u\left[J^{\frac{\alpha}{2}};\partial_{x}^{\beta}(\chi_{\epsilon,\tau}^{2})\right] \Psi_{\beta,1}J^{s+\frac{1}{2}}u\, dx\\ &\quad -\frac{1}{2}\sum_{|\beta|=1}c_{\beta,1}\int_{\mathbb{R}^{n}}J^{s}u\left[J^{\frac{1}{2}};\partial_{x}^{\beta}(\chi_{\epsilon,\tau}^{2})\right] \Psi_{\beta,1}J^{s+\frac{\alpha}{2}}u\, dx. \end{split} \end{equation} After using a decomposition like the one in \eqref{commudecomp1..2} (with $\alpha$ replaced by $\frac{\alpha}{2}$ and $\frac{1}{2}$, respectively), combined with Theorem \ref{continuity} and inequality \eqref{eqinter1}, we obtain \begin{equation}\label{karien4} \int_{0}^{T}|\Theta_{2,1,1,2}(t)|\, dt<\infty. \end{equation} Next, we focus our attention on $\Theta_{2,1,0}$. From \eqref{disperive} we have the explicit expression for $p_{\alpha}(x,D),$ so that replacing it in \eqref{decomp1,11,2} yields \begin{equation*} \begin{split} \Theta_{2,1,0}(t)&=\frac{1}{2}\int_{\mathbb{R}^{n}}J^{s+\frac{2-\alpha}{2}}u J^{s+\frac{2+\alpha}{2}}u\,\partial_{x_{1}}\left(\chi_{\epsilon, \tau}^{2}\right)\, dx-\frac{\alpha}{2}\int_{\mathbb{R}^{n}}J^{s+\frac{2-\alpha}{2}}u\,J^{s+\frac{\alpha-2}{2}}\partial_{x_{1}}^{2}u\, \partial_{x_{1}}\left(\chi_{\epsilon, \tau}^{2}\right)\, dx\\ &\quad -\frac{\alpha}{2}\sum_{|\beta|=1, \beta\neq \mathrm{e}_{1}}\int_{\mathbb{R}^{n}}J^{s+\frac{2-\alpha}{2}}u\, J^{s+\frac{\alpha-2}{2}}\partial_{x_{1}}\partial_{x}^{\beta}u\, \partial_{x}^{\beta}\left(\chi_{\epsilon, \tau}^{2}\right)\, dx\\ &= \Theta_{2,1,0,1}(t)+ \Theta_{2,1,0,2}(t)+ \Theta_{2,1,0,3}(t).
\end{split} \end{equation*} In the first place, \begin{equation*} \begin{split} \Theta_{2,1,0,1}(t)&=\frac{1}{2}\int_{\mathbb{R}^{n}}\left(J^{s+1}u\right)^{2}\partial_{x_{1}}\left(\chi_{\epsilon, \tau}^{2}\right)\, dx+\frac{1}{2}\int_{\mathbb{R}^{n}}J^{s+1}u\left[J^{\frac{\alpha}{2}}; \partial_{x_{1}}(\chi_{\epsilon, \tau}^{2})\right]J^{s+\frac{2-\alpha}{2}}u\, dx\\ &= \Theta_{2,1,0,1,1}(t)+ \Theta_{2,1,0,1,2}(t). \end{split} \end{equation*} The term $\Theta_{2,1,0,1,1}(t)$ represents, after integrating in time, the pursued smoothing effect. For $\Theta_{2,1,0,1,2}$ we rewrite it as \begin{equation*} \begin{split} \Theta_{2,1,0,1,2}(t)&=\frac{1}{2}\int_{\mathbb{R}^{n}}J^{s}u\left[J^{1+\frac{\alpha}{2}}; \partial_{x_{1}}\left(\chi_{\epsilon, \tau}^{2}\right)\right] J^{s+1-\frac{\alpha}{2}}u\, dx\\ &\quad -\frac{1}{2}\int_{\mathbb{R}^{n}}J^{s}u\left[J; \partial_{x_{1}}\left(\chi_{\epsilon, \tau}^{2}\right)\right] J^{s+1}u\, dx\\ &=\Lambda_{1}(t)+\Lambda_{2}(t). \end{split} \end{equation*} To handle $\Lambda_{1}$ we use the expression \eqref{commudecomp1..2} (after replacing $\alpha$ by $1+\frac{\alpha}{2}$); the main difference lies in the fact that we shall fix the positive integer $m_{1}$ as \begin{equation*} m_{1}=\left\lceil 2s+\frac{3\alpha}{2}-2-s_{n}\right\rceil. \end{equation*} Notice that from such a decomposition we obtain terms of lower order, which are controlled by using \eqref{final1}. On the other hand, \begin{equation*} \begin{split} \Theta_{2,1,0,2}(t)&= \frac{\alpha}{2}\int_{\mathbb{R}^{n}}\left(\partial_{x_{1}}J^{s}u\right)^{2}\, \partial_{x_{1}}\left(\chi_{\epsilon, \tau}^{2}\right)\, dx\\ &\quad +\frac{\alpha}{2}\int_{\mathbb{R}^{n}}\partial_{x_{1}}J^{s}u\left[J^{\frac{2-\alpha}{2}}; \partial_{x_{1}}\left(\chi_{\epsilon, \tau}^{2}\right)\right]J^{s+\frac{\alpha-2}{2}}u\, dx\\ &=\Theta_{2,1,0,2,1}(t)+\Theta_{2,1,0,2,2}(t).
\end{split} \end{equation*} The term $\Theta_{2,1,0,2,1}$ represents the smoothing effect after integrating in the temporal variable. Instead, the remainder does not contain terms that yield any smoothing, so we need to estimate it. In this sense, we rewrite it as \begin{equation*} \begin{split} \Theta_{2,1,0,2,2}(t)&=\frac{\alpha}{2}\int_{\mathbb{R}^{n}}\partial_{x_{1}}J^{s}u\left[J^{\frac{2-\alpha}{2}}; \partial_{x_{1}}\left(\chi_{\epsilon, \tau}^{2}\right)\right]J^{s+\frac{\alpha-2}{2}}u\, dx\\ &=-\frac{\alpha}{2}\int_{\mathbb{R}^{n}}J^{s}u\left[J^{\frac{2-\alpha}{2}}; \partial_{x_{1}}^{2}\left(\chi_{\epsilon, \tau}^{2}\right)\right]J^{s+\frac{\alpha-2}{2}}u\, dx\\ &\quad- \frac{\alpha}{2}\int_{\mathbb{R}^{n}}J^{s}u\left[J^{\frac{2-\alpha}{2}}; \partial_{x_{1}}\left(\chi_{\epsilon, \tau}^{2}\right)\right]\partial_{x_{1}}J^{s+\frac{\alpha-2}{2}}u\, dx\\ &=\Lambda_{3}(t)+\Lambda_{4}(t). \end{split} \end{equation*} We indicate how to estimate $\Lambda_{3}.$ We start by writing \begin{equation*} \begin{split} &\Lambda_{3}(t)=\\ &\frac{\alpha}{2}\int_{\mathbb{R}^{n}}J^{s}\left(u\chi_{\epsilon, \tau}+u\phi_{\epsilon, \tau}+u\psi_{\epsilon}\right)\left[J^{\frac{2-\alpha}{2}}; \partial_{x_{1}}^{2}\left(\chi_{\epsilon, \tau}^{2}\right)\right]J^{s+\frac{\alpha-2}{2}}\left(u\chi_{\epsilon, \tau}+u\phi_{\epsilon, \tau}+u\psi_{\epsilon}\right)dx.\\ \end{split} \end{equation*} Hence, by Theorem \ref{continuity} and Lemma \ref{lem1}, we obtain \begin{equation*} \begin{split} | \Lambda_{3}(t)|\lesssim \|J^{s}(u\chi_{\epsilon, \tau})\|_{L^{2}_{x}}^{2}+\|J^{s}(u\phi_{\epsilon, \tau})\|_{L^{2}_{x}}^{2}+\|u_{0}\|_{L^{2}_{x}}^{2}. \end{split} \end{equation*} Unlike the previous step, to estimate $\|J^{s}(u\chi_{\epsilon, \tau})\|_{L^{2}}$ we only need to apply Lemma \ref{zk37} and \eqref{final1}, which gives \begin{equation*} \sup_{t\in(0,T)}\|J^{s}(u\chi_{\epsilon, \tau})\|_{L^{2}_{x}}^{2}<\infty.
\end{equation*} Instead, to estimate $\|J^{s}\left(u\phi_{\epsilon, \tau}\right)\|_{L^{2}_{T}L^{2}_{x}}$ we combine \eqref{final1} and the arguments described in \eqref{m1}-\eqref{m3} to obtain $$\|J^{s}(u\phi_{\epsilon, \tau})\|_{L^{2}_{T}L^{2}_{x}}<\infty.$$ In summary, \begin{equation*} \int_{0}^{T}| \Lambda_{3}(t)|\, dt<\infty. \end{equation*} The same arguments apply to estimate $\Lambda_{4}.$ Indeed, \begin{equation*} \int_{0}^{T}|\Lambda_{4}(t)|\, dt<\infty. \end{equation*} Under the conditions \eqref{cond11}-\eqref{cond12}, there exists $\lambda>0$ such that \begin{equation*} \begin{split} &\lambda\left(\int_{\mathbb{R}^{n}}\left(J^{s+1}u\right)^{2}\partial_{x_{1}}\left(\chi_{\epsilon, \tau}^{2}\right)\, dx+\int_{\mathbb{R}^{n}}\left(J^{s-2}\partial_{x_{1}}u\right)^{2}\partial_{x_{1}}\left(\chi_{\epsilon, \tau}^{2}\right)\, dx\right)\\ &\leq \Theta_{2,1,1,1}(t)+\Lambda_{3,1}(t)+ \Theta_{2,1,0,3}(t). \end{split} \end{equation*} The term that contains the kernel $\mathcal{K}_{\alpha}$ is estimated in the same way as we did in \eqref{kernelruin}, and in such case \begin{equation*} \begin{split} \int_{0}^{T}|\Theta_{2,2}(t)|\, dt<\infty. \end{split} \end{equation*} The arguments of the proof are similar to the ones described in the proof of \eqref{claim1}.
To handle the nonlinear part we use the decomposition \begin{equation*} \begin{split} \Theta_{3}(t)&=-\int_{\mathbb{R}^{n}}\chi_{\epsilon,\tau}J^{s+\frac{2-\alpha}{2}}u\, \left[J^{s+\frac{2-\alpha}{2}}; \chi_{\epsilon,\tau}\right]u\partial_{x_{1}}u\, dx \\ &\quad +\int_{\mathbb{R}^{n}}\chi_{\epsilon,\tau}J^{s+\frac{2-\alpha}{2}}u\left[J^{s+\frac{2-\alpha}{2}}; u\chi_{\epsilon,\tau}\right]\partial_{x_{1}}u\, dx\\ &\quad -\frac{1}{2}\int_{\mathbb{R}^{n}}\left(J^{s+\frac{2-\alpha}{2}}u\right)^{2}\chi_{\epsilon,\tau}^{2}\, dx-\nu_{1}\int_{\mathbb{R}^{n}}u\left(J^{s+\frac{2-\alpha}{2}}u\right)^{2}\chi_{\epsilon,\tau}\chi_{\epsilon,\tau}'\, dx.\\ \end{split} \end{equation*} The reader can notice that this decomposition is simply adapted to the regularity of this step. However, the process required to estimate the terms above is similar to the one used for $\Theta_{3}$ in the previous step, so for the sake of brevity we omit the details. In summary, after applying Gronwall's inequality and integrating in time, we find the following: for any $\epsilon>0$ and $\tau\geq 5\epsilon,$ \begin{equation}\label{final2} \begin{split} &\sup_{0<t<T}\int_{\mathbb{R}^{n}}\left(J^{s+\frac{2-\alpha}{2}}u(x,t)\right)^{2}\chi_{\epsilon, \tau}^{2}(\nu\cdot x+\omega t )\, dx\\ &\qquad +\int_{0}^{T}\int_{\mathbb{R}^{n}}(J^{s+1}u(x,t))^{2}\left(\chi_{\epsilon, \tau}\chi_{\epsilon, \tau}'\right)(\nu\cdot x+\omega t)\, dx\, dt\leq c. \end{split} \end{equation} As part of the inductive process we suppose that $s\in (h,h+1),\, h\in\mathbb{N},$ with the condition that for any $\epsilon>0$ and $\tau\geq 5\epsilon$ the following estimate holds true: \begin{equation}\label{final3} \begin{split} &\sup_{0<t<T}\int_{\mathbb{R}^{n}}\left(J^{s}u(x,t)\right)^{2}\chi_{\epsilon, \tau}^{2}(\nu\cdot x+\omega t )\, dx\\ &\qquad +\int_{0}^{T}\int_{\mathbb{R}^{n}}(J^{s+\frac{\alpha}{2}}u(x,t))^{2}\left(\chi_{\epsilon, \tau}\chi_{\epsilon, \tau}'\right)(\nu\cdot x+\omega t)\, dx\, dt\leq c.
\end{split} \end{equation} As usual, our starting point is the energy estimate \eqref{energy2}. As we did in the previous case, we only describe the part that will provide the smoothing effect, since the remainder terms can be estimated in a standard way with arguments described previously. First, we rewrite $\Theta_{2}$ from \eqref{energy2} as follows \begin{equation}\label{comm1.1.1} \begin{split} \Theta_{2}(t)&=\frac{1}{2}\int_{\mathbb{R}^{n}} J^{s+\frac{2-\alpha}{2}}u \left[J^{\alpha}\partial_{x_{1}}; \chi_{\epsilon,\tau}^{2}\right]J^{s+\frac{2-\alpha}{2}}u\,dx\\ &\quad +\frac{1}{2}\int_{\mathbb{R}^{n}} J^{s+\frac{2-\alpha}{2}}u \left[\mathcal{K}_{\alpha}\partial_{x_{1}}; \chi_{\epsilon,\tau}^{2}\right]J^{s+\frac{2-\alpha}{2}}u\,dx\\ &=\Theta_{2,1}(t)+\Theta_{2,2}(t). \end{split} \end{equation} We recall that there exist pseudo-differential operators $p_{\alpha-j}(x,D)$ for each $j\in \{1,2,\dots,m\}$ and some $m\in \mathbb{N}$ satisfying \begin{equation*} \begin{split} c_{\alpha}(x,D)=p_{\alpha}(x,D)+p_{\alpha-1}(x,D)+\dots+p_{\alpha-m}(x,D)+r_{\alpha-m-1}(x,D), \end{split} \end{equation*} where $p_{\alpha-j}\in \mathrm{OP}\mathbb{S}^{\alpha-j}$ and $r_{\alpha-m-1}\in \mathrm{OP}\mathbb{S}^{\alpha-1-m}.$ We choose $m$ as being \begin{equation*} m=\left\lceil 2s+1-s_{n}\right\rceil. \end{equation*} Thus, \begin{equation*} \begin{split} \Theta_{2,1}(t)&=\frac{1}{2}\sum_{j=0}^{m}\int_{\mathbb{R}^{n}}J^{s+\frac{2-\alpha}{2}}u p_{\alpha-j}(x,D)J^{s+\frac{2-\alpha}{2}}u\, dx\\ &\quad +\frac{1}{2}\int_{\mathbb{R}^{n}}J^{s+\frac{2-\alpha}{2}}u r_{\alpha-1-m}(x,D)J^{s+\frac{2-\alpha}{2}}u\, dx\\ &=\frac{1}{2}\sum_{j=0}^{m}\int_{\mathbb{R}^{n}}J^{s+\frac{2-\alpha}{2}}u p_{\alpha-j}(x,D)J^{s+\frac{2-\alpha}{2}}u\, dx\\ &\quad +\frac{1}{2}\int_{\mathbb{R}^{n}}J^{s+\frac{2-\alpha}{2}}u\, r_{\alpha-1-m}(x,D)J^{s+\frac{2-\alpha}{2}}u\, dx\\ &=\sum_{j=0}^{m}\Theta_{2,1,j}(t)+\Theta_{2,1,m+1}(t).
\end{split} \end{equation*} where \begin{equation*} \begin{split} p_{\alpha-j}(x,D)=\sum_{|\beta|=j}c_{\beta,j}\partial_{x}^{\beta}(\chi_{\epsilon,\tau}^{2})\Psi_{\beta,j}J^{\alpha-j}, \end{split} \end{equation*} for each $j\in\{1,2,\dots,m\},$ with $\Psi_{\beta,j}\in\mathrm{OP}\mathbb{S}^{0}.$ The key to controlling the terms in the expression above lies in combining assumption \eqref{final3} with a proper application of Lemma \ref{zk37} to obtain that, for any $\epsilon>0$ and $\tau\geq 5\epsilon,$ the following estimates hold: \begin{equation}\label{final3.1} \sup_{0<t<T}\int_{\mathbb{R}^{n}}\left(J^{r_{1}}u(x,t)\right)^{2}\chi_{\epsilon, \tau}^{2}(\nu\cdot x+\omega t )\, dx\leq c \end{equation} for any $r_{1}\in \left(0,s\right],$ and \begin{equation}\label{final3.2} \begin{split} \int_{0}^{T}\int_{\mathbb{R}^{n}}(J^{r_{2}}u(x,t))^{2}\left(\chi_{\epsilon, \tau}\chi_{\epsilon, \tau}'\right)(\nu\cdot x+\omega t)\, dx\, dt\leq c \end{split} \end{equation} for all $r_{2}\in \left(0, s+\frac{\alpha}{2}\right].$ The terms $\Theta_{2,1,j},\, j\geq 2,$ can be estimated by combining \eqref{final3.1}-\eqref{final3.2} with the arguments already described in \eqref{equiva}-\eqref{equiva2}. Next, the distinguished term in our decomposition is $\Theta_{2,1,0}$, and it is given by \begin{equation*} \begin{split} \Theta_{2,1,0}(t)&=\frac{1}{2}\int_{\mathbb{R}^{n}}J^{s+\frac{2-\alpha}{2}}u J^{s+\frac{2+\alpha}{2}}u\partial_{x_{1}}\, \left(\chi_{\epsilon, \tau}^{2}\right)\, dx\\ &\quad -\frac{\alpha}{2}\int_{\mathbb{R}^{n}}J^{s+\frac{2-\alpha}{2}}u\,J^{s+\frac{\alpha-2}{2}}\partial_{x_{1}}^{2}u\, \partial_{x_{1}}\left(\chi_{\epsilon, \tau}^{2}\right)\, dx\\ &\quad -\frac{\alpha}{2}\sum_{|\beta|=1, \beta\neq \mathrm{e}_{1}}\int_{\mathbb{R}^{n}}J^{s+\frac{2-\alpha}{2}}u\, J^{s+\frac{\alpha-2}{2}}\partial_{x_{1}}\partial_{x}^{\beta}u\, \partial_{x}^{\beta}\left(\chi_{\epsilon, \tau}^{2}\right)\, dx\\ &= \Theta_{2,1,0,1}(t)+ \Theta_{2,1,0,2}(t)+ \Theta_{2,1,0,3}(t).
\end{split} \end{equation*} In the first place, \begin{equation*} \begin{split} \Theta_{2,1,0,1}(t)&=\frac{1}{2}\int_{\mathbb{R}^{n}}\left(J^{s+1}u\right)^{2}\partial_{x_{1}}\left(\chi_{\epsilon, \tau}^{2}\right)\, dx+\frac{1}{2}\int_{\mathbb{R}^{n}}J^{s+1}u\left[J^{\frac{\alpha}{2}}; \partial_{x_{1}}(\chi_{\epsilon, \tau}^{2})\right]J^{s+\frac{2-\alpha}{2}}u\, dx\\ &= \Theta_{2,1,0,1,1}(t)+ \Theta_{2,1,0,1,2}(t). \end{split} \end{equation*} The term $\Theta_{2,1,0,1,1}$ represents, after integrating in time, the smoothing effect we pursue. For $\Theta_{2,1,0,1,2}$ we rewrite it as \begin{equation*} \begin{split} \Theta_{2,1,0,1,2}(t)&=\frac{1}{2}\int_{\mathbb{R}^{n}}J^{s}u\left[J^{1+\frac{\alpha}{2}}; \partial_{x_{1}}\left(\chi_{\epsilon, \tau}^{2}\right)\right] J^{s+1-\frac{\alpha}{2}}u\, dx\\ &\quad -\frac{1}{2}\int_{\mathbb{R}^{n}}J^{s}u\left[J; \partial_{x_{1}}\left(\chi_{\epsilon, \tau}^{2}\right)\right] J^{s+1}u\, dx\\ &=\Lambda_{1}(t)+\Lambda_{2}(t). \end{split} \end{equation*} To handle $\Lambda_{1}$ we split the commutator expression as in \eqref{commudecomp1..2} (after replacing $\frac{\alpha}{2}$ by $\frac{\alpha}{2}+1$), but fixing $m_{1}\in\mathbb{N}$ as being \begin{equation*} m_{1}=\left\lceil 2s+\frac{3\alpha}{2}-2-s_{n}\right\rceil. \end{equation*} Notice that from such a decomposition we obtain terms of lower order, which are controlled by using \eqref{final3.1}-\eqref{final3.2}. Next, we write \begin{equation*} \begin{split} \Theta_{2,1,0,2}(t)&= \frac{\alpha}{2}\int_{\mathbb{R}^{n}}\left(\partial_{x_{1}}J^{s}u\right)^{2}\, \partial_{x_{1}}\left(\chi_{\epsilon, \tau}^{2}\right)\, dx\\ &\quad +\frac{\alpha}{2}\int_{\mathbb{R}^{n}}\partial_{x_{1}}J^{s}u\left[J^{\frac{2-\alpha}{2}}; \partial_{x_{1}}\left(\chi_{\epsilon, \tau}^{2}\right)\right]J^{s+\frac{\alpha-2}{2}}u\, dx\\ &=\Theta_{2,1,0,2,1}(t)+\Theta_{2,1,0,2,2}(t). \end{split} \end{equation*} The term $\Theta_{2,1,0,2,1}$ corresponds to the smoothing effect of this step after integrating in time.
Next, we rewrite \begin{equation*} \begin{split} \Theta_{2,1,0,2,2}(t)&=\frac{\alpha}{2}\int_{\mathbb{R}^{n}}\partial_{x_{1}}J^{s}u\left[J^{\frac{2-\alpha}{2}}; \partial_{x_{1}}\left(\chi_{\epsilon, \tau}^{2}\right)\right]J^{s+\frac{\alpha-2}{2}}u\, dx\\ &=-\frac{\alpha}{2}\int_{\mathbb{R}^{n}}J^{s}u\left[J^{\frac{2-\alpha}{2}}; \partial_{x_{1}}^{2}\left(\chi_{\epsilon, \tau}^{2}\right)\right]J^{s+\frac{\alpha-2}{2}}u\, dx\\ &\quad- \frac{\alpha}{2}\int_{\mathbb{R}^{n}}J^{s}u\left[J^{\frac{2-\alpha}{2}}; \partial_{x_{1}}\left(\chi_{\epsilon, \tau}^{2}\right)\right]\partial_{x_{1}}J^{s+\frac{\alpha-2}{2}}u\, dx\\ &=\Lambda_{3}(t)+\Lambda_{4}(t). \end{split} \end{equation*} As we did previously, we only indicate how to estimate $\Lambda_{3}$, since for $\Lambda_{4}$ the situation is analogous. We start by writing \begin{equation*} \begin{split} &\Lambda_{3}(t)=\\ &\frac{\alpha}{2}\int_{\mathbb{R}^{n}}J^{s}\left(u\chi_{\epsilon, \tau}+u\phi_{\epsilon, \tau}+u\psi_{\epsilon}\right)\left[J^{\frac{2-\alpha}{2}}; \partial_{x_{1}}^{2}\left(\chi_{\epsilon, \tau}^{2}\right)\right]J^{s+\frac{\alpha-2}{2}}\left(u\chi_{\epsilon, \tau}+u\phi_{\epsilon, \tau}+u\psi_{\epsilon}\right)dx.\\ \end{split} \end{equation*} Hence, by Theorem \ref{continuity} and Lemma \ref{lem1}, we obtain \begin{equation*} \begin{split} | \Lambda_{3}(t)|\lesssim \|J^{s}(u\chi_{\epsilon, \tau})\|_{L^{2}_{x}}^{2}+\|J^{s}(u\phi_{\epsilon, \tau})\|_{L^{2}_{x}}^{2}+\|u_{0}\|_{L^{2}_{x}}^{2}. \end{split} \end{equation*} Unlike the previous step, to estimate $\|J^{s}(u\chi_{\epsilon, \tau})\|_{L^{2}}$ we only need to apply Lemma \ref{zk37} and \eqref{final1}, which gives \begin{equation*} \sup_{t\in(0,T)}\|J^{s}(u\chi_{\epsilon, \tau})\|_{L^{2}_{x}}^{2}<\infty.
\end{equation*} Instead, to estimate $\|J^{s}(u\phi_{\epsilon, \tau})\|_{L^{2}_{T}L^{2}_{x}}$ we combine \eqref{final1} and the arguments described in \eqref{m1}-\eqref{m3} to obtain $$\|J^{s}(u\phi_{\epsilon, \tau})\|_{L^{2}_{T}L^{2}_{x}}<\infty.$$ In summary, \begin{equation*} \int_{0}^{T}| \Lambda_{3}(t)|\, dt<\infty. \end{equation*} The same arguments apply to estimate $\Lambda_{4}.$ Indeed, \begin{equation*} \int_{0}^{T}|\Lambda_{4}(t)|\, dt<\infty. \end{equation*} In the case of $\Theta_{2,1,1}$, the arguments described in \eqref{karine1}-\eqref{karine3} imply, together with \eqref{final3.1} and \eqref{final3.2}, that \begin{equation*} \int_{0}^{T}|\Theta_{2,1,1}(t)|\, dt <c \end{equation*} for some positive constant $c.$ Next, we focus our attention on $\Theta_{2,1,m+1}$; the error term containing $r_{\alpha-1-m}(x,D)$ satisfies, in virtue of Theorem \ref{continuity}, \begin{equation*} \int_{0}^{T}|\Theta_{2,1,m+1}(t)|\, dt\lesssim T\|u_{0}\|_{L^{2}_{x}}\|u\|_{L^{\infty}_{T}H^{s_{n}+}_{x}}. \end{equation*} Under the conditions \eqref{cond11}-\eqref{cond12}, there exists $\lambda>0$ such that \begin{equation*} \begin{split} &\lambda\left(\int_{\mathbb{R}^{n}}\left(J^{s+1}u\right)^{2}\partial_{x_{1}}\left(\chi_{\epsilon, \tau}^{2}\right)\, dx+\int_{\mathbb{R}^{n}}\left(J^{s-2}\partial_{x_{1}}u\right)^{2}\partial_{x_{1}}\left(\chi_{\epsilon, \tau}^{2}\right)\, dx\right)\\ &\leq \Theta_{2,1,1,1}(t)+\Lambda_{3,1}(t)+ \Theta_{2,1,0,3}(t). \end{split} \end{equation*} The analysis of this term remains unchanged from the one in the previous steps.
Finally, we gather the estimates above, from which we find that for any $\epsilon>0$ and $\tau \geq 5 \epsilon,$ \begin{equation}\label{final4} \begin{split} &\sup_{0<t<T}\int_{\mathbb{R}^{n}}\left(J^{s+\frac{2-\alpha}{2}}u(x,t)\right)^{2}\chi_{\epsilon, \tau}^{2}(\nu\cdot x+\omega t )\, dx\\ &\qquad +\int_{0}^{T}\int_{\mathbb{R}^{n}}(J^{s+1}u(x,t))^{2}\left(\chi_{\epsilon, \tau}\chi_{\epsilon, \tau}'\right)(\nu\cdot x+\omega t)\, dx\, dt\leq c. \end{split} \end{equation} This estimate finishes the inductive argument, and we conclude the proof. \end{proof} The attentive reader might naturally wonder if it is possible to obtain a propagation of regularity result as the one proved previously if the dispersion is weaker when compared with that of the ZK equation. More precisely, if we consider the equation \begin{equation*} \partial_{t}u-\partial_{x_{1}}\left(-\Delta\right)^{\frac{\alpha}{2}}u+u\partial_{x_{1}}u=0, \qquad 0<\alpha<1. \end{equation*} This question was first addressed in \cite{AM2} in the one-dimensional case (see \cite{AMTHESIS} for a more detailed exposition). It was proved in \cite{AM2} that even when the dispersion is this weak, the propagation of regularity phenomenon occurs. However, in higher dimensions this question had not been addressed before, since it was unknown how to obtain Kato's smoothing in these cases. \begin{rem} The arguments described in Lemma \ref{main2} are strong enough and allow us to describe \end{rem} \section{Appendix A}\label{apendice1} The following appendix intends to provide a summary of the main results on pseudo-differential operators that we use in this work.
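As a guiding example to keep in mind throughout this appendix (a standard fact stated here under the Fourier conventions used in this paper, not a result taken from the references below), the Bessel potential $J^{s}$ is itself the prototypical pseudo-differential operator:

```latex
% Hedged model example: under the conventions of this paper, J^{s} is the
% Fourier multiplier with the x-independent symbol
%   a(x,\xi) = \langle 2\pi\xi\rangle^{s} = (1+4\pi^{2}|\xi|^{2})^{s/2}.
% Differentiating in \xi, one checks the defining estimates of the class
% \mathbb{S}^{s}:
\left|\partial_{x}^{\alpha}\partial_{\xi}^{\beta}\langle 2\pi\xi\rangle^{s}\right|
\lesssim_{\alpha,\beta}\left(1+|\xi|\right)^{s-|\beta|},
\qquad x,\xi\in\mathbb{R}^{n},
% so that a\in\mathbb{S}^{s}, and hence J^{s}=\Psi_{a}\in\mathrm{OP}\mathbb{S}^{s}.
```

With this example in mind, the composition and commutator results recalled below apply directly to expressions of the form $\left[J^{s};g\right]$ that appear throughout the proofs above.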
\begin{defn} Let $m\in \mathbb{R}.$ Let $\mathbb{S}^{m}(\mathbb{R}^{n}\times\mathbb{R}^{n})$ denote the set of functions $a\in C^{\infty}(\mathbb{R}^{n}\times \mathbb{R}^{n})$ such that for all multi-indices $\alpha$ and $\beta$ \begin{equation*} \left|\partial_{x}^{\alpha}\partial_{\xi}^{\beta}a(x,\xi)\right|\lesssim_{\alpha,\beta}(1+|\xi|)^{m-|\beta|},\quad \mbox{for all}\quad x,\xi \in\mathbb{R}^{n}. \end{equation*} An element $a\in \mathbb{S}^{m}(\mathbb{R}^{n}\times\mathbb{R}^{n})$ is called a \emph{symbol of order $m.$} \end{defn} \begin{rem} For the sake of simplicity in the notation, from here on we will suppress the dependence on the space $\mathbb{R}^{n}$ when we make reference to a symbol in a particular class. \end{rem} \begin{rem} For $m\in\mathbb{R},$ the class of symbols $\mathbb{S}^{m}$ can be described as \begin{equation*} \mathbb{S}^{m}=\left\{a(x,\xi)\in C^{\infty}(\mathbb{R}^{n}\times\mathbb{R}^{n})\,|\, |a|_{\mathbb{S}^{m}}^{(j)}<\infty,\, j\in\mathbb{N}\right\}, \end{equation*} where \begin{equation*} |a|_{\mathbb{S}^{m}}^{(j)}:=\sup\left\{\left\|\langle \xi \rangle ^{|\alpha|-m}\partial_{\xi}^{\alpha}\partial_{x}^{\beta}a(\cdot,\cdot)\right\|_{L^{\infty}_{x,\xi}} \Big| \alpha,\beta\in\mathbb{N}_{0}^{n},\, |\alpha+\beta|\leq j \right\}.
\end{equation*} \end{rem} \begin{defn} \emph{A pseudo-differential operator} is a mapping $f\mapsto \Psi f$ given by \begin{equation*} (\Psi f)(x)=\int_{\mathbb{R}^{n}}e^{2\pi\mathrm{i}x\cdot\xi}a(x,\xi)\widehat{f}(\xi)\,d\xi, \end{equation*} where $a(x, \xi)$ is the symbol of $\Psi.$ \end{defn} \begin{rem} In order to emphasize the role of the symbol $a$ we will often write $\Psi_{a}.$ Also, we use the notation $a(x,D)$ to denote the operator $\Psi_{a}.$ \end{rem} \begin{defn} If $a(x,\xi)\in \mathbb{S}^{m},$ the operator $\Psi_{a}$ is said to belong to $\mathrm{OP}\mathbb{S}^{m}.$ More generally, if $\nu$ is any symbol class and $a(x,\xi)\in \nu,$ we say that $\Psi_{a}\in \mathrm{OP}\nu.$ \end{defn} A quite remarkable property that pseudo-differential operators enjoy is the existence of the adjoint operator, which is described below in terms of its asymptotic decomposition. \begin{thm} Let $a\in \mathbb{S}^{m}.$ Then, there exists $a^{*}\in \mathbb{S}^{m}$ such that $\Psi_{a}^{*}=\Psi_{a^{*}},$ and for all $N\geq 0,$ \begin{equation*} a^{*}(x,\xi)-\sum_{|\alpha|<N}\frac{(2\pi i)^{-|\alpha|}}{\alpha!}\partial_{\xi}^{\alpha}\partial_{x}^{\alpha}\overline{a}(x,\xi)\in\mathbb{S}^{m-N}. \end{equation*} \end{thm} \begin{proof} See Stein \cite{stein3}, Chapter VI, or Taylor \cite{MT1}. \end{proof} Additionally, the product $\Psi_{a}\Psi_{b}$ of two operators with symbols $a(x,\xi)$ and $b(x,\xi)$, respectively, is a pseudo-differential operator $\Psi_{c}$ with symbol $c(x,\xi).$ More precisely, the description of the symbol $c$ is summarized in the following theorem: \begin{thm} Suppose $a$ and $b$ are symbols belonging to $\mathbb{S}^{m}$ and $ \mathbb{S}^{r},$ respectively. Then, there is a symbol $c$ in $\mathbb{S}^{m+r}$ so that \begin{equation*} \Psi_{c}=\Psi_{a}\circ\Psi_{b}.
\end{equation*} Moreover, \begin{equation*} c\sim\sum_{\alpha}\frac{(2\pi i)^{-|\alpha|}}{\alpha!}\partial_{\xi}^{\alpha}a\partial^{\alpha}_{x}b, \end{equation*} in the sense that \begin{equation*} c-\sum_{|\alpha|<N}\frac{(2\pi i)^{-|\alpha|}}{\alpha!}\partial_{\xi}^{\alpha}a\,\partial^{\alpha}_{x}b\in \mathbb{S}^{m+r-N}, \quad \mbox{for every integer} \quad N\geq 0. \end{equation*} \end{thm} \begin{proof} For the proof see Stein \cite{stein3}, Chapter VI, or Taylor \cite{MT1}. \end{proof} \begin{rem} Note that $c-ab\in \mathbb{S}^{m+r-1}.$ Moreover, each symbol of the form $\partial_{\xi}^{\alpha}a\,\partial_{x}^{\alpha}b$ lies in the class $\mathbb{S}^{m+r-|\alpha|}.$ \end{rem} A direct consequence of the decomposition above is that it allows one to describe explicitly, up to an error term, operators such as commutators between pseudo-differential operators, as described below: \begin{prop}\label{prop1} For $a\in \mathbb{S}^{m}$ and $b\in \mathbb{S}^{r}$ we define the commutator $\left[\Psi_{a};\Psi_{b}\right]$ by \begin{equation*} \left[\Psi_{a};\Psi_{b}\right]=\Psi_{a}\Psi_{b}-\Psi_{b} \Psi_{a}. \end{equation*} Then, the operator ${\displaystyle \left[\Psi_{a};\Psi_{b}\right]\in \mathrm{OP}\mathbb{S}^{m+r-1}}$ has as principal symbol the Poisson bracket, i.e., \begin{equation*} \sum_{|\alpha|=1}\frac{1}{2\pi i}\left(\partial_{\xi}^{\alpha}a\,\partial_{x}^{\alpha}b- \partial_{x}^{\alpha}a\,\partial_{\xi}^{\alpha}b\right)\,\, \mathrm{mod}\,\, \mathbb{S}^{m+r-2}. \end{equation*} \end{prop} \begin{proof} The proof can be consulted in Stein \cite{stein3}, Chapter VI, Theorem 1. \end{proof} Also, certain classes of pseudo-differential operators enjoy continuity properties; an interesting and useful continuity result in the Sobolev spaces is described below. \begin{thm}\label{continuity} Let $m\in\mathbb{R}$, $a\in \mathbb{S}^{m},$ and $s\in\mathbb{R}$.
Then, the operator $\Psi_{a}$ extends to a bounded linear operator from $H^{s+m}(\mathbb{R}^{n})$ to $H^{s}(\mathbb{R}^{n}).$ Moreover, there exist $j=j(n;m;s)\in\mathbb{N}$ and $c=c(n;m;s)>0$ such that \begin{equation*} \left\|\Psi_{a}f\right\|_{H^{s}_{x}}\leq c|a|_{\mathbb{S}^{m}}^{(j)}\|f\|_{H^{s+m}_{x}}. \end{equation*} \end{thm} \begin{proof} See Kumano-go \cite{Kumno} or Stein \cite{stein3}, Chapter VI. \end{proof} An alternative formula for the Bessel kernel is the following. \begin{lem}\label{b1} Let $0<\delta<n+1,$ and let $f$ be a tempered distribution. Then $J^{-\delta}f=\mathcal{B}_{\delta}*f,$ where \begin{equation*} \mathcal{B}_{\delta}(y)=\frac{1}{(2\pi)^{\frac{n-1}{2}}2^{\frac{\delta}{2}}\Gamma\left(\frac{\delta}{2}\right)\Gamma\left(\frac{n-\delta+1}{2}\right)}e^{-|y|}\int_{0}^{\infty}e^{-|y|s}\left(s+\frac{s^{2}}{2}\right)^{\frac{n-\delta-1}{2}} \,ds. \end{equation*} \end{lem} \begin{proof} See Calderon \& Zygmund \cite{CZ}, Lemma 4.1. \end{proof} \begin{lem}\label{b2} The function $\mathcal{B}_{\delta}$ is non-negative and it satisfies the following properties: \begin{itemize} \item[(a)] For $0<\delta<n,$ \begin{equation*} \mathcal{B}_{\delta}(x)\lesssim_{\delta} e^{-|x|}\left(1+|x|^{\delta-n}\right); \end{equation*} \item[(b)] for $\delta=n,$ \begin{equation*} \mathcal{B}_{\delta}(x)\lesssim e^{-|x|}\left(1+\log^{+}\left(\frac{1}{|x|}\right)\right); \end{equation*} \item[(c)] for $\beta$ a multi-index with $|\beta|>0$ and $0<\delta<n+1,$ \begin{equation*} \left|\partial_{x}^{\beta}\mathcal{B}_{\delta}(x)\right|\leq c_{\beta,\delta} e^{-|x|}\left(1+|x|^{-n+\delta-|\beta|}\right), \quad x\neq 0. \end{equation*} \end{itemize} \end{lem} \begin{proof} We refer to Calderon \& Zygmund \cite{CZ}, Lemma 4.2. \end{proof} Also, the behavior of the Bessel potentials in the following cases is necessary for our arguments. \begin{lem}\label{Lemmasimpt} Let $\delta>0$.
The function $\mathcal{B}_{\delta}$ satisfies the following estimates: \begin{itemize} \item[(i)] For $\delta<n$ and $|y|\rightarrow 0,$ \begin{equation}\label{asimp1} \mathcal{B}_{\delta}(y)\approx\left( \frac{\pi^{\frac{n}{2}}\Gamma\left(\frac{n-\delta}{2}\right)}{2^{\delta-n}\Gamma\left(\frac{\delta}{2}\right)}|2\pi y|^{\delta-n}\right). \end{equation} \item[(ii)] For $\delta=n$ and $|y|\rightarrow 0,$ \begin{equation}\label{asimp2} \mathcal{B}_{\delta}(y)\approx\frac{\pi ^{n/2}}{\Gamma\left(\frac{n}{2}\right)}\log\left(\frac{1}{|2\pi y|}\right). \end{equation} \item[(iii)] For $\delta>n$ and $|y|\rightarrow 0,$ \begin{equation}\label{simp3} \mathcal{B}_{\delta}(y)\approx\frac{\pi^{\frac{n}{2}}\Gamma\left(\frac{\delta-n}{2}\right)}{\Gamma\left(\frac{\delta}{2}\right)}. \end{equation} \item[(iv)] For $\delta>0$ and $|y|\rightarrow \infty,$ \begin{equation}\label{asimp4} \mathcal{B}_{\delta}(y)\approx\frac{(2\pi)^{\frac{n}{2}}}{2^{\frac{\delta-1}{2}}\pi^{-\frac{1}{2}}\Gamma\left(\frac{\delta}{2}\right)}|2\pi y|^{\frac{\delta-n-1}{2}}e^{-|2\pi y|}. \end{equation} \end{itemize} \end{lem} \begin{proof} For the proof see Aronszajn \& Smith \cite{ARO}. \end{proof} \section{Appendix B} In this section we present some localization tools that are quite useful to describe the regularity phenomena we are working on. \begin{lem}\label{lem1} Let $\Psi_{a}\in\mathrm{OP\mathbb{S}^{r}}.$ Let $ \alpha=\left(\alpha_{1},\alpha_{2},\dots,\alpha_{n}\right)$ be a multi-index with $|\alpha|\geq 0.$ If $f\in L^{2}(\mathbb{R}^{n})$ and $g\in L^{p}(\mathbb{R}^{n}),\, p\in [2,\infty],$ with \begin{equation}\label{e16} \dist\left(\supp(f),\supp(g)\right)\geq \delta>0, \end{equation} then \begin{equation*} \left\|g\partial_{x}^{\alpha}\Psi_{a}f\right\|_{L^{2}}\lesssim \|g\|_{L^{p}}\|f\|_{L^{2}}, \end{equation*} where $\partial_{x}^{\alpha}:= \partial_{x_{1}}^{\alpha_{1}}\dots\partial_{x_{n}}^{\alpha_{n}}.$ \end{lem} \begin{proof} See Mendez \cite{AMZK}.
\end{proof} The next result can be proved by using the ideas from the proof of the lemma above. Nevertheless, for the reader's convenience we describe the main details. \begin{cor}\label{separated} Let $f,g$ be functions such that \begin{equation}\label{support} \dist\left(\supp (f),\supp(g)\right)=\delta>0. \end{equation} Consider the operator \begin{equation*} \left(\mathcal{T}_{\psi}f\right)^{\widehat{}}(\xi):=\psi(\xi)\widehat{f}(\xi),\, f\in\mathcal{S}(\mathbb{R}^{n}), \end{equation*} where $\psi$ is defined as in \eqref{a1}, that is, \begin{equation*} \psi(\xi)=\sum_{j=1}^{\infty} {\alpha/2 \choose j}\langle 2\pi\xi\rangle ^{2-2j},\quad \xi\in\mathbb{R}^{n}. \end{equation*} If $f\in L^{p}(\mathbb{R}^{n})$ and $g\in L^{2}(\mathbb{R}^{n}),$ then \begin{equation*} \left\|\left[\mathcal{T}_{\psi}; f\right]g\right\|_{L^{2}}\lesssim \|f\|_{L^{p}}\|g\|_{L^{2}}, \end{equation*} for $p\in[2,\infty].$ \end{cor} \begin{proof} According to condition \eqref{support} we have \begin{equation*} \left(\left[\mathcal{T}_{\psi}; f\right]g\right)(x) =\sum_{j=2}^{\infty}{\alpha/2 \choose j}f(x)\int_{\{|x-y|>\delta\}}g(y)\mathcal{B}_{2j-2}(x-y)\,dy.
\end{equation*} In virtue of Lemma \ref{Lemmasimpt}, it is clear that for $j>1,$ \begin{equation*} \begin{split} &\left| f(x)\int_{\{|x-y|>\delta\}}g(y)\mathcal{B}_{2j-2}(x-y)\,dy\right|\\ &\lesssim_{n}\frac{1}{2^{j}(j-2)!}\int_{\{|x-y|>\delta\}}|f(x)||g(y)| |2\pi(x-y)|^{\frac{2j-3-n}{2}}e^{-2\pi|x-y|}\,dy.\\ \end{split} \end{equation*} Thus, by Young's inequality, \begin{equation*} \begin{split} \left\| \left[\mathcal{T}_{\psi}; f\right]g\right\|_{L^{2}}&\lesssim_{n}\|f\|_{L^{p}}\|g\|_{L^{2}}\left(\sum_{j=2}^{\infty}\frac{\Gamma\left(j+\frac{n-3}{2}\right)}{(j-2)!j^{1+\alpha}}\right)\\ &\lesssim_{n,\alpha}\|f\|_{L^{p}}\|g\|_{L^{2}}, \end{split} \end{equation*} for $p\in[2,\infty].$ \end{proof} The following formulas were first obtained by Linares, Kenig, Ponce and Vega in the one-dimensional case in their study of the propagation of regularity for solutions of the KdV equation. These results were later extended to dimension $n,\, n\geq 2,$ in \cite{AMZK} for solutions of the Zakharov-Kuznetsov equation. \begin{lem}[Localization formulas]\label{A} Let $f\in L^{2}(\mathbb{R}^{n}).$ Let $\nu=(\nu_{1},\nu_{2},\dots,\nu_{n})\in \mathbb{R}^{n}$ be a non-null vector such that $\nu_{j}\geq 0,\, j=1,2,\dots,n.$ For $\epsilon>0,$ we consider a function $\varphi_{\nu,\epsilon}\in C^{\infty}(\mathbb{R}^{n})$ satisfying $0\leq\varphi_{\nu,\epsilon} \leq 1,$ \begin{equation*} \varphi_{\nu,\epsilon}(x)= \begin{cases} 0\quad \mbox{if}\quad & x\in \mathcal{H}_{\left\{ \nu,\frac{\epsilon}{2}\right\}}^{c}\\ 1\quad \mbox{if}\quad & x\in\mathcal{H}_{\{\nu,\epsilon\}} \end{cases} \end{equation*} and the following increasing property: for every multi-index $\alpha$ with $|\alpha|=1,$ \begin{equation*} \partial^{\alpha}_{x}\varphi_{\nu,\epsilon}(x)\geq 0,\quad x\in\mathbb{R}^{n}.
\end{equation*} \begin{itemize} \item[(I)] If $m\in \mathbb{Z}^{+}$ and $\varphi_{\nu,\epsilon}J^{m}f\in L^{2}(\mathbb{R}^{n}),$ then for all $\epsilon'>2\epsilon$ and every multi-index $\alpha$ with $0 \leq|\alpha|\leq m,$ the derivatives of $f$ satisfy \begin{equation*} \varphi_{\nu,\epsilon'}\partial^{\alpha}_{x}f\in L^{2}(\mathbb{R}^{n}). \end{equation*} \item[(II)] If $m\in \mathbb{Z}^{+}$ and $\varphi_{\nu,\epsilon}\,\partial^{\alpha}_{x}f\in L^{2}(\mathbb{R}^{n})$ for every multi-index $\alpha$ with $ 0\leq |\alpha|\leq m,$ then for all $\epsilon'>2\epsilon$ \begin{equation*} \varphi_{\nu,\epsilon'}J^{m}f\in L^{2}(\mathbb{R}^{n}). \end{equation*} \item[(III)] If $s>0,$ and $J^{s}(\varphi_{\nu,\epsilon}f)\in L^{2}(\mathbb{R}^{n}),$ then for any $\epsilon'>2\epsilon$ \begin{equation*} \varphi_{\nu,\epsilon'}\,J^{s}f\in L^{2}(\mathbb{R}^{n}). \end{equation*} \item[(IV)] If $s>0,$ and $\varphi_{\nu,\epsilon}J^{s}f\in L^{2}(\mathbb{R}^{n}),$ then for any $\epsilon'>2\epsilon$ \begin{equation*} J^{s}\left(\varphi_{\nu,\epsilon'}f\right)\in L^{2}(\mathbb{R}^{n}). \end{equation*} \end{itemize} \end{lem} \begin{proof} See \cite{AMZK}. \end{proof} A more general version, which we will use in this work, is the following.
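Before stating it, the following minimal illustration (a sketch in the notation above, not an additional result) shows how parts (I) and (II) of the localization formulas are typically chained:

```latex
% Sketch: chaining (I) and (II) of the localization formulas.
% Suppose \varphi_{\nu,\epsilon} J^{m} f \in L^{2}(\mathbb{R}^{n}).
% By (I), for every \epsilon' > 2\epsilon and every multi-index \alpha with
% 0 \le |\alpha| \le m,
\varphi_{\nu,\epsilon'}\,\partial_{x}^{\alpha}f\in L^{2}(\mathbb{R}^{n}),
% and feeding these localized derivatives into (II) gives, for every
% \epsilon'' > 2\epsilon',
\varphi_{\nu,\epsilon''}\,J^{m}f\in L^{2}(\mathbb{R}^{n}).
% Localized J^{m}-regularity is thus traded for localized derivatives and
% recovered on a slightly smaller half-space, enlarging the parameter \epsilon.
```

This loss in the localization parameter is harmless in the arguments above, since the cut-offs $\chi_{\epsilon,\tau}$ tolerate a small enlargement of $\epsilon$.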
\begin{lem}\label{lemm} Let $f\in L^{2}(\mathbb{R}^{n}).$ If $\theta_{1}, \theta_{2}\in C^{\infty}(\mathbb{R}^{n})$ are functions such that: $0\leq \theta_{1},\theta_{2}\leq 1,$ their respective supports satisfy \begin{equation*} \dist\left(\supp\left(1-\theta_{1}\right), \supp\left(\theta_{2}\right)\right)\geq \delta, \end{equation*} for some positive number $\delta,$ and for every multi-index $\beta,$ the functions $\partial_{x}^{\beta}\theta_{1},\partial_{x}^{\beta}\theta_{2}\in L^{\infty}(\mathbb{R}^{n}).$ Then, the following statements hold: \begin{itemize} \item[(I)] If $m\in \mathbb{Z}^{+}$ and $\theta_{1}J^{m}f\in L^{2}(\mathbb{R}^{n}),$ then for all multi-index $\alpha$ with $0 \leq|\alpha|\leq m,$ the derivatives of $f$ satisfy \begin{equation*} \theta_{2}\partial^{\alpha}_{x}f\in L^{2}(\mathbb{R}^{n}). \end{equation*} \item[(II)] If $m\in \mathbb{Z}^{+}$ and $\theta_{1}\,\partial^{\alpha}_{x}f\in L^{2}(\mathbb{R}^{n})$ for all multi-index $\alpha$ with $ 0\leq |\alpha|\leq m,$ then \begin{equation*} \theta_{2}J^{m}f\in L^{2}(\mathbb{R}^{n}). \end{equation*} \item[(III)] If $s>0,$ and $J^{s}(\theta_{1}f)\in L^{2}(\mathbb{R}^{n}),$ then \begin{equation*} \theta_{2}\,J^{s}f\in L^{2}(\mathbb{R}^{n}). \end{equation*} \item[(IV)] If $s>0,$ and $\theta_{1}J^{s}f\in L^{2}(\mathbb{R}^{n}),$ then \begin{equation*} J^{s}\left(\theta_{2}f\right)\in L^{2}(\mathbb{R}^{n}). \end{equation*} \end{itemize} \end{lem} \begin{proof} See \cite{AMZK}. \end{proof} \begin{lem}\label{zk19} Let $\Psi_{a}\in \mathrm{OP\mathbb{S}^{0}}.$ Let $\theta_{1},\theta_{2}:\mathbb{R}^{n}\longrightarrow\mathbb{R}$ be smooth functions such that \begin{equation*} \dist\left(\supp\left(1- \theta_{1}\right),\supp \theta_{2}\right)>\delta, \end{equation*} for some $\delta>0.$ Assume that $ f\in H^{s}(\mathbb{R}^{n}),\, s<0.$ If $\theta_{1}f\in L^{2}(\mathbb{R}^{n}),$ then \begin{equation*} \theta_{2}\Psi_{a} f\in L^{2}(\mathbb{R}^{n}).
\end{equation*} \end{lem} \begin{proof} For the one-dimensional case see \cite{KLPV}; for the extension to the $n$-dimensional case see \cite{AMZK}. \end{proof} \begin{lem}\label{zk37} Let $f\in L^{2}(\mathbb{R}^{n})$ and $\nu=(\nu_{1},\nu_{2},\dots,\nu_{n})\in \mathbb{R}^{n}$ such that $\nu_{j}>0$ for $j=1,2,\dots,n.$ Also assume that \begin{equation*} J^{s}f\in L^{2}\left(\mathcal{H}_{\{\alpha,\nu\}}\right),\quad s>0. \end{equation*} Then, for any $\epsilon>0$ and any $r\in (0,s],$ \begin{equation*} J^{r}f\in L^{2}\left(\mathcal{H}_{\{\alpha+\epsilon,\nu\}}\right). \end{equation*} \end{lem} \begin{proof} For the one-dimensional case see \cite{KLPV}; for the extension to the $n$-dimensional case see \cite{AMZK}. \end{proof} \begin{thm}\label{KPDESI} Let $s>0$ and $f,g\in\mathcal{S}(\mathbb{R}^{n}).$ Then, \begin{equation*} \left\|\left[J^{s};g\right]f\right\|_{L^{2}}\lesssim \|J^{s-1}f\|_{L^{2}}\|\nabla g\|_{L^{\infty}}+\|J^{s}g\|_{L^{2}}\|f\|_{L^{\infty}}, \end{equation*} where the implicit constant does not depend on $f$ or $g.$ \end{thm} \begin{proof} For the proof see the appendix in Kato and Ponce \cite{KATOP2}. \end{proof} Also, the following Leibniz rule for the operator $J^{s}$ is quite useful in our arguments. \begin{thm}\label{leibniz} Let $s>\frac{n}{2}$ and $f,g\in \mathcal{S}(\mathbb{R}^{n}),$ then \begin{equation*} \|J^{s}(f\cdot g)\|_{L^{2}}\lesssim \|J^{s}f\|_{L^{2}}\|g\|_{L^{\infty}}+\|J^{s}g\|_{L^{2}}\|f\|_{L^{\infty}}, \end{equation*} where the implicit constant does not depend on $f$ or $g.$ \end{thm} \begin{proof} See the appendix in Kato and Ponce \cite{KATOP2}. \end{proof} \section{Acknowledgment} I thank R. Freire for reading an early version of this document and for his helpful suggestions. I would also like to thank Prof. F. Linares for pointing out several references and for his suggestions on the presentation of this work. I also thank O. Riaño for drawing my attention to this problem.
\section{Introduction} \label{sec_intro} {\bf The overwhelming importance of simulating water:} The properties of water, such as the uniqueness of its phase diagram, never stop surprising scientific communities.\cite{ZWCW21} Given the vital importance of water in fields ranging from materials science to biology, there has been a recent surge in the development and competition of different electronic structure methods for simulating water.\cite{SWSA20,DLPP21,DSBP22,ZCTX21,TPFR21,LHP21,PLSH22,PLDP22} As {\em ab initio} quantum-chemical methods are too expensive for large systems, Kohn-Sham density functional theory (KS-DFT) has become a workhorse of electronic structure methods for water calculations.\cite{BLP13, BMP14, MBP14,RSBP16,CKRC17} But, despite an excellent accuracy-to-cost ratio, KS-DFT has historically been unable to deliver sufficiently high accuracy in water simulations to reproduce experimental data.\cite{KMMS04,KKP09,SKAT11,GAM16} A recent breakthrough in this direction by Dasgupta {\it et al.} showed that the strongly constrained and appropriately normed (SCAN) functional, when used in tandem with 'density-corrected DFT' (DC-DFT), is a game changer for water simulations, because it brings KS-DFT close to chemical accuracy.\cite{DLPP21,DSBP22} The role of water in a chemical or biochemical reaction goes beyond providing a solvation environment: water is often explicitly involved in the mechanism. For this reason, a complete understanding of a reaction is possible only when the interaction between water and other molecules is accurately described. Figure~\ref{fgr:1} shows how an integratively designed DC-DFT procedure, HF-r$^2$SCAN-DC4, describes not only water-water, water-organic molecule, and water-biomolecule interactions in various situations, but also the interactions of non-covalent complexes, at chemical accuracy or better.
{\bf The importance of the density:} DC-DFT is a general framework that separates the error of any DFT calculation into a contribution coming from the approximate "D" (density) and the 'true' error coming from the approximate "F" (functional).\cite{KSB13,KSB14,WNJK17,VSKS19} In addition to being a rigorous exact theory, DC-DFT gives practical guidance on when and how it can be used to reduce errors in DFT simulations.\cite{SSB18,NSSB20,SVSB21} Standard DFT calculations are performed self-consistently (SC). The simplest form of practical DC-DFT is HF-DFT, where density functionals are instead evaluated on Hartree-Fock (HF) densities and orbitals.\cite{KSB11,KPSS15,SKSB18,KSSB18,SVSB22} While in most cases SC-DFT gives the best answer, in specific cases SC-DFT suffers from large energetic errors due to the approximate density (density-driven errors).\cite{KSB13,VSKS19} In such cases, HF-DFT typically yields significant improvements over SC-DFT, and these include a number of chemical domains (barrier heights, some torsional barriers, halogen bonds, anions, etc.).\cite{SVSB21} {\bf The importance of the functional:} SCAN is a non-empirical meta-GGA functional designed to satisfy 17 exact physical constraints and to recover several nonbonded `norms'.\cite{SRP15} Meta-GGAs use the KS kinetic energy density as an ingredient, but are {\em not} hybrid functionals like B3LYP, which include some fraction of exact exchange from a HF calculation.\cite{B93} In terms of accuracy, SCAN is often on par with highly empirical, more expensive density functionals designed for molecules.
At the same time, it enjoys great success for simulations of extended systems, making it one of the most-used general-purpose functionals developed over the last 10 years.\cite{SRZS16,TSB16,MH17,GHBE17,ZSPW17} Earlier works have shown that standard (SC) DFT calculations of water clusters suffer badly from density-driven errors, which explains why HF-SCAN is much more accurate than its SC counterpart for simulations of water.\cite{DLPP21} In addition to water clusters, Dasgupta {\it et al.} used HF-SCAN in tandem with a many-body potential energy \begin{figure*}[htb] \centering \includegraphics[width=1.95\columnwidth]{figs/another_fig1_v5.pdf} \caption{ Performance of HF-r$^2$SCAN-DC4 relative to HF-SCAN for various chemical reactions: (a) the interaction energy of various configurations of the stacked cytosine dimer, where HF-SCAN underbinds by 2-3 kcal/mol; (b) energies of water hexamers relative to the lowest-lying prism isomer, with HF-SCAN underestimating by almost 1 kcal/mol; (c) errors in binding energies of WATER27 complexes as a function of density sensitivity (how much a DFT energy changes when the density is changed), showing how large errors can be without using the HF density. One cluster, H$_3$O$^+$(H$_2$O)$_4$OH (at $x$ close to 7 kcal/mol), is an outlier argued to exhibit a significant multiconfigurational character\cite{DSBP22}; (d) relative energies of water 20-mer isomers (not density sensitive) from WATER27, where self-consistent SC-r$^2$SCAN-D4 performs best, but using the HF density introduces little error; (e) errors in interaction energies in the water$\cdots$aspirin dimer structures from an MD simulation at T=298.15 K; (f) MAEs for intra- and inter-molecular noncovalent interaction datasets from the GMTKN55 database.
For more details, see the main text and supporting information.} \label{fgr:1} \end{figure*} \clearpage \noindent function (related to the highly popular MB-pol\cite{BLP13,BMP14,MBP14}) to run molecular dynamics (MD) simulations of liquid water and obtained results in excellent agreement with the experimental data. These were the first successful DFT-based simulations able to correctly describe the condensation of water. Nevertheless, the convergence of the SCAN functional can be painfully slow with respect to the size of molecular grids, either because of the size of a system or because it requires grids larger than those available in most standard quantum-chemical codes.\cite{SVSB22} Larger grids also lead to longer computational times. Perdew and co-workers have developed r$^2$SCAN to address these issues of SCAN,\cite{FKNP20} but, as we show below, a standalone version of HF-r$^2$SCAN is much less accurate for water simulations than HF-SCAN. {\bf The vital importance of dispersion:} Despite its enormous success in modelling water, HF-SCAN is not a panacea. In their water simulations, Dasgupta {\it et al.} used HF-SCAN without a dispersion correction, as they found that the standard dispersion corrections, such as those of Grimme\cite{GAEK10}, {\em worsen} the original results of HF-SCAN for water. But such dispersion corrections have long been known to be necessary for non-covalent interactions.\cite{WVNL01,MS96,G04,BJ05,G06a,BJ07,TS09,VV10,RH16,PBJ21} So, despite delivering high accuracy for pure water simulations, HF-SCAN without a dispersion correction cannot accurately describe long-range dispersion interactions.
For this reason, the errors of HF-SCAN are several times larger than those of dispersion-corrected DFT for the standard noncovalent datasets.\cite{GHBE17} The challenge is then to construct an efficient density functional that correctly describes noncovalent interactions of different nature, while recovering or even improving the accuracy of HF-SCAN for water simulations. {\bf HF-r$^2$SCAN-DC4, an integratively designed DC-DFT procedure:} In the present paper, we resolve these issues by using the principles of DC-DFT to carefully parameterize a dispersion correction for HF-r$^2$SCAN. This yields HF-r$^2$SCAN-DC4, which has the following key features: (i) HF-r$^2$SCAN-DC4 {\em improves} upon HF-SCAN for pure water simulations, by up to 0.5 kcal/mol for relative energies of water hexamers, and up to 4.5 kcal/mol for those of water 20-mers; (ii) HF-r$^2$SCAN-DC4 is far more accurate than HF-SCAN for interactions of water with other molecules and for noncovalent interactions in general, because of the inclusion of explicit dispersion corrections; (iii) HF-r$^2$SCAN-DC4 can be routinely and efficiently used in calculations because, unlike HF-SCAN\cite{SVSB22}, HF-r$^2$SCAN-DC4 has no grid convergence issues. In our HF-r$^2$SCAN-DC4, each of the three ingredients is vitally important: the "HF" part reduces density-driven errors, while r$^2$SCAN fixes the grid issues of SCAN. But most importantly, the way in which we parametrize the D4 corrections by using the DC-DFT principles is vital, as an unwitting fitting of D4 ruins the accuracy for water simulations. If we drop {\bf any} of these elements of HF-r$^2$SCAN-DC4, at least one of its three appealing features is lost. \begin{figure*}[h] \centering \includegraphics[width=1.95\columnwidth]{figs/dimers.pdf} \caption{ Water dimer interaction energies for (a) the Smith stationary points\cite{SSPS90} and (b) MD-simulated water dimers as a function of the oxygen-oxygen distance.
For (a), the MAEs of the functionals (following the order in the legend) are 0.25, 0.11, 0.09, 0.17, and 0.08 kcal/mol. DLPNO-CCSD(T)-F12 has been used as a reference. For (b), the MAEs of the functionals are 0.25, 0.08, 0.20, 0.30, and 0.08 kcal/mol. Figure~\ref{fgr:dimer_s} shows the corresponding density sensitivities, and Figure~\ref{fgr:dimer_err} shows the errors of the approximations for the Smith dimers and interaction energies for the MD dimers. } \label{fgr:dimer} \end{figure*} To illustrate all these points, and how they work together, we created Figure~\ref{fgr:1}. We show how HF-r$^2$SCAN-DC4 is better than HF-SCAN for interactions of nucleobases [panel (a)], water molecules with one another [panels (b), (c), (d)], water with other molecules [panel (e)], and noncovalent interactions in general [panel (f)]. Stacking interactions in nucleobases are of vital importance in biology, as their energetics are essential for describing the formation and stability of DNA and RNA.\cite{YT05, KS19} In Figure~\ref{fgr:1}(a), we compare the accuracy of HF-SCAN and HF-r$^2$SCAN-DC4 for the interaction energies of stacked cytosine dimers at different configurations. As we can see from Figure~\ref{fgr:1}(a), HF-r$^2$SCAN-DC4 greatly reduces the errors of HF-SCAN, which systematically underbinds these stacked complexes by about 2.5 kcal/mol. This demonstrates that, despite its success for modeling water, HF-SCAN misses most of the dispersion and thus cannot compete with HF-r$^2$SCAN-DC4 in modeling noncovalent interactions (NCIs). This is especially the case for NCIs dominated by dispersion interactions, such as those present in stacked nucleobases. (See Figure~\ref{fgr:ks19_err} in the supporting information for the errors in interaction energies.)
We note that the mean absolute error (MAE) of HF-r$^2$SCAN-DC4 (0.4 kcal/mol) is very good relative to HF-SCAN, but not very impressive relative to B3LYP-D3(BJ) (less than 0.2 kcal/mol).\cite{KS19} But such functionals include only a fraction of HF exchange, so they still suffer from large density-driven errors in water and thus have larger errors for pure water (as shown below). \begin{figure*}[htb] \centering \includegraphics[width=1.7\columnwidth]{figs/hex_nbody_main.pdf} \caption{ $K$-body interaction energy errors for (a) $K=2$ and (b) $K=3$, (c) total interaction energy errors, and (d) interaction energies for the 8 water hexamers. (For higher-order $K$-body interaction energies, see Figures~\ref{fgr:nbodyerr} and~\ref{fgr:nbodypor}.) Geometries and CCSD(T)/CBS reference interaction energies are from Ref.~\cite{RSBP16}. The MAEs of HF-r$^2$SCAN-DC4 and HF-SCAN are 0.16 kcal/mol and 0.22 kcal/mol, respectively. } \label{fgr:nbody} \end{figure*} Water hexamers, ``the smallest drops of water''\cite{NM00,WBBP12}, are important, as they represent the transition from two-dimensional to three-dimensional hydrogen-bonding networks.\cite{BT09,CL10,OJ13} The energy differences between two adjacent isomers of water hexamers are tiny, making even the ordering of the isomers a very challenging test for quantum-chemical methods.\cite{BT09,SMFT08} In Figure~\ref{fgr:1}(b), we compare the energies of water hexamer isomers relative to the energy of the prism, the lowest-lying isomer.\cite{BT09,OBKS07,GMTA12} Despite being more accurate for water hexamers than most DFT methods available on the market, HF-SCAN mistakes the ordering of the isomers, as it predicts too low an energy for the chair isomer. Here too, HF-r$^2$SCAN-DC4 is superior to HF-SCAN, as it not only gives the right ordering of the isomers, but essentially reproduces the reference values for their relative energies.
If D4 is fitted without accounting for the DC-DFT principles (see below), the accuracy of HF-r$^2$SCAN-DC4 for water simulations is lost. This happened in Ref.~\cite{SM21b} and is discussed in the Methods section. We use the WATER27 data set to illustrate the importance (and subtlety) of DC-DFT for water simulations. WATER27 is a standard dataset for binding energies of water clusters. Density sensitivity, $\tilde S$, is a measure of how sensitive a given DFT simulation is to errors in densities (see Section~\ref{sec:dde} in the supporting information for further details and specific definitions). Typically, the errors of SC-DFT calculations grow with $\tilde S$, indicating the presence of large density-driven errors.\cite{SVSB21,SVSB22,SSVB22} DC-DFT reduces these large density-driven errors of SC-DFT, and thus the errors of DC-DFT do not grow with $\tilde S$. In Figure~\ref{fgr:1}(c) we plot WATER27 errors as a function of density sensitivity. As the errors of SC-r$^2$SCAN-D4 grow with $\tilde S$, so does the energetic improvement of HF-r$^2$SCAN-DC4 over SC-r$^2$SCAN-D4. Furthermore, dispersion corrections sometimes worsen SC-DFT for cases with large density-driven errors.\cite{SVSB21} This is also the case here, as the D4 correction significantly deteriorates the accuracy of SC-r$^2$SCAN (see Figure \ref{fgr:sc_water27}). The errors of HF-SCAN are also substantially lower than those of SC-r$^2$SCAN-D4, and for most of the binding energies of the WATER27 clusters, HF-SCAN is comparable to HF-r$^2$SCAN-DC4. But, for the four clusters with the largest sensitivities, HF-r$^2$SCAN-DC4 outperforms HF-SCAN by $\sim$4 kcal/mol. WATER27 is part of the GMTKN55\cite{GHBE17}, a database that we use to train the D4 parameters in HF-r$^2$SCAN-DC4 (see Methods).
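As a toy illustration of this sensitivity-based screening (our own sketch, not the production workflow; all reaction names and numbers below are invented), $\tilde S$ can be estimated as the absolute difference between the same reaction energy evaluated on two different densities, and reactions above the cutoff are then excluded from the fit:

```python
# Toy sketch of density-sensitivity screening (names/numbers invented).
# S~ is estimated as the absolute difference of the same reaction
# energy evaluated on two different densities (e.g., LDA vs. HF);
# reactions with S~ at or above the cutoff are density-sensitive (DS)
# and excluded from the D4 fitting set.

CUTOFF = 2.0  # kcal/mol, the cutoff used in the text

def sensitivity(de_lda, de_hf):
    """Density sensitivity of one reaction energy (kcal/mol)."""
    return abs(de_lda - de_hf)

def split_by_sensitivity(reactions, cutoff=CUTOFF):
    """Partition reactions into density-insensitive (DI) and DS lists."""
    di, ds = [], []
    for name, de_lda, de_hf in reactions:
        (ds if sensitivity(de_lda, de_hf) >= cutoff else di).append(name)
    return di, ds

# Hypothetical reaction energies (kcal/mol) on the two densities:
reactions = [
    ("water_dimer",           -5.1, -4.9),   # S~ = 0.2 -> DI
    ("charged_water_cluster", -85.0, -78.2), # S~ = 6.8 -> DS
    ("hexamer_prism",         -46.0, -45.3), # S~ = 0.7 -> DI
]
di, ds = split_by_sensitivity(reactions)  # only `di` enters the fit
```

The point of the sketch is only the bookkeeping: the fit never sees the DS entries, whose energetic errors are dominated by density errors rather than by the functional.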
But, according to the principles of DC-DFT, we exclude those WATER27 clusters that are density-sensitive, as their energetic errors are dominated by the errors in their densities.\cite{SVSB21} Thus none of the clusters to the right of the vertical dashed line placed at $\tilde S = 2$ kcal/mol (see Section~\ref{sec:met} for the details of this reasoning) are used in the fitting, which means HF-r$^2$SCAN-DC4 makes genuinely accurate predictions for the vast majority of these water clusters. Not only does it recover HF-SCAN for the binding energies of the water clusters, but it also provides substantial improvements for the most challenging clusters. An important question is whether or not one should {\em always} correct the density. The general principles of DC-DFT say that one should only correct the density in cases of substantial density-driven errors. In density-insensitive cases, the effect of correcting the density should be small, and may actually worsen energetics. Figure \ref{fgr:1}(d) shows energies of water 20-mers relative to the energy of the lowest of the four 20-mers. Here SC-r$^2$SCAN-D4 beats its DC counterpart every time. In contrast to the large $\tilde{S}$ values for the binding energies of the four 20-mers (the last four data points in Figure \ref{fgr:1}(c)), the sensitivities corresponding to their relative isomer energies are about twenty times smaller (see Figure \ref{fgr:20mer_s}). Thus the higher accuracy of SC-r$^2$SCAN-D4 over HF-r$^2$SCAN-DC4 does not come as a surprise. But the crucial point is that, even in this {\em low-sensitivity} scenario, the errors introduced by the HF density are far smaller than those of HF-SCAN, and remain tiny on a per-molecule basis. A crucial figure of merit is how accurate energetics are for water molecules in the vicinity of an organic molecule, especially if it is polar.
In Figure \ref{fgr:1}(e), we show errors in the interaction energies between water and aspirin for structures that we extracted from an MD simulation at T=298.15 K (see Section~\ref{sec:water} for further details on the MD simulation). The structures are sorted by the distance between the oxygen atom in water and the specified oxygen atom in the carboxyl group of aspirin. The errors of HF-r$^2$SCAN-DC4 are much smaller than those of HF-SCAN. They are also substantially smaller than those of SC-r$^2$SCAN-D4 (see Figure~\ref{fgr:aspirin_tmp}), demonstrating again the importance of both the D4 and DC components in our method. Getting NCIs right across a broad range of molecules is important, even in the absence of water. The GMTKN55 collection of 55 databases has become a standard benchmark\cite{GHBE17} and includes many databases for NCIs. In Figure~\ref{fgr:1}(f), we compare the MAEs of HF-SCAN and HF-r$^2$SCAN-DC4 for the standard datasets with intra- and intermolecular NCIs\cite{GHBE17}. Despite its high accuracy for water clusters, HF-SCAN does not capture long-range dispersion interactions. This is why it is far less accurate than HF-r$^2$SCAN-DC4 for noncovalent datasets. We can see that HF-r$^2$SCAN-DC4 is highly accurate here, and on average it beats SC-r$^2$SCAN-D4 for both inter- and intramolecular NCIs (see Table~\ref{tbl:gmtkn} in the supporting information comparing the metrics for overall performance). \section{Results} \subsection{Interaction energies for water dimers} As discussed already, HF-SCAN performs incredibly well for interactions in pure water. In this section, we look at select water dimers that are relevant to water simulations, and show how HF-r$^2$SCAN-DC4 reproduces (or even exceeds) this accuracy. More importantly, we show how each aspect of its construction (density correction, regularization of SCAN, and dispersion correction) is vital to its accuracy for water.
Later we will show that no other approximation at this level of cost comes close to this performance for water. Figure~\ref{fgr:dimer} shows the interaction energies for many water dimers (the difference between the energy of a dimer and those of the two monomers). Panel (a) shows the interaction energies at the Smith stationary points, some of which resemble geometries from dense ice structures.\cite{GAM16} Panel (b) shows the errors of the approximations in interaction energies for water dimers as a function of the distance between the two oxygen atoms. The underlying structures were extracted from an MD simulation at T=298.15 K (see Section~\ref{sec:water} for further details on the simulation). For the interaction energies of these water dimers, HF-SCAN without a dispersion correction already provides very high accuracy (with MAEs of less than 0.1 kcal/mol). Our HF-r$^2$SCAN-DC4 essentially recovers this high accuracy of HF-SCAN. Similar patterns observed for binding energies of water clusters are also seen here. By studying the various plots, one can assess the importance of the relative contributions to HF-r$^2$SCAN-DC4. First, the purple points give HF-r$^2$SCAN, to be contrasted with HF-SCAN. We see that HF-r$^2$SCAN significantly (on this scale) underestimates the interaction energy. r$^2$SCAN was designed to reproduce the results of SCAN, and the differences between the two are so small as to be negligible for most purposes. Here, however, they are clearly significant, showing that HF-r$^2$SCAN is noticeably less accurate for these dimers. The addition of the D4 correction, however, makes their errors comparable. On the other hand, we may also consider the importance of density correction. We see that SC-r$^2$SCAN-D4 considerably overestimates interaction energies. In fact, SC-r$^2$SCAN does rather well, as the errors due to the poor density and the missing dispersion cancel.
We can also observe from Figure~\ref{fgr:dimer}(b) that the improvement of HF-r$^2$SCAN-DC4 over SC-r$^2$SCAN-D4 decreases with the distance between the two oxygen atoms in the water dimers. This can be understood in terms of the underlying density sensitivity (a quantity telling us how sensitive a given calculation is to density-driven errors), which also decreases with the O-O distance (see Figure~\ref{fgr:dimer_s}). \subsection{Many-body interactions in larger water clusters} In Figure~\ref{fgr:nbody} we compare the errors of HF-r$^2$SCAN-DC4 and HF-SCAN for the interaction energies of the eight standard water hexamers.\cite{BT09,CL10} In addition to total interaction energies, we also use the many-body expansion (MBE) to show the $K$-body contributions to these energies (with $K$ between 2 and 6). This is a standard methodology for understanding the origins of errors in water models.\cite{DLPP21,RSBP16,GPCS11} The energetic importance of the $K$-body contributions decreases rapidly with $K$ (Figure~\ref{fgr:nbodypor}), making the 2-body contributions by far the most important, and these are where significant differences emerge when the density is corrected. But in order to reach chemical accuracy, a proper description of the higher-order contributions also matters. The 2-body plot shows that HF-SCAN has a rather systematic overestimate of about 0.5 kcal/mol, whereas the overestimate of HF-r$^2$SCAN-DC4 is substantially smaller for about half the clusters. The 3-body plot shows the two to be almost identical. But in the total error, we see that HF-r$^2$SCAN-DC4 is far more systematic: HF-SCAN makes errors of opposite signs, while HF-r$^2$SCAN-DC4 always overestimates by about 0.2 kcal/mol. This consistency is important in panel (d), which shows the interaction energies of the 8 hexamers.
Because HF-r$^2$SCAN-DC4 is so consistent, it gets the ordering of the interaction energies of all clusters correct, whereas HF-SCAN incorrectly predicts that the interaction energy of the chair is higher than that of the boat. The MAE of HF-r$^2$SCAN-DC4 is 0.16 kcal/mol, noticeably lower than the 0.22 kcal/mol of HF-SCAN. On average, HF-r$^2$SCAN-DC4 also improves individual $K$-body contributions to the interaction energies, except for $K=4$, where both errors are marginally small (Figure~\ref{fgr:nbodyerr}). This MBE test shows us that the improvement of HF-r$^2$SCAN-DC4 over HF-SCAN for the water hexamer interaction energies (seen also for the relative isomer energies in Figure~\ref{fgr:1}(b)) is systematic and does not result from error cancellations between different $K$-body contributions (for detailed information on the water hexamer isomerization energies in Figure~\ref{fgr:1}, see Figure~\ref{fgr:hex_rank}). \subsection{Water$\cdots$cytosine interaction energies} \begin{figure}[htb] \includegraphics[width=1\columnwidth]{figs/water_cytosine_int.pdf} \caption{ Errors in interaction energies of water$\cdots$cytosine complexes sorted by the distance between the oxygen atom in cytosine and the oxygen atom in water. Reference interaction energies have been computed at the DLPNO-CCSD(T)-F12/AVQZ level of theory and are shown in Figure~\ref{fgr:cc}. } \label{fgr:wc} \end{figure} \begin{figure*}[htb] \includegraphics[width=1.95\columnwidth]{figs/wtmads.pdf} \caption{ (a) The mean absolute error for the water-based reactions appearing in this work (hexamer isomer energies, water 20-mer isomer energies, WATER27 binding energies, water-small organic molecule interaction energies, and water dimer interaction energies) versus WTMAD-2 for the GMTKN55 database for selected functionals. For a further description of the reactions used for the $y$-axis, see Section~\ref{sec:water_int} in the supporting information. The HF-SCAN-D4 functional used here is from Ref.~\cite{SM21}.
(b) The hexagon plot with MAEs for selected water-based datasets and WTMAD-2 values for the whole GMTKN55 database (for WTMAD-2 values for the other GMTKN55 databases, see Figure~\ref{fgr:g}). Abbreviations of isomerization (I), binding (B), and interaction (T) energy are noted in the vertex captions. MAEs of HF-r$^2$SCAN-DC4 for individual GMTKN55 datasets are shown in Table~\ref{tbl:gmtkn}. In Figure~\ref{fgr:nohb}, we give further details about the interaction energies used in the ``water-small organic molecule'' dataset. } \label{fgr:w} \end{figure*} In Figure~\ref{fgr:wc}, we study the performance of the different functional variants for the microhydration of cytosine, specifically focusing on the interaction energies in water$\cdots$cytosine complexes. We generate these complexes as described in Section~\ref{sec:water}, and in all of them, water interacts with cytosine through the hydrogen bond formed between the hydrogen atom in water and the oxygen atom in cytosine. For each complex, the errors of HF-r$^2$SCAN-DC4 are small, and with an MAE of 0.13 kcal/mol, it is the best performer in Figure~\ref{fgr:wc}. The errors of HF-SCAN are much smaller here than for cytosine dimers (Figure~\ref{fgr:1}(a)), in which the role of dispersion is more important. Nevertheless, HF-r$^2$SCAN-DC4 provides here a significant improvement over HF-SCAN. It is also interesting to observe what happens when we add the D4 correction to HF-r$^2$SCAN and its SC counterpart. In the case of HF-r$^2$SCAN, the errors in the interaction energies are greatly reduced (roughly by a factor of 6 on average). In stark contrast, adding D4 to SC-r$^2$SCAN significantly deteriorates its accuracy, as SC-r$^2$SCAN already overbinds water$\cdots$cytosine complexes and D4 makes the overbinding stronger (for the sensitivities of the cytosine$\cdots$cytosine and water$\cdots$cytosine interactions, see Figure~\ref{fgr:g}).
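The $K$-body decomposition used in Figure~\ref{fgr:nbody} follows the standard inclusion-exclusion recursion of the MBE. A minimal sketch of that bookkeeping is below (ours, with an invented pairwise-additive toy energy standing in for the supermolecular electronic-structure calls of the real workflow):

```python
# Minimal many-body expansion (MBE): the unique K-body increment of a
# sub-cluster S is delta(S) = E(S) - sum of increments of all proper
# sub-clusters; eps_K collects all increments of order K.
from itertools import combinations

def mbe_contributions(n, energy):
    """Return {K: eps_K} for n monomers; energy(frozenset) -> E."""
    delta = {}  # sub-cluster -> its unique many-body increment
    subsets = [frozenset(c) for k in range(1, n + 1)
               for c in combinations(range(n), k)]
    for s in subsets:  # generated in order of increasing size
        delta[s] = energy(s) - sum(d for t, d in delta.items() if t < s)
    eps = {}
    for s, d in delta.items():
        eps[len(s)] = eps.get(len(s), 0.0) + d
    return eps

def toy_energy(s):
    """Invented model: -10 per monomer, -1 per pair, no higher terms."""
    return -10.0 * len(s) - 1.0 * (len(s) * (len(s) - 1) // 2)

eps = mbe_contributions(4, toy_energy)
# For a strictly pairwise model, all K >= 3 increments vanish exactly,
# so eps[2] alone equals the total interaction energy.
```

A strictly pairwise model terminating at $K=2$ is a useful sanity check on the recursion before applying the same bookkeeping to real hexamer energies, where the $K\geq 3$ terms are small but not negligible.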
\subsection{Wide applicability of HF-r$^2$SCAN-DC4} A functional that works extremely well for pure water but nothing else is not widely applicable. Recently, the GMTKN55 collection of 55 databases has become a popular benchmark for testing the accuracy of density functionals for main-group chemistry. Figure~\ref{fgr:w} has been designed to illustrate the performance of functionals for both pure water and the GMTKN55 database simultaneously. The water metric ($y$-axis on the left) combines most of the reactions with water used in this paper, and is carefully defined in the supporting information Section~\ref{sec:water_int}. Figure~\ref{fgr:w}(a) shows errors on GMTKN55 on the $x$-axis and errors on the water metric on the $y$-axis, each in kcal/mol. The $x$-axis ranges from about 3-10 kcal/mol, spanning the performance of modern approximations for main-group chemistry, such as atomization energies. The $y$-axis range is much smaller, spanning less than 4.0 kcal/mol, reflecting the much smaller magnitude of NCIs in water, and how high the accuracy needs to be in order to have an accurate model for water. Here, HF-SCAN sets a high standard, with a water error near 1.0 kcal/mol (the chemical accuracy claimed in Ref.~\cite{DLPP21}), while most standard-use functionals cannot compete. On the other hand, SCAN is designed mainly to improve materials calculations without the cost of a hybrid functional, and HF-SCAN has a high error on GMTKN55 (about 9 kcal/mol). Popular functionals have much smaller GMTKN55 errors, but perform worse on water. We also show the many variants of HF-r$^2$SCAN-DC4 that do not include all the right ingredients; all of them perform worse on water than HF-SCAN. We finally include the $\omega$B97M-V functional\cite{MH16}, which might be considered the DFT gold standard here, with the smallest errors for both water and main-group chemistry.
But this range-separated functional with nonlocal correlation is far more expensive to compute than most functionals,\cite{MH18} and is less practical for DFT-MD simulations than, e.g., SCAN. We have included it here only to show what is possible in principle with DFT. \begin{figure*}[htb] \centering \includegraphics[width=1.95\columnwidth]{figs/sm21.pdf} \caption{ Comparison between HF-r$^2$SCAN-DC4 and SM21 for (a) water hexamer isomerization energies, (b) water hexamer interaction energies, (c) water 20-mer isomerization energies, and (d) a hexagonal plot analogous to Figure~\ref{fgr:w}(b). SM21 (green) is HF-r$^2$SCAN but with the different D4 parameters obtained in Ref.~\cite{SM21b}. } \label{fgr:golo} \end{figure*} But the performance of HF-r$^2$SCAN-DC4 is remarkable. Its errors on both water {\em and} the GMTKN55 dataset are almost half of those of HF-SCAN. No other functional in our collection comes close for water. Clearly, all the chemically-inclined approximations that are comparable for main-group chemistry do much worse. In Figure~\ref{fgr:w}(b), we show the hexagon plots comparing the MAEs of several density functionals, where the positions of five vertices denote the MAEs for individual water-based datasets, while the sixth vertex denotes the overall performance of the functionals for the whole GMTKN55 database, as measured by WTMAD-2. It is the MAE for all the reactions from these five water-based datasets that we use as the quantity on the $y$-axis in Figure~\ref{fgr:w}(a). The size of the hexagon of HF-r$^2$SCAN-DC4 is the closest to that of the more costly $\omega$B97M-V. We can also see that the performance of HF-r$^2$SCAN-DC4 is far superior to that of HF-SCAN. M062X-D3(0), a meta-hybrid that is very accurate for small organic molecules,\cite{GHBE17} yields a WTMAD-2 that is slightly lower than that of HF-r$^2$SCAN-DC4.
But, for water simulations, M062X-D3(0) is nowhere close to HF-r$^2$SCAN-DC4, as can be seen from the positions of the remaining five vertices. \section{Methods} \label{sec:met} The basic principles of DC-DFT are covered elsewhere in the literature\cite{KSB13,VSKS19}, and reviewed in the supporting information. In most KS-DFT calculations, the error in the density has a negligible effect on the energy errors. But sometimes the error in an SC density leads to a noticeable contribution, which can be reduced if a more accurate density is used instead. For many semilocal exchange-correlation approximations in molecular calculations, when a calculation is density-sensitive, the HF density often yields significantly smaller energy errors. These principles have led to improved energetics for reaction barrier heights, electron affinities, the ground-state geometries of non-covalently bound systems, etc.\cite{KSB14,JS08,LFB10,LB10} Application of the principles of DC-DFT is subtle in the case of r$^2$SCAN-D4, because the density-driven error must be separated from the fitting of the D4 corrections. For example, for halogen bonds, the density-driven errors are far larger than the dispersion corrections, so all fitting must be done on density-corrected energetics. Moreover, when empirical functionals contain parameters, such parameters should be fit only on density-insensitive calculations, so that the parameters optimize the 'true' functional error. With these principles in mind, we find the parameters for HF-r$^2$SCAN-DC4 using only the density-insensitive calculations in the GMTKN55 dataset. We find their optimum values by minimizing the MAE over all such cases. This is detailed in the supporting information. This is why we use the acronym DC4 instead of D4, meaning that the D4 fit accounts for density correction.
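The fitting step just described can be sketched schematically. In the toy model below (ours; the real fit varies the D4 damping parameters, and all numbers are invented), a single dispersion scale factor stands in for those parameters, and it is chosen by minimizing the MAE of the corrected errors over the density-insensitive subset only:

```python
# Schematic of the DC4 fit (toy model, invented numbers): minimize the
# MAE of dispersion-corrected HF-DFT errors over density-INSENSITIVE
# reactions only.  A single scale s stands in for the D4 parameters.

def mae(errors):
    """Mean absolute error of a list of signed errors."""
    return sum(abs(e) for e in errors) / len(errors)

def fit_scale(di_reactions, candidates):
    """Each reaction is (base_error, raw_dispersion); the corrected
    error is base_error + s * raw_dispersion.  Return the candidate s
    with the smallest MAE on the DI set."""
    def cost(s):
        return mae([err + s * disp for err, disp in di_reactions])
    return min(candidates, key=cost)

# Invented DI data: HF-DFT errors and raw dispersion terms (kcal/mol).
di_set = [(+1.2, -1.0), (+0.9, -1.1), (+1.1, -0.9)]
grid = [0.0, 0.5, 1.0, 1.5, 2.0]
s_best = fit_scale(di_set, grid)  # -> 1.0 for this toy data
```

The essential design choice mirrored here is the restriction of the cost function to the DI subset: density-sensitive reactions would otherwise push the dispersion parameters to compensate for density-driven errors, which is exactly what ruins the water accuracy.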
In Refs.~\cite{SVSB22} and \cite{SSVB22}, we proposed DC(HF)-DFT, a DC-DFT procedure that selectively uses HF densities based on a density-sensitivity criterion. The main idea of DC(HF)-DFT is to use HF-DFT for density-sensitive (DS) reactions and SC-DFT for density-insensitive (DI) reactions (possible spin contamination of the HF results is also taken into account, as detailed in Section~\ref{sec:hfdft}). While we consider DC(HF)-DFT a state-of-the-art DC-DFT procedure, for our HF-r$^2$SCAN-DC4 we use HF-DFT, meaning that the functional is always evaluated on the HF density regardless of the sensitivity criterion. To use DC(HF)-DFT, we would need to compute the density sensitivity for each reaction of interest and possibly make adjustments to the cut-off value that is used to declare whether a given reaction is DS or DI.\cite{SVSB22,SSVB22} This would also require having two sets of D4 parameters, one for DS and one for DI reactions. All these efforts would undermine the ease of use of r$^2$SCAN, which is a general-purpose functional. For this reason, and encouraged by the very good performance of HF-DFT with SCAN-like functionals\cite{SRP15,FKNP20}, we employ HF-DFT\cite{TCCL} as the DC-DFT procedure for HF-r$^2$SCAN-DC4. While our HF-r$^2$SCAN-DC4 can be routinely used by applying it to HF orbitals without ever needing to calculate the density sensitivity of a given reaction, the use of DC-DFT principles and density sensitivity is vital for our training of HF-r$^2$SCAN-DC4, as explained above. To illustrate what can happen when these principles are not applied, we show results from Ref.~\cite{SM21b}. This is a version of HF-r$^2$SCAN-D4, but one where all reactions in GMTKN55 were used in the fit, with WTMAD-2\cite{GHBE17} as the cost function instead. Figure~\ref{fgr:golo} illustrates the results for the larger water clusters. In every case, they are noticeably worse than ours.
Moreover, (d) shows that, apart from matching on the WTMAD-2 measure, HF-r$^2$SCAN-DC4 yields more accurate results in every other case. \section{Conclusions} \label{sec:conc} The work of Ref.~\cite{DLPP21} was a breakthrough in models for water, showing that, by using the principles of DC-DFT, a moderate-cost density functional approximation approached chemical accuracy for many relevant properties of small water clusters. However, that functional lacks dispersion corrections, yielding large errors for the energetics of organic and biological molecules. It also inherits some of the numerical issues of the original SCAN functional, which have been eliminated by using r$^2$SCAN instead in most other applications. However, the small differences between these two wreak havoc on the much smaller scale of the subtle energy differences of water clusters. The present work shows that, by a very careful application of the principles of DC-DFT, all these difficulties can be overcome, and even greater accuracy achieved for pure water, while still including dispersion for other molecules where it can be vital. Finding the correct parameters depends crucially on training only on density-insensitive chemical reactions, as inclusion of density-sensitive reactions yields sub-optimal values for the parameters. We suggest HF-r$^2$SCAN-DC4 be tested and applied in solution wherever practical. \acknowledgement{ ES, SS, and YK are grateful for support from the National Research Foundation of Korea (NRF-2020R1A2C2007468 and NRF-2020R1A4A1017737). KB acknowledges funding from NSF (CHE-2154371). SV acknowledges funding from the Marie Sk\l{}odowska-Curie grant 101033630 (EU's Horizon 2020 programme). We thank John Perdew and Francesco Paesani and his group for many useful discussions. }
\subsection{Run Time Assurance Configurations} \label{app:configurations} In \figref{fig:rta_on}, we showed a general method for including RTA in the training loop and purposefully left it vague. In the literature, there are many ways of connecting RTA, ranging from treating it as an unknown feature of the environment to using it to generate additional training data. In this work, we have separated these various ways of connecting the RTA into the 6 configurations shown in \figref{fig:app_rta_configs}. While we were only able to experiment with the first 5, the sixth is presented for future work. The configurations are listed in order of increasing complexity. Each configuration builds on the previous ones, helping us observe the impact of each addition. \begin{figure} \centering \includegraphics[width=0.4\columnwidth]{figures/rta_config_venn_diagram.pdf} \caption{The RTA configurations used in our experiments represented within the three main categories of Safe Reinforcement Learning outlined in Section \ref{sec:SRL}.} \label{fig:app_rta_configs} \end{figure} The configurations are explained in detail below. Because SAC and PPO collect different data tuples during training, we define the configurations using different terms, $data_{SAC}$ and $data_{PPO}$, respectively. \subsubsection{(1) Baseline (no RTA)} This configuration, demonstrated in \figref{fig:rta_off}, is used as a baseline against which to compare all the RTA configurations. In this configuration, the agent learns according to the RL algorithm without any modifications. Note that for this comparison to be fair, the environment must be the same, with no alterations to the initial set or terminal conditions. \begin{align*} data_{SAC} &= \{\obs, \action_{\rm NN}, r, \obs'\} \\ data_{PPO} &= \{\obs, \action_{\rm NN}, r, v, logp(\action_{\rm NN})\} \end{align*} $\obs$ is the input observation that led to the agent providing the output action, $\action_{\rm NN}$.
$\obs'$ is the observation of the state reached after taking action $\action_{\rm NN}$ and $r$ is the reward value associated with it. $v$, the estimated value of the reached state, and $logp(\action_{\rm NN})$, the log-probability of selecting $\action_{\rm NN}$ given the current policy, are terms specific to PPO. \subsubsection{(2) Baseline punishment} In this configuration, we assign a negative reward, i.e. punishment $p$, if $unsafe?$ returns true, meaning at least one safety constraint was violated. This configuration adds SRL-style reward shaping to the problem. Instead of only maximizing the reward, the problem has two goals: (1) complete the task and (2) minimize the punishment, or cost, incurred from violating constraints. The remaining configurations cannot factor in this kind of punishment because they rely on safe exploration, which does not allow any violations of the safety constraints. \begin{equation*} data_{SAC} = \begin{cases} \{\obs, \action_{\rm NN}, r + p, \obs'\}, & \text{if }unsafe? \\ \{\obs, \action_{\rm NN}, r, \obs'\}, & \text{otherwise} \end{cases} \end{equation*} \begin{equation*} data_{PPO} = \begin{cases} \{\obs, \action_{\rm NN}, r + p, v, logp(\action_{\rm NN})\}, & \text{if }unsafe? \\ \{\obs, \action_{\rm NN}, r, v, logp(\action_{\rm NN})\}, & \text{otherwise} \end{cases} \end{equation*} \subsubsection{(3) RTA no punishment} This configuration is the simplest form of safe exploration. Nothing changes from the baseline configuration, except the agent remains safe throughout the training process because of the RTA. \begin{equation*} data_{SAC} = \begin{cases} \{\obs, \action_{\rm NN}, r, \obs'\}, & \text{if }intervening? \\ \{\obs, \action_{\rm NN}, r, \obs'\}, & \text{otherwise} \end{cases} \end{equation*} \begin{equation*} data_{PPO} = \begin{cases} \{\obs, \action_{\rm NN}, r, v, logp(\action_{\rm NN})\}, & \text{if }intervening? 
\\ \{\obs, \action_{\rm NN}, r, v, logp(\action_{\rm NN})\}, & \text{otherwise} \end{cases} \end{equation*} \subsubsection{(4) RTA punishment} This configuration adds an element of reward shaping to the previous configuration. Since we want the agent to learn the correct action to take in a scenario without the help of an RTA, we assign a punishment for having the RTA intervene. By adding this punishment, $p$, when the RTA intervenes, the agent should learn to make a distinction between safe and unsafe actions since safe actions will not incur a punishment. \begin{equation*} data_{SAC} = \begin{cases} \{\obs, \action_{\rm NN}, r + p, \obs'\}, & \text{if }intervening? \\ \{\obs, \action_{\rm NN}, r, \obs'\}, & \text{otherwise} \end{cases} \end{equation*} \begin{equation*} data_{PPO} = \begin{cases} \{\obs, \action_{\rm NN}, r + p, v, logp(\action_{\rm NN})\}, & \text{if }intervening? \\ \{\obs, \action_{\rm NN}, r, v, logp(\action_{\rm NN})\}, & \text{otherwise} \end{cases} \end{equation*} \subsubsection{(5) RTA Corrected Action} In this configuration, we build on the idea of helping the agent identify the correct action to take in states near violating the safety constraints. Instead of punishing the agent for having the RTA intervene, we correct the agent's output to match that of the RTA's. In this manner, the agent only learns the actions actually taken in the environment. \begin{align*} data_{SAC} &= \{\obs, \action_{\rm act}, r, \obs'\} \\%, & data_{PPO} &= \{\obs, \action_{\rm act}, r, v, logp(\action_{\rm act})\} \end{align*} \subsubsection{(6) Neural Simplex Architecture (NSA)} This configuration (not used in this work but planned for future work) is based on the SRL approach first published in \cite{phan2020neural}. In the authors' original implementation, NSA is used for retraining a learned policy. However, the retraining is done in an online approach, which can be easily adjusted to train a control policy from scratch. 
This configuration estimates the result of taking the unsafe action and adds that estimate to the training data. This allows the agent to learn about unsafe actions without actually executing them. The additional data should help the agent develop a much better understanding of the environment and, thus, learn a more optimal policy after fewer timesteps. Currently, this configuration is limited to off-policy RL algorithms, but we are actively working on extending it for use in on-policy algorithms. \begin{equation*} data_{SAC} = \begin{cases} \begin{cases} \{\obs, u_{b}, r, \obs'\} \text{ and } \\ \{\obs, \action_{\rm NN}, est\_r, est\_\obs'\} \end{cases}, & \text{if }intervening? \\ \{\obs, \action_{\rm NN}, r, \obs'\}, & \text{otherwise} \end{cases} \end{equation*} Here, $est\_r$ and $est\_\obs'$ are the estimated reward and estimated next observation, respectively. If an implicit RTA is used, the estimates can be computed using the internal simulation, $\phi_1^{\action_{NN}}$, which is used for determining if an action is safe. \subsection{Environments} \label{app:environments} We ran our experiments in three environments with varying levels of complexity. By running our experiments in environments with different levels of complexity, we can observe whether the trends remain the same. If they do, then we can reasonably assume the trends will remain in even more complex environments. Currently, and to the best of our knowledge, these three environments are the only ones provided with accompanying RTA\footnote{As more RL environments are released with RTA, we hope to include them in our study to ensure our results continue to hold true.}. \subsubsection{Pendulum} This environment was previously used in \cite{cheng2019end} as a good indicator of the effectiveness of SRL over standard Deep RL. We use the same initial conditions and constraints described in their work, explained below. 
The goal of the agent in this environment is to use an actuator to keep the frictionless pendulum upright and within the bounds of $\pm 1rad \approx \pm 57^{\circ}$. Thus, the inequality constraint the RTA is designed to uphold can be written as, \begin{equation} \varphi_1(\state) := 1 - |\theta|, \end{equation} where $\theta$ is the displacement angle of the pendulum measured from the upright position. The internal plant model changes according to the discrete dynamics \begin{align} \label{eq:dynamics} \begin{split} \omega_{t+1} &= \omega_{t} + \left(\frac{-3g}{2l} \sin(\theta_t + \pi) + \frac{3\action_t}{ml^2}\right) \Delta t \\ \theta_{t+1} &= \theta_{t} + \omega_{t} \Delta t + \left(\frac{-3g}{2l} \sin(\theta_t + \pi) + \frac{3\action_t}{ml^2}\right) \Delta t^2, \end{split} \end{align} where $g=10$, $l=1$, $m=1$, $\Delta t=0.05$, and $\action_t$ is the control from the neural network in the range $[-15, 15]$. Additionally, within the environment, the pendulum's angular velocity, $\omega$, is clipped to the range $[-60, 60]$, and the angle from upright, $\theta$, is aliased within $[-\pi, \pi]$ radians. $\theta$ is measured from upright and increases as the pendulum moves clockwise. These values, $\theta$ and $\omega$, are then used to determine the input values for the neural network controller. The input observation is \begin{equation} \obs = [\cos(\theta), \sin(\theta), \omega]^T. \end{equation} The pendulum is randomly initialized within a subset of the safe region in order to ensure the RTA has enough time to intervene before the safety constraint is violated. $\theta_0$ is randomly initialized between $\pm0.8rad$, and $\omega_0$ is randomly initialized between $\pm1.0rad/s$. In the event the safety constraint is violated, the episode is deemed a failure and immediately terminated. If the simulation were allowed to continue, the problem goal would change from simply keeping the pendulum upright to also include learning how to swing back up.
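As an illustration, the discrete pendulum dynamics and observation above can be sketched as follows (a minimal sketch in our own naming, not the environment's actual code):

```python
import math

# Constants from the text: g = 10, l = 1, m = 1, dt = 0.05
G, L_POLE, M, DT = 10.0, 1.0, 1.0, 0.05

def pendulum_step(theta, omega, u):
    """One step of the discrete pendulum dynamics; u is the control in [-15, 15]."""
    u = max(-15.0, min(15.0, u))
    acc = (-3.0 * G / (2.0 * L_POLE)) * math.sin(theta + math.pi) + 3.0 * u / (M * L_POLE**2)
    omega_next = omega + acc * DT
    theta_next = theta + omega * DT + acc * DT**2
    # Clip angular velocity to [-60, 60] and alias the angle to [-pi, pi],
    # as described for the environment.
    omega_next = max(-60.0, min(60.0, omega_next))
    theta_next = (theta_next + math.pi) % (2.0 * math.pi) - math.pi
    return theta_next, omega_next

def observation(theta, omega):
    """Input observation for the neural network controller."""
    return [math.cos(theta), math.sin(theta), omega]
```

With zero control, a pendulum displaced from upright accelerates away from the upright position, which is why the RTA must intervene well before the $\pm 1$ rad bound is reached.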
Additionally, this termination helps us identify safety violations later in our analyses, since any safety violation results in an episode length of less than 200 timesteps. The reward function was modified as well, by adding a constant of $5$, chosen to make the majority of the reward values positive. By keeping the reward positive, the agent is encouraged not to terminate the episode early; if the reward were mostly negative, the agent might learn the fastest way to terminate the episode in order to maximize the cumulative reward. The resulting reward function, Equation \ref{eq:app_reward_pendulum}, has a cumulative maximum of $1000$ instead of $0$. \begin{equation} \label{eq:app_reward_pendulum} r_t = 5 - (\theta^2_t + 0.1\omega_t^2 + 0.001\action_t^2) \end{equation} We use this reward function, Equation \ref{eq:app_reward_pendulum}, for all the evaluations we conducted. The punishment value used in the various configurations when the RTA intervenes is $p = -1$. The RTA design implemented in this environment is a simple implicit simplex design. The backup controller, described by Equation \ref{eq:app_rta_pendulum}, intervenes if the desired control action is not recoverable using the backup controller. To determine whether the desired action, $\action_{NN}$, is recoverable, it is simulated internally. If the simulated next state, $\bar{\state}_{t+1} = \phi_{1}^{\action_{NN}}(\state_t)$, violates the safety constraint, then the backup controller intervenes. If the simulated next state, $\bar{\state}_{t+1}$, is safe, a trajectory of up to 100 timesteps is simulated from $\bar{\state}_{t+1}$ using the backup controller. If any simulated state in the trajectory is unsafe, the desired action, $\action_{NN}$, is determined unsafe, and the backup controller intervenes.
If the trajectory remains safe or a simulated next state is within the initial conditions, the desired action is determined to be safe, and the backup controller does not intervene. \begin{equation} \action_b(\state) = \min(\max(\frac{-32}{\pi} \theta, -15), 15) \label{eq:app_rta_pendulum} \end{equation} \subsubsection{Spacecraft Docking 2D \& 3D} The cost of building and sending spacecraft into orbit is on the order of hundreds of millions of dollars. Therefore, it is in everyone's best interest to keep spacecraft in orbit operational and to prevent collisions. Spacecraft docking is a common and challenging problem with a high risk of failure in the event an error occurs in the docking procedure and the two spacecraft collide. Here, we describe the problem with 3-dimensional dynamics, but we repeated our experiments in a 2-dimensional environment where all $z$ values are held to a constant $0$. The goal of the agent in these environments is to use mounted thrusters, which move the \emph{deputy} spacecraft in the $x$, $y$, and $z$ directions, to reach a docking region around the \emph{chief} spacecraft located at the origin. The state and observation are the same, \begin{equation*} \state = \obs = [x, y, z, \dot{x}, \dot{y}, \dot{z}]^T. \end{equation*} The action vector consists of the net force produced by the thrusters in each direction, \begin{equation*} \action = [F_x,F_y,F_z]^T, \end{equation*} where each net force is a real value bounded in the range $\Action = [-1, 1]m/s^2$. In these environments, the relative motion dynamics between the deputy and chief spacecraft are given by the Clohessy-Wiltshire equations \cite{clohessy1960terminal}.
These equations are a first-order approximation represented by \begin{equation} \label{eq:app_system_dynamics} \state_{t+1} = A {\state_{t}} + B\action_t, \end{equation} where \begin{align} \centering A = \begin{bmatrix} 1 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 1 \\ 3n^2 & 0 & 0 & 1 & 2n & 0 \\ 0 & 0 & 0 & -2n & 1 & 0 \\ 0 & 0 & -n^2 & 0 & 0 & 1 \\ \end{bmatrix},~ B = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ \frac{1}{m} & 0 & 0 \\ 0 & \frac{1}{m} & 0 \\ 0 & 0 & \frac{1}{m} \\ \end{bmatrix}. \end{align} In these equations, $n = 0.001027rad/s$ is the spacecraft mean motion and $m = 12kg$ is the mass of the deputy spacecraft. The agent, i.e., the deputy spacecraft, is randomly initialized in a stationary position ($\nu_{\rm H} = 0m/s$) around the chief such that the distance from the chief, $d_{\rm H}$, is in the range $[100, 150]m$. From there, the deputy successfully docks if the distance between the deputy and chief, $d_{\rm H} = (x^2+y^2+z^2)^{1/2}$, is less than $20m$ and the deputy's relative speed, $\nu_{\rm H} =(\dot{x}^2+\dot{y}^2+\dot{z}^2)^{1/2}$, is less than $0.2m/s$. If the deputy is traveling faster than $0.2m/s$ within the docking region, then a crash occurs and the agent fails the task. RTA is used in these environments to enforce a distance-dependent speed limit and maximum velocity limits\footnote{More information on how and why these constraints were chosen can be found in \cite{dunlap2021Safe, dunlap2021comparing}.}. Together, these constraints keep the deputy spacecraft controllable and prevent collisions caused by the deputy approaching too fast. The distance-dependent speed limit is defined as \begin{equation} \label{eq:ha1} \varphi_1(\state) := \nu_D - \nu_{\rm H} + c d_{\rm H}, \end{equation} where $\nu_D = 0.2m/s$ defines the maximum allowable docking velocity and $c = 2n s^{-1}$ is a constant.
The maximum velocity limits, with $v_{\rm max} = 10m/s$, can be written as the inequality constraints \begin{equation} \label{eq:app_ha2} \varphi_2(\state) := v_{\rm max}^2 - \dot{x}^2, \quad \varphi_3(\state) := v_{\rm max}^2 - \dot{y}^2, \quad \varphi_4(\state) := v_{\rm max}^2 - \dot{z}^2. \end{equation} \begin{table*}[htbp] \centering \caption{Spacecraft Docking 2D \& 3D reward function components} \begin{tabular}{lc}\hline \multicolumn{2}{c}{Terminal Rewards: All Configurations} \\ \hline Successfully Docked $(d_{\rm H} \leq 20m$ and $\nu_{\rm H}\leq 0.2 m/s)$ & +1 \\ Crashed $(d_{\rm H} \leq 20 m$ with a velocity $\nu_{\rm H}> 0.2 m/s)$ & -1 \\ Out of Bounds $(d_{\rm H} > 200m)$ & -1 \\ Over Max Time/Control & -1 \\ \hline \multicolumn{2}{c}{Dense Reward: All Configurations} \\ \hline Proximity & $0.0125(\Delta d_{\rm H} )$ \\ \hline \multicolumn{2}{c}{Safety Rewards: Punishment Configurations} \\ \hline If RTA is Intervening & $-0.001$ \\ Over Max Velocity & $-0.1 - 0.1(\nu_{\rm H} - v_{\rm max})$ \\ \hline \end{tabular} \label{tab:rewards} \end{table*} The reward functions for these environments are defined by the sparse and dense components given in \tabref{tab:rewards}\footnote{These values were provided by the authors of the environments during early development and do not match those published in \cite{ravaioli2022safe}. Additionally, these values were chosen with PPO as the target RL algorithm, which helps explain why SAC struggled with learning to complete the task.}. The sparsely defined terminal and safety reward components are only applied if the agent meets the specified requirements. In contrast, the dense reward component is computed after each timestep. In our experiments, the evaluation returns are computed using all the components defined in \tabref{tab:rewards}. However, during training, the safety components are ignored unless the punishment is required by the configuration.
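As a concrete illustration, the safety constraints $\varphi_1$--$\varphi_4$ and the docking-success check above can be sketched as follows (a minimal sketch in our own naming, not the environments' actual implementation):

```python
import math

N_MEAN = 0.001027          # spacecraft mean motion n, rad/s
NU_D = 0.2                 # maximum allowable docking velocity, m/s
C_SLOPE = 2.0 * N_MEAN     # speed-limit slope c = 2n, 1/s
V_MAX = 10.0               # maximum per-axis velocity, m/s

def constraints(state):
    """Evaluate phi_1..phi_4 for state = [x, y, z, xd, yd, zd]; safe iff all >= 0."""
    x, y, z, xd, yd, zd = state
    d_h = math.sqrt(x * x + y * y + z * z)          # distance from chief
    nu_h = math.sqrt(xd * xd + yd * yd + zd * zd)   # relative speed
    phi1 = NU_D - nu_h + C_SLOPE * d_h              # distance-dependent speed limit
    phi2 = V_MAX**2 - xd**2
    phi3 = V_MAX**2 - yd**2
    phi4 = V_MAX**2 - zd**2
    return [phi1, phi2, phi3, phi4]

def docked(state):
    """Successful docking: within 20 m of the chief, moving slower than 0.2 m/s."""
    x, y, z, xd, yd, zd = state
    d_h = math.sqrt(x * x + y * y + z * z)
    nu_h = math.sqrt(xd * xd + yd * yd + zd * zd)
    return d_h < 20.0 and nu_h < 0.2
```

Note how $\varphi_1$ loosens with distance: a deputy far from the chief may move quickly, but the allowed speed shrinks toward $\nu_D$ as it approaches the docking region.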
\subsection{Hyperparameters} \label{app:hparams} Providing the hyperparameters used in RL experiments is crucial for recreating the results. In all of our experiments, we train 10 agents using the following random seeds, \begin{equation*} [1630, 2241, 2320, 2990, 3281, 4930, 5640, 8005, 9348, 9462]. \end{equation*} Additionally, in this section, we provide the remaining hyperparameters used for training. No matter the configuration, the following hyperparameters were used. PPO hyperparameters are provided in \tabref{tab:ppo_hparams}. SAC hyperparameters are provided in \tabref{tab:sac_hparams}. \begin{table}[hp] \centering \caption{PPO Hyperparameters} \begin{tabular}{l|c|c|c} & Pendulum & Docking 2D & Docking 3D \\ \midrule actor architecture & 64 tanh, 64 tanh, 1 linear & 64 tanh, 64 tanh, 2 linear & 64 tanh, 64 tanh, 3 linear \\ critic architecture & 64 tanh, 64 tanh, 1 linear & 64 tanh, 64 tanh, 2 linear & 64 tanh, 64 tanh, 3 linear \\ epoch length & 4000 & 10564 & 10564 \\ epochs & 100 & 100 & 100 \\ discount factor $\gamma$ & 0.0 & 0.988633 & 0.988633 \\ clip ratio & 0.2 & 0.2 & 0.2 \\ actor learning rate & 0.0003 & 0.001344 & 0.001344 \\ critic learning rate & 0.001 & 0.001344 & 0.001344 \\ updates per epoch & 80 & 34 & 34 \\ target kl & 0.01 & 0.01 & 0.01 \\ GAE-$\lambda$ & 0.0 & 0.904496 & 0.904496 \\ max episode length & 200 & 1000 & 1000 \\ \end{tabular} \label{tab:ppo_hparams} \end{table} \begin{table}[hp] \centering \caption{SAC Hyperparameters} \begin{tabular}{l|c|c|c} & Pendulum & Docking 2D & Docking 3D \\ \midrule actor architecture & 64 ReLU, 64 ReLU, 1 tanh & 64 ReLU, 64 ReLU, 2 tanh & 64 ReLU, 64 ReLU, 3 tanh \\ critic architecture & 64 ReLU, 64 ReLU, 1 ReLU & 64 ReLU, 64 ReLU, 1 ReLU & 64 ReLU, 64 ReLU, 1 ReLU \\ epoch length & 400 & 1000 & 1000 \\ epochs & 40 & 1000 & 1000 \\ replay buffer size & 10000 & 10000 & 10000 \\ discount factor $\gamma$ & 0.99 & 0.99 & 0.99 \\ polyak & 0.995 & 0.995 & 0.995 \\ entropy coefficient $\alpha$ & 0.2 & 0.2 
& 0.2 \\ actor learning rate & 0.001 & 0.001 & 0.001 \\ critic learning rate & 0.001 & 0.001 & 0.001 \\ minibatch size & 256 & 256 & 256 \\ update after \_ step(s) & 1 & 1 & 1 \\ max episode length & 200 & 1000 & 1000 \\ \end{tabular} \label{tab:sac_hparams} \end{table} \subsection{(Deep) Reinforcement Learning} This subsection contains papers related to general RL topics. The Safe RL papers will be more focused on what makes those methods ``safe''. This is a link for term definitions that I'll need to double-check against what I have: \href{https://towardsdatascience.com/the-complete-reinforcement-learning-dictionary-e16230b7d24e}{RL dictionary} \subsubsection{A (Long) Peek into Reinforcement Learning (Lil'Log)} \textbf{Lillian Weng, Lil'Log, 2018, \href{https://lilianweng.github.io/lil-log/2018/02/19/a-long-peek-into-reinforcement-learning.html#key-concepts}{link}} This blog post, which is highly recommended to people new to RL, explains the concepts of RL in somewhat simpler terms. In essence, this is a summary of a summary. \textbf{What is RL?}: The goal of RL is to learn a good strategy for an agent to complete a task from experimental trials, with the level of success represented as a reward. With the optimal strategy, the agent is capable of actively adapting to the environment to maximize future rewards. This is done by an agent choosing actions in an environment. This is illustrated in Figure \todo{get figure describing interaction of agent and environment, probably figure 3 from this blog} The types of RL are defined by how they use 3 distinct components and combinations of them. These components are: \begin{itemize} \item \textbf{Model}: a descriptor of the environment. With a model, we can learn or infer how the environment responds to different actions. The major components of a model are the transition probability function and a reward function.
\item \textbf{Policy}: the agent's behavior function, often denoted as $\pi$, is a mapping from state, $s$, to action, $a$, and can be deterministic, $\pi(s)=a$, or stochastic, $\pi(a|s) = P_\pi [A=a|S=s]$. \item \textbf{Value Function}: a measure of the ``goodness'' of a state, or how rewarding a state-action pair is based on future rewards. This future reward, or \emph{return}, is a total sum of discounted rewards computed with discount factor $\gamma \in [0,1]$: $G_t = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}$ \begin{itemize} \item \textbf{state-value}: the expected return if the agent is in state $s$ at time $t$: $V_\pi (s) = E_\pi [G_t |S_t = s]$ \item \textbf{action-value}: also known as the Q-value, is the expected return of a given state-action pair: $Q_\pi (s,a)=E_\pi [G_t |S_t =s,A_t =a]$. This can be used to estimate the state-value using the target policy: $V_\pi(s) =\sum_{a \in A} Q_\pi (s,a)\pi(a|s)$ \item \textbf{advantage}: the difference between the action-value and the state-value (this is used in A2C-based algorithms). It is defined as: $A_\pi(s,a)=Q_\pi(s,a)-V_\pi(s)$ \end{itemize} \end{itemize} Some key terms to know when talking about these components are: \begin{itemize} \item \textbf{Model-based:} Rely on a model of the environment; either the model is known or the algorithm learns it explicitly. \item \textbf{Model-free:} No dependency on the model during learning. \item \textbf{On-policy:} Use the deterministic outcomes or samples from the target policy to train the algorithm. \item \textbf{Off-policy:} Train on a distribution of transitions or episodes produced by a different behavior policy rather than that produced by the target policy. \end{itemize} \textbf{Temporal-Difference Learning (TD)}: Model-free learning from episodes of experience. The key thing about TD that makes it so important (``one idea \ldots central and novel to reinforcement learning'' \todo{cite Sutton\&Barto}) is that it can learn from \emph{incomplete} episodes.
We don't have to track the episode to termination when using TD learning, because of \emph{value estimation}. \textbf{Value Estimation} is a key idea of Temporal-Difference Learning, where the value function, $V(S_t)$, is updated towards an estimated return, or TD target. The updates to the value function are scaled by the learning rate, $\alpha$, so that $V(S_t) \gets V(S_t) + \alpha (R_{t+1} + \gamma V(S_{t+1}) - V(S_t))$. The same can be done for the Q-value, where $Q(S_t,A_t) \gets Q(S_t,A_t) + \alpha (R_{t+1} + \gamma Q(S_{t+1},A_{t+1}) - Q(S_t,A_t))$. This adaptation for the Q-value led to the creation of the algorithms \emph{SARSA} and \emph{Q-Learning}. \textbf{Q-Learning}, developed by Watkins \& Dayan in 1992, was a major step forward in early RL. It is very similar to \emph{SARSA} except for one key difference in their steps. \emph{SARSA} updates every 2 steps, using what was learned in the second step to evaluate the change for the previous step. \emph{Q-Learning}, on the other hand, uses an estimate of the optimal Q-values, $Q_*$, to choose the next action independent of the policy. Thus, \emph{Q-Learning} is also \emph{off-policy}. These are the steps in \emph{Q-Learning}: \begin{enumerate} \item From time step $t$ at state $S_t$, pick an action, $A_t$, according to the Q values, usually in an $\epsilon$-greedy approach \item With action $A_t$, we observe reward $R_t$ and a transition to state $S_{t+1}$ \item Update the Q-function according to: $Q(S_t,A_t) \gets Q(S_t,A_t) + \alpha (R_{t+1} + \gamma \max_{a \in A} Q(S_{t+1},a) - Q(S_t,A_t))$ \item Increment $t$ and repeat from step 1 \end{enumerate} \emph{Q-Learning} has a problem, though. Theoretically, the Q values can be saved for every state-action pair, but that quickly becomes infeasible as the state-action space gets larger. Thus, a function can be used to approximate the Q values. The best way to do this is with \textbf{Deep Q-Network}s (\emph{DQN}).
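The tabular Q-learning update and $\epsilon$-greedy action selection described above can be sketched as follows (illustrative naming, not taken from the blog post):

```python
import random
from collections import defaultdict

def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    """One tabular Q-learning update; off-policy because of the max over next actions."""
    td_target = r + gamma * max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (td_target - Q[(s, a)])
    return Q[(s, a)]

def epsilon_greedy(Q, s, actions, epsilon=0.1, rng=random):
    """Pick a random action with probability epsilon, else a greedy one."""
    if rng.random() < epsilon:
        return rng.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])
```

Storing `Q` as a dictionary keyed by $(s, a)$ works only for small discrete spaces, which is exactly the limitation that motivates the function approximation of DQN discussed next.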
In order for \emph{DQN} to work and not suffer from instability and divergence, it requires two mechanisms: \begin{itemize} \item \textbf{Experience Replay}: all the steps in an episode are recorded in replay memory, $D_t$, which holds experience tuples over many episodes. During updates, samples from $D_t$ are drawn at random, included in the update process, and can be used multiple times. Experience Replay removes correlations in the observation sequences and smooths over changes in the data distribution. \item \textbf{Periodically Updated Target}: Update Q towards target values that are only periodically updated. Every $C$ (a hyperparameter) steps, the Q network is cloned and kept frozen as the optimization target. This makes training more stable by overcoming short-term oscillations. \end{itemize} The loss function for this looks like: \begin{equation*} L(\theta) = E_{(s,a,r,s')\sim U(D)}[(r + \gamma \max_{a'}Q(s',a';\theta^-) - Q(s,a;\theta))^2], \end{equation*} where $U(D)$ is a uniform distribution over the replay memory and $\theta^-$ are the frozen target Q-network parameters. Back to on-policy methods: \textbf{Policy Gradient} methods are among the most successful on-policy approaches. They learn the policy directly with a parameterized function, $\pi(a|s;\theta)$. This method is explained further and more explicitly in \ref{paper:weng2018github2}. Instead, we'll focus more on a cornerstone of Policy Gradient methods called \emph{Actor-Critic}. A number of algorithms build off of this algorithm, so it is important to understand how it works. \textbf{Actor-Critic} learns both the value function and the policy, using each to influence the other. This is done through the use of an actor and a critic. The actor updates the policy parameters in the direction suggested by the critic. The critic updates the value function parameters to make better suggestions for the actor.
It can be thought of as similar to two siblings playing a video game, with the older sibling watching while the younger sibling plays. The older sibling has an understanding of what actions are good and a general idea of what should be done. They influence the actions of the younger sibling with suggestions and recommendations, and the younger sibling incorporates those suggestions into their play style. This is more explicitly described in Algorithm \ref{algo:actor_critic}. \begin{algorithm}[H] \label{algo:actor_critic} \caption{Actor-Critic Algorithm} \begin{algorithmic} \REQUIRE $0 < M \le T$ \WHILE{$step\_count \le total\_steps$} \STATE $\{states, actions, log\_probabilities, rewards\} \leftarrow$ Run policy $\pi_{\theta_{old}}$ in environment for $T$ timesteps \STATE compute advantages \STATE $step\_count += T$ \FOR{$epoch = 1,2,...,K$} \STATE shuffle $\{state, action, log\_probability, advantage\}$ tuples \STATE split into $M$ minibatches \FOR{$minibatch = 1,2,...,M$} \STATE compute losses \STATE $\theta_{old} \leftarrow \theta$ \ENDFOR \ENDFOR \ENDWHILE \end{algorithmic} \end{algorithm} The major challenge still faced, and commonly brought up, in RL is the \textbf{Exploration-Exploitation Dilemma}. \todo{Look into python book for better explanation of this} \subsection{Safe Reinforcement Learning} I'll try to group these papers by the method they use to make the RL ``safe''. I'll also try to make these summaries more brief. \subsubsection{AlwaysSafe: Reinforcement Learning Without Safety Constraint Violations During Training \cite{Simao2021alwayssafe}} \textbf{Thiago D. Simão, AAMAS, 2021, \href{https://tdsimao.github.io/publications/Simao2021alwayssafe/}{link}} \subsubsection{Safe Exploration in Continuous Action Spaces \cite{dalal2018safe}} \textbf{Gal Dalal, arXiv, 2018, \href{https://arxiv.org/abs/1801.08757}{link}} \subsection{Reachability and Runtime Assurance} These papers concern the topic of ``reachability'' and methods for computing reach sets.
I'll try to include more of the math involved so I can use it later. Definitions and explanations of things I understand/know: \begin{itemize} \item Frontier: Essentially the set of all possible $s'$ states \item Reachable set: All possible states a system can get to. Usually assumes infinite time and is meant for stability analysis. The set of all frontiers. \item Admissible states: a.k.a. ``safe sets'', but the better term to use. \item Inadmissible sets: ``unsafe sets'', but doesn't have to be tied to safety, e.g., could be undesirable. \item Recoverable states: subset of admissible states. To be recoverable, the safety controller must be able to keep the system within the admissible states. \item Flow: \item Flow-pipe: \item Reach tube: \item Lipschitz continuous function: A function is Lipschitz continuous if and only if there exists $L < \infty$ such that $|f(t, u) - f(t, u^*)| \le L|u - u^*|$ for all $u, u^* \in \mathcal{R}$. This is similar to saying $f'$ is bounded by $L$. \item Lyapunov stable: $\dot{x}=f(x(t))$, $x(0)=x_{0}$, where $x(t) \in \mathcal{D} \subseteq \mathcal{R}^{n}$ denotes the system state vector, $\mathcal{D}$ an open set containing the origin, and $f:\mathcal{D} \rightarrow \mathcal{R}^{n}$ continuous on $\mathcal{D}$. Suppose $f$ has an equilibrium at $x_{e}$ so that $f(x_{e})=0$; then this equilibrium is said to be Lyapunov stable if, for every $\epsilon >0$, there exists a $\delta >0$ such that, if $\|x(0)-x_{e}\|<\delta$, then for every $t\geq 0$ we have $\|x(t)-x_{e}\|<\epsilon$. The equilibrium of the above system is said to be asymptotically stable if it is Lyapunov stable and there exists $\delta >0$ such that, if $\|x(0)-x_{e}\|<\delta$, then $\lim_{t\rightarrow \infty }\|x(t)-x_{e}\|=0$.
\end{itemize} \subsubsection{intro to reachability?} \textbf{author, conference/journal, year, \href{}{link}} \subsubsection{Hamilton-Jacobi Reachability: A Brief Overview and Recent Advances \cite{bansal2017hamilton}} \textbf{Somil Bansal, CDC, 2017, \href{https://arxiv.org/abs/1709.07523}{link}} \subsubsection{Efficient Reachability Analysis of Closed-Loop Systems with Neural Network Controllers \cite{everett2021efficient}} \textbf{Michael Everett, arXiv, 2021, \href{https://arxiv.org/abs/2101.01815}{link}} \subsubsection{Verification and Synthesis of Hybrid Systems \cite{dang2000verification}} \textbf{Thi Xuan Thao Dang, dissertation, 2000, \href{https://tel.archives-ouvertes.fr/tel-00006738/document}{link}} In Chapter 5 of this dissertation, the author explains how face lifting works. The initial set of states, which must be represented by an orthogonal polyhedron, is expanded by \emph{lifting} each face according to the defined dynamics. In the initial face lifting algorithm, each face can only move outwards, i.e. the area of the reachable set cannot be smaller than the previous set. The result is an over-approximate reachable set. "However, the goal of ensuring the global error under the desired tolerance is still problematic because of the accumulation of over-approximate error. As a result, there are cases when, in the long run, the over-approximate error becomes too large for the result to be useful." (this is highlighted really well in Figures 5.9 and 5.11) To reduce some of the error accumulation from \emph{face lifting}, the author introduces a new method, \emph{mixed face lifting}. In this method, we compute frontiers, and the reachable set is a union of the computed frontiers. The frontier includes inward transitions for the faces, so a frontier can be smaller than the previous frontier. This reduces the amount of accumulated global error, best shown in Figure 5.12.
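To make the \emph{frontier} vs. \emph{reachable set} distinction concrete before moving on, here is a toy 1-D sketch of my own (not the dissertation's face lifting algorithm), with dynamics $s' = s + u$ for $u \in [-U_{\max}, U_{\max}]$, where the reachable set is accumulated as the union of successive frontiers:

```python
# Toy 1-D illustration (my construction, not the dissertation's algorithm)
# of frontier vs. reachable set for dynamics s' = s + u, u in [-U_MAX, U_MAX].
U_MAX = 1.0

def next_frontier(frontier):
    """One-step frontier: the interval of states reachable from the current frontier."""
    lo, hi = frontier
    return (lo - U_MAX, hi + U_MAX)

def reach_set(initial, steps):
    """Reachable set as the union of all frontiers (here the union stays an interval)."""
    frontier = initial
    lo, hi = initial
    frontiers = [initial]
    for _ in range(steps):
        frontier = next_frontier(frontier)
        frontiers.append(frontier)
        lo = min(lo, frontier[0])  # widen the union with the new frontier
        hi = max(hi, frontier[1])
    return (lo, hi), frontiers
```

In this toy case each frontier contains the previous one, so the union equals the final frontier; mixed face lifting matters precisely when frontiers can shrink and the union is the right object to keep.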
The algorithm for this procedure is shown on page 106 and an aircraft example is explained on page 110. It will be important to review this material to gain a better understanding of the difference between \emph{frontier} and \emph{reachable set}, as that will play an important role in how we determine safety for our aircraft scenario. Note: this method requires ODEs to determine the face transitions. \subsubsection{Real-Time Reachability for Verified Simplex Design \cite{johnson2016real}} \textbf{Taylor T. Johnson, ACM Transactions on Embedded Computing Systems, 2016, \href{https://dl.acm.org/doi/10.1145/2723871}{link}} \subsubsection{Real-Time Reachability for Verified Simplex Design \cite{bak2014real}} \textbf{Stanley Bak, RTSS, 2014, \href{http://stanleybak.com/papers/bak2014rtss.pdf}{link}} \subsubsection{Neural Simplex Architecture \cite{phan2020neural}} \textbf{Dung T. Phan, NASA Formal Methods Symposium, 2020, \href{https://arxiv.org/abs/1908.00528}{link}} \subsection{TBD} \section{Preliminary Literature Review} \section{Introduction} Reinforcement Learning (RL) and Deep Reinforcement Learning (DRL) are fast-growing fields with increasing impact, spurred by success in agents that learn to beat human experts in games like Go \cite{silver2016mastering} and Starcraft \cite{starcraft2019}. However, these successes are predominantly limited to virtual environments. An RL agent learns a behavior policy that is optimized according to a reward function. The policy is learned through interaction with the environment, making training on real-world hardware platforms prohibitively expensive and time-consuming. Additionally, RL allows agents to learn via trial and error, exploring \emph{any behavior} during the learning process. In many cyber-physical domains, this level of freedom is unacceptable. Consider the example of an industrial robot arm learning to place objects in a factory.
Some behaviors could cause the robot to damage itself, other elements in the factory, or nearby workers. To mitigate these setbacks, most RL training is performed in a simulated environment. After training is completed in simulation, the learned policy can then be transferred to the real world via a \emph{sim2real} transfer. However, this \textit{sim2real} transfer can result in poor performance and undesirable behavior \cite{jang2019simulation, hamilton2022zero}. To counteract these issues, the field of \emph{Safe Reinforcement Learning} (SRL) has grown. Recent works demonstrate real-world online learning \cite{fisac2018general}, optimal performance that does not require safety checking when deployed \cite{wagener2021safe}, and SRL approaches that work better than state-of-the-art DRL approaches \cite{jothimurugan2019composable}. Each new SRL paper claims to be the best, safest, most efficient, or least restrictive approach, but few prove these claims with valid demonstrations. When we tried to replicate studies using the original source code, we found inconsistencies in their comparisons that made the SRL problem easier to solve. When these inconsistencies were accounted for, almost all the improvements the authors claimed came from the SRL method disappeared. These inconsistencies include (1) manipulating the initial conditions for only the SRL agents, so the RL methods have to learn how to solve the problem from a greater range of initial conditions, (2) forcing the ``unsafe'' RL agents to learn how to recover from unrecoverable unsafe conditions\footnote{An example of an unrecoverable unsafe condition would include crashing the vehicle being controlled, while a recoverable unsafe condition might include violating a set speed limit. While it may be possible to recover from a crash in a low fidelity simulation, it likely is not possible in reality. Therefore, when a crash occurs, the simulation should be terminated instead of allowed to continue.
Otherwise, the agents trained to continue after such an event have to learn the optimal policy for that continuation in addition to the original task.}, and (3) tuning hyperparameters for their SRL methods while leaving the regular RL methods' hyperparameters at a baseline. Any one of these inconsistencies can lead to results skewed in the SRL method's favor. Furthermore, many demonstrations fail to repeat trials across multiple random seeds. Because RL is a stochastic process, showing results from one random seed is not representative of the true performance of the algorithm. Only presenting the results of a single trial allows results to be selectively chosen to show a large improvement over existing methods. The work in \cite{henderson2018deep, agarwal2021deep} highlights the importance of running experiments across at least 5 random seeds, averaging the results, and showing the performance range in order to substantiate any claimed trend of improvement. These issues in SRL publications give rise to the need for better comparative studies and more \emph{ablation studies}. An ablation study involves singling out and removing individual components of a complex system to understand their impact on the system as a whole. Ablation studies are used to determine causality and can prove which aspects of a system are actually the most important. In this work, we outline a better standard for comparing SRL approaches as we conduct a thorough ablation study on SRL approaches that use \textit{Run Time Assurance} (RTA), an approach that monitors the output of the control policy for unsafe control actions and intervenes by modifying the output to assure system safety. RTA can be applied both during training and after training is complete. \textbf{Our contributions.} This paper presents an in-depth investigation of how RTA configuration and usage choices impact RL training and final agent performance. The key contributions of this paper are as follows.
\begin{enumerate} \item Evaluation across four different RTA approaches in addition to training with no RTA. \item Evaluation of five different RTA training configurations that adapt how penalties are assigned during training and whether the RL agent has knowledge of a corrected action. \item Evaluation of (1) and (2) on two different classes of RL: off-policy (SAC) and on-policy (PPO). \item Evaluation of the true performance of each combination by training across 10 random seeds and averaging the results. \item A large-scale (880 unique agents trained) experimental ablation study that covers (1), (2), and (3). \item Analysis of the experimental results to provide practical insights and recommendations for training RL agents with RTA. In particular, answering these important questions: \begin{enumerate} \item (\ref{sec:results_dependency}) Do agents learn to become dependent on RTA? \item (\ref{sec:results_configuration}) Which RTA configuration is most effective? \item (\ref{sec:results_approach}) Which RTA approach is most effective? \item (\ref{sec:results_policy}) Which works better with RTA, off-policy (SAC) or on-policy (PPO)? \item (\ref{sec:results_shaping}) Which is more important, Reward Shaping or Safe Exploration? \end{enumerate} \end{enumerate} \section{Deep Reinforcement Learning} \emph{Reinforcement Learning} (RL) is a form of machine learning in which an agent acts in an environment, learns through experience, and increases its performance based on rewarded behavior. \emph{Deep Reinforcement Learning} (DRL) is a newer branch of RL in which a neural network is used to approximate the behavior function, i.e. policy $\pi$. The basic construction of the DRL approach is shown in \figref{fig:rta_off}. The agent consists of the \emph{Neural Network Controller} (NNC) and RL algorithm, and the environment consists of a plant and observer model. 
The environment can be comprised of any dynamical system, from Atari simulations (\cite{hamilton2020sonic, alshiekh2018safe}) to complex robotics scenarios (\cite{brockman2016gym, fisac2018general, henderson2018deep, mania2018simple, jang2019simulation, bernini2021few}). \begin{figure}[htpb] \centering \includegraphics[width=0.6\columnwidth]{figures/RL_basic_1.pdf} \caption{DRL training interactions between agent and environment without RTA} \label{fig:rta_off} \end{figure} Reinforcement learning is based on the \textit{reward hypothesis} that all goals can be described by the maximization of expected return, i.e. the cumulative reward \cite{silver2015}. During training, the agent chooses an action, $\action_{NN}$, based on the input observation, $\obs$. The action is then executed in the environment, updating the internal state, $\state$, according to the plant dynamics. The updated state, $\state$, is then assigned a scalar reward, $r$, and transformed into the next observation vector. In all the examples shown in this work, we assume full observability, so all state information exists in the observation or can be reconstructed accurately from a single observation, i.e. the transformation by the observer is reversible. The observation is useful because it allows us to normalize the state values and change the input dimensions in order to ignore irrelevant variables and/or increase the importance of other variables by repeating them. The process of executing an action and receiving a reward and next observation is referred to as a \emph{timestep}. Relevant values, like the input observation, action, and reward, are collected as a data tuple, i.e. \emph{sample}, by the RL algorithm to update the current NNC policy, $\pi$, to an improved policy, $\pi^*$. How often these updates are performed depends on the RL algorithm. In this work, we focus on model-free DRL algorithms, meaning the agent has no dependency on the environment model during training.
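The timestep loop described above (observe, act, receive a reward and the next observation, collect the sample) can be sketched as follows; `ToyEnv`, its reward, and the random stand-in policy are illustrative placeholders, not the environments or NNC used in this work.

```python
# Minimal sketch of the agent-environment timestep loop described above.
# ToyEnv and the random policy are illustrative stand-ins, not the
# environments or neural network controller used in this work.
import random

class ToyEnv:
    """1-D integrator plant: the state moves by the chosen action each timestep."""
    def __init__(self):
        self.state = 0.0

    def reset(self):
        self.state = 0.0
        return self.observe()

    def observe(self):
        # Observer: here a trivial (and reversible) scaling of the state.
        return self.state / 10.0

    def step(self, action):
        self.state += action             # plant dynamics
        reward = -abs(self.state - 5.0)  # scalar reward assigned to the new state
        return self.observe(), reward

def policy(obs):
    # Stand-in for the NNC: pick an action in [-1, 1].
    return random.uniform(-1.0, 1.0)

def collect_samples(env, num_timesteps):
    """Run the policy, collecting (obs, action, reward, next_obs) sample tuples."""
    samples = []
    obs = env.reset()
    for _ in range(num_timesteps):
        action = policy(obs)
        next_obs, reward = env.step(action)
        samples.append((obs, action, reward, next_obs))
        obs = next_obs  # each loop iteration is one timestep
    return samples

samples = collect_samples(ToyEnv(), num_timesteps=50)
```

In a real training loop, the RL algorithm would periodically consume these sample tuples to update the policy; how often depends on the algorithm.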
Within model-free DRL algorithms, there are two main categories of training, \emph{on-policy} and \emph{off-policy}. On-policy algorithms use the learned policy to select the actions taken during training, while off-policy algorithms use a separate policy. This distinction will cause the RTA to have a varied impact on the learning process. Thus, we repeat our experiments on two state-of-the-art DRL algorithms representing these two categories of training. \emph{Proximal Policy Optimization} (PPO) is our on-policy algorithm\footnote{Other on-policy RL algorithms include A2C \cite{mnih2016asynchronous}, TRPO \cite{schulman2015trust}, and ARS \cite{mania2018simple}.} and \emph{Soft Actor-Critic} (SAC) is our off-policy algorithm\footnote{Other off-policy RL algorithms include DQN \cite{mnih2015human}, DDPG \cite{lillicrap2015continuous}, and TD3 \cite{fujimoto2018addressing}.}. \section{Safe Reinforcement Learning} \label{sec:SRL} When an RL agent explores states in a video game, the consequences of making a ``wrong'' move are limited. However, using RL in the real world has shown catastrophic results \cite{fisac2018general, jang2019simulation}. The field of \emph{Safe Reinforcement Learning} (SRL) was developed in response to RL's use in cyber-physical systems that interact with the real world in complex scenarios. In Garc\'{i}a and Fern\'{a}ndez's comprehensive survey of SRL from 2015, they categorized the approaches into two main categories or styles: (1) modification of the optimality criterion and (2) modification of the exploration process \cite{garcia2015comprehensive}. In this work, we refer to these categories under the more general terms: (1) \emph{reward shaping} and (2) \emph{safe exploration}. Additionally, we introduce an emerging category of approaches, (3) \emph{adversarial training/retraining}. Each is described in more detail in this section.
\subsubsection{Reward Shaping} Reward shaping, the process of crafting a well-designed, optimal reward function, is essential for all forms of DRL, since a poorly designed reward function can lead to unexpected and/or ineffective behavior \cite{hamilton2020sonic}. Within SRL, reward shaping is used to reformulate the problem as a \textit{Constrained Markov Decision Process} (CMDP) \cite{altman1999constrained}. Instead of optimizing performance according to a singular reward function, performance is optimized according to a task-oriented reward and a safety-focused cost \cite{achiam2017constrained,wachi2020safe,ding2021provably,hasanzadezonuzy2020learning,jothimurugan2019composable,satija2020constrained}, so the agent learns a high-performing, safe policy. \subsubsection{Safe Exploration} Safe exploration approaches, which are often geared towards hardware deployment, ensure the agent remains $100\%$ safe throughout the duration of training. Furthermore, this approach can be redesigned for deployment, ensuring the future safety of a static neural network that has completed training. Safe exploration techniques can be further broken down into the following three categories. \begin{enumerate} \item \textbf{Preemptive Shielding}, where the action set the agent is allowed to choose from is preemptively reduced to only allow safe actions \cite{alshiekh2018safe, wu2019shield}. \item \textbf{Safe-by-Construction}, in which verification techniques are used, often on an abstraction of the learned policy, to verify safe behavior before the policy is allowed to explore and develop further \cite{neary2021verifiable, zhang2021safe, wang2021verification}. Alternatively, correct-by-construction techniques can also be applied to a shielded RL solution \cite{alshiekh2018safe}. \item \textbf{Run Time Assurance (RTA) methods} filter the agent's desired actions, $\action_{NN}$, to assure safety.
In some cases, a monitor and/or decision module is used to determine whether the desired action provided by the learning agent is safe. In the event the agent's desired action is deemed unsafe, a different action that is determined to be safe is substituted and sent to the plant \cite{hunt2021hscc, wagener2021safe, xiong2021scalable, li2021safe, fisac2018general, fulton2018safe, desai2019soter, murugesan2019formal, cheng2019end, zhao2020learning}. \end{enumerate} In this work, we focus solely on RTA methods for ensuring safe exploration, since they work across more examples with fewer scalability issues. An example of a general setup for safe exploration via RTA is shown in \figref{fig:rta_on}. \begin{figure}[t] \centering \includegraphics[width=0.75\columnwidth]{figures/RL_RTA_basic_1.pdf} \caption{DRL training interactions between the agent and the environment with RTA.} \label{fig:rta_on} \end{figure} \subsubsection{Adversarial Training/Retraining} The newest category of SRL approaches, \emph{Adversarial Training/Retraining}, focuses on identifying unsafe behavior in the agent and then generating data to learn from and correct that behavior \cite{phan2020neural, wang2020falsification, yang2021neural}. Most of the papers that use this approach focus on retraining an agent that already performs well in the environment. However, the approach can also be applied to an untrained network at the cost of requiring more training time. \section{Run Time Assurance} \label{sec:RTA} One of the main contributions of this work is investigating how the RL training process is impacted by RTA approaches that filter unsafe control inputs to preserve system safety.
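The action-filtering pattern just described can be sketched as follows; the 1-D state, safety predicate, and backup controller here are illustrative placeholders rather than any of the RTA approaches evaluated in this work.

```python
# Minimal sketch of run time assurance as an action filter. The safety
# predicate and backup controller below are illustrative placeholders,
# not the RTA approaches evaluated in this work.

def is_safe(state, action):
    # Placeholder monitor: the next state (1-D, s' = s + u) must stay in |s| <= 10.
    return abs(state + action) <= 10.0

def backup_controller(state):
    # Placeholder backup policy: push the state back toward the origin.
    return -0.1 * state

def rta_filter(state, desired_action):
    """Pass the agent's desired action through unless it is deemed unsafe."""
    if is_safe(state, desired_action):
        return desired_action, False       # no intervention
    return backup_controller(state), True  # substitute a safe action

action, intervened = rta_filter(9.5, 2.0)  # 9.5 + 2.0 > 10, so the RTA intervenes
```

During training, the substituted action (and whether an intervention occurred) is what the environment actually executes; the configurations studied later differ in what the learning agent is told about that substitution.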
For this paper, we focus on dynamical system plant models sampled discretely given by $\state_{t+1} = f(\state_t, \action_t)$ where $\state_t \in \State$ is the state of the plant at timestep $t$, $\State \subseteq \mathbb{R}^n$ is the real-valued state space, $\action_t \in \Action$ is the control input to the plant at timestep $t$, with $\Action \subseteq \mathbb{R}^m$ the action space, and $f$ is a function describing the state evolution from the current state and control action. For the dynamical system, inequality constraints $\varphi_i(\state): \mathbb{R}^n\to \mathbb{R}$, $\forall i \in \{1,...,M\}$ can be used to define a set of $M$ safety constraints, where constraint $i$ is satisfied when $\varphi_i(\state) \geq 0$. The admissible set $\admissibleset \subseteq \State$, which is defined as the set of states where all constraints are satisfied, is then given by, \begin{equation} \admissibleset := \{\state \in \State \mid \varphi_i(\state) \geq 0, \forall i \in \{1,...,M\} \}. \end{equation} \begin{definition} Safety and/or safe operation is achieved by always remaining within the admissible set, i.e. not violating any specified constraints. In the examples provided in this work, safety is defined on a finite time horizon, such that the operation is considered safe if $\forall t \in [t_0, T], \state_t \in \admissibleset$. However, the ending time bound, $T$, can be set to infinity for other systems that operate in perpetuity. \end{definition} For RTA to ensure safe operation, we need to define a stricter subset of states to further constrain operations, known as the \emph{control invariant} safe set, $\safeset$. By operating in this stricter set, we avoid scenarios that can arise near the boundary of the admissible set, $\admissibleset$, where, no matter the action executed, the next state will be outside the admissible set.
\begin{definition} The control invariant safe set, $\safeset$, is a subset of the admissible set, $\admissibleset$, where $\forall \state \in \safeset, \exists \action \in \Action, f(\state, \action) \in \admissibleset$. \end{definition} In this work, we first focus on two classes of RTA monitoring approaches, \textit{explicit} and \textit{implicit}, which define $\safeset$ differently. Explicit approaches use a pre-defined $\safeset$ to determine when RTA intervention is necessary. To define $\safeset$ explicitly, we first define a set of $M$ control invariant inequality constraints $h_i(\state):\mathbb{R}^n\to\mathbb{R}, \forall i \in \{1,...,M\}$, where the constraints are satisfied when $h_i(\state) \geq 0$. $\safeset$ is then given by, \begin{equation} \label{eq: explicitly_defined_safe_set} \safeset := \{\state \in \State \mid h_i(\state)\geq 0, \forall i \in \{1,...,M\} \}. \end{equation} Implicit approaches use a defined backup control policy and the system dynamics to compute trajectories, which are used to determine when intervention is necessary. Implicitly, $\safeset$ is defined as, \begin{equation} \label{eq: implicitly_defined_safe_set} \safeset := \{\state \in \State \mid \forall k \in [t_0, T], \phi_k^{\action_{\rm b}}(\state)\in\admissibleset \}, \end{equation} where $\phi_k^{\action_{\rm b}}(\state)$ represents a prediction of the state $\state$ after $k$ timesteps under the backup control policy $\action_{\rm b}$. Because computing trajectories can be computationally expensive, explicit approaches tend to be more efficient. However, implicit approaches can be easier to implement since they do not require a precise definition of the control invariant safe set, which is difficult to define without being overly conservative. Additionally, we split the RTA monitoring approaches further with two classes of intervention, \emph{simplex} and \emph{Active Set-Invariance Filter} (ASIF).
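A minimal sketch contrasting the explicit and implicit membership checks defined above, for a toy 1-D system $s' = s + u$ with $|u| \le U_{\max}$; the constraint, backup policy, and lookahead horizon are invented for illustration, not taken from this paper's environments.

```python
# Sketch of explicit vs. implicit control invariant safe set checks for a
# toy 1-D system s' = s + u. The constraint, dynamics, backup policy, and
# horizon are illustrative stand-ins.

U_MAX = 1.0    # |u| <= U_MAX
HORIZON = 20   # finite lookahead T for the implicit check

def admissible(s):
    # phi(s) = 10 - |s| >= 0, i.e. the admissible set is |s| <= 10.
    return 10.0 - abs(s) >= 0.0

def explicit_safe(s):
    # h(s) = (10 - U_MAX) - |s| >= 0: stay far enough from the boundary
    # that one maximal action cannot leave the admissible set.
    return (10.0 - U_MAX) - abs(s) >= 0.0

def backup_policy(s):
    # Saturating proportional controller toward the origin.
    return max(-U_MAX, min(U_MAX, -0.5 * s))

def implicit_safe(s):
    # Roll out the backup policy; every predicted state must stay admissible.
    for _ in range(HORIZON):
        if not admissible(s):
            return False
        s = s + backup_policy(s)  # plant dynamics s' = s + u
    return admissible(s)
```

The explicit check is a constant-time constraint evaluation, while the implicit check pays for a trajectory rollout, mirroring the efficiency trade-off noted above.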
The simplex approach switches from the primary controller to a pre-defined backup controller if the system is about to leave the control invariant safe set \cite{rivera1996architectural}. The backup controller is usually less efficient at the desired control task, but meets desired safety and/or human-machine teaming constraints. One possible implementation of a simplex RTA filter is constructed as follows, \noindent \rule{1\columnwidth}{0.7pt} \noindent \textbf{Simplex Filter} \begin{equation} \begin{array}{rl} \action_{\rm act}(\state)= \begin{cases} \action_{\rm NN}(\obs(\state)) & {\rm if}\quad \phi_k^{\action_{\rm NN}}(\state) \in \safeset \\ \action_{\rm b}(\state) & {\rm otherwise} \end{cases} \end{array}\label{eq:switching} \end{equation} \noindent \rule[7pt]{1\columnwidth}{0.7pt} Here, $\phi_k^{\action_{\rm NN}}(\state)$ represents the predicted state if $\action_{\rm NN}$ is applied for $k$ discrete time intervals. ASIF approaches use barrier constraints to minimize deviations from the primary control signal while assuring safety \cite{gurriet2020applied}. One possible implementation of an ASIF RTA filter is constructed using a quadratic program as follows, \noindent \rule{1\columnwidth}{0.7pt} \noindent \textbf{Active Set-Invariance Filter} \begin{equation} \begin{split} \action_{\rm act}(\state, \action_{\rm NN})=\text{argmin} & \left\Vert \action_{\rm NN}-\action_{\rm b}\right\Vert\\ \text{s.t.} \quad & BC_i(\state, \action_{\rm b})\geq 0, \quad \forall i \in \{1,...,M\} \end{split}\label{eq:optimization} \end{equation} \noindent \rule[7pt]{1\columnwidth}{0.7pt} Here, $BC_i(\state, \action_{\rm b})$ represents a set of $M$ barrier constraints \cite{ames2019control} used to assure safety of the system. The purpose of the barrier constraints is to enforce Nagumo's condition \cite{nagumo1942lage}, ensuring $h_i(\state)$ is never decreasing, i.e. $\dot{h}_i(\state) \geq 0$, $\forall i \in \{1,...,M\}$, along the boundary of $\safeset$.
The function argmin finds the value of $\action_{\rm b}$ closest to $\action_{\rm NN}$ that still satisfies the barrier constraints. In this way, ASIF approaches apply the minimal change necessary to keep the system within $\admissibleset$ at each timestep. Using these defined approaches, we categorize our experiments in this paper according to the four derived classes of RTA monitoring approaches: \emph{Explicit Simplex}, \emph{Implicit Simplex}, \emph{Explicit ASIF}, and \emph{Implicit ASIF}. \section{Experiments} In order to answer the questions posed in the introduction, we have designed 880 experiments across multiple environments, RTA configurations, RTA approaches, and random seeds\footnote{Code is currently undergoing the public release process and will be made available as soon as it is completed.}. For each environment and DRL algorithm, we use established hyperparameters to limit the impact of tuning. We use 10 random seeds to generate our traces \cite{henderson2018deep}. Evaluations run during training halt training and freeze the NNC for the duration of the evaluation, in order to better represent the performance of the agent at that point. After training is completed, the final learned policy is evaluated on the task 100 times to better approximate the expected performance if deployed. All evaluations are done in environments with and without the RTA active in order to identify any dependence on the RTA forming. \input{configurations.tex} \input{environments.tex} \input{hparams.tex} \section{Results and Discussion} \label{sec:results} In this section, we try to answer the questions posed in the introduction by analyzing the overarching trends found in our experiments. Including all the results collected would make this paper exceedingly long.
Thus, we include select results that highlight the trends we found, and provide the complete results in Appendix~\ref{app:all_results}. \subsection{Do agents learn to become dependent on RTA?} \label{sec:results_dependency} \textbf{Answer:} Sometimes. Training RL agents with run time assurance \textit{always} runs the risk of forming dependence. Furthermore, this phenomenon is more prevalent in our on-policy results than our off-policy results. An agent is dependent on the RTA if the RTA is necessary for safe and successful behavior during deployment. We can determine whether an agent has learned to be dependent on the RTA by evaluating its performance with and without the RTA: a dependence has formed if the return and success drop significantly when evaluated without the RTA, while the performance metrics of an independent agent remain consistent in both settings. \begin{table} \centering \caption{This table shows final policy evaluation results across all test environments trained using the PPO algorithm with the Implicit Simplex RTA approach.
We show the recorded performance measured by the reward function (Return) and whether the agent was successful at completing the task (Success). Rows highlighted in gray indicate a learned dependency.}
\begin{tabular}{llccc}
\toprule
Environment & Configuration & RTA & Return & Success \\%& Interventions/Violations & Correction \\
\midrule
Pendulum & RTA no punishment & on & 987.84 $\pm$ 10.86 & 1.00 $\pm$ 0.00 \\%& 0.00 $\pm$ 0.00 & 0.00 $\pm$ 0.00 \\
& & off & 987.59 $\pm$ 10.38 & 1.00 $\pm$ 0.00 \\%& 0.00 $\pm$ 0.00 & - \\
Pendulum & RTA punishment & on & 987.57 $\pm$ 11.20 & 1.00 $\pm$ 0.00 \\%& 0.00 $\pm$ 0.00 & 0.00 $\pm$ 0.00 \\
& & off & 987.85 $\pm$ 11.18 & 1.00 $\pm$ 0.00 \\%& 0.00 $\pm$ 0.00 & -\\
\rowcolor{Gray} 2D Spacecraft Docking & RTA no punishment & on & 2.02 $\pm$ 0.39 & 0.92 $\pm$ 0.27 \\%& 137.07 $\pm$ 37.93 & 2.19 $\pm$ 0.20 \\
\rowcolor{Gray} & & off & -22.39 $\pm$ 15.39 & 0.34 $\pm$ 0.47 \\%& 147.84 $\pm$ 102.78 & - \\
\rowcolor{Gray} 2D Spacecraft Docking & RTA punishment & on & 1.82 $\pm$ 0.60 & 0.73 $\pm$ 0.44 \\%& 108.07 $\pm$ 54.22 & 2.06 $\pm$ 0.26 \\
\rowcolor{Gray} & & off & -17.99 $\pm$ 5.16 & 0.46 $\pm$ 0.50 \\%& 147.59 $\pm$ 40.20 & - \\
3D Spacecraft Docking & RTA no punishment & on & -33.29 $\pm$ 22.18 & 0.50 $\pm$ 0.50 \\%& 152.71 $\pm$ 104.84 & 1.20 $\pm$ 0.49 \\
& & off & -23.51 $\pm$ 8.54 & 0.44 $\pm$ 0.50 \\%& 152.03 $\pm$ 57.31 & - \\
3D Spacecraft Docking & RTA punishment & on & 2.04 $\pm$ 0.39 & 0.89 $\pm$ 0.31 \\%& 0.00 $\pm$ 0.03 & 0.00 $\pm$ 0.02 \\
& & off & 2.07 $\pm$ 0.34 & 0.92 $\pm$ 0.27 \\%& 1.16 $\pm$ 2.18 & - \\
\bottomrule
\end{tabular}
\label{tab:rta_dependence_shortened} \end{table} We use gray to highlight examples where agents learned to form a dependency in \tabref{tab:rta_dependence_shortened}, which contains the final policy evaluations of our on-policy results across all the environments using the implicit simplex RTA approach.
Note that not all agents in the highlighted rows (differentiated by the random seed used for training) learned a dependency, as evidenced by the increased standard deviation about the mean values. This further reinforces our answer that \emph{sometimes} agents learn to become dependent on the RTA they are trained with. Instead of ``always'' or ``never,'' whether the agent learns to become dependent on the RTA is a matter of chance, i.e. which random seed is used. Mania et al. show a great visualization in \cite{mania2018simple} of just how large an impact the random seed has on whether the agent learns a successful policy. The same is true here. If we had selected different random seeds, we would likely see different results for which agents learn to become dependent. However, it is the case that this can only happen if the agent is trained with RTA, and we have seen it is less likely to occur if the agent is punished for using the RTA as in the \textit{RTA punishment} configuration. Note that the impact of the level/scale of punishment on whether dependence forms is left for future work. \begin{figure}[htpb] \centering \subfigure[PPO evaluated with RTA Average Return]{\includegraphics[width=0.35\linewidth]{figures/docking2d_implicit_simplex/ppo_w_rta_eval.png}}\qquad \subfigure[PPO evaluated with RTA Average Success]{\includegraphics[width=0.35\linewidth]{figures/docking2d_implicit_simplex/ppo_w_rta_success.png}}\\ \subfigure[PPO evaluated without RTA Average Return]{\includegraphics[width=0.35\linewidth]{figures/docking2d_implicit_simplex/ppo_no_rta_eval.png}}\qquad \subfigure[PPO evaluated without RTA Average Success]{\includegraphics[width=0.35\linewidth]{figures/docking2d_implicit_simplex/ppo_no_rta_success.png}} \caption{Results collected from experiments run in the 2D Spacecraft Docking environment with an implicit simplex RTA. Each curve represents the average of 10 trials, and the shaded region is the $95\%$ confidence interval about the mean. 
The large difference in return and success that is recorded with (a \& b) and without (c \& d) RTA shows that all agents trained with RTA learned to depend on it.} \label{fig:docking2d_ppo_imp_sim_dependence} \end{figure} To further demonstrate issues with agents forming a dependence on RTA, observe the drop in performance and success between evaluating with RTA (a \& b) and without RTA (c \& d) in \figref{fig:docking2d_ppo_imp_sim_dependence}. While the RTA helps all the agents reach success throughout the training process, the agents trained with RTA (\textit{RTA no punishment}, \textit{RTA punishment}, and \textit{RTA Corrected Action}) do not maintain that same level of performance when evaluated without the RTA. In contrast, the \textit{baseline punishment} agents learn successful behavior that works with and without the RTA. \subsection{Which RTA configuration is most effective?} \label{sec:results_configuration} \textbf{Answer:} \emph{Baseline punishment} is the most effective. However, if safe exploration is necessary, \emph{RTA punishment} is the most effective. \begin{figure}[htb] \centering \subfigure[2D Docking, PPO evaluated without RTA Average Return]{\includegraphics[width=0.35\linewidth]{figures/docking2d_explicit_simplex/ppo_no_rta_eval.png}}\qquad \subfigure[2D Docking, PPO evaluated without RTA Average Success]{\includegraphics[width=0.35\linewidth]{figures/docking2d_explicit_simplex/ppo_no_rta_success.png}}\\ \subfigure[3D Docking, PPO evaluated without RTA Average Return]{\includegraphics[width=0.35\linewidth]{figures/docking3d_explicit_simplex/ppo_no_rta_eval.png}}\qquad \subfigure[3D Docking, PPO evaluated without RTA Average Success]{\includegraphics[width=0.35\linewidth]{figures/docking3d_explicit_simplex/ppo_no_rta_success.png}} \caption{Results collected from experiments run in the 2D (a \& b) and 3D (c \& d) Spacecraft Docking environment with an explicit simplex RTA. 
Each curve represents the average of 10 trials, and the shaded region is the $95\%$ confidence interval about the mean. All plots show that the \textit{baseline punishment} and \textit{RTA punishment} configurations learn at a similar rate and converge to similar levels of success and return.} \label{fig:config_example} \end{figure} The most effective RTA configuration is the one that consistently trains the best performing agent, evaluated without RTA. In the case of a tie, where final performance is comparable across multiple configurations, the best configuration is the one that reaches optimal performance more quickly, i.e. requires fewer samples. Across all our experiments, the \emph{baseline punishment} configuration was consistently among the best performing agents. The next best performer was the \emph{RTA punishment} configuration, which often outperformed the \emph{baseline punishment} configuration in our PPO experiments, but did not do as well in all of our SAC experiments. We discuss why this might be the case in Section~\ref{sec:results_policy}. To demonstrate this conclusion, we show the training curves in \figref{fig:config_example} from our experiments training agents across all configurations in both the 2D and 3D Spacecraft Docking environments using the PPO algorithm and the explicit simplex RTA approach. In these particular examples, \figref{fig:config_example}, both \emph{baseline punishment} and \emph{RTA punishment} have similar training curves that converge to about the same return and success. This holds across most of our experiments, except for some cases where \emph{RTA punishment} has a noticeably lower return because a dependence on the RTA formed. \subsection{Which RTA approach is most effective?} \label{sec:results_approach} \textbf{Answer:} The explicit simplex is the most effective RTA approach for training agents that consistently perform well and do not learn to depend on the RTA to maintain safety.
\begin{figure}[htbp] \centering \subfigure[PPO Average Return evaluated without RTA in the 2D Spacecraft Docking environment]{\includegraphics[width=0.35\linewidth]{figures/docking2d_general/ppo_rta_punished_no_rta_eval.png}}\qquad \subfigure[PPO Average Success evaluated without RTA in the 2D Spacecraft Docking environment]{\includegraphics[width=0.35\linewidth]{figures/docking2d_general/ppo_rta_punished_no_rta_success.png}}\\ \subfigure[PPO Average Return evaluated without RTA in the 3D Spacecraft Docking environment]{\includegraphics[width=0.35\linewidth]{figures/docking3d_general/ppo_rta_punished_no_rta_eval.png}}\qquad \subfigure[PPO Average Success evaluated without RTA in the 3D Spacecraft Docking environment]{\includegraphics[width=0.35\linewidth]{figures/docking3d_general/ppo_rta_punished_no_rta_success.png}} \caption{Training curves collected from experiments run in the 2D and 3D Spacecraft Docking environments, training with PPO in the \emph{RTA punishment} configuration across all four RTA approaches. Each curve represents the average of 10 trials, and the shaded region is the $95\%$ confidence interval about the mean.} \label{fig:rta_approach_ex} \end{figure} \figref{fig:rta_approach_ex} shows the training curves for PPO agents trained in our 2D and 3D Spacecraft Docking environments with four different RTA approaches. All the training curves represent the PPO agents trained in the \emph{RTA punishment} configuration and evaluated without the RTA. The curves broadly show ASIF RTAs guide the agent to success earlier on, but at the cost of increased sample complexity. The simplex approaches instead have reduced sample complexity, achieving a higher return sooner, which then leads to a greater chance of success. We attribute these results to the differences between simplex and ASIF approaches. With simplex, the RTA does not intervene until the last moment, which allows for more agent-guided exploration, leading to more unique data samples.
More unique data samples lead to a better approximation of the value- and/or Q-function, which reduces sample complexity. In contrast, ASIF approaches apply minimal corrections intended to guide the agent away from boundary conditions. This applies a greater restriction on agent-guided exploration, which can lead to more duplicated data samples. The implicit RTA approaches were less effective than the explicit approaches and were less consistent. In the 2D Spacecraft Docking environment, both ASIF training curves had similar return and success. However, in the 3D Spacecraft Docking environment, the explicit ASIF curve maintained the trend of earlier success with reduced return while the implicit ASIF curve failed to improve throughout the entire training process. Similarly, in the 3D Spacecraft Docking environment, the simplex curves had similar return and success, but in the 2D Spacecraft Docking environment the implicit simplex curve showed a large drop in both return and success. Therefore, we reason that explicit RTA approaches are better for training. Additionally, simplex approaches lead to a better performing agent in the long run. \subsection{Which works better with RTA, off-policy (SAC) or on-policy (PPO)?} \label{sec:results_policy} \textbf{Answer:} On-policy methods see a greater benefit from training with RTA. Our results showed PPO sees a greater benefit from training with RTA than SAC. This is likely a result of how the methods approach the \emph{exploration versus exploitation} problem. With too much exploitation, using only known information (i.e. the current policy) too strictly, the agent may never find the optimal policy. However, with too much exploration, the agent may never learn what the goal is, particularly if the rewards are sparse. In general, on-policy methods leverage more exploitation than off-policy methods through their use of the learned policy.
\begin{figure}[htpb] \centering \subfigure[SAC Average Return evaluated with RTA]{\includegraphics[width=0.35\linewidth]{figures/pendulum_results/pendulum_sac_w_rta_eval.png}}\qquad \subfigure[PPO Average Return evaluated with RTA]{\includegraphics[width=0.35\linewidth]{figures/pendulum_results/pendulum_ppo_w_rta_eval.png}}\\ \subfigure[SAC Average Return evaluated no RTA]{\includegraphics[width=0.35\linewidth]{figures/pendulum_results/pendulum_sac_no_rta_eval.png}}\qquad \subfigure[PPO Average Return evaluated no RTA]{\includegraphics[width=0.35\linewidth]{figures/pendulum_results/pendulum_ppo_no_rta_eval.png}} \caption{Results collected from experiments run in the Pendulum environment. Each curve represents the average of 10 trials, and the shaded region is the $95\%$ confidence interval about the mean. Note: maximum possible return in the environment is 1000.} \label{fig:pendulum_results_policy} \end{figure} \textbf{On-policy} methods exploit the learned policy. Therefore, guiding the agent to success and away from unsafe behavior helps the agent learn that behavior. As a result, the sample complexity is reduced. \figref{fig:pendulum_results_policy} (b \& d) highlights this effect well. However, these benefits are hindered if the wrong configuration is chosen. Across almost all of our PPO experiments, the \emph{RTA Corrected Action} configuration prevented the agents from improving the learned policy as shown in \figref{fig:config_example}. This is likely a result of too much exploitation from the RTA intervening around boundary conditions, which prevented the agents from exploring other options to better define the optimal policy. In contrast, \textbf{off-policy} methods have a larger focus on exploration. In particular, SAC maximizes entropy, assigning a higher value to unexplored state-action combinations. By restricting the actions taken near boundary conditions, the ``unsafe'' actions are never explored in actuality. 
Without some distinction of when the RTA intervenes, the patterns are harder to learn. This is shown in \figref{fig:bad_reward_example} where both \emph{RTA no punishment} and \emph{RTA Corrected Action} have a noticeably worse training curve than the \emph{baseline} configuration, failing to improve at all through training. We see a similar trend in \figref{fig:pendulum_results_policy} (a \& c) when SAC is used to learn the optimal policy for controlling our inverted pendulum. However, in this environment, the agent is still able to learn a successful policy. \subsection{Which is more important, Reward Shaping or Safe Exploration?} \label{sec:results_shaping} \textbf{Answer:} Reward shaping is generally more important for training. While safe exploration can improve sample complexity in some cases, a well-defined reward function is imperative for training successful RL agents. \begin{figure}[htbp] \centering \subfigure[SAC evaluated with RTA Average Return]{\includegraphics[width=0.35\linewidth]{figures/docking2d_explicit_simplex/sac_w_rta_eval.png}}\qquad \subfigure[SAC evaluated with RTA Average Success]{\includegraphics[width=0.35\linewidth]{figures/docking2d_explicit_simplex/sac_w_rta_success.png}} \\ \subfigure[SAC evaluated without RTA Average Return]{\includegraphics[width=0.35\linewidth]{figures/docking2d_explicit_simplex/sac_no_rta_eval.png}}\qquad \subfigure[SAC evaluated without RTA Average Success]{\includegraphics[width=0.35\linewidth]{figures/docking2d_explicit_simplex/sac_no_rta_success.png}} \caption{Results collected from experiments run in the 2D Spacecraft Docking environment with explicit simplex RTA. Each curve represents the average of 10 trials, and the shaded region is the $95\%$ confidence interval about the mean.} \label{fig:bad_reward_example} \end{figure} In every experiment where reward shaping was applied, we saw a more consistent and improved training curve.
For example, note in \figref{fig:config_example} that the 95\% confidence interval about \emph{baseline punishment} and \emph{RTA punishment} is smaller than \emph{baseline} and \emph{RTA no punishment} respectively. Additionally, the return and success tend to be much greater. The same trend shows in all of our experiments. That said, safe exploration does improve sample complexity for on-policy RL, but the improvements are much greater when reward shaping is also applied in the \emph{RTA punishment} configuration. Additionally, the punishment helps prevent the agent from becoming dependent on the RTA. However, safe exploration on its own is no substitute for a well-defined/tuned reward function, as evidenced in our SAC experiments in the docking environments shown in \figref{fig:bad_reward_example}. In these experiments, the agents with the \emph{baseline punishment} and \emph{RTA punishment} configurations quickly converged to an optimal performance, but the optimal performance did not result in success. \section{Conclusions and Future Work} In conclusion, we trained 880 RL agents in 88 experimental configurations in order to answer some important questions regarding the use of RTA for training safe RL agents. Our results showed that \textbf{(1)} agents sometimes learn to become dependent on the RTA if trained with one, \textbf{(2)} \emph{baseline punishment} and \emph{RTA punishment} are the most effective configurations for training safe RL agents, \textbf{(3)} the explicit simplex RTA approach is most effective for consistent training results that do not depend on the RTA for safety, \textbf{(4)} PPO saw a greater benefit from training with RTA than SAC, suggesting that RTA may be more beneficial for on-policy than off-policy RL algorithms, and \textbf{(5)} effective reward shaping is generally more important than safe exploration for training safe RL agents. 
In future work, and as more environments are released with RTA, we hope to expand this study to ensure our conclusions are representative of more complex training environments. Additionally, we would like to compare the effectiveness of correcting with RTA during training versus retraining afterwards, evaluate sim2real transfer of the trained RL agents in representative robotic environments, and evaluate performance under environment and observation noise. \section*{Acknowledgements} This material is based upon work supported by the Department of Defense (DoD) through the National Defense Science \& Engineering Graduate (NDSEG) Fellowship Program, the Air Force Research Laboratory Innovation Pipeline Fund, and the Autonomy Technology Research (ATR) Center administered by Wright State University. The views expressed are those of the authors and do not reflect the official guidance or position of the United States Government, the Department of Defense or of the United States Air Force. This work has been approved for public release: distribution unlimited. Case Number AFRL-2022-0550. \bibliographystyle{cas-model2-names} \subsection{RTA configurations} In \figref{fig:rta_on}, we show a general method for including RTA in the training loop, purposefully leaving it vague. In this work, we have separated these various ways of connecting the RTA into the 5 configurations listed and explained below\footnote{More detailed descriptions for the configurations are provided in Appendix \ref{app:configurations}.}. The configurations are listed in order of increasing complexity. Each configuration builds on the previous ones, helping us observe the impact of each addition. \noindent \textbf{(1) Baseline (no RTA)} This configuration, demonstrated in \figref{fig:rta_off}, is used as a baseline comparison for all the RTA configurations. In this configuration, the agent is learning using the RL algorithm without any modifications.
$\obs$ is the input observation that led to the agent providing the desired action, $\action_{\rm NN}$, and $r$ is the reward value computed by reaching the next state. The data tuple is $data = \{\obs, \action_{\rm NN}, r\}$. \noindent \textbf{(2) Baseline punishment} In this configuration, we assign a negative reward, i.e. punishment $p<0$, if $unsafe?$ returns true, meaning at least one safety constraint was violated. This configuration adds SRL-style reward shaping to the problem. Instead of only maximizing the reward, the problem has two goals: (1) completing the task and (2) minimizing the punishment, or cost, incurred from violating constraints. In this work, $p$ is constant; however, $p$ could also be scaled according to the severity of the safety violation. The remaining configurations cannot factor in this kind of punishment because they rely on safe exploration, which does not allow any violations of the safety constraints. \begin{equation*} data = \begin{cases} \{\obs, \action_{\rm NN}, r + p\}, & \text{if }unsafe? \\ \{\obs, \action_{\rm NN}, r\}, & \text{otherwise} \end{cases} \end{equation*} \noindent \textbf{(3) RTA no punishment} This configuration is the simplest form of safe exploration. Nothing changes from the baseline configuration except the agent remains safe throughout the training process because of the RTA depicted in \figref{fig:rta_on}. The data tuple is $data =\{\obs, \action_{\rm NN}, r\}$, regardless of whether or not the RTA is intervening. \noindent \textbf{(4) RTA punishment} This configuration adds an element of reward shaping to the previous configuration. Since we want the agent to learn the correct action to take in a scenario without the help of an RTA, we assign a punishment for having the RTA intervene. By adding this punishment, $p$, to the reward, the agent should better identify safe actions. \begin{equation*} data = \begin{cases} \{\obs, \action_{\rm NN}, r + p\}, & \text{if }intervening?
\\ \{\obs, \action_{\rm NN}, r\}, & \text{otherwise} \end{cases} \end{equation*} \noindent \textbf{(5) RTA Corrected Action} In this configuration, we build on the idea of helping the agent identify the correct action to take in states near violating the safety constraints. Instead of punishing the agent for having the RTA intervene, we correct the agent's output to match that of the RTA's. In this manner, the agent only learns the actions actually taken in the environment. The data tuple is $data = \{\obs, \action_{\rm act}, r\}.$ \subsection{Training Environments} We ran our experiments in three environments with varying levels of complexity. Additional information on these environments is provided in Appendix \ref{app:environments}. \noindent \textbf{Pendulum} The first environment is a modified version of OpenAI Gym's \emph{Pendulum-v0} environment\footnote{The original Python implementation of this environment can be found at \url{https://github.com/openai/gym/blob/master/gym/envs/classic_control/pendulum.py}}\cite{brockman2016gym}. We chose this environment for its accessibility, simplicity, and previous use in \cite{cheng2019end} as a good indicator of the effectiveness of SRL over standard DRL. We use the same initial conditions and constraints described in their work. The goal of the agent in this environment is to control an actuator that applies torque to the system to keep a frictionless pendulum upright and within the bounds of $\pm 1 rad \approx \pm 57^{\circ}$. Thus, the inequality constraint the RTA is designed to uphold can be written as, \begin{equation} \varphi_1(\state) := 1 - |\theta|, \end{equation} where $\theta$ is the displacement angle of the pendulum measured from the upright position. $\theta$ and the angular momentum of the pendulum, $\omega$, are components of the environment state, $\state = [\omega, \theta]$. The observation used as the input for the RL agent is $\obs = [\cos(\theta), \sin(\theta), \omega]$.
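As an illustration, the safety constraint above and the backup controller used by this environment's RTA (Equation \ref{eq:rta_pendulum}, below) can be sketched as follows; recovering $\theta$ from the observation with \texttt{atan2} is an assumption of the sketch, not part of the published design:

```python
import math

def pendulum_constraint(obs):
    """Evaluate phi_1(s) = 1 - |theta| from the observation
    [cos(theta), sin(theta), omega]; the state is safe when phi_1 >= 0."""
    cos_t, sin_t, _omega = obs
    theta = math.atan2(sin_t, cos_t)
    return 1.0 - abs(theta)


def pendulum_backup_control(theta):
    """Backup controller u_b = clip(-(32/pi) * theta, -15, 15):
    a proportional push back toward upright, saturated at the torque limits."""
    return min(max(-32.0 / math.pi * theta, -15.0), 15.0)
```

An upright pendulum ($\theta = 0$) gives $\varphi_1 = 1$ and zero backup torque, while large displacements saturate the backup torque at $\pm 15$.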
If this constraint is violated, the episode is terminated. The RTA design implemented in this environment is a simple implicit simplex design with the backup controller described by Equation \ref{eq:rta_pendulum}. \begin{equation} u_b(\state) = \min\Bigg(\max\Big(\frac{-32}{\pi} \theta, -15\Big), 15\Bigg) \label{eq:rta_pendulum} \end{equation} The reward function was modified from the original used in \cite{cheng2019end} by adding a constant, $5$, which was chosen in order to make a majority of the reward values positive. By keeping the reward positive, the agent is encouraged to not terminate the episode early. If the reward were mostly negative, the agent might learn that quickly terminating the episode maximizes the return better than remaining near upright. The resulting reward function, Equation \ref{eq:reward_pendulum}, has a cumulative maximum of $1000$ instead of $0$. In the configurations which require a punishment, we use $p=-1$. \begin{equation} \label{eq:reward_pendulum} r_t = 5 - (\theta_t^2 + 0.1\omega_t^2 + 0.001\action_t^2) \end{equation} \noindent \textbf{2D \& 3D Spacecraft Docking} The next two environments come from \cite{ravaioli2022safe} and focus on the spacecraft docking problem\footnote{The implementations of these environments can be found at \url{https://github.com/act3-ace/SafeRL}}. In these environments, an active \emph{deputy} spacecraft, under the control of the RL agent, approaches a passive \emph{chief} spacecraft. This scenario is considered in 2D with only the $x$ and $y$ components, as well as 3D with the $x$, $y$, and $z$ components. The goal of the agent in these environments is to use mounted thrusters that move the \emph{deputy} spacecraft in the $x$, $y$, and $z$ directions to a docking region around the \emph{chief} spacecraft located at the origin. The state and observation are $\state = \obs = [x, y, z, \dot{x}, \dot{y}, \dot{z}]$.
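From this state vector, the quantities used by the safety constraints defined next ($d_{\rm H}$ and $\nu_{\rm H}$) and the distance-dependent speed limit of Equation \ref{eq:ha1} can be sketched as follows (a simplified illustration, not the original implementation):

```python
import math

def docking_speed_limit(state, nu_d=0.2, c=2.0):
    """Sketch of phi_1(s) = nu_D - nu_H + c * d_H for the state
    [x, y, z, xdot, ydot, zdot]; the state is safe when phi_1 >= 0."""
    x, y, z, xdot, ydot, zdot = state
    d_h = math.sqrt(x**2 + y**2 + z**2)            # distance to the chief at the origin
    nu_h = math.sqrt(xdot**2 + ydot**2 + zdot**2)  # speed of the deputy relative to the chief
    return nu_d - nu_h + c * d_h
```

The allowed speed grows with distance: far from the chief, a $1 m/s$ approach satisfies the constraint, while the same speed a few centimeters away violates it.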
If the \emph{chief} spacecraft were moving, the observation might instead be modified to use the relative distances and velocities. RTA is used in these environments to enforce a distance dependent speed limit $\varphi_1(\state)$ and maximum velocity limits $\varphi_2(\state)$-$\varphi_4(\state)$ \cite{dunlap2021Safe, dunlap2021comparing}. Together, these constraints keep the deputy spacecraft controllable and prevent collisions caused by the deputy approaching too fast. The distance dependent speed limit is defined as, \begin{equation} \label{eq:ha1} \varphi_1(\state) := \nu_D - \nu_{\rm H} + c d_{\rm H}, \end{equation} where $\nu_D = 0.2m/s$ defines the maximum allowable docking velocity, $c = 2 s^{-1}$ is a constant, $d_{\rm H} =(x^2+y^2+z^2)^{1/2}$ is the Euclidean distance between the deputy and chief spacecraft, and $\nu_{\rm H} =(\dot{x}^2+\dot{y}^2+\dot{z}^2)^{1/2}$ is the relative speed at which the deputy approaches the chief. The maximum velocity limits, with $v_{\rm max} = 10m/s$, can be written as inequality constraints, \begin{equation} \label{eq:ha2} \varphi_2(\state) := v_{\rm max}^2 - \dot{x}^2, \quad \varphi_3(\state) := v_{\rm max}^2 - \dot{y}^2, \quad \varphi_4(\state) := v_{\rm max}^2 - \dot{z}^2.
\end{equation} \begin{table*}[t] \centering \caption{2D \& 3D Spacecraft Docking reward function components} \begin{tabular}{lc}\hline \multicolumn{2}{c}{Terminal Rewards: All Configurations} \\ \hline Successfully Docked $(d_{\rm H} \leq 20m$ and $\nu_{\rm H}\leq 0.2 m/s)$ & +1 \\ Crashed $(d_{\rm H} \leq 20 m$ with a velocity $\nu_{\rm H}> 0.2 m/s)$ & -1 \\ Out of Bounds $(d_{\rm H} > 200m)$ & -1 \\ Over Max Time/Control & -1 \\ \hline \multicolumn{2}{c}{Dense Reward: All Configurations} \\ \hline Proximity & $0.0125(\Delta d_{\rm H} )$ \\ \hline \multicolumn{2}{c}{Safety Rewards: Punishment Configurations} \\ \hline If RTA is Intervening & $-0.001$ \\ Over Max Velocity & $-0.1 - 0.1(\nu_{\rm H} - v_{\rm max})$ \\ \hline \end{tabular} \label{tab:rewards} \end{table*} The reward functions for these environments are defined by sparse and dense components defined in \tabref{tab:rewards}\footnote{These values were provided by the authors of the environments during early development and do not match those published in \cite{ravaioli2022safe}. Additionally, these values were chosen with PPO as the target RL algorithm, which helps explain why SAC struggled with learning to complete the task.}. The sparsely defined terminal and safety reward components are only applied if the agent meets the specified requirements. In contrast, the dense reward component is computed after each timestep. In our experiments, the evaluation returns are computed using all the components defined in \tabref{tab:rewards}. However, during training, the safety components are ignored unless the punishment is required by the configuration. \section{All Experimental Results} \label{app:all_results} In this section, we show the results collected from all of our experiments. The subsections are broken up according to environment and RTA approach. Experiments in the pendulum environment are in Section \ref{app:pendulum}.
Experiments in the 2D Spacecraft Docking environment are in Sections \ref{app:docking2dexplicitsimplex}, \ref{app:docking2dexplicitasif}, \ref{app:docking2dimplicitsimplex}, and \ref{app:docking2dimplicitasif}. Experiments in the 3D Spacecraft Docking environment are in Sections \ref{app:docking3dexplicitsimplex}, \ref{app:docking3dexplicitasif}, \ref{app:docking3dimplicitsimplex}, and \ref{app:docking3dimplicitasif}. \FloatBarrier \subsection{Pendulum Implicit Simplex} \label{app:pendulum} \begin{figure}[ht] \centering \subfigure[SAC with RTA]{\includegraphics[width=0.45\linewidth]{figures/pendulum_results/pendulum_sac_w_rta_eval.png}}\qquad \subfigure[PPO with RTA]{\includegraphics[width=0.45\linewidth]{figures/pendulum_results/pendulum_ppo_w_rta_eval.png}}\\ \subfigure[SAC no RTA]{\includegraphics[width=0.45\linewidth]{figures/pendulum_results/pendulum_sac_no_rta_eval.png}}\qquad \subfigure[PPO no RTA]{\includegraphics[width=0.45\linewidth]{figures/pendulum_results/pendulum_ppo_no_rta_eval.png}} \caption{Results collected from experiments run in the Pendulum environment. Each curve represents the average of 10 trials, and the shaded region is the $95\%$ confidence interval about the mean.
Note: maximum possible return in the environment is 1000.} \label{fig:pendulum_results} \end{figure} \begin{table}[hb] \caption{SAC Pendulum} \label{tab:SAC Pendulum} \centering \begin{tabular}{lcccc} \toprule Configuration & RTA & Return & Length & Interventions\\ \midrule baseline & on & 983.1007 $\pm$ 3.1389 & 200.0 $\pm$ 0.0 & 0.2940 $\pm$ 0.7250 \\ & off & 982.8630 $\pm$ 3.3692 & 200.0 $\pm$ 0.0 & - \\ RTA no punishment & on & 983.9275 $\pm$ 2.0795 & 200.0 $\pm$ 0.0 & 0.0370 $\pm$ 0.2401 \\ & off & 982.8445 $\pm$ 30.7715 & 199.8070 $\pm$ 6.1001 & - \\ RTA punishment & on & 984.1265 $\pm$ 2.0494 & 200.0 $\pm$ 0.0 & 0.0 $\pm$ 0.0 \\ & off & 984.0623 $\pm$ 1.9408 & 200.0 $\pm$ 0.0 & - \\ Corrected Action & on & 984.5683 $\pm$ 2.1403 & 200.0 $\pm$ 0.0 & 0.0030 $\pm$ 0.0547 \\ & off & 984.5059 $\pm$ 2.0753 & 200.0 $\pm$ 0.0 & - \\ \bottomrule \end{tabular} \end{table} \begin{table}[hb] \caption{PPO Pendulum} \label{tab:PPO Pendulum} \centering \begin{tabular}{lcccc} \toprule Configuration & RTA & Return & Length & Interventions\\ \midrule baseline & on & 987.9677 $\pm$ 10.9786 & 200.0 $\pm$ 0.0 & 0.0 $\pm$ 0.0 \\ & off & 987.3375 $\pm$ 11.2540 & 200.0 $\pm$ 0.0 & - \\ RTA no punishment & on & 987.8363 $\pm$ 10.8634 & 200.0 $\pm$ 0.0 & 0.0 $\pm$ 0.0 \\ & off & 987.5939 $\pm$ 10.3785 & 200.0 $\pm$ 0.0 & - \\ RTA punishment & on & 987.5672 $\pm$ 11.2048 & 200.0 $\pm$ 0.0 & 0.0 $\pm$ 0.0 \\ & off & 987.8488 $\pm$ 11.1780 & 200.0 $\pm$ 0.0 & - \\ Corrected Action & on & 850.7139 $\pm$ 51.6744 & 200.0 $\pm$ 0.0 & 27.7470 $\pm$ 11.6062 \\ & off & 306.7720 $\pm$ 281.0874 & 64.7700 $\pm$ 56.5273 & - \\ \bottomrule \end{tabular} \end{table} \FloatBarrier \subsection{2D Spacecraft Docking Explicit Simplex} \label{app:docking2dexplicitsimplex} \begin{figure}[ht] \centering \subfigure[PPO evaluated with RTA Average Return]{\includegraphics[width=0.45\linewidth]{figures/docking2d_explicit_simplex/ppo_w_rta_eval.png}}\qquad \subfigure[PPO evaluated with RTA Average 
Success]{\includegraphics[width=0.45\linewidth]{figures/docking2d_explicit_simplex/ppo_w_rta_success.png}}\\ \subfigure[PPO evaluated without RTA Average Return]{\includegraphics[width=0.45\linewidth]{figures/docking2d_explicit_simplex/ppo_no_rta_eval.png}}\qquad \subfigure[PPO evaluated without RTA Average Success]{\includegraphics[width=0.45\linewidth]{figures/docking2d_explicit_simplex/ppo_no_rta_success.png}} \caption{Results collected from experiments run in the 2D Spacecraft Docking environment with an explicit simplex RTA. Each curve represents the average 10 trials and the shaded region is the $95\%$ confidence interval about the mean.} \label{fig:docking2d_ppo_exp_sim_results} \end{figure} \begin{table}[hb] \caption{PPO 2D Spacecraft Docking Explicit Simplex} \label{tab:ppo2dexplicitsimplex} \centering \resizebox{1.0\linewidth}{!}{ \begin{tabular}{lcccccc} \toprule Configuration & RTA & Return & Length & Success & Interventions/Violations & Correction \\ \midrule baseline & on & -25.5058 $\pm$ 7.0824 & 451.3160 $\pm$ 228.9647 & 0.5960 $\pm$ 0.4907 & 269.1270 $\pm$ 68.7516 & 0.8521 $\pm$ 0.2730 \\ & off & -19.5788 $\pm$ 7.5443 & 261.0470 $\pm$ 255.9112 & 0.4740 $\pm$ 0.4993 & 132.0560 $\pm$ 59.6531 & - \\ baseline punishment & on & 1.7415 $\pm$ 0.7302 & 763.2820 $\pm$ 197.3288 & 0.6720 $\pm$ 0.4695 & 1.0440 $\pm$ 2.4430 & 0.0501 $\pm$ 0.0980 \\ & off & 1.7350 $\pm$ 0.7505 & 758.7400 $\pm$ 198.3093 & 0.6830 $\pm$ 0.4653 & 1.1420 $\pm$ 2.7484 & - \\ RTA no punishment & on & -22.5140 $\pm$ 10.7681 & 348.6550 $\pm$ 85.4651 & 0.8530 $\pm$ 0.3541 & 242.0030 $\pm$ 100.1411 & 0.9448 $\pm$ 0.3900 \\ & off & -18.3351 $\pm$ 4.1843 & 230.2570 $\pm$ 68.7432 & 0.8320 $\pm$ 0.3739 & 154.8820 $\pm$ 42.6876 & - \\ RTA punishment & on & 2.0770 $\pm$ 0.3609 & 710.4490 $\pm$ 157.6271 & 0.8820 $\pm$ 0.3226 & 1.0830 $\pm$ 1.9236 & 0.0595 $\pm$ 0.0981 \\ & off & 2.0637 $\pm$ 0.3659 & 703.4900 $\pm$ 155.3529 & 0.8920 $\pm$ 0.3104 & 1.2450 $\pm$ 2.2545 & - \\ RTA Corrected Action & 
on & -38.8204 $\pm$ 23.3258 & 370.8200 $\pm$ 230.2876 & 0.0 $\pm$ 0.0 & 370.8060 $\pm$ 230.2876 & 12.8179 $\pm$ 3.2593 \\ & off & -22.7426 $\pm$ 9.9165 & 18.0930 $\pm$ 4.9153 & 0.0 $\pm$ 0.0 & 18.0720 $\pm$ 4.8995 & - \\ \bottomrule \end{tabular} } \end{table} \begin{figure}[ht] \centering \subfigure[SAC evaluated with RTA Average Return]{\includegraphics[width=0.45\linewidth]{figures/docking2d_explicit_simplex/sac_w_rta_eval.png}}\qquad \subfigure[SAC evaluated with RTA Average Success]{\includegraphics[width=0.45\linewidth]{figures/docking2d_explicit_simplex/sac_w_rta_success.png}}\\ \subfigure[SAC evaluated without RTA Average Return]{\includegraphics[width=0.45\linewidth]{figures/docking2d_explicit_simplex/sac_no_rta_eval.png}}\qquad \subfigure[SAC evaluated without RTA Average Success]{\includegraphics[width=0.45\linewidth]{figures/docking2d_explicit_simplex/sac_no_rta_success.png}} \caption{Results collected from experiments run in the 2D Spacecraft Docking environment with an explicit simplex RTA. 
Each curve represents the average 10 trials and the shaded region is the $95\%$ confidence interval about the mean.} \label{fig:docking2d_sac_exp_sim_results} \end{figure} \begin{table}[hb] \caption{SAC 2D Spacecraft Docking Explicit Simplex} \label{tab:sac2dexplicitsimplex} \centering \resizebox{1.0\linewidth}{!}{ \begin{tabular}{lcccccc} \toprule Configuration & RTA & Return & Length & Success & Interventions/Violations & Correction \\ \midrule baseline & on & -12.5070 $\pm$ 5.3815 & 980.3370 $\pm$ 95.3160 & 0.0170 $\pm$ 0.1293 & 129.1350 $\pm$ 54.4923 & 0.4160 $\pm$ 0.0389 \\ & off & -44.4387 $\pm$ 18.4755 & 936.0760 $\pm$ 201.4800 & 0.0 $\pm$ 0.0 & 375.4500 $\pm$ 150.4911 & - \\ baseline punishment & on & -0.7163 $\pm$ 1.1067 & 998.6170 $\pm$ 18.4751 & 0.0030 $\pm$ 0.0547 & 10.8460 $\pm$ 12.2911 & 0.2622 $\pm$ 0.1204 \\ & off & -1.4651 $\pm$ 2.3989 & 995.6790 $\pm$ 41.1139 & 0.0050 $\pm$ 0.0705 & 17.8680 $\pm$ 24.0389 & - \\ RTA no punishment & on & -14.5861 $\pm$ 4.6865 & 943.9070 $\pm$ 160.2614 & 0.0380 $\pm$ 0.1912 & 148.0320 $\pm$ 50.7698 & 0.4289 $\pm$ 0.0300 \\ & off & -55.0964 $\pm$ 28.7844 & 639.9500 $\pm$ 320.4161 & 0.0080 $\pm$ 0.0891 & 397.9490 $\pm$ 210.2359 & - \\ RTA punishment & on & -2.2407 $\pm$ 1.5318 & 994.1500 $\pm$ 53.0907 & 0.0010 $\pm$ 0.0316 & 25.4220 $\pm$ 16.9404 & 0.3336 $\pm$ 0.0755 \\ & off & -5.9313 $\pm$ 5.0539 & 996.6070 $\pm$ 35.4610 & 0.0040 $\pm$ 0.0631 & 59.2540 $\pm$ 48.6995 & - \\ RTA Corrected Action & on & -19.8202 $\pm$ 14.5191 & 927.6350 $\pm$ 179.7466 & 0.0580 $\pm$ 0.2337 & 199.4710 $\pm$ 144.4070 & 0.4516 $\pm$ 0.0666 \\ & off & -43.6724 $\pm$ 20.6484 & 588.6050 $\pm$ 361.0214 & 0.0050 $\pm$ 0.0705 & 295.2280 $\pm$ 154.5340 & - \\ \bottomrule \end{tabular} } \end{table} \FloatBarrier \subsection{2D Spacecraft Docking Explicit ASIF} \label{app:docking2dexplicitasif} \begin{figure}[ht] \centering \subfigure[PPO evaluated with RTA Average 
Return]{\includegraphics[width=0.45\linewidth]{figures/docking2d_explicit_asif/ppo_w_rta_eval.png}}\qquad \subfigure[PPO evaluated with RTA Average Success]{\includegraphics[width=0.45\linewidth]{figures/docking2d_explicit_asif/ppo_w_rta_success.png}}\\ \subfigure[PPO evaluated without RTA Average Return]{\includegraphics[width=0.45\linewidth]{figures/docking2d_explicit_asif/ppo_no_rta_eval.png}}\qquad \subfigure[PPO evaluated without RTA Average Success]{\includegraphics[width=0.45\linewidth]{figures/docking2d_explicit_asif/ppo_no_rta_success.png}} \caption{Results collected from experiments run in the 2D Spacecraft Docking environment with an explicit ASIF RTA. Each curve represents the average 10 trials and the shaded region is the $95\%$ confidence interval about the mean. } \label{fig:docking2d_ppo_exp_asif_results} \end{figure} \begin{table}[hb] \caption{PPO 2D Spacecraft Docking Explicit ASIF} \label{tab:ppo2dexplicitasif} \centering \resizebox{1.0\linewidth}{!}{ \begin{tabular}{lcccccc} \toprule Configuration & RTA & Return & Length & Success & Interventions/Violations & Correction \\ \midrule baseline & on & -19.3937 $\pm$ 8.3859 & 496.2690 $\pm$ 276.8729 & 0.6100 $\pm$ 0.4877 & 320.1170 $\pm$ 76.3203 & 0.9750 $\pm$ 0.3698 \\ & off & -19.6417 $\pm$ 8.6105 & 288.4830 $\pm$ 318.4605 & 0.3710 $\pm$ 0.4831 & 126.4670 $\pm$ 64.2468 & - \\ baseline punishment & on & 1.6885 $\pm$ 0.7824 & 794.0 $\pm$ 174.0669 & 0.6830 $\pm$ 0.4653 & 135.3380 $\pm$ 47.6325 & 0.2399 $\pm$ 0.0411 \\ & off & 1.7063 $\pm$ 0.8210 & 770.9680 $\pm$ 187.9315 & 0.6920 $\pm$ 0.4617 & 1.1970 $\pm$ 2.4409 & - \\ RTA no punishment & on & -18.8412 $\pm$ 3.9873 & 327.1660 $\pm$ 35.9863 & 0.9570 $\pm$ 0.2029 & 287.7570 $\pm$ 32.6175 & 0.9100 $\pm$ 0.1651 \\ & off & -18.4948 $\pm$ 2.2503 & 137.2860 $\pm$ 31.5151 & 0.2320 $\pm$ 0.4221 & 126.7610 $\pm$ 19.5246 & - \\ RTA punishment & on & 1.8265 $\pm$ 0.7208 & 512.1800 $\pm$ 150.3579 & 0.9330 $\pm$ 0.2500 & 198.6380 $\pm$ 45.1417 & 0.2718 $\pm$ 
0.0470 \\ & off & -15.3082 $\pm$ 5.7940 & 391.2710 $\pm$ 193.5872 & 0.7970 $\pm$ 0.4022 & 152.7720 $\pm$ 47.8269 & - \\ RTA Corrected Action & on & -35.2004 $\pm$ 20.0159 & 321.1060 $\pm$ 181.7095 & 0.0 $\pm$ 0.0 & 321.1060 $\pm$ 181.7095 & 9.7873 $\pm$ 3.5195 \\ & off & -22.4530 $\pm$ 9.9569 & 20.7170 $\pm$ 6.3494 & 0.0 $\pm$ 0.0 & 20.5450 $\pm$ 6.2174 & - \\ \bottomrule \end{tabular} } \end{table} \begin{figure}[ht] \centering \subfigure[SAC evaluated with RTA Average Return]{\includegraphics[width=0.45\linewidth]{figures/docking2d_explicit_asif/sac_w_rta_eval.png}}\qquad \subfigure[SAC evaluated with RTA Average Success]{\includegraphics[width=0.45\linewidth]{figures/docking2d_explicit_asif/sac_w_rta_success.png}}\\ \subfigure[SAC evaluated without RTA Average Return]{\includegraphics[width=0.45\linewidth]{figures/docking2d_explicit_asif/sac_no_rta_eval.png}}\qquad \subfigure[SAC evaluated without RTA Average Success]{\includegraphics[width=0.45\linewidth]{figures/docking2d_explicit_asif/sac_no_rta_success.png}} \caption{Results collected from experiments run in the 2D Spacecraft Docking environment with an explicit ASIF RTA. 
Each curve represents the average 10 trials and the shaded region is the $95\%$ confidence interval about the mean.} \label{fig:docking2d_sac_exp_asif_results} \end{figure} \begin{table}[hb] \caption{SAC 2D Spacecraft Docking Explicit ASIF} \label{tab:sac2dexplicitasif} \centering \resizebox{1.0\linewidth}{!}{ \begin{tabular}{lcccccc} \toprule Configuration & RTA & Return & Length & Success & Interventions/Violations & Correction \\ \midrule baseline & on & 0.0216 $\pm$ 0.9744 & 968.7900 $\pm$ 127.7141 & 0.0320 $\pm$ 0.1760 & 280.0150 $\pm$ 76.8260 & 0.3916 $\pm$ 0.0454 \\ & off & -45.9935 $\pm$ 21.5150 & 890.9630 $\pm$ 269.6588 & 0.0010 $\pm$ 0.0316 & 376.5710 $\pm$ 170.1958 & - \\ baseline punishment & on & 0.1026 $\pm$ 0.2979 & 999.5220 $\pm$ 10.2286 & 0.0 $\pm$ 0.0 & 151.7450 $\pm$ 25.1203 & 0.3134 $\pm$ 0.0211 \\ & off & -0.9064 $\pm$ 1.0850 & 995.0900 $\pm$ 51.4919 & 0.0 $\pm$ 0.0 & 12.0070 $\pm$ 11.5531 & - \\ RTA no punishment & on & -0.1290 $\pm$ 0.6667 & 974.0790 $\pm$ 107.3582 & 0.0010 $\pm$ 0.0316 & 257.1470 $\pm$ 44.1387 & 0.3755 $\pm$ 0.0204 \\ & off & -37.6844 $\pm$ 22.5865 & 374.3540 $\pm$ 225.3516 & 0.0 $\pm$ 0.0 & 250.7620 $\pm$ 155.8610 & - \\ RTA punishment & on & -0.0374 $\pm$ 0.5376 & 987.6740 $\pm$ 73.9589 & 0.0020 $\pm$ 0.0447 & 264.8470 $\pm$ 47.4221 & 0.3788 $\pm$ 0.0252 \\ & off & -41.9909 $\pm$ 23.4668 & 428.7950 $\pm$ 266.7232 & 0.0020 $\pm$ 0.0447 & 282.9180 $\pm$ 171.1889 & - \\ RTA Corrected Action & on & 0.2143 $\pm$ 0.5982 & 990.1150 $\pm$ 56.8130 & 0.0280 $\pm$ 0.1650 & 252.5090 $\pm$ 53.4003 & 0.3716 $\pm$ 0.0291 \\ & off & -36.1541 $\pm$ 18.2568 & 598.6270 $\pm$ 325.5361 & 0.0120 $\pm$ 0.1089 & 269.8170 $\pm$ 136.6618 & - \\ \bottomrule \end{tabular} } \end{table} \FloatBarrier \subsection{2D Spacecraft Docking Implicit Simplex} \label{app:docking2dimplicitsimplex} \begin{figure}[ht] \centering \subfigure[PPO evaluated with RTA Average 
Return]{\includegraphics[width=0.45\linewidth]{figures/docking2d_implicit_simplex/ppo_w_rta_eval.png}}\qquad \subfigure[PPO evaluated with RTA Average Success]{\includegraphics[width=0.45\linewidth]{figures/docking2d_implicit_simplex/ppo_w_rta_success.png}}\\ \subfigure[PPO evaluated without RTA Average Return]{\includegraphics[width=0.45\linewidth]{figures/docking2d_implicit_simplex/ppo_no_rta_eval.png}}\qquad \subfigure[PPO evaluated without RTA Average Success]{\includegraphics[width=0.45\linewidth]{figures/docking2d_implicit_simplex/ppo_no_rta_success.png}} \caption{Results collected from experiments run in the 2D Spacecraft Docking environment with an implicit simplex RTA. Each curve represents the average 10 trials and the shaded region is the $95\%$ confidence interval about the mean. } \label{fig:docking2d_ppo_imp_sim_results} \end{figure} \begin{table}[hb] \caption{PPO 2D Spacecraft Docking Implicit Simplex} \label{tab:Sppo2dimplicitsimplex} \centering \resizebox{1.0\linewidth}{!}{ \begin{tabular}{lcccccc} \toprule Configuration & RTA & Return & Length & Success & Interventions/Violations & Correction \\ \midrule baseline & on & 1.7324 $\pm$ 0.5743 & 628.6310 $\pm$ 233.1921 & 0.7340 $\pm$ 0.4419 & 190.7160 $\pm$ 87.9536 & 2.5410 $\pm$ 0.4201 \\ & off & -22.1098 $\pm$ 10.9814 & 320.5170 $\pm$ 360.2277 & 0.2660 $\pm$ 0.4419 & 143.9450 $\pm$ 79.5162 & - \\ baseline punishment & on & 2.2181 $\pm$ 0.3315 & 709.3360 $\pm$ 158.4366 & 0.9100 $\pm$ 0.2862 & 0.6080 $\pm$ 1.1218 & 0.6372 $\pm$ 0.9141 \\ & off & 2.1394 $\pm$ 0.3291 & 697.2640 $\pm$ 151.6598 & 0.9270 $\pm$ 0.2601 & 0.9660 $\pm$ 2.2147 & - \\ RTA no punishment & on & 2.0223 $\pm$ 0.3853 & 483.4820 $\pm$ 159.6581 & 0.9200 $\pm$ 0.2713 & 137.0720 $\pm$ 37.9305 & 2.1929 $\pm$ 0.2068 \\ & off & -22.3856 $\pm$ 15.3886 & 176.1880 $\pm$ 147.2843 & 0.3410 $\pm$ 0.4740 & 147.8420 $\pm$ 102.7799 & - \\ RTA punishment & on & 1.8153 $\pm$ 0.5999 & 604.7220 $\pm$ 252.2543 & 0.7310 $\pm$ 0.4434 & 108.0690 $\pm$ 
54.2185 & 2.0565 $\pm$ 0.2571 \\ & off & -17.9896 $\pm$ 5.1619 & 415.3470 $\pm$ 341.2807 & 0.4600 $\pm$ 0.4984 & 147.5900 $\pm$ 40.1991 & - \\ RTA Corrected Action & on & -1.3983 $\pm$ 1.2348 & 706.2610 $\pm$ 303.1083 & 0.0230 $\pm$ 0.1499 & 456.0030 $\pm$ 333.3891 & 4.8626 $\pm$ 2.1863 \\ & off & -27.0715 $\pm$ 19.0662 & 132.6590 $\pm$ 234.5261 & 0.0 $\pm$ 0.0 & 90.0710 $\pm$ 154.2038 & - \\ \bottomrule \end{tabular} } \end{table} \begin{figure}[ht] \centering \subfigure[SAC evaluated with RTA Average Return]{\includegraphics[width=0.45\linewidth]{figures/docking2d_implicit_simplex/sac_w_rta_eval.png}}\qquad \subfigure[SAC evaluated with RTA Average Success]{\includegraphics[width=0.45\linewidth]{figures/docking2d_implicit_simplex/sac_w_rta_success.png}}\\ \subfigure[SAC evaluated without RTA Average Return]{\includegraphics[width=0.45\linewidth]{figures/docking2d_implicit_simplex/sac_no_rta_eval.png}}\qquad \subfigure[SAC evaluated without RTA Average Success]{\includegraphics[width=0.45\linewidth]{figures/docking2d_implicit_simplex/sac_no_rta_success.png}} \caption{Results collected from experiments run in the 2D Spacecraft Docking environment with an implicit simplex RTA. 
Each curve represents the average 10 trials and the shaded region is the $95\%$ confidence interval about the mean.} \label{fig:docking2d_sac_imp_sim_results} \end{figure} \begin{table}[hb] \caption{SAC 2D Spacecraft Docking Implicit Simplex} \label{tab:sac2dimplicitsimplex} \centering \resizebox{1.0\linewidth}{!}{ \begin{tabular}{lcccccc} \toprule Configuration & RTA & Return & Length & Success & Interventions/Violations & Correction \\ \midrule baseline & on & 0.4104 $\pm$ 0.4594 & 992.1540 $\pm$ 61.4690 & 0.0170 $\pm$ 0.1293 & 40.1990 $\pm$ 29.2195 & 2.0015 $\pm$ 0.1018 \\ & off & -36.9676 $\pm$ 17.9372 & 963.3310 $\pm$ 160.9444 & 0.0010 $\pm$ 0.0316 & 312.0830 $\pm$ 141.1573 & - \\ baseline punishment & on & 0.2988 $\pm$ 0.4151 & 996.6570 $\pm$ 34.9140 & 0.0 $\pm$ 0.0 & 5.1930 $\pm$ 4.0115 & 1.8304 $\pm$ 0.5606 \\ & off & -1.1718 $\pm$ 1.6220 & 994.9620 $\pm$ 50.3511 & 0.0 $\pm$ 0.0 & 14.5570 $\pm$ 16.7634 & - \\ RTA no punishment & on & 0.5268 $\pm$ 0.5937 & 985.8930 $\pm$ 77.3721 & 0.0260 $\pm$ 0.1591 & 45.5030 $\pm$ 14.1846 & 2.0088 $\pm$ 0.0616 \\ & off & -49.2378 $\pm$ 26.5212 & 575.5730 $\pm$ 312.6387 & 0.0090 $\pm$ 0.0944 & 356.9840 $\pm$ 198.7087 & - \\ RTA punishment & on & 0.4419 $\pm$ 0.6550 & 980.7460 $\pm$ 98.2139 & 0.0050 $\pm$ 0.0705 & 43.3130 $\pm$ 12.2646 & 2.0111 $\pm$ 0.0616 \\ & off & -47.3890 $\pm$ 26.1510 & 525.6140 $\pm$ 298.1644 & 0.0090 $\pm$ 0.0944 & 341.2960 $\pm$ 193.6923 & - \\ RTA Corrected Action & on & 0.6113 $\pm$ 0.6937 & 982.4190 $\pm$ 82.0556 & 0.0410 $\pm$ 0.1983 & 46.1820 $\pm$ 21.0099 & 2.0038 $\pm$ 0.0626 \\ & off & -38.0627 $\pm$ 19.2859 & 598.8810 $\pm$ 311.1165 & 0.0350 $\pm$ 0.1838 & 286.5530 $\pm$ 145.0794 & - \\ \bottomrule \end{tabular} } \end{table} \FloatBarrier \subsection{2D Spacecraft Docking Implicit ASIF} \label{app:docking2dimplicitasif} \begin{figure}[ht] \centering \subfigure[PPO evaluated with RTA Average 
Return]{\includegraphics[width=0.45\linewidth]{figures/docking2d_implicit_asif/ppo_w_rta_eval.png}}\qquad \subfigure[PPO evaluated with RTA Average Success]{\includegraphics[width=0.45\linewidth]{figures/docking2d_implicit_asif/ppo_w_rta_success.png}}\\ \subfigure[PPO evaluated without RTA Average Return]{\includegraphics[width=0.45\linewidth]{figures/docking2d_implicit_asif/ppo_no_rta_eval.png}}\qquad \subfigure[PPO evaluated without RTA Average Success]{\includegraphics[width=0.45\linewidth]{figures/docking2d_implicit_asif/ppo_no_rta_success.png}} \caption{Results collected from experiments run in the 2D Spacecraft Docking environment with an implicit ASIF RTA. Each curve represents the average 10 trials and the shaded region is the $95\%$ confidence interval about the mean. } \label{fig:docking2d_ppo_imp_asif_results} \end{figure} \begin{table}[hb] \caption{PPO 2D Spacecraft Docking Implicit ASIF} \label{tab:Sppo2dimplicitasif} \centering \resizebox{1.0\linewidth}{!}{ \begin{tabular}{lcccccc} \toprule Configuration & RTA & Return & Length & Success & Interventions/Violations & Correction \\ \midrule baseline & on & -17.7031 $\pm$ 8.5565 & 601.9800 $\pm$ 318.1612 & 0.4870 $\pm$ 0.4998 & 337.4270 $\pm$ 80.0928 & 0.8393 $\pm$ 0.2897 \\ & off & -23.4315 $\pm$ 13.5040 & 385.0710 $\pm$ 393.7963 & 0.1980 $\pm$ 0.3985 & 143.9770 $\pm$ 99.2892 & - \\ baseline punishment & on & 1.8864 $\pm$ 0.5720 & 751.0750 $\pm$ 157.2639 & 0.7810 $\pm$ 0.4136 & 138.1540 $\pm$ 44.0523 & 0.2437 $\pm$ 0.0398 \\ & off & 1.8927 $\pm$ 0.5791 & 733.8830 $\pm$ 175.6035 & 0.7550 $\pm$ 0.4301 & 0.9690 $\pm$ 1.8868 & - \\ RTA no punishment & on & -17.5369 $\pm$ 4.5902 & 329.2390 $\pm$ 34.7500 & 0.9610 $\pm$ 0.1936 & 290.2000 $\pm$ 32.0983 & 0.8294 $\pm$ 0.1533 \\ & off & -17.8740 $\pm$ 2.3042 & 123.9470 $\pm$ 36.0346 & 0.1880 $\pm$ 0.3907 & 114.9080 $\pm$ 25.0518 & - \\ RTA punishment & on & 1.9883 $\pm$ 0.3713 & 502.9990 $\pm$ 143.2604 & 0.9480 $\pm$ 0.2220 & 204.7170 $\pm$ 37.7863 & 0.2781 $\pm$ 
0.0398 \\ & off & -15.8753 $\pm$ 5.1271 & 364.4110 $\pm$ 181.5286 & 0.7690 $\pm$ 0.4215 & 156.4490 $\pm$ 41.2289 & - \\ RTA Corrected Action & on & -36.3506 $\pm$ 21.1084 & 335.6310 $\pm$ 192.6612 & 0.0 $\pm$ 0.0 & 335.6310 $\pm$ 192.6612 & 9.6171 $\pm$ 3.2334 \\ & off & -22.0632 $\pm$ 9.6757 & 21.0460 $\pm$ 7.2165 & 0.0 $\pm$ 0.0 & 20.8590 $\pm$ 7.0182 & - \\ \bottomrule \end{tabular} } \end{table} \begin{figure}[ht] \centering \subfigure[SAC evaluated with RTA Average Return]{\includegraphics[width=0.45\linewidth]{figures/docking2d_implicit_asif/sac_w_rta_eval.png}}\qquad \subfigure[SAC evaluated with RTA Average Success]{\includegraphics[width=0.45\linewidth]{figures/docking2d_implicit_asif/sac_w_rta_success.png}}\\ \subfigure[SAC evaluated without RTA Average Return]{\includegraphics[width=0.45\linewidth]{figures/docking2d_implicit_asif/sac_no_rta_eval.png}}\qquad \subfigure[SAC evaluated without RTA Average Success]{\includegraphics[width=0.45\linewidth]{figures/docking2d_implicit_asif/sac_no_rta_success.png}} \caption{Results collected from experiments run in the 2D Spacecraft Docking environment with an implicit ASIF RTA. 
Each curve represents the average 10 trials and the shaded region is the $95\%$ confidence interval about the mean.} \label{fig:docking2d_sac_imp_asif_results} \end{figure} \begin{table}[hb] \caption{SAC 2D Spacecraft Docking Implicit ASIF} \label{tab:sac2dimplicitasif} \centering \resizebox{1.0\linewidth}{!}{ \begin{tabular}{lcccccc} \toprule Configuration & RTA & Return & Length & Success & Interventions/Violations & Correction \\ \midrule baseline & on & 0.0170 $\pm$ 0.6159 & 988.4880 $\pm$ 66.0788 & 0.0220 $\pm$ 0.1467 & 327.1520 $\pm$ 78.9583 & 0.3659 $\pm$ 0.0418 \\ & off & -40.4752 $\pm$ 19.6528 & 945.6220 $\pm$ 187.7012 & 0.0030 $\pm$ 0.0547 & 335.9500 $\pm$ 154.2188 & - \\ baseline punishment & on & -0.0926 $\pm$ 0.3824 & 998.7810 $\pm$ 21.0513 & 0.0 $\pm$ 0.0 & 242.3540 $\pm$ 94.6843 & 0.2916 $\pm$ 0.0280 \\ & off & -0.6723 $\pm$ 1.0088 & 998.8650 $\pm$ 18.3413 & 0.0 $\pm$ 0.0 & 9.5740 $\pm$ 10.5377 & - \\ RTA no punishment & on & 0.0072 $\pm$ 0.5972 & 989.0330 $\pm$ 63.0413 & 0.0120 $\pm$ 0.1089 & 337.4620 $\pm$ 70.4624 & 0.3637 $\pm$ 0.0308 \\ & off & -40.3231 $\pm$ 22.8186 & 388.8060 $\pm$ 237.8237 & 0.0030 $\pm$ 0.0547 & 266.6250 $\pm$ 160.5296 & - \\ RTA punishment & on & -0.0248 $\pm$ 0.5306 & 992.9850 $\pm$ 54.3038 & 0.0020 $\pm$ 0.0447 & 322.7700 $\pm$ 69.0018 & 0.3560 $\pm$ 0.0272 \\ & off & -45.2013 $\pm$ 24.9335 & 500.5040 $\pm$ 285.1929 & 0.0020 $\pm$ 0.0447 & 319.9860 $\pm$ 182.5422 & - \\ RTA Corrected Action & on & 0.0461 $\pm$ 0.7008 & 984.5460 $\pm$ 75.3451 & 0.0210 $\pm$ 0.1434 & 317.3800 $\pm$ 76.3632 & 0.3630 $\pm$ 0.0405 \\ & off & -38.1003 $\pm$ 20.0277 & 474.9430 $\pm$ 292.3608 & 0.0110 $\pm$ 0.1043 & 261.0650 $\pm$ 143.3972 & - \\ \bottomrule \end{tabular} } \end{table} \FloatBarrier \subsection{3D Spacecraft Docking Explicit Simplex} \label{app:docking3dexplicitsimplex} \begin{figure}[ht] \centering \subfigure[PPO evaluated with RTA Average 
Return]{\includegraphics[width=0.45\linewidth]{figures/docking3d_explicit_simplex/ppo_w_rta_eval.png}}\qquad \subfigure[PPO evaluated with RTA Average Success]{\includegraphics[width=0.45\linewidth]{figures/docking3d_explicit_simplex/ppo_w_rta_success.png}}\\ \subfigure[PPO evaluated without RTA Average Return]{\includegraphics[width=0.45\linewidth]{figures/docking3d_explicit_simplex/ppo_no_rta_eval.png}}\qquad \subfigure[PPO evaluated without RTA Average Success]{\includegraphics[width=0.45\linewidth]{figures/docking3d_explicit_simplex/ppo_no_rta_success.png}} \caption{Results collected from experiments run in the Docking3D environment with an explicit simplex RTA. Each curve represents the average 10 trials and the shaded region is the $95\%$ confidence interval about the mean.} \label{fig:docking3d_ppo_exp_sim_results} \end{figure} \begin{table}[hb] \caption{PPO 3D Spacecraft Docking Explicit Simplex} \label{tab:ppo3dexplicitsimplex} \centering \resizebox{1.0\linewidth}{!}{ \begin{tabular}{lcccccc} \toprule Configuration & RTA & Return & Length & Success & Interventions/Violations & Correction \\ \midrule baseline & on & -31.0188 $\pm$ 10.6629 & 518.1790 $\pm$ 321.9829 & 0.2820 $\pm$ 0.4500 & 193.3420 $\pm$ 67.8229 & 0.8483 $\pm$ 0.2922 \\ & off & -23.6606 $\pm$ 10.0484 & 387.3470 $\pm$ 368.7029 & 0.2320 $\pm$ 0.4221 & 153.5120 $\pm$ 81.3324 & - \\ baseline punishment & on & 2.0348 $\pm$ 0.3485 & 700.8320 $\pm$ 150.7818 & 0.8990 $\pm$ 0.3013 & 0.0 $\pm$ 0.0 & 0.0 $\pm$ 0.0 \\ & off & 2.0182 $\pm$ 0.3761 & 714.7290 $\pm$ 147.2671 & 0.8870 $\pm$ 0.3166 & 0.9160 $\pm$ 1.9034 & - \\ RTA no punishment & on & -24.4788 $\pm$ 10.7840 & 280.2370 $\pm$ 88.3712 & 0.5980 $\pm$ 0.4903 & 149.2880 $\pm$ 101.5997 & 0.9711 $\pm$ 0.8016 \\ & off & -21.8075 $\pm$ 10.9196 & 190.2490 $\pm$ 111.3772 & 0.5510 $\pm$ 0.4974 & 134.1560 $\pm$ 72.1069 & - \\ RTA punishment & on & 2.0870 $\pm$ 0.3968 & 687.3150 $\pm$ 135.2182 & 0.9360 $\pm$ 0.2448 & 0.0 $\pm$ 0.0 & 0.0 $\pm$ 0.0 \\ & off & 
2.0874 $\pm$ 0.3910 & 685.5820 $\pm$ 136.5485 & 0.9330 $\pm$ 0.2500 & 1.0390 $\pm$ 2.3608 & - \\ RTA Corrected Action & on & -30.8903 $\pm$ 14.7595 & 230.2190 $\pm$ 122.1902 & 0.0 $\pm$ 0.0 & 230.2190 $\pm$ 122.1902 & 19.4718 $\pm$ 3.4466 \\ & off & -22.6890 $\pm$ 8.2845 & 14.7530 $\pm$ 3.3308 & 0.0 $\pm$ 0.0 & 14.7530 $\pm$ 3.3308 & - \\ \bottomrule \end{tabular} } \end{table} \begin{figure}[ht] \centering \subfigure[SAC evaluated with RTA Average Return]{\includegraphics[width=0.45\linewidth]{figures/docking3d_explicit_simplex/sac_w_rta_eval.png}}\qquad \subfigure[SAC evaluated with RTA Average Success]{\includegraphics[width=0.45\linewidth]{figures/docking3d_explicit_simplex/sac_w_rta_success.png}}\\ \subfigure[SAC evaluated without RTA Average Return]{\includegraphics[width=0.45\linewidth]{figures/docking3d_explicit_simplex/sac_no_rta_eval.png}}\qquad \subfigure[SAC evaluated without RTA Average Success]{\includegraphics[width=0.45\linewidth]{figures/docking3d_explicit_simplex/sac_no_rta_success.png}} \caption{Results collected from experiments run in the Docking3D environment with an explicit simplex RTA. 
Each curve represents the average 10 trials and the shaded region is the $95\%$ confidence interval about the mean.} \label{fig:docking3d_sac_exp_sim_results} \end{figure} \begin{table}[hb] \caption{SAC 3D Spacecraft Docking Explicit Simplex} \label{tab:sac3dexplicitsimplex} \centering \resizebox{1.0\linewidth}{!}{ \begin{tabular}{lcccccc} \toprule Configuration & RTA & Return & Length & Success & Interventions/Violations & Correction \\ \midrule baseline & on & -45.5176 $\pm$ 17.8207 & 889.6050 $\pm$ 233.6027 & 0.0210 $\pm$ 0.1434 & 81.5790 $\pm$ 65.0452 & 0.4107 $\pm$ 0.1040 \\ & off & -51.7261 $\pm$ 22.6411 & 793.0560 $\pm$ 312.3463 & 0.0050 $\pm$ 0.0705 & 404.6600 $\pm$ 179.4067 & - \\ baseline punishment & on & -3.1201 $\pm$ 3.3789 & 984.0890 $\pm$ 78.1228 & 0.0 $\pm$ 0.0 & 0.5910 $\pm$ 2.4831 & 0.0378 $\pm$ 0.1168 \\ & off & -3.0951 $\pm$ 3.7081 & 985.4280 $\pm$ 66.5032 & 0.0010 $\pm$ 0.0316 & 29.8100 $\pm$ 35.5730 & - \\ RTA no punishment & on & -56.2666 $\pm$ 18.2059 & 894.0790 $\pm$ 228.7942 & 0.0340 $\pm$ 0.1812 & 113.1790 $\pm$ 63.4759 & 0.4365 $\pm$ 0.0638 \\ & off & -65.2064 $\pm$ 30.6350 & 749.4010 $\pm$ 336.3577 & 0.0020 $\pm$ 0.0447 & 478.7300 $\pm$ 229.1926 & - \\ RTA punishment & on & -3.2459 $\pm$ 4.9680 & 997.0710 $\pm$ 27.9703 & 0.0050 $\pm$ 0.0705 & 1.6550 $\pm$ 6.6895 & 0.0429 $\pm$ 0.1162 \\ & off & -3.5462 $\pm$ 6.0729 & 995.9100 $\pm$ 36.4424 & 0.0050 $\pm$ 0.0705 & 35.5580 $\pm$ 58.1211 & - \\ RTA Corrected Action & on & -57.1892 $\pm$ 23.8735 & 874.9890 $\pm$ 254.3602 & 0.0230 $\pm$ 0.1499 & 134.5720 $\pm$ 111.9885 & 0.4541 $\pm$ 0.1048 \\ & off & -53.6893 $\pm$ 28.7788 & 639.5200 $\pm$ 380.3689 & 0.0030 $\pm$ 0.0547 & 379.5430 $\pm$ 226.4612 & - \\ \bottomrule \end{tabular} } \end{table} \FloatBarrier \subsection{3D Spacecraft Docking Explicit ASIF} \label{app:docking3dexplicitasif} \begin{figure}[ht] \centering \subfigure[PPO evaluated with RTA Average 
Return]{\includegraphics[width=0.45\linewidth]{figures/docking3d_explicit_asif/ppo_w_rta_eval.png}}\qquad \subfigure[PPO evaluated with RTA Average Success]{\includegraphics[width=0.45\linewidth]{figures/docking3d_explicit_asif/ppo_w_rta_success.png}}\\ \subfigure[PPO evaluated without RTA Average Return]{\includegraphics[width=0.45\linewidth]{figures/docking3d_explicit_asif/ppo_no_rta_eval.png}}\qquad \subfigure[PPO evaluated without RTA Average Success]{\includegraphics[width=0.45\linewidth]{figures/docking3d_explicit_asif/ppo_no_rta_success.png}} \caption{Results collected from experiments run in the Docking3D environment with an explicit ASIF RTA. Each curve represents the average 10 trials and the shaded region is the $95\%$ confidence interval about the mean. } \label{fig:docking3d_ppo_exp_asif_results} \end{figure} \begin{table}[hb] \caption{PPO 3D Spacecraft Docking Explicit ASIF} \label{tab:ppo3dexplicitasif} \centering \resizebox{1.0\linewidth}{!}{ \begin{tabular}{lcccccc} \toprule Configuration & RTA & Return & Length & Success & Interventions/Violations & Correction \\ \midrule baseline & on & -23.5395 $\pm$ 5.5692 & 520.2150 $\pm$ 335.7678 & 0.4380 $\pm$ 0.4961 & 262.1080 $\pm$ 81.3717 & 0.7496 $\pm$ 0.2788 \\ & off & -25.9051 $\pm$ 15.2182 & 395.0710 $\pm$ 393.6692 & 0.2810 $\pm$ 0.4495 & 174.5970 $\pm$ 127.5959 & - \\ baseline punishment & on & 2.0252 $\pm$ 0.4779 & 719.1760 $\pm$ 154.5015 & 0.8760 $\pm$ 0.3296 & 61.9650 $\pm$ 14.8473 & 0.1813 $\pm$ 0.0324 \\ & off & 1.9796 $\pm$ 0.5287 & 699.8420 $\pm$ 156.6811 & 0.8890 $\pm$ 0.3141 & 1.2990 $\pm$ 3.2802 & - \\ RTA no punishment & on & -35.7737 $\pm$ 31.4269 & 385.1380 $\pm$ 255.2133 & 0.6760 $\pm$ 0.4680 & 295.9280 $\pm$ 214.9787 & 0.8228 $\pm$ 0.4336 \\ & off & -22.1710 $\pm$ 6.5638 & 161.8100 $\pm$ 77.1825 & 0.5380 $\pm$ 0.4986 & 133.3550 $\pm$ 59.4011 & - \\ RTA punishment & on & 1.9614 $\pm$ 0.5390 & 678.1530 $\pm$ 155.3601 & 0.8860 $\pm$ 0.3178 & 65.2410 $\pm$ 17.8011 & 0.1843 $\pm$ 0.0306 \\ 
& off & 1.5387 $\pm$ 1.0994 & 667.7730 $\pm$ 160.3803 & 0.8910 $\pm$ 0.3116 & 6.0220 $\pm$ 10.9016 & - \\ RTA Corrected Action & on & -28.6656 $\pm$ 12.8821 & 196.4600 $\pm$ 92.5018 & 0.0 $\pm$ 0.0 & 196.4600 $\pm$ 92.5018 & 18.3608 $\pm$ 2.8570 \\ & off & -22.9620 $\pm$ 8.5660 & 14.2520 $\pm$ 3.0771 & 0.0 $\pm$ 0.0 & 14.2440 $\pm$ 3.0875 & - \\ \bottomrule \end{tabular} } \end{table} \begin{figure}[ht] \centering \subfigure[SAC evaluated with RTA Average Return]{\includegraphics[width=0.45\linewidth]{figures/docking3d_explicit_asif/sac_w_rta_eval.png}}\qquad \subfigure[SAC evaluated with RTA Average Success]{\includegraphics[width=0.45\linewidth]{figures/docking3d_explicit_asif/sac_w_rta_success.png}}\\ \subfigure[SAC evaluated without RTA Average Return]{\includegraphics[width=0.45\linewidth]{figures/docking3d_explicit_asif/sac_no_rta_eval.png}}\qquad \subfigure[SAC evaluated without RTA Average Success]{\includegraphics[width=0.45\linewidth]{figures/docking3d_explicit_asif/sac_no_rta_success.png}} \caption{Results collected from experiments run in the Docking3D environment with an explicit ASIF RTA. 
Each curve represents the average 10 trials and the shaded region is the $95\%$ confidence interval about the mean.} \label{fig:docking3d_sac_exp_asif_results} \end{figure} \begin{table}[hb] \caption{SAC 3D Spacecraft Docking Explicit ASIF} \label{tab:sac3ddexplicitasif} \centering \resizebox{1.0\linewidth}{!}{ \begin{tabular}{lcccccc} \toprule Configuration & RTA & Return & Length & Success & Interventions/Violations & Correction \\ \midrule baseline & on & -12.7579 $\pm$ 10.1785 & 915.5070 $\pm$ 202.0146 & 0.1110 $\pm$ 0.3141 & 204.1420 $\pm$ 90.2193 & 0.3788 $\pm$ 0.0620 \\ & off & -38.0083 $\pm$ 20.4644 & 854.8270 $\pm$ 283.0951 & 0.0410 $\pm$ 0.1983 & 304.2500 $\pm$ 160.4455 & - \\ baseline punishment & on & -0.1637 $\pm$ 0.6890 & 987.1430 $\pm$ 68.1994 & 0.0010 $\pm$ 0.0316 & 60.4100 $\pm$ 32.5684 & 0.2518 $\pm$ 0.0390 \\ & off & -2.8927 $\pm$ 3.6149 & 986.2250 $\pm$ 70.9115 & 0.0050 $\pm$ 0.0705 & 29.0850 $\pm$ 35.7144 & - \\ RTA no punishment & on & -14.5421 $\pm$ 7.3016 & 940.2470 $\pm$ 159.9236 & 0.0780 $\pm$ 0.2682 & 252.0550 $\pm$ 65.3287 & 0.3840 $\pm$ 0.0382 \\ & off & -56.6587 $\pm$ 29.6890 & 568.9540 $\pm$ 316.5479 & 0.0050 $\pm$ 0.0705 & 399.5180 $\pm$ 211.1678 & - \\ RTA punishment & on & -2.7094 $\pm$ 1.9604 & 999.2260 $\pm$ 16.0357 & 0.0 $\pm$ 0.0 & 156.4860 $\pm$ 42.5640 & 0.3229 $\pm$ 0.0292 \\ & off & -29.8980 $\pm$ 14.0208 & 947.3790 $\pm$ 146.0030 & 0.0180 $\pm$ 0.1330 & 270.2530 $\pm$ 120.9608 & - \\ RTA Corrected Action & on & -12.3856 $\pm$ 9.0203 & 905.9600 $\pm$ 213.5613 & 0.0410 $\pm$ 0.1983 & 217.5670 $\pm$ 78.7771 & 0.3668 $\pm$ 0.0465 \\ & off & -48.8981 $\pm$ 23.9236 & 687.4480 $\pm$ 355.5327 & 0.0080 $\pm$ 0.0891 & 375.6960 $\pm$ 195.4146 & - \\ \bottomrule \end{tabular} } \end{table} \FloatBarrier \subsection{3D Spacecraft Docking Implicit Simplex} \label{app:docking3dimplicitsimplex} \begin{figure}[ht] \centering \subfigure[PPO evaluated with RTA Average 
Return]{\includegraphics[width=0.45\linewidth]{figures/docking3d_implicit_simplex/ppo_w_rta_eval.png}}\qquad \subfigure[PPO evaluated with RTA Average Success]{\includegraphics[width=0.45\linewidth]{figures/docking3d_implicit_simplex/ppo_w_rta_success.png}}\\ \subfigure[PPO evaluated without RTA Average Return]{\includegraphics[width=0.45\linewidth]{figures/docking3d_implicit_simplex/ppo_no_rta_eval.png}}\qquad \subfigure[PPO evaluated without RTA Average Success]{\includegraphics[width=0.45\linewidth]{figures/docking3d_implicit_simplex/ppo_no_rta_success.png}} \caption{Results collected from experiments run in the Docking3D environment with an implicit simplex RTA. Each curve represents the average 10 trials and the shaded region is the $95\%$ confidence interval about the mean. } \label{fig:docking3d_ppo_imp_sim_results} \end{figure} \begin{table}[hb] \caption{PPO 3D Spacecraft Docking Implicit Simplex} \label{tab:ppo3dimplicitsimplex} \centering \resizebox{1.0\linewidth}{!}{ \begin{tabular}{lcccccc} \toprule Configuration & RTA & Return & Length & Success & Interventions/Violations & Correction \\ \midrule baseline & on & -28.8368 $\pm$ 9.1682 & 444.4520 $\pm$ 311.7484 & 0.3520 $\pm$ 0.4776 & 177.5300 $\pm$ 58.5945 & 1.2214 $\pm$ 0.2807 \\ & off & -23.7356 $\pm$ 14.1796 & 339.6470 $\pm$ 368.3187 & 0.2710 $\pm$ 0.4445 & 148.9460 $\pm$ 106.5690 & - \\ baseline punishment & on & 1.9298 $\pm$ 0.4878 & 729.8170 $\pm$ 164.2315 & 0.8050 $\pm$ 0.3962 & 0.0 $\pm$ 0.0 & 0.0 $\pm$ 0.0 \\ & off & 1.9062 $\pm$ 0.4930 & 741.4000 $\pm$ 163.1066 & 0.7880 $\pm$ 0.4087 & 0.8780 $\pm$ 1.9645 & - \\ RTA no punishment & on & -33.2941 $\pm$ 22.1803 & 388.1860 $\pm$ 222.9841 & 0.5020 $\pm$ 0.5000 & 152.7050 $\pm$ 104.8435 & 1.2013 $\pm$ 0.4864 \\ & off & -23.5098 $\pm$ 8.5427 & 214.6090 $\pm$ 125.9420 & 0.4380 $\pm$ 0.4961 & 152.0270 $\pm$ 57.3112 & - \\ RTA punishment & on & 2.0360 $\pm$ 0.3864 & 722.1020 $\pm$ 156.9345 & 0.8900 $\pm$ 0.3129 & 0.0010 $\pm$ 0.0316 & 0.0008 $\pm$ 
0.0247 \\ & off & 2.0734 $\pm$ 0.3357 & 703.6070 $\pm$ 152.0538 & 0.9190 $\pm$ 0.2728 & 1.1600 $\pm$ 2.1805 & - \\ RTA Corrected Action & on & -12.6732 $\pm$ 15.6686 & 705.8490 $\pm$ 305.7374 & 0.0290 $\pm$ 0.1678 & 705.4710 $\pm$ 305.5783 & 18.1370 $\pm$ 3.7118 \\ & off & -22.2881 $\pm$ 8.1576 & 15.2590 $\pm$ 3.6905 & 0.0 $\pm$ 0.0 & 15.2590 $\pm$ 3.6905 & - \\ \bottomrule \end{tabular} } \end{table} \begin{figure}[ht] \centering \subfigure[SAC evaluated with RTA Average Return]{\includegraphics[width=0.45\linewidth]{figures/docking3d_implicit_simplex/sac_w_rta_eval.png}}\qquad \subfigure[SAC evaluated with RTA Average Success]{\includegraphics[width=0.45\linewidth]{figures/docking3d_implicit_simplex/sac_w_rta_success.png}}\\ \subfigure[SAC evaluated without RTA Average Return]{\includegraphics[width=0.45\linewidth]{figures/docking3d_implicit_simplex/sac_no_rta_eval.png}}\qquad \subfigure[SAC evaluated without RTA Average Success]{\includegraphics[width=0.45\linewidth]{figures/docking3d_implicit_simplex/sac_no_rta_success.png}} \caption{Results collected from experiments run in the Docking3D environment with an implicit simplex RTA. 
Each curve represents the average 10 trials and the shaded region is the $95\%$ confidence interval about the mean.} \label{fig:docking3d_sac_imp_sim_results} \end{figure} \begin{table}[hb] \caption{SAC 3D Spacecraft Docking Implicit Simplex} \label{tab:sac3dimplicitsimplex} \centering \resizebox{1.0\linewidth}{!}{ \begin{tabular}{lcccccc} \toprule Configuration & RTA & Return & Length & Success & Interventions/Violations & Correction \\ \midrule baseline & on & -39.8408 $\pm$ 15.1098 & 853.8830 $\pm$ 271.8064 & 0.0480 $\pm$ 0.2138 & 49.7620 $\pm$ 37.1512 & 1.0700 $\pm$ 0.2004 \\ & off & -49.9209 $\pm$ 23.8915 & 772.6130 $\pm$ 329.1146 & 0.0090 $\pm$ 0.0944 & 384.6810 $\pm$ 189.9644 & - \\ baseline punishment & on & -1.7770 $\pm$ 2.1333 & 996.0120 $\pm$ 39.0965 & 0.0010 $\pm$ 0.0316 & 0.1110 $\pm$ 0.7673 & 0.0623 $\pm$ 0.2595 \\ & off & -1.7795 $\pm$ 2.1777 & 996.9430 $\pm$ 31.9320 & 0.0 $\pm$ 0.0 & 18.1090 $\pm$ 21.9634 & - \\ RTA no punishment & on & -56.7316 $\pm$ 18.9754 & 886.5070 $\pm$ 231.2872 & 0.0530 $\pm$ 0.2240 & 81.5040 $\pm$ 47.9373 & 1.0850 $\pm$ 0.1024 \\ & off & -69.4304 $\pm$ 35.3073 & 701.8620 $\pm$ 334.5834 & 0.0010 $\pm$ 0.0316 & 489.0240 $\pm$ 244.4675 & - \\ RTA punishment & on & -2.3084 $\pm$ 2.7257 & 997.5530 $\pm$ 30.4823 & 0.0020 $\pm$ 0.0447 & 0.4140 $\pm$ 1.8554 & 0.1206 $\pm$ 0.3443 \\ & off & -2.5016 $\pm$ 3.2310 & 998.3320 $\pm$ 22.5890 & 0.0030 $\pm$ 0.0547 & 25.6890 $\pm$ 32.5553 & - \\ RTA Corrected Action & on & -54.9041 $\pm$ 21.5688 & 818.4390 $\pm$ 297.3325 & 0.0260 $\pm$ 0.1591 & 119.9790 $\pm$ 103.9273 & 1.1138 $\pm$ 0.0633 \\ & off & -52.5477 $\pm$ 28.3996 & 510.3110 $\pm$ 374.1590 & 0.0 $\pm$ 0.0 & 332.9860 $\pm$ 220.6356 & - \\ \bottomrule \end{tabular} } \end{table} \clearpage \FloatBarrier \subsection{3D Spacecraft Docking Implicit ASIF} \label{app:docking3dimplicitasif} \begin{figure}[ht] \centering \subfigure[PPO evaluated with RTA Average 
Return]{\includegraphics[width=0.45\linewidth]{figures/docking3d_implicit_asif/ppo_w_rta_eval.png}}\qquad \subfigure[PPO evaluated with RTA Average Success]{\includegraphics[width=0.45\linewidth]{figures/docking3d_implicit_asif/ppo_w_rta_success.png}}\\ \subfigure[PPO evaluated without RTA Average Return]{\includegraphics[width=0.45\linewidth]{figures/docking3d_implicit_asif/ppo_no_rta_eval.png}}\qquad \subfigure[PPO evaluated without RTA Average Success]{\includegraphics[width=0.45\linewidth]{figures/docking3d_implicit_asif/ppo_no_rta_success.png}} \caption{Results collected from experiments run in the Docking3D environment with an implicit ASIF RTA. Each curve represents the average 10 trials and the shaded region is the $95\%$ confidence interval about the mean. } \label{fig:docking3d_ppo_imp_asif_results} \end{figure} \begin{table}[hb] \caption{PPO 3D Spacecraft Docking Implicit ASIF} \label{tab:ppo3dimplicitasif} \centering \resizebox{1.0\linewidth}{!}{ \begin{tabular}{lcccccc} \toprule Configuration & RTA & Return & Length & Success & Interventions/Violations & Correction \\ \midrule baseline & on & -11.1833 $\pm$ 14.6859 & 670.6230 $\pm$ 303.9664 & 0.0260 $\pm$ 0.1591 & 670.6230 $\pm$ 303.9664 & 2.1712 $\pm$ 0.6998 \\ & off & -21.7117 $\pm$ 8.4841 & 358.8620 $\pm$ 359.6777 & 0.2550 $\pm$ 0.4359 & 134.7320 $\pm$ 69.5541 & - \\ baseline punishment & on & -11.2320 $\pm$ 15.6139 & 723.6400 $\pm$ 297.1046 & 0.0340 $\pm$ 0.1812 & 723.6400 $\pm$ 297.1046 & 2.0077 $\pm$ 0.7648 \\ & off & 1.8275 $\pm$ 0.5826 & 740.5620 $\pm$ 177.9170 & 0.7960 $\pm$ 0.4030 & 1.2820 $\pm$ 2.5820 & - \\ RTA no punishment & on & -10.4772 $\pm$ 14.6715 & 727.0780 $\pm$ 304.3238 & 0.0290 $\pm$ 0.1678 & 727.0770 $\pm$ 304.3229 & 0.9596 $\pm$ 0.1598 \\ & off & -24.7680 $\pm$ 8.6542 & 101.9490 $\pm$ 38.2915 & 0.0 $\pm$ 0.0 & 89.9680 $\pm$ 33.4132 & - \\ RTA punishment & on & -10.4772 $\pm$ 14.6715 & 727.0780 $\pm$ 304.3238 & 0.0290 $\pm$ 0.1678 & 727.0740 $\pm$ 304.3218 & 0.7655 $\pm$ 0.1558 
\\ & off & -24.4860 $\pm$ 8.6466 & 88.4590 $\pm$ 32.2835 & 0.0 $\pm$ 0.0 & 77.6300 $\pm$ 30.2741 & - \\ RTA Corrected Action & on & -10.4772 $\pm$ 14.6715 & 727.0780 $\pm$ 304.3238 & 0.0290 $\pm$ 0.1678 & 727.0780 $\pm$ 304.3238 & 5.3637 $\pm$ 2.2937 \\ & off & -21.8312 $\pm$ 8.9370 & 27.9090 $\pm$ 8.6748 & 0.0 $\pm$ 0.0 & 27.3040 $\pm$ 8.4486 & - \\ \bottomrule \end{tabular} } \end{table} \begin{figure}[ht] \centering \subfigure[SAC evaluated with RTA Average Return]{\includegraphics[width=0.45\linewidth]{figures/docking3d_implicit_asif/sac_w_rta_eval.png}}\qquad \subfigure[SAC evaluated with RTA Average Success]{\includegraphics[width=0.45\linewidth]{figures/docking3d_implicit_asif/sac_w_rta_success.png}}\\ \subfigure[SAC evaluated without RTA Average Return]{\includegraphics[width=0.45\linewidth]{figures/docking3d_implicit_asif/sac_no_rta_eval.png}}\qquad \subfigure[SAC evaluated without RTA Average Success]{\includegraphics[width=0.45\linewidth]{figures/docking3d_implicit_asif/sac_no_rta_success.png}} \caption{Results collected from experiments run in the Docking3D environment with an implicit ASIF RTA. 
Each curve represents the average 10 trials and the shaded region is the $95\%$ confidence interval about the mean.} \label{fig:docking3d_sac_imp_asif_results} \end{figure} \begin{table}[hb] \caption{SAC 3D Spacecraft Docking Implicit ASIF} \label{tab:sac3dimplicitasif} \centering \resizebox{1.0\linewidth}{!}{ \begin{tabular}{lcccccc} \toprule Configuration & RTA & Return & Length & Success & Interventions/Violations & Correction \\ \midrule baseline & on & -10.1428 $\pm$ 14.3804 & 721.7680 $\pm$ 289.5521 & 0.0350 $\pm$ 0.1838 & 721.7680 $\pm$ 289.5521 & 1.0391 $\pm$ 0.0697 \\ & off & -49.4634 $\pm$ 24.9526 & 805.9750 $\pm$ 312.1980 & 0.0050 $\pm$ 0.0705 & 384.3810 $\pm$ 188.4363 & - \\ baseline punishment & on & -13.8266 $\pm$ 16.6181 & 643.0790 $\pm$ 310.5455 & 0.0220 $\pm$ 0.1467 & 643.0790 $\pm$ 310.5455 & 1.0190 $\pm$ 0.0795 \\ & off & -3.7864 $\pm$ 5.3072 & 974.0050 $\pm$ 109.7366 & 0.0020 $\pm$ 0.0447 & 36.7950 $\pm$ 50.6100 & - \\ RTA no punishment & on & -9.2941 $\pm$ 13.4528 & 740.2390 $\pm$ 294.4016 & 0.0440 $\pm$ 0.2051 & 740.2390 $\pm$ 294.4016 & 0.9780 $\pm$ 0.0124 \\ & off & -28.8167 $\pm$ 13.9128 & 233.3260 $\pm$ 108.2606 & 0.0 $\pm$ 0.0 & 180.0350 $\pm$ 88.3977 & - \\ RTA punishment & on & -9.2941 $\pm$ 13.4528 & 740.2390 $\pm$ 294.4016 & 0.0440 $\pm$ 0.2051 & 740.2390 $\pm$ 294.4016 & 0.9796 $\pm$ 0.0123 \\ & off & -29.0804 $\pm$ 13.9550 & 232.2120 $\pm$ 107.8407 & 0.0 $\pm$ 0.0 & 180.9220 $\pm$ 87.4997 & - \\ RTA Corrected Action & on & -9.2941 $\pm$ 13.4528 & 740.2390 $\pm$ 294.4016 & 0.0440 $\pm$ 0.2051 & 740.2390 $\pm$ 294.4016 & 1.6682 $\pm$ 0.1922 \\ & off & -21.4260 $\pm$ 7.9279 & 48.9950 $\pm$ 12.3313 & 0.0 $\pm$ 0.0 & 46.4630 $\pm$ 11.9167 & - \\ \bottomrule \end{tabular} } \end{table} \clearpage
\section{Introduction} \label{sec:intro} Videos of mobile app usage have become commonplace on the internet. For example, tech vloggers make app tutorials to educate new users or share best practices for app features. People record app screens step-by-step to guide their parents who are not familiar with an app, and app testers create bug reports with rich context in video. In particular, screen recordings---that is, videos that record on-screen content but not the user's hands---are easy to produce and share using built-in smartphone facilities. To use these screen recordings effectively, however, the viewer has to first understand the interactions performed in the video and then manually repeat them in the order shown. This process can be time-consuming and error-prone~\cite{bernal2020translating}, especially when the sequence of necessary interactions is long or the recording is played quickly. For example, a viewer may have to pause the recording after each interaction, or even replay it multiple times, before they are able to replicate the interactions on their own device. In addition, the target UI element for an interaction may be difficult to locate in complex app screens, in the presence of different display preferences---such as large fonts or dark mode---or when device or app versions change. What if, instead of asking a user to do all of the work of interpreting a screen recording, a machine could do it? Such a system might identify the types of interactions that were performed and the target UI elements on which they were performed, ideally from the recording alone.
Work has been attempted in this area before, but a limitation of existing systems is that they require a special recording apparatus~\cite{yu2019lirat, li2018appinite}, visual indicators added during recording~\cite{qian2020roscript, bernal2020translating}, or additional metadata such as a UI transition graph~\cite{feng2022gifdroid} in order to capture the interactions demonstrated in the video. In this paper, we propose a system that automatically extracts user interactions from ordinary screen recordings, without requiring additional settings, specially-instrumented recording tools, or source code access. Our system performs three phases to extract the interactions: 1) video segmentation, 2) interaction classification, and 3) interaction localization. Our system first segments the start and the end of each interaction. Then, it runs heuristics to choose between six common interaction types. Finally, our system uses a 3D convolutional encoder to learn the semantics of the animation in the UI, a 2D convolutional encoder and decoder to capture the connections between consecutive UI states, and another decoder to infer the interaction probability heatmap and output the location of the interaction. Based on UI detection results from screenshot pixels, we can find the target UI element and its content. We also explore methods to replay the interactions that we extract from videos on another device. Our replay prototype runs UI detection on each app screen, locates the best-matching UI element for each recorded interaction, and performs the interaction on each UI element in turn. Table~\ref{tab:technique_differences} summarizes the key differences between our method and existing techniques. We evaluated our system on the Rico dataset~\cite{deka2017rico} (created four years ago), and on a smaller app usage dataset (64 top-downloaded apps, each with iOS and Android versions) that we collected and annotated recently.
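The three-phase pipeline described above can be illustrated with a minimal, self-contained sketch. Everything below is a hypothetical simplification, not the actual system: the real pipeline distinguishes six interaction types and localizes interactions with a learned heatmap model, whereas this toy version segments on per-frame change magnitude, separates only taps from swipes by burst duration, and localizes at the strongest-change pixel.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Interaction:
    kind: str                  # e.g. "tap" or "swipe" (two of the six types)
    location: Tuple[int, int]  # stand-in for the heatmap model's peak

def segment(frame_diffs: List[float], thresh: float = 0.1) -> List[Tuple[int, int]]:
    """Phase 1 (toy): split the recording wherever per-frame change exceeds thresh."""
    spans, start = [], None
    for i, d in enumerate(frame_diffs):
        if d > thresh and start is None:
            start = i
        elif d <= thresh and start is not None:
            spans.append((start, i))
            start = None
    if start is not None:
        spans.append((start, len(frame_diffs)))
    return spans

def classify(span: Tuple[int, int]) -> str:
    """Phase 2 (toy): short bursts of change look like taps, longer ones like swipes."""
    return "tap" if span[1] - span[0] <= 3 else "swipe"

def localize(diff_image: List[List[float]]) -> Tuple[int, int]:
    """Phase 3 (toy): argmax of an accumulated frame-difference image."""
    _, loc = max((v, (r, c))
                 for r, row in enumerate(diff_image)
                 for c, v in enumerate(row))
    return loc

# A synthetic recording: two bursts of on-screen change.
diffs = [0.0, 0.0, 0.5, 0.6, 0.0, 0.0, 0.2, 0.2, 0.2, 0.2, 0.2, 0.0]
spans = segment(diffs)                     # [(2, 4), (6, 11)]
kinds = [classify(s) for s in spans]       # ["tap", "swipe"]
peak = localize([[0.0, 0.1], [0.9, 0.0]])  # (1, 0)
```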
For video segmentation, our system achieves \textcolor{black}{84.7\%} recall on iOS, and slightly worse recall on Android (\textcolor{black}{72.0\%}). For interaction classification, our system achieves comparable accuracy on both platforms (iOS--\textcolor{black}{87.6\%}, Android--\textcolor{black}{89.3\%}). For interaction localization, our system achieves the best accuracy on Rico (\textcolor{black}{69.1\%}), as the model is trained on this dataset. Although app UI designs have changed since Rico was collected, our model still works on recent Android apps with lower accuracy (\textcolor{black}{56.2\%}), and achieves \textcolor{black}{41.4\%} accuracy on recent iOS apps. For interaction replay, we found that the majority of interactions (iOS--84.1\%, Android--78.4\%) can be replayed on different devices. The contributions of this paper are as follows: \begin{itemize} \item We present a pixel-based approach to automatically extract and replay interactions from ordinary screen recordings without requiring additional settings, specialized recording instrumentation, or access to source code. \item We implement a prototype system that instantiates our approach. The results of our evaluation show reasonable accuracy in video segmentation, interaction classification, and interaction localization. Our system successfully replays a majority of interactions on different devices. These results demonstrate the feasibility of our pixel-based approach to extracting replayable interactions.
\end{itemize} \begin{table}[b] \caption{Differences in input, additional data requirements, and support for cross-device replay between existing techniques and our proposed system.} \label{tab:technique_differences} \resizebox{0.5\textwidth}{!}{ \begin{tabular}{lllc} \toprule & Input & \makecell[l]{Additional\\Requirement} & \makecell{Cross-Device\\Replay} \\ \midrule \rowcolor{Gray} \texttt{APPINITE\cite{li2018appinite}} & User demonstration & \makecell[l]{View hierarchy,\\Interaction\\point \& type} & No \\ \texttt{V2S\cite{bernal2020translating}} & Video & \makecell[l]{Add touch indicator\\to the video} & No \\ \rowcolor{Gray} \texttt{RoScript\cite{qian2020roscript}} & \makecell[l]{Test script or Video} & Include fingertips & No \\ \texttt{LIRAT\cite{yu2019lirat}} & \makecell[l]{Test script or\\User demonstration} & \makecell[l]{Interaction\\point \& type} & Yes \\ \rowcolor{Gray} \texttt{GifDroid\cite{feng2022gifdroid}} & \makecell[l]{Video and UI\\Transition Graph} & None & No \\ \hline \rowcolor{Gray2} \texttt{Our system} & \textbf{Video} & \textbf{None} & \textbf{Yes} \\ \bottomrule \end{tabular} } \end{table} \section{Related Work} We discuss the related work across two areas: 1) identifying interactions on user interfaces, and 2) replaying user interactions. \subsection{Identifying Interactions on User Interfaces} Identifying user interactions is an important task on various platforms, including mobile~\cite{li2018appinite,bernal2020translating,yu2019lirat,qian2020roscript, feng2022gifdroid}, desktop~\cite{zhao2019actionnet, nguyen2015making}, web~\cite{bao2017extracting, sprenkle2005automated}, and even the physical world~\cite{guo2019statelens}.
The extracted interactions can empower many applications, including task automation~\cite{li2018appinite}, bug reporting~\cite{bernal2020translating}, automated app testing~\cite{yu2019lirat,qian2020roscript, sprenkle2005automated}, and guidance to use appliances~\cite{guo2019statelens}. Previous research has applied various methods to identify user interactions for the purposes of replaying them on the same or another device. LIRAT~\cite{yu2019lirat} obtains the interaction location and type by using a debugging tool to access low-level system events. The device must also have a connection to a computer that runs the debugging tool. APPINITE~\cite{li2018appinite} adds a layer of interaction proxies~\cite{zhang2017interaction} on top of the current running app. An interaction proxy layer captures the users' taps and passes them to the underlying app. This method requires installing an additional background service and obtaining Accessibility permissions, and may not work on all platforms. V2S~\cite{bernal2020translating} requires users to access Android developer settings in order to show a touch indicator at each \textit{tap} event. With this known visual indicator, V2S presents an object detection model to locate the touch indicator and infer the interaction location. This method adds extra work to app video creators, and not all video creators would like to show a developer-mode touch indicator in videos. RoScript~\cite{qian2020roscript} instead requires video creators to use an external camera to record the phone screen and finger movement. It leverages computer vision techniques to recognize a finger and its relative location to the app UI, and the system requires users to move their fingers outside the phone screen between each interaction for segmentation. This method also requires an additional camera and a stable setup of phone and camera. 
While GifDroid~\cite{feng2022gifdroid} does not require a complex setup for the input video, it additionally relies on a UI transition graph to assist interaction identification. However, constructing a UI transition graph is not trivial and requires considerable effort. The methods above require settings or recording tools that are specific to a platform (e.g., Android) \cite{yu2019lirat, li2018appinite}, add extra work for app video creators \cite{bernal2020translating, qian2020roscript}, and will not work on existing app usage videos \cite{yu2019lirat, li2018appinite, bernal2020translating, qian2020roscript}. We believe our pixel-based approach can be a more generalizable way to collect interaction traces from videos. \subsection{Replaying User Interactions} After extracting user interactions, the key challenge of replaying is to find where to interact on the replay device. Some work repeats the (x, y) coordinate from the recording~\cite{bernal2020translating, halpern2015mosaic}, while some applications~\cite{yu2019lirat, qian2020roscript} find matching UI elements on the replaying device so that the replay is more robust to dynamic content and device changes. Finding a matching UI element sometimes requires access to a view hierarchy. For example, APPINITE~\cite{li2018appinite} tries to match UI metadata from the view hierarchy (e.g., parent-child relationship, text, UI class) so that it can still locate the target UI element even when it moves to a different location due to screen content changes. However, the view hierarchy is not always available, and it can be incomplete or misleading. To avoid these limitations, some work leverages computer vision techniques to match targeted UI elements. For example, LIRAT~\cite{yu2019lirat} compares image features to match UI elements between recording and replaying screens, and extracts a layout hierarchy from pixels to improve matching.
Similarly, our method also leverages video pixels to match target UI elements, but we use object detection models that have better performance. \section{System} Essentially, our system extracts interactions from pixels in video frames, and uses this information to replay the interactions on another device. \textcolor{black}{As shown in Figure~\ref{fig:flowchart},} extraction is performed through three phases: 1) video segmentation (\cref{sec:video_segmentation}), 2) interaction classification (\cref{sec:interaction_classification}), and 3) interaction localization (\cref{sec:interaction_localization}). Our system then applies a set of strategies in the interaction replay phase (\cref{sec:interaction_replay}). The rest of this section describes the system phases in detail. \begin{figure*} \centering \includegraphics[width=1\textwidth]{figures/segmentation.pdf} \caption{Visualization of image similarities between consecutive frames. The spikes indicate segments in the video where user interactions were performed, which we use to segment the keyframes. In this figure, we detected six stable intervals, and for each interval, we take the middle frame as the extracted keyframe. \textcolor{black}{The users \textit{tap} on the bottom-right ``search'' icon, \textit{tap} the input field on the top, \textit{type} text, \textit{tap} the bottom-right ``search'' icon, and \textit{swipe} up to see more content.} } \label{fig:segmentation} \Description{The top of the figure shows several screenshots from an interaction trace split up into keyframes. The bottom of the figure shows a line graph of structural similarity over time, as the interaction trace proceeds.
The line graph shows several "spikes" where the structural similarity value goes drastically down as the user interactions are performed, and flat areas where the structural similarity is staying the same corresponding to the stable intervals.} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{figures/type_action.pdf} \caption{Examples of patterns that our interaction classification heuristics examine to classify a \textit{type} interaction, including (a) Keyboard patterns that can be detected from OCR results. (b) When users \textit{tap} on an input field, the title of the UI will change instantly; when users perform a \textit{type} interaction, the title will have a steady change or remain the same. } \Description{On the left, we show two screenshots of an app screen with keyboards open, one on the QWERTY keyboard and one on a number keyboard to show how we distinguish a typing action. On the right, we show three screenshots. The first is a Stocks page with a search field on the top, the second screen shows the search field active with the keyboard open, and the third screen shows the same screen where some text has been typed in a search field. The indicated interaction is that the user tapped on the search field to activate the keyboard, and typed some text.} \label{fig:type_action} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{figures/localization_heuristics.pdf} \caption{Examples of patterns that our interaction localization heuristics look for to localize a \textit{tap} interaction, including (a) showing the start and the end UI state when the user taps on an item, and the item remains in the next page having the same label and the same image, and (b) showing an example when the tapped item's label becomes the title of the next UI state.} \label{fig:localization_heuristics} \Description{The figure shows two examples of heuristics our system applies for interaction localization. 
On the left, we show two UI screens in a Maps app that share some UI elements, including an image with text (which the user tapped) which was moved down in the second UI screen below a map indicating the user tapped on it. On the right, we show an example of a News app UI screen. The first UI screen shows a list of items, one labeled "History". The second UI screen has the title "History" indicating that the "History" list item was tapped.} \end{figure*} \subsection{Phase 1---Video Segmentation} \label{sec:video_segmentation} Video segmentation splits the frames of the input screen recording into a sequence of representative frames that maximally differentiate the video, or \emph{keyframes}~\cite{zhong1996clustering}. To illustrate how this works, consider the frames of the screen recording shown in \cref{fig:segmentation}. We start by computing the histogram of oriented gradients (HOG)~\cite{dalal2005histograms} feature descriptor for each frame. As the name suggests, the HOG descriptor is a simplified representation of the screen in terms of its structure or shape through gradient and orientation. We use this HOG descriptor to calculate the similarity between consecutive frames using a structural similarity (SSIM) measure~\cite{wang2004image}. Intuitively, a sequence of similar frames represents a stable interval---with the middle of this stable interval being the keyframe, represented with an arrow in \cref{fig:segmentation}. Next, we run a spike detection algorithm using empirical parameters we derived from a number of app usage videos (\cref{sec:evaluation}): 1) the spike should be larger than $(\mathrm{Similarities}_{\mathrm{max}} - \mathrm{Similarities}_{\mathrm{min}})/15$~\footnote{\textcolor{black}{We set the divisor 15 empirically based on our experiments.}}, to be resilient to partial content changes that are not caused by user interactions, and 2) the stable intervals should contain at least four frames, to avoid detecting spurious interactions from transient UI changes.
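The stable-interval criteria above can be sketched in plain Python. This is a minimal illustration that assumes the per-frame-pair similarity scores (e.g., SSIM over HOG descriptors) have already been computed; the function name, sentinel handling, and treatment of the spike threshold are our own simplifications rather than the exact implementation:

```python
def extract_keyframes(similarities, min_stable=4, spike_divisor=15):
    """Segment a recording given similarity scores between consecutive
    frames (e.g., SSIM over HOG descriptors). Dips ("spikes") mark
    interactions; runs of high similarity are stable intervals whose
    middle frame becomes the keyframe."""
    lo, hi = min(similarities), max(similarities)
    # A dip counts as a spike only if deeper than (max - min) / spike_divisor.
    threshold = hi - (hi - lo) / spike_divisor
    keyframes, start = [], 0
    for i, s in enumerate(similarities + [float("-inf")]):  # sentinel flushes last run
        if s < threshold:                    # inside a spike: close the current run
            if i - start >= min_stable:      # keep only runs of at least 4 frames
                keyframes.append(start + (i - start) // 2)  # middle frame
            start = i + 1
    return keyframes
```

A real implementation would first compute `similarities` from the video frames, for example with scikit-image's HOG and SSIM routines.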
The inverted spikes in \cref{fig:segmentation} indicate interaction clip points that segment the keyframes. \subsection{Phase 2---Interaction Classification} \label{sec:interaction_classification} This phase identifies six interactions---\textit{type}, \textit{swipe left}, \textit{swipe right}, \textit{swipe up}, \textit{swipe down}, and \textit{tap}~\cite{li2019humanoid}---through the following heuristics: \textbf{\textit{Type}} interactions are always associated with a virtual keyboard. We inspect the OCR results on a screen to determine if they contain text corresponding to the rows of a keyboard---that is, the 3 rows of a QWERTY keyboard, or the 4 rows of a number pad (Figure~\ref{fig:type_action}(a)). For entered text, we compare the OCR results in the first and last frames of a \textit{type} interaction. To illustrate, in Figure~\ref{fig:type_action}(b)'s right-most screen, the placeholder inside the top search bar changes as text is entered, with suggestions appearing below. Among all changed or added OCR text results in the last frame, we pick the top-most one (smallest $y$) as the entered text. \textbf{\textit{Swipe (left, right, up, down)}} interactions shift several UI elements within a scrollable area, while the top title bar and bottom tab bar are often unchanged (see \cref{fig:swipe_left_vs_tap} and \cref{fig:swipe_up_vs_tap} in the Appendix). Consequently, we compare the OCR results between any two consecutive frames and calculate the movement between each pair of text strings. If multiple ($N \geq 3$, an empirical parameter) text strings move in the same horizontal or vertical direction within a threshold distance, our system classifies the interaction as a \textit{swipe}. We call the text strings with shared movement a \emph{text collection}.
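As an illustration, the movement test for \textit{swipe} classification might look like the following. The OCR representation (a mapping from text string to position), the function name, and the pixel tolerance are our own assumptions for this sketch:

```python
def classify_swipe(prev_ocr, curr_ocr, min_texts=3, tol=10):
    """Classify a swipe from OCR text movement between consecutive frames.

    prev_ocr/curr_ocr map a text string to its (x, y) position. If at
    least `min_texts` shared strings move in the same horizontal or
    vertical direction (with little drift on the other axis, within
    `tol` pixels), we call it a swipe and return its direction;
    otherwise return None."""
    moves = []
    for text, (x0, y0) in prev_ocr.items():
        if text in curr_ocr:
            x1, y1 = curr_ocr[text]
            moves.append((x1 - x0, y1 - y0))
    # Keep clearly horizontal moves and clearly vertical moves separately.
    horiz = [dx for dx, dy in moves if abs(dx) > tol and abs(dy) <= tol]
    vert = [dy for dx, dy in moves if abs(dy) > tol and abs(dx) <= tol]
    # Screen coordinates grow downward, so negative dy means content moved up.
    for deltas, neg_label, pos_label in ((vert, "swipe up", "swipe down"),
                                         (horiz, "swipe left", "swipe right")):
        if len([d for d in deltas if d < 0]) >= min_texts:
            return neg_label
        if len([d for d in deltas if d > 0]) >= min_texts:
            return pos_label
    return None
```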
\textcolor{black}{Note that sometimes ``snackbar'' elements may briefly appear at the bottom of the screen with messages about app processes, so we set $N \geq 3$ to avoid confusing this UI behavior with a swipe.} Capturing the semantics of a \textit{swipe} interaction requires three properties: the direction, the distance, and the initiation point. The \textit{swipe} direction is determined trivially through the movement coordinates. To calculate the \textit{swipe} distance, we use the median distance of movement between two consecutive frames, and then sum all median distances between any two consecutive frames in the interaction clip. The \textit{swipe} initiation point is either the first or the last text by x or y position, for horizontal or vertical \textit{swipe}s, respectively. \textbf{\textit{Tap}} interactions may lead to a new UI state or a pop-up keyboard, or cause little or no movement of shared elements. If an interaction clip is not classified as either a \textit{type} or a \textit{swipe}, we classify it as a \textit{tap} interaction. From the Rico dataset~\cite{deka2017rico}, we found that the majority of interactions in mobile apps are \textit{tap} interactions (91.7\%), and that \textit{type} and \textit{swipe} interactions often cause changes on text elements---for example, through creating or moving text. Informed by these findings, treating \textit{tap} as the fall-through interaction is reasonably justified. For a \textit{tap} interaction, we must also identify the \textit{tap} location. This is described in the next section. \subsection{Phase 3---Interaction Localization for Tap Interactions} \label{sec:interaction_localization} For \textit{tap} interactions, interaction localization is needed to identify the location of the \textit{tap}. In some cases, the start and end UI states will share an interaction component. For this situation, we can use heuristic-based localization (\cref{sec:heuristic-based_localization}) to identify the \textit{tap} interaction location.
When heuristic-based localization fails, we can rely on visual feedback cues provided by the app when the user \textit{tap}s a location. In this situation, we leverage the animation effect and the connections between the two consecutive UI states to train an interaction localization model to locate the interaction point (\cref{sec:localization_model}). \begin{figure*} \centering \includegraphics[width=1\textwidth]{figures/localization_model.pdf} \caption{The structure of our interaction localization model. Given eight frames as the input, the model takes the first and the last frames as the input to the 2D block to learn the connections between the two consecutive UI states; concurrently, the model feeds all eight frames into a 3D block to encode the animation effect; the model later concatenates the features extracted from the 2D block with the animation features extracted from the 3D encoder, and the combined features are then fed into a decoder to predict the interaction heatmap. } \Description{The figure shows the structure of the proposed localization model. From left to right, it consists of the input frames, the detailed parameters of our model structure, and the output heatmap. For the leftmost input, it shows a stack of 8 frames extracted from an interaction clip, which is the input of our 3D block module, and we take the first and the last frame as the input of the 2D block module. For the middle model structure, from top to bottom, it consists of three sub-blocks, namely a 2D block, a 3D block, and a decoder. The rightmost shows the output, i.e., the predicted interaction heatmap, as well as the ground truth heatmap. We use a variant of focal loss as our target loss function.} \label{fig:localization_model} \end{figure*} \subsubsection{Heuristic-based Localization} \label{sec:heuristic-based_localization} When the title of the new UI state is the same as the label of one of the items in the content area, it is likely that this is the item the user \textit{tap}ped.
In Figure~\ref{fig:localization_heuristics}(b), when users \textit{tap} on ``History'', the title of the new UI state also becomes ``History''. Because this is a highly accurate heuristic, we first detect the existence of this pattern to locate the interacted element. To detect the title, we first run OCR on the first and last frames of each interaction clip to obtain all texts. We then find the top title in the last frame, and check if there is an element with the same text in the main content of the first frame---excluding the top bar and the bottom app bar. If so, we output the position of this element as the current \textit{tap} interaction point. \subsubsection{Localization Model} \label{sec:localization_model} From our observations of the Rico animation data and our collected app usage videos, we identified three common visual cues that we can leverage to locate \textit{tap} interactions: 1) ripple effect---a radial action in the form of a visual ripple expanding outward from the user's touch, 2) expand effect---which scales up and cross-fades a UI element, and 3) changes in the text or background colors. In addition, we noticed that in some cases, the start and end UI states share the interacted element. For example, in Figure~\ref{fig:localization_heuristics}(a), when users \textit{tap} ``Hotels That Are Homes for the Harvest'', the new UI state contains the same text. Although the shared element is the interacted element in this case, this assumption may fail in other situations. We rely on our model to learn the difference between normal shared elements and shared interacted elements. We trained a deep-learning-based interaction localization model to locate the \textit{tap} point.
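Before turning to the model, the title-matching heuristic above can be sketched as a short function. The OCR tuple format, the `bar_frac` parameter approximating the top/bottom bar regions, and the function name are our own assumptions for illustration:

```python
def locate_tap_by_title(first_ocr, last_ocr, screen_h, bar_frac=0.12):
    """Title-matching heuristic: if the new UI state's title equals the
    label of an item in the previous state's content area, output that
    item's position as the tap point.

    OCR results are (text, x, y) tuples; `bar_frac` is an assumed
    fraction of the screen height excluded as top/bottom bars."""
    if not last_ocr:
        return None
    title = min(last_ocr, key=lambda t: t[2])[0]  # topmost text = title
    for text, x, y in first_ocr:
        # Only consider the main content area, excluding top and bottom bars.
        in_content = bar_frac * screen_h < y < (1 - bar_frac) * screen_h
        if in_content and text.strip().lower() == title.strip().lower():
            return (x, y)
    return None
```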
Inspired by the success of human pose recognition~\cite{pfister2015flowing}, object detection~\cite{law2018cornernet, lin2017focal}, and video classification models~\cite{tran2015learning}, we designed a model to predict a heatmap of possible \textit{tap} points by learning the semantics of the animation effects. As shown in Figure~\ref{fig:localization_model}, our model primarily consists of three blocks: a 2D block, a 3D block, and a decoder. The 2D block is based on 2D convolutional layers~\cite{pfister2015flowing} and aims to find the connections between two consecutive UI states, while the 3D block---based on 3D convolutional layers~\cite{tran2015learning}---captures the temporal relationship among frames---that is, the animation effect across these frames---in each interaction. A final decoder then fuses the features extracted from the 2D and the 3D blocks to infer the interaction heatmap. We added a shortcut module following the U-Net model~\cite{ronneberger2015u} to help the model retrieve the coarse features from the shallow layers in the encoder, which refine the extracted high-level abstract features and help dense per-pixel heatmap prediction. Concretely, given 8 frames extracted from one interaction clip as the input of our model (\cref{sec:evaluation:interaction_localization}), we take the first and the last frames as the input to the 2D block to learn the connections between the two consecutive UI states. In parallel, we feed all frames into our 3D block to encode the animation effect; the features extracted from the 2D and 3D blocks are then concatenated, and the combined features are fed into the decoder to predict the interaction heatmap, which is the output of our localization model. We take the point with the highest probability in the predicted heatmap as the output interaction point.
Given that we only have one \textit{tap} point in each training sample, only one of the $256 \times 512$ points in the heatmap is set to 1, while all other points are set to 0. Therefore, our dataset has a similar data imbalance problem as encountered in many object detection tasks~\cite{lin2017focal}. We used two strategies to alleviate this issue. First, instead of setting only one point in the heatmap to 1 and all others to 0, we found the bounding box of the target UI element (\cref{sec:interaction_clips_from_rico_dataset}) and then applied a 2D Gaussian function to obtain the probabilities of the surrounding points in the interacted element~\cite{law2018cornernet, pfister2015flowing, duan2019centernet}. Second, we used a variant of the focal loss~\cite{lin2017focal, law2018cornernet} to perform a weighted penalization on the low-confidence data: let $p_{ij}$ be the predicted score (a.k.a. confidence) at location $(i,j)$ in the predicted heatmap, and let $y_{ij}$ be the ground-truth score augmented by the 2D Gaussians. Then, the loss function is: $$ L = -\frac{1}{N} \sum_{i=0}^{H} \sum_{j=0}^{W} \begin{cases} (1-p_{ij})^{\alpha}\log(p_{ij}) & \text{if } y_{ij} = 1 \\ (1-y_{ij})^{\beta}(p_{ij})^{\alpha}\log(1-p_{ij}) & \text{otherwise} \end{cases} $$ We used the interaction trace and animation datasets in the Rico dataset~\cite{deka2017rico} to train our model. The details are explained in \cref{sec:interaction_clips_from_rico_dataset}. Our localization model is trained on 4 Tesla V100 GPUs using an Adam optimizer~\cite{kingma2014adam} for 25 epochs with an initial learning rate of 0.01 and a batch size of 32. For each interaction clip, we evenly picked 8 frames from the interaction clip as model input.
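The Gaussian target construction and the loss can be sketched as follows. This is a plain-Python illustration (a real implementation would operate on GPU tensors); the `sigma` parameter and the normalization of $N$ by the number of positive points, in the spirit of CornerNet~\cite{law2018cornernet}, are assumptions of this sketch:

```python
import math

def gaussian_heatmap(h, w, cx, cy, sigma):
    """Ground-truth heatmap: 1 at the tap point, with a 2D Gaussian
    falloff over the surrounding points of the target element."""
    return [[math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
             for x in range(w)] for y in range(h)]

def focal_loss(pred, target, alpha=2.0, beta=4.0):
    """Focal-loss variant for heatmap regression: positives (y == 1)
    are weighted by (1-p)^alpha; negatives are down-weighted by
    (1-y)^beta so points near the tap contribute less."""
    loss, n_pos = 0.0, 0
    for prow, trow in zip(pred, target):
        for p, y in zip(prow, trow):
            p = min(max(p, 1e-7), 1 - 1e-7)  # numerical safety for log()
            if y == 1:
                loss += (1 - p) ** alpha * math.log(p)
                n_pos += 1
            else:
                loss += (1 - y) ** beta * p ** alpha * math.log(1 - p)
    return -loss / max(n_pos, 1)
```

A confident, correct heatmap yields a near-zero loss, while a confidently wrong one is penalized heavily.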
We can also pick more frames as input, and we evaluated the impact of doing so in \cref{sec:evaluation:interaction_localization}. The structures of the 3D blocks for these two input sizes are slightly different, with the 3D pooling layer in the third convolutional block having a depth stride of 1 or 2. If the interaction clip does not contain 8 frames, we duplicate some of the frames to get 8 frames. During training, the first and the last frames are fed into the 2D block to extract features from the first and the next UI state. In parallel, all frames are fed into the 3D block to extract the semantics of the animation effect. The two features extracted from the 2D and 3D blocks are then concatenated and fed into a decoder to predict the interaction heatmap. The original size of the Rico animation clips is $281 \times 500$ (width $\times$ height), and we resized them to $256 \times 512$. \subsection{Phase 4---Interaction Replay} \label{sec:interaction_replay} The previous phases extracted interactions from video. When replaying interactions on another device, we can sometimes directly repeat interactions on screen (for example, typing entered text), but otherwise need to find a matching target UI element to apply the interactions---such as a UI element to tap or a point to start swiping. To accomplish interaction replays, we run an object detection model~\cite{zhang2021screen} to detect UI elements on each keyframe of the recorded video, and then find the UI detection that contains a \textit{tap} point or a \textit{swipe} initiation point; if there are multiple detections, we pick the smallest one. To find the target UI element on the screen of a new device, we run fuzzy matching~\cite{fuzzywuzzy} for text elements and leverage template matching~\cite{matchTemplate} for non-text elements. Specifically, we choose the text with the highest weighted ratio (case-insensitive, ignoring punctuation)~\cite{fuzzywuzzy} as the matching target text element.
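The text-matching step can be sketched as follows, using Python's `difflib` ratio as a stand-in for the weighted-ratio scorer (our system uses fuzzywuzzy's scorer; the candidate representation and function names here are our own simplifications):

```python
import difflib
import string

def normalize(text):
    """Case-insensitive, punctuation-ignoring normalization."""
    return text.lower().translate(
        str.maketrans("", "", string.punctuation)).strip()

def match_text_element(target, candidates):
    """Pick the on-screen text element that best matches a recorded one.

    `candidates` maps detected text to its screen position. difflib's
    ratio substitutes for the weighted-ratio scorer used in practice."""
    best, best_score = None, 0.0
    key = normalize(target)
    for text, pos in candidates.items():
        score = difflib.SequenceMatcher(None, key, normalize(text)).ratio()
        if score > best_score:
            best, best_score = (text, pos), score
    return best
```

Even with OCR noise in the recorded text (e.g., a dropped character), the closest on-screen label still wins.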
For non-text elements, we use the recorded element as the template image, and slide it over the new screenshot to find the location with the highest matching value. We used the normalized correlation coefficient as our matching function. When the replaying device has a different resolution than the recorded video, we scale the template image accordingly (50\% to 200\%) so that its size is similar to that of the target UI element in the new screenshot---that is, multi-scale template matching. Running matching algorithms on every pixel of the screenshot can be time-consuming. To speed up matching, we first limit the search space to the detected UIs on the new screen, and run matching on the full screenshot only when we fail to find a match from the UI detections. \section{Evaluation} \label{sec:evaluation} We evaluated our system on a large-scale dataset (Rico~\cite{deka2017rico}, created 4 years ago), and a smaller app usage recording dataset (iOS and Android versions of 64 top-downloaded apps) we collected and annotated recently (\cref{sec:evaluation:datasets}). We evaluated each phase of the system: video segmentation (\cref{sec:evaluation:video_segmentation}, iOS--84.7\%, Android--72.0\% recall), interaction classification (\cref{sec:evaluation:interaction_classification}, iOS--87.6\%, Android--89.3\% accuracy), interaction localization (\cref{sec:evaluation:interaction_localization}, Rico--69.1\%, Android--56.2\%, iOS--41.4\% accuracy), and interaction replay across devices (\cref{sec:evaluation:interaction_replay}, iOS--84.1\%, Android--\textcolor{black}{78.4\%} success rate). \subsection{Datasets} \label{sec:evaluation:datasets} \subsubsection{Interaction Clips from Rico Dataset} \label{sec:interaction_clips_from_rico_dataset} The Rico dataset~\cite{deka2017rico} is a large-scale repository of Android app screens.
In addition to UI element information on each screen (e.g., bounding box, UI class), the dataset also contains interaction traces of the apps and their corresponding video clips---for example, displaying animations after performing each interaction. Each interaction trace provides a list of gestures to perform interaction, and we needed to derive an interaction type and interaction point from each gesture. We consider six interaction types as in ~\cite{li2019humanoid}, namely \textit{tap}, \textit{swipe left}, \textit{swipe right}, \textit{swipe up}, \textit{swipe down} and \textit{type}. We adopted the following heuristics from ~\cite{li2019humanoid} to determine \textit{tap} and \textit{swipe} interactions: \begin{enumerate} \item If an interaction contained only one gesture point or the distance of the gesture was $\leq$ 10 pixels, we considered it a \textit{tap} interaction. \item If an interaction contained a list of gesture points with distance $>$ 10 pixels, we considered it a \textit{swipe} interaction. We mapped the gesture direction to \textit{swipe left}, \textit{swipe right}, \textit{swipe up}, or \textit{swipe down}. \end{enumerate} For the \textit{type} interaction, we noted that Rico dataset workers used physical keyboards to type text, and the \textit{type} interactions were not recorded in the gesture data as a result. From our observations, we noted that the \textit{type} interactions happened after a \textit{tap} interaction on text field. Therefore, we detected these \textit{tap} interactions and text changes in the text field, and then manually verified these potential \textit{type} interactions. In total, we obtained 44,536 interactions (with interaction type and video clip) from 7,211 user interaction traces in 6,547 free Android apps. 
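The gesture-labeling heuristics above can be sketched as a short function. Using the end-to-end displacement as the gesture distance and the screen-coordinate sign convention for directions (y grows downward) are our assumptions for this sketch:

```python
import math

def classify_gesture(points, tap_dist=10):
    """Map a gesture (list of (x, y) points) to an interaction type.

    A single point, or a path whose end-to-end displacement is at most
    `tap_dist` pixels, is a tap; otherwise the dominant axis of the
    displacement gives the swipe direction."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    if len(points) == 1 or math.hypot(dx, dy) <= tap_dist:
        return "tap"
    if abs(dx) >= abs(dy):
        # Negative dx: the finger moved left (screen x grows rightward).
        return "swipe left" if dx < 0 else "swipe right"
    return "swipe up" if dy < 0 else "swipe down"
```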
Among these interactions, 91.7\% (40,855/44,536) are \textit{tap}, 0.3\% (123/44,536) are \textit{type}, 5.2\% (2,299/44,536) are \textit{swipe up}, 1.0\% (442/44,536) are \textit{swipe down}, 1.5\% (688/44,536) are \textit{swipe left}, and 0.3\% (129/44,536) are \textit{swipe right}. We found several limitations in the Rico interaction traces and their video clips. Because the dataset only contains clips---and not continuous usage recordings---we were unable to evaluate our segmentation method using Rico. Some interactions are missing their video clip or gesture, while other gestures do not match the video clips. Some video clips contain no changes in the UI. These data quality issues may significantly impact the interaction classification results (especially on less frequent classes), as the interaction types are already highly skewed. Nevertheless, we were able to evaluate our interaction localization model on Rico---as our identified data issues have negligible impact for \textit{tap} interactions. Thus, we used the Rico \textit{tap} interactions to train our interaction localization model, and report our model performance on the corresponding testing split. \textcolor{black}{Note that we only use \textit{tap} data to train the localization model, so the model is not biased by these data issues.} An additional limitation of the Rico dataset is that it contains only Android apps, and was collected four years ago. As a result, this dataset may not reflect recent app designs on major mobile platforms. Thus, we collected and annotated usage recordings from top-downloaded iOS and Android apps---as discussed in the next section. \begin{table*} \caption{The total number of interactions for each interaction type, and average task duration across 128 collected recordings from 64 top-downloaded applications.} \label{tab:collected_recordings} \begin{tabular}{lrrrrrrrr} \toprule & \#{Taps} & \#{Types} & \#{Swipe-Ups} & \#{Swipe-Downs} & \#{Swipe-Lefts} & \#{Swipe-Rights} & \#{Total} & Avg.
Duration \\ \midrule \texttt{Android} & 396 & 33 & 78 & 15 & 6 & 6 & 534 & 35.8s \\ \texttt{iOS} & 391 & 36 & 74 & 12 & 3 & 2 & 518 & 29.3s \\ \hline \texttt{Total} & 787 & 69 & 152 & 27 & 9 & 8 & 1,052 & 32.6s \\ \bottomrule \end{tabular} \end{table*} \subsubsection{Usage Recordings from Top-Downloaded iOS and Android Apps} \textcolor{black}{We followed the process of \citet{bernal2020translating} to collect our dataset and to ensure its diversity and representativeness.} We collected 64 top-downloaded free apps from the 32 categories in the Australian Google Play store (two apps per app category), all of which also offered a free iOS version to enable a fair evaluation across platforms. \textcolor{black}{To ensure the collected recordings were representative, all authors discussed and selected the tasks, which cover the key features of each app based on their descriptions in both app stores.}\footnote{The task details can be found in our supplementary materials.} The first and second authors, \textcolor{black}{one female and one male, both without any disabilities,} randomly picked an app, installed it on both an iPhone 11 (iOS 14, physical device, \textcolor{black}{1792 x 828}) and a Nexus 6P (Android 11, emulator, \textcolor{black}{2560 x 1440}), and recorded the screen while performing the same task in both the iOS and Android apps. \textcolor{black}{When recording the videos, the two authors used the mobile apps as normal, with no restrictions on their interactions.} Once the app usage videos were recorded, the first author manually annotated them to segment stable intervals, classify interaction types, and locate interaction elements. We used an open-source tool, labelImg~\cite{labelImg}, to facilitate the annotation process. In total, we obtained 128 app usage recordings ($\mu$ duration = 32.6 seconds), containing 1,052 interactions (787 \textit{taps}, 69 \textit{types}, 196 \textit{swipes}).
Additional details can be found in \cref{tab:collected_recordings}. \subsection{Phase 1---Video Segmentation} \label{sec:evaluation:video_segmentation} We evaluated our model's performance in video segmentation on usage recordings from top-downloaded iOS and Android apps. We compared each keyframe predicted by our model against all stable intervals in the annotated ground truth. We classified a predicted keyframe as correct when it falls into a stable interval (with no other predicted keyframes in this interval). We counted the \# of correctly predicted keyframes (C), \# of predicted keyframes (P), and \# of annotated ground truth keyframes (A), and then calculated precision ($ \frac{C}{P}$), recall ($ \frac{C}{A}$) and F1-score. \begin{table}[t] \centering \caption{Experimental results for video segmentation on recordings from top-downloaded apps on iOS and Android. We report Precision (P), Recall (R), and F1 score for each combination of feature extraction method and distance function.} \begin{tabular}{| l | c c c | c c c |} \hline & \multicolumn{3}{c|}{\textbf{Android}} & \multicolumn{3}{c|}{\textbf{iOS}} \\ \cline{2-7} & \textbf{P} & \textbf{R} & \textbf{F1} & \textbf{P} & \textbf{R} & \textbf{F1} \\ \hline \small{RGB + L1} & 67.1\% & 68.3\% & 67.7\% & \textbf{80.1\%} & 77.5\% & 78.7\% \\ \small{RGB + L2} & 49.7\% & 47.4\% & 48.5\% & 64.8\% & 74.0\% & 69.1\% \\ \small{RGB + SSIM} & 62.0\% & 70.4\% & 65.9\% & 78.5\% & \textbf{84.7\%} & 81.5\% \\ \small{YUV + L1} & \textbf{67.4\%} & 68.3\% & \textbf{67.9\%} & 80.0\% & 77.8\% & 78.9\% \\ \small{YUV + L2} & 50.9\% & 51.3\% & 51.1\% & 68.3\% & 68.2\% & 68.2\% \\ \small{YUV + SSIM} & 62.2\% & 69.6\% & 65.7\% & 78.5\% & 83.5\% & 80.9\% \\ \small{Hist + L1} & 61.2\% & 66.2\% & 63.6\% & 79.8\% & 76.1\% & 77.9\% \\ \small{Hist + L2} & 62.5\% & 65.2\% & 63.8\% & 79.0\% & 72.3\% & 75.5\% \\ \small{Hist + SSIM} & 51.4\% & 58.7\% & 54.8\% & 70.0\% & 52.2\% & 59.8\% \\ \small{HOG + L1} & 57.6\% & 71.1\% & 63.6\% &
76.6\% & 83.1\% & 79.7\% \\ \small{HOG + L2} & 56.5\% & 56.4\% & 56.4\% & 73.8\% & 83.3\% & 78.2\% \\ \small{HOG + SSIM} & 61.0\% & \textbf{72.0\%} & 66.0\% & 79.2\% & \textbf{84.7\%} & \textbf{81.9\%} \\ \small{SIFT} & 52.5\% & 58.4\% & 55.3\% & 70.3\% & 71.6\% & 71.0\% \\ \hline \end{tabular} \label{tab:action_segmentation_results_manual} \end{table} \cref{tab:action_segmentation_results_manual} shows the performance of the video segmentation phase using different features and feature distance functions. Among all combinations, YUV+L1 performs the best (67.9\% F1 score) on Android recordings, while HOG+SSIM (81.9\% F1 score) performs the best on iOS recordings. Among all features, the color histogram performed the worst, as it simply aggregates the general image features and somewhat downplays the salient changes. RGB and YUV features performed similarly, as they essentially describe the same features with different representations. The HOG feature achieved the best recall (72.0\% in Android recordings, and 84.7\% in iOS recordings), which suggests that it effectively captures UI changes. Among all feature distance functions, L2 distance had the worst performance, as it may overemphasize large changes. However, a distinguishable change does not necessarily imply a change in UI states. For example, changes in an advertisement banner should not be considered as a new UI state. SSIM had the best performance, as it is a perceptual metric, which is able to capture general information about the image. Overall, our method can effectively segment app usage videos into interaction clips when UI states have salient differences. We would like to share insights into when our method fails to predict a keyframe. We missed keyframes when user interactions lead to a subtle change (or no change) on the UI. For example, when users select an item in a list, a small checkmark will appear. Such a small difference may be ignored by our simple feature extraction methods.
Understanding the whole screen context would help us capture this important change on the UI. These errors reduce recall. We predicted extra keyframes on animations that are not caused by user interactions. For example, when users enter an image-heavy screen, a loading animation may appear while waiting. Similarly, when users download a file, a progress bar updates frequently and may automatically move to the next UI state once the file is downloaded. In the future, we should recognize these common animations. These errors reduce precision. We also obtained insights from the performance differences we found between iOS and Android recordings. Our method may predict frames with changing advertisements as extra keyframes, which are not caused by user interactions. From our observations, Android apps tended to contain more banner advertisements while iOS apps displayed fewer advertisements. In addition, the Android emulator we used may have higher latency than its physical counterpart. Because the emulator takes longer for UI rendering and UI transitions, it is harder to distinguish UI rendering and transitions from user interactions.
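The segmentation procedure evaluated above can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (frames are flattened pixel lists, and the threshold, minimum run length, and last-frame keyframe rule are assumptions not spelled out in this section): consecutive-frame distances are computed, a maximal low-distance run is treated as a stable interval, and its last frame is emitted as the keyframe; the final helper computes the precision/recall/F1 metrics defined above.

```python
def l1_distance(a, b):
    """Mean absolute per-pixel difference between two flattened frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def detect_keyframes(frames, thresh=2.0, min_stable=3):
    """Emit the last frame index of each stable run as a keyframe.

    A stable run is a maximal sequence of at least `min_stable`
    consecutive frames whose pairwise L1 distance stays below `thresh`
    (i.e., the settled UI state after an interaction finishes).
    """
    keyframes, run_start = [], 0
    for i in range(1, len(frames) + 1):
        ended = i == len(frames) or l1_distance(frames[i - 1], frames[i]) >= thresh
        if ended:
            if i - run_start >= min_stable:
                keyframes.append(i - 1)  # last frame of the stable run
            run_start = i
    return keyframes

def prf1(correct, predicted, annotated):
    """Precision = C/P, recall = C/A, and their harmonic mean (F1)."""
    p, r = correct / predicted, correct / annotated
    return p, r, 2 * p * r / (p + r)
```

For example, nine frames forming two settled screens separated by one transition frame yield two keyframes, one per stable interval. A real implementation would swap `l1_distance` for the RGB/YUV/histogram/HOG features and L1/L2/SSIM distances compared above.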
\begin{table}[t] \centering \caption{Experimental results for interaction classification on recordings from top-downloaded apps on Android and iOS, including Precision (P), Recall (R), and F1 Score} \begin{tabular}{| l | c c c | c c c |} \hline & \multicolumn{3}{c|}{\textbf{Android}} & \multicolumn{3}{c|}{\textbf{iOS}} \\ \cline{2-7} & \textbf{P} & \textbf{R} & \textbf{F1} & \textbf{P} & \textbf{R} & \textbf{F1} \\ \hline \small{Tap} & 94.6\% & 91.9\% & 92.7\% & 94.6\% & 89.2\% & 91.8\% \\ \small{Type} & 74.4\% & 87.9\% & 80.6\% & 57.1\% & 100.0\% & 72.7\% \\ \small{Swipe Up} & 92.9\% & 81.2\% & 86.7\% & 90.2\% & 74.3\% & 81.5\% \\ \small{Swipe Down} & 46.7\% & 53.8\% & 50.0\% & 64.3\% & 75.0\% & 69.2\% \\ \small{Swipe Left} & 46.2\% & 100.0\% & 63.2\% & 33.3\% & 100.0\% & 50.0\% \\ \small{Swipe Right} & 75.0\% & 100.0\% & 85.7\% & 100.0\% & 100.0\% & 100.0\% \\ \hline \small{Macro Avg.} & 71.4\% & 85.8\% & 76.5\% & 73.2\% & 89.8\% & 77.5\% \\ \small{Weighted Avg.} & 90.4\% & 89.3\% & 89.6\% & 90.3\% & 87.6\% & 88.3\% \\ \hline & \multicolumn{3}{c|}{Accuracy: 89.3\%} & \multicolumn{3}{c|}{Accuracy: 87.6\%} \\ \hline \end{tabular} \label{tab:action_classifictaion_results_manual} \end{table} \subsection{Phase 2---Interaction Classification} \label{sec:evaluation:interaction_classification} We evaluated our model’s performance in interaction classification on usage recordings from top-downloaded iOS and Android apps. \cref{tab:action_classifictaion_results_manual} shows the performance of the interaction classification phase, which performs well on both iOS and Android app recordings (\textcolor{black}{87.6\%} and \textcolor{black}{89.3\%} accuracy respectively). Our method achieves high recall in most interaction types (except \textit{swipe down} on Android), and gets high F1 scores in \textit{tap}, \textit{swipe up}, and \textit{swipe right}. \textit{Swipe left} and \textit{swipe down} had the worst performance. 
As they have only 9 and 27 samples out of 1,052 interactions, their precision can be impacted by incorrect predictions from the other interaction types. Here is an example failure for \textit{swipe left}: the screen scrolls horizontally when users \textit{tap} on the next segmented control, which has the same visual effect as \textit{swipe left}. We also found that half of the \textit{swipe down} interactions on Android were recognized incorrectly as \textit{tap}. Most of these failures were related to a date/time picker: text in the pickers is smaller and faded to highlight the currently selected text, which reduced the accuracy of the OCR that our heuristics rely on. Some failures in \textit{swipe down} happen when users \textit{tap} a button in the bottom actionsheet; the actionsheet moves down and disappears, creating a visual effect similar to \textit{swipe down}. Not surprisingly, we also found that most false positives are from \textit{tap}---the majority of interactions in our dataset. Our method had reasonable performance on \textit{type} interactions. However, when the virtual keyboard appears, users may still perform non-type interactions like \textit{tap}. In the future, we should focus on the visual changes inside the virtual keyboard to confirm \textit{type} interactions. \subsection{Phase 3---Interaction Localization} \label{sec:evaluation:interaction_localization} We evaluated our model’s performance in interaction localization on the large-scale Rico dataset, and usage recordings from top-downloaded iOS and Android apps. \cref{tab:action_localization_results} shows the interaction localization performance of our system, and several baseline methods as comparison. The first baseline is \textbf{Humanoid}~\cite{li2019humanoid}, which predicts the next interaction given the previous three UI screens. It follows an RNN-style method to encode frame features step-by-step, and predicts the heatmap of possible interaction points using a decoder module.
There are also variants of our model as baselines: \textbf{HM2D} (short for heatmap 2D) directly uses 2D convolutional layers to learn the semantics from animations, while \textbf{HM3D} uses 3D convolutional layers to learn the temporal and spatial features from animations through several layers. The default \textbf{HM3D} contains a UNet-style shortcut, which is expected to help the model refine the high-level abstract features using the features from shallow layers. We also consider a variant of \textbf{HM3D without shortcut} to see the impact of the shortcut module. Another variant close to our model is \textbf{HM3D + 2D}, while our final system (\textbf{HM3D + 2D + Heuristics}) includes heuristics to improve performance. We also compared the performance when the input contained 8 or 16 frames from interaction clips. \begin{table}[t] \caption{Accuracy of our interaction localization model compared with several baselines for the Rico-Test dataset and our manually collected recordings.
} \label{tab:action_localization_results} \resizebox{0.5\textwidth}{!}{ \begin{tabular}{|l | c | c |c |c|} \hline \multirow{2}{*}{} & \multirow{2}{*}{ \textbf{\#{Frames}}} & \multirow{2}{*}{\textbf{Rico-Test}} & \multicolumn{2}{c|}{\textbf{Recordings}} \\ \cline{4-5} & & & \textbf{Android} & \textbf{iOS} \\ \hline \texttt{Humanoid} & 8/16 & 29.5\%/29.5\% & 8.8\%/8.8\% & 9.7\%/8.7\% \\ \texttt{HM2D} & 8/16 & 61.8\%/52.4\% & 34.3\%/21.7\% & 28.1\%/21.1\% \\ \texttt{HM3D w/o shortcut} & 8/16 & 66.7\%/66.9\% & 52.8\%/53.0\% & 36.2\%/38.9\% \\ \texttt{HM3D} & 8/16 & 67.9\%/67.0\% & 53.0\%/54.3\% & 40.4\%/38.5\% \\ \texttt{HM3D+2D} & 8/16 & \textbf{69.1\%}/\textbf{67.9\%} & 52.3\%/54.3\% & 41.0\%/40.0\% \\ \texttt{HM3D+2D+Heuristics} & 8/16 & \textbf{69.1\%}/\textbf{67.9\%} & \textbf{53.6\%}/\textbf{56.2\%} & \textbf{41.4\%}/\textbf{40.4\%} \\ \hline \end{tabular} } \end{table} \begin{table*} \caption{The recall of our interaction localization model as compared with several baselines, across 6 common UI types in the Rico Dataset. \textcolor{black}{The table shows the results for models trained on 8 frames / 16 frames.} } \label{tab:localization_ele_type} \begin{tabular}{|l | c | c | c | c | c | c |} \hline & \textbf{Text} & \textbf{Button} & \textbf{ImageView} & \textbf{ImageButton} & \textbf{System Nav.
Bar} & \textbf{Others} \\ \hline \texttt{Number} & \textit{783} & \textit{625} & \textit{521} & \textit{531} & \textit{388} & \textit{1,592} \\ \hline \texttt{Humanoid} & 1.1\%/1.3\% & 3.5\%/3.7\% & 17.1\%/17.1\% & 68.7\%/68.7\% & N/A & 51.6\%/51.6\% \\ \texttt{HM2D} & 33.3\%/20.1\% & 63.8\%/51.7\% & 44.7\%/34.5\% & 79.3\%/77.4\% & 82.0\%/62.6\% & 69.5\%/62.8\% \\ \texttt{HM3D w/o shortcut} & 43.0\%/43.3\% & 68.6\%/66.7\% & 45.9\%/46.8\% & 83.4\%/84.4\% & 87.4\%/90.2\% & 72.7\%/72.5\% \\ \texttt{HM3D} & 44.3\%/40.2\% & 70.4\%/67.4\% & 48.9\%/49.5\% & 83.1\%/85.3\% & 89.2\%/89.9\% & 74.1\%/73.3\% \\ \texttt{HM3D + 2D} & 44.8\%/43.3\% & 70.9\%/69.9\% & 49.1\%/47.4\% & 85.5\%/86.1\% & 89.4\%/89.9\% & 75.4\%/73.6\% \\ \hline \end{tabular} \end{table*} \subsubsection{Performance on Rico Dataset} Our system outperformed all baselines, reaching 69.1\% accuracy when the input contains 8 frames from the interaction clip. We found the 3D convolutional network (HM3D) better captured the temporal features from interactions than the RNN-style (Humanoid) and 2D convolutional based (HM2D) networks. The RNN-style model encodes each frame, and the compressed frame may lose information before it is fed into the next RNN cell. A 2D-style model heavily relies on the first layer to capture the temporal features among frames, while a 3D-style model gradually learns the semantics of animations through several layers. The UNet-style shortcut and the additional 2D modules both help our model to better learn features and slightly boosted the performance. Applying heuristics also improved model performance in recent app recordings. We then analyzed the performance across UI element types as they have different visual effects. We considered six common UI types in the Rico dataset, namely: ImageButton, ImageView, TextView, Button, and the system bottom navigation bar; the rest of the UI elements fall into the ``Other'' type.
From \cref{tab:localization_ele_type}, we found TextView and ImageView elements had worse performance compared to other UI types. The other UI types are tappable by default and have animations provided by the system UI framework. TextView and ImageView are not tappable unless developers specify the property or create a customized event listener. Therefore, these two UI types are more likely to have a special animation effect or no visual effect. In contrast, all models performed best on the system back button (except for Humanoid) and ImageButton, which almost always provide visual feedback when users \textit{tap} them. \subsubsection{Performance on Recent iOS and Android App Recordings} Our model is trained on the Rico dataset collected 4 years ago, and we wanted to investigate the feasibility of our method on recent mobile apps. As shown in \cref{tab:action_localization_results}, we found that the performance of all models degrades. Since the release of the Rico dataset, design principles and UI styles have changed substantially in recent mobile apps. One example is the redesign of the system bottom navigation bar. Previously, Android apps avoided using the tab bar at the bottom of the screen (the side menu drawer was a common replacement), because users could easily tap the system bottom navigation bar by mistake. Nowadays, the system bottom navigation bar no longer shows buttons, but only a subtle bar with space to enable gesture navigation. Android apps are now more likely to use the tab bar instead of the menu drawer. From the Rico dataset, we randomly selected 100 apps and sampled one interaction from each app. Only 6\% of these apps contain a bottom tab bar, while most of the recent iOS (87.5\%) and Android (75\%) apps we collected contain a bottom tab bar. We also found the animation visual feedback to be more subdued now. For example, the text color may only slightly change after a \textit{tap}.
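As a concrete illustration of the localization output discussed above, the sketch below decodes a predicted heatmap into a single interaction point and scores it against the ground-truth element. Both the argmax decoding and the inside-the-bounding-box correctness rule are assumptions made for illustration; this section does not spell out the exact decoding step of the model.

```python
def heatmap_peak(heatmap):
    """Decode the interaction point as the (x, y) argmax of the heatmap."""
    best, best_xy = float("-inf"), (0, 0)
    for y, row in enumerate(heatmap):
        for x, v in enumerate(row):
            if v > best:
                best, best_xy = v, (x, y)
    return best_xy

def is_hit(point, bbox):
    """Count a prediction as correct when the decoded point lies inside
    the ground-truth element's bounding box (x0, y0, x1, y1)."""
    x, y = point
    x0, y0, x1, y1 = bbox
    return x0 <= x <= x1 and y0 <= y <= y1

def accuracy(heatmaps, bboxes):
    """Fraction of clips whose decoded point falls inside the target element."""
    hits = sum(is_hit(heatmap_peak(h), b) for h, b in zip(heatmaps, bboxes))
    return hits / len(bboxes)
```

Under this rule, a \textit{tap} on a tab button whose heatmap peaks anywhere inside the button's bounding box counts as a correct localization, which matches the per-element recall reported above.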
After examining the failure cases in recent app recordings, we found that our model worked best when the animation visual feedback is more apparent. As seen in the first two keyframes of \cref{fig:segmentation}, users \textit{tap} on the bottom tab bar---which leads to a subtle change in the text color of a tab button and an obvious change in the main content. In the future, the understanding of all UIs on a screen~\cite{zhang2021screen} will help our model focus on important UI changes---for example, to prioritize tab button changes when a tab bar is detected. Finally, we noticed a large performance discrepancy between iOS and Android recordings, as the differences in their designs are even more substantial than the differences between Rico and recent Android apps. A larger-scale app usage recording dataset in both iOS and Android, like Rico, will help our model better capture the interactions under these new UI paradigms. \subsection{Phase 4---Interaction Replay} \label{sec:evaluation:interaction_replay} We evaluated our model's performance in interaction replay on usage recordings from top-downloaded iOS and Android apps. In order to evaluate the success rate of interaction replay, the first two authors also collected the same app interaction traces on devices with different resolutions (a Pixel 4 XL running Android 11 and an iPhone 11 Pro Max running iOS 15). This was a manual replay process, so an error in one step did not stop the rest of the trace. To focus on the performance of the replay module itself, we directly used the annotated interactions as ground truth to avoid errors propagating from each step of interaction extraction. During this new collection, we noted four problems that prevented us from replaying 23 interactions on the target devices. First, some pop-up windows appeared occasionally while replaying and required us to perform extra steps to close them.
These pop-up windows include advertisements, instruction hints, rating requests, and permission requests. Second, some interactions were related to specific times that were no longer available. For example, when we tried to replay an interaction in a different month, the option for the previous month may no longer exist, and thus we could not replay the exact same interaction. Third, apps may contain dynamic content that changes the required user interactions. For example, in one recording, we needed to \textit{swipe up} five times to reach the target element to \textit{tap}; in the updated content, we only needed to \textit{swipe up} twice. Fourth, apps add and remove features in updates. Such updates could lead to changes in the UI layout and UI transitions, removing an existing UI element, or affecting the navigation logic. We counted the cases of each problem during the replay process. Among the failure cases in iOS | Android recordings, 9 | 12 were related to pop-up windows, 1 | 1 to time-specific content, 6 | 5 to dynamic content, and 3 | 5 to app updates. For each interaction, our system found a matching target UI element on the newly collected screens. The first two authors manually examined the matching results to determine whether interactions on the matched UI could lead to the expected next UI state. The majority of interactions could be correctly replayed in iOS and Android apps (84.1\% and 78.4\% respectively). Here are some common failure cases in UI element matching: Image content may change across different sessions or due to personalization (e.g., the albums in \cref{fig:swipe_left_vs_tap}(a) are different in different accounts). Therefore, our image template matching would not find the same image. There can also be multiple UI elements containing the same text, and our text matching may find the wrong target text element.
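The element-matching step whose failure modes are described above can be sketched as follows. This is a minimal, hypothetical illustration (the dictionary fields and the text-first-then-pixels fallback order are assumptions): the real system matches the element's text and falls back to image template matching on the element crop.

```python
def pixel_diff(a, b):
    """Mean absolute difference between two flattened element crops
    (a simple stand-in for image template matching)."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def match_element(target, candidates):
    """Locate the element to replay an interaction on in a new screen.

    Try text matching first; when the target has no text, or several
    candidates share its text (the ambiguous case noted above), fall
    back to the candidate whose pixels are closest to the target crop.
    """
    if target.get("text"):
        hits = [c for c in candidates if c.get("text") == target["text"]]
        if len(hits) == 1:
            return hits[0]
        if hits:
            candidates = hits  # disambiguate same-text elements by pixels
    return min(candidates, key=lambda c: pixel_diff(target["pixels"], c["pixels"]))
```

Both failure cases above fit this sketch: personalized images defeat the pixel fallback, and duplicated text defeats the text match, so the wrong candidate can win either comparison.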
Beyond the scope of this paper, a deeper understanding of UI will help in resolving these failure cases: after our system learns what content is dynamic and what content is repetitive, it can replay the interaction on the UI element that has the same relative position in the UI structure. \section{Discussion} \textcolor{black}{ \textbf{Datasets.} We evaluated our proposed system with two different datasets. First, we used the Rico dataset to evaluate our localization models in a large-scale experiment, even though the Rico dataset only contains interactions with Android apps. The addition of a large-scale interaction dataset of iOS apps would help better illustrate the advantages and disadvantages of our system. To mitigate this issue, we then collected interaction traces from two top-downloaded apps from each app category that were available on both the iOS and Android platforms. We used these traces to better evaluate the generalizability and robustness of the proposed system. To ensure the manual recordings were representative, all authors together discussed and selected the tasks to be performed, ensuring that they covered the key features of all apps based on their descriptions in the app stores. While each type of interaction has a different number of trials, we believe the diversity and representativeness of the collected apps and interactions mitigates some of the potential issues. We also report the detailed results for each interaction to better illustrate the performance on different interactions. Our future work will include more data to better examine and improve our system. Moreover, in the future we will examine extending our work to other input devices of different resolutions, such as larger-screened tablets. } \textbf{Opportunities for performance improvements.} Our approach has several opportunities for performance improvements.
First, several limitations in detecting interactions were a result of modern UI interaction paradigms which were not available when the Rico dataset was released. Consequently, we expect that a large-scale dataset of \emph{recent} apps on multiple mobile platforms would improve our system performance. Such a dataset would provide relevant data to train machine learning models to enhance our heuristic-based video segmentation and interaction classification phases, as well as make our interaction localization model more generalizable to apps on iOS and Android platforms. Our current localization model relies on video frames and their corresponding pixel data, achieving around 70\% accuracy. This localization model could be improved in several ways. First, we could consider running UI detection on a screen to predict the interactable UI elements, as in \citet{zhang2021screen, chen2020object}. These UIs have higher probabilities for \textit{tap} interactions. Additional labels~\cite{chen2020unblind, chen2022towards} for some image-based interactable elements could also improve the system. Second, we tried a simple heuristic-based method to identify the connections between the \textit{tapped} text and the title in the new UI state. A deeper understanding of the content of text elements would support inferring the interaction between the two UI states. For example, ActionBert~\cite{he2020actionbert} demonstrates that it is possible to predict connection elements between two UI states---even without leveraging animation. We proposed a straightforward method to replay interactions. In our evaluation, it works well on the same (or similar) app versions for a different device. This assumption applies to some applications---for example, automated or regression testing for the same app version---but other scenarios, such as making app tutorials, may require using multiple app versions.
Another challenge is to replay the interactions for the same app in different languages, which would enable cross-locale applications. Collecting interaction traces in different settings (e.g., app versions, languages) would provide more signals for our interaction replay phase. Our current system only tests recordings within one app, while many tasks involve multiple apps. Learning the transition between apps will enable cross-app interaction extraction and replay to complete more complicated tasks. We think of our pixel-based approach as a general technique that is also in some ways a \emph{lower-bound} on accuracy: by design, it does not take advantage of additional metadata that could potentially further improve its performance. Incorporating metadata, if available---such as the UI framework used within the app, platform, and version---could boost the performance of our approach. Of course, the disadvantage of metadata is that it is not always available, difficult to extract, or unnecessarily constrains and couples the model to the metadata. \textbf{Applications of extracting replayable interactions.} There are several applications that may benefit from our methods to extract and replay user interactions from video pixels. A straightforward application is to allow users to \emph{annotate interactions on existing videos}. For example, they may have a screen recording and have difficulty figuring out how the user in the screen recording is getting to a particular screen. Our system could be used to provide on-screen annotations of our inferred interactions as the user plays the video. For \emph{app bug reports}, users or QA testers could create videos of issues in apps, which developers could then replay within their own development environment to reproduce.
Similarly, end-users could upload videos to demonstrate app usage problems: automatically identifying the interactions in these videos could minimize or eliminate the errors introduced by more manual identification procedures. In \emph{automated app testing}, QA testers can sometimes only run apps on unmodified devices, which do not allow special recording tools or collection of metadata. Our method extracts interaction traces from app usage videos, and then replays them on other devices to test. After collecting a larger-scale app dataset on multiple platforms, our pixel-based method could potentially enable cross-platform testing without relying on platform-specific testing APIs. Finally, our approach could be used in \emph{app tutorials}. As one example, \textcolor{black}{people with limited mobile usage experience} and people with cognitive impairments sometimes require help from others (such as their caregivers) to use a new app or an updated version of an app. Our method might be applied to automatically create app tutorials from app usage videos recorded by users who better understand and can demonstrate the app functionality. Then, people in need can replay interactions on their own mobile devices, or learn how to use apps with on-device, interactive tutorials~\cite{harms2011improving}. \section{Conclusion} In this paper, we introduced a novel approach to automatically extract and replay interactions from video pixels without requiring additional settings, recording tools, or source code access. Our approach automatically segments interactions from a video, classifies interaction types, and locates target UI elements for replay. We trained our system using the large-scale Rico dataset for Android, evaluated its effectiveness, and demonstrated the feasibility of learning interaction locations for recent iOS and Android apps. Our prototype can successfully replay the majority of the interactions.
The results of this work suggest that extracting replayable interactions is a useful mechanism that potentially benefits a variety of different applications and scenarios. \section{Appendix} \begin{figure*} \centering \includegraphics[width=0.84\textwidth]{figures/swipe_left_vs_tap.pdf} \caption{Examples of patterns that our interaction classification heuristics examine to classify a \textit{swipe left} interaction, including (a) a \textit{swipe left} interaction will likely change at least 3 text elements, and will not change the UI title, and (b) a \textit{tap} interaction instead is likely to change the title.} \label{fig:swipe_left_vs_tap} \Description{The figure shows two examples of interactions: \textit{swipe left} and \textit{tap}. On the left, we show three screenshots showing the transition between a first UI state and a new UI state after swiping left where a middle section has been scrolled. On the right, we show three screenshots to visualize a \textit{tap} interaction on the setting "Headphone Safety". The UI in the first screenshot has the title "Sounds & Haptics", the UI in the second screenshot is in transition, and the UI in the third screenshot has the title "Headphone Safety"} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.84\textwidth]{figures/swipe_up_vs_tap.pdf} \caption{Examples of patterns that our interaction classification heuristics examine to classify a \textit{swipe up} interaction, including (a) a \textit{swipe up} interaction will likely change several text elements, but will not change the UI title, and (b) a \textit{tap} interaction instead is likely to change the title.} \label{fig:swipe_up_vs_tap} \Description{The figure shows two examples: a \textit{swipe up} interaction and a \textit{tap} interaction. On the left, we show three screenshots where the UI in each screenshot has the title "Sound & Haptics". The portion of the UI below the title has been scrolled slightly in each consecutive screenshot.
On the right, we show three screenshots to visualize a \textit{tap} interaction. The UI in the first screenshot has the title "Welcome to Mail", the UI in the second screenshot shows the transition between the first and third UI states, and the UI in the third screenshot has the title "New Account" and a dialog has been opened.} \end{figure*}
\section{Introduction} \label{sec:intro} \FloatBarrier \PARstart{T}{he} growth of interest in Industry 4.0 affects all industrial sectors, including the marine field \cite{r4} and, in particular, the blue economy and efficient transportation.\\ This revolution leads to new challenging scenarios; most of the efforts of the maritime research community and related stakeholders are now devoted to the autonomous ship. In the near future, we will see human-crewed and uncrewed vessels operating in the same sea area. Such a revolution needs a solid multidisciplinary scientific background, grouping capabilities from different fields. There is still a considerable gap between the amount of data available and the ability of the marine sector to benefit from it. Deep learning algorithms represent a concrete possibility of elaborating this enormous amount of data into useful information to let ships navigate autonomously. Two of the most challenging research problems still open are the collision avoidance strategy and the coordination among different vessels \cite{r20}.\\ The challenge facing the maritime sector is integrating the new digitalised processes and technologies, mainly devoted to the intelligent exploitation of the available data produced by sensors, some of which are already deployed on the most recent vessels.\\ The main goal of marine stakeholders is to achieve autonomous navigation as soon as possible, reduce costs, and achieve safe, reliable, and efficient navigation \cite{r6}.
Today, what seemed impossible until a few years ago appears feasible thanks to the growing evolution of key enabling technologies such as Artificial Intelligence, Big Data Analytics, and Virtual/Augmented Reality.\\ For the reasons above, new and technologically advanced vessel traffic systems are needed along coasts worldwide, since the current systems \cite{r17,r23,r8} will be unable to deal with the challenges of the new millennium \cite{r13}.\\ Within this context, this paper considers near-future and beyond-near-future navigation scenarios, wherein ships with different degrees of autonomy \cite{r11}\cite{r41} will navigate the same area, and proposes a new Vessel Traffic System architecture. Engineers will face a new challenge in managing this situation. The proposed layout is based on massive use of ICT technology and the latest research in the marine engineering field. In particular, it is impossible to imagine that an autonomous ship can take action without interacting with other vessels; for this reason, advances in Vessel Traffic System (VTS) design are necessary. \section{Related Works} \label{sec:rel} In the last decade, Autonomous Surface Vessels (ASVs) have been the maritime community's main research topic. Most of the efforts are focused on developing more reliable algorithms for autonomous navigation, guidance, and control \cite{r4}\cite{r40}. These algorithms can limit human operator errors, driving waterborne transport towards greater energy efficiency and safety while reducing the overall operating expenditure \cite{r1}. Since the maritime industry is showing increasing interest in ASVs, the Maritime Safety Committee (MSC) of the International Maritime Organization (IMO) approved in June 2019 \cite{IMO} interim guidelines for Maritime Autonomous Surface Ships (MASS) trials.
These guidelines identify four degrees of autonomy: \begin{itemize} \item Degree one: ships with automated processes and decision support, which includes the automation of some unsupervised operations but with a seafarer ready to take control; \item Degree two: remotely controlled ships with seafarers on board, where the ships are operated from a remote location but seafarers are available on board to take control; \item Degree three: remotely controlled ships without seafarers on board; hence, the ship is controlled and operated from another location; \item Degree four: fully autonomous ships controlled by an operative system. \end{itemize} \subsection{Autonomous Navigation in the Shipping Sector} The first technology review of ASVs was presented in \cite{r32}, which reports on the research projects of many universities and private entities. In that work, 60 ASV prototypes were classified according to the level of automation achieved.\\ An interesting study is shown in \cite{r36}, where the authors analyse the results obtained during the ReVolt project, using a 1:20 scale model, to study advanced control systems for developing zero-emission unmanned ship navigation for short sea shipping.\\ A trend analysis of the automation levels over the years in the maritime sector, using the scale presented in \cite{schiaretti2017survey1}, has been carried out, and the results are summarised graphically in Figure \ref{fig:autonomy_trend}. It is worth noting that the number of autonomous vessel projects/prototypes has increased over the years; in particular, most are concentrated at low levels of autonomy because these could be of interest to the industry in a short time.\\ In \cite{r34},\cite{r35},\cite{r33}, and \cite{r18} the authors show the results of experimental studies on nonlinear control logic suitable for the manoeuvring operations of autonomous ships.
In \cite{r37} a new autonomous surface unit was designed. This unit can be configured as a large unit or as a fleet of smaller units, each of which can autonomously transport one container \cite{r9}. \\ In \cite{r37}, the authors show the design of a new fleet of autonomous units operating along inland waters for multiple purposes, such as passenger transport and the autonomous collection of floating waste. In addition to research organisations, the maritime industry, classification bodies, and institutions are also putting effort into the realisation of autonomous vessels.\\ A crew-less ship needs communication and computing infrastructures that support intelligent algorithms/methods to implement automatic operation, navigation, and berthing. \Figure[t!](topskip=0pt, botskip=0pt, midskip=0pt)[width=0.9\linewidth]{pict/fig_1a.png} {Trends over the years of unmanned projects vs. degree of autonomy. \label{fig:autonomy_trend}} \FloatBarrier \subsection{Communication Technologies} In \cite{r19}, the authors survey the leading Information and Communication Technologies (ICT), the communication architectures, and some wireless standards proposed for ASVs. One of the emerging technologies for designing a communication infrastructure for this type of scenario is based on the Internet of Things (IoT) paradigm. In \cite{r31}, the authors highlight why IoT is generally considered the right candidate to support ships' efficient communication management. IoT technologies are now mature enough to be considered for the support of unmanned navigation systems as well. However, these systems need the support of intelligent algorithms and methods, as well as an orchestration and management platform for handling the interactions with said intelligent algorithms, dynamically selecting the best level of automation for the ship. The complexity arising from the interactions between IoT devices and the Cloud-computing environment is presented in \cite{r21}.
The authors describe the challenges occurring in the specific maritime environment and propose possible approaches to tackle them. \\ Similarly, in \cite{r28} the authors describe how IoT technologies can be adopted in the naval industry, e.g., to facilitate services such as predictive maintenance, vessel tracking, and safety. The authors in \cite{r22} recognise the importance of ensuring Quality of Service (QoS), safety, and security for sensor-data and control-data communications. For this purpose, they analyse the communication requirements coming from the various sensors involved in improving the ship's situational awareness, and they compare them against the different communication technologies. They also highlight the need for a connectivity manager to satisfy the communication requirements of the various operating conditions. Work \cite{r10}, instead, investigates how protocols that are typically used in the IoT context can be applied to existing maritime communication technologies. For this purpose, they test, on an experimental setup, a configuration using the Constrained Application Protocol (CoAP), i.e. a widely used IoT protocol, and IPv6 on top of a VHF radio technology. The obtained results prove the applicability of the considered protocols. \\ In \cite{r50}, the authors present the design framework of a Decision Support System (DSS) tool developed to assist the bridge operator during challenging navigation conditions. This work shows how state-of-the-art IT devices and hardware can be used to increase safety at sea, mainly focusing on both collision and grounding avoidance. \subsection{Computing Infrastructures} As far as computing is concerned, all the intelligent applications based on AI technologies have to be managed, configured, and instantiated where needed, possibly ensuring a certain level of flexibility and dynamism to adapt to the ever-changing operating conditions.
Moreover, said applications need to be logically interconnected with the IoT environment, e.g., discovering the available sensing nodes to gather all the information necessary for their functioning. To the best of the authors' knowledge, there is a substantial lack of maritime-specific solutions in this respect. In the context of Smart Cities, several works discuss the problems related to obtaining, maintaining, and exposing a registry of the IoT devices that are available in the network, e.g. \cite{r15}, \cite{r14}, \cite{r25} and \cite{r16}. The proposed solutions show that IoT registries can be efficiently maintained, aiming at optimising multiple requirements, such as energy efficiency, QoS, or security. In the aeronautical field, instead, solutions exist to manage the dissemination of monitoring information in challenging environments, e.g. \cite{r2}. \subsection{Application Services} As far as application management is concerned, research has lately focused on orchestrating applications in virtualised environments, e.g., either through Virtual Machines or following lightweight-virtualisation approaches such as Containers \cite{r29}. In this respect, the literature offers various methods to efficiently manage the life-cycle of IoT-based applications, e.g. \cite{r30} and \cite{r1}, which consider allocating the execution of the applications either in the cloud, i.e. remotely, or at the edge of the network, thus potentially lowering communication delays. More recent works tackle the mobility of nodes and migrate applications as the user moves, trying to maintain proximity. These approaches are typically based on container migration techniques \cite{r7} and can minimise service disruption throughout the application life-cycle.\\ In the building process of the new generation of ASVs, the last link in the chain is Artificial Intelligence (AI) and Augmented Reality, which, suitably combined, can be used to design reliable, efficient, and comfortable ASVs.
The synergistic development of the technologies mentioned in this work makes it possible to support the user in complex and challenging tasks. In \cite{Martelli_M} a decision support system with an augmented reality visualisation system is presented. The development of UAV systems sometimes requires an amount of data that is not always available; as illustrated in \cite{Alvey_2021_ICCV}, the authors overcome this bottleneck by introducing photorealistic data to augment the databases. In other cases, the problem is the opposite: there is an enormous amount of data and the available machines are limited, which is why the authors of \cite{Wang2020AugmentedRF} propose a deep-learning algorithm that works in two phases, thus reducing the computational load of each single stage. The trend towards synergistic use leads to the need for a unified interface between the various technologies employed. The review \cite{MING202114} analyses the various ways of estimating monocular depth, which is an important issue at the interface between deep learning and augmented reality. Augmented reality is a fundamental tool for integrating the information that the user already has in particularly complex or risky situations. The study described in \cite{jmse9090996} demonstrates the validity of an augmented reality application supporting seven sailors involved in the activities of an icebreaker. For example, in \cite{r12} and \cite{r38}, the authors investigate artificial intelligence algorithms to support the operator in preventing collisions using simulated data. Safety and the ability to prevent accidents are fundamental aspects of navigation, and they are two of the topics on which various artificial intelligence methods are being tested. In \cite{9361551}, a hybrid system that uses fuzzy logic and ontologies is presented. The ontologies allow building a wide knowledge base within which it is possible to connect events with different descriptions.
Fuzzy logic is used to calculate the probability of accidents in real time \cite{SENOL201670}, so it is possible to fill the gap between numerical results and their semantic descriptions, embedding fuzzy fault tree analysis in ontology-based tree structures. The use of deep-learning techniques guarantees a high level of reliability, which is why in areas such as aeronautics it is increasingly used and experimented with, even in the construction phases of aircraft \cite{9160183}. As illustrated in \cite{bhattarai2020embedded}, the possibility of processing and displaying a massive amount of data in particularly hostile conditions, such as during a fire, improves safety conditions. Deep-learning algorithms, together with digital visualisation techniques, are used in many applications. To achieve the safety standards currently required of civil structures, the authors of \cite{9547763} propose a monitoring system based on a digital model that constantly exchanges data with the real one. As the review \cite{app11020821} demonstrates, the problem of safety is fundamental in various sectors; given their considerable reliability, deep-learning systems demonstrate increasingly strong results in the most varied applications. The use of deep learning algorithms has produced fascinating results, especially for autonomous driving \cite{r27}. They enable decision-support modules that are localised on board but are part of a distributed navigation strategy \cite{r26}. In the field of autonomous driving, the idea of creating a digital driver that can interface both with other machines and with human users is particularly challenging today \cite{Fernndez2020AssociatedRA}. The spread of deep learning in various sectors has also reached the maritime industry in recent years. In fact, the authors of the review \cite{r24} discuss the leading deep learning methods explored in autonomous navigation.
The paper highlights the advantages that this new technology offers over traditional machine learning techniques. Also, as reported in \cite{r42}, marine traffic management significantly benefits from the use of deep learning techniques. Although the literature presents outstanding works on ICT architectures to support the development of ASVs, the rapid evolution of new paradigms in both communication and the orchestration of computing resources requires further studies on how to integrate and adapt such emerging technologies in the context of autonomous navigation. To this end, this paper aims at presenting the integration of the innovative technologies and methodologies needed to develop a futuristic ASV. To face this challenge, a novel architecture is proposed hereafter, founded on four pillars: navigation control, computing orchestration, communication and networking, and data analysis. The presented architecture is designed to scale with the degree of autonomy of the system, according to the classes defined by the IMO in \cite{IMO}. To the best of the authors' knowledge, this is the first proposal that faces such a multidisciplinary challenge at these different tiers. The paper is structured as follows: Sec. \ref{sec:system} deals with the overall system description, and the orchestration is shown in Sec. \ref{sec:iot}. The use of intelligent algorithms is described in Sec. \ref{sec:ai}, while the communication infrastructure is shown in Sec. \ref{sec:comm}. The proposed benchmark to test the new vessel traffic management system is reported in Sec. \ref{sec:bench}. Finally, the conclusions are drawn, together with the expected results and potential benefits.
\FloatBarrier \section{System Description} \label{sec:system} \FloatBarrier Navigation in the coastal area needs to be supervised by an Ashore Control Center (ACC) that manages a massive amount of data, receiving, elaborating, and returning navigation strategies to each ship in the area. However, there can be conditions wherein no ACC is available to supervise the ships' navigation, either autonomous or manned, occurring in the same sea area, resulting in a more challenging scenario. To fit such a set of different situations, we design a system that is represented in Fig. \ref{fig:system_layout} and includes the following entities: ships, sensors, AI algorithms, and the ACC. The sensors deployed in the navigation area will provide useful information to the ships, the AI algorithms, and the ACC. Based on this information, the ship controller will plan or maintain the route to be followed. The AI algorithms will enrich the data from the sensors to support the ship's decisions and the monitoring at the ACC. Finally, the ACC merges data coming from the sensors, the vessels, and the AI to supervise and coordinate the traffic in its area of competence. Moreover, as a failure backup strategy, ACC operators are equipped with an augmented-reality device, with which they can monitor the situation and plan any manual interventions.\\ The sensors installed either on ships or in the surrounding environment will be implemented following the Internet of Things paradigm, so as to be easily accessible from all the involved entities.\\ Artificial Intelligence (AI) algorithms are needed to create an Intelligent System (IS) and improve the autonomous decision process. They can be deployed and executed on the ships, at the ACC, or both. This system improves situational awareness through the identification of obstacles and their possible classification.
The considerable amount of data coming from multiple and diverse sources, acquired by heterogeneous sensors, enables the analysis and extraction of valuable information for monitoring specific situations. The IS will use this data to aid navigation from two perspectives: on one side, it extracts new knowledge, usable through augmented reality by the operators of the ashore control station dealing with the monitoring; on the other side, the obtained knowledge can be used on board to allow the ship to carry out evasive manoeuvres and strategies in complete autonomy.\\ The control and supervision system layout is composed of two interacting layers: the remote one aims to coordinate the traffic, while the local one handles the single ship. For this purpose, a Guidance-Navigation-Control (GNC) system for autonomous navigation needs to be ready to receive external information. In particular, the GNC, using the results from the AI algorithms, can detect an obstacle (either fixed or moving), elaborate an optimum evasive route, and, thanks to its control system, actuate the new route safely and accurately. However, in the case of a failure of any single component of the VTS, collision avoidance remains active thanks to the local Guidance-Navigation-Control system. This solution is clearly sub-optimal, since the coordination with the other entities of the scenario will be missing. \\ Eventually, to support the operations in such a heterogeneous set of scenarios, an Orchestration and Communication Platform (OCP) will be developed to efficiently handle the interactions among the ISs and the GNCs residing on the same ship, on different ships, or within the ashore control centre. The OCP will provide flexibility to the overall system, allowing each ship to deal efficiently with each scenario that can occur.
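The graceful-degradation behaviour described above can be sketched as a simple decision rule. This is purely illustrative: the function and mode names are our own and not part of the proposed architecture's specification.

```python
def guidance_source(acc_available: bool, ai_available: bool) -> str:
    """Choose where the evasive-route guidance comes from, degrading
    gracefully as VTS components fail (illustrative decision logic)."""
    if acc_available and ai_available:
        return "coordinated"   # optimal: ACC and AI coordinate all traffic
    if ai_available:
        return "ai-assisted"   # no ACC: on-board AI still enriches sensing
    return "local-gnc"         # sub-optimal but safe: local GNC collision avoidance
```

Even in the worst case, the local GNC branch keeps collision avoidance active, mirroring the failure behaviour described in the text.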
For example, the vessel will discover whether other autonomous ships are navigating in the same area or whether an ACC is available to monitor and support the traffic.\\ To achieve the proposed goals, the VTS needs to operate on three main aspects: autonomous maritime navigation, the analysis of a massive amount of data through AI, and the orchestration and management of their interactions.\\ From the marine-engineering perspective, the main change is a new layer that communicates with the AI algorithms, receives the raw data from the sensors, and detects a potential collision. In case a collision risk is detected, an optimised routing algorithm provides an evasive route. The optimisation problem is highly constrained and needs to consider the ships' real manoeuvring capability, weather conditions, and propulsion-plant and manoeuvring-device limits. Once a new route is assessed by employing way-points (two spatial coordinates plus time), coordinated and shared with the VTS, this information will be sent to the track-keeping module, which will generate the proper set-points to reach the way-points accurately and safely, as shown in Fig. \ref{fig:system_layout}.\\ From the data analysis perspective, the VTS uses AI techniques to integrate and analyse various heterogeneous data, including raw measurements and images. The AI algorithms are used to extract information about obstacles or ships, mine the correlations between them, and build models that enable predicting the route evolution. This produces rich data that supports both the GNC in executing its tasks and the ashore operators in their monitoring activity. The process will be fully automated; however, the information will be displayed to the ACC's human operators using Augmented Reality techniques in case of necessity. The current situation is continuously updated, and real-time monitoring will be possible.
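As a minimal illustration of the way-point handover to the track-keeping module, the following sketch computes course and speed set-points from a way-point defined as two spatial coordinates plus time. The flat-earth kinematics and every name here are simplifying assumptions of ours, not the actual GNC implementation.

```python
import math

def track_keeping_setpoints(pos, waypoint, now):
    """Compute course and speed set-points to reach a way-point
    (two spatial coordinates plus a target time) on schedule.
    pos = (x, y) in metres (x East, y North); waypoint = (x, y, t)."""
    x, y = pos
    wx, wy, wt = waypoint
    dx, dy = wx - x, wy - y
    distance = math.hypot(dx, dy)
    # Nautical convention: course measured clockwise from North.
    course = math.degrees(math.atan2(dx, dy)) % 360.0
    time_left = max(wt - now, 1e-6)   # guard against division by zero
    speed = distance / time_left      # required speed over ground (m/s)
    return course, speed
```

In the proposed architecture these set-points would then be actuated by the control system, subject to the manoeuvrability and propulsion constraints mentioned above.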
Thus, identifying obstacles and ships will allow the motion-planning system to carry out appropriate manoeuvres and predict evasive routes. \Figure[t!](topskip=0pt, botskip=0pt, midskip=0pt)[width=0.9\linewidth]{fig1.png} {Layout of the marine traffic management. \label{fig:system_layout}} \FloatBarrier \section{IOT ORCHESTRATION} \label{sec:iot} \FloatBarrier The ISs used for situational awareness require a pervasive deployment of sensors and actuators to perform their tasks and aid the work of the GNCs. These will include sensors such as inertial motion units to monitor position, velocity, and acceleration; LiDAR and cameras; all the propulsion and manoeuvring monitoring devices; sonars and radars; etc. Moreover, they will need to interact with the GNCs to generate the proper set-points to control the machinery's behaviour. \\ While the GNCs will be deployed in and interact only with the local physical environment, the ISs could either work only locally or interact with remote entities, such as other ships or the ACC, when available. Moreover, to allow a flexible and cost-effective deployment, the GNCs should dynamically discover the ISs for decision support, available either locally or remotely, and adapt to the context.\\ Fig. \ref{fig:iot_layout} shows an example scenario wherein three ships with diverse capabilities coexist in the same area. A fourth node, the ACC, can also be considered if available in the scenario. Nodes A and C are ships equipped with an autonomous GNC that can execute Deep Learning algorithms for situational awareness. Node B is instead a crewed ship, which can measure and store environmental information. The two subsystems above will need to be interconnected through an intelligent middleware that ensures low latency in the monitoring-decision-actuation loop to allow correct and efficient operations.
For this purpose, the OCP has been defined; it manages the ISs' life cycle, discovers and maintains a registry of the available sensing/actuation nodes, and schedules the communications among the various subsystems. The ability to grant the last requirement will also depend on the system's ability to adapt to dynamic conditions, e.g. the varying distance of the ship w.r.t. the ashore control centre, the varying performance of the available communication interfaces, etc. \\ The OCP acts as an intermediary between the ISs and the GNCs, providing the following operations. It will need to maintain a global vision of the system status, including the status of the running AI applications, together with a directory of the available sensing/actuation nodes. It will also need to know the Quality-of-Service (QoS) requirements of each IS and/or control process within the GNCs, to ensure said requirements are met during execution. Whenever a new AI application is activated, the OCP needs to configure the system at multiple levels, operating in a cross-layer manner to let the AI interact with the sensing entities available locally and possibly remotely. The OCP allocates communication resources among the applications and said entities to fit the QoS requirements, e.g. a communication latency below a certain threshold or a minimum guaranteed data-transmission speed. Ensuring said QoS requirements is of paramount importance, especially when dealing with safety-critical operations, wherein low response times have to be granted in either nominal or contingency conditions, e.g., a congested communication network.
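A toy sketch of the directory-plus-QoS role of the OCP described above can make the idea concrete. The data model and selection policy below are hypothetical simplifications of ours, not the platform's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class SensorNode:
    name: str
    kind: str           # e.g. "lidar", "camera", "imu"
    latency_ms: float   # current measured communication latency
    location: str       # "local" (same ship) or "remote"

@dataclass
class Registry:
    """Toy directory of the sensing/actuation nodes known to the OCP."""
    nodes: list = field(default_factory=list)

    def register(self, node: SensorNode) -> None:
        self.nodes.append(node)

    def select(self, kind: str, max_latency_ms: float) -> list:
        """Return nodes of the requested kind meeting the QoS bound,
        preferring local ones to keep the monitoring-decision-actuation
        loop latency low."""
        candidates = [n for n in self.nodes
                      if n.kind == kind and n.latency_ms <= max_latency_ms]
        return sorted(candidates,
                      key=lambda n: (n.location != "local", n.latency_ms))
```

An IS would query `select` with its own QoS bound; an empty result would signal the OCP to renegotiate resources or fall back to a degraded mode.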
\Figure[t!](topskip=0pt, botskip=0pt, midskip=0pt)[width=0.9\linewidth]{fig2.png} {Logical system view.\label{fig:iot_layout}} \FloatBarrier \section{ARTIFICIAL INTELLIGENCE ALGORITHMS AND AUGMENTED REALITY} \label{sec:ai} \FloatBarrier The scientific idea outlined so far consists of a synergy of innovative technologies to implement a complex and highly automated system in which the coexistence of completely autonomous systems with systems under direct human control is possible. \\ Artificial intelligence is the ability of a machine to perform complex reasoning in a human-like way. People often speak interchangeably about Artificial Intelligence, Machine Learning, and Deep Learning. Indeed, a distinction is necessary: Artificial Intelligence is a broad concept that includes automatic reasoning (machine learning), which in turn provides a series of possible approaches, including deep learning. The deep-learning-oriented approach to automated reasoning is the suggested path to fulfil the requirements of managing traffic for collision avoidance purposes. Deep learning algorithms enable the machine to autonomously classify data and structure them hierarchically. This allows the system to solve complex problems and gradually improve performance in an iterative learning process similar to that of the human mind.\\ A particularly lively current trend in artificial intelligence, linked to learning services and communication-efficient federated learning, is represented by auto-encoders. Auto-encoders are unsupervised neural networks. These networks can learn coded patterns of data and, subsequently, when faced with coded representations of data, they can re-generate the input data in an extremely accurate manner.\\ The fields of application that see constant growth in the use of auto-encoders are anomaly detection and image de-noising.
When applying these particular networks to the marine field, an interesting prospect would be to use the auto-encoders to detect anomalies in the functioning of the onboard systems or to reduce noise in the images of objects or ships coexisting in the same navigation area. The idea is to use these networks to reconstruct the images of a ship or other object present in the navigation space in conditions of poor visibility, where recognition even by a human operator is complex. The images are collected by cameras and then processed on board thanks to the auto-encoders, which allow extracting robust features and recognising boats or objects on the radar. Therefore, in addition to a detection problem, there is a problem of image registration from different cameras, because the processed images are also different views of the same object. To solve these problems, the use of different types of auto-encoders has been proposed, in order to identify the best solution for each problem. Auto-encoders, as mentioned above, are designed to transform the input data into a latent code, that is, a compressed version of the starting image in which the features characterising the object are preserved. Subsequently, this latent code is used to retrieve the relevant information. Below are some widely used types of auto-encoders, among which the authors expect to identify the best-performing solutions for the proposed scenarios. Under-complete auto-encoders are designed so that the latent code has a lower dimensionality than the input; if the input features were all independent, it would be extremely complex to reduce the dimensionality. Denoising auto-encoders use images partially corrupted by the artificial and random addition of noise in the training phase to make the algorithm capable of recognising and subtracting the noise and reconstructing the original data.
In sparse auto-encoders the dimensionality is not reduced by decreasing the number of neurons in the latent code, as in the previous cases; instead, the number of simultaneously active neurons is limited, so each input is represented by a limited number of neurons. Variational auto-encoders are generative models: after the training phase, they are able to generate images similar to those received as input. They are therefore able to extract the fundamental features when generating the latent code (of reduced dimensionality) and then to generate images similar to the original ones. The ability to detect anomalies could have various application scenarios, from identifying anomalies in the behaviour of other ships to identifying anomalies in the devices of a target ship. Another interesting hypothesis is the use of auto-encoders for feature extraction in hybrid structures, to optimise already highly performing systems.\\ Augmented Reality (AR), instead, is a technology that enriches the reality perceived by the human eye: it increases the information content of the user's perceived reality with additional digital objects and provides the user with a cognitive aid, but also with decision support if the digital objects are the result of data elaboration/processing and not a simple visualisation. AR is the optimal way to increase the situational awareness of the ACC operators in case of emergency, when they need to command ships remotely. The goal is to train a system, using deep learning algorithms, to recognise various situations; downstream of this recognition phase, the system provides the most suitable strategy to prevent a collision. Moreover, the most appropriate way to present the processed information to the operator using virtual/augmented reality is considered.
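To make the denoising auto-encoder idea described above concrete, here is a deliberately tiny, purely linear sketch in NumPy. The synthetic data, dimensions, and hyper-parameters are all illustrative; the systems discussed in this paper would instead use deep convolutional auto-encoders on real camera images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic dataset: 200 samples of 8-dimensional signals that
# actually live on a 2-dimensional subspace, so 2 latent units suffice.
basis = rng.normal(size=(2, 8))
codes = rng.normal(size=(200, 2))
clean = codes @ basis

def train_denoising_autoencoder(clean, latent_dim=2, noise_std=0.3,
                                lr=0.01, epochs=2000):
    """Under-complete, linear denoising auto-encoder: the input is
    corrupted with random noise and the network is trained to
    reconstruct the clean data."""
    n, d = clean.shape
    W_enc = rng.normal(scale=0.1, size=(d, latent_dim))
    W_dec = rng.normal(scale=0.1, size=(latent_dim, d))
    for _ in range(epochs):
        noisy = clean + noise_std * rng.normal(size=clean.shape)  # corrupt input
        z = noisy @ W_enc            # latent code (compressed representation)
        recon = z @ W_dec            # reconstruction attempt
        err = recon - clean          # train against the *clean* target
        # Gradient descent on the mean squared reconstruction error
        W_dec -= lr * (z.T @ err) / n
        W_enc -= lr * (noisy.T @ (err @ W_dec.T)) / n
    return W_enc, W_dec

W_enc, W_dec = train_denoising_autoencoder(clean)
noisy = clean + 0.3 * rng.normal(size=clean.shape)
denoised = noisy @ W_enc @ W_dec
noise_before = np.mean((noisy - clean) ** 2)
noise_after = np.mean((denoised - clean) ** 2)
```

After training, passing a freshly corrupted batch through the encoder-decoder pair reduces the mean squared error w.r.t. the clean signal, which is exactly the de-noising behaviour exploited for low-visibility imagery.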
\\ The artificial intelligence system based on deep learning algorithms allows, thanks to the analysis of a considerable amount of heterogeneous data (data from onboard sensors, images, information relating to the context in which the boat is operating, etc.), extremely accurate object recognition, even in non-optimal conditions such as poor visibility. In Fig. \ref{fig:ai} it is possible to see the intelligent system layout. Specific deep learning algorithms process heterogeneous data coming from multiple sensors; the outputs are suggestions to support operators and solutions for the autonomous ships. \\ The idea is to use artificial intelligence algorithms to support the operator in evaluating and choosing the proper operations to implement, based on an analysis that considers as much data as possible. Operators will benefit from the intelligent processing of this data thanks to augmented reality. Deep learning algorithms will allow training a system to recognise various situations, thanks to the machine's ability to analyse a considerable amount of apparently unconnected data and then evaluate different solutions from time to time in a completely automatic way. As shown in Fig. \ref{fig:system_layout}, the intelligent system has the role of analysing and processing heterogeneous data coming from the sensors but also from other ships: it receives data from every single ship and from the sensors installed in the surrounding environment, elaborates them, creates and sends synthesised command information to the GNC system of each ship, and displays the information at the ACC (both on the console and on AR devices).\\ The use of innovative methodologies and strategies for data analysis, to obtain an ever-increasing amount of information, allows interpreting the navigation scene sufficiently in advance to implement adequate preventive manoeuvres during navigation.
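The data flow just described (per-ship reports and environment sensors in, command synthesis and ACC display payload out) can be caricatured as one processing cycle. Every name below is illustrative, and the "elaboration" is reduced to a trivial merge where the real IS would run deep-learning detection and classification.

```python
def intelligent_system_step(sensor_frames, ship_reports):
    """One cycle of the Intelligent System: fuse heterogeneous inputs,
    then emit (a) a command synthesis for each ship's GNC and
    (b) the payload displayed at the ACC (console and AR devices)."""
    # "Elaboration": merge detections from environment sensors with
    # the positions reported by every single ship.
    contacts = []
    for frame in sensor_frames:
        contacts.extend(frame.get("detections", []))
    for report in ship_reports:
        contacts.append({"id": report["ship_id"], "pos": report["pos"]})
    # Command synthesis: each ship's GNC receives every contact but itself.
    commands = {
        report["ship_id"]: [c for c in contacts
                            if c.get("id") != report["ship_id"]]
        for report in ship_reports
    }
    acc_view = {"contacts": contacts,
                "ships": [r["ship_id"] for r in ship_reports]}
    return commands, acc_view
```

In the proposed architecture this loop would run continuously, keeping both the GNC inputs and the ACC's real-time monitoring view up to date.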
The goal is to use heterogeneous data to generate new, relevant information about the acquired scene and to interpret it with deep learning techniques, through the recognition of particular objects, their classification, and the diagnosis of any situations of interest. Another objective is to integrate supporting digital information, not necessarily present in the observed reality, to enrich the case history of the reference application context. \Figure[t!](topskip=0pt, botskip=0pt, midskip=0pt)[width=0.9\linewidth]{fig3.png} {High-level view of the Intelligent System.\label{fig:ai}} \FloatBarrier \section{COMMUNICATIONS ARCHITECTURE} \label{sec:comm} \FloatBarrier This section provides a review of the enabling technologies for maritime communications. These technologies can be used for the migration of current information communication systems towards more advanced systems based on a distributed communication logic, using, for example, Machine-Type Communication (MTC) and, in particular, the IoT paradigm.\\ The command and control of ASVs is based on reliable algorithms for autonomous navigation and guidance. The use of these kinds of algorithms reduces the need for the human operator's interaction with the vessel and exponentially increases the amount of data exchanged between ships or between a vessel and the ACC. In scenarios where the human component is limited in favour of control systems, MTC is the most appropriate technological solution to support the communications. In particular, since control is performed through thousands of sensors disseminated within the vessel system, the IoT paradigm \cite{r43}, \cite{r44}, \cite{r45} is the most appropriate for deploying network architectures suitable for the different application scenarios of autonomous shipping, such as search and rescue, aids-to-navigation, and smart navigation.
From the examples of autonomous shipping presented above, it is evident that the fundamental goal of the network infrastructure is to provide reliable connectivity to heterogeneous types of systems and to maritime applications and services, potentially based on the IoT communication paradigm. The challenges of the maritime scenario need to be addressed, modifying the communication infrastructures accordingly to provide a reliable network infrastructure.\\ The network infrastructure has to provide ubiquitous connectivity between vessels and ashore stations, on a global scale and especially over open oceans, to ensure the unbroken and consistent existence of services, such as those based on the AI algorithms used for assisted navigation. Note that ubiquitous connectivity requires a roaming service among the countries traversed by the vessels during navigation. In this scenario, the network infrastructure must be based on communication protocols that can convey nonuniform data traffic. Indeed, marine traffic near ports, shore stations, and waterways is very dense. On the contrary, marine traffic on the high seas, mainly generated by intercontinental transportation or blue-water shipping, is relatively sparse. Due to the multiplicity of maritime services, the network infrastructure needs to be designed following a service-oriented logic. Maritime applications and services vary from simple periodic reporting to route exchange and remote control, as in the autonomous shipping use case. The communication infrastructure will therefore be required to support a wide variety of maritime services, adapting itself to specific needs and matching the changing resource demands of those services. Hence, both the network configuration and the communication resources must be made flexible and adaptive to the offered service.
\\ Further, the network needs to ensure that only qualified or authenticated services are available to the vessels or maritime devices, and vice versa, for maritime safety and security. In addition to the multiplicity of services, the communication infrastructure must connect heterogeneous devices that range from the low-end or low-cost type with reduced functionality to the high-end type with advanced functionality. Low-cost devices are adopted, for example, to monitor the surroundings with low power consumption, generating a small amount of data. The high-end devices are adopted onboard large vessels that encompass dense sensor networks, advanced navigation control systems, and navigation-support systems.\\ In this scenario of both heterogeneous application services and devices, maritime traffic can rapidly grow, as expected, with the diffusion of IoT-based services such as the remote command and control of autonomous ships or the AI algorithms for assisted navigation. Hence, the communication infrastructure must be designed considering both the scalability needs in terms of bandwidth and computing resources, and the capacity constraints of the maritime hardware in terms of radio spectrum and channel bandwidth. So, the communication infrastructure must provide communication and computing support for heterogeneous devices and services. For such a reason, the communication infrastructure must offer interoperability functionalities, i.e., it must allow data exchange and information integration in a flexible, effective, consistent, and cooperative way.
Only by guaranteeing interoperability can the different maritime applications and services, based on the IoT paradigm, be provided with access to the network seamlessly, both within and across network boundaries, and with portability of information, efficiently and securely, across the complete spectrum of maritime IoT services, without effort from the end-user or host and regardless of manufacturer or origin. The allocation of the radio communication spectrum is the last challenge for the design of a communication infrastructure with a global coverage nature. Indeed, the radio spectrum is typically allocated by each nation following its own regulations. To successfully deploy a communication infrastructure worldwide and make it function correctly, it is imperative that an international frequency band be made available and established with appropriate international standards and regulations.\\ A communication infrastructure based on hardware components suitable for machine-type traffic solves the integration between the various types of devices and services relying on the IoT paradigm. Moreover, these technologies can allow the communication of thousands of devices, known as massive-MTC (mMTC) \cite{r46}, \cite{r47}. Concerning the possibility of providing ubiquitous connectivity, the choice to exploit satellite technology offers several advantages: coverage over broad geographical areas and integration with the IoT paradigm, as widely reported in the literature, e.g., in \cite{r48}, \cite{r49}, \cite{r49bis}, and \cite{r50}. Finally, spectrum allocations for satellite communications in the maritime environment have already been defined, as highlighted in \cite{r51}, \cite{r52}, and \cite{r53}.\\ Fig. \ref{fig:comm} shows an example deployment of a space-earth integrated maritime MTC infrastructure, based on the communication modes defined in \cite{r51}, \cite{r54}, and \cite{r55}.
In this infrastructure, a mobile station is an MTC terminal aboard a vessel or embedded in marine equipment. It provides marine equipment with access to the infrastructure, or forms an ad-hoc network among vessels. We can deploy the maritime cloud in the shore station, acting as a trusted platform to provide various maritime services and applications with the highest computational and storage capacity in the maritime IoT framework. The functionalities for the management and orchestration of the communication infrastructure can run in the cloud. Examples of these functionalities are dynamic resource allocation, service resolution, and forwarding mechanisms; thus, the physical infrastructure resources can be maximally shared among service providers and fine-tuned to meet the individual service requirements, enabling service-centric networking. A dense network of shore control stations (nearshore) can be deployed, allowing communication among the vessels in the proximity of the shore and of the maritime cloud. This kind of communication can generate more hyper-dense traffic than that originated by offshore communication. Note that vessels can establish ad-hoc communications to facilitate direct communication among nearby mobile stations for maritime proximity services. In this infrastructure, the satellite can act in both ways, providing access to the communication infrastructure or acting as a relay toward a shore station.\\ GEO satellite technology has shown some drawbacks compared to terrestrial radio ones, such as the inefficient covering of a high-dense traffic area due to its large footprint \cite{r56}, high propagation delays, which affect both bandwidth management \cite{r57} and congestion control algorithms \cite{r59}, and error-prone links, which require forward error correction techniques to guarantee service reliability \cite{r58}.
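For orientation, the propagation-delay gap between GEO and LEO discussed here follows directly from orbital altitude. The sketch below is a back-of-the-envelope illustration (not taken from the cited references) using the nadir, i.e. satellite-directly-overhead, one-way path; real slant paths are somewhat longer.

```python
# Hypothetical sanity check: one-way zenith propagation delay for GEO
# vs. LEO altitudes, using the speed of light in vacuum.
C_KM_S = 299_792.458  # speed of light [km/s]

def one_way_delay_ms(altitude_km: float) -> float:
    """One-way propagation delay [ms] for a satellite directly overhead."""
    return altitude_km / C_KM_S * 1000.0

geo = one_way_delay_ms(35_786)  # geostationary altitude [km]
leo = one_way_delay_ms(1_200)   # upper end of typical LEO altitudes [km]

print(f"GEO one-way delay: {geo:.1f} ms")
print(f"LEO one-way delay: {leo:.1f} ms")
```

The roughly 30-fold gap (about 119 ms vs. a few ms at zenith) is why GEO delays disturb bandwidth management and congestion control while LEO constellations do not; the "about 10 ms" figure quoted below for LEO is consistent with longer slant paths.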
Nevertheless, the new generation of LEO satellites offers a good alternative for maritime \cite{r43} and IoT \cite{r60} communications, as in the scenario taken into account in this work. LEO satellites have a propagation delay of about 10 ms and a smaller footprint than geostationary satellites, so multiple satellites configured in constellations can be used to provide continuous coverage. The satellites in the constellation communicate via inter-satellite links and toward the ground stations through feeder links. Despite the reduced footprint and delay, LEO satellites can still be unsuitable to serve areas with high-dense traffic, such as harbours and waterways \cite{r56}. In this case, a network of shore stations can allow partitioning the communication effort in the high-dense traffic areas, handling the communications coming from a cluster of vessels.\\ Furthermore, the shore stations can facilitate the optimised assignment of the satellite radio channels, increasing the communication infrastructure's capacity. In this scenario, tight interference management, i.e., of intra-cell and inter-cell co-channel interference, is critical to achieving high spectral efficiency and system capacity in such a dense deployment. Note that the shore station could need centralised medium access control to reap the full benefit of the terrestrial communication infrastructure.\\ \begin{table*} \centering \caption{Native and Non-Native Maritime Communication Technologies vs.
Communication Scenarios.}\label{Table:Comparison} \begin{tabular}{ m{1cm}|m{1.8cm}|>{\centering}m{1cm}|>{\centering}m{1.2cm}|>{\centering}m{1cm}|>{\centering}m{1.5cm}|>{\centering}m{1cm}|>{\centering}m{1cm}|m{1cm}} \multicolumn{2}{c|}{} & \multicolumn{4}{c|}{Native} & \multicolumn{3}{c}{Non-Native} \\ \cline{3-9} \multicolumn{2}{c|}{} & AIS & SAT-AIS& VDES & SAT-VDES & SatCom & WiFi & 4/5G\\ \hline \multirow{2}{*}{ S2S} & Near-Shore & \xmark & \cmark & \cmark & \xmark & \xmark &\cmark & \cmark \\ & Off-Shore & \xmark & \cmark & \cmark & \xmark & \xmark & \xmark $\:\backslash$ \cmark &\xmark $\:\backslash$ \cmark \\ \hline \multirow{2}{*}{ S2I} & Near-Shore & \cmark & \cmark & \cmark & \cmark & \cmark & \cmark &\cmark\\ & Off-Shore & \cmark & \xmark & \xmark & \cmark & \cmark & \xmark &\xmark \\ \hline \multicolumn{2}{c|}{MTC} &\cmark & \cmark & \cmark & \cmark & \cmark & \cmark &\cmark\\ \hline \multicolumn{2}{c|}{BC} &\xmark & \xmark & \xmark $\:\backslash$ \cmark & \xmark $\:\backslash$ \cmark & \cmark &\cmark &\cmark\\ \end{tabular} \end{table*} \Figure[t!](topskip=0pt, botskip=0pt, midskip=0pt)[width=0.9\linewidth]{fig4.png} {Example of Communication Infrastructure.\label{fig:comm}} Table \ref{Table:Comparison} shows the comparison of the communication technologies considered in this section, starting from the existing literature. The first part of the table compares the various communication technologies regarding their ability to provide ship-to-ship (S2S) communication in an ad-hoc way or through ship-to-infrastructure (S2I), for the communication scenarios shown in Fig. \ref{fig:comm}. It also shows each technology's ability to support communication near the coast or in the harbour (near-shore) or in the open sea (off-shore), where it is not possible to rely on any communication infrastructure.
Note that the table also evaluates the WiFi and 4/5G technologies, which offer ease of deployment of both ad-hoc and infrastructure-based communication networks. Both technologies provide data rate levels more than adequate for the applications considered in this paper, but WiFi has the drawback of poor communication range, especially when obstacles are present in the communication area, such as in a harbour. 4/5G, instead, has the drawback that the communications always flow through the network infrastructures of the providers. The last consideration should be made about the VDES technology, widely used onboard ships. VDES technology in both satellite (SAT) and terrestrial (TER) versions can provide data rates that are not exceptionally high. However, some studies considered in this section \cite{r51,r52} have highlighted the potential of this technology to be exploited for the transmission of information at data rates in the megabit-per-second range.\\ In the second part of the table, the same technologies are compared in terms of their ability to support different levels of data traffic. Precisely, we consider the medium/low levels of data traffic generated by MTCs, such as that produced by the small sensors and actuators of the cyber-physical systems of the ship, up to the high levels of traffic provided by Broadband Communications (BC), such as that generated by applications that exploit the control logic based on AI algorithms or applications that exploit virtual- and augmented-reality logic. \section{MODEL-SCALE BENCHMARK} \label{sec:bench} It would not be feasible to design, test, and validate the system defined so far due to the cost of applying the proposed idea at full scale; a cyber-physical scenario is the best compromise between reliability and accuracy. Therefore, a cyber-physical scenario would be more suitable in the preliminary design phase.
Such a scenario includes real model-scale ships, digital twins, and accurate digital models that can simulate real operating conditions. In such a scenario, part of the data needed to test and validate the system has to be provided by the monitoring system installed on ship prototypes (field data), while other data can be simulated or extracted from a database. \\ The scenario should include model-scale vessels, fully actuated and remotely controlled. Each model-scale ship should have its own digital twin, a digital model that will be updated with the real information coming from the field. Fig. \ref{fig:test} shows that several simulation models of different ship types should interact in this scenario to have a heterogeneous fleet. A set of sensors, such as Lidar, inertial platform, and distance gauge, will feed the AI algorithm with real-time information regarding the surrounding environment. To enable the physical communication among the different entities described above, wireless communication interfaces and protocols must be investigated within the envisaged scenarios, for example, through an emulated satellite or a terrestrial long-range wireless link, implemented via software-defined radios.\\ At least three different scenarios should be tested to face the most common encounter situations. The first one includes navigation in blue waters, in which two ships navigate and can communicate with each other. The blue water scenario is used to test the ship-to-ship communication structure, and to test and fine-tune the local control parameters of the track-keeping system. The second one represents a crowded scenario in a narrow sea, i.e., channels, port entrances, etc., in which the VTS will manage and coordinate the ships' routes. The second scenario deals with ``last mile'' navigation operations and is necessary to test the capability of the AI algorithms to handle the navigation of several ships navigating in the same area.
This scenario is useful to validate the collision detection and avoidance algorithms. Moreover, in these challenging conditions, the communication infrastructure and the data orchestration systems have to be fully stressed. The third scenario is similar to the previous one, except that the traffic will not be supervised and the ships communicate directly with each other. The third suggested test includes non-optimal conditions, for instance emulating a VTS component's failure, and it will assess the ability of the system to be fault-tolerant, for example by continuing to avoid collisions and adequately managing the traffic in the crowded area, acting only through multiple S2S communications.\\ \Figure[b!](topskip=0pt, botskip=0pt, midskip=0pt)[width=0.9\linewidth]{fig5.png} {Cyber-physical scenario set up.\label{fig:test}} \section{POTENTIAL BENEFITS AND EXPECTED IMPACT} \label{sec:benefits} The proposed system will provide a safe, seamless, smart, inclusive, resilient, environmentally neutral and sustainable system for the maritime sector thanks to ICT technologies and services.\\ The system will boost innovative connected, cooperative and automated sea-mobility technologies and strategies for passenger and goods transportation as additional results. Automated systems reduce the human factor as a cause of maritime accidents on vessels operating in sensitive areas where maritime accidents and incidents would have a significantly negative impact (coastal zones, marine protected areas). Using an IP-based OCP as a middleware between the GNCs and the ISs will allow a diverse range of equipment and applications to communicate through a mature and standardised environment.
This has several advantages: first, it will be possible to integrate commercial or customised IoT sensors and state-of-the-art AI solutions, thus having a potential cost reduction; second, IP connections can be secured using a wide range of existing and validated solutions; third, IP-enabled entities can be allowed to be reached from anywhere, thus paving the way for even more flexible and extendable solutions. \\ In the same context, the benefits of using deep learning algorithms with predictive capabilities, able to analyse time-varying big data, coupled with innovative displaying tools and technologies, allow reducing collisions and mitigating the associated risk, achieving a global increase of the reliability of the autonomous navigation system. The foreseen results are summarised hereinafter: \begin{itemize} \item new traffic management system for the coastal area; increase of the autonomy level of ships, including AI algorithms used on top of GNC systems; \item improved performance of the collision avoidance systems thanks to new re-planning algorithms for multiple ships; \item a fully functional cyber-physical scenario that can be used by industry (in particular automation providers and classification bodies) to test and certify new solutions; enhanced operations and interactions of the autonomous ships with the monitoring operators at the ACC; \item methodology and tools to acquire and share information from various ships, and to and from the ACC; \item advanced intelligent methodologies to process data and to achieve safe and reliable automatic navigation; \item new data representation and displaying techniques that interact with an intelligent system to support the monitoring operators of ashore centres; \item an efficient OCP to dynamically adapt the system to the different operational situations while satisfying the Quality of Service, safety and reliability requirements for computing and communication.
\end{itemize} Figure \ref{fig:taxonomy} shows a taxonomy of the existing challenges in designing a marine traffic management system accounting for autonomous ships. The picture summarizes the main research blocks that may be involved in the development of a project devoted to this aim. More than this, some peculiar study tasks outline what still needs investigation, in line with what is presented in the literature review of this article, which stands out as a broad but not yet complete overview. \begin{figure} \centering \includegraphics[trim=0 0 0 0, clip=true, width=0.99\columnwidth]{Taxonomy_IOS.png} \caption{A taxonomy of the challenges to design a marine traffic management system for autonomous ships.} \label{fig:taxonomy} \end{figure} Regarding the impact that a project of this magnitude could have on the world of work, there can be no doubt that replacing crew with autonomous technology will severely impact today's seafarers. Seafarers work in a hazardous and stressful environment far from home, which is the reason why few sailors have an entire career at sea. Autonomous ships imply that jobs onboard will be replaced by alternative jobs in a better work environment onshore. An evident conclusion is that the profession of sea officer is unattractive to women and that both women and men do not aspire to a whole career at sea. The transition of jobs from sea to back-office environments will improve the working conditions of personnel employed in shipping and their social and personal life, will be more attractive to women, and will contribute to gender equality in the shipping sector with job opportunities and enhanced economic participation. \section{Conclusions} \label{sec:conclusion} From a general perspective, the proposed system will directly impact the maritime field, thanks to its potential of bringing a concept of dynamic cooperation into the scenario of autonomous ships.
In such a sector, the idea is to enhance the existing technology using augmented and virtual reality and machine learning approaches to obtain information suitable for the reliable control of autonomous surface vessels in a wide range of navigation scenarios.\\ Regarding the use of AI algorithms in the challenging maritime context, improvements need to be made to achieve the most effective and efficient way of extracting information and representing it concisely and clearly through augmented-reality devices. The orchestration and communication techniques, instead, should be designed specifically for the maritime context, fitting the challenging environmental conditions and meeting the stringent safety requirements. \\ For the maritime sector, the proposed system will be a breakthrough with respect to the current state of the art, looking forward to when several ships, with different control strategies developed by different automation providers, will navigate together with crewed ships. In such a scenario, Artificial Intelligence algorithms and ship-to-ship communications with a standard protocol, enhanced with respect to the Automatic Identification System currently in use, will be the solution for the new generation of sea traffic management. To the authors' best knowledge, nothing similar exists yet, neither in the scientific literature nor in industrial projects. In the maritime sector, artificial intelligence has begun to establish itself in recent years due to a growing demand for ships' autonomy. \\ Adopting automatic technologies that support the service management will allow rationalising the assets (e.g. energy, materials, equipment) involved in the automated services. This rationalisation benefits both providers and society in terms of product price and environmental pollution. Moreover, the automated technologies that allow automated service behaviour analysis can be adapted to support the workers by predicting hazardous and stressful conditions.
Moreover, allowing automated interactions among multiple entities (ships and shore) will ease the adoption of autonomous ships, simplifying diverse systems' interactions while ensuring optimal performance.\\ Autonomous ship capabilities will impact both the ship investment costs (CAPEX) and the ship operational costs (OPEX). Crewless ships will not need deck houses, heating/cooling, or lifesaving equipment. This will result in different ship concepts with more space for payload, lower CAPEX, and a lower carbon footprint per cargo unit than current ship designs. On the other hand, the cost of autonomous ship technology will be higher than present-day ship automation. Preliminary estimates indicate that the cost savings due to the absence of ship personnel are balanced by the higher cost of ship automation. The proposed system will decrease the number of collisions (ship to ship and ship to pier), with considerable savings in repair costs and reduced delays in goods/services.\\ Finally, the proposed system will increase the safety of human life and goods at sea, reducing accidents in the coastal area. Secondarily, the reduced risk of accidents automatically reduces oil spills significantly, with a lower impact on the nowadays fragile marine ecosystems.
\section*{ACRONYMS} \begin{center} \begin{tabular}{ll} ACC & Ashore Control Center\\ AI & Artificial Intelligence\\ AIS & Automatic Identification System\\ AR & Augmented Reality\\ ASV & Autonomous Surface Vessel\\ BC & Broadband Communication\\ CoAP & Constrained Application Protocol\\ DL & Deep Learning\\ GNC & Guidance-Navigation-Control\\ ICT & Information and Communication Technologies\\ IoT & Internet of Things\\ IS & Intelligent System\\ ML & Machine Learning\\ MTC & Machine-Type Communication\\ OCP & Orchestration and Communication Platform\\ QoS & Quality of Service\\ VDES & VHF Data Exchange System\\ VR & Virtual Reality\\ VTS & Vessel Traffic System\\ \end{tabular} \end{center} \bibliographystyle{IEEEtran}
\section{Introduction} \subsection{Moduli of ppav and compactifications} Denote by ${\mathcal A}_g$ the moduli space of complex principally polarized abelian varieties (ppav), which is the quotient of its (orbifold) universal cover, the Siegel upper half-space ${\mathbb{H}}_g$, by the action of the symplectic group $\op{Sp}(2g,{\mathbb{Z}})$. Let $\Sat$ denote the Satake-Baily-Borel compactification, and recall that the Picard group $\op{Pic}_{\mathbb{Q}}(\Sat)={\mathbb{Q}} \lambda$ is one-dimensional, generated by the class~$\lambda$ of the line bundle ${\mathcal L}\rightarrow\Sat$ of Siegel modular forms of weight one, which is ample on~$\Sat$. Let $\Part$ be Mumford's partial compactification of ${\mathcal A}_g$, so that $\partial\Part={\mathcal X}_{g-1}/\pm 1$, where $\pi:{\mathcal X}_{g-1}\to{\mathcal A}_{g-1}$ denotes the universal family of ppav of dimension~$g-1$. All toroidal compactifications of ${\mathcal A}_g$ contain $\Part$. The boundary of the perfect cone toroidal compactification $\Perf$ (we use this notation as no other toroidal compactification will appear) is an irreducible Cartier divisor~$D$, and $\partial\Part$ is dense within $D$. The compactification $\Perf$ is ${\mathbb{Q}}$-factorial, with $\op{Pic}_{\mathbb{Q}}\Perf={\mathbb{Q}}\lambda\oplus {\mathbb{Q}}\delta$, where $\delta$ denotes the class of~$D$. The Picard group $\op{Pic}_{\mathbb{Q}}\Part$ is generated by the restrictions of the classes $\lambda$ and $\delta$ from $\Perf$ to $\Part$. Philosophically, in what follows, the definition of the slope of divisors takes place on $\Part$, though to formally make sense of it we work on $\Perf$ (and refer to \cite[Appendix]{gsmPrym} for a discussion of why this notion is the same for any other toroidal compactification). \subsection{The ample and effective slopes} Given a divisor $E$ on $\Perf$ such that its class in the Picard group is $[E]=a\lambda-b\delta$, its {\em slope} is defined to be $s(E)\coloneqq a/b$.
The slope of a cone of divisors on $\Perf$ is defined as the infimum of the slopes of divisors contained in the cone. Shepherd-Barron \cite{sb} proved that the {\em ample slope} of ${\mathcal A}_g$, that is the slope of the cone of ample divisors is equal to $12$, namely $$ s_\Amp(\Perf)\coloneqq \inf\left\{s(E)\colon E\in \Amp(\Perf)\right\}=12\,. $$ The {\em effective slope}, that is the slope of the cone of effective divisors $$ s_\Eff(\Perf)\coloneqq \inf\left\{s(E)\colon E\in \Eff(\Perf)\right\}\, , $$ has attracted a lot of attention, in particular because \[ s(K_{\Perf})=s\left((g+1)\lambda-\delta\right)=g+1\,, \] so that the inequality $s_\Eff(\Perf)<g+1$ would imply that ${\mathcal A}_g$ is of general type. Freitag \cite{frei} used the theta-null divisor $\tnull$, of slope $s(\tnull)=8+2^{3-g}$, to show that ${\mathcal A}_g$ is of general type for $g\ge 8$, Mumford \cite{mum} used the Andreotti-Mayer divisor $N_0$, of slope $s(N_0)=6+\tfrac{12}{g+1}$, to show that ${\mathcal A}_g$ is of general type for $g\ge 7$, while recently the fourth author with collaborators \cite{mss} showed that $s_\Eff(\Perf[6])\leq 7$, which implies that the Kodaira dimension of ${\mathcal A}_6$ is non-negative. It is known that ${\mathcal A}_g$ is unirational for $g\leq 5$ (see \cite{MoMu}, \cite{C}, \cite{D}, \cite{V} for the harder cases of $g=4,5$). In fact $s_\Eff(\Perf)$ is known explicitly for $g\leq 5$: the computation of $s_\Eff(\Perf[5])$ is one of the main results of \cite{fgsmv}, and the lower genera cases are reviewed below. On the other hand, the slope $s_\Eff(\Perf)$ is not known for any $g\ge 6$. While the techniques of Tai \cite{tai} show that $s_\Eff(\Perf)=O(1/g)$ for $g\to \infty$ (as explained in~\cite{grag}), not a single explicit effective divisor~$E$ on ${\mathcal A}_g$, for any~$g$, with $s(E)\leq 6$ is known. 
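The slope comparisons above can be tabulated numerically. The following Python sketch (an illustration added here, not part of the original argument; exact rational arithmetic) recovers the genus ranges in which the quoted divisors have slope below $s(K_{\Perf})=g+1$, i.e.\ prove that ${\mathcal A}_g$ is of general type:

```python
# Recomputing the slopes quoted above with exact rationals:
# s(theta_null) = 8 + 2^(3-g),  s(N_0) = 6 + 12/(g+1),  s(K) = g + 1.
from fractions import Fraction

def s_thetanull(g):
    return 8 + Fraction(2) ** (3 - g)

def s_N0(g):
    return 6 + Fraction(12, g + 1)

for g in range(4, 13):
    canonical = g + 1
    witnesses = [name for name, s in
                 [("theta_null", s_thetanull(g)), ("N_0", s_N0(g))]
                 if s < canonical]
    print(g, s_thetanull(g), s_N0(g), witnesses)
```

Running the loop shows that $\tnull$ first beats the canonical slope at $g=8$ (Freitag's range) and $N_0$ at $g=7$ (Mumford's range), while for $g=6$ neither divisor suffices, consistent with the discussion above.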
The analogous notion of effective slope for the moduli space of curves ${\mathcal M}_g$ has been investigated in many papers, in particular for its similar link with the Kodaira dimension of ${\mathcal M}_g$, starting with \cite{harrismumford}, \cite{harris}, \cite{eisenbudharris}, and with continuing recent progress such as \cite{fjp}. \subsection{The moving slope}\label{sec:intro-moving} Recall that an effective divisor $E$ is called {\em moving} if $h^0(E)>1$ and if moreover the base locus of its linear system $|E|$ has codimension at least two. The {\em moving slope} is the slope of the cone $\Mov$ of moving divisors $$ s_\Mov(\Perf)\coloneqq \inf\{s(E)\colon E\in \Mov(\Perf)\}\,. $$ Since the moving cone is contained in the effective cone, we have $s_\Eff({\mathcal A}_g)\leq s_\Mov({\mathcal A}_g)$. We first observe that if the effective slope is in fact an infimum but not a minimum, then $s_\Eff({\mathcal A}_g)=s_\Mov({\mathcal A}_g)$, since there is an infinite sequence of effective divisors of strictly decreasing slopes converging to this infimum (see \Cref{lm:properties}(iii) for a precise statement and proof). Thus investigating the moving slope is only of interest if there exists an effective divisor $E\subset{\mathcal A}_g$ of slope $s(E)=s_\Eff({\mathcal A}_g)$. \smallskip While the moving slope of $\Perf$ is less well-studied than the effective slope, it is also important in attempting to determine the structure of the ring of Siegel modular forms, and in attempting to run the log-MMP for ${\mathcal A}_g$ and determine its interesting birational models: in fact, the pull-back of an ample divisor on a normal projective variety $X$ via a non-constant rational map $f:\Perf\dashrightarrow X$ is a moving divisor, as remarked in \cite[Section 1.2]{cfm}. The moving slope of $\Perf$ is known for $g\leq 4$, as we will review below, and Tai's results also imply that $s_\Mov(\Perf)=O(1/g)$ as $g\to\infty$.
While the original published version of the paper \cite{fgsmv_published} claimed an upper bound for $s_\Mov(\Perf[5])$, there was a numerical error, and the corrected (arXiv) version \cite{fgsmv} does not allow one to deduce any statement on $s_\Mov(\Perf[5])$. For $g=6$ the knowledge of the moving slope of $\Perf[6]$ would help determine the Kodaira dimension of ${\mathcal A}_6$, if it turns out that $s_\Eff(\Perf[6])=7=s(K_{\Perf[6]})$. As in the case $g=5$, though, the moving slope of $\Perf$ remains unknown at present for every $g\geq 6$. \subsection{Context} Our paper revolves around the problem of constructing, from a given modular form, or from given modular forms, new modular forms of controlled slope. In particular, given a modular form of minimal slope, such a procedure can provide other interesting modular forms of low slope: for example, for $2\leq g\leq 4$ it does provide a modular form of minimal moving slope (\Cref{maincor:mov}). Our construction(s) will consist in applying certain holomorphic differential operators to Siegel modular forms, so as to yield Siegel modular forms again (\Cref{thm:main}). For motivation, recall the definition of two such well-known operators for $g=1$. The first one is the {\em Serre derivative} (credited by Serre \cite[Theorem 4]{serre:modulaire} to Ramanujan \cite{ramanujan}): it sends modular forms of weight $a$ to modular forms of weight $a+2$, and is defined as ${\mathcal S}_a(F):=\frac{d F}{d \tau}-\frac{\pi ia}{6}\,E_2\cdot F$, where $E_2$ is the Eisenstein series of weight $2$ (see also \cite[Section 5]{zagier} and \cite[Lemma 3]{sd}). The second one is the second {\em Rankin-Cohen bracket} (see \cite{rankin} and \cite{cohen}), which sends a modular form of weight $a$ to a modular form of weight $2a+4$, and is defined as $[F,F]_{2,a}:=aF\frac{d^2 F}{d\tau^2}- (a + 1)\left(\frac{d F}{d \tau}\right)^2$.
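As a concrete sanity check of the Serre derivative (an illustration added here, not part of the original argument), one can verify on $q$-expansions that ${\mathcal S}_4(E_4)$ is proportional to $E_6$. Writing $D=q\frac{d}{dq}=\frac{1}{2\pi i}\frac{d}{d\tau}$, the definition above gives ${\mathcal S}_4(E_4)=2\pi i\,\bigl(DE_4-\tfrac13 E_2E_4\bigr)$, and the following Python sketch checks coefficient by coefficient that this equals $-\tfrac{2\pi i}{3}E_6$:

```python
# Sanity check of the Serre derivative on q-expansions (exact rationals).
# With D = q d/dq = (2*pi*i)^(-1) d/dtau, the text's S_a(F) becomes
# 2*pi*i * (D F - (a/12) E2 * F); for F = E4 the result is -(2*pi*i/3) E6.
from fractions import Fraction

N = 12  # number of q-expansion coefficients to compare

def sigma(n, k):  # divisor power sum sigma_k(n)
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

E2 = [1] + [-24 * sigma(n, 1) for n in range(1, N)]
E4 = [1] + [240 * sigma(n, 3) for n in range(1, N)]
E6 = [1] + [-504 * sigma(n, 5) for n in range(1, N)]

def D(f):   # the operator D = q d/dq acting on coefficient lists
    return [n * c for n, c in enumerate(f)]

def mul(f, g):  # truncated product of two q-series
    return [sum(Fraction(f[i]) * g[n - i] for i in range(n + 1))
            for n in range(N)]

serre_E4 = [Fraction(u) - Fraction(4, 12) * v
            for u, v in zip(D(E4), mul(E2, E4))]
assert serre_E4 == [Fraction(-c, 3) for c in E6]
print("S_4(E4) = -(2*pi*i/3) * E6, verified to order q^%d" % (N - 1))
```

The check is a disguised form of Ramanujan's identity $DE_4=\tfrac13(E_2E_4-E_6)$, and illustrates in the smallest case how a differential operator with a quasimodular correction term produces a genuine modular form of higher weight.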
Note that ${\mathcal S}_a$ is a $1$-homogeneous (i.e.~multiplying $F$ by a constant $\lambda$ multiplies ${\mathcal S}_a(F)$ by $\lambda^1$) differential operator in $\tau$ with non-constant coefficients, while $[\cdot,\cdot]_{2,a}$ is $2$-homogeneous, of pure order $2$ (meaning that all summands involve the derivative $\frac{d}{d\tau}$ twice), with constant coefficients. There are also $2n$-th Rankin-Cohen brackets $[\cdot,\cdot]_{2n,a}$, which are $2n$-homogeneous, of pure order $2n$, with constant coefficients, and send modular forms of weight $a$ to modular forms of weight $2a+4n$. The holomorphic differential operators that we will produce for $g\geq 2$ are, on one hand, analogous to ${\mathcal S}_a$, as they will be $g$-homogeneous, of order $g$; on the other hand, they share some similarities with the even Rankin-Cohen brackets, as they will be pure of order~$g$ (meaning that each summand involves exactly $g$ partial derivatives), with constant coefficients. \subsection{Main results} In order to formulate our main result, given a holomorphic function $F:{\mathbb{H}}_g\to{\mathbb{C}}$, we assemble the coefficients of its differential $dF$ into the matrix \[ \partial F\coloneqq\left(\begin{array}{cccc} \frac{\partial F}{\partial \tau_{11}} & \frac{\partial F}{2\partial \tau_{12}}&\dots& \frac{\partial F}{2\partial \tau_{1g}} \\ \vdots & \vdots&\ddots&\vdots \\ \frac{\partial F}{2\partial \tau_{g1}}& \frac{\partial F}{2\partial \tau_{g2}}&\dots& \frac{\partial F}{\partial \tau_{gg}} \end{array}\right)\,, \] and we consider the holomorphic function $\det(\partial F):{\mathbb{H}}_g\to{\mathbb{C}}$. Suppose now that $F$ is a modular form of weight $a$, with vanishing order $b$ along the boundary $\partial\Perf$ (this will be defined formally in the next section).
The determinant $\det(\partial F)$ is in general not a modular form, but its restriction to the zero locus $\{F=0\}$ behaves as a modular form of weight $ga+2$ (a more intrinsic approach to $\det(\partial F)$ will be given in \Cref{rem:intrinsic}). Our main result is the following construction. \begin{mainthm}{A}\label{thm:main} For every $g\geq 2$ and every integer $a\geq \frac{g}{2}$ there exists a differential operator $\mathfrak{D}_{g,a}$ acting on the space of genus $g$ Siegel modular forms of weight $a$ that satisfies the following properties: \begin{itemize} \item[(i)] if $F$ is a genus $g$ Siegel modular form of weight $a$ and vanishing order $b$ along the boundary, then $\mathfrak{D}_{g,a}(F)$ is a Siegel modular form of weight $ga+2$ and of vanishing order $\beta\ge gb$ along the boundary; \item[(ii)] the restriction of $\mathfrak{D}_{g,a}(F)$ to the zero locus of $F$ is equal to the restriction of $\det(\partial F)$. \end{itemize} \end{mainthm} \begin{rem} In \Cref{thm:main} it is possible to deal with Siegel modular forms $F$ with character with respect to $\op{Sp}(2g,{\mathbb{Z}})$, which occur only for $g=2$. Since $\mathfrak{D}_{2,a}$ is quadratic, $\mathfrak{D}_{2,a}(F)$ will then still be a modular form (with trivial character). \end{rem} What we will actually prove is a more precise version of this statement. In \Cref{thm:precise} we construct for every $g\geq 2$ and $a\geq \frac{g}{2}$ a holomorphic differential operator ${\mathcal D}_{Q_{g,a}}$ in the $\tau_{ij}$ with constant coefficients, and we define $\mathfrak{D}_{g,a}(F):={\mathcal D}_{Q_{g,a}}(F)/g!$ for every Siegel modular form $F$ of genus $g$ and weight $a$. Thus $\mathfrak{D}_{g,a}(F)$ is always polynomial in~$F$ and its partial derivatives, though its coefficients depend on the weight $a$.
Though the operator $\mathfrak{D}_{g,a}$ need not be unique, properties (i-ii) in \Cref{thm:main} force the Siegel modular form $\mathfrak{D}_{g,a}(F)$ to be unique up to adding modular forms divisible by~$F$ (which would thus vanish on the zero locus of~$F$). The construction is explicit, and in \Cref{sec:explicit} we will give explicit formulas for $\mathfrak{D}_{2,a}$ and $\mathfrak{D}_{3,a}$. \smallskip A priori $\mathfrak{D}_{g,a}(F)$ could have a common factor with $F$, or could even be identically zero. In order to prevent such behavior, we will apply \Cref{thm:main} only to modular forms $F$ that satisfy what will be our {\em main condition:} \begin{equation}\tag{\cond}\label{cond} \begin{array}{c} \hfill \text{\it $\det(\partial F)$ does not vanish identically }\hfill\\ \hfill\text{\it on any irreducible component of $\{F=0\}$.}\hfill \end{array} \end{equation} Our main application is an immediate consequence of \Cref{thm:main}. \begin{maincor}{B}\label{cor:main} Suppose that the effective slope $s_\Eff(\Perf)=a/b$ is realized by a modular form $F$ of weight $a\geq\frac{g}{2}$ that satisfies Condition \eqref{cond}. Then \begin{equation}\label{eq:ineqEffMov} s_\Mov(\Perf)\leq s(\mathfrak{D}_{g,a}(F))\leq s_\Eff(\Perf)+\tfrac{2}{bg}\,. \end{equation} \end{maincor} We first note that if the zero locus of $F$ in \Cref{cor:main} is not irreducible, then each of its irreducible components must have the same slope. As a consequence, $s_\Mov(\Perf)=s_\Eff(\Perf)$ (as will be proven carefully in \Cref{lm:properties}(i)), and so the statement becomes trivial. Thus we can assume that the zero locus of $F$ is irreducible. We stress that the inequality \eqref{eq:ineqEffMov} for the moving slope depends on the actual class $[F]$, not just on the slope~$s(F)$. Moreover, Condition \eqref{cond} forces $F$ to be square-free. For every $g\leq 5$ it is known that a reduced effective divisor on $\Perf$ of minimal slope exists and is unique.
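To illustrate the bound \eqref{eq:ineqEffMov} numerically: in genus $2$, where (as recalled in \Cref{sec:lowgenus}) the effective slope $10$ is realized by a form of class $5\lambda-\tfrac{1}{2}\delta$ satisfying Condition \eqref{cond}, the corollary gives
\[
s_\Mov(\Perf[2])\leq 10+\frac{2}{\tfrac{1}{2}\cdot 2}=12\,.
\]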
For $g\leq 4$ the machinery of \Cref{cor:main} produces an (already known) divisor that realizes the moving slope. \begin{maincor}{C}\label{maincor:mov} For $2\leq g\leq 4$ the modular form $F$ of minimal slope on $\Perf$ satisfies Condition \eqref{cond} and has weight $a\geq \frac{g}{2}$. Moreover, $\mathfrak{D}_{g,a}(F)$ realizes the moving slope of ${\mathcal A}_g$. \end{maincor} For $g=5$, in \cite{fgsmv} it was proven that the Andreotti-Mayer divisor $N'_0$ (whose definition will be recalled in \Cref{sec:N0}) is the unique effective divisor of minimal slope on $\Perf[5]$. Since we will show in \Cref{prop:cond} that $N'_0$ satisfies Condition \eqref{cond}, as a consequence of \Cref{cor:main} we obtain \begin{maincor}{D}\label{cor:g=5} The moving slope of ${\mathcal A}_5$ satisfies $s_\Mov(\Perf[5])\leq \tfrac{271}{35}$, and the slope $271/35$ is achieved by a moving effective divisor. \end{maincor} In the following table, we collect what is thus known about the effective and moving slopes of $\Perf$: \begin{center} \begin{tabular}{c|c|c} & $s_{\Eff}(\Perf)$ & $s_{\Mov}(\Perf)$ \\ \hline $g=1$ & $12$ & \\ $g=2$ & $10$ & $12$ \\ $g=3$ & $9$ & $28/3=9.333\dots$ \\ $g=4$ & $8$ & $17/2=8.500\dots$ \\ $g=5$ & $54/7=7.714\dots$ & $\leq\,271/35=7.742\dots$ \\ $g=6$ & $[\frac{53}{10},\ 7]$ & (?) $\leq\,43/6=7.166\dots$\\ $g\gg 1$ & $O(1/g)$ & $O(1/g)$ \end{tabular} \end{center} where the upper bound $s_\Eff(\Perf[6])\leq 7$ is provided by the Siegel modular form $\theta_{L,h,2}$ of class $14\lambda-2\delta$ constructed in \cite{mss}. The question mark in the above table marks a conjectural upper bound $s_\Mov(\Perf[6])\leq 43/6$, which is a consequence of the following. \begin{maincor}{E}\label{cor:genus6} The form $\theta_{L,h,2}$ on ${\mathcal A}_6$ is prime, i.e.~not a product of non-constant Siegel modular forms. Moreover, if $\theta_{L,h,2}$ satisfies Condition \eqref{cond}, then \[ s_\Mov(\Perf[6])\leq \frac{43}{6}.
\] \end{maincor} The Torelli map $\tau_g:{\mathcal M}_g\to{\mathcal A}_g$ sending a curve to its Jacobian is an injection of coarse moduli spaces, but for $g\ge 3$ it is 2-to-1 as a map of stacks. We denote by ${\mathcal J}_g$ the closure of $\tau_g({\mathcal M}_g)$ inside ${\mathcal A}_g$, which is called the locus of Jacobians. For $g\leq 3$ we have ${\mathcal J}_g={\mathcal A}_g$, while ${\mathcal J}_4\subset{\mathcal A}_4$ is the zero locus of the Schottky modular form $S_4$, which has weight $8$. Since (even) theta constants always vanish along the locus of curves with even multiplicity, this implies that $\tnull\cap{\mathcal J}_g=2\Theta_{\nl}$ for $g\ge 3$, where $\Theta_{\nl}\subset{\mathcal J}_g$ is an integral divisor. As a byproduct of our analysis, we also obtain the following result on Jacobians: \begin{maincor}{F}\label{cor:M4} The restriction of the form $\mathfrak{D}_{4,8}(S_4)$ to ${\mathcal J}_4$ cuts out the divisor $\Theta_{\nl}$. \end{maincor} Beyond these results, we investigate the applications both of Rankin-Cohen brackets and of differential operators acting on Siegel modular forms to constructing new effective divisors. Our results above go essentially one step in this direction, by applying the differentiation technique to a modular form of lowest slope. This construction can be iterated or varied to apply it to a tuple of different modular forms: it would be interesting to investigate the collection of modular forms thus produced, and to see in particular if this sheds any further light on the structure of the ring of Siegel modular forms in any genus $g\ge 4$, where that ring is not fully known. \subsection{Structure of the paper} The paper is organized as follows. In \Cref{sec:modforms} we set the notation and review the relation between effective divisors on~${\mathcal A}_g$ and Siegel modular forms. In \Cref{sec:relevant} we recall the construction and the slopes of the theta-null divisor $\tnull$ and of the Andreotti-Mayer divisor $N'_0$, and we show that both satisfy Condition \eqref{cond}.
In \Cref{sec:RC} we define the Rankin-Cohen bracket and prove a weaker version of \Cref{cor:main}. In \Cref{sec:lowgenus} we review the computation of the effective and moving slopes for $g\leq 4$, derive Corollaries \ref{maincor:mov}, \ref{cor:g=5} and \ref{cor:genus6} from \Cref{thm:main}, and prove \Cref{cor:M4}. Finally, in \Cref{sec:diffop} we introduce a remarkable class of differential operators acting on Siegel modular forms, define ${\mathcal D}_{Q_{g,a}}$, and prove \Cref{thm:main}. \section*{Acknowledgements} We are grateful to Gavril Farkas and Alessandro Verra for discussions related to \cite{fgsmv_published}, leading to the correction \cite{fgsmv} and to a better understanding of the intricacies of the geometry involved. We are indebted to Claudio Procesi for the proof of \Cref{procesi}. S.~Grushevsky thanks the Weizmann Institute of Science for hospitality in spring 2022, when this work was done and the paper was written. \section{Siegel modular forms and compactifications of ${\mathcal A}_g$}\label{sec:modforms} We briefly recall the standard notions on Siegel modular forms, referring to \cite{frei} for a more detailed introduction. Unless specified otherwise, we assume $g\geq 2$. \subsection{The Siegel space and the moduli space of ppav} The Siegel upper half-space ${\mathbb{H}}_g$ is the space of complex symmetric $g\times g$ matrices $\tau$ with positive definite imaginary part. An element $\gamma$ of the symplectic group $\op{Sp}(2g,{\mathbb{Z}})$, written as $\gamma=\left(\begin{smallmatrix}A & B \\ C & D \end{smallmatrix}\right)$ in $g\times g$ block form, acts on ${\mathbb{H}}_g$ via \[ \gamma\cdot \tau\coloneqq (A\tau+B)(C\tau+D)^{-1}. \] The action of $\op{Sp}(2g,{\mathbb{Z}})$ on ${\mathbb{H}}_g$ is properly discontinuous, with finite stabilizers.
The quotient ${\mathcal A}_g={\mathbb{H}}_g/\op{Sp}(2g,{\mathbb{Z}})$ is the moduli space of ppav --- it is a quasi-projective variety that can be given the structure of an orbifold (or a Deligne-Mumford stack). We denote by $\pi:{\mathcal X}_g\to{\mathcal A}_g$ the universal family of ppav. \subsection{Divisors and Siegel modular forms} A holomorphic function $F:{\mathbb{H}}_g\to{\mathbb{C}}$ is called a holomorphic {\em Siegel modular form of weight $k$} with respect to $\op{Sp}(2g,{\mathbb{Z}})$ if \[ F(\gamma\cdot\tau)=\det(C\tau+D)^kF(\tau) \] for all $\tau\in{\mathbb{H}}_g$ and for all $\gamma\in\op{Sp}(2g,{\mathbb{Z}})$ (for $g=1$ there is an additional regularity condition). This automorphy property with respect to $\op{Sp}(2g,{\mathbb{Z}})$ defines the line bundle \[ {\mathcal L}^{\otimes k}\longrightarrow{\mathcal A}_g \] of Siegel modular forms of weight $k$ on ${\mathcal A}_g$. \begin{rem} While in our paper we focus on Siegel modular forms for $\op{Sp}(2g,{\mathbb{Z}})$, the holomorphic differential operator that we consider is defined for any holomorphic function on~${\mathbb{H}}_g$, and will preserve suitable automorphy properties. It can thus also be applied to Siegel modular forms with multiplier systems for subgroups of $\op{Sp}(2g,{\mathbb{Z}})$. In particular we will apply it to a Siegel modular form with a character, namely the theta-null $T_2$ in genus two, discussed in \Cref{sec:theta}. \end{rem} \subsection{Satake compactification} The Satake-Baily-Borel compactification $\Sat$ can be defined as \[ \Sat\coloneqq \mathbf{Proj}\left(\oplus_{n\geq 0} H^0({\mathcal A}_g,{\mathcal L}^{\otimes n})\right). \] What this means is that sections of a sufficiently high power of ${\mathcal L}$ embed ${\mathcal A}_g$ into a projective space, and $\Sat$ is the closure of the image of ${\mathcal A}_g$ under such an embedding.
Since $\op{Pic}_{\mathbb{Q}}({\mathcal A}_g)={\mathbb{Q}}\lambda$, where $\lambda$ denotes the class of ${\mathcal L}$, this implies that any effective ${\mathbb{Z}}$-divisor on ${\mathcal A}_g$ is the zero locus of a Siegel modular form. \subsection{Partial and perfect cone toroidal compactifications} Set-theoretically, $\Sat$ is the union of locally closed strata \[ \Sat={\mathcal A}_g\sqcup {\mathcal A}_{g-1}\sqcup\dots\sqcup {\mathcal A}_0. \] The partial (aka Mumford, or rank one) toroidal compactification \[ \Part\coloneqq{\mathcal A}_g\sqcup\partial\Part \] is obtained by blowing up the partial Satake compactification ${\mathcal A}_g\sqcup{\mathcal A}_{g-1}$ along its boundary ${\mathcal A}_{g-1}$, and the exceptional divisor $\partial\Part$ can then be identified with ${\mathcal X}_{g-1}/\pm 1$. Any toroidal compactification contains $\Part$ and admits a blowdown morphism to $\Sat$. The perfect cone toroidal compactification $\Perf$ has the property that the complement $\Perf\setminus\Part$ is of codimension $2$ inside $\Perf$. The boundary \[ D\coloneqq\partial \Perf \] is an irreducible Cartier divisor, which is the closure of $\partial\Part$. We denote by $p:\Perf\to\Sat$ the blowdown map. \subsection{Effective divisors on ${\mathcal A}_g$} The effective and moving slopes are computed on effective divisors in ${\mathcal A}_g$, or, equivalently, on effective divisors in $\Perf$ whose support does not contain~$D$. We will call such divisors {\em internal}. For clarity and completeness, we explain how to associate an internal divisor to a Siegel modular form. A Siegel modular form $F$ of weight $a$, thought of as a section of ${\mathcal L}^{\otimes a}$ on $\Sat$, can be pulled back to a section of $p^*{\mathcal L}^{\otimes a}$ on~$\Perf$.
If the vanishing order $\op{ord}_D(p^*F)$ of $p^*F$ along the divisor~$D$ is $b$, this means that the zero locus of $p^*F$ on $\Perf$ is the union of an effective divisor not containing~$D$ in its support, which we will denote by $(F)$ and call the zero divisor of the modular form, and of the divisor~$D$ with multiplicity $b$. Since by definition the zero locus $\{ F=0\}\subset\Sat$ has class $a\lambda$, its preimage in $\Perf$ has class $ap^*\lambda$ (or $a\lambda$, with our usual abuse of notation), and it follows that the class of the zero divisor of a modular form is \[ [(F)]=a\lambda-b\delta\in \op{Pic}_{\mathbb{Q}}(\Perf) \] with $a>0$ and $b\geq 0$. To summarize the above discussion, we see that internal effective divisors on $\Perf$ correspond bijectively to Siegel modular forms up to multiplication by a constant, and from now on we will talk about them interchangeably, additionally suppressing the adjective ``internal'' as we will never need to deal with effective divisors on $\Perf$ whose support contains~$D$. We thus define the slope $s(F)$ of a modular form $F$ to be the slope of the corresponding (internal) effective divisor $(F)$. We will write $F$ for the modular form considered on $\Perf$, and stress that the notation $[F]\coloneqq [(F)]$ for the class of the zero divisor of a Siegel modular form on $\Perf$ does {\em not} signify the class of the pullback $p^*F$, which would be simply equal to $a\lambda$. Every effective divisor $E\subset\Perf$ can be uniquely written as $E=\sum c_i E_i$ for suitable $c_i>0$ and pairwise distinct, irreducible, reduced divisors $E_i$. We say that two divisors $E=\sum c_iE_i$ and $E'=\sum d_j E'_j$ have distinct supports if $E_i\ne E'_j$ for all $i,j$. Similarly, a Siegel modular form $F$ can be uniquely written as a product $F=\prod F_i^{c_i}$ for suitable $c_i>0$ and pairwise distinct, {\em prime} Siegel modular forms $F_i$ (i.e.~forms that cannot be factored as products of non-constant modular forms).
Two modular forms $F=\prod F_i^{c_i}$ and $F'=\prod (F'_j)^{d_j}$ are said to {\em not have a common factor} if $F_i\neq F'_j$ for all $i,j$. \subsection{Fourier-Jacobi expansion} The vanishing order of a Siegel modular form~$F$ at~$D$ can be computed using the Fourier-Jacobi expansion, which we briefly recall for further use. Writing an element $\tau\in{\mathbb{H}}_g$ as $$\tau = \left( \begin{matrix} \tau' & z \\ z^t & w \end{matrix} \right) \in {\mathbb{H}}_g$$ with $ \tau' \in{\mathbb{H}}_{g-1}, \,\,z\in{\mathbb{C}}^{g-1}, \,\, w\in {\mathbb{H}}_1$, and setting $q\coloneqq \exp(2\pi i w)$, we expand~$F$ in power series in $q$: \begin{equation}\label{F-J} F(\tau) = \sum_{r\geq 0} f_r(\tau', z) q^r\,. \end{equation} Then the vanishing order $\op{ord}_DF$ (which we will often denote~$b$) of $F$ along~$D$ is detected by the Fourier-Jacobi expansion as \begin{equation} \op{ord}_D F=\min\{ r\geq 0 \,:\, f_r(\tau',z) \not\equiv 0\}\, . \end{equation} The form $F$ is called a {\em cusp form} if it vanishes identically on~$D$; equivalently, if $f_0\equiv 0$, that is, if $\op{ord}_DF>0$. \subsection{First properties of the moving slope} Here we record some properties of the moving slope, showing that one should only focus on the case when there exists an effective divisor of minimal slope, and furthermore that one should only focus on irreducible effective divisors. These are general properties that we state for $\Perf$, but hold on any projective variety. \begin{lm}\label{lm:properties} The moving slope satisfies the following properties: \begin{itemize} \item[(i)] if $E\neq E'$ are irreducible reduced effective divisors, then $$s_\Mov(\Perf)\leq \max\{s(E),s(E')\}\,;$$ \item[(ii)] if $s_\Mov(\Perf)=s(E)$ for {\em some} moving divisor $E$, then there exists an {\em irreducible} moving divisor $E'$ such that $s_\Mov(\Perf)=s(E')$; \item[(iii)] if there does {\em not} exist an effective divisor $E$ such that $s(E)=s_\Eff(\Perf)$, then $s_\Eff(\Perf)=s_\Mov(\Perf)$.
\end{itemize} \end{lm} \begin{proof} (i) Let $[E]=a\lambda-b\delta$ and $[E']=a'\lambda-b'\delta$, and suppose that $s(E)\leq s(E')$, i.e.~that $a'b-ab'\geq 0$. Then the linear system $|aE'|$ contains $a'E+(a'b-ab')D$, and its base locus is therefore contained inside $E'\cap(E\cup D)$, which has codimension at least two. It follows that $aE'$ is a moving divisor. Since $[aE']=a(a'\lambda-b'\delta)$, we obtain $s_\Mov(\Perf)\leq s(aE')=s(E')=a'/b'$. (ii) If the general element of the linear system $|E|$ is irreducible, then we can choose $E'$ to be any such element. Otherwise a general element $E_t\in |E|$ can be written as a sum $E_t=E^1_t+\dots+E^m_t$ of $m$ distinct effective divisors. Moreover each $E^i_t$ belongs to a linear system whose base locus is contained in the base locus of $E_t$. Hence each $E^i_t$ is moving. Since $s_\Mov(\Perf)\leq \min_i s(E^i_t)\leq s(E_t)=s_\Mov(\Perf)$, we conclude that $s(E^i_t)=s_\Mov(\Perf)$ for all $i$. Hence, it is enough to take $E'=E^i_t$ for any $i$. (iii) Consider a sequence $(E_n)$ of effective divisors on ${\mathcal A}_g$ whose slopes are strictly decreasing and converging to $s_\Eff(\Perf)$. Up to replacing $E_n$ by the irreducible component of $E_n$ with smallest slope, and up to passing to a subsequence, we can assume that all $E_n$ are irreducible. Since the slopes are strictly decreasing, the $E_n$ are all distinct. Applying (i) to the pair $E_n,E_{n+1}$, we have $s_\Mov(\Perf)\leq s(E_n)$. The conclusion follows, since $s_\Eff(\Perf)\leq s_\Mov(\Perf)\leq s(E_n)\rightarrow s_\Eff(\Perf)$. \end{proof} \section{Some relevant modular forms}\label{sec:relevant} In this section we briefly recall the definitions and the main properties of theta constants, of the Schottky form, and of Andreotti-Mayer divisors.
\subsection{Theta functions and theta constants}\label{sec:theta} For $\varepsilon,\delta\in \{0,1\}^g$ the {\em theta function with characteristic $\chars\varepsilon\delta$} is the function $\tc\varepsilon\delta:{\mathbb{H}}_g\times{\mathbb{C}}^g\rightarrow{\mathbb{C}}$ defined as $$ \tc\varepsilon\delta(\tau,z)\coloneqq\sum\limits_{n\in{\mathbb{Z}}^g} \exp \pi i \left[\left( n+\tfrac{\varepsilon}{2}\right)^t\tau \left(n+\tfrac{\varepsilon}{2}\right)+2\left(n+\tfrac{\varepsilon}{2}\right)^t\left(z+ \tfrac{\delta}{2}\right)\right]. $$ Characteristics $\chars\varepsilon\delta$ are called {\em even} or {\em odd} depending on the parity of the standard scalar product $\langle\varepsilon , \delta\rangle$. This is the same as the parity of $\theta$ as a function of $z\in{\mathbb{C}}^g$ for fixed $\tau\in{\mathbb{H}}_g$, and there are $ 2^{g-1}( 2^{g}+1)$ even characteristics and $2^{g-1}( 2^{g}-1)$ odd ones. The {\em theta constant} is the evaluation of the theta function at $z=0$, which is thus a function $\tc\varepsilon\delta(\tau):{\mathbb{H}}_g\to{\mathbb{C}}$. By the above, all odd theta constants vanish identically, while each even theta constant is a modular form of weight $1/2$ (meaning that a suitable square root of the automorphic factor $\det(C\tau+D)$ is taken) with respect to a certain finite index subgroup of $\op{Sp}(2g,{\mathbb{Z}})$. The product of all even theta constants $$ T_g\coloneqq\prod_{\chars\varepsilon\delta\, \op{even}}\theta\chars\varepsilon\delta $$ turns out to be a modular form for the full symplectic group, for $g\ge 3$, called the {\em theta-null} modular form, and its zero locus is called the theta-null divisor $\tnull$. It has class \begin{equation}\label{eq:classTg} [T_g]=2^{g-2}(2^g+1)\lambda-2^{2g-5}\delta\,,\qquad \hbox{and so }\quad s(T_g)=s(\tnull)=8+2^{3-g}\,.
\end{equation} The case $g=2$ is slightly different since $T_2$ has a character, meaning that it satisfies \[ T_2(\gamma\cdot\tau)=\pm\det(C\tau+D)^5 T_2(\tau) \] for all $\gamma=\left(\begin{smallmatrix} A & B\\ C & D\end{smallmatrix}\right)\in \op{Sp}(4,{\mathbb{Z}})$. Hence $T_2^2$ is a well-defined modular form and we still have $[T_2]=5\lambda-\delta/2$ in $\op{Pic}_{\mathbb{Q}}(\Perf)$. \subsection{The Schottky form}\label{sec:schottky} The {\em Schottky form} is the weight $8$ modular form on ${\mathcal A}_g$ given by the following degree 16 polynomial in theta constants: \[ S_g\coloneqq \frac{1}{2^g}\sum_{\varepsilon,\delta}\theta^{16}\chars\varepsilon\delta - \frac{1}{2^{2g}}\left(\sum_{\varepsilon,\delta}\theta^8\chars\varepsilon\delta\right)^2\,. \] The Schottky form is a modular form for $\op{Sp}(2g,{\mathbb{Z}})$, and arises naturally because it can alternatively be expressed as $S_g=\theta_{D_{16}^+}-\theta_{E_8\oplus E_8}$, the difference of the lattice theta functions associated to the only two even, unimodular lattices in ${\mathbb{R}}^{16}$ (see \cite{igusagen4} or \cite{igusachristoffel}). It is known that $S_g$ vanishes identically on ${\mathcal A}_g$ if and only if $g\leq 3$, and moreover that the zero locus of $S_4$ is the locus of Jacobians ${\mathcal J}_4\subset{\mathcal A}_4$. The form $S_4$ vanishes to first order along $D$, and thus \begin{equation} [S_4]=8\lambda-\delta\,, \quad\text{and}\quad s(S_4)=8\,, \end{equation} while for $g\ge 5$ the form $S_g$ is not a cusp form (and so it has infinite slope). \subsection{Andreotti-Mayer divisor}\label{sec:N0} The {\em Andreotti-Mayer divisor} \cite{anma} is defined to be the locus $N_0$ of ppav whose theta divisor is singular. It is known that $N_0$ is a divisor that, for $g\ge 4$, has precisely two irreducible components: $N_0=\tnull\cup N'_0$ (see \cite{mum},\cite{debarredecomposes}), while for $g=2,3$ the Andreotti-Mayer divisor is simply $N_0=\tnull$.
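In particular, for $g=2,3$ the slope of $N_0=\tnull$ can be read off directly from \eqref{eq:classTg}:
\[
s(T_g)=\frac{2^{g-2}(2^g+1)}{2^{2g-5}}=8+2^{3-g}\,,
\]
which equals $10$ for $g=2$ and $9$ for $g=3$.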
\begin{rem}\label{gen-sing} For a generic point of $\tnull$, the unique singularity of the theta divisor of the corresponding ppav is the double point at the two-torsion point of the ppav corresponding to the characteristic of the vanishing theta constant. It is known that generically this singular point is an ordinary double point (i.e.~that the Hessian matrix of second derivatives of the theta function with respect to $z$ at this point is non-degenerate). For a generic point of $N'_0$, the theta divisor of the corresponding ppav has precisely two opposite singular points, both of which are generically ordinary double points again; see \cite{gsmordertwo} for a detailed study. \end{rem} As we already know, $\tnull$ is the zero locus of the modular form $T_g$ that is the product of all even theta constants, and we know the class of the corresponding divisor by~\eqref{eq:classTg}. The modular form, which we denote $I_g$, defining the effective divisor $N_0'$ is not known explicitly for any $g\ge 5$ (see \cite{krsm}), while the Riemann theta singularity theorem implies that in genus $4$ we have $N'_0={\mathcal J}_4$, and thus $I_4=S_4$. The class of the divisor $N'_0$ was computed by Mumford \cite{mum}: \begin{equation}\label{eq:classAM} [N'_0]=[I_g]=(g!(g+3)/4- 2^{g-3}(2^g +1)) \lambda - ((g+1)!/24 - 2^{2g-6}) \delta\,, \end{equation} \begin{equation*} \text{and so}\quad s(I_g)= 6\cdot\frac{1+2/(g+1)- 2^{g-1}(2^g +1)/(g+1)!}{1 - 3\cdot 2^{2g-3}/(g+1)!}>6\, . \end{equation*} \smallskip Even though an equation for $N'_0$ is not known, a precise description of its tangent space is provided by the following lemma, which is a special case of results proven in \cite{anma} (see also \cite{ACGH}). \begin{lm}\label{AM}\label{lm:AM} Let $Z$ be $\tnull$ or $N'_0$ and call $\widetilde{Z}$ its preimage in ${\mathbb{H}}_g$.
For every general smooth point $\tau_0$ of $\widetilde{Z}$, and every ordinary double point $z_0\in{\mathbb{C}}^g$ of $\theta(\tau_0,\cdot)=0$, the tangent space $T_{\tau_0}\widetilde{Z}$ has equation $d_\tau \theta(\tau_0,z_0)=0$ inside $T_{\tau_0}{\mathbb{H}}_g$. \end{lm} Theta functions satisfy the heat equation \begin{equation}\label{eq:heat} \frac{\partial\theta}{\partial\tau_{jk}}=\frac{1}{2\pi i\,(1+\delta_{jk})}\cdot\frac{\partial^2\theta}{\partial z_j\partial z_k} \, , \end{equation} so that $d_\tau\theta$ is entrywise proportional to $\op{Hess}_z\theta$, where $\op{Hess}_z$ denotes the Hessian, that is the matrix of the second partial derivatives of the theta function with respect to $z_1,\dots,z_g$. Hence, by \Cref{lm:AM}, the differentials $dT_g$ and $dI_g$ are related to the Hessian of the theta function in the $z$-variables. We then have the following \begin{prop}\label{prop:cond} The form $T_g$ for $g\ge 2$ and the form $I_g$ for $g\ge 4$ satisfy Condition \eqref{cond}. \end{prop} The genus restrictions in this statement are simply to ensure that the forms are well-defined and not identically zero. \begin{proof} If $\tau_0$ is a generic smooth point of $\tnull$, then the theta divisor $\Theta_{\tau_0}\subset X_{\tau_0}={\mathbb{C}}^g/({\mathbb{Z}}^g\oplus\tau_0{\mathbb{Z}}^g)$ is singular at a unique $2$-torsion point, and such a singularity is ordinary if and only if $\det(dT_g)\neq 0$ at $\tau_0$ by \eqref{eq:heat} and \Cref{lm:AM}. Similarly, if $\tau_0$ is a generic point of $N'_0$, then the singular locus of $\Theta_{\tau_0}$ consists of two opposite non-$2$-torsion singular points $\pm z_0$; moreover, $\pm z_0$ are ordinary double points of $\Theta_{\tau_0}$ if and only if $\det(dI_g)\neq 0$ at $\tau_0$ by \eqref{eq:heat} and \Cref{lm:AM}. The conclusion follows from \Cref{gen-sing}. \end{proof} \section{Rankin-Cohen bracket}\label{sec:RC} Our method to bound the moving slope of ${\mathcal A}_g$ from above is by constructing new Siegel modular forms starting from a given known modular form.
For example, starting from the known Siegel modular form minimizing the slope of the effective cone, we will try to construct another Siegel modular form, with which it has no common factor, and which has a slightly higher slope. In this section we do this using the Rankin-Cohen bracket (of two different modular forms), which will allow us to prove the main application \Cref{cor:main}, but only under the assumption that the moving slope is achieved. While our construction of the differential operators $\mathfrak{D}_{g,a}$ in \Cref{thm:main} yields a stronger result, we now give the details of the geometrically motivated construction using the Rankin-Cohen brackets. These were defined in \cite{rankin} and \cite{cohen} for $g=1$ (see also \cite{zagier}); a vector-valued version appears in \cite{satoh} and a scalar-valued version appears in \cite{yangyin}. For further use, we define the symmetric $g\times g$ matrix-valued holomorphic differential operator acting on functions on~${\mathbb{H}}_g$ \begin{equation}\label{eq:padefined} \partial_{\tau}\coloneqq \left(\frac{1+\delta_{ij}}{2}\frac{\partial}{\partial \tau_{ij}}\right)_{1\leq i,j\leq g}\,. \end{equation} When no confusion is possible, we will sometimes denote this differential operator simply by~$\partial$. \subsection{Vector-valued bracket} Let $F$ and $G$ be genus $g$ Siegel modular forms of weights $k$ and $h$ respectively. \begin{df}[\cite{satoh}] The {\em vector-valued Rankin-Cohen bracket} of $F$ and $G$ is \[ \{F,G\}\coloneqq \frac{G^{k+1}}{F^{h-1}} \cdot d\left(\frac{F^h}{G^k}\right), \] where $d=d_\tau$ is the differential of a function of $\tau\in{\mathbb{H}}_g$. \end{df} \begin{lm}\label{lemma:bracket} The vector-valued bracket \[ \{F,G\}=-\{G,F\}=hG\,dF-kF\,dG \] is an ${\mathcal L}^{\otimes(h+k)}$-valued holomorphic $(1,0)$-form on ${\mathcal A}_g$. Moreover $\{F,G\}\not\equiv 0$ unless $F^h$ and $G^k$ are constant multiples of each other.
\end{lm} \begin{proof} Since $F^h/G^k$ is a meromorphic function on ${\mathbb{H}}_g$, its differential is a meromorphic $(1,0)$-form. Moreover, $G^{k+1}/F^{h-1}$ is a meromorphic Siegel modular form of weight $h+k$ (i.e.~it is a meromorphic function on ${\mathbb{H}}_g$ that satisfies the transformation property). It is immediate to check that $\{F,G\}=hG\,dF-kF\,dG$, which shows that $\{F,G\}$ is a holomorphic Siegel-modular-form-valued $(1,0)$-form. Since $F$ and $G$ are non-zero, the bracket vanishes identically if and only if $d(F^h/G^k)$ is identically zero, which is equivalent to this ratio being a constant. \end{proof} Another way to state \Cref{lemma:bracket} is that, writing $\{F,G\}$ as a $g\times g$ matrix, this matrix satisfies the transformation law \[ \{F,G\}( \gamma\cdot\tau)= \det(C\tau+D)^{k+h} (C\tau+D)^t \cdot\{F,G\}(\tau)\cdot (C\tau+D) \] for any $\gamma=\left(\begin{smallmatrix}A & B \\ C & D \end{smallmatrix}\right)$ in $\op{Sp}(2g,{\mathbb{Z}})$. \subsection{Scalar-valued bracket} Let ${\mathbb{E}}\to{\mathcal A}_g$ denote the holomorphic rank~$g$ Hodge bundle of $(1,0)$-holomorphic forms on ppav, namely ${\mathbb{E}}=\pi_*\Omega^{1,0}_\pi$ (where we recall that $\pi:{\mathcal X}_g\to{\mathcal A}_g$ denotes the universal family of ppav). Recall that the cotangent bundle $T^*{\mathcal A}_g$ can be identified with $\op{Sym}^2{\mathbb{E}}\subset \op{Hom}({\mathbb{E}}^\vee,{\mathbb{E}})$. Since $$ \det:\op{Hom}({\mathbb{E}}^\vee,{\mathbb{E}})\to (\det{\mathbb{E}})^{\otimes 2}\subset \Lambda^g(\op{Sym}^2{\mathbb{E}})\cong\Omega^{g,0}{\mathcal A}_g $$ and $\det{\mathbb{E}}\cong{\mathcal L}$, it follows that $\det$ restricts to a map $\det:T^*{\mathcal A}_g\to {\mathcal L}^{\otimes 2}$, which is homogeneous of degree $g$. If $f$ is a meromorphic function defined on ${\mathcal A}_g$, then $\det(df)$ is a meromorphic section of ${\mathcal L}^{\otimes 2}$.
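Although we assume $g\geq 2$, it may be instructive to record what these constructions give in the classical case $g=1$: there
\[
\{F,G\}=\big(hG\,F'-kF\,G'\big)\,d\tau\,,
\]
which is, up to sign, the first Rankin-Cohen bracket of $F$ and $G$, a cusp form of weight $k+h+2$; since the matrix is $1\times 1$, the scalar bracket defined below coincides with it.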
\begin{df} The {\em scalar Rankin-Cohen bracket} of Siegel modular forms $F,G$ is defined as \[ [F,G]\coloneqq\det\{F,G\}\, . \] \end{df} The scalar Rankin-Cohen bracket seems not to have been systematically studied in the literature. Here we collect some of its basic properties. \begin{lm}\label{lm:scalar-bracket} Let $F,G$ be Siegel modular forms, of classes $$ [F]=k\lambda-x\delta;\qquad [G]=h\lambda-y\delta. $$ Then $[F,G]$ is a Siegel modular form of class $$\Big[[F,G]\Big]=\big(g(k+h)+2\big)\lambda-\beta\delta\,,$$ where \begin{itemize} \item[(i)] $\beta> 0$ (i.e.~$[F,G]$ is a cusp form, even if $F$ and $G$ are not); \item[(ii)] $\beta\ge g(x+y)$; \item[(iii)] for any integer $n>0$, $[F,F^n]=0$; \item[(iv)] if $H$ is another modular form, then $[H^2 F,G]$ and $[HF,HG]$ are divisible by $H^g$; \item[(v)] if $F,G$ do not have any common factors, and $F$ satisfies Condition \eqref{cond}, then~$F$ and $[F,G]$ do not have any common factors. \end{itemize} \end{lm} \begin{proof} (i) Recall that $\{F,G\}=(G^{k+1}/F^{h-1})\cdot d\left(F^h/G^k\right)$ and so $\det\{F,G\}=G^{g(k+1)}/F^{g(h-1)}\det(d\left(F^h/G^k\right))$. It follows that $\det\{F,G\}$ is a modular form of weight $gh(k+1)-gk(h-1)+2=g(h+k)+2$ and, from the local expression of $\{F,G\}$, it follows that $[F,G]$ is holomorphic. Consider then the Fourier-Jacobi expansions \[ F(\tau)=F_0(\tau')+\sum_{r>0}F_r(\tau',z)q^r, \quad G(\tau)=G_0(\tau')+\sum_{r>0}G_r(\tau',z)q^r, \] at $\tau=\left(\begin{smallmatrix}\tau' & z \\ z^t & w \end{smallmatrix}\right)$; recall that the index-$0$ Fourier-Jacobi coefficients $F_0,G_0$ are independent of $z$ and $w$. We have \[ dF=\left( \begin{array}{cc} d_{\tau'}F & d_z F\\ (d_z F)^t & d_w F \end{array} \right),\quad dG=\left( \begin{array}{cc} d_{\tau'}G & d_z G\\ (d_z G)^t & d_w G \end{array} \right). \] Recall that $q=\exp(2\pi iw)$, so that $\partial (q^r)/\partial w=2\pi i r\, q^r$. It is immediate to check that the last columns of $dF$ and $dG$ are divisible by $q$. It follows that $[F,G]$ is divisible by $q$, and so is a cusp form.
(ii) Writing $dF$ and $dG$ as above, it is immediate that $\op{ord}_D dF=\op{ord}_D F$ and $\op{ord}_D dG=\op{ord}_D G$. Hence $\op{ord}_D \{F,G\}=\op{ord}_D F+\op{ord}_D G$, and the conclusion follows. (iii) By direct computation $\{F,F^n\}=(nk) F^n dF-kF (n F^{n-1})dF=0$\,. (iv) Let $\ell$ be the weight of $H$; we compute directly $$ \begin{aligned} \{H^2 F,G\}&=hG(H^2 \,dF+2HF\,dH)-(2\ell+k)H^2 F \,dG\\ &= H(hHG\, dF+2hFG\,dH-(2\ell+k)HF \,dG) \end{aligned} $$ and $$ \begin{aligned} \{HF,HG\}&=(\ell+h)HG(H\,dF+F\,dH)-(\ell+k)HF(H\,dG+G\,dH)\\ &= H(H\{F,G\}+\ell H(G\,dF-F\,dG)+(h-k)FG\,dH)\,. \end{aligned} $$ (v) Note first that Condition \eqref{cond} implies that $F$ is square-free. Evaluating $[F,G]$ along the zero divisor of $F$, we obtain \begin{equation}\label{eq:FGwhereF=0} [F,G]|_{F=0}=h^gG^g\det(\partial F)\,. \end{equation} Since $F$ and $G$ do not have common factors, $[F,G]$ is identically zero along a component of $\{F=0\}$ if and only if $\det(\partial F)$ is. \end{proof} \begin{rem} It is possible that the strict inequality $\beta>g(x+y)$ holds in (ii) above: for example, (i) implies that $\beta\geq 1$ for $x=y=0$. \end{rem} \begin{rem}\label{rem:intrinsic} Statement (v) above is one instance where we see the key importance of Condition~\eqref{cond}, and of $\det(\partial F)$. A more intrinsic description of the function $\det(\partial F)$ is as follows. If $F$ is a modular form of weight $k$, its differential is not well-defined on ${\mathcal A}_g$, but the restriction of $dF$ to the zero divisor $E=\{F=0\}$ of $F$ is. Thus $dF|_E$ is a section of ${\mathcal L}^{\otimes k}\otimes \op{Sym}^2{\mathbb{E}}|_E$, and $\det(dF)$ is a section of ${\mathcal L}^{\otimes (kg+2)}|_E$. In other words, the restriction of $\det(\partial F)$ to the zero locus of $F$ behaves as a modular form of weight $gk+2$, as mentioned in the introduction.
\end{rem} \subsection{The bracket and the moving slope}\label{sec:bra-mov} In this section we apply the scalar Rankin-Cohen bracket to two modular forms of low slope in order to produce another modular form of low slope. This will allow us to prove the following weaker version of \Cref{cor:main} --- it is weaker only in that it assumes that the moving slope is achieved, i.e.~it is a minimum rather than an infimum. \begin{prop}\label{prop11} Assume that the effective slope $s_\Eff(\Perf)=a/b$ is realized by a Siegel modular form $F$ of class $a\lambda-b\delta$ that satisfies Condition \eqref{cond}. Suppose moreover that the moving slope $s_\Mov(\Perf)=a'/b'$ is achieved by a Siegel modular form $G$ of class $a'\lambda-b'\delta$. Then \[ s_\Mov(\Perf)\leq s_\Eff(\Perf)+\frac{2}{bg}\, . \] \end{prop} \begin{proof} If $F$ is a product of at least two distinct prime factors, then each of them realizes the effective slope. Hence $s_\Mov(\Perf)=s_\Eff(\Perf)$ by \Cref{lm:properties}(i), and so the conclusion trivially holds. Hence we can assume that $F$ is a prime Siegel modular form. Up to replacing $G$ by a general element in its linear system, we can assume that $F$ does not divide $G$. By \Cref{lm:scalar-bracket}(v), the form $[F,G]$ is not divisible by $F$, and so in particular $[F,G]$ does not identically vanish. It follows from \Cref{lm:scalar-bracket}(ii) that \[ \frac{a'}{b'} = s_\Mov(\Perf)\leq s([F,G])\leq \frac{g(a+a')+2}{g(b+b')}\,, \] which can be rewritten as \[ s_\Mov(\Perf)=\frac{a'}{b'}\leq \frac{a}{b}+\frac{2}{bg}=s_\Eff(\Perf)+\frac{2}{bg}\,. \] \end{proof} Both the scalar Rankin-Cohen bracket and $\mathfrak{D}_{g,a}$ (which will be introduced in \Cref{sec:diffop}) are holomorphic differential operators of degree $g$, but their relationship is not clear and deserves further investigation.
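The numerology of \Cref{lm:scalar-bracket} and \Cref{prop11} can be checked mechanically. The following Python sketch (an illustration with exact rational arithmetic; the helper names are ours and not part of the paper's notation) encodes the class bookkeeping of the bracket and evaluates it on two cases treated in the next section:

```python
from fractions import Fraction

def bracket_class(g, F, G):
    """Weight and minimal boundary vanishing order of the scalar Rankin-Cohen
    bracket [F, G]: for [F] = k*lambda - x*delta and [G] = h*lambda - y*delta,
    the weight is g*(k+h) + 2 and the vanishing order is at least g*(x+y)."""
    (k, x), (h, y) = F, G
    return g * (k + h) + 2, g * (x + y)

def slope(weight, order):
    return Fraction(weight) / Fraction(order)

# Proposition: if s_eff = a/b is realized by F and s_mov is achieved by G,
# then s_mov <= s_eff + 2/(b*g).  For g = 2, with F = T_2 of class
# 5*lambda - delta/2 and G = chi_12 of class 12*lambda - delta, the bound
# 10 + 2/((1/2)*2) = 12 is attained by [T_2, chi_12], of class 36*lambda - 3*delta.
w, o = bracket_class(2, (5, Fraction(1, 2)), (12, 1))
assert (w, o) == (36, 3) and slope(w, o) == 12
```

Note that the second output is only a lower bound for $\beta$; the exact vanishing order requires the slope arguments carried out case by case below.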
\section{Effective and moving slopes for small $g$}\label{sec:lowgenus} In this section we recall what is known about the effective and moving slopes of ${\mathcal A}_g$ for $2\leq g\leq 5$. In all these cases the effective slopes are achieved, and we analyze what we obtain by applying \Cref{thm:main} (whose proof is postponed till \Cref{sec:diffop}) to such effective divisors of minimal slope, and we prove Corollaries \ref{maincor:mov}, \ref{cor:g=5}, \ref{cor:genus6} and \ref{cor:M4}. \subsection{Case $g=2$} In genus $2$ the unique effective divisor of minimal slope is the closure of the locus ${\mathcal A}_2^{\dec}$ of decomposable abelian varieties inside $\Perf[2]$. Set-theoretically, this locus is simply equal to the theta-null divisor $\tnull$. We thus obtain \[ s_\Eff(\Perf[2])=s({\mathcal A}_2^{\dec})= s(\tnull)=s(5\lambda-\delta/2)=10\,. \] \begin{rem} Note that the class $[T_2]=\frac{1}{2}(10\lambda-\delta)$ in $\op{Pic}_{\mathbb{Q}}(\Perf)$ is not integral, though its double is. From the stacky point of view, this is a manifestation of the fact that ${\mathcal A}_2^{\dec}\cong ({\mathcal A}_1\times{\mathcal A}_1)/S_2$ and so the general element of ${\mathcal A}_2^{\dec}$ has an automorphism group $\{\pm 1\}\times\{\pm 1\}$, of order $4$, whereas the general genus $2$ ppav has automorphism group $\{\pm 1\}$, of order $2$. \end{rem} As mentioned in the introduction, \Cref{thm:main} can be applied to $T_2$, even though $T_2$ is a modular form with character. Since $T_2$ satisfies Condition \eqref{cond} by \Cref{prop:cond}, we obtain a cusp form $\mathfrak{D}_{2,5}(T_2)$ of weight $12$ that is not identically zero on $\tnull$. As in \Cref{cor:main}, it follows that $$ s_\Mov(\Perf[2])\leq s\left(\mathfrak{D}_{2,5}(T_2)\right)=12\,.
$$ \begin{proof}[Proof of \Cref{maincor:mov} for $g=2$] It is known \cite{frei} that the ideal of cusp forms inside the ring of genus $2$ Siegel modular forms is generated by two modular forms $\chi_{10}\coloneqq T_2^2$ and $\chi_{12}$, which has class $[\chi_{12}]=12\lambda-\delta$. Since there are no nonzero Siegel modular forms of weight $2$ in genus $2$, every cusp form of weight $12$ is a multiple of $\chi_{12}$; it then follows that $\mathfrak{D}_{2,5}(T_2)$ and $\chi_{12}$ are proportional, and so $\mathfrak{D}_{2,5}(T_2)$ realizes the moving slope. \end{proof} Since $T_2$ satisfies Condition \eqref{cond} by \Cref{prop:cond}, and since $T_2$ and $\chi_{12}$ are square-free and without common factors, \Cref{lm:scalar-bracket}(v) ensures that the cusp form $[T_2,\chi_{12}]$ does not vanish identically along $\tnull$. By \Cref{lm:scalar-bracket} it follows that \[ [[T_2,\chi_{12}]]=36\lambda-3\delta\,, \] (a boundary vanishing order greater than $3$ would yield a slope smaller than $s_\Eff(\Perf[2])=10$), and so $[T_2,\chi_{12}]$ is another Siegel modular form that achieves the moving slope. \subsection{Jacobian and hyperelliptic loci}\label{sec:jac} As mentioned in the introduction, it is possible to define a slope for effective divisors in the moduli space ${\mathcal M}_g$ of smooth projective curves of genus $g$. We denote by \[ \tau_g : {\mathcal M}_g\longrightarrow{\mathcal A}_g \] the Torelli map that sends a smooth projective curve of genus $g$ to its Jacobian. The Torelli map is known to extend to a morphism $\overline{\tau}_g:\overline{{\mathcal M}}_g\to\Perf$ from the Deligne-Mumford compactification~$\overline{{\mathcal M}}_g$ of~${\mathcal M}_g$ to $\Perf$ \cite{AlBr}, but for our purposes, it will suffice to work with the well-known partial extension \[ \tau'_g:{\mathcal M}'_g\longrightarrow\Part \] from the moduli space ${\mathcal M}'_g$ of irreducible stable curves of genus $g$ with at most one node. The partial compactification ${\mathcal M}'_g$ is the union of ${\mathcal M}_g$ and the boundary divisor $\Delta'=\partial{\mathcal M}'_g$ consisting of singular curves with only one node, which is non-separating.
It is well-known that $\op{Pic}_{\mathbb{Q}}({\mathcal M}'_g)={\mathbb{Q}}\lambda_1\oplus{\mathbb{Q}}\delta'$ for $g\geq 3$, and that the map induced by $\tau'_g$ on Picard groups is \[ (\tau_g')^*\lambda=\lambda_1,\quad (\tau_g')^*\delta=\delta'\, . \] The slope for a divisor $a\lambda_1-b\delta'$ on ${\mathcal M}'_g$ is defined to be $a/b$, and the slopes of cones of divisors on ${\mathcal M}'_g$ are defined analogously to ${\mathcal A}_g'$. \begin{rem}\label{rem:FP} The standard definition of slope for an effective divisor in $\overline{{\mathcal M}}_g$ involves the vanishing order at all the boundary divisors of $\overline{{\mathcal M}}_g$. It follows from \cite{FP} that, if we limit ourselves to divisors of slopes less than $29/3$ for $g\leq 5$, then the two definitions are equivalent. \end{rem} As a consequence of the above discussion, we have obtained the following \begin{lm}\label{lm:restrict-Mg} Let $g\geq 4$ and let $E$ be an (internal) effective divisor on ${\mathcal A}_g$. \begin{itemize} \item[(i)] If $E$ does not contain the Jacobian locus ${\mathcal J}_g$, then $(\tau'_g)^{-1}(E)$ is an effective divisor in ${\mathcal M}'_g$ of slope $s(E)$; \item[(ii)] If $s(E)<s_\Eff({\mathcal M}_g)$, then $E$ contains the Jacobian locus ${\mathcal J}_g$. \end{itemize} \end{lm} \medskip Intersecting an effective divisor with the locus ${\mathcal H}_g\subset{\mathcal M}_g$ of hyperelliptic Jacobians can also provide a constraint for the slope. The closure ${\mathcal H}'_g$ of ${\mathcal H}_g$ inside ${\mathcal M}'_g$ is obtained by adding the locus $\partial{\mathcal H}'_g$ consisting of curves with one non-disconnecting node, obtained from smooth hyperelliptic curves of genus $g-1$ by identification of two points that are exchanged by the hyperelliptic involution, cf.~\cite{ch}.
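The slope comparison behind \Cref{lm:restrict-Mg} can be sketched in a few lines (a plain Python illustration; the function names are ours). Since $(\tau'_g)^*\lambda=\lambda_1$ and $(\tau'_g)^*\delta=\delta'$, a divisor class $a\lambda-b\delta$ pulls back with unchanged coefficients, hence unchanged slope $a/b$; contrapositively, slope below $s_\Eff({\mathcal M}_g)$ forces the divisor to contain the Jacobian locus:

```python
from fractions import Fraction

def slope(a, b):
    """Slope a/b of a divisor class a*lambda - b*delta (exact arithmetic)."""
    return Fraction(a, b)

def must_contain_jacobians(a, b, s_eff_Mg):
    """True when the hypothesis of Lemma (restrict-Mg)(ii) holds, i.e. the
    slope a/b is strictly below s_eff(M_g), so that the divisor of class
    a*lambda - b*delta must contain the Jacobian locus J_g."""
    return slope(a, b) < s_eff_Mg

# With s_eff(M_4) = 17/2 (recalled in the case g = 4 below), the Schottky
# form I_4, of class 8*lambda - delta and slope 8 < 17/2, must vanish on J_4.
assert must_contain_jacobians(8, 1, Fraction(17, 2))
```

The same helper reproduces the slopes quoted later, e.g.\ $s(108\lambda-14\delta)=54/7$ for $I_5$.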
We still denote by $\lambda_1$ the restriction of $\lambda_1$ from $\overline{{\mathcal M}}_g$ to ${\mathcal H}'_g$, and denote by $\xi_0$ the class of $\partial{\mathcal H}'_g$ (which is also the restriction of $\delta_0$ from $\overline{{\mathcal M}}_g$). It is known that $\op{Pic}_{\mathbb{Q}}({\mathcal H}'_g)$ has dimension $1$ and is generated by $\lambda_1,\xi_0$ with the relation $(8g+4)\lambda_1 =g\xi_0$ (see \cite[Proposition 4.7]{ch}). The map $\overline{\tau}_g$ restricts to ${\mathcal H}'_g\to\Part$ and sends $\partial{\mathcal H}'_g$ to the boundary of $\Part$. The following result was proven by Weissauer \cite{we86} (see \cite{sm} for details). Here we present a different argument. \begin{prop}\label{prop:restrict-hyp} For every $g\geq 3$, the zero divisor of any modular form of slope strictly less than $8+\tfrac{4}{g}$ must contain ${\mathcal H}_g$. \end{prop} \begin{proof} Let $F$ be a modular form on ${\mathcal A}_g$ with class $[F]=a\lambda-b\delta$, and suppose that $F$ does not vanish on the entire ${\mathcal H}_g$. We want to show that $s(F)=\tfrac{a}{b}\geq 8+\tfrac{4}{g}$. The pullback of $F$ to ${\mathcal H}'_g$ vanishes on an effective divisor $V$ of class $[V]=a\lambda_1-\beta\xi_0$ with $\beta\geq b$. Using the relation in $\op{Pic}_{\mathbb{Q}}({\mathcal H}'_g)$, we obtain $[V]=\beta\left(\tfrac{a}{\beta}-\left(8+\tfrac{4}{g}\right)\right)\lambda_1$. Consider now the double cover $C_t$ of ${\mathbb{P}}^1$ branched at $p_1,\dots,p_{2g+1},t$, and fix distinct $p_1,\dots,p_{2g+1},t_0$ such that $C_{t_0}\notin V$. Then $(C_t)_{t\in {\mathbb{P}}^1}$ induces a map ${\mathbb{P}}^1\to{\mathcal H}'_g$, whose image is a complete, irreducible curve $B\subset{\mathcal H}'_g$ not contained in $V\cup\partial{\mathcal H}'_g$. It follows that $\deg_B(V)\geq 0$ and $\deg_B(\lambda_1)>0$, and so $\tfrac{a}{\beta}-\left(8+\tfrac{4}{g}\right)\geq 0$. The conclusion follows, since $s(F)\geq \tfrac{a}{\beta}$.
\end{proof} Another way to prove \Cref{prop:restrict-hyp} is to use the explicit modular form of slope $8+\tfrac{4}{g}$, whose zero locus avoids ${\mathcal H}_g$, constructed in \cite{sm}. By \Cref{rem:FP}, for $g=3,4,5$ the content of \Cref{prop:restrict-hyp} can also be recovered as a consequence of Corollary 3.27 in \cite{hm}. \subsection{Case $g=3$} The moduli space ${\mathcal A}_3$ has a natural effective divisor, namely (the closure of) the hyperelliptic locus ${\mathcal H}_3$. \begin{proof}[Proof of \Cref{maincor:mov} for $g=3$] By \Cref{prop:restrict-hyp} a divisor in ${\mathcal A}_3$ with slope smaller than $\tfrac{28}{3}$ must contain ${\mathcal H}_3$. This implies that the only effective divisor that could be of slope under $\tfrac{28}{3}$ is (the closure of) the hyperelliptic locus itself, and so $s_\Mov(\Perf[3])\ge\tfrac{28}{3}$. Since the closure of ${\mathcal H}_3$ coincides with the theta-null divisor, we obtain from \eqref{eq:classTg} \[ s({\mathcal H}_3)=s(T_3)=s(18\lambda-2\delta)=9<\tfrac{28}{3}\,. \] It follows that $$ s_\Eff(\Perf[3])=s({\mathcal H}_3)=9 \quad\text{and}\quad s_\Mov(\Perf[3])\geq \tfrac{28}{3}\,. $$ Since $T_3$ satisfies Condition \eqref{cond} by \Cref{prop:cond}, \Cref{thm:main} provides a modular form $\mathfrak{D}_{3,18}(T_3)$ of class $56\lambda-\beta\delta$ with $\beta\geq 6$. If $\beta\ge 7$, then the slope of $\mathfrak{D}_{3,18}(T_3)$ would be $s(\mathfrak{D}_{3,18}(T_3))\leq 56/7=8$, which is less than $9=s_\Eff(\Perf[3])$, a contradiction. Thus $\beta=6$, and $$ s_\Mov(\Perf[3])\leq s(\mathfrak{D}_{3,18}(T_3))=\tfrac{56}{6}=\tfrac{28}{3}\,. $$ This proves that the moving slope is equal to $s_\Mov(\Perf[3])=\tfrac{28}{3}$, and is realized by $\mathfrak{D}_{3,18}(T_3)$.
\end{proof} There are also other constructions of Siegel modular forms in ${\mathcal A}_3$ of slope $\tfrac{28}{3}$: \begin{itemize} \item Consider the octuplets $M$ of even characteristics that are cosets of some three-dimensional vector space of characteristics, and define $$ \chi_{28}\coloneqq\sum_{\text{$M$ even octuplet coset}}\left(\frac{T_3}{\prod_{m\in M}\theta_m}\right)^2\,, $$ where the sum ranges over all such octuplets $M$. This can be checked to be a modular form of class $[\chi_{28}]=28\lambda-3\delta$, see \cite{Tsu}. We verify that $\chi_{28}$ cannot be divisible by $T_3$, as otherwise $\chi_{28}/T_3$ would be a holomorphic cusp form of weight $10$, which does not exist by \cite{Tsu,LR}. \item Alternatively, one can consider the form $$\chi_{140}\coloneqq\sum_{\text{$m$ even}} \left(\frac{T_3}{\theta_m}\right)^8\,,$$ which can be shown to have class $[\chi_{140}]=140\lambda-15\delta$. We remark that the decomposable locus ${\mathcal A}^{\dec}_3$ can be described by the equations $T_3=\chi_{140}=0$, since ${\mathcal A}^{\dec}_3$ is simply the locus where at least two theta constants vanish. Since ${\mathcal A}_3^{\dec}$ has codimension $2$ within ${\mathcal A}_3$, this confirms that the forms $T_3$ and $\chi_{140}$ cannot have a common factor. \item Since $T_3$ satisfies Condition \eqref{cond} by \Cref{prop:cond}, and since $T_3$ and $\chi_{28}$ are square-free and without common factors, \Cref{lm:scalar-bracket}(v) ensures that the Rankin-Cohen bracket $[T_3,\chi_{28}]$ does not vanish identically along $\tnull$. By \Cref{lm:scalar-bracket}, $[T_3,\chi_{28}]$ has weight $140$, and vanishes to order $\beta\geq 15$ along the boundary. However, if it were to vanish to order $16$ or higher, then its slope would be at most $\tfrac{140}{16}=8.75$, which is impossible since $s_\Eff(\Perf[3])=9$. Thus we must have $\beta=15$, so that \[ [[T_3,\chi_{28}]]=140\lambda-15\delta \] is a Siegel modular form that also realizes $s_\Mov(\Perf[3])=\tfrac{140}{15}=\tfrac{28}{3}$.
\end{itemize} \subsection{Case $g=4$} The locus of Jacobians ${\mathcal J}_4$ is a divisor in ${\mathcal A}_4$, which is known to be the unique effective divisor on $\Perf[4]$ of minimal slope, see \cite{smmodfour}. It is known that the effective slope of ${\mathcal M}_4$ is $s_\Eff({\mathcal M}_4)=\tfrac{17}{2}$ (see \cite{gie} and \cite{FP}). By Riemann's theta singularity theory, theta divisors of Jacobians are singular, and in fact ${\mathcal J}_4=N_0'$. Since $I_4=S_4$ and since $[I_4]=8\lambda-\delta$ as recalled in \Cref{sec:N0}, this reconfirms the equality $$ s_\Eff(\Perf[4])=s(I_4)=s(8\lambda-\delta)=8\,. $$ As $I_4$ satisfies Condition \eqref{cond} by \Cref{prop:cond}, \Cref{thm:main} applied to $I_4$ produces a modular form $\mathfrak{D}_{4,8}(I_4)$ not divisible by $I_4$, of class $[\mathfrak{D}_{4,8}(I_4)]=34\lambda-\beta\delta$, with $\beta\ge 4$. Again, if $\beta$ were at least $5$, the slope would be at most $34/5<8$, contradicting the effective slope, and thus we must have $\beta=4$. \begin{proof}[Proof of \Cref{maincor:mov} for $g=4$] From the above discussion, it follows that $$ s_\Mov(\Perf[4])\leq s(\mathfrak{D}_{4,8}(I_4))=s(34\lambda-4\delta)=\tfrac{17}{2}\,. $$ On the other hand, \Cref{lm:restrict-Mg} implies that any effective divisor in ${\mathcal A}_4$ that does not contain the locus of Jacobians has slope at least $s_\Eff({\mathcal M}_4)=\tfrac{17}{2}$. It follows that $s_\Mov(\Perf[4])\geq \tfrac{17}{2}$. We thus conclude that $s_\Mov(\Perf[4])=\tfrac{17}{2}=s(\mathfrak{D}_{4,8}(I_4))$. \end{proof} There are at least two other modular forms in ${\mathcal A}_4$ that realize the moving slope. \begin{itemize} \item The first one is $T_4$, whose class is $[T_4]=68\lambda-8\delta$ by~\eqref{eq:classTg}. \item The second one is the Rankin-Cohen bracket $[I_4,T_4]$.
Since $I_4$ satisfies Condition \eqref{cond} by \Cref{prop:cond}, and since $I_4$ and $T_4$ are square-free and without common factors, \Cref{lm:scalar-bracket}(v) ensures that $[I_4,T_4]$ does not vanish identically along $N'_0={\mathcal J}_4$, and has class $306\lambda-\beta\delta$, with $\beta\geq 36$. As the zero locus of $[I_4,T_4]$ does not contain ${\mathcal J}_4$, \Cref{lm:restrict-Mg}(ii) forces $s([I_4,T_4])=\tfrac{306}{\beta}\geq\tfrac{17}{2}$, i.e.~$\beta\leq 36$; hence $\beta=36$ and $$ s_\Mov(\Perf[4])=\tfrac{17}{2}=\tfrac{306}{36}=s([I_4,T_4])\,. $$ \end{itemize} As mentioned in the introduction, the pullback $\tau_g^*T_g$ via the Torelli map gives $2\Theta_{\nl}$ on ${\mathcal M}_g$, i.e.~$\sqrt{T_g}$ is not a modular form, but its restriction to ${\mathcal J}_g$ is a Teichm\"uller modular form. For $g=4$ we exhibit a Siegel modular form that intersects ${\mathcal J}_4$ in the divisor $\Theta_{\nl}$, with multiplicity $1$. \begin{proof}[Proof of \Cref{cor:M4}] Recall that $S_4=I_4$. By \Cref{maincor:mov} for $g=4$ proven above, $\mathfrak{D}_{4,8}(S_4)$ realizes the moving slope of ${\mathcal A}_4$, and so it does not contain the divisor of minimal slope, namely the Schottky divisor. Thus $\tau_4^*\mathfrak{D}_{4,8}(S_4)$ is an effective divisor on ${\mathcal M}'_4$ of class $34\lambda_1-4\delta'$, and hence it realizes the effective slope $s_\Eff(\overline{\mathcal M}_4)=\tfrac{17}{2}$. Thus the pullbacks $\tau_4^*T_4$ and $\tau_4^*\mathfrak{D}_{4,8}(S_4)$ must have the same support. Since $[T_4]=2[\mathfrak{D}_{4,8}(S_4)]$, we conclude that $\tau_4^*\mathfrak{D}_{4,8}(S_4)=\Theta_{\nl}$. \end{proof} \begin{rem} For the sake of completeness, we recall that the moving slope of ${\mathcal M}_4$ is $s_\Mov({\mathcal M}_4)=60/7$, see \cite{Fed}. We can exhibit a modular form (analogous to $\chi_{140}$ for $g=3$) with this slope, namely $$ \phi_{540}\coloneqq \sum_{\text{$m$ even}} \left(\frac{T_4}{\theta_m}\right)^8\,.
$$ The Siegel modular form $\phi_{540}$ has class $540\lambda-63\delta$, see \cite{igusagen4}, and hence $\tau_4^*\phi_{540}$ gives an effective divisor on ${\mathcal M}'_4$ that realizes the moving slope $s_\Mov({\mathcal M}'_4)$. Finally, we observe that both $T_4$ and $\phi_{540}$ have slope less than $9$, and the equations $T_4=\phi_{540}=0$ define, set theoretically, the hyperelliptic locus ${\mathcal H}_4\subset{\mathcal M}_4$, as discussed in \cite{Fp}. \end{rem} \subsection{Case $g=5$} We recall that one of the main results of \cite{fgsmv} was the proof that the divisor $N'_0$ in $\Perf[5]$ has minimal slope: $$ s_\Eff(\Perf[5])=s(I_5)=s(108\lambda-14\delta)=\tfrac{54}{7}=7.714\dots $$ Since $I_5$ satisfies Condition \eqref{cond} by \Cref{prop:cond}, by \Cref{thm:main} \[ [\mathfrak{D}_{5,108}(I_5)]=542\lambda-\beta\delta \quad\text{with}\quad \beta\geq 70 \] is a modular form that does not vanish identically on $N'_0$. \begin{proof}[Proof of \Cref{cor:main}] If $\beta\ge 71$, then the slope of $\mathfrak{D}_{5,108}(I_5)$ would be at most $$ 542/71=7.633\dots<7.714\dots=54/7=s_\Eff(\Perf[5])\,, $$ which is a contradiction. Thus $\beta=70$, and \[ s_\Mov(\Perf[5])\leq s(\mathfrak{D}_{5,108}(I_5))=\tfrac{271}{35}= 7.742\dots \] \end{proof} \subsection{Case $g=6$}\label{sec:genus6} For genus $6$, the effective slope is bounded from below by $s_\Eff(\Perf[6])\geq \frac{53}{10}$, see \cite{FV}. Moreover, an interesting Siegel modular form $\theta_{L,h,2}$ of class $14\lambda-2\delta$ was constructed in \cite{mss}, showing that $s_\Eff(\Perf[6])\leq 7$ and that the Kodaira dimension of ${\mathcal A}_6$ is non-negative. \begin{proof}[Proof of \Cref{cor:genus6}] In light of the classification of modular forms in low genus and weight in \cite{CT} and \cite{CTweb}, in genus $6$ there are no cusp forms in weight $7,8,9,11,13$. Now, in genus $6$ there are no Siegel modular forms of weight $2$ and we have seen above that $s_\Eff(\Perf[6])\geq \frac{53}{10}$.
Hence, the unique (up to multiple) cusp form of weight $10$ vanishes with multiplicity one along $D$ (and so does a possible cusp form in weight $6$). As $\op{ord}_D\theta_{L,h,2}=2$, the form $\theta_{L,h,2}$ must thus be prime. As for the second claim, there are two possibilities: \begin{itemize} \item[(a)] there exists a Siegel modular form of slope at most $7$, not divisible by $\theta_{L,h,2}$: in this case, from \Cref{lm:properties} it follows that $s_\Mov(\Perf[6])\le 7$; \item[(b)] $\theta_{L,h,2}$ is the unique genus~$6$ Siegel modular form of slope~$7$ (up to taking powers): the claim then follows from \Cref{cor:main}, since $s(\mathfrak{D}_{6,14}(\theta_{L,h,2}))=7+\frac{2}{2\cdot 6}=\frac{43}{6}$ (as usual, if it happened that $\mathfrak{D}_{6,14}(\theta_{L,h,2})$ were to vanish to order strictly higher than $6\cdot 2=12$, then its slope would be at most $\tfrac{86}{13}<7$). \end{itemize} In either case, the result is proven. \end{proof} In the above case (a) the moduli space ${\mathcal A}_6$ would have Kodaira dimension at least $1$, while in case (b) it would have Kodaira dimension $0$. \section{Pluriharmonic differential operators}\label{sec:diffop} In this section we introduce a suitable differential operator on the space of modular forms and prove \Cref{thm:main} using a general result of \cite{Ibukiyama}. Before introducing the relevant notions, we explain the outline of what is to be done. We are looking for an operator $\mathfrak{D}_{g,a}$ that will map a genus $g$ Siegel modular form $F$ of weight $a$ to another modular form satisfying certain properties: more precisely, $\mathfrak{D}_{g,a}(F)$ will be a polynomial in $F$ and its partial derivatives. There are various motivations for looking for $\mathfrak{D}_{g,a}$ of such a form, which, together with the general setup for the problem, are discussed in \cite{Ibukiyama}, \cite{ibuzagier}, \cite{ehib}.
In our situation, motivated by the occurrence of $\det(\partial F)$ in our treatment of Rankin-Cohen bracket (see \Cref{rem:intrinsic} and \Cref{prop11}), we will want $\mathfrak{D}_{g,a}(F)$ to restrict to $\det(\partial F)$ along the zero locus $\{F=0\}$. Note that $\det(\partial F)$ is homogeneous in~$F$ of degree $g$, in the sense that each monomial involves a product of $g$ different partial derivatives of~$F$, and moreover it is a purely $g$'th order differential operator, in the sense that each monomial involves precisely $g$ differentiations. Hence, we will look for a $\mathfrak{D}_{g,a}$ that shares these two properties. Besides $F\mapsto \det(\partial F)$, another operator with the above properties is $F\mapsto F^{g-1}(\det\partial)F$, where each monomial is $F^{g-1}$ multiplied by a suitable $g$'th order partial derivative of~$F$. Of course $\mathfrak{D}_{g,a}(F)$ cannot be defined either as $\det(\partial F)$ or as $F^{g-1}(\det\partial) F$, as these are not modular forms. But a wished-for $\mathfrak{D}_{g,a}$ can be constructed explicitly: in order to do so, we will use the general machinery of \cite{Ibukiyama}, which implies that a differential operator with constant coefficients maps a non-zero modular form to a modular form if the corresponding polynomial is pluriharmonic and satisfies a suitable transformation property under the action of $\op{GL}(g,{\mathbb{C}})$. We now begin by reviewing the general notation, before stating a particular case of \cite[Thm.~2]{Ibukiyama} that allows the construction of $\mathfrak{D}_{g,a}$. \subsection{Polynomials and differential operators} Let $R_1,\dots,R_g$ be a $g$-tuple of $g\times g$ {\em symmetric} matrices, and denote the entries of $R_h$ by $(r_{h;ij})$. Denote \[ {\mathbb{C}}[R_1,\dots,R_g]\coloneqq{\mathbb{C}}[\{r_{h;ij}\}] \] the space of polynomials in the entries of these matrices. 
The group $\op{GL}(g,{\mathbb{C}})$ naturally acts by congruence on each symmetric matrix $R_h$, and so on the space ${\mathbb{C}}[R_1,\dots,R_g]$. For every integer $v\geq 0$ we denote by ${\mathbb{C}}[R_1,\dots,R_g]_v\subset{\mathbb{C}}[R_1,\dots,R_g]$ the vector subspace of those polynomials $P\in{\mathbb{C}}[R_1,\dots,R_g]$ that satisfy \[ P(AR_1A^t,\dots,AR_gA^t)=\det(A)^{v}P(R_1,\dots,R_g) \] for all $A\in\op{GL}(g,{\mathbb{C}})$. For every polynomial $Q\in{\mathbb{C}}[R_1,\dots,R_g]$ we define \[ Q_\partial\coloneqq Q(\partial_1,\dots,\partial_g), \quad\text{where as usual}\ (\partial_h)_{ij}\coloneqq\frac{1+\delta_{ij}}{2}\frac{\partial}{\partial \tau_{h;ij}}\,. \] Such $Q_\partial$ is then a holomorphic differential operator with constant coefficients acting on holomorphic functions in the variables $\tau_{h;ij}$. We further define the holomorphic differential operator ${\mathcal D}_Q$ that sends a $g$-tuple of holomorphic functions $F_1(\tau_1),\dots,F_g(\tau_g)$ on~${\mathbb{H}}_g$ to another holomorphic function on the Siegel space given by \[ {\mathcal D}_Q(F_1,\dots,F_g)(\tau)\coloneqq \left. Q_\partial(F_1,\dots,F_g)\right|_{\tau_1=\dots=\tau_g=\tau}\,. \] What this means is that applying each $\partial_h$ takes the suitable partial derivatives of $F_h$ with respect to the entries of the period matrix $\tau_h$, and then once the polynomial in such partial derivatives is computed, it is evaluated at the point $\tau_1=\dots=\tau_g=\tau$. While the general theory of applying ${\mathcal D}_Q$ to a $g$-tuple of modular forms is very interesting, we will focus on the case $F=F_1=\dots=F_g$, denoting then simply ${\mathcal D}_Q(F)\coloneqq{\mathcal D}_Q(F,\dots,F)$. 
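As a concrete illustration of the operator ${\mathcal D}_Q$ just defined, the following sympy sketch (our own toy computation; the polynomial $F$ below is an arbitrary stand-in for a modular form, not an actual one) extracts the coefficient of $t_1t_2$ in $\det(t_1R_1+t_2R_2)$ for $g=2$ and checks that, on the diagonal $F_1=F_2=F$, the induced operator equals $2!\,\det(\partial F)$:

```python
import sympy as sp

# Symmetric period-matrix variables for g = 2 and a sample "form" F.
tau11, tau12, tau22 = sp.symbols('tau11 tau12 tau22')
F = tau11**3 * tau22 + tau11 * tau12**2 + tau22**2

def d(G, i, j):
    """Entry (i,j) of the symmetric derivative matrix:
    (d)_ij = (1 + delta_ij)/2 * d/dtau_ij, so off-diagonal entries carry 1/2."""
    var = {(1, 1): tau11, (1, 2): tau12, (2, 2): tau22}[(min(i, j), max(i, j))]
    return (sp.Rational(1, 2) if i != j else 1) * sp.diff(G, var)

det_dF = d(F, 1, 1) * d(F, 2, 2) - d(F, 1, 2)**2  # det of the 2x2 matrix dF

# Build det(t1*R1 + t2*R2) with generic symmetric R_h and take the t1*t2 part.
t1, t2 = sp.symbols('t1 t2')
r = {(h, i, j): sp.Symbol(f'r{h}_{i}{j}') for h in (1, 2)
     for (i, j) in [(1, 1), (1, 2), (2, 2)]}
R = {h: sp.Matrix([[r[h, 1, 1], r[h, 1, 2]], [r[h, 1, 2], r[h, 2, 2]]])
     for h in (1, 2)}
R11 = sp.expand((t1 * R[1] + t2 * R[2]).det()).coeff(t1).coeff(t2)

# This coefficient has degree one in each R_h, so on the diagonal F_1 = F_2 = F
# the operator D is obtained by substituting r_{h;ij} -> (dF)_ij.
D = R11.subs({r[h, i, j]: d(F, i, j) for (h, i, j) in r})
assert sp.expand(D - sp.factorial(2) * det_dF) == 0
```

The substitution step is valid precisely because the coefficient is multilinear in the matrices $R_h$, so no iterated derivatives arise; for the polynomials ${\mathfrak R}(g,0,\dots,0)$ below this is no longer the case.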
\begin{exa}\label{exa:poly} It is immediate to check that the polynomial (the general notation ${\mathfrak R}$ will be introduced in \Cref{sec:poly2} below) \[ {\mathfrak R}(\underbrace{1,\dots,1}_{\text{ $g$ times}})=\sum_{\pi,\sigma\in S_g}\sgn(\sigma) \prod_{j=1}^g r_{\pi(j);j,\sigma(j)} \] induces the differential operator ${\mathcal D}_{{\mathfrak R}(1,\dots,1)}(F)=g!\,\det(\partial F)$: once all the $F_h$ are set equal to $F$, each of the $g!$ choices of $\pi$ contributes a copy of $\det(\partial F)$. On the other hand, the polynomial \[ {\mathfrak R}(g,\underbrace{0,\dots,0}_{\text{$g-1$ times}}\!\!)=\sum_{\sigma\in S_g}\sgn(\sigma) \prod_{j=1}^g r_{1;j,\sigma(j)} \] induces the differential operator ${\mathcal D}_{{\mathfrak R}(g,0,\dots,0)}(F)=F^{g-1}(\det\partial)F$. We stress that while each term of ${\mathcal D}_{{\mathfrak R}(1,\dots,1)}$ is a product of $g$ first-order partial derivatives of $F$, each term of ${\mathcal D}_{{\mathfrak R}(g,0,\dots,0)}$ is equal to $F^{g-1}$ multiplied by one $g$'th order partial derivative of~$F$. \end{exa} Our main result is the following \Cref{thm:precise}, which is a refined version of \Cref{thm:main}: indeed, to obtain \Cref{thm:main} from it, we just need to set \[ \mathfrak{D}_{g,a}(F)\coloneqq {\textstyle\frac{1}{g!}}{\mathcal D}_{Q_{g,a}}(F) \] for every modular form $F$ of genus $g\geq 2$ and weight $a\geq\frac{g}{2}$ (the constant factor $g!$ is introduced only for notational convenience). In order to motivate the statement below, recall that we want ${\mathcal D}_{Q_{g,a}}(F)$ to be equal to $g!\,\det(\partial F)$ modulo $F$. Since $g!\,\det(\partial F)={\mathcal D}_{{\mathfrak R}(1^g)}(F)$ as in \Cref{exa:poly}, and since ${\mathfrak R}(1^g)$ belongs to ${\mathbb{C}}[R_1,\dots,R_g]_2$, it is rather natural to look for $Q_{g,a}$ inside ${\mathbb{C}}[R_1,\dots,R_g]_2$.
\begin{thm}\label{thm:precise} For every $g\geq 2$ and every $a\geq\frac{g}{2}$ there exists a polynomial $Q_{g,a}\in{\mathbb{C}}[R_1,\dots,R_g]_2$ such that the following properties hold for every genus $g$ Siegel modular form $F$ of weight $a$: \begin{itemize} \item[(i)] ${\mathcal D}_{Q_{g,a}}(F)$ is a Siegel modular form of weight $ga+2$; \item[(ii)] if $\op{ord}_D F=b$, then $\op{ord}_D{\mathcal D}_{Q_{g,a}}(F)\ge gb$; \item[(iii)] the restriction of ${\mathcal D}_{Q_{g,a}}(F)$ to the zero locus $\{F=0\}$ of~$F$ is equal to $g!\cdot\det(\partial F)$. \end{itemize} Moreover, for any other polynomial $Q'_{g,a}\in{\mathbb{C}}[R_1,\dots,R_g]_2$ such that ${\mathcal D}_{Q'_{g,a}}$ satisfies properties (i) and (iii), the difference ${\mathcal D}_{Q_{g,a}}(F)-{\mathcal D}_{Q'_{g,a}}(F)$ is a Siegel modular form divisible by~$F$. \end{thm} The above differential operator ${\mathcal D}_{Q_{g,a}}$, which is homogeneous of degree $g$, can also be applied to modular forms with a character, which only occur for $g=2$: in this case, the output is a modular form (with trivial character). \begin{rem} As a consequence of \Cref{thm:precise}(iii), if a modular form $F$ of genus $g$ and weight $a\geq\frac{g}{2}$ satisfies Condition \eqref{cond}, then ${\mathcal D}_{Q_{g,a}}(F)$ does not vanish identically on the zero divisor of $F$. \end{rem} The reason we are able to construct $Q_{g,a}$ explicitly is that we can use a lot of prior work, especially by the second author and collaborators, on differential operators acting on modular forms. In particular, by \Cref{thm:ibu} the operator ${\mathcal D}_{Q_{g,a}}$ will map modular forms to modular forms if $Q_{g,a}$ is pluriharmonic --- this essential notion will be recalled in \Cref{sec:pluri}. Thus to prove \Cref{thm:precise}, it will suffice to construct a pluriharmonic $Q_{g,a}\in {\mathbb{C}}[R_1,\dots,R_g]_2$. Property (i) will rely on \Cref{thm:ibu} and (ii) will be easily seen to hold.
Up to rescaling, we will also check (iii), and the last claim will follow. \subsection{A basis of $\bm{{\mathbb{C}}[R_1,\dots,R_g]_2}$}\label{sec:poly2} Consider the $g$-tuple of symmetric $g\times g$ matrices $R_1,\dots,R_g$. We set \begin{equation}\label{eq:Mdefined} {\mathfrak R}\coloneqq t_1R_1+\dots+t_gR_g\,, \end{equation} and denote by ${\mathfrak R}({\bf n})\in{\mathbb{C}}[R_1,\dots,R_g]$ the coefficients of the expansion of the determinant \[ \det({\mathfrak R})=\sum_{{\bf n}\in {\bf N}_g}{\mathfrak R}({\bf n})t_1^{n_1}\dots t_g^{n_g} \] as a polynomial in the variables $t_1,\dots,t_g$, where $${\bf N}_g\coloneqq\{{\bf n}=(n_1,\dots,n_g)\in{\mathbb{N}}^g\,|\, n_h\geq0\textrm{ for all }h,\ \sum n_h=g\}\,.$$ The importance of the polynomials ${\mathfrak R}({\bf n})$ for us lies in the fact that they clearly belong to ${\mathbb{C}}[R_1,\dots,R_g]_2$, simply because $\det(A{\mathfrak R} A^t)=\det(A)^2\det({\mathfrak R})$ for all $A\in \op{GL}(g,{\mathbb{C}})$. The following lemma, of a very classical flavor, is due to Claudio Procesi. \begin{lm}\label{procesi} The set of polynomials $\{{\mathfrak R}({\bf n})\}_{{\bf n}\in{\bf N}_g}$ is a basis of ${\mathbb{C}}[R_1,\dots,R_g]_2$\,. \end{lm} \begin{proof} Let $V$ be a complex $g$-dimensional vector space and let $\op{GL}(V)$ naturally act on $\op{Sym}^2(V^*)^{\oplus g}$ via \[ A\cdot ((\phi_1\otimes\phi_1),\dots,(\phi_g\otimes\phi_g)):= ((\phi_1A)\otimes (\phi_1 A),\dots,(\phi_g A)\otimes(\phi_g A)) \] for $A\in\op{GL}(V)$. Consider the ${\mathbb{C}}$-algebra ${\mathcal I}(V, g)$ of $\op{SL}(V)$-invariant polynomial functions on $\op{Sym}^2(V)^{\oplus g}$. The quotient $\op{GL} (V) / \op{SL} (V) \cong{\mathbb{C}}^*$ acts on ${\mathcal I}(V,g)$ and, under this action, the algebra of invariants decomposes as $$ {\mathcal I} (V, g) =\bigoplus_d {\mathcal I} (V, g) _d, \ \hbox{where}\ {\mathcal I} (V, g)_d\coloneqq \{P\in {\mathcal I}(V,g)\ |\ A\cdot P=\det(A)^{2d}P\}.
$$ Clearly ${\mathcal I} (V, g)_d$ is simply the subspace of ${\mathcal I}(V, g)$ consisting of invariant polynomial maps $P:\op{Sym}^2(V)^{\oplus g}\rightarrow{\mathbb{C}}$ of total degree $d\cdot \dim(V)$ with respect to the above ${\mathbb{C}}^*$-action. The subspace ${\mathcal I}(V,g)_1$ decomposes as \[ {\mathcal I}(V,g)_1=\bigoplus_{{\bf n}\in{\bf N}_g} {\mathcal I}(V,g)_{{\bf n}} \] where ${\mathcal I}(V,g)_{{\bf n}}\coloneqq\left(\bigotimes_{i=1}^g \op{Sym}^2(V^*)^{\otimes n_i}\right)^{\op{SL}(V)}$ denotes the subspace of invariant polynomial functions $\op{Sym}^2(V)^{\oplus g}\rightarrow{\mathbb{C}}$ of multi-degree ${\bf n}$. Since it is easy to check that ${\mathfrak R}({\bf n})\in {\mathcal I}(V,g)_{{\bf n}}$, it is enough to show that ${\mathcal I}(V,g)_{{\bf n}}$ has dimension $1$ for all ${\bf n}\in{\bf N}_g$. Moreover, ${\mathcal I}(V,g)_{{\bf n}}$ is isomorphic to ${\mathcal I}(V,g)_{(1,\dots,1)}=(\op{Sym}^2(V^*)^{\otimes g})^{\op{SL}(V)}$ as an $\op{SL}(V)$-module for all ${\bf n}\in{\bf N}_g$, and so it is enough to show that $(\op{Sym}^2(V^*)^{\otimes g})^{\op{SL}(V)}$ has dimension at most $1$ (and in fact it will have dimension $1$, since ${\mathfrak R}({\bf n})\not\equiv 0$). Thinking of $(\op{Sym}^2(V^*)^{\otimes g})^{\op{SL}(V)}$ as a subspace of $((V^*)^{\otimes 2g})^{\op{SL}(V)}$, we describe a spanning set of $((V^*)^{\otimes 2g})^{\op{SL}(V)}$; it is enough to do that for $V={\mathbb{C}}^g$. Let $\mathfrak{P}$ be the set of all permutations $(I,J)=(i_1,\dots,i_g,j_1,\dots,j_g)$ of $\{1,2,\dots,2g\}$ such that $i_h<j_h$ for all $h=1,\dots,g$. For every $(I,J)\in\mathfrak{P}$ we denote by \[ [i_1,\dots,i_g][j_1,\dots,j_g]:V^{\otimes 2g}\longrightarrow{\mathbb{C}} \] the linear map that sends $v_1\otimes\dots\otimes v_{2g}$ to $\det(v_I)\det(v_J)$, where $v_I$ is the matrix whose $h$-th column is $v_{i_h}$ (and similarly for $v_J$).
It is a classical fact that the collection of $[i_1,\dots,i_g][j_1,\dots,j_g]$ with $(I,J)\in\mathfrak{P}$ spans $((V^*)^{\otimes 2g})^{\op{SL}(V)}$. Fix now $(I,J)$ and consider the restriction of $[i_1,\dots,i_g][j_1,\dots,j_g]$ to $\op{Sym}^2(V)^{\otimes g}$, and in particular to the vectors of type \[ v_1\otimes v_1\otimes v_2\otimes v_2\otimes\cdots\otimes v_g\otimes v_g \] which generate $\op{Sym}^2(V)^{\otimes g}$. Note that $[i_1,\dots,i_g][j_1,\dots,j_g]$ is alternating both in $I$ and in $J$, and vanishes on all vectors $v_1\otimes v_1\otimes v_2\otimes v_2\otimes\cdots\otimes v_g\otimes v_g$ as soon as either $I$ or $J$ contains $\{2m-1,\,2m\}$ for some $m=1,\dots,g$. It follows that all elements $[i_1,\dots,i_g][j_1,\dots,j_g]$ of the above spanning set of $((V^*)^{\otimes 2g})^{\op{SL}(V)}$ vanish on $\op{Sym}^2(V)^{\otimes g}$, except possibly $[1,3,5,\dots,2g-1][2,4,6,\dots,2g]$. We conclude that $(\op{Sym}^2(V^*)^{\otimes g})^{\op{SL}(V)}$ is at most $1$-dimensional. \end{proof} \subsection{Definition of the polynomial $\bm{Q_{g,a}}$}\label{sec:explicit} In view of \Cref{procesi}, the sought polynomial $Q_{g,a}\in{\mathbb{C}}[R_1,\dots,R_g]_2$ must be a linear combination of the ${\mathfrak R}({\bf n})$'s. Now we give the explicit formulas for the coefficients of such a linear combination. Pluriharmonicity of $Q_{g,a}$ will be verified in \Cref{sec:Qpluri}. We define the constant \[ C(1)\coloneqq (g-1)\prod_{i=1}^{g-1}(2a-i)\,. \] Moreover, for every $m=2,\dots,g$ we define the constant \[ C(m)\coloneqq (-1)^{m-1}(m-1)!(2a)^{m-1}\prod_{i=m}^{g-1}(2a-i)\,, \] where for $m=g$ the last product above is declared to be equal to $1$, so that $C(g)=(-1)^{g-1}(g-1)! (2a)^{g-1}$.
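The constants $C(1),\dots,C(g)$ just defined can be tabulated mechanically; the following Python lines (an illustration only; the function name is ours) encode them, with the convention that the empty product for $m=g$ is handled automatically since `range(g, g)` is empty:

```python
from math import prod, factorial

def C(g, a, m):
    """The constants C(1), ..., C(g) defined above, for genus g and weight a."""
    if m == 1:
        return (g - 1) * prod(2 * a - i for i in range(1, g))
    # for m = g the product over range(g, g) is empty, i.e. equal to 1,
    # recovering C(g) = (-1)^(g-1) * (g-1)! * (2a)^(g-1)
    return ((-1) ** (m - 1) * factorial(m - 1) * (2 * a) ** (m - 1)
            * prod(2 * a - i for i in range(m, g)))

# Genus 4 with a = 8 (the weight of the Schottky form I_4):
assert C(4, 8, 1) == 3 * 15 * 14 * 13
assert C(4, 8, 2) == -16 * 14 * 13
assert C(4, 8, 4) == -factorial(3) * 16 ** 3
```

In particular $C(1)\neq 0$ exactly when $2a\geq g$, which is the standing assumption below.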
Assume $2a\geq g\geq 2$, so that $C(1)\neq 0$, and define then \[ Q_{g,a}:=\frac{1}{C(1)}\sum_{{\bf n}\in{\bf N}_g} c({\bf n}){\mathfrak R}({\bf n})\,, \] where \begin{enumerate} \item $c(1,\dots,1)\coloneqq C(1)$; \item if at least two of $n_1,\dots,n_g$ are greater than 1, then we set $c({\bf n})\coloneqq 0$; \item if $n_h=m>1$ for some $h$, while $0\le n_j\le 1$ for any $j\neq h$, then we set $c({\bf n})\coloneqq C(m)$. \end{enumerate} Hence $c({\bf n})\neq 0$ if and only if ${\bf n}$ is equal to $(m,1,1,\dots,1,0,0,\dots,0)$ for some $m\ge 1$, up to permuting its components. \subsection{Explicit formulas} In order to have a more explicit expression for the polynomials ${\mathfrak R}({\bf n})$, we expand the relevant determinants. \begin{ntn} If $M$ is a $g\times g$ matrix and $I,J\subset\{1,2,\dots,g\}$ with $|I|=|J|$, we denote by $M_{IJ}$ the minor of $M$ consisting of rows $I$ and columns $J$, and denote by $\det_{IJ}(M)$ the determinant of $M_{IJ}$ (if $|I|=|J|=0$, then we formally set $\det_{IJ}(M)\coloneqq 1$). Moreover, we let ${\widehat I}$ be the complement of $I$ and, if $i\in \{1,2,\dots,g\}$, then we let $\hat{\imath}:=\{1,2,\dots,g\}\setminus\{i\}$. \end{ntn} Applying the Laplace expansion several times yields \begin{equation}\label{laplace} {\mathfrak R}({\bf n})= \sum_{I_\bullet,J_\bullet} \epsilon(I_\bullet,J_\bullet) \det_{I_1 J_1}(R_1)\cdots \det_{I_g J_g}(R_g)\,, \end{equation} where $(I_\bullet,J_\bullet)=(I_1,\dots,I_g,J_1,\dots,J_g)$ and \begin{itemize} \item $I_1, \dots, I_g, J_1, \dots, J_g$ run over all subsets of $\{1,\dots,g\}$ such that $|I_i|=|J_i|=n_i$ for each $i=1,\dots,g$ and $\sqcup_{i=1}^{g}I_i= \sqcup_{i=1}^{g}J_i=\{1,\dots,g\}$; \item $\epsilon(I_\bullet,J_\bullet)$ is the signature of the element of $S_g$ that maps $(I_1,\dots,I_g)$ to $(J_1,\dots,J_g)$, where the elements inside each subset $I_i$ or $J_i$ are ordered from minimum to maximum. 
\end{itemize} In order to compute ${\mathcal D}_{{\mathfrak R}({\bf n})}(F,\dots,F)$, consider ${\bf n}=(m,1,\dots,1,0,\dots,0)$. Recognizing the partial sums over the parts with $|I_j|=1$ as Laplace expansions of determinants, we obtain \begin{equation}\label{diffresult} {\mathcal D}_{{\mathfrak R}({\bf n})}(F,\dots,F) =F^{m-1}\sum_{|I|=|J|=m} \epsilon(I,J)(g-m)!(\det\nolimits_{IJ}\partial)F\cdot \det\nolimits_{\widehat{I},\widehat{J}}(\partial F)\,, \end{equation} where we denote \[ \epsilon(I,J)\coloneqq(-1)^{i_1+\dots+i_m+j_1+\dots+j_m}. \] Thus we have obtained the following. \begin{cor}\label{cor:summands} \hskip2mm \begin{itemize} \item[(i)] If ${\bf n}=(1,\dots,1)$, then ${\mathcal D}_{{\mathfrak R}(1,\dots,1)}(F)=g!\,\det(\partial F)$. \item[(ii)] If ${\bf n}\in{\bf N}_g$ and ${\bf n}\ne (1,\dots,1)$, then ${\mathcal D}_{{\mathfrak R}({\bf n})}(F)$ is a multiple of $F$. \end{itemize} \end{cor} \begin{proof} For (i), note that \[ \sum_{j=1}^{g}(-1)^{i+j}\partial_{ij}F\cdot \det\nolimits_{\widehat{\imath}\,\widehat{\jmath}}(\partial F) =\det(\partial F) \] for every $i=1,\dots,g$. Then formula \eqref{diffresult} for ${\bf n}=(1,1,\dots,1)$ (that is, for $m=1$) yields \[ {\mathcal D}_{{\mathfrak R}(1,\dots,1)}(F)= (g-1)!\sum_{i,j=1}^{g}(-1)^{i+j}\partial_{ij}F\cdot\det\nolimits_{\widehat{\imath}\,\widehat{\jmath}}(\partial F) =g!\,\det(\partial F)\,. \] For (ii), we observe that since $\sum n_h=g$, and all $n_h\ge 0$, it follows that unless ${\bf n}=(1,\dots,1)$, there exists at least one $h$ such that $n_h=0$. But then the polynomial ${\mathfrak R}({\bf n})$ would contain no $r_{h;ij}$, which is to say that $F_h$ is not differentiated at all by ${\mathcal D}_{{\mathfrak R}({\bf n})}(F_1,\dots,F_g)$. This finally means that ${\mathcal D}_{{\mathfrak R}({\bf n})}(F_1,\dots,F_g)$ is divisible by $F_h$, and thus ${\mathcal D}_{{\mathfrak R}({\bf n})}(F)$ is divisible by $F$.
\end{proof} \begin{exa} For $g=2,3$ we have \begin{align*} {\mathcal D}_{Q_{2,a}}(F,F) &=2\det(\partial F)+\frac{2(2a)}{1-2a}F\cdot(\det\partial) F\,,\\ {\mathcal D}_{Q_{3,a}}(F,F,F) &= 6\det(\partial F)+ \frac{3(2a)^2}{(2a-1)(2a-2)} F^2\cdot (\det\partial) F\\ &\quad - \frac{3(2a)}{(2a-1)} F\sum_{i,j=1}^3(\partial_{ij}F)\cdot(\det\nolimits_{\hat{\imath}\hat{\jmath}}\partial)F\,. \end{align*} \end{exa} \subsection{Pluriharmonic polynomials}\label{sec:pluri} By \Cref{thm:ibu} the most important step toward the proof of \Cref{thm:precise} is checking that $Q_{g,a}$, defined in \Cref{sec:explicit} as a linear combination of the ${\mathfrak R}({\bf n})$, is pluriharmonic. In this section we recall the relevant setup, definitions, and statements. Fix a $g\times k$ matrix $X=(x_{i\nu})$, and denote for $1\leq i,j\leq g$ \[ \Delta_{ij}(X)\coloneqq\sum_{\nu=1}^{k}\frac{\partial^2}{\partial x_{i\nu}\partial x_{j\nu}}\,. \] For a polynomial $P(R)$ in the entries of a symmetric $g\times g$ matrix $R=(r_{ij})$ we denote $\tP(X)\coloneqq P(X X^t)$. \begin{df} The polynomial $P$ is called {\em pluriharmonic} (with respect to $X$) if $\Delta_{ij}\tP=0$ for all $1\leq i,j \leq g$. \end{df} To detect this pluriharmonicity in terms of $R$, we define the differential operator in variables $(r_{ij})$ by \begin{equation}\label{eq:Ddefined} D_{ij}\coloneqq k\cdot\partial_{ij}+\sum_{u,w=1}^{g}r_{uw}\partial_{iu}\partial_{jw}\,, \end{equation} where $\partial_{ij}:=\frac{1+\delta_{ij}}{2}\frac{\partial}{\partial r_{ij}}$. Then a direct computation yields \begin{equation}\label{eq:DDelta} (D_{ij}P)(XX^t)=\Delta_{ij}(\tP(X))\,, \end{equation} where $P(r_{ij})$ is any polynomial, and the LHS means $D_{ij}$ is applied to $P$, and then evaluated at $XX^t$.
This equality shows that computing the $\Delta_{ij}$ derivative of $\tP$ (which is a second order differential operator) amounts to computing $D_{ij}$ applied to~$P$, which is a differential operator that includes first and second order derivatives. Thus pluriharmonicity is equivalent to the condition $D_{ij}(P)=0$ for all $1\le i,j\le g$. Now the full setup we require is as follows. For a positive integer $k=2a$ we consider a $g$-tuple of $g\times k$ matrices $X_1,\dots,X_g$, and denote $R_h\coloneqq X_hX_h^t$. \begin{df} A polynomial $P\in{\mathbb{C}}[R_1,\dots,R_g]$ is called {\em pluriharmonic} if \[ \tP(X_1,\dots,X_g)\coloneqq P(X_1X_1^t,\dots, X_gX_g^t) \] is pluriharmonic with respect to the $g\times (gk)$ matrix $X=(X_1,\dots,X_g)$. \end{df} The following result is a special case of \cite[Theorem 2]{Ibukiyama}, which shows the importance of pluriharmonicity. \begin{thm}\label{thm:ibu} For $g\geq 2$, let $P\in{\mathbb{C}}[R_1,\dots,R_g]_2$ and let $F\neq 0$ be a Siegel modular form of genus $g$ and weight $a\geq\frac{g}{2}$. Then ${\mathcal D}_P(F,\dots,F)$ is a Siegel modular form of weight $ga+2$ if $P$ is pluriharmonic. \end{thm} Let us first give an elementary characterization of pluriharmonicity. \begin{lm}\label{elementary} Let $P\in{\mathbb{C}}[R_1,\dots,R_g]$. \begin{itemize} \item[(i)] The polynomial $\tP(X)$ is pluriharmonic if and only if $\tP(AX)$ is harmonic (i.e.~$\sum_{i=1}^{g}\Delta_{ii}\tilde{P}(AX)=0$) for any $A \in \op{GL}(g,{\mathbb{C}})$. \item[(ii)] Assume that $P\in{\mathbb{C}}[R_1,\dots,R_g]_v$ for some $v$. Then $\tP(X)$ is pluriharmonic if and only if $\Delta_{11}(\tP)=0$. \end{itemize} \end{lm} \begin{proof} The claim (i) is remarked in \cite{kashiwaravergne} and we omit the proof. In order to prove (ii), note that pluriharmonicity of $\tP$ implies that $\Delta_{11}(\tP)=0$ by definition. Hence, it is enough to prove that $\Delta_{11}(\tP)=0$ implies pluriharmonicity.
For a fixed $i$ with $1\leq i\leq g$, let $A$ be the permutation matrix that exchanges the first row and the $i$-th row. Since \begin{align*} \Delta_{ii}(X)\cdot\tP(X) & =\det(A)^{-v} \Delta_{ii}(X)\cdot \tP(AX) \\ &=\det(A)^{2-v}\Delta_{11}(AX)\cdot \tP(AX)=0, \end{align*} the conclusion follows. \end{proof} Denoting by $D_{h;11}$ the differential operator $D_{11}$ defined in \eqref{eq:Ddefined} with respect to the entries of the matrix $R_h$, and using \eqref{eq:DDelta} to rewrite the Laplacian $\Delta_{h;11}$ associated with each $X_h$ in terms of $D_{h;11}$, by \Cref{elementary} we have the following. \begin{cor}\label{cor:pluri} Suppose that $P\in{\mathbb{C}}[R_1,\dots,R_g]_v$ for some $v$. Then $P$ is pluriharmonic with respect to the $g\times (gk)$ matrix $(X_1,\dots,X_g)$ if and only if \begin{equation}\label{eq:Dharmonic} \sum_{h=1}^{g}D_{h;11}P=0\,. \end{equation} \end{cor} The above corollary applies to $Q_{g,a}$ and simplifies the verification of its pluriharmonicity. \subsection{Pluriharmonicity of $Q_{g,a}$}\label{sec:Qpluri} The result that we want to show is the following. \begin{prop}\label{prop:existence} The polynomial $Q_{g,a}$ is pluriharmonic. \end{prop} Since we will be dealing with minors of the matrix ${\mathfrak R}$ defined by~\eqref{eq:Mdefined}, we let \begin{equation}\label{eq:bdefined} {\mathbf N'}\coloneqq\{{\mathbf n'}=(n'_1,\dots,n'_g)\in{\mathbb{N}}^g\,|\,n'_h\geq 0\textrm{ for all }h,\ \sum n'_h=g-1\}\,, \end{equation} and we denote by $\widehat{{\mathfrak R}}_{k;l}$ the determinant of the matrix ${\mathfrak R}_{\widehat{k};\widehat{l}}$, and denote by $\widehat{{\mathfrak R}}_{k;l}({\mathbf n'})$ the polynomial appearing in the expansion \[ \widehat{{\mathfrak R}}_{k;l}=\sum_{{\mathbf n'}\in{\mathbf N'}}\widehat{{\mathfrak R}}_{k;l}({\mathbf n'}) t_1^{n'_1}\cdots t_g^{n'_g}\,. \] We can now compute the derivative of ${\mathfrak R}({\bf n})$ that enters into the formula \eqref{eq:Dharmonic} for pluriharmonicity.
\begin{lm}\label{deriv} For any ${\bf n}\in {\bf N}_g$, we have \[ D_{h;11}{\mathfrak R}({\bf n})=2(k-n_h+1)\widehat{{\mathfrak R}}_{1;1}({\bf n}-{\bf e}_h)\,, \] where $k=2a$, and $\{{\bf e}_1,\dots,{\bf e}_g\}$ is the standard basis of ${\mathbb{Z}}^g$. \end{lm} \begin{proof} By symmetry, it is enough to prove this for $h=1$; for simplicity, we write $r_{ij}$ for the entries $r_{1;ij}$ of the symmetric matrix $R_1$, and define $\partial$ by \eqref{eq:padefined}. We recall that $D_{1;11}=k\cdot\partial_{11}+\sum_{i,j=1}^g r_{ij}\partial_{1i}\partial_{1j}$. Then by treating the cases $i=1$ and $i\ne 1$ separately, and checking the factor of $1/2$ versus $1$ appearing in the definition of $\partial$ for these entries, we see that \[ \partial_{1i}\det ({\mathfrak R})=2(-1)^{1+i}t_1\widehat{{\mathfrak R}}_{1;i} \] for any $i=1,\dots,g$. To compute the second order derivatives appearing in $D_{1;11}$, we first note that since $\widehat{{\mathfrak R}}_{1;1}$ does not depend on any $r_{1i}$, we have $\partial_{1i}\partial_{1j}\det ({\mathfrak R})=0$ if $i=1$ or $j=1$. Otherwise, for $i,j\neq 1$, we compute \begin{align*} r_{ij}\partial_{1i}\partial_{1j}\det ({\mathfrak R}) &=(-1)^{1+j}t_1r_{ij}\partial_{1i}\widehat{{\mathfrak R}}_{1;j} =(-1)^{1+j+1+(i-1)}t_1^2r_{ij}\widehat{{\mathfrak R}}_{\{1,i\};\{1,j\}}\\ &=(-1)(-1)^{(i-1)+(j-1)}t_1^2r_{ij}\widehat{{\mathfrak R}}_{\{1,i\};\{1,j\}}\,. \end{align*} Summing these identities yields \[ \sum_{i,j=2}^{g}r_{ij}\partial_{1i}\partial_{1j}\det ({\mathfrak R}) = (-t_1)\sum_{i,j=2}^{g}t_1r_{ij}(-1)^{(i-1)+(j-1)}\widehat{{\mathfrak R}}_{\{1,i\};\{1,j\}}\,. \] Here for a fixed $i$, the sum $\sum_{j=2}^{g}(-1)^{(i-1)+(j-1)}r_{ij}\widehat{{\mathfrak R}}_{\{1,i\};\{1,j\}}$ is nothing but the derivative of the $(i-1)$-th row of $\widehat{{\mathfrak R}}_{1;1}$ with respect to $t_1$, and thus \[ \sum_{i,j=2}^{g}(-1)^{(i-1)+(j-1)}r_{ij}\widehat{{\mathfrak R}}_{\{1,i\};\{1,j\}} =\frac{\partial}{\partial t_1}\widehat{{\mathfrak R}}_{1;1}\,
\] Recall that $\widehat{{\mathfrak R}}_{1;1}=\sum_{{\mathbf n'}\in{\mathbf N'}}\widehat{{\mathfrak R}}_{1;1}({\mathbf n'}) t_1^{n'_1}\dots t_g^{n'_g} $ and note that \[ t_1\frac{\partial}{\partial t_1}(t_1^{n'_1}\dots t_g^{n'_g}\widehat{{\mathfrak R}}_{1;1}({\mathbf n'})) =n'_1t_1^{n'_1}\dots t_g^{n'_g}\widehat{{\mathfrak R}}_{1;1}({\mathbf n'})\,. \] Thus the coefficient of $t_1^{n_1}\dots t_g^{n_g}$ in the expansion of $D_{1;11}\det ({\mathfrak R}) $ is equal to \[ 2k\widehat{{\mathfrak R}}_{1;1}(n_1-1,n_2,\dots,n_g)-2(n_1-1) \widehat{{\mathfrak R}}_{1;1}(n_1-1,n_2,\dots,n_g) =2(k-n_1+1)\widehat{{\mathfrak R}}_{1;1}({\bf n}-{\bf e}_1)\,, \] as claimed. \end{proof} As a consequence of \Cref{deriv}, we have \begin{align*} C(1)\sum_{h=1}^g D_{h;11}Q_{g,a} &= \sum_{{\bf n}\in{\bf N}_g}\sum_{h=1}^g c({\bf n}) D_{h;11}{\mathfrak R}({\bf n})\\ &=\sum_{{\bf n}\in{\bf N}_g}\sum_{h=1}^g 2(k-n_h+1)c({\bf n})\widehat{{\mathfrak R}}_{1;1}({\bf n}-{\bf e}_h)\\ &=2\sum_{{\mathbf n'}\in{\mathbf N'}}\sum_{h=1}^g (k-n'_h)c({\mathbf n'}+{\bf e}_h)\widehat{{\mathfrak R}}_{1;1}({\mathbf n'})\,. \end{align*} Thus by \Cref{cor:pluri}, to check pluriharmonicity of $Q_{g,a}$ it is enough to check that \begin{equation}\label{harmoniccondition} \sum_{h=1}^g (k-n'_h)c({\mathbf n'}+{\bf e}_h)=0 \end{equation} for all ${\mathbf n'}\in{\mathbf N'}$. Comparing the degrees of $\widehat{{\mathfrak R}}_{1;1}({\mathbf n'})$ with respect to $R_h$, one can see that the set $\{\widehat{{\mathfrak R}}_{1;1}({\mathbf n'})\,|\,{\mathbf n'}\in{\mathbf N'}\}$ is linearly independent over ${\mathbb{C}}$, and so \eqref{harmoniccondition} is actually equivalent to the pluriharmonicity of $Q_{g,a}$. \begin{proof}[Proof of \Cref{prop:existence}] It is enough to verify \eqref{harmoniccondition} for every ${\mathbf n'}\in{\mathbf N'}$. Up to reordering the entries of ${\mathbf n'}$, we can assume that they are non-increasing. If $n'_1\geq n'_2>1$, then ${\mathbf n'}+{\bf e}_{\ell}$ has two entries larger than $1$, and so by definition we have $c({\mathbf n'}+{\bf e}_{\ell})=0$ for any $\ell$.
It follows that all the terms in \eqref{harmoniccondition} are equal to zero, and the equation is trivially satisfied. For ${\mathbf n'}=(1,\dots,1,0)$, the LHS of \eqref{harmoniccondition} is $$ k\cdot c(1,\dots,1)+(k-1)\biggl(c(2,1,\dots,1,0)+c(1,2,\dots,1,0)+\dots+c(1,\dots,2,0)\biggr) =k\cdot C(1)+(k-1)(g-1)C(2)\,. $$ By definition of $C(1)$ and $C(2)$, the terms cancel, yielding $0$. Let now ${\mathbf n'}=(m,1,\dots,1,0,\dots,0)$ with $m\geq 2$, so that there are $g-1-m$ entries equal to $1$ and $m$ entries equal to $0$. If $2\leq \ell\leq g-m$, then ${\mathbf n'}+{\bf e}_\ell$ has two entries greater than $1$, and so $c({\bf n}'+{\bf e}_\ell)=0$ by definition. We then have $$c({\mathbf n'}+{\bf e}_1)=c(m+1,1,\dots,1,0,\dots,0)=C(m+1)\,.$$ If $g-m+1\leq \ell$, then ${\mathbf n'}+{\bf e}_\ell$ is of type $(m,1,\dots,1,1,0,\dots,0)$, $(m,1, \dots,1,0,1,0,\dots,0)$, \dots, or $(m,1,\dots,1,0,\dots,0,1)$: in all these cases ${\mathbf n'}+{\bf e}_\ell$ has $g-m$ entries $1$, and thus $c({\mathbf n'}+{\bf e}_\ell)=C(m)$. So the LHS of \eqref{harmoniccondition} is given by \[ (k-m)C(m+1)+km\cdot C(m)\,, \] which also vanishes by our definition of the constants $C(m)$. \end{proof} \begin{exa} In the case $g=2$ we obtain \[ Q_{2,a}= {\mathfrak R}(1,1)-\frac{2a}{2a-1}{\mathfrak R}(2,0)-\frac{2a}{2a-1}{\mathfrak R}(0,2)\,, \] where we have used $k=2a$. This is a special case of the discussion in \cite{ehib}. \end{exa} \begin{proof}[Proof of \Cref{thm:precise}] The polynomial $Q_{g,a}$, defined in \Cref{sec:explicit}, belongs to ${\mathbb{C}}[R_1,\dots,R_g]_2$ and is pluriharmonic by \Cref{prop:existence}. Then (i) and the first part of (ii) follow from \Cref{thm:ibu}. Moreover, since ${\mathcal D}_{Q_{g,a}}(F_1,\dots,F_g)$ is ${\mathbb{C}}$-linear in each $F_h$, and since $F_h$ and $\frac{\partial F_h}{\partial\tau_{ij}}$ have the same vanishing order at the boundary for all $h$ and all $i,j$, it follows that ${\mathcal D}_{Q_{g,a}}(F,\dots,F)$ has vanishing order $\beta\geq gb$. This completes the proof of (ii).
As for (iii), note that $2a\geq g\geq 2$ ensures that the constant $C(1)$ defined in \Cref{sec:explicit} is non-zero and so, by construction, \[ Q_{g,a}={\mathfrak R}(1,\dots,1)+\sum_{{\bf n}\neq(1,\dots,1)} \frac{(2a-g)! c({\bf n})}{(2a-1)!(g-1)}{\mathfrak R}({\bf n})\,. \] By \Cref{cor:summands} it follows that \[ {\mathcal D}_{Q_{g,a}}(F,\dots,F)\equiv g!\ \det(\partial F)\quad\pmod{F}\,, \] and so (iii) is proven. The last claim is an immediate consequence of (i) and (iii), as the modular form ${\mathcal D}_{Q_{g,a}}(F)-{\mathcal D}_{Q'_{g,a}}(F)$ vanishes along $\{F=0\}$. \end{proof} We make one last remark on the above proof. We are not claiming that $Q_{g,a}$ or the associated differential operator ${\mathcal D}_{Q_{g,a}}$ are unique. Since we are looking for polynomials in ${\mathbb{C}}[R_1,\dots,R_g]_2$, these must be linear combinations of the ${\mathfrak R}({\bf n})$'s by \Cref{procesi}. If $Q'_{g,a}\in{\mathbb{C}}[R_1,\dots,R_g]_2$ satisfies property (iii) in \Cref{thm:precise}, then it must take the form \[ Q'_{g,a}={\mathfrak R}(1,\dots,1)+\sum_{{\bf n}\neq(1,\dots,1)} c'({\bf n}) {\mathfrak R}({\bf n}) \] by \Cref{cor:summands}. Hence, the restrictions of ${\mathcal D}_{Q'_{g,a}}(F)$ and ${\mathcal D}_{Q_{g,a}}(F)$ to the locus $\{F=0\}$ agree. Note that the coefficients $c'({\bf n})$ may differ from the $c({\bf n})$ that were defined in \Cref{sec:explicit}.
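As a closing sanity check, the cofactor identity $\sum_{j}(-1)^{i+j}M_{ij}\det(M_{\widehat{\imath}\,\widehat{\jmath}})=\det M$ underlying the proof of \Cref{cor:summands}(i) can be verified numerically; in the following Python sketch (illustrative only) a random symmetric integer matrix plays the role of $(\partial_{ij}F)$:

```python
import random

def det(M):
    """Exact determinant via Laplace expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def minor(M, i, j):
    """The matrix M with row i and column j removed."""
    return [row[:j] + row[j + 1:] for r, row in enumerate(M) if r != i]

random.seed(1)
g = 4
A = [[0] * g for _ in range(g)]          # random symmetric integer matrix
for i in range(g):
    for j in range(i, g):
        A[i][j] = A[j][i] = random.randint(-5, 5)

d = det(A)
for i in range(g):                        # row-wise cofactor identity
    assert sum((-1) ** (i + j) * A[i][j] * det(minor(A, i, j)) for j in range(g)) == d
# summing over i as well gives g * det(A), whence D_{R(1,...,1)}(F) = g! det(dF)
assert sum((-1) ** (i + j) * A[i][j] * det(minor(A, i, j))
           for i in range(g) for j in range(g)) == g * d
```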
\section{Introduction} High-dimensional quantum systems feature a number of interesting phenomena, beyond what is possible for qubit systems. For example, the effect of entanglement is known to become increasingly robust to noise when higher dimensions are considered, the robustness becoming even arbitrarily large \cite{zhu2021high, ecker2019overcoming}. In turn, the nonlocal correlations obtained from measurements on high-dimensional systems also feature significantly increased robustness. Indeed, these effects offer interesting perspectives for quantum information processing, allowing, e.g., for quantum communications over very noisy channels. In this work, we consider the effect of genuine high-dimensional steering (GHDS), which has been introduced recently \cite{designolle2021genuine}. The steering scenario can be viewed as the certification of entanglement between an untrusted party (Alice) and a trusted one (Bob). Hence steering is usually referred to as being one-sided device-independent (1-SDI). The key point of GHDS is to certify not only the presence of entanglement, but also a minimal dimensionality of entanglement (specifically the Schmidt number) from observed correlations in a 1-SDI scenario. More formally, this approach introduces the notion of $n$-preparable assemblages, i.e., those assemblages being preparable based on any possible entangled state of Schmidt number at most $n$; 1-preparable assemblages being then simply those assemblages that cannot lead to steering. Next, one can construct a steering inequality for $n$-preparable assemblages, the violation of which implies the presence of genuine $n+1$-dimensional steering. This was demonstrated in a quantum optics experiment (based on photon-pairs entangled in orbital angular momentum) reporting the 1-SDI certification of 14-dimensional entanglement. A natural question at this point is to understand what are the resources required in terms of measurements for demonstrating GHDS.
Indeed, the effect of steering uses not only an entangled state as a resource, but also a well-chosen set of local measurements for Alice. The latter must be incompatible (in the sense of being non-jointly measurable), and it turns out that steering has a direct connection to measurement incompatibility. The present work explores this question, and establishes a general connection between GHDS and the notion of $n$-simulability of high-dimensional measurements which has been recently introduced in Ref.~\cite{ioannou2022simulability}. This notion generalises the concept of joint measurability and provides a quantification of measurement incompatibility in terms of a dimension. The connection we uncover generalises the well-known relations between quantum steering and joint measurability. Moreover, we also extend the connection to quantum channels, in particular the characterisation of their high-dimensional properties. These general tripartite connections between high-dimensional steering, measurements and channels allow for results of one area to be directly translated into the others, which we illustrate with several examples. \begin{figure*}[t] \centering \includegraphics[width=0.99\textwidth]{steering,n-sim,schmidt} \caption{Concepts and connections that appear in this work. (a) Quantum Steering scenario. (b) A set of measurements is $n$-simulable if it can be replaced by an $n$-partially entanglement breaking channel ($n$-PEB) followed by some measurements. (c) Illustration of the Schmidt number (SN) of a bipartite state: the state of two $5$ level systems is a combination of states with only qubit entanglement, hence the overall state has SN at most $2$. } \label{fig:fig1} \end{figure*} \section{Summary of results} We start by identifying the resources for GHDS. In particular, we show that an assemblage is $n$-preparable if it can be prepared via an entangled state of Schmidt number at most $n$ or if the set of Alice's local measurements is $n$-simulable.
Hence the observation of genuine $n+1$-dimensional steering implies the presence of both (i) an entangled state of Schmidt number (at least) $n+1$, and (ii) a set of measurements for Alice that is not $n$-simulable. In this sense, GHDS provides a dimensional certification of both the entangled state and the local measurements. Moreover, we show that there is a one-to-one mapping between any $n$-preparable assemblage and a set of measurements that is $n$-simulable, generalising the existing connection between steering and joint measurability (corresponding here to the case $n=1$). This connection allows us to import results from one area to the other. For example, we can construct optimal models for simulating the correlations of $d$-dimensional entangled states (so-called isotropic states) based on lower $n$-dimensional entanglement (and classical shared randomness). These simulation models hold for all possible local measurements on Alice's and Bob's side. In this sense, these models can be considered as a generalisation of the well-known local hidden state model of Werner, where classical shared randomness is augmented with low-dimensional entanglement. Moreover, we can translate steering inequalities for GHDS into criteria for testing the non-$n$-simulability of measurements. Finally, we obtain a dimensional characterisation of quantum channels via channel-state duality. In particular, we consider channels that map the set of all measurements to $n$-simulable ones, and describe the corresponding Choi states. We conclude with a number of open questions. \section{Basic concepts and questions} A central notion for us will be \textit{quantum steering}, see e.g. \cite{cavalcanti2016quantum, uola2020quantum} for recent reviews. Here, one party (Alice) performs local measurements $\{M_{a|x}\}$ on a state $\rho_{AB}$, a unit-trace positive semi-definite matrix acting on a finite-dimensional Hilbert space that she shares with another distant party (Bob).
The measurements are collections of matrices for which $M_{a|x} \geq 0$ $\forall a,x$ and $\sum_a M_{a|x} = \mathds{1} $ for each $x$. Here $x$ indexes the measurement and $a$ indexes the outcome. For each $x$, the collection $\{M_{a|x}\}_a$ is called a positive operator-valued measure (POVM for short). By performing her measurements, Alice remotely prepares the system of Bob in different possible states denoted by \begin{equation} \sigma_{a|x}:=\text{Tr}_A \bigg (M_{a|x}\otimes \mathds{1} ~ [\rho_{AB}] \bigg), \label{eq:steeringassem} \end{equation} usually termed an \textit{assemblage}, see Figure (\ref{fig:fig1}a). Such an assemblage demonstrates quantum steering when it does not admit a \textit{local hidden state} (LHS) model, i.e. a decomposition of the form \begin{equation} \label{LHS} \sigma_{a|x} = p(a|x) \sum_\lambda ~ p(\lambda|a,x) ~ \sigma_\lambda \,, \end{equation} where $p(a|x)$ is a normalisation factor and $p(\lambda|a,x)\,\sigma_\lambda$ is an ensemble of states whose priors get updated upon Bob asking Alice to perform the measurement $x$ and her reporting back the outcome $a$. Steering represents a form of quantum correlations that is intermediate between entanglement and Bell nonlocality \cite{wiseman2007steering,quintino2015inequivalence}. Specifically, there exist entangled states that cannot lead to steering, and there exist some steerable states that cannot lead to Bell inequality violation (nonlocality). Also, the steering scenario is commonly referred to as \textit{one-sided device-independent} (1-SDI), as Alice's device is untrusted but Bob's device is fully characterised. Since steering requires the presence of entanglement---separable states always admitting an LHS model---it also represents a 1-SDI method for certifying entanglement. Moreover, steering is an asymmetric phenomenon, as there exist states $\rho_{AB}$ for which steering is only possible in one direction (e.g., from $A$ to $B$) \cite{bowles2014one}.
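As a concrete sketch of the assemblage in Eq.~\eqref{eq:steeringassem}, the following Python snippet computes $\sigma_{a|x}$ for a hypothetical example (a two-qubit maximally entangled state with Pauli-$Z$/$X$ measurements for Alice; not an example taken from the cited experiments) and checks that $\sum_a\sigma_{a|x}$ is the same reduced state $\rho_B$ for every $x$:

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# hypothetical shared state: |phi+> = (|00> + |11>)/sqrt(2)
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho_AB = np.outer(phi, phi.conj())

def povm(op):
    """Two-outcome projective measurement {(1 + op)/2, (1 - op)/2}."""
    return [(I2 + s * op) / 2 for s in (+1, -1)]

measurements = [povm(sz), povm(sx)]                 # settings x = 0, 1

def ptrace_A(X):
    """Partial trace over Alice's qubit of a 4x4 matrix."""
    return np.einsum('aiaj->ij', X.reshape(2, 2, 2, 2))

assemblage = [[ptrace_A(np.kron(M, I2) @ rho_AB) for M in Mx]
              for Mx in measurements]

rho_B = sum(assemblage[0])
for row in assemblage:                              # no-signalling check
    assert np.allclose(sum(row), rho_B)
assert np.allclose(rho_B, I2 / 2)                   # maximally mixed reduced state
```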
One can take the concept of quantum steering a step further in terms of bipartite entanglement detection. Instead of only certifying the presence of entanglement, it is possible to use steering to characterise the entanglement dimensionality, as quantified via the Schmidt number \cite{terhal2000schmidt}. For a pure state $\ket{\psi}$, this corresponds to the \textit{Schmidt rank} (SR), i.e., the minimum number of terms needed to express $\ket{\psi}$ as a linear combination of product states. The \textit{Schmidt number} (SN) \cite{terhal2000schmidt} is a generalisation to mixed states, formally defined as \begin{align} \text{SN}(\rho) := \underset{\{ p_k, ~\ket{\psi_k} \}}{\min} \max_k \quad&\text{SR}(\ket{\psi_k}) \\\nonumber \quad \text{s.t} \quad &\rho = \sum_k p_k \ketbra{\psi_k}{\psi_k}. \end{align} The Schmidt number thus quantifies the entanglement dimensionality, in that it tells the minimum number of degrees of freedom that one needs to be able to entangle in order to produce the state, see Figure (\ref{fig:fig1}c). As an example, witnessing a Schmidt number of three implies that qubit entanglement, even when mixed between different subspaces, is not enough to produce the state. In \cite{designolle2021genuine}, the concept of \textit{genuine high-dimensional steering} (GHDS) was introduced, where one asks whether a given assemblage $\sigma_{a|x}$ can be produced using a bipartite state $\rho_{AB}$ of Schmidt number at most $n$, in which case we term the assemblage \textit{$n$-preparable}. In this framework, an assemblage is LHS if and only if it is $1$-preparable, as any LHS assemblage can be prepared using only separable states \cite{Kogias15,Moroder16}. Hence if an assemblage is not $n$-preparable, this guarantees that the underlying state $\rho_{AB}$ is of Schmidt number at least $n+1$.
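For pure states, the Schmidt rank can be read off from a singular value decomposition of the coefficient matrix; a minimal Python illustration with hypothetical example states (the Schmidt number of a mixed state additionally requires a minimisation over all decompositions, which is not attempted here):

```python
import numpy as np

def schmidt_rank(psi, dA, dB, tol=1e-10):
    """Schmidt rank of a pure state |psi> in C^dA (x) C^dB:
    the number of nonzero singular values of its coefficient matrix."""
    C = np.asarray(psi, dtype=complex).reshape(dA, dB)
    return int(np.sum(np.linalg.svd(C, compute_uv=False) > tol))

e = lambda d, i: np.eye(d)[i]                   # computational basis vectors

assert schmidt_rank(np.kron(e(3, 0), e(3, 1)), 3, 3) == 1      # product state
psi3 = sum(np.kron(e(3, i), e(3, i)) for i in range(3)) / np.sqrt(3)
assert schmidt_rank(psi3, 3, 3) == 3            # maximally entangled qutrits
# two 5-level systems entangled only within a 2x2 block (cf. Fig. 1c): rank 2
branch = (np.kron(e(5, 0), e(5, 0)) + np.kron(e(5, 1), e(5, 1))) / np.sqrt(2)
assert schmidt_rank(branch, 5, 5) == 2
```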
This represents a 1-SDI certification of entanglement dimensionality, illustrated in a recent quantum optics experiment certifying up to 14-dimensional entanglement \cite{designolle2021genuine}. So far, the focus of GHDS is on the dimensionality of the shared entangled state. There is however another resource that is crucial for observing quantum steering, namely the set of measurements performed by Alice, which must be incompatible. More generally, there exists in fact a deep connection between measurement incompatibility (in the sense of being not jointly measurable) and quantum steering \cite{quintino14,uola14,uola15}. In particular, this implies that any set of incompatible measurements for Alice can be combined with an appropriate state $\rho_{AB}$ for demonstrating steering. This naturally raises the question of what are the necessary resources in terms of measurements for demonstrating GHDS. Intuitively, the latter should also require a minimal ``dimensionality'' for the set of measurements. Below we will make this intuition precise, by using the concept of $n$-simulability of a set of measurements. More generally, we will establish a deep connection between GHDS (more precisely the notion of $n$-preparability of an assemblage) and $n$-simulability of sets of measurements. This generalises the previously known connection between steering and measurement incompatibility. A set of measurements $\{M_{a|x}\}$, defined on a Hilbert space of dimension $d$, is said to be $n$-simulable when the statistics of this set of measurements on any possible quantum state can be exactly recovered using a form of compression of quantum information to a lower $n$-dimensional space. Consider for example Alice (on the moon) sending an arbitrary state $\rho$ to a distant party Bob (on earth), who will perform a set of POVMs $\{M_{a|x}\}$ (see Fig. 1). Which POVM Bob performs depends on some input $x$. The expected (target) data is given by $p(a|x,\rho) = \Tr(M_{a|x} \rho)$.
As a resource, we consider here the dimensionality of the quantum channel between Alice and Bob, while a classical channel is always available for free. The goal is then to compress as much as possible the initial state of Alice, in order to use a quantum channel with minimal dimension, while still recovering exactly the target data. More formally, we demand that \begin{equation} \label{n-simulable} M_{a|x} = \sum_{\lambda} \Lambda_{\lambda}^*( N_{a|x,\lambda}) \end{equation} where $\Lambda = \{\Lambda_{\lambda}\}_{\lambda}$ denotes the instrument (compressing from dimension $d$ to $n$), with classical output $\lambda$, and $N_{a|x,\lambda}$ is a set of $n$-dimensional POVMs performed by Bob upon receiving the input $x$ and the classical information $\lambda$ communicated by Alice. Here $\Lambda_\lambda^*$ refers to the Heisenberg picture of $\Lambda_\lambda$. A set of measurements is termed $n$\textit{-simulable} whenever a decomposition of the form \eqref{n-simulable} can be found. An important case is $1$-simulability, i.e., when the full quantum information can be compressed to a purely classical one. This is possible if and only if the set of POVMs is jointly measurable, i.e., $M_{a|x} = \sum_\lambda ~ p(a|x,\lambda) ~ G_\lambda$, for some probability distribution $p(a|x,\lambda)$ and a ``parent'' measurement $G_\lambda$, see \cite{JMinvitation,JMreview} for reviews on the topic. A set of POVMs that is not jointly measurable (hence called \textit{incompatible}), can nevertheless be $n$-simulable, for some $n$ with $2 \leq n \leq d$. The notion of $n$-simulability can also be connected to quantum channels, and their dimensional properties. This requires the use of a property of channels that is analogous to the Schmidt number of bipartite states. Namely, one says that a channel $\Lambda$ is \textit{$n$-partially entanglement breaking} ($n$-PEB) if $\text{SN}((\Lambda \otimes \mathds{1})[\rho]) \leq n$ for all $\rho$ \cite{chruscinski2006partially}.
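A minimal illustration of the $n=1$ case (joint measurability) is the standard textbook example, not taken from the present paper, of two noisy mutually unbiased qubit measurements $M_{\pm|x}=\tfrac12(\mathds{1}\pm\eta\,\sigma_{x/z})$: at visibility $\eta=1/\sqrt{2}$ they admit an explicit parent measurement $G$. The Python sketch below verifies the parent-POVM property:

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

eta = 1 / np.sqrt(2)                      # critical visibility for this pair
G = {(a, b): (I2 + (a * sx + b * sz) / np.sqrt(2)) / 4
     for a in (+1, -1) for b in (+1, -1)}

# G is a valid POVM: positive semi-definite elements summing to the identity
for g_ab in G.values():
    assert np.linalg.eigvalsh(g_ab).min() > -1e-12
assert np.allclose(sum(G.values()), I2)

# coarse-graining G reproduces both noisy Pauli measurements exactly
for a in (+1, -1):
    assert np.allclose(G[(a, +1)] + G[(a, -1)], (I2 + eta * a * sx) / 2)
    assert np.allclose(G[(+1, a)] + G[(-1, a)], (I2 + eta * a * sz) / 2)
```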
Clearly, for the case $n=1$ this concept corresponds to entanglement breaking channels. This leads to an alternative formulation of \textit{$n$-simulability}, which we will primarily use in the following sections: a measurement assemblage $M_{a|x}$ is \textit{$n$-simulable} if and only if there exists an $n$-PEB quantum channel $\Lambda$ and a measurement assemblage $N_{a|x}$ such that $M_{a|x} = \Lambda^* \big ( N_{a|x} \big )$. In the rest of the paper, we will first establish precisely the connection between $n$-preparability and $n$-simulability. In turn, we will discuss simulation models for the correlations of entangled states (of Schmidt number $d$) using as a resource lower-dimensional entanglement (of Schmidt number $n<d$), considering all possible measurements. This idea can be seen as a generalisation of the problem of simulating the correlations of entangled states via local hidden variables (or local hidden state models). Finally, in the last section of the paper, we will also extend the connection to quantum channels and their characterisation in terms of dimension. This will provide a full tripartite connection, for characterising dimension in steering assemblages, incompatibility of sets of measurements, and quantum channels. \section{High-dimensional steering and simulability of measurements} In this section, we present in detail the structural connection between $n$-preparability of steering assemblages and $n$-simulability of sets of measurements. We start with a first result clearly identifying the resource for GHDS. More precisely, the following Theorem implies that observing GHDS, i.e., an assemblage which is not $n$-preparable, implies that (i) the shared entangled state $\rho_{AB}$ has at least Schmidt number $n+1$, and (ii) the set of measurements $\{M_{a|x}\}$ performed by Alice is not $n$-simulable.
In other words, one really needs both high-dimensional entanglement and high-dimensional measurement incompatibility to witness genuine high-dimensional steering. More formally, we can prove the following. \begin{theorem} \label{theorem:compat->prep} If $M_{a|x}$ is $n$-simulable or $\rho_{AB}$ has Schmidt number at most $n$, then the assemblage \begin{equation} \sigma_{a|x}:=\textup{Tr}_A \bigg (M_{a|x}\otimes \mathds{1} ~ [\rho_{AB}] \bigg) \end{equation} is $n$-preparable. \end{theorem} \begin{proof} If $\rho_{AB}$ has SN at most $n$, this simply follows from the definition of $n$-preparability. Now suppose that $M_{a|x}$ is $n$-simulable. Then there exists an $n$-PEB channel $\Lambda$ and measurements $N_{a|x}$ such that $M_{a|x} = \Lambda^* (N_{a|x})$. By the definition of the dual, we can hence write \begin{align} \sigma_{a|x} &= \Tr_A \Big ( \Lambda^* (N_{a|x}) \otimes \mathds{1} [ \rho_{AB} ] \Big ) \\ & = \Tr_A \Big ( \big(N_{a|x} \otimes \mathds{1}\big) \big( \Lambda \otimes \mathds{1} \big) [\rho_{AB}] \Big ) \end{align} and as $\Lambda$ is $n$-PEB, $\Lambda \otimes \mathds{1} [\rho_{AB}]$ has SN at most $n$, so $\sigma_{a|x}$ is $n$-preparable. \end{proof} It is worth noting that, for the simplest case of $n=1$, the above Theorem corresponds to the well-known fact that an assemblage constructed from a separable state or via a jointly measurable set of POVMs always admits an LHS model. In other words, the observation of steering proves the presence of an entangled state and an incompatible set of POVMs for Alice. Our next result establishes a general equivalence between any $n$-preparable assemblage and a set of POVMs that is $n$-simulable, and vice versa. 
The main idea is that a set of quantum measurements $M_{a|x}$ and a steering assemblage $\sigma_{a|x}$ are very similar types of mathematical objects: both are composed of positive semi-definite matrices, and $\sum_a M_{a|x}=\mathds{1}\quad \forall x$ whereas $\sum_a \sigma_{a|x}$ will be equal to some fixed state $\rho_B = \text{Tr}_A (\rho_{AB})$ for all $x$. A direct connection can be established, namely that $\sigma_{a|x}$ is LHS if and only if $\rho_B^{-\frac{1}{2}} \sigma_{a|x} \rho_B^{-\frac{1}{2}}$ is jointly measurable (when interpreted as a set of measurements) \cite{uola15}. The Theorem below can be considered a generalisation of this result, in the sense that the proof of Ref. \cite{uola15} corresponds to the case $n=1$. \begin{theorem} \label{thm: n-sim = n-prep} Consider a steering assemblage $\sigma_{a|x}$ and measurements $M_{a|x}$ such that $M_{a|x}=\rho_B^{-\frac{1}{2}} ~ \sigma_{a|x} ~ \rho_B^{-\frac{1}{2}}$, where $\rho_{B} := \sum_a \sigma_{a|x}$ is of full rank. Then $M_{a|x}$ is $n$-simulable if and only if $\sigma_{a|x}$ is $n$-preparable. \end{theorem} \begin{proof} Let $N_{a|x}$ be a measurement assemblage and $\rho_{AB}$ be a state such that $\Tr_A(\rho_{AB}) = \rho_B$. Let $(\cdot)^T$ denote the transpose with respect to an eigenbasis of $\rho_B$. We then have the following equivalences \begin{align} \sigma_{a|x} &= \Tr_A (N_{a|x} \otimes \mathds{1}~\rho_{AB} ) \\ \iff M_{a|x}&= ~ \rho_B^{ -\frac{1}{2}} ~\Tr_A (N_{a|x} \otimes \mathds{1}~\rho_{AB} ) ~ \rho_B^{ -\frac{1}{2}} \\ \iff M_{a|x}^T&= ~ \rho_B^{ -\frac{1}{2}} ~\Tr_A (N_{a|x} \otimes \mathds{1}~\rho_{AB} )^T ~ \rho_B^{ -\frac{1}{2}} \\ \iff M_{a|x}^T &= \Lambda_{\rho_{AB}}^* \Big ( N_{a|x} \Big ), \end{align} where in the third line we used the fact that $(\rho_B^{-\frac{1}{2}})^T=\rho_B^{-\frac{1}{2}}$, as the transpose is taken in an eigenbasis of $\rho_B$, and in the last line we have invoked the form of channel-state duality from Ref.~\cite{kiukas2017continuous}. 
Now observe that the existence of a state $\rho_{AB}$ in the above with Schmidt number at most $n$ is equivalent to $\sigma_{a|x}$ being $n$-preparable. We can also see that there exists $\rho_{AB}$ with SN$(\rho_{AB})\leq n$ if and only if $M_{a|x}^T$ is $n$-simulable, as such a state corresponds to $\Lambda_{\rho_{AB}}$ being $n$-PEB, see Appendix A for details. To finalize the proof we must show that $M_{a|x}$ is $n$-simulable if and only if $M_{a|x}^T$ is $n$-simulable. This can be seen as follows. First note that $M_{a|x}^T$ defines a valid collection of measurements. Suppose that $M_{a|x} = \Lambda^*(N_{a|x})$ with $\Lambda$ $n$-PEB and $N_{a|x}$ arbitrary measurements. Then letting $\mathcal{T}$ denote the transpose map, we have that $M_{a|x}^T = (\mathcal{T} \circ \Lambda^*)(N_{a|x}) = ( \Lambda \circ \mathcal{T}^*)^*(N_{a|x})$. As $\Lambda$ is $n$-PEB, $\Lambda \circ \mathcal{T}^*$ is also $n$-PEB. Hence $M_{a|x}^T$ is $n$-simulable. The converse direction follows from $(M_{a|x}^T)^T = M_{a|x}$. \end{proof} As a technical remark, note that as for any $a$ and $x$ the support of $\sigma_{a|x}$ is contained within the support of $\rho_B = \sum_a \sigma_{a|x}$ (this follows as $\sigma_{a|x}$ are all positive semi-definite), we can still invoke the above theorem in the case where $\rho_B$ is not full rank, by restricting $\sigma_{a|x}$ to the support of $\rho_B$. Theorem \ref{thm: n-sim = n-prep} also allows us to prove the following result, which complements Theorem \ref{theorem:compat->prep}. This shows that for any set of POVMs that is not $n$-simulable, one can always find an entangled state such that the resulting assemblage is not $n$-preparable. Again, this generalizes some previous results stating that any incompatible set of POVMs can lead to steering \cite{quintino14,uola14}, which corresponds to the case $n=1$ of the proposition below. 
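A quick numerical sanity check of the correspondence behind Theorem~\ref{thm: n-sim = n-prep}: for an assemblage generated by a generic bipartite pure state (so that $\rho_B$ is full rank), the rescaled operators $\rho_B^{-1/2}\sigma_{a|x}\rho_B^{-1/2}$ indeed form a valid POVM. The state and measurement below are random; this is an illustrative sketch, not part of the proof:

```python
import numpy as np

rng = np.random.default_rng(0)
dA = dB = 3

# Random bipartite pure state (generically rho_B is full rank).
psi = rng.normal(size=(dA, dB)) + 1j * rng.normal(size=(dA, dB))
psi /= np.linalg.norm(psi)

# Random PVM on Alice from a unitary (QR of a Gaussian matrix).
U, _ = np.linalg.qr(rng.normal(size=(dA, dA)) + 1j * rng.normal(size=(dA, dA)))
M = [np.outer(U[:, a], U[:, a].conj()) for a in range(dA)]

# Assemblage sigma_a = Tr_A[(M_a (x) 1) |psi><psi|].
rho4 = np.outer(psi.ravel(), psi.ravel().conj()).reshape(dA, dB, dA, dB)
sigma = [np.einsum('ab,bkal->kl', Ma, rho4) for Ma in M]

rhoB = sum(sigma)
w, V = np.linalg.eigh(rhoB)
inv_sqrt = V @ np.diag(w ** -0.5) @ V.conj().T

# The rescaled assemblage is a valid set of POVM elements:
B = [inv_sqrt @ s @ inv_sqrt for s in sigma]
assert np.allclose(sum(B), np.eye(dB))
assert all(np.linalg.eigvalsh(b).min() > -1e-9 for b in B)
```

The completeness $\sum_a B_a = \mathds{1}$ follows directly from $\sum_a \sigma_{a|x} = \rho_B$, which the check confirms numerically.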
\begin{proposition} If $M_{a|x}$ is not $n$-simulable, then the assemblage \begin{equation} \sigma_{a|x}:=\textup{Tr}_A \bigg (M_{a|x}\otimes \mathds{1} ~ \ketbra{\Phi^+} \bigg) \end{equation} is not $n$-preparable, where $\ket{\Phi^+}=\frac{1}{\sqrt{d}}\sum_i \ket{ii}$. \end{proposition} \begin{proof} We have that \begin{align} \sigma_{a|x}=\text{Tr}_A \bigg (M_{a|x}\otimes \mathds{1} ~ \ketbra{\Phi^+} \bigg) = \frac{1}{d}~ M_{a|x}^T. \end{align} By the proof of Theorem~\ref{thm: n-sim = n-prep}, if $M_{a|x}$ is not $n$-simulable, then $M_{a|x}^T$ is not $n$-simulable. Then invoking Theorem \ref{thm: n-sim = n-prep} with $\rho_B = \frac{\mathds{1}}{d}$, we have that $\sigma_{a|x}$ is not $n$-preparable. \end{proof} In the final part of this section, we show that the trade-off between high-dimensional entanglement, high-dimensional measurement incompatibility, and high-dimensional steering can be made quantitative. For this, we use a specific resource quantifier known as the convex weight \cite{Steeringweight}. Consider for example the quantification of entanglement via the weight. For any entangled state $\rho$, we can measure its entanglement through its weight, given by the following quantity \begin{equation} \label{eq: WeightDef} \begin{split} \mathcal{W}_F(\rho) := &\min \lambda\\ &\mathrm{s.t.}\ \rho = (1-\lambda) \rho_{sep} + \lambda\sigma, \end{split} \end{equation} where the minimisation runs over any state $\rho_{sep}$ that is separable, and $\sigma$ an arbitrary state. As expected, $\mathcal{W}_F (\rho) =0$ when $\rho$ is separable. More generally, this quantifier can apply to objects such as states, measurements or steering assemblages, with respective free sets $E_n$: the set of states with Schmidt number at most $n$, $S_n$: the set of $n$-simulable measurement assemblages, and $P_n$: the set of $n$-preparable steering assemblages. 
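The identity used in the proof above, $\text{Tr}_A \big( M \otimes \mathds{1} ~ \ketbra{\Phi^+} \big) = \frac{1}{d} M^T$, can be verified numerically for a random effect. The dimension and the effect below are arbitrary choices for this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3

# |Phi+> = (1/sqrt(d)) sum_i |ii>, flattened with Alice's index first.
phi = np.eye(d).ravel() / np.sqrt(d)
rho4 = np.outer(phi, phi.conj()).reshape(d, d, d, d)

# A random Hermitian effect with 0 <= M <= identity.
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
M = A @ A.conj().T
M /= np.linalg.eigvalsh(M).max()  # rescale so that M <= identity

sigma = np.einsum('ab,bkal->kl', M, rho4)  # Tr_A[(M (x) 1) |Phi+><Phi+|]
assert np.allclose(sigma, M.T / d)
```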
We can now state our next result, which quantitatively illustrates the necessity of high-dimensional measurement incompatibility and entanglement for GHDS: \begin{restatable}{theorem}{weighttheorem} \label{thm:weight} Given an assemblage $\sigma_{a|x}=\textup{Tr}_A (M_{a|x}\otimes \mathds{1} ~ [\rho_{AB}])$, we have the following inequality: \[ \mathcal{W}_{P_n}(\sigma_{a|x}) \leq \mathcal{W}_{S_n}(M_{a|x}) \mathcal{W}_{E_n}(\rho_{AB}). \] For the case $n=1$ we get a quantitative connection among steering, measurement incompatibility and entanglement. \end{restatable} We defer the proof of this theorem to the appendix. \section{Simulating the correlations of high-dimensional entangled states using low-dimensional entanglement} Strong demonstrations of the non-classical correlations of entangled states come from the observation of Bell inequality violation, or from quantum steering. A long-standing topic of research is to understand the link between entanglement and these stronger forms of quantum correlations, see e.g. \cite{brunner2014bell,Augusiakreview}. In a seminal paper, Werner showed that certain entangled states, referred to as Werner states, cannot lead to Bell inequality violation \cite{Werner1989}. This result is based on the construction of an explicit local hidden variable model that reproduces exactly the correlations expected from any possible local projective measurements on the Werner state. Moreover, it turns out that the model constructed by Werner is in fact of the form of an LHS model (as in Eq. \eqref{LHS}, see also Fig~\ref{fig:LHSmodel}), hence these Werner states can also never lead to quantum steering \cite{wiseman2007steering}. Note that these results can be extended to general POVMs using the model of Ref. \cite{barrett2002nonsequential}, which can be shown to be of LHS form \cite{quintino2015inequivalence}. 
Here we revisit the above questions and propose a new perspective, based on the ideas developed in the previous sections of the paper. Instead of considering simulation models that involve only classical resources (classical shared randomness), we consider now simulation models assisted by entanglement, see Fig.~\ref{fig:EALHSmodel}. Of course, for this problem to be non-trivial, we must demand that the entanglement used in the simulation model is somehow weaker compared to the entanglement of the original state to be simulated. The dimensionality of entanglement (as given by the Schmidt number) provides a good measure for this problem. Consider an entangled state $\rho_{AB}$ of Schmidt number $d$ and arbitrary local measurements (possibly infinitely many) for both Alice and Bob. We now ask if we can simulate the resulting correlations with a model involving lower-dimensional entangled states (of Schmidt number $n<d$) and classical shared randomness. Of course, building such models can be challenging, as the model should reproduce exactly all correlations for any possible choice of local measurements. Nevertheless, we will see that using the ideas developed above, we can come up with such entanglement-assisted simulation models, and moreover prove their optimality. The main idea to construct these simulation models is to apply Theorem~\ref{theorem:compat->prep} to a result obtained recently in \cite{ioannou2022simulability}. The latter consists in obtaining bounds (in terms of noise robustness) for $n$-simulability for the (continuous) set of all projective measurements (in dimension $d$) under white noise. From Theorem~\ref{theorem:compat->prep}, we obtain an equivalent assemblage (with a continuous input $x$) that is $n$-preparable. The last point is to notice that this assemblage corresponds in fact to the one obtained from performing arbitrary local projective measurements on a shared entangled state $\rho_{AB}$, which takes the form of an isotropic state, i.e. 
\begin{equation} \label{iso} \rho(\eta'):= \eta' \ketbra{\Phi^+} + (1-\eta') \frac{\mathds{1}}{d^2} \end{equation} where $\ket{\Phi^+}=\frac{1}{\sqrt{d}}\sum_i \ket{ii}$ and $0 \leq \eta' \leq 1$. Hence we obtain a simulation model using only entanglement with Schmidt number $n$ which reproduces exactly the correlations of some isotropic state of dimension $d\times d$. Interestingly, it appears that this isotropic state can have a Schmidt number that is larger than $n$. More formally, consider the set of all projective measurements (PVMs) subject to white noise \begin{equation}\label{noisyPVM} \mathcal{M}_{PVM}^\eta:=\bigg \{\eta M_{a|U} + (1-\eta)\frac{\mathds{1}}{d} ~ : ~ U\in U(d) \bigg \} \,, \end{equation} where $U(d)$ is the unitary matrix group, $M_{a|U} = U\ketbra{a}U^\dagger$ and $\ket{a}$ denotes the computational basis. It was shown in \cite{ioannou2022simulability} that the set $\mathcal{M}_{PVM}^\eta$ is $n$-simulable if $\eta \leq (d \sqrt{\frac{n+1}{d+1}}-1)(d-1)^{-1}$. Then by passing the noise from the measurements onto the state (see for example \cite{uola14}), we have that: \begin{align} &\text{Tr}_A \bigg (\bigg [\eta M_{a|U} + (1-\eta)\frac{\mathds{1}}{d} \bigg ]\otimes \mathds{1} ~ \ketbra{\Phi^+} \bigg )\\ =&\text{Tr}_A \bigg (M_{a|U}\otimes \mathds{1} ~ \rho (\eta) \bigg ). \end{align} Hence we reproduce exactly the assemblage expected from arbitrary projective measurements on an isotropic state with $\eta'= \eta$. Moreover, it is known that $\text{SN}(\rho(\eta)) \geq n+1 \quad \text{if} \quad \eta > \frac{dn-1}{d^2-1}$ \cite{terhal2000schmidt}. Hence for \begin{equation} \frac{dn-1}{d^2-1} < \eta \leq \frac{d \sqrt{\frac{n+1}{d+1}}-1}{d-1} \end{equation} the resulting assemblage can be reproduced via a simulation model involving only entangled states of Schmidt number $n$, despite the state possessing a Schmidt number of at least $n+1$. More generally, one can deduce a general bound on the noise parameter $\eta$ for guaranteeing $n$-preparability. 
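The window in which the simulation model is non-trivial can be computed directly from the two bounds above. The following sketch simply evaluates the formulas quoted from \cite{ioannou2022simulability} and \cite{terhal2000schmidt}, and checks that the interval is non-empty for every $n < d$ in the example of dimension four:

```python
import numpy as np

def eta_sim_pvm(d, n):
    # n-simulability threshold for all noisy PVMs (Ioannou et al. 2022).
    return (d * np.sqrt((n + 1) / (d + 1)) - 1) / (d - 1)

def eta_sn(d, n):
    # The isotropic state has Schmidt number >= n+1 above this value
    # (Terhal-Horodecki bound).
    return (d * n - 1) / (d ** 2 - 1)

d = 4
for n in range(1, d):
    lo, hi = eta_sn(d, n), eta_sim_pvm(d, n)
    assert lo < hi  # a non-empty window exists for every n < d
    print(f"n={n}: SN >= {n+1} for eta > {lo:.3f}, "
          f"yet an n-simulable model exists up to eta = {hi:.3f}")
```

For $d=4$, $n=1$, for instance, the window is $0.2 < \eta \leq (4\sqrt{2/5}-1)/3 \approx 0.51$.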
We have illustrated these bounds in Fig.~\ref{fig:my_label} for the case of dimension four. Remarkably, as the construction for PVMs in Ref. \cite{ioannou2022simulability} is optimal, the simulation models we obtain are also optimal (considering all possible PVMs). An interesting question is to understand how to extend these bounds considering all POVMs, but this is a challenging question, still open for the simplest case of $n=1$. \begin{figure} \centering \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=\textwidth]{steering_scen_2}\vspace{-10pt} \caption{} \label{fig:LHSmodel} \end{subfigure} \hfill \begin{subfigure}[b]{0.5\textwidth} \centering \vspace{5pt}\includegraphics[width=\textwidth]{steering_scen_3} \vspace{-10pt} \caption{} \label{fig:EALHSmodel} \end{subfigure} \hfill \begin{subfigure}[b]{0.5\textwidth} \centering \hspace*{-5pt}\includegraphics[width=\textwidth]{SDI-SN_plot} \vspace*{-15pt} \caption{} \label{fig:my_label} \end{subfigure} \caption{(a) Local hidden state model: one aims at simulating the assemblage with a separable state.\\ (b) $n$-preparability: simulation of an assemblage using states with low-dimensional entanglement.\\ (c) High-dimensional entanglement and steering properties of the isotropic state with local dimension $d=4$. The Schmidt number (SN) bounds can be found in \cite{terhal2000schmidt}, and in this work we translate known values on the $n$-simulability of all PVMs from \cite{ioannou2022simulability} to thresholds on states being such that they can only lead to $n$-preparable assemblages under all projective measurements. In the figure this is referred to as the one-sided semi-device independent Schmidt number (1-SDI-SN) under all PVMs. 
The bound for LHS models for all POVMs is from \cite{barrett2002nonsequential, almeida2007noise}.} \end{figure} \section{Criteria for $n$-simulability} The connections established in Section III also allow us to translate $n$-preparability inequalities into criteria for $n$-simulability. As an example, we take the set of $n$-preparability witnesses presented in Ref. \cite{designolle2021genuine}. Such witnesses state that for an $n$-preparable state assemblage $\{\sigma_{a|x}\}$ with $2$ inputs and $d$ outputs, one has that \begin{equation} \label{witness} \sum_{a,x}\text{Tr}[\sigma_{a|x}W_{a|x}]\leq N\Big(\frac{\sqrt{n}-1}{\sqrt{n}+1}+1\Big) \,, \end{equation} where $N=1+1/\sqrt{d}$. The witness $W_{a|x}$ consists of a pair of mutually unbiased bases (MUBs for short), with the second basis transposed in the computational basis, i.e., $W_{a|1}=|a\rangle\langle a|$ and $W_{b|2}=|\varphi_b\rangle\langle\varphi_b|^T$, where $\{|a\rangle\}$ is the computational basis and $\{|\varphi_b\rangle\}$ is an orthonormal basis with the property $|\langle a|\varphi_b\rangle|^2=1/d$ for each $a$ and $b$. As an $n$-simulable set of measurements leads to an $n$-preparable state assemblage by Theorem \ref{theorem:compat->prep}, violation of a witness of this type in a steering scenario verifies that Alice's measurements are not $n$-simulable. As an example, we take a pair of MUBs subjected to white noise with visibility $\eta$ (similarly to Eq. \eqref{noisyPVM}) on Alice's side and the isotropic state \eqref{iso}. Plugging the resulting assemblage into the witness \eqref{witness}, we get that \begin{align} \eta\leq\frac{(d+\sqrt{d}-1)\sqrt{n}-1}{(d-1)(\sqrt{n}+1)}. \end{align} Hence, for a visibility larger than this bound, a pair of MUBs is provably not $n$-simulable. We note that for the case $n=1$ we retrieve the known tight joint measurability threshold of two MUBs subjected to white noise \cite{Carmeli12,Haa15,Uola16}. 
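The visibility bound above is easy to evaluate. The sketch below computes it and checks the $n=1$ special case: the formula reduces to $\eta = \frac{1}{2}\big(1 + \frac{1}{1+\sqrt{d}}\big)$, which for qubits gives the familiar $1/\sqrt{2}$ threshold for two unbiased observables:

```python
import numpy as np

def eta_mub_pair(d, n):
    # Visibility above which a pair of noisy MUBs in dimension d
    # is provably not n-simulable (witness bound above).
    rn = np.sqrt(n)
    return ((d + np.sqrt(d) - 1) * rn - 1) / ((d - 1) * (rn + 1))

# n = 1 recovers the tight joint-measurability threshold of two MUBs.
for d in (2, 3, 4, 9):
    assert np.isclose(eta_mub_pair(d, 1), 0.5 * (1 + 1 / (1 + np.sqrt(d))))
assert np.isclose(eta_mub_pair(2, 1), 1 / np.sqrt(2))
print(eta_mub_pair(4, 2))  # example: threshold for d = 4, n = 2
```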
Obtaining similar bounds for complete sets of MUBs, known for $n=1$ \cite{Designolle2019}, would be interesting. \section{Quantum channels} An important superset of entanglement breaking channels is that of incompatibility breaking channels \cite{heinosaari2015incompatibility}, which are channels $\Lambda$ such that $\Lambda^*(M_{a|x})$ is jointly measurable for any $M_{a|x}$. Via channel-state duality, these channels correspond respectively to separable and unsteerable states (where the direction of unsteerability corresponds to whether the channel is applied on the first or second system in the definition of channel-state duality). The connections between high-dimensional steering, $n$-simulability and $n$-PEB channels motivate the following definition: \begin{definition} \label{def: PSB} A channel $\Lambda$ is \textbf{$n$-partially incompatibility breaking} ($n$-PIB) if for any measurement assemblage $N_{a|x}$ the resulting measurement assemblage $\Lambda^*(N_{a|x})$ is $n$-simulable\footnote{We note that our definition here is different to the notion of $n$-incompatibility breaking channels defined in \cite{heinosaari2015incompatibility}, which denotes channels that break the incompatibility of any $n$ observables.}. \end{definition} Hence, just as $\Lambda \otimes \mathds{1}$ maps all bipartite states to states with Schmidt number $n$ for $\Lambda$ a $n$-PEB channel, an $n$-PIB channel maps any measurement assemblage to an $n$-simulable one (in the Heisenberg picture). We can also gain insight from considering the structure of $n$-PIB channels and their relation to $n$-PEB channels. Elaborating upon Def.~\ref{def: PSB}, for $\Lambda$ to be $n$-PIB we require that for all measurement assemblages $N_{a|x}$, there exists an $n$-PEB channel $\Omega$ and a set of measurements $M_{a|x}$ such that \begin{equation} \Lambda^*(N_{a|x})=\Omega^*(M_{a|x}). 
\label{eq:explicit-n-PIB} \end{equation} Therefore, by simply taking $\Omega:=\Lambda$ and $M_{a|x}:=N_{a|x}$ in Eq.~\eqref{eq:explicit-n-PIB}, we immediately arrive at the following result: \begin{proposition} Every $n$-PEB channel is $n$-PIB. \end{proposition} It is illuminating to consider the corresponding Choi states. For $n$-PEB channels, the Choi states are exactly the states with Schmidt number at most $n$ \cite{chruscinski2006partially}. For $n$-PIB channels, we have the following result: \begin{theorem} \label{thm:pib = sdi-sn} $\Lambda$ is $n$-PIB if and only if $\rho_\Lambda$ only leads to $n$-preparable assemblages. \end{theorem} \begin{proof} Let $\sigma=\text{Tr}_A(\rho_\Lambda)$ fix the channel-state correspondence. Suppose $\Lambda$ is $n$-PIB, that is, for all measurements $N_{a|x}$, we have that $\Lambda^*(N_{a|x})$ is $n$-simulable. By Theorem \ref{thm: n-sim = n-prep}, this is equivalent to $\sigma^{\frac{1}{2}}\Lambda^*(N_{a|x})^T\sigma^{\frac{1}{2}}$ being $n$-preparable for all $N_{a|x}$. Via channel-state duality, this is equivalent to \begin{equation} \text{Tr}_A(N_{a|x} \otimes \mathds{1} ~ \rho_\Lambda) \end{equation} being $n$-preparable for all $N_{a|x}$. \end{proof} The result of the above Theorem is put into context of other similar type connections between a channel and its Choi state in Table~\ref{tab:cs-duality}. We note that our results on bounding entanglement assisted simulation models for the noisy isotropic state translate directly into bounds on the identity channel under depolarising noise for being $n$-PIB on the restricted class of projective measurements. This also shows that when only projective measurements are considered, there are channels that are $n$-PIB without being $n$-PEB. 
\setlength{\tabcolsep}{5pt} \renewcommand{\arraystretch}{2} \begin{table}[] \centering \begin{tabular}{|c|c|c|}\hline Channel & State & Reference \\\hline Entanglement breaking & Separable & \cite{horodecki2003entanglement} \\ Incompatibility breaking & Unsteerable & \cite{heinosaari2015incompatibility, kiukas2017continuous}\\ $n$-PEB & SN $n$ & \cite{terhal2000schmidt, chruscinski2006partially} \\ $n$-PIB & SDI-SN $n$ & Theorem \ref{thm:pib = sdi-sn} \\\hline \end{tabular} \caption{Connections between channels and their Choi states. Our work naturally extends this picture by generalising both incompatibility breaking channels and unsteerable states in terms of dimension, and proving that they directly correspond to each other through generalised channel-state duality.} \label{tab:cs-duality} \end{table} \section{Conclusions} We have uncovered deep connections between high-dimensional versions of quantum steering, measurement incompatibility, and quantum channels, and demonstrated how a rich transfer of information is possible between these areas. In particular, we showed that the concept of $n$-simulability for sets of POVMs is equivalent to $n$-preparability for state assemblages in steering. This generalises the well-known connection between steering and joint measurability, which simply corresponds here to the case $n=1$. We identified the resources required for observing GHDS, in particular that both high-dimensional measurements and high-dimensional entanglement are necessary. In the light of these results, we conclude that the experiment of Ref.~\cite{designolle2021genuine} also demonstrates measurements in pairs of MUBs that are highly incompatible, in the sense that they are not $14$-simulable. Another direction is the idea of quantifying the degree of steering of an entangled state via a dimension. We obtained optimal models for isotropic entangled states, considering all projective measurements. 
This can be seen as a generalisation of the well-known type of local (hidden state) models by Werner, now allowing for low-dimensional entanglement as a resource. In turn, this leads to a characterisation of channels that map any set of projective measurements into $n$-simulable ones. There are many exciting notions to explore that would extend this research direction. It would be useful to have better bounds on both $n$-preparability and $n$-simulability, and our work demonstrates that any progress here can be readily applied to both notions, providing a practical bridge between the two scenarios. Of particular interest would be to find bounds on the isotropic state being of SDI-SN $n$ under all POVMs, which would directly translate into the $n$-simulability of all POVMs. This follows analogous lines to the $n=1$ case (finding LHS bounds under projective/POVM measurements) \cite{barrett2002nonsequential}. A natural further question would be to explore these questions in the context of nonlocality \cite{brunner2014bell}, which can be thought of as a fully-device independent (FDI) regime. Analogously to the steering case, one could define a behaviour $p(a,b|x,y)$ to be $n$-preparable if it could have arisen from a shared state of Schmidt number at most $n$, and define a state to have fully-device independent Schmidt number $n$ (FDI-SN $n$) if it can only lead to $n$-preparable behaviours. This is related to \cite{brunner2008testing}, where the authors introduce the concept of dimension witnesses to lower bound the dimension of the underlying state. One can quickly see in this scenario that if either of the two parties use $n$-simulable measurements, then the resulting behaviour will be $n$-preparable. Similarly, uncharacterised measurements on an $n$-preparable assemblage can only result in an $n$-preparable behaviour. However, it is less clear how one could characterise the corresponding channels whose Choi states have FDI-SN $n$. 
In the steering case we were able to exploit and generalise known connections with measurement incompatibility, but it seems that new tools may be needed to attack this problem in the fully device independent regime. \textit{Acknowledgments.---} We acknowledge financial support from the Swiss National Science Foundation (projects 192244, Ambizione PZ00P2-202179, and NCCR SwissMAP). BDMJ acknowledges support from UK EPSRC (EP/SO23607/1). T.C. would like to acknowledge the funding Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC-2123 QuantumFrontiers 390837967, as well as the support of the Quantum Valley Lower Saxony and the DFG through SFB 1227 (DQ-mat). \bibliographystyle{unsrt}
\section{Introduction} \label{sec:intro} Understanding the population of active galactic nuclei (AGN) in the local Universe can provide insights into the growth and evolution of supermassive black holes (SMBHs) across cosmic time. While virtually every massive galaxy contains a SMBH in its center, the occupation fraction of black holes in the dwarf galaxy regime remains poorly constrained. There is strong evidence for the existence of $\sim 10^5 - 10^6M_{\odot}$ black holes in dwarf galaxies \citep{Filippenko2003,Barth2004,Reines2015,Baldassare2015}. However, with the exception of the recent gravitational wave event GW190521 with a merger remnant mass of $142^{+28}_{-16}\ M_{\odot}$ \citep{LIGOVirgo2020}, the X-ray tidal disruption event 3XMM J215022.4-055108 \citep{Lin2018}, and the somewhat more controversial $M_{\rm{BH}} \sim 10^4\ M_{\odot}$ hyper-luminous X-ray source ESO 243-49 HLX-1 \citep{Farrell2009}, intermediate-mass black holes (IMBHs) with $M_{\rm{BH}} \sim 10^2 - 10^4M_{\odot}$ remain difficult to identify \citep{Greene2020}. To explain these observations, as well as the formation of SMBHs at high redshifts when the Universe was only a few hundred Myr old (e.g., \citealt{Fan2001,Wu2015,Banados2018,Wang2021}), it is thought that SMBHs must grow via accretion and mergers from early seed black holes (e.g., \citealt{Natarajan2014,Inayoshi2020}). Theories of SMBH seeding scenarios broadly fall into two classes: ``light'' and ``heavy'' seeds. In the most popular light seed scenario, black holes with masses of $\sim 10^{1-2}\ M_{\odot}$ are expected to form as remnants of the massive, first generation of stars, namely the Population III (Pop III) stars \citep{Bond1984,Madau2001,Fryer2001,Abel2002,Bromm2003}. 
With improvement in the resolution of simulations that track the formation of first stars, it is now found that rather than forming individual stars, early star formation results in star clusters, whose evolution could also provide sites for the formation of light initial seeds \citep{Gurkan2004,PortegiesZwart2004}. On the other hand, in the most popular ``heavy'' seed scenario, black holes with masses of $\sim 10^4 - 10^6\ M_{\odot}$ are expected to viably form from direct collapse of primordial gas clouds under specific conditions \citep{Haehnelt1993,Loeb1994,Bromm2003,Koushiappas2004,Lodato2006,Begelman2006,Lodato2007}. Additionally, multiple other formation channels have also been proposed, such as mechanisms within nuclear star clusters \citep{Devecchi2009,Davies2011,Devecchi2010,Tal2014,Lupi2014,Antonini2015,Stone2017,Fragione2020,Kroupa2020,Natarajan2021}; inside globular clusters \citep{Miller2002,Leigh2014,Antonini2019}, and even young star clusters \citep{Rizzuto2021}. Heavy seeds are predicted to be fewer in number, while light seeds are predicted to be more common but less massive \citep{Lodato2007}. Given that the host galaxy stellar mass and the mass of the inactive and active central black holes (BHs) appear to be correlated, at least in the local Universe (\citealt{Magorrian1998,Reines2015}), the occupation fraction (i.e., fraction of galaxies containing a central BH at a given stellar mass) is expected to be an observational tracer of seeding \citep{Volonteri2008,Greene2012}. Counter-intuitively, despite their complex growth history via accretion and mergers, the local occupation fraction in the dwarf galaxy mass range ($M_{\star} \lesssim 10^{9.5}\ M_{\odot}$) is predicted to be particularly sensitive to early seeding physics (but see \citet{Mezcua2019Nat}). 
Even at these late cosmic times and on these small dwarf galaxy scales, estimates of the occupation fraction might permit discriminating between the light and heavy seeding scenarios \citep{Volonteri2008}. Deep X-ray surveys have been used to identify low-mass and low-luminosity AGNs at low and intermediate redshifts \citep{Fiore2012,Young2012,Civano2012,Miller2015,Mezcua2016,Luo2017,Xue2017}. However, these surveys are expensive and are often plagued by contamination from X-ray binaries. Radio searches have also successfully identified low-mass AGNs as radio cores in star-forming dwarf galaxies \citep{Mezcua2019,Reines2020}, although they are subject to low detection rates. Traditional AGN search techniques at optical wavelengths, such as narrow-emission line diagnostics \citep{Baldwin1981,Veilleux1987}, on the other hand, tend to miss a large fraction of IMBHs preferentially in star-forming \citep{Baldassare2016,Trump2015,Agostino2019} and low-metallicity \citep{Groves2006} host galaxies. However, systematic searches using wide-area optical surveys have begun to uncover this previously-hidden population of accreting black holes in dwarf galaxies. One popular technique that has been pursued is the mining of large databases of optical spectra for broad emission features in Balmer emission lines \citep{Greene2007,Chilingarian2018,Liu2018}. However, this method requires high $S/N$ spectra to detect the very low-luminosity broad emission \citep{Burke2021c}. In addition, it suffers from contamination from supernovae and stellar winds, which can both produce transient broad Balmer emission with luminosities identical to a dwarf AGN. Confirmation of the detection of dwarf AGN further requires multi-epoch spectroscopy to ensure the broad emission is persistent \citep{Baldassare2016}. Finally, it has been suggested that some accreting IMBHs may fail to produce a broad line region at all \citep{Chakravorty2014}. 
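The seeding-diagnostic logic discussed above, in which the occupation fraction imprints itself on the detectable BH population, can be sketched with a toy Monte Carlo. All numerical choices below, the Schechter-like weighting, the tanh occupation curves, and the scaling-relation coefficients, are illustrative assumptions for this sketch rather than values adopted in this work:

```python
import numpy as np

rng = np.random.default_rng(42)

# 1) Stellar masses drawn uniformly in log, weighted by a Schechter-like
#    galaxy stellar mass function (slope and knee chosen for illustration).
logM = rng.uniform(6, 12, size=200_000)          # log10(M*/Msun)
weight = 10 ** (-0.5 * (logM - 11)) * np.exp(-(10 ** (logM - 11)))

# 2) Seeding-dependent occupation fractions (hypothetical smooth curves:
#    light seeds stay occupied down to dwarf masses, heavy seeds drop off).
def occupation(logM, mode):
    pivot = 7.5 if mode == "light" else 9.0
    return 0.5 * (1 + np.tanh(logM - pivot))

# 3) BH masses from an M_BH-M* relation of the Reines & Volonteri (2015)
#    form, with assumed coefficients and intrinsic scatter.
logMBH = 7.45 + 1.05 * (logM - 11) + rng.normal(0, 0.55, logM.size)

results = {}
for mode in ("light", "heavy"):
    occ = occupation(logM, mode) * weight
    imbh = (logMBH > 2) & (logMBH < 5)           # 10^2 - 10^5 Msun BHs
    results[mode] = occ[imbh].sum() / occ.sum()
    print(f"{mode} seeds: weighted IMBH fraction = {results[mode]:.2f}")
```

In this toy setup, the light-seed scenario yields a larger weighted fraction of IMBH-mass black holes than the heavy-seed scenario, illustrating why the dwarf-galaxy occupation fraction is a sensitive probe of seeding.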
The possibility that some IMBHs live outside their host galaxy nuclei---the so called ``wandering'' BH population---is another complicating factor for systematic searches of IMBHs \citep{Volonteri2005,Bellovary2010,Mezcua2015,Mezcua2020,Bellovary2019,Reines2020,Ricarte2021a,Ricarte2021b,Ma2021}. Analysis of the Romulus suite of simulations \citep{Ricarte2021a} recently demonstrated that a variety of dynamical mechanisms could result in a population of wandering IMBHs in galaxies, such as tidal stripping of merging dwarf galaxies \citep{Zinnecker1988}, gravitational recoil from galaxy centers \citep{Volonteri2003,Holley-Bockelmann2008,OLeary2009,Blecha2011,Blecha2016}, or gravitational runaway processes in star clusters \citep{Miller2002,PortegiesZwart2002,Fragione2018}. Recently, searches for optical variability in wide-area optical surveys have uncovered hundreds of dwarf AGN candidates \citep{Baldassare2018,Baldassare2020,Burke2021b,Martinez-Palomera2020,Ward2021}. These sources have enabled studies that have improved our understanding of AGN optical variability across a vast range of mass scales. Variability is thought to be driven by the inner UV-emitting regions of their rapidly-accreting accretion disks \citep{Burke2021}. In this work, we leverage these recent advances in IMBH identification and optical variability behavior, along with extrapolations of known host-galaxy correlations observed in the low-mass regime (e.g., \citealt{Reines2015}), to forecast the IMBH population that could be detectable by upcoming time-domain imaging surveys. This work is organized as follows. In \S\ref{sec:model}, we develop a forward model to forecast the number density of IMBHs in dwarf galaxies. In \S\ref{sec:obs}, we adapt this model to generate simulated observations mimicking light curves expected from the Vera C. 
Rubin Observatory Legacy Survey of Space and Time (LSST Rubin; \citealt{Ivezic2019}) and the Palomar Transient Factory (PTF) survey \citep{Law2009} to compare with existing observations \citep{Baldassare2020}. We opt for the PTF comparison over a similar study with SDSS \citep{Baldassare2018} because the PTF study has a larger sample size, which enables tighter constraints on the variable fraction while being broadly consistent with the SDSS data. A comparison with the Dark Energy Survey is presented separately in \citet{Burke2021b}. We demonstrate the capability of our model to reproduce the IMBH detection fraction as a function of stellar mass consistent with existing AGN demographic studies. A concordance $\Lambda$CDM cosmology with $\Omega_{m} = 0.3$, $\Omega_{\Lambda} = 0.7$, and $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$ is assumed throughout. Unless stated otherwise, all uncertainty bands in the figures are $1\sigma$, estimated using the 16th and 84th percentiles of the probability density distributions, and points are the distribution means. Duplicate symbols are used for some parameters throughout this work. The reader should refer to the context to resolve any ambiguity. \section{Methodology to construct the demographic model}\label{sec:model} \begin{figure*} \includegraphics[width=1\textwidth]{model.pdf} \caption{Monte Carlo model for local AGN demographics in dwarf galaxies. We start from the galaxy stellar mass function (a) and consider two possibilities for the occupation fraction (a ``light'' seed scenario in blue/square symbols and a ``heavy'' seed scenario in magenta/circle symbols; b). Then, we use the local $M_{\rm{BH}}-M_{\star}$ scaling relation to predict the BH mass function (d). Finally, we assume a power-law distribution for the Eddington ratios (e) to predict the local bolometric AGN luminosity function (LF) (f).
The shaded bands are $1\sigma$ uncertainties, estimated using the 16th and 84th percentiles of the distributions, and points are the distribution means. The red `x' symbols are the observational constraints on the local galaxy stellar mass function (points below $\sim10^8\ M_{\odot}$ are affected by incompleteness of low surface brightness galaxies; \citealt{Baldry2012}) (a) and the local AGN luminosity function from optical observations \citep{Schulze2009,Hao2005} (f). The red curve in panel (d) is the stellar BH relic mass function anchored to merger rates from gravitational wave observations \citep{Sicilia2022}. The red line in panel (f) is the best-fit broken power law to the local AGN luminosity function derived from X-ray observations \citep{Ajello2012}. The red `o' symbols show the GSMF measured from the SDSS-based NASA Sloan Atlas catalog \citep{Blanton2011}, which demonstrates the spectroscopic incompleteness at low stellar mass (a). The red filled squares in panel (f) show the observed luminosity function of ultra-luminous X-ray sources derived from seven collisional ring galaxies \citep{Wolter2018} normalized to the number density of $M_\star \sim 10^6\ M_{\odot}$ dwarf galaxies after excluding sources with $L_{0.5-10\ {\rm{keV}}} < 10^{39}$ erg s$^{-1}$ where the sample is incomplete. The red open squares in panel (f) show the observed luminosity function of X-ray binaries (XRBs) in globular clusters (GCs) in nearby galaxies \citep{Lehmer2020} normalized to the number density of $M_\star \sim 10^6\ M_{\odot}$ dwarf galaxies. The red dotted vertical line in panel (f) represents the bolometric luminosity of the $M_{\rm{BH}} \sim 10^4-10^5\ M_{\odot}$ dwarf Seyfert galaxy NGC 4395 \citep{Filippenko2003,Moran1999}.
\label{fig:model}} \end{figure*} Broadly following the basic methodology presented in prior work by \citet{Caplar2015} and \citet{Weigel2017}, we develop an empirically motivated forward model starting from the galaxy stellar mass function and host-galaxy scaling relations to derive the corresponding BH mass and AGN luminosity functions (also see \citealt{Gallo2019,Greene2020}). Our goal is to estimate the number density of dwarfs with central AGNs in the IMBH mass range that would result from the various proposed seeding mechanisms. Therefore, we must extrapolate host-galaxy scaling relations derived from current observational constraints, as well as the Eddington ratio distribution derived for more massive AGNs, down to lower-mass BHs. A summary of the parameters and our adopted values is provided in Table~\ref{tab:par}; exceptions are explicitly quoted in the text. \subsection{The dwarf galaxy population} \label{sec:GSMF} \begin{table} \centering \caption{Table of parameters, our adopted values and their $1\sigma$ uncertainties describing the galaxy population of our Monte Carlo model.} \label{tab:par} \small \begin{tabular}{cccc} \hline \hline Parameter & Value & Unit & Reference \\ \hline \multicolumn{4}{|c|}{Galaxy Stellar Mass Function (GSMF)}\\ \hline $\log(M_{\star}^{\ast}/M_{\odot})$ & $10.78 \pm 0.01$ & dex & \citet{Wright2017} \\ $\phi_1/10^{-3}$ & $2.93 \pm 0.40$ & Mpc$^{-3}$ & \ldots \\ $\phi_2/10^{-3}$ & $0.63 \pm 0.10$ & Mpc$^{-3}$ & \ldots \\ $\alpha_1$ & $-0.62 \pm 0.03$ & & \ldots \\ $\alpha_2$ & $-1.50 \pm 0.01$ & & \ldots \\ \hline \multicolumn{4}{|c|}{$^{a,b}$Blue$+$Green Galaxy Stellar Mass Function (GSMF)}\\ \hline $\log(M_{\star}^{\ast}/M_{\odot})$ & $10.72$ & dex & \citet{Baldry2012} \\ $\phi/10^{-3}$ & $0.71$ & Mpc$^{-3}$ & \ldots \\ $\alpha$ & $-1.45$ & & \ldots \\ \hline \multicolumn{4}{|c|}{$^{a}$Red Galaxy Stellar Mass Function (GSMF)}\\ \hline $\log(M_{\star}^{\ast}/M_{\odot})$ & $10.72$
& dex & \citet{Baldry2012} \\ $\phi_1/10^{-3}$ & $3.25$ & Mpc$^{-3}$ & \ldots \\ $\phi_2/10^{-3}$ & $0.08$ & Mpc$^{-3}$ & \ldots \\ $\alpha_1$ & $-0.45$ & & \ldots \\ $\alpha_2$ & $-1.45$ & & \ldots \\ \hline \multicolumn{4}{|c|}{$^{c}$Host Galaxy-Black Hole Mass Scaling}\\ \hline $\log(M_{\star}^{\ast}/M_{\odot})$ & $11$ & dex & \citet{Reines2015} \\ $\alpha$ & $7.45 \pm 0.08$ & & \ldots \\ $\beta$ & $1.05 \pm 0.11$ & & \ldots \\ \hline \multicolumn{4}{|c|}{Blue$+$Green Eddington Ratio Distribution Function (ERDF)}\\ \hline $\log(\lambda^{\ast}_{\rm{Edd}})$ & $-1.84^{+0.30}_{-0.37}$ & & \citet{Weigel2017} \\ $\delta_1$ & $-0.2$ & & $^{d}$ \\ $\delta_2$ & $2.53^{+0.68}_{-0.38}$ & & \citet{Weigel2017} \\ $\log(\lambda_{\rm{Edd, min}})$ & $-8$ & & \\ $\log(\lambda_{\rm{Edd, max}})$ & $0$ & & \\ \hline \multicolumn{4}{|c|}{Red Eddington Ratio Distribution Function (ERDF)}\\ \hline $\log(\lambda^{\ast}_{\rm{Edd}})$ & $-2.84^{+0.22}_{-0.14}$ & & \citet{Weigel2017} \\ $\delta_1$ & $-0.3$ & & $^{d}$ \\ $\delta_2$ & $1.22^{+0.19}_{-0.13}$ & & \citet{Weigel2017} \\ $\log(\lambda_{\rm{Edd, min}})$ & $-8$ & & \\ $\log(\lambda_{\rm{Edd, max}})$ & $0$ & & \\ \hline \end{tabular}\\ {\raggedright $^{a}$ We use the \citet{Wright2017} GSMF, which is better-constrained in the dwarf galaxy regime, but use the separate blue$+$green and red GSMFs from \citet{Baldry2012} to determine the relative ratio of the blue$+$green and red galaxy populations (see text for details). \\ $^{b}$ This is a single \citet{Schechter1976} function in \citet{Baldry2012}. \\ $^{c}$ We adopt the rms scatter in the relation of $\sim 0.55$ dex in the $M_{\star}$ direction \citep{Reines2015}. \\ $^{d}$ We re-normalized the $\delta_1$ parameters to better approximate the variable fraction of the entire galaxy population. Our normalization is still consistent with the local AGN luminosity function. \\ \par} \end{table} We begin by considering the number density of galaxies in the local Universe. 
At a given redshift, the measured galaxy stellar mass function (GSMF) is well described by a double Schechter function of the form, \begin{equation} \label{eq:GSMF} \phi(M_{\star})\ dM_{\star} = e^{-M_{\star}/M_{\star}^{\ast}}\ \left[\phi_1\left(\frac{M_{\star}}{M_{\star}^{\ast}}\right)^{\alpha_1} + \phi_2\left(\frac{M_{\star}}{M_{\star}^{\ast}}\right)^{\alpha_2}\right] \frac{dM_{\star}}{M_{\star}^{\ast}}, \end{equation} where $\phi=dn_{\star}/dM_{\star}$, $M_{\star}$ is the galaxy stellar mass, $n_{\star}$ is the number density, $M_{\star}^{\ast}$ is the break stellar mass, $\alpha_1$ and $\alpha_2$ are the shallow and steep power-law exponents, respectively, and $\phi_1$ and $\phi_2$ are normalization factors that correspond to the low- and high-mass ends of the GSMF, respectively \citep{Schechter1976}. We adopt the best-fit parameters from \citet{Wright2017} based on the Galaxy And Mass Assembly (GAMA) low-redshift $\sim$180 deg$^2$ spectroscopic survey, which has a spectroscopic depth of $r\sim19.8$ mag \citep{Driver2011,Liske2015}. The GAMA-measured GSMF is reliable to $z\sim0.1$ and for $M_{\star} \gtrsim 10^{7.5}\ M_{\odot}$, and is also consistent with current limits on the GSMF down to $M_{\star} \sim 10^{6.5}\ M_{\odot}$ from deep G10-COSMOS imaging---a $\sim$1 deg$^2$ subset of the GAMA survey overlapping with the Cosmic Evolution Survey \citep{Scoville2007} with a spectroscopic depth of $r\sim24.5$ mag \citep{Andrews2017}. The high-mass end of the GSMF is mostly constituted by red galaxies, while the low-mass end is dominated by blue galaxies. Although the \citet{Wright2017} parameters are well-constrained at the low-mass end of the GSMF, they do not include separate derived GSMFs and tailored fits for the red and blue galaxy populations.
Therefore, we use the ratio of the GSMFs partitioned between the red and blue galaxy populations from \citet{Baldry2012}, which is consistent with the results of \citet{Wright2017}, to separately populate red and blue galaxies in our model. We assign each galaxy a ``red'' or ``blue'' identifier, which we use to determine the accretion mode, which differs between these two galaxy populations \citep{Weigel2017,Ananna2022}. We ignore any redshift dependence in the GSMF, as we show that the number of detectable IMBHs drops off quickly with redshift at the expected sensitivities ($g\sim25$ mag) for LSST Rubin currently being considered. The IMBHs our model predicts to be detectable with LSST are expected to mostly lie at $z\lesssim 0.05$. The number of random draws $N_{\rm{draw}}$ can be defined in terms of the GSMF and the survey volume as: \begin{equation} \label{eq:Ndraw} N_{\rm{draw}} = V(z_{\rm{min}}, z_{\rm{max}}, \Omega)\ \int_{M_{\star,\rm{min}}}^{ M_{\star,\rm{max}}} \phi({M_\star})\ dM_\star, \end{equation} where $V$ is the comoving volume between redshifts $z_{\rm{min}}$ and $z_{\rm{max}}$ over solid angle $\Omega$. We randomly assign each of the $N_{\rm{draw}}$ galaxies a stellar mass using Equation~\ref{eq:GSMF} as the target distribution with $z_{\rm{min}}=0$ and $z_{\rm{max}}=0.055$. We choose $z_{\rm{max}}=0.055$ to match existing observational constraints \citep{Baldassare2020}, and we show that the number of detectable IMBHs falls off dramatically with increasing redshift. This restriction of the redshift range also allows us to ignore any explicit redshift dependence in the GSMF. The galaxy redshifts are determined by randomly assigning each galaxy to a redshift bin out to $z_{\rm{max}}$, where the number of galaxies in each redshift bin is proportional to the cosmological differential comoving volume at that redshift bin.
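The random draws of Equations~\ref{eq:GSMF} and \ref{eq:Ndraw} can be implemented by inverse-transform sampling of the double Schechter function on a log-spaced stellar-mass grid. The following is a minimal sketch in Python/NumPy using the \citet{Wright2017} parameters from Table~\ref{tab:par}; the mass limits, grid resolution, and sample size are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

# Double Schechter GSMF (Eq. 1), phi = dn/dM_star in Mpc^-3 per M_sun,
# with the Wright et al. (2017) parameters quoted in Table 1.
def gsmf(m, m_brk=10**10.78, phi1=2.93e-3, phi2=0.63e-3, a1=-0.62, a2=-1.50):
    x = m / m_brk
    return np.exp(-x) * (phi1 * x**a1 + phi2 * x**a2) / m_brk

# Inverse-transform sampling on a log-spaced grid; the mass limits and grid
# size here are illustrative choices.
def sample_masses(n, m_min=10**6.5, m_max=1e12, ngrid=4096, seed=None):
    rng = np.random.default_rng(seed)
    grid = np.logspace(np.log10(m_min), np.log10(m_max), ngrid)
    cdf = np.cumsum(gsmf(grid) * np.gradient(grid))  # discrete CDF of dn/dM
    cdf /= cdf[-1]
    return np.interp(rng.random(n), cdf, grid)

masses = sample_masses(100_000, seed=42)
```

Because the steep low-mass slope $\alpha_2=-1.50$ dominates the counts, the vast majority of draws are dwarf galaxies, mirroring the behavior described above.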
As a consistency check, we show that the redshift and stellar mass distributions of our mock sample compare extremely well to observed SDSS galaxies in Appendix~\ref{sec:nsacomp}. \subsection{Occupation fraction} After determining $N_{\rm{draw}}$, we then consider different possible functional forms for the occupation fraction, the fraction of galaxies hosting an IMBH/SMBH, $\lambda_{\rm{occ}}(M_{\star})$. We refer to this quantity as the \emph{occupation function}. This quantity may be greater than unity if multiple IMBHs are harbored in a galaxy. We explore the following scenarios for the occupation function: \begin{enumerate} \item \textbf{Light seeds:} A constant occupation function of $\lambda_{\rm{occ}} = 1$ shown in blue in Figure~\ref{fig:model}. This represents the most optimistic predictions for an initial ``light'' seed scenario (e.g., from Pop. III stellar remnants) as examined in \citet{Ricarte2018}. \item \textbf{Heavy seeds:} An occupation function that approaches unity for massive galaxies ($M_{\star} > 10^9\ M_{\odot}$) but drops dramatically by $M_{\star} \sim 10^8\ M_{\odot}$, shown in magenta according to the ``heavy-MS'' scenario (e.g., from direct collapse channels) adopted from \citet{Ricarte2018}. This prediction is derived from a semi-analytic model which traces the evolution of heavy seeds under the assumption of a steady-state accretion model that reproduces the observed AGN main-sequence. This resulting occupation fraction is broadly consistent with studies from cosmological simulations \citep{Bellovary2019}. \item \textbf{Light seed $+$ wanderers:} We adopt an occupation fraction anchored to the \citet{Sicilia2022} BH mass function (BHMF) derived from ongoing stellar formation channels. The \citet{Sicilia2022} BHMF describes the local IMBH population by anchoring to merger rates derived from gravitational wave (GW) observations by LIGO/VIRGO \citep{Abbott2021,Baxter2021}. 
We assume a smooth transition between these GW anchors to the \citet{Sicilia2022} BHMF at $M_{\rm{BH}} \sim 10^2\ M_{\odot}$ and the BHMF from scenario (i) at $M_{\rm{BH}} \sim 10^4\ M_{\odot}$ as a reasonable model to approximate the wandering and off-nuclear IMBHs that have not yet fallen to the center of the host galaxy. The resulting occupation fraction is broadly consistent with the existing constraints on the luminosity function derived from AGNs, ultra-luminous X-ray sources (ULXs), and XRBs as shown in Figure~\ref{fig:model}(f). \end{enumerate} Scenarios (i) and (ii) both assume a single seeding epoch and subsequent growth of the seed BH to fall onto the black hole-host galaxy mass relation at late times. However, stellar cluster seed formation channels can continuously produce IMBHs, as recently pointed out by \citet{Natarajan2021}. There are considerable theoretical uncertainties in these models arising from the hitherto unknown efficiencies of continual seed formation processes. We will incorporate continual BH formation models in future work. Furthermore, multiple seeding scenarios could operate simultaneously in the Universe, so theoretical constraints on the occupation function remain uncertain for this reason as well. In this work, however, we pursue a new avenue and explore whether optical variability can be used to constrain the occupation function. Precisely how the low-redshift occupation fraction traces seeding scenarios at high redshift is a more complex question that requires more detailed interpretation due to the interplay with accretion physics \citep{Mezcua2019Nat}. Here, we adopt the two scenarios (i) and (ii) described above for nuclear black holes as a way to bracket the range of reasonable outcomes. For each of scenarios (i) and (ii), we assign each galaxy a BH or not according to its occupation probability.
Therefore, the remaining number of draws is given by, \begin{equation} N_{\rm{draw, BH}} = N_{\rm{draw}} \int^{M_{\rm{\star,max}}}_{M_{\rm{\star,min}}} \lambda_{\rm{occ}}(M_{\star})\ dM_{\star}, \end{equation} where $N_{\rm{draw}}$ is given by Equation~\ref{eq:Ndraw}. \subsection{Black hole mass scaling relations} In the local universe, the stellar mass of the AGN host galaxy scales with the mass of the central BH as a power-law of the form: \begin{equation} \label{eq:MMstar} \log\left(\frac{M_{\rm{BH}}}{M_{\odot}}\right) = \alpha + \beta \log\left(\frac{M_{\star}}{M_{\star}^{\ast}}\right). \end{equation} We adopt the relation measured from local broad-line AGNs including dwarf galaxies with $\alpha=7.45\pm0.08$; $\beta=1.05\pm0.11$; with a pivot mass $M_{\star}^{\ast} = 10^{11}\ M_{\odot}$ \citep{Reines2015} to obtain BH masses for scenario (i) and (ii). We also include the rms scatter of $\sim 0.6$ dex in $M_{\rm{BH}}$ in the relation when assigning each galaxy a BH mass. For the wandering BH population of scenario (iii), we assume an analogous relation between the BH mass and mass of the star cluster containing the IMBH $M_{\rm{SC}}$ to obtain their associated stellar masses: \begin{equation} \label{eq:MMNSC} \log\left(\frac{M_{\rm{SC}}}{M_{\odot}}\right) = \alpha + \beta \log\left(\frac{M_{\rm{BH}}}{M_{\rm{BH}}^\ast}\right). \end{equation} We adopt the best-fit parameters from the relation between the BH mass and mass of the nuclear star cluster (as a proxy for $M_{\rm{SC}}$) derived from low-mass nuclear star clusters by \citet{Graham2020} with $\alpha=7.70\pm0.20$; $\beta=0.38\pm0.06$; $M_{\rm{BH}}^{\ast} = 10^{7.89}\ M_{\odot}$ and an intrinsic scatter of $\sim 0.5$ dex in $M_{\rm{SC}}$. Although by definition wandering black holes would not all necessarily be found in nuclear star clusters, to first order, we assume that this relation offers a reasonable description for off-nuclear star clusters with wandering IMBHs. 
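The BH mass assignment of Equation~\ref{eq:MMstar} amounts to evaluating the power law at each stellar mass and adding scatter in $\log M_{\rm{BH}}$. The sketch below assumes the \citet{Reines2015} parameters quoted in the text; treating the scatter as Gaussian in the log is our simplifying assumption.

```python
import numpy as np

# Eq. 4 with alpha = 7.45, beta = 1.05, and pivot mass 10^11 M_sun, plus
# ~0.6 dex scatter in log(M_BH) (assumed Gaussian in the log).
def bh_mass(m_star, alpha=7.45, beta=1.05, pivot=1e11, scatter=0.6, seed=None):
    rng = np.random.default_rng(seed)
    m_star = np.asarray(m_star, dtype=float)
    log_mbh = alpha + beta * np.log10(m_star / pivot)
    return 10**(log_mbh + rng.normal(0.0, scatter, size=m_star.shape))

# A population of 10^9 M_sun dwarfs scatters around log(M_BH/M_sun) ~ 5.35,
# i.e. squarely in the IMBH regime.
mbh = bh_mass(np.full(10_000, 1e9), seed=1)
```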
For the wandering BH population, we will use $M_{\rm{SC}}$ in place of the host galaxy stellar mass $M_{\star}$ to compute the luminosity from starlight that dilutes the variability. \subsection{The Eddington ratio distribution} We adopt a broken power-law distribution for the Eddington luminosity ratio ($\lambda_{\rm{Edd}} \equiv L_{\rm{bol}}/L_{\rm{Edd}}$) probability distribution function, from which we compute the AGN bolometric luminosity $L_{\rm{bol}}$. Specifically, we adopt the commonly used double power-law parameterization \citep{Caplar2015,Sartori2015,Sartori2019,Weigel2017,Pesce2021,Ananna2022}: \begin{equation} \label{eq:ERDF} \xi(\lambda_{\rm{Edd}}) = \xi^\ast \left[\left(\frac{\lambda_{\rm{Edd}}}{\lambda_{\rm{Edd}}^\ast}\right)^{\delta_1} + \left(\frac{\lambda_{\rm{Edd}}}{\lambda_{\rm{Edd}}^\ast}\right)^{\delta_2}\right]^{-1}, \end{equation} where $\xi(\lambda_{\rm{Edd}})$ is the Eddington ratio distribution function (ERDF); $\lambda_{\rm{Edd}}^\ast$ is the break Eddington ratio; and $\delta_1$ and $\delta_2$ are the shallow and steep power-law exponents, respectively. There is compelling evidence that the red and blue galaxy populations host central AGNs that accrete in different modes. \citet{Weigel2017} found that the radio AGN luminosity function (predominantly red host galaxies) is described by a broken power-law ERDF favoring lower accretion rates. On the other hand, the X-ray AGN luminosity function (predominantly blue host galaxies) is described by a broken power-law ERDF favoring relatively higher accretion rates. \citet{Weigel2017} interpret this as evidence for a mass-independent ERDF for red and blue galaxies with radiatively inefficient and efficient accretion modes, respectively.
We adopt the best-fit parameters for the high-end slope and break Eddington ratio for the red and blue galaxy populations, $\delta_2$ and $\log\ \lambda^\ast$, from \citet{Weigel2017}, in order to match constraints on the $z\approx0$ AGN bolometric luminosity function (e.g., \citealt{Ajello2012,Aird2015}). In seed scenario (iii), we assume that the wandering IMBH population produced through stellar formation channels anchored to the \citet{Sicilia2022} BHMF is described by the radio AGN ERDF favoring lower accretion rates, which is broadly consistent with expectations that wandering black holes should have lower accretion rates \citep{Bellovary2019,GuoM2020,Ricarte2021a,Seepaul2022}. The normalization of the ERDF $\xi^\ast$ determines how many of the randomly drawn $N_{\rm{draw, BH}}$ BH mass values are assigned an Eddington ratio. Unlike \citet{Weigel2017}, we wish to consider an ERDF normalization that describes the \emph{entire} red and blue galaxy population (rather than separate classes of radio- or X-ray-selected AGNs). Therefore, our ERDFs must be re-normalized accordingly. We set $\xi^\ast$ such that the integral of the ERDF from $\lambda_{\rm{Edd, min}}$ to $\lambda_{\rm{Edd, max}}$ is 1. This means that all $N_{\rm{draw, BH}}$ BH values are assigned an Eddington ratio, and we have assumed that the ERDF is independent of BH mass. The low-end slope $\delta_1$ is not well constrained by the AGN luminosity function for $\delta_1 < -\alpha_1$, since the low-luminosity end of the luminosity function is then determined by $\alpha_1$ \citep{Caplar2015}. We therefore allow $\delta_1$ to be a free parameter in our model and adjust it to match the overall variable AGN fraction while maintaining consistency with the AGN luminosity function. The best-fit parameters for radiatively-efficient AGNs from \citet{Weigel2017} are consistent with the ERDF for low-mass galaxies from \citet{Bernhard2018}.
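The re-normalization step can be made concrete as follows: evaluate the broken power law of Equation~\ref{eq:ERDF} on a grid of $\log \lambda_{\rm{Edd}}$, set $\xi^\ast$ so the distribution integrates to unity over $[\lambda_{\rm{Edd,min}}, \lambda_{\rm{Edd,max}}]$, and draw Eddington ratios from the resulting CDF. This sketch uses the blue$+$green parameters of Table~\ref{tab:par}; integrating per dex is our assumption about the measure.

```python
import numpy as np

# Broken power-law ERDF (Eq. 6), blue+green galaxy parameters from Table 1:
# log(lambda*) = -1.84, delta1 = -0.2, delta2 = 2.53, over [-8, 0] in
# log(lambda_Edd). We normalize per dex (an assumption about the measure).
log_lam = np.linspace(-8.0, 0.0, 2001)
dlog = log_lam[1] - log_lam[0]

def erdf_shape(lam, log_lam_star=-1.84, d1=-0.2, d2=2.53):
    x = lam / 10**log_lam_star
    return 1.0 / (x**d1 + x**d2)

shape = erdf_shape(10**log_lam)
xi_star = 1.0 / np.sum(shape * dlog)  # xi* such that the ERDF integrates to 1
pdf = xi_star * shape

# With normalization 1, every mock BH receives an Eddington ratio draw.
cdf = np.cumsum(pdf * dlog)
rng = np.random.default_rng(7)
lam_draws = 10**np.interp(rng.random(50_000), cdf, log_lam)
```

Most draws land well below the break, so the typical mock AGN is faint, consistent with the low-luminosity population the model produces.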
Radiatively-efficient, low-mass AGNs dominate in number, and have the largest impact on the luminosity function. Although alternative ERDFs have been proposed \citep{Kauffmann2009}, the simple mass-independent broken power-law function is able to adequately reproduce observations once selection effects are accounted for \citep{Jones2016,Ananna2022}. Finally, we caution that a population of $z\sim0$ X-ray obscured Compton thick AGNs may be missing from our entire census and hence absent in the luminosity function as well. We consider the optically-obscured AGN fraction later on in this work before computing the optical-band luminosities. \subsection{Model consistency with observational constraints} A schematic detailing our model results using random sampling is shown in Figure~\ref{fig:model}. To ensure that our model parameters are consistent with all available relevant observational constraints, we compare our model AGN luminosity function to the observed local AGN luminosity function from \citet{Hao2005} and \citet{Schulze2009} measured using Type 1, broad-line AGNs from the Sloan Digital Sky Survey (SDSS; faint end) and the Hamburg/ESO Survey (bright end). The number densities in each bin $i$ are given by: \begin{equation} \phi_i(x) = \frac{n_i}{V(z_{\rm{min}}, z_{\rm{max}}, \Omega) \times \Delta \log x}, \end{equation} where $x$ is substituted for the variable of interest e.g., $M_{\star}$, $M_{\rm{BH}}$, or $L_{\rm{bol}}$. We fix the ERDF parameters to reproduce the observed local AGN luminosity function from \citet{Ajello2012} starting with the best-fit parameters of \citet{Weigel2017} and re-normalizing the ERDF to describe the entire galaxy population. This is in reasonably good agreement with the Type 1 bolometric $z\approx0$ AGN luminosity function \citep{Schulze2009,Hao2005}. We separately consider a Type 1/Type 2 AGN fraction before computing the observable optical luminosities for the AGN population. 
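The comparison between the mock population and the observed luminosity functions uses the binned estimator above. A minimal sketch follows, with the survey volume passed in directly; in practice it is the comoving volume between $z_{\rm{min}}$ and $z_{\rm{max}}$ over $\Omega$, and the mock luminosities here are a toy stand-in.

```python
import numpy as np

# Binned number density phi_i = n_i / (V * Delta log x), in Mpc^-3 dex^-1.
# `volume` is the comoving survey volume in Mpc^3 (taken as given here).
def binned_phi(log_x, volume, bins=10):
    counts, edges = np.histogram(log_x, bins=bins)
    return counts / (volume * np.diff(edges)), edges

# Toy usage: 1000 mock log-bolometric luminosities in a 1e6 Mpc^3 volume.
rng = np.random.default_rng(0)
phi, edges = binned_phi(rng.uniform(40.0, 44.0, 1000), 1e6, bins=8)
```

Summing $\phi_i\,\Delta \log x$ times the volume recovers the total count, a useful sanity check when comparing to the observed luminosity function points.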
To check the consistency of our derived luminosity functions with observations at luminosities below $\sim10^{42}$ erg s$^{-1}$, we show the observed luminosity function of ULXs derived from \emph{Chandra} observations of seven collisional ring galaxies \citep{Wolter2018}. ULXs are non-nuclear sources with X-ray luminosities in excess of $10^{39}$ erg s$^{-1}$, generally thought to be X-ray binaries or neutron stars accreting at super-Eddington rates. However, it is possible that some ULXs are in fact accreting IMBHs (e.g., as noted in \citealt{Feng2011,Kaaret2017}). Regardless, it is important to check that our model bolometric luminosity function for the wandering IMBH population does not exceed the luminosity functions derived from ULXs as a limiting case. We demonstrate this consistency in Figure~\ref{fig:model} by assuming a bolometric correction factor of $1.25$ \citep{Anastasopoulou2022}. We exclude sources with X-ray luminosities below $10^{39}$ erg s$^{-1}$, where the sample is incomplete \citep{Wolter2018}. We normalize the \citet{Wolter2018} (per-galaxy) luminosity function to the number density of $M_\star \sim 10^{6}\ M_{\odot}$ ultra-low-mass dwarf galaxies of $\sim 10^{-1}$ Mpc$^{-3}$ \citep{Baldry2012}, whose IMBHs should dominate the low-luminosity end of the BH luminosity function. This comparison should be treated with caution, because the \citet{Wolter2018} sample of massive collisional ring galaxies is not fully representative of all dwarf galaxies, and the normalization of the luminosity function is expected to depend on the star formation rate of the host galaxy \citep{Grimm2003}. In addition to ULXs, we show the completeness-corrected luminosity function of X-ray binaries (XRBs) spatially coincident with globular clusters (GCs) in nearby galaxies from \citet{Lehmer2020}, assuming a bolometric correction of $1.25$ \citep{Anastasopoulou2022}.
Again, we confirm that our predicted luminosity functions do not significantly exceed the observed luminosity function of XRBs in GCs after normalizing the luminosity function to the number density of $M_\star \sim 10^{6}\ M_{\odot}$ ultra-low-mass dwarf galaxies. Similar caveats exist with this comparison as with that of the ULXs, as the results are also expected to depend on the properties of the star cluster. As an additional check, we plot the GSMF measured from the SDSS-based NASA Sloan Atlas\footnote{\url{http://nsatlas.org/data}} catalog \citep{Blanton2011} of $z<0.055$ galaxies, which serves as the parent sample of the existing observational constraints \citep{Baldassare2018,Baldassare2020}, using the spectroscopic survey area of $\Omega\approx9380$ deg$^2$ (see \citealt{Weigel2016}). The SDSS GSMF is roughly consistent with the \citet{Wright2017} GSMF above $M_{\star} \sim 10^{10}\ M_{\odot}$ but is highly incomplete below. Deeper catalogs will be required to take advantage of the next generation of optical time-domain imaging surveys. \subsection{Optical bolometric corrections}\label{sec:sed} \begin{figure*} \centering \includegraphics[width=0.85\textwidth]{sed.pdf} \caption{Example spectral energy distributions (SEDs) of AGNs with BH masses in the range $M_{\rm{BH}} = 10^2 - 10^8\ M_{\odot}$ with $L_{\rm{bol}}/L_{\rm{Edd}} = 0.1$ using the model of \citet{Done2012} (thick gray lines, denoted ``Done+12 AGN''). We assume a distance of $30$ Mpc for these models. We also show the filter transmittance (throughput) curves for the GALEX (FUV and NUV; violet) and SDSS bandpasses ($ugriz$; blue to black) for reference (arbitrary $y$-axis scaling). We label the approximate locations of the dominant SED component in black text (but note the shift of their peak wavelengths to the left as $M_{\rm{BH}}$ decreases).
The dashed black line is the best-fit \citet{Gierlinski2009} irradiated disk model of the IMBH candidate HLX-1 ($M_{\rm{BH}} \sim 10^{4}\ M_{\odot}$; \citealt{Farrell2014}), re-scaled to a distance of 30 Mpc and $L_{\rm{bol}}/L_{\rm{Edd}} \sim 0.1$ for comparison (denoted ``HLX-1''). The dotted black line is the Type 1 quasar SED of \citet{Richards2006} for SMBHs derived from composite observations (denoted ``QSO''). Note the \citet{Richards2006} SED contains emission from the AGN torus at $>1$ micron (i.e., the IR bump), while the \textsc{xspec} SEDs do not contain the torus emission. \label{fig:sed}} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.85\textwidth]{sed2.pdf} \caption{Template RIAF spectral energy distributions (SEDs) of AGNs with Eddington luminosity ratios in the range $\lambda_{\rm{Edd}} \equiv L_{\rm{bol}}/L_{\rm{Edd}} = 10^{-8} - 10^{-2}$, with $M_{\rm{BH}} = 4 \times 10^6\ M_{\odot}$ using the model of \citet{Yuan2003} as implemented by \citet{Nemmen2014} (thick gray lines, denoted ``Nemmen+14 RIAF''). For comparison, we show the radiatively efficient accretion model of \citet{Done2012} using the same parameters as in Figure~\ref{fig:sed}, except we set the electron temperature for the soft Comptonisation component to $kT_{e} = 1.9$ keV to match Sgr A$^*$ \citep{Baganoff2003} (thick dashed gray lines, denoted ``Done+12 AGN''). We assume a distance of $30$ Mpc for these models. We also show the filter transmittance (throughput) curves for the GALEX (FUV and NUV; violet) and SDSS bandpasses ($ugriz$; blue to black) for reference (arbitrary $y$-axis scaling). We label the approximate locations of the dominant SED component in black text (but note the shift of their peak wavelengths to the right as $\lambda_{\rm{Edd}}$ decreases).
The dashed (dotted) black line is the best-fit quiescent-state (flaring-state) \citet{Yuan2003} radiatively inefficient accretion flow disk model for Sgr A$^*$ ($\lambda_{\rm{Edd}} \sim 10^{-8.5}$; $M_{\rm{BH}}=4.3 \pm 0.2 \times 10^{6}\ M_{\odot}$; \citealt{Genzel2010}), assuming a distance of 30 Mpc. Note the \citet{Yuan2003} SED contains outflow/jet synchrotron low-frequency radio emission, while the \citet{Done2012} \textsc{xspec} SEDs do not contain the outflow/jet synchrotron emission. \label{fig:sed2}} \end{figure*} \begin{table*} \caption{Format of the FITS file containing the pre-computed grid of \citet{Done2012} or \citet{Nemmen2014} model SEDs. \label{tab:sed}} \small \begin{tabular}{lllll} \hline \hline Header & Column Name & Format & Unit & Description \\ \hline 0 & data & $^{a}$float64 & $\log_{10}( {\rm erg}\ {\rm cm}^{-2}\ {\rm s}^{-1} )$ & $\log_{10}$ of the SED computed on the grid \\ \hline 1 & data & $^{b}$float64 & AB mag & Absolute magnitude in the $i$ band at $z=2$ computed on the grid \\ \hline 2 & log\_M\_BH & float64 & $\log_{10}( M_{\odot} )$ & $\log_{10}$ of the black hole mass \\ 2 & log\_LAMBDA\_EDD & float64 & & $\log_{10}$ of the (dimensionless) Eddington ratio \\ 2 & Z & float64 & & Redshift \\ \hline 3 & $^{c}$log\_WAV & float64 & $\log_{10}( {\rm nm} )$ & $\log_{10}$ of the rest-frame wavelengths where the SED is evaluated \\ \hline \end{tabular}\\ {\raggedright $^{a}$ This is a 4-dimensional array of the shape [log\_M\_BH, log\_LAMBDA\_EDD, Z, log\_WAV]. \\ $^{b}$ This is a 2-dimensional array of the shape [log\_M\_BH, log\_LAMBDA\_EDD]. \\ $^{c}$ The wavelength range over which the SEDs are evaluated is $10^{-3} - 10^8$ nm, spaced evenly in $\log$ space. \\ \par} \end{table*} In order to predict the observed (time-averaged) luminosity in a given band $L_{\rm{band}}$, we need to assume a bolometric correction factor, defined as ${\rm{BC}}_{\rm{band}} = L_{\rm{bol}}/L_{\rm{band}}$.
Typically, bolometric corrections are inferred from a template quasar spectral energy distribution (SED). However, the disk temperature profile of an IMBH is expected to differ significantly from that of a SMBH accreting at the same Eddington ratio, causing the SED to peak in the extreme UV (e.g., \citealt{Cann2018}). For this reason, it is inappropriate to use standard AGN or quasar SEDs (e.g., \citealt{Richards2006}) to explore the IMBH regime. Instead, here we adopt the energetically self-consistent model of \citet{Done2012}, which assumes that the emission thermalizes to a color-temperature-corrected blackbody only at large radii for radiatively efficient accretion ($L_{\rm{bol}}/L_{\rm{Edd}} > 10^{-3}$). This model captures the major components observed in the rest-frame UV/optical in narrow-line Seyfert 1 galaxy SEDs: black-body emission from the outer color-temperature-corrected accretion disk; inverse Compton scattering of photons from the inner disk to model the soft X-ray excess; and inverse Compton scattering in a corona to produce the power-law tail. For mass accretion rates $\dot{m}<\dot{m}_{\rm{crit}} \approx \alpha^2 \approx 0.1$, a radiatively inefficient accretion flow (RIAF) is expected to develop, resulting in a much lower luminosity \citep{Fabian1995,Narayan1994,Narayan1995}. It is thought that black holes with $10^{-6}<\dot{m}<\dot{m}_{\rm{crit}}$ may fall in a hybrid RIAF regime, while ``quiescent'' BHs with $\dot{m}<10^{-6}$ are in a RIAF-dominated regime \citep{Ho2009}, resulting in a power-law SED like that of the quiescent state of Sgr A$^\ast$ \citep{Narayan1998}. The dimensionless mass accretion rate is given by: \begin{eqnarray} \dot{m} \simeq 0.7\ (\alpha/0.3)\ (L_{\rm{bol}}/L_{\rm{Edd}})^{1/2}, \end{eqnarray} where $\alpha$ is the \citet{Shakura1973} viscosity parameter. For RIAFs, where $L_{\rm{bol}}/L_{\rm{Edd}} < 10^{-3}$, we adopt the model of \citet{Nemmen2014}.
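The regime bookkeeping used when assigning SEDs can be summarized in a few lines. This sketch encodes the $\dot{m}$ relation above and the $L_{\rm{bol}}/L_{\rm{Edd}} = 10^{-3}$ efficient/RIAF boundary we adopt; the string labels are purely illustrative.

```python
import numpy as np

# Dimensionless accretion rate mdot ~= 0.7 (alpha/0.3) (L_bol/L_Edd)^(1/2),
# with the Shakura-Sunyaev viscosity parameter alpha (default 0.3).
def mdot(lambda_edd, alpha=0.3):
    return 0.7 * (alpha / 0.3) * np.sqrt(lambda_edd)

# SED regime used in the model: radiatively efficient (Done+12) above
# L_bol/L_Edd = 1e-3, RIAF (Nemmen+14) below. Labels are illustrative.
def sed_regime(lambda_edd):
    return np.where(np.asarray(lambda_edd) > 1e-3, "efficient", "RIAF")

# Example: lambda_Edd = 0.1 gives mdot ~ 0.22, comparable to the critical
# rate mdot_crit ~ alpha^2 ~ 0.1 quoted in the text.
```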
The model includes an inner advection-dominated accretion flow (ADAF), an outer truncated thin accretion disk, and a jet \citep{Nemmen2014,Yuan2007,Yuan2005}. This model provides a reasonable description of low-luminosity AGNs and low-ionization nuclear emission-line region (LINER; \citealt{Eracleous2010,Molina2018}) galaxies with low accretion rates ($L_{\rm{bol}}/L_{\rm{Edd}} \sim 10^{-6} - 10^{-4}$; \citealt{Nemmen2014}). Therefore, we adopt $L_{\rm{bol}}/L_{\rm{Edd}} = 10^{-3}$ as the boundary between radiatively efficient and inefficient accretion flow SEDs, although precisely where this boundary lies is unclear (e.g., \citealt{Ho2009}). \subsubsection{Radiatively Efficient Accretion} To derive bolometric corrections, we use version 12.12.0 of the \textsc{xspec} software\footnote{\url{https://heasarc.gsfc.nasa.gov/xanadu/xspec/}} \citep{Arnaud1996} to generate a fine grid of \citet{Done2012} \texttt{optxagnf} SED models spanning $M_{\rm{BH}} = 10^2 - 10^9\ M_{\odot}$, $L_{\rm{bol}}/L_{\rm{Edd}} = 10^{-3} - 1$, and $z = z_{\rm{min}} - z_{\rm{max}}$. We make the following simple assumptions for the additional parameters in the model: BH spin $a_\star = 0$; coronal radius of transition from black-body emission to a Comptonised spectrum $r_{\rm{cor}} = 100\ R_g$; electron temperature of the soft Comptonisation component (soft X-ray excess) $kT_e = 0.23$ keV; optical depth of the soft excess $\tau=11$; spectral index of the hard Comptonisation component $\Gamma = 2.2$; and fraction of the power below $r_{\rm{cor}}$ that is emitted in the hard Comptonisation component $f_{\rm{pl}} = 0.05$. The outer radius of the disk is set to the self-gravity radius \citep{Laor1989}. These parameters are chosen to roughly match those of the narrow-line Seyfert 1 galaxy RE1034+396 (see \citealt{Done2012} for a more complete description of each parameter).
We interpolate this grid of SEDs at the Eddington ratio, BH mass, and redshift of each of the $N_{\rm{draw, BH}}$ sources in our Monte Carlo model. We provide this grid of pre-calculated SEDs as a supporting FITS data file\footnote{\url{https://doi.org/10.5281/zenodo.6812008}}. The format of the data file is described in Table~\ref{tab:sed}. We assume no dust extinction/reddening, because the LSST Rubin wide-fast-deep survey is expected to largely avoid the galactic plane and the intrinsic dust extinction in Type 1 AGNs is generally small. Finally, we use the optical filter transmission curves and the SED to compute $L_{\rm{band}}$. The \citet{Done2012} SED models are undefined for $M_{\rm{BH}} > 10^9\ M_{\odot}$ in \textsc{xspec}, so we caution that our derived luminosities for the most massive SMBHs rely on extrapolation from this grid of parameters. Nevertheless, we will show below that our derived $L_{\rm{band}}$ values are close to the observed $L_{\rm{band}}$ values from SDSS quasars. We show \citet{Done2012} model SEDs spanning $M_{\rm{BH}} = 10^2 - 10^8\ M_{\odot}$ with $L_{\rm{bol}}/L_{\rm{Edd}} = 0.1$ in Figure~\ref{fig:sed}. Our model SEDs are evaluated on a grid spanning $M_{\rm{BH}} = 10^2 - 10^9\ M_{\odot}$, but we have only shown a subset of the results to avoid crowding the figure. We over-plot the SDSS optical \citep{Blanton2017} and GALEX UV \citep{Martin2005} filter transmission curves for reference. For comparison, we also show the best-fit \citet{Gierlinski2009} irradiated disk model of the IMBH candidate HLX-1 \citep{Farrell2009} fit to \emph{Hubble Space Telescope} and \emph{Swift} photometry from \citet{Farrell2014}. This SED model displays qualitatively similar features to the \citet{Done2012} models, given its expected mass of $M_{\rm{BH}} \sim 10^4\ M_{\odot}$ and distance of 95 Mpc \citep{Farrell2014}.
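Schematically, evaluating a pre-computed SED grid at arbitrary parameters reduces to multi-linear interpolation along the grid axes. The sketch below demonstrates the idea with an invented two-dimensional toy grid (the real grid in the supporting FITS file is four-dimensional, per Table~\ref{tab:sed}; the linear toy model for the grid values is purely illustrative):

```python
import numpy as np

# Synthetic stand-in for the pre-computed grid: log L tabulated on axes
# (log M_BH, log lambda_Edd); the toy values are linear by construction.
log_mbh_grid = np.linspace(2.0, 9.0, 8)
log_ledd_grid = np.linspace(-3.0, 0.0, 7)
grid = 38.0 + log_mbh_grid[:, None] + 0.8 * log_ledd_grid[None, :]

def interp_sed(log_mbh, log_ledd):
    """Bilinear interpolation of `grid` at an arbitrary (log M_BH, log lambda_Edd)."""
    i = np.clip(np.searchsorted(log_mbh_grid, log_mbh) - 1, 0, len(log_mbh_grid) - 2)
    j = np.clip(np.searchsorted(log_ledd_grid, log_ledd) - 1, 0, len(log_ledd_grid) - 2)
    tx = (log_mbh - log_mbh_grid[i]) / (log_mbh_grid[i + 1] - log_mbh_grid[i])
    ty = (log_ledd - log_ledd_grid[j]) / (log_ledd_grid[j + 1] - log_ledd_grid[j])
    return ((1 - tx) * (1 - ty) * grid[i, j] + tx * (1 - ty) * grid[i + 1, j]
            + (1 - tx) * ty * grid[i, j + 1] + tx * ty * grid[i + 1, j + 1])
```

Because the toy grid is linear in both axes, bilinear interpolation reproduces it exactly; on the real grid the same scheme gives an approximation whose quality depends on the grid spacing.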
Other phenomenological models might also adequately describe the SED arising from an accretion disk around an IMBH (e.g., \citealt{Mitsuda1984,Makishima1986}). Indeed, the SED from the accretion disk emission may differ if the IMBH is in a binary configuration that undergoes state transitions similar to X-ray binaries \citep{Servillat2011}. Here, we assume an IMBH is in a ``high-soft''/rapidly-accreting state where its disk may be approximately geometrically thin and behave like a scaled-down accretion disk around a SMBH \citep{McHardy2006,Scaringi2015,Burke2021c}. One could also incorporate variations in model parameters into our Monte Carlo framework. Although our results depend on these model assumptions, they are unlikely to change our final results in excess of the fiducial uncertainty on the BH mass/bolometric luminosity function. Nevertheless, we retain the flexibility in our framework to substitute other SED models as better observational constraints on dwarf AGN SEDs become available in the future. \subsubsection{Radiatively Inefficient Accretion} We calculate \citet{Nemmen2014} RIAF model SEDs\footnote{\url{https://github.com/rsnemmen/riaf-sed}} and add them to our grid of model SEDs spanning $M_{\rm{BH}} = 10^2 - 10^9\ M_{\odot}$, $L_{\rm{bol}}/L_{\rm{Edd}} = 10^{-8} - 10^{-3}$, and $z = z_{\rm{min}} - z_{\rm{max}}$. We make the following simple assumptions for the additional parameters in the model: power-law index for the radial variation of the accretion rate (or density) $s=0.3$, \citet{Shakura1973} viscosity parameter $\alpha=0.3$, ratio between the gas pressure and total pressure $\beta = 0.9$, strength of wind $p=2.3$, fraction of energy dissipated via turbulence that directly heats electrons $\delta=10^{-3}$, and adiabatic index $\gamma=1.5$. The outer radius of the disk is set to the self-gravity radius \citep{Laor1989}. These parameters are chosen to roughly match those inferred from fitting a sample of LINERs from \citet{Nemmen2014} (see the Nemmen et al.
paper for a more complete description of each parameter). To overcome sensitivities to boundary conditions when finding model solutions, we generate a single template SED with Sgr A$^\ast$-like parameters and normalize the resulting SED by BH mass and accretion rate. We then include the simple color-temperature correction analogous to \citet{Done2012}. We show \citet{Nemmen2014} RIAF SEDs spanning $L_{\rm{bol}}/L_{\rm{Edd}} = 10^{-8} - 10^{-2}$, along with a \citet{Done2012} SED with $L_{\rm{bol}}/L_{\rm{Edd}} = 10^{-2}$ and $kT_{e} = 1.9$ keV (to match Sgr A$^*$; \citealt{Baganoff2003}), all with $M_{\rm{BH}} = 4 \times 10^6\ M_{\odot}$, in Figure~\ref{fig:sed2}. For comparison, we show the SED of Sgr A$^\ast$ ($\lambda_{\rm{Edd}} \sim 10^{-8.5}$; $M_{\rm{BH}}=4.3 \pm 0.2 \times 10^{6}\ M_{\odot}$; \citealt{Genzel2010}) in both its quiescent and flaring states, using the radiatively inefficient accretion flow disk model of \citet{Yuan2003}. We find the \citet{Nemmen2014} models provide a reasonable approximation to the optical/UV/X-ray emission of the flaring-state SED of Sgr A$^\ast$. The difference in the shape of the SED compared to the \citet{Done2012} model SEDs is attributed to differences between radiatively efficient flows and RIAFs cooled by advection \citep{Narayan1994,Narayan1995}. There are many theoretical uncertainties regarding the nature of RIAFs, owing to a lack of high-quality observations. However, these detailed assumptions will only affect the luminosities of sources with very low accretion rates in our model, which fortunately do not dominate the variability-selected samples.
\subsection{Optical variability}\label{sec:var} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{SFinf.pdf} \caption{Asymptotic rms variability amplitude $\rm{SF}_{\infty}$ versus virial BH mass $M_{\rm{BH}}$ for the sample of SDSS quasars measured from SDSS light curves (points colored by their Eddington luminosity ratios $\lambda_{\rm{Edd}}$; \citealt{MacLeod2010}) and broad-line dwarf AGNs (blue circle symbols) measured from ZTF light curves in the $g$ band computed at $z\sim0.01$. The extrapolated $\rm{SF}_{\infty}$ relations from the \citet{MacLeod2010} prescription (Equation~\ref{eq:SFinf}) assuming no host galaxy dilution are shown in gray with $L_{\rm{bol}}/L_{\rm{Edd}}=$ 0.1 (solid line), 0.01 (dashed line), and 0.001 (dotted line) with $1\sigma$ uncertainty band shown over the $L_{\rm{bol}}/L_{\rm{Edd}}=$ 0.1 prediction. Our modified extrapolations are similarly shown in blue after accounting for host galaxy dilution assuming a color index of $g-r=0.5$ and a covering factor of $f_\star=10$\%, typical of low-redshift dwarf galaxies. The inconsistency (opposite trends) with the $L_{\rm{bol}}/L_{\rm{Edd}}$ scaling for SDSS quasars and our model is due to the dimming of AGN light as $L_{\rm{bol}}/L_{\rm{Edd}}$ decreases, leading to more host dilution. We have also assumed different host galaxy colors compared to quasar host galaxies (e.g., \citealt{Matsuoka2014}), and the dwarf galaxy and SDSS quasar populations are at different redshifts. The uncertainty is dominated by scatter in the BH-host galaxy relation and the galaxy mass-to-light ratio (see \S\ref{sec:var}). Our modified relation gives more reasonable results in the IMBH regime and is more consistent with observations of dwarf AGN variability. Typical uncertainties on the $\rm{SF_{\infty}}$ measurements are $\sim0.1$ dex. Virial mass uncertainties are typically $\sim0.4$ dex (e.g., \citealt{Shen2013}). 
\label{fig:SFinf}} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{SFinf2.pdf} \caption{Same as Figure~\ref{fig:SFinf} but with $L_{\rm{bol}}/L_{\rm{Edd}}= 0.1$ and varying host dilution covering factors of $f_{\star}= $ 0.2\% (dashed line), 2\% (dash-dotted line), 20\% (solid line), and 100\% (dotted line) with $1\sigma$ uncertainty band shown over the $f_{\star}= $ 20\% prediction. The results with no host dilution are a reasonable approximation of the extrapolated relation for quasars \citep{MacLeod2010}. \label{fig:SFinf2}} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Mi.pdf} \caption{Absolute $i$-band magnitude $K$-corrected to $z=2$ versus BH mass computed with the \citet{Done2012} ($L_{\rm{bol}}/L_{\rm{Edd}}>10^{-3}$) or \citet{Nemmen2014} ($L_{\rm{bol}}/L_{\rm{Edd}}<10^{-3}$) SEDs (blue) compared to the relation for quasars $L_{\rm{bol}}/L_{\rm{Edd}}=0.1$ (gray; e.g., \citealt{Shen2009}). The thick blue line is the $L_{\rm{bol}}/L_{\rm{Edd}}=0.1$ case, while the thin blue lines span $L_{\rm{bol}}/L_{\rm{Edd}}=10^{-2} - 10^{-8}$. The width of the gray line corresponds to the $1\sigma$ scatter in the relation. The more complex shape of the blue curve---namely, larger $M_i(z{=}2)$ at lower BH mass---is due to the blueward disk temperature shift at lower BH masses. For $M_{\rm{BH}} < 10^6\ M_{\odot}$, this relation is well-approximated by $M_i = 125 - 3.3\ \log(L_{\rm{bol}}\ /\ {\rm{erg\ s}^{-1}})$. \label{fig:Mi}} \end{figure} \begin{figure} \includegraphics[width=0.5\textwidth]{lc_examples.pdf} \caption{Example mock DRW $g$-band rest-frame light curves of AGNs with BH masses in the range $M_{\rm{BH}} = 10^2 - 10^6\ M_{\odot}$, $L_{\rm{bol}}/L_{\rm{Edd}}=0.1$, $g-r=0.5$ and a host dilution covering factor of $f_\star=10$\%, with a duration of 1 year. The mock light curve prescription includes estimates of host galaxy contamination following \S\ref{sec:var}. 
The variability amplitude of an IMBH saturates at a few tenths of a magnitude due to host dilution, and the characteristic timescale of variability is $\sim$ tens of hours. \label{fig:lc}} \end{figure} To a good approximation, AGN light curves can be well described by a damped random walk (DRW) model of variability \citep{Kelly2009,MacLeod2010}. We assume a DRW model for both accretion modes. In the DRW model, the PSD is described by a $f^{-2}$ power-law at the high-frequency end, transitioning to white noise at the low-frequency end. The transition frequency corresponds to the damping timescale $\tau_{\rm{DRW}}$ as $f_0 = 1/(2\pi\tau_{\rm{DRW}})$. The damping timescale thus describes a characteristic timescale of the optical variability. There is growing evidence that the variability characteristics depend on AGN properties. \citet{Burke2021} found that the damping timescale depends on accretor mass, with a strong correlation between $\tau_{\rm{DRW}}$ and BH mass that extends to the stellar mass range using optical variability measured for nova-like accreting white dwarfs \citep{Scaringi2015}.
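Because the DRW is Markovian, a light curve can be sampled exactly at arbitrary times with a first-order autoregressive update. A minimal sketch (function and argument names are our own; the stationary variance of the process is ${\rm{SF}}_\infty^2/2$):

```python
import numpy as np

def simulate_drw(t, tau, sf_inf, mean_mag=20.0, rng=None):
    """Exact sampling of a damped random walk at (sorted) times t, in days.
    tau is the damping timescale; the stationary variance is sf_inf**2 / 2."""
    rng = np.random.default_rng(rng)
    var = sf_inf**2 / 2.0
    x = np.empty(len(t))
    x[0] = rng.normal(0.0, np.sqrt(var))        # draw from stationary distribution
    for i in range(1, len(t)):
        a = np.exp(-(t[i] - t[i - 1]) / tau)    # decay factor over the gap
        x[i] = a * x[i - 1] + rng.normal(0.0, np.sqrt(var * (1.0 - a**2)))
    return mean_mag + x

# e.g., 10 years at a 25-day cadence
t = np.arange(0.0, 3650.0, 25.0)
lc = simulate_drw(t, tau=100.0, sf_inf=0.3, rng=42)
```

Because each step conditions only on the previous sample, irregular cadences and season gaps are handled without approximation.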
We generate mock AGN light curves using the recipe of \citet{MacLeod2010} and \citet{Suberlak2021}: \begin{multline} \label{eq:SFinf} \log\left(\frac{{\rm{SF}}_{\infty}}{\rm{mag}}\right) = A + B\ \log\left(\frac{\lambda_{\rm{RF}}}{4000\ {\textup{\AA}}}\right) + C\ (M_i + 23) + \\ D\ \log\left(\frac{M_{\rm{BH}}}{10^9\ M_{\odot}}\right), \end{multline} where $A = -0.51 \pm 0.02$, $B = -0.479 \pm 0.005$, $C = 0.131 \pm 0.008$, and $D = 0.18 \pm 0.03$; and, \begin{multline} \log\left(\frac{\tau}{\rm{days}}\right) = A + B\ \log\left(\frac{\lambda_{\rm{RF}}}{4000\ {\textup{\AA}}}\right) + C\ (M_i + 23) + \\ D\ \log\left(\frac{M_{\rm{BH}}}{10^9\ M_{\odot}}\right), \end{multline} where ${\rm{SF}}_{\infty}$ is the structure function (SF) evaluated at infinity (i.e., the asymptotic rms variability amplitude; e.g., \citealt{Kozlowski2016}) and $A = 2.4 \pm 0.2$, $B = 0.17 \pm 0.02$, $C = 0.03 \pm 0.04$, and $D = 0.21 \pm 0.07$ \citep{Suberlak2021}. For the $\tau$ relation, we instead adopt the coefficients $A=2.029\pm0.004$ and $D=0.38\pm0.05$ and the pivot mass from \citet{Burke2021}, which includes dwarf AGNs. In these relations, $\lambda_{\rm{RF}}$ is the rest-frame wavelength of the observation, i.e., $\lambda_{\rm{RF}}=\lambda_{\rm{obs}}/(1 + z)$ where $\lambda_{\rm{obs}}$ is the central wavelength of the filter/band and $z$ is the redshift, and $M_i$ refers to the $i$-band absolute magnitude $K$-corrected to $z=2$, $M_i(z{=}2)$, as a proxy for the AGN bolometric luminosity $L_{\rm{bol}}$ following \citet{Richards2006}. As such, we adopt the relation $M_i = 90 - 2.5\ \log(L_{\rm{bol}}\ /\ {\rm{erg\ s}^{-1}})$ \citep{Shen2009} instead of the actual value computed from the SED (Figure~\ref{fig:Mi}) in these relations, so that this variable still acts as a linear proxy for $\log\ L_{\rm{bol}}$ when extrapolated to low BH masses.
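Equation~\ref{eq:SFinf} and the corresponding $\tau$ relation are straightforward to evaluate. A sketch using the \citet{MacLeod2010}/\citet{Suberlak2021} coefficients quoted above (before substituting the \citet{Burke2021} values for the $\tau$ relation; function names are our own):

```python
import math

# Coefficients as quoted in the text (MacLeod et al. 2010 / Suberlak et al. 2021)
SF_COEF = dict(A=-0.51, B=-0.479, C=0.131, D=0.18)
TAU_COEF = dict(A=2.4, B=0.17, C=0.03, D=0.21)

def _relation(coef, lambda_rf_aa, M_i, log_mbh):
    """Shared functional form: wavelength in Angstrom, M_i = M_i(z=2),
    log_mbh = log10(M_BH / M_sun)."""
    return (coef["A"] + coef["B"] * math.log10(lambda_rf_aa / 4000.0)
            + coef["C"] * (M_i + 23.0) + coef["D"] * (log_mbh - 9.0))

def sf_inf_mag(lambda_rf_aa, M_i, log_mbh):
    """Asymptotic rms variability amplitude SF_inf, in mag."""
    return 10.0 ** _relation(SF_COEF, lambda_rf_aa, M_i, log_mbh)

def tau_days(lambda_rf_aa, M_i, log_mbh):
    """Rest-frame damping timescale, in days."""
    return 10.0 ** _relation(TAU_COEF, lambda_rf_aa, M_i, log_mbh)
```

At the pivot values ($\lambda_{\rm{RF}} = 4000$ \AA, $M_i = -23$, $M_{\rm{BH}} = 10^9\ M_\odot$) the relations reduce to $10^A$, i.e., ${\rm{SF}}_\infty \approx 0.31$ mag and $\tau \approx 250$ days.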
We show the predicted $g$-band $\rm{SF}_{\infty}$ versus $M_{\rm{BH}}$ in Figure~\ref{fig:SFinf} using the \citet{Done2012} SEDs to compute $M_i$ (Figure~\ref{fig:Mi}) and varying $L_{\rm{bol}}/L_{\rm{Edd}}=$ 0.1, 0.01, and 0.001. Similarly, we show results for varying host galaxy dilution covering factors of $f_{\star}= $ 0.2\%, 2\%, 20\%, and 100\% in Figure~\ref{fig:SFinf2}. For context, we show the individual data points from SDSS quasars \citep{MacLeod2010} and dwarf AGNs with broad-line (virial) BH mass estimates and $\rm{SF}_{\infty}$ values measured from Zwicky Transient Facility (ZTF; \citealt{Bellm2019}) light curves (Burke et al. in prep). We extrapolate the \citet{MacLeod2010} relation to the IMBH regime, but find the predicted $\rm{SF}_{\infty}$ values of $\gtrsim 1$ mag are far too large to be reasonable. An IMBH with this level of variability has not been detected. The \citet{MacLeod2010} sample is dominated by quasars, so $M_i$ and $\rm{SF}_{\infty}$ correspond primarily to emission from the quasar, with a small component contributed by the host galaxy. However, in the IMBH regime, host galaxy light is expected to dominate, diluting the variability amplitude from the AGN emission. To estimate this host galaxy light dilution, we use the $M_{\rm{BH}}-M_{\star}$ relation of \citet{Reines2015} (Equation~\ref{eq:MMstar}) and the stellar mass-to-light ratios of \citet{Zibetti2009}, assuming a host galaxy color index typical of dwarf AGNs of $g-r \approx 0.5$ (e.g., \citealt{Baldassare2020,Reines2013}) and a contamination factor of $f_\star=20$\% (i.e., covering factor, accounting for aperture effects) such that the host galaxy luminosity enclosed in an aperture is $L_{\star, {\rm{ap}}} = f_\star\ L_{\star}$, where $L_{\star}$ is the total luminosity from the host galaxy starlight. These assumptions are justified further in Appendix~\ref{sec:host}, and we will use these mass-dependent parameterizations of the color index and covering factor in our final model.
The resulting observed (diluted) rms variability amplitude is, \begin{equation} \label{eq:SF1} {\rm{SF}}_\infty^{\prime} = \frac{L_{\rm{AGN}}}{L_{\rm{AGN}}+ f_\star L_{\star}}\ {\rm{SF}}_\infty, \end{equation} where $L_{\rm{AGN}}$ is the mean AGN luminosity (assumed to be a point source), $L_{\star}$ is the host galaxy luminosity in a given band, and $\rm{SF}_\infty$ is given by Equation~\ref{eq:SFinf}. We caution that the assumptions above are highly uncertain (e.g., $\sim0.5$ dex scatter in the $M_{\rm{BH}}-M_{\star}$ relation and $\sim0.3$ dex scatter in the mass-to-light ratios) and the level of host contamination would depend on the individual galaxy. Nevertheless, these qualitative arguments yield more reasonable predictions for the variability amplitude in the IMBH regime and are surprisingly consistent with observations of dwarf AGN variability which have typical $\rm{SF}_\infty$ values of a few tenths of a magnitude \citep{Baldassare2018,Baldassare2020,Burke2020b,Ward2020,Martinez-Palomera2020}. Our modified relation also gives a reasonable prediction for low Eddington ratio black holes. When the AGN emission dominates, the observed anti-correlation between Eddington ratio and variability amplitude (e.g., \citealt{Wilhite2008,Simm2016,Caplar2017,Rumbaugh2018}) may hold for quasars ($M_{\rm{BH}} \sim 10^8 - 10^{10}\ M_{\odot}$; $L_{\rm{bol}}/L_{\rm{Edd}} \sim 0.1$), but below a certain Eddington ratio, the host galaxy dilution becomes so large as to swamp the AGN variability entirely. This is consistent with the lack of detected strong optical variability in very low luminosity AGNs (e.g., detected by ultra deep radio or X-ray surveys) due to host dilution. We show sample mock DRW $g$-band light curves of AGNs (including host dilution following the prescription above) with BH masses in the range $M_{\rm{BH}} = 10^2 - 10^8\ M_{\odot}$ with $L_{\rm{bol}}/L_{\rm{Edd}} \sim 0.1$ in Figure~\ref{fig:lc} with the same assumptions as above. 
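The dilution factor in Equation~\ref{eq:SF1} is a simple luminosity-weighted suppression of the intrinsic variability amplitude. A minimal sketch (luminosities in arbitrary but common units; the example numbers are invented):

```python
def diluted_sf_inf(sf_inf, L_agn, L_star, f_star):
    """Host-diluted rms variability amplitude.
    L_agn and L_star must be in the same (arbitrary) units;
    f_star is the host covering/contamination factor."""
    return L_agn / (L_agn + f_star * L_star) * sf_inf

# Example: host light in the aperture is 9x the AGN light,
# so the observed amplitude is suppressed by a factor of 10
sf_obs = diluted_sf_inf(0.3, L_agn=1.0, L_star=45.0, f_star=0.2)  # -> 0.03
```

In the limit $f_\star L_\star \gg L_{\rm{AGN}}$ the observed amplitude tends to zero, which is how obscured or very low-luminosity sources drop out of the variability selection below.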
Figure~\ref{fig:lc} demonstrates the dramatically more rapid variability ($\lesssim$ days) shown by AGNs in the IMBH regime and the suppressed variability amplitude due to estimated host dilution. We compute full mock DRW light curves for all the $N_{\rm{draw, BH}}$ sources in our Monte Carlo model and adopt a simple stellar mass-dependent $g-r$ color index and redshift-dependent contamination factor based on a fit to SDSS NASA Sloan Atlas galaxies as described in Appendix~\ref{sec:host}. We assume the emission from the stellar mass of the host star clusters of the wanderers in scenario (iii) is unresolved. This is consistent with the typical size of young star clusters in the local Universe of a few pc or less \citep{Carlson2001}. \subsection{Optical Type 1 fraction} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{obs.pdf} \caption{Fraction of optically obscured AGNs $f_{\rm{obs}}$ as a function of bolometric luminosity. The gray line shows the model of \citet{Merloni2014} using the X-ray bolometric correction of \citet{Duras2020}. The colored points and $1\sigma$ uncertainty bands are shown for the input ``light'' (blue/square symbols), ``heavy'' (magenta/circle symbols), and ``light $+$ wanderers'' (green triangle symbols) seeding scenarios probed for our LSST-like model. \label{fig:obs}} \end{figure} Type 2 (highly optically obscured) AGNs show little or no detectable optical variability because their UV/optical accretion disk emission is thought to be obscured \citep{Barth2014}. We adopt the luminosity-dependent optically obscured AGN fraction $f_{\rm{obs}}$ from \citet{Merloni2014}: \begin{equation} f_{\rm{obs}}(l_x) = A + \frac{1}{\pi} \tan^{-1}\left(\frac{l_0 - l_x}{\sigma_x}\right), \end{equation} where $l_x = \log(L_{X} / {\rm{erg}\ {s}^{-1}})$ and their best-fit parameters from their X-ray selected sample are $A=0.56$, $l_0 = 43.89$, and $\sigma_x=0.46$.
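The obscured-fraction model is a one-line function of $l_x$. A sketch using the \citet{Merloni2014} best-fit parameters (the normalization $A$ is left as an argument, since the text renormalizes it to $A=0.5$):

```python
import math

def f_obs(log_lx, A=0.56, l0=43.89, sigma_x=0.46):
    """Optically obscured AGN fraction versus l_x = log10(L_X / erg s^-1),
    following the arctan form of Merloni et al. (2014)."""
    return A + math.atan((l0 - log_lx) / sigma_x) / math.pi
```

With $A=0.5$ the function asymptotes to 1 at low luminosity and to 0 at high luminosity, so essentially all very faint sources are assigned as obscured.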
However, we adopt the normalization $A=0.5$ to ensure $f_{\rm{obs}}$ asymptotes to unity at low luminosity. Formal uncertainties are not given by \citet{Merloni2014}, but the uncertainties in their luminosity bins are $\sim0.2$ dex in luminosity. We show the optically-obscured fraction as a function of $L_{\rm{bol}}$ in Figure~\ref{fig:obs} using the luminosity-dependent $2{-}10$ keV bolometric correction of \citet{Duras2020}. We randomly assign each of the $N_{\rm{draw, BH}}$ sources in our Monte Carlo model to be optically obscured or unobscured using the probability function shown in Figure~\ref{fig:obs}. We simply set the AGN luminosity to zero for optically obscured sources, with Equation~\ref{eq:SF1} ensuring their variability would be undetectable ($\rm{SF}_\infty^\prime \approx 0$ for $L_{\rm{AGN}}/(f_{\star}\ L_{\star}) \ll 1$). \section{Mock Observations}\label{sec:obs} \subsection{Light curves} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{SFinf_ptf.pdf} \includegraphics[width=0.5\textwidth]{SFinf_lsst.pdf} \caption{Measured light curve rms versus host galaxy aperture magnitude for our PTF (\emph{upper panel}; cf. Figure~3 of \citealt{Baldassare2020}) and LSST-like (\emph{lower panel}) mock samples. The variable sources are shown as colored points (magenta circles: ``heavy'' seed scenario; blue squares: ``light'' seed scenario; green triangles: ``light $+$ wanderers'' scenario), while the shaded contours show the total light curve distribution (darker contours being regions of higher density). The distributions are from a single, representative bootstrap realization of our model results. The number of data points has been reduced by a factor of 10 (PTF) or 100 (LSST) to improve clarity. The single-visit model photometric precision rms $\sigma_{1}$ versus apparent magnitude for LSST Rubin $g$-band following Equations~\ref{eq:ppm1} and \ref{eq:ppm2} \citep{Ivezic2019} is shown in gray.
To facilitate comparison, the dashed lines in the top panel and lower panel show the photometric precision model for LSST Rubin and PTF, respectively. \label{fig:phprec}} \end{figure} \begin{figure*} \includegraphics[width=0.98\textwidth]{varhist_ptf.pdf} \includegraphics[width=0.98\textwidth]{varhist_lsst.pdf} \caption{Distributions of (host diluted) asymptotic rms variability amplitude ${\rm{SF}_\infty}$, rest-frame damping timescale $\tau$, and Eddington luminosity ratios for all sources within the flux limit of the survey (gray) and variable sources detected in the survey for the different input seeding scenarios (magenta: ``heavy''; blue: ``light''; green: ``light $+$ wanderers'') for our PTF (\emph{upper panel}) and LSST-like (\emph{lower panel}) mock samples. The distributions are from a single, representative bootstrap realization of our model results. This figure demonstrates the resulting distributions of the parameters relative to the input distributions after variability selection. \label{fig:SFtau}} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.98\textwidth]{varfrac_0.3_ptf.pdf} \includegraphics[width=0.98\textwidth]{varfrac_0.3_lsst.pdf} \caption{Recovered (observed) variable (defined as $\sigma_{\rm{var}} > 2$) AGN fraction versus host galaxy stellar mass (\emph{left}) and aperture apparent magnitude (\emph{right}) for the input ``light'' (blue/square symbols) and ``heavy'' (magenta/circle symbols) seeding scenarios for our PTF (\emph{upper panel}) and LSST-like (\emph{lower panel}) models. These recovered variable fractions are computed by selecting for variable light curves following mock observations as described in \S\ref{sec:obs} after including all components of our demographics model as described in \S\ref{sec:model}. The current observational constraints and $1\sigma$ uncertainties from PTF are shown in red \citep{Baldassare2020}. 
We omit the data points in the most massive and faintest bins, where the sample is highly incomplete, for clarity. Our model results have greater statistical power at low stellar mass than the constraints from \citet{Baldassare2020} because that sample is limited to SDSS spectroscopically-targeted galaxies ($r\lesssim17.8$ mag), which is shallower than the PTF flux limit of $R \sim 20.5$ mag. \label{fig:varfrac}} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.98\textwidth]{massredshift_ptf.pdf} \includegraphics[width=0.98\textwidth]{massredshift_lsst.pdf} \caption{BH mass (\emph{left}) and stellar mass (\emph{right}) $-$ redshift distributions for the input ``light'' (blue squares), ``heavy'' (magenta circles), and ``light $+$ wanderers'' (green triangles) seeding scenarios for our PTF (\emph{upper panel}) and LSST-like (\emph{lower panel}) models. We assume measurement uncertainties of $\sim0.3$ dex in stellar mass. Darker contours represent denser regions of the distributions. The scatter points are the recovered variable sources, computed by selecting for variable light curves following mock observations as described in \S\ref{sec:obs} after including all components of our demographics model as described in \S\ref{sec:model}. The gray-scale contours represent the underlying distribution of all sources (variable and non-variable) in each model. The solid black curves represent the theoretical mass detection limits following Appendix~\ref{sec:appdetlim} assuming a typical rms variability amplitude of 0.1 mag. The distributions are from a single, representative bootstrap realization of our model results. The number of data points has been reduced by a factor of 10 (PTF) or 100 (LSST) to improve clarity.
\label{fig:massredshift}} \end{figure*} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{bhmfrecovered_lsst.pdf} \caption{Recovered (observed) BH mass function for the input ``light'' (blue/square symbols), ``heavy'' (magenta/circle symbols), and ``light $+$ wanderers'' (green triangle symbols) seeding scenarios for our LSST Rubin-like model. These recovered variable fractions are computed by selecting for variable light curves following mock observations as described in \S\ref{sec:obs} after including all components of our demographics model as described in \S\ref{sec:model}. \label{fig:varfracbh}} \end{figure} In order to perform source forecasts, we generate synthetic observations assuming LSST Rubin-like observational parameters. We focus our mock observations on the $g$-band, because the (diluted) AGN variability amplitude is typically larger at bluer wavelengths and the $u$-band suffers from worse single-epoch imaging depth. We generate realistic DRW light curves with a duration of 10 years, a cadence of 25 days, and a season length of 150 days, which roughly matches the expected median values of the ``baseline'' $g$-band LSST Rubin wide-fast-deep survey.\footnote{See \texttt{baseline\_v2.0\_10yrs} metrics at \url{http://astro-lsst-01.astro.washington.edu:8080/allMetricResults?runId=1}} We adopt the photometric precision model of LSST Rubin from \citet{Ivezic2019} of the form: \begin{equation} \label{eq:ppm1} \sigma_{1}^2 = \sigma_{\rm{sys}}^2 + \sigma_{\rm{rand}}^2, \end{equation} where $\sigma_{1}$ is the expected photometric error in magnitudes for a single visit, $\sigma_{\rm{sys}}$ is the systematic photometric error, and $\sigma_{\rm{rand}}$ is the random photometric error given by, \begin{equation} \label{eq:ppm2} \sigma_{\rm{rand}}^2 = (0.04 - \gamma)\ x + \gamma\ x^2\ {(\rm{mag}^2)} \end{equation} with $x\equiv10^{0.4(m-m_5)}$ where $m_{5}$ is the 5$\sigma$ limiting magnitude for point sources in a given band, and $\gamma$ is a
band-dependent parameter that depends on sky brightness and instrument properties. We use the expected $g$-band flux limit of $m_{5}=25.0$ mag, $\sigma_{\rm{sys}}=0.005$ mag, and $\gamma=0.039$ \citep{Ivezic2019}, which is in good agreement with mock observations from synthetic data \citep{Sanchez2020}. In order to enable comparison with the current observational constraints \citep{Baldassare2020}, we generate similar mock observations with the PTF \citep{Law2009}. We adopt a cadence of 5 days, a season length of 100 days, and a total survey length of 5 years. We use the same photometric precision model from \citet{Ivezic2019} but with an $R$-band flux limit now of $m_{5}=21.5$ mag, $\sigma_{\rm{sys}}=0.005$ mag, and $\gamma=0.035$. We obtained these values, which approximate the data in Figure~3 of \citet{Baldassare2020}, by eye. This is apparently more precise at fixed magnitude than the \citet{Ofek2012} PTF calibration. We show the photometric precision models and measured light curve rms values for LSST Rubin and the PTF in Figure~\ref{fig:phprec}. Taking our mock light curves with flux-dependent uncertainties, we then use a simple $\chi^2$-based variability metric to compute the variability significance: \begin{equation} \left[\chi^2/\nu\right]_{\rm var} = \frac{1}{\nu}\sum^N_{i=1} (m_i-\overline{m})^2 w_i, \end{equation} where the weighted mean $\overline{m}$ is given by, \begin{equation} \overline{m} = \frac{\sum^N_{i=1}m_i w_i}{\sum^N_{i=1} w_i}, \end{equation} with weights given by the reciprocal of the squared photometric uncertainties $w_i=1/\sigma_i^2$ on each measurement $m_i$ in magnitudes (e.g., \citealt{Butler2011,Choi2014}). We then convert this test statistic to a resulting significance $\sigma_{\rm var}$ in units of $\sigma$. This metric is statistically motivated, model independent, and fast to compute.
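The $\left[\chi^2/\nu\right]_{\rm var}$ statistic above can be sketched in a few lines. The conversion to $\sigma_{\rm var}$ is approximated here with the Wilson-Hilferty transform of the $\chi^2$ distribution; the exact conversion would use the $\chi^2$ survival function, and the text does not specify which implementation was used:

```python
import math
import numpy as np

def chi2_nu_var(mag, err):
    """Reduced chi-square about the weighted mean: the [chi^2/nu]_var statistic."""
    mag, err = np.asarray(mag, float), np.asarray(err, float)
    w = 1.0 / err**2                      # inverse-variance weights
    mbar = np.sum(mag * w) / np.sum(w)    # weighted mean magnitude
    nu = len(mag) - 1                     # degrees of freedom
    return float(np.sum((mag - mbar) ** 2 * w) / nu)

def sigma_var(mag, err):
    """Approximate variability significance in units of sigma
    (Wilson-Hilferty normal approximation to the chi-square)."""
    nu = len(mag) - 1
    x = chi2_nu_var(mag, err)
    return (x ** (1.0 / 3.0) - (1.0 - 2.0 / (9.0 * nu))) / math.sqrt(2.0 / (9.0 * nu))
```

For a non-variable light curve with well-estimated errors, $\chi^2/\nu \approx 1$ and $\sigma_{\rm var} \approx 0$; genuine variability in excess of the photometric errors drives both upward.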
Following \citet{Baldassare2020}, we consider a source to be variable if its light curve satisfies $\sigma_{\rm var}>2$, which implies a $\sim5\%$ false positive rate. We require the light curve input rms variability amplitude $\rm{SF}_\infty$ to be larger than the survey's photometric precision, i.e., $\rm{SF}_\infty > \sigma_1(m)$, where $m$ is the magnitude of the source and $\sigma_1$ is the photometric precision model (Equation~\ref{eq:ppm1}), to ensure that our variable sources are reliable detections. Our model does not include other contaminants, such as other variable transients (e.g., supernovae, tidal disruption events, or variable stars), or other (possibly non-Gaussian) systematic sources of light curve variability (i.e., non-photometric observations). Therefore, we have no need to introduce a classification metric for ``AGN-like'' variability. This makes our selection simpler and less dependent on the exact underlying process describing AGN light curves, but more idealized than reality. We show histograms of the ${\rm{SF}}_{\infty}$, $\tau$, and $\lambda_{\rm{Edd}}$ values for our sources in Figure~\ref{fig:SFtau}, highlighting our detected variable sources from realistic LSST Rubin-like light curves. \subsection{Observational Forecasts} \begin{table} \centering \caption{Number of expected IMBHs and massive BHs detectable with LSST Rubin over the WFD footprint.} \label{tab:num} \small \begin{tabular}{ccc} \hline \hline Seeding Scenario & Number IMBHs$^{a}$ & Number massive BHs$^{b}$ \\ \hline light (i) & $3.9^{+4.1}_{-3.0} \times 10^{2}$ & $1.5^{+0.6}_{-0.6} \times 10^{3}$ \\ heavy (ii) & $5.9^{+5.9}_{-5.9} \times 10^{0}$ & $5.9^{+1.5}_{-1.1} \times 10^{3}$\\ light $+$ wanderers (iii) & $9.7^{+6.2}_{-6.9} \times 10^{3}$ & $2.1^{+0.3}_{-0.7} \times 10^{4}$ \\ \hline \end{tabular}\\ {\raggedright $^{a}$ $10^2\ M_{\odot} < M_{\rm{BH}} < 10^4\ M_{\odot}$. \\ $^{b}$ $10^4\ M_{\odot} < M_{\rm{BH}} < 10^6\ M_{\odot}$.
\\ \par} \end{table} We compute the recovered (observed) fraction of variable galaxies in bins of stellar mass and magnitude using the criterion $\sigma_{\rm var}>2$ for both LSST Rubin ($g<25.0$ mag) and the PTF ($R<20.5$ mag) in Figure~\ref{fig:varfrac}. We assume a bright saturation limit of $R > 14$ mag for the PTF \citep{Ofek2012} and $g > 16$ mag for LSST Rubin \citep{Ivezic2019}. The uncertainties in the figure trace the uncertainties in the model itself. The slight uptick in the smallest mass bin for the PTF light seed scenario can result from small number statistics, because the smallest bins contain only a few sources. Recall that we have assumed $z<0.055$ and consider a source to be variable if $\sigma_{\rm var}>2$ and the rms variability is larger than the uncertainty given by the photometric precision model.\footnote{One need not necessarily use the rms constraint when constructing a version of Figure~\ref{fig:varfrac}, although the number of false positive detections would likely increase if this is not done. In fact, the $\sigma_{\rm{var}}$ threshold can be lowered further, or a different measure, such as the rolling average $\sigma_{\rm{var}}$ versus stellar mass, could be adopted, which may be more sensitive to the input occupation fraction.} We assume total survey solid angles of $\Omega=9380$ deg$^2$ and $\Omega=18,000$ deg$^2$ for the PTF and LSST Rubin, respectively. We show the distribution of stellar mass versus redshift for a single, representative bootstrap realization of our model results in Figure~\ref{fig:massredshift}. We also compute the recovered fraction of variable galaxies versus BH mass for our LSST Rubin-like model in Figure~\ref{fig:varfracbh}, although the BH mass is not usually a directly observable quantity.
Assuming an LSST Rubin-like footprint of $\Omega=18,000$ deg$^2$, the numbers of expected IMBHs in the mass range $10^2\ M_{\odot} < M_{\rm{BH}} < 10^4\ M_{\odot}$ and ``massive black holes'' ($10^4\ M_{\odot} < M_{\rm{BH}} < 10^6\ M_{\odot}$) detectable using optical variability for the various occupation fractions used in this work are enumerated in Table~\ref{tab:num}. Similar figures divided into the blue and red galaxy populations are shown in Appendix~\ref{sec:appredblue}. Our calculations indicate that LSST Rubin may be very promising for uncovering massive black hole and IMBH candidates, modulo the underlying occupation fraction. \subsection{Recoverability of black hole masses from variability timescales} \begin{figure*} \centering \includegraphics[width=0.32\textwidth]{tau_25_lsst.pdf} \includegraphics[width=0.32\textwidth]{tau_3_lsst.pdf} \includegraphics[width=0.32\textwidth]{tau_special_lsst.pdf} \caption{Recovered BH mass versus input BH mass using mock light curves for various cadence scenarios of 3 days, 25 days, and a hybrid cadence of 25 days plus a $\sim$hourly cadence for 5 days, assuming the BH mass$-$damping timescale relation of \citet{Burke2021}. The horizontal dashed gray line represents the BH mass with variability timescale equal to the limiting cadence of the light curves. The $y=x$ line is shown as a gray solid line. The hybrid cadence provides the best recovery of IMBH masses measured from realistic light curves of the three cadence modes. \label{fig:MBHrecovered}} \end{figure*} In order to determine how well one can recover the BH mass using optical variability information alone, we attempt to infer the input damping timescale $\tau$ values from mock light curves with different cadence scenarios. Because the dependence of the damping timescale on wavelength is weak \citep{MacLeod2010,Suberlak2021,Stone2022}, observations from multiple bands could be effectively combined to reduce the typical cadence to a few days.
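A DRW light curve can be sampled exactly at arbitrary epochs via its AR(1) (Ornstein--Uhlenbeck) update rule, since the process is Markovian. A minimal sketch of how such mock light curves can be generated (a simple stand-in, not the paper's actual simulation pipeline) is:

```python
import numpy as np

def simulate_drw(t, tau, sf_inf, mean_mag=20.0, seed=0):
    """Exact sampling of a damped random walk (OU process) at times t (days).
    sf_inf is the asymptotic rms variability amplitude, so the stationary
    variance of the process is sf_inf**2 / 2."""
    rng = np.random.default_rng(seed)
    var = 0.5 * sf_inf**2
    x = np.empty(len(t))
    x[0] = mean_mag + rng.normal(0.0, np.sqrt(var))
    for i in range(1, len(t)):
        r = np.exp(-(t[i] - t[i - 1]) / tau)  # AR(1) coefficient for this gap
        x[i] = (mean_mag + r * (x[i - 1] - mean_mag)
                + rng.normal(0.0, np.sqrt(var * (1.0 - r**2))))
    return x

# e.g., a 10-yr baseline sampled every 3 days (combined-band WFD-like cadence)
t = np.arange(0.0, 3650.0, 3.0)
lc = simulate_drw(t, tau=50.0, sf_inf=0.2)
```

Because the update is exact for any time gap, irregular or hybrid cadences can be simulated simply by changing the epoch array `t`.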
Recall that we have used the relation between $\tau$ and BH mass from \citet{Burke2021} to generate the mock DRW light curves. We then use the \textsc{celerite} \citep{Foreman-Mackey2017} package to infer $\tau$ values from these light curves following the procedure of \citet{Burke2021} using a maximum-likelihood fitting of a DRW Gaussian process to the light curve. Deviations from the DRW approximation may complicate the inference of a damping timescale. However, a more sophisticated analysis can be used to measure the damping timescales accurately \citep{Stone2022}. Our resulting recovered BH mass values from optical variability as a function of the input BH masses are shown in Figure~\ref{fig:MBHrecovered} for sources that are significantly variable ($\sigma_{\rm{var}}>2$) with an input cadence of 25 days ($g$-band wide-fast-deep cadence), 3 days (wide-fast-deep cadence combining all bands), and a hybrid cadence described below. Unsurprisingly, we find that we are unable to recover BH mass values below $M_{\rm{BH}} \sim 10^{6.4}\ M_{\odot}$ ($M_{\rm{BH}} \sim 10^{4.1}\ M_{\odot}$) given the limiting input cadence of $25$ ($3$) days. Using the \citet{Burke2021} relation, a $\tau$ value of 25 days corresponds to $M_{\rm{BH}}\sim10^5\ M_{\odot}$ with a $\sim0.3$ dex scatter in the BH mass direction. However, such IMBHs can be identified in principle from their significant variability, and the cadence can be used as a rough upper limit on the BH mass. We caution that other measures to select AGNs from the auto-correlation information are likely to miss AGNs with characteristic variability timescales less than the survey cadence, because such variability would be nearly indistinguishable from (uncorrelated) white noise.
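The mapping from a limiting cadence to the smallest recoverable BH mass follows from inverting the mass--timescale relation. The sketch below assumes the commonly quoted form $\tau \approx 107\,{\rm d}\,(M_{\rm BH}/10^8\,M_\odot)^{0.38}$ from \citet{Burke2021}; the exact coefficients should be taken from that work:

```python
import numpy as np

def mbh_from_tau(tau_days, tau0=107.0, slope=0.38):
    """Invert tau = tau0 * (M_BH / 1e8 Msun)**slope for M_BH in Msun.
    tau0 and slope are the approximate Burke et al. (2021) best-fit values."""
    return 1e8 * (tau_days / tau0) ** (1.0 / slope)

# Limiting cadences of 25 d (g-band WFD) and 3 d (all bands combined)
m_25 = np.log10(mbh_from_tau(25.0))   # roughly 10^6.3 Msun
m_3 = np.log10(mbh_from_tau(3.0))     # roughly 10^3.9 Msun
```

These limits are consistent with the quoted recovery floors of $M_{\rm BH} \sim 10^{6.4}\ M_\odot$ and $\sim 10^{4.1}\ M_\odot$ for the 25-day and 3-day cadences, respectively.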
In order to test the feasibility of using a custom-designed high-cadence mini-survey to identify IMBHs, we repeat the procedure above using a rapid cadence of observations separated by 2.4 hours for 5 days but with daytime gaps, followed by the standard wide-fast-deep cadence. This hybrid cadence is able to recover the input BH mass values reasonably well, albeit with increased scatter. These relations are derived from a subset of the total AGN population, and the true recovery will depend on other parameters, such as the Eddington ratio, as well as the exact cadence adopted. \section{Discussion} \label{sec:discussion} \subsection{Comparison with Previous Work} \subsubsection{Variable fraction} We have constructed a mock sample consistent with the PTF survey ($R<20.5$) to enable direct comparison with observed constraints on the optical variable fraction. We match the sample redshift distribution and survey parameters to observations \citep{Baldassare2020}. Our PTF-like model's recovered variable fraction for all occupation fractions tested here is consistent with \citet{Baldassare2020} within $2\sigma$ below $M_{\star} \sim 10^{9}\ M_{\odot}$. The larger discrepancy at high stellar masses could perhaps be explained by larger contamination in the \citet{Baldassare2020} sample at these masses due to non-AGN variability or some form of incompleteness. For example, more massive AGNs with luminous blue/UV emission could be confused as lower mass star-forming galaxies, flattening out the observed variability fraction. Another obvious possibility is errors from assumptions or extrapolations of uncertain relations in our model, for example, the exact dependence of the derived variability amplitude on the AGN luminosity and accretion rate. Nevertheless, we consider this agreement to be excellent given the assumptions made in our model.
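The ``consistent within $2\sigma$'' statement amounts to a bin-by-bin comparison of the model and observed fractions, with the two uncertainties added in quadrature. A sketch with placeholder numbers (not the actual \citet{Baldassare2020} measurements):

```python
import numpy as np

# Illustrative per-bin variable fractions and 1-sigma uncertainties
# (placeholder values only)
f_model = np.array([0.002, 0.005, 0.012, 0.030])
sig_model = np.array([0.001, 0.002, 0.004, 0.008])
f_obs = np.array([0.001, 0.004, 0.015, 0.055])
sig_obs = np.array([0.001, 0.002, 0.005, 0.010])

# Number of sigma by which model and observation differ per bin,
# with uncertainties combined in quadrature
nsigma = np.abs(f_model - f_obs) / np.hypot(sig_model, sig_obs)
consistent_2sigma = nsigma < 2.0
```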
\subsubsection{Active fraction} \begin{figure*} \centering \includegraphics[width=0.98\textwidth]{varfrac_0.3_ptf_A.pdf} \caption{Recovered (observed) variable AGN fraction in bins of stellar mass (\emph{left}) and aperture apparent magnitude (\emph{right}) for the active fraction prediction from \citet{Pacucci2021} (cyan/square symbols) for our PTF-like model. An active fraction of the form $\lambda_{\mathcal{A}} \propto (\log{M_{\star}})^{4.5}$ has a very similar shape to our model results in Figure~\ref{fig:varfrac} and is a reasonable match to the observational constraints. These recovered variable fractions are computed by selecting for variable light curves following mock observations as described in \S\ref{sec:obs} after including all components of our demographics model as described in \S\ref{sec:model}. The current observational constraints and $1\sigma$ uncertainties from PTF are shown in red \citep{Baldassare2020}. For clarity, we omit the data points in the most massive and faintest bins, where the sample is highly incomplete. \label{fig:varfracA}} \end{figure*} The active fraction---the fraction of galaxies radiating with Eddington luminosity ratio greater than $\lambda_{\rm{Edd, lim}}$---can be defined as, \begin{equation} \lambda_{\mathcal{A}}(M_{\star}, \lambda_{\rm{Edd, lim}}) = \lambda_{\rm{occ}}(M_{\star}) \times\frac{\int^{\lambda_{\rm{Edd, max}}}_{\lambda_{\rm{Edd, lim}}} \xi(\lambda_{\rm{Edd}}, \xi^\ast{=}1)\ d\lambda_{\rm{Edd}}}{\int^{\lambda_{\rm{Edd, max}}}_{\lambda_{\rm{Edd, min}}} \xi(\lambda_{\rm{Edd}}, \xi^\ast{=}1)\ d\lambda_{\rm{Edd}}} \end{equation} within the context of our model, where the ERDF $\xi$ is given by Equation~\ref{eq:ERDF}. Our definition differs slightly from the definitions adopted by other authors, who count any galaxy with an assigned $\lambda_{\rm{Edd}}$ value greater than $\lambda_{\rm{Edd, min}}$ toward the active fraction (e.g., \citealt{Weigel2017}).
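Within this definition, $\lambda_{\mathcal{A}}$ is simply $\lambda_{\rm occ}$ rescaled by the fraction of the ERDF's mass above $\lambda_{\rm Edd, lim}$. A numerical sketch, assuming an illustrative Schechter-like $\xi(\lambda)$ rather than the paper's actual ERDF:

```python
import numpy as np

def erdf(lam, lam_star=0.1, alpha=-1.0):
    # Illustrative Schechter-like ERDF shape; the paper's actual ERDF differs
    return (lam / lam_star) ** alpha * np.exp(-lam / lam_star)

def active_fraction(lam_occ, lam_lim, lam_min=1e-8, lam_max=10.0, n=200_000):
    """lambda_A = lambda_occ * [ERDF mass above lam_lim] / [total ERDF mass],
    evaluated on a log-spaced grid (d lambda = lambda * d ln lambda)."""
    lnlam = np.linspace(np.log(lam_min), np.log(lam_max), n)
    lam = np.exp(lnlam)
    w = erdf(lam) * lam          # integrand in ln-lambda space
    dln = lnlam[1] - lnlam[0]
    total = w.sum() * dln
    above = w[lam >= lam_lim].sum() * dln
    return lam_occ * above / total

frac_all = active_fraction(lam_occ=0.5, lam_lim=0.0)    # recovers lam_occ
frac_lim = active_fraction(lam_occ=0.5, lam_lim=1e-2)   # only detectable AGNs
```

As $\lambda_{\rm Edd, lim} \to \lambda_{\rm Edd, min}$, the ratio of integrals tends to unity and $\lambda_{\mathcal{A}} \to \lambda_{\rm occ}$, as the equation requires.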
In this work, we have assigned each BH a $\lambda_{\rm{Edd}}$ value, but allow $\lambda_{\rm{Edd}}$ to be so small that the accretion activity effectively goes undetected. A different approach was adopted by \citet{Pacucci2021}, who developed an alternate theoretical model to predict the active fraction of dwarf AGNs. Their approach derives the active fraction from the number density and angular momentum content of the gas at the Bondi radius (as a proxy for the angular momentum content near an IMBH). After calibrating the model to observations, \citet{Pacucci2021} find an active fraction $\lambda_{\mathcal{A}} \propto (\log{M_{\star}})^{4.5}$ for $10^{7}\ M_{\odot}<M_{\star}<10^{10}\ M_{\odot}$ for black holes accreting at $\lambda_{\rm{Edd}} \sim 0.1$. These arguments imply that the observed optically-variable fraction is roughly the product of the optically unobscured fraction and the active fraction $\lambda_{\rm{var}} \sim (1-f_{\rm{obs}}) \times \lambda_{\mathcal{A}}$. In our model, we have assumed two mass-independent ERDFs for the blue/green (generally less massive, radiatively efficient accretion) and red (generally more massive, radiatively inefficient accretion) galaxy populations \citep{Weigel2017}. In contrast, the arguments from \citet{Pacucci2021} can be interpreted as a stellar mass dependent ERDF (also see~\citealt{Shankar2013,Hickox2014,Schulze2015,Bongiorno2016,Tucci2017,Bernhard2018,Caplar2018}) as opposed to a galaxy color/type dependent one. To test what impact these different assumptions have on the results, we re-run our forward Monte Carlo model, substituting a continuum of Eddington ratios given by an ERDF for an active fraction of the functional form $\lambda_{\mathcal{A}} = 0.1 \times \left[\log({M_{\star}/M_{\odot}})/9\right]^{4.5}$, which closely matches the normalization in Figure 3 of \citet{Pacucci2021}. 
Here, active galaxies are assumed to have $\lambda_{\rm{Edd}} = 0.1$ with a dispersion of 0.2 dex (typical for low-$z$ AGN samples; \citealt{Pacucci2021,Greene2007}) and non-active galaxies have $\lambda_{\rm{Edd}} \approx 0$ as determined by random sampling. Our resulting detected variable fraction versus stellar mass for the PTF-like scenario is shown in Figure~\ref{fig:varfracA}. The resulting variable fraction has a very similar form to our model results. The computed variable fraction has a qualitatively similar scaling with magnitude and mass, which implies that the assumption of a mass-dependent ERDF does not strongly change the results, as expected if radiatively-efficient AGNs dominate the census. This is consistent with the findings of \citet{Weigel2017}. Therefore, we can conclude that our results and the existing observational constraints are broadly consistent with an active fraction of the form $\lambda_{\mathcal{A}} \propto (\log{M_{\star}})^{4.5}$ after calibrating the definition of ``active'' to the level of detectable accretion activity. This is reassuring and points to the fact that our model assumptions are reasonable. However, this simple Gaussian ERDF may not be consistent with the local AGN luminosity function. \subsection{The effect of uncertainty on stellar mass measurements}\label{sec:syserr} The broad-band SED of galaxies can be used to infer the stellar mass of galaxies in large photometric catalogs. Uncertainties on these stellar masses are typically $\sim 0.3$ dex and dominated by systematic uncertainties from model choices in stellar evolution (e.g., initial mass function, star formation history; \citealt{Ciesla2015,Boquien2019}). An additional problem is the degeneracy between star formation and AGN power-law emission. For example, Type 1 quasars with blue/UV power-law continuum emission from the accretion disk (i.e., the ``big blue bump'') can be confused for dwarf starburst galaxies.
This degeneracy can be more problematic when the redshift of the galaxy is uncertain or highly degenerate. Finally, variability from non-simultaneous observations can introduce additional errors in the SED. Because spectroscopic redshifts will not be available for every source in the large planned time-domain surveys, future work is needed to determine the strength of these degeneracies and how they can possibly be minimized (e.g., using the variability amplitude and timescale to independently constrain the strength of the AGN emission) over the entire range of stellar masses. We then consider how uncertainties on stellar mass measurements affect the occupation function analysis in Figure~\ref{fig:varfracA}, regardless of the exact sources of the uncertainty. To do this, we repeat the analysis of the variable fraction in Figure~\ref{fig:varfracA}, which assumes a $0.3$ dex uncertainty in stellar mass, using increasingly larger uncertainties of $0.6$ and $0.9$ dex in stellar mass. The results are shown in Appendix~\ref{sec:appsyserr}. We have assumed a Gaussian distribution for the uncertainties, which may not be strictly true. We see that as the uncertainties increase, the variable fraction ``flattens out'' as the stellar masses are smeared into adjacent bins, which would result in a larger number of false positive IMBH candidates. \subsection{Recovery of the host galaxy-black hole mass scaling relation} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{massbias_ptf.pdf} \includegraphics[width=0.5\textwidth]{massbias_lsst.pdf} \caption{Recovered $M_{\rm{BH}}-M_{\star}$ relation for variability-selected sources for the ``heavy'' and ``light'' seeding scenarios (strongly overlapping) compared to the input relation given by \citet{Reines2015} (gray) for PTF (\emph{upper panel}) and LSST-like (\emph{lower panel}) models.
\label{fig:massbias}} \end{figure} We show the recovered $M_{\rm{BH}}-M_{\star}$ relation for variability-selected sources to investigate the influence of variability selection effects in Figure~\ref{fig:massbias}. The more massive and luminous black holes tend to have larger observed variability amplitudes at fixed stellar mass due to having less host galaxy dilution (see discussion in \S\ref{sec:var}). See \citet{Lauer2007} for a related selection bias. We find that this bias results in variability-selected $M_{\rm{BH}}$ values that are on average larger by $\sim 1$ dex than expected from the \citet{Reines2015} relation for $M_{\star} < 10^{9}\ M_{\odot}$ host galaxies. This bias is only slightly reduced with more photometrically sensitive light curves. We therefore expect variability-selected IMBH candidates in dwarf galaxies to be strongly affected by this bias. This demonstrates the importance of obtaining additional $M_{\rm{BH}}$ estimates for variability-selected AGNs, such as from the variability timescale \citep{Burke2021c} or broad emission line signatures \citep{Shen2013}, rather than using the stellar mass alone as a proxy. \subsection{Extension beyond the local Universe} We have shown that the number of detectable IMBHs falls off quickly with redshift (Figure~\ref{fig:massredshift}), faster than the gain in volume. However, extensions of our model beyond the local Universe are straightforward if one is interested in AGNs with somewhat larger BH masses, $M_{\rm{BH}} \sim 10^5-10^6\ M_{\odot}$, that are detectable at intermediate redshifts (e.g., \citealt{Guo2020,Burke2021b}). To extend the treatment to higher redshifts, one could adopt the same GSMF form of Equation~\ref{eq:GSMF}, but adjust the parameters based on the redshift range using observational constraints on the GSMF evolution (e.g., \citealt{Marchesini2009,Adams2021}).
A model for the commensurate host-galaxy $K$-correction (e.g., \citealt{Chilingarian2010}) to the mass-to-light ratios would need to be considered. At intermediate redshifts, the host galaxy-BH mass relation may have a different normalization and slope that better describes the AGN population (e.g., \citealt{Caplar2018,Ding2020}). Obviously, the GSMF in the dwarf galaxy regime becomes less well-constrained with increasing redshift. In addition, whether and how the ERDF of the obscured AGN fraction changes with redshift is uncertain at present. Finally, there are other factors (e.g., dwarf galaxy-galaxy mergers) that complicate using the occupation fraction as a direct tracer of seeding scenarios at high redshift \citep{Volonteri2010,Ricarte2018,Mezcua2019,Buchner2019}. Investigations of IMBH evolution in dwarf galaxies using cosmological simulations that incorporate the relevant physics on these scales may help illuminate the properties of the evolving IMBH population \citep{Sharma2022,Haidar2022}. \subsection{Caveats \& Future work} Our methodology can be extended and applied to other wavelengths, such as sensitive X-ray observations of dwarf galaxies with eROSITA \citep{Predehl2021,Latimer2021} or time-domain UV imaging surveys \citep{Sagiv2014,Kulkarni2021}. Better constraints on the shape and normalization of the ERDF in the IMBH regime would help refine our forecasts for the total number of detectable variable dwarf AGNs. Ultimately, a variety of multi-wavelength probes are desired to derive robust constraints on the occupation fraction. Though counter-intuitive, it has been amply demonstrated by many previous workers, including \cite{Ricarte2018}, that local observations of the occupation fraction of black holes in low mass dwarf galaxies could serve to discriminate between high redshift initial seeding models.
Despite the modulation of black hole growth by accretion over cosmic time from early seeds, the memory of these initial conditions survives, in particular for the low mass galaxy hosts that preferentially host IMBHs. And while current observations do not offer conclusive evidence, the prospects for doing so are promising, as we describe below. Our modeling indicates that the ``light'' seeding scenario is slightly more consistent with current observational constraints from dwarf AGN variability; however, the current observational constraints in the dwarf galaxy regime (Figure~\ref{fig:varfrac}) are not particularly strong. The discriminating power of optical variability to distinguish between seeding scenarios lies in the capability to accurately measure the variable detected fraction in $M_{\star} \lesssim 10^8 M_{\odot}$ galaxies. Our model predictions for the occupation fractions in scenarios (i) and (ii) can be differentiated at the $2-3 \sigma$ level in the detectable variable fraction at $M_{\star} \lesssim 10^8 M_{\odot}$ (see Figure~\ref{fig:varfrac}). Therefore, we are unable to strongly rule out either seeding scenario (or a mixture of several) at this time, except for ones that predict occupation fractions of zero in dwarf galaxies. The large uncertainties here are dominated by uncertainties in the GSMF, optical variability properties, and scatter in the host-mass scaling relation. We expect constraints on some of these quantities to improve dramatically in the near future. We have made some assumptions in our model using the average properties of the galaxy population to predict variability amplitudes. For example, the predicted observed variability amplitudes in our model depend on our population-level model of host galaxy color index and the level of contamination in the light curve aperture. In order to eliminate these assumptions, one could directly use catalog properties, e.g.
measured host galaxy luminosities within light curve apertures, from the parent sample of the observations, as long as one is cautious about the relevant selection biases in the parent sample properties. Additionally, we caution that the \citet{MacLeod2010,Suberlak2021,Burke2021} parameters are likely to be affected by selection biases, and whether these relations hold in the ADAF/RIAF regime is also somewhat uncertain. Nevertheless, we have demonstrated the expected capabilities and prospects of the LSST Rubin wide-fast-deep survey for IMBH identification via optical variability. With robust observational constraints, the problem could be turned around to become an inference problem to constrain the multiple free parameters in our model with priors derived from observational constraints \citep{Caplar2015,Weigel2017}. Improved constraints on the optical variability properties in the IMBH regime will further reduce the uncertainties. Additionally, a wide-field, deep, flux-limited catalog of stellar masses of low-redshift galaxies is urgently needed in the Southern Hemisphere to obtain enough statistical power to distinguish between seeding mechanisms with LSST Rubin. \subsection{A note on the optical variability amplitude} The arguments in \S\ref{sec:var} could pose a quantifiable, unified interpretation of the nuclear optical variability amplitude of galaxies and AGNs, where the intrinsic variability amplitude is set by the accretion rate and BH mass, but the resulting observed variability amplitude is diluted by the host galaxy emission. This approach provides quantitative phenomenological predictions for IMBH optical variability, which is argued to show fast and small amplitude variability (e.g., \citealt{Martinez-Palomera2020}). \section{Conclusions} \label{sec:conclusions} We have investigated prospects for IMBH discovery using optical variability with LSST Rubin by building a forward Monte Carlo model beginning from the galaxy stellar mass function.
After assuming several possibilities for the BH occupation fraction, and incorporating observed galaxy-BH scaling relations, we demonstrate our model's capability to reproduce existing observations. Below, we summarize our main conclusions: \begin{enumerate} \item We confirm the discriminating power of optical variability to distinguish between BH occupation fractions by accurately measuring the variable detected fraction in the $M_{\star} \sim 10^6\ - 10^8 M_{\odot}$ regime. \item Current observational constraints are, however, insufficient to constrain early seeding scenarios given their limited statistical power and the theoretical uncertainties in this regime. However, they are inconsistent with an IMBH occupation fraction of zero near $M_{\star} \sim 10^8 M_{\odot}$. \item We demonstrate that the resulting BH masses may be biased toward larger $M_{\rm{BH}}$ on average at fixed $M_{\star}$ from an Eddington-type bias, depending on the photometric precision of the survey. \item Given these findings, we forecast detection of up to $\sim 10^2$ IMBHs with LSST Rubin using optical variability assuming an optimistic ``light'' seeding scenario, and perhaps more if there exists a population of wandering IMBHs with an Eddington ratio distribution similar to that of SMBHs in red galaxies. \item A targeted $\sim$hourly cadence program over a few nights can provide constraints on the BH masses of IMBHs given their expected rapid variability timescales. \end{enumerate} \section*{Acknowledgements} We thank Chris Done and Rodrigo Nemmen for helpful discussions. We thank Konstantin Malanchev and Qifeng Cheng for referring us to an improved algorithm for generating DRW time series. CJB acknowledges support from the Illinois Graduate Survey Science Fellowship. YS acknowledges support from NSF grant AST-2009947. XL and ZFW acknowledge support from the University of Illinois Campus Research Board and NSF grant AST-2108162.
PN gratefully acknowledges support at the Black Hole Initiative (BHI) at Harvard as an external PI with grants from the Gordon and Betty Moore Foundation and the John Templeton Foundation. This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958. This research made use of Astropy,\footnote{\url{http://www.astropy.org}} a community-developed core Python package for Astronomy \citep{astropy2018}. \section*{Data Availability} The data used in this work is available following the references and URLs in the text. Our pre-computed SED templates are available at \url{https://doi.org/10.5281/zenodo.6812008}. \bibliographystyle{mnras}
\subsection{Introduction} Topological insulators (TIs) have attracted significant interest from the quantum materials community for their surface electronic structures, notably surface Dirac fermions with spin-momentum locking that are robust to disorder and defects \cite{Ando2013,Hasan2010,Wehling2014,Keimer2017}. Chemical intercalation of TIs (i.e., the insertion of atoms or molecules between quintuple layers) can further modify the material properties \cite{Paraskevopoulos1988,Bludska2010,Choi2011, Shruti2015,Mazumder2018,Yonezawa2018,Wang2020} or create additional electronic phenomena, such as superconductivity with Cu intercalation ($T_{c,\text{max}} \sim \SI{3.5}{\kelvin}$) \cite{Wray2010,Hor2010,Sasaki2011,Kriener2011,Das2013,Yonezawa2016,Yonezawa2018}. The Cu-intercalated TI \ce{Bi2Se3} (\ce{Cu_xBi2Se3}) is a known superconductor, with $T_{c,\text{max}}$ and the superconducting shielding fraction strongly depending on the intercalated Cu content and preparation method \cite{Hor2010, Sasaki2011, Kriener2011, Schneeloch2015}. Controlling the Cu content and its variation during and after synthesis is thus important for realizing ideal superconducting properties in \ce{Cu_xBi2Se3}. Intercalated materials may be exposed to air during sample preparation or real-world usage, such as in the operation of devices, which may affect composition. For instance, the intercalate guest can diffuse within battery cells left at a fixed potential, and it is known that intercalants can diffuse within and between layers in various compounds \cite{Whittingham1978,Ryabishchenkova2015, Ye2021}. The effect of ambient conditions on intercalant chemistry is often not fully understood due to experimental difficulties, despite its practical importance. In this work, we use \ce{Cu_xBi2Se3} as a representative material to establish the effects of controlled ambient environments on evolving chemistry in the near-surface region in intercalated TIs.
The near-surface Cu composition was found to increase under both controlled \ce{O2} dosing and ambient air exposure, coincident with the formation and growth of an oxide layer that is strongest in full atmosphere. Lastly, core-electron spectroscopy simulations show that our XPS observations are consistent with a sample developing vertical gradient distributions of \ce{Cu} and \ce{Se} upon exposure to a controlled \ce{O2} environment. \begin{figure*}[t] \includegraphics[width=1.0\textwidth]{figure_1_v4_5.png} \caption{XPS and ARPES characterizations of freshly cleaved \ce{Cu_{0.15}Bi2Se3} and \ce{Cu_{0.3}Bi2Se3}. (a) \ce{Cu_{0.15}Bi2Se3} survey spectrum taken with $E_{ph} = \SI{650}{\electronvolt}$ in vacuum. Cu, Bi, and Se core levels are labeled, with the dashed box indicating the shallow core levels analyzed in this work. The adventitious C 1s peak is also labeled. (b) ARPES spectrum of the topological surface state in \ce{Cu_{0.15}Bi2Se3}. (c) Shallow core levels in the dashed box taken with $E_{ph} = \SI{230}{\electronvolt}$, \SI{370}{\electronvolt}, \SI{650}{\electronvolt}, and \SI{900}{\electronvolt} in vacuum. For best comparison, spectra are shown normalized to their Se 3d peaks and offset for clarity. (d) Illustration of the loss feature background in a \ce{Bi2Se3} sample, with the individual plasmon resonances indicated in green and cyan. (e) Corresponding spectrum in \ce{Cu_{0.3}Bi2Se3} showing the Cu 3p peaks (brown curve) and loss feature background after dosing \SI{0.133}{\milli\bar} \ce{O2}. The black arrow indicates the additional spectral weight from the Cu. (d) and (e) were taken with $E_{ph} = \SI{370}{\electronvolt}$.} \label{fig_1} \end{figure*} \subsection{Materials and methods} Synchrotron ambient pressure XPS (AP-XPS) experiments were performed at the Advanced Light Source Beamline 9.3.2 in a photon energy range of $E_{ph} = 230-\SI{900}{\electronvolt}$, with a spot size of $d\sim\SI{1}{\milli\meter\squared}$. 
The core level spectra were collected with a Scienta R4000 HiPP electron analyzer with differential pumping, allowing the sample to remain at ambient pressures during data acquisition \cite{Grass2010}. The analysis environment was initially at high vacuum ($\sim\SI{1e-7}{\milli\bar}$), and then with \ce{O2} at ambient pressure (\SI{0.133}{\milli\bar}). The measured partial pressure of \ce{O2} is equivalent to the total ambient pressure in the AP-XPS experiment. Longer timescale experiments were performed using a Kratos AXIS Supra$^+$ with a monochromated, unpolarized $E_{ph} = \SI{1486.6}{\electronvolt}$ Al-K$\alpha$ source. Samples were cleaved \textit{in situ} with the top post method in the AP-XPS experiment, and cleaved \textit{ex situ} with Scotch tape in the Kratos XPS experiment, keeping air exposure under several minutes prior to the initial measurements. ARPES spectra were collected at the Advanced Light Source Beamline 4.0.3 (MERLIN) with $E_{ph} = \SI{39.2}{\electronvolt}$. Core level peaks were identified using binding energy reference values \cite{X_ray_handbook}. \ce{Bi2Se3} samples were grown using the Bridgman method, and were then intercalated with Cu to form \ce{Cu_xBi2Se3} crystals using a solution-based process \cite{Koski2012}. Due to the variation of Cu and Se content highlighted in our study, our convention is to use the nominal bulk stoichiometries after synthesis (\ce{Cu_{0.15}Bi2Se3} and \ce{Cu_{0.3}Bi2Se3}) to distinguish between samples in our analysis. No indications of superconductivity were found in these samples, which depends on the details of the synthesis and the intercalation \cite{Schneeloch2015, Yu2019} (see Supplemental Material for further details). \subsection{Results} Figure \ref{fig_1} provides initial XPS and ARPES characterizations of \ce{Cu_{0.15}Bi2Se3} and \ce{Cu_{0.3}Bi2Se3} at several different photon energies. Fig. 
\ref{fig_1}(a) shows a survey spectrum of the core levels accessible with $E_{ph} = \SI{650}{\electronvolt}$ in the AP-XPS experiment. The survey confirms the quality of the \textit{in situ} cleaved sample, showing peaks for Cu, Bi, and Se. Fig. \ref{fig_1}(b) shows an ARPES spectrum on an \textit{in situ} cleaved sample, clearly showing the topological surface state and Dirac point at $E_B\sim\SI{0.3}{\electronvolt}$, indicating maintenance of crystallinity and topological electronic features after Cu intercalation. Fig. \ref{fig_1}(c) shows the three core levels present in the shallow binding energy region in Fig. \ref{fig_1}(a): Cu 3p, Se 3d, and Bi 5d, none of which show initial oxidation or hydroxylation. This set of three shallow core levels is sufficient to determine the chemical composition of \ce{Cu_{0.15}Bi2Se3} and \ce{Cu_{0.3}Bi2Se3} throughout the oxidation process; the relative peak intensities are proportional to the elemental composition after correcting for the relative sensitivity factor (RSF) for each photon energy \cite{Yeh1985, Shard2020}. To quantify the Cu content, we monitor the binding energy region around $E_B\sim \SI{76}{\electronvolt}$ where the Cu 3p doublet is present. All Cu 3p data are measured above the loss feature-containing background (Figs. \ref{fig_1}(d,e)). To accurately quantify the Cu content in Fig. \ref{fig_1}, which is present on top of a relatively large background, we utilize a background correction procedure incorporating the electron energy loss features in the \ce{Cu_{0.3}Bi2Se3} XPS spectrum. The loss features near $E_B\sim \SI{72}{\electronvolt}$ (green and cyan curves in Figs. \ref{fig_1}(d,e)) originate from bulk plasmon resonances in \ce{Bi2Se3} \cite{Nascimento1999}. Since this background is also present in a \ce{Bi2Se3} reference sample from the same batch (Fig. 
\ref{fig_1}(d)), we use the \ce{Bi2Se3} spectrum from a newly cleaved sample as a reference for fitting to the plasmon losses in the \ce{Cu_{0.3}Bi2Se3} spectrum (Fig. \ref{fig_1}(e)). After fitting Voigt peaks to the Se 3d peaks and the loss features in the reference spectrum, the result is a background containing the loss features (dashed curve). Fitting this background to the \ce{Cu_{0.3}Bi2Se3} spectrum allows us to isolate the Cu 3p signal intensity, shown as a separate Voigt doublet (brown curve). The areas of the fitted Cu, Se, and Bi peaks then correspond to the total XPS intensity for these elements. Additional information on the Cu quantification, RSF assumptions, and plasmon loss feature correction can be found in the Supplemental Material. Figure \ref{fig_2} shows the main results of the AP-XPS experiment, which track the evolution of the Cu and Se composition from an initial condition with no \ce{O2} dosing, and later times in \SI{0.133}{\milli\bar} \ce{O2} to simulate ambient atmosphere. The chemical compositions are expressed in terms of the atomic fractions of Cu/Bi and Se/Bi for each time and photon energy, with $x = 2\times\text{Cu/Bi}$. To determine the atomic fractions, the XPS intensities determined from the peak fitting were weighted by the RSF for each element and photon energy (see Supplemental Material). The atomic fractions are normalized to Bi since Bi changes relatively little compared to Cu and Se in our experiments. Over the course of the four-day experiment, the Cu fraction $x$ increases from $x = 0.13$ to $x = 0.40$ after dosing \ce{O2} for the $E_{ph} = \SI{900}{\electronvolt}$ data, with smaller changes seen at lower, more surface-sensitive $E_{ph}$ (Fig. \ref{fig_2}(a)). The growth of the Cu/Bi ratio is consistent with Cu migrating vertically to the probed-surface region of the sample, since the only additional Cu present is deeper within the sample, with no external Cu deposition.
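The composition bookkeeping reduces to dividing each fitted peak area by its relative sensitivity factor and taking ratios. A sketch with placeholder areas and RSFs (not the measured values from this experiment):

```python
# Fitted peak areas (arbitrary units) and relative sensitivity factors (RSFs)
# at one photon energy -- all numbers here are illustrative placeholders
areas = {"Cu 3p": 1.2, "Se 3d": 14.4, "Bi 5d": 24.0}
rsf = {"Cu 3p": 0.8, "Se 3d": 1.0, "Bi 5d": 2.5}

# RSF-corrected intensities are proportional to atomic abundance
n_at = {el: areas[el] / rsf[el] for el in areas}

cu_bi = n_at["Cu 3p"] / n_at["Bi 5d"]
se_bi = n_at["Se 3d"] / n_at["Bi 5d"]
x = 2.0 * cu_bi  # Cu_x Bi2 Se3 has 2 Bi per formula unit
print(f"Cu/Bi = {cu_bi:.3f}, Se/Bi = {se_bi:.3f}, x = {x:.3f}")
```

The same arithmetic is repeated per photon energy, so that each probing depth yields its own Cu/Bi and Se/Bi ratios.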
This is best seen when expressed versus the inelastic mean free path (IMFP) for each photon energy in Fig. \ref{fig_2}(b), which provides a length scale for the observed Cu migration. The measured depth ($3\times$IMFP) at each photon energy accounts for depths greater than one IMFP, which still contribute to the XPS measurement. A similar plot of the Se/Bi fraction in Fig. \ref{fig_2}(c) shows a steady decrease in Se/Bi at all $E_{ph}$ after dosing \ce{O2}, decreasing from the ideal Se/Bi = 3/2 fraction of \ce{Bi2Se3}. There are large differences in the initial and final Se/Bi fractions for the different $E_{ph}$, implying a large variation in the Se distribution, with the bulk rich in Se and the surface deficient in Se (Fig. \ref{fig_2}(d)). Between one and four days in the AP-XPS chamber, the changes to Cu/Bi and Se/Bi are less pronounced than those that happen immediately after dosing \ce{O2}. Similar trends have also been observed in another AP-XPS experiment in a mixed \ce{O2}/\ce{H2O} environment (see Supplemental Material). Figure \ref{fig_3} shows the initial and final Bi 5d peaks for all $E_{ph}$ before and after dosing \ce{O2}, quantifying the growth of oxides. The initial-state peaks are well separated, showing little to no sign of Bi oxidation (Fig. \ref{fig_3}(a)). After four days in the AP-XPS chamber, chemically-shifted peaks are present for $E_{ph} = \SI{650}{\electronvolt}, \SI{370}{\electronvolt}$, and $\SI{230}{\electronvolt}$ (Fig. \ref{fig_3}(b), dashed colored lines). These changes to the Bi levels directly show the formation of \ce{Bi2O3} at the surface, which is most dominant at lower $E_{ph}$. The noise in the initial $E_{ph} = \SI{230}{\electronvolt}$ spectrum (blue, left) is due to the low photon flux and low photoionization cross-section of Bi at this particular $E_{ph}$, which was ameliorated in the final state (blue, right) by extending the measurement time.
The large width of the oxide components in the Bi 5d spectra suggests a convolution of several electrostatically (i.e., band bending) or chemically distinct states, which are most pronounced in the spectrum measured with highest surface sensitivity ($E_{ph} = \SI{230}{\electronvolt}$). Fig. \ref{fig_3}(c) quantifies the growth of the \ce{Bi2O3} peaks relative to the total Bi peak intensity, showing that oxidation starts immediately after dosing \ce{O2}, with a larger oxide fraction seen with the more surface-sensitive $E_{ph} = \SI{370}{\electronvolt}$ (compared to $E_{ph} = \SI{650}{\electronvolt}$). Interestingly, most of the oxidation process occurs within the first day and little change is observed thereafter, as is the case with the Cu and Se changes. \begin{figure}[h] \includegraphics[width=0.5\textwidth]{figure_2_v4.png} \caption{Evolution of Cu and Se composition in \ce{Cu_{0.3}Bi2Se3} in the AP-XPS experiment. (a) Cu/Bi atomic ratios for four different photon energies, initially in vacuum and later in \SI{0.133}{\milli\bar} \ce{O2}. (b) Initial and final Cu/Bi ratios expressed in terms of the IMFP and the measured depth ($3\times$IMFP) for each $E_{ph}$ as determined from (a). Initial: in vacuum after cleaving; Final: in \ce{O2} for 4 days. (c) Se/Bi atomic ratios for the same photon energies and conditions. Dashed line at Se/Bi $= 1.5$ indicates the ideal Se/Bi = $3/2$ ratio. (d) Initial and final Se/Bi ratios from (c) expressed in terms of IMFP and measured depth.} \label{fig_2} \end{figure} To gain insight into changes in the \ce{Cu_{0.15}Bi2Se3} composition over longer timescales ($t > \SI{20}{\hour}$), we continue by performing a different set of lab-based \textit{ex situ} XPS measurements after exposing a \ce{Cu_{0.15}Bi2Se3} sample to air at full atmospheric pressure, with increasing periods of air exposure. The loss feature-corrected spectra for Cu 3p are shown in Fig.
\ref{fig_4}(a), with large changes in the Cu/Bi fraction after exposure to air for multiple days (Fig. \ref{fig_4}(b)). In Fig. \ref{fig_4}(c), we again see the oxidation of the surface, with the Bi oxides showing stronger growth over several weeks. The evolution of \ce{Bi2O3} in Fig. \ref{fig_4}(d) shows an initial jump after exposure to air for 0.9 days, followed by continued growth afterward. Oxidation of Se is also seen (see Supplemental Material), and we note that this sample had a smaller initial Cu concentration than the sample measured with AP-XPS. \begin{figure}[h] \includegraphics[width=0.5\textwidth]{figure_3_v4.png} \caption{Oxidation of \ce{Bi} in \ce{Cu_{0.3}Bi2Se3} in the AP-XPS experiment. (a) Bi 5d levels before dosing \ce{O2} and (b) after four days of oxidation in \SI{0.133}{\milli\bar} \ce{O2}. Solid colored lines indicate the Bi 5d components of the overall fits, and the dashed colored lines indicate the \ce{Bi2O3} contributions. (c) \ce{Bi2O3}/Bi ratio over the course of the entire experiment for two photon energies.} \label{fig_3} \end{figure} \begin{figure}[t] \includegraphics[width=0.45\textwidth]{figure_4_v4.png} \caption{Growth of Cu 3p and Bi oxidation at longer ($t > \SI{20}{\hour}$) timescales in \ce{Cu_{0.15}Bi2Se3}. (a) Increasing Cu 3p signal over 11 days of air exposure with $E_{ph} = \SI{1486.6}{\electronvolt}$. All Cu 3p spectra are loss feature-corrected and normalized to their total Bi 5d intensities, including oxide peaks. (b) Cu/Bi atomic ratio after air exposure. (c) Evolution of Bi 5d peaks over time, showing the appearance of \ce{Bi2O3} peaks. (d) Growth of \ce{Bi2O3}/\ce{Bi} over many days.} \label{fig_4} \end{figure} \subsection{Discussion} Our interpretation of the present data is guided by prior studies of pristine \ce{Bi2Se3} exposed to air and to controlled ambient environments.
Exposure to ambient environments has been shown to alter measured angle-resolved photoemission spectroscopy (ARPES) spectra and the surface composition in \ce{Bi2Se3} \cite{Benia2011,Chen2012,Green2016,Thomas2015}. The topological surface states can be modified by forming 2D quantum well states in ambient conditions \cite{Chen2012}, and can exhibit band bending and controlled charge doping after dosing \ce{H2O} \cite{Benia2011} and after UV irradiation \cite{Sakamoto2021}. In most cases, the robust topological surface states are still present in \ce{Bi2Se3} despite air exposure and oxidation \cite{Chen2012,Yang2020}. The thinness of the oxide layer and the robustness of the topological surface states are common themes in \ce{Bi2Se3}, including the persistence of surface states in intercalated samples \cite{Tanaka2012,Ye2021}, with some works not reporting any surface reactivity \cite{Yashina2013,Atuchin2011}. Still, the surface chemistry in \ce{Bi2Se3} remains an open question, particularly after intercalation. To begin the discussion of our results, we first turn to the observations seen in the AP-XPS experiment. The main result in Fig. \ref{fig_2} shows that the increase in Cu 3p peak intensity over the course of the experiment is coincident with the introduction of \ce{O2} gas, showing that Cu migrates to the surface region during measurement. This behavior has not been previously reported or quantified in a topological insulator with XPS, although some indications of Cu near the surface have been reported with STM imaging after cleaving \cite{Hor2010}. The probing depth in XPS is mainly limited by the inelastic mean free path (IMFP) of the escaping electrons at each $E_{ph}$, so measuring with several different $E_{ph}$ to vary the IMFP allows one to obtain a depth profile of the elements in the sample.
Over the photon energy range $E_{ph} = 230-\SI{900}{\electronvolt}$ in \ce{Cu_{0.3}Bi2Se3}, the IMFP of Cu 3p photoelectrons ranges from $7.6-\SI{18.9}{\angstrom}$, calculated in QUASES using the TPP2M algorithm \cite{Tanuma1994}. Measured XPS intensities generally follow an exponential form for attenuation \cite{Tougaard2021}: \begin{equation}\label{eqn1} dI=I_0 \cdot X(z) \cdot e^{-z/\lambda\cos(\theta)}dz, \end{equation} with a total emitted photoelectron intensity $I_0$, vertical depth $z$, atomic fraction $X(z)$ at each depth $z$, the IMFP $\lambda$, and the photoelectron collection angle $\theta$ from the surface normal. From Eq. \ref{eqn1}, the measured intensity is $I \approx 0.95I_0$ within three IMFPs below the surface, providing an upper limit for the probing depth. Fig. \ref{fig_2}(b) shows the initial and final distributions of Cu/Bi as a function of IMFP and the measured depth ($3\lambda$) in the AP-XPS experiment (black and gray solid lines), showing the growth of Cu from deeper within the sample. The greatest Cu increase is at a measured IMFP of \SI{18.9}{\angstrom}, which corresponds to Cu migrating into the top \SI{6}{\nano\meter} surface region of the sample. Another notable observation is the decrease in Se content relative to Bi in Fig. \ref{fig_2}(c) over the course of the AP-XPS experiment. Initially after cleaving (that is, prior to the first data points in Fig. \ref{fig_2}(c)), the Se 3d intensity is already reduced, and it continues to drop over the course of the experiment below the ideal $\ce{Se}/\ce{Bi}= 3/2$. The initial distribution of Se when expressed versus IMFP in Fig. \ref{fig_2}(d) shows that the uppermost \SI{2}{\nano\meter} (2 quintuple layers) is deficient in Se, while the uppermost \SI{6}{\nano\meter} (6 quintuple layers) is richer in Se and representative of bulk stoichiometric \ce{Bi2Se3}. After four days in \ce{O2}, Se/Bi decreases at all depths.
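The $3\times$IMFP probing-depth rule follows directly from the exponential attenuation in Eq.~\ref{eqn1}; a minimal numerical check, assuming normal emission ($\theta = 0$) and a uniform composition $X(z)$:

```python
import math

def fraction_within(depth_A, imfp_A, theta_deg=0.0):
    """Fraction of the total XPS intensity originating above depth_A,
    from integrating dI = I0 * exp(-z / (lambda * cos(theta))) dz
    with a uniform atomic fraction X(z)."""
    lam_eff = imfp_A * math.cos(math.radians(theta_deg))
    return 1.0 - math.exp(-depth_A / lam_eff)

imfp = 18.9  # angstrom; Cu 3p IMFP at E_ph = 900 eV quoted in the text
f = fraction_within(3 * imfp, imfp)
print(f"I/I0 from the top 3 IMFPs: {f:.3f}")  # ~0.950, the probing-depth criterion
```

Since $1 - e^{-3} \approx 0.95$ independently of $\lambda$, the $3\lambda$ criterion holds at every photon energy used here.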
This observation of a long Se gradient with minimal Se at the surface is consistent with the well-known volatility of Se in \ce{Bi2Se3}. \ce{Bi2Se3} generally has selenium vacancies, which make samples naturally $n$-type without further chemical compensation \cite{Chen2012,Biswas2015,Benia2011,Bianchi2010,Kong2011,Tumelero2016,Wang2011,Hou2019,Gross2021}. However, our results indicate that these selenium vacancies may be more concentrated near the surface, while the deeper bulk is closer to nominal stoichiometry. The Fermi level $E_F$ in the ARPES spectrum in Fig. \ref{fig_1}(b) intersects the bulk conduction band, confirming that this specimen is $n$-type at the surface region, with an IMFP of \SI{4.9}{\angstrom} for $E_{ph} = \SI{39.2}{\electronvolt}$ calculated in QUASES. Cu doping is also known to shift the chemical potential further into the bulk conduction band \cite{Wang2011}. A prior XPS/AFM study has observed small Bi islands that appear within one hour after cleaving \cite{Green2016}, which is also consistent with the decreasing Se/Bi in Fig. \ref{fig_2}(c), suggesting that some Bi migration could also be occurring along with Cu. However, the larger increases in Cu/Bi we see suggest that any Bi migration would be very small, below the sensitivity of a standard XPS instrument \cite{Green2016} and indistinguishable from Bi oxides at the surface. \begin{figure}[h] \includegraphics[width=0.45\textwidth]{figure_5_v4.png} \caption{SESSA modelling of surface compositional gradients in \ce{Cu_xBi2Se_y}. (a) Proposed Cu distributions for the initial state (before dosing \ce{O2}) and the final state (after dosing \ce{O2} for four days). The initial Cu distribution is uniform, and the final Cu distribution has developed a Cu gradient, including a \SI{2}{\angstrom} \ce{Bi2O3} overlayer. Black arrows indicate the passage of time. (b) Proposed Se gradients before and after dosing \ce{O2}.
The final Se distribution has a steeper gradient when nearing the surface, with a \SI{2}{\angstrom} \ce{Bi2O3} overlayer. (c) Comparison of the AP-XPS Cu/Bi intensity ratios (black squares) to SESSA simulated ratios (red squares) for the initial and final states. (d) AP-XPS Se/Bi intensity ratios compared to SESSA values.} \label{fig_5} \end{figure} While Cu migration is already visible on shorter timescales, the process continues and is more easily seen at the longer timescales ($t > \SI{25}{\hour}$) in Fig. \ref{fig_4}(a). On these timescales the oxidation of the near-surface Bi is evident with new oxide peaks, consistent with prior work \cite{Green2016,Kong2011} and the oxide peaks we see during the AP-XPS experiment in Fig. \ref{fig_3}. The oxidation is stronger and continues for longer in full atmosphere than in the AP-XPS experiment ($P = \num{1.3e-4}$ atm), even when considering the deeper probing depth of the $E_{ph} = \SI{1486.6}{\electronvolt}$ Al-K$\alpha$ source (IMFP = \SI{28.9}{\angstrom}). The oxidation is accompanied by a steady increase in Cu 3p over several days (Fig. \ref{fig_4}(b)), greater than what was observed in AP-XPS. The link between oxidation and Cu migration is clear when looking at the trends in Figs.~\ref{fig_2}--\ref{fig_4}: both Cu migration and \ce{Bi2O3} formation start right after dosing \ce{O2}, and when oxide growth slows between 1 and 4 days, Cu growth also slows, changing only slightly. This suggests that surface oxides establish the conditions needed for Cu to diffuse towards the surface. There are several microscopic mechanisms that can promote Cu migration, such as a surface work function mismatch between \ce{Bi2O3} and \ce{Cu_xBi2Se3} that can drive the Cu to the surface with a built-in $E$-field. The work function difference between \ce{Bi2O3} and \ce{Bi2Se3} is estimated to be $\sim\SI{1.6}{\electronvolt}$ \cite{Morasch2013, Takane2016}, which would create a sufficiently large $E$-field near the surface.
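As a rough consistency check on this mechanism, the field implied by a $\sim\SI{1.6}{\electronvolt}$ work function mismatch can be estimated; the $\sim\SI{1}{\nano\meter}$ interface length scale below is an illustrative assumption on our part, not a measured quantity.

```python
delta_phi_V = 1.6  # work function difference Bi2O3 vs Bi2Se3, in volts (from the text)
length_m = 1e-9    # assumed interface/screening length (~1 nm): an illustrative guess

# Field magnitude for a potential drop delta_phi_V over length_m
e_field = delta_phi_V / length_m  # V/m
print(f"Implied built-in field: {e_field:.1e} V/m")
```

A field on the order of $10^9$~V/m over the interface region is consistent with the qualitative statement that the mismatch is sufficient to drive Cu drift toward the surface.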
Cu is able to occupy five different sites in the van der Waals gap and in interstitial vacancies \cite{Wang2011, Tumelero2016}, and some migration could be driven by Cu diffusion among these sites. Occupation of surface Se vacancies by Cu could also influence the Cu migration to the surface, which is possible due to the amphoteric character of Cu impurities in \ce{Bi2Se3} \cite{Vako1974}. To connect the proposed changes in chemical composition to the measured XPS intensities, we model our experimental results with core-electron spectroscopy simulations using the National Institute of Standards and Technology (NIST) Simulation of Electron Spectra for Surface Analysis (SESSA) software/database \cite{Smekal2005, Werner2017}. SESSA can accurately simulate XPS spectra and peak intensities for different experimental conditions, geometries, and sample compositions using database reference values. As strongly suggested by our experimental observations, Cu and Se form a compositional gradient in the near-surface region of the material. We model a sample with Cu and Se gradients as a structure consisting of several discretized, homogeneous layers with varying Cu, Bi, and Se compositions, shown in Figs. \ref{fig_5}(a,b). The initial Cu distribution is assumed to be constant. The gradients have an $X(z)=Ae^{-z/L}+B$ falloff when approaching the surface, with fitting parameters $A$, $L$, and $B$, which were chosen to match the boundary conditions observed in experiment. Due to the oxidation present after \ce{O2} dosing, the final simulated structure is also capped with a thin \SI{2}{\angstrom} overlayer of \ce{Bi2O3}. The red points in Figs. \ref{fig_5}(c,d) show the SESSA-calculated peak intensities for the initial and final structures, expressed in terms of the Cu/Bi and Se/Bi intensity ratios versus IMFP and the measurement depth.
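The discretized exponential profile can be sketched as follows; the boundary values are taken from the measured Cu range ($x \approx 0.13$ to $0.40$), while the decay length and layer count are illustrative placeholders rather than the fitted SESSA parameters.

```python
import math

def discretized_gradient(x_surface, x_bulk, L_A, n_layers=10, thickness_A=1.0):
    """Discretize X(z) = A*exp(-z/L) + B into homogeneous layers, with
    boundary conditions X(0) = x_surface and X(z -> infinity) = x_bulk."""
    B = x_bulk
    A = x_surface - x_bulk
    # Evaluate the profile at each layer midpoint (depths in angstroms)
    return [A * math.exp(-(i + 0.5) * thickness_A / L_A) + B for i in range(n_layers)]

# Final-state Cu profile: enriched at the surface, decaying to the bulk value
cu_profile = discretized_gradient(x_surface=0.40, x_bulk=0.13, L_A=5.0)
print([round(v, 3) for v in cu_profile])
```

A Se profile follows the same form with $A < 0$, giving a depletion rather than an enrichment toward the surface.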
In the initial state, a structure with a single homogeneous Cu composition and no Cu gradient matches the observed constant dependence versus IMFP in Fig. \ref{fig_2}(b) closely. Initially, no Cu has migrated into the surface regions and no oxide has formed yet. In the final state after oxidation, the dual Cu, Se gradient structure agrees best with our measured XPS intensities in Figs. \ref{fig_5}(c,d), capturing both the increase in Cu and decrease in Se near the surface. We note that there is greater error when comparing the gradient to experiment at the smallest depth, due to the low photon flux at $E_{ph} = \SI{230}{\electronvolt}$. Modelling the uppermost \SI{10}{\angstrom} of \ce{Cu_xBi2Se3} is motivated by the fact that the top \SI{10}{\angstrom} contributes predominantly to the measured XPS intensity. Because of the exponential falloff at greater depths, compositional changes deeper in the sample are more difficult to detect. Additionally, \SI{10}{\angstrom} is the approximate thickness of one quintuple layer in \ce{Bi2Se3}, and the van der Waals gap can serve as a barrier to deeper oxidation of the material \cite{Green2016}. Thus, it is likely that the oxidation is limited to the uppermost quintuple layer in the AP-XPS experiment, with lesser contributions below. There are other factors that are not present in this model which can also affect the evolution of \ce{Cu_xBi2Se3} surface chemistry. These include imperfect cleaves that can form step-edge sites for oxidation \cite{Thomas2015}, lingering Cu remaining at the surface after the cleave, different Bi, Se surface terminations \cite{Biswas2015}, as well as nanosheet morphology \cite{Jia2016,Kong2011}. Other Cu, Se compositional distributions are possible and consistent with our experimental observations, such as ones with discontinuous step edges or sigmoidal distributions.
\subsection{Conclusion} In summary, we have observed an increase in the surface Cu content in the intercalated TI \ce{Cu_xBi2Se3} with ambient-pressure XPS measurements. Our results show that Cu migrates to the surface, a process enhanced by the appearance and growth of surface oxides over several days, with the most pronounced changes seen in full atmosphere. Modelling Cu migration concomitant with Se depletion and oxidation matches our depth-selective XPS observations for a wide range of photon energies. These findings show that oxidation can be used as an approach for driving chemical species towards the surface of layered intercalated materials, and introduce additional chemical complexity that must be considered at TI surfaces exposed to ambient conditions. Chemically tailoring the surfaces of topological materials will be needed for realizing real-world environmental applications in chemical sensing, catalysis, and electronics. Most intriguingly, the proximity of the topological surface states to the observed chemical changes in \ce{Cu_xBi2Se3} points to further study of the effect of intercalants on the surface states of TIs, particularly on timescales that allow environmental changes to influence them. \vspace{5mm} \begin{acknowledgments} We thank Henrique Martins, Jonathan Denlinger, and Andrew Thron for helpful discussions. A.L.G. and S.N. acknowledge funding through the Laboratory Directed Research and Development (LDRD) Program at Lawrence Berkeley National Laboratory under a grant titled “Photoemission Investigations of Layered 2D Materials”, and A.L.G. acknowledges subsequent support from the Alfred P. Sloan Foundation (FG-2019-12170). L.J.F. acknowledges support of the Humboldt Foundation, Bonn, Germany. Purchase of the Kratos AXIS Supra+ XPS instrument used to collect data in Fig. 4 was supported by the National Science Foundation under the award NSF-MRI-1828238, and collection of these data was supported by NSF-DMR-1838532.
ARPES experiments were supported by AFOSR Grant No. FA9550-18-1-0156. This research used resources of the Advanced Light Source, which is a US Department of Energy Office of Science User Facility under contract No. DE-AC02-05CH11231. R.R.U. and V.T. acknowledge support from the UC Lab Fees Research Program (LFR-20-653926) and the UC Davis Physics Liquid Helium Laboratory Fund. \end{acknowledgments} \section{XPS data analysis} XPS intensities for Bi 5d and Se 3d are determined by fitting Voigt doublets with a Shirley background to the spectra, using Kolibrik KolXPD for peak fitting and calculating peak areas. To determine oxide content, Bi oxide peaks were quantified by fitting a separate set of chemically shifted Voigt doublets to the spectra, keeping spin-orbit coupling and branching ratios fixed. To accurately quantify chemical content, all XPS intensities were weighted by the relative sensitivity factor (RSF) for each element at each excitation energy. The predominant term of the RSF is the photoionization cross-section $\frac{d\sigma}{d\Omega}$, calculated using elemental cross-sections and asymmetry values from the ELETTRA database \cite{Yeh1985,Yeh1993}, and converted to differential form by the Yeh and Lindau formula \cite{Yeh1985} using the experimental geometries for ALS Beamline 9.3.2 and for the Kratos AXIS SUPRA$^+$. The analyzer transmission function is not included in the RSF because the Cu 3p, Se 3d, and Bi 5d levels span a sufficiently small $\sim\SI{50}{\electronvolt}$ kinetic energy range that the transmission function contribution is negligible. The photon flux and inelastic mean free path corrections were excluded as these are constant for each photon energy, and are not needed for calculating Cu/Bi and Se/Bi atomic ratios. In all figures, core level binding energies were offset in energy to be consistent with NIST and SESSA database values.
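The RSF-weighted quantification described in this section reduces to a simple ratio of corrected peak areas; in the sketch below the peak areas and RSF values are illustrative placeholders, not our measured values, chosen only to give ratios of the observed magnitude.

```python
def atomic_ratio(area_a, rsf_a, area_b, rsf_b):
    """Atomic ratio A/B from fitted XPS peak areas, each weighted by its RSF."""
    return (area_a / rsf_a) / (area_b / rsf_b)

# Hypothetical fitted peak areas (arb. units) and RSFs at a single photon energy
areas = {"Cu3p": 0.55, "Bi5d": 100.0, "Se3d": 36.3}
rsfs = {"Cu3p": 0.35, "Bi5d": 9.5, "Se3d": 2.3}

cu_bi = atomic_ratio(areas["Cu3p"], rsfs["Cu3p"], areas["Bi5d"], rsfs["Bi5d"])
se_bi = atomic_ratio(areas["Se3d"], rsfs["Se3d"], areas["Bi5d"], rsfs["Bi5d"])
x = 2 * cu_bi  # Cu content x in Cu_x Bi2 Se3, since the ratios are per Bi atom

print(f"Cu/Bi = {cu_bi:.3f}, Se/Bi = {se_bi:.3f}, x = {x:.3f}")
```

Because the photon flux and IMFP factors cancel in such same-energy ratios, they can be omitted from the RSF, as noted above.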
\section{\ce{Bi2Se3} loss feature correction} This section describes the procedure that was implemented to systematically background-correct the AP-XPS \ce{Cu_xBi2Se3} spectra in the vicinity of the Cu 3p core levels to isolate the Cu 3p intensity (see main text). This procedure has the advantage of incorporating the plasmon energy loss features in the \ce{Cu_xBi2Se3} spectra, and is able to discriminate between the small Cu 3p intensity and the loss features for each excitation energy. The background is initially estimated by fitting to \ce{Bi2Se3} reference spectra at each excitation energy containing the plasmon energy loss features \cite{Nascimento1999}. To model the loss features, several Voigt peaks and a linear background were fit to the reference \ce{Bi2Se3} spectrum (Fig. \ref{fig_Cu_loss}(a,b)), forming the composite background that was subsequently fitted to the \ce{Cu_xBi2Se3} spectra in Fig. \ref{fig_Cu_loss}(c). The peak widths, peak ratios, and energy spacing between individual peaks were fixed, with only the amplitudes varying as fitting parameters, up to an overall binding energy offset determined by the Se 3d peak. The Cu 3p peaks were then fitted with another Voigt doublet to quantify the Cu 3p XPS intensity. It is also possible to quantify Cu 3p after directly subtracting a normalized \ce{Bi2Se3} reference spectrum, which was done for a separate AP-XPS experiment in a mixed \ce{O2}/\ce{H2O} environment (not shown in the main text). A similar increasing Cu/Bi trend was seen with this background correction method, shown in Fig. \ref{fig_O2_H2O}(a). This will be discussed below in Section V. \begin{figure*}[h] \includegraphics[width=0.9\columnwidth]{supp_Bi2Se3_loss.png} \captionsetup{justification=centering,singlelinecheck=false} \caption{Loss feature fits in \ce{Bi2Se3} XPS spectra, right after cleaving. (a) Fitted XPS spectrum for \ce{Bi2Se3} including Se 3d core levels and plasmon loss features.
The measured data are shown in red, the individual fitted peaks are the colored curves, and the combined fit result is the solid black curve. A linear background was fitted over the entire region. (b) Zoom-in of the plasmon loss-feature background in \ce{Bi2Se3}, with the linear background subtracted.} \label{fig_Cu_loss} \end{figure*} \section{O 1s levels in \ce{Cu_{0.3}Bi2Se3}} Additional spectra taken with $E_{ph}=\SI{650}{\electronvolt}$ in the AP-XPS experiment show that the growth of the \ce{Bi2O3} features is coincident with the growth of the O 1s peak. Right after the cleave, adventitious O is adsorbed on the surface, as seen by the non-zero initial O 1s peak (Fig. \ref{fig_oxides_650eV}, right). After dosing \ce{O2}, the O 1s peak steadily grows, reaching its highest intensity at the end of the AP-XPS experiment when the Bi is most oxidized. We note that the O 1s peak is still growing at the end of the AP-XPS experiment, consistent with our longer timescale observations that show continued oxidation over several weeks. \begin{figure*}[h] \includegraphics[width=1.0\columnwidth]{supp_oxides_650eV.png} \captionsetup{justification=centering,singlelinecheck=false} \caption{Oxidation of \ce{Cu_{0.3}Bi2Se3} in the AP-XPS experiment, taken with $E_{ph}=\SI{650}{\electronvolt}$. Left: growth of the Bi 5d levels initially after cleaving and after four days dosing \ce{O2}. Bi 5d peaks have been normalized to the total Bi intensity, and offset for clarity. Right: unnormalized O 1s peaks during the AP-XPS experiment.} \label{fig_oxides_650eV} \end{figure*} \newpage \section{Additional Cu migration data} At higher photon energy ($E_{ph} = \SI{1486.6}{\electronvolt}$) using the Kratos AXIS SUPRA$^+$, several higher binding energy Cu, Bi, and O levels are observable in the survey spectra: Cu 2p$_{3/2}$, Cu 2p$_{1/2}$ ($E_B=932.7,\SI{952.3}{\electronvolt}$), Bi 4s ($E_B= \SI{939}{\electronvolt}$), and O KL23L23, KL1L23 Auger peaks ($E_B=970.6,\SI{990.1}{\electronvolt}$).
These are shown in Fig. \ref{fig_Cu_2p}. The Cu 2p levels near $E_B = 933-\SI{952}{\electronvolt}$ are more intense than Cu 3p and generally preferable for quantifying Cu and \ce{Cu2O}. However, tracking Cu 3p rather than Cu 2p is necessary due to the strongly overlapping Bi 4s level at $E_B = \SI{939}{\electronvolt}$, so only a qualitative analysis of the Cu 2p spectra is possible. Nevertheless, separate observations also show that the Cu 2p levels increase in intensity after exposing \ce{Cu_{0.15}Bi2Se3} samples to air for several weeks, simultaneously with growing Cu 3p levels. Freshly cleaved \ce{Cu_{0.15}Bi2Se3} shows only a small Bi 4s peak, with no Cu 2p or O Auger peaks. After \SI{20}{\hour} in air, Cu 2p peaks appear, consistent with the increase in the Cu 3p peaks at $E_B=\SI{76}{\electronvolt}$ after \SI{20}{\hour}. O Auger peaks also appear, which indicate surface oxidation. After \SI{163}{\hour}, the Cu 2p peaks continue to grow relative to the O Auger peak (blue arrows), demonstrating that Cu migration continues on the same timescale as seen with Cu 3p. Interestingly, a small peak on the higher binding energy side of Cu 2p$_{1/2}$ forms after 2 months, which is not observed in a similarly oxidized \ce{Bi2Se3} sample. This peak is consistent with a CuO shakeup peak or a different oxidation state of Cu, indicating that a small amount of Cu probably oxidizes after many weeks of air exposure. \begin{figure*}[h] \includegraphics[width=1\columnwidth]{supp_Cu_2p.png} \captionsetup{justification=centering,singlelinecheck=false} \caption{XPS spectra with Bi 4s and Cu 2p core levels, taken with $E_{ph} = \SI{1486.6}{\electronvolt}$. (a) XPS spectrum of a freshly cleaved \ce{Cu_{0.15}Bi2Se3} sample, showing a small Bi 4s peak and no visible Cu 2p peaks. (b) Spectrum of \ce{Cu_{0.15}Bi2Se3} after 20 hrs in air. Note the appearance of Cu 2p and O Auger peaks.
The distance between the O Auger peak and the Cu 2p$_{3/2}$ peak is indicated by the spacing between the blue dashed lines. (c) Spectrum of \ce{Cu_{0.15}Bi2Se3} after 163 hrs, showing growth of the Cu 2p peaks compared to the O Auger peaks (longer blue arrow). (d) Spectrum of \ce{Cu_{0.15}Bi2Se3} after 2 months, showing further growth of the Cu 2p peaks. (e) Spectrum of \ce{Bi2Se3} sample kept in air for several months, for comparison.} \label{fig_Cu_2p} \end{figure*} \newpage \section{AP-XPS in a mixed \ce{O2}/\ce{H2O} environment} Another set of AP-XPS experiments was performed in a mixed \ce{O2}/\ce{H2O} environment (100 mTorr/20 mTorr), showing a similar increase in Cu/Bi and decrease in Se/Bi consistent with the \ce{O2} experiment (Fig. \ref{fig_O2_H2O}). At \SI{10}{\hour} after measurements began, 20 mTorr \ce{H2O} was introduced into the chamber. At \SI{20}{\hour}, the sample was heated to \SI{80}{\celsius} from room temperature. The growth of the Cu 3p peaks can be expressed as loss feature-corrected spectra at different times (Fig. \ref{fig_O2_H2O}(a)) and in terms of the atomic fraction Cu/Bi (Fig. \ref{fig_O2_H2O}(b)). The increase in signal-to-noise ratio in Fig. \ref{fig_O2_H2O}(a) is indicative of increased Cu content being measured at later times. The XPS spectrum of the same \ce{Cu_{0.15}Bi2Se3} sample was collected with the Kratos AXIS SUPRA$^+$ ($E_{ph} = \SI{1486.6}{\electronvolt}$) after 1 month and 24 days of aging in air. Shown in Fig. \ref{fig_after_1_month}, the Cu 3p peaks and the Se and Bi oxide peaks are considerably larger after extended air exposure. We note the growth of the valence band states in Fig. \ref{fig_after_1_month}(b), which are also consistent with greater Cu near the surface.
\begin{figure*}[h] \includegraphics[width=0.8\columnwidth]{supp_O2_H2O.png} \captionsetup{justification=centering,singlelinecheck=false} \caption{AP-XPS data for \ce{Cu_{0.15}Bi2Se3} in a mixed \ce{O2}/\ce{H2O} environment, measured with $E_{ph} = \SI{370}{\electronvolt}$. (a) Evolution of Cu 3p peaks, after correcting for the loss feature background. (b) Growth of the Cu/Bi ratio and Cu fraction $x$ for several conditions: before dosing \ce{O2}, dosing \ce{O2}, dosing mixed \ce{O2}/\ce{H2O}, and heating when dosing \ce{O2}/\ce{H2O}. (c) Decrease of the Se/Bi ratio for the same conditions.} \label{fig_O2_H2O} \end{figure*} \begin{figure*}[h] \includegraphics[width=1.0\columnwidth]{supp_after_1_month.png} \captionsetup{justification=centering,singlelinecheck=false} \caption{\ce{Cu_{0.15}Bi2Se3} sample after oxidation in air. (a) Shallow core levels including the valence band measured with $E_{ph} = \SI{900}{\electronvolt}$, prior to any \ce{O2}/\ce{H2O} dosing. (b) XPS spectrum taken after the AP-XPS experiment and 1 month, 24 days in air, using $E_{ph} = \SI{1486.6}{\electronvolt}$.} \label{fig_after_1_month} \end{figure*} \newpage \section{Se depletion and oxidation} In the mixed \ce{O2}/\ce{H2O} AP-XPS experiments, the Se 3d peak intensities were observed to drop right after cleaving, and continued to decrease at later times (Fig. \ref{fig_Se_3d}(a)). These observations show that the Se content is highly volatile even in the earliest moments of handling the sample and when starting \ce{O2} dosing. Due to the lower partial pressure of \ce{O2}, no Se oxidation was observed in the AP-XPS experiments, and Se oxidation was therefore not included in the SESSA model. Se oxidation can be observed, but only in measurements taken after oxidation in full atmosphere for several days. In Fig. \ref{fig_Se_3d}(b), a feature on the higher binding energy side of the Se 3d levels indicates that Se is oxidized, and follows the growth trend shown in Fig. \ref{fig_Se_3d}(c).
\clearpage \begin{figure*}[h] \includegraphics[width=0.7\columnwidth]{supp_Se_3d.png} \captionsetup{justification=centering,singlelinecheck=false} \caption{Se depletion and oxidation in \ce{Cu_{0.15}Bi2Se3}. (a) Se depletion after cleaving and dosing \ce{O2} in the AP-XPS experiment ($E_{ph} = \SI{370}{\electronvolt}$). (b) Se 3d peaks over several days of oxidation, with \ce{SeO2} visible at higher $E_B$ ($E_{ph} = \SI{1486.6}{\electronvolt}$). (c) Growth of \ce{SeO2} relative to \ce{Se} over several days.} \label{fig_Se_3d} \end{figure*} \section{\ce{Bi2Se3} and \ce{Cu_$\lowercase{x}$Bi2Se3} sample preparation} \ce{Bi2Se3} samples were grown using a modified form of the Bridgman method \cite{NissonThesis2015,Bridgman1925}, starting with pure Bi and Se precursors mixed into an evacuated quartz ampoule in a 2:3 ratio. After melting the initial mixture, the ampoule was placed into the furnace with a temperature gradient from \SI{750}{\celsius} to \SI{650}{\celsius}. The ampoule was pulled through the gradient at a rate of \SIrange{2}{3}{\milli\meter\per\hour}, forming high-purity \ce{Bi2Se3} at the end of the process. Additional information on the custom apparatus at UC Davis is available in the indicated reference \cite{NissonThesis2015}. A solution-based process was used to intercalate the \ce{Bi2Se3} with Cu to form \ce{Cu_xBi2Se3} \cite{Koski2012}. The Bridgman-synthesized \ce{Bi2Se3} samples were placed in a tetrakisacetonitrile copper hexafluorophosphate solution and heated in \SI{5}{\milli\liter} of acetone at \SI{45}{\celsius}, just below reflux, for \SI{4}{\hour}. Samples were then heated under vacuum at \SI{180}{\celsius} for \SI{7.5}{\minute}, and the entire sequence was repeated for four cycles. After synthesis, samples were kept away from moisture during storage to avoid \ce{H2O} contamination. Samples were mounted on a conductive substrate with silver epoxy, and for cleaving in the AP-XPS experiment, ceramic top posts were adhered to the tops of the samples with silver epoxy.
A mechanical arm was used to detach the top posts when cleaving \textit{in situ}. \section{Characterization of superconductivity in \ce{Cu_xBi2Se3}} \begin{figure}[h] \includegraphics[]{supp_CuBi2Se3_Most.pdf} \caption{\label{fig:Susceptibility} The DC magnetic susceptibility was measured on \ce{Cu_xBi2Se3} with nominal $x = 0.15$ and $x = 0.3$, down to 1.9\,K in a Quantum Design MPMS. No superconducting transition was observed in either sample.} \end{figure} We measured the DC magnetic susceptibility of a selection of \ce{Cu_xBi2Se3} samples in a Quantum Design SQUID Magnetometer (MPMS). The \ce{Cu_xBi2Se3} samples were wedged between multiple straws and oriented so the applied field was parallel to the $c$-axis. These choices were made to limit the diamagnetic background signal observed when alternative mounting methods are used. The $M/H$ vs. $T$ plot shows that none of the samples have a significant superconducting shielding fraction (less than 0.3\%). For a superconducting sample, one would expect a 100\% shielding fraction (i.e. $M/H=-1$) in zero-field-cooled measurements. In the literature, samples have been reported to have shielding fractions up to 50\% for $x=0.4$ in Kriener et al. \cite{Kriener2011} and up to 56\% for a Cu content of $x=0.35$ in Schneeloch et al. \cite{Schneeloch2015}. Additionally, a more recent study from Fang et al. \cite{Fang2020} reported a 17\% superconducting shielding fraction estimate after considering demagnetization effects. The shielding fraction can depend on the method of synthesis. Melt-growth methods involving quenching from above \SI{560}{\celsius} followed by annealing are known to produce superconducting samples, while floating-zone crystal growth methods do not tend to produce superconducting samples \cite{Schneeloch2015}. Electrochemical synthesis also tends to promote superconductivity in \ce{Cu_xBi2Se3} \cite{Kriener2011}.
\section{Introduction} An \emph{iterated function system} is by definition a tuple $(T_1,\ldots,T_N)$ of contracting transformations of some metric space $X$, which in this article will be taken to be $\mathbb{R}^d$. To avoid trivialities it will be assumed throughout this article that $N \geq 2$. If $(T_1,\ldots,T_N)$ is an iterated function system acting on $\mathbb{R}^d$ then it is well-known that there exists a unique nonempty compact subset $Z\subset \mathbb{R}^d$ with the property $Z =\bigcup_{i=1}^N T_iZ$, called the \emph{attractor} or \emph{limit set} of the iterated function system. If additionally any probability vector $(p_1,\ldots,p_N)$ is specified then there exists a unique Borel probability measure $m$ on $\mathbb{R}^d$ such that $m=\sum_{i=1}^Np_i (T_i)_*m$. In the case where the transformations $T_i$ are contracting similitudes of $\mathbb{R}^d$ we call the limit set $Z$ a \emph{self-similar set} and the measure $m$ a \emph{self-similar measure}. For each $x \in \mathbb{R}^d$ and $r>0$ let $B_r(x)$ denote the open Euclidean ball with radius $r$ and centre $x$. If $m$ is a Borel probability measure on $\mathbb{R}^d$ such that the limit \[\lim_{r\to 0} \frac{\log m(B_r(x))}{\log r}\] exists for $m$-a.e.~$x$ and is constant $m$-a.e., we say that $m$ is \emph{exact-dimensional} and define the dimension of $m$ to be the value of this almost-everywhere limit. We denote the dimension of such a measure by $\dim m$. It was shown in 2009 by D.-J. Feng and H. Hu that every self-similar measure on $\mathbb{R}^d$ is exact-dimensional \cite{FeHu09}. We denote the Hausdorff dimension of any subset $Z$ of $\mathbb{R}^d$ by $\dimh Z$.
An iterated function system is said to satisfy the \emph{open set condition} if there exists a nonempty open set $U$ such that $T_iU \subseteq U$ for all $i=1,\ldots,N$ and such that $T_iU \cap T_jU =\emptyset$ whenever $i \neq j$, and is said to satisfy the \emph{strong open set condition} if additionally $U \cap Z \neq \emptyset$. The starting point of the motivation for this article is the following landmark theorem of J.E. Hutchinson \cite{Hu81}: \begin{theorem}[Hutchinson]\label{th:hutch} Let $T_1,\ldots,T_N \colon \mathbb{R}^d \to \mathbb{R}^d$ be contracting similitudes of the form $T_ix:=r_i O_ix+v_i$ for some $r_i \in (0,1)$, $O_i \in O(d)$ and $v_i \in \mathbb{R}^d$ and suppose that $(T_1,\ldots,T_N)$ satisfies the open set condition. Then the Hausdorff dimension of the attractor $Z$ of the iterated function system $(T_1,\ldots,T_N)$ is equal to the unique real number $s \in (0,d]$ such that $\sum_{i=1}^N r_i^s=1$. Moreover there exists a unique self-similar measure $m$ supported on $Z$ with dimension $s$. \end{theorem} The extension of Theorem \ref{th:hutch} in various directions has been an active topic of research since its original publication. One major area of research has been the problem of understanding systematically what happens when the open set condition is removed (such as in \cite{BaFrMa19,BrMoSi04,Ho15,Ho14,LiVa16,Ra17,SaShSo18,So95}) and this line of research has focused especially on the dimensions of the resulting measures as opposed to the resulting sets. A second major direction of extension of Theorem \ref{th:hutch} is that in which the transformations $T_i$ are allowed to be arbitrary affine contractions instead of similitudes: this line of research dates back to the work of Bedford, McMullen and Falconer in the 1980s \cite{Be84,Fa88,Mc84} and has been particularly active within the last few years (see for example \cite{BaHoRa19,BaKa17,BaFe13,BoMo18,DaSi17,Fe19,FeSh14,KaMo18,Ra18}). 
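Since each $r_i \in (0,1)$, the map $s \mapsto \sum_{i=1}^N r_i^s$ is continuous and strictly decreasing, so the dimension appearing in Theorem \ref{th:hutch} can be located numerically by bisection. The following minimal sketch (illustrative only; the function name is ours and not part of the argument) does exactly this:

```python
import math

def hutchinson_dimension(ratios, lo=1e-9, hi=50.0, tol=1e-12):
    """Solve sum(r_i**s) = 1 for s by bisection.

    f(s) = sum(r_i**s) - 1 is strictly decreasing when every
    0 < r_i < 1, so a single sign change brackets the unique root.
    """
    f = lambda s: sum(r ** s for r in ratios) - 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Middle-third Cantor set: two similitudes of ratio 1/3,
# giving s = log 2 / log 3.
print(hutchinson_dimension([1/3, 1/3]))  # ≈ 0.6309
```

Under the open set condition the root automatically lies in $(0,d]$; in general the Hausdorff dimension of the attractor is of course capped at $d$.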
It is with this second direction of extension that this article is concerned. When $(T_1,\ldots,T_N)$ is an iterated function system consisting of affine contractions of $\mathbb{R}^d$ the attractor of $(T_1,\ldots,T_N)$ is referred to as a \emph{self-affine set} and Borel probability measures satisfying $m=\sum_{i=1}^Np_i (T_i)_*m$ are referred to as \emph{self-affine measures}. It was shown recently by D.-J. Feng in \cite{Fe19} that every self-affine measure is exact-dimensional; previous partial results in this direction include \cite{Ba15,BaKa17,FrJoJu18}. Let us now describe the most natural generalisation of Hutchinson's dimension formula $\sum_{i=1}^N r_i^s=1$ to the affine context. We recall that the \emph{singular values} of a $d\times d$ real matrix $A$ are defined to be the square roots of the (necessarily non-negative) eigenvalues of the positive semidefinite matrix $A^\top A$. We denote the singular values of $A$ by $\sigma_1(A),\ldots,\sigma_d(A)$ where it is always understood that $\sigma_1(A) \geq \sigma_2(A) \geq \cdots \geq \sigma_d(A)$. Following the notation of \cite{Fa88}, given a $d\times d$ real matrix $A$, for each $s \geq 0$ we define the \emph{singular value function} $\varphi^s(A)$ by \[\varphi^s(A):=\left\{\begin{array}{cl}\sigma_1(A)\cdots \sigma_{\lfloor s\rfloor}(A)\sigma_{\lceil s\rceil}(A)^{s-\lfloor s\rfloor}&\text{if }0 \leq s \leq d,\\ |\det A|^{\frac{s}{d}}&\text{if }s \geq d. \end{array}\right.\] The singular value function satisfies the useful inequality $\varphi^s(AB) \leq \varphi^s(A)\varphi^s(B)$ for all $A,B \in \GL_d(\mathbb{R})$, as is noted in \cite{Fa88}. Given $(A_1,\ldots,A_N) \in \GL_d(\mathbb{R})^N$ we define the \emph{singular value pressure} of $(A_1,\ldots,A_N)$ at $s$ to be the real number \[P(A_1,\ldots,A_N;s):=\lim_{n \to \infty} \frac{1}{n}\log \sum_{i_1,\ldots,i_n=1}^N \varphi^s\left(A_{i_n}\cdots A_{i_1}\right),\] the existence of the limit being guaranteed by subadditivity.
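These definitions translate directly into a numerical sketch. The code below (our own illustrative code, not part of the paper's argument) computes $\varphi^s$ from the singular values and approximates the pressure by truncating the limit at a finite word length $n$; for tuples of commuting similitudes the truncation is exact, which gives a simple sanity check on the location of the zero of the pressure:

```python
import itertools
import numpy as np

def phi_s(A, s):
    """Falconer's singular value function phi^s(A)."""
    d = A.shape[0]
    if s >= d:
        return abs(np.linalg.det(A)) ** (s / d)
    sv = np.linalg.svd(A, compute_uv=False)  # sigma_1 >= ... >= sigma_d
    k = int(np.floor(s))
    val = np.prod(sv[:k])                    # empty product is 1
    if s > k:
        val *= sv[k] ** (s - k)              # sigma_{ceil(s)}^{s - floor(s)}
    return val

def pressure_n(mats, s, n):
    """(1/n) log sum over length-n words of phi^s(A_word): a finite-n
    proxy for the singular value pressure P(A_1,...,A_N; s)."""
    total = 0.0
    for word in itertools.product(range(len(mats)), repeat=n):
        P = np.eye(mats[0].shape[0])
        for i in word:
            P = P @ mats[i]
        total += phi_s(P, s)
    return np.log(total) / n

def affinity_dimension(mats, n=6, tol=1e-10):
    """Zero of s -> pressure_n(mats, s, n), located by bisection."""
    d = mats[0].shape[0]
    lo, hi = 1e-9, 4.0 * d
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if pressure_n(mats, mid, n) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Two similitudes of ratio 1/2 in the plane: here the pressure is
# exactly log sum r_i^s and the zero is Hutchinson's s = 1.
mats = [0.5 * np.eye(2), 0.5 * np.eye(2)]
print(affinity_dimension(mats))  # ≈ 1.0
```

For genuinely non-conformal matrices the truncated pressure only approximates the limit, so `affinity_dimension` should then be read as a heuristic estimate rather than an exact computation.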
When $A_1,\ldots,A_N \in \GL_d(\mathbb{R})$ are contracting in the Euclidean norm (or indeed with respect to an arbitrary norm on $\mathbb{R}^d$) it is not difficult to show that the function $s \mapsto P(A_1,\ldots,A_N;s)$ is strictly decreasing and locally Lipschitz continuous and has a unique zero in $(0,+\infty)$ which we denote by $\dimaff(A_1,\ldots,A_N)$. We observe that when every $A_i$ has the form $A_i=r_iO_i$ for some $r_i \in (0,1)$ and $O_i \in O(d)$ as in Theorem \ref{th:hutch}, the pressure simplifies to $P(A_1,\ldots,A_N;s)=\log \sum_{i=1}^N r_i^s$ and thus in this case $\dimaff(A_1,\ldots,A_N)$ is simply the unique solution $s$ to Hutchinson's equation $\sum_{i=1}^N r_i^s=1$. If $(T_1,\ldots,T_N)$ is an affine iterated function system of the form $T_ix=A_ix+v_i$ then we will also find it useful to write $\dimaff(T_1,\ldots,T_N):=\dimaff(A_1,\ldots,A_N)$. We note that the singular value potential and affinity dimension have a number of antecedents in the literature in the context of the dimension theory of attractors of dynamical systems: a version of the singular value potential was considered by Douady--Oesterl\'{e} \cite{douady-oesterle} in the study of the Hausdorff dimensions of attractors, and in the same context the relevance of the asymptotics of singular values (in the form of Lyapunov exponents) was foreseen by Kaplan--Yorke \cite{kaplan-yorke} who conjectured that in generic situations the Hausdorff dimension should be related to the growth asymptotics of singular values. An active area of research in the theory of self-affine sets is the problem of obtaining analogues of Theorem \ref{th:hutch} for affine iterated function systems. The first general result in this direction was obtained by K. Falconer in the 1988 article \cite{Fa88}: \begin{theorem}[Falconer]\label{th:fa} Let $A_1,\ldots,A_N \in \GL_d(\mathbb{R})$. 
If $\max_{1 \leq i \leq N}\|A_i\|<\frac{1}{2}$ then for Lebesgue a.e.~$(v_1,\ldots,v_N) \in (\mathbb{R}^d)^N$ the attractor $Z$ of the iterated function system $(T_1,\ldots,T_N)$ defined by $T_ix:=A_ix+v_i$ satisfies \[\dimh Z = \min\{d,\dimaff(A_1,\ldots,A_N)\}.\] If $\max_{1 \leq i \leq N}\|A_i\|<1$, then for \emph{every} $(v_1,\ldots,v_N) \in (\mathbb{R}^d)^N$ the attractor satisfies \[\dimh Z \leq \min\{d,\dimaff(A_1,\ldots,A_N)\}.\] \end{theorem} Here $\|\cdot\|$ denotes the operator norm induced by the Euclidean norm. Falconer's original argument assumed $\max_{1 \leq i \leq N}\|A_i\|<\frac{1}{3}$, the improvement to $\frac{1}{2}$ being due to Solomyak \cite{So98}, who also noted that the value of $\frac{1}{2}$ cannot be further improved to any $\frac{1}{2}+\varepsilon$. We remark that the hypothesis $\max_{1 \leq i \leq N}\|A_i\|<\frac{1}{2}$ and the conclusion $\dimh Z = \min\{d,\dimaff(A_1,\ldots,A_N)\}$ contain a minor asymmetry: it is clear that if each $A_i$ is replaced with $X^{-1}A_iX$ for some fixed $X \in \GL_d(\mathbb{R})$ then the almost sure Hausdorff dimension $\dimh Z$ of the attractor does not change, but the condition $\max_{1 \leq i \leq N}\|A_i\|<\frac{1}{2}$ will in general be invalidated for certain choices of $X$. This asymmetry can be remedied by weakening the hypothesis to the condition $\max_{1 \leq i \leq N}\|A_i\|<\frac{1}{2}$ for the operator norm induced by \emph{some} norm $\|\cdot\|$ on $\mathbb{R}^d$, and similarly with the condition $\max_{1 \leq i \leq N}\|A_i\|<1$, and under this hypothesis Falconer's proof goes through with minimal changes. Some similar remarks relating to sufficient conditions for the existence of the attractor of $(T_1,\ldots,T_N)$ were presented in \cite[\S6]{AtBaViWi10}. To avoid similar asymmetries in our results we will assume in this article that our affine iterated function systems are contracting with respect to some unspecified norm on $\mathbb{R}^d$. 
Theorem \ref{th:fa} demonstrates that the affinity dimension correctly describes the Hausdorff dimension of the attractor in a large range of cases, but this result inherently does not apply to explicit, specific examples of affine iterated function systems. Since the publication of \cite{Fa88} an active line of research, especially in recent years, has therefore been that of extending Theorem \ref{th:fa} to explicit affine iterated function systems for which the vectors $v_i$ are fixed and some version of the open set condition is satisfied (see for example \cite{FaKe18,HuLa95,MoSh19}). In this direction the following powerful result was obtained recently by B. B\'ar\'any, M. Hochman and A. Rapaport \cite{BaHoRa19}: \begin{theorem}[B\'ar\'any-Hochman-Rapaport]\label{th:bahora} Let $(T_1,\ldots,T_N)$ be an affine iterated function system acting on $\mathbb{R}^2$ and satisfying the strong open set condition, where each $T_i$ is contracting with respect to the Euclidean norm. Let us write $T_ix:=A_ix+v_i$ for every $i=1,\ldots,N$ and suppose that each $A_i$ is invertible. Suppose that the linear maps $|\det A_i|^{-1/2}A_i$ are not contained in a compact subgroup of $\GL_2(\mathbb{R})$ and do not preserve a finite union of one-dimensional subspaces of $\mathbb{R}^2$. Then the Hausdorff dimension of the attractor of $(T_1,\ldots,T_N)$ is equal to $\dimaff(A_1,\ldots,A_N)$. \end{theorem} In dimension $d>2$ the problem of obtaining an analogue of Theorem \ref{th:bahora} is substantially more challenging. At the time of writing, no explicit examples of affine iterated function systems in dimension higher than two are yet known where the Hausdorff and affinity dimensions coincide, other than those which fall within the scope of Theorem \ref{th:hutch}. 
On the other hand, in the broader setting of limit sets of actions of non-conformal transformations, Dufloux \cite{dufloux} has successfully computed the Hausdorff dimension of limit sets on the boundary $\partial H^n_\mathbb{C}$ of the $n$-dimensional complex hyperbolic space associated to well-positioned Schottky subgroups. We also note the work of Pozzetti--Sambarino--Wienhard \cite{PSW} who, under an asymptotic conformality assumption, have successfully calculated the Hausdorff dimensions of limit sets in projective spaces. Returning to our setting of affine iterated function systems, while Theorems \ref{th:fa} and \ref{th:bahora} extend the part of Theorem \ref{th:hutch} which describes the dimension of the attractor, a feature which has no direct parallel in Theorem \ref{th:bahora} in particular is the question of whether or not there exists a measure supported on the attractor of the affine iterated function system $(T_1,\ldots,T_N)$ having dimension equal to the affinity dimension. While we conjecture that this should indeed be the case in the context of Theorem \ref{th:bahora} and its presumed higher-dimensional analogues (and indeed it is known that such measures exist generically in the sense of Theorem \ref{th:fa} -- see \cite{JoPoSi07}), in this article we will focus on a narrower question: under what circumstances does an affine iterated function system $(T_1,\ldots,T_N)$ acting on $\mathbb{R}^d$ admit a \emph{self-affine} measure with dimension equal to the affinity dimension? Theorem \ref{th:hutch} indicates that this phenomenon occurs when the affine transformations are all similitudes, or more generally when they are simultaneously conjugated to similitudes by some linear transformation of $\mathbb{R}^d$. In this situation it was observed by P.
Mattila that while the open set condition is sufficient for the existence of a self-similar measure with dimension equal to the affinity dimension, it is not necessary for it (see the introduction to \cite{Sc94}). One may also show that self-affine measures with dimension equal to the affinity dimension can arise in certain circumstances when the linear parts of the affinities admit a common invariant subspace, or when the affinity dimension is precisely equal to $d$. The objective of this article is to demonstrate that these are the \emph{only} situations in which this phenomenon occurs. Henceforth we shall say that a subset $\mathsf{A}$ of $\GL_d(\mathbb{R})$ is \emph{irreducible} if there does not exist any proper nonzero subspace of $\mathbb{R}^d$ preserved by every $A \in \mathsf{A}$, and \emph{strongly irreducible} if there does not exist any finite union of proper nonzero subspaces of $\mathbb{R}^d$ preserved by every $A \in \mathsf{A}$. When $\mathsf{A}$ is not irreducible it will be called \emph{reducible}. Clearly $\mathsf{A}$ is (strongly) irreducible if and only if the semigroup generated by $\mathsf{A}$ is. We will at times abuse notation by saying that a tuple $(A_1,\ldots,A_N)$ is (strongly) irreducible if and only if the corresponding set is. Our main result is as follows: \begin{theorem}\label{th:main} Let $T_1,\ldots,T_N$ be invertible affine transformations of $\mathbb{R}^d$ having the form $T_ix:=A_ix+v_i$ for some $v_1,\ldots, v_N \in \mathbb{R}^d$, where $(A_1,\ldots,A_N) \in \GL_d(\mathbb{R})^N$ has the following four properties: \begin{enumerate}[(i)] \item There exists a norm $\trip{\cdot}$ on $\mathbb{R}^d$ such that $\trip{A_i}<1$ for every $i=1,\ldots,N$; \item The affinity dimension $\dimaff(A_1,\ldots,A_N)$ is strictly between $0$ and $d$; \item The tuple $(A_1,\ldots,A_N)$ is irreducible; \item There does not exist an inner product on $\mathbb{R}^d$ with respect to which the linear maps $A_1,\ldots,A_N$ are similitudes.
\end{enumerate} Then every self-affine measure $m=\sum_{i=1}^Np_i(T_i)_*m$ satisfies $\dim m < \dimaff(A_1,\ldots,A_N)$. Furthermore this property is locally uniform in the following sense. Suppose that $\mathsf{K} \subset \GL_d(\mathbb{R})^N$ is a compact set such that every $(A_1,\ldots,A_N) \in \mathsf{K}$ satisfies hypotheses (i)--(iv) above. This applies in particular if $(B_1,\ldots,B_N) \in \GL_d(\mathbb{R})^N$ satisfies (i)--(iv) above and $\mathsf{K}$ is a sufficiently small compact neighbourhood of $(B_1,\ldots,B_N)$. Then there exists $\kappa>0$ depending on $\mathsf{K}$ with the following property: if $(A_1,\ldots,A_N) \in \mathsf{K}$, and $T_1,\ldots,T_N \colon \mathbb{R}^d \to \mathbb{R}^d$ are affine transformations of the form $T_ix=A_ix+v_i$ for some vectors $v_1,\ldots,v_N$, and $m=\sum_{i=1}^N p_i(T_i)_*m$ is a self-affine measure on $\mathbb{R}^d$ for some probability vector $(p_1,\ldots,p_N)$, then $\dim m \leq \dimaff(A_1,\ldots,A_N)-\kappa$. \end{theorem} In stating this result we have taken advantage of the fact that every self-affine measure on $\mathbb{R}^d$ is exact-dimensional, but this result is not required in our proof. The proof of Theorem \ref{th:main} in fact shows that the upper packing dimension of the measure $m$, \[{\ess \sup}_m \limsup_{r \to 0} \frac{\log m(B_r(x))}{\log r},\] is bounded by $\dimaff(A_1,\ldots,A_N)-\kappa$. This in turn is achieved by showing that the \emph{Lyapunov dimension} of an appropriate measure on the coding space $\Sigma_N:=\{1,\ldots,N\}^{\mathbb{N}}$ is bounded by $\dimaff(A_1,\ldots,A_N)-\kappa$. The Lyapunov dimension is relatively technical to describe and would be digressive to define in this introduction, so we defer further discussion of this point to \S\ref{se:proof-of-main} below.
The condition that the linear maps $A_i$ are not all similitudes with respect to some inner product on $\mathbb{R}^d$ is equivalent to the statement that the linear maps $|\det A_i|^{-1/d}A_i$ are not all contained in some compact subgroup of $\GL_d(\mathbb{R})$, and we will at times prefer the latter formulation in the proofs. To see that these statements are equivalent we observe that if $G \leq \GL_d(\mathbb{R})$ is a compact group containing the linear maps $|\det A_i|^{-1/d}A_i$, $\langle \cdot,\cdot\rangle$ denotes the standard inner product on $\mathbb{R}^d$, and $H$ is the normalised Haar measure on $G$, the formula \[\langle u,v\rangle_G:= \int_G \langle Bu,Bv\rangle dH(B)\] may easily be verified to define an inner product on $\mathbb{R}^d$ which is invariant under the action of elements of $G$. In particular the transformations $A_i$ are similitudes with respect to this inner product structure. The converse direction of implication is obvious. Theorem \ref{th:main} therefore admits the following corollary which motivates the title of this work: \begin{corollary}\label{co:converse-hutchinson} Let $T_1,\ldots,T_N \colon \mathbb{R}^d \to \mathbb{R}^d$ be invertible affine transformations which are contracting with respect to some norm on $\mathbb{R}^d$. Let us write $T_ix=A_ix+v_i$ for all $x \in \mathbb{R}^d$ and $i=1,\ldots,N$, and suppose that $\{A_1,\ldots,A_N\}$ is irreducible. If there exists a self-affine measure $m=\sum_{i=1}^N p_i(T_i)_*m$ such that $\dim m=\dimaff(A_1,\ldots,A_N) \in (0,d)$, then there exists an inner product on $\mathbb{R}^d$ with respect to which the transformations $T_i$ are all similitudes. 
\end{corollary} \begin{figure}% \centering \subfloat[The classical self-similar Sierpi\'nski gasket $X_1$.]{{\includegraphics[width=5.8cm]{gasket1-smaller.png}}} \qquad \subfloat[A self-affine gasket $X_2$ which is not self-similar.]{{\includegraphics[width=5.8cm]{gasket2-smaller.png}}} \caption{By Theorem \ref{th:hutch} there exists a self-similar measure supported on the classical Sierpi\'nski gasket $X_1$ with dimension equal to the Hausdorff dimension of the set itself, $\log 3/\log 2$. This measure corresponds to that defined simply by giving measure $\frac{1}{3}$ to each of the three copies of $X_1$ with diameter half that of the original, measure $\frac{1}{9}$ to each of the nine sub-copies with diameter $\frac{1}{4}$ that of the original, and so forth. By the combination of Theorems \ref{th:bahora} and \ref{th:main}, for the self-affine gasket $X_2$ there is a gap between the maximum possible dimension of a self-affine measure supported on $X_2$ and the Hausdorff dimension of $X_2$ itself. } \label{fi:onlyfigure}% \end{figure} We note that the affinity dimension of an invertible affine iterated function system is never zero and therefore the endpoint case $\dimaff(A_1,\ldots,A_N)=0$ of Theorem \ref{th:main} cannot occur. In the other endpoint case $\dimaff(A_1,\ldots,A_N)=d$ it is easy to construct examples in which the normalised restriction of Lebesgue measure to a convex polyhedral body in $\mathbb{R}^d$ may be represented as a self-affine measure with respect to affine transformations which are not simultaneously conjugate to similitudes and whose linear parts do not admit an invariant proper subspace. 
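For a finite group the Haar average used in the argument preceding Corollary \ref{co:converse-hutchinson} is simply a uniform average, so the invariance of $\langle u,v\rangle_G$ can be checked directly. In the illustrative sketch below (the matrices are our own choice) a cyclic rotation group is conjugated out of $O(2)$ and the averaged Gram matrix is verified to be preserved by every group element:

```python
import numpy as np

# Finite-group version of the Haar-averaging argument: for a finite
# matrix group G the invariant inner product is <u,v>_G = u^T M v with
# M = (1/|G|) sum_{B in G} B^T B, and then B^T M B = M for all B in G.
X = np.diag([2.0, 1.0])                     # conjugating linear map
R = np.array([[0.0, -1.0], [1.0, 0.0]])     # rotation by 90 degrees
S = np.linalg.inv(X) @ R @ X                # conjugated rotation, not orthogonal
G = [np.linalg.matrix_power(S, k) for k in range(4)]  # cyclic group of order 4

M = sum(B.T @ B for B in G) / len(G)        # averaged Gram matrix

# Invariance: every element of G is an isometry of <.,.>_G, because
# left multiplication by a fixed element permutes the group.
for B in G:
    assert np.allclose(B.T @ M @ B, M)

# M is symmetric positive definite, so it defines an inner product.
assert np.all(np.linalg.eigvalsh(M) > 0)
```

With respect to the inner product defined by $M$, the non-orthogonal matrix `S` acts as a similitude, exactly as in the argument in the text.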
For example, if $U \subset \mathbb{R}^2$ is an open triangular region then up to Lebesgue measure zero it may be bisected along a line passing through one vertex and its opposite edge into the union of two smaller triangular regions $U_1$ and $U_2$, each having two side lengths smaller than those of the original triangle and one side length in common with it. Taking further bisections if necessary $U$ may be written up to measure zero as a finite union of strictly smaller triangular regions $V_1,\ldots,V_N$ each of which is the image of $U$ under some contracting affine transformation $T_i$. It is clear that if $m$ denotes the normalised Lebesgue measure on $U$ then it satisfies the relation $m=\sum_{i=1}^N m(V_i)(T_i)_*m$ and hence is a self-affine measure with respect to $(T_1,\ldots,T_N)$ which has dimension $2$. In general this construction may be performed in such a way as to ensure that hypotheses (i),(iii) and (iv) of Theorem \ref{th:main} are satisfied; moreover the linear parts of the affinities may be taken to be \emph{strongly} irreducible. The details of this aspect of the construction and of its generalisation to higher dimensions are left to the reader. We remark that if in Theorem \ref{th:main} instead of measures of the form $m=\sum_{i=1}^N p_i(T_i)_*m$ we were to consider the larger category of Borel probability measures $m$ which satisfy an equation of the form \begin{equation}\label{eq:bigsa}m=\sum_{i_1,\ldots,i_n=1}^N q_{(i_1,\ldots,i_n)} (T_{i_1}\cdots T_{i_n})_*m\end{equation} for some $n\geq 1$ and some probability vector $(q_{(1,\ldots,1)}, \ldots,q_{(N,\ldots,N)}) \in \mathbb{R}^{N^n}$, then no dimension gap would occur. In two dimensions it is known that the supremum of the Hausdorff dimensions of measures which are self-affine in the broader sense of \eqref{eq:bigsa} can be equal to the affinity dimension $\dimaff(A_1,\ldots,A_N)$ when the conditions of Theorem \ref{th:main} are satisfied. 
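The first bisection step of the triangular construction above can be checked numerically. In the sketch below (vertex choices and helper names are ours) we verify that each piece $V_i$ is the affine image of $U$ and that the weights $m(V_i)=|\det A_i|$ of the resulting self-affine relation sum to $1$; as the text notes, these first two maps still share an edge with $U$, so further bisections would be needed before all maps become strict contractions:

```python
import numpy as np

# Bisect the triangle U along the line through vertex U[0] and the
# midpoint of the opposite edge; the self-affine weights are |det A_i|.
U = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # vertices of U
p = 0.5 * (U[1] + U[2])                              # midpoint of opposite edge
pieces = [np.array([U[0], U[1], p]), np.array([U[0], p, U[2]])]

def affine_map(src, dst):
    """Return (A, v) with A @ src[k] + v = dst[k] for all three vertices."""
    M = np.column_stack([src[1] - src[0], src[2] - src[0]])
    Md = np.column_stack([dst[1] - dst[0], dst[2] - dst[0]])
    A = Md @ np.linalg.inv(M)
    v = dst[0] - A @ src[0]
    return A, v

dets = []
for V in pieces:
    A, v = affine_map(U, V)
    for k in range(3):                       # T_i maps U onto V_i
        assert np.allclose(A @ U[k] + v, V[k])
    dets.append(abs(np.linalg.det(A)))

# The pieces tile U up to measure zero, so the weights sum to 1.
assert np.isclose(sum(dets), 1.0)
```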
Indeed this fact played a significant role in the proof of Theorem \ref{th:bahora} by extending the results of \cite{MoSh19} which pertain to self-affine measures into a result concerning self-affine sets. Theorem \ref{th:main} demonstrates that outside the context of similarity transformations this supremum is attained only in degenerate cases in which a common invariant subspace exists. To conclude this introduction let us briefly outline how Theorem \ref{th:main} will be proved. If $T_1,\ldots,T_N$ are contractions of $\mathbb{R}^d$ with respect to some fixed norm then there exists a well-defined coding map $\Pi \colon \{1,\ldots,N\}^{\mathbb{N}} \to \mathbb{R}^d$ with the property \[\Pi\left[(x_k)_{k=1}^\infty\right] = \lim_{n \to \infty} T_{x_1}\cdots T_{x_n}v\] for all $v \in \mathbb{R}^d$, and whose image is precisely the attractor of $(T_1,\ldots,T_N)$. It is a well-known result due to Hutchinson \cite[\S4]{Hu81} that a Borel probability measure $m$ on $\mathbb{R}^d$ satisfies $m=\sum_{i=1}^N p_i(T_i)_*m$ if and only if it satisfies $m=\Pi_* \mu$ where $\mu$ is the Bernoulli measure $(\sum_{i=1}^N p_i\delta_i)^{\mathbb{N}}$ on $\{1,\ldots,N\}^{\mathbb{N}}$. This measure $\mu$ is an ergodic invariant measure with respect to the shift transformation $\sigma \colon \{1,\ldots,N\}^{\mathbb{N}} \to \{1,\ldots,N\}^{\mathbb{N}}$ defined by $\sigma[(x_k)_{k=1}^\infty]:=(x_{k+1})_{k=1}^\infty$. Now, using a combination of results of A. K\"aenm\"aki \cite{Ka04} and T. Jordan, M. Pollicott and K. 
Simon \cite{JoPoSi07}, one may show that if an ergodic shift-invariant measure $\mu$ on $\{1,\ldots,N\}^{\mathbb{N}}$ has the property $\dim \Pi_*\mu = \dimaff(T_1,\ldots,T_N)$ then it necessarily maximises the quantity \[h(\mu)+ \lim_{n \to \infty} \frac{1}{n}\int \log \varphi^s(A_{x_1}\cdots A_{x_n})d\mu\left[(x_k)_{k=1}^\infty\right]\] over all shift-invariant Borel probability measures on $\{1,\ldots,N\}^{\mathbb{N}}$, where $s:=\dimaff(T_1,\ldots,T_N)$, $A_i$ denotes the linear part of the affine transformation $T_i$ and $h(\mu)$ denotes the entropy of the measure $\mu$ with respect to the transformation $\sigma$. Measures which maximise this quantity have been named \emph{K\"aenm\"aki measures}. The critical step in proving Theorem \ref{th:main} is to show that under the hypotheses of that theorem there cannot exist a K\"aenm\"aki measure which is also a Bernoulli measure. The dimension gap result then follows by relatively straightforward compactness considerations. The proof of this statement relies on a general theorem on the structure of K\"aenm\"aki measures which was established by J. Bochi and the first named author in \cite{BoMo18}, building on the earlier works \cite{FeKa11} and \cite{KaMo18}. Let us illustrate how this argument functions in a simple special case. Suppose that the semigroup generated by $A_1,\ldots,A_N$ is \emph{Zariski dense} in $\GL_d(\mathbb{R})$: that is, suppose that every function $\phi \colon \GL_d(\mathbb{R}) \to \mathbb{R}$ which corresponds to a polynomial function of the matrix entries and vanishes on the semigroup generated by $A_1,\ldots,A_N$ also vanishes identically on $\GL_d(\mathbb{R})$. (Equivalently, $A_1,\ldots,A_N$ is not contained in any proper algebraic subgroup of $\GL_d(\mathbb{R})$.) Then it follows by a result of A.
K\"aenm\"aki and the first named author in \cite{KaMo18} that if $\mu$ is a K\"aenm\"aki measure for $(T_1,\ldots,T_N)$ then it satisfies \begin{equation}\label{eq:bg2}C^{-1}\leq \frac{\mu(\{(x_k)\colon x_j=i_j\text{ for all }j=1,\ldots,n\})}{\varphi^s(A_{i_1}\cdots A_{i_n})} \leq C\end{equation} for some constant $C>1$, for all $i_1,\ldots,i_n \in \{1,\ldots,N\}$ and $n \geq 1$. But if $\mu$ is also a Bernoulli measure, the value of the numerator depends only on which symbols appear in the sequence $i_1,\ldots,i_n$ and not on the order in which those symbols appear. This implies that the same property must hold for $\varphi^s(A_{i_1}\cdots A_{i_n})$ up to the introduction of a scalar multiplicative factor $C^2$. Using this principle one may deduce that if $B_1$ and $B_2$ belong to the semigroup generated by $A_1,\ldots,A_N$ then necessarily \begin{equation}\label{eq:basicgibbs}C^{-3} \leq \frac{\varphi^s((B_1B_2)^n)}{\varphi^s(B_1^n)\varphi^s(B_2^n)} \leq C^3\end{equation} for every $n \geq 1$. Now if $\lambda_i(B)$ denotes the $i^{\mathrm{th}}$ largest of the absolute values of the $d$ eigenvalues of $B \in \GL_d(\mathbb{R})$, and $0<s<d$, one may show that \[\lim_{n \to \infty} \varphi^s(B^n)^{\frac{1}{n}}=\lambda_1(B)\cdots \lambda_{\lfloor s\rfloor}(B) \lambda_{\lceil s\rceil}(B)^{s-\lfloor s\rfloor}=:\xi^s(B).\] Taking the power $\frac{1}{n}$ and letting $n \to \infty$ in \eqref{eq:basicgibbs} it follows that the function $\xi^s$ just defined satisfies $\xi^s(B_1B_2)=\xi^s(B_1)\xi^s(B_2)$ for all $B_1,B_2$ in the semigroup generated by the linear maps $A_1,\ldots,A_N$. But this turns out to be impossible for a semigroup which is Zariski dense in $\GL_d(\mathbb{R})$, essentially by a theorem of Y.~Benoist (later reproven by J.-F.~Quint using a different method, see Theorem 7.4 and Proposition 9.8 of \cite{bq.book} and additionally \cite{benoist.linear2,quint.schottky}). 
The extension of this argument to the more general circumstances of Theorem \ref{th:main} requires us to engage with a number of complications. Similarly to the special case described above, the core of the proof operates by assuming that hypotheses (i)--(iii) of Theorem \ref{th:main} hold and that a K\"aenm\"aki measure exists which is a Bernoulli measure, and proceeds to show that the linear maps $|\det A_i|^{-1/d}A_i$ necessarily belong to a compact group, contradicting (iv). In general under the hypotheses of Theorem \ref{th:main} there may be multiple inequivalent K\"aenm\"aki measures. (This remains true even under slightly stronger hypotheses: see \cite{MoSe19}.) The hypotheses imply that at least one of these measures is Bernoulli, but \emph{a priori} other K\"aenm\"aki measures may not be. In this case the denominator of \eqref{eq:bg2} will not correspond to the function $\varphi^s(A_{i_1}\cdots A_{i_n})$ but to some more complicated function derived from the action of $A_{i_1}\cdots A_{i_n}$ on finite unions of proper subspaces of exterior powers of $\mathbb{R}^d$ (see \cite[\S2]{BoMo18}). The more complicated structure of this function necessitates further steps in order to deduce the multiplicativity of some analogue of the function $\xi^s$ defined above, which in general will correspond to some spectral data relating to the action of a finite-index subsemigroup of the semigroup generated by $A_1,\ldots,A_N$ on certain pairs of subspaces of exterior powers of $\mathbb{R}^d$. 
This multiplicativity will allow us to show that certain homomorphic images of a finite-index subsemigroup of the semigroup generated by $|\det A_1|^{-1/d} A_1,\ldots,|\det A_N|^{-1/d}A_N$ are contained in compact groups, and this can be applied to deduce that the elements of that finite-index subsemigroup act as ``simultaneously normal'' linear maps on certain subspaces of particular exterior powers of $\mathbb{R}^d$: that is, on those spaces there exists an inner product structure with respect to which the linear maps act as orthogonal direct sums of linear similitudes. An extensive additional argument is then required to show that these normal linear maps actually \emph{are} similitudes. This additional argument makes use of the variational characterisation of K\"aenm\"aki measures to bound a weighted sum of the Lyapunov exponents of the other K\"aenm\"aki measures and so force the remaining K\"aenm\"aki measures to also be Bernoulli measures. It is then straightforward to deduce that the entire semigroup generated by $|\det A_1|^{-1/d} A_1,\ldots,|\det A_N|^{-1/d}A_N$ acts on these subspaces of exterior powers by similitudes. Still further arguments are required to deal with the possibility that these subspaces of the exterior powers may be proper. The first two parts of the argument, in which the linear maps are first shown to act normally and then shown to act by similitudes on certain subspaces of exterior powers, are dealt with in section \ref{se:irr-case}. The final part, in which the action on proper subspaces of exterior powers is related to the action on $\mathbb{R}^d$, forms a separate argument which is presented in section \ref{se:gen-case}. The remainder of the article is therefore structured as follows. 
In the following section we review such background on the thermodynamic formalism of affine iterated function systems as is necessary to state our main technical theorem, Theorem \ref{th:main-tech}, which asserts that under the hypotheses of Theorem \ref{th:main} a K\"aenm\"aki measure cannot be a Bernoulli measure. In section \ref{se:proof-of-main} we derive Theorem \ref{th:main} from Theorem \ref{th:main-tech}; this is the most technically straightforward part of the proof of Theorem \ref{th:main}. Section \ref{se:rev} then reviews key concepts from the theory of linear algebraic groups which will be used in the proof of Theorem \ref{th:main-tech}. Section \ref{se:irr-case} proves a key special case of Theorem \ref{th:main-tech} in which the irreducibility of certain representations is assumed, and section \ref{se:gen-case} applies this result to deduce the general case. During peer review it was brought to our attention that some of the technical arguments underlying Theorem \ref{th:main} may be expressed in intrinsic terms as a statement concerning potentials defined in terms of reductive linear algebraic groups. This is discussed in more detail in the appendix. \section{Subadditive thermodynamic formalism and the main technical theorem}\label{se:tech-thm} Let $\Sigma_N$ denote the set $\{1,\ldots,N\}^{\mathbb{N}}$ equipped with the infinite product topology (with respect to which it is compact and metrisable) and let $\sigma \colon \Sigma_N \to \Sigma_N$ denote the shift transformation $(x_k)_{k=1}^\infty \mapsto (x_{k+1})_{k=1}^\infty$ which is a continuous surjection. When $N$ is understood let $\mathcal{M}_\sigma$ denote the set of all $\sigma$-invariant Borel probability measures on $\Sigma_N$. 
Via the Riesz representation theorem we identify $\mathcal{M}_\sigma$ with a subset of $C(\Sigma_N)^*$ equipped with the corresponding weak-* topology, and in this topology it is compact and metrisable; a sequence of measures $(\mu_n)_{n=1}^\infty$ in $\mathcal{M}_\sigma$ converges to a measure $\mu \in \mathcal{M}_\sigma$ if and only if $\lim_{n \to \infty} \int f\,d\mu_n=\int f\,d\mu$ for every $f \in C(\Sigma_N)$. We define $\Sigma_N^*$ to be the set of all finite sequences $\mathtt{i}=(i_k)_{k=1}^n \in \{1,\ldots,N\}^n$, which we refer to as \emph{words}. If $\mathtt{i}=(i_k)_{k=1}^n$ then we write $|\mathtt{i}|=n$ and define this to be the \emph{length} of the word $\mathtt{i}$. Given two words $\mathtt{i}=(i_k)_{k=1}^n, \mathtt{j}=(j_k)_{k=1}^m \in \Sigma_N^*$ we define their concatenation $\mathtt{i} \mathtt{j}$ to be the word of length $|\mathtt{i}|+|\mathtt{j}|=n+m$ with first $n$ symbols $i_1,\ldots,i_n$ and subsequent symbols $j_1,\ldots,j_m$. We define the concatenation of more than two words (e.g. $\mathtt{i} \mathtt{j} \mathtt{k}$ where $\mathtt{i},\mathtt{j},\mathtt{k} \in \Sigma_N^*$) in the obvious fashion, and if $\mathtt{i} \in \Sigma_N^*$ and $n \geq 1$ we let $\mathtt{i}^n$ denote the concatenation $\mathtt{i} \mathtt{i} \cdots \mathtt{i}$ of $n$ copies of $\mathtt{i}$. If $A_1,\ldots,A_N \in \GL_d(\mathbb{R})$ are understood then we write $A_\mathtt{i}:=A_{i_1}\cdots A_{i_n}$ and observe that $A_\mathtt{i} A_\mathtt{j} = A_{\mathtt{i}\mathtt{j}}$ for all $\mathtt{i},\mathtt{j} \in \Sigma_N^*$. If $x=(x_k)_{k=1}^\infty \in \Sigma_N$ then we define $x|_n$ to be the word $(x_k)_{k=1}^n \in \Sigma_N^*$. If $\mathtt{i} \in \Sigma_N^\ast$ then we define the \emph{cylinder set} $[\mathtt{i}]$ to be the set of all $x \in \Sigma_N$ such that $x|_{|\mathtt{i}|}=\mathtt{i}$. Every cylinder set is clopen and cylinder sets form a basis for the topology of $\Sigma_N$. 
The linear span of the set of all characteristic functions of cylinder sets is dense in $C(\Sigma_N)$ and therefore a sequence of measures $(\mu_n)_{n=1}^\infty$ in $\mathcal{M}_\sigma$ converges to a measure $\mu \in \mathcal{M}_\sigma$ if and only if $\lim_{n \to \infty} \mu_n([\mathtt{i}])=\mu([\mathtt{i}])$ for every $\mathtt{i} \in \Sigma_N^*$. We will say that $\mu \in \mathcal{M}_\sigma$ is a \emph{Bernoulli measure} if there exists a probability vector $(p_1,\ldots,p_N)$ such that $\mu([i_1\cdots i_n])=p_{i_1}\cdots p_{i_n}$ for all $i_1,\ldots,i_n \in\{1,\ldots,N\}$ and all $n \geq 1$. (We permit cases in which some of the entries of the probability vector are zero.) Clearly Bernoulli measures on $\Sigma_N$ are in one-to-one correspondence with probability vectors $(p_1,\ldots,p_N)$. It is not difficult to see that the natural map from the $(N-1)$-simplex of probability vectors to the set of corresponding Bernoulli measures on $\Sigma_N$ is weak-* continuous, and in particular the set of all Bernoulli measures on $\Sigma_N$ is weak-* compact. Every Bernoulli measure is ergodic with respect to $\sigma$. Let us say that a \emph{submultiplicative potential}, or simply a \emph{potential}, is a function $\Phi \colon \Sigma_N^* \to (0,+\infty)$ such that $\Phi(\mathtt{i} \mathtt{j}) \leq \Phi(\mathtt{i})\Phi(\mathtt{j})$ for all $\mathtt{i},\mathtt{j} \in \Sigma_N^*$. We define the \emph{pressure} of $\Phi$ to be the limit \[P(\Phi):=\lim_{n \to \infty} \frac{1}{n}\log \sum_{|\mathtt{i}|=n}\Phi(\mathtt{i})\] and observe that this limit exists by subadditivity. If $\Phi$ is a submultiplicative potential then we define a sequence of functions $\Phi_n \colon \Sigma_N \to (0,+\infty)$ by $\Phi_n(x):=\Phi(x|_n)$ for every $x \in \Sigma_N$ and $n \geq 1$. 
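Before continuing, it may be helpful to record a toy computation of the pressure (an illustration only, not used in the sequel). Fix reals $\lambda_1,\ldots,\lambda_N>0$ and consider the genuinely multiplicative potential $\Phi(\mathtt{i}):=\lambda_{i_1}\cdots\lambda_{i_n}$ for $\mathtt{i}=(i_k)_{k=1}^n$, which satisfies $\Phi(\mathtt{i}\mathtt{j})=\Phi(\mathtt{i})\Phi(\mathtt{j})$ and in particular is submultiplicative:

```latex
% The sum over words of length n factorises, so no limit is really needed:
\sum_{|\mathtt{i}|=n} \Phi(\mathtt{i})
  = \sum_{i_1,\ldots,i_n=1}^{N} \lambda_{i_1}\cdots\lambda_{i_n}
  = \left(\sum_{i=1}^N \lambda_i\right)^{n},
\qquad\text{whence}\qquad
P(\Phi) = \lim_{n\to\infty}\frac{1}{n}\log\left(\sum_{i=1}^N \lambda_i\right)^{n}
        = \log\sum_{i=1}^N \lambda_i .
```

For genuinely submultiplicative potentials such as the singular value potential introduced below, no such closed form is available and the pressure must be studied through its defining limit.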
In this case we observe that each $\Phi_n$ is continuous (since it depends on only finitely many co-ordinates of $x \in \Sigma_N$) and that the subadditivity property $\log \Phi_{n+m}(x) \leq \log \Phi_n(\sigma^mx)+\log \Phi_m(x)$ is satisfied by the sequence of continuous functions $\log \Phi_n \colon \Sigma_N \to \mathbb{R}$. As a consequence of this property, for each ergodic $\mu \in \mathcal{M}_\sigma$ the following limit exists, and we define \[\Lambda(\Phi,\mu):=\lim_{n \to \infty}\frac{1}{n}\int \log \Phi_n(x)\,d\mu(x) = \lim_{n \to \infty}\frac{1}{n}\sum_{|\mathtt{i}|=n} \mu([\mathtt{i}])\log\Phi(\mathtt{i}) \in [-\infty,+\infty).\] The next result is a special case of the subadditive variational principle of Cao, Feng and Huang (\cite[Theorem 1.1]{CaFeHu08}): \begin{proposition}\label{pr:varp} Let $N \geq 2$ and let $\Phi \colon \Sigma_N^* \to (0,+\infty)$ be a submultiplicative potential. Then \begin{equation}\label{eq:eq}P(\Phi)=\sup_{\mu \in \mathcal{M}_\sigma}\left[ h(\mu) + \Lambda(\Phi,\mu)\right].\end{equation} \end{proposition} When $\mu$ attains the supremum \eqref{eq:eq} we call it an \emph{equilibrium state} for the potential $\Phi$. If $\Phi$ is a submultiplicative potential then by subadditivity \[\Lambda(\Phi,\mu)=\inf_{n \geq 1}\frac{1}{n}\sum_{|\mathtt{i}|=n}\mu([\mathtt{i}])\log\Phi(\mathtt{i})\] and also \[h(\mu)=\lim_{n \to \infty} \frac{1}{n}\sum_{|\mathtt{i}|=n}-\mu([\mathtt{i}])\log\mu([\mathtt{i}]) = \inf_{n \geq 1}\frac{1}{n}\sum_{|\mathtt{i}|=n}-\mu([\mathtt{i}])\log\mu([\mathtt{i}])\] and since each function $\mu \mapsto \mu([\mathtt{i}])$ is continuous, these formulas imply that the function $\mu \mapsto h(\mu)+\Lambda(\Phi,\mu)$ is the pointwise infimum of a family of continuous functions $\mathcal{M}_\sigma \to \mathbb{R}$, and hence is an upper semi-continuous function $\mathcal{M}_\sigma \to [-\infty,+\infty)$. 
In particular it attains its maximum by the compactness of $\mathcal{M}_\sigma$ and so at least one equilibrium state exists for any specified potential $\Phi$. A submultiplicative potential $\Phi$ will be called \emph{quasi-multiplicative} if there exist a finite set $F \subset \Sigma_N^*$ and a real number $\delta>0$ such that \begin{equation}\label{eq.def.quasimult} \max_{\mathtt{k} \in F} \Phi(\mathtt{i}\mathtt{k}\mathtt{j})\geq \delta \Phi(\mathtt{i})\Phi(\mathtt{j}) \end{equation} for all $\mathtt{i},\mathtt{j} \in \Sigma_N^*$. The significance of this condition is that it both guarantees the uniqueness of the equilibrium state of $\Phi$ and provides explicit information about its structure: \begin{proposition}\label{pr:qm-unique} Let $\Phi \colon \Sigma_N^* \to (0,+\infty)$ be a submultiplicative and quasi-multiplicative potential. Then there exists a unique equilibrium state $\mu$ for $\Phi$. Furthermore there exists $C>0$ such that \[C^{-1}e^{-|\mathtt{i}|P(\Phi)} \Phi(\mathtt{i}) \leq \mu([\mathtt{i}]) \leq Ce^{-|\mathtt{i}|P(\Phi)}\Phi(\mathtt{i})\] for all $\mathtt{i} \in \Sigma_N^*$. \end{proposition} We refer to the above inequality between $\mu([\mathtt{i}])$ and $\Phi(\mathtt{i})$ as the \emph{Gibbs inequality} for the potential $\Phi$ and measure $\mu$. Proposition \ref{pr:qm-unique} has been proved and re-proved in various forms across a number of works: we mention for example \cite[Theorem 5.5]{Fe11}, \cite[\S3]{KaRe14}. The fundamental example of a potential from the perspective of this article will be the singular value potential $\Phi^s(\mathtt{i}):=\varphi^s(A_\mathtt{i})$, where $A_1,\ldots,A_N \in \GL_d(\mathbb{R})$ are understood; this potential was investigated extensively by A. K\"aenm\"aki in \cite{Ka04} and the properties of its equilibrium states were developed in subsequent articles such as \cite{BoMo18,FeKa11,KaMo18}. 
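A toy case in which all of the above notions can be verified by hand (an illustration only, not used in the sequel) is the genuinely multiplicative potential $\Phi(\mathtt{i}):=\lambda_{i_1}\cdots\lambda_{i_n}$ with $\lambda_1,\ldots,\lambda_N>0$. Here quasi-multiplicativity holds with $F=\{1\}$ (the single word of length one) and $\delta=\lambda_1$, since $\Phi(\mathtt{i}1\mathtt{j})=\lambda_1\Phi(\mathtt{i})\Phi(\mathtt{j})$, and the equilibrium state and Gibbs inequality can be made completely explicit:

```latex
% For a Bernoulli measure \mu_p with probability vector (p_1,\ldots,p_N),
% the n-th averages defining h and \Lambda are constant in n, so
h(\mu_p)+\Lambda(\Phi,\mu_p)
  = -\sum_{i=1}^N p_i\log p_i + \sum_{i=1}^N p_i\log\lambda_i
  = \sum_{i=1}^N p_i\log\frac{\lambda_i}{p_i}
  \leq \log\sum_{i=1}^N \lambda_i ,
% by Jensen's inequality applied to the concave function \log, with equality
% precisely when p_i = \lambda_i / \sum_j \lambda_j. For this choice of p,
\mu_p([\mathtt{i}])
  = p_{i_1}\cdots p_{i_n}
  = \left(\sum_{j=1}^N \lambda_j\right)^{-n}\lambda_{i_1}\cdots\lambda_{i_n}
  = e^{-nP(\Phi)}\,\Phi(\mathtt{i}),
% so the Gibbs inequality holds with C=1, since P(\Phi)=\log\sum_j\lambda_j.
```

In particular the unique equilibrium state of a multiplicative potential is a Bernoulli measure; the interest of Theorem \ref{th:main-tech} below lies in the converse direction for singular value potentials.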
Our argument will however require us to work with potentials which have a unique equilibrium state, and the singular value potential does not have this property unless additional constraints are imposed beyond the hypotheses of Theorem \ref{th:main}. In particular, although the irreducibility of $(A_1,\ldots,A_N)$ as hypothesised in Theorem \ref{th:main} ensures this uniqueness for $d=2$, it is not sufficient for this when $d>2$ and $1<s<d-1$ (see for example \cite[\S9]{KaMo18}). This problem cannot be alleviated by assuming strong irreducibility in place of irreducibility \cite{MoSe19}. The core technical result of this article is the following: \begin{theorem}\label{th:main-tech} Let $(A_1,\ldots,A_N)\in \GL_d(\mathbb{R})^N$ be irreducible and define a potential $\Phi \colon \Sigma_N^* \to (0,+\infty)$ by \[\Phi(\mathtt{i}):=\prod_{i=1}^d \sigma_i(A_\mathtt{i})^{\alpha_i}\] where $\alpha_1 \geq \alpha_2 \geq \cdots \geq \alpha_d \geq 0$ and $\alpha_1>\alpha_d$. If $\Phi$ has an equilibrium state which is a Bernoulli measure then the linear maps $|\det A_1|^{-1/d}A_1,\ldots, |\det A_N|^{-1/d}A_N$ are all contained in a compact subgroup of $\GL_d(\mathbb{R})$. \end{theorem} We observe that the submultiplicativity of the above potential $\Phi$ follows from the inequality \begin{equation}\label{eq:svsa}\prod_{i=1}^k \sigma_i(AB) \leq \prod_{i=1}^k \sigma_i(A) \cdot \prod_{i=1}^k \sigma_i(B)\end{equation} which is valid for all linear maps $A,B \colon \mathbb{R}^d \to \mathbb{R}^d$ and all $k=1,\ldots,d$, since we may write \[\prod_{i=1}^d \sigma_i(A_\mathtt{i})^{\alpha_i} = \prod_{k=1}^d \left(\prod_{i=1}^k \sigma_i(A_\mathtt{i})\right)^{\alpha_k-\alpha_{k+1}}\] where $\alpha_{d+1}:=0$. We will find it convenient to approach the inequality \eqref{eq:svsa} via norms on exterior powers of $\mathbb{R}^d$, but an elementary proof may be found in for example \cite[Theorem 3.3.4]{HoJo94}. 
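As a consistency check on Theorem \ref{th:main-tech} (an illustration, not part of the proof), consider a conformal tuple $A_i=r_iO_i$ with $r_i>0$ and $O_i$ orthogonal, assumed irreducible. Every singular value of $A_\mathtt{i}$ equals $r_{i_1}\cdots r_{i_n}$, so the potential of the theorem becomes genuinely multiplicative:

```latex
% Writing \alpha := \alpha_1+\cdots+\alpha_d, we have
\Phi(\mathtt{i}) = \prod_{k=1}^d \sigma_k(A_\mathtt{i})^{\alpha_k}
               = \left(r_{i_1}\cdots r_{i_n}\right)^{\alpha},
% so \Phi(\mathtt{i}\mathtt{j})=\Phi(\mathtt{i})\Phi(\mathtt{j}), and the Bernoulli measure with
p_i = \frac{r_i^{\alpha}}{\sum_{j=1}^N r_j^{\alpha}}
% is an equilibrium state. Consistently with the conclusion of the theorem,
% the normalised maps |\det A_i|^{-1/d}A_i = O_i all lie in the compact group O(d).
```

Thus for conformal tuples a Bernoulli equilibrium state does exist, and the compact-containment conclusion of the theorem is exactly what one observes.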
If $0<s<d$ with $d \geq 2$ then clearly the singular value potential $\Phi^s$ corresponds to the case $\alpha_1=\cdots=\alpha_{\lfloor s\rfloor}=1$, $\alpha_{\lceil s\rceil}=s-\lfloor s\rfloor$, $\alpha_{\lceil s \rceil+1}=\cdots=\alpha_d=0$ of the above theorem. In particular Theorem \ref{th:main-tech} implies that if $(A_1,\ldots,A_N) \in \GL_d(\mathbb{R})^N$ is irreducible, $0<s<d$ and the singular value potential has an equilibrium state which is Bernoulli, then the linear maps $|\det A_1|^{-1/d}A_1,\ldots, |\det A_N|^{-1/d}A_N$ are all contained in a compact subgroup of $\GL_d(\mathbb{R})$. As was indicated in the introduction, in combination with various more-or-less standard results from the literature, Theorem \ref{th:main-tech} is sufficient to prove Theorem \ref{th:main}. The derivation of Theorem \ref{th:main} from Theorem \ref{th:main-tech} is presented in the following section, and Theorem \ref{th:main-tech} itself is proved in sections \ref{se:rev} to \ref{se:gen-case}. \section{Proof of Theorem \ref{th:main} conditional on Theorem \ref{th:main-tech}}\label{se:proof-of-main} We begin the process of proving Theorem \ref{th:main} by collecting various results from the literature concerning the Lyapunov dimension, the affinity dimension, the natural projection from $\Sigma_N$ to the attractor, and self-affine measures. \subsection{The Lyapunov and affinity dimensions} The following result demonstrates that the affinity dimension has the properties alluded to in the introduction and introduces its counterpart for measures, the Lyapunov dimension: \begin{lemma}\label{le:cty} Let $A_1,\ldots,A_N \in \GL_d(\mathbb{R})$ with $\max_i \trip{A_i}<1$ for some norm $\trip{\cdot}$ on $\mathbb{R}^d$, and for each $s \geq 0$ define a potential $\Phi^s$ by $\Phi^s(\mathtt{i}):=\varphi^s(A_\mathtt{i})$. 
Then: \begin{enumerate}[(i)] \item The function $s\mapsto P(\Phi^s)=P(A_1,\ldots,A_N;s)$ is a continuous strictly decreasing function $[0,+\infty) \to \mathbb{R}$ with a unique zero, and this zero is strictly positive. \item For every $\mu \in \mathcal{M}_\sigma$ the function $s\mapsto h(\mu)+\Lambda(\Phi^s,\mu)$ is a continuous strictly decreasing function $[0,+\infty) \to \mathbb{R}$ with a unique zero. \end{enumerate} We define the \emph{affinity dimension} of $(A_1,\ldots,A_N)$ to be the unique zero of $s\mapsto P(\Phi^s)$, and the \emph{Lyapunov dimension} of $\mu \in \mathcal{M}_\sigma$ relative to $(A_1,\ldots,A_N)$, denoted $\dimlyap (\mu;A_1,\ldots,A_N)$, to be the unique zero of $s\mapsto h(\mu)+\Lambda(\Phi^s,\mu)$. \end{lemma} The proof of the above lemma is a straightforward application of the inequalities \[\varphi^{s_1}(A_\mathtt{i}) \leq \left(C\trip{A_\mathtt{i}}\right)^{s_1-s_2} \varphi^{s_2}(A_\mathtt{i}) \leq C^{s_1-s_2}\left(\max_i \trip{A_i}\right)^{(s_1-s_2)|\mathtt{i}|} \varphi^{s_2}(A_\mathtt{i})\] and \[\left(\min_i \sigma_d(A_i)\right)^{(s_1-s_2)|\mathtt{i}|} \varphi^{s_2}(A_\mathtt{i}) \leq \sigma_d(A_\mathtt{i})^{s_1-s_2}\varphi^{s_2}(A_\mathtt{i})\leq\varphi^{s_1}(A_\mathtt{i}) \] which are valid for all $\mathtt{i} \in \Sigma_N^*$ and $s_1 \geq s_2 \geq 0$, where the constant $C>0$ depends only on $\trip{\cdot}$ and not on $\mathtt{i}$, $s_1$ or $s_2$. The following relationship between Lyapunov dimension and affinity dimension was observed by A. K\"aenm\"aki \cite{Ka04}: \begin{lemma}\label{le:high} Let $A_1,\ldots,A_N \in \GL_d(\mathbb{R})$ with $\max_i \trip{A_i}<1$ for some norm $\trip{\cdot}$ on $\mathbb{R}^d$, and let $\mu \in \mathcal{M}_\sigma(\Sigma_N)$. Then $\dimlyap (\mu;A_1,\ldots,A_N) \leq \dimaff (A_1,\ldots,A_N)$, and equality holds if and only if $\mu$ is an equilibrium state of the potential $\Phi^s(\mathtt{i}):=\varphi^s(A_\mathtt{i})$ where $s:=\dimaff(A_1,\ldots,A_N)$. 
\end{lemma} \begin{proof} For each $s\geq 0$ we have $h(\mu)+\Lambda(\Phi^s,\mu) \leq P(\Phi^s)$ by the variational principle, Proposition \ref{pr:varp}. In particular if $P(\Phi^s)<0$ for some $s>0$ then $h(\mu)+\Lambda(\Phi^s,\mu)<0$. It follows that \[\left\{s \geq 0 \colon P(\Phi^s)< 0\right\} \subseteq \left\{s \geq 0 \colon h(\mu)+\Lambda(\Phi^s,\mu) < 0\right\}\] and since, by Lemma \ref{le:cty}, \[\dimlyap (\mu;A_1,\ldots,A_N) = \inf\left\{s \geq 0 \colon h(\mu)+\Lambda(\Phi^s,\mu) < 0\right\}\] and \[\dimaff (A_1,\ldots,A_N)= \inf \left\{s \geq 0 \colon P(\Phi^s) < 0\right\}\] it follows that $\dimlyap (\mu;A_1,\ldots,A_N) \leq \dimaff (A_1,\ldots,A_N)$ as required. If these two quantities are equal to one another with common value $s_0$, say, then we must have $h(\mu)+\Lambda(\Phi^{s_0},\mu)=0$ and $P(\Phi^{s_0})=0$ by continuity in view of Lemma \ref{le:cty}, which implies that $\mu$ is an equilibrium state for the potential $\Phi^{s_0}$ as claimed. The converse is trivial. \end{proof} \subsection{The natural projection and the dimension of self-affine measures} If $T_1,\ldots,T_N$ are affine transformations of $\mathbb{R}^d$ which are contractions with respect to some norm $\trip{\cdot}$ on $\mathbb{R}^d$ then for every $v \in \mathbb{R}^d$ and $x =(x_k)_{k=1}^\infty \in \Sigma_N$ the limit \[\Pi(x):=\lim_{n \to \infty} T_{x_1}T_{x_2}\cdots T_{x_n}v\] exists and is independent of the choice of $v \in \mathbb{R}^d$. Indeed, if $\varepsilon>0$ is chosen such that $\trip{T_iu-T_iv} \leq (1-\varepsilon)\trip{u-v}$ for all $u,v \in \mathbb{R}^d$, and $v_0 \in \mathbb{R}^d$ is arbitrary, then for every $r\geq\varepsilon^{-1} \max_i \trip{v_0-T_iv_0}$ every map $T_i$ preserves and contracts $\overline{B_r(v_0)}$, the closed $r$-ball centred on $v_0$ with respect to the norm $\trip{\cdot}$. It follows easily that $\Pi(x)=\bigcap_{n=1}^\infty T_{x_1}\cdots T_{x_n}\overline{B_r(v_0)}$. 
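A standard illustration (not part of the argument): for the maps $T_1x=\frac{x}{3}$ and $T_2x=\frac{x}{3}+\frac{2}{3}$ on $\mathbb{R}$, whose attractor is the middle-thirds Cantor set, $\Pi$ is the ternary coding map and the dimensions introduced above can be computed explicitly:

```latex
% Writing a_k:=0 when x_k=1 and a_k:=2 when x_k=2, for any initial point v:
\Pi(x) = \lim_{n\to\infty} T_{x_1}\cdots T_{x_n}v = \sum_{k=1}^\infty \frac{a_k}{3^k}.
% With A_1=A_2=\tfrac13 we have \varphi^s(A_\mathtt{i})=3^{-s|\mathtt{i}|}, so
P(\Phi^s)=\log\left(2\cdot 3^{-s}\right),
\qquad
\dimaff(A_1,A_2) = \frac{\log 2}{\log 3},
% while for the Bernoulli measure \mu_p with probability vector (p,1-p),
\dimlyap(\mu_p;A_1,A_2) = \frac{-p\log p-(1-p)\log(1-p)}{\log 3},
% which attains \dimaff exactly at p=\tfrac12, in accordance with Lemma \ref{le:high}.
```
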
We deduce also that the diameter of the set $\Pi([\mathtt{i}])$ is bounded by a constant times $(1-\varepsilon)^{|\mathtt{i}|}$ and it follows that $\Pi \colon \Sigma_N \to \mathbb{R}^d$ is continuous. It is not difficult to see that $\Pi(\Sigma_N)$ is contained in the attractor of $(T_1,\ldots,T_N)$ since the initial point $v$ may be taken to be in the attractor. It is also not difficult to see that $\Pi(\Sigma_N)$ is precisely the attractor, although this fact will not be used. We call $\Pi$ the \emph{natural projection} associated to $(T_1,\ldots,T_N)$. The following result relating Bernoulli measures to self-affine measures via the natural projection follows from a more general theorem of J. E. Hutchinson \cite[\S4]{Hu81}. Although Hutchinson's proof assumes the probability vector $(p_1,\ldots,p_N)$ to be nondegenerate, it is not difficult to check that this stipulation is unnecessary. \begin{lemma}\label{le:hutch} Let $T_1,\ldots,T_N \colon \mathbb{R}^d \to \mathbb{R}^d$ be affine transformations which are contractions with respect to some norm on $\mathbb{R}^d$, and let $(p_1,\ldots,p_N)$ be a probability vector. Then a Borel probability measure $m$ on $\mathbb{R}^d$ satisfies $\sum_{i=1}^Np_i (T_i)_*m = m$ if and only if it satisfies $m=\Pi_*\mu$ where $\mu$ is the Bernoulli measure on $\Sigma_N$ characterised by the property $\mu([\mathtt{i}])=p_{i_1}\cdots p_{i_n}$ for all $\mathtt{i}=(i_k)_{k=1}^n \in \Sigma_N^*$. \end{lemma} Finally, the following result connects the Lyapunov dimension with the dimension of a measure: \begin{lemma}\label{le:rossi-joposi} Let $T_1,\ldots,T_N \colon \mathbb{R}^d \to \mathbb{R}^d$ be affine transformations which are contractions with respect to some norm on $\mathbb{R}^d$ and let $\mu \in \mathcal{M}_\sigma$. Write $T_ix=A_ix+v_i$ for all $x\in\mathbb{R}^d$ and $i=1,\ldots,N$. Then $\dim \Pi_*\mu \leq \dimlyap (\mu;A_1,\ldots,A_N)$. 
\end{lemma} \begin{proof} It is shown in \cite[Theorem 2.2]{Ro14} in the more general context of a countably infinite family of transformations $(T_i)_{i=1}^\infty$ that \[\limsup_{r \to 0} \frac{\log \Pi_*\mu(B_r(\Pi(y)))}{\log r}\leq \dimlyap(\mu;A_1,\ldots,A_N) \] for $\mu$-a.e. $y \in \Sigma_N$, and this obviously implies \[\limsup_{r \to 0} \frac{\log \Pi_*\mu(B_r(x))}{\log r} \leq \dimlyap(\mu;A_1,\ldots,A_N)\] for $\Pi_*\mu$-a.e. $x \in \mathbb{R}^d$, which yields the result. The result may also be derived from the proof of \cite[Theorem 4.3]{JoPoSi07}. \end{proof} \subsection{Further continuity properties of the Lyapunov and affinity dimensions} Let $\mathrm{Cont}(\GL_d(\mathbb{R})^N)$ denote the set of all tuples $(A_1,\ldots,A_N) \in \GL_d(\mathbb{R})^N$ with the property that $\max_{1 \leq i \leq N} \trip{A_i}<1$ for some norm $\trip{\cdot}$ on $\mathbb{R}^d$ depending on $(A_1,\ldots,A_N)$. This is clearly an open subset of $\GL_d(\mathbb{R})^N$. The following two results will be key in proving the local uniformity of the dimension gap in Theorem \ref{th:main}: \begin{proposition}\label{pr:gamma} Define a function $\gamma \colon \mathrm{Cont}(\GL_d(\mathbb{R})^N) \to \mathbb{R}$ by \[\gamma(B_1,\ldots,B_N):=\sup\left\{\dimlyap (\mu;B_1,\ldots,B_N) \colon \mu\text{ is a Bernoulli measure on }\Sigma_N\right\}.\] Then $\gamma$ is upper semi-continuous, and additionally for every tuple $(B_1,\ldots,B_N) \in \mathrm{Cont}(\GL_d(\mathbb{R})^N)$ the supremum in the definition of $\gamma$ is attained. 
\end{proposition} \begin{proof} It is sufficient to prove the following statement: given a sequence of tuples $(A_1^{(n)},\ldots,A_N^{(n)}) \in \mathrm{Cont}(\GL_d(\mathbb{R})^N)$ which converges to a limit $(A_1,\ldots,A_N) \in \mathrm{Cont}(\GL_d(\mathbb{R})^N)$, there exists a Bernoulli measure $\mu$ on $\Sigma_N$ such that \begin{equation}\label{eq:gammagoal}\dimlyap(\mu;A_1,\ldots,A_N) \geq \limsup_{n \to \infty} \gamma(A_1^{(n)},\ldots,A_N^{(n)}).\end{equation} Applying this result to a constant sequence of tuples $(A_1,\ldots,A_N)$ demonstrates that the supremum in the definition of $\gamma(A_1,\ldots,A_N)$ is attained; applying it to a nonconstant sequence directly implies that $\gamma$ is upper semi-continuous. Let us prove this claim. For each $n \geq 1$ let $\mu_n$ be a Bernoulli measure such that \[\dimlyap (\mu_n;A_1^{(n)},\ldots,A_N^{(n)}) > \gamma(A_1^{(n)},\ldots,A_N^{(n)})-\frac{1}{n}.\] By passing to a subsequence if required, we may assume that the sequences of values $\gamma(A_1^{(n)},\ldots,A_N^{(n)})$ and $\dimlyap (\mu_n;A_1^{(n)},\ldots,A_N^{(n)})$ are convergent in $\mathbb{R}$, and similarly we may assume that $(\mu_n)$ converges to a limit $\mu$ in the weak-* topology. It is straightforward to verify that the set of Bernoulli measures on $\Sigma_N$ is closed in the weak-* topology and so the limit $\mu$ is necessarily Bernoulli. To prove \eqref{eq:gammagoal} it is sufficient to prove that \begin{equation}\label{eq:lyapgoal} \dimlyap(\mu;A_1,\ldots,A_N) \geq \lim_{n \to \infty}\dimlyap (\mu_n;A_1^{(n)},\ldots,A_N^{(n)}).\end{equation} For each $n \geq 1$ and $s \geq 0$ define a potential $\Phi^{s,n} \colon \Sigma_N^* \to (0,+\infty)$ by $\Phi^{s,n}(\mathtt{i}):=\varphi^s(A_\mathtt{i}^{(n)})$, and define also $\Phi^s(\mathtt{i}):=\varphi^s(A_\mathtt{i})$ for all $\mathtt{i} \in \Sigma_N^*$. 
In the case where the limit $\lim_{n \to \infty} \dimlyap (\mu_n;A_1^{(n)},\ldots,A_N^{(n)})$ is zero the outcome \eqref{eq:lyapgoal} holds trivially, so we assume the limit to be strictly positive. In order to prove \eqref{eq:lyapgoal} it suffices to prove the following: for every positive real number $s<\lim_{n \to \infty} \dimlyap (\mu_n;A_1^{(n)},\ldots,A_N^{(n)})$ we have $h(\mu)+\Lambda(\Phi^s,\mu)\geq 0$. Let us therefore fix $s<\lim_{n \to \infty} \dimlyap (\mu_n;A_1^{(n)},\ldots,A_N^{(n)})$. Let $n_0 \geq 1$ be such that $\dimlyap(\mu_n;A_1^{(n)},\ldots,A_N^{(n)})>s$ for all $n \geq n_0$. For every $n \geq n_0$ we have $h(\mu_n)+\Lambda(\Phi^{s,n},\mu_n) \geq 0$ by the definition of the Lyapunov dimension. For each $n \geq 1$ we by definition have \[h(\mu_n)=\inf_{m \geq 1} \frac{1}{m} \sum_{|\mathtt{i}|=m} -\mu_n([\mathtt{i}])\log\mu_n([\mathtt{i}])=\lim_{m \to \infty} \frac{1}{m} \sum_{|\mathtt{i}|=m} -\mu_n([\mathtt{i}])\log\mu_n([\mathtt{i}])\] and \[\Lambda(\Phi^{s,n},\mu_n) = \inf_{m \geq 1} \frac{1}{m}\sum_{|\mathtt{i}|=m}\mu_n([\mathtt{i}]) \log\Phi^{s,n}(\mathtt{i})=\lim_{m \to \infty} \frac{1}{m}\sum_{|\mathtt{i}|=m}\mu_n([\mathtt{i}]) \log\Phi^{s,n}(\mathtt{i}),\] so for each $n \geq n_0$ we have \[\frac{1}{m}\sum_{|\mathtt{i}|=m} -\mu_n([\mathtt{i}])\log \mu_n([\mathtt{i}]) + \frac{1}{m}\sum_{|\mathtt{i}|=m}\mu_n([\mathtt{i}]) \log\Phi^{s,n}(\mathtt{i}) \geq h(\mu_n)+\Lambda(\Phi^{s,n},\mu_n) \geq 0\] for every $m \geq 1$. We have $\lim_{n \to \infty} \mu_n([\mathtt{i}])=\mu([\mathtt{i}])$ for every $\mathtt{i}$ by weak-* convergence and $\lim_{n\to \infty} \Phi^{s,n}(\mathtt{i})=\Phi^s(\mathtt{i})$ for every $\mathtt{i}$ by the $1$-Lipschitz continuity of the singular value functions $\sigma_k \colon \GL_d(\mathbb{R}) \to \mathbb{R}$. 
For fixed $m \geq 1$ it is thus clear that \begin{eqnarray*} {\lefteqn{\frac{1}{m}\sum_{|\mathtt{i}|=m} -\mu([\mathtt{i}])\log \mu([\mathtt{i}]) + \frac{1}{m}\sum_{|\mathtt{i}|=m}\mu([\mathtt{i}]) \log\Phi^{s}(\mathtt{i})}}& & \\ & & =\lim_{n \to \infty} \frac{1}{m}\sum_{|\mathtt{i}|=m} -\mu_n([\mathtt{i}])\log \mu_n([\mathtt{i}]) + \frac{1}{m}\sum_{|\mathtt{i}|=m}\mu_n([\mathtt{i}]) \log\Phi^{s,n}(\mathtt{i}) \geq 0\end{eqnarray*} and we deduce that \[h(\mu)+\Lambda(\Phi^s,\mu) = \lim_{m \to \infty} \frac{1}{m}\sum_{|\mathtt{i}|=m} -\mu([\mathtt{i}])\log \mu([\mathtt{i}]) + \frac{1}{m}\sum_{|\mathtt{i}|=m}\mu([\mathtt{i}]) \log\Phi^{s}(\mathtt{i}) \geq 0.\] This demonstrates that $\dimlyap (\mu;A_1,\ldots,A_N) \geq s$ and the result follows. \end{proof} We also recall the following theorem of Feng and Shmerkin, which was originally proved in \cite{FeSh14} using thermodynamic formalism and the multiplicative ergodic theorem\footnote{The original result of Feng and Shmerkin works on the smaller space of tuples $(A_1,\ldots,A_N) \in \GL_d(\mathbb{R})^N$ such that $\max_i \|A_i\|<1$ for the Euclidean norm on $\mathbb{R}^d$. If we instead assume that $(A_1,\ldots,A_N) \in \mathrm{Cont}(\GL_d(\mathbb{R})^N)$ satisfies $\max_i \trip{A_i}<1$ for some norm $\trip{\cdot}$ on $\mathbb{R}^d$, then for some integer $n \geq 1$ and all $(B_1,\ldots,B_N)$ in a small neighbourhood of $(A_1,\ldots,A_N)$, the $N^n$-tuple $(B_1^n, B_1^{n-1}B_2,\ldots, B_N^{n-1}B_{N-1},B_N^n) \in \GL_d(\mathbb{R})^{N^n}$ is contracting in the Euclidean norm on $\mathbb{R}^d$ and has affinity dimension equal to $\dimaff(B_1,\ldots,B_N)$ by elementary consideration of the definition of the pressure function. In particular Feng and Shmerkin's result may be applied to these $N^n$-tuples in order to deduce the continuity of the affinity dimension with respect to $(B_1,\ldots,B_N)$ in the small neighbourhood.}. An alternative proof using linear algebra was given in \cite{Mo16}. 
\begin{theorem}\label{th:feng-shmerkin} The function $\dimaff \colon \mathrm{Cont}(\GL_d(\mathbb{R})^N) \to [0,+\infty)$ is continuous. \end{theorem} We also require the following algebraic lemma. Although it can be deduced from the structure theory of reductive groups, we provide a brief elementary proof. \begin{lemma}\label{le:irred.bdd} Let $\mathsf{A}$ be an irreducible subset of $\GL_d(\mathbb{R})$. Suppose that for every $A$ in the semigroup generated by $\mathsf{A}$, the eigenvalues of $A$ all have absolute value $|\det A|^{1/d}$. Then $\{|\det A|^{-1/d}A\colon A \in \mathsf{A}\}$ is contained in a compact subgroup of $\GL_d(\mathbb{R})$. \end{lemma} \begin{proof} Consider the semigroup $\Gamma$ generated by the set $\{|\det A|^{-1/d}A \colon A\in \mathsf{A}\}$, which is clearly irreducible. We claim that $\Gamma$ is bounded. To see this consider the closed subsemigroup $\overline{\mathbb{R}.\Gamma}:=\overline{\{\beta A \colon A \in \Gamma\text{ and }\beta \in \mathbb{R}\}}$ of the algebra of linear endomorphisms of $\mathbb{R}^d$. It is clear that for every $A \in \overline{\mathbb{R}.\Gamma}$ the eigenvalues of $A$ are also all of absolute value $|\det A|^{1/d}$, so in particular every element of $\overline{\mathbb{R}.\Gamma}$ is either invertible or nilpotent. It is easily seen that $\overline{\mathbb{R}.\Gamma}$ admits a nonzero nilpotent element if and only if $\Gamma$ is unbounded, so to prove the claim we will show that the only nilpotent element of $\overline{\mathbb{R}.\Gamma}$ is zero. For a contradiction let $r$ be the minimal rank of a nilpotent nonzero element of $\overline{\mathbb{R}.\Gamma}$ and note that $0<r<d$. Fix a nilpotent element $B$ with rank $r$. Since $\rank (B^2)<\rank B$ by nilpotency we have $\rank (B^2)=0$ by minimality of $r$ so that $B^2=0$. The equation $B^2=0$ implies that the image $B\mathbb{R}^d$ is a subspace of the kernel of $B$. 
Since $\Gamma$ is irreducible, the nonzero $\Gamma$-invariant subspace $\sspan \{ABv \colon v \in \mathbb{R}^d\text{ and }A\in \Gamma\}$ must equal $\mathbb{R}^d$, so in particular there exists $A \in \Gamma$ such that $AB\mathbb{R}^d \not\subset \ker B$. The linear map $AB \in \overline{\mathbb{R}.\Gamma}$ has kernel equal to $\ker B$ since $A$ is invertible, it has rank precisely $r$, and it is nilpotent since every element of $\overline{\mathbb{R}.\Gamma}$ which is not invertible is nilpotent. But we have $(AB)^2 \neq 0$ because the image of $AB$ is not a subset of $\ker B = \ker AB$. This implies that $0<\rank AB<r$ which contradicts the minimality of $r$. We conclude that $\overline{\mathbb{R}.\Gamma}$ contains no nonzero nilpotents and therefore $\Gamma$ must be bounded as claimed. To complete the proof it is sufficient to observe that the closure $\overline{\Gamma}$ is a group. Clearly this closure is a compact subsemigroup of $\GL_d(\mathbb{R})$. To see that it is a group it suffices to show that every $A \in \overline{\Gamma}$ satisfies $A^{-1} \in \overline{\Gamma}$, which may be achieved as follows. Given $A \in \overline{\Gamma}$ choose $(n_k)_{k=1}^\infty$ such that $\lim_{k \to \infty} A^{n_k}$ exists and $n_{k+1}\geq 2+n_k$ for all $k \geq 1$; it is clear that $\lim_{k \to \infty} A^{n_{k+1}-n_k-1}=A^{-1} \in \overline{\Gamma}$ as required. \end{proof} The final ingredient which we require for the proof of Theorem \ref{th:main} is the following: \begin{proposition}\label{pr:openness} The set of all $(A_1,\ldots,A_N) \in \GL_d(\mathbb{R})^N$ satisfying hypotheses (i)--(iv) of Theorem \ref{th:main} is open. \end{proposition} \begin{proof} It is obvious that if $(A_1,\ldots,A_N)$ satisfies $\max_i \trip{A_i}<1$ for some norm $\trip{\cdot}$ on $\mathbb{R}^d$ then so does every tuple $(B_1,\ldots,B_N)$ sufficiently close to $(A_1,\ldots,A_N)$. 
Similarly the set of all $(A_1,\ldots,A_N)$ satisfying (i) such that $0<\dimaff(A_1,\ldots,A_N)<d$ is open as a consequence of Theorem \ref{th:feng-shmerkin}. We claim that the set of all irreducible tuples $(A_1,\ldots,A_N) \in \GL_d(\mathbb{R})^N$ is open. To see this we observe that $(A_1,\ldots,A_N)$ is \emph{not} irreducible if and only if there exist unit vectors $u,v \in \mathbb{R}^d$ such that $\langle A_\mathtt{i} u,v\rangle=0$ for all $\mathtt{i} \in \Sigma_N^*$. Indeed, if such vectors exist then $\mathrm{span}\{A_\mathtt{i} u \colon \mathtt{i} \in \Sigma_N^*\}$ is an invariant subspace for $A_1,\ldots,A_N$ which is clearly not the zero subspace and is clearly a proper subspace since it does not contain $v$. On the other hand if a nonzero proper invariant subspace $U$ exists for $A_1,\ldots,A_N$ then we may choose arbitrary unit vectors $u \in U$ and $v \in U^\perp$ and see that the preceding condition is satisfied. Now observe that if for each $n$ the tuple $(A_1^{(n)},\ldots,A_N^{(n)})$ and unit vectors $u_n$ and $v_n$ satisfy $\langle A_\mathtt{i}^{(n)} u_n,v_n\rangle=0$ for all $\mathtt{i} \in \Sigma_N^*$, and $(A_1,\ldots,A_N)=\lim_{n \to \infty} (A_1^{(n)},\ldots,A_N^{(n)})$, then any accumulation point $(u,v)$ of the sequence $(u_n,v_n)$ satisfies $\langle A_\mathtt{i} u,v\rangle=0$ for all $\mathtt{i} \in \Sigma_N^*$. Thus the set of all tuples $(A_1,\ldots,A_N) \in \GL_d(\mathbb{R})^N$ which are \emph{not} irreducible is closed. As was discussed immediately subsequent to the statement of Theorem \ref{th:main}, the tuple $(A_1,\ldots,A_N)$ satisfies (iv) if and only if the linear maps $|\det A_i|^{-1/d}A_i$ are \emph{not} all contained in some compact subgroup of $\GL_d(\mathbb{R})$. 
We claim that if $(A_1,\ldots,A_N) \in \GL_d(\mathbb{R})^N$ is irreducible, then the linear maps $|\det A_i|^{-1/d}A_i$ are all contained in a compact subgroup of $\GL_d(\mathbb{R})$ if and only if for every $\mathtt{i} \in \Sigma_N^*$, every eigenvalue of $A_\mathtt{i}$ has absolute value equal to $|\det A_\mathtt{i}|^{1/d}$. Indeed, if the first statement holds then every product $A_\mathtt{i}$ has the property that the sequence $(|\det A_\mathtt{i}|^{-n/d}A_\mathtt{i}^n)_{n \in \mathbb{Z}}$ is bounded. Applying Gelfand's formula as $n \to +\infty$ it follows that $\rho(|\det A_\mathtt{i}|^{-1/d}A_\mathtt{i})\leq 1$ and applying Gelfand's formula as $n\to-\infty$ we obtain $\rho(|\det A_\mathtt{i}^{-1}|^{-1/d}A_\mathtt{i}^{-1})\leq 1$. (Here and throughout this article $\rho(B)$ denotes the spectral radius of the linear map $B$.) These two inequalities together imply that every eigenvalue of $|\det A_\mathtt{i}|^{-1/d}A_\mathtt{i}$ has modulus $1$ and the second statement follows. The converse implication is given by Lemma \ref{le:irred.bdd}. We conclude that for an irreducible tuple $(A_1,\ldots,A_N)$, hypothesis (iv) is equivalent to the statement that for some $\mathtt{i} \in \Sigma_N^*$, some eigenvalue of $A_\mathtt{i}$ has absolute value different from $|\det A_\mathtt{i}|^{1/d}$. To complete the proof of the proposition we observe that a tuple $(A_1,\ldots,A_N) \in \GL_d(\mathbb{R})^N$ satisfies both (iii) and (iv) if and only if it belongs to the set of irreducible tuples (which is open) and avoids the set of tuples with the property that for every $\mathtt{i} \in \Sigma_N^*$, every eigenvalue of $A_\mathtt{i}$ has absolute value equal to $|\det A_\mathtt{i}|^{1/d}$. The latter set is obviously closed. The result follows. \end{proof} \subsection{Proof of Theorem \ref{th:main}} It is now a straightforward task to prove the main theorem. 
Proposition \ref{pr:openness} shows that if $(A_1,\ldots,A_N)$ satisfies hypotheses (i)--(iv) of Theorem \ref{th:main}, and $\mathsf{K}$ is a sufficiently small compact neighbourhood of $(A_1,\ldots,A_N)$, then every element of $\mathsf{K}$ satisfies (i)--(iv). Fix a compact subset $\mathsf{K}$ of $\GL_d(\mathbb{R})^N$ such that every $(A_1,\ldots,A_N) \in \mathsf{K}$ satisfies hypotheses (i)--(iv) of Theorem \ref{th:main}. By Lemma \ref{le:high} we have $\gamma(A_1,\ldots,A_N)-\dimaff(A_1,\ldots,A_N) \leq 0$ for all $(A_1,\ldots,A_N) \in \mathsf{K}$, and by the combination of Proposition \ref{pr:gamma} and Theorem \ref{th:feng-shmerkin} the function $(A_1,\ldots,A_N) \mapsto \gamma(A_1,\ldots,A_N) - \dimaff(A_1,\ldots,A_N)$ is upper semi-continuous. In particular its supremum is attained somewhere on $\mathsf{K}$, and is non-positive. Suppose first that this supremum is equal to some negative number $-\kappa<0$. If $(A_1,\ldots,A_N) \in \mathsf{K}$, and $T_1,\ldots,T_N \colon \mathbb{R}^d \to \mathbb{R}^d$ are affine maps for which there exist $v_1,\ldots,v_N \in \mathbb{R}^d$ such that $T_ix=A_ix+v_i$ for all $x \in \mathbb{R}^d$, and $m$ is a self-affine measure with respect to $T_1,\ldots,T_N$, then by Lemma \ref{le:hutch} we have $m=\Pi_*\mu$ for some Bernoulli measure $\mu$ on $\Sigma_N$. Using Lemma \ref{le:rossi-joposi} it follows that \begin{align*}\dim m = \dim \Pi_*\mu &\leq \dimlyap (\mu;A_1,\ldots,A_N)\\ & \leq \gamma(A_1,\ldots,A_N) \leq \dimaff(A_1,\ldots,A_N)-\kappa\end{align*} and we have established the conclusion of Theorem \ref{th:main}. To prove Theorem \ref{th:main} it therefore suffices to show that the supremum \[\sup \left\{\gamma(A_1,\ldots,A_N) -\dimaff(A_1,\ldots,A_N)\colon (A_1,\ldots,A_N) \in \mathsf{K}\right\}\] cannot be zero. 
If this supremum is zero then by the upper semi-continuity of $\gamma$, the continuity of $\dimaff$ and the compactness of $\mathsf{K}$ it must be the case that $\gamma(A_1,\ldots,A_N)=\dimaff(A_1,\ldots,A_N)$ for some $(A_1,\ldots,A_N) \in \mathsf{K}$. By Proposition \ref{pr:gamma} we have $\dimlyap (\mu;A_1,\ldots,A_N)=\gamma(A_1,\ldots,A_N)=\dimaff(A_1,\ldots,A_N)$ for some Bernoulli measure $\mu$ on $\Sigma_N$. By Lemma \ref{le:high} this implies that $\mu$ is an equilibrium state of the potential $\Phi(\mathtt{i}):=\varphi^s(A_\mathtt{i})$ where $s:=\dimaff(A_1,\ldots,A_N) \in (0,d)$. By Theorem \ref{th:main-tech} the linear maps $|\det A_i|^{-1/d}A_i$ are all contained in a compact subgroup of $\GL_d(\mathbb{R})$, but as discussed following the statement of Theorem \ref{th:main}, this contradicts (iv). The proof of Theorem \ref{th:main} is complete. \section{Review of linear algebraic groups}\label{se:rev} \subsection{Reductive linear algebraic groups} Here we include a brief overview of some aspects of reductive linear algebraic groups that will be useful in the proofs of the main results. Our principal interest in this class of groups stems from the fact that they arise as the Zariski closures of semigroups in $\GL_d(\mathbb{R})$ that act irreducibly on $\mathbb{R}^d$ (see below). For a more detailed exposition of the theory of reductive linear algebraic groups, we refer the reader to \cite{bq.book,borel.book,borel-tits,chevalley,knapp}. \subsubsection{Definition and relation to irreducible semigroups}\label{subsub.reductive.irred} A linear Lie subgroup $G$ of $\GL_d(\mathbb{R})$ is said to be reductive if it has no non-trivial normal subgroup consisting of unipotent matrices.
A connected reductive linear real Lie group $G$ is also a linear algebraic group in the sense that it is the connected component of identity $\mathbb{G}(\mathbb{R})^o$ of the group of real points $\mathbb{G}(\mathbb{R})$ of a (reductive) linear algebraic group $\mathbb{G}$ defined over $\mathbb{R}$. The linear algebraic group $\mathbb{G}$ admits a faithful rational representation $\mathbb{G} \to \GL_d$. In particular it can be seen as the set of zeros of polynomials in $\mathbb{R}[x_{ij},(\det x)^{-1}]$, where the $x_{ij}$ are the coordinate entries on $\Mat(d,\mathbb{R})$. Consequently, we can speak of the Zariski topology on $G$: a subset of $G$ is said to be Zariski closed if it is the set of common zeros of a set of polynomial maps. This defines the Zariski topology; the notions of Zariski closure and Zariski density are defined in the obvious way. The usual Hausdorff (analytic) topology on $G$ is finer than the Zariski topology. In the sequel, we shall speak of a real reductive group to mean a reductive linear real Lie group with finitely many connected components, and unless otherwise specified, topological notions refer to the analytic topology. We will often work with semigroups in $\GL_d(\mathbb{R})$. We recall the elementary fact that the Zariski closure of a semigroup $\Gamma$ in $G$ is a (Zariski-closed) group, call it $H$. In particular, the Zariski closure of the group generated by $\Gamma$ is also $H$. Before proceeding further, let us clarify the aforementioned relationship between irreducible, or rather completely reducible, families and real reductive groups. Recall that a semigroup $\Gamma$ in $\GL_d(\mathbb{R})$ is said to act completely reducibly if $\mathbb{R}^d$ decomposes into a direct sum $V_1 \oplus \ldots \oplus V_k$ of $\Gamma$-invariant subspaces $V_i$, on which $\Gamma$ acts irreducibly. It is equivalent to require that every $\Gamma$-invariant subspace has a $\Gamma$-invariant complement.
Clearly, if $\Gamma$ acts irreducibly on $\mathbb{R}^d$, then it acts completely reducibly. The action on $\mathbb{R}^d$ of a real reductive group $G<\GL_d(\mathbb{R})$ is completely reducible (see \cite[Ch.4]{chevalley}). Conversely, let $\Gamma$ be a semigroup of $\GL_d(\mathbb{R})$ that acts completely reducibly on $\mathbb{R}^d$. Let $G$ be the Zariski closure of $\Gamma$. We claim that $G$ is a real reductive group. Indeed, being algebraic, $G$ has finitely many connected components. If it is not real reductive, then it contains a non-trivial normal subgroup $N$ consisting of unipotent matrices. Let $V_1$ be a $G$-irreducible subspace on which $N$ acts non-trivially. By a classical result of Kolchin, the subspace $V_0$ of fixed vectors of $N$ in $V_1$ is a non-trivial proper subspace of $V_1$. Since $N$ is normal in $G$, $V_0$ is invariant under $G$, contradicting irreducibility of the $G$-action on $V_1$. \subsubsection{Cartan space and roots}\label{subsub.cartanspace} Let $A<G$ be a maximal connected real split torus; that is, $A$ is a closed Lie subgroup of $G$ isomorphic to $(\mathbb{R}^\ast_+)^d$ for some $d \in \mathbb{N}$. The integer $d$ is called the (real) rank of $G$. Let $Z(G)$ denote the center of $G$. The integer $d_S:=d-\dim Z(G)$ is called the semisimple rank of $G$. The Lie algebra $\mathfrak{a}$ of $A$ decomposes as $\mathfrak{a}=\mathfrak{a}_Z \oplus \mathfrak{a}_S$, where $\mathfrak{a}_Z$ is the Lie algebra of $A \cap Z(G)$ and $\mathfrak{a}_S$ is the Lie algebra of $A \cap [G,G]$. Here $[G,G]$ denotes the closed commutator subgroup of $G$, which is a semisimple Lie group. Let $\mathfrak{g}$ be the Lie algebra of $G$ and let $\Ad: G \to \GL(\mathfrak{g})$ be the adjoint representation of $G$. A non-trivial character $\alpha:A \to \mathbb{R}^\ast_+$ is said to be a root of $G$ if it is a weight of $A$ for the $\Ad$-representation, i.e.
the subspace $\mathfrak{g}_\alpha:=\{v \in \mathfrak{g}\, | \, \Ad(a)v=\alpha(a)v \; \forall a \in A\}$ is non-trivial. Given a character $\alpha$ of $A$, we denote by $\overline{\alpha}$ the element of $\mathfrak{a}^\ast$ satisfying $\exp(\overline{\alpha}(x))=\alpha(\exp(x))$ for every $x \in \mathfrak{a}$. The set of non-zero linear forms $\overline{\alpha}$ arising in this way from the $\Ad$-representation constitutes a root system that we denote by $\Sigma$. Let $\{\overline{\alpha}_1,\ldots,\overline{\alpha}_{d_S}\}$ be a choice of simple roots so that $\Sigma$ splits into a disjoint union of positive roots $\Sigma_+$ (those elements of $\Sigma$ that can be written as a non-negative integer linear combination of $\overline{\alpha}_i$'s) and negative roots $-\Sigma_+$. We denote by $\mathfrak{a}^+$ the choice of a Weyl chamber in $\mathfrak{a}$ corresponding to a choice of simple roots: $x \in \mathfrak{a}$ belongs to $\mathfrak{a}^+$ if and only if for every $\overline{\alpha} \in \Sigma_+$, $\overline{\alpha}(x) \geq 0$. It is a closed fundamental domain for the action of the Weyl group $N_G(A)/Z_G(A)$, where $N_G(A)$ is the normalizer of $A$ in $G$ and $Z_G(A)$ is the centralizer of $A$ in $G$. The Weyl chamber $\mathfrak{a}^+$ is the direct sum of a salient cone $\mathfrak{a}^+ \cap \mathfrak{a}_S$ and the subspace $\mathfrak{a}_Z$. An example of a real reductive group is $G=\GL_d(\mathbb{R})$ itself. In this case, the maximal real split torus $A$ can be taken to be diagonal matrices with positive coefficients. Its Lie algebra $\mathfrak{a}$ is the commutative Lie algebra of $d \times d$ diagonal matrices. The rank of $G$ is equal to $d$. The commutator subgroup is $[G,G]=\SL(d,\mathbb{R})$ so that $\mathfrak{a}_S$ is the diagonal matrices whose coefficients sum to $0$. In particular, the semisimple rank of $G$ is $d-1$.
The (log) roots are the linear forms $\overline{\alpha}_{i,j}$ with $i \neq j \in \{1,\ldots,d\}$ such that $\alpha_{i,j}(a)=\frac{a_i}{a_j}$ where the $a_i$'s are the diagonal entries of $a$. A base of simple roots is given by $\overline{\alpha}_{i,i+1}$. The corresponding choice of Weyl chamber $\mathfrak{a}^+$ is the diagonal matrices with decreasing coefficients. The Weyl group is isomorphic to the symmetric group $S_d$ acting on $A$ by permuting the diagonal coefficients. \subsubsection{Cartan and Jordan projections}\label{subsub.Cartan.Jordan} Let $G$ be a real reductive group and let $K$ be a maximal compact subgroup of $G$ whose Lie algebra is orthogonal to $\mathfrak{a}$ for the Killing form. The Cartan decomposition of $G$ says that we have $G=KAK$. Here, given an element $g \in G$, its factor in the Cartan decomposition corresponding to the group $A$ is, up to the action of the Weyl group, uniquely determined. In particular for each $g \in G$ there exists a unique element $a_g \in A^+:=\exp(\mathfrak{a}^+)$ such that $g \in Ka_g K$. Accordingly we define the Cartan projection $$ \kappa:G \to \mathfrak{a}^+ $$ by setting $\kappa(g):=\log a_g$. Every element $g \in G$ can also be decomposed as a commuting product $g=g_e g_h g_u$, where $g_e$ is an elliptic element (i.e.\ belonging to a compact group), $g_u$ is a unipotent element (i.e.\ $\Ad(g_u)$ is a unipotent linear transformation, where $\Ad:G \to \GL(\mathfrak{g})$ is the adjoint representation) and $g_h$ is a hyperbolic element (i.e.\ it is conjugate to an element of $A$). The hyperbolic part $g_h$ is uniquely determined and this allows us to define the Jordan projection $$ \lambda:G \to \mathfrak{a}^+ $$ setting $\lambda(g)$ to be the logarithm of the unique element of $A^+$ conjugate to $g_h$.
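In the case $G=\GL_d(\mathbb{R})$ both projections admit concrete descriptions via singular values and eigenvalue moduli. The following numerical sketch (illustrative only and not part of the text; the function names are ours) computes them and checks the standard relation $\lambda(g)=\lim_{n\to\infty}\kappa(g^n)/n$ on an example.

```python
import numpy as np

def cartan_projection(g):
    """kappa(g) for G = GL_d(R): log singular values of g, in
    decreasing order (a point of the Weyl chamber a^+)."""
    return np.log(np.linalg.svd(g, compute_uv=False))

def jordan_projection(g):
    """lambda(g) for G = GL_d(R): log moduli of the eigenvalues of g,
    in decreasing order."""
    return np.sort(np.log(np.abs(np.linalg.eigvals(g))))[::-1]

# Example: eigenvalues 2 and 1/2, so lambda(g) = (log 2, -log 2), and
# kappa(g^n)/n approaches lambda(g) as n grows.
g = np.array([[2.0, 1.0], [0.0, 0.5]])
assert np.allclose(jordan_projection(g), [np.log(2), -np.log(2)])
assert np.allclose(cartan_projection(np.linalg.matrix_power(g, 40)) / 40,
                   jordan_projection(g), atol=0.05)
```

Since `np.linalg.svd` returns singular values in decreasing order and the logarithm is monotone, the output of `cartan_projection` indeed lies in the Weyl chamber described above.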
When $G=\GL_d(\mathbb{R})$, with the above choice of $A$, the maximal compact group $K$ can be taken to be the orthogonal group $O(d,\mathbb{R})$ and the Cartan decomposition is the polar decomposition: for $g \in \GL_d(\mathbb{R})$ its Cartan projection reads $\kappa(g)=(\log \sigma_1(g),\ldots, \log \sigma_d(g))$. The factorisation $g=g_e g_h g_u$ corresponds to Jordan block form and the Jordan projection $\lambda(g)$ reads $\lambda(g)=(\log |\lambda_1(g)|,\ldots,\log|\lambda_d(g)|)$. \subsubsection{Representations and highest weights}\label{subsub.rep} Let $G$ be a connected real reductive group and let $A<G$ and $\Sigma$ be as above. Let $U$ be a maximal unipotent subgroup of $G$ normalised by $A$ and whose Lie algebra is generated by the root spaces $(\mathfrak{g}_\alpha)_{\overline{\alpha} \in \Sigma_+}$. Let $V$ be a finite dimensional real vector space and $(\pi,V)$ an algebraic representation of $G$. An (algebraic) character $\chi$ of $A$ is said to be a restricted weight of $G$ in $(\pi,V)$ if the vector space $V^\chi:=\{v \in V\, | \, \pi(a)v=\chi(a)v \; \forall a \in A\}$ is non-trivial. Such a weight $\chi$ is said to be a parabolic weight if it is a weight of $A$ in the space $V^U:=\{v\in V \, |\, Uv=v\}$. It is said to be a dominant weight if it belongs to the Weyl chamber $\mathfrak{a}^+$ after the identification of $\mathfrak{a}$ with $\mathfrak{a}^\ast$ via an inner product on $\mathfrak{a}$ extending the restriction of the Killing form on $\mathfrak{a}_S$ and for which $\mathfrak{a}_S$ and $\mathfrak{a}_Z$ are orthogonal. The choice of positive roots induces a partial order on the set of characters of $A$: we let $\chi_1 \leq \chi_2$ if and only if $\chi_2-\chi_1$ is a non-negative linear combination of positive, or equivalently simple, roots. An irreducible algebraic representation $(\pi,V)$ of $G$ admits a unique parabolic weight that we shall denote $\chi_V$.
This is also the largest weight for the order induced by the choice of $\mathfrak{a}^+$ and this dominant weight is called \textit{the highest weight}. We will use the following fact that serves as a bridge between the geometry of $G$ and its representations. For its proof, see e.g. \cite[Lemma 8.17]{bq.book}. \begin{lemma}\label{lemma.weight.vs.eigenvalue} Let $G$ be a connected real reductive group, $(\pi,V)$ be an irreducible linear representation of $G$ and $\chi$ be the highest weight. Then, for every $g\in G$, we have $$ \log |\lambda_1(\pi(g))|=\overline{\chi}(\lambda(g)). $$ \end{lemma} \subsubsection{A density result of Benoist} In his study of asymptotic properties of linear groups and their actions on homogeneous spaces, Benoist \cite{benoist.linear2} (see also \cite{benoist.proper.actions}) introduced a notion of limit cone of a semigroup: given a semigroup $\Gamma$ in a real reductive group $G$, this is the smallest closed cone in $\mathfrak{a}^+$ containing all Jordan projections $\lambda(\gamma)$ of elements $\gamma \in \Gamma$. He proved in particular that the intersection of an affine translate of this cone with $\mathfrak{a}_S$ has non-empty interior in $\mathfrak{a}_S$ whenever $\Gamma$ is Zariski dense in $G$. The following density result of Benoist \cite{benoist.linear2}, later proven in a more elementary fashion by Quint \cite{quint.schottky}, is a refinement of the aforementioned property of this limit cone. In the proof of our main result, it will be instrumental in deducing the compactness of the image of $[G,G]$ under certain linear representations. We state a version of this result that is adapted to our purposes (see \cite[Proposition 9.8]{bq.book}): \begin{theorem}[\cite{benoist.linear2,quint.schottky,bq.book}]\label{thm.benoist.density} Let $G$ be a connected real reductive group and $\Gamma<G$ a Zariski dense semigroup.
The closed subgroup of $\mathfrak{a}$ spanned by the elements $\lambda(\gamma_1 \gamma_2)-\lambda(\gamma_1)-\lambda(\gamma_2)$ for $\gamma_1, \gamma_2 \in \Gamma$ is $\mathfrak{a}_S$. \end{theorem} We remark that in the work of Quint \cite{quint.schottky}, Benoist's non-arithmeticity result was also applied in a symbolic-dynamical context. \section{The case of irreducible representations}\label{se:irr-case} \subsection{Overview} We may now commence working in earnest on the proof of Theorem \ref{th:main-tech}. We will study the potential $\Phi(\mathtt{i}):=\prod_{i=1}^d \sigma_i(A_\mathtt{i})^{\alpha_i}$ by rewriting it in the form $\Phi(\mathtt{i})=\prod_{j=1}^d \|A_\mathtt{i}^{\wedge j}\|^{\alpha_j-\alpha_{j+1}}$, where $\alpha_{d+1}:=0$. Since by hypothesis the semigroup $\Gamma:=\{A_\mathtt{i} \colon \mathtt{i} \in \Sigma_N^*\}$ acts irreducibly on $\mathbb{R}^d$, it follows from the discussion at the beginning of \S\ref{se:rev} that the Zariski closure of $\Gamma$ in $\GL_d(\mathbb{R})$ is a real reductive group $G$. We are thus in the following situation: we have a finite set of elements $g_1,\ldots,g_N$ of a real reductive group $G$ which generate a Zariski dense subsemigroup of $G$, a finite collection of representations $\pi_j$ from $G$ to $\GL(\wedge^j\mathbb{R}^d)$, a collection of non-negative real numbers $\beta_j$, and a potential $\Phi$ of the form $\Phi(\mathtt{i})=\prod_j \|\pi_j(g_\mathtt{i})\|^{\beta_j}$, where $g_\mathtt{i}:=g_{i_1}\cdots g_{i_n}$ for $\mathtt{i}=(i_t)_{t=1}^n$. (Since those indices $j$ for which $\beta_j=0$ have no effect on the value of $\Phi(\mathtt{i})$, we discard those indices. The condition $\alpha_1>\alpha_d$ implies that at least one $j<d$ is retained.) We wish to show that if $\Phi$ has an equilibrium state which is a Bernoulli measure, then $G$ must be a group of similitudes. Equivalently, we wish to show that the group $\{|\det g|^{-1/d}g \colon g \in G\}$ must be compact. 
In the full generality of Theorem \ref{th:main-tech} we have no reason to believe that the representations $\pi_j$ are irreducible, which significantly complicates the argument. These representations are however completely reducible as a consequence of the reductiveness of the group $G$. We will therefore first prove a version of Theorem \ref{th:main-tech} in the case of irreducible representations $\pi_j$, and then obtain the theorem in the general case by presenting the problem as a family of sub-cases each of which corresponds to a choice of a family of irreducible subspaces, one from each exterior power. The latter task is deferred to the following section. The objective of the present section will therefore be to prove the following: \begin{theorem}\label{th:irreducible-case} Let $G$ be a real reductive group. Given a positive integer $k$ and for each $j=1,\ldots,k$ a real inner product space $V_j$ of dimension $d_j \geq 1$, let $\pi_j \colon G \to \GL(V_j)$ be an irreducible linear representation. Let $g_1,\ldots,g_N \in G$ and write $g_\mathtt{i}:=g_{i_1}\cdots g_{i_n}$ for all $\mathtt{i} =(i_t)_{t=1}^n \in \Sigma_N^*$. Given constants $\beta_j>0$, define a potential $\Phi \colon \Sigma_N^* \to (0,+\infty)$ by \[\Phi(\mathtt{i}):=\prod_{j=1}^k \left\|\pi_j(g_\mathtt{i})\right\|^{\beta_j}.\] Suppose that the semigroup generated by $g_1,\ldots,g_N$ is Zariski dense in $G$. Then the following are equivalent: \begin{enumerate}[(i)] \item There exists an equilibrium state of $\Phi$ which is a Bernoulli measure. \item The potential $\Phi^{\det} \colon \Sigma_N^* \to (0,+\infty)$ defined by \[\Phi^{\det}(\mathtt{i}):= \prod_{j=1}^k \left|\det \pi_j(g_\mathtt{i}) \right|^{\frac{\beta_j}{d_j}}\] satisfies $P(\Phi)=P(\Phi^{\det})$. \item For every $j=1,\ldots,k$ the group \[\left\{|\det \pi_j(g)|^{-\frac{1}{d_j}} \pi_j(g) \colon g \in G\right\}\] is a compact subgroup of $\GL(V_j)$. 
\end{enumerate} \end{theorem} The proof of the implications (iii)$\implies$(ii)$\implies$(i) is straightforward and almost all of the length of the proof of Theorem \ref{th:irreducible-case} arises from the implication (i)$\implies$(iii). As was described briefly in \S\ref{se:tech-thm} this proof itself consists of two somewhat separate parts, which we describe below. \subsubsection{Comments on the proof}\label{subsub.comments} The representations $\pi_j$ are irreducible but will not in general be strongly irreducible, so in general there exists for each $j$ a finite collection $U_j^1,\ldots,U_j^{n_j}$ of subspaces of $V_j$ which is permuted by the action of $G$ under the representation $\pi_j$. (If $\pi_j$ is strongly irreducible then we have $n_j=1$ and $U_j^1=V_j$.) We choose these subspaces to be of the least possible dimension and it is not difficult to deduce that they must have pairwise trivial intersection. Each $U_j^i$ is preserved by every element of the (Zariski) identity component\footnote{For the purposes of this description of the proof it makes no difference whether $G^o$ is taken to be the connected component with respect to the analytic topology or with respect to the Zariski topology. However, for technical reasons which will apply later, we define $G^o$ to be the group of real points of the Zariski connected component of $G$.} $G^o$, and in the first part of the proof we consider the action of $G^o$ on each $U_j^i$ via the restriction of $\pi_j$ to a representation $G^o \to \GL(U_j^i)$. By minimality of the dimension of $U_j^i$ this action is irreducible. 
Using the fact that there exists a $\Phi$-equilibrium state which is a Bernoulli measure, a mechanism introduced in \cite{BoMo18} for writing $\Phi$ as the pointwise maximum of a finite collection of quasi-multiplicative potentials $\Phi^{\mathcal{W}}$, Proposition \ref{pr:qm-unique}, and Theorem \ref{thm.benoist.density}, we establish using the ideas outlined in the introduction that for each $j$ and $i$ the group $\pi_j(G^o)|_{U_j^i}$ is a group of linear similarity transformations of $U_j^i$ with respect to some inner product on $U_j^i$. At this point we will have established that for each $j$, the elements of $\pi_j(G^o)$ can be simultaneously block diagonalised (using a splitting of the form $V_j=U_j^{i_1}\oplus \cdots \oplus U_j^{i_r}$) with each diagonal block equal to an orthogonal matrix times a positive real scalar. (This construction can be interpreted by saying that the elements of $\pi_j(G^o)$ are all normal matrices with respect to some consistent inner product structure on $V_j$.) In order to verify that $\pi_j(G^o)$ has the required property (iii) it remains to verify that for each fixed $g$ these scalars are the same for every block. In this part of the proof we must use not only the existence of a potential $\Phi^{\mathcal{W}_0}$ whose equilibrium state is a Bernoulli measure, but the fact that the pressure $P(\Phi^{\mathcal{W}_0})$ is equal to the pressure $P(\Phi)$ of the original potential $\Phi$, or equivalently, the fact that $P(\Phi^{\mathcal{W}_0})$ is maximal among all of the pressures $P(\Phi^{\mathcal{W}})$.
The underlying intuitive idea is that the products $\pi_j(g_\mathtt{i})$ necessarily have non-separated Lyapunov exponents with respect to the Bernoulli measure; this will be shown to imply that these products also have non-separated Lyapunov exponents with respect to the equilibrium measures of the other potentials $\Phi^{\mathcal{W}}$, since if this were not the case those equilibrium states would have a larger top Lyapunov exponent than is allowed by the variational principle. In practice this argument is implemented by comparing the values of various pressure functions associated to the different potentials $\Phi^{\mathcal{W}}$ (which are defined in terms of the growth rate of the norm of each representation and allow for separated Lyapunov exponents) and the potential $\Phi^{\det}$, which is defined in terms of the growth rates of determinants of representations (which does not perceive any difference between Lyapunov exponents). Once it has been shown that for each $g \in G^o$ the scalars associated to each diagonal block in the block diagonalisation of $\pi_j(g)$ are the same, it follows that $\pi_j(G^o)$ is contained in a group of linear similarity transformations of $\GL(V_j)$. The same result follows immediately for $\pi_j(G)$ since the remaining components of $\pi_j(G)$ form a finite collection of continuous images of $\pi_j(G^o)$. The respective functions of the two parts of the proof may be illustrated by considering two opposite extreme cases of the argument as follows. If it is known \emph{a priori} that each representation $\pi_j$ is strongly irreducible -- for example, if the group $G$ is known to be connected -- then we have $U_j^1=V_j$ for each $j$ and the first part of the proof establishes directly that each $\pi_j(G)$ is a group of linear similitudes as required. The proof is then complete without meaningful reference to the second part. 
If on the other hand it is known \emph{a priori} that for each $j$, there is a basis for $V_j$ with respect to which every $\pi_j(g_\mathtt{i})$ is represented by a generalised permutation matrix (that is, a matrix with exactly one nonzero entry in each row and in each column) then the subspaces $U_j^i$ are all one-dimensional, the action of $G^o$ on each subspace is trivially by a similitude since no other linear transformations of a one-dimensional space exist, and the first part of the proof is entirely redundant. In this case only the second part of the proof is required. \subsubsection{Remarks on a generalisation of Theorem \ref{th:irreducible-case}} Before starting the proof of Theorem \ref{th:irreducible-case}, we lastly remark that this theorem (and Theorem \ref{th:main-tech}) can easily be extended to the case of a linear Lie group $G$ which is not necessarily reductive. Indeed, using a reductivisation argument such as \cite[Proposition 6.8]{KaMo18} one may show that the equilibrium states of an affine iterated function system are determined only by the projections of the linear parts of the affinities to a reductive Levi component (or in explicit co-ordinates, by the block diagonal parts of those linear maps when presented in block upper triangular form). This extended result does not lead to a significantly more powerful version of Theorem \ref{th:main} since in general it can easily occur that the equilibrium states are determined only by a proper subset of the diagonal blocks: the existence of a Bernoulli equilibrium state in the absence of irreducibility (but in the presence of complete reducibility) consequently can be used only to deduce that \emph{some} of the diagonal blocks of the affine transformations must consist of similitudes. Since this extended result requires few additional steps but lacks the clear interest of Theorem \ref{th:main} we leave it to the reader. 
\subsection{Proof of the implications (iii)$\implies$(ii)$\implies$(i)} The implication (iii)$\implies$(ii) is simple: if for each $j=1,\ldots,k$ the group \[\{|\det \pi_j(g)|^{-1/d_j} \pi_j(g) \colon g \in G\}\] is contained in a compact subset of $\GL(V_j)$, then we may find $K>0$ such that \[K^{-1} |\det \pi_j(g)|^{1/d_j} \leq \left\|\pi_j(g)\right\| \leq K |\det \pi_j(g)|^{1/d_j} \] for all $j=1,\ldots,k$ and all $g \in G$. It follows that for all $\mathtt{i} \in \Sigma_N^*$ we have \[K^{-\sum_{j=1}^k \beta_j} \Phi^{\det}(\mathtt{i}) \leq \Phi(\mathtt{i}) \leq K^{\sum_{j=1}^k \beta_j}\Phi^{\det}(\mathtt{i})\] and we deduce that $P(\Phi)=P(\Phi^{\det})$ by direct reference to the definition of the pressure. This proves (iii)$\implies$(ii). Let us now prove (ii)$\implies$(i). Assuming (ii), let $\mu$ be the Bernoulli measure on $\Sigma_N$ with probability vector $(p_1,\ldots,p_N)$ given by \[p_{i_0}:= \frac{ \prod_{j=1}^k \left|\det \pi_j(g_{i_0}) \right|^{\frac{\beta_j}{d_j}}}{\sum_{i=1}^N \prod_{j=1}^k \left|\det \pi_j(g_i) \right|^{\frac{\beta_j}{d_j}}}\] for every $i_0=1,\ldots,N$. Since \begin{align*}P(\Phi^{\det}) &= \lim_{n \to \infty} \frac{1}{n}\log \sum_{|\mathtt{i}|=n} \Phi^{\det}(\mathtt{i}) = \lim_{n \to \infty} \frac{1}{n}\log \sum_{|\mathtt{i}|=n} \prod_{j=1}^k \left|\det \pi_j(g_\mathtt{i}) \right|^{\frac{\beta_j}{d_j}}\\ &=\log \sum_{i=1}^N \prod_{j=1}^k \left|\det \pi_j(g_i) \right|^{\frac{\beta_j}{d_j}}\end{align*} using the multiplicativity of the determinant, we observe that \[\mu([\mathtt{i}])=\frac{ \prod_{j=1}^k \left|\det \pi_j(g_\mathtt{i} ) \right|^{\frac{\beta_j}{d_j}}}{\left(\sum_{i=1}^N \prod_{j=1}^k \left|\det \pi_j(g_i) \right|^{\frac{\beta_j}{d_j}}\right)^{|\mathtt{i}|}}=\frac{ \Phi^{\det}(\mathtt{i})}{e^{|\mathtt{i}|P(\Phi^{\det})}}\] for every $\mathtt{i} \in \Sigma_N^*$.
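The computation of $P(\Phi^{\det})$ above rests on the multiplicativity of the determinant, which makes the partition sums exactly geometric in the word length. The following numerical sanity check (illustrative only; it treats the simplest case $k=1$ of a single representation, and the names are ours) verifies this identity on random generators.

```python
import itertools

import numpy as np

rng = np.random.default_rng(0)

def phi_det(word, mats, beta):
    """Phi^det(i) = |det g_i|^(beta/d) in the simplest case of a single
    d-dimensional representation (k = 1), g_i the product along the word."""
    d = mats[0].shape[0]
    prod = np.eye(d)
    for index in word:
        prod = prod @ mats[index]
    return abs(np.linalg.det(prod)) ** (beta / d)

# Three random generators and an arbitrary exponent beta > 0.
mats = [rng.standard_normal((2, 2)) + 3.0 * np.eye(2) for _ in range(3)]
beta = 1.5
z1 = sum(phi_det((i,), mats, beta) for i in range(3))
# Multiplicativity of |det| makes the length-n partition sum exactly
# (z1)^n, so P(Phi^det) = log z1 is attained without taking a limit.
zn = sum(phi_det(w, mats, beta) for w in itertools.product(range(3), repeat=4))
assert np.isclose(zn, z1 ** 4)
```

The probability vector of the Bernoulli measure in the proof is then simply the normalisation of the single-letter values of $\Phi^{\det}$.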
Now, for each $n \geq 1$ we have \begin{eqnarray*} {\lefteqn{\sum_{|\mathtt{i}|=n} -\mu([\mathtt{i}])\log \mu([\mathtt{i}]) +\sum_{|\mathtt{i}|=n} \mu([\mathtt{i}])\log \Phi^{\det}(\mathtt{i})}}& & \\ & & = \sum_{|\mathtt{i}|=n} \mu([\mathtt{i}]) \left(nP(\Phi^{\det}) - \log \Phi^{\det}(\mathtt{i})+\log \Phi^{\det}(\mathtt{i})\right)\\ & & = nP(\Phi^{\det}) \sum_{|\mathtt{i}|=n}\mu([\mathtt{i}])= nP(\Phi^{\det}) \end{eqnarray*} and since \[h(\mu)= \lim_{n \to \infty} \frac{1}{n}\sum_{|\mathtt{i}|=n} -\mu([\mathtt{i}])\log \mu([\mathtt{i}])\] and \[\Lambda\left(\Phi^{\det},\mu\right) = \lim_{n \to \infty}\frac{1}{n}\sum_{|\mathtt{i}|=n} \mu([\mathtt{i}])\log \Phi^{\det}(\mathtt{i})\] we conclude that \[h(\mu)+\Lambda(\Phi^{\det},\mu) = P\left(\Phi^{\det}\right).\] Now, clearly \[\Phi^{\det}(\mathtt{i})= \prod_{j=1}^k \left|\det \pi_j(g_\mathtt{i}) \right|^{\frac{\beta_j}{d_j}} \leq \prod_{j=1}^k \left\|\pi_j(g_\mathtt{i})\right\|^{\beta_j}=\Phi(\mathtt{i})\] for every $\mathtt{i} \in \Sigma_N^*$ using the elementary bound $|\det B| \leq \|B\|^{\dim V_j}$ valid for all $B \in \GL(V_j)$. It follows directly that $\Lambda(\Phi^{\det},\mu) \leq \Lambda(\Phi,\mu)$. We deduce that \[P(\Phi)=P(\Phi^{\det}) = h(\mu)+\Lambda(\Phi^{\det},\mu) \leq h(\mu)+\Lambda(\Phi,\mu) \leq P(\Phi)\] where we have used the hypothesis (ii) and, in the final inequality, the subadditive variational principle. It follows that $h(\mu)+\Lambda(\Phi,\mu)=P(\Phi)$ and thus the Bernoulli measure $\mu$ is an equilibrium state for the potential $\Phi$. This completes the proof of (ii)$\implies$(i). \subsection{Proof of (i) $\implies$ (iii)} \subsubsection{The family of subspaces with finite orbit}\label{subsub.family.of.subspaces} For each $j=1,\ldots,k$ let $\ell_j \geq 1$ be the smallest possible dimension of a nonzero subspace of $V_j$ which is invariant under $\pi_j(g)$ for all $g \in G^o$, and choose $U_j \subseteq V_j$ to be such an $\ell_j$-dimensional subspace. 
It is not difficult to see that the function $g \mapsto \pi_j(g)U_j$ is constant on each connected component of $G$: if $g_1,g_2$ belong to the same component $G_i$ then $g_1^{-1}G_i$ is a connected component which contains the identity, hence is $G^o$, hence $g_1^{-1}g_2 \in G^o$, so $\pi_j(g_1^{-1}g_2)U_j=U_j$ and therefore $\pi_j(g_1)U_j=\pi_j(g_2)U_j$. For fixed $j=1,\ldots,k$, let $U_j^1,\ldots,U_j^{n_j}$ denote the complete list of subspaces of $V_j$ having the form $\pi_j(g)U_j$ for some $g \in G$. Fix $j \in \{1,\ldots,k\}$. We observe that $\mathrm{span} \bigcup_{i=1}^{n_j} U_j^i$ is a nonzero subspace of $V_j$ which is preserved by $\pi_j(g)$ for every $g \in G$, since each $\pi_j(g)$ acts on the spaces $U_j^i$ by permutation. By irreducibility it follows that this subspace must equal the whole of $V_j$. We now make the following claim: if $i_1,\ldots,i_{t+1}$ are distinct integers in the range $1$ to $n_j$, where $t \geq 1$, then $U_j^{i_{t+1}}$ either is a subspace of the vector space $\mathrm{span} \bigcup_{s=1}^t U_j^{i_s}$ or has trivial intersection with it. Indeed, if neither of these statements is true then $0<\dim U_j^{i_{t+1}}\cap \left(\mathrm{span} \bigcup_{s=1}^t U_j^{i_s}\right)<\dim U_j^{i_{t+1}}=\ell_j$, in which case $U_j^{i_{t+1}}\cap \left(\mathrm{span} \bigcup_{s=1}^t U_j^{i_s}\right)$ is a subspace of $V_j$ which is fixed by $\pi_j(g)$ for all $g \in G^o$ but has dimension strictly less than $\ell_j$, contradicting minimality, and we deduce the truth of the claim. Now let $r_j$ be the largest integer such that we can find distinct integers $i_1,\ldots,i_{r_j}$ for which the spaces $U_j^{i_1},\ldots,U_j^{i_{r_j}}$ form a direct sum. (We observe that $r_j$ is at least $1$ and at most $n_j$, hence is well-defined.) 
If $U_j^{i_1}\oplus \cdots \oplus U_j^{i_{r_j}}$ is not equal to $V_j$ then by the preceding observation there must be some subspace $U_j^t$ which is not contained in it, hence has trivial intersection with it, allowing us to extend the direct sum, which is a contradiction. We therefore have $V_j=U_j^{i_1}\oplus \cdots \oplus U_j^{i_{r_j}}$ and in particular $r_j\ell_j=d_j$. We now claim there exists $C_1>0$ such that \begin{equation}\label{eq:kappa}\prod_{j=1}^k \left\|\pi_j(g)\right\|^{\beta_j} \leq C_1\prod_{j=1}^k \max_{1 \leq i \leq n_j} \left\|\pi_j(g)|_{U_j^i}\right\|^{\beta_j} \end{equation} for all $g \in G$. It is clearly sufficient to show that for each $j$ there exists $\tau_j>0$ such that $\max_{1 \leq i \leq n_j} \|B|_{U_j^i}\| \geq \tau_j\|B\|$ for every linear map $B \colon V_j \to V_j$, since then we may take $C_1:=\prod_{j=1}^k \tau_j^{-\beta_j}$. By homogeneity it is clearly sufficient to restrict to the case where $\|B\|=1$. If we can show that $\max_{1 \leq i \leq n_j} \|B|_{U_j^i}\|>0$ for every $B \in \mathrm{End}(V_j)$ with $\|B\|=1$ then the existence of $\tau_j$ follows by the compactness of the unit sphere of $\mathrm{End}(V_j)$. But if this inequality fails for some $B \in \mathrm{End}(V_j)$ with $\|B\|=1$ then we have found a nonzero linear map from $V_j$ to itself which is zero on every $U_j^i$, and this is impossible since the spaces $U_j^i$ together span $V_j$. The claim is proved. \subsubsection{Transitivity classes and the construction of quasi-multiplicative potentials}\label{subsub.transitivity.classes} Let $\mathfrak{W}$ denote the set of all $k$-tuples $(U_j^{i_j})_{j=1}^k$ such that $1 \leq i_j \leq n_j$ for all $j=1,\ldots,k$. We observe that $G$ acts on $\mathfrak{W}$ by taking the pair $(g, (U_j^{i_j})_{j=1}^k)$ to the tuple $(\pi_j(g)U_j^{i_j})_{j=1}^k$. 
Since the value of $(\pi_j(g)U_j^{i_j})_{j=1}^k$ depends only on the connected component of $G$ to which $g$ belongs, the $G$-action on $\mathfrak{W}$ factors through the quotient $G/G^o$ and yields an action of the finite group $G/G^o$ on $\mathfrak{W}$. Let us say that a \emph{transitivity class} is a subset of $\mathfrak{W}$ which corresponds to the orbit of a single tuple $(U_j^{i_j})_{j=1}^k$, and denote the set of transitivity classes by $\mathscr{W}$. Obviously, the number of transitivity classes is finite. For every transitivity class $\mathcal{W} \in \mathscr{W}$ let us define a potential $\Phi^{\mathcal{W}} \colon \Sigma_N^* \to (0,+\infty)$ by \begin{equation*} \Phi^{\mathcal{W}}(\mathtt{i}):=\max_{(W_j)_{j=1}^k \in \mathcal{W}} \prod_{j=1}^k \left\|\pi_j(g_\mathtt{i})|_{W_j}\right\|^{\beta_j}. \end{equation*} The inequality $\Phi^{\mathcal{W}}(\mathtt{i}\mathtt{j}) \leq \Phi^{\mathcal{W}}(\mathtt{i})\Phi^{\mathcal{W}}(\mathtt{j})$ follows easily from the definition. It is clear that for each $\mathtt{i} \in \Sigma_N^*$ \[\Phi(\mathtt{i}) = \prod_{j=1}^k \left\|\pi_j(g_\mathtt{i})\right\|^{\beta_j} \leq C_1\prod_{j=1}^k \max_{1 \leq i \leq n_j} \left\|\pi_j(g_\mathtt{i})|_{U_j^i}\right\|^{\beta_j} \leq C_1\prod_{j=1}^k \left\|\pi_j(g_\mathtt{i})\right\|^{\beta_j} = C_1\Phi(\mathtt{i})\] and also \[\prod_{j=1}^k \max_{1 \leq i \leq n_j} \left\|\pi_j(g_\mathtt{i})|_{U_j^i}\right\|^{\beta_j} = \max_{(U_j^{i_j})_{j=1}^k\in\mathfrak{W}} \prod_{j=1}^k \left\|\pi_j(g_\mathtt{i})|_{U_j^{i_j}}\right\|^{\beta_j} =\max_{\mathcal{W} \in \mathscr{W}} \Phi^{\mathcal{W}}(\mathtt{i})\] so that \begin{equation}\label{eq:kappa-again}C_1^{-1}\Phi(\mathtt{i}) \leq \max_{\mathcal{W} \in \mathscr{W}}\Phi^{\mathcal{W}}(\mathtt{i}) \leq \Phi(\mathtt{i})\end{equation} for all $\mathtt{i} \in \Sigma_N^*$. We observe in particular that $P(\Phi^{\mathcal{W}}) \leq P(\Phi)$ for every transitivity class $\mathcal{W}$ by direct appeal to the definition of the pressure.
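The constants in \eqref{eq:kappa} and \eqref{eq:kappa-again} rest on the elementary fact that a family of subspaces spanning $V_j$ controls the operator norm from below. The following is a purely illustrative numerical sketch, not part of the proof: on $\mathbb{R}^2$, taking the two coordinate axes as the spanning subspaces (our own toy choice), one may take $\tau=1/\sqrt{2}$, since the operator norm is dominated by the Frobenius norm.

```python
import math
import random

def opnorm2(B):
    # Operator (spectral) norm of a 2x2 matrix: square root of the largest
    # eigenvalue of B^T B, via the closed form for a symmetric 2x2 matrix.
    (a, b), (c, d) = B
    p, q, r = a*a + c*c, a*b + c*d, b*b + d*d  # entries of B^T B
    lam_max = (p + r + math.sqrt((p - r)**2 + 4*q*q)) / 2
    return math.sqrt(lam_max)

random.seed(0)
for _ in range(1000):
    B = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(2)]
    # The restriction of B to the axis U^i = span(e_i) has norm equal to
    # the Euclidean length of the i-th column of B.
    col = [math.hypot(B[0][i], B[1][i]) for i in range(2)]
    assert max(col) >= opnorm2(B) / math.sqrt(2) - 1e-12
print("max restricted norm >= ||B|| / sqrt(2) in all trials")
```

The bound holds because $\|B\|^2 \leq \|B\|_F^2 = \|B|_{U^1}\|^2 + \|B|_{U^2}\|^2 \leq 2\max_i\|B|_{U^i}\|^2$; the compactness argument in the text gives existence of such a $\tau_j$ in general without producing an explicit value.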
By \cite[Theorem 6]{BoMo18}\footnote{See also a predecessor of this result by Quint, based on the first property of ``produit g\'{e}n\'{e}rique'' in \cite[Proposition I.2]{quint.div}, cf.~Step 2 of the proof of Theorem \ref{thm.cartan.state}.} there exist $\delta>0$ and a finite subset $F$ of the semigroup $\{g_\mathtt{i} \colon \mathtt{i} \in \Sigma_N^*\}$ such that for every $\mathtt{i},\mathtt{j} \in \Sigma_N^*$ we have \[\max_{\mathtt{k} \in F}\Phi^{\mathcal{W}}(\mathtt{i} \mathtt{k} \mathtt{j}) \geq \delta \Phi^{\mathcal{W}}(\mathtt{i})\Phi^{\mathcal{W}}(\mathtt{j}). \] By Proposition \ref{pr:qm-unique} this implies that for each transitivity class $\mathcal{W}$ there exists a unique measure $\nu \in \mathcal{M}_\sigma$ which is an equilibrium state for $\Phi^{\mathcal{W}}$, and this measure satisfies the Gibbs inequality \begin{equation*} C^{-1}_2e^{-|\mathtt{i}|P(\Phi^{\mathcal{W}})} \Phi^{\mathcal{W}}(\mathtt{i}) \leq \nu([\mathtt{i}]) \leq C_2e^{-|\mathtt{i}|P(\Phi^{\mathcal{W}})} \Phi^{\mathcal{W}}(\mathtt{i}) \end{equation*} for every $\mathtt{i} \in \Sigma_N^*$, where $C_2>0$ does not depend on $\mathtt{i}$. Since the number of transitivity classes is finite, we may choose $C_2$ to be independent of the choice of $\mathcal{W}$ also. We observe in particular that $\nu([\mathtt{i}])$ is always nonzero. By hypothesis there exists a Bernoulli measure $\mu \in \mathcal{M}_\sigma$ which is an equilibrium state for $\Phi$. Since $\mu$ is a Bernoulli measure it is ergodic, so by the subadditive ergodic theorem we have for $\mu$-a.e. $x \in \Sigma_N$ \[\lim_{n \to \infty} \frac{1}{n}\log \Phi^{\mathcal{W}}(x|_n)=\Lambda(\Phi^{\mathcal{W}},\mu)\] for every transitivity class $\mathcal{W}$, and also \[\lim_{n \to \infty} \frac{1}{n}\log \Phi(x|_n)=\Lambda(\Phi,\mu).\] In particular for $\mu$-a.e.
$x \in \Sigma_N$ \begin{equation}\label{eq.max.trans.class} \begin{aligned}\Lambda(\Phi,\mu) &= \lim_{n \to \infty} \frac{1}{n}\log \Phi(x|_n) = \lim_{n \to \infty} \frac{1}{n}\log \max_{\mathcal{W}\in\mathscr{W}} \Phi^{\mathcal{W}}(x|_n)\\ &= \max_{\mathcal{W}\in\mathscr{W}} \lim_{n \to \infty} \frac{1}{n}\log \Phi^{\mathcal{W}}(x|_n)=\max_{\mathcal{W}\in\mathscr{W}} \Lambda(\Phi^{\mathcal{W}},\mu)\end{aligned} \end{equation} where we have used \eqref{eq:kappa-again} in the second equality. Choose a transitivity class $\mathcal{W}_0$ which attains this maximum, which we fix for the remainder of the proof. We have \[P(\Phi) =h(\mu)+\Lambda(\Phi,\mu) = h(\mu)+\Lambda(\Phi^{\mathcal{W}_0},\mu)\leq P(\Phi^{\mathcal{W}_0}) \leq P(\Phi) \] using the variational principle and the inequality $P(\Phi^{\mathcal{W}}) \leq P(\Phi)$ established earlier. Since the first and last terms in this chain of inequalities are equal, the inequalities must all be equalities. It follows that $\mu$ is the unique equilibrium state of the potential $\Phi^{\mathcal{W}_0}$. \subsubsection{Investigation of the transitivity class $\mathcal{W}_0$}\label{sss:c3} ${}$ We now investigate the transitivity class $\mathcal{W}_0$ specified in the previous paragraph which attains the maximum in \eqref{eq.max.trans.class}. We claim that the fact that the potential $\Phi^{\mathcal{W}_0}$ has a Bernoulli measure as its equilibrium state implies an additional relationship between the tuples $(W_j)_{j=1}^k$ which constitute the transitivity class $\mathcal{W}_0$. Specifically we claim that there exists $C_3>0$ such that for all $\mathtt{i}\in \Sigma_N^*$ such that $g_\mathtt{i} \in G^o$, \begin{equation}\label{eq:thank-you-reviewer-C}\Phi^{\mathcal{W}_0}(\mathtt{i}) \leq C_3\min_{(W_j)_{j=1}^k \in \mathcal{W}_0}\prod_{j=1}^k \left\|\pi_j(g_{\mathtt{i}})|_{W_j}\right\|^{\beta_j}.\end{equation} Before beginning the proof of the claim we make the following observation.
By the Gibbs inequality established previously, there exists $C_2>0$ such that for all $\mathtt{i} \in \Sigma_N^*$, \[C^{-1}_2 e^{|\mathtt{i}|P(\Phi)}\mu([\mathtt{i}])\leq \Phi^{\mathcal{W}_0}(\mathtt{i}) \leq C_2e^{|\mathtt{i}|P(\Phi)}\mu([\mathtt{i}]).\] If $\mathtt{i},\mathtt{j} \in \Sigma_N^*$ are arbitrary then we notice that $\mu([\mathtt{i}\mathtt{j}])=\mu([\mathtt{i}])\mu([\mathtt{j}])$ because $\mu$ is Bernoulli, and therefore \begin{align}\label{eq:bernoulli-1}\Phi^{\mathcal{W}_0}(\mathtt{i}\mathtt{j}) &\geq C^{-1}_2e^{|\mathtt{i}\mathtt{j}|P(\Phi)}\mu([\mathtt{i}\mathtt{j}])\\\nonumber & = C^{-1}_2e^{|\mathtt{i}|P(\Phi)}\mu([\mathtt{i}])e^{|\mathtt{j}|P(\Phi)}\mu([\mathtt{j}])\geq C^{-3}_2\Phi^{\mathcal{W}_0}(\mathtt{i})\Phi^{\mathcal{W}_0}(\mathtt{j}).\end{align} We will use this property to prove the claim. Let $r$ be the number of (Zariski) connected components of $G$. Since the semigroup $\{g_\mathtt{i} \colon \mathtt{i} \in \Sigma_N^*\}$ is Zariski dense in $G$, we may choose $\mathtt{j}_1,\ldots,\mathtt{j}_r \in \Sigma_N^*$ such that every connected component of $G$ contains precisely one of the elements $g_{\mathtt{j}_r}, g_{\mathtt{j}_{r-1}\mathtt{j}_{r}},\ldots,g_{\mathtt{j}_1\cdots \mathtt{j}_r}$ and therefore the sequence $g_{\mathtt{j}_r}G^o, g_{\mathtt{j}_{r-1}\mathtt{j}_{r}}G^o,\ldots, g_{\mathtt{j}_1 \cdots \mathtt{j}_{r}}G^o$ lists the components of $G$. It follows that if $(W_j)_{j=1}^k \in \mathcal{W}_0$ is arbitrary, then $(\pi_j(g_{\mathtt{j}_i\cdots \mathtt{j}_r})W_j)_{j=1}^k$ lists all of the elements of $\mathcal{W}_0$ (possibly with repetitions) as $i$ runs through $1,\ldots,r$.
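The only property of $\mu$ used in \eqref{eq:bernoulli-1} is the product rule $\mu([\mathtt{i}\mathtt{j}])=\mu([\mathtt{i}])\mu([\mathtt{j}])$ for cylinder sets of a Bernoulli measure. A purely illustrative sketch of this rule, with a toy probability vector of our own choosing:

```python
from math import prod

# Bernoulli measure on the full shift over the alphabet {1, 2, 3}:
# the measure of a cylinder [i] is the product of the letter probabilities.
p = {1: 0.2, 2: 0.5, 3: 0.3}

def mu(word):
    # word is a tuple of letters; concatenation of words is tuple addition
    return prod(p[a] for a in word)

i, j = (1, 3, 2), (2, 2)
# mu([ij]) = mu([i]) * mu([j]) for the concatenated word ij
assert abs(mu(i + j) - mu(i) * mu(j)) < 1e-15
print(mu(i + j))
```

The same computation with $\mathtt{i}=\mathtt{j}$ iterated gives $\mu([(\mathtt{i}\mathtt{j})^n])=\mu([\mathtt{i}^n])\mu([\mathtt{j}^n])$, the form used below.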
Now let $\mathtt{i} \in \Sigma_N^*$ be an arbitrary word such that $g_\mathtt{i} \in G^o$, and let $(W_j')_{j=1}^k \in \mathcal{W}_0$ such that \[\prod_{j=1}^k \left\|\pi_j(g_\mathtt{i})|_{W_j'}\right\|^{\beta_j}= \min_{(W_j)_{j=1}^k \in \mathcal{W}_0} \prod_{j=1}^k \left\|\pi_j(g_\mathtt{i})|_{W_j}\right\|^{\beta_j}.\] Observe that by definition there exists $(W_j)_{j=1}^k \in \mathcal{W}_0$ such that \[\Phi^{\mathcal{W}_0}(\mathtt{i}\mathtt{j}_1\mathtt{i} \mathtt{j}_2\mathtt{i} \cdots \mathtt{j}_{r-1}\mathtt{i}\mathtt{j}_r )= \prod_{j=1}^k \left\|\pi_j(g_{\mathtt{i}\mathtt{j}_1\mathtt{i} \mathtt{j}_2\mathtt{i} \cdots \mathtt{j}_{r-1}\mathtt{i}\mathtt{j}_r})|_{W_j}\right\|^{\beta_j}. \] Repeated application of \eqref{eq:bernoulli-1} yields \begin{equation} \label{eq:1st-est}\Phi^{\mathcal{W}_0}(\mathtt{i}\mathtt{j}_1\mathtt{i} \mathtt{j}_2\mathtt{i} \cdots \mathtt{j}_{r-1}\mathtt{i}\mathtt{j}_r )\geq C_2^{-3(2r-1)} \Phi^{\mathcal{W}_0}(\mathtt{i})^r \left(\prod_{t=1}^r \Phi^{\mathcal{W}_0}(\mathtt{j}_t)\right) \geq \tau \Phi^{\mathcal{W}_0}(\mathtt{i})^r,\end{equation} say, where $\tau>0$ is independent of $\mathtt{i}$. In the other direction we obtain \begin{align*}\Phi^{\mathcal{W}_0}(\mathtt{i}\mathtt{j}_1\mathtt{i} \mathtt{j}_2\mathtt{i} \cdots \mathtt{j}_{r-1}\mathtt{i}\mathtt{j}_r )&= \prod_{j=1}^k \left\|\pi_j(g_{\mathtt{i}\mathtt{j}_1\mathtt{i} \mathtt{j}_2\mathtt{i} \cdots \mathtt{j}_{r-1}\mathtt{i}\mathtt{j}_r})|_{W_j}\right\|^{\beta_j} \\ &\leq \left(\prod_{t=1}^r \prod_{j=1}^k \left\|\pi_j(g_{\mathtt{i} \mathtt{j}_t})|_{\pi_j(g_{\mathtt{i} \mathtt{j}_{t+1}\cdots \mathtt{i}\mathtt{j}_r}) W_j}\right\|^{\beta_j}\right) \\ &= \left(\prod_{t=1}^r \prod_{j=1}^k \left\|\pi_j(g_{\mathtt{i} \mathtt{j}_t})|_{\pi_j(g_{\mathtt{j}_{t+1}\cdots \mathtt{j}_r}) W_j}\right\|^{\beta_j}\right)\end{align*} where we have used the fact that $(\pi_j(g_\mathtt{i})W_j)_{j=1}^k = (W_j)_{j=1}^k$ for every $(W_j)_{j=1}^k \in \mathcal{W}_0$ since $g_\mathtt{i} \in G^o$. 
This is clearly bounded by \[\left(\prod_{t=1}^r\prod_{j=1}^k \left\|\pi_j(g_{\mathtt{j}_t})|_{\pi_j(g_{\mathtt{j}_{t+1}\cdots \mathtt{j}_r}) W_j}\right\|^{\beta_j}\right)\left(\prod_{t=1}^r \prod_{j=1}^k \left\|\pi_j(g_{\mathtt{i}})|_{\pi_j(g_{\mathtt{j}_{t}\cdots \mathtt{j}_r}) W_j}\right\|^{\beta_j}\right)\] and hence by \[K\left(\prod_{t=1}^r \prod_{j=1}^k \left\|\pi_j(g_{\mathtt{i}})|_{\pi_j(g_{\mathtt{j}_{t}\cdots \mathtt{j}_r}) W_j}\right\|^{\beta_j}\right) \] where $K:=\prod_{t=1}^r \Phi^{\mathcal{W}_0}(\mathtt{j}_t)$, which clearly does not depend on $\mathtt{i}$. Thus \[\Phi^{\mathcal{W}_0}(\mathtt{i}\mathtt{j}_1\mathtt{i} \mathtt{j}_2\mathtt{i} \cdots \mathtt{j}_{r-1}\mathtt{i}\mathtt{j}_r )\leq K \prod_{t=1}^r \prod_{j=1}^k \left\|\pi_j(g_{\mathtt{i}})|_{\pi_j(g_{\mathtt{j}_{t}\cdots \mathtt{j}_r}) W_j}\right\|^{\beta_j}.\] But this in turn is clearly bounded by \[K \left(\prod_{j=1}^k \left\|\pi_j(g_{\mathtt{i}})|_{W_j'}\right\|^{\beta_j}\right)\left(\max_{(W_j'')_{j=1}^k \in \mathcal{W}_0} \prod_{j=1}^k \left\|\pi_j(g_{\mathtt{i}})|_{W_j''}\right\|^{\beta_j}\right)^{r-1}\] because as $t$ ranges from $1$ to $r$ the tuple $(\pi_j(g_{\mathtt{j}_t\cdots \mathtt{j}_r})W_j)_{j=1}^k$ ranges over all of the elements of $\mathcal{W}_0$ and in particular is equal to $(W_j')_{j=1}^k$ for at least one value of $t$. 
Thus \begin{equation}\label{eq:2nd-est}\Phi^{\mathcal{W}_0}(\mathtt{i}\mathtt{j}_1\mathtt{i} \mathtt{j}_2\mathtt{i} \cdots \mathtt{j}_{r-1}\mathtt{i}\mathtt{j}_r )\leq K \left(\prod_{j=1}^k \left\|\pi_j(g_{\mathtt{i}})|_{W_j'}\right\|^{\beta_j}\right)\Phi^{\mathcal{W}_0}(\mathtt{i})^{r-1}.\end{equation} Combining \eqref{eq:1st-est} and \eqref{eq:2nd-est} yields \[\tau \Phi^{\mathcal{W}_0}(\mathtt{i})^r \leq K \left(\prod_{j=1}^k \left\|\pi_j(g_{\mathtt{i}})|_{W_j'}\right\|^{\beta_j}\right)\Phi^{\mathcal{W}_0}(\mathtt{i})^{r-1}\] where $K,\tau>0$ do not depend on $\mathtt{i}$, and dividing by $\tau\Phi^{\mathcal{W}_0}(\mathtt{i})^{r-1}$ proves the claim. \subsubsection{A multiplicativity property on a dense subsemigroup of the identity component}\label{subsub.get.multiplicative} We now claim that for every $\mathtt{i},\mathtt{j} \in \Sigma_N^*$ such that $g_\mathtt{i},g_\mathtt{j}\in G^o$ and every $(W_j)_{j=1}^k \in \mathcal{W}_0$, we have \begin{equation}\label{eq:mult-identity}\prod_{j=1}^k \rho(\pi_j(g_\mathtt{i} g_\mathtt{j})|_{W_j})^{\beta_j} = \left(\prod_{j=1}^k \rho(\pi_j(g_\mathtt{i})|_{W_j})^{\beta_j} \right)\left(\prod_{j=1}^k \rho(\pi_j(g_\mathtt{j})|_{W_j})^{\beta_j} \right)\end{equation} where $\rho(B)$ denotes the spectral radius of the linear map $B$. Fix words $\mathtt{i}$ and $\mathtt{j}$ such that $g_\mathtt{i},g_\mathtt{j}\in G^o$, and fix $(W_j)_{j=1}^k \in \mathcal{W}_0$. We observe that $\pi_j(g_\mathtt{i})W_j=W_j$ and $\pi_j(g_\mathtt{j})W_j=W_j$ for all $j=1,\ldots,k$. 
Using the fact that $\mu$ is a Bernoulli measure we have $\mu([(\mathtt{i}\mathtt{j})^n])=\mu([\mathtt{i}])^n\mu([\mathtt{j}])^n=\mu([\mathtt{i}^n])\mu([\mathtt{j}^n])$ for every $n \geq 1$, so by the Gibbs inequality \begin{align*}\Phi^{\mathcal{W}_0}(\mathtt{i}^n)\Phi^{\mathcal{W}_0}(\mathtt{j}^n)&\leq C^2_2 e^{n(|\mathtt{i}|+|\mathtt{j}|)P(\Phi)} \mu([\mathtt{i}^n])\mu([\mathtt{j}^n]) \\ &=C^2_2 e^{n(|\mathtt{i}|+|\mathtt{j}|)P(\Phi)} \mu([(\mathtt{i}\mathtt{j})^n])\\ &\leq C^3_2 \Phi^{\mathcal{W}_0}((\mathtt{i}\mathtt{j})^n)\end{align*} and similarly \[ \Phi^{\mathcal{W}_0}((\mathtt{i}\mathtt{j})^n) \leq C^3_2\Phi^{\mathcal{W}_0}(\mathtt{i}^n)\Phi^{\mathcal{W}_0}(\mathtt{j}^n).\] We have \[ \prod_{j=1}^k \left\|\pi_j(g_{\mathtt{i}\mathtt{j}}^n)|_{W_j}\right\|^{\beta_j} \leq \Phi^{\mathcal{W}_0}((\mathtt{i}\mathtt{j})^n)\] by the definition of $\Phi^{\mathcal{W}_0}$, and since $g_{\mathtt{i}\mathtt{j}}^n \in G^o$ we have \[\Phi^{\mathcal{W}_0}((\mathtt{i}\mathtt{j})^n) \leq C_3\min_{(W_j')_{j=1}^k \in \mathcal{W}_0} \prod_{j=1}^k \left\|\pi_j(g_{\mathtt{i}\mathtt{j}}^n)|_{W_j'}\right\|^{\beta_j}\leq C_3\prod_{j=1}^k \left\|\pi_j(g_{\mathtt{i}\mathtt{j}}^n)|_{W_j}\right\|^{\beta_j}\] by the previous claim.
Likewise \[ \prod_{j=1}^k \left\|\pi_j(g_{\mathtt{i}}^n)|_{W_j}\right\|^{\beta_j} \leq \Phi^{\mathcal{W}_0}(\mathtt{i}^n) \leq C_3 \prod_{j=1}^k \left\|\pi_j(g_{\mathtt{i}}^n)|_{W_j}\right\|^{\beta_j}\] and \[ \prod_{j=1}^k \left\|\pi_j(g_{\mathtt{j}}^n)|_{W_j}\right\|^{\beta_j} \leq \Phi^{\mathcal{W}_0}(\mathtt{j}^n) \leq C_3 \prod_{j=1}^k \left\|\pi_j(g_{\mathtt{j}}^n)|_{W_j}\right\|^{\beta_j}.\] Thus \begin{align*}\left(\prod_{j=1}^k \left\|\pi_j(g_{\mathtt{i}}^n)|_{W_j}\right\|^{\beta_j}\right)\left(\prod_{j=1}^k \left\|\pi_j(g_{\mathtt{j}}^n)|_{W_j}\right\|^{\beta_j}\right) &\leq \Phi^{\mathcal{W}_0}(\mathtt{i}^n)\Phi^{\mathcal{W}_0}(\mathtt{j}^n)\\ & \leq C^3_2 \Phi^{\mathcal{W}_0}((\mathtt{i}\mathtt{j})^n)\\ &\leq C^3_2 C_3\prod_{j=1}^k \left\|\pi_j(g_{\mathtt{i}\mathtt{j}}^n)|_{W_j}\right\|^{\beta_j} \end{align*} and similarly \begin{align*}\prod_{j=1}^k \left\|\pi_j(g_{\mathtt{i}\mathtt{j}}^n)|_{W_j}\right\|^{\beta_j} &\leq \Phi^{\mathcal{W}_0}((\mathtt{i}\mathtt{j})^n) \\&\leq C^3_2 \Phi^{\mathcal{W}_0}(\mathtt{i}^n)\Phi^{\mathcal{W}_0}(\mathtt{j}^n)\\ &\leq C^3_2 C_3^2 \left(\prod_{j=1}^k \left\|\pi_j(g_{\mathtt{i}}^n)|_{W_j}\right\|^{\beta_j}\right)\left(\prod_{j=1}^k \left\|\pi_j(g_{\mathtt{j}}^n)|_{W_j}\right\|^{\beta_j}\right).\end{align*} We have obtained \[C^{-3}_2 C_3^{-1} \leq \frac{\prod_{j=1}^k \left\|\pi_j(g_{\mathtt{i}\mathtt{j}}^n)|_{W_j}\right\|^{\beta_j} }{ \left(\prod_{j=1}^k \left\|\pi_j(g_{\mathtt{i}}^n)|_{W_j}\right\|^{\beta_j}\right)\left(\prod_{j=1}^k \left\|\pi_j(g_{\mathtt{j}}^n)|_{W_j}\right\|^{\beta_j}\right)} \leq C^3_2 C_3^2\] for all $n \geq 1$.
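The two-sided bounds above are uniform in $n$, and they are converted into an exact identity by taking $n$-th roots via Gelfand's formula $\rho(B)=\lim_{n\to\infty}\|B^n\|^{1/n}$. A minimal numerical sketch of Gelfand's formula, illustrative only: the matrix below is a toy example of our own, and the Frobenius norm is used in place of the operator norm, which does not affect the limit.

```python
import math

def matmul2(A, B):
    # Product of two 2x2 matrices as nested lists.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def fro(B):
    # Frobenius norm; any matrix norm yields the same Gelfand limit.
    return math.sqrt(sum(x * x for row in B for x in row))

# Upper-triangular toy matrix with eigenvalues 2 and 0.5, so rho(B) = 2.
B = [[2.0, 1.0], [0.0, 0.5]]
P = [[1.0, 0.0], [0.0, 1.0]]
n = 60
for _ in range(n):
    P = matmul2(P, B)  # P becomes B^n
gelfand = fro(P) ** (1.0 / n)
assert abs(gelfand - 2.0) < 0.05
print(round(gelfand, 3))
```

The convergence is geometric here because the constant prefactor (the analogue of $C_2^3C_3^2$ above) disappears under the $n$-th root, exactly as in the limit taken next.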
Taking the power $\frac{1}{n}$ and letting $n\to \infty$ we obtain by Gelfand's formula \[\frac{\prod_{j=1}^k \rho(\pi_j(g_\mathtt{i} g_\mathtt{j})|_{W_j})^{\beta_j}}{ \left(\prod_{j=1}^k \rho(\pi_j(g_\mathtt{i})|_{W_j})^{\beta_j} \right)\left(\prod_{j=1}^k \rho(\pi_j(g_\mathtt{j})|_{W_j})^{\beta_j} \right)}=1\] for all $\mathtt{i},\mathtt{j} \in \Sigma_N^*$ such that $g_\mathtt{i},g_\mathtt{j} \in G^o$, and this is precisely \eqref{eq:mult-identity}. \subsubsection{Application of the theorem of Benoist}\label{sss:ben} We now apply the work of Benoist to show that the identity \eqref{eq:mult-identity} severely restricts the possible structures of the groups $\{\pi_j(g)|_{W_j} \colon g \in G^o\}$ for $(W_j)_{j=1}^k \in \mathcal{W}_0$. Fix an arbitrary tuple $(W_j)_{j=1}^k \in \mathcal{W}_0$ and define \begin{equation}\label{eq.defn.xi} \xi(g):=\prod_{j=1}^k \rho(\pi_j(g)|_{W_j})^{\beta_j} \end{equation} for all $g \in G^o$. The identity \eqref{eq:mult-identity} asserts that $\xi(g_\mathtt{i} g_\mathtt{j})=\xi(g_\mathtt{i})\xi(g_\mathtt{j})$ for all $\mathtt{i},\mathtt{j} \in \Sigma_N^*$ such that $g_\mathtt{i},g_\mathtt{j} \in G^o$. Recall that by construction (\S \ref{subsub.family.of.subspaces}), for each $j=1,\ldots,k$, the restriction of $\pi_j$ to the connected reductive group $G^o$ gives rise to an irreducible linear representation of $G^o$ on $W_j$. Denote this representation by $\hat{\pi}_j$. We will show that for each $j=1,\ldots,k$ the image $\hat{\pi}_j([G^o,G^o])$ is compact. If $[G^o,G^o]$ is itself compact then this result is trivial, so without loss of generality we assume that the semisimple group $[G^o,G^o]$ is non-compact. For each $j$ let $\hat{\chi}_j$ be the highest weight of $\hat{\pi}_j$ so that $\overline{\hat{\chi}}_j \in \mathfrak{a}^\ast$ where $\mathfrak{a}$ is a fixed Cartan subspace in the Lie algebra $\mathfrak{g}$ of $G$ (see \S \ref{subsub.cartanspace} and \S \ref{subsub.rep}). 
By Lemma \ref{lemma.weight.vs.eigenvalue}, $(\ref{eq.defn.xi})$ can be rewritten as \begin{equation}\label{eq.xi.to.chi} \log \xi(g)=\sum_{j=1}^k \beta_j \overline{\hat{\chi}}_j(\lambda(g)) \end{equation} where $\lambda$ is the Jordan projection on a fixed Weyl chamber $\mathfrak{a}^+$ in $\mathfrak{a}$ (\S \ref{subsub.Cartan.Jordan}). Denote by $\Gamma$ the semigroup in $G$ generated by $\{g_1,\ldots,g_N\}$ and by $\Gamma_o$ the intersection $G^o \cap \Gamma$. Since by hypothesis $\Gamma$ is Zariski dense in $G$, the semigroup $\Gamma_o$ is Zariski dense in $G^o$. Setting $\bar{\chi}:=\sum_{j=1}^k \beta_j \overline{\hat{\chi}}_j$, in view of $ (\ref{eq.defn.xi})$ and $(\ref{eq.xi.to.chi})$, the equation $(\ref{eq:mult-identity})$ implies that the set \[ \{\lambda(\gamma_1\gamma_2)-\lambda(\gamma_1)-\lambda(\gamma_2)\colon\gamma_1, \gamma_2 \in \Gamma_o\} \] is contained in the subspace $\ker \overline{\chi}$. Since the latter is closed, by Theorem \ref{thm.benoist.density} we deduce that the semisimple part $\mathfrak{a}_S$ of the Cartan space $\mathfrak{a}$ is contained in $\ker \overline{\chi}$. Furthermore, since for each $j=1,\ldots,k$, $\overline{\hat{\chi}}_j$ is a dominant weight (in particular, it takes non-negative values on the cone $\mathfrak{a}_S \cap \mathfrak{a}^+$) and $\beta_j>0$, this implies that for each $j=1,\ldots,k$, we have $\mathfrak{a}_S \subseteq \ker \overline{\hat{\chi}}_j$. Hence by Lemma \ref{lemma.weight.vs.eigenvalue} the spectral radius of every element of $\hat\pi_j([G^o,G^o])$ is $1$. The determinant of every element of $\hat\pi_j([G^o,G^o])$ is also $1$ as a direct consequence of the definition of $[G^o,G^o]$ (as closure of a group generated by elements of type $ghg^{-1}h^{-1}$), so every element of $\hat\pi_j([G^o,G^o])$ has every eigenvalue equal to $1$ in modulus. 
Since $[G^o,G^o]$ is semisimple it acts completely reducibly on $W_j$, so by applying Lemma \ref{le:irred.bdd} to each subspace in a decomposition of $W_j$ into invariant subspaces on which $[G^o,G^o]$ acts irreducibly, it follows that $\hat\pi_j([G^o,G^o])$ is a compact subgroup of $\GL(W_j)$ as required. On the other hand, since $\hat{\pi}_j$ is an irreducible representation (\S \ref{subsub.family.of.subspaces}), by Schur's lemma, $\mathbb{R} Z(\hat{\pi}_j(G^o)) \leq \End_{\mathbb{R} \hat{\pi}_j(G^o)}(W_j)$ is isomorphic to either $\mathbb{R}$ or $\mathbb{C}$ as a real division algebra. In the first case, $Z(\hat{\pi}_j(G^o))$ is contained in the group of homotheties $\simeq \mathbb{R}^\ast$ of $W_j$ and in the latter case it is contained in a copy of $\SO(2,\mathbb{R}) \times \mathbb{R}^\ast$ in $\GL(W_j)$. Finally we recall that the connected real reductive group $G^o$ is an almost direct product of its center $Z(G^o)$ and $[G^o,G^o]$ (\cite[Proposition 2.2]{borel-tits}), which is to say the map $Z(G^\circ) \times [G^o,G^o] \to G^o$ defined by $(z,g) \mapsto zg$ is surjective with finite kernel. We conclude that $\hat{\pi}_j(G^o)$ is contained in a compact subgroup of $\GL(W_j)$ modulo factoring out the absolute value of the determinant of each element, and therefore each of the groups $\hat{\pi}_j(G^o)$ is a group of linear similarity transformations of $W_j$ with respect to some Euclidean structure on $W_j$. Now recall that, for each $j=1,\ldots,k$, the finite group $G/G^o$ acts transitively on $\{U^i_j \colon i=1,\ldots,n_j\}$. Since for each $j=1,\ldots,k$ we have $W_j=U_j^i$ for some $i \in \{1,\ldots,n_j\}$, by transitivity of $G/G^o$, repeating the same argument above for every $(W_j)_{j=1}^k \in \mathcal{W}_0$, we deduce that up to conjugation in $\GL(V_j)$, $\pi_j(G^o)|_{U^i_j}$ is contained in the group of linear similarities of $U^i_j$ for every $i=1,\ldots,n_j$, for every $j=1,\ldots,k$. 
In particular, passing to a matrix representation by a convenient choice of bases for the $U^{i_\ell}_j$'s for $\ell=1,\ldots,r_j$ and $j=1,\ldots,k$, $\pi_j(G^o)$ is contained in the group of block diagonal matrices of the form \begin{equation}\label{eq.matrix.form} \begin{bmatrix} \gamma_1 O_1 & 0 & \dots & 0 \\ 0 & \gamma_2 O_2 & \ddots & \vdots \\ \vdots & & \ddots & 0\\ 0 & \dots & 0 & \gamma_{r_j} O_{r_j} \end{bmatrix} \end{equation} where the $\gamma_i$'s are scalars in $\mathbb{R}^\ast_+$ and the $O_i$'s are $\ell_j \times \ell_j$ orthogonal matrices. We have completed the first of the two parts of the proof as described in \S \ref{subsub.comments}. \subsubsection{The identity of the scalars.} In the second part of the proof we wish to show that for every $g \in G^o$, in the matrix representation \eqref{eq.matrix.form} we have $\gamma_1=\cdots =\gamma_{r_j}$. Since each $\gamma_i$ is obviously equal to $|\det (\gamma_i O_i)|^{1/\ell_j}$, the idea is to show that for each $g \in G^o$ and $j=1,\ldots,k$ the quantity $|\det (\pi_j(g)|_{U_j^i})|^{1/\ell_j}$ is independent of $i$. Since $V_j$ can be written as a direct sum of a sub-collection of spaces $U_j^{i_1},\ldots,U_j^{i_{r_j}}$, this in turn is clearly equivalent to the identity \begin{equation}\label{eq:dets-same}\left|\det \left(\pi_j(g)|_{U_j^i}\right)\right|^{\frac{1}{\ell_j}} = |\det \pi_j(g)|^{\frac{1}{d_j}}\end{equation} for every $i=1,\ldots,n_j$ and $j=1,\ldots,k$, which is what we shall show. It will then be a straightforward matter to conclude the theorem. We therefore undertake to prove \eqref{eq:dets-same}. To establish this equality we must use the fact that $\Phi^{\mathcal{W}_0}$ has the greatest pressure of any $\Phi^{\mathcal{W}}$, a fact which we have not substantially used before.
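The observation that the scalar $\gamma_i$ is recovered from a determinant, $\gamma_i = |\det(\gamma_i O_i)|^{1/\ell_j}$, is the mechanism driving everything that follows. A purely illustrative numerical sketch with $\ell_j = 2$ (the scaled rotation blocks below are a toy example of our own, not data from the proof):

```python
import math

def rotation(theta):
    # A 2x2 rotation matrix, an orthogonal matrix of determinant 1.
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def scale(gamma, M):
    return [[gamma * x for x in row] for row in M]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# A 2x2 similarity block gamma * O has determinant gamma^2 * det(O),
# so the similarity ratio is recovered as |det|^(1/ell) with ell = 2.
for gamma in (0.5, 1.0, 3.0):
    for theta in (0.0, 0.7, 2.0):
        block = scale(gamma, rotation(theta))
        assert abs(abs(det2(block)) ** 0.5 - gamma) < 1e-12
print("similarity ratio recovered from |det|^(1/2)")
```

The same computation works for reflections ($\det O = -1$) since only the absolute value of the determinant is used.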
The key fact which we shall ultimately demonstrate is that there exists $C>0$ such that $C^{-1}\Phi^{\mathcal{W}_0}(\mathtt{i}) \leq \Phi^{\mathcal{W}}(\mathtt{i}) \leq C\Phi^{\mathcal{W}_0}(\mathtt{i})$ for every $\mathtt{i} \in \Sigma_N^*$ such that $g_\mathtt{i} \in G^o$, for every transitivity class $\mathcal{W}$. \subsubsection{A first identity involving determinants}\label{sss:first-id} Fix $j \in \{1,\ldots,k\}$. If we knew that the number $n_j$ of spaces $U_j^i$ was equal to exactly $d_j/\ell_j$ then we would have $V_j = \bigoplus_{i=1}^{n_j} U_j^i$ and the identity \begin{equation}\label{eq:dets-ad-mortem}\left(\prod_{i=1}^{n_j}\left|\det \left(\pi_j(g)|_{U_j^i}\right)\right|^{\frac{1}{\ell_j}} \right)^{\frac{1}{n_j}} =|\det \pi_j(g)|^{\frac{1}{d_j}}\end{equation} would be obvious. However, in general we do not necessarily have $n_j=d_j/\ell_j$. Our first task will be to show that the above identity remains true even when $n_j>d_j/\ell_j$ and the spaces $U_j^1,\ldots,U_j^{n_j}$ do not form a direct sum. The proof of this equality is conducted by exploring the combinatorial relationships between the similarity ratios $\gamma_i(g):=|\det (\pi_j(g)|_{U_j^i})|^{1/\ell_j}$ and subspaces $U_j^i$. The fundamental task will be to show that the list of spaces $U_j^1,\ldots,U_j^{n_j}$ may be partitioned into equal-sized classes in such a way that every $g \in G^o$ has constant similarity ratio on each class, and such that the spans of the classes form a direct sum. For $i=1,\ldots,n_j$ and $g \in G^o$, let $\gamma_i(g) :=|\det (\pi_j(g)|_{U_j^i})|^{1/\ell_j}\in \mathbb{R}^\ast_+$ denote the similarity ratio of $\pi_j(g)|_{U^{i}_j}$. Define an equivalence relation $\sim$ on $\{1,\ldots,n_j\}$ by writing $i_1 \sim i_2$ if and only if $\gamma_{i_1}(g)=\gamma_{i_2}(g)$ for all $g \in G^o$. Let $\mathsf{x}_1,\ldots,\mathsf{x}_p$ denote the equivalence classes under $\sim$. 
There is a natural action of $G/G^o$ on $\{1,\ldots,n_j\}$ which takes the pair $([g],i)$ to the unique integer $i'$ such that $\pi_j(g)U_j^i=U_j^{i'}$, and this action is obviously transitive since $G/G^o$ acts transitively on the spaces $U_j^1,\ldots,U_j^{n_j}$. For distinct $i_1$ and $i_2$ and arbitrary $g \in G^o$ and $h \in G$ it is not difficult to see that $\pi_j(g)$ has distinct similarity ratios on $U_j^{i_1}$ and $U_j^{i_2}$ if and only if $\pi_j(hgh^{-1})$ has distinct similarity ratios on $\pi_j(h)U_j^{i_1}$ and $\pi_j(h)U_j^{i_2}$, so the action on $\{1,\ldots,n_j\}$ respects the equivalence relation $\sim$ and in particular has the effect of inducing a permutation of the equivalence classes $\mathsf{x}_1,\ldots,\mathsf{x}_p$. The transitivity of the action of $G/G^o$ on $\{1,\ldots,n_j\}$ easily implies that this action of $G/G^o$ on the set of equivalence classes is transitive. It follows in particular that the equivalence classes must all have the same cardinality: we have $\#\mathsf{x}_t=n_j/p$ for every $t=1,\ldots,p$. For each equivalence class $\mathsf{x}_t$ define $X_t$ to be the span of the union of all the subspaces $U_j^i$ such that $i \in \mathsf{x}_t$. Arguing as in the second paragraph of \S\ref{subsub.family.of.subspaces} we note that every $X_t$ must be equal to a direct sum $U_j^{i_1}\oplus \cdots \oplus U_j^{i_q}$ for some suitable choice of indices $i_1,\ldots,i_q \in \mathsf{x}_t$ and for some integer $q \geq 1$ which \emph{a priori} might depend on $t$. (To see this, consider a direct sum $U_j^{i_1}\oplus \cdots \oplus U_j^{i_q}\subseteq X_t$ with $i_1,\ldots,i_q \in \mathsf{x}_t$ which is maximal in the sense that it cannot be extended by a further direct summand $U_j^{i_{q+1}}$ such that $i_{q+1} \in \mathsf{x}_t$. If every $U^i_j$ satisfying $i \in \mathsf{x}_t$ is a subspace of this direct sum then the direct sum equals $X_t$ as required. 
Otherwise, there exists $U^i_j$ satisfying $i \in \mathsf{x}_t$ which neither is a subspace of $U_j^{i_1}\oplus \cdots \oplus U_j^{i_q}$ nor forms a direct sum with it, in which case the intersection $(U_j^{i_1}\oplus \cdots \oplus U_j^{i_q}) \cap U_j^i$ is nonzero, has finite orbit under the action of $\pi_j(G)$, and has dimension smaller than $\ell_j$, contradicting the definition of $\ell_j$. We conclude that any such maximal direct sum yields a decomposition of $X_t$ with the claimed properties.) Now, as a consequence of the result shown in \S\ref{sss:ben}, every $U^i_j$ admits an inner product structure with respect to which every $g \in G^o$ acts on $U^i_j$ as a similarity transformation. Combined with the existence of the aforementioned direct sums this implies that for every $t=1,\ldots,p$ there exists an inner product structure on $X_t$ with respect to which every $g \in G^o$ acts on $X_t$ as a similarity transformation. For distinct $t_1,t_2$ in the range $1,\ldots,p$, by the definition of $\sim$ there exists $g \in G^o$ such that $\pi_j(g)$ has different similarity ratios on $X_{t_1}$ and on $X_{t_2}$, and this implies that necessarily $X_{t_1} \cap X_{t_2}=\{0\}$. We conclude that the spaces $X_1,\ldots,X_p$ form a direct sum, which is equal to the span of the spaces $U_j^1,\ldots,U_j^{n_j}$ and hence is equal to $V_j$. Since $G/G^o$ transitively permutes the set of equivalence classes $\mathsf{x}_1,\ldots,\mathsf{x}_p$ it follows that the action $([g],X_t) \mapsto \pi_j(g)X_t$ transitively permutes the spaces $X_1,\ldots,X_p$. These spaces are therefore pairwise isomorphic, so $\dim X_t$ is independent of $t$ and therefore $\dim X_t=d_j/p$ for every $t=1,\ldots,p$. We may now prove \eqref{eq:dets-ad-mortem}.
We observe that for every $g \in G^o$ and $t \in \{1,\ldots,p\}$ \[\left|\det \left(\pi_j(g)|_{X_{t}}\right)\right|^{\frac{1}{\dim X_{t}}} = \left(\prod_{i \in \mathsf{x}_{t}} \left|\det \left(\pi_j(g)|_{U_j^i}\right)\right|^{\frac{1}{\ell_j}}\right)^{\frac{1}{\#\mathsf{x}_{t} }} \] because the term on the left is the similarity ratio of $\pi_j(g)$ on $X_{t}$, which is also the similarity ratio of $\pi_j(g)$ on $U_j^i$ for every $i \in \mathsf{x}_t$. This is to say \[\left|\det \left(\pi_j(g)|_{X_t}\right)\right|^{\frac{p}{d_j}} = \left(\prod_{i \in \mathsf{x}_{t}} \left|\det \left(\pi_j(g)|_{U_j^i}\right)\right|^{\frac{1}{\ell_j}}\right)^{\frac{p}{n_j}} \] for every $t=1,\ldots,p$. Since $V_j=\bigoplus_{t=1}^p X_t$, we also have \[\prod_{t=1}^p \det \left(\pi_j(g)|_{X_t}\right) = \det \pi_j(g).\] Hence \begin{align*} |\det \pi_j(g)| &= \prod_{t=1}^p \left|\det \left(\pi_j(g)|_{X_t}\right)\right|\\ & = \prod_{t=1}^p\left(\prod_{i \in \mathsf{x}_{t}} \left|\det \left(\pi_j(g)|_{U_j^i}\right)\right|^{\frac{1}{\ell_j}}\right)^{\frac{d_j}{n_j}} =\left(\prod_{i=1}^{n_j} \left|\det \left(\pi_j(g)|_{U_j^i}\right)\right|^{\frac{1}{\ell_j}}\right)^{\frac{d_j}{n_j}} \end{align*} and this is precisely \eqref{eq:dets-ad-mortem}. \subsubsection{A second identity involving determinants} Here, we will apply \eqref{eq:dets-ad-mortem} to derive a further identity: we claim that for all $g \in G^o$ and $\mathcal{W} \in \mathscr{W}$ \begin{equation}\label{eq:dets-again}\left(\prod_{(W_j)_{j=1}^k \in \mathcal{W}} \prod_{j=1}^k \left|\det \left(\pi_j(g)|_{W_j}\right)\right|^{\frac{\beta_j}{\ell_j}} \right)^{\frac{1}{\#\mathcal{W}}} = \prod_{j=1}^k|\det \pi_j(g)|^{\frac{\beta_j}{d_j}}.\end{equation} To see this fix $g \in G^o$, let $\mathcal{W}$ be a transitivity class and let $(W_j')_{j=1}^k \in \mathcal{W}$ be arbitrary. 
We note that the sets \[\left\{[h] \in G/G^o \colon (\pi_j(h)W_j')_{j=1}^k =(W_j)_{j=1}^k\right\}\] for distinct $(W_j)_{j=1}^k \in \mathcal{W}$ form a partition of $G/G^o$ into cosets, hence each has the same cardinality. We deduce that \[\left(\prod_{(W_j)_{j=1}^k \in \mathcal{W}} \prod_{j=1}^k \left|\det \left(\pi_j(g)|_{W_j}\right)\right|^{\frac{\beta_j}{\ell_j}}\right)^{\frac{1}{\#\mathcal{W}}}=\left(\prod_{[h]\in G/G^o} \prod_{j=1}^k \left|\det \left(\pi_j(g)|_{\pi_j(h)W_j'}\right)\right|^{\frac{\beta_j}{\ell_j}}\right)^{\frac{1}{\#G/G^o}}.\] It is therefore sufficient to show that for each $j=1,\ldots,k$, for every $i_0 \in \{1,\ldots,n_j\}$, \[\left(\prod_{[h]\in G/G^o} \left|\det \left(\pi_j(g)|_{\pi_j(h)U_j^{i_0}}\right)\right|^{\frac{1}{\ell_j}}\right)^{\frac{1}{\#G/G^o}} = |\det \pi_j(g)|^{\frac{1}{d_j}}.\] Fix such a $j$ and $i_0$. As before, the sets \[\left\{[h] \in G/G^o \colon \pi_j(h)U_j^{i_0}=U^i_j\right\}\] form a partition of $G/G^o$ into cosets and hence have equal cardinality, which implies that \[\left(\prod_{[h]\in G/G^o} \left|\det \left(\pi_j(g)|_{\pi_j(h)U_j^{i_0}}\right)\right|^{\frac{1}{\ell_j}}\right)^{\frac{1}{\#G/G^o}} = \left(\prod_{i=1}^{n_j} \left|\det \left(\pi_j(g)|_{U_j^i}\right)\right|^{\frac{1}{\ell_j}}\right)^{\frac{1}{n_j}}.\] By \eqref{eq:dets-ad-mortem} this last expression is equal to $|\det \pi_j(g)|^{1/d_j}$, so combining the identities obtained so far yields \eqref{eq:dets-again}. \subsubsection{Two inequalities between potentials}\label{subsub.two.ineq} Let us define a new potential by \[\Phi^{\det}(\mathtt{i}):=\prod_{j=1}^k |\det \pi_j(g_\mathtt{i})|^{\frac{\beta_j}{d_j}}\] for all $\mathtt{i} \in \Sigma_N^*$. We clearly have $\Phi^{\det}(\mathtt{i}\mathtt{j})=\Phi^{\det}(\mathtt{i})\Phi^{\det}(\mathtt{j})$ for all $\mathtt{i},\mathtt{j} \in \Sigma_N^*$. 
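Both determinant identities \eqref{eq:dets-ad-mortem} and \eqref{eq:dets-again} can be sanity-checked numerically in the model case where the subspaces are coordinate blocks and the group acts block diagonally. The following purely illustrative sketch uses our own toy parameters ($d_j=4$, $\ell_j=2$, $n_j=2$, similarity ratios $2$ and $3$) and verifies \eqref{eq:dets-ad-mortem}, for which both sides equal $\sqrt{6}$:

```python
import math

# Block diagonal map on R^4 with two 2x2 similarity blocks:
# gamma_1 = 2 times a rotation and gamma_2 = 3 times a rotation.
def rot(gamma, theta):
    return [[gamma * math.cos(theta), -gamma * math.sin(theta)],
            [gamma * math.sin(theta),  gamma * math.cos(theta)]]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

B1, B2 = rot(2.0, 0.4), rot(3.0, 1.1)
ell, d, n = 2, 4, 2
# Left-hand side: geometric mean over i of |det(B restricted to U^i)|^(1/ell).
lhs = (abs(det2(B1)) ** (1 / ell) * abs(det2(B2)) ** (1 / ell)) ** (1 / n)
# Right-hand side: |det B|^(1/d), where det B = det(B1) * det(B2)
# because B is block diagonal.
rhs = abs(det2(B1) * det2(B2)) ** (1 / d)
assert abs(lhs - rhs) < 1e-12
print(lhs)
```

The point of \S\ref{sss:first-id} is precisely that this identity survives even when the $U_j^i$ overlap and do not form a direct sum, which the block-diagonal toy case cannot exhibit.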
We aim to show that \begin{equation}\label{eq:pressures-same}P(\Phi)=P(\Phi^{\mathcal{W}})=P(\Phi^{\det})\end{equation} for all transitivity classes $\mathcal{W}$. In pursuit of \eqref{eq:pressures-same} we will prove two inequalities. We first claim that there exists $C_4>0$ such that for every transitivity class $\mathcal{W}$ we have $\Phi^{\det}(\mathtt{i})\leq C_4\Phi^{\mathcal{W}}(\mathtt{i}) $ for all $\mathtt{i} \in \Sigma_N^*$. We begin by considering the case where $\mathtt{i} \in \Sigma_N^*$ satisfies $g_\mathtt{i} \in G^o$. It follows easily from \eqref{eq:dets-again} that \begin{align}\label{eq:sixteen}\Phi^{\mathcal{W}}(\mathtt{i})&=\max_{(W_j)_{j=1}^k \in \mathcal{W}} \prod_{j=1}^k \left\|\pi_j(g_\mathtt{i})|_{W_j}\right\|^{\beta_j} \geq \max_{(W_j)_{j=1}^k \in \mathcal{W}} \prod_{j=1}^k \left|\det \left(\pi_j(g_\mathtt{i})|_{W_j}\right)\right|^{\frac{\beta_j}{\ell_j}} \\\nonumber &\geq \left(\prod_{(W_j)_{j=1}^k \in \mathcal{W}} \prod_{j=1}^k \left|\det \left(\pi_j(g_\mathtt{i})|_{W_j}\right)\right|^{\frac{\beta_j}{\ell_j}} \right)^{\frac{1}{\#\mathcal{W}}}\\\nonumber & = \prod_{j=1}^k|\det \pi_j(g_\mathtt{i})|^{\frac{\beta_j}{d_j}} = \Phi^{\det}(\mathtt{i})\end{align} for every transitivity class $\mathcal{W}$ and every $\mathtt{i} \in \Sigma_N^*$ such that $g_\mathtt{i} \in G^o$. Now observe that by the Zariski density of the semigroup $\{g_\mathtt{i} \colon \mathtt{i} \in \Sigma_N^*\}$ in $G$, we may choose $\mathtt{k}_1,\ldots,\mathtt{k}_r$ such that every connected component of $G$ contains one of the elements $g_{\mathtt{k}_t}$. Given $\mathtt{i} \in \Sigma_N^*$ observe that we can choose $t_0 \in \{1,\ldots,r\}$ such that $g_{\mathtt{i} \mathtt{k}_{t_0}} \in G^o$.
We have $\Phi^{\mathcal{W}}(\mathtt{i} \mathtt{k}_{t_0}) \leq \Phi^{\mathcal{W}}(\mathtt{i})\Phi^{\mathcal{W}}(\mathtt{k}_{t_0})$ and $\Phi^{\det}(\mathtt{i}\mathtt{k}_{t_0})=\Phi^{\det}(\mathtt{i})\Phi^{\det}(\mathtt{k}_{t_0})$, and therefore using \eqref{eq:sixteen} \[\frac{\Phi^{\det}(\mathtt{i})}{\Phi^{\mathcal{W}}(\mathtt{i})} \leq \left(\frac{\Phi^{\mathcal{W}}(\mathtt{k}_{t_0})}{\Phi^{\mathcal{W}}(\mathtt{i}\mathtt{k}_{t_0})}\right)\left(\frac{\Phi^{\det}(\mathtt{i}\mathtt{k}_{t_0})}{\Phi^{\det}(\mathtt{k}_{t_0})}\right) \leq \frac{\Phi^{\mathcal{W}}(\mathtt{k}_{t_0})}{\Phi^{\det}(\mathtt{k}_{t_0})}\leq C_4,\] say, where \[C_4:=\max_{\mathcal{W}\in\mathscr{W}} \max_{1 \leq t \leq r} \frac{\Phi^{\mathcal{W}}(\mathtt{k}_{t})}{\Phi^{\det}(\mathtt{k}_{t})}\] which proves the claim. We now establish our second inequality: we claim that there exists $C_5>0$ such that $\Phi^{\mathcal{W}_0}(\mathtt{i}) \leq C_5 \Phi^{\det}(\mathtt{i})$ for every $\mathtt{i} \in \Sigma_N^*$. By the inequality \eqref{eq:thank-you-reviewer-C} established in \S\ref{sss:c3} we have \[\Phi^{\mathcal{W}_0}(\mathtt{i}) \leq C_3 \prod_{j=1}^k \left\|\pi_j(g_\mathtt{i})|_{W_j}\right\|^{\beta_j} \] for some $C_3>0$ and every $g_\mathtt{i} \in G^o$ and $(W_j)_{j=1}^k\in\mathcal{W}_0$. It follows in particular that \[ \frac{\Phi^{\mathcal{W}_0}(\mathtt{i})}{\prod_{j=1}^k \left| \det \left(\pi_j(g_\mathtt{i})|_{W_j}\right)\right|^{\frac{\beta_j}{\ell_j}}} \leq \frac{C_3 \prod_{j=1}^k \left\|\pi_j(g_\mathtt{i})|_{W_j}\right\|^{\beta_j}}{\prod_{j=1}^k \left| \det \left(\pi_j(g_\mathtt{i})|_{W_j}\right)\right|^{\frac{\beta_j}{\ell_j}}}\] for all $g_\mathtt{i} \in G^o$ and $(W_j)_{j=1}^k\in\mathcal{W}_0$. 
Since for each $j$ \[\left\{\left| \det \left(\pi_j(g)|_{W_j}\right)\right|^{-\frac{1}{\ell_j}} \pi_j(g)|_{W_j}\colon g \in G^o\right\}\] is contained in a compact subset of $\GL(W_j)$, it follows that there exists $K>0$ such that \[ \frac{\Phi^{\mathcal{W}_0}(\mathtt{i})}{\prod_{j=1}^k \left| \det \left(\pi_j(g_\mathtt{i})|_{W_j}\right)\right|^{\frac{\beta_j}{\ell_j}}} \leq K\] for all $g_\mathtt{i} \in G^o$ and $(W_j)_{j=1}^k\in\mathcal{W}_0$. Taking the geometric mean over all $(W_j)_{j=1}^k\in\mathcal{W}_0$ for fixed $g_\mathtt{i}$ using \eqref{eq:dets-again} yields \[\frac{\Phi^{\mathcal{W}_0}(\mathtt{i})}{\Phi^{\det}(\mathtt{i})} \leq K\] for all $\mathtt{i} \in \Sigma_N^*$ such that $g_\mathtt{i} \in G^o$. We now extend to the case of general words $\mathtt{i}$. Fix $\mathtt{i} \in \Sigma_N^*$ and observe that we may choose $t_0 \in \{1,\ldots,r\}$ such that $g_{\mathtt{i}\mathtt{k}_{t_0}} \in G^o$. For some $(W_j)_{j=1}^k \in \mathcal{W}_0$ we have \[\Phi^{\mathcal{W}_0}(\mathtt{i}) = \prod_{j=1}^k \left\|\pi_j(g_\mathtt{i})|_{W_j}\right\|^{\beta_j}\] and therefore \begin{align*} \Phi^{\mathcal{W}_0}(\mathtt{i})&=\prod_{j=1}^k \left\|\pi_j(g_{\mathtt{i}} g_{\mathtt{k}_{t_0}} g_{\mathtt{k}_{t_0}}^{-1})|_{W_j}\right\|^{\beta_j}\leq \prod_{j=1}^k \left\|\pi_j(g_{\mathtt{i} \mathtt{k}_{t_0}})|_{\pi_j(g_{\mathtt{k}_{t_0}}^{-1})W_j}\right\|^{\beta_j}\prod_{j=1}^k \left\|\pi_j(g_{\mathtt{k}_{t_0}}^{-1})|_{W_j}\right\|^{\beta_j} \\ &\leq \left( \prod_{j=1}^k \left\|\pi_j(g_{\mathtt{i} \mathtt{k}_{t_0}})|_{\pi_j(g_{\mathtt{k}_{t_0}}^{-1})W_j}\right\|^{\beta_j} \right)\left(\max_{1 \leq t \leq r}\prod_{j=1}^k \left\|\pi_j(g_{\mathtt{k}_{t}}^{-1})|_{W_j}\right\|^{\beta_j} \right) \\ &\leq C \Phi^{\mathcal{W}_0}(\mathtt{i} \mathtt{k}_{t_0}) \leq KC\Phi^{\det}(\mathtt{i} \mathtt{k}_{t_0})\\ & \leq KC \left(\max_{1 \leq t\leq r} \Phi^{\det}(\mathtt{k}_t)\right)\Phi^{\det}(\mathtt{i})\leq C_5\Phi^{\det}(\mathtt{i}),\end{align*} where we took $C:=\max_{1 \leq t \leq 
r}\prod_{j=1}^k \left\|\pi_j(g_{\mathtt{k}_{t}}^{-1})|_{W_j}\right\|^{\beta_j}$ and $C_5:=KC\max_{1 \leq t\leq r} \Phi^{\det}(\mathtt{k}_t)$. This proves the claim. \subsubsection{The Gibbs property and a third inequality between potentials} The two inequalities just proved assert that for some $C>0$ \begin{equation}\label{eq:three-potentials}C^{-1} \Phi^{\mathcal{W}_0}(\mathtt{i}) \leq \Phi^{\det}(\mathtt{i}) \leq C\Phi^{\mathcal{W}}(\mathtt{i})\end{equation} for all $\mathtt{i} \in \Sigma_N^*$ and all transitivity classes $\mathcal{W}$. It follows directly that \[P(\Phi)=P(\Phi^{\mathcal{W}_0})\leq P(\Phi^{\det})\leq P(\Phi^{\mathcal{W}}) \leq P(\Phi) \] for all transitivity classes $\mathcal{W}$, and we have proved the identity \eqref{eq:pressures-same}: $P(\Phi)=P(\Phi^{\mathcal{W}})=P(\Phi^{\det})$ for all transitivity classes $\mathcal{W}$. We may now prove that $\mu$ is the equilibrium state of $\Phi^{\mathcal{W}}$ for \emph{every} transitivity class $\mathcal{W}$, and is also the equilibrium state of $\Phi^{\det}$. Indeed, for each transitivity class $\mathcal{W}$ the inequality \eqref{eq:three-potentials} yields \[\Lambda(\Phi^{\mathcal{W}_0},\mu)\leq \Lambda(\Phi^{\det},\mu) \leq \Lambda(\Phi^{\mathcal{W}},\mu)\] and therefore \begin{align*}P(\Phi) = P(\Phi^{\det}) = P(\Phi^{\mathcal{W}_0}) & = h(\mu)+\Lambda(\Phi^{\mathcal{W}_0},\mu) \leq h(\mu)+ \Lambda(\Phi^{\det},\mu)\\ & \leq h(\mu)+\Lambda(\Phi^{\mathcal{W}},\mu) \leq P(\Phi^{\mathcal{W}})= P(\Phi)\end{align*} so that \[P(\Phi^{\mathcal{W}}) = h(\mu)+\Lambda(\Phi^{\mathcal{W}},\mu) \qquad \text{and} \qquad P(\Phi^{\det}) = h(\mu)+\Lambda(\Phi^{\det},\mu)\] as required for $\mu$ to be an equilibrium state of $\Phi^{\mathcal{W}}$ and $\Phi^{\det}$ respectively. We now make further use of the Gibbs inequality. Each $\Phi^{\mathcal{W}}$ has a unique equilibrium state and satisfies the Gibbs inequality with respect to that equilibrium state, and the equilibrium state of each such potential is $\mu$. 
The same remarks apply to $\mu$ and the potential $\Phi^{\det}$. Therefore there exists $C_6>0$ such that \[C_6^{-1} \leq \frac{\Phi^{\mathcal{W}}(\mathtt{i})}{e^{-|\mathtt{i}|P(\Phi^{\mathcal{W}})} \mu([\mathtt{i}])} = \frac{\Phi^{\mathcal{W}}(\mathtt{i})}{e^{-|\mathtt{i}|P(\Phi)} \mu([\mathtt{i}])} \leq C_6\] for all $\mathtt{i} \in \Sigma_N^*$ and all transitivity classes $\mathcal{W}$, and also \[C_6^{-1} \leq \frac{\Phi^{\det}(\mathtt{i})}{e^{-|\mathtt{i}|P(\Phi^{\det})} \mu([\mathtt{i}])} = \frac{\Phi^{\det}(\mathtt{i})}{e^{-|\mathtt{i}|P(\Phi)} \mu([\mathtt{i}])} \leq C_6\] for all $\mathtt{i} \in\Sigma_N^*$. We deduce the inequality $\Phi^{\mathcal{W}}(\mathtt{i}) \leq C_6^2 \Phi^{\det}(\mathtt{i})$ for all $\mathtt{i} \in \Sigma_N^*$ and transitivity classes $\mathcal{W}$. \subsubsection{A final determinant identity} Let $\mathtt{i} \in \Sigma_N^*$ such that $g_\mathtt{i} \in G^o$. We have \begin{align*}&\max_{(W_j)_{j=1}^k \in \mathcal{W}} \prod_{j=1}^k \left|\det \left(\pi_j(g_\mathtt{i})|_{W_j}\right)\right|^{\frac{\beta_j}{\ell_j}} \leq \max_{(W_j)_{j=1}^k \in \mathcal{W}} \prod_{j=1}^k \left\|\pi_j(g_\mathtt{i})|_{W_j}\right\|^{\beta_j}\\ &=\Phi^{\mathcal{W}}(\mathtt{i})\leq C_6^2\Phi^{\det}(\mathtt{i})=C_6^2\prod_{j=1}^k |\det \pi_j(g_\mathtt{i})|^{\frac{\beta_j}{d_j}}\\ &=C_6^2\left(\prod_{(W_j)_{j=1}^k \in \mathcal{W}} \prod_{j=1}^k \left|\det \left(\pi_j(g_\mathtt{i})|_{W_j}\right)\right|^{\frac{\beta_j}{\ell_j}} \right)^{\frac{1}{\#\mathcal{W}}}\\ &\leq C_6^2\left(\min_{(W_j)_{j=1}^k \in \mathcal{W}} \prod_{j=1}^k \left|\det \left(\pi_j(g_\mathtt{i})|_{W_j}\right)\right|^{\frac{\beta_j}{\ell_j}} \right)^{\frac{1}{\#\mathcal{W}}}\\ &\qquad\cdot\left(\max_{(W_j)_{j=1}^k \in \mathcal{W}} \prod_{j=1}^k \left|\det \left(\pi_j(g_\mathtt{i})|_{W_j}\right)\right|^{\frac{\beta_j}{\ell_j}}\right)^{\frac{\#\mathcal{W}-1}{\#\mathcal{W}}} \end{align*} and we obtain \[\max_{(W_j)_{j=1}^k \in \mathcal{W}} \prod_{j=1}^k \left|\det 
\left(\pi_j(g_\mathtt{i})|_{W_j}\right)\right|^{\frac{\beta_j}{\ell_j}}\leq C_6^{2(\#\mathcal{W})} \min_{(W_j)_{j=1}^k \in \mathcal{W}} \prod_{j=1}^k \left|\det \left(\pi_j(g_\mathtt{i})|_{W_j}\right)\right|^{\frac{\beta_j}{\ell_j}}\] for all transitivity classes $\mathcal{W}$ and all $g_\mathtt{i} \in G^o$. It follows that if $(W_j')_{j=1}^k$ is any element of any transitivity class $\mathcal{W}$, then for every $g_\mathtt{i} \in G^o$ \begin{align*}\prod_{j=1}^k \left|\det \left(\pi_j(g_\mathtt{i})|_{W_j'}\right)\right|^{\frac{\beta_j}{\ell_j}}& \geq \min_{(W_j)_{j=1}^k \in \mathcal{W}} \prod_{j=1}^k \left|\det \left(\pi_j(g_\mathtt{i})|_{W_j}\right)\right|^{\frac{\beta_j}{\ell_j}}\\ &\geq C_6^{-2(\#\mathcal{W})} \max_{(W_j)_{j=1}^k \in \mathcal{W}} \prod_{j=1}^k \left|\det \left(\pi_j(g_\mathtt{i})|_{W_j}\right)\right|^{\frac{\beta_j}{\ell_j}}\\ &\geq C_6^{-2(\#\mathcal{W})} \left(\prod_{(W_j)_{j=1}^k \in \mathcal{W}} \left(\prod_{j=1}^k \left|\det \left(\pi_j(g_\mathtt{i})|_{W_j}\right)\right|^{\frac{\beta_j}{\ell_j}}\right)\right)^{\frac{1}{\#\mathcal{W}}}\\ &=C_6^{-2(\#\mathcal{W})}\prod_{j=1}^k |\det \pi_j(g_\mathtt{i})|^{\frac{\beta_j}{d_j}}\end{align*} where we have used \eqref{eq:dets-again} again, and from the preceding chain of inequalities \[\prod_{j=1}^k \left|\det \left(\pi_j(g_\mathtt{i})|_{W_j'}\right)\right|^{\frac{\beta_j}{\ell_j}}\leq \max_{(W_j)_{j=1}^k \in \mathcal{W}} \prod_{j=1}^k \left|\det \left(\pi_j(g_\mathtt{i})|_{W_j}\right)\right|^{\frac{\beta_j}{\ell_j}} \leq C_6^2\prod_{j=1}^k |\det \pi_j(g_\mathtt{i})|^{\frac{\beta_j}{d_j}}.\] We have found that if $\mathtt{i} \in \Sigma_N^*$ is such that $g_\mathtt{i} \in G^o$, $\mathcal{W}$ is any transitivity class and $(W_j)_{j=1}^k$ is any element of $\mathcal{W}$, then \[C_6^{-2(\#\mathcal{W})}\prod_{j=1}^k |\det \pi_j(g_\mathtt{i})|^{\frac{\beta_j}{d_j}} \leq \prod_{j=1}^k \left|\det \left(\pi_j(g_\mathtt{i})|_{W_j}\right)\right|^{\frac{\beta_j}{\ell_j}}\leq C_6^2\prod_{j=1}^k |\det 
\pi_j(g_\mathtt{i})|^{\frac{\beta_j}{d_j}}.\] Applying this estimate to $g_{\mathtt{i}^n}=g_\mathtt{i}^n$ in place of $g_\mathtt{i}$, taking the power $\frac{1}{n}$ and letting $n \to \infty$ yields \begin{equation}\label{eq:dets-same-2}\prod_{j=1}^k |\det \pi_j(g_\mathtt{i})|^{\frac{\beta_j}{d_j}} = \prod_{j=1}^k \left|\det \left(\pi_j(g_\mathtt{i})|_{W_j}\right)\right|^{\frac{\beta_j}{\ell_j}}\end{equation} for every $g_\mathtt{i} \in G^o$ and every $(W_j)$ in any transitivity class. \subsubsection{Conclusion of the proof} The equation \eqref{eq:dets-same-2} suffices to yield \eqref{eq:dets-same}. Fix $j_0 \in \{1,\ldots,k\}$ and $1 \leq i_1,i_2 \leq n_{j_0}$. Let $W_{j_0}:=U_{j_0}^{i_1}$ and $W_{j_0}':=U_{j_0}^{i_2}$, and for $j \neq j_0$, set $W_j:=U_j^1$ and $W_j':=U_j^1$. Applying \eqref{eq:dets-same-2} gives \[\frac{\left|\det\left(\pi_{j_0}(g_\mathtt{i})|_{U_{j_0}^{i_1}}\right)\right|^{\frac{\beta_{j_0}}{\ell_{j_0}}}}{\left|\det\left(\pi_{j_0}(g_\mathtt{i})|_{U_{j_0}^{i_2}}\right)\right|^{\frac{\beta_{j_0}}{\ell_{j_0}}}} = \frac{\prod_{j=1}^k \left|\det \left(\pi_j(g_\mathtt{i})|_{W_j}\right)\right|^{\frac{\beta_j}{\ell_j}}}{\prod_{j=1}^k \left|\det \left(\pi_j(g_\mathtt{i})|_{W_j'}\right)\right|^{\frac{\beta_j}{\ell_j}}} =\frac{\prod_{j=1}^k |\det \pi_j(g_\mathtt{i})|^{\frac{\beta_j}{d_j}} }{\prod_{j=1}^k |\det \pi_j(g_\mathtt{i})|^{\frac{\beta_j}{d_j}} } =1\] for every $g_\mathtt{i} \in G^o$. Hence for every $g_\mathtt{i} \in G^o$ and every $j \in \{1,\ldots,k\}$, \[\left|\det\left(\pi_j(g_\mathtt{i})|_{U_j^{i}}\right)\right|^{\frac{1}{\ell_j}}\] is independent of $i \in \{1,\ldots,n_j\}$ and in particular must be equal to its geometric mean with respect to $i \in \{1,\ldots,n_j\}$, which by \eqref{eq:dets-ad-mortem} is $\left|\det \pi_j(g_\mathtt{i})\right|^{1/d_j}$. This establishes \eqref{eq:dets-same} which in turn allows us to readily conclude. 
Indeed, together with \eqref{eq.matrix.form}, it implies that for every $g \in G^o$ and $j\in \{1,\ldots,k\}$, $\pi_j(g)=|\det(\pi_j(g))|^\frac{1}{d_j}O_j(g)$ where $O_j(g) \in O(V_j)$ for some Euclidean structure on $V_j$ not depending on $g$. Therefore \[\left\{\left| \det \pi_j(g)\right|^{-\frac{1}{d_j}} \pi_j(g)\colon g \in G^o\right\}\] is a compact subgroup of $\GL(V_j)$ and since the index $[G:G^o]$ is finite, the same is true of \[\left\{\left| \det \pi_j(g)\right|^{-\frac{1}{d_j}} \pi_j(g)\colon g \in G\right\}.\] The proof is complete. \section{Proof of Theorem \ref{th:main-tech}}\label{se:gen-case} Let $(A_1,\ldots,A_N) \in \GL_d(\mathbb{R})^N$ be irreducible and let $\alpha_1 \geq \cdots \geq \alpha_d\geq 0$ with $\alpha_1>\alpha_d$. Let $G\leq \GL_d(\mathbb{R})$ denote the Zariski closure of the subsemigroup of $\GL_d(\mathbb{R})$ generated by $A_1,\ldots,A_N$; it is a real reductive group (\S \ref{subsub.reductive.irred}). Define $\alpha_{d+1}:=0$ and let $k_1,\ldots,k_r$ be the list of all integers $i \in \{1,\ldots,d\}$ for which the difference $\alpha_i-\alpha_{i+1}$ is positive, where $k_1<\cdots<k_r$. We observe that since $\alpha_1>\alpha_d$ we have $r \neq 0$ and also $k_1<d$. Define $\beta_j:=\alpha_{k_j}-\alpha_{1+k_{j}}>0$ for each $j=1,\ldots,r$, and for each $j=1,\ldots,r$ let $\pi_j \colon G \to \GL(\wedge^{k_j}\mathbb{R}^d)$ denote the exterior power representation $\pi_j(g):=g^{\wedge k_j}$. 
We have \[\prod_{j=1}^d \sigma_j(g)^{\alpha_j} =\prod_{j=1}^d \left(\prod_{i=1}^j \sigma_i(g)\right)^{\alpha_j-\alpha_{j+1}}=\prod_{j=1}^d \left\|g^{\wedge j}\right\|^{\alpha_j-\alpha_{j+1}}= \prod_{j=1}^r \left\|\pi_j(g)\right\|^{\beta_j}\] for every $g \in G$, and in particular the potential $\Phi$ defined in the statement of the theorem satisfies the description \[\Phi(\mathtt{i})=\prod_{j=1}^r \left\|\pi_j(A_\mathtt{i})\right\|^{\beta_j}.\] Since the representations $\pi_j \colon G\to \GL(\wedge^{k_j}\mathbb{R}^d)$ are not in general irreducible, Theorem \ref{th:irreducible-case} is not directly applicable to the potential $\Phi$. We will study $\Phi$ by writing it as the maximum of a finite collection of simpler potentials to which Theorem \ref{th:irreducible-case} may be applied. Since $G$ is reductive, the rational representations $\pi_j$ are completely reducible (\S \ref{subsub.reductive.irred}); in other words, for each $j=1,\ldots,r$ we may write $\wedge^{k_j}\mathbb{R}^d=V_1^j \oplus \cdots \oplus V_{n_j}^j$ where each $V_i^j$ is an invariant subspace of the group $\pi_j(G)$ on which $\pi_j(G)$ acts irreducibly. For each $j=1,\ldots,r$ and $1 \leq \ell \leq n_j$ define an irreducible representation $\pi_{j,\ell}\colon G \to \GL(V_\ell^j)$ by $\pi_{j,\ell}(g):=\pi_j(g)|_{V_\ell^j}$ for all $g \in G$. Let $\mathfrak{L}$ denote the set of all tuples of integers $\mathfrak{l}=(\ell_1,\ldots,\ell_r)$ such that $1 \leq \ell_j \leq n_j$ for each $j=1,\ldots,r$. 
For each $\mathfrak{l}=(\ell_1,\ldots,\ell_r) \in \mathfrak{L}$ define a potential $\Phi_{\mathfrak{l}} \colon \Sigma_N^* \to (0,+\infty)$ by \begin{equation}\label{eq:frakpotential1} \Phi_{\mathfrak{l}}(\mathtt{i}):= \prod_{j=1}^r \left\|\pi_j(A_\mathtt{i})|_{V_{\ell_j}^j}\right\|^{\beta_j} = \prod_{j=1}^r \left\|\pi_{j,\ell_j}(A_\mathtt{i})\right\|^{\beta_j}.\end{equation} For each fixed $\mathfrak{l}=(\ell_1,\ldots,\ell_r)$ the representations $\pi_{j,\ell_j}$ for $j=1,\ldots,r$ are irreducible, so each $\Phi_{\mathfrak{l}}$ satisfies the hypotheses of Theorem \ref{th:irreducible-case}. Clearly we also have \begin{align}\label{eq:l-max}\Phi(\mathtt{i})=\prod_{j=1}^r \left\|\pi_j(A_\mathtt{i})\right\|^{\beta_j} &=\prod_{j=1}^r \max_{1 \leq \ell \leq n_j} \left\|\pi_j(A_\mathtt{i})|_{V_\ell^j}\right\|^{\beta_j}\\\nonumber & =\max_{(\ell_1,\ldots,\ell_r)\in \mathfrak{L}} \prod_{j=1}^r \left\|\pi_j(A_\mathtt{i})|_{V_{\ell_j}^j}\right\|^{\beta_j} = \max_{\mathfrak{l} \in \mathfrak{L}} \Phi_{\mathfrak{l}}(\mathtt{i}) \end{align} for every $\mathtt{i} \in \Sigma_N^*$. We will find it helpful to define further potentials as follows. For each $\mathfrak{l}=(\ell_1,\ldots,\ell_r) \in \mathfrak{L}$ define \begin{equation}\label{eq:frakpotential2} \Phi^{\det}_{\mathfrak{l}}(\mathtt{i}):= \prod_{j=1}^r \left|\det \left(\pi_{j,\ell_j}(A_\mathtt{i})\right)\right|^{\frac{\beta_j}{\dim V_{\ell_j}^j}}\end{equation} for all $\mathtt{i} \in \Sigma_N^*$. Define also \[\Phi^{\det}(\mathtt{i}) = \prod_{j=1}^r \left|\det A_\mathtt{i} \right|^{\frac{k_j\beta_j}{d}},\] for all $\mathtt{i} \in \Sigma_N^*$. Our strategy in proving Theorem \ref{th:main-tech} will be to establish the identity \begin{equation}\label{eq:pressure-goal}P(\Phi_{\mathfrak{l}}) =P(\Phi_{\mathfrak{l}}^{\det})\end{equation} for all $\mathfrak{l} \in \mathfrak{L}$. 
This will permit the implication (ii)$\implies$(iii) of Theorem \ref{th:irreducible-case} to be applied, establishing that each of the groups $\pi_{j,\ell}(G)$ is compact modulo factoring out the determinant. The compactness of each $\pi_j(G)$ modulo factoring out the determinant will then follow via some additional bookkeeping to ensure that for each $j=1,\ldots,r$ the determinant which is factored out of the representation $\pi_{j,\ell}$ is consistent across all $\ell \in \{1,\ldots,n_j\}$, and the compactness of $G$ modulo factoring out the determinant will follow by some simple manipulations involving singular values. Much as in the second half of the proof of Theorem \ref{th:irreducible-case}, before commencing the proof of \eqref{eq:pressure-goal} we must first establish an identity involving determinants. The proof of this identity is relatively long and comprises a large proportion of this section. Specifically, we make the following claim: for every $j=1,\ldots,r$, for all $\ell=1,\ldots,n_j$ we have \begin{equation}\label{eq:dets}\left|\det \left(\pi_{j,\ell}(g)\right)\right|^{\frac{1}{\dim V_{\ell}^j}} = \left|\det g \right|^{\frac{k_j}{d}} \end{equation} for all $g \in G$. To prove the claim it is sufficient for us to establish \eqref{eq:dets} for all $g \in G^o$, since if this has been proven then for any given $g \in G$ we have $g^n \in G^o$ for some integer $n \geq 1$ and hence clearly \[\left|\det \left(\pi_{j,\ell}(g)\right)\right|^{\frac{1}{\dim V_{\ell}^j}} =\left|\det \left(\pi_{j,\ell}(g^n)\right)\right|^{\frac{1}{n\cdot \dim V_{\ell}^j}} = \left|\det (g^n) \right|^{\frac{k_j}{nd}} =\left|\det g \right|^{\frac{k_j}{d}} \] as required. We therefore restrict our attention to the task of proving \eqref{eq:dets} for all $g \in G^o$. 
To this end let us fix $j$ and $\ell$ and define a continuous group homomorphism $\hat\pi$ from $G^o$ to the multiplicative group of positive real numbers by $\hat\pi(g):=|\det \left(\pi_{j,\ell}(g)\right)|^{1/(k_j \cdot \dim V_\ell^j)}$. Our objective is now to show that $\hat\pi(g)=|\det g|^{1/d}$ for all $g \in G^o$. The set of all $g \in G^o$ satisfying this equation is obviously a group, and this set obviously includes $[G^o,G^o]$ as a subset since by the commutativity of real multiplication we have $\hat\pi(g)=1=|\det g|^{1/d}$ for all $g \in [G^o,G^o]$. Since $G^o$ is equal to an almost direct product of $Z(G^o)$ and $[G^o,G^o]$, the claim will therefore follow if we can prove that $\hat\pi(z)=|\det z|^{1/d}$ for all $z\in Z(G^o)$. We begin this process by analysing the action of $Z(G^o)$ on $\mathbb{R}^d$. By Clifford's theorem \cite[Theorem 1.7]{wehrfritz} applied to the irreducible group $G \leq \GL_d(\mathbb{R})$ and its normal subgroup $G^o$, we obtain a direct sum decomposition $\mathbb{R}^d=X_1 \oplus \ldots \oplus X_p$ consisting of the homogeneous (isotypic) components of the $G^o$-representation. By the same result of Clifford the subspaces $X_i$ are permuted by the component group $G/G^o$. In particular these subspaces all have the same dimension, which we denote by $m \in \mathbb{N}$. (Here of course we have $m=d/p$.) Decomposing each $X_i$ into a sum of irreducible subspaces for the $G^o$-action and using Schur's lemma, we deduce that there exists an inner product structure on each $X_i$ with respect to which every $g \in Z(G^o)$ acts on $X_i$ by a similarity transformation. Now, by \cite[Proposition 8.15]{borel.book} there exist a maximal compact subgroup $Z(G^o)_A$ and a maximal real diagonalisable subgroup $Z(G^o)_D$ of $Z(G^o)$ such that $Z(G^o)_A \cap Z(G^o)_D$ is finite and $Z(G^o)= Z(G^o)_D Z(G^o)_A$, which is to say $Z(G^o)$ is an almost direct product of the subgroups $Z(G^o)_D$ and $Z(G^o)_A$. 
The group $\hat\pi(Z(G^o)_A)$ is a compact subgroup of the positive reals and hence is equal to $\{1\}$, and similarly the image of $Z(G^o)_A$ under the homomorphism $z \mapsto |\det z|^{1/d}$ must also equal $\{1\}$, so we have $\hat\pi(z)=|\det z|^{1/d}$ for all $z \in Z(G^o)_A$. Hence the claim will be proved if we can show that $\hat\pi(z)=|\det z|^{1/d}$ for every $z \in Z(G^o)_D$. Since each $X_i$ is a sum of isomorphic irreducible representations of $G^o$, it follows from Schur's lemma and (real) diagonalisability that every $z \in Z(G^o)_D$ acts on each $X_i$ by a scalar transformation $v \mapsto \gamma_i(z)v$ for some nonzero real number $\gamma_i(z)$ for $i=1,\ldots,p$. On the other hand, since $V^j_\ell$ is $Z(G^o)_D$-invariant and since $Z(G^o)_D$ is abelian, $V^j_\ell$ can be written as a direct sum of $Z(G^o)_D$-irreducible subspaces in $\wedge^{k_j}\mathbb{R}^d$. But $Z(G^o)_D$ is also a split torus, and therefore so is its image in the exterior power representations. Hence, these $Z(G^o)_D$-irreducible subspaces of $V^j_\ell$ are $1$-dimensional subspaces. Each gives rise to a character of $Z(G^o)_D$ of the form $\gamma_1(z)^{t_1}\cdots \gamma_p(z)^{t_p}$ for some non-negative integers $t_1,\ldots,t_p$ whose sum is equal to $k_j$. The quantity $\det \pi_{j,\ell}(z)=\det z^{\wedge k_j}|_{V_\ell^j}$ is a product of precisely $\dim V_\ell^j$ such characters, so it has the form $\gamma_1(z)^{t_1'}\cdots \gamma_p(z)^{t_p'}$ for some non-negative integers $t_1',\ldots,t_p'$ such that $\sum_{i=1}^p t_i'=k_j\cdot \dim V_\ell^j$. Taking the absolute value and raising to the power $1/(k_j\cdot \dim V_\ell^j)$ as in the definition of $\hat\pi$, it follows that there exist non-negative rational numbers $r_1,\ldots,r_p$ such that $\hat\pi(z)=|\gamma_1(z)|^{r_1}\cdots |\gamma_p(z)|^{r_p}$ for all $z \in Z(G^o)_D$ and such that $\sum_{i=1}^p r_i=1$. 
On the other hand clearly $\det z =\gamma_1(z)^m\cdots \gamma_p(z)^m$ for every $z \in Z(G^o)_D$ since $\mathbb{R}^d=\bigoplus_{i=1}^p X_i$ and $\det (z|_{X_i})=\gamma_i(z)^m$ for every $i=1,\ldots,p$, where we recall that $m=d/p$ is the dimension of each of the spaces $X_i$. Hence $|\det z|^{1/d} = |\gamma_1(z)\cdots \gamma_p(z)|^{1/p}$ for all $z \in Z(G^o)_D$. Now, if $z \in Z(G^o)_D$ and $g \in G$ then $gzg^{-1}$ also belongs to $Z(G^o)$ and also acts on each $X_i$ by a scalar transformation, which by the maximality of $Z(G^o)_D$ as a real diagonalisable subgroup of $Z(G^o)$ implies $gzg^{-1} \in Z(G^o)_D$. For every $[g] \in G/G^o$ there exists a permutation $\varsigma$ of $\{1,\ldots,p\}$ such that $gX_i = X_{\varsigma(i)}$ for every $i =1,\ldots,p$ and every $g \in [g]$, and the corresponding element $gzg^{-1}$ of $Z(G^o)_D$ satisfies $\gamma_i(gzg^{-1}) = \gamma_{\varsigma(i)}(z)$ for all $i=1,\ldots,p$. For each $i \in \{1,\ldots,p\}$ the transitivity of the action of $G/G^o$ on $X_1,\ldots,X_p$ implies that the sets $\left\{[g]\in G/G^o \colon gX_{i}=X_j\right\}$ for $j=1,\ldots,p$ form a partition of $G/G^o$ into cosets of equal cardinality $(\#G/G^o)/p$ and therefore \begin{align}\label{eq:more-determinant-like-stuff}\prod_{[g]\in G/G^o} |\gamma_{i}(gzg^{-1})| &=\prod_{j=1}^p \left(\prod_{\substack{[g]\in G/G^o \\ gX_i=X_j}} |\gamma_j(z)| \right)= \left(\prod_{j=1}^p |\gamma_j(z)|\right)^{\frac{\#G/G^o}{p}}=|\det z|^{\frac{\#G/G^o}{d}} \end{align} for each $i=1,\ldots,p$ and $z \in Z(G^o)_D$. We obviously have $\hat\pi(gzg^{-1})=\hat\pi(z)$ for every $z \in Z(G^o)_D$ and $g \in G$ by the commutativity of real multiplication. 
Hence for every $z \in Z(G^o)_D$ \begin{align*}\hat\pi(z)&=\left(\prod_{[g]\in G/G^o} \hat\pi(gzg^{-1})\right)^{\frac{1}{\#G/G^o}}= \left(\prod_{[g] \in G/G^o}\prod_{i=1}^p |\gamma_i(gzg^{-1})|^{r_i }\right)^{\frac{1}{\#G/G^o}}\\ & = \prod_{i=1}^p\left(\prod_{[g] \in G/G^o} |\gamma_i(gzg^{-1})|\right)^{\frac{r_i}{ \#G/G^o}}=\prod_{i=1}^p |\det z|^{\frac{r_i}{d}} = |\det z|^{\frac{1}{d}}\end{align*} where we have used \eqref{eq:more-determinant-like-stuff} and the equation $r_1+\cdots +r_p=1$. We have obtained $\hat\pi(z)=|\det z|^{1/d}$ for all $z \in Z(G^o)_D$ and we deduce that the claimed identity \eqref{eq:dets} is valid for every $g \in G$ as required. We may now return to the main direction of the proof. Our first step towards the desired identity \eqref{eq:pressure-goal} is to observe that \[P(\Phi)\geq \max_{\mathfrak{l} \in \mathfrak{L}} P(\Phi_{\mathfrak{l}})\] as a direct consequence of \eqref{eq:l-max} together with the definition of the pressure. Furthermore, for each $\mathfrak{l} \in \mathfrak{L}$ we have $\Phi_{\mathfrak{l}}(\mathtt{i}) \geq \Phi_{\mathfrak{l}}^{\det}(\mathtt{i})$ for all $\mathtt{i} \in \Sigma_N^*$. This follows by comparing \eqref{eq:frakpotential1} and \eqref{eq:frakpotential2} and using the elementary inequality $|\det B| \leq \|B\|^{\dim V}$ for all $B \in \GL(V)$, and it entails that $P(\Phi_{\mathfrak{l}}) \geq P(\Phi_{\mathfrak{l}}^{\det})$ for every $\mathfrak{l} \in \mathfrak{L}$. We have thus far obtained \begin{equation}\label{eq:so-far}P(\Phi)\geq P(\Phi_{\mathfrak{l}})\geq P(\Phi_{\mathfrak{l}}^{\det})\end{equation} for every $\mathfrak{l} \in \mathfrak{L}$. Using the identity \eqref{eq:dets}, we immediately deduce that $\Phi_{\mathfrak{l}}^{\det}(\mathtt{i})=\Phi^{\det}(\mathtt{i})$ for all $\mathtt{i} \in \Sigma_N^*$ simply by applying the equation \eqref{eq:dets} to the definition of the two potentials. 
Combining this observation with \eqref{eq:so-far} it follows that \begin{equation}\label{eq:so-far-2}P(\Phi)\geq P(\Phi_{\mathfrak{l}})\geq P(\Phi_{\mathfrak{l}}^{\det})=P(\Phi^{\det})\end{equation} for every $\mathfrak{l} \in \mathfrak{L}$. Let us now show that $P(\Phi)=P(\Phi^{\det})$. By hypothesis there exists a Bernoulli measure $\mu$ which satisfies $h(\mu)+\Lambda(\Phi,\mu)=P(\Phi)$. Since $\mu$ is Bernoulli, it is ergodic, so by the subadditive ergodic theorem we have for $\mu$-a.e. $x\in \Sigma_N$ \begin{align*}\Lambda(\Phi,\mu) &= \lim_{n \to \infty} \frac{1}{n}\log \Phi(x|_n) = \lim_{n \to \infty} \frac{1}{n}\log \max_{\mathfrak{l} \in \mathfrak{L}} \Phi_{\mathfrak{l}}(x|_n)\\ &= \max_{\mathfrak{l} \in \mathfrak{L}} \lim_{n \to \infty} \frac{1}{n}\log \Phi_{\mathfrak{l}}(x|_n)=\max_{\mathfrak{l} \in \mathfrak{L}} \Lambda(\Phi_{\mathfrak{l}},\mu).\end{align*} Thus $P(\Phi)=h(\mu)+\Lambda(\Phi_{\mathfrak{l}_0},\mu)$ for some particular $\mathfrak{l}_0 \in \mathfrak{L}$, and therefore \[P(\Phi)\geq \max_{\mathfrak{l} \in \mathfrak{L}}P(\Phi_{\mathfrak{l}}) \geq P(\Phi_{\mathfrak{l}_0}) \geq h(\mu)+\Lambda(\Phi_{\mathfrak{l}_0},\mu) = P(\Phi)\] where we have used the subadditive variational principle in the third inequality. We conclude that $P(\Phi)=P(\Phi_{\mathfrak{l}_0})$ and that $\mu$ is an equilibrium state of $\Phi_{\mathfrak{l}_0}$. By Theorem \ref{th:irreducible-case} applied to the potential $\Phi_{\mathfrak{l}_0}$ we have $P(\Phi_{\mathfrak{l}_0})=P(\Phi_{\mathfrak{l}_0}^{\det})$. We have seen already that $\Phi_{\mathfrak{l}_0}^{\det}$ is identically equal to $\Phi^{\det}$, so \[P(\Phi^{\det}) = P(\Phi_{\mathfrak{l}_0}^{\det}) = P(\Phi_{\mathfrak{l}_0}) =P(\Phi)\geq P(\Phi_{\mathfrak{l}})\geq P(\Phi_{\mathfrak{l}}^{\det})=P(\Phi^{\det})\] for every $\mathfrak{l} \in \mathfrak{L}$, where we have invoked \eqref{eq:so-far-2}. 
We have now established the desired identity \[P(\Phi_{\mathfrak{l}}) =P(\Phi_{\mathfrak{l}}^{\det})\] for every $\mathfrak{l} \in \mathfrak{L}$. Since every $\Phi_{\mathfrak{l}}$ satisfies the hypotheses of Theorem \ref{th:irreducible-case} it follows from the implication (ii)$\implies$(iii) of that theorem that for each $\mathfrak{l}=(\ell_1,\ldots,\ell_r) \in \mathfrak{L}$, for every $j=1,\ldots,r$ the group \begin{align*}\left\{\left|\det \left(\pi_{j,\ell_j}(g)\right)\right|^{-\frac{1}{\dim V_{\ell_j}^j}} \pi_{j,\ell_j}(g) \colon g \in G\right\} &=\left\{\left|\det g\right|^{-\frac{k_j}{d}} g^{\wedge k_j}|_{V_{\ell_j}^j} \colon g \in G\right\}\\ &=\left\{\left( |\det g|^{-\frac{1}{d}} g\right)^{\wedge k_j}|_{V_{\ell_j}^j} \colon g \in G\right\}\end{align*} is compact, where we have again used \eqref{eq:dets}. Since $\mathfrak{l}$ is arbitrary we deduce that the group \[\left\{(|\det g|^{-\frac{1}{d}}g)^{\wedge k_j} \colon g \in G\right\}\] is compact for every $j=1,\ldots,r$. In particular it is compact for $j=1$, so there exists $K>0$ such that for every $g \in G$ we have $\|(|\det g|^{-\frac{1}{d}}g)^{\wedge k_1}\|\leq K$. Let $g \in G$ and define $h:=|\det g|^{-1/d}g$. We observed at the beginning of the proof that $k_1<d$. Since $1=|\det h|=\sigma_1(h)\cdots \sigma_d(h)$ we have \begin{align*}\|h\|=\sigma_1(h) &=\sigma_2(h)^{-1}\cdots \sigma_d(h)^{-1}=\sigma_1(h^{-1})\cdots \sigma_{d-1}(h^{-1})\\ &\leq \left(\sigma_1(h^{-1})\cdots \sigma_{k_1}(h^{-1})\right)^{\frac{d-1}{k_1}}=\|(h^{-1})^{\wedge k_1}\|^{\frac{d-1}{k_1}} \leq K^{\frac{d-1}{k_1}}\end{align*} where we have used $k_1 \leq d-1$ in order to pass from the first line to the second. The same reasoning obviously applies to $h^{-1}$, and we conclude that the group \[\left\{|\det g|^{-\frac{1}{d}} g \colon g \in G\right\}\leq \GL_d(\mathbb{R})\] is contained in the compact set \[\left\{h \in \GL_d(\mathbb{R}) \colon \max\{\|h\|,\|h^{-1}\|\} \leq K^{\frac{d-1}{k_1}}\right\}\] and hence is compact. 
Since that group obviously contains all of the linear maps $|\det A_i|^{-1/d}A_i$, the theorem is proved.
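Two of the elementary singular-value facts used in this section — the Abel-summation identity $\prod_{j=1}^d \sigma_j(g)^{\alpha_j}=\prod_{j=1}^d \|g^{\wedge j}\|^{\alpha_j-\alpha_{j+1}}$ and the bound $\|h\|\leq \|(h^{-1})^{\wedge k_1}\|^{(d-1)/k_1}$ when $|\det h|=1$ and $k_1\leq d-1$ — admit a quick numerical sanity check. The sketch below is illustrative only (the dimension $d$, the index $k_1$ and the exponents $\alpha_j$ are arbitrary choices), using the fact that $\|g^{\wedge j}\|=\sigma_1(g)\cdots\sigma_j(g)$:

```python
# Numerical sanity checks (illustration only, not part of the proof) of two
# singular-value facts, with ||g^{wedge j}|| = sigma_1(g)...sigma_j(g).
import numpy as np

rng = np.random.default_rng(0)
d = 4
g = rng.standard_normal((d, d))
sigma = np.linalg.svd(g, compute_uv=False)       # sigma_1 >= ... >= sigma_d > 0

# 1. Abel summation: prod_j sigma_j^alpha_j = prod_j ||g^{wedge j}||^(alpha_j - alpha_{j+1}).
alpha = np.array([3.0, 2.0, 1.5, 0.5])           # alpha_1 >= ... >= alpha_d >= 0
wedge_norms = np.cumprod(sigma)                  # ||g^{wedge j}|| for j = 1..d
alpha_diff = alpha - np.append(alpha[1:], 0.0)   # alpha_j - alpha_{j+1}, alpha_{d+1} := 0
assert np.isclose(np.prod(sigma ** alpha), np.prod(wedge_norms ** alpha_diff))

# 2. If |det h| = 1 and k_1 <= d-1 then ||h|| <= ||(h^{-1})^{wedge k_1}||^((d-1)/k_1).
k1 = 2
h = g / abs(np.linalg.det(g)) ** (1.0 / d)       # normalise so that |det h| = 1
s = np.linalg.svd(h, compute_uv=False)
s_inv = np.sort(1.0 / s)[::-1]                   # singular values of h^{-1}, decreasing
assert s[0] <= np.prod(s_inv[:k1]) ** ((d - 1) / k1) * (1 + 1e-9)
```

The second assertion holds because the geometric mean of the largest $m$ singular values of $h^{-1}$ is non-increasing in $m$, exactly as exploited in the final compactness argument.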
\subsubsection*{Consensus algorithm} Unlike Bitcoin and Ethereum, Tezos' \textbf{consensus algorithm} is based on a \textbf{Proof-of-Stake} algorithm~\cite{tezosLiquidPos}: rights to produce new blocks are given to accounts that own a stake. More precisely, there is a delegation mechanism and the block-producing rights of each account are given in probabilistic proportion to the number of tokens that have been \emph{delegated} to this account. Block producers have to make a security deposit that is slashed if their behaviour looks malicious, for example if they produce two different blocks for the same level (a double-spending attack). \subsubsection*{On-chain voting} Another key point of Tezos is its \textbf{on-chain governance mechanism}. The codebase can be changed by a vote of the token holders via their delegates. This helps prevent divisions amongst the community that could lead to forks. Only a delimited part of the codebase, named the \emph{economic ruleset} or the \emph{economic protocol}~\cite{goodman2014positionpaper,goodman2014whitepaper}, can be changed. This part contains the rules that define what a valid transaction is, what a valid block is, as well as how to choose between multiple chains. Thus, the economic ruleset contains, amongst other things, the consensus algorithm, the language for smart contracts and also the voting rules~\cite{tezosVoting}. It does not contain the network and storage layers. If a proposal is accepted, nodes need not stop and restart: the new code is downloaded from other peers, dynamically compiled and hot-swapped. At the moment, the voting procedure lasts approximately three months, but that could be changed in the future via a vote. 
\subsubsection*{Focus on formal verification} Our long-term ambition is to have certified code in the whole Tezos codebase\footnote{Note that since code changes must be approved by the Tezos community, we can only propose a certified implementation of the economic ruleset.} as well as certified smart contracts. The choice of OCaml as an implementation language is an interesting first step: OCaml gives Tezos good static guarantees since it benefits from OCaml's strong type system and memory management features. Furthermore, formally verified OCaml code can be produced by a variety of tools such as F*~\cite{fstar}, Coq~\cite{coq}, Isabelle/HOL~\cite{isabelle}, Why3~\cite{why3}, and FoCaLiZe~\cite{focalize}. Another specificity of Tezos is the use of formally verified cryptographic primitives. Indeed, the codebase uses the HACL* library~\cite{hacl}, which is certified C code extracted from an implementation written in Low*, a fragment of F*. This article presents Mi-Cho-Coq, a framework for formal verification of Tezos smart contracts, written in the Michelson programming language. It is organised as follows: Section~\ref{sec:michelson} gives an overview of the Michelson smart contract language, the Mi-Cho-Coq framework is then presented in Section~\ref{sec:mi-cho-coq}, a case study on a Multisig smart contract is then conducted in Section~\ref{sec:use-case-multisig}, Section~\ref{sec:related-work} presents some related work, and finally Section~\ref{sec:limits-future-work} concludes the article by listing directions for future work. The Mi-Cho-Coq framework, including the Multisig contract described in Section~\ref{sec:use-case-multisig}, is available at \url{https://gitlab.com/nomadic-labs/mi-cho-coq/tree/FMBC_2019}. 
\subsection{Design rationale} \label{sec:design-rational} Smart contracts operate in a very constrained context: they need to be expressive, evaluated efficiently, and their resource consumption should be accurately measured in order to stop the execution of programs that would be too greedy, as their execution time impacts the block construction and propagation. Smart contracts are non-updatable programs that can handle valuable assets; there is thus a need for strong guarantees on the correctness of these programs. The need for efficiency and, more importantly, for an accurate account of resource consumption leans toward a low-level interpreted language, while the need for contract correctness leans toward a high-level, easily auditable, easily formalisable language with strong static guarantees. To satisfy these constraints, Michelson was made a Turing-complete, low-level, stack-based interpreted language (\textit{\`a la} Forth), enabling resource measurement, but with some high-level features \textit{\`a la} ML: polymorphic products, options, sums, lists, sets and maps data-structures with collection iterators, cryptographic primitives and anonymous functions. Contracts are pure functions that take a stack as input and return a stack as output. This side-effect-free design is an asset for the conception of verification tools. The language is statically typed to ensure the well-formedness of the stack at any point of the program. This means that if a program is well typed, and if it is given a well-typed stack that matches its input expectation, then at any point of the program execution, the given instruction can be evaluated on the current stack. Moreover, to ease the formalisation of Michelson, ambiguous or hidden behaviours have been avoided. In particular, unbounded integers are used to avoid arithmetic overflows and division returns an option (which is !None!
if and only if the divisor is 0) so that the Michelson programmer has to specify the behaviour of the program in case of division by 0; she can however still \emph{explicitly} reject the transaction using the !FAILWITH! Michelson instruction. \subsection{Quick tour of the language} \label{sec:quick-tour-language} The full language syntax, type system, and semantics are documented in~\cite{michelsonwhitedoc}; we give here a quick and partial overview of the language. \subsubsection{Contracts' shape} \label{sec:contracts-shape} A Michelson smart contract script is written in three parts: the parameter type, the storage type, and the code of the contract. A contract's code consists of one block of code that can only be called with one parameter, but multiple entry points can be encoded by branching on a nesting of sum types, and multiple parameters can be paired into one. When the contract is deployed (or \emph{originated} in Tezos lingo) on the chain, it is bundled with a data storage, which can then only be changed by a successful execution of the contract. The parameter and the storage associated with the contract are paired and passed to the contract's code at each execution; the code has to return a list of operations and the updated storage. Seen from the outside, the type of the contract is the type of its parameter, as it is the only way to interact with it. \subsubsection{Michelson Instructions} \label{sec:instruction-type} As usual in stack-based languages, Michelson instructions take their parameters on the stack. All Michelson instructions are typed as a function going from the expected state of the stack, before the instruction evaluation, to the resulting stack. For example, the !AMOUNT! instruction used to obtain the amount in $\mu tez$ of the current transaction has type !'S -> mutez:'S!, meaning that for any stack type !'S!, it produces a stack of type !mutez:'S!.
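As a reading aid, the calling convention just described — a pure function from a (parameter, storage) pair to an (operation list, new storage) pair, with entry points encoded on a sum type — can be sketched in Python (all names here are ours and purely illustrative):

```python
def toy_contract(parameter, storage):
    """A contract in the Michelson style: a pure function from
    (parameter, storage) to (operations, new_storage).  Two entry points
    are encoded by branching on a tagged, sum-type-like parameter."""
    tag, arg = parameter
    if tag == "add":        # first entry point: add `arg` to the storage
        return [], storage + arg
    if tag == "reset":      # second entry point: reset the storage
        return [], 0
    raise TypeError("ill-typed parameter")

# Each call returns the emitted operations and the updated storage;
# the stored value is only replaced on a successful execution.
ops, st = toy_contract(("add", 5), 37)
print(ops, st)  # -> [] 42
```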
Some instructions, like comparison or arithmetic operations, exhibit non-ambiguous ad-hoc polymorphism: depending on the input arguments' type, a specific implementation of the instruction is selected, and the return type is fixed. For example !SIZE! has the following types:

\begin{tabular}[t]{lcl}
\begin{lstlisting}
bytes:'S -> nat:'S
string:'S -> nat:'S
\end{lstlisting}
&\hspace{3em} &
\begin{lstlisting}
set 'elt:'S -> nat:'S
map 'key 'val:'S -> nat:'S
list 'elt:'S -> nat:'S
\end{lstlisting}
\end{tabular}

While the sizes of strings and byte arrays are computed in similar ways, under the hood the computation of a map's size has nothing to do with that of a string. Finally, the contract's code is required to take a stack with a pair \emph{parameter}-\emph{storage} and return a stack with a pair \emph{operation list}-\emph{storage}:\\ !(parameter_ty*storage_ty):[] -> (operation list*storage_ty):[]!. The operations listed at the end of the execution can change the delegate of the contract, originate new contracts, or transfer tokens to other addresses. They will be executed right after the execution of the contract. The transfers can have parameters and trigger the execution of other smart contracts: this is the only way to perform \emph{inter-contract} calls. \subsubsection{A short example - the Vote contract.} \label{sec:short-example} We want to allow users of the blockchain to vote for their favorite formal verification tool. In order to do that, we create a smart contract tasked with collecting the votes. We want any user to be able to vote, and to vote as many times as they want, provided they pay a small price (say 5 $tez$). We originate the contract with the names of a selection of popular tools: Agda, Coq, Isabelle and K\_framework, which are placed in the long-term storage of the contract, in an associative map between the tool's name and the number of registered votes (of course, each tool starts with 0 votes).
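Before turning to the Michelson code, the intended behaviour can be condensed into a small executable sketch (a Python stand-in of ours, with a dict playing the role of the Michelson map and the 5 $tez$ fee expressed in $\mu tez$):

```python
FEE_MUTEZ = 5_000_000  # 5 tez

def vote(amount, chosen, candidates):
    """Intended behaviour of the vote contract: reject underpaid calls and
    unknown candidates, otherwise add one vote for `chosen`.
    Returns (operations, new_storage).  Illustrative sketch only."""
    if FEE_MUTEZ > amount:
        raise RuntimeError("FAIL: fee below 5 tez")
    if chosen not in candidates:
        raise RuntimeError("FAIL: unknown candidate")
    new_storage = dict(candidates)
    new_storage[chosen] += 1
    return [], new_storage  # no emitted operations, updated tally

storage = {"Agda": 0, "Coq": 0, "Isabelle": 0, "K": 0}
ops, storage = vote(5_000_000, "Coq", storage)
print(storage["Coq"])  # -> 1
```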
In Figure~\ref{fig:vote}, we present a voting contract, annotated with the state of the stack after each line of code. When actually writing a Michelson contract, development tools (including an Emacs Michelson mode) can interactively, for any point of the code, give the type of the stack provided by the Michelson typechecker of a Tezos node. \newbox\voting \begin{lrbox}{\voting} \begin{lstlisting}[numbers=left]
storage (map string int); # candidates
parameter string; # chosen
code
  { # (chosen, candidates):[]
    AMOUNT; # amount:(chosen, candidates):[]
    PUSH mutez 5000000;
    COMPARE; GT; # (5 tez > amount):(chosen, candidates):[]
    IF { FAIL } {}; # (chosen, candidates):[]
    DUP;
    DIP { CDR; DUP }; # (chosen, candidates):candidates:candidates:[]
    CAR; DUP; # chosen:chosen:candidates:candidates:[]
    DIP { # chosen:candidates:candidates:[]
      GET; ASSERT_SOME; # candidates[chosen]:candidates:[]
      PUSH int 1;
      ADD; SOME # (Some (candidates[chosen]+1)):candidates:[]
    };
    # chosen:(Some (candidates[chosen]+1)):candidates:[]
    UPDATE; # candidates':[]
    NIL operation; PAIR # (nil, candidates'):[]
  }
\end{lstlisting} \end{lrbox} \newbox\votingstorage \begin{lrbox}{\votingstorage} \begin{lstlisting}
{Elt "Agda" 0 ; Elt "Coq" 0 ; Elt "Isabelle" 0 ; Elt "K" 0}
\end{lstlisting} \end{lrbox} \begin{figure}[h!] \centering \captionsetup[subfigure]{position=b} \subcaptionbox {\label{fig:vote}} {\usebox\voting} \hfill \centering \subcaptionbox {\label{fig:vote-storage}} {\usebox\votingstorage} \caption{A simple voting contract \subref{fig:vote} and an example of initial storage \subref{fig:vote-storage}} \end{figure} Let's take a look at our voting program. First, the description of the storage and parameter types is given on lines \texttt{1-2}. Then the code of the contract is given. On line \texttt{5}, !AMOUNT! pushes on the stack the amount (in $\mu tez$) sent to the contract address by the user.
The threshold amount (5 $tez$) is also pushed on the stack on line \texttt{6} and compared to the amount sent: !COMPARE! pops the two top values of the stack, and pushes either -1, 0 or 1 depending on the comparison between the two values. !GT! then pops this value and pushes !true! if the value is 1. If the threshold is indeed greater than the amount sent, the first branch of the !IF! is executed and !FAIL! is called, interrupting the contract execution and cancelling the transaction. If the value was !false!, the execution continues on line \texttt{9}, where we prepare the stack for the next action: !DUP! copies the top of the stack, we then manipulate the tail of the stack while preserving its head using !DIP!: there, we take the right element of the !(chosen, candidates)! pair with !CDR!, and we duplicate it again. By closing the block guarded by !DIP! we recover the former stack's top, and the following line takes its left element with !CAR!, and duplicates it. On line \texttt{12}, we use !DIP! to protect the top of the stack again. !GET! then pops !chosen! and !candidates! from the stack, and pushes an option containing the number of votes of the candidate, if it was found in the map. If it was not found, !ASSERT_SOME! makes the program fail. On line \texttt{15}, the number of votes is incremented by !ADD!, and packed into an option type by !SOME!. We then leave the !DIP! block to regain access to the value at the top of the stack (!chosen!). On line \texttt{18}, !UPDATE! pops the three values remaining on top of the stack, and pushes the !candidates! map updated with the incremented value for !chosen!. Finally, we push an empty list of operations with !NIL operation!, and pair the two elements on top of the stack to get the correct return type. \subsubsection{Michelson syntax and typing in Coq} Michelson's type system, syntax and semantics, as described in the main documentation, are fully formalised in Mi-Cho-Coq.
The abstract syntax tree of a Michelson script is a term of an inductive type which carries the script type: \lstset{language=Coq} \begin{lstlisting}
Inductive instruction : list type -> list type -> Set :=
| NOOP {A} : instruction A A
| FAILWITH {A B a} : instruction (a :: A) B
| SEQ {A B C} : instruction A B -> instruction B C -> instruction A C
| IF {A B} : instruction A B -> instruction A B -> instruction (bool :: A) B
| LOOP {A} : instruction A (bool :: A) -> instruction (bool :: A) A
...
\end{lstlisting} Michelson code is usually a sequence of instructions (\coqe{SEQ}), which is one of the \coqe{instruction} constructors. It has type \coqe{instruction stA stB} where \coqe{stA} and \coqe{stB} are respectively the type of the input stack and of the output stack. The stack type is a list of Michelson type constructions, defined in the \coqe{type} inductive: \begin{lstlisting}
Inductive comparable_type : Set :=
| nat | int | string | bytes | bool
| mutez | address | key_hash | timestamp.

Inductive type : Set :=
| Comparable_type (a : comparable_type)
| key | unit | signature | operation
| option (a : type) | list (a : type) | set (a : comparable_type)
| contract (a : type) | pair (a b : type) | or (a b : type)
| lambda (a b : type)
| map (key : comparable_type) (val : type)
| big_map (key : comparable_type) (val : type).
\end{lstlisting} A full contract, for a given storage type \coqe{storage} and parameter type \coqe{params}, is an instruction of type \begin{lstlisting}
instruction ((pair params storage) :: nil)
            ((pair (list operation) storage) :: nil).
\end{lstlisting} Thanks to the indexing of the ^instruction^ inductive by the input and output stack types, only well-typed Michelson instructions are representable in Mi-Cho-Coq. This is very similar to the implementation of Michelson in the Tezos node, which relies on the corresponding OCaml feature: generalised algebraic datatypes.
To ease the transcription of Michelson contracts into the Mi-Cho-Coq AST we use notations so that contracts in Mi-Cho-Coq look very similar to actual Michelson code. The main discrepancy between Michelson and Mi-Cho-Coq syntax is that, due to parsing limitations, the Michelson semi-colon instruction terminator has to be replaced by a double semi-colon instruction separator. The ad-hoc polymorphism of Michelson instructions is handled by adding an implicit argument to the corresponding instruction constructor in Mi-Cho-Coq. This argument is a structure that carries an element identifying the actual implementation of the instruction to be used. As the argument is \emph{implicit and maximally inserted}, Coq's type unifier tries to fill it with whatever value can fit with the known types surrounding it, \emph{i.e.} the type of the input stack. Possible values are declared through Coq's canonical structures mechanism, which is very similar to (Coq's or Haskell's) typeclasses. \subsubsection{Michelson interpreter in Coq} Michelson semantics is formalised in Coq as an evaluator \coqe{eval} of type ^forall {A B : list type}, instruction A B -> nat -> stack A -> M (stack B)^ where ^M^ is the error monad used to represent the explicit failure of the execution of a contract. The argument of type ^nat^ is called the \emph{fuel} of the evaluator. It represents a bound on the depth of the execution of the contract and should not be confused with Michelson's cost model, which is not yet formalised in Mi-Cho-Coq. Some domain-specific operations that are hard to define in Coq are axiomatised in the evaluator. These include cryptographic primitives, data serialisation, and instructions to query the context of the call to the smart contract (amount and sender of the transaction, current date, balance and address of the smart contract).
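The overall shape of such a fuel-bounded evaluator in an error monad can be sketched on a toy instruction set. The following is Python pseudocode for the structure only (untyped, with explicit failure values standing in for the error monad), not Mi-Cho-Coq's typed evaluator:

```python
def eval_instr(instr, fuel, stack):
    """Evaluate a tiny stack language with explicit fuel.
    Returns ("Return", stack) or ("Failed", reason), mimicking an error
    monad; running out of fuel is an explicit failure.  Toy sketch only."""
    if fuel == 0:
        return ("Failed", "out of fuel")
    op = instr[0]
    if op == "PUSH":
        return ("Return", [instr[1]] + stack)
    if op == "ADD":
        if len(stack) < 2:  # impossible in the typed setting, explicit here
            return ("Failed", "stack underflow")
        return ("Return", [stack[0] + stack[1]] + stack[2:])
    if op == "SEQ":
        res = eval_instr(instr[1], fuel - 1, stack)
        if res[0] == "Failed":
            return res
        return eval_instr(instr[2], fuel - 1, res[1])
    if op == "FAILWITH":
        return ("Failed", stack[0])
    return ("Failed", "unknown instruction")

prog = ("SEQ", ("PUSH", 1), ("SEQ", ("PUSH", 41), ("ADD",)))
print(eval_instr(prog, 10, []))  # -> ('Return', [42])
```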
\subsubsection{A framework for verifying smart contracts} To ease the writing of correctness proofs in Mi-Cho-Coq, a weakest precondition calculus is defined as a function \coqe{eval_precond} of type ^forall {fuel A B}, instruction A B -> (stack B -> Prop) -> (stack A -> Prop)^, i.e., a Coq function taking as arguments an instruction and a predicate over the possible output stacks of the instruction (the postcondition) and producing a predicate on the possible input stacks of the instruction (the precondition). This function is proved correct with respect to the evaluator: \begin{lstlisting}
Lemma eval_precond_correct {A B} (i : instruction A B) fuel st psi :
  eval_precond fuel i psi st <->
  match eval i fuel st with
  | Failed _ _ => False
  | Return _ a => psi a
  end.
\end{lstlisting} Note that the right-hand side formula is the result of the monad transformer of \cite{fstar_monad_transformer}, which here yields a simple expression thanks to the absence of complex effects in Michelson. \subsubsection{A short example - the Vote contract} We give below, as an example, a formal specification of the voting contract seen previously. We want the contract to take into account every vote sent in a transaction with an amount of at least 5 $tez$. Moreover, we want to only take into account the votes toward an actual available choice (the contract should fail if a wrong name is sent as a parameter). Finally, the contract should not emit any operation. In the following specification, the \emph{precondition} is the condition that must be verified for the contract to succeed. The \emph{postcondition} fully describes the new state of the storage at the end of the execution, as well as the potentially emitted operations. \texttt{amount} refers to the quantity of $\mu tez$ sent by the caller for the transaction.
\newcommand{\legacywedge}{\wedge}
{\small \begin{longtable}{ll} \textbf{Precondition}: & \texttt{amount} $\geq$ 5000000 \ensuremath{\legacywedge}{} chosen $\in$ \texttt{Keys}(storage)\\ \textbf{Postconditions}: & returned\_operations = [ ] \ensuremath{\legacywedge}\\ & $\forall$ c, c $\in$ \texttt{Keys}(storage) $\iff$ c $\in$ \texttt{Keys}(new\_storage) \ensuremath{\legacywedge}\\ & new\_storage[chosen] = storage[chosen] + 1 \ensuremath{\legacywedge}\\ & $\forall$ c $\in$ \texttt{Keys}(storage), c $\neq$ chosen $\Rightarrow$ new\_storage[c] = storage[c] \end{longtable}} Despite looking simple, proving the correctness of the vote contract still needs a fair number of properties about the map data structure. In particular, we need some lemmas about the relations between the \texttt{mem}, \texttt{get} and \texttt{update} functions, which we added to the Mi-Cho-Coq library to prove this contract. Once these lemmas are available, the contract can easily be proved by studying the three different situations that can arise during the execution: the contract can fail (either because the sender has not sent enough tez or because they have not selected one of the possible candidates), or the execution can go smoothly. \lstset{language=michelson} \subsubsection{Michelson Implementation} To be as generic as possible, the possible actions of our Multisig contract are: \begin{itemize} \item produce a list of operations to be run atomically \item change the threshold and the list of owner public keys \end{itemize} The contract features two entrypoints named \texttt{default} and \texttt{main}. The \texttt{default} entrypoint takes no parameter (it has type \texttt{unit}) and lets unauthenticated users send funds to the Multisig contract. The \texttt{main} entrypoint takes as parameters an action, a list of optional signatures, and a counter value.
It checks the validity and the number of signatures and, in case of successful authentication, it executes the required action and increments the counter. The Michelson script of the Multisig contract is available at \cite{multisigArthur}. The code of the \texttt{default} entrypoint is trivial. The code for the \texttt{main} entrypoint can be divided into three parts: the header, the loop, and the tail. The header packs together the required action and the nonce and checks that the counter given as parameter matches the one stored in the contract. The loop iterates over the stored public keys and the optional signatures given in parameter. It counts and checks the validity of all the signatures. Finally, the contract tail checks that the number of provided signatures is at least as large as the threshold, it increments the stored counter, and it runs the required action (it either evaluates the anonymous function passed in the contract parameter and emits the resulting operations or modifies the contract storage to update the list of owner public keys and the threshold). \subsubsection{Specification and Correctness Proof} Mi-Cho-Coq is a functional verification framework. It is well suited to specify the relation between the input and output stacks of a contract such as Multisig, but it is currently not expressive enough to state properties about the lifetime of a smart contract or about the interaction between smart contracts. For this reason, we have not proved that the Multisig contract is resistant to replay attacks. However, we fully characterise the behaviour of each call to the Multisig contract using the following specification, where !env! is the evaluation environment containing among other data the address of the contract (!self env!) and the amount of the transaction (!amount env!).
\lstset{language=Coq} \begin{lstlisting}
Definition multisig_spec
    (parameter : data parameter_ty)
    (stored_counter : N)
    (threshold : N)
    (keys : Datatypes.list (data key))
    (new_stored_counter : N)
    (new_threshold : N)
    (new_keys : Datatypes.list (data key))
    (returned_operations : Datatypes.list (data operation))
    (fuel : Datatypes.nat) :=
  let storage : data storage_ty :=
    (stored_counter, (threshold, keys)) in
  match parameter with
  | inl tt =>
    new_stored_counter = stored_counter /\
    new_threshold = threshold /\
    new_keys = keys /\
    returned_operations = nil
  | inr ((counter, action), sigs) =>
    amount env = (0 ~Mutez) /\
    counter = stored_counter /\
    length sigs = length keys /\
    check_all_signatures sigs keys
      (fun k sig =>
        check_signature env k sig
          (pack env pack_ty
            (address_ env parameter_ty (self env),
             (counter, action)))) /\
    (count_signatures sigs >= threshold)%N /\
    new_stored_counter = (1 + stored_counter)%N /\
    match action with
    | inl lam =>
      match (eval lam fuel (tt, tt)) with
      | Return _ (operations, tt) =>
        new_threshold = threshold /\
        new_keys = keys /\
        returned_operations = operations
      | _ => False
      end
    | inr (nt, nks) =>
      new_threshold = nt /\
      new_keys = nks /\
      returned_operations = nil
    end
  end.
\end{lstlisting} Using the Mi-Cho-Coq framework, we have proved the following theorem: \begin{lstlisting}
Lemma multisig_correct
    (params : data parameter_ty)
    (stored_counter new_stored_counter threshold new_threshold : N)
    (keys new_keys : list (data key))
    (returned_operations : list (data operation))
    (fuel : nat) :
  let storage : data storage_ty :=
    (stored_counter, (threshold, keys)) in
  let new_storage : data storage_ty :=
    (new_stored_counter, (new_threshold, new_keys)) in
  17 * length keys + 14 $\leq$ fuel ->
  eval multisig (23 + fuel) ((params, storage), tt) =
    Return _ ((returned_operations, new_storage), tt)
  <->
  multisig_spec params stored_counter threshold keys
    new_stored_counter new_threshold new_keys
    returned_operations fuel.
\end{lstlisting}
The proof relies heavily on the correctness of the precondition calculus. The only non-trivial part of the proof is the signature-checking loop. Indeed, for efficiency reasons, the Multisig contract checks the equality of length between the optional signature list and the public key list only after checking the validity of the signatures; an optional signature and a public key are consumed at each loop iteration, and the list of remaining optional signatures after the loop exit is checked for emptiness afterward. For this reason, the specification of the loop has to allow for remaining unchecked signatures.

\section{Introduction to Tezos}
\input{intro}
\section{Overview of Michelson}
\label{sec:michelson}
\input{michelson}
\section{Mi-Cho-Coq : a Verification Framework in Coq for Michelson}
\label{sec:mi-cho-coq}
\input{michocoq}
\section{A case study : the Multisig Contract}
\label{sec:use-case-multisig}
\input{multisig}
\section{Related Work}
\label{sec:related-work}
\input{related}
\section{Limits and Future Work}
\label{sec:limits-future-work}
\input{future}
\bibliographystyle{splncs04}
\section{Introduction}\label{intro} The interplay between magnetism and superconductivity is a fundamental topic in condensed matter physics, and plays an important role in many low-temperature phenomena, e.g. in high-temperature superconductors \cite{Dagotto94_review, Lee06_Review_HTSC, Dai2012}, inhomogeneous superconductivity\cite{Larkin64_FFLO, Fulde64_FFLO,Matsuda07_Review_FFLO}, Abrikosov vortex lattices\cite{blatter_vortex_review}, etc. Already at the microscopic level, this interplay is fascinating and complex: a single magnetic impurity [e.g., a magnetic atom or a quantum dot (QD) attached to superconducting leads] can locally break Cooper pairs and introduce single-particle localized states known as Yu-Shiba-Rusinov, or simply ``Shiba'', states inside the superconducting gap \cite{Yu65_YSR_states,Shiba68_YSR_states,Rusinov69_YSR_states}. Shiba states have recently been the focus of intense research due to their potential uses in spintronic devices and in topological superconductors hosting Majorana bound states (i.e., Majorana ``Shiba chain'' proposals)\cite{Nadj-Perdge13_Majorana_fermions_in_Shiba_chains,Klinovaja13_TSC_and_Majorana_Fermions_in_RKKY_Systems,Braunecker13_Shiba_chain}.
They have been clearly seen in STM experiments in nanostructured magnetic adatom/superconductor (SC) surfaces\cite{Yazdani97_YSR_states, Ji08_YSR_states, Iavarone10_Local_effects_of_magnetic_impurities_on_SCs, Ji10_YSR_states_for_the_chemical_identification_of_adatoms, Franke11_Competition_of_Kondo_and_SC_in_molecules, Bauer13_Kondo_screening_and_pairing_on_Mn_phtalocyanines_on_Pb, Hatter15_Magnetic_anisotropy_in_Shiba_bound_states_across_a_quantum_phase_transition, Ruby15_Tunneling_into_localized_subgap_states_in_SC, Ruby_2016, Hatter2017_Scaling_of_YSR_energies, Choi2017_Mapping_the_orbital_structure_of_Shiba_states}, and in transport experiments on hybrid nanostructures made of Coulomb-blockaded quantum dots coupled to superconducting leads \cite{deFranceschi10_Hybrid_SC_QD_devices}. In these systems, the QD (physically a carbon nanotube, a gated semiconductor, or a molecule such as C$_{60}$) acts as an artificial ``magnetic atom'' where the position of the Shiba states (also known as Andreev bound-states in this context) can be controlled with supercurrents or voltage gates. An important feature of Shiba physics is the existence of an experimentally accessible spin- and parity-changing phase transition, the so-called ``0-$\pi$ transition'', related to the position of the Shiba level inside the gap. For a model with a classical impurity \cite{Sakurai70}, it can be seen that the change in the occupation of a Shiba state when it crosses the Fermi level causes the collective ground state to change from a BCS-like even-parity singlet to an odd-parity doublet.
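For reference, the classical-impurity benchmark behind this picture is the well-known Yu-Shiba-Rusinov result: a classical spin $S$ exchange-coupled with strength $J$ to a BCS superconductor with normal-state density of states $\rho_0$ binds a pair of subgap states at (up to convention-dependent numerical factors in the definition of $\alpha$)

```latex
\begin{equation*}
  E_\text{Shiba} = \pm\,\Delta\,\frac{1-\alpha^{2}}{1+\alpha^{2}},
  \qquad \alpha = \pi \rho_0 J S ,
\end{equation*}
```

so the subgap level crosses $E_F$ at $\alpha = 1$, the classical counterpart of the spin- and parity-changing transition just described.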
When dealing with the quantum impurity case, theoretical progress on both the analytical\cite{Zittartz_1970_I,Zittartz_1970_III} and the numerical side (in particular, implementations of the numerical renormalization group (NRG) method\cite{Satori92_Magnetic_impurities_in_SC_with_NRG, Sakai93_Magnetic_impurities_in_SC_with_NRG,Yoshioka98_Kondo_impurity_in_SC_with_NRG,Yoshioka00_NRG_Anderson_impurity_on_SC}) made it possible to identify two competing mechanisms which operate on this transition: the Kondo effect vs the superconducting pairing potential. Whereas the Kondo effect consists in the formation of a many-body singlet in which the magnetic impurity is screened by the conduction electrons at temperatures $T\ll T_K$ (i.e., the Kondo temperature \cite{hewson}), the superconducting pairing potential tends to favor a Cooper-pair condensate in which the magnetic impurity remains unscreened due to the presence of a gap $2\Delta$ in the density of states of the superconductor around the Fermi energy $E_F$\cite{Heinrich18_Review_single_adsorbates}. In particular, the 0-$\pi$ transition takes place when the Shiba state (whose position inside the gap depends on the ratio $T_K/\Delta$) crosses $E_F$, something that is theoretically predicted to occur at the critical value $(T_K/\Delta)_\text{c} \approx 0.3$\cite{Satori92_Magnetic_impurities_in_SC_with_NRG, Sakai93_Magnetic_impurities_in_SC_with_NRG,Yoshioka98_Kondo_impurity_in_SC_with_NRG,Yoshioka00_NRG_Anderson_impurity_on_SC}. Recently, important new questions driven by the rapidly-evolving experimental techniques have arisen in the context of Shiba physics. Among the many questions originated in the complexities of real experiments, the effect of the Rashba spin-orbit coupling (SOC) on the 0-$\pi$ transition constitutes an open problem. Here the question is: what is the effect of the Rashba SOC on, e.g., the position of the subgap states?
This question was addressed recently for a classical \cite{Kim15_Classical_Shiba_states_with_Rashba_SOC} or a quantum \cite{Li18_Rashba-induced_Kondo_screening_magnetic_impurity_two-dimensional} impurity embedded in a two-dimensional (2D) superconductor. Here we are interested in the regime where the spin of the impurity is a \emph{quantum-mechanical} object in contact with a one-dimensional (1D) superconducting nanowire. As we will show, important differences arise with respect to the 2D (classical or quantum) case. We recall that Rashba SOC is a crucial ingredient to observe topological Majorana quasiparticle excitations in 1D systems, both in proximitized semiconductor nanowire experiments \cite{Mourik12_Signatures_of_MF,Das2012_Zero-bias_peaks_and_splitting_in_an_Al-InAs_nanowire_topological_superconductor_as_a_signature_of_Majorana_fermions, Churchill2013_Majorana_NW, Albrecht2016, Deng16_MBS_in_QD_hybrid_NW,Gül2018, Zhang2018}, and in the Majorana ``Shiba-chain'' experiments\cite{NadjPerge14_Observation_of_Majorana_fermions_in_Fe_chains,Pawlak16_Probing_Majorana_wavefunctions_in_Fe_chains,Ruby15_MBS_in_Shiba_chain,Feldman16_High_resolution_Majorana_Shiba_chain,Jeon17_Distinguishing_MZMs}. Therefore, its effect cannot be disregarded in those experimental systems. However, due to the complicated many-body Kondo correlations that emerge already at a single quantum-impurity level, the effect of Rashba SOC on the 0-$\pi$ transition is hard to describe in detail. Another important question is the effect of the Rashba SOC on the Kondo temperature $T_K$. In the case of magnetic impurities in normal metals (i.e., in the absence of superconductivity), treated with the Kondo or Anderson models, the complexity of the problem has resulted in a variety of different conclusions.
Depending on the parameter regime (mainly, on the position of the localized impurity level), some authors\cite{Isaev12_Kondo_effect_in_the_presence_of_spin-orbit_coupling,Zarea12_Enhancement_of_Kondo_effect_through_Rashba_SOC} find an enhancement of $T_K$ induced by SOC, while others \cite{Malecki07_Two_Dimensional_Kondo_Model_with_Rashba_Spin-Orbit_Coupling, Zitko11_Kondo_effect_in_the_presence_of_Rashba_spin-orbit_interaction, Chen16_The_Kondo_temperature_of_a_two-dimensional_electron_gas_with_Rashba_spin–orbit_coupling,Wong16_Influence_Rashba_spin-orbit_coupling_Kondo_effect} predict a minor modification. In the case of an impurity coupled to a one-dimensional metallic nanowire, it has been reported \cite{deSousa16_Kondo_effect_quantum_wire_with_spin-orbit_coupling} that the effect of SOC in the nanowire is to increase $T_K$. Finally, for an impurity in contact with a 2D superconductor with Rashba SOC \cite{Li18_Rashba-induced_Kondo_screening_magnetic_impurity_two-dimensional}, the authors report an enhancement of the screening mechanisms. Motivated by these questions, in this work we study a single-level QD coupled to a 1D superconductor with strong Rashba SOC, and study its effect on the position of the subgap states, on the $0-\pi$ transition and on the Kondo temperature. We model the QD with the Anderson impurity model with on-site interaction $U$, and assume that the superconductor is a single-channel one-dimensional (1D) nanowire subject to a local pairing term $\Delta$ and to a strong Rashba SOC. To solve this problem, we have implemented the density-matrix renormalization group (DMRG) \cite{White_DMRG_92} and have introduced a logarithmic discretization in the 1D conduction band (as is usual in NRG implementations\cite{wilson75}). To the best of our knowledge, the DMRG method has not been applied to the Shiba-impurity problem before.
Since in a 1D geometry this method is known to perform well as the number of impurities increases, it might offer a versatile platform to study, e.g., small 1D clusters of magnetic nanostructures coupled to superconductors. We have tested and benchmarked our results using previous works where the NRG method was used in the absence of SOC\cite{Bauer07_NRG_Anderson_model_in_BCS_superconductor, Sakai93_Magnetic_impurities_in_SC_with_NRG}, with excellent agreement, showing that the DMRG reaches essentially the same degree of accuracy. These encouraging results pave the way to implement DMRG as a reliable alternative to describe many-body physics of subgap states induced by magnetic impurities. Our results show that the Rashba SOC in a 1D setup does not qualitatively modify the phase diagram of the 0-$\pi$ transition, affecting it only at a quantitative level through a modification of the effective local (i.e., at the site of the QD) density of states at the Fermi level $\rho_0$. This also leads us to conclude that in a 1D geometry the Rashba SOC has \emph{detrimental} effects on $T_K$. This paper is organized as follows. We begin in Sec. \ref{model} by describing the 1D model of a QD coupled to a superconductor nanowire with Rashba SOC. In Sec. \ref{Logarithmic} we focus on the DMRG and, in particular, on the implementation of the logarithmic-discretization procedure used to map our model onto an effective ``Wilson chain'' Hamiltonian\cite{wilson75}. In Sec. \ref{res} we present our DMRG results, focusing mainly on the position of the subgap states and the 0-$\pi$ transition as a function of the Rashba SOC parameter. Our results are computed both at and away from the particle-hole symmetric point of the Anderson model. Lastly, we devote Sec. \ref{conclusions} to a summary of the methodology and a discussion of future perspectives.
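The logarithmic-discretization step mentioned above can be made concrete with a minimal numerical sketch. For a flat band of half-width $D$ and discretization parameter $\Lambda$, the Wilson-chain hoppings decay asymptotically as $t_n \simeq \tfrac{D}{2}(1+\Lambda^{-1})\Lambda^{-n/2}$; the sketch below uses this large-$n$ formula and omits the $n$-dependent correction factors of the exact mapping:

```python
def wilson_chain_hoppings(Lambda, D, n_sites):
    """Asymptotic Wilson-chain hoppings t_n for a flat band of half-width D,
    t_n ~ (D/2) (1 + 1/Lambda) Lambda**(-n/2).
    Sketch only: the full mapping (Lanczos tridiagonalisation of the
    logarithmically discretised band) carries n-dependent corrections."""
    return [0.5 * D * (1 + 1 / Lambda) * Lambda ** (-n / 2)
            for n in range(n_sites)]

ts = wilson_chain_hoppings(Lambda=2.0, D=1.0, n_sites=8)
# energy-scale separation: successive hoppings shrink by a factor sqrt(Lambda)
print(all(abs(ts[n] / ts[n + 1] - 2.0 ** 0.5) < 1e-9 for n in range(7)))  # -> True
```

This exponential decay of the couplings is what lets a finite chain resolve the exponentially small scales ($T_K$, $\Delta$) relevant to the subgap states.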
\section{Theoretical Model}\label{model} We describe a single-level QD hybridized with a superconducting lead by means of the Anderson impurity model $H = H_\text{SC} + H_{d}$. Here, the term $H_\text{SC}$ describes a single-channel BCS superconductor with a Rashba SOC term: \begin{align} H_\text{SC}&= \sum_{k} \left[\sum_\sigma \epsilon_0\left(k\right) c^{\dagger}_{k \sigma} c_{k \sigma}+\Delta \left(c^{\dagger}_{k \uparrow} c^{\dagger}_{-k \downarrow} + \text{H.c.}\right)\right. \nonumber\\ &\left.-\sum_{\sigma,\sigma^\prime} \hbar \alpha_R k \left(c^{\dagger}_{k \sigma} \hat{\sigma}^y_{\sigma,\sigma^\prime} c_{k \sigma^\prime} \right)\right], \label{H_SC} \end{align} where $c_{k\sigma}$ is the annihilation operator of a fermionic quasiparticle with momentum $k$ and spin projection $\sigma$ along the $\hat{z}$-axis, and is defined so that the usual anticommutation relation $\{c_{k\sigma},c^\dagger_{k^\prime \sigma^\prime}\}=\delta_{k,k^\prime}\delta_{\sigma,\sigma^\prime}$ holds. The quantity $\epsilon_0\left(k\right)$ is the dispersion relation of the quasiparticles in the 1D conduction band in the absence of both Rashba SOC and pairing interaction, and is taken with respect to $E_F$. The BCS pairing potential $\Delta$ induces a superconducting ground state, and opens a SC gap of size 2$\Delta$ around $E_F$ in the spectrum of quasiparticles\cite{Tinkham_Introduction_to_superconductivity}. For simplicity, here we have not considered the usual self-consistent equation for the BCS order parameter, and therefore our results are limited to the regime $T\ll T_\text{c}$, with $T_\text{c}$ the superconducting critical temperature. Finally, $\alpha_R$ is the Rashba SOC parameter (note that $\alpha_R$ has units of velocity), generated by the breaking of inversion symmetry, and $\hat{\sigma}^y$ is the Pauli matrix acting on spinor space.
This model could represent the situation in, e.g., a semiconducting quantum wire proximitized by a nearby bulk superconductor (Pb or Al), as used recently in Majorana experiments\cite{Mourik12_Signatures_of_MF,Das2012_Zero-bias_peaks_and_splitting_in_an_Al-InAs_nanowire_topological_superconductor_as_a_signature_of_Majorana_fermions, Churchill2013_Majorana_NW,Albrecht2016,Deng16_MBS_in_QD_hybrid_NW,Gül2018, Zhang2018}. We stress that the assumption of a 1D superconductor is not essential for the implementation of the numerical techniques presented in this work. Higher-dimensional geometries, such as magnetic impurities in 2D SCs, can also be described, but we defer these studies to future work. The term $H_d$ describes a single-level QD with strong local Coulomb repulsion $U$, coupled to the SC: \begin{align} H_{d} &= \sum_{\sigma} \epsilon_d n_{d \sigma}+U n_{d \uparrow}n_{d \downarrow}+\sum_{k\sigma} \frac{V}{\sqrt{N}} \left(d^{\dagger}_{\sigma} c_{k \sigma} + \text{H.c.}\right), \label{H_imp} \end{align} where $d^{\dagger}_{ \sigma}$ creates an electron with spin projection $\sigma$ in the QD, and $n_{d\sigma}=d^\dagger_\sigma d_\sigma$ is the corresponding occupation-number operator. The parameter $\epsilon_d$ is the energy level of the dot, which is assumed to be tuned by means of external gate voltages, and $U$ is the local Coulomb repulsion. The parameter $V$ is the hybridization hopping amplitude between the QD and the SC nanowire. For later use, it is convenient to define here the \emph{effective} hybridization parameter $\Gamma_0=\pi \rho_0 V^2$, where $\rho_0$ is the density of states at the Fermi level. To gain insight into the physical aspects of this Hamiltonian, we can assume the system is in a particle-hole symmetric situation (i.e., $\epsilon_d=-U/2$) with $\Gamma_0 \ll U$. Under such conditions, the QD acts as an effective spin-1/2 impurity, with the occupation frozen at $\langle n_{d \sigma} \rangle \simeq 1/2$.
In the absence of SC pairing (i.e., $\Delta=0$), the conduction-band electrons near $E_F$ tend to screen this effective spin-1/2 impurity and create Kondo correlations which eventually give rise to the many-body ``Kondo singlet'' characterized by an energy scale $T_K\sim D \exp{\left[-\pi U/8 \Gamma_0\right]}$\cite{Haldane_1978}. However, in the presence of the SC gap, due to the lack of quasiparticles in the energy region $2\Delta$ around $E_F$, the screening mechanism fails if $T_K \ll \Delta$ and the QD remains unscreened. This is the essence of the ``0-$\pi$'' transition. Before implementing the numerical solution of this model, it is convenient first to introduce a unitary transformation in spinor space in order to eliminate the Rashba SOC from $H_\text{SC}$, i.e., $\left(\tilde{c}_{k +}, \tilde{c}_{k -}\right)^T= \hat{U} \left(c_{k \uparrow}, c_{k \downarrow}\right)^T$, where the unitary transformation $\hat{U}$ is a $\frac{\pi}{2}$-rotation in spinor space around the $\hat{x}$ axis: $\hat{U}=e^{i \frac{\pi}{4} \hat{\sigma}_x}$. In this new basis, the transformed Hamiltonian $\tilde{H}_\text{SC}=\hat{U}^\dagger H_\text{SC} \hat{U}$ explicitly reads \begin{align} \tilde{H}_\text{SC}&= \sum_{k} \left[\sum_{h=\pm} \epsilon_{h}\left(k\right) \tilde{c}^{\dagger}_{k h} \tilde{c}_{k h}+ \Delta \left(\tilde{c}^{\dagger}_{k +} \tilde{c}^{\dagger}_{(-k) -} + \text{H.c.} \right)\right], \label{H_rot} \end{align} where we have defined the new band dispersion $\epsilon_{h}\left(k\right) \equiv \epsilon_0\left(k\right) + h \hbar \alpha_R k$, with $h=\pm$ playing the role of an effective ``up'' or ``down'' spin projection along the $\hat{y}$-axis. The same transformation can be implemented for the QD term, $\tilde{H}_d=\hat{U}^\dagger H_d \hat{U}$.
Explicitly: \begin{align} \tilde{H}_{d} &= \sum_{h=\pm}\epsilon_d \tilde{n}_{d h}+U \tilde{n}_{d +}\tilde{n}_{d -}+\sum_{k,h=\pm} \frac{V}{\sqrt{N}} \left(\tilde{d}^{\dagger}_{h} \tilde{c}_{k h} + \text{H.c.}\right), \label{H_rotimp} \end{align} where the new impurity operators are $\left(\tilde{d}_{+}, \tilde{d}_{-}\right)^T= \hat{U} \left(d_{\uparrow}, d_{\downarrow}\right)^T$. Note that in the transformed Hamiltonian $\tilde{H}=\tilde{H}_\text{SC}+\tilde{H}_d$, the Rashba SOC term has been eliminated and is now completely encoded in the new dispersion relation $\epsilon_{h}\left(k\right)$. Moreover, this transformed Hamiltonian is a one-channel Anderson Hamiltonian. This is not a peculiarity of 1D: as shown in Ref. \onlinecite{Zitko11_Kondo_effect_in_the_presence_of_Rashba_spin-orbit_interaction}, the single-orbital Anderson impurity model is always effectively a single-channel problem, independently of dimensionality and of the type of conduction band. Following Malecki \cite{Malecki07_Two_Dimensional_Kondo_Model_with_Rashba_Spin-Orbit_Coupling}, we assume a quadratic dispersion $\epsilon_0\left(k\right)=\hbar^2k^2/2m^*-\mu$, with $m^*$ the renormalized mass of the band quasiparticles and $\mu$ the chemical potential. With this, the Fermi energy is $E_F=\mu$. We obtain a modified Fermi wavevector and Fermi velocity due to the Rashba SOC: \begin{align} k_{Fh}&=k^0_F\sqrt{1+\frac{\epsilon_R}{\mu}}-hk_R,\\ v_{Fh}&=\frac{1}{\hbar} \left. \frac{\partial \epsilon_{h}\left(k\right)}{\partial k}\right|_{k=k_{Fh}}=v^0_F\sqrt{1+\frac{\epsilon_R}{\mu}} \label{eq:vFh}, \end{align} where $k^0_F=\sqrt{2m^*\mu}/\hbar$ and $v^0_F=\hbar k^0_F/m^*$ are, respectively, the Fermi wavevector and the Fermi velocity in the absence of Rashba SOC, and where we have defined a ``Rashba momentum'' $k_R=m^*\alpha_R/\hbar$ and a Rashba energy $\epsilon_R=m^*\alpha_R^2/2$, so that $\epsilon_R/\mu=\left(\alpha_R/{v^0_F}\right)^2$.
In the following, when referring to the effects of the Rashba SOC, we will alternatively refer to the Rashba energy $\epsilon_R$ or to the Rashba coupling $\alpha_R$. Since in a 1D geometry the density of normal states at the Fermi energy is obtained from the expression $\rho_0\left(\epsilon_R\right) = 1/\left(2 \pi v_{Fh}\right)$, from Eq. (\ref{eq:vFh}) we can obtain the expression of the density of states \textit{modified} by the effect of the Rashba SOC \begin{align} \rho_0\left(\epsilon_R\right)&=\frac{\rho_0}{\sqrt{1+\frac{\epsilon_R}{\mu}}} \label{eq:rho0R}. \end{align} Therefore, in this 1D case the effect of the Rashba SOC appears \textit{only} through a modification of the density of states of the conduction band, for the purposes of this work\cite{note1}. In what follows, we will assume that the Fermi level is far from the bottom of the band, which we assume is located at energy $\epsilon=-D$, and therefore we linearize the 1D spectrum in a window of energy $2D$ around $E_F$, i.e., $\epsilon_h\left(k\right)\simeq v_{Fh}k$. This amounts to replacing the original band by a symmetric, half-filled flat band with a constant density of states in the region $E_F-D < \epsilon < E_F+ D$. This is the most important approximation in our work, which nevertheless is the standard one in most NRG studies (as we will see in Sec. \ref{Logarithmic}, it considerably simplifies the implementation of the logarithmic-discretization method). In addition, this approximation imposes the condition $\mu=D$ in Eq. (\ref{eq:rho0R}) (i.e., a half-filled band), and therefore the decrease in the density of states at $E_F$ can be interpreted in terms of a modified (broader) effective conduction band due to the effect of the Rashba SOC, i.e., $D\rightarrow D\left(\epsilon_R\right)\equiv D\sqrt{1+\frac{\epsilon_R}{D}}$. In this way, the product $2D\left(\epsilon_R\right)\rho_0\left(\epsilon_R\right)$ remains constant and the number of electrons in the effective conduction band is preserved.
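The chain of relations above (Fermi wavevector, Fermi velocity, and reduced density of states) can be verified numerically. The following is a minimal sketch with illustrative units $\hbar=m^*=\mu=1$ and an arbitrarily chosen $\alpha_R$; none of these values are taken from our calculations.

```python
import numpy as np

# Minimal numerical check of the Rashba-modified Fermi quantities above.
# Units and values are illustrative (hbar = m* = mu = 1, alpha_R arbitrary),
# not those used in the rest of the paper.
hbar, m_star, mu = 1.0, 1.0, 1.0
alpha_R = 0.6                              # Rashba parameter (velocity units)
eps_R = m_star * alpha_R**2 / 2.0          # Rashba energy

def eps_h(k, h):
    """Dispersion eps_0(k) + h*hbar*alpha_R*k with eps_0(k) = hbar^2 k^2/(2m*) - mu."""
    return hbar**2 * k**2 / (2.0 * m_star) - mu + h * hbar * alpha_R * k

kF0 = np.sqrt(2.0 * m_star * mu) / hbar    # Fermi wavevector at alpha_R = 0
vF0 = hbar * kF0 / m_star                  # Fermi velocity at alpha_R = 0
kR = m_star * alpha_R / hbar               # "Rashba momentum"

kF_plus = kF0 * np.sqrt(1.0 + eps_R / mu) - kR   # closed form, h = +1 branch
vF = vF0 * np.sqrt(1.0 + eps_R / mu)             # h-independent Fermi velocity

# kF_plus is a zero of the h = +1 dispersion
assert abs(eps_h(kF_plus, +1)) < 1e-12
# The group velocity at kF_plus agrees with vF (central difference, exact here)
dk = 1e-6
v_num = (eps_h(kF_plus + dk, +1) - eps_h(kF_plus - dk, +1)) / (2.0 * dk) / hbar
assert abs(v_num - vF) < 1e-6
# The density of states 1/(2 pi vF) is reduced by 1/sqrt(1 + eps_R/mu)
assert abs(1.0 / (2.0 * np.pi * vF)
           - (1.0 / (2.0 * np.pi * vF0)) / np.sqrt(1.0 + eps_R / mu)) < 1e-15
```

The same checks pass for the $h=-1$ branch, since the square-root factor is independent of $h$.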
We stress that this is a \emph{generic} property of a Rashba-coupled 1D band, and is independent of the above approximation. Consequently, it is easy to see that the effective hybridization is renormalized to lower values: \begin{equation} \Gamma\left(\epsilon_R/D\right)=\frac{\Gamma_0}{\sqrt{1+\frac{\epsilon_{R}}{D}}}. \label{newGamma} \end{equation} This result is consistent with Refs. \onlinecite{Zitko11_Kondo_effect_in_the_presence_of_Rashba_spin-orbit_interaction, Chen16_The_Kondo_temperature_of_a_two-dimensional_electron_gas_with_Rashba_spin–orbit_coupling}, where the same conclusion was obtained using, respectively, the NRG and Monte Carlo approaches. \section{Logarithmic discretization and DMRG} \label{Logarithmic} Having established that the effect of the Rashba SOC enters essentially through a renormalized density of states, we now focus on the implementation of the DMRG method in order to obtain the ground-state properties of the system. An important feature of this problem is that the BCS Hamiltonian (\ref{H_rot}) does not preserve the number of particles: the presence of the pairing term $\sim \Delta \left(\tilde{c}^{\dagger}_{k +} \tilde{c}^{\dagger}_{(-k) -} + \text{H.c.} \right)$ changes the number of particles of an $N$-particle state by $\pm 2$. Note, however, that the fermion parity, i.e., $P=(-1)^N$, is a conserved quantity which can be used to classify the different many-body states in the Hilbert space when implementing DMRG. One important aspect of Shiba systems is the exponential localization of subgap states, characterized by a localization length $\xi$. A rough estimation of $\xi$ in our case can be obtained assuming, for simplicity, a classical (instead of a quantum) spin. Following Ref.
\onlinecite{Balatsky_2006}, for the simplest case of isotropic scattering the Shiba state is localized around the impurity with localization length $\xi=\xi_{0}/\left|\sin\left(2\delta_{0}\right)\right|$, where $\xi_{0}$ is the coherence length of the BCS superconductor, defined as $\xi_{0}\simeq\hbar v_{F}/\Delta$, and $\delta_{0}$ is the $s$-wave phase shift due to the magnetic scattering with the impurity. From Eq. (6.10) in Ref. \onlinecite{Balatsky_2006}, the relation between $E_b$, the energy of the Shiba state within the gap, and the phase shift is $\frac{E_b}{\Delta} =\cos\left(2\delta_{0}\right)$, and therefore, we can write the localization length as \begin{align} \xi & =\frac{\xi_{0}}{\sqrt{1-\frac{E_b^2}{\Delta^2}}}. \end{align} From this expression, it is easy to realize that the localization length of the Shiba level diverges as its energy gets close to the superconductor gap edge (i.e., $E_b/\Delta \rightarrow 1$). This is particularly problematic for real-space methods such as DMRG, which can reach system sizes of up to $L_\text{max} \sim 300$ sites, depending on the implementation. This means that eventually, the localization length will be $\xi \gg L_\text{max} $, and considerable errors arising from finite-size effects will appear. As mentioned before, the case of a single impurity coupled to a SC host \emph{without} Rashba SOC has been studied in previous works by means of the NRG method \cite{Satori92_Magnetic_impurities_in_SC_with_NRG, Sakai93_Magnetic_impurities_in_SC_with_NRG,Yoshioka98_Kondo_impurity_in_SC_with_NRG,Yoshioka00_NRG_Anderson_impurity_on_SC, Bauer07_NRG_Anderson_model_in_BCS_superconductor,Zitko11_Kondo_effect_in_the_presence_of_Rashba_spin-orbit_interaction, Zitko16}. One crucial step in NRG implementations corresponds to the logarithmic discretization procedure of the conduction band, and the subsequent mapping of the Hamiltonian onto a Wilson chain Hamiltonian. 
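To make this finite-size limitation concrete, a minimal numerical sketch (illustrative units with $\xi_0=1$; the value $L_\text{max}=300$ is the typical scale quoted above) shows how quickly $\xi$ exceeds a DMRG-accessible system size as $E_b$ approaches the gap edge:

```python
import numpy as np

# Sketch (illustrative units, xi0 = 1) of the divergence of the Shiba
# localization length xi = xi0 / sqrt(1 - (Eb/Delta)^2) near the gap edge.
xi0 = 1.0
L_max = 300.0     # typical maximal DMRG system size, in units of xi0

def xi(eb_over_delta):
    return xi0 / np.sqrt(1.0 - eb_over_delta**2)

for r in (0.5, 0.9, 0.99, 0.999):
    print(f"Eb/Delta = {r:5.3f}  ->  xi/xi0 = {xi(r):9.2f}")

# xi exceeds L_max once Eb/Delta > sqrt(1 - (xi0/L_max)^2) ~ 0.9999944,
# i.e. only extremely close to the gap edge, but there the errors are severe.
threshold = np.sqrt(1.0 - (xi0 / L_max) ** 2)
```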
Here, although in principle the DMRG \emph{does not} require such a mapping, we will adopt it in order to deal with the extremely large subgap-state localization length, which generically exceeds the maximal system sizes allowed by our computational resources. Therefore, following the above-mentioned references, we implement a logarithmic discretization, which effectively maps the original Hamiltonian $\tilde{H}_\text{SC}$, defined in $k$-space, onto an effective one-dimensional semi-infinite chain (i.e., the Wilson chain). Since we follow a standard technique, we do not provide here the details of this derivation and refer the reader to Refs. \onlinecite{Bulla08_NRG_review,wilson75, Satori92_Magnetic_impurities_in_SC_with_NRG, Sakai93_Magnetic_impurities_in_SC_with_NRG,Yoshioka98_Kondo_impurity_in_SC_with_NRG, Yoshioka00_NRG_Anderson_impurity_on_SC}, where the method is well explained. Applying the logarithmic discretization, we obtain the effective Wilson-chain Hamiltonian: \begin{widetext} \begin{align} \bar{\tilde{H}} &= \bar{U} \tilde{n}_{d +} \tilde{n}_{d -} + \sum_{h} \left[\bar{\epsilon}_d \tilde{n}_{d h} + \bar{V} \left(\tilde{d}^{\dagger}_{h} \tilde{f}_{0 h} + \text{H.c.}\right) \right]+ \sum_{n=0}^\infty \left[ \sum_{h=\pm} \left(\bar{\gamma}_{n} \tilde{f}^{\dagger}_{n h} \tilde{f}_{n+1 h} + \text{H.c.}\right)+\bar{\Delta} \left(\tilde{f}^{\dagger}_{n +} \tilde{f}^{\dagger}_{n -} + \text{H.c.} \right) \right], \label{H_DMRG} \end{align} \end{widetext} where the bar indicates dimensionless quantities expressed in units of $D$ (e.g., $\bar{\Delta}\equiv \Delta/D$). The index $n$ (interpreted here as the effective ``site'' in the Wilson chain) corresponds to the $n$-th energy shell $\varLambda^{-\left(n+1\right)}<|\bar{\epsilon}|<\varLambda^{-n}$ in the logarithmically discretized conduction band, with $\varLambda>1$ the discretization parameter. Consistently, the operator $\tilde{f}_{n h}^\dagger$ creates a fermion in that energy shell.
The effective ``hopping'' parameter $\bar{\gamma}_{n}$ acquires the form\cite{wilson75,Bulla08_NRG_review}: \begin{equation} \bar{\gamma}_{n } = \sqrt{1+\frac{\epsilon_R}{D}} \frac{(1+\varLambda^{-1})(1-\varLambda^{-n-1})}{2 \sqrt{(1-\varLambda^{-2n-1})} \sqrt{(1-\varLambda^{-2n-3})}} \varLambda^{-n/2}, \end{equation} which is standard, except for the extra renormalization factor $\sqrt{1+\frac{\epsilon_R}{D}}$ due to the effect of the Rashba SOC. This renormalization of the effective hopping parameter is directly related to the change in the conduction-band width $D\left(\epsilon_R\right)\equiv D\sqrt{1+\frac{\epsilon_R}{D}}$ described in the previous Section. The Hamiltonian (\ref{H_DMRG}) has been solved using the DMRG method\cite{White_DMRG_92} for various values of the parameter $U/\Delta$. We have calculated the ground-state (GS) energy and also the spectral function $\rho_d\left(\omega\right)$ of the QD. To the best of our knowledge, the DMRG has not been used before to solve this kind of system, which has been treated mainly with the NRG method \cite{Satori92_Magnetic_impurities_in_SC_with_NRG, Sakai93_Magnetic_impurities_in_SC_with_NRG,Yoshioka98_Kondo_impurity_in_SC_with_NRG,Yoshioka00_NRG_Anderson_impurity_on_SC, Bauer07_NRG_Anderson_model_in_BCS_superconductor,Zitko11_Kondo_effect_in_the_presence_of_Rashba_spin-orbit_interaction, Zitko16}, perturbation theory in $U$ \cite{Zonda16_Perturbation_theory_of_SC_0_Pi_transition}, or equations of motion \cite{Li18_Rashba-induced_Kondo_screening_magnetic_impurity_two-dimensional}. In all cases, we have used a discretization parameter $\varLambda=2$; larger values are not convenient, since they would tend to concentrate a large number of discrete frequencies near $E_F$, where the SC gap suppresses the density of states \cite{Bauer07_NRG_Anderson_model_in_BCS_superconductor}.
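The decay of the hoppings along the Wilson chain can be sketched numerically; the values $\varLambda=2$ and $\epsilon_R/D=0.5$ below are illustrative and not tied to any particular figure:

```python
import numpy as np

# Sketch of the Wilson-chain hoppings gamma_n quoted above, including the
# Rashba renormalization factor sqrt(1 + eps_R/D). Illustrative values only.
Lambda = 2.0
eps_R_over_D = 0.5

def gamma_bar(n):
    """Dimensionless hopping gamma_n of the effective Wilson chain."""
    pref = np.sqrt(1.0 + eps_R_over_D)
    num = (1.0 + Lambda**-1) * (1.0 - Lambda**(-n - 1))
    den = 2.0 * np.sqrt(1.0 - Lambda**(-2 * n - 1)) * np.sqrt(1.0 - Lambda**(-2 * n - 3))
    return pref * (num / den) * Lambda**(-n / 2.0)

g = np.array([gamma_bar(n) for n in range(12)])
ratios = g[1:] / g[:-1]
# The hoppings decay exponentially along the chain:
# gamma_{n+1} / gamma_n -> Lambda^{-1/2} for large n
```

This exponential decay is what allows the semi-infinite chain to be truncated at a modest length in practice.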
This allows us to work with realistic values of $\Delta$ (we consider a typical value for the bandwidth $D\simeq 1$ eV, and for the SC gap $\Delta\simeq 1$ meV, so typically $\bar{\Delta}\simeq 10^{-3}$), and also with much smaller values of $\Delta$ when testing the universality of the model (see Section \ref{Universality} below). The presence of the SC gap is actually beneficial for the DMRG method \cite{DMRG_review}. A technical point here is that, as a consequence of the discretization, the density of conduction states at the Fermi level $\rho_{0,\varLambda}\left(0\right)$ decreases with respect to the continuum limit $\varLambda \rightarrow 1$. We have calculated $\rho_{0,\varLambda}\left(0\right)$ numerically for the Wilson chain without the impurity and in the absence of Rashba SOC, and determined the hybridization $V_\varLambda$ of the QD with the first site of the chain from the condition $\Gamma_0=\pi \rho_{0,\varLambda} V_\varLambda^2$. \section{Results} \label{res} When the QD is connected to the superconductor, the multiple Andreev reflections of fermions at the QD-SC nanowire interface give rise to localized Shiba or Andreev bound states. As already mentioned, the GS of the system can be either a singlet, in which case the fermion parity is even, or a doublet, in which case the parity is odd (see Ref. \onlinecite{Zitko16}). The energy difference between the odd-parity and even-parity GSs gives the energy of the subgap Shiba level $E_{b}$: \begin{equation} E_{b} = \pm\left(E_\text{o} - E_\text{e}\right), \label{eq:Shiba} \end{equation} where $ E_\text{e(o)} $ is the GS energy for the system in the subspace with an even (odd) number of particles. The $\pm$ signs appear due to the intrinsic particle-hole symmetry of the BCS Hamiltonian: each quasiparticle eigenstate $\psi\left(\epsilon \right)$ with energy $+\epsilon$ is related by charge conjugation to a ``partner'' quasi-hole state $\psi^\dagger\left(-\epsilon \right)$ at energy $-\epsilon$.
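The parity definition of $E_b$ can be illustrated in an exactly solvable limit. The following hedged toy takes $U=0$ (so that no 0-$\pi$ physics survives and this is \emph{not} the interacting DMRG calculation performed in this work): the Hamiltonian is then quadratic, and the lowest positive Bogoliubov-de Gennes eigenvalue of a finite chain is precisely the cost $E_\text{o}-E_\text{e}$ of adding one quasiparticle to the even-parity GS. All parameter values are illustrative.

```python
import numpy as np

# Hedged toy illustration of E_b = E_o - E_e in the non-interacting limit U = 0,
# where E_b is the lowest positive Bogoliubov-de Gennes (BdG) eigenvalue.
# Illustrative parameters; NOT the interacting model solved in the paper.
N = 200             # chain sites
t = 1.0             # nearest-neighbor hopping (half bandwidth D = 2t)
Delta = 0.2         # pairing amplitude on the chain
eps_d = 0.0         # dot level (particle-hole symmetric choice for U = 0)
V = 0.5             # dot-chain hybridization

dim = N + 1                         # dot (index 0) + chain (indices 1..N)
h = np.zeros((dim, dim))
h[0, 0] = eps_d
h[0, 1] = h[1, 0] = V
for i in range(1, N):
    h[i, i + 1] = h[i + 1, i] = -t

pair = np.zeros((dim, dim))
pair[1:, 1:] = Delta * np.eye(N)    # s-wave pairing on the chain only

# BdG matrix in the Nambu basis (u_up, v_down); spectrum symmetric around zero
H_bdg = np.block([[h, pair], [pair, -h]])
E = np.linalg.eigvalsh(H_bdg)
E_b = E[E > 1e-10].min()            # lowest odd-parity excitation energy
# A subgap (Andreev) level bound to the impurity: 0 < E_b < Delta
```

At finite $U$ this shortcut is unavailable, which is why the even- and odd-parity GS energies must be computed separately with DMRG.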
Subgap states are no exception and therefore finite-energy Shiba states \emph{must} appear in pairs symmetrically located around $E_F$. \subsection{Symmetrical point} We concentrate first on the electron-hole symmetric point of the Anderson model, i.e., $\epsilon_d=-U/2$. In Fig. \ref{Energy_Shiba} we show the energy of the Shiba states as a function of $U/\pi \Gamma_0$ for different values of $\epsilon_R$ and $\Delta$. For clarity, here we plot only the ``+'' Shiba-state branch in Eq. (\ref{eq:Shiba}). The singlet-to-doublet transition takes place when $E_b$ crosses zero energy. We can observe that the effect of the Rashba SOC is to shift the transition point to lower $U/\pi \Gamma_0$, thus favoring the transition to a doublet induced by the superconducting pairing interaction. Note that the effect of the Rashba SOC is more important for smaller SC gap $\Delta$. As a benchmark for validating our method, we have checked that the curves for $\epsilon_R=0$ match those previously reported using NRG \cite{Bauer07_NRG_Anderson_model_in_BCS_superconductor, Zitko16, Yoshioka00_NRG_Anderson_impurity_on_SC}. As shown in Fig. \ref{Energy_Shiba}, for all values of $U/\pi \Gamma_0$ and/or the SC gap $\Delta$, the agreement is excellent. In Fig. \ref{diagrama_fases} we show the critical gap $\Delta_\text{c}$ as a function of $U$, both scaled by $\pi \Gamma_0$. This type of diagram is similar to that shown by Bauer {\it{et al.}} (see Fig. 9 in Ref. \onlinecite{Bauer07_NRG_Anderson_model_in_BCS_superconductor}), and in addition we show the effect of the Rashba SOC. The curves $\Delta_\text{c}/\pi \Gamma_0$ vs $U/\pi \Gamma_0$ indicate the boundary between the singlet and doublet regions in parameter space. We see that the region with a doublet GS expands as the Rashba SOC increases (i.e., the curves are lowered and shifted to the left as $\alpha_R$ increases).
Note that for smaller values of $U/ \pi\Gamma_0$ (i.e., smaller than $\sim 0.63$ for $\alpha_R =0$), the transition disappears, or equivalently $\Delta_\text{c}\rightarrow \infty$. This can be explained by the fact that for $U/ \pi\Gamma_0\ll 1$, quantum fluctuations of the charge inhibit the formation of a net magnetic moment in the QD, and therefore the system never reaches the doublet GS phase. The point at which $\Delta_\text{c}\rightarrow \infty$ also shifts to lower values for finite values of $\alpha_R$. In Ref. \onlinecite{Bauer07_NRG_Anderson_model_in_BCS_superconductor}, the $0-\pi$ transition was addressed by varying the parameter $U/\pi \Gamma_0$ and/or the SC gap $\Delta$. Assuming a fixed value of $\Delta$, the parameter $U/\pi \Gamma_0$ is indeed a good parameter that allows us to explore the quantum phase diagram. Intuitively, a small value of $U/\pi \Gamma_0$ implies a large amount of charge fluctuations (and therefore a non-magnetic regime) in the QD. Then, the QD is not able to ``break'' the Cooper pairs, and consequently the GS of the system is a singlet adiabatically connected to the BCS state. For moderate values of $U/\pi \Gamma_0$ such that $T_K\sim D e^{-\pi U/8\Gamma_0}\gg \Delta$, the BCS state smoothly evolves into a many-body Kondo singlet, and throughout this evolution the GS remains in the singlet subspace. Finally, for larger values of $U/\pi \Gamma_0$ such that $T_K\ll \Delta$, the QD develops a well-defined $S=1/2$ magnetic moment which cannot be screened due to the quasiparticle gap, and the GS becomes a doublet. The effect of the Rashba SOC term on top of the above physical picture occurs through a modification of $\rho_0\rightarrow \rho_0\left(\epsilon_R\right)$ [see Eq. (\ref{eq:rho0R})], which in our 1D case is always lowered as the Rashba SOC increases. As we will analyze in detail in Sec.
\ref{Universality}, this results in a weakening of the Kondo effect; consequently, the critical $\left(U/\pi \Gamma_0\right)_\text{c}$ is shifted to lower values. Indeed, as we have seen from the analysis of Figs. \ref{Energy_Shiba} and \ref{diagrama_fases}, the Rashba SOC in the host favours the doublet phase. Then, if the Rashba SOC coupling could be tuned as a parameter in the Hamiltonian, as happens in semiconductors and interfaces coupled to gate potentials \cite{Guido_18_Tuning_Rashba_spin_orbit_coupling_in_homogeneous_semiconductor_nanowires, Iorio_2019_Vectorial_Control_of_the_Spin_Orbit_Interaction_in_Suspended_InAs_Nanowires, Herranz_15_Engineering_two_dimensional_superconductivity_and_Rashba_SOC, Direct_Rashba_spin-orbit_interaction_in_Si_and_Ge_nanowires_with_different_growth_directions}, this mechanism could be used for the \textit{in-situ} control of the 0-$\pi$ transition. \begin{figure}[t] \includegraphics[width=8.5cm]{fig1.pdf} \caption{(Color online) Energy (in units of the SC gap $\Delta$) of one branch of the Shiba states as a function of $U/\pi \Gamma_0$ for $\Delta=0.001$ (top) and $\Delta=0.06$ (bottom). Starting from $U/\pi \Gamma_0=0$, when the energy crosses the line $E_b / \Delta =0$, the transition from a Kondo singlet to a doublet occurs. We remark that there is another branch symmetrically located with respect to zero energy, not shown for clarity. The NRG results for zero Rashba SOC coupling are from Ref. \onlinecite{Bauer07_NRG_Anderson_model_in_BCS_superconductor}.} \label{Energy_Shiba} \end{figure} When referring to experiments, it is worth noting that $D$ is not a relevant experimental parameter; a more important quantity is $\mu$, which essentially determines the filling of the band(s) and in semiconductor nanowires can be tuned with voltage gates \cite{Das2012_Zero-bias_peaks_and_splitting_in_an_Al-InAs_nanowire_topological_superconductor_as_a_signature_of_Majorana_fermions}.
Since we have imposed $\mu=D$ in order to have a half-filled band, we can take as a reference the values of the chemical potential in experiments. In Ref. \onlinecite{Das2012_Zero-bias_peaks_and_splitting_in_an_Al-InAs_nanowire_topological_superconductor_as_a_signature_of_Majorana_fermions}, experiments on InAs nanowires report values of $\mu$ up to $\sim 0.2$ meV, and spin-orbit energies of $\epsilon_R=75$ $\mu\text{eV}$, so in that case $\epsilon_R/\mu \sim 0.375$, in accordance with the values we have used for our calculations; it is important to keep in mind, however, that those experimental values might vary with the chemical potential. On the other hand, much larger Rashba spin-orbit energies $\epsilon_R\simeq 6.5$ meV have been found experimentally in InAs nanowires \cite{Wang_2017, Kammhuber2017}. \begin{figure}[h] \includegraphics[width=8.5cm]{fig2.pdf} \caption{(Color online) Phase diagram of the model showing the critical gap as a function of $U/\pi \Gamma_0$ for different values of the Rashba energy.} \label{diagrama_fases} \end{figure} \subsection{Spectral function} Using the correction-vector scheme of DMRG presented by Nocera and Alvarez\cite{Nocera_PhysRevE.94.053308}, we have calculated the spectral function at the QD as \begin{align} \rho_d\left(\omega\right) = & - \frac{1}{\pi} \text{Im} \left\{ \langle \psi_\text{GS} | d \left(\frac{1}{\omega + i \eta - H + E_\text{GS} } \right) d^\dagger |\psi_\text{GS} \rangle +\right. \nonumber \\ & \left. \langle\psi_\text{GS} | d^\dagger \left( \frac{1}{\omega + i \eta + H - E_\text{GS} } \right) d |\psi_\text{GS} \rangle \right\} \label{eq:spectral} \end{align} where $\eta$ is a small broadening parameter introduced to avoid the poles of the Green's function on the real axis, and the GS energy $E_\text{GS}$ can be either $E_\text{e}$ or $E_\text{o}$, depending on the specific values of the parameters $\Delta$, $U/\pi\Gamma_0$, and $\alpha_R$.
We use a frequency-dependent $\eta=\eta(\omega)$ to increase the accuracy of the plot: \begin{align} \eta\left(\omega\right)&=\begin{cases} \Delta/20 &\text{if }\left|\omega\right|<\Delta,\\ Ae^{\left|\omega\right|}+B &\text{if }\Delta<\left|\omega\right|< 0.2D,\\ 0.15D &\text{if } 0.2D<\left|\omega\right|<D, \end{cases} \label{eq:eta} \end{align} where the constants $A$ and $B$ are chosen so that $\eta(\omega)$ is a continuous function. In Fig. \ref{spectral} we show the calculated $\rho_d\left(\omega\right)$ for the parameters indicated in the caption, for which the system is in the singlet phase. All the expected features can be observed with clarity: the SC energy gap of width $2\Delta$ and the Shiba states, which clearly appear as two peaks inside the gap. Each Shiba peak inside the gap can be described by a Lorentzian function $L(\omega,\omega_0,A)=\frac{\eta A}{\pi\left[\eta^2+(\omega-\omega_0)^2\right]}$, where $\omega_0$ and $A$ are fitting parameters controlling, respectively, the center of the peak and its spectral weight. The parameter $\eta$ is the width of the Lorentzian and is the same function defined in Eq. (\ref{eq:eta}) and used in the correction-vector calculations for each $\omega$ of the spectral function in Eq. (\ref{eq:spectral}). As an internal sanity check, we have verified that the centers of the Shiba peaks $\omega_0$ match (within the DMRG numerical precision) the corresponding values of $E_b$ obtained from Eq. (\ref{eq:Shiba}). With respect to the spectral weights obtained within this scheme, we do not show the results here, but we mention that for $\alpha_R = 0$ our calculations agree very well with those reported in previous works, specifically with the weights appearing in Fig. 3 of Ref. \onlinecite{Bauer07_NRG_Anderson_model_in_BCS_superconductor}. The most important feature of the spectral weight is that it has an abrupt discontinuity at the 0-$\pi$ transition: its value in the singlet phase is reduced to half in the doublet phase.
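For concreteness, the piecewise broadening $\eta(\omega)$ of Eq. (\ref{eq:eta}) can be sketched as follows, with $A$ and $B$ fixed by continuity at $|\omega|=\Delta$ and $|\omega|=0.2D$. The values $D=1$ and $\Delta=10^{-3}$ are the typical scales quoted earlier; taking the exponential branch as a function of $|\omega|$ (so that $\eta$ is even in $\omega$) is our assumption.

```python
import numpy as np

# Sketch of the piecewise broadening eta(omega), with A and B fixed by
# continuity at |omega| = Delta and |omega| = 0.2 D. Illustrative scales
# D = 1 and Delta = 1e-3; evenness in omega is assumed (|omega| in the exponent).
D, Delta = 1.0, 1e-3

# Continuity conditions: A e^{Delta} + B = Delta/20, A e^{0.2 D} + B = 0.15 D
A = (0.15 * D - Delta / 20) / (np.exp(0.2 * D) - np.exp(Delta))
B = Delta / 20 - A * np.exp(Delta)

def eta(omega):
    w = abs(omega)
    if w < Delta:
        return Delta / 20          # fine resolution inside the gap
    elif w < 0.2 * D:
        return A * np.exp(w) + B   # smooth crossover region
    else:
        return 0.15 * D            # coarse resolution at high energies
```

The small constant value inside the gap resolves the narrow Shiba peaks, while the large value at high energies smooths the discrete Wilson-chain poles.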
This discontinuity is due to the abrupt change in the degeneracy of the GS from $g=1$ (singlet) to $g=2$ (doublet), as explained in Refs. \onlinecite{Sakai93_Magnetic_impurities_in_SC_with_NRG,Yoshioka98_Kondo_impurity_in_SC_with_NRG}, and consequently it is a universal feature which is preserved at finite $\alpha_R$. \begin{figure}[htb] \includegraphics[width=8.5cm]{fig3.pdf} \caption{(Color online) Spectral function $\rho_d\left(\omega\right)$ of the impurity for $U/(\pi \Gamma_0)=4, \Delta=0.001$ and $\epsilon_R=0$. Inset: zoom inside the gap to visualize the two Shiba peaks fitted by $F(\omega,\omega_0,A)=L(\omega,\omega_0,A)+L(-\omega,-\omega_0,A)$. The energy of the Shiba peaks inside the gap perfectly matches that plotted in Fig. \ref{Energy_Shiba}.} \label{spectral} \end{figure} \subsection{Away from the symmetrical point} \label{assymetry} It has already been established \cite{Bauer07_NRG_Anderson_model_in_BCS_superconductor} that deviations from the electron-hole symmetrical point $\epsilon_d=-U/2$ (for fixed $\Delta$, $U$ and $\Gamma_0$) can also trigger the 0-$\pi$ transition. This situation is experimentally more relevant, since in many cases the energy level of the impurity $\epsilon_d$ cannot be controlled, and therefore it would be very rare to find the system ``self-tuned'' at the symmetric point. Moreover, in QDs or nanowires where $\epsilon_d$ can in principle be controlled by the gate potential, the asymmetry can be tuned \textit{in situ}, providing an additional ``knob'' to access the 0-$\pi$ transition. Therefore, describing the effects of electron-hole asymmetry is both intrinsically and experimentally relevant. Defining the asymmetry parameter as\cite{Bauer07_NRG_Anderson_model_in_BCS_superconductor} $\zeta_d=\epsilon_d+U/2$, such that the symmetrical point corresponds to $\zeta_d=0$, in Fig. \ref{asymmetry} we show the effect of the Rashba coupling on the position of the Shiba states when $\zeta_d$ is varied.
For the parameters of the figure, at $\zeta_d=0$ the system is in the doublet phase; consequently, increasing the Rashba SOC coupling at that point drives the system deeper into the doublet phase (the energy of the Shiba states decreases from $0$ as $\alpha_R$ increases). As the system enters the asymmetric regime, the critical value $\zeta_{d,\text{c}}$ at which the transition occurs increases as $\alpha_R$ increases. Hence, also in the asymmetric case the Rashba SOC favours the doublet phase, i.e., weakens the Kondo regime. \begin{figure}[h] \includegraphics[width=8.5cm]{fig4.pdf} \caption{(Color online) Energy of one branch of Shiba states for different values of the Rashba SOC coupling, as a function of the asymmetry parameter $\zeta_d$. The NRG results for zero Rashba SOC coupling are from Ref. \onlinecite{Bauer07_NRG_Anderson_model_in_BCS_superconductor}.} \label{asymmetry} \end{figure} \subsection{Universality and Kondo temperature} \label{Universality} In this section, we discuss an important result of our work: the evolution of the Kondo temperature as a function of the Rashba SOC coupling. In addition, we discuss the universality of the model, following the work done previously by Yoshioka and Ohashi \cite{Yoshioka00_NRG_Anderson_impurity_on_SC} for the case $\alpha_R=0$. For $\alpha_R=0$, we recall that the Kondo temperature is defined as\cite{Haldane_1978}: \begin{align} T_K &= 0.364 \sqrt{\frac{2 \Gamma_0 U}{\pi}} \exp\left[{\frac{\epsilon_d \left(\epsilon_d+U\right)\pi }{2 \Gamma_0 U}}\right], \label{TK_Haldane} \end{align} which for the symmetric case reduces to: \begin{align} T_K &= 0.182 U \sqrt{\frac{8 \Gamma_0 }{\pi U}} \exp\left[-{\frac{\pi U }{8 \Gamma_0}}\right]. \label{TK_Haldane_sym} \end{align} With this expression, Yoshioka and Ohashi \cite{Yoshioka00_NRG_Anderson_impurity_on_SC} showed that, within the Kondo regime $\pi \Gamma_0 < U$, the energy of the Shiba state $E_b/\Delta$ is a \emph{universal} function of $T_K/\Delta$.
As mentioned before, when $\pi \Gamma_0 > U$ the charge fluctuations in the dot inhibit the formation of a local magnetic moment and the system is away from the Kondo regime. It was also shown that universality breaks down well inside the doublet phase, for values of the SC gap $\Delta$ larger than the critical $\Delta_\text{c}$. Since in our single-impurity 1D case the only important effect of the Rashba coupling is to lower the effective hybridization $\Gamma\left(\epsilon_R/D \right)$ [see Eq. (\ref{newGamma})], the universality found for $\alpha_R=0$ \cite{Yoshioka00_NRG_Anderson_impurity_on_SC} will also occur at $\alpha_R \neq 0$, provided we define the Kondo temperature through a generalization of Haldane's formula for finite $\epsilon_R$, as was done in Ref. \onlinecite{Wong16_Influence_Rashba_spin-orbit_coupling_Kondo_effect}: \begin{figure}[t] \includegraphics[width=8.5cm]{fig5.pdf} \caption{(Color online) Normalized Kondo temperature according to Eq. (\ref{TK_Haldane2}) as a function of the normalized Rashba energy. In this calculation we have used the value $U=0.5D$.} \label{TK} \end{figure} \begin{equation} T_K = 0.364 \sqrt{\frac{2 \Gamma\left(\epsilon_R/D\right) U}{\pi}} \exp\left[{\frac{\epsilon_d \left(\epsilon_d+U\right)\pi }{2 \Gamma\left(\epsilon_R/D\right) U}}\right]. \label{TK_Haldane2} \end{equation} In Fig. \ref{TK} we show $T_K$ given by Eq. (\ref{TK_Haldane2}) as a function of $\epsilon_R/D$. Since the Rashba coupling always lowers the density of states (with respect to $\rho_0$), the Kondo temperature always decreases when $\alpha_R$ (and hence $\epsilon_R$) is increased. We stress that the above phenomenology is a consequence of the reduced dimensionality of the 1D nanowire, and that in a 2D system the situation might be different.
The effect of the Rashba SOC on $T_K$ in the 2D case has been treated in many previous works \cite{Malecki07_Two_Dimensional_Kondo_Model_with_Rashba_Spin-Orbit_Coupling, Zarea2012_Enhancement_Kondo_Effect_through_Rashba_Spin-Orbit_Interactions,Isaev12_Kondo_effect_in_the_presence_of_spin-orbit_coupling, Wong16_Influence_Rashba_spin-orbit_coupling_Kondo_effect, Yanagisawa12_Kondo_Effect_Spin–Orbit_Coupling, Li18_Rashba-induced_Kondo_screening_magnetic_impurity_two-dimensional}, with very different conclusions (i.e., $T_K$ can either increase, remain constant, or decrease). In particular, in a 2D system the Rashba SOC mixes the spin and orbital momenta of the conduction electrons, and this mixing results in an apparently two-channel effective Anderson or Kondo Hamiltonian. However, as Zitko and Bon{\v c}a explain\cite{Zitko11_Kondo_effect_in_the_presence_of_Rashba_spin-orbit_interaction}, solving the problem exactly always results in a single-channel model. Therefore, in 2D and near the Fermi level, the total density of states does not change with the Rashba SOC, and the Kondo temperature is only weakly affected, linearly increasing or decreasing depending on the impurity parameters \cite{Zitko11_Kondo_effect_in_the_presence_of_Rashba_spin-orbit_interaction}. It is also worth mentioning that it has been claimed \cite{Li18_Rashba-induced_Kondo_screening_magnetic_impurity_two-dimensional} that the mixing of the spin and orbital momenta of the conduction electrons leads to a Rashba-dependent effective SC gap $\Delta\rightarrow \Delta\left(\epsilon_R\right)$, something that does not occur in the 1D case. On the other hand, in the purely 1D case the influence of the Rashba SOC on the Kondo temperature has also been studied using a Schrieffer-Wolff transformation and a ``poor man's'' scaling approach, and it was shown that the coupling $J$ of the resulting Kondo model \emph{increased} with the Rashba SOC \cite{deSousa16_Kondo_effect_quantum_wire_with_spin-orbit_coupling}.
Nevertheless, since $T_K$ in the Kondo model essentially depends on the product $\rho_0 J$, and taking into account the renormalization of $\rho_0$ in Eq. (\ref{eq:rho0R}), it is clear that the increase in $J$ must be overcome by the decrease in $\rho_0$, in such a way that the overall effect is a \emph{net decrease} of the product $\rho_0 J$ with the Rashba SOC. \begin{figure}[t] \includegraphics[width=8.5cm]{fig6.pdf} \caption{(Color online) Universality for the particle-hole symmetric case. Energy of the Shiba states, in units of the SC gap, for different values of $\epsilon_R/D$ as a function of $T_K/\Delta$. In this figure $U=0.5$.} \label{universality} \end{figure} \begin{figure}[t] \includegraphics[width=8.5cm]{fig7.pdf} \caption{(Color online) Universality away from the particle-hole symmetric case, for different values of $\epsilon_R/D$. In this figure $\pi \Gamma_0=0.05$ and $U=0.5$.} \label{universality2} \end{figure} Finally, we analyze the universality of the model, following the work of Yoshioka and Ohashi \cite{Yoshioka00_NRG_Anderson_impurity_on_SC} for the case without Rashba SOC. In Figs. \ref{universality} and \ref{universality2} we plot the energy of the Shiba states as a function of $T_K/\Delta$, with $T_K$ defined as in Eq. (\ref{TK_Haldane2}). Here we only vary the SC gap $\Delta$, with $U$, $\epsilon_d$ and $\Gamma_0$ fixed \cite{note_universality}. In Fig. \ref{universality} we show the DMRG results for the symmetric case $\epsilon_d=-U/2$, and in Fig. \ref{universality2} the results away from the particle-hole symmetric case, with fixed $\pi \Gamma_0$. The results corresponding to $\alpha_R=0$ match very well those obtained with NRG by Yoshioka and Ohashi \cite{Yoshioka00_NRG_Anderson_impurity_on_SC} (not shown, to maintain the clarity of the figure). In the Kondo singlet phase (lower half region of the figures) the results are universal, independently of the value of $\alpha_R$. This is expected since, as was shown in Eq.
(\ref{newGamma}), the effect of the Rashba coupling can be reduced to a change in the hybridization; we note again that in 2D the results are different due to the renormalization of the SC gap $\Delta$\cite{Li18_Rashba-induced_Kondo_screening_magnetic_impurity_two-dimensional}. Therefore, when the generalized Kondo temperature of Eq. (\ref{TK_Haldane2}) is used, all the curves collapse into a single one, and hence we can argue that the 0-$\pi$ transition occurs at the universal value $T_K/\Delta\simeq 0.3$ even in the presence of Rashba SOC, at least in our 1D case. In the doublet phase, where $T_K/\Delta\ll 1$, universality is lost (see the downturn of the curves), and the value of $\Delta$ needed to reach this non-universal regime increases with $\alpha_R$. \section{Summary and perspectives} \label{conclusions} We have studied the effect of the Rashba spin-orbit coupling present in a one-dimensional superconducting nanowire coupled to a single-level quantum dot, and have analyzed its influence on the 0-$\pi$ transition and the Kondo temperature. Our work was motivated mainly by recent experimental systems where the Rashba spin-orbit coupling has been identified as an unavoidable ingredient, such as semiconductor nanowires proximitized by nearby bulk superconductors, or the surfaces of superconducting materials (such as Al or Pb) with large atomic numbers and strong intra-atomic spin-orbit interaction. Those systems have been used as experimental platforms with magnetic impurities or quantum dots where Kondo and Shiba physics has been revealed. We have modeled the quantum dot by means of the Anderson impurity model. In order to solve the many-body problem, we have implemented the DMRG, in combination with a logarithmic discretization of the conduction band and a subsequent mapping onto a Wilson chain Hamiltonian, in order to accommodate subgap Shiba states with exceedingly long localization lengths.
We have benchmarked and tested this method against previous results obtained with the NRG technique in the absence of Rashba SOC, with excellent agreement. We have particularly studied the 0-$\pi$ singlet-to-doublet phase transition and the position of the subgap (Shiba) states, showing in detail their dependence on the Rashba coupling. By means of a straightforward unitary transformation, we have been able to show that in a 1D geometry the most important effect of the Rashba coupling can be accounted for by a reduction in the density of normal states in the conduction band. Using this result and a generalized Haldane's formula for $T_K$, we have shown that the Kondo temperature is always lowered by the Rashba coupling in this one-dimensional case. Physically, this has the indirect effect of favoring the doublet phase. The excellent results obtained with DMRG open the possibility of studying chains or clusters of impurities coupled to normal or topological superconductors. This is an interesting perspective, since this kind of system has so far been studied only analytically in the non-interacting limit, where the Kondo effect is absent. \section*{Acknowledgments} We thank Luis O. Manuel for very useful discussions. This work was partially supported by CONICET (Grants PIP 112-20150100-364 and PIP 1122015010036) and ANPCyT (PICT 2017-2081), Argentina. \bibliographystyle{apsrev}
\section{Introduction} The interplay between symmetry and topology leads to a variety of topological phases. For a translationally invariant noninteracting gapped system, the topological phase is characterized by the band structure topology, as well as by the symmetries the system respects. Along these lines, a classification was obtained for topological insulators and superconductors (TI/SC) \cite{Hasan2010,Qi2011,Bernevig2013book} in the ten Altland-Zirnbauer (AZ) symmetry classes \cite{Schnyder2008,Kitaev2009,Ryu2010,Teo2010,Chiu2016}, which is determined by the presence or absence of three types of nonspatial symmetries, i.e., the time-reversal, particle-hole and chiral symmetries. One nice feature of these tenfold-way phases is the bulk-boundary correspondence, namely, a topologically nontrivial bulk band structure implies the existence of codimension-one gapless boundary modes on the surface, irrespective of the surface orientation. (The codimension is defined as the difference between the bulk dimension and the dimension of the boundary where the gapless mode propagates.) When considering more symmetries beyond the nonspatial ones, the topological classification is enriched. Topological crystalline insulators \cite{Fu2011, Chiu2013, Shiozaki2014, Ando2015, Kruthoff2017} are such systems protected by crystalline symmetries. They are able to host codimension-one gapless boundary modes only when the boundary is invariant under the crystalline symmetry operation. For example, topological crystalline insulators protected by reflection symmetry \cite{Chiu2013} can support gapless modes only on a reflection-invariant boundary. On the other hand, inversion-symmetric topological crystalline insulators do not necessarily give rise to codimension-one gapless boundary modes \cite{Turner2010, Hughes2011}, because no boundary is invariant under inversion.
Remarkably, it was recently demonstrated that a crystal with a crystalline-symmetry-compatible bulk topology may manifest itself through protected boundary modes of codimension greater than one \cite{Slager2015, Benalcazar2017, Peng2017, Langbehn2017, Benalcazar2017s, Song2017, Schindler2018, Geier2018, Khalaf2018, Khalaf2018prx, Trifunovic2019}. Such insulating and superconducting phases are called higher-order topological insulators and superconductors (HOTI/SCs). In particular, an $n$th-order TI/SC can support codimension-$n$ boundary modes. (The strong TI/SCs in the tenfold-way phases, with protected boundary modes at codimension one, can be called first-order TI/SCs according to this definition.) A higher-order bulk-boundary correspondence between the bulk topology and gapless boundary modes at different codimensions was derived in Ref.~\cite{Trifunovic2019} based on $K$-theory. Beyond equilibrium or static conditions, it is known that topological phases also exist, and one of the famous examples is the Floquet topological insulator, which can be obtained from a static band insulator by applying a periodic drive, such as a circularly polarized radiation or an alternating Zeeman field \cite{Oka2009,Inoue2010,Kitagawa2011,Lindner2011,Lindner2013}. A complete classification of the Floquet topological insulators (as well as superconductors) in the AZ symmetry classes has been obtained in Refs.~\cite{Roy2017, Yao2017}, which can be regarded as a generalization of the classification for static tenfold-way TI/SCs. In a periodically driven, or Floquet, system, the nontriviality can arise from the nontrivial topology of the unitary time-evolution operator $U(t)$ (with period $T$), which can be decomposed into two parts as $U(t) = e^{-iH_F t}P(t)$.
Here, the first part describes the stroboscopic evolution at times that are multiples of $T$ in terms of a static effective Hamiltonian $H_F$, and the second part is known as the micromotion operator $P(t) = P(t+T)$, describing the evolution within a given time period \cite{Shirley1965}. (We will make this decomposition more explicit later.) Thus, the nontrivial topology can separately arise from $H_F$, as in a static topological phase, or from a nontrivial winding of $P(t)$ over one period. Whereas a Floquet topological phase in the former situation is very similar to a static topological phase, as it has a static limit, the latter is purely dynamical and cannot exist if the time-periodic term in the Hamiltonian vanishes. Therefore, systems belonging to the latter case are more interesting and are known as anomalous Floquet topological phases. \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{FHOTI} \caption{Floquet second-order TI/SCs protected by time-glide symmetry/antisymmetry can be mapped to static second-order TI/SCs protected by reflection symmetry/antisymmetry. The dashed line indicates the reflection (time-glide) plane. } \label{fig:FHOTI} \end{figure} In a Floquet system, energy is not conserved because of the explicit time-dependence of the Hamiltonian. However, one can define quasienergies as eigenvalues of $H_F = \frac{i}{T}\ln U(T)$, which are only defined modulo the driving frequency $\omega = 2\pi/T$. This can be intuitively understood as due to the existence of energy quanta $\omega$ that can be absorbed and emitted. Similar to static topological phases, the quasienergy spectrum can differ under different boundary conditions. In particular, inside a bulk quasienergy gap (when the periodic boundary condition is applied), there may exist topologically protected boundary modes.
In Floquet topological phases protected by nonspatial symmetries (tenfold-way phases), the bulk-boundary correspondence is also expected to hold \cite{Roy2017}, namely, the number of boundary modes inside a particular bulk gap can be fully obtained from the topology of the evolution operator $U(t)$ with the periodic boundary condition applied. Interestingly, when there exists a symmetry relating states at quasienergies $\epsilon$ and $-\epsilon$, the topologically protected boundary modes will appear inside the quasienergy gaps at $0$ and $\omega/2$, since these are the quasienergies that are invariant under the above symmetry operation. In particular, a bulk micromotion operator with nontrivial topology is able to produce gapless Floquet codimension-one boundary modes at quasienergy $\omega/2$ (which will be made clear later). The natural question that follows is how to create Floquet higher-order topological phases, with protected gapless modes at arbitrary codimensions. In particular, we want the topological nontriviality to arise from the micromotion operator; otherwise one could simply take $H_F$ to be the Hamiltonian of a static higher-order topological phase. Similar to the static situation, when only nonspatial symmetries are involved, the tenfold-way Floquet topological phases are all first-order phases, which can only support codimension-one boundary modes. Higher-order phases are nevertheless possible when symmetries relating different spatial points of the system are involved. These symmetries can be static crystalline symmetries, as well as space-time symmetries which relate the system at different times. Recently, the authors of Refs.~\cite{Huang2018, Bomantara2019,Rodriguez2019, Peng2019, Seshadri2019, Nag2019} constructed Floquet second-order TI/SCs.
Particularly, the authors of Ref.~\cite{Peng2019} were able to construct Floquet corner modes by exploiting the time-glide symmetry~\cite{Morimoto2017}, which combines a half-period time translation and a spatial reflection, as illustrated in the left part of Fig.~\ref{fig:FHOTI}. It turns out that the roles played by such space-time symmetries in Floquet systems cannot be trivially replaced by spatial symmetries. As pointed out in Ref.~\cite{Peng2019}, in protecting anomalous Floquet boundary modes, the space-time symmetries generally have different commutation relations with the nonspatial symmetries, compared to those of the corresponding spatial symmetries. Since the use of space-time symmetries opens up new possibilities in engineering Floquet topological phases, especially the Floquet HOTI/SCs, it is important to have a thorough topological classification, as well as a general recipe of model construction for such systems. In this work, we completely classify Floquet HOTI/SCs with an order-two space-time symmetry/antisymmetry realized by an operator $\hat{\mathcal{O}}$, which can be either unitary or antiunitary. Order-two means that acting twice with the symmetry/antisymmetry operator on the time-periodic Hamiltonian $H(t)$ is trivial, namely \begin{equation} [\hat{\mathcal{O}}^{2},H(t)]=0,\quad\hat{\mathcal{O}}=\hat{\mathcal{U}},\hat{\mathcal{A}}, \label{eq:order_two_symmetry} \end{equation} where $\hat{\mathcal{O}}$ can be either unitary ($\hat{\mathcal{U}}$) or antiunitary ($\hat{\mathcal{A}}$). We further provide a general recipe for constructing tight-binding Hamiltonians for such Floquet HOTI/SCs in different symmetry classes. Note that the order-two static crystalline symmetries/antisymmetries considered in Ref.~\cite{Shiozaki2014} are a subset of the symmetries/antisymmetries considered in this work. Our classification and model construction of Floquet HOTI/SCs involve two complementary approaches.
The first approach is based on the classification of gapped unitaries \cite{Roy2017,Morimoto2017}, namely the time-evolution operator $U(t)$ at time $t \in [0,T)$, with $U(T)$ gapped in the phases of its eigenvalues. It turns out that gapped unitaries can be (up to homotopy equivalence) decomposed into a unitary loop (which is in fact the micromotion operator) and a unitary evolution under the static Floquet Hamiltonian $H_F$. Thus, a general gapped unitary is classified by separately considering the unitary loop and the static Hamiltonian $H_F$, where the latter is well known for systems in the AZ classes as well as systems with additional crystalline symmetries. The classification of unitary loops, on the other hand, is less trivial, since it is responsible for the existence of anomalous Floquet phases \cite{Rudner2013}, especially when we are considering space-time symmetries. We focus on the classification of Floquet unitary loops in this work. In particular, a hermitian map between unitary loops and hermitian matrices is introduced, which is inspired by the dimensional reduction map used in the classification of TI/SCs with scattering matrices \cite{Fulga2012}. The key observation is that the symmetry constraints on the unitary loops share the same features as the ones on scattering matrices. This hermitian map has advantages over the one used in earlier works \cite{Roy2017, Morimoto2017}, because it simply maps a unitary loop with a given order-two space-time symmetry/antisymmetry to a static Hamiltonian of a topological crystalline insulator with an order-two crystalline symmetry/antisymmetry. This enables us to exploit the full machinery of $K$-theory to define $K$ groups, as well as the $K$ subgroup series introduced in Ref.~\cite{Trifunovic2019}, for the unitary loops subject to space-time symmetries/antisymmetries. \begin{table} \caption{\label{tab:summary} Nontrivial space-time symmetry/antisymmetry with subscript $T/2$ vs.
static spatial symmetry/antisymmetry with subscript $0$, sharing the same $K$ groups at the same dimension. $\hat{\mathcal{U}}$, $\hat{\mathcal{A}}$, $\overline{\mathcal{U}}$ and $\overline{\mathcal{A}}$ denote unitary symmetry, antiunitary symmetry, unitary antisymmetry and antiunitary antisymmetry, respectively. The commutation (anticommutation) relations with coexisting nonspatial symmetries are denoted as additional subscripts $+$ ($-$), while the superscript indicates the square of the operator. In the case of classes BDI, DIII, CII, CI, the first and second $\pm$ correspond to time-reversal and particle-hole symmetries, respectively. } \begin{ruledtabular} \centering \begin{tabular}{ccc} AZ class& Space-time & Static \\ \hline \multirow{2}{*}{A} &$\hat{\mathcal{U}}_{T/2}^+$ &$\hat{\mathcal{U}}_{0}^+$ \\ &$\overline{\mathcal{A}}_{T/2}^{\pm}$ &$\overline{\mathcal{A}}_{0}^{\mp}$ \\ \hline \multirow{2}{*}{AIII} &$\hat{\mathcal{U}}_{T/2,\pm}^{+}$ & $\hat{\mathcal{U}}_{0,\mp}^{+}$\\ &$\hat{\mathcal{A}}_{T/2,\pm}^{\pm}$ & $\hat{\mathcal{A}}_{0,\mp}^{\pm}$\\ \hline \multirow{2}{*}{AI, AII} &$\hat{\mathcal{U}}_{T/2,\pm}^+$ &$\hat{\mathcal{U}}_{0,\pm}^+$ \\ &$\overline{\mathcal{U}}_{T/2,\pm}^+$ &$\overline{\mathcal{U}}_{0,\mp}^+$ \\ \hline C, D &$\hat{\mathcal{U}}_{T/2,\pm}^+$ &$\hat{\mathcal{U}}_{0,\mp}^+$ \\ \hline BDI, DIII, CII, CI &$\hat{\mathcal{U}}_{T/2,\pm \pm}$ &$\hat{\mathcal{U}}_{0,\pm \mp}$ \end{tabular} \end{ruledtabular} \end{table} Based on this approach, we obtain the first important result of this work, namely, for every order-two nontrivial space-time (anti)unitary symmetry/antisymmetry, which involves a half-period time translation, there always exists a unique order-two static spatial (anti)unitary symmetry/antisymmetry, such that the two symmetries/antisymmetries correspond to the same $K$ group and thus the same classification. This result is illustrated in Fig.~\ref{fig:FHOTI} for the case of time-glide vs. reflection symmetries.
The explicit relations are summarized in Table~\ref{tab:summary}. Because of these relations, all results for the classification \cite{Benalcazar2017, Peng2017, Langbehn2017, Benalcazar2017s, Song2017, Schindler2018, Geier2018, Khalaf2018, Khalaf2018prx} as well as the higher-order bulk-boundary correspondence \cite{Trifunovic2019} of static HOTI/SCs can be applied directly to the anomalous Floquet HOTI/SCs. In the second approach, by exploiting the frequency-domain formulation, we obtain the second important result of this work, which is a general recipe for constructing harmonically driven Floquet HOTI/SCs from static HOTI/SCs. This recipe realizes the $K$-group isomorphism between systems with a space-time symmetry and systems with a static crystalline symmetry at the microscopic level of Hamiltonians, and therefore provides a very intuitive way of understanding the classification table obtained from the formal $K$-theory. The rest of the paper is organized as follows. We first introduce the symmetries, both the nonspatial symmetries and the order-two space-time symmetries, for Floquet systems in Sec.~\ref{sec:symmetries}. Then, in Sec.~\ref{sec:hermitian_map}, we introduce a hermitian map which enables us to map the classification of unitary loops to the classification of static Hamiltonians. In Sec.~\ref{sec:classification_order_two}, by using the hermitian map, we explicitly map the classification of unitary loops in all possible symmetry classes supporting an order-two symmetry to the classification of static Hamiltonians with an order-two crystalline symmetry. In Sec.~\ref{sec:Kgroup}, we derive the corresponding $K$ groups for unitary loops in all possible symmetry classes and dimensions. In Sec.~\ref{sec:FHOTI_extension}, we introduce the $K$ subgroup series for unitary loops, which enables us to completely classify Floquet HOTI/SCs.
In Sec.~\ref{sec:FHOTI_frequency_domain}, the frequency-domain formulation is introduced, which provides a complementary perspective on the topological classification of Floquet HOTI/SCs. In Sec.~\ref{sec:models}, we introduce a general recipe for constructing harmonically driven Floquet HOTI/SCs, and provide examples for different situations. Finally, we conclude our work in Sec.~\ref{sec:conclusion}. Note that it is possible to skip the $K$-theory classification sections, Secs.~\ref{sec:hermitian_map} to \ref{sec:FHOTI_extension}, and understand the main results in terms of the frequency-domain formulation. \section{Floquet basics \label{sec:floquet_basics}} In a Floquet system, the Hamiltonian \begin{equation} H(t+T)=H(t) \end{equation} is periodic in time with period $T=2\pi/\omega$, where $\omega$ is the angular frequency. In a $d$-dimensional system with translational symmetry and the periodic boundary condition, we have a well-defined Bloch wave vector $\boldsymbol{k}$ in the $d$-dimensional Brillouin zone $T^d$ (torus). The system can thus be characterized by a time-periodic Bloch Hamiltonian $H(\boldsymbol{k},t)$. In the presence of a $d_{\mathrm{def}}$-dimensional topological defect, the wave vector $\boldsymbol{k}$ is no longer a good quantum number due to the broken translational symmetry. However, the topological properties of the defect can be obtained by considering a large $D=(d-d_{\mathrm{def}}-1)$-dimensional surface surrounding the defect, on which translational symmetry is asymptotically restored so that $\boldsymbol{k}$ can be defined. We will denote by $\boldsymbol{r}$ the real-space coordinate on this surrounding surface, a $D$-sphere $S^{D}$, which will determine the topological classification. Thus, we have a time-periodic ($t\in S^{1}$) Bloch Hamiltonian $H(\boldsymbol{k},\boldsymbol{r},t)$ defined on $T^{d}\times S^{D+1}$. In the following, we will denote the dimension of such a system with a topological defect as a pair $(d,D)$.
The topological properties for a given Hamiltonian $H(\boldsymbol{k},\boldsymbol{r},t)$ can be derived from its time-evolution operator \begin{equation} U(\boldsymbol{k},\boldsymbol{r},t_0+t,t_0)=\hat{\mathscr{T}}\exp\left[-i\int_{t_0}^{t_0+t}dt'\,H(\boldsymbol{k},\boldsymbol{r},t')\right], \end{equation} where $\hat{\mathscr{T}}$ denotes the time-ordering operator. The Floquet effective Hamiltonian $H_F(\boldsymbol{k},\boldsymbol{r})$ is defined by \begin{equation} U(\boldsymbol{k},\boldsymbol{r},T+t_0,t_0) = \exp(-iH_F(\boldsymbol{k},\boldsymbol{r})T). \end{equation} Note that the $H_F$s defined at different $t_0$s are related by unitary transformations, and thus the eigenvalues of the Floquet effective Hamiltonian are uniquely defined, independent of $t_0$. We also introduce $\epsilon_{n}(\boldsymbol{k},\boldsymbol{r})\in [-\pi/T,\pi/T]$ to denote the $n$th eigenvalue of $H_F(\boldsymbol{k},\boldsymbol{r})$, and call it the $n$th quasienergy band. Although $H_F$ captures the stroboscopic evolution of the system, it does not produce a complete topological classification of the Floquet phases. It is known that one can have the so-called anomalous Floquet phases even when $H_F$ is a trivial Hamiltonian. To fully classify the Floquet phases, we need information about the evolution operator at each $t$ within the period. In order to have a well-defined phase, we will only consider gapped unitary evolution operators, whose quasienergy bands are gapped at a particular quasienergy $\epsilon_{\rm gap}$. Thus, given a set of symmetries the system respects, one needs to classify these gapped unitaries for each gapped quasienergy $\epsilon_{\rm gap}$. The most commonly considered gap positions in a system with particle-hole or chiral symmetry are $0$ and $\omega/2$, since these quasienergies respect the symmetry.
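The definitions above can be made concrete with a small numerical sketch: the time-ordered operator $U(T)$ is approximated by a product of short-time exponentials, and the quasienergies $\epsilon_n$ are read off from the eigenphases of $U(T)$, folded into $[-\pi/T,\pi/T]$. The driven two-level Hamiltonian used here is a toy example with illustrative parameters, not a model from the text.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)
T = 2 * np.pi  # driving period, so omega = 2*pi/T = 1

def h(t):
    """Toy driven two-level Bloch Hamiltonian (illustrative parameters)."""
    return 0.3 * sz + 0.2 * np.cos(2 * np.pi * t / T) * sx

def evolution_operator(h_func, T, steps=2000):
    """Time-ordered U(T) as a product of short-time exponentials."""
    dt = T / steps
    U = np.eye(2, dtype=complex)
    for n in range(steps):
        w, v = np.linalg.eigh(h_func((n + 0.5) * dt))  # midpoint rule
        U = (v * np.exp(-1j * w * dt)) @ v.conj().T @ U
    return U

def quasienergies(U, T):
    """Eigenphases of U(T) folded into the window [-pi/T, pi/T]."""
    return np.sort(np.angle(np.linalg.eigvals(U))) / T

eps = quasienergies(evolution_operator(h, T), T)
assert np.all(np.abs(eps) <= np.pi / T)
```

Note that the folding of the eigenphases is exactly the statement that quasienergies are defined only modulo $\omega$.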
Note that the $\epsilon_{\rm gap}=\omega/2$ case is more interesting, since it corresponds to anomalous Floquet phases \cite{Rudner2013}, which have no static analog. When neither of the two above-mentioned symmetries exists, the gap can appear at any quasienergy, but one can always deform the Hamiltonian such that the gap appears at $\omega/2$ without changing the topological classification. Hence, in the following we will fix $\epsilon_{\rm gap}=\omega/2$. It is evident that the initial time $t_0$ in the evolution operator does not affect the classification, since it corresponds to different ways of defining the origin of time. Thus, from now on, we will set $t_0=0$ and denote \begin{equation} U(\boldsymbol{k},\boldsymbol{r},t) = U(\boldsymbol{k},\boldsymbol{r},t,0). \end{equation} A less obvious fact is that one can define the symmetrized time-evolution operator \cite{Roy2017} centered around time $\tau$ as \begin{equation} U_{\tau}(\boldsymbol{k},\boldsymbol{r},t)=\mathscr{T}\exp\left[-i\int_{\tau - \frac{t}{2}}^{\tau + \frac{t}{2}}dt'\,H(\boldsymbol{k},\boldsymbol{r},t')\right]\label{eq:symm-evolution}, \end{equation} which gives rise to the same topological classification. This statement is proved in Appendix~\ref{app:proof_symmetric_evolution}. In fact, $U_{\tau}(\boldsymbol{k},\boldsymbol{r},T)$ leads to the same quasienergy band structure independent of the choice of $\tau$. This is because (the explicit $\boldsymbol{k},\boldsymbol{r}$ dependence is omitted) \begin{equation} U_\tau(T) = W U_0(T) W^\dagger \end{equation} with the unitary matrix $W = U(\tau+T/2)U^\dagger(T/2)$. Thus, the $U_\tau(T)$s at different $\tau$s are related by a unitary transformation, and we will in the following use $U_{\tau}(\boldsymbol{k},\boldsymbol{r},t)$ to classify Floquet topological phases. For classification purposes, we need to set up the notion of homotopy equivalence between unitary evolutions. Let us consider evolution operators gapped at a given quasienergy.
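The claim that the symmetrized one-period evolution $U_\tau(T)$ has a $\tau$-independent spectrum can be checked numerically; the sketch below compares the eigenphases of $U_\tau(T)$ for two different choices of $\tau$, using a toy driven two-level Hamiltonian with illustrative parameters (not a model from the text).

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)
T = 1.0

def h(t):
    """Toy periodically driven two-level Hamiltonian (illustrative)."""
    return 0.4 * sz + 0.3 * np.cos(2 * np.pi * t / T) * sx \
        + 0.2 * np.sin(2 * np.pi * t / T) * sx

def u_segment(t0, t1, steps=4000):
    """Time-ordered evolution from t0 to t1 (midpoint product formula)."""
    dt = (t1 - t0) / steps
    U = np.eye(2, dtype=complex)
    for n in range(steps):
        w, v = np.linalg.eigh(h(t0 + (n + 0.5) * dt))
        U = (v * np.exp(-1j * w * dt)) @ v.conj().T @ U
    return U

def u_sym(tau):
    """Symmetrized one-period evolution U_tau(T), Eq. (symm-evolution)."""
    return u_segment(tau - T / 2, tau + T / 2)

e0 = np.sort(np.angle(np.linalg.eigvals(u_sym(0.0))))
e1 = np.sort(np.angle(np.linalg.eigvals(u_sym(0.37))))
assert np.allclose(e0, e1, atol=1e-5)  # spectrum independent of tau
```

The agreement (up to Trotter discretization error) reflects the unitary equivalence $U_\tau(T) = W U_0(T) W^\dagger$ stated above.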
Following the definition in Ref.~\cite{Roy2017}, we say two evolution operators $U_{1}$ and $U_{2}$ are homotopic, denoted as $U_{1}\approx U_{2}$, if and only if there exists a continuous unitary-matrix-valued function $f(s)$, with $s\in[0,1]$, such that \begin{equation} f(0)=U_1,\quad f(1)=U_2, \end{equation} where $f(s)$ is a gapped evolution operator for all intermediate $s$. It is worth mentioning that when dealing with symmetrized evolution operators instead of ordinary evolution operators, the definition of homotopy equivalence is similar, except that one needs to impose that the interpolation function $f(s)$ is also a gapped symmetrized evolution operator for all $s$. When comparing evolution operators with different numbers of bands, the equivalence relation of stable homotopy can further be introduced. Such an equivalence relation is denoted as $U_{1}\sim U_{2}$ if there exist two trivial unitaries $U_{n_1}^0$ and $U_{n_2}^0$, with $n_1$ and $n_2$ bands respectively, such that \begin{equation} U_1 \oplus U_{n_1}^0 \approx U_2 \oplus U_{n_2}^0, \end{equation} where $\oplus$ denotes the direct sum of matrices. We now define the composition of two symmetrized evolution operators. Using the notation of Ref.~\cite{Roy2017}, we write the evolution due to $U_{\tau,1}$ followed by $U_{\tau,2}$ as $U_{\tau,1} * U_{\tau,2}$, which is given by the symmetrized evolution under the Hamiltonian $H(t)$ defined by \begin{equation} H(t)=\begin{cases} H_{2}(2t+\frac{T}{2}-\tau) & \tau-\frac{T}{2}\leq t\leq\tau-\frac{T}{4}\\ H_{1}(2t-\tau) & \tau-\frac{T}{4}\leq t\leq\tau+\frac{T}{4}\\ H_{2}(2t-\frac{T}{2}-\tau) & \tau+\frac{T}{4}\leq t\leq\tau+\frac{T}{2}, \end{cases} \end{equation} where $H_1(t)$ and $H_2(t)$ are the Hamiltonians generating the evolution operators $U_{\tau,1}$ and $U_{\tau,2}$, respectively.
As proved in Ref.~\cite{Roy2017}, with such definitions of homotopy and composition of evolution operators, one obtains the following two important theorems. First, every gapped symmetrized evolution operator $U_{\tau}$ is homotopic to a composition of a unitary loop $L_{\tau}$, followed by a constant Hamiltonian evolution $C_{\tau}$, unique up to homotopy. Here a unitary loop is a special time-evolution operator that becomes the identity operator after a full period of evolution. Second, $L_{\tau,1}*C_{\tau,1} \approx L_{\tau,2} * C_{\tau,2}$ if and only if $L_{\tau,1}\approx L_{\tau,2}$ and $C_{\tau,1}\approx C_{\tau,2}$, where $L_{\tau,1}$, $L_{\tau,2}$ are unitary loops and $C_{\tau,1}$, $C_{\tau,2}$ are constant Hamiltonian evolutions. For completeness, we give the proofs of the two theorems in Appendix~\ref{app:two_theorems}. Because of these two theorems, classifying generic time-evolution operators reduces to classifying separately the unitary loops and the constant Hamiltonian evolutions. Since the latter is exactly the same as classifying static Hamiltonians, we will in this work only focus on the classification of unitary loops. In the following, all time-evolution operators are unitary loops, which additionally satisfy $U_{\tau}(\boldsymbol{k},\boldsymbol{r},t) = U_{\tau}(\boldsymbol{k},\boldsymbol{r},t+T)$. \section{Symmetries in Floquet systems \label{sec:symmetries}} In this section, we summarize the transformation properties of the time-evolution operator under various symmetry operations.
\subsection{Nonspatial symmetries} Let us first look at the nonspatial symmetries and consider systems belonging to one of the ten AZ classes (see Table \ref{tab:AZ-symmetry-classes}), determined by the presence or absence of time-reversal, particle-hole and chiral symmetries, which are implemented by the operators $\hat{\mathcal{T}}$, $\hat{\mathcal{C}}$ and $\hat{\mathcal{S}}=\hat{\mathcal{T}}\hat{\mathcal{C}}$, respectively, such that \begin{gather} \hat{\mathcal{T}}H(\boldsymbol{k},\boldsymbol{r},t)\hat{\mathcal{T}}^{-1}=H(-\boldsymbol{k},\boldsymbol{r},-t) \nonumber\\ \hat{\mathcal{C}}H(\boldsymbol{k},\boldsymbol{r},t)\hat{\mathcal{C}}^{-1}=-H(-\boldsymbol{k},\boldsymbol{r},t) \nonumber\\ \hat{\mathcal{S}}H(\boldsymbol{k},\boldsymbol{r},t)\hat{\mathcal{S}}^{-1}=-H(\boldsymbol{k},\boldsymbol{r},-t), \label{eq:AZsym_H} \end{gather} where $\hat{\mathcal{T}}=\mathcal{U}_{T}\hat{\mathcal{K}}$ and $\hat{\mathcal{C}}=\mathcal{U}_{C}\hat{\mathcal{K}}$ are antiunitary operators, with unitary matrices $\mathcal{U}_{T},\mathcal{U}_{C}$ and the complex conjugation operator $\hat{\mathcal{K}}$, while $\hat{\mathcal{S}}=\mathcal{U}_{S}$ is unitary. Here $\boldsymbol{r}$ is invariant in the above equations, because of the nonspatial nature of the symmetries. 
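To see how the constraints in Eq.~(\ref{eq:AZsym_H}) lift from the Hamiltonian to the evolution operator, it is instructive to sketch the particle-hole case. Assuming the symmetrized evolution operator takes the time-ordered form $U_{\tau}(\boldsymbol{k},\boldsymbol{r},t)=\mathcal{T}\exp[-i\int_{\tau-t/2}^{\tau+t/2}dt'\,H(\boldsymbol{k},\boldsymbol{r},t')]$ of Ref.~\cite{Roy2017} (an assumption about notation used only for this sketch), the antiunitarity of $\hat{\mathcal{C}}$ conjugates the factor $-i$, while the particle-hole constraint flips the sign of $H$; the two sign flips cancel, and the time ordering is untouched, \begin{equation} \hat{\mathcal{C}}U_{\tau}(\boldsymbol{k},\boldsymbol{r},t)\hat{\mathcal{C}}^{-1}=\mathcal{T}\exp\left[+i\int_{\tau-t/2}^{\tau+t/2}dt'\,\bigl(-H(-\boldsymbol{k},\boldsymbol{r},t')\bigr)\right]=U_{\tau}(-\boldsymbol{k},\boldsymbol{r},t). \end{equation} The time-reversal and chiral cases work analogously, except that a single leftover sign flip reverses both the time ordering and the integration window, producing the daggers and the shift $\tau\to-\tau$ in the relations below.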
For a Floquet system, the action of the symmetry operations $\hat{\mathcal{T}}$, $\hat{\mathcal{C}}$, and $\hat{\mathcal{S}}$ on the symmetrized unitary loops $U_{\tau}(\boldsymbol{k},\boldsymbol{r},t)$ can be summarized as \begin{gather} \hat{\mathcal{T}}U_\tau(\boldsymbol{k},\boldsymbol{r},t)\hat{\mathcal{T}}^{-1}=U_{-\tau}^{\dagger}(-\boldsymbol{k},\boldsymbol{r},t) \label{eq:time-reversal-U} \\ \hat{\mathcal{C}}U_\tau(\boldsymbol{k},\boldsymbol{r},t)\hat{\mathcal{C}}^{-1}=U_\tau(-\boldsymbol{k},\boldsymbol{r},t) \label{eq:particle-hole-U} \\ \hat{\mathcal{S}}U_\tau(\boldsymbol{k},\boldsymbol{r},t)\hat{\mathcal{S}}^{-1}=U_{-\tau}^{\dagger}(\boldsymbol{k},\boldsymbol{r},t) \label{eq:chiral-U} \end{gather} which follow directly from Eq.~(\ref{eq:AZsym_H}). For later convenience, we further introduce the notations $\epsilon_{T}=\mathcal{U}_{T}\mathcal{U}_{T}^{*}=\hat{\mathcal{T}}^{2}=\pm1$, $\epsilon_{C}=\mathcal{U}_{C}\mathcal{U}_{C}^{*}=\hat{\mathcal{C}}^{2}=\pm1$, and $\epsilon_{S}=\mathcal{U}_{S}^{2}=\hat{\mathcal{S}}^{2}=1$. \subsection{Order-two space-time symmetry \label{sec:order-two}} In addition to the nonspatial symmetries, let us assume the system supports an order-two space-time symmetry realized by $\hat{\mathcal{O}}$, as defined in Eq.~(\ref{eq:order_two_symmetry}). Moreover, we assume $\hat{\mathcal{O}}$ commutes or anticommutes with the operators for the nonspatial symmetries of the system. 
Under the order-two space-time symmetry operation $\hat{\mathcal{O}}$, the momentum $\boldsymbol{k}$ transforms as \cite{Shiozaki2014} \begin{equation} \boldsymbol{k}\to\begin{cases} \hat{\mathcal{O}}\boldsymbol{k}=(-\boldsymbol{k}_{\parallel},\boldsymbol{k}_{\perp}) & {\rm for}\ \hat{\mathcal{O}}=\hat{\mathcal{U}} \\ -\hat{\mathcal{O}}\boldsymbol{k}=(\boldsymbol{k}_{\parallel},-\boldsymbol{k}_{\perp}) & {\rm for}\ \hat{\mathcal{O}}=\hat{\mathcal{A}}, \end{cases} \end{equation} where we work in the diagonal basis of $\hat{\mathcal{O}}$, with $\boldsymbol{k}_\parallel = (k_1,k_2,\dots,k_{d_{\parallel}})$ and $\boldsymbol{k}_\perp = (k_{d_{\parallel}+1}, k_{d_{\parallel}+2},\dots, k_{d})$. While the nonspatial symmetries leave the spatial coordinate $\boldsymbol{r}$ invariant, the order-two space-time symmetry transforms $\boldsymbol{r}$ nontrivially. To determine the transformation law, we follow Ref.~\cite{Shiozaki2014} and consider a $D$-dimensional sphere $S^{D}$ surrounding the topological defect, whose coordinates in Euclidean space are determined by \begin{equation} \boldsymbol{n}^2 = a^2, \quad \boldsymbol{n}=(n_1,n_2,\dots,n_{D+1}), \end{equation} with radius $a>0$. Since $\hat{\mathcal{O}}$ maps $S^D$ into itself, we have \begin{equation} \boldsymbol{n} \to (-\boldsymbol{n}_{\parallel},\boldsymbol{n}_{\perp}), \end{equation} with $\boldsymbol{n}_{\parallel}=(n_1,n_2,\dots,n_{D_{\parallel}})$ and $\boldsymbol{n}_{\perp} = (n_{D_{\parallel}+1},n_{D_\parallel+2},\dots,n_{D+1})$ in the diagonal basis of $\hat{\mathcal{O}}$. When $D_{\parallel}\leq D$, we can introduce the coordinate $\boldsymbol{r}\in S^{D}$ by \begin{equation} r_{i} = \frac{n_i}{a-n_{D+1}}, \quad (i=1,\dots,D), \end{equation} which leads to \begin{equation} \boldsymbol{r}\to(-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp}). \end{equation} Here, $\boldsymbol{r}_{\parallel}= (r_1,r_2,\dots,r_{D_{\parallel}})$ and $\boldsymbol{r}_{\perp} = (r_{D_{\parallel}+1}, r_{D_{\parallel}+2},\dots,r_{D})$. 
Thus, we need to introduce $(d,d_{\parallel},D,D_{\parallel})$ to characterize the dimensions of the system according to the transformation properties of the coordinates, where $d$ and $D$ are defined as before, while $d_{\parallel}$ and $D_{\parallel}$ denote the numbers of flipping momenta and flipping defect-surrounding coordinates, respectively. For example, a unitary symmetry with $(d,d_{\parallel},D,D_{\parallel})=(2,1,1,1)$ corresponds to a reflection in 2D with a point defect on the reflection line, while a unitary symmetry with $(d,d_{\parallel},D,D_\parallel) = (3,2,2,2)$ is a twofold rotation in 3D with a point defect on the rotation axis. Next, let us consider the action of the order-two space-time symmetry on the time argument. For unitary symmetries, an action on $t$ can generically have the form $t \to t+s$. Due to the periodicity in $t$ and the order-two nature of the symmetry, $s$ can either be $0$ or $T/2$. For antiunitary symmetries, we have $t \to -t+s$. When the system does not support time-reversal or chiral symmetry, as in classes A, C, and D, the constraints due to time periodicity and the order-two nature do not restrict the value $s$ takes. Hence, $s$ is an arbitrary real number in this situation. However, when the system has at least one of the time-reversal and chiral symmetries, denoted as $\hat{\mathcal{P}}$, $s$ is restricted to a few values, as shown in the following. The composite operation $\hat{\mathcal{P}}\hat{\mathcal{O}}$ shifts the time as $t \to t-s$. On the other hand, since $\hat{\mathcal{P}}\hat{\mathcal{O}}$ is another order-two symmetry, $s$ can be either $0$ or $T/2$ (note that $s$ is defined modulo $T$). 
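As a one-line sketch of this restriction: with $\hat{\mathcal{O}}$ acting as $t\to-t+s$ and $\hat{\mathcal{P}}$ acting as $t\to-t$, the unitary composite $\hat{\mathcal{P}}\hat{\mathcal{O}}$ sends $t\to t-s$, and its order-two nature requires \begin{equation} (\hat{\mathcal{P}}\hat{\mathcal{O}})^{2}:\; t\to t-2s\overset{!}{=}t \pmod{T} \quad\Longrightarrow\quad s=0\ \text{or}\ s=\frac{T}{2} \pmod{T}. \end{equation}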
To summarize, a Hamiltonian $H(\boldsymbol{k},\boldsymbol{r},t)$ living in dimension $(d,d_{\parallel},D,D_{\parallel})$ transforms under the action of $\hat{\mathcal{O}}$ as \begin{gather} \hat{\mathcal{U}}_{s}H(\boldsymbol{k},\boldsymbol{r},t)\hat{\mathcal{U}}_{s}^{-1}=H(-\boldsymbol{k}_{\parallel},\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},t+s) \label{eq:unitary_symmetry} \\ \hat{\mathcal{A}}_{s}H(\boldsymbol{k},\boldsymbol{r},t)\hat{\mathcal{A}}_{s}^{-1}=H(\boldsymbol{k}_{\parallel},-\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},-t+s) \end{gather} in the diagonal basis of $\hat{\mathcal{O}}$, for unitary and antiunitary symmetries, respectively. Let us suppose $\hat{\mathcal{O}}^{2}=\epsilon_{O}=\pm1$, and that $\hat{\mathcal{O}}$ commutes or anticommutes with coexisting nonspatial symmetries according to \begin{equation} \hat{\mathcal{O}}\hat{\mathcal{T}}=\eta_{T}\hat{\mathcal{T}}\hat{\mathcal{O}},\quad\hat{\mathcal{O}}\hat{\mathcal{C}}=\eta_{C}\hat{\mathcal{C}}\hat{\mathcal{O}},\quad\hat{\mathcal{O}}\hat{\mathcal{S}}=\eta_{S}\hat{\mathcal{S}}\hat{\mathcal{O}}, \end{equation} where $\eta_{T}=\pm1$, $\eta_{C}=\pm1$, and $\eta_{S}=\pm1$. Note that when $\mathcal{\hat{O}}=\hat{\mathcal{U}}$, we can always set $\epsilon_{O}=1$ by multiplying $\hat{\mathcal{O}}$ by the imaginary unit $i$, but this changes the (anti)commutation relation with $\hat{\mathcal{T}}$ and/or $\hat{\mathcal{C}}$ at the same time. 
One can also consider an order-two antisymmetry $\overline{\mathcal{O}}$ defined by \begin{gather} \overline{\mathcal{U}}_{s}H(\boldsymbol{k},\boldsymbol{r},t)\overline{\mathcal{U}}_{s}^{-1}=-H(-\boldsymbol{k}_{\parallel},\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},-t+s)\nonumber \\ \overline{\mathcal{A}}_{s}H(\boldsymbol{k},\boldsymbol{r},t)\overline{\mathcal{A}}_{s}^{-1}=-H(\boldsymbol{k}_{\parallel},-\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},t+s), \end{gather} where $\overline{\mathcal{O}}$ can be either unitary $\overline{\mathcal{U}}$ or antiunitary $\overline{\mathcal{A}}$. Such an antisymmetry can be realized by combining any of the order-two symmetries with chiral or particle-hole symmetry. Similar to $\hat{\mathcal{O}}$, we define $\overline{\mathcal{O}}^{2}=\epsilon_{\overline{O}}$, $\overline{\mathcal{O}}\hat{\mathcal{T}}=\overline{\eta}_{T}\hat{\mathcal{T}}\overline{\mathcal{O}}$, $\overline{\mathcal{O}}\hat{\mathcal{C}}=\overline{\eta}_{C}\hat{\mathcal{C}}\overline{\mathcal{O}}$, and $\overline{\mathcal{O}}\hat{\mathcal{S}}=\overline{\eta}_{S}\hat{\mathcal{S}}\overline{\mathcal{O}}$. The values that the time shift $s$ takes are similar to the ones in the case of symmetries: we have $s=0,T/2$ for $\overline{\mathcal{U}}_s$, whereas for $\overline{\mathcal{A}}_s$, $s$ is arbitrary in classes A, C and D, and $s=0,T/2$ in the remaining classes. 
The actions of the symmetry/antisymmetry operators $\hat{\mathcal{O}}$ and $\overline{\mathcal{O}}$, either unitary or antiunitary, on the unitary loops can be summarized as follows \begin{gather} \hat{\mathcal{U}}_{s}U_{\tau}(\boldsymbol{k},\boldsymbol{r},t)\hat{\mathcal{U}}_{s}^{-1} =U_{\tau+s}(-\boldsymbol{k}_{\parallel},\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},t), \\ \hat{\mathcal{A}}_{s}U_{\tau}(\boldsymbol{k},\boldsymbol{r},t)\hat{\mathcal{A}}_{s}^{-1} =U_{s-\tau}^{\dagger}(\boldsymbol{k}_{\parallel},-\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},t), \\ \overline{\mathcal{U}}_{s}U_{\tau}(\boldsymbol{k},\boldsymbol{r},t)\overline{\mathcal{U}}_{s}^{-1}=U_{s-\tau}^{\dagger}(-\boldsymbol{k}_{\parallel},\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},t), \\ \overline{\mathcal{A}}_{s}U_{\tau}(\boldsymbol{k},\boldsymbol{r},t)\overline{\mathcal{A}}_{s}^{-1}=U_{s+\tau}(\boldsymbol{k}_{\parallel},-\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},t). \end{gather} In the following, we discuss each symmetry/antisymmetry operator separately, and choose a particular value of $\tau$ for each case, since the classification does not depend on the value $\tau$ takes. For $\hat{\mathcal{U}}_s$ and $\overline{\mathcal{A}}_s$, we have $s=0, T/2$, and we take $\tau=T/2$. 
By using \begin{equation} U_{\tau+T/2}(\boldsymbol{k},\boldsymbol{r},t) = U_{\tau}^{\dagger}(\boldsymbol{k},\boldsymbol{r},T-t), \end{equation} and omitting the subscript $\tau$ in $U_{\tau}(\boldsymbol{k},\boldsymbol{r},t)$ from now on for simplicity, we get \begin{gather} \hat{\mathcal{U}}_{0}U(\boldsymbol{k},\boldsymbol{r},t)\hat{\mathcal{U}}_{0}^{-1}=U(-\boldsymbol{k}_{\parallel},\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},t) \nonumber \\ \hat{\mathcal{U}}_{T/2}U(\boldsymbol{k},\boldsymbol{r},t)\hat{\mathcal{U}}_{T/2}^{-1}=U^{\dagger}(-\boldsymbol{k}_{\parallel},\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},T-t) \nonumber \\ \overline{\mathcal{A}}_{0}U(\boldsymbol{k},\boldsymbol{r},t)\overline{\mathcal{A}}_{0}^{-1}=U(\boldsymbol{k}_{\parallel},-\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},t) \nonumber \\ \overline{\mathcal{A}}_{T/2}U(\boldsymbol{k},\boldsymbol{r},t)\overline{\mathcal{A}}_{T/2}^{-1}=U^{\dagger}(\boldsymbol{k}_{\parallel},-\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},T-t). \end{gather} When considering $\overline{\mathcal{U}}_s$ and $\hat{\mathcal{A}}_s$ in classes A, C and D, we can choose $\tau = s/2$, which gives \begin{gather} \hat{\mathcal{A}}_{s}U(\boldsymbol{k},\boldsymbol{r},t)\hat{\mathcal{A}}_{s}^{-1} =U^{\dagger}(\boldsymbol{k}_{\parallel},-\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},t) \nonumber \\ \overline{\mathcal{U}}_{s}U(\boldsymbol{k},\boldsymbol{r},t)\overline{\mathcal{U}}_{s}^{-1}=U^{\dagger}(-\boldsymbol{k}_{\parallel},\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},t). \end{gather} This implies that the value of $s$ here does not play a role in determining the topological classification. In the remaining classes, we have $s=0, T/2$, and we choose $\tau = T/2$. 
This leads to \begin{gather} \hat{\mathcal{A}}_{0}U(\boldsymbol{k},\boldsymbol{r},t)\hat{\mathcal{A}}_{0}^{-1}=U^{\dagger}(\boldsymbol{k}_{\parallel},-\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},t)\nonumber \\ \hat{\mathcal{A}}_{T/2}U(\boldsymbol{k},\boldsymbol{r},t)\hat{\mathcal{A}}_{T/2}^{-1}=U(\boldsymbol{k}_{\parallel},-\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},T-t) \nonumber \\ \overline{\mathcal{U}}_{0}U(\boldsymbol{k},\boldsymbol{r},t)\overline{\mathcal{U}}_{0}^{-1}=U^{\dagger}(-\boldsymbol{k}_{\parallel},\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},t) \nonumber \\ \overline{\mathcal{U}}_{T/2}U(\boldsymbol{k},\boldsymbol{r},t)\overline{\mathcal{U}}_{T/2}^{-1}=U(-\boldsymbol{k}_{\parallel},\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},T-t). \end{gather} \section{Hermitian map \label{sec:hermitian_map}} One observation that can be made from Eqs.~(\ref{eq:time-reversal-U})--(\ref{eq:chiral-U}) is that at fixed $\boldsymbol{r}$ and $t$, the transformation properties of the unitary loops $U(\boldsymbol{k},\boldsymbol{r},t)$ under the actions of $\hat{\mathcal{T}}$, $\hat{\mathcal{C}}$, and $\hat{\mathcal{S}}$ are exactly the same as the ones for unitary boundary reflection matrices introduced in, for example, Refs.~\cite{Fulga2012, Peng2017}. In these works, an effective hermitian matrix is constructed from a given reflection matrix, which maps the classification of reflection matrices onto the classification of hermitian matrices. 
Here, we can borrow the same hermitian map, defined as \begin{equation} \mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)=\mathcal{U}_{S}U(\boldsymbol{k},\boldsymbol{r},t)\label{eq:effective_hamiltonian_chiral} \end{equation} if $U(\boldsymbol{k},\boldsymbol{r},t)$ has a chiral symmetry, and \begin{equation} \mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)=\left(\begin{array}{cc} 0 & U(\boldsymbol{k},\boldsymbol{r},t)\\ U^{\dagger}(\boldsymbol{k},\boldsymbol{r},t) & 0 \end{array}\right)\label{eq:effective_hamiltonian_nonchiral} \end{equation} if $U(\boldsymbol{k},\boldsymbol{r},t)$ does not have a chiral symmetry. In the latter case, $\mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)$ acquires a new chiral symmetry \begin{equation} \mathcal{U}_{S}'\mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)=-\mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)\mathcal{U}_{S}', \end{equation} with $\mathcal{U}_{S}'=\rho_{z}\otimes\mathbb{I}$, where we have introduced a set of Pauli matrices $\rho_{x,y,z}$ in the enlarged space. Note that when the unitary loop $U(\boldsymbol{k},\boldsymbol{r},t)$ does not have a chiral symmetry, our hermitian map is the same as the one used in Refs.~\cite{Roy2017,Morimoto2017}. When the unitary loop does have a chiral symmetry, however, we choose a new map, which maps the unitary loop into a hermitian matrix without unitary symmetry. The advantage of the hermitian map defined here over the one in the previous works will become clear soon. Note that the hermitian matrix $\mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)$ can be regarded as a static spatially modulated Hamiltonian in $(d,D+1)$ dimension, because the time argument transforms like a spatial coordinate, similar to $\boldsymbol{r}$. 
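As a quick numerical sanity check of the non-chiral map in Eq.~(\ref{eq:effective_hamiltonian_nonchiral}), the minimal sketch below (our construction, not part of the derivation: a random $4\times4$ unitary stands in for $U(\boldsymbol{k},\boldsymbol{r},t)$ at a fixed point of its arguments) verifies that $\mathcal{H}$ is hermitian, anticommutes with $\mathcal{U}_{S}'=\rho_{z}\otimes\mathbb{I}$, and squares to the identity, so its spectrum is pinned to $\pm1$ and is gapped at zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Random unitary standing in for U(k, r, t) at fixed arguments
# (QR of a complex Gaussian matrix, with column phases fixed).
z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
q, r = np.linalg.qr(z)
U = q * (np.diag(r) / np.abs(np.diag(r)))

# Hermitian map for a loop without chiral symmetry (doubled space)
H = np.block([[np.zeros((n, n)), U],
              [U.conj().T, np.zeros((n, n))]])

# Emergent chiral operator U_S' = rho_z (x) identity
US = np.kron(np.diag([1.0, -1.0]), np.eye(n))

assert np.allclose(H, H.conj().T)         # H is hermitian
assert np.allclose(US @ H @ US, -H)       # U_S' anticommutes with H
assert np.allclose(H @ H, np.eye(2 * n))  # H^2 = 1: eigenvalues are +-1
```

Since $\mathcal{H}^{2}=\mathbb{I}$ and ${\rm tr}\,\mathcal{H}=0$, the doubled matrix always has $n$ eigenvalues at each of $\pm1$, confirming that the map produces a gapped hermitian matrix for any unitary input.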
The classification of unitary loops in $(d,D)$ dimension in a given symmetry class is then the same as the classification of static Hamiltonians in $(d,D+1)$ dimension in the symmetry class shifted upward by one ($s\to s-1$) (mod 2 or 8 for complex and real symmetry classes, respectively), where $s$ is used to order the symmetry classes according to Table~\ref{tab:AZ-symmetry-classes}. Thus, one can directly apply the classification scheme for static Hamiltonians $H(\boldsymbol{k},\boldsymbol{r})$ using $K$ theory, as was done in Ref.~\cite{Shiozaki2014}. This is provided by a homotopy classification of maps from the base space $(\boldsymbol{k},\boldsymbol{r})\in S^{d+D}$ to the classifying space of Hamiltonians $H(\boldsymbol{k},\boldsymbol{r})$ subject to the given symmetries, which we denote as $\mathcal{C}_{s}$ or $\mathcal{R}_{s}$ as shown in the table. \begin{table} \caption{\label{tab:AZ-symmetry-classes}AZ symmetry classes and their classifying spaces. The top two rows ($s=0,1\mod2$) are the complex AZ classes, while the remaining eight rows $(s=0,\dots,7\mod8)$ are the real AZ classes. The third to fifth columns denote the absence (0) or presence ($\epsilon_{T},\epsilon_{C}=\pm1$ or $\epsilon_{S}=1$) of time-reversal ($\hat{\mathcal{T}}$), particle-hole ($\hat{\mathcal{C}}$) and chiral symmetries ($\hat{\mathcal{S}}$). 
$\mathcal{C}_{s}$ ($\mathcal{R}_{s}$) denotes the classifying space of the complex (real) AZ class with index $s$.} \begin{ruledtabular} \centering \begin{tabular}{ccccccc} $s$ & AZ class & $\hat{\mathcal{T}}$ & $\hat{\mathcal{C}}$ & $\hat{\mathcal{S}}$ & $\mathcal{C}_{s}$ or $\mathcal{R}_{s}$ & $\pi_{0}(\mathcal{C}_{s})$ or $\pi_{0}(\mathcal{R}_{s})$\\ \hline $0$ & A & 0 & 0 & 0 & $\mathcal{C}_{0}$ & $\mathbb{Z}$\\ $1$ & AIII & 0 & 0 & 1 & $\mathcal{C}_{1}$ & 0\\ \hline $0$ & AI & $+1$ & 0 & 0 & $\mathcal{R}_{0}$ & $\mathbb{Z}$\\ $1$ & BDI & $+1$ & $+1$ & 1 & $\mathcal{R}_{1}$ & $\mathbb{Z}_{2}$\\ $2$ & D & 0 & $+1$ & 0 & $\mathcal{R}_{2}$ & $\mathbb{Z}_{2}$\\ $3$ & DIII & $-1$ & $+1$ & 1 & $\mathcal{R}_{3}$ & 0\\ $4$ & AII & $-1$ & 0 & 0 & $\mathcal{R}_{4}$ & $2\mathbb{Z}$\\ $5$ & CII & $-1$ & $-1$ & 1 & $\mathcal{R}_{5}$ & 0\\ $6$ & C & 0 & $-1$ & 0 & $\mathcal{R}_{6}$ & 0\\ $7$ & CI & $+1$ & $-1$ & 1 & $\mathcal{R}_{7}$ & 0\\ \end{tabular} \end{ruledtabular} \end{table} Because of the Bott periodicity in the periodic table of static TI/SCs \cite{Schnyder2008,Kitaev2009,Ryu2010,Teo2010,Chiu2016}, the classification is unchanged when simultaneously shifting the dimension $D \to D+1$ and the symmetry class upward by one, $s\to s-1$ (mod 2 or 8 for complex and real symmetry classes, respectively). It turns out that the classification of unitary loops is the same as the classification of static Hamiltonians in the same symmetry class and with the same dimension $(d,D)$. In the following, we will explicitly derive the action of the hermitian map on each symmetry class. \subsection{Classes A and AIII} We first consider the two complex classes. Under the hermitian map defined above, classifying unitary loops in $(d,D)$ dimension in class A is the same as classifying hermitian matrices in $(d,D+1)$ dimension in class AIII. On the other hand, classifying unitary loops in $(d,D)$ dimension in class AIII is the same as classifying hermitian matrices in $(d,D+1)$ dimension in class A. 
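The chiral-case map in Eq.~(\ref{eq:effective_hamiltonian_chiral}) can be sanity-checked numerically in the same spirit. In the sketch below (our construction, for illustration only), $\mathcal{U}_{S}=\sigma_{z}\otimes\mathbb{I}$ is a concrete choice of chiral operator, and a unitary obeying the constraint $\mathcal{U}_{S}U\,\mathcal{U}_{S}^{-1}=U^{\dagger}$, which follows from Eq.~(\ref{eq:chiral-U}) at $\tau=T/2$, is generated by exponentiating a random chirally antisymmetric hermitian matrix; the map $\mathcal{H}=\mathcal{U}_{S}U$ then comes out hermitian as claimed:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2  # half the dimension; total dimension is 2n

# Concrete chiral operator (a choice for illustration): U_S = sigma_z (x) identity
US = np.kron(np.diag([1.0, -1.0]), np.eye(n))

# A hermitian generator anticommuting with U_S, so that U = exp(-i H0)
# satisfies the chiral constraint  U_S U U_S^{-1} = U^dagger.
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H0 = np.block([[np.zeros((n, n)), A],
               [A.conj().T, np.zeros((n, n))]])

lam, V = np.linalg.eigh(H0)               # H0 is hermitian
U = (V * np.exp(-1j * lam)) @ V.conj().T  # U = exp(-i H0)

assert np.allclose(US @ U @ US, U.conj().T)  # chiral constraint on the loop

# Hermitian map for the chiral case: H = U_S U is hermitian
H = US @ U
assert np.allclose(H, H.conj().T)
```

The one-line reason the check passes: $\mathcal{H}^{\dagger}=U^{\dagger}\mathcal{U}_{S}=(\mathcal{U}_{S}U\,\mathcal{U}_{S})\mathcal{U}_{S}=\mathcal{U}_{S}U=\mathcal{H}$, using $\mathcal{U}_{S}^{2}=\mathbb{I}$ and $\mathcal{U}_{S}^{\dagger}=\mathcal{U}_{S}$ for this choice.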
\subsection{Classes AI and AII} Now we turn to the real symmetry classes. Since classes AI and AII have only time-reversal symmetry, we need to apply the hermitian map defined in Eq.~(\ref{eq:effective_hamiltonian_nonchiral}). By using Eq.~(\ref{eq:time-reversal-U}) with $\tau=T/2$, or \begin{equation} \mathcal{U}_{T}U^{T}(\boldsymbol{k},\boldsymbol{r},t)=U(-\boldsymbol{k},\boldsymbol{r},t)\mathcal{U}_{T}, \end{equation} we have the effective time-reversal symmetry \begin{equation} \mathcal{U}_{T}'\mathcal{H}^{*}(\boldsymbol{k},\boldsymbol{r},t)=\mathcal{H}(-\boldsymbol{k},\boldsymbol{r},t)\mathcal{U}_{T}' \end{equation} with $\mathcal{U}_{T}'=\rho_{x}\otimes\mathcal{U}_{T}$, and the effective particle-hole symmetry \begin{equation} \mathcal{U}_{C}'\mathcal{H}^{*}(\boldsymbol{k},\boldsymbol{r},t)=-\mathcal{H}(-\boldsymbol{k},\boldsymbol{r},t)\mathcal{U}_{C}' \end{equation} with $\mathcal{U}_{C}'=i\rho_{y}\otimes\mathcal{U}_{T}$. Note that the effective time-reversal and particle-hole symmetries combine into the chiral symmetry, as expected. The types of the effective time-reversal and particle-hole symmetries of $\mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)$ are determined from \begin{gather} \mathcal{U}_{T}'\mathcal{U}_{T}'^{*}=\rho_{0}\otimes(\mathcal{U}_{T}\mathcal{U}_{T}^{*})\\ \mathcal{U}_{C}'\mathcal{U}_{C}'^{*}=-\rho_{0}\otimes(\mathcal{U}_{T}\mathcal{U}_{T}^{*}), \end{gather} where $\rho_{0}$ is the two-by-two identity matrix in the extended space. Under the hermitian map, classifying unitary loops in $(d,D)$ dimension in classes AI and AII is the same as classifying hermitian matrices in $(d,D+1)$ dimension in classes CI and DIII, respectively. \subsection{Classes C and D} Let us consider classes C and D, which have only particle-hole symmetry. We again apply the hermitian map defined in Eq.~(\ref{eq:effective_hamiltonian_nonchiral}). 
By using Eq.~(\ref{eq:particle-hole-U}), one can define an effective time-reversal symmetry with $\mathcal{U}_{T}'=\rho_{0}\otimes\mathcal{U}_{C}$, and an effective particle-hole symmetry with $\mathcal{U}_{C}'=\rho_{z}\otimes\mathcal{U}_{C}$, such that \begin{gather} \mathcal{U}_{T}'\mathcal{H}^{*}(\boldsymbol{k},\boldsymbol{r},t)=\mathcal{H}(-\boldsymbol{k},\boldsymbol{r},t)\mathcal{U}_{T}'\\ \mathcal{U}_{C}'\mathcal{H}^{*}(\boldsymbol{k},\boldsymbol{r},t)=-\mathcal{H}(-\boldsymbol{k},\boldsymbol{r},t)\mathcal{U}_{C}'. \end{gather} Note that $\mathcal{U}_{T}'$ and $\mathcal{U}_{C}'$ combine into the chiral symmetry, as expected. The types of these effective symmetries are determined by \begin{gather} \mathcal{U}_{T}'\mathcal{U}_{T}'^{*}=\rho_{0}\otimes(\mathcal{U}_{C}\mathcal{U}_{C}^{*})\\ \mathcal{U}_{C}'\mathcal{U}_{C}'^{*}=\rho_{0}\otimes(\mathcal{U}_{C}\mathcal{U}_{C}^{*}). \end{gather} Under the hermitian map, classifying unitary loops in $(d,D)$ dimension in classes C and D is the same as classifying hermitian matrices in $(d,D+1)$ dimension in classes CII and BDI, respectively. \subsection{Classes CI, CII, DIII, and BDI} Here we consider the symmetry classes where time-reversal, particle-hole, and chiral symmetries are all present. In this case, $\mathcal{U}_{S}=\mathcal{U}_{T}\mathcal{U}_{C}^{*}$. By $\mathcal{U}_{S}^{2}=1$, we have $\mathcal{U}_{T}^{*}\mathcal{U}_{C}\mathcal{U}_{S}^{*}=1$. This can be used to show that \begin{equation} \mathcal{U}_{S}\mathcal{U}_{C}=\mathcal{U}_{C}\mathcal{U}_{S}^{*}(\mathcal{U}_{C}\mathcal{U}_{C}^{*})(\mathcal{U}_{T}\mathcal{U}_{T}^{*}). \end{equation} Notice that $\mathcal{U}_{C}\mathcal{U}_{C}^{*}=\pm1$ and $\mathcal{U}_{T}\mathcal{U}_{T}^{*}=\pm1$ are just numbers. 
The effective Hamiltonian $\mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)$ defined in Eq.~(\ref{eq:effective_hamiltonian_chiral}) has the property \begin{equation} \mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)\mathcal{U}_{C}=(\mathcal{U}_{C}\mathcal{U}_{C}^{*})(\mathcal{U}_{T}\mathcal{U}_{T}^{*})\mathcal{U}_{C}\mathcal{H}^{*}(-\boldsymbol{k},\boldsymbol{r},t). \end{equation} This gives rise to a time-reversal or particle-hole symmetry depending on whether $(\mathcal{U}_{C}\mathcal{U}_{C}^{*})(\mathcal{U}_{T}\mathcal{U}_{T}^{*})=1$ or $-1$, respectively. Therefore, under the hermitian map, the unitary loops in $(d,D)$ dimension in classes CI, CII, DIII, and BDI map to hermitian matrices in $(d,D+1)$ dimension in classes C, AII, D, and AI, respectively. \section{Classification with additional order-two space-time symmetry \label{sec:classification_order_two}} After introducing the hermitian map, which reduces the classification of unitary loops to the classification of static hermitian matrices, or Hamiltonians, in the AZ symmetry classes, let us now assume the system supports an additional order-two space-time symmetry/antisymmetry, which is either unitary or antiunitary, as defined in Sec.~\ref{sec:order-two}. In the following, we will focus on each class separately. \subsection{Complex symmetry classes} The complex classes A and AIII are characterized by the absence of time-reversal and particle-hole symmetries. \subsubsection{Class A} Let us start with class A, with an additional symmetry realized by $\hat{\mathcal{O}}$ or $\overline{\mathcal{O}}$, whose properties are summarized as $({\rm A},\hat{\mathcal{O}}^{\epsilon_{O}})$ or $({\rm A,\overline{\mathcal{O}}^{\epsilon_{\overline{O}}}})$. For unitary symmetries realized by $\hat{\mathcal{U}}$ and $\overline{\mathcal{U}}$, one can fix $\epsilon_{U}=1$ or $\epsilon_{\overline{U}}=1$. 
\paragraph{$\hat{\mathcal{O}}=\hat{\mathcal{U}}_{0}$} We have \begin{equation} \hat{\mathcal{U}}_{0}'\mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)\hat{\mathcal{U}}_{0}'^{-1}=\mathcal{H}(-\boldsymbol{k}_{\parallel},\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},t), \end{equation} where $\hat{\mathcal{U}}_{0}'=\rho_{0}\otimes\hat{\mathcal{U}}_{0}$ behaves as an order-two crystalline symmetry if one regards $t\in S^{1}$ as an additional defect surrounding parameter. Recalling that $\mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)$ has a chiral symmetry realized by the operator $\hat{\mathcal{S}}'=\mathcal{U}_{S}'=\rho_{z}\otimes\mathbb{I}$, we have \begin{equation} [\hat{\mathcal{U}}_{0}',\hat{\mathcal{S}}']=0. \end{equation} This means that under the hermitian map, unitary loops with symmetry $({\rm A},\hat{\mathcal{U}}_{0}^{+})$ in dimension $(d,d_{\parallel},D,D_{\parallel})$ are mapped to static Hamiltonians with symmetry $({\rm AIII},\hat{\mathcal{U}}_{+}^{+})$ in dimension $(d,d_{\parallel},D+1,D_{\parallel})$. Here, we use the notation $({\rm AIII},\hat{\mathcal{O}}_{\eta_{S}}^{\epsilon_{O}})$ to denote class AIII with an additional symmetry realized by $\hat{\mathcal{O}}$, which squares to $\epsilon_{O}$ and commutes ($\eta_{S}=1$) or anticommutes ($\eta_{S}=-1$) with the chiral symmetry operator $\hat{\mathcal{S}}'$. One can also replace $\hat{\mathcal{O}}$ by $\overline{\mathcal{O}}$ to define class AIII with an additional antisymmetry in a similar way. \paragraph{$\hat{\mathcal{O}}=\hat{\mathcal{U}}_{T/2}$} We have \begin{equation} \hat{\mathcal{U}}_{T/2}'\mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)\hat{\mathcal{U}}_{T/2}'^{-1}=\mathcal{H}(-\boldsymbol{k}_{\parallel},\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},T-t), \end{equation} where $\hat{\mathcal{U}}_{T/2}'=\rho_{x}\otimes\hat{\mathcal{U}}_{T/2}$, which satisfies $\{\hat{\mathcal{U}}_{T/2}',\hat{\mathcal{S}}'\}=0$ and $\hat{\mathcal{U}}_{T/2}'^{2}=1$. 
Since $t\in S^{1}$, if we shift the origin by defining $t=\frac{T}{2}+t'$, and use $t'\in S^{1}$ instead of $t$, then the map $t\to T-t$ becomes $t'\to-t'$. Now $t'$ can be regarded as an additional defect surrounding coordinate which flips under the order-two symmetry. Under the hermitian map, unitary loops with symmetry $({\rm A},\hat{\mathcal{U}}_{T/2}^{+})$ in dimension $(d,d_{\parallel},D,D_{\parallel})$ are mapped to static Hamiltonians with symmetry $({\rm AIII},\hat{\mathcal{U}}_{-}^{+})$ in dimension $(d,d_{\parallel},D+1,D_{\parallel}+1)$. \paragraph{$\overline{\mathcal{O}}=\overline{\mathcal{U}}_{s}$} The unitary antisymmetry $\overline{\mathcal{U}}_{s}$ leads to an order-two symmetry on $\mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)$ with \begin{equation} \overline{\mathcal{U}}_{s}'\mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)\overline{\mathcal{U}}_{s}'^{-1}=\mathcal{H}(-\boldsymbol{k}_{\parallel},\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},t), \end{equation} where $\overline{\mathcal{U}}_{s}'=\rho_{x}\otimes\overline{\mathcal{U}}_{s}$. Moreover, we have $\overline{\mathcal{U}}_{s}'^{2}=1$ and $\{\overline{\mathcal{U}}_{s}',\hat{\mathcal{S}}'\}=0$. Under the hermitian map, unitary loops with symmetry $({\rm A},\overline{\mathcal{U}}_{s}^{+})$ in dimension $(d,d_{\parallel},D,D_{\parallel})$ are mapped to static Hamiltonians with symmetry $({\rm AIII},\hat{\mathcal{U}}_{-}^{+})$ in dimension $(d,d_{\parallel},D+1,D_{\parallel})$. \paragraph{$\hat{\mathcal{O}}=\hat{\mathcal{A}}_{s}$} We have \begin{equation} \hat{\mathcal{A}}_{s}'\mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)\hat{\mathcal{A}}_{s}'^{-1}=\mathcal{H}(\boldsymbol{k}_{\parallel},-\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},t) \end{equation} with $\hat{\mathcal{A}}_{s}'=\rho_{x}\otimes\hat{\mathcal{A}}_{s}$. Moreover, we have $\{\hat{\mathcal{A}}_{s}',\hat{\mathcal{S}}'\}=0$ and $\hat{\mathcal{A}}_{s}'^{2}=\hat{\mathcal{A}}_{s}^{2}$. 
Thus, under the hermitian map, unitary loops with symmetry $({\rm A},\hat{\mathcal{A}}_{s}^{\pm})$ in dimension $(d,d_{\parallel},D,D_{\parallel})$ are mapped to static Hamiltonians with symmetry $({\rm AIII},\hat{\mathcal{A}}_{-}^{\pm})$ in dimension $(d,d_{\parallel},D+1,D_{\parallel})$. \paragraph{$\overline{\mathcal{O}}=\overline{\mathcal{A}}_{0}$} We have \begin{equation} \overline{\mathcal{A}}_{0}'\mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)\overline{\mathcal{A}}_{0}'^{-1}=\mathcal{H}(\boldsymbol{k}_{\parallel},-\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},t), \end{equation} with $\overline{\mathcal{A}}_{0}'=\rho_{0}\otimes\overline{\mathcal{A}}_{0}$, which satisfies $\overline{\mathcal{A}}_{0}'^{2}=\overline{\mathcal{A}}_{0}^{2}$ and $[\overline{\mathcal{A}}_{0}',\hat{\mathcal{S}}']=0$. Under the hermitian map, unitary loops with symmetry $({\rm A},\overline{\mathcal{A}}_{0}^{\pm})$ in dimension $(d,d_{\parallel},D,D_{\parallel})$ are mapped to static Hamiltonians with symmetry $({\rm AIII},\hat{\mathcal{A}}_{+}^{\pm})$ in dimension $(d,d_{\parallel},D+1,D_{\parallel})$. \paragraph{$\overline{\mathcal{O}}=\overline{\mathcal{A}}_{T/2}$} We have \begin{equation} \overline{\mathcal{A}}_{T/2}'\mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)\overline{\mathcal{A}}_{T/2}'^{-1}=\mathcal{H}(\boldsymbol{k}_{\parallel},-\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},T-t) \end{equation} with $\overline{\mathcal{A}}_{T/2}'=\rho_{x}\otimes\overline{\mathcal{A}}_{T/2}$, which satisfies $\overline{\mathcal{A}}_{T/2}'^{2}=\overline{\mathcal{A}}_{T/2}^{2}$ and $\{\overline{\mathcal{A}}_{T/2}',\hat{\mathcal{S}}'\}=0$. Under the hermitian map, unitary loops with symmetry $({\rm A},\overline{\mathcal{A}}_{T/2}^{\pm})$ in dimension $(d,d_{\parallel},D,D_{\parallel})$ are mapped to static Hamiltonians with symmetry $({\rm AIII},\hat{\mathcal{A}}_{-}^{\pm})$ in dimension $(d,d_{\parallel},D+1,D_{\parallel}+1)$. 
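As an illustration of how the doubled operators above are obtained, consider any operator $V$ acting on the loop as $VU(\boldsymbol{k})V^{-1}=U^{\dagger}(\tilde{\boldsymbol{k}})$, with $\tilde{\boldsymbol{k}}$ denoting the transformed arguments (this covers the $\hat{\mathcal{U}}_{T/2}$, $\overline{\mathcal{U}}_{s}$ and $\hat{\mathcal{A}}_{s}$ cases). A direct block computation with $\rho_{x}\otimes V$ gives \begin{equation} (\rho_{x}\otimes V)\begin{pmatrix}0 & U(\boldsymbol{k})\\ U^{\dagger}(\boldsymbol{k}) & 0\end{pmatrix}(\rho_{x}\otimes V)^{-1}=\begin{pmatrix}0 & VU^{\dagger}(\boldsymbol{k})V^{-1}\\ VU(\boldsymbol{k})V^{-1} & 0\end{pmatrix}=\mathcal{H}(\tilde{\boldsymbol{k}}), \end{equation} which is why the dagger-type actions are implemented with $\rho_{x}$, anticommuting with $\hat{\mathcal{S}}'=\rho_{z}\otimes\mathbb{I}$, while the dagger-free actions, such as $\hat{\mathcal{U}}_{0}$ and $\overline{\mathcal{A}}_{0}$, are implemented with $\rho_{0}$, commuting with $\hat{\mathcal{S}}'$.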
\subsubsection{Class AIII} In class AIII, we have a chiral symmetry realized by $\hat{\mathcal{S}}$. We assume an additional order-two symmetry $\hat{\mathcal{U}}_{\eta_{S}}^{\epsilon_{U}}$ or antisymmetry $\overline{\mathcal{U}}_{\overline{\eta}_{S}}^{\epsilon_{\overline{U}}}$. Moreover, we can fix $\epsilon_{U}=1$ and $\epsilon_{\overline{U}}=1$ for the unitary symmetries and antisymmetries realized by $\hat{\mathcal{U}}$ and $\overline{\mathcal{U}}$, respectively. For unitary (anti)symmetries, note that $\overline{\mathcal{U}}_{\eta_{S}}$ in class AIII is essentially the same as $\hat{\mathcal{U}}_{\eta_{S}}$, because they can be converted into each other via $\overline{\mathcal{U}}_{\eta_{S}}=\hat{\mathcal{S}}\hat{\mathcal{U}}_{\eta_{S}}$. Similarly, for antiunitary (anti)symmetries, $\hat{\mathcal{A}}_{\eta_{S}}^{\epsilon_{A}}$ and $\overline{\mathcal{A}}_{\eta_{S}}^{\epsilon_{A}\eta_{S}}$ are equivalent, since $\hat{\mathcal{A}}_{\eta_{S}}^{\epsilon_{A}}=\hat{\mathcal{S}}\overline{\mathcal{A}}_{\eta_{S}}^{\epsilon_{A}\eta_{S}}$. Hence, in the following, we only discuss unitary and antiunitary symmetries. \paragraph{$\hat{\mathcal{O}}=\hat{\mathcal{U}}_{0}$} We have \begin{equation} \hat{\mathcal{U}}_{0}\mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)\hat{\mathcal{U}}_{0}^{-1}=\eta_{S}\mathcal{H}(-\boldsymbol{k}_{\parallel},\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},t). \end{equation} Under the hermitian map, unitary loops with symmetry $({\rm AIII},\hat{\mathcal{U}}_{0,+}^{+})$ and $({\rm AIII},\hat{\mathcal{U}}_{0,-}^{+})$ in dimension $(d,d_{\parallel},D,D_{\parallel})$ are mapped to static Hamiltonians with symmetry $({\rm A},\hat{\mathcal{U}}^{+})$ and $({\rm A},\overline{\mathcal{U}}^{+})$ in dimension $(d,d_{\parallel},D+1,D_{\parallel})$, respectively. 
\paragraph{$\hat{\mathcal{O}}=\hat{\mathcal{U}}_{T/2}$} We have \begin{equation} \hat{\mathcal{S}}\hat{\mathcal{U}}_{T/2}\mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)(\hat{\mathcal{S}}\hat{\mathcal{U}}_{T/2})^{-1}=\eta_{S}\mathcal{H}(-\boldsymbol{k}_{\parallel},\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},T-t). \end{equation} Under the hermitian map, unitary loops with symmetry $({\rm AIII},\hat{\mathcal{U}}_{T/2,+}^{+})$ and $({\rm AIII},\hat{\mathcal{U}}_{T/2,-}^{+})$ in dimension $(d,d_{\parallel},D,D_{\parallel})$ are mapped to static Hamiltonians with symmetry $({\rm A},\hat{\mathcal{U}}^{+})$ and $({\rm A},\overline{\mathcal{U}}^{+})$ in dimension $(d,d_{\parallel},D+1,D_{\parallel}+1)$, respectively. \paragraph{$\hat{\mathcal{O}}=\hat{\mathcal{A}}_{0}$} We have \begin{equation} \hat{\mathcal{S}}\hat{\mathcal{A}}_{0}\mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)(\hat{\mathcal{S}}\hat{\mathcal{A}}_{0})^{-1}=\eta_{S}\mathcal{H}(\boldsymbol{k}_{\parallel},-\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},t). \end{equation} Under the hermitian map, unitary loops with symmetry $({\rm AIII},\hat{\mathcal{A}}_{0,+}^{\pm})$ and $({\rm AIII},\hat{\mathcal{A}}_{0,-}^{\pm})$ in dimension $(d,d_{\parallel},D,D_{\parallel})$ are mapped to static Hamiltonians with symmetry $({\rm A},\hat{\mathcal{A}}^{\pm})$ and $({\rm A},\overline{\mathcal{A}}^{\mp})$ in dimension $(d,d_{\parallel},D+1,D_{\parallel})$, respectively. \paragraph{$\hat{\mathcal{O}}=\hat{\mathcal{A}}_{T/2}$} We have \begin{equation} \hat{\mathcal{A}}_{T/2}\mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)\hat{\mathcal{A}}_{T/2}^{-1}=\eta_{S}\mathcal{H}(\boldsymbol{k}_{\parallel},-\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},t). 
\end{equation} Under the hermitian map, unitary loops with symmetry $({\rm AIII},\hat{\mathcal{A}}_{T/2,+}^{\pm})$ and $({\rm AIII},\hat{\mathcal{A}}_{T/2,-}^{\pm})$ in dimension $(d,d_{\parallel},D,D_{\parallel})$ are mapped to static Hamiltonians with symmetry $({\rm A},\hat{\mathcal{A}}^{\pm})$ and $({\rm A},\overline{\mathcal{A}}^{\pm})$ in dimension $(d,d_{\parallel},D+1,D_{\parallel}+1)$, respectively. \subsection{Real symmetry classes} Now let us consider real symmetry classes, where at least one antiunitary symmetry is present. In classes AI and AII, only time-reversal symmetry is present. We have the following equivalence relations between the additional order-two symmetries/antisymmetries \begin{gather} \hat{\mathcal{U}}_{\eta_{T}}^{\epsilon_{U}}=i\hat{\mathcal{U}}_{-\eta_{T}}^{-\epsilon_{U}}=\hat{\mathcal{T}}\hat{\mathcal{A}}_{\eta_{T}}^{\eta_{T}\epsilon_{T}\epsilon_{U}}=i\hat{\mathcal{T}}\hat{\mathcal{A}}_{-\eta_{T}}^{-\eta_{T}\epsilon_{T}\epsilon_{U}},\\ \overline{\mathcal{U}}_{\overline{\eta}_{T}}^{\overline{\epsilon}_{U}}=i\overline{\mathcal{U}}_{-\overline{\eta}_{T}}^{-\overline{\epsilon}_{U}}=\hat{\mathcal{T}}\overline{\mathcal{A}}_{\overline{\eta}_{T}}^{\overline{\eta}_{T}\epsilon_{T}\overline{\epsilon}_{U}}=i\hat{\mathcal{T}}\overline{\mathcal{A}}_{-\overline{\eta}_{T}}^{-\overline{\eta}_{T}\epsilon_{T}\overline{\epsilon}_{U}}, \end{gather} where $\epsilon_{U}=\hat{\mathcal{U}}^{2}$, $\overline{\epsilon}_{U}=\overline{\mathcal{U}}^{2}$, and $\epsilon_{T}=\hat{\mathcal{T}}^{2}$. We thus only need to consider the four cases $\mathcal{\hat{U}}_{+}^{+}$, $\hat{\mathcal{U}}_{-}^{+}$, $\overline{\mathcal{U}}_{+}^{+}$, and $\overline{\mathcal{U}}_{-}^{+}$. 
In classes C and D, the particle-hole symmetry leads to the following equivalence relations between the additional order-two symmetries/antisymmetries \begin{gather} \hat{\mathcal{U}}_{\eta_{C}}^{\epsilon_{U}}=i\hat{\mathcal{U}}_{-\eta_{C}}^{-\epsilon_{U}}=\hat{\mathcal{C}}\overline{\mathcal{A}}_{\eta_{C}}^{\eta_{C}\epsilon_{C}\epsilon_{U}}=i\hat{\mathcal{C}}\overline{\mathcal{A}}_{-\eta_{C}}^{\eta_{C}\epsilon_{C}\epsilon_{U}},\\ \overline{\mathcal{U}}_{\overline{\eta}_{C}}^{\overline{\epsilon}_{U}}=i\overline{\mathcal{U}}_{-\overline{\eta}_{C}}^{-\overline{\epsilon}_{U}}=\hat{\mathcal{C}}\hat{\mathcal{A}}_{\overline{\eta}_{C}}^{\overline{\eta}_{C}\epsilon_{C}\overline{\epsilon}_{U}}=i\hat{\mathcal{C}}\hat{\mathcal{A}}_{-\overline{\eta}_{C}}^{\overline{\eta}_{C}\epsilon_{C}\overline{\epsilon}_{U}}, \end{gather} where $\epsilon_{C}=\hat{\mathcal{C}}^{2}$. Again, we only need to consider the four cases $\mathcal{\hat{U}}_{+}^{+}$, $\hat{\mathcal{U}}_{-}^{+}$, $\overline{\mathcal{U}}_{+}^{+}$, and $\overline{\mathcal{U}}_{-}^{+}$. 
Finally, in classes BDI, DIII, CII and CI, with time-reversal, particle-hole and chiral symmetries all together, we have \begin{align} & \hat{\mathcal{U}}_{\eta_{T},\eta_{C}}^{\epsilon_{U}}=i\hat{\mathcal{U}}_{-\eta_{T},-\eta_{C}}^{-\epsilon_{U}}=\hat{\mathcal{T}}\hat{\mathcal{A}}_{\eta_{T},\eta_{C}}^{\eta_{T}\epsilon_{T}\epsilon_{U}}=i\hat{\mathcal{T}}\hat{\mathcal{A}}_{-\eta_{T},-\eta_{C}}^{\eta_{T}\epsilon_{T}\epsilon_{U}} \nonumber \\ &=\hat{\mathcal{C}}\overline{\mathcal{A}}_{\eta_{T},\eta_{C}}^{\eta_{C}\epsilon_{C}\epsilon_{U}}=i\hat{\mathcal{C}}\overline{\mathcal{A}}_{-\eta_{T},-\eta_{C}}^{\eta_{C}\epsilon_{C}\epsilon_{U}} \end{align} \begin{align} &\overline{\mathcal{U}}_{\overline{\eta}_{T},\overline{\eta}_{C}}^{\overline{\epsilon}_{U}}=i\overline{\mathcal{U}}_{-\overline{\eta}_{T},-\overline{\eta}_{C}}^{-\overline{\epsilon}_{U}}=\hat{\mathcal{T}}\overline{\mathcal{A}}_{\overline{\eta}_{T},\overline{\eta}_{C}}^{\overline{\eta}_{T}\epsilon_{T}\overline{\epsilon}_{U}}=i\hat{\mathcal{T}}\overline{\mathcal{A}}_{-\overline{\eta}_{T},-\overline{\eta}_{C}}^{\overline{\eta}_{T}\epsilon_{T}\overline{\epsilon}_{U}}\nonumber \\ &=\hat{\mathcal{C}}\hat{\mathcal{A}}_{\overline{\eta}_{T},\overline{\eta}_{C}}^{\overline{\eta}_{C}\epsilon_{C}\overline{\epsilon}_{U}}=i\hat{\mathcal{C}}\hat{\mathcal{A}}_{-\overline{\eta}_{T},-\overline{\eta}_{C}}^{\overline{\eta}_{C}\epsilon_{C}\overline{\epsilon}_{U}}. \end{align} Hence, only the four cases $\hat{\mathcal{U}}_{+,+}^{+}$, $\hat{\mathcal{U}}_{+,-}^{+}$, $\hat{\mathcal{U}}_{-,-}^{+}$ and $\hat{\mathcal{U}}_{-,+}^{+}$ need to be considered. 
\subsubsection{Classes AI and AII} \paragraph{$\hat{\mathcal{O}}=\hat{\mathcal{U}}_{0}$} The new hermitian matrix $\mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)$ under the hermitian map defined by Eq.(\ref{eq:effective_hamiltonian_nonchiral}) acquires new time-reversal and particle-hole symmetries, realized by $\hat{\mathcal{T}}'=\rho_{x}\otimes\hat{\mathcal{T}}$ and $\hat{\mathcal{C}}'=i\rho_{y}\otimes\hat{\mathcal{T}}$, respectively. Due to the order-two symmetry realized by $\hat{\mathcal{U}}_{0}$, we have \begin{equation} \hat{\mathcal{U}}_{0}'\mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)\hat{\mathcal{U}}_{0}'^{-1}=\mathcal{H}(-\boldsymbol{k}_{\parallel},\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},t), \end{equation} with $\hat{\mathcal{U}}_{0}'=\rho_{0}\otimes\hat{\mathcal{U}}_{0}$. Moreover, we have \begin{gather} \hat{\mathcal{U}}_{0}'\hat{\mathcal{T}}'=\eta_{T}\hat{\mathcal{T}}'\hat{\mathcal{U}}_{0}'\\ \hat{\mathcal{U}}_{0}'\hat{\mathcal{C}}'=\eta_{T}\hat{\mathcal{C}}'\hat{\mathcal{U}}_{0}', \end{gather} and $\hat{\mathcal{U}}_{0}'^{2}=\epsilon_{U}$. Under the hermitian map, unitary loops with symmetry $({\rm AI},\hat{\mathcal{U}}_{0,\eta_{T}}^{\epsilon_{U}})$ and $({\rm AII},\hat{\mathcal{U}}_{0,\eta_{T}}^{\epsilon_{U}})$ in dimension $(d,d_{\parallel},D,D_{\parallel})$ are mapped to static Hamiltonians with symmetry $({\rm CI},\hat{\mathcal{U}}_{\eta_{T},\eta_{T}}^{\epsilon_{U}})$ and $({\rm DIII},\hat{\mathcal{U}}_{\eta_{T},\eta_{T}}^{\epsilon_{U}})$ in dimension $(d,d_{\parallel},D+1,D_{\parallel})$, respectively. 
\paragraph{$\hat{\mathcal{O}}=\hat{\mathcal{U}}_{T/2}$} Due to the order-two symmetry realized by $\hat{\mathcal{U}}_{T/2}$, we have \begin{equation} \hat{\mathcal{U}}_{T/2}'\mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)\hat{\mathcal{U}}_{T/2}'^{-1}=\mathcal{H}(-\boldsymbol{k}_{\parallel},\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},T-t), \end{equation} with $\hat{\mathcal{U}}_{T/2}'=\rho_{x}\otimes\hat{\mathcal{U}}_{T/2}$, which satisfies \begin{gather} \hat{\mathcal{U}}_{T/2}'\hat{\mathcal{T}}'=\eta_{T}\hat{\mathcal{T}}'\hat{\mathcal{U}}_{T/2}'\\ \hat{\mathcal{U}}_{T/2}'\hat{\mathcal{C}}'=-\eta_{T}\hat{\mathcal{C}}'\hat{\mathcal{U}}_{T/2}', \end{gather} and $\hat{\mathcal{U}}_{T/2}'^{2}=\epsilon_{U}$. Under the hermitian map, unitary loops with symmetry $({\rm AI},\hat{\mathcal{U}}_{T/2,\eta_{T}}^{\epsilon_{U}})$ and $({\rm AII},\hat{\mathcal{U}}_{T/2,\eta_{T}}^{\epsilon_{U}})$ in dimension $(d,d_{\parallel},D,D_{\parallel})$ are mapped to static Hamiltonians with symmetry $({\rm CI},\hat{\mathcal{U}}_{\eta_{T},-\eta_{T}}^{\epsilon_{U}})$ and $({\rm DIII},\hat{\mathcal{U}}_{\eta_{T},-\eta_{T}}^{\epsilon_{U}})$ in dimension $(d,d_{\parallel},D+1,D_{\parallel}+1)$, respectively. 
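The relative sign between the $\hat{\mathcal{T}}'$ and $\hat{\mathcal{C}}'$ relations above is fixed entirely by the $\rho$-matrix algebra: $\rho_{x}$ commutes with the $\rho_{x}$ factor of $\hat{\mathcal{T}}'$ but anticommutes with the $i\rho_{y}$ factor of $\hat{\mathcal{C}}'$, while $\rho_{0}$ commutes with both. A minimal numerical sanity check of these Pauli-algebra signs (purely illustrative, not part of the derivation; note both $\rho_{x}$ and $i\rho_{y}$ are real matrices, so the complex conjugation carried by the antiunitary operators does not affect the $\rho$ factors):

```python
import numpy as np

# Pauli matrices acting on the doubling (rho) degree of freedom
rho0 = np.eye(2, dtype=complex)
rhox = np.array([[0, 1], [1, 0]], dtype=complex)
rhoy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def sign(a, b):
    """+1 if a and b commute, -1 if they anticommute."""
    if np.allclose(a @ b, b @ a):
        return +1
    if np.allclose(a @ b, -(b @ a)):
        return -1
    raise ValueError("neither commutes nor anticommutes")

# U'_0 = rho_0 (x) U_0 passes through both T' = rho_x (x) T and
# C' = i rho_y (x) T with no extra sign:
assert sign(rho0, rhox) == +1 and sign(rho0, 1j * rhoy) == +1
# U'_{T/2} = rho_x (x) U_{T/2} keeps the sign with T' but picks up
# an extra -1 with C', turning eta_T into -eta_T in the C' relation:
assert sign(rhox, rhox) == +1
assert sign(rhox, 1j * rhoy) == -1
print("rho-algebra signs check out")
```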
\paragraph{$\overline{\mathcal{O}}=\overline{\mathcal{U}}_{0}$} Due to the order-two antisymmetry realized by $\overline{\mathcal{U}}_{0}$, we have \begin{equation} \overline{\mathcal{U}}_{0}'\mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)\overline{\mathcal{U}}_{0}'^{-1}=\mathcal{H}(-\boldsymbol{k}_{\parallel},\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},t), \end{equation} with $\overline{\mathcal{U}}_{0}'=\rho_{x}\otimes\overline{\mathcal{U}}_{0}$, which satisfies \begin{gather} \overline{\mathcal{U}}_{0}'\hat{\mathcal{T}}'=\overline{\eta}_{T}\hat{\mathcal{T}}'\overline{\mathcal{U}}_{0}'\\ \overline{\mathcal{U}}_{0}'\hat{\mathcal{C}}'=-\overline{\eta}_{T}\hat{\mathcal{C}}'\overline{\mathcal{U}}_{0}', \end{gather} and $\overline{\mathcal{U}}_{0}'^{2}=\overline{\epsilon}_{U}$. Under the hermitian map, unitary loops with symmetry $({\rm AI},\overline{\mathcal{U}}_{0,\overline{\eta}_{T}}^{\overline{\epsilon}_{U}})$ and $({\rm AII},\overline{\mathcal{U}}_{0,\overline{\eta}_{T}}^{\overline{\epsilon}_{U}})$ in dimension $(d,d_{\parallel},D,D_{\parallel})$ are mapped to static Hamiltonians with symmetry $({\rm CI},\hat{\mathcal{U}}_{\overline{\eta}_{T},-\overline{\eta}_{T}}^{\overline{\epsilon}_{U}})$ and $({\rm DIII},\hat{\mathcal{U}}_{\overline{\eta}_{T},-\overline{\eta}_{T}}^{\overline{\epsilon}_{U}})$ in dimension $(d,d_{\parallel},D+1,D_{\parallel})$, respectively. 
\paragraph{$\overline{\mathcal{O}}=\overline{\mathcal{U}}_{T/2}$} Due to the order-two antisymmetry realized by $\overline{\mathcal{U}}_{T/2}$, we have \begin{equation} \overline{\mathcal{U}}_{T/2}'\mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)\overline{\mathcal{U}}_{T/2}'^{-1}=\mathcal{H}(-\boldsymbol{k}_{\parallel},\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},T-t), \end{equation} with $\overline{\mathcal{U}}_{T/2}'=\rho_{0}\otimes\overline{\mathcal{U}}_{T/2}$, which satisfies \begin{gather} \overline{\mathcal{U}}_{T/2}'\hat{\mathcal{T}}'=\overline{\eta}_{T}\hat{\mathcal{T}}'\overline{\mathcal{U}}_{T/2}'\\ \overline{\mathcal{U}}_{T/2}'\hat{\mathcal{C}}'=\overline{\eta}_{T}\hat{\mathcal{C}}'\overline{\mathcal{U}}_{T/2}', \end{gather} and $\overline{\mathcal{U}}_{T/2}'^{2}=\overline{\epsilon}_{U}$. Under the hermitian map, unitary loops with symmetry $({\rm AI},\overline{\mathcal{U}}_{T/2,\overline{\eta}_{T}}^{\overline{\epsilon}_{U}})$ and $({\rm AII},\overline{\mathcal{U}}_{T/2,\overline{\eta}_{T}}^{\overline{\epsilon}_{U}})$ in dimension $(d,d_{\parallel},D,D_{\parallel})$ are mapped to static Hamiltonians with symmetry $({\rm CI},\hat{\mathcal{U}}_{\overline{\eta}_{T},\overline{\eta}_{T}}^{\overline{\epsilon}_{U}})$ and $({\rm DIII},\hat{\mathcal{U}}_{\overline{\eta}_{T},\overline{\eta}_{T}}^{\overline{\epsilon}_{U}})$ in dimension $(d,d_{\parallel},D+1,D_{\parallel}+1)$, respectively. \subsubsection{Classes C and D} \paragraph{$\hat{\mathcal{O}}=\hat{\mathcal{U}}_{0}$} The new hermitian matrix $\mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)$ under the hermitian map defined by Eq.(\ref{eq:effective_hamiltonian_nonchiral}) acquires new time-reversal and particle-hole symmetries, realized by $\hat{\mathcal{T}}'=\rho_{0}\otimes\hat{\mathcal{C}}$ and $\hat{\mathcal{C}}'=\rho_{z}\otimes\hat{\mathcal{C}}$, respectively. 
Due to the order-two symmetry realized by $\hat{\mathcal{U}}_{0}$, we have \begin{equation} \hat{\mathcal{U}}_{0}'\mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)\hat{\mathcal{U}}_{0}'^{-1}=\mathcal{H}(-\boldsymbol{k}_{\parallel},\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},t), \end{equation} with $\hat{\mathcal{U}}_{0}'=\rho_{0}\otimes\hat{\mathcal{U}}_{0}$, which satisfies \begin{gather} \hat{\mathcal{U}}_{0}'\hat{\mathcal{T}}'=\eta_{C}\hat{\mathcal{T}}'\hat{\mathcal{U}}_{0}'\\ \hat{\mathcal{U}}_{0}'\hat{\mathcal{C}}'=\eta_{C}\hat{\mathcal{C}}'\hat{\mathcal{U}}_{0}', \end{gather} and $\hat{\mathcal{U}}_{0}'^{2}=\epsilon_{U}$. Under the hermitian map, unitary loops with symmetry $({\rm C},\hat{\mathcal{U}}_{0,\eta_{C}}^{\epsilon_{U}})$ and $({\rm D},\hat{\mathcal{U}}_{0,\eta_{C}}^{\epsilon_{U}})$ in dimension $(d,d_{\parallel},D,D_{\parallel})$ are mapped to static Hamiltonians with symmetry $({\rm CII},\hat{\mathcal{U}}_{\eta_{C},\eta_{C}}^{\epsilon_{U}})$ and $({\rm BDI},\hat{\mathcal{U}}_{\eta_{C},\eta_{C}}^{\epsilon_{U}})$ in dimension $(d,d_{\parallel},D+1,D_{\parallel})$, respectively. \paragraph{$\hat{\mathcal{O}}=\hat{\mathcal{U}}_{T/2}$} Due to the order-two symmetry realized by $\hat{\mathcal{U}}_{T/2}$, we have \begin{equation} \hat{\mathcal{U}}_{T/2}'\mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)\hat{\mathcal{U}}_{T/2}'^{-1}=\mathcal{H}(-\boldsymbol{k}_{\parallel},\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},T-t), \end{equation} with $\hat{\mathcal{U}}_{T/2}'=\rho_{x}\otimes\hat{\mathcal{U}}_{T/2}$, which satisfies \begin{gather} \hat{\mathcal{U}}_{T/2}'\hat{\mathcal{T}}'=\eta_{C}\hat{\mathcal{T}}'\hat{\mathcal{U}}_{T/2}'\\ \hat{\mathcal{U}}_{T/2}'\hat{\mathcal{C}}'=-\eta_{C}\hat{\mathcal{C}}'\hat{\mathcal{U}}_{T/2}', \end{gather} and $\hat{\mathcal{U}}_{T/2}'^{2}=\epsilon_{U}$. 
Under the hermitian map, unitary loops with symmetry $({\rm C},\hat{\mathcal{U}}_{T/2,\eta_{C}}^{\epsilon_{U}})$ and $({\rm D},\hat{\mathcal{U}}_{T/2,\eta_{C}}^{\epsilon_{U}})$ in dimension $(d,d_{\parallel},D,D_{\parallel})$ are mapped to static Hamiltonians with symmetry $({\rm CII},\hat{\mathcal{U}}_{\eta_{C},-\eta_{C}}^{\epsilon_{U}})$ and $({\rm BDI},\hat{\mathcal{U}}_{\eta_{C},-\eta_{C}}^{\epsilon_{U}})$ in dimension $(d,d_{\parallel},D+1,D_{\parallel}+1)$, respectively. \paragraph{$\overline{\mathcal{O}}=\overline{\mathcal{U}}_{s}$} Due to the order-two antisymmetry realized by $\overline{\mathcal{U}}_{s}$, we have \begin{equation} \overline{\mathcal{U}}_{s}'\mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)\overline{\mathcal{U}}_{s}'^{-1}=\mathcal{H}(-\boldsymbol{k}_{\parallel},\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},t), \end{equation} with $\overline{\mathcal{U}}_{s}'=\rho_{x}\otimes\overline{\mathcal{U}}_{s}$, which satisfies \begin{gather} \overline{\mathcal{U}}_{s}'\hat{\mathcal{T}}'=\overline{\eta}_{C}\hat{\mathcal{T}}'\overline{\mathcal{U}}_{s}'\\ \overline{\mathcal{U}}_{s}'\hat{\mathcal{C}}'=-\overline{\eta}_{C}\hat{\mathcal{C}}'\overline{\mathcal{U}}_{s}', \end{gather} and $\overline{\mathcal{U}}_{s}'^{2}=\overline{\epsilon}_{U}$. Hence, under the hermitian map, unitary loops with symmetry $({\rm C},\overline{\mathcal{U}}_{s,\overline{\eta}_{C}}^{\overline{\epsilon}_{U}})$ and $({\rm D},\overline{\mathcal{U}}_{s,\overline{\eta}_{C}}^{\overline{\epsilon}_{U}})$ in dimension $(d,d_{\parallel},D,D_{\parallel})$ are mapped to static Hamiltonians with symmetry $({\rm CII},\hat{\mathcal{U}}_{\overline{\eta}_{C},-\overline{\eta}_{C}}^{\overline{\epsilon}_{U}})$ and $({\rm BDI},\hat{\mathcal{U}}_{\overline{\eta}_{C},-\overline{\eta}_{C}}^{\overline{\epsilon}_{U}})$ in dimension $(d,d_{\parallel},D+1,D_{\parallel})$, respectively. 
\subsubsection{Classes CI, CII, DIII, and BDI} In these classes, the time-reversal, particle-hole and chiral symmetries are all present. Without loss of generality, we assume $\hat{\mathcal{S}}=\hat{\mathcal{T}}\hat{\mathcal{C}}$ and $\hat{\mathcal{S}}^{2}=1$. The hermitian matrix $\mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)$ defined according to Eq.(\ref{eq:effective_hamiltonian_chiral}) has either time-reversal or particle-hole symmetry realized by \begin{equation} (\epsilon_{C}\epsilon_{T})\hat{\mathcal{C}}\mathcal{H}(-\boldsymbol{k},\boldsymbol{r},t)=\mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)\hat{\mathcal{C}}, \end{equation} depending on whether $\epsilon_{C}\epsilon_{T}$ is $1$ or $-1$. \paragraph{$\hat{\mathcal{O}}=\hat{\mathcal{U}}_{0}$} Due to the order-two symmetry realized by $\hat{\mathcal{U}}_{0}$, we have \begin{equation} \hat{\mathcal{U}}_{0}\mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)\hat{\mathcal{U}}_{0}^{-1}=\eta_{T}\eta_{C}\mathcal{H}(-\boldsymbol{k}_{\parallel},\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},t), \end{equation} with $\hat{\mathcal{U}}_{0}\hat{\mathcal{C}}=\eta_{C}\hat{\mathcal{C}}\hat{\mathcal{U}}_{0}$ and $\hat{\mathcal{U}}_{0}^{2}=\epsilon_{U}$. Under the hermitian map, unitary loops in dimension $(d,d_{\parallel},D,D_{\parallel})$ with a given symmetry are mapped to static Hamiltonians in dimension $(d,d_{\parallel},D+1,D_{\parallel})$ with another symmetry according to \begin{equation} ({\rm X},\hat{\mathcal{U}}_{0,\eta_{T},\eta_{C}}^{\epsilon_{U}})\to\begin{cases} ({\rm Y},\hat{\mathcal{U}}_{\eta_{C}}^{\epsilon_{U}}) & \eta_{T}\eta_{C}=1\\ ({\rm Y},\overline{\mathcal{U}}_{\eta_{C}}^{\epsilon_{U}}) & \eta_{T}\eta_{C}=-1 \end{cases}, \end{equation} with ${\rm X}={\rm CI,CII,DIII,BDI}$, and ${\rm Y}={\rm C,AII,D,AI}$ respectively. 
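The factor $\eta_{T}\eta_{C}$ above is simply the commutation factor between $\hat{\mathcal{U}}_{0}$ and the chiral operator: since $\hat{\mathcal{S}}=\hat{\mathcal{T}}\hat{\mathcal{C}}$, a one-line check gives \begin{equation} \hat{\mathcal{U}}_{0}\hat{\mathcal{S}}=\hat{\mathcal{U}}_{0}\hat{\mathcal{T}}\hat{\mathcal{C}}=\eta_{T}\hat{\mathcal{T}}\hat{\mathcal{U}}_{0}\hat{\mathcal{C}}=\eta_{T}\eta_{C}\hat{\mathcal{S}}\hat{\mathcal{U}}_{0}, \end{equation} i.e. $\eta_{S}=\eta_{T}\eta_{C}$. The same manipulation, combined with $\hat{\mathcal{S}}^{2}=1$, yields $(\hat{\mathcal{S}}\hat{\mathcal{U}}_{T/2})^{2}=\hat{\mathcal{S}}(\hat{\mathcal{U}}_{T/2}\hat{\mathcal{S}})\hat{\mathcal{U}}_{T/2}=\eta_{T}\eta_{C}\hat{\mathcal{S}}^{2}\hat{\mathcal{U}}_{T/2}^{2}=\eta_{T}\eta_{C}\epsilon_{U}$ for the half-period case. 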
\paragraph{$\hat{\mathcal{O}}=\hat{\mathcal{U}}_{T/2}$} Due to the order-two symmetry realized by $\hat{\mathcal{U}}_{T/2}$, we have \begin{align} & (\hat{\mathcal{S}}\hat{\mathcal{U}}_{T/2})\mathcal{H}(\boldsymbol{k},\boldsymbol{r},t)(\hat{\mathcal{S}}\hat{\mathcal{U}}_{T/2})^{-1} \nonumber \\ &=\eta_{T}\eta_{C}\mathcal{H}(-\boldsymbol{k}_{\parallel},\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp},T-t). \end{align} Moreover, we have $(\hat{\mathcal{S}}\hat{\mathcal{U}}_{T/2})\hat{\mathcal{C}}=\eta_{C}\epsilon_{C}\epsilon_{T}\hat{\mathcal{C}}(\hat{\mathcal{S}}\hat{\mathcal{U}}_{T/2})$ and $(\hat{\mathcal{S}}\hat{\mathcal{U}}_{T/2})^{2}=\eta_{T}\eta_{C}\epsilon_{U}$. Under the hermitian map, unitary loops in dimension $(d,d_{\parallel},D,D_{\parallel})$ with a given symmetry are mapped to static Hamiltonians in dimension $(d,d_{\parallel},D+1,D_{\parallel}+1)$ with another symmetry according to \begin{align} &({\rm X},\hat{\mathcal{U}}_{T/2,\eta_{T},\eta_{C}}^{\epsilon_{U}})\to \nonumber \\ &\begin{cases} ({\rm Y},\hat{\mathcal{U}}_{\eta_{C}\epsilon_{C}\epsilon_{T}}^{\epsilon_{U}}) & \eta_{T}\eta_{C}=1\\ ({\rm Y},\overline{\mathcal{U}}_{\eta_{C}\epsilon_{C}\epsilon_{T}}^{-\epsilon_{U}})=({\rm Y},\overline{\mathcal{U}}_{-\eta_{C}\epsilon_{C}\epsilon_{T}}^{\epsilon_{U}}) & \eta_{T}\eta_{C}=-1 \end{cases}, \end{align} with ${\rm X}={\rm CI,CII,DIII,BDI}$, and ${\rm Y}={\rm C,AII,D,AI}$ respectively. \section{$K$ groups in the presence of order two symmetry \label{sec:Kgroup}} Using the hermitian map introduced in the previous sections, the unitary loops with an order-two space-time symmetry/antisymmetry are successfully mapped into static Hamiltonians with an order-two crystalline symmetry/antisymmetry, whose classification has already been worked out in Ref.~\cite{Shiozaki2014}. Thus, the latter result can be directly applied to the classification of unitary loops. 
We first summarize the $K$-theory-based method used for classifying static Hamiltonians, and then finish the classification of unitary loops. Let us consider static Hamiltonians defined on a base space of momentum $\boldsymbol{k}\in T^d$ and real space coordinate $\boldsymbol{r}\in S^D$. For the classification of strong topological phases, one can instead simply use $S^{d+D}$ as the base space \cite{Kitaev2009,Teo2010}. To classify these Hamiltonians, we will use the notion of stable homotopy equivalence that we defined for unitaries in Sec.~\ref{sec:floquet_basics}, by identifying Hamiltonians which are continuously deformable into each other up to adding extra trivial bands, while preserving an energy gap at the chemical potential. These equivalence classes can be formally added, and they form an abelian group. For a given AZ symmetry class $s$, the classification of static Hamiltonians is given by the set of stable equivalence classes of maps $\mathcal{H}(\boldsymbol{k},\boldsymbol{r})$, from the base space $(\boldsymbol{k},\boldsymbol{r})\in S^{d+D}$ to the classifying space, denoted as $\mathcal{C}_s$ or $\mathcal{R}_s$ for complex and real symmetry classes, as listed in Table~\ref{tab:AZ-symmetry-classes}. The abelian group structure inherited from the equivalence classes leads to the group structure in this set of maps, which is called the $K$ group, or classification group. For static topological insulators and superconductors of dimension $(d,D)$ in an AZ class $s$ without additional spatial symmetries, the $K$ groups are denoted as $K_{\mathbb{C}}(s;d,D)$ and $K_{\mathbb{R}}(s;d,D)$, for complex and real symmetry classes, respectively. Note that for complex symmetry classes, we have $s=0,1 \mod 2$, whereas for real symmetry classes, $s=0,1,\dots,7 \mod 8$. 
These $K$ groups have the following properties \begin{gather} K_{\mathbb{C}}(s;d,D) = K_{\mathbb{C}}(s-d+D;0,0)= \pi_0(\mathcal{C}_{s-d+D}),\\ K_{\mathbb{R}}(s;d,D) = K_{\mathbb{R}}(s-d+D;0,0)= \pi_0(\mathcal{R}_{s-d+D}), \end{gather} known as the Bott periodicity, where $\pi_0$ denotes the zeroth homotopy group, which counts the number of path-connected components in a given space. In the following, we will introduce the $K$ groups for Hamiltonians supporting an additional order-two spatial symmetry/antisymmetry following Ref.~\cite{Shiozaki2014}. Because of the hermitian map, these $K$ groups can also be associated with the unitary loops, whose classification is then obtained. \subsection{Complex symmetry classes with an additional order-two unitary symmetry/antisymmetry} When a spatial or space-time symmetry/antisymmetry is considered, one needs to include the number of ``flipped'' coordinates for both $\boldsymbol{k}$ and $\boldsymbol{r}$ into the dimensions. For a static Hamiltonian of dimension $(d,d_{\parallel},D,D_{\parallel})$ in complex AZ classes with an additional order-two unitary symmetry/antisymmetry, the $K$ group is denoted as $K_{\mathbb{C}}^{U}(s,t;d,d_{\parallel},D,D_{\parallel})$, where the additional parameter $t=0,1\mod2$ specifies the coexisting order-two unitary symmetry/antisymmetry. These $K$ groups satisfy the following relation \begin{align} K_{\mathbb{C}}^{U}(s,t;d,d_{\parallel},D,D_{\parallel})&=K_{\mathbb{C}}^{U}(s-\delta,t-\delta_{\parallel}; 0,0,0,0) \nonumber \\ &\equiv K_{\mathbb{C}}^{U}(s-\delta,t-\delta_{\parallel}),\label{eq:complex_unitary_Kgroup} \end{align} where $\delta=d-D$ and $\delta_{\parallel}=d_{\parallel}-D_{\parallel}$. Thus, for classification purposes, one can use the pair $(\delta,\delta_{\parallel})$ instead of $(d,d_{\parallel},D,D_{\parallel})$ to denote the dimensions of the base space, on which the static Hamiltonian is defined. 
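As a concrete illustration, the Bott periodicity above reduces the classification for any $(d,D)$ to a lookup of $\pi_{0}$. The following sketch hard-codes the standard tenfold-way values of $\pi_{0}(\mathcal{C}_{s})$ and $\pi_{0}(\mathcal{R}_{s})$; the function names are ours and purely illustrative:

```python
# pi_0 of the classifying spaces (standard tenfold-way data)
PI0_C = {0: "Z", 1: "0"}                      # complex: C_0, C_1
PI0_R = {0: "Z", 1: "Z2", 2: "Z2", 3: "0",    # real: R_0 ... R_7
         4: "2Z", 5: "0", 6: "0", 7: "0"}

def K_complex(s, d, D=0):
    """K_C(s; d, D) = pi_0(C_{s-d+D}), with the index taken mod 2."""
    return PI0_C[(s - d + D) % 2]

def K_real(s, d, D=0):
    """K_R(s; d, D) = pi_0(R_{s-d+D}), with the index taken mod 8
    (s = 0..7 labels AI, BDI, D, DIII, AII, CII, C, CI)."""
    return PI0_R[(s - d + D) % 8]

# sanity checks against familiar static classifications:
assert K_complex(0, d=2) == "Z"   # class A in 2D (integer quantum Hall)
assert K_real(2, d=1) == "Z2"     # class D in 1D (Kitaev chain)
assert K_real(4, d=3) == "Z2"     # class AII in 3D (strong TI)
```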
\begin{table} \caption{\label{tab:complex_unitary}Possible types ($t=0,1\mod2$) of order-two additional unitary symmetry $\hat{\mathcal{U}}_{\eta_{S}}^{\epsilon_{U}}$/$\overline{\mathcal{U}}_{\overline{\eta}_{S}}^{\overline{\epsilon}_{U}}$ in complex AZ classes ($s=0,1\mod2$). The superscript and subscript are defined as $\epsilon_{U}=\hat{\mathcal{U}}^{2}$, $\overline{\epsilon}_{U}=\overline{\mathcal{U}}^{2}$, $\hat{\mathcal{U}}\hat{\mathcal{S}}=\eta_{S}\hat{\mathcal{S}}\hat{\mathcal{U}}$, $\overline{\mathcal{U}}\hat{\mathcal{S}}=\overline{\eta}_{S}\hat{\mathcal{S}}\overline{\mathcal{U}}$. } \begin{ruledtabular} \centering \begin{tabular}{cccc} s & AZ class & $t=0$ & $t=1$\\ \hline 0 & A & $\hat{\mathcal{U}}_{0}^{+}, \hat{\mathcal{U}}_{T/2}^{+}$ & $\overline{\mathcal{U}}_{s}^{+}$\\ 1 & AIII & $\hat{\mathcal{U}}_{0,+}^{+}$, $\hat{\mathcal{U}}_{T/2,-}^{+}$ & $\hat{\mathcal{U}}_{0,-}^{+}$, $\hat{\mathcal{U}}_{T/2,+}^{+}$\\ \end{tabular} \end{ruledtabular} \end{table} To define $K$ groups for unitary loops, we use the fact that the $K$ group for certain unitary loops should be the same as the one for the corresponding static Hamiltonians under the hermitian map. The $K$ groups for unitary loops are explicitly defined in Table~\ref{tab:complex_unitary}, where the two arguments $s,t$ label the AZ class and the coexisting order-two space-time symmetry/antisymmetry. \subsection{Complex symmetry classes with an additional order-two antiunitary symmetry/antisymmetry} We now consider static Hamiltonians of dimension $(d,d_{\parallel},D,D_{\parallel})$, in complex AZ classes, with an order-two antiunitary symmetry/antisymmetry, realized by $\hat{\mathcal{A}}$ or $\overline{\mathcal{A}}$. It turns out that complex AZ classes acquire real structures because of the antiunitary symmetry \cite{Shiozaki2014}. 
Indeed, effective time-reversal or particle-hole symmetry realized by $\hat{\mathcal{A}}$ or $\overline{\mathcal{A}}$ emerges, if we regard $(\boldsymbol{k}_{\perp},\boldsymbol{r}_{\parallel})$ as ``momenta'', and $(\boldsymbol{k}_{\parallel},\boldsymbol{r}_{\perp})$ as ``spatial coordinates''. Thus, a system in complex AZ classes with an antiunitary symmetry can be mapped into a real AZ class without additional spatial symmetries. \begin{table} \caption{\label{tab:complex_antiunitary}Possible types ($s=0,\dots,7\mod8$) of order-two additional antiunitary symmetry $\hat{\mathcal{A}}_{\eta_{S}}^{\epsilon_{A}}$/$\overline{\mathcal{A}}_{\overline{\eta}_{S}}^{\overline{\epsilon}_{A}}$ in complex AZ classes. The superscript and subscript are defined as $\epsilon_{A}=\hat{\mathcal{A}}^{2}$, $\overline{\epsilon}_{A}=\overline{\mathcal{A}}^{2}$, $\hat{\mathcal{A}}\hat{\mathcal{S}}=\eta_{S}\hat{\mathcal{S}}\hat{\mathcal{A}}$, $\overline{\mathcal{A}}\hat{\mathcal{S}}=\overline{\eta}_{S}\hat{\mathcal{S}}\overline{\mathcal{A}}$. 
} \begin{ruledtabular} \centering \begin{tabular}{cccc} $s$ & AZ class & Coexisting symmetry & Mapped AZ class\\ \hline $0$ & A & $\hat{\mathcal{A}}_{s}^{+}$ & AI\\ $1$ & AIII & $\hat{\mathcal{A}}_{0,+}^{+}$, $\hat{\mathcal{A}}_{T/2,-}^{+}$ & BDI\\ $2$ & A & $\overline{\mathcal{A}}_{0}^{+}$, $\overline{\mathcal{A}}_{T/2}^{-}$ & D\\ $3$ & AIII & $\hat{\mathcal{A}}_{0,-}^{-}$, $\hat{\mathcal{A}}_{T/2,+}^{-}$ & DIII\\ $4$ & A & $\hat{\mathcal{A}}_{s}^{-}$ & AII\\ $5$ & AIII & $\hat{\mathcal{A}}_{0,+}^{-}$, $\hat{\mathcal{A}}_{T/2,-}^{-}$ & CII\\ $6$ & A & $\overline{\mathcal{A}}_{0}^{-}$, $\overline{\mathcal{A}}_{T/2}^{+}$ & C\\ $7$ & AIII & $\hat{\mathcal{A}}_{0,-}^{+}$, $\hat{\mathcal{A}}_{T/2,+}^{+}$ & CI\\ \end{tabular} \end{ruledtabular} \end{table} The $K$ groups for these Hamiltonians are denoted as $K_{\mathbb{C}}^{A}(s;d,d_{\parallel},D,D_{\parallel})$, which satisfy \begin{align} K_{\mathbb{C}}^{A}(s;d,d_{\parallel},D,D_{\parallel})&=K_{\mathbb{C}}^{A}(s-\delta+2\delta_{\parallel};0,0,0,0) \nonumber \\ &\equiv K_{\mathbb{C}}^{A}(s-\delta+2\delta_{\parallel}).\label{eq:complex_antiunitary_Kgroup} \end{align} Similar to the previous case, the unitary loops with an antiunitary space-time symmetry/antisymmetry can also be associated with these $K$ groups. If we group these antiunitary symmetries and antisymmetries in terms of the index $s = 0,\dots,7 \mod 8$, according to Table~\ref{tab:complex_antiunitary}, then $K_{\mathbb{C}}^{A}(s)$ can further be reduced to $K_{\mathbb{R}}(s) \equiv K_{\mathbb{R}}(s;0,0)$. \subsection{Real symmetry classes with an additional order-two symmetry} In real symmetry classes, there are equivalence relations between order-two unitary and antiunitary symmetries/antisymmetries, as discussed previously. Thus, one can focus on unitary symmetries/antisymmetries only. 
The existence of an additional order-two unitary symmetry divides each class into four families ($t=0,\dots,3\mod4$), as summarized in Table \ref{tab:real_unitary}, where we have used the equivalence of $K$ groups for static Hamiltonians and unitary loops in terms of the hermitian map. \begin{table*} \caption{\label{tab:real_unitary}Possible types ($t=0,\dots,3\mod4$) of order-two additional symmetry $\hat{\mathcal{U}}_{\eta_{S}}^{\epsilon_{U}}$/$\overline{\mathcal{U}}_{\overline{\eta}_{S}}^{\overline{\epsilon}_{U}}$ in real AZ classes. The superscript and subscript are defined as $\epsilon_{U}=\hat{\mathcal{U}}^{2}$, $\overline{\epsilon}_{U}=\overline{\mathcal{U}}^{2}$, $\hat{\mathcal{U}}\hat{\mathcal{S}}=\eta_{S}\hat{\mathcal{S}}\hat{\mathcal{U}}$, $\overline{\mathcal{U}}\hat{\mathcal{S}}=\overline{\eta}_{S}\hat{\mathcal{S}}\overline{\mathcal{U}}$. We fix $\epsilon_{U}=\overline{\epsilon}_{U}=1$. } \begin{ruledtabular} \centering \begin{tabular}{cccccc} s & AZ Class & $t=0$ & $t=1$ & $t=2$ & $t=3$\\ \hline 0 & AI & $\hat{\mathcal{U}}_{0,+}^{+}$, $\hat{\mathcal{U}}_{T/2,+}^{+}$ & $\overline{\mathcal{U}}_{0,-}^{+}$, $\overline{\mathcal{U}}_{T/2,+}^{+}$ & $\hat{\mathcal{U}}_{0,-}^{+}$, $\hat{\mathcal{U}}_{T/2,-}^{+}$ & $\overline{\mathcal{U}}_{0,+}^{+}$, $\overline{\mathcal{U}}_{T/2,-}^{+}$\\ 1 & BDI & $\hat{\mathcal{U}}_{0,++}^{+}$, $\hat{\mathcal{U}}_{T/2,+-}^{+}$ & $\hat{\mathcal{U}}_{0,+-}^{+}$, $\hat{\mathcal{U}}_{T/2,++}^{+}$ & $\hat{\mathcal{U}}_{0,--}^{+}$, $\hat{\mathcal{U}}_{T/2,-+}^{+}$ & $\hat{\mathcal{U}}_{0,-+}^{+}$, $\hat{\mathcal{U}}_{T/2,--}^{+}$\\ 2 & D & $\hat{\mathcal{U}}_{0,+}^{+}$, $\hat{\mathcal{U}}_{T/2,-}^{+}$ & $\overline{\mathcal{U}}_{s,+}^{+}$ & $\hat{\mathcal{U}}_{0,-}^{+}$, $\hat{\mathcal{U}}_{T/2,+}^{+}$ & $\overline{\mathcal{U}}_{s,-}^{+}$\\ 3 & DIII & $\hat{\mathcal{U}}_{0,++}^{+}$, $\hat{\mathcal{U}}_{T/2,+-}^{+}$ & $\hat{\mathcal{U}}_{0,-+}^{+}$, $\hat{\mathcal{U}}_{T/2,--}^{+}$ & $\hat{\mathcal{U}}_{0,--}^{+}$, 
$\hat{\mathcal{U}}_{T/2,-+}^{+}$ & $\hat{\mathcal{U}}_{0,+-}^{+}$, $\hat{\mathcal{U}}_{T/2,++}^{+}$\\ 4 & AII & $\hat{\mathcal{U}}_{0,+}^{+}$, $\hat{\mathcal{U}}_{T/2,+}^{+}$ & $\overline{\mathcal{U}}_{0,-}^{+}$, $\overline{\mathcal{U}}_{T/2,+}^{+}$ & $\hat{\mathcal{U}}_{0,-}^{+}$, $\hat{\mathcal{U}}_{T/2,-}^{+}$ & $\overline{\mathcal{U}}_{0,+}^{+}$, $\overline{\mathcal{U}}_{T/2,-}^{+}$\\ 5 & CII & $\hat{\mathcal{U}}_{0,++}^{+}$, $\hat{\mathcal{U}}_{T/2,+-}^{+}$ & $\hat{\mathcal{U}}_{0,+-}^{+}$, $\hat{\mathcal{U}}_{T/2,++}^{+}$ & $\hat{\mathcal{U}}_{0,--}^{+}$, $\hat{\mathcal{U}}_{T/2,-+}^{+}$ & $\hat{\mathcal{U}}_{0,-+}^{+}$, $\hat{\mathcal{U}}_{T/2,--}^{+}$\\ 6 & C & $\hat{\mathcal{U}}_{0,+}^{+}$, $\hat{\mathcal{U}}_{T/2,-}^{+}$ & $\overline{\mathcal{U}}_{s,+}^{+}$ & $\hat{\mathcal{U}}_{0,-}^{+}$, $\hat{\mathcal{U}}_{T/2,+}^{+}$ & $\overline{\mathcal{U}}_{s,-}^{+}$\\ 7 & CI & $\hat{\mathcal{U}}_{0,++}^{+}$, $\hat{\mathcal{U}}_{T/2,+-}^{+}$ & $\hat{\mathcal{U}}_{0,-+}^{+}$, $\hat{\mathcal{U}}_{T/2,--}^{+}$ & $\hat{\mathcal{U}}_{0,--}^{+}$, $\hat{\mathcal{U}}_{T/2,-+}^{+}$ & $\hat{\mathcal{U}}_{0,+-}^{+}$, $\hat{\mathcal{U}}_{T/2,++}^{+}$\\ \end{tabular} \end{ruledtabular} \end{table*} We denote the $K$ group for unitary loops in real AZ classes ($s=0,\dots,7\mod8$) with an additional order-two unitary symmetry/antisymmetry ($t=0,\dots,3\mod4$) as $K_{\mathbb{R}}^{U}(s,t;d,d_{\parallel},D,D_{\parallel})$, which satisfies \begin{align} K_{\mathbb{R}}^{U}(s,t;d,d_{\parallel},D,D_{\parallel})&=K_{\mathbb{R}}^{U}(s-\delta,t-\delta_{\parallel};0,0,0,0) \nonumber \\ &\equiv K_{\mathbb{R}}^{U}(s-\delta,t-\delta_{\parallel}).\label{eq:real_unitary_Kgroup} \end{align} \subsection{Nontrivial space-time vs static spatial symmetries/antisymmetries \label{sec:substitution}} The classification of unitary loops with an order-two space-time symmetry/antisymmetry is given by the $K$ groups, $K_{\mathbb{C}}^{U}(s,t)$, $K_{\mathbb{C}}^{A}(s)$ or $K_{\mathbb{R}}^{U}(s,t)$. 
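Once reduced to their zero-dimensional forms, all three families of $K$ groups become table lookups. A small Python sketch of this reduction (our own illustration: group labels are plain strings, and the zero-dimensional values of $\pi_{0}(\mathcal{C}_{s})$ and $\pi_{0}(\mathcal{R}_{s})$ are the standard tenfold-way data, hard-coded):

```python
# pi_0 of the complex and real classifying spaces (standard values)
PI0_C = {0: "Z", 1: "0"}
PI0_R = {0: "Z", 1: "Z2", 2: "Z2", 3: "0",
         4: "2Z", 5: "0", 6: "0", 7: "0"}

def _double(g):
    # direct sum of a group with itself, as a label
    return "0" if g == "0" else g + "+" + g

def KU_C(s, t, delta=0, delta_par=0):
    """Complex AZ class with an order-two unitary (anti)symmetry."""
    s, t = (s - delta) % 2, (t - delta_par) % 2
    return _double(PI0_C[s]) if t == 0 else PI0_C[(s + 1) % 2]

def KA_C(s, delta=0, delta_par=0):
    """Complex AZ class with an order-two antiunitary (anti)symmetry."""
    return PI0_R[(s - delta + 2 * delta_par) % 8]

def KU_R(s, t, delta=0, delta_par=0):
    """Real AZ class with an order-two unitary (anti)symmetry."""
    s, t = (s - delta) % 8, (t - delta_par) % 4
    return {0: _double(PI0_R[s]),
            1: PI0_R[(s + 7) % 8],
            2: PI0_C[s % 2],
            3: PI0_R[(s + 1) % 8]}[t]

# a t=0 symmetry commutes with everything and just doubles the group:
assert KU_C(0, 0) == "Z+Z"
# class A with an antiunitary symmetry of type s=0 behaves like class AI:
assert KA_C(0) == "Z"
# t=2 families of real classes reduce to complex classifying spaces:
assert KU_R(2, 2) == "Z"
```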
As can be seen in Tables~\ref{tab:complex_unitary}--\ref{tab:real_unitary}, for every nontrivial order-two space-time (anti)unitary symmetry/antisymmetry, namely one that involves the half-period time translation, there always exists a unique static spatial (anti)unitary symmetry/antisymmetry such that both symmetries/antisymmetries give rise to the same $K$ group. It is worth mentioning that, when looking at the static symmetries/antisymmetries alone, the corresponding $K$ groups for unitary loops are defined in the same way as the ones for Hamiltonians introduced in Ref.~\cite{Shiozaki2014}, as expected. The explicit relations between the two types of symmetries/antisymmetries (nontrivial space-time vs static) with the same $K$ group can be summarized as follows. Recall that we use $\eta_{S}$ ($\overline{\eta}_{S}$), $\eta_{T}$ ($\overline{\eta}_{T}$) and $\eta_{C}$ ($\overline{\eta}_{C}$) to characterize the commutation relations between the order-two symmetry (antisymmetry) operator and the nonspatial symmetry operators. For two unitary order-two symmetries giving rise to the same $K$ group, the values of $\eta_{S}$ and $\eta_{C}$ for the two symmetries take opposite signs, whereas the values of $\eta_{T}$ are the same. For two antiunitary order-two symmetries, the values of $\eta_{S}$ take opposite signs. For two unitary antisymmetries, the values of $\overline{\eta}_{T}$ have opposite signs. Finally, for class A, the antiunitary space-time antisymmetry operator $\overline{\mathcal{A}}_{T/2}^{\pm}$ has the same $K$ group as the one for $\overline{\mathcal{A}}_{0}^{\mp}$. These relations are summarized in Table~\ref{tab:summary}, and can be better understood after we introduce the frequency-domain formulation of the Floquet problem in Sec.~\ref{sec:harmonically_driven_models}.
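The sign-flip rules above amount to a small lookup. The following Python sketch (the encoding and function names are ours, not the paper's) maps the commutation signs of a half-period space-time operator onto those of the static spatial partner sharing the same $K$ group:

```python
# Illustrative sketch (our encoding): substitution rules relating a nontrivial
# (half-period) space-time symmetry/antisymmetry to the static spatial one with
# the same K group. Labels: "S", "T", "C" are the commutation signs with the
# chiral, time-reversal and particle-hole operators; "sq" is the sign of the square.
FLIPPED = {
    "U": {"S", "C"},   # unitary symmetry: eta_S and eta_C flip, eta_T unchanged
    "A": {"S"},        # antiunitary symmetry: eta_S flips
    "Ubar": {"T"},     # unitary antisymmetry: eta_T-bar flips
    "Abar": {"sq"},    # class A antiunitary antisymmetry: the square's sign flips
}

def static_partner(kind, signs):
    """Return the signs of the static spatial partner of a half-period operator.

    kind  -- "U", "A", "Ubar" or "Abar"
    signs -- dict mapping labels ("S", "T", "C", "sq") to +1 or -1
    """
    flip = FLIPPED[kind]
    return {label: (-s if label in flip else s) for label, s in signs.items()}

# Example: a half-period unitary symmetry commuting with both S and C (+1, +1)
# shares its K group with a static symmetry anticommuting with both.
print(static_partner("U", {"S": +1, "T": +1, "C": +1}))  # {'S': -1, 'T': 1, 'C': -1}
```

The same lookup reproduces each row of Table~\ref{tab:summary} once the operator type is specified.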
\subsection{Periodic table \label{sec:periodic_table}} From the $K$ groups introduced previously, we see that in addition to the mod $2$ or mod $8$ Bott periodicity in $\delta$, there also exists a periodic structure in the number of flipped dimensions $\delta_{\parallel}$, owing to the twofold or fourfold periodicity in $t$, which labels the additional order-two symmetry/antisymmetry. In particular, for complex symmetry classes with an order-two unitary symmetry/antisymmetry, the classification has a twofold periodicity in $\delta_{\parallel}$, whereas for complex symmetry classes with an order-two antiunitary symmetry/antisymmetry, and for real symmetry classes with an order-two unitary symmetry/antisymmetry, the periodicity in $\delta_{\parallel}$ is fourfold. These periodic features are the same as the ones obtained in Ref.~\cite{Shiozaki2014} for static Hamiltonians with an order-two crystalline symmetry/antisymmetry. We summarize the periodic tables for the four different families ($\delta_{\parallel}=0,\dots,3\mod4$) in the supplemental material \cite{suppl}. Note that in obtaining the classification tables, we made use of the $K$ groups in their zero-dimensional forms defined in Eqs.~(\ref{eq:complex_unitary_Kgroup}), (\ref{eq:complex_antiunitary_Kgroup}) and (\ref{eq:real_unitary_Kgroup}), as well as the following relations \begin{gather} K_{\mathbb{C}}^{U}(s,t=0)=\pi_{0}(\mathcal{C}_{s}\times\mathcal{C}_{s})=\pi_{0}(\mathcal{C}_{s})\oplus\pi_{0}(\mathcal{C}_{s}), \nonumber \\ K_{\mathbb{C}}^{U}(s,t=1)=\pi_{0}(\mathcal{C}_{s+1}), \nonumber \\ K_{\mathbb{C}}^{A}(s) = \pi_0(\mathcal{R}_s), \nonumber \\ K_{\mathbb{R}}^{U}(s,t=0)=\pi_{0}(\mathcal{R}_{s}\times\mathcal{R}_{s})=\pi_{0}(\mathcal{R}_{s})\oplus\pi_{0}(\mathcal{R}_{s}), \nonumber\\ K_{\mathbb{R}}^{U}(s,t=1)=\pi_{0}(\mathcal{R}_{s+7}), \nonumber \\ K_{\mathbb{R}}^{U}(s,t=2)=\pi_{0}(\mathcal{C}_{s}), \nonumber \\ K_{\mathbb{R}}^{U}(s,t=3)=\pi_{0}(\mathcal{R}_{s+1}).
\end{gather} Here $\mathcal{C}_{s}$ ($s=0,1\mod 2$) and $\mathcal{R}_{s}$ ($s=0,\dots, 7 \mod 8$) denote the classifying spaces of the complex and real AZ classes, respectively; see Table \ref{tab:AZ-symmetry-classes}. \section{Floquet higher-order topological insulators and superconductors \label{sec:FHOTI_extension}} In the previous sections, we obtained a complete classification of the anomalous Floquet TI/SCs using $K$ theory, where the $K$ groups for the unitary loops were defined to be the same as the ones for static Hamiltonians, according to the Hermitian map. Note that the classification obtained in this way is a bulk classification, since only the bulk unitary evolution operators were considered. These bulk $K$ groups contain the information on the topological classification at any order. For static tenfold-way TI/SCs, in which the topological properties are determined by the nonspatial symmetries, there is a bulk-boundary correspondence which essentially says that a topologically nontrivial bulk implies protected gapless boundary modes living in one dimension lower. These boundary modes exist irrespective of boundary orientation and lattice termination. The same is true for tenfold-way Floquet TI/SCs with only nonspatial symmetries. In this situation, since only first-order topological phases are allowed, the bulk $K$ group is enough to understand the existence of gapless boundary modes. However, when an additional crystalline symmetry/antisymmetry is taken into account, the existence of gapless boundary modes due to a nontrivial topological bulk is not guaranteed unless the boundary is invariant under the nonlocal transformation of the symmetry/antisymmetry \cite{Fu2011, Ando2015}. \begin{table*} \caption{\label{tab:complex_unitary_hoti_2}Subgroup series $K^{(d)}\subseteq \dots \subseteq K' \subseteq K$ for zero- ($d=0$), one- ($d=1$), and two-dimensional ($d=2$) anomalous Floquet HOTI/SCs with a unitary order-two space-time symmetry/antisymmetry in complex classes.
The number of flipped dimensions for the symmetry/antisymmetry is denoted as $d_{\parallel}$. } \begin{ruledtabular} \centering \begin{tabular}{cccccccc} & &$d=0$ &$d=1$ &$d=1$ &$d=2$ &$d=2$ &$d=2$ \\ Symmetry &Class &$d_{\parallel}=0$ &$d_{\parallel}=0$ &$d_{\parallel}=1$ &$d_{\parallel}=0$ &$d_{\parallel}=1$ &$d_{\parallel}=2$ \\ \hline $\hat{\mathcal{U}}_{0}^{+}$, $\hat{\mathcal{U}}_{T/2}^{+}$ &A &$\mathbb{Z}^2$ &$0\subseteq 0$ &$\mathbb{Z}\subseteq\mathbb{Z}$ &$0 \subseteq 0 \subseteq \mathbb{Z}^{2}$ &$0 \subseteq 0 \subseteq 0$ &$\mathbb{Z}\subseteq \mathbb{Z} \subseteq \mathbb{Z}^{2}$ \\ $\hat{\mathcal{U}}_{0,+}^{+}$, $\hat{\mathcal{U}}_{T/2,-}^{+}$ &AIII &$0$ &$0 \subseteq \mathbb{Z}^{2}$ &$0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0$ &$0\subseteq \mathbb{Z} \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 0$ \\ \hline $\overline{\mathcal{U}}_{s}^{+}$ &A &$0$ &$0 \subseteq \mathbb{Z}$ &$0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq \mathbb{Z} \subseteq \mathbb{Z}^{2}$ &$0 \subseteq 0 \subseteq 0$ \\ $\hat{\mathcal{U}}_{0,-}^{+}$, $\hat{\mathcal{U}}_{T/2,+}^+$ &AIII &$\mathbb{Z}$ &$0 \subseteq 0$ &$\mathbb{Z}\subseteq \mathbb{Z}^{2}$ &$0 \subseteq 0 \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 0$ &$2\mathbb{Z} \subseteq \mathbb{Z} \subseteq \mathbb{Z}$ \end{tabular} \end{ruledtabular} \end{table*} \begin{table*} \caption{\label{tab:complex_antiunitary_hoti_2} Same as Table~\ref{tab:complex_unitary_hoti_2} for antiunitary symmetries/antisymmetries.} \begin{ruledtabular} \centering \begin{tabular}{cccccccc} & &$d=0$ &$d=1$ &$d=1$ &$d=2$ &$d=2$ &$d=2$ \\ Symmetry &Class &$d_{\parallel}=0$ &$d_{\parallel}=0$ &$d_{\parallel}=1$ &$d_{\parallel}=0$ &$d_{\parallel}=1$ &$d_{\parallel}=2$ \\ \hline $\hat{\mathcal{A}}_{s}^{+}$ & A &$\mathbb{Z}$ &$0 \subseteq 0$ &$\mathbb{Z}_2\subseteq \mathbb{Z}_2$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq \mathbb{Z}$ &$\mathbb{Z}_{2} \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_{2}$ \\ 
$\hat{\mathcal{A}}_{0,+}^{+},\hat{\mathcal{A}}_{T/2,-}^{+}$ & AIII &$\mathbb{Z}_2$ &$0 \subseteq \mathbb{Z}$ &$\mathbb{Z}_2 \subseteq \mathbb{Z}_{2}$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2$ &$0 \subseteq 0 \subseteq 0$ \\ $\overline{\mathcal{A}}_{0}^{+},\overline{\mathcal{A}}_{T/2}^{-}$ & A &$\mathbb{Z}_2$ &$0 \subseteq \mathbb{Z}_2$ &$0 \subseteq 0$ &$0 \subseteq 0 \subseteq \mathbb{Z}$ &$0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2$ &$0 \subseteq 0 \subseteq 2\mathbb{Z}$ \\ $\hat{\mathcal{A}}_{0,-}^{-},\hat{\mathcal{A}}_{T/2,+}^{-}$ & AIII &$0$ &$0 \subseteq \mathbb{Z}_2$ &$0 \subseteq 2\mathbb{Z}$ &$0 \subseteq 0 \subseteq \mathbb{Z}_2$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0$ \\ $\hat{\mathcal{A}}_{s}^{-}$ & A &$2\mathbb{Z}$ &$0 \subseteq 0$ &$0 \subseteq 0$ &$0 \subseteq 0 \subseteq \mathbb{Z}_{2}$ &$0 \subseteq 0 \subseteq 2\mathbb{Z}$ &$0 \subseteq 0 \subseteq 0$ \\ $\hat{\mathcal{A}}_{0,+}^{-},\hat{\mathcal{A}}_{T/2,-}^{-}$ & AIII &$0$ &$0 \subseteq 2\mathbb{Z}$ &$0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0$ \\ $\overline{\mathcal{A}}_{0}^{-},\overline{\mathcal{A}}_{T/2}^{+}$ & A &$0$ &$0 \subseteq 0$ &$0 \subseteq 0$ &$0 \subseteq 0 \subseteq 2\mathbb{Z}$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq \mathbb{Z}$ \\ $\hat{\mathcal{A}}_{0,-}^{+},\hat{\mathcal{A}}_{T/2,+}^{+}$ & AIII &$0$ &$0 \subseteq 0$ &$0 \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_{2}$ \end{tabular} \end{ruledtabular} \end{table*} \begin{table*} \caption{\label{tab:real_unitary_hoti_2} Subgroup series $K^{(d)}\subseteq \dots \subseteq K' \subseteq K$ for zero- ($d=0$), one- ($d=1$), and two-dimensional ($d=2$) anomalous Floquet HOTI/SCs with a unitary order-two space-time symmetry/antisymmetry in real classes.
The number of flipped dimensions for the symmetry/antisymmetry is denoted as $d_{\parallel}$. } \begin{ruledtabular} \centering \begin{tabular}{cccccccc} & &$d=0$ &$d=1$ &$d=1$ &$d=2$ &$d=2$ &$d=2$ \\ Symmetry &Class &$d_{\parallel}=0$ &$d_{\parallel}=0$ &$d_{\parallel}=1$ &$d_{\parallel}=0$ &$d_{\parallel}=1$ &$d_{\parallel}=2$ \\ \hline $\text{\ensuremath{\hat{\mathcal{U}}_{0,+}^{+}}},\text{\ensuremath{\hat{\mathcal{U}}_{T/2,+}^{+}}}$ & AI &$\mathbb{Z}^{2}$ &$0 \subseteq 0$ &$\mathbb{Z} \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0$ &$\mathbb{Z} \subseteq \mathbb{Z} \subseteq \mathbb{Z}$ \\ $\hat{\mathcal{U}}_{0,++}^{+},\hat{\mathcal{U}}_{T/2,+-}^{+}$ & BDI &$\mathbb{Z}_{2}^{2}$ &$0 \subseteq \mathbb{Z}^2$ &$\mathbb{Z}_{2} \subseteq \mathbb{Z}_2$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq \mathbb{Z} \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 0$ \\ $\ensuremath{\hat{\mathcal{U}}_{0,+}^{+}},\text{\ensuremath{\hat{\mathcal{U}}_{T/2,-}^{+}}}$ & D &$\mathbb{Z}_{2}^{2}$ &$0 \subseteq \mathbb{Z}_{2}^{2}$ &$\mathbb{Z}_{2} \subseteq \mathbb{Z}_{2}$ &$0 \subseteq 0 \subseteq \mathbb{Z}^{2}$ &$0 \subseteq \mathbb{Z}_{2} \subseteq \mathbb{Z}_{2}$ &$0 \subseteq 0 \subseteq \mathbb{Z}$ \\ $\hat{\mathcal{U}}_{0,++}^{+},\hat{\mathcal{U}}_{T/2,+-}^{+}$ & DIII &$0$ &$0 \subseteq \mathbb{Z}_{2}^2$ &$0 \subseteq 0$ &$0 \subseteq 0 \subseteq \mathbb{Z}_2^2$ &$0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2$ &$0 \subseteq 0 \subseteq 0$ \\ $\ensuremath{\hat{\mathcal{U}}_{0,+}^{+}},\ensuremath{\hat{\mathcal{U}}_{T/2,+}^{+}}$ & AII &$2\mathbb{Z}^2$ &$0 \subseteq 0$ &$2\mathbb{Z} \subseteq 2\mathbb{Z}$ &$0 \subseteq 0 \subseteq \mathbb{Z}_{2}^{2}$ &$0 \subseteq 0 \subseteq 0$ &$2\mathbb{Z} \subseteq 2\mathbb{Z} \subseteq \mathbb{Z}$ \\ $\hat{\mathcal{U}}_{0,++}^{+},\hat{\mathcal{U}}_{T/2,+-}^{+}$ & CII &$0$ &$0 \subseteq 2\mathbb{Z}^{2}$ &$0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq 2\mathbb{Z} \subseteq 2\mathbb{Z}$ &$0 \subseteq 0 
\subseteq 0$ \\ $\ensuremath{\hat{\mathcal{U}}_{0,+}^{+}},\ensuremath{\hat{\mathcal{U}}_{T/2,-}^{+}}$ & C &$0$ &$0 \subseteq 0$ &$0 \subseteq 0$ &$0 \subseteq 0 \subseteq 2\mathbb{Z}^{2}$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq \mathbb{Z}$ \\ $\hat{\mathcal{U}}_{0,++}^{+},\hat{\mathcal{U}}_{T/2,+-}^{+}$ & CI &$0$ &$0 \subseteq 0$ &$0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0$ \\ \hline $\text{\ensuremath{\overline{\mathcal{U}}_{0,-}^{+}}},\text{\ensuremath{\overline{\mathcal{U}}_{T/2,+}^{+}}}$ & AI &$0$ &$0 \subseteq 0$ &$0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0$ \\ $\hat{\mathcal{U}}_{0,+-}^{+},\hat{\mathcal{U}}_{T/2,++}^{+}$ & BDI &$\mathbb{Z}$ &$0 \subseteq 0$ &$\mathbb{Z} \subseteq \mathbb{Z}^2$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0$ &$2\mathbb{Z} \subseteq \mathbb{Z} \subseteq \mathbb{Z}$ \\ $\overline{\mathcal{U}}_{s,+}^{+}$ & D &$\mathbb{Z}_{2}$ &$0 \subseteq \mathbb{Z}$ &$\mathbb{Z}_2 \subseteq \mathbb{Z}_{2}^2$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq \mathbb{Z} \subseteq \mathbb{Z}^{2}$ &$0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2$ \\ $\hat{\mathcal{U}}_{0,-+}^{+},\hat{\mathcal{U}}_{T/2,--}^{+}$ & DIII &$\mathbb{Z}_2$ &$0 \subseteq \mathbb{Z}_2$ &$\mathbb{Z}_2 \subseteq \mathbb{Z}_2^2$ &$0 \subseteq 0 \subseteq \mathbb{Z}$ &$0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_{2}^2$ &$0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2$ \\ $\ensuremath{\overline{\mathcal{U}}_{0,-}^{+}},\ensuremath{\overline{\mathcal{U}}_{T/2,+}^{+}}$ & AII &$0$ &$0 \subseteq \mathbb{Z}_2$ &$0 \subseteq 0$ &$0 \subseteq 0 \subseteq \mathbb{Z}_{2}$ &$0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2^2$ &$0 \subseteq 0 \subseteq 0$ \\ $\hat{\mathcal{U}}_{0,+-}^{+},\hat{\mathcal{U}}_{T/2,++}^{+}$ & CII &$2\mathbb{Z}$ &$0 \subseteq 0$ &$2\mathbb{Z} \subseteq 2\mathbb{Z}^{2}$ &$0 \subseteq 0 \subseteq \mathbb{Z}_2$ &$0 \subseteq 0 \subseteq 0$
&$4\mathbb{Z} \subseteq 2\mathbb{Z} \subseteq 2\mathbb{Z}$ \\ $\overline{\mathcal{U}}_{s,+}^{+}$ & C &$0$ &$0 \subseteq 2\mathbb{Z}$ &$0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq 2\mathbb{Z} \subseteq 2\mathbb{Z}^2$ &$0 \subseteq 0 \subseteq 0$ \\ $\hat{\mathcal{U}}_{0,-+}^{+},\hat{\mathcal{U}}_{T/2,--}^{+}$ & CI &$0$ &$0 \subseteq 0$ &$0 \subseteq 0$ &$0 \subseteq 0 \subseteq 2\mathbb{Z}$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0$ \\ \hline $\ensuremath{\hat{\mathcal{U}}_{0,-}^{+}},\ensuremath{\hat{\mathcal{U}}_{T/2,-}^{+}}$ & AI &$\mathbb{Z}$ &$0 \subseteq 0$ &$0 \subseteq 0$ &$0 \subseteq 0 \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0$ \\ $\hat{\mathcal{U}}_{0,--}^{+},\hat{\mathcal{U}}_{T/2,-+}^{+}$ & BDI &$0$ &$0 \subseteq \mathbb{Z}$ &$0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0$ \\ $\text{\ensuremath{\hat{\mathcal{U}}_{0,-}^{+}}},\ensuremath{\hat{\mathcal{U}}_{T/2,+}^{+}}$ & D &$\mathbb{Z}$ &$0 \subseteq 0$ &$2\mathbb{Z} \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 0$ &$2\mathbb{Z} \subseteq \mathbb{Z} \subseteq \mathbb{Z}^{2}$ \\ $\hat{\mathcal{U}}_{0,--}^{+},\hat{\mathcal{U}}_{T/2,-+}^{+}$ & DIII &$0$ &$0 \subseteq \mathbb{Z}$ &$0 \subseteq \mathbb{Z}_2$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq 2\mathbb{Z} \subseteq \mathbb{Z}$ &$0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2^2$ \\ $\hat{\mathcal{U}}_{0,-}^{+},\ensuremath{\hat{\mathcal{U}}_{T/2,-}^{+}}$ & AII &$\mathbb{Z}$ &$0 \subseteq 0$ &$\mathbb{Z}_2 \subseteq \mathbb{Z}_2$ &$0 \subseteq 0 \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq \mathbb{Z}_2$ &$\mathbb{Z}_2 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2^2$ \\ $\hat{\mathcal{U}}_{0,--}^{+},\hat{\mathcal{U}}_{T/2,-+}^{+}$ & CII &$0$ &$0 \subseteq \mathbb{Z}$ &$0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2$ &$0 \subseteq 0 \subseteq 0$ \\ 
$\hat{\mathcal{U}}_{0,-}^{+},\ensuremath{\hat{\mathcal{U}}_{T/2,+}^{+}}$ & C &$\mathbb{Z}$ &$0 \subseteq 0$ &$2\mathbb{Z} \subseteq 2\mathbb{Z}$ &$0 \subseteq 0 \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 0$ &$2\mathbb{Z} \subseteq 2\mathbb{Z} \subseteq 2\mathbb{Z}^2$ \\ $\hat{\mathcal{U}}_{0,--}^{+},\hat{\mathcal{U}}_{T/2,-+}^{+}$ &CI &$0$ &$0 \subseteq \mathbb{Z}$ &$0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq 2\mathbb{Z} \subseteq 2\mathbb{Z}$ &$0 \subseteq 0 \subseteq 0$ \\ \hline $\ensuremath{\overline{\mathcal{U}}_{0,+}^{+}},\text{\ensuremath{\overline{\mathcal{U}}_{T/2,-}^{+}}}$ & AI &$\mathbb{Z}_2$ &$ 0 \subseteq \mathbb{Z}$ &$0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq \mathbb{Z} \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 0$ \\ $\hat{\mathcal{U}}_{0,-+}^{+},\hat{\mathcal{U}}_{T/2,--}^{+}$ & BDI &$\mathbb{Z}_2$ &$0 \subseteq \mathbb{Z}_2$ &$0 \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0$ \\ $\overline{\mathcal{U}}_{s,-}^{+}$ & D &$0$ &$0 \subseteq \mathbb{Z}_2$ &$0 \subseteq 0$ &$0 \subseteq 0 \subseteq \mathbb{Z}_2$ &$0 \subseteq 0 \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 0$ \\ $\hat{\mathcal{U}}_{0,+-}^{+},\hat{\mathcal{U}}_{T/2,++}^{+}$ & DIII &$2\mathbb{Z}$ &$0 \subseteq 0$ &$2\mathbb{Z} \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq \mathbb{Z}_2$ &$0 \subseteq 0 \subseteq 0$ &$4\mathbb{Z} \subseteq 2\mathbb{Z} \subseteq \mathbb{Z}$ \\ $\ensuremath{\overline{\mathcal{U}}_{0,+}^{+}},\overline{\mathcal{U}}_{T/2,-}^{+}$ & AII &$0$ &$0 \subseteq 2\mathbb{Z}$ &$0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq 2\mathbb{Z} \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq \mathbb{Z}_2$ \\ $\hat{\mathcal{U}}_{0,-+}^{+},\hat{\mathcal{U}}_{T/2,--}^{+}$ & CII &$0$ &$0 \subseteq 0$ &$0 \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 2\mathbb{Z}$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2$ \\ 
$\overline{\mathcal{U}}_{s,-}^{+}$ & C &$0$ &$0 \subseteq 0$ &$0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 0$ \\ $\hat{\mathcal{U}}_{0,+-}^{+},\hat{\mathcal{U}}_{T/2,++}^{+}$ & CI &$\mathbb{Z}$ &$0 \subseteq 0$ &$\mathbb{Z} \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0$ &$2\mathbb{Z} \subseteq 2\mathbb{Z} \subseteq 2\mathbb{Z}$ \end{tabular} \end{ruledtabular} \end{table*} \begin{table*} \caption{\label{tab:complex_unitary_hoti_3} Subgroup series $K^{(d)}\subseteq \dots \subseteq K' \subseteq K$ for three-dimensional ($d=3$) anomalous Floquet HOTI/SCs with a unitary order-two space-time symmetry/antisymmetry in complex classes. The number of flipped dimensions for the symmetry/antisymmetry is denoted as $d_{\parallel}$.} \begin{ruledtabular} \centering \begin{tabular}{cccccc} Symmetry &Class &$d_{\parallel}=0$ &$d_{\parallel}=1$ &$d_{\parallel}=2$ &$d_{\parallel}=3$ \\ \hline $\hat{\mathcal{U}}_{0}^{+}$, $\hat{\mathcal{U}}_{T/2}^{+}$ &A &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq \mathbb{Z} \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$2\mathbb{Z} \subseteq 2\mathbb{Z} \subseteq \mathbb{Z} \subseteq \mathbb{Z} $ \\ $\hat{\mathcal{U}}_{0,+}^{+}$, $\hat{\mathcal{U}}_{T/2,-}^{+}$ &AIII &$0 \subseteq 0 \subseteq 0 \subseteq \mathbb{Z}^{2}$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq \mathbb{Z} \subseteq \mathbb{Z} \subseteq \mathbb{Z}^{2}$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ \\ \hline $\overline{\mathcal{U}}_{s}^{+}$ &A &$0 \subseteq 0 \subseteq 0 \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 2\mathbb{Z} \subseteq \mathbb{Z} \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ \\ $\hat{\mathcal{U}}_{0,-}^{+}$, $\hat{\mathcal{U}}_{T/2,+}^+$ &AIII &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq \mathbb{Z} \subseteq \mathbb{Z}^{2}$ &$0 \subseteq 0 \subseteq 0
\subseteq 0$ &$2\mathbb{Z} \subseteq \mathbb{Z} \subseteq \mathbb{Z} \subseteq \mathbb{Z}^{2}$ \end{tabular} \end{ruledtabular} \end{table*} \begin{table*} \caption{\label{tab:complex_antiunitary_hoti_3}Same as Table~\ref{tab:complex_unitary_hoti_3} for antiunitary symmetries/antisymmetries.} \begin{ruledtabular} \centering \begin{tabular}{cccccc} Symmetry &Class &$d_{\parallel}=0$ &$d_{\parallel}=1$ &$d_{\parallel}=2$ &$d_{\parallel}=3$ \\ \hline $\hat{\mathcal{A}}_{s}^{+}$ & A &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ \\ $\hat{\mathcal{A}}_{0,+}^{+},\hat{\mathcal{A}}_{T/2,-}^{+}$ & AIII &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0 \subseteq \mathbb{Z}$ &$0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2$ &$0 \subseteq 0 \subseteq 0 \subseteq 2\mathbb{Z}$ \\ $\overline{\mathcal{A}}_{0}^{+},\overline{\mathcal{A}}_{T/2}^{-}$ & A &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ \\ $\hat{\mathcal{A}}_{0,-}^{-},\hat{\mathcal{A}}_{T/2,+}^{-}$ & AIII &$0 \subseteq 0 \subseteq 0 \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2$ &$0 \subseteq 0 \subseteq 0 \subseteq 2\mathbb{Z}$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ \\ $\hat{\mathcal{A}}_{s}^{-}$ & A &$0 \subseteq 0 \subseteq 0 \subseteq \mathbb{Z}_2$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ \\ $\hat{\mathcal{A}}_{0,+}^{-},\hat{\mathcal{A}}_{T/2,-}^{-}$ & AIII &$0 \subseteq 0 \subseteq 0 \subseteq \mathbb{Z}_2$ &$0 \subseteq 0 \subseteq 0 \subseteq 2\mathbb{Z}$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0 \subseteq \mathbb{Z}$ \\ 
$\overline{\mathcal{A}}_{0}^{-},\overline{\mathcal{A}}_{T/2}^{+}$ & A &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2$ \\ $\hat{\mathcal{A}}_{0,-}^{+},\hat{\mathcal{A}}_{T/2,+}^{+}$ & AIII &$0 \subseteq 0 \subseteq 0 \subseteq 2\mathbb{Z}$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0 \subseteq \mathbb{Z}$ &$0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2$ \end{tabular} \end{ruledtabular} \end{table*} \begin{table*} \caption{\label{tab:real_unitary_hoti_3} Subgroup series $K^{(d)}\subseteq \dots \subseteq K' \subseteq K$ for three-dimensional ($d=3$) anomalous Floquet HOTI/SCs with a unitary order-two space-time symmetry/antisymmetry in real classes. The number of flipped dimensions for the symmetry/antisymmetry is denoted as $d_{\parallel}$. } \begin{ruledtabular} \centering \begin{tabular}{cccccc} Symmetry &Class &$d_{\parallel}=0$ &$d_{\parallel}=1$ &$d_{\parallel}=2$ &$d_{\parallel}=3$ \\ \hline $\text{\ensuremath{\hat{\mathcal{U}}_{0,+}^{+}}},\text{\ensuremath{\hat{\mathcal{U}}_{T/2,+}^{+}}}$ & AI &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$2\mathbb{Z} \subseteq 2\mathbb{Z} \subseteq 2\mathbb{Z} \subseteq 2\mathbb{Z}$ \\ $\hat{\mathcal{U}}_{0,++}^{+},\hat{\mathcal{U}}_{T/2,+-}^{+}$ & BDI &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq \mathbb{Z} \subseteq \mathbb{Z} \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ \\ $\ensuremath{\hat{\mathcal{U}}_{0,+}^{+}},\text{\ensuremath{\hat{\mathcal{U}}_{T/2,-}^{+}}}$ & D &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq \mathbb{Z} \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ \\ 
$\hat{\mathcal{U}}_{0,++}^{+},\hat{\mathcal{U}}_{T/2,+-}^{+}$ & DIII &$0 \subseteq 0 \subseteq 0 \subseteq \mathbb{Z}^2$ &$0 \subseteq 0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2$ &$0 \subseteq 0 \subseteq 0 \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ \\ $\ensuremath{\hat{\mathcal{U}}_{0,+}^{+}},\ensuremath{\hat{\mathcal{U}}_{T/2,+}^{+}}$ & AII &$0 \subseteq 0 \subseteq 0 \subseteq \mathbb{Z}_2^2$ &$0 \subseteq 0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$4\mathbb{Z} \subseteq 4\mathbb{Z} \subseteq 2\mathbb{Z} \subseteq \mathbb{Z}$ \\ $\hat{\mathcal{U}}_{0,++}^{+},\hat{\mathcal{U}}_{T/2,+-}^{+}$ & CII &$0 \subseteq 0 \subseteq 0 \subseteq \mathbb{Z}_2^2$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 2\mathbb{Z} \subseteq 2\mathbb{Z} \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 0 \subseteq \mathbb{Z}_{2}$ \\ $\ensuremath{\hat{\mathcal{U}}_{0,+}^{+}},\ensuremath{\hat{\mathcal{U}}_{T/2,-}^{+}}$ & C &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 2\mathbb{Z} \subseteq 2\mathbb{Z}$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2$ \\ $\hat{\mathcal{U}}_{0,++}^{+},\hat{\mathcal{U}}_{T/2,+-}^{+}$ & CI &$0 \subseteq 0 \subseteq 0 \subseteq 2\mathbb{Z}^2$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0 \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ \\ \hline $\text{\ensuremath{\overline{\mathcal{U}}_{0,-}^{+}}},\text{\ensuremath{\overline{\mathcal{U}}_{T/2,+}^{+}}}$ & AI &$0 \subseteq 0 \subseteq 0 \subseteq 2\mathbb{Z}$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ \\ $\hat{\mathcal{U}}_{0,+-}^{+},\hat{\mathcal{U}}_{T/2,++}^{+}$ & BDI &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$2\mathbb{Z} \subseteq \mathbb{Z} \subseteq 
\mathbb{Z} \subseteq \mathbb{Z}$ \\ $\overline{\mathcal{U}}_{s,+}^{+}$ & D &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 2\mathbb{Z} \subseteq \mathbb{Z} \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ \\ $\hat{\mathcal{U}}_{0,-+}^{+},\hat{\mathcal{U}}_{T/2,--}^{+}$ & DIII &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq \mathbb{Z} \subseteq \mathbb{Z}^2$ &$0 \subseteq 0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2$ &$0 \subseteq 0 \subseteq 0 \subseteq \mathbb{Z}$ \\ $\ensuremath{\overline{\mathcal{U}}_{0,-}^{+}},\ensuremath{\overline{\mathcal{U}}_{T/2,+}^{+}}$ & AII &$0 \subseteq 0 \subseteq 0 \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2^2$ &$0 \subseteq 0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ \\ $\hat{\mathcal{U}}_{0,+-}^{+},\hat{\mathcal{U}}_{T/2,++}^{+}$ & CII &$0 \subseteq 0 \subseteq 0 \subseteq \mathbb{Z}_2$ &$0 \subseteq 0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2^2$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$4\mathbb{Z} \subseteq 2\mathbb{Z} \subseteq 2\mathbb{Z} \subseteq \mathbb{Z}$ \\ $\overline{\mathcal{U}}_{s,+}^{+}$ & C &$0 \subseteq 0 \subseteq 0 \subseteq \mathbb{Z}_2$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 4\mathbb{Z} \subseteq 2\mathbb{Z} \subseteq 2\mathbb{Z}$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ \\ $\hat{\mathcal{U}}_{0,-+}^{+},\hat{\mathcal{U}}_{T/2,--}^{+}$ & CI &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 2\mathbb{Z} \subseteq 2\mathbb{Z}^2$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0 \subseteq \mathbb{Z}$ \\ \hline $\ensuremath{\hat{\mathcal{U}}_{0,-}^{+}},\ensuremath{\hat{\mathcal{U}}_{T/2,-}^{+}}$ & AI &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 2\mathbb{Z} \subseteq 2\mathbb{Z}$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ \\ 
$\hat{\mathcal{U}}_{0,--}^{+},\hat{\mathcal{U}}_{T/2,-+}^{+}$ & BDI &$0 \subseteq 0 \subseteq 0 \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ \\ $\text{\ensuremath{\hat{\mathcal{U}}_{0,-}^{+}}},\ensuremath{\hat{\mathcal{U}}_{T/2,+}^{+}}$ & D &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$4\mathbb{Z} \subseteq 2\mathbb{Z} \subseteq \mathbb{Z} \subseteq \mathbb{Z}$ \\ $\hat{\mathcal{U}}_{0,--}^{+},\hat{\mathcal{U}}_{T/2,-+}^{+}$ & DIII &$0 \subseteq 0 \subseteq 0 \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 2\mathbb{Z} \subseteq \mathbb{Z} \subseteq \mathbb{Z}^{2}$ &$0 \subseteq 0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2$ \\ $\hat{\mathcal{U}}_{0,-}^{+},\ensuremath{\hat{\mathcal{U}}_{T/2,-}^{+}}$ & AII &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 2\mathbb{Z} \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2^2$ &$0 \subseteq 0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2$ \\ $\hat{\mathcal{U}}_{0,--}^{+},\hat{\mathcal{U}}_{T/2,-+}^{+}$ & CII &$0 \subseteq 0 \subseteq 0 \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 0 \subseteq \mathbb{Z}_2$ &$0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2^2$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ \\ $\hat{\mathcal{U}}_{0,-}^{+},\ensuremath{\hat{\mathcal{U}}_{T/2,+}^{+}}$ & C &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$4\mathbb{Z} \subseteq 4\mathbb{Z} \subseteq 2\mathbb{Z} \subseteq 2\mathbb{Z}$ \\ $\hat{\mathcal{U}}_{0,--}^{+},\hat{\mathcal{U}}_{T/2,-+}^{+}$ &CI &$0 \subseteq 0 \subseteq 0 \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 2\mathbb{Z} \subseteq 2\mathbb{Z} \subseteq 2\mathbb{Z}^2$ &$0 \subseteq 0 
\subseteq 0 \subseteq 0$ \\ \hline $\ensuremath{\overline{\mathcal{U}}_{0,+}^{+}},\text{\ensuremath{\overline{\mathcal{U}}_{T/2,-}^{+}}}$ & AI &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 2\mathbb{Z} \subseteq 2\mathbb{Z} \subseteq 2\mathbb{Z}$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ \\ $\hat{\mathcal{U}}_{0,-+}^{+},\hat{\mathcal{U}}_{T/2,--}^{+}$ & BDI &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq \mathbb{Z} \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ \\ $\overline{\mathcal{U}}_{s,-}^{+}$ & D &$0 \subseteq 0 \subseteq 0 \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ \\ $\hat{\mathcal{U}}_{0,+-}^{+},\hat{\mathcal{U}}_{T/2,++}^{+}$ & DIII &$0 \subseteq 0 \subseteq 0 \subseteq \mathbb{Z}_2$ &$0 \subseteq 0 \subseteq 0 \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$4\mathbb{Z} \subseteq 2\mathbb{Z} \subseteq \mathbb{Z} \subseteq \mathbb{Z}^2$ \\ $\ensuremath{\overline{\mathcal{U}}_{0,+}^{+}},\overline{\mathcal{U}}_{T/2,-}^{+}$ & AII &$0 \subseteq 0 \subseteq 0 \subseteq \mathbb{Z}_2$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 4\mathbb{Z} \subseteq 2\mathbb{Z} \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2^2$ \\ $\hat{\mathcal{U}}_{0,-+}^{+},\hat{\mathcal{U}}_{T/2,--}^{+}$ & CII &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 2\mathbb{Z} \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 0 \subseteq \mathbb{Z}_2$ &$0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2^2$ \\ $\overline{\mathcal{U}}_{s,-}^{+}$ & C &$0 \subseteq 0 \subseteq 0 \subseteq 2\mathbb{Z}$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ \\ 
$\hat{\mathcal{U}}_{0,+-}^{+},\hat{\mathcal{U}}_{T/2,++}^{+}$ & CI &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$0 \subseteq 0 \subseteq 0 \subseteq \mathbb{Z}$ &$0 \subseteq 0 \subseteq 0 \subseteq 0$ &$2\mathbb{Z} \subseteq 2\mathbb{Z} \subseteq 2\mathbb{Z} \subseteq 2\mathbb{Z}^2$ \end{tabular} \end{ruledtabular} \end{table*} A more intriguing fact regarding crystalline symmetries/antisymmetries is that they can give rise to boundary modes with codimension higher than one, such as corners of 2D or 3D systems, as well as hinges of 3D systems \cite{Benalcazar2017, Peng2017, Langbehn2017, Benalcazar2017s, Song2017, Schindler2018, Geier2018, Khalaf2018, Khalaf2018prx}. Such systems are known as HOTI/SCs, in which the existence of the high-codimension gapless boundary modes is guaranteed when the boundaries are compatible with the crystalline symmetry/antisymmetry, i.e., a group of boundaries with different orientations are mapped onto each other under the nonlocal transformation of a particular crystalline symmetry/antisymmetry. For example, to have a HOTI/SC protected by inversion, one needs to create boundaries in pairs related by inversion \cite{Khalaf2018, Khalaf2018prx}. An additional requirement for these corner or hinge modes is that they should be intrinsic, namely their existence should not depend on the lattice termination; otherwise, such high-codimension boundary modes can be thought of as (codimension-one) boundary modes of a lower-dimensional system, which is then glued to the original boundary. In other words, an $n$th-order TI/SC has codimension-$n$ boundary modes which cannot be destroyed through modifications of the lattice termination at the boundaries while preserving the bulk gap and the symmetries. According to this definition, the tenfold-way TI/SCs are indeed intrinsic first-order TI/SCs.
In Ref.~\cite{Trifunovic2019}, a complete classification of these intrinsic corner or hinge modes was derived and a higher-order bulk-boundary correspondence between these high codimension boundary modes and the topological bulk was obtained. These were accomplished by considering a $K$ subgroup series for a $d$-dimensional crystal, \begin{equation} K^{(d)} \subseteq \dots \subseteq K'' \subseteq K' \subseteq K, \end{equation} where $K\equiv K^{(0)}$ is the $K$ group which classifies the bulk band structure of Hamiltonians with coexisting order-two symmetry/antisymmetry, defined in the previous section. $K^{(n)}\subseteq K$ is a subgroup excluding topological phases of order $n$ or lower, for any crystalline-symmetry compatible boundaries. For example, $K'$ classifies the ``purely crystalline phases'' \cite{Geier2018, Trifunovic2019}, which exclude the tenfold-way topological phases; the latter are first-order topological phases protected by nonspatial symmetries alone and have gapless modes at any codimension-one boundary. These purely crystalline phases can have gapless modes only when the boundary preserves the crystalline symmetry, and the gapless modes will be gapped when the crystalline symmetry is broken. From a boundary perspective, one can define the boundary $K$ group $\mathcal{K}'$, which classifies the tenfold-way topological phases with gapless codimension-one boundary modes irrespective of boundary orientations, as long as the crystal shape and lattice termination are compatible with the crystalline symmetries. According to the above definitions, $\mathcal{K}'$ can be identified as the quotient group \begin{equation} \mathcal{K}'= K/K'. \end{equation} Generalizing this idea, a series of boundary $K$ groups denoted as $\mathcal{K}^{(n)}$ can be defined, which classify the intrinsic $n$-th order TI/SCs with intrinsic gapless codimension-$n$ boundary modes, when the crystal has crystalline-symmetry-compatible shape and lattice termination.
In Ref.~\cite{Trifunovic2019}, the authors proved the following relation, \begin{equation} \mathcal{K}^{(n+1)} = K^{(n)}/K^{(n+1)}, \end{equation} known as the higher-order bulk-boundary correspondence: an intrinsic higher-order topological phase is uniquely associated with a topologically nontrivial bulk. Moreover, the above equation provides a systematic way of obtaining the complete classification of intrinsic HOTI/SCs from the $K$ subgroup series, which were computed for crystals up to three dimensions with order-two crystalline symmetries/antisymmetries. We can generalize these results to anomalous Floquet HOTI/SCs, by considering unitary loops $U(\boldsymbol{k},t)$ in $d$ dimensions without topological defect. To define a $K$ subgroup series for unitary loops with an order-two space-time symmetry/antisymmetry, one can exploit the hermitian map and introduce the $K$ groups according to their corresponding Hamiltonians with an order-two crystalline symmetry/antisymmetry. One obtains that the $K$ subgroup series for each nontrivial space-time symmetry/antisymmetry are the same as the ones for a corresponding static order-two crystalline symmetry/antisymmetry, according to the substitution rules summarized in Sec.~\ref{sec:substitution} and Table~\ref{tab:summary}. On the other hand, the $K$ groups are the same for unitary loops and Hamiltonians when static order-two symmetries/antisymmetries are considered. Using the results from Ref.~\cite{Trifunovic2019}, we present the $K$ subgroup series for unitary loops with an order-two space-time symmetry/antisymmetry in Tables~\ref{tab:complex_unitary_hoti_2}--\ref{tab:real_unitary_hoti_3}, for systems up to three dimensions. In these tables, we use the notation $G^2$ to denote $G \oplus G$, with $G = \mathbb{Z}, 2\mathbb{Z}, \mathbb{Z}_2$. One also notices that the largest $K$ group $K^{(0)}$ in each series is actually the one shown in the tables of the Supplemental Material \cite{suppl}.
The classification of intrinsic codimension-$n$ anomalous Floquet boundary modes is then given by the quotient $\mathcal{K}^{(n)} = K^{(n-1)}/K^{(n)}$. \section{Floquet HOTI/SCs in frequency domain \label{sec:FHOTI_frequency_domain}} In this section, we take an alternative route to connect a Floquet HOTI/SC with a nontrivial space-time symmetry/antisymmetry, to a static HOTI/SC with a corresponding crystalline symmetry/antisymmetry. This connection is based on the frequency-domain formulation of the Floquet problem \cite{Rudner2013}, which provides a more intuitive perspective to the results obtained by $K$ theory. \subsection{Frequency-domain formulation \label{sec:frequency_domain}} In the frequency-domain formulation of the Floquet problem, the quasienergies are obtained by diagonalizing the enlarged Hamiltonian \begin{equation} \mathscr{H}(\boldsymbol{k},\boldsymbol{r})=\left(\begin{array}{ccccc} \ddots\\ & h_{0}+\omega & h_{1} & h_{2}\\ & h_{1}^{\dagger} & h_{0} & h_{1}\\ & h_{2}^{\dagger} & h_{1}^{\dagger} & h_{0}-\omega\\ & & & & \ddots \end{array}\right), \end{equation} where the matrix blocks are given by \begin{equation} h_n(\boldsymbol{k},\boldsymbol{r}) = \frac{1}{T} \int_0^T dt\, H(\boldsymbol{k},t) e^{-in\omega t}. \end{equation} Here, the appearance of the infinite dimensional matrix $\mathscr{H}$ can be subtle, and should be defined more carefully. Since later we would like to discuss the gap at $\epsilon_{\rm gap}=\omega/2$, we will assume that the infinite dimensional matrix $\mathscr{H}$ is obtained by taking the limit $n \to \infty$ of a finite dimensional matrix whose diagonal blocks run from $h_0+n\omega$ to $h_{0}-(n-1)\omega$, with $n$ a positive integer. With this definition, $\omega/2$ will be the particle-hole/chiral symmetric energy whenever the system has particle-hole/chiral symmetries.
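The block structure above can be illustrated numerically. The following sketch is our own illustration (an arbitrary two-band harmonic drive, not a model from this work): it computes the Fourier blocks $h_n$ by discretizing the integral and assembles a three-Floquet-zone truncation of $\mathscr{H}$.

```python
import numpy as np

# Our illustration: Fourier blocks h_n = (1/T) ∫_0^T dt H(t) e^{-inωt}
# for a toy harmonic drive H(t) = h0 + h1 e^{iωt} + h1† e^{-iωt},
# with h0 = σz and h1 = 0.3 σx (arbitrary choices).
T = 2 * np.pi
omega = 2 * np.pi / T
sz = np.diag([1.0, -1.0]).astype(complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def H(t):
    return sz + 0.3 * sx * np.exp(1j * omega * t) + 0.3 * sx * np.exp(-1j * omega * t)

def fourier_block(n, steps=2000):
    ts = np.linspace(0, T, steps, endpoint=False)
    return sum(H(t) * np.exp(-1j * n * omega * t) for t in ts) / steps

h0, h1 = fourier_block(0), fourier_block(1)
assert np.allclose(h0, sz, atol=1e-8) and np.allclose(h1, 0.3 * sx, atol=1e-8)

# Truncate the enlarged Hamiltonian to three Floquet zones (n = +1, 0, -1):
# diagonal blocks h0 + ω, h0, h0 - ω, with h1 above and h1† below the diagonal.
dim = h0.shape[0]
Hbig = np.zeros((3 * dim, 3 * dim), dtype=complex)
for i in range(3):
    Hbig[i*dim:(i+1)*dim, i*dim:(i+1)*dim] = h0 + (1 - i) * omega * np.eye(dim)
    if i < 2:
        Hbig[i*dim:(i+1)*dim, (i+1)*dim:(i+2)*dim] = h1
        Hbig[(i+1)*dim:(i+2)*dim, i*dim:(i+1)*dim] = h1.conj().T
assert np.allclose(Hbig, Hbig.conj().T)  # the enlarged Hamiltonian is hermitian
```

For a purely harmonic drive all blocks $h_n$ with $|n|\geq 2$ vanish, which the discretized integral reproduces to machine precision.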
As a static Hamiltonian, $\mathscr{H}(\boldsymbol{k},\boldsymbol{r})$ has the same nonspatial symmetries as the original $H(\boldsymbol{k},\boldsymbol{r},t)$ does. Indeed, one can define the effective time-reversal $\mathscr{T}$, particle-hole $\mathscr{C}$ and chiral $\mathscr{S}$ symmetries for the enlarged Hamiltonian $\mathscr{H}(\boldsymbol{k},\boldsymbol{r})$ as \begin{align} &\mathscr{T}=\left(\begin{array}{ccccc} \ddots\\ & \hat{\mathcal{T}}\\ & & \hat{\mathcal{T}}\\ & & & \hat{\mathcal{T}}\\ & & & & \ddots \end{array}\right), \end{align} \begin{align} &\mathscr{C}=\left(\begin{array}{ccccc} & & & & \dots\\ & & & \hat{\mathcal{C}}\\ & & \hat{\mathcal{C}}\\ & \hat{\mathcal{C}}\\ \dots \end{array}\right), \end{align} \begin{align} &\mathscr{S}=\left(\begin{array}{ccccc} & & & & \dots\\ & & & \hat{\mathcal{S}}\\ & & \hat{\mathcal{S}}\\ & \hat{\mathcal{S}}\\ \dots \end{array}\right). \end{align} On the other hand, when the original $H(\boldsymbol{k},\boldsymbol{r},t)$ has a nontrivial space-time symmetry/antisymmetry, the enlarged Hamiltonian $\mathscr{H}(\boldsymbol{k},\boldsymbol{r})$ will acquire the spatial (crystalline) symmetry/antisymmetry inherited from the spatial part of the space-time symmetry/antisymmetry. Let us first consider $\hat{\mathcal{U}}_{T/2}$ defined in Eq.~(\ref{eq:unitary_symmetry}) for $s=T/2$, which is a unitary operation together with a half-period time translation.
Since \begin{equation} \hat{\mathcal{U}}_{T/2}h_{n}(\boldsymbol{k},\boldsymbol{r})\hat{\mathcal{U}}_{T/2}^{-1} =(-1)^{n} h_{n}(-\boldsymbol{k}_{\parallel}, \boldsymbol{k}_{\perp}, -\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp}), \end{equation} the enlarged Hamiltonian thus respects a unitary spatial symmetry defined by \begin{equation} \mathscr{U}\mathscr{H}(\boldsymbol{k},\boldsymbol{r})\mathscr{U}^{-1}= \mathscr{H}(-\boldsymbol{k}_{\parallel}, \boldsymbol{k}_{\perp}, -\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp}), \end{equation} where the unitary operator \begin{equation} \mathscr{U} = \left(\begin{array}{ccccc} \ddots\\ & \hat{\mathcal{U}}_{T/2}\\ & & -\hat{\mathcal{U}}_{T/2}\\ & & & \hat{\mathcal{U}}_{T/2}\\ & & & & \ddots \end{array}\right) \end{equation} is inherited from $\hat{\mathcal{U}}_{T/2}$. Next, we consider $\overline{\mathcal{A}}_{T/2}$. Since \begin{equation} \overline{\mathcal{A}}_{T/2}h_{n}(\boldsymbol{k},\boldsymbol{r})\overline{\mathcal{A}}_{T/2}^{-1} =-(-1)^{n}h_{-n}(\boldsymbol{k}_{\parallel},-\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp}), \end{equation} one can define \begin{equation} \overline{\mathscr{A}}=\left(\begin{array}{ccccc} & & & & \iddots\\ & & & -i\overline{\mathcal{A}}_{T/2}\\ & & i\overline{\mathcal{A}}_{T/2}\\ & -i\overline{\mathcal{A}}_{T/2}\\ \iddots \end{array}\right) \end{equation} such that \begin{equation} \overline{\mathscr{A}}\mathscr{H}(\boldsymbol{k},\boldsymbol{r})\overline{\mathscr{A}}^{-1} = -\mathscr{H}(\boldsymbol{k}_{\parallel},-\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp}). \end{equation} We now consider symmetry operators $\hat{\mathcal{A}}_{T/2}$ and $\overline{\mathcal{U}}_{T/2}$, for symmetry classes other than A, C, and D. 
For $\hat{\mathcal{A}}_{T/2}$, we have \begin{equation} \hat{\mathcal{A}}_{T/2}h_{n}(\boldsymbol{k},\boldsymbol{r})\hat{\mathcal{A}}_{T/2}^{-1} = (-1)^{n}h_{n}(\boldsymbol{k}_{\parallel},-\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp}). \end{equation} Thus, the enlarged Hamiltonian $\mathscr{H}(\boldsymbol{k},\boldsymbol{r})$ also has an antiunitary spatial symmetry inherited from $\hat{\mathcal{A}}_{T/2}$, given by \begin{equation} \mathscr{A}\mathscr{H}(\boldsymbol{k},\boldsymbol{r})\mathscr{A}^{-1} = \mathscr{H}(\boldsymbol{k}_{\parallel},-\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp}), \end{equation} where the antiunitary operator \begin{equation} \mathscr{A}=\left(\begin{array}{ccccc} \ddots\\ & \hat{\mathcal{A}}_{T/2}\\ & & -\hat{\mathcal{A}}_{T/2}\\ & & & \hat{\mathcal{A}}_{T/2}\\ & & & & \ddots \end{array}\right). \end{equation} Finally, for $\overline{\mathcal{U}}_{T/2}$, it satisfies \begin{equation} \overline{\mathcal{U}}_{T/2}h_{n}(\boldsymbol{k},\boldsymbol{r})\overline{\mathcal{U}}_{T/2}^{-1} =-(-1)^{n}h_{-n}(-\boldsymbol{k}_{\parallel},\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp}). \end{equation} Hence, if we define \begin{equation} \overline{\mathscr{U}}=\left(\begin{array}{ccccc} & & & & \iddots\\ & & & -i\overline{\mathcal{U}}_{T/2}\\ & & i\overline{\mathcal{U}}_{T/2}\\ & -i\overline{\mathcal{U}}_{T/2}\\ \iddots \end{array}\right), \end{equation} the enlarged Hamiltonian will satisfy \begin{equation} \overline{\mathscr{U}}\mathscr{H}(\boldsymbol{k},\boldsymbol{r})\overline{\mathscr{U}}^{-1} = -\mathscr{H}(-\boldsymbol{k}_{\parallel},\boldsymbol{k}_{\perp},-\boldsymbol{r}_{\parallel},\boldsymbol{r}_{\perp}). 
\end{equation} \subsection{Harmonically driven systems \label{sec:harmonically_driven_models}} To simplify the discussion, it is helpful to restrict ourselves to a specific class of periodically driven systems, the harmonically driven ones, whose Hamiltonians have the following form \begin{equation} H(\boldsymbol{k},t) = h_{0}(\boldsymbol{k}) + h_{1}(\boldsymbol{k})e^{i\omega t} + h_{1}^\dagger(\boldsymbol{k})e^{-i\omega t}. \label{eq:harmonic_hamiltonian} \end{equation} To discuss the band topology around $\epsilon_{\rm gap}=\omega/2$, one can further truncate the enlarged Hamiltonian $\mathscr{H}$ to the $2 \times 2$ block containing two Floquet zones with energy difference $\omega$, namely \begin{equation} \mathscr{H}(\boldsymbol{k}) = \left(\begin{array}{cc} h_{0}(\boldsymbol{k})+\frac{\omega}{2} & h_{1}(\boldsymbol{k})\\ h_{1}^{\dagger}(\boldsymbol{k}) & h_{0}(\boldsymbol{k})-\frac{\omega}{2} \end{array}\right) + \frac{\omega}{2}\rho_0, \label{eq:effective_2by2} \end{equation} where $\rho_{0}$ is the identity in the two Floquet-zone basis. For later convenience, we use $\rho_{x,y,z}$ to denote the Pauli matrices in this basis. Since the last term in Eq.~(\ref{eq:effective_2by2}) is a shift in energy by $\omega/2$, we have a Floquet HOTI/SC at $\epsilon_{\rm gap}=\omega/2$ if and only if the first term in Eq.~(\ref{eq:effective_2by2}) is a static HOTI/SC. When restricted to the two Floquet-zone basis, the nonspatial symmetries can be conveniently written as \begin{equation} \mathscr{T} = \rho_0\hat{\mathcal{T}},\ \mathscr{C} = \rho_x\hat{\mathcal{C}}, \ \mathscr{S} = \rho_x\hat{\mathcal{S}}.
\label{eq:effective_AZ} \end{equation} The spatial symmetries/antisymmetries for $\mathscr{H}$, which are inherited from the space-time symmetries/antisymmetries, can also be written simply as \begin{align} &\mathscr{U} = \rho_z\hat{\mathcal{U}}_{T/2},\quad \overline{\mathscr{A}} = \rho_y\overline{\mathcal{A}}_{T/2}, \nonumber \\ &\mathscr{A} = \rho_z\hat{\mathcal{A}}_{T/2},\quad \overline{\mathscr{U}} = \rho_y\overline{\mathcal{U}}_{T/2}. \label{eq:effective_spatial} \end{align} From these relations, one arrives at the same results as the ones from $K$ theory in the previous sections. When a spatial symmetry $\mathscr{O}$, with $\mathscr{O}=\mathscr{U},\mathscr{A}$, coexists with the particle-hole and/or chiral symmetry operators $\mathscr{C}$, $\mathscr{S}$, $\mathscr{O}$ will commute or anticommute with $\mathscr{C}$ and/or $\mathscr{S}$. Let us write \begin{align} \mathscr{O}\mathscr{C} &= \chi_C \mathscr{C}\mathscr{O},\\ \mathscr{O}\mathscr{S} &= \chi_S \mathscr{S}\mathscr{O}, \end{align} with $\chi_C, \chi_S=\pm 1$. Because of the additional Pauli matrices $\rho_{x,y,z}$ in Eqs.~(\ref{eq:effective_AZ}) and (\ref{eq:effective_spatial}), we have $\eta_C = -\chi_C$ and $\eta_S= -\chi_S$. For $\mathscr{O}$, the commutation relation with respect to the time-reversal symmetry does not vary, whereas for a spatial antisymmetry $\overline{\mathscr{O}}$, with $\overline{\mathscr{O}}=\overline{\mathscr{U}},\overline{\mathscr{A}}$, coexisting with the time-reversal symmetry, the commutation relation with respect to the latter does get switched. Let us write \begin{equation} \overline{\mathscr{O}}\mathscr{T} = \chi_T \mathscr{T}\overline{\mathscr{O}}, \end{equation} with $\chi_T = \pm 1$; then we would have \begin{equation} \eta_{T} = -\chi_{T}, \end{equation} because $\rho_{y}$ is imaginary. Because of this, we can also obtain $\overline{\mathscr{A}}^2 = -\overline{\mathcal{A}}_{T/2}^2$.
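As an explicit one-line check of the sign flip, take $\mathscr{U}=\rho_z\hat{\mathcal{U}}_{T/2}$ and $\mathscr{C}=\rho_x\hat{\mathcal{C}}$, and write $\hat{\mathcal{U}}_{T/2}\hat{\mathcal{C}} = \eta_C \hat{\mathcal{C}}\hat{\mathcal{U}}_{T/2}$. Since $\rho_z$ and $\rho_x$ are real, they are unaffected by the complex conjugation contained in $\hat{\mathcal{C}}$, and since they anticommute, \begin{equation} \mathscr{U}\mathscr{C} = \rho_z\rho_x\, \hat{\mathcal{U}}_{T/2}\hat{\mathcal{C}} = -\eta_C\, \rho_x\rho_z\, \hat{\mathcal{C}}\hat{\mathcal{U}}_{T/2} = -\eta_C\, \mathscr{C}\mathscr{U}, \end{equation} so that $\chi_C = -\eta_C$. The remaining relations, including the ones involving the imaginary matrix $\rho_y$, follow from analogous manipulations.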
\section{Model Hamiltonians for Floquet HOTI/SCs \label{sec:models}} In this section, we introduce model Hamiltonians, which are simple but still sufficiently general, for Floquet HOTI/SCs in all symmetry classes. In particular, we consider Hamiltonians of harmonically driven Floquet HOTI/SCs with a given nontrivial space-time symmetry/antisymmetry, realized by $\hat{\mathcal{U}}_{T/2}$, $\overline{\mathcal{A}}_{T/2}$, $\hat{\mathcal{A}}_{T/2}$, or $\overline{\mathcal{U}}_{T/2}$. One should notice that the latter two symmetries/antisymmetries are only available when the system is not in classes A, C, or D, because in these classes, the symmetries with $s=0$ and $T/2$ are the same up to redefining the origin of the time coordinate. \subsection{Hamiltonians} The harmonically driven Floquet HOTI/SCs in $d$ dimensions to be constructed have Bloch Hamiltonians of the following general form \begin{equation} H(\boldsymbol{k},t,m) = d_{0}(\boldsymbol{k},m)\Gamma_0 + \sum_{j = 1}^{d} d_{j}(\boldsymbol{k})\Gamma_j \cos(\omega t), \end{equation} where \begin{align} d_{0}(\boldsymbol{k},m) &= m+\sum_{j=1}^{d} (1-\cos k_j) + \dots\\ d_{j}(\boldsymbol{k}) &= \sin k_j, \quad j=1,\dots, d, \end{align} and $\{\Gamma_{i}, \Gamma_{j}\} = 2\delta_{ij}\mathbb{I}$, with $\mathbb{I}$ the identity matrix. Here ``$\dots$'' represents $\boldsymbol{k}$-independent symmetry allowed perturbations that will in general gap out unprotected gapless modes. One can further choose a representation of these $\Gamma_{j}$s such that \begin{equation} \Gamma_{0}=\left(\begin{array}{cc} \mathbb{I} & 0\\ 0 & -\mathbb{I} \end{array}\right) = \tau_{z}, \quad \Gamma_{j}=\left(\begin{array}{cc} 0 & \gamma_j\\ \gamma_j^{\dagger} &0 \end{array}\right), \label{eq:gamma_representation} \end{equation} for $j = 1,\dots, d$.
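As a quick numerical aside (our own sketch, not part of the construction), one can check the band structure of this family in the minimal 2D representation $\Gamma_0=\tau_z$, $\Gamma_1=\tau_x$, $\Gamma_2=\tau_y$, dropping the ``$\dots$'' perturbations: the bulk gap closes at $m=0$ at $\boldsymbol{k}=0$ and is open on both sides.

```python
import numpy as np

# Our illustration: static envelope d0(k,m) Γ0 + Σ_j dj(k) Γj in the
# minimal 2D representation Γ0 = τz, Γ1 = τx, Γ2 = τy ("..." terms dropped).
tz = np.diag([1.0, -1.0]).astype(complex)
tx = np.array([[0, 1], [1, 0]], dtype=complex)
ty = np.array([[0, -1j], [1j, 0]])

def h(kx, ky, m):
    d0 = m + (1 - np.cos(kx)) + (1 - np.cos(ky))
    return d0 * tz + np.sin(kx) * tx + np.sin(ky) * ty

def bulk_gap(m, n=41):
    # minimum |E| over a Brillouin-zone grid that includes k = 0
    ks = np.linspace(-np.pi, np.pi, n)
    return min(np.abs(np.linalg.eigvalsh(h(kx, ky, m))).min()
               for kx in ks for ky in ks)

assert bulk_gap(0.0) < 1e-6   # gap closes at m = 0 (at k = 0)
assert bulk_gap(-1.0) > 0.5   # gapped for -2 < m < 0
assert bulk_gap(0.5) > 0.4    # gapped for m > 0
```
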
By the transformation properties of the symmetry/antisymmetry operators, in this representation $\hat{\mathcal{T}}$, $\hat{\mathcal{U}}_{T/2}$ and $\hat{\mathcal{A}}_{T/2}$ are block diagonal, namely they act independently on the two subspaces with $\tau_{z} = \pm 1$, whereas the operators $\hat{\mathcal{C}}$, $\hat{\mathcal{S}}$, $\overline{\mathcal{U}}_{T/2}$ and $\overline{\mathcal{A}}_{T/2}$ are block off-diagonal, i.e. they couple the two subspaces. In this representation, the enlarged Hamiltonian $\mathscr{H}(\boldsymbol{k})$ truncated to two Floquet zones, up to the constant shift $\omega/2$, can be decoupled into two sectors with $\rho_z\tau_z = \pm 1$. Hence, one can write it as a direct sum \begin{equation} \mathscr{H}(\boldsymbol{k})=h(\boldsymbol{k},m+\omega/2) \oplus h(\boldsymbol{k},m-\omega/2), \label{eq:direct_sum} \end{equation} with \begin{equation} h(\boldsymbol{k},m) = d_{0}(\boldsymbol{k},m)\tilde{\Gamma}_0 + \sum_{j=1}^{d}d_{j}(\boldsymbol{k})\tilde{\Gamma}_j. \label{eq:static_models} \end{equation} Here the matrices $\tilde{\Gamma}_{j}$ have a two-by-two block structure when restricted to the $\rho_z\tau_z = \pm 1$ sectors of $\mathscr{H}(\boldsymbol{k})$. If we abuse the notation by still using $\tau_{x,y,z}$ for this two-by-two degree of freedom, we can identify $\tilde{\Gamma}_{j}= \Gamma_j$, for $j=0,\dots,d$. It is straightforward to verify that the static Hamiltonian $h(\boldsymbol{k},m)$ respects the same nonspatial symmetries as the harmonically driven Hamiltonian $H(\boldsymbol{k},t,m)$ does, with the same symmetry operators. Moreover, if $H(\boldsymbol{k},t,m)$ respects a nontrivial space-time symmetry, realized by $\hat{\mathcal{U}}_{T/2}$ or $\hat{\mathcal{A}}_{T/2}$, then $h(\boldsymbol{k},m)$ will respect a spatial symmetry, realized by $\Gamma_0\hat{\mathcal{U}}_{T/2}$ or $\Gamma_0\hat{\mathcal{A}}_{T/2}$, respectively.
However, if $H(\boldsymbol{k},t,m)$ respects a nontrivial space-time antisymmetry, realized by $\overline{\mathcal{U}}_{T/2}$ or $\overline{\mathcal{A}}_{T/2}$, then $h(\boldsymbol{k},m)$ will respect a spatial antisymmetry, realized by $-i\Gamma_0\hat{\mathcal{U}}_{T/2}$ or $-i\Gamma_0\hat{\mathcal{A}}_{T/2}$, respectively. These relations can be worked out by using the block diagonal or off-diagonal properties of the operators of space-time symmetries/antisymmetries, as well as the relations in Eq.~(\ref{eq:effective_spatial}). Thus, we have established a mapping between harmonically driven Hamiltonians $H(\boldsymbol{k},t,m)$ and static Hamiltonians $h(\boldsymbol{k},m)$, as well as their transformation properties under symmetry/antisymmetry operators. On the other hand, the $h(\boldsymbol{k},m)$ given in Eq.~(\ref{eq:static_models}) are well studied models for static HOTI/SCs \cite{Geier2018, Trifunovic2019}. It is known that for $-2<m<0$, the Hamiltonian $h(\boldsymbol{k},m)$ is in a topological phase (if the classification is nontrivial), whereas for $m>0$ the Hamiltonian is in a trivial phase. A topological phase transition occurs at $m=0$ with the band gap closing at $\boldsymbol{k}=0$. Since the enlarged Hamiltonian $\mathscr{H}(\boldsymbol{k})$, up to a constant $\omega/2$ shift, can be written as a direct sum of $h(\boldsymbol{k},m\pm\omega/2)$, the static Hamiltonian $\mathscr{H}(\boldsymbol{k})$ will be in the topological phase (with chemical potential inside the gap at $\omega/2$) if $-2<m-\omega/2<0$ and $m+\omega/2>0$. This is also the condition under which $H(\boldsymbol{k},t,m)$ is in a Floquet topological phase at $\epsilon_{\rm gap} = \omega/2$. \subsection{Symmetry/antisymmetry-breaking mass terms \label{sec:symmetry_mass_terms}} Let us consider $-2<m-\omega/2<0$ and $m+\omega/2>0$.
In this parameter regime, $h(\boldsymbol{k},m+\omega/2)$ is always in a trivial insulating phase, whereas $h(\boldsymbol{k},m-\omega/2)$ is in a nontrivial topological phase, if there exists no mass term $M$ that respects the nonspatial symmetries, as well as the spatial symmetry/antisymmetry inherited from the space-time symmetry/antisymmetry of $H(\boldsymbol{k},t,m)$. Here, the mass term in addition satisfies $M^2=1$, $M=M^{\dagger}$ and $\{M,h(\boldsymbol{k},m)\}=0$. Such a mass term will gap out any gapless states that may appear in a finite-size system whose bulk is given by $h(\boldsymbol{k},m-\omega/2)$. When $M$ exists, one can define a term $M\cos(\omega t)$ respecting all nonspatial symmetries and the space-time symmetry/antisymmetry of $H(\boldsymbol{k},m,t)$, and it will gap out any gapless Floquet boundary modes at quasienergy $\epsilon_{\rm gap} = \omega/2$. If no mass term $M$, which satisfies only the nonspatial symmetries irrespective of the spatial symmetry/antisymmetry, exists, then $h(\boldsymbol{k},m-\omega/2)$ ($H(\boldsymbol{k},m,t)$) is in the static (Floquet) tenfold-way topological phases, as it remains nontrivial even when the spatial (space-time) symmetry/antisymmetry is broken. Thus, the tenfold-way phases are always first-order topological phases. However, if such an $M$ exists, $h(\boldsymbol{k},m-\omega/2)$ ($H(\boldsymbol{k},m,t)$) describes a static (Floquet) ``purely crystalline'' topological phase, which can be a higher-order topological phase, and the topological protection relies on the spatial (space-time) symmetry/antisymmetry. As pointed out in Ref.~\cite{Trifunovic2019}, several mutually anticommuting spatial-symmetry/antisymmetry-breaking mass terms $M_{l}$ can exist for $h(\boldsymbol{k},m-\omega/2)$, where $M_{l}$ also anticommutes with $h$.
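To make the defining properties of such a mass term concrete, the following is a small numerical sketch with our own toy 4-band example (not one of the models constructed below), exhibiting an $M$ with $M^2=1$, $M=M^{\dagger}$ and $\{M,h(\boldsymbol{k},m)\}=0$.

```python
import numpy as np

# Our toy illustration: for the 4-band 2D Dirac model
# h(k,m) = d0 τz + sin(kx) τx σx + sin(ky) τy  (basis ordering τ ⊗ σ assumed),
# the matrix M = τx σy satisfies the mass-term conditions.
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

def h(kx, ky, m=-1.0):
    d0 = m + (1 - np.cos(kx)) + (1 - np.cos(ky))
    return (d0 * np.kron(sz, s0)             # d0 τz
            + np.sin(kx) * np.kron(sx, sx)   # sin kx τx σx
            + np.sin(ky) * np.kron(sy, s0))  # sin ky τy

M = np.kron(sx, sy)  # candidate mass term τx σy

assert np.allclose(M @ M, np.eye(4))   # M² = 1
assert np.allclose(M, M.conj().T)      # M = M†
for kx, ky in [(0.2, 0.9), (-1.4, 2.2)]:
    assert np.allclose(M @ h(kx, ky) + h(kx, ky) @ M, 0)  # {M, h} = 0
```

Whether such an $M$ additionally respects the relevant nonspatial and spatial symmetries is what decides between the tenfold-way and purely crystalline cases discussed above.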
Furthermore, if $h$ has the minimum possible dimension for a given ``purely crystalline'' topological phase, then the mass terms $M_{l}$ all anticommute (commute) with the spatial symmetry (antisymmetry) operator of $h(\boldsymbol{k},m-\omega/2)$. In this case, one can relate the number of these mass terms $M_{l}$ and the order of the topological phase \cite{Trifunovic2019}: When $n$ mass terms $M_{l}$ exist, with $l=1,\dots,n$, boundaries of codimension up to $\min(n,d_{\parallel})$ are gapped, and one has a topological phase of order $\min (n+1,d_{\parallel}+1)$ if $\min(n+1,d_{\parallel}+1) \leq d$. However, if $\min(n+1,d_{\parallel}+1) > d$, the system does not support any protected boundary modes at any codimension. See Ref.~\cite{Trifunovic2019}, or Appendix~\ref{app:order_mass_terms} for the proof of this statement. Hence, the order of the Floquet topological phase described by $H(\boldsymbol{k},t,m)$ is reflected in the number of symmetry/antisymmetry-breaking mass terms $M_{l}$, due to the mapping between $H(\boldsymbol{k},t,m)$ and $h(\boldsymbol{k},m-\omega/2)$. In the following, we explicitly construct model Hamiltonians for Floquet HOTI/SCs with a given space-time symmetry/antisymmetry. \subsection{First-order phase in $d_{\parallel}=0$ family} When $d_{\parallel} = 0$, the symmetries/antisymmetries are onsite. From Tables~\ref{tab:complex_unitary_hoti_2}--\ref{tab:real_unitary_hoti_3}, we see that the onsite symmetries/antisymmetries only give rise to first-order TI/SCs, since only the $K^{(0)}$ in the subgroup series can be nonzero. This can also be understood from the fact that $\min(n+1,d_{\parallel}+1) = 1$ in this case. We will in the following provide two examples in which we have anomalous Floquet boundary modes of codimension one which are protected by the unitary onsite space-time symmetry. 
\subsubsection{2D system in class AII with $\hat{\mathcal{U}}_{T/2, -}^{+}$} The simplest static topological insulator protected by a unitary onsite symmetry is the quantum spin Hall insulator with an additional two-fold spin rotation symmetry around the $z$ axis \cite{Shiozaki2014}. This system is in class AII with time-reversal symmetry $\hat{\mathcal{T}}^2 = -1$. It is known that either a static or a Floquet system of class AII in 2D will have a $\mathbb{Z}_{2}$ topological invariant \cite{Chiu2016, Roy2017}. However, with a static unitary $d_{\parallel}=0$ symmetry (such as a two-fold spin rotation symmetry), realized by the operator $\hat{\mathcal{U}}^{+}_{0,-}$ that squares to one and anticommutes with the time-reversal symmetry operator, a $K^{(0)}= \mathbb{Z}$ topological invariant known as the spin Chern number can be defined. In fact, such a $\mathbb{Z}$ topological invariant (see Table~\ref{tab:real_unitary_hoti_2}) can also appear due to the existence of a space-time symmetry realized by $\hat{\mathcal{U}}_{T/2,-}^{+}$ at the quasienergy gap $\epsilon_{\rm gap} =\omega/2$. A lattice model that realizes a spin Chern insulator can be defined using the following Bloch Hamiltonian \begin{align} h(\boldsymbol{k},m) &= (m+2-\cos k_{x} - \cos k_{y})\tau_z \nonumber \\ & + (\sin k_{x}\tau_x s_z + \sin k_{y}\tau_{y}), \label{eq:hamiltonian_spin_chern} \end{align} where $s_{x,y,z}$ and $\tau_{x,y,z}$ are two sets of Pauli matrices for spins and orbitals. This Hamiltonian has time-reversal symmetry realized by $\hat{\mathcal{T}} = -i s_{y}\hat{\mathcal{K}}$ as well as the unitary symmetry realized by the operator $\hat{\mathcal{U}}_{0,-}^{+} = s_{z}$. When we choose an open boundary condition along $x$ while keeping a periodic boundary condition along $y$, there will be gapless helical edge states inside the bulk gap propagating along the $x$ edge at $k_y=0$ for $-2<m<0$.
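The stated symmetries of this static model can be verified numerically. The sketch below is our own illustration (basis ordering $s \otimes \tau$ is an assumption of the code, not fixed by the text): it checks $\hat{\mathcal{T}} h(\boldsymbol{k}) \hat{\mathcal{T}}^{-1} = h(-\boldsymbol{k})$ with $\hat{\mathcal{T}} = -is_y\hat{\mathcal{K}}$, and $[h, s_z] = 0$.

```python
import numpy as np

# Our illustration of Eq. (hamiltonian_spin_chern), basis ordering spin ⊗ orbital.
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

def h(kx, ky, m=-1.0):
    d0 = m + 2 - np.cos(kx) - np.cos(ky)
    return (d0 * np.kron(s0, sz)             # (m+2-cos kx-cos ky) τz
            + np.sin(kx) * np.kron(sz, sx)   # sin kx τx s_z
            + np.sin(ky) * np.kron(s0, sy))  # sin ky τy

Tmat = -1j * np.kron(sy, s0)  # unitary part of T = -i s_y K
U = np.kron(sz, s0)           # onsite unitary symmetry s_z

rng = np.random.default_rng(0)
for _ in range(5):
    kx, ky = rng.uniform(-np.pi, np.pi, 2)
    hk = h(kx, ky)
    # antiunitary action: T h(k) T^{-1} = Tmat h(k)* Tmat^{-1} = h(-k)
    assert np.allclose(Tmat @ hk.conj() @ np.linalg.inv(Tmat), h(-kx, -ky))
    assert np.allclose(U @ hk - hk @ U, 0)   # [h, s_z] = 0
```
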
The corresponding harmonically driven Hamiltonian can be written as \begin{align} H(\boldsymbol{k},t,m) &= (m+2-\cos k_{x} - \cos k_{y})\tau_z \nonumber \\ & + (\sin k_{x}\tau_x s_z + \sin k_{y}\tau_{y})\cos(\omega t), \end{align} where the time-reversal and the half-period time translation onsite symmetry operators are defined as $\hat{\mathcal{T}}=-is_{y}\hat{\mathcal{K}}$ and $\hat{\mathcal{U}}_{T/2,-}^{+}=s_{z}\tau_z$, respectively. When $-2<m-\omega/2<0$ and $m+\omega/2>0$ are satisfied, this model supports gapless helical edge states at $k_{y}=0$ inside the bulk quasienergy gap $\epsilon_{\rm gap}=\omega/2$ when the $x$ direction has an open boundary condition. Furthermore, such gapless Floquet edge modes persist as one introduces more perturbations that preserve the time-reversal and the $\hat{\mathcal{U}}_{T/2,-}^{+}$ symmetries. \subsubsection{2D system in class D with $\hat{\mathcal{U}}_{T/2,-}^{+}$} For 2D superconductors, either static or Floquet, in class D with no additional symmetries, the topological invariant is $\mathbb{Z}$, given by the Chern number of the Bogoliubov--de Gennes (BdG) bands. When there exists a static unitary $d_{\parallel}=0$ symmetry, realized by $\hat{\mathcal{U}}^{+}_{0,+}$ which commutes with the particle-hole symmetry operator, the topological invariant instead becomes $K^{(0)} = \mathbb{Z}\oplus \mathbb{Z}$, see Table~\ref{tab:real_unitary_hoti_2}. The same topological invariant can also be obtained from a space-time unitary symmetry realized by $\hat{\mathcal{U}}_{T/2,-}^{+}$, which anticommutes with the particle-hole symmetry operator. In the following, we construct a model Hamiltonian for such a Floquet system.
Let us start from the static 2D Hamiltonian in class D given by \begin{align} h(\boldsymbol{k},m) &= (m+2-\cos k_x -\cos k_y + b s_z)\tau_z \nonumber \\ &+ \sin k_x s_z \tau_x + \sin k_y \tau_y, \label{eq:static_D_del0} \end{align} with the particle-hole symmetry and the unitary onsite symmetry realized by $\hat{\mathcal{C}} = \tau_{x}\hat{\mathcal{K}}$ and $\hat{\mathcal{U}}_{0,+}^+ = s_z$, where $\tau_{x,y,z}$ are the Pauli matrices for the Nambu space. Here, the unitary symmetry can be thought of as a mirror reflection with respect to the $xy$ plane, and $bs_{z}$ is a Zeeman term which breaks the time-reversal symmetry. The $\mathbb{Z}\oplus \mathbb{Z}$ structure comes from the fact that $\hat{\mathcal{U}}_{0,+}^{+}$, $\hat{\mathcal{C}}$ and $h(\boldsymbol{k},m)$ can be simultaneously block diagonalized, according to the $\pm 1$ eigenvalues of $\hat{\mathcal{U}}_{0,+}^{+}$. Each block is a class D system with no additional symmetries, and thus has a $\mathbb{Z}$ topological invariant. Since the two blocks are independent, the topological invariant of the system is a direct sum of the topological invariants of the two blocks, leading to $\mathbb{Z} \oplus \mathbb{Z}$. The harmonically driven Hamiltonian with a unitary space-time onsite symmetry realized by $\hat{\mathcal{U}}_{T/2,-}^{+} = s_z\tau_z$ can be written as \begin{align} H(\boldsymbol{k},t,m) &= (m+2-\cos k_x -\cos k_y + b s_z)\tau_z \nonumber \\ &+ (\sin k_x s_z\tau_x - \sin k_y \tau_y)\cos(\omega t). \end{align} The particle-hole symmetry operator for this Hamiltonian is $\hat{\mathcal{C}} = \tau_{x}\hat{\mathcal{K}}$. \subsection{Second-order phase in $d_\parallel=1$ family} When a $d_{\parallel}=1$ space-time symmetry/antisymmetry is present, the system can be at most a second-order topological phase, since the order is given by $\min(n+1,d_{\parallel}+1)\leq 2$.
Since the unitary symmetry in this case is the so-called time-glide symmetry, which has already been discussed thoroughly in Refs.~\cite{Morimoto2017, Peng2019}, we will in the following construct models for second-order topological phases with antiunitary symmetries, as well as models with unitary antisymmetries. \subsubsection{2D system in class AIII with $\hat{\mathcal{A}}_{T/2,-}^{+}$} For 2D systems in class AIII without any additional symmetries, the topological classification is trivial, since the chiral symmetry will set the Chern number of the occupied bands to zero. However, in Table~\ref{tab:complex_antiunitary_hoti_2}, we see that when the 2D system has an antiunitary symmetry realized by either $\hat{\mathcal{A}}_{0,+}^{+}$ or $\hat{\mathcal{A}}_{T/2,-}^{+}$, the $K$ subgroup series is $0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_2$. Let us first understand the $K^{(0)} = \mathbb{Z}_{2}$ classification in the case of $\hat{\mathcal{A}}_{0,+}^{+}$ in a static system with Hamiltonian $h(k_{x},k_{y})$. Let us assume that $\hat{\mathcal{A}}_{0,+}^{+}$ corresponds to the antiunitary reflection about the $x$ axis; then we have \begin{equation} \hat{\mathcal{A}}_{0,+}^{+} h(k_x,k_y) (\hat{\mathcal{A}}_{0,+}^{+})^{-1} = h(k_x,-k_y). \end{equation} On the other hand, the chiral symmetry imposes the following condition \begin{equation} \hat{\mathcal{S}}h(k_x,k_y) \hat{\mathcal{S}}^{-1}= -h(k_x,k_{y}). \end{equation} Thus, if we regard $k_{x}\in S^{1}$ as a cyclic parameter, then at every $k_{x}$, $h(k_{x},k_{y})$ as a function of the Bloch momentum $k_{y}$ is actually a 1D system in class BDI. Thus, the topological classification in this case is the same as the one for topological pumping in a 1D system in class BDI described by a Hamiltonian $h'(k,t)$, with momentum $k$ and periodic time $t$.
This gives rise to a $\mathbb{Z}_{2}$ topological invariant, corresponding to whether or not the fermion parity changes after an adiabatic cycle \cite{Teo2010}, when the 1D system has an open boundary condition. Since the bulk is gapped at any $t$, such a fermion parity switch is allowed only when the boundary becomes gapless at some intermediate time $t$. Since our original Hamiltonian $h(k_x,k_y)$ is related to $h'(k,t)$ by replacing $k\leftrightarrow k_y$ and $t \leftrightarrow k_x$, a nontrivial phase for $h(k_x,k_y)$ implies the existence of a pair of counter propagating edge modes on the $x$ edge when we choose an open boundary condition along $y$. Let us now understand the purely crystalline classification $K' = \mathbb{Z}_2$. One can consider the edge Hamiltonian for a pair of counter propagating gapless modes on the edge parallel to $x$ as $H_{\rm edge} = k_{x}\sigma_z$, with $\hat{\mathcal{S}} = \sigma_x$ and $\hat{\mathcal{A}}_{0,+}^{+} = \hat{\mathcal{K}}$. This pair of gapless modes cannot be gapped by any mass term. However, if there exist two pairs of gapless modes, whose Hamiltonian can be written as $H_{\rm edge} = k_x \tau_0\sigma_z$, one can then add a mass term $m \tau_y\sigma_y$ to $H_{\rm edge}$ to gap it out. On the other hand, if the edge does not preserve the antiunitary symmetry given by $\hat{\mathcal{A}}_{0,+}^{+}$, then a mass term $m\sigma_y$ can be added to gap out a single pair of gapless modes, which implies that there are no intrinsic codimension-one boundary modes. Thus, $K'= \mathbb{Z}_2$, and $\mathcal{K}' =0$. Instead of intrinsic codimension-one boundary modes, the system supports intrinsic codimension-two boundary modes, implying that it is a second-order TI. If one creates a corner that is invariant under the reflection $x \to -x$, this corner will support a codimension-two zero mode, with a $\mathcal{K}'' = K'/K'' = \mathbb{Z}_{2}$ classification.
\begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{2d_distort} \caption{(a) Gapless modes at a reflection invariant edge. (b) Corner modes at a reflection invariant corner. The dashed line indicates the reflection (time-glide) plane.} \label{fig:2d_distort} \end{figure} An explicit Hamiltonian that realizes this phase can have the following form \begin{align} h(\boldsymbol{k},m) &= (m+2-\cos k_x \cos k_y)\tau_z + \sin k_x\tau_x\sigma_x \nonumber \\ &+ \sin k_y\tau_y + b\tau_z\sigma_z, \end{align} where $\tau_{x,y,z}$ and $\sigma_{x,y,z}$ are two sets of Pauli matrices, and the parameter $b$, which gaps out the $y$ edge, is numerically small. One can show that this Hamiltonian has the desired chiral and antiunitary reflection symmetries given by $\hat{\mathcal{S}} = \tau_x\sigma_z$ and $\hat{\mathcal{A}}_{0,+}^{+} = \hat{\mathcal{K}}$, respectively. When $-2<m<0$, there are counter propagating edge modes on each $x$ edge at momentum $k_{x}=0$. On the other hand, a corner, which is invariant under the reflection $x \to -x$, will bound a zero mode. These two different boundary conditions are illustrated in Fig.~\ref{fig:2d_distort}. The corresponding harmonically driven system has the following Hamiltonian \begin{align} H(\boldsymbol{k},t,m) &= (m+2 - \cos k_x - \cos k_y + b\sigma_z)\tau_z \nonumber \\ &+ (\sin k_x \tau_x\sigma_x - \sin k_y\tau_y)\cos(\omega t), \end{align} which has the chiral and antiunitary time-glide (antiunitary reflection together with half period time translation) symmetries, realized by $\hat{\mathcal{S}} = \tau_x\sigma_z$ and $\hat{\mathcal{A}}_{T/2,-}^{+} = \tau_z\hat{\mathcal{K}}$. With appropriately chosen boundary conditions, one can either have counter propagating anomalous Floquet gapless modes at the reflection symmetric edge (Fig.~\ref{fig:2d_distort}(a)), or a corner mode at $\epsilon_{\rm gap} = \omega/2$ at the reflection symmetric corner (Fig.~\ref{fig:2d_distort}(b)).
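The stated symmetry algebra of the static model can be verified with explicit matrices. The following sketch is only a sanity check, assuming a $\tau\otimes\sigma$ ordering of the tensor products and arbitrarily chosen parameter values; it confirms that $\hat{\mathcal{S}}=\tau_x\sigma_z$ anticommutes with $h(\boldsymbol{k},m)$ and that complex conjugation implements the antiunitary reflection $k_y\to-k_y$.

```python
import numpy as np

# Pauli matrices and the 2x2 identity
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1, -1]).astype(complex)

def h(kx, ky, m=-1.0, b=0.1):
    # Static 2D class-AIII model, mirroring the formula in the text
    return ((m + 2 - np.cos(kx) * np.cos(ky)) * np.kron(sz, s0)  # tau_z
            + np.sin(kx) * np.kron(sx, sx)                       # tau_x sigma_x
            + np.sin(ky) * np.kron(sy, s0)                       # tau_y
            + b * np.kron(sz, sz))                               # tau_z sigma_z

S = np.kron(sx, sz)  # chiral operator tau_x sigma_z, with S^2 = 1

rng = np.random.default_rng(0)
for kx, ky in rng.uniform(-np.pi, np.pi, size=(5, 2)):
    H = h(kx, ky)
    # Chiral symmetry: S h(k) S^{-1} = -h(k)
    assert np.allclose(S @ H @ S, -H)
    # Antiunitary reflection A = K: h(kx, ky)^* = h(kx, -ky)
    assert np.allclose(H.conj(), h(kx, -ky))
print("symmetries verified")
```

Both checks hold at every momentum, independently of the values of $m$ and $b$.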
\subsubsection{2D system in class AI with $\overline{\mathcal{U}}_{T/2,-}^{+}$} For 2D systems in class AI, with only spinless time-reversal symmetry $\hat{\mathcal{T}}^{2}=1$, the topological classification is trivial. However, with a unitary, either static or space-time, $d_{\parallel}=1$ antisymmetry realized by $\overline{\mathcal{U}}_{0,+}^{+}$ or $\overline{\mathcal{U}}_{T/2,-}^{+}$, the $K$ subgroup series is $0 \subseteq \mathbb{Z} \subseteq \mathbb{Z}$, as given in Table~\ref{tab:real_unitary_hoti_2}. Let us start by considering a Hamiltonian $h(k_x,k_y)$ with a static $d_{\parallel}=1$ antisymmetry, given by \begin{equation} \overline{\mathcal{U}}_{0,+}^{+}h(k_x,k_y)(\overline{\mathcal{U}}_{0,+}^{+})^{-1} = -h(-k_x,k_y), \end{equation} in addition to the spinless time-reversal symmetry. At the reflection symmetric momenta $k_x = 0, \pi$, the Hamiltonian as a function of $k_y$ reduces to a 1D Hamiltonian in class BDI, which has a $\mathbb{Z}$ winding number topological invariant. One can also understand the topological classification from the edge perspective. At a reflection invariant edge, the $x$ edge in this case, multiple pairs of counter propagating edge modes can exist. One can write the edge Hamiltonian as $H_{\rm edge} = k_{x}\Gamma_x + m\Gamma_m$, with a possible mass term of magnitude $m$. Here the matrices $\Gamma_x$ and $\Gamma_m$ anticommute with each other and square to the identity. Since the edge is reflection invariant, we have $[\Gamma_x, \overline{\mathcal{U}}_{0,+}^{+}]=0$ and $\{\Gamma_{m},\overline{\mathcal{U}}_{0,+}^{+}\} = 0$. Hence, we can simultaneously block diagonalize $\Gamma_x$ and $\overline{\mathcal{U}}_{0,+}^{+}$, and label the pairs of gapless modes in terms of the eigenvalues $\pm 1$ of $\overline{\mathcal{U}}_{0,+}^{+}$.
If we denote the number of pairs of gapless modes with opposite $\overline{\mathcal{U}}_{0,+}^{+}$ parity by $n_{\pm}$, then only $(n_{+} - n_{-}) \in \mathbb{Z}$ pairs of gapless modes are stable because the mass $m\Gamma_m$ gaps out gapless modes with opposite eigenvalues of $\overline{\mathcal{U}}_{0,+}^{+}$. These gapless modes are purely protected by the $d_{\parallel}=1$ antisymmetry, and will be completely gapped when the edge is not invariant under reflection, which implies $K' = K^{(0)} = \mathbb{Z}$. Indeed, we can assume there are $(n_{+} - n_{-})$ pairs of gapless modes which have positive parity under $\overline{\mathcal{U}}_{0,+}^{+}$. The time-reversal operator can be chosen as $\hat{\mathcal{T}}=\hat{\mathcal{K}}$, because $[\hat{\mathcal{T}},\overline{\mathcal{U}}_{0,+}^{+}] = 0$. We will write $\Gamma_{x} = \mathbb{I}_{(n_{+} - n_{-})}\otimes\sigma_y$, where $\mathbb{I}_{n}$ denotes the identity matrix of dimension $n$. When the edge is deformed away symmetrically around a corner at $x=0$, mass terms $m_{1}(x)\sigma_x + m_{2}(x)\sigma_z$, with $m_{i}(x) = -m_{i}(-x)$, $i = 1,2$, can be generated. This gives rise to $(n_{+} - n_{-})$ zero energy corner modes, corresponding to $\mathcal{K}' = K'/K'' = \mathbb{Z}$. An explicit Hamiltonian for $h(k_{x},k_{y})$ can have the following form \begin{align} h(\boldsymbol{k},m) &= (m+2-\cos k_x - \cos k_y)\tau_z + \sin k_x \tau_x\sigma_y \nonumber \\ &+ \sin k_y\tau_y + b\tau_z\sigma_z \end{align} with $\hat{\mathcal{T}}=\hat{\mathcal{K}}$ and $\overline{\mathcal{U}}_{0,+}^{+} = \tau_x$, and numerically small $b$. When $-2<m<0$, there exist counter propagating gapless modes on the $x$ edges when the system has an open boundary condition in the $y$ direction. 
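The time-reversal symmetry and the unitary reflection antisymmetry of this explicit model can likewise be checked numerically. The sketch below is a sanity check only; the $\tau\otimes\sigma$ ordering and the parameter values are our own choices.

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1, -1]).astype(complex)

def h(kx, ky, m=-1.0, b=0.1):
    # Static 2D class-AI model with a d_parallel = 1 unitary antisymmetry
    return ((m + 2 - np.cos(kx) - np.cos(ky)) * np.kron(sz, s0)  # tau_z
            + np.sin(kx) * np.kron(sx, sy)                       # tau_x sigma_y
            + np.sin(ky) * np.kron(sy, s0)                       # tau_y
            + b * np.kron(sz, sz))                               # tau_z sigma_z

U = np.kron(sx, s0)  # unitary reflection antisymmetry tau_x

rng = np.random.default_rng(1)
for kx, ky in rng.uniform(-np.pi, np.pi, size=(5, 2)):
    H = h(kx, ky)
    # Spinless time reversal T = K: h(k)^* = h(-k)
    assert np.allclose(H.conj(), h(-kx, -ky))
    # Antisymmetry: tau_x h(kx, ky) tau_x = -h(-kx, ky)
    assert np.allclose(U @ H @ U, -h(-kx, ky))
print("symmetries verified")
```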
The corresponding harmonically driven Hamiltonian with a unitary space-time antisymmetry has the following form \begin{align} H(\boldsymbol{k},t,m) &= (m+2-\cos k_x - \cos k_y + b\sigma_z)\tau_z \nonumber \\ &+ (\sin k_x \tau_x\sigma_y - \sin k_y \tau_y)\cos(\omega t), \end{align} where the time-reversal symmetry and the unitary space-time antisymmetry are realized by $\hat{\mathcal{T}}=\hat{\mathcal{K}}$ and $\overline{\mathcal{U}}_{T/2,-}^{+} = \tau_{y}$, respectively. Gapless Floquet edge modes, or Floquet corner modes at $\epsilon_{\rm gap}=\omega/2$, can be created, with appropriately chosen boundary conditions, when both $-2<(m-\omega/2)<0$ and $(m+\omega/2)>0$ are satisfied. \subsection{Third-order phase in $d_{\parallel} = 2$ family} When a Floquet system respects a $d_{\parallel}=2$ space-time symmetry/antisymmetry, it can be at most a third-order topological phase, because $\min(n+1,d_{\parallel}+1)\leq 3$. In the following, we construct a model Hamiltonian for a third-order TI representing such systems. \subsubsection*{3D system in class AIII with $\hat{\mathcal{A}}_{T/2,-}^{+}$} It is known that for 3D systems in class AIII without any additional spatial symmetries, the topological classification is $\mathbb{Z}$ \cite{Chiu2016}, which counts the number of surface Dirac cones at the boundary of the 3D insulating bulk. When there exists an antiunitary two-fold rotation symmetry, either $\hat{\mathcal{A}}_{0,+}^{+}$ or $\hat{\mathcal{A}}_{T/2,-}^{+}$, the topological invariants are given by the $K$ subgroup series $0 \subseteq \mathbb{Z}_2 \subseteq \mathbb{Z}_{2} \subseteq \mathbb{Z}_{2}$ in Table~\ref{tab:complex_antiunitary_hoti_3}. Indeed, because of the additional symmetry realized by $\hat{\mathcal{A}}_{0,+}^{+}$ or $\hat{\mathcal{A}}_{T/2,-}^{+}$, the symmetry invariant boundary surface is able to support gapless Dirac cone pairs.
As will be shown in the following, it turns out that the number of such pairs is at most one, which gives rise to the $K^{(0)} = \mathbb{Z}_{2}$ topological invariant. Let us first look at the static antiunitary two-fold rotation symmetry, realized by $\hat{\mathcal{A}}_{0,+}^{+}$, which transforms a static Bloch Hamiltonian as \begin{equation} \hat{\mathcal{A}}_{0,+}^{+} h(k_x,k_y,k_z) (\hat{\mathcal{A}}_{0,+}^{+})^{-1} = h(k_x,k_y,-k_z). \end{equation} With an appropriate basis, one can write $\hat{S} = \tau_z$, and $\hat{\mathcal{A}}_{0,+}^{+} = \hat{\mathcal{K}}$. At the symmetry invariant boundary surface perpendicular to $z$, while keeping the periodic boundary condition in both $x$ and $y$ directions, a single Dirac cone pair with a dispersion $h_{\rm surf} = \tau_x(\sigma_{x}k_x + \sigma_zk_y)$ can exist. This Dirac cone pair cannot be gapped by an additional mass term preserving the $\hat{\mathcal{A}}_{0,+}^{+}$ symmetry, which requires the mass term to be real. However, when there are two pairs of Dirac cones, described by the surface Hamiltonian $h_{\rm surf} = \mu_0\tau_x(\sigma_x k_x + \sigma_z k_y)$, with $\mu_0$ a two-by-two identity matrix for another spinor degree of freedom (for which we also introduce a new set of Pauli matrices $\mu_{x,y,z}$), a mass term which couples the two pairs of Dirac cones and gaps them out can be chosen as $\mu_{y}\sigma_x\tau_y$, which preserves the antiunitary two-fold symmetry. Hence, we have $K^{(0)} = \mathbb{Z}_{2}$. When the surface is tilted away from the rotation invariant direction, two mutually anticommuting rotation-symmetry-breaking mass terms exist, and can be written as $m_{1}\tau_y\sigma_0 + m_{2}\tau_{x}\sigma_y$, in which $m_{1,2}$ must change signs under the two-fold rotation. Hence, boundaries of codimension up to $\min(n,d_{\parallel}) = 2$ are gapped. This leads to $\mathcal{K}'' = \mathcal{K}' = 0$, which implies $K'' = K' = K^{(0)} = \mathbb{Z}_{2}$.
Moreover, at the symmetry invariant corner, these mass terms must vanish, and thus the system can host a zero-energy corner mode. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{3d_distort} \caption{(a) Gapless surface mode (Dirac cone) on the rotation invariant surface. (b) Corner mode at the rotation invariant corner. The dashed line indicates the two-fold rotation (time-screw) axis. } \label{fig:3d_distort} \end{figure} One can write down the following concrete model Hamiltonian with eight bands, \begin{align} &h(\boldsymbol{k},m) = (m+3 - \cos k_x -\cos k_y - \cos k_z)\tau_z \nonumber \\ &+ \sin k_x \tau_x\sigma_x + \sin k_y \tau_x\sigma_z + \sin k_z \tau_y \nonumber \\ &+ b_1\mu_x\tau_z\sigma_z + b_2\mu_x\tau_z\sigma_x, \end{align} where the parameters $b_1$ and $b_2$ are numerically small. Here, the chiral and the antiunitary two-fold rotation symmetries are realized by $\hat{\mathcal{S}} = \mu_y\tau_x\sigma_y$ and $\hat{\mathcal{A}}_{0,+}^{+}=\hat{\mathcal{K}}$, respectively. This Hamiltonian supports a single pair of Dirac cones on the boundary surfaces perpendicular to the $z$ axis, at $k_{x}=k_{y}=0$ for $-2<m<0$, as illustrated in Fig.~\ref{fig:3d_distort}(a). When the surface perpendicular to the rotation axis gets deformed from Fig.~\ref{fig:3d_distort}(a) to (b), the rotation invariant corner then bounds a codimension-three boundary mode. The corresponding harmonically driven model has the following Hamiltonian \begin{align} &H(\boldsymbol{k},t,m) = (m+3-\cos k_x - \cos k_y - \cos k_z \nonumber \\ &+ b_1\mu_x\sigma_z + b_2\mu_x\sigma_x)\tau_z \nonumber \\ &+ \left[(\sin k_x \sigma_x + \sin k_y \sigma_y)\tau_x - \sin k_z \tau_y\right]\cos(\omega t). \end{align} Here, the chiral symmetry is realized by $\hat{\mathcal{S}}=\mu_y\tau_x\sigma_y$, while the antiunitary two-fold time-screw symmetry is realized by $\hat{\mathcal{A}}_{T/2,-}^{+}=\tau_z\hat{\mathcal{K}}$.
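The symmetry algebra of the static eight-band model above can be confirmed with explicit matrices. The sketch below is a sanity check only, assuming a $\mu\otimes\tau\otimes\sigma$ ordering of the tensor factors and arbitrarily chosen small values for $b_{1,2}$.

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1, -1]).astype(complex)
K3 = lambda a, b, c: np.kron(a, np.kron(b, c))  # mu (x) tau (x) sigma (our choice)

def h(kx, ky, kz, m=-1.0, b1=0.1, b2=0.07):
    # Static eight-band 3D class-AIII model
    return ((m + 3 - np.cos(kx) - np.cos(ky) - np.cos(kz)) * K3(s0, sz, s0)
            + np.sin(kx) * K3(s0, sx, sx)   # tau_x sigma_x
            + np.sin(ky) * K3(s0, sx, sz)   # tau_x sigma_z
            + np.sin(kz) * K3(s0, sy, s0)   # tau_y
            + b1 * K3(sx, sz, sz)           # mu_x tau_z sigma_z
            + b2 * K3(sx, sz, sx))          # mu_x tau_z sigma_x

S = K3(sy, sx, sy)  # chiral operator mu_y tau_x sigma_y, with S^2 = 1

rng = np.random.default_rng(2)
for kx, ky, kz in rng.uniform(-np.pi, np.pi, size=(5, 3)):
    H = h(kx, ky, kz)
    assert np.allclose(S @ H @ S, -H)             # chiral symmetry
    assert np.allclose(H.conj(), h(kx, ky, -kz))  # A = K: two-fold rotation
print("symmetries verified")
```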
This Hamiltonian is able to support a pair of Dirac cones on the boundary surface perpendicular to the $z$ direction inside the bulk quasienergy gap around $\epsilon_{\rm gap} = \omega/2$ (Fig.~\ref{fig:3d_distort}(a)), as well as a codimension-three mode with quasienergy $\omega/2$ localized at the rotation invariant corner of the system (Fig.~\ref{fig:3d_distort}(b)). \subsection{Higher-order topological phases in $d_{\parallel} = 3$ family} Unlike the symmetries discussed previously, the $d_{\parallel} = 3$ symmetry (antisymmetry) operator $\hat{\mathcal{P}}$ ($\overline{\mathcal{P}}$) does not leave any point invariant in our three dimensional world. In particular, since the surface of a 3D system naturally breaks the inversion symmetry, the topological classification of the gapless surface modes (if they exist) should be the same as the 3D tenfold classification disregarding the crystalline symmetry, in the same symmetry class. Hence, we have the boundary $K$ group \begin{equation} \mathcal{K}' = K^{(0)}/K' = \begin{cases} K_{\rm TF} & K_{\rm TF} \subseteq K^{(0)}, \\ 0 & \mathrm{otherwise}, \end{cases} \end{equation} where $K_{\rm TF}$ is the corresponding $K$ group for the tenfold-way topological phase, with only nonspatial symmetries considered. However, inversion related pairs of boundaries with codimension larger than one are able to host gapless modes, which cannot be gapped out without breaking the symmetry (antisymmetry) realized by $\hat{\mathcal{P}}$ ($\overline{\mathcal{P}}$). This can be understood by simply considering the surface Hamiltonian $h(\boldsymbol{p}_{\parallel},\hat{\boldsymbol{n}})$ with $\hat{\boldsymbol{n}}\in S^{2}$. Here $\boldsymbol{p}_{\parallel}$ are the momenta perpendicular to $\hat{\boldsymbol{n}}$. Let us assume there are $n$ spatially dependent mass terms $m_{l}(\hat{\boldsymbol{n}})M_{l}$, with $l=1,\dots, n$, that can gap out the surface Hamiltonian $h(\boldsymbol{p}_{\parallel},\hat{\boldsymbol{n}})$.
The inversion symmetry/antisymmetry restricts $m_{l}(\hat{\boldsymbol{n}})=-m_{l}(-\hat{\boldsymbol{n}})$ (see Appendix~\ref{app:order_mass_terms} for details), which implies that there must exist a 1D inversion symmetric loop $S^{1}\subseteq S^{2}$, such that $m_{l}(\hat{\boldsymbol{n}}) = 0$ for $\hat{\boldsymbol{n}} \in S^{1}$. This 1D loop can be different for different $l$, but all such loops preserve the inversion symmetry and cannot be removed. Hence, for $n=1$, we have a 1D massless great circle, whereas for $n=2$ we have a pair of antipodal massless points. The 1D or 0D massless regions are irremovable topological defects which are able to host gapless modes. Since the inversion operation maps one point to another point, the stability of the gapless modes on the massless 1D or 0D region must be protected by the nonspatial symmetries alone \cite{Khalaf2018}. Hence, the codimension-$k$ gapless modes are stable only when the $(4-k)$-dimensional system has a nontrivial tenfold classification, namely $K_{\rm TF}\neq 0$. Moreover, the number of these gapless modes is at most one \cite{Khalaf2018}. Indeed, a system consisting of a pair of inversion symmetric systems with protected gapless modes can be deformed into a system with completely gapped boundaries without breaking the inversion symmetry. This statement can be understood by considering a pair of inversion symmetric surface Hamiltonians \begin{equation} h'(\boldsymbol{p}_\parallel,\hat{\boldsymbol{n}})=\left(\begin{array}{cc} h(\boldsymbol{p}_\parallel,\hat{\boldsymbol{n}}) & 0\\ 0 & \pm h(\boldsymbol{p}_\parallel,-\hat{\boldsymbol{n}}) \end{array}\right), \end{equation} where the $+$ ($-$) sign is taken when we have an inversion symmetry (antisymmetry).
In this situation, $h'(\boldsymbol{p}_\parallel,\hat{\boldsymbol{n}})$ has an inversion symmetry or antisymmetry realized by \begin{equation} \hat{\mathcal{P}}'=\left(\begin{array}{cc} 0 & \hat{\mathcal{P}}\\ \hat{\mathcal{P}} & 0 \end{array}\right)\ \mathrm{or}\ \overline{\mathcal{P}}'=\left(\begin{array}{cc} 0 & \overline{\mathcal{P}}\\ \overline{\mathcal{P}} & 0 \end{array}\right). \end{equation} Now one can introduce the mass terms \begin{equation} \left(\begin{array}{cc} m_{l}(\hat{\boldsymbol{n}})M_{l} & 0\\ 0 & -m_{l}(-\hat{\boldsymbol{n}})M_{l} \end{array}\right). \end{equation} In this case $m_l(\hat{\boldsymbol{n}})$ can be nonzero for all $\hat{\boldsymbol{n}} \in S^{2}$, and therefore $h'(\boldsymbol{p}_{\parallel},\hat{\boldsymbol{n}})$ can always be gapped. Hence, we obtain the boundary $K$ groups $\mathcal{K}^{(k)}$ which classify boundary modes of codimension $k=2$ and $3$ as \begin{equation} \mathcal{K}^{(k)} = K^{(k-1)}/K^{(k)} = \begin{cases} \mathbb{Z}_{2} & \mathbb{Z}_2 \subseteq K_{\rm TF} \ \mathrm{in}\ (4-k)\mathrm{D}, \\ 0 & \mathrm{otherwise}. \end{cases} \end{equation} Having understood the general structure of the $K$ subgroup series, let us in the following construct model Hamiltonians for Floquet HOTI/SCs in class DIII with a unitary space-time symmetry realized by $\hat{\mathcal{U}}_{T/2,++}^+$ ($d_{\parallel}=3$), as an example. From Table~\ref{tab:real_unitary_hoti_3}, we see that the $K$ subgroup series is $4\mathbb{Z} \subseteq 2\mathbb{Z} \subseteq \mathbb{Z} \subseteq \mathbb{Z}^2$, which implies we can have a first-order phase classified by $\mathcal{K}' = \mathbb{Z}^2/\mathbb{Z} = \mathbb{Z}$, a second-order phase classified by $\mathcal{K}'' = \mathbb{Z}/2\mathbb{Z} = \mathbb{Z}_2 $, and a third-order phase classified by $\mathcal{K}^{(3)} = 2\mathbb{Z}/4\mathbb{Z} = \mathbb{Z}_2$. \subsubsection{First-order topological phase} Under the operator $\hat{\mathcal{U}}_{T/2,++}^+$, no points on the surface of a 3D bulk are left invariant.
Hence, the existence of codimension-one boundary modes is due to the protection from the nonspatial symmetries alone. A tight-binding model realizing such a phase can be constructed from its static counterpart, namely, a model in class DIII with a static inversion symmetry realized by $\hat{\mathcal{U}}_{0,+-}^{+}$. The static model can have the following Hamiltonian \begin{align} &h_{\pm}(\boldsymbol{k},m) = (m+3-\cos k_x - \cos k_y - \cos k_z)\tau_z \nonumber \\ & \pm (\sin k_x \sigma_x + \sin k_y \sigma_y + \sin k_z \sigma_z)\tau_x, \end{align} where the time-reversal, particle-hole, chiral and the inversion symmetries are realized by $\hat{\mathcal{T}} = -i\sigma_y \hat{\mathcal{K}}$, $\hat{\mathcal{C}} = \sigma_y \tau_y \hat{\mathcal{K}}$, $\hat{\mathcal{S}} = \tau_y$, and $\hat{\mathcal{U}}_{0,+-}^{+} = \tau_z$, respectively. When $-2<m<0$, this model hosts a gapless Dirac cone with chirality $\pm 1$ on any surface of the 3D bulk. Hence, the Hamiltonian for the corresponding Floquet first-order topological phase with a space-time symmetry can be written as \begin{align} &H_{\pm}(\boldsymbol{k},t,m) = (m+3-\cos k_x - \cos k_y - \cos k_z)\tau_z \nonumber \\ & \pm (\sin k_x \sigma_x + \sin k_y \sigma_y + \sin k_z \sigma_z)\tau_x\cos(\omega t), \end{align} where the space-time symmetry is realized by $\hat{\mathcal{U}}_{T/2,++}^{+} = \mathbb{I}$, and the nonspatial symmetry operators are the same as in the static model. When $-2<(m-\omega/2)<0$ and $(m+\omega/2)>0$ are satisfied, $H_{\pm}(\boldsymbol{k},t,m)$ will host a gapless Dirac cone at quasienergy $\omega/2$ with chirality $\pm 1$. \subsubsection{Second-order topological phase} Similar to the construction of the first-order phase, let us start from the corresponding static model. A static second-order phase can be obtained by coupling $h_{+}(\boldsymbol{k},m_{1})$ and $h_{-}(\boldsymbol{k},m_{2})$.
When both $m_1$ and $m_{2}$ are within the interval $(-2,0)$, the topological invariant for the codimension-one boundary modes vanishes and there exists a mass term on the surface which gaps out all boundary modes of codimension one. Explicitly, one can define the following Hamiltonian \begin{equation} h(\boldsymbol{k},m_{1},m_{2})=\left(\begin{array}{cc} h_{+}(\boldsymbol{k},m_{1}) & 0\\ 0 & h_{-}(\boldsymbol{k},m_{2}) \end{array}\right), \end{equation} and introduce a set of Pauli matrices $\mu_{x,y,z}$ for these newly introduced spinor degrees of freedom. There is only one mass term $M_{1} = \tau_x \mu_x$, which satisfies $\{M_1, h(\boldsymbol{k},m_{1},m_{2})\}=0$, $\{M_1,\hat{\mathcal{S}}\}=0$, $\{M_1,\hat{\mathcal{C}}\}=0$, and $[M_1,\hat{\mathcal{T}}]=0$. According to the discussion on the relation between mass terms and the codimension of boundary modes in Sec.~\ref{sec:symmetry_mass_terms}, as well as Appendix~\ref{app:order_mass_terms}, one can add a perturbation \begin{equation} V = b_{1}^{(1)}\sigma_x\tau_z\mu_x+b_{2}^{(1)}\sigma_y\tau_z\mu_x + b_{3}^{(1)}\sigma_z\tau_z\mu_x \end{equation} that preserves all symmetries, to $h(\boldsymbol{k},m_{1},m_{2})$. This perturbation gaps out all codimension-one surfaces and leaves a codimension-two inversion invariant loop gapless, giving rise to a second-order topological phase. \begin{figure}[t] \centering \includegraphics[width=0.40\textwidth]{inversion_hoti} \caption{Spectral weight (darkness) of the Floquet boundary mode at $\omega/2$, for a system cut into an approximate sphere geometry of radius 10 lattice spacings. (a) Codimension-two boundary mode, computed with parameters $m_{1} = m_{2} = 0.5$, $\omega =3$, $b_{1}^{(1)} = b_{2}^{(1)} = b_{3}^{(1)} = 0.3$.
(b) Codimension-three boundary mode, computed with $m_{1} = m_{2} = m_{3} = m_{4} = 0.5$, $\omega =3$, $b_{1}^{(1)} = b_{2}^{(1)} = b_{3}^{(1)} = b_{3}^{(2)} = -b_{1}^{(2)} = -b_{2}^{(2)}=0.3$.} \label{fig:inversion_hoti} \end{figure} The Floquet second-order topological phase can therefore be constructed by adding the perturbation $V$ to the following Hamiltonian \begin{equation} H(\boldsymbol{k},t, m_{1},m_{2})=\left(\begin{array}{cc} H_{+}(\boldsymbol{k},t,m_{1}) & 0\\ 0 & H_{-}(\boldsymbol{k},t,m_{2}) \end{array}\right). \end{equation} In Fig.~\ref{fig:inversion_hoti}(a), we show the spectral weight of the codimension-two Floquet boundary mode at $\omega/2$, when the system is cut into an approximate sphere geometry. This boundary mode is localized on an inversion invariant loop. \subsubsection{Third-order topological phase} To construct a model for the third-order topological phase, one needs to find two anticommuting masses $M_{1},M_{2}$, which satisfy the same conditions discussed previously. This can be realized by introducing yet another set of spinor degrees of freedom, as one couples two copies of $h(\boldsymbol{k},m_{1},m_{2})$. Explicitly, one can take the following Hamiltonian \begin{align} &\tilde{h}(\boldsymbol{k},m_{1},m_{2},m_3,m_4) \nonumber \\ &=\left(\begin{array}{cc} h(\boldsymbol{k},m_{1},m_2) & 0\\ 0 & h(\boldsymbol{k},m_{3},m_4) \end{array}\right), \end{align} as well as the corresponding Pauli matrices $\tilde{\mu}_{x,y,z}$ for these spinor degrees of freedom. Thus, two anticommuting mass terms $M_{1} =\tau_x\mu_x$ and $M_{2} = \tau_x\mu_y\tilde{\mu}_{y}$ can be found.
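The (anti)commutation relations quoted for the mass term $M_1$, as well as the anticommutation of $M_1$ and $M_2$ in the doubled model, can be checked with explicit matrices. The sketch below is a sanity check only; the $\tilde{\mu}\otimes\mu\otimes\tau\otimes\sigma$ ordering of the tensor factors, the momentum values, and the equal-mass choice $m_1=m_2$ (matching the parameters used in the figure, for which the anticommutation with the full lattice Hamiltonian holds exactly) are our own assumptions.

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1, -1]).astype(complex)
K3 = lambda a, b, c: np.kron(a, np.kron(b, c))   # mu (x) tau (x) sigma
K4 = lambda a, b, c, d: np.kron(a, K3(b, c, d))  # mu~ (x) mu (x) tau (x) sigma

def h_pm(k, m, sign):
    # Four-band blocks h_pm(k, m) of the static class-DIII model (tau (x) sigma)
    kx, ky, kz = k
    return ((m + 3 - np.cos(kx) - np.cos(ky) - np.cos(kz)) * np.kron(sz, s0)
            + sign * np.kron(sx, np.sin(kx)*sx + np.sin(ky)*sy + np.sin(kz)*sz))

def h(k, m1, m2):
    z = np.zeros((4, 4), dtype=complex)
    return np.block([[h_pm(k, m1, +1), z], [z, h_pm(k, m2, -1)]])

# Mass term and symmetry operators of the eight-band model
M1 = K3(sx, sx, s0)        # tau_x mu_x
S = K3(s0, sy, s0)         # chiral: tau_y
UC = K3(s0, sy, sy)        # unitary part of C = sigma_y tau_y K
UT = -1j * K3(s0, s0, sy)  # unitary part of T = -i sigma_y K

assert np.allclose(M1 @ S, -S @ M1)           # {M1, S} = 0
assert np.allclose(M1 @ UC, -UC @ M1.conj())  # {M1, C} = 0 (C antiunitary)
assert np.allclose(M1 @ UT, UT @ M1.conj())   # [M1, T] = 0 (T antiunitary)
H = h((0.4, -0.9, 1.3), 0.5, 0.5)             # equal masses, as in the figure
assert np.allclose(M1 @ H, -H @ M1)           # {M1, h} = 0

# In the 16-band model the two masses anticommute with each other
M1t = K4(s0, sx, sx, s0)  # tau_x mu_x
M2t = K4(sy, sy, sx, s0)  # tau_x mu_y mu~_y
assert np.allclose(M1t @ M2t, -M2t @ M1t)
print("mass-term algebra verified")
```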
Therefore, one can introduce the symmetry preserving perturbation \begin{align} \tilde{V} &= (b_{1}^{(1)}\sigma_x+b_{2}^{(1)}\mu_x + b_{3}^{(1)}\sigma_z)\tau_z\mu_x \nonumber \\ &+(b_{1}^{(2)}\sigma_x+b_{2}^{(2)}\mu_x + b_{3}^{(2)}\sigma_z)\tau_z\mu_y\tilde{\mu}_{y}, \end{align} which in general gaps out all boundary modes except at two antipodal points, at which codimension-three modes can exist. The Floquet version of such a third-order topological phase is constructed by adding the perturbation $\tilde{V}$ to the following periodically driven Hamiltonian \begin{align} &\tilde{H}(\boldsymbol{k},t,m_{1},m_{2},m_3,m_4) \nonumber \\ &=\left(\begin{array}{cc} H(\boldsymbol{k},t,m_{1},m_2) & 0\\ 0 & H(\boldsymbol{k},t,m_{3},m_4) \end{array}\right). \end{align} In Fig.~\ref{fig:inversion_hoti}(b), the spectral weight of the zero-dimensional (codimension-three) Floquet mode at quasienergy $\omega/2$ is shown in a system with an approximate sphere geometry. The other zero-dimensional mode is located at the antipodal point. \section{Conclusions \label{sec:conclusion}} In this work, we have completed the classification of Floquet HOTI/SCs with an order-two space-time symmetry/antisymmetry. By introducing a hermitian map, we are able to map unitary loops into hermitian matrices, and thus define bulk $K$ groups as well as $K$ subgroup series for unitary loops. In particular, we show that for every order-two nontrivial space-time (anti)unitary symmetry/antisymmetry involving a half-period time translation, there always exists a unique order-two static spatial (anti)unitary symmetry/antisymmetry, such that the two symmetries/antisymmetries share the same $K$ group, as well as the same subgroup series, and thus have the same topological classification.
Further, by exploiting the frequency-domain formulation, we introduce a general recipe for constructing tight-binding model Hamiltonians for Floquet HOTI/SCs, which provides a more intuitive way of understanding the topological classification table. It is also worth mentioning that although in this work we only classify Floquet HOTI/SCs with an order-two space-time symmetry/antisymmetry, the hermitian map introduced here can also be used to map the classification of unitary loops involving more complicated space-time symmetries to the classification of Hamiltonians with other point group symmetries. Similarly, the frequency-domain formulation and the recipe for constructing Floquet HOTI/SCs should also work with some modifications. In this sense, our approach can be more general than what we have shown in this work. Finally, we comment on one possible experimental realization of Floquet HOTI/SCs. As lattice vibrations naturally break some spatial symmetries instantaneously, while preserving certain space-time symmetries, one way to engineer a Floquet HOTI/SC may involve exciting a particular phonon mode with a desired space-time symmetry, which is investigated in Ref.~\cite{Swati2019}. \acknowledgments Y.P. acknowledges support from the startup fund from California State University, Northridge, as well as the IQIM, an NSF physics frontier center funded in part by the Moore Foundation, and the support from the Walter Burke Institute for Theoretical Physics at Caltech. Y.P. is grateful for helpful discussions with Gil Refael at Caltech, and with Luka Trifunovic.
\subsubsection*{Our Contributions} We develop logics that address more than one shortcoming of $\text{LTL}$ at a time. See Figure~\ref{fig:logics} for an overview. \begin{wrapfigure}{R}{.45\textwidth} \centering \vspace{-.4cm} \begin{tikzpicture}[thick] \draw[rounded corners,fill=black!10,draw=white] (-3.3, .5) |- (.9,2.25) |- (3.1,3.5) |- (0,.5) -- cycle; \node[align=center] (ltl) at (0,0.8) {\text{LTL}}; \begin{scope}[shift={(0,1.75)}] \node[align=center] (rltl) at (-2.25,0) {\text{rLTL}(\ensuremath{\Boxdot, \Diamonddot})}; \node[align=center] (promptltl) at (0,0) {\text{Prompt-LTL}}; \node[align=center] (ldl) at (2,0) {\text{LDL}}; \end{scope} \begin{scope}[shift={(0,3)}] \node[align=center] (rpromptltl) at (-2.1,0) {\text{rPrompt-LTL}}; \node[align=center] (rldl) at (0,0) {\text{rLDL}}; \node[align=center] (promptldl) at (2,0) {\text{Prompt-LDL}}; \end{scope} \node[align=center] (rpromptldl) at (0,4.2) {\text{rPrompt-LDL}}; \path[-stealth,] (ltl) edge[dashed] (rltl) edge[dashed] (promptltl) edge[dashed] (ldl) (rltl) edge[out=90,in=-90] (rpromptltl) edge[out=30,in=-90] (rldl) (promptltl) edge[out=150,in=-90] (rpromptltl) edge[dashed,out=30,in=-90] (promptldl) (ldl) edge[out=135,in=-90] (rldl) edge[dashed] (promptldl) (rpromptltl) edge[out=30,in=-90] (rpromptldl) (rldl) edge (rpromptldl) (promptldl) edge[out=150,in=-90] (rpromptldl); \end{tikzpicture} \caption{The logics studied in this work. Existing logics and influences are marked gray with dashed arrows.} \label{fig:logics} \end{wrapfigure} In Section~\ref{sec-rprompt}, we ``robustify'' $\text{Prompt-LTL}$. More precisely, we introduce a novel logic, named $\text{rPrompt-LTL}$, by extending the five-valued semantics from robust $\text{LTL}$ to $\text{Prompt-LTL}$. Our main result here shows that $\text{rPrompt-LTL}$ retains the exponential compilation property. 
Then, in Section~\ref{sec-rldl}, we ``robustify'' $\text{LDL}$: we introduce a novel logic, named $\text{rLDL}$, by lifting the five-valued semantics of robust $\text{LTL}$ to $\text{LDL}$. Our main result shows that $\text{rLDL}$ also retains the exponential compilation property. Hence, one can indeed combine any two of the three extensions of $\text{LTL}$ while still preserving the desirable algorithmic properties of $\text{LTL}$. In particular, let us stress again that all highly sophisticated algorithmic backends developed for $\text{LTL}$ are applicable to these novel logics as well, e.g., we show that the verification problem and the synthesis problem for each of these logics is solvable without an (asymptotic) increase in complexity. Tabuada and Neider gave two proofs showing that robust $\text{LTL}$ has the exponential compilation property. The first one presented a translation of robust $\text{LTL}$ into equivalent Büchi automata of exponential size while the second one is based on a polynomial translation of robust $\text{LTL}$ into (standard) $\text{LTL}$, which is known to be translatable into equivalent Büchi automata of exponential size. We refer to those two approaches as the \emph{direct} approach and the \emph{reduction-based} approach. To obtain our results mentioned above, we need to generalize both. To prove the exponential compilation property for $\text{rLDL}$, we generalize the direct approach by exhibiting a direct translation of $\text{rLDL}$ into Büchi automata via alternating automata. In contrast, to prove the exponential compilation property for $\text{rPrompt-LTL}$, we present a generalization of the reduction-based approach translating $\text{rPrompt-LTL}$ into equivalent $\text{Prompt-LTL}$ formulas of linear size, which have the exponential compilation property. Finally, in Section~\ref{sec-towardsrpldl}, we discuss the combination of all three aspects. 
Recall that we present a direct translation to automata for $\text{rLDL}$ and a reduction-based one for $\text{rPrompt-LTL}$. For reasons we discuss in Section~\ref{sec-towardsrpldl}, it is challenging to develop a reduction from $\text{rLDL}$ to $\text{LDL}$ or a direct translation for $\text{rPrompt-LTL}$ that witness the exponential compilation property. Hence, both approaches seem inadequate to deal with the combination of all three extensions. Ultimately, we leave the question of whether the logic combining all three aspects has the exponential compilation property for future work. Proofs omitted due to space restrictions can be found in the full version~\cite{fullversion}. \section{Introduction} \label{sec-intro} \input{content/intro} \section{Preliminaries} \label{sec-prel} \input{content/prel} \subsection{Robust Linear Temporal Logic} \label{subsec-briefrltl} \input{content/prel-rltl} \subsection{Linear Dynamic Logic} \label{subsec-briefldl} \input{content/prel-ldl} \subsection{Prompt Linear Temporal Logic} \label{subsec-briefprompt} \input{content/prel-prompt} \section{Robust and Prompt Linear Temporal Logic} \label{sec-rprompt} \input{content/rprompt} \subsection{Model Checking} \label{subsec-rpromptresults-mc} \input{content/rprompt-mc} \subsection{Synthesis} \label{subsec-rpromptresults-synt} \input{content/rprompt-synt} \section{Robust Linear Dynamic Logic} \label{sec-rldl} \input{content/rldl} \subsection{Expressiveness} \label{subsec-rldl-expressiveness} \input{content/rldl-expressiveness} \subsection{Model Checking and Synthesis} \label{subsec-rldl-modelchecking} \input{content/rldl-mcsynt} \section{Towards Robust and Prompt Linear Dynamic Logic} \label{sec-towardsrpldl} \input{content/towardsrpromptldl} \section{Conclusion} \label{sec-conc} \input{content/conclusion} \bibliographystyle{eptcs}
\section{Introduction} Black holes have played, and continue to play, a central role in fundamental aspects of physics. Cutting-edge advances in the understanding of quantum gravitational aspects of string theory have been possible thanks to their study \cite{Kallosh:1992ii, Ferrara:1995ih, Strominger:1996sh, Callan:1996dv, Ferrara:1996dd, Ferrara:1996um, Sen:2005wa, Mathur:2005zp, Papadodimas:2013jku}. In recent years some attention has been dedicated to the problem of whether extremal black holes, those with vanishing temperature, should decay or remain as stable states. The question arises because, in many situations, black hole remnants are regarded as problematic. For example, it can be argued that a theory of gravity with global symmetries contains an infinite number of remnants below a certain mass scale, which is usually considered to be inconsistent \cite{Banks:2010zn, Susskind:1995da}. On the other hand, stable black holes of theories with local symmetries are not infinitely degenerate (below a certain mass scale), so it is more difficult to produce arguments against their existence. Still, they are associated with an infinite tower of charged states not protected by a symmetry principle, and it was conjectured in \cite{ArkaniHamed:2006dz} that a finiteness criterion should be applied such that those are unstable. Hence, we are led to consider the decay of extremal black holes. The simplest of those is the Reissner-Nordstr\"om one, which has $M=Q$ in appropriate units. It is immediate to see that the decay of this black hole into two separate states with $q_1+q_2=Q$, $m_1+m_2 \leq M$ requires $q_i/m_i \geq 1$ for at least one of them\footnote{This is a necessary but not sufficient condition for the process to occur, which seems thermodynamically disfavored. Since these black holes have zero temperature, standard Hawking radiation does not take place.}.
These bounds can be saturated in the special case in which there is no binding energy between the products, while a strict inequality is expected in generic situations. The weak gravity conjecture proposes that the spectrum of a quantum theory of gravity must be such that extremal black holes can decay, as far as energy and charge conservation are concerned. There are two possible scenarios compatible with the conjecture. Under the rather reasonable assumption that the decay would occur through the emission of a \emph{light particle} by the black hole, \emph{i.e.} $m_1 \gg m_2$, the consequences of having $q_2/m_2>1$ are stronger than those of the complementary scenario $q_1/m_1>1$ in the following sense. In terms of the former, the WGC becomes a sharp tool that can be useful to discern whether or not an effective theory belongs to the swampland \cite{Vafa:2005ui, ArkaniHamed:2006dz}. Examples of applications of the conjecture in this direction can be found in \cite{Palti:2019pca} and references therein. On the other hand, the latter possibility corresponds to a milder version of the conjecture that would just provide information about states far heavier than the Planck mass, and hence it is less useful for the swampland program. Of course, the two scenarios are not mutually exclusive, and it is conceivable that they might be related \cite{Aalsma:2019ryi}. The problem of computing corrections to the extremal charge-to-mass bound has been considered before by other authors in several frameworks. To date, the Reissner-Nordstr\"om black hole of Einstein-Maxwell theory supplemented by higher-derivative terms is arguably the system which is best understood in that respect. An explicit computation for that system that gives the corrections to the ratio in terms of the value of the coefficients of the higher-derivative terms was presented in \cite{Kats:2006xp} (see also \cite{Loges:2019jzs}).
Subsequent works have proposed that, demanding analyticity of scattering amplitudes, unitarity and causality constrain these coefficients such that there is a positive deviation of the charge-to-mass ratio of extremal black holes \cite{Cheung:2014ega, Hamada:2018dde, Bellazzini:2019xts}. Likewise, it has been proposed that this can be related to the positivity of the corrections to the entropy of the black hole induced by higher-derivative operators \cite{Cheung:2018cwt, Cheung:2019cwi,Goon:2019faz}. The study of black holes in Einstein-Maxwell theory is well justified and interesting, as it provides a relatively simple arena to explore this question while making contact with the dominant interactions of everyday experience. On the other hand, since the problem at hand is intimately related to quantum gravity, it is important to ask if the positive character of the deviation is displayed by models with an explicit string theory embedding. Effective gravitational theories derived from string theory usually contain scalars besides vectors. Einstein-Maxwell-Dilaton (EMD) theory arises as a natural truncation of the effective theories of different string models. For instance, it appeared as a truncation of $N=4,d=4$ supergravity\footnote{This is the effective theory of the Heterotic Superstring compactified on $T^{6}$.} in \cite{Gibbons:1982ih}, where the first EMD black-hole solutions were found. These solutions were later rederived and studied in \cite{Gibbons:1987ps, Garfinkle:1990qj}.\footnote{This solution is usually referred to as the GHS black hole.} The Kaluza-Klein theory obtained by compactifying the 5-dimensional Einstein-Hilbert action on a circle also provides another particular example of EMD theory, with the Kaluza-Klein scalar playing the role of the dilaton field. Different instances of EMD theory are distinguished by the different couplings of the dilaton to the vector field kinetic term in the action.
Although EMD theory looks very similar to Einstein-Maxwell (EM), there are important differences. In particular, the purely electrically or magnetically charged Reissner-Nordstr\"om black hole with constant scalar is not a solution of the equations of motion, as the vector field acts as a source for the dilaton. Only when the black hole is dyonic with equal magnetic and electric charges does the source term in the dilaton equation vanish, allowing the dilaton to take a constant value. For generic values of the electric and magnetic charges, one gets charged black holes with a non-trivial scalar field. In the extremal limit, these black holes are regular, except in the purely electric and purely magnetic cases. Corrections to the charge-to-mass ratio in one of these singular cases have been studied in \cite{Kats:2006xp}, although, due to the singularities, this is not a good setting in which to discuss the stability of extremal black holes. Still, it is worth mentioning that the correction again has a positive character, which gives some support to the mild WGC. However, the positive deviation of the charge-to-mass ratio cannot be general in string theory. Supersymmetric black holes are special. When they are regular they necessarily carry several charges and their mass is given by a linear combination of them with moduli-dependent coefficients. Typically, the scalars are not constant, but their value at the horizon is fixed in terms of the charges due to the attractor mechanism \cite{Ferrara:1995ih, Strominger:1996kf, Ferrara:1996dd, Ferrara:1996um, Ferrara:1997tw, Ferrara:2006em}. The linear relation between mass and charge is a salient feature of supersymmetric systems and, as supersymmetric black holes are extremal \cite{Kallosh:1992ii}, the charge-to-mass ratio can be expected to remain unmodified by higher-derivative corrections.
This has been recently shown to be the case in three- and four-charge heterotic black holes \cite{Cano:2018qev, Cano:2018brq} in, respectively, five and four dimensions.\footnote{In those articles, it was noted that the relation between mass and the number of fundamental constituents of the black hole is modified by $\alpha'$-corrections. However, the relation between mass and asymptotic charges remains unchanged.} Nevertheless, one should notice that these configurations correspond to a bound state of a (large) number of fundamental objects without binding energy. This means that, speaking in terms of energy and charge conservation, the decay of supersymmetric black holes is possible. Another family of configurations of $\mathcal{N}\geq2$ four-dimensional supergravity for which no corrections to the ratio have been observed (even at the one-loop quantum level and for non-supersymmetric solutions) was recently described \cite{Charles:2019qqt}. These black holes are obtained through a particular embedding of dyonic solutions of Einstein-Maxwell theory for which the solution is claimed not to receive corrections at all \cite{Charles:2017dbr}. In this article we compute explicitly the first-order $\alpha'$ (fourth order in derivatives) corrections to the charge-to-mass ratio of the extremal Reissner-Nordstr\"om black hole embedded in heterotic string theory in several ways. All the embeddings that we will consider here are dyonic (so the dilaton vanishes at lowest order in $\alpha'$) and non-supersymmetric (so there is a chance of having non-vanishing $\alpha'$ corrections). To the best of our knowledge, these provide the first examples in which such a computation has been made using an explicit embedding of the black-hole solutions in a superstring theory whose first-order $\alpha'$ corrections are explicitly known in detail. We start in Section~\ref{sec-family} with a description of the zeroth-order solutions.
They are 2-vector dyonic, extremal Reissner-Nordstr\"om black holes, although we take only one of the charges to be independent for simplicity. Depending on the choice of the relative signs of the charges, there are two families of solutions that can be considered, for which the consequences of including the higher-curvature corrections are different. In one case the charge-to-mass ratio of the solution remains unchanged, while in the other it deviates positively from one. On the other hand, the Wald entropy of both solutions is equal and differs from the expression of the zeroth-order system. Our results are discussed in Section~\ref{sec-discussion}. In the appendices we include the details of the various computations that we have performed. The effective field theory of the heterotic string at first order in $\alpha'$ is described in Appendix~\ref{sec:theory}. The equations of motion evaluated for the spherically-symmetric ansatz can be found in Appendix~\ref{sec-eom} and the calculation of the Wald entropy is addressed in Appendix~\ref{sec:wald}. In Appendix~\ref{sec:case3} we compute the corrections for a solution different from the ones considered in Section~\ref{sec-family}, which has some peculiar features. \section{A family of extremal black holes} \label{sec-family} Let us consider the following field configuration of heterotic superstring theory, whose perturbative action and equations of motion are briefly reviewed in Appendix \ref{sec:theory}, \begin{align} \nonumber d\hat{s}^2=&e^{2( \phi- \phi_{\infty})}ds^2-c^2(dz+V/c_{\infty})^2-dy^idy^i\, , \\ \nonumber \hat H=&F\wedge (c_{\infty}dz+V)+H\, ,\\ \label{eq:comapct} e^{-2\hat\phi}=&\frac{1}{c}e^{-2\phi} \, , \end{align} where $ds^2$ is the four-dimensional metric in the Einstein frame, $F$ is a 2-form, $V$ is a Kaluza-Klein vector, $c$ is a KK scalar, $H$ is a 3-form, and $\phi$ is the four-dimensional dilaton.
In addition, $c_{\infty}$ and $ \phi_{\infty}$ are the asymptotic values of $c$ and $\phi$. These are effective four-dimensional fields, while hatted objects represent ten-dimensional fields of the heterotic theory. This ansatz corresponds to the compactification of heterotic superstring theory on $\mathbb{S}^1_z\times \mathbb{T}^5$, where we truncate all the fields that have indices on $\mathbb{T}^5$, while the KK reduction on $\mathbb{S}^1_z$ is general. The coordinates parametrizing the compact space $z$ and $y^{i}$, $i=1,2,3,4,5$ have all period $2\pi \ell_s$. At the supergravity level (zeroth-order in $\alpha'$), this gives rise to the following four-dimensional effective theory \begin{equation}\label{eq:hetred} \begin{aligned} S=\frac{1}{16\pi G_N^{(4)}}\int d^4x\sqrt{|g|}\Bigg\{&R+2(\partial\phi)^2+\frac{(\partial c)^2}{c^2}+\frac{e^{-4(\phi-\phi_{\infty})}}{2\cdot 3!}H^2\\ &+\frac{e^{-2(\phi-\phi_{\infty})}}{4}\left(G^2+\frac{c_{\infty}^2}{c^2}F^2\right)\Bigg\}\, , \end{aligned} \end{equation} where $G=dV$ and, at this order, $F=dA$. The four-dimensional Newton's constant is given by \begin{equation} G_N^{(4)}=\frac{G_N^{(10)}}{c_{\infty}(2\pi\ell_s)^6}\, . \end{equation} Also, the 3-form satisfies the Bianchi identity \begin{equation} dH=-F\wedge G\, , \end{equation} and using it we can dualize $H$ into a scalar field. From the effective four-dimensional action \eqref{eq:hetred} one sees that, at the supergravity level, one could truncate $V$, $H$ and $c$. This would simplify the system to the Einstein-Maxwell-Dilaton model. However, it turns out that this is inconsistent once $\alpha'$ corrections are taken into account, as those introduce non-trivial couplings between these fields. In other words, higher-derivative corrections to the Einstein-Maxwell-Dilaton effective model in the context of string theory may require the activation of additional fields. This is a well-known but often forgotten fact \cite{Campbell:1991kz, RNCORR}. 
\subsection{Supergravity zeroth-order solution} A generalized version of the extremal Reissner-Nordstr\"om black hole can be a solution of the theory (\ref{eq:hetred}) if we allow for dyonic vectors. Let us consider the following configuration, \begin{eqnarray} \nonumber ds^2&=&\left(1+\frac{Q}{r}\right)^{-2}dt^2-\left(1+\frac{Q}{r}\right)^2\left(dr^2+r^2d\Omega_{(2)}^2\right)\, ,\\ \nonumber A&=&\frac{2q_A}{(r+Q)}dt-2p_A\cos\theta d\varphi\, ,\\ \nonumber V&=&\frac{2q_V}{(r+Q)}dt-2p_V\cos\theta d\varphi\, ,\\ \label{eq:solzero} \phi&=&\phi_{\infty}\, ,\quad c=c_{\infty}\, ,\quad H=0\, , \end{eqnarray} where $q_{A,V}$, $p_{A,V}$ are the electric and magnetic charges of the vectors $A$ and $V$ (in Planck units) and $Q=\sqrt{q_A^2+p_A^2+q_V^2+p_V^2}$ is the total duality-invariant charge.\footnote{Invariant under electric-magnetic duality and T-duality.} On the other hand, the mass of this black hole is $M=Q$. It is easy to check that this is a solution of (\ref{eq:hetred}) if the charges satisfy the following conditions \begin{align} \nonumber |q_A|=|p_A|\, ,\quad |q_V|=|p_V|&\, ,\\ q_{A}p_{V}+p_{A}q_{V}=0&\, . \end{align} The first two conditions ensure that $F^2=G^2=0$ while the third one implies that $F\wedge G=0$, and in this way the scalar fields have no sources. This special point in charge space has the property that the scalars are trivial at the supergravity level, although this does not hold once higher-curvature corrections are implemented, as we show below. Let us note that starting from a given solution, the transformation $t\rightarrow -t$ generates a new solution with opposite values of the electric charges and in turn $\phi\rightarrow-\phi$ changes the sign of the magnetic charges. Thus, without loss of generality we can consider $q_A=p_A>0$, and in that case $p_V=-q_V$. Hence, there are two inequivalent sets of solutions, corresponding to $q_A\cdot q_V>0$ and $q_A\cdot q_V<0$.
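The statement that $|q_A|=|p_A|$ makes $F^2$ vanish can be verified directly in the background \eqref{eq:solzero}. The following sympy sketch (an illustrative check, not part of the paper's computations) contracts the field strength of $A$ with the inverse of the zeroth-order metric:

```python
import sympy as sp

r, th, Q, qA, pA = sp.symbols('r theta Q q_A p_A', positive=True)
H = 1 + Q/r

# zeroth-order metric of eq. (solzero), signature (+,-,-,-), coords (t, r, theta, phi)
g = sp.diag(H**-2, -H**2, -H**2*r**2, -H**2*r**2*sp.sin(th)**2)
ginv = g.inv()

# field strength of A = 2 q_A/(r+Q) dt - 2 p_A cos(theta) dphi
F = sp.zeros(4, 4)
F[0, 1] = 2*qA/(r + Q)**2    # F_{tr}
F[1, 0] = -F[0, 1]
F[2, 3] = 2*pA*sp.sin(th)    # F_{theta phi}
F[3, 2] = -F[2, 3]

# F^2 = F_{mn} F^{mn}
F2 = sp.simplify(sum(F[m, n]*ginv[m, a]*ginv[n, b]*F[a, b]
                     for m in range(4) for n in range(4)
                     for a in range(4) for b in range(4)))

# electric and magnetic contributions cancel precisely when |q_A| = |p_A|
assert sp.simplify(F2 - 8*(pA**2 - qA**2)/(r + Q)**4) == 0
assert sp.simplify(F2.subs(pA, qA)) == 0
```

The same contraction applied to $G$ reproduces the condition $|q_V|=|p_V|$, so that the dilaton source indeed vanishes at this special point in charge space.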
We wish to compute the first-order $\alpha'$-corrections to these solutions, but for simplicity we will restrict to the case in which the absolute value of all the charges is the same. Hence, there are two possibilities: $q_A=q_V=p_A=-p_V=Q/2$ and $q_A=-q_V=p_A=p_V=Q/2$. In addition, the special case with $q_V=0$, in which $\alpha'$ corrections seem to introduce pathologies in the extremal limit, is treated in Appendix \ref{sec:case3}. Thus, in this article we study the corrections to a stringy Reissner-Nordstr\"om black hole which, despite having two independent $U(1)$ dyonic vector fields, has only one independent charge $Q$. The configuration \eqref{eq:solzero} has an event horizon at $r=0$, with near-horizon geometry $AdS_2 \times S^2$ and it is therefore a black hole with vanishing temperature. On the other hand, the configuration does not preserve any supersymmetry, this being related to the presence of dyonic vectors, as in Ref.~\cite{Khuri:1995xq}. \subsection{Case 1: $q_A\cdot q_V<0$}\label{sec:case1} Let us first consider the case $q_A=-q_V=p_A=p_V=Q/2$. Starting from the zeroth-order solution \eqref{eq:solzero} and using (\ref{eq:comapct}), it is possible to compute the first higher-curvature corrections by solving perturbatively the ten-dimensional equations of motion at first order in $\alpha'$, which can be found in Appendix \ref{sec:theory}. The details about the resolution of those equations are shown in Appendix \ref{sec-eom}. 
We find the following solution \begin{eqnarray} \notag ds^2&=&\left(1+\frac{Q}{r}+\frac{\alpha'Q^2}{8(r+Q)^3r}\right)^{-2}dt^2-\left(1+\frac{Q}{r}+\frac{\alpha'Q^2}{8(r+Q)^3r}\right)^2\left(dr^2+r^2d\Omega_{(2)}^2\right)\, ,\\\notag F&=&\frac{Q}{(r+Q)^2}\left(1+\frac{\alpha'Q^2}{4(r+Q)^4}\right)dt\wedge dr+Q\left(1+\frac{\alpha'Q(Q+4r)}{2(r+Q)^4}\right) \sin\theta d\theta\wedge d\varphi\, ,\\ \notag V&=&-\frac{Q}{(r+Q)}dt-Q\cos\theta d\varphi\, ,\\\notag \hat\phi&=&\hat\phi_{\infty}+\frac{\alpha'Q^2}{4(r+Q)^4}\, ,\\ \label{eq:sol1} c&=&c_{\infty}\left(1+\frac{\alpha'Q^2}{4(r+Q)^4}\right)\, ,\quad H=0\, . \end{eqnarray} The conditions that we have imposed in order to solve the equations of motion are the same as those imposed for the original supergravity solution, namely \begin{itemize} \item regularity of the event horizon located at $r=0$, \item fixed asymptotic value of the scalars: $\hat\phi\rightarrow\hat\phi_{\infty}$ and $c\rightarrow c_{\infty}$, \item the metric is asymptotically flat, with the correct normalization at infinity, and \item absence of additional \emph{free} charges at order $\alpha'$. \end{itemize} The last point means that we do not introduce artificial shifts in the charges. In fact, performing a transformation of the form $Q\rightarrow Q+\alpha' \delta Q$ in the original solution generates a new solution which, apparently, contains $\alpha'$-corrections. The integration constants of the equations of motion have to be appropriately chosen so that this type of shift does not occur. Observe that in the previous solution, $V$ contains no corrections. Also, note that $F$ is not a closed form, $dF\neq0$, so its local expression is no longer given by the exterior derivative of the vector field $A$. This is due to the form of the decomposition (\ref{eq:comapct}) and to the fact that $H$ is not a closed 3-form at first order in $\alpha'$. Thus, $F$ will have an expression of the form $F=dA+\alpha' W$ for some 2-form $W$.
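Since $V$ receives no corrections, its magnetic charge can be checked with a one-line flux integral. The sympy sketch below (illustrative; the normalization $p_V=\frac{1}{8\pi}\oint G$ matches the asymptotic charge integrals used for this solution) integrates the angular part of $G=dV$ over the sphere:

```python
import sympy as sp

th, ph, Q = sp.symbols('theta phi Q', positive=True)

# angular part of V = -Q/(r+Q) dt - Q cos(theta) dphi (Case 1 solution)
V_phi = -Q*sp.cos(th)
G_thph = sp.diff(V_phi, th)   # (G)_{theta phi} = Q sin(theta)

# magnetic charge p_V = (1/8 pi) * flux of G through the two-sphere
p_V = sp.integrate(G_thph, (th, 0, sp.pi), (ph, 0, 2*sp.pi))/(8*sp.pi)

assert p_V == Q/2   # matches the zeroth-order value of p_V
```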
Nevertheless, the correct identification of the charges carried by these vector fields can be expressed in terms of $F$ and $G$ as follows\footnote{These integral expressions for the charges are valid in the asymptotic sphere $\mathbb{S}^2_{\infty}$. At a generic sphere some of the expressions contain additional higher-derivative terms, which vanish asymptotically.}$^,$\footnote{In the case of the charges associated with $F$, these can be written in terms of the ten-dimensional Kalb-Ramond field strength as follows, \begin{eqnarray} \notag q_A &=&\frac{g_s^2}{8\pi (2\pi\ell_s)^5}\int_{\mathbb{S}^2_{\infty}\times \mathbb{T}^5} e^{-2\hat\phi}\star\hat H \, ,\\ \notag p_A&=&\frac{1}{8\pi c_{\infty}(2\pi\ell_s)}\int_{\mathbb{S}^2_{\infty}\times \mathbb{S}^1} \hat H\, . \end{eqnarray} } \begin{eqnarray} \notag q_A &=&\frac{1}{8\pi}\int_{\mathbb{S}^2_{\infty}}\frac{c^2_{\infty}}{c^2}e^{-2(\phi-\phi_{\infty})}\star F\, ,\\ \notag p_A&=&\frac{1}{8\pi}\int_{\mathbb{S}^2_{\infty}} F\, ,\\ \notag q_V&=&\frac{1}{8\pi}\int_{\mathbb{S}^2_{\infty}}e^{-2(\phi-\phi_{\infty})}\star G\, ,\\ p_V&=&\frac{1}{8\pi}\int_{\mathbb{S}^2_{\infty}} G\, . \end{eqnarray} We have checked that the evaluation of the integrals yields $q_A=p_A=-q_V=p_V=Q/2$, so that the charges of the solution are indeed unmodified and $Q$ is the total charge. One might think that corrections to the charges should not be expected, as they are defined by asymptotic integrals, where the curvature goes to zero. However, there are many examples of solutions for which higher-curvature interactions behave as delocalized sources of charge \cite{Cano:2018qev,Cano:2018brq,Cano:2018hut, Faedo:2019xii}, and hence it is always convenient to perform this computation. The ADM mass of the black hole can be read from the asymptotic expansion of the metric according to \begin{equation} \lim_{r \rightarrow \infty} g_{rr}=1+\frac{2M}{r}+\ldots\, .
\end{equation} From \eqref{eq:sol1} one sees that $M=Q$, so the charge-to-mass ratio of this extremal black hole is not modified at this order, \begin{equation} \frac{Q}{M}=1+\mathcal{O}(\alpha'^2)\, . \end{equation} It is also interesting to compute the correction to the entropy of this black hole. The application of Wald's formula for this family of solutions of the heterotic theory is described in Appendix \ref{sec:wald}. Upon evaluation of the resulting expression \eqref{eq:wald2} for the background \eqref{eq:sol1} we get \begin{equation} \mathbb{S}=\frac{\pi}{G_{N}^{(4)}} \left(Q^2+\frac{\alpha'}{4} \right) \, . \end{equation} \noindent The first term in the expression corresponds to the Bekenstein-Hawking entropy. Therefore, we find a positive correction to the entropy, which can be interpreted as capturing additional microscopic degrees of freedom that are frozen in the truncation to the two-derivative supergravity theory. At the computational level, this deviation originates from an increase in the area of the event horizon, while the contributions in Wald's formula coming explicitly from the higher-derivative terms vanish. \subsection{Case 2: $q_A\cdot q_V>0$}\label{sec:case2} Let us now consider the case with $q_A=q_V=p_A=-p_V=Q/2$. The first-order $\alpha'$ corrections turn out to be quite different.
The solution reads \begin{eqnarray} \notag ds^2&=&A^2\left(1+\frac{Q}{r}\right)^{-2}dt^2-B^2\left(1+\frac{Q}{r}\right)^2\left(dr^2+r^2d\Omega_{(2)}^2\right)\, ,\\\notag F&=&\frac{Q}{(r+Q)^2}\left(1+\alpha'\frac{3 Q^2-10 Q r-3 r^2}{120 (r+Q)^4}\right)dt\wedge dr+Q\left(1-\frac{\alpha' Q^2}{2 (r+Q)^4}\right) \sin\theta d\theta\wedge d\varphi\, ,\\\notag V&=&\frac{Q}{(r+Q)}\left(1-\alpha'\frac{r (63 Q+r)}{120 (r+Q)^4}\right)dt+Q\cos\theta d\varphi\, ,\\\notag \hat\phi&=&\hat\phi_{\infty}-\frac{\alpha' r \left(19 Q^2+12 Q r+3 r^2\right)}{60 Q (r+Q)^4}\, ,\\ c&=&c_{\infty}\left(1-\frac{\alpha'Q^2}{4(r+Q)^4}\right)\, ,\quad H=0\, , \end{eqnarray} where \begin{align} A=&1+\alpha'\frac{6 Q^3+13 Q^2 r+8 Q r^2+2r^3}{40 Q (r+Q)^4}\, ,\\ B=&1-\alpha' \frac{5 Q^3+9 Q^2 r+7 Q r^2+2 r^3}{40 Q (r+Q)^4}\, . \end{align} In order to obtain this solution we have imposed the same conditions as in the previous case. Notice that now $V$ receives corrections, while $F$ is again not closed. Still, one can check that the charges have the correct values $q_A=p_A=q_V=-p_V=Q/2$, and therefore, $Q$ is indeed the total charge. We also note that, unlike in the previous solution, the dilaton acquires a non-trivial charge that cannot be removed, and it reads\footnote{It is identified by the asymptotic expansion $\hat\phi=\hat\phi_{\infty}-Q_{\hat\phi}/r+\mathcal{O}(1/r^2)$.} \begin{equation} Q_{\hat\phi}=\frac{\alpha'}{20Q} \, . \end{equation} On the other hand, the metric component $g_{rr}$ behaves asymptotically as \begin{equation} g_{rr}=1+\frac{1}{r}\left(2Q-\frac{\alpha'}{10 Q}\right)+\ldots\, . \end{equation} and therefore, the mass receives corrections in this case \begin{equation} M=Q-\frac{\alpha'}{20Q}+\mathcal{O}(\alpha'^2)\, . \end{equation} Correspondingly, the extremal charge-to-mass ratio is modified \begin{equation} \frac{Q}{M}=1+\frac{\alpha'}{20 M^2}+\mathcal{O}(\alpha'^2)\, , \end{equation} and it is larger than one, in agreement with the mild form of the WGC. 
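Both mass readings can be reproduced symbolically. The sympy sketch below (illustrative, not part of the paper's computations) expands the corrected $g_{rr}$ of each case in powers of $1/r$, extracts $M$ from the $2M/r$ coefficient, and checks the resulting charge-to-mass ratio at first order in $\alpha'$:

```python
import sympy as sp

r, Q, ap, x = sp.symbols('r Q alphap x', positive=True)  # ap stands for alpha'

def adm_mass(g_rr):
    """Read M from g_rr = 1 + 2M/r + ... via an expansion in x = 1/r."""
    expansion = sp.series(g_rr.subs(r, 1/x), x, 0, 2).removeO()
    return sp.simplify(expansion.coeff(x, 1)/2)

# Case 1: g_rr = (1 + Q/r + alpha' Q^2/(8 (r+Q)^3 r))^2 -> M = Q, ratio unmodified
g1 = (1 + Q/r + ap*Q**2/(8*(r + Q)**3*r))**2
assert sp.simplify(adm_mass(g1) - Q) == 0

# Case 2: g_rr = B^2 (1 + Q/r)^2 with the B of the corrected solution
B = 1 - ap*(5*Q**3 + 9*Q**2*r + 7*Q*r**2 + 2*r**3)/(40*Q*(r + Q)**4)
g2 = B**2*(1 + Q/r)**2
assert sp.simplify(adm_mass(g2) - (Q - ap/(20*Q))) == 0   # M = Q - alpha'/(20 Q)

# charge-to-mass ratio: Q/M = 1 + alpha'/(20 Q^2) + O(alpha'^2)
ratio = sp.series(Q/(Q - ap/(20*Q)), ap, 0, 2).removeO()
assert sp.simplify(ratio - (1 + ap/(20*Q**2))) == 0
```

Since $M=Q$ at lowest order, $\alpha'/(20Q^2)$ and $\alpha'/(20M^2)$ agree at this order, reproducing the positive deviation quoted above.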
The Wald entropy of this solution is obtained from the evaluation of \eqref{eq:wald2}. In this case there is a negative contribution from the modification of the area of the event horizon as well as a positive one from the higher-derivative terms. The result reads \begin{equation} \mathbb{S}=\frac{\pi}{G_{N}^{(4)}} \left(Q^2+\frac{\alpha'}{4} \right) \, , \end{equation} which, surprisingly enough, coincides with the value found in the previous case, even though the rest of the properties of the solution are different. \section{Discussion} \label{sec-discussion} In this article we have analyzed the effect produced by higher-curvature corrections on a family of extremal, non-supersymmetric black holes in the context of heterotic superstring theory. We have found the first example of a modification of the charge-to-mass ratio of an extremal black hole explicitly embedded in string theory. This example defies the previous expectation that the charge-to-mass ratio of extremal black holes in a supersymmetric theory is not modified by higher-curvature corrections. Likewise, we have presented evidence that such modifications are not necessarily in correspondence with the corrections to the entropy of the black hole. While this differs from the result of the explorations performed in Einstein-Maxwell theory, it agrees with earlier results on the supersymmetric three- and four-charge systems of the heterotic theory, as we mentioned in the introduction. The difference with the results in Refs.~\cite{Cheung:2018cwt,Cheung:2019cwi,Goon:2019faz} can be understood if one takes into account two facts. First of all, topological terms such as the Gauss-Bonnet invariant --- which implicitly appears in the heterotic string effective action --- modify the black hole entropy while keeping the solution unchanged. Thus, in this case corrections to the entropy are independent from deviations of the extremal charge-to-mass ratio.
On the other hand, the models considered in the previous literature do not include scalars, which are a key ingredient of stringy effective actions. As we have seen in the examples presented, these scalars are activated by higher-derivative corrections even if they are trivial in the zeroth-order solution. Scalar fields usually affect the thermodynamic description of black holes --- see \textit{e.g.} \cite{Astefanesei:2018vga} --- and it would be interesting to explore whether this could modify the conclusions of \cite{Goon:2019faz}. The string coupling constant and the curvature can be kept sufficiently small in the exterior region of the black hole for the cases we have considered, hence the low-energy field theory gives a good approximate description of the system. In our analysis, we focused on the case $|q_A|=|q_V|$ for simplicity, but it would be interesting to study the corrections to the solution (\ref{eq:solzero}) for general values of $q_A$ and $q_V$, so that the two dyonic vectors are independent. In that case, we expect that the extremality bound will be modified according to \begin{equation} \frac{Q}{M}\le1+\frac{\alpha'}{M^2}f(q_A,q_V)+\mathcal{O}(\alpha'^2)\, , \end{equation} where $f$ is a certain homogeneous function of degree 0 of the charges $q_A$ and $q_V$, which, according to the WGC, should be non-negative. In this paper, we have shown that \begin{equation} f(q,-q)=0\, ,\quad f(q,q)=\frac{1}{20}\, . \end{equation} In addition, in Appendix \ref{sec:case3} we consider the case $q_V=0$ and we show that \linebreak$f(q,0)=\frac{1}{80}$.\footnote{The study of the non-extremal black holes of that type will be the object of a coming publication \cite{RNCORR}. $\alpha'$-corrections seem to introduce pathologies in the extremal case even though the zeroth-order solution is regular.} Furthermore, since the result must be invariant under T-duality we conclude that $f(0,q)=f(q,0)$.
Given the values found, it is an interesting problem to search for the general expression of the function $f(q_A,q_V)$ and to check its non-negativity. An important piece of our analysis is that the higher-curvature corrections to the theory are directly taken from the ten-dimensional heterotic string theory. This differs from the approach usually taken in the literature, where, given a four-dimensional effective theory, all possible four-derivative terms that can be constructed with the corresponding field content are considered. As we have mentioned in the main text, consistency may require enlarging the field content of an effective theory when perturbative corrections are being considered. Of course, the details depend on the UV theory in which the system is embedded. It is interesting to mention here the example of the dyonic Reissner-Nordstr\"om black hole solution of Einstein-Maxwell-Dilaton theory embedded in the heterotic theory (the $f(q,0)$ configuration), which requires the activation of additional fields that can be truncated at the two-derivative level \cite{RNCORR}. The mild version of the WGC affects solutions well above the Planck mass. Hence, by itself it has little predictive power about low-energy effective theories. It has been recently suggested that it might be possible to relate the mild and strong versions of the WGC using modular invariance of string theory \cite{Aalsma:2019ryi}.
Given the growing amount of evidence in favor of the mild WGC, this seems a promising idea and it would be interesting to test it for a regular extremal black hole system of string theory.\footnote{Reference \cite{Aalsma:2019ryi} considered two-charge black holes, which have a singular horizon in the extremal limit even after higher-curvature corrections are included \cite{Cano:2018hut}.} \section*{Acknowledgments} This work has been supported in part by the INFN, the MCIU, AEI, FEDER (UE) grant PGC2018-095205-B-I00 and by the Spanish Research Agency (Agencia Estatal de Investigaci\'on) through the grant IFT Centro de Excelencia Severo Ochoa SEV-2016-0597. The work of PAC is funded by Fundaci\'on la Caixa through a ``la Caixa - Severo Ochoa'' International pre-doctoral grant. PFR would like to thank the Albert Einstein Institute at Potsdam for its hospitality while this work was being completed. TO wishes to thank M.M.~Fern\'andez for her permanent support.
\section{Introduction} \label{sec_intro} The recent revival of the swampland conjecture \cite{Obied:2018sgi, Agrawal:2018own} has generated a huge amount of interest in exploring the (non-)existence of de-Sitter vacua within a consistent theory of quantum gravity. The original idea of the swampland was proposed to state that de-Sitter solutions must be absent in a consistent theory of quantum gravity \cite{Ooguri:2006in}. This idea has recently been formulated as a bound involving the scalar potential ($V$) and its derivatives, given in the following manner, \bea \label{eq:old-swamp} & & \frac{|\nabla V|}{V} \geq \frac{c}{M_p} \,, \eea where the constant $c$ is an order one quantity. This conjecture has been supported by several explicit computations in the context of attempts made at realizing classical de-Sitter solutions and inflationary cosmology in type II superstring flux compactifications \cite{Maldacena:2000mw, Hertzberg:2007wc, Hertzberg:2007ke, Haque:2008jz, Flauger:2008ad, Caviezel:2008tf, Covi:2008ea, deCarlos:2009fq, Caviezel:2009tu, Danielsson:2009ff, Danielsson:2010bc, Wrase:2010ew, Shiu:2011zt, McOrist:2012yc, Dasgupta:2014pma, Gautason:2015tig, Junghans:2016uvg, Andriot:2016xvq, Andriot:2017jhf, Danielsson:2018ztv}. Note that the bound presented in eqn.~(\ref{eq:old-swamp}) forbids not only de-Sitter minima but also de-Sitter maxima, and several counterexamples were known \cite{Haque:2008jz, Danielsson:2009ff, Danielsson:2011au, Chen:2011ac, Danielsson:2012et} or were reported soon after the proposal was made \cite{Andriot:2018wzk, Andriot:2018ept, Garg:2018reu, Denef:2018etk, Conlon:2018eyr, Roupec:2018mbn, Murayama:2018lie, Choi:2018rze, Hamaguchi:2018vtv, Olguin-Tejo:2018pfq, Blanco-Pillado:2018xyn}, reflecting the need to refine the de-Sitter swampland conjecture in eqn.~(\ref{eq:old-swamp}).
Subsequently, a refined version of the conjecture was proposed, which states that at least one of the following two constraints should always hold \cite{Ooguri:2018wrx}, \bea \label{eq:new-swamp} & & \frac{|\nabla V|}{V} \geq \frac{c}{M_p}\, , \qquad \qquad {\rm min} \Bigl[\frac{\nabla_i \nabla_j V}{V}\Bigr] \leq - \frac{c'}{M_p^2}\,, \eea where $c$ and $c' > 0$ are order one constants. Note that these two parameters can be related to the usual inflationary parameters, namely the $\epsilon_V$ and $\eta_V$ parameters, which need to be sufficiently small for slow-roll inflation (e.g. see \cite{BlancoPillado:2006he, Hertzberg:2007ke,Hetz:2016ics}), \bea & & \hskip-1cm \epsilon_V \geq \frac{1}{2}\, c^2 \, , \qquad \qquad |\eta_V| \leq c' \,. \eea Therefore it is quite clear that the conjecture (\ref{eq:new-swamp}) poses an obstruction not only to realising de-Sitter vacua but also to realising slow-roll inflationary scenarios, which demand $\epsilon_V \ll 1$ and $|\eta_V| \ll 1$. However, this definition of the $\epsilon_V$ parameter follows from a more general definition given in terms of the Hubble parameter as $\epsilon_H = - \dot{H}/H^2$, which only needs to satisfy $\epsilon_H < 1$ for an accelerating universe. This leads to a possible window circumventing the conjecture in multi-field inflation with turning trajectories \cite{Hetz:2016ics,Achucarro:2018vey}. Moreover, given the fact that no universal theoretical quantification of the $c$ and $c'$ parameters is available (though some experimental estimates have been reported in \cite{Raveri:2018ddi}), the order one statement may still keep some window open \cite{Kehagias:2018uem, Kinney:2018kew}.
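To make the relation between the conjecture parameters and the slow-roll parameters concrete, the sketch below evaluates $\epsilon_V$ and $\eta_V$ (with their standard single-field definitions) for an illustrative exponential potential $V=V_0\,e^{-\lambda\phi/M_p}$; this potential is a hypothetical example, not one of the models cited in the text:

```python
import sympy as sp

phi, lam, V0, Mp = sp.symbols('phi lambda V_0 M_p', positive=True)

# illustrative potential (hypothetical example, not from the text)
V = V0*sp.exp(-lam*phi/Mp)

# standard single-field slow-roll parameters
eps_V = sp.simplify(Mp**2/2*(sp.diff(V, phi)/V)**2)   # epsilon_V = lambda^2/2
eta_V = sp.simplify(Mp**2*sp.diff(V, phi, 2)/V)       # eta_V = lambda^2

assert eps_V == lam**2/2
assert eta_V == lam**2

# V' < 0 here, so |grad V|/V = -V'/V; the first condition of the refined
# conjecture, |grad V|/V >= c/M_p, then holds precisely when lambda >= c
grad_ratio = sp.simplify(-sp.diff(V, phi)/V)
assert grad_ratio == lam/Mp
```

For this potential the first constraint is satisfied whenever $\lambda \geq c$, which is exactly in tension with the slow-roll requirement $\epsilon_V = \lambda^2/2 \ll 1$ for an order one $c$.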
The question of realising de-Sitter vacua is two-fold: the first aspect concerns their existence and the second their stability, and a plethora of interesting models have been proposed along these lines \cite{Maldacena:2000mw, Hertzberg:2007wc, Caviezel:2008tf,Flauger:2008ad, Danielsson:2009ff, Caviezel:2009tu, Wrase:2010ew, Covi:2008ea, Shiu:2011zt, Danielsson:2012et, Junghans:2016uvg, Banlaki:2018ayh}. The swampland conjecture \cite{Ooguri:2006in} has also been found to be closely connected with the allowed inflaton field range in a trustworthy effective field description, as it has been argued that a massive tower of states can get excited beyond a certain limit of the inflaton excursions \cite{Blumenhagen:2017cxt, Blumenhagen:2018nts, Blumenhagen:2018hsh, Palti:2017elp, Conlon:2016aea, Hebecker:2017lxm, Klaewer:2016kiy, Baume:2016psm, Landete:2018kqf, Cicoli:2018tcq, Font:2019cxq, Grimm:2018cpv, Hebecker:2018fln, Banlaki:2018ayh, Junghans:2018gdb}. The recent surge of developments following the swampland proposal can be found in \cite{Denef:2018etk, Conlon:2018eyr, Garg:2018reu, Kinney:2018nny, Roupec:2018mbn, Murayama:2018lie, Choi:2018rze, Hamaguchi:2018vtv, Olguin-Tejo:2018pfq, Blanco-Pillado:2018xyn, Achucarro:2018vey, Kehagias:2018uem, Kinney:2018kew, Andriot:2018wzk, Andriot:2018ept, Lin:2018kjm, Han:2018yrk, Raveri:2018ddi, Dasgupta:2018rtp, Danielsson:2018qpa, Andriolo:2018yrz, Dasgupta:2019gcd, Russo:2018akp, Russo:2019fnk, Andriot:2019wrs}, with an extensive review on the status in \cite{Palti:2019pca}.
In contrast to the (minimal) de-Sitter no-go scenarios, there have been several proposals for realizing stable de-Sitter vacua in the context of string model building \cite{Kachru:2003aw, Burgess:2003ic, Achucarro:2006zf, Westphal:2006tn, Silverstein:2007ac, Rummel:2011cd, Cicoli:2012fh, Louis:2012nb, Cicoli:2013cha, Cicoli:2015ylx, Cicoli:2017shd, Akrami:2018ylq, Antoniadis:2018hqy}; see \cite{Heckman:2019dsj, Heckman:2018mxl} also for the $F$-theoretic initiatives taken in this regard. In fact, realizing de-Sitter solutions and the possible obstructions in doing so have been a center of attention for decades\footnote{For an updated recent review on realizing de-Sitter solutions in string theoretic models along with the status on Quintessence, we refer the readers to \cite{Cicoli:2018kdo}.}. Moreover, some interesting models realizing de-Sitter vacua in the framework of non-geometric flux compactifications have also been proposed \cite{deCarlos:2009qm, deCarlos:2009fq, Danielsson:2010bc, Danielsson:2012by, Blaback:2013ht, Damian:2013dq, Damian:2013dwa, Blaback:2013fca, Hassler:2014mla, Blumenhagen:2015xpa, Blaback:2018hdo, Damian:2018tlf}. However, the issues of whether the fluxes are integral and whether they satisfy all the NS-NS Bianchi identities can still be considered among the open questions in this regard. In fact, it has been observed that the Bianchi identities are not fully known beyond the toroidal examples, as some inconsistencies have been observed between two ways of deriving the identities \cite{Ihl:2007ah, Robbins:2007yv, Gao:2018ayp, Shukla:2016xdy, Shukla:2019akv}.
\subsubsection*{Motivations, goals and a brief summary of the results:} Several de-Sitter no-go theorems on the type IIA side have been well known for a decade or so \cite{Hertzberg:2007wc, Flauger:2008ad, Wrase:2010ew, Hertzberg:2007ke}, and they have also been studied for the type II non-geometric compactifications using a simple isotropic torus in \cite{deCarlos:2009qm, deCarlos:2009fq}. With the goal of extending non-geometric flux phenomenology beyond the toroidal cases, the study of generic four-dimensional type II scalar potentials and their ten-dimensional origin has been performed in a series of papers \cite{Shukla:2015rua, Shukla:2015hpa, Blumenhagen:2015lta, Blumenhagen:2015kja, Shukla:2016hyy, Gao:2017gxk}. Taking this programme one step further, in a companion paper \cite{Shukla:2019wfo} we have presented a one-to-one $T$-dual mapping of the two type II effective scalar potentials, along with the flux constraints arising from the NS-NS Bianchi identities and the tadpole cancellation conditions, which are also in one-to-one correspondence under $T$-duality. The main motivations and goals of this article can be presented under the following points: \begin{itemize} \item{Our so-called ``cohomology'' or ``symplectic'' formulation of the scalar potential presented in \cite{Shukla:2019wfo} opens up a window to study non-geometric models beyond the toroidal constructions, and also enables one to explicitly translate any useful findings of one setup into its $T$-dual picture. On these lines, we $T$-dualize several de-Sitter no-go scenarios realized in purely geometric type IIA frameworks \cite{Hertzberg:2007wc, Flauger:2008ad, Wrase:2010ew, Hertzberg:2007ke}. This lets us delve into their type IIB counterparts, which turn out to be non-geometric de-Sitter no-go frameworks that have not been known before.
The utility of our approach can be underlined by the fact that, although the type IIA no-go scenarios have been known for more than a decade, there have been no de-Sitter no-go proposals in a generic non-geometric type IIB framework. } \item{In our analysis, we show the relevance of including the complex structure moduli in deriving the $T$-dual type IIB no-go conditions. Note that all the type IIA no-go results in \cite{Hertzberg:2007wc, Flauger:2008ad, Wrase:2010ew, Hertzberg:2007ke}, which we $T$-dualize, are realized using the extremization conditions only in the ``volume/dilaton'' plane, without taking into account the complex structure moduli sector. This illustrates that any claim of evading a no-go originating from the ``volume/dilaton'' analysis should be checked by including all the remaining moduli.} \item{On the lines of classifying type IIA and type IIB models based on their (non-)geometric nature via turning on a certain set of fluxes at a time, we present an interesting recipe which corresponds to considering what we call `special solutions' of the NS-NS Bianchi identities. These solutions are such that they lead to a purely geometric framework as the $T$-dual of a non-geometric setup on either of the respective IIA or IIB sides. In particular, a type IIA non-geometric model with the fluxes allowed by the `special solution' of the Bianchi identities is $T$-dual to a purely geometric type IIB model, which has been known to admit a de-Sitter no-go scenario \cite{Shiu:2011zt, Garg:2018reu}, and subsequently our analysis concludes that the corresponding $T$-dual type IIA model, despite having non-geometric fluxes (still allowed by the `special solution'), cannot evade the no-go result.
This shows that our approach will be useful for constructing models either in search of de-Sitter no-go scenarios or against such no-go arguments, given that the most generic non-geometric setup could still be expected to evade the no-go, though there are several specifics to be checked in a given model before arriving at any final conclusion.} \item{In addition to finding the (non-)geometric flux regime or the types of fluxes needed to evade a certain kind of de-Sitter no-go result, we also find that if some specific geometries are involved, such as $K3/{\mathbb T}^4$-fibred complex threefolds, then the no-go results can be restored despite the inclusion of those fluxes which could apparently be anticipated to evade them. We illustrate this observation for explicit type IIA and IIB toroidal non-geometric setups.} \end{itemize} So, our results can be considered as providing some systematics for constructing de-Sitter no-go scenarios, along with recipes for finding possibilities of evading them, and, at the same time, for identifying some specific moduli-space geometries which could again restore a de-Sitter no-go result despite the presence of fluxes naively anticipated to evade it. Thus, our analysis presents a playground for constructing/evading de-Sitter no-go scenarios. The article is organized as follows: In section \ref{sec_sol-BIs} we present some interesting solutions of the NS-NS Bianchi identities which we further use for deriving the no-go conditions in the subsequent sections. Section \ref{sec_nogo1} presents a type IIA no-go with standard fluxes and its $T$-dual type IIB counterpart, which includes non-geometric fluxes as well. In section \ref{sec_nogo2} we first re-derive the fact that one can evade the type IIA no-go-1 with geometric fluxes and Romans mass, and then we $T$-dualize it to study the type IIB counterpart.
Section \ref{sec_nogo3} presents the relevance of $K3/{\mathbb T}^4$-fibred Calabi Yau threefolds, which help in finding a new class of de-Sitter no-go scenarios in both type II theories. Finally, we conclude with the results and observations in section \ref{sec_conclusions}. \vskip0.2cm \noindent {\bf Note:} Let us mention at the outset that we will follow the $T$-dual dictionary from the companion paper \cite{Shukla:2019wfo}, which includes the necessary ingredients of the generic formulation of the four-dimensional scalar potentials for the type IIA and type IIB supergravities with (non-)geometric fluxes; this dictionary is placed in appendix \ref{sec_dictionary}. For the purposes of this article, we will directly utilize the scalar potential for possible applications in the light of de-Sitter and inflationary no-go scenarios. Though we attempt to keep the article self-contained, we encourage interested readers to consult the other relevant details, e.g. on the superpotential, $D$-terms etc., directly from \cite{Shukla:2019wfo}. \section{Solutions of Bianchi identities} \label{sec_sol-BIs} In this section we present some interesting solutions of the Bianchi identities satisfied by the various fluxes of the type IIA and IIB theories. The full list of allowed NS-NS fluxes, namely $\{{\rm H}, w, {\rm Q}, {\rm R}\}$ in type IIA and $\{{H}, \omega, {Q}, {R}\}$ in type IIB, along with the RR fluxes, namely $\{{\rm F}_0 \equiv m_0, {\rm F}_2 \equiv m^a, {\rm F}_4 \equiv e_a, {\rm F}_6 \equiv e_0\}$ in type IIA and $\{F_0, F_i, F^i, F^0\}$ in type IIB, and their $T$-duality relations are collected in table \ref{tab_fluxTdual}. \begin{table}[h!]
\begin{center} \begin{tabular}{|c||c|c|} \hline & & \\ & Type IIA with $D6/O6$ \quad & \quad Type IIB with $D3/O3$ and $D7/O7$ \\ & & \\ \hline \hline & & \\ $F$-term & ${\rm H}_0$, \quad ${\rm H}_k$, \quad ${\rm H}^\lambda$, & $H_0$, \quad $\omega_{a0}$, \quad $\hat{Q}^\alpha{}_0$, \\ fluxes & & \\ & $w_{a0}$, \quad $w_{ak}$, \quad $w_a{}^\lambda$, & $H_i$, \quad $\omega_{ai}$, \quad $\hat{Q}^\alpha{}_{i}$, \\ & & \\ & ${\rm Q}^a{}_0$, \quad ${\rm Q}^a{}_k$, \quad ${\rm Q}^{a \lambda}$, & $H^i$, \quad $\omega_a{}^i$, \quad $\hat{Q}^{\alpha i}$, \\ & & \\ & ${\rm R}_0$, \quad ${\rm R}_k$, \quad ${\rm R}^\lambda$, & $- H^0$, \quad $- \omega_{a}{}^0$, \quad $- \hat{Q}^{\alpha 0}$, \\ & & \\ & $e_0$, \quad $e_a$, \quad $m^a$, \quad $m_0$. & $F_0$, \quad $F_i$, \quad $F^i$, \quad $- F^0$. \\ \hline & & \\ $D$-term & $\hat{w}_\alpha{}^0$, \quad $\hat{w}_\alpha{}^k$, \quad $\hat{w}_{\alpha \lambda}$, & $-\,R_K$, \quad $-\,Q^a{}_K$, \quad $\hat{\omega}_{\alpha K}$,\\ fluxes & & \\ & $\hat{\rm Q}^{\alpha 0}$, \quad $\hat{\rm Q}^{\alpha k}$, \quad $\hat{\rm Q}^{\alpha}{}_\lambda$. & $-\,R^K$, \quad $-\,Q^{a K}$, \quad $\hat{\omega}_{\alpha}{}^K$.\\ & & \\ \hline & & \\ Complex & \, \, ${\rm N}^0$, \, \, ${\rm N}^k$, \, \, ${\rm U}_\lambda$, \, \, ${\rm T}^a$. & $S$, \, \, $G^a$, \, \, $T_\alpha$, \, \, $U^i$.\\ Moduli& & \\ \hline \end{tabular} \end{center} \caption{$T$-duality transformations among the various fluxes and complex variables.} \label{tab_fluxTdual} \end{table} \noindent Here the fluxes as well as the various moduli are counted via the Hodge numbers as $\alpha \in \{1, 2, .., h^{1,1}_+\}$ and $a \in \{1, 2, .., h^{1,1}_-\}$ on both sides, while $\Lambda \in \{0, 1, 2, .., h^{2,1}_-\}$ and $J, K \in \{1, 2, .., h^{2,1}_+\}$ on the type IIB side, whereas the complex structure indices on the type IIA side split such that the numbers of $k$ and $\lambda$ indices sum to $h^{2,1}$.
The various fluxes appearing in the four-dimensional type IIA supergravity are constrained by the following five classes of NS-NS Bianchi identities \cite{Shukla:2019wfo}, \bea \label{eq:IIABIs2} & {\bf (I).} \quad & {\rm H}^{\lambda} \, \hat{w}_{\alpha\lambda} = {\rm H}_{\hat{k}} \, \hat{w}_\alpha{}^{\hat{k}}, \\ & {\bf (II).} \quad & {\rm H}^{\lambda} \, \hat{\rm Q}^\alpha{}_{\lambda} = {\rm H}_{\hat{k}} \, \hat{\rm Q}^{\alpha \, \hat{k}}, \qquad w_a{}^\lambda \, \hat{w}_{\alpha \lambda} = w_{a \hat{k}} \, \hat{w}_\alpha{}^{\hat{k}}\,, \nonumber\\ & {\bf (III).} \quad & \hat{\rm Q}^\alpha{}_\lambda \, w_a{}^\lambda = w_{a \hat{k}} \, \hat{\rm Q}^{\alpha \hat{k}}, \qquad {\rm Q}^a{}_{\hat k} \, \hat{w}_\alpha{}^{\hat k} = {\rm Q}^{a \lambda} \, \hat{w}_{\alpha \lambda}, \nonumber\\ & & \hat{w}_{\alpha\lambda}\, \hat{\rm Q}^{\alpha \hat{k}} = \hat{\rm Q}^\alpha{}_\lambda \, \hat{w}_\alpha{}^{\hat k}, \quad \hat{w}_{\alpha \lambda} \, \hat{\rm Q}^\alpha{}_\rho = \hat{\rm Q}^\alpha{}_\lambda \, \hat{w}_{\alpha \rho}, \quad \hat{w}_\alpha{}^{\hat k} \, \hat{\rm Q}^{\alpha \hat{k^\prime}} = \hat{\rm Q}^{\alpha \hat{k}} \, \hat{w}_\alpha{}^{\hat{k^\prime}}, \nonumber\\ & & {\rm R}^\lambda \, {\rm H}_{\hat{k}} - {\rm H}^\lambda \, {\rm R}_{\hat{k}} + w_a{}^\lambda \, {\rm Q}^a{}_{\hat{k}} - {\rm Q}^{a \lambda} \, w_{a \hat{k}} =0, \nonumber\\ & & {\rm H}_{[\hat{k}} \, {\rm R}_{\hat{k^\prime}]} + {\rm Q}^a{}_{[\hat{k}} \, w_{a \hat{k^\prime}]} = 0, \qquad {\rm H}^{[\lambda} \, {\rm R}^{\rho]} + {\rm Q}^{a [\lambda} \, w_a{}^{\rho]} = 0, \nonumber\\ & {\bf (IV).} \quad & {\rm R}^\lambda \, \hat{w}_{\alpha \lambda} = {\rm R}_{\hat k} \, \hat{w}_\alpha{}^{\hat k}, \qquad {\rm Q}^{a \lambda} \, \hat{\rm Q}^\alpha{}_\lambda = {\rm Q}^a{}_{\hat k} \, \hat{\rm Q}^{\alpha \hat{k}} \,, \nonumber\\ & {\bf (V).} \quad & {\rm R}^\lambda \, \hat{\rm Q}^\alpha{}_\lambda = {\rm R}_{\hat k} \, \hat{\rm Q}^{\alpha \hat{k}}\,. 
\nonumber \eea Similarly, on the type IIB side we have the following five classes of Bianchi identities \cite{Robbins:2007yv}, \bea \label{eq:IIBBIs2} & {\bf (I).} \quad & H_\Lambda \, \omega_{a}{}^{\Lambda} = H^\Lambda \, \omega_{\Lambda a}, \\ & {\bf (II).} \quad & H^\Lambda \, \hat{Q}_\Lambda{}^\alpha = H_\Lambda \hat{Q}^{\alpha \Lambda}, \qquad \omega_{a}{}^{\Lambda} \, \omega_{b \Lambda} = \omega_{b}{}^{\Lambda} \, \omega_{a \Lambda}, \qquad \hat{\omega}_{\alpha}{}^{K} \, \hat{\omega}_{\beta K} = \hat{\omega}_{\beta}{}^{K} \, \hat{\omega}_{\alpha K}, \nonumber\\ & {\bf (III).} \quad & \omega_{a \Lambda} \, \hat{Q}^{\alpha \Lambda} = \omega_{a}{}^{\Lambda} \, \hat{Q}^\alpha{}_{\Lambda}, \quad Q^{a K} \, \hat{\omega}_{\alpha K} = Q^{a}{}_{K} \, \hat{\omega}_{\alpha}^{K}, \nonumber\\ & & H_\Lambda \, R_K + \omega_{a \Lambda} \, Q^a{}_K + \hat{Q}^\alpha{}_\Lambda \, \hat{\omega}_{\alpha K} = 0, \qquad H^\Lambda \, R_K + \omega_{a}{}^{ \Lambda} \, Q^a{}_K + \hat{Q}^{\alpha{}\Lambda} \, \hat{\omega}_{\alpha K} = 0, \nonumber\\ & & H_\Lambda \, R^K + \omega_{a \Lambda} \, Q^{a{}K} + \hat{Q}^\alpha{}_\Lambda \, \hat{\omega}_{\alpha}{}^{K} = 0, \qquad H^\Lambda \, R^K + \omega_{a}{}^{ \Lambda} \, Q^{a K} + \hat{Q}^{\alpha{}\Lambda} \, \hat{\omega}_{\alpha}{}^{K} = 0, \nonumber\\ & {\bf (IV).} \quad & R^K \, \hat{\omega}_{\alpha K} = R_K \, \hat{\omega}_{\alpha}{}^{K}, \qquad \hat{Q}^{\alpha\Lambda} \, \hat{Q}^\beta{}_{\Lambda} = \hat{Q}^{\beta \Lambda} \, \hat{Q}^\alpha{}_{\Lambda}, \qquad Q^{a K} \, Q^{b}{}_{K} = Q^{b K} \, Q^{a}{}_{K}, \nonumber\\ & {\bf (V).} \quad & R_K \, Q^{a K} = R^K \, Q^{a}{}_{K}\,. \nonumber \eea \noindent First we argue how choosing a certain type of involution can project out many flux components and hence indeed simplify the generic set of identities, making it easier to find solutions. Moreover, we present another set of solutions, which we call the `special solution', for both the type IIA and type IIB theories.
They are very peculiar in many aspects, as we will elaborate later on. \subsection{Simple solutions} The set of type IIA Bianchi identities given in eqn. (\ref{eq:IIABIs2}) suggests that if one chooses the anti-holomorphic involution such that the even $(1,1)$-cohomology sector is trivial, which is very often the case considered for simple phenomenological models \cite{Villadoro:2005cu, Blumenhagen:2013hva, Gao:2017gxk, Gao:2018ayp}, then only the following Bianchi identities remain non-trivial, \bea \label{eq:IIAsimplesol1} & & {\rm R}^\lambda \, {\rm H}_{\hat{k}} - {\rm H}^\lambda \, {\rm R}_{\hat{k}} + w_a{}^\lambda \, {\rm Q}^a{}_{\hat{k}} - {\rm Q}^{a \lambda} \, w_{a \hat{k}} =0, \\ & & {\rm H}_{[\hat{k}} \, {\rm R}_{\hat{k^\prime}]} + {\rm Q}^a{}_{[\hat{k}} \, w_{a \hat{k^\prime}]} = 0, \qquad {\rm H}^{[\lambda} \, {\rm R}^{\rho]} + {\rm Q}^{a [\lambda} \, w_a{}^{\rho]} = 0\,. \nonumber \eea In such a situation, no $D$-term contributions are generated in the scalar potential, as all the fluxes relevant for $D$-terms carry $\alpha \in h^{1,1}_+$ indices and hence are projected out. For the $T$-dual of the above type IIA setting, one needs to look at the set of type IIB Bianchi identities given in eqn.
(\ref{eq:IIBBIs2}), which suggests that if one chooses the holomorphic involution such that the even $(2,1)$-cohomology sector is trivial, then only the following Bianchi identities remain non-trivial, \bea \label{eq:IIBsimplesol1} & & H_\Lambda \, \omega_{a}{}^{\Lambda} = H^\Lambda \, \omega_{\Lambda a}, \qquad H^\Lambda \, \hat{Q}_\Lambda{}^\alpha = H_\Lambda \hat{Q}^{\alpha \Lambda}, \qquad \omega_{a}{}^{\Lambda} \, \omega_{b \Lambda} = \omega_{b}{}^{\Lambda} \, \omega_{a \Lambda}, \\ & & \omega_{a \Lambda} \, \hat{Q}^{\alpha \Lambda} = \omega_{a}{}^{\Lambda} \, \hat{Q}^\alpha{}_{\Lambda}, \qquad \hat{Q}^{\alpha\Lambda} \, \hat{Q}^\beta{}_{\Lambda} = \hat{Q}^{\beta \Lambda} \, \hat{Q}^\alpha{}_{\Lambda}, \nonumber \eea which are in a one-to-one correspondence with those in eqn. (\ref{eq:IIAsimplesol1}). In such a situation, no $D$-term is generated, as all the fluxes with $\{J, K \} \in h^{2,1}_+$ indices are projected out. Moreover, if on top of this the holomorphic involution is chosen to result in a trivial odd $(1,1)$-cohomology, which corresponds to the absence of the odd moduli $G^a$ on the type IIB side and is also a very often studied case owing to its simplicity (e.g. see \cite{Aldazabal:2006up, Blumenhagen:2013hva, Shukla:2016xdy, Betzler:2019kon}), then there are only two Bianchi identities to worry about, given as follows, \bea \label{eq:IIBsimplesol2} & & H^\Lambda \, \hat{Q}_\Lambda{}^\alpha = H_\Lambda \hat{Q}^{\alpha \Lambda}, \qquad \hat{Q}^{\alpha\Lambda} \, \hat{Q}^\beta{}_{\Lambda} = \hat{Q}^{\beta \Lambda} \, \hat{Q}^\alpha{}_{\Lambda}. \eea This further simplification on the type IIB side corresponds to the absence of the ${\rm N}^k$ moduli on the type IIA side, and likewise of the corresponding fluxes which couple to ${\rm N}^k$ through the superpotential. This leads to two Bianchi identities on the type IIA side, which happen to be $T$-dual to those presented in eqn.
(\ref{eq:IIBsimplesol2}), and are given as, \bea \label{eq:IIAsimplesol2} & & \hskip-1cm {\rm R}^\lambda \, {\rm H}_0 - {\rm H}^\lambda \, {\rm R}_0 + w_a{}^\lambda \, {\rm Q}^a{}_0 - {\rm Q}^{a \lambda} \, w_{a 0} = 0, \qquad {\rm H}^{[\lambda} \, {\rm R}^{\rho]} + {\rm Q}^{a [\lambda} \, w_a{}^{\rho]} = 0\,. \eea These `simple' solutions of the Bianchi identities, based on specific choices of the orientifold involution, lead to some interesting scenarios in both the type IIA and type IIB theories. \subsection{IIA with `special solution' $\equiv$ IIB with geometric-flux $\equiv$ $\exists$ dS no-go} From the set of type IIA Bianchi identities given in eqn. (\ref{eq:IIABIs2}), one can observe that several Bianchi identities appear in the form of orthogonal symplectic vectors, and therefore half of the flux components can be set to zero by performing appropriate symplectic rotations\footnote{See \cite{Ihl:2006pp, Ihl:2007ah, Robbins:2007yv} also, for more arguments in this regard relating to dyonic black hole charges.}. The same is equivalent to setting some fluxes, say those with upper $h^{2,1}$ indices, to zero, as we present below, \bea \label{eq:condHalf-IIA} & & {\rm H}^\lambda = 0, \qquad \hat{w}_\alpha{}^0 = \hat{w}_\alpha{}^k = w_a{}^\lambda = 0, \\ & & {\rm R}^\lambda = 0, \qquad \hat{\rm Q}^{\alpha 0} =\hat{\rm Q}^{\alpha k} = {\rm Q}^{a \lambda} = 0. \nonumber \eea This is what we call the `special solution'. Now, using these `special' flux choices in eqn. (\ref{eq:condHalf-IIA}), all the type IIA Bianchi identities except the following three are trivially satisfied, \bea & & {\rm H}_{[0} \, {\rm R}_{{k}]} + {\rm Q}^a{}_{[0} \, w_{a {k}]} = 0, \\ & & {\rm H}_{[{k}} \, {\rm R}_{{k^\prime}]} + {\rm Q}^a{}_{[{k}} \, w_{a {k^\prime}]} = 0, \nonumber\\ & & \hat{w}_{\alpha \lambda} \, \hat{\rm Q}^\alpha{}_\rho = \hat{\rm Q}^\alpha{}_\lambda \, \hat{w}_{\alpha \rho}\,.
\nonumber \eea This hugely simplifies the generically complicated flux constraints. Now, the $T$-dual of the type IIA `special' flux choice given in eqn. (\ref{eq:condHalf-IIA}) turns out to be equivalent to switching off the following flux components on the type IIB side, \bea \label{eq:Tdual-condHalf-IIA} & & \hat{Q}^\alpha{}_0 = \hat{Q}^\alpha{}_{i} = Q^a{}_K = 0, \qquad R_K = 0, \\ & & \hat{Q}^{\alpha i} = \hat{Q}^{\alpha 0} = Q^{a K} = 0, \qquad R^K = 0, \nonumber \eea which means setting all the non-geometric ($Q$ as well as $R$) fluxes to zero on the type IIB side. Moreover, using the $T$-dual flux choice on the type IIB side as given in eqn. (\ref{eq:Tdual-condHalf-IIA}), one finds that the set of Bianchi identities on the type IIB side reduces to the following three constraints, \bea & & \hskip-1cm H_\Lambda \, \omega_{a}{}^{\Lambda} = H^\Lambda \, \omega_{\Lambda a}, \qquad \omega_{a}{}^{\Lambda} \, \omega_{b \Lambda} = \omega_{b}{}^{\Lambda} \, \omega_{a \Lambda}, \qquad \hat{\omega}_{\alpha}{}^{K} \, \hat{\omega}_{\beta K} = \hat{\omega}_{\beta}{}^{K} \, \hat{\omega}_{\alpha K}\,, \eea which is very much expected, as there are no non-zero ${\rm Q}$- and ${\rm R}$-flux components present in the current setting. As a side remark, let us point out that if the involutions are chosen as per the `simple solutions' explained earlier, i.e. those without $D$-terms, then there remain just two identities on the two sides, \bea & {\rm \bf IIA:} & \qquad {\rm H}_{[0} \, {\rm R}_{{k}]} + {\rm Q}^a{}_{[0} \, w_{a {k}]} = 0, \qquad {\rm H}_{[{k}} \, {\rm R}_{{k^\prime}]} + {\rm Q}^a{}_{[{k}} \, w_{a {k^\prime}]} = 0, \\ & {\rm \bf IIB:} & \qquad H_\Lambda \, \omega_{a}{}^{\Lambda} = H^\Lambda \, \omega_{\Lambda a}, \qquad \qquad \,\,\omega_{a}{}^{\Lambda} \, \omega_{b \Lambda} = \omega_{b}{}^{\Lambda} \, \omega_{a \Lambda}\, \nonumber \eea and even the above ones are absent if one sets $a =0$, i.e.
no $G^a$ moduli in IIB and equivalently no ${\rm N}^k$ moduli in IIA. Thus, with some orientifold settings one can have `special solutions' in which all the Bianchi identities are trivial! Note that all these identities are well in line with the $T$-duality transformations inherited from their generic structure before any simplification. \subsection*{A no-go condition for de-Sitter and slow-roll inflation:} As we have seen, the type IIA non-geometric setup with the `special solution' leads to a type IIB setup without any non-geometric flux. Now, following table \ref{tab_scalar-potential} of the dictionary in appendix \ref{sec_dictionary}, the type IIB scalar potential can be expressed as a sum of the following pieces, \bea \label{eq:no-goIIB} & & V_{\rm IIB}^{\rm RR} = \frac{e^{4\phi}}{4\,{\cal V}^2\, {\cal U}}\Bigl[f_0^2 + {\cal U}\, f^i \, {\cal G}_{ij} \, f^j + {\cal U}\, f_i \, {\cal G}^{ij} \,f_j + {\cal U}^2\, (f^0)^2\Bigr],\\ & & V_{\rm IIB}^{\rm NS1} = \frac{e^{2\phi}}{4\,{\cal V}^2\,{\cal U}}\Bigl[h_0^2 + {\cal U}\, h^i \, {\cal G}_{ij} \, h^j + {\cal U}\, h_i \, {\cal G}^{ij} \,h_j + {\cal U}^2\, (h^0)^2 \Bigr], \nonumber\\ & & V_{\rm IIB}^{\rm NS2} = \frac{e^{2\phi}}{4\,{\cal V}^2\,{\cal U}}\Bigl[\, {\cal V}\, {\cal G}^{ab}\,(h_{a0} \, h_{b0} + \frac{l_i\, l_j}{4} \,h_a{}^i\, h_b{}^j + \, h_{ai} \, h_{bj} \, u^{i}\, u^{j} + {\cal U}^2\, h_a{}^0\, h_b{}^0 \nonumber\\ & & \quad \qquad - \, \frac{l_i}{2}\, h_a{}^i\, h_{b0} - \frac{l_i}{2} \,h_{a0}\, h_b{}^i - {\cal U} \, u^i \, h_a{}^0 \, h_{bi} - {\cal U} \, u^i \, h_b{}^0 \, h_{ai}) \Bigr], \nonumber\\ & & V_{\rm IIB}^{\rm loc} = \frac{e^{3\phi}}{2\, {\cal V}^2} \left[f^0 h_0 - f^i h_i + f_i h^i - f_0 h^0\right], \nonumber\\ & & V_{\rm IIB}^{D} = \frac{e^{2\phi}}{4\,{\cal V}^2} \Bigl[{t}^\alpha \, {t}^\beta \, ( \hat{h}_{\alpha J} \,{\cal G}^{JK} \, \hat{h}_{\beta K} + \, \hat{h}_\alpha{}^J \,{\cal G}_{JK} \, \hat{h}_{\beta}{}^K) \Bigr], \nonumber \eea where $f_0, \, f_i,\, f^i,\, f^0,\, h_0, \, h_i,\,
h^i,\, h^0,\, h_{a0},\, h_{ai},\, h_a{}^0,\, h_a{}^i, \, \hat{h}_{\alpha K}$ and $\hat{h}_\alpha{}^K$ are the axionic flux orbits as defined in table \ref{tab_IIB-Fluxorbits}. However, as they do not depend on any of the saxions, their lengthy explicit expressions are not needed here. Also note that in this orientifold the following axionic flux orbits of table \ref{tab_IIB-Fluxorbits} are identically zero on the type IIB side, \bea & & \hskip-1cm h^\alpha{}_0 = h^\alpha{}_i = h^{\alpha i} = h^{\alpha0} = 0, \quad \quad \quad h_K{}^0 = h^{K0} = 0\,. \eea For studying the scalar potential in eqn. (\ref{eq:no-goIIB}), let us extract the volume factor by introducing a new modulus $\rho$, defining the two-cycle volume moduli as $t^\alpha = \rho\, \gamma^\alpha$, where the $\gamma^\alpha$ are angular K\"ahler moduli satisfying the constraint $\ell_{\alpha\beta\gamma} \gamma^\alpha \gamma^\beta \gamma^\gamma = 6$. This leads to the overall volume ${\cal V} = \rho^3$, and the volume-dependent moduli space metric simplifies as, \bea & & {\cal G}^{ab} = - \hat\ell^{ab}= - \frac{1}{\rho} (\hat\ell_{\alpha ab} \gamma^\alpha)^{-1}. \eea Also note that the moduli space metric ${\cal G}^{JK}$ and its inverse ${\cal G}_{JK}$ are independent of the volume moduli, in particular of the $\rho$ modulus. Subsequently, the scalar potential can be expressed as, \bea & & V = V_1 + V_2 + V_3 + V_4\,, \eea where, defining a new variable $\tau = e^{-\phi} \sqrt{\cal V} = e^{-\phi} \rho^{3/2}$, the above four pieces are given as, \bea & & V_1 = \frac{A_1}{\tau^4}, \qquad V_2 = \frac{A_2}{\tau^2 \, \rho^3}, \qquad V_3 = \frac{A_3}{\tau^2 \, \rho}, \qquad V_4= \frac{A_4}{\tau^3\, \rho^{3/2}}\,. \eea Here the $A_i$ depend on the complex structure moduli and the angular K\"ahler moduli, but not on the $\tau$ and $\rho$ moduli. In addition, one has $A_1 \geq 0$ and $A_2 \geq 0$, while the signs of $A_3$ and $A_4$ are not fixed.
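To make the origin of these scalings explicit, note that with $e^{\phi} = \rho^{3/2}/\tau$, ${\cal V} = \rho^3$, $t^\alpha = \rho \, \gamma^\alpha$ and ${\cal G}^{ab} \propto \rho^{-1}$, the saxionic prefactors of the individual pieces in eqn. (\ref{eq:no-goIIB}) scale as, \bea & & \hskip-1cm V_{\rm IIB}^{\rm RR} \propto \frac{e^{4\phi}}{{\cal V}^2} = \frac{1}{\tau^4}\,, \qquad \qquad V_{\rm IIB}^{\rm NS1} \propto \frac{e^{2\phi}}{{\cal V}^2} = \frac{1}{\tau^2\, \rho^3}\,, \nonumber\\ & & \hskip-1cm V_{\rm IIB}^{\rm NS2}, \; V_{\rm IIB}^{D} \propto \frac{e^{2\phi}}{{\cal V}^2}\, \rho^2 = \frac{1}{\tau^2\, \rho}\,, \qquad V_{\rm IIB}^{\rm loc} \propto \frac{e^{3\phi}}{{\cal V}^2} = \frac{1}{\tau^3\, \rho^{3/2}}\,, \nonumber \eea which identifies the four pieces with $V_1$, $V_2$, $V_3$ and $V_4$ respectively.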
Also note that we have combined the two pieces $V_{\rm IIB}^{\rm NS2}$ and $V_{\rm IIB}^{D}$, as they have the same scaling in the $\rho$ and $\tau$ moduli. This leads to the following relation, \bea & & -3\, \tau\, \partial_\tau V - \rho \, \partial_\rho V = 12 V_1 + 9 V_2 + 7 V_3 + \frac{21}{2}\, V_4. \eea This apparently shows that the necessary condition for a de-Sitter no-go scenario, which one usually obtains in the $(\tau, \rho)$-plane, is evaded. But after checking the trace and determinant of the Hessian in the $(\tau, \rho)$-plane, one finds that the determinant of the Hessian evaluated at the extremum is never positive, hence confirming a no-go case due to the presence of tachyons. Such a type IIB setup with $D3/D7$ and $O3/O7$, having $F_3$, $H_3$ and the geometric flux, has also been studied in \cite{Shiu:2011zt, Garg:2018reu}, where it was concluded that no stable de-Sitter vacua can be realized in this type IIB setting. Thus, from our $T$-duality rules, we conclude the following de-Sitter no-go condition on the dual type IIA side: \begin{mdframed} \noindent {\bf Type IIA No-Go theorem:} In the framework of non-geometric type IIA orientifold compactification with $O6$ planes, one cannot have a de-Sitter solution by merely considering the RR fluxes $F_0, F_2, F_4, F_6$ along with the `special solutions' of the NS-NS Bianchi identities. \end{mdframed} Note that, given that certain non-geometric flux components remain present on the dual type IIA side even for the special solutions of the Bianchi identities, this de-Sitter no-go condition would not have been easy to guess a priori, before the explicit computations are done; from the type IIB side, however, it is not hard to infer. \subsection{IIB with `special solution' $\equiv$ IIA with geometric-flux $\equiv$ $\nexists$ dS no-go} Similar to the type IIA case, one can observe from the eqn.
(\ref{eq:IIBBIs2}) that many of the type IIB Bianchi identities also appear in the form of orthogonal symplectic vectors, and therefore half of the flux components can be rotated away, as presented below: \bea \label{eq:condHalf-IIB} & & H^0 = 0 = H^i, \qquad \omega_a{}^0 = 0 = \omega_a{}^i, \qquad \hat{Q}^{\alpha 0} = 0 = \hat{Q}^{\alpha i}, \\ & & \hat{\omega}_{\alpha}{}^K =0, \qquad Q^{a K} =0, \qquad R^K = 0\,. \nonumber \eea Now, one can observe that the `special' flux choice in eqn. (\ref{eq:condHalf-IIB}) results in all the type IIB Bianchi identities except the following two being trivially satisfied, \bea \label{eq:condHalf-IIB2} & & H_0 \, R_K + \omega_{a 0} \, Q^a{}_K + \hat{Q}^\alpha{}_0 \, \hat{\omega}_{\alpha K} = 0, \\ & & H_i \, R_K + \omega_{a i} \, Q^a{}_K + \hat{Q}^\alpha{}_i \, \hat{\omega}_{\alpha K} = 0\,. \nonumber \eea Moreover, the type IIB `special solution' given in eqn. (\ref{eq:condHalf-IIB}) is equivalent to switching off the following $T$-dual fluxes on the type IIA side, \bea \label{eq:Tdual-condHalf-IIB} & & {\rm Q}^a{}_0 = {\rm Q}^a{}_k = {\rm Q}^{a \lambda} = 0, \qquad \hat{\rm Q}^{\alpha 0} = \hat{\rm Q}^{\alpha k} = \hat{\rm Q}^{\alpha}{}_\lambda =0, \\ & & {\rm R}_0 = {\rm R}_k = {\rm R}^\lambda =0\,. \nonumber \eea This immediately implies that the type IIB `special solution' corresponds to setting all the non-geometric fluxes to zero on the type IIA side. Further, using $T$-duality, the two constraints given in eqn. (\ref{eq:condHalf-IIB2}) translate into the following two constraints on the type IIA side, \bea \label{eq:Tdual-condHalf-IIB2} & & {\rm H}^{\lambda} \, \hat{w}_{\alpha\lambda} = {\rm H}_{\hat{k}} \, \hat{w}_\alpha{}^{\hat{k}}, \qquad w_a{}^\lambda \, \hat{w}_{\alpha \lambda} = w_{a \hat{k}} \, \hat{w}_\alpha{}^{\hat{k}}\,, \eea which is very much expected, as there are no non-zero ${\rm Q}$- and ${\rm R}$-flux components present in this setting.
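Using the $T$-duality dictionary of table \ref{tab_fluxTdual}, this correspondence can be made manifest flux-by-flux: the components switched off in eqn. (\ref{eq:condHalf-IIB}) map as, \bea & & \hskip-1cm \{H^0, \; \omega_a{}^0, \; \hat{Q}^{\alpha 0}\} \, \longleftrightarrow \, \{-\,{\rm R}_0, \; -\,{\rm R}_k, \; -\,{\rm R}^\lambda\}\,, \qquad \{H^i, \; \omega_a{}^i, \; \hat{Q}^{\alpha i}\} \, \longleftrightarrow \, \{{\rm Q}^a{}_0, \; {\rm Q}^a{}_k, \; {\rm Q}^{a \lambda}\}\,, \nonumber\\ & & \hskip-1cm \{R^K, \; Q^{a K}, \; \hat{\omega}_{\alpha}{}^K\} \, \longleftrightarrow \, \{-\,\hat{\rm Q}^{\alpha 0}, \; -\,\hat{\rm Q}^{\alpha k}, \; \hat{\rm Q}^{\alpha}{}_\lambda\}\,, \eea so that the vanishing of the type IIB flux components on the left is precisely the vanishing of the type IIA ${\rm Q}$- and ${\rm R}$-fluxes in eqn. (\ref{eq:Tdual-condHalf-IIB}).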
As a side remark, one can observe that for a trivial even $(2,1)$-cohomology on the type IIB side, the `special solution' is sufficient to satisfy all the flux constraints, as the constraints in eqn. (\ref{eq:condHalf-IIB2}) become trivial. On the $T$-dual type IIA side, this corresponds to having a trivial even $(1,1)$-cohomology, which trivially satisfies eqn. (\ref{eq:Tdual-condHalf-IIB2}). A summary of the results of this section is presented in table \ref{tab_special-flux-sol}. \begin{table}[h!] \begin{center} \begin{tabular}{|c||c||c|c|} \hline & & & \\ Scenario & $\exists$ no-go & Type IIA with $D6/O6$ \quad & \quad Type IIB with \\ & & & $D3/O3$ and $D7/O7$ \\ \hline \hline & & & \\ Type IIA & Yes & ${\rm H}_0$, \quad $w_{a0}$, \quad ${\rm Q}^a{}_0$, \quad ${\rm R}_0$, & $H_0$, \quad $H_i$, \quad $H^i$, \quad $- H^0$, \\ with & & & \\ special & & ${\rm H}_k$, \quad $w_{ak}$, \quad ${\rm Q}^a{}_k$, \quad ${\rm R}_k$, & $\omega_{a0}$, \quad $\omega_{ai}$, \quad $\omega_a{}^i$, \quad $- \omega_{a}{}^0$, \\ solutions & & & \\ & & $e_0$, \quad $e_a$, \quad $m^a$, \quad $m_0$. & $F_0$, \quad $F_i$, \quad $F^i$, \quad $- F^0$. \\ & & & \\ & & $\hat{w}_{\alpha \lambda}$, \quad $\hat{\rm Q}^{\alpha}{}_\lambda$. & $\hat{\omega}_{\alpha K}$, \quad $\hat{\omega}_{\alpha}{}^K$.\\ & & & \\ & & & (Type IIB with geometric flux) \\ & & & \\ \hline \hline & & & \\ Type IIB & No & ${\rm H}_0$, \quad ${\rm H}_k$, \quad ${\rm H}^\lambda$, & $H_0$, \quad $\omega_{a0}$, \quad $\hat{Q}^\alpha{}_0$, \\ with & & & \\ special & & $w_{a0}$, \quad $w_{ak}$, \quad $w_a{}^\lambda$, & $H_i$, \quad $\omega_{ai}$, \quad $\hat{Q}^\alpha{}_{i}$, \\ solution & & & \\ & & $e_0$, \quad $e_a$, \quad $m^a$, \quad $m_0$. & $F_0$, \quad $F_i$, \quad $F^i$, \quad $- F^0$.
\\ & & & \\ & & $\hat{w}_\alpha{}^0$, \quad $\hat{w}_\alpha{}^k$, \quad $\hat{w}_{\alpha \lambda}$, & $-\,R_K$, \quad $-\,Q^a{}_K$, \quad $\hat{\omega}_{\alpha K}$,\\ & & & \\ & & (Type IIA with geometric flux) & \\ \hline \end{tabular} \end{center} \caption{Possible non-zero fluxes in the special solutions of Bianchi identities.} \label{tab_special-flux-sol} \end{table} \noindent \section{No-Go 1} \label{sec_nogo1} In this section we present the de-Sitter no-go scenario realized in the context of type IIA flux compactification with the inclusion of the NS-NS $H_3$ flux, and the standard R-R fluxes, namely the $F_0, F_2, F_4, F_6$ flux \cite{Hertzberg:2007wc}. First we revisit the ingredients of the no-go condition and then we will $T$-dualize the same to investigate the no-go condition in the type IIB theory. \subsection{Type IIA with RR-flux and $H_3$-flux} In the absence of any geometric and non-geometric fluxes in the type IIA flux compactifications, the generic four-dimensional scalar potential presented in the table \ref{tab_scalar-potential} simplifies to a form given as under, \bea \label{eq:nogo1-IIA1} & & \hskip-0.5cm V_{\rm IIA} = \frac{e^{4D}}{4\, {\cal V}}\biggl[{\rm f}_0^2 + {\cal V}\, {\rm f}^a \, \tilde{\cal G}_{ab} \, {\rm f}^b + {\cal V}\, {\rm f}_a \, \tilde{\cal G}^{ab} \,{\rm f}_b + {\cal V}^2\, ({\rm f}^0)^2\biggr]\,\\ & & \hskip0.5cm + \, \frac{e^{2D}}{4\,{\cal V}}\biggl[\frac{{\rm h}_0^2}{\cal U} +\, \tilde{\cal G}^{ij}\,{\rm h}_{i0} \, {\rm h}_{j0} + \,\tilde{\cal G}_{\lambda \rho} {\rm h}^\lambda{}_0 \, {\rm h}^\rho{}_0 \biggr]\, + \frac{e^{3D}}{2\, \sqrt{\cal U}} \left[{\rm f}^0 \, {\rm h}_0 - \frac{k_\lambda}{2} \, {\rm f}^0\, {\rm h}^\lambda{}_0 \right], \nonumber \eea where the various ``axionic flux orbits" defined in table \ref{tab_IIA-Fluxorbits} are simplified to the following form, \bea & & \hskip-1cm {\rm f}_0 = e_0 + \, {\rm b}^a\, e_a + \frac{1}{2} \, \kappa_{abc} \, {\rm b}^a\, {\rm b}^b \,m^c + \frac{1}{6}\, \kappa_{abc}\, 
{\rm b}^a \, {\rm b}^b\, {\rm b}^c \, m_0 - \, \xi^{0} \, {\rm H}_0 - \, \xi^k \, {\rm H}_k - {\xi}_\lambda \, {\rm H}^\lambda \,, \nonumber\\ & & \hskip-1cm {\rm f}_a = e_a + \, \kappa_{abc} \, {\rm b}^b \,m^c + \frac{1}{2}\, \kappa_{abc}\, {\rm b}^b\, {\rm b}^c \, m_0\,, \quad {\rm f}^a = m^a + m_0\, {\rm b}^a\,, \quad {\rm f}^0 = m_0\,, \\ & & \hskip-1cm {\rm h}_0 = {\rm H}_0 + {\rm z^k} \,{\rm H}_k + \, \frac{1}{2} \, \hat{k}_{\lambda mn} \rm z^m \rm z^n \, {\rm H}^\lambda, \quad {\rm h}_{k0} = {\rm H}_k + \, \hat{k}_{\lambda k n}\, {\rm z^n} \, {\rm H}^\lambda, \quad {\rm h}^\lambda{}_0 = {\rm H}^\lambda\,. \nonumber \eea We further introduce a new modulus $\rho$ through a redefinition in the overall volume (${\cal V}$) of the Calabi Yau threefold by considering the two-cycle volume moduli $t^a$ via $t^a = \rho \, \gamma^a$, where $\gamma^a$'s denote the angular K\"ahler moduli satisfying the constraint $\kappa_{abc}\gamma^a\gamma^b\gamma^c = 6$ implying ${\cal V} = \rho^3$. Now we can extract the volume factor $\rho$ from the K\"ahler moduli space metric and its inverse in the following way, \bea \label{eq:IIAmetric-rho} & & \hskip-1.5cm \tilde{\cal G}_{ab} = \frac{\kappa_a\, \kappa_b - 4\, {\cal V}\, \kappa_{ab}}{4\,{\cal V}} = \rho \, \tilde{g}_{ab}, \qquad \, \, \tilde{\cal G}^{ab} = \frac{2\, {\rm t}^a \, {\rm t}^b - 4\, {\cal V}\, \kappa^{ab}}{4\,{\cal V}} = \frac{1}{\rho} \, \tilde{g}^{ab}\,, \end{eqnarray} where $\tilde{g}_{ab}$ and the inverse $\tilde{g}^{ab}$ do not depend on $\rho$ modulus. Subsequently the scalar potential in eqn. 
(\ref{eq:nogo1-IIA1}) can be written as under, \bea \label{eq:nogo1-IIA2} & & V_{\rm IIA} = \frac{e^{4D}}{4\,\rho^3}\biggl[{\rm f}_0^2 + \rho^2\, {\rm f}_a \, \tilde{g}^{ab} \,{\rm f}_b + \rho^4\, {\rm f}^a \, \tilde{g}_{ab} \, {\rm f}^b + \rho^6\, ({\rm f}^0)^2\biggr]\,\\ & & \hskip1cm + \, \frac{e^{2D}}{4\,\rho^3}\biggl[\frac{{\rm h}_0^2}{\cal U} +\, \tilde{\cal G}^{ij}\,{\rm h}_{i0} \, {\rm h}_{j0} + \,\tilde{\cal G}_{\lambda \rho} {\rm h}^\lambda{}_0 \, {\rm h}^\rho{}_0 \biggr]\,+ \frac{e^{3D}}{2\, \sqrt{\cal U}} \left[{\rm f}^0 \, {\rm h}_0 - \frac{k_\lambda}{2} \, {\rm f}^0\, {\rm h}^\lambda{}_0 \right]. \nonumber \eea Now for the above potential, one can easily show that the following inequality holds, \bea & & \hskip-1cm 3\, \partial_D\,V_{\rm IIA} - \rho \, \partial_\rho V_{\rm IIA} = 9 \, V_{\rm IIA} + \frac{e^{4D}}{4\,\rho^3}\biggl[6\, {\rm f}_0^2 + 4\, \rho^2\, {\rm f}_a \, \tilde{g}^{ab} \,{\rm f}_b + 2\, \rho^4\, {\rm f}^a \, \tilde{g}_{ab} \, {\rm f}^b \biggr] \geq 9 \, V_{\rm IIA}\,, \eea where in the last step we have used the fact that all the additional terms in the bracket are guaranteed to be non-negative. This immediately leads to a de-Sitter no-go theorem because at this extremum $\partial_D\,V_{\rm IIA} = 0 = \partial_\rho V_{\rm IIA}$, the potential is evaluated to take non-positive values as we see below, \bea & & V_{\rm IIA}^{\rm ext} = - \frac{1}{9} \times \frac{e^{4D}}{4\,\rho^3}\biggl[6\, {\rm f}_0^2 + 4\, \rho^2\, {\rm f}_a \, \tilde{g}^{ab} \,{\rm f}_b + 2\, \rho^4\, {\rm f}^a \, \tilde{g}_{ab} \, {\rm f}^b \biggr] \leq 0\,. 
\eea Moreover, one has the following inequality on the inflationary slow-roll $\epsilon$ parameter, \bea & & \epsilon \geq V_{\rm IIA}^{-2} \biggl[\frac{\rho^2}{3} {(\partial_\rho V_{\rm IIA})}^2 + \frac{1}{4} {(\partial_D V_{\rm IIA})}^2 \biggr] \\ & & \hskip0.3cm = V_{\rm IIA}^{-2} \biggl[\frac{1}{39} (3\, \partial_D\,V_{\rm IIA} - \rho \, \partial_\rho V_{\rm IIA})^2 + \frac{1}{52} (\partial_D\,V_{\rm IIA} + 4\, \rho \, \partial_\rho V_{\rm IIA})^2 \biggr] \geq \frac{27}{13}\,. \nonumber \eea This clearly forbids slow-roll inflation in this simple framework, as proposed in \cite{Hertzberg:2007wc, Flauger:2008ad}. \subsection{$T$-dual de-Sitter no-go-1 in type IIB} Now we invoke the $T$-dual of this type IIA no-go scenario and investigate the type IIB side. The type IIB fluxes which are $T$-dual to the non-zero type IIA fluxes are given in table \ref{tab_no-go1}. \noindent \begin{table}[H] \begin{center} \begin{tabular}{|c||c|c|c|c||c|c|c|} \hline & &&&&&&\\ IIA & $e_0 $ & $e_a$ & $m^a$ & $m_0$ & ${\rm H}_0$ & ${\rm H}_k$ & ${\rm H}^\lambda$ \\ & &&&&&&\\ \hline & &&&&&&\\ IIB & ${F}_0 $ & ${F}_i$ & ${F}^i$ & $- {F}^0$ & ${H}_0$ & ${\omega}_{a0}$ & $\hat{Q}^\alpha{}_0$ \\ & &&&&&&\\ \hline \end{tabular} \end{center} \caption{Non-zero Type IIA fluxes and their respective $T$-duals for {\bf No-Go 1}.} \label{tab_no-go1} \end{table} \noindent This shows that the type IIB side can generically have all the components of the $F_3$ flux, while in the NS-NS sector only the `rigid' fluxes are allowed; note that, due to a mixing under $T$-duality, some (non-)geometric flux components are present, unlike in the type IIA case. We call $H_0, \, \omega_{a0}$ and $\hat{Q}^\alpha{}_0$ `rigid fluxes' because they are the ones which are allowed in a type IIB framework without complex structure moduli.
However, by saying this we do not mean that our $T$-dual approach is valid for rigid Calabi Yau compactifications, as it is well known that the mirror of a rigid Calabi Yau is not a Calabi Yau \cite{Candelas:1993nd, Sethi:1994ch, Hori:2000kt}. We have studied the scalar potentials arising in rigid compactifications separately in \cite{Shukla:2019akv}, and throughout this work we assume that the compactifications are on non-rigid threefolds. For the present case, this type IIB scenario only reflects the fact that we have just the rigid fluxes turned on, setting the others to zero, and for this a no-go should exist. Having no (non-)geometric fluxes present, there are no Bianchi identities to satisfy on the type IIA side, and the same is true on the type IIB side as well, despite the presence of some rigid (non-)geometric fluxes\footnote{This is what one would expect from the set of Bianchi identities known to us in the cohomology formulation, though there are several observations based on toroidal examples suggesting that a few identities may be missing in this approach \cite{Ihl:2007ah, Robbins:2007yv, Shukla:2016xdy, Gao:2018ayp, Shukla:2019akv}.}.
The dual scalar potential for the type IIB side can be read-off from the table \ref{tab_scalar-potential} as under, \bea \label{eq:nogo1-IIB1} & & \hskip-0.5cm V_{\rm IIB} = \frac{e^{4\phi}}{4\,{\cal V}^2\, {\cal U}}\biggl[f_0^2 + {\cal U}\, f_i \, {\cal G}^{ij} \,f_j + {\cal U}\, f^i \, {\cal G}_{ij} \, f^j + {\cal U}^2\, (f^0)^2\biggr]\,\\ & & \hskip0.5cm + \frac{e^{2\phi}}{4\,{\cal V}^2\,{\cal U}}\biggl[h_0^2 + \, {\cal V}\, {\cal G}^{ab}\,h_{a0} \, h_{b0} + \, {\cal V} \,{\cal G}_{\alpha \beta}\,h^\alpha{}_0 \, h^\beta{}_0 \biggr]\, + \frac{e^{3\phi}}{2\,{\cal V}^2} \left[{f}^0 \, {h}_0 - \frac{\ell_\alpha}{2} \, {f}^0\, {h}^\alpha{}_0 \right]\,, \nonumber \eea where the simplified axionic flux orbits following from table \ref{tab_IIB-Fluxorbits} are given as under, \bea & & \hskip-0.5cm f_0 = F_0 + v^i\, {F}_i + \frac{1}{2}\, l_{ijk}\, v^j\, v^k \, {F}^i\, - \frac{1}{6}\, l_{ijk}\, v^i \, v^j\, v^k \, {F}^0 \\ & & \hskip0.5cm - \omega_{a0} \, {c}^a - \hat{Q}^\alpha{}_0 \, \hat{c}_\alpha - \, c_0 \, \Bigl(H_0 + \omega_{a0} \, {b}^a + \frac{1}{2}\, \hat{\ell}_{\alpha a b}\, b^a b^b \, \hat{Q}^\alpha{}_0 \Bigr)\,, \nonumber\\ & & \hskip-0.5cm f_i = {F}_i +\, l_{ijk}\, v^j \, {F}^k - \frac{1}{2}\, l_{ijk}\, v^j\, v^k \, {F}^0 , \quad f^i = F^i - v^i\, F^0, \quad f^0 = -\, F^0\,, \nonumber\\ & & \hskip-0.5cm h_0 = H_0 + \omega_{a0} \, {b}^a + \frac{1}{2}\, \hat{\ell}_{\alpha a b}\, b^a b^b \, \hat{Q}^\alpha{}_0, \qquad h_{a0} = \omega_{a0} + \hat{Q}^\alpha{}_0 \, \hat{\ell}_{\alpha a b}\, b^b, \qquad h^\alpha{}_0 = \hat{Q}^\alpha{}_0\,. \nonumber \eea Although the no-scale structure on the type IIB side is broken by the presence of the non-zero $\hat{Q}^\alpha{}_0$-flux, which couples to the $T_\alpha$ moduli in the superpotential and is subsequently reflected via the appearance of the moduli space metric ${\cal G}_{\alpha\beta}$ in the scalar potential (\ref{eq:nogo1-IIB1}), this does not lead to a de-Sitter solution, as suggested by the dual type IIA side.
Thus the type IIA no-go condition tells us something interesting, and harder to guess a priori, on the type IIB side. In order to check that this duality based claim is true, all we need to do is to swap the role of the K\"ahler moduli with the complex structure moduli. Along these lines, similar to the case of the volume modulus ${\cal V}$, we now define a new modulus $\sigma$ from the saxions of the complex structure moduli such that $u^i = \sigma\, \lambda^i$, which leads to ${\cal U} = \sigma^3$ subject to the condition $l_{ijk}\, \lambda^i \, \lambda^j \, \lambda^k =6$ satisfied by the angular complex structure moduli $\lambda^i$ on the type IIB side. Now we can extract the $\sigma$ factor from the complex structure moduli space metric and its inverse in the following way, \bea \label{eq:IIBmetric-sigma} & & \hskip-1.5cm {\cal G}_{ij} = \frac{l_i\, l_j - 4\, {\cal U}\, l_{ij}}{4\,{\cal U}} = \sigma \, {g}_{ij}, \qquad \, \, {\cal G}^{ij} = \frac{2\, {u}^i \, {u}^j - 4\, {\cal U}\, l^{ij}}{4\,{\cal U}} = \frac{1}{\sigma} \, {g}^{ij}\,, \eea where $g_{ij}$ and $g^{ij}$ depend only on the angular complex structure moduli and not on the $\sigma$ modulus. Using this information the scalar potential in eqn. (\ref{eq:nogo1-IIB1}) can be written as under, \bea & & \hskip-1cm V_{\rm IIB} = \frac{e^{4\phi}}{4\,{\cal V}^2\, \sigma^3}\biggl[f_0^2 + \sigma^2\, f_i \, {g}^{ij} \,f_j + \sigma^4\, f^i \, {g}_{ij} \, f^j + \sigma^6\, (f^0)^2\biggr]\,\\ & & \hskip0.5cm + \frac{e^{2\phi}}{4\,{\cal V}^2\,\sigma^3}\biggl[h_0^2 + \, {\cal V}\, {\cal G}^{ab}\,h_{a0} \, h_{b0} + \, {\cal V} \,{\cal G}_{\alpha \beta}\,h^\alpha{}_0 \, h^\beta{}_0 \biggr]\, + \frac{e^{3\phi}}{2\,{\cal V}^2} \left[{f}^0 \, {h}_0 - \frac{\ell_\alpha}{2} \, {f}^0\, {h}^\alpha{}_0 \right].
\nonumber \eea Subsequently it is not hard to show that the following inequality holds, \bea & & \hskip-1.5cm 3\, \partial_\phi \,V_{\rm IIB} - \sigma \, \partial_\sigma V_{\rm IIB} = 9 V_{\rm IIB} + \frac{e^{4\phi}}{4\,{\cal V}^2\, \sigma^3}\biggl[6\, f_0^2 + 4\, \sigma^2\, f_i \, {g}^{ij} \,f_j + 2\, \sigma^4\, f^i \, {g}_{ij} \, f^j \biggr] \geq 9 V_{\rm IIB}\,, \eea where in the last step we have used the fact that all the additional terms in the bracket are guaranteed to be positive semidefinite. This immediately leads to a de-Sitter no-go theorem because at this extremum $\partial_\phi V_{\rm IIB} = 0 = \partial_\sigma V_{\rm IIB}$, the potential can only take non-positive values as we see below, \bea & & V_{\rm IIB}^{\rm ext} = - \frac{1}{9} \times \frac{e^{4\phi}}{4\,{\cal V}^2\, \sigma^3}\biggl[6\, f_0^2 + 4\, \sigma^2\, f_i \, {g}^{ij} \,f_j + 2\, \sigma^4\, f^i \, {g}_{ij} \, f^j \biggr] \leq 0\,. \eea Thus we are able to prove an interesting de-Sitter no-go theorem on the type IIB side. \begin{mdframed} \noindent {\bf Type IIB No-Go theorem 1:} In the framework of type IIB non-geometric flux compactification with $O3/O7$ orientifold planes, one cannot have a de-Sitter solution by considering the RR flux $F_3$ along with the rigid NS-NS flux components $H_0, \,\omega_{a0}$ and $\hat{Q}^\alpha{}_0$ only. \end{mdframed} \section{No-Go 2} \label{sec_nogo2} In this section we consider another no-go condition found in the type IIA framework, which in addition to the ingredients of the no-go-1 scenario also includes the geometric flux \cite{Haque:2008jz, Caviezel:2008tf, Flauger:2008ad}, and subsequently we will $T$-dualize the same to invoke its type IIB counterpart. \subsection{Type IIA with RR-flux, $H_3$-flux and $\omega$-flux} This type IIA de-Sitter no-go scenario includes the NS-NS $H_3$ flux, the geometric flux $w$, and the standard R-R fluxes, namely the $F_0, \, F_2, \, F_4$ and the $F_6$ flux \cite{Haque:2008jz, Caviezel:2008tf, Flauger:2008ad}.
However, there are no non-geometric fluxes turned on, i.e. ${\rm Q}^{a}{}_{\hat{k}} = {\rm Q}^{a\lambda} = \hat{{\rm Q}}^{\alpha \hat{k}} = \hat{{\rm Q}}^\alpha{}_\lambda = 0$ and ${\rm R}_{\hat{k}} = 0 = {\rm R}^\lambda$. In order to get the scalar potential from our generic formula in table \ref{tab_scalar-potential} one simply has to set the following flux orbits to zero, \bea \label{eq:axionic-flux-nogo21} & & \hskip-1cm {\rm h}^a = 0 = {\rm h}^0, \qquad {\rm h}^a{}_k = 0 = {\rm h}_k{}^0, \qquad {\rm h}^{a\lambda} = 0 = {\rm h}^{\lambda 0}\,, \qquad \hat{{\rm h}}^{\alpha0} = 0 = \hat{{\rm h}}^\alpha{}_\lambda, \eea where the last two fluxes are parts of the $D$-term contributions via the ${\rm Q}$ flux. Switching off these non-geometric fluxes as in eqn. (\ref{eq:axionic-flux-nogo21}), the generic scalar potential given in table \ref{tab_scalar-potential} can be simplified to take a form given as under, \bea \label{eq:main4IIA-nogo2gen} & & \hskip-0.3cm V_{\rm IIA} = \frac{e^{4D}}{4\,{\cal V}}\biggl[{\rm f}_0^2 + {\cal V}\, {\rm f}_a \, \tilde{\cal G}^{ab} \,{\rm f}_b + {\cal V}\, {\rm f}^a \, \tilde{\cal G}_{ab} \, {\rm f}^b + {\cal V}^2\, ({\rm f}^0)^2\biggr] + \frac{e^{2D}}{4\,{\cal V}}\biggl[\frac{{\rm h}_0^2}{\cal U} +\, \tilde{\cal G}^{ij}\,{\rm h}_{i0} \, {\rm h}_{j0} + \,\tilde{\cal G}_{\lambda \rho} {\rm h}^\lambda{}_0 \, {\rm h}^\rho{}_0 \biggr]\,\nonumber\\ & & \quad \quad + \, \frac{e^{2D}}{4\,{\cal V}\,}\biggl[{\rm t}^{a}\, {\rm t}^{b} \left(\frac{{\rm h}_a \, {\rm h}_b}{\cal U} + \, \tilde{\cal G}^{ij}\, {\rm h}_{ai} \, {\rm h}_{bj} \, + \,\tilde{\cal G}_{\lambda \rho}\, {\rm h}_a{}^\lambda\, {\rm h}_b{}^\rho \right) + \frac{1}{\cal U} \bigl({\rm h}_a - \frac{k_\lambda}{2}\,{\rm h}_a{}^\lambda \bigr) \, \bigl({\cal V}\,\tilde{{\cal G}}^{ab} -{\rm t}^a {\rm t}^b\bigr) \nonumber\\ & & \quad \quad \times \bigl({\rm h}_b - \frac{k_\rho}{2}\,{\rm h}_b{}^\rho \bigr) \, + \, \frac{1}{\, {\cal U}}\, \left({\cal U} \, \hat{\rm h}_\alpha{}^{0} + {\rm z}^\lambda
\, \hat{\rm h}_{\alpha \lambda} \right) {\cal V}\,(\hat\kappa_{a\alpha\beta}\, {\rm t}^a)^{-1} \,\left({\cal U} \, \hat{\rm h}_\beta{}^{0} + {\rm z}^\rho \, \hat{\rm h}_{\beta \rho} \right)\biggr]\,, \\ & & \quad \quad + \frac{e^{3D}}{2\, \sqrt{\cal U}} \left[\left({\rm f}^0 \, {\rm h}_0 - {\rm f}^a\, {\rm h}_a \right) - \left({\rm f}^0\, {\rm h}^\lambda{}_0 - {\rm f}^a\, {\rm h}^\lambda{}_a \right)\, \frac{k_\lambda}{2} \right].\nonumber \eea where using the simplifications from the eqn. (\ref{eq:axionic-flux-nogo21}), the various non-zero ``axionic flux orbits" can be written out from the table \ref{tab_IIA-Fluxorbits} and those are simplified as under, \bea \label{eq:axionic-flux-nogo2} & & \hskip-0.3cm {\rm f}_0 = e_0 + \, {\rm b}^a\, e_a + \frac{1}{2} \, \kappa_{abc} \, {\rm b}^a\, {\rm b}^b \,m^c + \frac{1}{6}\, \kappa_{abc}\, {\rm b}^a \, {\rm b}^b\, {\rm b}^c \, m_0 \\ & & \hskip0.3cm - \, \xi^{0} \, ({\rm H}_0 + {\rm b}^a \, {w}_{a0}) - \, \xi^k \, ({\rm H}_k + {\rm b}^a \, {w}_{ak}) - {\xi}_\lambda \, ({\rm H}^\lambda + {\rm b}^a \, {w}_{a}{}^\lambda) \,, \nonumber\\ & & \hskip-0.3cm {\rm f}_a = e_a + \, \kappa_{abc} \, {\rm b}^b \,m^c + \frac{1}{2}\, \kappa_{abc}\, {\rm b}^b\, {\rm b}^c \, m_0 - \, \xi^{0} \, {w}_{a0} - \, \xi^k \, {w}_{ak} - {\xi}_\lambda \, {w}_a{}^\lambda\,, \nonumber\\ & & \hskip-0.3cm {\rm f}^a = m^a + m_0\, {\rm b}^a \,, \quad {\rm f}^0 = m_0\,, \nonumber\\ & & \nonumber\\ & & \hskip-0.3cm {\rm h}_0 = ({\rm H}_0 + {\rm b}^a \, {w}_{a0}) + {\rm z}^k \,({\rm H}_k + {\rm b}^a \, {w}_{ak}) + \, \frac{1}{2} \, \hat{k}_{\lambda mn} {\rm z}^m {\rm z}^n \, ({\rm H}^\lambda + {\rm b}^a \, {w}_{a}{}^\lambda), \nonumber\\ & & \hskip-0.3cm {\rm h}_{k0} = ({\rm H}_k + {\rm b}^a \, {w}_{ak}) + \, \hat{k}_{\lambda k n}\, {\rm z}^n \, ({\rm H}^\lambda + {\rm b}^a \, {w}_{a}{}^\lambda), \quad {\rm h}^\lambda{}_0 = ({\rm H}^\lambda + {\rm b}^a \, {w}_{a}{}^\lambda)\,, \nonumber\\ & & \hskip-0.3cm {\rm h}_a = w_{a0} + {\rm z}^k \,w_{ak} + \, 
\frac{1}{2} \, \hat{k}_{\lambda mn} {\rm z}^m {\rm z}^n \, w_a{}^\lambda, \quad {\rm h}_{ak} = w_{ak} + \, \hat{k}_{\lambda k n}\, {\rm z}^n \,w_a{}^\lambda, \quad {\rm h}_a{}^\lambda = w_a{}^\lambda \,,\nonumber\\ & & \nonumber\\ & & \hskip-0.3cm \hat{\rm h}_{\alpha\lambda} = \hat{w}_{\alpha \lambda} + \hat{k}_{\lambda km} \, {\rm z}^m \, \hat{w}_\alpha{}^{k} - \frac{1}{2} \hat{k}_{\lambda km} {\rm z}^k {\rm z}^m \hat{w}_\alpha{}^{0}, \quad \hat{h}_\alpha{}^{0} = \hat{w}_\alpha{}^{0}. \nonumber \eea Note that unlike the previous de-Sitter no-go scenario, now there can be non-trivial contributions generated from the $D$-terms via the geometric fluxes. Similar to the previous case, extracting the factor $\rho$ from the various volume moduli and metrics as in eqn. (\ref{eq:IIAmetric-rho}) the total scalar potential in eqn. (\ref{eq:main4IIA-nogo2gen}) simplifies to the following form, \bea \label{eq:main4IIA-nogo2} & & \hskip-0.3cm V_{\rm IIA} = \frac{e^{4D}}{4\,\rho^3}\biggl[{\rm f}_0^2 + \rho^2\, {\rm f}_a \, \tilde{g}^{ab} \,{\rm f}_b + \rho^4\, {\rm f}^a \, \tilde{g}_{ab} \, {\rm f}^b + \rho^6\, ({\rm f}^0)^2\biggr] + \frac{e^{2D}}{4\,\rho^3}\biggl[\frac{{\rm h}_0^2}{\cal U} +\, \tilde{\cal G}^{ij}\,{\rm h}_{i0} \, {\rm h}_{j0} + \,\tilde{\cal G}_{\lambda \rho} {\rm h}^\lambda{}_0 \, {\rm h}^\rho{}_0 \biggr]\,\nonumber\\ & & \quad \qquad + \, \frac{e^{2D}}{4\,\rho\,}\biggl[\gamma^{a}\, \gamma^{b} \left(\frac{{\rm h}_a \, {\rm h}_b}{\cal U} + \, \tilde{\cal G}^{ij}\, {\rm h}_{ai} \, {\rm h}_{bj} \, + \,\tilde{\cal G}_{\lambda \rho}\, {\rm h}_a{}^\lambda\, {\rm h}_b{}^\rho \right) + \frac{1}{\cal U} \bigl({\rm h}_a - \frac{k_\lambda}{2}\,{\rm h}_a{}^\lambda \bigr) \, \bigl(\tilde{g}^{ab} -\gamma^a \gamma^b\bigr) \nonumber\\ & & \quad \qquad \times \bigl({\rm h}_b - \frac{k_\rho}{2}\,{\rm h}_b{}^\rho \bigr) \, + \, \frac{1}{\, {\cal U}}\, \left({\cal U} \, \hat{\rm h}_\alpha{}^{0} + {\rm z}^\lambda \, \hat{\rm h}_{\alpha \lambda} \right) 
\,(\hat\kappa_{a\alpha\beta}\, \gamma^a)^{-1} \,\left({\cal U} \, \hat{\rm h}_\beta{}^{0} + {\rm z}^\rho \, \hat{\rm h}_{\beta \rho} \right)\biggr]\,, \\ & & \quad \qquad + \frac{e^{3D}}{2\, \sqrt{\cal U}} \left[\left({\rm f}^0 \, {\rm h}_0 - {\rm f}^a\, {\rm h}_a \right) - \left({\rm f}^0\, {\rm h}^\lambda{}_0 - {\rm f}^a\, {\rm h}^\lambda{}_a \right)\, \frac{k_\lambda}{2} \right].\nonumber \eea Now using the scalar potential in eqn. (\ref{eq:main4IIA-nogo2}) one can show that the following interesting relation holds, \bea & & \hskip-2cm \partial_D\,V_{\rm IIA} - \rho \, \partial_\rho V_{\rm IIA} = 3 \, V_{\rm IIA} + \frac{e^{2D}}{2\,\rho^3}\biggl[\frac{{\rm h}_0^2}{\cal U} +\, \tilde{\cal G}^{ij}\,{\rm h}_{i0} \, {\rm h}_{j0} + \,\tilde{\cal G}_{\lambda \rho} {\rm h}^\lambda{}_0 \, {\rm h}^\rho{}_0 \biggr] \nonumber\\ & & + \, \frac{e^{4D}}{4\,\rho^3}\biggl[4\, {\rm f}_0^2 + 2\, \rho^2\, {\rm f}_a \, \tilde{g}^{ab} \,{\rm f}_b - 2\, \rho^6\, ({\rm f}^0)^2\biggr]\,. \eea One can observe that when ${\rm f}^0 = m_0$ is set to zero, all the terms on the right hand side are non-negative, which results in $(\partial_D\,V_{\rm IIA} - \rho \, \partial_\rho V_{\rm IIA}) \geq 3 \, V_{\rm IIA}$, and hence in this situation a new no-go condition holds despite the fact that geometric fluxes are included. Moreover, one has the following inequality on the inflationary parameter $\epsilon$, \bea & & \epsilon \geq V_{\rm IIA}^{-2} \biggl[\frac{\rho^2}{3} {(\partial_\rho V_{\rm IIA})}^2 + \frac{1}{4} {(\partial_D V_{\rm IIA})}^2 \biggr] \\ & & \hskip0.3cm = V_{\rm IIA}^{-2} \biggl[\frac{1}{7} (\partial_D\,V_{\rm IIA} - \rho \, \partial_\rho V_{\rm IIA})^2 + \frac{1}{84} (3\,\partial_D\,V_{\rm IIA} + 4\, \rho \, \partial_\rho V_{\rm IIA})^2 \biggr] \geq \frac{9}{7}\,. \nonumber \eea However it is also true that the earlier no-go condition is evaded with the simultaneous presence of geometric flux and the Romans mass term.
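For completeness, the numerical bound quoted above can be traced to a simple quadratic-form identity: writing $x \equiv \partial_D\,V_{\rm IIA}$ and $y \equiv \rho\,\partial_\rho V_{\rm IIA}$, one can check term by term that \bea & & \frac{1}{7}\, (x - y)^2 + \frac{1}{84}\, (3\, x + 4\, y)^2 = \frac{x^2}{4} + \frac{y^2}{3}\,, \eea so that, for ${\rm f}^0 = 0$ and $V_{\rm IIA} > 0$, using $(x - y) \geq 3\, V_{\rm IIA}$ immediately gives $\epsilon \geq (x - y)^2/(7\, V_{\rm IIA}^2) \geq 9/7$. The analogous decomposition with coefficients $\frac{1}{39}$ and $\frac{1}{52}$ underlies the bound $\epsilon \geq \frac{27}{13}$ of the previous no-go scenario.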
The extremization conditions $\partial_D\,V_{\rm IIA} = 0 = \partial_\rho V_{\rm IIA}$ lead to the following form of the potential, \bea & & V_{\rm IIA}^{\rm ext} = - \frac{e^{2D}}{6\,\rho^3}\biggl[\frac{{\rm h}_0^2}{\cal U} +\, \tilde{\cal G}^{ij}\,{\rm h}_{i0} \, {\rm h}_{j0} + \,\tilde{\cal G}_{\lambda \rho} {\rm h}^\lambda{}_0 \, {\rm h}^\rho{}_0 \biggr] - \frac{e^{4D}}{12\,\rho^3}\biggl[4\, {\rm f}_0^2 + 2\, \rho^2\, {\rm f}_a \, \tilde{g}^{ab} \,{\rm f}_b - 2\, \rho^6\, ({\rm f}^0)^2\biggr]\,, \nonumber \eea which clearly opens up the possibility of getting de-Sitter by considering a large enough value of the Romans mass parameter ${\rm f}^0 = m_0$ \cite{Haque:2008jz}. \subsection{$T$-dual de-Sitter no-go-2 in type IIB} Now we turn to the $T$-dual version of this second type IIA no-go scenario on the type IIB side; $T$-dualizing the non-zero type IIA fluxes gives the flux ingredients of the type IIB setup, as collected in table \ref{tab_no-go2}. \noindent \begin{table}[H] \begin{center} \begin{tabular}{|c||c|c|c|c||c|c|c|c|c|c||c|c|c|} \hline & &&&&&& &&& &&&\\ IIA & $e_0 $ & $e_a$ & $m^a$ & $m_0$ & ${\rm H}_0$ & ${\rm H}_k$ & ${\rm H}^\lambda$ & ${w}_{a0}$ & ${w}_{ak}$ & ${w}_a{}^\lambda$ & $\hat{w}_{\alpha}{}^0$ & $\hat{w}_\alpha{}^k$ & $\hat{w}_{\alpha\lambda}$ \\ & &&&&&&&&&&&&\\ \hline & &&&&&&&&&&&&\\ IIB & ${F}_0 $ & ${F}_i$ & ${F}^i$ & $- F^0$ & ${H}_0$ & ${\omega}_{a0}$ & $\hat{Q}^\alpha{}_0$ & ${H}_i$ & ${\omega}_{ai}$ & $\hat{Q}^\alpha{}_i$ & $-{R}_K$ & $- Q^a{}_K$ & $\hat{\omega}_{\alpha K}$ \\ & &&&&&&&&&&&&\\ \hline \end{tabular} \end{center} \caption{Non-zero Type IIA fluxes and their respective $T$-duals for {\bf No-Go 2}.} \label{tab_no-go2} \end{table} \noindent It shows that for this scenario the dual type IIB side can get fairly complicated, with the RR ($F_3$) flux present along with all the (non-)geometric NS-NS fluxes, unlike the type IIA case.
Moreover, given that this scenario corresponds to type IIA without any non-geometric flux, as we have analysed in the previous section, it is dual to type IIB with the `special solution' of the Bianchi identities, in which half of the fluxes can be rotated away by a suitable symplectic transformation. Also, the Bianchi identities to worry about on the type IIA side and the dual type IIB side are simply the following ones, \bea & {\bf IIA:}& \quad {\rm H}^{\lambda} \, \hat{w}_{\alpha\lambda} = {\rm H}_{\hat{k}} \, \hat{w}_\alpha{}^{\hat{k}}, \qquad w_a{}^\lambda \, \hat{w}_{\alpha \lambda} = w_{a \hat{k}} \, \hat{w}_\alpha{}^{\hat{k}}\,;\\ & {\bf IIB:}& \quad H_0 \, R_K + \omega_{a 0} \, Q^a{}_K + \hat{Q}^\alpha{}_0 \, \hat{\omega}_{\alpha K} = 0, \quad H_i \, R_K + \omega_{a i} \, Q^a{}_K + \hat{Q}^\alpha{}_i \, \hat{\omega}_{\alpha K} = 0\,. \nonumber \eea For implementing the `special solution' of the Bianchi identities in the type IIB scalar potential, we need to switch off the following axionic flux orbits, \bea & & h^0 = 0 = h^i, \quad h_a{}^i = 0 = h_a{}^0, \quad h^{\alpha i} = 0 = h^{\alpha 0}, \qquad \hat{h}_\alpha{}^K = 0 = \hat{h}^K\,, \eea where the last two hatted fluxes are parts of the $D$-term contributions.
Using this simplification, and after a bit of reshuffling of terms, the dual scalar potential for the type IIB side can be subsequently read-off from the table \ref{tab_scalar-potential} and turns out to be given as under, \bea \label{eq:main4IIB-special} & & V_{\rm IIB} = \frac{e^{4\phi}}{4\,{\cal V}^2\, {\cal U}}\biggl[f_0^2 + {\cal U}\, f^i \, {\cal G}_{ij} \, f^j + {\cal U}\, f_i \, {\cal G}^{ij} \,f_j + {\cal U}^2\, (f^0)^2\biggr]\,\\ & & \quad \qquad + \frac{e^{2\phi}}{4\,{\cal V}^2\,{\cal U}}\biggl[h_0^2 + \, {\cal V}\, {\cal G}^{ab}\,h_{a0} \, h_{b0} + \, {\cal V} \,{\cal G}_{\alpha \beta} \, h^\alpha{}_0 \, h^\beta{}_0 \nonumber\\ & & \quad \qquad + \, u^i\, u^j\, \left(h_i \, h_j + {\cal V} \,{\cal G}_{\alpha \beta}\, h^\alpha{}_i\, h^\beta{}_j + {\cal V}\, {\cal G}^{ab}\, h_{ai} \, h_{bj} \right)\, \nonumber \\ & & \quad \qquad + \left({\cal U}\, {\cal G}^{ij} - u^i\, u^j \right) \left(h_i - \frac{\ell_\alpha}{2} \,h^\alpha{}_i \right) \left(h_j - \frac{\ell_\beta}{2} \,h^\beta{}_j \right) \nonumber\\ & & \quad \qquad + \, {\cal U}\, \left({\cal V} \, \hat{h}_J{}^{0} - {t}^\alpha \, \hat{h}_{\alpha J} \right) \,(\hat\ell_{iJK}\, u^i)^{-1} \,\left({\cal V} \, \hat{h}_K{}^{0} - {t}^\beta \, \hat{h}_{\beta K} \right) \biggr] \nonumber\\ & & \quad \qquad + \frac{e^{3\phi}}{\, {\cal V}^2} \, \left[\left(f^0 \, h_0 - f^i\, h_i \right)\, - \left(f^0\, h^\alpha{}_0 - f^i\, h^\alpha{}_i \right)\, \frac{\ell_\alpha}{2} \right],\nonumber \eea where the simplified version of the non-trivial axionic flux orbits are given as below, \bea \label{eq:IIBorbits-nogo2} & & \hskip-0.5cm f^0 = -\, F^0, \quad f^i = F^i - v^i\, F^0\,, \\ & & \hskip-0.5cm f_i = F_i +\, l_{ijk}\, v^j \, {F}^k\, - \frac{1}{2}\, l_{ijk}\, v^j\, v^k \, {F}^0 - \omega_{ai} \, {c}^a - \hat{Q}^\alpha{}_i \, \hat{c}_\alpha - \, c_0 \, h_i\,, \nonumber\\ & & \hskip-0.5cm f_0 = F_0 + v^i {F}_i + \frac{1}{2}\, l_{ijk} v^j v^k \, {F}^i - \frac{1}{6}\, l_{ijk} v^i v^j v^k \, {F}^0 - \omega_{a0} \, {c}^a - 
\hat{Q}^\alpha{}_0 \, \hat{c}_\alpha - \, c_0 \, h_0\,, \nonumber\\ & & \nonumber\\ & & \hskip-0.5cm h_0 = H_0 + \omega_{a0} \, {b}^a + \frac{1}{2}\, \hat{\ell}_{\alpha a b}\, b^a b^b \, \hat{Q}^\alpha{}_0 + v^i \, h_i\, , \quad h_i = H_i + \omega_{ai} \, {b}^a + \frac{1}{2}\, \hat{\ell}_{\alpha a b}\, b^a b^b \, \hat{Q}^\alpha{}_i\,,\nonumber\\ & & \hskip-0.5cm h_{a0} = \omega_{a0} + \hat{Q}^\alpha{}_0 \hat{\ell}_{\alpha a b}\, b^b + v^i \, h_{ai}, \qquad h_{ai} = \omega_{ai} + \hat{Q}^\alpha{}_i \, \hat{\ell}_{\alpha a b}\, b^b\,, \nonumber \\ & & \hskip-0.5cm h^\alpha{}_0 = \hat{Q}^\alpha{}_0 + v^i \, \hat{Q}^\alpha{}_i\,, \qquad h^\alpha{}_i = \hat{Q}^\alpha{}_i\,, \nonumber\\ & & \hskip-0.5cm \hat{h}_{\alpha K} = \hat{\omega}_{\alpha K}\, - Q^{a}{}_{K} \, \hat{\ell}_{\alpha a b} \, b^b + \frac{1}{2}\hat{\ell}_{\alpha a b} \, b^a \,b^b\, R_K, \qquad \hat{h}_K{}^0 = -\, R_K\,. \nonumber \eea Now, similar to the previous no-go-1 case, in order to prove that there is a new de-Sitter no-go scenario on the type IIB side with non-geometric flux, all we need to do is to swap the roles of the complex-structure and the K\"ahler moduli. To see this explicitly, we extract the $\sigma$ factor from the complex-structure moduli and the moduli space metrics as given in eqn. (\ref{eq:IIBmetric-sigma}).
This leads to the type IIB scalar potential being written as under, \bea \label{eq:main4IIB-special-sigma} & & V_{\rm IIB} = \frac{e^{4\phi}}{4\,{\cal V}^2\, \sigma^3}\biggl[f_0^2 + \sigma^2\, f_i \, {g}^{ij} \,f_j + \sigma^4\, f^i \, {g}_{ij} \, f^j + \sigma^6\, (f^0)^2 \biggr]\,\\ & & \quad \qquad + \frac{e^{2\phi}}{4\,{\cal V}^2\,\sigma^3}\biggl[\left(h_0^2 + \, {\cal V}\, {\cal G}^{ab}\,h_{a0} \, h_{b0} + \, {\cal V} \,{\cal G}_{\alpha \beta} \, h^\alpha{}_0 \, h^\beta{}_0 \right)\biggr] \nonumber\\ & & \quad \qquad + \frac{e^{2\phi}}{4\,{\cal V}^2\,\sigma} \biggl[\lambda^i\, \lambda^j\, \left(h_i \, h_j + {\cal V} \,{\cal G}_{\alpha \beta}\, h^\alpha{}_i\, h^\beta{}_j + {\cal V}\, {\cal G}^{ab}\, h_{ai} \, h_{bj} \right)\, \nonumber \\ & & \quad \qquad + \,\left({g}^{ij} - \lambda^i\, \lambda^j \right) \left(h_i - \frac{\ell_\alpha}{2} \,h^\alpha{}_i \right) \left(h_j - \frac{\ell_\beta}{2} \,h^\beta{}_j \right) \nonumber\\ & & \quad \qquad + \, \left({\cal V} \, \hat{h}_J{}^{0} - {t}^\alpha \, \hat{h}_{\alpha J} \right) \,(\hat\ell_{iJK}\, \lambda^i)^{-1} \,\left({\cal V} \, \hat{h}_K{}^{0} - {t}^\beta \, \hat{h}_{\beta K} \right) \biggr] \nonumber\\ & & \quad \qquad + \frac{e^{3\phi}}{2\, {\cal V}^2} \, \left[\left(f^0 \, h_0 - f^i\, h_i \right)\, - \left(f^0\, h^\alpha{}_0 - f^i\, h^\alpha{}_i \right)\, \frac{\ell_\alpha}{2} \right],\nonumber \eea where the angular moduli $\lambda^i$'s and the metrics $g^{ij}, g_{ij}$ do not have any dependence on the $\sigma$-modulus.
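Before extremizing, it is useful to record the $(\phi, \sigma)$ scaling weights of the various pieces above (a simple bookkeeping): any term $V_{(p,q)} \propto e^{p\phi}\, \sigma^{q}$ satisfies $\partial_\phi V_{(p,q)} - \sigma\, \partial_\sigma V_{(p,q)} = (p - q)\, V_{(p,q)}$, and the terms of the potential carry \bea & & (p,q) = (4,-3),\, (4,-1),\, (4,1),\, (4,3)\, : \quad (p-q) = 7,\, 5,\, 3,\, 1\,, \nonumber\\ & & (p,q) = (2,-3)\, : \quad (p-q) = 5\,, \qquad (p,q) = (2,-1)\, : \quad (p-q) = 3\,, \qquad (p,q) = (3,0)\, : \quad (p-q) = 3\,. \eea Hence every term has weight at least $3$, except the $\sigma^6\,(f^0)^2$ piece which has weight $1$, and this bookkeeping is what produces the relation derived next.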
Subsequently it is not hard to show that the following relation holds, \bea & & \hskip-1cm \partial_\phi \,V_{\rm IIB} - \sigma \, \partial_\sigma V_{\rm IIB} = 3 V_{\rm IIB} + \frac{e^{2\phi}}{2\,{\cal V}^2\,\sigma^3}\biggl[\left(h_0^2 + \, {\cal V}\, {\cal G}^{ab}\,h_{a0} \, h_{b0} + \, {\cal V} \,{\cal G}_{\alpha \beta} \, h^\alpha{}_0 \, h^\beta{}_0 \right)\biggr] \nonumber\\ & & \hskip2cm+ \frac{e^{4\phi}}{4\,{\cal V}^2\, \sigma^3}\biggl[4\, f_0^2 + 2\, \sigma^2\, f_i \, {g}^{ij} \,f_j - 2\, \sigma^6\, (f^0)^2\biggr]\,. \eea The last term is the only non-positive term, and this shows that for $f^0 \equiv - F^0 = 0$ we have the inequality $(\partial_\phi \,V_{\rm IIB} - \sigma \, \partial_\sigma V_{\rm IIB}) \geq 3 V_{\rm IIB}$. This immediately leads to a de-Sitter no-go theorem because at this extremum $\partial_\phi V_{\rm IIB} = 0 = \partial_\sigma V_{\rm IIB}$, the potential is allowed to take only non-positive values as long as $f^0 = 0$, as we see below, \bea & & V_{\rm IIB}^{\rm ext} = - \frac{e^{2\phi}}{6\,{\cal V}^2\,\sigma^3}\biggl[\left(h_0^2 + \, {\cal V}\, {\cal G}^{ab}\,h_{a0} \, h_{b0} + \, {\cal V} \,{\cal G}_{\alpha \beta} \, h^\alpha{}_0 \, h^\beta{}_0 \right)\biggr] \\ & & \hskip2cm - \frac{e^{4\phi}}{12\,{\cal V}^2\, \sigma^3}\biggl[4\, f_0^2 + 2\, \sigma^2\, f_i \, {g}^{ij} \,f_j - 2\, \sigma^6\, (f^0)^2\biggr]\,.\nonumber \eea Thus we are able to prove an interesting de-Sitter no-go theorem on the type IIB side by $T$-dualizing the type IIA no-go; moreover, we have a possible route for finding de-Sitter by satisfying the necessary condition $F^0 \neq 0$ in the non-geometric setup with `special solutions'.
\begin{mdframed} \noindent {\bf Type IIB No-Go theorem 2:} In type IIB framework with $O3/O7$ orientifold planes and (non-)geometric fluxes along with the standard $F_3, H_3$ fluxes, one cannot have stable de-Sitter minima with `special solutions' of Bianchi identities, unless $F^0$ component of the $F_3$ flux is non-zero, where $F_3 = F^\Lambda {\cal A}_\Lambda - F_\Lambda\, {\cal B}^\Lambda$, and $\Lambda \in \{0, 1,....,h^{2,1}_-\}$. \end{mdframed} \section{No-Go 3} \label{sec_nogo3} In the previous section, we have seen that after including the Romans mass term in type IIA or equivalently $F^0$ component of the three-form $F_3$-flux in type IIB, the necessary condition for getting the de-Sitter no-go is violated. This can be taken as a window to hunt for de-Sitter solutions. On the other hand, naively speaking, in order to restore the no-go condition or for finding another no-go, one would need to nullify the effects of these respective fluxes in the type IIA and the type IIB scenarios, and therefore one can ask the question if there are certain geometries which could be useful for this purpose. In this section we will show how the $K3$- or ${\mathbb T}^4$-fibred Calabi Yau threefolds could be useful in this regard as they facilitate a factorization in the moduli space as shown to be needed in \cite{Flauger:2008ad}. \subsection{Type IIA with $K3$- or ${\mathbb T}^4$-fibred (CY) threefolds} Superstring compactifications using $K3$- or ${\mathbb T}^4$-fibred CY threefolds present some interesting case as there is some kind of factorization guaranteed in the K\"ahler moduli space. 
By the theorem of \cite{Oguiso1993, Schulz:2004tt}, such a Calabi Yau threefold will have at least one two-cycle dual to a $K3$ or a ${\mathbb T}^4$ divisor which appears only linearly in the intersection polynomial\footnote{Such Calabi Yau threefolds with $K3/{\mathbb T}^4$-fibrations have also been useful for realizing Fibre inflation models \cite{Cicoli:2008gp, Cicoli:2016xae, Cicoli:2017axo}.}. In other words, the intersection numbers can be made to split in the following manner by singling out a component through the splitting of the index $a$ as $a = \{1, a'\}$, \bea \label{eq:intK3} & & \hskip-1cm \kappa_{111} = 0 = \kappa_{11a'}, \quad \kappa_{1a'b'} \neq 0, \quad \hat{\kappa}_{1\alpha\beta} \neq 0, \quad \hat{\kappa}_{a'\alpha\beta} = 0, \qquad {\rm where} \, \, a' \neq 1 \neq b'\,. \eea In addition, we also assume that $\kappa_{a'b'c'}=0$, and note that there is only one non-zero intersection of the type $\hat{\kappa}_{a\alpha\beta}$, namely with $a =1$. A concrete example of a $K3$-fibred CY threefold with such even/odd splitting in the intersection numbers (and hence in the corresponding moduli space metrics) can be found in \cite{Gao:2013pra}. Recall that a non-zero intersection number of the type $\hat{\kappa}_{a\alpha\beta}$ is also essential for generating the $D$-terms by coupling through the (non-)geometric fluxes. Let us denote the volume of the singled-out two-cycle as ${\rm t}^1 = \rho_0$, leaving the ${\rm t}^{a'}$ as the remaining volume moduli; then the overall volume of the threefold can be written out as under, \bea & & {\cal V} = \frac{1}{6}\, \kappa_{abc}\, {\rm t}^a\, {\rm t}^b\, {\rm t}^c = \frac{1}{2}\, \kappa_{1a'b'}\, \rho_0\, {\rm t}^{a'}\, {\rm t}^{b'}\,, \eea which leaves the volume form as a homogeneous function of degree 2 in the remaining prime-indexed K\"ahler moduli.
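As a quick sanity check of this factorized volume form, one can verify it symbolically on a toy two-modulus example ($h^{1,1}=2$, with only $\kappa_{122}$ non-zero; the value $\kappa_{122}=1$ is purely illustrative):

```python
import sympy as sp
from itertools import product

t1, t2 = sp.symbols('t1 t2', positive=True)
t = {1: t1, 2: t2}

def kappa(a, b, c):
    # Toy intersection numbers obeying eq. (intK3):
    # kappa_111 = kappa_112 = kappa_222 = 0, only kappa_122 (and permutations) non-zero.
    return 1 if sorted((a, b, c)) == [1, 2, 2] else 0

# Overall volume V = (1/6) kappa_abc t^a t^b t^c (full symmetric sum).
V = sp.Rational(1, 6) * sum(kappa(a, b, c) * t[a] * t[b] * t[c]
                            for a, b, c in product((1, 2), repeat=3))

# Singling out t^1 = rho0 gives V = (1/2) kappa_{1a'b'} rho0 t^{a'} t^{b'}.
assert sp.simplify(V - sp.Rational(1, 2) * t1 * t2**2) == 0

# Euler's theorem: V is homogeneous of degree 2 in the remaining modulus t^2.
assert sp.simplify(t2 * sp.diff(V, t2) - 2 * V) == 0
print("volume factorization check passed")
```

The second assertion is simply Euler's theorem for the degree-2 homogeneity in the prime-indexed moduli mentioned above.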
Now we can still assume ${\rm t}^{a'} = \rho\, \gamma^{a'}$, where the $\gamma^{a'}$'s are the remaining angular K\"ahler moduli satisfying $\kappa_{1a'b'}\gamma^{a'}\gamma^{b'} = 2$. This leads to a simple volume form given as under, \bea & & {\cal V} = \rho_0 \, \rho^2\,. \eea Before we come to the explicit details of restoring the de-Sitter no-go condition by making an appropriate choice of the geometry, let us throw some more light on the motivation for looking at this $K3/{\mathbb T}^4$-fibred geometry by considering the following Romans mass term as it appears in the type IIA scalar potential, \bea & & \hskip0cm V_{{\rm f}^0} = \frac{e^{4D}}{2} {\cal V}\, ({\rm f}^0)^2 \,. \eea One can easily convince oneself that using eqn. (\ref{eq:main4IIA-nogo2}), in which the simplification ${\cal V} = \rho^3$ has been made, we get the following relations, \bea & & (\partial_D V_{{\rm f}^0} - \rho\, \partial_\rho V_{{\rm f}^0}) = \, V_{{\rm f}^0}\, \Longrightarrow (\partial_D V_{\rm IIA} - \rho\, \partial_\rho V_{\rm IIA}) = 3 \,V_{\rm IIA} - 2\,V_{{\rm f}^0} + \ldots\,, \eea where the dots denote some non-negative pieces, as seen while deriving the no-go-2, and this way $V_{{\rm f}^0}$, appearing with a minus sign on the right hand side, helps in evading the de-Sitter no-go condition.
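The scaling relation quoted above for the Romans mass term can be checked directly in a couple of lines of sympy (a minimal sketch; only the homogeneity of $V_{{\rm f}^0} \propto e^{4D}\,\rho^3$ in $D$ and $\rho$ matters, so the flux value is kept symbolic):

```python
import sympy as sp

D, rho, f0 = sp.symbols('D rho f0', positive=True)

# Romans mass term V_f0 = (e^{4D}/2) * V * (f^0)^2 with the simplification V = rho^3.
V_f0 = sp.exp(4*D) / 2 * rho**3 * f0**2

# Check the scaling relation (d_D - rho d_rho) V_f0 = V_f0:
# d_D gives a factor 4, rho d_rho gives a factor 3.
lhs = sp.diff(V_f0, D) - rho * sp.diff(V_f0, rho)
assert sp.simplify(lhs - V_f0) == 0
print("Romans-mass scaling check passed")
```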
Now suppose we have a volume form of the type ${\cal V} = \rho_0\, \rho^2$ instead of ${\cal V} = \rho^3$; then the following relations hold, \bea \label{eq:factor-moduli-space-IIA} & \hskip-1cm {\bf (I).} & \quad (\partial_D V_{{\rm f}^0} - \, \rho_0\, \partial_{\rho_0} V_{{\rm f}^0}) = 3\, V_{{\rm f}^0}\, \Longrightarrow (\partial_D V_{\rm IIA} - \rho_0\, \partial_{\rho_0} V_{\rm IIA}) = 3 \,V_{\rm IIA} + \ldots\,, \\ & \hskip-1cm {\bf (II).} & \quad (2\, \partial_D V_{{\rm f}^0} - \rho\, \partial_{\rho} V_{{\rm f}^0}) = 6\, V_{{\rm f}^0}\, \Longrightarrow (2\, \partial_D V_{\rm IIA} - \rho\, \partial_\rho V_{\rm IIA}) = 6 \,V_{\rm IIA} + \ldots\,, \nonumber \eea where we can see that now $V_{{\rm f}^0}$ can be completely absorbed in $V_{\rm IIA}$, and so the negative piece involving $V_{{\rm f}^0}$ is absent. Here we assume (to be proven in a while) that one can make the flux choice appropriately such that all the other pieces inside the dots remain non-negative. Thus, by considering these simple heuristics, one can anticipate getting another de-Sitter no-go with some appropriate choice of fluxes and geometries. Let us mention that one can also demand the splitting of intersection numbers on the mirror side, i.e. of $k_{\lambda\rho\sigma}$, leading to a splitting in the complex structure moduli space metric, in order to balance things from the $(\partial_D V_{{\rm f}^0})$ piece \cite{Flauger:2008ad} rather than from $(\partial_\rho V_{{\rm f}^0})$ via a factorizable K\"ahler moduli space as we are considering. That may result in some new no-go scenarios; however, we will not consider that case in this work. To explore the details, using the choice for the triple-intersection numbers given in eqn.
(\ref{eq:intK3}) and the definitions of the metric given in table \ref{tab_scalar-potential}, we have the following block-diagonal forms for the (inverse-)moduli space metrics, \bea \label{eq:K3-moduli-space-IIA} & & \hskip-1cm {\cal V} \,\tilde{\cal G}^{ab} = \begin{pmatrix} \rho_0^2 & \quad 0\\ 0 & \,\, \, \,\rho^2 \left(\gamma^{a'}\, \gamma^{b'} - \tilde\kappa^{a'b'}\right) \end{pmatrix}, \quad {\cal V}\,\tilde{\cal G}_{ab} = \begin{pmatrix} \rho^{4} & \quad 0\\ 0 & \quad \rho_0^{2}\,\rho^{2} \left(\tilde\kappa_{a'}\tilde\kappa_{b'} - \tilde\kappa_{a'b'}\right) \end{pmatrix}\,, \eea where $a' \in \{2, 3, \ldots, h^{1,1}_-\}$ and the angular quantities with $a'$ indices do not depend on either of the moduli $\rho_0$ and $\rho$. From the scalar potential in eqn. (\ref{eq:main4IIA-nogo2gen}), which is relevant for this type IIA case with geometric flux, we observe that the volume moduli $\rho_0$ and $\rho$ can appear through factors like $({\cal V} \, \tilde{\cal G}^{ab}), ({\cal V} \, \tilde{\cal G}_{ab}), ({\rm t}^a\, {\rm t}^b)$ or $(\hat\kappa_{a\alpha\beta}{\rm t}^a)$. As we have seen from eqn. (\ref{eq:K3-moduli-space-IIA}), the moduli space metrics are already block diagonal with the splitting of the index `$a$' as $a = \{1, a'\}$. Also note that the piece with $(\hat\kappa_{a\alpha\beta}{\rm t}^a)^{-1}$ will only depend on the $\rho_0$ (and not on the $\rho$) modulus, as we have assumed in eqn. (\ref{eq:intK3}) that $\hat\kappa_{1\alpha\beta}$ is the only non-zero intersection with indices $\alpha, \beta$ in the even $(1,1)$-cohomology. However, scalar potential pieces involving the factor $({\rm t}^a\, {\rm t}^b)$ can generate off-diagonal mixings and so might disturb the balance of pieces in $(\partial_D V_{\rm IIA} - \rho_0\, \partial_{\rho_0} V_{\rm IIA}) = 3\,V_{\rm IIA} + \ldots$, i.e. spoil the positive semi-definiteness of the pieces hidden in the dots, which was established for the earlier no-go-2.
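Before proceeding, the two heuristic relations (I) and (II) of eqn. (\ref{eq:factor-moduli-space-IIA}) for the Romans mass term with the fibred volume form ${\cal V} = \rho_0\,\rho^2$ can also be verified symbolically (again, only the homogeneity in the saxions matters):

```python
import sympy as sp

D, rho0, rho, f0 = sp.symbols('D rho0 rho f0', positive=True)

# Romans mass term with the fibred volume form V = rho0 * rho^2.
V_f0 = sp.exp(4*D) / 2 * rho0 * rho**2 * f0**2

# (I): (d_D - rho0 d_rho0) V_f0 = 3 V_f0  (factors 4 - 1).
lhs1 = sp.diff(V_f0, D) - rho0 * sp.diff(V_f0, rho0)
assert sp.simplify(lhs1 - 3*V_f0) == 0

# (II): (2 d_D - rho d_rho) V_f0 = 6 V_f0  (factors 8 - 2).
lhs2 = 2*sp.diff(V_f0, D) - rho * sp.diff(V_f0, rho)
assert sp.simplify(lhs2 - 6*V_f0) == 0
print("both fibred-volume scaling checks passed")
```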
To concretize these arguments, we simplify the geometric type IIA scalar potential given in eqn. (\ref{eq:main4IIA-nogo2gen}) utilising the above splitting of the moduli space metrics, and it turns out to be given as under, \bea \label{eq:main4IIA-nogo3} & & \hskip-0.3cm V_{\rm IIA} = \frac{e^{4D}}{4\,\rho_0\, \rho^2}\biggl[({\rm f}_0)^2 + \rho_0^2\, ({\rm f}_1)^2 + \rho^2\, {\rm f}_{a'} \, (\gamma^{a'}\, \gamma^{b'} - \tilde\kappa^{a'b'}) \,{\rm f}_{b'} \\ & & \quad \quad + \, \rho^4 ({\rm f}^1)^2 + \, \rho_0^{2}\,\rho^{2} \, {\rm f}^{a'} \, (\tilde\kappa_{a'}\tilde\kappa_{b'} - \tilde\kappa_{a'b'}) \, {\rm f}^{b'} + \rho_0^2\, \rho^4\, ({\rm f}^0)^2\biggr] \nonumber\\ & & \quad \quad + \frac{e^{2D}}{4\,\rho_0\, \rho^2}\biggl[\frac{{\rm h}_0^2}{\cal U} +\, \tilde{\cal G}^{ij}\,{\rm h}_{i0} \, {\rm h}_{j0} + \,\tilde{\cal G}_{\lambda \rho} {\rm h}^\lambda{}_0 \, {\rm h}^\rho{}_0 \biggr]\,\nonumber\\ & & \quad \quad + \, \frac{e^{2D}}{4\, \rho^2} \times \rho_0 \biggl[\left(\frac{{\rm h}_1 \, {\rm h}_1}{\cal U} + \, \tilde{\cal G}^{ij}\, {\rm h}_{1i} \, {\rm h}_{1j} \, + \,\tilde{\cal G}_{\lambda \rho}\, {\rm h}_1{}^\lambda\, {\rm h}_1{}^\rho \right) \biggr] \nonumber\\ & & \quad \quad + \, \frac{e^{2D}}{2\, \rho\,}\biggl[\gamma^{a'} \left(\frac{{\rm h}_{a'} \, {\rm h}_1}{\cal U} + \, \tilde{\cal G}^{ij}\, {\rm h}_{a'i} \, {\rm h}_{1j} \, + \,\tilde{\cal G}_{\lambda \rho}\, {\rm h}_{a'}{}^\lambda\, {\rm h}_1{}^\rho \right) \biggr] \nonumber\\ & & \quad \quad + \, \frac{e^{2D}}{4\,\rho_0}\biggl[\gamma^{a'}\, \gamma^{b'} \left(\frac{{\rm h}_{a'} \, {\rm h}_{b'}}{\cal U} + \, \tilde{\cal G}^{ij}\, {\rm h}_{a'i} \, {\rm h}_{b'j} \, + \,\tilde{\cal G}_{\lambda \rho}\, {\rm h}_{a'}{}^\lambda\, {\rm h}_{b'}{}^\rho \right) \biggr] \nonumber\\ & & \quad \quad - \, \frac{e^{2D}}{4\,\rho_0\,{\cal U}}\biggl[\bigl({\rm h}_{a'} - \frac{k_\lambda}{2}\,{\rm h}_{a'}{}^\lambda \bigr) \, \tilde\kappa^{a'b'} \bigl({\rm h}_{b'} - \frac{k_\rho}{2}\,{\rm h}_{b'}{}^\rho \bigr) \biggr]\, 
\nonumber\\ & & \quad \quad - \, \frac{e^{2D}}{2\, \rho\,{\cal U}}\biggl[\bigl({\rm h}_1 - \frac{k_\lambda}{2}\,{\rm h}_1{}^\lambda \bigr)\, \gamma^{a'} \bigl({\rm h}_{a'} - \frac{k_\rho}{2}\,{\rm h}_{a'}{}^\rho \bigr) \biggr]\, \nonumber\\ & & \quad \quad + \, \frac{e^{2D}}{4\,\rho_0\, {\cal U}}\biggl[\left({\cal U} \, \hat{\rm h}_\alpha{}^{0} + {\rm z}^\lambda \, \hat{\rm h}_{\alpha \lambda} \right) \,(\hat\kappa_{1\alpha\beta}\, \gamma^1)^{-1} \,\left({\cal U} \, \hat{\rm h}_\beta{}^{0} + {\rm z}^\rho \, \hat{\rm h}_{\beta \rho} \right)\biggr]\,\nonumber\\ & & \quad \quad + \frac{e^{3D}}{2\, \sqrt{\cal U}} \left[\left({\rm f}^0 \, {\rm h}_0 - {\rm f}^a\, {\rm h}_a \right) - \left({\rm f}^0\, {\rm h}^\lambda{}_0 - {\rm f}^a\, {\rm h}^\lambda{}_a \right)\, \frac{k_\lambda}{2} \right],\nonumber \eea where the flux orbits can be read off from eqn. (\ref{eq:axionic-flux-nogo2}) after imposing the splitting of indices as $a = \{1, a'\}$ and using the intersection numbers given in eqn. (\ref{eq:intK3}). Now from this complicated potential we can see the off-diagonal mixing, e.g. arising from the $({\rm t}^a\, {\rm t}^b)$ factor as discussed before. This issue can be avoided by appropriately setting to zero the respective fluxes coupling the off-diagonal blocks, i.e.
by taking either of the following two cases, which subsequently lead to the new de-Sitter no-go scenarios, \bea \label{eq:IIA-nogo3} & \hskip-0.5cm {\bf (I).} & \quad {\rm h}_1 = {\rm h}_{1k} = {\rm h}_1{}^\lambda = 0 \quad \Longleftrightarrow \quad {w}_{10} = {w}_{1k} = {w}_1{}^\lambda = 0\,, \\ & & \hskip2cm \Longrightarrow \quad (\partial_D V_{\rm IIA} - \rho_0\, \partial_{\rho_0} V_{\rm IIA}) \geq 3 \,V_{\rm IIA};\nonumber\\ & \hskip-0.5cm {\bf (II).} & \quad {\rm h}_{a'0} = {\rm h}_{a'k} = {\rm h}_{a'}{}^\lambda = \hat{\rm h}_{\alpha}{}^0 = \hat{\rm h}_{\alpha}{}^k = \hat{\rm h}_{\alpha \lambda}= 0 \Longleftrightarrow \nonumber\\ & & \hskip0cm {w}_{a'0} = {w}_{a'k} = {w}_{a'}{}^\lambda = \hat{w}_{\alpha}{}^0 = \hat{w}_{\alpha}{}^k = \hat{w}_{\alpha \lambda}= 0\, \Longrightarrow (2 \partial_D V_{\rm IIA} - \rho\, \partial_{\rho} V_{\rm IIA}) \geq 6 \,V_{\rm IIA} . \nonumber \eea Also note that in the no-go scenarios corresponding to the above two cases, one has to impose these extra conditions on the vanishing of certain fluxes in order to determine the simplified axionic flux orbits from their generic expressions. However, given that these orbits are independent of the saxions, this does not affect our purpose, as we are only interested in the saxionic derivatives of the potential when looking for possible no-go inequalities. \subsection{$T$-dual de-Sitter no-go-3 in type IIB} Along the lines of the computations done for the explicit $T$-dualization of the previous two de-Sitter no-go scenarios, one can be convinced that the no-go-3 in (\ref{eq:IIA-nogo3}) can be easily $T$-dualized to find new no-go scenarios on the type IIB side. For this to happen, the assumption to make is that the type IIB compactification should be performed on CY threefolds which have $K3$- or ${\mathbb T}^4$-fibred mirror CYs.
This framework should not be confused with a type IIB compactification on a $K3$- or ${\mathbb T}^4$-fibred CY itself; although there might be a different set of no-go's for that case, those would not be the $T$-duals of the type IIA no-go-3 we are considering. Having said that, the complex structure side can now be studied via the mirror CY, and hence will inherit the splitting of the complex structure moduli space on the type IIB side, such that one can single out two complex structure moduli $\sigma_0$ and $\sigma$ satisfying, \bea \label{eq:K3-moduli-space-IIB} & & u^1 = \sigma_0, \qquad u^{i'} = \sigma\, \lambda^{i'}, \qquad l_{1i'j'}\, \lambda^{i'}\, \lambda^{j'} = 2\,, \qquad {\cal U} = \sigma_0 \, \sigma^2\,,\\ & & \hskip-1cm {\cal U} \,{\cal G}^{ij} = \begin{pmatrix} \sigma_0^2 & \quad 0\\ 0 & \,\, \, \,\sigma^2 (\lambda^{i'}\, \lambda^{j'} - \tilde{l}^{i'j'}) \end{pmatrix}, \quad {\cal U}\,{\cal G}_{ij} = \begin{pmatrix} \sigma^{4} & \quad 0\\ 0 & \quad \sigma_0^{2}\,\sigma^{2} (\tilde{l}_{i'}\tilde{l}_{j'} - \tilde{l}_{i'j'}) \end{pmatrix}\,, \nonumber \eea where the indices $i'$ denote the remaining complex structure moduli different from $u^1$, and quantities like $\tilde{l}_i$ etc. depend only on the angular complex structure moduli.
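The mirror-side statement ${\cal U} = \sigma_0\,\sigma^2$ can be checked on a toy two-modulus example in the same way as on the K\"ahler side (only $l_{122}$ non-zero, with the illustrative value $l_{122}=1$, and the angular modulus fixed by the constraint $l_{1i'j'}\,\lambda^{i'}\lambda^{j'} = 2$):

```python
import sympy as sp
from itertools import product

sigma0, sigma = sp.symbols('sigma0 sigma', positive=True)

# Angular modulus fixed by l_{1i'j'} lam^{i'} lam^{j'} = 2 for the toy choice l_122 = 1.
lam = sp.sqrt(2)
u = {1: sigma0, 2: sigma * lam}

def l(i, j, k):
    # only l_122 (and permutations) non-zero; value 1 is illustrative
    return 1 if sorted((i, j, k)) == [1, 2, 2] else 0

# U = (1/6) l_ijk u^i u^j u^k (full symmetric sum) reduces to sigma0 * sigma^2.
U = sp.Rational(1, 6) * sum(l(i, j, k) * u[i] * u[j] * u[k]
                            for i, j, k in product((1, 2), repeat=3))

assert sp.simplify(U - sigma0 * sigma**2) == 0
print("mirror-side check passed: U = sigma0*sigma^2")
```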
Under these circumstances, the type IIB scalar potential can be explicitly given as under, \bea \label{eq:IIBpotential-nogo3} & & V_{\rm IIB} = \frac{e^{4\phi}}{4\,{\cal V}^2\, \sigma_0 \, \sigma^2}\biggl[(f_0)^2 + \left(\sigma^4\, (f^1)^2 + \sigma_0^2\, \sigma^2 \, f^{i'} \, (\tilde{l}_{i'}\tilde{l}_{j'} - \tilde{l}_{i'j'})\, f^{j'} \right) \\ & & \quad \qquad + \left(\sigma_0^2 \, (f_1)^2 + \sigma^2\, f_{i'} \, (\lambda^{i'}\, \lambda^{j'} - \tilde{l}^{i'j'}) \,f_{j'} \right)+ \sigma_0^2 \, \sigma^4\, (f^0)^2\biggr]\,\nonumber\\ & & \quad \qquad + \frac{e^{2\phi}}{4\,{\cal V}^2\,\sigma_0 \, \sigma^2}\biggl[h_0^2 + \, {\cal V}\, {\cal G}^{ab}\,h_{a0} \, h_{b0} + \, {\cal V} \,{\cal G}_{\alpha \beta} \, h^\alpha{}_0 \, h^\beta{}_0 \nonumber\\ & & \quad \qquad + \, \sigma_0^2\, \left((h_1)^2 + {\cal V} \,{\cal G}_{\alpha \beta}\, h^\alpha{}_1\, h^\beta{}_1 + {\cal V}\, {\cal G}^{ab}\, h_{a1} \, h_{b1} \right)\, \nonumber \\ & & \quad \qquad +\, \sigma^2\, \lambda^{i'}\, \lambda^{j'}\, \left(h_{i'} \, h_{j'} + {\cal V} \,{\cal G}_{\alpha \beta}\, h^\alpha{}_{i'}\, h^\beta{}_{j'} + {\cal V}\, {\cal G}^{ab}\, h_{ai'} \, h_{bj'} \right)\, \nonumber \\ & & \quad \qquad + \sigma^2 (\lambda^{i'}\, \lambda^{j'} - \tilde{l}^{i'j'}) \, \left(h_{i'} - \frac{\ell_\alpha}{2} \,h^\alpha{}_{i'} \right) \left(h_{j'} - \frac{\ell_\beta}{2} \,h^\beta{}_{j'} \right) \nonumber\\ & & \quad \qquad + \, \sigma^2\, \left({\cal V} \, \hat{h}_J{}^{0} - {t}^\alpha \, \hat{h}_{\alpha J} \right) \,(\hat\ell_{1JK})^{-1} \,\left({\cal V} \, \hat{h}_K{}^{0} - {t}^\beta \, \hat{h}_{\beta K} \right) \biggr] \nonumber\\ & & \quad \qquad + \frac{e^{3\phi}}{\, {\cal V}^2} \, \left[\left(f^0 \, h_0 - f^i\, h_i \right)\, - \left(f^0\, h^\alpha{}_0 - f^i\, h^\alpha{}_i \right)\, \frac{\ell_\alpha}{2} \right],\nonumber \eea where the only interest for us at the moment lies in the saxionic moduli $\sigma_0$ and $\sigma$, though for completion we do provide the explicit expressions for all the axionic flux 
orbits as under, \bea \label{eq:IIBorbits-nogo3} & & \hskip-0.5cm f^0 = -\, F^0, \quad f^1 = F^1 - v^1\, F^0\,, \quad f^{i'} = F^{i'} - v^{i'}\, F^0,\\ & & \hskip-0.5cm f_1 = F_1 +\, l_{1i'j'}\, v^{i'} \, {F}^{j'}\, - \frac{1}{2}\, l_{1j'k'}\, v^{j'}\, v^{k'} \, {F}^0 - \omega_{a1} \, {c}^a - \hat{Q}^\alpha{}_1 \, \hat{c}_\alpha - \, c_0 \, h_1\,, \nonumber\\ & & \hskip-0.5cm f_{i'} = F_{i'} +\, l_{1i'j'}\, (v^{j'} \, {F}^{1} + v^{1} \, {F}^{j'})\, - \, l_{1i'j}\, v^{1}\, v^{j'} \, {F}^0 - \omega_{a{i'}} \, {c}^a - \hat{Q}^\alpha{}_{i'} \, \hat{c}_\alpha - \, c_0 \, h_{i'}\,, \nonumber\\ & & \hskip-0.5cm f_0 = F_0 + v^1 {F}_1+ v^{i'} {F}_{i'} + \frac{1}{2}\, l_{1i'j'} v^{i'} v^{j'} \, {F}^1 + l_{1i'j'} v^1 v^{i'} \, {F}^{j'} \nonumber\\ & & \hskip1cm - \frac{1}{2}\, l_{1i'j'} v^{i'} v^{j'} v^1 \, {F}^0 - \omega_{a0} \, {c}^a - \hat{Q}^\alpha{}_0 \, \hat{c}_\alpha - \, c_0 \, h_0\,, \nonumber\\ & & \nonumber\\ & & \hskip-0.5cm h_0 = H_0 + \omega_{a0} \, {b}^a + \frac{1}{2}\, \hat{\ell}_{\alpha a b}\, b^a b^b \, \hat{Q}^\alpha{}_0 + v^1 \, h_1\,+ v^{i'} \, h_{i'} , \nonumber\\ & & \hskip-0.5cm h_1 = H_1 + \omega_{a1} \, {b}^a + \frac{1}{2}\, \hat{\ell}_{\alpha a b}\, b^a b^b \, \hat{Q}^\alpha{}_1\,, \qquad h_{i'} = H_{i'} + \omega_{ai'} \, {b}^a + \frac{1}{2}\, \hat{\ell}_{\alpha a b}\, b^a b^b \, \hat{Q}^\alpha{}_{i'}\,,\nonumber\\ & & \hskip-0.5cm h_{a0} = \omega_{a0} + \hat{Q}^\alpha{}_0 \hat{\ell}_{\alpha a b}\, b^b + v^1 \, h_{a1} + v^{i'} \, h_{ai'}, \qquad h_{a1} = \omega_{a1} + \hat{Q}^\alpha{}_1 \, \hat{\ell}_{\alpha a b}\, b^b, \nonumber \\ & & \hskip-0.5cm h_{ai'} = \omega_{ai'} + \hat{Q}^\alpha{}_{i'} \, \hat{\ell}_{\alpha a b}\, b^b, \quad h^\alpha{}_0 = \hat{Q}^\alpha{}_0 + v^1 \, \hat{Q}^\alpha{}_1 + v^{i'} \, \hat{Q}^\alpha{}_{i'}\,, \, \, \, h^\alpha{}_1 = \hat{Q}^\alpha{}_1\,, \, \, \, h^\alpha{}_{i'} = \hat{Q}^\alpha{}_{i'}, \nonumber\\ & & \hskip-0.5cm \hat{h}_{\alpha K} = \hat{\omega}_{\alpha K}\, - Q^{a}{}_{K} \, \hat{\ell}_{\alpha a b} \, b^b + 
\frac{1}{2}\hat{\ell}_{\alpha a b} \, b^a \,b^b\, R_K, \qquad \hat{h}_K{}^0 = -\, R_K\,. \nonumber \eea A close look at the scalar potential in eqn. (\ref{eq:IIBpotential-nogo3}) confirms that one can have the following two $T$-dual cases, \bea & \hskip-1cm {\bf (I).} & \quad {h}_1 = {h}_{a1} = {h}^\alpha{}_1 = 0 \quad \Longleftrightarrow \quad {H}_{1} = {\omega}_{a1} = \hat{Q}^\alpha{}_1 = 0\,,\\ & & \hskip2cm \Longrightarrow \quad (\partial_\phi V_{\rm IIB} - \sigma_0\, \partial_{\sigma_0} V_{\rm IIB}) \geq 3 \,V_{\rm IIB}\,,\nonumber\\ & \hskip-1cm {\bf (II).} & \quad {h}_{i'} = {h}_{ai'} = {h}^\alpha{}_{i'} = \hat{h}_{\alpha K} = h^a{}_K = \hat{h}_{K}{}^0 = 0 \nonumber\\ & & \quad \Longleftrightarrow \quad H_{i'}= \omega_{ai'} = \hat{Q}^\alpha{}_{i'} = \hat{\omega}_{\alpha K} = Q^a{}_K = R_K = 0\, \quad \nonumber\\ & & \hskip2cm \Longrightarrow \quad (2\, \partial_\phi V_{\rm IIB} - \sigma\, \partial_\sigma V_{\rm IIB}) \geq 6 \,V_{\rm IIB}. \nonumber \eea This result can be summarized in the following no-go condition. \begin{mdframed} \noindent {\bf Type IIB No-Go theorem 3:} In the type IIB framework with $O3/O7$ orientifold planes and (non-)geometric fluxes along with the standard $F_3, H_3$ fluxes, one cannot have stable de-Sitter minima with `special solutions' of Bianchi identities, if the complex structure moduli space exhibits a factorization on top of suitably having some of the flux components set to zero. This can happen when the mirror of the type IIB compactifying CY is a particular type of $K3/{\mathbb T}^4$-fibred CY threefold satisfying eqn. (\ref{eq:intK3}). \end{mdframed} \noindent \subsection{More de-Sitter no-go conditions for toroidal examples} This no-go 3 appears to be rather a complicated statement to make; however, it has several interesting implications.
To illustrate what it means in a simple way, we consider the toroidal models based on type IIA and type IIB compactifications using an orientifold of the ${\mathbb T}^6/({\mathbb Z}_2 \times {\mathbb Z}_2)$ orbifold. To begin with, let us mention that this no-go 3 can be applied directly to these conventional vanilla toroidal orientifold models, which have been studied numerous times. In this model the only non-zero intersection numbers are, \bea & \hskip-1cm {\rm IIA:} & \qquad \kappa_{123} = 1, \quad \ \qquad k_{123} = 1\,, \\ & \hskip-1cm {\rm IIB:} & \qquad \, \ell_{123} = 1, \quad \qquad \, \, \, \, l_{123} = 1\,, \nonumber \eea while all the other intersection numbers are zero. With the standard orientifold involution there are no $D$-terms present in the type IIA or type IIB settings, so the total scalar potential arises from the $F$-term contributions alone. In addition, let us note that the even $(1,1)$-cohomology is trivial in type IIA while the odd $(1,1)$-cohomology is trivial in type IIB, implying that fluxes/moduli with index $k$ in type IIA and with index $a$ in type IIB are absent. \subsubsection*{type IIA} It turns out that 12 axionic flux orbits are identically zero in this construction, which in addition does not include the non-geometric ${\rm Q}$ and ${\rm R}$ fluxes, \bea & & \hskip-1cm {\rm h}^a = 0 = {\rm h}^0, \quad {\rm h}_{k0} = {\rm h}_{ak} = {\rm h}^a{}_k = {\rm h}_k{}^0 = 0, \quad {\rm h}^{a\lambda} = 0 = {\rm h}^{\lambda 0}\,, \\ & & \qquad \quad \hat{{\rm h}}_{\alpha}{}^{0} = \hat{{\rm h}}_{\alpha\lambda} = \hat{{\rm h}}^{\alpha0} = \hat{{\rm h}}^\alpha{}_\lambda = 0\,.\nonumber \eea As the three ${\mathbb T}^2$'s appearing in the six-torus are on an equal footing, one can single out the $\rho_0$ modulus from any of the three $t^a$'s; say we take $t^1 = \rho_0$, and subsequently the remaining $2 \times 2$ sector in the K\"ahler moduli space is block diagonal.
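The equivalence of the three ${\mathbb T}^2$ factors can be made explicit with a tiny symbolic check: singling out any one of the three two-cycle volumes as $\rho_0$ always yields the fibred form ${\cal V} = \rho_0\,\rho^2$ used in no-go-3 (a minimal sketch with $\kappa_{123}=1$):

```python
import sympy as sp

rho0, rho = sp.symbols('rho0 rho', positive=True)

# Toroidal volume V = kappa_123 t^1 t^2 t^3 with kappa_123 = 1; single out
# each of the three T^2 volumes in turn and set the other two equal to rho.
for singled in range(3):
    t = [rho, rho, rho]
    t[singled] = rho0
    V = t[0] * t[1] * t[2]
    assert sp.simplify(V - rho0 * rho**2) == 0
print("all three choices give V = rho0*rho^2")
```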
In fact it is completely diagonal in all the three moduli, though we need it only partially. Noting that the only fluxes which can take non-zero values in this model are the following: \bea & & {\rm h}_0, \qquad {\rm h}_a, \qquad {\rm h}_{a}{}^\lambda, \qquad {\rm h}^\lambda{}_0, \eea our no-go-3 implies that one ends up having de-Sitter no-go scenarios if one switches off certain fluxes as mentioned in table \ref{tab_IIA-nogo3}. \noindent \begin{table}[H] \begin{center} \begin{tabular}{|c|c|c|} \hline & &\\ ${\rm h}_1 = {\rm h}_1{}^\lambda = 0$ & ${w}_{10} = {w}_1{}^\lambda = 0$ & $\partial_D V_{\rm IIA} - \rho_0 \partial_{\rho_0} V_{\rm IIA} \geq 3\,V_{\rm IIA}$ \\ ${\rm h}_2 = {\rm h}_2{}^\lambda = 0$ & ${w}_{20} = {w}_2{}^\lambda = 0$ & $\partial_D V_{\rm IIA} - \rho_0 \partial_{\rho_0} V_{\rm IIA} \geq 3\,V_{\rm IIA}$ \\ ${\rm h}_3 = {\rm h}_3{}^\lambda = 0$ & ${w}_{30} = {w}_3{}^\lambda = 0$ & $\partial_D V_{\rm IIA} - \rho_0 \partial_{\rho_0} V_{\rm IIA} \geq 3\,V_{\rm IIA}$ \\ & &\\ \hline & & \\ ${\rm h}_{20} = {\rm h}_{30} = {\rm h}_{2}{}^\lambda = {\rm h}_{3}{}^\lambda = 0$ & ${w}_{20} = {w}_{30} = {w}_{2}{}^\lambda = {w}_{3}{}^\lambda = 0$ & $2 \partial_D V_{\rm IIA} - \rho \partial_{\rho} V_{\rm IIA} \geq 6\,V_{\rm IIA}$\\ ${\rm h}_{10} = {\rm h}_{30} = {\rm h}_{1}{}^\lambda = {\rm h}_{3}{}^\lambda = 0$ & ${w}_{10} = {w}_{30} = {w}_{1}{}^\lambda = {w}_{3}{}^\lambda = 0$ & $2 \partial_D V_{\rm IIA} - \rho \partial_{\rho} V_{\rm IIA} \geq 6\,V_{\rm IIA}$\\ ${\rm h}_{10} = {\rm h}_{20} = {\rm h}_{1}{}^\lambda = {\rm h}_{2}{}^\lambda = 0$ & ${w}_{10} = {w}_{20} = {w}_{1}{}^\lambda = {w}_{2}{}^\lambda = 0$ & $2 \partial_D V_{\rm IIA} - \rho \partial_{\rho} V_{\rm IIA} \geq 6\,V_{\rm IIA}$\\ \hline \end{tabular} \end{center} \caption{Type IIA de-Sitter no-go scenarios with ${\mathbb T}^6/({\mathbb Z}_2 \times {\mathbb Z}_2)$ having geometric flux.} \label{tab_IIA-nogo3} \end{table} \noindent The particular models of table \ref{tab_IIA-nogo3} present those
cases in which one would have a de-Sitter no-go irrespective of whether the Romans mass term is zero or non-zero. This simply means that these are examples in which geometric fluxes are not enough to evade the no-go-2, despite having a non-zero Romans mass. Moreover, from the observations in table \ref{tab_IIA-nogo3} it is not hard to guess that if all the geometric fluxes are zero, one gets back to the no-go-1, having an inequality of the type $(3 \partial_D V_{\rm IIA} - \rho \partial_{\rho} V_{\rm IIA}) \geq 9 \,V_{\rm IIA}$. \subsubsection*{type IIB} Now an interesting question to ask is what happens on the dual type IIB side, which would involve non-geometric fluxes as well, unlike the type IIA case. It turns out that 12 axionic flux orbits are identically zero in this construction, and they are given as under: \bea & & h^0 = h^i = 0, \qquad h_{a0} = h_{ai} = h_a{}^i = h_a{}^0 = 0, \qquad h^{\alpha i} = h^{\alpha 0} = 0, \\ & & \hat{h}_K = \hat{h}_{\alpha K} = \hat{h}_\alpha{}^K = \hat{h}^K = 0\,.\nonumber \eea Now due to symmetries in the intersection numbers $l_{ijk}$, one can single out the $\sigma_0$ modulus from any of the three complex structure saxions $u^i$'s; say we take $u^1 = \sigma_0$, and subsequently the remaining $2 \times 2$ sector in the complex structure moduli space is block diagonal, and one can write ${\cal U} = \sigma_0\, \sigma^2$. As before, it is completely diagonal in all the three moduli. Noting that the only fluxes which can take non-zero values in this model are the following ones: \bea & & {h}_0, \qquad {h}_i, \qquad {h}^{\alpha}{}_0, \qquad {h}^\alpha{}_i, \eea our no-go-3 implies that one ends up having de-Sitter no-go scenarios if one switches off certain fluxes as mentioned in table \ref{tab_IIB-nogo3}.
\noindent \begin{table}[H] \begin{center} \begin{tabular}{|c|c|c|} \hline & &\\ ${h}_1 = {h}^\alpha{}_1 = 0$ & ${H}_{1} = \hat{Q}^\alpha{}_1 = 0$ & $(\partial_\phi V_{\rm IIB} - \sigma_0 \partial_{\sigma_0} V_{\rm IIB}) \geq 3 \,V_{\rm IIB}$ \\ ${h}_2 = {h}^\alpha{}_2 = 0$ & ${H}_{2} = \hat{Q}^\alpha{}_2 = 0$ & $(\partial_\phi V_{\rm IIB} - \sigma_0 \partial_{\sigma_0} V_{\rm IIB}) \geq 3 \,V_{\rm IIB}$ \\ ${h}_3 = {h}^\alpha{}_3 = 0$ & ${H}_{3} = \hat{Q}^\alpha{}_3 = 0$ & $(\partial_\phi V_{\rm IIB} - \sigma_0 \partial_{\sigma_0} V_{\rm IIB}) \geq 3 \,V_{\rm IIB}$ \\ & &\\ \hline & & \\ ${h}_2 = {h}_3 = {h}^\alpha{}_2 = {h}^\alpha{}_3 = 0$ & ${H}_{2} = {H}_{3} = \hat{Q}^\alpha{}_2 = \hat{Q}^\alpha{}_3 =0$ & $(2 \partial_\phi V_{\rm IIB} - \sigma \partial_{\sigma} V_{\rm IIB}) \geq 6 \,V_{\rm IIB}$\\ ${h}_3 = {h}_1 = {h}^\alpha{}_3 = {h}^\alpha{}_1 = 0$ & ${H}_{3} = {H}_{1} = \hat{Q}^\alpha{}_3 = \hat{Q}^\alpha{}_1 =0$ & $(2 \partial_\phi V_{\rm IIB} - \sigma \partial_{\sigma} V_{\rm IIB}) \geq 6 \,V_{\rm IIB}$\\ ${h}_1 = {h}_2 = {h}^\alpha{}_1 = {h}^\alpha{}_2 = 0$ & ${H}_{1} = {H}_{2} = \hat{Q}^\alpha{}_1 = \hat{Q}^\alpha{}_2 =0$ & $(2 \partial_\phi V_{\rm IIB} - \sigma \partial_{\sigma} V_{\rm IIB}) \geq 6 \,V_{\rm IIB}$\\ \hline \end{tabular} \end{center} \caption{Type IIB de-Sitter no-go scenarios with ${\mathbb T}^6/({\mathbb Z}_2 \times {\mathbb Z}_2)$ having (non-)geometric fluxes.} \label{tab_IIB-nogo3} \end{table} \noindent The particular models of table \ref{tab_IIB-nogo3} present those cases in which one would have a de-Sitter no-go irrespective of whether the $F^0$ component of the RR $F_3$ flux is zero or non-zero, and moreover despite some non-geometric fluxes being turned on. This means that these are examples in which non-geometric fluxes are not enough to evade the no-go-2, due to the presence of some specific geometries inherited from the six-torus.
\section{Summary and conclusions} \label{sec_conclusions} In this article, we have $T$-dualized several de-Sitter no-go scenarios which have been well known in type IIA flux compactifications for more than a decade. This subsequently leads to a set of peculiar de-Sitter no-go scenarios in type IIB flux compactifications with (non-)geometric fluxes. Before exploring the de-Sitter no-go scenarios, we have studied the solutions of Bianchi identities in the type IIA and type IIB theories, as these are crucial for finding a genuine effective scalar potential. In this context we present a peculiar class of solutions, which we call the `special solutions' of Bianchi identities, in each of the two type II theories. The main idea behind the existence of such solutions is the fact that several Bianchi identities can be understood as a set of orthogonal symplectic (flux) vectors, and hence half of the flux components can be rotated away by a symplectic transformation. The possible non-zero fluxes for the `special solutions' are summarized in table \ref{tab_special-flux-sol}. Moreover, after exploring the $T$-dual versions of these `special solutions' from type IIA to type IIB and vice-versa, we make some very interesting observations, as collected in the following points: \begin{itemize} \item{The non-geometric type IIA setup with the `special solutions' of Bianchi identities is equivalent to a type IIB setup without any non-geometric fluxes. Moreover, for such a type IIB geometric setup with $O3/O7$, there is a de-Sitter no-go theorem \cite{Shiu:2011zt, Garg:2018reu}, which we have also re-derived from our approach.
This helps us conclude that the $T$-dual type IIA setting, although it includes some non-geometric fluxes, cannot result in stable de-Sitter vacua, which is against naive expectations.} \item{The non-geometric type IIB setup with `special solutions' of Bianchi identities is equivalent to a type IIA setup without any non-geometric fluxes turned on. Such a type IIA setup has been studied in a variety of models in the past, especially regarding the search for de-Sitter vacua and their no-go conditions \cite{Hertzberg:2007wc, Flauger:2008ad, Haque:2008jz, Banlaki:2018ayh}.} \end{itemize} \noindent In this context of type IIA orientifold compactifications with geometric flux, we have first re-derived several de-Sitter no-go scenarios of \cite{Hertzberg:2007wc, Flauger:2008ad} and have subsequently explored their $T$-dual counterparts in the type IIB theory. In particular, we have $T$-dualized three classes of type IIA no-go scenarios, which are summarized in table \ref{tab_no-go-fluxTdual}. These can be elaborated as: \begin{itemize} \item{no-go-1: A type IIB non-geometric setup with $O3/O7$ and having the RR flux $F_3$ along with only the rigid fluxes $H_0, \, \omega_{a0}$ and $\hat{Q}^\alpha{}_0$ cannot give stable de-Sitter vacua.} \item{no-go-2: A type IIB non-geometric setup with $O3/O7$ and having the RR flux $F_3$ along with only the `special solutions' of the NS-NS Bianchi identities cannot give stable de-Sitter vacua unless the $F^0$ component of the $F_3$ flux is non-zero.} \item{no-go-3: This no-go scenario is rather a restoration of the no-go-2 itself, in the sense that whether $F^0$ is zero or non-zero becomes irrelevant. This can be achieved by choosing certain compactification geometries which exhibit a factorization in the complex structure moduli space.
To be specific, the violation of no-go-2 via including the non-zero $F^0$ flux (of $F_3$) can be avoided if the type IIB compactification is made on a CY threefold which admits a $K3/{\mathbb T}^4$-fibred mirror Calabi Yau threefold having some specific triple intersection numbers along with the need of setting a couple of fluxes to zero.} \end{itemize} \begin{table}[H] \begin{center} \begin{tabular}{|c||c||c|c|} \hline Scenarios & & Fluxes in Type IIA \quad & \quad Fluxes in Type IIB \\ & & with $D6/O6$ & with $D3/O3$ and $D7/O7$ \\ \hline \hline & & & \\ no-go-1& $F$-term & ${\rm H}_0$, \quad ${\rm H}_k$, \quad ${\rm H}^\lambda$, & $H_0$, \quad $\omega_{a0}$, \quad $\hat{Q}^\alpha{}_0$, \\ & fluxes & & \\ & & $e_0$, \quad $e_a$, \quad $m^a$, \quad $m_0$. & $F_0$, \quad $F_i$, \quad $F^i$, \quad $- F^0$. \\ & & & \\ \hline \hline & & & \\ no-go-2& $F$-term & ${\rm H}_0$, \quad ${\rm H}_k$, \quad ${\rm H}^\lambda$, & $H_0$, \quad $\omega_{a0}$, \quad $\hat{Q}^\alpha{}_0$, \\ and & fluxes & & \\ no-go-3& & $w_{a0}$, \quad $w_{ak}$, \quad $w_a{}^\lambda$, & $H_i$, \quad $\omega_{ai}$, \quad $\hat{Q}^\alpha{}_{i}$, \\ & & & \\ & & $e_0$, \quad $e_a$, \quad $m^a$, \quad $m_0$. & $F_0$, \quad $F_i$, \quad $F^i$, \quad $- F^0$. \\ & & & \\ & $D$-term & $\hat{w}_\alpha{}^0$, \quad $\hat{w}_\alpha{}^k$, \quad $\hat{w}_{\alpha \lambda}$, & $-\,R_K$, \quad $-\,Q^a{}_K$, \quad $\hat{\omega}_{\alpha K}$,\\ & fluxes & & \\ \hline & & & \\ no-scale-& $F$-term & ${\rm H}_0$, \, \, $w_{a0}$, \,\, ${\rm Q}^a{}_0$, \, \, ${\rm R}_0$, & $H_0$, \, \, $H_i$, \, \, $H^i$, \, \, $- H^0$, \\ structure& fluxes & & \\ in IIB & & $e_0$, \quad $e_a$, \quad $m^a$, \quad $m_0$. & $F_0$, \quad $F_i$, \quad $F^i$, \quad $- F^0$. 
\\ \hline \end{tabular} \end{center} \caption{$T$-dual fluxes relevant for the three no-go scenarios.} \label{tab_no-go-fluxTdual} \end{table} \noindent Note that in table \ref{tab_no-go-fluxTdual} we have also collected the $T$-dual fluxes corresponding to the type IIB no-scale model, which has only the $F_3$ and $H_3$ fluxes. This subsequently shows that on the dual type IIA side one has all the RR fluxes, and NS-NS fluxes of the `rigid' type only, for which we have already shown that a de-Sitter no-go condition exists. To conclude, we have shown in this analysis how one can engineer a pair of $T$-dual setups in the type IIA and type IIB theories in which it may be easier to derive some de-Sitter no-go conditions, which can then be translated to the mirror side. By considering multiple examples, we have presented a kind of recipe for evading or restoring the no-go window depending on the various ingredients, including the compactification geometries, one could use. Thus one of the main lessons of this work can be taken as where not to look in the search for de-Sitter vacua, hence refining the vast non-geometric flux landscape for hunting de-Sitter vacua. Moreover, our analysis can also be extended to utilize/investigate the non-geometric type II models for/against the recently proposed Trans-Planckian Censorship Conjecture (TCC) \cite{Bedroya:2019snp} and also its possible connection with the swampland distance conjecture. We hope to report on (some of) these issues in the near future \cite{shukla:2019abc1}. \section*{Acknowledgments} I am grateful to Fernando Quevedo for his kind support and encouragement. I would like to thank David Andriot, Erik Plauschinn and Thomas Van Riet for useful discussions and communications. \newpage
\section{Introduction} Let $P$ be any polytope in $\mathbb R^d$. An \emph{extension} (or \emph{lift}) of $P$ is a polytope $Q \subseteq \mathbb R^e$ such that $P=\pi(Q)$ for some affine map $\pi : \mathbb R^e \to \mathbb R^d$. The \emph{extension complexity} of $P$, denoted by $\xc(P)$, is the minimum number of facets of an extension of $P$. If $Ay \leq b$ is a linear description of $Q$, then $Ay \leq b$, $x = \pi(y)$ is called an \emph{extended formulation} of $P$ since \( x \in P \iff \exists y : Ay \leq b,\ x = \pi(y). \) Thus the extension complexity of a polytope can also be defined as the minimum number of \emph{inequality constraints} in an extended formulation. Extended formulations have been used and studied for a long time, whereas extension complexity was formally defined less than ten years ago. This definition was much inspired by the seminal work of Yannakakis~\cite{yannakakis1991expressing}. Recently, researchers have tried to pin down the extension complexity of several families of polytopes, mainly in connection with combinatorial optimization. By now, we have a quite good understanding of the extension complexity of the polytopes associated to the main ``textbook paradigms'': flows, matchings, arborescences, traveling salesman tours and stable sets, see~\cite{fiorini2012linear,rothvoss2014matching,goos2018extension}. One notable exception is the class of \emph{matroids}. Let $M$ be a matroid. We denote by $E(M)$ the set of elements of $M$ and by $\mathcal{I}(M)$ the collection of its independent sets. Also, we denote by $\mathcal{B}(M)$ the collection of its bases. The \emph{independence polytope} of $M$ is the convex hull of the characteristic vectors of the independent sets of $M$. Using the notation $P(M)$ for the independence polytope of $M$ and $\chi^I$ for the characteristic vector of an independent set $I \in \mathcal{I}(M)$, we have $$ P(M) = \mathrm{conv} \{\chi^I \in \{0,1\}^{E(M)} \mid I \in \mathcal{I}(M)\}\,.
$$ Another polytope of interest is the \emph{base polytope} $B(M)$ of matroid $M$. The base polytope is the face of the independence polytope whose vertices are the vectors $\chi^B$, where $B \in \mathcal{B}(M)$. Hence, $$ B(M) = \{x \in \mathbb R^{E(M)} \mid x \in P(M),\ x(E) = \mbox{rk}(M)\} $$ where $x(F) := \sum_{e \in F} x_e$ for $F \subseteq E(M)$ and $\mbox{rk}(M)$ denotes the rank of $M$. Notice that every extended formulation for $P(M)$ yields an extended formulation for $B(M)$ with the same number of inequality constraints, hence $\xc(B(M)) \leq \xc(P(M))$. Letting $n$ denote the number of elements of $M$, we also have $\xc(P(M)) \leq \xc(B(M)) + 2n$ since $P(M) = \{x \in \mathbb R^{E(M)} \mid \exists y \in B(M),\ \mathbf{0} \leq x \leq y\}$. A \emph{regular matroid} is a matroid that is representable over every field, or, equivalently, that is representable over the reals by a totally unimodular matrix. Regular matroids form a fundamental class of matroids, generalizing graphic and cographic matroids. Let $G$ be a graph. Recall that the elements of the corresponding \emph{graphic matroid} $M(G)$ (also called the \emph{cycle matroid} of $G$) are the edges of $G$, and the independent sets are the edge subsets $F \subseteq E(G)$ that define a forest in $G$. The \emph{cographic matroid} $M^*(G)$ is the dual matroid of $M(G)$. Graphic and cographic matroids are regular. Also, matroids that are both graphic and cographic are exactly those of the form $M(G)$ for some \emph{planar} graph $G$. Wong~\cite{wong1980integer} and Martin~\cite{Martin91} proved that $\xc(B(M)) = O(|V(G)| \cdot |E(G)|)$ for all graphic matroids $M = M(G)$. It follows directly that $\xc(P(M)) = O(n^2)$ for all graphic or cographic matroids $M$ on $n$ elements. In case $M$ is both graphic and cographic, then $\xc(P(M)) = O(n)$ follows from Williams~\cite{Williams01}. Let $n$ and $r$ respectively denote the number of elements and rank of $M$. 
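For small instances, the vertex description of the independence polytope can be generated directly. The following is a minimal brute-force sketch for graphic matroids, where the independent sets are exactly the forests of $G$; the graph, function names, and labels are illustrative choices, not part of the paper:

```python
from itertools import combinations

def is_forest(n_vertices, edges):
    """Union-find check: an edge set is independent in M(G) iff it has no cycle."""
    parent = list(range(n_vertices))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:          # u and v already connected: (u, v) closes a cycle
            return False
        parent[ru] = rv
    return True

def independence_polytope_vertices(n_vertices, edge_list):
    """All characteristic vectors chi^I of independent sets I of M(G)."""
    m = len(edge_list)
    verts = []
    for k in range(m + 1):
        for sub in combinations(range(m), k):
            if is_forest(n_vertices, [edge_list[i] for i in sub]):
                verts.append(tuple(1 if i in sub else 0 for i in range(m)))
    return verts

# Triangle K3: every proper edge subset is a forest; the whole triangle is a circuit.
verts = independence_polytope_vertices(3, [(0, 1), (1, 2), (0, 2)])
print(len(verts))                  # 7 vertices
print(max(sum(v) for v in verts))  # 2, the rank of M(K3)
```

Of course, such enumeration is exponential in $n$; the point of the paper is precisely to describe $P(M)$ with polynomially many inequalities in an extended space.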
In \cite{kaibel2016extended,weltge2015sizes}, it is claimed that $\xc(P(M)) = O(n^2)$ whenever $M$ is regular. The first version of~\cite{GurjarV17} claimed an even better $O(r \cdot n)$ bound. However, both papers contain a fundamental flaw that appears difficult to fix\footnote{Actually, \cite{GurjarV17} was withdrawn after a few months, and \cite{kaibel2016extended} has been recently withdrawn, see \cite{Kaibel2019}.}, and as a result no polynomial bound is currently known. In this paper, we give the first polynomial upper bound on the extension complexity of the independence polytope of a regular matroid. \begin{theorem}[main theorem]\label{thm:main} There exists a constant $c_0$ such that $\xc(P(M)) \leq c_0 \cdot n^{6}$ for all regular matroids $M$ on $n$ elements. \end{theorem} Our proof of Theorem~\ref{thm:main} is by induction on $n$. We rely on Seymour's celebrated characterization of regular matroids. (A formal definition of $t$-sum for $t \in [3]$ can be found below, in Section~\ref{sec:decomp}.) \begin{theorem}[Seymour's decomposition theorem~\cite{seymour1980decomposition}]\label{thm:seymourdecomposition} A matroid is regular if and only if it is obtained by means of $1$-, $2$- and $3$-sums, starting from graphic and cographic matroids and copies of a certain $10$-element matroid $R_{10}$. \end{theorem} Let $M$ be a regular matroid on $n$ elements. If $M$ is either graphic, cographic or $R_{10}$, then from~\cite{wong1980integer,Martin91} we directly have $\xc(P(M)) \leq c_0 \cdot n^{6}$, provided that $c_0\geq 2$. Next, assume that $M$ is a $t$-sum of two smaller regular matroids $M_1$ and $M_2$ for some $t \in [2]$ (we write $M=M_1\oplus_t M_2$). Then, using the following simple bound, we are done by induction. \begin{lemma}[see \cite{kaibel2016extended,weltge2015sizes} or \cite{aprile20182}] \label{lem:simplebound} For $t \in [2]$, $$ \xc(P(M_1 \oplus_t M_2)) \leq \xc(P(M_1)) + \xc(P(M_2))\,.
$$ \end{lemma} For completeness, here is a proof sketch for Lemma~\ref{lem:simplebound}: if $t = 1$ then $P(M_1 \oplus_t M_2)$ is simply the Cartesian product $P(M_1) \times P(M_2)$, and if $t = 2$ then $P(M_1 \oplus_t M_2)$ can be obtained by intersecting $P(M_1) \times P(M_2)$ with a single hyperplane. Since we cannot prove Lemma~\ref{lem:simplebound} for $t = 3$, we switch to a different strategy to treat the remaining case. Instead, we prove that $M$ has a special decomposition as described in the next result. \begin{lemma} \label{lem:star_decomposition} Let $M$ be a regular matroid on $n$ elements that is neither graphic, nor cographic, nor $R_{10}$, and that is neither a $1$-sum nor a $2$-sum. There exist matroids $M_0$, $M_1$, \ldots, $M_k$, for $1\leq k \leq n/4$, such that: \begin{enumerate}[label=(\roman*)] \item $M_0$ is graphic or cographic and has $|E(M_0)| \leq n$, \item $M_1$, \ldots, $M_k$ are mutually disjoint regular matroids, with $|E(M_i)| \leq n/2 + 3$ for $i \in [k]$, \item $M$ can be obtained from $M_0$ by simultaneously performing a $3$-sum with $M_i$ for $i \in [k]$. \end{enumerate} \end{lemma} We call a decomposition as in Lemma~\ref{lem:star_decomposition} a \emph{star decomposition} and write $M \stackrel{\star}{=} M_0 \oplus_3 M_1 \oplus_3 \cdots \oplus_3 M_k$. For such (regular) matroids $M$, we prove the following upper bound on the extension complexity of $P(M)$. \begin{lemma} \label{lem:star_bound} There exists a constant $c_1$ such that $$ \xc(P(M)) \leq c_1 \cdot |E(M_0)|^2 + 16 \sum_{i=1}^k \xc(P(M_i)) $$ for every matroid $M$ that admits a star decomposition $M \stackrel{\star}{=} M_0 \oplus_3 M_1 \oplus_3 \cdots \oplus_3 M_k$. \end{lemma} Since the numbers of elements of $M_1$, \ldots, $M_k$ are smaller than the number of elements of $M$ by a constant factor, Lemmas~\ref{lem:star_decomposition} and \ref{lem:star_bound} are enough to prove a polynomial bound on $\xc(P(M))$. 
Details will be given below in Section~\ref{sec:main_thm}. Later in this paper, we also consider the \emph{circuit dominant} of a matroid $M$, defined as $$ P^\uparrow_{\mathrm{circuit}}(M) := \mathrm{conv}\{\chi^C \in \{0,1\}^{E(M)} \mid C \text{ is a circuit of } M\}+\mathbb R^{E(M)}_+\,. $$ We can give a $O(n^2)$-size extended formulation for this polyhedron whenever $M$ is a regular matroid on $n$ elements. Interestingly, the extended formulation can be constructed \emph{globally}, in the sense that it does not need Seymour's decomposition theorem. This is in stark contrast with the case of the independence polytope, for which we do not know how to avoid the decomposition theorem. Here is an outline of the paper. In Section~\ref{sec:decomp}, we give some background on $t$-sums for $t \in [3]$ and prove Lemma~\ref{lem:star_decomposition}. Then, in Section~\ref{sec:main_thm}, we prove Theorem~\ref{thm:main} assuming Lemma~\ref{lem:star_bound}. The proof of Lemma~\ref{lem:star_bound} occupies the rest of the paper. In Section~\ref{sec:asymmetric} we give a first \emph{asymmetric} extended formulation of $P(M)$ for regular matroids $M$ that are the $3$-sum of two regular matroids $M_1$ and $M_2$, in order to illustrate the main ideas. Unfortunately, this extended formulation is not small enough for our purposes, and we have to use more specifics of the star decomposition $M \stackrel{\star}{=} M_0 \oplus_3 M_1 \oplus_3 \cdots \oplus_3 M_k$, in particular that $M_0$ is graphic or cographic. The graphic case is done in Section~\ref{sec:graphic}, and the cographic case in Section~\ref{sec:cographic}. In Section \ref{sec:cdom}, we provide a small extended formulation for the circuit dominant of any regular matroid. Finally, we discuss some improvements and open problems in Section~\ref{sec:discussion}. Some technical details necessary for the proof of Lemma~\ref{lem:star_bound} can be found in the appendix. 
\section{Decompositions} \label{sec:decomp} The main goal of this section is to prove Lemma~\ref{lem:star_decomposition}. We start by giving a few preliminaries on $t$-sums for $t \in [3]$. \subsection{$1$-sums, $2$-sums and $3$-sums} In order to define $t$-sums for $t \in \{1,2,3\}$, we restrict to binary matroids. Recall that regular matroids are in particular binary, since they can be represented over every field. Recall also that a \emph{cycle} of a matroid is a (possibly empty) disjoint union of circuits. Clearly, every matroid is determined by its cycles. (If $M$ is a binary matroid represented by a matrix $A \in \mathbb{F}_2^{m \times n}$, then the cycles of $M$ are all solutions $x \in \mathbb{F}_2^n$ of $Ax = \mathbf{0}$.) Let $M_1$, $M_2$ be binary matroids. Following \cite{seymour1980decomposition}, we define a new binary matroid $M := M_1\Delta M_2$ with $E(M) := E(M_1) \Delta E(M_2)$ such that the cycles of $M_1 \Delta M_2$ are all the subsets of $E(M)$ of the form $C_1 \Delta C_2$, where $C_i$ is a cycle of $M_i$ for $i \in [2]$. Writing $E_i := E(M_i)$ for $i \in [2]$, we are interested in the following three cases: \begin{itemize} \item $E_1$ and $E_2$ are disjoint, and $E_1, E_2\neq \emptyset$: then we write $M=M_1\oplus_1 M_2$, and say that $M$ is the \emph{1-sum} of $M_1, M_2$; \item $E_1$ and $E_2$ share one element $\alpha$, which is not a loop or coloop of $M_1$ or $M_2$, and $|E_1|,|E_2|\geq 3$: then we write $M=M_1\oplus_2 M_2$, and say that $M$ is the \emph{2-sum} of $M_1, M_2$; \item $E_1$ and $E_2$ share a $3$-element subset $T=\{\alpha,\beta,\gamma\}$, where $T$ is a circuit of $M_1$ and $M_2$ (called a \emph{triangle}) that does not contain any cocircuit of $M_1$ or $M_2$, and $|E_1|,|E_2|\geq 7$: then we write $M=M_1\oplus_3 M_2$, and we say that $M$ is the \emph{3-sum} of $M_1, M_2$.
\end{itemize} In the following, whenever talking about $t$-sums, we implicitly assume that $M_1, M_2$, also called the \emph{parts} of the sum, satisfy the assumptions in the definition of the corresponding operation. A matroid is said to be \emph{connected} (or \emph{2-connected}) if it is not a 1-sum, and \emph{3-connected} if it is not a 2-sum or a 1-sum. A subset $F$ of a matroid $M$ is said to be connected if the restriction $M \restrict F$ is. \subsection{Star decompositions} We begin by stating a corollary of~\cite{seymour1980decomposition} that refines the decomposition theorem in the $3$-connected case, and is well-suited to our needs. Its proof can be found in the appendix. \begin{theorem}\label{thm:seymourdecomposition3conn} Let $M$ be a $3$-connected regular matroid that is not $R_{10}$. There exists a tree $\mathcal{T}$ such that each node $v \in V(\mathcal{T})$ is labeled with a graphic or cographic matroid $M_v$, each edge $vw \in E(\mathcal{T})$ has a corresponding $3$-sum $M_v \oplus_3 M_w$, and $M$ is the matroid obtained by performing all the $3$-sums operations corresponding to the edges of $\mathcal{T}$ (in arbitrary order). \end{theorem} We will also need the following easy result. \begin{lemma} \label{lem:weightedtree} Consider a tree $\mathcal{T}$ with node weights $w : V(\mathcal{T}) \to \mathbb R$, and denote by $W$ the total weight of $\mathcal{T}$. Then there is a node $v_0 \in V(\mathcal{T})$ such that each component of $\mathcal{T} - v_0$ has total weight at most $W/2$. \end{lemma} \begin{proof} Orient each edge $e \in E(\mathcal{T})$ towards the heaviest component of $\mathcal{T} - e$, breaking ties arbitrarily. Now, let $v_0$ be a sink node of this orientation, which exists since $\mathcal{T}$ is a tree. Let $\mathcal{T}_1$, \ldots, $\mathcal{T}_k$ denote the components of $\mathcal{T} - v_0$. Since $v_0$ is a sink, we have $w(\mathcal{T}_i) \leq W - w(\mathcal{T}_i)$ and hence $w(\mathcal{T}_i) \leq W/2$, for all $i \in [k]$. 
\end{proof} \begin{proof}[Proof of Lemma~\ref{lem:star_decomposition}] Let $\mathcal{T}$ be a decomposition tree for $M$, as described in Theorem~\ref{thm:seymourdecomposition3conn}. Thus each node $v \in V(\mathcal{T})$ is labeled with a graphic or cographic matroid $M_v$. We assign to each node $v$ the weight $w(v) := |E(M) \cap E(M_v)|$, so that the total weight $W$ is $n$. Pick a node $v_0$ as in Lemma~\ref{lem:weightedtree}. Let $M_0 := M_{v_0}$ be the (graphic or cographic) matroid corresponding to $v_0$. We have that $M_0$ is a minor of $M$ (see Section \ref{sec:appendixdecomp} of the appendix for definitions and further details) and thus $|E(M_0)| \leq |E(M)|$. Letting $\mathcal{T}_1$, \ldots, $\mathcal{T}_k$ denote the components of $\mathcal{T} - v_0$, define $M_i$ to be the matroid obtained by performing all the $3$-sums corresponding to the edges of $\mathcal{T}_i$. By choice of $v_0$, for $i \in [k]$, we have $|E(M_i)| \leq n/2 + 3$ (the three extra elements are those that get deleted in the $3$-sum $M_0 \oplus_3 M_i$). Finally, we need to argue that $k\leq n/4$: this is implied by the fact that each $M_i$ is part of a 3-sum, hence it has at least 7 elements, at least 4 of which are shared with $M$. Therefore, we have that $M \stackrel{\star}{=} M_0 \oplus_3 M_1 \oplus_3 \cdots \oplus_3 M_k$. \end{proof} \section{Proof of main theorem} \label{sec:main_thm} In this section, we prove Theorem~\ref{thm:main} assuming that Lemma~\ref{lem:star_bound} holds. The following technical lemma will be useful. \begin{lemma} \label{lem:convex} Let $f : [a,b] \to \mathbb R$ be a convex function. For every $\varepsilon \in [0,b-a]$, there holds $$ f(a+\varepsilon) + f(b-\varepsilon) \leq f(a) + f(b)\,. $$ \end{lemma} \begin{proof} If $a = b$ the statement is trivial, so assume $a < b$ and let $\lambda := \varepsilon/(b-a) \in [0,1]$. Then $a+\varepsilon = (1-\lambda)a + \lambda b$ and $b-\varepsilon = \lambda a + (1-\lambda)b$, so convexity gives $f(a+\varepsilon) \leq (1-\lambda)f(a) + \lambda f(b)$ and $f(b-\varepsilon) \leq \lambda f(a) + (1-\lambda)f(b)$. Summing the two inequalities yields the claim. \end{proof} We now have all the ingredients to prove our main theorem. \begin{proof}[Proof of Theorem~\ref{thm:main}] Let $M$ be a regular matroid on $n$ elements. We proceed by induction on $n$.
If $M$ is either graphic, cographic or $R_{10}$, then $\xc(P(M)) \leq c_0 \cdot n^{6}$, for $c_0\geq 2$. If $M$ is graphic or cographic, this follows from~\cite{wong1980integer,Martin91}. If $M$ is isomorphic to $R_{10}$, we can use the trivial bound $\xc(P(M)) \leq 2^n$. Next, assume that $M$ is a $1$- or $2$-sum of regular matroids $M_1$ and $M_2$. If $M$ is a 1-sum, then the bound on $\xc(P(M))$ follows directly from Lemma~\ref{lem:simplebound}, applying induction. Otherwise, we have $M=M_1\oplus_2 M_2$. For $i \in [2]$, let $n_i := |E(M_i)| \geq 3$. We get \begin{align*} \xc(P(M)) &= \xc(P(M_1 \oplus_2 M_2))\\ &\leq \xc(P(M_1)) + \xc(P(M_2)) \quad \text{(by Lemma~\ref{lem:simplebound})}\\ &\leq c_0 \cdot n_1^{6} + c_0 \cdot n_2^{6} \quad \text{(by induction)}\\ &\leq c_0 \cdot (\underbrace{n_1+n_2-3}_{=n-1})^{6} + c_0 \cdot 3^{6} \quad \text{(by Lemma~\ref{lem:convex})}\\ &\leq c_0 \cdot n^{6} \quad \text{(since $n \geq 4$)}\,. \end{align*} In the remaining case, $M$ is neither graphic, nor cographic, nor $R_{10}$, and is not a $1$- or $2$-sum. By Lemma~\ref{lem:star_decomposition}, $M$ has a star decomposition $M \stackrel{\star}{=} M_0 \oplus_3 M_1 \oplus_3 \cdots \oplus_3 M_k$. For $i \in \{0\} \cup [k]$, let $n_i := |E(M_i)|$. Notice that $3k \leq n_0 \leq n$, $7 \leq n_i \leq n/2 + 3$ for $i \in [k]$ and $\sum_{i=0}^k n_i = n + 6k$, thus $\sum_{i=1}^k n_i \leq n + 3k$.
This time, we bound $\xc(P(M))$ as follows: \begin{align*} \xc(P(M)) &= \xc(P(M_0 \oplus_3 M_1 \oplus_3 \cdots \oplus_3 M_k))\\ &\leq c_1 \cdot |E(M_0)|^2 + 16 \sum_{i=1}^k \xc(P(M_i)) \quad \text{(by Lemma~\ref{lem:star_bound})}\\ &\leq c_0 \cdot n_0^2 + 16c_0 \cdot \sum_{i=1}^k n_i^{6} \quad \text{(by induction, provided that $c_0 \geq c_1$)}\\ &\leq c_0 \cdot \underbrace{n_0^2}_{\leq n^2} + 16c_0 \cdot \big( 3 \cdot (n/2+3)^{6}+\underbrace{k}_{\leq n/4} \cdot 7^{6} \big) \quad \text{(by Lemma~\ref{lem:convex})\footnotemark}\\ &\leq c_0 \cdot n^{6} \quad \text{(if $n$ is large enough, in particular $n \geq 123$)}\,. \end{align*} \footnotetext{By applying Lemma~\ref{lem:convex} repeatedly, and by reordering, we can assume that $n_1,\dots, n_h$ for some $h$ are equal to $n/2+3$ and $n_{h+2},\dots, n_k$ are equal to 7, with $n_{h+1}$ possibly in between. Since $k\leq n/4$, a simple calculation implies $h+1\leq 3$, and the bound follows. } If $n$ is too small for the last inequality to hold, we use the direct bound $\xc(P(M)) \leq 2^n \leq c_0 \cdot n^{6}$, which holds provided that $c_0$ is large enough. \end{proof} \section{Asymmetric formulations for $3$-sums} \label{sec:asymmetric} In this section we take one big conceptual step towards a proof of Lemma~\ref{lem:star_bound}. Using the characterization of bases in a $3$-sum, it is easy to obtain an extended formulation for $P(M_1 \oplus_3 M_2)$ whose size is bounded by $c_2 \cdot \xc(P(M_1)) + c_2 \cdot \xc(P(M_2))$ for some constant $c_2 \geq 1$. We call this type of formulation \emph{symmetric}\footnote{We point out that symmetry in extended formulations was studied before, with a different meaning, see e.g.~\cite{yannakakis1991expressing,KaibelPT12}. In contrast, the adjective ``symmetric'' is used here in an illustrative way and does not have a mathematically precise meaning.}, since $M_1$ and $M_2$ play similar roles.
Unless $c_2 = 1$, symmetric formulations do not lead to a polynomial size extended formulation for $P(M)$ for all regular matroids $M$. Since the best constant we know of is $c_2 = 4$, we do not see how to prove Theorem~\ref{thm:main} in this way. Instead, we propose an \emph{asymmetric} formulation for $P(M_1 \oplus_3 M_2)$, that is, an extended formulation of size at most $c_3 \cdot \xc(P(M_1)) + c_4 \cdot \xc(P(M_2))$ where $1 \leq c_3 \leq c_4$ and $c_3$ is as small as possible, at the cost of making $c_4$ large. This is our first insight. Our intuition for asymmetric formulations mainly comes from optimization. Let $M_1$ and $M_2$ be binary matroids sharing a triangle $T := E(M_1) \cap E(M_2)$. In order to find a maximum weight independent set in $M_1 \oplus_3 M_2$ we first solve \emph{several} subproblems in $M_2$, then use this to define weights for the elements of the triangle $T$, and then solve a \emph{single} optimization problem over $M_1$, where the elements of $E(M_1)\setminus T$ keep their original weights. Eventually, this leads to an asymmetric formulation with $c_3 = 2$ and $c_4 = 16$. (Roughly speaking, the reason why this gives $c_3 = 2$ and not $c_3 = 1$ is that in order to convert the optimization algorithm into an extended formulation, we need to distinguish between two types of objective functions. Actually, our point of view below will be slightly different.) Next, we quickly explain how $c_3$ can be lowered to $1$ when the term $\xc(P(M_1))$ is replaced by the extension complexity of a certain \emph{pair} of polytopes depending on $M_1$ and $T$. This is our second insight, and will serve as a conceptual basis for our proof of Lemma~\ref{lem:star_bound}. Finally, we discuss how things change when, instead of being defined by a single $3$-sum, $M$ is defined by a star decomposition. Hence, instead of having a single triangle $T$, we will have $k \geq 1$ disjoint triangles $T_1$, \ldots, $T_k$.
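The $\Delta$-sum from Section~\ref{sec:decomp} can be experimented with directly on small graphic matroids, whose cycles are exactly the even-degree edge subsets. Below is a brute-force sketch gluing two copies of $M(K_4)$ along a triangle; the vertex labels and the choice of graphs are illustrative, and note that $K_4$ has only $6$ elements, fewer than the $|E_i| \geq 7$ required of a genuine $3$-sum, so this is the raw $\Delta$ operation:

```python
from itertools import combinations

def cycles(vertices, edges):
    """Cycles of the graphic matroid M(G): edge subsets in which every vertex
    has even degree (the GF(2) cycle space, including the empty set)."""
    cyc = set()
    for k in range(len(edges) + 1):
        for sub in combinations(edges, k):
            deg = dict.fromkeys(vertices, 0)
            for u, v in sub:
                deg[u] += 1
                deg[v] += 1
            if all(d % 2 == 0 for d in deg.values()):
                cyc.add(frozenset(sub))
    return cyc

def delta_sum(cycles1, cycles2, ground):
    """Cycles of M1 Delta M2: all C1 ^ C2 that land inside E(M1) Delta E(M2)."""
    return {C1 ^ C2 for C1 in cycles1 for C2 in cycles2 if C1 ^ C2 <= ground}

# Two copies of K4 sharing the triangle T = {ab, bc, ac}; T is deleted in the sum.
T = [('a', 'b'), ('b', 'c'), ('a', 'c')]
K4_d = T + [('a', 'd'), ('b', 'd'), ('c', 'd')]
K4_e = T + [('a', 'e'), ('b', 'e'), ('c', 'e')]
ground = frozenset(set(K4_d) ^ set(K4_e))  # the six edges of K_{2,3}

M = delta_sum(cycles('abcd', K4_d), cycles('abcde', K4_e), ground)
# The result is the cycle matroid of K_{2,3}: a cycle space of dimension 2.
print(len(M) == 4 and M == cycles('abcde', sorted(ground)))  # True
```

This matches the graph-theoretic picture: gluing two $K_4$'s along a triangle and deleting the triangle edges leaves $K_{2,3}$, whose cycle matroid the $\Delta$-sum reproduces.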
\subsection{Preliminaries} We state some facts on $3$-sums that will be useful below. If $M$ is a matroid and $e \in E(M)$, we denote by $M \delete e$ the matroid obtained from $M$ by deleting $e$ and by $M \contract e$ the matroid obtained from $M$ by contracting $e$. These notations carry over to subsets $F \subseteq E(M)$. Also, recall that $M \restrict F$ denotes the restriction of $M$ to $F$. For the rest of the section, we consider a binary matroid $M$ such that $M = M_1\oplus_3 M_2$, where $M_1$ and $M_2$ are binary matroids. Let $T := E(M_1) \cap E(M_2)$ be the triangle on which $M_1$ and $M_2$ are attached to form their $3$-sum. Our first lemma lists some useful well-known facts. We refer to \cite{oxley2006matroid} and \cite{schrijver2003combinatorial} for proofs. \begin{lemma}\label{lem:3sumfacts} If $M=M_1\oplus_3 M_2$, then the following hold. \begin{enumerate}[label=(\roman*)] \item $\mbox{rk}(M)=\mbox{rk}(M_1)+\mbox{rk}(M_2)-2$. \item The flats of $M$ are of the form $F_1\Delta F_2$, where $F_i$ is a flat of $M_i$ for $i \in [2]$, with $F_1\cap T=F_2\cap T$. \item The circuits of $M$ are of the form $C_1\Delta C_2$, where $C_i$ is a circuit of $M_i$ for $i \in [2]$, with $C_1\cap T=C_2\cap T$. \item Let $F \subseteq E(M)$ such that $F \subseteq E(M_1)$ (resp.\ $F \subseteq E(M_2)$). Then $M \restrict F = M_1 \restrict F$ (resp.\ $M \restrict F = M_2 \restrict F$). In particular, $I \subseteq F$ is an independent set of $M$ if and only if it is an independent set of $M_1$ (resp.\ $M_2$). \end{enumerate} \end{lemma} Our next lemma gives a characterization of the bases of a $3$-sum. Its proof can be found in the appendix. \begin{lemma}\label{lem:3sumbases} Let $M=M_1\oplus_3 M_2$ and let $T := E(M_1) \cap E(M_2)$.
A subset $B \subseteq E(M)$ is a basis of $M$ if and only if one of the following holds for some $i \in [2]$: \begin{enumerate}[label=(\roman*)] \item $B = B_i \cup (B_{3-i}-t_1-t_2)$, where $B_i$ is a basis of $M_i$ disjoint from $T$ and $B_{3-i}$ is a basis of $M_{3-i}$ containing two elements $t_1, t_2 \in T$. \item $B = (B_i - t_1) \cup (B_{3-i} - t_2)$, where $B_i$ is a basis of $M_i$ intersecting $T$ in a single element $t_1$, $B_{3-i}$ is a basis of $M_{3-i}$ intersecting $T$ in a single element $t_2$ distinct from $t_1$, and moreover $B_i - t_1 + t_3$ is a basis of $M_i$ and $B_{3-i} - t_2 + t_3$ is a basis of $M_{3-i}$ where $t_3$ denotes the third element of $T$. \end{enumerate} \end{lemma} We conclude these preliminaries with properties of connected flats in a $3$-sum for later use. Our interest in these flats is motivated by the well-known fact that for any (loopless) matroid $M$, $$ P(M)=\{x \in \mathbb R^{E(M)}_+ \mid \forall \text{ connected flat } F\subseteq E(M) : x(F) \leq \mbox{rk}(F)\}\,. $$ See, e.g., \cite{schrijver2003combinatorial}. We refer the reader to the appendix for the proof of Lemma~\ref{lem:3sumflats}. \begin{lemma}\label{lem:3sumflats} Let $M=M_1\oplus_3 M_2$ and let $T := E(M_1) \cap E(M_2)$. If $F$ is a connected flat of $M$, then $F$ satisfies one of the following. \begin{enumerate}[label=(\roman*)] \item $F \subseteq E(M_i)$ for some $i \in [2]$ and $F$ is a connected flat of $M_i$. \item There are connected flats $F_1$, $F_2$ of $M_1$, $M_2$ respectively such that $F = F_1 \Delta F_2$, $F_1 \cap T = F_2\cap T$ is a singleton, and $\mbox{rk}(F) = \mbox{rk} (F_1)+\mbox{rk}(F_2)-1$. \item There are connected flats $F_1$, $F_2$ of $M_1$, $M_2$ respectively such that $F = F_1 \Delta F_2$, $F_1 \cap T = F_2\cap T$ is the whole triangle $T$, and $\mbox{rk}(F) = \mbox{rk}(F_1)+\mbox{rk}(F_2)-2$.
\end{enumerate} \end{lemma} For simplicity (and without loss of generality) throughout the paper we assume that the matroids considered have no loops. \subsection{A first asymmetric formulation} We start by recalling a well-known result by Balas \cite{balas1979disjunctive} that we will need below. We state a refinement of it that is proved in \cite{weltge2015sizes}. \begin{proposition}\label{prop:Balas} Let $P_1,\dots,P_k\subseteq\mathbb R^n$ be polytopes of dimension at least 1. Then $$\xc(\mathrm{conv} \bigcup_{i=1}^k P_i)\leq \sum_{i=1}^k \xc(P_i).$$ \end{proposition} Let $M_1$, $M_2$ be binary matroids sharing a triangle $T := \{\alpha,\beta,\gamma\}$, and let $M := M_1 \oplus_3 M_2$. We give an extended formulation for $P(M)$ showing that $$ \xc(P(M_1\oplus_3 M_2))\leq 2\xc(P(M_1))+16\xc(P(M_2))\,. $$ For $X \subseteq T$, we consider the convex hull $P(M_2 \delete T,X)$ of all characteristic vectors $\chi^I \in \{0,1\}^{E(M_2 \delete T)}$ where $I \subseteq E(M_2 \delete T)$ is an independent set of $M_2$ whose span $F$ satisfies $F \cap T \subseteq X$. Observe that \begin{align*} P(M_2 \delete T,\emptyset) &= P(M_2 \contract \alpha \contract \beta \contract \gamma) = P(M_2 \contract T)\,,\\ P(M_2 \delete T,T) &= P(M_2 \delete \alpha \delete \beta \delete \gamma) = P(M_2 \delete T)\,,\\ P(M_2 \delete T,\{\alpha\}) &= P(M_2 \contract \beta \delete \alpha \delete \gamma)\cap P(M_2 \contract \gamma \delete \alpha \delete \beta) \end{align*} and similarly for $P(M_2 \delete T,\{\beta\})$ and $P(M_2 \delete T,\{\gamma\})$ (the last equality follows from matroid intersection). \begin{proposition}\label{prop:asymmetric} Let $M_1$, $M_2$ be binary matroids sharing a triangle $T := \{\alpha,\beta,\gamma\}$, and let $M := M_1 \oplus_3 M_2$.
Define $P'(M_2)$ as $$ P'(M_2) := \mathrm{conv} \left( P(M_2 \delete T,\emptyset) \times \{\mathbf{0}\}\ \cup\ \bigcup_{t \in T} P(M_2 \delete T,\{t\}) \times \{\mathbf{e}_{t}\} \ \cup\ P(M_2 \delete T,T) \times \{\mathbf{e}_\alpha + \mathbf{e}_\beta\}\right) $$ and $P''(M_2)$ similarly, replacing the last polytope in the union by $P(M_2 \delete T,T) \times \{\mathbf{e}_\beta + \mathbf{e}_\gamma\}$. If we let \begin{align*} \label{eq:EF} Q(M) := \Big\{(x^1,x^2)\in \mathbb R^{E(M)} \mid \exists \ x^{T'}, x^{T''}\in \mathbb R^{T} :\:& (x^1,x^{T'})\in P(M_1),\ (x^1,x^{T''})\in P(M_1)\\ & (x^2,x^{T'})\in P'(M_2),\ (x^2,x^{T''})\in P''(M_2)\Big\}\, \end{align*} then $P(M)=Q(M)$. In particular, we have $\xc(P(M_1\oplus_3 M_2))\leq 2\xc(P(M_1))+16\xc(P(M_2))$. \end{proposition} \begin{proof} To prove $P(M) \subseteq Q(M)$, we show that $B(M) \subseteq Q(M)$ using Lemma~\ref{lem:3sumbases}, and observe that $Q(M)$ is of antiblocking type (this follows from the fact that $P(N)$ is of antiblocking type for every matroid $N$). Let $B \in \mathcal{B}(M)$ be a basis of $M$. We distinguish cases as in Lemma~\ref{lem:3sumbases}. For the sake of conciseness, we skip the cases that follow from other cases by symmetry, and omit the conditions on the bases $B_1$ and $B_2$ (these can be found in the statement of the lemma).\medskip \noindent (i) First, assume $B = B_1 \cup (B_2 - \alpha - \beta)$ and let $I_2 := B_2 - \alpha - \beta$. The span of $I_2$ (in $M_2$) is disjoint from $T$. Hence, we have $\chi^{I_2} \in P(M_2 \delete T, \emptyset)$ and $(\chi^{I_2},\mathbf{0}) \in P'(M_2) \cap P''(M_2)$. Then it is easy to check that $\chi^B \in Q(M)$ by setting $x^{T'} = x^{T''} := \mathbf{0}$. Next, assume $B = (B_1 - \alpha - \beta) \cup B_2$. Then it is easy to check that $\chi^B \in Q(M)$ by setting $x^{T'} := \mathbf{e}_\alpha + \mathbf{e}_\beta$ and $x^{T''} := \mathbf{e}_\beta + \mathbf{e}_\gamma$.\medskip \noindent (ii) $B = (B_1 - \alpha) \cup (B_2 - \beta)$. 
Then we see that $\chi^B \in Q(M)$ by setting $x^{T'} = x^{T''} := \mathbf{e}_\alpha$.\medskip To prove $Q(M) \subseteq P(M)$, let $F \subseteq E(M)$ be any connected flat and let $x=(x^1,x^2)$ be any point of $Q(M)$. We have to show that $x^1(F\cap E(M_1))+x^2(F\cap E(M_2)) \leq \mbox{rk}(F)$. We use Lemma~\ref{lem:3sumflats}. If $F \subseteq E(M_1)$ or $F \subseteq E(M_2)$, there is nothing to show. Hence we may focus on cases (ii) and (iii) of the lemma. Therefore, $F = F_1 \Delta F_2$, where $F_i$ is a connected flat of $M_i$ for $i \in [2]$.\medskip \noindent (ii) $F_1 \cap T = F_2 \cap T$ is a singleton, and $\mbox{rk}(F)=\mbox{rk}(F_1)+\mbox{rk}(F_2)-1$. First, assume that $F_1 \cap T = F_2 \cap T = \{\alpha\}$. Let $x^{T'}$ be such that $(x^1,x^{T'})\in P(M_1)$ and $(x^2,x^{T'}) \in P'(M_2)$. Clearly, we have $x^1(F\cap E(M_1))+x^{T'}_{\alpha} \leq \mbox{rk}(F_1)$. We claim that \begin{equation} \label{eq:2ndpart} x^2(F \cap E(M_2)) \leq \mbox{rk}(F_2)-1+x^{T'}_{\alpha}\,. \end{equation} This concludes the proof for this case since, summing the two inequalities, we obtain the desired inequality. To prove the claim, we may assume that $(x^2,x^{T'})$ is a vertex of $P'(M_2)$. We consider all the possible subcases one after the other. \begin{itemize} \item If $(x^2,x^{T'}) \in P(M_2 \delete T,\emptyset) \times \{\mathbf{0}\}$, then since the rank of $F \cap E(M_2)$ in $M_2 \contract T$ is $\mbox{rk}(F_2)-1$, \eqref{eq:2ndpart}~holds. \item If $(x^2,x^{T'}) \in P(M_2 \delete T, \{\alpha\}) \times \{\mathbf{e}_{\alpha}\}$, then $x^{T'}_{\alpha}=1$ and \eqref{eq:2ndpart}~holds. \item If $(x^2,x^{T'}) \in P(M_2 \delete T, \{\beta\}) \times \{\mathbf{e}_{\beta}\}$, then in particular $x^2\in P(M_2 \contract \alpha \delete \beta \delete \gamma)$, and the rank of $F \cap E(M_2)$ in $M_2 \contract \alpha \delete \beta \delete \gamma$ is $\mbox{rk}(F_2)-1$. Hence, \eqref{eq:2ndpart}~holds.
\item If $(x^2,x^{T'}) \in P(M_2 \delete T, \{\gamma\}) \times \{\mathbf{e}_{\gamma}\}$ then a similar argument as in the previous case applies. \item If $(x^2,x^{T'}) \in P(M_2 \delete T, T) \times \{\mathbf{e}_{\alpha} + \mathbf{e}_{\beta}\}$, then $x^{T'}_{\alpha}=1$ and \eqref{eq:2ndpart}~holds. \end{itemize} The above argument can be easily adapted in case $F_1 \cap F_2 = \{\beta\}$. If $F_1 \cap F_2 = \{\gamma\}$, one needs to use the variables $x^{T''}$ instead. We can show similarly as above that, whenever $(x^1,x^{T''})\in P(M_1)$ and $(x^2,x^{T''}) \in P''(M_2)$, $$ x^2(F \cap E(M_2)) \leq \mbox{rk}(F_2)-1+x^{T''}_{\gamma}\,. $$ Together with $x^1 (F \cap E(M_1))+x^{T''}_{\gamma} \leq \mbox{rk}(F_1)$, this concludes this case.\medskip \noindent (iii) $F_1 \cap T = F_2 \cap T = T$ and $\mbox{rk}(F)=\mbox{rk}(F_1)+\mbox{rk}(F_2)-2$. Again, let $x^{T'}$ be such that $(x^1,x^{T'})\in P(M_1)$ and $(x^2,x^{T'}) \in P'(M_2)$. We have $x^1(F\cap E(M_1))+x^{T'}_{\alpha}+x^{T'}_{\beta}+x^{T'}_{\gamma}\leq \mbox{rk}(F_1)$. We claim that \begin{equation} \label{eq:2ndpartbis} x^2(F\cap E(M_2)) \leq \mbox{rk}(F_2)-2+x^{T'}_{\alpha}+x^{T'}_{\beta}+x^{T'}_{\gamma} \end{equation} holds, which concludes the proof for this case as summing the two inequalities we get the desired inequality. As above, we consider all subcases in order to establish~\eqref{eq:2ndpartbis}. \begin{itemize} \item If $(x^2,x^{T'}) \in P(M_2 \delete T,\emptyset) \times \{\mathbf{0}\}$, then since the rank of $F \cap E(M_2)$ in $M_2 \contract T$ is $\mbox{rk}(F_2)-2$, \eqref{eq:2ndpartbis}~holds. \item If $(x^2,x^{T'}) \in P(M_2 \delete T, \{t\}) \times \{\mathbf{e}_t\}$ for some $t \in T$ then $x^{T'}_{t}=1$. Since the rank of $F \cap E(M_2)$ in the corresponding minor of $M_2$ is $\mbox{rk}(F_2)-1$, \eqref{eq:2ndpartbis} holds. 
\item If $(x^2,x^{T'}) \in P(M_2 \delete T, T) \times \{\mathbf{e}_{\alpha} + \mathbf{e}_{\beta}\}$, then $x^{T'}_{\alpha}=x^{T'}_{\beta}=1$ and \eqref{eq:2ndpartbis}~holds. \end{itemize} \end{proof} \subsection{Making the formulation smaller} In the upper bound on $\xc(P(M_1 \oplus_3 M_2))$ from Proposition~\ref{prop:asymmetric}, the term $2 \xc(P(M_1))$ comes from the constraints $(x^1,x^{T'}) \in P(M_1)$, $(x^1,x^{T''}) \in P(M_1)$ that are part of the extended formulation. In order to make this term smaller, and hence the formulation more compact on the $M_1$ side, it suffices to find a smaller extended formulation for the polytope $$ Q_T(M_1) := \{(x^1,x^{T'},x^{T''}) \mid (x^1,x^{T'}) \in P(M_1), (x^1,x^{T''}) \in P(M_1)\}\,. $$ Now with a bit more thought, we see that it is not necessary to express $Q_T(M_1)$ exactly. In fact, the proof goes through as long as our extended formulation for that part is contained in $Q_T(M_1)$ and contains $$ P_T(M_1) := \mathrm{conv}\{ (\chi^I,\chi^{I'},\chi^{I''}) \mid I \cup I',\ I \cup I'' \in \mathcal{I}(M_1),\ I'=I'' \mbox{ or } I'=\{\alpha,\beta\},\ I''=\{\beta,\gamma\}\}\,. $$ In other words, all we need is an extended formulation \emph{for the pair} of nested polytopes $(P_T(M_1),Q_T(M_1))$. Before stating our next result, we give some terminology relative to pairs of polytopes. If $P \subseteq Q \subseteq \mathbb R^d$ are nested polytopes, an \emph{extension} of the pair $(P,Q)$ is an extension of some polytope $R$ such that $P \subseteq R \subseteq Q$. Similarly, an \emph{extended formulation} for $(P,Q)$ is an extended formulation for such a polytope~$R$. The \emph{extension complexity} of $(P,Q)$ is defined as $\xc(P,Q) := \min \{\xc(R) \mid R$ polytope, $P \subseteq R \subseteq Q\}$. The proof of the following is simple and omitted. \begin{proposition} \label{prop:asymmetricpair} Let $M_1$, $M_2$ be binary matroids sharing a triangle $T$, and let $M := M_1 \oplus_3 M_2$.
Let $P'(M_2), P''(M_2)$ be defined as in Proposition~\ref{prop:asymmetric}, and $P_T(M_1), Q_T(M_1)$ as above. If $R_T(M_1)$ is any polytope such that $P_T(M_1)\subseteq R_T(M_1)\subseteq Q_T(M_1)$, then \begin{align*} P(M)=\{(x^1,x^2)\in \mathbb R^{E(M)} \mid \exists \ x^{T'}, x^{T''} \in \mathbb R^{T} :\: &(x^1,x^{T'},x^{T''})\in R_T(M_1)\\ & (x^2,x^{T'})\in P'(M_2), (x^2,x^{T''})\in P''(M_2)\}. \end{align*} In particular, we have $\xc(P(M_1\oplus_3 M_2))\leq \xc(P_T(M_1),Q_T(M_1))+16\xc(P(M_2))$. \end{proposition} \subsection{Dealing with several $3$-sums simultaneously} We would now like to further extend the above results to the setting where $M = M_0 \oplus_3 \dots \oplus_3 M_k$ for some binary matroids $M_0$, $M_1$, \ldots, $M_k$ such that each $M_i$, $i \in [k]$ shares a triangle $T_i := \{\alpha_i,\beta_i,\gamma_i\}$ with $M_0$ and is disjoint from $M_j$ for all $j \in [k]$ such that $j \neq i$. Notice that a true star decomposition satisfies more conditions (see (i) and (ii) in Lemma~\ref{lem:star_decomposition}). In particular, $M_0$ is required to be graphic or cographic. This will be exploited in the next section. Here, $M_0$ can be any binary matroid. For simplicity, we partition $E(M_0)$ into $T_1, \dots, T_k$ and $E_0 := E(M_0) \setminus (T_1 \cup \cdots \cup T_k)$. 
We let \begin{align*} P_{T_1,\dots,T_k}(M_0) :=&\mathrm{conv}\Big\{(\chi^{J_0},\chi^{J_1'}, \chi^{J_1''},\dots, \chi^{J_k'}, \chi^{J_k''}) \in \mathbb R^{E_0} \times \mathbb R^{T_1} \times \mathbb R^{T_1} \times \cdots \times \mathbb R^{T_k} \times \mathbb R^{T_k} \mid\\ &\qquad \qquad \forall J_1^*, \ldots, J_k^* : \forall i \in [k] : J_i^* \in \{J_i',J_i''\} : J_0 \cup J_1^* \cup \dots \cup J_k^* \in \mathcal{I}(M_0), \text{ and}\\ &\qquad \qquad \forall i \in [k] : J_i'=J_i'' \text{ or } J_i'=\{\alpha_i,\beta_i\},\ J_i''=\{\beta_i,\gamma_i\}\Big\}\,,\\ Q_{T_1,\dots,T_k}(M_0) :=&\Big\{ (x^0,x^{T_1'},x^{T_1''},\ldots,x^{T_k'},x^{T_k''}) \in \mathbb R^{E_0} \times \mathbb R^{T_1} \times \mathbb R^{T_1} \times \cdots \times \mathbb R^{T_k} \times \mathbb R^{T_k} \mid \\ &\qquad \forall T_1^*, \ldots, T_k^* : \forall i \in [k] : T_i^* \in \{T_i',T_i''\} : (x^0,x^{T_1^*},\dots,x^{T_k^*}) \in P(M_0)\Big\}. \end{align*} \begin{proposition}\label{prop:asymmetricstar} Let $M = M_0 \oplus_3 M_1 \oplus_3 \cdots \oplus_3 M_k$ where $M_0, M_1, \ldots, M_k$ are binary matroids such that $M_1$, \ldots, $M_k$ are mutually disjoint and each $M_i$, $i \in [k]$, shares the triangle $T_i$ with $M_0$. For $i \in [k]$, define $P'(M_i), P''(M_i)$ as in Proposition~\ref{prop:asymmetric}. If $R_{T_1,\dots,T_k}(M_0)$ is any polytope such that $P_{T_1,\dots,T_k}(M_0)\subseteq R_{T_1,\dots,T_k}(M_0)\subseteq Q_{T_1,\dots,T_k}(M_0)$, then \begin{align} \label{eq:EFstar} P(M)=\Big\{(x^0,x^1,\dots,x^k)\in \mathbb R^{E(M)} \mid&\: \exists \ x^{T_1'}, x^{T_1''} \in \mathbb R^{T_1}, \ldots, x^{T_k'}, x^{T_k''} \in \mathbb R^{T_k} : &\\ \nonumber&\quad (x^0,x^{T_1'},x^{T_1''}, \dots, x^{T_k'},x^{T_k''})\in R_{T_1,\dots,T_k}(M_0)&\\ \nonumber&\quad \forall i \in [k] : (x^i,x^{T_i'})\in P'(M_i),\ (x^i,x^{T_i''})\in P''(M_i) \Big\}. \end{align} In particular, we have $\xc(P(M)) \leq \xc(P_{T_1,\dots,T_k}(M_0),Q_{T_1,\dots,T_k}(M_0))+16\sum_{i=1}^k\xc(P(M_i))$. \end{proposition} \begin{proof} We will proceed by induction on $k$.
Notice that the base case $k=1$ is Proposition~\ref{prop:asymmetricpair}. Let $k>1$, and let $M' := M_0\oplus_3\dots \oplus_3 M_{k-1}$, so that $M = M'\oplus_3 M_k$, with $T_k$ being the common triangle. Denote by $Q(M)$ the polytope in the right-hand side of~\eqref{eq:EFstar}. By induction, we have that $P(M')=Q(M')$, which we will use below. Let \begin{align*} R(M') := \Big\{ (x^0,x^1,\dots,x^{k-1}, x^{T_k'},x^{T_k''}) \mid & \ \exists \ x^{T_1'},x^{T_1''} \in \mathbb R^{T_1}, \dots, x^{T_{k-1}'}, x^{T_{k-1}''} \in \mathbb R^{T_{k-1}} : &\\ & \quad (x^0,x^{T_1'},x^{T_1''}, \dots, x^{T_k'},x^{T_k''})\in R_{T_1,\dots,T_k}(M_0)&\\ & \quad \forall \ i \in [k-1] : (x^i,x^{T_i'})\in P'(M_i),\ (x^i,x^{T_i''})\in P''(M_i)\Big\}. \end{align*} We claim that $P_{T_k}(M')\subseteq R(M')\subseteq Q_{T_k}(M')$. Then, by Proposition~\ref{prop:asymmetricpair}, we have that \begin{align*} P(M)=\{(x^0,x^1,\dots,x^k)\in \mathbb R^{E(M)} \mid \ \exists \ x^{T_k'}, x^{T_k''} \in \mathbb R^{T_k} :\:& (x^0,x^1,\dots,x^{k-1}, x^{T_k'},x^{T_k''})\in R(M') \\ & (x^k,x^{T_k'})\in P'(M_k),\ (x^k,x^{T_k''}) \in P''(M_k) \}. \end{align*} But the latter, by definition of $R(M')$, is exactly $Q(M)$, which concludes the proof. We prove the claim below. To show $P_{T_k}(M')\subseteq R(M')$, one proceeds as in the proof of Proposition~\ref{prop:asymmetric}, by showing that for every vertex of $P_{T_k}(M')$ there are choices for $x^{T_1'}, x^{T_1''}, \ldots, x^{T_{k-1}'},x^{T_{k-1}''}$ that satisfy all the constraints in $R(M')$. To show $R(M')\subseteq Q_{T_k}(M')$, it suffices to prove that whenever $(x^0,x^1,\dots,x^{k-1},x^{T_k'},x^{T_k''}) \in R(M')$, we have $(x^0,x^1,\dots,x^{k-1}, x^{T_k'}) \in P(M')$ and $(x^0,x^1,\dots,x^{k-1}, x^{T_k''}) \in P(M')$. Since $R_{T_1,\ldots,T_k}(M_0) \subseteq Q_{T_1,\ldots,T_k}(M_0)$, we have $(x^0,x^1,\dots,x^{k-1}, x^{T_k'}) \in Q(M')$ and $(x^0,x^1,\dots,x^{k-1},x^{T_k''}) \in Q(M')$. Using $P(M') = Q(M')$, this concludes the proof of the claim.
\end{proof} \section{Smaller formulation for star decompositions: the graphic case} \label{sec:graphic} In this section we first review Wong's extended formulation for the spanning tree polytope~\cite{wong1980integer}, which will be the basis for our extended formulation of the independence polytope of any regular matroid $M$ that has a star decomposition $M \stackrel{\star}{=} M_0 \oplus_3 M_1 \oplus_3 \cdots \oplus_3 M_k$. Then, we prove Lemma~\ref{lem:star_bound} in case $M_0$ is a graphic matroid. The case where $M_0$ is a cographic matroid will be addressed in the next section. \subsection{Wong's extended formulation for the spanning tree polytope}\label{sec:stpwong} Let $D$ be a directed graph, and $r$ any of its nodes, which we call the \emph{root}. An $r$-arborescence of $D$ is an inclusion-wise minimal subset of arcs of $D$ containing, for every node $v$ distinct from $r$, at least one directed path from $r$ to $v$. The \emph{$r$-arborescence polytope} $P_{r\mathrm{-arborescence}}(D)$ is the convex hull of the characteristic vectors of $r$-arborescences of $D$. It is well known~\cite[Corollary 52.3a]{schrijver2003combinatorial} that one can express its dominant as $$ P^\uparrow_{r\mathrm{-arborescence}}(D) = \{c \in \mathbb R^{A(D)}_+ \mid \forall S \subsetneq V(D),\ r \in S : c(\delta^{\mathrm{out}}(S)) \geq 1\} $$ and that $P_{r\mathrm{-arborescence}}(D)$ is the face of $P^\uparrow_{r\mathrm{-arborescence}}(D)$ defined by the single valid inequality $c(A(D)) \geq |V(D)| - 1$ (or equivalently, by the valid inequalities $c(\delta^{\mathrm{in}}(v)) \geq 1$ for $v \in V(D) - r$ and $c_a \geq 0$ for $a \in \delta^{\mathrm{in}}(r)$).
The $r$-arborescence dominant has a canonical flow-based compact extended formulation in which every non-root node $v$ has a unit flow $\phi^v \in \mathbb R^{A(D)}$ from $r$ to $v$, and the variables $c_a$ of the $r$-arborescence dominant act as capacities: \begin{align} \nonumber P^\uparrow_{r\mathrm{-arborescence}}(D) = \Big\{c\in \mathbb R^{A(D)} \mid\:& \forall \ v \in V(D)-r : \ \exists \ \phi^v \in \mathbb R^{A(D)} :\\ &\quad \phi^v(\delta^{\mathrm{out}}(r)) - \phi^v(\delta^{\mathrm{in}}(r)) = 1, \label{constr:wongunit}\\ &\quad \forall \ u \in V(D) - r - v : \phi^v(\delta^{\mathrm{out}}(u)) - \phi^v(\delta^{\mathrm{in}}(u)) = 0, \label{constr:wongflowu}\\ &\quad \mathbf{0} \leq \phi^v \leq c \label{constr:wongcapacity} \Big\} \end{align} Let $G$ be a connected graph. Wong's formulation for the spanning tree polytope $P_{\mathrm{spanning~tree}}(G)$ can be obtained by bidirecting each edge of $G$ to get a directed graph $D$, picking an arbitrary root $r \in V(D)$, and then regarding spanning trees as ``undirected $r$-arborescences''. Formally, $$ P_{\mathrm{spanning~tree}}(G) = \{x \in \mathbb R^{E(G)} \mid \exists \ c \in P_{r\mathrm{-arborescence}}(D) : \forall \ uv \in E(G) : x_{uv} = c_{(u,v)} + c_{(v,u)}\}\,. $$ The independence polytope of $M(G)$ can then be expressed as follows: \begin{align*} P(M(G)) &= \{x \in \mathbb R^{E(G)} \mid \exists \ y \in P_{\mathrm{spanning~tree}}(G) : \mathbf{0} \leq x \leq y\}\\ &= \{x \in \mathbb R^{E(G)} \mid \exists \ c \in P_{r\mathrm{-arborescence}}(D) : \forall \ uv \in E(G) : 0 \leq x_{uv} \leq c_{(u,v)} + c_{(v,u)}\}\,. \end{align*} \subsection{Tweaking Wong's formulation} \label{sec:tweaking_wong} Assume that $M \stackrel{\star}{=} M_0 \oplus_3 M_1 \oplus_3 \cdots \oplus_3 M_k$ is a 3-connected regular matroid with $M_0$ graphic. Let $G$ be a graph such that $M_0 = M(G)$. One can show (see Section \ref{sec:appendixdecomp}) that $M_0$ is connected, implying that $G$ is connected (actually, even $2$-connected).
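Before tweaking the formulation, it may help to see it run on a tiny instance. The following sketch (our own toy example; the variable layout, names, and the use of scipy are assumptions for illustration only) assembles the flow-based formulation above for the triangle graph and maximizes a weight function over $P(M(G))$, which should return the weight of a maximum-weight spanning tree:

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: triangle graph on vertices {0,1,2}.
edges = [(0, 1), (1, 2), (0, 2)]
w = np.array([1.0, 2.0, 3.0])                # best forest: edges 12 and 02, weight 5
arcs = edges + [(v, u) for (u, v) in edges]  # bidirect every edge
nV, nE, nA = 3, len(edges), len(arcs)
root, nonroot = 0, [1, 2]

# Variable layout: x (nE), capacities c (nA), one flow phi^v (nA) per v != root.
n = nE + nA + len(nonroot) * nA
coff = nE
foff = {v: nE + nA + i * nA for i, v in enumerate(nonroot)}
A_ub, b_ub, A_eq, b_eq = [], [], [], []

# x_uv <= c_(u,v) + c_(v,u)   (projection step of Wong's formulation)
for e, (u, v) in enumerate(edges):
    row = np.zeros(n)
    row[e] = 1.0
    row[coff + arcs.index((u, v))] = -1.0
    row[coff + arcs.index((v, u))] = -1.0
    A_ub.append(row); b_ub.append(0.0)

# c(A(D)) = |V(D)| - 1 cuts the r-arborescence polytope out of its dominant.
row = np.zeros(n); row[coff:coff + nA] = 1.0
A_eq.append(row); b_eq.append(nV - 1)

for v in nonroot:
    for u in range(nV):                      # unit flow phi^v from root to v
        if u == v:
            continue
        row = np.zeros(n)
        for a, (p, q) in enumerate(arcs):
            if p == u: row[foff[v] + a] += 1.0   # arc leaves u
            if q == u: row[foff[v] + a] -= 1.0   # arc enters u
        A_eq.append(row); b_eq.append(1.0 if u == root else 0.0)
    for a in range(nA):                      # 0 <= phi^v <= c
        row = np.zeros(n)
        row[foff[v] + a] = 1.0; row[coff + a] = -1.0
        A_ub.append(row); b_ub.append(0.0)

obj = np.zeros(n); obj[:nE] = -w             # maximize w.x
res = linprog(obj, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=np.array(A_eq),
              b_eq=b_eq, bounds=[(0, None)] * n, method="highs")
print(-res.fun)                              # 5.0
```

The equality $c(A(D)) = |V(D)| - 1$ is exactly the face-defining inequality mentioned above, selecting the $r$-arborescence polytope inside its dominant.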
Let $T_i$ denote the common triangle of $M_0$ and $M_i$ for $i \in [k]$. Hence, $T_1$, \ldots, $T_k$ are (the edge sets of) $k$~edge-disjoint triangles ($3$-cliques) in graph $G$. Using as a basis Wong's formulation for $P_{\mathrm{spanning~tree}}(G)$, we construct an extended formulation for the pair $(P_{T_1,\dots,T_k}(M_0), Q_{T_1,\dots,T_k}(M_0))$ whose size is $O(|V(G)| \cdot |E(G)|)$. That is, we define a polytope $R_{T_1,\dots,T_k}(M_0)$ containing $P_{T_1,\dots,T_k}(M_0)$ and contained in $Q_{T_1,\dots,T_k}(M_0)$ by giving a size-$O(|V(G)| \cdot |E(G)|)$ extended formulation for it. As before, we partition the edges of $G$ into $T_1$, \ldots, $T_k$ and $E_0 := E(G) \setminus (T_1 \cup \cdots \cup T_k)$. Again, let $D$ denote the directed graph obtained by bidirecting each edge of $G$, and let $r \in V(D)$ be an arbitrary root. For $i \in [k]$, let $B_i := \{(u,v) \mid uv \in T_i\}$ denote the (arc set of the) bidirected triangle obtained from $T_i$. We partition the arcs of $D$ into $B_1$, \ldots, $B_k$ and $A_0 := A(D) \setminus (B_1 \cup \cdots \cup B_k)$. In addition to the variables $x^0 \in \mathbb R^{E_0}$, $x^{T'_1}, x^{T''_1} \in \mathbb R^{T_1}$, \ldots, $x^{T'_k}, x^{T''_k} \in \mathbb R^{T_k}$, our formulation has \begin{itemize} \item arc capacities $c^0 \in \mathbb R^{A_0}$, $c^{T'_1}, c^{T''_1} \in \mathbb R^{B_1}$, \ldots, $c^{T'_k}, c^{T''_k} \in \mathbb R^{B_k}$, \item a unit flow $\phi^v \in \mathbb R^{A(D)}$ from $r$ to $v$ for each $v \in V(D) - r$, \item a circulation $\Delta^v_i \in \mathbb R^{B_i}$ for each $v \in V(D) - r$ and $i \in [k]$. \end{itemize} For each $I \subseteq [k]$ and non-root node $v$, we obtain a unit flow $\phi^v_I \in \mathbb R^{A(D)}$ from $r$ to $v$ by adding to $\phi^v$ the circulation $\Delta^v_i$ on the arcs of each $B_i$ with $i \in I$. 
That is, we let $$ \phi^v_{I,a} := \begin{cases} \phi^v_a &\text{if } a \in A_0 \text{ or } a \in B_i,\ i \notin I\,,\\ \phi^v_a + \Delta^v_{i,a} &\text{if } a \in B_i,\ i \in I\,. \end{cases} $$ The $2^k$ flows $\phi^v_I$ are not explicitly part of the formulation. Instead, they are implicitly defined from the flows $\phi^v$ and the circulations $\Delta^v_i$. The idea is that each circulation $\Delta^v_i$ describes how the flow $\phi^v$ is to be rerouted within the $i$th triangle. Now, we give a formal definition of our extended formulation: $R_{T_1,\ldots,T_k}(M_0)$ is the set of tuples $(x^0,x^{T_1'},x^{T_1''},\ldots,x^{T_k'},x^{T_k''})$ such that there exist capacities $(c^0,c^{T_1'},c^{T_1''},\ldots,c^{T_k'},c^{T_k''})$ satisfying the following constraints. First, the $x$-variables and the capacities are related similarly to the extended formulation for the spanning forest polytope: \begin{align} \label{constr:x-UB1} &\forall \ uv \in E_0 : 0 \leq x^0_{uv} \leq c^0_{(u,v)} + c^0_{(v,u)},\\ \label{constr:x-UB2} &\forall \ i \in [k],\ uv \in T_i : 0 \leq x^{T'_i}_{uv} \leq c^{T'_i}_{(u,v)} + c^{T'_i}_{(v,u)},\ 0 \leq x^{T''_i}_{uv} \leq c^{T''_i}_{(u,v)} + c^{T''_i}_{(v,u)}\,. \end{align} Second, we include constraints that force $(c^0,c^{T^*_1},\ldots,c^{T^*_k})$ to be in the $r$-arborescence polytope for every choice of $T^*_i \in \{T'_i,T''_i\}$, $i \in [k]$: \begin{align} \label{constr:c-arb_eq} &c^0(A_0) + \sum_{i=1}^k c^{T'_i}(B_i) = |V(D)|-1\,,\\ \label{constr:c-consistency} &\forall \ i \in [k] : c^{T'_i}(B_i) = c^{T''_i}(B_i)\,.
\end{align} Third, for all $v \in V(D) - r$ there exist $\phi^v \in \mathbb R^{A(D)}$, $\Delta^v_1 \in \mathbb R^{B_1}$, \ldots, $\Delta^v_k \in \mathbb R^{B_k}$ such that $\phi^v$ is a unit flow from $r$ to $v$, see \eqref{constr:wongunit} and \eqref{constr:wongflowu} above, and $\Delta^v_i$ is a circulation for all $i \in [k]$: \begin{align} &\forall \ u \in V(D) : \Delta^v_i(\delta^{\mathrm{out}}(u) \cap B_i) - \Delta^v_i(\delta^{\mathrm{in}}(u) \cap B_i) = 0\,. \end{align} Fourth, the flows should satisfy the following lower and upper bounds: \begin{align} \label{constr:wongflowbound1}&\forall \ a \in A_0 : 0 \leq \phi^v_a \leq c^0_a\,,\\ \label{constr:wongflowbound2}&\forall \ i \in [k],\ a \in B_i : 0 \leq \phi^v_a \leq c^{T'_i}_a,\ 0 \leq \phi^v_a + \Delta^v_{i,a} \leq c^{T''_i}_a\,. \end{align} The resulting formulation has in total $|E_0|+6k$ $x$-variables, $2|E_0|+12k$ $c$-variables, $(|V(G)|-1) \cdot 2|E(G)|$ $\phi$-variables and $(|V(G)|-1)\cdot 6k$ $\Delta$-variables. Given that $|E(G)| = |E_0| + 3k$, the total number of variables is $O(|V(G)| \cdot |E(G)|)$. Since each variable is involved in a constant number of inequalities, the total number of inequalities is also $O(|V(G)| \cdot |E(G)|)$. \begin{proposition}\label{prop:wongtweak} Let $G$ be a connected graph with $k$ edge-disjoint triangles $T_1$, \ldots, $T_k$, and let $M_0 := M(G)$. Letting $R_{T_1,\ldots,T_k}(M_0)$ be defined as above, we have $P_{T_1,\dots,T_k}(M_0) \subseteq R_{T_1,\ldots,T_k}(M_0) \subseteq Q_{T_1,\dots,T_k}(M_0)$. It follows that $\xc(P_{T_1,\dots,T_k}(M_0),Q_{T_1,\dots,T_k}(M_0)) = O(|V(G)| \cdot |E(G)|)$. \end{proposition} \begin{proof} The inclusion $R_{T_1,\ldots,T_k}(M_0) \subseteq Q_{T_1,\dots,T_k}(M_0)$ follows easily from our construction. Fix any $I \subseteq [k]$, and let $T^*_i := T'_i$ if $i \notin I$ and $T^*_i := T''_i$ if $i \in I$.
We see that $(x^0,x^{T^*_1},\ldots,x^{T^*_k})$ is a convex combination of characteristic vectors of spanning forests, since the capacities $(c^0,c^{T^*_1},\ldots,c^{T^*_k})$ and the unit flows $\phi^v_I$, $v \in V(D) - r$, witness this. In order to prove the inclusion $P_{T_1,\ldots,T_k}(M_0) \subseteq R_{T_1,\dots,T_k}(M_0)$, we only need to focus on the case of spanning trees, since $R_{T_1,\ldots,T_k}(M_0)$ is by definition of anti-blocking type. More precisely, let $J_0 \subseteq E_0$, $J_1', J_1'' \subseteq T_1$, \ldots, $J_k', J_k'' \subseteq T_k$ be such that $J_0 \cup J_1^* \cup \cdots \cup J_k^*$ is a spanning tree of $G$ for all choices of $J_i^* \in \{J_i',J_i''\}$, $i \in [k]$ and in addition $J_i'=J_i''$ or $J_i'=\{\alpha_i,\beta_i\}$ and $J_i''=\{\beta_i,\gamma_i\}$ for all $i \in [k]$. (As before, $\alpha_i$, $\beta_i$ and $\gamma_i$ denote the edges of $T_i$.) We claim that the 0/1-vector $(\chi^{J_0},\chi^{J_1'}, \chi^{J_1''},\dots, \chi^{J_k'}, \chi^{J_k''})$ belongs to $R_{T_1,\dots,T_k}(M_0)$. We define capacities $(c^0,c^{T'_1},\ldots,c^{T'_k})$ and unit flows $\phi^v$ for $v \in V(D) - r$ from the spanning tree $J_0 \cup J'_1 \cup \cdots \cup J'_k$ exactly as in Wong's formulation. For all indices $i \in [k]$ such that $J'_i = J''_i$, we let $c^{T''_i} := c^{T'_i}$ and $\Delta^v_i := \mathbf{0}$ for all $v \in V(D) - r$. Now consider an index $i \in [k]$ such that $J'_i = \{\alpha_i,\beta_i\}$ and $J''_i = \{\beta_i,\gamma_i\}$. We explain how to define the capacities $c^{T''_i}$ and the ``alternative'' flow values $\phi^v_a + \Delta^v_{i,a}$ for $a \in B_i$ in Figure~\ref{fig:rerouting}. (In case $\phi^v$ is zero on $T_i$, we let $\Delta^v_i := \mathbf{0}$.) We leave it to the reader to check that all constraints defining $R_{T_1,\ldots,T_k}(M_0)$ are satisfied by our choice of capacities $c^0$ and $c^{T'_i}, c^{T''_i}$ ($i \in [k]$), flows $\phi^v$ ($v \in V(D)-r$) and circulations $\Delta^v_i$ ($v \in V(D)-r$, $i \in [k]$).
This establishes the claim, and concludes the proof. \end{proof} \begin{figure} \centering \begin{tabular}{c|c|c|c} Capacities &Alt.\ capacities &Flow &Alt.\ flow\\ \hline % % \begin{tikzpicture}[scale=.35] \tikzstyle{vtx}=[circle,draw,thick,inner sep=2.5pt,fill=white] \node[vtx] (a) at (90:2) {}; \node[vtx] (b) at (210:2) {}; \node[vtx] (c) at (330:2) {}; \draw[thick,->,>=latex] (a) -- (b); \draw[thick,->,>=latex] (a) -- (c); \end{tikzpicture} & \begin{tikzpicture}[scale=.35] \tikzstyle{vtx}=[circle,draw,thick,inner sep=2.5pt,fill=white] \node[vtx] (a) at (90:2) {}; \node[vtx] (b) at (210:2) {}; \node[vtx] (c) at (330:2) {}; \draw[thick,->,>=latex] (a) -- (c); \draw[thick,->,>=latex] (c) -- (b); \end{tikzpicture} & \begin{tikzpicture}[scale=.35] \tikzstyle{vtx}=[circle,draw,thick,inner sep=2.5pt,fill=white] \node[vtx] (a) at (90:2) {}; \node[vtx] (b) at (210:2) {}; \node[vtx] (c) at (330:2) {}; \draw[thick,->,>=latex] (a) -- (b); \end{tikzpicture} & \begin{tikzpicture}[scale=.35] \tikzstyle{vtx}=[circle,draw,thick,inner sep=2.5pt,fill=white] \node[vtx] (a) at (90:2) {}; \node[vtx] (b) at (210:2) {}; \node[vtx] (c) at (330:2) {}; \draw[thick,->,>=latex] (a) -- (c); \draw[thick,->,>=latex] (c) -- (b); \end{tikzpicture}\\ \hline && \begin{tikzpicture}[scale=.35] \tikzstyle{vtx}=[circle,draw,thick,inner sep=2.5pt,fill=white] \node[vtx] (a) at (90:2) {}; \node[vtx] (b) at (210:2) {}; \node[vtx] (c) at (330:2) {}; \draw[thick,->,>=latex] (a) -- (c); \end{tikzpicture} & \begin{tikzpicture}[scale=.35] \tikzstyle{vtx}=[circle,draw,thick,inner sep=2.5pt,fill=white] \node[vtx] (a) at (90:2) {}; \node[vtx] (b) at (210:2) {}; \node[vtx] (c) at (330:2) {}; \draw[thick,->,>=latex] (a) -- (c); \end{tikzpicture}\\ % % \hline \begin{tikzpicture}[scale=.35] \tikzstyle{vtx}=[circle,draw,thick,inner sep=2.5pt,fill=white] \node[vtx] (a) at (90:2) {}; \node[vtx] (b) at (210:2) {}; \node[vtx] (c) at (330:2) {}; \draw[thick,->,>=latex] (b) -- (a); \draw[thick,->,>=latex] (a) -- (c); 
\end{tikzpicture} & \begin{tikzpicture}[scale=.35] \tikzstyle{vtx}=[circle,draw,thick,inner sep=2.5pt,fill=white] \node[vtx] (a) at (90:2) {}; \node[vtx] (b) at (210:2) {}; \node[vtx] (c) at (330:2) {}; \draw[thick,->,>=latex] (b) -- (c); \draw[thick,->,>=latex] (c) -- (a); \end{tikzpicture} & \begin{tikzpicture}[scale=.35] \tikzstyle{vtx}=[circle,draw,thick,inner sep=2.5pt,fill=white] \node[vtx] (a) at (90:2) {}; \node[vtx] (b) at (210:2) {}; \node[vtx] (c) at (330:2) {}; \draw[thick,->,>=latex] (b) -- (a); \end{tikzpicture} & \begin{tikzpicture}[scale=.35] \tikzstyle{vtx}=[circle,draw,thick,inner sep=2.5pt,fill=white] \node[vtx] (a) at (90:2) {}; \node[vtx] (b) at (210:2) {}; \node[vtx] (c) at (330:2) {}; \draw[thick,->,>=latex] (b) -- (c); \draw[thick,->,>=latex] (c) -- (a); \end{tikzpicture}\\ \hline && \begin{tikzpicture}[scale=.35] \tikzstyle{vtx}=[circle,draw,thick,inner sep=2.5pt,fill=white] \node[vtx] (a) at (90:2) {}; \node[vtx] (b) at (210:2) {}; \node[vtx] (c) at (330:2) {}; \draw[thick,->,>=latex] (b) -- (a); \draw[thick,->,>=latex] (a) -- (c); \end{tikzpicture} & \begin{tikzpicture}[scale=.35] \tikzstyle{vtx}=[circle,draw,thick,inner sep=2.5pt,fill=white] \node[vtx] (a) at (90:2) {}; \node[vtx] (b) at (210:2) {}; \node[vtx] (c) at (330:2) {}; \draw[thick,->,>=latex] (b) -- (c); \end{tikzpicture}\\ \hline % % \begin{tikzpicture}[scale=.35] \tikzstyle{vtx}=[circle,draw,thick,inner sep=2.5pt,fill=white] \node[vtx] (a) at (90:2) {}; \node[vtx] (b) at (210:2) {}; \node[vtx] (c) at (330:2) {}; \draw[thick,->,>=latex] (c) -- (a); \draw[thick,->,>=latex] (a) -- (b); \end{tikzpicture} & \begin{tikzpicture}[scale=.35] \tikzstyle{vtx}=[circle,draw,thick,inner sep=2.5pt,fill=white] \node[vtx] (a) at (90:2) {}; \node[vtx] (b) at (210:2) {}; \node[vtx] (c) at (330:2) {}; \draw[thick,->,>=latex] (c) -- (b); \draw[thick,->,>=latex] (c) -- (a); \end{tikzpicture} & \begin{tikzpicture}[scale=.35] \tikzstyle{vtx}=[circle,draw,thick,inner sep=2.5pt,fill=white] \node[vtx] 
(a) at (90:2) {}; \node[vtx] (b) at (210:2) {}; \node[vtx] (c) at (330:2) {}; \draw[thick,->,>=latex] (c) -- (a); \end{tikzpicture} & \begin{tikzpicture}[scale=.35] \tikzstyle{vtx}=[circle,draw,thick,inner sep=2.5pt,fill=white] \node[vtx] (a) at (90:2) {}; \node[vtx] (b) at (210:2) {}; \node[vtx] (c) at (330:2) {}; \draw[thick,->,>=latex] (c) -- (a); \end{tikzpicture}\\ \hline && \begin{tikzpicture}[scale=.35] \tikzstyle{vtx}=[circle,draw,thick,inner sep=2.5pt,fill=white] \node[vtx] (a) at (90:2) {}; \node[vtx] (b) at (210:2) {}; \node[vtx] (c) at (330:2) {}; \draw[thick,->,>=latex] (c) -- (a); \draw[thick,->,>=latex] (a) -- (b); \end{tikzpicture} & \begin{tikzpicture}[scale=.35] \tikzstyle{vtx}=[circle,draw,thick,inner sep=2.5pt,fill=white] \node[vtx] (a) at (90:2) {}; \node[vtx] (b) at (210:2) {}; \node[vtx] (c) at (330:2) {}; \draw[thick,->,>=latex] (c) -- (b); \end{tikzpicture}\\ \end{tabular} \caption{Definition of $c^{T''_i}$ (second column) and $\phi^v + \Delta^v_i$ (fourth column), in each case.} \label{fig:rerouting} \end{figure} \section{Smaller formulation for star decompositions: the cographic case} \label{sec:cographic} In this section, we consider the case where $M_0$ is cographic, i.e.\ $M_0=M^*(G)$ for a ($2$-)connected graph $G$. Our goal is to prove Lemma~\ref{lem:star_bound} in this case. As in the previous section, we rely on Proposition~\ref{prop:asymmetricstar}. By duality, we have that $x \in B(M_0)$ if and only if $\textbf{1}-x \in B(M_0^*)$. Hence, we will again deal with the spanning tree polytope of $G$. If $T$ is a triangle of cographic matroid $M_0$, then the corresponding edges of $G$ form a cut of size three. Let $T_1,\dots, T_k$ be the pairwise disjoint triangles of $M_0$ involved in the $3$-sums with $M_1,\dots,M_k$ respectively, where $T_i = \{\alpha_i,\beta_i,\gamma_i\}$ for $i \in [k]$, as previously. 
We can assume (see the appendix, and specifically Proposition \ref{prop:claws}) that each $T_i$ is of the form $\delta(v_i)$ for some degree-$3$ node $v_i \in V(G)$. We denote by $a_i$, $b_i$, and $c_i$ the neighbors of $v_i$. We may assume that $\alpha_i = a_iv_i$, $\beta_i = b_iv_i$ and $\gamma_i = c_iv_i$. Let $V^* := \{v_1,\dots,v_k\}$. Observe that $V^*$ is a stable set of $G$. Let $D$ be the directed graph obtained from $G$ by bidirecting each edge. Let $r$ be any node in $V(D) \setminus V^*$. Such a node exists since we can take $r := a_1$ for instance. As a first step, we simplify Wong's formulation for the $r$-arborescence polytope of $D$. As stated in the next lemma, it is sufficient to have a unit flow $\phi^v$ for each $v \in V(D) \setminus (V^* \cup \{r\})$, and to impose a single specific constraint on the arcs entering $v_i$ for each $i \in [k]$. \begin{lemma}\label{lem:wongmodcographic} Let $D$ be a directed graph with specified distinct nodes $r, v_1, \dots, v_k$ (for some $k\geq 1$) such that $\delta^{\mathrm{out}}(v_i) \cap \delta^{\mathrm{in}}(v_j) = \emptyset$ for every $i, j \in [k]$. Letting $V^* := \{v_1,\ldots,v_k\}$, we have % \begin{align*} P^\uparrow_{r\mathrm{-arborescence}}(D) = \Big\{c\in \mathbb R^{A(D)} \mid\:& \forall i \in [k] : c(\delta^{\mathrm{in}}(v_i)) \geq 1,\\ &\forall v \in V(D) \setminus (V^* \cup \{r\}) : \exists \phi^v \in \mathbb R^{A(D)} : \text{\eqref{constr:wongunit}--\eqref{constr:wongcapacity}} \Big\}\,. \end{align*} \end{lemma} \begin{proof} Let $Q(D)$ denote the right-hand side of the target equation. Proving that $P^\uparrow_{r\mathrm{-arborescence}}(D) \subseteq Q(D)$ is straightforward. Let $A \subseteq A(D)$ be an $r$-arborescence, and let $c := \chi^A$. For each $v \in V(D) \setminus (V^* \cup \{r\})$, define $\phi^v$ as the characteristic vector of the $r$--$v$ directed path in $A$. All constraints of $Q(D)$ are clearly satisfied by this choice of unit flows.
Now, we prove $Q(D) \subseteq P^\uparrow_{r\mathrm{-arborescence}}(D)$ by using the linear description of $P^\uparrow_{r\mathrm{-arborescence}}(D)$, see Section \ref{sec:stpwong}. Let $c \in Q(D)$ and let $\phi^v$, $v \in V(D) \setminus (V^* \cup \{r\})$ be corresponding flows. Consider any proper node subset $S$ with $r \in S$. If there is any $v \in V(D) \setminus S$ with $v \notin V^*$, then we have $c(\delta^{\mathrm{out}}(S)) \geq \phi^v(\delta^{\mathrm{out}}(S)) \geq 1$, since $S$ is an $r$--$v$ cut and $\phi^v$ is an $r$--$v$ flow of value~$1$. Otherwise, $V(D) \setminus S \subseteq V^*$. Pick an arbitrary node $v_i \in V^* \cap (V(D) \setminus S)$. Since $\delta^{\mathrm{in}}(v_i) \subseteq \delta^{\mathrm{out}}(S)$, we have $c(\delta^{\mathrm{out}}(S)) \geq c(\delta^{\mathrm{in}}(v_i)) \geq 1$. \end{proof} We are now ready to describe the extended formulation for our intermediate polytope $R_{T_1,\ldots,T_k}(M_0)$. The formulation is similar to that given in the previous section for the graphic case, except that our starting point is the formulation for $P^\uparrow_{r\mathrm{-arborescence}}(D)$ given in Lemma \ref{lem:wongmodcographic}. Also, it turns out that we do not need the $\Delta$-variables. Finally, the root node $r$ should be picked outside of $V^* := \{v_1,\ldots,v_k\}$. Using the same notation as above in Section~\ref{sec:tweaking_wong}, we define $R_{T_1,\ldots,T_k}(M_0)$ as the set of tuples $(x^0,x^{T_1'},x^{T_1''}$, \ldots, $x^{T_k'},x^{T_k''})$ such that there exist capacities $(c^0,c^{T_1'},c^{T_1''},\ldots,c^{T_k'},c^{T_k''})$ satisfying the following constraints. First, instead of~\eqref{constr:x-UB1} and \eqref{constr:x-UB2} we ask \begin{align} &\forall uv \in E_0 : 0 \leq x^0_{uv} \leq 1 - c^0_{(u,v)} - c^0_{(v,u)}\,,\\ &\forall i \in [k],\ uv \in T_i : 0 \leq x^{T'_i}_{uv} \leq 1 - c^{T'_i}_{(u,v)} - c^{T'_i}_{(v,u)},\ 0 \leq x^{T''_i}_{uv} \leq 1 - c^{T''_i}_{(u,v)} - c^{T''_i}_{(v,u)}\,. 
\end{align} Second, we impose constraints~\eqref{constr:c-arb_eq} and \eqref{constr:c-consistency} as before. Third, for all $v \in V(D) \setminus (V^* \cup \{r\})$ there exists $\phi^v \in \mathbb R^{A(D)}$ satisfying \eqref{constr:wongunit} and \eqref{constr:wongflowu}. Fourth, the flows $\phi^v$ should satisfy the bounds \eqref{constr:wongflowbound1} and \begin{align} &\forall \ i \in [k],\ a \in B_i : 0 \leq \phi^v_a \leq c^{T'_i}_a,\ \phi^v_a \leq c^{T''_i}_a\, \end{align} (this last constraint replaces~\eqref{constr:wongflowbound2}). Fifth, we include explicit constraints on the capacities entering each node in $V^*$, as in Lemma~\ref{lem:wongmodcographic}: \begin{align} \label{constr:c-bound_explicit}&\forall i \in [k] : c^{T'_i}(\delta^{\mathrm{in}}(v_i)) \geq 1,\ c^{T''_i}(\delta^{\mathrm{in}}(v_i)) \geq 1\,. \end{align} One can easily check that the extended formulation defining $R_{T_1,\ldots,T_k}(M_0)$ has size $O(|V(G)| \cdot |E(G)|)$. \begin{proposition}\label{prop:wongtweakcographic} Let $G$ be a connected graph, let $V^* := \{v_1,\ldots,v_k\}$ be a nonempty stable set such that each $v_i$ has degree~$3$, let $T_i := \delta(v_i)$ for $i \in [k]$, and let $M_0 := M^*(G)$ be the cographic matroid associated with $G$. Letting $R_{T_1,\ldots,T_k}(M_0)$ be defined as above, we have $P_{T_1,\dots,T_k}(M_0) \subseteq R_{T_1,\ldots,T_k}(M_0) \subseteq Q_{T_1,\dots,T_k}(M_0)$. Hence, $\xc(P_{T_1,\dots,T_k}(M_0),Q_{T_1,\dots,T_k}(M_0)) = O(|V(G)| \cdot |E(G)|)$. \end{proposition} \begin{proof} Let $x := (x^0,x^{T'_1},x^{T''_1},\ldots,x^{T'_k},x^{T''_k}) \in R_{T_1,\ldots,T_k}(M_0)$, let $(c^0,c^{T'_1},c^{T''_1},\ldots,c^{T'_k},c^{T''_k})$ be capacities and let $\phi^v$, $v \in V(D) \setminus (V^* \cup \{r\})$ be unit flows witnessing $x \in R_{T_1,\ldots,T_k}(M_0)$.
It should be clear from Lemma~\ref{lem:wongmodcographic}, \eqref{constr:c-arb_eq} and \eqref{constr:c-consistency} that $(c^0,c^{T^*_1},\ldots,c^{T^*_k})$ is in the $r$-arborescence polytope for every choice of $T^*_i \in \{T'_i,T''_i\}$, $i \in [k]$. Hence, $(x^0,x^{T^*_1},\ldots,x^{T^*_k}) \in P(M_0)$ for every choice of $T^*_i$. Therefore, $x \in Q_{T_1,\ldots,T_k}(M_0)$. This proves the rightmost inclusion. In order to prove the leftmost inclusion, let $J_0 \subseteq E_0$, $J_1', J_1'' \subseteq T_1$, \ldots, $J_k', J_k'' \subseteq T_k$ be such that $J_0 \cup J_1^* \cup \cdots \cup J_k^*$ is a basis of $M_0$ (that is, the complement of a spanning tree of $G$) for all choices of $J_i^* \in \{J_i',J_i''\}$, $i \in [k]$ and in addition $J_i'=J_i''$ or $J_i'=\{\alpha_i,\beta_i\}$ and $J_i''=\{\beta_i,\gamma_i\}$ for all $i \in [k]$. Let $x := (\chi^{J_0},\chi^{J_1'}, \chi^{J_1''},\dots, \chi^{J_k'}, \chi^{J_k''})$. The capacities $(c^0,c^{T'_1},\ldots,c^{T'_k}) \in \mathbb R^{A(D)}$ are simply the characteristic vector of the $r$-arborescence obtained by orienting the spanning tree $E(G) \setminus (J_0 \cup J'_1 \cup \cdots \cup J'_k)$ away from $r$. In case $J_i'' = J_i'$ we let $c^{T''_i} := c^{T'_i}$. Otherwise, $J'_i = \{\alpha_i,\beta_i\}$ and $J''_i = \{\beta_i,\gamma_i\}$, which means in particular that $v_i$ is a leaf of all spanning trees $E(G) \setminus (J_0 \cup J_1^* \cup \cdots \cup J_k^*)$. Also, $c^{T'_i} = \mathbf{e}_{(c_i,v_i)}$, since $\gamma_i = c_iv_i$ is the only tree edge in $T_i$ and it is oriented towards the leaf $v_i$. We define $c^{T''_i} \in \mathbb R^{B_i}$ by letting $c^{T''_i} := \mathbf{e}_{(a_i,v_i)}$. Again, all the constraints defining $R_{T_1,\ldots,T_k}(M_0)$ are satisfied by this choice of capacities $(c^0,c^{T'_1},c^{T''_1}$, \ldots, $c^{T'_k},c^{T''_k})$ and flows $\phi^v$ for $v \in V(D) \setminus (V^* \cup \{r\})$. \end{proof} We are now ready to prove Lemma~\ref{lem:star_bound}.
\begin{proof}[Proof of Lemma~\ref{lem:star_bound}] The bound on $\xc(P(M))$ follows directly either from Propositions~\ref{prop:asymmetricstar} and \ref{prop:wongtweak} in case $M_0$ is graphic, or from Propositions~\ref{prop:asymmetricstar} and \ref{prop:wongtweakcographic} in case $M_0$ is cographic. \end{proof} \section{The circuit dominant of a regular matroid}\label{sec:cdom} In this section, we deal with another polyhedron that one can define for every matroid $M$, and provide a small extended formulation for it whenever $M$ is regular. Recall that the circuit dominant of a matroid $M$ is $P^\uparrow_{\mathrm{circuit}}(M) = \mathrm{conv}\{\chi^C \mid C $ is a circuit of $M\}+\mathbb R^{E(M)}_+$. When $M$ is cographic, $P^\uparrow_{\mathrm{circuit}}(M)$ is known as the \emph{cut dominant}, for which several polynomial size extended formulations are known \cite{tamir1994polynomial, weltge2015sizes}. In this section, we extend this to all regular matroids. We stress that, even for the special case of cographic matroids, a complete facial description of the circuit dominant in the original space is not known (see \cite{conforti2016cut, conforti2004cut}). We remark that, in \cite{kapadia2014}, a polynomial time algorithm is given to find a minimum weight circuit in a regular matroid (under the assumption that the weights are non-negative). The algorithm uses Seymour's decomposition theorem, which suggests that a small extended formulation for $P^\uparrow_{\mathrm{circuit}}(M)$ could be found using techniques similar to those used in this paper for the independence polytope. However, we give here a direct extended formulation that is based on total unimodularity only. Given its compactness, our extended formulation together with any polynomial time algorithm for linear programming provides an alternative way to find minimum weight circuits in regular matroids, which is arguably simpler than the one given in~\cite{kapadia2014}.
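For a concrete instance of the total unimodularity at play in this section (the triangle example below is our own illustration, not part of the development), one can check numerically that the signed vector of a circuit lies in the kernel of a TU representation matrix, both over $\mathbb R$ and over $\mathbb F_2$:

```python
import numpy as np

# Vertex-arc incidence matrix of the triangle digraph 0->1, 1->2, 0->2,
# with the row of vertex 2 dropped: a TU network matrix representing the
# graphic matroid M(K3) over the reals (columns: edges 01, 12, 02).
A = np.array([[ 1, 0, 1],
              [-1, 1, 0]])

# The unique circuit C consists of all three edges; its signed vector
# traverses 0->1->2 forwards and the arc 0->2 backwards.
psi = np.array([1, 1, -1])

assert np.all(A @ psi == 0)                # A psi = 0 over the reals
assert np.all((A @ np.abs(psi)) % 2 == 0)  # A chi^C = 0 over F_2
print("signed circuit vector verified")
```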
Before establishing the main result of this section, we need the following simple fact on cycles of regular matroids. \begin{proposition}\label{prop:circuitvector} Let $M$ be a regular matroid on ground set $E$ and let $A$ be a TU (totally unimodular) matrix representing $M$ over $\mathbb R$. Let $C \subseteq E$. Then $C$ is a cycle of $M$ if and only if there is a vector $\psi = \psi(C) \in \{-1,0,1\}^{E}$ whose support is $C$ and such that $A \psi =\mathbf{0}$. \end{proposition} \begin{proof} First, we remark that, since $A$ is TU, $A$ represents $M$ over any field, in particular over $\mathbb{F}_2$. Hence, the ``if'' direction follows. Indeed, suppose that a vector $\psi \in \{-1,0,1\}^{E}$ with support $C$ satisfies $A \psi = \mathbf{0}$. Then, taking this equation over $\mathbb{F}_2$, we see that $A \chi^C = \mathbf{0}$, implying that $C$ is a cycle of $M$. For the ``only if'' direction, we may restrict $C$ to be a circuit: a general cycle is a disjoint union of circuits, and the corresponding signed vectors, having disjoint supports, can be summed. Let $\varphi \in \mathbb R^{E}$ be any vector such that $A \varphi = \mathbf{0}$ and the support of $\varphi$ equals $C$; such a vector exists since the columns of $A$ indexed by $C$ are minimally linearly dependent. Consider the polytope $P = P(C) := \{x \in \mathbb R^{E} \mid Ax = \mathbf{0},\ \forall e \in C : -1 \leqslant x_e \leqslant 1,\ \forall e \in E \setminus C : x_e = 0\}$. Since $A$ is TU, $P$ is integral. In fact, the vertex set of $P$ is contained in $\{-1,0,1\}^{E}$. Since $\varphi / \|\varphi\|_\infty$ is a non-zero point in $P$, we see that $P$ has a vertex that is non-zero. Let $\psi$ be any such vertex. The support of $\psi$ is a non-empty cycle of $M$ contained in $C$; since $C$ is a circuit, this support equals $C$, so $\psi$ satisfies all the required properties. \end{proof} \begin{theorem}\label{thm:cdom} Let $M$ be a regular matroid on ground set $E$, represented (over $\mathbb R$) by a totally unimodular matrix $A$.
For $e\in E$, define the polyhedron \begin{align} \nonumber P^\uparrow_{\mathrm{circuit}}(M,e) := \{x \mid \exists y : -x \leqslant y &\leqslant x,\\ Ay &=\mathbf{0},\label{eq:cdomfirst}\\ y_e &=1,\\ -\mathbf{1} \leqslant y &\leqslant \mathbf{1}\}\,.\label{eq:cdomlast} \end{align} Then $P^\uparrow_{\mathrm{circuit}}(M)=\mathrm{conv} \left( \bigcup_{e\in E}P^\uparrow_{\mathrm{circuit}}(M,e) \right)$. In particular, $\xc(P^\uparrow_{\mathrm{circuit}}(M)) = O(|E|^2)$. \end{theorem} \begin{proof} It suffices to show, for each $e\in E$, that $$ P^\uparrow_{\mathrm{circuit}}(M,e)=\mathrm{conv}\{\chi^C \mid C \text{ is a circuit of } M, e\in C\}+\mathbb R^E_+\,. $$ Fix an arbitrary element $e \in E$. For the ``$\supseteq$" inclusion, let $C$ be a circuit of $M$ with $e\in C$. Then $\chi^C \in P^\uparrow_{\mathrm{circuit}}(M,e)$ by setting $y := \pm \psi(C)$ (see Proposition \ref{prop:circuitvector}). The inclusion follows since $P^\uparrow_{\mathrm{circuit}}(M,e)$ is upward closed. For the ``$\subseteq$" inclusion, we show that any vertex of $P^\uparrow_{\mathrm{circuit}}(M,e)$ dominates $\chi^C$, for some circuit $C$ of $M$ containing $e$. Consider the polytope $Q = Q(e) := \{y \mid \text{\eqref{eq:cdomfirst}--\eqref{eq:cdomlast}}\}$ and the polyhedron $R = R(e) := \{(x,y) \mid -x \leqslant y \leqslant x,\ y \in Q\}$. Fix a vertex $x^*$ of $P^\uparrow_{\mathrm{circuit}}(M,e)$, and let $I_0 := \{i \mid x^*_i = 0\}$. There exists $y^*\in Q$ such that $(x^*,y^*)$ is a vertex of $R$. Notice that $y^*_i = 0$ for all $i \in I_0$. For $i \notin I_0$, we have either $x^*_i = y^*_i$ or $x^*_i = -y^*_i$, but not both. Necessarily, $y^*$ is a vertex of the polytope $Q \cap \{y \mid \forall i \in I_0 : y_i = 0\}$. Since $Q$ is defined by a TU system with integral right-hand sides, its intersection with any coordinate subspace is integral. Hence $y^*$ is an integer point, and so is $(x^*,y^*)$. Let $D$ denote the support of $y^*$, so that $x^* = \chi^D$. 
By Proposition \ref{prop:circuitvector}, $D$ is a cycle of $M$. Moreover, $D$ contains $e$. Thus, $D$ contains a circuit $C$ of $M$ containing $e$. Since $x^*$ dominates $\chi^C$, we are done. The bound on $\xc(P^\uparrow_{\mathrm{circuit}}(M))$ now follows from Balas' union (see Proposition \ref{prop:Balas}), a version of which applies to unbounded polyhedra with the same recession cone \cite{balas1979disjunctive}. \end{proof} We conclude the section by remarking that, since the dual of a regular matroid is regular, Theorem \ref{thm:cdom} applies to the \emph{cocircuit} dominant as well (which is defined in the obvious way). \section{Discussion} \label{sec:discussion} It is straightforward to improve the $O(n^6)$ bound of Theorem~\ref{thm:main} to an $O(n^{6-\varepsilon})$ bound, for sufficiently small $\varepsilon > 0$ (for instance, we may take $\varepsilon = 0.41$). However, we believe that a better bound should hold. We leave this as our first open problem. Related to this question, we suspect that the simple upper bound $\xc(P(M_1 \oplus_3 M_2)) \leq \xc(P(M_1)) + \xc(P(M_2))$ \emph{fails} for some regular matroids $M_1$ and $M_2$, although we do not have any concrete counterexample. If the simple bound held, then this would give an $O(n^2)$ upper bound on $\xc(P(M))$ for all regular matroids $M$ on $n$ elements, see~\cite{kaibel2016extended,weltge2015sizes}. Weltge~\cite{weltge2015sizes} proved that for every graphic matroid $M = M(G)$, the extension complexity of $P^\uparrow_{\mathrm{circuit}}(M^*)$ (that is, the cut dominant of $G$) is bounded as follows: \begin{equation} \label{eq:Weltge} \xc(P^\uparrow_{\mathrm{circuit}}(M^*)) \leqslant \xc(P(M))+O(|E(M)|)\,. \end{equation} We do not know whether $\xc(P(M))$ can be similarly bounded in terms of $\xc(P^\uparrow_{\mathrm{circuit}}(M^*))$.
If this could be done for all regular matroids $M$, then it could lead to an improved upper bound on the extension complexity of $P(M)$, via Theorem~\ref{thm:cdom}. Rothvoss~\cite{rothvoss2013some} proved via a counting argument involving \emph{sparse paving} matroids that the independence polytope of many matroids has exponential extension complexity. It is unclear whether one can find an explicit infinite family of sparse paving matroids $M$ with $\xc(P(M))$ superpolynomial, since that would automatically yield an explicit infinite family of Boolean functions requiring superlogarithmic depth circuits, see G\"o\"os~\cite{Goos16}. At this point, we do not know the worst-case extension complexity of $P(M)$ even when $M$ is a binary matroid. Let $f(n)$ denote the maximum of $\xc(P(M))$ where $M$ is a binary matroid on $n$ elements. Is $f(n)$ polynomial? This is our second open problem. We stress that optimizing over $P^\uparrow_{\mathrm{circuit}}(M)$ in a general binary matroid is NP-hard~\cite{vardy1997intractability}. We suspect that $\xc(P^\uparrow_{\mathrm{circuit}}(M))$ is superpolynomial for these matroids. If \eqref{eq:Weltge} could be generalized to all binary matroids (even with worse polynomial bounds), then this would give explicit binary matroids with $\xc(P(M))$ superpolynomial. We conclude by remarking that, by Proposition 3.3.10 of \cite{weltge2015sizes}, Theorem \ref{thm:main} implies a polynomial bound on the extension complexity of independence polytopes of almost-regular matroids. \section{Acknowledgements} The first author thanks Georg Loho, Volker Kaibel, Matthias Walter and Stefan Weltge for joining the first attempts to resolve the flaw in~\cite{kaibel2016extended}. We also thank Tony Huynh for taking part in the early stages of the research. Finally, we thank Stefan Weltge for discussions related to Section~\ref{sec:cdom}. This project was supported by ERC Consolidator Grant 615640-ForEFront. \bibliographystyle{plain}
\section{Introduction}\label{introduction} Phases of matter and phase transitions have long been central topics in statistical and condensed matter physics. For decades, Landau's symmetry-breaking theory was believed to be fully capable of identifying and describing different phases and the phase transitions between them. As a seminal illustration of spontaneous symmetry breaking, the classical Ising model ($Z_2$ symmetry) on the square lattice undergoes a typical order-disorder phase transition. Its opposite extreme, the continuous $XY$ model ($U(1)$ symmetry), involves exotic topological vortex excitations and a phase transition without symmetry breaking, i.e. the so-called Kosterlitz-Thouless (KT)\cite{KT1,KT2} transition beyond Landau's theory. Both types of transitions can be easily probed by the magnetic susceptibility, which reflects the system's response to an external magnetic field and behaves distinctively across the critical point. One natural question is how the universality class evolves with the symmetry of the model, which has aroused intense interest in the intermediate $q$-state clock model with finite $q$. As is well known, when $q \le 4$ the model has one unique second-order phase transition; otherwise, there are two separate transitions sandwiching a critical KT phase with quasi-long-range order. So far, major debates focus on $q$ near 5, concerning the nature and the precise locations of the transitions. Usually, Monte Carlo (MC), one of the principal methods for many-body problems, calculates the helicity modulus\cite{Lapilli, Baek1, Baek3, Kumano, Okabe} to characterize the KT transition. As in the continuous $XY$ model on the square lattice, it jumps abruptly from a finite value to zero at the critical point\cite{Minnhagen}. For the $5$-state clock model, at the upper transition point, it behaves similarly to the $XY$ case.
However, as to the lower one, inconsistent conclusions about the transition type were reached by different groups in MC studies\cite{Kumano, Baek3}. By proposing an extended universality class theory with MC simulations, Lapilli et al.\cite{Lapilli} even declared that neither transition is KT-type when $q\le6$, which is supported by Ref.~[\onlinecite{Hwang}] through a Fisher zero analysis for the $q=6$ case. Another powerful method, the renormalization group (RG), predicted two KT transitions early on\cite{Kadanoff}, and a recent density matrix renormalization group (DMRG) study\cite{Christophe} favored this assertion by calculating the helicity modulus with relatively small system sizes. Tensor network states, generalized from DMRG to higher-dimensional strongly correlated systems, have developed rapidly and been widely used to investigate both classical and quantum systems. Among these methods, the tensor renormalization group method based on the higher-order singular value decomposition (abbreviated as HOTRG)\cite{Xie2} has been successfully applied to study the 3D Ising model\cite{Xie2}, the Potts model\cite{MPQin, WShun}, and the continuous $XY$ model\cite{JFYu}. Actually, it has also been utilized to study the 5-state clock model, where the magnetic susceptibility properly describes the upper phase transition but does not work well for the lower one\cite{Chenyong}, as also presented in Fig. \ref{fe}(b) below. Therefore, a gauge invariant factor from the fixed point tensor of the RG flow, proposed in Ref.~[\onlinecite{Gu}], was adopted to measure the degeneracy of each phase, which also precisely estimates the critical points of the 6-state clock model\cite{Jing}. Nevertheless, some important information is still missing, e.g., the reason why the magnetic susceptibility loses its efficacy for the lower transition in this model, and especially the nature of the transitions, although many studies have claimed that both are KT transitions.
A duality analysis based on conformal field theory (CFT) concluded that the two transitions are KT-type, but that they still have some differences\cite{Elitzur, Matsuo}. Recently, a universal entropy predicted by CFT on a Klein bottle\cite{TuPRL, TuPRB} was found to take different values at these two critical points\cite{TuPrivate}, which is believed to be valuable for distinguishing different CFTs. Here, we intend to detect and clarify the nature/mechanism of the classical phase transitions, within a unified framework, by reexamining the fundamental thermodynamic function, the Gibbs free energy $F$, which is intrinsically a signpost of the universal entropy increase of a spontaneous change\cite{Atkins} and contains information about the phase transitions. However, the free energy and its temperature derivatives, i.e. the internal energy and the specific heat, are analytically continuous without any singularity at a KT transition. Therefore, besides the temperature, we propose an auxiliary parameter, a weak external magnetic field, which couples to the spin degrees of freedom and competes with thermal excitations, thus providing a convenient tool to investigate the dynamical behavior of the system. By a detailed analysis of the cross derivative of $F(T, h)$ with respect to both temperature and field, we can easily identify and precisely locate the transition points. Moreover, since the free energy is fundamental, this idea is readily applied to any classical spin system, such as the Ising or $XY$ model with trivial or exotic transitions. In other words, it is universal. First, we demonstrate this idea explicitly for the ferromagnetic $5$-state clock model with an in-plane magnetic field, whose Hamiltonian is written as \begin{equation} H = - J \sum_{\left< ij \right>}\cos(\theta_{i}-\theta_{j})-h\sum_{i}\cos\theta_{i}, \end{equation} where $\left< ij \right>$ means summing over all nearest neighbors. $\theta_i$ is the spin angle on lattice site $i$, selected from $\theta=2\pi k/q$, $k = 0, 1, 2, \ldots, q-1$.
$J$ is the nearest-neighbor coupling. $h$ is the applied field in units of $J/\mu$, where $\mu$ is the magnetic moment of each spin. Both $J$ and $\mu$ are set to 1 for convenience. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.48\textwidth,clip,angle=0]{fe.eps} \caption{\label{fe}(Color online) (a) Gibbs free energy $F$ (blue blank square) of 5-state clock model with a magnetic field $h=4.0 \times 10^{-5}$; comparison of $-\partial F/\partial h$ (black empty circle) and $\boldsymbol{m}$ (red filled circle); (b) magnetic susceptibility $\partial \boldsymbol{m} / \partial h$ (blue blank square), cross derivative ${\partial}^2 F / \partial T \partial h$ (black blank circle), $-\partial \boldsymbol{m} / \partial T$ (red filled circle), and $-\partial{S} / \partial h$ (green cross).} \end{center} \end{figure} Here, we employ the HOTRG method to compute the desired physical quantities. Details about the algorithms are in Refs.~[\onlinecite{Xie2, JFYu, Chenyong}]. Its accuracy, like that of other RG algorithms, depends on the number of states kept during the RG process, labelled by the bond dimension $D$. Initially, it equals $q$, then expands exponentially along the RG process. Therefore, a truncation is necessary to keep further steps sustainable. The free energy $F(T, h_1)$ with field $h_1$ is presented in Fig. \ref{fe}(a), wherein $-\partial F/ \partial h$ and the magnetization $\boldsymbol{m}$ are also shown. For comparison, the quantity $-\partial F/ \partial h$ is computed directly from $-[F(h_2)-F(h_1)]/(h_2-h_1)$ by using two close field strengths, and $\boldsymbol{m}$ is calculated by the impurity tensor algorithm\cite{Xie2, JFYu, Chenyong}. One can see that they agree well with each other, as they should. For this model, as discussed in Ref.~[\onlinecite{Chenyong}], the magnetic susceptibility can clearly identify the upper phase transition but is not so convenient for the lower one. As shown in Fig.
\ref{fe}(b) by blue blank squares, an exponential divergence clearly labels a phase transition near $T=1.0$. Meanwhile, a broad shoulder-shaped structure emerges below, indicating that something happens there, though it is not as evident as the upper transition. Instead, the cross derivative of the free energy with respect to both temperature and field, i.e. $\partial^2{F}/{\partial{T}\partial{h}}$, is able to characterize both transitions simultaneously. Clearly, as shown in Fig. \ref{fe}(b) by black blank circles, two separate sharp peaks show up. In particular, the upper one coincides perfectly with the susceptibility curve in both position and shape, although it decays exponentially from a much smaller peak rather than diverging as the magnetic susceptibility does. The lower one, small but still obvious, is located near $T=0.90$. Here, a relatively small bond dimension $D=40$ is used just for illustration. As verified in Fig. \ref{fe}(a), $-\partial{F}/{\partial{h}}$ is just $\boldsymbol{m}$, so the cross derivative equals the temperature derivative of the magnetization, $-\partial \boldsymbol{m} / \partial T$. As both shown in Fig. \ref{fe}(b), they match up well with each other. Similarly, one can choose the function $-\partial S / \partial h$, as also presented in Fig. \ref{fe}(b), because $-\partial{F}/{\partial{T}}$ is just the thermodynamic entropy $S$. Additionally, the Maxwell relation\cite{Reichl} $\partial{S}/\partial{h}=\partial{\boldsymbol{m}}/\partial{T}$ is numerically verified by computing $S$ directly from the difference between the Gibbs free energy and the internal energy, because both terms essentially spring from the cross derivative. For numerical simplicity and convenience, we adopt the notation $-\partial \boldsymbol{m} / \partial T$ hereafter, while keeping in mind its physical origin. Some may question the validity or the physical meaning of this cross derivative.
One can imagine slicing the 3D curved surface $F(T, h)$ along $h$-axis, then performing the derivative $\partial{F}/{\partial{T}}$ for each $h$ slice, and observing its evolution along $h$-axis; or equivalently slicing $F(T, h)$ along $T$-axis and obtaining $\partial{F}/{\partial{h}}$, then investigating its evolution along $T$-axis. Thus, each captures the effects of both temperature and field, and the system dynamics can be easily deduced. This scheme may be elaborated by a formula \begin{equation} \left( \frac{\partial}{\partial T}+\frac{\partial}{\partial h}\right)^2 F = \nabla ^2 F + 2\frac{\partial^2 F}{\partial T\partial h} ,\label{formu2} \end{equation} where the left part in parentheses is a linear combination of two derivative operators in the two-dimensional orthogonal space expanded by temperature and field, and the Laplacian stands for the second-order derivatives with respect to each individual parameter, i.e. the specific heat and the magnetic susceptibility respectively, while neither is adequate to characterize the system dynamics comprehensively. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.48\textwidth,clip,angle=0]{q5_Tc.eps} \caption{\label{q5_Tc}(Color online) (a) Illustration of the peak positions of $-\partial \boldsymbol{m} / \partial T$ versus the magnetic fields for 5-state clock model with $D=40$, along with a power law fitting to extrapolate the transition temperature as $T_{c1}=0.9038$ and $T_{c2}=0.9557$ respectively; (b) The transition temperature versus the tensorial bond dimension $D$ to obtain the converged $T_c$ as 0.9063 and 0.9557, respectively.} \end{center} \end{figure} Similar to the procedure used in the continuous $XY$ model to locate the transition temperature\cite{JFYu}, we vary the applied field, and obtain the peak positions of $-\partial \boldsymbol{m} / \partial T$, as presented in Fig. \ref{q5_Tc}(a). 
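Numerically, the cross term in Eq.~\eqref{formu2} can be formed by central differences on $F(T,h)$. As a self-contained illustration (not part of the HOTRG computation), the sketch below uses the exactly solvable free energy of a non-interacting Ising paramagnet, $F(T,h)=-T\ln[2\cosh(h/T)]$, and checks $\partial^2 F/\partial T\partial h=-\partial \boldsymbol{m}/\partial T$; the step sizes and parameter values are arbitrary choices.

```python
import numpy as np

def F(T, h):
    # Free energy per spin of a non-interacting Ising paramagnet.
    return -T * np.log(2.0 * np.cosh(h / T))

def cross_derivative(F, T, h, dT=1e-4, dh=1e-4):
    # Central-difference estimate of d^2 F / dT dh.
    return (F(T + dT, h + dh) - F(T + dT, h - dh)
            - F(T - dT, h + dh) + F(T - dT, h - dh)) / (4.0 * dT * dh)

T, h = 1.0, 0.3
m = np.tanh(h / T)                        # m = -dF/dh, analytically
dm_dT = -(h / T**2) / np.cosh(h / T)**2   # analytic dm/dT
print(cross_derivative(F, T, h), -dm_dT)  # the two agree
```

The same finite-difference stencil applies to numerically computed free-energy data on a $(T,h)$ grid.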
To determine the critical points for a given $D$, an extrapolation to zero field is performed by a power law fitting $T_p-T_c \sim h^x.$ As demonstrated in Fig. \ref{q5_Tc}(a), the results with $D=40$ are $T_{c1}=0.9038$ and $T_{c2}=0.9557$. Likewise, we replicate the above process with different bond dimensions, and obtain the converged transition temperatures $T_{c1}=0.9063$ and $T_{c2}=0.9557$, as shown in Fig. \ref{q5_Tc}(b). Both agree well with the estimations from other studies\cite{Kumano, Christophe, Chenyong, Borisen2, Chatterjee}. Once the critical points are obtained, we can investigate the central charge $c$ as well as the critical exponent $\delta$ to determine the universality class of the transitions. According to the results of CFT\cite{ Nightingale, Afflect}, we calculate the finite-size partition function on a torus to obtain the central charge at the two critical points and in the sandwiched critical phase as $c=1.04$, which indicates that both transitions belong to the same $c=1$ CFT class. Meanwhile, the critical exponent $\delta$ is calculated, which signifies the change of the system magnetization with the applied magnetic field at the transition point as $m\sim h^{1/\delta}$. The results are $\delta_1=15.81$ and $\delta_2=15.77$, respectively, using the bond dimension $D=70$. Both are consistent with the theoretical value $\delta=15$ for the KT transition in the 2D $XY$ model\cite{KT2}. Combining $c$ and $\delta$, these results suggest two KT-type transitions. As also shown clearly in Fig. \ref{q5_Tc}(a), the upper critical point shifts with the applied magnetic field: the stronger the field, the higher the transition temperature, similar to the $XY$ case\cite{JFYu}, because more thermal energy is needed to overcome the additional barrier introduced by the magnetic field. However, as seen in Fig. \ref{q5_Tc}(a), the lower one moves oppositely, which seems to indicate a different scenario.
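The zero-field extrapolation $T_p(h)=T_c+a\,h^{x}$ can be sketched as follows; the peak positions below are synthetic numbers generated from assumed parameters, purely to illustrate the fitting step, not data from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def peak_shift(h, Tc, a, x):
    # Power-law ansatz T_p(h) = Tc + a * h^x used for the h -> 0 extrapolation.
    return Tc + a * h**x

# Synthetic peak positions with known parameters (made up for illustration).
h = np.array([1e-5, 2e-5, 4e-5, 8e-5, 1.6e-4, 3.2e-4])
Tp = peak_shift(h, 0.9557, 0.8, 0.4)

popt, _ = curve_fit(peak_shift, h, Tp, p0=[0.95, 1.0, 0.5])
print(popt[0])  # recovered zero-field transition temperature
```

In practice the fitted $T_c$ carries an uncertainty inherited from the scatter of the measured peak positions, which is why the extrapolation is repeated for several bond dimensions.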
Besides the vortex excitation, another typical topological excitation responsible for the melting of the magnetic order in magnetic systems is the domain wall\cite{Ortiz, Einhorn, Fertig, Chatterjee}, which probably plays an important role in this transition. To clarify the mechanism, we adopt the procedure of Refs.~[\onlinecite{Soumyadeep, Deng}] and the references therein to investigate the influence of the vortex excitations on the phase transitions, by introducing a parameter $\lambda$ that adjusts the vortex core energy as \begin{equation}\label{eqH5lamda} H = - J \sum_{\left< ij \right>} \cos(\theta_{i}-\theta_{j}) + \lambda\sum_{i'}\left |\omega_{i'}\right |, \end{equation} where $\omega_{i'}=(\delta_{ba}-\delta_{cb}-\delta_{dc}-\delta_{ad})/5$, and $\delta_{ba}$ is $s_b-s_a$ wrapped into $[-1, 1]$. Here $s_a, s_b, s_c, s_d$ are the spins on the four vertexes of a square plaquette labelled by $i'$. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.48\textwidth,clip,angle=0]{MC_all.eps} \caption{\label{MC_all}(Color online) MC simulation of the Hamiltonian (Eq. \ref{eqH5lamda}) with $L=128$ for different $\lambda$: (a) magnetization; (b) magnetic susceptibility; (c) $-\partial \boldsymbol{m} / \partial T$; (d) number densities of the domain walls ($\rho_d$) and the vortices ($\rho_v$), where $\rho_v$ is multiplied by 2 for a better view.} \end{center} \end{figure} By MC simulations of the above Hamiltonian (Eq. \ref{eqH5lamda}) on a square lattice with $L=128$, we obtain the magnetization, the magnetic susceptibility and the deduced $-\partial \boldsymbol{m} / \partial T$ for different $\lambda$, as shown in Fig. \ref{MC_all}. Increasing the vortex core energy to suppress vortex formation, a clear shift of the upper critical point can be seen in each curve. Again, $-\partial \boldsymbol{m} / \partial T$ looks much more convincing than the magnetic susceptibility for the lower transition. More importantly, as manifested in Fig.
\ref{MC_all}(b) and (c), this lower-temperature phase transition is barely affected by the vortex suppression, which strongly suggests that it is dominated by the domain wall excitation\cite{Ortiz, Einhorn, Fertig, Chatterjee}. A more intuitive illustration is presented in Fig. \ref{MC_all}(d), i.e. the number density of each excitation, adopting the definition in Ref.~[\onlinecite{Soumyadeep}]. One can clearly observe that, near the lower transition point, the number density of the domain walls decreases negligibly, while the vortices are greatly suppressed or even eliminated as $\lambda$ increases. Furthermore, we calculate the aforementioned universal entropy $\ln{g}$ of CFT on a Klein bottle\cite{TuPRL} at the critical points, because CFT asserts that the two transitions in this model are KT-type but with different $g$\cite{TuPrivate}. Our computation gives $g_1=3.30$ and $g_2=3.09$, respectively, both of which agree well with the CFT conclusion\cite{TuPrivate}. From the foregoing discussions, we can conclude that both transitions are indeed KT-type, but with subtle differences: the upper one is attributed to the unbinding of vortex pairs, while the lower one is dominated by the domain wall excitation instead; and they belong to different CFTs. These differences may be closely related to why the magnetic susceptibility works fine for the upper transition but not so well for the lower one, and why the transition points shift oppositely with the external field as shown in Fig. \ref{q5_Tc}(a). They may also be the reason why studies from different groups have reached conflicting conclusions about the nature of the transitions.
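For readers who wish to reproduce the qualitative behavior, a minimal single-site Metropolis sketch of the ferromagnetic $q$-state clock model (without the vortex-core term of Eq.~\ref{eqH5lamda}) is given below; the lattice size, sweep counts, and temperatures are toy values chosen only so that the ordered and disordered limits are visible.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweeps(q, L, T, spins, n_sweeps):
    """Single-site Metropolis updates for the ferromagnetic q-state clock model."""
    angles = 2.0 * np.pi * np.arange(q) / q
    for _ in range(n_sweeps):
        for i in range(L):
            for j in range(L):
                s_old, s_new = spins[i, j], rng.integers(q)
                dE = 0.0  # energy change of rotating spin (i, j)
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    s_nb = spins[(i + di) % L, (j + dj) % L]
                    dE += (np.cos(angles[s_old] - angles[s_nb])
                           - np.cos(angles[s_new] - angles[s_nb]))
                if dE <= 0 or rng.random() < np.exp(-dE / T):
                    spins[i, j] = s_new
    return spins

def magnetization(q, spins):
    angles = 2.0 * np.pi * spins / q
    return abs(np.exp(1j * angles).mean())

q, L = 5, 8
cold = metropolis_sweeps(q, L, 0.1, np.zeros((L, L), dtype=int), 20)
hot = metropolis_sweeps(q, L, 10.0, rng.integers(q, size=(L, L)), 20)
print(magnetization(q, cold), magnetization(q, hot))  # ordered vs disordered
```

A production study would add the $\lambda|\omega_{i'}|$ plaquette term to $dE$, much larger lattices, and proper equilibration and error analysis.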
\begin{figure}[htbp] \begin{center} \includegraphics[width=0.48\textwidth, clip, angle=0]{combine.eps} \caption{\label{combine}(Color online) $-\partial\boldsymbol{m} / \partial T$ and the power law fitting of its peak position varying with the applied field: (a) 2D $XY$ model with $D=40$; (b) 2D Ising model with $D=40$; (c) 3D Ising model with $D=10$.} \end{center} \end{figure} Briefly, with an auxiliary external magnetic field, the function $\partial \boldsymbol{m} / \partial T$ accurately reflects the interplay of the field and the temperature, and captures the implicit dynamics of the excitations in this model, hence correctly describing the phase transitions. In contrast, the derivative of the free energy $F$ with respect to a single parameter, such as the specific heat or the magnetic susceptibility, is inadequate because it lacks information about the internal competition/interplay among the mingled complex excitations. The auxiliary magnetic field and the cross derivative provide us a convenient way to observe the response/dynamics of the different excitations. To further check the universality of this idea, we apply it to the 2D $XY$, the 2D Ising, and the 3D Ising models separately. A sample of $-\partial\boldsymbol{m} / \partial T$ and the power law fitting of the peak position for each model are illustrated in Fig. \ref{combine} as (a), (b), and (c), respectively. For the $XY$ case, the transition temperature is obtained at $T_c=0.8924(16)$, which coincides with the previous estimation\cite{JFYu} from the magnetic susceptibility, $T_c=0.8921(19)$, with the same bond dimension $D=40$; both conform to the results from other methods such as MC\cite{Hasenbusch2, Tomita} at $T_c=0.89294(8)$. For the 2D Ising case, the power law extrapolation yields the transition temperature $T_c=2.26893(18)$, and a simultaneous prediction from the magnetic susceptibility (not shown in the figure) is $T_c=2.26904(22)$.
They agree well with each other, and with the exact value $T_c=2/\ln{(\sqrt{2}+1)}\approx2.26919$, even using a relatively small bond dimension $D=40$. As to the 3D Ising case, the same procedure is carried out with the bond dimension $D=10$. The similar efficiency of the function $-\partial \boldsymbol{m} / \partial T$ is clearly demonstrated once again, from which the critical temperature is located at $T_c=4.5014(2)$. Also, the $T_c$ is determined at $4.5013(1)$ from the magnetic susceptibility. Both are consistent with the prediction of $T_c=4.5015$ by the HOTRG calculation with the same $D$\cite{Xie2}. Moreover, we can also observe the singularity in the $-\partial \boldsymbol{m} / \partial T$ curve of the 2D/3D Ising model, indicating a second-order phase transition. It becomes sharper and more manifest as the field is lowered toward zero, so a direct and accurate determination of the critical point can be obtained without extrapolation. These examples verify the capability of our idea, and the proposed cross derivative $\partial^2{F}/{\partial{T}\partial{h}}$ appears more versatile and effective, whether a transition is trivial or exotic, especially when multiple exotic excitations are involved and other quantities/methods struggle to clarify them. Also, we think this strategy is universal, as long as the free energy can be calculated accurately with a weak external magnetic field included. Experimentally, one can measure the system magnetization $\boldsymbol{m}(T, h)$, from which the phase transition information can be easily deduced. More importantly, the magnetic field and the magnetization in the Gibbs free energy or the Hamiltonian are just one typical conjugate pair of generalized force and displacement\cite{Reichl}. Likewise, other conjugate pairs, if introduced into the Hamiltonian to regulate a system's behavior, would play a similar role in investigating the phase transitions, e.g.
the electric field and the polarization in an electronic system, which could be integrated into formula \eqref{formu2} similarly. Thus, this idea will greatly enrich our vision of, and means for, studying phase transitions both theoretically and experimentally. Considering its accuracy and simplicity, the idea proposed in this work is an efficient and universal way to investigate phase transitions in classical spin systems, trivial or complex, 2D or 3D. The predictions will be more accurate if the free energy or the physical quantities involved can be computed more precisely. We are grateful to Hong-Hao Tu, Fuxiang Li, and Yu-Chin Tzeng for valuable discussions and comments. Y. Chen thanks Mr. Yuan Si for help with the Monte Carlo simulations. This work was supported by the Shanghai Pujiang Program (No. 17PJ1407400), the National Natural Science Foundation of China (No. 11774420), the National R\&D Program of China (No. 2016YFA0300503, No. 2017YFA0302900), and the Natural Science Foundation of Hunan Province (No. 851204035).
\section{Introduction} Planning under uncertainty is a central problem in robotics. The space of current methods includes several contenders, each with different simplifying assumptions, approximations, and domains of applicability. This is a natural consequence of the fact that dealing with continuous state, control and observation spaces, for non-linear systems, across long time horizons, with significant noise and potentially multiple agents, is fundamentally intractable. Model Predictive Control is one popular means for tackling optimal control problems~\cite{Mayne_1,Mayne_2}. The MPC approach solves a finite horizon ``deterministic'' optimal control problem at every time step given the current state of the process, performs only the first control action, and then repeats the planning process at the next time step. In terms of computation, this is a costly endeavor. When a stochastic control problem is well approximated by the deterministic problem, namely when the noise is meager, much of this computation is simply superfluous. In this paper we consider a recently proposed method~\cite{D2C1.0}, grounded in a decoupling result, that uses a local feedback to control noise-induced deviations from the deterministic (that we term the ``nominal'') trajectory. When the deviation is too large for the feedback to manage, replanning is triggered and a fresh nominal is computed. Otherwise, the feedback tames the perturbations during execution and no computation is expended on replanning. Figure~\ref{fig:timeplot} illustrates this: the areas under the respective curves give the total computational resources consumed---the savings are seen to be considerable. This paper presents an empirical investigation of this decoupling approach, exploring dimensions that are important in characterizing its performance. The primary focus is on understanding the performance across a wide range of noise conditions.
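To make the replan-when-needed idea concrete, the following one-dimensional caricature tracks a nominal trajectory with linear feedback and replans only when the deviation exceeds a threshold. The dynamics, gain, threshold, and horizon below are invented for illustration; they are not the algorithms evaluated in this paper.

```python
import numpy as np

def plan_nominal(x0, goal, steps):
    """Nominal open-loop plan for x_{k+1} = x_k + u_k: a straight line to the goal."""
    u = np.full(steps, (goal - x0) / steps)
    x = x0 + np.cumsum(np.concatenate([[0.0], u]))
    return x, u

def track_with_replan(x0, goal, N=20, K=0.8, sigma=0.0, threshold=0.2, seed=0):
    rng = np.random.default_rng(seed)
    x_nom, u_nom = plan_nominal(x0, goal, N)
    x, k, replans = x0, 0, 0
    for t in range(N):
        if abs(x - x_nom[k]) > threshold:      # deviation too large: replan
            x_nom, u_nom = plan_nominal(x, goal, N - t)
            k, replans = 0, replans + 1
        u = u_nom[k] + K * (x_nom[k] - x)      # nominal control + linear feedback
        x = x + u + sigma * rng.standard_normal()
        k += 1
    return x, replans

x_final, replans = track_with_replan(0.0, 1.0, sigma=0.0)
print(x_final, replans)  # noise-free: goal reached, no replanning
```

With moderate noise the feedback absorbs most perturbations and replanning fires rarely, which is the source of the computational savings in Figure~\ref{fig:timeplot}.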
\begin{figure} \centering \begin{subfigure}[b]{0.225\textwidth} \centering \includegraphics[width=\textwidth]{images_pdf/time_comp1.pdf} \caption{A single agent.} \label{1_agent_replan_time} \end{subfigure} % \begin{subfigure}[b]{0.225\textwidth} \centering \includegraphics[width=\textwidth]{images_pdf/time_comp3.pdf} \caption{Three agents.} \label{3_agent_replan_time} \end{subfigure} \caption{Computation time expended by MPC (in blue) and the algorithms we describe (in green), at each time step for a sample experiment involving navigation. Both cases result in nearly identical motions by the robot. The peaks in T-LQR2 and MT-LQR2 happen only when replanning takes place. Computational effort decreases for both methods because the horizon diminishes as the agent(s) reach their goals. (To relate to subsequent figures: noise parameter $\epsilon = 0.4$ and the replan threshold = 2\% of cost deviation.)} \label{fig:timeplot} \end{figure}% \subsection{Related Work} Robotic planning problems under uncertainty can be posed as stochastic optimal control problems that require the solution of an associated Dynamic Programming (DP) problem; however, as the state dimension $d$ increases, the computational complexity grows exponentially \cite{bertsekas1}, Bellman's infamous ``curse of dimensionality". There has been recent success using the sophisticated (Deep) Reinforcement Learning (RL) paradigm to solve DP problems, where deep neural networks are used as the function approximators \cite{RLHD1, RLHD2,RLHD3, RLHD4, RLHD5}; however, the training time required by these approaches is still prohibitive for the real-time robotic planning considered here. In the case of continuous state, control and observation space problems, the Model Predictive Control \cite{Mayne_1, Mayne_2} approach has been used with great success in the control systems and robotics community.
For deterministic systems, the process results in solving the original DP problem in a recursive online fashion. However, stochastic control problems, and the control of uncertain systems in general, remain unresolved within MPC. As succinctly noted in \cite{Mayne_1}, the problem arises due to the fact that in stochastic control problems, the MPC optimization at every time step cannot be over deterministic control sequences, but rather has to be over feedback policies, which is difficult to accomplish because a compact, tractable parametrization of such policies to perform the optimization is, in general, unavailable. Thus, the tube-based MPC approach, and its stochastic counterparts, typically consider linear systems \cite{T-MPC1, T-MPC2,T-MPC3} for which a linear parametrization of the feedback policy suffices, but the methods require expensive offline computation when dealing with nonlinear systems. In recent work, we have introduced a ``decoupling principle'' that allows us to tractably solve such stochastic optimal control problems in a near optimal fashion, with applications to highly efficient RL and MPC implementations \cite{D2C1.0,T-PFC}. However, this prior work required a small noise assumption. In this work, we relax this small noise assumption to show, via extensive empirical evaluation, that even when the noise is not small, a ``replanning'' modification of the decoupled planning algorithms suffices to keep the planning computationally efficient while retaining performance comparable to MPC. Multiple agents further and severely compound the planning problem, since now we are also faced with a control space that grows exponentially with the number of agents in the system. Moreover, since the individual agents never have full information regarding the system state, the observations are partial.
Furthermore, the decision making has to be done in a distributed fashion, which places additional constraints on the networking and communication resources. In a multi-agent setting, the stochastic optimal control problem can be formulated in the space of joint policies. Some variations of this problem have been successfully characterized and tackled based on the level of observability, in/dependence of the dynamics, cost functions and communications \cite{seuken2008formal,oliehoek2016concise,pynadath2002communicative}. This has resulted in a variety of solutions from fully-centralized \cite{boutilier1996planning} to fully-decentralized approaches with many different subclasses \cite{amato2013decentralizedB,oliehoek2012decentralized}. The major concerns of the multi-agent problem are tractability of the solution and the level of communication required during the execution of the policies. In this paper, we shall consider a generalization of the decoupling principle to a multi-agent, fully observed setting. We show that this leads to a spatial decoupling between agents in that they do not need to communicate for long periods of time during execution. Although we do not consider the problem of when and how to replan in this paper, assuming instead that there exists a (yet to be determined) distributed mechanism that can achieve this, we nonetheless show that there is a highly significant increase in planning efficiency over a wide range of noise levels. \subsection{Outline of Paper} The rest of the document is organised as follows: Section~\ref{section:prob} states the problem, Section~\ref{section:decoupling} gives background on the decoupling principle, Section~\RomanNumeralCaps{4} explains the planning algorithms used, Section~\RomanNumeralCaps{5} discusses the results and observations, and Section~\ref{section:conclusion} concludes.
\section{Problem Formulation} \label{section:prob} The problem of robot planning and control under noise can be formulated as a stochastic optimal control problem in the space of feedback policies. We assume here that the map of the environment is known and the state of the robot is fully observed. Uncertainty in the problem lies in the system's actions. %
\subsection{System Model:} For a dynamic system, we denote the state and control vectors by $\V{x}_t \in \ \mathbb{X} \subset \ \mathbb{R}^{n_x}$ and $\V{u}_t \in \ \mathbb{U} \subset \ \mathbb{R}^{n_u}$ respectively at time $t$. The motion model $f : \mathbb{X} \times \mathbb{U} \times \mathbb{R}^{n_u} \rightarrow \mathbb{X} $ is given by the equation \begin{equation} \V{x}_{t+1}= f(\V{x}_t, \V{u}_t, \epsilon\V{w}_t); \ \V{w}_t \sim \mathcal{N}(\V{0}, {\mathbf \Sigma}_{\V{w}_t}) \label{eq:model}, \end{equation} where \{$\V{w}_t$\} is a zero mean, independent, identically distributed (i.i.d.) random sequence with covariance ${\mathbf\Sigma}_{\V{w}_t}$, and $\epsilon$ is a small parameter modulating the noise input to the system.
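As a concrete instance of the model~\eqref{eq:model}, the following minimal Python sketch rolls out a noisy unicycle; the unicycle dynamics, time step, horizon and noise level here are illustrative assumptions, not the exact model used in our experiments.

```python
import numpy as np

def step(x, u, eps, rng, dt=0.1):
    # One step of x_{t+1} = f(x_t, u_t, eps * w_t), with the i.i.d.
    # zero-mean noise w_t entering through the control channel.
    w = rng.standard_normal(u.shape)
    v, om = u + eps * w                      # noisy linear/angular velocity
    px, py, th = x
    return np.array([px + dt * v * np.cos(th),
                     py + dt * v * np.sin(th),
                     th + dt * om])

rng = np.random.default_rng(0)
x = np.zeros(3)                              # state: (p_x, p_y, theta)
for t in range(50):                          # roll out one noisy trajectory
    x = step(x, np.array([1.0, 0.1]), eps=0.4, rng=rng)
```

Setting $\epsilon = 0$ in the sketch recovers the deterministic system that defines the nominal trajectory used throughout the paper.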
\subsection{Stochastic optimal control problem:} %
The stochastic optimal control problem for a dynamic system with initial state $\V{x}_0$ is defined as: \begin{equation} J_{\pi^{*}}(\V{x}_0) = \min_{\pi} \ \Exp{}{\sum^{T-1}_{t=0} c(\V{x}_t, \pi_t (\V{x}_t)) + c_T(\V{x}_T)}, \end{equation} \begin{equation} s.t.\ \V{x}_{t+1} = f(\V{x}_t, \pi_t (\V{x}_t), \epsilon\V{w}_t), \end{equation} where: \begin{itemize} \item the optimization is over feedback policies $\pi := \{ \pi_0, \pi_1, \ldots, \pi_{T-1} \} $ and $\pi_t(\cdot)$: $\mathbb{X} \rightarrow \mathbb{U}$ specifies an action given the state, $\V{u}_t = \pi_t(\V{x}_t)$; \item $J_{\pi^{*}}(\cdot): \mathbb{X} \rightarrow \mathbb{R}$ is the cost function when the optimal policy $\pi^{*}$ is executed; \item $c(\cdot,\cdot): \mathbb{X} \times \mathbb{U} \rightarrow \mathbb{R} $ is the one-step cost function; \item $c_T(\cdot): \mathbb{X} \rightarrow \mathbb{R}$ is the terminal cost function; \item $T$ is the horizon of the problem; \item the expectation is taken over the noise sequence $\{\V{w}_t\}$. \end{itemize} \section{A Decoupling Principle} \label{section:decoupling} Now, we give a brief overview of a ``decoupling principle'' that allows us to substantially reduce the complexity of the stochastic planning problem given that the parameter $\epsilon$ is small enough. We only provide an outline here; the relevant details can be found in our recent work \cite{D2C1.0}. We shall also present a generalization to a class of multi-robot problems. Finally, we preview the results in the rest of the paper. \subsection{Near-Optimal Decoupling in Stochastic Optimal Control} Let $\pi_t(\V{x}_t)$ denote a control policy for the stochastic planning problem above, not necessarily the optimal policy.
Consider now the control actions of the policy when the noise to the system is uniformly zero, and let us denote the resulting ``nominal'' trajectory and controls as $\overline{\V{x}}_t$ and $\overline{\V{u}}_t$ respectively, i.e., $\overline{\V{x}}_{t+1} = f(\overline{\V{x}}_t, \overline{\V{u}}_t, 0)$, where $\overline{\V{u}}_t = \pi_t(\overline{\V{x}}_t)$. Note that this nominal system is well defined. \\ Further, let us assume that the closed-loop (i.e., with $\V{u}_t = \pi_t(\V{x}_t)$) system equations and the feedback law are smooth enough that we can expand the feedback law about the nominal as $\pi_t(\V{x}_t) = \overline{\V{u}}_t + \M{K}_t\delta \V{x}_t + \M{R}_t^{\pi}(\delta \V{x}_t)$, where $\delta \V{x}_t = \V{x}_t - \overline{\V{x}}_t$, i.e., the perturbation from the nominal, $\M{K}_t$ is the linear gain obtained by the Taylor expansion about the nominal in terms of the perturbation $\delta \V{x}_t$, and $\M{R}_t^{\pi}(\cdot)$ represents the second and higher order terms in the expansion of the feedback law about the nominal trajectory. Further, we assume that the closed-loop perturbation state evolves about the nominal as: $\delta \V{x}_{t+1} = \M{A}_t \delta \V{x}_t + \M{B}_t \M{K}_t \delta \V{x}_t + \M{R}_t^f (\delta \V{x}_t) + \epsilon \M{B}_t \V{w}_t$, where $\M{A}_t$, $\M{B}_t$ are the system matrices obtained by linearizing the system state equations about the nominal state and control, while $\M{R}_t^f(\cdot)$ represents the second and higher order terms in the closed-loop dynamics in terms of the state perturbation $\delta \V{x}_t$. Moreover, let the nominal cost be given by $\overline{J}^{\pi} = \sum_{t=0}^T \overline{c}_t$, where $\overline{c}_t = c(\overline{\V{x}}_t,\overline{\V{u}}_t)$, for $t\leq T-1$, and $\overline{c}_T = c_T(\overline{\V{x}}_T)$.
Further, assume that the cost function is smooth enough that it permits the expansion $J^{\pi} = \overline{J} + \sum_t \M{C}_t \delta \V{x}_t + \sum_t \M{R}_t^c(\delta \V{x}_t)$ about the nominal trajectory, where $\M{C}_t$ denotes the linear term in the perturbation expansion and $\M{R}_t^c(\cdot)$ denotes the second and higher order terms in the same. Finally, define the exactly linear perturbation system $\delta \V{x}_{t+1}^\ell = \M{A}_t \delta \V{x}_t^\ell + \M{B}_t\M{K}_t \delta \V{x}_t^\ell + \epsilon \M{B}_t \V{w}_t$. Further, let $\delta J_1^{\pi,\ell}$ denote the cost perturbation due solely to the linear system, i.e., $\delta J_1^{\pi,\ell} = \sum_t \M{C}_t \delta \V{x}_t^\ell$. Then, the decoupling result states the following \cite{D2C1.0}: \begin{theorem} The closed-loop cost function $J^{\pi}$ can be expanded as $J^{\pi} = \overline{J}^{\pi} + \delta J_1^{\pi,\ell} + \delta J_2^{\pi}$. Furthermore, $\Exp{}{J^{\pi}} = \overline{J}^{\pi} + O(\epsilon^2)$, and $\Var[J^{\pi}] = \Var[\delta J_1^{\pi,\ell}] + O(\epsilon^4)$, where $\Var[\delta J_1^{\pi,\ell}]$ is $O(\epsilon^2)$. \end{theorem} Thus, the above result says that the mean value of the cost is determined almost solely by the nominal control actions, while the variance of the cost is almost solely determined by the linear closed-loop system. Thus, the decoupling result says that the feedback law design can be decoupled into an open-loop and a closed-loop problem.\\ \textit{Open-Loop Problem:} This problem solves the deterministic (nominal) optimal control problem: \begin{equation} \overline{J}= \min_{\overline{\V{u}}_t} \sum_{t=0} ^{T-1} c(\overline{\V{x}}_t,\overline{\V{u}}_t) + c_T(\overline{\V{x}}_T), \end{equation} subject to the nominal dynamics: $\overline{\V{x}}_{t+1} = f(\overline{\V{x}}_t, \overline{\V{u}}_t)$.
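To make the open-loop problem concrete, the following sketch solves it by direct shooting for a hypothetical double-integrator stand-in. The actual experiments use a car-like model solved with CasADi/Ipopt; the dynamics, weights, goal, and the \texttt{scipy} solver below are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import minimize

T, dt = 30, 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])        # double-integrator f(x,u) = Ax + Bu
B = np.array([[0.0], [dt]])
x0, xg = np.array([0.0, 0.0]), np.array([1.0, 0.0])
Wx, Wu, Wf = np.eye(2), 0.1, 100.0 * np.eye(2)

def rollout(u_seq):
    # noise-free dynamics: the nominal trajectory for given controls
    xs = [x0]
    for u in u_seq:
        xs.append(A @ xs[-1] + B @ [u])
    return xs

def nominal_cost(u_seq):
    # quadratic running + terminal cost, penalizing deviation from xg
    xs = rollout(u_seq)
    J = sum((x - xg) @ Wx @ (x - xg) for x in xs[:-1])
    J += Wu * float(u_seq @ u_seq)
    return J + (xs[-1] - xg) @ Wf @ (xs[-1] - xg)

res = minimize(nominal_cost, np.zeros(T), method="L-BFGS-B")
u_bar = res.x                                 # nominal controls u_bar_{0:T-1}
x_bar = rollout(u_bar)                        # nominal states  x_bar_{0:T}
```

The resulting $(\overline{\V{x}}_{0:T}, \overline{\V{u}}_{0:T-1})$ is exactly the nominal about which the closed-loop feedback is then designed.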
\\ \textit{Closed-Loop Problem:} One may try to optimize the variance of the linear closed-loop system \begin{equation} \min_{\M{K}_t} \Var[\delta J_1^{\pi,\ell}] \end{equation} subject to the linear dynamics $\delta \V{x}_{t+1}^\ell = \M{A}_t \delta \V{x}_t^\ell + \M{B}_t \M{K}_t \delta \V{x}_t^\ell + \epsilon \M{B}_t \V{w}_t$. However, the above problem does not have a standard solution. Note, though, that we only require a good variance for the cost function, not the optimal one; this may be accomplished by a surrogate LQR problem that provides a good linear variance, as follows.\\ \textit{Surrogate LQR Problem:} Here, we optimize the standard LQR cost: \begin{equation} \delta J_{\textsc{lqr}} =\min_{\delta\V{u}_t} \Exp{\V{w}_t}{\sum_{t=0}^{T-1} \left(\delta {\tr{\V{x}}_t} \M{Q} \delta \V{x}_t + \delta \tr{\V{u}}_t\M{R}\delta \V{u}_t\right) + \delta \tr{\V{x}}_T \M{Q}_f \delta \V{x}_T}, \label{LQRcost} \end{equation} subject to the linear dynamics $\delta \V{x}_{t+1}^\ell = \M{A}_t \delta \V{x}_t^\ell + \M{B}_t \delta \V{u}_t + \epsilon \M{B}_t \V{w}_t$. In this paper, this decoupled design shall henceforth be called the trajectory-optimized LQR (T-LQR) design. \subsection{Multi-agent setting} Now, we generalize the above result to a class of multi-agent problems. We consider a set of agents that are transition independent, i.e., their dynamics are independent of each other. For simplicity, we also assume that the agents have perfect state measurements. Let the system equations for the agents be given by: $ \V{x}_{t+1}^j = f(\V{x}_t^j) + \M{B}^j_t(\V{u}_t^j + \epsilon \V{w}_t^j), $ where $j = 1,2,\dots,M$ denotes the $j\ensuremath{{}^{\textrm{th}}}$ agent. (We have assumed control-affine dynamics for simplicity).
Further, let us assume that we are interested in the minimization of the joint cost of the agents given by $\mathcal{J} = \sum_{t=0}^{T-1} c(\M{X}_t,\M{U}_t) + \Phi(\M{X}_T)$, where $\M{X}_t = [\V{x}_t^1,\dots,\V{x}_t^M]$, and $\M{U}_t = [\V{u}_t^1,\dots, \V{u}_t^M]$ are the joint state and control action of the system. The objective of the multi-agent problem is to minimize the expected value of the cost $\Exp{}{\mathcal{J}}$ over the joint feedback policy $\M{U}_t(\cdot)$. The decoupling result holds here too, and thus the multi-agent planning problem can be separated into an open and closed-loop problem. The open-loop problem consists of optimizing the joint nominal cost of the agents subject to the individual dynamics.\\ \textit{Multi-Agent Open-Loop Problem:}\\ \begin{align}\label{OL-MA} \overline{\mathcal{J}} = \min_{\overline{\M{U}}_t} \sum_{t=0}^{T-1} c(\overline{\M{X}}_t,\overline{\M{U}}_t) + \Phi(\overline{\M{X}}_T), \end{align} subject to the nominal agent dynamics $ \overline{\V{x}}_{t+1}^j = f(\overline{\V{x}}_t^j) + \M{B}^j_t\overline{\V{u}}_t^j. $ The closed-loop problem, in general, consists of optimizing the variance of the cost $\mathcal{J}$, given by $\Var[\delta \mathcal{J}^\ell_1]$, where $\delta \mathcal{J}_1^\ell = \sum_t \M{C}_t \delta \M{X}_t^\ell$ for suitably defined $\M{C}_t$, and $\delta \M{X}_t^\ell = [\delta \V{x}_t^1,\dots, \delta \V{x}_t^M]$, where the perturbations $\delta \V{x}_t^j$ of the $j\ensuremath{{}^{\textrm{th}}}$ agent's state are governed by the decoupled linear multi-agent system $\delta \V{x}_{t+1}^j = \M{A}^j_t\delta \V{x}_t^j + \M{B}_t^j \delta \V{u}^j_t + \epsilon \M{B}_t^j \V{w}_t^j.$ This design problem does not have a standard solution, but recall that we are not really interested in obtaining the optimal closed-loop variance, but rather a good variance.
Thus, we can instead solve a surrogate LQR problem given the cost function $\delta \mathcal{J}_{\textsc{mtlqr}} = \sum_{t=0}^{T-1} \sum_j \left(\delta {\tr{\V{x}_t^j}} \M{Q}^j \delta \V{x}_t^j + \delta \tr{\V{u}_t^j}\M{R}\delta \V{u}_t^j\right) + \sum_j\delta \tr{\V{x}_T^j} \M{Q}^j_f \delta \V{x}_T^j$. Since the cost function itself is decoupled, the surrogate LQR design degenerates into a decoupled LQR design for each agent.\\ \textit{Surrogate Decoupled LQR Problem:} \begin{equation} \delta \mathcal{J}^j =\min_{\delta\V{u}_t^j} \Exp{\V{w}_t^j}{\sum_{t=0}^{T-1} \left(\delta {\tr{\V{x}_t^j}} \M{Q}^j \delta \V{x}_t^j + \delta \tr{\V{u}_t^j}\M{R}\delta \V{u}_t^j\right) + \delta \tr{\V{x}_T^j} \M{Q}^j_f \delta \V{x}_T^j}, \end{equation} subject to the linear decoupled agent dynamics $\delta \V{x}_{t+1}^j = \M{A}^j_t\delta \V{x}_t^j + \M{B}_t^j \delta \V{u}^j_t + \epsilon \M{B}_t^j \V{w}_t^j.$\\ \begin{remark} Note that the above decoupled feedback design results in a spatial decoupling between the agents in the sense that, at least in the small noise regime, after their initial joint plan is made, the agents never need to communicate with each other in order to complete their missions. \end{remark} \subsection{Planning Complexity versus Uncertainty} The decoupling principle outlined above shows that the complexity of planning can be drastically reduced while still retaining near optimal performance for sufficiently small noise (i.e., parameter $\epsilon \ll 1$). Nonetheless, the skeptical reader might argue that this result holds only for low values of $\epsilon$ and thus, its applicability for higher noise levels is suspect. Still, because the result holds to second order, it hints that near optimality might extend over a reasonably large range of $\epsilon$.
Naturally, the question is \textsl{`will it hold for medium to higher levels of noise?'}\\ \begin{figure}[t] \centering \begin{subfigure}[b]{0.225\textwidth} \centering \includegraphics[width=\textwidth]{images_pdf/cost_comp_all5_1_agent_c7_p7_02.pdf} \caption{Full noise spectrum.} \label{1 agent cost full} \end{subfigure}% % \begin{subfigure}[b]{0.225\textwidth} \centering \includegraphics[width=\textwidth]{images_pdf/cost_comp_all5_1_agent_c7_p7_02_low.pdf} \caption{Enhanced detail: $0\leq\epsilon\leq0.4$.} \label{1 agent cost low} \end{subfigure} \caption{Cost evolution of the different algorithms for varying noise for a single agent. Control Horizon ($H_c$) used for MPC-SH and T-LQR2-SH was 7. $J_{\textrm{thresh}}$ = 2\% was the replanning threshold used. $J/\overline{J}$ is the ratio of the cost incurred during execution to the nominal cost and is used as the performance measure throughout the paper. The nominal cost $\overline{J}$ which is calculated by solving the deterministic OCP for the total time horizon, just acts as a normalizing factor here. (T-LQR2-SH is not shown in (b) since it skews the graph and is not important in low noise cases.)} \label{1_agent_cost} \end{figure}% \textit{Preview of our Results.} In this paper, we illustrate the degree to which the above result still holds when we allow periodic replanning of the nominal trajectory in T-LQR in an event triggered fashion, dubbed T-LQR2. Here, we shall use MPC as a ``gold standard'' for comparison since the true stochastic control problem is intractable, and it was shown by Fleming in a seminal paper~\cite{fleming1971stochastic} that, effectively speaking, the MPC policy is $O(\epsilon^4)$ near-optimal compared to the true stochastic policy. We show that though the number of replanning operations in T-LQR2 increases the planning burden over T-LQR, it is still much reduced when compared to MPC, which replans continually. 
The ability to trigger replanning means that T-LQR2 can always produce solutions with the same quality as MPC, albeit by demanding the same computational cost as MPC in some instances. But for moderate levels of noise, T-LQR2 can produce output of quality comparable to MPC with substantial computational savings. In the high noise regime, replanning is more frequent, but we shall see that there is another consideration at play: the effective planning horizon decreases, so there is no benefit in planning all the way to the end rather than considering only a few steps ahead; in fact, in some cases, it can be harmful to consider the distant future. Noting that as the planning horizon decreases, planning complexity decreases, this helps recover tractability even in this regime.\\ Thus, while lower levels of noise render the planning problem tractable due to the decoupling result, planning under even medium and higher levels of noise can be practical because the planning horizon should shrink as uncertainty increases. When noise inundates the system, long-term predictions become so uncertain that the best-laid plans will very likely run awry; it would then be wasteful to invest significant time thinking very far ahead. To examine this widely-recognized truth more quantitatively, the parameter $\epsilon$ will be a knob we adjust, exploring these aspects in the subsequent analysis. \section{The Planning Algorithms} The preliminaries and the algorithms are explained below. \subsection{Deterministic Optimal Control Problem:} Given the initial state $\V{x}_0$ of the system, the solution to the deterministic OCP is given as:%
\begin{equation} J^{*}(\V{x}_0) = \min_{\V{u}_{0:T-1}} \left[\sum^{T-1}_{t=0} c_t(\V{x}_t, \V{u}_t) + c_T(\V{x}_T)\right], \label{DOCP} \end{equation}%
\begin{align*} s.t. \ \V{x}_{t+1} = f(\V{x}_t) + \M{B}_t \V{u}_t,\\ \V{u}_{\text{min}} \leq \V{u}_t \leq \V{u}_{\text{max}},\\ | \V{u}_{t} - \V{u}_{t-1}| \leq \Delta \V{u}_{\text{max}}.
\end{align*} The last two constraints model physical limits: upper and lower bounds on the control inputs and on their rate of change. The solution to the above problem gives the open-loop control inputs $\overline{\V{u}}_{0:T-1}$ for the system. For our problem, we take a quadratic cost function for state and control as $ c_t(\V{x}_t,\V{u}_t) = \tr{\V{x}}_t \M{W}^x\V{x}_t + \tr{\V{u}}_t\M{W}^u \V{u}_t, $ $ c_T(\V{x}_T) = \tr{\V{x}}_T\M{W}^x_f\V{x}_T, $ where $\M{W}^x,\ \M{W}^x_f \succeq \M{0}$ and $\M{W}^u \ \succ \ \M{0}$.\\ \subsection{Model Predictive Control (MPC):} We employ the non-linear MPC algorithm due to the non-linearities associated with the motion model. The MPC algorithm implemented here solves the deterministic OCP~\eqref{DOCP} at every time step, applies the control inputs computed for the first instant, and uses the rest of the solution as an initial guess for the subsequent computation. In the next step, the current state of the system is measured and used as the initial state, and the process is repeated. \subsection{Short Horizon MPC (MPC-SH):} We also implement a variant of MPC, typically used in practical applications, which solves the OCP only for a short horizon rather than the entire horizon at every step. At the next step, a new optimization is solved over the shifted horizon. This implementation gives a greedy solution but is computationally easier to solve. It also has certain advantageous properties in high noise cases, which will be discussed in the results section. We denote the short planning horizon, also called the control horizon, by $H_c$; the controls are computed up to this horizon. A generic algorithm for MPC is shown in Algorithm~\ref{MPC_algo}.
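The receding-horizon loop of Algorithm~\ref{MPC_algo} can be sketched in Python as follows. This is a minimal illustration on a hypothetical double-integrator, with a \texttt{scipy}-based \texttt{solve\_ocp} standing in for the CasADi/Ipopt solver used in the experiments; all constants are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])        # double-integrator stand-in
B = np.array([[0.0], [dt]])
xg = np.array([1.0, 0.0])

def solve_ocp(x0, H, u_guess):
    # deterministic OCP over the control horizon H (cf. the DOCP above)
    def cost(useq):
        x, J = x0.copy(), 0.0
        for u in useq:
            x = A @ x + B @ [u]
            J += (x - xg) @ (x - xg) + 0.1 * u * u
        return J + 100.0 * (x - xg) @ (x - xg)
    return minimize(cost, u_guess, method="L-BFGS-B").x

T, Hc, eps = 30, 7, 0.1
rng = np.random.default_rng(1)
x, u_guess = np.zeros(2), np.zeros(Hc)
for t in range(T):
    H = min(Hc, T - t)
    useq = solve_ocp(x, H, u_guess[:H])      # replan from the measured state
    w = eps * rng.standard_normal()
    x = A @ x + B @ [useq[0] + w]            # apply only the first input
    u_guess = np.append(useq[1:], 0.0)       # warm start the next solve
```

The warm start mirrors the implementation described above: the tail of the previous solution seeds the next optimization, which is what makes per-step replanning affordable at all.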
\begin{algorithm}[h] \SetAlgoLined \KwIn{$\V{x}_0$ -- initial state, $\V{x}_g$ -- final state, $T$ -- time horizon, $H_c$ -- control horizon, $\Delta t$ -- time step, $\mathcal{P}$ -- system and environment parameters.}
\For{$t \leftarrow 0$ \KwTo $T-1$}{
$\V{u}_{t:t+H_c-1} \leftarrow$ \phantom{xxxxxxx}OCP($\V{x}_{t},\V{x}_g, \min(H_c, T\!-\!t),\V{u}_{t-1}, \V{u}_{\textrm{guess}},\mathcal{P}$)
$\V{x}_{t+1} \leftarrow$ $f(\V{x}_{t}) + \M{B}_t(\V{u}_t + \epsilon\V{w}_t)$
}
\caption{MPC algorithm\label{MPC_algo}.} \end{algorithm} \subsection{Trajectory Optimised Linear Quadratic Regulator~\mbox{(T-LQR)}:} \label{sec:lqr_gains} As discussed in Section~\ref{section:decoupling}, the stochastic optimal control problem can be decoupled and solved by designing an optimal open-loop (nominal) trajectory and a decentralized LQR policy to track the nominal. \\ \textit{Design of nominal trajectory}: The nominal trajectory is generated by first finding the optimal open-loop control sequence by solving the deterministic OCP~\eqref{DOCP} for the system. Then, using the computed control inputs and the noise-free dynamics, the sequence of states traversed $\overline{\V{x}}_{0:T}$ can be calculated.\\ \textit{Design of feedback policy:} In order to design the LQR controller, the system is first linearised about the nominal trajectory ($\overline{\V{x}}_{0:T}$, $\overline{\V{u}}_{0:T-1}$). Using the linear time-varying system, the feedback policy is determined by minimizing a quadratic cost as shown in~\eqref{LQRcost}. The linear quadratic stochastic control problem~\eqref{LQRcost} can be easily solved using the dynamic Riccati equation, and the resulting policy is $\delta{\V{u}}_{t} = -\M{L}_t\delta{\V{x}}^\ell_t$.
The feedback gain and the Riccati equations are given by \begin{equation} \M{L}_t = \inv{(\M{R} + \tr{\M{B}}_t\M{P}_{t+1} \M{B}_t)} \tr{\M{B}}_t \M{P}_{t+1} \M{A}_t, \label{LQR_gain} \end{equation} \begin{equation} \M{P}_{t} = \tr{\M{A}}_t \M{P}_{t+1} \M{A}_t - \tr{\M{A}}_t \M{P}_{t+1} \M{B}_t \M{L}_t + \M{Q}, \label{Riccati} \end{equation} respectively, where $\M{Q}_f, \M{Q} \succeq \V{0}, \M{R} \ \succ \V{0}$ are the weight matrices for states and control. Here~\eqref{Riccati} is the discrete-time dynamic Riccati equation, which can be solved by backward iteration using the terminal condition $\M{P}_{T} = \M{Q}_f $. \subsection{T-LQR with Replanning (\mbox{T-LQR2}):} T-LQR performs well at low noise levels, but at medium and high noise levels the system tends to deviate from the nominal. So, at any point during execution, if the relative deviation $(J_t - \overline{J}_t)/\overline{J}_t$ of the actual cost $J_t$ incurred till time $t$ from the nominal cost $\overline{J}_t$ exceeds a threshold $J_{\textrm{thresh}}$, replanning is triggered from the current state for the remainder of the horizon. This ratio measures the percentage deviation of the online trajectory from the nominal. Note that if we set $J_{\textrm{thresh}}=0$, T-LQR2 reduces to MPC. The calculation of the new nominal trajectory and LQR gains is carried out similarly to the explanation in Section~\ref{sec:lqr_gains}. A generic algorithm for T-LQR and T-LQR2 is shown in Algorithm~\ref{TLQR_algo}. \subsection{Short Horizon T-LQR with Replanning (T-LQR2-SH):} A T-LQR equivalent of MPC-SH is also implemented, where the nominal is planned only for a short horizon and tracked with a feedback policy as described in T-LQR. It also inherits the replanning property of T-LQR2.\\ The implementations of all the algorithms are available at \url{https://github.com/MohamedNaveed/Stochastic_Optimal_Control_algos/}.
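The backward pass in~\eqref{LQR_gain} and~\eqref{Riccati} can be sketched directly. In T-LQR, $\M{A}_t$ and $\M{B}_t$ come from linearizing about the nominal trajectory; the time-invariant double-integrator linearization and weights below are illustrative assumptions only.

```python
import numpy as np

def lqr_gains(As, Bs, Q, R, Qf):
    # Backward iteration of the discrete Riccati equation with P_T = Q_f,
    # returning the time-varying feedback gains L_t.
    P, Ls = Qf, []
    for A, B in zip(reversed(As), reversed(Bs)):
        L = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = A.T @ P @ A - A.T @ P @ B @ L + Q
        Ls.append(L)
    return Ls[::-1]

# illustrative time-invariant linearization (double integrator)
dt, T = 0.1, 100
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Ls = lqr_gains([A] * T, [B] * T, np.eye(2), 0.01 * np.eye(1), np.eye(2))

# closed-loop perturbation tracking: dx_{t+1} = (A - B L_t) dx_t
dx = np.array([1.0, 0.0])
for L in Ls:
    dx = (A - B @ L) @ dx
```

During execution these precomputed gains are applied with no further optimization; T-LQR2 only re-runs the planner (and this backward pass) when the cost-deviation threshold is crossed.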
\begin{algorithm}[h] \SetAlgoLined \SetKwProg{Fn}{Function}{ is}{end} \SetKwComment{Comment}{/*}{*/} \KwIn{$\V{x}_0$ -- initial state, $\V{x}_g$ -- final state, $T$ -- time horizon, $J_{\textrm{thresh}}$ -- replan threshold, $\Delta t$ -- time step, $\mathcal{P}$ -- system and environment parameters.} \Fn{Plan($\V{x}_{0},\V{x}_g, T, \V{{u}}_{\textrm{init}}$, $\V{u}_{\textrm{guess}}$,$\mathcal{P}$)}{ $\overline{\V{{u}}}_{0:T-1}$ $\leftarrow$ OCP($\V{x}_{0},\V{x}_g, T,\V{{u}}_{\textrm{init}}$, $\V{u}_{\textrm{guess}}$,$\mathcal{P}$) \For{$t \leftarrow 0$ \KwTo $T-1$}{ $\overline{\V{x}}_{t+1} \leftarrow f(\overline{\V{x}}_{t}) + \M{B}_t\overline{\V{u}}_t$ } $\M{L}_{0:T-1} \leftarrow $ $Compute\_LQR\_Gain(\overline{\V{x}}_{0:T-1},\overline{\V{u}}_{0:T-1}$) return $\overline{\V{x}}_{0:T},\overline{\V{u}}_{0:T-1},\M{L}_{0:T-1}$ } \Fn{Main()}{ $\overline{\V{x}}_{0:T}$,$\overline{\V{u}}_{0:T-1}$,$\M{L}_{0:T-1}$ $\leftarrow$ $\textrm{Plan}(\V{x}_{0},\V{x}_g, T,\mathbf{0}, \V{u}_{\textrm{guess}},\mathcal{P})$ \For{$t \leftarrow 0$ \KwTo $T-1$}{ $\V{u}_{t} \leftarrow \overline{\V{u}}_{t} - \M{L}_{t}(\V{x}_{t} - \overline{\V{x}}_{t})$ $\V{u}_{t} \leftarrow \textrm{Constrain}(\V{u}_{t})$ \tcp*[f]{ Enforce limits} $\V{x}_{t+1} \leftarrow f(\V{x}_{t}) + \M{B}_t(\V{u}_t + \epsilon\V{w}_t)$ \If(\tcp*[f]{Replan?}){$ (J_t - \overline{J}_t)/\overline{J}_t > J_{\textrm{thresh}}$}{ $\overline{\V{x}}_{t+1:T}, \overline{\V{u}}_{t+1:T-1}, \M{L}_{t+1:T-1} \leftarrow \textrm{\phantom{xxxxxx}Plan}(\V{x}_{t+1},\V{x}_g, T\!-\!t\!-\!1,\V{u}_t, \V{u}_{\textrm{guess}},\mathcal{P})$ } } } \caption{T-LQR algorithm with replanning}\label{TLQR_algo} \end{algorithm} \subsection{Multi-Agent versions} \begin{figure}[t] \centering \begin{subfigure}[b]{0.225\textwidth} \centering \includegraphics[width=\textwidth]{images_pdf/cost_comp_all5_3_agent_c7_p7_02.pdf} \caption{Full noise spectrum.} \label{3 agent cost full} \end{subfigure} \begin{subfigure}[b]{0.225\textwidth} \centering 
\includegraphics[width=\textwidth]{images_pdf/cost_comp_all5_3_agent_c7_p7_02_low.pdf} \caption{Enhanced detail: $0\leq\epsilon\leq0.4$.} \label{3 agent cost low} \end{subfigure} \caption{Cost evolution of the different algorithms for varying noise for 3 agents. Control Horizon ($H_c$) used for MPC-SH and MT-LQR2-SH was 7. $J_{\textrm{thresh}}$ = 2\% was the replanning threshold used.} \label{3_agent_cost} \end{figure}%
The MPC version of the multi-agent planning problem is reasonably straightforward, except that the complexity of the planning increases exponentially with the number of agents. Also, we note that the agents always have to communicate with each other in order to do the planning.\\ The Multi-agent Trajectory-optimised LQR (MT-LQR) version is also relatively straightforward in that the agents plan the nominal path jointly once, and then the agents each track their individual paths using their decoupled feedback controllers. There is no communication whatsoever between the agents during this operation.\\ The MT-LQR2 version is a little more subtle. The agents have to replan whenever the total cost deviates more than $J_{\textrm{thresh}}$ away from the nominal, i.e., the agents do not communicate until the need to replan arises. In general, the system would need to detect this in a distributed fashion, and trigger replanning. We postpone consideration of this aspect of the problem to a subsequent paper more directly focused on networking considerations. We will assume that there exists a (yet to be determined) distributed strategy that would perform the detection and replanning. \subsection{Analysis of the High Noise Regime} In this section, we perform a rudimentary analysis of the high noise regime.
The medium noise case is more difficult to analyze and is left for future work, along with a more sophisticated treatment of the high noise regime.\\ First, recall the Dynamic Programming (DP) equation for the backward pass to determine the optimal time-varying feedback policy: \begin{equation} J_t(\V{x}_t) = \min_{\V{u}_t}\left\{c(\V{x}_t,\V{u}_t) + \Exp{}{J_{t+1}(\V{x}_{t+1})}\right\}, \end{equation} where $J_t(\V{x}_t)$ denotes the cost-to-go at time $t$ given the state is $\V{x}_t$, with the terminal condition $J_T(\cdot) = c_T(\cdot)$ where $c_T$ is the terminal cost function, and the next state $\V{x}_{t+1} = f(\V{x}_t) + \M{B}_t(\V{u}_t + \epsilon \V{w}_t)$. Suppose now that the noise is so high that $\V{x}_{t+1} \approx \M{B}_t \epsilon \V{w}_t$, i.e., the dynamics are completely swamped by the noise.\\ Consider now the expectation $\Exp{}{c_T(\V{x}_{t+1})}$ given some control $\V{u}_t$ was taken at state $\V{x}_t$. Since $\V{x}_{t+1}$ is determined entirely by the noise, $\Exp{}{c_T(\V{x}_{t+1})} = \int c_T(\M{B}_t\epsilon \V{w}_t)\mathbf{p}(\V{w}_t) d\V{w}_t = \overline{c_T}$, where $\overline{c_T}$ is a constant regardless of the previous state and control pair $\V{x}_t, \V{u}_t$. This observation holds regardless of the function $c_T(\cdot)$ and the time $t$.\\ Next, consider the DP iteration at time $T-1$. Via the argument above, it follows that $\Exp{}{J_T(\V{x}_T)}=\Exp{}{c_T(\V{x}_T)} = \overline{c_T}$, regardless of the state control pair $\V{x}_{T-1},\V{u}_{T-1}$ at the $(T-1)^{th}$ step, and thus, the minimization reduces to $J_{T-1}(\V{x}_{T-1}) = \min_{\V{u}} \left\{c(\V{x}_{T-1},\V{u}) + \overline{c_T}\right\}$, and thus, the minimizer is just the greedy action $\V{u}^*_{T-1} = \argmin_{\V{u}} c(\V{x}_{T-1},\V{u})$ due to the constant bias $\overline{c_T}$.
The same argument holds for any $t$ since, although there might be a different $J_{t}(\cdot)$ at every time $t$, the minimizer is still the greedy action that minimizes $c(\V{x}_t,\V{u})$ as the cost-to-go from the next state is averaged out to simply some $\bar{J}_{t+1}$.\\ \begin{figure} \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{images_pdf/1_agent_mpc_test_cases.pdf} \label{test_cases_high_mpc} \caption{MPC} \end{subfigure}% % \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{images_pdf/1_agent_replan_test_cases1.pdf} \label{test_cases_high_tlqr_replan} \caption{T-LQR2} \end{subfigure} \newline \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{images_pdf/1_agent_MPCfast_test_cases.pdf} \label{test_cases_high_shmpc} \caption{MPC-SH} \end{subfigure}% % \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{images_pdf/1_agent_tlqr_test_cases.pdf} \label{test_cases_high_tlqr} \caption{T-LQR} \end{subfigure} \caption{Performance of the algorithms for varying levels of noise.} \label{fig:test_cases_high} \end{figure} \section{Simulation Results:} \begin{figure}[h] \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{images_pdf/costvsHcvsdelta_1agents_1surf1.pdf} \caption{} \label{costvsHc_1_1} \end{subfigure}% % \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{images_pdf/timevsHcvsdelta_1agents_1surf1.pdf} \caption{} \label{timevsHc_1_1} \end{subfigure}% \newline \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{images_pdf/costvsHcvsdelta_1agents_7surf1.pdf} \caption{} \label{costvsHc_1_7} \end{subfigure}% % \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{images_pdf/timevsHcvsdelta_1agents_7surf1.pdf} \caption{} \label{timevsHc_1_7} \end{subfigure}% \caption{Variation seen in cost incurred and computation 
time by changing the $J_{\textrm{thresh}}$ and control horizon ($H_c$) in T-LQR2 and MPC for a single agent case. (a) and (b) show the performance in terms of cost and computation time respectively for the same experiment at $\epsilon = 0.1$. Similarly, (c) and (d) show for $\epsilon=0.7$. Though MPC doesn't have a threshold for replanning, it is plotted at $J_{\textrm{thresh}} = 0\%$ since it replans at every time step.} \label{fig:1_agent_3d} \end{figure} \begin{figure}[h] \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{images_pdf/costvsHcvsdelta_3agents_1surf1.pdf} \caption{} \label{fig:costvsHc_3_1} \end{subfigure}% % \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{images_pdf/timevsHcvsdelta_3agents_1surf1.pdf} \caption{} \label{fig:timevsHc_3_1} \end{subfigure}% \newline \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{images_pdf/costvsHcvsdelta_3agents_7surf1.pdf} \caption{} \label{fig:costvsHc_3_7} \end{subfigure}% % \begin{subfigure}[b]{0.25\textwidth} \centering \includegraphics[width=\textwidth]{images_pdf/timevsHcvsdelta_3agents_7surf1.pdf} \caption{} \label{fig:timevsHc_3_7} \end{subfigure}% \caption{Variation seen in cost incurred and computational time by changing the $J_{\textrm{thresh}}$ and control horizon ($H_c$) in MT-LQR2 and MPC for 3 agents. (a) and (b) show for $\epsilon = 0.1$, (c) and (d) show for $\epsilon=0.7$.} \label{fig:3_agent_3d} \end{figure} We test the performance of the algorithms on a car-like robot model. Numerical optimization is carried out using the \texttt{CasADi} framework \cite{Andersson2018} with the \texttt{Ipopt} \cite{Ipopt} NLP solver in \texttt{Python}. To provide a good estimate of performance, the reported results were averaged over 100 simulations for every value of noise considered. Simulations were carried out in parallel across 100 cores in a cluster equipped with Intel Xeon 2.5GHz E5-2670 v2 10-core processors. 
All experiments were run with a time horizon $T = 35$. \subsection*{Car-like robot model:} The car-like robot considered in our work has the following motion model: \begin{align*} x_{t+1} &= x_t + v_{t}\cos(\theta_t)\Delta t, & \theta_{t+1} &= \theta_{t} + \frac{v_t}{L}\tan(\phi_t)\Delta t, \\ y_{t+1} &= y_t + v_{t}\sin(\theta_t)\Delta t, & \phi_{t+1} &= \phi_{t} + \omega_t \Delta t, \end{align*} where $\tr{(x_t, y_t, \theta_t, \phi_t)}$ denotes the robot's state vector, namely the robot's $x$ and $y$ position, orientation and steering angle at time $t$. Also, $\tr{(v_t, \omega_t)}$ is the control vector and denotes the robot's linear velocity and angular velocity (i.e., steering). Here $\Delta t$ is the discretization of the time step. The values of the parameters used in the simulation were $L = \SI{0.5}{\meter}$ and $\Delta t = \SI{0.1}{\second}$. \subsection*{Noise characterization:} We add zero-mean, independent and identically distributed (i.i.d.) random sequences ($\V{w}_t$) as actuator noise to test the performance of the control scheme. The standard deviation of the noise is $\epsilon$ times the maximum value of the corresponding control input, where $\epsilon$ is a scaling factor which is varied during testing, that is: $ \V{w}_t = \V{u}_{\textrm{max}} \bm{\nu}; \quad \bm{\nu} \sim \mathcal{N}(\V{0}, \M{I}) $ and the noise is added as $\epsilon \V{w}_t$. Note that the constraints on the control inputs are enforced before the noise is added, so the applied controls may exceed their bounds once the noise is added. \subsection{Single agent setting:} A car-like robot is considered and is tasked to move from a given initial pose to a goal pose. The environment of the robot is shown in Figure~\ref{fig:test_cases_high}. The experiment is done for all the control schemes discussed and their performance for different levels of noise is shown in Figure~\ref{1_agent_cost}. 
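A minimal sketch of the discretized motion model and the actuator-noise injection described above (the control bound \texttt{u\_max} is a placeholder value, not taken from the paper):

```python
import math
import random

L_WB, DT = 0.5, 0.1  # wheelbase L and time step from the paper

def step(state, u, eps=0.0, u_max=(1.0, 1.0), rng=None):
    """One Euler step of the car-like robot; u_max is a hypothetical control
    bound used only to scale the i.i.d. actuator noise as described above."""
    x, y, th, ph = state
    v, w = u
    if eps > 0.0:
        rng = rng or random.Random()
        v += eps * u_max[0] * rng.gauss(0.0, 1.0)   # noise added after the
        w += eps * u_max[1] * rng.gauss(0.0, 1.0)   # constraints are enforced
    return (x + v * math.cos(th) * DT,
            y + v * math.sin(th) * DT,
            th + v / L_WB * math.tan(ph) * DT,
            ph + w * DT)

# noise-free unit forward velocity moves the robot 0.1 m along x
print(step((0.0, 0.0, 0.0, 0.0), (1.0, 0.0)))  # (0.1, 0.0, 0.0, 0.0)
```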
\subsection{Multi-agent setting:} A labelled point-to-point transition problem with 3 car-like robots is considered where each agent is assigned a fixed destination which cannot be exchanged with another agent. The performance of the algorithms is shown in Figure~\ref{3_agent_cost}. The cost function involves the state and control costs for the entire system similar to the single agent case. One major addition to the cost function is the penalty function to avoid inter-agent collisions which is given by $ \Psi^{(i,j)} = \textrm{M}\exp\left(-(\Vert \V{p}_t^i - \V{p}_t^j\Vert_2^2 - r_{\textrm{thresh}}^2)\right) $ where $\textrm{M} > 0$ is a scaling factor, $\V{p}^i_t = (x^i_t, y^i_t)$ and $r_{\textrm{thresh}}$ is the desired minimum distance the agents should keep between themselves. \begin{figure}[h] \centering \begin{subfigure}[b]{0.225\textwidth} \centering \includegraphics[width=\textwidth]{images_pdf/1_agent_replan_wo_obs_02.pdf} \caption{A single agent.} \label{1_agent_replan} \end{subfigure} % \begin{subfigure}[b]{0.225\textwidth} \centering \includegraphics[width=\textwidth]{images_pdf/3_agent_replan_wo_obs_02.pdf} \caption{Three agents.} \label{3_agent_replan} \end{subfigure} \caption{Replanning operations vs. $\epsilon$ for $J_{\textrm{thresh}} = 2\%$} \label{fig:replan plot} \vspace*{-12pt} \end{figure}% \subsection{Interpretation of the results:} From Figures~\ref{1 agent cost low} and~\ref{3 agent cost low} it can be clearly seen that the decoupled feedback law (T-LQR and MT-LQR) shows near-optimal performance compared to MPC at low noise levels~($\epsilon \ll 1$). At medium noise levels, replanning (T-LQR2 and MT-LQR2) helps to constrain the cost from deviating away from the optimal. Figure~\ref{fig:replan plot} shows the significant difference in the number of replans, which determines the computational effort, taken by the decoupled approach compared to MPC. 
Note that the performance of the decoupled feedback law approaches MPC as we decrease the value of $J_{\textrm{thresh}}$. The significant difference in computational time between MPC and T-LQR2 can be seen from Figure~\ref{timevsHc_1_1} which shows results for $\epsilon=0.1$. For $H_c = 35$ (i.e. we plan for the entire time horizon), and $J_{\textrm{thresh}} = 0.2\%$ there is not much difference in the cost between them in~\ref{costvsHc_1_1} (both are in the dark green region), while there is a significant change in computation time as seen in~\ref{timevsHc_1_1}. The trend is similar in the multi-agent case as seen in Figures~\ref{fig:costvsHc_3_1} and ~\ref{fig:timevsHc_3_1}, which again shows that the decoupled feedback policy is able to give computationally efficient solutions which are near-optimal in low noise cases by avoiding frequent replanning. At high noise levels, Figures~\ref{1 agent cost full} and~\ref{3 agent cost full} show that T-LQR2 and MT-LQR2 are on a par with MPC. Additionally, we also claimed that planning too far ahead is not beneficial at high noise levels. It can be seen in Figure~\ref{fig:costvsHc_3_7} that the performance for MPC as well as MT-LQR2 is best at $H_c = 20$. Planning for a shorter horizon also eases the computational burden as seen in Figure~\ref{fig:timevsHc_3_7}. Though not very significant in the single agent case, we can still see that there is no difference in the performance as the horizon is decreased in Figure~\ref{costvsHc_1_7}. It can also be seen in Figure~\ref{3 agent cost full} where MPC-SH and MT-LQR2-SH, both with $H_c=7$, outperform MPC with $H_c=35$ at high noise levels, which again shows that the effective planning horizon decreases at high noise levels. \section{CONCLUSIONS} \label{section:conclusion} In this paper, we have considered a class of stochastic motion planning problems for robotic systems over a wide range of uncertainty conditions parameterized in terms of a noise parameter $\epsilon$. 
We have shown extensive empirical evidence that a simple generalization of a recently developed ``decoupling principle'' can lead to tractable planning without sacrificing performance for a wide range of noise levels. Future work will seek to treat the medium and high noise regimes considered here analytically, and to establish the near-optimality of the scheme. Further, we shall consider the question of ``when and how to replan'' in a distributed fashion in the multi-agent setting, as well as relax the requirement of perfect state observation. \printbibliography \end{document}
\section*{Introduction} \noindent The axiomatic presentation of mathematical theories allows the selection of different sets of axioms for their development. This choice depends on criteria of economy, elegance, simplicity or pedagogy. The definition of an abelian group was initially formulated for finite groups (with the axioms of closure, associativity, commutativity and existence of inverses) by Kronecker in 1870 and by Weber for infinite groups in 1893 \cite{waerden}. In 1878 Cayley introduces the notion of abstract group and in 1882 von Dyck presents the first explicit definition of this notion \cite{wussing}. In 1938 Tarski \cite{tarski} defines an abelian group $(G, +)$ as an associative and commutative quasigroup and characterizes it in terms of subtraction using only two axioms, one that indicates that the subtraction is an operation in $G$ and the other which is a property that includes three variables. In 1952 Higman and Neumann \cite{higman} give an axiomatization for groups with one axiom in terms of division, using three variables. In 1981 Neumann \cite{neumann81} proposes another single law in terms of multiplication and inversion, in equational form with four variables. In 1993 McCune \cite{mccune} presents cha\-rac\-terizations of abelian groups with one axiom that has three or five variables, using computational tools, but in terms of operations such as \{addition and inverse\}, \{double subtraction\}, \{double subtraction, identity\}, \{subtraction, identity\}, \{subtraction, inverse\}, \{double subtraction, inverse\}. In all cases in which the groups or abelian groups are characterized in equational form with one axiom, the axiom has a lengthy expression and the proofs are intricate. How\-ever, in 1996 McCune and Sands \cite{mcsands} proposed a single law but in implicative form, which is simpler than the equational form, not only in appearance but in the proofs too. 
In the present work we give some characterizations of abelian groups with two elementary axioms whose expressions display an elegant simplicity. The same applies to the proofs which can help to understand this basic algebraic structure. Algebraic structures can be classified, giving them special names, according to the operations they involve. We have limited ourselves to consider algebraic structures with one operation $(G,+)$ with $G$ a nonempty set, called \textit{magma} or \textit{groupoid}. The best known are \textit{Semigroup}, a groupoid with one associative operation (A); \textit{Monoid}, a semigroup that has neutral element (NE); \textit{Group}, a monoid such that all elements have inverse elements (IN); and \textit{Abelian group}, a commutative, (C), group. However, there are other properties (see \cite{ilse}) such as : for all $a$, $b$, $c$, $d \in G$ \begin{itemize} \item CAI. \textit{Cyclic associativity I}: $a + (b + c) = c + (a + b)$. \item CAII. \textit{Cyclic associativity II}: $a + (b + c) = (c + a) + b$. \item AGI. \textit{Abel-Grassmann I}: $a + (b + c) = c + (b + a)$. \item AGII. \textit{Abel-Grassmann II}: $a + (b + c) = (b + a) + c$. \item R. \textit{Reduced product property}: $(a + b) + c = a + (c + b)$. \item H. \textit{Hilbert property}\footnote{This property was presented as part of an axiomatization for real numbers in \cite[p. 51-52]{hilbert}.}: the equations $x + a = b$ and $a + y = b$ have a unique solution. \end{itemize} Algebraic structures whose operations satisfy some of these properties have also received special names such as \textit{Quasigroup}\footnote{This concept was introduced by B. A. Hausmann and O. Ore in 1937 \cite[p. 22]{ilse}. An equivalent definition appears in \cite[p. 50]{warner}.}, a groupoid that satisfy H and \textit{Loop}, a quasigroup having a neutral element. \begin{theorem}\label{teor1} If $(G, +)$ is a commutative semigroup then it satisfies the properties CAI, CAII, AGI, AGII and R. 
\end{theorem} \begin{theorem}\label{teor2} If $(G, +)$ is an abelian group then it satisfies H. \end{theorem} Although classical structures such as the abelian group and the commutative semigroup satisfy the properties mentioned, this does not mean that these properties are not independent. \section*{Examples} \begin{enumerate} \item The natural numbers with usual addition and multiplication are commutative semigroups, have neutral elements 0 and 1 respectively, but are not quasigroups. \item Integers, rational, real and complex numbers with the usual sum are commutative semigroups and loops. \item A lattice with the meet ($\land$) and join ($\lor$) operations is a commutative semigroup, but not a quasigroup. \item The integers with subtraction $x \circ y = x - y$ is a quasigroup and satisfies AGI but not AGII. It neither satisfies CAI nor CAII, R, A or C and also does not have a neutral element. \item The integers with reciprocal subtraction $x \bullet y = y - x$ is a quasigroup which satisfies AGII, but not AGI. It neither satisfies CAI nor CAII, R, A or C and does not have a neutral element. \item A set A with the second projection operation defined by $x \ \pi_2 \ y = y$, is a non-commutative semigroup, which satisfies AGII but not AGI. It neither satisfies CAI nor CAII or R. It has a neutral element and it is not a quasigroup. \item A set A with the first projection operation defined by $x \ \pi_1 \ y = x$, is a non-commutative semigroup, which satisfies R but not AGI. It neither satisfies AGII nor CAI or CAII. It does not have a neutral element and it is not a quasigroup. \item In the real interval $[0, 1]$ the operation $p * q = 1 - pq$ is commutative but not associative. It does not have a neutral element. It is neither AGI nor AGII, CAI, CAII or R and it is not a quasigroup. 
This operation is used in probability theory to determine the probability for two independent events to not occur simultaneously when the probability of occurrence of one is $p$ and of the other is $q$. \item In the ordered set $\{0, 1/2, 1\}$ the operation defined by table \ref{tabla1}, is commutative, but not associative. It has a neutral element 1. It is not AGI, neither AGII nor CAI, CAII or R and it is not a quasigroup. This operation is the logical equivalence which is used in a trivalent Heyting algebra and it was used by Reichenbach in a formulation of quantum mechanics \cite[366-367]{jammer}. \begin{table}[h] \begin{center} \begin{tabular}{c|ccc} $\leftrightarrow$&0&1/2&1\\ \hline 0&1&0&0 \\ 1/2&0&1&1/2 \\ 1&0&1/2&1\\ \end{tabular} \end{center} \caption{Trivalent logical equivalence} \label{tabla1} \end{table} \end{enumerate} \section{Substituting associativity and commutativity} A strategy to search for new characterizations of abelian groups is to exchange or replace some of the properties that define them by others such that these new properties, when mixed with the remaining ones, give us a new definition of the abelian group. To this end, we now establish some relations between structures that satisfy some of the properties mentioned above. We can find structures characterized by some of the unusual properties mentioned above which together with the property NE will yield the properties A and C. \begin{theorem}\label{teor3} If a groupoid $(G, +)$ has a neutral element, $e$, and satisfies the property AGII, then it is a commutative semigroup. \end{theorem} \begin{proof} We first show that $+$ is commutative. Applying the properties NE and AGII we obtain \[a + b = a + (b + e) = (b + a) + e = b + a\] From properties AGII and C we deduce that $+$ is associative: \[a + (b + c) = (b + a) + c = (a + b) + c \qedhere\] \end{proof} The proof of theorem \ref{teor4} below is analogous to the one of theorem \ref{teor3}. 
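For small finite groupoids, Theorem~\ref{teor3} can also be verified exhaustively; the following brute-force sketch (ours, not part of the paper) enumerates every $3$-element operation table and checks that those with a neutral element and AGII are commutative semigroups:

```python
from itertools import product

# Exhaustive check on all 3-element groupoids: every operation table with a
# neutral element that satisfies AGII is commutative and associative.
n = 3
rng3 = range(n)

def has_neutral(op):
    return any(all(op[e][a] == a == op[a][e] for a in rng3) for e in rng3)

def agii(op):  # a + (b + c) = (b + a) + c
    return all(op[a][op[b][c]] == op[op[b][a]][c]
               for a in rng3 for b in rng3 for c in rng3)

found = 0
for flat in product(rng3, repeat=n * n):
    op = [flat[i * n:(i + 1) * n] for i in rng3]
    if not (has_neutral(op) and agii(op)):
        continue
    found += 1
    assert all(op[a][b] == op[b][a] for a in rng3 for b in rng3)      # C
    assert all(op[op[a][b]][c] == op[a][op[b][c]]
               for a in rng3 for b in rng3 for c in rng3)             # A
print(found, "tables with NE and AGII; all are commutative semigroups")
```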
\begin{theorem}\label{teor4} If a groupoid $(G, +)$ has a neutral element and satisfies one of the pro\-perties CAI, CAII, AGI or R, then it is a commutative semigroup. \end{theorem} Note that from theorems \ref{teor3} and \ref{teor4} we can replace the associative and commutative properties by any of the properties CAI, CAII, AGI, AGII or R, in the de\-fi\-ni\-tion of an abelian group. This way we obtain another characterization of the structure under consideration, only with three axioms. \begin{theorem} The following conditions are equivalent: \begin{enumerate} \item $(G, +)$ is an abelian group. \item $(G, +)$ is a groupoid that satisfies the properties NE, IN and CAI. \item $(G, +)$ is a groupoid that satisfies the properties NE, IN and CAII. \item $(G, +)$ is a groupoid that satisfies the properties NE, IN and AGI. \item $(G, +)$ is a groupoid that satisfies the properties NE, IN and AGII. \item $(G, +)$ is a groupoid that satisfies the properties NE, IN and R. \end{enumerate} \end{theorem} It should be noted that the properties CAI, CAII, AGII and AGI have been used \cite[p. 10]{pad} for axiomatizing the lattice theory. \section{Substituting inverse elements and neutral element} From theorems \ref{teor1} and \ref{teor2} we deduce that an abelian group satisfies the properties CAI, CAII, AGI, AGII, R and H. The next theorem indicates how the property H may be used to characterize the abelian groups, but without forgetting the commutative and associative properties. \begin{theorem}\label{teor6} If $(G, +)$ is an associative and commutative quasigroup then it is an abelian group. \end{theorem} \begin{proof} Since $(G, +)$ is a quasigroup, for all $a \in G$, the equation $a + x = a$ has a unique solution, say $e_a$, i.e. $a + e_a = a$. By C, $a + e_a = e_a + a = a$. Now, let $b \in G$ then by A, $a + b = (a + e_a) + b = a + (e_a + b)$ and as the equation $a + y = d$ with $d = a + b$, has a unique solution, we conclude that $b = e_a + b$. 
Therefore, $e_a + b = b = e_b + b$ and again by uniqueness of the solution of the equation $y + b = b$, it follows that $e_a = e_b$. Hence, $e_a$ is the neutral element of $G$ since the above argument is valid for all $b \in G$. The existence of an inverse element for each element $a$ of $G$ is guaranteed by the exis\-tence of the solution of the equation $a + x = e$ with $e$ the neutral element, and pro\-perty C. \end{proof} From the arguments presented in the proof of theorem \ref{teor6} we can conclude: \begin{theorem}\label{CA} If $(G, +)$ is a quasigroup then it satisfies the property of being cancellative (CA). CA is defined as follows: for all $a, b, c \in G$, \center{if \ $a + b = a + c$ \ then \ $b = c$ \ \ and \ \ if \ $b + a = c + a$ \ then \ $b = c$} \end{theorem} Combining the results of theorems \ref{teor2} and \ref{teor6} we obtain other characterizations of abelian groups with three axioms. \begin{theorem} The following conditions are equivalent: \begin{enumerate} \item $(G, +)$ is an abelian group. \item $(G, +)$ is a groupoid that satisfies the properties H, A and C. \end{enumerate} \end{theorem} \section{Substituting all properties} Below we present results in which we cha\-rac\-te\-ri\-ze the structure of an abelian group without using the usual properties. We focus on replacing the property NE, a key pro\-per\-ty that has been used in previous results, without having to resort to the properties A and C. \begin{theorem}\label{CAI} If $(G, +)$ is a quasigroup that satisfies the property CAI, then it is a loop. \end{theorem} \begin{proof} For all $a \in G$, let \begin{equation} a + e_a = a \end{equation} with $e_a$ the unique solution of the equation $a + x = a$. Combining (1) with the pro\-per\-ty CAI we obtain $e_a + a = e_a + (a + e_a) = e_a + (e_a + a)$. From theorem \ref{CA} we get \begin{equation} e_a + a = a. 
\end{equation} Given $b \in G$, from (2) and CAI we have \[(e_a + b) + a = (e_a + b) + (e_a + a) = e_a + (a + (e_a + b)) = e_a + (b + (a + e_a))\] and by (1) and CAI we obtain \[e_a + (b + (a + e_a)) = e_a + (b + a) = b + (a + e_a) = b + a.\] Then $(e_a + b) + a = b + a$ and by theorem \ref{CA} we conclude that $e_a + b = b$. Hence, $e_a + b = e_b + b$ and again by theorem \ref{CA}, $e_a = e_b$. As this argument is valid for all $b \in G$ we arrive at the conclusion that $e_a$ is the neutral element of $G$ which proves the theorem. \end{proof} \begin{theorem}\label{CAII} If $(G, +)$ is a quasigroup that satisfies the property CAII, then it is a loop. \end{theorem} We shall give two proofs for this theorem: one, similar to the previous proof, sho\-wing directly that there is a neutral element and the other, proving that under these assumptions the property CAI holds and so the assertion follows from theorem \ref{CAI}. \begin{proof}[Proof 1] As for all $a \in G$ the equation $a + x = a$ has a unique solution, say $e_a$, i.e. \begin{equation} a + e_a = a \end{equation} Applying (3) and the property CAII we have \[e_a + a = e_a + (a + e_a) = (e_a + e_a) + a\] From theorem \ref{CA} we get \begin{equation} e_a + e_a = e_a \end{equation} From (4) and CAII it follows that \[a + e_a = a + (e_a + e_a) = (e_a + a) + e_a\] Again by theorem \ref{CA} we have $a = e_a + a$ for all $a \in G$. Now let $b \in G$, by (3) and CAII we obtain $b + a = b + (a + e_a) = (e_a + b) + a$. By theorem \ref{CA}, $b = e_a + b$. Thus, $e_a + b = e_b + b$ and so $e_a = e_b$. As this argument is valid for all $b \in G$ we again conclude that $e_a$ is the neutral element of $G$ and as a consequence $(G, +)$ is a loop. \end{proof} \begin{proof}[Proof 2] We first show that $+$ is associative. Let $a, b, c \in G$; as $G$ is a quasigroup there is $u \in G$ such that $b + u = c$. 
Hence, \begin{equation} a + (b + c) = a + (b + (b + u)) \end{equation} Applying the property CAII repeatedly we get \[a + (b + (b + u)) = ((b + u) + a) + b = (u + (a + b)) + b = (a + b) + (b + u)\] and replacing $b+u$ by $c$ we conclude that $a + (b + c) = (a + b) + c$. Now let us prove that $+$ satisfies CAI. Again applying the property CAII repeatedly to (5), we obtain \begin{align*} &a + (b + (b + u)) = a + ((u + b) + b) = (b + a) + (u + b) \\ &\hspace*{0.5cm}= (b + (b + a)) + u = ((a + b) + b) + u = b + (u + (a + b)) \end{align*} By the property A and replacing $b+u$ by $c$, we have $a + (b + c) = c + (a + b)$. Then from theorem \ref{CAI} it follows that $(G, +)$ is a loop. \end{proof} \begin{theorem}\label{AGII} If $(G, +)$ is a quasigroup that satisfies the property AGII, then it is a loop. \end{theorem} \begin{proof} For each $a \in G$ let $e_a$ be the unique solution of the equation $a + x = a$, i.e. \begin{equation} a + e_a = a \end{equation} By (6) and property AGII we have \[e_a + a = e_a + (a + e_a) = (a + e_a) + e_a = a + e_a = a\] Therefore, for all $a \in G$, it holds that \begin{equation} e_a + a = a = a + e_a \end{equation} Now let $b \in G$; then by (7) and the property AGII we obtain \[b + a = b + (e_a + a) = (e_a + b) + a\] Furthermore, by theorem \ref{CA} we conclude that $b = e_a + b$. Hence, $e_a + b = e_b + b$ and so $e_a = e_b$. Since this argument is valid for all $b \in G$, $e_a$ is the neutral element of $G$ which proves the theorem. \end{proof} \begin{theorem}\label{R} If $(G, +)$ is a quasigroup which satisfies the property R, then it is a loop. \end{theorem} \begin{proof} Since $G$ is a quasigroup, for each $a \in G$ there is $e_a \in G$ such that \begin{equation} a + e_a = a \end{equation} Therefore, given any $b \in G$, by (8) and the property R we get \[a + b = (a + e_a) + b = a + (b + e_a) \] By theorem \ref{CA} we obtain $b = b + e_a$. As a consequence $b + e_a = b + e_b$ and so $e_a = e_b$. 
This argument holds for all $b \in G$. We therefore conclude that there is a unique right neutral element, which we denote by $e$. On the other hand, for all $a \in G$ the equation $y + a = a$ has a unique solution, say $\hat{e}_a$, i.e. \begin{equation} \hat{e}_a + a = a \end{equation} As $e$ is a right neutral element we have $(e + a) + e = e + a$. Applying (9) and the pro\-per\-ty R, we obtain $e + a = e + (\hat{e}_a + a) = (e + a) + \hat{e}_a$. Then $(e + a) + e = (e + a) + \hat{e}_a$ and by theorem \ref{CA}, $e = \hat{e}_a$ for all $a \in G$. Thus, $e$ is also a left neutral element of $G$, which completes the proof. \end{proof} Finally, the results of theorems \ref{CAI}-\ref{R} together with the theorems \ref{teor1}-\ref{teor4} and \ref{teor6} can be condensed into the following final result. \begin{theorem} The following conditions are equivalent: \begin{enumerate} \item $(G, +)$ is an abelian group. \item $(G, +)$ is a groupoid that satisfies the properties H and CAI. \item $(G, +)$ is a groupoid that satisfies the properties H and CAII. \item $(G, +)$ is a groupoid that satisfies the properties H and AGII. \item $(G, +)$ is a groupoid that satisfies the properties H and R. \end{enumerate} \end{theorem} Examples 1 and 4 show the independence of each of the properties CAI, CAII, AGII and R with respect to property H, and examples 1 and 5 show the independence of AGI and H. One would think that with AGI we would have an analogous theorem to those presented in this section; however, this combination of properties does not even yield a loop. For example, the integers with subtraction satisfy H and AGI but do not form an abelian group, as they have no neutral element and hence no inverses.
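The closing counterexample can be made concrete in code: working with subtraction modulo a prime $m$ (a hypothetical finite stand-in for $\mathbb{Z}$, chosen only so the check terminates), one sees that H and AGI hold while AGII fails and no neutral element exists:

```python
# Subtraction modulo a prime m: satisfies H and AGI, yet AGII fails and
# there is no neutral element -- so H + AGI does not give an abelian group.
m = 11
dom = range(m)
op = lambda a, b: (a - b) % m

# H: x + a = b and a + y = b each have a unique solution
for a in (3, 7):
    for b in (0, 9):
        assert sum(op(x, a) == b for x in dom) == 1
        assert sum(op(a, y) == b for y in dom) == 1

agi = all(op(a, op(b, c)) == op(c, op(b, a))          # a+(b+c) = c+(b+a)
          for a in dom for b in dom for c in dom)
agii = all(op(a, op(b, c)) == op(op(b, a), c)         # a+(b+c) = (b+a)+c
           for a in dom for b in dom for c in dom)
has_ne = any(all(op(e, a) == a == op(a, e) for a in dom) for e in dom)
print(agi, agii, has_ne)  # True False False
```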
\section*{Supplementary Material} \subsection{Dynamical Two-point function} \noindent In this section we compute the dynamical two-point function defined as $F(x,t;x_0,0) \equiv \langle \phi(x,t)\phi(x_0,0) \rangle$ corresponding to a primary field $\phi$ of conformal dimension $h$. The time evolution of the primary is governed by the Floquet Hamiltonian $\mathcal{H}_\mathrm{F}(t)$ defined in the main text. We closely follow the strategy employed in \cite{Wen:2018agb} wherein the time evolution of the entanglement entropy for a system driven by $\mathcal{H}_\mathrm{F}(t)$ was computed. Within this setup, we work in imaginary time $\tau$, and introduce Euclidean coordinates $\omega=\tau+\mathrm{i} x$. Before getting to the computation for an $n$-cycle drive, we describe the 1-cycle drive as a warm-up. The two-point function is \begin{equation} F(x,\tau;x_0,0)=\langle\mathrm{e}^{\tau_1\mathcal{H}_{\text{SSD}}} \mathrm{e}^{\tau_0\mathcal{H}_{0}}\phi(\omega_1,\bar{\omega}_1)\mathrm{e}^{-\tau_0\mathcal{H}_{0}}\mathrm{e}^{-\tau_1\mathcal{H}_{\text{SSD}}}\phi(\omega_0,\bar{\omega}_0) \rangle, \end{equation} where $\omega_1=0+\mathrm{i} x$, $\omega_0=0+\mathrm{i} x_0$ and $\tau=\tau_0+\tau_1$. $\mathcal{H}_{\text{SSD}}$ and $\mathcal{H}_{0}$ are the SSD and uniform Hamiltonians described in the main text. Next, under the conformal mapping $z=\exp\left\{\frac{2\pi\omega}{L}\right\}$, the two-point function transforms as \begin{equation} F(x,\tau;x_0,0)=\left(\frac{2\pi}{L}\right)^{4h}\langle\mathrm{e}^{\tau_1\mathcal{H}_{\text{SSD}}} \mathrm{e}^{\tau_0\mathcal{H}_{0}}\phi(z_1,\bar{z}_1)\mathrm{e}^{-\tau_0\mathcal{H}_{0}}\mathrm{e}^{-\tau_1\mathcal{H}_{\text{SSD}}}\phi(z_0,\bar{z}_0)\rangle. 
\end{equation} To compute the time evolution with $\mathcal{H}_\text{SSD}$ in the complex plane, we introduce the so-called M\"obius Hamiltonian \cite{Okunishi:2016zat} \begin{equation} \mathcal{H}_{\text{M\"ob}(\theta)}=L_0-\frac{\tanh(2\theta)}{2}(L_1+L_{-1})+\overline{L}_0-\frac{\tanh(2\theta)}{2}(\overline{L}_1+\overline{L}_{-1}), \label{mobham} \end{equation} defined for $\theta\in\mathbb{R}^+$. Interestingly, there exists an $SL(2,\mathbb{R})$ transformation mapping the M\"obius Hamiltonian to a uniform Hamiltonian. Such a mapping is explicitly given by \begin{equation} \hat{z}=f(z)=\frac{-\cosh(\theta)z+\sinh(\theta)}{\sinh(\theta)z-\cosh(\theta)}. \label{mobgood} \end{equation} In the $\hat{z}$-coordinates, $\mathcal{H}_{\text{M\"ob} (\theta)}\propto \frac{2\pi}{L\cosh(2\theta)}(L_0+\overline{L}_0)$. Thus the time evolution with $\mathcal{H}_{\text{M\"ob}(\theta)}$ for a time $\tau$ in the $\hat{z}$-coordinates is a simple dilation by a factor $\lambda = \exp\left\{\frac{2\pi \tau}{L\cosh{2\theta}}\right\}$. Then going back to the original coordinates, the whole time evolution with $\mathcal{H}_{\text{M\"ob}(\theta)}$ amounts to a simple change of coordinates $z^{\text{new}}_{\theta}(z)=f^{-1}\left(\lambda f(z)\right)$ (in what follows we often leave the $z$ dependence of the conformal mappings implicit): \begin{equation} z^{\text{new}}_{\theta}(z)=\frac{\left[(1-\lambda)\cosh(2\theta)-(\lambda+1)\right]z+(\lambda-1)\sinh(2\theta)}{(1-\lambda)\sinh(2\theta)z+\left[(\lambda-1)\cosh(2\theta)-(\lambda+1)\right]}. \label{znew} \end{equation} The Hamiltonians $\mathcal{H}_0$ and $\mathcal{H}_{\text{SSD}}$ can be seen as two different limits of the interpolating Hamiltonian $\mathcal{H}_{\text{M\"ob}(\theta)}$. Indeed, $\mathcal{H}_0=\mathcal{H}_{\text{M\"ob}(0)}$ and $\mathcal{H}_{\text{SSD}}=\mathcal{H}_{\text{M\"ob}(\theta\rightarrow\infty)}$. 
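As a numerical sanity check (ours, not part of the derivation), one can verify that the closed form in \eqref{znew} agrees with $f^{-1}\left(\lambda f(z)\right)$ for the map $f$ of \eqref{mobgood}:

```python
import math
import random

def f(z, th):  # the SL(2,R) map of Eq. (mobgood)
    c, s = math.cosh(th), math.sinh(th)
    return (-c * z + s) / (s * z - c)

def f_inv(w, th):  # inverse Moebius map of f
    c, s = math.cosh(th), math.sinh(th)
    return (c * w + s) / (s * w + c)

def z_new(z, th, lam):  # closed form of Eq. (znew)
    c2, s2 = math.cosh(2 * th), math.sinh(2 * th)
    num = ((1 - lam) * c2 - (lam + 1)) * z + (lam - 1) * s2
    den = (1 - lam) * s2 * z + ((lam - 1) * c2 - (lam + 1))
    return num / den

random.seed(0)
for _ in range(200):
    th = random.uniform(0.1, 2.0)
    lam = random.uniform(0.5, 3.0)
    z = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    zl, zr = z_new(z, th, lam), f_inv(lam * f(z, th), th)
    assert abs(zl - zr) < 1e-8 * (1 + abs(zl))
print("Eq. (znew) agrees with f^{-1}(lambda f(z))")
```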
From this observation, it may be deduced that one can first evaluate $e^{\tau_0 \mathcal{H}_{0}}\phi(\omega,\bar{\omega})e^{-\tau_0 \mathcal{H}_{0}}$ by applying the method in the case $\theta=0$. \begin{equation} e^{\tau_0 \mathcal{H}_{0}}\phi(\omega_1,\bar{\omega}_1)e^{-\tau_0 \mathcal{H}_{0}}=\left(\frac{2\pi}{L}\right)^{2h}\left[\frac{\partial z^{\text{new}}_{\theta=0}}{\partial z}\bigg|_{z_1}\frac{\partial \bar{z}^{\text{new}}_{\theta=0}}{\partial \bar{z}}\bigg|_{\bar{z}_1}\right]^{h}\phi\left(z^{\text{new}}_{\theta=0}(z_1),\bar{z}^{\text{new}}_{\theta=0}(z_1)\right). \end{equation} By looking at the expression for $z^{\text{new}}_{\theta}(z)$ in equation \eqref{znew}, we get $z^{\text{new}}_{\theta=0}(z)=\lambda z$, which is a dilation in the $z$ plane, as expected for the uniform Hamiltonian $\mathcal{H}_0$. Next, we need to evaluate \begin{align} \mathrm{e}^{\tau_1 \mathcal{H}_{\text{SSD}} }(\mathrm{e}^{\tau_0 \mathcal{H}_{0}}\phi(z_1,\bar{z}_1)\mathrm{e}^{-\tau_0 \mathcal{H}_{0}})\mathrm{e}^{-\tau_1 \mathcal{H}_{\text{SSD}}}\propto \mathrm{e}^{\tau_1 \mathcal{H}_{\text{SSD}} }\phi(\lambda z_1,\lambda \bar{z}_1)\mathrm{e}^{-\tau_1 \mathcal{H}_{\text{SSD}}}, \end{align} which can be obtained by using the expression for $z^{\text{new}}_{\theta}$ in the limit $\theta\rightarrow\infty$. This just amounts to going to the coordinates $\tilde{z}_1$, defined as \begin{equation} \tilde{z}_1=\lim_{\theta\rightarrow \infty}z^{\text{new}}_{\theta}(\lambda z)= \frac{(1+\frac{\pi\tau_1}{L})e^{\frac{2\pi\tau_0}{L}}z-\frac{\pi\tau_1}{L}}{\frac{\pi \tau_1}{L}e^{\frac{2\pi\tau_0}{L}}z+(1-\frac{\pi\tau_1}{L})}. \label{1-cycle} \end{equation} Hence, $\tilde{z}_1$ is once again related to $z$ by a M\"obius transformation, as expected, because it is obtained via a composition of two (invertible) M\"obius transformations. 
Consequently the time evolution $\mathrm{e}^{\tau_1 \mathcal{H}_{\text{SSD}} }\mathrm{e}^{\tau_0 \mathcal{H}_{0}}\phi(z,\bar{z})\mathrm{e}^{-\tau_0 \mathcal{H}_{0}}\mathrm{e}^{-\tau_1 \mathcal{H}_{\text{SSD}}}$ for a 1-cycle drive of any primary field of a CFT can be reduced to a normalized M\"obius transformation \begin{equation} \tilde{z}_1=\frac{az+b}{cz+d}, \label{normalized} \end{equation} with \[ \begin{cases} a=(1+\frac{\pi\tau_1}{L})e^{\frac{\pi\tau_0}{L}}, \\ b=-\frac{\pi\tau_1}{L}e^{-\frac{\pi\tau_0}{L}}, \\ c=\frac{\pi \tau_1}{L}e^{\frac{\pi\tau_0}{L}},\\ d=(1-\frac{\pi\tau_1}{L})e^{-\frac{\pi\tau_0}{L}}. \end{cases} \] Explicitly, the two-point function at different times for a 1-cycle drive is \begin{equation} \langle\mathrm{e}^{\tau_1\mathcal{H}_{\text{SSD}}} \mathrm{e}^{\tau_0\mathcal{H}_{0}}\phi(\omega_1,\bar{\omega}_1)\mathrm{e}^{-\tau_0\mathcal{H}_{0}}\mathrm{e}^{-\tau_1\mathcal{H}_{\text{SSD}}}\phi(\omega_0,\bar{\omega}_0) \rangle=\left(\frac{2\pi}{L}\right)^{4h}\left[\frac{\partial \tilde{z}_{1}}{\partial z}\bigg|_{z_1}\frac{\partial \bar{\tilde{z}}_1}{\partial \bar{z}}\bigg|_{\bar{z}_1}\right]^{h}\langle\phi(\tilde{z}_{1},\bar{\tilde{z}}_{1})\phi(z_0,\bar{z}_0)\rangle. \label{uncycle} \end{equation} We have therefore learned that the time evolution of any primary field during one cycle of this Floquet drive between $\mathcal{H}_0$ and $\mathcal{H}_{\text{SSD}}$ only amounts to a conformal transformation, as seen in \eqref{uncycle}. The main task now is to find how to generalize this result to the full Floquet drive with $n$ cycles. Clearly, the $n$-cycle Floquet time evolution will just amount to a change of coordinates to $\tilde{z}_n$, defined as \begin{equation} \tilde{z}_n(z)=\underbrace{(\tilde{z}_1\circ...\circ \tilde{z}_1)}_{n\text{ times}}(z). 
\label{eq:composing_Mobius} \end{equation} This means that increasing the number of cycles only amounts to composing the 1-cycle transformation with itself.\\ The $n$-cycle M\"obius transformation can be computed by writing the 1-cycle M\"obius transformation in its so-called normal form. Introducing the two fixed-points $\gamma_1$, $\gamma_2$, and the multiplier $\eta$, \begin{equation} \begin{cases} \gamma_1=\frac{a-d-\sqrt{(a-d)^2+4bc}}{2c}, \\ \gamma_2=\frac{a-d+\sqrt{(a-d)^2+4bc}}{2c}, \\ \eta=\frac{(a+d)+\sqrt{(a-d)^2+4bc}}{a+d-\sqrt{(a-d)^2+4bc}}. \end{cases} \end{equation} The normal form of $\tilde{z}_1$ is then \begin{equation} \frac{ \tilde{z}_1-\gamma_1}{\tilde{z}_1-\gamma_2}=\eta\frac{z-\gamma_1}{z-\gamma_2}. \label{normalform} \end{equation} It can be shown that in normal form the $n$-cycle evolution simply amounts to \begin{equation} \frac{\tilde{z}_n-\gamma_1}{\tilde{z}_n-\gamma_2}=\eta^n\frac{z-\gamma_1}{z-\gamma_2}. \label{normalform2} \end{equation} Then all the stroboscopic time evolution is encoded in the M\"obius multiplier $\eta$. This defines different phases, classified by the trace squared of the $1$-cycle transformation \cite{Wen:2018agb}: \begin{align} \text{Tr}^2\begin{pmatrix} a&b \\ c&d \end{pmatrix}=4(1-\Delta). \end{align} Indeed if $\Delta>0$ the associated transformation is elliptic and $\eta$ is a phase: the system does not heat. If $\Delta<0$ the associated transformation is hyperbolic and $\eta$ is a positive number: the system heats. $\Delta=0$ corresponds to a parabolic M\"obius transformation, $\eta=1$ and the system is at the phase transition. After analytic continuation, $\Delta$ is written as \begin{equation} \Delta=\left[1-\left(\frac{\pi T_1}{L}\right)^2\right]\sin^2\left(\frac{\pi T_0}{L}\right)+\frac{\pi T_1}{L}\sin\left(\frac{2\pi T_0}{L}\right). 
\label{delta} \end{equation} The $n$-cycle M\"obius transformation can be written explicitly in terms of the parameters of the system as in equation \eqref{eq:tildez_app}, \begin{equation} \tilde{z}_n=\frac{\mathfrak{a} z+\mathfrak{b}}{\mathfrak{c} z +\mathfrak{d}}, \label{eq:tildez_app} \end{equation} with: \[ \begin{cases} \mathfrak{a}=\gamma_1-\eta^n\gamma_2, \\ \mathfrak{b}=(\eta^n-1)\gamma_1\gamma_2, \\ \mathfrak{c}=1-\eta^n,\\ \mathfrak{d}=\gamma_1\eta^n-\gamma_2. \end{cases} \] Then the stroboscopic time evolution $t=n(T_0+T_1)$ of any primary field $\phi$ can be computed by using this conformal transformation. We stress the fact that here the time evolution is stroboscopic in order to get an analytic handle on the long-time dynamics. However, by sacrificing some analytic succinctness we can actually access the full continuous time evolution. The two-point function at different times is directly obtained with equation \eqref{eq:two_poin}, \begin{equation}\label{eq:two_poin} \langle \phi(x,t) \phi(x_0,0) \rangle = \left(\frac{2\pi}{L}\right)^{4h}\left[\frac{\partial \tilde{z}_{n}}{\partial z}\bigg|_{z_1}\frac{\partial \bar{\tilde{z}}_n}{\partial \bar{z}}\bigg|_{\bar{z}_1}\right]^{h}\langle \phi(\tilde{z}_n, \bar{\tilde{z}}_n) \phi(\tilde z_0, \bar{\tilde z}_0) \rangle. \end{equation} The correlator $\langle \phi(\tilde{z}_n, \bar{\tilde{z}}_n) \phi(\tilde z_0, \bar{\tilde z}_0) \rangle$ can either be computed within the ground state of $\mathcal{H}_0$ with open boundary conditions $|G\rangle$, or the $SL(2,\mathbb{C})$ invariant vacuum $|0\rangle$ of the periodic chain. As $|0\rangle$ is an eigenstate of $\mathcal{H}_{\text{SSD}}$, the Floquet dynamics should be trivial when computing correlation functions at equal times, as the SSD time evolution is just a phase.
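The composition rule \eqref{eq:composing_Mobius} and the closed form \eqref{eq:tildez_app} can be cross-checked numerically (a sketch with illustrative Euclidean parameters): the $n$-fold iterate of the 1-cycle map must reproduce the map built from $\gamma_{1,2}$ and $\eta$.

```python
import cmath, math

L, tau0, tau1, n = 1.0, 0.08, 0.31, 7     # illustrative Euclidean parameters
p, e = math.pi*tau1/L, math.exp(math.pi*tau0/L)
a, b, c, d = (1 + p)*e, -p/e, p*e, (1 - p)/e   # 1-cycle coefficients, eq. (normalized)

s = cmath.sqrt((a - d)**2 + 4*b*c)
g1, g2 = (a - d - s)/(2*c), (a - d + s)/(2*c)  # fixed points
eta = (a + d + s)/(a + d - s)                  # multiplier

# n-fold composition of the 1-cycle map, eq. (eq:composing_Mobius)
z = 0.3 - 0.7j
w = z
for _ in range(n):
    w = (a*w + b)/(c*w + d)

# closed form, eq. (eq:tildez_app)
fa, fb = g1 - eta**n*g2, (eta**n - 1)*g1*g2
fc, fd = 1 - eta**n, g1*eta**n - g2
assert abs(w - (fa*z + fb)/(fc*z + fd)) < 1e-9
```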
However for dynamical two-point functions $\langle 0|e^{\mathrm{i} \mathcal{H}_{\text{SSD}}t}\phi(x,0)e^{-\mathrm{i} \mathcal{H}_{\text{SSD}}t}\phi(x_0,0)|0\rangle$ the result should not be trivial, as $|\Phi\rangle\equiv\phi(x_0,0)|0\rangle$ is not an eigenstate of $\mathcal{H}_{\text{SSD}}$ in general. Therefore this choice for the computation of $F(x,t;x_0,0)$ is legitimate. In the case of open boundary conditions, we need to use the mapping $z\rightarrow\sqrt{z}$ to map the complex plane with a slit to the upper-half plane, and then evaluate the two-point function in the upper-half plane~\cite{Calabrese:2007rg}. This introduces some complications regarding branch cuts of the square-root mapping. For simplicity, we choose the periodic case, where \begin{equation} \langle 0|\phi(\tilde{z}_n, \bar{\tilde{z}}_n) \phi(\tilde z_0, \bar{\tilde z}_0) |0\rangle \propto \frac{1}{(z_0-\tilde{z}_n)^{2h}}\frac{1}{(\bar{z}_0-\bar{\tilde{z}}_n)^{2h}}. \end{equation} This leads to the final formula for the two-point function at different times for $n$ cycles \begin{equation} \langle 0|\phi(x,t) \phi(x_0,0) |0\rangle = \left(\frac{2\pi}{L}\right)^{4h}\left[\frac{\partial \tilde{z}_{n}}{\partial z}\bigg|_{z_1}\frac{\partial \bar{\tilde{z}}_n}{\partial \bar{z}}\bigg|_{\bar{z}_1}\right]^{h}\frac{1}{(z_0-\tilde{z}_n)^{2h}}\frac{1}{(\bar{z}_0-\bar{\tilde{z}}_n)^{2h}}. \end{equation} It can further be shown that the derivative term simplifies to \begin{equation} \frac{\partial \tilde{z}_n}{\partial z}\bigg|_{z_1}\frac{\partial \bar{ \tilde{z}}_n}{\partial \bar{z}}\bigg|_{\bar{z}_1} = \frac{(\mathfrak{a}\mathfrak{d}-\mathfrak{b}\mathfrak{c})^2}{(\mathfrak{c}^2+\mathfrak{d}^2+2\mathfrak{c}\mathfrak{d}\cos\left(\frac{2\pi x}{L}\right))^2}. \label{derideri} \end{equation} In the heating phase, $\eta$ is a real positive number, such that $\eta^n$ tends either to $0$ or $\infty$ depending on the sign of $\eta-1$, corresponding to $\tilde{z}_n$ converging either to $\gamma_1$ or $\gamma_2$.
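The convergence of $\tilde z_n$ to an attracting fixed point in the heating phase is easy to observe numerically (a sketch; the drive parameters are illustrative, and the coefficients are the analytically continued ones, $\tau_{0,1}\to \mathrm{i}T_{0,1}$):

```python
import cmath, math

L, T0, T1 = 1.0, 0.5, 0.5                  # T1/L > 1/pi at T0/L = 1/2: heating phase
th, p = math.pi*T0/L, math.pi*T1/L
# 1-cycle coefficients after analytic continuation tau_{0,1} -> i T_{0,1}
a = (1 + 1j*p)*cmath.exp(1j*th)
b = -1j*p*cmath.exp(-1j*th)
c = 1j*p*cmath.exp(1j*th)
d = (1 - 1j*p)*cmath.exp(-1j*th)

s = cmath.sqrt((a - d)**2 + 4*b*c)
g1, g2 = (a - d - s)/(2*c), (a - d + s)/(2*c)   # fixed points
eta = (a + d + s)/(a + d - s)                   # multiplier

# z~_n converges to g1 if |eta| < 1 and to g2 if |eta| > 1
w = 0.3 + 0.4j
for _ in range(200):
    w = (a*w + b)/(c*w + d)
attractor = g1 if abs(eta) < 1 else g2
assert abs(w - attractor) < 1e-10
assert abs(eta.imag) < 1e-12 and eta.real > 0   # eta real positive in the heating phase
```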
Then $\langle \phi(\tilde{z}_n,\bar{\tilde{z}}_n)\phi(z_0,\bar{z}_0)\rangle$ tends to a constant, and the derivative term \eqref{derideri} is exponentially suppressed for every $x\notin\{x_c,L-x_c\}$, with $x_c$ defined by the fixed points: $\gamma_{1/2}=\mathrm{e}^{2\pi \mathrm{i} x_c/L}$, where $\gamma_{1/2}$ corresponds to $\gamma_{2}$ if $\tilde{z}_n$ converges to $\gamma_1$, and vice-versa. This can be seen explicitly in equation \eqref{limitlongtime}, \begin{equation} \frac{\partial \tilde{z}_n}{\partial z}\bigg|_{z_1}\frac{\partial \bar{\tilde{z}}_n}{\partial \bar{z}}\bigg|_{\bar{z}_1}= \frac{(\gamma_1-\gamma_2)^4}{\left(\eta^{-n}(1+\gamma_2^2-2\gamma_2\cos{\frac{2\pi x}{L}})+2((\gamma_1+\gamma_2)\cos{\frac{2\pi x}{L}}-1-\gamma_1\gamma_2)+\eta^{n}(1+\gamma_1^2-2\gamma_1\cos{\frac{2\pi x}{L}})\right)^2}. \label{limitlongtime} \end{equation} The expression \eqref{limitlongtime} has two poles, either at $x_c=\frac{L}{2\pi}\arccos\left(\frac{1+\gamma_2^2}{2\gamma_2}\right)$ and $L-x_c$ if $\lim_{n\rightarrow\infty}\eta^n\rightarrow0$, or at $x_c=\frac{L}{2\pi}\arccos\left(\frac{1+\gamma_1^2}{2\gamma_1}\right)$ and $L-x_c$ if $\lim_{n\rightarrow\infty}\eta^n\rightarrow\infty$. Therefore $F(x,t;x_0,0)$ remains finite even at very long times at these two points, for any choice of $x_0$. The interpretation is that even after a large number of driving cycles, the excitations will always arrive at one of these two points at stroboscopic times. Therefore these particular points, determined solely by the choice of $T_0/L$ and $T_1/L$, act as attractors for the excitations, which will be better understood within a stroboscopic black-hole picture in an effective space-time in the next section. In the non-heating phase, $\eta$ is a phase, and then after analytic continuation the periodicity reads $T_E=2\pi\frac{ (T_0+T_1)}{|\arg(\eta)|}$. The excitations then propagate periodically with period $T_E$.
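The phase classification by the sign of $\Delta$ can likewise be checked against the modulus of $\eta$ after analytic continuation (a numerical sketch with illustrative drive parameters):

```python
import cmath, math

def multiplier(T0, T1, L=1.0):
    """Multiplier eta of the analytically continued 1-cycle Mobius map."""
    th, p = math.pi*T0/L, math.pi*T1/L
    a = (1 + 1j*p)*cmath.exp(1j*th)
    b = -1j*p*cmath.exp(-1j*th)
    c = 1j*p*cmath.exp(1j*th)
    d = (1 - 1j*p)*cmath.exp(-1j*th)
    s = cmath.sqrt((a - d)**2 + 4*b*c)
    return (a + d + s)/(a + d - s)

def delta(T0, T1, L=1.0):
    """Phase indicator Delta of eq. (delta)."""
    th, p = math.pi*T0/L, math.pi*T1/L
    return (1 - p**2)*math.sin(th)**2 + p*math.sin(2*th)

# Delta > 0: elliptic, |eta| = 1, non-heating; Delta < 0: hyperbolic, eta real, heating
assert delta(0.5, 0.1) > 0 and abs(abs(multiplier(0.5, 0.1)) - 1) < 1e-12
assert delta(0.5, 0.5) < 0 and abs(multiplier(0.5, 0.5).imag) < 1e-12
```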
However if $T_E/(T_0+T_1) \notin \mathbb Q$, the system is pseudo-periodic, as the two-point function is only defined at stroboscopic times.\\ \subsection{Effective curved space-time} \noindent The two-point function at different times $F(x,t;x_0,0)$ enables us to get the light-cone propagation of the gapless excitations. For homogeneous Luttinger liquids, the excitations follow straight lines in space-time. However, for an inhomogeneous Luttinger liquid with spatial deformation $v(x)$, they follow curves, which are nothing more than light-like geodesics in an effective curved space-time specified by the metric $\mathrm{d} s^2=\mathrm{d} x^2+v(x)^2\mathrm{d} \tau^2$ \cite{Dubail_2017}. In the case of the sine-square deformation, the metric is $\mathrm{d} s^2=\mathrm{d} x^2+\sin^2\left(\frac{\pi x}{L}\right)\mathrm{d} \tau^2$. Thus, the null geodesics are simply given by the light-like condition $\mathrm{d} s^2=0$, giving the propagation of the excitations starting at $x_0$: $x_{\pm}(t)=\frac{L}{\pi}\cot^{-1}{\left(\pm \frac{2\pi}{L} t +\cot{\frac{\pi x_0}{L}}\right)}$. Therefore it is clear that the excitations never reach the boundaries of the system in this case, as their local group velocity goes to $0$ at the edges. \medskip \noindent We now derive the effective space-time metric for the Floquet drive defined at stroboscopic times. We are interested in finding some coordinates $\tilde{z}$ in which the effective metric describing the $n$-cycle Floquet drive is conformally flat, and then going back to the original coordinates $(x,\tau)$ to get the expression of the metric. Such coordinates are called isothermal coordinates and always exist in $(1+1)$-dimensional space-times. For the Floquet drive, they are given by the effective M\"obius transformation \eqref{eq:tildez_app}, so that the metric reads \begin{equation} \mathrm{d} s^2=\mathrm{d} \tilde{z}_n\mathrm{d} \bar{\tilde{z}}_n.
\end{equation} Introducing the real and imaginary parts of $\tilde{z}_n$ \[ \begin{cases} \tilde{u}_n(x,\tau)=\text{Re}(\tilde{z}_n)=\frac{\mathfrak{a}\mathfrak{c} + \mathfrak{b}\mathfrak{d} +(\mathfrak{a}\mathfrak{d}+\mathfrak{b}\mathfrak{c})\cos{\left(\frac{2\pi x}{L}\right)}}{\mathfrak{c}^2+\mathfrak{d}^2+2\mathfrak{c}\mathfrak{d}\cos{\left(\frac{2\pi x}{L}\right)}},\\ \tilde{v}_n(x,\tau)=\text{Im}(\tilde{z}_n)=\frac{(\mathfrak{a}\mathfrak{d}-\mathfrak{b}\mathfrak{c})\sin{\left(\frac{2\pi x}{L}\right)}}{\mathfrak{c}^2+\mathfrak{d}^2+2\mathfrak{c}\mathfrak{d}\cos{\left(\frac{2\pi x}{L}\right)}}. \end{cases} \] The effective metric reads $\mathrm{d} s^2=\mathrm{d} \tilde{u}_n^2+\mathrm{d} \tilde{v}_n^2$. It is now straightforward to apply the change to ($x$,$\tau$) coordinates. After some computation, and after analytic continuation, the metric takes the familiar form: \begin{equation*} \mathrm{d} s^2= \mathrm{e}^{2\sigma(x,\tau)}\left(\mathrm{d} x^2 - g(x)\mathrm{d} t^2 + 2 h(x) \mathrm{d} x \mathrm{d} t \right). \label{souri} \end{equation*} The values we find for $g(x)$ and $h(x)$ are then given by equations \eqref{efe} and \eqref{timerev}, \begin{equation} g(x)= \zeta^2\prod_{i=1}^2\left[1+\gamma_i^2-2\gamma_i\cos{\left(\frac{2\pi x}{L}\right)}\right], \label{efe} \end{equation} \begin{equation} h(x)= \mathrm{i}\zeta (\gamma_1\gamma_2 - 1) \sin\left(\frac{2\pi x}{L}\right), \label{timerev} \end{equation} where $\zeta= -\frac{L}{2\pi \mathrm{i}}\frac{1}{(T_0 + T_1)}\frac{\log( \eta)}{(\gamma_1 - \gamma_2)}$, and as before $\eta$ is the multiplier of the M\"obius transformation, which is a pure phase in the non-heating phase and a positive real number in the heating phase, and $\gamma_1$, $\gamma_2$ are the two fixed-points of the 1-cycle M\"obius transformation.
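With real (Euclidean) coefficients $\mathfrak{a},\mathfrak{b},\mathfrak{c},\mathfrak{d}$ and $z=\mathrm{e}^{2\pi\mathrm{i}x/L}$, the expressions for $\tilde u_n$ and $\tilde v_n$ are just the real and imaginary parts of $(\mathfrak{a}z+\mathfrak{b})/(\mathfrak{c}z+\mathfrak{d})$; a quick numerical sketch (illustrative parameters):

```python
import cmath, math

L, tau0, tau1, n = 1.0, 0.08, 0.31, 5   # illustrative Euclidean parameters
p, e = math.pi*tau1/L, math.exp(math.pi*tau0/L)
a, b, c, d = (1 + p)*e, -p/e, p*e, (1 - p)/e
s = math.sqrt((a - d)**2 + 4*b*c)       # real: the Euclidean 1-cycle map is hyperbolic
g1, g2 = (a - d - s)/(2*c), (a - d + s)/(2*c)
eta = (a + d + s)/(a + d - s)
fa, fb = g1 - eta**n*g2, (eta**n - 1)*g1*g2
fc, fd = 1 - eta**n, g1*eta**n - g2     # all real here

x = 0.23
z = cmath.exp(2j*math.pi*x/L)
zt = (fa*z + fb)/(fc*z + fd)
den = fc**2 + fd**2 + 2*fc*fd*math.cos(2*math.pi*x/L)   # |fc*z + fd|^2
u = (fa*fc + fb*fd + (fa*fd + fb*fc)*math.cos(2*math.pi*x/L))/den
v = (fa*fd - fb*fc)*math.sin(2*math.pi*x/L)/den
assert abs(zt.real - u) < 1e-9 and abs(zt.imag - v) < 1e-9
```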
After analytic continuation, both $g(x)$ and $h(x)$ are real-valued functions.\\ The Weyl prefactor $\mathrm{e}^{2\sigma(x,\tau)}$ is a positive number before analytic continuation, \begin{equation} \mathrm{e}^{2\sigma(x,\tau)}=\frac{4\pi^2}{L^2}\frac{\eta^{2n}(\gamma_1-\gamma_2)^4}{\left(1 + \eta^{2n} (1 + \gamma_1^2) + \gamma_2^2 - 2\eta^n (1 + \gamma_1 \gamma_2) - 2 (-1 + \eta^n) (\eta^n \gamma_1 - \gamma_2)\cos(\frac{2\pi x}{L})\right)^2}. \end{equation} Inverting the Weyl transformation, the metric is finally given by \begin{equation} \mathrm{d} s^2= \mathrm{d} x^2-g(x) \mathrm{d} t^2 +2h(x)\mathrm{d} x\mathrm{d} t. \label{eq:metricnontrv} \end{equation} The null geodesics of this $(1+1)$-dimensional space-time are uniquely determined by the condition $\mathrm{d} s^2=0$. Thus they are the solutions of the equation $ 2h(x(t))\dot{x}(t)+\dot{x}^2(t)-g(x(t))=0$: \begin{equation} \pm t(x)= \int_{x_0}^{x}dx'\frac{1}{\sqrt{h(x')^2+g(x')}\mp h(x')}. \end{equation} Then the local group velocity of the excitations is $v(x) = h(x) \mp \sqrt{h(x)^2+g(x)}$, where the sign corresponds to chiral and anti-chiral excitations. The expression \eqref{eq:metricnontrv} is not time-reversal invariant because of the off-diagonal term $h(x)$. Only if $\gamma_1\gamma_2=1$ does $h(x)$ vanish and the system become time-reversal invariant. This can be fulfilled by starting the drive in a symmetric way. For concreteness, we shift the origin of time by $\frac{T_0}{2}$ \begin{align} \mathcal{H}_\mathrm{F}(t)= \begin{cases} \mathcal{H}_0 & 0<t<\frac{T_0}{2},\\ \mathcal{H}_{\text{SSD}}&\frac{T_0}{2}<t<\frac{T_0}{2}+T_1,\\ \mathcal{H}_0&\frac{T_0}{2}+T_1<t<\frac{3T_0}{2}+T_1,\\ \text{etc.} \end{cases} \end{align} The associated 1-cycle M\"obius transformation is therefore given by the equation \begin{equation} \tilde{z}_1=\frac{\left(1+\frac{\pi \tau_1}{L}\right)\mathrm{e}^{\frac{\pi\tau_0}{L}}z-\frac{\pi\tau_1}{L}}{\frac{\pi\tau_1}{L}z+\left(1-\frac{\pi\tau_1}{L}\right)\mathrm{e}^{-\frac{\pi\tau_0}{L}}}.
\label{link} \end{equation} It is interesting to compare \eqref{link} and \eqref{normalized}. The coefficients $a$ and $d$ are the same as in the non-symmetric case, but not $b$ and $c$. Furthermore $bc$ is unchanged. Thus, looking at the definitions of the fixed points and the multiplier, this only redefines the denominators of $\gamma_1$ and $\gamma_2$, and keeps the multiplier $\eta$ invariant. It can then be shown that $\gamma_1\gamma_2=1$. Furthermore in the heating phase $|\gamma_1|=|\gamma_2|=1$, therefore in the time-reversal symmetric situation $\gamma_1=\gamma_2^*$, whereas in the non-heating phase, $\gamma_1$ and $\gamma_2$ are both real and inverse of each other. In this case, the metric simplifies to $\mathrm{d} s^2=\mathrm{d} x^2-g(x)\mathrm{d} t^2$. Applying the time-reversal condition, one finds that \begin{equation} v(x)=[g(x)]^{1/2}= \frac{1}{2\pi \mathrm{i}}\frac{L\log(\eta)}{(T_0+T_1)} \frac{\left(1+\gamma_1^2-2\gamma_1\cos{\frac{2\pi x}{L}}\right)}{\gamma_1^2-1}. \label{vitevite} \end{equation} This is the effective velocity of the excitations. In the heating phase their local group velocity goes to 0 at two points, which are found to be $x_c=\frac{L}{2\pi}\arccos\left(\frac{1+\gamma_1^2}{2\gamma_1}\right)$ and $L-x_c$. After analytic continuation, it can be shown that $x_c= \tfrac{L}{2\pi} \arccos(\cos \tfrac{\pi T_0}{L} + \tfrac{L}{\pi T_1} \sin \tfrac{\pi T_0}{L})$. Thus $x_c\in [0,\frac{L}{2}]$ in the heating phase, and $x_c$ is a complex number in the non-heating phase, therefore the velocity never goes to $0$. Thus, in the heating phase, the velocity of the excitations vanishes at these two points, meaning that their worldlines, following null geodesics of the metric, will tend to one of these two points. We rewrite the metric in the heating phase in terms of the singularity $x_c$.
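These statements about the symmetric drive can be verified directly (a numerical sketch; the heating-phase parameters are chosen for illustration):

```python
import cmath, math

L, T0, T1 = 1.0, 0.5, 0.5                 # heating-phase parameters, for illustration
th, p = math.pi*T0/L, math.pi*T1/L
# symmetric drive, eq. (link), after analytic continuation tau_{0,1} -> i T_{0,1}
a = (1 + 1j*p)*cmath.exp(1j*th)
b = -1j*p
c = 1j*p
d = (1 - 1j*p)*cmath.exp(-1j*th)
s = cmath.sqrt((a - d)**2 + 4*b*c)
g1, g2 = (a - d - s)/(2*c), (a - d + s)/(2*c)
eta = (a + d + s)/(a + d - s)

assert abs(g1*g2 - 1) < 1e-12             # time-reversal symmetry: gamma_1 gamma_2 = 1
assert abs(g1 - g2.conjugate()) < 1e-12   # heating phase: fixed points on the unit circle
assert abs(eta.imag) < 1e-12 and eta.real > 0   # multiplier real and positive (heating)
```

Note that $\gamma_1\gamma_2=-b/c=1$ holds exactly here, since the symmetric drive has $b=-c$ up to the sign convention above.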
We first notice that, since $\cos\left(\frac{2\pi x_c}{L}\right)=\frac{1}{2}\frac{\gamma_1^2+1}{\gamma_1}$, the effective deformation can be rewritten directly in terms of the singularity \begin{equation} v(x)= A \left(1-\frac{\cos\left(\frac{2\pi x}{L}\right)}{\cos\left(\frac{2\pi x_c}{L}\right)}\right), \end{equation} with $A=\frac{1+\gamma_1^2}{\gamma_1^2-1}\frac{1}{2\pi \mathrm{i}(T_0+T_1)}L\log(\eta)$. Using trigonometric formulae, this leads to the desired form of the velocity \begin{equation} v(x)= 2A \frac{\sin\left[\frac{\pi}{L}(x-x_c)\right]\sin\left[\frac{\pi}{L}(x+x_c)\right]}{\cos\left(\frac{2\pi x_c}{L}\right)}. \label{vitess} \end{equation} The effective metric can now be easily expressed in terms of the singularity $x_c$ \begin{equation} \mathrm{d} s^2=-4A^2\frac{\sin^2\left[\frac{\pi}{L}(x-x_c)\right]\sin^2\left[\frac{\pi}{L}(x+x_c)\right]}{\cos^2\left(\frac{2\pi x_c}{L}\right)}\mathrm{d} t^2+\mathrm{d} x^2. \label{basbas} \end{equation} This form is still hard to interpret in terms of a Schwarzschild metric. However, by expanding around one of the two singularities, i.e., around $x_c$ or $L-x_c$, and keeping only the lowest-order contribution, only that singularity matters and the metric becomes simpler. Therefore we expand the expression \eqref{basbas} around $x_c$. At leading order in $x-x_c$, we may simplify \begin{align} \sin^2\left[\frac{\pi}{L}(x-x_c)\right]\sin^2\left[\frac{\pi}{L}(x+x_c)\right]\approx \frac{\pi^2}{L^2}(x-x_c)^2\sin^2\left(\frac{2\pi x_c}{L}\right) + \mathcal{O}((x-x_c)^3). \end{align} The metric finally simplifies to \begin{equation} \mathrm{d} s^2= -4A^2\tan^2\left(\frac{2\pi x_c}{L}\right)\frac{\pi^2}{L^2}(x-x_c)^2 \mathrm{d} t^2+\mathrm{d} x^2. \label{rindler} \end{equation} This metric is known as the Rindler metric, which describes flat Minkowski space-time in the coordinates of a uniformly accelerated observer.
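The rewriting of $v(x)$ rests on the trigonometric identity $2\sin[\frac{\pi}{L}(x-x_c)]\sin[\frac{\pi}{L}(x+x_c)]=\cos(\frac{2\pi x_c}{L})-\cos(\frac{2\pi x}{L})$; a minimal numerical check (all values illustrative):

```python
import math

L, A, xc = 1.0, 0.7, 0.2    # illustrative values
for x in (0.05, 0.33, 0.61, 0.9):
    lhs = A*(1 - math.cos(2*math.pi*x/L)/math.cos(2*math.pi*xc/L))
    rhs = (2*A*math.sin(math.pi*(x - xc)/L)*math.sin(math.pi*(x + xc)/L)
           /math.cos(2*math.pi*xc/L))
    assert abs(lhs - rhs) < 1e-12
# v vanishes exactly at the two singularities x_c and L - x_c
assert abs(A*(1 - math.cos(2*math.pi*(L - xc)/L)/math.cos(2*math.pi*xc/L))) < 1e-12
```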
Writing $C^2=4A^2\tan^2\left(\frac{2\pi x_c}{L}\right)\frac{\pi^2}{L^2}$, the metric reads $\mathrm{d} s^2= -C^2 (x-x_c)^2\mathrm{d} t^2+\mathrm{d} x^2$. One can now introduce the following coordinate change: $\frac{C}{2}\left(x-x_c\right)^2=\left(y-x_c\right)$. In the new coordinates, the metric reads \begin{equation} \mathrm{d} s^2=-2C\left(y-x_c\right)\mathrm{d} t^2+\frac{1}{2C}\frac{1}{\left(y-x_c\right)}\mathrm{d} y^2. \end{equation} This is the well-known Schwarzschild metric in $(1+1)$ dimensions. Thus expanding our space-time effective metric around one of the two singularities gives (at leading order) a black hole metric. One can do the same for the second singularity by expanding the metric around $L-x_c$, with the similar result: $\mathrm{d} s^2=-2C\left[y-(L-x_c)\right]\mathrm{d} t^2+\frac{1}{2C}\frac{1}{\left[y-(L-x_c)\right]}\mathrm{d} y^2$. \medskip \noindent The Hawking temperature $\Theta_H$ can be directly read off from the expression of the metric as $\Theta_H=\frac{C}{2\pi}$. We can finally use the formula $\tan{\left(\frac{2\pi x_c}{L}\right)}=\frac{1}{\mathrm{i}}\frac{\gamma_1^2-1}{\gamma_1^2+1}$ to conclude that the Hawking temperature is given by $\Theta_H=\frac{|\log{(\eta)}|}{2\pi(T_0+T_1)}$.\\ \subsection{Effective Hamiltonian} \noindent Using the effective metric in the time-reversal symmetric case, we deduce that the stroboscopic effective Hamiltonian is $\mathcal{H}_{\text{eff}}=\int_0^L v(x)T_{00}(x)\mathrm{d} x$.
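A sketch tying these pieces together numerically (heating-phase parameters for illustration, symmetric drive): the horizon position extracted from the phase of $\gamma_1$ matches the closed formula for $x_c$, the stated $\tan$ identity holds, and $\Theta_H$ follows from $\eta$.

```python
import cmath, math

L, T0, T1 = 1.0, 0.5, 0.5                 # heating-phase parameters, for illustration
th, p = math.pi*T0/L, math.pi*T1/L
# symmetric 1-cycle coefficients, eq. (link), continued to real time
a = (1 + 1j*p)*cmath.exp(1j*th)
b = -1j*p
c = 1j*p
d = (1 - 1j*p)*cmath.exp(-1j*th)
s = cmath.sqrt((a - d)**2 + 4*b*c)
g1 = (a - d - s)/(2*c)
eta = (a + d + s)/(a + d - s)

# horizon position from the fixed point: gamma_1 = exp(2 pi i x_c / L)
xc = L/(2*math.pi)*cmath.phase(g1)
xc_closed = L/(2*math.pi)*math.acos(math.cos(th) + L/(math.pi*T1)*math.sin(th))
assert abs(xc - xc_closed) < 1e-12

# tan(2 pi x_c / L) = (1/i) (gamma_1^2 - 1)/(gamma_1^2 + 1)
assert abs((g1**2 - 1)/(1j*(g1**2 + 1)) - math.tan(2*math.pi*xc/L)) < 1e-12
theta_H = abs(cmath.log(eta))/(2*math.pi*(T0 + T1))   # Hawking temperature
assert theta_H > 0
```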
Using the Fourier decomposition of $v(x)$, given by equation \eqref{vitevite}, and using the definition of the Virasoro generators $L_n=\frac{1}{2\pi \mathrm{i}}\oint \mathrm{d} z z^{n+1}T(z)$, we can conclude that the stroboscopic Hamiltonian is \begin{equation} \mathcal{H}_{\text{eff}}=\alpha\left[L_0-\frac{\beta}{2}(L_1+L_{-1})+\overline{L}_0-\frac{\beta}{2}(\overline{L}_1+\overline{L}_{-1})\right], \end{equation} where $\alpha=\frac{1+\gamma_1^2}{\gamma_1^2-1}\frac{L}{2\pi \mathrm{i}(T_0+T_1)}\log{(\eta)}$, $\beta=\frac{2\gamma_1}{1+\gamma_1^2}$, which are real numbers. It can further be shown using the expressions of the fixed-points that $\beta^{-1}=\cos(\frac{\pi T_0}{L})+\frac{L}{\pi T_1}\sin(\frac{\pi T_0}{L})$. Therefore in the heating phase $|\beta|>1$, and in the non-heating phase $|\beta|<1$. \medskip \noindent In the case $|\beta|<1$, the effective Hamiltonian is simply the M\"obius Hamiltonian \eqref{mobham}. This observation is consistent with the fact that $F(x,t;x_0,0)$ is periodic in the non-heating phase. Indeed, the propagation of the excitations after a quench with the M\"obius Hamiltonian is also periodic, with period $T=L\cosh(2\theta)$. Therefore the effective stroboscopic Hamiltonian in the non-heating phase in the time-reversal symmetric case is just an interpolating Hamiltonian between $\mathcal{H}_0$ and $\mathcal{H}_{\text{SSD}}$. $\mathcal{H}_{\text{eff}}$ can be further written as the convex combination of the two original Hamiltonians \begin{equation} \mathcal{H}_{\text{eff}}=\alpha\left[(1-\beta)\mathcal{H}_0+\beta\mathcal{H}_{\text{SSD}}\right]. \end{equation} Therefore, for $0<\beta<1$, the effective Hamiltonian interpolates between the uniform and the SSD Hamiltonian, as we already understood through the comparison with the M\"obius Hamiltonian.
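The identity for $\beta^{-1}$ can be checked numerically for a few drive parameters (a sketch; the parameter values are illustrative):

```python
import cmath, math

def beta_from_fixed_point(T0, T1, L=1.0):
    """beta = 2*gamma_1/(1 + gamma_1^2) from the symmetric 1-cycle map."""
    th, p = math.pi*T0/L, math.pi*T1/L
    a = (1 + 1j*p)*cmath.exp(1j*th)
    b = -1j*p
    c = 1j*p
    d = (1 - 1j*p)*cmath.exp(-1j*th)
    s = cmath.sqrt((a - d)**2 + 4*b*c)
    g1 = (a - d - s)/(2*c)
    return 2*g1/(1 + g1**2)

def beta_closed(T0, T1, L=1.0):
    """Closed form: 1/beta = cos(pi T0/L) + (L/(pi T1)) sin(pi T0/L)."""
    return 1/(math.cos(math.pi*T0/L) + L/(math.pi*T1)*math.sin(math.pi*T0/L))

for T0, T1 in [(0.5, 0.5), (0.5, 0.1), (0.3, 0.2)]:
    assert abs(beta_from_fixed_point(T0, T1) - beta_closed(T0, T1)) < 1e-10

assert abs(beta_closed(0.5, 0.5)) > 1   # heating phase
assert abs(beta_closed(0.5, 0.1)) < 1   # non-heating phase
```

The branch of the square root does not matter here: $\gamma_2=1/\gamma_1$, and $(1+\gamma^2)/(2\gamma)$ takes the same value on both fixed points.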
\medskip \noindent For $|\beta|>1$, the effective Hamiltonian cannot be understood as an interpolation between the two original Hamiltonians, giving rise to the physics of heating. The effective Hamiltonian in the heating phase can be rewritten, using \eqref{vitess}, as: \begin{equation} \mathcal{H}_{\text{eff}}=2L\Theta_H\int_0^L\mathrm{d} x\frac{ \sin\left(\frac{\pi}{L}(x-x_c)\right)\sin\left(\frac{\pi}{L}(x+x_c)\right)}{\sin\left(\frac{2\pi x_c}{L}\right)}T_{00}(x). \end{equation} This form of the effective Hamiltonian is reminiscent of the entanglement Hamiltonian $K_A$ for a system of finite size $[0,L]$, introduced in \cite{Cardy:2016fqc}, with subsystem $A=(x_c,L-x_c)$. However here the Hamiltonian density is integrated over the whole chain. For such an entanglement Hamiltonian, an effective local temperature can be defined, diverging at $x_c$ and $L-x_c$. This is an indication that energy should be absorbed exponentially at these two points. \medskip \noindent Finally, looking at the effective deformation $v(x)$ is insightful: in the non-heating phase, we notice that $v(x)$ has no roots. Therefore, it can be thought of as a shifted sine-square, deforming the homogeneous system only smoothly. By going through the phase diagram following the line $T_0=T_1$, this shifted sine-square will simply tend to the usual sine-square. At the phase transition, the effective Hamiltonian is similar to the sine-square deformation, and has roots at $x_c=0$ and $x_c=L$. Then, in the heating phase, the two roots approach the center of the system symmetrically, giving rise to a cosine-square deformation at the second phase transition, with a single root at $x_c=\frac{L}{2}$. Therefore, the effective Hamiltonian in the heating and non-heating phases only interpolates between the sine-square and the cosine-square deformations.
\subsection{Energy density} \noindent The different phases arising within the Floquet CFT were understood in \cite{Wen:2018agb} by computing the entanglement entropy, which grows linearly in the heating phase and oscillates with period $T_E$ in the non-heating phase. We would like to characterize these phases more precisely by computing the evolution of the energy density $\mathcal{E}(x,t)$ in the system. In particular, we expect to observe an exponential increase of energy in the heating phase precisely at the locations of the two singularities $x_c$ and $L-x_c$, whereas the rest of the system should not absorb energy, in agreement with our stroboscopic black-hole picture. The energy density $\mathcal{E}(x,t)$ under the Floquet drive is defined by \begin{equation} \mathcal{E}(x,t)=\langle \psi(t)|T_{00}(x)|\psi(t)\rangle. \label{njr} \end{equation} As usual $T_{00}$ is the energy density of the uniform CFT. $|\psi(t)\rangle=U(t)|G\rangle$ is the time-evolved ground state of the uniform Hamiltonian $\mathcal{H}_0$, with open boundary conditions. We chose open boundary conditions in this case, as in the periodic case $\mathcal{E}(x,t)=0$. In Euclidean coordinates, $T_{00}=T(\omega)+\bar{T}(\bar{\omega})$. The strategy is the same as for the two-point function at different times: the first step is to map the strip to the complex plane with a slit, using the exponential mapping \begin{equation} \langle G|\mathrm{e}^{\tau \mathcal{H}_{\text{SSD}}}T(\omega)\mathrm{e}^{-\tau \mathcal{H}_{\text{SSD}}}|G\rangle=\left(\frac{\partial z}{\partial \omega}\right)^2 \langle G|\mathrm{e}^{\tau \mathcal{H}_{\text{SSD}}}T(z)\mathrm{e}^{-\tau \mathcal{H}_{\text{SSD}}}|G\rangle-\frac{\pi^2 c}{6L^2}. \end{equation} Then, the usual procedure consists in mapping the complex plane to itself in the M\"obius $\tilde{z}_n$ coordinates, applying the time evolution and transforming back to the $z$ coordinates.
The extra terms coming from the Schwarzian derivative vanish because of $SL(2,\mathbb{R})$ invariance \begin{equation} \langle G|\mathrm{e}^{\tau \mathcal{H}_{\text{SSD}}}T(\omega)\mathrm{e}^{-\tau \mathcal{H}_{\text{SSD}}}|G\rangle=\left(\frac{\partial z}{\partial \omega}\right)^2 \left(\frac{\partial \tilde{z}_n}{\partial z}\right)^2 \langle G|T(\tilde{z}_n)|G\rangle-\frac{\pi^2 c}{6L^2}. \end{equation} The final step is to evaluate the correlation function $\langle G|T(\tilde{z}_n)|G\rangle$ in a boundary CFT, defined on the complex plane with a slit on the real positive axis. This can be done using a square-root mapping $\sqrt{z}$ to the upper-half plane $\mathbb{H}$. This gives a non-trivial Schwarzian derivative term given by $\{\sqrt{z},z\}=\dfrac{3}{8z^2}$. The upper-half plane can be mapped to the unit disc with a M\"obius transformation, therefore, due to rotational symmetry, $\langle G|T(\sqrt{\tilde{z}_n})|G\rangle_{\mathbb{H}}=0$ \cite{Calabrese:2009qy}. Finally, only the Schwarzian derivative term of the square-root mapping contributes to the energy density, which before analytic continuation reads \begin{equation} \mathcal{E}(x,t)= \frac{c}{32}\left[\left(\frac{\partial z}{\partial \omega}\right)^2 \left(\frac{\partial \tilde{z}_n}{\partial z}\right)^2\frac{1}{\tilde{z}_n^2}+\left(\frac{\partial \bar{z}}{\partial \bar{\omega}}\right)^2 \left(\frac{\partial \bar{\tilde{z}}_n}{\partial \bar{z}}\right)^2\frac{1}{\bar{\tilde{z}}_n^2}\right]-\frac{\pi^2 c}{3L^2}, \end{equation} for stroboscopic times $t=n(T_0+T_1)$. In the heating phase, as $\mathcal{E}(x_c,t)\sim \eta^{-2n}$ at long times because of the derivative term $\frac{\partial \bar{\tilde{z}}_n}{\partial \bar{z}}$, we conclude that $\mathcal E(x_c,t) \sim \mathrm{e}^{4\pi\Theta_\mathrm{H} t}$, such that the Hawking temperature is really the heating rate.
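The identification of $\Theta_H$ with the heating rate is just the rewriting $\eta^{-2n}=\mathrm{e}^{4\pi\Theta_H t}$ at $t=n(T_0+T_1)$ when $\eta<1$; a one-line numerical check (the value of $\eta$ is illustrative):

```python
import math

eta, T0, T1, n = 0.1289, 0.5, 0.5, 4     # illustrative heating-phase values
theta_H = abs(math.log(eta))/(2*math.pi*(T0 + T1))   # Hawking temperature
t = n*(T0 + T1)
# eta^(-2n) = exp(4 pi Theta_H t): the energy density grows at rate 4 pi Theta_H
assert abs(eta**(-2*n) - math.exp(4*math.pi*theta_H*t)) < 1e-6*eta**(-2*n)
```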
Similarly for the other singularity $L-x_c$, where the energy is also growing exponentially because of the other derivative term $\frac{\partial \tilde{z}_n}{\partial z}$. Therefore the energy density grows exponentially in the heating phase only at the positions of the two black holes, as expected. In the non-heating phase, the energy density oscillates in time with period $T_E=2\pi\frac{ (T_0+T_1)}{|\arg(\eta)|}$. \end{widetext} \end{document} \section*{Supplementary Material} \subsection{Dynamical Two-point function} \noindent In this section we compute the dynamical two-point function defined as $F(x,t;x_0,0) \equiv \langle \phi(x,t)\phi(x_0,0) \rangle$ corresponding to a primary field $\phi$ of conformal dimension $h$. The time evolution of the primary is governed by the Floquet Hamiltonian $\mathcal{H}_\mathrm{F}(t)$ defined in the main text. We closely follow the strategy employed in \cite{Wen:2018agb} wherein the time evolution of the entanglement entropy for a system driven by $\mathcal{H}_\mathrm{F}(t)$ was computed. Within this setup, we work in imaginary time $\tau$, and introduce Euclidean coordinates $\omega=\tau+\mathrm{i} x$. Before getting to the computation for an $n$-cycle drive, we describe the 1-cycle drive as a warm-up. The two- point function is \begin{equation} F(x,\tau;x_0,0)=\langle\mathrm{e}^{\tau_1\mathcal{H}_{\text{SSD}}} \mathrm{e}^{\tau_0\mathcal{H}_{0}}\phi(\omega_1,\bar{\omega}_1)\mathrm{e}^{-\tau_0\mathcal{H}_{0}}\mathrm{e}^{-\tau_1\mathcal{H}_{\text{SSD}}}\phi(\omega_0,\bar{\omega}_0) \rangle, \end{equation} where $\omega_1=0+\mathrm{i} x$, $\omega_0=0+\mathrm{i} x_0$ and $\tau=\tau_0+\tau_1$. $\mathcal{H}_{\text{SSD}}$ and $\mathcal{H}_{0}$ are the SSD and uniform Hamiltonian described in the main text. 
Next, under the conformal mapping $z=\exp\left\{\frac{2\pi\omega}{L}\right\}$, the two-point function transforms as \begin{equation} F(x,\tau;x_0,0)=\left(\frac{2\pi}{L}\right)^{4h}\langle\mathrm{e}^{\tau_1\mathcal{H}_{\text{SSD}}} \mathrm{e}^{\tau_0\mathcal{H}_{0}}\phi(z_1,\bar{z}_1)\mathrm{e}^{-\tau_0\mathcal{H}_{0}}\mathrm{e}^{-\tau_1\mathcal{H}_{\text{SSD}}}\phi(z_0,\bar{z}_0)\rangle. \end{equation} To compute the time evolution with $\mathcal{H}_\text{SSD}$ in the complex plane, we introduce the so-called M\"obius Hamiltonian \cite{Okunishi:2016zat} \begin{equation} \mathcal{H}_{\text{M\"ob}(\theta)}=L_0-\frac{\tanh(2\theta)}{2}(L_1+L_{-1})+\overline{L}_0-\frac{\tanh(2\theta)}{2}(\overline{L}_1+\overline{L}_{-1}), \label{mobham} \end{equation} defined for $\theta\in\mathbb{R}^+$. Interestingly, there exists an $SL(2,\mathbb{R})$ transformation mapping the M\"obius Hamiltonian to a uniform Hamiltonian. Such mapping is explicitly given by \begin{equation} \hat{z}=f(z)=\frac{-\cosh(\theta)z+\sinh(\theta)}{\sinh(\theta)z-\cosh(\theta)}. \label{mobgood} \end{equation} In the $\hat{z}$-coordinates, $\mathcal{H}_{\text{M\"ob} (\theta)}\propto \frac{2\pi}{L\cosh(2\theta)}(L_0+\overline{L}_0)$. Thus the time evolution with $\mathcal{H}_{\text{M\"ob}(\theta)}$ for a time $\tau$ in the $\hat{z}$-coordinates is a simple dilation by a factor $\lambda = \exp\left\{\frac{2\pi \tau}{L\cosh{2\theta}}\right\}$. Then going back to the original coordinates, the whole time evolution with $\mathcal{H}_{\text{M\"ob}(\theta)}$ amounts to a simple change of coordinates $z^{\text{new}}_{\theta}(z)=f^{-1}\left(\lambda f(z)\right)$ (in the following of the text we often leave the $z$ dependence of the conformal mappings implicit): \begin{equation} z^{\text{new}}_{\theta}(z)=\frac{\left[(1-\lambda)\cosh(2\theta)-(\lambda+1)\right]z+(\lambda-1)\sinh(2\theta)}{(1-\lambda)\sinh(2\theta)z+\left[(\lambda-1)\cosh(2\theta)-(\lambda+1)\right]}. 
\label{znew} \end{equation} The Hamiltonian $\mathcal{H}_0$ and $\mathcal{H}_{\text{SSD}}$ can be seen as two different limits of the interpolating Hamiltomian $\mathcal{H}_{\text{M\"ob}(\theta)}$. Indeed, $\mathcal{H}_0=\mathcal{H}_{\text{M\"ob}(0)}$ and $\mathcal{H}_{\text{SSD}}=\mathcal{H}_{\text{M\"ob}(\theta\rightarrow\infty)}$. From this observation, it may be deduced that one can first evaluate $e^{\tau_0 \mathcal{H}_{0}}\phi(\omega,\bar{\omega})e^{-\tau_0 \mathcal{H}_{0}}$ by applying the method in the case $\theta=0$. \begin{equation} e^{\tau_0 \mathcal{H}_{0}}\phi(\omega_1,\bar{\omega}_1)e^{-\tau_0 \mathcal{H}_{0}}=\left(\frac{2\pi}{L}\right)^{2h}\left[\frac{\partial z^{\text{new}}_{\theta=0}}{\partial z}\bigg|_{z_1}\frac{\partial \bar{z}^{\text{new}}_{\theta=0}}{\partial \bar{z}}\bigg|_{\bar{z}_1}\right]^{h}\phi\left(z^{\text{new}}_{\theta=0}(z_1),\bar{z}^{\text{new}}_{\theta=0}(z_1)\right). \end{equation} By looking at the expression for $z^{\text{new}}_{\theta}(z)$ in equation \eqref{znew}, we get $z^{\text{new}}_{\theta=0}(z)=\lambda z$, which is a dilatation in the $z$ plane, as expected for the uniform Hamiltonian $\mathcal{H}_0$. Next, we need to evaluate \begin{align} \mathrm{e}^{\tau_1 \mathcal{H}_{\text{SSD}} }(\mathrm{e}^{\tau_0 \mathcal{H}_{0}}\phi(z_1,\bar{z}_1)\mathrm{e}^{-\tau_0 \mathcal{H}_{0}})\mathrm{e}^{-\tau_1 \mathcal{H}_{\text{SSD}}}\propto \mathrm{e}^{\tau_1 \mathcal{H}_{\text{SSD}} }\phi(\lambda z_1,\lambda \bar{z}_1)\mathrm{e}^{-\tau_1 \mathcal{H}_{\text{SSD}}}, \end{align} which can be obtained by using expression of $z^{\text{new}}_{\theta}$ in the limit $\theta\rightarrow\infty$. This just amounts to going to the coordinates $\tilde{z}_1$, defined as \begin{equation} \tilde{z}_1=\lim_{\theta\rightarrow \infty}z^{\text{new}}_{\theta}(\lambda z)= \frac{(1+\frac{\pi\tau_1}{L})e^{\frac{2\pi\tau_0}{L}}z-\frac{\pi\tau_1}{L}}{\frac{\pi \tau_1}{L}e^{\frac{2\pi\tau_0}{L}}z+(1-\frac{\pi\tau_1}{L})}. 
\label{1-cycle} \end{equation} Hence, $\tilde{z}_1$ is once again related to $z$ by a M\"obius transformation, as expected because it is the obtained via a composition of two (invertible) M\"obius transformations. Consequently the time evolution $\mathrm{e}^{\tau_1 \mathcal{H}_{\text{SSD}} }\mathrm{e}^{\tau_0 \mathcal{H}_{0}}\phi(z,\bar{z})\mathrm{e}^{-\tau_0 \mathcal{H}_{0}}\mathrm{e}^{-\tau_1 \mathcal{H}_{\text{SSD}}}$ for a 1-cycle drive of any primary field of a CFT can be reduced to a normalized M\"obius transformation \begin{equation} \tilde{z}_1=\frac{az+b}{cz+d}, \label{normalized} \end{equation} with \[ \begin{cases} a=(1+\frac{\pi\tau_1}{L})e^{\frac{\pi\tau_0}{L}}, \\ b=-\frac{\pi\tau_1}{L}e^{-\frac{\pi\tau_0}{L}}, \\ c=\frac{\pi \tau_1}{L}e^{\frac{\pi\tau_0}{L}},\\ d=(1-\frac{\pi\tau_1}{L})e^{-\frac{\pi\tau_0}{L}}. \end{cases} \] Explicitly the two-point function at different times for a 1-cycle drive is \begin{equation} \langle\mathrm{e}^{\tau_1\mathcal{H}_{\text{SSD}}} \mathrm{e}^{\tau_0\mathcal{H}_{0}}\phi(\omega_1,\bar{\omega}_1)\mathrm{e}^{-\tau_0\mathcal{H}_{0}}\mathrm{e}^{-\tau_1\mathcal{H}_{\text{SSD}}}\phi(\omega_0,\bar{\omega}_0) \rangle=\left(\frac{2\pi}{L}\right)^{4h}\left[\frac{\partial \tilde{z}_{1}}{\partial z}\bigg|_{z_1}\frac{\partial \bar{\tilde{z}}_1}{\partial \bar{z}}\bigg|_{\bar{z}_1}\right]^{h}\langle\phi(\tilde{z}_{1},\tilde{z}_{1})\phi(z_0,\bar{z}_0)\rangle. \label{uncycle} \end{equation} Therefore we learnt that the time evolution of any primary field during a one cycle of this Floquet drive between $\mathcal{H}_0$ and $\mathcal{H}_{\text{SSD}}$ only amounts to a conformal transformation, as seen in \eqref{uncycle}. The main task now is to find how to generalize this result to the full Floquet drive with $n$ cycles. 
Clearly, the $n$-cycle Floquet time evolution will just amount to a change of coordinates to $\tilde{z}_n$, defined as \begin{equation} \tilde{z}_n(z)=\underbrace{(\tilde{z}_1\circ...\circ \tilde{z}_1)}_{n\text{ times}}(z). \label{eq:composing_Mobius} \end{equation} This means that increasing the number of cycles only amounts to composing the 1-cycle transformation with itself.\\ The $n$-cycle M\"obius transformation can be computed by writing the 1-cycle M\"obius transformation in its so-called normal form. We introduce the two fixed points $\gamma_1$, $\gamma_2$, and the multiplier $\eta$: \begin{equation} \begin{cases} \gamma_1=\frac{a-d-\sqrt{(a-d)^2+4bc}}{2c}, \\ \gamma_2=\frac{a-d+\sqrt{(a-d)^2+4bc}}{2c}, \\ \eta=\frac{(a+d)+\sqrt{(a-d)^2+4bc}}{a+d-\sqrt{(a-d)^2+4bc}}. \end{cases} \end{equation} The normal form of $\tilde{z}_1$ is then \begin{equation} \frac{ \tilde{z}_1-\gamma_1}{\tilde{z}_1-\gamma_2}=\eta\frac{z-\gamma_1}{z-\gamma_2}. \label{normalform} \end{equation} It can be shown that in normal form the $n$-cycle evolution simply amounts to \begin{equation} \frac{\tilde{z}_n-\gamma_1}{\tilde{z}_n-\gamma_2}=\eta^n\frac{z-\gamma_1}{z-\gamma_2}. \label{normalform2} \end{equation} All of the stroboscopic time evolution is then encoded in the M\"obius multiplier $\eta$. This defines different phases, classified by the trace squared of the $1$-cycle transformation \cite{Wen:2018agb}: \begin{align} \text{Tr}^2\begin{pmatrix} a&b \\ c&d \end{pmatrix}=4(1-\Delta). \end{align} Indeed, if $\Delta>0$, the associated transformation is elliptic and $\eta$ is a pure phase: the system does not heat. If $\Delta<0$, the associated transformation is hyperbolic and $\eta$ is a positive real number: the system heats. $\Delta=0$ corresponds to a parabolic M\"obius transformation, for which $\eta=1$ and the system is at the phase transition.
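The normal-form statement can also be checked numerically: composing the 1-cycle map $n$ times by brute force agrees with the multiplier relation \eqref{normalform2}. A minimal sketch (ours; arbitrary Euclidean parameters):

```python
import cmath

L, tau0, tau1 = 1.0, 0.11, 0.23
r = cmath.pi * tau1 / L
a = (1 + r) * cmath.exp(cmath.pi * tau0 / L)
b = -r * cmath.exp(-cmath.pi * tau0 / L)
c = r * cmath.exp(cmath.pi * tau0 / L)
d = (1 - r) * cmath.exp(-cmath.pi * tau0 / L)

sq = cmath.sqrt((a - d) ** 2 + 4 * b * c)
g1 = (a - d - sq) / (2 * c)           # fixed point gamma_1
g2 = (a - d + sq) / (2 * c)           # fixed point gamma_2
eta = (a + d + sq) / (a + d - sq)     # multiplier

n, z = 7, 0.3 + 0.2j
zn = z
for _ in range(n):
    zn = (a * zn + b) / (c * zn + d)  # brute-force n-fold composition

lhs = (zn - g1) / (zn - g2)           # normal form of the n-cycle map
rhs = eta ** n * (z - g1) / (z - g2)
```

The two sides agree to machine precision, confirming that the multiplier of the $n$-cycle map is simply $\eta^n$.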
After analytic continuation, $\Delta$ is written as \begin{equation} \Delta=\left[1-\left(\frac{\pi T_1}{L}\right)^2\right]\sin^2\left(\frac{\pi T_0}{L}\right)+\frac{\pi T_1}{L}\sin\left(\frac{2\pi T_0}{L}\right). \label{delta} \end{equation} The $n$-cycle M\"obius transformation can be written explicitly in terms of the parameters of the system as in equation \eqref{eq:tildez_app}, \begin{equation} \tilde{z}_n=\frac{\mathfrak{a} z+\mathfrak{b}}{\mathfrak{c} z +\mathfrak{d}}, \label{eq:tildez_app} \end{equation} with: \[ \begin{cases} \mathfrak{a}=\gamma_1-\eta^n\gamma_2, \\ \mathfrak{b}=(\eta^n-1)\gamma_1\gamma_2, \\ \mathfrak{c}=1-\eta^n,\\ \mathfrak{d}=\gamma_1\eta^n-\gamma_2. \end{cases} \] Then the stroboscopic time evolution $t=n(T_0+T_1)$ of any primary field $\phi$ can be computed by using this conformal transformation. We stress the fact that here the time evolution is stroboscopic in order to get an analytic handle on the long-time dynamics. However, by sacrificing some analytic succinctness, we can actually access the full continuous time evolution. The two-point function at different times is directly obtained with equation \eqref{eq:two_poin}, \begin{equation}\label{eq:two_poin} \langle \phi(x,t) \phi(x_0,0) \rangle = \left(\frac{2\pi}{L}\right)^{4h}\left[\frac{\partial \tilde{z}_{n}}{\partial z}\bigg|_{z_1}\frac{\partial \bar{\tilde{z}}_n}{\partial \bar{z}}\bigg|_{\bar{z}_1}\right]^{h}\langle \phi(\tilde{z}_n, \bar{\tilde{z}}_n) \phi(\tilde z_0, \bar{\tilde z}_0) \rangle. \end{equation} The correlator $\langle \phi(\tilde{z}_n, \bar{\tilde{z}}_n) \phi(\tilde z_0, \bar{\tilde z}_0) \rangle$ can either be computed within the ground state of $\mathcal{H}_0$ with open boundary conditions $|G\rangle$, or the $SL(2,\mathbb{C})$ invariant vacuum $|0\rangle$ of the periodic chain.
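The consistency of \eqref{delta} with the trace classification can be verified by evaluating $\mathrm{Tr}^2 = (a+d)^2$ after the continuation $\tau_{0,1} \to \mathrm{i} T_{0,1}$ (our sign convention for the continuation) and comparing with $4(1-\Delta)$:

```python
import cmath, math

L, T0, T1 = 1.0, 0.15, 0.32           # arbitrary Lorentzian driving periods
tau0, tau1 = 1j * T0, 1j * T1         # analytic continuation tau -> i T
r = cmath.pi * tau1 / L
a = (1 + r) * cmath.exp(cmath.pi * tau0 / L)
d = (1 - r) * cmath.exp(-cmath.pi * tau0 / L)
trace_sq = (a + d) ** 2               # trace squared of the 1-cycle map

x0, x1 = math.pi * T0 / L, math.pi * T1 / L
Delta = (1 - x1 ** 2) * math.sin(x0) ** 2 + x1 * math.sin(2 * x0)
```

The trace is real after continuation, and $(a+d)^2 = 4(1-\Delta)$ holds to machine precision; indeed $1-\Delta = \left(\cos\frac{\pi T_0}{L} - \frac{\pi T_1}{L}\sin\frac{\pi T_0}{L}\right)^2$.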
As $|0\rangle$ is an eigenstate of $\mathcal{H}_{\text{SSD}}$, the Floquet dynamics should be trivial when computing correlation functions at equal times, as the SSD time evolution is just a phase. However for dynamical two-point functions $\langle 0|e^{\mathrm{i} \mathcal{H}_{\text{SSD}}t}\phi(x,0)e^{-\mathrm{i} \mathcal{H}_{\text{SSD}}t}\phi(x_0,0)|0\rangle$ the result should not be trivial as $|\Phi\rangle\equiv\phi(x_0,0)|0\rangle$ is not an eigenstate of $\mathcal{H}_{\text{SSD}}$ in general. Therefore this choice for the computation of $F(x,t;x_0,0)$ is legitimate. In the case of open boundary conditions, we need to use the mapping $z\rightarrow\sqrt{z}$ to map the complex plane with a slit to the upper-half plane, and then evaluate the two point function in the upper-half plane~\cite{Calabrese:2007rg}. This introduces some complications regarding branch cuts of the square root mapping. For simplicity, we choose the periodic case, where \begin{equation} \langle 0|\phi(\tilde{z}_n, \bar{\tilde{z}}_n) \phi(\tilde z_0, \bar{\tilde z}_0) |0\rangle \propto \frac{1}{(z_0-\tilde{z}_n)^{2h}}\frac{1}{(\bar{z}_0-\bar{\tilde{z}}_n)^{2h}}. \end{equation} This leads to the final formula for the two-point function at different times for $n$-cycles \begin{equation} \langle 0|\phi(x,t) \phi(x_0,0) |0\rangle = \left(\frac{2\pi}{L}\right)^{4h}\left[\frac{\partial \tilde{z}_{n}}{\partial z}\bigg|_{z_1}\frac{\partial \bar{\tilde{z}}_n}{\partial \bar{z}}\bigg|_{\bar{z}_1}\right]^{h}\frac{1}{(z_0-\tilde{z}_n)^{2h}}\frac{1}{(\bar{z}_0-\bar{\tilde{z}}_n)^{2h}}. \end{equation} It can further be shown that the derivative term simplifies to \begin{equation} \frac{\partial \tilde{z}_n}{\partial z}\bigg|_{z_1}\frac{\partial \bar{ \tilde{z}}_n}{\partial \bar{z}}\bigg|_{\bar{z}_1} = \frac{(\mathfrak{a}\mathfrak{d}-\mathfrak{b}\mathfrak{c})^2}{(\mathfrak{c}^2+\mathfrak{d}^2+2\mathfrak{c}\mathfrak{d}\cos\left(\frac{2\pi x}{L}\right))^2}. 
\label{derideri} \end{equation} In the heating phase, $\eta$ is a real positive number, such that $\eta^n$ tends either to $0$ or $\infty$ depending on the sign of $\eta-1$, corresponding to $\tilde{z}_n$ converging either to $\gamma_1$ or $\gamma_2$. Then $\langle \phi(\tilde{z}_n,\bar{\tilde{z}}_n)\phi(z_0,\bar{z}_0)\rangle$ tends to a constant, and the derivative term \eqref{derideri} is exponentially suppressed for every $x\notin\{x_c,L-x_c\}$, with $x_c$ defined by the fixed points: $\gamma_{1/2}=e^{2\pi i x_c/L}$, where $\gamma_{1/2}$ corresponds to $\gamma_{2}$ if $\tilde{z}_n$ converges to $\gamma_1$, and vice-versa. This can be seen explicitly in equation \eqref{limitlongtime}, \begin{equation} \frac{\partial \tilde{z}_n}{\partial z}\bigg|_{z_1}\frac{\partial \bar{\tilde{z}}_n}{\partial \bar{z}}\bigg|_{\bar{z}_1}= \frac{(\gamma_1-\gamma_2)^4}{\left(\eta^{-n}(1+\gamma_2^2-2\gamma_2\cos{\frac{2\pi x}{L}})+2((\gamma_1+\gamma_2)\cos{\frac{2\pi x}{L}}-1-\gamma_1\gamma_2)+\eta^{n}(1+\gamma_1^2-2\gamma_1\cos{\frac{2\pi x}{L}})\right)^2}. \label{limitlongtime} \end{equation} The expression \eqref{limitlongtime} has two poles, either at $x_c=\frac{L}{2\pi}\arccos\left(\frac{1+\gamma_2^2}{2\gamma_2}\right)$ and $L-x_c$ if $\lim_{n\rightarrow\infty}\eta^n\rightarrow0$, or at $x_c=\frac{L}{2\pi}\arccos\left(\frac{1+\gamma_1^2}{2\gamma_1}\right)$ and $L-x_c$ if $\lim_{n\rightarrow\infty}\eta^n\rightarrow\infty$. Therefore $F(x,t;x_0,0)$ remains finite even at very long times at these two points, for any choice of $x_0$. The interpretation is that even after a very large number of driving cycles, the excitations will always arrive at one of these two points at stroboscopic times. Therefore these particular points, determined solely by the choice of $T_0/L$ and $T_1/L$, act as attractors for the excitations; this will be better understood within a stroboscopic black-hole picture in an effective space-time in the next section.
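The attractor behaviour is easy to illustrate numerically: in the heating phase, the iterated 1-cycle map converges to the attracting fixed point regardless of the starting point. A sketch (ours; parameters chosen so that $\Delta < 0$, and the continuation convention $\tau \to \mathrm{i}T$ is assumed):

```python
import cmath

L, T0, T1 = 1.0, 0.8, 0.5            # inside the heating phase (Delta < 0)
r = 1j * cmath.pi * T1 / L           # tau_1 -> i T_1
e0 = cmath.exp(1j * cmath.pi * T0 / L)
a, b = (1 + r) * e0, -r / e0
c, d = r * e0, (1 - r) / e0

sq = cmath.sqrt((a - d) ** 2 + 4 * b * c)
g1 = (a - d - sq) / (2 * c)
g2 = (a - d + sq) / (2 * c)
eta = (a + d + sq) / (a + d - sq)    # real and positive in the heating phase

z = 0.37 + 0.41j                     # arbitrary starting point
for _ in range(200):
    z = (a * z + b) / (c * z + d)    # 200 driving cycles

attractor = g1 if abs(eta) < 1 else g2
```

After a couple of hundred cycles the iterate sits on the attracting fixed point to machine precision, in line with the normal-form argument above.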
In the non-heating phase, $\eta$ is a phase, and then after analytic continuation the periodicity reads $T_E=2\pi\frac{ (T_0+T_1)}{|\arg(\eta)|}$. The excitations then propagate periodically with period $T_E$. However, if $T_E/(T_0+T_1) \notin \mathbb Q$, the system is pseudo-periodic, as the two-point function is only defined at stroboscopic times.\\ \subsection{Effective curved space-time} \noindent The two-point function at different times $F(x,t;x_0,0)$ enables us to get the light-cone propagation of the gapless excitations. For homogeneous Luttinger liquids, the excitations follow straight lines in space-time. However, for an inhomogeneous Luttinger liquid with spatial deformation $v(x)$, they follow curves, which are nothing but light-like geodesics in an effective curved space-time specified by the metric $\mathrm{d} s^2=\mathrm{d} x^2+v(x)^2\mathrm{d} \tau^2$ \cite{Dubail_2017}. In the case of the sine-square deformation, the metric is $\mathrm{d} s^2=\mathrm{d} x^2+\sin^2\left(\frac{\pi x}{L}\right)\mathrm{d} \tau^2$. Thus, the null geodesics are simply given by the light-like condition $\mathrm{d} s^2=0$, giving the propagation of the excitations starting at $x_0$: $x_{\pm}(t)=\frac{L}{\pi}\cot^{-1}{\left(\pm \frac{2\pi}{L} t +\cot{\frac{\pi x_0}{L}}\right)}$. Therefore it is clear that the excitations never reach the boundaries of the system in this case, as their local group velocity goes to $0$ at the edges. \medskip \noindent We now derive the effective space-time metric for the Floquet drive defined at stroboscopic times. We are interested in finding some coordinates $\tilde{z}$ in which the effective metric describing the $n$-cycle Floquet drive is conformally flat, and then going back to the original coordinates $(x,\tau)$ to get the expression of the metric. Such coordinates are called isothermal coordinates and always exist in $(1+1)$-dimensional space-times.
For the Floquet drive, they are given by the effective M\"obius transformation \eqref{eq:tildez_app}, so that the metric reads \begin{equation} \mathrm{d} s^2=\mathrm{d} \tilde{z}_n\mathrm{d} \bar{\tilde{z}}_n. \end{equation} Introducing the real and imaginary parts of $\tilde{z}_n$ \[ \begin{cases} \tilde{u}_n(x,\tau)=\text{Re}(\tilde{z}_n)=\frac{\mathfrak{a}\mathfrak{c} + \mathfrak{b}\mathfrak{d} +(\mathfrak{a}\mathfrak{d}+\mathfrak{b}\mathfrak{c})\cos{\left(\frac{2\pi x}{L}\right)}}{\mathfrak{c}^2+\mathfrak{d}^2+2\mathfrak{c}\mathfrak{d}\cos{\left(\frac{2\pi x}{L}\right)}},\\ \tilde{v}_n(x,\tau)=\text{Im}(\tilde{z}_n)=\frac{(\mathfrak{a}\mathfrak{d}-\mathfrak{b}\mathfrak{c})\sin{\left(\frac{2\pi x}{L}\right)}}{\mathfrak{c}^2+\mathfrak{d}^2+2\mathfrak{c}\mathfrak{d}\cos{\left(\frac{2\pi x}{L}\right)}}. \end{cases} \] The effective metric reads $\mathrm{d} s^2=\mathrm{d} \tilde{u}_n^2+\mathrm{d} \tilde{v}_n^2$. It is now straightforward to change back to the ($x$,$\tau$) coordinates. After some computation and analytic continuation, the metric takes the familiar form: \begin{equation*} \mathrm{d} s^2= \mathrm{e}^{2\sigma(x,\tau)}\left(\mathrm{d} x^2 - g(x)\mathrm{d} t^2 + 2 h(x) \mathrm{d} x \mathrm{d} t \right).
\label{souri} \end{equation*} The values we find for $g(x)$ and $h(x)$ are then given by equations \eqref{efe} and \eqref{timerev}, \begin{equation} g(x)= \zeta^2\prod_{i=1}^2\left[1+\gamma_i^2-2\gamma_i\cos{\left(\frac{2\pi x}{L}\right)}\right], \label{efe} \end{equation} \begin{equation} h(x)= \mathrm{i}\zeta (\gamma_1\gamma_2 - 1) \sin\left(\frac{2\pi x}{L}\right), \label{timerev} \end{equation} where $\zeta= -\frac{L}{2\pi \mathrm{i}}\frac{1}{(T_0 + T_1)}\frac{\log( \eta)}{(\gamma_1 - \gamma_2)}$, and as before $\eta$ is the multiplier of the M\"obius transformation, which is a complex exponential in the non-heating phase and a real exponential in the heating phase, and $\gamma_1$, $\gamma_2$ are the two fixed points of the 1-cycle M\"obius transformation. After analytic continuation, both $g(x)$ and $h(x)$ are real-valued functions.\\ The Weyl prefactor $\mathrm{e}^{2\sigma(x,\tau)}$ is a positive number before analytic continuation, \begin{equation} \mathrm{e}^{2\sigma(x,\tau)}=\frac{4\pi^2}{L^2}\frac{\eta^{2n}(\gamma_1-\gamma_2)^4}{\left(1 + \eta^{2n} (1 + \gamma_1^2) + \gamma_2^2 - 2\eta^n (1 + \gamma_1 \gamma_2) - 2 (-1 + \eta^n) (\eta^n \gamma_1 - \gamma_2)\cos(\frac{2\pi x}{L})\right)^2}. \end{equation} Removing the Weyl prefactor by a Weyl transformation, the metric is finally given by \begin{equation} \mathrm{d} s^2= \mathrm{d} x^2-g(x) \mathrm{d} t^2 +2h(x)\mathrm{d} x\mathrm{d} t. \label{eq:metricnontrv} \end{equation} The null geodesics of this $(1+1)$d space-time are uniquely determined by the condition $\mathrm{d} s^2=0$. Thus they are the solutions of the equation $ 2h(x(t))\dot{x}(t)+\dot{x}^2(t)-g(x(t))=0$: \begin{equation} \pm t(x)= \int_{x_0}^{x}dx'\frac{1}{\sqrt{h(x')^2+g(x')}\mp h(x')}. \end{equation} Then the local group velocity of the excitations is $v(x) = -h(x) \pm \sqrt{h(x)^2+g(x)}$, where the upper and lower signs correspond to chiral and anti-chiral excitations.
The expression \eqref{eq:metricnontrv} is not time-reversal invariant because of the off-diagonal term $h(x)$. Only if $\gamma_1\gamma_2=1$ does $h(x)$ vanish, making the system time-reversal invariant. This can be fulfilled by starting the drive in a symmetric way. For concreteness, we shift the origin of time by $\frac{T_0}{2}$ \begin{align} \mathcal{H}_\mathrm{F}(t)= \begin{cases} \mathcal{H}_0 & 0<t<\frac{T_0}{2},\\ \mathcal{H}_{\text{SSD}}&\frac{T_0}{2}<t<\frac{T_0}{2}+T_1,\\ \mathcal{H}_0&\frac{T_0}{2}+T_1<t<\frac{3T_0}{2}+T_1,\\ \text{etc.} \end{cases} \end{align} The associated 1-cycle M\"obius transformation is then given by \begin{equation} \tilde{z}_1=\frac{\left(1+\frac{\pi \tau_1}{L}\right)\mathrm{e}^{\frac{\pi\tau_0}{L}}z-\frac{\pi\tau_1}{L}}{\frac{\pi\tau_1}{L}z+\left(1-\frac{\pi\tau_1}{L}\right)\mathrm{e}^{-\frac{\pi\tau_0}{L}}}. \label{link} \end{equation} It is interesting to compare \eqref{link} and \eqref{normalized}. The coefficients $a$ and $d$ are the same as in the non-symmetric case, but not $b$ and $c$. Furthermore, the product $bc$ is unchanged. Thus, looking at the definitions of the fixed points and the multiplier, this only redefines the denominators of $\gamma_1$ and $\gamma_2$, and keeps the multiplier $\eta$ invariant. It can then be shown that $\gamma_1\gamma_2=1$. Furthermore, in the heating phase $|\gamma_1|=|\gamma_2|=1$, therefore in the time-reversal symmetric situation $\gamma_1=\gamma_2^*$, whereas in the non-heating phase, $\gamma_1$ and $\gamma_2$ are both real and inverse of each other. In this case, the metric simplifies to $\mathrm{d} s^2=\mathrm{d} x^2-g(x)\mathrm{d} t^2$. Applying the time-reversal condition, one finds that \begin{equation} v(x)=[g(x)]^{1/2}= \frac{1}{2\pi \mathrm{i}}\frac{L\log(\eta)}{(T_0+T_1)} \frac{\left(1+\gamma_1^2-2\gamma_1\cos{\frac{2\pi x}{L}}\right)}{\gamma_1^2-1}. \label{vitevite} \end{equation} This is the effective velocity of the excitations.
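Both claims, that the symmetric drive leaves $\eta$ invariant and enforces $\gamma_1\gamma_2=1$, can be confirmed numerically (our sketch; arbitrary Euclidean parameters):

```python
import cmath

def fp_and_multiplier(a, b, c, d):
    # fixed points and multiplier of a Moebius map (a z + b) / (c z + d)
    sq = cmath.sqrt((a - d) ** 2 + 4 * b * c)
    g1 = (a - d - sq) / (2 * c)
    g2 = (a - d + sq) / (2 * c)
    eta = (a + d + sq) / (a + d - sq)
    return g1, g2, eta

L, tau0, tau1 = 1.0, 0.17, 0.29
r = cmath.pi * tau1 / L
e0 = cmath.exp(cmath.pi * tau0 / L)

# non-symmetric drive, eq. (normalized)
g1, g2, eta = fp_and_multiplier((1 + r) * e0, -r / e0, r * e0, (1 - r) / e0)
# symmetric drive shifted by T0/2, eq. (link)
h1, h2, eta_sym = fp_and_multiplier((1 + r) * e0, -r, r, (1 - r) / e0)
```

Here $\gamma_1\gamma_2 = -b/c$, which equals $1$ only for the symmetric choice $b=-\frac{\pi\tau_1}{L}$, $c=\frac{\pi\tau_1}{L}$, while $a+d$ and $bc$, hence $\eta$, coincide in both cases.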
In the heating phase, the local group velocity goes to $0$ at two points, which are found to be $x_c=\frac{L}{2\pi}\arccos\left(\frac{1+\gamma_1^2}{2\gamma_1}\right)$ and $L-x_c$. After analytic continuation, it can be shown that $x_c= \tfrac{L}{2\pi} \arccos(\cos \tfrac{\pi T_0}{L} + \tfrac{L}{\pi T_1} \sin \tfrac{\pi T_0}{L})$. Thus $x_c\in [0,\frac{L}{2}]$ in the heating phase, whereas $x_c$ is a complex number in the non-heating phase, so the velocity never vanishes there. Thus, in the heating phase, the velocity of the excitations vanishes at these two points, meaning that their worldlines, which follow null geodesics of the metric, tend to one of these two points. We now rewrite the metric in the heating phase in terms of the singularity $x_c$. Since $\cos\left(\frac{2\pi x_c}{L}\right)=\frac{1}{2}\frac{\gamma_1^2+1}{\gamma_1}$, the effective deformation can be rewritten directly in terms of the singularity \begin{equation} v(x)= A \left(1-\frac{\cos\left(\frac{2\pi x}{L}\right)}{\cos\left(\frac{2\pi x_c}{L}\right)}\right), \end{equation} with $A=\frac{1+\gamma_1^2}{\gamma_1^2-1}\frac{1}{2\pi \mathrm{i}(T_0+T_1)}L\log(\eta)$. Using trigonometric formulae, this leads to the desired form of the velocity \begin{equation} v(x)= 2A \frac{\sin\left[\frac{\pi}{L}(x-x_c)\right]\sin\left[\frac{\pi}{L}(x+x_c)\right]}{\cos\left(\frac{2\pi x_c}{L}\right)}. \label{vitess} \end{equation} The effective metric can now be easily expressed in terms of the singularity $x_c$ \begin{equation} \mathrm{d} s^2=-4A^2\frac{\sin^2\left[\frac{\pi}{L}(x-x_c)\right]\sin^2\left[\frac{\pi}{L}(x+x_c)\right]}{\cos^2\left(\frac{2\pi x_c}{L}\right)}\mathrm{d} t^2+\mathrm{d} x^2. \label{basbas} \end{equation} This form is still hard to interpret in terms of a Schwarzschild metric.
However, by expanding around one of the two singularities, i.e., around $x_c$ or $L-x_c$, and keeping only the lowest-order contribution, only the contribution from one of the singularities should matter and the metric should simplify. Therefore we expand the expression \eqref{basbas} around $x_c$. At leading order in $x-x_c$, we may simplify \begin{align} \sin^2\left[\frac{\pi}{L}(x-x_c)\right]\sin^2\left[\frac{\pi}{L}(x+x_c)\right]\approx \frac{\pi^2}{L^2}(x-x_c)^2\sin^2\left(\frac{2\pi x_c}{L}\right) + \mathcal{O}((x-x_c)^3). \end{align} The metric finally simplifies to \begin{equation} \mathrm{d} s^2= -4A^2\tan^2\left(\frac{2\pi x_c}{L}\right)\frac{\pi^2}{L^2}(x-x_c)^2 \mathrm{d} t^2+\mathrm{d} x^2. \label{rindler} \end{equation} This metric is known as the Rindler metric, which describes flat Minkowski space-time in a uniformly accelerated frame. Writing $C^2=4A^2\tan^2\left(\frac{2\pi x_c}{L}\right)\frac{\pi^2}{L^2}$, the metric reads $\mathrm{d} s^2= -C^2 (x-x_c)^2\mathrm{d} t^2+\mathrm{d} x^2$. One can now introduce the following coordinate change: $\frac{C}{2}\left(x-x_c\right)^2=\left(y-x_c\right)$. In the new coordinates, the metric reads \begin{equation} \mathrm{d} s^2=-2C\left(y-x_c\right)\mathrm{d} t^2+\frac{1}{2C}\frac{1}{\left(y-x_c\right)}\mathrm{d} y^2. \end{equation} This is the well-known Schwarzschild metric in $(1+1)$ dimensions. Thus expanding our effective space-time metric around one of the two singularities gives (at leading order) a black hole metric. One can do the same for the second singularity by expanding the metric around $L-x_c$, to get similar results: $\mathrm{d} s^2=-2C\left[y-(L-x_c)\right]\mathrm{d} t^2+\frac{1}{2C}\frac{1}{\left[y-(L-x_c)\right]}\mathrm{d} y^2$. \medskip \noindent The Hawking temperature $\Theta_H$ can be directly read off from the expression of the metric as $\Theta_H=\frac{C}{2\pi}$.
We can finally use the formula $\tan{\left(\frac{2\pi x_c}{L}\right)}=\frac{1}{\mathrm{i}}\frac{\gamma_1^2-1}{\gamma_1^2+1}$ to conclude that the Hawking temperature is given by $\Theta_H=\frac{|\log{(\eta)}|}{2\pi(T_0+T_1)}$.\\ \subsection{Effective Hamiltonian} \noindent Using the effective metric in the time-reversal symmetric case, we deduce that the stroboscopic effective Hamiltonian is $\mathcal{H}_{\text{eff}}=\int_0^L v(x)T_{00}(x)\mathrm{d} x$. Using the Fourier decomposition of $v(x)$, given by equation \eqref{vitevite}, and using the definition of the Virasoro generators $L_n=\frac{1}{2\pi \mathrm{i}}\oint \mathrm{d} z z^{n+1}T(z)$, we can conclude that the stroboscopic Hamiltonian is \begin{equation} \mathcal{H}_{\text{eff}}=\alpha\left[L_0-\frac{\beta}{2}(L_1+L_{-1})+\overline{L}_0-\frac{\beta}{2}(\overline{L}_1+\overline{L}_{-1})\right], \end{equation} where $\alpha=\frac{1+\gamma_1^2}{\gamma_1^2-1}\frac{L}{2\pi \mathrm{i}(T_0+T_1)}\log{(\eta)}$, $\beta=\frac{2\gamma_1}{1+\gamma_1^2}$, which are real numbers. It can further be shown, using the expressions of the fixed points, that $\beta^{-1}=\cos(\frac{\pi T_0}{L})+\frac{L}{\pi T_1}\sin(\frac{\pi T_0}{L})$. Therefore, in the heating phase $|\beta|>1$, and in the non-heating phase $|\beta|<1$. \medskip \noindent In the case $|\beta|<1$, the effective Hamiltonian is simply the M\"obius Hamiltonian \eqref{mobham}. This observation is consistent with the fact that $F(x,t;x_0,0)$ is periodic in the non-heating phase. Indeed, the propagation of the excitations after a quench with the M\"obius Hamiltonian is also periodic, with period $T=\frac{1}{L\cosh(2\theta)}$. Therefore the effective stroboscopic Hamiltonian in the non-heating phase in the time-reversal symmetric case is just an interpolating Hamiltonian between $\mathcal{H}_0$ and $\mathcal{H}_{\text{SSD}}$.
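The identity $\beta^{-1} = \cos(\frac{\pi T_0}{L}) + \frac{L}{\pi T_1}\sin(\frac{\pi T_0}{L})$, and with it the location of the singularities via $\cos(2\pi x_c/L) = \beta^{-1}$, can be verified numerically for the symmetric drive (our sketch; heating-phase parameters, continuation convention $\tau \to \mathrm{i}T$ assumed):

```python
import cmath, math

L, T0, T1 = 1.0, 0.8, 0.5
r = 1j * cmath.pi * T1 / L
e0 = cmath.exp(1j * cmath.pi * T0 / L)
a, b, c, d = (1 + r) * e0, -r, r, (1 - r) / e0   # symmetric 1-cycle map

sq = cmath.sqrt((a - d) ** 2 + 4 * b * c)
g1 = (a - d - sq) / (2 * c)
eta = (a + d + sq) / (a + d - sq)

beta_inv = (1 + g1 ** 2) / (2 * g1)              # = cos(2 pi x_c / L)
closed_form = math.cos(math.pi * T0 / L) + (L / (math.pi * T1)) * math.sin(math.pi * T0 / L)

# Hawking temperature read off from the multiplier
theta_H = abs(cmath.log(eta)) / (2 * math.pi * (T0 + T1))
```

Since $\gamma_1\gamma_2 = 1$ for the symmetric drive, $(1+\gamma_1^2)/(2\gamma_1) = (\gamma_1+\gamma_2)/2 = (a-d)/(2c)$, which reduces to the closed form above; in the heating phase its modulus is below $1$, so $x_c$ is real.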
$\mathcal{H}_{\text{eff}}$ can be further written as a convex combination of the two original Hamiltonians \begin{equation} \mathcal{H}_{\text{eff}}=\alpha\left[(1-\beta)\mathcal{H}_0+\beta\mathcal{H}_{\text{SSD}}\right]. \end{equation} Therefore, for $0<\beta<1$, the effective Hamiltonian interpolates between the uniform and the SSD Hamiltonian, as we already understood through the comparison with the M\"obius Hamiltonian. \medskip \noindent For $\beta>1$, the effective Hamiltonian cannot be understood as an interpolation between the two original Hamiltonians, giving rise to the physics of heating. The effective Hamiltonian in the heating phase can be rewritten, using \eqref{vitess}, as: \begin{equation} \mathcal{H}_{\text{eff}}=2L\Theta_H\int_0^L\mathrm{d} x\frac{ \sin\left(\frac{\pi}{L}(x-x_c)\right)\sin\left(\frac{\pi}{L}(x+x_c)\right)}{\sin\left(\frac{2\pi x_c}{L}\right)}T_{00}(x). \end{equation} This form of the effective Hamiltonian is reminiscent of the entanglement Hamiltonian $K_A$ for a system of finite size $[0,L]$, introduced in \cite{Cardy:2016fqc}, with subsystem $A=(x_c,L-x_c)$. However, here the Hamiltonian density is integrated over the whole chain. For such an entanglement Hamiltonian, an effective local temperature can be defined, diverging at $x_c$ and $L-x_c$. This is an indication that energy should be absorbed exponentially at these two points. \medskip \noindent Finally, looking at the effective deformation $v(x)$ is insightful: in the non-heating phase, we notice that $v(x)$ has no roots. Therefore, it can be thought of as a shifted sine-square, deforming the homogeneous system only smoothly. By going through the phase diagram following the line $T_0=T_1$, this shifted sine-square will simply tend to the usual sine-square. At the phase transition, the effective Hamiltonian is similar to the sine-square deformation, and has roots at $x_c=0$ and $x_c=L$.
Then, in the heating phase, the two roots approach the center of the system symmetrically, giving rise to a cosine-square deformation at the second phase transition, with a single root at $x_c=\frac{L}{2}$. Therefore, the effective Hamiltonian in the heating and non-heating phases only interpolates between the sine-square and the cosine-square deformations. \subsection{Energy density} \noindent The different phases arising within the Floquet CFT were understood in \cite{Wen:2018agb} by computing the entanglement entropy, which grows linearly in the heating phase and oscillates with period $T_E$ in the non-heating phase. We would like to characterize these phases more precisely by computing the evolution of the energy density $\mathcal{E}(x,t)$ in the system. In particular, we expect to observe an exponential increase of energy in the heating phase precisely at the location of the two singularities $x_c$ and $L-x_c$, whereas the rest of the system should not absorb energy, in agreement with our stroboscopic black-hole picture. The energy density $\mathcal{E}(x,t)$ under the Floquet drive is defined by \begin{equation} \mathcal{E}(x,t)=\langle \psi(t)|T_{00}(x)|\psi(t)\rangle. \label{njr} \end{equation} As usual, $T_{00}$ is the energy density of the uniform CFT. $|\psi(t)\rangle=U(t)|G\rangle$ is the time-evolved ground state of the uniform Hamiltonian $\mathcal{H}_0$, with open boundary conditions. We choose open boundary conditions here, since in the periodic case $\mathcal{E}(x,t)=0$. In Euclidean coordinates, $T_{00}=T(\omega)+\bar{T}(\bar{\omega})$.
The strategy is the same as for the two-point function at different times: the first step is to map the strip to the complex plane with a slit, using the exponential mapping \begin{equation} \langle G|\mathrm{e}^{\tau \mathcal{H}_{\text{SSD}}}T(\omega)\mathrm{e}^{-\tau \mathcal{H}_{\text{SSD}}}|G\rangle=\left(\frac{\partial z}{\partial \omega}\right)^2 \langle G|\mathrm{e}^{\tau \mathcal{H}_{\text{SSD}}}T(z)\mathrm{e}^{-\tau \mathcal{H}_{\text{SSD}}}|G\rangle-\frac{\pi^2 c}{6L^2}. \end{equation} Then, the usual procedure consists in mapping the complex plane to itself in the M\"obius $\tilde{z}_n$ coordinates, applying the time evolution and transforming back to the $z$ coordinates. The extra terms coming from the Schwarzian derivative vanish because of $SL(2,\mathbb{R})$ invariance \begin{equation} \langle G|\mathrm{e}^{\tau \mathcal{H}_{\text{SSD}}}T(\omega)\mathrm{e}^{-\tau \mathcal{H}_{\text{SSD}}}|G\rangle=\left(\frac{\partial z}{\partial \omega}\right)^2 \left(\frac{\partial \tilde{z}_n}{\partial z}\right)^2 \langle G|T(\tilde{z}_n)|G\rangle-\frac{\pi^2 c}{6L^2}. \end{equation} The final step is to evaluate the correlation function $\langle G|T(\tilde{z}_n)|G\rangle$ in a boundary CFT, defined on the complex plane with a slit on the real positive axis. This can be done using a square-root mapping $\sqrt{z}$ to the upper-half plane $\mathbb{H}$. This gives a non-trivial Schwarzian derivative term given by $\{z,\sqrt{z}\}=\dfrac{3}{8z^2}$. The upper-half plane can be mapped to the unit disc with a M\"obius transformation; therefore, due to rotational symmetry, $\langle G|T(\sqrt{\tilde{z}_n})|G\rangle_{\mathbb{H}}=0$ \cite{Calabrese:2009qy}.
Finally, only the Schwarzian derivative term of the square-root mapping contributes to the energy density, which before analytic continuation reads \begin{equation} \mathcal{E}(x,t)= \frac{c}{32}\left[\left(\frac{\partial z}{\partial \omega}\right)^2 \left(\frac{\partial \tilde{z}_n}{\partial z}\right)^2\frac{1}{\tilde{z}_n^2}+\left(\frac{\partial \bar{z}}{\partial \bar{\omega}}\right)^2 \left(\frac{\partial \bar{\tilde{z}}_n}{\partial \bar{z}}\right)^2\frac{1}{\bar{\tilde{z}}_n^2}\right]-\frac{\pi^2 c}{3L^2}, \end{equation} for stroboscopic times $t=n(T_0+T_1)$. In the heating phase, as $\mathcal{E}(x_c,t)\sim \eta^{-2n}$ at long times because of the derivative term $\frac{\partial \bar{\tilde{z}}_n}{\partial \bar{z}}$, we conclude that $\mathcal E(x_c,t) \sim \mathrm{e}^{4\pi\Theta_\mathrm{H} t}$, such that the Hawking temperature is really the heating rate. Similarly for the other singularity $L-x_c$, where the energy also grows exponentially because of the other derivative term $\frac{\partial \tilde{z}_n}{\partial z}$. Therefore the energy density grows exponentially in the heating phase only at the positions of the two black holes, as expected. In the non-heating phase, the energy density oscillates in time with period $T_E=2\pi\frac{ (T_0+T_1)}{|\arg(\eta)|}$. \end{widetext} \end{document}
\section{\label{sec:struc} Nematic phases from time-averaging} In this Section, we derive a coarse-grained description for the structure of a fluid composed of rapidly rotating anisotropic objects. Define the director to be $\hat{\mathbf{n}}(t) = [\cos \theta(t), \sin \theta(t) ]$ and modulate the orientational dynamics of the rods via the angle $\theta(t)$: \begin{equation} \theta(t) = \Omega t - \alpha \sin (2 \Omega t + \delta), \label{eq:theta} \end{equation} where $\alpha$ is the modulation amplitude, $\Omega = \langle \dot{\theta}(t) \rangle$ is the average rotation rate, $\delta$ is the rotation phase, and the averaging is over a period of rotation from $t = 0$ to ${t = 2 \pi/\Omega}$. In the context of equilibrium spontaneous symmetry breaking, the constituent shape determines mesophase order. For example, at high density or low temperature, rod-shaped constituents can form nematic (two-fold rotationally symmetric) phases. By contrast, in our case, anisotropic responses and structure emerge from dynamics. In order to characterize such structure on long timescales, we average over the fast timescale of a single rotation period. We formally define this time-averaging via the integral \begin{equation} \langle \chi(t) \rangle \equiv \frac{\Omega}{2 \pi} \int_0^{2 \pi/\Omega} \! dt \,\, \chi(t) \label{eq:av} \end{equation} for an arbitrary periodic function $\chi(t)$. For example, substituting Eq.~(\ref{eq:theta}), with $\delta = 0$, into the orientational order parameter $e^{i 2 \theta}$ and evaluating the average using Eq.~(\ref{eq:av}), we find \begin{equation} \langle e^{i 2 \theta(t)} \rangle = J_1(2 \alpha) \approx \alpha + O(\alpha^3), \label{eq:op} \end{equation} where $J_1(x)$ is a Bessel function of the first kind~\footnote{This expression can be obtained using the definition $ J_1(x) \equiv \frac{1}{2 \pi}\int_{-\pi}^{\pi}e^{i(\tau - x \sin \tau)} d\tau $.}. 
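The average in Eq.~(\ref{eq:op}) is easy to confirm numerically. The pure-Python sketch below (ours, with $\delta = 0$) compares a discretized period average of $e^{i 2\theta(t)}$ against the power series for $J_1$:

```python
import math

def J1(x, terms=30):
    # series J_1(x) = sum_m (-1)^m / (m! (m+1)!) (x/2)^(2m+1)
    return sum((-1) ** m / (math.factorial(m) * math.factorial(m + 1))
               * (x / 2) ** (2 * m + 1) for m in range(terms))

def order_parameter(alpha, samples=20000):
    # average exp(i 2 theta(t)) over one rotation period, with delta = 0
    re = im = 0.0
    for k in range(samples):
        wt = 2 * math.pi * k / samples          # wt = Omega * t
        theta = wt - alpha * math.sin(2 * wt)
        re += math.cos(2 * theta) / samples
        im += math.sin(2 * theta) / samples
    return re, im

alpha = 0.2
re, im = order_parameter(alpha)
```

The imaginary part vanishes by symmetry, the real part reproduces $J_1(2\alpha)$, and for small $\alpha$ the value is close to $\alpha$, consistent with the expansion $J_1(2\alpha) = \alpha - \alpha^3/2 + \dots$.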
This order parameter connects the modulation defined by Eq.~(\ref{eq:theta}) to time-averaged orientational order with two-fold rotational symmetry. In the isotropic case ($\alpha = 0$), the system reduces to a fluid composed of objects rotating at a constant rate~\cite{Sumino2012,Bonthuis2009,Furthauer2012,Oswald2015,Riedel2005, Denk2016, Snezhko2016,Lemaire2008, Uchida2010,Yan2015}. The mechanics of matter composed of such chiral active building blocks is crucial for biological function~\cite{Drescher2009, Petroff2015,Nonaka2002,Guirao2010,Button2012,Brumley2015,Kirchhoff2005,Kaiser2013,Lenz2003} and synthetic materials design~\cite{Tabe2003, Maggi2015, Nguyen2014,Sabrina2015,Spellings2015}. One exotic feature in the mechanics of these fluids is the presence of local torques due to antisymmetric components of the stress tensor~\cite{Dahler1961, Condiff1964, Tsai2005}. The order parameter captures the appearance of nematic anisotropy in a fluid with a cyclically modulated drive. Rotations of the time-averaged order are captured by the modulation phase $\delta$, which enters the nematic $\mathrm{Q}$-tensor. (The order parameter $S \equiv |\langle e^{i 2 \theta(t)} \rangle|$ does not depend on rotations by $\delta$.) For a fluid with nematic symmetry, the time-averaged $\mathrm{Q}$-tensor is defined by $\langle Q_{ij} \rangle \equiv 2 (\langle n_i n_j \rangle - \langle n_i n_j \rangle_{\alpha = 0})$, where $\langle n_i n_j \rangle_{\alpha = 0} = \delta_{ij}/2$ is the average in the isotropic case ($\delta_{ij}$ is the Kronecker-$\delta$). Using Eq.~(\ref{eq:theta}), we find: \begin{align} \label{eq:qdef} \langle Q_{ij} \rangle & = \frac{S}{2} \begin{bmatrix} \cos2\delta & \sin2\delta \\ \sin2\delta & - \cos2\delta \end{bmatrix}. \end{align} In this time-averaged sense, the fluid is not an ordinary nematic, which would have a spontaneously broken symmetry and long, slow variations in $Q_{ij}(\bm{{\rm x}},t)$ over time and space.
Instead, in the driven fluid such fluctuations are suppressed because rotational symmetry is explicitly broken by the drive. $Q_{ij}$ is prescribed and constant in both time and space. For this nematic fluid, the naive time-average of the director $\hat{\bm{{\rm n}}}$ is zero by symmetry: $\langle \hat{\bm{{\rm n}}} \rangle = 0$. Nevertheless, a time-averaged director $\hat{n}^a$ can be defined from the time-averaged $\mathrm{Q}$: $\langle Q_{ij} \rangle =\langle e^{i 2 \theta(t)} \rangle [\hat{n}^a_i \hat{n}^a_j - \delta_{ij}/2]$. This quantity is set by the phase $\delta$: $\hat{n}^a = (\cos \delta, \sin \delta)$. The two parameters $\alpha$ and $\delta$ determine, respectively, the magnitude and orientation of the time-averaged order in the emergent nematic fluid (as does the equivalent description using the $\mathrm{Q}$-tensor). \section{\label{sec:model2} Anisotropic odd viscosity} A fluid with orientational order has a direction-dependent mechanical response. Here, we ask ``does a time-modulated drive lead to a response, for example in the viscosity tensor, which is not possible in equilibrium?'' To probe such viscosities, we consider timescales for which $\dot{\theta}$ is fast and the strain rates $\nabla_i v_j$ are slow. In our analysis, we begin with a coarse-grained description of an equilibrium nematic liquid crystal and add drive. Such a description is appropriate if $\dot{\theta}$ is slow compared to the microscopic collision processes between the fluid particles, allowing us to keep only the lowest-order terms in $\dot{\theta}$. In our description, the fast director is averaged over a rotational period, and only the slow velocity field remains (see Fig.~\ref{Fig1}). For general two-dimensional fluids that conserve angular momentum, the odd viscosity encoded in the tensor $\eta^o_{ijkl}$ ($= - \eta^o_{klij}$) has three independent components $\eta^Q_{\alpha,\beta}$ and $\eta^o$~\cite{Avron1998}.
Because the driven rotation is a clear source and sink for angular momentum in the overdamped fluid that we consider, odd viscosity includes the three extra components $\eta^Q_{\gamma,\delta}$ and $\eta^A$. Whereas the components $\eta^{o,A}$ are isotropic, the $\eta^Q_{\alpha,\beta,\gamma,\delta}$ rotate like the components of the $Q$-tensor for a nematic liquid crystal. Therefore, for a fluid with three-fold rotational symmetry (or higher), only the two isotropic components $\eta^{o,A}$ will remain~\cite{Avron1998}. Note that for any odd viscosity tensor $\eta^o_{ijkl}$, the resulting stress $\eta^o_{ijkl} v_{kl}$ is dissipationless. This follows from the rate $\partial_t s$ of entropy production, $\partial_t s \approx \sum_{ijkl}\eta^{o}_{ijkl} v_{ij}v_{kl} = 0$, using the anti-symmetry of $\eta^o_{ijkl}$. Odd viscosity may be a useful tool in the study of parity-broken quantum systems such as quantum Hall states, Chern insulators, and topological superconductors~\cite{read2011hall, abanov2014electromagnetic, bradlyn2015low, gromov2015framing, gromov2016boundary}, because this anomalous response can be used to identify topological phases of matter. For two-dimensional quantum fluids, an anisotropic generalization of odd viscosity has recently been proposed in Refs.~\cite{Haldane2009, Haldane2015, Gromov2017, lapa2018hall}. In these cases, the fluid has inversion symmetry as well as angular momentum conservation, and the full information about odd viscosity is encoded into a symmetric rank-2 tensor $\eta^{o}_{ij}$: \begin{equation} \eta^o_{ij} = \eta^o \delta_{ij} + \eta^Q_{\alpha} \sigma^x_{ij} + \eta^Q_{\beta} \sigma^z_{ij}, \label{eq:oddten} \end{equation} where the traceless part of $\eta^o_{ij}$ is the symmetric matrix $\eta^Q_{\alpha} \sigma^x + \eta^Q_{\beta} \sigma^z$. As an example, if the nematic director aligns with the x-axis, then $\delta = 0$.
Physically, this means that only the horizontal pure shear leads to either a torque or a pressure change. Isotropic odd viscosity $\eta^o$ has been observed in magnetized plasmas~\cite{Korving1966, Landau10}, whereas nematic components of odd viscosity have not yet been realized in any experimental context. In order to estimate anisotropic odd viscosity in chiral active fluids, we begin with an anisotropic classical fluid with overdamped orientational dynamics, i.e., a nematic liquid crystal~\cite{deGennes1995,Chandrasekhar1992}. Typical nematics are composed of anisotropic, rod-like constituents (called nematogens) on molecular or colloidal scales. When the rods align with their neighbors, they carry no angular momentum or inertia. Vibrated rods can order into a nematic pattern as a nonequilibrium example of a system with liquid-crystalline order~\cite{Galanis2006}. Nematogens can transition between a disordered state at high temperature (or low density) and an aligned state at low temperature (or high density). In the nematic state, the rods tend to all point in the same direction, and the mechanical response varies relative to this alignment. The Leslie-Ericksen coefficients characterize the linear response of the fluid stress to either the strain rate or the rotation rate of the nematic director. We now consider the nonlinear generalization of the Leslie-Ericksen stress, to lowest orders in nonlinearities~\cite{Moritz1976} (see Supporting Information for full expression). After averaging over the fast dynamics of the nematic director, the terms linear in strain rate $A_{ij}$ contribute to the viscous components of the stress tensor. However, terms even in $\dot{\hat{\bm{{\rm n}}}}$ (i.e., order $(\dot{\hat{\bm{{\rm n}}}})^{2p}$ for integer $p$, including $p=0$, which are those independent of $\dot{\hat{\bm{{\rm n}}}}$) do not break time-reversal symmetry and cannot contribute to odd viscosity. 
We focus on those terms that contribute to the odd viscosity tensor, which therefore must be odd in $\dot{\hat{\bm{{\rm n}}}}$ ($\dot{n}_i = - \dot{\theta} \epsilon_{ij} n_j$, where $\epsilon_{ij}$ is the two-dimensional Levi-Civita symbol defined via $\epsilon_{xy} = - \epsilon_{yx} = 1$ and $\epsilon_{xx} = \epsilon_{yy} = 0$) and linear in $A_{kl}$. For positive integers $\beta$ ($ = 1,2,3,\ldots$), these terms, of order $\dot{\theta}^{2 \beta - 1}$, are~\cite{Moritz1976} \begin{align} & \sigma^{EL,\beta}_{ij} = \dot{\theta}^{2 \beta - 2 } \big[ \xi^\beta_{10} n_p A_{ip} \dot{n}_j + \xi^\beta_{11} n_p A_{jp} \dot{n}_i + \xi^\beta_{12} n_i A_{jp} \dot{n}_p \nonumber \\ & + \xi^\beta_{14} n_j A_{ip} \dot{n}_p + \xi^\beta_{16} n_i n_p n_q A_{pq} \dot{n}_j + \xi^\beta_{17} n_j n_p n_q A_{pq} \dot{n}_i \big]\,. \label{eq:el1} \end{align} We focus on the stress components $\sigma^{EL,1}$ and $\sigma^{EL,2}$, which have similar forms, but different orders of $\dot{\theta}$ and, in general, different sets of coefficients $\{\xi^\beta_\kappa\}$. The local forces $\rho_0 \partial_t \bm{{\rm v}}$ are calculated using gradients of the time-averaged stress, resulting in the equation for the flow $\bm{{\rm v}}$: $\rho_0 \partial_t v_i = \nabla_j \langle \sigma^{EL}_{ij}\rangle$, where $\rho_0$ is the fluid density. For modulations with $n = 2$, we obtain the following expression for isotropic odd viscosity: \begin{equation} \label{eq:etao1} \eta^o = - \frac{\Omega}{8} \xi^1_L - \frac{\Omega^3}{8} (1 + 2 \alpha^2 ) \xi^2_L + O(\Omega^5), \end{equation} where $\xi^\beta_L \equiv 2 [\xi^\beta_{10} + \xi^\beta_{11} - \xi^\beta_{12} - \xi^\beta_{14}] + \xi^\beta_{16} +\xi^\beta_{17}$ is a linear combination of the $\xi^\beta_\kappa$ coefficients. The first right-hand-side term in Eq.~(\ref{eq:etao1}) comes from the lowest-order nonlinearities in the equilibrium fluid stress, whereas the higher-order term involves higher-order nonlinearities and will in general be subdominant.
Despite constraints (stemming from stability at equilibrium) on the signs of $\xi^\beta_\kappa$, the resulting expression~(\ref{eq:etao1}) for $\eta^o$ can change sign either via reversal of the spinning rate $\Omega$ or by changing the relative magnitudes of $\xi^\beta_\kappa$ that enter Eq.~(\ref{eq:etao1}) with different signs. To analyze the tensorial (angular-momentum conserving) components of the odd viscosity tensor $\eta^o_{ijkl}$, we calculate the rank-2 odd viscosity tensor $\eta^o_{ij}$ using~\cite{Haldane2009, Haldane2015,Gromov2017} $\eta^o_{ij} = (\delta_{ni} \delta_{kj} \epsilon_{ml} + \delta_{mi} \delta_{lj} \epsilon_{nk}) \eta^o_{nmkl}/4$. From $\langle \sigma^{EL,2}_{ij} \rangle$, we find \begin{align} \eta^Q & = \frac{\alpha \Omega^3}{4} (\xi^2_{16} + \xi^2_{17}) + O(\Omega^5), \end{align} where again $\eta^Q$ is defined via $(\eta^Q)^2 \equiv (\eta^{Q}_{\alpha})^2+(\eta^{Q}_{\beta})^2$. Because effects of modulated drive enter via terms of the stress $\sigma_{ij}$ higher-order in the rotation rate $\dot{\theta}$, $\eta^Q$ scales as $\Omega^3$ in contrast to $\eta^o$, which scales as $\Omega$. If $\alpha \rightarrow 0$, the driven fluid loses anisotropy and the nematic odd viscosity $\eta^Q$ vanishes. In addition to the components of the odd viscosity tensor that conserve angular momentum, the chiral active fluid also includes the components $\eta^Q_{\gamma,\delta}$ and $\eta^A$ that couple explicitly to the antisymmetric component of the stress and which therefore correspond to induced microscopic torques. These out-of-equilibrium responses differentiate the far-from-equilibrium fluid from, for example, quantum Hall fluids that break time-reversal symmetry at equilibrium due to an applied magnetic field and which instead do conserve angular momentum.
From the averaging procedure, these extra responses can be read off as \begin{align} &\eta^A = \frac{\Omega}{4} (- \xi^1_{9} + \xi^1_{10} - \xi^1_{14}) + O(\Omega^3)\\ &\eta^{K} = \frac{\alpha \Omega^3}{4} (2 \xi^2_{11} + 2 \xi^2_{12} - \xi^2_{16} + \xi^2_{17}) + O(\Omega^5) \end{align} to lowest orders in $\Omega$, where $\eta^K$ is defined via $(\eta^K)^2 \equiv (\eta^{Q}_{\gamma})^2+(\eta^{Q}_{\delta})^2$. In many contexts, odd viscosity goes hand in hand with inertia. In vortex fluids, the vortex circulation encodes both fluid inertia and odd viscosity~\cite{Wiegmann2014}. For chiral active fluids in which collisions conserve angular momentum, a simple argument gives the value of odd viscosity: if an inclusion changes its area, the torque on the inclusion is given by the rate of change in area times the odd viscosity or, equivalently, by the expelled angular momentum. As a result, odd viscosity is given by half of the angular momentum density~\cite{Ganeshan2017,Banerjee2017}. For fluid phenomena at the smallest scales, dissipation dominates over inertia. In this limit, chiral active fluids composed of colloidal particles have the broken-$T$ symmetry necessary for odd viscosity to arise. However, the arguments based on angular momentum cannot give an accurate estimate of the value of odd viscosity because momentum plays no role in the mechanics. Instead, in the dissipative, overdamped model that we propose, isotropic odd viscosity $\eta^o$ arises from the lowest-order nonlinear coupling between director rotation and fluid strain rate. Furthermore, in a fluid with broken time-reversal, parity, and rotational symmetries, higher-order nonlinear couplings lead to anisotropic components $\eta^{Q,K}$ of the odd viscosity tensor. \begin{figure}[t!] \includegraphics[angle=0]{Fig2.pdf} \caption{Schematics of the physics of tensorial odd viscosity.
(a) The response characteristic of isotropic odd viscosity, corresponding to $\eta^o = \mathrm{Tr}(\eta^o_{ij}) /2$: for an object with time-varying area $a(t)$, isotropic odd viscosity is related to the ratio of torque $\tau_I$ to areal rate of change $\dot{a}$: $\eta^o = \tau_I/(2\dot{a})$~\cite{Lapa2014,Ganeshan2017}. For a given fluid chirality (in this case, $\eta^o > 0$), the torque changes sign depending on whether the object is contracting ($\dot{a} < 0$ and $\tau_I < 0$, left) or expanding ($\dot{a} > 0$ and $\tau_I > 0$, right). (b) If the areal rate of change is zero, but the shape is sheared, then the torque $\tau_Q$ is given by the anisotropic component of the odd viscosity tensor. This nematic odd viscosity has two independent components captured by the traceless symmetric tensor $Q_{ij}$ [$= S (n_i n_j - \delta_{ij}/2)$], which control the amplitude and shear-angle-dependence of the resulting torque. Specifically, this torque depends on the angle of the shear relative to the director $n_i$ and is proportional to the (signed) shear rate. For example, for a sheared circle, a rotation of the shear by $\pi/2$ is equivalent to a shear of opposite sign, and therefore corresponds to a torque $\tau_Q$ of the opposite sign (right). The orientation at angle $\pi/4$ at which the shear is diagonal corresponds to zero torque.} \label{Fig2} \end{figure} \section{\label{sec:torque1}Equation of motion with anisotropic odd viscosity} In this section, we show the consequences of tensorial odd viscosity on fluid flow. Using the Helmholtz decomposition in two dimensions, the fluid flow can be expressed in terms of the compression rate $\nabla \cdot \bm{{\rm v}}$ and the vorticity $\nabla \times \bm{{\rm v}}$. To derive the equation of motion for vorticity, we follow the usual route by taking the curl of the velocity equation.
This simplifies the equation by removing the gradient terms due to isotropic stress (because $\epsilon_{ij} \partial_i \partial_j \sigma_{kk} = 0$). Without any odd viscosity contributions, the equation of motion would become the two-dimensional vorticity-diffusion equation. We find that whereas isotropic odd viscosity contributes only compression-rate-dependent terms, anisotropic odd viscosity changes the vorticity profile even for an incompressible fluid~\cite{Banerjee2017}. We show this by substituting the expression for the stress $\sigma_{ij} = \eta_{ijkl} v_{kl}$ into the velocity equation $\rho D_t v_j = \partial_i \sigma_{ij}$. We begin with the full anti-symmetric viscosity tensor $\eta^o_{ijkl}$ from Eq.~(\ref{eq:as}) and, for brevity, only the isotropic shear viscosity $\eta$ from the symmetric, dissipative viscosity (see Appendix~\ref{sec:apdis} for a detailed discussion of the anisotropic dissipative viscosity tensor). Taking the curl, we arrive at the (pseudo-scalar) vorticity equation (see Appendix~\ref{sec:derom} for details): \begin{align} &\rho D_t \omega =\eta \nabla^2 \omega - (\nabla \cdot \cM_1 \cdot \nabla) \omega + \nabla^2 [\nabla \cdot (\cM_1^* \cdot \bm{{\rm v}})] \nonumber \\& + (\eta^o + \eta^A) \nabla^2 (\nabla \cdot \bm{{\rm v}}) - (\nabla \cdot {\mathcal M}_2 \cdot \nabla) (\nabla \cdot \bm{{\rm v}}) \label{eq:omm2}, \end{align} where $D_t$ is the convective derivative, and \begin{align} \label{eq:mdef} \cM_1 & \equiv \eta^Q_{\gamma} \sigma^x + \eta^Q_{\delta} \sigma^z, \\ \cM_2 & \equiv \eta^Q_{\alpha} \sigma^x + \eta^Q_{\beta} \sigma^z \nonumber \end{align} and $\cM_1^* \equiv \eta^Q_{\delta} \sigma^x - \eta^Q_{\gamma} \sigma^z$ (i.e., $\cM_1$ rotated by $\pi/4$). For incompressible flow, $\nabla \cdot \bm{{\rm v}} = 0$, and the last two terms in Eq.~(\ref{eq:omm2}) proportional to the odd viscosity components $\eta^{o}$, $\eta^{A}$, and ${\mathcal M}_2$ all vanish~\cite{Avron1998}.
This reduces Eq.~(\ref{eq:omm2}) to Eq.~(\ref{eq:omm}). This feature distinguishes components of anisotropic odd viscosity ${\mathcal M}_1$ (and $\eta^{K}$) from both isotropic odd viscosities $\eta^{o}$ and $\eta^{A}$: ${\mathcal M}_1$ can be measured directly from the flow of an incompressible fluid in the bulk. The expression $\nabla \cdot ({\mathcal M}^*_1 \cdot \bm{{\rm v}})$ can be interpreted as a shear-strain rate associated with $\bm{{\rm v}}$ (because $\mathrm{Q}$ and ${\mathcal M}_{1,2}$, like shear transformations, are all symmetric and traceless). Alternatively, we can rewrite the last term in Eq.~(\ref{eq:omm}) using the nematic director rotated by $\pi/4$, which we call $\hat{\mathbf{m}}$, finding the term proportional to $\nabla^2 [(\hat{\mathbf{m}} \cdot \nabla) (\hat{\mathbf{m}} \cdot \bm{{\rm v}})] $, where we used $\nabla \cdot \bm{{\rm v}} = 0$. This form demonstrates that anisotropic odd viscosity induces torques due to (the Laplacian of) gradients that are rotated by $\pi/4$ relative to the nematic director of the velocity component along the same direction. A further simplification to these expressions can arise in fluids with nematic symmetry. In that case, we expect both ${\mathcal M}_1$ and ${\mathcal M}_2$ to be proportional to the nematic $Q$-tensor, which implies that the angle $\delta$ defined in Eq.~(\ref{eq:qdef}) is the same for the two tensors ${\mathcal M}_{1,2}$. This implies a relation between components $\eta^Q_{\alpha,\beta,\gamma,\delta}$ that reduces the number of independent anisotropic viscosities from four to three. This relation between the four anisotropic odd viscosities is expected to hold for a wide range of models of anisotropic fluids with odd viscosity and without angular momentum conservation, including the one we consider in this work. 
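The statement that $\mathcal{M}_1^*$ is $\mathcal{M}_1$ rotated by $\pi/4$ can be confirmed by conjugating $\mathcal{M}_1$ with an explicit rotation matrix. A short numerical sketch (the helper names are ours; illustrative only):

```python
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def rotate_tensor(T, phi):
    """R(phi) T R(phi)^T for a 2x2 Cartesian tensor T."""
    c, s = math.cos(phi), math.sin(phi)
    R, Rt = [[c, -s], [s, c]], [[c, s], [-s, c]]
    return mat_mul(R, mat_mul(T, Rt))

SX = [[0.0, 1.0], [1.0, 0.0]]   # Pauli sigma^x
SZ = [[1.0, 0.0], [0.0, -1.0]]  # Pauli sigma^z

def m1(g, d):
    """M1 = eta_gamma sigma^x + eta_delta sigma^z."""
    return [[g * SX[i][j] + d * SZ[i][j] for j in range(2)] for i in range(2)]

def m1_star(g, d):
    """M1* = eta_delta sigma^x - eta_gamma sigma^z."""
    return [[d * SX[i][j] - g * SZ[i][j] for j in range(2)] for i in range(2)]
```

Because traceless symmetric tensors rotate with a doubled angle, a spatial rotation by $\pi/4$ maps $(\sigma^z, \sigma^x) \to (\sigma^x, -\sigma^z)$, which is exactly the map from $\mathcal{M}_1$ to $\mathcal{M}_1^*$.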
\section{\label{sec:torque2}Torques on an inclusion} Whereas the anisotropic component, $\eta^K$, can be measured directly from the flow of an incompressible fluid, the other tensorial odd viscosity, $\eta^Q$, requires the measurement of forces. Below, we show how tensorial odd viscosity $\eta^Q$ determines the mechanical forces that the fluid exerts on immersed objects. For simplicity, consider the case in which $\eta^A = \eta^{K} = 0$. This case also applies to the quantum Hall fluid, because total angular momentum is conserved. We find that such a fluid exerts torques due to the shape change of the object. We calculate the torque on a shape-changing object by integrating the local force over the object's boundary. We focus on expressions that apply to both inertial and overdamped fluids by only considering the instantaneous forces $f_j$ on the boundary element of the object (and not the flow away from the boundary). These forces are determined from the instantaneous velocity $\bm{{\rm v}}$ via the fluid stress tensor $\sigma_{ij}$: \begin{equation} f_j=m_i \sigma_{ij}\, , \end{equation} where $m_i$ is the normal to the boundary at that point. We then substitute into the odd-viscosity stress $\sigma_{ij}$ ($ = \eta^o_{ijkl} \partial_k v_l\,$) the (general) expression~\cite{Haldane2009, Haldane2015,Gromov2017} \begin{equation} \eta^o_{ijkl} = \frac{1}{2} \left( \epsilon_{ik} \eta^o_{jl} + \epsilon_{jk} \eta^o_{il}+ \epsilon_{il} \eta^o_{jk}+ \epsilon_{jl} \eta^o_{ik}\, \right). \label{eq:eo} \end{equation} The force on an element of the boundary of an inclusion is given by \begin{equation} f_j = \frac{1}{2} \left( m_k \eta^o_{jl} \partial_k^* v_l + m_i \eta^o_{il}\partial_j^*v_l + m_i \eta^o_{kj}\partial_kv^*_i+m_i\eta^o_{ik}\partial_kv^*_j \right) \end{equation} where we have used the notation $v^*_i \equiv \epsilon_{ij} v_j$.
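Equation~(\ref{eq:eo}) and the rank-2 projection $\eta^o_{ij} = (\delta_{ni} \delta_{kj} \epsilon_{ml} + \delta_{mi} \delta_{lj} \epsilon_{nk}) \eta^o_{nmkl}/4$ quoted in Sec.~\ref{sec:model2} can be checked for mutual consistency: lifting a symmetric $\eta^o_{ij}$ to rank four and projecting back should act as the identity. An illustrative pure-Python round trip (our own construction, not from the original text):

```python
def eps(i, j):
    """Two-dimensional Levi-Civita symbol: eps(0,1) = 1, eps(1,0) = -1."""
    return float(j - i) if abs(i - j) == 1 else 0.0

def rank4_from_rank2(e2):
    """Eq. (eo): eta_ijkl = (eps_ik e2_jl + eps_jk e2_il
                             + eps_il e2_jk + eps_jl e2_ik) / 2."""
    return [[[[0.5 * (eps(i, k) * e2[j][l] + eps(j, k) * e2[i][l]
                      + eps(i, l) * e2[j][k] + eps(j, l) * e2[i][k])
               for l in range(2)] for k in range(2)]
             for j in range(2)] for i in range(2)]

def rank2_from_rank4(e4):
    """Projection eta_ij = (d_ni d_kj eps_ml + d_mi d_lj eps_nk) e4_nmkl / 4."""
    out = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        for j in range(2):
            val = 0.0
            for m in range(2):
                for l in range(2):
                    val += eps(m, l) * e4[i][m][j][l]   # n = i, k = j term
            for n in range(2):
                for k in range(2):
                    val += eps(n, k) * e4[n][i][k][j]   # m = i, l = j term
            out[i][j] = val / 4
    return out
```

The lifted tensor also satisfies $\eta^o_{ijkl} = -\eta^o_{klij}$, as required of an odd viscosity.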
The total torque $\tau$ on a compact inclusion is given by the integral of the local torque ${\cal T}$ acting on an infinitesimal boundary element, $\tau=\oint {\cal T}(s) ds$, where $s$ is an arc-length parameterization of the boundary. The local torque is given by the standard expression ${\cal T} = \epsilon_{ij}x_i f_j = \vec{x} \times \vec{f}$. For example, in the isotropic case $\eta^o_{ij} = \eta^o \delta_{ij}$, one obtains the relation derived in Refs.~\cite{Lapa2014, Ganeshan2017}: \begin{equation} \tau_I = 2 \oint N_i \eta^o_{ij}v_j = 2 \eta^o\oint v_{N}=2 \eta^o\dot{a}, \end{equation} where $\dot{a}$ is the rate of change of area for the inclusion and $N_i$ is the normal to the inclusion boundary. Substituting Eq.~(\ref{eq:oddten}) into the expression for the integrand of the torque, we find \begin{align} N_i \eta^o_{ij}v_j & = \eta^o v_N + \eta^Q_{\alpha} \sigma^{x}_{ij} N_i v_j + \eta^Q_{\beta} \sigma^{z}_{ij} N_i v_j. \end{align} Thus, the contribution $\tau_Q$ to the torque due to nematicity is \begin{align} \tau_Q & = 2 \eta^Q_{\alpha} \oint \sigma^{x}_{ij} N_i v_j + 2 \eta^Q_{\beta} \oint\sigma^{z}_{ij} N_i v_j. \end{align} For a circle of radius $r_0$ at the origin, a deformation with a zero change in area and a nonzero shear rate (applied affinely, i.e., uniformly across the entire shape) is captured by the second angular harmonic of the velocity field, \begin{equation} f_2(\gamma) = \int d\theta \cos (2\theta-2\gamma) v_N(\theta), \label{eq:seg} \end{equation} where $v_N(\theta) = \bm{{\rm v}}(r = r_0, \theta) \cdot \hat{\mathbf{N}}$ is the normal (i.e., radial) displacement of the circle's boundary (see Fig.~\ref{Fig2}). The parameter $\gamma$ sets the angle of the applied shear.
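The selection properties of the second harmonic $f_2$ can be illustrated numerically: it picks out an affine shear at angle $\gamma$, changes sign when the probe angle is rotated by $\pi/2$, vanishes at a $\pi/4$ offset, and is blind to dilations and rigid translations. A sketch using simple uniform quadrature (our construction, for illustration):

```python
import math

def f2(gamma, v_n, n=4096):
    """Second angular harmonic of v_N, Eq. (seg):
    integral over theta of cos(2 theta - 2 gamma) * v_N(theta)."""
    dth = 2 * math.pi / n
    return sum(math.cos(2 * k * dth - 2 * gamma) * v_n(k * dth)
               for k in range(n)) * dth
```

The sign reversal under $\gamma \to \gamma + \pi/2$ and the zero at a $\pi/4$ offset mirror the torque pattern sketched in Fig.~\ref{Fig2}(b).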
To better intuit Eq.~(\ref{eq:seg}), the angular dependence can be contrasted with areal deformation, which corresponds to the zeroth angular harmonic, $\int d\theta v_N(\theta)$ ($=\dot{a}$), and a net translation at fixed shape, which corresponds to the first harmonic, $\int d\theta [\cos\theta,\sin\theta]v_N(\theta)$ ($=[v_x,v_y]$). To evaluate $\tau_Q$, we use the relation $Q_{ij} N_i N_j = \frac{S}{2}\cos(2 \theta - 2 \delta)$ and assume that $v_i = v_N N_i$, i.e., the velocity is normal to the boundary. We then find \begin{equation} \tau_Q = 2 \eta^Q \oint Q_{ij} N_i N_j v_N = \eta^Q f_2(\delta), \end{equation} where we used Eq.~(\ref{eq:seg}). The torque magnitude is set by the nematic part of the odd viscosity tensor, $\eta^{Q}$, and the angular dependence is set by the nematic director angle $\delta$. The $\eta^{Q}$ component of the nematic odd viscosity can be measured from the ratio $\tau_Q/f_2(\delta)$, i.e., measuring the torque $\tau_Q$ due to a shear rate $f_2(\delta)$ in a direction along which $f_2(\delta) \ne 0$ (see Fig.~\ref{Fig2}). Note that $\eta^{Q}_{\alpha,\beta}$ are two independent components of the odd viscosity tensor: these could be defined, for example, in terms of the torque amplitude and the direction of largest torque. In two dimensions, measuring the torques due to both a uniform expansion and an area-preserving shear of the inclusion would allow one to determine the three independent components of the odd viscosity tensor $\eta^o_{ijkl}$ present in a fluid with conserved angular momentum. \section{\label{conc} Conclusions} In the design of active materials with tailored mechanical characteristics, a basic question is: what is the relationship between activity and mechanical response? Whereas fluids that break both parity and time-reversal symmetries can generically exhibit an anomalous response called odd viscosity, it remains a challenge to determine the value of this mechanical property. 
When inertial effects dominate, odd viscosity is related to the angular momentum density $\ell$ via $\eta^o = \ell/2$~\cite{Banerjee2017}. In thermal plasmas, odd viscosity is proportional to temperature~\cite{Landau10}. We explore a different regime, in which the fluid constituents are anisotropic and the dynamics do not conserve angular momentum. In this regime, the equilibrium stress tensor of the fluid without drive determines the effective odd viscosity of the active fluid once the drive is turned on. This odd viscosity is proportional to the dissipative coefficients of nemato-hydrodynamics, but in addition depends on the angular velocity $\Omega$ of the drive. By modulating $\Omega$ in time, we design a classical fluid with tensorial odd viscosity. With this work, we aim to inspire the design of metafluids in which anomalous response can be engineered to order and observed experimentally. Whereas in mechanical metamaterials the arrangement of the constituents leads to exotic elastic response, in these metafluids the exotic hydrodynamic response arises from time modulated drive. These phases present an array of unexplored physical phenomena which combine the anisotropy of liquid crystals with the far-from-equilibrium nature of active matter. In addition, experimental tests of anisotropic odd viscosity could help to elucidate this exotic and unexplored property of quantum Hall fluids in a classical fluid context. There are two distinct experimental signatures of anisotropic odd viscosity. First, unlike its isotropic counterpart, anisotropic odd viscosity can modify the flow in the bulk of an {\it incompressible} fluid of self-rotating objects by acting as a source of vorticity, see Eq.~(\ref{eq:omm}).
Second, anisotropic odd viscosity generates torques on inclusions: isotropic odd viscosity results in torques on an immersed object proportional to the rate of change of its area, whereas nematic odd viscosity results in torques due to the rate of area-preserving shear distortion of an inclusion's shape, see Fig.~\ref{Fig2}. The conversion between torque and shape-change via such exotic fluids may inspire soft mechanical components and devices at the microscale. {We thank Toshikaze Kariyado, Sofia Magkiriadou, Daniel Pearce, Alexander Abanov, William Irvine, and Tom Lubensky for insightful discussions. AS, AG and VV were primarily supported by the University of Chicago Materials Research Science and Engineering Center, which is funded by the National Science Foundation under award number DMR-1420709. AG was also supported by the Quantum Materials program at LBNL, funded by the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.}
\section{Introduction}\label{sec:intro} The {\textit{ab initio}} description of strongly correlated electrons in solids is a major challenge, limiting the quantitative understanding of interacting electronic phases, such as the Mott~\cite{Imada98} and high-temperature superconducting phases~\cite{Dagotto94, Sachdev03, Lee06RMPhightc}. The heart of the difficulty lies in the need for computational methods that can treat correlated electrons (which usually means a steep computational scaling with system size) while also reaching the thermodynamic limit (TDL), in order to observe distinct phases. A formal route to extend high-level correlated electron methods to infinite systems is provided by \emph{quantum embedding}~\cite{Zgid11, Sun16QET}. While there are today a wide variety of techniques termed embedding~\cite{Sun16QET}, we will be concerned with the type of quantum embeddings in condensed phases that historically started with the treatment of defects in solids via the Anderson impurity model, where the interacting impurity site is surrounded by a set of bath orbitals that approximately represent the environment \cite{Anderson61}. This impurity idea can be generalized to translationally invariant systems, where the lattice is subdivided into multiple clusters (also termed impurities or fragments), each embedded in a self-consistent environment generated by the other impurities. In the embedding treatment, only the solution of the embedded cluster (i.e. the cluster along with its quantum bath) is treated by the high-level correlated method (the impurity solver), while interactions between clusters are treated at a lower level of theory, typically within a single-particle framework such as mean-field.
Dynamical mean-field theory (DMFT) was the first quantum embedding algorithm for periodic systems based on the above self-consistent quantum impurity idea~\cite{Georges92, Georges96}, and has since been extended in many different directions and settings~\cite{Georges96, Kotliar06RMP, Held2007, Maier05RMP, Potthoff03, Senechal08, Kananenka15, Rusakov19, Biermann14JPCM}. DMFT is formulated in terms of the one-particle Green's function, and solving the embedded impurity problem yields a local self-energy that is then used in the single-particle Green's function description of the periodic lattice. More recently, density matrix embedding theory (DMET) \cite{Knizia12} has been proposed as a computationally simpler quantum embedding algorithm, also for a self-consistent quantum impurity, but adopting the one-particle reduced density matrix as the fundamental variable, in conjunction with a static mean-field description of the periodic lattice~\cite{Knizia12, Bulik14, Chen14dmethoneycomb, Zheng16, Zheng17}. Because DMET only requires computing frequency-independent observables, it is less expensive than DMFT, and in practice, a wider variety of correlated electron methods can be applied to the impurity problem. A further kind of quantum embedding, density functional (or wavefunction-in-density functional) embedding \cite{Wesolowski93, Goodpaster10dftemb, Huang11dftemb, Libisch14, Jacob14, Chulhai2018, Lee19dftemb, Zhu2016, Zhu2019a} is also of much current interest. However, this is not usually applied to strongly correlated phases, and thus we do not consider it further here. In this work, we will focus our attention on the {\textit{ab initio}} implementation of DMET in periodic solids.
While DMET has been successfully applied to compute electronic phase diagrams across a range of strongly correlated lattice models~\cite{Knizia12, Bulik14, Chen14dmethoneycomb, Fan15, Zheng16, Zheng17, Zheng17sci, Gunst17, Sandhoefer16, Wu19pdmet}, the extension of DMET to a practical {\textit{ab initio}} method for periodic systems remains incomplete. There have been several works on cyclic H and Be ring structures\cite{Knizia13, Wouters16, Fulde17dmet, Pham18} and an early DMET implementation for solids that treated minimal unit cells and small basis sets~\cite{Bulik14detsolid} (e.g. 2D boron nitride in the 6-31G basis and diamond in a STO-3G basis~\cite{Bulik14detsolid}). However, such calculations are best considered model {\textit{ab initio}} calculations in the sense that the basis sets and impurity sizes are too small for quantitative or chemical accuracy. What remains to be developed is a comprehensive computational framework for periodic DMET calculations that can use both large and realistic basis sets, and treat non-trivial cluster sizes or complicated unit cells with many atoms. Describing such a framework is the purpose of the current work. To establish a practical implementation of {\textit{ab initio}} periodic DMET, it is worth outlining the similarities and differences between a calculation on a lattice model and a realistic solid. On the one hand, both models and real solids are translationally invariant over cells, and thus for an efficient computational algorithm, ${\mathbf{k}}$-point symmetry should be utilized wherever possible. On the other hand, there are many important differences, i.e.
(\lroman{1}) in a realistic solid, one needs to define the impurity basis, and different definitions can vary widely in terms of locality and other properties, (\lroman{2}) the number of atoms and basis functions per impurity cell can be very large in a realistic system, and (\lroman{3}) realistic Hamiltonians contain complicated interactions between all the basis functions, including potentially divergent long-range Coulomb terms. Thus, realizing {\textit{ab initio}} DMET involves both specifying some, in principle, arbitrary choices (such as the choice of impurity orbitals) as well as carrying out efficient implementations of many standard quantum chemistry routines, such as integrals and their transformations. The latter is also part of the general infrastructure of {\textit{ab initio}} periodic quantum chemistry. In this work we rely heavily on the periodic computational infrastructure established in the \textsc{PySCF} package~\cite{Sun18pyscf, McClain17, Sun17}, which in fact historically grew out of an effort to implement {\textit{ab initio}} DMET. The remainder of the paper is organized as follows. In Sec. \ref{sec:theory}, we first describe the detailed DMET embedding framework for periodic solids, including the definition of the impurity and lattice basis, the construction of local orbitals, bath truncation, efficient integral transformation, and DMET and charge self-consistency. In Sec. \ref{sec:results}, we apply the method to some prototype crystals with realistic basis sets and non-trivial cluster sizes with up to $\sim 300$ embedded cluster orbitals, including a 2D hexagonal boron nitride monolayer, 3D crystalline silicon, and the antiferromagnetic (AFM) \uroman{2} phase of NiO. We finish in Sec. \ref{sec:conclusion} with conclusions and remarks. \ZHC{Note added: In a recent submission, Pham et al. 
have also presented related work \cite{Pham19DMETsolid} that applies {\textit{ab initio}} DMET to periodic systems.} \section{Theory}\label{sec:theory} \subsection{DMET Implementation}\label{subsec:dmet} In this section, we describe the detailed implementation of DMET for {\textit{ab initio}} calculations in solids, focusing on aspects related to periodic systems that have not been reported in the previous DMET literature. For a general description of the DMET algorithm (and a detailed description of its molecular implementation) we refer readers to Ref. \onlinecite{Wouters16}. {\bf Lattice and impurity localized orbitals}. The infrastructure of {\textit{ab initio}} mean-field theory uses crystal (Bloch) orbitals and ${\mathbf{k}}$-point quantities, while quantum embedding is naturally formulated in terms of local orbitals and real-space quantities. Thus, we first define a translation from the mean-field computational basis to one appropriate for embedding. To do so, we construct atom-centered orthogonal local orbitals (LO) $\qty{w_i ({\mathbf{r}})}$ that define the lattice Hilbert space, which can be cleanly partitioned into a product of impurity Hilbert spaces. Here, we will assume that the mean-field computational basis is a set of crystal atomic orbitals (AOs) $\qty{\phi^{{\mathbf{k}}}_{\mu} ({\mathbf{r}})}$ (which constitutes a non-orthogonal basis, with an AO index $\mu$ and a ${\mathbf{k}}$-point index in the first Brillouin zone). It is convenient to first define an intermediate set of local crystal orbitals, \begin{equation}\label{eq:C aolo} w^{{\mathbf{k}}}_{i} ({\mathbf{r}}) = \sum_{\mu} \phi^{{\mathbf{k}}}_{\mu} ({\mathbf{r}})C^{{\mathbf{k}}, {\rm AO}, {\rm LO}}_{\mu i} , \end{equation} where the notation $C^{\mathrm{X}, \mathrm{Y}}$ denotes the transformation from basis $\mathrm{X}$ to basis $\mathrm{Y}$. 
The real-space LOs in any cell can then be obtained by a Wannier summation over the local crystal orbitals, for example, the LOs at the lattice origin (${\mathbf{R}} = {\mathbf{0}}$) are given by \begin{equation}\label{eq:w0} w^{{\mathbf{R}} = {\mathbf{0}}}_{i} ({\mathbf{r}}) = \frac{1}{\sqrt{N_{{\mathbf{k}}}}} \sum_{{\mathbf{k}}} w^{{\mathbf{k}}}_{i} ({\mathbf{r}}). \end{equation} Expressed in the LOs, the {\textit{ab initio}} periodic system is isomorphic to a periodic lattice problem, with reciprocal lattice vectors ${\mathbf{k}}$. We choose a subset of $\qty{w_i({\mathbf{r}})}$ to define the impurity. It is natural to choose the impurity to be spanned by LOs in a single unit cell or a supercell, and for definiteness, we choose the cell or supercell at the lattice origin as the impurity. {\bf{Choice of local orbitals.}} The next computational task is to specify the coefficients in Eq.~\ref{eq:C aolo} that define the LOs in terms of the crystal AOs. There are two strategies to construct orthogonal local orbitals: a \emph{top-down} strategy [transforming from canonical mean-field molecular orbitals (MOs) to LOs] and a \emph{bottom-up} strategy (transforming from the AO computational basis to LOs). The first strategy finds a unitary transformation of the MOs to optimize a metric (such as $\expval{r^2} - \expval{{\mathbf{r}}}^2$) that measures the spatial locality of the LOs. Examples of such approaches are the Boys\cite{Foster60}, Pipek-Mezey (PM)\cite{Pipek98} and Edmiston-Ruedenberg (ER)\cite{Edmiston63} methods in molecules, and the maximally localized Wannier function (MLWF)\cite{Marzari97, Marzari12RMP} and Pipek-Mezey Wannier function (PMWF)\cite{Jonsson17} methods in solids. The top-down scheme can yield more localized orbitals than bottom-up schemes. 
However, due to the need to carry out an optimization, the disadvantages are also apparent: (\lroman{1}) the procedure can be numerically expensive and one can easily get stuck in a local minimum of the cost function, particularly when constructing a large number of local virtual orbitals; (\lroman{2}) with periodic boundary conditions, entangled bands \cite{Souza01, Damle18} often exist among the high-energy virtual MOs, and special techniques are required; (\lroman{3}) a false minimum or discontinuity in ${\mathbf{k}}$-space can lead to non-real orbitals after the Wannier summation in Eq. \ref{eq:w0}, giving a Hamiltonian with complex coefficients in the LO basis, which is incompatible with many impurity solver implementations. In the bottom-up strategy, one avoids optimization and relies only on linear algebra to construct the LOs. Examples of LOs of this type are the L\"owdin and meta-L\"owdin orbitals \cite{Lowdin50, Sun14qmmm}, natural atomic orbitals (NAO) \cite{Reed85} and intrinsic atomic orbitals (IAO) \cite{Knizia13IAO}. Bottom-up methods avoid the difficulties of the top-down strategy: (\lroman{1}) the construction is usually cheap (i.e. suited to producing large numbers of local orbitals); (\lroman{2}) there is no initial guess dependence or local minimum problem; (\lroman{3}) the LOs are guaranteed to be real as long as the phases of crystal AOs and other ${\mathbf{k}}$-space orbitals in the formalism (e.g. the reference crystal AOs used to construct the IAOs) are smooth in ${\mathbf{k}}$-space. Since we aim to carry out calculations beyond a minimal basis, and thus with many virtual orbitals, we have chosen the bottom-up strategy to avoid difficulties in optimization and non-real Hamiltonian coefficients. In particular, we have adapted the molecular IAO routine to crystal MOs with ${\mathbf{k}}$-point sampling (see Appendix~\ref{app: kiao}) to generate the set of crystal IAOs. 
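To make the bottom-up construction concrete, the following minimal numpy sketch (our own illustration, with plain L{\"o}wdin orthogonalization standing in for the meta-L{\"o}wdin/IAO schemes; all function names are ours) builds orthogonal LOs at each ${\mathbf{k}}$-point and performs the Wannier summation of Eq.~\ref{eq:w0}:

```python
import numpy as np

def lowdin_lo(s_k):
    """Bottom-up LOs at one k-point via Lowdin orthogonalization,
    C^{k,AO,LO} = S_k^{-1/2} (a simple stand-in for the meta-Lowdin/IAO
    constructions; s_k is the Hermitian AO overlap matrix at this k-point)."""
    e, v = np.linalg.eigh(s_k)
    return (v * e**-0.5) @ v.conj().T   # unique PSD inverse square root

def wannier_r0(c_k_list):
    """Wannier summation of Eq. (w0): LO coefficients in the reference cell,
    C^{R=0} = (1/sqrt(Nk)) sum_k C^k."""
    return sum(c_k_list) / np.sqrt(len(c_k_list))
```

If the ${\mathbf{k}}$-mesh contains ${\mathbf{k}}$ and $-{\mathbf{k}}$ with smoothly conjugate quantities, the ${\mathbf{R}}={\mathbf{0}}$ coefficients returned by \texttt{wannier\_r0} are real, as required by real-arithmetic impurity solvers.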
The crystal IAOs are \emph{valence} orbitals that exactly span the occupied space of the mean-field calculation. Note that the number of IAOs equals the size of the minimal basis. To obtain a complete set of LOs that span the same space as the original AO basis (thus making a square rotation matrix $C^{{\mathbf{k}}, {\rm AO}, {\rm LO}}$ in Eq. \ref{eq:C aolo}), we need to further augment the IAOs with LOs that live purely in the virtual space. Here we choose these additional orbitals to be the projected atomic orbitals (PAO) for non-valence orbitals \cite{Saebo93}, orthogonalized with L{\"o}wdin orthogonalization, as originally proposed for local correlation calculations\cite{Saebo93}. The IAOs + PAOs then together span the complete space of AOs and constitute a complete LO basis. A related scheme has previously been used in molecular DMET calculations\cite{Wouters16, Motta17}. {\bf{DMET bath and truncation.}} The DMET embedded Hilbert space consists of the impurity LOs and a set of bath orbitals; these together are the embedding orbitals (EOs). We define the bath orbitals in DMET by using the SVD of the mean-field off-diagonal density matrix between the impurity and remaining lattice $\gamma^{{\mathbf{R}} \neq {\mathbf{0}}, {\mathbf{0}}}_{ij}$~\cite{Wouters16}, \begin{equation}\label{eq:bath R SVD} \gamma^{{\mathbf{R}} \neq {\mathbf{0}}, {\mathbf{0}}}_{ij} = \sum_{\tilde{i}} B^{{\mathbf{R}} \neq {\mathbf{0}}}_{i\tilde{i}} \Lambda_{\tilde{i}\tilde{i}} V^{{\mathbf{0}} \dagger}_{\tilde{i}j} , \end{equation} where $B^{{\mathbf{R}} \neq {\mathbf{0}}}$ gives the coefficients of the bath orbitals and we use ``$\sim$'' above the orbital indices to denote orbitals in the embedding space.
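To make Eq.~\ref{eq:bath R SVD} concrete, a minimal numpy sketch (illustrative only; the names and the truncation threshold are ours) that extracts bath orbitals from the environment--impurity block of a mean-field density matrix is:

```python
import numpy as np

def dmet_bath(dm_env_imp, tol=1e-8):
    """Bath orbitals from the SVD of gamma^{R!=0,0} (Eq. bath R SVD).
    dm_env_imp: (n_env, n_imp) environment-impurity density-matrix block.
    Returns the bath coefficients B (n_env, n_bath) and singular values."""
    b, lam, _ = np.linalg.svd(dm_env_imp, full_matrices=False)
    keep = lam > tol          # drop unentangled (near-zero) directions
    return b[:, keep], lam[keep]
```

By construction at most $n_{\mathrm{imp}}$ bath orbitals survive, and directions with near-zero singular values (core and high-energy virtual impurity orbitals) are discarded, mirroring the valence-only truncation described in the text.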
The overall projection from the LO basis to the EO basis then has the following form, \begin{equation}\label{eq:C loemb R} C^{{\mathbf{R}}, {\rm LO}, {\rm EO}} = \begin{bmatrix} \mathbbm{1} & {\mathbf{0}} \\ {\mathbf{0}} & \mathbf{B}^{{\mathbf{R}} \neq {\mathbf{0}}} \end{bmatrix} , \end{equation} where the identity block means that the impurity LOs (i.e. the basis defined in Eq. \ref{eq:w0}) are left unchanged. To transform from the computational crystal AO basis to the embedding orbitals, we multiply two transformations, \begin{align}\label{eq:C loemb k} C^{{\mathbf{k}}, {\rm LO}, {\rm EO}} = \sum_{{\mathbf{R}}} {\mathrm{e}}^{-{\mathrm{i}} {\mathbf{k}} \cdot {\mathbf{R}}} C^{{\mathbf{R}}, {\rm LO}, {\rm EO}} \notag, \\ C^{{\mathbf{k}}, {\rm AO}, {\rm EO}} = C^{{\mathbf{k}}, {\rm AO}, {\rm LO}} C^{{\mathbf{k}}, {\rm LO}, {\rm EO}} . \end{align} Although the DMET bath is formally of the same size as the number of impurity orbitals, the mean-field wavefunction only contains appreciable entanglement between partially occupied LOs on the impurity and corresponding bath orbitals. Very low-lying core and high-energy virtual impurity orbitals thus are not entangled with any bath orbitals. In practice, this manifests as very small singular values $\Lambda_{\tilde{i}\tilde{i}}$ and the corresponding singular vectors (bath orbitals) can vary between different DMET iterations~\cite{Wouters16} leading to difficulties in converging the DMET self-consistency procedure. To eliminate this instability, we use the procedure previously recommended in molecular DMET calculations \cite{Wouters16}. We first partition the impurity orbitals into core, valence and virtual orbitals, and only carry out the SVD for the impurity valence columns of the off-diagonal density matrix to construct corresponding valence bath orbitals~\cite{Wouters16}, i.e. the index $j$ in Eq. \ref{eq:bath R SVD} can be constrained to the valence orbitals only. 
Note that when pseudopotentials are used in the calculation, there is no core subspace, and thus no core bath orbitals appear. With this construction, the number of embedding orbitals is reduced from $2 n_{{\mathrm{imp}}}$ to $n_{{\mathrm{imp}}} + n_{{\mathrm{val}}}$, where $n_{{\mathrm{val}}}$ is the number of valence orbitals, which is smaller than the number of impurity orbitals $n_{{\mathrm{imp}}}$, and we recover smooth DMET convergence. {\bf{Constructing the embedding Hamiltonian.}} Using the EOs defined above, we can construct the DMET embedding Hamiltonian. The embedding Hamiltonian in the DMET \emph{interacting bath} formalism\cite{Knizia13,Wouters16} takes the form, \begin{equation}\label{eq:emb Ham} \mathcal{H} = \sum_{\tilde{i}\tilde{j}} \tilde{F}_{\tilde{i}\tilde{j}} c^{\dagger}_{\tilde{i}}c_{\tilde{j}} - \mu \sum_{\tilde{i}\in {\mathrm{imp}}} c^{\dagger}_{\tilde{i}}c_{\tilde{i}} +\frac{1}{2} \sum_{\tilde{i}\tilde{j} \tilde{k}\tilde{l}} \eri{\tilde{i}\tilde{j}}{\tilde{k}\tilde{l}} c^{\dagger}_{\tilde{i}}c^{\dagger}_{\tilde{k}}c_{\tilde{l}}c_{\tilde{j}} . \end{equation} Besides the normal one- and two-particle terms, a chemical potential $\mu$ is added to the impurity Hamiltonian so that the number of electrons on the impurity is constrained to be precisely correct. An alternative choice is the DMET \emph{non-interacting bath} formalism~\cite{Wouters16}. In this case, the two-particle interactions are restricted to the impurity orbitals, and interactions on the bath are mimicked by adding the correlation potential to the bath. For further details, we refer to Ref.~\onlinecite{Wouters16}. In this work, we primarily use the interacting bath formalism, and only briefly consider the non-interacting bath formalism for comparison. 
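The role of the chemical potential $\mu$ can be illustrated on a toy closed-shell one-body model (a hypothetical sketch under our own simplifying assumptions, using simple bisection at the mean-field level; this is not the production fitting procedure):

```python
import numpy as np

def imp_nelec(f_eo, mu, imp_idx, nelec):
    """Impurity electron count for a given mu: subtract mu on the impurity
    diagonal (as in Eq. emb Ham), occupy the lowest nelec/2 orbitals
    (closed shell), and trace the density matrix over the impurity block."""
    h1 = f_eo.copy()
    h1[imp_idx, imp_idx] -= mu
    _, c = np.linalg.eigh(h1)
    occ = c[:, : nelec // 2]
    dm = 2.0 * occ @ occ.T
    return np.trace(dm[np.ix_(imp_idx, imp_idx)])

def fit_mu(f_eo, imp_idx, nelec, n_imp_target, lo=-5.0, hi=5.0, steps=60):
    """Bisection on mu so that the impurity electron count hits the target;
    larger mu lowers the impurity levels and attracts more electrons."""
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if imp_nelec(f_eo, mid, imp_idx, nelec) < n_imp_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At the fitted $\mu$, the impurity carries precisely the target number of electrons, which is the constraint imposed in Eq.~\ref{eq:emb Ham}.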
To obtain the coefficients of the embedding Hamiltonian, we first transform the Fock matrix from the AOs to the EOs, \begin{equation}\label{eq:fock transform} F^{{\mathbf{0}}, {\rm EO}} = \frac{1}{N_{{\mathbf{k}}}}\sum_{{\mathbf{k}}} C^{{\mathbf{k}}, {\rm AO}, {\rm EO} \dagger} F^{{\mathbf{k}}, {\rm AO}} C^{{\mathbf{k}}, {\rm AO}, {\rm EO}} , \end{equation} where $F^{{\mathbf{k}}, {\rm AO}}$ is the Fock matrix in the periodic mean-field calculation. [Note that regardless of the mean-field orbitals used (i.e. Hartree-Fock or DFT), the Fock matrix refers to the Hartree-Fock one-particle Hamiltonian, \emph{not} the Kohn-Sham Hamiltonian]. To eliminate double counting, we subtract the contribution of the embedding electron repulsion integrals (ERIs, see below for their construction) from the transformed Fock matrix $F^{{\mathbf{0}}, {\rm EO}}$ in Eq. \ref{eq:fock transform}, \begin{equation}\label{eq:fock double counting} \tilde{F}_{\tilde{i}\tilde{j}} = F^{{\mathbf{0}}, {\rm EO}}_{\tilde{i}\tilde{j}} - \qty[\sum_{\tilde{k}\tilde{l}} \eri{\tilde{i}\tilde{j}}{\tilde{k}\tilde{l}} \gamma_{\tilde{l}\tilde{k}} - \frac{1}{2}\eri{\tilde{i}\tilde{k}}{\tilde{l}\tilde{j}} \gamma_{\tilde{k}\tilde{l}}] , \end{equation} where $\gamma$ is the density matrix rotated to the embedding basis. The construction and integral transformation of the two-particle ERIs of the embedding orbitals can be computationally expensive. A significant reduction in cost is obtained by using density fitting \cite{Whitten73, Sun17}. Density fitting defines the 4-center ERIs in terms of the 3-center ERIs. 
In the presence of ${\mathbf{k}}$ symmetry, this takes the form \begin{equation}\label{eq:density fitting} \eri{\mu {\mathbf{k}}_{\mu} \nu {\mathbf{k}}_{\nu}}{\kappa {\mathbf{k}}_{\kappa} \lambda {\mathbf{k}}_{\lambda}} \approx \sum_{L} \eri{\mu {\mathbf{k}}_\mu \nu {\mathbf{k}}_\nu}{L} \eri{L}{\kappa {\mathbf{k}}_\kappa \lambda {\mathbf{k}}_\lambda} , \end{equation} where $L$ is the auxiliary basis and only three ${\mathbf{k}}$ indices are independent. There are many choices of auxiliary basis, and here we will mainly use Gaussian density fitting (GDF), where $L$ is a set of chargeless Gaussian crystal orbitals, with the divergent part of the Coulomb term treated in Fourier space~\cite{Sun17}. [We discuss plane-wave density fitting (FFTDF) in Appendix \ref{app:fftdf eri}]. $L$ has an implicit ${\mathbf{k}}$ dependence in Eq.~\ref{eq:density fitting}. This means the 3-center integral $\eri{L}{\mu{\mathbf{k}}_\mu \nu{\mathbf{k}}_\nu}$ is more precisely written as $\eri{L{\mathbf{k}}_L}{\mu{\mathbf{k}}_\mu \nu{\mathbf{k}}_\nu}$, where ${\mathbf{k}}_L={\mathbf{k}}_\mu-{\mathbf{k}}_\nu + n\mathbf{b}$ due to momentum conservation ($n\mathbf{b}$ is an integer multiple of the reciprocal lattice vectors). We construct the embedding ERIs starting from the GDF 3-center integrals according to Algorithm \ref{alg:eri with gdf}.
\begin{algorithm}[hbt] \caption{Pseudocode for the embedding ERI transformation with GDF.} \label{alg:eri with gdf} \begin{algorithmic}[1] \For{all ${\mathbf{k}}_{L}$} \For{$\qty({\mathbf{k}}_{\mu}, {\mathbf{k}}_{\nu})$ that conserves momentum} \State Transform $\eri{L}{\mu{\mathbf{k}}_\mu \nu{\mathbf{k}}_\nu}$ to $\eri{L}{\tilde{i}{\mathbf{k}}_\mu \tilde{j} {\mathbf{k}}_\nu}$ by $C^{{\mathbf{k}}, {\rm AO}, {\rm EO}}$ \Comment{${\mathbf{k}}$-AO to ${\mathbf{k}}$-EO} \State $\eri{L}{{\mathbf{0}} \tilde{i} {\mathbf{0}} \tilde{j}} \mathrel{+}= \frac{1}{N_{{\mathbf{k}}}} \eri{L}{\tilde{i} {\mathbf{k}}_\mu \tilde{j} {\mathbf{k}}_\nu}$ \Comment{FT to the reference cell ${\mathbf{R}} = {\mathbf{0}}$} \EndFor \State $\eri{\tilde{i}\tilde{j}}{\tilde{k}\tilde{l}} \mathrel{+}= \frac{1}{N_{{\mathbf{k}}}} \sum_{L} \eri{{\mathbf{0}} \tilde{i} {\mathbf{0}} \tilde{j}}{L} \eri{L}{{\mathbf{0}} \tilde{k} {\mathbf{0}} \tilde{l}}$ \Comment{Contraction for the embedding ERI} \EndFor \end{algorithmic} \end{algorithm} \ZHC{In this algorithm, the final contraction step scales as $\mathcal{O}\qty(n_{{\mathbf{k}}} n_{L} n^4_{{\rm EO}})$ while the transformation step (${\mathbf{k}}$-AO to ${\mathbf{k}}$-EO) scales as $\mathcal{O}\qty(n^{2}_{{\mathbf{k}}} n_{L} n_{{\rm AO}} n^{2}_{{\rm EO}}) + \mathcal{O}\qty(n^{2}_{{\mathbf{k}}} n_{L} n^2_{{\rm AO}} n_{{\rm EO}})$ where we use $n_{{\rm AO}} (n_{{\rm EO}})$ to denote the number of atomic (embedding) basis functions per cell. Note that $n_{{\rm EO}}$ is larger than $n_{{\rm AO}}$ and thus the first term is the dominant term.} If the number of ${\mathbf{k}}$-points is not too large, the contraction is the rate determining step. It is noteworthy that the scaling with respect to ${\mathbf{k}}$ is only linear (contraction) and quadratic (transformation). 
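For readers implementing this, a $\Gamma$-point-only numpy analogue of Algorithm~\ref{alg:eri with gdf} (our own sketch, omitting the ${\mathbf{k}}$-point loops and the disk blocking used in practice) makes the two steps explicit:

```python
import numpy as np

def emb_eri_gamma(l_ao, c_eo):
    """Gamma-point sketch of the embedding ERI transformation.
    l_ao: (n_aux, n_ao, n_ao) 3-center DF integrals (L|mu nu).
    c_eo: (n_ao, n_eo) AO->EO rotation.
    Returns the 4-center embedding ERIs (ij|kl)."""
    # step 1: rotate (L|mu nu) -> (L|i j) by two half-transformations
    l_eo = np.einsum('xmn,mi,nj->xij', l_ao, c_eo, c_eo, optimize=True)
    # step 2: contract the auxiliary index, (ij|kl) = sum_L (L|ij)(L|kl)
    return np.einsum('xij,xkl->ijkl', l_eo, l_eo, optimize=True)
```

The contraction in step 2 dominates the cost for a modest number of ${\mathbf{k}}$-points, consistent with the scaling analysis above.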
As an example, the embedding ERIs of a $3\times3\times1$ cluster of boron nitride (GTH-DZVP basis and a $6\times6\times1$ mean-field lattice corresponding to transforming 936 crystal AOs to 306 embedding orbitals) can be constructed in about 200s using 28 cores. The largest objects during the calculation are the final set of ERIs $\eri{\tilde{i}\tilde{j}}{\tilde{k}\tilde{l}}$ and the AO density fitting integral $(L|\mu{\mathbf{k}}_\mu \nu {\mathbf{k}}_\nu)$. The latter is stored on disk and loaded into memory blockwise to further reduce the required memory. Finally, we note that if the impurity solver supports density fitting without requiring explicit ERIs, the contraction step in Algorithm \ref{alg:eri with gdf} can be omitted. {\bf{DMET and charge self-consistency.}} A key component in the DMET description of phases and order parameters is the imposition of self-consistency between the ``high-level'' (HL) embedded wavefunction and the ``low-level'' (LL) mean-field description. We matched the correlated one-particle density matrix $\gamma$ from the impurity solver and the mean-field one-particle density matrix by minimizing their Frobenius norm difference with respect to the correlation potential $u$, \begin{equation}\label{eq:cost func} \min_{u} \sum_{ij \in {{\rm EO}}} \qty[\gamma^{{\rm LL}}_{ij}(u) - \gamma^{{\rm HL}}_{ij}]^2 , \end{equation} where the indices $i, j$ loop over all embedding orbitals \ZHC{and the high-level density matrix $\gamma^{{\rm HL}}_{ij}$ is kept fixed during the correlation potential fitting. } Other choices of cost function are also possible, e.g. only matching the impurity \cite{Knizia13, Wouters16} or diagonal part \cite{Bulik14} of the density matrix. However, we only consider full matching in this work. \ZHC{The correlation potential is a local term (i.e. independent of the impurity cell ${\mathbf{k}}$). 
In the current work, the correlation potential is chosen to be a spin-dependent potential where the number of independent elements per spin-component is $n_{{\rm LO}} (n_{{\rm LO}} + 1) / 2$.} With large basis sets, the number of parameters in $u$ can be very large. To reduce the degrees of freedom in the numerical optimization, we can add $u$ only to a subset of orbitals, e.g. the valence orbitals. With a small set of parameters, the optimization problem can be easily solved, e.g. by a conjugate gradient algorithm. \ZHC{It should be noted that the minimization of the cost function is not a convex problem, thus in principle there can be multiple local minima; for example in an AFM system, there may be multiple solutions corresponding to different spin polarization patterns. However, we have not observed multiple local minima in this work, since the BN and Si systems do not break spin symmetry, and in NiO, we always start with a particular AFM order in the initial guess of $u$.} In an {\textit{ab initio}} DMET calculation, an additional layer of self-consistency appears associated with the non-linear {\textit{ab initio}} lattice mean-field calculation [this is sometimes referred to as \emph{charge self-consistency} (CSC) in DMFT calculations \cite{Savrasov01, Savrasov04, Pourovskii07, Park14dmftcsc}]. In our implementation, the AO-based Fock matrix $F^{{\mathbf{k}}, {\rm AO}}$ is updated at the beginning of each DMET cycle, using the improved DMET mean-field density matrix from the previous iteration, which reflects the response of the mean-field density (matrix) to the DMET local correction. We always perform CSC in our calculations unless otherwise specified. We finally note that the LOs, in principle, can be redefined based on the new mean-field MOs at each DMET iteration. However, we do not consider such an update in the current work. 
Instead, we only determine the LOs at the beginning of the calculation and keep the LOs fixed in the following DMET self-consistency loops. This choice introduces a small dependence on the initial orbitals (e.g. using HF- or DFT-MOs to define the LOs). However, it is usually reasonable to assume that the LOs do not change significantly during the embedding self-consistency. \ZHC{ We illustrate the periodic {\textit{ab initio}} DMET algorithm, with both DMET correlation potential and charge self-consistency, in Fig. \ref{fig:DMET flowchart}. } \begin{figure}[hbt] \includegraphics[width=0.5\textwidth]{./fig/DMET-algo-flowchart.pdf} \caption{\ZHC{The DMET self-consistency procedure, where ``mf'' is used to denote the relevant mean-field physical quantities, e.g. the Fock matrix $F$ and density matrix $\gamma$; $\mu$ and $u$ denote the chemical potential and correlation potential, respectively. ``CSC'' denotes charge self-consistency and is an optional step in the algorithm. The flowchart starts at the blue block and ends at the green block when self-consistency is reached. }}\label{fig:DMET flowchart} \end{figure} \subsection{Computational Details}\label{sec:compdetails} We consider three prototypical solids: a 2D hexagonal boron nitride monolayer (h-BN), crystalline silicon (Si) and nickel monoxide (NiO). The lattice parameters were taken from experiment: $a = 2.50 \text{\AA}$ for the BN monolayer\cite{Li11hBN} (with $20.0 \text{\AA}$ of vacuum to eliminate fictitious interactions between periodic images); $a = 5.43053 \text{\AA}$ for Si \cite{Tobbens01Si}, and $a = 4.17 \text{\AA}$ for NiO \cite{Cheetham83NiO}. To target the AFM-\uroman{2} state, the minimal unit cell of NiO was chosen as the rhombohedral cell that contains two formula units of NiO. \ZHC{We used 28 Intel CPU cores in all the calculations.} We summarize the computational parameters for DMET below.
{\bf{Mean-field calculations.}} All mean-field calculations were performed using the \textsc{PySCF} package \cite{Sun18pyscf} with Hartree-Fock or DFT [Perdew-Burke-Ernzerhof (PBE) functional \cite{Perdew96PBE}]. GTH pseudopotentials \cite{Goedecker96, Hartwigsen98} were used to replace the sharp core electron density, with corresponding GTH-DZVP ($2s2p3s3p3d$ AOs for B and N, and $3s3p3d4s4p$ AOs for Si) and GTH-DZVP-MOLOPT-SR ($3s3p3d4s4p4d4f5s$ AOs for Ni, and $2s2p3s3p3d$ AOs for O) basis sets~\cite{VandeVondele2007} used to represent the valence electrons. Gaussian density fitting was used to compute the two-electron integrals~\cite{Sun17}. \ZHC{We used an even-tempered Gaussian basis \cite{Stoychev17ETBbasis} as the density fitting auxiliary basis, i.e. $L_{nl} (r) \propto r^l \exp(\alpha \beta^n r^2)$, where we used the exponential factor $\beta = 2.3$ for NiO and $\beta = 2.0$ for all other systems. The number of fitting functions was chosen to ensure high accuracy, and thus the size of the auxiliary basis is about 10 times as large as the number of AOs.} The GTH-SZV (h-BN and Si) and GTH-SZV-MOLOPT-SR (NiO) basis functions were used as the reference free-atom AOs to construct the IAOs. In the mean-field calculations used to derive the embedding Hamiltonian and in the DMET self-consistency, we sampled the Brillouin zone with a $\Gamma$ centered mesh chosen so as to be able to fit unit multiples of the DMET impurity supercell. These included a $6\times 6 \times 1$ mesh for BN, and a $4 \times 4 \times 4$ mesh for Si and NiO. Larger meshes were used in independent estimates of the mean-field TDL for BN (up to $12 \times 12 \times 1$) and Si (up to $8\times 8 \times 8$). All mean-field calculations were converged to an accuracy of better than $10^{-10}$ a.u. per unit cell. 
In the case of Hartree-Fock energies, all energies included the leading-order exchange finite-size correction (probe-charge Ewald~\cite{Paier05, Sundararaman13regularization}, \texttt{exxdiv=ewald} in \textsc{PySCF}). Note that the above correction applies to all DMET energies, as these use the Hartree-Fock expression for the mean-field energy even when density functional orbitals are used. {\bf{Impurity solver.}} We used coupled cluster singles and doubles (CCSD) \cite{Bartlett07} as the impurity solver, as implemented in \textsc{PySCF} \cite{Sun18pyscf}, which is able to treat a large number of orbitals efficiently. In NiO, where DMET self-consistency produced symmetry breaking, we used unrestricted CCSD (UCCSD). The CC density matrices were obtained from the CC $\Lambda$ equations~\cite{Shavitt09book}. The CC energies were converged to $10^{-8}$ a.u. {\bf{DMET self-consistency.}} For BN and NiO, the correlation potential $u$ was added only to the valence orbitals, while for Si, $u$ was added to all impurity orbitals, as this gave smoother DMET convergence. We carried out CSC calculations for all three systems, and included additional non-CSC results for NiO for comparison. The convergence criterion on the DMET self-consistency was chosen such that the maximal change of an element in $u$ was less than $5\times 10^{-5}$ a.u., which corresponded roughly to an energy accuracy of better than $1\times 10^{-5}$ a.u. \section{Results and Discussion}\label{sec:results} \subsection{2D Boron Nitride}\label{subsec:BN} We first study the behavior of DMET on a 2D boron nitride monolayer. In a GTH-DZVP basis, BN has a unit cell of 2 atoms, with $2s2p$ AOs on each atom giving 8 valence orbitals per cell, and $3s3p3d$ AOs on each atom providing 18 higher-energy virtual orbitals per cell. We illustrate the valence IAOs of boron in BN in Fig. \ref{fig:orb iao}.
\begin{figure}[hbt] \subfigure[]{\label{fig:orb iao}\includegraphics[width=0.4\textwidth]{./fig/BN-iao.pdf}} \subfigure[]{\label{fig:orb bath}\includegraphics[width=0.4\textwidth]{./fig/BN-bath.png}} \caption{Impurity orbitals and bath density of BN used in the DMET calculations. The boron and nitrogen atoms are colored pink and blue respectively. (\lroman{1}) Impurity valence orbitals associated with one boron atom (IAOs from boron). (\lroman{2}) Bath orbital density coupled to the first reference cell.} \end{figure} As expected, the IAOs of boron are quite local, retaining their original AO character but with some slight polarization to reflect the mean-field solution in the crystal environment. The bath orbital density is plotted in Fig. \ref{fig:orb bath} (we only show the total density summed over the bath orbitals here, since the embedded problem only depends on the linear span of the bath). It is clear that the bath orbitals are localized around the impurity cluster and give an effective representation of the remainder of the boron nitride crystal. In particular, the bath orbitals serve to terminate the dangling bonds on the impurity boundary, thus turning the embedding problem into a closed-shell one at the mean-field level. The impurity valence orbitals and bath orbitals pictured here, together with the impurity virtual orbitals (not shown), constitute the embedding orbitals. We computed total energies (per cell) from DMET for different cluster sizes, $1\times 1$, $2\times 2 $ and $3\times 3 $. We compare these total energies to those from ${\mathbf{k}}$-sampled periodic CCSD (${\mathbf{k}}$-CCSD) extrapolated to the TDL (see Fig. \ref{fig:BN extrapolation}) \ZHC{which has recently been demonstrated to be a high accuracy method in a variety of different materials~\cite{McClain17, Gao19CCTMO, Zhang19KCCreview}. 
Note that, accounting fully for the ${\mathbf{k}}$-point symmetry, ${\mathbf{k}}$-CCSD has a computational scaling of $n_{{\rm AO}}^6 n^4_{{\mathbf{k}}}$.} \begin{figure}[hbt] \includegraphics[width=0.5\textwidth]{./fig/BN-extrapolation.pdf} \caption{Upper panel: Total energy from DMET compared with ${\mathbf{k}}$-sampled CCSD. In the case of DMET with the interacting bath (IB), both one-shot and self-consistent energies are reported. DMET with the non-interacting bath (NIB) is also shown for comparison. The extrapolated DMET values are obtained from an average of linear and quadratic fits. The error bar is the difference between the linear and quadratic fitted values. \ZHC{ We plot the ${\mathbf{k}}$-CCSD energies at small ${\mathbf{k}}$-meshes (one curve using the HF energy at the corresponding small ${\mathbf{k}}$-mesh, the other using the HF energy at the $6\times 6$ ${\mathbf{k}}$-mesh), together with the extrapolated TDL results, as reference. Lower panel: Correlation energy ratio with respect to the extrapolated CCSD correlation energy.} }\label{fig:BN extrapolation} \end{figure} The reference TDL ${\mathbf{k}}$-CCSD energy is the sum of the extrapolated HF energy using a large ${\mathbf{k}}$-mesh (up to $12\times 12 \times 1$, extrapolating with the form $n^{-1}_{{\mathbf{k}}}$ after applying the Ewald exchange divergence correction~\cite{Gygi86, Paier05}) and the extrapolated ${\mathbf{k}}$-CCSD correlation energy using a smaller ${\mathbf{k}}$-mesh (up to $6\times 6 \times 1$, extrapolating with the form $n^{-1}_{{\mathbf{k}}}$). Compared to the TDL reference energy, even using the smallest ($1 \times 1$) cluster, DMET gives an accurate total energy that captures about 95\% of the correlation energy. Extrapolating over the DMET cluster size (using the surface-to-volume form $N_{\mathrm{c}}^{-1/2}$, where $N_{\mathrm{c}}$ is the cluster size) further improves the accuracy by about 1-2\% in the correlation energy. The one-shot DMET result (i.e.
without DMET self-consistency) is less accurate than the self-consistent one by $\sim 8$ mHartree (3\% of the correlation energy), demonstrating the contribution of self-consistent matching between the high-level calculation and the low-level mean-field calculation. We note that self-consistency is generally not very important in non-magnetic weakly-correlated systems, as there are no symmetry broken phases to be generated by DMET, and only provides a modest quantitative correction to the observables. Compared to small $N\times N\times 1$ ${\mathbf{k}}$-mesh CCSD energies, the DMET total energies are more accurate for the $1 \times 1$ and $2 \times 2$ cluster sizes, but less accurate for the $3 \times 3$ case. \ZHC{The finite size error in the total energy, arising from the finite ${\mathbf{k}}$-mesh or DMET cluster size, can be separated into two sources, (\lroman{1}) the finite size error in the mean-field energy and (\lroman{2}) the finite size error in the many-body correlation energy.} For embedding methods like DMET, the error from the first source is (largely) eliminated. Thus, as shown in Fig. \ref{fig:BN extrapolation}, the DMET total energy is good even for a small cluster size. In the CCSD calculation, however, the error from (\lroman{1}) is large for small clusters, and therefore, a potentially better recipe for the total energy is to sum the HF energy from a larger cluster (or even extrapolated to the TDL) and the correlation energy from the small cluster calculation. \ZHC{In the upper panel, we show the ${\mathbf{k}}$-CCSD correlation energy added to the $6\times 6$ HF energy (corresponding to the size of the DMET lattice), as well as to the extrapolated TDL HF energy.} Together with the data in the lower panel of Fig. 
\ref{fig:BN extrapolation}, we see that the correlation energy $E_{\mathrm{corr}}$ of CCSD, which relies on the above error cancellation, is already very accurate for the $2 \times 2$ cluster and is better than that of DMET for this cluster size. It is then worth analyzing the source of errors in the small cluster DMET correlation energy. One source is the lack of embedding of the non-valence virtual orbitals, which are localized to the reference cell with the periodicity of the large DMET mean-field lattice, not the periodicity of the impurity (as in the ${\mathbf{k}}$-CCSD calculation). The advantages of DMET in the current implementation thus manifest when the predominant correlation is within the valence space itself (which is fully embedded) as is typical of strong correlations, rather than primarily involving excitations to non-embedded, non-valence, virtual orbitals as in this system. One way to diminish the boundary effect on the DMET non-valence virtuals is to evaluate the energy from the central part of the supercell, for which the surrounding atoms effectively provide a bath for the virtuals. We find then that the energy evaluated using the central cell of the embedded cluster covers 103.8\% of the correlation energy (using the preceding $3\times 3$ cluster calculation) or 100.1\% (if no chemical potential fitting is used), which is better than that obtained by direct energy evaluation using the entire embedded cluster. It may be possible to further reduce this boundary error using the dynamical cluster approximation formulation of DMET (DCA-DMET)\cite{Zheng17} or bootstrap embedding\cite{Welborn16bootstrap, Ricke17bootstrap, Ye19bootstrap}. We finally consider DMET results obtained using the non-interacting bath (NIB), as also shown in Fig. \ref{fig:BN extrapolation}. We see that although the extrapolation is quite systematic, the accuracy is worse than that of the interacting bath for all three cluster sizes. 
This result is generally found in chemical systems with long-range Coulomb interactions, as the interacting bath carries some information about the inter-cluster interactions. However, the NIB formalism has the potential computational advantage that the construction of the NIB embedded Hamiltonian is cheaper than the IB one, since only the impurity part of the two-particle Hamiltonian is needed. In addition, the correlation potential can be used to mimic the effect of the long-range Coulomb contributions to the Fock matrix. This makes the NIB scheme an interesting possibility in large systems. \subsection{Bulk Silicon}\label{subsec:Si} We next test the ability of DMET to describe the structural properties of bulk Si. We performed a series of calculations on different primitive cell volumes and fitted the relative total energy $E$ as a function of the volume $V$ using the Birch-Murnaghan (B-M) equation of state (EOS) \cite{Murnaghan44, Birch47}, from which the equilibrium volume and bulk modulus can then be determined. To obtain accurate results for the TDL, we considered three clusters of different shapes: a $1\times 1 \times 1$ primitive cell (2 Si atoms), a conventional diamond cubic cell (8 Si atoms) and a $2\times 2 \times 2$ supercell (16 Si atoms). We performed the extrapolation with respect to cluster volume $V_{\mathrm{c}}$ using \begin{equation}\label{eq:extrapolation Si} E(V_{\mathrm{c}}) = E(\infty) + a_0 V^{-1/3}_{\mathrm{c}} + \cdots \end{equation} The total energy includes the correction from HF at the TDL. The equilibrium volumes and bulk moduli are collected in Table \ref{tab:bulk property Si}. \begin{table}[ht!] \centering \caption{Equilibrium volume of the primitive cell $V_0$ and bulk modulus $B_0$ of silicon from different approaches. The extrapolated values are from the linear fit of $1\times 1\times 1$ and $2\times 2 \times 2$ results. The CCSD results are taken from Ref. \onlinecite{McClain17} \ZHC{, which uses the larger GTH-TZVP basis}. 
The experimental $V_0$ is from Ref. \onlinecite{Tobbens01Si} and $B_0$ is from Ref. \onlinecite{Schimka11} with a zero-point correction.} \label{tab:bulk property Si} \begin{tabular}{lccc} \hline\hline Methods & & $V_0$ [$\text{\AA}^3$] & $B_0$ [GPa] \\ \hline HF & extrap. & 40.30 & 107 \\ DMET & $1 \times 1 \times 1$ & 42.83 & 87.9 \\ & cubic cell & 41.90 & 88.5 \\ & $2 \times 2 \times 2$ & 41.26 & 91.1 \\ & extrap. & 39.69 & 99.0 \\ CCSD & $3 \times 3 \times 3$ & 39.21 & 103 \\ Expt.& & 40.04 & 101 \\ \hline\hline \end{tabular} \end{table} From the table, we see that the equilibrium volume of DMET using the $1\times 1 \times 1$ cluster deviates from the experimental value by 7\%. The error from the smallest impurity cluster is thus larger for Si than for BN. This is because Si has a much smaller band gap, so its correlation is less local and involves the non-valence space more strongly. However, the results improve rapidly when increasing the size of the cluster. To illustrate this, we show the EOS curves for different cluster sizes in Fig. \ref{fig:eos Si}. \begin{figure}[hbt] \includegraphics[width=0.5\textwidth]{./fig/Si-eos.pdf} \caption{Equation of state curves of Si from DMET and CCSD. For DMET, we omit the cubic cell curve for clarity. CCSD data is taken from Ref. \onlinecite{McClain17}. }\label{fig:eos Si} \end{figure} It is clear that the $1\times 1\times 1$ curve is shifted to larger volume compared to experiment or CCSD. Increasing the cluster size systematically shifts the curve back towards experiment and the ${\mathbf{k}}$-CCSD benchmark, resulting in a very small relative error (w.r.t. experiment) of 0.9\% for $V_0$ for the extrapolated curve. The extrapolated bulk modulus $B_0$ also agrees well with the experimental and ${\mathbf{k}}$-CCSD benchmark values.
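For the reader's convenience, the standard third-order Birch-Murnaghan form used for such fits can be written as follows (the fit order is our assumption, since the text does not specify it; here $E_0$ is the equilibrium energy and $B_0'$ the pressure derivative of the bulk modulus):

```latex
E(V) = E_0 + \frac{9 V_0 B_0}{16}
\qty{ \qty[ \qty(\frac{V_0}{V})^{2/3} - 1 ]^3 B_0'
    + \qty[ \qty(\frac{V_0}{V})^{2/3} - 1 ]^2
      \qty[ 6 - 4 \qty(\frac{V_0}{V})^{2/3} ] }
```

The equilibrium volume $V_0$ and bulk modulus $B_0$ reported in Table \ref{tab:bulk property Si} are the fit parameters of this curve.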
\ZHC{Overall, the accuracy achieved by extrapolated DMET appears comparable to that of the ${\mathbf{k}}$-CCSD benchmark in a full $3 \times 3 \times 3$ periodic calculation, although we note that a different basis was used.} \subsection{Nickel monoxide}\label{subsec:NiO} We now demonstrate the ability of DMET to treat a more strongly correlated problem by considering a typical transition metal compound, NiO. Below the N\'eel temperature, NiO displays an antiferromagnetic (AFM) phase with a staggered magnetization along the [111] direction (the so-called AFM-\uroman{2} phase). Although DFT (with PBE) and HF do predict spin-polarization, it is known that DFT often underpolarizes while HF often overpolarizes antiferromagnetic states. To avoid such biases in the DMET calculation, we embed the DMET calculation in an initial \emph{unpolarized} mean-field state. We constructed the unpolarized mean-field state by using the orbitals obtained from the spin-averaged Fock matrix of an unrestricted Hartree-Fock or DFT calculation. We use the spin-averaged Fock matrix for convenience because without finite-temperature smearing, the restricted calculations either have difficulty converging due to the metallic nature (DFT) or exhibit an unphysical symmetry breaking of the density between the symmetry-equivalent nickel atoms (HF). The spin-averaged Fock matrix is similar to the restricted one with smearing but exactly preserves the symmetry between the two nickel atoms. We denote DMET calculations based on the spin-averaged mean-field orbitals by ${\rm DMET}@\Phi_{{\rm RHF}}^{*}$ (${\rm DMET}@\Phi_{{\rm RPBE}}^{*}$), where ``$*$'' means the restricted orbitals are actually from the spin-averaged unrestricted Fock matrix rather than a real restricted one. The spectrum of such a spin-averaged Fock matrix is gapless. After adding an initial DMET correlation potential, e.g. 
taken from the local part of the UHF polarized potential, the system becomes gapped and $S^2$ symmetry is broken. Without CSC, the final DMET mean-field gap is $\sim 3$ eV and with CSC, the DMET mean-field gap is $\sim 10$ eV, closer to the Hartree-Fock mean-field gap ($\sim 12$ eV). \ZHC{(Note that the experimental band gap of AFM NiO is $\sim 4.3$ eV~\cite{Sawatzky84}).} It should be emphasized that although the band gap from the DMET lattice mean-field reflects the insulating nature of the system, its value does not correspond to the true fundamental gap of the system. Even if the density from the impurity solver were exact and the matching between density matrices were perfect, the mean-field gap is not exact due to the derivative discontinuity contribution\cite{Perdew17}, similar to the Kohn-Sham gap obtained from an optimized effective potential (OEP) calculation \cite{Kuemmel08RMP}. The ground state charges and local magnetic moments of NiO from DMET starting from different initial mean-fields (spin-averaged HF and PBE) are summarized in Table \ref{tab:charge and magnetic moment}. Assignment of local observables to different atoms (population analysis) was performed using the IAOs + PAOs and the density matrix from the CC impurity solver. \begin{table}[hbt] \centering \caption{Local charge (in $e$) and magnetic moment (in $\mu_{\mathrm{B}}$) of NiO from different methods. The values on Ni (O) are averaged from the two Ni (O) sites in the primitive cell. We include the DMET results from different initial orbitals ($\Phi_{{\rm RHF}}^{*}$ and $\Phi_{{\rm RPBE}}^{*}$), with / without charge self-consistency (CSC). The experimental data is taken from Refs. \onlinecite{Alperin62NiOm, Fender68NiOm, Cheetham83NiO}. 
} \label{tab:charge and magnetic moment} \begin{tabular}{lccc} \hline\hline Methods & $\rho_{\mathrm{Ni}}$ & $m_{\mathrm{Ni}}$ & $m_{\mathrm{O}}$ \\ \hline HF & 1.42 & 1.86 & 0.000 \\ PBE & 1.02 & 1.42 & 0.000 \\ ${\rm DMET}@\Phi_{{\rm RHF}}^{*}$ w/o CSC & 1.32 & 1.77 & 0.018 \\ ${\rm DMET}@\Phi_{{\rm RPBE}}^{*}$ w/o CSC & 1.27 & 1.74 & 0.017 \\ ${\rm DMET}@\Phi_{{\rm RHF}}^{*}$ w/ CSC & 1.37 & 1.81 & 0.001 \\ ${\rm DMET}@\Phi_{{\rm RPBE}}^{*}$ w/ CSC & 1.35 & 1.78 & 0.000 \\ Expt. & & 1.70-1.90 & \\ \hline\hline \end{tabular} \end{table} We also include unrestricted HF and PBE results for comparison. First, we observe clear charge transfer from Ni to O in all methods. Among them, HF gives the largest ionic character while PBE smears out the charge and predicts the smallest charge transfer. The DMET results from different starting orbitals and CSC conditions are between these two limits and are relatively close to each other. The DMET results with CSC (starting from HF and PBE) are particularly close to each other as the inter-cluster part of the density matrix is updated using information from the high-level embedded calculation. In fact, in the case of CSC, the only effect of the initial choice of orbitals in DMET on the final result comes from the different definition of the local orbitals. Compared to the experimental estimate of the magnetic moment, unrestricted Hartree-Fock gives a Ni magnetic moment at the higher end of the experimental range, while PBE severely underestimates the magnetic moment. DMET yields results independent of the starting orbitals with a moment that agrees well with experiment. To illustrate the AFM distribution in NiO, we plot the spin density distribution in the (001) plane of NiO in Fig. \ref{fig:NiO spin density}.
\begin{figure}[hbt] \includegraphics[width=0.5\textwidth]{./fig/NiO-spin-density.png} \caption{Spin density $\rho_{\alpha} - \rho_{\beta}$ on the (001) plane of NiO from ${\rm DMET}@\Phi_{{\rm RHF}}^{*}$ with charge self-consistency.}\label{fig:NiO spin density} \end{figure} In the figure, the $\alpha$- and $\beta$- spin planes alternately appear along the diagonal direction, showing a clear AFM pattern. In particular, the spin density on Ni is in the shape of the $d_{x^2-y^2}$ orbital, indicating that its occupation is asymmetric with respect to the $\alpha$ and $\beta$ electrons. In fact, the $t_{2g}$ orbitals are almost fully occupied ($\sim 5.97$ $e$ in our population analysis), and the $e_g$ orbitals ($d_{x^2-y^2}$ and $d_{z^2}$) are occupied only in one spin sector ($\sim 1.99$ $e$), and roughly empty in the other ($\sim 0.19$ $e$). The local magnetic moment on Ni therefore mainly comes from the contribution of the $e_g$ electron density, as expected from crystal field theory. The density on oxygen is in the shape of a $p$ orbital and is polarized according to its orientation relative to Ni. The average polarization on oxygen should be close to zero due to symmetry. As shown in Table \ref{tab:charge and magnetic moment}, the magnetic moments on oxygen from DMET (especially with CSC) are indeed close to zero. We now take a closer look at the spin-spin correlation in NiO. To this end, we evaluate the spin-spin correlation function between the two nickels in the unit cell, \begin{equation}\label{eq:spin-spin corr func NiO} \sum_{i\in \ce{Ni1}, j\in \ce{Ni2}}\expval{{\mathbf{S}}_i \cdot {\mathbf{S}}_j} = \sum_{i\in \ce{Ni1}, j\in \ce{Ni2}} \sum_{a=x, y, z}\expval{S_i^{a}S_j^{a}}, \end{equation} where $i$ and $j$ are the indices of LOs located on the first and second Ni respectively. 
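As a quick arithmetic cross-check (our own, using the occupations quoted above), the Ni moment implied by the $e_g$ spin imbalance alone already reproduces the DMET values in the table:

```python
# e_g occupations per spin sector quoted in the text (in units of e)
n_eg_major, n_eg_minor = 1.99, 0.19
# the t2g shell is nearly full (~5.97 e out of 6), so it is almost unpolarized
m_ni = n_eg_major - n_eg_minor   # Ni moment estimate in mu_B
print(round(m_ni, 2))            # 1.8, vs. 1.74-1.81 mu_B from DMET in the table
```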
In the DMET@$\Phi^*_{\mathrm{RHF}}$ calculation with charge self-consistency, the expectation value is $-0.8147$, where the minus sign arises from the AFM correlation between the spins of two nickels. This value, however, is very close to the product $\expval{S^{z}}\expval{S^{z}}=-0.8149$. In addition, the spin non-collinear contributions ($\expval{S^{x} S^{x}}$ and $\expval{S^{y} S^{y}}$) are almost zero (note that the calculation spontaneously chooses a $z$ magnetization axis due to the initial unrestricted Hartree-Fock reference or form of the correlation potential). All these features suggest that the ground-state of the AFM spin lattice in NiO is close to that of a classical Ising model, rather than a quantum one. Our results are consistent with experimental measurements on the critical behavior of the magnetic phase transition in NiO \cite{Chatterji09, Germann74, Negovetic73}, where the critical exponents are found to be very close to those of the 3D Ising model. In the above results, we found that the DMET order parameters are insensitive to the initial mean-field orbitals, due to the DMET self-consistency. As discussed in section~\ref{subsec:dmet}, this self-consistency contains two different contributions: self-consistency of the DMET correlation potential (expressed along the cluster blocks of the mean-field lattice Hamiltonian) and charge self-consistency of the mean-field Fock operator (for the off-diagonal blocks of the mean-field lattice Hamiltonian). To show the robustness of the self-consistency with respect to the correlation potential guess and the relative magnitude of these two contributions, we show the convergence of the local magnetic moment of Ni with respect to the number of iterations in Fig. 
\ref{fig:m-vs-iter} (for initial restricted orbitals from a spin-averaged Fock matrix $\Phi_{{\rm RHF}}^{*}$) with two different initial guesses for the correlation potential: the strongly polarized UHF potential, and a weakly polarized potential equal to the UHF potential scaled by a factor 0.1, both with and without charge self-consistency. \begin{figure}[hbt] \includegraphics[width=0.5\textwidth]{./fig/NiO-m-vs-iter.pdf} \caption{The convergence of the magnetic moment on Ni from different initial correlation potentials. Upper panel: ${\rm DMET}@\Phi_{{\rm RHF}}^{*}$ without CSC using different initial guesses: UHF potential (strongly polarized) or UHF potential scaled by 0.1 (weakly polarized). Lower panel: The same as the upper panel but with CSC.}\label{fig:m-vs-iter} \end{figure} From the figure, we see that starting from different initial guesses for the correlation potential, the magnetic moments from non-self-consistent (i.e. one-shot) DMET (the \nth{0} iteration in Fig. \ref{fig:m-vs-iter}) can be very different. However, after only 1 step, the magnetic moments are significantly improved. Eventually, the magnetic moments from the two guesses converge to a very similar value, showing that the DMET self-consistency effectively removes the initial correlation potential guess dependence. The picture with and without charge self-consistency is very similar, showing that the DMET correlation potential is the main factor controlling the local order parameter. Note that in Fig. \ref{fig:m-vs-iter}, the LOs are the same (based on Hartree-Fock) for all calculations and hence there is no initial LO dependence. 
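The near-classical (Ising-like) character described above admits a simple numerical cross-check (our arithmetic, not the authors'): for a collinear classical state, the Ni-Ni correlation should equal $-\expval{S^z}^2$ with $\expval{S^z} \approx m_{\mathrm{Ni}}/2$, using the rounded table value for DMET@$\Phi^*_{\mathrm{RHF}}$ with CSC:

```python
m_ni = 1.81                  # Ni moment from the table (DMET@RHF* with CSC), in mu_B
sz = m_ni / 2.0              # <S^z> per Ni for a collinear spin state
classical = -sz * sz         # classical (Ising) prediction for <S_Ni1 . S_Ni2>
print(round(classical, 3))   # -0.819, close to the computed value of -0.8147
```

The residual difference is consistent with the rounding of the tabulated moment.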
\ZHC{Finally, as a rough indicator of cost, each DMET iteration takes about 1 hour (the computational setup is described in Sec.~\ref{sec:compdetails}).} \section{Conclusions}\label{sec:conclusion} In this paper, we described an {\textit{ab initio}} quantum embedding scheme for density matrix embedding calculations in solids, focusing on the practical implementation choices needed for an efficient computational scheme. Our tests on the BN, Si, and NiO systems, which span a range of electronic structure, demonstrate that our implementation can handle both realistic unit cells and basis sets. The strengths of DMET are most visible in the simulations of NiO, where the wide spread in magnetic behavior generated by different mean-field approximations is almost entirely removed in the subsequent DMET calculation. In more weakly correlated systems, more work is needed to reduce the DMET errors arising from the treatment of excitations to non-valence orbitals, which are not fully embedded in our scheme. Overall, however, our results lead us to be optimistic that this computational framework provides a means to realize {\textit{ab initio}} calculations on interesting correlated solids using density matrix embedding theory. Much of the computational framework can also be reused to realize {\textit{ab initio}} dynamical mean-field theory (DMFT) in solids, and elsewhere we report the results of such a scheme.\cite{Zhu19-dmft-solid} \begin{acknowledgement} We thank James McClain for providing CCSD data on the equation of state of Si, Lin Lin and Yang Gao for helpful discussions, and Mario Motta for helpful comments on the manuscript. This work is partially supported by the US Department of Energy via award no. DE-SC19390. Additional support was provided by the Simons Foundation via an Investigatorship and through the Simons Collaboration on the Many-Electron Problem.
\end{acknowledgement} \begin{appendix} \section{k-adapted IAO and PAO}\label{app: kiao} The key ingredients for IAO construction \cite{Knizia13IAO} are the occupied MOs $\qty{\ket{\psi_{m}}}$ and two sets of bases, $B_1$ and $B_2$. Concretely, $B_1$ is the normal AO basis used in the mean-field calculation (labeled by $\mu, \nu, \cdots$) and $B_2$ is the reference minimal basis set (labeled by $\rho, \sigma, \cdots$). $B_1$ usually contains the space of $B_2$ and the extra part reflects the polarization. The goal of IAO construction is to obtain a set of AO-like orbitals that contains the occupied space but has the size of the small basis set $B_2$. To achieve this, we first define the \emph{depolarized} MOs $\qty{\ket{\psi_{\bar{m}}}}$ by projecting the MOs to $B_2$, then back to $B_1$, \begin{equation}\label{eq: IAO depolarized MO} \ket{\psi_{\bar{m}}} = \mathrm{orth}\qty(P^{B_1} P^{B_2} \ket{\psi_m}), \end{equation} where $P$ is the resolution of identity (or projector) of AOs, e.g. \begin{equation}\label{eq: IAO AO projector} P^{B_1} = \sum_{\mu \nu} \ket{\phi_{\mu}} \qty[\qty(S^{B_1})^{-1}]_{\mu \nu} \bra{\phi_{\nu}}. \end{equation} Using the depolarized MO projector $\bar{O} \equiv \sum_{\bar{m}}\dyad{\psi_{\bar{m}}}{\psi_{\bar{m}}}$, we can split the $B_2$ set into occupied ($\bar{O} \ket{\phi_{\rho}}$) and virtual ($\qty(1-\bar{O}) \ket{\phi_{\rho}}$) spaces. The IAOs $\qty{\ket{w_i}}$ are obtained by further projecting these two subspace bases onto their polarized counterparts ($O \equiv \sum_{m}\dyad{\psi_m}{\psi_m}$ and $1-O$) and applying L{\"o}wdin orthogonalization, \begin{equation}\label{eq: IAO expression} \ket{w_i} = \mathrm{orth}\qty{\qty[O \bar{O} + \qty(1-O) \qty(1-\bar{O})] \ket{\phi_\rho}}. \end{equation} In periodic systems, the quantities in the above equations should be understood to carry ${\mathbf{k}}$ labels, e.g.
$\ket{\phi_{\mu}} \rightarrow \ket{\phi_{\mu}^{{\mathbf{k}}}}$ is a crystal AO, and $S^{B_1} \rightarrow S^{{\mathbf{k}}, B_1}$ is the corresponding overlap matrix. These quantities are already evaluated in the mean-field calculations. The only thing we need additionally is the overlap matrix between basis $B_1$ and $B_2$, which can be evaluated directly, \begin{equation}\label{eq: IAO S12} S^{{\mathbf{k}}, B_1, B_2}_{\mu \rho} = \int {\mathrm{d}} {\mathbf{r}} \sum_{{\mathbf{T}}} {\mathrm{e}}^{{\mathrm{i}} {\mathbf{k}}\cdot {\mathbf{T}}} \phi_{\mu}^{*} ({\mathbf{r}}) \phi_{\rho} ({\mathbf{r}} - {\mathbf{T}}), \end{equation} where the summation is over the periodic images ${\mathbf{T}}$. After the IAOs are constructed, the ${\mathbf{k}}$-adapted PAOs are obtained by projecting out the IAO components from the AOs at each ${\mathbf{k}}$-point. \section{Embedding ERI construction with FFTDF}\label{app:fftdf eri} The embedding ERIs can also be constructed from FFTDF, which uses the fast Fourier transform to represent the Coulomb kernel and to expand the AO pairs. In such a case, $L$ in Eq. \ref{eq:density fitting} is a set of planewaves $\qty{{\mathbf{G}}}$ \cite{McClain17}, \begin{equation}\label{eq:FFTDF} \eri{\mu {\mathbf{k}}_\mu \nu {\mathbf{k}}_\nu}{\kappa {\mathbf{k}}_\kappa \lambda {\mathbf{k}}_\lambda} \approx \Omega^2 \sum_{{\mathbf{G}}} \eri{\mu{\mathbf{k}}_\mu \nu {\mathbf{k}}_\nu}{{\mathbf{G}}} \frac{4\pi}{\Omega\qty|{\mathbf{q}} + {\mathbf{G}}|^2} \eri{-{\mathbf{G}}}{\kappa {\mathbf{k}}_\kappa \lambda {\mathbf{k}}_\lambda} , \end{equation} where $\Omega$ is the volume of the unit cell, ${\mathbf{q}} \equiv {\mathbf{k}}_\mu - {\mathbf{k}}_\nu$ and only three ${\mathbf{k}}$s are independent. Similarly to the algorithm for GDF, the AO-to-EO transformation can be performed on the 3-index quantities. The procedure is described in Algorithm \ref{alg:eri with fftdf}. 
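To make the contraction concrete, here is a minimal, $\Gamma$-point-only toy version of the FFTDF ERI evaluation in NumPy (our illustrative sketch, not the production code: random real orbitals stand in for the embedding orbitals, and mesh size and normalization conventions are ours). It builds pair densities on the real-space mesh, applies the $4\pi/\qty|{\mathbf{q}}+{\mathbf{G}}|^2$ kernel with ${\mathbf{q}}={\mathbf{0}}$ in reciprocal space (dropping the divergent ${\mathbf{G}}={\mathbf{0}}$ term, as for a neutralizing background), and contracts:

```python
import numpy as np

rng = np.random.default_rng(0)
n, norb, L = 8, 3, 5.0                     # mesh points per axis, orbitals, cell edge
omega, ngrid = L**3, n**3                  # cell volume, total number of grid points
C = rng.standard_normal((norb, n, n, n))   # real "EOs" sampled on the FFT mesh

# Coulomb kernel 4*pi/|G|^2 on the reciprocal mesh; G=0 term removed
g = 2*np.pi*np.fft.fftfreq(n, d=L/n)
gx, gy, gz = np.meshgrid(g, g, g, indexing="ij")
g2 = (gx**2 + gy**2 + gz**2).ravel()
coulG = np.divide(4*np.pi, g2, out=np.zeros_like(g2), where=g2 > 0)

# pair densities rho_ij(r), their FFTs, and the final contraction into (ij|kl)
pair = (C[:, None] * C[None, :]).reshape(norb**2, n, n, n)
rhoG = np.fft.fftn(pair, axes=(1, 2, 3)).reshape(norb**2, -1)
eri = (omega / ngrid**2) * np.real((rhoG.conj() * coulG) @ rhoG.T)
```

Because the kernel is non-negative, the resulting ERI matrix (in the combined pair index) is symmetric and positive semidefinite, which is a useful sanity check on the implementation.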
\begin{algorithm}[H] \caption{Pseudocode for embedding ERI transformation with FFTDF.} \label{alg:eri with fftdf} \begin{algorithmic}[1] \For{all ${\mathbf{q}}$} \For{$\qty({\mathbf{k}}_{\mu}, {\mathbf{k}}_{\nu})$ that conserves momentum} \State Transform $\eri{{\mathbf{r}}}{\mu{\mathbf{k}}_\mu \nu{\mathbf{k}}_\nu}$ to $\eri{{\mathbf{r}}}{\tilde{i}{\mathbf{k}}_\mu \tilde{j} {\mathbf{k}}_\nu}$ by $C^{{\mathbf{k}}, {\rm AO}, {\rm EO}}$ \Comment{${\mathbf{k}}$-AO to ${\mathbf{k}}$-EO} \State $\eri{{\mathbf{r}}}{{\mathbf{0}} \tilde{i} {\mathbf{0}} \tilde{j}} \mathrel{+}= \frac{1}{N_{{\mathbf{k}}}} \eri{{\mathbf{r}}}{\tilde{i} {\mathbf{k}}_\mu \tilde{j} {\mathbf{k}}_\nu}$ \Comment{FT to the reference cell ${\mathbf{R}} = {\mathbf{0}}$} \EndFor \State Calculate $\eri{{\mathbf{G}}}{{\mathbf{0}} \tilde{i} {\mathbf{0}} \tilde{j}}$ using FFT \State $\eri{{\mathbf{G}}}{{\mathbf{0}} \tilde{i} {\mathbf{0}} \tilde{j}} \mathrel{*}= \frac{4\pi}{\Omega \qty|{\mathbf{q}} + {\mathbf{G}}|^2}$ \State Calculate $\eri{{\mathbf{r}}}{{\mathbf{0}} \tilde{i} {\mathbf{0}} \tilde{j}}$ using inverse FFT \For{$\qty({\mathbf{k}}_{\kappa}, {\mathbf{k}}_{\lambda})$ that conserves momentum} \State Transform $\eri{{\mathbf{r}}}{\kappa{\mathbf{k}}_\kappa \lambda{\mathbf{k}}_\lambda}$ to $\eri{{\mathbf{r}}}{\tilde{k}{\mathbf{k}}_\kappa \tilde{l} {\mathbf{k}}_\lambda}$ by $C^{{\mathbf{k}}, {\rm AO}, {\rm EO}}$ \Comment{${\mathbf{k}}$-AO to ${\mathbf{k}}$-EO} \State $\eri{{\mathbf{r}}}{{\mathbf{0}} \tilde{k} {\mathbf{0}} \tilde{l}} \mathrel{+}= \frac{1}{N_{{\mathbf{k}}}} \eri{{\mathbf{r}}}{\tilde{k} {\mathbf{k}}_\kappa \tilde{l} {\mathbf{k}}_\lambda}$ \Comment{FT to the reference cell ${\mathbf{R}} = {\mathbf{0}}$} \EndFor \State $\eri{\tilde{i}\tilde{j}}{\tilde{k}\tilde{l}} \mathrel{+}= \frac{1}{N_{{\mathbf{k}}}} \sum_{{\mathbf{r}}} \eri{{\mathbf{0}} \tilde{i} {\mathbf{0}} \tilde{j}}{{\mathbf{r}}} \eri{{\mathbf{r}}}{{\mathbf{0}} \tilde{k} {\mathbf{0}} \tilde{l}}$ \Comment{Contraction for the embedding 
ERI} \EndFor \end{algorithmic} \end{algorithm} \end{appendix}
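As a companion to Appendix A, the molecular (single ${\mathbf{k}}$-point) version of the IAO construction can be sketched in a few lines of dense linear algebra. This is our illustrative reimplementation, not the authors' code: random full-rank matrices stand in for real overlap integrals, and `sym_orth` plays the role of $\mathrm{orth}(\cdot)$.

```python
import numpy as np

def sym_orth(C, S):
    """Loewdin-orthonormalize the columns of C with respect to the metric S."""
    M = C.T @ S @ C
    w, V = np.linalg.eigh(M)
    return C @ V @ np.diag(w**-0.5) @ V.T

def make_iaos(S1, S2, S12, Cocc):
    """IAOs via orth{[O*Obar + (1-O)(1-Obar)]|phi_rho>} = orth{[1-O-Obar+2*O*Obar]|phi_rho>}."""
    P12 = np.linalg.solve(S1, S12)                                # B2 kets resolved in B1
    Ctil = sym_orth(P12 @ np.linalg.solve(S2, S12.T @ Cocc), S1)  # depolarized MOs
    O = Cocc @ Cocc.T @ S1                                        # occupied projector
    Obar = Ctil @ Ctil.T @ S1                                     # depolarized projector
    A = P12 - O @ P12 - Obar @ P12 + 2.0 * (O @ (Obar @ P12))
    return sym_orth(A, S1)

# synthetic overlaps from explicit "basis vectors" (6 large-basis, 3 minimal, 2 occupied)
rng = np.random.default_rng(1)
X1, X2 = rng.standard_normal((40, 6)), rng.standard_normal((40, 3))
S1, S2, S12 = X1.T @ X1, X2.T @ X2, X1.T @ X2
Cocc = sym_orth(rng.standard_normal((6, 2)), S1)
W = make_iaos(S1, S2, S12, Cocc)   # 6 x 3: one IAO per minimal-basis function
```

The defining property of the construction is that the IAOs are orthonormal (with respect to $S^{B_1}$) and exactly span the occupied space, which can be verified by applying the IAO projector to the occupied coefficients.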
\section{Introduction} \label{sec_introduction} This paper is a continuation of earlier work on Ricci flow through singularities \cite{Kleiner:2014le,bamler_kleiner_uniqueness_stability,gsc}. While the focus in \cite{Kleiner:2014le,bamler_kleiner_uniqueness_stability} was on analytical properties of singular Ricci flows, such as existence and uniqueness, the aim of \cite{gsc} and the present paper is to apply the flow to topological and geometric problems. One of the main contributions in this paper is a new geometric/topological method for using families of flows with singularities to produce families of nonsingular deformations. The method may also be applicable in other settings unrelated to Ricci flow. We now present our main results. Let $M$ be a connected, orientable, closed, smooth 3-manifold. We denote by $\met(M)$ and $\operatorname{Diff}(M)$ the space of Riemannian metrics on $M$ and the diffeomorphism group of $M$, respectively; we equip both spaces with the $C^\infty$-topology, and let $\met_{PSC}(M)\subset\met(M)$ denote the subspace of metrics with positive scalar curvature. Our first result settles a well-known conjecture about the topology of the space of metrics with positive scalar curvature: \begin{theorem} \label{thm_psc_contractible} $\met_{PSC} (M)$ is either empty or contractible. \end{theorem} For our second main result, consider the subset $\met_{CC}(M)\subset\met(M)$ of metrics that are locally isometric to either the round sphere $S^3$ or the round cylinder $S^2 \times \mathbb{R}$. We will show: \begin{theorem} \label{thm_cc_contractible} $\met_{CC} (M)$ is either empty or contractible.
\end{theorem} By a well-known argument, Theorem~\ref{thm_cc_contractible} implies the following conjecture about the structure of diffeomorphism groups: \begin{theorem}[Generalized Smale Conjecture] \label{thm_gen_smal} If $(M,g)$ is an isometric quotient of the round sphere, then the inclusion map $\Isom (M,g) \hookrightarrow \operatorname{Diff} (M)$ is a homotopy equivalence. \end{theorem} We now provide a brief historical overview, before discussing other results. Theorem~\ref{thm_psc_contractible} was inspired by the work of Marques \cite{marques}, who showed that $\met_{PSC}(M)$ is path connected. The analogous statement in dimension $2$ --- the contractibility of $\met_{PSC}(S^2)$ --- can be proven using the uniformization theorem, or by Ricci flow. Starting with the famous paper of Hitchin \cite{hitchin}, there has been a long history of results based on index theory, which show that $\met_{PSC}(M)$ has nontrivial topology when $M$ is high dimensional; we refer the reader to the survey \cite{rosenberg_progress_report} for details. Theorem~\ref{thm_psc_contractible} provides the first examples of manifolds of dimension $\geq 3$ for which the homotopy type of $\met_{PSC}(M)$ is completely understood. Regarding Theorem~\ref{thm_gen_smal}, Smale made his original 1961 conjecture for the case $M=S^3$; this is the first step toward the larger project of understanding $\operatorname{Diff}(M)$ for other $3$-manifolds, which was already underway in the 70s \cite{hatcher_haken,ivanov_haken}. We recommend \cite[Section 1]{rubinstein_et_al} for a nice discussion of the history and other background on diffeomorphism groups. Theorem~\ref{thm_gen_smal} completes the proof of the Generalized Smale Conjecture after prior work by many people. 
Cerf proved that the inclusion $\operatorname{Isom}(S^3,g)\rightarrow \operatorname{Diff}(S^3)$ induces a bijection on path components \cite{cerf1,cerf2,cerf3,cerf4}, and the full conjecture for $S^3$ was proven by Hatcher \cite{hatcher_smale_conjecture}. Hatcher used a blend of combinatorial and smooth techniques to show that the space of smoothly embedded $2$-spheres in $\mathbb{R}^3$ is contractible. This is equivalent to the assertion that $O(4)\simeq \operatorname{Isom}(S^3,g) \rightarrow \operatorname{Diff}(S^3)$ is a homotopy equivalence when $g$ has sectional curvature $1$ (see the appendix in \cite{hatcher_smale_conjecture}). Other spherical space forms were studied starting in the late 1970s. Through the work of a number of authors it was shown that the inclusion $\operatorname{Isom}(M)\rightarrow \operatorname{Diff}(M)$ induces a bijection on path components for any spherical space form $M$ \cite{asano,rubinstein_klein_bottles,cappell_shaneson,bonahon,rubinstein_birman,boileau_otal}. The full conjecture was proven for spherical space forms containing geometrically incompressible one-sided Klein bottles (prism and quaternionic manifolds), and lens spaces other than $\mathbb{R} P^3$ \cite{ivanov_1,ivanov_2,rubinstein_et_al}. The methods used in these proofs were of topological nature. In our previous paper \cite{gsc}, we used Ricci flow methods to give a unified proof of the conjecture for all spherical space forms, except for $\mathbb{R} P^3$. This established the conjecture for the three remaining families of spherical space forms: tetrahedral, octahedral, and icosahedral manifolds. Although the techniques in \cite{ivanov_1,ivanov_2,rubinstein_et_al} and \cite{gsc} were very different, these results all relied on Hatcher's resolution of the $S^3$ case. It has been a longstanding question whether it is possible to use techniques from geometric analysis to give a new proof of Hatcher's theorem for $S^3$. 
There are several well-known variational approaches to studying the topology of the space of $2$-spheres in $\mathbb{R}^3$ (or $S^3$); however, they all break down due to the absence of a Palais-Smale condition, because there are too many critical points, or because the natural gradient flow does not respect embeddedness. Analogous issues plague other strategies based more directly on diffeomorphisms. The argument for Theorem~\ref{thm_cc_contractible} is independent of Hatcher's work and applies uniformly to all spherical space forms; in particular it gives a new proof of the Smale Conjecture based on Ricci flow. We believe that the methods used in this paper may be readily adapted to other situations where a geometric flow produces neck-pinch type singularities, for instance $2$-convex mean curvature flow and (conjecturally) mean curvature flow of $2$-spheres in $\mathbb{R}^3$ \cite{haslhofer_et_al,white_icm,brendle_mcf_genus_zero} or to study the space of metrics with positive isotropic curvature in higher dimensions \cite{Hamilton-PIC, Brendle-2019}. \bigskip We now present some further results. Applying Theorem~\ref{thm_cc_contractible} in the case of manifolds covered by $S^2\times\mathbb{R}$, one obtains the following corollaries: \begin{theorem} \label{thm_S2S1_diff} $\operatorname{Diff} (S^2 \times S^1)$ is homotopy equivalent to $O(2) \times O(3) \times \Omega O(3)$, where $\Omega O(3)$ denotes the loop space of $O(3)$. \end{theorem} \begin{theorem} \label{thm_RP3RP3} $\operatorname{Diff} (\mathbb{R} P^3 \# \mathbb{R} P^3)$ is homotopy equivalent to $O(1)\times O(3)$. \end{theorem} Theorem~\ref{thm_S2S1_diff} is due to Hatcher \cite{hatcher_s2xs1}. While Theorem~\ref{thm_RP3RP3} can be deduced directly from Theorem~\ref{thm_cc_contractible}, there is also an alternate approach based on a result of \cite{hatcher_s2xs1}, which reduces it to Theorem~\ref{thm_gen_smal} in the $\mathbb{R} P^3$ case. 
Theorems~\ref{thm_gen_smal}, \ref{thm_S2S1_diff}, and \ref{thm_RP3RP3} describe the structure of $\operatorname{Diff}(M)$ when $M$ has a geometric structure modelled on $S^3$ or $S^2\times \mathbb{R}$. In \cite{bamler_kleiner_gsc_ii} we will use Ricci flow methods to study $\operatorname{Diff}(M)$ when $M$ is modelled on the Thurston geometry Nil. Combined with earlier work, this completes the classification of $\operatorname{Diff}(M)$ when $M$ is prime \cite{hatcher_haken,ivanov_haken,ivanov_1,ivanov_2,gabai_smale_conjecture_hyperbolic,rubinstein_et_al,mccullough_soma}. \bigskip Theorems~\ref{thm_psc_contractible} and \ref{thm_cc_contractible} are deduced from a single result, which involves fiberwise Riemannian metrics on fiber bundles. A special case is: \begin{theorem}[See Theorem~\ref{Thm_main_conf_flat}] \label{Thm_main_general_case} Let $M$ be a connected sum of spherical space forms and copies of $S^2 \times S^1$, and $K$ be the geometric realization of a simplicial complex. Consider a fiber bundle $\pi : E \to K$ with fibers homeomorphic to $M$ and structure group $\operatorname{Diff} (M)$. Suppose that $g^s$ is a Riemannian metric on the fiber $\pi^{-1}(s)$ for every $s\in K$, such that the family $(g^s)_{s \in K}$ varies continuously with respect to the fiber bundle structure. Then there is a family of Riemannian metrics $(h^s_t)_{s \in K, t \in [0,1]}$ such that: \begin{enumerate}[label=(\alph*)] \item $h^s_t$ is a Riemannian metric on $\pi^{-1}(s)$ for every $s\in K$, $t \in [0,1]$, and the family $(h^s_t)_{s\in K, t\in[0,1]}$ varies continuously with respect to the fiber bundle structure. \item $h^s_0 = g^s$ for all $s \in K$. \item $h^s_1$ is conformally flat and has positive scalar curvature for all $s \in K$. \item If for some $s\in K$ the manifold $(\pi^{-1} (s), g^s)$ has positive scalar curvature, then so does $(\pi^{-1} (s), h^s_t)$ for all $t\in [0,1]$.
\end{enumerate} \end{theorem} As a corollary we have: \begin{corollary} \label{cor_deform_to_psc_cf} If $\pi:E\rightarrow K$ is as in Theorem~\ref{Thm_main_general_case}, then there is a continuously varying family of fiberwise Riemannian metrics $(h^s)_{s \in K}$ such that $(\pi^{-1} (s), h^s)$ is conformally flat and has positive scalar curvature. \end{corollary} Let $\met_{CF}(M)\subset \met(M)$ denote the subspace of conformally flat metrics. Theorem~\ref{Thm_main_general_case} and Corollary~\ref{cor_deform_to_psc_cf} are indications that the space $\met_{PSC}(M)\cap\met_{CF}(M)$ has simple topology, and suggest the following question in conformal geometry: \begin{question} \label{conj_psc_cf_contractible} Is $\met_{PSC}(M)\cap\met_{CF}(M)$ always empty or contractible? Equivalently, is the space of conformally flat metrics with positive Yamabe constant always empty or contractible? \end{question} The fundamental work of Schoen-Yau \cite{schoen_yau_conformally_flat} on the geometry of individual metrics $g\in\met_{PSC}(M)\cap \met_{CF}(M)$ should be helpful in addressing this question. To our knowledge, the current understanding of the corresponding Teichm\"uller space of conformally flat structures is rather limited. The connection between Question~\ref{conj_psc_cf_contractible} and the preceding results is that the contractibility of $\met_{PSC}(M)\cap\met_{CF}(M)$ logically implies Theorem~\ref{Thm_main_general_case} and Corollary~\ref{cor_deform_to_psc_cf}. \subsection*{Discussion of the proof} To give the reader an indication of some of the issues that must be addressed when attempting to apply Ricci flow to a deformation problem, we first recall the outline of Marques' proof \cite{marques} that $\met_{PSC}(M)/\operatorname{Diff}(M)$ is path-connected, in the special case when $M$ is a spherical space form (see also \cite{haslhofer_et_al}, which was inspired by \cite{marques}).
Starting with a metric $h\in \met_{PSC}(M)$, one applies Perelman's Ricci flow with surgery to obtain a finite sequence $\{(M_j,(g_j(t))_{t\in [t_{j-1},t_j]})\}_{1\leq j\leq N}$ of ordinary Ricci flows where $g_1(0)=h$. Since $h$ has positive scalar curvature, so does each of the Ricci flows $g_j(t)$. Marques shows by backward induction on $j$ that the Riemannian metric $g_j(t_{j-1})$ on $M_j$ can be deformed through metrics of positive scalar curvature to a metric of constant sectional curvature. In the induction step he carefully analyzes Perelman's surgery process, and shows that one may pass from $(M_{j+1},g_{j+1}(t_j))$ to $(M_j,g_j(t_j))$ by means of a geometric connected sum operation, which is compatible with deformations through metrics of positive scalar curvature. In other words, the Ricci flow with surgery can be seen as a sequence of continuous curves in $\met_{PSC} (M_j)$, whose endpoints are related by a surgery process. Marques' work was to join these endpoints in order to produce a single continuous curve. Let us now consider a family of metrics $(h_s)_{s\in K}$ depending continuously on a parameter $s$. If one attempted to use the above strategy for such a family, then one would immediately run into the problem that the resulting Ricci flows with surgery starting from the metrics $h_s$ may not be unique or may not depend continuously on the parameter $s$. Moreover, the locations and times of the surgery operations may change as $s$ varies --- possibly in a discontinuous fashion. As we vary $s$, the order in which these operations are performed may change and some surgery operations may even appear or disappear. In particular, this means that the underlying topology of the flow at some (or most) positive times may not be constant in $s$. So in summary, every single metric $h_s$ defines a Ricci flow with surgery, which can be turned into a continuous metric deformation.
However, there is little hope of producing a useful topological object based on the collection of all such flows for different $s$. In addition, since our second goal is to study the structure of diffeomorphism groups, we are faced with the complication that the argument from the previous paragraph only works modulo the diffeomorphism group. We address these issues using a number of new techniques. First, we employ the singular Ricci flow (or ``Ricci flow through singularities'') from \cite{Kleiner:2014le} in lieu of the Ricci flow with surgery. In \cite{bamler_kleiner_uniqueness_stability} we showed that this flow is canonical. Based on a stability result from the same paper, we show that any continuous family of metrics $(h_s)_{s \in K}$ can be evolved into a \emph{continuous} family of singular Ricci flows. Here the word ``continuous'' has to be defined carefully, since the flows are not embedded in a larger space. Our use of singular Ricci flows ameliorates some of the issues raised above; however, the underlying problem still remains. More specifically, our notion of continuous dependence of singular Ricci flows still allows the possibility that singularities, which are the analogues of the surgery operations from before, may vary in space and in time and may appear or disappear as we vary the parameter $s$. While these phenomena are now slightly more controlled due to our continuity property, we have to deal with singularities that may occur on some complicated set, possibly of fractal dimension. One of the main conceptual novelties in our proof is a new topological notion called a \emph{partial homotopy}, which can be viewed as a hybrid between a continuous family of singular Ricci flows and a homotopy in the space of metrics. On the one hand, this notion captures the phenomenon of non-constant topology as the parameter $s$ varies by neglecting the singular part, whose topological structure may be ``messy''.
On the other hand, if a partial homotopy is ``complete'' in a certain sense, then it constitutes a classical homotopy in a space of metrics. The notion of a partial homotopy is not inherently restricted to $3d$ Ricci flow. It may therefore also have applications in higher dimensions or to other geometric flows. A large part of our paper will be devoted to the development of a theory that allows us to construct and modify partial homotopies through certain modification moves. We then combine this with the theory of continuous families of singular Ricci flows. Roughly speaking, we will use a continuous family of singular Ricci flows as a blueprint to carry out the modification moves of a partial homotopy, with the goal of improving it towards a complete one. In order to relate partial homotopies to continuous families of singular Ricci flows, we have to do some pre-processing. More specifically, we will study the continuous dependence of the singular set of singular Ricci flows and equip a neighborhood with a continuous family of ``$\mathcal{R}$-structures''. These $\mathcal{R}$-structures limit the dependence of the singular set on the parameter and are used to relate a partial homotopy to a family of singular Ricci flows. In our pre-processing step we also produce a ``rounded'' family of metrics, which are part of these $\mathcal{R}$-structures. Taking a broader perspective, our partial homotopy machinery will ultimately enable us to ``weave'' these metrics together to produce the desired homotopy in the space of metrics. Our theory of partial homotopies brings together and generalizes a number of technical ingredients that have existed in the fields of topology and geometric analysis. Most notable among these are: a surgery technique generalizing connected sums of conformally flat structures to arbitrary metrics and a notion of positivity of the Yamabe constant relative to the boundary.
\subsection*{Organization of the paper} In Section~\ref{sec_preliminaries}, we briefly recapitulate the most important definitions and results related to singular Ricci flows. In Section~\ref{sec_families_srfs} we formalize the idea of a continuous family of singular Ricci flows $(\mathcal{M}^s)_{s\in X}$, where the parameter lies in some topological space $X$; using results from \cite{Kleiner:2014le,bamler_kleiner_uniqueness_stability}, we prove the existence of a unique continuous family of singular Ricci flows with a prescribed family of initial conditions. In Section~\ref{sec_rounding_process}, we implement a ``rounding'' procedure to construct $\mathcal{R}$-structures, which characterize the geometry and topology of the singular part and provide a family of metrics whose high curvature part is rotationally symmetric. In Section~\ref{sec_partial_homotopy}, we set up and develop the theory of partial homotopies. In Section~\ref{sec_deforming_families_metrics}, we apply this theory to our families of $\mathcal{R}$-structures from Section~\ref{sec_rounding_process}. More specifically, given a continuous family $(g_{s,0})_{s\in K}$ of Riemannian metrics parametrized by a simplicial complex $K$, we will construct a continuous metric deformation $(g_{s,t})_{s\in K,t\in[0,1]}$, where $g_{s,1}$ is conformally flat for all $s$. Based on this result, we prove the main theorems of this paper in Section~\ref{sec_proofs_main_theorems}. \subsection*{Acknowledgements} The first named author would like to thank Boris Botvinnik for bringing up the question on the contractibility of spaces of positive scalar curvature and for many inspiring discussions. \section{Conventions} Unless otherwise noted, all manifolds are assumed to be smooth, orientable and 3-dimensional. Whenever we refer to a continuous family of functions or maps, we will always work in the smooth topology $C^\infty$, or $C^\infty_{\operatorname{loc}}$ in the non-compact case.
So continuity means that all partial derivatives vary continuously in the $C^0$-sense. The same applies to the terminology of ``transverse continuity'' (compare with the discussion after Definition~\ref{def_continuity_maps_between_families}, Definitions~\ref{def_continuity_smooth_objects}, \ref{Def_transverse_O3_action}), which will always mean ``transverse continuity in the smooth topology''. If $(g_t)_{t \in I}$ is a family of Riemannian metrics on a manifold $M$, then we denote by $B(x,t,r)$ the distance ball with respect to the metric $g_t$. If $(M,g)$ is a Riemannian manifold and $X \subset M$ is a measurable subset, then we denote by $V(X, g)$ its volume with respect to the Riemannian measure $d \mu_g$. We will denote by $B^n (r), D^n (r) \subset \mathbb{R}^n$ the open and closed balls of radius $r$ around the origin. Moreover, we will set $B^n := B^n (1)$, $D^n := D^n (1)$ and denote by $A^n (r_1, r_2) := B^n (r_2) \setminus D^n (r_1)$ the annulus of radii $r_1, r_2$. We will say that a Riemannian metric $g$ is \emph{conformally flat} if $e^{2\phi} g$ is flat for some smooth function $\phi$. In dimension 3 this condition is equivalent to the Cotton-York condition, see Remark~\ref{rmk_Cotton_York} and the discussion in Subsection~\ref{subsec_conf_exp}. \bigskip\bigskip \section{Preliminaries} \label{sec_preliminaries} In this section we collect some preliminary material, including background required for Sections~\ref{sec_families_srfs} and \ref{sec_rounding_process}. \subsection{$\kappa$-solutions} The singularity formation in 3-dimensional Ricci flows is usually understood via singularity models called $\kappa$-solutions (see \cite[Sec. 11]{Perelman1}). The definition of a $\kappa$-solution consists of a list of properties that are known to be true for $3$-dimensional singularity models. 
\begin{definition}[$\kappa$-solution] \label{def_kappa_solution} An ancient Ricci flow $(M, (g_t)_{t \leq 0} )$ on a $3$-dimensional manifold $M$ is called a \textbf{(3-dimensional) $\kappa$-solution}, for $\kappa > 0$, if the following holds: \begin{enumerate}[label=(\arabic*)] \item $(M, g_t)$ is complete for all $t \in (- \infty, 0]$, \item $|{\Rm}|$ is bounded on $M \times I$ for all compact $I \subset ( - \infty, 0]$, \item $\sec_{g_t} \geq 0$ on $M$ for all $t \in (- \infty, 0]$, \item $R > 0$ on $M \times (- \infty, 0]$, \item $(M, g_t)$ is $\kappa$-noncollapsed at all scales for all $t \in (- \infty, 0]$. (This means that for any $(x,t) \in M \times (- \infty, 0]$ and any $r > 0$: if $|{\Rm}| \leq r^{-2}$ on the time-$t$ ball $B(x,t,r)$, then we have $|B(x,t,r)| \geq \kappa r^3$ for its volume.) \end{enumerate} \end{definition} Important examples of $\kappa$-solutions are the \textbf{round shrinking cylinder} \[ \big( S^2 \times \mathbb{R}, (g^{S^2 \times \mathbb{R}}_t := (1-2t) g_{S^2} + g_{\mathbb{R}} )_{t \leq 0} \big), \] the \textbf{round shrinking sphere} \[ \big( S^3, (g^{S^3}_t := (1- 4t) g_{S^3})_{t \leq 0} \big) \] and the \textbf{Bryant soliton} \cite{Bryant2005} \[ \big( M_{\Bry}, (g_{\Bry, t})_{t \leq 0} \big). \] We recall that $M_{\Bry} \approx \mathbb{R}^3$ and the flow $(g_{\Bry, t})_{t \leq 0}$ is invariant under the standard $O(3)$-action. We will denote the fixed point of this action by $x_{\Bry} \in M_{\Bry}$. By parabolic rescaling we may assume in this paper that $R(x_{\Bry}) = 1$. The flow $(g_{\Bry, t})_{t \leq 0}$ is moreover a steady gradient soliton. For the purpose of this paper, it is helpful to remember that the pointed manifolds $(M_{\Bry}, g_{\Bry, t}, x_{\Bry})$ are isometric for all $t \leq 0$. We will set $g_{\Bry} := g_{\Bry, 0}$. For more details see the discussion in \cite[Appendix B]{bamler_kleiner_uniqueness_stability}. The following theorem states that these examples provide an almost complete list of $\kappa$-solutions.
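Before stating it, we record a quick consistency check (added here for the reader's convenience): the scale factors in the first two examples above are exactly those dictated by the Ricci flow equation $\partial_t g_t = -2 \Ric (g_t)$. Since $\Ric (g_{S^3}) = 2 g_{S^3}$, $\Ric (g_{S^2}) = g_{S^2}$ and the Ricci tensor is invariant under constant rescalings of the metric, we have
\[ \partial_t g^{S^3}_t = - 4 g_{S^3} = - 2 \Ric \big( (1-4t) g_{S^3} \big) = - 2 \Ric \big( g^{S^3}_t \big), \qquad \partial_t g^{S^2 \times \mathbb{R}}_t = - 2 g_{S^2} = - 2 \Ric \big( g^{S^2 \times \mathbb{R}}_t \big). \]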
\begin{theorem}[Classification of $\kappa$-solutions] \label{Thm_kappa_sol_classification} There is a constant $\kappa_0 > 0$ such that for any $\kappa$-solution $(M, (g_t)_{t \leq 0} )$ one of the following is true: \begin{enumerate}[label=(\alph*)] \item \label{ass_kappa_sol_classification_a} $(M, (g_t)_{t \leq 0} )$ is homothetic to the round shrinking cylinder or its $\mathbb{Z}_2$-quotient. \item \label{ass_kappa_sol_classification_b} $(M, (g_t)_{t \leq 0} )$ is homothetic to an isometric quotient of the round shrinking sphere. \item \label{ass_kappa_sol_classification_c} $(M, (g_t)_{t \leq 0} )$ is homothetic to the Bryant soliton. \item \label{ass_kappa_sol_classification_d} $M \approx S^3$ or $\mathbb{R} P^3$ and $(M, (g_t)_{t \leq 0} )$ is rotationally symmetric, i.e. the flow is invariant under the standard $O(3)$-action whose principal orbits are 2-spheres. Moreover, for every $x \in M$ the limit of $(M, R(x,t) g_t, x)$ as $t \searrow - \infty$ exists and is isometric to a pointed round cylinder or Bryant soliton. \end{enumerate} Moreover, in cases (a)--(c) the solution is even a $\kappa_0$-solution. \end{theorem} \begin{proof} See \cite{Brendle2018, BamlerKleiner2019, Brendle2019}. \end{proof} We also recall the following compactness result for $\kappa$-solutions, which is independent of Theorem~\ref{Thm_kappa_sol_classification}. \begin{theorem} \label{Thm_kappa_compactness_theory} Let $(M^i, (g^i_t)_{t \leq 0}, x^i)$ be a sequence of pointed $\kappa_i$-solutions, for some $\kappa_i > 0$, and suppose that $\lim_{i \to \infty} R(x^i) > 0$ exists. Then, after passing to a subsequence, either all flows $(M^i, (g^i_t)_{t \leq 0})$ are homothetic to quotients of the round shrinking sphere or we have convergence to a pointed $\kappa_\infty$-solution $(M^\infty, (g^\infty_t)_{t \leq 0}, x^\infty)$ in the sense of Hamilton \cite{Hamilton1995}. \end{theorem} \begin{proof} See \cite[Sec. 11]{Perelman1}.
\end{proof} The following result will be used in Section~\ref{sec_rounding_process}. \begin{lemma} \label{lem_kappa_identity_1} For any $A < \infty$ there is a constant $D = D(A) < \infty$ such that the following holds. If $(M,(g_t)_{t \leq 0})$ is a compact, simply-connected $\kappa$-solution and \[ \int_M R(\cdot, 0) d\mu_{g_0} < A V^{1/3} ( M, g_0), \] then $\diam (M, g_0) < D R^{-1/2} (x,0)$ for all $x \in M$. \end{lemma} \begin{proof} Suppose the lemma were false for some fixed $A$. Then we can find a sequence of counterexamples $(M^i, (g^i_t)_{t \leq 0}, x^i)$ with \begin{equation} \label{eq_diam_to_infty} \diam (M^i, g^i_0) R^{1/2} (x^i, 0) \to \infty. \end{equation} It follows that $(M^i, (g^i_t)_{t \leq 0})$ is not homothetic to a round shrinking sphere for large $i$. \begin{Claim} There are constants $C, I < \infty$ such that for all $i \geq I$ and $y \in M^i$ we have \begin{equation} \label{eq_radius_int_R} R^{-1/2} (y,0) \leq \int_{B(y,0,C R^{-1/2} (y,0))} R(\cdot, 0) d\mu_{g^i_0} . \end{equation} \end{Claim} \begin{proof} Assume that, after passing to a subsequence, (\ref{eq_radius_int_R}) was violated for some $y^i \in M^i$ and $C^i \to \infty$. Since the lemma and (\ref{eq_radius_int_R}) are invariant under parabolic rescaling, we can assume without loss of generality that $R(y^i,0) = 1$. Apply Theorem~\ref{Thm_kappa_compactness_theory} to extract a limit $(M^\infty, (g^\infty_t)_{t \leq 0}, y^\infty)$ with $\int_{M^\infty} R(\cdot, 0) d\mu_{g^\infty_0} \leq 1$, which would have to be compact, in contradiction to our assumption (\ref{eq_diam_to_infty}). \end{proof} Fix some $i \geq I$ for a moment. By Vitali's covering theorem we can find points $y_1, \ldots, y_N \in M^i$ such that the balls $B(y_j, 0, C R^{-1/2} (y_j, 0))$ are pairwise disjoint and the balls $B(y_j, 0, 3C R^{-1/2} (y_j, 0))$ cover $M^i$. It follows that \[ \diam (M^i, g^i_0) \leq \sum_{j=1}^N 6C R^{-1/2} (y_j, 0) \leq 6C \int_{M^i} R(\cdot, 0) d\mu_{g^i_0} <6C A V^{1/3}( M^i, g^i_0).
\] By volume comparison this implies that there is a uniform constant $c > 0$ such that $V ( B(z,0,r) , g_0^i) \geq c r^3$ for all $r < \diam (M^i, g^i_0)$. After parabolic rescaling to normalize $R(x^i, 0)$, the pointed solutions $(M^i, (g^i_t)_{t \leq 0}, x^i)$ subsequentially converge to a non-compact $\kappa$-solution satisfying the same volume bound. This, however, contradicts the fact that non-compact $\kappa$-solutions have vanishing asymptotic volume ratio (see either \cite[Sec. 11]{Perelman1} or Theorem~\ref{Thm_kappa_sol_classification}). \end{proof} Lastly, we recall: \begin{lemma} \label{Lem_Bry_R_Hessian_positive} On $(M_{\Bry}, g_{\Bry})$ the scalar curvature $R$ attains a unique global maximum at $x_{\Bry}$ and the Hessian of $R$ is negative definite at $x_{\Bry}$. \end{lemma} \begin{proof} This follows from the soliton equations $\Ric + \nabla^2 f = 0$ and $R + |\nabla f|^2 = R(x_{\Bry})$ (see \cite[Appendix B]{bamler_kleiner_uniqueness_stability} for more details). If $\nabla^2 R$ at $x_{\Bry}$ were not strictly negative definite, then it would vanish due to symmetry. This would imply \[ 0 = \nabla^2 |\nabla f|^2 = 2 |\nabla^2 f|^2 + 2 \nabla^3 f \cdot \nabla f = 2 |\nabla^2 f|^2 = 2 |{\Ric}|^2, \] in contradiction to the positivity of the scalar curvature on $(M_{\Bry}, g_{\Bry})$. \end{proof} \subsection{Singular Ricci flows --- Definition} In the following we recall terminology related to singular Ricci flows. In order to keep this subsection concise, our discussion has been simplified to fit the needs of this paper. For more details, see \cite[Sec~5]{bamler_kleiner_uniqueness_stability}. Singular Ricci flows were introduced by Lott and the second author in \cite{Kleiner:2014le}. In the same paper the existence of a singular Ricci flow starting from a compact initial condition was established. Subsequently, uniqueness was shown by the authors in \cite{bamler_kleiner_uniqueness_stability}.
The definition of a singular Ricci flow provided in this paper differs slightly from the original definition in \cite{Kleiner:2014le}. It is a priori more general, however, the uniqueness result in \cite{bamler_kleiner_uniqueness_stability} implies that both definitions are equivalent. We first introduce a broader class of Ricci flow spacetimes. A singular Ricci flow will be defined as a Ricci flow spacetime that satisfies certain conditions. \begin{definition}[Ricci flow spacetimes] \label{def_RF_spacetime} A {\bf Ricci flow spacetime} is a tuple $(\mathcal{M}, \linebreak[1] \mathfrak{t}, \linebreak[1] \partial_{\mathfrak{t}}, \linebreak[1] g)$ with the following properties: \begin{enumerate}[label=(\arabic*)] \item $\mathcal{M}$ is a smooth $4$-manifold with (smooth) boundary $\partial \mathcal{M}$. \item $\mathfrak{t} : \mathcal{M} \to [0, \infty)$ is a smooth function without critical points (called {\bf time function}). For any $t \geq 0$ we denote by $\mathcal{M}_t := \mathfrak{t}^{-1} (t) \subset \mathcal{M}$ the {\bf time-$t$-slice} of $\mathcal{M}$. \item $\mathcal{M}_0 = \mathfrak{t}^{-1} (0) = \partial \mathcal{M}$, i.e. the initial time-slice is equal to the boundary of $\mathcal{M}$. \item $\partial_{\mathfrak{t}}$ is a smooth vector field (the {\bf time vector field}) on $\mathcal{M}$ that satisfies $\partial_{\mathfrak{t}} \mathfrak{t} \equiv 1$. \item $g$ is a smooth inner product on the spatial subbundle $\ker (d \mathfrak{t} ) \subset T \mathcal{M}$. For any $t \geq 0$ we denote by $g_t$ the restriction of $g$ to the time-$t$-slice $\mathcal{M}_t$ (note that $g_t$ is a Riemannian metric on $\mathcal{M}_t$). \item $g$ satisfies the Ricci flow equation: $\mathcal{L}_{\partial_\mathfrak{t}} g = - 2 \Ric (g)$. Here $\Ric (g)$ denotes the symmetric $(0,2)$-tensor on $\ker (d \mathfrak{t} )$ that restricts to the Ricci tensor of $(\mathcal{M}_t, g_t)$ for all $t \geq 0$. 
\end{enumerate} Curvature quantities on $\mathcal{M}$, such as the Riemannian curvature tensor $\Rm$, the Ricci curvature $\Ric$, or the scalar curvature $R$ will refer to the corresponding quantities with respect to the metric $g_t$ on each time-slice $\mathcal{M}_t$. Tensorial quantities will be embedded using the splitting $T\mathcal{M} = \ker (d\mathfrak{t} ) \oplus \langle \partial_{\mathfrak{t}} \rangle$. When there is no chance of confusion, we will sometimes abbreviate the tuple $(\mathcal{M}, \mathfrak{t}, \partial_{\mathfrak{t}}, g)$ by $\mathcal{M}$. \end{definition} We emphasize that, while a Ricci flow spacetime may have singularities --- in fact the sole purpose of our definition is to understand flows with singularities --- such singularities are not directly captured by a Ricci flow spacetime, because ``singular points'' are not contained in the spacetime manifold $\mathcal{M}$. Instead, the idea behind the definition of a Ricci flow spacetime is to understand a possibly singular flow by analyzing its asymptotic behavior on its regular part. This will always be sufficient for our applications. Any (classical) Ricci flow of the form $(g_t)_{t \in [0,T)}$, $0 < T \leq \infty$, on a $3$-manifold $M$ can be converted into a Ricci flow spacetime by setting $\mathcal{M} = M \times [0,T)$, letting $\mathfrak{t}$ be the projection to the second factor and letting $\partial_{\mathfrak{t}}$ correspond to the unit vector field on $[0,T)$. Vice versa, if $(\mathcal{M}, \mathfrak{t}, \partial_{\mathfrak{t}}, g)$ is a Ricci flow spacetime with $\mathfrak{t}(\mathcal{M}) = [0, T)$ for some $0 < T \leq \infty$ and the property that every trajectory of $\partial_{\mathfrak{t}}$ is defined on the entire time-interval $[0,T)$, then $\mathcal{M}$ comes from such a classical Ricci flow. We now generalize some basic geometric notions to Ricci flow spacetimes.
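Before doing so, let us spell out why this correspondence holds. On $\mathcal{M} = M \times [0,T)$ the trajectories of $\partial_{\mathfrak{t}}$ are the curves $t \mapsto (x,t)$, so the Lie derivative $\mathcal{L}_{\partial_{\mathfrak{t}}} g$ restricted to the time-$t$-slice is simply the $t$-derivative of the family $(g_t)_{t \in [0,T)}$. Hence the equation in Definition~\ref{def_RF_spacetime} reduces to the classical Ricci flow equation:
\[ \big( \mathcal{L}_{\partial_{\mathfrak{t}}} g \big) \big|_{\mathcal{M}_t} = \partial_t g_t = - 2 \Ric (g_t). \]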
\begin{definition}[Length, distance and metric balls in Ricci flow spacetimes] Let $(\mathcal{M}, \mathfrak{t}, \partial_{\mathfrak{t}}, g)$ be a Ricci flow spacetime. For any two points $x, y \in \mathcal{M}_t$ in the same time-slice of $\mathcal{M}$ we denote by $d(x,y)$ or $d_t (x,y)$ the {\bf distance} between $x, y$ within $(\mathcal{M}_t, g_t)$. The distance between points in different time-slices is not defined. Similarly, we define the {\bf length} $\length (\gamma)$ or $\length_t (\gamma)$ of a path $\gamma : [0,1] \to \mathcal{M}_t$ whose image lies in a single time-slice to be the length of this path when viewed as a path inside the Riemannian manifold $(\mathcal{M}_t, g_t)$. For any $x \in \mathcal{M}_t$ and $r \geq 0$ we denote by $B(x,r) \subset \mathcal{M}_t$ the {\bf $r$-ball} around $x$ with respect to the Riemannian metric $g_t$. \end{definition} Our next goal is to characterize the (microscopic) geometry of a Ricci flow spacetime near a singularity or at an almost singular point. For this purpose, we will introduce a \textbf{(curvature) scale function} $\rho : \mathcal{M} \to (0, \infty]$ with the property that \begin{equation} \label{eq_rho_equivalent_Rm} C^{-1} \rho^{-2} \leq |{\Rm}| \leq C \rho^{-2} \end{equation} for some universal constant $C < \infty$. We will write $\rho (p) = \infty$ if $\Rm_p = 0$ at some point $p$. Note that $\rho$ has the dimension of length. In Subsection~\ref{subsec_curvature_scale} we will make a specific choice for $\rho$, which will turn out to be suitable for our needs. The notions introduced in the remainder of this subsection will, however, be independent of the precise choice of $\rho$ or the constant $C$. We now define what we mean by completeness for Ricci flow spacetimes. Intuitively, a Ricci flow spacetime is called complete if its time-slices can be completed by adding countably many ``singular points'' and if no component ``appears'' or ``disappears'' suddenly without the formation of a singularity. 
\begin{definition}[Completeness of Ricci flow spacetimes] \label{def_completeness} We say that a Ricci flow spacetime $(\mathcal{M},\mathfrak{t}, \partial_{\mathfrak{t}}, g)$ is {\bf complete} if the following holds: Consider a path $\gamma : [0, s_0) \to \mathcal{M}$ such that $\inf_{s \in [0,s_0)} \rho (\gamma(s)) > 0$ and such that: \begin{enumerate} \item The image $\gamma ([0,s_0))$ lies in a time-slice $\mathcal{M}_t$ and the time-$t$ length of $\gamma$ is finite or \item $\gamma$ is a trajectory of $\partial_{\mathfrak{t}}$ or of $- \partial_{\mathfrak{t}}$. \end{enumerate} Then the limit $\lim_{s \nearrow s_0} \gamma (s)$ exists. \end{definition} Lastly, we need to characterize the asymptotic geometry of a Ricci flow spacetime near its singularities. The idea is to impose the same asymptotic behavior near singular points in Ricci flow spacetimes as is encountered in the singularity formation of a classical (smooth) 3-dimensional Ricci flow. This is done by comparing the geometry to the geometry of $\kappa$-solutions using the following concept of pointed closeness. \begin{definition}[Geometric closeness] \label{def_geometric_closeness_time_slice} We say that a pointed Riemannian manifold $(M, g, x)$ is \textbf{$\varepsilon$-close} to another pointed Riemannian manifold $(\ov{M}, \ov{g}, \ov{x})$ \textbf{at scale $\lambda > 0$} if there is a diffeomorphism onto its image \[ \psi : B^{\ov{M}} (\ov{x}, \varepsilon^{-1} ) \longrightarrow M \] such that $\psi (\ov{x}) = x$ and \[ \big\Vert \lambda^{-2} \psi^* g - \ov{g} \big\Vert_{C^{[\varepsilon^{-1}]}(B^{\ov{M}} (\ov{x}, \varepsilon^{-1} ))} < \varepsilon. \] Here the $C^{[\varepsilon^{-1}]}$-norm of a tensor $h$ is defined to be the sum of the $C^0$-norms of the tensors $h$, $\nabla^{\ov{g}} h$, $\nabla^{\ov{g},2} h$, \ldots, $\nabla^{\ov{g}, [\varepsilon^{-1}]} h$ with respect to the metric $\ov{g}$. \end{definition} We can now define the canonical neighborhood assumption. 
The main statement of this assumption is that regions of small scale (i.e. high curvature) are geometrically close to regions of $\kappa$-solutions. \begin{definition}[Canonical neighborhood assumption] \label{def_canonical_nbhd_asspt} Let $(M, g)$ be a (possibly incomplete) Riemannian manifold. We say that $(M, g)$ satisfies the {\bf $\varepsilon$-canonical neighborhood assumption} at some point $x \in M$ if there is a $\kappa > 0$, a $\kappa$-solution $(\overline{M}, \linebreak[1] (\overline{g}_t)_{t \leq 0})$ and a point $\ov{x} \in \ov{M}$ such that $\rho (\overline{x}, 0) = 1$ and such that $(M, g, x)$ is $\varepsilon$-close to $(\ov{M}, \ov{g}_0, \ov{x})$ at some (unspecified) scale $\lambda > 0$. We say that $(M,g)$ {\bf satisfies the $\varepsilon$-canonical neighborhood assumption below scale $r_0$,} for some $r_0 > 0$, if every point $x \in M$ with $\rho(x) < r_0$ satisfies the $\varepsilon$-canonical neighborhood assumption. We say that a Ricci flow spacetime $(\mathcal{M}, \mathfrak{t}, \partial_{\mathfrak{t}}, g)$ satisfies the \textbf{$\varepsilon$-ca\-non\-i\-cal neighborhood assumption} at a point $x \in \mathcal{M}$ if the same is true at $x$ in the time-slice $(\mathcal{M}_{\mathfrak{t}(x)}, g_{\mathfrak{t}(x)})$. Moreover, we say that $(\mathcal{M}, \mathfrak{t}, \partial_{\mathfrak{t}}, g)$ satisfies the {\bf $\varepsilon$-canonical neighborhood assumption below scale $r_0$ (on $[0,T]$)} if the same is true for each time-slice $(\mathcal{M}_t, g_t)$ (if $t \in [0,T]$). \end{definition} Using this terminology, we can finally define a singular Ricci flow. \begin{definition}[Singular Ricci flow] \label{Def_sing_RF} A {\bf singular Ricci flow} is a Ricci flow spacetime $(\mathcal{M},\mathfrak{t}, \partial_{\mathfrak{t}}, g)$ that has the following properties: \begin{enumerate} \item \label{Prop_sing_RF_1} $\mathcal{M}_0$ is compact. \item \label{Prop_sing_RF_2} $(\mathcal{M},\mathfrak{t}, \partial_{\mathfrak{t}}, g)$ is complete.
\item \label{Prop_sing_RF_3} For every $\varepsilon > 0$ and $0 \leq T < \infty$ there is a constant $r_{\varepsilon, T} > 0$ such that $(\mathcal{M},\mathfrak{t}, \partial_{\mathfrak{t}}, g)$ satisfies the $\varepsilon$-canonical neighborhood assumption below scale $r_{\varepsilon, T}$ on $[0,T]$. \end{enumerate} We say that $(\mathcal{M},\mathfrak{t}, \partial_{\mathfrak{t}}, g)$ is {\bf extinct at time $t \geq 0$} if $\mathcal{M}_t = \emptyset$. \end{definition} We remark that we have added Property~\ref{Prop_sing_RF_1} in Definition~\ref{Def_sing_RF} in order to make it equivalent to the definition in \cite{Kleiner:2014le}. All flows encountered in this paper will have compact and non-singular initial data. The property could potentially be dropped or replaced by requiring $(\mathcal{M}_0, g_0)$ to be complete and possibly have bounded curvature. In addition, it can be shown that there is a universal constant $\varepsilon_{\can} > 0$ such that Property~\ref{Prop_sing_RF_3} can be replaced by one of the following properties (due to the results in \cite{bamler_kleiner_uniqueness_stability, Bamler-finite-surg-0}): \begin{enumerate}[label=(\arabic*$'$), start=3] \item[($3'$)] For every $0 \leq T < \infty$ there is a constant $r_{T} > 0$ such that $(\mathcal{M},\mathfrak{t}, \partial_{\mathfrak{t}}, g)$ satisfies the $\varepsilon_{\can}$-canonical neighborhood assumption below scale $r_{T}$ on $[0,T]$. \item[($3''$)] $(\mathcal{M},\mathfrak{t}, \partial_{\mathfrak{t}}, g)$ satisfies the $\varepsilon_{\can}$-canonical neighborhood assumption below some scale $r_0 > 0$. \end{enumerate} This aspect is, however, inessential for this paper. \subsection{Singular Ricci flows --- Existence and Uniqueness} \label{subsec_sing_RF_exist_unique} The following results establish the existence of a unique (or canonical) singular Ricci flow starting from any compact Riemannian 3-manifold $(M,g)$. The existence result is from \cite[Theorem~1.1]{Kleiner:2014le}. 
\begin{theorem}[Existence] \label{Thm_sing_RF_existence} For every compact, orientable, Riemannian 3-man\-i\-fold $(M,g)$ there is a singular Ricci flow $(\mathcal{M},\mathfrak{t}, \partial_{\mathfrak{t}}, g)$ with the property that $(\mathcal{M}_0, g_0)$ is isometric to $(M,g)$. \end{theorem} The uniqueness result is from \cite[Theorem~1.3]{bamler_kleiner_uniqueness_stability}. \begin{theorem}[Uniqueness] \label{Thm_sing_RF_uniqueness} Any singular Ricci flow $(\mathcal{M},\mathfrak{t}, \partial_{\mathfrak{t}}, g)$ is uniquely determined by its initial time-slice $(\mathcal{M}_0, g_0)$ up to isometry in the following sense: If $(\mathcal{M},\mathfrak{t}, \partial_{\mathfrak{t}}, g)$ and $(\mathcal{M}',\mathfrak{t}', \partial'_{\mathfrak{t}}, g')$ are two singular Ricci flows and $\phi : (\mathcal{M}_0, g_0) \to (\mathcal{M}'_0,g'_0)$ is an isometry, then there is a diffeomorphism $\td\phi : \mathcal{M} \to \mathcal{M}'$ that is an isometry of Ricci flow spacetimes: \[ \td\phi |_{\mathcal{M}_0} = \phi, \quad \t = \t' \circ \td\phi , \quad \partial_{\mathfrak{t}} = \td\phi^* \partial'_{\mathfrak{t}}, \quad g = \td\phi^* g'. \] \end{theorem} In this paper we will often identify the given initial condition $(M,g)$ with the initial time-slice $(\mathcal{M}_0,g_0)$ if there is no chance of confusion. We will view $\mathcal{M}$ as the ``unique'' singular Ricci flow with initial time-slice $(\mathcal{M}_0, g_0) = (M,g)$. \subsection{The curvature scale} \label{subsec_curvature_scale} We will now define a curvature scale function $\rho$ that satisfies (\ref{eq_rho_equivalent_Rm}). This subsection can be skipped upon a first reading. For most applications we could simply take $\rho := |{\Rm}|^{-1/2}$; however, it will turn out to be convenient at certain points in our proof to work with a slightly different definition.
More specifically, our main objective will be to ensure that $\rho = R^{-1/2}$ wherever the $\varepsilon$-canonical neighborhood assumption holds for a small enough $\varepsilon$. To achieve this, observe that there is a constant $c_0 > 0$ such that the following holds: Whenever $\Rm$ is an algebraic curvature tensor with the property that its scalar curvature $R$ is positive and all its sectional curvatures are bounded from below by $-\frac1{10} R$, then $c_0 |{\Rm}| \leq R$. We will fix $c_0$ for the remainder of this paper. \begin{definition}[Curvature scale] \label{def_curvature_scale} Let $(M, g)$ be a 3-dimensional Riemannian manifold and $x \in M$ a point. We define the {\bf (curvature) scale} at $x$ to be \begin{equation} \label{eq_def_curvature_scale} \rho (x) = \min \big\{ R_+^{-1/2} (x), \big( c_0 |{\Rm}| (x) \big)^{-1/2}\big\}. \end{equation} Here $R_+(x) := \max \{ R(x), 0 \}$ and we use the convention $0^{-1/2} = \infty$. If $(\mathcal{M}, \mathfrak{t}, \partial_{\mathfrak{t}}, g)$ is a Ricci flow spacetime, then we define $\rho: \mathcal{M} \to (0, \infty]$ such that it restricts to the corresponding scale functions on the time-slices. \end{definition} The following lemma summarizes the important properties of the curvature scale. \begin{lemma} \label{lem_rho_Rm_R} There is a universal constant $C < \infty$ such that \begin{equation} \label{eq_equivalence_bound_rho_Rm} C^{-1} \rho^{-2} (x) \leq |{\Rm}|(x) \leq C \rho^{-2} (x). \end{equation} Moreover, there is a universal constant $\varepsilon_0 > 0$ such that if $x$ satisfies the $\varepsilon$-canonical neighborhood assumption for some $\varepsilon \leq \varepsilon_0$, then $R(x) = \rho^{-2} (x)$. \end{lemma} \begin{proof} The bound (\ref{eq_equivalence_bound_rho_Rm}) is obvious. For the second part of the lemma observe that for sufficiently small $\varepsilon$ we have $R(x) > 0$ and $\sec \geq - \frac1{10} R(x)$ at $x$. So $R_+^{-1/2}(x) \leq (c_0 |{\Rm}| (x) )^{-1/2}$.
\end{proof} In all our future references to the $\varepsilon$-canonical neighborhood assumption, we will assume that $\varepsilon \leq \varepsilon_0$, such that $R = \rho^{-2}$ is guaranteed. Note that in \cite{bamler_kleiner_uniqueness_stability} a factor of $\frac13$ was used in front of $R_+$ in (\ref{eq_def_curvature_scale}). We have omitted this factor for convenience, as it is inessential for the purpose of this paper. \subsection{Singular Ricci flows --- further definitions and properties} The following definitions and results, which are of a more technical nature, will be used in this paper. Let us first discuss the concept of parabolic rescaling for Ricci flow spacetimes. For this purpose, recall that if $(g_t)_{t \in (t_1, t_2)}$ is a conventional Ricci flow and $a > 0$, then $(a^2 g_{a^{-2} t} )_{t \in (a^2 t_1, a^2 t_2)}$ satisfies the Ricci flow equation as well and we refer to this flow as the parabolically rescaled Ricci flow. Similarly, if $(\mathcal{M}, \mathfrak{t}, \partial_{\mathfrak{t}}, g)$ is a Ricci flow spacetime, then so is $(\mathcal{M}, a^2 \t, a^{-2} \partial_{\mathfrak{t}},a^2 g)$, which we will refer to as the {\bf parabolically rescaled Ricci flow spacetime}. If $(\mathcal{M}, \mathfrak{t}, \partial_{\mathfrak{t}}, g)$ is a singular Ricci flow, then so is $(\mathcal{M}, a^2 \t, a^{-2} \partial_{\mathfrak{t}},a^2 g)$. Moreover, if $(\mathcal{M}, \mathfrak{t}, \partial_{\mathfrak{t}}, g)$ (locally or globally) corresponds to a conventional Ricci flow $(g_t)_{t \in (t_1, t_2)}$, as discussed after Definition~\ref{def_RF_spacetime}, then both notions of parabolic rescaling are the same. Next, we introduce some more useful terminology, which helps us characterize the local geometry of singular Ricci flows. Let in the following $(\mathcal{M}, \mathfrak{t}, \partial_{\mathfrak{t}}, g)$, or simply $\mathcal{M}$, be a Ricci flow spacetime; for the purpose of this paper, we may also take $\mathcal{M}$ to be a singular Ricci flow.
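For concreteness, the assertion above that parabolic rescaling preserves the Ricci flow equation can be checked in one line: setting $\hat{g}_t := a^2 g_{a^{-2} t}$ and using that the Ricci tensor is invariant under constant rescalings of the metric,
\[ \partial_t \hat{g}_t = a^2 \cdot a^{-2} \, (\partial_s g_s) \big|_{s = a^{-2} t} = - 2 \Ric ( g_{a^{-2} t} ) = - 2 \Ric ( \hat{g}_t ). \]
Applied on product domains, the same computation shows that $(\mathcal{M}, a^2 \t, a^{-2} \partial_{\mathfrak{t}}, a^2 g)$ satisfies the Ricci flow equation of Definition~\ref{def_RF_spacetime}.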
\begin{definition}[Points in Ricci flow spacetimes] \label{def_points_in_RF_spacetimes} Let $x \in \mathcal{M}$ be a point and set $t := \mathfrak{t} (x)$. Consider the maximal trajectory $\gamma_x : I \to \mathcal{M}$, $I \subset [0, \infty)$, of the time-vector field $\partial_{\mathfrak{t}}$ such that $\gamma_x (t) = x$. Note that then $\mathfrak{t} (\gamma_x(t')) = t'$ for all $t' \in I$. For any $t' \in I$ we say that $x$ \textbf{survives until time $t'$} and we write \[ x(t') := \gamma_x (t'). \] Similarly, if $X \subset \mathcal{M}_t$ is a subset in the time-$t$ time-slice, then we say that $X$ \textbf{survives until time $t'$} if this is true for every $x \in X$ and we set $X(t') := \{ x(t') \;\; : \;\; x \in X \}$. \end{definition} \begin{definition}[Product domain] \label{def_product_domain} We call a subset $X \subset \mathcal{M}$ a \emph{product domain} if there is an interval $I \subset [0, \infty)$ such that for any $t \in I$ any point $x \in X$ survives until time $t$ and $x(t) \in X$. \end{definition} Note that a product domain $X$ can be identified with the product $(X \cap \mathcal{M}_{t_0}) \times I$ for an arbitrary $t_0 \in I$. If $X \cap \mathcal{M}_{t_0}$ is sufficiently regular (e.g. open or a domain with smooth boundary in $\mathcal{M}_{t_0}$), then the metric $g$ induces a classical Ricci flow $(g_t)_{t \in I}$ on $X \cap \mathcal{M}_{t_0}$. We will often use the metric $g$ and the Ricci flow $(g_t)_{t \in I}$ synonymously when our analysis is restricted to a product domain. \begin{definition}[Parabolic neighborhood] For any $y \in \mathcal{M}$ let $I_y \subset [0, \infty)$ be the set of all times until which $y$ survives. Now consider a point $x \in \mathcal{M}$ and two numbers $a \geq 0$, $b \in \mathbb{R}$. Set $t := \mathfrak{t} (x)$. Then we define the \textbf{parabolic neighborhood} $P(x, a, b) \subset \mathcal{M}$ as follows: \[ P(x,a,b) := \bigcup_{y \in B(x,a)} \bigcup_{t' \in [t, t+b] \cap I_y} y(t'). 
\] If $b < 0$, then we replace $[t,t+b]$ by $[t+b, t]$. We call $P(x,a,b)$ \textbf{unscathed} if $B(x,a)$ is relatively compact in $\mathcal{M}_t$ and if $I_y \supset [t, t+b]$ or $I_y\supset [t +b, t] \cap [0, \infty)$ for all $y \in B(x,a)$. Lastly, for any $r > 0$ we introduce the simplified notation \[ P(x,r) := P(x,r,-r^2) \] for the \textbf{(backward) parabolic ball} with center $x$ and radius $r$. \end{definition} Note that if $P(x,a,b)$ is unscathed, then it is a product domain of the form $B(x,a) \times I_y$ for any $y \in B(x,a)$. Borrowing from Definition~\ref{def_geometric_closeness_time_slice}, we will introduce the notion of a $\delta$-neck. \begin{definition}[$\delta$-neck] Let $(M,g)$ be a Riemannian manifold and $U \subset M$ an open subset. We say that $U$ is a {\bf $\delta$-neck at scale $\lambda > 0$} if there is a diffeomorphism \[ \psi : S^2 \times \big( {- \delta^{-1}, \delta^{-1} }\big) \longrightarrow U \] such that \[ \big\Vert \lambda^{-2} \psi^* g - \big( 2 g_{S^2} + g_{\mathbb{R}} \big) \big\Vert_{C^{[\delta^{-1}]}(S^2 \times (- \delta^{-1}, \delta^{-1}))} < \delta. \] We call the image $\psi ( S^2 \times \{ 0 \})$ a {\bf central 2-sphere of $U$} and every point on a central $2$-sphere a {\bf center of $U$}. \end{definition} Note that by our convention (see Definition~\ref{def_curvature_scale}) we have $\rho \equiv 1$ on $(S^2 \times \mathbb{R}, 2 g_{S^2} + g_{\mathbb{R}})$. So on a $\delta$-neck at scale $\lambda$ we have $\rho \approx \lambda$, where the accuracy depends on the smallness of $\delta$. We also remark that a $\delta$-neck $U$ has infinitely many central $2$-spheres, as we may perturb $\psi$ slightly. This is why we speak of \emph{a} central 2-sphere of $U$, as opposed to \emph{the} central 2-sphere of $U$. Similarly, the centers of $U$ are not unique, but form an open subset of $U$. Lastly, we define the initial condition scale. 
\begin{definition}[Initial condition scale] \label{Def_r_initial} For any closed 3-manifold $(M,g)$ define the {\bf initial condition scale $r_{\initial}(M,g)$} as follows: \[ r_{\initial} (M,g) := \min \big\{ \inf_M |{\Rm_g}|^{-1/2} , \inf_M |\nabla{\Rm_g}|^{-1/3} , \operatorname{injrad} (M,g) \big\}. \] \end{definition} So $|{\Rm}| \leq r^{-2}_{\initial} (M,g)$, $|{\nabla\Rm}| \leq r^{-3}_{\initial} (M,g)$ and $\operatorname{injrad} (M,g) \geq r_{\initial} (M,g)$. Moreover, the map $g \mapsto r_{\initial}(M,g)$ is continuous on $\Met (M)$. \bigskip Let us now state further results. The first result offers more quantitative geometric control on any singular Ricci flow $\mathcal{M}$, depending only on the initial condition scale $r_{\initial} (\mathcal{M}_0, g_0)$ of the initial time-slice. \begin{lemma} \label{lem_rcan_control} For any $\varepsilon > 0$ there is a smooth function $r_{\can, \varepsilon} : \mathbb{R}_+ \times [0, \infty) \to \mathbb{R}_+$ such that the following holds for any $T \geq 0$ and any singular Ricci flow $\mathcal{M}$: \begin{enumerate}[label=(\alph*)] \item \label{ass_rcan_a} $\mathcal{M}$ satisfies the $\varepsilon$-canonical neighborhood assumption below scale $r_{\can, \varepsilon} \linebreak[1] (r_{\initial} (\mathcal{M}_0, \linebreak[1] g_0), \linebreak[1] T)$ on $[0,T]$. \item \label{ass_rcan_b} If $x \in \mathcal{M}$, $\t (x) \leq T$ and $\rho (x) \leq r_{\can, \varepsilon} (r_{\initial} (\mathcal{M}_0, g_0), T)$, then the parabolic neighborhood $P := P(x, \varepsilon^{-1} \rho(x))$ is unscathed and after parabolic rescaling by $\rho^{-2}(x)$ the flow on $P$ is $\varepsilon$-close to the flow on a $\kappa$-solution. \item \label{ass_rcan_c} $\rho \geq \varepsilon^{-1} r_{\can, \varepsilon} (r_{\initial} (\mathcal{M}_0, g_0), T)$ on $\mathcal{M}_0$. \item \label{ass_rcan_d} For any $a, r_0 > 0$ we have $r_{\can, \varepsilon} (a r_0, a^2 T) = a \cdot r_{\can, \varepsilon} (r_0, T)$.
\item $|\partial_T^m r_{\can, \varepsilon}| \leq \varepsilon r_{\can, \varepsilon}^{1-2m}$ for $m = 0, \ldots, [\varepsilon^{-1}]$. \item $r_{\can, \varepsilon} (r_0, T)$ is decreasing in $T$ for any $r_0 > 0$. \end{enumerate} \end{lemma} \begin{proof} We will set \[ r_{\can,\varepsilon} (r_0, T) := r_0 \cdot r'_{\can, \varepsilon} (T r_0^{-2}) \] for some smooth decreasing function $r'_{\can, \varepsilon} : [0, \infty) \to \mathbb{R}_+$, which we will determine in the following. Then Assertion~\ref{ass_rcan_d} holds and due to invariance of all other assertions under parabolic rescaling, it suffices to assume in the following that $r_{\initial} (\mathcal{M}_0, g_0) = 1$. By \cite[Thm. 1.3]{Kleiner:2014le}, \cite[Thm. 1.3]{bamler_kleiner_uniqueness_stability}, it follows that $\mathcal{M}$ is a limit of Ricci flows with surgery $\mathcal{M}^{(\delta_i)}$ with the same initial condition and performed at surgery scales $\delta_i \to 0$. By Perelman's construction \cite{Perelman2} of the flows $\mathcal{M}^{(\delta_i)}$, we know that there is a continuous, decreasing function $r'_{\can, \varepsilon} : [0, \infty) \to \mathbb{R}_+$ that is independent of $(\mathcal{M}_0, g_0)$ and $i$ such that for any $T > 0$ and $r < r'_{\can, \varepsilon} (T)$ and $i \geq \underline{i} (r, T)$ the flows $\mathcal{M}^{(\delta_i)}$ satisfy the hypothesis of Assertion~\ref{ass_rcan_b} with $\varepsilon$ replaced by $\varepsilon/2$ if $\rho(x) \in (r, 2r'_{\can, \varepsilon} (T))$. So Assertion~\ref{ass_rcan_b} also holds for $\mathcal{M}$ if $\rho(x) \leq r'_{\can, \varepsilon} (T)$. Assertion~\ref{ass_rcan_a} is a direct consequence of Assertion~\ref{ass_rcan_b}.
In order to prove the remaining assertions, we claim that there is a smooth decreasing function $r''_{\can, \varepsilon} : [0, \infty) \to \mathbb{R}_+$ such that $r''_{\can, \varepsilon} (t) < r'_{\can, \varepsilon}(t)$, $r''_{\can, \varepsilon} (0) < 10^{-3}$ and $|\partial_t^m r''_{\can, \varepsilon}| \leq \varepsilon (r''_{\can, \varepsilon})^{1-2m}$ for $m = 0, \ldots, [\varepsilon^{-1}]$. By convolution with a smooth kernel, we can find a smooth, decreasing function $r'''_{\can, \varepsilon} : [1, \infty) \to \mathbb{R}_+$ such that \[ |\partial_t^m r'''_{\can, \varepsilon} (t)| \leq C_m \sup_{[t-1,t+1]} r'_{\can, \varepsilon} \leq C_m r'_{\can, \varepsilon} (t-1) \] for some universal constants $C_m < \infty$. So $r''_{\can, \varepsilon} (t) := a_\varepsilon r'''_{\can, \varepsilon}(t+1)$, for sufficiently small $a_\varepsilon > 0$, has the desired properties. \end{proof} The next result concerns the preservation of the positive scalar curvature condition. \begin{theorem} \label{Thm_PSC_preservation} If $\mathcal{M}$ is a singular Ricci flow such that $R > 0$ (resp. $R \geq 0$) on $\mathcal{M}_0$, then the same is true on all of $\mathcal{M}$. \end{theorem} \begin{proof} This follows from the corresponding fact for Ricci flows with surgery since $\mathcal{M}$ is a limit of Ricci flows with surgery, as discussed in the proof of Lemma~\ref{lem_rcan_control}. \end{proof} The next result concerns the extinction of singular Ricci flows. \begin{theorem} If a singular Ricci flow $\mathcal{M}$ is extinct at time $t$ (i.e. $\mathcal{M}_t = \emptyset$), then it is also extinct at all later times $t' \geq t$. \end{theorem} \begin{proof} This is a direct consequence of \cite[Theorem 1.11]{Kleiner:2014le}. It also follows using Lemma~\ref{lem_bryant_increasing_scale} below. \end{proof} The next result gives a uniform bound on the extinction time of a singular Ricci flow starting from a connected sum of spherical space forms and copies of $S^2 \times S^1$.
\begin{theorem} \label{Thm_extinction_time} Let $M$ be a connected sum of spherical space forms and copies of $S^2 \times S^1$. Consider a compact subset $K \subset \Met (M)$ of Riemannian metrics on $M$. Then there is a time $T_{\ext} < \infty$ such that any singular Ricci flow $(\mathcal{M},\mathfrak{t}, \partial_{\mathfrak{t}}, g)$ with the property that $(\mathcal{M}_0, g_0)$ is isometric to $(M, h)$ for some $h \in K$ is extinct at time $T_{\ext}$. \end{theorem} \begin{proof} See \cite[Thm. 2.16(b)]{gsc}. \end{proof} The next result implies that the function $\t + \rho^{-2}$ on any singular Ricci flow is proper. \begin{theorem} \label{Thm_rho_proper_sing_RF} For any singular Ricci flow $\mathcal{M}$ and any $r, T > 0$ the subset $\{ \rho \geq r, \t \leq T \} \subset \mathcal{M}$ is compact. \end{theorem} \begin{proof} This is one of the properties of a singular Ricci flow according to the definition in \cite{Kleiner:2014le}. By Theorem~\ref{Thm_sing_RF_uniqueness} this definition is equivalent to Definition~\ref{Def_sing_RF}. Alternatively, the theorem can be shown directly using Lemma~\ref{lem_rcan_control}(b). \end{proof} The last result essentially states that at points that satisfy the canonical neighborhood assumption we have $\partial_t \rho < 0$, unless the geometry is sufficiently closely modeled on the tip of a Bryant soliton. See also \cite[Lemma~8.40]{bamler_kleiner_uniqueness_stability}, which is more general. \begin{lemma}[Non-decreasing scale implies Bryant-like geometry] \label{lem_bryant_increasing_scale} For any $\delta > 0$ the following holds if $\varepsilon \leq \ov\varepsilon (\delta)$. Let $\mathcal{M}$ be a singular Ricci flow, $x \in \mathcal{M}_t$ a point and assume that $r:=\rho (x) < r_{\can, \varepsilon} (r_{\initial} (\mathcal{M}_0, g_0), t)$. Then $x$ survives until time $\max \{ t - r^2, 0 \}$. Assume that $\rho (x(t')) \leq \rho (x)$ for some $t' \in [\max \{ t - r^2, 0 \}, t)$. 
Then the pointed Riemannian manifold $(\mathcal{M}_t, g_t, x)$ is $\delta$-close to the pointed Bryant soliton $(M_{\Bry}, g_{\Bry}, x_{\Bry})$ at scale $\rho(x)$. \end{lemma} Note that here our choice of $\rho$, such that $\rho^{-2} = R$ at points that satisfy a precise enough canonical neighborhood assumption, is important. Had we chosen $\rho$ differently, we would have had to use a more complicated wording of Lemma~\ref{lem_bryant_increasing_scale}. \begin{proof} Suppose the lemma were false for some fixed $\delta > 0$ and choose a sequence of counterexamples $\mathcal{M}^i, x^i, t^i, t^{\prime, i}$ for $\varepsilon^i \to 0$. By parabolic rescaling we may assume that $\rho (x^i) = 1$. By Lemma~\ref{lem_rcan_control}\ref{ass_rcan_b}, \ref{ass_rcan_c} we may pass to a subsequence and assume that the flows restricted to the universal covers of the parabolic neighborhoods $P(x^i, (\varepsilon^i)^{-1} )$ and pointed at lifts of $x^i$ converge to a pointed $\kappa$-solution $(M^\infty, (g^{\infty}_t)_{t \leq 0}, x^\infty)$ with $R(x^\infty, 0) = \rho^{-2} (x^\infty, 0) = 1$. By assumption $(M^\infty, g^{\infty}_0, x^\infty)$ cannot be isometric to $(M_{\Bry}, g_{\Bry}, x_{\Bry})$. Therefore, by \cite[Proposition C.3]{bamler_kleiner_uniqueness_stability} and Definition~\ref{def_kappa_solution} we have $\partial_t R (x^\infty, 0) > 0$ and $\partial_t R(x^\infty, t) \geq 0$ for all $t \leq 0$. Since the functions $t'' \mapsto R(x^i ( t^i + t''))$ smoothly converge to $t'' \mapsto R(x^\infty, t'')$ on $[-1,0]$, we obtain a contradiction to the fact that $R(x^i (t^{\prime,i})) = \rho^{-2} (x^i(t^{\prime,i})) \geq 1$ and $t^{\prime,i} < t^i$. \end{proof} \subsection{Singular Ricci flows --- Stability} \label{subsec_sing_RF_stability} Next we formalize the Stability Theorem \cite[Theorem~1.5]{bamler_kleiner_uniqueness_stability} in a way that will fit our needs.
In short, this theorem states that two singular Ricci flows are geometrically close on the set $\{ \rho \geq \varepsilon \} \cap \{ \t \leq \varepsilon^{-1} \}$ if their initial time-slices are close enough. This fact will be key to our understanding of continuous dependence of singular Ricci flows on their initial data and the construction of continuous families of singular Ricci flows in Section~\ref{sec_families_srfs}. First, let $(M, g)$ and $(M',g')$ be two Riemannian manifolds. \begin{definition}[$\varepsilon$-isometry between Riemannian manifolds] An {\bf $\varepsilon$-isometry from $M$ to $M'$} is a diffeomorphism $\phi:M \to M'$ such that \[ \big\Vert \phi^*g'-g \big\Vert_{C^{[\varepsilon^{-1}]} (M)} < \varepsilon . \] \end{definition} Next, let $\mathcal{M}, \mathcal{M}'$ be two Ricci flow spacetimes. For our purposes, we may take $\mathcal{M}, \mathcal{M}'$ to be singular Ricci flows. \begin{definition}[$\varepsilon$-isometry between Ricci flow spacetimes] \label{def_eps_isometry_rf_spacetime} An {\bf $\varepsilon$-isometry from $\mathcal{M}$ to $\mathcal{M}'$} is a diffeomorphism $\phi:\mathcal{M}\supset U\rightarrow U'\subset \mathcal{M}'$ where: \begin{enumerate} \item \label{prop_eps_isometry_rf_spacetime_1} $U$, $U'$ are open subsets such that $\rho \leq \varepsilon$ on the subsets \[ (\mathcal{M}\setminus U)\cap\{\t\leq \varepsilon^{-1}\}\,,\quad (\mathcal{M}'\setminus U')\cap\{\t'\leq \varepsilon^{-1}\}\,. \] \item \label{prop_eps_isometry_rf_spacetime_2} $\t'\circ\phi=\t$. \item \label{prop_eps_isometry_rf_spacetime_3} For every $m_1, m_2 = 0, \ldots, [\varepsilon^{-1}]$ we have \[ | \nabla^{m_1} \partial_{\t}^{m_2} (\phi^* g' - g) | \leq \varepsilon, \qquad | \nabla^{m_1} \partial_{\t}^{m_2} (\phi^* \partial'_\t - \partial_\t) | \leq \varepsilon \] on $U$.
Here $\nabla^{m_1}$ denotes the $m_1$-fold covariant derivative with respect to the Riemannian metrics $g_t$ on each time-slice $\mathcal{M}_t \cap U$ and $\partial_\t^{m_2}$ denotes the $m_2$-fold Lie derivative $\mathcal{L}^{m_2}_{\partial_\t}$. \end{enumerate} \end{definition} We can now state our main stability result. For a more general result, which also holds for more general Ricci flow spacetimes, see \cite[Theorem~1.7]{bamler_kleiner_uniqueness_stability}. \begin{lemma}[Stability of singular Ricci flows] \label{lem_stability} Let $\mathcal{M}$ be a singular Ricci flow. Then for every $\varepsilon>0$ there is a $\delta = \delta (\mathcal{M}, \varepsilon)>0$ such that if $\mathcal{M}'$ is a singular Ricci flow and $\phi:\mathcal{M}_0\rightarrow\mathcal{M}'_0$ is a $\delta$-isometry, then there is an $\varepsilon$-isometry $\wh\phi: \mathcal{M}\supset U\rightarrow U'\subset\mathcal{M}'$ extending $\phi$, meaning that $\mathcal{M}_0 \subset U$, $\mathcal{M}'_0 \subset U'$ and $\wh\phi |_{\mathcal{M}_0} = \phi$. \end{lemma} \begin{proof} Fix $\mathcal{M}$ and $\varepsilon > 0$, set $T := \varepsilon^{-1}$ and let $\varepsilon_{\can}, \delta > 0$ be some constants, which we will determine in the course of the proof. By Lemma~\ref{lem_rcan_control}, and assuming that $\delta$ is sufficiently small, $\mathcal{M}$ and $\mathcal{M}'$ both satisfy the $\varepsilon_{\can}$-canonical neighborhood assumption below scale $r_0$ on $[0,T]$ for some scale $r_0 (\mathcal{M}, T, \varepsilon_{\can}) > 0$. By \cite[Theorem~1.5]{bamler_kleiner_uniqueness_stability}, and assuming $\varepsilon_{\can}$ and $\delta$ to be sufficiently small depending on $\varepsilon$, $T$, $r_0$ and $\mathcal{M}$, we can extend $\phi$ to a map $\wh\phi : \mathcal{M} \supset U \to U' \subset \mathcal{M}'$ such that Properties~\ref{prop_eps_isometry_rf_spacetime_1}, \ref{prop_eps_isometry_rf_spacetime_2} of Definition~\ref{def_eps_isometry_rf_spacetime} hold.
The bounds from Property~\ref{prop_eps_isometry_rf_spacetime_3} follow from the Addendum to \cite[Theorem~1.5]{bamler_kleiner_uniqueness_stability} and the fact that $\wh\phi^* \partial_{\t'} -\partial_\t = \sum_{i=1}^3 \nabla^g_{e_i} e_i - \nabla^{\wh\phi^* g'}_{e_i} e_i$ for any local orthonormal frame $\{ e_i \}_{i=1}^3$, after adjusting $\delta$. \end{proof} The following corollary illustrates the statement of Lemma~\ref{lem_stability}. \begin{corollary} Let $\{\mathcal{M}^i\}$, $\mathcal{M}^\infty$ be singular Ricci flows and suppose that we have convergence $(\mathcal{M}^i_0 , g^i_0) \to (\mathcal{M}^\infty_0, g^\infty_0)$ in the sense that there is a sequence of $\delta_i$-isometries $\phi_i : \mathcal{M}^\infty_0 \to \mathcal{M}^i_0$ with $\delta_i \to 0$. Then we have convergence $\mathcal{M}^i \to \mathcal{M}^\infty$ in the sense that there is a sequence of $\varepsilon_i$-isometries $\wh\phi_i$ between $\mathcal{M}^\infty, \mathcal{M}^i$ with $\varepsilon_i \to 0$, which extend $\phi_i$. \end{corollary} \section{Families of singular Ricci flows} \label{sec_families_srfs} The purpose of this section is to distill the results about existence, uniqueness, and continuous dependence of singular Ricci flows into an object that efficiently encodes the properties needed in the remainder of the proof. To motivate this, we recall that by \cite{Kleiner:2014le,bamler_kleiner_uniqueness_stability} (see also Subsection~\ref{subsec_sing_RF_exist_unique}), for every Riemannian manifold $(M,g)$ there exists a singular Ricci flow $\mathcal{M}^{(M,g)}$ with initial condition isometric to $(M,g)$, which is unique up to isometry. Our main result will be to formalize the stability property from \cite{bamler_kleiner_uniqueness_stability} (see also Subsection~\ref{subsec_sing_RF_stability}) as a continuous dependence of $\mathcal{M}^{(M,g)}$ on $(M,g)$.
More specifically, we will state that any ``continuous family'' of Riemannian manifolds yields a ``continuous family of singular Ricci flows''. Our starting point is a family of Riemannian manifolds $(M^s, g^s)_{s \in X}$, which depends continuously on a parameter $s$ in a certain sense. One special case is the case in which $M^s$ is constant and $g^s$ depends continuously on $s$ in the smooth topology. More generally, we may also consider a fiber bundle over $X$ whose fibers are smooth 3-dimensional manifolds $M^s$ equipped with a continuous family of Riemannian metrics $g^s$. For each $s \in X$ we consider the singular Ricci flow $\mathcal{M}^s := \mathcal{M}^{(M^s, g^s)}$. Our first step will be to define a topology on the disjoint union $\sqcup_{s\in X}\mathcal{M}^s$ such that the natural projection $\pi:\sqcup_{s\in X}\mathcal{M}^s\rightarrow X$ given by $\pi(\mathcal{M}^s)=s$ is a topological submersion. Secondly, we will endow this space with a lamination structure\footnote{Laminations have arisen in various contexts, including foliation theory, dynamical systems, complex geometry, $3$-manifolds, geometric group theory, and minimal surfaces; see for instance \cite{sullivan,candel,mosher_oertel,gabai,colding_minicozzi}. The laminations appearing in this paper are particularly tame due to the existence of the compatible topological submersion structure, which prohibits nontrivial leaf dynamics -- a central phenomenon in other contexts. } whose leaves are the singular Ricci flows $\mathcal{M}^s$ and that satisfies certain compatibility conditions with the submersion structure. The lamination structure will determine whether a family of maps from or to $\sqcup_{s\in X}\mathcal{M}^s$ is ``transversely continuous'' in the smooth topology. In fact, the objects $\t^s$, $\partial_\t^s$, $g^s$ associated to each singular Ricci flow $\mathcal{M}^s$ will be transversely continuous in the smooth topology. 
A collection of singular Ricci flows $\{ \mathcal{M}^s \}_{s \in X}$ together with the topology on the disjoint union and the lamination structure, as described above, will be called a ``continuous family of singular Ricci flows'' (see Definition~\ref{def_family_RF_spacetime} for further details). This notion is an instance of a general notion of a continuous family of differential geometric structures, which may be useful in other contexts, and appears to be new. Let us now state our main results. We refer to Subsection~\ref{subsec_laminations} for the precise definitions of the terminology used. For now, we mention that a continuous family of Riemannian manifolds $(M^s, g^s)_{s \in X}$ is essentially given by a fiber bundle over $X$ with smooth fibers and equipped with a continuous family of Riemannian metrics $g^s$ (see Corollary~\ref{Cor_family_compact_fiber_bundle} below). An important special case is given by a family of Riemannian metrics $g^s$ on a fixed manifold $M$, which depend continuously on $s$ in the smooth topology. For the remainder of this section, $X$ will denote an arbitrary topological space. We first address the existence of a ``continuous family of singular Ricci flows''. \begin{theorem}[Existence] \label{thm_existence_family_k} For any continuous family of closed Riemannian 3-manifolds $(M^s, g^s)_{s \in X}$ there is a continuous family of singular Ricci flows $(\mathcal{M}^s)_{s\in X}$ whose continuous family of time-$0$-slices $(\mathcal{M}^s_0, g^s_0)_{s\in X}$ is isometric to $(M^s, g^s)_{s \in X}$. \end{theorem} Although not needed in the remainder of this paper, we will also show that this family is unique. \begin{theorem}[Uniqueness] \label{thm_uniqueness_family_k} The continuous family of singular Ricci flows from Theorem~\ref{thm_existence_family_k} is unique in the following sense.
Consider two such families of singular Ricci flows $(\mathcal{M}^{i,s})_{s\in X}$, $i = 1,2$, and isometries $\phi_i : \cup_{s \in X} \mathcal{M}^{i,s}_0 \to \cup_{s \in X} M^s$. Then there is an isometry $\Psi : \cup_{s \in X} \mathcal{M}^{1,s} \to \cup_{s \in X} \mathcal{M}^{2,s}$ of continuous families of singular Ricci flows with the property that $\phi_1 = \phi_2 \circ \Psi$ on $\cup_{s \in X} \mathcal{M}^{1,s}_0$. \end{theorem} We will also show the following properness property: \begin{theorem} \label{Thm_properness_fam_sing_RF} Let $(\mathcal{M}^s)_{s \in X}$ be a continuous family of singular Ricci flows and consider the projection $\pi : \cup_{s \in X} \mathcal{M}^s \to X$. For any $s_0 \in X$, $r > 0$ and $t \geq 0$ there is a family chart $(U, \phi, V)$ with $s_0 \in \pi (U)$ and a compact subset $K \subset V$ such that \[ \phi \big( \cup_{s \in \pi (U)} \{ \rho_{g^{s}} \geq r, \t^{s} \leq t \} \big) \subset K \times \pi (U). \] In particular, the following projection is proper: \[ \pi : \cup_{s \in X} \mathcal{M}^s \cap \{ \rho_{g^s} \geq r, \t^s \leq t \} \longrightarrow X. \] \end{theorem} We remark that Theorems~\ref{thm_existence_family_k} and \ref{Thm_properness_fam_sing_RF} imply Lemma~\ref{lem_stability}. \subsection{Continuous families of smooth objects} \label{subsec_laminations} In this subsection we formalize concepts relating to the terminologies of ``continuous family of Riemannian manifolds'' and ``continuous family of singular Ricci flows''.
\begin{definition}[Continuous family of $n$-manifolds] \label{def_continuous_family_manifolds} A {\bf continuous family of (smooth) $n$-manifolds (with boundary, over $X$)} is given by a topological space $Y$, a continuous map $\pi:Y\rightarrow X$ and a maximal collection of tuples $\{(U_i,\phi_i,V_i)\}_{i\in I}$ (called {\bf family charts}) such that: \begin{enumerate}[label=(\arabic*)] \item \label{prop_continuous_family_manifolds_1} For all $i \in I$, $U_i$ is an open subset of $Y$, $V_i$ is a smooth $n$-manifold (with boundary), and $\phi_i:U_i\rightarrow V_i \times \pi(U_i)$ is a homeomorphism. \item \label{prop_continuous_family_manifolds_2} $\cup_{i \in I} U_i=Y$. \item \label{prop_continuous_family_manifolds_3} (Local trivialization) For every $i\in I$ the map $\phi_i$ induces an equivalence of the restriction $\pi|_{U_i}:U_i\rightarrow \pi(U_i)$ to the projection $V_i \times \pi(U_i) \rightarrow \pi(U_i)$, i.e. the following diagram commutes: \begin{diagram} U_i & \rTo^{\phi_i} &V_i \times \pi(U_i)\\ \dTo^\pi & \ldTo_{\proj_{\pi(U_i)}} &\\ \pi(U_i) & & \end{diagram} \item \label{prop_continuous_family_manifolds_4} (Compatibility) For any $i,j\in I$ the transition homeomorphism \[ \phi_{ij}:=\phi_j\circ\phi_i^{-1}: V_i \times \pi(U_i)\supset\phi_i(U_i\cap U_j) \longrightarrow \phi_j(U_i\cap U_j)\subset V_j \times \pi(U_j) \] has the form $\phi_{ij}(v,s)=(\beta(v,s),s)$ where $\beta : \phi_i(U_i\cap U_j)\rightarrow V_j$ locally defines a family of smooth maps $s \mapsto \beta(\cdot, s)$ that depend continuously on $s$ in the $C^\infty_{\operatorname{loc}}$-topology. \end{enumerate} Note that Properties~\ref{prop_continuous_family_manifolds_3} and \ref{prop_continuous_family_manifolds_4} imply that the family charts induce the structure of a smooth $n$-manifold with boundary on each fiber $Y^s :=\pi^{-1}(s)\subset Y$, for every $s\in X$. 
In order to use a more suggestive notation, we will sometimes denote a continuous family of smooth manifolds by $( Y^s )_{s\in X}$, or by the map $\pi:Y\rightarrow X$, suppressing the collection of family charts. \end{definition} \begin{remark} \label{rmk_set_theory_issue} For convenience we have suppressed a set theoretic issue relating to the maximality property in Definition~\ref{def_continuous_family_manifolds}. This issue can be remedied easily by requiring the following weaker version of maximality: If $\{ (U_i, \phi_i, V_i) \}_{i \in I}$ can be enlarged by a tuple $(U, \phi, V)$, while maintaining the validity of Properties~\ref{prop_continuous_family_manifolds_1}--\ref{prop_continuous_family_manifolds_4}, then $(U, \phi, V)$ is conjugate to some $(U_i, \phi_i, V_i)$, $i \in I$, in the sense that $U = U_i$ and $\phi_i = ( \psi, \id_{\pi(U)}) \circ \phi$ for some diffeomorphism $\psi : V \to V_i$. In this case, we say that $(U, \phi, V)$ is a ``family chart'' if it is conjugate to some $(U_i, \phi_i, V_i)$. \end{remark} Relating to this we have the following: \begin{lemma} \label{lem_cont_fam_no_max} If the maximality property in Definition~\ref{def_continuous_family_manifolds} is dropped, then there is a unique maximal extension of $\{ (U_i, \phi_i, V_i ) \}_{i \in I}$ in the sense of Remark~\ref{rmk_set_theory_issue}, whose elements are unique up to conjugation. \end{lemma} \begin{proof} It can be checked easily that if $(U, \phi, V)$ and $(U', \phi', V')$ can each be added to $\{ (U_i, \phi_i, V_i) \}_{i \in I}$ while maintaining the validity of Properties~\ref{prop_continuous_family_manifolds_1}--\ref{prop_continuous_family_manifolds_4} of Definition~\ref{def_continuous_family_manifolds}, then both triples can be added at the same time. We can therefore add all such triples $(U, \phi, V)$, with the extra assumption that $V$ is a smooth manifold structure defined on a fixed set of cardinality $\aleph_1$. 
\end{proof} \begin{remark} Definition~\ref{def_continuous_family_manifolds} combines two standard notions --- smooth laminations and topological submersions: Property~\ref{prop_continuous_family_manifolds_3} asserts that $\pi$ is a topological submersion, while Property~\ref{prop_continuous_family_manifolds_4} implies that the collection of family charts defines the structure of a lamination of smooth $n$-manifolds. \end{remark} \begin{remark} \label{rmk_fiber_bundle_construction} If $M$ is a smooth manifold with boundary and $\pi : Y \to X$ is a fiber bundle with fiber $M$ and structure group $\operatorname{Diff} (M)$, then $\pi : Y \to X$ can also be viewed as a continuous family of smooth manifolds with boundary. To see this, consider all local trivializations $\phi_i : \pi^{-1} (W_i) \to M \times W_i$, where $W_i \subset X$ are open, and form the triples $(U_i := \pi^{-1} (W_i), \phi_i, V_i := M)$. The set of these triples satisfies all properties of Definition~\ref{def_continuous_family_manifolds} except for the maximality property. Due to Lemma~\ref{lem_cont_fam_no_max} these triples define a unique structure of a continuous family of manifolds. A special case of this construction is the case in which $\pi : Y \to X$ is a trivial fiber bundle $Y = M \times X$. In this case the associated continuous family of manifolds can be denoted by $(Y^s = M \times \{ s \})_{s \in X}$. Conversely, every continuous family $(Y^s)_{s \in X}$ of compact manifolds over a connected space $X$ is given by a fiber bundle. This fact will follow from Corollary~\ref{Cor_family_compact_fiber_bundle} below. \end{remark} \begin{remark} If $( Y^s )_{s \in X}$ is a continuous family of $n$-manifolds (with boundary) and $W \subset \cup_{s \in X} Y^s$ is open, then $( Y^s \cap W )_{s \in X}$ carries a natural structure of a continuous family of $n$-manifolds (with boundary).
The family charts of this family are precisely those family charts $(U_i, \phi_i, V_i)$ of $( Y^s )_{s \in X}$ with the property that $U_i \subset W$. So for example, if $W \subset \mathbb{R}^2$ is open, then the projection onto the second factor restricted to $W$, $\pi : W \to \mathbb{R}$, defines a continuous family of 1-manifolds $(Y^s := W \cap (\mathbb{R} \times \{ s \}))_{s \in \mathbb{R}}$. This example shows that the topology of the fibers $Y^s$ is not necessarily constant. \end{remark} Next, we characterize maps between continuous families of manifolds. \begin{definition}[Continuity of maps between continuous families] \label{def_continuity_maps_between_families} If $\pi_i:Y_i\rightarrow X_i$, $i = 1,2$, are continuous families of $n$-manifolds with boundary, then a {\bf (continuous) family of smooth maps} is a pair $(F, f)$ where $F:Y_1 \rightarrow Y_2$, $f:X_1\rightarrow X_2$ are continuous maps such that $\pi_2 \circ F= f\circ\pi_1$, and for every $y \in Y_1$ there are family charts $(U_i, \phi_i, V_i)$ of $\pi_i :Y_i \rightarrow X_i$ such that $y \in U_1$, $F (U_1) \subset U_2$ and such that \[ V_1 \times \pi_1 (U_1) \xrightarrow{\phi_1^{-1}} U_1 \xrightarrow{\;\; F \;\;} U_2 \xrightarrow{\; \phi_2 \;} V_2 \times \pi_2 (U_2) \longrightarrow V_2 \] describes a family $(\beta_s : V_1 \to V_2)_{s \in \pi_1 (U_1)}$ of smooth maps that is continuous in the $C^\infty_{\operatorname{loc}}$-topology. \end{definition} If we express the two continuous families $\pi_i : Y_i \to X_i$ as $( Y^s_i )_{s \in X_i}$, then we will sometimes also express $(F, f)$ as $( F^s : Y^s_1 \to Y^{f(s)}_2)_{s \in X_1}$ and we will say that this family of smooth maps is {\bf transversely continuous (in the smooth topology)}. Two special cases of this terminology will be particularly important for us. First, consider the case in which $X_2$ consists of a single point and $(Y^s_2 = M)_{s \in X_2}$ consists of a single smooth manifold with boundary $M$.
Then Definition~\ref{def_continuity_maps_between_families} expresses transverse continuity of a family of maps $(F^s : Y^s_1 \to M)_{s \in X_1}$, or equivalently, of a map $F : \cup_{s \in X_1} Y^s_1 \to M$; in the case in which $M = \mathbb{R}$ this yields a definition of transverse continuity for scalar maps $F : \cup_{s \in X_1} Y^s_1 \to \mathbb{R}$. Second, consider the case in which $X_1 = X_2$, $f = \id_{X_1}$ and $(Y_1^s := M \times \{ s \})_{s \in X_1}$ is the trivial family. In this case Definition~\ref{def_continuity_maps_between_families} introduces the notion of transverse continuity for families of smooth maps $(F^s : M \to Y_2^s)_{s \in X_1}$. \begin{definition}[Isomorphism of continuous families] A continuous family of smooth maps $(F, f)$ is called an {\bf isomorphism} if $F, f$ are invertible and if $(F^{-1}, f^{-1})$ constitutes a continuous family of smooth maps as well. If $X_1 = X_2$, then we also call the map $F : Y_1 \to Y_2$ an isomorphism if $(F, \id_{X_1})$ is an isomorphism. \end{definition} Due to the implicit function theorem we have: \begin{lemma} A continuous family $(F,f)$ is an isomorphism if and only if $f$ is a homeomorphism and all maps $F^s : Y_1^s \to Y_2^{f(s)}$, $s \in X_1$, are diffeomorphisms. In particular, if $(Y_i^s)_{s \in X}$, $i=1,2$, are continuous families of smooth manifolds over the same space $X$, then a continuous family of maps $(F^s : Y_1^s \to Y_2^s)_{s \in X}$ constitutes an isomorphism if and only if all maps are diffeomorphisms. \end{lemma} Next, we define the notion of transverse continuity for tensor fields on a continuous family of smooth manifolds. \begin{definition}[Transverse continuity of tensor fields] \label{def_continuity_smooth_objects} Let $(Y^s)_{s \in X}$ be a continuous family of smooth $n$-manifolds with boundary and for every $s\in X$, let $\xi^s$ be a tensor field on $Y^s$.
We say that the family $(\xi^s)_{s\in X}$ is {\bf transversely continuous (in the smooth topology)} if for every family chart $\phi:U\rightarrow V \times \pi(U)$ of $(Y^s)_{s \in X}$ and all $s \in \pi (U)$ the push forwards of $\xi^s$ under \[ U \cap Y^s \xrightarrow{\quad \phi \quad} V \times \{s\} \longrightarrow V \] are smooth tensor fields on $V$ that depend continuously on $s \in \pi(U)$ in the $C^\infty_{\operatorname{loc}}$-topology. \end{definition} \begin{remark} In the case of $(0,0)$-tensor fields, this notion offers another definition of transverse continuity of scalar functions, which is equivalent to the one derived from Definition~\ref{def_continuity_maps_between_families}. \end{remark} Adapting elementary results to families, we have: \begin{lemma} \label{lem_generalizations_to_families} \begin{enumerate}[label=(\alph*)] \item Let $(F^s : Y_1^s \to Y_2^{f(s)})_{s \in X_1}$ be a continuous family of diffeomorphisms between two continuous families of smooth manifolds with boundary $(Y_i^s)_{s \in X_i}$, $i = 1,2$, and let $(\xi_s)_{s \in X_2}$ be a transversely continuous family of tensor fields on $(Y_2^s)_{s \in X_2}$. Then the pullbacks $((F^s)^* \xi_{f(s)})_{s \in X_1}$ are also transversely continuous. \item Consider a continuous family $(Y^s)_{s \in X}$ of smooth $n$-manifolds with boundary and a transversely continuous family of smooth maps $(F^s : Y^s \to Z)_{s \in X}$ into a $k$-dimensional manifold with boundary. Assume that $F^s (\partial Y^s) \subset \partial Z$ for all $s \in X$. Let $z \in Z$ be a regular value of $F^s$ for all $s \in X$. Then the collection $( \td{Y}^s := (F^s)^{-1}(z)\subset Y^s)_{s\in X}$ of submanifolds inherits the structure of a continuous family of smooth $(n-k)$-manifolds with boundary.
\end{enumerate} \end{lemma} Using Definition~\ref{def_continuity_smooth_objects}, we can make the following definition: \begin{definition}[Continuous family of Riemannian manifolds] \label{Def_cont_fam_RM} A {\bf (continuous) family of Riemannian manifolds $( M^s, g^s)_{s \in X}$} consists of a continuous family $( M^s)_{s \in X}$ of smooth manifolds with boundary and a transversely continuous family of Riemannian metrics $(g^s)_{s \in X}$. An isometry between two continuous families of Riemannian manifolds $(M_i^s, \linebreak[1] g_i^s)_{s \in X_1}$, $i=1,2$, is an isomorphism $(F, f)$ between the associated continuous families of manifolds with boundary $(M_i^s)_{s \in X_1}$ with the property that $(F^s)^* g_2^{f(s)} = g_1^s$ for all $s \in X_1$. \end{definition} \begin{remark} If $(M^s)_{s \in X}$ is given by a fiber bundle $\pi : Y \to X$ with fiber $M \approx M^s$ and structure group $\operatorname{Diff} (M)$ (see Remark~\ref{rmk_fiber_bundle_construction}), then any continuous family of Riemannian metrics $(g^s)_{s \in X}$, turning $(M^s, g^s)_{s \in X}$ into a continuous family of Riemannian manifolds in the sense of Definition~\ref{Def_cont_fam_RM}, is given by a fiberwise family of Riemannian metrics $(g^s)_{s \in X}$. \end{remark} \begin{remark} \label{rmk_universal_family_RM} For any smooth manifold $M$ with boundary consider the space $\Met (M)$ of Riemannian metrics equipped with the $C^\infty_{\operatorname{loc}}$-topology. Then $(M \times \{ g \}, g )_{g \in \Met (M)}$ is a continuous family of Riemannian manifolds. If $M$ is closed and 3-dimensional, then Theorem~\ref{thm_existence_family_k} applied to this family asserts the existence of a family of singular Ricci flows $(\mathcal{M}^g)_{g \in \Met (M)}$ such that $\mathcal{M}^g_0 = (M,g)$ for all $g \in \Met (M)$. \end{remark} Lastly, we define continuous families of Ricci flow spacetimes and continuous families of singular Ricci flows. 
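Before doing so, we illustrate Definition~\ref{Def_cont_fam_RM} with the universal family from Remark~\ref{rmk_universal_family_RM} (this is only a sketch of the verification): the identity map \[ \phi := \id : \bigcup_{g \in \Met (M)} M \times \{ g \} \longrightarrow M \times \Met (M) \] is a single global family chart with $V = M$, and the push forward of the metric on the slice $M \times \{ g \}$ under $M \times \{ g \} \to V$ is $g$ itself. Transverse continuity of the family of metrics, in the sense of Definition~\ref{def_continuity_smooth_objects}, therefore amounts to the continuity of the tautological map $\Met (M) \to \Met (M)$ in the $C^\infty_{\operatorname{loc}}$-topology, which holds trivially.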
\begin{definition} \label{def_family_RF_spacetime} A {\bf continuous family of Ricci flow spacetimes} \[ ( \mathcal{M}^s, \t^s, \partial_\t^s, g^s )_{s \in X}, \] or in short $(\mathcal{M}^s)_{s \in X}$, consists of: \begin{itemize} \item a continuous family $(\mathcal{M}^s)_{s \in X}$ of smooth $4$-manifolds with boundary, \item a transversely continuous family of smooth scalar functions $\t^s$ on $\mathcal{M}^s$; we will often write $\t : \cup_{s \in X} \mathcal{M}^s \to [0, \infty)$, \item a transversely continuous family of smooth vector fields $\partial_\t^s$ on $\mathcal{M}^s$ such that $\partial^s_\t \t^s \equiv 1$, \item a transversely continuous family of smooth inner products $g^s$ on the subbundle $\ker d\t^s \subset T\mathcal{M}^s$; here we use the splitting $T\mathcal{M}^s = \ker d\t^s \oplus \spann \{ \partial_\t^s \}$ to view $g^s$ as a $(0,2)$-tensor field on $\mathcal{M}^s$ and define transverse continuity as in Definition~\ref{def_continuity_smooth_objects}. \end{itemize} We assume that for all $s \in X$ the tuple $( \mathcal{M}^s, \t^s, \partial_\t^s, g^s )$ is a Ricci flow spacetime in the sense of Definition~\ref{def_RF_spacetime}. If $\mathcal{M}^s$ is even a singular Ricci flow, in the sense of Definition~\ref{Def_sing_RF}, for all $s \in X$, then we call $(\mathcal{M}^s)_{s \in X}$ a {\bf continuous family of singular Ricci flows}. \end{definition} By Lemma~\ref{lem_generalizations_to_families} we have: \begin{lemma} \label{lem_cont_fam_time_slice_inherit} If $(\mathcal{M}^s)_{s \in X}$ is a continuous family of Ricci flow spacetimes, then for any $t \geq 0$, the set of time-$t$-slices $(\mathcal{M}^s_t = (\t^s)^{-1} (t), g^s_t)_{s \in X}$ inherits the structure of a continuous family of Riemannian manifolds. \end{lemma} \begin{remark} In Definition~\ref{def_family_RF_spacetime} we don't require that $(\mathcal{M}^s)_{s \in X}$ comes from a fiber bundle, as explained in Remark~\ref{rmk_fiber_bundle_construction}.
In fact, even if $X$ is connected --- such as in the case $X = \Met (M)$ of Remark~\ref{rmk_universal_family_RM} --- the topology of the spacetimes $\mathcal{M}^s$ may depend on $s$. Similarly, in the context of Lemma~\ref{lem_cont_fam_time_slice_inherit}, the time $t$ may be a singular time for some but not all parameters $s \in X$. In this case, the topology of the time-$t$-slices may depend on $s$, even if the topology of $\mathcal{M}^s$ doesn't. \end{remark} \subsection{Existence of family charts} The following result will be used frequently throughout this paper. \begin{lemma} \label{lem_chart_near_compact_subset} If $(M^s)_{s \in X}$ is a continuous family of smooth manifolds with boundary, $s_0 \in X$ and $K \subset M^{s_0}$ is a compact subset, then there is a family chart $(U,\phi, V)$ such that $K \subset U$. \end{lemma} \begin{remark} We note that Lemma~\ref{lem_chart_near_compact_subset} holds more generally for laminations, when one has a compact subset $K$ of a leaf with trivial holonomy. \end{remark} A consequence of Lemma~\ref{lem_chart_near_compact_subset} is: \begin{corollary} \label{Cor_family_compact_fiber_bundle} If $(M^s)_{s \in X}$ is a continuous family of smooth and compact manifolds with boundary and if $X$ is connected, then $(M^s)_{s \in X}$ comes from a fiber bundle with fiber $M \approx M^s$ and structure group $\operatorname{Diff} (M)$, in the sense of Remark~\ref{rmk_fiber_bundle_construction}. \end{corollary} \begin{proof}[Proof of Lemma~\ref{lem_chart_near_compact_subset}] Recall the standard projection $\pi : \cup_{s \in X} M^s \to X$. \begin{Claim} There is an open subset $K \subset U\subset \cup_{s \in X} M^s$ and a retraction $\pi_V :U\rightarrow U \cap M^{s_0} =: V$ that is transversely continuous in the smooth topology. \end{Claim} \begin{proof} Let $\{ (U_i, \phi_i, V_i) \}_{i\in I' \subset I}$ be a finite collection of family charts such that $K\subset \cup_{i \in I'} U_i$. 
For each $i\in I'$ define $\pi_i:U_i\rightarrow U_i\cap M^{s_0}$ to be the composition \[ U_i \xrightarrow{ \; \phi_i \;} V_i \times \pi (U_i) \longrightarrow V_i \longrightarrow V_i \times \{s_0 \} \xrightarrow{\phi_i^{-1}} M^{s_0} \cap U_i. \] We will obtain the desired map $\pi_V$ by gluing the retractions $\pi_i$, $i\in I'$, with a partition of unity. Let $Z:=\cup_{i \in I'} M^{s_0} \cap U_i$ and fix a partition of unity $\{\alpha_i:Z\rightarrow [0,1]\}_{i\in I'}$ subordinate to the cover $\{ M^{s_0} \cap U_i \}_{i\in I'}$ of $Z$. We may assume without loss of generality that $Z$ is an embedded submanifold of $\mathbb{R}^N_{\geq 0} = \mathbb{R}^{N-1} \times [0, \infty)$, for some large $N$, such that $Z \cap \partial \mathbb{R}^N_{\geq 0} = \partial Z$ and $Z$ meets the boundary of $\mathbb{R}^N_{\geq 0}$ orthogonally. Let $Z' \Subset Z$ be a relatively compact neighborhood of $K$. Choose $\delta>0$ such that the nearest point retraction $r_{Z'}: N_{\delta}(Z')\rightarrow Z'$ is well-defined and smooth, where $N_{\delta}(Z')$ is the exponential image of the vectors of length $< \delta$ in the normal bundle $\nu Z'$ of $Z'$ in $\mathbb{R}^N_{\geq 0}$. Consider the convex combination $s_{Z'} := \sum_{i \in I'} (\alpha_i \circ \pi_i) \pi_i:\cup_{i \in I'} U_i\rightarrow \mathbb{R}^N_{\geq 0}$. Note that $s_{Z'} |_{Z'} = \id_{Z'}$. Then $U := s_{Z'}^{-1} ( N_\delta (Z')) \subset \cup_{i \in I'} U_i$ and $\pi_V := r_{Z'} \circ s_{Z'}$ have the desired properties. \end{proof} Let $\pi_V:U\rightarrow V$ be as in the claim. After shrinking $U$ and $V$ we may assume without loss of generality that $\pi_V |_{U\cap M^s}: M^s \cap U \to V$ is a local diffeomorphism for every $s \in X$. Consider the map $\phi:=(\pi_V, \pi): U\rightarrow V \times X$. Since $\pi_V$ is a fiberwise local diffeomorphism, it follows from the inverse function theorem (for families of maps) that $\phi$ is a local homeomorphism.
Therefore, by compactness of $K$ and the fact that $\pi_V |_V=\id_V$, we find an open neighborhood $U' \subset U$ of $K$ such that $\phi |_{U'}$ is injective. Hence, after shrinking $U$ and $V$ we may assume that $\phi$ is a homeomorphism onto its image $\phi(U)\subset V \times X$. Shrinking $U$ and $V$ further we can arrange for this image to be a product region in $V \times X$. Finally, we replace the target with $\phi(U)$, so that $\phi$ is a homeomorphism. \end{proof} \subsection{Proof of Theorem~\ref{thm_existence_family_k}} \label{subsec_construction_um} By Theorem~\ref{subsec_sing_RF_exist_unique}, for every $s \in X$ we may fix a singular Ricci flow $(\mathcal{M}^s, \t^s, \partial^s_\t, g^s)$ such that $\mathcal{M}^s_0$ is isometric to $(M^s,g^s)$. In the following, we will identify $(\mathcal{M}^s_0, g^s_0)$ with $(M^s, g^s)$. Let $Y := \sqcup_{s \in X} \mathcal{M}^s$ be the disjoint union and $\pi : Y \to X$ the natural projection. Before proceeding further, we first indicate the idea of the proof. Our goal will be to construct a suitable topology, as well as family charts on $Y$. In order to achieve this, we will first introduce the notion of ``families of almost-isometries'' (see Definition~\ref{Def_fam_alm_isometries} below), which will serve as a guide to decide whether a subset of $Y$ is open or a map defined on a subset of $Y$ is a family chart. Roughly speaking, a family of almost-isometries near some parameter $s \in X$ consists of maps $\psi_{s'}$ between a large subset of $\mathcal{M}^s$ and a large subset of $\mathcal{M}^{s'}$, for $s'$ close to $s$, which arise from the stability theory of singular Ricci flows in Lemma~\ref{lem_stability}. These maps are almost isometric up to arbitrarily high precision as $s' \to s$. In our construction, the topology and the family charts on $Y$ will be chosen in such a way that a posteriori we have $\psi_{s'} \to \id_{\mathcal{M}^s}$ in the smooth topology as $s' \to s$.
After defining the topology and family charts on $Y$, we need to verify all required properties, such as the property that the domains of all family charts cover $Y$. We will do this by propagating family charts from the family of time-$0$ slices to all of $Y$ using the flow of the time vector fields $\partial^s_{\t}$ and the exponential map with respect to the metrics $g^s_t$. Let us now continue with the proof. \begin{definition}[Families of almost-isometries] \label{Def_fam_alm_isometries} Consider a neighborhood $S \subset X$ of some $s \in X$. A family of diffeomorphisms \[ \{ \psi_{s'}: \mathcal{M}^s \supset Z_{s'} \longrightarrow Z_{s'}'\subset\mathcal{M}^{s'} \}_{s' \in S}, \] where $Z_{s'}, Z'_{s'}$ are open, is called a {\bf family of almost-isometries near $s$} if the following is true: \begin{enumerate}[label=(\arabic*)] \item \label{prop_fam_alm_isometries_1} For every $\varepsilon > 0$ there is a neighborhood $S_\varepsilon \subset S$ of $s$ such that $\psi_{s'}$ is an $\varepsilon$-isometry for all $s' \in S_\varepsilon$, in the sense of Definition~\ref{def_eps_isometry_rf_spacetime}. \item \label{prop_fam_alm_isometries_2} $\psi_{s'} |_{\mathcal{M}^s_0} \to \id_{\mathcal{M}^s_0}$ in $C^\infty_{\operatorname{loc}}$ as $s' \to s$, where we use the identification $\mathcal{M}^s_0 = M^s$ and interpret $C^\infty_{\operatorname{loc}}$-convergence within the family of 3-manifolds $( M^s )_{s \in X}$. \end{enumerate} \end{definition} Due to Lemmas~\ref{lem_stability} and \ref{lem_chart_near_compact_subset} we have: \begin{lemma} \label{lem_almost_isometries_existence} For every $s \in X$ there is a family of almost-isometries near $s$. \end{lemma} Next we use families of almost-isometries to define a topology on the total space $Y$.
\begin{definition}[Topology on $Y$] \label{def_topology_on_Y} We define a subset $U \subset Y$ to be {\bf open} if for every $p \in U$, $s := \pi (p) \in X$ and every family of almost-isometries \[ \{ \psi_{s'}: \mathcal{M}^s \supset Z_{s'} \longrightarrow Z_{s'}'\subset\mathcal{M}^{s'} \}_{s' \in S} \] near $s$ there are neighborhoods $W^* \subset S$ of $s$ and $U^* \subset \mathcal{M}^s$ of $p$ such that $U^* \subset \psi_{s'}^{-1} (U)$ for all $s' \in W^*$. \end{definition} \begin{lemma} Definition~\ref{def_topology_on_Y} defines a topology on $Y$ and the projection $\pi : Y \to X$ is continuous and open. \end{lemma} \begin{proof} For the first statement the only non-trivial part is showing that the intersection of two open subsets $U_1, U_2 \subset Y$ is open. Assume that $p \in U_1 \cap U_2$, $s := \pi (p)$ and consider a family of almost-isometries $\{ \psi_{s'} \}_{s' \in S}$ near $s$. There are neighborhoods $W^*_i \subset S$ of $s$ and $U^*_i \subset \mathcal{M}^s$ of $p$, $i = 1,2$, such that $U^*_i \subset \psi_{s'}^{-1} (U_i )$ for all $s' \in W^*_i$. It follows that $U^*_1 \cap U^*_2 \subset \psi_{s'}^{-1} (U_1 \cap U_2)$ for all $s' \in W^*_1 \cap W^*_2$. This shows that $U_1 \cap U_2$ is open. The continuity of $\pi$ is a direct consequence of Definition~\ref{def_topology_on_Y} and the openness of $\pi$ follows using Lemma~\ref{lem_almost_isometries_existence}. \end{proof} Our next goal will be to find family charts on $Y$, which turn $\{ \mathcal{M}^s \}_{s \in X}$ into a continuous family in the sense of Definition~\ref{def_continuous_family_manifolds}. For technical reasons, which will become apparent later, we will only construct a certain subclass of such family charts. We will also define a variant of a family chart whose domain is contained in the union of all time-$t$-slices $\mathcal{M}^s_t$ for a fixed $t$. This notion will be helpful in the statement and proof of Lemma~\ref{lem_propagating_charts}.
\begin{definition} \label{Def_time_pres_fam_chart} A tuple $(U, \phi, V, \t_V)$ is called a {\bf time-preserving family chart} if \begin{enumerate}[label=(\arabic*)] \item \label{prop_time_pres_fam_chart_1} $V$ is a smooth 4-manifold with boundary equipped with a smooth map $\t_V : V \to [0, \infty)$ such that $\partial V = \t_V^{-1} (0)$. \item \label{prop_time_pres_fam_chart_2} $\phi : U \to V \times \pi (U)$ is a map. \item \label{prop_time_pres_fam_chart_3} $\t_V \circ \proj_V \circ \phi = \t^s$ for all $s \in \pi (U)$. \item \label{prop_time_pres_fam_chart_4} $\proj_{\pi (U)} \circ \phi = \pi|_U$, as in Property~\ref{prop_continuous_family_manifolds_3} of Definition~\ref{def_continuous_family_manifolds}. \item \label{prop_time_pres_fam_chart_5} $\proj_V \circ \phi |_{U \cap \mathcal{M}^s} : U \cap \mathcal{M}^s \to V$ is a diffeomorphism for all $s \in \pi (U)$. \item \label{prop_time_pres_fam_chart_6} For every $(v,s) \in V \times \pi (U)$ and every family of almost-isometries $\{ \psi_{s'} \}_{s' \in S}$ near $s$ there is a neighborhood $V' \times W' \subset V \times \pi (U)$ such that for all $s' \in W'$ the maps $(\psi_{s'}^{-1} \circ \phi^{-1}) (\cdot, s') : V' \to \mathcal{M}^s$ are well defined on $V'$ and converge to $\phi^{-1} (\cdot, s)$ in $C^\infty_{\operatorname{loc}}$ as $s' \to s$. \end{enumerate} We say that $(U, \phi, V)$ is a {\bf family chart at time $t$} if $U \subset \cup_{s \in X} \mathcal{M}^s_t$, $V$ is a smooth 3-manifold and Properties~\ref{prop_time_pres_fam_chart_2}, \ref{prop_time_pres_fam_chart_4}--\ref{prop_time_pres_fam_chart_6} hold with $\mathcal{M}^s$ replaced by $\mathcal{M}^s_t$. \end{definition} \begin{lemma} Assume that $(U, \phi, V, \t_V)$ is a time-preserving family chart. Then: \begin{enumerate}[label=(\alph*)] \item $U \subset Y$ is open and $\phi$ is a homeomorphism.
\item The push forwards of the objects $\partial_\t^s, g^s$ onto $V$ via $(\proj_V \circ \phi )(\cdot, s)$ vary continuously in $s$ in the $C^\infty_{\operatorname{loc}}$-sense. \item For any $t \geq 0$ the restriction $\phi |_{U \cap \cup_{s \in X} \mathcal{M}^s_t} : U \cap \cup_{s \in X} \mathcal{M}^s_t \to \t_{V}^{-1} (t) \times \pi (U)$ is a family chart at time $t$. \end{enumerate} \end{lemma} \begin{proof} (a) \quad Let $W := \pi (U)$ and consider a non-empty subset $U_0 \subset U$. We will show that $U_0$ is open in the sense of Definition~\ref{def_topology_on_Y} if and only if $\phi (U_0) \subset V \times \pi (U)$ is open in the product topology. For this purpose let $p \in U_0$ and set $(v,s) := \phi (p) \in V \times W$. Choose a family of almost-isometries $\{ \psi_{s'} \}_{s' \in S}$ near $s$ according to Lemma~\ref{lem_almost_isometries_existence} and choose $V' \subset V$ and $W' \subset W$ as in Property~\ref{prop_time_pres_fam_chart_6} of Definition~\ref{Def_time_pres_fam_chart}. Set $U' := \phi^{-1} (V' \times W')$. Recall that \begin{equation} \label{eq_psi_phi_C_infty_convergence} (\psi^{-1}_{s'} \circ \phi^{-1} )(\cdot, s') \xrightarrow{\quad C^\infty_{\operatorname{loc}} \quad} \phi^{-1} (\cdot, s) \qquad \text{as} \quad s' \to s. \end{equation} If $U_0$ is open in the sense of Definition~\ref{def_topology_on_Y}, then there are neighborhoods $W^* \subset S$ of $s$ and $U^* \subset U \cap \mathcal{M}^s$ of $p$ such that $U^* \subset \psi_{s'}^{-1} (U_0)$ for all $s' \in W^*$. Let $V^* \times \{ s \} := \phi (U^*)$. By (\ref{eq_psi_phi_C_infty_convergence}) there are open neighborhoods $V'' \subset V'$ of $v$ and $W'' \subset W'$ of $s$ such that for all $s' \in W''$ \[ (\psi^{-1}_{s'} \circ \phi^{-1} )(V'' \times \{ s' \}) \subset \phi^{-1} ( V^* \times \{ s \} ) = U^*. \] It follows that $\phi^{-1} (V'' \times \{ s' \}) \subset \psi_{s'} (U^*) \subset U_0$ and therefore $\phi^{-1} ( V'' \times W'' ) \subset U_0$. 
This proves that $\phi (U_0)$ is open. Conversely, assume that $\phi (U_0)$ is open. Without loss of generality, we may assume that $\phi (U_0 ) = V_0 \times W_0$ for some open subsets $V_0 \subset V$, $W_0 \subset W$, so $(v,s) \in V_0 \times W_0$. By (\ref{eq_psi_phi_C_infty_convergence}) and the implicit function theorem we can find neighborhoods $V'' \subset V_0 \cap V'$ of $v$ and $W'' \subset W_0 \cap W'$ of $s$ such that for all $s' \in W''$ we have \[ U^* := \phi^{-1} ( V'' \times \{ s \} ) \subset ( \psi^{-1}_{s'} \circ \phi^{-1} )( V_0 \times \{ s' \} ) \subset \psi_{s'}^{-1} ( \phi^{-1} (V_0 \times W_0)) = \psi_{s'}^{-1} (U_0). \] This proves that $U_0$ is open. (b) \quad Fix $s \in \pi (U)$ and choose a family of almost-isometries $\{ \psi_{s'} \}_{s' \in S}$ near $s$. For any $s' \in \pi (U)$ we have \[ \big( (\psi_{s'}^{-1} \circ \phi^{-1} ) (\cdot, s') \big)_* (\proj_V \circ \phi )_* \partial^{s'}_\t = (\psi_{s'}^{-1})_* \partial^{s'}_\t \xrightarrow{\quad C^\infty_{\operatorname{loc}} \quad} \partial^s_\t \] as $s' \to s$. Since $ (\psi_{s'}^{-1} \circ \phi^{-1} ) (\cdot, s') \to \phi^{-1} (\cdot, s)$ in $C^\infty_{\operatorname{loc}}$, we obtain that \[ (\proj_V \circ \phi )_* \partial^{s'}_\t \xrightarrow{\quad C^\infty_{\operatorname{loc}} \quad} (\proj_V \circ \phi )_* \partial^s_\t, \] as desired. The continuity of the push forwards of $g^s$ follows analogously. Assertion (c) is a direct consequence of Definition~\ref{Def_time_pres_fam_chart}. \end{proof} \begin{lemma} \label{lem_fam_chart_is_fam_chart_time_0} The family charts at time $0$ and the subspace topology induced from Definition~\ref{def_topology_on_Y} define a structure of a continuous family of manifolds on $\{ \mathcal{M}^s_0 \}_{s \in X}$ that agrees with the given structure on $(M^s)_{s \in X}$ if we use the identification $M^s = \mathcal{M}^s_0$.
\end{lemma} \begin{proof} This is a direct consequence of Definitions~\ref{def_continuous_family_manifolds}, \ref{def_topology_on_Y} and \ref{Def_time_pres_fam_chart} and Property~\ref{prop_fam_alm_isometries_2} of Definition~\ref{Def_fam_alm_isometries}. \end{proof} \begin{lemma} \label{lem_compatible_fam_chart} If $(U_i, \phi_i, V_i, \t_{V_i})$, $i=1,2$, are two time-preserving family charts according to Definition~\ref{Def_time_pres_fam_chart}, then both charts are compatible in the sense of Property~\ref{prop_continuous_family_manifolds_4} of Definition~\ref{def_continuous_family_manifolds}. In other words, the transition map \[ \phi_{12}:=\phi_2 \circ\phi_1^{-1}: V_1 \times \pi(U_1)\supset\phi_1(U_1\cap U_2)\rightarrow \phi_2(U_1\cap U_2)\subset V_2 \times \pi(U_2) \] has the form $\phi_{12}(v,s)=(\beta(v,s),s)$, where $\beta : \phi_1(U_1\cap U_2)\rightarrow V_2$ locally defines a family of smooth maps $s \mapsto \beta(\cdot, s)$ that depend continuously on $s$ in the $C^\infty_{\operatorname{loc}}$-topology. \end{lemma} \begin{proof} Let $p \in U_1 \cap U_2$ and set $(v_i, s) := \phi_i (p)$. Fix a family of almost-isometries $\{ \psi_{s'} \}_{s' \in S}$ near $s$. Choose neighborhoods $V'_i \times W'_i \subset V_i \times \pi (U_i)$ of $(v_i, s)$ according to Property~\ref{prop_time_pres_fam_chart_6} of Definition~\ref{Def_time_pres_fam_chart}. After shrinking $V'_1, W'_1$, we may assume without loss of generality that $\phi_1^{-1} ( V'_1 \times W'_1 ) \subset \phi_2^{-1} (V'_2 \times W'_2)$.
So for all $s' \in W'_1 \cap W'_2$ \[ (\psi_{s'}^{-1} \circ \phi_1^{-1}) (V'_1 \times \{ s' \}) \subset (\psi_{s'}^{-1} \circ \phi_2^{-1}) (V'_2 \times \{ s' \}) \] and therefore for all $v' \in V'_1$ \[ (\beta (v', s') ,s') = \big( ( \psi_{s'}^{-1} \circ \phi_2^{-1} )^{-1} \circ (\psi_{s'}^{-1} \circ \phi_1^{-1}) \big) (v', s'). \] Since $(\psi_{s'}^{-1} \circ \phi_i^{-1}) (\cdot, s') \to \phi_i^{-1} (\cdot, s)$ in $C^\infty_{\operatorname{loc}}$ as $s' \to s$, this implies that $\beta (\cdot, s') \to \beta (\cdot, s)$ in $C^\infty_{\operatorname{loc}}$ on $V'_1$ as $s' \to s$. \end{proof} Next we show that the domains of all time-preserving family charts cover $Y$. For this purpose, we first prove: \begin{lemma} \label{lem_propagating_charts} Suppose that for some $s \in X$, $t\geq 0$ the point $p\in \mathcal{M}^s_t$ lies in the domain $U$ of a family chart $(U, \phi, V)$ at time $t$, in the sense of Definition~\ref{Def_time_pres_fam_chart}. Consider another point $p'\in \mathcal{M}^s$. \begin{enumerate}[label=(\alph*)] \item \label{ass_propagating_charts_a} If $p'=p(t')$ for some $t' \geq 0$, then $p'$ lies in the domain $U'$ of a time-preserving family chart $(U', \phi', V', \t_{V'})$. See Definition~\ref{def_points_in_RF_spacetimes} for the notation $p(t')$. \item \label{ass_propagating_charts_b} If $p'\in B(p,r)$ for $r<\operatorname{injrad}(\mathcal{M}^s_t,p)$, then $p'$ lies in the domain $U'$ of a family chart $(U', \phi', V')$ at time $t$. \end{enumerate} \end{lemma} \begin{proof} (a) \quad Our strategy will be to extend $\phi$ via the flow of $\partial^s_\t$. Choose a bounded interval $I \subset [0, \infty)$ that is open in $[0, \infty)$, contains $t, t'$ and for which $p ( \ov{I})$ is well-defined, where $\ov{I}$ denotes the closure of $I$. Let $V'' \subset V$ be a subset such that: \begin{enumerate} \item $V''$ is open and has compact closure $\ov{V}''$ in $V$. \item $\phi (p) \in V'' \times \{ s \}$.
\item $(\phi^{-1} (\ov{V}'',s))(t'')$ is well-defined for all $t'' \in \ov{I}$. \end{enumerate} Let $W' \subset \pi(U)$ be the set of parameters $s' \in \pi (U)$ for which $(\phi^{-1} (\ov{V}'',s'))(t'')$ is well-defined for all $t'' \in \ov{I}$. Consider the map \[ \alpha : (V'' \times I) \times W' \longrightarrow Y, \qquad (v',t'',s') \longmapsto (\phi^{-1} (v',s')) (t''). \] Note that $\alpha$ is injective. Let $V' := V'' \times I$, $U' := \alpha (V' \times W')$ and $\phi' := \alpha^{-1} : U' \to V' \times W'$. We claim that $(U', \phi', V', \proj_I)$ is a time-preserving family chart in the sense of Definition~\ref{Def_time_pres_fam_chart}. We need to verify Property~\ref{prop_time_pres_fam_chart_6} of Definition~\ref{Def_time_pres_fam_chart} and that $W'$ is open; the remaining properties of Definition~\ref{Def_time_pres_fam_chart} follow directly by construction. For this purpose consider some $s_0 \in W'$, and let $\{ \psi_{s'} \}_{s' \in S}$ be a family of almost-isometries near $s_0$. Since $\ov{V}''$ is compact, we can find neighborhoods $W_0 \subset \pi (U) \cap S$ of $s_0$ and $V_0 \subset V$ of $\ov{V}''$ such that for all $s' \in W_0$ the maps $(\psi_{s'}^{-1} \circ \phi^{-1}) (\cdot, s')$ are well defined on $V_0$ and converge to $\phi^{-1} (\cdot, s_0)$ in $C^\infty_{\operatorname{loc}}$ as $s' \to s_0$. For any $s' \in W_0$ and $v' \in V_0$ consider the trajectory of $\partial^{s'}_\t$ through $\phi^{-1} (v', s')$. The image of this trajectory under the map $\psi^{-1}_{s'}$ is a trajectory of $\psi_{s'}^* \partial^{s'}_\t$ through $(\psi^{-1}_{s'} \circ \phi^{-1} )(v', s')$, wherever defined.
Since $(\psi_{s'}^{-1} \circ \phi^{-1}) (\cdot, s') \to \phi^{-1} (\cdot, s_0)$ and $\psi_{s'}^* \partial^{s'}_\t \to \partial^{s_0}_\t$ in $C^\infty_{\operatorname{loc}}$ as $s' \to s_0$, we can find a neighborhood $W_1 \subset W_0$ of $s_0$ with the property that for any $s' \in W_1$ and $v' \in \ov{V}''$ the trajectory of $\psi_{s'}^* \partial^{s'}_\t$ through $(\psi^{-1}_{s'} \circ \phi^{-1}) (v', s')$ exists for all times of the interval $\ov{I}$. It follows that $W_1 \subset W'$ and for any $s' \in W_1$ the map $(\psi_{s'}^{-1} \circ \alpha) (\cdot, s')$ restricted to $V''$ is given by the flow of $\psi_{s'}^* \partial^{s'}_\t$ starting from $(\psi^{-1}_{s'} \circ \phi^{-1}) (\cdot, s')$. Due to the smooth convergence discussed before we have \begin{equation*} \label{eq_psi_alpha_convergence} (\psi_{s'}^{-1} \circ \alpha) (\cdot, s') \xrightarrow{\quad C^\infty_{\operatorname{loc}} \quad} \alpha (\cdot, s_0) \qquad \text{as} \quad s' \to s_0. \end{equation*} This verifies Property~\ref{prop_time_pres_fam_chart_6} of Definition~\ref{Def_time_pres_fam_chart} and shows that $W'$ is open. \medskip (b) \quad After shrinking $V$ and $W := \pi (U)$ if necessary, we may assume that for some $r'>r$ and every $s' \in W$ and $v' \in V$, we have \begin{equation} \label{eqn_r_prime_injrad} r<r'<\operatorname{injrad}(\mathcal{M}^{s'}_t, \phi^{-1} (v',s'))\,; \end{equation} this follows from a straightforward convergence argument. Let $(v,s) := \phi (p)$. Using Gram-Schmidt orthogonalization we can find a continuous family of linear maps $( \varphi_{s'} : \mathbb{R}^3 \to T_v V )_{s' \in W}$ that are isometries with respect to the push forward of $g^{s'}_t$ via $\proj_V \circ \phi$. Denote by $B (0,r) \subset \mathbb{R}^3$ the $r$-distance ball and define \[ \alpha : B (0,r) \times W \longrightarrow \cup_{s' \in X} \mathcal{M}^{s'}_t, \quad \alpha (\cdot ,s') := \exp^{g^{s'}_t}_{\phi^{-1} (v, s')} \circ d \big(\phi^{-1} (\cdot, s') \big)_v \circ \varphi_{s'} . 
\] Due to (\ref{eqn_r_prime_injrad}) this map is injective. Let $V' := B (0,r)$, $U' := \alpha (V' \times W )$, and $\phi' := \alpha^{-1} : U' \to V' \times W$. As in the previous case it remains to show that Property~\ref{prop_time_pres_fam_chart_6} of Definition~\ref{Def_time_pres_fam_chart} holds. Let $s_0 \in W$ and let $\{ \psi_{s'} \}_{s' \in S}$ be a family of almost-isometries near $s_0$. For $s'$ sufficiently close to $s_0$ we have \[ (\psi^{-1}_{s'} \circ \alpha) (\cdot, s') = \exp^{\psi^*_{s'} g^{s'}_t}_{(\psi^{-1}_{s'} \circ \phi^{-1}) (v,s')} \circ d \big( ( \psi_{s'}^{-1} \circ \phi^{-1}) (\cdot, s') \big)_v \circ \varphi_{s'}. \] Since \[ \psi^*_{s'} g^{s'}_t \xrightarrow{\quad C^\infty_{\operatorname{loc}} \quad} g^{s_0}_t, \qquad (\psi^{-1}_{s'} \circ \phi^{-1} )(\cdot, s') \xrightarrow{\quad C^\infty_{\operatorname{loc}} \quad} \phi^{-1} (\cdot, s_0) \] as $s' \to s_0$, we therefore obtain that \[ (\psi^{-1}_{s'} \circ \alpha )(\cdot, s') \xrightarrow{\quad C^\infty_{\operatorname{loc}} \quad} \alpha (\cdot, s_0) \] as $s' \to s_0$. This establishes Property~\ref{prop_time_pres_fam_chart_6} of Definition~\ref{Def_time_pres_fam_chart}. \end{proof} \begin{corollary} \label{cor_chart_domains_cover} Every point $p\in Y$ lies in the domain of a time-preserving family chart. \end{corollary} \begin{proof} Let $C\subset Y$ be the set of points lying in the domain of a time-preserving family chart. By Lemma~\ref{lem_fam_chart_is_fam_chart_time_0} and Lemma~\ref{lem_propagating_charts}\ref{ass_propagating_charts_a} we have $\mathcal{M}^s_0 \subset C$ for all $s \in X$. By Lemma~\ref{lem_propagating_charts}, for every $s \in X$, the intersection $C\cap\mathcal{M}^s$ is an open and closed subset of $\mathcal{M}^s$; since it is nonempty, it follows from \cite[Prop. 5.38]{Kleiner:2014le} that $C\cap\mathcal{M}^s=\mathcal{M}^s$. Hence $C=Y$.
\end{proof} Corollary~\ref{cor_chart_domains_cover} verifies that the set of all time-preserving family charts satisfies Property~\ref{prop_continuous_family_manifolds_2} of Definition~\ref{def_continuous_family_manifolds} if we drop the fourth entry ``$\t_V$''. By Lemma~\ref{lem_cont_fam_no_max} this set can be extended to a maximal collection of family charts. All other properties of Definition~\ref{def_continuous_family_manifolds} and the fact that the induced continuous structure on the set of time-$0$-slices $(\mathcal{M}^s_0)_{s \in X}$ coincides with that on $(M^s)_{s \in X}$ follow directly from our construction and from Lemmas~\ref{lem_fam_chart_is_fam_chart_time_0} and \ref{lem_compatible_fam_chart}. \subsection{Proof of Theorem~\ref{thm_uniqueness_family_k}} Consider a continuous family of singular Ricci flows $(\mathcal{M}^s)_{s \in X}$ and the associated continuous family of time-$0$-slices $(M^s, g^s)_{s \in X} := (\mathcal{M}^s_0, g^s_0)_{s \in X}$. It suffices to show that in the proof of Theorem~\ref{thm_existence_family_k} the topology and the family charts on $Y = \sqcup_{s \in X} \mathcal{M}^s$ are uniquely determined by $(M^s, g^s)_{s \in X}$. For this purpose we first show: \begin{lemma} \label{lem_convergence_psi} Let $\{ \psi_{s'} \}_{s' \in S}$ be a family of almost-isometries near some $s \in X$. Then $\psi_{s'} \to \id_{\mathcal{M}^s}$ in $C^\infty_{\operatorname{loc}}$ as $s' \to s$, in the sense that for any family chart $(U, \phi, V)$ we have \begin{equation} \label{eq_psi_s_prime_to_id} \proj_V \circ \phi \circ \psi_{s'} \xrightarrow{\quad C^\infty_{\operatorname{loc}} \quad} \proj_V \circ \phi \qquad \text{as} \quad s' \to s. \end{equation} \end{lemma} \begin{proof} We say that $\psi_{s'} \to \id_{\mathcal{M}^s}$ near some point $p \in \mathcal{M}^s$ if (\ref{eq_psi_s_prime_to_id}) holds near $p$ for some (and therefore every) family chart $(U, \phi, V)$ with $p \in U$.
Furthermore, we say that $\psi_{s'} \to \id_{\mathcal{M}^s}$ near some point $p \in \mathcal{M}^s_t$ at time $t$ if (\ref{eq_psi_s_prime_to_id}) holds near $p$ in $\mathcal{M}^s_t$ for any family chart $(U,\phi,V)$, with $p \in U$, of the continuous family of time-$t$-slices $(\mathcal{M}^s_t)_{s \in X}$; compare with Lemma~\ref{lem_cont_fam_time_slice_inherit}. \begin{Claim} Assume that $p \in \mathcal{M}^s_t$ has the property that $\psi_{s'} \to \id_{\mathcal{M}^s}$ near $p$ at time $t$ and let $p' \in \mathcal{M}^s$ be another point. \begin{enumerate}[label=(\alph*)] \item If $p' = p (t')$ for some $t' \geq 0$, then $\psi_{s'} \to \id_{\mathcal{M}^s}$ near $p'$. \item If $p' \in B(p,r)$ for $r < \operatorname{injrad} (\mathcal{M}^s_t, p)$, then $\psi_{s'} \to \id_{\mathcal{M}^s}$ near $p'$ at time $t$. \end{enumerate} \end{Claim} \begin{proof} This is a direct consequence of the fact that $\psi_{s'}^* \partial^{s'}_\t \to \partial^s_\t$ and $\psi_{s'}^* g^{s'} \to g^s$ in $C^\infty_{\operatorname{loc}}$ as $s' \to s$. Compare also with the proof of Lemma~\ref{lem_propagating_charts}. \end{proof} By combining the claim with the proof of Corollary~\ref{cor_chart_domains_cover}, we obtain that $\psi_{s'} \to \id_{\mathcal{M}^s}$ everywhere. \end{proof} By Lemma~\ref{lem_convergence_psi}, Definition~\ref{def_topology_on_Y} offers the correct description for the topology on $\cup_{s \in X} \mathcal{M}^s$. Next, if $(U, \phi, V, \t_V)$ is a time-preserving family chart, in the sense of Definition~\ref{Def_time_pres_fam_chart}, then $(U, \phi, V)$ satisfies Properties~\ref{prop_continuous_family_manifolds_1} and \ref{prop_continuous_family_manifolds_3} of Definition~\ref{def_continuous_family_manifolds}.
By Lemma~\ref{lem_convergence_psi} and the proof of Lemma~\ref{lem_compatible_fam_chart}, $(U, \phi, V)$ is moreover compatible with all family charts of $(\mathcal{M}^s)_{s \in X}$ in the sense of Property~\ref{prop_continuous_family_manifolds_4} of Definition~\ref{def_continuous_family_manifolds}. Therefore, by maximality $(U, \phi, V)$ is a family chart of $(\mathcal{M}^s)_{s \in X}$. This shows that the topology and the family charts on $(\mathcal{M}^s)_{s \in X}$ are uniquely determined by $(M^s, g^s)_{s \in X}$, concluding the proof. \subsection{Proof of Theorem~\ref{Thm_properness_fam_sing_RF}} We first show: \begin{lemma} \label{lem_psi_phi_K_converge} If $(U, \phi, V)$ is a family chart of $(\mathcal{M}^s)_{s \in X}$, then for every $s \in \pi (U)$, every compact subset $K \subset V$ and every family of almost-isometries $\{ \psi_{s'} \}_{s' \in S}$ near $s$ there is a neighborhood $V' \times W' \subset V \times \pi (U)$ of $K \times \{ s \}$ such that for all $s' \in W'$ the maps $(\psi_{s'}^{-1} \circ \phi^{-1}) (\cdot, s')$ are well defined on $V'$ and converge to $\phi^{-1} (\cdot, s)$ in $C^\infty_{\operatorname{loc}}$ as $s' \to s$. \end{lemma} \begin{proof} Via a covering argument, we can reduce the lemma to the case in which $K = \{ v \}$ consists of a single point. Choose a time-preserving family chart $(U'', \phi'', V'', \t_{V''})$ with $(\phi'')^{-1} (v'', s) := \phi^{-1} (v,s) \in U''$. Then by Property~\ref{prop_time_pres_fam_chart_6} of Definition~\ref{Def_time_pres_fam_chart} the assertion of the lemma holds if $(U,\phi,V)$ and $v$ are replaced by $(U'', \phi'', V'')$ and $v''$. The lemma now follows due to the compatibility of the family charts $(U,\phi,V)$ and $(U'', \phi'', V'')$. \end{proof} By Theorem~\ref{Thm_rho_proper_sing_RF} the subset $K_0 := \{ \rho_{g^{s_0}} \geq r/10, \t^{s_0} \leq t + 1 \} \subset \mathcal{M}^{s_0}$ is compact.
Therefore, by Lemma~\ref{lem_chart_near_compact_subset} there is a family chart $(U_0, \phi_0, V_0)$ of $(\mathcal{M}^s)_{s \in X}$ with $K_0 \subset U_0$. Set $K \times \{ s_0 \} := \phi_0 ( K_0 )$. By Lemma~\ref{lem_almost_isometries_existence} there is a family $\{ \psi_{s} : \mathcal{M}^{s_0} \supset Z_{s} \to Z'_{s} \subset \mathcal{M}^{s} \}_{s \in S}$ of almost isometries near $s_0$. Let $\varepsilon > 0$ be a constant whose value we will determine later. By shrinking $U_0$, we may assume without loss of generality that $\pi (U_0) \subset S$ and that all maps $\psi_s$, $s \in \pi (U_0)$, are $\varepsilon$-isometries. If $\varepsilon$ is chosen small enough, then $\rho_{g^s} < r$ on $(\{ \t^s \leq t+1 \} \cap \mathcal{M}^s) \setminus Z'_s$ and $\rho_{g^s} (\psi_s (x)) \leq 2\rho_{g^{s_0}} (x)$ for all $s \in S$ and $x \in Z_s$ with $\t (x) \leq t$. We will now show that there is a neighborhood $W \subset \pi (U_0)$ of $s_0$ such that for all $s \in W$ we have \begin{equation} \label{eq_rho_r2_psi_phi} \{ \rho_{g^{s_0}} \geq r/2, \t^{s_0} \leq t \} \subset \psi_s^{-1} ( \phi_0^{-1} (K \times \{ s \} )). \end{equation} Then $U := \pi^{-1} (W) \cap U_0$, $\phi := \phi_0 |_{U}$ and $V$ have the desired properties, since (\ref{eq_rho_r2_psi_phi}) implies that for all $s \in W$ we have \[ \phi_0 ( \{ \rho_{g^s} \geq r, \t^s \leq t \} ) \subset \phi_0 \big( \psi_s ( \{ \rho_{g^{s_0}} \geq r/2, \t^{s_0} \leq t \} ) \big) \subset K \times \{ s \}. \] To see that (\ref{eq_rho_r2_psi_phi}) is true let $K_1 \times \{ s_0 \} := \phi_0 ( \{ \rho_{g^{s_0}} \geq r/2, \t^{s_0} \leq t \} )$ and apply Lemma~\ref{lem_psi_phi_K_converge} for $(U_0, \phi_0, V_0)$. We obtain a neighborhood $V' \times W' \subset V_0 \times \pi (U_0)$ of $K \times \{ s_0 \}$ such that for all $s \in W'$ the maps $(\psi^{-1}_s \circ \phi_0^{-1})(\cdot, s)$ are well defined on $V'$ and converge to $\phi_0^{-1} (\cdot, s_0)$ in $C^\infty_{\operatorname{loc}}$ as $s \to s_0$.
So $\phi_0^{-1} (K \times \{s \}) \subset Z'_s$ for all $s \in W'$ and for $s$ near $s_0$ the set $\phi_0^{-1} (K_1 \times \{ s_0 \})$ lies in the image of the map $(\psi^{-1}_s \circ \phi_0^{-1} )(\cdot, s)$. This implies (\ref{eq_rho_r2_psi_phi}) for $s$ near $s_0$. \section{Rounding process} \label{sec_rounding_process} \subsection{Introduction} Consider a continuous family $(\mathcal{M}^s)_{s \in X}$ of singular Ricci flows over some topological space $X$. By the canonical neighborhood assumption (see Definition~\ref{def_canonical_nbhd_asspt}) we know that regions of every $\mathcal{M}^s$ where the curvature scale $\rho$ is small are modeled on $\kappa$-solutions, which are rotationally symmetric or have constant curvature. The goal of this section is to perturb the metric $g^s$ on each $\mathcal{M}^s$ to a \emph{rounded} metric $g^{\prime,s}$, which is locally rotationally symmetric or has constant curvature wherever $\rho$ is small. Our process will be carried out in such a way that the rounded metrics still depend continuously on $s$. In addition to the rounded metrics $g^{\prime,s}$, we will also record the spherical fibrations consisting of the orbits of local isometric $O(3)$-actions wherever the metric is rotationally symmetric. The precise structure that we will attach to each flow $\mathcal{M}^s$ will be called an \emph{$\mathcal{R}$-structure} and will be defined in Subsections~\ref{subsec_spherical_struct} and \ref{subsec_RR_structure}. In Subsection~\ref{subsec_main_rounding_statement} we will then state the main result of this section, followed by a proof in the remaining subsections. \subsection{Spherical structures} \label{subsec_spherical_struct} We first formalize a structure that is induced by a locally rotationally symmetric metric. Let $M$ be a smooth manifold of dimension $n \geq 3$; in the sequel we will have $n\in \{3,4\}$.
\begin{definition}[Spherical structure] \label{Def_spherical_structure} A {\bf spherical structure} $\mathcal{S}$ on a subset $U \subset M$ of a smooth manifold with boundary $M$ is a smooth fiber bundle structure on an open dense subset $U' \subset U$ whose fibers are diffeomorphic to $S^2$ and equipped with a smooth fiberwise metric of constant curvature $1$ such that the following holds. For every point $x \in U$ there is a neighborhood $V \subset U$ of $x$ and an $O(3)$-action $\zeta : O(3) \times V \to V$ such that $\zeta |_{V \cap U'}$ preserves all $S^2$-fibers and acts effectively and isometrically on them. Moreover, no orbit in $V \setminus U'$ is diffeomorphic to a sphere. Any such local action $\zeta$ is called a {\bf local $O(3)$-action compatible with $\mathcal{S}$}. We call $U = \domain (\SS)$ the {\bf domain} of $\SS$. \end{definition} Consider an action $\zeta$ as in Definition~\ref{Def_spherical_structure} and let $\mathcal{O} \subset V$ be one of its orbits. For any sequence $x_i \to x_\infty \in \mathcal{O}$ the corresponding sequence of orbits $\mathcal{O}_i$ converges to $\mathcal{O}$ in the Hausdorff sense. As $U'$ is dense in $U$, this implies that $\mathcal{O}$ is independent of the choice of $\zeta$. So $\mathcal{O}$ is determined uniquely by a point $x \in \mathcal{O}$ and the spherical structure $\mathcal{S}$. We call any such orbit $\mathcal{O} \subset U \setminus U'$ a {\bf singular (spherical) fiber} and any fiber in $U'$ a {\bf regular (spherical) fiber} of $\mathcal{S}$. By analyzing the quotients of $O(3)$ we get: \begin{lemma} Any singular spherical fiber is either a point or is diffeomorphic to $\mathbb{R} P^2$. \end{lemma} \begin{lemma} \label{lem_local_spherical_struct} If $n = 3$, then $\zeta$ from Definition~\ref{Def_spherical_structure} is locally conjugate to one of the following models equipped with the standard $O(3)$-action: \[ S^2 \times (-1,1), \quad \big( S^2 \times (-1,1) \big) / \mathbb{Z}_2, \quad B^3, \quad S^2 \times [0, 1).
\] In the last case $S^2 \times \{ 0 \}$ corresponds to a boundary component of $M$. In the second case there is a unique spherical structure on the local two-fold cover consisting only of regular fibers that extends the pullback via the covering map of the original spherical structure restricted to the union of the regular fibers. \end{lemma} Next we formalize the notion of a rotationally symmetric metric compatible with a given spherical structure. \begin{definition}[Compatible metric] \label{Def_compatible_metric} Let $U \subset M$ be an open subset. A smooth metric $g$ on a subbundle of $TU$ or $TM$ is said to be {\bf compatible with a spherical structure $\mathcal{S}$} if near every point $x \in U$ there is a local $O(3)$-action that is compatible with $\mathcal{S}$ and isometric with respect to $g$. \end{definition} If $M = \mathcal{M}$ is a Ricci flow spacetime and $g$ is a smooth metric on the subbundle $\ker d\t$, then Definition~\ref{Def_compatible_metric} still makes sense. In dimension 3 a metric $g$ compatible with a spherical structure $\SS$ can locally be written as a warped product of the form $g = a^2(r) g_{S^2} + b^2(r) dr^2$ near the regular fibers. However, note that a spherical structure does not record the splitting of the tangent space into directions that are tangential and orthogonal to the spherical fibers. So the metrics $g$ compatible with $\SS$ depend on more data than the warping functions $a(r), b(r)$. Consider for example the quotient of the round cylinder $S^2 \times \mathbb{R}$ by an isometry of the form $(x,r) \mapsto (A x, r+a)$, where $A \in O(3)$, $a > 0$. The induced spherical structures are equivalent for all choices of $A, a$. The following lemma classifies the geometry of metrics that are compatible with a spherical structure near regular fibers. \begin{lemma} \label{lem_compatible_metric_general_form} Let $u < v$ and consider the standard spherical structure $\SS$ on $M = S^2 \times (u, v)$, i.e. 
the structure whose fibers are of the form $S^2 \times \{ r \}$, endowed with the standard round metric $g_{S^2}$. A Riemannian metric $g$ on $M$ is compatible with $\SS$ if and only if it is of the form \begin{equation} \label{eq_compatible_form_g} g = a^2(r) g_{S^2} + b^2(r) dr^2 + \sum_{i=1}^3 c_i (r) (dr \, \xi_i + \xi_i \, dr), \end{equation} for some functions $a, b, c_1, c_2, c_3 \in C^\infty ((u,v))$ with $a,b > 0$. Here $\xi_i := * dx^i$ denote the 1-forms that are dual to the standard Killing fields on $S^2 \subset \mathbb{R}^3$. \end{lemma} \begin{proof} Consider diffeomorphisms $\phi : M \to M$ of the form $\phi (x,r) = (A(r) x, r)$, where $A : (u,v) \to O(3)$ is smooth. These diffeomorphisms leave $\SS$ invariant and if $g$ is of the form (\ref{eq_compatible_form_g}), then so is $\phi^* g$. It follows that every metric $g$ of the form (\ref{eq_compatible_form_g}) is compatible with $\SS$. On the other hand, assume that $g$ is compatible with $\SS$. Fix some $r_0 \in (u,v)$ and consider the normal exponential map to $S^2 \times \{ r_0 \}$. Using this map we can construct a diffeomorphism of the form $\phi (x,r) = (A(r) x, r)$ such that $\phi^* g = a^2 (r) g_{S^2} + b^2 (r) dr^2$. Thus $g$ is of the form (\ref{eq_compatible_form_g}). \end{proof} Next we define what we mean by the preservation of a spherical structure by a vector field. \begin{definition}[Preservation by vector field] Let $U_1 \subset U_2 \subset M$ be open subsets and consider a vector field $X$ on $U_2$. A spherical structure $\mathcal{S}$ on $U_1$ is said to be {\bf preserved} by $X$ if the flow of $X$ preserves the regular and singular fibers as well as the fiberwise metric on the regular fibers. \end{definition} Lastly, we consider a continuous family of manifolds $(M^s)_{s \in X}$ of arbitrary dimension, which may for example be taken to be a continuous family of Ricci flow space times $(\mathcal{M}^s)_{s \in X}$.
We will define the notion of transverse continuity for spherical structures. For this purpose, we need: \begin{definition}[Transverse continuity for families of local $O(3)$-actions] \label{Def_transverse_O3_action} Let $(V^s \subset M^s)_{s \in X}$ be a family of open subsets such that $V := \cup_{s \in X} V^s \subset \cup_{s \in X} M^s$ is open and consider a family of $O(3)$-actions $(\zeta^s : O(3) \times V^s \to V^s)_{s \in X}$, which we may also express as $\zeta : O(3) \times V \to V$. We say that $(\zeta^s)_{s \in X}$ is {\bf transversely continuous in the smooth topology} if for any $A_0 \in O(3)$, $s_0 \in X$, $x_0 \in V^{s_0}$ there are family charts $(U_0, \phi_0, V_0)$, $(U'_0, \phi'_0, V'_0)$ (in the sense of Definition~\ref{def_continuous_family_manifolds}) with $x_0 \in U_0$ and $\zeta (A_0, x_0) \in U'_0$ such that the map $(A, v, s) \mapsto (\proj_{V'_0} \circ \phi'_0 \circ \zeta) (A, \phi_0^{-1} (v,s))$ can be viewed as a family of maps in the first two arguments that depend continuously on $s$ in $C^\infty_{\operatorname{loc}}$ near $(A_0, \proj_{V_0} (\phi_0 (x_0)))$. \end{definition} Let now $(\SS^s)_{s \in X}$ be a family of spherical structures defined on a family of open subsets $(U^s := \domain (\SS^s) \subset M^s)_{s \in X}$. \begin{definition}[Transverse continuity for spherical structures] \label{Def_spherical_struct_transverse_cont} We say that $(\SS^s)_{s \in X}$ is {\bf transversely continuous} if: \begin{enumerate} \item $U := \cup_{s \in X} U^s$ is an open subset in the total space $\cup_{s \in X} M^s$. \item For every point $x \in U$ there is an open neighborhood $V = \cup_{s \in X} V^s \subset U$ and a transversely continuous family of local $O(3)$-actions $(\zeta^s : O(3) \times V^s \to V^s)_{s \in X}$ that are each compatible with $\SS^s$.
\end{enumerate} A family of spherical structures $(\SS^s)_{s \in X}$ on a fixed manifold $M$ is called transversely continuous, if it is transversely continuous on the associated continuous family of manifolds $(M \times \{ s \} )_{s \in X}$. \end{definition} \subsection{$\mathcal{R}$-structures} \label{subsec_RR_structure} Consider a singular Ricci flow $\mathcal{M}$. We now define the structure that we will construct in this section. \begin{definition}[$\mathcal{R}$-structure] \label{Def_R_structure} An {\bf $\mathcal{R}$-structure} on a singular Ricci flow $\mathcal{M}$ is a tuple $\mathcal{R} = ( g', \partial'_\t, U_{S2}, U_{S3}, \mathcal{S})$ consisting of a smooth metric $g'$ on $\ker d\t$, a vector field $\partial'_\t$ on $\mathcal{M}$ with $\partial'_\t \, \t = 1$, open subsets $U_{S2}, U_{S3} \subset \mathcal{M}$ and a spherical structure $\mathcal{S}$ on $U_{S2}$ such that for all $t \geq 0$: \begin{enumerate}[label=(\arabic*)] \item \label{prop_def_RR_1} $U_{S3} \setminus U_{S2}$ is open. \item \label{prop_def_RR_2} $U_{S2} \cap \mathcal{M}_t$ is a union of regular and singular fibers of $\mathcal{S}$. \item \label{prop_def_RR_3} $\partial'_{\t}$ preserves $\mathcal{S}$. \item \label{prop_def_RR_4} $g'_t$ is compatible with $\mathcal{S}$. \item \label{prop_def_RR_5} $U_{S3} \cap \mathcal{M}_t$ is a union of compact components of $\mathcal{M}_t$ on which $g'_t$ has constant curvature. \item \label{prop_def_RR_6} $U_{S3}$ is invariant under the forward flow of the vector field $\partial'_\t$, i.e. any trajectory of $\partial'_\t$ whose initial condition is located in $U_{S3}$ remains in $U_{S3}$ for all future times. \item \label{prop_def_RR_7} The flow of $\partial'_\t$ restricted to every component of $U_{S3} \cap \mathcal{M}_t$ is a homothety with respect to $g'$, whenever defined. 
\end{enumerate} We say that the $\mathcal{R}$-structure is {\bf supported on $U_{S2} \cup U_{S3}$.} \end{definition} Note that Property~\ref{prop_def_RR_1} is equivalent to the statement that any component of $U_{S3}$ is either contained in $U_{S2}$ or disjoint from it. At this point we reiterate the importance of the fact that the spherical structure $\SS$ does not record the splitting of the tangent space into directions that are tangential and orthogonal to the spherical fibers. Therefore the preservation of $\SS$ by $\partial'_{\t}$ does not guarantee a preservation of this splitting under the flow of $\partial'_{\t}$. So the metrics $g'_t$ could be isometric to quotients of the round cylinder $S^2 \times \mathbb{R}$ by isometries of the form $(x,r) \mapsto (A_t x, r+a_t)$, where $A_t \in O(3)$, $a_t > 0$ may vary smoothly in $t$, and $\SS$ could consist of all cross-sectional 2-spheres. In this case the flow of $\partial'_{\t}$ may leave the metric tangential to the fibers of $\SS$ invariant while distorting the metric in the orthogonal and mixed directions. Consider now a continuous family $(\mathcal{M}^s)_{s \in X}$ of singular Ricci flows. For every $s \in X$ choose an $\mathcal{R}$-structure \[ \mathcal{R}^s = ( g^{\prime, s}, \linebreak[1] \partial^{\prime, s}_\t, \linebreak[1] U^s_{S2}, \linebreak[1] U^s_{S3}, \linebreak[1] \mathcal{S}^s) \] on $\mathcal{M}^s$. We define the following notion of transverse continuity: \begin{definition}[Transverse continuity for $\mathcal{R}$-structures] \label{Def_RR_structure_transverse_cont} The family of $\mathcal{R}$-structures $( \mathcal{R}^s )_{s \in X}$ is called {\bf transversely continuous} if: \begin{enumerate} \item $(g^{\prime, s})_{s \in X}, (\partial^{\prime, s}_\t)_{s \in X}$ are transversely continuous in the smooth topology. \item $U_{S2} := \cup_{s \in X} U_{S2}^s$ and $U_{S3} := \cup_{s \in X} U_{S3}^s$ are open subsets of the total space $\cup_{s \in X} \mathcal{M}^s$. 
\item $(\mathcal{S}^s)_{s \in X}$ is transversely continuous in the sense of Definition~\ref{Def_spherical_struct_transverse_cont}. \end{enumerate} \end{definition} \subsection{Statement of the main result} \label{subsec_main_rounding_statement} The main result of this section, Theorem~\ref{Thm_rounding}, states that for every continuous family of singular Ricci flows there is a transversely continuous family of $\mathcal{R}$-structures supported in regions where $\rho$ is small. Moreover, the metrics $g^{\prime,s}$ and $g^s$ will be close in some scaling invariant $C^{[\delta^{-1}]}$-sense and equal where $\rho$ is bounded from below. The same applies to the vector fields $\partial^{\prime,s}_\t$ and $\partial^s_\t$. Recall the scale $r_{\initial}$ from Definition~\ref{Def_r_initial}. \begin{theorem}[Existence of family of $\mathcal{R}$-structures] \label{Thm_rounding} For any $\delta > 0$ there is a constant $C = C(\delta) < \infty$ and a continuous, decreasing function $r_{\rot, \delta} : \mathbb{R}_+ \times [0, \infty) \to \mathbb{R}_+$ such that the following holds. Consider a continuous family $(\mathcal{M}^s)_{s \in X}$ of singular Ricci flows. Then there is a transversely continuous family of $\mathcal{R}$-structures $(\mathcal{R}^s = ( g^{\prime, s}, \linebreak[1] \partial^{\prime, s}_\t, \linebreak[1] U^s_{S2}, \linebreak[1] U^s_{S3}, \linebreak[1] \mathcal{S}^s))_{s \in X}$ such that for any $s \in X$: \begin{enumerate}[label=(\alph*)] \item \label{ass_thm_rounding_a} $\mathcal{R}^s$ is supported on \[ \big\{ x \in \mathcal{M}^s \;\; : \;\; \rho_{g^{\prime,s}} (x)< r_{\rot, \delta} (r_{\initial} (\mathcal{M}_0^s,g^s_0), \t(x)) \big\}. \] \item \label{ass_thm_rounding_b} $g^{\prime, s} = g^s$ and $\partial^{\prime,s}_\t = \partial^s_\t$ on \[ \big\{ x \in \mathcal{M}^s \;\; : \;\; \rho_{g^{\prime,s}} (x) > C r_{\rot, \delta} (r_{\initial} (\mathcal{M}_0^s, g^s_0), \t(x)) \big\} \supset \mathcal{M}^s_0.
\] \item \label{ass_thm_rounding_c} For $m_1, m_2 = 0, \ldots, [\delta^{-1}]$ we have \[ | \nabla^{m_1} \partial_{\t}^{m_2} (g^{\prime, s} - g^s) | \leq \delta \rho^{-m_1-2m_2}, \qquad | \nabla^{m_1} \partial_{\t}^{m_2} (\partial^{\prime, s}_\t - \partial^s_\t) | \leq \delta \rho^{1-m_1-2m_2}. \] \item \label{ass_thm_rounding_d} If $(\mathcal{M}^s_0, g^s_0)$ is homothetic to a quotient of the round sphere or the round cylinder, then $g^{\prime, s} = g^s$ and $\partial^{\prime,s}_\t = \partial^s_\t$ on all of $\mathcal{M}^s$. \item \label{ass_thm_rounding_e} $r_{\rot, \delta} (a \cdot r_0, a^2 \cdot t) = a \cdot r_{\rot, \delta} (r_0,t)$ for all $a, r_0 > 0$ and $t \geq 0$. \end{enumerate} \end{theorem} The proof of this theorem, which will occupy the remainder of this section, is carried out in several steps: \begin{enumerate}[label=\arabic*.] \item In Subsection~\ref{subsec_cutoff}, we first define a cutoff function $\eta : \cup_{s \in X} \mathcal{M}^s \to [0,1]$ whose support is contained in the union of components of time-slices with positive sectional curvature and bounded normalized diameter. By Hamilton's result \cite{hamilton_positive_ricci}, these components become extinct in finite time and the metric becomes asymptotically round modulo rescaling as we approach the extinction time. On the other hand, time-slices on which $\eta < 1$ have sufficiently large normalized diameter such that any point that satisfies the canonical neighborhood assumption is either contained in an $\varepsilon$-neck or has a neighborhood that is sufficiently close to a Bryant soliton. \item In Subsection~\ref{subsec_modify_Bryant}, we modify the metrics $(g^s)_{s \in X}$ at bounded distance to points where the geometry is close to the tip of a Bryant soliton. The resulting metric will be compatible with a spherical structure $\SS_2$ near these points. So any point $x \in \{ \eta < 1 \} \setminus \domain (\SS_2)$ of small curvature scale must be a center of an $\varepsilon$-neck. 
Our rounding procedure will employ the exponential map based at critical points of the scalar curvature $R$. \item In Subsection~\ref{subsec_modify_necks}, we modify the metrics from the previous step near centers of $\varepsilon$-necks, using the (canonical) constant mean curvature (CMC) foliation by 2-spheres. The resulting metrics will be compatible with a spherical structure $\SS_3$ extending $\SS_2$, whose domain contains all points $x \in \{ \eta < 1 \}$ of small curvature scale. \item In Subsection~\ref{subsec_modify_dt}, we use an averaging procedure to modify the time vector fields $\partial^s_\t$ so that they preserve the spherical structure $\SS_3$. \item In Subsection~\ref{subsec_extend_to_almost_round}, we modify the metric on the support of $\eta$ such that it is compatible with an extension of $\SS_3$. We obtain this new metric by evolving the metric from Step 3 forward in time by the Ricci flow. Therefore the new metric will remain compatible with a spherical structure up to some time that is close to the extinction time. After this time the metric is $\delta$-close to a round spherical space form modulo rescaling, for some small $\delta > 0$. \item In Subsection~\ref{subsec_proof_rounding}, we replace the almost round metrics near the extinction time by canonical metrics of constant curvature. We also modify the time-vector fields to respect these new metrics. This concludes the proof of Theorem~\ref{Thm_rounding}. \end{enumerate} Throughout this section we will attach indices 1--6 to the objects that are constructed in each step. For example, the cutoff function from Step 1 will be called $\eta_1$. The main result in Step 2 will repeat the claim on the existence of this cutoff function, which is then called $\eta_2$, and so on. In order to avoid confusion we will increase the index of each object in each step, even if it remained unchanged during the construction. 
\subsection{The rounding operators} \label{subsec_RD} We will define operators $\RD^n$ that assign a constant curvature metric to every metric of almost constant curvature in a canonical way. Consider first an $n$-dimensional Riemannian manifold $(M,g)$ that is $\delta$-close to the standard $n$-dimensional round sphere $(S^n, g_{S^n})$ in the smooth Cheeger-Gromov topology for some small $\delta > 0$, which we will determine later. Consider the eigenvalues $0 = \lambda_0 \leq \lambda_1 \leq \ldots$ of the Laplacian on $(M,g)$, counted with multiplicity. Recall that on the standard round $n$-sphere $S^n$ we have $\lambda_0 = 0$ and $\lambda_1 = \ldots = \lambda_{n+1} = n < \lambda_{n+2}$, where the coordinate functions of the standard embedding $S^n \subset \mathbb{R}^{n+1}$ form an orthogonal basis of the eigenspace corresponding to the eigenvalue $n$. So if $\delta$ is chosen sufficiently small, then $\lambda_{n+1} < \lambda_{n+2}$ and there is an $L^2$-orthonormal system $x^0_g, x^1_g, \ldots, x^{n+1}_g$ of eigenfunctions such that $x^0_g \equiv \operatorname{const}$, $\beta_g := (x^1_g, \ldots, x^{n+1}_g) : M \to \mathbb{R}^{n+1} \setminus \{ 0 \}$ and $\alpha_g := \beta_g / |\beta_g| : M \to S^n$ is a diffeomorphism. Define \[ \td\RD^n ( g ) := \alpha_g^* g_{S^n}. \] Note that $(M, \td\RD^n ( g ))$ is isometric to $(S^n, g_{S^n})$, in particular $V (M,\td\RD^n ( g )) = V(S^n, g_{S^n})$. Next, assume that $(M,g)$ is $\delta$-close to $(S^n, g_{S^n})$ modulo rescaling, in the sense that $(M, a^2 \cdot g)$ is $\delta$-close to $(S^n, g_{S^n})$ for some $a > 0$. If $\delta$ is sufficiently small, then $(M, (V (M,g) / V(S^n, g_{S^n} ))^{-2/n} g)$ is sufficiently close to $(S^n, g_{S^n})$, such that we can define \[ \RD^n (g) :=\bigg( \frac{V (M,g)}{V(S^n, g_{S^n})} \bigg)^{2/n} \td\RD^n \bigg( \bigg( \frac{V (M,g)}{V(S^n, g_{S^n})} \bigg)^{-2/n} g \bigg). \] Then \[ V (M, \RD^n(g)) = V(M,g) \] and for any diffeomorphism $\phi : M \to M$ we have \[ \RD^n (\phi^* g) = \phi^* \RD^n (g).
\] So if $\phi$ is an isometry with respect to $g$, then it is also an isometry with respect to $\RD^n (g)$. Hence if $(M,g)$ is a Riemannian manifold whose universal cover $\pi : (\td{M}, \td{g}) \to (M,g)$ is $\delta$-close to $(S^n, g_{S^n})$ modulo rescaling, then we can define $\RD^n (g)$ as the unique metric with the property that $\pi^* \RD^n (g) = \RD^n (\td{g})$. We record: \begin{lemma} If $\delta \leq \ov\delta$ and if the universal cover of $(M,g)$ is $\delta$-close to $(S^n, g_{S^n})$ modulo rescaling, then: \begin{enumerate}[label=(\alph*)] \item $\RD^n (g)$ has constant curvature. \item $V (M, \RD^n(g)) = V(M,g)$. \item If $g$ has constant curvature, then $\RD^n (g) = g$. \item $\RD^n (\phi^* g) = \phi^* \RD^n (g)$ for any diffeomorphism $\phi : M \to M$. \item If $g_i \to g$ in the smooth topology, then $\RD^n (g_i) \to \RD^n (g)$ in the smooth topology. \item If $Z$ is a smooth manifold and $(g_z)_{z \in Z}$ is a smooth family of metrics on $M$ such that $(M,g_z)$ is $\delta$-close to $(S^n, g_{S^n})$ modulo rescaling for each $z$, then $(\RD^n (g_z))_{z \in Z}$ is smooth. Moreover, if $(g^i_z)_{z \in Z} \to (g_z)_{z \in Z}$ locally smoothly, then also $(\RD^n (g^i_z))_{z \in Z} \to (\RD^n (g_z))_{z \in Z}$. \end{enumerate} \end{lemma} \subsection{Conventions and terminology} \label{subsec_rounding_conventions} For the remainder of this section let us fix a continuous family $(\mathcal{M}^s)_{s \in X}$ of singular Ricci flows. It will be clear from our construction that all constants will only depend on the data indicated in each lemma and not on the specific choice of the family $(\mathcal{M}^s)_{s \in X}$. Consider the initial condition scale $r_{\initial}$ from Definition~\ref{Def_r_initial} and the canonical neighborhood scale $r_{\can, \varepsilon} : \mathbb{R}_+ \times [0, \infty) \to \mathbb{R}_+$ from Lemma~\ref{lem_rcan_control}.
Then for any $x \in \mathcal{M}^s_t \subset \cup_{s' \in X} \mathcal{M}^{s'}$ the assertions of Lemma~\ref{lem_rcan_control} hold below scale $\td{r}_{\can, \varepsilon} (x) := r_{\can, \varepsilon} (r_{\initial} (\mathcal{M}^s_0, g^s_0), t)$. For ease of notation, we will drop the tilde from $\td{r}_{\can, \varepsilon}$ and view $r_{\can, \varepsilon}$ as a function of the form \[ r_{\can, \varepsilon} : \cup_{s \in X} \mathcal{M}^{s} \longrightarrow \mathbb{R}_+. \] Note that $r_{\can, \varepsilon}$ is smooth on each fiber $\mathcal{M}^s$ and transversely continuous in the smooth topology. Next, we define a scale function $\wh\rho$ on each $\mathcal{M}^s$, which is comparable to $\rho$ and constant on time-slices. For this purpose, let $x \in \mathcal{M}^s_t$ and consider the component $\mathcal{C} \subset \mathcal{M}^s_t$ containing $x$. Let $(\td{\mathcal{C}}, \td{g}^s_t)$ be the universal cover of $(\mathcal{C}, g^s_t |_\mathcal{C} )$ and set \[ \wh\rho (x) := V^{1/n} (\td{\mathcal{C}}, \td{g}^s_t) \in (0, \infty]. \] We will frequently use the following fact, which is a direct consequence of the compactness of $\kappa$-solutions (see Theorem~\ref{Thm_kappa_compactness_theory}): \begin{lemma} Assume that $\varepsilon \leq \ov\varepsilon (D)$. If $\rho (x) < r_{\can, \varepsilon} (x)$ and if the component $\mathcal{C} \subset \mathcal{M}^s_t$ containing $x$ has $\diam \mathcal{C} \leq D \rho (x)$, then \[ C^{-1} (D) \rho (x) \leq \wh\rho (x) \leq C(D) \rho (x). \] \end{lemma} In the following we will successively construct metrics $g^s_{1}, \ldots, g^s_5$ on $\mathcal{M}^s$. Unless otherwise noted, we will compute the quantities $\rho$ and $\wh\rho$ using the original metric $g^s$. Otherwise, we will indicate the use of another metric by a subscript, such as ``$\rho_{g^{\prime}_i}$'' or ``$\wh\rho_{g^{\prime}_i}$''.
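Let us note how the constructions above behave under rescaling; the following identities follow directly from the definitions together with $V(M, a^2 \cdot g) = a^n V(M, g)$ for $a > 0$. For the operator $\RD^n$ from Subsection~\ref{subsec_RD}, replacing $g$ by $a^2 \cdot g$ leaves the normalized metric $( V (M,g) / V(S^n, g_{S^n}) )^{-2/n} g$ unchanged, so \[ \RD^n (a^2 \cdot g) = a^2 \cdot \RD^n (g). \] Similarly, for the scale function $\wh\rho$ we have \[ \wh\rho_{a^2 \cdot g^s_t} (x) = V^{1/n} (\td{\mathcal{C}}, a^2 \cdot \td{g}^s_t) = a \cdot \wh\rho_{g^s_t} (x), \] so $\wh\rho$, like $\rho$, scales like a length.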
Lastly, let us fix a smooth, non-decreasing cutoff function $\nu : \mathbb{R} \to [0,1]$ such that \[ \nu \equiv 0 \quad \text{on} \quad (-\infty, .1], \qquad \nu \equiv 1 \quad \text{on} \quad [.9, \infty). \] \subsection{Construction of a cutoff function near almost extinct components} \label{subsec_cutoff} Our first goal will be to introduce a cutoff function $\eta_1$ that is supported on components on which the curvature scale is small and the renormalized diameter is bounded. On these components we have $\sec > 0$ and therefore these components become extinct in finite time. On the other hand, regions where the curvature scale is small and where $\eta_1 < 1$ are modeled on a Bryant soliton or a cylinder. \begin{lemma} \label{lem_cutoff_almost_extinct} For every $\delta^*> 0$, $N \in \mathbb{N}$, and assuming $\alpha \leq \ov\alpha(\delta^*)$, $D \geq \underline{D} (\delta^*), D_0 \geq \underline{D}_0 (\delta^*)$, $C_m \geq \underline{C}_m(\delta^*)$ and $\varepsilon \leq \ov\varepsilon (\delta^*, N)$, there is: \begin{itemize} \item a continuous function $\eta_1 : \cup_{s \in X} \mathcal{M}^s \to [0,1]$ that is smooth on each fiber $\mathcal{M}^s$ and transversely continuous in the smooth topology, \end{itemize} such that for any $s \in X$ and $t \geq 0$: \begin{enumerate}[label=(\alph*)] \item \label{ass_cutoff_a} $\partial^s_\t \eta_1 \geq 0$. \item \label{ass_cutoff_b} $\eta_1$ is constant on every connected component of $\mathcal{M}^{s}_t$. \item \label{ass_cutoff_c} Any connected component $\mathcal{C} \subset \mathcal{M}^{s}_t$ on which $\eta_1 > 0$ is compact and there is a time $t_\mathcal{C} > t$ such that: \begin{enumerate}[label=(c\arabic*)] \item \label{ass_cutoff_c1} $\sec > 0$ on $\mathcal{C}(t')$ for all $t' \in [t, t_\mathcal{C})$. \item \label{ass_cutoff_c2} $\mathcal{C}$ survives until time $t'$ for all $t' \in [t, t_\mathcal{C})$ and no point in $\mathcal{C}$ survives until or past time $t_\mathcal{C}$.
\item \label{ass_cutoff_c3} $\diam \mathcal{C}(t') < D \rho (x)$ for all $x \in \mathcal{C}(t')$ and $t' \in [t, t_\mathcal{C})$. \item \label{ass_cutoff_c4} $\rho < 10^{-1} r_{\can, \varepsilon}$ on $\mathcal{C} (t')$ for all $t' \in [t, t_\mathcal{C})$. \item \label{ass_cutoff_c5} The time-slices $(\mathcal{C}(t'), g^s_t)$ converge, modulo rescaling, to an isometric quotient of the round sphere as $t' \nearrow t_\mathcal{C}$. \end{enumerate} \item \label{ass_cutoff_d} For every point $x \in \mathcal{M}^s_t$ with $\eta_1 (x) < 1$ (at least) one of the following is true: \begin{enumerate}[label=(d\arabic*)] \item \label{ass_cutoff_d1} $\rho (x) > \alpha r_{\can, \varepsilon} (x)$. \item \label{ass_cutoff_d2} $x$ is a center of a $\delta^*$-neck. \item \label{ass_cutoff_d3} There is an open neighborhood of $x$ in $\mathcal{M}^s_t$ that admits a two-fold cover in which a lift of $x$ is a center of a $\delta^*$-neck. \item \label{ass_cutoff_d4} There is a point $x' \in \mathcal{M}^s_t$ with $d (x, x') < D_0 \rho (x')$ such that $\rho(x') < 10^{-1} r_{\can, \varepsilon} (x')$, $\nabla R (x') = 0$ and $(\mathcal{M}^s_t, g^s_t, x')$ is $\delta^*$-close to $(M_{\Bry},\linebreak[1] g_{\Bry},\linebreak[1] x_{\Bry})$ at some scale. Moreover, the component $\mathcal{C} \subset \mathcal{M}^s_t$ containing $x$ has diameter $> 100 D_0 \rho (x')$. \end{enumerate} \item \label{ass_cutoff_e} $| \partial_\t^m \eta_1 | \leq C_m \rho^{-2m}$ for $m = 0, \ldots, N$. \end{enumerate} \end{lemma} Note that only $\varepsilon$ depends on $N$. The strategy in the following proof is to define $\eta_1$ using two auxiliary functions $\eta^*_1$, $\eta^{**}_1$, which are essentially supported in regions where $\rho$ is small and where the normalized diameter is bounded, respectively. In order to achieve the monotonicity property \ref{ass_cutoff_a}, we define $\eta_1$ by integrating the product $\eta^*_1 \eta^{**}_1$ in time against a weight. 
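Before turning to the proof, we make the monotonicity mechanism behind Assertion~\ref{ass_cutoff_a} explicit; the following formula is only schematic, and the precise choice of weight is made in the proof. For any nonnegative weight $w$ and base time $t_0$, a function of the form \[ t \longmapsto \nu \bigg( \int_{t_0}^{t} w (t') \cdot \big( \eta^*_1 \eta^{**}_1 \big) (x (t')) \, dt' \bigg) \] is automatically non-decreasing along any curve $t' \mapsto x(t')$ that follows the flow of $\partial^s_\t$, because $\nu$ is non-decreasing and the integrand is nonnegative.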
\begin{proof} Let $a, \alpha', A, D, D', D_0$ be constants, which will be chosen depending on $\delta^*$ in the course of the proof. We say that a component $\mathcal{C} \subset \mathcal{M}^s_t$ is \emph{$D'$-small} if \[ \diam \mathcal{C} < D' \rho \quad \text{and} \quad \rho < 10^{-1} r_{\can, \varepsilon} \qquad \text{on} \quad \mathcal{C}. \] Let $U_{D'} \subset \cup_{s \in X} \mathcal{M}^s$ be the union of all $D'$-small components. \begin{Claim} \label{cl_UD_prime} \begin{enumerate}[label=(\alph*)] \item \label{ass_UD_prime_a} $U_{D'}$ is open in $\cup_{s \in X} \mathcal{M}^s$. \item \label{ass_UD_prime_b} Assuming $\alpha' \leq \ov\alpha' (D')$, $D \geq \underline{D} (D')$ and $\varepsilon \leq \ov\varepsilon (D')$, the following is true. If $\mathcal{C} \subset \mathcal{M}^s_t$ is a $D'$-small component and $\rho (x) \leq \alpha' r_{\can, \varepsilon} (x)$ for some $x \in \mathcal{C}$, then there is a time $t_\mathcal{C} > t$ such that Assertions~\ref{ass_cutoff_c1}--\ref{ass_cutoff_c5} of this lemma hold. \end{enumerate} \end{Claim} \begin{proof} Assertion~\ref{ass_UD_prime_a} is clear by definition. If Assertion~\ref{ass_UD_prime_b} was false for some fixed $D'$, then we can find singular Ricci flows $\mathcal{M}^i$, $D'$-small components $\mathcal{C}^i \subset \mathcal{M}^i_{t^i}$ and points $x^i \in \mathcal{C}^i$ contradicting the assertion for sequences $\alpha^{\prime, i}, \varepsilon^i \to 0$, $D^i \to \infty$. Since the $\varepsilon^i$-canonical neighborhood assumption holds at $x^i$ and $\varepsilon^i \to 0$, we may assume, after passing to a subsequence, that the universal covers of $(\mathcal{C}^i, \rho^{-2} (x^i) g^i_{t^i})$ converge to a compact smooth limit $(\ov{M}, \ov{g})$, which is the final time-slice of a $\kappa$-solution. Therefore, there is a $c > 0$ such that $\sec > c R$ and $R > c \rho^{-2} (x^i)$ on $\mathcal{C}^i$ for large $i$.
By Hamilton's result \cite{hamilton_positive_ricci}, for large $i$ the flow past $\mathcal{C}^i$ becomes asymptotically round and goes extinct at a finite time $t_{\mathcal{C}^i} \in ( t^i, t^i + C \rho^2 (x^i))$ for some constant $C = C(c) < \infty$. Since $|\partial_\t r_{\can, \varepsilon^i}| \leq \varepsilon^i r^{-1}_{\can, \varepsilon^i}$ by Lemma~\ref{lem_rcan_control} we obtain that $r_{\can, \varepsilon^i} (x^i(t')) \leq 2 r_{\can, \varepsilon^i} (x^i)$ for all $t' \in [t^i, t_{\mathcal{C}^i})$ and large $i$. So Assertions~\ref{ass_cutoff_c1}, \ref{ass_cutoff_c2}, \ref{ass_cutoff_c4}, \ref{ass_cutoff_c5} hold for large $i$. In particular, for large $i$ the $\varepsilon^i$-canonical neighborhood assumption holds on $\mathcal{C}^i (t')$ for all $t' \in [t^i, t_{\mathcal{C}^i})$. Since the pinching $\sec > c R$ is preserved by the flow, another limit argument implies Assertion~\ref{ass_cutoff_c3}. \end{proof} Define functions $\eta^*_1, \eta^{**}_1 : \cup_{s \in X} \mathcal{M}^s \to [0,1]$ as follows. For any $x \in \mathcal{M}^{s}_t$ let $\mathcal{C} \subset \mathcal{M}^s_t$ be the component containing $x$ and set \[ \eta_1^* (x) = \nu \bigg(a \cdot \frac{r_{\can, \varepsilon}(x)}{\wh\rho(x)} \bigg), \qquad \eta_1^{**} (x) = \nu \bigg( A \cdot \frac{\wh\rho(x)}{ \mathcal{R} (\td\mathcal{C})} \bigg), \] where $\mathcal{R} (\td\mathcal{C}) := \int_{\td\mathcal{C}} R_{g_t} d\mu_{g_t}$ and $\nu$ is the cutoff function from Subsection~\ref{subsec_rounding_conventions}. Near any component $\mathcal{C} \subset \mathcal{M}^s_t$ with compact universal cover $\td\mathcal{C}$ the functions $\eta^*_1, \eta^{**}_1$ are smooth on $\mathcal{M}^s_t$ and transversely continuous in the smooth topology. We now define functions $\eta'_1 : \cup_{s \in X} \mathcal{M}^s \to [0,\infty)$ and $\eta_1 : \cup_{s \in X} \mathcal{M}^s \to [0,1]$ as follows. Set $\eta'_1 :\equiv 0$ on $\cup_{s \in X} \mathcal{M}^s \setminus U_{D'}$.
For every component $\mathcal{C} \subset \mathcal{M}^s_t$ with $\mathcal{C} \subset U_{D'}$ and for every $x \in \mathcal{C} \subset \mathcal{M}^s_t$ choose $t^*_\mathcal{C} < t$ minimal such that $\mathcal{C}$ survives until all times $t' \in (t^*_\mathcal{C}, t]$ and let \[ \eta'_1(x) := \int_{t^*_\mathcal{C}}^t \frac{\eta^*_1 (x(t')) \eta^{**}_1 (x(t'))}{\wh\rho^2 (x(t'))} dt'. \] Lastly, set \[ \eta_1 (x) := \nu \big(A \cdot \eta'_1 (x) \big). \] Note that the definitions of $\eta'_1, \eta_1$ are invariant under parabolic rescaling, but the definition of $\eta^*_1$ is not invariant under time shifts. Assertions~\ref{ass_cutoff_a} and \ref{ass_cutoff_b} of this lemma hold wherever $\eta_1$ is differentiable. Moreover, $\partial_\t \eta'_1$ restricted to $U_{D'}$ is smooth on each fiber $\mathcal{M}^s$ and transversely continuous in the smooth topology, because the same is true for $\eta^*_1, \eta^{**}_1, \wh\rho$. \begin{Claim} \label{cl_eta1_regularity} If $D' \geq \underline{D}' (A)$, $a \leq \ov{a} (\alpha', A, D')$, $\varepsilon \leq \ov\varepsilon (\alpha', A, a, D', N)$, then \begin{enumerate}[label=(\alph*)] \item \label{ass_eta1_regularity_a} The closure of the support of $\eta^*_1 \eta^{**}_1 |_{U_{D'}}$ in $\cup_{s \in X} \mathcal{M}^s$ is contained in $U_{D'} \cap \{ \rho \leq \alpha' r_{\can, \varepsilon} \}$. \item \label{ass_eta1_regularity_b} $\eta_1$, $\eta'_1$ are smooth on every fiber $\mathcal{M}^s$ and transversely continuous in the smooth topology. \item \label{ass_eta1_regularity_bb} Assertion~\ref{ass_cutoff_c} of this lemma holds. \item \label{ass_eta1_regularity_d} Assertion~\ref{ass_cutoff_e} of this lemma holds for some constants $C_m = C_m (\alpha', \linebreak[1] A, \linebreak[1] a, \linebreak[1] D') \linebreak[1] < \infty$. \end{enumerate} \end{Claim} \begin{proof} Let us first show that Assertions~\ref{ass_eta1_regularity_b} and \ref{ass_eta1_regularity_bb} of this claim follow from Assertion~\ref{ass_eta1_regularity_a}.
For Assertion~\ref{ass_eta1_regularity_b} observe that Assertion~\ref{ass_eta1_regularity_a} implies smoothness and transverse continuity of $\partial_\t \eta'_1$. For any $D'$-small component $\mathcal{C}$ we have $\mathcal{C}(t') \not\subset U_{D'}$ for $t' \in (t^*_\mathcal{C}, t)$ close enough to $t^*_\mathcal{C}$, because otherwise $\mathcal{C}$ would survive until time $t^*_\mathcal{C}$. So every neighborhood of a point in $U_{D'}$ can be evolved backwards by the flow of $\partial_\t$ into $\{ \eta'_1 = 0 \}$. Assertion~\ref{ass_eta1_regularity_b} now follows by integrating $\partial_\t \eta'_1$ in time. For Assertion~\ref{ass_eta1_regularity_bb} observe that whenever $\eta_1 (x) > 0$, there is a $t' \leq t$ such that $x(t') \in U_{D'}$ and $\eta^*_1 (x(t')) \eta^{**}_1 (x(t')) > 0$. By Assertion~\ref{ass_eta1_regularity_a} we have $\rho (x(t')) \leq \alpha' r_{\can, \varepsilon} (x(t'))$, which implies Assertion~\ref{ass_eta1_regularity_bb} via Claim~\ref{cl_UD_prime}\ref{ass_UD_prime_b}. It remains to prove Assertions~\ref{ass_eta1_regularity_a} and \ref{ass_eta1_regularity_d} of this claim. For this purpose fix $A > 0$ and assume that $D' \geq \underline{D}' (A)$ such that if $(\ov{M}, \ov{g})$ is the final time-slice of a compact, simply-connected $\kappa$-solution and for some $\ov{x} \in \ov{M}$ \[ A \cdot \frac{\wh\rho(\ov{x}, 0)}{\mathcal{R} (\ov{M}, \ov{g}_{0})} \geq \frac1{10}, \] then $\diam ( \ov{M}, \ov{g}_0) < D' \rho (\ov{x})$. This is possible due to Lemma~\ref{lem_kappa_identity_1}. Assume now that Assertion~\ref{ass_eta1_regularity_a} was false for fixed $\alpha', A, D'$. Choose sequences $a^i, \varepsilon^i \to 0$. Then we can find singular Ricci flows $\mathcal{M}^i$ and points $x^i \in \mathcal{M}^i_{t^i}$ that are contained in the closure of $U_{D'}$ and the support of $\eta^*_1 \eta^{**}_1$, but $x^i \not\in U_{D'}$ or $\rho (x^i) > \alpha' r_{\can, \varepsilon^i} (x^i)$.
Let $\mathcal{C}^i \subset \mathcal{M}^i_{t^i}$ be the component containing $x^i$. Since $x^i$ is contained in the closure of $U_{D'}$ we have \[ \diam \mathcal{C}^i \leq D' \rho (x^i), \qquad \rho (x^i) \leq 10^{-1} r_{\can, \varepsilon^i} (x^i). \] This implies that $x^i$ satisfies the $\varepsilon^i$-canonical neighborhood assumption and therefore, after passing to a subsequence, the universal covers $(\td\mathcal{C}^i, \rho^{-2} (x^i) g^i_{t^i}, x^i)$ converge to a compact smooth pointed limit $(\ov{M}, \ov{g}, \ov{x})$, which is the final time-slice of a $\kappa$-solution. Next, since $x^i$ is contained in the support of $\eta^*_1 \eta^{**}_1$ we have \begin{equation} \label{eq_eta_limits} \liminf_{i \to \infty} a^i \cdot \frac{r_{\can, \varepsilon^i}(x^i)}{\wh\rho(x^i)} \geq \frac1{10}, \qquad \liminf_{i \to \infty} A \cdot \frac{\wh\rho(x^i)}{ \mathcal{R} (\td\mathcal{C}^i)} \geq \frac1{10}. \end{equation} By our choice of $D'$, the second bound implies $\diam (\ov{M} , \ov{g}_0) < D' \rho (\ov{x})$. So for large $i$ we have $\diam \mathcal{C}^i < D' \rho (x^i)$. Since $\lim_{i \to \infty} \wh\rho (x^i) / \rho (x^i) = V^{1/3} (\ov{M}, \ov{g}_0) > 0$, the first bound of (\ref{eq_eta_limits}) implies that $\liminf_{i \to \infty} r_{\can, \varepsilon^i} (x^i) / \rho (x^i) = \infty$. This implies that $\mathcal{C}^i \subset U^i_{D'}$ and $\rho (x^i) \leq \alpha' r_{\can, \varepsilon^i} (x^i)$ for large $i$, which contradicts our assumptions. Lastly, assume that Assertion~\ref{ass_eta1_regularity_d} of this claim was false. Then we can find a sequence of counterexamples as before, but this time for fixed $\alpha', A, a, D'$ and $\varepsilon^i \to 0$ such that $\partial^{m_0}_\t \eta'_1 (x^i) \rho^{2m_0} (x^i) \to \infty$ for some fixed $m_0 \geq 1$. So for large $i$ the point $x^i$ must lie in the support of $\eta^*_1 \eta^{**}_1 |_{U_{D'}}$, because otherwise $\eta'_1$ would be constant near $x^i$.
As before, we can pass to a pointed, compact, simply-connected $\kappa$-solution $(\ov{M}, \ov{g}, \ov{x})$ such that for any $m \geq 0$ we have \[ \partial^m_\t \eta^{**}_1 (x^i) \cdot \rho^{2m} (x^i) \longrightarrow \partial^m_t \eta^{**}_1 (\ov{x}), \qquad \partial^m_\t \wh\rho (x^i) \cdot \rho^{-1+2m} (x^i) \longrightarrow \partial^m_t \wh\rho (\ov{x}). \] Moreover, by Lemma~\ref{lem_rcan_control} we have for $m \geq 1$ \[ \limsup_{i \to \infty} \big|\partial^m_\t r_{\can, \varepsilon^i} (x^i) \big| \cdot \rho^{-1+2m} (x^i) \leq \limsup_{i \to \infty} \varepsilon^i r_{\can, \varepsilon^i}^{1-2m} (x^i) \cdot \rho^{-1+2m} (x^i) = 0, \] which implies that for $m \geq 1$ \[ \partial^m_\t \eta^{*}_1 (x^i) \cdot \rho^{2m} (x^i) \longrightarrow 0. \] By combining these estimates, we obtain a contradiction to our assumption that $\partial^{m_0}_\t \eta'_1 (x^i) \rho^{2m_0} (x^i) \to \infty$. \end{proof} \begin{Claim} \label{cl_choice_of_A} If $\alpha \leq c(\delta^*) a$, $A \geq \underline{A} (\delta^*)$, $D' \geq \underline{D}' (\delta^*)$, $D_0 \geq \underline{D}_0 (\delta^*)$, $\varepsilon \leq \ov\varepsilon (\delta^*)$, then Assertion~\ref{ass_cutoff_d} of this lemma holds. \end{Claim} \begin{proof} Assume that the claim was false for some fixed $\delta^*$. Then we can find singular Ricci flows $\mathcal{M}^i$ and points $x^i \in \mathcal{M}^i_{t^i}$ such that $\eta_1 (x^i) < 1$ and all Assertions~\ref{ass_cutoff_d1}--\ref{ass_cutoff_d4} are false, for parameters $a^i, A^i, \alpha^i \leq c^i a^i$, $D^i, D^{\prime, i}, D^i_0$, $\varepsilon^i$ that satisfy the bounds of Claims~\ref{cl_UD_prime} and \ref{cl_eta1_regularity} and $\alpha^i, c^i, \varepsilon^i \to 0$ and $A^i, D^{\prime, i}, D^i_0 \to \infty$. Let $\mathcal{C}^i \subset \mathcal{M}^i_{t^i}$ be the component containing $x^i$ and observe that $\eta_1(x^i) < 1$ implies $\eta'_1 (x^i) < (A^i)^{-1}$.
Since Assertion~\ref{ass_cutoff_d1} is violated we have \[ \rho (x^i) \leq \alpha^i \cdot r_{\can, \varepsilon^i} (x^i) \leq c^i a^i \cdot r_{\can, \varepsilon^i} (x^i) < r_{\can, \varepsilon^i} (x^i) \] for large $i$, which implies that the points $x^i$ satisfy the $\varepsilon^i$-canonical neighborhood assumption. So after passing to a subsequence, we may assume that after parabolic rescaling by $\rho^{-2} (x^i)$ the universal covers of the flows restricted to larger and larger backwards parabolic neighborhoods of $x^i$ converge to a smooth, pointed $\kappa$-solution $(\ov{M}, (\ov{g})_{t \leq 0}, \ov{x})$. We claim that $\ov{M}$ must be compact. Assume not. Then by Theorem~\ref{Thm_kappa_sol_classification}, after passing to another subsequence, $(\mathcal{C}^i, \rho^{-2} (x^i) g^i_{t^i}, x^i)$ would converge to a pointed round cylinder, its $\mathbb{Z}_2$-quotient or a pointed Bryant soliton, in contradiction to our assumption that Assertions~\ref{ass_cutoff_d2}--\ref{ass_cutoff_d4} are violated. Note here that the Hessian of $R$ on $(M_{\Bry}, g_{\Bry})$ at $x_{\Bry}$ is non-degenerate (see Lemma~\ref{Lem_Bry_R_Hessian_positive}), so if $(\mathcal{C}^i, g^i_{t^i}, x'')$ is sufficiently close to $(M_{\Bry},\linebreak[1] g_{\Bry},\linebreak[1] x_{\Bry})$, then we can find an $x' \in \mathcal{C}^i$ near $x''$ with $\nabla R (x') = 0$. So $\ov{M}$ must be compact. Since Assertion~\ref{ass_cutoff_d1} is violated by assumption, we have for all $t' \in [t^i - \rho^2 (x^i), t^i]$ \[ a^i \cdot \frac{r_{\can, \varepsilon^i} (x^i(t'))}{\wh\rho (x^i(t'))} \geq a^i \cdot \frac{r_{\can, \varepsilon^i} (x^i)}{\wh\rho (x^i(t'))} \geq \frac{a^i}{\alpha^i} \cdot \frac{\rho(x^i)}{\wh\rho (x^i(t'))} \geq \frac{1}{c^i} \cdot \frac{\rho(x^i)}{\wh\rho (x^i(t'))}. \] Due to smooth convergence to a compact $\kappa$-solution, which also holds on larger and larger parabolic neighborhoods, the right-hand side must go to infinity.
Therefore, for large $i$ we have $\eta^*_1 (x^i(t')) = 1$ for all $t' \in [t^i - \rho^2 (x^i), t^i]$. Similarly, by smooth convergence to a compact $\kappa$-solution, $A^i \to \infty$ and the fact that $\wh{\rho} / \mathcal{R}$ is scaling invariant, we obtain that for large $i$ we have $\eta^{**}_1 (x^i(t')) = 1$ for all $t' \in [t^i - \rho^2 (x^i), t^i]$. So since $D^{\prime, i} \to \infty$ we obtain that for large $i$ \[ (A^i)^{-1} > \eta'_1(x^i) \geq \int_{t^i - \rho^2 (x^i)}^{t^i} \frac{1}{\wh\rho^2 (x^i(t'))} dt'. \] This, again, contradicts smooth convergence to a compact $\kappa$-solution. \end{proof} Lastly, let us summarize the choice of constants. Given $\delta^*$, we can determine $A$ and $D_0$ using Claim~\ref{cl_choice_of_A}. Next, we can choose $D'$ using Claims~\ref{cl_eta1_regularity}, \ref{cl_choice_of_A} and then $\alpha'$, $a$ using Claim~\ref{cl_UD_prime}. Once $a$ is fixed, we can choose $\alpha$ using Claim~\ref{cl_choice_of_A}. These constants can be used to determine $D$ and $C_m$. Finally, we can choose $\varepsilon$ depending on all previous constants and $N$. \end{proof} \subsection{Modification in regions that are geometrically close to Bryant solitons} \label{subsec_modify_Bryant} Our next goal will be to round the metrics $g^s$ in regions that are close to the tip of a Bryant soliton at an appropriately small scale. The resulting metrics will be called $g^{\prime,s}_2$.
\begin{lemma}\label{lem_Bryant_rounding} For every $\delta, \delta^* > 0$, and assuming $\alpha \leq \ov\alpha(\delta^*)$, $D \geq \underline{D} (\delta^*)$, $C_m \geq \underline{C}_m (\delta^*)$ and $\varepsilon \leq \ov\varepsilon (\delta^*, \delta)$, there are: \begin{itemize} \item a transversely continuous family of smooth metrics $(g^{\prime, s}_{2})_{s \in X}$ on $\ker d\t$, \item a continuous function $\eta_2 : \cup_{s \in X} \mathcal{M}^s \to [0,1]$ that is smooth on each fiber $\mathcal{M}^s$ and transversely continuous in the smooth topology, \item a transversely continuous family of spherical structures $(\SS^s_2)_{s \in X}$ on open subsets of $\mathcal{M}^s$, \end{itemize} such that for any $s \in X$ and $t \geq 0$: \begin{enumerate}[label=(\alph*)] \item \label{ass_round_Bry_a} The fibers of $\SS^s_2$ are contained in time-slices of $\mathcal{M}^s$. \item \label{ass_round_Bry_b} $g^{\prime, s}_{2}$ is compatible with $\SS^s_2$. \item \label{ass_round_Bry_e} $\eta_2$ satisfies all assertions of Lemma~\ref{lem_cutoff_almost_extinct} for the new constants $\alpha$, $D$, $C_m$, $\varepsilon$ and $N := [\delta^{-1}]$ and with Assertion~\ref{ass_cutoff_d} replaced by: For every point $x \in \mathcal{M}^s_t$ with $\eta_2 (x) < 1$ (at least) one of the following is true: \begin{enumerate}[label=(c\arabic*)] \item \label{ass_round_Bry_e1} $\rho (x) > \alpha r_{\can, \varepsilon} (x)$. \item \label{ass_round_Bry_e2} $x$ is a center of a $\delta^*$-neck with respect to $g^{\prime,s}_2$. \item \label{ass_round_Bry_e3} There is an open neighborhood of $x$ that admits a two-fold cover in which a lift of $x$ is a center of a $\delta^*$-neck with respect to $g^{\prime,s}_2$. \item \label{ass_round_Bry_e4} $x \in \domain (\SS^s_2)$. \end{enumerate} \item \label{ass_round_Bry_f} $g^{\prime,s}_2 = g^s$ on $\{ \rho > 10^{-1} r_{\can, \varepsilon} \}$.
\item \label{ass_round_Bry_g} $|\nabla^{m_1} \partial_\t^{m_2} ( g^{\prime, s}_2 - g^s )| < \delta \rho^{-m_1 - 2m_2}$ for $m_1, m_2 = 0, \ldots, [\delta^{-1}]$. \item \label{ass_round_Bry_i} If $(\mathcal{M}^s_t, g^s_t)$ is homothetic to the round sphere or a quotient of the round cylinder, then $g^{\prime,s}_{2,t} = g^s_t$. \item \label{ass_round_Bry_j} For every spherical fiber $\mathcal{O}$ of $\SS^s_2$ there is a family of local spatial vector fields $(Y_\mathcal{O}^{s'})_{s' \in X}$ defined in a neighborhood of $\mathcal{O}$ in $\cup_{s \in X} \mathcal{M}^s$ such that for all $s' \in X$ the vector field $\partial^{s'}_\t + Y^{s'}_\mathcal{O}$ preserves $\SS^{s'}_2$ and $|\nabla^{m_1} \partial_\t^{m_2} Y^{s'}_\mathcal{O}| < \delta \rho^{1-m_1 - 2m_2}$ for $m_1, m_2 = 0, \ldots, [\delta^{-1}]$. \end{enumerate} \end{lemma} Note that we may choose $\delta \ll \delta^*$. This will be important for us later when we analyze components where $\eta_2 \in (0,1)$ in the proof of Lemma~\ref{lem_exten_almost_round}. More specifically, the diameter of these components is bounded by a constant of the form $\underline{D} (\delta^*) \rho (x)$, while $g'_2$ is $\delta$-close to $g$. So by choosing $\delta \ll \delta^*$, we can guarantee that the Ricci flows starting from both metrics on these components remain arbitrarily close on an arbitrarily large time-interval. \begin{proof} In the following we will assume that $\delta^*$ is smaller than some universal constant, which we will determine in the course of the proof. Apply Lemma~\ref{lem_cutoff_almost_extinct} with $\delta^*$ replaced by $\delta^*/2$ and $N = [\delta^{-1}]$, set \[ \eta_2 (x) := \nu ( 2 \eta_1 (x) ) \] and consider the constants $\alpha \leq \ov\alpha(\delta^*)$, $D \geq \underline{D} (\delta^*), D_0 \geq \underline{D}_0 (\delta^*)$ and $C_m \geq \underline{C}_m (\delta^*)$. Note that $\{ \eta_2 > 0 \} \subset \{ \eta_1 > 0 \}$ and $\{ \eta_2 < 1 \} \subset \{ \eta_1 < \frac12 \} \subset \{ \eta_1 < 1\}$. 
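These inclusions follow directly from the properties of the cutoff function $\nu$ fixed in Subsection~\ref{subsec_rounding_conventions}: since $\nu \equiv 0$ on $(-\infty, \frac1{10}]$ and $\nu \equiv 1$ on $[\frac9{10}, \infty)$,
\[ \eta_2 (x) = \nu \big( 2 \eta_1 (x) \big) > 0 \ \Longrightarrow \ \eta_1 (x) > \tfrac1{20} > 0, \qquad \eta_2 (x) < 1 \ \Longrightarrow \ \eta_1 (x) < \tfrac9{20} < \tfrac12. \]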
So all assertions of Lemma~\ref{lem_cutoff_almost_extinct} remain true for $\eta_2$ after modifying the constants $C_m \geq \underline{C}_m (\delta^*)$. In the following we will construct $g^{\prime,s}_2$ and $\SS^s_2$ for all $s \in X$. The fact that these objects are transversely continuous, as well as Assertion~\ref{ass_round_Bry_j}, will mostly be clear due to our construction. Let $E^s \subset \mathcal{M}^s$ be the set of points $x' \in \mathcal{M}^s_t$ such that: \begin{enumerate}[label=(\arabic*)] \item \label{prop_E_1} $\eta_1 (x') < 1$. \item \label{prop_E_2} $\rho (x') < 10^{-1} r_{\can, \varepsilon} (x')$. \item \label{prop_E_3} $\nabla R (x') = 0$. \item \label{prop_E_4} $(\mathcal{M}^s_t, g^s_t, x')$ is $\delta^*$-close to $(M_{\Bry}, g_{\Bry}, x_{\Bry})$ at some scale. \item \label{prop_E_5} The diameter of the component of $\mathcal{M}^s_t$ containing $x'$ is $> 100 D_0 \rho(x')$. \end{enumerate} \begin{Claim} \label{cl_def_E} Assuming $\delta^* \leq \ov\delta^*$, $\varepsilon \leq \ov\varepsilon (D_0)$, the following is true for any $x' \in E^s \cap \mathcal{M}^s_t$: \begin{enumerate}[label=(\alph*)] \item \label{ass_Es_a} The Hessian of $R$ at $x'$ is strictly negative. \item \label{ass_Es_b} $E^s \subset \mathcal{M}^s$ is a 1-dimensional submanifold. \item \label{ass_Es_c} $(E^s)_{s \in X}$ is transversely continuous in the following sense: There are neighborhoods $U \subset X$, $I \subset [0, \infty)$ of $x'$ and $t$ and a transversely continuous family of smooth maps $(\wh{x}'_{s'} : I \to \mathcal{M}^{s'})_{s' \in U}$ with $\wh{x}'_{s'} (t') \in E^{s'} \cap \mathcal{M}^{s'}_{t'}$, $\wh{x}'_s (t) = x'$ and $\cup_{s' \in X} \wh{x}'_{s'} (I) \cap V = \cup_{s' \in X} E^{s'} \cap V$ for some neighborhood $V \subset \cup_{s \in X} \mathcal{M}^s$ of $x'$. Moreover, $(\wh{x}'_{s'})_{s' \in U}$ is locally uniquely determined by $x'$. \item \label{ass_Es_d} The balls $B(x', 10 D_0 \rho (x'))$, $x' \in E^s \cap \mathcal{M}^s_t$, are pairwise disjoint.
\item \label{ass_Es_e} If $x \in \mathcal{M}^s_t$ with $\eta_1 (x) < 1$, then \[ x \in \cup_{x' \in E^s \cap \mathcal{M}^s_t} B(x', D_0 \rho(x')) \] or one of the Assertions~\ref{ass_round_Bry_e1}--\ref{ass_round_Bry_e3} of this lemma holds with $\delta^*$ replaced by $\delta^* / 2$ and $g^{\prime,s}_2$ replaced by $g^s$. \item \label{ass_Es_f} The injectivity radius at $x'$ is $> 10 D_0 \rho (x')$. \end{enumerate} \end{Claim} \begin{proof} Assertion~\ref{ass_Es_a} is a consequence of Lemma~\ref{Lem_Bry_R_Hessian_positive} and Property~\ref{prop_E_4} for $\delta^* \leq \ov\delta^*$ and Assertions~\ref{ass_Es_b} and \ref{ass_Es_c} are immediate consequences of Assertion~\ref{ass_Es_a} due to the implicit function theorem applied to $\nabla R$. Assertion~\ref{ass_Es_e} is a direct consequence of Assertion~\ref{ass_cutoff_d} from Lemma~\ref{lem_cutoff_almost_extinct}. For Assertion~\ref{ass_Es_d} it suffices to show that \begin{equation} \label{eq_Es_cap_x_prime} E^s \cap B(x',20 D_0 \rho (x')) = \{ x' \}. \end{equation} If Assertion~\ref{ass_Es_d} or \ref{ass_Es_f} were false, then we could find a sequence of singular Ricci flows $\mathcal{M}^i$ and points $x^{\prime,i} \in \mathcal{M}^i_{t^i}$ that satisfy Properties~\ref{prop_E_1}--\ref{prop_E_5} for $\varepsilon^i \to 0$, but violate (\ref{eq_Es_cap_x_prime}) or Assertion~\ref{ass_Es_f}. After passing to a subsequence, we may assume that $(\mathcal{M}^i_{t^i}, \rho^{-2} (x^{\prime,i}) g^i_{t^i}, x^{\prime,i})$ either converge to a pointed final time-slice $(\ov{M}, \ov{g}, \ov{x}')$ of a $\kappa$-solution, or their universal covers converge to a round sphere. The second case can be excluded by Property~\ref{prop_E_4}, assuming $\delta^* \leq \ov\delta^*$, which also implies that $(\ov{M}, \ov{g})$ is not a quotient of the round sphere or the round cylinder.
It follows that $(\ov{M}, \ov{g})$ is rotationally symmetric due to Theorem~\ref{Thm_kappa_sol_classification} and by Property~\ref{prop_E_3} the point $\ov{x}'$ is a center of rotation. By Property~\ref{prop_E_5} we obtain that $\diam (\ov{M}, \ov{g}) \geq 100 D_0$. This implies (\ref{eq_Es_cap_x_prime}) for large $i$. Assertion~\ref{ass_Es_f} follows for large $i$ since the injectivity radius at $\ov{x}'$ is $\geq 100 D_0$. \end{proof} Fix some $x' \in E^s \cap \mathcal{M}^s_t$ for the moment and choose a continuous family $(\wh{x}'_{s'})$ near $x'$ as in Assertion~\ref{ass_Es_c} of Claim~\ref{cl_def_E}. Choose a family of linear isometries $\varphi_{s',t'} : \mathbb{R}^3 \to T_{\wh{x}'_{s'}(t')} \mathcal{M}^{s'}_{t'}$ such that for every $s'$ the family $t' \mapsto \varphi_{s',t'}$ is parallel along $t' \mapsto \wh{x}'_{s'}(t')$. Then \[ \chi : (s',t', v) \longmapsto \exp_{g^{s'}_{t'}, \wh{x}'_{s'}(t')} (\varphi_{s',t'}(v)) \] defines a family of exponential coordinates near $x'$, which induce a family of $O(3)$-actions on $B(\wh{x}'_{s'}(t'), 10 D_0 \rho (\wh{x}'_{s'} (t')))$ that are transversely continuous in the smooth topology. Let $g''$ be the average of $g$ under these local actions. Note that $g''$ does not depend on the choice of the family $(\varphi_{s',t'})$, so it extends to a smooth and transversely continuous family of metrics on $\cup_{x' \in E^s} B(x', 10 D_0 \rho (x'))$, which is compatible with a unique transversely continuous family of spherical structures $(\SS^{\prime,s})_{s \in X}$. Next, define $\partial^{s'}_\t + Y^{s'}_{\chi}$ to be the average of $\partial^{s'}_\t$ under the same action on the image of $\chi$. Then $\partial^{s'}_\t + Y^{s'}_{\chi}$ preserves $\SS^{\prime,s'}$ for $s'$ near $s$.
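In formulas, if for $\mathsf{q} \in O(3)$ we denote by $\Phi_{\mathsf{q}} := \chi(s', t', \mathsf{q} \, \cdot \,) \circ \big( \chi (s', t', \cdot) \big)^{-1}$ the induced diffeomorphism of $B(\wh{x}'_{s'}(t'), 10 D_0 \rho (\wh{x}'_{s'} (t')))$ and by $d\mathsf{q}$ the normalized Haar measure on $O(3)$, then the two averages above can be written as
\[ g'' = \int_{O(3)} \Phi_{\mathsf{q}}^* \, g \ d\mathsf{q}, \qquad \partial^{s'}_\t + Y^{s'}_{\chi} = \int_{O(3)} (\Phi_{\mathsf{q}})_* \, \partial^{s'}_\t \ d\mathsf{q}. \]
The independence of $g''$ from the choice of $(\varphi_{s',t'})$ is then clear, as any other choice conjugates the $O(3)$-action by a fixed element of $O(3)$, under which $d\mathsf{q}$ is invariant.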
A standard limit argument as in the proof of Claim~\ref{cl_def_E} (the limit again being a rotationally symmetric $\kappa$-solution) shows that if $\varepsilon \leq \ov\varepsilon (\delta', D_0 (\delta^*))$, then \begin{equation}\label{eq_gppYchi} |\nabla^{m_1} \partial_\t^{m_2} ( g^{\prime\prime, s} - g^s )| < \delta' \rho^{-m_1 - 2m_2}, \qquad |\nabla^{m_1} \partial_\t^{m_2} Y^{s'}_\chi | < \delta' \rho^{1-m_1 - 2m_2} \end{equation} for $m_1, m_2 = 0, \ldots, [(\delta')^{-1}]$. For $x \in B (x', \linebreak[1] 10 D_0 \rho (x'))$ set \[ (g^{\prime\prime\prime, s}_t)_x := (g^{\prime\prime,s}_t)_x + \nu \bigg( \frac{d(x, x')}{D_0 \rho (x')} - 2 \bigg) \cdot \big( (g^s_{t})_x - (g^{\prime\prime,s}_t)_x \big) \] and $(g^{\prime\prime\prime, s}_t)_x := (g^s_{t})_x$ otherwise. Then $g^{\prime\prime\prime, s}_t$ is smooth on $\mathcal{M}^s_t$ and transversely continuous and $g^{\prime\prime\prime, s}_t = g^{\prime\prime, s}_t$ on $B (x', \linebreak[1] 2 D_0 \rho (x'))$. We can now define $g^{\prime,s}_2$ as follows: \[ (g^{\prime,s}_2)_x := (g^s)_x + \nu \bigg( \frac{ r_{\can, \varepsilon} (x)}{10^2 \rho (x)} \bigg) \cdot \nu \big( 2 - 2 \eta_1 (x) \big) \cdot \big( (g^{\prime\prime\prime,s})_x - (g^s)_x \big). \] Then $(g^{\prime,s}_2)_x = (g^{\prime\prime\prime,s})_x$ whenever $\rho (x) < 10^{-2} r_{\can, \varepsilon} (x)$ and $\eta_1(x) < \frac12$. On the other hand, $(g^{\prime,s}_2)_x = (g^{s})_x$ whenever $\rho (x) >10^{-1} r_{\can, \varepsilon} (x)$ or $\eta_1 (x) = 1$. This implies Assertion~\ref{ass_round_Bry_f}. If $\delta' \leq \ov\delta' (\delta, D_0(\delta^*), (C_m (\delta^*)))$ and $\varepsilon \leq \ov\varepsilon (\delta)$, then Assertion~\ref{ass_round_Bry_g} holds due to (\ref{eq_gppYchi}) and Lemma~\ref{lem_rcan_control}; moreover, we can assume that $\frac12 \rho_g < \rho_{g'_2} < 2 \rho_g$. Next, let $\SS^s_2$ be the restriction of $\SS^{\prime,s}$ to $\{ \rho_{g'_2} < \frac12 10^{-2} r_{\can, \varepsilon} \} \cap \{ \eta_1 < \frac12 \}$.
Then Assertion~\ref{ass_round_Bry_e} holds due to Claim~\ref{cl_def_E}\ref{ass_Es_e}, assuming that we have chosen $\delta' \leq \ov\delta' (\delta^*, D_0)$ and $\alpha \leq 10^{-3}$; note that we need to ensure that a center of a $\delta^*/2$-neck with respect to $g^s$ is automatically a center of a $\delta^*$-neck with respect to $g^{\prime,s}_2$. Assertions~\ref{ass_round_Bry_a}, \ref{ass_round_Bry_b}, \ref{ass_round_Bry_i} hold by construction. Assertion~\ref{ass_round_Bry_j} holds due to (\ref{eq_gppYchi}), assuming that $\delta' \leq \delta$. \end{proof} \subsection{Modification in cylindrical regions} \label{subsec_modify_necks} In the next lemma we construct metrics $g^{\prime,s}_3$ by rounding the metrics $g^{\prime,s}_2$ in the neck-like regions. The new metrics will be rotationally symmetric everywhere, except at points of large scale or on components of bounded normalized diameter. \begin{lemma}\label{lem_round_cyl} For every $\delta > 0$, and assuming that $\alpha \leq \ov\alpha, D \geq \underline{D}$, $C_m \geq \underline{C}_m$ and $\varepsilon \leq \ov\varepsilon(\delta)$, there are: \begin{itemize} \item a transversely continuous family of smooth metrics $(g^{\prime, s}_{3})_{s \in X}$ on $\ker d\t$, \item a continuous function $\eta_3 : \cup_{s \in X} \mathcal{M}^s \to [0,1]$ that is smooth on each fiber $\mathcal{M}^s$ and transversely continuous in the smooth topology, \item a transversely continuous family of spherical structures $(\SS^s_3)_{s \in X}$ on open subsets of $\mathcal{M}^s$, \end{itemize} such that for all $s \in X$: \begin{enumerate}[label=(\alph*)] \item \label{ass_round_Cyl_a} The fibers of $\SS^s_3$ are contained in time-slices of $\mathcal{M}^s$. \item \label{ass_round_Cyl_b} $g^{\prime, s}_{3}$ is compatible with $\SS^s_3$. \item \label{ass_round_Cyl_c} $\domain (\SS^s_3) \supset \{ \rho < \alpha r_{\can, \varepsilon} \} \cap \{ \eta_3 < 1 \} \cap \mathcal{M}^s$.
\item \label{ass_round_Cyl_d} $\eta_3$ satisfies Assertions~\ref{ass_cutoff_a}--\ref{ass_cutoff_c}, \ref{ass_cutoff_e} of Lemma~\ref{lem_cutoff_almost_extinct} with respect to the new constants $D, C_m$ and for $N = [\delta^{-1}]$. \item \label{ass_round_Cyl_e} $g^{\prime,s}_3 = g^s$ on $\{ \rho > 10^{-1} r_{\can,\varepsilon}\}$. \item \label{ass_round_Cyl_f} $|\nabla^{m_1} \partial_\t^{m_2} ( g^{\prime,s}_3-g^s )| \leq \delta \rho^{-m_1-2m_2}$ for $m_1, m_2 = 0, \ldots, [\delta^{-1}]$. \item \label{ass_round_Cyl_g} If $(\mathcal{M}^s_t, g^s_t)$ is homothetic to the round sphere or a quotient of the round cylinder for some $t \geq 0$, then $g^{\prime,s}_{3,t} = g^s_t$. \item \label{ass_round_Cyl_h} For every spherical fiber $\mathcal{O}$ of $\SS^s_3$ there is a family of local spatial vector fields $(Y_\mathcal{O}^{s'})_{s' \in X}$ defined in a neighborhood of $\mathcal{O}$ in $\cup_{s \in X} \mathcal{M}^s$ such that for all $s' \in X$ the vector field $\partial^{s'}_\t + Y^{s'}_\mathcal{O}$ preserves $\SS^{s'}_3$ and $|\nabla^{m_1} \partial_\t^{m_2} Y^{s'}_\mathcal{O}| < \delta \rho^{1-m_1 - 2m_2}$ for $m_1, m_2 = 0, \ldots, [\delta^{-1}]$. \end{enumerate} \end{lemma} Note that $\alpha$, $D$ and $C_m$ are independent of $\delta$. \begin{proof} Let $\delta^*, \delta^\#, \delta' > 0$ be constants that we will determine in the course of the proof. Apply Lemma~\ref{lem_Bryant_rounding} for $\delta^*, \delta = \delta^\#$ and consider the family of metrics $(g^{\prime, s}_2)_{s \in X}$, the continuous function $\eta_2$ and the family of spherical structures $(\SS^s_2)_{s \in X}$. If $\delta^* \leq \ov\delta^*$, then for every $x \in \mathcal{M}^s_t$ that satisfies Assertion~\ref{ass_round_Bry_e2} or \ref{ass_round_Bry_e3} of Lemma~\ref{lem_Bryant_rounding} we can find a unique constant mean curvature (CMC) sphere or CMC-projective space $x \in \Sigma_x \subset \mathcal{M}^s_t$ with respect to $g^{\prime,s}_{2,t}$ whose diameter is $< 100 \rho (x)$.
Moreover, the induced metric $g^{\prime,s}_{2,t} |_{\Sigma_x}$ can, after rescaling, be assumed to be arbitrarily close to the round sphere or projective space if $\delta^*$ is chosen appropriately small. So if $\delta^* \leq \ov\delta^*$, then $g'_x := \RD^2 (g^{\prime,s}_{2,t} |_{\Sigma_x})$ defines a round metric on $\Sigma_x$; here $\RD^2$ denotes the rounding operator from Subsection~\ref{subsec_RD}. If $x \in \domain (\SS^s_2)$, then $\Sigma_x$ is a regular fiber of $\SS^s_2$ and the induced metric is round, so $g'_x = g^{\prime,s}_{2,t} |_{\Sigma_x}$. It follows that for sufficiently small $\delta^*$ the spheres $\Sigma_x \subset \mathcal{M}^s_t$ and volume-normalizations of the metrics $g'_x$, for all $x$ satisfying Assertion~\ref{ass_round_Bry_e2} or \ref{ass_round_Bry_e3} of Lemma~\ref{lem_Bryant_rounding}, can be used to define a spherical structure $\SS^{\prime,s}$ on an open subset of $\mathcal{M}^s$ that extends the spherical structure $\SS^s_2$. By construction, the family $(\SS^{\prime,s})_{s \in X}$ is transversely continuous and \begin{equation} \label{eq_domain_SS_prime_contains} \{ \rho_{g} < \alpha r_{\can, \varepsilon} \} \cap \{ \eta_2 < 1 \} \cap \mathcal{M}^s \subset \domain(\SS^{\prime,s}). \end{equation} The family $(\SS^s_3)_{s \in X}$ will arise by restricting $(\SS^{\prime,s})_{s \in X}$ to a smaller domain. From now on we fix $\delta^* > 0$ such that the construction in the previous paragraph can be carried out. We can then also fix $D \geq \underline{D} (\delta^*)$, $\alpha \leq \ov\alpha (\delta^*)$ and $C_m \geq \underline{C}_m (\delta^*)$ according to Lemma~\ref{lem_Bryant_rounding}. We will now construct a family of metrics on $\domain(\SS^{\prime,s})$ that are compatible with $\SS^{\prime,s}$. For this purpose fix $s \in X$ and consider a regular spherical fiber $\Sigma \subset \mathcal{M}^s_t$ of $\SS^{\prime,s}$.
Let $g_\Sigma$ be a multiple of the standard round metric induced by $\SS^{\prime,s}$ with the property that: \begin{enumerate} \item \label{prop_CMC_rounding_1} The areas of $\Sigma$ with respect to $g_\Sigma$ and the induced metric $g^{\prime,s}_{2,t} |_{\Sigma}$ agree. \end{enumerate} Note that $g_\Sigma = g'_x$ from before if $\Sigma = \Sigma_x$. If $\Sigma$ is a fiber of $\SS^s_2$, then we have $g_\Sigma = g^{\prime,s}_{2,t} |_{\Sigma}$. By passing to a local two-fold cover (see Lemma~\ref{lem_local_spherical_struct}), we can also define $g_\Sigma$ for any singular fibers $\Sigma \approx \mathbb{R} P^2$. Next, fix a unit normal vector field $N$ along $\Sigma$ (with respect to $g^{\prime,s}_2$) and consider all spatial vector fields $Z$ defined in a neighborhood of $\Sigma$ such that: \begin{enumerate}[start=2] \item \label{prop_CMC_rounding_2} $Z$ preserves $\SS^{\prime,s}$ \item \label{prop_CMC_rounding_3} $\int_\Sigma \langle Z, N \rangle_{g^{\prime,s}_2} d\mu_{g_\Sigma}=\int_\Sigma d\mu_{g_\Sigma}$ \end{enumerate} Any two such vector fields $Z_1, Z_2$ differ along $\Sigma$ by a Killing field on $(\Sigma,g_\Sigma)$. So there is a unique vector field $Z_{\Sigma, N}$ along $\Sigma$ with minimal $L^2$-norm (with respect to $g^{\prime,s}_2$) that arises as a restriction of a vector field $Z$ satisfying Properties~\ref{prop_CMC_rounding_2} and \ref{prop_CMC_rounding_3} to $\Sigma$. Note that $Z_{\Sigma, - N} = - Z_{\Sigma, N}$. If $\Sigma$ is a fiber of $\SS^s_2$, then $Z_{\Sigma, N} = N$. We can now define a metric $g^{\prime\prime, s}$ for $\ker d\t$ on $\domain(\SS^{\prime,s})$ such that for every regular fiber $\Sigma$ of $\SS^{\prime,s}$: \begin{enumerate}[start=4] \item $g^{\prime\prime,s} |_{\Sigma} = g_\Sigma$ \item $Z_{\Sigma, N}$ are unit normal vector fields with respect to $g^{\prime\prime,s}$. \end{enumerate} Then $g^{\prime\prime,s}$ is smooth and transversely continuous near regular fibers. 
The regularity near singular fibers $\approx \mathbb{R} P^2$ can be seen by passing to a local two-fold cover. On $\domain (\SS^s_2)$ we have $g^{\prime\prime, s} = g^{\prime, s}_2$, so $g^{\prime\prime, s}$ is also regular near singular fibers that are points. This shows that $g^{\prime\prime,s}$ is smooth and transversely continuous everywhere. Let us now discuss the closeness of $g^{\prime\prime, s}$ to $g^{\prime,s}_2$ and the existence of a local vector field as in Assertion~\ref{ass_round_Cyl_h}. Fix a fiber $\Sigma \subset \mathcal{M}^s_t$ of $\SS^{\prime, s}$ that is not a fiber of $\SS^s_2$ and pick a point $x \in \Sigma$ and two orthonormal vectors $v_1, v_2 \in T_x \mathcal{M}^s_t$ (with respect to $g^{\prime\prime, s}_{t}$) that are tangent to $\Sigma$. We can uniquely extend $v_i$ to a family of vectors $v_{i,t'}$ along the curve $t' \mapsto x(t') \in \mathcal{M}^s_{t'}$, for $t'$ close to $t$, such that $v_{1,t'}, v_{2,t'}$ remain orthonormal and tangent to the spherical fiber $\Sigma_{x(t')}$ through $x(t')$ and such that $\frac{d}{dt'} v_{1,t'}$ is normal to $v_{2,t'}$. Using the exponential map, we can find a unique family of homothetic embeddings $\beta_{t'} : S^2 \to \mathcal{M}^s_{t'}$ such that $\beta_{t'} (S^2) = \Sigma_{x(t')}$ and such that for some fixed orthonormal tangent vectors $u_1, u_2$ of $S^2$ the images $d\beta_{t'} (u_i)$ are positive multiples of $v_{i,t'}$. Using the normal exponential map to $\Sigma_{x(t')}$, we can extend these maps to charts of the form \[ \chi^s_{t'} : S^2 \times (-a^s, a^s) \longrightarrow \mathcal{M}^s_{t'} \] By repeating the same procedure for $s'$ near $s$, starting with points $x^{s'}$ and vectors $v_i^{s'}$ that depend continuously on $s'$, we can extend $(\chi^s_{t'})$ to a transversely continuous family of charts $\chi = (\chi^{s'}_{t'})$. 
These charts induce a transversely continuous family of local $O(3)$-actions that are isometric with respect to $g^{\prime\prime,s}$ and compatible with $\SS^{\prime,s}$. As in the proof of Lemma~\ref{lem_Bryant_rounding} we can define $\partial^{s'}_\t + Y^{s'}_{\chi}$ to be the average of $\partial^{s'}_\t$ under this action near $\Sigma$. Then $\partial^{s'}_\t + Y^{s'}_{\chi}$ preserves $\SS^{\prime,s'}$ for $s'$ near $s$. A limit argument yields that if $\varepsilon \leq \ov\varepsilon (\delta'), \delta^\# \leq \ov\delta^\# (\delta')$, then near $\Sigma$ \begin{equation}\label{eq_gppYchi_again} |\nabla^{m_1} \partial_\t^{m_2} ( g^{\prime\prime, s} - g_2^{\prime,s} )| \leq \delta' \rho^{-m_1 - 2m_2}, \qquad |\nabla^{m_1} \partial_\t^{m_2} Y^{s'}_\chi | \leq \delta' \rho^{1-m_1 - 2m_2} \end{equation} for $m_1, m_2 = 0, \ldots, [(\delta')^{-1}]$. Lastly, we construct the metrics $g^{\prime, s}_3$ by interpolating between $ g_2^{\prime,s}$ and $g^{\prime\prime, s}$ using the cutoff function $\nu$ from Subsection~\ref{subsec_rounding_conventions}. If $x \in \domain (\SS^{\prime, s})$, then set \[ (g^{\prime,s}_3)_x := (g^{\prime, s}_2)_x + \nu \bigg( \frac{\alpha \cdot r_{\can, \varepsilon} (x)}{10^2 \rho_{g} (x)} \bigg) \cdot \nu \big( 2-2 \eta_2 (x) \big) \cdot \big( (g^{\prime\prime,s})_x - (g^{\prime, s}_2)_x \big), \] otherwise let $(g^{\prime,s}_3)_x := (g^{\prime, s}_2)_x$. This defines a smooth and transversely continuous family of metrics due to (\ref{eq_domain_SS_prime_contains}). On $\{ \rho > 10^{-1} r_{\can, \varepsilon} \}$ we have $g^{\prime}_3 = g^{\prime}_2 = g$, assuming $\alpha \leq 1$, which implies Assertion~\ref{ass_round_Cyl_e}.
As in the proof of Lemma~\ref{lem_Bryant_rounding} the bound (\ref{eq_gppYchi_again}) implies Assertion~\ref{ass_round_Cyl_f} if $\delta' \leq \ov\delta' ( \delta, (C_m (\delta^*)))$ and $\delta^\# \leq \ov\delta^\# (\delta)$ and we can again assume that $\frac12 \rho_{g'_2} < \rho_{g'_3} < 2 \rho_{g'_2}$ and $\frac12 \rho_{g} < \rho_{g'_2} < 2 \rho_{g}$. Next note that $g'_3 = g''$ on \[ \{ \rho_{g} < 10^{-2} \alpha r_{\can, \varepsilon} \} \cap \{ \eta_2 < \tfrac12 \} \supset \{ \rho_{g'_3} < \tfrac14 10^{-2} \alpha r_{\can, \varepsilon} \} \cap \{ \eta_2 < \tfrac12 \}. \] So if $\SS^s_3$ denotes the restriction of $\SS^{\prime, s}$ to the subset $\{ \rho_{g'_3} < \frac14 10^{-2} \alpha r_{\can, \varepsilon} \} \cap \{ \eta_2 < \frac12 \}$, then Assertions~\ref{ass_round_Cyl_a}, \ref{ass_round_Cyl_b} hold by construction and Assertion~\ref{ass_round_Cyl_c} holds if we replace $\alpha$ by $\frac1{16} 10^{-2} \alpha$ and set $\eta_3 (x) := \nu (2 \eta_2 (x) )$. Finally, Assertion~\ref{ass_round_Cyl_h} of this lemma holds due to (\ref{eq_gppYchi_again}) and Lemma~\ref{lem_Bryant_rounding}\ref{ass_round_Bry_j}, assuming $\delta' \leq \delta$, and Assertions~\ref{ass_round_Cyl_d} and \ref{ass_round_Cyl_g} hold after adjusting the constants $C_m$ appropriately. \end{proof} \subsection{Modification of the time vector field} \label{subsec_modify_dt} Next, we will modify the time-vector fields $\partial^s_\t$ on $\{ \eta_3 < 1 \} \cap \{ \rho < \alpha r_{\can,\varepsilon} \}$ so that they preserve the spherical structures $\SS^s_3$.
\begin{lemma} \label{lem_dtprime} For every $\delta > 0$ and assuming that $\alpha \leq \ov\alpha$, $D \geq \underline{D}$, $C_m \geq \underline{C}_m$ and $\varepsilon \leq \ov\varepsilon(\delta)$ there are: \begin{itemize} \item a transversely continuous family of smooth metrics $(g^{\prime, s}_{4})_{s \in X}$ on $\ker d\t$, \item a continuous function $\eta_4 : \cup_{s \in X} \mathcal{M}^s \to [0,1]$ that is smooth on each fiber $\mathcal{M}^s$ and transversely continuous in the smooth topology, \item a transversely continuous family of spherical structures $(\SS^s_4)_{s \in X}$ on open subsets of $\mathcal{M}^s$, \item a transversely continuous family of smooth vector fields $(\partial^{\prime,s}_{\t,4})_{s \in X}$ on $\mathcal{M}^s$ that satisfy $\partial^{\prime,s}_{\t,4} \t = 1$, \end{itemize} such that the assertions of Lemma~\ref{lem_round_cyl} still hold for $(g^{\prime,s}_4)_{s \in X}$, $\eta_4$ and $(\SS^s_4)_{s \in X}$ and such that in addition for all $s \in X$: \begin{enumerate}[label=(\alph*), start=9] \item \label{ass_round_VF_i} $\partial^{\prime,s}_{\t,4}$ preserves $\SS^s_4$. \item \label{ass_round_VF_j} $\partial^{\prime,s}_{\t,4} = \partial^s_\t$ on $\{ \rho > 10^{-1} r_{\can, \varepsilon} \}$. \item \label{ass_round_VF_k} $|\nabla^{m_1} \partial_\t^{m_2} (\partial^{\prime,s}_{\t,4} - \partial^s_\t)| \leq \delta \rho^{1-m_1 - 2m_2}$ for $m_1, m_2 = 0, \ldots, [\delta^{-1}]$. \item \label{ass_round_VF_l} If $(\mathcal{M}^s_0,g^s_0)$ is homothetic to the round sphere or a quotient of the round cylinder, then $\partial^{\prime,s}_{\t,4} = \partial^s_\t$. \end{enumerate} \end{lemma} \begin{proof} Apply Lemma~\ref{lem_round_cyl} for $\delta$ replaced by some $\delta^\# > 0$, which we will determine in the course of the proof depending on $\delta$, and fix the constants $\alpha$, $D$, $C_m$. Assume in the following that $\frac12 g < g'_3 < 2 g$. Set $(g^{\prime,s}_4 )_{s \in X} := (g^{\prime,s}_3 )_{s \in X}$.
Fix some $s \in X$ for now and set $U^s := \{ \eta_3 < 1 \} \cap \{ \rho_{g'_3} < \frac12 \alpha r_{\can, \varepsilon} \} \cap \mathcal{M}^s \subset \domain (\SS^s_3)$. We will define \begin{equation} \label{eq_def_partialtprime} \partial^{\prime,s}_{\t,4} := \partial^s_\t + \eta^* Z^s, \end{equation} where $Z^s$ is a spatial vector field on $U^s$ and $\eta^* : \cup_{s \in X} \mathcal{M}^s \to [0,1]$ is a smooth and transversely continuous cutoff function with support on $\cup_{s \in X} U^s$. Let us describe the construction of $Z^s$. Consider a spherical fiber $\mathcal{O} \subset U^s \cap \mathcal{M}^s_t$. Call a vector field $Z'$ in $\mathcal{M}^s_t$ along $\mathcal{O}$ \emph{admissible} if $\partial^s_\t + Z'$ can be extended to a vector field on a neighborhood of $\mathcal{O}$ in $\mathcal{M}^s$ that preserves $\SS^s_3$. Using a local chart near $\mathcal{O}$ in $\mathcal{M}^s$, one can see that the space of admissible vector fields along $\mathcal{O}$ is affine and finite dimensional. More specifically, if $\mathcal{O}$ is a point, then there is only one admissible vector field along $\mathcal{O}$ and otherwise the difference of any two admissible vector fields is equal to the sum of a Killing field on $\mathcal{O}$ and a parallel normal vector field to $\mathcal{O}$. Let now $Z'_{\mathcal{O}}$ be the admissible vector field along $\mathcal{O}$ whose $L^2$-norm is minimal and define $Z^s$ on $U^s$ such that $Z^s|_{\mathcal{O}} := Z'_{\mathcal{O}}$ for every spherical fiber $\mathcal{O} \subset U^s$. Then $Z^s$ is well defined. We will now show that $Z^s$ is smooth, transversely continuous in the smooth topology and small in the sense of Assertion~\ref{ass_round_VF_k}. For this purpose, consider the family of local vector fields $(Y^{s'}_\mathcal{O})_{s' \in X}$ from Lemma~\ref{lem_round_cyl}\ref{ass_round_Cyl_h}, defined near a spherical fiber $\mathcal{O}$.
As in the proofs of Lemmas~\ref{lem_Bryant_rounding} and \ref{lem_round_cyl}, we can construct a transversely continuous family of local $O(3)$-actions $(\zeta^{s'})$ that are compatible with $\SS^{s'}_3$ and isometric with respect to $g^{\prime, s'}_3$. By definition $Y^{s'}_\mathcal{O} - Z^{s'}$ restricted to every spherical fiber $\mathcal{O}' \subset \mathcal{M}^{s'}$ near $\mathcal{O}$ that is not a point equals the $L^2$-projection of $Y^{s'}_\mathcal{O}|_{\mathcal{O}'}$ onto the subspace spanned by Killing fields and parallel normal vector fields along $\mathcal{O}'$. If $\mathcal{O}'$ is a point, then $Y^{s'}_\mathcal{O} - Z^{s'}$ restricted to $\mathcal{O}'$ vanishes. So a representation theory argument implies that \[ Y^{s'}_\mathcal{O} - Z^{s'} |_U = \frac1{|O(3)|} \int_{O(3)} (1 + 3 \tr A) (\zeta^{s'}_A)_* Y^{s'}_{\mathcal{O}} \, dA, \] where $\zeta^{s'}_A := \zeta^{s'} (A, \cdot)$. This implies the desired regularity properties of $Z^s$ and the bound from Assertion~\ref{ass_round_VF_k} for $\delta^\# \leq \ov\delta^\# (\delta)$ if we had $\partial^{\prime,s}_{\t,4} = \partial^s_\t + Z^s$. It remains to construct the cutoff function $\eta^*$. Let $\nu$ be the cutoff function from Subsection~\ref{subsec_rounding_conventions} and set \[ \eta^* (x) := \nu \bigg( \frac{\alpha \cdot r_{\can, \varepsilon} (x)}{10^2 \rho_{g'_3} (x)} \bigg) \cdot \nu \big( 2 - 2 \eta_3 (x) \big) . \] Then $\supp \eta^* \subset \cup_{s \in X} U^s$, so if we define $\partial^{\prime,s}_{\t,4}$ as in (\ref{eq_def_partialtprime}), then Assertion~\ref{ass_round_VF_k} holds for $\delta^\# \leq \ov\delta^\# (\delta, (C_m))$ and $\varepsilon \leq \ov\varepsilon (\delta, (C_m))$. Next, note that $\eta^* \equiv 1$ on $U' := \{ \rho_{g'_3} < 10^{-2} \alpha r_{\can, \varepsilon} \} \cap \{ \eta_3 < \frac12 \}$. So if we define $\SS^{s}_4$ to be the restriction of $\SS^{s}_3$ to $U' \cap \mathcal{M}^s$, then Assertion~\ref{ass_round_VF_i} holds.
Assertions~\ref{ass_round_Cyl_a}--\ref{ass_round_Cyl_h} of Lemma~\ref{lem_round_cyl} continue to hold if we replace $\alpha$ by $\frac12 10^{-2} \alpha$, set $\eta_4 (x) := \nu (2 \eta_3 (x) )$ and adjust $C_m$ appropriately. Assertion~\ref{ass_round_VF_j} of this lemma holds assuming $\alpha \leq \frac12$ and Assertion~\ref{ass_round_VF_l} holds by construction. \end{proof} \subsection{Extension of the structure until the metric is almost round} \label{subsec_extend_to_almost_round} Next we modify each metric $g^{\prime,s}_4$ on the support of $\eta_4$ such that it remains compatible with a spherical structure $\SS^s_5$ until it has almost constant curvature. We will also choose a new cutoff function $\eta_5$ whose support is contained in the support of $\eta_4$ and which measures the closeness of $g^{\prime,s}_4$ to a constant curvature metric. Recall in the following that $\eta_4$ is only non-zero in components that have positive sectional curvature, bounded normalized diameter and will become extinct in finite time. Therefore our construction will only take place in product domains of $\mathcal{M}^s$, which can be described by conventional Ricci flows. Our strategy will be to construct new metrics $g^{\prime,s}_5$ by evolving the metrics $g^{\prime,s}_4$ on the support of $\eta_4$ forward by a certain amount of time under the Ricci flow. By the continuous dependence of the Ricci flow on its initial data, this flow remains close to $g^s$ for some time. Moreover, any symmetry of $g^{\prime,s}_4$ will be preserved by the flow and therefore the new metrics $g^{\prime,s}_5$ will be compatible with a spherical structure for a longer time. By choosing our constants appropriately, we can ensure that $g^{\prime,s}_5$ is compatible with a spherical structure until a time close enough to the corresponding extinction time. After this time the remaining flow is sufficiently close to a quotient of the round sphere.
\begin{lemma} \label{lem_exten_almost_round} For every $\delta > 0$ and assuming that $\alpha \leq \ov\alpha (\delta)$, $C^*_m \geq \underline{C}^*_m$ and $\varepsilon \leq \ov\varepsilon(\delta)$ there are: \begin{itemize} \item a transversely continuous family of smooth metrics $(g^{\prime, s}_{5})_{s \in X}$ on $\ker d\t$, \item a continuous function $\eta_5 : \cup_{s \in X} \mathcal{M}^s \to [0,1]$ that is smooth on each fiber $\mathcal{M}^s$ and transversely continuous in the smooth topology, \item a transversely continuous family of spherical structures $(\SS^s_5)_{s \in X}$ on open subsets of $\mathcal{M}^s$, \item a transversely continuous family of smooth vector fields $(\partial^{\prime,s}_{\t,5})_{s \in X}$ on $\mathcal{M}^s$ that satisfy $\partial^{\prime,s}_{\t,5} \t = 1$, \end{itemize} such that for any $s \in X$ and $t \geq 0$: \begin{enumerate}[label=(\alph*)] \item \label{ass_extend_alm_round_aa} $\partial^s_\t \eta_5 \geq 0$. \item \label{ass_extend_alm_round_a} $\eta_5$ is constant on every connected component of $\mathcal{M}^{s}_t$. \item \label{ass_extend_alm_round_b} Any connected component $\mathcal{C} \subset \mathcal{M}^{s}_t$ on which $\eta_5 > 0$ is compact and there is a time $t_\mathcal{C} > t$ such that: \begin{enumerate}[label=(c\arabic*)] \item $\mathcal{C}$ survives until time $t'$ for all $t' \in [t, t_\mathcal{C})$ and no point in $\mathcal{C}$ survives until or past time $t_\mathcal{C}$. \item The universal cover $(\td\mathcal{C} (t'), g^{\prime,s}_{5,t'})$ is $\delta$-close to the round sphere modulo rescaling for all $t' \in [t, t_\mathcal{C})$. \item $\rho < 10^{-1} r_{\can, \varepsilon}$ on $\mathcal{C} (t')$ for all $t' \in [t, t_\mathcal{C})$. \end{enumerate} \item \label{ass_extend_alm_round_c} $| \partial_\t^m \eta_5 | \leq C^*_m \rho^{-2m}$ for $m = 0, \ldots, [\delta^{-1}]$.
\item \label{ass_extend_alm_round_d} $\eta_5$, $g'_5$, $\SS_5$, $\partial'_{\t, 5}$ satisfy Assertions~\ref{ass_round_Cyl_a}--\ref{ass_round_Cyl_c}, \ref{ass_round_Cyl_e}--\ref{ass_round_Cyl_g} of Lemma~\ref{lem_round_cyl} and all assertions of Lemma~\ref{lem_dtprime}. \end{enumerate} \end{lemma} It will be important later that the constants $C^*_m$ are independent of $\delta$. \begin{proof} Let $ \delta^\#, \delta',\delta'' > 0$, $A < \infty$ be constants whose values will be determined depending on $\delta$ in the course of the proof. Apply Lemma~\ref{lem_dtprime} with $\delta$ replaced by $\delta^\#$ and consider the families $(g^{\prime,s}_4)_{s \in X}, (\partial^{\prime,s}_{\t,4})_{s \in X}$, $(\SS^s_4)_{s \in X}$, the cutoff function $\eta_4$ and the constants $\alpha, D, C_m$. Assume that $\delta^\#$ is chosen small enough such that $\frac12 \rho_{g} < \rho_{g'_4} < 2 \rho_{g}$. Set $\partial'_{\t, 5} := \partial'_{\t, 4}$ everywhere and $g'_5 := g'_4$, $\SS_5 := \SS_4$ and $\eta_5 := \eta_4 = 0$ on $\{ \eta_4 = 0 \}$. Therefore, it suffices to consider only components $\mathcal{C} \subset \mathcal{M}^s_t$ where $\eta_4 > 0$. Recall that by Lemma~\ref{lem_cutoff_almost_extinct}\ref{ass_cutoff_c}, the union of these components consists of pairwise disjoint product domains. For any such component $\mathcal{C} \subset \mathcal{M}^s_t$ choose $t^{\min}_\mathcal{C} < t < t^{\max}_\mathcal{C}$ minimal/maximal such that $\mathcal{C}$ survives until all $t' \in (t^{\min}_\mathcal{C}, t^{\max}_\mathcal{C})$ and set \[ U_\mathcal{C} := \cup_{t' \in (t^{\min}_\mathcal{C}, t^{\max}_\mathcal{C})} \mathcal{C} (t'). \] Then $U_\mathcal{C}$ is a product domain (with respect to $\partial^s_\t$ and $\partial^{\prime,s}_{\t, 5}$) and for any two components $\mathcal{C}, \mathcal{C}' \subset \mathcal{M}^s_t$ on which $\eta_4 > 0$ the product domains $U_{\mathcal{C}}, U_{\mathcal{C}'}$ are either equal or disjoint. 
For the remainder of the proof let us fix some $s \in X$ and a product domain of the form $U_\mathcal{C}$. Recall that $\eta_4 = 0$ on $\mathcal{C}(t') \subset U_\mathcal{C}$ for $t'$ close to $t^{\min}_\mathcal{C}$. We will describe how to define $g^{\prime,s}_5$, $\eta_5$ and $\SS^s_5$ on $U_\mathcal{C}$ such that all assertions of this lemma hold and $g^{\prime,s}_5 = g^{\prime,s}_4$, $\eta_5 = \eta_4 = 0$ and $\SS^s_5 = \SS^s_4$ on $\{ \eta_4 = 0 \} \cap U_\mathcal{C}$. It will be clear that the same construction can be performed for every $U_\mathcal{C}$ and that the resulting objects have the desired regularity properties. We first represent the flows $g^s$ and $g^{\prime,s}_4$ restricted to $U_\mathcal{C}$ by a smooth family of metrics on $\mathcal{C}$, which satisfy an equation that is similar to the volume normalized Ricci flow equation. For this purpose consider the flow $\Phi : \mathcal{C} \times (T_1, T_2) \to U_\mathcal{C}$ of the vector field $\td\partial^s_\t := \wh\rho_g^2 \cdot \partial^{\prime,s}_{\t, 4}$. By Lemma~\ref{lem_cutoff_almost_extinct}\ref{ass_cutoff_c5} we have $|\td\partial^s_\t \t| = \wh\rho_g^2 \leq C (t^{\max}_\mathcal{C} - \t)$. So $T_2 = \infty$. Express $g^s$ and $g^{\prime,s}_4$ as families of metrics $(\td{g}_t)_{t \in (T_1, \infty)}$, $(\td{g}_{4,t})_{t \in (T_1, \infty)}$ and define $\td\eta_4 : (T_1, \infty) \to [0,1]$ as follows: \[ \td{g}_t :=\Phi^*_t \big( \wh\rho^{-2}_g g^{s} \big), \qquad \td{g}_{4,t} := \Phi^*_t \big( \wh\rho_g^{-2} g^{\prime,s}_4 \big), \qquad \td\eta_4 (t) := \eta_4 (\Phi_t (\mathcal{C})) . \] Note that if $\partial^{\prime,\t}_{\t, 4} = \partial^s_{\t}$, then $(\td{g}_t)$ is the volume normalization of the flow $g^s$ restricted to $U_\mathcal{C}$. In the general case, consider the family of vector fields $(Z_t)_{t \in (T_1, \infty)}$ \[ Z_t := \Phi^*_t \big( \wh\rho_g^2 (\partial^s_{\t}-\partial^{\prime,s}_{\t, 4}) \big). 
\] Then $(\td{g}_t)$ satisfies the following normalized flow equation with a corrective Lie derivative term: \[ \partial_t \td{g}_t + \mathcal{L}_{Z_t} \td{g}_t = - 2 \Ric_{\td{g}_t} + \frac{2}{3 V(\mathcal{C}, \td{g}_t)} \int_\mathcal{C} R_{\td{g}_t} d\mu_{\td{g}_t} \cdot \td{g}_t . \] We have $\partial_t \td\eta_4 \geq 0$ by Lemma~\ref{lem_cutoff_almost_extinct}\ref{ass_cutoff_a}. By our previous discussion we have $\td\eta_4 (t) \equiv 0$ for $t$ near $T_1$ and $\td\eta_4 (t) > 0$ for large $t$. Let $T_0 \in (T_1, \infty)$ be maximal such that $\td\eta_4 |_{(T_1, T_0]} \equiv 0$. \begin{Claim} \label{cl_bounds_vol_normalized_RF} There are constants $C'_m < \infty$ such that if $\delta^\# \leq \ov\delta^\# (\delta')$, $\varepsilon \leq \ov\varepsilon (\delta')$, then $T_0 - (\delta')^{-2} > T_1$ and for all $t \in (T_0 - (\delta')^{-2}, \infty)$ and $m_1, m_2 = 0, \ldots, [(\delta')^{-1}]$ we have \begin{multline*} | \partial_t^{m_2} \td\eta_4 | \leq C'_{m_2} \rho_{\td{g}}^{-2m_2}, \qquad |\nabla^{m_1} \partial_t^{m_2} ( \td{g}_4 -\td{g} )| \leq \delta' \rho_{\td{g}}^{-m_1-2m_2}, \\ \qquad |\nabla^{m_1} \partial_t^{m_2} Z| \leq \delta' \rho_{\td{g}}^{1-m_1-2m_2} . \end{multline*} \end{Claim} \begin{proof} The first statement is a consequence of Lemma~\ref{lem_rcan_control}, assuming that $\varepsilon \leq \ov\varepsilon (\delta')$. The other bounds follow via a standard limit argument using Lemma~\ref{lem_cutoff_almost_extinct}\ref{ass_cutoff_e}, Lemma~\ref{lem_round_cyl}\ref{ass_round_Cyl_f} and Lemma~\ref{lem_dtprime}\ref{ass_round_VF_k}. \end{proof} For any metric $g'$ on $\mathcal{C}$ and any $\Delta T \geq 0$ denote by $\RF_{\Delta T} (g')$ the result of evolving $g'$ by the volume normalized Ricci flow equation \begin{equation} \label{eq_vol_norm_RF} \partial_t g'_t = - 2 \Ric_{g'_t} + \frac{2}{3 V(\mathcal{C}, g'_t)} \int_\mathcal{C} R_{g'_t} d\mu_{g'_t} \cdot g'_t, \qquad g'_0 = g' \end{equation} for time $\Delta T$, if possible.
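Let us record, for the reader's convenience, why (\ref{eq_vol_norm_RF}) is called volume normalized: since $\partial_t d\mu_{g'_t} = \frac12 \tr_{g'_t} (\partial_t g'_t) \, d\mu_{g'_t}$ and $\tr_{g'_t} g'_t = 3$, any solution of (\ref{eq_vol_norm_RF}) satisfies \[ \frac{d}{dt} V(\mathcal{C}, g'_t) = \int_\mathcal{C} \Big( - R_{g'_t} + \frac{1}{V(\mathcal{C}, g'_t)} \int_\mathcal{C} R_{g'_t} \, d\mu_{g'_t} \Big) d\mu_{g'_t} = 0, \] so the volume $V(\mathcal{C}, g'_t)$ is constant along the flow.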
Note that if $g'$ is compatible with some spherical structure $\SS'$, then so is $\RF_{\Delta T} (g')$. We now define $(\td{g}_{5,t})_{t \in (T_1, \infty)}$ and $\td\eta_5 : (T_1, \infty) \to [0,1]$ as follows: \[ \td{g}_{5,t} := \RF_{\td\eta_4(t) A} \td{g}_{4, t-\td\eta_4(t) A}, \qquad \td\eta_5 (t) := \begin{cases} 0 & \text{if $t \leq T_0$} \\ \td\eta_4(t -A) &\text{if $t > T_0$} \end{cases}. \] \begin{Claim} \label{cl_translation_by_A} If $\delta' \leq \ov\delta' (A, \delta'')$ and $\varepsilon \leq \ov\varepsilon (A, \delta'')$, then $(\td{g}_{5,t})_{t \in (T_1, \infty)}$ and $\td\eta_5 : (T_1, \infty) \to [0,1]$ are well defined and smooth and for all $m_1, m_2 = 0, \ldots, [(\delta'')^{-1}]$ we have \[ | \partial_t^{m_2} \td\eta_5 | \leq C'_{m_2} \rho_{\td{g}}^{-2m_2}, \qquad |\nabla^{m_1} \partial_t^{m_2} ( \td{g}_5 -\td{g} )| \leq \delta'' \rho_{\td{g}}^{-m_1-2m_2} . \] Moreover, $\td\eta_5 = \td\eta_4$ and $\td{g}_5 = \td{g}_4$ on $(T_1, T_0)$ and if $\Phi_{t - \td\eta_4(t) A} (\mathcal{C}) \subset \domain (\SS^s_4)$, then $\td{g}_{5,t}$ is compatible with the pullback of $\SS_4$ via $\Phi_{t - \td\eta_4(t) A}$. \end{Claim} \begin{proof} The first bound is a direct consequence of Claim~\ref{cl_bounds_vol_normalized_RF}. The second bound follows via a standard limit argument (the limit being a volume normalized Ricci flow). The last two statements follow by construction. \end{proof} We can now choose $g^{\prime,s}_5$ and $\eta_5$ on $U_{\mathcal{C}}$ such that for all $t \in (T_1, \infty)$ \[ \td{g}_{5,t} = \Phi^*_t \big( \wh\rho_g^{-2} g^{\prime,s}_5 \big), \qquad \td\eta_5 (t) = \eta_5 (\Phi_t (\mathcal{C})). \] Note that $g^{\prime,s}_5 = g^{\prime,s}_4$ and $\eta_5 = \eta_4$ on $ \{ \eta_4 = 0 \} \cap U_{\mathcal{C}}$.
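For clarity, we note that this last identity can be checked directly from the definitions: if $\td\eta_4 (t) = 0$, then, since $\td\eta_4$ is non-decreasing, \[ \td{g}_{5,t} = \RF_{0} \, \td{g}_{4, t} = \td{g}_{4,t}, \qquad \td\eta_5 (t) \leq \td\eta_4 (t - A) \leq \td\eta_4 (t) = 0 \] in both cases of the definition of $\td\eta_5$.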
For any $t \in (T_1, \infty)$ for which $\Phi_{t - \td\eta_4(t) A} (\mathcal{C}) \subset \domain (\SS^s_4)$ consider the push-forward of $\SS^s_4 |_{\Phi_{t - \td\eta_4(t) A} (\mathcal{C})}$ onto $\Phi_t (\mathcal{C})$ via the diffeomorphism $\Phi_t \circ \Phi^{-1}_{t - \td\eta_4(t) A}$. The union of these spherical structures, for all $t \in (T_1,\infty)$ defines a new spherical structure $\SS^s_5$ on $U_\mathcal{C}$, which is compatible with $g^{\prime,s}_5$ on $U_\mathcal{C}$ and equal to $\SS^s_4$ on $U_\mathcal{C} \cap \{ \eta_4 = 0 \}$. By Lemma~\ref{lem_cutoff_almost_extinct}\ref{ass_cutoff_c} we have $\{ \rho > 10^{-1} r_{\can, \varepsilon} \} \cap U_\mathcal{C} \subset \{ \eta_4 = 0 \} \cap U_\mathcal{C}$. It follows that Assertions~\ref{ass_round_Cyl_a}, \ref{ass_round_Cyl_b}, \ref{ass_round_Cyl_e}, \ref{ass_round_Cyl_g} of Lemma~\ref{lem_round_cyl} and all Assertions of Lemma~\ref{lem_dtprime} hold on $U_\mathcal{C}$ for $g^{\prime,s}_5$, $\eta_5$ and $\SS^s_5$. Assertions~\ref{ass_extend_alm_round_aa} and \ref{ass_extend_alm_round_a} of this lemma hold by construction. By Claim~\ref{cl_translation_by_A} and another limit argument we can choose constants $C^*_m = C^*_m ((C'_m)) < \infty$ such that Assertion~\ref{ass_extend_alm_round_c} of this lemma and Assertion~\ref{ass_round_Cyl_f} of Lemma~\ref{lem_round_cyl} hold on $U_\mathcal{C}$ if $\delta'' \leq \ov\delta'' (\delta)$, $\varepsilon \leq \ov\varepsilon (\delta)$. Note here that the constants $C^*_m$ can be chosen independently of $A$. The following claim implies that the remaining assertions of this lemma hold on $U_\mathcal{C}$. 
\begin{Claim} If $A \geq \underline{A} (D,\delta)$, $c \leq \ov{c}(A)$ and $\varepsilon \leq \ov\varepsilon(D,\delta)$, then Assertion~\ref{ass_extend_alm_round_b} of this lemma holds on $U_\mathcal{C}$ and Assertion~\ref{ass_round_Cyl_c} of Lemma~\ref{lem_round_cyl} holds on $U_\mathcal{C}$ for $g^{\prime,s}_5$, $\eta_5$ and $\SS^s_5$ if we replace $\alpha$ by $c \alpha$. \end{Claim} \begin{proof} Consider first Assertion~\ref{ass_extend_alm_round_b} of this lemma. Due to the fact that $\{ \eta_5 > 0 \} \cap U_\mathcal{C} \subset \{ \eta_4 > 0 \} \cap U_\mathcal{C}$ and Assertion~\ref{ass_cutoff_c} of Lemma~\ref{lem_cutoff_almost_extinct}, it suffices to show that if $\td\eta_5(t) > 0$, then the universal cover of $(\mathcal{C},\td{g}_t)$ is $\delta$-close to the round sphere modulo rescaling if $A \geq \underline{A} (D,\delta)$ and $\varepsilon \leq \ov\varepsilon (D,\delta)$. Fix such a $t \in (T_1, \infty)$ with $\td\eta_5(t) > 0$ and assume that the universal cover of $(\mathcal{C},\td{g}_t)$ is not $\delta$-close to the round sphere modulo rescaling. Then $t-A > T_0$ and $\td\eta_4(t') > 0$ for all $t' \in [t - A, t]$. So Assertions~\ref{ass_cutoff_c1}, \ref{ass_cutoff_c3}, \ref{ass_cutoff_c4} of Lemma~\ref{lem_cutoff_almost_extinct} hold on $\Phi_{t'} (\mathcal{C})$ for all $t' \in [t - A, t]$. We obtain that $\sec_{\td{g}_{t'}} > 0$ on $\mathcal{C} \times [t-A,t]$ and $\diam_{\td{g}_{t'}} \mathcal{C} < D \rho$ on $\mathcal{C}$ for all $t' \in [t-A,t]$. In addition, the $\varepsilon$-canonical neighborhood assumption holds on $\mathcal{C} \times [t-A,t]$.
So if for no choice of $A$ and $\varepsilon$ the universal cover of $(\mathcal{C},\td{g}_t)$ were $\delta$-close to the round sphere modulo rescaling, then we could apply a limit argument using Theorem~\ref{Thm_kappa_compactness_theory} and obtain an ancient solution $(M^\infty, (\td{g}^\infty_{t'})_{t' \leq 0})$ to the volume normalized Ricci flow equation (\ref{eq_vol_norm_RF}) that satisfies \begin{equation} \label{eq_sec_diam_tdg_infty} \sec_{\td{g}^\infty} \geq 0, \qquad \diam_{\td{g}^\infty} \mathcal{C} \leq D \rho \end{equation} everywhere. Moreover, every time-slice of $(\td{g}^\infty_{t'})_{t' \leq 0}$ is isometric to a time-slice of a $\kappa$-solution and $(\td{g}^\infty_{t'})_{t' \leq 0}$ cannot be homothetic to a shrinking round sphere. Due to the positivity of the scalar curvature, it follows that reparameterizing $(\td{g}^\infty_{t'})_{t' \leq 0}$ yields an ancient Ricci flow, and therefore a $\kappa$-solution. The second bound in (\ref{eq_sec_diam_tdg_infty}) implies via Theorem~\ref{Thm_kappa_sol_classification}\ref{ass_kappa_sol_classification_d} that this $\kappa$-solution is homothetic to the round sphere, in contradiction to our assumptions. For Assertion~\ref{ass_round_Cyl_c} of Lemma~\ref{lem_round_cyl} note that by the canonical neighborhood assumption we have \[ | \td\partial^s_\t \rho_{g} | = \wh\rho_g^2 \cdot |\partial^{\prime,s}_{\t, 4} \rho_g | \leq C(D) \rho_g. \] It follows that for any $(x,t) \in \mathcal{C} \times (T_1, \infty)$ \[ \rho (\Phi_{t - \td\eta_4(t)A} (x)) \leq e^{C(D) A} \rho(\Phi_t (x)). \] Set $c := e^{-C(D) A}$ and assume that $\rho_{g^s} (\Phi_t(x)) < c \alpha r_{\can, \varepsilon} (\Phi_t (x))$ and $\td\eta_5 (t) = \eta_5 (\Phi_t (x)) < 1$. Then \[ \rho (\Phi_{t - \td\eta_4(t)A} (x)) < \alpha r_{\can, \varepsilon} (\Phi_t (x)) \leq \alpha r_{\can, \varepsilon} (\Phi_{t - \td\eta_4(t)A} (x)).
\] If $\td\eta_4 (t) < 1$, then $\td\eta_4 ( t - \td\eta_4(t)A ) \leq \td\eta_4(t) < 1$ and if $\td\eta_4(t) = 1$, then $\td\eta_4 ( t - \td\eta_4(t)A ) = \td\eta_4 ( t - A ) = \td\eta_5(t) < 1$. Therefore \[ \eta_4 (\Phi_{t - \td\eta_4(t)A} (x)) < 1. \] By Lemma~\ref{lem_round_cyl}\ref{ass_round_Cyl_c} it follows that $\Phi_{t - \td\eta_4(t)A} (x) \in \domain (\SS^s_4)$ and therefore by construction $\Phi_t (x) \in \domain (\SS^s_5)$. \end{proof} By repeating the construction above for all product domains $U_\mathcal{C}$, we obtain that all assertions of this lemma hold on the union of all $U_\mathcal{C}$. On the complement of these product domains we have $g'_5 = g'_4$, $\SS_5 = \SS_4$ and $\eta_5 = \eta_4 = 0$. So Assertions~\ref{ass_extend_alm_round_aa}--\ref{ass_extend_alm_round_c} hold trivially on this complement and Assertion~\ref{ass_extend_alm_round_d} holds due to Lemmas~\ref{lem_round_cyl} and \ref{lem_dtprime} assuming $\delta^\# \leq \delta$. \end{proof} \subsection{Modification in almost round components and proof of the main theorem} \label{subsec_proof_rounding} Lastly, we will construct $(g^{\prime,s})_{s \in X}$ and $(\partial^{\prime,s}_\t)_{s \in X}$ by modifying $(g^{\prime,s}_5)_{s \in X}$ and $(\partial^{\prime,s}_{\t,5})_{s \in X}$ on $\{ \eta_5 > 0 \}$. We will also construct a family of spherical structures $(\SS^s)_{s \in X}$ by restricting and extending the family $(\SS^s_5)_{s \in X}$. These objects will form the family of $\mathcal{R}$-structures whose existence is asserted in Theorem~\ref{Thm_rounding}. \begin{proof}[Proof of Theorem~\ref{Thm_rounding}.] Let $ \delta^\#, \delta'> 0$, $A < \infty$ be constants whose values will be determined depending on $\delta$ in the course of the proof.
Apply Lemma~\ref{lem_exten_almost_round} with $\delta$ replaced by $\delta^\#$ and consider the families $(g^{\prime,s}_5)_{s \in X}, (\partial^{\prime,s}_{\t,5})_{s \in X}$, $(\SS^s_5)_{s \in X}$, the cutoff function $\eta_5$ and constants $\alpha (\delta^\#)$ and $C^*_m$. Assume that $\delta^\#$ is chosen small enough such that $\frac12 \rho_{g} < \rho_{g'_5} < 2 \rho_{g}$. Let us first define the family of metrics $g^{\prime,s}$ and vector fields $\partial^{\prime,s}_\t$. Fix $s \in X$ and consider a component $\mathcal{C} \subset \mathcal{M}^s_t$ on which $\eta_5 > 0$. By Lemma~\ref{lem_exten_almost_round}\ref{ass_extend_alm_round_b} the universal cover of $(\mathcal{C}, g^{\prime,s}_{5,t})$ is $\delta^\#$-close to the round sphere modulo rescaling. So if $\delta^\# \leq \ov\delta^\#$, then we can define a smooth family of metrics $g^{\prime\prime,s}$ on $\{ \eta_5 > 0 \} \cap \mathcal{M}^s$ such that \[ g^{\prime\prime,s}_t |_\mathcal{C} = \RD^3 (g^{\prime,s}_{5,t} |_\mathcal{C}) \] for any such component $\mathcal{C}$; here $\RD^3$ is the rounding operator from Subsection~\ref{subsec_RD}. Similarly as in the proof of Lemma~\ref{lem_dtprime}, we can consider the space of vector fields $Z'$ on $\mathcal{C}$ that can be extended to spatial vector fields $Z''$ near $\mathcal{C}$ in $\mathcal{M}^s$ such that the flow of $\partial^{\prime,s}_{\t, 5} + Z''$ consists of homotheties with respect to $g^{\prime\prime,s}$. The difference of any two such vector fields is a Killing field on $(\mathcal{C}, g^{\prime\prime,s}_t |_{\mathcal{C}})$, therefore this space is affine linear and finite dimensional. Let $Z_\mathcal{C}$ be the vector field of this space whose $L^2$-norm with respect to $g^{\prime\prime,s}_t$ is minimal. Define the spatial vector field $Z^s$ on $\{ \eta_5 > 0 \} \cap \mathcal{M}^s$ such that $Z^s |_\mathcal{C} = Z_\mathcal{C}$ for any component $\mathcal{C} \subset \mathcal{M}^s_t \cap \{ \eta_5 > 0\}$, and set $\partial^{\prime\prime,s}_{\t} := \partial^{\prime,s}_{\t, 5} + Z^s$ on $\{ \eta_5 > 0 \} \cap \mathcal{M}^s$.
Set $\partial^{\prime\prime,s}_{\t} := \partial^{\prime,s}_{\t,5} + Z^s$ on $\{ \eta_5 > 0 \} \cap \mathcal{M}^s$. Note that since our construction is invariant under isometries, $g^{\prime\prime,s}$ is still compatible with $\SS^s_5$ and $\partial^{\prime\prime,s}_{\t}$ still preserves $\SS^s_5$. Lemma~\ref{lem_round_cyl}\ref{ass_round_Cyl_f}, Lemma~\ref{lem_dtprime}\ref{ass_round_VF_k} and a standard limit argument imply that if $\delta^\# \leq \ov\delta^\# (\delta')$, $\varepsilon \leq \ov\varepsilon (\delta')$, then \begin{equation}\label{eq_gpp_Z_proof_thm} |\nabla^{m_1} \partial_\t^{m_2} ( g^{\prime\prime, s} - g_5^{\prime,s} )| \leq \delta' \rho^{-m_1 - 2m_2}, \qquad |\nabla^{m_1} \partial_\t^{m_2} Z^{s} | \leq \delta' \rho^{1-m_1 - 2m_2} \end{equation} for $m_1, m_2 = 0, \ldots, [(\delta')^{-1}]$. Define $g^{\prime,s}:= g^{\prime,s}_5$ and $\partial^{\prime,s}_\t := \partial^{\prime,s}_{\t, 5}$ on $\{ \eta_5 = 0 \}$ and for all $x \in \{ \eta_5 > 0 \} \cap \mathcal{M}^s_t$ set \begin{align*} \big( g^{\prime,s}_t \big)_x &:= \big( g^{\prime,s}_{5,t} \big)_x + \nu (2\eta_5(x)) \cdot \big( \big( g^{\prime\prime,s}_t \big)_x - \big( g^{\prime,s}_{5,t} \big)_x \big), \\ \big( \partial^{\prime,s}_{\t} \big)_x &:= \big( \partial^{\prime,s}_{\t,5} \big)_x + \nu (2\eta_5(x)) \cdot \big( \big( \partial^{\prime\prime,s}_\t \big)_x - \big( \partial^{\prime,s}_{\t,5} \big)_x \big). \end{align*} Then $g^{\prime,s} = g^{\prime\prime,s}$ and $\partial^{\prime,s}_\t = \partial^{\prime\prime,s}_{\t}$ on $\{ \eta_5 > \frac12 \} \cap \mathcal{M}^s$. Assertion~\ref{ass_thm_rounding_c} of this theorem follows using (\ref{eq_gpp_Z_proof_thm}), Lemma~\ref{lem_exten_almost_round}\ref{ass_extend_alm_round_c}, Lemma~\ref{lem_round_cyl}\ref{ass_round_Cyl_f}, Lemma~\ref{lem_dtprime}\ref{ass_round_VF_k}, assuming $\delta' \leq \ov\delta' (\delta, (C^*_m))$, $\delta^\# \leq \delta$ and $\varepsilon \leq \ov\varepsilon (\delta, (C^*_m))$. By the discussion above, $g^{\prime,s}$ is still compatible with $\SS^s_5$ and $\partial^{\prime,s}_{\t}$ still preserves $\SS^s_5$.
Next let us construct $\SS^s$, $U^s_{S2}$ and $U^s_{S3}$. By Lemma~\ref{lem_round_cyl}\ref{ass_round_Cyl_c} and Lemma~\ref{lem_exten_almost_round}\ref{ass_extend_alm_round_b}, we can find a universal constant $c > 0$ such that if $\delta^\# \leq \ov\delta^\#$, then \[ \domain (\SS^s_5) \supset \{ \wh\rho_g < c \alpha r_{\can, \varepsilon} \} \cap \{0 < \eta_5 < 1 \} \cap \mathcal{M}^s. \] Set \[ U^s_{S3} := \{ \wh\rho_g < c \alpha r_{\can, \varepsilon} \} \cap \{ \tfrac12 < \eta_5 \} \cap \mathcal{M}^s. \] Then $g^{\prime,s}$ restricted to every time-slice of $U^s_{S3}$ has constant curvature and the flow of $\partial^{\prime,s}_\t$ restricted to $U^s_{S3}$ consists of homotheties with respect to $g^{\prime,s}$. Before constructing $\SS^s$ and $U^s_{S2}$, we need to improve the family of spherical structures $\SS^s_5$. By restricting $\SS^s_5$, we obtain a transversely continuous family of spherical structures $(\SS^s_6)_{s \in X}$ such that \begin{multline*} \domain (\SS^s_5) \supset \domain (\SS^s_6) \\ = \big( \domain (\SS^s_5) \cap \{ \eta_5 < \tfrac14 \} \big) \cup \big( \{ \wh\rho_g < c \alpha r_{\can, \varepsilon} \} \cap \{ 0 < \eta_5 < 1 \} \cap \mathcal{M}^s \big). \end{multline*} So there is a universal constant $c' > 0$ such that if $\delta^\# \leq \ov\delta^\#$, then \begin{align*} \domain (\SS^s_6) &\supset \{ \rho_g < c' \alpha r_{\can, \varepsilon} \} \cap \{ \eta_5 < 1 \} \cap \mathcal{M}^s, \\ \domain (\SS^s_6) \cap \{ \tfrac12 < \eta_5 \} &= U^s_{S3} \cap \{ \eta_5 < 1 \} . \end{align*} Next we construct a family of spherical structures $\SS^s$ by extending the domains of the spherical structures $\SS^s_6$. Fix $s \in X$. Assume that $\delta^\# \leq \ov\delta^\#$ and $\varepsilon \leq \ov\varepsilon$ are chosen such that, by Lemma~\ref{lem_exten_almost_round}\ref{ass_extend_alm_round_b}, \[ \partial^{\prime,s}_{\t} \bigg( \frac{\wh\rho_{g}}{r_{\can, \varepsilon}} \bigg) < 0 \qquad \text{on} \quad \{ \eta_5 > 0 \}.
\] Therefore, for any component $\mathcal{C} \subset \mathcal{M}^s_t \cap U^s_{S3}$ we have $\mathcal{C} (t') \subset U^s_{S3}$ for all $t' \in [t, t_\mathcal{C})$. So every component $W \subset U^s_{S3}$ is either contained in $\{ \eta_5 = 1 \}$, and therefore disjoint from $\domain (\SS^s_6)$, or it intersects $\domain (\SS^s_6)$ in a connected product domain. Let $U^s_{S2}$ be the union of $\domain (\SS^s_6)$ with all components of the second type. Then $U^s_{S3} \setminus U^s_{S2}$ is open. Using the flow of $\partial^{\prime,s}_\t$ we can extend $\SS^s_6$ to a spherical structure $\SS^s$ on $U^s_{S2}$ that is compatible with $g^{\prime,s}$ and preserved by $\partial^{\prime,s}_\t$. Recall here that the flow of $\partial^{\prime,s}_\t$ restricted to every component $W \subset U^s_{S3}$ consists of homotheties with respect to $g^{\prime,s}$. By construction, $\cup_{s \in X} U^s_{S2}$ is open and the family of spherical structures $(\SS^s)_{s \in X}$ is transversely continuous. We have shown so far that $(\mathcal{R}^s := ( g^{\prime, s}, \linebreak[1] \partial^{\prime, s}_\t, \linebreak[1] U^s_{S2}, \linebreak[1] U^s_{S3}, \linebreak[1] \mathcal{S}^s))_{s \in X}$ is a transversely continuous family of $\mathcal{R}$-structures. Assertions~\ref{ass_thm_rounding_a}, \ref{ass_thm_rounding_b} and \ref{ass_thm_rounding_e} of this theorem hold by setting \[ r_{\rot, \delta} (r, t) := \tfrac12 c' \alpha r_{\can, \varepsilon (\delta)} (r,t), \qquad C := 4 (c' \alpha)^{-1}. \] Assertion~\ref{ass_thm_rounding_d} is a consequence of Lemma~\ref{lem_round_cyl}\ref{ass_round_Cyl_g} and Lemma~\ref{lem_dtprime}\ref{ass_round_VF_l}. \end{proof} \section{Preparatory results} In this section we collect several results that will be useful in Sections~\ref{sec_partial_homotopy} and \ref{sec_deforming_families_metrics}. \subsection{Spherical structures} We will need the following two lemmas on spherical structures.
\begin{lemma} \label{lem_spherical_struct_classification} Consider a spherical structure $\SS$ on a connected 3-manifold with boundary $M$ such that $\domain (\SS) = M$. Then one of the following cases holds: \begin{enumerate}[label=(\alph*)] \item $\SS$ only consists of regular fibers and $M$ is diffeomorphic to one of the following models: \[ S^2 \times (0,1), \; S^2 \times [0,1),\; S^2 \times [0,1], \; S^2 \times S^1 \] \item $\SS$ has exactly one singular fiber and this fiber is a point and $M$ is diffeomorphic to one of the following models: \[ B^3, \; D^3 \] \item $\SS$ has exactly one singular fiber and this fiber is $\approx \mathbb{R} P^2$ and $M$ is diffeomorphic to one of the following models: \[ \big( S^2 \times (-1,1) \big) / \mathbb{Z}_2, \; \big( S^2 \times [-1,1] \big) / \mathbb{Z}_2 \] \item $\SS$ has exactly two singular fibers, both of which are points, and $M \approx S^3$. \item $\SS$ has exactly two singular fibers, both of which are $\approx \mathbb{R} P^2$, and $M \approx (S^2 \times S^1) / \mathbb{Z}_2$. Here $\mathbb{Z}_2$ acts as the antipodal map on $S^2$ and as a reflection with two fixed points on $S^1$. \item $\SS$ has exactly two singular fibers, one of which is a point and the other of which is $\approx \mathbb{R} P^2$, and $M \approx \mathbb{R} P^3$. \end{enumerate} \end{lemma} \begin{proof} Let $X$ be the quotient of $M$ by the spherical fibers. By Lemma~\ref{lem_local_spherical_struct}, $X$ is homeomorphic to a 1-manifold with boundary and every boundary point corresponds to a model of the form $S^2 \times [0,1)$, $D^3$ or $(S^2 \times [-1,1]) / \mathbb{Z}_2$. \end{proof} \begin{lemma} \label{Lem_cont_fam_sph_struct} Consider a 3-manifold with boundary $M$ and a compact, connected, 3-di\-men\-sion\-al submanifold with boundary $Y \subset M$.
Let $(\SS^s)_{s \in D^n}$, $n \geq 0$, be a transversely continuous family of spherical structures defined on open subsets $Y \subset U^s \subset M$ such that $\partial Y$ is a union of regular fibers of $\SS^s$ for all $s \in D^n$. Assume that there is a transversely continuous family of Riemannian metrics $(g^s)_{s \in D^n}$ on $M$ that is compatible with $(\SS^s)_{s \in D^n}$. Then there is an open subset $Y \subset V \subset M$ and a transversely continuous family of embeddings $(\omega_s : V \to M)_{s \in D^n}$ such that $\omega_s (Y) = Y$ for all $s \in D^n$ and such that the pullbacks of $\SS^s$ via $\omega_s$ are constant in $s$. \end{lemma} We remark that the existence of the family of metrics $(g^s)_{s \in D^n}$ could be dropped from the assumptions of the lemma, as it follows easily from the other assumptions. \begin{proof} By Lemma~\ref{lem_spherical_struct_classification} the number and types of singular fibers of $\SS^s$ restricted to $Y$ are constant in $s$. \medskip \textit{Case 1: $\partial Y \neq \emptyset$. \quad} Pick a component $\Sigma \subset \partial Y$. Using the exponential map on $(\Sigma, g^s |_{\Sigma})$ we can find a continuous family of diffeomorphisms $(\varphi^s : S^2 \to \Sigma)_{s \in D^n}$ that are homotheties as maps from $(S^2, g_{S^2})$ to $(\Sigma, g^s |_{\Sigma})$. For every $s \in D^n$ let $\nu^s$ be the unit normal vector field to $\Sigma$ pointing towards the interior of $Y$ and consider the normal exponential map: \[ \psi^s : (z,r) \longmapsto \exp_{\varphi^s(z)}^{g^s} ( r \, \nu^s (\varphi^s(z))). \] Choose $r_{\max}^s > 0$ maximal such that $\psi^s$ is injective on $S^2 \times [0, r^s_{\max})$ and such that $\psi^s ( S^2 \times [0, r^s_{\max}) ) \subset Y$. After replacing $(g^s)_{s \in D^n}$ with $( (r^s_{\max})^{-2} g^s )_{s \in D^n}$, we may assume that $r^s_{\max} \equiv 1$.
If $\Sigma \not\subset \partial M$, then we can find a uniform $\varepsilon_1 > 0$ such that $\psi^s$ is defined and injective on $S^2 \times [-\varepsilon_1, 1)$. If $\Sigma \subset \partial M$, then set $\varepsilon_1 := 0$. If $Y$ has another boundary component $\Sigma_2$, then we can similarly find a constant $\varepsilon_2 \geq 0$ such that $\psi^s$ is defined and injective on $S^2 \times [-\varepsilon_1, 1+\varepsilon_2]$ and $\varepsilon_2 > 0$ if $\Sigma_2 \not\subset \partial M$. Let $Y_0^s \subset Y$ be the union of regular spherical fibers of $\SS^s$ and let $s_0 \in D^n$ be an arbitrary point. Using the maps $(\psi^s)_{s \in D^n}$ we can construct an open subset $Y^{s_0}_0 \subset \td{V} \subset M$ and a continuous family of embeddings $(\td\omega_s : \td{V} \to M )_{s \in D^n}$ such that $\td\omega_s (Y^{s_0}_0) = Y^{s}_0$ and such that the pullbacks of $\SS^s$ restricted to $Y^s_0$ are constant in $s$. Due to the construction of the maps $(\td\omega_s)_{s \in D^n}$ via the exponential map and since $Y_0^s \subset Y$ is dense, we can extend these maps to maps $(\omega_s)_{s \in D^n}$ with the desired properties. \medskip \textit{Case 2: $\partial Y = \emptyset$. \quad} \medskip \textit{Case 2a: $\SS^s$ contains a singular fiber $\approx \mathbb{R} P^2$. \quad} Similarly as in Case~1, we can use the exponential map to construct a continuous family of homothetic embeddings $( \varphi^s : \mathbb{R} P^2 \to Y )_{s \in D^n}$ such that for every $s \in D^n$ the image $\varphi^s ( \mathbb{R} P^2 )$ is a singular fiber of $\SS^s$. We can now construct $(\omega_s)$ similarly as in Case~1 using the normal exponential map. \medskip \textit{Case 2b: $\SS^s$ contains a singular fiber that is a point. \quad} We can find a continuous family of points $(p^s)_{s \in D^n}$ such that $\{ p^s \}$ is a singular fiber of $\SS^s$ for all $s \in D^n$. Choose a continuous family of isometric maps $(\varphi^s : \mathbb{R}^3 \to T_{p^s} Y)_{s \in D^n}$.
After rescaling the metric $g^s$, we may assume that $\operatorname{injrad} (M, g^s, p^s) = 1$ for all $s \in D^n$. The remainder of the proof is similar to that of Case~1. \medskip \textit{Case 2c: $\SS^s$ only consists of regular fibers. \quad} By Lemma~\ref{lem_spherical_struct_classification} $Y \approx S^2 \times S^1$. Pick a point $p \in Y$. As in the proof of Case~1 we can find a continuous family of diffeomorphisms $(\varphi^s : S^2 \to \Sigma^s)_{s \in D^n}$ that are homotheties between $(S^2, g_{S^2})$ and fibers $p \in \Sigma^s \subset Y$ of $\SS^s$. Using the normal exponential map and after rescaling the metric $g^s$, as in Case~1, we may further construct a continuous family of covering maps $(\td\psi^s : S^2 \times \mathbb{R} \to Y)_{s \in D^n}$ such that the pullback of $\SS^s$ via each $\td\psi^s$ agrees with the standard spherical structure on $S^2 \times \mathbb{R}$ and such that $S^2 \times [0,1)$ is a fundamental domain. Then $\td\psi^s (z,r+1) = \td\psi^s ( A^s(r) z, r)$ for some continuous family of smooth maps $(A^s : \mathbb{R} \to O(3))_{s \in D^n}$. Since $D^n$ is contractible, we can find a continuous family of smooth maps $(\td{B}^s : [0,1] \to O(3))_{s \in D^n}$ such that $A^s (0) \td{B}^s(1) = \td{B}^s(0)$ for all $s \in D^n$. These maps can be extended to a continuous family of smooth maps $(B^s : \mathbb{R} \to O(3))_{s \in D^n}$ such that \[ A^s(r) B^s (r+1) = B^s (r) \] for all $s \in D^n$ and $r \in \mathbb{R}$. Set $\ov\psi^s (z,r) := \td\psi^s (B^s (r) z , r)$. Since \begin{multline*} \ov\psi^s (z,r+1) = \td\psi^s (B^s (r+1) z , r+1) = \td\psi^s (A^s(r) B^s (r+1) z , r) \\ = \td\psi^s (B^s (r) z , r) = \ov\psi^s (z,r), \end{multline*} the family $(\ov\psi^s)_{s \in D^n}$ descends to a continuous family of diffeomorphisms $(\psi^s : S^2 \times S^1 \to Y)_{s \in D^n}$ such that $(\omega^s := \psi^s \circ (\psi^{s_0})^{-1})_{s \in D^n}$ has the desired properties for any fixed $s_0 \in D^n$.
\end{proof} \subsection{PSC-conformal metrics} In this subsection we introduce a conformally invariant condition on the class of compact Riemannian $3$-manifolds with round boundary components. This property behaves well with respect to the geometric operations that arise in our main construction; it may be viewed as a relative version of the property of being conformally equivalent to a PSC metric. \begin{definition}[PSC-conformality] \label{Def_PSC_conformal} A compact Riemannian 3-manifold with boundary $(M,g)$ is called {\bf PSC-conformal} if there is a smooth positive function $w \in C^\infty (M)$ such that: \begin{enumerate} \item $w^4 g$ has positive scalar curvature. \item $w$ restricted to each boundary component of $M$ is constant. \item Every boundary component of $(M, w^4 g)$ is totally geodesic and isometric to the standard round 2-sphere. \end{enumerate} \end{definition} Note that in the case $\partial M = \emptyset$, the manifold $(M,g)$ is PSC-conformal if and only if its Yamabe constant is positive. By expressing the conditions above in terms of $w$, we obtain: \begin{lemma} \label{Lem_PSC_conformal_analytic} A compact Riemannian 3-manifold with boundary $(M,g)$ is PSC-conformal if and only if all its boundary components are homothetic to the round sphere and there is a function $w \in C^\infty (M)$ such that: \begin{enumerate}[label=(\arabic*)] \item \label{prop_PSC_conformal_analytic_1} $w^4 g$ has positive scalar curvature, or equivalently, $8 \triangle w - R w < 0$ on $M$. \item \label{prop_PSC_conformal_analytic_2} $w^4 |_{\partial M}$ is equal to the sectional curvature of the induced metric on $\partial M$. \item \label{prop_PSC_conformal_analytic_3} $A_{\partial M} = ( \nu_{\partial M} w^4 ) g$, where $A_{\partial M}$ denotes the second fundamental form and $\nu_{\partial M}$ the inward pointing unit vector field to $\partial M$ (note that this implies that $\partial M$ is umbilic).
\end{enumerate} \end{lemma} The next lemma shows that the PSC-conformal property is open --- as is the standard PSC-condition --- if we restrict to variations with a specific behavior on the boundary. \begin{lemma} \label{Lem_PSC_conformal_open} Let $M$ be a compact 3-manifold with boundary and $(g_s)_{s \in X}$ a continuous family of Riemannian metrics. Assume that for all $s \in X$ all boundary components of $(M,g_s)$ are umbilic and homothetic to the round sphere. If $(M,g_{s_0})$ is PSC-conformal for some $s_0 \in X$, then so is $(M,g_s)$ for $s$ near $s_0$. Moreover, we may choose conformal factors $w_s$ satisfying Lemma~\ref{Lem_PSC_conformal_analytic} that vary continuously with $s$. \end{lemma} \begin{proof} This is a consequence of Lemma~\ref{Lem_PSC_conformal_analytic}. Choose $w_{s_0}$ such that Properties~\ref{prop_PSC_conformal_analytic_1}--\ref{prop_PSC_conformal_analytic_3} of Lemma~\ref{Lem_PSC_conformal_analytic} hold for $(M, g_{s_0})$. We can extend $w_{s_0}$ to a continuous family of functions $w_s \in C^\infty(M)$ such that Properties~\ref{prop_PSC_conformal_analytic_1} and \ref{prop_PSC_conformal_analytic_3} of Lemma~\ref{Lem_PSC_conformal_analytic} hold for all $s \in X$. Then Property~\ref{prop_PSC_conformal_analytic_2} holds for $s$ near $s_0$. \end{proof} Next we show that the PSC-conformal property is preserved if we enlarge a given Riemannian manifold by domains that admit a spherical structure. This fact will be important in the proof of Proposition~\ref{prop_extending}. \begin{lemma} \label{Lem_PSC_conformal_enlarge} Consider a compact Riemannian 3-manifold with boundary $(M,g)$, let $Z \subset M$ be a compact 3-dimensional submanifold with boundary and let $\SS$ be a spherical structure on $M$. Suppose that: \begin{enumerate}[label=(\roman*)] \item $g$ is compatible with $\SS$. \item $(Z,g)$ is PSC-conformal. \item $\ov{M \setminus Z}$ is a union of spherical fibers of $\SS$. \end{enumerate} Then $(M,g)$ is also PSC-conformal.
\end{lemma} Note that in the case $Z = \emptyset$ Lemma~\ref{Lem_PSC_conformal_enlarge} implies that $(M,g)$ is PSC-conformal if $M$ admits a spherical structure that is compatible with $g$. \begin{proof} We first argue that we may assume without loss of generality that $\ov{M \setminus Z}$ is connected, is a union of regular fibers and is disjoint from $\partial M$. To see this, let $\{ \Sigma_1, \ldots, \Sigma_m \}$ consist of all singular fibers in $M \setminus Z$, all components of $\partial M$ that are contained in $M \setminus Z$, and at least one regular fiber in $M \setminus Z$. Choose pairwise disjoint closed neighborhoods $V_j \subset M \setminus Z$ of each $\Sigma_j$ that are each unions of spherical fibers and diffeomorphic to $D^3$ or $(S^2 \times [-1,1])/\mathbb{Z}_2$ or $S^2 \times [0,1]$. Note that $(V_j, g)$ is PSC-conformal for all $j = 1, \ldots, m$, because in the first case $g |_{V_j}$ is conformally equivalent to a metric with positive scalar curvature that is cylindrical near the boundary and in the second and third cases $g |_{V_j}$ is conformally equivalent to (a quotient of) the round cylinder. So all assumptions of the lemma are still satisfied if we replace $Z$ by $Z' := Z \cup_{j=1}^m V_j$. Therefore, we may assume that $\ov{M \setminus Z}$ is disjoint from all singular fibers and $\partial M$ and that $Z \neq \emptyset$. Furthermore, by induction it suffices to consider the case in which $M \setminus Z$ is connected. So by Lemma~\ref{lem_spherical_struct_classification} we have $\ov{M \setminus Z} \approx S^2 \times [0,1]$. Choose an embedding $\phi : S^2 \times (-L-\varepsilon, L+\varepsilon) \to M$, $L, \varepsilon > 0$ such that $\ov{M \setminus Z} = \phi (S^2 \times [0,L])$ and such that $\phi^* g = f^4 (g_{S^2} + dr^2)$ for some smooth function $f : (-L-\varepsilon,L+\varepsilon) \to \mathbb{R}_+$. Let $w \in C^\infty (Z)$ be such that all properties of Lemma~\ref{Lem_PSC_conformal_analytic} hold.
Then $\phi^* (w^4 g) = (\td{w} f)^4 (g_{S^2} + dr^2)$ for some smooth function $\td{w} : (-L-\varepsilon , -L] \cup [L,L+\varepsilon) \to \mathbb{R}_+$ and we have \[ (\td{w} f)(\pm L) = 1, \qquad (\td{w} f)' (\pm L) = 0. \] By smoothing the function \[ r \mapsto \frac1{f(r)} \begin{cases} ( \td{w}f)( r) &\text{if $r \in (-L-\varepsilon , -L] \cup [L,L+\varepsilon)$} \\ 1 & \text{if $r \in (-L, L)$} \end{cases}, \] we can find a smooth function $\td{w}^*: (-L-\varepsilon,L+\varepsilon) \to \mathbb{R}_+$ such that $\td{w}^* = \td{w}$ near the ends of the domain and such that $(\td{w}^* f)^4 (g_{S^2} + dr^2)$ has positive scalar curvature. We can then choose $w^* \in C^\infty (M)$ such that $w^* = w$ outside the image of $\phi$ and $w^* \circ \phi = \td{w}^*$. This function satisfies all properties of Lemma~\ref{Lem_PSC_conformal_analytic}, showing that $(M,g)$ is PSC-conformal. \end{proof} Next, we discuss a criterion that will help us identify PSC-conformal manifolds in Section~\ref{sec_deforming_families_metrics}. \begin{lemma} \label{lem_CNA_SS_implies_PSC_conformal} There are constants $\varepsilon, c > 0$ such that the following holds. Suppose that $(M,g)$ is a (not necessarily complete) Riemannian 3-manifold, $\SS$ is a spherical structure on $M$ that is compatible with $g$ and $Z \subset M$ is a compact 3-dimensional submanifold with the property that for some $r > 0$: \begin{enumerate}[label=(\roman*)] \item $\partial Z$ is a union of regular spherical fibers of $\SS$. \item $(Z,g)$ has positive scalar curvature. \item Every point of $Z \cap \{ \rho < r \}$ satisfies the $\varepsilon$-canonical neighborhood assumption. \item $\rho \leq c r$ on $\partial Z$. \item $\{ \rho < r \} \subset \domain \SS$. \end{enumerate} Then $(Z,g)$ is PSC-conformal. \end{lemma} \begin{proof} The constants $\varepsilon$ and $c$ will be determined in the course of the proof. We may assume without loss of generality that $Z$ is connected and $\partial Z \neq \emptyset$.
Moreover, we may assume that $Z$ is not a union of spherical fibers, since in that case PSC-conformality is immediate from Lemma~\ref{Lem_PSC_conformal_enlarge}. Therefore $Z$ must contain a point with $\rho \geq r$. Choose $\lambda \in (\sqrt{c}/2,\sqrt{c})$ such that $\rho \neq \lambda r$ on any singular spherical fiber in $Z$ and consider the subset $Z_0 := \{ \rho \leq \lambda r \} \cap Z \subset Z$. Then $\ov{Z \setminus Z_0}$ is a union of spherical fibers. Let $Z_1 \subset Z$ be the union of $Z_0$ with all components of $Z \setminus Z_0$ that are disjoint from $\partial Z$. Then $\ov{Z \setminus Z_1}$ is a union of spherical fibers and $\rho \equiv \lambda r$ on $\partial Z_1$. Due to Lemma~\ref{Lem_PSC_conformal_enlarge} it suffices to show that $(Z_1, g)$ is PSC-conformal. To see this it suffices to show the following claim. \begin{Claim} If $\varepsilon \leq \ov\varepsilon$, $c \leq \ov{c}$, then for every component $\Sigma \subset \partial Z_1$ there is a collar neighborhood $U_\Sigma \subset Z_1$ consisting of regular spherical fibers and a function $u_\Sigma \in C^\infty (U_\Sigma)$ such that: \begin{enumerate}[label=(\alph*)] \item \label{ass_cl_U_Sigma_tot_geod_a} $u_\Sigma > 0$. \item \label{ass_cl_U_Sigma_tot_geod_b} $u_\Sigma \equiv 1$ outside of a compact subset of $U_\Sigma$. \item \label{ass_cl_U_Sigma_tot_geod_c} $8 \triangle u_\Sigma - R u_\Sigma < 0$; therefore $u_\Sigma^4 g$ has positive scalar curvature. \item \label{ass_cl_U_Sigma_tot_geod_d} $\Sigma$ is totally geodesic in $(U_\Sigma, u_\Sigma^4 g)$ and $(\Sigma, u_\Sigma^4 g |_\Sigma)$ is isometric to the round 2-sphere of scale $\lambda r$. \end{enumerate} \end{Claim} Note that by our assumption $Z_1 \not\subset \domain (\SS)$, which implies that the neighborhoods $U_\Sigma$ are pairwise disjoint. \begin{proof} By rescaling we may assume without loss of generality that $\lambda r = 1$.
Fix sequences $\varepsilon^i, c^i \to 0$ and consider a sequence of counterexamples $M^i, Z^i, Z^i_1, \Sigma^i, g^i$, $r^i$ to the claim. Choose points $x^i \in \Sigma^i$. After passing to a subsequence, we may assume that $(M^i, g^i, x^i)$ converge to the final time-slice of a pointed $\kappa$-solution $(\ov{M}, \ov{g}, \ov{x})$ with $\rho (\ov{x}) = 1$; note that $(M^i, g^i)$ cannot be isometric to the round sphere for large $i$ by our assumption that $M^i$ contains a point with $\rho \geq r^i$. We claim that $(\ov{M}, \ov{g})$ is homothetic to the round cylinder. Otherwise $\ov{M}$ would be either compact or one-ended (see Theorem~\ref{Thm_kappa_sol_classification}). So $\Sigma^i$ bounds a compact domain $D^i \subset M^i$ for large $i$ on which $C_*^{-1} < \rho < C_*$ for some constant $C_* < \infty$ that is independent of $i$. Since $Z^i_1$ contains a point of scale $\rho \geq r^i \geq \lambda_i^{-1} \to \infty$ this implies that the interiors of $D^i$ and $Z^i_1$ are disjoint. Since $\rho \leq c^i r^i = c^i / \lambda_i \to 0$ on $\partial Z^i$ we have $D^i \subset Z^i_1$, which contradicts our construction of $Z^i_1$ for large $i$. Since $(\ov{M}, \ov{g})$ is homothetic to the round cylinder, we can choose $U_{\Sigma^i} \subset Z_1^i$ to be larger and larger tubular neighborhoods of $\Sigma^i$. Assertions~\ref{ass_cl_U_Sigma_tot_geod_a}--\ref{ass_cl_U_Sigma_tot_geod_d} can therefore be achieved easily for large $i$. \end{proof} This finishes the proof of the lemma. \end{proof} Lastly, we discuss further properties of the PSC-conformal condition, which will be central to the proof of Proposition~\ref{prop_move_remove_disk}. We begin by showing that the conformal factor $w$ from Definition~\ref{Def_PSC_conformal} or Lemma~\ref{Lem_PSC_conformal_analytic} can be chosen to be of a standard form near a given point. \begin{lemma} \label{Lem_w_std_form} Let $(M,g)$ be PSC-conformal and $p \in \Int M$. 
Then there exist a constant $a \in \mathbb{R}$ and a smooth function $w \in C^\infty (M)$ satisfying Properties~\ref{prop_PSC_conformal_analytic_1}--\ref{prop_PSC_conformal_analytic_3} of Lemma~\ref{Lem_PSC_conformal_analytic} and such that near $p$ we have \begin{equation} \label{eq_w_std_form} w = w (p) - a \cdot d^2 (p,\cdot) . \end{equation} \end{lemma} \begin{proof} We first show that we can arrange $w$ such that $\nabla w( p) = 0$. For this purpose, fix some $w \in C^\infty (M)$ satisfying Properties~\ref{prop_PSC_conformal_analytic_1}--\ref{prop_PSC_conformal_analytic_3} of Lemma~\ref{Lem_PSC_conformal_analytic} and assume that $\nabla w (p) \neq 0$. Choose a small geodesic $\gamma : (-\varepsilon, \varepsilon) \to M$ with $\gamma(0) = p$ and $\gamma' (0) = \nabla w (p) / |\nabla w (p)|$. Choose a sequence $\alpha_i \to 0$ with $\alpha_i > 0$. \begin{Claim} There exists a sequence of even functions $\varphi_i \in C^\infty (\mathbb{R})$ and numbers $t_i > 0$ such that for large $i$: \begin{enumerate}[label=(\arabic*)] \item \label{ass_cl_varphi_even_1} $0 \leq \varphi_i \leq 1$. \item \label{ass_cl_varphi_even_2} $\varphi_i \equiv 0$ on $[1, \infty)$. \item \label{ass_cl_varphi_even_3} $\varphi'_i \leq 0$ on $[0, \infty)$. \item \label{ass_cl_varphi_even_4} $\alpha_i^2 \varphi'_i (t_i) = - |\nabla w(p)|$. \item \label{ass_cl_varphi_even_5} $\varphi''_i (r) + 1.5 r^{-1} \varphi'_i (r) \leq 1$ on $[0, \infty)$. \end{enumerate} \end{Claim} \begin{proof} Let $\nu : \mathbb{R} \to [0,1]$ be an even cutoff function with $\nu \equiv 1$ on $[-\frac12, \frac12]$ and $\nu \equiv 0$ on $(-\infty, -1] \cup [1, \infty)$ and $\nu' \leq 0$ on $[0, \infty)$. Let $\delta > 0$ and consider the function \[ \psi_\delta (r) := \delta \cdot \nu (r) \cdot |r|^{-1/2}. \] For small $\delta$, this function satisfies Properties \ref{ass_cl_varphi_even_2}, \ref{ass_cl_varphi_even_3} and \ref{ass_cl_varphi_even_5} wherever defined.
Our goal will be to choose positive constants $\delta_i, t_i \to 0$ and let $\varphi_i$ be a smoothing of $\min \{ \psi_{\delta_i}, .9 \}$. In order to ensure that Properties~\ref{ass_cl_varphi_even_1}--\ref{ass_cl_varphi_even_5} hold, we require that $0 \leq \psi_{\delta_i} \leq \frac12$ on $[t_i / 2, \infty)$ and $\psi'_{\delta_i} (t_i) = - \alpha_i^{-2} |\nabla w(p)|$. These conditions are equivalent to \[ 0 < t_i < \tfrac12, \qquad \delta_i (t_i/2)^{-1/2} \leq \tfrac12, \qquad \delta_i t_i^{-1.5} = 2 \alpha_i^{-2} |\nabla w(p)|, \] which can be met for large $i$ such that $\delta_i, t_i \to 0$. \end{proof} Set now $q_i := \gamma(-\alpha_i t_i)$ and \[ \td{w}_i := w + \alpha_i^3 \varphi_i \bigg( \frac{d (q_i, \cdot )}{\alpha_i} \bigg). \] Then $\td{w}_i \to w$ in $C^0$ and for large $i$ \[ \triangle \td{w}_i - \triangle w = \alpha_i \varphi''_i + \alpha_i^2 \, \triangle d(q_i, \cdot) \cdot \varphi'_i \leq \alpha_i \bigg( \varphi''_i + \frac{1.5}{d(q_i,\cdot)/\alpha_i} \varphi'_i \bigg) \leq \alpha_i \to 0, \] where $\varphi'_i, \varphi''_i$ are evaluated at $d(q_i,\cdot)/\alpha_i$ and we have used that $\varphi'_i \leq 0$ and that $\triangle d (q_i, \cdot) \geq 1.5 / d(q_i, \cdot)$ on the support of $\varphi'_i ( d(q_i,\cdot)/\alpha_i )$ for large $i$. Therefore, for large $i$ the function $\td{w}_i$ satisfies Properties~\ref{prop_PSC_conformal_analytic_1}--\ref{prop_PSC_conformal_analytic_3} of Lemma~\ref{Lem_PSC_conformal_analytic}. Moreover $\nabla \td{w}_i (p) = ( |\nabla w (p)| + \alpha_i^2 \varphi'_i (t_i) ) \gamma'(0) = 0$ for large $i$. This shows that we can find a function $w \in C^\infty (M)$ satisfying Properties~\ref{prop_PSC_conformal_analytic_1}--\ref{prop_PSC_conformal_analytic_3} of Lemma~\ref{Lem_PSC_conformal_analytic} and $\nabla w (p) = 0$. Fix $w$ for the remainder of the proof. Choose some $a \in \mathbb{R}$ such that for $u := w(p) - a \cdot d^2 (p, \cdot)$ we have $8 \triangle u - R u < 0$ near $p$. Our goal will be to interpolate between $w$ and $u$. Let $f : \mathbb{R} \to [0,1]$ be a smooth cutoff function with $f \equiv 1$ on $(-\infty, -2]$ and $f \equiv 0$ on $[-1, \infty)$.
Choose a sequence of positive numbers $\varepsilon_i \to 0$ and set \[ \nu_i (r) := f (\varepsilon_i \log r ), \qquad \td{w}_i := w + \nu_i (d(p, \cdot)) (u-w). \] Then for large $i$ the function $\td{w}_i$ satisfies (\ref{eq_w_std_form}) and we have $\td{w}_i = w$ near $\partial M$. It remains to show that $8 \triangle \td{w}_i - R \td{w}_i < 0$ for large $i$. To see this, observe first that $\td{w}_i \to w$ in $C^0$. Next we compute with $r := d(p, \cdot)$ \begin{multline*} \triangle \td{w}_i - \triangle w = \triangle \nu_i (r) (u-w) + 2 \nabla \nu_i \nabla (u-w) + \nu_i \triangle (u-w) \\ \leq \nu_i'' (u-w) + \nu_i' \cdot \triangle r \cdot (u-w) + 2 |\nu'_i| \cdot |\nabla (u-w)| + \nu_i \triangle (u-w) \\ \leq C \varepsilon_i r^{-2} \cdot C r^2 + C \varepsilon_i r^{-1} \cdot C r^{-1} \cdot C r^2 + C \varepsilon_i r^{-1} \cdot C r + \nu_i \triangle (u-w). \end{multline*} It follows that \[ \triangle \td{w}_i \leq C \varepsilon_i + \nu_i \triangle u + (1-\nu_i ) \triangle w. \] This implies that \[ 8 \triangle \td{w}_i - R \td{w}_i \leq C \varepsilon_i + \nu_i (8 \triangle u - R u) + (1-\nu_i) (8 \triangle w - R w) . \] The sum of the second and third terms is bounded above by a negative constant that is independent of $i$; so since $C \varepsilon_i \to 0$, the right-hand side is strictly negative for large $i$. \end{proof} \begin{lemma} \label{lem_thick_annulus_psc_conformal} Let $M$ be a compact 3-manifold with boundary and consider a continuous family of Riemannian metrics $(g_s)_{s \in X}$ on $M$, where $X$ is a compact topological space. Suppose that $(M, g_s)$ is PSC-conformal for all $s \in X$. Consider a continuous family of embeddings $(\mu_s : B^3 (1) \to M)_{s \in X}$ and suppose that for every $s \in X$ the pullback metric $\mu^*_s g_s$ is compatible with the standard spherical structure on $B^3(1)$. Then there is a constant $0 < r_0 < 1$ such that for all $0 < r \leq r_0$ and $s \in X$ the Riemannian manifold $(M \setminus \mu_s (B^3 (r)), g_s)$ is PSC-conformal.
\end{lemma} \begin{proof} Due to the openness of the PSC-conformal condition from Lemma~\ref{Lem_PSC_conformal_open} and the fact that $X$ is compact, it suffices to prove the lemma for a single $s \in X$. So let us write in the following $g = g_s$ and $\mu = \mu_s$. In addition, by Lemma~\ref{Lem_PSC_conformal_enlarge}, it suffices to show that there is an $r_0 > 0$ such that $(M \setminus \mu (B^3 (r_0)), g)$ is PSC-conformal. Using the exponential map, we may moreover assume that $\mu^* g$ is invariant under the standard $O(3)$-action. By Lemma~\ref{Lem_w_std_form} we may choose $w \in C^\infty (M)$ satisfying Properties~\ref{prop_PSC_conformal_analytic_1}--\ref{prop_PSC_conformal_analytic_3} of Lemma~\ref{Lem_PSC_conformal_analytic} and (\ref{eq_w_std_form}) for $p = \mu (0)$. Then $( B^3 (1) \setminus \{ 0 \}, \mu^* (w^4 g))$ is isometric to \begin{equation} \label{eq_f4_cyl} \big( S^2 \times (0,\infty), f^4 ( g_{S^2} + dt^2 ) \big), \end{equation} where $f : (0, \infty) \to \mathbb{R}_+$ is smooth with \begin{equation} \label{eq_f_conditions_to_0} \lim_{t \to \infty} f (t) = \lim_{t \to \infty} f' (t) = 0, \qquad 8 f'' - 2 f < 0. \end{equation} The last condition is equivalent to the statement that the metric (\ref{eq_f4_cyl}) has positive scalar curvature. It remains to show that there is a number $t_0 > 0$ and a smooth function $\td{f} : (0, t_0] \to \mathbb{R}_+$ such that \begin{enumerate}[label=(\arabic*)] \item $\td{f} = f$ near $0$, \item $8\td{f}'' - 2 \td{f} < 0$, \item $\td{f} (t_0) = 1$ and \item $\td{f}'(t_0) = 0$. \end{enumerate} Due to a standard smoothing argument it suffices to construct $\td{f}$ to be piecewise smooth and with the property that $\frac{d}{dt^-} \td{f} \geq \frac{d}{dt^+} \td{f}$ at every non-smooth point. Moreover, since every function with $\td{f}'(t_0') \geq 0$ can be continued by a constant function for $t \geq t_0'$, we may replace Property~(4) by $\td{f}' (t_0) \geq 0$. Let us now construct $\td{f}$.
The conditions (\ref{eq_f_conditions_to_0}) imply that $2f' (t_1) > - f(t_1)$ and $f(t_1) < 1$ for some $t_1 > 0$, because otherwise we have for large $t$ \[ 4(f')^2 - f^2 > 0, \qquad (4(f')^2 - f^2)' = (8f'' - 2 f) f' > 0,\] which contradicts the first two limits of (\ref{eq_f_conditions_to_0}). Fix such a $t_1$ and a constant $\delta > 0$ and set \[ \td{f} (t) := \begin{cases} f(t) & \text{if $t \leq t_1$} \\ f(t_1) \cosh (\frac{t-t_1}{2+\delta}) + (2+\delta) f'(t_1) \sinh( \frac{t-t_1}{2+\delta}) & \text{if $t > t_1$} \end{cases}. \] For sufficiently small $\delta$ we have $(2+\delta) f'(t_1) + f(t_1) > 0$ and therefore $\lim_{t \to \infty} \td{f}(t) = \infty$ and $\td{f}(t) > 0$ for all $t \geq t_1$. Moreover, for $t > t_1$ we have $\td{f}'' = (2+\delta)^{-2} \td{f}$, so $8 \td{f}'' - 2 \td{f} = \big( \tfrac{8}{(2+\delta)^2} - 2 \big) \td{f} < 0$, which verifies Property~(2) for $t > t_1$; for $t \leq t_1$ this is part of (\ref{eq_f_conditions_to_0}). Thus we can choose $t_0 > t_1$ such that $\td{f} (t_0) = 1$ and $\td{f}' (t_0) \geq 0$. \end{proof} \subsection{Extending symmetric metrics} The following proposition will be used in the proof of Proposition~\ref{prop_extending}. Its purpose will be to extend the domain of a family of metrics by a subset that is equipped with a family of spherical structures, while preserving the PSC-conformal condition. \begin{proposition} \label{prop_extending_symmetric} Consider a 3-manifold with boundary $M$ and a compact connected 3-di\-men\-sion\-al submanifold with boundary $Y \subset M$. Let $(\SS^s)_{s \in D^n}$, $n \geq 0$, be a transversely continuous family of spherical structures defined on open subsets $Y \subset U^s \subset M$ such that $\partial Y$ is a union of regular fibers of $\SS^s$ for all $s \in D^n$. 
Consider the following continuous families of metrics: \begin{itemize} \item $(g^1_{s})_{s \in D^n}$ on $M$ \item $(g^2_{s,t})_{s \in D^n, t \in [0,1]}$ on $\ov{M \setminus Y}$ \item $(g^3_{s,t})_{s \in \partial D^n, t \in [0,1]}$ on $M$ \end{itemize} Assume that $g^1_s$, $g^2_{s,t}$ and $g^3_{s,t}$ are compatible with $\SS^s$ for all $s \in D^n$ (resp.\ $s \in \partial D^n$) and $t \in [0,1]$, and assume that the following compatibility conditions hold: \begin{enumerate}[label=(\roman*)] \item \label{prop_lem_extending_symmetric_i} $g^1_s = g^2_{s, 0}$ on $\ov{M \setminus Y}$ for all $s \in D^n$. \item \label{prop_lem_extending_symmetric_ii} $g^1_s = g^3_{s,0}$ on $M$ for all $s \in \partial D^n$. \item \label{prop_lem_extending_symmetric_iii} $g^2_{s,t} = g^3_{s,t}$ on $\ov{M \setminus Y}$ for all $s \in \partial D^n$, $t \in [0,1]$. \end{enumerate} Then there is a continuous family of metrics $(h_{s,t})_{s \in D^n, t \in [0,1]}$ on $M$ such that $h_{s,t}$ is compatible with $\SS^s$ for all $s \in D^n$, $t \in [0,1]$ and such that: \begin{enumerate}[label=(\alph*)] \item \label{ass_lem_extending_symmetric_a} $h_{s,0} = g^1_s$ on $M$ for all $s \in D^n$. \item \label{ass_lem_extending_symmetric_b} $h_{s,t} = g^2_{s,t}$ on $\ov{M \setminus Y}$ for all $s \in D^n$, $t \in [0,1]$. \item \label{ass_lem_extending_symmetric_c} $h_{s,t} = g^3_{s,t}$ on $M$ for all $s \in \partial D^n$, $t \in [0,1]$. \end{enumerate} \end{proposition} The proof of this proposition will occupy the remainder of this subsection. Our strategy will be to successively simplify the statement in a sequence of lemmas. 
\begin{proof} By Lemma~\ref{Lem_cont_fam_sph_struct} there is a connected open neighborhood $Y \subset V \subset M$ and a continuous family of embeddings $(\omega_s : V \to M)_{s \in D^n}$ such that $\omega_s (Y) = Y$ and such that the following holds: if $\SS^{\prime,s}$ denotes the pullback of $\SS^s$ via $\omega_s$, then $V$ is a union of fibers of $\SS^{\prime,s}$ and $\SS^{\prime,s}$ restricted to $V$ is constant in $s$. After replacing $M$ by $V$, $(\SS^s)_{s \in D^n}$ by $(\SS^{\prime,s})_{s \in D^n}$, $(g^1_s)_{s \in D^n}$ by $(\omega_s^* g^1_s)_{s \in D^n}$ etc., we may assume without loss of generality that $\domain (\SS^s) = M$ and that $\SS^s =: \SS$ is constant in $s$. Since $(D^n \times [0,1], \partial D^n \times [0,1] \cup D^n \times \{ 0 \} )$ is homeomorphic to $(D^n \times [0,1], D^n \times \{ 0 \})$, we can simplify the proposition further by removing the family $(g^3_{s,t})_{s \in \partial D^n, t \in [0,1]}$ as well as Assumptions~\ref{prop_lem_extending_symmetric_ii}, \ref{prop_lem_extending_symmetric_iii} and Assertion~\ref{ass_lem_extending_symmetric_c}. If $\partial Y = \emptyset$, then $M \setminus Y = \emptyset$, so Assumption~\ref{prop_lem_extending_symmetric_i} and Assertion~\ref{ass_lem_extending_symmetric_b} are vacuous and we can simply set $h_{s,t} := g^1_s$. If $\partial Y \neq \emptyset$, then $Y$ is diffeomorphic to one of the following manifolds (see Lemma~\ref{lem_spherical_struct_classification}): \[ S^2 \times [0,1], \; (S^2 \times [-1,1])/ \mathbb{Z}_2, \; D^3. \] By removing collar neighborhoods of $\partial Y$ from $Y$, we can construct a compact 3-manifold with boundary $Z \subset \Int Y$ that is a union of fibers of $\SS$ such that $Y' := \ov{Y \setminus Z}$ is diffeomorphic to a disjoint union of copies of $S^2 \times I$. Define $g^{\prime,2}_{s,t} := g^2_{s,t}$ on $\ov{M \setminus Y}$ and $g^{\prime,2}_{s,t} := g^1_{s}$ on $Z$. Then $(g^{\prime,2}_{s,t})_{s \in D^n, t \in [0,1]}$ is defined on $\ov{M \setminus Y'}$. 
By replacing $Y$ by $Y'$ and $(g^{2}_{s,t})_{s \in D^n, t \in [0,1]}$ by $(g^{\prime,2}_{s,t})_{s \in D^n, t \in [0,1]}$, we can reduce the proposition to the case in which $Y$ is diffeomorphic to a disjoint union of copies of $S^2 \times [0,1]$. Moreover, due to Assertion~\ref{ass_lem_extending_symmetric_b}, we can handle each component of $Y$ separately. Therefore the proposition can be reduced to Lemma~\ref{lem_extending_symmetric_simplified} below. \end{proof} \begin{lemma} \label{lem_extending_symmetric_simplified} Let $n \geq 0$ and let $(M, Y) := ( S^2 \times (-2, 2), S^2 \times [-1,1])$. Let $\SS$ be the standard spherical structure on $M$ and consider continuous families of metrics: \begin{itemize} \item $(g^1_s)_{s \in D^n}$ on $M$, \item $(g^2_{s,t})_{s \in D^n, t \in [0,1]}$ on $\ov{M \setminus Y}$ \end{itemize} that are all compatible with $\SS$ and that satisfy the compatibility condition $g^1_s = g^2_{s,0}$ on $\ov{M \setminus Y}$ for all $s \in D^n$. Then there is a continuous family of metrics $(h_{s,t})_{s \in D^n, t \in [0,1]}$ on $M$ such that: \begin{enumerate}[label=(\alph*)] \item $h_{s,t}$ is compatible with $\SS$ for all $s \in D^n$ and $t \in [0,1]$. \item $h_{s,0} = g^1_s$ on $M$ for all $s \in D^n$, \item $h_{s,t} = g^2_{s,t}$ on $\ov{M \setminus Y}$ for all $s \in D^n$ and $t \in [0,1]$. \end{enumerate} \end{lemma} \begin{proof} By Lemma~\ref{lem_compatible_metric_general_form} the metrics $g^1_s$ and $g^2_{s,t}$ are of the form \[ a^2(r) g_{S^2} + b^2(r) dr^2 + \sum_{i=1}^3 c_i (r) (dr \, \xi_i + \xi_i \, dr), \] where $a,b, c_i$ are smooth functions and $a,b > 0$. So by considering the functions $\log a, \log b, c_i$, the lemma can be reduced to Lemma~\ref{lem_extending_symmetric_simplified_functions} below. 
\end{proof} \begin{lemma} \label{lem_extending_symmetric_simplified_functions} Let $n \geq 0$ and consider continuous families of functions: \begin{itemize} \item $(f^1_s \in C^\infty ((-2,2)))_{s \in D^n}$, \item $(f^2_{s,t} \in C^\infty ( (-2,-1] \cup [1,2) ))_{s \in D^n, t \in [0,1]}$ \end{itemize} that satisfy the compatibility condition $f^1_s = f^2_{s,0}$ on $(-2,-1] \cup [1,2)$ for all $s \in D^n$. Then there is a continuous family of functions $(f^3_{s,t}\in C^\infty ((-2,2)))_{s \in D^n, t \in [0,1]}$ such that: \begin{enumerate}[label=(\alph*)] \item $f^3_{s,0} = f^1_s$ on $(-2,2)$ for all $s \in D^n$, \item $f^3_{s,t} = f^2_{s,t}$ on $(-2,-1] \cup [1,2)$ for all $s \in D^n$ and $t \in [0,1]$. \end{enumerate} \end{lemma} \begin{proof} It suffices to prove the lemma in the case in which $f^1_{s} \equiv 0$, because after applying the lemma to the functions $\td{f}^1_s :\equiv 0$ and $\td{f}^2_{s,t} := f^2_{s,t} - f^1_s$, resulting in a family of functions $\td{f}^3_{s,t}$, we may set $f^3_{s,t} := \td{f}^3_{s,t} + f^1_s$. So assume in the following that $f^1_{s} \equiv 0$ and note that this implies $f^2_{s,0} \equiv 0$. By Seeley's Theorem \cite{Seeley1964}, we may extend the family $(f^2_{s,t})$ to a continuous family $(\ov{f}^2_{s,t} \in C^\infty ( (-2,-0.8) \cup (0.8,2) ))_{s \in D^n, t \in [0,1]}$ such that $\ov{f}^2_{s,0} \equiv 0$ for all $s \in D^n$. Let $\eta \in C^\infty ((-2,2))$ be a cutoff function such that $\eta \equiv 1$ on $(-2,-1] \cup [1,2)$ and $\eta \equiv 0$ on $[-0.9,0.9]$. Then $f^3_{s,t} := \eta \ov{f}^2_{s,t}$ has the desired properties. \end{proof} \subsection{Extending symmetric metrics on the round sphere} The following proposition is similar in spirit to Proposition~\ref{prop_extending_symmetric} and will also be used in the proof of Proposition~\ref{prop_extending}. 
It concerns deformations of metrics compatible with a family of spherical structures $(\SS^s)$ if the starting metric is the round sphere or an isometric quotient of it. In this case the spherical structures $\SS^s$ are not uniquely determined by the metric. Due to this ambiguity, the spherical structure $\SS^s$ may not be defined for certain parameters $s$; in this case we will require the associated metrics to be multiples of a fixed round metric. \begin{proposition} \label{prop_exten_round} Let $(M, g^*)$ be a compact 3-manifold of constant sectional curvature $1$ and let $\Delta^n$ be the standard $n$-simplex for $n \geq 0$. Consider the following data: \begin{itemize} \item a continuous function $\lambda : \Delta^n \to \mathbb{R}_+$, \item a continuous family of metrics $(k_{s,t})_{s \in \partial \Delta^n, t \in [0,1]}$ on $M$ \item an open subset $A \subset \Delta^n$, \item a closed subset $E \subset \partial \Delta^n$ such that $E \subset A$ \item a transversely continuous family of spherical structures $(\SS^s)_{s \in A}$ on $M$. \end{itemize} Assume that \begin{enumerate}[label=(\roman*)] \item $k_{s,0} = \lambda^2 (s) g^*$ for all $s \in \partial \Delta^n$ \item For all $s \in A$ the metric $g^*$ is compatible with $\SS^s$. \item \label{prop_exten_round_iii} For all $s \in E$ and $t \in [0,1]$ the metric $k_{s,t}$ is compatible with $\SS^s$. \item For all $s \in \partial \Delta^n \setminus E$ and $t \in [0,1]$ the metric $k_{s,t}$ is a multiple of $g^*$. \end{enumerate} Then there is a continuous family of metrics $(h_{s,t})_{s \in \Delta^n, t \in [0,1]}$ on $M$ and a closed subset $E' \subset \Delta^n$ such that: \begin{enumerate}[label=(\alph*)] \item \label{ass_exten_round_aa} $h_{s,0} = \lambda^2 (s) g^*$ for all $s \in \Delta^n$. \item \label{ass_exten_round_a} $h_{s,t} = k_{s,t}$ for all $s \in \partial \Delta^n$, $t \in [0,1]$. \item \label{ass_exten_round_b} $E' \subset A$. 
\item \label{ass_exten_round_c} For any $s \in A$ and $t \in [0,1]$ the metric $h_{s,t}$ is compatible with $\SS^s$. \item \label{ass_exten_round_d} For all $s \in \Delta^n \setminus E'$ and $t \in [0,1]$ the metric $h_{s,t}$ is a multiple of $g^*$. \end{enumerate} \end{proposition} Note that if $M \not\approx S^3, \mathbb{R} P^3$, then $E= A = \emptyset$, in which case the proposition is trivial. \begin{proof} Let us first reduce the proposition to the case in which either $E = \emptyset$ or $A = \Delta^n$. To see this, assume that the proposition is true in these two cases, in any dimension. Choose a simplicial refinement of $\Delta^n$ that is fine enough such that every subsimplex $\sigma \subset \Delta^n$ is either fully contained in $A$ or disjoint from $E$. Then we may successively construct $(h_{s,t})$ over the skeleta of this decomposition. More specifically, let $0 \leq k \leq n$ and assume by induction that either $k = 0$ or that some family of metrics $(h^{k-1}_{s,t})$ has been constructed over the $(k-1)$-dimensional skeleton $X^{k-1}$ of $\Delta^n$ such that Assertions~\ref{ass_exten_round_aa}--\ref{ass_exten_round_d} hold for some closed subset $E'_{k-1} \subset X^{k-1}$, where in Assertion~\ref{ass_exten_round_d} we have to replace the difference $\Delta^n \setminus E'$ by $X^{k-1} \setminus E'_{k-1}$. For every $k$-simplex $\sigma \subset \Delta^n$ consider the closed subset $E_\sigma := E'_{k-1} \cap \partial \sigma$ and apply the proposition to find an extension $(h^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}$ of $(h^{k-1}_{s,t})$ to $\sigma$ and a closed subset $E'_\sigma \subset \sigma$ such that Assertions~\ref{ass_exten_round_aa}, \ref{ass_exten_round_b}--\ref{ass_exten_round_d} of the proposition hold. Set $E'_k := \cup_{\sigma \subset \Delta^n, \dim \sigma = k} E'_\sigma$ and combine all families $(h^\sigma_{s,t})$ to a family $(h^k_{s,t})$ over $X^k$. This finishes the induction. 
Then $(h_{s,t}) := (h^n_{s,t})$ and $E' := E'_n$ are the desired data for this proposition. So it remains to prove the proposition in the two cases $E = \emptyset$ and $A = \Delta^n$. Consider first the case in which $E = \emptyset$. Then $k_{s,t} = \mu^2 (s,t) g^*$ for some continuous function $\mu : \partial \Delta^n \times [0,1] \to \mathbb{R}_+$ that satisfies $\mu(s,0) = \lambda (s)$ for all $s \in \partial \Delta^n$. Let $\lambda_0 : \Delta^n \times [0,1] \to \mathbb{R}_+$ be a continuous function such that $\lambda_0 (s,0) = \lambda (s)$ for all $s \in \Delta^n$ and $\lambda_0 (s,t)= \mu(s,t)$ for all $s \in \partial \Delta^n$, $t \in [0,1]$. Then $(h_{s,t} := \lambda_0^2 (s,t) g^*)$ and $E' := \emptyset$ are the desired data for this proposition. Lastly, consider the case in which $A = \Delta^n$. So $\SS^s$ is defined for all $s \in \Delta^n$. For all $s \in \partial\Delta^n \setminus E$ and $t \in [0,1]$ the metric $k_{s,t}$ is a multiple of $g^*$ and therefore compatible with $\SS^s$. For all $s \in E$ and $t \in [0,1]$ the metric $k_{s,t}$ is compatible with $\SS^s$ by Assumption~\ref{prop_exten_round_iii}. Therefore $k_{s,t}$ is compatible with $\SS^s$ for all $s \in \partial \Delta^n$ and $t \in [0,1]$. Set $E' := \Delta^n$. Then Assertion~\ref{ass_exten_round_d} becomes vacuous and the existence of $(h_{s,t})$ is a direct consequence of Proposition~\ref{prop_extending_symmetric}. \end{proof} \subsection{The conformal exponential map} \label{subsec_conf_exp} In this subsection, we will introduce the \emph{conformal exponential map}, which will be used in the rounding construction of Subsection~\ref{subsec_Rounding}. The conformal exponential map will produce a set of canonical local coordinates near a point $p \in M$ in a Riemannian manifold $(M,g)$ with the following properties: \begin{itemize} \item If $g$ is locally rotationally symmetric about $p$, then so is the metric in these local coordinates. 
\item If the metric is conformally flat near $p$, then the metric expressed in these local coordinates is conformally equivalent to the standard Euclidean metric. \item The metric expressed in these local coordinates agrees with the Euclidean metric up to first order at the origin. \end{itemize} In the following let $(M, g)$ be a 3-dimensional Riemannian manifold. Recall that the Schouten tensor is defined as follows: \[ S = \Ric - \frac14 R g \] If $\td{g} = e^{2\phi} g$, then the Schouten tensor $\td{S}$ of $\td{g}$ can be expressed as \begin{equation} \label{eq_Schouten_conf_trafo} \td{S}_{ij} = S_{ij} - ( \nabla^2_{ij} \phi - \nabla_i \phi \nabla_j \phi ) - \frac12 |\nabla \phi |^2 g_{ij}. \end{equation} If $\td{S} \equiv 0$, which implies that $g$ is conformally flat, then we can recover the function $\phi$ by integrating an expression involving $S$ twice along curves. We will now carry out this integration even in the case in which $g$ is not conformally flat. In doing so, we will construct a locally defined function $\phi$, with the property that if $g$ is conformally flat, then $e^{2\phi} g$ is flat. Let $p \in M$ be a point with injectivity radius $\operatorname{injrad}(p)$. By standard ODE-theory there is a maximal radius $r=r_{g,p} \in (0, \operatorname{injrad}(p)]$, depending continuously on the metric $g$ and the point $p$, such that we can solve the following ODE for 1-forms radially along arclength geodesics $\gamma : [0,r) \to M$ emanating at $\gamma(0)=p$: \[ \alpha (p) = 0, \qquad \nabla_{\gamma'} \alpha = S(\gamma', \cdot) + \alpha( \gamma' ) \alpha - \frac12 |\alpha |^2 g (\gamma', \cdot ). \] We claim that this defines a unique smooth 1-form $\alpha$ on $B(p,r)$. 
To see that $\alpha$ is smooth consider the exponential map $v \mapsto \gamma_v (1) = \exp_p (v)$ and notice that the parallel transports $\alpha_v (s) := P^{\gamma_v}_{0,s} \alpha$ of $\alpha (\gamma_v (s))$ to $p$ satisfy the ODE \[ \alpha_v (0) = 0, \qquad \alpha'_v (s) = (P^{\gamma_v}_{0,s} S)(v, \cdot) + \alpha_v (s) (v) \alpha_v (s) - \frac12 |\alpha_v (s)|^2 g_p( v, \cdot ). \] The value $\alpha_v (1) = P^{\gamma_v}_{0,1} \alpha$ depends smoothly on the vector $v \in T_p M$. Next, we construct the following function $\phi \in C^\infty (B(p,r))$ by radial integration along geodesics $\gamma$ emanating from $p$: \[ \phi (p) = 0, \qquad \nabla_{\gamma'} \phi = \alpha (\gamma'). \] The smoothness of $\phi$ follows similarly as before. Note that by uniqueness of solutions to ODEs we have: \begin{lemma} \label{Lem_recover_phi} Let $\gamma : [0, r) \to M$ be a geodesic emanating at $\gamma (0)=p$ and let $t \in [0,r)$. Suppose that there is a neighborhood $\gamma(t) \in U' \subset B(p,r)$ and a function $\phi' \in C^\infty (U')$ such that $e^{2\phi'} g$ is flat. If $\phi' = \phi$, $d\phi' = d\phi$ at $\gamma (t)$, then the same is true along the component of $\gamma ([0,r)) \cap U'$ containing $\gamma(t)$. \end{lemma} We can now define the conformal exponential map. \begin{definition}[Conformal exponential map] After constructing $\phi \in C^\infty (B(p,r))$ as above, set \[ \confexp_p := \exp_{e^{2 \phi} g, p} : U^{conf}_p \longrightarrow B(p,r), \] where $U^{conf}_p := \exp_{e^{2 \phi} g, p}^{-1} (B(p,r)) \subset T_p M$.\end{definition} The following proposition summarizes the properties of the conformal exponential map: \begin{proposition} \label{Prop_properties_confexp} Let $B(p,r') \subset B(p,r)$ be a possibly smaller ball and consider the pullback $g' := \confexp_p^* g$ onto $\confexp_p^{-1} (B(p,r')) \subset T_p M$. Denote by $g_{\eucl,p}$ the standard Euclidean metric on $T_p M$. 
Then: \begin{enumerate}[label=(\alph*)] \item \label{ass_properties_confexp_a} If $(g_s)_{s \in X}$ is a continuous family of Riemannian metrics on $M$ (in the smooth topology), then the corresponding family of conformal exponential maps $(\confexp_{g_s,p})_{s \in X}$ also depends continuously on $s$ (in the smooth topology). \item \label{ass_properties_confexp_b} If $g$ restricted to $B(p,r')$ is rotationally symmetric about $p$, i.e. if $g$ is compatible with a spherical structure on $B(p,r')$ with singular fiber $\{ p \}$, then $\confexp_p^{-1} (B(p,r'))$ and $g'$ are invariant under the standard $O(3)$-action. \item \label{ass_properties_confexp_c} If $g$ restricted to $B(p,r')$ is conformally flat, then $g' = e^{2\phi'} g_{\eucl,p}$ for some $\phi' \in C^\infty (\confexp_p^{-1} (B(p,r')))$. \item \label{ass_properties_confexp_d} At $p$ we have $g' - g_{\eucl,p} = \partial ( g' - g_{\eucl,p} ) = 0$. \end{enumerate} \end{proposition} \begin{proof} Assertion~\ref{ass_properties_confexp_a} follows by construction. For Assertion~\ref{ass_properties_confexp_b} observe that if $g$ is rotationally symmetric on $B(p,r')$, then so is $\phi$. For Assertion~\ref{ass_properties_confexp_c} it suffices to show that $e^{2\phi} g$ restricted to $B(p,r')$ is flat. For this purpose choose an arclength geodesic $\gamma : [0,r') \to M$ emanating at $\gamma (0) = p$. Let $t_0 \in [0, r')$ be maximal with the property that $e^{2\phi} g$ is flat in a neighborhood $U \subset B(p,r')$ of $\gamma ([0,t_0))$. By Lemma~\ref{Lem_recover_phi} above and Lemma~\ref{Lem_global_conf_flat_U_sc} below we have $t_0 > 0$. Assume now that $t_0 < r'$ and set $q := \gamma (t_0)$. By Lemma~\ref{Lem_global_conf_flat_U_sc} we can find a function $\phi' \in C^\infty (V)$ defined in a neighborhood $q \in V \subset B(p,r')$ such that $e^{2\phi'} g$ is flat and $\phi' (q) = \phi (q)$, $d\phi'_q = d\phi_q$. 
By Lemma~\ref{Lem_recover_phi}, we obtain that $\phi = \phi'$, $d\phi = d\phi'$ along $\gamma |_{(t_1,t_0]}$ for some $t_1 \in [0, t_0)$. So by the uniqueness statement in Lemma~\ref{Lem_global_conf_flat_U_sc} we have $\phi' = \phi$ on the connected component of $U \cap V$ containing $\gamma([t_1,t_0))$. So $e^{2\phi} g$ is flat in a neighborhood of $\gamma ([0,t_0])$, in contradiction to the maximal choice of $t_0$. For Assertion~\ref{ass_properties_confexp_d} observe that $\phi (p) = d\phi_p = 0$ and $e^{2\phi \circ \confexp_p} g' - g_{\eucl,p} = \exp^*_{e^{2\phi} g,p} (e^{2\phi} g) - g_{\eucl, p} = O(r^2)$. \end{proof} \begin{lemma} \label{Lem_global_conf_flat_U_sc} If $(M, g)$ is conformally flat near some point $q \in M$, then for any $a \in \mathbb{R}$, $\alpha \in T_q^* M$ there is a neighborhood $q \in V \subset M$ and a $\phi \in C^\infty (V)$ such that $e^{2\phi} g$ is flat and $\phi (q) = a$, $d\phi_q = \alpha$. Moreover $\phi$ is unique modulo restriction to a smaller neighborhood of $q$.\end{lemma} \begin{proof} By the local conformal flatness, we can find an open neighborhood $q \in V' \subset M$ and a $\phi' \in C^\infty (V')$ such that $g' = e^{2\phi'} g$ is flat on $V'$. Since $e^{2\phi} g = e^{2 (\phi - \phi')} g'$ for any smooth function $\phi$ that is defined near $q$, we may replace $M$ by $V'$ and $g$ by $g' = e^{2\phi'} g$. So by isometrically identifying $(M,g)$ with a subset of $\mathbb{R}^3$ we may assume without loss of generality that $M \subset \mathbb{R}^3$, $q = 0$ and $g = g_{\eucl}$. Recall that by (\ref{eq_Schouten_conf_trafo}) local conformal flatness is equivalent to the PDE \begin{equation} \label{eq_conf_flat_psi} -\nabla^2_{ij} \phi + \nabla_i \phi \nabla_j \phi - \frac12 |\nabla \phi|^2 \delta_{ij} = 0. \end{equation} If $\alpha = 0$, then we can set $\phi \equiv a$, otherwise we can set $\phi (x) := -2\log |x-y| + 2 \log |y| + a$ for $y \in \mathbb{R}^3$ with $2 \frac{y}{|y|^2} = \alpha$. This proves existence. 
Uniqueness follows by viewing (\ref{eq_conf_flat_psi}) restricted to lines as an ODE in $\nabla \phi$, as was done in the beginning of this subsection. \end{proof} \begin{remark}\label{rmk_Cotton_York} We recall that a metric is conformally flat if and only if the Schouten tensor satisfies the Cotton-York condition (see \cite{Cotton-1899}): \[ \nabla_i S_{jk} - \nabla_j S_{ik} = 0 \] With some more effort Assertion~\ref{ass_properties_confexp_c} of Proposition~\ref{Prop_properties_confexp} can be deduced directly from the Cotton-York condition. In the context of this paper the Cotton-York condition is, however, not essential, which is why details have been omitted. \end{remark} \subsection{Rounding metrics} \label{subsec_Rounding} The goal of this subsection is to prove the following result, which states that a metric on the standard $3$-ball can be deformed to make it compatible with the standard spherical structure on a smaller ball, while preserving conformal flatness, compatibility with the standard spherical structure on (most) disks $D^3(r)$, $r\in (0,1]$, and PSC-conformality. The following is the main result of this subsection. \begin{proposition} \label{Prop_rounding} Let $X$ be a compact topological space, $X_{PSC} \subset X$ a closed subset and $0 < \ov{r}_1 < 1$. Consider a continuous family of Riemannian metrics $(h_s)_{s \in X}$ on the unit ball $B^3 \subset \mathbb{R}^3$ such that there is a continuous family of positive functions $(w_s \in C^\infty(B^3))_{s \in X_{PSC}}$ with the property that $w_s^4 h_s$ has positive scalar curvature. Then there is a continuous family of Riemannian metrics $(h'_{s,u})_{s \in X, u \in [0,1]}$ and a constant $0 < r_1 < \ov{r}_1$ such that for all $s \in X$ and $u \in [0,1]$: \begin{enumerate}[label=(\alph*)] \item \label{ass_Prop_rounding_a} $h'_{s,u} = h_s$ on $B^3 \setminus B^3 (\ov{r}_1)$. \item \label{ass_Prop_rounding_b} $h'_{s,0} = h_s$. 
\item \label{ass_Prop_rounding_c} $h'_{s,1}$ is compatible with the standard spherical structure on $D^3 (r_1)$. \item \label{ass_Prop_rounding_d} If $h_s$ is compatible with the standard spherical structure on $D^3(r)$ for some $r \in [\ov{r}_1,1]$, then so is $h'_{s,u}$. \item \label{ass_Prop_rounding_e} If $h_s$ is conformally flat, then so is $h'_{s,u}$. \item \label{ass_Prop_rounding_f} If $s \in X_{PSC}$, then $w_{s,u}^{\prime,4} h'_{s,u}$ has positive scalar curvature, for some $w'_{s,u} \in C^\infty(B^3)$ that agrees with $w_s$ on $B^3 \setminus B^3 (\ov{r}_1)$. \end{enumerate} \end{proposition} We will reduce Proposition~\ref{Prop_rounding} to Lemma~\ref{Lem_rounding_intermediate} below. \begin{lemma} \label{Lem_rounding_intermediate} Assume that we are in the same situation as Proposition~\ref{Prop_rounding} and assume additionally that for all $s \in X$ \begin{enumerate}[label=(\roman*)] \item \label{hyp_Lem_rounding_intermediate_i} $(h_s - g_{\eucl})_0 = (\partial h_s)_0 = 0$. \item \label{hyp_Lem_rounding_intermediate_ii} If $h_s$ is conformally flat, then on $B^3(\ov{r}_1)$ it is even conformally equivalent to the standard Euclidean metric $g_{\eucl}$. \end{enumerate} Then there is a continuous family of Riemannian metrics $(h'_{s,u})_{s \in X, u \in [0,1]}$ and a constant $0 < r_1 < \ov{r}_1$ such that Assertions~\ref{ass_Prop_rounding_a}--\ref{ass_Prop_rounding_f} of Proposition~\ref{Prop_rounding} hold for all $s \in X$ and $u \in [0,1]$. \end{lemma} In order to show that Lemma~\ref{Lem_rounding_intermediate} implies Proposition~\ref{Prop_rounding} we need the following lemma: \begin{lemma} \label{Lem_existence_phi_s_u} Let $(h_s)_{s \in X}$ be a continuous family of Riemannian metrics on $B^3$ and let $0 < \ov{r}_1 < 1$. 
Then there is a constant $0 < \ov{r}'_1 < \ov{r}_1$ and a continuous family of diffeomorphisms $(\phi_{s,u} : B^3 \to B^3)_{s \in X, u \in [0,1]}$ such that for all $s \in X$ and $u \in [0,1]$: \begin{enumerate}[label=(\alph*)] \item \label{ass_Lem_existence_phi_s_u_a} $\phi_{s,u} = \id$ on $B^3 \setminus B^3 (\ov{r}_1)$. \item \label{ass_Lem_existence_phi_s_u_b} $\phi_{s,0} = \id$ \item \label{ass_Lem_existence_phi_s_u_c} The pullback $\phi^*_{s,1} h_s$ agrees with the Euclidean metric $g_{\eucl}$ at the origin up to first order. \item \label{ass_Lem_existence_phi_s_u_d} If $h_s$ is conformally flat, then $\phi^*_{s,1} h_s$ restricted to $B^3(\ov{r}'_1)$ is conformally equivalent to $g_{\eucl}$. \item \label{ass_Lem_existence_phi_s_u_e} If $h_s$ is compatible with the standard spherical structure on $D^3(r)$ for some $r \in [\ov{r}_1,1]$, then so is $\phi^*_{s,u} h_s$. \end{enumerate} \end{lemma} \begin{proof} Choose $r_2 > 0$ small enough such that $\confexp_{0, h_s} |_{B^3(r_2)} : B^3 (r_2) \to B^3$ is defined for all $s \in X$. Let $S^+_3$ be the set of symmetric positive definite $3 \times 3$ matrices and denote by $I \in S^+_3$ the identity matrix. For any $s \in X$ choose the matrix $L_s \in S^+_3$ with the property $(h_s)_0 (L_s v, L_s w) = v \cdot w$ for any $v, w \in \mathbb{R}^3$, where the latter denotes the standard Euclidean inner product. Then $L_s$ is continuous in $s$ and we can find an $r_3 > 0$ such that the maps \[ \psi_{s,u} := u \confexp_{0, h_s} \circ L_s |_{B^3(r_3)} + (1-u) \id_{B^3(r_3)} : B^3 (r_3) \to B^3 \] are defined and continuous for all $s \in X$, $u \in [0,1]$. Note that for all $s \in X$, $u \in [0,1]$ we have \[ \psi_{s,u} (0) = 0, \qquad (d\psi_{s,u})_0 = u L_s + (1-u) I \] and $\psi_{s,1}^* h_s$ agrees with $g_{\eucl}$ at the origin up to first order. 
\begin{Claim} There is a continuous family of diffeomorphisms $(\zeta_A : B^3 \to B^3)_{A \in S^+_3}$, parameterized by the set of symmetric positive definite $3 \times 3$ matrices, such that for all $A \in S^+_3$: \begin{enumerate}[label=(\alph*)] \item \label{ass_cl_zeta_A_a} $\zeta_A= \id$ on $B^3 \setminus B^3 (\ov{r}_1)$ \item $\zeta_A (0) = 0$. \item $(d\zeta_A)_0 = A$. \item $\zeta_{I} = \id_{B^3}$. \end{enumerate} \end{Claim} \begin{proof} Let $S_3$ be the set of symmetric $3 \times 3$ matrices and recall that $\exp : S_3 \to S_3^+$ is a diffeomorphism. For any compactly supported vector field $V$ on $B^3$ let $\zeta'_V : B^3 \to B^3$ be the flow of $V$ at time $1$. If $V_0 = 0$, then $(d \zeta'_V)_0 = \exp ((dV)_0)$. By taking linear combinations, we can find a continuous (or linear) family of vector fields $(V_A)_{A \in S_3}$ on $B^3$ whose support lies inside $B^3(\ov r_1)$ such that $(V_A)_0 = 0$ and $(dV_A)_0 = A$. We can then set $\zeta_{\exp (A)} := \zeta'_{V_A}$. \end{proof} Let $\eta : \mathbb{R}^3 \to [0,1]$ be a smooth, rotationally symmetric cutoff function with $\eta \equiv 1$ on $B^3(1)$ and $\eta \equiv 0$ on $\mathbb{R}^3 \setminus B^3(2)$. Let $r_4 > 0$ and set $\eta_{r_4} (v) := \eta (v/r_4)$ and \[ \phi_{s, u} := \eta_{r_4} \psi_{s, u} + (1-\eta_{r_4} ) \zeta_{u L_s + (1-u) I}. \] A standard limit argument shows that the family $(\phi_{s,u})_{s \in X, u \in [0,1]}$ consists of diffeomorphisms for sufficiently small $r_4$. Assertion~\ref{ass_Lem_existence_phi_s_u_a} holds due to Assertion~\ref{ass_cl_zeta_A_a} of the Claim if $r_4$ is chosen sufficiently small. Assertion~\ref{ass_Lem_existence_phi_s_u_b} holds since $\phi_{s,0} = \eta_{r_4} \psi_{s,0} + (1-\eta_{r_4}) \zeta_I = \id$. Assertions~\ref{ass_Lem_existence_phi_s_u_c} and \ref{ass_Lem_existence_phi_s_u_d} hold for small $\ov{r}'_1$ since $\phi_{s,1} = \psi_{s,1}$ near the origin and due to the discussion preceding the Claim. 
For Assertion~\ref{ass_Lem_existence_phi_s_u_e} assume that $h_s$ is compatible with the standard spherical structure on $D^3(r)$ for some $r \in [\ov{r}_1, 1]$. Then $h_s$ agrees with $g_{\eucl}$ at the origin up to first order, so $L_s = I$. Moreover $\confexp_{0,h_s}$ preserves the standard spherical structure on $\confexp^{-1}_{0,h_s} (D^3 (r))$. Therefore $\phi_{s,u}$ preserves the standard spherical structure on $\phi^{-1}_{s,u} (D^3 (r)) = D^3(r)$, since $r \geq \ov{r}_1$. This implies Assertion~\ref{ass_Lem_existence_phi_s_u_e}. \end{proof} \begin{proof}[Proof that Lemma~\ref{Lem_rounding_intermediate} implies Proposition~\ref{Prop_rounding}] Consider the constant $0 < \ov{r}'_1 < \ov{r}_1$ and the family of diffeomorphisms $(\phi_{s,u} : B^3 \to B^3)_{s \in X, u \in [0,1]}$ from Lemma~\ref{Lem_existence_phi_s_u}. Apply Lemma~\ref{Lem_rounding_intermediate} with $\ov{r}_1$ replaced by $\ov{r}'_1$ to $(\phi^*_{s,1} h_s)_{s \in X}$, consider the constant $0 < r_1 < \ov{r}_1$ and call the resulting family of metrics $(\td{h}'_{s,u})_{s \in X, u \in [0,1]}$. Set \[ h'_{s,u} := \begin{cases} \phi^*_{s, 2u} h_s & \text{if $u \in [0, \frac12]$} \\ \td{h}'_{s,2u-1} & \text{if $u \in (\frac12, 1]$} \end{cases} \] Note that this family is continuous due to Lemma~\ref{Lem_rounding_intermediate}\ref{ass_Prop_rounding_b}. We claim that $(h'_{s,u})_{s \in X, u \in [0,1]}$ satisfies Assertions~\ref{ass_Prop_rounding_a}--\ref{ass_Prop_rounding_f} of this proposition. For Assertion~\ref{ass_Prop_rounding_a} observe that on $B^3 \setminus B^3 (\ov{r}_1)$ we have $h'_{s,u} = \phi^*_{s, 2u} h_s = h_s$ if $u \in [0,\frac12]$, due to Lemma~\ref{Lem_existence_phi_s_u}\ref{ass_Lem_existence_phi_s_u_a} and $h'_{s,u} = h_s$ on $B^3 \setminus B^3 (\ov{r}_1) \subset B^3 \setminus B^3 (\ov{r}'_1)$ if $u \in [\frac12,1]$, due to Lemma~\ref{Lem_rounding_intermediate}\ref{ass_Prop_rounding_a}. 
Assertion~\ref{ass_Prop_rounding_b} holds since $\phi_{s,0} = \id$; compare with Lemma~\ref{Lem_existence_phi_s_u}\ref{ass_Lem_existence_phi_s_u_b}. Assertions~\ref{ass_Prop_rounding_c}, \ref{ass_Prop_rounding_e} and \ref{ass_Prop_rounding_f} hold due to the same assertions of Lemma~\ref{Lem_rounding_intermediate}, because $h'_{s,1} = \td{h}'_{s,1}$ and due to Lemma~\ref{Lem_existence_phi_s_u}\ref{ass_Lem_existence_phi_s_u_a}. Consider now Assertion~\ref{ass_Prop_rounding_d} and assume that $h_s$ is compatible with the standard spherical structure on $D^3 (r)$ for some $r \in [\ov{r}_1,1]$. By Lemma~\ref{Lem_existence_phi_s_u}\ref{ass_Lem_existence_phi_s_u_e} the same is true for the pullbacks $\phi^*_{s,u} h_s$ and therefore by Lemma~\ref{Lem_rounding_intermediate}\ref{ass_Prop_rounding_d}, the same is true for $\td{h}'_{s,u}$. \end{proof} \begin{proof}[Proof of Lemma~\ref{Lem_rounding_intermediate}.] Let $f : \mathbb{R} \to [0,1]$ be a cutoff function with $f \equiv 1$ on $(-\infty, -2]$ and $f\equiv 0$ on $[-1, \infty)$. Let $\varepsilon > 0$ be a constant that we will determine later and set \[ \nu (r) := f(\varepsilon \log r ). \] Fix a metric $\ov g$ on $B^3$ with the following properties: \begin{enumerate} \item $\ov g$ is $O(3)$-invariant. \item $(\ov g_{ij} - \delta_{ij})_0 = (\partial \ov g_{ij})_0 = 0$. \item $\ov g$ is conformally equivalent to the Euclidean metric $g_{\eucl}$. \item $w_s^4 \ov{g}$ has positive scalar curvature at the origin for all $s \in X_{PSC}$. \end{enumerate} Set \[ h'_{s,u} := \big(1- u \cdot \nu (d_{h_s} (0, \cdot)) \big) h_s + u \cdot \nu (d_{h_s} (0, \cdot)) \ov{g}, \] where $d_{h_s} (0, \cdot)$ denotes the radial distance function with respect to the metric $h_s$. It remains to check that Assertions~\ref{ass_Prop_rounding_a}--\ref{ass_Prop_rounding_f} of Proposition~\ref{Prop_rounding} hold for sufficiently small $\varepsilon$ and $r_1$. 
Assertions~\ref{ass_Prop_rounding_a}--\ref{ass_Prop_rounding_c} trivially hold for sufficiently small $\varepsilon$ and $r_1$. For Assertion~\ref{ass_Prop_rounding_d} observe that if $h_s$ is compatible with the standard spherical structure on $D^3(r)$, then $d_{h_s} (0, \cdot)$ restricted to $D^3(r)$ is $O(3)$-invariant. Moreover, $\ov{g}$ is compatible with the standard spherical structure on $D^3(r)$ as well. It follows that $h'_{s,u}$ is compatible with the standard spherical structure on $D^3(r)$. For Assertion~\ref{ass_Prop_rounding_e} assume that $h_s$ is conformally flat. Then by Assumption~\ref{hyp_Lem_rounding_intermediate_ii} $h_s$ is conformally equivalent to $g_{\eucl}$ on $B^3(\ov{r}_1)$. Since $\ov{g}$ is conformally equivalent to $g_{\eucl}$ as well, this implies that $h'_{s,u}$ is conformally equivalent to $g_{\eucl}$ on $B^3(\ov{r}_1)$. The conformal flatness on $B^3 \setminus B^3(\ov{r}_1)$ follows from Assertion~\ref{ass_Prop_rounding_a}. Lastly, we claim that Assertion~\ref{ass_Prop_rounding_f} holds for $w'_{s,u} = w_s$ if $\varepsilon$ is chosen small enough. To see this note first that there is a constant $C < \infty$ that is independent of $\varepsilon, s, u$ such that \[ \big| (h'_{s,u})_{ij} - \ov{g}_{ij} \big| \leq C r^2, \] \[ \big| \partial_i (h'_{s,u})_{jk} \big| \leq C |\nu' (r)| \cdot |h_s-\ov{g}| + C r \leq C \varepsilon r + C r, \] \begin{multline*} \big| \partial^2 ( h'_{s,u})_{ij} - (1-u \nu) \cdot \partial^2 (h_s)_{ij} - u \nu \cdot \partial^2 \ov{g}_{ij} \big| \\ \leq C (|\nu''| + r^{-1} |\nu'| ) |h_{s} - \ov{g}| + C |\nu'| \cdot | \partial (h_{s} - \ov{g})| \leq C \varepsilon^2 + C \varepsilon . \end{multline*} Since $h'_{s,u} = h_s$ if $d_{h_s} (0, \cdot) > e^{-1/\varepsilon}$ and since $w_s$ is uniformly bounded in the $C^2$-norm for all $s \in X_{PSC}$, we obtain that \[ \big| R_{w_s^4 h'_{s,u}} - (1-u\nu) R_{w_s^4 h_{s}} - u\nu R_{w_s^4 \ov{g}} \big| \leq c \] if $\varepsilon \leq \ov\varepsilon (c)$.
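For the reader's convenience, we recall the classical transformation law for the scalar curvature under a conformal change in dimension $3$, which underlies the last estimate (a standard identity, stated here only for reference): for a smooth function $w > 0$,
\[ R_{w^4 g} = w^{-5} \big( - 8 \Delta_g w + R_g \, w \big), \]
where $\Delta_g$ denotes the Laplace--Beltrami operator of $g$. In particular, the sign of $R_{w_s^4 h'_{s,u}}$ is governed by the second order expression $-8 \Delta_{h'_{s,u}} w_s + R_{h'_{s,u}} w_s$, which is why the $C^2$-bounds above suffice to control it.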
By compactness of $X_{PSC}$ we obtain that $R_{w_s^4 h'_{s,u}} > 0$ for sufficiently small $\varepsilon$. \end{proof} \section{Partial homotopies} \label{sec_partial_homotopy} In this section we introduce partial homotopies and certain modification moves, which will be used in Section~\ref{sec_deforming_families_metrics}. \subsection{General setup} \label{subsec_gen_setup} For the following discussion we fix a pair of finite simplicial complexes $(\mathcal{K}, \mathcal{L})$, where $\mathcal{L} \subset \mathcal{K}$ is a subcomplex. We will denote by $L \subset K$ the geometric realizations of $\mathcal{L} \subset \mathcal{K}$. When there is no chance of confusion, we will refer to the pair $(K,L)$ instead of $(\mathcal{K}, \mathcal{L})$. Consider a fiber bundle $E \to K$ over $K$ whose fibers are smooth compact Riemannian 3-manifolds. We will view this bundle as a continuous family of Riemannian manifolds $(M^s, g^s)_{s \in K}$ (see Remark~\ref{rmk_fiber_bundle_construction}). Note that a particularly interesting case is the case in which $(M^s)_{s \in K}$ is given by a trivial family of the form $(M, g^s)_{s \in K}$, where $(g^s)_{s \in K}$ is a continuous family of Riemannian metrics on a fixed compact 3-manifold $M$. \begin{definition} A metric $g$ on a compact 3-manifold $M$ is called a {\bf CC-metric} if $(M,g)$ is homothetic to a quotient of the round sphere or round cylinder. \end{definition} If $L \neq \emptyset$, then we assume that the metrics $g^s$, $s \in L$, are CC-metrics. If none of the $M^s$ are diffeomorphic to a spherical space form or a quotient of a cylinder, then no such metrics exist on any $M^s$ and therefore we must have $L = \emptyset$. We will also fix a closed subset $K_{PSC} \subset K$ with the property that $(M^s, g^s)$ has positive scalar curvature for all $s \in K_{PSC}$. 
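For orientation, we recall the standard examples of CC-metrics (this is classical and included only for reference): up to homothety, such a metric is a quotient of the round sphere or of the round cylinder $S^2 \times \mathbb{R}$, e.g.
\[ S^3/\Gamma \quad (\Gamma \subset \operatorname{Isom}(S^3) \text{ acting freely}), \qquad S^2 \times S^1, \qquad \mathbb{RP}^2 \times S^1, \qquad \mathbb{RP}^3 \# \mathbb{RP}^3, \]
where the last three carry quotients of the cylindrical metric $g_{S^2} + dr^2$.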
Our ultimate goal in this and the following section (see Theorem~\ref{Thm_main_deform_to_CF}) will be to construct a transversely continuous family of metrics $(h^s_t)_{s \in K, t \in [0,1]}$ on $(M^s)_{s \in K}$ with the property that: \begin{enumerate}[label=(\arabic*)] \item \label{prop_family_1} $h^s_0 = g^s$ for all $s \in K$, \item \label{prop_family_2} $h^s_1$ is conformally flat and PSC-conformal for all $s \in K$, \item \label{prop_family_3} $h^s_t$ are CC-metrics for all $s \in L$, $t \in [0,1]$, \item \label{prop_family_4} $(M^s, h^s_t)$ is PSC-conformal for any $s \in K_{PSC}$, $t \in [0,1]$; see Definition~\ref{Def_PSC_conformal}. \end{enumerate} Here $(h^s_t)_{s \in K, t \in [0,1]}$ is said to be transversely continuous if it is transversely continuous in every family chart of $(M^s)_{s \in K}$, or equivalently, if $(h^s_t)_{(s,t) \in K \times [0,1]}$ is transversely continuous in the sense of Definition~\ref{def_continuity_smooth_objects} on the continuous family $(M^s \times \{ t \})_{(s,t) \in K \times [0,1]}$. In order to achieve this, we will apply Theorem~\ref{thm_existence_family_k} to find a continuous family of singular Ricci flows $(\mathcal{M}^s )_{s \in K}$ whose family of time-0-slices $(\mathcal{M}^s_0, g^s_0)_{s \in K}$ is isomorphic to $(M^s, g^s)_{s \in K}$; in the following we will identify both objects. By Theorem~\ref{Thm_sing_RF_uniqueness} for all $s \in L$ all time-slices of $\mathcal{M}^s$ are CC-metrics. By Theorem~\ref{Thm_PSC_preservation} the flow $\mathcal{M}^s$ has positive scalar curvature for all $s \in K_{PSC}$. In the following we will write $\mathcal{M}^s = (\mathcal{M}^s, \t^s, g^s, \partial^s_\t)$ and we fix a transversely continuous family of $\mathcal{R}$-structures \[ \mathcal{R}^s = ( g^{\prime, s}, \linebreak[1] \partial^{\prime, s}_\t, \linebreak[1] U^s_{S2}, \linebreak[1] U^s_{S3}, \linebreak[1] \mathcal{S}^s) \] for each $\mathcal{M}^s$. 
Such a family exists due to Theorem~\ref{Thm_rounding} and we can ensure that \begin{equation} \label{eq_RR_trivial_over_L} g^{\prime,s} = g^s, \quad \partial^{\prime,s}_\t = \partial^s_\t \qquad \text{for all} \quad s \in L. \end{equation} We will discuss further geometric and analytic properties of this structure in Section~\ref{sec_deforming_families_metrics}; for the purpose of this section it suffices to assume that $(\mathcal{R}^s)_{s \in K}$ satisfies the properties of Definitions~\ref{Def_R_structure} and \ref{Def_RR_structure_transverse_cont}. Let $T \geq 0$. The goal of this section will be to introduce a new type of partially defined homotopy starting from the family of metrics $( g^{\prime,s}_{T})_{s \in K}$ on the family of time-$T$-slices $(\mathcal{M}^s_T)_{s \in K}$. We will see that if $T = 0$, then under certain conditions this partial homotopy implies the existence of the desired family $(h^s_t)_{s \in K, t \in [0,1]}$ satisfying Properties~\ref{prop_family_1}--\ref{prop_family_4}. We will moreover discuss ``moves'' that will allow us to ``improve'' a given partial homotopy and enable us to flow it backwards in time, i.e. decrease the time parameter $T$.
For this purpose, fix some $T \geq 0$ and consider the simplicial pair $(K,L)$, the continuous family of singular Ricci flows $(\mathcal{M}^s)_{s \in K}$ over $K$, as well as the family of $\mathcal{R}$-structures $(\mathcal{R}^s)_{s \in K}$ from Subsection~\ref{subsec_gen_setup}. \begin{definition}[Partial homotopy] \label{Def_partial_homotopy} For every simplex $\sigma \subset K$ consider a metric deformation $( Z^\sigma, \linebreak[1] (g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]})$ and a transversely continuous family of embeddings $(\psi^\sigma_s : Z^\sigma \to \mathcal{M}^s_T )_{s \in \sigma}$. We call $\{ ( Z^\sigma, \linebreak[1] (g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ a {\bf partial homotopy (at time $T$ relative to $L$ for the family of $\mathcal{R}$-structures $(\mathcal{R}^{s})_{s \in K}$)} if the following holds: \begin{enumerate}[label=(\arabic*)] \item \label{prop_def_partial_homotopy_1} For all $s \in \sigma \subset K$ we have $(\psi^{ \sigma}_s)^* g^{\prime,s}_{T} = g^{\sigma}_{s, 0}$. \item \label{prop_def_partial_homotopy_2} For all $s \in \tau \subsetneq \sigma \subset K$ we have $ \psi^{ \sigma}_s ( Z^{ \sigma} ) \subset \psi^{ \tau}_s ( Z^{ \tau} )$. \item \label{prop_def_partial_homotopy_3} For all $s \in \tau \subsetneq \sigma \subset K$ and $t \in [0,1]$ we have $((\psi^{ \tau}_s )^{-1} \circ \psi^{ \sigma}_s )^* g^\tau_{s,t} = g^\sigma_{s,t}$. \item \label{prop_def_partial_homotopy_4} For each $s \in \tau \subsetneq \sigma \subset K$ and for the closure $\mathcal{C}$ of each component of $Z^{ \tau} \setminus((\psi^{ \tau}_s )^{-1} \circ \psi^{ \sigma}_s ) ( Z^{ \sigma} )$ one (or both) of the following is true: \begin{enumerate}[label=(\roman*)] \item \label{prop_def_partial_homotopy_4i} $\psi^\tau_s (\mathcal{C}) \subset U_{S2}^s$ and $\psi^\tau_s (\mathcal{C})$ is a union of spherical fibers.
Moreover, for every $t \in [0,1]$ the metric $(\psi^\tau_s)_* g^\tau_{s,t}$ restricted to $\psi^\tau_s (\mathcal{C})$ is compatible with the restricted spherical structure. \item \label{prop_def_partial_homotopy_4ii} $\partial \mathcal{C} = \emptyset$, $\psi^\tau_s (\mathcal{C}) \subset U^s_{S3}$ and for every $s' \in \tau$ near $s$ the metric $g^\tau_{s',t}$ restricted to $\mathcal{C}$ is a multiple of $g^\tau_{s',0}$ for all $t \in [0,1]$. \end{enumerate} \item \label{prop_def_partial_homotopy_5} For every $\sigma \subset K$ and every component $\Sigma \subset \partial Z^\sigma$ the image $\psi^\sigma_s (\Sigma)$ is a regular fiber of $\SS^s$ for all $s \in \sigma$. Moreover, there is an $\varepsilon > 0$ that is independent of $s$ such that for all $t \in [0,1]$ the metric $(\psi^\sigma_s)_* g^{\sigma}_{s,t}$ is compatible with $\SS^s$ on an $\varepsilon$-collar neighborhood of $\psi^\sigma_s (\Sigma)$ inside $\psi^\sigma_s (Z^\sigma)$. \item \label{prop_def_partial_homotopy_6} For every $s \in \sigma \subset L$ we have $\psi^\sigma_s (Z^\sigma) = \emptyset$ or $\mathcal{M}^s_T$ and the metrics $g^\sigma_{s,t}$, $t \in [0,1]$ are either multiples of the same constant curvature metric or they are isometric to quotients of the round cylinder and admit the same local isometric $O(3)$-actions. \end{enumerate} We say that the partial homotopy is {\bf PSC-conformal over $s \in K$} if for any simplex $\sigma \subset K$ with $s \in \sigma$ and any $t \in [0,1]$ the Riemannian manifold $(Z^\sigma, g^\sigma_{s,t})$ is PSC-conformal. If $Z^\sigma = \emptyset$ for all $\sigma \subset K$, then the partial homotopy is called {\bf trivial}. \end{definition} In other words, a partial homotopy is given by metric deformations $(\psi^\sigma_s)_* g^\sigma_{s,t}$ on the $s$-dependent domains $\psi^\sigma_s (Z^\sigma) \subset \mathcal{M}^s_T$ starting from $(\psi^\sigma_s)_* g^\sigma_{s,0} = g^{\prime,s}_T$ (see Property~\ref{prop_def_partial_homotopy_1}). 
For a fixed $s \in K$ there may be several such domains and deformations, for different simplices $\sigma \subset K$ containing $s$. Property~\ref{prop_def_partial_homotopy_2} states that these domains are nested and decrease in size as the dimension of $\sigma$ increases. Property~\ref{prop_def_partial_homotopy_3} states that any two deformations $(\psi^\sigma_s)_* g^\sigma_{s,t}$, $(\psi^\tau_s)_* g^\tau_{s,t}$ ($\tau \subset \sigma$) for the same parameter $s$ agree on the smaller domain $\psi^\sigma_s (Z^\sigma)$ and Property~\ref{prop_def_partial_homotopy_4} imposes a symmetry condition on the larger deformation $(\psi^\tau_s)_* g^\tau_{s,t}$ over the difference $\psi^\tau_s (Z^\tau) \setminus \psi^\sigma_s (Z^\sigma)$. The use of the parameter $s'$ in Property~\ref{prop_def_partial_homotopy_4}\ref{prop_def_partial_homotopy_4ii} ensures that $\sigma$ admits an \emph{open} cover $\sigma = U^\sigma_{(i)} \cup U^\sigma_{(ii)}$ such that Property~\ref{prop_def_partial_homotopy_4}\ref{prop_def_partial_homotopy_4i} holds for all $s \in U^\sigma_{(i)}$ and Property~\ref{prop_def_partial_homotopy_4}\ref{prop_def_partial_homotopy_4ii} holds for all $s \in U^\sigma_{(ii)}$; this fact will be important in the proof of Proposition~\ref{prop_extending} (see also Proposition~\ref{prop_exten_round}). Property~\ref{prop_def_partial_homotopy_5} is a technical property, which will allow us to extend a deformation $(\psi^\sigma_s)_* g^\sigma_{s,t}$ to a larger domain using metrics that are compatible with $\SS^s$. It could potentially be replaced by a condition requiring the boundary components of $\psi^\sigma_s (Z^\sigma)$ to be umbilic round spheres with respect to all metrics $(\psi^\sigma_s)_* g^\sigma_{s,t}$, plus some conditions on the higher derivatives.
In the cylindrical case Property~\ref{prop_def_partial_homotopy_6} implies that for fixed $s \in \sigma$ the metrics $g^\sigma_{s,t}$, $t \in [0,1]$, can locally be expressed as $a_t^2 g_{S^2} + b_t^2 dr^2$ for some continuously varying $a_t, b_t > 0$. On a more philosophical level, Definition~\ref{Def_partial_homotopy} formalizes the idea of a continuous family of metric deformations on subsets of $\mathcal{M}^s_T$, which are defined up to some ambiguity. This ambiguity is ``supported'' on the differences $\psi^\tau_s (Z^\tau) \setminus \psi^\sigma_s (Z^\sigma)$, which we will later choose to be subsets of small scale $\rho$. The deformations $(\psi^\tau_s)_* g^\tau_{s,t}$ restricted to these differences are required to be of a very controlled symmetric form. This will allow us to argue that the ambiguity expressed in Definition~\ref{Def_partial_homotopy} is ``contractible'' in a certain sense. Lastly, let us comment on the use of the PSC-conformal condition in Definition~\ref{Def_partial_homotopy}. The reason that we are using this condition instead of the standard positive scalar curvature condition is rather subtle, but it will be central in the proof of Proposition~\ref{prop_extending} below. In short, it has to do with poor contractibility properties of certain spaces of warped product metrics with positive scalar curvature. The PSC-conformal condition, on the other hand, is much more forgiving; for instance, a metric is automatically PSC-conformal if it is compatible with a spherical structure whose domain is the entire manifold. \subsection{Constructing the desired family of metrics from a partial homotopy} Our strategy in Section~\ref{sec_deforming_families_metrics} will be to inductively construct partial homotopies at a sequence of decreasing times $T_i$ with $T_i = 0$ for some large $i$. The following proposition will allow us to convert a partial homotopy at time $0$ to a conventional homotopy.
It essentially states that we can construct a family $(h^s_t)_{t \in [0,1],s \in K}$ satisfying Properties~\ref{prop_family_1}--\ref{prop_family_4} from Subsection~\ref{subsec_gen_setup} from a partial homotopy at time $0$ if the maps $\psi^\sigma_s$ are all surjective. \begin{proposition} \label{prop_partial_homotopy_standard_homotopy} Suppose there is a partial homotopy $\{ ( Z^\sigma, \linebreak[1] (g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ at time $0$ relative to $L$ with the property that $\psi^\sigma_s (Z^\sigma) = \mathcal{M}^s_0$ for all $s \in \sigma \subset K$. Then there is a family of metrics $(h^s_t)_{t \in [0,1],s \in K}$ satisfying Properties~\ref{prop_family_1}--\ref{prop_family_3} from Subsection~\ref{subsec_gen_setup}. Moreover, for any $s \in K$, $t \in [0,1]$ the following holds. If the partial homotopy is PSC-conformal over $s$, then $(M^s, h^s_t)$ is PSC-conformal (compare with Property~\ref{prop_family_4} above). \end{proposition} \begin{proof} By Definition~\ref{Def_partial_homotopy}\ref{prop_def_partial_homotopy_3} we have $(\psi^\tau_s)_* g^\tau_{s,t} = (\psi^\sigma_s)_* g^\sigma_{s,t}$ for all $s \in \tau \subsetneq \sigma \subset K$, $t \in [0,1]$. So we can define $h^s_t := (\psi^\sigma_s)_* g^\sigma_{s,t}$ for any $s \in \sigma \subset K$. The asserted properties of the family $(M^s = \mathcal{M}^s_0, (h^s_t)_{t \in [0,1]})_{s \in K}$ are direct consequences of Definition~\ref{Def_partial_homotopy}.
\end{proof} \subsection{Moving a partial homotopy backwards in time} The following proposition allows us to construct a partial homotopy $\{ ( Z^\sigma, \linebreak[1] (\ov g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\ov\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ at an earlier time $T' \leq T$ from a partial homotopy $\{ ( Z^\sigma, \linebreak[1] (g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ at time $T$ such that the domains $\ov\psi^\sigma_s (Z^\sigma)$ arise by flowing $\psi^\sigma_s (Z^\sigma)$ backwards in time by $T - T'$ under the flow of $\partial^{\prime,s}_\t$. In order to achieve this we require that the differences $\psi^\tau_s (Z^\tau) \setminus \psi^\sigma_s (Z^\sigma)$ remain in the support of $\mathcal{R}^s$. In the following we denote by $x^{\partial^{\prime,s}_\t} (t')$ and $X^{\partial^{\prime,s}_\t}(t')$ the images of $x \in \mathcal{M}^s_t$ and $X \subset \mathcal{M}^s_t$, respectively, under the time $(t'-t)$-flow of the vector field $\partial^{\prime,s}_\t$; this is the same notion as in Definition~\ref{def_points_in_RF_spacetimes} with $\partial_\t$ replaced by $\partial^{\prime,s}_\t$. \begin{proposition} \label{prop_move_part_hom_bckwrds} Consider a partial homotopy $\{ ( Z^\sigma, \linebreak[1] ( g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ at time $T$ relative to $L$ and let $T' \in [0, T]$. Assume that: \begin{enumerate}[label=(\roman*)] \item \label{hyp_lem_move_part_hom_bckwrds_i} For all $s \in \sigma \subset K$ all points of $\psi^\sigma_s(Z^\sigma)$ survive until time $T'$ with respect to the flow of $\partial^{\prime,s}_\t$. \item \label{hyp_lem_move_part_hom_bckwrds_ii} For all $s \in \tau \subsetneq \sigma \subset K$ and $t' \in [T', T]$ we have \[ \big( \psi^\tau_s(Z^\tau) \setminus \psi^\sigma_s(Z^\sigma) \big)^{\partial^{\prime,s}_\t} (t') \subset U^s_{S2} \cup U^s_{S3}.
\] \end{enumerate} Then there is a partial homotopy of the form $\{ ( Z^\sigma, \linebreak[1] (\ov g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\ov\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ at time $T'$ relative to $L$ such that for all $s \in \sigma \subset K$: \begin{enumerate}[label=(\alph*)] \item \label{ass_lem_move_part_hom_bckwrds_a} $\ov\psi^\sigma_s (Z^\sigma) = (\psi^\sigma_s (Z^\sigma))^{\partial^{\prime,s}_\t}(T')$. \item \label{ass_lem_move_part_hom_bckwrds_b} If $\{ ( Z^\sigma, \linebreak[1] (g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ is PSC-conformal over $s$ and if $g^{\prime,s}_{t'}$ restricted to $(\psi^\sigma_s(Z^\sigma))^{\partial^{\prime,s}_\t} (t')$ is PSC-conformal for all $t' \in [T',T]$, then $\{ ( Z^\sigma, \linebreak[1] (\ov g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\ov\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ is also PSC-conformal over $s$. \end{enumerate} \end{proposition} \begin{proof} We can define maps $( \psi^{\sigma}_s )_{t'} : Z^\sigma \to \mathcal{M}^s_{t'}$, $t' \in [T',T]$, as follows: \[ \big( \psi^{\sigma}_s \big)_{t'} (z) := \big( \psi^{\sigma}_s (z) \big)^{\partial^{\prime,s}_\t} (t'). \] Set \begin{equation*} \label{eq_def_tdpsi} \ov\psi^{\sigma}_s := \big( \psi^{\sigma}_s \big)_{T'} \end{equation*} and \[ \ov{g}^\sigma_{s,t} := \begin{cases} \big( \psi^\sigma_s \big)^*_{T' + 2 t (T - T')} g^{\prime,s}_{T' + 2 t (T - T')} & \text{if $t \in [0, \tfrac12]$} \\ g^\sigma_{s, 2 t - 1} & \text{if $t \in [\tfrac12, 1]$} \end{cases} \] Then Assertions~\ref{ass_lem_move_part_hom_bckwrds_a} and \ref{ass_lem_move_part_hom_bckwrds_b} hold by construction. Let us now verify all properties of Definition~\ref{Def_partial_homotopy} for $\{ ( Z^\sigma, \linebreak[1] (\ov g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\ov\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$.
By construction $(Z^\sigma, (\ov{g}^\sigma_{s,t})_{s \in \sigma, t \in [0,1]})$ is a metric deformation for all $\sigma \subset K$. Properties~\ref{prop_def_partial_homotopy_1}--\ref{prop_def_partial_homotopy_3} of Definition~\ref{Def_partial_homotopy} hold by construction as well. For Property~\ref{prop_def_partial_homotopy_4} consider the closure $\mathcal{C}$ of a component of \[ Z^{ \tau} \setminus((\ov\psi^{ \tau}_s )^{-1} \circ \ov\psi^{ \sigma}_s ) ( Z^{ \sigma} ) = Z^{ \tau} \setminus((\psi^{ \tau}_s )^{-1} \circ \psi^{ \sigma}_s ) ( Z^{ \sigma} ). \] By Assumption~\ref{hyp_lem_move_part_hom_bckwrds_ii} we have \begin{equation} \label{eq_CCinUS2US3} \mathcal{C}^{\partial^{\prime,s}_\t} (t') \subset U^s_{S2} \cup U^s_{S3} \qquad \text{for all} \quad t' \in [T', T]. \end{equation} We can apply Property~\ref{prop_def_partial_homotopy_4} of Definition~\ref{Def_partial_homotopy} to $\mathcal{C}$ and the original partial homotopy, which implies that there are two cases. \medskip \textit{Case 1: Property~\ref{prop_def_partial_homotopy_4}\ref{prop_def_partial_homotopy_4i} holds for the original partial homotopy. \quad} So $\psi^\tau_s (\mathcal{C}) \subset U^s_{S2}$, $\psi^\tau_s (\mathcal{C})$ is a union of fibers of $\SS^s$ and $(\psi_s^\tau)_* g^\tau_{s,t}$ restricted to $\psi^\tau_s (\mathcal{C})$ is compatible with $\SS^s$. By (\ref{eq_CCinUS2US3}) and Definition~\ref{Def_R_structure}\ref{prop_def_RR_1} we know that $\mathcal{C}^{\partial^{\prime,s}_\t} (t') \subset U^s_{S2}$ for all $t' \in [T', T]$ and thus by Definition~\ref{Def_R_structure}\ref{prop_def_RR_2}--\ref{prop_def_RR_4} the set $\ov\psi^\tau_s (\mathcal{C}) = \mathcal{C}^{\partial^{\prime,s}_\t} (T')$ is a union of fibers of $\SS^s$ and $(\ov\psi^\tau_s)_* \ov{g}^\tau_{s,t}$ restricted to $\ov\psi^\tau_s (\mathcal{C})$ is compatible with $\SS^s$. \medskip \textit{Case 2: Property~\ref{prop_def_partial_homotopy_4}\ref{prop_def_partial_homotopy_4ii} holds for the original partial homotopy.
\quad} So $\partial\mathcal{C} = \emptyset$, $\psi^\tau_s (\mathcal{C}) \subset U^s_{S3}$ and for every $s' \in \tau$ near $s$ the metric $g^\tau_{s', t}$, $t \in [0,1]$, is a multiple of $g^\tau_{s', 0}$. By (\ref{eq_CCinUS2US3}) and Definition~\ref{Def_R_structure}\ref{prop_def_RR_5} we have $\mathcal{C}^{\partial^{\prime,s}_\t} (T') \subset U^s_{S2}$ or $\mathcal{C}^{\partial^{\prime,s}_\t} (T') \subset U^s_{S3}$. \medskip \textit{Case 2a: $\mathcal{C}^{\partial^{\prime,s}_\t} (T') \subset U^s_{S2}$. \quad} By Definition~\ref{Def_R_structure}\ref{prop_def_RR_6} there is a $T^* \in [T', T)$ such that $\mathcal{C}^{\partial^{\prime,s}_\t} (t') \subset U^s_{S3}$ for all $t' \in (T^*, T]$ and $\mathcal{C}^{\partial^{\prime,s}_\t} (t') \subset U^s_{S2} \setminus U^s_{S3}$ for all $t' \in [T', T^*]$. Using Definition~\ref{Def_R_structure}\ref{prop_def_RR_2}--\ref{prop_def_RR_4}, \ref{prop_def_RR_7} we can argue as in Case~1 that $(\ov\psi^\tau_s)_* \ov{g}^\tau_{s,t}$ restricted to $\ov\psi^\tau_s (\mathcal{C})$ is compatible with $\SS^s$. \medskip \textit{Case 2b: $\mathcal{C}^{\partial^{\prime,s}_\t} (T') \subset U^s_{S3}$. \quad} By Definition~\ref{Def_R_structure}\ref{prop_def_RR_6} we have $\mathcal{C}^{\partial^{\prime,s}_\t} (t') \subset U^s_{S3}$ for all $t' \in [T', T]$. By openness of $\cup_{s' \in K} U^{s'}_{S3} \subset \cup_{s' \in K} \mathcal{M}^{s'}$ the same is true for $s' \in \tau$ near $s$. So by Definition~\ref{Def_R_structure}\ref{prop_def_RR_7} for $s' \in \tau$ near $s$ we obtain that $\ov{g}^\tau_{s',t}$ is a multiple of $\ov{g}^\tau_{s',0}$. \medskip Next, consider Property~\ref{prop_def_partial_homotopy_5} of Definition~\ref{Def_partial_homotopy}. Fix some $\Sigma \subset \partial Z^\sigma$. Then $\psi^\sigma_s (\Sigma) \subset U^s_{S2}$. We can again argue as before that $(\psi^\sigma_s (\Sigma))^{\partial^{\prime,s}_\t} (t') \subset U^s_{S2}$ for all $t' \in [T', T]$.
Therefore, in a neighborhood of $(\psi^\sigma_s (\Sigma))^{\partial^{\prime,s}_\t} (t')\subset U^s_{S2}$ the metric $g^{\prime,s}_{t'}$ is compatible with $\SS^s$. Property~\ref{prop_def_partial_homotopy_5} now follows from Definition~\ref{Def_R_structure}\ref{prop_def_RR_2}--\ref{prop_def_RR_4}. Property~\ref{prop_def_partial_homotopy_6} is a consequence of (\ref{eq_RR_trivial_over_L}). \end{proof} \subsection{Passing to a simplicial refinement} Consider the simplicial pair $(\mathcal{K}, \mathcal{L})$ with geometric realization $(K,L)$, as discussed in Subsection~\ref{subsec_gen_setup}. Let $\mathcal{K}'$ be a simplicial refinement of $\mathcal{K}$, let $\mathcal{L}'$ be the corresponding simplicial refinement of $\mathcal{L}$, and identify the geometric realizations of $(\mathcal{K}', \mathcal{L}')$ with $(K,L)$. The next proposition states that given a partial homotopy respecting the simplicial structure $\mathcal{K}$, we may construct a canonical partial homotopy respecting the simplicial structure $\mathcal{K}'$. \begin{proposition} \label{prop_simp_refinement} Let $\mathcal{K}'$ be a simplicial refinement of $\mathcal{K}$. 
If $\{ ( Z^\sigma, \linebreak[1] ( g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ is a partial homotopy at time $T$ relative to $L$ that respects the simplicial structure $\mathcal{K}$, then there is a partial homotopy $\{ ( \ov{Z}^\sigma, \linebreak[1] (\ov g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\ov\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ at time $T$ relative to $L$ that respects the simplicial structure $\mathcal{K}'$ such that: \begin{enumerate}[label=(\alph*)] \item For any $\sigma' \in \mathcal{K}'$ the following holds: If $\sigma \in \mathcal{K}$ is the simplex with the smallest dimension such that $\sigma \supset \sigma'$, then $\ov\psi^{\sigma'}_s (\ov{Z}^{\sigma'}) = \psi^{\sigma}_s (Z^{\sigma})$ and $(\ov\psi^{\sigma'}_s)_* \ov{g}^{\sigma'}_{s,t} = (\psi^{\sigma}_s)_* g^\sigma_{s,t}$ for all $s \in \sigma'$, $t \in [0,1]$. \item If $\{ ( Z^\sigma, \linebreak[1] ( g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ is PSC-conformal over some $s \in K$, then so is $\{ ( \ov{Z}^\sigma, \linebreak[1] (\ov g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\ov\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$. \end{enumerate} \end{proposition} \begin{proof} For any $\sigma' \in \mathcal{K}'$ there is a unique simplex $\sigma_{\sigma'} \in \mathcal{K}$ that has minimal dimension and satisfies $\sigma_{\sigma'} \supset \sigma'$. Set \[ ( \ov{Z}^{\sigma' }, (\ov g^{\sigma'}_{s,t})_{s \in \sigma', t \in [0, 1]}, ( \ov\psi^{\sigma'}_s )_{s \in \sigma'}) := ( Z^{\sigma_{\sigma'}}, (g^{\sigma_{\sigma'}}_{s,t})_{s \in \sigma_{\sigma'}, t \in [0, 1]}, ( \psi^{\sigma_{\sigma'}}_s )_{s \in \sigma_{\sigma'}}). \] It is easy to see that this new data still defines a partial homotopy that satisfies the assertions of this proposition.
\end{proof} \subsection{Enlarging a partial homotopy} In the following we prove that we can enlarge the images $\psi^\sigma_s (Z^\sigma)$ by certain subsets contained in $U^s_{S2} \cup U^s_{S3}$ that are either unions of spherical fibers or round components. \begin{proposition} \label{prop_extending} Consider a partial homotopy $\{ ( Z^\sigma, \linebreak[1] (g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ at time $T$ relative to $L$. Fix some simplex $\sigma \subset K$. Let $\wh{Z}^\sigma$ be a compact 3-manifold with boundary, $\iota^\sigma : Z^\sigma \to \wh{Z}^\sigma$ an embedding and $(\wh\psi^\sigma_s : \wh{Z}^\sigma \to \mathcal{M}^s_T)_{s \in \sigma}$ a continuous family of embeddings. Assume that for all $s \in \sigma$: \begin{enumerate}[label=(\roman*)] \item \label{ass_ph_extend_i} $\psi^\sigma_s = \wh\psi^\sigma_s \circ \iota^\sigma$. \item \label{ass_ph_extend_ii} If $s \in \tau \subset \partial \sigma$, then $\wh\psi^\sigma_s (\wh{Z}^\sigma) \subset \psi^\tau_s (Z^\tau)$. \item \label{ass_ph_extend_iii} For the closure $Y$ of every component of $\wh{Z}^\sigma \setminus \iota^\sigma (Z^\sigma)$ one of the following is true uniformly for all $s \in \sigma$: \begin{enumerate}[label=(iii-\arabic*)] \item \label{ass_ph_extend_iii-1} $\wh\psi^\sigma_s (Y)$ is a union of fibers of $\SS^s$. \item \label{ass_ph_extend_iii-2} $\partial Y = \emptyset$, $\wh\psi^\sigma_s (Y) \subset U^s_{S3}$ and $(\wh\psi^\sigma_s)^* g^{\prime,s}_{T}$ is a multiple of the same constant curvature metric for all $s \in \sigma$. \end{enumerate} \item \label{ass_ph_extend_iv} If $\sigma \subset L$, then $\wh\psi^\sigma_s (\wh{Z}^\sigma) \setminus \psi^\sigma_s ( Z^\sigma) = \emptyset$ or $\mathcal{M}^s_T$ and $g^{\prime,s}_T$ is a CC-metric.
\end{enumerate} Then there is a family of metrics $(\wh{g}^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}$ on $\wh{Z}^\sigma$ such that \begin{equation} \label{eq_enlarged_partial_h} \{ ( Z^{\sigma'}, \linebreak[1] (g^{\sigma'}_{s,t})_{s \in \sigma', t \in [0,1]}, \linebreak[1] (\psi^{\sigma'}_s )_{s \in \sigma'}) \}_{\sigma' \subset K, \sigma' \neq \sigma} \cup \{ ( \wh{Z}^\sigma, \linebreak[1] (\wh{g}^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\wh\psi^\sigma_s )_{s \in \sigma}) \} \end{equation} is a partial homotopy at time $T$ relative to $L$. Moreover, if the original partial homotopy was PSC-conformal over some $s \in K$, then the new partial homotopy is PSC-conformal over $s$ as well. \end{proposition} \begin{proof} After replacing $Z^\sigma$ with $\iota^\sigma (Z^\sigma)$, $\psi^\sigma_s$ with $\wh\psi^\sigma_s |_{\iota^\sigma (Z^\sigma)}$ and $g^\sigma_{s,t}$ with $\iota^\sigma_* g^\sigma_{s,t}$, we may assume without loss of generality that $Z^\sigma \subset \wh{Z}^\sigma$ is an embedded submanifold, $\iota^\sigma = \id_{Z^\sigma}$ and $\psi^\sigma_s = \wh\psi^\sigma_s |_{Z^\sigma}$. Next, note that $\wh{Z}^\sigma \setminus Z^\sigma$ consists of finitely many connected components. In the following we will assume without loss of generality that this difference only contains one connected component. The proposition in its full generality will then follow by successively adding connected components to $Z^\sigma$. Let $Y$ be the closure of $\wh{Z}^\sigma \setminus Z^\sigma$.
By Definition~\ref{Def_partial_homotopy}\ref{prop_def_partial_homotopy_3}, for every two simplices $\tau_1 \subset \tau_2 \subset \partial \sigma$ and $s \in \tau_1$, $t \in [0,1]$ we have on $\wh{Z}^\sigma$: \[ \big( (\psi^{\tau_2}_s)^{-1} \circ \wh\psi^\sigma_s \big)^* g^{\tau_2}_{s,t} = \big( (\psi^{\tau_2}_s)^{-1} \circ \wh\psi^\sigma_s \big)^* \big( (\psi^{\tau_1}_s)^{-1} \circ \psi^{\tau_2}_s \big)^* g^{\tau_1}_{s,t} = \big( (\psi^{\tau_1}_s)^{-1} \circ \wh\psi^\sigma_s \big)^* g^{\tau_1}_{s,t}. \] Therefore, there is a continuous family of metrics $(k_{s,t})_{s \in \partial\sigma, t \in [0,1]}$ on $\wh{Z}^\sigma$ that satisfies \begin{equation} \label{eq_k_gtau} k_{s,t} = \big( (\psi^{\tau}_s)^{-1} \circ \wh\psi^\sigma_s \big)^* g^{\tau}_{s,t} \qquad \text{for all} \quad s \in \tau \subset \partial\sigma. \end{equation} \begin{Claim} We have \begin{alignat}{3} k_{s,0} &= \big( \wh\psi^\sigma_s \big)^* g^{\prime,s}_{T} & \qquad &\text{on} \quad \wh{Z}^\sigma &\qquad &\text{for}\quad s \in \partial\sigma, \label{eq_k_psi_g} \\ k_{s,t} &= g^{\sigma}_{s,t} & \qquad &\text{on} \quad Z^\sigma &\qquad &\text{for}\quad s \in \partial\sigma, \; t \in [0,1] \label{eq_k_iotag} \\ g^{\sigma}_{s,0} &= \big( \wh\psi^\sigma_s \big)^* g^{\prime,s}_{T} & \qquad &\text{on} \quad Z^\sigma &\qquad &\text{for}\quad s \in \sigma, \label{eq_psi_g_iotag} \end{alignat} \end{Claim} \begin{proof} For (\ref{eq_k_psi_g}), observe that for all $s \in \tau \subset \partial\sigma$ \[ k_{s,0} = \big( (\psi^{\tau}_s)^{-1} \circ \wh\psi^\sigma_s \big)^* (\psi^\tau_s)^* g^{\prime,s}_{T} = \big( \wh\psi^\sigma_s \big)^* g^{\prime,s}_{T}. \] (\ref{eq_k_iotag}) and (\ref{eq_psi_g_iotag}) follow from $\psi^\sigma_s = \wh\psi^\sigma_s |_{Z^\sigma}$ and \[ k_{s,t} \big|_{Z^\sigma} = \big( (\psi^{\tau}_s)^{-1} \circ \psi^\sigma_s \big)^* g^{\tau}_{s,t} = g^{\sigma}_{s,t}.
\qedhere\] \end{proof} \medskip Our goal will be to construct a metric deformation $(\wh{Z}^\sigma, (\wh{g}^\sigma_{s,t})_{s \in \sigma, t \in [0,1]})$ such that, among other things, \begin{alignat}{3} \wh{g}^\sigma_{s,t} &= k_{s,t} &\qquad &\text{on} \quad \wh{Z}^\sigma &\qquad &\text{for} \quad s \in \partial\sigma, \; t \in [0,1] \label{eq_h_k} \\ \wh{g}^\sigma_{s,0} &= \big(\wh\psi^\sigma_{s} \big)^* g^{\prime,s}_{T} &\qquad &\text{on} \quad \wh{Z}^\sigma &\qquad & \text{for} \quad s \in \sigma \label{eq_h_psig} \\ \wh{g}^\sigma_{s,t} &= g^\sigma_{s,t} &\qquad &\text{on} \quad Z^\sigma &\qquad & \text{for} \quad s \in \sigma, t \in [0,1] \label{eq_h_iotag} \end{alignat} \begin{Claim} If (\ref{eq_h_k})--(\ref{eq_h_iotag}) hold, then (\ref{eq_enlarged_partial_h}) satisfies Properties \ref{prop_def_partial_homotopy_1}--\ref{prop_def_partial_homotopy_3} of Definition~\ref{Def_partial_homotopy}. \end{Claim} \begin{proof} Property~\ref{prop_def_partial_homotopy_1} of Definition~\ref{Def_partial_homotopy} holds due to (\ref{eq_h_psig}). Property~\ref{prop_def_partial_homotopy_2} holds by Assumptions~\ref{ass_ph_extend_i} and \ref{ass_ph_extend_ii}. For Property~\ref{prop_def_partial_homotopy_3} observe that by (\ref{eq_k_gtau}) and (\ref{eq_h_k}) for any $s \in \tau \subset \partial\sigma$ we have \[ \big((\psi^{\tau}_s)^{-1} \circ \wh\psi^\sigma_s \big)^* g^{\tau}_{s,t} = k_{s,t} = \wh{g}^\sigma_{s,t} \qquad \text{on} \quad \wh{Z}^\sigma. \] On the other hand, for any $s \in \sigma \subset \partial \tau$ we have by Assumption~\ref{ass_ph_extend_i} \[ \psi^\tau_s (Z^\tau) \subset \psi^\sigma_s (Z^\sigma) = \wh\psi^\sigma_s (Z^\sigma). \] Thus $(\wh\psi^{\sigma}_s )^{-1} \circ \psi^\tau_s = (\psi^{\sigma}_s )^{-1} \circ \psi^\tau_s : Z^\tau \to Z^\sigma$ and by (\ref{eq_h_iotag}) we have on $Z^\tau$ \[ \big((\wh\psi^{\sigma}_s)^{-1} \circ \psi^\tau_s \big)^* \wh{g}^\sigma_{s,t}= \big((\psi^{\sigma}_s)^{-1} \circ \psi^\tau_s \big)^*g^\sigma_{s,t} = g^\tau_{s,t}. 
\qedhere \] \end{proof} We will now construct $(\wh{g}^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}$ such that (\ref{eq_h_k})--(\ref{eq_h_iotag}) hold. \bigskip \textit{Case 1: $\sigma\not\subset L$.}\quad We distinguish the two cases from Assumption~\ref{ass_ph_extend_iii}. \medskip \textit{Case 1a: Assumption~\ref{ass_ph_extend_iii-1} holds for all $s \in \sigma$. \quad} For every $s \in \sigma$ let $\SS^{\prime,s}$ be the pull back of $\SS^s$ along $\wh\psi^\sigma_s$. Then $(\SS^{\prime,s})_{s \in \sigma}$ is a transversely continuous family of spherical structures defined on neighborhoods of $Y$ in $\wh{Z}^\sigma$, $Y$ is a union of fibers of $\SS^{\prime,s}$ and $(\wh\psi^\sigma_s)^* g^{\prime,s}_T$ is compatible with $\SS^{\prime,s}$ for all $s \in \sigma$. Definition~\ref{Def_partial_homotopy}\ref{prop_def_partial_homotopy_5} implies that there is an $\varepsilon > 0$ such that for every $t \in [0,1]$ the metric $g^\sigma_{s,t}$ restricted to an $\varepsilon$-collar neighborhood of $\partial Z^\sigma$ inside $Z^\sigma$ is compatible with $\SS^{\prime,s}$. So after restricting $\SS^{\prime, s}$ to a smaller domain, we may assume in the following that for all $s \in \sigma$ we have $Y \subset \domain \SS^{\prime, s}$ and that $g^\sigma_{s,t}$ is compatible with $\SS^{\prime, s}$ on $Z^\sigma \cap \domain ( \SS^{\prime,s} )$ for all $t \in [0,1]$. \begin{Claim} $k_{s,t}$ is compatible with $\SS^{\prime,s}$ for all $s \in \partial\sigma$ and $t \in [0,1]$. \end{Claim} \begin{proof} Note first that due to (\ref{eq_k_iotag}) the metric $k_{s,t}$ restricted to $Z^\sigma$ is compatible with $\SS^{\prime,s}$. So it remains to check the compatibility only on $Y$. Choose $\tau \subset \partial \sigma$ such that $s \in \tau$. The set $((\psi^\tau_s)^{-1} \circ \wh\psi^\sigma_s)(Y)$ is contained in the closure $\mathcal{C}$ of a component of $Z^\tau \setminus ((\psi^\tau_s)^{-1} \circ \psi^\sigma_s)(Z^\sigma)$.
By Property~\ref{prop_def_partial_homotopy_4} of Definition~\ref{Def_partial_homotopy} there are two cases. In the first case (Case~\ref{prop_def_partial_homotopy_4}\ref{prop_def_partial_homotopy_4i}) the image $\psi^\tau_s (\mathcal{C})$ is a union of fibers of $\SS^s$ and $((\wh\psi^\sigma_s)_* k_{s,t} )|_{\wh\psi^\sigma_s(Y)} = ((\psi^\tau_s)_* g^\tau_{s,t})|_{\wh\psi^\sigma_s(Y)}$ is compatible with $\SS^s$ for all $t \in [0,1]$. It follows that $k_{s,t} |_Y$ is compatible with $\SS^{\prime,s}$ for all $t \in [0,1]$. In the second case (Case~\ref{prop_def_partial_homotopy_4}\ref{prop_def_partial_homotopy_4ii}) $\partial\mathcal{C} = \emptyset$, $\psi^\tau_s (\mathcal{C}) \subset U^s_{S3}$ and $g^\tau_{s,t}$ restricted to $\mathcal{C}$ is a multiple of $g^\tau_{s,0}$ for all $t \in [0,1]$. It follows that $((\wh\psi^\sigma_s)_* k_{s,t} )|_{\wh\psi^\sigma_s(Y)}= ((\psi^\tau_s)_* g^\tau_{s,t})|_{\wh\psi^\sigma_s(Y)}$ is a multiple of $((\wh\psi^\sigma_s)_* k_{s,0} )|_{\wh\psi^\sigma_s(Y)}= ((\psi^\tau_s)_* g^\tau_{s,0})|_{\wh\psi^\sigma_s(Y)}$ for all $t \in [0,1]$. So $k_{s,t}|_Y$ is a multiple of $k_{s,0}|_Y = ((\wh\psi^\sigma_s)^* g^{\prime,s}_T)|_Y$ for all $t \in [0,1]$, which is compatible with $\SS^{\prime,s}$. \end{proof} We can now construct $(\wh{g}^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}$ using Proposition~\ref{prop_extending_symmetric}. \medskip \textit{Case 1b: Assumption~\ref{ass_ph_extend_iii-2} holds for all $s \in \sigma$. \quad} Recall that in this case $\wh{Z}^\sigma$ is a disjoint union of $Z^\sigma$ and $Y$, $\wh\psi^\sigma_s (Y) \subset U^s_{S3}$ and there is a constant curvature metric $g^*$ on $Y$ and a continuous function $\lambda : \sigma \to \mathbb{R}_+$ such that $((\wh\psi^\sigma_s)^* g^{\prime,s}_{T} )|_Y = \lambda^2(s) g^*$ for all $s \in \sigma$. Set $(\wh{g}^\sigma_{s,t}) := (g^\sigma_{s,t})$ on $Z^\sigma$. Then (\ref{eq_h_iotag}) is satisfied and it remains to specify $(\wh{g}^\sigma_{s,t})$ on $Y$.
Let \[ A := \{ s \in \sigma \;\; : \;\; \wh\psi^\sigma_s (Y) \subset U^s_{S2} \} \] and for every $s \in A$ let $\SS^{\prime,s}$ be the pull back of $\SS^s$ to $Y$ along $\wh\psi^\sigma_s|_Y$. Note that $A$ is open in $\sigma$, $(\SS^{\prime,s})_{s \in A}$ is transversely continuous and $(\wh\psi^\sigma_s)^* g^{\prime,s}_T$ is compatible with $\SS^{\prime,s}$ for all $s \in A$. Moreover, if $A \neq \emptyset$, then $(Y, g^*)$ is isometric to the standard $S^3$ or $\mathbb{R} P^3$. Next let us analyze the family $(k_{s,t})_{s \in \partial\sigma, t \in [0,1]}$. For any $s \in \tau \subset \partial\sigma$ the set $\mathcal{C}_{\tau,s} := ((\psi^\tau_s)^{-1} \circ \wh\psi^\sigma_s) (Y)$ is a component of $Z^\tau$. Since $\tau$ is connected we have $\mathcal{C}_{\tau,s} = \mathcal{C}_\tau$ for all $s \in \tau$. By Definition~\ref{Def_partial_homotopy}\ref{prop_def_partial_homotopy_4}, (\ref{eq_k_gtau}) and (\ref{eq_k_psi_g}) there is a closed subset $E_\tau \subset A \cap \tau$ such that: \begin{enumerate} \item For all $s \in E_\tau$ and $t \in [0,1]$ the metric $k_{s,t}|_Y$ is compatible with $\SS^{\prime,s}$. \item For all $s \in \tau \setminus E_\tau$ and $t \in [0,1]$ the metric $k_{s,t} |_Y$ is a multiple of $k_{s,0} |_Y = ((\wh\psi^\sigma_s)^* g^{\prime,s}_T) |_Y = \lambda^2 (s) g^*$. \end{enumerate} Set $E := \cup_{\tau \subset \partial\sigma} E_\tau \subset A$ and notice that $E$ is closed. By Proposition~\ref{prop_exten_round} we can construct $(\wh{g}^\sigma_{s,t} |_Y)_{s \in \sigma, t \in [0,1]}$ such that (\ref{eq_h_k}) and (\ref{eq_h_psig}) hold and such that $\wh{g}^\sigma_{s,t}|_Y$ is compatible with $\SS^{\prime,s}$ for all $s \in A$ and $t \in [0,1]$. Moreover, $\wh{g}^\sigma_{s,t}|_Y$ is a multiple of $g^*$ for all $s \in A \setminus E'$ for some closed subset $E' \subset A \subset \sigma$. \bigskip \textit{Case 2: $\sigma\subset L$.}\quad We may assume that $Y \neq \emptyset$.
By Assumption~\ref{ass_ph_extend_iv} we have $\partial Y = \emptyset$, $\wh{Z}^\sigma = Y$ and $Z^\sigma = \emptyset$. \medskip \textit{Case 2a: $Y$ is a spherical space form. \quad} Due to Definition~\ref{Def_partial_homotopy}\ref{prop_def_partial_homotopy_6} and (\ref{eq_k_psi_g}) we have $k_{s,t} = \lambda^2_{s,t} ( \wh\psi^\sigma_s )^* g^{\prime, s}_T$ for all $s \in \partial \sigma$, $t \in [0,1]$, where $(s,t) \mapsto \lambda_{s,t} > 0$ is continuous and $\lambda_{s,0} = 1$. Extend this function to a continuous function $(s,t) \mapsto \td\lambda_{s,t} > 0$ on $\sigma \times [0,1]$ with $\td\lambda_{s,0} = 1$ for all $s \in \sigma$ and set $\wh{g}^\sigma_{s,t} := \td\lambda^2_{s,t} ( \wh\psi^\sigma_s )^* g^{\prime, s}_T$. \medskip \textit{Case 2b: $Y$ is a quotient of a cylinder. \quad} As in Case 1a let $\SS^{\prime,s}$ be the pull back of $\SS^s$ along $\wh\psi^\sigma_s$. For any $s \in \sigma$ we can split $( \wh\psi^\sigma_s )^* g^{\prime, s}_T = u_s + v_s$ into its components tangential and orthogonal to the fibers of $\SS^{\prime,s}$. For the same reasons as in Case~2a we have $k_{s,t} = \lambda^2_{s,t} u_s + \mu^2_{s,t} v_s$, where $(s,t) \mapsto \lambda_{s,t} > 0$, $(s,t) \mapsto \mu_{s,t} > 0$ are continuous functions with $\lambda_{s,0} = \mu_{s,0} = 1$. Let $\td\lambda_{s,t}, \td\mu_{s,t} : \sigma \times [0,1] \to \mathbb{R}_+$ be continuous extensions of these functions with $\td\lambda_{s,0} = \td\mu_{s,0} = 1$ for all $s \in \sigma$ and set $\wh{g}^\sigma_{s,t} := \td\lambda^2_{s,t} u_s + \td\mu^2_{s,t} v_s$. \bigskip We now verify that $(\wh{g}^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}$ satisfies the required properties in all cases. For this purpose, recall that by (\ref{eq_h_iotag}) for all $s \in \sigma$ the metric $\wh g^\sigma_{s,1}$ restricted to $Z^\sigma$ is conformally flat and PSC-conformal and $\wh g^\sigma_{s,1}$ restricted to a neighborhood of $\wh Z^\sigma \setminus \Int Z^\sigma$ is compatible with a spherical structure and therefore also conformally flat.
So by Lemma~\ref{Lem_PSC_conformal_enlarge} we obtain that $(\wh{Z}^\sigma, \wh g^\sigma_{s,1})$ is conformally flat and PSC-conformal for all $s \in \sigma$, which implies that $(\wh{Z}^\sigma, (\wh{g}^\sigma_{s,t})_{s \in \sigma, t \in [0,1]})$ is a metric deformation. Next, consider Definition~\ref{Def_partial_homotopy}. Properties~\ref{prop_def_partial_homotopy_5}, \ref{prop_def_partial_homotopy_6} of Definition~\ref{Def_partial_homotopy} hold by construction. Let us now verify Property~\ref{prop_def_partial_homotopy_4} of Definition~\ref{Def_partial_homotopy}. First assume that $s \in \tau \subset \partial\sigma$ and consider the embedding \[ F := (\psi^\tau_s)^{-1} \circ \wh\psi^\sigma_s : \wh{Z}^\sigma \longrightarrow Z^\tau. \] Consider the closure $\mathcal{C}$ of a component of $Z^\tau \setminus F(\wh{Z}^\sigma) = Z^\tau \setminus ( F(Z^\sigma) \cup F(Y))$. Let $\mathcal{C}'$ be the closure of a component of $Z^\tau \setminus F(Z^\sigma)$ such that $\mathcal{C}' \supset \mathcal{C}$. If $\mathcal{C}' = \mathcal{C}$, then there is nothing to show. So assume that $\mathcal{C}' \supsetneq \mathcal{C}$. Then $\mathcal{C}' \supset F(Y)$ and $\mathcal{C}$ is the closure of a component of $\mathcal{C}' \setminus F(Y)$. This implies that $\partial Y \neq \emptyset$ and therefore the construction of $(\wh{g}^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}$ was covered in Case~1a. Let us now consider the two possibilities of Property~\ref{prop_def_partial_homotopy_4} of Definition~\ref{Def_partial_homotopy} for $\mathcal{C}'$: \begin{enumerate}[label=(\roman*)] \item $\psi^\tau_s(\mathcal{C}')$ is a union of spherical fibers and $(\psi^\tau_s)_* g^\tau_{s,t}$ restricted to $\psi^\tau_s (\mathcal{C}')$ is compatible with the restricted spherical structure for all $t \in [0,1]$. Since $\psi^\tau_s (F(Y)) = \wh\psi^\sigma_s (Y)$ is a union of spherical fibers, the same is true if we replace $\mathcal{C}'$ by $\mathcal{C}$.
\item $\mathcal{C}'$ is closed, $\psi^\tau_s (\mathcal{C}') \subset U^s_{S3}$ and for every $s' \in \tau$ near $s$ the metric $g^\tau_{s',t}$ restricted to $\mathcal{C}'$ is a multiple of $g^\tau_{s', 0}$ for all $t \in [0,1]$. Again, since $\psi^\tau_s (F(Y)) = \wh\psi^\sigma_s (Y)$ is a union of spherical fibers, we obtain using Definition~\ref{Def_R_structure}\ref{prop_def_RR_1} that $\psi^\tau_s (\mathcal{C}') \subset U^s_{S2}$. This implies that $(\psi^\tau_s)_* g^\tau_{s,t}$ restricted to $\psi^\tau_s (\mathcal{C})$, being a multiple of $(\psi^\tau_s)_* g^\tau_{s,0} = g^{\prime,s}_T$ restricted to $\psi^\tau_s (\mathcal{C})$, is compatible with the restricted spherical structure. Hence this case implies Case (i). \end{enumerate} Next assume that $s \in \sigma \subset \partial\tau$ and consider the embedding \[ F := \big(\wh\psi^\sigma_s \big)^{-1} \circ \psi^\tau_s : Z^\tau \longrightarrow \wh{Z}^\sigma. \] Consider the closure $\mathcal{C}$ of a component of $\wh{Z}^\sigma \setminus F(Z^\tau)$. If $\mathcal{C}$ is also the closure of a component of $Z^\sigma \setminus F(Z^\tau)$, then we are done, so assume that this is not the case. It follows that $\mathcal{C} \supset Y$. Moreover, if $\mathcal{C}'_1, \ldots, \mathcal{C}'_m$ denote the closures of all components of $Z^\sigma \setminus F(Z^\tau)$ and $I \subset \{ 1, \ldots, m \}$ denotes the set of indices with the property that $\mathcal{C}'_i \cap Y \neq \emptyset$, then $\mathcal{C}= Y \cup_{i \in I} \mathcal{C}'_i$. If $(\wh{g}^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}$ was constructed according to Case 1b or Case 2, then $I = \emptyset$, $\mathcal{C} = Y$ and the conditions in Property~\ref{prop_def_partial_homotopy_4} of Definition~\ref{Def_partial_homotopy} hold by the discussions in these cases. So assume that $(\wh{g}^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}$ was constructed according to Case 1a.
Since $\partial\mathcal{C}'_i \neq \emptyset$ for all $i \in I$, Property~\ref{prop_def_partial_homotopy_4} of Definition~\ref{Def_partial_homotopy} implies that for all $i \in I$ the set $\wh\psi^\sigma_s (\mathcal{C}'_i)$ is a union of spherical fibers and for every $t \in [0,1]$ the metric $(\wh\psi^\sigma_s)_* \wh{g}^\sigma_{s,t}$ restricted to $\wh\psi^\sigma_s (\mathcal{C}'_i)$ is compatible with the restricted spherical structure. By construction, the same is true if we replace $\mathcal{C}'_i$ by $Y$. Therefore $\wh\psi^\sigma_s (\mathcal{C})$ is a union of spherical fibers and $(\wh\psi^\sigma_s)_* \wh{g}^\sigma_{s,t}$ restricted to $\wh\psi^\sigma_s (\mathcal{C})$ is compatible with the restricted spherical structure. It follows that $\mathcal{C}$ satisfies Property~\ref{prop_def_partial_homotopy_4}\ref{prop_def_partial_homotopy_4i} of Definition~\ref{Def_partial_homotopy} (with $\sigma$ and $\tau$ switched). Lastly, assume that the original partial homotopy was PSC-conformal over some $s \in K$. We will argue that (\ref{eq_enlarged_partial_h}) is PSC-conformal over $s$ as well. For this purpose it remains to consider the case $s \in \sigma$, and we only need to show that $(\wh{Z}^\sigma, \wh{g}^\sigma_{s,t})$ is PSC-conformal for all $t \in [0,1]$. Recall that $(Z^\sigma, g^\sigma_{s,t})$ is PSC-conformal. In Cases~1b, 2a, 2b, $Y$ is a component of $\wh{Z}^\sigma$ and we can use Lemma~\ref{Lem_PSC_conformal_enlarge} to show that $(Y, \wh{g}^\sigma_{s,t})$ is PSC-conformal. In Case~1a the metric $\wh{g}^\sigma_{s,t}$ is compatible with a spherical structure on $\wh{Z}^\sigma$ with the property that $Y$ is a union of spherical fibers. Therefore, we obtain again using Lemma~\ref{Lem_PSC_conformal_enlarge} that $(\wh{Z}^\sigma, \wh{g}^\sigma_{s,t})$ is PSC-conformal.
\end{proof} \subsection{Removing a disk from a partial homotopy} In this subsection we will show that one can modify a partial homotopy by the removal of a disk of controlled size from some $Z^\sigma$, without disturbing the partial homotopy property. To get an idea of one of the situations in which this operation will eventually be applied, consider a single singular Ricci flow $\mathcal{M}$ that undergoes a degenerate neck pinch at some time $T_0>0$. Right after the singular time, one observes a cap region modelled on the Bryant soliton, whose scale decreases to zero as $t\searrow T_0$. For certain times $T>T_0$ close to $T_0$, we will find that there are two equally admissible truncations of $\mathcal{M}_T$, which agree with one another modulo the removal of an approximate Bryant $3$-disk region. The following proposition will enable one to adjust a partial homotopy to accommodate the removal of such a disk region. \begin{proposition}[Disk removal] \label{prop_move_remove_disk} Consider a partial homotopy $\{ ( Z^\sigma, \linebreak[1] (g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ at time $T$ relative to $L$ and a closed subset $K_{PSC} \subset K$. Fix some simplex $\sigma_0 \subset K$ with $\sigma_0\cap L=\emptyset$, and assume that there is a collection of continuous families of embeddings $\{(\nu_{s,j} : D^3(1)=:D^3 \to \mathcal{M}^s_T )_{s \in \sigma_0}\}_{1\leq j\leq m}$ such that the following holds: \begin{enumerate}[label=(\roman*)] \item \label{lem_move_remove_disk_i} For all $s \in \sigma_0$ the embeddings $\{\nu_{s,j}\}_{1\leq j\leq m}$ have pairwise disjoint images and for every $j \in \{ 1,\ldots,m \}$ we have $\nu_{s,j} (D^3) \subset \psi^{\sigma_0}_s (\Int Z^{\sigma_0}) \cap U^s_{S2}$.
\item \label{lem_move_remove_disk_ii} For every $s \in \sigma_0$, $j \in \{1,\ldots,m \}$, the embedding $\nu_{s,j}$ carries the standard spherical structure on $D^3$ to $\SS^s$ restricted to $\nu_{s,j}(D^3)$. \item \label{lem_move_remove_disk_iii} $\nu_{s,j} (D^3) \cap \psi^\tau_s (Z^\tau) = \emptyset$ whenever $s\in{\sigma_0} \subsetneq \tau \subset K$, $j\in \{1,\ldots,m\}$. \item \label{lem_move_remove_disk_iv} If $\sigma_0$ is a maximal simplex, i.e. $\sigma_0$ is not properly contained in any other simplex $\sigma_1 \subset K$, then we assume additionally that for any $\tau \subset K$ with the property that $\tau \cap {\sigma_0} \neq \emptyset$ the following holds for all $s \in {\sigma_0} \cap \tau$, $j \in \{ 1,\ldots,m \}$: The image $\nu_{s,j} (D^3)$ does not contain an entire component of $\psi^{\tau}_s (Z^{\tau})$. \item \label{lem_move_remove_disk_v} $\{ ( Z^\sigma, \linebreak[1] (g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ is PSC-conformal over every $s\in K_{PSC}$. \item \label{lem_move_remove_disk_vi} For all $s \in \sigma_0 \cap K_{PSC}$ the Riemannian manifold \[ \big( \psi^{\sigma_0}_s (Z^{\sigma_0}) \setminus \cup_{j=1}^m \nu_{s,j} (\Int D^3), g^{\prime,s}_T \big) \] is PSC-conformal. \end{enumerate} Then we can find a partial homotopy $\{ (\td Z^\sigma, \linebreak[1] (\td g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\td\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ at time $T$ relative to $L$ such that the following holds: \begin{enumerate}[label=(\alph*)] \item \label{lem_move_remove_disk_a} $\td\psi^\tau_s (\td{Z}^\tau) = \psi^\tau_s (Z^\tau)$ for all $s\in \tau$, if $\tau \neq {\sigma_0}$. \item \label{lem_move_remove_disk_b} $\td\psi^{\sigma_0}_s (\td{Z}^{\sigma_0}) = \psi^{\sigma_0}_s (Z^{\sigma_0}) \setminus \cup_{j=1}^m \nu_{s,j} (\Int D^3)$ for all $s\in {\sigma_0}$. 
\item \label{lem_move_remove_disk_c} $\{ ( \td Z^\sigma, \linebreak[1] (\td g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\td\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ is PSC-conformal over every $s\in K_{PSC}$. \end{enumerate} \end{proposition} Before proceeding with the proof, we give an indication of some of the main points. For simplicity we will restrict to the case when only one disk is removed, and drop the index $j$. The main objective in the removal procedure is to ensure that the removed disk satisfies Definition~\ref{Def_partial_homotopy}\ref{prop_def_partial_homotopy_4} --- the $\SS$-compatibility of the metric deformation $(g^{\sigma_0}_{s,t})$ --- on the disk. If $\sigma_0$ is not maximal, we choose some simplex $\tau\supsetneq \sigma_0$. By applying Assumption~\ref{lem_move_remove_disk_iii} and Definition~\ref{Def_partial_homotopy}\ref{prop_def_partial_homotopy_4} to the pair $\sigma_0\subsetneq\tau$, we find that $(\psi^{\sigma_0}_s)_*g^{\sigma_0}_{s,t}$ is already $\SS^s$-compatible on the entire disk $\nu_s(D^3)$. Hence we may simply remove $\nu_s(D^3)$ from $\psi^{\sigma_0}_s(Z^{\sigma_0})$ and the verification of Definition~\ref{Def_partial_homotopy} amounts to unwinding of definitions. When $\sigma_0$ is maximal, the metric deformation $(\psi^{\sigma_0}_s)_*g^{\sigma_0}_{s,t}$ is typically not $\SS^s$-compatible on the $3$-disk $\nu_s(D^3)$, so we must modify it to respect the compatibility conditions on the system of metric deformations in a partial homotopy. This necessitates adjustments to metric deformations $(g^\tau_{s,t})$ for $\tau$ near $\sigma_0$ as well. 
To first approximation, the modification process involves two steps: we first apply a rounding procedure to $(\psi^{\sigma_0}_s)_*g^{\sigma_0}_{s,t}$ on a small ball centered at the singular fiber $\nu_s(0)$ of $\SS^s$ to produce a metric deformation that is $\SS^s$-compatible near $\nu_s(0)$; then we push forward the resulting metric deformation by an $\SS^s$-compatible diffeomorphism to inflate the region of $\SS^s$-compatibility so that it covers $\nu_s(D^3)$. The actual process is more involved than this for several reasons: One issue is that we need to cut off the modification procedure in a neighborhood of $\sigma_0$. To address this, we upgrade the two steps mentioned above --- the rounding procedure and the push forward by an inflationary diffeomorphism --- into continuous families depending on a parameter lying in the interval $[0,1]$, and implement the cutoff by arranging for this control parameter to be supported close to $\sigma_0\subset K$. Another major constraint on the modification process is the $\SS$-compatibility required by Definition~\ref{Def_partial_homotopy}\ref{prop_def_partial_homotopy_4} for various pairs $\tau\subsetneq \sigma$. The assumptions of Proposition~\ref{prop_move_remove_disk} imply that for pairs $\tau\subsetneq\sigma$ and $s\in \tau$ near $\sigma_0$, the portion of the $3$-disk $\nu_s(D^3)$ where Definition~\ref{Def_partial_homotopy}\ref{prop_def_partial_homotopy_4} imposes $\SS$-compatibility is either a disk $\nu_s(D^3(r))$ or an annulus $\nu_s(A^3(r,r'))$, where $r$ is bounded away from $0$. This enables us to guarantee that the two step modification process --- rounding and inflation --- leaves the $\SS$-compatibility undisturbed. \begin{proof} By induction we may reduce the proposition to the case $m=1$ (Alternatively, the construction given below may be localized near each disk, so that the case when $m>1$ follows by applying the same argument simultaneously). 
Note here that due to Lemma~\ref{Lem_PSC_conformal_enlarge} Assumptions~\ref{lem_move_remove_disk_ii} and \ref{lem_move_remove_disk_vi} imply that for any subset $I \subset \{ 1, \ldots, m \}$ the Riemannian manifold \[ \big( \psi^{\sigma_0}_s (Z^{\sigma_0}) \setminus \cup_{j \in I} \nu_{s,j} (\Int D^3), g^{\prime,s}_T \big) \] is PSC-conformal. So we may assume in the following that $m=1$ and drop the index $j$ from now on. Using the exponential map based at $\nu_s (0)$ and the fact that $\cup_{s \in K} U^s_{S2}$ is open, we can construct a neighborhood $\sigma_0 \subset U \subset K$ and a continuous family of embeddings $(\ov\nu_s : D^3 (2) \to \mathcal{M}^s)_{s \in U}$ such that $\nu_s = \ov\nu_s |_{D^3 (1)}$ if $s \in \sigma_0$ and such that: \begin{enumerate}[label=(\arabic*)] \item \label{prop_proof_disk_removal_1} For all $s \in \sigma_0$ we have $\ov\nu_s (D^3(2)) \subset \psi^{\sigma_0}_s (\Int Z^{\sigma_0})$. \item \label{prop_proof_disk_removal_2} For all $s \in U$ we have $\ov\nu_s (D^3(2)) \subset U^s_{S2}$ and $\ov\nu_s$ carries the standard spherical structure on $D^3(2)$ to the restriction of $\SS^s$ to $\ov\nu_s(D^3(2))$. \item \label{prop_proof_disk_removal_3} $\ov\nu_s (D^3(2)) \cap \psi^\tau_s (Z^\tau) = \emptyset$ whenever ${\sigma_0} \subsetneq \tau \subset K$. \item \label{prop_proof_disk_removal_4} If $\sigma_0$ is maximal, then for any $\tau \subset K$ and $s \in \tau \cap U$ the image $\ov\nu_s (D^3(2))$ does not contain an entire component of $\psi^{\tau}_s (Z^{\tau})$. \end{enumerate} In the following we will write $\nu_s$ instead of $\ov\nu_s$ for simplicity. 
Next, we exploit the invariance of Definition~\ref{Def_partial_homotopy} under precomposition by diffeomorphisms to argue that we may assume in addition, without loss of generality: \begin{enumerate}[label=(\arabic*), start=5] \item \label{prop_proof_disk_removal_5} There is an embedding $\mu : D^3 (2) \to Z^{\sigma_0}$ with the property that $\nu_s = \psi^{\sigma_0}_s \circ \mu$ for all $s \in \sigma_0$. \end{enumerate} More specifically, let $(\chi_s : Z^{\sigma_0} \to Z^{\sigma_0})_{s \in {\sigma_0}}$ be a continuous family of diffeomorphisms that are equal to the identity near $\partial Z^{\sigma_0}$ and such that $\chi_s^{-1} \circ (\psi^{\sigma_0}_s)^{-1} \circ \nu_s$ is constant in $s$. Set \[ \big( \ov\psi^{\sigma_0}_s := \psi^{\sigma_0}_s \circ \chi_s : Z^{\sigma_0} \to \mathcal{M}^s_T \big)_{s \in {\sigma_0}}, \qquad \ov{g}^{\sigma_0}_{s,t} := \chi_s^* g_{s,t}^{\sigma_0}. \] If we replace $(g^{\sigma_0}_{s,t})_{s \in \sigma_0, t \in [0,1]}, \linebreak[1] (\psi^{\sigma_0}_s )_{s \in \sigma_0}$ by $(\ov{g}^{\sigma_0}_{s,t})_{s \in \sigma_0, t \in [0,1]}, \linebreak[1] (\ov\psi^{\sigma_0}_s )_{s \in \sigma_0}$, then the conditions for a partial homotopy are preserved, and $\mu := (\psi^{\sigma_0}_s)^{-1} \circ \nu_s$ is constant in $s$. Next, we claim that the following is true: \begin{enumerate}[label=(\arabic*), start=6] \item \label{prop_proof_disk_removal_6} If $\tau \subsetneq \sigma \subset K$, $s \in \tau \cap U$ and $\mathcal{C}$ is the closure of a component of $Z^{ \tau} \setminus((\psi^{ \tau}_s )^{-1} \circ \psi^{ \sigma}_s ) ( Z^{ \sigma} )$ with $\psi^\tau_s(\mathcal{C}) \cap \nu_s (D^3(2)) \neq \emptyset$, then Case~\ref{prop_def_partial_homotopy_4}\ref{prop_def_partial_homotopy_4i} of Definition~\ref{Def_partial_homotopy}\ref{prop_def_partial_homotopy_4} holds for $\mathcal{C}$.
\end{enumerate} In fact, if $\mathcal{C}$ satisfies Definition~\ref{Def_partial_homotopy}\ref{prop_def_partial_homotopy_4}\ref{prop_def_partial_homotopy_4i}, then we are done; so assume that it satisfies Definition~\ref{Def_partial_homotopy}\ref{prop_def_partial_homotopy_4}\ref{prop_def_partial_homotopy_4ii}. Then $\partial\mathcal{C} = \emptyset$, $\psi^{\tau}_s (\mathcal{C}) \subset U^s_{S3}$ and $g^{\tau}_{s,t}$ restricted to $\mathcal{C}$ is a multiple of $g^{\tau}_{s,0}$. Since $\nu_s(D^3(2))\subset \psi^\tau_s(\mathcal{C})$ and $\nu_s(D^3(2)) \subset U^s_{S2}$, we conclude using Definition~\ref{Def_R_structure}\ref{prop_def_RR_1} that $\psi^{\tau}_s(\mathcal{C}) \subset U^s_{S2}$. So since $\partial \mathcal{C} = \emptyset$, we obtain that $\psi^{\tau}_s(\mathcal{C})$ is a union of spherical fibers. By Definition~\ref{Def_R_structure}\ref{prop_def_RR_4} we obtain that $(\psi^{\tau}_s)_* g^{\tau}_{s,0} = g^{\prime,s}_T$, and thus also $(\psi^{\tau}_s)_* g^{\tau}_{s,t}$, restricted to $\psi^{\tau}_s(\mathcal{C})$ is compatible with $\SS^s$. \medskip \textit{Case 1: $\sigma_0 \subsetneq \sigma_1$ for some simplex $\sigma_1 \subset K$. \quad} Let us first apply Property~\ref{prop_proof_disk_removal_6} above for $\tau = \sigma_0$ and $\sigma = \sigma_1$. By Property~\ref{prop_proof_disk_removal_3} above, for any $s \in \sigma_0$, there is a component $\mathcal{C} \subset Z^{ \sigma_0} \setminus((\psi^{ \sigma_0}_s )^{-1} \circ \psi^{ \sigma_1}_s ) ( Z^{ \sigma_1} )$ with $\psi^{\sigma_0}_s (\mathcal{C}) \supset \nu_s (D^3(2))$. So by Properties~\ref{prop_proof_disk_removal_2}, \ref{prop_proof_disk_removal_5}, \ref{prop_proof_disk_removal_6} above we obtain: \begin{enumerate}[label=(\arabic*), start=7] \item \label{prop_proof_disk_removal_7} For all $s \in \sigma_0$, $t \in [0,1]$ the pullback $\mu^* g^{\sigma_0}_{s,t}$ is compatible with the standard spherical structure on $D^3(2)$.
\end{enumerate} Now set \[ \td{Z}^{\sigma_0} := Z^{\sigma_0} \setminus \mu^{-1} (\Int D^3(1)), \qquad \td\psi^{\sigma_0}_s := \psi^{\sigma_0}_s \big|_{\td{Z}^{\sigma_0}}, \qquad \td{g}^{\sigma_0}_{s,t} := g^{\sigma_0}_{s,t} \big|_{\td{Z}^{\sigma_0}} \] and \[ (\td Z^\sigma, \linebreak[1] (\td g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\td\psi^\sigma_s )_{s \in \sigma}) := ( Z^\sigma, \linebreak[1] ( g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\psi^\sigma_s )_{s \in \sigma}) \] for all $\sigma \neq \sigma_0$. Then Assertions~\ref{lem_move_remove_disk_a}, \ref{lem_move_remove_disk_b} of this proposition hold automatically. Let us now verify that $\{ (\td Z^\sigma, \linebreak[1] (\td g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\td\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ is a partial homotopy. First, we argue that $(\td Z^{\sigma_0}, \linebreak[1] (\td g^{\sigma_0}_{s,t})_{s \in \sigma_0, t \in [0,1]})$ is a metric deformation (see Definition~\ref{Def_metric_deformation}). Since $(Z^{\sigma_0}, \linebreak[1] ( g^{\sigma_0}_{s,t})_{s \in \sigma_0, t \in [0,1]})$ is a metric deformation, the only non-trivial property is the PSC-conformality of $(\td Z^{\sigma_0}, \linebreak[1] \td g^{\sigma_0}_{s,1})$, which follows from Lemma~\ref{Lem_PSC_conformal_enlarge} with $M=\td Z^{\sigma_0}$, $Z=(\psi^{\sigma_0}_s)^{-1}(\psi^{\sigma_1}_s(Z^{\sigma_1}))\subset \td Z^{\sigma_0}$ and $g=\td g^{\sigma_0}_{s,1}$. Properties~\ref{prop_def_partial_homotopy_1}--\ref{prop_def_partial_homotopy_3}, \ref{prop_def_partial_homotopy_6} of Definition~\ref{Def_partial_homotopy} clearly hold for $\{ (\td Z^\sigma, \linebreak[1] (\td g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\td\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$. 
Property~\ref{prop_def_partial_homotopy_5} is unaffected by the modification, except for the boundary component $\mu(\partial D^3(1)) \subset \td Z^{\sigma_0}$, for which Property~\ref{prop_def_partial_homotopy_5} follows from Property~\ref{prop_proof_disk_removal_7} above. We now verify Property~\ref{prop_def_partial_homotopy_4}. Note that it holds for pairs of simplices $ \tau\subsetneq \sigma$ when $\sigma_0\not\in\{\tau,\sigma\}$, because $\{ ( Z^\sigma, \linebreak[1] (g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ is a partial homotopy. Suppose that $s\in \sigma_0\subsetneq\sigma $ for some simplex $\sigma \subset K$. Then the collection of components of $\td Z^{\sigma_0}\setminus ((\td\psi^{\sigma_0}_s)^{-1}\circ\td\psi^{\sigma}_s)(\td Z^{\sigma})$ is the same as the collection of components of $Z^{\sigma_0}\setminus ((\psi^{\sigma_0}_s)^{-1}\circ\psi^{\sigma}_s)(Z^{\sigma})$, except for the one containing $\mu (D^3(1))$; let $\td\mathcal{C}$ denote its closure and $\mathcal{C}$ denote the closure of the corresponding component of $Z^{\sigma_0}\setminus ((\psi^{\sigma_0}_s)^{-1}\circ\psi^{\sigma}_s)(Z^{\sigma})$. So $\td{\mathcal{C}} = \mathcal{C} \setminus \mu (\Int D^3(1))$. By Properties~\ref{prop_proof_disk_removal_2}, \ref{prop_proof_disk_removal_6} above, we obtain that $\td\mathcal{C}$ satisfies Property~\ref{prop_def_partial_homotopy_4}\ref{prop_def_partial_homotopy_4i} of Definition~\ref{Def_partial_homotopy}. Next suppose that $s\in \tau \subsetneq \sigma_0$. Then the closures of the components of $\td Z^{\tau}\setminus ((\td\psi^{\tau}_s)^{-1}\circ\td\psi^{\sigma_0}_s)(\td Z^{\sigma_0})$ are the same as those of $ Z^{\tau}\setminus ((\psi^{\tau}_s)^{-1}\circ\psi^{\sigma_0}_s)(Z^{\sigma_0})$ plus the component $\mathcal{C}:=(\psi^{\tau}_s)^{-1}(\nu_s(D^3(1)))$.
It follows from Property~\ref{prop_proof_disk_removal_7} that $\mathcal{C}$ satisfies Definition~\ref{Def_partial_homotopy}\ref{prop_def_partial_homotopy_4}\ref{prop_def_partial_homotopy_4i}. We now verify Assertion~\ref{lem_move_remove_disk_c} of this proposition. Suppose that $\sigma\subset K$ is a simplex and $s\in K_{PSC}\cap\sigma$. If $\sigma\neq\sigma_0$, then $\td Z^\sigma=Z^\sigma$ and $\td g^\sigma_{s,t}=g^\sigma_{s,t}$ for all $t\in [0,1]$, so $(\td Z^\sigma, \td g^{\sigma}_{s,t})$ is PSC-conformal by assumption. If $\sigma=\sigma_0$, then for every $t\in[0,1]$ we may apply Lemma~\ref{Lem_PSC_conformal_enlarge} with $M=\td Z^{\sigma_0}$, $Z=(\psi^{\sigma_0}_s)^{-1}(\psi^{\sigma_1}_s(Z^{\sigma_1}))\subset \td Z^{\sigma_0}$ and $g=\td g^{\sigma_0}_{s,t}$ to conclude that $\td g^{\sigma_0}_{s,t}$ is PSC-conformal. \medskip \textit{Case 2: $\sigma_0$ is a maximal simplex of $K$. \quad} In this case, Properties~\ref{prop_proof_disk_removal_1}--\ref{prop_proof_disk_removal_6} above imply that the assumptions of Lemma~\ref{lem_make_compatible_on_disk} below hold. So the proposition follows from this lemma. \end{proof} \begin{lemma} \label{lem_make_compatible_on_disk} Consider a partial homotopy $\{ ( Z^\sigma, \linebreak[1] (g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ at time $T$ relative to $L$ and a closed subset $K_{PSC} \subset K$. Fix some simplex $\sigma_0 \subset K$, $\sigma_0\cap L=\emptyset$ and a neighborhood $\sigma_0 \subset U \subset K$ and assume that there is a continuous family of embeddings $(\nu_s : D^3(2) \to \mathcal{M}^s )_{s \in U}$ such that the following holds: \begin{enumerate}[label=(\roman*)] \item \label{lem_make_compatible_on_disk_i} $\sigma_0$ is a maximal simplex.
\item \label{lem_make_compatible_on_disk_ii} There is an embedding $\mu : D^3 (2) \to \Int Z^{\sigma_0}$ such that $\nu_s = \psi^{\sigma_0}_s \circ \mu$ for all $s \in \sigma_0$. \item \label{lem_make_compatible_on_disk_iii} For all $s \in U$ we have $\nu_s (D^3(2)) \subset U^s_{S2}$ and the embedding $\nu_s$ carries the standard spherical structure on $D^3(2)$ to $\SS^s$ restricted to $\nu_{s}(D^3(2))$. \item \label{lem_make_compatible_on_disk_iv} For any $\tau \subset K$ and $s \in \tau \cap U$ the image $\nu_s (D^3(2))$ does not contain an entire component of $\psi^{\tau}_s (Z^{\tau})$. \item \label{lem_make_compatible_on_disk_vii} If $\tau \subsetneq \sigma \subset K$, $s \in \tau \cap U$ and $\mathcal{C}$ is the closure of a component of $Z^{ \tau} \setminus((\psi^{ \tau}_s )^{-1} \circ \psi^{ \sigma}_s ) ( Z^{ \sigma} )$ with $\psi^\tau_s (\mathcal{C}) \cap \nu_s (D^3(2)) \neq \emptyset$, then Case~\ref{prop_def_partial_homotopy_4i} of Definition~\ref{Def_partial_homotopy}\ref{prop_def_partial_homotopy_4} holds for $\mathcal{C}$. \item \label{lem_make_compatible_on_disk_v} $\{ ( Z^\sigma, \linebreak[1] (g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ is PSC-conformal over every $s \in K_{PSC}$. \item \label{lem_make_compatible_on_disk_vi} For all $s \in \sigma_0 \cap K_{PSC}$ the Riemannian manifold $(\psi^{\sigma_0}_s (Z^{\sigma_0} )\setminus \nu_s (\Int D^3(1)), \linebreak[1] g^{\prime,s}_T)$ is PSC-conformal.
\end{enumerate} Then letting \begin{equation} \label{eqn_tilde_z_sigma_definition} \td Z^\sigma:=\begin{cases} Z^\sigma & \text{if $\sigma\neq\sigma_0$,}\\ Z^{\sigma_0}\setminus \mu (\Int D^3(1)) & \text{if $\sigma=\sigma_0$,} \end{cases} \qquad \td\psi^\sigma_s := \psi^\sigma_s \big|_{\td Z^\sigma}, \end{equation} we can find continuous families $(\td{g}^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}$ of metrics on $\td{Z}^\sigma$ such that: \begin{enumerate}[label=(\alph*)] \item $\{ ( \td Z^\sigma, \linebreak[1] (\td g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\td\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ is a partial homotopy at time $T$ relative to $L$. \item $\{ ( \td Z^\sigma, \linebreak[1] (\td g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\td\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ is PSC-conformal over every $s \in K_{PSC}$. \end{enumerate} \end{lemma} \begin{proof} In the following, we will construct the families of metrics $(\td{g}^\sigma_{s,t})$. The construction will be performed locally on $\nu_s (D^3(2))$ and on $U$, meaning that the metrics $(\td\psi^\sigma_s)_* \td{g}^\sigma_{s,t}$ and $(\psi^\sigma_s)_* g^\sigma_{s,t}$ will (if at all) only differ on $\nu_s (D^3(2))$ if $s \in U$, and we will choose $\td{g}^\sigma_{s,t} = g^\sigma_{s,t}$ if $s \not\in U$. Before proceeding, we observe that without loss of generality we may assume that each $g^\sigma_{s,t}$ is constant in $t$ for all $t \in [0,\frac12]$. In fact, in our partial homotopy we may replace each $g^\sigma_{s,t}$ by the reparametrized family \[ \wh{g}^{\sigma}_{s,t} := \begin{cases} g^\sigma_{s,0} & \text{if $t \in [0, \tfrac12]$} \\ g^\sigma_{s, 2t - 1} & \text{if $t \in [\tfrac12,1]$} \end{cases} \] and Definition~\ref{Def_partial_homotopy} will still be satisfied. So assume that this is the case from now on. In the first claim we will analyze the intersections of $\nu_s (D^3(2))$ with the images $\psi^\sigma_s (Z^\sigma)$.
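For the reader's convenience we recall the ball and annulus notation that appears repeatedly below; this is only a reminder of the conventions suggested by the usage in this section (e.g.\ $\Int D^3(1)$, $B^3(2)$, $\ov{A^3(r,2)}$), not a new definition: for $r > 0$ and $0 \leq a < b$,
\[ D^3(r) = \{ x \in \mathbb{R}^3 \, : \, |x| \leq r \}, \qquad B^3(r) = \Int D^3(r), \qquad A^3(a,b) = \{ x \in \mathbb{R}^3 \, : \, a < |x| < b \}, \]
so that $\ov{A^3(a,b)}$ denotes the corresponding closed annulus.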
We will find that $\nu_s (D^3(2))$ is either fully contained in this image or the intersection equals an annulus whose inner radius is bounded away from $0$. \begin{Claim} \label{cl_2_mu} After possibly shrinking the neighborhood $U$ of $\sigma_0$ we can find a small $r_0 \in (0,1)$ such that: \begin{enumerate}[label=(\alph*)] \item \label{cl_2_mu_a} $U$ is contained in the union of all simplices $\tau \subset K$ that intersect ${\sigma_0}$. \item \label{cl_2_mu_b} For any $\tau \subset K$ there are two cases: \begin{enumerate}[label=(b\arabic*)] \item \label{cl_2_mu_b1} $\nu_s (D^3(2)) \subset \psi^\tau_s (Z^\tau)$ for all $s \in \tau \cap U$ or \item \label{cl_2_mu_b2} $\nu_s (D^3(r_0)) \cap \psi^\tau_s (Z^\tau) = \emptyset$ for all $s \in \tau \cap U$. In view of Assumption~\ref{lem_make_compatible_on_disk_iv} this implies that the complement $D^3(2) \setminus \nu_s^{-1} ( \psi^\tau_s ( \Int(Z^\tau)))$ is either empty or of the form $D^3(r)$ for some $r \in (r_0,2]$. \end{enumerate} \end{enumerate} \end{Claim} \begin{proof} This follows by an openness argument and Assumption~\ref{lem_make_compatible_on_disk_iv} of the lemma. Let $U = U_1 \supset U_2 \supset \ldots \supset \sigma_0$ be a sequence of open subsets with $\bigcap_{i=1}^\infty U_i = \sigma_0$. Then Assertion~\ref{cl_2_mu_a} holds if we replace $U$ by $U_i$ for large $i$. We will now argue that Assertion~\ref{cl_2_mu_b} holds as well for large $i$. Consider some simplex $\tau \subset K$. If $\tau = \sigma_0$, then Property~\ref{cl_2_mu_b1} holds by Assumption~\ref{lem_make_compatible_on_disk_ii} of the lemma. If $\tau \subsetneq \sigma_0$, then $\psi^{\sigma_0}_s (Z^{\sigma_0}) \subset \psi^\tau_s (Z^\tau)$ and thus Property~\ref{cl_2_mu_b1} holds as well. Assume now that $\tau \not\subset \sigma_0$, but $\tau \cap \sigma_0 \neq \emptyset$.
By Assumption~\ref{lem_make_compatible_on_disk_iv} of the lemma, for any $s \in \tau \cap U$ we have either $\nu_s (0) \not\in \psi^\tau_s (Z^\tau)$ or $\nu_s (D^3(2)) \subset \psi^\tau_s (Z^\tau)$. Consider the set $S \subset \tau \cap U$ of parameters for which $\nu_s (D^3(2)) \subset \psi^\tau_s (Z^\tau)$. Then $S$ is closed in $\tau \cap U$ by definition, but by Assumption~\ref{lem_make_compatible_on_disk_iv} of the lemma it is also open in $\tau \cap U$. So there is a subset $\tau \cap \sigma_0 \subset U_\tau \subset \tau \cap U$ that is open in $\tau$ such that either $\nu_s (0) \not\in \psi^\tau_s (Z^\tau)$ or $\nu_s (D^3(2)) \subset \psi^\tau_s (Z^\tau)$ uniformly for all $s \in U_\tau$. Thus Assertion~\ref{cl_2_mu_b} holds for $\tau$ if $i$ is large enough such that $U_i \cap \tau \subset U_\tau \cap \tau$ and $r_0$ is chosen small enough. Since $K$ is finite, this implies Assertion~\ref{cl_2_mu_b} for large $i$. \end{proof} As mentioned before, our goal will be to modify the metrics $(\psi^\sigma_s)_* g^\sigma_{s,t}$ only on $\nu_s (D^3(2))$. It will therefore be useful to consider the pullbacks $\nu_s^* (\psi^\sigma_s)_* g^\sigma_{s,t}$. Since these pullbacks may only be defined on annular regions, it will be suitable later to construct a family of extensions to an entire disk $D^3(1.99) \subset D^3(2)$, which for technical reasons will have to be slightly smaller than $D^3(2)$. \begin{Claim} \label{cl_3_h} After possibly shrinking the neighborhood $U$ of $\sigma_0$, we can find a continuous family of metrics $(h_{s,t})_{s \in U, t \in [0,1]}$ on $D^3(1.99)$ such that the following holds for any $ \tau \subset K$, $s \in \tau \cap U$, $t \in [0,1]$: \begin{enumerate}[label=(\alph*)] \item \label{cl_3_h_a} $h_{s,t} = \nu^*_s ( \psi^\tau_s)_* g^\tau_{s,t}$ on $D^3(1.99) \cap \nu^{-1}_s ( \psi^\tau_s (Z^\tau))$. 
\item \label{cl_3_h_b} The metric $h_{s,t}$ restricted to $$D^3(1.99) \setminus \nu_s^{-1} \big( \psi^\tau_s ( \Int Z^\tau) \big)$$ is compatible with the standard spherical structure on $D^3(1.99)$. \end{enumerate} \end{Claim} \begin{proof} Let $\tau_1, \ldots, \tau_m \subset K$ be a list of all simplices of $K$, indexed in such a way that $\dim \tau_j$ is non-decreasing in $j$. We will proceed by induction and construct a sequence $U = U_0 \supset U_1 \supset \ldots$ of open neighborhoods of $\sigma_0$ and a sequence of families $(h^j_{s,t})_{s \in U_j \cap \cup_{i=1}^j \tau_i, t \in [0,1]}$ such that Assertions~\ref{cl_3_h_a} and \ref{cl_3_h_b} hold if $s \in U_j \cap \cup_{i=1}^j \tau_i$. Let $j \in \{ 1,\ldots, m \}$ and assume by induction that we have already constructed $U_{j-1}$ and $(h^{j-1}_{s,t})$. Note that $(h^{j-1}_{s,t})$ is defined on $\partial \tau_j \cap U_{j-1}$. Our goal will now be to possibly shrink $U_{j-1}$ and extend $(h^{j-1}_{s,t})$ over the interior of $\tau_j \cap U_{j}$ such that Assertions~\ref{cl_3_h_a} and \ref{cl_3_h_b} continue to hold. We first claim that this is the case if for all $s \in \tau_j \cap U_j$ and $t \in [0,1]$ we have: \begin{enumerate}[label=(\arabic*)] \item \label{prop_h_construction_1} $h^j_{s,t} = \nu^*_s ( \psi^{\tau_j}_s)_* g^{\tau_j}_{s,t}$ on $D^3(1.99) \cap \nu^{-1}_s ( \psi^{\tau_j}_s (Z^{\tau_j}))$. \item \label{prop_h_construction_2} $h^j_{s,t}$ restricted to $D^3(1.99) \setminus \nu_s^{-1} ( \psi^{\tau_j}_s ( \Int Z^{\tau_j}) )$ is compatible with the standard spherical structure on $D^3(1.99)$. \item \label{prop_h_construction_3} $h^j_{s,t} = h^{j-1}_{s,t}$ if $s \in \partial \tau_j \cap U_j$. \end{enumerate} In fact, if $\tau \subset K$ with $\tau_j \not\subset \tau$, then $\tau$ is disjoint from the interior of $\tau_j$ and Assertions~\ref{cl_3_h_a} and \ref{cl_3_h_b} hold by induction. 
If $\tau = \tau_j$, then Assertions~\ref{cl_3_h_a} and \ref{cl_3_h_b} trivially follow from Properties~\ref{prop_h_construction_1} and \ref{prop_h_construction_2} above. If $\tau_j \subsetneq \tau$, then by the definition of a partial homotopy we have $\psi^\tau_s (Z^\tau) \subset \psi^{\tau_j}_s (Z^{\tau_j})$ and $(\psi^\tau_s)_* g^\tau_{s,t} = (\psi^{\tau_j}_s)_* g^{\tau_j}_{s,t}$ on $\psi^\tau_s (Z^\tau)$. So Property~\ref{prop_h_construction_1} implies Assertion~\ref{cl_3_h_a}. Consider now Assertion~\ref{cl_3_h_b}. Due to Assumption~\ref{lem_make_compatible_on_disk_iii} of the lemma and Property~\ref{prop_h_construction_2} above it suffices to show that on $\nu_s (D^3(1.99)) \cap (\psi^{\tau_j}_s (Z^{\tau_j}) \setminus \psi^\tau_s (Z^\tau))$ the metrics $(\nu_s)_* h^j_{s,t} = (\psi^{\tau_j}_s)_* g^{\tau_j}_{s,t}$, $t \in [0,1]$, are compatible with $\SS^s$. For this purpose let $\mathcal{C}$ be the closure of a component of $Z^{\tau_j} \setminus ((\psi^{\tau_j}_s)^{-1} \circ \psi^\tau_s )( Z^\tau)$ such that $\nu_s (D^3(1.99))\cap \psi^{\tau_j}_s (\mathcal{C}) \neq \emptyset$. By Assumption~\ref{lem_make_compatible_on_disk_vii} the image $\psi^{\tau_j}_s (\mathcal{C})$ is a union of spherical fibers and $ (\psi^{\tau_j}_s)_* g^{\tau_j}_{s,t}$ restricted to $\psi^{\tau_j}_s (\mathcal{C})$ is compatible with the spherical structure. This finishes the proof of Assertion~\ref{cl_3_h_b}. It remains to choose $U_j$ and $(h^j_{s,t})$ satisfying Properties~\ref{prop_h_construction_1}--\ref{prop_h_construction_3} above. If $\tau_j$ satisfies Case~\ref{cl_2_mu_b1} of Claim~\ref{cl_2_mu}, then we can simply set \[ h^j_{s,t} := \nu_s^* (\psi^{\tau_j}_s)_* g^{\tau_j}_{s,t}. \] So assume that $\tau_j$ satisfies Case~\ref{cl_2_mu_b2}. 
By the remark in Claim~\ref{cl_2_mu}\ref{cl_2_mu_b2}, there is a continuous function $r_j : U_{j-1} \cap \tau_j \to (r_0, 2]$ such that for all $s \in U_{j-1} \cap \tau_j$ \[ B^3 (2) \cap \nu_s^{-1} (\psi^{\tau_j}_s (Z^{\tau_j})) = B^3 (2) \cap \ov{A^3 (r_j (s), 2)} . \] Note that in the case $r_j(s) = 2$ this set is empty. Next observe that it suffices to construct $h^j_{s,t}$ for $s$ in a neighborhood $V_{s_0} \subset \tau_j$ of any parameter $s_0 \in \tau_j \cap \sigma_0$. The desired family can then be constructed using a partition of unity. So fix some $s_0 \in \tau_j \cap \sigma_0$. If $r_j (s_0) > 1.99$, then Property~\ref{prop_h_construction_1} is vacuous in a neighborhood of $s_0$ and Properties~\ref{prop_h_construction_2}, \ref{prop_h_construction_3} can be satisfied by extending $(h^{j-1}_{s,t})$ by an arbitrary family of metrics on $D^3(1.99)$ that are compatible with the standard spherical structure. If $r_j (s_0) \leq 1.99$, then $r_j < 2$ in a neighborhood of $s_0$ in $\tau_j$ and Properties~\ref{prop_h_construction_1}--\ref{prop_h_construction_3} can be satisfied by extending $\nu_s^* (\psi^{\tau_j}_s)_* g^{\tau_j}_{s,t}$ onto $D^3(1.99)$ as in the proof of Proposition~\ref{prop_extending_symmetric}. This finishes the proof of the claim. \end{proof} We will now mainly work with the family $(h_{s,t})$. In the next step we apply a rounding procedure at the origin. The resulting family $(h'_{s,t})$ will be compatible with the standard spherical structure on a small disk $D^3(r_1) \subset D^3 (1.99)$. \begin{Claim} \label{cl4_hprime} Let $r_0$ be the constant from Claim~\ref{cl_2_mu}. 
We can find a smaller open neighborhood $U' \Subset U$ of ${\sigma_0}$ and a continuous family of metrics $(h'_{s,t})_{s \in U, t \in [0,1]}$ on $D^3(1.99)$, such that the following holds for some $r_1 \in (0,r_0/2)$: \begin{enumerate}[label=(\alph*)] \item \label{cl4_hprime_a} For all $s \in U'$ and $t \in [\frac12,1]$ the metric $h'_{s,t}$ is compatible with the standard spherical structure on $D^3 (r_1)$. \item \label{cl4_hprime_b} $h'_{s,0} = h_{s,0}$ for all $s \in U$. \item \label{cl4_hprime_c} $h'_{s,t} = h_{s,t}$ for all $s \in U \setminus U'$ and $t \in [0,1]$. \item \label{cl4_hprime_d} $h'_{s,t} = h_{s,t}$ on $A^3 (r_0, 1.99)$ for all $s \in U$ and $t \in [0,1]$. \item \label{cl4_hprime_e} $h'_{s,1}$ is conformally flat for all $s \in U$. \item \label{cl4_hprime_f} If for some $s \in U$, $t \in [0,1]$ and $r\in [r_0,1.99]$ the metric $h_{s,t}$ is compatible with the standard spherical structure on $D^3(r)$, then so is $h'_{s,t}$. \item \label{cl4_hprime_g} For any $s \in U$, $s \in \sigma \subset K$ and $t \in [0,1]$ the following holds: Consider the metric \begin{equation} \label{eq_k_definition_cases} k^\sigma_{s,t} := \begin{cases} (\nu_s)_* h'_{s,t} & \text{on $\nu_s (D^3(1.99))$} \\ (\psi_{s}^\sigma)_* g^\sigma_{s,t} & \text{on $\psi^\sigma_s (Z^\sigma) \setminus \nu_s (D^3(1.99))$} \end{cases} \end{equation} If $s \in K_{PSC}$ or $t = 1$, then $(\psi^\sigma_s (Z^\sigma), k^\sigma_{s,t})$ is PSC-conformal. \end{enumerate} \end{Claim} \begin{proof} Choose $\delta_1 \in C^0_c (U)$, $0 \leq \delta_1 \leq 1$ such that $\delta_1 \equiv 1$ on a neighborhood $U'$ of $\sigma_0$ and choose $\delta_2 \in C^0 ([0,1])$ such that $\delta_2 (0) = 0$ and $\delta_2 \equiv 1$ on $[\frac12, 1]$. 
Let $(h'_{s,t})_{(s,t) \in \supp (\delta_1) \times [0,1]}$ be the result of applying Proposition~\ref{Prop_rounding} with $u = \delta_1 (s) \delta_2 (t)$, $\ov r_1 = r_0$ (from Claim~\ref{cl_2_mu}), $X = \supp (\delta_1) \times [0,1]$ and \begin{equation} \label{eq_XPSC_removing} X_{PSC} = \big( (\supp(\delta_1) \cap K_{PSC}) \times [0,1] \big) \cup \big( \supp(\delta_1) \times \{ 1 \} \big) \end{equation} on $D^3 (1.99)$. Let $r_1 \in (0, r_0)$ be the constant produced by Proposition~\ref{Prop_rounding}. By Proposition~\ref{Prop_rounding}\ref{ass_Prop_rounding_b}, we can extend $(h'_{s,t})_{(s,t) \in \supp (\delta_1) \times [0,1]}$ to a continuous family over $U \times [0,1]$ by setting $h'_{s,t} := h_{s,t}$ whenever $s \not\in \supp (\delta_1)$. Proposition~\ref{Prop_rounding}\ref{ass_Prop_rounding_c} implies Assertion~\ref{cl4_hprime_a}, because $\delta_1 (s) \delta_2 (t) = 1$ for $s \in U'$ and $t \in [\frac12, 1]$. Assertions~\ref{cl4_hprime_b}--\ref{cl4_hprime_d} follow immediately from Proposition~\ref{Prop_rounding}\ref{ass_Prop_rounding_a}, \ref{ass_Prop_rounding_c}. For Assertion~\ref{cl4_hprime_e} notice that by Claim~\ref{cl_3_h}, the metric $h_{s,1}$ is locally either conformally flat or compatible with the standard spherical structure, and therefore conformally flat. By Proposition~\ref{Prop_rounding}\ref{ass_Prop_rounding_e} the rounding procedure retains this property. Assertion~\ref{cl4_hprime_f} is a restatement of Proposition~\ref{Prop_rounding}\ref{ass_Prop_rounding_d}. Assertion~\ref{cl4_hprime_g} holds by assumption if $\psi^\sigma_s(Z^\sigma)\cap \nu_s(D^3(r_0))=\emptyset$ since in this case $k^\sigma_{s,t}=(\psi^\sigma_{s})_*g^\sigma_{s,t}$ (see Assertion~\ref{cl4_hprime_d}). Hence by Claim~\ref{cl_2_mu} we may assume $\psi^\sigma_{s}(Z^\sigma)\supset \nu_s(D^3(2))$ for all $s \in \sigma$, and in this case PSC-conformality follows from Proposition~\ref{Prop_rounding}\ref{ass_Prop_rounding_f}.
To see this, note that by Assumption~\ref{lem_make_compatible_on_disk_v}, for every $(s,t) \in X_{PSC}$ from (\ref{eq_XPSC_removing}) we know that $(Z^\sigma, g^\sigma_{s,t})$ is PSC-conformal, so there exists a function $\wh w_{s,t}\in C^\infty(Z^\sigma)$ satisfying the conditions in Lemma~\ref{Lem_PSC_conformal_analytic}. Hence by Lemma~\ref{Lem_PSC_conformal_open} and a partition of unity argument we may assume that $\{\wh w_{s,t}\}_{(s,t)\in X_{PSC}}$ is a continuous family of smooth functions. Setting $w_{s,t}:=\wh w_{s,t}\circ\mu$, we apply Proposition~\ref{Prop_rounding}\ref{ass_Prop_rounding_f}, and let $w'_{s,t}$ be the resulting functions. Now letting \[ \wh w'_{s,t} := \begin{cases} w'_{s,t}\circ \nu_s^{-1} & \text{on $\nu_s (B^3(1.99))$} \\ \wh w_{s,t} \circ (\psi^\sigma_s)^{-1} & \text{on $\psi^\sigma_s (Z^\sigma) \setminus \nu_s (B^3(1.99))$} \end{cases} \] we see that $\wh w'_{s,t}$ satisfies the conditions in Lemma~\ref{Lem_PSC_conformal_analytic} for $(\psi^\sigma_s(Z^\sigma),k^\sigma_{s,t})$. \end{proof} Next, we stretch the metrics $h'_{s,t}$ radially. \begin{Claim} \label{cl_5_hprimeprime} We can find a continuous family of metrics $(h''_{s,t})_{s \in U, t \in [0,1]}$ on $D^3(1.99)$, such that the following holds: \begin{enumerate}[label=(\alph*)] \item \label{cl_5_hprimeprime_a} For all $s \in {\sigma_0}$ and $t \in [0,1]$ the metric $h''_{s,t}$ is compatible with the standard spherical structure on $D^3 (1.1)$. \item \label{cl_5_hprimeprime_b} $h''_{s,0} = h_{s,0}$ for all $s \in U$. \item \label{cl_5_hprimeprime_c} $h''_{s,t} = h_{s,t}$ for all $s \in U \setminus U'$ and $t \in [0,1]$. \item \label{cl_5_hprimeprime_d} $h''_{s,t} = h_{s,t}$ on $A^3 (1.98, 1.99)$ for all $s \in U$ and $t \in [0,1]$. \item \label{cl_5_hprimeprime_e} $h''_{s,1}$ is conformally flat for all $s \in U$. \item \label{cl_5_hprimeprime_f} If for some $s \in U$, $t \in [0,1]$ and $r \in [r_0, 1.99]$ the metric $h_{s,t}$ is compatible with the standard spherical structure on $D^3(r)$, then so is $h''_{s,t}$.
\item \label{cl_5_hprimeprime_g} For any $s \in U$, $s \in \sigma \subset K$ and $t \in [0,1]$ the following holds: Consider the metric \[ \td{k}^\sigma_{s,t} := \begin{cases} (\nu_s)_* h''_{s,t} & \text{on $\nu_s (D^3(1.99))$} \\ (\psi_{s}^\sigma)_* g^\sigma_{s,t} & \text{on $\psi^\sigma_s (Z^\sigma) \setminus \nu_s (D^3(1.99))$} \end{cases} \] restricted to $\psi^\sigma_s(\td Z^\sigma)$, where $\td Z^\sigma$ is as in (\ref{eqn_tilde_z_sigma_definition}). If $s \in K_{PSC}$ or $t = 1$, then $(\td\psi^\sigma_s (\td Z^\sigma), \td k^\sigma_{s,t})$ is PSC-conformal. \end{enumerate} \end{Claim} \begin{proof} Fix a continuous family of diffeomorphisms \[ (\Phi_u : D^3(1.99) \to D^3(1.99))_{u \in (0,1]} \] with the following properties: \begin{enumerate} [label=(\Alph*)] \item \label{item_phi_properties_1} $\Phi_u (x) = f(x,u) x$ for some scalar function $f : D^3(1.99) \times (0,1] \to (0,1]$. \item \label{item_phi_properties_2} $\Phi_u = \id$ on $A^3 (1.98, 1.99)$. \item $\Phi_1 = \id$. \item For any $x \in D^3 (1.97)$ we have $|\Phi_u (x)| < u$. \end{enumerate} Fix a continuous function $\delta_1 \in C_c^0 (U')$ with $0 \leq \delta_1 \leq 1$ such that $\delta_1 \equiv 1$ on ${\sigma_0}$ and a continuous function $\delta_2 \in C^0 ([0,1])$ with $0 \leq \delta_2 \leq 1$ such that $\delta_2 (0) = 0$ and $\delta_2 \equiv 1$ on $[\frac12, 1]$. Let $r_2\in(0,r_1/2]$ be a constant whose value we will determine later and set \[ h''_{s,t} := \Phi^*_{1- (1-r_2) \delta_1(s) \delta_2(t)} h'_{s,t}. \] Let us first prove Assertion~\ref{cl_5_hprimeprime_a}. Let $s \in \sigma_0$ and $t \in [0,1]$. If $t \in [\frac12, 1]$, then $\delta_1 (s) \delta_2 (t) = 1$, so $h''_{s,t} = \Phi^*_{r_2} h'_{s,t}$, and $\Phi_{r_2} ( D^3(1.1) ) \subset D^3(r_2) \subset D^3(r_1)$. So by Claim~\ref{cl4_hprime}\ref{cl4_hprime_a}, the metric $h''_{s,t}$ is compatible with the standard spherical structure on $D^3(1.1)$.
On the other hand, if $t \in [0, \frac12]$, then by Claim~\ref{cl_3_h}\ref{cl_3_h_a} and the fact that $g^{\sigma_0}_{s,t}=g^{\sigma_0}_{s,0}$ for all $t\in [0,\frac12]$ we have \begin{equation*} \label{eq_h_p_h_nu_psi_g} h_{s,t} = \nu^*_s (\psi^{\sigma_0}_s)_* g^{\sigma_0}_{s,t} = \nu^*_s (\psi^{\sigma_0}_s)_* g^{\sigma_0}_{s,0} = \nu^*_s g^{\prime,s}_T. \end{equation*} So by Assumption~\ref{lem_make_compatible_on_disk_iii} the metric $h_{s,t}$ is compatible with the standard spherical structure on $D^3 (1.99)$. By Claim~\ref{cl4_hprime}\ref{cl4_hprime_f} the same is true for $h'_{s,t}$. For Assertion~\ref{cl_5_hprimeprime_b}, observe that by Claim~\ref{cl4_hprime}\ref{cl4_hprime_b} for all $s \in U$ \[ h''_{s,0} = \Phi^*_1 h'_{s,0} = h'_{s,0} = h_{s,0}. \] Similarly, for Assertion~\ref{cl_5_hprimeprime_c}, we have by Claim~\ref{cl4_hprime}\ref{cl4_hprime_c} for all $s \in U \setminus U'$, $t \in [0,1]$ $$ h''_{s,t} = \Phi^*_1 h'_{s,t} = h'_{s,t} = h_{s,t}. $$ Assertion~\ref{cl_5_hprimeprime_d} follows from Property \ref{item_phi_properties_2} along with Claim~\ref{cl4_hprime}\ref{cl4_hprime_d}. Assertion~\ref{cl_5_hprimeprime_e} follows from Claim~\ref{cl4_hprime}\ref{cl4_hprime_e}. Assertion~\ref{cl_5_hprimeprime_f} follows from Claim~\ref{cl4_hprime}\ref{cl4_hprime_f} and Property \ref{item_phi_properties_1} above. More specifically, if $h_{s,t}$ is compatible with the standard spherical structure on $D^3(r)$, then $h'_{s,t}$ restricted to $D^3(r)$ is as well and therefore, $h''_{s,t}$ is compatible with the standard spherical structure on $\Phi^{-1}_{1- (1-r_2) \delta_1(s) \delta_2(t)} ( D^3(r)) \supset D^3(r)$. Lastly, consider Assertion~\ref{cl_5_hprimeprime_g}. First suppose that $\sigma\neq\sigma_0$ or $t \in [0,\frac12]$. 
We claim that in this case \begin{equation} \label{eq_td_Z_sigma_PSC_conf} (\td{Z}^\sigma, g^\sigma_{s,t} |_{\td{Z}^\sigma}) \quad \text{is PSC-conformal if $s \in \sigma \cap K_{PSC}$ or $t =1$.} \end{equation} In fact, if $\sigma \neq \sigma_0$, then (\ref{eq_td_Z_sigma_PSC_conf}) follows from Assumption~\ref{lem_make_compatible_on_disk_v}, Definition~\ref{Def_metric_deformation} and the fact that $\td{Z}^\sigma = Z^\sigma$. On the other hand, if $\sigma = \sigma_0$, $s \in \sigma_0 \cap K_{PSC}$ and $t \in [0,\frac12]$, then $g^{\sigma_0}_{s,t} = g^{\sigma_0}_{s,0} = (\psi^{\sigma_0}_s)^* g^{\prime,s}_T$. So $(\td{Z}^{\sigma_0}, g^{\sigma_0}_{s,t} |_{\td{Z}^{\sigma_0}})$ is isometric to $(\td\psi^{\sigma_0}_s (\td{Z}^{\sigma_0}), g^{\prime,s}_T)$, which is PSC-conformal by Assumption~\ref{lem_make_compatible_on_disk_vi}. Let us now continue with the proof of Assertion~\ref{cl_5_hprimeprime_g} if $\sigma\neq\sigma_0$ or $t \in [0,\frac12]$. If $\nu_s (D^3 (1.99)) \subset \psi^\sigma_s (\td Z^\sigma)$ (which precludes $\sigma = \sigma_0$), then we are done by Claim~\ref{cl4_hprime}\ref{cl4_hprime_g}, because the metrics $h'_{s,t}$ and $h''_{s,t}$ are isometric to one another, which implies that the extensions of $(\nu_s)_* h'_{s,t}$ and $(\nu_s)_* h''_{s,t}$ by $(\psi_{s}^\sigma)_* g^\sigma_{s,t}$ onto $\psi^\sigma_s (\td Z^\sigma)$ are isometric. The same is true if $\nu_s (D^3 (1.99))$ and $\psi^\sigma_s (\td Z^\sigma)$ are disjoint. If $\nu_s (D^3 (1.99)) \not\subset \psi^\sigma_s (\td Z^\sigma)$ and the two subsets are not disjoint, then by Claim~\ref{cl_2_mu}\ref{cl_2_mu_b2} or the definition of $\td Z^{\sigma_0}$ \[ D^3 (1.99) \cap \nu_s^{-1} (\psi^\sigma_s (\td Z^\sigma)) = \ov{A^3 (r,1.99 )} \] for some $r \in (r_0, 1.99]$. In this case, the metric $h''_{s,t}$ restricted to \[ \Phi^{-1}_{1- (1-r_2) \delta_1(s) \delta_2(t)} ( \ov{A^3 (r, 1.99)} ) \subset \ov{A^3 (r, 1.99)} \] is isometric to $h'_{s,t}$ restricted to $ \ov{A^3 (r, 1.99)}$.
By Claim~\ref{cl4_hprime}\ref{cl4_hprime_d} and Claim~\ref{cl_3_h}\ref{cl_3_h_a} we have $h'_{s,t} = h_{s,t} = \nu^*_s (\psi^\sigma_s)_* g^{\sigma}_{s,t}$ on $ \ov{A^3 (r, 1.99)}$. It follows that $\td{k}^\sigma_{s,t}$ restricted to \[ \big( \psi^\sigma_s (\td Z^\sigma) \setminus \nu_s (D^3(1.99)) \big) \cup \nu_s \big( \Phi^{-1}_{1- (1-r_2) \delta_1(s) \delta_2(t)} ( \ov{A^3 (r, 1.99)} ) \big) \] is isometric to $(\td Z^\sigma, g^\sigma_{s,t})$, which is PSC-conformal if $s \in \sigma \cap K_{PSC}$ or $t=1$, due to (\ref{eq_td_Z_sigma_PSC_conf}). On the other hand, $\td{k}^\sigma_{s,t}$ restricted to the closure of \begin{equation} \label{eq_A_Phi_A} \nu_s \big( \ov{A^3 (r, 1.99)} \setminus \Phi^{-1}_{1- (1-r_2) \delta_1(s) \delta_2(t)} ( \ov{A^3 (r, 1.99)} ) \big) \end{equation} is isometric to $h'_{s,t}$ restricted to \[ \Phi_{1- (1-r_2) \delta_1(s) \delta_2(t)} (\ov{A^3 (r, 1.99)}) \setminus \ov{A^3 (r, 1.99)} \subset D^3 (r). \] By Claim~\ref{cl_3_h}\ref{cl_3_h_b}, we know that $h_{s,t}$ is compatible with the standard spherical structure on $D^3(r)$ and thus by Claim~\ref{cl4_hprime}\ref{cl4_hprime_f} the same is true for $h'_{s,t}$ (recall that $r \geq r_0$). It follows that $\td{k}^\sigma_{s,t}$ restricted to the closure of (\ref{eq_A_Phi_A}) is compatible with $\SS^s$. Thus by Lemma~\ref{Lem_PSC_conformal_enlarge} we conclude that $(\psi^\sigma_s (\td Z^\sigma), \td{k}^\sigma_{s,t})$ is PSC-conformal if $s \in \sigma \cap K_{PSC}$ or $t=1$. Now suppose $\sigma=\sigma_0$ and $t \in [\frac12,1]$. Consider the family of metrics $( k^{\sigma_0}_{s,t})$ on $\psi^{\sigma_0}_s(Z^{\sigma_0})$ from (\ref{eq_k_definition_cases}). By Claim~\ref{cl4_hprime}\ref{cl4_hprime_g} we know that $(Z^{\sigma_0},(\psi^{\sigma_0}_s)^* k^{\sigma_0}_{s,t})$ is PSC-conformal for all $(s,t) \in (\sigma_0 \cap K_{PSC}) \times [\frac12,1] \cup \sigma_0 \times \{ 1 \}$. 
By Lemma~\ref{lem_thick_annulus_psc_conformal}, we may choose $r_2\in (0,r_1/2]$ such that $(\psi^{\sigma_0}_s)^* k^{\sigma_0}_{s,t}$ is also PSC-conformal on $Z^{\sigma_0} \setminus\mu (\Int D^3(r_2))$ for the same $(s,t)$. Assertion~\ref{cl_5_hprimeprime_g} now follows from the fact that $(\psi^{\sigma_0}_s(\td Z^{\sigma_0}), \td{k}^{\sigma_0}_{s,t})$, where $\psi^{\sigma_0}_s(\td Z^{\sigma_0}) = \psi^{\sigma_0}_s(Z^{\sigma_0})\setminus\nu_s(\Int D^3(1))$, is isometric to $k^{\sigma_0}_{s,t}$ restricted to $\psi^{\sigma_0}_s (Z^{\sigma_0} \setminus\mu (\Int D^3(r_2)))$. \end{proof} \bigskip For every $s \in \sigma \subset K$ and $t \in [0,1]$, we can now define \[ \td{g}^\sigma_{s,t} := \begin{cases} g^\sigma_{s,t} & \text{on $Z^\sigma \setminus (\psi^\sigma_s)^{-1} (\nu_s (D^3 (1.99)))$ if $s \in U$ or on $Z^\sigma$ if $s \not\in U$} \\ (\psi^\sigma_s)^* (\nu_s)_* h''_{s,t} & \text{on $(\psi^\sigma_s)^{-1} (\nu_s (D^3 (1.99)))$ if $s \in U$} \end{cases} \] To complete the proof of Lemma~\ref{lem_make_compatible_on_disk}, we now verify that $\{ (\td Z^\sigma, \linebreak[1] (\td{g}^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\td\psi^\sigma_s)_{s \in \sigma}) \}_{\sigma \subset K}$ is a partial homotopy at time $T$ relative to $L$ that is PSC-conformal over all $s \in K_{PSC}$. By Claim~\ref{cl_3_h}\ref{cl_3_h_a} and Claim~\ref{cl_5_hprimeprime}\ref{cl_5_hprimeprime_c}, \ref{cl_5_hprimeprime_d}, the metrics $\td{g}^\sigma_{s,t}$ are smooth and depend continuously on $s, t$. By Claim~\ref{cl_5_hprimeprime}\ref{cl_5_hprimeprime_e}, \ref{cl_5_hprimeprime_g}, $(\td{Z}^\sigma, \td{g}^\sigma_{s,1})$ is conformally flat and PSC-conformal. So $(\td Z^\sigma, (\td{g}^\sigma_{s,t}))$ are metric deformations. By Claim~\ref{cl_5_hprimeprime}\ref{cl_5_hprimeprime_g} the Riemannian manifold $(\td Z^\sigma, \td g^\sigma_{s,t})$ is PSC-conformal for all $t \in [0,1]$ and $s \in \sigma \subset K$ if $s \in K_{PSC} \cap U$.
If $s \in K_{PSC} \setminus U$, then $(\td Z^\sigma, \td g^\sigma_{s,t})=( Z^\sigma, g^\sigma_{s,t})$ is PSC-conformal by Assumption~\ref{lem_make_compatible_on_disk_v}. Let us now verify the properties of Definition~\ref{Def_partial_homotopy}. By Claim~\ref{cl_5_hprimeprime}\ref{cl_5_hprimeprime_b} and Claim~\ref{cl_3_h}\ref{cl_3_h_a} we have $(\psi^\sigma_s)^* g^{\prime,s}_T = g^\sigma_{s,0} = \td{g}^\sigma_{s,0}$. This verifies Property~\ref{prop_def_partial_homotopy_1}. Property~\ref{prop_def_partial_homotopy_2} does not concern $\td{g}^\sigma_{s,t}$, so it remains true. Property~\ref{prop_def_partial_homotopy_3} is immediate from the definition of $\td{g}^\sigma_{s,t}$. Property~\ref{prop_def_partial_homotopy_6} remains unchanged. It remains to verify Properties~\ref{prop_def_partial_homotopy_4}, \ref{prop_def_partial_homotopy_5} in Definition~\ref{Def_partial_homotopy}. For Property~\ref{prop_def_partial_homotopy_4} consider the closure $\mathcal{C}$ of a component of $Z^{ \tau} \setminus((\psi^{ \tau}_s )^{-1} \circ \psi^{ \sigma}_s ) ( Z^{ \sigma} )$, for some $\tau\subsetneq \sigma$, $s\in \tau\cap U$. We may assume that $\psi^\tau_s (\mathcal{C}) \cap \nu_s (D^3 (1.99)) \neq \emptyset$, because otherwise the property holds trivially. By Claim~\ref{cl_2_mu}\ref{cl_2_mu_b} we have $\psi^\tau_s (\mathcal{C}) \cap \nu_s (D^3 (1.99)) = \nu_s ( \ov {A^3(r_\tau,r_\sigma)})$ for some $r_\sigma \in (r_0, 2]$, $r_\tau\in \{0\}\cup (r_0,r_\sigma)$. Thus by Assumption~\ref{lem_make_compatible_on_disk_vii}, Claim \ref{cl_3_h} and Claim~\ref{cl_5_hprimeprime}\ref{cl_5_hprimeprime_f} we have that $(\psi^\tau_s)_* \td{g}^\tau_{s,t}$ is compatible with $\SS^s$ on $\psi^\tau_s (\mathcal{C})$. Lastly, consider Property~\ref{prop_def_partial_homotopy_5}. Let $\sigma \subset K$, $s \in \sigma \cap U$ and consider a boundary component $\Sigma \subset \partial Z^\sigma$ with $\psi^\sigma_s (\Sigma) \subset \nu_s (D^3 (1.99))$. 
By Claim~\ref{cl_3_h} we have that $h_{s,t}$ restricted to a neighborhood of the disk bounded by $\nu_s^{-1} (\Sigma)$ is compatible with the standard spherical structure. Thus by Claim~\ref{cl_5_hprimeprime}\ref{cl_5_hprimeprime_f}, so is $h''_{s,t}$, which proves Property~\ref{prop_def_partial_homotopy_5}. \end{proof} \section{Deforming families of metrics towards families of conformally flat metrics}\label{sec_deforming_families_metrics} \subsection{Statement of the main result and setup} \label{subsec_deforming_main_results} As in Subsection~\ref{subsec_gen_setup}, we fix a pair $(K,L)$ of topological spaces that is homeomorphic to the geometric realization of a pair of finite simplicial complexes $(\mathcal{K}, \mathcal{L})$, where $\mathcal{L} \subset \mathcal{K}$ is a subcomplex. We will mostly refer to the pair $(K,L)$ instead of $(\mathcal{K}, \mathcal{L})$ if there is no chance of confusion. In this section we will show the following theorem. \begin{theorem} \label{Thm_main_deform_to_CF} Consider a continuous family $(M^s, g^s)_{s \in K}$ of Riemannian manifolds. Suppose that $M^s$ is diffeomorphic to a connected sum of spherical space forms and copies of $S^2 \times S^1$ for all $s \in K$ and that $(M^s, g^s)$ is a CC-metric for all $s \in L$. Let $K_{PSC} \subset K$ be a closed subset with the property that $(M^s,g^s)$ has positive scalar curvature for all $s \in K_{PSC}$. Then there is a continuous family of Riemannian metrics $(h^s_t)_{s \in K, t \in [0,1]}$ on $(M^s)_{s \in K}$ such that for all $s \in K$: \begin{enumerate}[label=(\alph*)] \item $h^s_0 = g^s$. \item $h^s_1$ is conformally flat and PSC-conformal. \item If $s \in L$, then $h^s_t$ is a CC-metric for all $t \in [0,1]$. \item If $s \in K_{PSC}$, then $(M^s, h^s_t)$ is PSC-conformal for all $t \in [0,1]$.
\end{enumerate} \end{theorem} We will now reduce Theorem~\ref{Thm_main_deform_to_CF} to Lemma~\ref{Lem_main_existence_partial_homotopy} below, which concerns the existence of certain partial homotopies. By Theorem~\ref{thm_existence_family_k} there is a continuous family of singular Ricci flows $(\mathcal{M}^s)_{s \in K}$ with initial condition $(M^s, g^s)_{s \in K}$. We may identify $(\mathcal{M}^s_0, g^s_0) = (M^s, g^s)$ for all $s \in K$. By uniqueness, for all $s \in L$ all time-slices of $\mathcal{M}^s$ are CC-metrics. Moreover, by Theorem~\ref{Thm_PSC_preservation} the flow $\mathcal{M}^s$ has positive scalar curvature for all $s \in K_{PSC}$. By Theorem~\ref{Thm_extinction_time} there is a uniform time $T_0 < \infty$ at which these flows become extinct, i.e. $\mathcal{M}^s_t = \emptyset$ for all $t \geq T_0$ and $s \in K$. Next, we invoke Theorem~\ref{Thm_rounding} for some $\delta > 0$, which we will choose later. We obtain a transversely continuous family of $\mathcal{R}$-structures \[ \mathcal{R}^s = ( g^{\prime, s}, \partial^{\prime, s}_\t, U^s_{S2}, U^s_{S3}, \SS^s ) \] on $(\mathcal{M}^s)_{s \in K}$. Recall from Theorem~\ref{Thm_rounding} that \[ U_{S2}^s \cup U_{S3}^s = \big\{ x \in \mathcal{M}^s \;\; : \;\; \rho_{g^{\prime, s}} (x) < r_{\rot, \delta} (r_{\initial} (M^s, g^s) , \t(x)) \big\}. \] Due to the uniform extinction time $T_0$ and Assertion~\ref{ass_thm_rounding_e} of Theorem~\ref{Thm_rounding}, we can multiply the metrics $g^s$ with a large constant and assume without loss of generality that \begin{equation} \label{eq_rs_bigger_C0} r_{\initial} (M^s, g^s), \; r_{\can, \delta} (r_{\initial} (M^s, g^s) , t), \; r_{\rot, \delta} (r_{\initial} (M^s, g^s) , t) > C_0 \end{equation} for all $s \in K$ and $t \geq 0$ for which $\mathcal{M}^s_t \neq \emptyset$, where $10 < C_0 < \infty$ is a constant that we will choose later. 
Therefore, we have \begin{equation} \label{eq_US2US310} \{ \rho_{g^{\prime, s}} < C_0 \} \subset U^s_{S2} \cup U^s_{S3} \qquad \text{for all} \quad s \in K. \end{equation} In this section we will exclusively work with the objects $g^{\prime, s}, \partial^{\prime,s}_\t$ instead of $g^s, \partial^s_\t$ and we will often omit the index in expressions of the form ``$\rho_{g^{\prime, s}}$''. The following lemma summarizes all further properties of $g^{\prime,s}$ and $\partial^{\prime,s}_\t$ that we will use in this section. Fix an arbitrary constant $\Lambda > 100$ for the remainder of this section. \begin{lemma} \label{Lem_further_properties_gpdtp} If $C_0 \geq \underline{C}_0 (\Lambda)$ and $\delta \leq \ov\delta (\Lambda)$, then: \begin{enumerate}[label=(\alph*)] \item \label{ass_further_properties_gpdtp_a} $g^{\prime,s}_{0} = g^s$ for all $s \in K$. \item \label{ass_further_properties_gpdtp_aa} $\rho > 1$ on $\mathcal{M}^s_0$ for all $s \in K$. \item \label{ass_further_properties_gpdtp_b} There is some $T_{\ext} < \infty$ such that $\mathcal{M}^s_{t} = \emptyset$ for all $t \geq T_{\ext}$ and $s \in K$. \item \label{ass_further_properties_gpdtp_c} For any $r > 0$ the restriction of $\pi : \cup_{s \in K} \mathcal{M}^s \to K$ to $\{ \rho \geq r \}$ is proper. \item \label{ass_further_properties_gpdtp_d} There is a constant $\theta = \theta (r) \in (0, r^2]$ such that for any $s \in K$, $t_1, t_2 \geq 0$ with $|t_1 - t_2| \leq \theta$ the following is true.
If $x \in \mathcal{M}^s_{t_2}$ with $\rho (x) > r / 10$, then the point $x( t_1 )$ is defined and we have: \[ |\rho(x) - \rho(x(t_1))| < 10^{-3} \rho(x) \] \item \label{ass_further_properties_gpdtp_e} If in Assertion~\ref{ass_further_properties_gpdtp_d} we have $t_1 \leq t_2$ and $\rho (x (t_1) ) \leq \rho (x) \leq 10$, then there are embedded disks $D' \subset D \subset U^s_{S2} \cap \mathcal{M}^s_{t_2}$ with $x \in D'$ that are unions of spherical fibers of $\SS^s$ and such that $\rho > .9 \rho (x)$ on $D$, $\rho > 2 \Lambda^3 \rho (x)$ on $\partial D$, $\rho < 2 \rho (x)$ on $D'$ and such that $D'$ contains a singular spherical fiber of the form $\{ x' \} \subset D'$. \item \label{ass_further_properties_gpdtp_f} For any $s \in L$ the following is true: \begin{enumerate}[label=(g\arabic*)] \item If $(M^s, g^s)$ is homothetic to a quotient of the round sphere, then the flow of $\partial'_\t$ induces a homothety between $(\mathcal{M}^s_0, g^{\prime,s}_{0})$ and $(\mathcal{M}^s_t, g^{\prime,s}_{t})$ for all $t$ for which it is defined. \item If $(M^s,g^s)$ is homothetic to a quotient of the round cylinder, then for all $t \geq 0$ for which $\mathcal{M}^s_t \neq \emptyset$ the Riemannian manifold $(\mathcal{M}^s_t, g^{\prime,s}_{t})$ is homothetic to a (possibly different) quotient of the round cylinder and the flow of $\partial'_\t$ preserves the local isometric $O(3)$-actions on each time-slice $\mathcal{M}^s_t$. \end{enumerate} \item \label{ass_further_properties_gpdtp_g} If $(M^s, g^s )$ has positive scalar curvature, then $g^{\prime,s}$ has positive scalar curvature on every time-slice. Moreover if $t \geq 0$ and if $Y \subset \mathcal{M}^s_t$ is a compact 3-dimensional submanifold whose boundary components are regular spherical fibers of $\SS^s$ and $\rho \leq 1$ on $\partial Y$, then $(Y, g^{\prime,s}_t)$ is PSC-conformal.
\end{enumerate} \end{lemma} \begin{proof} Assertion~\ref{ass_further_properties_gpdtp_a} is a consequence of Theorem~\ref{Thm_rounding}\ref{ass_thm_rounding_b}. Assertion~\ref{ass_further_properties_gpdtp_aa} holds for $C_0 \geq \underline{C}_0$. Assertion~\ref{ass_further_properties_gpdtp_b} follows from Theorem~\ref{Thm_extinction_time}. Assertion~\ref{ass_further_properties_gpdtp_c} follows from Assertion~\ref{ass_further_properties_gpdtp_b} and Theorem~\ref{Thm_properness_fam_sing_RF}. Assertion~\ref{ass_further_properties_gpdtp_d} is a consequence of Assertion~\ref{ass_further_properties_gpdtp_c}. For Assertion~\ref{ass_further_properties_gpdtp_e} let $\delta' > 0$ be a constant whose value we will determine later. By Lemma~\ref{lem_bryant_increasing_scale}, assuming $C_0 \geq \underline{C}_0 (\delta')$, $\delta \leq \ov\delta (\delta')$ (see (\ref{eq_rs_bigger_C0})), the pointed Riemannian manifold $(\mathcal{M}_{t_2}^s, g^s_{t_2}, x)$ is $\delta'$-close to the pointed Bryant soliton $(M_{\Bry}, \linebreak[1] g_{\Bry}, \linebreak[1] x_{\Bry})$ at scale $\rho(x)$. If $\delta' \leq \ov\delta'$, then $.99 \rho_{g^s} \leq \rho_{g^{\prime,s}} \leq 1.01 \rho_{g^s}$ on $\{ \rho < C_0 \}$. Moreover if $\delta' \leq \ov\delta'$, then the scalar curvature of $g^{\prime,s}$ attains a unique maximum at some $x' \in B(x, .01\rho(x))$. Therefore $x,x' \in U^s_{S2}$ and $\{ x' \}$ is a singular fiber. Let $D'$ be the union of spherical fibers intersecting the minimizing geodesic between $x, x'$. Then the asserted properties of $D'$ hold for $\delta' \leq \ov\delta'$ and the existence of $D$ holds if $C_0 \geq \underline{C}_0 (\Lambda)$ and $\delta' \leq \ov\delta' (\Lambda)$. Assertion~\ref{ass_further_properties_gpdtp_f} follows by uniqueness of singular Ricci flows, Theorem~\ref{Thm_sing_RF_uniqueness}, and Theorem~\ref{Thm_rounding}\ref{ass_thm_rounding_d}. The first part of Assertion~\ref{ass_further_properties_gpdtp_g} holds if $\delta \leq \ov{\delta}$. 
The second part follows using Lemma~\ref{lem_CNA_SS_implies_PSC_conformal} if $C_0 \geq \underline{C}_0$ and $\delta \leq \ov{\delta}$. Observe that we may assume without loss of generality that $Y$ is disjoint from $U_{S3}^s$, because all components of $U_{S3}^s \cap \mathcal{M}^s_t$ are PSC-conformal. \end{proof} From now on let us fix the constants $C_0, \delta$ from Lemma~\ref{Lem_further_properties_gpdtp}, as well as the family of $\mathcal{R}$-structures $(\mathcal{R}^s)_{s \in K}$. Let $n := \dim K$ and set $r_k := \Lambda^{-4n + 4k-4}$. So \[ 0 < r_0 < \ldots < r_{n-1} < r_n = \Lambda^{-4}, \qquad \Lambda^4 r_k = r_{k+1}. \] By Proposition~\ref{prop_partial_homotopy_standard_homotopy} and Lemma~\ref{Lem_further_properties_gpdtp}\ref{ass_further_properties_gpdtp_aa}, Theorem~\ref{Thm_main_deform_to_CF} can be reduced to the following lemma. \begin{lemma} \label{Lem_main_existence_partial_homotopy} For any $T \geq 0$ there is a simplicial refinement of $\mathcal{K}$ and a partial homotopy $\{ ( Z^\sigma, \linebreak[1] (g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ at time $T$ relative to $L$ for $(\mathcal{R}^s)_{s \in K}$ that satisfies the following a priori assumptions for every $k$-simplex $\sigma\subset K$ and all $s \in \sigma$: \begin{enumerate}[label=(APA \arabic*), leftmargin=*] \item \label{APA1} $\{ \rho > 1 \} \cap \mathcal{M}^s_T \subset \psi^\sigma_s (Z^\sigma) \subset \{ \rho > r_k \}$ \item \label{APA2} Every component of $\psi^\sigma_s (Z^\sigma)$ contains a point with $\rho > \Lambda^2 r_k$. \item \label{APA3} $\rho > \Lambda r_k$ on $\psi^\sigma_s (\partial Z^\sigma)$. \item \label{APA4} $\{ ( Z^\sigma, \linebreak[1] (g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ is PSC-conformal over every $s \in K_{PSC}$. 
\end{enumerate} \end{lemma} The remainder of this section is devoted to the proof of Lemma~\ref{Lem_main_existence_partial_homotopy}, which proceeds by induction. If $T \geq T_{\ext}$, then the assertion of the lemma is true, as we can choose the trivial partial homotopy. So it remains to show that if the lemma is true for some time $T$, then it also holds at time $T - \Delta T$ for any $0 < \Delta T \leq \min \{ T, \theta (r_0) \}$, where $\theta$ is the constant from Lemma~\ref{Lem_further_properties_gpdtp}\ref{ass_further_properties_gpdtp_d}. For this purpose, fix $T$ and $\Delta T$ for the remainder of the section and consider some simplicial refinement $\mathcal{K}'$ of $\mathcal{K}$ and a partial homotopy $\{ ( Z^\sigma, \linebreak[1] (g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ at time $T$ satisfying the a priori assumptions \ref{APA1}--\ref{APA4}. Our goal in the next subsections will be to construct a partial homotopy at time $T - \Delta T$ that satisfies a priori assumptions \ref{APA1}--\ref{APA4}, after passing to a simplicial refinement of $\mathcal{K}'$.
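Note that, by the definition of the scales $r_k$, the thresholds appearing in the a priori assumptions \ref{APA1}--\ref{APA3} are separated by definite factors of $\Lambda$:
\[ r_k \;<\; \Lambda r_k \;<\; \Lambda^2 r_k \;<\; \Lambda^3 r_k \;<\; \Lambda^4 r_k \;=\; r_{k+1} \qquad \text{for} \quad 0 \leq k \leq n-1. \]
In particular, every scale threshold used over a $k$-simplex lies below every threshold used over a simplex of higher dimension by at least a factor of $\Lambda$; this gap will be used repeatedly in the containment arguments of the following subsections.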
As a preparation, we will first find a simplicial refinement of $K$ (using Proposition~\ref{prop_simp_refinement}) such that over each simplex we can choose certain continuous data, which will later serve as a blueprint for these modification moves. The main result of this subsection is the following lemma: \begin{lemma}[Passing to a simplicial refinement] \label{lem_simplicial_refinement} After passing to a simplicial refinement of $\mathcal{K}'$ and modifying the partial homotopy $\{ ( Z^\sigma, \linebreak[1] (g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ to respect this refined structure, we may assume in addition to a priori assumptions \ref{APA1}--\ref{APA4} that for any simplex $\sigma \subset K$ there are the following data: \begin{itemize} \item a compact manifold with boundary $\widehat{Z}^\sigma$, \item an embedding $\iota^\sigma: Z^\sigma \to \widehat{Z}^\sigma$, \item a continuous family of embeddings $(\widehat{\psi}^\sigma_s : \widehat{Z}^\sigma \to \mathcal{M}^s_{T})_{s \in \sigma}$ and \item continuous families of embeddings $(\nu^\sigma_{s,j} : D^3 \to \mathcal{M}^s)_{s \in \sigma, j = 1, \ldots, m_\sigma}$, \end{itemize} such that for all $s \in \sigma$, $j = 1, \ldots, m_\sigma$ and $k = \dim \sigma$: \begin{enumerate}[label=(\alph*)] \item \label{ass_refinement_a} $\psi_{s}^{\sigma} = \wh\psi^{\sigma}_{s} \circ \iota^{\sigma}$. \item \label{ass_refinement_b} For the closure $\mathcal{C}$ of every component of $\wh{Z}^{\sigma} \setminus \iota^{\sigma}( Z^\sigma)$ one (or both) of the following is true uniformly for all $s \in \sigma$: either $\wh\psi^\sigma_s (\mathcal{C})$ is a union of fibers of $\SS^{s}$, or $\partial\mathcal{C} = \emptyset$ and $\wh\psi^\sigma_{s} (\mathcal{C}) \subset U_{S3}^{s}$. In the second case the metrics $(\wh\psi^\sigma_s)^* g'_{s,T}$, $s \in \sigma$, are multiples of one another.
\item \label{ass_refinement_c} If $\sigma \subset L$, then $\wh\psi^\sigma_s (\wh{Z}^\sigma )$ is empty or equal to $\mathcal{M}^s_T$. \item \label{ass_refinement_d} $\widehat\psi^\sigma_s (\widehat{Z}^\sigma) \setminus \psi^\sigma_s (Z^\sigma) \subset \{ \rho > 2r_{k} \}$. \item \label{ass_refinement_e} $\{ \rho > \frac12 \Lambda^3 r_{k} \} \cap \mathcal{M}^s_T \subset \widehat\psi^\sigma_s (\widehat{Z}^\sigma)$. \item \label{ass_refinement_f} Every component of $\wh\psi^\sigma_s (\wh{Z}^\sigma)$ that does not contain a component of $\psi^\sigma_s (Z^\sigma)$ contains a point of scale $\rho > 2 \Lambda^2 r_k$. \item \label{ass_refinement_g} In every component of $\mathcal{M}^{s}_{T} \setminus \wh\psi^\sigma_{s}(\Int \wh{Z}^\sigma)$ that intersects $\wh\psi^\sigma_{s}(\wh{Z}^\sigma)$ there is a point with $\rho < 4 r_k$. \item \label{ass_refinement_h} $\rho > 2\Lambda r_k$ on $ \wh\psi^{\sigma}_{s}(\partial \wh{Z}^\sigma )\setminus \psi^\sigma_{s} ( \partial Z^\sigma )$. \item \label{ass_refinement_h_old} If $\sigma \cap L\neq\emptyset$, then $m_\sigma = 0$. \item \label{ass_refinement_disjoint} The images $\nu^\sigma_{s,1} (D^3), \ldots, \nu^\sigma_{s,m_\sigma} (D^3)$ are pairwise disjoint. \item \label{ass_refinement_ii} $\nu^\sigma_{s,j} (D^3) \subset \psi^\sigma_s (\Int Z^\sigma)$. \item \label{ass_refinement_i} $\nu^\sigma_{s,j} (D^3) \subset U^s_{S2}$ and the embedding $\nu^\sigma_{s,j}$ carries the standard spherical structure on $D^3$ to $\SS^s$ restricted to $\nu^\sigma_{s,j}(D^3)$. \item \label{ass_refinement_j} $\nu^\sigma_{s,1} (D^3) \cup \ldots \cup \nu^\sigma_{s,m_\sigma} (D^3)$ contains all singular spherical fibers of $\psi^\sigma_s (Z^\sigma) \cap U^s_{S2}$ that are points and on which $\rho < 4 r_k$. \item \label{ass_refinement_k} $\rho < 4 \Lambda r_k$ on $\nu^\sigma_{s,j} (D^3)$. \item \label{ass_refinement_l} $\rho > 2\Lambda r_k$ on $\nu^\sigma_{s,j} (\partial D^3)$ \end{enumerate} \end{lemma} The idea of the proof is the following. 
For every $s_0 \in \sigma \subset K$ we first construct continuous data $\wh{Z}^{s_0, \sigma}, \iota^{s_0, \sigma}, \wh\psi^{s_0, \sigma}_s, \nu^{s_0, \sigma}_{s,j}$ that satisfy the assertions of Lemma~\ref{lem_simplicial_refinement} for parameters $s$ that are close enough to $s_0$. We therefore obtain an open covering of $K$ consisting of subsets over which this data is defined. Our simplicial refinement of $\mathcal{K}'$ will later be taken to be subordinate to this open cover. Let us first construct $\wh{Z}^{s_0, \sigma}, \iota^{s_0, \sigma}, \wh\psi^{s_0, \sigma}_s$ for $s$ near some $s_0 \in \sigma \subset K$. \begin{lemma} For every $s_0 \in \sigma \subset K$ there is a neighborhood $U_{s_0, \sigma} \subset K$ of $s_0$, a compact manifold with boundary $\widehat{Z}^{s_0,\sigma}$, an embedding $\iota^{s_0, \sigma}: Z^\sigma \to \widehat{Z}^{s_0,\sigma}$ and a continuous family of embeddings $(\widehat{\psi}^{s_0,\sigma}_{s} : \widehat{Z}^{s_0,\sigma} \to \mathcal{M}^{s}_{T})_{s \in U_{s_0, \sigma} \cap \sigma}$ such that Assertions \ref{ass_refinement_a}--\ref{ass_refinement_h} of Lemma~\ref{lem_simplicial_refinement} hold for all $s \in U_{s_0, \sigma} \cap \sigma$ and $k = \dim \sigma$ if we replace $(\wh{Z}^{\sigma}, \iota^{\sigma}, \wh\psi^{\sigma}_s)$ with $(\wh{Z}^{s_0, \sigma}, \iota^{s_0, \sigma}, \wh\psi^{s_0, \sigma}_s)$. \end{lemma} \begin{proof} By a priori assumption \ref{APA1} and (\ref{eq_US2US310}) we have \[ \mathcal{M}^{s_0}_{T} \setminus \psi^{\sigma}_{s_0} ( \Int Z^\sigma ) \subset U^{s_0}_{S2} \cup U^{s_0}_{S3}. \] It follows from Definition~\ref{Def_R_structure}\ref{prop_def_RR_1} and Definition~\ref{Def_partial_homotopy} that each component of this difference is either contained in $U^{s_0}_{S2}$, in which case it is a union of fibers of $\SS^{s_0}$, or contained in $U^{s_0}_{S3}$, in which case it is homothetic to a quotient of a standard sphere. Let $Y \subset \mathcal{M}^{s_0}_{T} \setminus \psi^\sigma_{s_0} ( \Int Z^\sigma )$ be the set of points on which $\rho > 2 r_k$.
Define $Y' \subset Y$ to be the union of all connected components of $Y$ that contain a point of scale $\rho > 2 \Lambda^2 r_k$ or that intersect $\psi^\sigma_{s_0} ( \partial Z^\sigma )$. Note that any boundary component of $Y'$ that is contained in $Y'$ is also contained in $\psi^\sigma_{s_0} (\partial Z^\sigma)$. Furthermore, any connected component of $Y'$ is either contained in $U^{s_0}_{S3}$, in which case it is compact, or contained in $U^{s_0}_{S2}$, in which case it is a union of fibers of $\SS^{s_0}$. Therefore, every non-compact component of $Y'$ is a union of spherical fibers and must be diffeomorphic to one of the following manifolds (see Lemma~\ref{lem_spherical_struct_classification}): \[ S^2 \times [0, 1), \quad S^2 \times (0,1), \quad (S^2 \times (-1,1))/\mathbb{Z}_2, \quad B^3 \] Consider such a component $\mathcal{C} \subset Y'$. Call $\mathcal{C}$ \emph{good} if it contains a point with $\rho > 2 \Lambda^2 r_k$ and \emph{bad} otherwise. By construction, bad components must intersect $\psi^\sigma_{s_0} ( \partial Z^\sigma )$ and must therefore be either compact or diffeomorphic to $S^2 \times [0, 1)$. Suppose for a moment that $\mathcal{C}$ is good. Since $\rho \to 2 r_k$ near the ends of $\mathcal{C}$, we can find a minimal compact domain $\mathcal{C}' \subset \mathcal{C}$ such that \begin{enumerate} \item $\mathcal{C}'$ is a union of spherical fibers. \item $\mathcal{C} \setminus \mathcal{C}'$ is a union of neighborhoods of the ends of $\mathcal{C}$; so each component is diffeomorphic to $S^2 \times (0,1)$. \item $\rho < 4 \Lambda r_k$ on $\mathcal{C} \setminus \mathcal{C}'$. \end{enumerate} Call $\mathcal{C}'$ the \emph{core} of $\mathcal{C}$. We now define $Y''$ to be the union of compact components of $Y'$ and the cores of non-compact good components of $Y'$. Set $\widehat{Z}^{s_0,\sigma} := \psi^\sigma_{s_0} ( Z^\sigma ) \cup Y''$.
By Lemma~\ref{lem_chart_near_compact_subset} and isotopy extension, we can define $\wh\psi^{s_0,\sigma}_{s} : \wh{Z}^{s_0,\sigma} \to \mathcal{M}^s_T$ for $s \in \sigma$ close to $s_0$ such that $\wh\psi^{s_0,\sigma}_{s} ( \partial \widehat{Z}^{s_0,\sigma} )$ consists of spherical fibers and such that Assertion \ref{ass_refinement_a} of Lemma~\ref{lem_simplicial_refinement} holds. We can moreover construct $\wh\psi^{s_0,\sigma}_{s}$ in such a way that for every component $\mathcal{C} \subset Y$ with the property that $\partial \mathcal{C} = \emptyset$ and $\mathcal{C} \subset U^{s_0}_{S3}$ the metric $(\wh\psi^{s_0,\sigma}_{s})^* g'_{s,T}$ restricted to $\mathcal{C}$ is a multiple of the same constant curvature metric for all $s$ close to $s_0$. Then Assertion \ref{ass_refinement_b} of Lemma~\ref{lem_simplicial_refinement} holds for all $s$ close to $s_0$. By construction, Assertions \ref{ass_refinement_c}--\ref{ass_refinement_f}, \ref{ass_refinement_h} of Lemma~\ref{lem_simplicial_refinement} hold for $s = s_0$. Next, we argue that the same is true for Assertion~\ref{ass_refinement_g}. Assume by contradiction that $\rho \geq 4r_k$ on some component $\mathcal{C}^* \subset \mathcal{M}^s_T \setminus \wh\psi^\sigma_s (\Int \wh Z^\sigma)$ that intersects $\wh\psi^\sigma_s ( \wh Z^\sigma)$. Then the component $\mathcal{C}^{**} \subset \mathcal{M}^s_T \setminus \psi^\sigma_s (\Int Z^\sigma)$ containing $\mathcal{C}^*$ is a union of $\mathcal{C}^*$ with components of $Y$. Since $\rho \to 2 r_k$ near the open ends of $Y$, we find that $\mathcal{C}^{**} \subset Y$. However this implies that $\mathcal{C}^{**}$ is a compact component of $Y$ and therefore $\mathcal{C}^{**} \subset Y'' \subset \wh\psi^\sigma_s (\Int \wh Z^\sigma)$, which contradicts the choice of $\mathcal{C}^*$. Since Assertions \ref{ass_refinement_c}--\ref{ass_refinement_h} are open, they also hold for $s$ sufficiently close to $s_0$.
\end{proof} \begin{lemma} \label{lem_nu_s_js} For every $s_0 \in \sigma \subset K$ there is a neighborhood $V_{s_0, \sigma} \subset K$ of $s_0$ and continuous families of embeddings $(\nu^{s_0,\sigma}_{s,j} : D^3 \to \mathcal{M}^s)_{s \in V_{s_0, \sigma} \cap \sigma, j = 1, \ldots, m_{s_0, \sigma}}$ such that Assertions~\ref{ass_refinement_disjoint}--\ref{ass_refinement_l} of Lemma~\ref{lem_simplicial_refinement} hold for all $s \in V_{s_0, \sigma} \cap \sigma$, $j = 1, \ldots, m_{s_0, \sigma}$ and $k = \dim \sigma$ if we replace $(\nu^\sigma_{s,j} : D^3 \to \mathcal{M}^s)_{s \in \sigma, j = 1, \ldots, m_\sigma}$ with $(\nu^{s_0,\sigma}_{s,j} : D^3 \to \mathcal{M}^s)_{s \in V_{s_0, \sigma} \cap \sigma, j = 1, \ldots, m_{s_0, \sigma}}$. Moreover, instead of Assertion~\ref{ass_refinement_h_old} we have: \begin{enumerate}[label=(\alph*$\,'$), start=9] \item \label{ass_find_disk_i_prime} If $V_{s_0, \sigma} \cap L \neq \emptyset$, then $m_{s_0,\sigma} = 0$. \end{enumerate} \end{lemma} \begin{proof} Let $E \subset \cup_{s \in K} U^s_{S2}$ be the union of all spherical fibers that are points and on which $\rho \leq 4 r_k$. Then $E$ is closed in $\cup_{s \in K} U^s_{S2}$ and $E \cap \psi^\sigma_{s_0} (Z^\sigma) =: \{ x_1, \ldots, x_{m_{s_0, \sigma}} \}$ consists of finitely many points. For every $j = 1, \ldots, m_{s_0, \sigma}$ let $Y_j \subset \mathcal{M}^{s_0}_T$ be the union of all open disks $X \subset \mathcal{M}^{s_0}_T$ with the property that $x_j \in X$, $X \setminus \{ x_j \}$ consists of regular spherical fibers and $\rho \leq 3 \Lambda r_k$ on $X$. Then $Y_j$ is also an open disk. By a priori assumption \ref{APA2}, no component of $\psi^\sigma_{s_0} (Z^\sigma)$ is fully contained in the closure $\ov{Y}_j$ of $Y_j$. So, in particular, $\partial \ov{Y}_j \neq \emptyset$, which implies that $\partial \ov{Y}_j$ is a regular fiber and therefore $\ov{Y}_j$ is a closed disk. 
As the boundaries of both subsets $Y_j$ and $\psi^\sigma_{s_0} (Z^\sigma)$ consist of spherical fibers and $x_j \in Y_j \cap \psi^\sigma_{s_0} (Z^\sigma)$, we also obtain that $\ov{Y}_j \subset \psi^\sigma_{s_0} (\Int Z^\sigma)$. For any two $j,j' = 1, \ldots, m_{s_0, \sigma}$, $j \neq j'$ the disks $\ov{Y}_j$, $\ov{Y}_{j'}$ are disjoint, because otherwise their union would be a connected component of $\mathcal{M}^{s_0}_{T}$, which would again contradict a priori assumption \ref{APA2}. For every $j = 1, \ldots, m_{s_0, \sigma}$ choose a continuous family of points $(x'_{s,j} \in \mathcal{M}^s_T)$ for $s$ close to $s_0$ such that $x'_{s_0,j} = x_j$. Using the exponential map based at $x'_{s,j}$, we can construct continuous families of embeddings $(\nu^{s_0,\sigma}_{s,j} : D^3 \to \mathcal{M}^s)$ for $s$ close to $s_0$ such that $x'_{s,j} = \nu^{s_0,\sigma}_{s,j} ( 0)$ and $\nu^{s_0,\sigma}_{s_0,j} (D^3) = \ov{Y}_j$. For $s$ close to $s_0$, these disks are pairwise disjoint and satisfy Assertion~\ref{ass_refinement_i} of Lemma~\ref{lem_simplicial_refinement}. Assertions~\ref{ass_refinement_ii}, \ref{ass_refinement_j}--\ref{ass_refinement_l} of Lemma~\ref{lem_simplicial_refinement} hold for $s = s_0$ by construction and therefore by openness, they also hold for $s \in V_{s_0, \sigma}$, where $V_{s_0, \sigma}$ is a small enough neighborhood of $s_0$. For Assertion~\ref{ass_find_disk_i_prime} we can distinguish two cases: If $s_0 \not\in L$, then we can choose $V_{s_0, \sigma}$ such that $V_{s_0, \sigma} \cap L = \emptyset$. If $s_0 \in L$, then $\rho$ is constant on $\mathcal{M}^{s_0}_T$ and therefore by construction $m_{s_0, \sigma} = 0$. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem_simplicial_refinement}.] For every $s_0 \in K$ let \[ W_{s_0} := \bigcap_{s_0 \in \sigma \subset K} (U_{s_0, \sigma} \cap V_{s_0, \sigma} ). \] Then $W_{s_0}$ is still an open neighborhood of $s_0$ and $K = \cup_{s_0 \in K} W_{s_0}$.
Let now $\mathcal{K}''$ be a refinement of $\mathcal{K}'$ that is subordinate to this cover and for every simplex $\sigma \in \mathcal{K}''$ of this refinement let \begin{multline*} \big( \wh{Z}^\sigma, \iota^\sigma,( \wh\psi^\sigma_s)_{s \in \sigma}, (\nu^{\sigma}_{s,j} )_{s \in \sigma, j = 1, \ldots, m_\sigma} \big) \\ := \big(\wh{Z}^{s_\sigma, \sigma}, \iota^{s_\sigma,\sigma}, ( \wh\psi^{s_\sigma, \sigma}_s)_{s \in \sigma}, (\nu^{s_\sigma,\sigma}_{s,j} )_{s \in \sigma, j = 1, \ldots, m_{s_\sigma, \sigma}} \big), \end{multline*} where $s_\sigma \in \sigma$ is chosen such that $\sigma \subset W_{s_\sigma}$. We can now modify the partial homotopy $\{ ( Z^\sigma, \linebreak[1] (g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ according to Proposition~\ref{prop_simp_refinement} to respect the refinement $\mathcal{K}''$. To see that a priori assumptions \ref{APA1}--\ref{APA4} continue to hold, let $s \in \sigma''$ where $\sigma'' \in \mathcal{K}''$, $k'' = \dim \sigma''$, and choose $\sigma' \in \mathcal{K}'$, $k' = \dim \sigma'$ of minimal dimension such that $\sigma' \supset \sigma''$. Then $k' \geq k''$ and $\psi^{\sigma''}_s (Z^{\sigma''}) = \psi^{\sigma'}_s (Z^{\sigma'})$. Therefore we have \[ \{ \rho > 1 \} \cap \mathcal{M}^s_T \subset \psi^{\sigma''}_s (Z^{\sigma''}) \subset \{ \rho > r_{k'} \} \subset \{ \rho > r_{k''} \}, \] every component of $\psi^{\sigma''}_s (Z^{\sigma''})$ contains a point with $\rho > \Lambda^2 r_{k'} \geq \Lambda^2 r_{k''}$ and $\rho > \Lambda r_{k'} \geq \Lambda r_{k''}$ on $\psi^{\sigma''}_s (\partial Z^{\sigma''})$. 
\end{proof} \subsection{Improving the partial homotopy} Our next goal will be to construct a new partial homotopy by extending the domain of the partial homotopy $\{ ( Z^\sigma, \linebreak[1] (g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ according to the maps $(\wh\psi^\sigma_s)_{s \in \sigma}$ (using Proposition~\ref{prop_extending}) and by removing the disks $(\nu^\sigma_{s,j} (D^3))_{s \in \sigma, j = 1, \ldots, m_\sigma}$ (using Proposition~\ref{prop_move_remove_disk}). In the next subsection we will move the new partial homotopy backwards in time by the time step $\Delta T$. The following lemma will be used to verify that the resulting partial homotopy satisfies a priori assumptions \ref{APA1}--\ref{APA4}. \begin{lemma} \label{lem_newAPA} With the choices of $(\widehat{\psi}^\sigma_s )_{s \in \sigma}$ and $(\nu^\sigma_{s,j})_{s \in \sigma, j = 1, \ldots, m_\sigma}$ from Lemma~\ref{lem_simplicial_refinement} the following holds for all $s \in \sigma \subset K$. All points of \begin{equation} \label{def_X_psi} X^\sigma_s := \wh\psi^{\sigma}_s (\wh Z^{\sigma}) \setminus (\nu^\sigma_{s,1}(B^3) \cup \ldots \cup \nu^\sigma_{s,m_\sigma} (B^3)) \end{equation} survive until time $T - \Delta T$ and we have: \begin{enumerate}[label=(\alph*)] \item \label{ass_newAPA_a} $\{ \rho > \Lambda^3 r_{k} \} \cap \mathcal{M}^s_{T - \Delta T} \subset X^\sigma_s (T - \Delta T) \subset \{ \rho > r_{k} \}$. \item \label{ass_newAPA_b} Every component of $X^\sigma_s$ contains a point $x$ with $\rho (x(T-\Delta T)) > \Lambda^2 r_k$. \item \label{ass_newAPA_c} $\rho > \Lambda r_k$ on $\partial X^\sigma_s (T - \Delta T)$. \end{enumerate} \end{lemma} \begin{proof} Fix $s \in \sigma \subset K$. By a priori assumption \ref{APA1} and Lemma~\ref{lem_simplicial_refinement}\ref{ass_refinement_d} we have $\rho > r_k$ on $X^\sigma_s$. So by our choice of $\Delta T$ all points of $X^\sigma_s$ survive until time $T - \Delta T$.
We first show: \begin{Claim} If there is some $x \in \wh\psi^\sigma_s (\wh{Z}^\sigma)$ with $\rho (x(T - \Delta T)) \leq \rho (x) \leq 10$, then there is an embedded disk $D \subset \mathcal{M}^s_T$ such that: \begin{enumerate}[label=(\alph*)] \item \label{ass_disk_a} $x \in \Int D$. \item \label{ass_disk_b} $\rho > 2 \Lambda^3 r_k$ on $\partial D$. \item \label{ass_disk_c} If $\rho (x) > 5 r_k$, then $D \subset \wh\psi^\sigma_s ( \Int \wh{Z}^\sigma)$. \item \label{ass_disk_d} If $\rho (x (T - \Delta T)) < r_k$, then $x \in \nu^\sigma_{s,1} (B^3) \cup \ldots \cup \nu^\sigma_{s,m_\sigma} (B^3)$. \end{enumerate} \end{Claim} \begin{proof} Let $x, x' \in D' \subset D$ be the data from Lemma~\ref{Lem_further_properties_gpdtp}\ref{ass_further_properties_gpdtp_e}. Assertions~\ref{ass_disk_a} and \ref{ass_disk_b} follow immediately using a priori assumption \ref{APA1}. By a priori assumption \ref{APA2} and Lemma~\ref{lem_simplicial_refinement}\ref{ass_refinement_e} we have $\partial D \subset \wh\psi^\sigma_s (\wh{Z}^\sigma)$. So if Assertion~\ref{ass_disk_c} was false, then $D$ must contain a component of $\mathcal{M}^s_T \setminus \wh\psi^\sigma_s ( \Int \wh{Z}^\sigma)$ that intersects $\wh\psi^\sigma_s (\wh{Z}^\sigma)$ and on which $\rho > .9 \rho (x) >4 r_k$, in contradiction to Lemma~\ref{lem_simplicial_refinement}\ref{ass_refinement_g}. Lastly, we verify Assertion~\ref{ass_disk_d}. Assume that $\rho (x(T- \Delta T))< r_k$. Then by our choice of $\Delta T$ we have $\rho (x) < 2 r_k$, see Lemma~\ref{Lem_further_properties_gpdtp}\ref{ass_further_properties_gpdtp_d}. So by Lemma~\ref{lem_simplicial_refinement}\ref{ass_refinement_d} we have $x \in \psi^\sigma_s (Z^\sigma)$. By a priori assumption \ref{APA3} and the fact that $\rho < 2 \rho (x) < 4 r_k$ on $D'$, we obtain that $x' \in D' \subset \psi^\sigma_s (Z^\sigma)$. So by Lemma~\ref{lem_simplicial_refinement}\ref{ass_refinement_j} we have $x' \in \nu^\sigma_{s,j} (D^3)$ for some $j \in \{ 1, \ldots, m_\sigma \}$.
By Lemma~\ref{lem_simplicial_refinement}\ref{ass_refinement_l} we have $x \in D' \subset \nu^\sigma_{s,j} (B^3)$. This finishes the proof of Assertion~\ref{ass_disk_d}. \end{proof} We can now verify the assertions of this lemma. The first inclusion of Assertion~\ref{ass_newAPA_a} follows from Lemma~\ref{lem_simplicial_refinement}\ref{ass_refinement_e}, \ref{ass_refinement_k} and our choice of $\Delta T$, see Lemma~\ref{Lem_further_properties_gpdtp}\ref{ass_further_properties_gpdtp_d}. The second inclusion is a consequence of a priori assumption \ref{APA1}, Lemma \ref{lem_simplicial_refinement}\ref{ass_refinement_d}, Assertion~\ref{ass_disk_d} of the Claim and our choice of $\Delta T$. For Assertion~\ref{ass_newAPA_b} consider a component $\mathcal{C}$ of $X^\sigma_s$ and let $\mathcal{C}' \subset \wh{Z}^\sigma$ be the component with $\mathcal{C} \subset \wh\psi^\sigma_s (\mathcal{C}')$. Assume by contradiction that $\rho \leq \Lambda^2 r_k$ on $\mathcal{C} (T-\Delta T)$. So by our choice of $\Delta T$ and Lemma~\ref{lem_simplicial_refinement}\ref{ass_refinement_k} we have \begin{equation} \label{eq_rho_on_CCprime} \rho \leq \Lambda^2 r_k \quad \text{on} \quad \big(\wh\psi^\sigma_s (\mathcal{C}') \big)(T - \Delta T) \quad \Longrightarrow \quad \rho < 2 \Lambda^2 r_k \quad \text{on} \quad \wh\psi^\sigma_s (\mathcal{C}'). \end{equation} By Lemma~\ref{lem_simplicial_refinement}\ref{ass_refinement_f} there is a component $\mathcal{C}'' \subset Z^\sigma$ such that $\psi^\sigma_s (\mathcal{C}'') \subset \wh\psi^\sigma_s (\mathcal{C}')$. Due to a priori assumption \ref{APA2} we can find a point $x \in \psi^\sigma_s (\mathcal{C}'')$ such that $\rho (x) > \Lambda^2 r_k$. Since $\rho (x(T- \Delta T)) \leq \Lambda^2 r_k$, Assertion~\ref{ass_disk_c} of the Claim implies that there is an embedded disk $D \subset \wh\psi^\sigma_s (\mathcal{C}')$ with $\rho > 2 \Lambda^3 r_k$ on $\partial D$, in contradiction to (\ref{eq_rho_on_CCprime}).
Lastly, we verify Assertion~\ref{ass_newAPA_c}. Consider a component $\Sigma \subset \partial X^\sigma_s$. If $\Sigma = \nu^\sigma_{s,j} (\partial D^3)$ for some $j = 1, \ldots, m_\sigma$, then we are done by Lemma~\ref{lem_simplicial_refinement}\ref{ass_refinement_l} and our choice of $\Delta T$. Assume now that $\Sigma \subset \wh\psi^\sigma_s (\partial \wh{Z}^\sigma)$. If $\Sigma \not\subset \psi^\sigma_s (\partial Z^\sigma)$, then we are done by Lemma~\ref{lem_simplicial_refinement}\ref{ass_refinement_h} and our choice of $\Delta T$. Assume now that $\Sigma \subset \psi^\sigma_s (\partial Z^\sigma)$. By a priori assumption \ref{APA3} we have $\rho > \Lambda r_k$ on $\Sigma$. If $\rho \leq \Lambda r_k$ somewhere on $\Sigma (T - \Delta T)$, then we can apply Assertion~\ref{ass_disk_c} of the Claim to find an embedded disk $D \subset \wh\psi^\sigma_s (\Int \wh{Z}^\sigma )$ whose interior intersects $\Sigma$, contradicting the fact that $\Sigma \subset \wh\psi^\sigma_s (\partial \wh{Z}^\sigma)$. \end{proof} The next lemma ensures that all necessary containment relationships hold if we successively modify the partial homotopy $\{ ( Z^\sigma, \linebreak[1] (g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ according to the data $(\wh\psi^\sigma_s)_{s \in \sigma}$ and $(\nu^\sigma_{s,j} (D^3))_{s \in \sigma, j = 1, \ldots, m_\sigma}$ over all simplices of $K$. \begin{lemma} \label{lem_containment_psi} For any two simplices $\sigma, \tau \subset K$ and $s \in \tau \cap \sigma$ we have for all $j =1, \ldots, m_\tau$: \begin{enumerate}[label=(\alph*)] \item \label{ass_containment_psi_a} If $\dim \tau < \dim \sigma$, then $\wh\psi^\sigma_s (\wh{Z}^\sigma) \subset \wh\psi^\tau_s (\wh{Z}^\tau) \setminus (\nu^\tau_{s,1} (B^3) \cup \ldots \cup \nu^\tau_{s,m_\tau}(B^3)) $.
\item \label{ass_containment_psi_b} If $\dim \tau < \dim \sigma$, then $\nu^\tau_{s,j}(D^3) \cap \psi^\sigma_s (Z^\sigma) = \nu^\tau_{s,j}(D^3) \cap \wh\psi^\sigma_s (\wh Z^\sigma) = \emptyset$. \item \label{ass_containment_psi_c} If $\dim \tau \leq \dim \sigma$, then the image $\nu^\tau_{s,j} (D^3)$ does not contain an entire component of $\psi^{\sigma}_s (Z^{\sigma})$. \item \label{ass_containment_psi_d} If $\dim \tau \geq \dim \sigma$, then the image $\nu^\tau_{s,j} (D^3)$ does not contain an entire component of $\wh\psi^\sigma_s (\wh{Z}^\sigma) \setminus (\nu^\sigma_{s,1} (B^3) \cup \ldots \cup \nu^\sigma_{s,m_\sigma}(B^3))$. \end{enumerate} \end{lemma} \begin{proof} By a priori assumption \ref{APA1} and Lemma~\ref{lem_simplicial_refinement}\ref{ass_refinement_d} we have $\rho > r_{\dim \sigma}$ on $\wh\psi^\sigma_s (\wh{Z}^\sigma)$. On the other hand, by Lemma~\ref{lem_simplicial_refinement}\ref{ass_refinement_e}, \ref{ass_refinement_k} the set $\wh\psi^\tau_s (\wh{Z}^\tau) \setminus (\nu^\tau_{s,1} (B^3) \cup \ldots \cup \nu^\tau_{s,m_\tau}(B^3))$ contains all points in $\mathcal{M}^s_T$ of scale $\rho > \frac12 \Lambda^3 r_{\dim \tau}$ and $r_{\dim \sigma} > \frac12 \Lambda^3 r_{\dim \tau}$. This implies Assertion~\ref{ass_containment_psi_a}. Assertion~\ref{ass_containment_psi_b} is a direct consequence of Assertion~\ref{ass_containment_psi_a}. If $\dim \tau \neq \dim \sigma$, then Assertion~\ref{ass_containment_psi_c} follows from Assertion~\ref{ass_containment_psi_b} and Assertion~\ref{ass_containment_psi_d} follows from Assertion~\ref{ass_containment_psi_a}, because $\nu^\tau_{s,j}(D^3) \subset \psi^\tau_s (Z^\tau) \subset \wh \psi^\tau_s (\wh Z^\tau)$. So assume that $\dim \tau = \dim \sigma =: k$. 
By a priori assumption \ref{APA2} and Lemma~\ref{lem_simplicial_refinement}\ref{ass_refinement_f} every component of $\psi^{\sigma}_s (Z^{\sigma})$ or $\wh\psi^\sigma_s (\wh{Z}^\sigma) \setminus (\nu^\sigma_{s,1} (B^3) \cup \ldots \cup \nu^\sigma_{s,m_\sigma}(B^3))$ contains a point with $\rho > \Lambda^2 r_{k}$. On the other hand, by Lemma~\ref{lem_simplicial_refinement}\ref{ass_refinement_k}, we have $\rho < 4 \Lambda r_k < \Lambda^2 r_k$ on $\nu^\tau_{s,j} (D^3)$. So $\nu^\tau_{s,j} (D^3)$ cannot fully contain any such component. \end{proof} We now modify the partial homotopy $\{ ( Z^\sigma, \linebreak[1] (g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ using Proposition~\ref{prop_extending} and \ref{prop_move_remove_disk}. We proceed inductively over the dimension of the skeleton. More specifically, we claim that for every $k \geq 0$ we can construct a partial homotopy at time $T$ relative to $L$ that has the form \begin{equation} \label{eq_whtd_partial_homotopy} \big\{ (\td{Z}^{\sigma}, (\td{g}^{\sigma}_{s,t} )_{s \in \sigma, t \in [0,1]}, (\td\psi^{\sigma}_s)_{s \in \sigma} ) \big\}_{\substack{\sigma \subset K, \\ \dim \sigma < k}} \cup \{ (Z^{\sigma}, (g^{\sigma}_{s,t})_{s \in \sigma, t \in [0,1]}, (\psi^{\sigma}_s)_{s \in \sigma} ) \}_{\substack{\sigma \subset K, \\ \dim \sigma \geq k}} \end{equation} and for which \[ \td\psi^\sigma_s (\td{Z}^\sigma) = X^\sigma_s \] for all $s \in \sigma \subset K$ with $\dim \sigma < k$, where $X^\sigma_s$ is defined in (\ref{def_X_psi}). Note that if $k = 0$, then (\ref{eq_whtd_partial_homotopy}) can be taken to be the original partial homotopy $\{ (Z^\sigma, \linebreak[1] (g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\psi^\sigma_s)_{s \in \sigma}) \}_{\sigma \subset K}$. 
If (\ref{eq_whtd_partial_homotopy}) has been constructed for some $k$, then we can construct another partial homotopy of form (\ref{eq_whtd_partial_homotopy}) for $k+1$ by first applying Proposition~\ref{prop_extending}, using the data $\wh{Z}^\sigma, \iota^\sigma, (\wh\psi^\sigma_s)_{s \in \sigma}$, and then Proposition~\ref{prop_move_remove_disk}, using the data $( \nu^\sigma_{s, j})_{s \in \sigma, j = 1, \ldots, m_\sigma}$ over all simplices $\sigma \subset K$ of dimension $k+1$. Lemma~\ref{Lem_further_properties_gpdtp}\ref{ass_further_properties_gpdtp_g}, Lemma~\ref{lem_simplicial_refinement}\ref{ass_refinement_a}--\ref{ass_refinement_c}, \ref{ass_refinement_disjoint}--\ref{ass_refinement_i} and Lemma~\ref{lem_containment_psi} ensure that the assumptions of Proposition~\ref{prop_extending} and \ref{prop_move_remove_disk} hold. So by induction we obtain: \begin{lemma} \label{lem_apply_all_moves} There is a simplicial refinement of $\mathcal{K}$ and a partial homotopy $\{ ( \td{Z}^\sigma, \linebreak[1] (\td{g}^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\td\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ at time $T$ relative to $L$ for $(\mathcal{R}^s)_{s \in K}$ such that for all $s \in \sigma \subset K$ the set $X^\sigma_s := \td\psi^\sigma_s (\td{Z}^\sigma)$ satisfies Assertions~\ref{ass_newAPA_a}--\ref{ass_newAPA_c} of Lemma~\ref{lem_newAPA}. Moreover, $\{ ( \td{Z}^\sigma, \linebreak[1] (\td{g}^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\td\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ is still PSC-conformal over every $s \in K_{PSC}$. \end{lemma} \subsection{Construction of a partial homotopy at time $T - \Delta T$} Apply Proposition~\ref{prop_move_part_hom_bckwrds} with $T' = T - \Delta T$ to the partial homotopy $\{ ( \td{Z}^\sigma, \linebreak[1] (\td{g}^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\td\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$. 
The assumptions of this proposition are met due to Lemma~\ref{lem_newAPA} and (\ref{eq_US2US310}). We obtain a partial homotopy $\{ ( \td Z^\sigma, \linebreak[1] (\ov g^\sigma_{s,t})_{s \in \sigma, t \in [0,1]}, \linebreak[1] (\ov\psi^\sigma_s )_{s \in \sigma}) \}_{\sigma \subset K}$ at time $T- \Delta T$ with $\ov\psi^\sigma_s ( \td Z^\sigma) = X^\sigma_s (T - \Delta T)$. So by Lemma~\ref{lem_newAPA}, a priori assumptions \ref{APA1}--\ref{APA3} hold for this new partial homotopy. A priori assumption \ref{APA4} holds due to Proposition~\ref{prop_move_part_hom_bckwrds}\ref{ass_lem_move_part_hom_bckwrds_b} and Lemmas~\ref{Lem_further_properties_gpdtp}\ref{ass_further_properties_gpdtp_g}, \ref{lem_apply_all_moves}. This concludes our induction argument and proves Lemma~\ref{lem_newAPA}, which, as discussed in Subsection~\ref{subsec_deforming_main_results}, implies Theorem~\ref{Thm_main_deform_to_CF}. \section{Proofs of the main theorems} \label{sec_proofs_main_theorems} Theorem~\ref{Thm_main_general_case} from Section~\ref{sec_introduction} is a direct consequence of the following theorem. \begin{theorem} \label{Thm_main_conf_flat} Let $(K,L)$, $L \subset K$, be a pair of topological spaces that is homeomorphic to the geometric realization of a pair of finite simplicial complexes. Let $M$ be a connected sum of spherical space forms and copies of $S^2 \times S^1$. Consider a fiber bundle $\pi : E \to K$ whose fibers are homeomorphic to $M$ and whose structure group is $\operatorname{Diff} (M)$. Let $(g^s)_{s \in K}$ be a continuous family of fiberwise Riemannian metrics such that $( \pi^{-1} (s) , g^s)$ is isometric to a compact quotient of the standard round sphere or standard round cylinder for all $s \in L$. Then there is a continuous family of Riemannian metrics $(h^s_t)_{s \in K, t \in [0,1]}$ such that for all $s \in K$ and $t \in [0,1]$ \begin{enumerate}[label=(\alph*)] \item \label{Thm_main_conf_flat_a} $h^s_0 = g^s$.
\item \label{Thm_main_conf_flat_b} $h^s_1$ is conformally flat and has positive scalar curvature. \item \label{Thm_main_conf_flat_c} If $s \in L$, then $h^s_t = g^s$. \item \label{Thm_main_conf_flat_d} If $(M^s, g^s)$ has positive scalar curvature, then so does $(M^s, h^s_t)$. \end{enumerate} \end{theorem} \begin{proof} Due to Remark~\ref{rmk_fiber_bundle_construction} we can view the fiber bundle $\pi : E \to K$ as a continuous family of Riemannian manifolds $(M^s := \pi^{-1} (s), g^s)_{s \in K}$ with $M^s \approx M$. Let $K_{PSC} \subset K$ be a closed subset with the property that $(M^s, g^s)$ has positive scalar curvature for all $s \in K_{PSC}$. We first show the theorem with the following weaker assertions; we will later explain how to strengthen these assertions: \begin{enumerate}[label=(\alph*$'$), start=3] \item \label{Thm_main_conf_flat_cp} If $s \in L$, then one of the following is true for all $t \in [0,1]$: \begin{enumerate}[label=(c$'$\arabic*)] \item $M \approx S^3 / \Gamma$ and $h^s_t$ is a constant multiple of $g^s$. \item $M \approx (S^2 \times \mathbb{R}) / \Gamma$ and $(M^s, h^s_t)$ is a quotient of the round cylinder and the local isometric $O(3)$-actions of $(M^s, g^s)$ and $(M^s, h^s_t)$ are the same. \end{enumerate} \item \label{Thm_main_conf_flat_dp} $(M^s, h^s_t)$ has positive scalar curvature for all $s \in K_{PSC}$ and $t \in [0,1]$. \end{enumerate} Consider the family of metrics $(h^s_t)_{s \in K, t \in [0,1]}$ produced by Theorem~\ref{Thm_main_deform_to_CF}, based on the family $(M^s, g^s)_{s \in K}$. We claim that there is a continuous family of positive functions $(w^s_t \in C^\infty (M^s))_{s \in K, t \in [0,1]}$ such that for all $s \in K$ and $t \in [0,1]$: \begin{enumerate}[label=(\arabic*)] \item \label{prop_w_s_t_1} $w_0^s = 1$. \item \label{prop_w_s_t_2} $w_t^s = 1$ if $s \in L$. 
\item \label{prop_w_s_t_3} $(w^s_t)^4 h^s_t$ has positive scalar curvature, or equivalently $8 \triangle w^s_t - R_{h^s_t} w^s_t < 0$, if $(s,t) \in K \times \{ 1 \} \cup L \times [0,1]$. \end{enumerate} In fact, for any $(s_0, t_0) \in K \times [0,1]$ there is a neighborhood $U_{s_0, t_0} \subset K \times [0,1]$ and a continuous family of positive functions $(w^{s_0,s}_{t_0,t})_{(s,t) \in U_{s_0, t_0}}$ satisfying Properties~\ref{prop_w_s_t_1}--\ref{prop_w_s_t_3}. Moreover, Properties~\ref{prop_w_s_t_1}--\ref{prop_w_s_t_3} remain valid under convex combination. Therefore $(w^s_t)$ can be constructed from the families $(w^{s_0,s}_{t_0,t})$ using a partition of unity. The family $(\td{h}^s_t := (w^s_t)^4 h^s_t)_{s \in K, t \in [0,1]}$ satisfies Assertions~\ref{Thm_main_conf_flat_a}, \ref{Thm_main_conf_flat_b}, \ref{Thm_main_conf_flat_cp}, \ref{Thm_main_conf_flat_dp}. We will now argue that the theorem also holds with Assertion~\ref{Thm_main_conf_flat_c} replaced by \ref{Thm_main_conf_flat_cp} only. For this purpose, let $K_{PSC} \subset K$ be the set of parameters $s \in K$ for which $(M^s, g^s)$ has non-negative scalar curvature. Then $K_{PSC}$ is closed. After evolving the metrics $g^s$ by the Ricci flow for some small uniform time $\tau > 0$, we can produce families of metrics $(g^s_t)_{s \in K, t \in [0,\tau]}$ such that $g^s_0 = g^s$ for all $s \in K$ and such that $g^s_t$ even has positive scalar curvature for all $s \in K_{PSC}$ and $t \in (0, \tau]$. Apply our previous discussion to the family $(M^s, g^s_\tau)_{s \in K}$, resulting in a family $(h^{\prime, s}_t)$ satisfying Assertions~\ref{Thm_main_conf_flat_a}, \ref{Thm_main_conf_flat_b}, \ref{Thm_main_conf_flat_cp}, \ref{Thm_main_conf_flat_dp}. Then we can obtain the desired family $(h^s_t)$ by concatenating the resulting family $(h^{\prime, s}_t)$ with $(g^s_t)$.
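The stability of Properties~\ref{prop_w_s_t_1}--\ref{prop_w_s_t_3} under convex combination, which was used in the partition of unity argument above, is a routine check that we record for convenience. If $w = \sum_i \lambda_i w_i$, where $\lambda_i \geq 0$, $\sum_i \lambda_i = 1$ and each $w_i$ satisfies Properties~\ref{prop_w_s_t_1}--\ref{prop_w_s_t_3}, then $w > 0$ and the normalizations in Properties~\ref{prop_w_s_t_1}, \ref{prop_w_s_t_2} continue to hold, since these are affine constraints. Moreover, since the operator $w \mapsto 8 \triangle w - R_{h^s_t} w$ is linear,
\[ 8 \triangle w - R_{h^s_t} w = \sum_i \lambda_i \big( 8 \triangle w_i - R_{h^s_t} w_i \big) < 0 \]
whenever $(s,t) \in K \times \{ 1 \} \cup L \times [0,1]$, because every term with $\lambda_i > 0$ is negative.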
So assume in the following that $(h^s_t)_{s \in K, t \in [0,1]}$ satisfies Assertions~\ref{Thm_main_conf_flat_a}, \ref{Thm_main_conf_flat_b}, \ref{Thm_main_conf_flat_cp}, \ref{Thm_main_conf_flat_d}. It remains to construct a family $(h^{\prime\prime,s}_t)_{s \in K, t \in [0,1]}$ that in addition satisfies Assertion~\ref{Thm_main_conf_flat_c}. By gluing together local trivializations of the fiber bundle $\pi : E \to K$, we can find neighborhoods $L \subset U \Subset V \subset K$ and a transversely continuous bundle map $(F, f) : (E,K) \to (E, K)$; meaning that $\pi \circ F = f \circ \pi$ and that the maps $F |_{\pi^{-1} (s)} : \pi^{-1} (s) \to \pi^{-1} (f(s))$ are smooth diffeomorphisms that depend continuously on $s$ in the smooth topology; such that: \begin{enumerate}[start=4] \item $(M^s,g^s)$ has positive scalar curvature for all $s \in V$. \item $f = \id$ and $F = \id$ over $(K \setminus V) \cup L$ and $\pi^{-1} ((K \setminus V) \cup L)$. \item $f(U) \subset L$. \end{enumerate} Let $\eta : K \to [0,1]$ be a continuous map such that $1- \eta$ is supported in $U$ and $\eta \equiv 0$ on $L$. Then \[ h^{\prime\prime,s}_t := \ov{F}^* h^{f(s)}_{ \eta(s) t} \] satisfies Assertions~\ref{Thm_main_conf_flat_a}--\ref{Thm_main_conf_flat_d}, which finishes the proof. \end{proof} We can now prove Theorem~\ref{thm_psc_contractible} from Section~\ref{sec_introduction}. \begin{proof}[Proof of Theorem~\ref{thm_psc_contractible}] Suppose $\met_{PSC}(M)$ is nonempty. So $M$ is a connected sum of spherical space forms and copies of $S^2\times S^1$ by Perelman \cite{Perelman2}. Now consider a continuous map $\alpha:S^k\rightarrow \met_{PSC}(M)$ for some $k\geq 0$. Since $\met(M)$ is contractible, we may extend $\alpha$ to a continuous map $\wh\alpha:D^{k+1}\rightarrow\met(M)$. Letting $K:=D^{k+1}$ and $\pi:E:=K\times M\rightarrow K$ be the trivial bundle, the map $\wh\alpha$ defines a family of fiberwise Riemannian metrics as in Theorem~\ref{Thm_main_general_case}.
Applying the theorem, we may reinterpret the resulting family $(h^s_t)_{s\in K, t\in [0,1]}$ as defining a homotopy of pairs $(\wh\alpha_t:(D^{k+1},S^k)\rightarrow (\met(M),\met_{PSC}(M)))_{t\in [0,1]}$ where $\wh\alpha_0=\wh\alpha$ and $\wh\alpha_1$ takes values in $\met_{PSC}(M)$. Restricting this homotopy to $S^k$, we obtain a homotopy $(\alpha_t:S^k\rightarrow \met_{PSC}(M))_{t\in [0,1]}$ where $\alpha_1$ is null-homotopic via $\wh\alpha_1:D^{k+1}\rightarrow \met_{PSC}(M)$. Thus $\alpha$ is null-homotopic. Since $\alpha$ was arbitrary and $\met_{PSC}(M)$ has the homotopy type of a CW-complex \cite[Sec. 2.1]{rubinstein_et_al}, it follows that $\met_{PSC}(M)$ is contractible. \end{proof} \bigskip The remaining theorems from Section~\ref{sec_introduction} will follow from Theorem~\ref{Thm_main_conf_flat} using the following lemma. \begin{lemma} \label{lem_conf_flat_round} Let $(M,g)$ be a Riemannian 3-manifold. \begin{enumerate}[label=(\alph*)] \item \label{lem_conf_flat_round_a} If $M \approx S^3/\Gamma$ and $g$ is a conformally flat metric, then there is a unique $\phi \in C^\infty (M)$ such that $(M, e^{2\phi} g)$ is isometric to the standard round sphere and such that $\int e^{-\phi} d\mu_g$ is minimal. Moreover $\phi$ depends continuously on $g$ (in the smooth topology). \item \label{lem_conf_flat_round_b} If $M$ is diffeomorphic to a quotient of the round cylinder and $g$ is a conformally flat Riemannian metric on $M$, then there is a unique $\phi \in C^\infty (M)$ such that $e^{2\phi} g$ is isometric to a quotient of the standard round cylinder. Moreover, $\phi$ depends continuously on $g$ (in the smooth topology). \end{enumerate} \end{lemma} \begin{proof} Assertion~\ref{lem_conf_flat_round_a}. It suffices to consider the case $M \approx S^3$, since we may pass to the universal cover, and the uniqueness of the minimizer guarantees that it will descend to a minimizer on $M$.
Let $V_g \subset C^\infty (M)$ be the space of functions $\phi$ with the property that $(M, e^{2\phi} g)$ is isometric to the standard round sphere. By \cite{Kuiper-1949} this space is non-empty. Pick some arbitrary $\phi' \in V_g$ and identify $(M,e^{2\phi'} g)$ with $(S^3, g_{S^3})$. Then $g = e^{-2\phi'} g_{S^3}$. So we need to show that the functional \[ F_{\phi'} (\phi) := \int_{S^3} e^{-\phi} e^{-3\phi'} d\mu_{S^3} \] restricted to $V_g$ has a unique minimum. To see this note that $\phi \in V_g$ if and only if $(S^3, e^{2(\phi- \phi')} g_{S^3})$ is isometric to the standard round sphere, which holds if and only if for some $\vec c \in \mathbb{R}^4$ \[ \phi - \phi' = - \log \big(\sqrt{1+ |\vec{c}|^2} - \vec{c} \cdot \vec{x} \big). \] In this case we obtain that \[ F_{\phi'} (\phi) = \int_{S^3} \big(\sqrt{1+ |\vec{c}|^2} - \vec{c} \cdot \vec{x} \big) e^{-4\phi'} d\mu_{S^3} =: \td{F}_{\phi'} (\vec{c}). \] It can be verified easily that $\td{F}_{\phi'} : \mathbb{R}^4 \to \mathbb{R}$ is strictly convex. Moreover along every ray $t \mapsto t \vec{c}$ we have \[ \lim_{t \to \infty} \td{F}_{\phi'} (t \vec{c}) = \lim_{t \to \infty} \int_{S^3} \big(\sqrt{1+ t^2 |\vec{c}|^2} - t \vec{c} \cdot \vec{x} \big)e^{-4\phi'}d\mu_{S^3} = \infty. \] So $\td{F}_{\phi'}$ and therefore $F_{\phi'} |_{V_g}$ attains a unique minimum. For the continuous dependence claim, note that for any continuous family $(g_s)_{s \in X}$ of conformally flat metrics on $M$ we can find a continuous family of smooth maps $(\psi_s : M \to S^3)_{s \in X}$ such that $\psi^*_s g_{S^3} = e^{2\phi'_{s}} g_s$ for some continuous family $(\phi_s)_{s \in X}$ of smooth functions; such a family can be constructed via the developing map for example, compare with the methods of \cite{Kuiper-1949}. So it suffices to show that the minimizer of the functional $F_{\phi'_{s}}$ depends continuously on $s$, which is clear since the Hessians of $\td{F}_{\phi'_{s}}$ are positive definite.
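The strict convexity and positive definiteness of the Hessian used above can be verified by a direct computation, which we record for convenience: the term $\vec{c} \cdot \vec{x}$ is linear in $\vec{c}$ and hence has vanishing Hessian, while
\[ \frac{\partial^2}{\partial c_i \partial c_j} \sqrt{1 + |\vec{c}|^2} = \frac{(1 + |\vec{c}|^2) \, \delta_{ij} - c_i c_j}{(1 + |\vec{c}|^2)^{3/2}}, \]
which is positive definite, with eigenvalue $(1 + |\vec{c}|^2)^{-1/2}$ on $\vec{c}^{\perp}$ and $(1 + |\vec{c}|^2)^{-3/2}$ in the direction of $\vec{c}$. Therefore
\[ \frac{\partial^2 \td{F}_{\phi'}}{\partial c_i \partial c_j} (\vec{c}) = \int_{S^3} \frac{(1 + |\vec{c}|^2) \, \delta_{ij} - c_i c_j}{(1 + |\vec{c}|^2)^{3/2}} \, e^{-4\phi'} d\mu_{S^3} \]
is positive definite, as $e^{-4\phi'} > 0$.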
Assertion~\ref{lem_conf_flat_round_b}. We first observe that $M$ is diffeomorphic to either $S^2\times S^1$ or to $\mathbb{R} P^3\# \mathbb{R} P^3$. Let $dev:\td M\rightarrow S^3$ be a developing map of the conformally flat structure and $\pi_1(M)\stackrel{\rho}{\curvearrowright}S^3$ be the holonomy action, so that $dev$ is $\rho$-equivariant for the deck group action $\pi_1(M)\curvearrowright\td M$. We identify the conformal group $\operatorname{Conf}(S^3)$ with $\operatorname{Isom}(\mathbb{H}^4)$. Since $\pi_1(M)$ has a cyclic subgroup of index at most $2$, we have the following three cases. {\em Case 1: The action $\pi_1(M)\stackrel{\rho}{\curvearrowright} \mathbb{H}^4$ is elliptic, i.e. fixes a point in $\mathbb{H}^4$. \quad} Then there exists a $\rho$-invariant metric $g_0$ with sectional curvature $\equiv 1$ in the conformal class of $S^3$. The pullback $dev^*g_0$ is $\pi_1(M)$-invariant and complete, and hence must be isometric to $S^3$, contradicting the fact that $\td M$ is noncompact. {\em Case 2: The action $\pi_1(M)\stackrel{\rho}{\curvearrowright} \mathbb{H}^4$ is parabolic, i.e. it fixes precisely one point $p$ in the ideal boundary $\partial \mathbb{H}^4=S^3$, and has no fixed points in $\mathbb{H}^4$. \quad} Letting $S=dev^{-1}(p)$, then $S$ is a closed, discrete subset of $\td M$ because $dev$ is an immersion. There is a $\rho$-invariant complete flat metric $\wh g$ on $S^3\setminus\{p\}$. Letting $\td M':=\td M\setminus S$, the pullback $dev\big|_{\td M'}^*\wh g$ is a complete flat metric on the simply connected manifold $\td M'$, and must therefore be isometric to $\mathbb{R}^3$; this contradicts the fact that $\td M$ is diffeomorphic to $(S^2\times \mathbb{R})\setminus S$. {\em Case 3: The action $\pi_1(M)\stackrel{\rho}{\curvearrowright} \mathbb{H}^4$ is hyperbolic, i.e. it preserves an axial geodesic $\gamma\subset \mathbb{H}^4$, and fixes precisely two points $p_1,p_2\in S^3=\partial \mathbb{H}^4$. 
\quad } Letting $S=dev^{-1}(\{p_1,p_2\})$, then $S$ is a closed, discrete subset of $\td M$. There is a $\rho$-invariant complete cylindrical metric $\wh g$ on $S^3\setminus\{p_1,p_2\}$. Letting $\td M':=\td M\setminus S$, the pullback $dev\big|_{\td M'}^*\wh g$ is a complete metric on the simply connected manifold $\td M'$ which is locally isometric to $S^2\times \mathbb{R}$. By the splitting theorem, $\td M'$ can have at most two ends, and hence $S=\emptyset$, and $dev\big|_{\td M}:\td M\rightarrow S^3\setminus\{p_1,p_2\}$, being a local isometry between simply connected complete manifolds, must be an isometry. (Alternatively, use the developing map for the cylindrical structure.) Now suppose $\wh g'$ is another cylindrical metric on $S^3\setminus\{p_1,p_2\}$ that is invariant under the action of $\pi_1(M)\stackrel{\rho}{\curvearrowright}S^3$ and conformal to $\wh g$. Then there is an isometry $\phi:(S^3\setminus\{p_1,p_2\},\wh g)\rightarrow (S^3\setminus\{p_1,p_2\},\lambda\wh g')$ for some $\lambda>0$; but then $\phi$ is a conformal diffeomorphism of $S^3\setminus\{p_1,p_2\}$. Hence $\phi\in \operatorname{Isom}(\td M,\wh g)$, and so $\wh g=\lambda \wh g'$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm_cc_contractible}.] We can argue similarly as in the proof of Theorem~\ref{thm_psc_contractible}. Suppose that $\Met_{CC} (M)$ is non-empty and thus $M$ is diffeomorphic to an isometric quotient of the round sphere or round cylinder. Consider a continuous map $\alpha : S^k \to \Met_{CC} (M)$ for some $k \geq 0$ and let $\wh\alpha:D^{k+1}\rightarrow\met(M)$ be an extension of $\alpha$.
As in the proof of Theorem~\ref{thm_psc_contractible} we can apply Theorem~\ref{Thm_main_general_case} to the associated family of metrics on the trivial bundle over $K = D^{k+1}$, this time with $L = S^k = \partial D^{k+1}$, and obtain a homotopy of pairs $(\wh\alpha_t:(D^{k+1},S^k)\rightarrow (\met(M),\met_{CC}(M)))_{t\in [0,1]}$, where $\wh\alpha_0=\wh\alpha$ and $\wh\alpha_1$ takes values in the space $\met_{CF}(M)$ of conformally flat metrics on $M$. Let $(g_s)_{s \in D^{k+1}}$ be the family of conformally flat metrics on $M$ corresponding to the map $\wh\alpha_1$. By Lemma~\ref{lem_conf_flat_round} there is a certain continuous family $(\phi_s \in C^\infty (M))_{s \in D^{k+1}}$ such that $g'_s := e^{2\phi_s} g_s \in \Met_{CC} (M)$. By the uniqueness statements in Lemma~\ref{lem_conf_flat_round} we have $\phi_s \equiv 0$ for all $s \in \partial D^{k+1}$. So the family of metrics $(g'_s)_{s \in D^{k+1}}$ defines a null-homotopy of $\wh\alpha_1$ in $\Met_{CC}(M)$. So $\alpha$, which is freely homotopic to $\wh\alpha_1$, is null-homotopic. Since $\alpha$ was arbitrary and $\met_{CC}(M)$ has the homotopy type of a CW-complex \cite[Sec. 2.1]{rubinstein_et_al}, it follows that $\met_{CC}(M)$ is contractible. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm_gen_smal}.] Theorem~\ref{thm_gen_smal} follows from Theorem~\ref{thm_cc_contractible} via a standard topological argument, see for example \cite[Lemma 2.2]{gsc}. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm_RP3RP3}.] Let $M = \mathbb{R} P^3 \# \mathbb{R} P^3$ and consider a metric $g \in \Met_{CC} (M)$. It can be seen easily that $(M, g)$ is isometric to $(S^2 \times S^1(l_g) )/ \mathbb{Z}_2$, for some $l_g > 0$, where $S^2$ is equipped with the standard round metric, $S^1(l_g)$ has total length $l_g$ and $\mathbb{Z}_2$ acts by the antipodal map on $S^2$ and by a reflection with two fixed points on $S^1(l_g)$.
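In this description the renormalization used in the next step can be made explicit; as a sketch, assuming the product normalization $g = g_{S^2} + g_{S^1(l_g)}$ with $S^2$ the unit round sphere: since $\Ric_g = g_{S^2} \oplus 0$, we have
\[ a \, g + b \Ric_g = (a + b) \, g_{S^2} \oplus a \, g_{S^1(l_g)}, \]
which is isometric to $(S^2 \times S^1(2\pi ) )/ \mathbb{Z}_2$ precisely when $a + b = 1$ and $\sqrt{a} \, l_g = 2\pi$; that is,
\[ a_g = \frac{4 \pi^2}{l_g^2}, \qquad b_g = 1 - \frac{4 \pi^2}{l_g^2}. \]
In particular, $a_g$ and $b_g$ depend continuously on $g$ through $l_g$.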
There are unique numbers $a_g, b_g \in \mathbb{R}$, depending continuously on $g$, such that $a_g g + b_g \Ric_g$ is isometric to $(S^2 \times S^1(2\pi ) )/ \mathbb{Z}_2$. Denote by $\Met_{CC1}(M) \subset \Met_{CC} (M)$ the space of metrics that are isometric to $(S^2 \times S^1(2\pi ) )/ \mathbb{Z}_2$. It follows that $\Met_{CC} (M)$ deformation retracts onto $\Met_{CC1}(M)$ and thus, by Theorem~\ref{thm_cc_contractible}, the space $\Met_{CC1}(M)$ is contractible as well. We can now argue as in the proof of Theorem~\ref{thm_gen_smal} that \[ O(1) \times O(3) \cong \Isom ((S^2 \times S^1(2\pi ) )/ \mathbb{Z}_2) \longrightarrow \operatorname{Diff}(M) \] is a homotopy equivalence. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm_S2S1_diff}.] Let $M = S^2 \times S^1$ and denote by $S(M)$ the space of spherical structures on $M$. We can view $S(M)$ as a subspace of the space of 2-dimensional distributions equipped with Riemannian metrics, which carries a natural smooth topology; equip $S(M)$ with the subspace topology. Note that continuity with respect to this topology is equivalent to transverse continuity of spherical structures in the sense of Definition~\ref{Def_spherical_struct_transverse_cont}. The space $S(M)$ has the homotopy type of a CW-complex, since it is a Fr\'echet manifold (see \cite[Sec. 2.1]{rubinstein_et_al}). Let $\SS_{S^2 \times S^1} \in S(M)$ be the standard spherical structure on $S^2 \times S^1$. 
It can be seen as in the proof of \cite[Lemma 2.2]{gsc} that the map \[ \operatorname{Diff} (M) \longrightarrow S(M), \qquad \phi \longmapsto \phi^* \SS_{S^2 \times S^1} \] is a fibration and that the inclusion $\operatorname{Diff} (M, \SS_{S^2 \times S^1} ) \to \operatorname{Diff} (M)$ is a homotopy equivalence if and only if $S(M)$ is contractible, where \[ \operatorname{Diff} (M, \SS_{S^2 \times S^1} ) \cong \operatorname{Diff} (S^1) \times O(3) \times \Omega O(3) \cong O(2) \times O(3) \times \Omega O(3) \] denotes the space of diffeomorphisms fixing $\SS_{S^2 \times S^1}$. So it remains to show that $S(M)$ is contractible. To see this, consider a continuous family of spherical structures $(\SS^s)_{s \in S^k}$, $k \geq 0$, on $M$. For every $s \in S^k$ the space of metrics in $\Met_{CC} (M)$ that are compatible with $\SS^s$ is convex and non-empty (see Lemma~\ref{lem_spherical_struct_classification}). Therefore, we can find a continuous family of Riemannian metrics $(g^s \in \Met_{CC} (M))_{s \in S^k}$ compatible with $(\SS^s)_{s \in S^k}$. By Theorem~\ref{thm_cc_contractible} we can extend this family to a continuous family $(\wh g^s \in \Met_{CC} (M))_{s \in D^{k+1}}$. The corresponding family of spherical structures $(\wh\SS^s)_{s \in D^{k+1}}$ constitutes a null-homotopy of $(\SS^s)_{s \in S^k}$. \end{proof}
\section{Introduction} In a piezoelectric material, an applied \emph{uniform} strain can induce electric polarisation (or vice versa). Crystallographic considerations restrict this important property to non-centrosymmetric systems. By contrast, flexoelectricity\footnote{\emph{Flexo} and \emph{piezo} derive from the Latin \emph{flecto} (to bend) and the Greek \emph{piezein} (to squeeze).} is the property of an insulator whereby it polarises when subjected to an \emph{inhomogeneous deformation} (i.e.\ a strain gradient). The inhomogeneous deformation breaks the material's centrosymmetry, thereby allowing polarisation in non-piezoelectric materials. Flexoelectricity can occur in materials of any symmetry, broadening the range of materials for use as actuators and sensors \citep{Jiang2013}. Flexoelectricity is therefore of considerable interest to the engineering community and is the subject of extensive research. Flexoelectricity in solids was first identified by \citeauthor{Mashkevich1957} \citep{Mashkevich1957, Tolpygo1963} and the theoretical foundations laid by \citet{Kogan1964}. Recent rapid advancements in the miniaturisation of fabricated components have stimulated substantial experimental research into the flexoelectric effect \cite{Ma2001, Ma2002, Zubko2007} as gradient effects are more pronounced at smaller length scales. Structures at small length scales can also exhibit a size-dependent mechanical response \citep[see e.g.][]{Stelmashenko1993, Fleck1994}. Thus any representative model for flexoelectricity needs to account for both the coupling of the electrical response to a strain gradient and size-dependent mechanical effects. Reviews on flexoelectricity include \citep{Tagantsev1987, Tagantsev1991, Maranganti2006, Ma2010, Nguyen2013, Lee2012, Zubko2013, Krichen2016}. The flexoelectric effect can be classified as direct or converse.
In the direct effect, a strain gradient induces polarisation; in the converse effect, an electric field gradient induces a mechanical stress. The direct flexoelectric effect will allow novel piezoelectric composites containing no piezoelectric elements to be developed \citep{Ma2010}. Flexoelectricity is also responsible for electromechanical behaviour in hard crystalline materials and underpins core mechanoelectric transduction phenomena in biomaterials \citep{Nguyen2013}. Classical continuum theories are unable to account for the size-dependent response exhibited by structures at small length scales. Extended continuum models have been actively developed over the past three decades to remedy this deficiency. A significant proportion of extended models are members of either the gradient or micromorphic frameworks. Micromorphic continua are characterised by additional degrees of freedom at each continuum point \citep{Eringen1999, Mindlin1964, Toupin1964}. By contrast, gradient continua possess higher gradients of their primary fields \citep[see][and the references therein]{Forest2009}. The purely mechanical micromorphic theory has been extended to account for electromagnetic coupling by including the additional classical continuum electrodynamic contributions in the balance relations \citep[see e.g.][]{Eringen2003, Eringen2004}. \citet{Romeo2011} accounted directly for electromagnetic contributions at the microscale within a micromorphic framework by including electric dipole and quadrupole densities. This theory was extended to account for dielectric multipoles \citep{Romeo2015, Romeo2020} and thereby describe the piezoelectric and flexoelectric effect. Numerical models that capture the key physics of flexoelectricity for arbitrary geometries in three dimensions are however limited. This is particularly true for soft dielectric materials that can undergo significant deformation.
A central impediment to developing finite element models for flexoelectricity, or indeed gradient elasticity, is the requirement that the basis functions used to approximate the displacement field must be piecewise smooth and globally $C^1$-continuous. This constraint arises as the partial differential equation governing the mechanical problem is of fourth-order. By contrast, one only requires a standard $C^0$-continuous approximation for electro-elasticity. $C^1$-continuous finite element approximations for complex geometries in three space dimensions are limited \citep{Gomez2008}. Options include isogeometric analysis \citep{Hughes2005}, mixed formulations, discontinuous Galerkin approximations \citep{Engel2002}, the natural element method \citep{Sukumar1999} and other specialised element formulations, and meshless methods \citep{Askes2002}. Many of these methods are not easily implemented within a conventional finite element library. Many of the aforementioned methods to generate $C^1$-continuous finite element approximations have been used to model the problem of flexoelectricity. \citet{Abdollahi2014} chose a meshless method. The analysis was restricted to two dimensions and to the linearised theory. They recently extended the formulation to three dimensions to provide new insight into the pyramid compression tests used to characterise the flexoelectric parameters. Related works include \citep{Abdollahi2015b, Abdollahi2015c}. \citet{Deng2014} developed a nonlinear theory for flexoelectricity in soft materials and biological membranes. Numerical results were restricted to one space dimension. They used a fourth-order approximation for the displacement field. This, however, is not sufficient for a global $C^1$-continuous finite element approximation. A mixed formulation based on the theory of generalised (extended) continua was proposed by \citet{Mao2016}.
The mixed approach allowed the linearised gradient theory to be treated within a standard $C^0$-continuous finite element setting. In a key contribution, \citet{Yvonnet2017} extended the nonlinear theory of electroelasticity \citep[see e.g.][and references therein]{Dorfmann2005, Pelteret2016, Vu2006} to account for the coupling between polarisation and the gradient of the deformation gradient $\gz{G}$, a third-order tensor. A non-standard, $C^1$-continuous, Argyris-triangle-based finite element formulation was used. This restricts the approach to relatively simple geometries and two dimensions. In contrast to the majority of flexoelectricity models, the free space surrounding the continuum body was accounted for. A major contribution of the work presented here is to model the scale-dependent effects that underpin flexoelectricity (the direct effect) using the micromorphic approach. The formulation is not restrictive and can handle arbitrary geometries in three space dimensions. We exploit the Dirichlet principle to uncover the relations governing the response of a (soft) dielectric material exhibiting flexoelectric effects. Both geometric and material nonlinearities are accounted for. The framework is flexible and allows one to describe a range of related problems via an appropriate restriction of the constitutive parameters. Several forms for the flexoelectric energy are proposed. The highly nonlinear system of governing equations is solved approximately using the finite element method. A Newton--Raphson strategy is used to linearise the problem. The framework is robust and exploits distributed parallelisation and automatic differentiation to improve the efficiency and to simplify the implementation, respectively. Parallelisation helps offset the increased computational cost that arises in the micromorphic approach due to the need to approximate the micro-deformation field, a second-order tensor, in addition to the motion and the electric potential.
The finite element model is implemented with the open-source library deal.II \citep{Bangerth2007, dealII91}. The structure of the presentation is as follows. The theoretical background is presented in \sect{sec_theory}. This includes the kinematics of the macroscopic, micromorphic (microscopic) and electric problems. The governing equations and boundary conditions are then derived using the Dirichlet principle. Concrete forms for the constitutive relations are also given. Details of the monolithic finite element formulation are provided in \sect{sec_fe_approximation}. The theory is then elucidated via a series of numerical example problems in \sect{sect_numerical_examples}. The presentation concludes with a summary and discussion. \section*{Notation} Direct notation is adopted throughout. Occasional use is made of index notation, the summation convention for repeated indices being implied. Indices associated with the reference configuration and the current configuration of the body are distinguished by the use of upper- and lower-case font, respectively. The scalar products of two vectors $\gz a$ and $\gz b$, two second-order tensors $\gz A$ and $\gz B$, and two third-order tensors $\gz Q$ and $\gz G$ are respectively denoted by \begin{align*} \gz a \cdot \gz b = a_i b_i \, , && \gz A : \gz B = A_{ij} B_{ij} \, , && \gz Q \;\cdot\!\!: \gz G := Q_{ijk} G_{ijk} \, . \end{align*} The conventional dyadic product of two vectors, and of two second-order tensors are respectively given by \begin{align*} \gz a \otimes \gz b = a_i b_j \gz e_i \otimes \gz e_j && \text{and} && \gz A \otimes \gz B = A_{ij} B_{kl} \gz e_i \otimes \gz e_j \otimes \gz e_k \otimes \gz e_l \, , \end{align*} where $\gz e_i \in \mathbb{R}^{n^\text{dim}}$ and $\gz E_I \in \mathbb{R}^{n^\text{dim}}$ are the basis vectors of the Cartesian coordinate frame in the current (spatial) and reference (material) settings, respectively, and $n^\text{dim}$ is the space dimension. 
The upper and lower dyadic products of pairs of second-order tensors are respectively given by \begin{align*} \gz A \overline{\otimes} \gz B = A_{ik} B_{jl} \gz e_i \otimes \gz e_j \otimes \gz e_k \otimes \gz e_l && \text{and} && \gz A \underline{\otimes} \gz B = A_{il} B_{jk} \gz e_i \otimes \gz e_j \otimes \gz e_k \otimes \gz e_l \, . \end{align*} The second-order identity tensor is defined by \begin{align*} \gz I = \delta_{ij} \gz e_i \otimes \gz e_j \, . \end{align*} The action of a second-order tensor $\gz A$ on a vector $\gz b$ is the vector $\gz c$ defined by \begin{align*} \gz c = \gz A \cdot \gz b = A_{im} b_{m} \gz e_i \, . \end{align*} The single contraction of two second-order tensors, $\gz A$ and $\gz B$, is the second-order tensor $\gz C$ defined by \begin{align*} \gz C = \gz A \cdot \gz B = A_{im} B_{mj} \gz e_i \otimes \gz e_j \, . \end{align*} Micromorphic variables are distinguished from macroscopic quantities by an overline. Variables associated with the electrical problem are distinguished using blackboard bold. Further notation is introduced when required. \section{Theoretical background}\label{sec_theory} The kinematics of the coupled problem of flexoelectricity are presented in \sect{sect_kinematics}. The Dirichlet principle is then employed to derive the governing equations and boundary conditions. Concrete forms for the constitutive relations are then provided. \subsection{Kinematics} \label{sect_kinematics} The kinematic description of motion at the macroscopic scale is presented in \sect{sect_kinematics_macro}. This is followed by the description of the micromorphic problem at the microscopic scale. The electric problem is then given. For more details on the formulation of coupled nonlinear electro-elasticity, see \citep{Dorfmann2005, Dorfmann2014, Steinmann2011} and for nonlinear micromorphic elasticity see \citep{Hirschberger2007}, and the references therein. 
\subsubsection{The macroscopic problem}\label{sect_kinematics_macro} Consider a continuum body $\mcl B$ composed of matter as shown in \fig{fig_kinematics}. The motion of $\mcl B$ from its reference configuration $\mcl B_0$ to its current configuration $\mcl B_t$ is defined via the map $\gz x = \gz \varphi (\gz X,t)$, where $\gz x \in \mcl{B}_t$ and $\gz X \in \mcl{B}_0$ are physical points in the current and reference configurations, respectively. The boundary of the reference configuration is denoted by $\Gamma_0$, with outward unit normal $\gz{N}$. \begin{figure}[htb!] \centering \includegraphics[width=\textwidth]{images/kinematics.pdf} \caption{The reference and current configurations of the continuum body $\mcl B$ and the associated macroscopic and microscopic (micromorphic) motions and deformation gradients.} \label{fig_kinematics} \end{figure} The invertible linear tangent map $\gz F$ (i.e.\ the deformation gradient) maps a line element $\,\mbox{d} \gz X$ in the reference configuration to a line element $\,\mbox{d} \gz x$ in the current configuration and is defined by the derivative of the motion with respect to the material placement; that is, \begin{align*} \gz F := {\rm Grad} \gz \varphi = \dfrac{\partial \varphi_i}{\partial X_J} \gz{e}_i \otimes \gz{E}_J && \text{and} && \,\mbox{d} \gz x = \gz F \cdot \,\mbox{d} \gz X \, . \end{align*} The determinant of $\gz{F}$ is defined by $J := \det \gz{F} > 0$ and its inverse by $j := 1 / J$. The symmetric right and left Cauchy--Green tensors, $\gz{C}$ and $\gz{b}$, are respectively defined by \begin{align*} \gz{C} := \gz{F}{}^{\mathsf{T}} \cdot \gz{F} && \text{and} && \gz{b} := \gz{F} \cdot \gz{F} {}^{\mathsf{T}} \, . \end{align*} It proves convenient to define the inverse of the deformation gradient by $\gz{f} := \gz{F}^{-1}$. 
The Piola strain $\gz{B}$ and the Finger strain $\gz{c}$ are respectively defined by \begin{align*} \gz{B} := \gz{f} \cdot \gz{f}{}^{\mathsf{T}} = \gz{C}^{-1} && \text{and} && \gz{c} := \gz{f} {}^{\mathsf{T}} \cdot \gz{f} = \gz{b}^{-1} \, . \end{align*} Furthermore, the Green--Lagrange and Euler--Almansi strain tensors are respectively defined by \begin{align*} \gz{E} := \dfrac{1}{2}\left[ \gz{C} - \gz{I} \right] && \text{and} && \gz{e} := \dfrac{1}{2}\left[ \gz{i} - \gz{c} \right] \intertext{where} \gz{E} = \gz{F}{}^{\mathsf{T}} \cdot \gz{e} \cdot \gz{F} =: \gz{\varphi}^{-1}_\star (\gz{e}) && \text{and} && \gz{e} = \gz{f}{}^{\mathsf{T}} \cdot \gz{E} \cdot \gz{f} =: \gz{\varphi}_\star (\gz{E}) \, . \end{align*} The second-order identity tensors in the referential and current configurations are denoted by $\gz I$ and $\gz i$, respectively. Note, $\gz f \cdot \gz F = \gz I$ and $\gz F \cdot \gz f = \gz i$. The push-forward and pull-back operations on second-order tensors are denoted by $\gz{\varphi}_\star$ and $\gz{\varphi}^{-1}_\star$, respectively. That is, \begin{align*} \gz{\varphi}_\star (\bullet)^\flat = \gz{f}{}^{\mathsf{T}} \cdot (\bullet)^\flat \cdot \gz{f} && \text{and} && \gz{\varphi}^{-1}_\star (\bullet)^\flat = \gz{F}{}^{\mathsf{T}} \cdot (\bullet)^\flat \cdot \gz{F} \, , \\ \gz{\varphi}_\star (\bullet)^\natural = \gz{F} \cdot (\bullet)^\natural \cdot \gz{F}{}^{\mathsf{T}} && \text{and} && \gz{\varphi}^{-1}_\star (\bullet)^\natural = \gz{f} \cdot (\bullet)^\natural \cdot \gz{f}{}^{\mathsf{T}} \, , \end{align*} where $(\bullet)^\flat$ and $(\bullet)^\natural$ denote covariant and contravariant second-order tensors, respectively. 
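The strain measures and the covariant push-forward and pull-back relations above hold for any invertible deformation gradient and can be verified numerically; an illustrative NumPy sketch (the particular $\gz F$ is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
# a deformation gradient close to the identity, so that det F > 0 (illustrative)
F = np.eye(3) + 0.1 * rng.random((3, 3))
J = np.linalg.det(F)
assert J > 0
f = np.linalg.inv(F)                        # f := F^{-1}
I = np.eye(3)

C, b = F.T @ F, F @ F.T                     # right / left Cauchy-Green tensors
B, c = f @ f.T, f.T @ f                     # Piola / Finger strains
assert np.allclose(B, np.linalg.inv(C)) and np.allclose(c, np.linalg.inv(b))

E = 0.5 * (C - I)                           # Green-Lagrange strain
e = 0.5 * (I - c)                           # Euler-Almansi strain (i = I in Cartesian frames)
# covariant pull-back / push-forward: E = F^T . e . F  and  e = f^T . E . f
assert np.allclose(E, F.T @ e @ F)
assert np.allclose(e, f.T @ E @ f)

# i = phi_*(C), c = phi_*(I) and I = phi^{-1}_*(c)
assert np.allclose(f.T @ C @ f, I)
assert np.allclose(f.T @ I @ f, c)
assert np.allclose(F.T @ c @ F, I)
```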
For completeness, the push-forward and pull-back operations on vectors are given by \begin{align*} \gz{\varphi}_\star (\bullet)^\flat = \gz{f}{}^{\mathsf{T}} \cdot (\bullet)^\flat && \text{and} && \gz{\varphi}^{-1}_\star (\bullet)^\flat = \gz{F}{}^{\mathsf{T}} \cdot (\bullet)^\flat \, , \\ \gz{\varphi}_\star (\bullet)^\natural = \gz{F} \cdot (\bullet)^\natural && \text{and} && \gz{\varphi}^{-1}_\star (\bullet)^\natural = \gz{f} \cdot (\bullet)^\natural \, . \end{align*} Hence, for the covariant (kinematic) measures adopted here, \begin{align*} \gz{e} = \gz{\varphi}_\star (\gz{E}) && \text{and} && \gz{E} = \gz{\varphi}^{-1}_\star (\gz{e}) \, , \\ \gz{i} = \gz{\varphi}_\star (\gz{C}) && \text{and} && \gz{C} = \gz{\varphi}^{-1}_\star (\gz{i}) \, , \\ \gz{c} = \gz{\varphi}_\star (\gz{I}) && \text{and} && \gz{I} = \gz{\varphi}^{-1}_\star (\gz{c}) \, . \end{align*} \subsubsection{The micromorphic problem} The body $\mcl B$ is modelled as a micromorphic continuum to account for size-dependent effects. As such, each material point $\mcl P \in \mcl B$ has additional micromorphic degrees of freedom associated with the continuum at the microscale that undergoes an affine deformation. The micro-deformation $\overline{\gz F}(\gz X,t)$ represents an affine map of material points from their reference position $\overline{\gz X}$ to a current position $\overline{\gz x}$ within the microscale continuum; that is \begin{align*} \overline{\gz x} = \overline{\gz F} \cdot \overline{\gz X} \, . \end{align*} The micro-deformation $\overline{\gz F}$ is kinematically independent of the macroscopic continuum and represents an additional state variable. 
The gradient of the micro-deformation with respect to the macroscale material placement is a (mixed-variant) third-order tensor defined by \begin{align*} \overline {\gz G} (\gz X) := {\rm Grad} \overline{\gz F} (\gz X) = \dfrac{\partial \overline{F}_{iJ}}{\partial X_K} \gz{e}_i \otimes \gz{E}_J \otimes \gz{E}_K = \overline{G}_{iJK} \gz{e}_i \otimes \gz{E}_J \otimes \gz{E}_K \, . \end{align*} \subsubsection{The electric problem} The scalar electric potential is denoted by $\varphi$. The spatial and referential electric fields, denoted by $\mathbbm{e}$ and $\mathbb{E}$ respectively, are thus given by \begin{align*} \mathbbm{e} = - {\rm grad} \varphi && \text{and} && \mathbb{E} = - {\rm Grad} \varphi \, , \intertext{and are related to one another as follows} \mathbbm{e} = \gz{f}{}^{\mathsf{T}} \cdot \mathbb{E} = \gz{\varphi}_\star (\mathbb{E}) && \text{and} && \mathbb{E} = \gz{F}{}^{\mathsf{T}} \cdot \mathbbm{e} = \gz{\varphi}^{-1}_\star (\mathbbm{e}) \, . \end{align*} The gradient operator with respect to the current configuration is defined by ${\rm grad}(\bullet) := {\rm Grad}(\bullet) \cdot \gz{f}$. \subsection{Dirichlet principle (stationary energy principle)} \label{sec_dirichlet_principle} The Dirichlet principle is employed to determine the kinetic quantities conjugate to the kinematic measures derived in \sect{sect_kinematics}. The principle also provides the structure for the governing equations and the boundary conditions. For the conservative system considered here, the total potential energy $E$ is given by \begin{align} E = \int_{\mcl B_0} U_0(\gz \varphi, \gz F, \overline{\gz F}, \overline {\gz G}, \varphi, \mathbb{E}; \gz X) \, \,\mbox{d} V + \int_{\Gamma_0} u_0(\gz \varphi, \overline{\gz F}, \varphi; \gz X) \, \,\mbox{d} A \,, \label{eq_total_potential_energy_1} \end{align} where $U_0$ and $u_0$ are the potential energy density functions per unit reference volume and area, respectively. 
The potential energy density $U_0$ is additively decomposed as follows \begin{align*} U_0 (\gz \varphi, \gz F, \overline{\gz F}, \overline {\gz G}, \varphi, \mathbb{E}) = W_0(\gz F, \overline{\gz F}, \overline {\gz G}, \mathbb{E}) + V_0(\gz \varphi, \varphi) \, , \end{align*} where $W_0$ is the internal contribution and $V_0$ is the external contribution. Note, we have assumed that there are no external contributions associated with the micromorphic problem. Under the isothermal conditions assumed here, the internal contribution $W_0$ is further decomposed as \begin{align} W_0(\gz F, \overline{\gz F}, \overline {\gz G}, \mathbb{E}) = \underbrace{ \psi_0^{\text{elast}}(\gz F, \overline{\gz F}, \overline {\gz G}, \mathbb{E}) + \psi_0^{\text{flexo}}(\{ \gz F, \overline{\gz F} \} , \overline {\gz G}, \mathbb{E}) }_{\psi_0(\gz F, \overline{\gz F} , \overline {\gz G}, \mathbb{E})} + E_0(\gz F, \mathbb{E}) \, , \label{eq_W0} \end{align} where $\psi_0^{\text{elast}}$ is the electric free enthalpy density, $\psi_0^{\text{flexo}}$ is the internal energy associated with the flexoelectric effect, and $E_0$ is the electric energy density. We note that the proposed additive decomposition is an assumption motivated by classical approaches in electroelasticity \citep[see e.g.][and references therein]{Vu2007}; other choices are possible. \begin{rmk} \label{rmk_flex_energy_param} Note, the internal energy associated with the flexoelectric effect describes the coupling between the gradient of the micro-deformation $\overline {\gz G}$ and the spatial electric field $\mathbbm{e} = \gz{f}{}^{\mathsf{T}} \cdot \mathbb{E}$. Hence the dependence of the energy $\psi_0^{\text{flexo}}$ on both $\gz F$ and $\mathbb{E}$. An alternative push-forward of $\mathbb{E}$ via $\overline{\gz{f}}:= \overline{\gz F}^{-1}$ is discussed further in \sect{sec_const_relations}. 
The possible functional dependence of the internal energy associated with the flexoelectric effect on either $\gz F$ or $\overline{\gz F}$, or both, is denoted by curly braces in the parametrisation. \qed \end{rmk} In summary, the total potential energy \eqref{eq_total_potential_energy_1} can be expressed as \begin{equation} \begin{split} E &= \int_{\mcl B_0} W_0(\gz F, \overline{\gz F}, \overline {\gz G}, \mathbb{E}; \gz X) \, \,\mbox{d} V + \int_{\mcl B_0} V_0(\gz \varphi, \varphi; \gz X) \, \,\mbox{d} V + \int_{\Gamma_0} u_0(\gz \varphi, \overline{\gz F}, \varphi; \gz X) \, \,\mbox{d} A \, . \end{split} \label{eq_total_potential_energy_2} \end{equation} At equilibrium, the total potential energy of the system must be stationary with respect to arbitrary variations of the primary fields; that is \begin{align*} \delta E(\gz \varphi, \gz F, \overline{\gz F}, \overline {\gz G}, \varphi, \mathbb{E}) = 0 \, . \end{align*} Hence \begin{equation} \begin{split} 0 &= \int_{\mcl B_0} \left[ \gz P^{\text{tot}} : \delta \gz F +\overline {\gz P} : \delta \overline{\gz F} +\overline {\gz Q} \;\cdot\!\!: \delta \overline {\gz G} -\mathbb{D} \cdot \delta \mathbb{E} -\gz b_0 \cdot \delta \gz \varphi +\rho_0^f \delta \varphi \right] \, \,\mbox{d} V \\ &\quad + \int_{\Gamma_0} \left[ -\gz t_0 \cdot \delta \gz \varphi -\overline{\gz t}_0 : \delta \overline{\gz F} +\widehat{\rho}_0^f \delta \varphi \right] \, \,\mbox{d} A \qquad \qquad \forall \; \delta \gz \varphi,\, \delta \overline{\gz F},\, \delta \varphi \, , \end{split} \label{eq_stationary_2} \end{equation} where the energetically-conjugate kinetic measures are defined in Table \ref{table_kinetics}. \begin{table}[htb!] 
\centering \begin{tabular}{ l l l l} \toprule \textbf{Measure} & \textbf{Domain} & \textbf{Label} & \textbf{Order} \\ \hline $\gz P^{\text{tot}} := \,\mbox{D}_{\gz F} U_0$ & $\mcl B_0$ & macroscopic Piola stress & 2 \\ $\overline {\gz P} := \,\mbox{D}_{\overline{\gz F}} U_0$ & $\mcl B_0$ & micromorphic Piola stress & 2 \\ $\overline {\gz Q} := \,\mbox{D}_{\overline {\gz G}} U_0$ & $\mcl B_0$ & micromorphic double stress & 3 \\ $\mathbb{D} := - \,\mbox{D}_{\mathbb{E}} U_0$ & $\mcl B_0$ & dielectric displacement & 1 \\ $\gz b_0 := - \,\mbox{D}_{\gz \varphi} U_0$ & $\mcl B_0$ & body force & 1 \\ $\rho_0^f := \,\mbox{D}_{\varphi}U_0$ & $\mcl B_0$ & density of free charge per unit volume & 0 \\ $\gz t_0 := -\,\mbox{D}_{\gz \varphi} u_0$ & $\Gamma_0$ & macroscopic Piola traction & 1 \\ $\overline{\gz t}_0 := -\,\mbox{D}_{\overline{\gz F}} u_0$ & $\Gamma_0$ & micromorphic Piola traction & 2 \\ $\widehat{\rho}_0^f := \,\mbox{D}_{\varphi} u_0$ & $\Gamma_0$ & density of free charge per unit area & 0 \\ \bottomrule \end{tabular} \caption{ Summary and definition of the kinetic measures introduced in \eqn{eq_stationary_2}. Order refers to the order of the tensorial quantity. } \label{table_kinetics} \end{table} It is convenient to additively decompose the macroscopic Piola stress $\gz P^{\text{tot}}$ and the dielectric displacement $\mathbb{D}$ as follows: \begin{align*} \gz P^{\text{tot}} =: \underbrace{\left[ \gz P + \gz P^{\text{pol}} \right]}_{\,\mbox{D}_{\gz F} \psi_0} + \underbrace{\left[ \gz P^{\text{max}} \right]}_{\,\mbox{D}_{\gz F} E_0} && \text{and} && \mathbb{D} =: \underbrace{\left[\mathbb{P} \right]}_{-\,\mbox{D}_{\mathbb{E}} \psi_0} + \underbrace{\left[ \mathbb{D}^\epsilon \right]}_{-\,\mbox{D}_{\mathbb{E}} E_0} \, , \end{align*} where $\psi_0 = \psi_0^{\text{elast}} + \psi_0^{\text{flexo}}$ was defined in \eqn{eq_W0}. Here $\gz P$ is the ordinary Piola stress, $\gz P^{\text{pol}}$ is the polarization stress, and $\gz P^{\text{max}}$ is the Maxwell stress. 
$\mathbb{D}$ is the referential dielectric displacement, $\mathbb{P}$ is the referential polarization, and $\mathbb{D}^\epsilon$ is the contribution to the dielectric displacement arising from the electric energy density $E_0$. The elastic contribution to the energy density associated with matter $\psi^\text{elast}_0$ (see \eqn{eq_W0}) contains contributions from the macroscopic problem, the micromorphic problem and an additional scale-bridging contribution \citep[see][for extensive details]{Hirschberger2008}; that is \begin{align} \psi_0^\text{elast}(\gz F, \overline{\gz F}, \overline {\gz G}, \mathbb{E}) =: \psi_0^\text{mac} + \psi_0^\text{mic} + \psi_0^\text{scale} \, . \label{eq_elastic_energy} \end{align} It is convenient to define the spatial polarization $\mathbbm{p} := - \,\mbox{D}_{\mathbbm{e}} \psi_t$, where $\psi_t:= j \psi_0$ is the free enthalpy per unit volume of the current configuration. The spatial polarization is the Piola transformation of the material polarization $\mathbb{P} = -\,\mbox{D}_{\mathbb{E}} \psi_0$; that is \begin{align*} \mathbbm{p} = j \gz{\varphi}_\star (\mathbb{P}) && \text{and} && \mathbb{P} = J \gz{f} \cdot \mathbbm{p} = \mathbbm{p} \cdot \text{cof}{\gz{F}} = J \gz{\varphi}^{-1}_\star (\mathbbm{p}) \, , \end{align*} where the cofactor of an invertible second-order tensor $(\bullet)$ is defined by $\text{cof} (\bullet) := [\det (\bullet)] (\bullet){}^{-\mathsf{T}}$. \begin{rmk} The electric energy density $E_0$ is parametrised here in terms of the electric field $\mathbb{E}$ and the deformation gradient $\gz F$. This is a common choice \citep[see][for further details]{Dorfmann2014}. In their work on flexoelectricity, \citet{Yvonnet2017} use a mixed-type formulation where the electric energy density $E_0$ is parametrised by the spatial polarization $\mathbbm{p}$. \qed \end{rmk} \subsection{Governing equations and boundary conditions} The system of equations and boundary conditions governing the coupled problem of flexoelectricity and micromorphic elasticity are now derived. 
The system of coupled governing equations (the Euler equations) is obtained by applying the divergence theorem to \eqn{eq_stationary_2} and invoking the arbitrariness and independence of the variations $\delta \gz \varphi,\, \delta \overline{\gz F}$ and $\delta \varphi$, to obtain \begin{gather*} \left. \begin{split} \mbox{Div\,} \left[ \gz P + \gz P^{\text{pol}} + \gz P^{\text{max}} \right] + \gz b_0 = \gz 0 \\ \mbox{Div\,} \overline {\gz Q} - \overline {\gz P} = \gz 0 \\ \mbox{Div\,} \mathbb{D} = \rho_0^f \end{split} \qquad \right\} \qquad \text{in } \mcl B_0 \, . \end{gather*} Dirichlet conditions on the displacement $\gz \varphi$, the micro-deformation $\overline{\gz F}$, and the electric potential $\varphi$ are prescribed on the parts of the boundary $\Gamma^{\gz \varphi}_0 \subseteq \Gamma_0$, $\Gamma^{\overline{\gz F}}_0 \subseteq \Gamma_0$, and $\Gamma^{\varphi}_0 \subseteq \Gamma_0$, respectively. That is \begin{align*} \gz \varphi = \gz \varphi^\text{pre}_{\Gamma} \text{ on } \Gamma^{\gz \varphi}_0 \, , && \overline{\gz F} = \overline{\gz F}^\text{pre}_{\Gamma} \text{ on } \Gamma^{\overline{\gz F}}_0 \, , && \varphi = \varphi^\text{pre}_{\Gamma} \text{ on } \Gamma^{\varphi}_0 \, . \end{align*} The superscript $(\bullet)^\text{pre}$ denotes a prescribed function. \begin{rmk} The physical meaning of a Dirichlet boundary condition on the micro-deformation $\overline{\gz F}$ is not clear. We retain this possibility for the sake of completeness. 
\qed \end{rmk} The various Neumann conditions on the respective subsets of $\Gamma_0$ follow as \begin{align} \left[ \gz P^{\text{max}} + \gz P + \gz P^{\text{pol}} \right] \cdot \gz N &= \gz t_0^\text{pre} && \text{on } \Gamma^{\gz P}_0 \text{ where } \Gamma^{\gz \varphi}_0 \cup \Gamma^{\gz P}_0 = \Gamma_0 \text{ and } \Gamma^{\gz \varphi}_0 \cap \Gamma^{\gz P}_0 = \emptyset \, , \label{neumann_motion} \\ \overline {\gz Q} \cdot \gz N &= \overline{\gz t}_0^\text{pre} && \text{on } \Gamma^{\overline {\gz Q}}_0 \text{ where } \Gamma^{\overline{\gz F}}_0 \cup \Gamma^{\overline {\gz Q}}_0 = \Gamma_0 \text{ and } \Gamma^{\overline{\gz F}}_0 \cap \Gamma^{\overline {\gz Q}}_0 = \emptyset\, , \label{neumann_micro} \\ -\left[ \mathbb{D}^\epsilon + \mathbb{P}\right] \cdot \gz N &= \widehat{\rho}_0^f{}^\text{pre} && \text{on } \Gamma^{\mathbb{D}}_0 \text{ where } \Gamma^{\varphi}_0 \cup \Gamma^{\mathbb{D}}_0 = \Gamma_0 \text{ and } \Gamma^{\varphi}_0 \cap \Gamma^{\mathbb{D}}_0 = \emptyset\, . \label{neumann_potential} \end{align} \subsection{Constitutive relations} \label{sec_const_relations} Concrete examples for the various terms that comprise the total potential energy $E$ in \eqn{eq_total_potential_energy_2} are now given. The resulting expressions for the kinetic measures defined in Table \ref{table_kinetics} are given in \ref{appendix_linearisation}. 
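As an aside, the cofactor identity and the Piola transformation used in \sect{sec_dirichlet_principle} for the polarization can be verified numerically before specialising the energies; an illustrative NumPy sketch (the particular deformation gradient and vector are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
F = np.eye(3) + 0.1 * rng.random((3, 3))    # illustrative deformation gradient
J = np.linalg.det(F)
f = np.linalg.inv(F)

cofF = J * f.T                              # cof F := det(F) F^{-T}
# the cofactor satisfies cof F . F^T = J I
assert np.allclose(cofF @ F.T, J * np.eye(3))

p = rng.random(3)                           # a spatial (contravariant) vector field
# material counterpart via the Piola transformation: P = J f . p = p . cof F
P = J * f @ p
assert np.allclose(P, p @ cofF)
# round trip: p = (1/J) F . P, i.e. p = j phi_*(P)
assert np.allclose(p, (1.0 / J) * F @ P)
```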
\subsubsection{Elastic energy density} Following \citep{Hirschberger2007, Pelteret2016}, the elastic energy density associated with matter $\psi^\text{elast}_0 = \psi_0^\text{mac} + \psi_0^\text{mic} + \psi_0^\text{scale} $ (see \eqn{eq_elastic_energy}) is assumed to be of the form \begin{align} \psi_0^\text{mac}(\gz F, \mathbb{E}) &\equiv \frac{1}{2} \lambda \ln^2 J + \frac{1}{2} \mu \left[ \gz F : \gz F - n^\text{dim} - 2 \ln J \right] + \epsilon_0 \left[ \alpha \gz I + \beta \gz C + \gamma \gz B \right] : \mathbb{E} \otimes \mathbb{E} \, , \label{psi_mac} \\ \psi_0^\text{mic}(\overline {\gz G}) &\equiv \frac{1}{2} \mu \ell^2 \overline {\gz G} \;\cdot\!\!: \overline {\gz G} \, , \label{psi_mic} \\ \psi_0^\text{scale}(\gz \varphi, \overline{\gz F}) &\equiv \frac{1}{2} p \left[ \overline{\gz F} - \gz F \right] : \left[ \overline{\gz F} - \gz F \right] \, . \label{psi_scale} \end{align} Here $\lambda$ and $\mu$ are the Lam\'e constants, $\ell \geq 0$ is the length-scale parameter and $p \geq 0$ is a penalty-like parameter that couples the macro- and micro-deformation gradients. The free space electric permittivity constant is $\epsilon_0 = $ \SI{8.854187817e-12}{\farad\per\metre}, and $\alpha$, $\beta$ and $\gamma$ are parameters. \eqn{psi_mac} is an additive decomposition of a compressible neo-Hookean energy and a prototypical coupled electro-elastic model \citep[see][for further details]{Mehnert2018, Pelteret2016}. Following \citet{Hirschberger2007}, the micromorphic and scale-bridging energies are chosen to be quadratic functions of the various strain measures. We note that this is an assumption and not a requirement. 
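The mechanical part of \eqn{psi_mac} is a compressible neo-Hookean energy whose reference state is both energy- and stress-free. This can be checked by finite differences (the implementation described here uses automatic differentiation); a NumPy sketch with arbitrary, non-physical parameter values, where the closed-form Piola stress in the final assertion is the standard neo-Hookean result:

```python
import numpy as np

lam, mu, ndim = 1.0, 0.5, 3     # illustrative (non-physical) Lame constants

def psi_mac_mech(F):
    """Mechanical part of psi_0^mac: a compressible neo-Hookean energy."""
    J = np.linalg.det(F)
    return 0.5 * lam * np.log(J) ** 2 \
        + 0.5 * mu * (np.tensordot(F, F, axes=2) - ndim - 2.0 * np.log(J))

def piola_fd(psi, F, h=1e-6):
    """Central finite-difference approximation of P = D_F psi."""
    P = np.zeros((3, 3))
    for i in range(3):
        for J in range(3):
            Fp, Fm = F.copy(), F.copy()
            Fp[i, J] += h
            Fm[i, J] -= h
            P[i, J] = (psi(Fp) - psi(Fm)) / (2.0 * h)
    return P

I = np.eye(3)
assert np.isclose(psi_mac_mech(I), 0.0)                         # energy-free reference state
assert np.allclose(piola_fd(psi_mac_mech, I), 0.0, atol=1e-8)   # stress-free reference state

# at an arbitrary F the standard closed-form Piola stress is recovered
F = I + 0.05 * np.arange(9).reshape(3, 3) / 10.0
fT = np.linalg.inv(F).T
P_exact = lam * np.log(np.linalg.det(F)) * fT + mu * (F - fT)
assert np.allclose(piola_fd(psi_mac_mech, F), P_exact, atol=1e-6)
```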
\begin{rmk} Alternative forms for the scale transition energy include \begin{align*} \psi_0^\text{scale}(\gz \varphi, \overline{\gz F}) &\equiv \frac{1}{2} p \left[ \gz f \cdot \overline{\gz F} - \gz{I} \right] : \left[ \gz f \cdot \overline{\gz F} - \gz{I} \right] \, , \\ \psi_0^\text{scale}(\gz \varphi, \overline{\gz F}) &\equiv \frac{1}{2} p \left[ \overline{\gz F}{}^{\mathsf{T}} \cdot \overline{\gz F} - \gz{C} \right] : \left[ \overline{\gz F}{}^{\mathsf{T}} \cdot \overline{\gz F} - \gz{C} \right] \, . \end{align*} An alternative description of the micromorphic energy density in terms of Eringen's Lagrangian micro-deformation gradient $\overline{\gz G}{}^\text{E} := \overline{\gz F} {}^{\mathsf{T}} \cdot \overline {\gz G}$ is \begin{align*} \psi_0^\text{mic}(\overline{\gz F},\overline {\gz G}) &\equiv \frac{1}{2} \mu \ell^2 \overline{\gz G}{}^\text{E} \;\cdot\!\!: \overline{\gz G}{}^\text{E} \, . \end{align*} \end{rmk} \subsubsection{Electric energy density} The electric energy density is given by \citep[see][]{Vu2006} \begin{align} E_0(\gz F, \mathbb{E}) = -\dfrac{1}{2} \epsilon_0 J \, \gz{B} : \mathbb{E} \otimes \mathbb{E} \, . \label{eq_hyperelastic_energy_electric} \end{align} \subsubsection{Flexoelectric energy density} As discussed in \sect{sec_dirichlet_principle}, the flexoelectric contribution couples the third-order, mixed-variant micro-gradient $\overline {\gz G}$ and the electric field. We propose here that the flexoelectric contribution takes the form \begin{align} \psi_0^\text{flexo}(\overline{\gz F}, \overline {\gz G}, \mathbb{E}) &= \upsilon \left[ \overline{\gz f} {}^{\mathsf{T}} \cdot \mathbb{E} \right] \cdot \overline {\gz G} : \gz{I} \, , \label{psi_flexo_A} \end{align} where $\upsilon := \epsilon_0 \ell \overline{\upsilon}$ is a positive parameter. The inclusion of the length scale $\ell$ increases the relative flexoelectric contribution for diminishing sample size (cf.\ \eqn{psi_mic}). 
The free space electric permittivity constant ensures that the contribution of the flexoelectric energy is of a similar order and structure to the electrical contributions in $\psi_0^\text{mac}$ and $E_0$, see \eqn{psi_mac} and \eqn{eq_hyperelastic_energy_electric}, respectively. The pull-back of $\mathbb{E}$ via the micro-deformation is proposed to preclude direct coupling of the macroscale Piola stress $\gz P^{\text{tot}} = \,\mbox{D}_{\gz F} U_0$ and the microscale problem other than through the scale-bridging energy in \eqn{psi_scale} (see Remark \ref{rmk_flex_energy_param}). \begin{rmk} An alternative form of \eqn{psi_flexo_A} would be \begin{align*} \psi_0^\text{flexo}(\gz F, \overline {\gz G}, \mathbb{E}) = \upsilon \, \mathbbm{e} \cdot \overline {\gz G} : \gz{I} = \upsilon \left[ \gz{\varphi}_\star (\mathbb{E}) \right] \cdot \overline {\gz G} : \gz{\varphi}^{-1}_\star (\gz{b}) = \upsilon \left[ \gz f{}^{\mathsf{T}} \cdot \mathbb{E} \right] \cdot \overline {\gz G} : \gz{I} \, . \end{align*} We note that $\overline {\gz G}$ is a mixed-variant tensor that is contracted in $\psi_0^\text{flexo}$ from the left by the covariant spatial electric field $\mathbbm{e} = \gz{\varphi}_\star (\mathbb{E})$ and from the right by the contravariant material identity tensor $\gz{I}$. In the same spirit, a further logical proposal for the flexoelectric energy would be \begin{align*} \psi_0^\text{flexo}(\gz F, \overline {\gz G}, \mathbb{E}) = \upsilon \, \mathbbm{e} \cdot \overline {\gz G} : \gz{B} = \upsilon \left[ \gz{\varphi}_\star (\mathbb{E}) \right] \cdot \overline {\gz G} : \gz{\varphi}^{-1}_\star (\gz{i}) \, . \end{align*} \qed \end{rmk} \begin{rmk} The model of flexoelectricity proposed by \citet{Yvonnet2017} requires a $C^1$-continuous finite element approximation. This is restrictive. The micromorphic approach proposed here requires standard $C^0$ continuity. 
To compare formulations, define the gradient of the deformation gradient by $\gz G = {\rm Grad} \gz F = {\rm Grad} [ {\rm Grad} \gz \varphi ]$. We note that as the penalty-like parameter $p \to \infty$ in \eqn{psi_scale}, $\overline {\gz G} \to \gz G$. \citeauthor{Yvonnet2017} define an internal energy to describe the gradient and the flexoelectric effect that takes the form \begin{align} \psi_0^\text{flexo}(\gz G, \mathbbm{p}) = \frac{1}{2}[\ell^\text{YL}]^2 \left[ \gz G : \gz I \right] \cdot \left[ \gz G : \gz I \right] + \upsilon^\text{YL} \, \mathbbm{p} \cdot \left[ \gz G : \gz I \right] \, , \label{psi_flex_JY} \end{align} where $\ell^\text{YL} \geq 0$ is a length scale and $\upsilon^\text{YL} \geq 0$ is a constitutive parameter. Note that the units of $\upsilon$ and $\upsilon^\text{YL}$ are clearly different. The first term in \eqn{psi_flex_JY} is similar to the micromorphic energy in \eqn{psi_mic} but with a different choice of inner product. The second term is similar but reflects the choice of \citeauthor{Yvonnet2017} to select the polarization $\mathbbm{p}$ as a primary field. \qed \end{rmk} The micromorphic model of flexoelectricity allows a range of different problems to be addressed by modifying the parameters in the constitutive relations, as depicted in \fig{fig_family_tree}. The schematic provides a convenient classification structure. FM-Elasticity denotes the problem of coupled flexoelectricity and micromorphic elasticity. As the penalty-like parameter $p \to \infty$ we recover coupled flexoelectricity and gradient elasticity, denoted FG-Elasticity. By setting the flexoelectric parameter $\overline{\upsilon} \to 0$ we obtain the problem of coupled electro-micromorphic elasticity, denoted EM-Elasticity. Note that by this definition, flexoelectric effects are absent in EM-Elasticity. The problem of coupled gradient electro-elasticity is obtained from EM-Elasticity as $p \to \infty$. 
In addition, micromorphic elasticity (M-Elasticity) and electro-elasticity (E-Elasticity) are obtained from EM-Elasticity as $\epsilon_0 \to 0$, and as $\ell \to 0$ and $p \to 0$, respectively. In the same spirit, we recover gradient elasticity from M-Elasticity as $p \to \infty$. Finally, we obtain the standard problem of nonlinear elasticity from M-Elasticity as $\ell \to 0$ and $p \to 0$, and from E-Elasticity as $\epsilon_0 \to 0$. Henceforth, the choice $\ell \equiv 0$ implies $p \equiv 0$. This ensures that the macroscopic and micromorphic responses are uncoupled. \begin{figure}[htb!] \centering \includegraphics[width=0.85\textwidth]{images/family_tree.pdf} \caption{The relation between the various models of coupled electrical and mechanical elasticity within the micromorphic setting.} \label{fig_family_tree} \end{figure} \section{The Finite Element approximation} \label{sec_fe_approximation} The triangulation of the reference configuration $\mcl B_0$ into non-overlapping elements is denoted by $\mathcal{T}^h_{\mcl B}$. The primary fields (the macroscopic motion $\gz \varphi$, the micro-deformation $\overline{\gz F}$, and the electric potential $\varphi$) are approximated using finite element spaces of continuous piecewise polynomials of fixed, but potentially different, degree. The macroscopic motion $\gz \varphi \in H^1(\mcl B_0)$, the micromorphic deformation $\overline{\gz F} \in H^1(\mcl B_0)$, and the scalar electric potential $\varphi \in H^1(\mcl B_0)$ are respectively given in a vector space spanned by the standard (i.e.\ $C^0$-continuous) vector-, tensor-, and scalar-valued finite element basis functions (polynomials with local support), respectively denoted by $\gz N^I_{\gz \varphi}$, $\gz N^I_{\overline{\gz F}}$ and $N^I_{\varphi}$. 
That is, the primary fields and their associated variations ($\delta \gz \varphi \in H^1_0(\mcl B_0)$, $\delta \overline{\gz F} \in H^1_0(\mcl B_0)$ and $\delta \varphi \in H^1_0(\mcl B_0)$) are approximated by \begin{align} \gz \varphi^h =: \sum_{I \in \mcl I_{\gz \varphi}} {\gz \upvarphi}_I \gz N^I_{\gz \varphi} (\gz X) && \text{and} && \delta \gz \varphi^h =: \sum_{I \in \mcl I_{\gz \varphi}} \delta {\gz \upvarphi}_I \gz N^I_{\gz \varphi} (\gz X) \label{eq_motion_h} \,, \\ \overline{\gz F}^h =: \sum_{I \in \mcl I_{\overline{\gz F}}} \overline{\mathbf{\mathsf{F}}}_I \gz N^I_{\overline{\gz F}} (\gz X) && \text{and} && \delta \overline{\gz F}^h =: \sum_{I \in \mcl I_{\overline{\gz F}}} \delta \overline{\mathbf{\mathsf{F}}}_I \gz N^I_{\overline{\gz F}} (\gz X) \label{eq_mF_h} \,, \\ \varphi^h =: \sum_{I \in \mcl I_{\varphi}} {\upvarphi}_I N^I_{\varphi} (\gz X) && \text{and} && \delta \varphi^h =: \sum_{I \in \mcl I_{\varphi}} \delta {\upvarphi}_I N^I_{\varphi} (\gz X) \label{eq_epot_h} \, , \end{align} where superscript $h$ indicates that the representation is related to the finite element mesh with size function $h(\gz X)$. Upright Greek letters are used to denote a global vector containing the degrees of freedom associated with one of the three primary fields. The sets $\mcl I_{\gz \varphi}$, $\mcl I_{\overline{\gz F}}$ and $\mcl I_{\varphi}$ contain the degrees of freedom for the macroscopic, micromorphic and electric fields, respectively. 
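The nodal interpolation structure of \eqref{eq_motion_h}--\eqref{eq_epot_h} can be illustrated in one dimension with piecewise-linear ($C^0$) hat functions; a minimal NumPy sketch (the mesh and the interpolated field are illustrative):

```python
import numpy as np

# nodes of a uniform 1D mesh on [0, 1] and nodal degrees of freedom
X_nodes = np.linspace(0.0, 1.0, 5)
dofs = X_nodes ** 2            # nodal values of an illustrative field phi(X) = X^2

def hat(I, X):
    """Piecewise-linear C^0 basis function N^I with local support."""
    h = X_nodes[1] - X_nodes[0]
    return np.clip(1.0 - np.abs(X - X_nodes[I]) / h, 0.0, None)

def interpolate(X):
    # phi^h(X) = sum_I phi_I N^I(X)
    return sum(dofs[I] * hat(I, X) for I in range(len(X_nodes)))

# the interpolant reproduces nodal values exactly (Kronecker-delta property)
assert np.allclose([interpolate(X) for X in X_nodes], dofs)
# and linear fields are reproduced exactly: sum_I X_I N^I(X) = X
assert np.isclose(sum(X_nodes[I] * hat(I, 0.37) for I in range(5)), 0.37)
```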
The discrete representation of the gradients and variations of the primary fields follow directly as \begin{align} \gz F^h =: \sum_{I \in \mcl I_{\gz \varphi}} {\gz \upvarphi}_I {\rm Grad} \gz N^I_{\gz \varphi} (\gz X) && \text{and} && \delta \gz F^h =: \sum_{I \in \mcl I_{\gz \varphi}} \delta {\gz \upvarphi}_I {\rm Grad} \gz N^I_{\gz \varphi} (\gz X) \,, \\ \overline {\gz G}^h =: \sum_{I \in \mcl I_{\overline{\gz F}}} \overline{\mathbf{\mathsf{F}}}_I {\rm Grad} \gz N^I_{\overline{\gz F}} (\gz X) && \text{and} && \delta \overline {\gz G}^h =: \sum_{I \in \mcl I_{\overline{\gz F}}} \delta \overline{\mathbf{\mathsf{F}}}_I {\rm Grad} \gz N^I_{\overline{\gz F}} (\gz X) \,, \\ \mathbb{E}^h =: -\sum_{I \in \mcl I_{\varphi}} {\upvarphi}_I {\rm Grad} N^I_{\varphi} (\gz X) && \text{and} && \delta \mathbb{E}^h =: -\sum_{I \in \mcl I_{\varphi}} \delta {\upvarphi}_I {\rm Grad} N^I_{\varphi} (\gz X) \, . \label{eq_Efield_h} \end{align} Substituting the discrete representations \eqref{eq_motion_h}--\eqref{eq_Efield_h} into the stationary condition \eqref{eq_stationary_2}, yields the following three sets of coupled non-linear residual equations to be satisfied: \begin{align} \mathsf{R}^{I}_{\gz \varphi} &:= \int_{\mcl B_0} \left[ {\gz P^{\text{tot}}} : {\rm Grad} \gz N^I_{\gz \varphi} - \gz b_0 \cdot \gz N^I_{\gz \varphi} \right] \, \,\mbox{d} V - \int_{\Gamma_0} \gz t_0 \cdot \gz N^I_{\gz \varphi} \, \,\mbox{d} A \doteq 0 && \forall I \in \mcl I_{\gz \varphi} \label{R_motion} \\ \mathsf{R}^{I}_{\overline{\gz F}} &:= \int_{\mcl B_0} \left[ \overline {\gz P} : \gz N^I_{\overline{\gz F}} + \overline {\gz Q} \;\cdot\!\!:{\rm Grad} \gz N^I_{\overline{\gz F}} \right] \, \,\mbox{d} V - \int_{\Gamma^{\mcl B}_0} \overline{\gz t}_0 : \gz N^I_{\overline{\gz F}} \, \,\mbox{d} A \doteq 0 && \forall I \in \mcl I_{\overline{\gz F}} \label{R_mF} \\ \mathsf{R}^{I}_{\varphi} &:= \int_{\mcl B_0} \left[ \mathbb{D} \cdot {\rm Grad} N^I_{\varphi} + \rho_0^f N^I_{\varphi} \right] \, \,\mbox{d} V + 
\int_{\Gamma_0} \widehat{\rho}_0^f N^I_{\varphi} \, \,\mbox{d} A \doteq 0 && \forall I \in \mcl I_{\varphi} \, . \label{R_potential} \end{align} The three global residual vectors, obtained by assembling the individual contributions from the residual expressions associated with the respective degrees of freedom \eqref{R_motion}--\eqref{R_potential}, are denoted by \begin{align*} \begin{bmatrix} \gz{\mathsf{R}}_{\gz \varphi} & \gz{\mathsf{R}}_{\overline{\gz F}} & \gz{\mathsf{R}}_{\varphi} \end{bmatrix}{}^{\mathsf{T}} =: \gz{\mathsf{R}} \, , \intertext{and the global vectors of degrees of freedom by} \begin{bmatrix} \gz{\mathsf{d}}_{\gz \varphi} & \gz{\mathsf{d}}_{\overline{\gz F}} & \gz{\mathsf{d}}_{\varphi} \end{bmatrix}{}^{\mathsf{T}} =: \gz{\mathsf{d}} \, . \end{align*} Note that $\dim \gz{\mathsf{R}}_{\gz \varphi} = \dim \gz{\mathsf{d}}_{\gz \varphi} = \vert \mcl I_{\gz \varphi} \vert $, $\dim \gz{\mathsf{R}}_{\overline{\gz F}} = \dim \gz{\mathsf{d}}_{\overline{\gz F}} = \vert \mcl I_{\overline{\gz F}} \vert $, and $\dim \gz{\mathsf{R}}_{\varphi} = \dim \gz{\mathsf{d}}_{\varphi} = \vert \mcl I_{\varphi} \vert $. The coupled nonlinear residual equations are solved approximately using a Newton--Raphson strategy whereby within each iteration $(i)$ of the current load (time) step the linearised problem is given by \begin{gather*} \gz{\mathsf{R}}^{(i+1)} = \gz{\mathsf{R}}^{(i)} + \left[ \,\mbox{D}_{\gz{\mathsf{d}}} \gz{\mathsf{R}}^{(i)} \right] \Delta \gz{\mathsf{d}}^{(i)} \doteq \gz 0 \\ \implies \gz{\mathsf{K}}^{(i)} \Delta \gz{\mathsf{d}}^{(i)} = - \gz{\mathsf{R}}^{(i)} \, , \end{gather*} and $\Delta \gz{\mathsf{d}}^{(i)} := \gz{\mathsf{d}}^{(i+1)} - \gz{\mathsf{d}}^{(i)}$. 
When expressed in the form of a block system, the discrete problem at each iteration takes the form \begin{align} \begin{bmatrix} \gz{\mathsf{K}}_{\gz \varphi\motion} & \gz{\mathsf{K}}_{\gz \varphi\overline{\gz F}} & \gz{\mathsf{K}}_{\gz \varphi\varphi} \\[6pt] \gz{\mathsf{K}}_{\overline{\gz F}\gz \varphi} & \gz{\mathsf{K}}_{\overline{\gz F}\mF} & \gz{\mathsf{K}}_{\overline{\gz F}\varphi} \\[6pt] \gz{\mathsf{K}}_{\varphi\gz \varphi} & \gz{\mathsf{K}}_{\varphi\overline{\gz F}} & \gz{\mathsf{K}}_{\varphi\potential} \end{bmatrix}^{(i)} \begin{bmatrix} \Delta \gz{\mathsf{d}}_{\gz \varphi} \\[6pt] \Delta \gz{\mathsf{d}}_{\overline{\gz F}} \\[6pt] \Delta \gz{\mathsf{d}}_{\varphi} \end{bmatrix}^{(i)} = - \begin{bmatrix} \gz{\mathsf{R}}_{\gz \varphi} \\[6pt] \gz{\mathsf{R}}_{\overline{\gz F}} \\[6pt] \gz{\mathsf{R}}_{\varphi} \end{bmatrix}^{(i)} \, . \label{KD_R} \end{align} The load step is deemed converged when the (normalised) magnitudes of the incremental changes $\Delta \gz{\mathsf{d}}_{\gz \varphi}$, $\Delta \gz{\mathsf{d}}_{\overline{\gz F}}$, and $\Delta \gz{\mathsf{d}}_{\varphi}$, together with the (normalised) magnitudes of the residual vectors $\gz{\mathsf{R}}_{\gz \varphi}$, $\gz{\mathsf{R}}_{\overline{\gz F}}$, and $\gz{\mathsf{R}}_{\varphi}$, are below a defined tolerance $\epsilon \ll 1$. The explicit reference to the current iteration counter is dropped henceforth. The matrix problem \eqref{KD_R} is solved monolithically.
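For readers prototyping the scheme, the monolithic Newton--Raphson loop described above can be sketched in a few lines. The two-field residual and tangent below are hypothetical stand-ins for the assembled quantities $\gz{\mathsf{R}}$ and $\gz{\mathsf{K}}$ of \eqref{KD_R}; the sketch only illustrates the monolithic solve and the combined increment/residual convergence check.

```python
import numpy as np

def newton_solve(residual, tangent, d0, tol=1e-10, max_iter=20):
    """Monolithic Newton-Raphson: solve R(d) = 0 via K dd = -R.

    The step is deemed converged when both the residual norm and the
    norm of the incremental update fall below the tolerance.
    """
    d = np.asarray(d0, dtype=float).copy()
    for _ in range(max_iter):
        R = residual(d)
        dd = np.linalg.solve(tangent(d), -R)  # K^(i) dd^(i) = -R^(i)
        d += dd
        if np.linalg.norm(R) < tol and np.linalg.norm(dd) < tol:
            break
    return d

# Hypothetical two-field "coupled" system with root (sqrt(2), sqrt(2)):
R = lambda d: np.array([d[0]**2 - 2.0, d[0] * d[1] - 2.0])
K = lambda d: np.array([[2.0 * d[0], 0.0], [d[1], d[0]]])  # Jacobian of R
d = newton_solve(R, K, [1.5, 1.5])
```

In the finite element setting the same loop appears per load step, with `residual` and `tangent` replaced by the assembly of \eqref{R_motion}--\eqref{R_potential} and the block tangent of \eqref{KD_R}.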
The various contributions to the tangent matrix $\gz{\mathsf{K}}$ associated with the degrees of freedom $I \in \{\mcl I_{\gz \varphi}, \mcl I_{\overline{\gz F}}, \mcl I_{\varphi} \}$ and $J \in \{\mcl I_{\gz \varphi}, \mcl I_{\overline{\gz F}}, \mcl I_{\varphi} \}$ are given by \begin{align*} {\left[ {\mathsf{K}_{\gz \varphi\motion}}\right]}_{IJ} &= \partial_{\gz \upvarphi^J} {\mathsf{R}}^{I}_{\gz \varphi} = \int_{\mcl B_0} \partial_{\gz \upvarphi^J} \left[ \gz P^{\text{tot}} : {\rm Grad} \gz N^I_{\gz \varphi} \right] \, \,\mbox{d} V = \int_{\mcl B_0} \left[ \,\mbox{D}_{\gz F}{\gz P^{\text{tot}}} : {\rm Grad} \gz N^J_{\gz \varphi} \right]: {\rm Grad} \gz N^I_{\gz \varphi} \, \,\mbox{d} V \\ {\left[ {\mathsf{K}_{\gz \varphi\overline{\gz F}}}\right]}_{IJ} &= \partial_{\overline{\mathbf{\mathsf{F}}}^J} {\mathsf{R}}^{I}_{\gz \varphi} = \int_{\mcl B_0} \partial_{\overline{\mathbf{\mathsf{F}}}^J} \left[ \gz P^{\text{tot}} : {\rm Grad} \gz N^I_{\gz \varphi} \right] \, \,\mbox{d} V = \int_{\mcl B_0} \left[ \,\mbox{D}_{\overline{\gz F}}{\gz P^{\text{tot}}} : \gz N^J_{\overline{\gz F}} \right]: {\rm Grad} \gz N^I_{\gz \varphi} \, \,\mbox{d} V \\ {\left[ {\mathsf{K}_{\gz \varphi\varphi}}\right]}_{IJ} &= \partial_{\upvarphi^J} {\mathsf{R}}^{I}_{\gz \varphi} = \int_{\mcl B_0} \partial_{\upvarphi^J} \left[ \gz P^{\text{tot}} : {\rm Grad} \gz N^I_{\gz \varphi} \right] \, \,\mbox{d} V = - \int_{\mcl B_0} \left[ \,\mbox{D}_{\mathbb{E}}{\gz P^{\text{tot}}} \cdot {\rm Grad} N^J_{\varphi} \right]: {\rm Grad} \gz N^I_{\gz \varphi} \, \,\mbox{d} V \\ \cline{1-2} {\left[ {\mathsf{K}_{\overline{\gz F}\gz \varphi}}\right]}_{IJ} &= \partial_{\gz \upvarphi^J} {\mathsf{R}}^{I}_{\overline{\mathbf{\mathsf{F}}}} = \int_{\mcl B_0} \partial_{\gz \upvarphi^J} \left[ \overline {\gz P} : \gz N^I_{\overline{\gz F}} + \overline {\gz Q} \;\cdot\!\!:{\rm Grad} \gz N^I_{\overline{\gz F}} \right] \, \,\mbox{d} V \\ &= \int_{\mcl B_0} \left[ \,\mbox{D}_{\gz F}{\overline {\gz P}} : {\rm Grad} \gz N^J_{\gz 
\varphi} \right]: \gz N^I_{\overline{\gz F}} \, \,\mbox{d} V + \int_{\mcl B_0} \left[ \,\mbox{D}_{\gz F}{\overline {\gz Q}} \;\cdot\!\!: {\rm Grad} \gz N^J_{\gz \varphi} \right] \;\cdot\!\!: {\rm Grad} \gz N^I_{\overline{\gz F}} \, \,\mbox{d} V \\ {\left[ {\mathsf{K}_{\overline{\gz F}\mF}}\right]}_{IJ} &= \partial_{\overline{\mathbf{\mathsf{F}}}^J} {\mathsf{R}}^{I}_{\overline{\mathbf{\mathsf{F}}}} = \int_{\mcl B_0} \partial_{\overline{\mathbf{\mathsf{F}}}^J} \left[ \overline {\gz P} : \gz N^I_{\overline{\gz F}} + \overline {\gz Q} \;\cdot\!\!:{\rm Grad} \gz N^I_{\overline{\gz F}} \right] \, \,\mbox{d} V \\ &= \int_{\mcl B_0} \left[ \,\mbox{D}_{\overline{\gz F}}{\overline {\gz P}} : \gz N^J_{\overline{\gz F}} \right]: \gz N^I_{\overline{\gz F}} \, \,\mbox{d} V + \int_{\mcl B_0} \left[ \,\mbox{D}_{\overline {\gz G}}{\overline {\gz Q}} \;\cdot\!\!: {\rm Grad} \gz N^J_{\overline{\gz F}} \right] \;\cdot\!\!: {\rm Grad} \gz N^I_{\overline{\gz F}} \, \,\mbox{d} V \\ {\left[ {\mathsf{K}_{\overline{\gz F}\varphi}}\right]}_{IJ} &= \partial_{\upvarphi^J} {\mathsf{R}}^{I}_{\overline{\mathbf{\mathsf{F}}}} = \int_{\mcl B_0} \partial_{\upvarphi^J} \left[ \overline {\gz P} : \gz N^I_{\overline{\gz F}} + \overline {\gz Q} \;\cdot\!\!:{\rm Grad} \gz N^I_{\overline{\gz F}} \right] \, \,\mbox{d} V \\ &= - \int_{\mcl B_0} \left[ \,\mbox{D}_{\mathbb{E}}{\overline {\gz P}} \cdot {\rm Grad} N^J_{\varphi} \right]: \gz N^I_{\overline{\gz F}} \, \,\mbox{d} V - \int_{\mcl B_0} \left[ \,\mbox{D}_{\overline {\gz G}}{\overline {\gz Q}} \cdot {\rm Grad} N^J_{\varphi} \right] \;\cdot\!\!: {\rm Grad} \gz N^I_{\overline{\gz F}} \, \,\mbox{d} V \\ \cline{1-2} {\left[ {\mathsf{K}_{\varphi\gz \varphi}}\right]}_{IJ} &= \partial_{\gz \upvarphi^J} {\mathsf{R}}^{I}_{\upvarphi} = \int_{\mcl B_0} \partial_{\gz \upvarphi^J} \left[ \mathbb{D} \cdot {\rm Grad} N^I_{\varphi} \right] \, \,\mbox{d} V = \int_{\mcl B_0} \left[ \,\mbox{D}_{\gz F}{\mathbb{D}} : {\rm Grad} \gz N^J_{\gz \varphi} \right] \cdot {\rm Grad} 
N^I_{\varphi} \, \,\mbox{d} V \\ {\left[ {\mathsf{K}_{\varphi\overline{\gz F}}}\right]}_{IJ} &= \partial_{\overline{\mathbf{\mathsf{F}}}^J} {\mathsf{R}}^{I}_{\upvarphi} = \int_{\mcl B_0} \partial_{\overline{\mathbf{\mathsf{F}}}^J} \left[ \mathbb{D} \cdot {\rm Grad} N^I_{\varphi} \right] \, \,\mbox{d} V = \int_{\mcl B_0} \left[ \,\mbox{D}_{\overline{\gz F}}{\mathbb{D}} : \gz N^J_{\overline{\gz F}} \right] \cdot {\rm Grad} N^I_{\varphi} \, \,\mbox{d} V \\ {\left[ {\mathsf{K}_{\varphi\potential}}\right]}_{IJ} &= \partial_{\upvarphi^J} {\mathsf{R}}^{I}_{\upvarphi} = \int_{\mcl B_0} \partial_{\upvarphi^J} \left[ \mathbb{D} \cdot {\rm Grad} N^I_{\varphi} \right] \, \,\mbox{d} V = - \int_{\mcl B_0} \left[ \,\mbox{D}_{\mathbb{E}}{\mathbb{D}} \cdot {\rm Grad} N^J_{\varphi} \right] \cdot {\rm Grad} N^I_{\varphi} \, \,\mbox{d} V \, . \end{align*} \section{Numerical examples} \label{sect_numerical_examples} The finite element problem detailed in the previous section is implemented within the open-source library deal.II \citep{dealII91,Bangerth2007} in conjunction with the linear algebra package Trilinos \citep{Trilinos2005}. The automatic differentiation package ADOL-C \citep{adolc2012} is used to evaluate the derivatives that appear in the expressions for the residual and tangent given in \sect{sec_fe_approximation}. Tri-quadratic piecewise polynomials are used to approximate the (vectorial) macroscopic motion map $\gz \varphi^h$ and the (tensorial) micromorphic deformation gradient $\overline {\gz F}^h$. Tri-linear approximations are used for the (scalar) electric potential $\varphi^h$. This is a non-standard choice for the problem of E-Elasticity where equal-order approximations are typically used for $\gz \varphi^h$ and $\varphi^h$ \citep{Pelteret2016}. We choose to break with convention to ensure that the flexoelectric contribution to the energy in \eqn{psi_flexo_A} contains electrical and micromorphic terms of equal polynomial order. 
The challenge of determining the optimal functional setting for the problem of FM-Elasticity is discussed further in \sect{sec_discuss_conclusion}. The default constitutive parameters used for the numerical examples, unless stated otherwise, are listed in Table~\ref{table_constitutive_parameters}. To improve the scaling of the linear system \eqref{KD_R}, we adopt units of \si{mm}, \si{N} and \si{\kilo \V}. The micro-deformation is initialised as $\overline{\gz F} \equiv \overline{\gz{I}}$, where $\overline{\gz{I}}$ is the second-order micromorphic identity tensor. Homogeneous Neumann boundary conditions for the micromorphic traction, defined in \eqn{neumann_micro}, are assumed for all example problems. Body forces and free charge are ignored. \begin{table}[htb!] \centering \begin{tabular}{ l l l } \toprule \textbf{Parameter} & \textbf{Symbol} & \textbf{Value} \\ \hline Poisson's ratio & $\nu$ & \num{0.273} \\ Shear modulus & $\mu$ & \num{0.05} \\ & $\alpha$ & \num{0.2} \\ & $\beta$ & \num{2} \\ & $\gamma$ & \num{-2} \\ Penalty-like parameter & $p$ & $\num{5000}\mu$ \\ \bottomrule \end{tabular} \caption{Default constitutive parameters. The electrical and mechanical parameters are taken from \citep{Vu2010, Vu2012, Pelteret2016}. } \label{table_constitutive_parameters} \end{table} We consider two three-dimensional example problems to elucidate the theory developed in the previous sections. The first is the problem of a strip with a hole, the second the bending of a cantilever beam. \subsection{Strip with hole: M-Elasticity}\label{sec_strip_hole_m_elasticity} The objective of this example is to demonstrate the key features of M-Elasticity. Specifically, we examine the role that the ratio of the length scale $\ell$ to a characteristic dimension of the macroscopic problem $L$ plays in the overall response of the structure.
Recall that in the proposed formulation for FM-Elasticity, the micromorphic model captures scale-dependent effects and allows the gradient of the deformation gradient $\gz{G} = {\rm Grad} \gz F$ to be approximated via its micromorphic counterpart $\overline {\gz G}$ within a conventional $C^0$-continuous finite element setting. Consider the problem of a three-dimensional strip with dimensions $L \times L/3 \times L/24$, where $L=120$, loaded in tension as depicted in \fig{fig_strip_with_hole_geom_bcs}(a). A two-dimensional version of the problem was proposed by \citet{Hirschberger2007} for the problem of M-Elasticity. An equal and opposite motion is prescribed on the upper and lower faces in five equal steps. The final length of the deformed specimen is $3L/2$. All other boundary conditions are homogeneous Neumann. The finite element mesh of the undeformed strip is shown in \fig{fig_strip_hole_m_elasticity}. The symmetry of the problem is exploited to reduce the computational cost by simulating only one quarter of the domain. Following \citep{Hirschberger2007}, the penalty-like parameter is set to $p \equiv 50 \mu$ for this example. \begin{figure}[htb!] \centering \includegraphics[width=0.65\textwidth]{images/strip_with_hole_geom_bcs.pdf} \caption{The problem of a strip with a hole. Geometry and boundary conditions for (a) M-Elasticity and (b) E- and FM-Elasticity.} \label{fig_strip_with_hole_geom_bcs} \end{figure} The final deformed shapes for five different choices of $\ell \in \{0;L/12;L/6;L/3;L \}$ are shown in \fig{fig_strip_hole_m_elasticity}. The choice $\ell \equiv 0$ corresponds to (nonlinear) Elasticity, see \fig{fig_family_tree}. The response away from the hole is similar for all choices of $\ell$ as the deformation is essentially homogeneous in this region. The micromorphic effect is significant in the vicinity of the hole where the deformation is inhomogeneous.
The horizontal and the vertical displacement of the points A and B, respectively (see \fig{fig_strip_with_hole_geom_bcs}(a)), are plotted against the prescribed displacement of the upper face $u_y^\text{pre}$ in \fig{fig_strip_hole_m_elasticity_plots}. Increasing the length scale, or equivalently reducing the specimen size, leads to a stiffer response. The increase of strength with decreasing specimen size is the key feature of M-Elasticity. This behaviour will be inherited by the problems of EM- and FM-Elasticity, as discussed next. \begin{figure}[htb!] \centering \includegraphics[width=\textwidth]{images/strip_hole_m_elasticity.pdf} \caption{The finite element mesh of the undeformed strip with a hole geometry, and the final deformed configuration for various length scales $\ell$. The distribution of the $yy$-component of the Cauchy stress $\gz{\sigma} := j \gz P \cdot \gz F {}^{\mathsf{T}}$ is shown. } \label{fig_strip_hole_m_elasticity} \end{figure} \begin{figure}[htb!] \centering \includegraphics[width=\textwidth]{images/strip_hole_m_elasticity_plots.pdf} \caption{The relationship between the applied displacement $u_y^\text{pre}$ and the (a) horizontal displacement of point A and (b) the vertical displacement of point B for the strip with a hole problem and the model of M-Elasticity.} \label{fig_strip_hole_m_elasticity_plots} \end{figure} \subsection{Strip with hole: E- and FM-Elasticity} The problem of a strip with a hole has been used to demonstrate key features of models of E-Elasticity at finite deformations \citep[see e.g.][]{Vu2007}. The objective here is to use this benchmark problem to illustrate differences between E- and FM-Elasticity. The problem of EM-Elasticity is not considered in this example problem for the sake of brevity. The boundary conditions are shown in \fig{fig_strip_with_hole_geom_bcs}(b). A potential difference of $\num{0.2} \equiv 2 \varphi^\text{pre}$ is applied between the upper and lower faces in five equal steps.
The finite element mesh of the undeformed configuration is shown in \fig{fig_strip_hole_e_fm_elasticity}. As in \sect{sec_strip_hole_m_elasticity}, the symmetry of the problem is exploited. The length scale is fixed as $\ell \equiv L/3$ and the penalty-like parameter is set to $p \equiv 5000 \mu$. The deformation induced by the applied potential difference is large, as shown in \fig{fig_strip_hole_e_fm_elasticity}. The results for E-Elasticity in \fig{fig_strip_hole_e_fm_elasticity}(a) match those presented by \citet{Vu2007}. The FM-Elasticity problem, shown in \fig{fig_strip_hole_e_fm_elasticity}(b), highlights important differences between the models. The mechanical deformation in the vicinity of the hole differs significantly. This is a consequence of the micromorphic response in FM-Elasticity that occurs due to the inhomogeneous deformation field in this region, as shown in \sect{sec_strip_hole_m_elasticity} for the problem of M-Elasticity. The differences between the theories are more clearly demonstrated in the plot of the horizontal and the vertical displacement of the points A and B, respectively (see \fig{fig_strip_with_hole_geom_bcs}(b)), against the time (load) step shown in \fig{fig_strip_hole_e_fm_elasticity_plots}. \begin{figure}[htb!] \centering \includegraphics[width=0.7\textwidth]{images/strip_hole_e_fm_elasticity.pdf} \caption{The finite element mesh of the undeformed configuration and the deformed strip for the problems of (a) E-Elasticity and (b) FM-Elasticity. The plot is coloured by the potential field.} \label{fig_strip_hole_e_fm_elasticity} \end{figure} \begin{figure}[htb!]
\centering \includegraphics[width=\textwidth]{images/strip_hole_e_fm_elasticity_plots.pdf} \caption{The (a) horizontal displacement of point A, and (b) the vertical displacement of point B for the strip with a hole problem over five time (load) steps.} \label{fig_strip_hole_e_fm_elasticity_plots} \end{figure} \subsection{Bending of a micro-cantilever beam} Consider the microscale cantilever beam of dimensions $L \times L/10 \times L/10$ shown in \fig{fig_beam}, where $L = \num{100}$. The beam is fully fixed on the left face at $X=0$, that is, the macroscopic displacement $\gz{u} = \gz{0}$. A traction $\gz{t}_0^\text{pre} = \num{-0.2}\gz{E}_2$ is applied to the right face at $X=L$ in a single load step. The electric potential $\varphi = \varphi^\text{pre} = \num{0}$ at $X=0$. The setup is typical of that used to quantify the flexoelectric response \citep[see e.g.][and the references therein]{Wenhui2001}. The applied traction causes the cantilever to bend, thereby inducing a transverse strain gradient. The polarization due to the flexoelectric effect is then determined from the measurement of the potential difference between two metallic plates on the upper and lower faces of the beam, as indicated by points C and D in \fig{fig_beam}. \begin{figure}[htb!] \centering \includegraphics[width=0.6\textwidth]{images/beam_geom_bcs.pdf} \caption{The geometry and boundary conditions for the micro-cantilever beam problem.} \label{fig_beam} \end{figure} \subsubsection{M-Elasticity} The influence of the length scale $\ell$ on the vertical deflection of the beam $u_y$ due to the applied traction is investigated for the problem of M-Elasticity to determine the role of the micromorphic contribution in the absence of flexoelectric effects. The vertical deflection along the line A--B for $\ell \in \{ 0; 0.25; 0.5; 0.75; 1; 2\}$ is shown in \fig{fig_beam_m_elasticity_plots}. The choice of $\ell \equiv 0$ corresponds to (nonlinear) Elasticity. 
An increasing length scale (decreasing specimen size) leads to a stiffer response, as expected. As $\ell \to 0$ we recover the Elasticity response. \begin{figure}[htb!] \centering \includegraphics[width=0.65\textwidth]{images/beam_m_elasticity_plots.pdf} \caption{The vertical deflection $u_y$ along the line A--B for the cantilever beam for various choices of the length scale $\ell$. The problem is M-Elasticity. A plot of the deformed shape of the beam for the choices $\ell \equiv 0$ (Elasticity) and $\ell \equiv 2$ is also shown. The deformed shape is coloured by the magnitude of the displacement field.} \label{fig_beam_m_elasticity_plots} \end{figure} \subsubsection{EM- and FM-Elasticity} The length scale is now fixed as $\ell \equiv 1$ (see \fig{fig_beam_m_elasticity_plots}) and the influence of the flexoelectric coefficient $\overline{\upsilon} \in \{0; 0.25; 0.5; 0.75; 1 \}$ on the response investigated. The distribution of the potential along the horizontal line A--B and the vertical line C--D (see \fig{fig_beam}) is shown in \fig{fig_beam_fm_elasticity_plots}(a) and (b), respectively. The choice of $\overline{\upsilon} \equiv 0$ corresponds to EM-Elasticity. For this case, and for the current choice of electric boundary conditions, the potential is zero in the beam. Choosing $\overline{\upsilon} > 0$ activates the flexoelectric effect. Increasing $\overline{\upsilon}$ linearly scales the magnitude of the distribution of the potential over the beam. This response can be understood from the distribution of the micro-gradient $\overline {\gz G} = {\rm Grad} \overline{\gz F}$ along the beam shown in \fig{fig_beam_fm_elasticity_plots}(c) and (d). The distribution of $\vert \overline {\gz G} \vert$ along the line A--B is essentially identical for all choices of $\overline{\upsilon}$. Thus the electric field has negligible influence on the micromorphic response in the current example. 
The choice of the energy associated with the flexoelectric effect in \eqn{psi_flexo_A} is linear in $\overline {\gz G}$. Hence the flexoelectric contribution to the dielectric displacement scales linearly with the flexoelectric coefficient $\overline{\upsilon}$, see \eqn{eD}. The micro-gradient $\overline {\gz G}$ exhibits a concentration at $X=0$ where the beam is macroscopically fully-constrained, see \fig{fig_beam_fm_elasticity_plots}(c) and (d). The macroscopic boundary condition results in a concentration in the macroscopic deformation field $\gz F$ and hence the micro-deformation $\overline{\gz F}$. Notice, however, the non-vanishing scale transition measure $\vert \overline{\gz F} - \gz F \vert$ at the boundary shown in \fig{fig_beam_fm_elasticity_plots}(c). Recall that the scale bridging energy $\psi_0^\text{scale}$ in \eqn{psi_scale} contains the term $[\overline{\gz F} - \gz F]$. The micro-deformation $\overline{\gz F}$ is a field variable (a nodal quantity in the finite element description) while the deformation gradient $\gz F$ is computed from the displacement field and evaluated at the quadrature points of the finite element mesh. Owing to this mismatch, $\overline{\gz F}$ cannot be tied more closely to $\gz F$ in the presence of a concentration in the macroscopic deformation field, irrespective of the choice of the penalty-like parameter $p$. Several choices for $p$ were investigated and all produced similar behaviour. \begin{figure}[htb!] \centering \includegraphics[width=\textwidth]{images/beam_fm_elasticity_plots.pdf} \caption{The distribution of the potential $\varphi$ over (a) the horizontal line A--B and (b) the vertical line C--D, for various choices of $\overline{\upsilon}$. The distribution of the norm of the micro-gradient $\vert \overline {\gz G} \vert$ and the scale transition measure $\vert \overline{\gz F} - \gz F \vert$ along the line A--B is shown in (c).
The distribution of the norm of the micro-gradient $\vert \overline {\gz G} \vert$ over the deformed cantilever beam is given in (d).} \label{fig_beam_fm_elasticity_plots} \end{figure} \section{Discussion and conclusion} \label{sec_discuss_conclusion} A novel micromorphic formulation for flexoelectricity has been presented. The formulation has been implemented within a conventional $C^0$-continuous finite element setting. The Dirichlet principle has been applied to reveal the structure of the governing relations and the boundary conditions. The formulation allows for a spectrum of different model problems to be considered by the appropriate restriction of the constitutive parameters. Details of the finite element approximation have been given. The theory has been elucidated via a series of numerical example problems. The cantilever beam example demonstrated the complex interaction between the mechanical size effect and the flexoelectric response. The influence of the free space surrounding the solid material has been ignored, as is often the case for piezoelectric materials \citep[see e.g.][]{Poya2015}. This is not however the case for electro-active polymers where the free space contribution can be significant \citep{Vogel2014}, and was accounted for by \citet{Yvonnet2017} in their model of flexoelectricity. Therefore, the framework presented will be extended to consider the free space. The approach will follow our previous work on E-Elasticity \citep{Pelteret2016}. The current work presented a mathematical and numerical model for flexoelectricity. The numerical model has been validated for E-Elasticity and M-Elasticity using benchmark problems in the literature. The validation of the FM-Elasticity model against experiment is critical and will be considered in future work. This will allow one to decide on the correct form of the flexoelectric energy and the choice of the relevant constitutive parameters. 
Experimentally measured uncertainty in the geometry and the constitutive parameters of the fabricated components should be accounted for in the model. The micromorphic framework presented could readily be extended to describe the converse flexoelectric effect via the introduction of a micromorphic electric field $\overline{\mathbb{E}}$ and its gradient $\overline{\mathbb{G}} = {\rm Grad} \overline{\mathbb{E}}$. The scale-bridging energy for the converse effect would involve the norm $\vert \overline{\mathbb{E}} - \mathbb{E} \vert$ and take a form similar to \eqn{psi_scale}. The form of the energy describing the converse flexoelectric effect would include $\overline{\mathbb{G}}$ and some measure of the macroscopic deformation. The choice of the optimal functional setting for the problem of flexoelectricity remains an open challenge. A careful mathematical and computational study will provide further insight and is recommended. This may also reveal an approach to better control the scale transition parameter in the vicinity of concentrations in the macroscopic fields. \section*{Acknowledgements} PS and AM gratefully acknowledge the support provided by the EPSRC Strategic Support Package: Engineering of Active Materials by Multiscale/Multiphysics Computational Mechanics - EP/R008531/1. DD was partly supported by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG), grant DA 1664/2-1.
\section{Introduction} The study of the spectral properties of a graph is a popular subject in graph theory. The relation between the rank of the adjacency matrix and other topological structure parameters of a graph has been studied extensively by many researchers. Recently there has been a growing study of the rank of the adjacency matrix associated with signed graphs and mixed graphs. In this paper we characterize the properties of the rank of a complex unit gain graph. We refer to \cite{BONDY} for undefined terminologies and notation. In this paper, we only consider simple and finite graphs. Let $G$ be an undirected graph with vertex set $V(G)=\{ v_{1}, v_{2}, \cdots, v_{n} \}$. The $degree$ of a vertex $u \in V(G)$, denoted by $d_{G}(u)$, is the number of vertices which are adjacent to $u$. A vertex of $G$ is called a {\it pendant vertex} if it is a vertex of degree one in $G$, whereas a vertex of $G$ is called a {\it quasi-pendant vertex} if it is adjacent to a pendant vertex in $G$ and is not itself a pendant vertex. Denote by $P_n$ and $C_n$ a path and a cycle on $n$ vertices, respectively. The {\it adjacency matrix} $A(G)$ of $G$ is the $n \times n$ matrix whose $(i, j)$-entry equals 1 if vertices $v_{i}$ and $v_j$ are adjacent and 0 otherwise. A $complex$ $unit$ $gain$ $graph$ (or $\mathbb{T}$-gain graph) is a graph with the additional structure that each orientation of an edge is given a complex unit, called a $gain$, which is the inverse of the complex unit assigned to the opposite orientation. For a simple graph $G$ with order $n$, let $\overrightarrow{E}$ be the set of oriented edges; clearly, this set contains two copies of each edge with opposite directions. We write $e_{ij}$ for the oriented edge from $v_{i}$ to $v_{j}$. The circle group, which is denoted by $\mathbb{T}= \{ z \in \mathbb{C} : |z|=1 \} $, is a subgroup of the multiplicative group of all nonzero complex numbers $\mathbb{C}^{\times}$.
A complex unit gain graph is a triple $\Phi=(G, \mathbb{T}, \varphi)$ consisting of a graph $G$, the circle group $\mathbb{T}= \{ z \in \mathbb{C} : |z|=1 \}$ and a gain function $\varphi: \overrightarrow{E} \rightarrow \mathbb{T}$, where $G$ is the underlying graph of $\Phi$ and $\varphi(e_{ij})=\varphi(e_{ji})^{-1}=\overline{\varphi(e_{ji})}$. For convenience, we write $(G, \varphi)$ for a complex unit gain graph $\Phi=(G, \mathbb{T}, \varphi)$ in this paper. The adjacency matrix associated with the complex unit gain graph $(G, \varphi)$ is the $n \times n$ complex matrix $A(G, \varphi)=(a_{ij})$, where $a_{ij}=\varphi(e_{ij})$ if $v_{i}$ is adjacent to $v_{j}$, and $a_{ij}=0$ otherwise. It is easy to see that $A(G, \varphi)$ is Hermitian, and hence its eigenvalues are real. If the gain of every edge in $(G, \varphi)$ is 1, then the adjacency matrix $A(G, \varphi)$ is exactly the adjacency matrix $A(G)$ of the underlying graph $G$. Clearly, a simple graph can be regarded as a complex unit gain graph in which every edge has gain 1. The $positive$ $inertia$ $index$, denoted by $p^{+}(G, \varphi)$, and the $negative$ $inertia$ $index$, denoted by $n^{-}(G, \varphi)$, of a complex unit gain graph $(G, \varphi)$ are defined to be the number of positive eigenvalues and negative eigenvalues of $A(G, \varphi)$, counted with multiplicities, respectively. The $rank$ of a complex unit gain graph $(G, \varphi)$, written as $r(G, \varphi)$, is defined to be the rank of $A(G, \varphi)$. Obviously, $r(G, \varphi)=p^{+}(G, \varphi)+n^{-}(G, \varphi)$. For an induced subgraph $H$ of a graph $G$, denote by $G-H$ the subgraph obtained from $G$ by deleting all vertices of $H$ and all incident edges. For a subset $X$ of $V(G)$, $G-X$ is the induced subgraph obtained from $G$ by deleting all vertices in $X$ and all incident edges. In particular, $G-\{ x \}$ is usually written as $G-x$ for simplicity.
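As a small numerical aside (not part of the formal development), the Hermitian adjacency matrix $A(G, \varphi)$ can be assembled directly from a list of oriented edge gains. The triangle below, whose gain product is purely imaginary, is a hypothetical example chosen so that $A(G, \varphi)$ is singular and the rank drops to $2$.

```python
import numpy as np

def gain_adjacency(n, gains):
    """Adjacency matrix A(G, phi) of a T-gain graph on n vertices.

    `gains` maps an oriented edge (i, j) to a unit-modulus complex
    gain; the opposite orientation automatically receives the complex
    conjugate, phi(e_ji) = conj(phi(e_ij)), so A is Hermitian.
    """
    A = np.zeros((n, n), dtype=complex)
    for (i, j), g in gains.items():
        A[i, j] = g
        A[j, i] = np.conj(g)
    return A

# A triangle C_3 with gain product phi(C_3) = 1 * 1 * i = i
# (purely imaginary), traversed as v_0 -> v_1 -> v_2 -> v_0:
A = gain_adjacency(3, {(0, 1): 1, (1, 2): 1, (2, 0): 1j})
assert np.allclose(A, A.conj().T)  # Hermitian, so real eigenvalues
rank = np.linalg.matrix_rank(A)    # here: 2, since det(A) = 2 Re(i) = 0
```

For a triangle with oriented gains $a$, $b$, $c$ one finds $\det A = 2\,{\rm Re}(abc)$, so the matrix is singular precisely when the gain product is purely imaginary; the example above is one such instance.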
For an induced subgraph $H$ and a vertex $u$ outside $H$, the induced subgraph of $G$ with vertex set $V(H) \cup \{ u \}$ is simply written as $H+u$. For a graph $G$, let $c(G)$ be the {\it cyclomatic number} of $G$, that is $c(G)=|E(G)|-|V(G)|+\omega(G)$, where $\omega(G)$ is the number of connected components of $G$. Two vertices of a graph $G$ are said to be $independent$ if they are not adjacent. A subset $I$ of $V(G)$ is called an $independent$ $set$ if any two vertices of $I$ are independent in $G$. An independent set $I$ is $maximum$ if $G$ has no independent set $I'$ with $|I'|>|I|$. The number of vertices in a maximum independent set of $G$ is called the $independence$ $number$ of $G$ and is denoted by $\alpha(G)$. For a complex unit gain graph $(G, \varphi)$, the independence number and cyclomatic number of $(G, \varphi)$ are defined to be the independence number and cyclomatic number of its underlying graph, respectively. Let $G$ be a graph whose cycles (if any) are pairwise vertex-disjoint and let $\mathscr{C}_{G}$ be the set of all cycles of $G$. $T_{G}$ is the acyclic graph obtained from $G$ by contracting each cycle of $G$ into a single vertex (called a {\it cyclic vertex}). Denote by $\mathscr{O}_{G}$ the set of all cyclic vertices of $G$. Moreover, denote by $[T_{G}]$ the subgraph of $T_{G}$ induced by all non-cyclic vertices. Clearly, $[T_{G}]=T_{G}-\mathscr{O}_{G}$. The rank of graphs has been discussed intensively by many researchers. Several papers have focused on the rank of graphs in terms of other topological structure parameters. Wang and Wong characterized bounds for the matching number, the edge chromatic number and the independence number of a graph in terms of the rank in \cite{WANGLONG}. Gutman and Sciriha \cite{GUT} studied the nullity of line graphs of trees. Guo et al.
\cite{LXL} independently introduced the Hermitian adjacency matrix of a mixed graph and presented some basic properties of its rank. In \cite{FYZ1}, the rank of signed unicyclic graphs was discussed by Fan et al. He et al. characterized the relation among the rank, the matching number and the cyclomatic number of a signed graph in \cite{HSJ}. Chen et al. \cite{LSC} investigated the relation between the $H$-rank of a mixed graph and the matching number of its underlying graph. For further research on the rank of a graph, we refer the reader to \cite{BEVI,HOU,MAH2,MOHAR2,WANGXIN}. Recently, the study of the properties of complex unit gain graphs has attracted increasing attention. Reff extended some fundamental concepts from spectral graph theory to complex unit gain graphs and defined their adjacency, incidence and Laplacian matrices in \cite{REFF}. Yu et al. \cite{YGH} investigated some properties of the inertia of complex unit gain graphs and discussed the inertia index of a complex unit gain cycle. In \cite{FYZUNIT}, Wang et al. provided a combinatorial description of the determinant of the Laplacian matrix of a complex unit gain graph, which generalizes that for the determinant of the Laplacian matrix of a signed graph. Lu et al. \cite{LUY} studied the complex unit gain unicyclic graphs with small positive or negative index and characterized the complex unit gain bicyclic graphs with rank 2, 3 or 4. In \cite{WLG}, the relation between the rank of a complex unit gain graph, the rank of its underlying graph and the cyclomatic number was investigated by Lu et al. In this paper, upper and lower bounds on the rank of a complex unit gain graph $(G, \varphi)$ with order $n$ are established in terms of the cyclomatic number and the independence number of its underlying graph. Moreover, the properties of the extremal graphs which attain the lower bound are identified.
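As a concrete illustration of the three quantities related by these bounds, the following sketch computes the rank, the cyclomatic number and the independence number for the cycle $C_{4}$ with all gains equal to $1$; the brute-force search for $\alpha(G)$ is only suitable for such small examples.

```python
import numpy as np
from itertools import combinations

def independence_number(n, edges):
    """Brute-force alpha(G); fine for small illustrative graphs."""
    adj = set(map(frozenset, edges))
    for k in range(n, 0, -1):
        for S in combinations(range(n), k):
            if not any(frozenset(p) in adj for p in combinations(S, 2)):
                return k
    return 0

# C_4 with all gains 1, i.e. A(G, phi) = A(G):
n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

r = np.linalg.matrix_rank(A)   # rank r(G, phi); here 2
c = len(edges) - n + 1         # cyclomatic number of a connected graph
alpha = independence_number(n, edges)
# Sandwich 2n - 2c - 2alpha <= r <= 2n - 2alpha:  2 <= 2 <= 4
assert 2 * n - 2 * c - 2 * alpha <= r <= 2 * n - 2 * alpha
```

For this example $r = 2$, $c = 1$ and $\alpha = 2$, so the lower bound $2n-2c(G)-2\alpha(G) = 2$ is attained.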
The following Theorems \ref{T30} and \ref{T50} are our main results. \begin{theorem}\label{T30} Let $(G, \varphi)$ be a complex unit gain graph with order $n$. Then $$2n-2c(G)-2\alpha(G) \leq r(G, \varphi) \leq 2n-2\alpha(G).$$ \end{theorem} \begin{theorem}\label{T50} Let $(G, \varphi)$ be a complex unit gain graph with order $n$. Then $ r(G, \varphi)= 2n-2c(G)-2\alpha(G)$ if and only if all the following conditions hold for $(G, \varphi)$: {\em(i)} the cycles (if any) of $(G, \varphi)$ are pairwise vertex-disjoint; {\em(ii)} for each cycle (if any) $(C_{l}, \varphi)$ of $(G, \varphi)$, either $\varphi(C_{l}, \varphi)=(-1)^{\frac{l}{2}}$ and $l$ is even or $Re((-1)^{\frac{l-1}{2}}\varphi(C_{l}, \varphi))=0$ and $l$ is odd; {\em(iii)} $\alpha(T_{G})=\alpha([T_{G}])+c(G)$. \end{theorem} The rest of this paper is organized as follows. Prior to showing our main results, in Section 2 we list some known elementary lemmas and results which will be useful. In Section 3, we give the proof of Theorem \ref{T30}. In Section 4, the properties of the extremal complex unit gain graphs which attain the lower bound of Theorem \ref{T30} are identified, and the proof of Theorem \ref{T50} is presented. \section{Preliminaries} In this section, some known results and useful lemmas which will be used in the proofs of our main results are listed. \begin{lemma} \label{L16}{\rm\cite{YGH}} Let $(G, \varphi)$ be a complex unit gain graph. {\em(i)} If $(H, \varphi)$ is an induced subgraph of $(G, \varphi)$, then $r(H, \varphi) \leq r(G, \varphi)$. {\em(ii)} If $(G_{1}, \varphi), (G_{2}, \varphi), \cdots, (G_{t}, \varphi)$ are all the connected components of $(G, \varphi)$, then $r(G, \varphi)=\sum_{i=1}^{t}r(G_{i}, \varphi)$. {\em(iii)} $r(G, \varphi)\geq 0$ with equality if and only if $(G, \varphi)$ is an empty graph.
\end{lemma} \begin{definition} \label{D053}{\rm\cite{LUY}} Let $(C_{n}, \varphi)$ ($n \geq 3$) be a complex unit gain cycle and $$\varphi(C_{n}, \varphi)=\varphi(v_{1}v_{2} \cdots v_{n}v_{1} )=\varphi(v_{1}v_{2})\varphi(v_{2}v_{3}) \cdots \varphi(v_{n-1}v_{n})\varphi(v_{n}v_{1}).$$ Then $(C_{n}, \varphi)$ is said to be one of the following five Types: $$\left\{ \begin{array}{ll} \rm{Type~A}, & \hbox{if $\varphi(C_{n}, \varphi)=(-1)^{\frac{n}{2}}$ and $n$ is even;} \\ \rm{Type~B}, & \hbox{if $\varphi(C_{n}, \varphi) \neq (-1)^{\frac{n}{2}}$ and $n$ is even;} \\ \rm{Type~C}, & \hbox{if $Re((-1)^{\frac{n-1}{2}}\varphi(C_{n}, \varphi))>0$ and $n$ is odd;} \\ \rm{Type~D}, & \hbox{if $Re((-1)^{\frac{n-1}{2}}\varphi(C_{n}, \varphi))<0$ and $n$ is odd;} \\ \rm{Type~E}, & \hbox{if $Re((-1)^{\frac{n-1}{2}}\varphi(C_{n}, \varphi))=0$ and $n$ is odd.} \end{array} \right. $$ Here $Re(\cdot)$ denotes the real part of a complex number. \end{definition} \begin{lemma} \label{L12}{\rm\cite{YGH}} Let $(C_{n}, \varphi)$ be a complex unit gain cycle of order $n$. Then $$(p^{+}(C_{n}, \varphi), n^{-}(C_{n}, \varphi))=\left\{ \begin{array}{ll} (\frac{n-2}{2}, \frac{n-2}{2}), & \hbox{if $(C_{n}, \varphi)$ is of \rm{Type~A};} \\ (\frac{n}{2}, \frac{n}{2}), & \hbox{if $(C_{n}, \varphi)$ is of \rm{Type~B};} \\ (\frac{n+1}{2}, \frac{n-1}{2}), & \hbox{if $(C_{n}, \varphi)$ is of \rm{Type~C};} \\ (\frac{n-1}{2}, \frac{n+1}{2}), & \hbox{if $(C_{n}, \varphi)$ is of \rm{Type~D};} \\ (\frac{n-1}{2}, \frac{n-1}{2}), & \hbox{if $(C_{n}, \varphi)$ is of \rm{Type~E}.} \end{array} \right. $$ \end{lemma} \begin{lemma} \label{L15}{\rm\cite{YGH}} Let $(T, \varphi)$ be an acyclic complex unit gain graph. Then $r(T, \varphi)= r(T)$. \end{lemma} From Lemma \ref{L15}, we have the following Lemma \ref{LPN} directly. \begin{lemma} \label{LPN} Let $(P_{n}, \varphi)$ be a complex unit gain path with order $n$.
Then $$ r(P_{n}, \varphi)=\left\{ \begin{array}{ll} n-1, & \hbox{if $n$ is odd;} \\ n, & \hbox{if $n$ is even.} \end{array} \right. $$ \end{lemma} \begin{lemma} \label{L051}{\rm\cite{BONDY}} Let $T$ be an acyclic graph with order $n$. Then $r(T)=2m(T)$ and $\alpha(T)+m(T)=n$. \end{lemma} Obviously, by Lemmas \ref{L15} and \ref{L051}, the following Lemma \ref{L052} can be obtained. \begin{lemma} \label{L052} Let $(T, \varphi)$ be an acyclic complex unit gain graph with order $n$. Then $r(T, \varphi)+2\alpha(T)=2n$. \end{lemma} \begin{lemma} \label{L13}{\rm\cite{YGH}} Let $y$ be a pendant vertex of a complex unit gain graph $(G, \varphi)$ and let $x$ be the neighbour of $y$. Then $r(G, \varphi)=r((G, \varphi)- \{ x, y \} )+2$. \end{lemma} \begin{lemma} \label{L14}{\rm\cite{YGH}} Let $x$ be a vertex of a complex unit gain graph $(G, \varphi)$. Then $r(G, \varphi)-2 \leq r((G, \varphi)-x) \leq r(G, \varphi)$. \end{lemma} \begin{lemma} \label{L053}{\rm\cite{HLSC}} Let $y$ be a pendant vertex of a graph $G$ and let $x$ be the neighbour of $y$. Then $\alpha(G)=\alpha(G-x)=\alpha(G-\{ x, y \})+1$. \end{lemma} \begin{lemma} \label{L23}{\rm\cite{WDY}} Let $G$ be a graph with $x \in V(G)$. {\em(i)} $c(G)=c(G-x)$ if $x$ lies outside any cycle of $G$; {\em(ii)} $c(G-x) \leq c(G)-1$ if $x$ lies on a cycle of $G$; {\em(iii)} $c(G-x) \leq c(G)-2$ if $x$ is a common vertex of distinct cycles of $G$. \end{lemma} \begin{lemma} \label{L054}{\rm\cite{HLSC}} Let $G$ be a graph. Then {\em(i)} $\alpha(G)-1 \leq \alpha(G-x) \leq \alpha(G) $ for any vertex $x \in V(G)$; {\em(ii)} $\alpha(G-e) \geq \alpha(G)$ for any edge $e \in E(G)$. \end{lemma} \begin{lemma} \label{L055}{\rm\cite{HLSC}} Let $T$ be a tree with at least one edge and $T_{0}$ be the subtree obtained from $T$ by deleting all pendant vertices of $T$.
{\em(i)} $\alpha(T) \leq \alpha(T_{0}) +p(T) $, where $p(T)$ is the number of pendant vertices of $T$; {\em(ii)} If $\alpha(T) = \alpha(T-D)+|D|$ for a subset $D$ of $V(T)$, then there is a pendant vertex $x$ such that $x \notin D$. \end{lemma} \section{Proof of Theorem \ref{T30}} In this section, the proof of Theorem \ref{T30} is presented. \noindent {\bf The proof of Theorem \ref{T30}.} Firstly, we show that $ r(G, \varphi)\leq 2n -2\alpha(G)$. Let $I$ be a maximum independent set of $G$, i.e., $|I|=\alpha(G)$. Then $$A(G, \varphi)=\left( \begin{array}{cc} \mathbf{0} & \boldsymbol{B} \\ \overline{\boldsymbol{B}}^{\top} & \boldsymbol{A} \\ \end{array} \right) $$ where $\boldsymbol{B}$ is the submatrix of $A(G, \varphi)$ with rows indexed by $I$ and columns indexed by $V(G)-I$, $\overline{\boldsymbol{B}}^{\top}$ refers to the conjugate transpose of $\boldsymbol{B}$ and $\boldsymbol{A}$ is the Hermitian adjacency matrix of the induced subgraph $G-I$. Then it can be checked that $$r(G, \varphi)\leq r(\mathbf{0}, \boldsymbol{B})+r(\overline{\boldsymbol{B}}^{\top}, \boldsymbol{A})\leq n-\alpha(G)+n-\alpha(G)=2n-2\alpha(G).$$ Thus, $$r(G, \varphi)\leq 2n-2\alpha(G).$$ Next, we argue by induction on $c(G)$ to show that $2n-2c(G) \leq r(G, \varphi)+2\alpha(G) $. If $c(G)=0$, then $(G, \varphi)$ is an acyclic complex unit gain graph, and so the result follows from Lemma \ref{L052}. Hence one can assume that $c(G) \geq 1$. Let $u$ be a vertex on some cycle of $(G, \varphi)$ and $(G', \varphi)=(G, \varphi)-u$. Let $(G_{1}, \varphi), (G_{2}, \varphi), \cdots, (G_{l}, \varphi)$ be all connected components of $(G', \varphi)$. By Lemma \ref{L23}, we have \begin{equation} \label{E1} \sum_{i=1}^{l}c(G_{i})=c(G') \leq c(G)-1. \end{equation} By the induction hypothesis, one has \begin{equation} \label{E2} 2(n-1)-2c(G') \leq r(G', \varphi)+2\alpha(G').
\end{equation} By Lemmas \ref{L054} and \ref{L14}, we have \begin{equation} \label{E3} \sum_{i=1}^{l}\alpha(G_{i})=\alpha(G') \leq \alpha(G) \end{equation} and \begin{equation} \label{E4} \sum\limits_{i=1}^{l}r(G_{i}, \varphi)=r(G', \varphi) \leq r(G, \varphi). \end{equation} Thus the desired inequality now follows by combining (\ref{E1}), (\ref{E2}), (\ref{E3}) and (\ref{E4}), \begin{eqnarray} \label{E0} r(G, \varphi)+2\alpha(G) & \ge & r(G', \varphi)+2\alpha(G') \\ \nonumber & \ge & 2(n-1)-2c(G') \\ \nonumber & \geq & 2(n - 1) - 2(c(G) - 1) = 2n-2c(G), \end{eqnarray} as desired. This completes the proof of Theorem \ref{T30}. $\square$ \section{Proof of Theorem \ref{T50}} A complex unit gain graph $(G, \varphi)$ with order $n$ is called {\it lower-optimal} if $r(G, \varphi)=2n-2c(G)-2\alpha(G)$, that is, if it attains the lower bound in Theorem \ref{T30}. In this section, we characterize the properties of the complex unit gain graphs which are lower-optimal, and then give the proof of Theorem \ref{T50}. \begin{lemma} \label{L001} Let $u$ be a cut vertex of a complex unit gain graph $(G, \varphi)$ and $(H, \varphi)$ be a component of $(G, \varphi)-u$. If $r(H, \varphi)=r((H, \varphi)+u)$, then $r(G, \varphi)=r(H, \varphi)+r((G, \varphi)-(H, \varphi))$. \end{lemma} \begin{proof} Let $|V(H, \varphi)|=k$ and $$A(G, \varphi) = \left( \begin{array}{ccc} \boldsymbol{A} & \mathbf{\beta} & \mathbf{0} \\ \mathbf{\overline{\beta}^{\top}} & 0 & \mathbf{\gamma} \\ \boldsymbol{0} & \mathbf{\overline{\gamma}^{\top}} & \boldsymbol{B} \\ \end{array} \right), $$ where $\boldsymbol{A}$ and $\boldsymbol{B}$ are the Hermitian adjacency matrices of $(H, \varphi)$ and $(G, \varphi)-(H, \varphi)-u$, respectively, and $\mathbf{\overline{\beta}^{\top}}$ and $\mathbf{\overline{\gamma}^{\top}}$ refer to the conjugate transposes of $\mathbf{\beta}$ and $\mathbf{\gamma}$. Since $r(H, \varphi)=r((H, \varphi)+u)$, the linear equation $\boldsymbol{A}X=\mathbf{\beta}$ has solutions.
Let $\xi$ be a solution of $\boldsymbol{A}X=\mathbf{\beta}$, and put $$Q = \left( \begin{array}{ccc} \boldsymbol{I_{k}} & -\xi & \mathbf{0} \\ \mathbf{0} & 1 & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \boldsymbol{I_{n-k-1}} \\ \end{array} \right), $$ where $\boldsymbol{I_{k}}$ denotes the $k \times k$ identity matrix. By direct calculation, we have $$\overline{Q}^{\top}A(G, \varphi)Q = \left( \begin{array}{ccc} \boldsymbol{A} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & -\overline{\mathbf{\beta}}^{\top}\xi & \mathbf{\gamma} \\ \mathbf{0} & \overline{\mathbf{\gamma}}^{\top} & \boldsymbol{B} \\ \end{array} \right). $$ Since $r(H, \varphi)=r((H, \varphi)+u)$, we have $-\overline{\mathbf{\beta}}^{\top}\xi=0$. Thus we have $r(G, \varphi)=r(H, \varphi)+r((G, \varphi)-(H, \varphi))$. \end{proof} \begin{lemma} \label{L002} Let $(C_{l}, \varphi)$ be a pendant complex unit gain cycle of a complex unit gain graph $(G, \varphi)$, where $u$ is the only vertex of $(C_{l}, \varphi)$ of degree 3. Let $(H, \varphi)=(G, \varphi)-(C_{l}, \varphi)$ and $(G', \varphi)=(H, \varphi)+u$. If $Re((-1)^{\frac{l-1}{2}}\varphi(C_{l}, \varphi))=0$ and $l$ is odd, then $$r(G, \varphi)=r(G', \varphi)+l-1.$$ \end{lemma} \begin{proof} Note that $u$ is a cut vertex of $(G, \varphi)$ and $(P_{l-1}, \varphi)$ is a complex unit gain path as a component of $(G, \varphi)-u$. Since $Re((-1)^{\frac{l-1}{2}}\varphi(C_{l}, \varphi))=0$ and $l$ is odd, by Lemmas \ref{LPN} and \ref{L12} one has that $$r(P_{l-1}, \varphi)=r(C_{l}, \varphi)=l-1.$$ Then, by Lemma \ref{L001}, we have $$r(G, \varphi)=r(G', \varphi)+r(P_{l-1}, \varphi)=r(G', \varphi)+l-1.$$ \end{proof} By Lemma \ref{L12}, the following Lemma \ref{L50} can be obtained directly. \begin{lemma} \label{L50} The complex unit gain cycle $(C_{q}, \varphi)$ is lower-optimal if and only if either $\varphi(C_{q}, \varphi)=(-1)^{\frac{q}{2}}$ and $q$ is even or $Re((-1)^{\frac{q-1}{2}}\varphi(C_{q}, \varphi))=0$ and $q$ is odd.
\end{lemma} \begin{lemma} \label{L000} Let $(G, \varphi)$ be a complex unit gain graph and $u$ be a vertex of $(G, \varphi)$ lying on a complex unit gain cycle. If $r(G, \varphi)= 2n-2c(G)-2\alpha(G)$, then each of the following holds. \\ {\em(i)} $r(G, \varphi)=r((G, \varphi)-u)$; \\ {\em(ii)} $(G, \varphi)-u$ is lower-optimal; \\ {\em(iii)} $c(G)=c(G-u)+1$; \\ {\em(iv)} $\alpha(G)=\alpha(G-u)$; \\ {\em(v)} $u$ lies on just one complex unit gain cycle of $(G, \varphi)$ and $u$ is not a quasi-pendant vertex of $(G, \varphi)$. \end{lemma} \begin{proof} Recall the argument in the proof of Theorem \ref{T30} which justifies $r(G, \varphi)+2\alpha(G) \geq 2n-2c(G)$. If $(G, \varphi)$ is lower-optimal, then both ends of (\ref{E0}) in that proof coincide, so all inequalities in (\ref{E0}) must be equalities, from which (i)-(iv) follow. It remains to prove (v). By Lemma \ref{L000} (iii) and Lemma \ref{L23}, we conclude that $u$ lies on just one complex unit gain cycle of $(G, \varphi)$. Suppose to the contrary that $u$ is a quasi-pendant vertex adjacent to a pendant vertex $v$. Then by Lemma \ref{L13}, we have $$r((G, \varphi)-u)=r((G, \varphi)- \{ u, v \})=r(G, \varphi)-2,$$ which is a contradiction to (i). This completes the proof of the lemma. \end{proof} \begin{lemma} \label{L51} Let $(G, \varphi)$ be a complex unit gain graph and $(G_{1}, \varphi), (G_{2}, \varphi), \cdots, (G_{k}, \varphi)$ be all connected components of $(G, \varphi)$. Then $(G, \varphi)$ is lower-optimal if and only if $(G_{j}, \varphi)$ is lower-optimal for each $j \in \{1, 2, \cdots, k \}$. \end{lemma} \begin{proof} (Sufficiency.) For each $j \in \{ 1, 2, \cdots, k \}$, one has that $$r(G_{j}, \varphi)+2\alpha(G_{j})= 2|V(G_{j})|-2c(G_{j}).$$ Then, one has that \begin{eqnarray*} r(G, \varphi)&=&\sum\limits_{j=1}^{k}r(G_{j}, \varphi)\\ &=&\sum\limits_{j=1}^{k}[2|V(G_{j})|-2c(G_{j})-2\alpha(G_{j})]\\ &=&2|V(G)|-2c(G)-2\alpha(G). \end{eqnarray*} (Necessity.)
Suppose to the contrary that there is a connected component of $(G, \varphi)$, say $(G_{1}, \varphi)$, which is not lower-optimal. By Theorem \ref{T30}, one has that $$r(G_{1}, \varphi)+2\alpha(G_{1}) > 2|V(G_{1})|-2c(G_{1})$$ and for each $ j \in \{ 2, 3, \cdots, k \}$, we have $$r(G_{j}, \varphi) +2\alpha(G_{j}) \geq 2|V(G_{j})|-2c(G_{j}).$$ Thus, one has that $$r(G, \varphi)+2\alpha(G) > 2|V(G)|-2c(G),$$ a contradiction. \end{proof} \begin{lemma} \label{L56} Let $u$ be a pendant vertex of a complex unit gain graph $(G, \varphi)$ and $v$ be the vertex adjacent to $u$. Let $(G_{0}, \varphi)=(G, \varphi)-\{ u, v \}$. Then $(G, \varphi)$ is lower-optimal if and only if $v$ is not on any complex unit gain cycle of $(G, \varphi)$ and $(G_{0}, \varphi)$ is lower-optimal. \end{lemma} \begin{proof} (Sufficiency.) Since $v$ is not on any complex unit gain cycle, by Lemma \ref{L23}, we have $c(G)=c(G_{0})$. By Lemmas \ref{L13} and \ref{L053}, one has that $$r(G, \varphi)=r(G_{0}, \varphi)+2, \alpha(G)=\alpha(G_{0})+1.$$ Thus, since $(G_{0}, \varphi)$ is lower-optimal, $(G, \varphi)$ is lower-optimal. (Necessity.) By Lemmas \ref{L13} and \ref{L053} and the condition that $(G, \varphi)$ is lower-optimal, it can be checked that $$r(G_{0}, \varphi)+2\alpha(G_{0})=2|V(G_{0})|-2c(G).$$ It follows from Theorem \ref{T30} that $$r(G_{0}, \varphi)+2\alpha(G_{0}) \geq 2|V(G_{0})|-2c(G_{0}).$$ By the fact that $c(G_{0}) \leq c(G)$, we have $$c(G)=c(G_{0}), r(G_{0}, \varphi)+2\alpha(G_{0})=2|V(G_{0})|-2c(G_{0}).$$ Thus $(G_{0}, \varphi)$ is also lower-optimal and $v$ is not on any complex unit gain cycle of $(G, \varphi)$. \end{proof} \begin{lemma} \label{L55} Let $(G, \varphi)$ be a complex unit gain graph obtained by joining a vertex $x$ of a complex unit gain cycle $(C_{l}, \varphi)$ by an edge to a vertex $y$ of a complex unit gain connected graph $(K, \varphi)$.
If $(G, \varphi)$ is lower-optimal, then the following properties hold for $(G, \varphi)$. {\em(i)} For each complex unit gain cycle $(C_{q}, \varphi)$ of $(G, \varphi)$, either $\varphi(C_{q}, \varphi)=(-1)^{\frac{q}{2}}$ and $q$ is even or $Re((-1)^{\frac{q-1}{2}}\varphi(C_{q}, \varphi))=0$ and $q$ is odd; {\em(ii)} If $\varphi(C_{l}, \varphi)=(-1)^{\frac{l}{2}}$ and $l$ is even, then $r(G, \varphi)=l-2+r(K, \varphi)$ and $\alpha(G)=\frac{l}{2}+\alpha(K)$; if $Re((-1)^{\frac{l-1}{2}}\varphi(C_{l}, \varphi))=0$ and $l$ is odd, then $r(G, \varphi)=l-1+r(K, \varphi)$ and $\alpha(G)=\frac{l-1}{2}+\alpha(K)$. {\em(iii)} $(K, \varphi)$ is lower-optimal; {\em(iv)} Let $(G', \varphi)$ be the induced complex unit gain subgraph of $(G, \varphi)$ with vertex set $V(K)\cup \{ x\}$. Then $(G', \varphi)$ is also lower-optimal; {\em(v)} $\alpha(G')=\alpha(K)+1$ and $r(G', \varphi)=r(K, \varphi)$. \end{lemma} \begin{proof} {\bf (i):} We show (i) by induction on the order $n$ of $(G, \varphi)$. By Lemma \ref{L000} (v), $x$ cannot be a quasi-pendant vertex of $(G, \varphi)$, so $y$ is not a pendant vertex of $(G, \varphi)$. Hence $(K, \varphi)$ contains at least two vertices, i.e., $n\geq l+2$. If $n=l+2$, then $(K, \varphi)$ contains exactly two vertices; without loss of generality, assume they are $y$ and $z$. Thus, one has that $(C_{l}, \varphi)=(G, \varphi)-\{ y, z \}$. By Lemma \ref{L56}, $(C_{l}, \varphi)$ is lower-optimal. Then (i) follows from Lemma \ref{L50} directly. Next, we consider the case of $n \geq l+3$. Suppose that (i) holds for every lower-optimal complex unit gain graph with order smaller than $n$. If $(K, \varphi)$ is a forest, then $(G, \varphi)$ contains at least one pendant vertex. Let $u$ be a pendant vertex of $(G, \varphi)$ and $v$ be the vertex adjacent to $u$. By Lemma \ref{L000}, $v$ is not on $(C_{l}, \varphi)$. By Lemma \ref{L56}, one has that $(G, \varphi)-\{ u, v \}$ is lower-optimal.
Applying the induction hypothesis to $(G, \varphi)-\{ u, v \}$, we have either $\varphi(C_{l}, \varphi)=(-1)^{\frac{l}{2}}$ and $l$ is even or $Re((-1)^{\frac{l-1}{2}}\varphi(C_{l}, \varphi))=0$ and $l$ is odd. Then (i) follows in this case. If $(K, \varphi)$ contains cycles, let $g$ be a vertex lying on a cycle of $(K, \varphi)$. By Lemma \ref{L000}, $(G, \varphi)-g$ is lower-optimal. Then, the induction hypothesis applied to $(G, \varphi)-g$ implies that either $\varphi(C_{l}, \varphi)=(-1)^{\frac{l}{2}}$ and $l$ is even or $Re((-1)^{\frac{l-1}{2}}\varphi(C_{l}, \varphi))=0$ and $l$ is odd. Let $s$ be a vertex lying on $(C_{l}, \varphi)$. By Lemma \ref{L000}, $(G, \varphi)-s$ is lower-optimal. Then, the induction hypothesis applied to $(G, \varphi)-s$ implies that for each cycle $(C_{q}, \varphi)$ of $(K, \varphi)$ either $\varphi(C_{q}, \varphi)=(-1)^{\frac{q}{2}}$ and $q$ is even or $Re((-1)^{\frac{q-1}{2}}\varphi(C_{q}, \varphi))=0$ and $q$ is odd. This completes the proof of (i). Next we show (ii)-(v) according to the following two possible cases. {\bf Case 1.} $\varphi(C_{l}, \varphi)=(-1)^{\frac{l}{2}}$ and $l$ is even. {\bf (ii):} Since $x$ lies on a cycle of $(G, \varphi)$, by Lemmas \ref{L000}, \ref{LPN} and \ref{L053}, one has that \begin{equation} \label{E6} r(G, \varphi)=r((G, \varphi)-x)= r(P_{l-1}, \varphi)+r(K, \varphi)=l-2+r(K, \varphi) \end{equation} and \begin{equation} \label{E7} \alpha(G)=\alpha(G-x)= \alpha(P_{l-1})+\alpha(K)=\frac{l}{2}+\alpha(K). \end{equation} {\bf (iii):} As $(C_{l}, \varphi)$ is a pendant cycle of $(G, \varphi)$, one has that \begin{equation} \label{E8} c(K)=c(G)-1. \end{equation} By (\ref{E6})-(\ref{E8}), we have \begin{equation} \label{E9} r(K, \varphi)+2\alpha(K)=2(n-l)-2c(K). \end{equation} {\bf (iv):} Let $s$ be a vertex of $(C_{l}, \varphi)$ adjacent to $x$.
Then, by Lemmas \ref{L000}, \ref{L13} and \ref{L053}, we have \begin{equation} \label{E10} r(G, \varphi)=r((G, \varphi)-s)= l-2+r(G', \varphi) \end{equation} and \begin{equation} \label{E11} \alpha(G)=\alpha(G-s)= \frac{l-2}{2}+\alpha(G'). \end{equation} It is obvious that $c(G)=c(G')+1$. Then from (\ref{E10})-(\ref{E11}), we have \begin{eqnarray*} r(G', \varphi)+2\alpha(G')&=&r(G, \varphi)+2\alpha(G)-2(l-2)\\ &=&2n-2c(G)-2(l-2)\\ &=&2(n-l+1)-2c(G'). \end{eqnarray*} {\bf (v):} Combining (\ref{E6}) and (\ref{E10}), one has that $$r(K, \varphi)=r(G', \varphi).$$ From (\ref{E7}) and (\ref{E11}), we have $$\alpha(K)+1=\alpha(G').$$ {\bf Case 2.} $Re((-1)^{\frac{l-1}{2}}\varphi(C_{l}, \varphi))=0$ and $l$ is odd. {\bf (ii):} Since $x$ lies on a cycle of $(G, \varphi)$, by Lemmas \ref{L000}, \ref{LPN} and \ref{L053}, one has that \begin{equation} \label{E06} r(G, \varphi)=r((G, \varphi)-x)= r(P_{l-1}, \varphi)+r(K, \varphi)=l-1+r(K, \varphi) \end{equation} and \begin{equation} \label{E07} \alpha(G)=\alpha(G-x)= \alpha(P_{l-1})+\alpha(K)=\frac{l-1}{2}+\alpha(K). \end{equation} {\bf (iii):} As $C_{l}$ is a pendant cycle of $(G, \varphi)$, one has that \begin{equation} \label{E08} c(K)=c(G)-1. \end{equation} By (\ref{E06})-(\ref{E08}), we have \begin{equation} \label{E09} r(K, \varphi)+2\alpha(K)=2(n-l)-2c(K). \end{equation} {\bf (iv)} and {\bf (v):} By Lemma \ref{L002}, we have \begin{equation} \label{E010} r(G, \varphi)= l-1+r(G', \varphi). \end{equation} Then, by (\ref{E06}) and (\ref{E010}) we have \begin{equation} \label{E011} r(G', \varphi)=r(K, \varphi). \end{equation} By (\ref{E09}) and Theorem \ref{T30}, one has that \begin{eqnarray*} 2\alpha(K)&=&2(n-l)-r(K, \varphi)-2c(K)\\ &=&2(n-l+1)-r(G', \varphi)-2c(K)-2\\ &=&2(n-l+1)-r(G', \varphi)-2c(G')-2\\ &\leq&2\alpha(G')-2. \end{eqnarray*} Thus, we have $\alpha(K)\leq \alpha(G')-1$. On the other hand, by Lemma \ref{L054}, we have $\alpha(K) \geq \alpha(G')-1$.
Hence, \begin{equation} \label{E012} \alpha(K) = \alpha(G')-1. \end{equation} It is obvious that $c(G')=c(K)$. Combining (\ref{E09}), (\ref{E011}) and (\ref{E012}), one has that $$r(G', \varphi)+2\alpha(G')=2(n-l+1)-2c(G').$$ This implies (iv). Moreover, equalities (\ref{E011}) and (\ref{E012}) imply (v). This completes the proof. \end{proof} \begin{lemma} \label{L58} Let $(G, \varphi)$ be a lower-optimal complex unit gain graph. Then $\alpha(G)=\alpha(T_{G})+\sum_{C \in \mathscr{C}_{G}}\lfloor\frac{|V(C)|}{2}\rfloor-c(G)$. \end{lemma} \begin{proof} We argue by induction on the order $n$ of $G$ to show the lemma. If $n=1$, then the lemma holds trivially. Next, we consider the case of $n \geq 2$. Suppose that the result holds for every lower-optimal complex unit gain graph with order smaller than $n$. If $|E(T_{G})|=0$, i.e., $T_{G}$ is an empty graph, then each component of $(G, \varphi)$ is a cycle or an isolated vertex. For each cycle $C_{l}$, it is routine to check that $\alpha(C_{l})= \lfloor \frac{l}{2} \rfloor$. Then the lemma follows. If $|E(T_{G})| \geq 1$, then $T_{G}$ contains at least one pendant vertex, say $x$. If $x$ is also a pendant vertex in $(G, \varphi)$, then $(G, \varphi)$ contains a pendant vertex. If $x$ is a vertex obtained by contracting a cycle of $(G, \varphi)$, then $(G, \varphi)$ contains a pendant cycle. Then we will deal with the following two cases. {\bf Case 1.} $x$ is also a pendant vertex in $(G, \varphi)$. Let $y$ be the unique neighbour of $x$ and $(G_{0}, \varphi)= (G, \varphi)-\{ x, y \}$. By Lemma \ref{L56}, one has that $y$ is not on any cycle of $(G, \varphi)$ and $(G_{0}, \varphi)$ is lower-optimal. Furthermore, it is obvious that $c(G)=c(G_{0})$. By the induction hypothesis, we have (a) $\alpha(G_{0})=\alpha(T_{G_{0}})+\sum_{C \in \mathscr{C}_{G_{0}}}\lfloor\frac{|V(C)|}{2}\rfloor-c(G_{0})$.
Since $x$ is a pendant vertex of $(G, \varphi)$ and $y$ is a quasi-pendant vertex which is not in any cycle of $(G, \varphi)$, $x$ is a pendant vertex of $T_{G}$ and $y$ is a quasi-pendant vertex of $T_{G}$. Moreover, $T_{G_{0}}=T_{G}-\{ x, y \}$. Thus, by Lemma \ref{L053} and assertion (a), we have \begin{eqnarray*} \alpha(G)&=&\alpha(G_{0})+1\\ &=&\alpha(T_{G_{0}})+\sum_{C \in \mathscr{C}_{G_{0}}}\lfloor\frac{|V(C)|}{2}\rfloor-c(G_{0})+1\\ &=&\alpha(T_{G})-1+\sum_{C \in \mathscr{C}_{G_{0}}}\lfloor\frac{|V(C)|}{2}\rfloor-c(G_{0})+1\\ &=&\alpha(T_{G})+\sum_{C \in \mathscr{C}_{G}}\lfloor\frac{|V(C)|}{2}\rfloor-c(G). \end{eqnarray*} Thus, the result holds in this case. {\bf Case 2.} $x$ lies on a pendant cycle. Suppose that $x$ lies on a pendant cycle $C_{q}$. In this case, one can suppose that $x$ is the unique vertex of $C_{q}$ of degree 3. Let $K=G-C_{q}$ and $(G_{1}, \varphi)$ be the induced complex unit gain subgraph of $(G, \varphi)$ with vertex set $V(K)\cup \{ x\}$. By Lemma \ref{L55} (iv), one has that $(G_{1}, \varphi)$ is lower-optimal. By the induction hypothesis, we have (b) $\alpha(G_{1})=\alpha(T_{G_{1}})+\sum_{C \in \mathscr{C}_{G_{1}}}\lfloor\frac{|V(C)|}{2}\rfloor-c(G_{1})$. It can be checked that $$\mathscr{C}_{G}=\mathscr{C}_{G_{1}} \cup \{C_{q}\}=\mathscr{C}_{K} \cup \{C_{q}\}.$$ Moreover, one has that \begin{equation} \label{E13} \sum_{C \in \mathscr{C}_{G}}\lfloor\frac{|V(C)|}{2}\rfloor=\sum_{C \in \mathscr{C}_{G_{1}}}\lfloor\frac{|V(C)|}{2}\rfloor+\lfloor\frac{q}{2}\rfloor=\sum_{C \in \mathscr{C}_{K}}\lfloor\frac{|V(C)|}{2}\rfloor+\lfloor\frac{q}{2}\rfloor. \end{equation} Since $C_{q}$ is a pendant cycle of $(G, \varphi)$, it is obvious that \begin{equation} \label{E14} c(G_{1})=c(K)=c(G)-1. \end{equation} By Lemma \ref{L55} (v), one has that \begin{equation} \label{E15} \alpha(G_{1})=\alpha(K)+1. \end{equation} Note that \begin{equation} \label{E150} T_{G_{1}}=T_{G}.
\end{equation} By Lemma \ref{L55} (ii) and (\ref{E13})-(\ref{E150}), one has that \begin{eqnarray*} \alpha(G)&=&\alpha(K)+\lfloor\frac{q}{2}\rfloor\\ &=&\alpha(G_{1})+\lfloor\frac{q}{2}\rfloor-1\\ &=&\alpha(T_{G_{1}})+\sum_{C \in \mathscr{C}_{G_{1}}}\lfloor\frac{|V(C)|}{2}\rfloor-c(G_{1})+\lfloor\frac{q}{2}\rfloor-1\\ &=&\alpha(T_{G})+\sum_{C \in \mathscr{C}_{G}}\lfloor\frac{|V(C)|}{2}\rfloor-c(G_{1})-1\\ &=&\alpha(T_{G})+\sum_{C \in \mathscr{C}_{G}}\lfloor\frac{|V(C)|}{2}\rfloor-c(G). \end{eqnarray*} This completes the proof. \end{proof} \noindent {\bf The proof of Theorem \ref{T50}.} (Sufficiency.) We proceed by induction on the order $n$ of $(G, \varphi)$. If $n=1$, then the result holds trivially. Therefore we assume that $(G, \varphi)$ is a complex unit gain graph with order $n \geq 2$ and satisfies (i)-(iii). Suppose that every complex unit gain graph of order smaller than $n$ satisfying (i)-(iii) is lower-optimal. Since the cycles (if any) of $(G, \varphi)$ are pairwise vertex-disjoint, $(G, \varphi)$ has exactly $c(G)$ cycles, i.e., $|\mathscr{O}_{G}|=c(G)$. If $|E(T_{G})|=0$, i.e., $T_{G}$ is an empty graph, then each component of $(G, \varphi)$ is a cycle or an isolated vertex. By (ii) and Lemmas \ref{L50} and \ref{L51}, $(G, \varphi)$ is lower-optimal. If $|E(T_{G})| \geq 1$, then $T_{G}$ contains at least one pendant vertex. By (iii), one has that $$\alpha(T_{G})=\alpha([T_{G}])+c(G)=\alpha(T_{G}-\mathscr{O}_{G})+c(G)=\alpha(T_{G}-\mathscr{O}_{G})+|\mathscr{O}_{G}|.$$ Thus, by Lemma \ref{L055} (ii), there exists a pendant vertex of $T_{G}$ which is not in $\mathscr{O}_{G}$. Then, $(G, \varphi)$ contains at least one pendant vertex, say $u$. Let $v$ be the unique neighbour of $u$ and let $(G_{0}, \varphi)=(G, \varphi)-\{ u, v \}$. It is obvious that $u$ is a pendant vertex of $T_{G}$ adjacent to $v$ and $T_{G_{0}}=T_{G}-\{ u, v \}$.
By Lemma \ref{L053}, one has that $$\alpha(T_{G})=\alpha(T_{G}-v)=\alpha(T_{G}-\{ u, v \})+1.$$ {\bf Claim.} $v$ does not lie on any cycle of $(G, \varphi)$. By contradiction, assume that $v$ lies on a cycle of $(G, \varphi)$. Then $v$ is in $\mathscr{O}_{G}$. Note that the size of $\mathscr{O}_{G}$ is $c(G)$. Then, $H:=(T_{G}-v) \cup K_{1}$ is a spanning subgraph of $T_{G}$. Delete all the edges $e$ in $H$ such that $e$ has at least one end-vertex in $\mathscr{O}_{G} \backslash \{ v \}$. Thus, the resulting graph is $[T_{G}] \cup c(G)K_{1}$. By Lemma \ref{L054}, one has that $$\alpha([T_{G}] \cup c(G)K_{1}) \geq \alpha((T_{G}-v) \cup K_{1}),$$ that is, $$\alpha([T_{G}])+c(G) \geq \alpha(T_{G}-v)+1.$$ Then, we have $$\alpha([T_{G}]) \geq \alpha(T_{G}-v)+1-c(G)=\alpha(T_{G})+1-c(G),$$ a contradiction to (iii). This completes the proof of the claim. Thus, $v$ does not lie on any cycle of $(G, \varphi)$. Moreover, $u$ is also a pendant vertex of $[T_{G}]$ adjacent to $v$ and $[T_{G_{0}}]=[T_{G}]-\{ u, v \}$. By Lemma \ref{L053}, one has that $$\alpha([T_{G}])=\alpha([T_{G_{0}}])+1.$$ It is routine to check that $c(G)=c(G_{0}).$ Thus, \begin{eqnarray*} \alpha(T_{G_{0}})&=&\alpha(T_{G})-1\\ &=&\alpha([T_{G}])+c(G)-1\\ &=&\alpha([T_{G_{0}}])+1+c(G)-1\\ &=&\alpha([T_{G_{0}}])+c(G_{0}). \end{eqnarray*} Combining the fact that all cycles of $(G, \varphi)$ belong to $(G_{0}, \varphi)$, one has that $(G_{0}, \varphi)$ satisfies all the conditions (i)-(iii). By the induction hypothesis, $(G_{0}, \varphi)$ is lower-optimal. By Lemma \ref{L56}, $(G, \varphi)$ is lower-optimal. (Necessity.) Let $(G, \varphi)$ be a lower-optimal complex unit gain graph. If $(G, \varphi)$ is a complex unit gain acyclic graph, then (i)-(iii) hold directly. So one can suppose that $(G, \varphi)$ contains cycles.
By Lemmas \ref{L000} (v) and \ref{L55} (i), one has that the cycles (if any) of $(G, \varphi)$ are pairwise vertex-disjoint and for each cycle $(C_{l}, \varphi)$ of $(G, \varphi)$, either $\varphi(C_{l}, \varphi)=(-1)^{\frac{l}{2}}$ and $l$ is even or $Re((-1)^{\frac{l-1}{2}}\varphi(C_{l}, \varphi))=0$ and $l$ is odd. This completes the proof of (i) and (ii). Next, we argue by induction on the order $n$ of $(G, \varphi)$ to show (iii). Since $(G, \varphi)$ contains cycles, $n \geq 3$. If $n=3$, then $(G, \varphi)$ is a 3-cycle and (iii) holds trivially. Therefore we assume that $(G, \varphi)$ is a lower-optimal complex unit gain graph with order $n \geq 4$. Suppose that (iii) holds for all lower-optimal complex unit gain graphs of order smaller than $n$. If $|E(T_{G})|=0$, i.e., $T_{G}$ is an empty graph, then each component of $(G, \varphi)$ is a cycle or an isolated vertex. Then, (iii) follows. If $|E(T_{G})| \geq 1$, then $T_{G}$ contains at least one pendant vertex, say $x$. If $x$ is also a pendant vertex in $(G, \varphi)$, then $(G, \varphi)$ contains a pendant vertex. If $x$ is a vertex obtained by contracting a cycle of $(G, \varphi)$, then $(G, \varphi)$ contains a pendant cycle. We deal with (iii) in the following two cases. {\bf Case 1.} $x$ is a pendant vertex of $(G, \varphi)$. Let $y$ be the unique neighbour of $x$ and $(G_{1}, \varphi)= (G, \varphi)-\{ x, y \}$. By Lemma \ref{L56}, one has that $y$ is not on any cycle of $(G, \varphi)$ and $(G_{1}, \varphi)$ is lower-optimal. By the induction hypothesis, we have $$\alpha(T_{G_{1}})=\alpha([T_{G_{1}}])+c(G_{1}).$$ Note that $x$ is also a pendant vertex of $T_{G}$ adjacent to $y$; then $T_{G_{1}}=T_{G}-\{ x, y \}$, $[T_{G_{1}}]=[T_{G}]-\{ x, y \}$ and $c(G)=c(G_{1})$. By Lemma \ref{L053}, it can be checked that $$\alpha(T_{G})=\alpha([T_{G}])+c(G).$$ The result follows. {\bf Case 2.} $(G, \varphi)$ contains a pendant cycle.
Let $(C_{q}, \varphi)$ be a pendant complex unit gain cycle of $(G, \varphi)$ and $(K, \varphi)=(G, \varphi)-(C_{q}, \varphi)$. By Lemma \ref{L55} (iii), one has that $(K, \varphi)$ is lower-optimal. By the induction hypothesis, we have \begin{equation} \label{E20} \alpha(T_{K})=\alpha([T_{K}])+c(K). \end{equation} In view of Lemma \ref{L55} (ii), one has \begin{equation} \label{E21} \alpha(G)=\alpha(K)+\lfloor\frac{q}{2}\rfloor. \end{equation} Since $\mathscr{C}_{G}=\mathscr{C}_{K} \cup \{C_{q}\}$, we have \begin{equation} \label{E22} \sum_{C \in \mathscr{C}_{G}}\lfloor\frac{|V(C)|}{2}\rfloor=\sum_{C \in \mathscr{C}_{K}}\lfloor\frac{|V(C)|}{2}\rfloor+\lfloor\frac{q}{2}\rfloor. \end{equation} Since $(G, \varphi)$ and $(K, \varphi)$ are lower-optimal, by Lemma \ref{L58}, we have \begin{equation} \label{E23} \alpha(T_{G})=\alpha(G)-\sum_{C \in \mathscr{C}_{G}}\lfloor\frac{|V(C)|}{2}\rfloor+c(G) \end{equation} and \begin{equation} \label{E24} \alpha(T_{K})=\alpha(K)-\sum_{C \in \mathscr{C}_{K}}\lfloor\frac{|V(C)|}{2}\rfloor+c(K). \end{equation} It is routine to check that $c(G)=c(K)+1$. Then combining (\ref{E20})-(\ref{E24}), we have \begin{eqnarray*} \alpha(T_{G})&=&\alpha(G)-\sum_{C \in \mathscr{C}_{G}}\lfloor\frac{|V(C)|}{2}\rfloor+c(G)\\ &=&\alpha(K)+\lfloor\frac{q}{2}\rfloor-\sum_{C \in \mathscr{C}_{G}}\lfloor\frac{|V(C)|}{2}\rfloor+c(G)\\ &=&\alpha(K)-\sum_{C \in \mathscr{C}_{K}}\lfloor\frac{|V(C)|}{2}\rfloor+c(G)\\ &=&\alpha(K)-\sum_{C \in \mathscr{C}_{K}}\lfloor\frac{|V(C)|}{2}\rfloor+c(K)+1\\ &=&\alpha(T_{K})+1. \end{eqnarray*} Note that \begin{equation} \label{E26} [T_{G}]\cong [T_{K}]. \end{equation} Then, in view of (\ref{E20}) and (\ref{E26}), one has that \begin{eqnarray*} \alpha(T_{G})&=&\alpha(T_{K})+1\\ &=&\alpha([T_{K}])+c(K)+1\\ &=&\alpha([T_{G}])+c(G).\\ \end{eqnarray*} This completes the proof. $\square$ \section*{Acknowledgments} This work was supported by the National Natural Science Foundation of China (No.
11731002), the Fundamental Research Funds for the Central Universities (No. 2016JBZ012) and the 111 Project of China (B16002).
\section{Introduction} \label{Introduction} Sustainable development of cities is a major global challenge as more than half of the world population is living in urban areas. The smart city concept allows optimizing services for urban areas as a result of advances in new technologies ranging from very small devices to big data centers. These technologies can be considered in the context of IoT, where many objects, devices, machines, and data centers are connected. The use of IoT technologies for {\em crowd management} in urban environments is promising for the future of smart cities. IoT technologies can enable many improvements for crowd management, which spans sectors such as transportation services (e.g., operating public transport or directing pedestrian traffic), public safety (e.g., detection of fighting incidents), and tourism (e.g., event management for enhanced visitor experience). For instance, movement behaviors of crowds may indicate situations such as traffic congestion, emergency incidents, and panic situations during certain events such as large gatherings in city squares. \begin{figure} \centering \includegraphics[clip, trim={3.1cm 1cm 4.5cm 1cm}, width=0.9\columnwidth]{figures/IoTPF.pdf} \caption{Federated and interoperable IoT platform supporting crowd management stakeholders of smart cities.} \label{Fig:IoTPF} \end{figure} While cities aim to achieve smart urban services, new challenges arise due to the limitations and deficiencies of the current systems and technologies in terms of \emph{scalability} of the connected systems, \emph{information transparency} between different systems (i.e., semantic interoperability) or stakeholders, \emph{data federation}, and \emph{information privacy}. When mobility information must be shared across multiple stakeholders, a proprietary infrastructure cannot fulfill all the different requirements that they impose.
For example, some of the stakeholders expect a real-time mobility monitoring service for event detection, while others require historical mobility data analytics to analyze the efficiency of services in different urban environments (e.g., train station, stadium, city square). It is very difficult to redesign a one-size-fits-all IoT system when new requirements arise for various environments or different time periods. A better solution is to provide ``a system of systems'' in which a new service can be easily developed or set up to handle any new requirement by leveraging existing technologies and infrastructure. To make such a system of systems useful, semantic models based on an appropriate ontology are needed for transparently exchanging data and analytics results, and for sharing new insights from different crowd management applications. The federation of data, results, and learned insights is the key technical enabler to understand the crowd mobility behaviors in a smart city. Finally, privacy preservation is a problem of utmost importance for smart cities. While various data from the vast deployment of sensors travel through the IoT systems, preserving privacy at a level closer to the data contributors (providers) is an important challenge. This article describes the recent advances in IoT for understanding crowd mobility in smart cities. The {\em federated and interoperable semantic IoT} (FIESTA-IoT) platform for smart cities is introduced from the specific perspective of crowd management applications. Fig.~\ref{Fig:IoTPF} illustrates the outlook of the smart city applications leveraging the smart city platform for sharing information across various stakeholders. While the platform is currently in use for several smart city testbeds, the article focuses on two IoT systems for crowd mobility, namely {\em Crowd Mobility Analytics System} (CMAS) and {\em Crowd Counting and Location System} (CCLS), and discusses the aspects related to the aforementioned limitations.
Two pilot studies are conducted in Gold Coast, Australia and Santander, Spain, where various sensors are deployed in urban areas. The first pilot study uses CMAS in Gold Coast for a medium-scale smart city deployment. The requirements of the pilot include analyzing heavy or light pedestrian traffic at streets with or without vehicles. The second pilot study uses CCLS in an indoor market in Santander. The requirements include detecting people (crowd size) and locating their positions in public buildings of a city and other critical infrastructures. In both pilots, data anonymisation limits the tracking of devices over long time periods. On the other hand, online and offline analytics information needs to be shared across various stakeholders such as city councils and visualized in several interfaces using IoT technologies and infrastructure to provide insights for crowd management in smart cities. \section{Crowd Mobility Analytics using the Smart City Platform} \label{Systems} \subsection{Federated and Interoperable IoT Platform} Smart city data is often gathered by solutions where dedicated networks of sensors or data sources produce observations to be consumed by specific applications. The systems usually differ from each other, serving distinct purposes, and they are mostly not interoperable~\cite{zanella2014smartcities,yannuzzi2017fog}. In this regard, creating crowd management services that harness the abundant data from a smart city (e.g., environmental data, road traffic information) would require either ad-hoc integration or the creation of new systems. This situation raises the requirement of an integrated ``system of systems'' or ``container of systems''.
\begin{figure} \centering \includegraphics[clip, trim={2.8cm 0cm 4cm 0cm},width=0.96\columnwidth]{figures/architecture.pdf} \caption{Crowd mobility-based instantiation of the federated and interoperable IoT platform.} \label{Fig:Architecture} \end{figure} To overcome this challenge, we propose a crowd mobility-based instantiation of the FIESTA-IoT platform~\cite{Lanza2018} and provide semantic interoperability from IoT deployments to the services (shown in Figure \ref{Fig:Architecture}). The heterogeneous IoT deployments on the \emph{IoT Devices and Systems} (bottom layer) are integrated to the Cloud and data is anonymised with salting and hashing. In this layer, in addition to the two crowd mobility systems (CMAS and CCLS), IoT Cloud Data from external platforms can be connected. Furthermore, there exist other FIESTA-IoT systems that can be leveraged. Currently, more than 5000 sensors (from 11 integrated testbeds~\cite{Sanchez2018}) report environmental data (e.g., temperature, humidity, illuminance, noise level), road traffic information (e.g., vehicle speed, traffic intensity), car and bike parking spots, estimated arrival times of buses, and smart building information (e.g., human occupancy, power consumption). At the \emph{Federated Cloud Infrastructure} (middle layer), the data from the bottom layer is modelled using the FIESTA-IoT Semantic Model and stored in the Linked-Data Storage. In particular, the semantic model for crowd mobility data is described in Section \ref{ontology}. The data in the Cloud infrastructure is accessible through the Federated Context Management, which exposes NGSI and SPARQL interfaces. Our open source IoT Broker (Aeron Broker) component provides scalable federation for the context management, whereas IoT Discovery (NEConfMan) enables easy registration and discovery of resources with features such as geo-discovery.
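As a hedged illustration of how a client could use the SPARQL interface to retrieve crowd observations, the sketch below merely composes a query string; the namespace URIs, class and property names (loosely following the SSN and M3-lite vocabularies discussed later), and the query shape are assumptions for the example, not the platform's actual API.

```python
# Illustrative only: compose a SPARQL query for people-count observations,
# as could be sent to the Federated Context Management's SPARQL interface.
# Namespace URIs and property names below are assumptions for this sketch.
PREFIXES = (
    "PREFIX ssn: <http://purl.oclc.org/NET/ssnx/ssn#>\n"
    "PREFIX m3-lite: <http://purl.org/iot/vocab/m3-lite#>\n"
)

def count_people_query(limit=100):
    """Select observations whose quantity kind is m3-lite:CountPeopleMoving."""
    return PREFIXES + (
        "SELECT ?obs ?qk WHERE {\n"
        "  ?obs a ssn:Observation ;\n"
        "       ssn:observedProperty ?qk .\n"
        "  ?qk a m3-lite:CountPeopleMoving .\n"
        "}\n"
        f"LIMIT {limit}\n"
    )
```

Such a query would be posted to the federation's SPARQL endpoint; the semantic model that gives meaning to the class names is described in Section \ref{ontology}.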
The crowd management-related IoT data is harnessed by {\em Crowd Management Applications} (top layer), which contain IoT services provided by the platform and crowd mobility applications. These services enhance the crowd mobility data through reasoning by aggregating the semantic data and assessing the situations related to physical objects (i.e., Contextualization Service) at different levels of abstraction, such as building level or street level. Assessment of the situations can be performed through: a) pre-defined thresholds, b) anomaly detection, c) time-series analysis, or d) artificial intelligence. The obtained situations are displayed on the dashboard in Figure \ref{Fig:IoTPF}, named {\em Smart City Magnifier}, which reports alerts regarding traffic status, crowd flows, critical events (e.g., a fire breaking out), and so on. Moreover, crowd mobility applications such as {\em Gold Coast Operation Center} and {\em SmartSantander Maps} receive the results (generated by CMAS, CCLS or other IoT services) from the Cloud and provide visualizations. \subsection{Crowd Mobility Semantic Model} \label{ontology} \begin{figure*}[!t] \centering \includegraphics[clip, trim={0cm 1cm 0cm 0cm}, width=2\columnwidth]{figures/ontology.pdf} \caption{Modeling crowd mobility information based on FIESTA-IoT ontology.} \label{Fig:Ontology} \end{figure*} In order to provide seamless interoperability and information transparency from IoT systems to the crowd management applications, the crowd mobility outcomes are semantically annotated following the FIESTA-IoT ontology~\cite{agarwal2016fiestaontology} as shown in Fig.~\ref{Fig:Ontology} (with a stress on the specific taxonomy of M3-lite for crowd mobility). Rich and complex knowledge is represented with an ontology as things are connected to each other through relationships. Things are not identified as individuals, but as classes of individuals. Moreover, a class might have sub-classes.
For example, peopleCounterX is an instance of the \textit{PeopleCountSensor} class, which is a subclass of \textit{Counter} (see Fig.~\ref{Fig:Ontology}). The classes can be defined and described in taxonomies, and an ontology may use classes from different ontologies or taxonomies. Relationships between classes are known as properties, and it is possible to define properties' cardinality. Each class and property used in an ontology is uniquely identified by a namespace prefix and the class- or property-specific name. For example, \textit{m3-lite:PeopleCountSensor} is a class defined in the M3-lite ontology. For the sake of readability, in this paragraph we omit the namespace prefixes, while they are shown with prefixes in Fig.~\ref{Fig:Ontology}. The core concept is the \emph{SensingDevice}, representing a sensor that produces an \emph{Observation}, which is a measurement (or computation) of a phenomenon related to an object occurring at a specific \emph{Instant}. For example, a crowd mobility detector can be seen as a \emph{Device} composed of multiple \emph{SensingDevices}. In this sense, such a detector can have one \emph{PeopleFlowCountSensor} and one \emph{StayingPeopleCountSensor}, which are subclasses of \emph{PeopleCountSensor}. Each \emph{Observation} is expressed with a \emph{QuantityKind} having a \emph{Unit}. Following our example, the \emph{QuantityKind} associated to the data generated by the \emph{PeopleFlowCountSensor} is \emph{CountPeopleMoving} (subclass of \emph{QuantityKind}) with \emph{Item} as its \emph{Unit} and with the \emph{Direction} property expressed either in geodetic \emph{DirectionAzimuth} or as a generic \emph{DirectionHeading}. The directions start from the \emph{Point} that is the \emph{location} of the physical \emph{Platform}. A \emph{Platform} is meant as the supporting dock to which the \emph{Device} is attached. The \emph{StayingPeopleCountSensor} generates \emph{CountPeopleStaying} values expressed in \emph{Item}.
The system also consists of a \emph{PeopleStayDurationSensor} that generates \emph{PeopleStayDurationAverage} values measured in \emph{SecondTime}. Each \emph{SensingDevice} might have a \emph{Coverage}, specified either as a \emph{Polygon/Rectangle/Circle} or as a simple \emph{Point}. This indicates the geographic extent of the \emph{Observation}. \subsection{Integrated IoT Systems} \label{CrowdMobilitySystems} \subsubsection{Crowd Mobility Analytics System} \label{CEMASystem} The CMAS (extended from our system in~\cite{Wu-2018-MOBISYS}) is integrated with the platform via semantic annotation of the outcome. The developed system consists of Wi-Fi sniffers, stereoscopic cameras, IoT gateways, and data analytics modules. The Wi-Fi sniffers are capable of capturing wireless probes broadcasted by mobile devices. Based on the captured Wi-Fi probes, the system can count the mobile devices in these sensing areas. The cameras are co-located with specific Wi-Fi sniffers deployed at the dedicated {\em calibration choke points}. A built-in people counting software runs in the cameras. Both Wi-Fi device detection and people counting results are reported to the Cloud, where the data analytics modules reside, through the IoT gateways. Three analytics modules are developed: {\em crowd estimation, people flows, and stay duration}. The {\em crowd estimation} module outputs the number of people by correlating the stereoscopic camera counts and the number of Wi-Fi enabled devices at the calibration points. Based on the correlation between the two data modalities, the calibration of the analytical results is applied to other sensing areas without cameras. The module monitoring {\em people flows} infers crowd movement in these areas. Finally, the {\em stay duration} module estimates the waiting times and the number of waiting people.
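A deliberately simplified numerical sketch of the calibration idea follows (our own illustration with assumed helper names, not the adaptive calibration algorithm of~\cite{Wu-2018-MOBISYS}): at a calibration choke point, the camera count per interval is divided by the number of Wi-Fi devices to obtain a camera/Wi-Fi ratio, which is then applied to areas covered only by Wi-Fi sniffers.

```python
# Simplified sketch of correlation-based crowd estimation: derive a
# camera/Wi-Fi ratio at a choke point, then scale Wi-Fi-only counts by it.

def calibration_ratio(camera_counts, wifi_counts):
    """Average camera/Wi-Fi ratio over intervals with non-zero Wi-Fi counts."""
    ratios = [c / w for c, w in zip(camera_counts, wifi_counts) if w > 0]
    return sum(ratios) / len(ratios)

def estimate_crowd(wifi_count, ratio):
    """Estimated people count for an area covered only by a Wi-Fi sniffer."""
    return round(wifi_count * ratio)
```

For example, camera counts of 30 and 45 people against 20 and 30 detected Wi-Fi devices yield a ratio of 1.5, so an area where 40 devices are sniffed would be estimated to hold about 60 people; the deployed algorithm additionally adapts this ratio over time, as discussed in Section \ref{Insights}.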
All analytics results are exported to the Federated Cloud Infrastructure so that the crowd analytics results are discoverable and available for applications in the smart city platform. \subsubsection{Crowd Counting and Location System} \label{SMSSystem} Different from CMAS, CCLS aims at analysing crowd behaviour in public buildings of a city, as well as critical infrastructures. The system relies on the analysis of IEEE 802.11 frames to discover devices in the surroundings of the deployment, normally within the monitored areas. Similar to CMAS, the deployed nodes capture ``Probe Request'' frames sent by smartphones whose Wi-Fi interface operates in ``active search'' mode, as is the case for most of them. However, CCLS does not only aim at detecting people, but also at locating them. For this, the system stores the RSSI and sequence number from the captured frames. It is possible to locate people by processing this information using RSSI-based algorithms. All the post-processing is performed in an edge server, where all the measurements are sent after the corresponding anonymisation techniques are applied. Once the anonymised raw measurements are analyzed and the counting and location analytics are applied over them (i.e., the estimated crowd size and positions are obtained), these observations are semantically annotated and pushed to the Federated Cloud Infrastructure. For the semantic modelling, each crowd estimator is modelled as a \emph{PeopleCountSensor}, with a specific \emph{Coverage} (representing the area to which the estimations apply), that generates \emph{CountPeople} observations expressed in \emph{Item}. \subsection{Privacy Considerations} One of the essential requirements is dealing with tracked devices' privacy. Nowadays, privacy is one of the major public concerns. In this sense, data protection laws have to be observed when handling data that could be personal.
Quite restrictive rules apply in most countries of the world, with the European Union (EU) countries being among the most restrictive. These rules were recently updated through the enforcement of the General Data Protection Regulation (GDPR)~\cite{regulation2016regulation}. The Wi-Fi sensors in CMAS and CCLS deal with MAC addresses, which are considered personal data under the new EU regulation. As it is stated in the GDPR~\cite{regulation2016regulation}, ``The principles of data protection should apply to any information concerning an identified or identifiable natural person''. Therefore, Wi-Fi-based tracking services in public or private spaces can be performed only if the service obtains the user's opted-in permission, or data is anonymised in such a manner that the user is no longer identifiable, as mentioned in the 26\textsuperscript{th} article of the aforementioned regulation. The Article 29 Working Party, recently replaced by the European Data Protection Board (EDPB), is in charge of analysing compliance with the privacy rules. In a document released to analyze the ePrivacy regulation compliance with the GDPR~\cite{WP247}, the Data Protection Working Party states that Wi-Fi tracking can only be performed either if there is consent or if the personal data is anonymised. Within the same document, four conditions are mentioned for the latter case to be compliant with the GDPR: \begin{itemize} \item The purpose of the data collection from terminal equipment is restricted to mere statistical counting. \item The tracking is limited in time and space to the extent strictly necessary for this purpose. \item The data will be deleted or anonymised immediately afterwards. \item There exist effective opt-out possibilities. \end{itemize} Considering that users' opt-in permission is impossible to obtain under normal conditions within the scope of the experimentation, the only option is to anonymise the MAC address data.
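A minimal Python sketch of such MAC-address anonymisation follows; it mirrors the keyed-hash approach (HMAC with SHA256 and a randomly generated, periodically renewed session key) described below for the pilots, but all parameter choices in the example are illustrative rather than the deployed configuration.

```python
# Illustrative sketch of keyed, non-reversible MAC-address anonymisation.
# Key length and digest here are example choices; destroying the session
# key makes recovering the original MAC addresses infeasible.
import hashlib
import hmac
import os

def new_session_key(length=12):
    """Randomly generated per-session key, to be destroyed and renewed."""
    return os.urandom(length)

def anonymise_mac(mac, key):
    """Keyed pseudonym for a MAC address (hex digest of HMAC-SHA256)."""
    return hmac.new(key, mac.encode(), hashlib.sha256).hexdigest()
```

Within one session the same device maps to the same pseudonym, so counting remains possible, while renewing the key breaks linkability across sessions.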
Thus, security measures must be undertaken in the experimentation to address both data integrity and anonymisation. Therefore, any type of experimentation or service provision must take this concern into account, which is usually underestimated by system developers. CCLS in Santander is based on the Spanish Personal Data Protection Laws and the Spanish Data Protection Agency's recommendations for data anonymisation~\cite{agencia}. The recommendation consists of the use of a cryptographic hash function with randomly generated hash keys. More precisely, the HMAC protocol, which provides such mechanisms, is recommended. In the SmartSantander deployment, we implement the HMAC algorithm along with the SHA256 hashing function, with a 12-byte randomly generated key. Finally, in order to ensure a non-reversible process, this implementation also comprises a procedure to destroy and renew the key during specific session periods. For CMAS in Gold Coast, the hashed and salted Wi-Fi probe data is sent to the Cloud. The stereoscopic cameras do not record video or perform face detection. The cameras simply count the passage of people through predefined lines at the choke points. The outputs of the camera are people count-in and -out values. The main drawback of this procedure is the inability to track devices throughout long periods (as in~\cite{de2013unique}) or longer travels within the city, but it is the price that must be paid to meet the privacy requirements. \section{Pilot Studies in Australia and Spain} \label{PilotDescription} \subsection{Pilot Deployment in Gold Coast} \label{CEMASystemAdvanced} \subsubsection{Pilot Setup} The deployments in Gold Coast include 17 Wi-Fi sensors and 2 stereoscopic cameras. The Wi-Fi sensors are custom-built devices for outdoor deployments. Two cameras are used at the calibration choke points, where a camera and a Wi-Fi sensor are deployed together.
The cameras are the Hella Advanced People Sensor APS-90E, deployed at a height of about 3.6 meters. Each camera is configured to capture the entire choke point for accurate counting. The deployments target two regions. The sensors deployed in these areas are considered as {\em Cluster 1} for (expected) heavy pedestrian traffic and {\em Cluster 2} for light traffic places. Each cluster has a stereoscopic camera for the calibration. The collected data is sent to the Cloud, where two virtual machines are created for the clusters. Clustering the areas allows applying CMAS at city scale by distributing the raw data load. \subsubsection{Pilot Operation} The pilot study activities started in September 2017 and CMAS has been in use starting from November 2017. Various types of pilot tests are conducted in the field during the operation of the pilot. Manual counting is performed using video footage taken from different deployment areas. In comparison to manual counting, the cameras provide an accuracy between 88\% and 98\%, which mainly depends on the weather and lighting conditions. Furthermore, field tests for heavy and light traffic areas resulted in 93\% and 89\% crowd size accuracy compared to manual counting. The results obtained from outside the choke points give further confidence to treat stereoscopic camera results as {\em near ground truth} as proposed in~\cite{Wu-2018-MOBISYS}. The Gold Coast pilot successfully tests the crowd mobility analytics services by leveraging federation of clusters and interoperability using the semantic model to share the results with stakeholders. This shows that similar systems can be developed and leveraged by future crowd management applications using the smart city platform.
\begin{figure} \centering \includegraphics[width=0.9\columnwidth]{figures/heatmap.pdf} \caption{Heatmap from Mercado del Este in Santander.} \label{Fig:MercadoDelEsteHeatmap} \end{figure} \begin{figure*} \begin{framed} \centering \begin{tabular}{cc} \includegraphics[width=0.44\linewidth]{figures/wifi_count.pdf} & \includegraphics[width=0.44\linewidth]{figures/correlation_coefficient.pdf} \\ (a) The number of unique Wi-Fi devices detected (hourly). & (b) Hourly changes of the camera/Wi-Fi ratios. \\ \end{tabular} \end{framed} \caption{Weekly measurements from Gold Coast. Cluster 1: Pedestrian ways with heavy pedestrian traffic, Cluster 2: Roads including vehicles and light pedestrian traffic.} \label{Fig:Correlation_Coefficient} \end{figure*} \subsection{Pilot Deployment in Santander} \label{SmSPilotDescription} \subsubsection{Pilot Setup} CCLS is deployed in the ``Mercado del Este'' market, a restored symmetric building that contains shops, restaurants, a regional tourist office, and a museum. This building is particularly interesting as it usually receives significant numbers of visitors due to its central location, with exceptionally crowded periods. The system is composed of 8 devices installed within the market building. These devices include a Wi-Fi interface aimed at detecting visitors' Wi-Fi-enabled devices in the surroundings. Internet connectivity is provided through the Municipality Network, and the devices are powered using Power over Ethernet connected to the market's electrical grid. In addition to the wireless interfaces, half of the devices also include environmental sensors measuring temperature and humidity. Device deployment is carried out with the collaboration and supervision of the municipality and the market managers. Considering the main goal of monitoring people within the market, two parameters are considered in order to get market status snapshots over time.
First, the number of visitors within the market in different time frames and, second, the location of the visitors in the different areas of the market. \subsubsection{Pilot Operation} Firstly, in order to monitor the visitors within the market, we follow a deterministic approach, in which we consider that a device is inside the building if a minimum of 6 sensor nodes detect it with a certain level of RSSI. In our deployment, this solution is feasible considering the particular symmetric distribution of the building and the location of the sensor nodes, covering the external wall of the building. Secondly, device locations are estimated using the Weighted Centroid Algorithm~\cite{kosovic2014enhanced}, which provides a reasonable approximation within 5 meters of the ground-truth measurements without any ad-hoc calibration. For the cases that require more precision, these positioning methods are able to introduce less than 2 meters of error if the system is calibrated in advance. Synthesized information, including real-time visitor location and the detected number of visitors per unit of time, is provided through a web portal to the market managers and municipality officials. Fig.~\ref{Fig:MercadoDelEsteHeatmap} shows the heat map of the market at a specific moment. Other parameters, such as the visitors' dwell time in different long-term periods, are not analysed due to the privacy safeguards that have to be addressed. \section{City-Scale Experiments} \label{Insights} This section discusses some of the experimental observations from the Gold Coast pilot with CMAS. Specifically, it includes the variance in the crowd estimation for the Wi-Fi sensors and cameras. Our focus in the experimental study is to observe the dynamic changes in the {\em number of unique Wi-Fi devices detected} and the {\em correlation coefficient} (or simply camera/Wi-Fi ratio), which is a dynamic parameter that is computed by the {\em Adaptive Linear Calibration Algorithm}~\cite{Wu-2018-MOBISYS}.
The coefficient basically indicates the proportion of the number of people (count-in and count-out events) detected by the camera to the number of devices detected by the Wi-Fi sensors in every time interval. We analyze the hourly results for the two clusters, where 5-minute time intervals are aggregated and averaged over 1 hour. Figure~\ref{Fig:Correlation_Coefficient}-a shows the average number of Wi-Fi devices detected for a one-week period. There exists an increased activity in the Cluster 1 region, especially during Friday (23/03/2018) and the following weekend. This can be due to crowdedness in the shopping street and the beach area contained in this region. Moreover, there is a peak on Saturday that can be due to an event or gathering. Figure~\ref{Fig:Correlation_Coefficient}-b shows the change of the coefficients (ratios). The ratios are computed at the calibration choke points (providing near-ground truth for the measurements). The hourly ratio is computed by dividing the number of people count-in and count-out events by the number of Wi-Fi probes. First, for Cluster 2 with light traffic, the correlation coefficient is mostly (on almost all days) higher compared to Cluster 1. Second, the correlation coefficient values lie mostly in the range of (0.2, 2), whereas the peak value is about 2.8. This indicates that the results based on Wi-Fi-only measurements are likely to have less accuracy at most times of the day, and the correlation changes throughout the day. Lastly, there exists a certain regularity in the correlation from one day to another, which can be learned through a time period and then applied to other time periods where the camera is temporarily inactive or removed. On the other hand, as seen in the peak hours of Cluster 2, the ratios do not lie within a narrow range. One reason can be events affecting the volume of pedestrians.
Lastly, Fig.~\ref{Fig:Correlation_Coefficient}-b shows a relatively higher variance of the coefficient for areas with light pedestrian traffic. Calibration could be necessary at shorter time intervals. Overall, it is observed that effective use of Wi-Fi sensing and combining it with sensing by stereoscopic cameras produce accurate sensing at large scale for both the heavy and light pedestrian traffic areas. Moreover, the variance between heavy and light traffic shows the usefulness of the clustering approach, which treats these regions separately. \section{Related Work} \label{RelatedWork} There are recent studies that focus on the understanding of human mobility through IoT devices such as wireless sensors. Jara et al.~\cite{jara2015big} observed the relation between traffic behavior and temperature conditions as a smart city application through the deployment of IoT devices in Santander. Tong et al.~\cite{tong2017modeling} propose the use of Wi-Fi sensors to understand passenger flows. Evaluation through simulation results shows high accuracy. Zhao et al.~\cite{zhao2016urban} survey the recent advances in understanding human mobility in urban environments. The study lists some of the existing urban human mobility datasets collected, such as GPS, GSM, Wi-Fi, and Bluetooth traces. Similarly, Zhou et al.~\cite{Zhou-CommMag2018} discuss the topic of human mobility in urban environments and present a taxonomy of crowdsensed input data types and application outcomes such as crowd density and flows within buildings, and people transportation mode identification (cycling, running, bus riding). Lastly, De Montjoye et al.~\cite{de2013unique} focus on the privacy aspect by analyzing long-period Wi-Fi traces and show that 95\% of the individuals can be uniquely identified using spatiotemporal datasets. \section{Future Work and Challenges} The current work focuses on finding insights behind crowd mobility such as detecting crowdedness.
However, understanding more complex crowd mobility behaviour in a large-scale city area, such as movements of groups (e.g., families), could be helpful for crowd management and enhancing smart mobility in the cities. The collected mobility information can serve as input to human mobility simulations to further study how city dynamics are affected by crowd mobility patterns. With the combination of real mobility datasets in a simulated environment, learning new mobility insights opens up opportunities for new crowd management strategies (e.g., congestion avoidance, evacuation planning, demand management) that can further improve the public service and safety in smart cities. In our future developments, the semantic interoperability through ontologies can be leveraged more extensively for cross-infrastructure communication and knowledge sharing. The new advancements of the NGSI protocol by the ETSI Industry Specification Group (ISG) on Context Information Management (CIM) are centered around the concepts of linked data. This opens a new horizon where knowledge graphs are shared among various infrastructures and, while their administrators own the produced data, they remain accessible seamlessly and transparently to all actors in the multi-infrastructure federation. \section{Conclusions} \label{Conclusion} This article discusses the new advancements towards understanding crowd mobility in smart cities using IoT. While there exist certain limitations, the CMAS and CCLS systems using the smart city platform offer improvements for more efficient crowd management. The pilot studies in Gold Coast and Santander show the capability to fulfill various requirements and share information across stakeholders by leveraging the IoT technologies and infrastructure. \parpic{\includegraphics[width=0.2\linewidth,clip,keepaspectratio]{figures/EU-Project-Banner}} \noindent \textbf{Acknowledgment:} The pilot study in Gold Coast is conducted in collaboration with NEC Australia.
This work has been partially funded by the Spanish Government (MINECO) under Grant Agreement No. TEC2015-71329-C2-1-R ADVICE (Dynamic Provisioning of Connectivity in High Density 5G Wireless Scenarios) project and by the EU Horizon 2020 Programme under Grant Agreements No. 731993 AUTOPILOT (Automated Driving Progressed by Internet Of Things), 643943 FIESTA-IoT (Federated Interoperable Semantic IoT Testbeds and Applications), and 643275 FESTIVAL (Federated Interoperable Smart ICT Services Development and Testing Platforms) projects and the joint project by NEC Laboratories Europe and Technische Universit\"at Dortmund. The content of this paper does not reflect the official opinion of the Spanish Government or European Union. Responsibility for the information and views expressed therein lies entirely with the authors. \bibliographystyle{IEEEtran}
\section{Introduction} Quantum random walk or quantum walk (QW) in its original form is simply the dynamics of a quantum particle that has a spin-1/2-like internal degree of freedom in addition to its position and momentum \cite{first}. Being a natural quantum version of the classical random walk that appears in statistics, computer science, finance, physics, chemistry and biology, it has been a topic of fundamental interest \cite{review}. Moreover, QW research now enjoys broader interest due to its widespread applications in the areas of quantum algorithms \cite{kempe}, quantum computing \cite{q_computing}, quantum biology \cite{q_biology}, and quantum simulation \cite{q_simulation}.\\ The dynamics of a quantum walker is usually controlled by two unitary operators: a rotation operator \(\hat{C}\) (called a ``quantum coin'') and a shift operator \(\hat{S}\). The coin operator acts on the walker's internal degree of freedom, generally leaving it in a superposition of spin up and spin down. The shift operator then shifts the position according to the walker's internal degree of freedom. Hence, the internal and external degrees of freedom become entangled. Successive applications of the two operators (\(\hat{C}\) \& \(\hat{S}\)) generate the discrete time evolution of the walker. This is what we call the one-dimensional discrete-time QW. One major advantage of the QW over the classical random walk is that a quantum walker spreads over the line linearly in time (standard deviation \(\sigma\sim t\)), while the classical random walk spreads in a slower fashion (\(\sigma\sim t^{1/2}\)).\\ A QW with multiple particles contains quantum resources like multi-particle quantum correlations and multi-partite entanglement which have no classical analogue. Moreover, in the case of identical particles, quantum statistics gives an additional feature to QWs that can also be exploited. In 2006, Omar et al. first extended the idea of the single-particle QW to the case of two particles \cite{Omar}.
They showed that a QW with two particles can indeed behave very differently from two independent single-particle QWs even in the absence of any inter-particle interactions \cite{Omar}. In particular, the probability to find at least one particle in a certain position after some steps of the walk, as well as the average distance between the two particles, was shown to be larger or smaller than in the case of two unentangled particles, depending on the initial conditions \cite{Omar}. Thereafter, the topic of two-particle QWs has attracted significant attention. Berry and Wang considered simple interaction schemes between two particles and showed that the interactions lead to a diverse range of probability distributions that depend on correlations and relative phases between the initial coin states of the two particles \cite{Berry}. They also showed that two interacting walkers can be used to distinguish all nonisomorphic strongly regular graphs \cite{Berry}. Stefanak et al. showed that the directional correlations between two interacting particles can exceed the limits for non-interacting particles \cite{Stefanak2}. Shu et al. studied the effect of coin parameters on two-particle QWs for different initial states and showed that the coin parameters can be used to tune the entanglement between the particles \cite{Shu}. Pathak and Agarwal reported the QWs of two photons in separable and entangled states \cite{Pathak}. In recent years, different experimental implementations of two-particle QWs have been reported \cite{expt1,expt2,expt3}.\\ QW research, including the above-mentioned works, has mainly focused on evolutions due to repeated applications of time-independent unitary coin operators, whereas time-dependent coins have attracted much less attention. However, works on single-particle QWs with time-dependent unitary coins have found a rich array of phenomena \cite{19,coin2,coin1,interferometer}.
In those works, the time dependences were introduced either by choosing coin parameters having explicit time dependence or by selecting one coin at every step from a deterministic aperiodic sequence of two coins. Ba{\~n}uls et al. first prescribed a coin operator with explicit time dependence in ref. \cite{coin2}. They showed that the operator generates dynamical localization and quasiperiodic dynamics. Such fascinating behavior was also realized separately in a QW with a time-independent coin and position-dependent phases at every step \cite{15,16}. Ba{\~n}uls et al. also showed that the time-dependent coin can be used as a control mechanism to compensate for the phases arising from some external influence \cite{coin2}. A different type of explicit time dependence was introduced by Romanelli, who actually generalized the discrete-time QW on the line using a time-dependent unitary coin operator \cite{coin1}. He showed that the time-dependent coin allows the particle to exhibit a variety of predetermined asymptotic wave-function spreadings: ballistic, sub-ballistic, diffusive, sub-diffusive and localized. These coherent intermediate situations might be useful for controlling quantum information and for the development of quantum algorithms \cite{coin1}. In recent experiments, Broome et al. simulated another different type of time-dependent coin control by setting different coin parameters for different steps, which were effected in different locations along the longitudinal axis within their photonic beam-displacer interferometer \cite{interferometer}. The linearly-ramped time-dependent coin operation generated two periodic revivals of the walker distribution. On the other hand, a QW where the coin at every step is obtained from a deterministic aperiodic sequence of two coins was first introduced by Ribeiro et al. in ref. \cite{19}. This type of time dependence generates different types of wave function spreadings, e.g., sub-ballistic or diffusive,
depending on the nature of the aperiodic sequences \cite{19,191}.\\ The above-described studies have shown that time-dependent coins open up a rich array of phenomena. However, such studies have been performed only on single-particle systems; two-particle systems are yet to be explored. This has been our primary motivation for the present work.\\ We also intend to generalize here the dynamical behavior of two quantum walkers. Generalizing two-particle QWs is in itself a topic of interest, as multi-walker systems are expected to generate measurable phenomena not described by the single-particle model, due to entanglement and interaction among the walkers. For example, non-trivial effects were found in the case of a two-particle QW on a disordered lattice \cite{loc}. It is quite difficult to predict such effects from our knowledge of the related single-particle models. Since time-dependent coins allow both particles to exhibit a variety of dynamic behaviors, we can generalize the two-particle QW evolution using time-dependent coins. The advantage of using a time-dependent coin in studying various wave function spreadings is that the modification has to be done only on the coin, whereas with time-independent coins the modifications are required to be performed on a much larger part of the system. For example, to generate dynamic localization without time-dependent coins, different position-dependent phases are required to be introduced at every step \cite{15,16}. \\ The time-dependent coins also allow us to numerically study the collective dynamics of two quantum walkers of different natures. Such studies can be instrumental for developing a deeper understanding of two-particle QWs. For example, using time-dependent coins we can study the dynamics of two particles where one of them has a predetermined ballistic wave function spreading whereas the other one has a predetermined localized wave function spreading. Interesting dynamical correlations can be found in those cases.
Such a study cannot be performed directly using time-independent coins in the presence (absence) of disorder, as both particles will then perform localized (ballistic) evolution. \\ We present here a thorough numerical simulation study of a one-dimensional system of two quantum walkers exhibiting rich collective dynamics controlled by the simple time-dependent unitary coins proposed by Romanelli \cite{coin1} and Ba{\~n}uls et al. \cite{coin2}. We investigate how the interplay of time dependence, simple interaction schemes, entanglement and the relative phase between the coin states of the particles influences the evolution of the QW. We demonstrate and characterize the wide spectrum of tunable dynamical behaviors offered by the two-particle QW evolving under the influences of quantum coins having explicit time dependence.\\ The paper is organized as follows. In section \ref{two}, we describe the formalisms of single- and two-particle QWs. The \(\mathbb{1}\) and \(\pi\)-phase interaction schemes considered here are described in section \ref{three}. Section \ref{four} has a description of the time-dependent coins used in the present work. The formalism of the two-particle QW with time-dependent coins is described in section \ref{five}. Section \ref{six} describes the observables. All the numerical results of our study are presented in section \ref{seven}. In section \ref{eight}, we draw the conclusions and present future pathways. \section{ Standard single particle and two particle QW \label{two}} \subsection{Single particle QW} The relevant degrees of freedom for a single-particle discrete-time QW on a line are the particle’s position \(x\) (with \(x \in \mathbb{Z}\)) on the line, as well as its coin state.
The total Hilbert space is given by \(H_{Total} = H_{P}\otimes H_{C}\), where \(H_{P}\) is spanned by the orthonormal position vectors \(\{|x\rangle\}\) and \(H_{C}\) is the two-dimensional coin space spanned by two orthonormal vectors which we denote as \(|\uparrow\rangle\) and \(|\downarrow\rangle\). Each step of the QW consists of two subsequent operations: the coin operation and the shift operation. The coin operation, given by \(\hat{C}\) and acting only on \(H_{C}\), allows for superpositions of different alternatives, leading to different moves. This operation is the quantum equivalent of randomly choosing which way the particle will move in the case of the classical random walk. Then, the shift operation \(\hat{S}\) moves the particle according to the current coin state, transferring in this way the quantum superposition to the total state in \(H_{Total}\). The evolution of the system at each step of the walk can then be described by the total unitary operator\\ \begin{equation} \hat{U}\equiv \hat{S}(\hat{I}\otimes\hat{C}) \end{equation} where \(\hat{I}\) is the identity operator acting on \({H_{P}}\). A popular choice for \(\hat{C}\) is the Hadamard operator \(\hat{C}_{H}\): \begin{center} \begin{equation} \hat{C}_{H}=\frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\[0.3em] 1 & -1 \end{pmatrix} \end{equation} \end{center} The shift operator is given by \begin{equation} \hat{S} = ( \sum\limits_{x} |x+1\rangle \langle x|)\otimes |\uparrow\rangle\langle\uparrow| + ( \sum\limits_{x} |x-1\rangle \langle x|)\otimes |\downarrow\rangle\langle\downarrow| \end{equation} \subsection{Two particle QW} A two-particle QW takes place in the Hilbert space \(H = H_{1} \otimes H_{2}\), where \(H_{i} = (H_{P}\otimes H_{C} )_i\) (\(i=1,2\)).
Let \(|x,\alpha ; y,\beta \rangle= |x,\alpha\rangle_1 \otimes |y,\beta\rangle_2\) be a two-particle basis state, where \(x,y\) represent the positions of the two particles on the same axis and \(\alpha,\beta \in \{\uparrow, \downarrow\}\) represent their respective coin states. The time-evolution operator is defined as \(\hat{U} = \hat{S}(\hat{I} \otimes \hat{C})\), where \(\hat{S}\) is defined in the two-particle basis by \begin{center}\begin{equation} \begin{matrix} \hat{S}=\sum\limits_{x,y}\big(|x+1, \uparrow ; y+1, \uparrow\rangle \langle x, \uparrow; y, \uparrow| \\[0.3em] +|x+1,\uparrow;y-1,\downarrow\rangle\langle x,\uparrow;y,\downarrow| \\[0.3em] +|x-1,\downarrow;y+1,\uparrow\rangle\langle x,\downarrow;y,\uparrow| \\[0.3em] +|x-1,\downarrow;y-1,\downarrow\rangle\langle x,\downarrow;y,\downarrow|\big) \end{matrix}\end{equation} \end{center} The coin operator can be represented as a \(4 \times 4\) matrix \(\hat{C}_{1,2} = \hat{C}_{1} \otimes \hat{C}_{2}\), where \(\hat{C}_{1}\) and \(\hat{C}_{2}\) act on the two different particles. For example, if \(\hat{C}_{1}\) and \(\hat{C}_{2}\) are both equal to the standard \(2 \times 2\) Hadamard matrix \(\hat{C}_{H}\) then \(\hat{C}_{1,2}\) acts on the coin Hilbert space as\\ \begin{center} \begin{equation} \hat{C}_{1,2}\begin{pmatrix} a_{\uparrow\uparrow} \\[0.3em] a_{\downarrow\uparrow} \\[0.3em] a_{\uparrow\downarrow} \\[0.3em] a_{\downarrow\downarrow} \\[0.3em] \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 1 & 1 & 1 & 1 \\[0.3em] 1 & -1 & 1 & -1 \\[0.3em] 1 & 1 & -1 & -1 \\[0.3em] 1 & -1 & -1 & 1 \\[0.3em] \end{pmatrix}\begin{pmatrix} a_{\uparrow\uparrow} \\[0.3em] a_{\downarrow\uparrow} \\[0.3em] a_{\uparrow\downarrow} \\[0.3em] a_{\downarrow\downarrow} \\[0.3em] \end{pmatrix} \end{equation} \end{center} where \(a_{\alpha\beta}= \langle x,\alpha ; y,\beta|\psi\rangle\). Here \(|\psi\rangle\) represents the current state of the system.
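As a quick numerical check (a minimal NumPy sketch added for illustration; the variable names are ours), the Kronecker product of two Hadamard coins reproduces the \(4 \times 4\) two-particle coin matrix given above:

```python
import numpy as np

# Hadamard coin C_H
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

# Two-particle coin C_{1,2} = C_1 (x) C_2 as a Kronecker product
C12 = np.kron(H, H)

# It reproduces the 4x4 matrix acting on the coin amplitudes
M = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]]) / 2
print(np.allclose(C12, M))                   # True
print(np.allclose(C12 @ C12.T, np.eye(4)))   # unitary: True
```

The same `np.kron` construction extends directly to two different single-particle coins.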
The two-particle probability distribution, \(P(x,y,t)\), is the probability of finding particle \(1\) at position \(x\) and particle \(2\) at position \(y\) after \(t\) steps of the two-particle QW, i.e., \(P(x,y,t)=\sum\limits_{\alpha,\beta=\uparrow,\downarrow}|\langle x,\alpha ; y,\beta|(\hat{U})^t |\psi_0 \rangle|^2 \), where \(|\psi_{0}\rangle\) is the initial state of the system. The evolution of the system crucially depends on the choice of the initial states \cite{Omar}.\\ Here, we study the QW evolutions starting from three different initial states. One is a separable product state \(|Sep\rangle\) formed from two particles in unbiased states, i.e., \( |Sep\rangle = \frac{1}{2} (|0, \uparrow\rangle_{1} + i|0, \downarrow\rangle_{1} ) \otimes (|0, \uparrow\rangle_{2} + i|0, \downarrow\rangle_{2} ) = \frac{1}{2} (|0, \uparrow ; 0, \uparrow\rangle + i|0, \uparrow ; 0, \downarrow\rangle+ i|0, \downarrow ; 0, \uparrow\rangle - |0, \downarrow ; 0, \downarrow\rangle ). \) The other two are the two Bell states \(|\psi^{+}\rangle\), \(|\psi^{-}\rangle\) in which the coin states of the two particles are maximally entangled: \(|\psi^{+}\rangle= \frac{1}{\sqrt{2}} (|0, \uparrow ; 0, \downarrow\rangle + |0, \downarrow ; 0, \uparrow\rangle ),\) \(|\psi^{-}\rangle= \frac{1}{\sqrt{2}} (|0, \uparrow ; 0, \downarrow\rangle - |0, \downarrow ; 0, \uparrow\rangle )\). These two entangled states differ by a relative phase, which creates the differences in the resultant behaviors.\\ \section{\(\mathbb{1}\) and \(\pi\)-phase interaction schemes \label{three}} \begin{figure*}[th] \begin{center} \subfigure[\label{fig:110a}]{\includegraphics[scale=0.70]{sigma_1.pdf}}\hspace*{-.15cm} \subfigure[\label{fig:110b}]{\includegraphics[scale=0.70]{sigma_2.pdf}} \end{center} \caption{(a) The standard deviation ${\protect\sigma }(n)$ for coin \(C_{\alpha}(t)\) as a function of the dimensionless time $n$ in log-log scales.
The values of $\protect\alpha$ are as follows: $0$ (top), $0.25$, $0.50$ (middle), $0.75$, and $1.25$ (bottom). (b) The standard deviation ${\protect\sigma }(n)$ for coin \(C_{\Phi}(t)\) as a function of the dimensionless time $n$ for different choices of the two parameters \(p\) and \(q\), as described inside the figure.} \label{f1} \end{figure*} For non-interacting walks, the quantum coin \(C\) is taken to be identical for all two-particle position states. The situation becomes more interesting when the two particles interact with each other. Even simple interaction schemes can generate quite different behavior compared to the non-interacting case. Berry and Wang introduced two simple interaction schemes, which are known as the \(\mathbb{1}\) interaction and the \(\pi\)-phase interaction \cite{Berry}. For two-particle quantum walks, the \(\mathbb{1}\) interaction is implemented by substituting the standard coin operator with the negative identity operator when both particles are in the same position state. For example, in the two-particle QW on the line with the Hadamard coin \(\hat{C}_{H}\), the coin operator for the states \(\{|x,\alpha; y,\beta \rangle\}\) becomes \(\hat{C}=\hat{C}_{H} \otimes \hat{C}_{H} \) when \(x\neq y\) and \(\hat{C} = -\mathbb{1} \otimes - \mathbb{1} =\mathbb{1}\) when \(x=y\). This interaction was introduced by analogy with the QW-based search procedure described in \cite{Shenvi}, in which a quantum oracle was implemented as a substitution of the Grover coin operator at the ``marked" vertices.
In some sense, the \(\mathbb{1}\)-interacting two-particle walk is equivalent to the search procedure with all doubly occupied vertex states being ``marked."\\ In the case of the \(\pi\)-phase interacting two-particle QW on the line with the Hadamard coin \(\hat{C}_{H}\), the coin operator becomes \(\hat{C}=\hat{C}_{H} \otimes \hat{C}_{H} \) when \(x\neq y\) and \(\hat{C} = e^{i\pi} \hat{C}_{H} \otimes \hat{C}_{H}\) when \(x=y\) \cite{Berry}.\\ Neither of these simple interactions is intended to represent a particular physical situation. However, they have been considered in most studies on interacting two-particle QWs, as they allow us to examine, in a simple way, the characteristics of the two-particle quantum walks that are affected by explicit spatial interactions between the particles \cite{topological,another,Sun,Rodriguez}. \section{ Time-dependent quantum coins\label{coins} \label{four}} In general, an arbitrary time-independent coin operator can be written as \cite{book} \begin{equation} \hat{C}=\left(\begin{array}{cc} \cos \theta & e^{-i\phi_{1}}\sin \theta\\ e^{i\phi_{2}}\sin \theta & -e^{i(\phi_{1}+\phi_{2})}\cos \theta\end{array}\right).\label{Cgral}\end{equation} For \(\phi_{1}=\phi_{2}=0\), the above form reduces to \begin{equation} \hat{C}=\left(\begin{array}{cc} \cos \theta & \sin \theta\\ \sin \theta & -\cos \theta\end{array}\right).\end{equation} Romanelli and Ba{\~n}uls et al. separately considered the idea of a modified QW, where the coin elements change with time during the evolution.\\ Romanelli prescribed and studied in ref.
\cite{coin1} a deterministic angular time dependence $\theta =\theta (t)$ for the coin operator, considering the case where \begin{equation}\label{eq:8} \hat{C}={\hat{C}_{\alpha}(t)}=\left(\begin{array}{cc} \cos \theta(t) & \sin \theta(t)\\ \sin \theta(t) & -\cos \theta(t)\end{array}\right)\end{equation} with \begin{equation}\label{eq:9} \cos \theta (t)=\frac{1}{\sqrt{2}}\left( \frac{\tau }{t+\tau }\right) ^{\alpha } \end{equation} He considered $\alpha \geq 0$ and defined the discrete dimensionless time as $t=(n-1)\tau $, where \(n\) is the number of time steps and \(\tau\) is the unit of time \cite{coin1}. He found five different types of asymptotic behaviors depending on the value of the parameter \(\alpha\):\\ \hspace*{2cm}a) ballistic for $\alpha =0$,\\ \hspace*{2cm}b) sub-ballistic for $0<\alpha <0.5$,\\ \hspace*{2cm}c) diffusive for $\alpha =0.5$,\\ \hspace*{2cm}d) sub-diffusive for $0.5<\alpha\leq 1$,\\ \hspace*{2cm}e) localized for $\alpha >1$. \\ We have shown the variations of the standard deviation \(\sigma\) against time for the single-particle QW under the influence of \(\hat{C}_{\alpha}(t)\) for five different values of \(\alpha\) in Fig. \ref{fig:110a}. It can be seen that the slope of the curves gradually decreases with increasing values of \(\alpha\).\\ On the other hand, Ba{\~n}uls et al.
\cite{coin2} prescribed and studied the effect of a time-dependent coin of the following special form \begin{equation}\label{eq:10} \hat{C}=\hat{C}_{\Phi}(t)=\frac{1}{\sqrt{2}}\left(\begin{array}{cc} e^{-i\Phi(t)} & e^{-i\Phi(t)}\\ e^{i\Phi(t)} & -e^{i\Phi(t)}\end{array}\right).\end{equation} Notice that the above coin can be obtained as the sequence of two operations, i.e., \begin{equation} \hat{C}_{\Phi}(t)=\hat{C}_{0}(t)\hat{C}_{H}\end{equation} with \begin{equation} \hat{C}_{0}(t)=\left(\begin{array}{cc} e^{-i\Phi(t)} & 0\\ 0 & e^{i\Phi(t)}\end{array}\right).\end{equation} Here \(\hat{C}_{H}\) is the time-independent Hadamard coin and $\Phi(t)$ is a general function. Ba{\~n}uls et al. studied a particularly interesting case where \(\Phi(t) = \Phi_{0} t \). For rational values of \(\Phi_{0}/2\pi\), dynamical localization around the origin was observed during a transient period. The standard deviation \(\sigma\) oscillates periodically with time during this period \cite{coin2}. After long enough times, ballistic diffusion starts \cite{coin2}. The duration of the transient regime depends on the coin parameters. For irrational values of \(\Phi_{0}/2\pi\), the long-time diffusion gets suppressed and the QW shows dynamical localization around the origin for arbitrarily long times \cite{coin2}.\\ Ba{\~n}uls et al. interpreted the dynamical localization as a propagating solution in the dispersive medium with null mean value of its group velocity \cite{coin2}. They considered \( \Phi_{0}=2\pi \frac{q}{p} \), which depends on the two parameters \(q\) and \(p\). The parameter \(p\) was found to control the period of the primary oscillations, whereas the period of the secondary oscillations was controlled by both parameters \(p\) and \(q\). Ba{\~n}uls et al. numerically showed oscillations of \(\sigma\) with quasiperiod `\(p\)' by considering a rational value of \(q/p\).
The secondary oscillations were more pronounced for smaller \(q\) and larger quasiperiod \cite{coin2}.\\ We have shown the variations of \(\sigma\) against time for a single quantum walker evolving under the influence of \(\hat{C}_{\Phi}(t)\) for three different (\(q,p\)) combinations in Fig. \ref{fig:110b}. It can be seen that the standard deviation \(\sigma\) oscillates with time period `\(p\)'. For \(q=3\), it can be seen that there are secondary oscillations of period \(\frac{p}{q}\).\\ \section{ Different combinations of the time-dependent quantum coins \label{five}} We have studied the dynamics of the two-particle QW for the following combinations of the two time-dependent coins: \subsection{Combination-I} Both particles are driven by the time-dependent coin introduced in equations \ref{eq:8} and \ref{eq:9}, \begin{equation} \hat{C}_{\alpha}=\left(\begin{array}{cc} \cos \theta & \sin \theta\\ \sin \theta & -\cos \theta\end{array}\right),\end{equation} where \begin{equation} \cos \theta (t)=\frac{1}{\sqrt{2}}\left( \frac{\tau }{t+\tau }\right) ^{\alpha }. \label{cos} \end{equation} Here the related \(4 \times 4\) coin matrix is written as \(\hat{C}_{\alpha,\alpha}= \hat{C}_{\alpha}\otimes \hat{C}_{\alpha}\) \begin{center} \begin{equation} =\begin{pmatrix} \cos^{2}\theta(t) & \frac{\sin 2\theta(t)}{2} & \frac{\sin 2\theta(t)}{2} & \sin^{2}\theta(t) \\[0.3em] \frac{\sin 2\theta(t)}{2} & -\cos^{2}\theta(t) & \sin^{2}\theta(t) & -\frac{\sin 2\theta(t)}{2} \\[0.3em] \frac{\sin 2\theta(t)}{2} & \sin^{2}\theta(t) & -\cos^{2}\theta(t) & -\frac{\sin 2\theta(t)}{2} \\[0.3em] \sin^{2}\theta(t) & -\frac{\sin 2\theta(t)}{2} & -\frac{\sin 2\theta(t)}{2} & \cos^{2}\theta(t) \\[0.3em] \end{pmatrix}\end{equation}\\ \end{center} \subsection{Combination-II} The particles are driven by two coins with different \(\alpha\) parameters, i.e., the related \(4 \times 4\) coin matrix is written as \begin{center} \(\hat{C}_{\alpha_{1},\alpha_{2}}=
\hat{C}_{\alpha_{1}} \otimes \hat{C}_{\alpha_{2}}\)\\ \begin{equation}= \begin{pmatrix} \cos\theta_{1}\cos\theta_{2} & \cos\theta_{1}\sin\theta_{2} & \sin\theta_{1} \cos\theta_{2} & \sin\theta_{1} \sin\theta_{2} \\[0.3em] \cos\theta_{1}\sin\theta_{2} & -\cos\theta_{1}\cos\theta_{2} & \sin\theta_{1} \sin\theta_{2} & -\sin\theta_{1}\cos\theta_{2} \\[0.3em] \sin\theta_{1}\cos\theta_{2} & \sin\theta_{1} \sin\theta_{2} & -\cos\theta_{1} \cos\theta_{2} & -\cos\theta_{1} \sin\theta_{2} \\[0.3em] \sin\theta_{1}\sin\theta_{2} & -\sin\theta_{1} \cos\theta_{2} & -\cos\theta_{1} \sin\theta_{2} & \cos\theta_{1} \cos\theta_{2} \\[0.3em] \end{pmatrix}\end{equation}\\ \end{center} \begin{equation} \mbox{where } \cos \theta_{1}=\frac{1}{\sqrt{2}}\left( \frac{\tau }{t+\tau }\right) ^{\alpha_{1} } \end{equation} \begin{equation} \mbox{and } \cos \theta_{2}=\frac{1}{\sqrt{2}}\left( \frac{\tau }{t+\tau }\right) ^{\alpha_{2} } \end{equation} \subsection{Combination-III} Both particles are driven by the time-dependent coin introduced in equation \ref{eq:10}:\\ \begin{equation} \hat{C}_{\Phi}(t)=\left(\begin{array}{cc} \sqrt{\frac{1}{2}}e^{-i\Phi(t)} & \sqrt{\frac{1}{2}}e^{-i\Phi(t)}\\ \sqrt{\frac{1}{2}}e^{i\Phi(t)} & -\sqrt{\frac{1}{2}}e^{i\Phi(t)}\end{array}\right)\end{equation} Therefore the related \(4 \times 4\) coin matrix is written as \begin{center} \(\hat{C}_{\Phi,\Phi}= \hat{C}_{\Phi} \otimes \hat{C}_{\Phi}\)\\ \begin{equation} = \frac{1}{2}\begin{pmatrix} e^{-2i\Phi(t)} & e^{-2i\Phi(t)} & e^{-2i\Phi(t)} & e^{-2i\Phi(t)} \\[0.3em] 1 & -1 & 1 & -1 \\[0.3em] 1 & 1 & -1 & -1 \\[0.3em] e^{2i\Phi(t)} & -e^{2i\Phi(t)} & -e^{2i\Phi(t)} & e^{2i\Phi(t)} \\[0.3em] \end{pmatrix}\end{equation} \end{center} We consider \(\Phi(t)=\Phi_{0}t\) and \(\Phi_{0}/2\pi=\frac{q}{p}\) following ref. \cite{coin2} and study the dynamics for different rational values of \(q/p\).
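The combination coins above can be assembled numerically step by step. The following is an illustrative NumPy sketch (the function names and the choice \(\tau = 1\) are ours), following Eqs. \ref{eq:9} and \ref{eq:10}:

```python
import numpy as np

def C_alpha(t, alpha, tau=1.0):
    """Romanelli's coin: cos(theta(t)) = (tau/(t+tau))^alpha / sqrt(2)."""
    c = (tau / (t + tau))**alpha / np.sqrt(2)
    s = np.sqrt(1.0 - c**2)
    return np.array([[c, s], [s, -c]])

def C_Phi(t, q, p):
    """Banuls et al.'s coin with Phi(t) = Phi_0 * t and Phi_0 = 2*pi*q/p."""
    e = np.exp(1j * 2 * np.pi * (q / p) * t)
    return np.array([[1 / e, 1 / e], [e, -e]]) / np.sqrt(2)

# Combination-II coin at dimensionless time n = 5 (t = (n-1)*tau):
# a ballistic walker (alpha_1 = 0) paired with a localized one (alpha_2 = 1.25)
tau = 1.0
t = (5 - 1) * tau
C = np.kron(C_alpha(t, 0.0, tau), C_alpha(t, 1.25, tau))
print(np.allclose(C @ C.conj().T, np.eye(4)))  # the 4x4 coin stays unitary: True
```

Note that at \(t = 0\), `C_Phi` reduces to the Hadamard coin, and `C_alpha` with \(\alpha = 0\) is time independent, as expected from the definitions above.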
\subsection{Combination-IV} The Hadamard coin \(\hat{C}_{H}\) is applied to one particle and the time-dependent coin \(\hat{C}_{\Phi}(t)\) to the other. Therefore the related \(4 \times 4\) coin matrix is written as \begin{center} \(\hat{C}_{H,\Phi}= \hat{C}_{H} \otimes \hat{C}_{\Phi}\)\\ \begin{equation} =\frac{1}{2}\begin{pmatrix} e^{-i\Phi(t)} & e^{-i\Phi(t)} & e^{-i\Phi(t)} & e^{-i\Phi(t)} \\[0.3em] e^{i\Phi(t)} & -e^{i\Phi(t)} & e^{i\Phi(t)} & -e^{i\Phi(t)} \\[0.3em] e^{-i\Phi(t)} & e^{-i\Phi(t)} & -e^{-i\Phi(t)} & -e^{-i\Phi(t)} \\[0.3em] e^{i\Phi(t)} & -e^{i\Phi(t)} & -e^{i\Phi(t)} & e^{i\Phi(t)} \\[0.3em] \end{pmatrix}\end{equation} \end{center} \section{ The dynamical observables \label{six}} \vspace*{.5cm} We have numerically studied the time evolutions of the following joint properties: \(C_{12}\), \(\Delta_{12}\) and \(E(|\psi\rangle )\), apart from the joint two-particle probability distribution \(P(x,y)\). Here \(x,y\) represent the positions of the two particles on the same line. The first observable \(C_{12}\) is the positional correlation function, which is given by \(C_{12}= \langle x y\rangle - \langle x \rangle \langle y \rangle\). It measures the positional correlation between the particles. It is also used to quantify the bunching or anti-bunching behavior of the two particles. Positive (negative) spatial correlation indicates bunching (anti-bunching) behavior of the particles.\\ The second observable is the average distance between the two particles, defined as \(\Delta_{12} = \langle |x - y|\rangle \). The third observable \(E(|\psi\rangle )\) is the entanglement entropy. We have evaluated \(E(|\psi\rangle )\) following the procedure described in ref. \cite{Berry}. For two-particle quantum walks, the composite space \(H\) can be divided into two single-particle subsystems, \(H_{1(2)} = (H_{P} \otimes H_{C} )_{1(2)}\), to measure the total entanglement between the two particles.
The entanglement between two subsystems of a bipartite pure quantum state \(|\psi \rangle \) can be measured using the von Neumann entropy \(S\) of the reduced density matrix of either subsystem \cite{mintert}, \(E(|\psi\rangle ) = S(\rho_{1} ) = S(\rho_{2} ) = - Tr(\rho_{1} \log_{2} \rho_{1} )\). Since the trace is invariant under similarity transformation and the density matrix \(\rho_{1}\) has real, non-negative eigenvalues \(\lambda_{i}\) , the von Neumann entropy can easily be calculated as \(S(\rho_{1} ) = - \sum\limits_{i} \lambda_{i} \log_{2} \lambda_{i} \). \\ For a pure two-particle state, \( |\psi\rangle = \sum\limits_{xy} \sum\limits_{ij} a_{xiyj} |x,i ; y,j \rangle,\) the reduced density matrix \(\rho_{1}\) is obtained by tracing the density matrix \(\rho\) = \(|\psi\rangle \langle \psi|\) over subsystem 2,\\ \( \rho_{1}=Tr_{2}(\rho)= \sum\limits_{xyzw}\sum\limits_{ijkl} a_{xiyj} a^{*}_{zkwl} |x,i\rangle \langle z,k| \langle w,l|y,j\rangle \)\\ where \(x,y,z,w\) represent points on the line, while \(i,j,k,l\) represent coin states and \(a_{xiyj}\) are coefficients of the two-particle basis states. Using the orthonormality condition \(\langle w,l|y,j\rangle =\delta_{yw}\delta_{jl}\), we obtain \(\rho_{1} = \sum\limits_{xz}\sum\limits_{ik} b_{xizk} |x,i\rangle \langle z,k|, \) where \(b_{xizk} = \sum\limits_{y}\sum\limits_{j} a_{xiyj} a^{*}_{zkyj}\).\\ We then numerically calculate the eigenvalues \(\lambda_{i}\) of \(\rho_{1}\). The entanglement \(E\) between the two particles can be obtained at each time step from the following relation \(S(\rho_{1} ) = - \sum\limits_{i} \lambda_{i} \log_{2} \lambda_{i} \). The maximum entanglement between two \(k\)-dimensional subsystems is \(E_{max} = \log_{2} k\). 
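The reduced-density-matrix recipe above can be checked directly. A minimal NumPy sketch (the lattice size and variable names are our own choices) computing \(S(\rho_{1})\) for the Bell state \(|\psi^{+}\rangle\) localized at the origin:

```python
import numpy as np

L = 5                      # small illustrative lattice; positions indexed 0..L-1
x0 = L // 2                # origin
a = np.zeros((L, 2, L, 2), dtype=complex)   # amplitudes a_{xiyj}; coin: 0=up, 1=down

# Bell state |psi^+> = (|0,up; 0,down> + |0,down; 0,up>)/sqrt(2)
a[x0, 0, x0, 1] = 1 / np.sqrt(2)
a[x0, 1, x0, 0] = 1 / np.sqrt(2)

# b_{xizk} = sum_{y,j} a_{xiyj} a*_{zkyj}  (partial trace over particle 2)
rho1 = np.einsum('xiyj,zkyj->xizk', a, a.conj()).reshape(2 * L, 2 * L)

# von Neumann entropy S(rho_1) = -sum_i lambda_i log2 lambda_i
lam = np.linalg.eigvalsh(rho1)
lam = lam[lam > 1e-12]     # drop numerically zero eigenvalues
S = -np.sum(lam * np.log2(lam))
print(round(S, 6))         # 1.0: the Bell state is maximally entangled (log2 2)
```

The same routine applied to the full time-evolved state gives \(E(|\psi\rangle)\) at every step of the walk.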
If both particles are initially placed at the origin, then \(k=2\) (as the coin-space is two dimensional and position space is one dimensional), so the Bell states \(|\psi^{\pm}\rangle,|\phi^{\pm}\rangle \) are maximally entangled \((E_{max} = \log_{2} 2 = 1)\). As the QW spreads at a rate of one lattice position per time step in each direction, the number of possible occupied states in a two-particle QW on the line increases linearly with the number of steps. The dimension of each of the single-particle subspaces is therefore \(k = 2(2n + 1)\), giving \(E_{max} = \log_{2} [2(2n + 1)] = 1 + \log_{2} (2n + 1)\) where \(n\) is the number of time steps. So the upper bound on entanglement grows logarithmically with the number of steps \(n\).\\ \section{ Diverse dynamics of two quantum walkers \label{seven}} \subsection{The case of coin \(\hat{C}_{\alpha}(t)\)} \FloatBarrier \begin{figure*}[th] \centering \subfigure[\label{fig:1a} ]{\includegraphics[scale=0.25]{1a1.pdf}}\hspace*{-.35cm} \subfigure[\label{fig:1b} ]{\includegraphics[scale=0.25]{1b1.pdf}}\hspace*{-.35cm} \subfigure[\label{fig:1c} ]{\includegraphics[scale=0.25]{1c1.pdf}}\hspace*{-.35cm} \subfigure[\label{fig:1d} ]{\includegraphics[scale=0.25]{1d1.pdf}}\hspace*{-.15cm} \subfigure[\label{fig:1e} ]{\includegraphics[scale=0.25]{1e1.pdf}} \subfigure[\label{fig:2a} ]{\includegraphics[scale=0.25]{2a.pdf}}\hspace*{-.35cm} \subfigure[\label{fig:2b} ]{\includegraphics[scale=0.25]{2b.pdf}}\hspace*{-.35cm} \subfigure[\label{fig:2c} ]{\includegraphics[scale=0.25]{2c.pdf}}\hspace*{-.35cm} \subfigure[\label{fig:2d} ]{\includegraphics[scale=0.25]{2d.pdf}}\hspace*{-.15cm} \subfigure[\label{fig:2e} ]{\includegraphics[scale=0.25]{2e.pdf}} \subfigure[\label{fig:3a} ]{\includegraphics[scale=0.25]{3a.pdf}}\hspace*{-.35cm} \subfigure[\label{fig:3b} ]{\includegraphics[scale=0.25]{3b.pdf}}\hspace*{-.35cm} \subfigure[\label{fig:3c} ]{\includegraphics[scale=0.25]{3c.pdf}}\hspace*{-.35cm} \subfigure[\label{fig:3d} 
]{\includegraphics[scale=0.25]{3d.pdf}}\hspace*{-.15cm} \subfigure[\label{fig:3e} ]{\includegraphics[scale=0.25]{3e.pdf}} \subfigure[\label{fig:2f} ]{\includegraphics[scale=0.40]{2f.pdf}} \subfigure[\label{fig:2g1} ]{\includegraphics[scale=0.40]{3g1.pdf}} \subfigure[\label{fig:2g2} ]{\includegraphics[scale=0.40]{2g.pdf}} \subfigure[\label{fig:3f} ]{\includegraphics[scale=0.40]{3f.pdf}} \subfigure[\label{fig:3g1} ]{\includegraphics[scale=0.40]{3g.pdf}} \subfigure[\label{fig:3g2} ]{\includegraphics[scale=0.40]{2g1.pdf}} \caption{\label{fig:1}{ Here figures (a)-(o) show two-particle probability distributions \(P(x,y)\) after 100 time steps for two non-interacting walkers evolving under the influences of the coin \(\hat{C}_{\alpha_{1},\alpha_{2}}(t)\). \(P(x,y)\) for \(|Sep\rangle\) initial state: (a) \(\alpha_{1}\) = \(\alpha_{2}\) = 0; (b) \(\alpha_{1}\) = \(\alpha_{2}\) = 0.5; (c) \(\alpha_{1}\) = \(\alpha_{2}\) = 1.25; (d) \(\alpha_{1}\) = 0, \(\alpha_{2}\) = 0.5; (e) \(\alpha_{1}\) = 0, \(\alpha_{2}\) = 1.25. \(P(x,y)\) for \(|\psi^{+}\rangle\) initial state: (f) \(\alpha_{1}\) = \(\alpha_{2}\) = 0; (g) \(\alpha_{1}\) = \(\alpha_{2}\) = 0.5; (h) \(\alpha_{1}\) = \(\alpha_{2}\) = 1.25; (i) \(\alpha_{1}\) = 0, \(\alpha_{2}\) = 0.5; (j) \(\alpha_{1}\) = 0, \(\alpha_{2}\) = 1.25. \(P(x,y)\) for \(|\psi^{-}\rangle\) initial state: (k) \(\alpha_{1}\) = \(\alpha_{2}\) = 0; (l) \(\alpha_{1}\) = \(\alpha_{2}\) = 0.5; (m) \(\alpha_{1}\) = \(\alpha_{2}\) = 1.25; (n) \(\alpha_{1}\) = 0, \(\alpha_{2}\) = 0.5; (o) \(\alpha_{1}\) = 0, \(\alpha_{2}\) = 1.25. Figures (p)-(u) show the variations of the correlation function \(C_{12}\) against dimensionless time \(n\) for two non-interacting walkers evolving under the influences of \(\hat{C}_{\alpha_{1},\alpha_{2}}(t)\).
Variations of \(C_{12}\) for \(|\psi^{+}\rangle\) initial state: (p) \(\alpha_{1}\) = \(\alpha_{2}\) = 0; \(\alpha_{1}\) = \(\alpha_{2}\) = 0.5; \(\alpha_{1}\) = 0, \(\alpha_{2}\) = 0.5; (q) \(\alpha_{1}\) = \(\alpha_{2}\) = 1.25; (r) \(\alpha_{1}\) = 0, \(\alpha_{2}\) = 1.25. Variations of \(C_{12}\) for \(|\psi^{-}\rangle\) initial state: (s) \(\alpha_{1}\) = \(\alpha_{2}\) = 0; \(\alpha_{1}\) = \(\alpha_{2}\) = 0.5; \(\alpha_{1}\) = 0, \(\alpha_{2}\) = 0.5; (t) \(\alpha_{1}\) = \(\alpha_{2}\) = 1.25; (u) \(\alpha_{1}\) = 0, \(\alpha_{2}\) = 1.25. } } \end{figure*} \begin{figure}[th] \centering \subfigure[\label{fig:4a} ]{\includegraphics[scale=0.30]{4a.pdf}}\hspace*{-.15cm} \subfigure[\label{fig:4f} ]{\includegraphics[scale=0.30]{4f.pdf}} \caption{\label{fig:2}{ (a) Two-particle probability distribution \(P(x,y)\) after 100 time steps of evolution of two \(\mathbb{1}\)-interacting walkers starting from a \(|Sep\rangle\) initial state with the following coin parameters: \(\alpha_{1}\) = \(\alpha_{2}\) = 0; (b) Variations of the correlation function against time for evolutions starting from a \(|Sep\rangle\) initial state with different coin parameter combinations. Some of the curves for different combinations of the two coin parameters nearly overlap each other.
} } \end{figure} \begin{figure*}[th] \centering \subfigure[\label{fig:5a} ]{\includegraphics[scale=0.25]{5a.pdf}}\hspace*{-.5cm} \subfigure[\label{fig:5b} ]{\includegraphics[scale=0.25]{5b.pdf}}\hspace*{-.5cm} \subfigure[\label{fig:5c} ]{\includegraphics[scale=0.25]{5c.pdf}}\hspace*{-.5cm} \subfigure[\label{fig:5d} ]{\includegraphics[scale=0.25]{5d.pdf}}\hspace*{-.35cm} \subfigure[\label{fig:5e} ]{\includegraphics[scale=0.25]{5e.pdf}} \subfigure[\label{fig:6a} ]{\includegraphics[scale=0.25]{6a.pdf}}\hspace*{-.5cm} \subfigure[\label{fig:6b} ]{\includegraphics[scale=0.25]{6b.pdf}}\hspace*{-.5cm} \subfigure[\label{fig:6c} ]{\includegraphics[scale=0.25]{6c.pdf}}\hspace*{-.5cm} \subfigure[\label{fig:6d} ]{\includegraphics[scale=0.25]{6d.pdf}}\hspace*{-.35cm} \subfigure[\label{fig:6e} ]{\includegraphics[scale=0.25]{6e.pdf}} \subfigure[\label{fig:5f} ]{\includegraphics[scale=0.40]{5f.pdf}} \subfigure[\label{fig:5g} ]{\includegraphics[scale=0.40]{5g.pdf}} \subfigure[\label{fig:5g1} ]{\includegraphics[scale=0.40]{5g1.pdf}} \subfigure[\label{fig:6f} ]{\includegraphics[scale=0.40]{6f.pdf}} \subfigure[\label{fig:6g} ]{\includegraphics[scale=0.40]{6g.pdf}} \subfigure[\label{fig:6h} ]{\includegraphics[scale=0.40]{6h.pdf}} \caption{\label{fig:2}{ Here figures (a)-(j) show two-particle probability distributions \(P(x,y)\) after 100 time steps for two \(\mathbb{1}\)-interacting walkers evolving under the influences of the coin \(\hat{C}_{\alpha_{1},\alpha_{2}}(t)\). \(P(x,y)\) for \(|\psi^{+}\rangle\) initial state: (a) \(\alpha_{1}\) = \(\alpha_{2}\) = 0; (b) \(\alpha_{1}\) = \(\alpha_{2}\) = 0.5; (c) \(\alpha_{1}\) = \(\alpha_{2}\) = 1.25; (d) \(\alpha_{1}\) = 0, \(\alpha_{2}\) = 0.5; (e) \(\alpha_{1}\) = 0, \(\alpha_{2}\) = 1.25.
\(P(x,y)\) for \(|\psi^{-}\rangle\) initial state : (f) \(\alpha_{1}\) = \(\alpha_{2}\) = 0; (g) \(\alpha_{1}\) = \(\alpha_{2}\) = 0.5; (h) \(\alpha_{1}\) = \(\alpha_{2}\) = 1.25, (i) \(\alpha_{1}\) = 0, \(\alpha_{2}\) = 0.5, (j) \(\alpha_{1}\) = 0, \(\alpha_{2}\) = 1.25. Figures (k)-(p) show the variations of the correlation function \(C_{12}\) against dimensionless time \(n\) for two \(\mathbb{1}\)-interacting walkers evolving under the influence of \(\hat{C}_{\alpha_{1},\alpha_{2}}(t)\). Variations of \(C_{12}\) for \(|\psi^{+}\rangle\) initial state : (k) \(\alpha_{1}\) = \(\alpha_{2}\) = 0; \(\alpha_{1}\) = \(\alpha_{2}\) = 0.5; \(\alpha_{1}\) = 0, \(\alpha_{2}\) = 0.5 (l) \(\alpha_{1}\) = \(\alpha_{2}\) = 1.25 (m) \(\alpha_{1}\) = 0, \(\alpha_{2}\) = 1.25. Variations of \(C_{12}\) for \(|\psi^{-}\rangle\) initial state : (n) \(\alpha_{1}\) = \(\alpha_{2}\) = 0; \(\alpha_{1}\) = \(\alpha_{2}\) = 0.5; \(\alpha_{1}\) = 0, \(\alpha_{2}\) = 0.5 (o) \(\alpha_{1}\) = \(\alpha_{2}\) = 1.25 (p) \(\alpha_{1}\) = 0, \(\alpha_{2}\) = 1.25.} } \end{figure*} \begin{figure*}[th] \centering \subfigure[\label{fig:7a} ]{\includegraphics[scale=0.25]{7a.pdf}}\hspace*{-.5cm} \subfigure[\label{fig:7b} ]{\includegraphics[scale=0.25]{7b.pdf}}\hspace*{-.5cm} \subfigure[\label{fig:7c} ]{\includegraphics[scale=0.25]{7c.pdf}}\hspace*{-.55cm} \subfigure[\label{fig:7d} ]{\includegraphics[scale=0.25]{7d.pdf}}\hspace*{-.35cm} \subfigure[\label{fig:7e} ]{\includegraphics[scale=0.25]{7e.pdf}} \subfigure[\label{fig:8a} ]{\includegraphics[scale=0.25]{8a.pdf}}\hspace*{-.55cm} \subfigure[\label{fig:8b} ]{\includegraphics[scale=0.25]{8b.pdf}}\hspace*{-.55cm} \subfigure[\label{fig:8c} ]{\includegraphics[scale=0.25]{8c.pdf}}\hspace*{-.55cm} \subfigure[\label{fig:8d} ]{\includegraphics[scale=0.25]{8d.pdf}}\hspace*{-.35cm} \subfigure[\label{fig:8e} ]{\includegraphics[scale=0.25]{8e.pdf}} \subfigure[\label{fig:9a} ]{\includegraphics[scale=0.25]{9a.pdf}}\hspace*{-.55cm} \subfigure[\label{fig:9b}
]{\includegraphics[scale=0.25]{9b.pdf}}\hspace*{-.55cm} \subfigure[\label{fig:9c} ]{\includegraphics[scale=0.25]{9c.pdf}}\hspace*{-.55cm} \subfigure[\label{fig:9d} ]{\includegraphics[scale=0.25]{9d.pdf}}\hspace*{-.35cm} \subfigure[\label{fig:9e} ]{\includegraphics[scale=0.25]{9e.pdf}} \subfigure[\label{fig:7f1} ]{\includegraphics[scale=0.35]{7f.pdf}} \subfigure[\label{fig:7g1} ]{\includegraphics[scale=0.35]{7g.pdf}} \subfigure[\label{fig:7h1} ]{\includegraphics[scale=0.35]{7h.pdf}}\\ \subfigure[\label{fig:8f1} ]{\includegraphics[scale=0.35]{8f.pdf}} \subfigure[\label{fig:8g1} ]{\includegraphics[scale=0.35]{8g.pdf}} \subfigure[\label{fig:8h1} ]{\includegraphics[scale=0.35]{8h.pdf}}\\ \subfigure[\label{fig:9f1} ]{\includegraphics[scale=0.35]{9f.pdf}} \subfigure[\label{fig:9g1} ]{\includegraphics[scale=0.35]{9g.pdf}} \subfigure[\label{fig:9h1} ]{\includegraphics[scale=0.35]{9h.pdf}} \caption{\label{fig:3}{ Here figures (a)-(o) show two-particle probability distributions \(P(x,y)\) after 100 time steps for two \(\pi\)-phase interacting walkers evolving under the influence of the coin \(\hat{C}_{\alpha_{1},\alpha_{2}}(t)\). \(P(x,y)\) for \(|Sep\rangle\) initial state : (a) \(\alpha_{1}\) = \(\alpha_{2}\) = 0; (b) \(\alpha_{1}\) = \(\alpha_{2}\) = 0.5; (c) \(\alpha_{1}\) = \(\alpha_{2}\) = 1.25, (d) \(\alpha_{1}\) = 0, \(\alpha_{2}\) = 0.5; (e) \(\alpha_{1}\) = 0, \(\alpha_{2}\) = 1.25. \(P(x,y)\) for \(|\psi^{+}\rangle\) initial state : (f) \(\alpha_{1}\) = \(\alpha_{2}\) = 0; (g) \(\alpha_{1}\) = \(\alpha_{2}\) = 0.5; (h) \(\alpha_{1}\) = \(\alpha_{2}\) = 1.25, (i) \(\alpha_{1}\) = 0, \(\alpha_{2}\) = 0.5, (j) \(\alpha_{1}\) = 0, \(\alpha_{2}\) = 1.25. \(P(x,y)\) for \(|\psi^{-}\rangle\) initial state : (k) \(\alpha_{1}\) = \(\alpha_{2}\) = 0; (l) \(\alpha_{1}\) = \(\alpha_{2}\) = 0.5; (m) \(\alpha_{1}\) = \(\alpha_{2}\) = 1.25, (n) \(\alpha_{1}\) = 0, \(\alpha_{2}\) = 0.5; (o) \(\alpha_{1}\) = 0, \(\alpha_{2}\) = 1.25.
Figures (p)-(x) show the variations of the correlation function \(C_{12}\) against dimensionless time \(n\) for two \(\pi\)-phase interacting walkers evolving under the influence of \(\hat{C}_{\alpha_{1},\alpha_{2}}(t)\). Variations of \(C_{12}\) for \(|Sep\rangle\) initial state : (p) \(\alpha_{1}\) = \(\alpha_{2}\) = 0; \(\alpha_{1}\) = \(\alpha_{2}\) = 0.5; \(\alpha_{1}\) = 0, \(\alpha_{2}\) = 0.5 (q) \(\alpha_{1}\) = \(\alpha_{2}\) = 1.25 (r) \(\alpha_{1}\) = 0, \(\alpha_{2}\) = 1.25. Variations of \(C_{12}\) for \(|\psi^{+}\rangle\) initial state : (s) \(\alpha_{1}\) = \(\alpha_{2}\) = 0; \(\alpha_{1}\) = \(\alpha_{2}\) = 0.5; \(\alpha_{1}\) = 0, \(\alpha_{2}\) = 0.5 (t) \(\alpha_{1}\) = \(\alpha_{2}\) = 1.25 (u) \(\alpha_{1}\) = 0, \(\alpha_{2}\) = 1.25. Variations of \(C_{12}\) for \(|\psi^{-}\rangle\) initial state : (v) \(\alpha_{1}\) = \(\alpha_{2}\) = 0; \(\alpha_{1}\) = \(\alpha_{2}\) = 0.5; \(\alpha_{1}\) = 0, \(\alpha_{2}\) = 0.5 (w) \(\alpha_{1}\) = \(\alpha_{2}\) = 1.25 (x) \(\alpha_{1}\) = 0, \(\alpha_{2}\) = 1.25. } } \end{figure*} Here we describe the results obtained for time-dependent coins of the kind \(\hat{C}_{\alpha}(t)\). In general, the coin parameter \(\alpha\) can be different for the two coins used in a two-particle QW. We have studied the QW evolutions for five different combinations of these two parameters : (1) \(\alpha_{1}=\alpha_{2}=0\), (2) \(\alpha_{1}=\alpha_{2}=0.50\), (3) \(\alpha_{1}=\alpha_{2}=1.25\), (4) \(\alpha_{1}=0,\alpha_{2}=0.5\) and (5) \(\alpha_{1}=0,\alpha_{2}=1.25\). The first three combinations represent cases where the walkers are driven by identical coins. The other two represent cases where the two walkers are controlled by two coins of contrasting nature, e.g., \(\alpha=0\) generates ballistic evolution whereas \(\alpha=1.25\) generates localization.\\ When \(\alpha_{1}=\alpha_{2}=0\), the coins transform to time-independent Hadamard coins.
We discuss the results for the time-independent coins first so that we can compare and contrast them with all the other cases studied here with time-dependent coins. The two-particle probability distributions for time-independent Hadamard coins were studied by Berry et al.\cite{Berry} for different initial states. Our results for \(\alpha_{1}=\alpha_{2}=0\) agree with the results obtained by Berry et al.\cite{Berry}. The second (third) combination is used to explore the evolution of two quantum walkers which would have generated diffusive (localized) evolutions at the individual level had there been no quantum entanglement and interactions between the two walkers. \\ \subsubsection{Dynamics of two non-interacting walkers under the influence of \(\hat{C}_{\alpha_{1},\alpha_{2}}(t)\)} The joint probability distribution \(P(x,y)\) for two non-interacting walkers starting from the state \(|Sep\rangle\) is simply equal to the product of two single-particle probability distributions. For \(\alpha_{1}=\alpha_{2}=0\), a snapshot of the resultant distribution \(P(x,y)\) is shown in Fig.\ref{fig:1a}. The formation of multiple high peaks at the four corners of the \(xy\) plane indicates that if, upon measurement on the system, one particle is found relatively far from the origin, then the other particle is likely to be found either near the first particle (bunching) or at the opposite end of the line (anti-bunching). The average separation \(\Delta_{12}\) increases gradually with time \cite{supl}. For \(\alpha_{1}=\alpha_{2}=0.5\), \(P(x,y)\) spreads much more slowly (see Fig.\ref{fig:1b}), as expected. \(\Delta_{12}\) also increases with time in a much slower fashion. For \(\alpha_{1}=\alpha_{2}=1.25\), the probability distribution remains localized near the origin (see Fig.\ref{fig:1c}). The weak time-variation of \(\Delta_{12}\) and its saturation to a small value \(\sim 1.4\) also indicate localization of the walkers \cite{supl}.
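The factorization just described is easy to check numerically. The following is a minimal sketch, not the code behind the figures: it assumes a standard Hadamard walk with the symmetric initial coin state \((|\uparrow\rangle + i|\downarrow\rangle)/\sqrt{2}\), and takes \(\Delta_{12}\) to be the expectation of \(|x_{1}-x_{2}|\); both choices are assumptions made only for illustration.

```python
import numpy as np

def hadamard_walk(n_steps):
    """Single-particle Hadamard walk; returns P(x) on x = -n_steps..n_steps."""
    size = 2 * n_steps + 1
    amp = np.zeros((size, 2), dtype=complex)      # amp[x, coin]; coin 0 = up, 1 = down
    # walker at the origin, symmetric coin state (|up> + i|down>)/sqrt(2)
    amp[n_steps] = [1 / np.sqrt(2), 1j / np.sqrt(2)]
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(n_steps):
        amp = amp @ H.T                            # coin toss at every site
        shifted = np.zeros_like(amp)
        shifted[1:, 0] = amp[:-1, 0]               # "up" component steps right
        shifted[:-1, 1] = amp[1:, 1]               # "down" component steps left
        amp = shifted
    return (np.abs(amp) ** 2).sum(axis=1)

n = 100
p_single = hadamard_walk(n)
# non-interacting walkers from |Sep>: the joint distribution factorizes
p_joint = np.outer(p_single, p_single)
x = np.arange(-n, n + 1)
# average separation, taken here as the expectation of |x_1 - x_2|
delta_12 = np.sum(p_joint * np.abs(x[:, None] - x[None, :]))
```

Replacing `hadamard_walk` by a walk with the time-dependent coin would give the other non-interacting parameter combinations in the same way.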
The nature of QW evolution changes when the two walkers are driven by two quantum coins of contrasting nature. For \(\alpha_{1}=0,\alpha_{2}=1.25\), \(P(x,y)\) spreads along the y-axis of the related plot shown in Fig.\ref{fig:1e}, indicating that one of the walkers remains close to the origin throughout the evolution while the other one moves away from the origin. The higher peaks are at certain non-zero values of \(y\). The width of the distribution increases in case of \(\alpha_{1}=0,\alpha_{2}=0.5\) (see Fig.\ref{fig:1d}). These results simply indicate that one can tune the value of \(\alpha_{2}\) to change the width of \(P(x,y)\) while keeping \(\alpha_{1}\) at a fixed value. On the other hand, the position of the peaks can be controlled by changing the value of \(\alpha_{1}\).\\
% The variations of \(\Delta_{12}\) against time is quite similar for \(\alpha_{1}=0,\alpha_{2}=0.5\) and \(\alpha_{1}=\alpha_{2}=0.5\) \cite{supl}.\\
Two non-interacting walkers remain uncorrelated and unentangled when they start evolving from the \(|Sep\rangle\) state. On the other hand, entangled initial states generate two-particle correlations even in the absence of pair interactions. The figures \ref{fig:2a}-\ref{fig:2e} show the joint probability distributions \(P(x,y)\) for the bosonic \(|\psi^{+}\rangle\) initial state. The probability distributions look somewhat similar to those obtained in case of the \(|Sep\rangle\) initial state. This occurs due to the presence of two similar terms in the \(|\psi^{+}\rangle\) and \(|Sep\rangle\) states. The temporal variations of \(C_{12}\) for different combinations of the coin parameters are shown in the figures \ref{fig:2f}-\ref{fig:2g2}. When \(\alpha_{1}=\alpha_{2}=0\), \(C_{12}\) increases with time (from zero), indicating that the walkers become more and more correlated with time. \(C_{12}\) also exhibits some periodic oscillations whose amplitude increases with time (see Fig.\ref{fig:2f}).
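A note on the sign convention behind these statements: the short sketch below assumes \(C_{12}\) is the usual position covariance \(\langle x_{1}x_{2}\rangle - \langle x_{1}\rangle\langle x_{2}\rangle\), which is positive for bunched pairs and negative for anti-bunched pairs. The two toy distributions are hypothetical, chosen only to check the signs.

```python
import numpy as np

def c12(p_joint, x):
    """Position correlation <x1 x2> - <x1><x2> for a joint distribution P(x, y)."""
    px = p_joint.sum(axis=1)        # marginal distribution of walker 1
    py = p_joint.sum(axis=0)        # marginal distribution of walker 2
    mean_x, mean_y = px @ x, py @ x
    mean_xy = x @ p_joint @ x       # sum_{x,y} x * y * P(x, y)
    return mean_xy - mean_x * mean_y

x = np.arange(-2, 3)                # positions -2..2
# toy "bunched" pair: always found together, at -2 or at +2
bunched = np.zeros((5, 5)); bunched[0, 0] = bunched[4, 4] = 0.5
# toy "anti-bunched" pair: always found on opposite ends of the line
antibunched = np.zeros((5, 5)); antibunched[0, 4] = antibunched[4, 0] = 0.5
```

With these definitions, `c12(bunched, x)` is positive and `c12(antibunched, x)` negative, matching the bunching/anti-bunching language used in the text.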
It indicates a competition between bunching and anti-bunching behaviors. However, the relative dominance of bunching behavior is quite clear here. When both the coins become time-dependent (\(\alpha_{1}=\alpha_{2}=0.5\)), the particles become anti-correlated as \(C_{12}\) remains negative and its value gradually decreases with time (see Fig.\ref{fig:2f}). So, for \(|\psi^{+}\rangle\), changing \(\alpha\) from 0 to 0.5 not only slows the spreading of \(P(x,y)\) but also changes the character of the dynamics. For \(\alpha_{1}=\alpha_{2}=1.25\), the particles remain localized near the origin (see Fig.\ref{fig:2c}) as expected, but there are distinct periodic oscillations of \(C_{12}\) (see Fig.\ref{fig:2g1}). The amplitude of these oscillations is, however, small. For \(\alpha_{1}=0,\alpha_{2}=1.25\), \(C_{12}\) exhibits oscillations and the oscillation amplitude increases with time (see Fig.\ref{fig:2g2}). This occurs because one of the particles moves away from the origin. So, the amplitude of such oscillation can be controlled by varying the parameter \(\alpha_{1}\). The particles become periodically correlated and anti-correlated with time. For \(\alpha_{1}=0,\alpha_{2}=0.5\), the particles remain anti-correlated. There are packets of oscillation in the temporal variation of \(C_{12}\) (see Fig.\ref{fig:2f}). For the \(|\psi^{-}\rangle\) initial state, the evolution is completely different from that found in cases of \(|Sep\rangle\) and \(|\psi^{+}\rangle\). The particles remain anti-correlated and anti-bunching behavior dominates. When \(\alpha_{1}=\alpha_{2}=0\), the particles exhibit nearly pure anti-bunching behavior, as shown in Fig. \ref{fig:3a}. It implies that upon measurement on the system, if one particle is found near one end of the line then the other particle is likely to be found at the opposite end of the line. The correlation function rapidly decays with time, as shown in Fig. \ref{fig:3f}.
For \(\alpha_{1}=\alpha_{2}=0.5\), \(P(x,y)\) spreads in a slower fashion but the presence of anti-bunching behavior can still be seen in the related plot of \(P(x,y)\) (see Fig.\ref{fig:3b}). \(C_{12}\) also decreases in a slower fashion. For \(\alpha_{1}=\alpha_{2}=1.25\), the probability distribution remains localized near the origin (see Fig.\ref{fig:3c}). The particles remain anti-correlated but \(C_{12}\) also shows oscillatory behavior, indicating a competition between bunching and anti-bunching (see Fig.\ref{fig:3g1}). When \(\alpha_{1}=0, \alpha_{2}=1.25\), the probability distribution looks quite similar to that obtained with \(|\psi^{+}\rangle\) (see Fig.\ref{fig:3e}). However, the correlation function rapidly decreases with time (see Fig.\ref{fig:3g2}). For \(\alpha_{1}=0,\alpha_{2}=0.5\), the plots of \(P(x,y)\) indicate that upon measurement, the two particles are most likely to be found on opposite sides of the origin, one near and the other far from the origin. So, the particles remain strongly anti-correlated (see Fig.\ref{fig:3d}).\\ It is interesting to find that for \(\alpha_{1}=\alpha_{2}=1.25\), the system can generate localization behavior of significantly different character depending on the nature of the initial states, even in the absence of interactions.\\ \subsubsection{Dynamics of two \(\mathbb{1}\)-interacting walkers under the influence of \(\hat{C}_{\alpha_{1},\alpha_{2}}(t)\)} In the presence of interactions, the particles become entangled \cite{supl}. Let us first describe the results obtained for the \(|Sep\rangle\) initial state. In this interacting walk, the identity operator acts on the \(|0, \uparrow ; 0, \uparrow \rangle\) and \(|0, \downarrow ; 0, \downarrow \rangle\) terms of the \(|Sep\rangle\) state.
Since both particles are in the same position and coin states and the identity operator does not mix the coin states of each particle, they are translated together and move in the same direction at each step of the walk. The related probability distribution \(P(x,y)\) and the time variation of \(C_{12}\) are shown in Figures \ref{fig:4a} and \ref{fig:4f} respectively. \(C_{12}\) rapidly increases with time. Since this strong bunching behavior is independent of the coin parameters, all the curves for different parameter sets nearly overlap each other. The contribution of the other two terms present in \(|Sep\rangle\) is relatively weaker. The influence of the other terms is visible in Fig. \ref{fig:4f}, as the curve for \(\alpha_{1}=\alpha_{2}=0\) does not exactly overlap the other curves. The temporal variations of \(\Delta_{12}\) and \(E(|\psi\rangle)\) are quite similar to those observed in the previously studied case of two non-interacting walkers starting from the \(|Sep\rangle\) initial state \cite{supl}. This is also due to the contribution of the other two terms present in \(|Sep\rangle\). The entanglement entropy saturates to values \(\sim 2\) for different coin parameter combinations \cite{supl}. \\ When the walker pairs start from the bosonic \(|\psi^{+}\rangle\) initial state with \(\alpha_{1}=\alpha_{2}=0\), the particles exhibit fermionic anti-bunching behavior, as is clearly shown in Fig. \ref{fig:5a}. This is quite interesting as fermionic anti-bunching is obtained from a bosonic initial state. The correlation function \(C_{12}\) rapidly decays with time, indicating that the walkers become more and more anti-correlated with time (see Fig.\ref{fig:5f}). \(P(x,y)\) spreads much more slowly as the values of the coin parameters are increased to \(\alpha_{1}=\alpha_{2}=0.50\). The probability distributions for \(\alpha_{1}=\alpha_{2}=0.50\) and \(\alpha_{1}=\alpha_{2}=1.25\) are shown in the figures \ref{fig:5b} and \ref{fig:5c} respectively.
The walkers exhibit anti-correlated evolution in all three cases. However, the rate of decay of \(C_{12}\) decreases as the values of the coin parameters are increased (see Figs.\ref{fig:5f}, \ref{fig:5g}). For \(\alpha_{1}=0, \alpha_{2}\neq 0\), the probability distributions are found to exhibit behaviors qualitatively similar to the previously studied cases (see figures \ref{fig:5d} and \ref{fig:5e}). A difference is that, for \(\alpha_{1}=0, \alpha_{2}=1.25\), the probability of finding both the particles together at the origin is smaller here than in the previously studied cases.\\ The scenario becomes more interesting when the walker pairs start from the \(|\psi^{-}\rangle\) initial state with \(\alpha_{1}=\alpha_{2}=0\). The probability distribution \(P(x,y)\) consists of two parts. One part spreads along one diagonal of the \(xy\) plane in Fig. \ref{fig:6a}, indicating the presence of bunching behavior, and the other part corresponds to anti-bunching. So, there are finite probabilities that upon measurement, the two particles can either be found bunched together or far apart from each other on opposite ends of the line. The correlation function \(C_{12}\) rapidly decays with time, indicating that the walkers remain anticorrelated. This is because the anti-bunching peaks are more distant from the origin in comparison to the bunching peaks. As the values of \(\alpha_{1},\alpha_{2}\) are simultaneously increased from \(0\) to \(0.5\), the dynamics becomes very slow. The average separation reaches a value \(\sim 4.5\) after 100 steps. Fig. \ref{fig:6b} shows that a significant part of \(P(x,y)\) is localized near the origin. Both \(P(x,y)\) and \(C_{12}\) evolve quite slowly with time (see Figs.\ref{fig:6b} and \ref{fig:6f}). The system exhibits dynamics which is much slower than what is naively expected for two such walkers.
For \(\alpha_{1}=\alpha_{2}=1.25\), although the probability distribution remains localized near the origin (see Fig.\ref{fig:6c}), the dynamical behavior is quite different from our expectations. Here both the correlation function and the average separation \(\Delta_{12}\) exhibit periodic oscillatory behavior (see Fig.\ref{fig:6g}) \cite{supl}. This is a ``dynamical'' kind of localization. When \(\alpha_{1}=0, \alpha_{2}= 1.25\), the probability distribution shown in Fig. \ref{fig:6e} indicates that it is highly probable that upon measurement both the particles would be found at the origin. Therefore, for \(\alpha_{2}=1.25\), the system exhibits very slow evolution even if \(\alpha_{1}=0\) (see Fig.\ref{fig:6h}). If one walker has slow dynamics then the other walker also evolves slowly. Even for \(\alpha_{1}=0, \alpha_{2}= 0.5\), \(P(x,y)\) has peaks near the origin (see Fig.\ref{fig:6d}), and the particles become more and more anticorrelated with time, though in a much slower fashion in comparison to the previous cases (except the non-interacting case with \(|\psi^{+}\rangle\)) (see Fig.\ref{fig:6f}). \\ One interesting observation is that for the \(|\psi^{-}\rangle\) initial state, two interacting walkers will exhibit very slow dynamics whenever one of the coin parameters is \(\geq 0.5\).\\ \subsubsection{Dynamics of two \(\pi\)-phase interacting walkers under the influence of \(\hat{C}_{\alpha_{1},\alpha_{2}}(t)\)} Let us now describe the results for the \(\pi\)-phase interaction. When the walkers start from the \(|Sep\rangle\) initial state with \(\alpha_{1}=\alpha_{2}=0\), the probability distribution \(P(x,y)\) exhibits a diagonally spreading part along with an anti-bunching part (see Fig.\ref{fig:7a}), similar to the previously described case of the \(\mathbb{1}\)-interaction and the \(|\psi^{-}\rangle\) state. However, the plot of \(P(x,y)\) in Fig.\ref{fig:7a} also shows that the contribution of the anti-bunching part is relatively stronger here.
As a result, the correlation function \(C_{12}\) decays relatively faster (see Fig.\ref{fig:7f1}). For \(\alpha_{1}=\alpha_{2}=0.5\), a major part of \(P(x,y)\) remains localized near the origin (see Fig.\ref{fig:7b}). The other part generates anti-bunching. For \(\alpha_{1}=\alpha_{2}=1.25\), the system becomes localized near the origin (see Fig. \ref{fig:7c}) and the particles remain anticorrelated (see Fig. \ref{fig:7g1}). For \(\alpha_{1}=0,\alpha_{2}\ne 0\), the situation becomes more interesting as the probability distribution \(P(x,y)\) is quite different from the previously described cases. For \(\alpha_{1}=0,\alpha_{2}=1.25\), \(P(x,y)\) has three sharp peaks in three different regions of the plot (see Fig.\ref{fig:7e}) : the origin and two other points (quite distant from the origin) on the y-axis. This indicates that upon measurement, three outcomes are most likely : firstly, both walkers can be found at the origin; secondly, one walker can be found at the origin and the other one at a distant point on the positive side of the origin; thirdly, one walker can be found at the origin and the other one at a distant point on the negative side of the origin. The walkers become periodically correlated and anticorrelated with a period of one time step, as shown in Fig. \ref{fig:7h1}. The oscillation amplitude increases with time. For \(\alpha_{1}=0,\alpha_{2}=0.5\), the probabilities of the last two outcomes decrease, as can be seen from the flattening of the related two sharp peaks along the \(x\) axis. The corresponding temporal variation of \(C_{12}\) is shown in Fig. \ref{fig:7f1}. \\ For the \(|\psi^{+}\rangle\) initial state, the plots of the probability distributions \(P(x,y)\) are shown in Figs. \ref{fig:8a}-\ref{fig:8e}. For \(\alpha_{1}=0,\alpha_{2}=0\), the dynamics remains qualitatively similar to that obtained with \(|Sep\rangle\).
As the values of \(\alpha_{1},\alpha_{2}\) are simultaneously increased from \(0\) to \(0.5\), the dynamics becomes very slow. The average separation reaches a value \(\sim 2.7\) after 100 steps \cite{supl}. Both \(P(x,y)\) and \(C_{12}\) evolve quite slowly with time (see Fig.\ref{fig:8b} and Fig.\ref{fig:8f1}). For \(\alpha_{1}=\alpha_{2}=1.25\), the probability distribution remains localized as expected (\(\Delta_{12}\sim 0.2\) after 100 steps, which is the smallest amongst all the studied cases with \(\hat{C}_{\alpha_{1},\alpha_{2}}(t)\)). The average separation performs oscillations of a quite small amplitude (\(\sim 0.1\)) \cite{supl}. The correlation function exhibits periodic oscillatory behavior (see Fig. \ref{fig:8g1}). The particles exhibit correlated evolution. This case is thus quite different from most of the ``localized'' cases (i.e., cases with \(\alpha_{1}=\alpha_{2}=1.25\)) studied here, where the particles have been found to exhibit anti-correlated evolution. When \(\alpha_{1}=0, \alpha_{2}= 1.25\), the probability distribution \(P(x,y)\) again has three sharp peaks (see Fig.\ref{fig:8e}). The difference with the \(|Sep\rangle\) state is that here the height of the peak at the origin is slightly smaller and the heights of the other two peaks are slightly higher than those obtained in case of the \(|Sep\rangle\) state. The corresponding correlation function has an interesting temporal evolution (see Fig.\ref{fig:8h1}). Although the particles mostly remain correlated during the evolution, \(C_{12}\) exhibits oscillations where the maximal value of the correlation gradually increases with time. For \(\alpha_{1}=0, \alpha_{2}= 0.5\), the dynamics is quite slow (see Fig.\ref{fig:8f1}).
It is interesting to note that for both \(\alpha_{1}=0.5, \alpha_{2}= 0.5\) and \(\alpha_{1}=0, \alpha_{2}= 0.5\), \(C_{12}\) decays in a qualitatively similar way although the parameter combinations are different from each other.\\ The results for the \(|\psi^{-}\rangle\) initial state are quite similar to those obtained in the previously described case of two \(\mathbb{1}\)-interacting walkers starting from the \(|\psi^{-}\rangle\) initial state. The two-particle probability distributions are shown in the figures \ref{fig:9a}-\ref{fig:9e} and the temporal variations of the correlation functions are shown in the figures \ref{fig:9f1}-\ref{fig:9h1}.\\ \subsubsection{Overall diversity in the \(\hat{C}_{\alpha_{1},\alpha_{2}}(t)\) driven dynamics } The time-dependent coin \(\hat{C}_{\alpha_{1},\alpha_{2}}(t)\) has generated a rich variety of dynamical behavior. Here we briefly summarize the overall diversity in the dynamics for some sets of parameters.\\ For \(\alpha_{1}=\alpha_{2}=1.25\), although localization has been found for all the different initial states and interactions, its character has been quite different in different cases. In some cases, we have observed correlated evolution whereas in some other cases anti-correlated evolution has been found. A ``dynamical'' kind of localization, where both the average separation and the correlation function exhibit periodic oscillations, has also been found in a particular case.
On the other hand, we have also found a particular case of localization where the average separation is quite small (\(\sim 0.2\)) in comparison to its typical values (\(\sim 1.5\)) \cite{supl}.\\ For \(\alpha_{1}=0,\alpha_{2}=1.25\), we have observed that depending on the initial states and interactions, the system can exhibit any one of the following three scenarios : (1) both the particles can be found simultaneously localized near the origin, (2) only one of them can be found to be localized near the origin, and (3) both of them can be found at positions quite distant from the origin.\\ For \(\alpha_{1}=\alpha_{2}=0.5\), both the decay rate of \(C_{12}\) and the growth rate of \(\Delta_{12}\) have been found to change significantly for different initial states and interactions. For two \(\pi\)-phase interacting walkers starting from the \(|\psi^{+}\rangle\) state, both the rates attain their smallest values whereas for two \(\mathbb{1}\)-interacting walkers starting from the \(|\psi^{+}\rangle\) state, both the rates attain their highest values. In contrast, no such drastic change has been found under similar conditions for the time-independent coins (\(\alpha_{1}=\alpha_{2}=0\)).\\ \begin{figure}[th] \centering \includegraphics[scale=0.70]{Fig5.pdf} \caption{\label{fig:10a}{The variations of the instantaneous average separation \(\Delta_{12}\) against dimensionless time \(n\) in case of two non-interacting walkers starting from the \(|Sep\rangle\) state under the influence of the coin \(\hat{C}_{\Phi}(t)\).
The considered values of the coin parameters \(q\) and \(p\) for the different curves are described inside the figure.}} \end{figure} \begin{figure*}[th] \centering \hspace*{-.15cm}\subfigure[\label{fig:11a} ]{\includegraphics[scale=0.37]{Fig6a.pdf}}\hspace*{-.15cm} \subfigure[\label{fig:11b} ]{\includegraphics[scale=0.37]{Fig6b.pdf}}\hspace*{-.15cm} \subfigure[\label{fig:12a} ]{\includegraphics[scale=0.37]{Fig6c.pdf}}\hspace*{-.15cm} \subfigure[\label{fig:12b} ]{\includegraphics[scale=0.37]{Fig6d.pdf}}\\ \hspace*{-.15cm}\subfigure[\label{fig:11c} ]{\includegraphics[scale=0.37]{Fig6e.pdf}}\hspace*{-.15cm} \subfigure[\label{fig:11d} ]{\includegraphics[scale=0.37]{Fig6f.pdf}}\hspace*{-.15cm} \subfigure[\label{fig:12c} ]{\includegraphics[scale=0.37]{Fig6g.pdf}}\hspace*{-.15cm} \subfigure[\label{fig:12d} ]{\includegraphics[scale=0.37]{Fig6h.pdf}} \caption{\label{fig:11}{The time-variations of the joint properties for two non-interacting quantum walkers with the same coins : (a) Variations of \(C_{12}\) in case of \(|\psi^{+}\rangle\); (b) Variations of \(\Delta_{12}\) in case of \(|\psi^{+}\rangle\), (c) Variations of \(C_{12}\) in case of \(|\psi^{-}\rangle\); (d) Variations of \(\Delta_{12}\) in case of \(|\psi^{-}\rangle\). The time-variations of the joint properties for two non-interacting quantum walkers with two different coins : (e) Variations of \(C_{12}\) in case of \(|\psi^{+}\rangle\); (f) Variations of \(\Delta_{12}\) in case of \(|\psi^{+}\rangle\), (g) Variations of \(C_{12}\) in case of \(|\psi^{-}\rangle\); (h) Variations of \(\Delta_{12}\) in case of \(|\psi^{-}\rangle\).
The coin parameters have been mentioned inside the plots.} } \end{figure*} \begin{figure*}[th] \centering \subfigure[\label{fig:13a} ]{\includegraphics[scale=0.45]{Fig7a.pdf}} \subfigure[\label{fig:13b} ]{\includegraphics[scale=0.45]{Fig7b.pdf}} \subfigure[\label{fig:13c} ]{\includegraphics[scale=0.45]{Fig7c.pdf}} \subfigure[\label{fig:14a} ]{\includegraphics[scale=0.45]{Fig7d.pdf}} \subfigure[\label{fig:14b} ]{\includegraphics[scale=0.45]{Fig7e.pdf}} \subfigure[\label{fig:14c} ]{\includegraphics[scale=0.45]{Fig7f.pdf}} \subfigure[\label{fig:15a} ]{\includegraphics[scale=0.45]{Fig7g.pdf}} \subfigure[\label{fig:15b} ]{\includegraphics[scale=0.45]{Fig7h.pdf}} \subfigure[\label{fig:15c} ]{\includegraphics[scale=0.45]{Fig7i.pdf}} \caption{\label{fig:13}{The time-variations of the joint properties for two \(\mathbb{1}\)-interacting quantum walkers : (a) Variations of \(C_{12}\) in case of \(|Sep\rangle\); (b) Variations of \(\Delta_{12}\) in case of \(|Sep\rangle\), (c) Variations of \(E(|\psi\rangle)\) in case of \(|Sep\rangle\), (d) Variations of \(C_{12}\) in case of \(|\psi^{+}\rangle\); (e) Variations of \(\Delta_{12}\) in case of \(|\psi^{+}\rangle\), (f) Variations of \(E(|\psi\rangle)\) in case of \(|\psi^{+}\rangle\), (g) Variations of \(C_{12}\) in case of \(|\psi^{-}\rangle\); (h) Variations of \(\Delta_{12}\) in case of \(|\psi^{-}\rangle\), (i) Variations of \(E(|\psi\rangle)\) in case of \(|\psi^{-}\rangle\). The coin parameters are mentioned inside the plots.} } \end{figure*} \subsection{The case of the time-dependent coin \(\hat{C}_{\Phi}(t)\)} As described earlier in Sec. \ref{coins}, the time-dependent coin \(\hat{C}_{\Phi}(t)\) generates the dynamical localization phenomenon in case of single-particle QW. The fate of such behavior in case of two-particle QW is a non-trivial question. How the interactions and initial states influence the coin-parameter dependence of such QW evolution is also difficult to answer.
Here we perform extensive numerical simulations in order to answer these questions. The related \(4\times4\) matrix \(\hat{C}_{\Phi,\Phi}\) has been described earlier in Sec. \ref{five}. We present the results mainly for three different sets of coin parameters \(q\) and \(p\) : (1) \(q=1,p=100\), (2) \(q=1,p=50\) and (3) \(q=4,p=50\) \cite{parameters2}.\\ The dynamical localization of a single quantum walker can be detected from the periodic variation of its probability distribution \(P(x)\) with time. Alternatively, one can also consider the periodic time-variation of the standard deviation as a signature of dynamical localization (see Fig.\ref{fig:110b} and related discussions in Sec. \ref{coins}). For the two-particle case, we have shown below the temporal behavior of the collective dynamical properties of the walkers, which bear clear signatures of both the presence and absence of the two-body dynamical localization phenomena in different cases. In addition, some plots of \(P(x,y)\) are also provided in the supplementary material \cite{oscillatory_spreading}. \subsubsection{Dynamics of two non-interacting walkers under the influence of \(\hat{C}_{\Phi}(t)\) } Let us first describe the results obtained in the simplest case, i.e., the case of two non-interacting walkers starting from the \(|Sep\rangle\) initial state. In this case, the two-particle probability distribution \(P(x,y)\) is simply the product of two single-walker probability distributions. The two single walkers perform mutually independent dynamical localization controlled by the coin parameters \(p\) and \(q\). So, as expected, the system exhibits coin-parameter dependent ``two-body dynamical localization''. \(P(x,y)\) has both bunching and anti-bunching peaks which start moving away from the origin with time and periodically return to the same point simultaneously after an interval of \(p\) steps.
Therefore, \(\Delta_{12}\) exhibits periodic oscillations as the anti-bunching peaks contribute to \(\Delta_{12}\). We have shown the variation of \(\Delta_{12}\) against time for three different combinations of \(q\) and \(p\) in Fig. \ref{fig:10a}. The plots show that the period of oscillation is \(p\) and the number of secondary oscillations is \(q\), as was found in case of the dynamical localization of a single particle. The amplitude of the periodic oscillations depends on the values of both \(q\) and \(p\). It increases with \(p\). For a fixed value of \(p\), the amplitude decreases with increasing \(q\). Since there is no interaction, the correlation function \(C_{12}\) and the entropy \(E(|\psi\rangle)\) remain equal to zero.\\ The entangled bosonic and fermionic initial states generate positional correlations between the particles. We analyze the temporal variations of the correlation function \(C_{12}\) to distinguish and characterize the two-body dynamical localization phenomena generated by the two different initial states. Earlier, we demonstrated that the \(|\psi^{-}\rangle\) state generates pure anti-bunching phenomena for the non-interacting walkers with time-independent Hadamard coins (the case of \(\hat{C}_{\alpha_{1},\alpha_{2}}(t)\) with \(\alpha_{1}=\alpha_{2}=0\)). \(C_{12}\) remained negative throughout that evolution. Here also, we see that \(C_{12}\) is negative during the evolution, but it exhibits distinct periodic oscillations of period \(p\) (see Fig.\ref{fig:12a}). Since the particles become anti-correlated, we call this phenomenon ``anti-correlated'' two-body dynamical localization. The amplitude and period of oscillations of both \(C_{12}\) and \(\Delta_{12}\) are controlled by the parameters \(p\) and \(q\). The nature of the time variation of \(\Delta_{12}\), shown in Fig. \ref{fig:12b}, is similar to that observed in case of the \(|Sep\rangle\) state.
The only difference is that the amplitude of oscillation of \(\Delta_{12}\), for fixed values of \(p\) and \(q\), is higher in the case of \(|\psi^{-}\rangle\), as no terms of \(|\psi^{-}\rangle\) contribute to bunching behavior, in contrast to \(|Sep\rangle\).\\ The \(|\psi^{+}\rangle\) state generates a quite different type of dynamical localization. The difference can be seen in the time-variation of the correlation function \(C_{12}\) shown in Fig. \ref{fig:11a}. In the case of Hadamard coins, \(C_{12}\) remained positive for \(|\psi^{+}\rangle\) throughout the evolution. Here also, \(C_{12}\) mostly remains positive during the evolution. At first glance, the plots of \(C_{12}\) (Fig. \ref{fig:11a}) may also look like an inverted mirror image of the respective plots (Fig. \ref{fig:12a}) of \(C_{12}\) obtained in the case of \(|\psi^{-}\rangle\). However, here \(C_{12}\) exhibits periodic oscillations between positive and negative values, passing through zero. So, the walkers periodically become mutually correlated, uncorrelated, and anti-correlated with time. It is interesting to note that even for the bosonic \(|\psi^{+}\rangle\) state, \(C_{\phi}(t)\) makes the particles mutually anti-correlated for certain short periods of time. Since the particles mostly remain correlated, we call this phenomenon ``correlated" two-body dynamical localization. The related time variation of \(\Delta_{12}\), shown in Fig. \ref{fig:11b}, is slightly different in comparison to the above described case of the \(|Sep\rangle\) state. The amplitude of \(\Delta_{12}\) becomes highest in the case of \(|\psi^{-}\rangle\) among the three different initial states.\\ The system exhibits interesting phenomena when the walkers are controlled by two different coins \(\hat{C}_{H}\) and \(\hat{C}_{\phi}(t)\). The first one is the Hadamard coin and the other one is the coin generating dynamical localization. The related \(4 \times 4\) matrix \(\hat{C}_{H,\Phi}\) has already been described in Sec. \ref{five}.
With \(\hat{C}_{H,\Phi}\), we have studied the QW evolution starting from the \(|\psi^{+}\rangle\) and \(|\psi^{-}\rangle\) states. The average separation \(\Delta_{12}\), as shown in Figs. \ref{fig:11d} and \ref{fig:12d} respectively for the \(|\psi^{+}\rangle\) and \(|\psi^{-}\rangle\) states, grows rapidly in both cases, but there are clear signatures of oscillations in \(\Delta_{12}\), especially in Fig. \ref{fig:12d}. Such oscillatory behavior indicates dynamical localization of one of the particles. The corresponding variations of \(C_{12}\) also show oscillations of increasing amplitude with a time period of \(p\). The plots of \(C_{12}\) are shown in Figs. \ref{fig:11c} and \ref{fig:12c} respectively for the \(|\psi^{+}\rangle\) and \(|\psi^{-}\rangle\) states. For \(|\psi^{-}\rangle\), \(C_{12}\) periodically becomes positive and negative. It indicates that one of the particles is moving away from the origin with time whereas the other particle is performing dynamical localization around the origin. For \(|\psi^{+}\rangle\), \(C_{12}\) remains mostly positive during the evolution and its amplitude of oscillation increases with time. It indicates that one of the particles is going further away from the origin with time and the other particle is performing periodic oscillation with the center of oscillation a bit shifted from the origin in the direction of motion of the first particle. It is interesting to note that even for \(|\psi^{-}\rangle\), the particles become correlated for a certain amount of time.\\ Previously, we described the dynamics of the system under the influence of \(\hat{C}_{\alpha_{1},\alpha_{2}}(t)\) (\(\alpha_{1}=0,\alpha_{2}=1.25\)) (Figs. \ref{fig:2g2} and \ref{fig:3g2}), i.e., the case where the system was evolving under the influence of two coins of contrasting nature. In that case, the particles became correlated, uncorrelated, and anti-correlated periodically with time when they started evolving from a \(|\psi^{+}\rangle\) initial state.
On the contrary, for the combination of \(\hat{C}_{H}\) and \(\hat{C}_{\phi}(t)\), the particles remain correlated throughout the evolution (Fig. \ref{fig:11c}).\\ \subsubsection{Dynamics of two \(\mathbb{1}\) interacting walkers under the influence of \(\hat{C}_{\Phi}(t)\)} Here we describe QW evolutions for \(\mathbb{1}\) interactions between two particles. The \(\mathbb{1}\) interaction generates quite different evolutions for the three different considered initial states. Let us first describe the QW evolution starting from the \(|Sep\rangle\) initial state. In this interacting walk, the identity operator acts on the \(|0, \uparrow ; 0, \uparrow \rangle\) and \(|0, \downarrow ; 0, \downarrow \rangle\) terms of the \(|Sep\rangle\) state. The identity operator does not mix the coin states and, as a result, the particles are translated together in the same direction at each time step. This behavior is independent of the coin parameters as the coin operator does not get a chance to act on the above two terms. The related time variation of \(C_{12}\) is shown in Fig. \ref{fig:13a}. It is observed that \(C_{12}\) rapidly increases with time. Since this strong bunching behavior is independent of the coin parameters, all the curves for different parameter sets overlap each other (see Fig.\ref{fig:13a}). The contribution of the other terms in the \(|Sep\rangle\) state is much weaker than the contribution of the above terms. The contributions of the other terms can be seen in the variations of \(\Delta_{12}\) and \(E(|\psi\rangle)\).
Such strong bunching behavior was also observed, due to the same reasons, in the case of \(\hat{C}_{\alpha_{1},\alpha_{2}}(t)\) coins.\\ Here the particles become entangled as a result of the interaction, and the entanglement entropy \(E(|\psi\rangle)\) oscillates periodically (see Fig.\ref{fig:13c}) about a certain value which appears to be a constant contribution to \(E(|\psi\rangle)\) coming from the \(|0, \uparrow ; 0, \uparrow \rangle\) and \(|0, \downarrow ; 0, \downarrow \rangle\) terms. The temporal variations of the average separation \(\Delta_{12}\) also show periodic oscillations. Figures \ref{fig:13b} and \ref{fig:13c} clearly indicate the influences of \(q\) and \(p\) on the amplitude and time-period of oscillations. For any set of coin parameters, the particles become more entangled as the average separation \(\Delta_{12}\) increases. The particles become minimally entangled when the average separation \(\Delta_{12}=0\).\\ For the bosonic \(|\psi^{+}\rangle\) initial state, a unique scenario is obtained. All the three observables, including the entanglement entropy, exhibit coin-parameter dependent tunable periodic oscillations for all the three sets of coin parameters (see Figs.\ref{fig:14a}-\ref{fig:14c}). It is quite non-trivial that all the three observables exhibit such behavior. So, the signatures of two-walker dynamical localization become evident in all the joint properties of quantum origin. The particles start from the origin being uncorrelated (\(C_{12}=0\)) and minimally entangled (\(E(|\psi\rangle)=1\)). Then, for the first half of the time-period, both the average separation \(\Delta_{12}\) and the entanglement entropy \(E(|\psi\rangle)\) increase with time as the particles become more and more anti-correlated. Then, during the next half of the time-period, both \(\Delta_{12}\) and \(E(|\psi\rangle)\) decrease simultaneously whereas \(C_{12}\) increases.
At the end of each time-period, both the particles return to the origin, being uncorrelated and minimally entangled (\(E(|\psi\rangle)=1\)) (i.e., exactly the state of the system at time \(t=0\)). The time period and amplitude can be controlled by the coin parameters \(p\) and \(q\). The related figures are \ref{fig:14a}-\ref{fig:14c}\cite{oscillatory}. The tunable periodic oscillations of the entanglement entropy \(E(|\psi\rangle)\) are quite interesting. Here, the quasiperiod can be controlled by the parameter \(p\). There are also secondary oscillations controlled by \(q\). So, this is an example of a simple two-particle system where we can generate controllable periodic oscillations of \(E(|\psi\rangle)\).\\ For the \(|\psi^{+}\rangle\) initial state, the correlation function oscillates between zero and a certain negative value. So, the walkers mostly remain anti-correlated during the evolution. This is contrary to the case of time-independent coins \(\hat{C}_{\alpha_{1},\alpha_{2}}\) (with \(\alpha_{1}=\alpha_{2}=0\)), where the particles remain correlated starting from the \(|\psi^{+}\rangle\) initial state as the bunching behavior dominates over the anti-bunching phenomenon.\\ For the \(|\psi^{-}\rangle\) initial state, the collective dynamics and its coin-parameter dependence become more complex. Although the system still exhibits two-body dynamical localization with tunable periodic oscillations of the three observables for some (\(q,p\)) combinations, the \(q,p\) dependences of the time-period and the number of secondary oscillations become more complex in comparison to those found in the previously described cases of the \(|Sep\rangle\) and \(|\psi^{+}\rangle\) states. The temporal variations of the dynamical observables are shown in Figs. \ref{fig:15a}-\ref{fig:15c}. The plots of \(\Delta_{12}\) show flatter peaks, which is also a difference from the previously discussed cases. Fig.
\ref{fig:15a} shows that the particles mostly remain correlated during their evolution for \(q=4,p=50\) whereas they mostly remain anti-correlated for \(q=1,p=100\). So, the nature of the correlations here depends strongly on the coin parameters. It can also be seen that the amplitudes of the periodic oscillations of \(C_{12}\) and \(\Delta_{12}\) are smaller than those found in the previously discussed cases (both for \(q=1,p=50\) and \(q=1,p=100\)). In the previously discussed cases of two-body dynamical localization, the number of secondary oscillations of the dynamical observables was equal to the value of the parameter \(q\). The influence of the parameter \(q\) is more complex here. In order to understand the influence of the coin parameters, we have studied the dynamics for several more values of \(q\) and \(p\) \cite{parameters}. We find that in most cases the system exhibits dynamical localization where the time period of oscillation of the dynamical observables is either \(p\) or \(2p\). However, we could not find any such simple relationship between the number of secondary oscillations and \(q\). Similarly, the results do not indicate any simple dependence of the oscillation amplitudes on \(q,p\). It is interesting to note that a new phenomenon is found for some (\(q,p\)) combinations in which the probability distribution \(P(x,y)\) spreads with time instead of becoming localized. We call it ``oscillatory spreading" as \(P(x,y)\) exhibits time-dependent oscillations. Such behavior is also observed for \(\pi\)-phase interactions.
We describe the phenomenon in more detail in the next paragraph with the help of Figs. \ref{fig:16a}-\ref{fig:17c}.\\ \subsubsection{Dynamics of \(\pi\)-phase interacting walkers under the influence of \(C_{\Phi}(t)\)} When the particles interact via the \(\pi\)-phase interaction and start from either the \(|Sep\rangle\) or the \(|\psi^{+}\rangle\) state, we find dynamical localization only for some specific combinations of the coin parameters \((q,p)\) among those considered \cite{parameters}. For the other values of \((q,p)\), the particles remain correlated and the corresponding two-particle probability distribution \(P(x,y)\) not only spreads with time but also exhibits oscillatory behavior. We call this phenomenon ``correlated oscillatory spreading" \cite{oscillatory_spreading}. The oscillatory spreading behavior is found for \(q=1,p=50\) and \(q=1,p=100\) (see Figs.\ref{fig:16a}-\ref{fig:17c}) \cite{oscillatory_spreading}. After some initial period, the two particles become more and more correlated with time, as shown in Figs. \ref{fig:16a} and \ref{fig:17a}. The entanglement entropy also increases slowly with time. Let us now discuss the dynamical localization behavior found in the case of \(\pi\)-phase interactions. Here, \(q=4,p=50\) generates dynamical localization behavior (see Figs.\ref{fig:16a}-\ref{fig:17c}). However, the simple dependence (\(p\equiv\) period of oscillations, \(q\equiv\) number of secondary oscillations) on the coin parameters gets modified in this case. In order to understand the influence of the coin parameters, we have studied the dynamics for several more values of \(q\) and \(p\) \cite{parameters}. We find that only in a few cases the system exhibits dynamical localization where the time period of oscillation of the dynamical observables is \(2p\). However, we could not find any such simple relationship between the number of secondary oscillations and \(q\).
Similarly, the results do not indicate any simple dependence of the oscillation amplitudes on \(q,p\). Such modification was also found in the previously described case of the \(\mathbb{1}\) interaction and the \(|\psi^{-}\rangle\) initial state. \\ \begin{figure*}[th] \centering \subfigure[\label{fig:16a} ]{\includegraphics[scale=0.45]{Fig8a.pdf}} \subfigure[\label{fig:16b} ]{\includegraphics[scale=0.45]{Fig8b.pdf}} \subfigure[\label{fig:16c} ]{\includegraphics[scale=0.45]{Fig8c.pdf}} \subfigure[\label{fig:17a} ]{\includegraphics[scale=0.45]{Fig8d.pdf}} \subfigure[\label{fig:17b} ]{\includegraphics[scale=0.45]{Fig8e.pdf}} \subfigure[\label{fig:17c} ]{\includegraphics[scale=0.45]{Fig8f.pdf}} \subfigure[\label{fig:18a} ]{\includegraphics[scale=0.45]{Fig8g.pdf}} \subfigure[\label{fig:18b} ]{\includegraphics[scale=0.45]{Fig8h.pdf}} \subfigure[\label{fig:18c} ]{\includegraphics[scale=0.45]{Fig8i.pdf}} \caption{\label{fig:16}{The time-variation of the joint properties for two \(\pi\)-phase interacting quantum walkers: (a) variations of \(C_{12}\) for \(|Sep\rangle\); (b) variations of \(\Delta_{12}\) for \(|Sep\rangle\); (c) variations of \(E(|\psi\rangle)\) for \(|Sep\rangle\); (d) variations of \(C_{12}\) for \(|\psi^{+}\rangle\); (e) variations of \(\Delta_{12}\) for \(|\psi^{+}\rangle\); (f) variations of \(E(|\psi\rangle)\) for \(|\psi^{+}\rangle\); (g) variations of \(C_{12}\) for \(|\psi^{-}\rangle\); (h) variations of \(\Delta_{12}\) for \(|\psi^{-}\rangle\); (i) variations of \(E(|\psi\rangle)\) for the \(|\psi^{-}\rangle\) initial state. The coin parameters are mentioned inside the plots.} } \end{figure*} When the particles start from the \(|\psi^{-}\rangle\) initial state, the dynamics is found to be quite similar to that observed in the case of two \(\mathbb{1}\) interacting particles starting from the \(|\psi^{-}\rangle\) state (see Figs.\ref{fig:18a}-\ref{fig:18c}).
Even in the case of time-independent \(C_{\alpha_{1},\alpha_{2}}\) (\(\alpha_{1}=\alpha_{2}=0\)) coins, the \(|\psi^{-}\rangle\) initial state generated qualitatively similar dynamics in the cases of \(\mathbb{1}\) and \(\pi\)-phase interactions. So, for the \(\pi\)-phase interaction, the simple dependence (\(p\equiv\) period of oscillations, \(q\equiv\) number of secondary oscillations) on the coin parameters gets modified for all the three different initial states.\\ \subsubsection{Overall diversity in the \(\hat{C}_{\phi}(t)\) driven dynamics} The time-dependent coin \(\hat{C}_{\phi}(t)\) generates a wide spectrum of two-body dynamical phenomena depending on the nature of the interaction, the initial state, and the coin parameters. In some cases, the system exhibits ``simple" dynamical localization, where the \(q,p\) dependence of the periodic oscillations of the dynamical observables is simple. Moreover, such phenomena can be divided into two categories: (I) correlated dynamical localization and (II) anti-correlated dynamical localization, depending on the nature of the two different entangled initial states. In some other cases, the system exhibits ``complex" dynamical localization, where the \(q,p\) dependence of the periodic oscillations of the dynamical observables is more complicated. Moreover, the nature of the positional correlation in these cases becomes dependent on the coin parameters. Finally, the system also exhibits a third kind of dynamical phenomenon: ``oscillatory spreading", where the system does not localize and the corresponding probability distribution shows oscillatory behavior along with spreading.\\ We have found particular cases of two-body dynamical localization in which all three joint quantities (\(\Delta_{12}\), \(C_{12}\), and \(E(|\psi\rangle)\)) perform periodic oscillations with coin-parameter dependent amplitude and time-period.\\ We have to remember that these are transient behaviors, obtained for rational values of \(\frac{q}{p}\) \cite{coin2}.
At long times we should expect ballistic dynamics at the level of individual particles. The dynamical localization phenomena can be further modified by using two different coins of the kind \(\hat{C}_{\Phi}(t)\) with two different sets of \(q,p\) parameters. However, we have already presented a rich variety of dynamics by just focusing on the simpler cases, which could be helpful for developing an understanding of more complex cases. Some of the cases studied here need a better understanding, which is left for future work.\\ \section{Conclusion \& Outlook\label{eight}} Controlling QWs is a topic of current interest \cite{controlling}. Researchers have been prescribing new types of quantum coins (step-dependent coins \cite{step-dependent}, position-dependent coins \cite{position-dependent}, a combination of two entangled coins \cite{two-entangled-coins}, etc.) and different types of shift operators \cite{shift-operator} to manipulate the QW dynamics of a single particle. We have presented here a first numerical simulation on controlling two-particle walks using time-dependent coins. Although various types of time-dependent coins can be constructed, we have considered here two specific time-dependent coins which enable a realization of a wide spectrum of dynamical behavior. The results presented here can be considered as a generalization of two-particle QW dynamics on a line. Some of the reported behaviors can be obtained in different situations even without the use of time-dependent coins. For example, two quantum walkers in the presence of decoherence \cite{decoherence} are expected to generate dynamics qualitatively similar to that demonstrated using the coin \(\hat{C}_{\alpha_{1},\alpha_{2}}(t)\) with \(\alpha_{1}=\alpha_{2}=0.5\).
Position-dependent phases \cite{15,16}, if employed in a system of two particles, are expected to generate two-body dynamical localization and oscillatory spreading phenomena qualitatively similar to those reported here.\\ One obvious extension of the present work would be to explore the dynamics on more complicated graphs. We have studied here the collective dynamics of two quantum particles. One can also study the dynamics of a multi-particle system using the time-dependent coins. We have considered here only \(\mathbb{1}\) and \(\pi\)-phase interactions. An interesting extension of the present work would be to consider other types of possible two-particle interactions, such as long-range interactions \cite{long-range-interactions}. Our work will form a basis for understanding such results by comparing and contrasting them with the results presented here.\\ We have also studied here the collective dynamics of two particles of different nature. Similar studies with multiple particles will help to answer the following question: How far can we manipulate the collective dynamical behavior of a group of quantum particles by tuning only the behavior of a single particle? It is quite possible that a single particle with tunable dynamics in a group of particles can influence the collective dynamics in the presence of multi-particle quantum correlations and multipartite entanglement. However, an experimental realization of such a theoretically interesting idea may be a difficult problem.\\ \section{Acknowledgements} We thank Professor Parongama Sen for a careful reading of the manuscript and for her helpful comments and suggestions. T.K. Bose thanks Sudhindu Bikash Mandal for helpful discussions.\\ \bibliographystyle{revtex}
\section{Introduction} Many real-world problems, such as traffic routing \cite{leblanc1975}, market prediction \cite{fainmesser2012}, and social network dynamics \cite{skyrms2009}, involve multiple learning agents that interact and compete with each other. Such problems can be described as \emph{repeated games}, in which the goal of every agent is to maximize her cumulative reward. In most cases, the underlying game is \emph{unknown} to the agents, and the only way to learn about it is by repeatedly playing and observing the corresponding game outcomes. The performance of an agent in a repeated game is often measured in terms of \emph{regret}. For example, in traffic routing, the regret of an agent quantifies the reduction in travel time had the agent known the routes chosen by the other agents. No-regret algorithms for playing unknown repeated games exist, and their performance depends on the information available at every round. In the case of \emph{full information} feedback, the agent observes the obtained reward, as well as the rewards of other non-played actions. While these algorithms attain strong regret guarantees, such full information feedback is often unrealistic in real-world applications. In traffic routing, for instance, agents only observe the incurred travel times and cannot observe the travel times for the routes not chosen. In this paper, we address this challenge by considering a more realistic feedback model, where at every round of the game, the agent plays an action and observes the noisy reward outcome. In addition to this \emph{bandit} feedback, the agent \emph{also observes the actions played by other agents}. Under this feedback model and further regularity assumptions on the reward function, we present a novel no-regret algorithm for playing unknown repeated games. The proposed algorithm alleviates the need for full information feedback while still achieving comparable regret guarantees. 
\renewcommand{\arraystretch}{1.4} \begin{table}[t] \small \hspace{-0em} \centering \setlength\abovecaptionskip{-0.7em} \setlength\belowcaptionskip{-1em} \begin{tabular}{llll} \multicolumn{1}{l|}{} & \multicolumn{1}{c|}{\textsc{Hedge} \cite{freund1997}}& \multicolumn{1}{c|}{\textsc{Exp3} \cite{auer2003}} & \multicolumn{1}{c|}{\textsc{GP-MW} [this paper]} \\ \hline \multicolumn{1}{l|}{\textbf{Feedback}} & \multicolumn{1}{c|}{rewards for all actions} & \multicolumn{1}{c|}{obtained reward} & \multicolumn{1}{c|}{obtained reward + opponents' actions} \\ \hline \multicolumn{1}{l|}{\textbf{Regret}} & \multicolumn{1}{c|}{$\mathcal{O}\big(\sqrt{T \log K_i}\big)$} & \multicolumn{1}{c|}{$\mathcal{O}\big(\sqrt{ T K_i \log K_i}\big)$} & \multicolumn{1}{c|}{$\mathcal{O}\big( \sqrt{T\log K_i} + \gamma_T\sqrt{T}\big)$} \\ \hline & & & \end{tabular} \caption{Finite action set regret bounds that depend on the available feedback observed by player $i$ at each time step. Time horizon is denoted with $T$, and $K_i$ is the number of actions available to player~$i$. Kernel dependent quantity $\gamma_T$ (Eq.~\eqref{eq:mmi}) captures the degrees of freedom in the reward function.} \label{table1} \end{table} \textbf{Related Work.} In the \emph{full information setting}, multiplicative-weights (MW) algorithms~\cite{littlestone1994} such as \textsc{Hedge} \cite{freund1997} attain optimal $\mathcal{O}(\sqrt{T\log K_i})$ regret, where $K_i$ is the number of actions available to agent $i$. In the case of convex action sets in $ \mathbb{R}^{d_i}$, and convex and Lipschitz rewards, online convex optimization algorithms attain optimal $\mathcal{O}(\sqrt{T})$ regret \cite{zinkevich2003}. By only assuming Lipschitz rewards and bounded action sets, $\mathcal{O}(\sqrt{d_i T \log T })$ regret follows from \cite{maillard2010}, while in \cite{hazan2017} the authors provide efficient gradient-based algorithms with `local' regret guarantees. 
Full information feedback requires perfect knowledge of the game and is unrealistic in many applications. Our proposed algorithm overcomes this limitation while achieving comparable regret bounds. In the more challenging \emph{bandit setting}, existing algorithms have a \emph{substantially worse} dependence on the size of the action set. For finite actions, \textsc{Exp3} \cite{auer2003} and its variants ensure optimal $\mathcal{O}(\sqrt{T K_i \log K_i})$ regret. In the case of convex action sets, and convex and Lipschitz rewards, bandit algorithms attain $\mathcal{O}(\mathrm{poly}(d_i) \sqrt{T})$ regret \cite{bubeck2017}, while in the case of Lipschitz rewards $\mathcal{O}(T^\frac{d_i + 1}{d_i + 2} \log T)$ regret can be obtained \cite{slivkins2014}. In contrast, our algorithm works in the \emph{noisy} bandit setting and requires the knowledge of the actions played by other agents. This allows us to, under some regularity assumptions, obtain substantially improved performance. In Table~\ref{table1}, we summarize the regret and feedback model of our algorithm together with the existing no-regret algorithms. The previously mentioned online algorithms reduce the unknown repeated game to a single-agent problem against an adversarial and adaptive environment that selects a different reward function at every time step \cite{cesa-bianchi2006}. A fact not exploited by these algorithms is that in a repeated game, the rewards obtained at different time steps are \emph{correlated} through a static unknown reward function. In \cite{syrgkanis2015} the authors use this fact to show that, if every agent uses a regularized no-regret algorithm, their individual regret grows at a lower rate of $\mathcal{O}(T^{1/4})$, while the sum of their rewards grows only as $\mathcal{O}(1)$. In contrast to \cite{syrgkanis2015}, we focus on the single-player viewpoint, and we do not make any assumption on opponents' strategies\footnote{In fact, they are allowed to be adaptive and adversarial.}.
Instead, we show that by observing opponents'~actions, the agent can exploit the structure of the reward function to reduce her individual regret. \textbf{Contributions.} We propose a novel no-regret bandit algorithm \textsc{GP-MW} for playing unknown repeated games.~\textsc{GP-MW} combines the ideas of the multiplicative weights update method \cite{littlestone1994}, with GP upper confidence bounds, a powerful tool used in GP bandit algorithms (e.g., \cite{srinivas2010,bogunovic2016truncated}). When a finite number $K_i$ of actions is available to player $i$, we provide a novel high-probability regret bound $\mathcal{O}(\sqrt{T\log K_i} + \gamma_T\sqrt{T})$, that depends on a kernel-dependent quantity $\gamma_T$ \cite{srinivas2010}. For common kernel choices, this results in a sublinear regret bound, which grows only logarithmically in $K_i$. In the case of infinite action subsets of $\mathbb{R}^{d_i}$ and Lipschitz rewards, via a discretization argument, we obtain a high-probability regret bound of $\mathcal{O}(\sqrt{d_i T \log( d_i T)} + \gamma_T \sqrt{T})$. We experimentally demonstrate that \textsc{GP-MW} outperforms existing bandit baselines in random matrix games and traffic routing problems. Moreover, we present an application of \textsc{GP-MW} to a novel robust Bayesian optimization setting in which our algorithm performs favourably in comparison to other baselines. \section{Problem Formulation} \label{sec:problem_formulation} We consider a repeated static game among $N$ non-cooperative agents, or players. Each player $i$ has an action set $\mathcal{A}^i \subseteq \mathbb{R}^{d_i}$ and a reward function $r^i : \mathbf{\mathcal{A}} = \mathcal{A}^1 \times \cdots \times \mathcal{A}^N \rightarrow [0,1]$. We assume that the reward function $r^i$ is unknown to player $i$. 
At every time $t$, players simultaneously choose actions $\mathbf{a}_t = (a_t^1, \ldots, a_t^N)$ and player $i$ obtains a reward $r^i(a_t^i,a_t^{-i})$, which depends on the played action $a_t^i$ and the actions $a_t^{-i} := (a_t^1, \ldots, a_t^{i-1}, a_t^{i+1}, \ldots, a_t^N)$ of all the other players. The goal of player $i$ is to maximize the cumulative reward $\sum_{t=1}^T r^i(a_t^i, a_t^{-i})$. After $T$ time steps, the \emph{regret} of player $i$ is defined as \begin{equation}\label{eq:regret} R^i(T) = \max_{a \in \mathcal{A}^i} \sum_{t=1}^T r^i(a, a_t^{-i}) - \sum_{t=1}^T r^i(a_t^i, a_t^{-i}) \, , \end{equation} i.e., the maximum gain the player could have achieved by playing the single best fixed action in case the sequence of opponents' actions $\{ a_t^{-i}\}_{t=1}^T$ and the reward function were known in hindsight. An algorithm is \emph{no-regret} for player $i$ if $R^i(T)/T \rightarrow 0$ as $T\rightarrow \infty$ for any sequence $\{ a_t^{-i} \}_{t=1}^T$. First, we consider the case of a finite number of available actions $K_i$, i.e., $|\mathcal{A}^i| = K_i$. To achieve no-regret, the player should play mixed strategies \cite{cesa-bianchi2006}, i.e., probability distributions $\mathbf{w}_t^i \in [0,1]^{K_i}$ over $\mathcal{A}^i$. With full-information feedback, at every time $t$ player $i$ observes the vector of rewards $\mathbf{r}_t = [r^i(a, a^{-i}_t)]_{a \in \mathcal{A}^i} \in \mathbb{R}^{K_i}$. With bandit feedback, only the reward $r^i(a_t^i, a^{-i}_t)$ is observed by the player. Existing full information and bandit algorithms \cite{freund1997,auer2003} reduce the repeated game to a sequential decision-making problem between player $i$ and an adaptive environment that, at each time $t$, selects a reward function $r_t : \mathcal{A}_i \rightarrow [0,1]$.
In a repeated game, the reward that player $i$ observes at time $t$ is a \emph{static} fixed function of $(a_t^i , a_t^{-i})$, i.e., $r_t(a_t^i) = r^i(a^i_t, a^{-i}_t)$, and in many practical settings similar game outcomes lead to similar rewards (see, e.g., the traffic routing application in Section~\ref{sec:exp_routing}). In contrast to existing approaches, we exploit such \emph{correlations} by considering the feedback and reward function models described below. \textbf{Feedback model.} We consider a \emph{noisy} bandit feedback model where, at every time $t$, player $i$ observes a noisy measurement of the reward $\tilde{r}_t^i = r^i(a_t^i, a_t^{-i}) + \epsilon^i_t$ where $\epsilon^i_t$ is $\sigma_i$-sub-Gaussian, i.e., $\mathbb{E}[ \exp(c \, \epsilon_t^i)] \leq \exp(c^2 \sigma_i^2/2)$ for all $c \in \mathbb{R}$, with independence over time. The presence of noise is typical in real-world applications, since perfect measurements are unrealistic, e.g., measured travel times in traffic routing. Besides the standard noisy bandit feedback, we assume player $i$ also observes the played actions $a_t^{-i}$ of all the other players. In some applications, the reward function $r^i$ depends only indirectly on $a_t^{-i}$ through some aggregative function $\psi(a_t^{-i})$. For example, in traffic routing \cite{leblanc1975}, $\psi(a_t^{-i})$ represents the total occupancy of the network's edges, while in network games \cite{jackson2015}, it represents the strategies of player $i$'s neighbours. In such cases, it is sufficient for the player to observe $\psi(a_t^{-i})$ instead of $a_t^{-i}$. 
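As a minimal illustration of this setting, the regret $R^i(T)$ defined above can be evaluated in hindsight once the sequence of joint actions has been recorded, since this feedback model makes the opponents' actions observable. The reward function and action sets below are hypothetical placeholders chosen only for this sketch, not part of the paper's experiments:

```python
def regret(reward, my_actions, opp_actions, action_set):
    """Regret of player i: best fixed action in hindsight minus realized reward.

    reward(a, a_opp) plays the role of r^i; opp_actions is the observed
    sequence a_t^{-i}, available under the feedback model above.
    """
    realized = sum(reward(a, b) for a, b in zip(my_actions, opp_actions))
    best_fixed = max(
        sum(reward(a, b) for b in opp_actions) for a in action_set
    )
    return best_fixed - realized


# Hypothetical 2-action matching game: reward 1 if the actions match, else 0.
r = lambda a, b: 1.0 if a == b else 0.0
R = regret(r, my_actions=[0, 0, 1], opp_actions=[1, 1, 1], action_set=[0, 1])
# R == 2.0: the best fixed action (1) earns 3, the realized play earned 1.
```

An algorithm is no-regret precisely when this quantity, divided by $T$, vanishes for every opponent sequence.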
\par \textbf{Regularity assumption on rewards.} In this work, we assume the unknown reward function $r^i : \mathbf{\mathcal{A}} \rightarrow [0,1]$ has a bounded norm in a reproducing kernel Hilbert space (RKHS) associated with a positive semi-definite kernel function $k^i(\cdot, \cdot)$, that satisfies $k^i(\mathbf{a}, \mathbf{a}') \leq 1$ for all $\mathbf{a} , \mathbf{a}' \in \mathbf{\mathcal{A}}$. The RKHS norm $\| r^i \|_{k^i} = \sqrt{ \langle r^i, r^i \rangle _{k^i}}$ measures the smoothness of $r^i$ with respect to the kernel function $k^i(\cdot, \cdot)$, while the kernel encodes the similarity between two different outcomes of the game $\mathbf{a} , \mathbf{a}' \in \mathbf{\mathcal{A}}$. Typical kernel choices are \emph{polynomial}, \emph{Squared Exponential}, and \emph{Matérn}: \begin{gather*} k_{poly}(\mathbf{a}, \mathbf{a}' ) = \left(b + \frac{\mathbf{a}^\top \mathbf{a}'}{l} \right)^n \, , \qquad k_{SE}(\mathbf{a}, \mathbf{a}') = \exp \left(- \frac{s^2}{2 l^2} \right) \, , \\ k_{Mat \acute{e}rn}(\mathbf{a}, \mathbf{a}') = \frac{2^{1-\nu}}{\Gamma(\nu)} \left( \frac{s \sqrt{2\nu}}{l} \right)^\nu B_\nu \left( \frac{s \sqrt{2\nu}}{l} \right) \, , \end{gather*} where $s = \| \mathbf{a}- \mathbf{a}' \|_2$, $B_\nu$ is the modified Bessel function, and $l, n, \nu > 0$ are kernel hyperparameters \cite[Section 4]{rasmussen2005}. This is a standard smoothness assumption used in kernelized bandits and Bayesian optimization (e.g., \cite{srinivas2010,chowdhury2017}). In our context it allows player $i$ to use the observed history of play to learn about $r^i$ and predict unseen game outcomes. Our results are not restricted to any specific kernel function, and depending on the application at hand, various kernels can be used to model different types of reward functions. Moreover, composite kernels (see e.g., \cite{krause2011}) can be used to encode the differences in the structural dependence of $r^i$ on $a^i$ and $a^{-i}$. 
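For concreteness, the three kernels above can be evaluated as in the following sketch. Only the half-integer Matérn orders $\nu = 1/2$ and $\nu = 3/2$, for which the Bessel function $B_\nu$ reduces to a closed form, are written out, and all hyperparameter defaults are arbitrary choices for illustration:

```python
import math


def _dist(a, b):
    """Euclidean distance s = ||a - b||_2 between two action profiles."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def k_poly(a, b, bias=1.0, l=1.0, n=2):
    """Polynomial kernel (b + a.a'/l)^n."""
    return (bias + sum(x * y for x, y in zip(a, b)) / l) ** n


def k_se(a, b, l=1.0):
    """Squared Exponential kernel exp(-s^2 / (2 l^2))."""
    return math.exp(-_dist(a, b) ** 2 / (2 * l ** 2))


def k_matern(a, b, l=1.0, nu=0.5):
    """Matern kernel, closed forms for half-integer nu only.

    General nu requires the modified Bessel function K_nu (e.g. scipy).
    """
    s = _dist(a, b)
    if nu == 0.5:
        return math.exp(-s / l)
    if nu == 1.5:
        z = math.sqrt(3.0) * s / l
        return (1.0 + z) * math.exp(-z)
    raise NotImplementedError("only nu in {0.5, 1.5} implemented here")
```

All three satisfy $k(\mathbf{a}, \mathbf{a}) \leq 1$ after suitable normalization, matching the boundedness assumption above.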
It is well known that Gaussian Process models can be used to learn functions with bounded RKHS norm \cite{srinivas2010, chowdhury2017}. A GP is a probability distribution over functions $f(\mathbf{a}) \sim \mathcal{GP}(\mu(\mathbf{a}), k(\mathbf{a}, \mathbf{a}'))$, specified by its mean and covariance functions $\mu(\cdot)$ and $k(\cdot, \cdot)$, respectively. Given a history of measurements $\{ y_j \}_{j=1}^t$ at points $\{ \mathbf{a}_j \}_{j=1}^t$ with $y_j = f( \mathbf{a}_j ) + \epsilon_j$ and $\epsilon_j \sim \mathcal{N}(0,\sigma^2)$, the posterior distribution under a $\mathcal{GP}(0, k(\mathbf{a}, \mathbf{a}'))$ prior is also Gaussian, with mean and variance functions: \begin{align} \mu_t(\mathbf{a}) &= \mathbf{k}_t(\mathbf{a})^\top (\mathbf{K}_t + \sigma^2 \mathbf{I}_t)^{-1} \mathbf{y}_t \label{eq:mean_update}\\ \sigma_t^2(\mathbf{a}) &= k(\mathbf{a}, \mathbf{a}) - \mathbf{k}_t(\mathbf{a})^\top (\mathbf{K}_t + \sigma^2 \mathbf{I}_t)^{-1} \mathbf{k}_t(\mathbf{a}) \, , \label{eq:var_update} \end{align} where $\mathbf{k}_t(\mathbf{a}) = [k(\mathbf{a}_j, \mathbf{a})]_{j=1}^t$, $\mathbf{y}_t = [y_1, \ldots, y_t]^\top$, and $\mathbf{K}_t = [k(\mathbf{a}_j, \mathbf{a}_{j'})]_{j,j'}$ is the kernel matrix. At time $t$, an upper confidence bound on $f$ can be obtained as: \begin{equation}\label{eq:ucb} UCB_t(\mathbf{a}) := \mu_{t-1}(\mathbf{a}) + \beta_t \sigma_{t-1}(\mathbf{a}) \, , \end{equation} where $\beta_t$ is a parameter that controls the width of the confidence bound and ensures $UCB_t(\mathbf{a}) \geq f(\mathbf{a})$, for all $\mathbf{a} \in \mathbf{\mathcal{A}}$ and $t \geq 1$, with high probability \cite{srinivas2010}. We make this statement precise in Theorem~\ref{thm:1}. 
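The posterior updates \eqref{eq:mean_update}--\eqref{eq:var_update} and the upper confidence bound \eqref{eq:ucb} translate into a few lines of linear algebra. The sketch below uses a plain matrix inverse for clarity (a Cholesky solve would be preferred numerically); `kernel` is any positive semi-definite kernel function and `sigma`, `beta` are assumed inputs:

```python
import numpy as np

def gp_posterior(A_hist, y_hist, a_query, kernel, sigma=0.1):
    """Posterior mean and variance at a_query under a GP(0, k) prior,
    given noisy observations y_j = f(a_j) + eps_j (eqs. (1)-(2))."""
    K = np.array([[kernel(ai, aj) for aj in A_hist] for ai in A_hist])
    k_vec = np.array([kernel(aj, a_query) for aj in A_hist])
    G = np.linalg.inv(K + sigma**2 * np.eye(len(A_hist)))
    mu = k_vec @ G @ np.array(y_hist)
    var = kernel(a_query, a_query) - k_vec @ G @ k_vec
    return mu, var

def ucb(A_hist, y_hist, a_query, kernel, beta=2.0, sigma=0.1):
    """Upper confidence bound UCB_t(a) = mu_{t-1}(a) + beta_t * sigma_{t-1}(a)."""
    mu, var = gp_posterior(A_hist, y_hist, a_query, kernel, sigma)
    return mu + beta * np.sqrt(max(var, 0.0))
```

With a single noisy observation, the posterior mean shrinks the observed value toward the prior mean by a factor $1/(1+\sigma^2)$, and the posterior variance drops below the prior variance, as expected.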
\looseness=-1 Due to the above regularity assumptions and feedback model, player $i$ can use the history of play $\{ (\mathbf{a}_1, \tilde{r}_1^i) , \ldots, (\mathbf{a}_{t-1}, \tilde{r}_{t-1}^i) \}$ to compute an upper confidence bound $UCB_t(\cdot)$ of the unknown reward function $r^i$ by using \eqref{eq:ucb}. In the next section, we present our algorithm that makes use of $UCB_t(\cdot)$ to simulate full information feedback. \section{The \textsc{GP-MW} Algorithm}\label{sec:results} We now introduce \textsc{GP-MW}, a novel no-regret bandit algorithm, which can be used by a generic player $i$ (see Algorithm \ref{alg:GPHedge}). \textsc{GP-MW} maintains a probability distribution (or mixed strategy) $\mathbf{w}_t^i$ over $\mathcal{A}^i$ and updates it at every time step using a multiplicative-weight (MW) subroutine (see~\eqref{eq:mwu}) that requires full information feedback. Since such feedback is not available, \textsc{GP-MW} builds (in~\eqref{eq:opt_rewards}) an \emph{optimistic} estimate of the true reward of every action via the upper confidence bound $UCB_t$ of $r^i$. Moreover, since rewards are bounded in $[0,1]$, the algorithm makes use of $\min\{ 1, UCB_t(\cdot)\}$. At every time step $t$, \textsc{GP-MW} plays an action $a_t^i$ sampled from $\mathbf{w}_t^i$, and uses the noisy reward observation $\tilde{r}_t^i$ and actions $a_t^{-i}$ played by other players to compute the updated upper confidence bound $UCB_{t+1}(\cdot)$. 
\begin{algorithm}[t]
\caption{The \textsc{GP-MW} algorithm for player $i$}
\label{alg:GPHedge}
\textbf{Input: } Set of actions $\mathcal{A}^i$, GP prior $(\mu_0, \sigma_0, k^i)$, parameters $\{\beta_t\}_{t\geq 1}, \eta$
\begin{algorithmic}[1]
\State Initialize: $\mathbf{w}_1^i = \frac{1}{K_i}(1, \ldots, 1) \in \mathbb{R}^{K_i}$
\For{$t= 1,2, \dots, T$}
\State Sample action $a_t^i \sim \mathbf{w}_t^i$
\State Observe noisy reward $\tilde{r}_t^i$ and opponents' actions $a_t^{-i}$: $$\hspace{2em}\tilde{r}_t^i = r^i(a_t^i, a_t^{-i}) + \epsilon^i_t$$
\State Compute optimistic reward estimates $\hat{\mathbf{r}}_t \in \mathbb{R}^{K_i}$:
\begin{equation} \label{eq:opt_rewards} [\hat{\mathbf{r}}_t]_a = \min\{ 1, UCB_{t}( a, a_t^{-i})\} \quad \mathrm{for\;every}\quad a = 1,\ldots, K_i \end{equation}
\State Update mixed strategy:
\begin{equation} \label{eq:mwu} [\mathbf{w}_{t+1}^i]_a = \frac{[\mathbf{w}_{t}^i]_a \exp( -\eta \: (1-[\hat{\mathbf{r}}_t]_a))}{\sum_{k=1}^{K_i} [\mathbf{w}_{t}^i]_k \exp( -\eta \: (1- [\hat{\mathbf{r}}_t]_k))} \quad \mathrm{for\;every}\quad a = 1,\ldots, K_i \end{equation}
\State Update $\mu_t, \sigma_t$ according to \eqref{eq:mean_update}-\eqref{eq:var_update} by appending $(\mathbf{a}_t , \tilde{r}_t^i)$ to the history of play.
\EndFor
\end{algorithmic}
\end{algorithm}
In Theorem~\ref{thm:1}, we present a high-probability regret bound for \textsc{GP-MW}; all proofs for this section can be found in the supplementary material. The obtained bound depends on the \emph{maximum information gain}, a kernel-dependent quantity defined as: \begin{equation*} \label{eq:mmi} \gamma_t := \max_{\mathbf{a}_1, \ldots, \mathbf{a}_t} \frac{1}{2} \log \det (\mathbf{I}_t + \sigma^{-2} \mathbf{K}_t) \,. \end{equation*} It quantifies the maximal reduction in uncertainty about $r^i$ after observing outcomes $\{ \mathbf{a}_j\}_{j=1}^t$ and the corresponding noisy rewards.
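The two updates in the loop body of Algorithm~\ref{alg:GPHedge}, \eqref{eq:opt_rewards} and \eqref{eq:mwu}, reduce to a few lines. Below is a hedged sketch in which `ucb_values` stands in for $UCB_t(a, a_t^{-i})$ evaluated at every action; the toy loop fills it with placeholder numbers rather than a real GP posterior:

```python
import numpy as np

def gpmw_step(w, ucb_values, eta):
    """One GP-MW round: clip the optimistic rewards to [0, 1] (eq. (5)),
    then apply the multiplicative-weights rule on the losses 1 - r_hat (eq. (6))."""
    r_hat = np.minimum(1.0, ucb_values)
    w_new = w * np.exp(-eta * (1.0 - r_hat))
    return w_new / w_new.sum()

# toy run: action 2 always looks best under the (placeholder) UCB scores
rng = np.random.default_rng(0)
K_i, eta = 5, 0.5
w = np.full(K_i, 1.0 / K_i)
for t in range(50):
    ucb_vals = rng.uniform(0.0, 0.9, size=K_i)  # placeholder for UCB_t(a, a_t^{-i})
    ucb_vals[2] = 1.2                           # clipped to 1 -> zero loss for action 2
    w = gpmw_step(w, ucb_vals, eta)
```

After a few rounds the mixed strategy concentrates on the action whose clipped optimistic reward is persistently highest, which is exactly the behaviour the MW subroutine is meant to produce.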
The result of \cite{srinivas2010} shows that this quantity is sublinear in $T$, e.g., $\gamma_T= \mathcal{O}((\log T )^{d+1})$ in the case of $k_{SE}$, and $\gamma_T= \mathcal{O}\big(T ^\frac{d^2 + d}{2 \nu + d^2 + d}\log T \big)$ in the case of $k_{Mat \acute{e}rn}$, where $d$ is the total dimension of the outcomes $\mathbf{a} \in \mathbf{\mathcal{A}}$, i.e., $d = \sum_{i=1}^N d_i$. \begin{theorem}\label{thm:1} Fix $\delta \in (0,1)$ and assume $\epsilon^i_t$'s are $\sigma_i$-sub-Gaussian with independence over time. For any $r^i$ such that $\| r^i\|_{k^i}\leq B$, if player $i$ plays actions from $\mathcal{A}_i$, $|\mathcal{A}_i|= K_i$, according to \textsc{GP-MW} with $\beta_t = B + \sqrt{2(\gamma_{t-1} + \log(2/\delta))}$ and $\eta = \sqrt{(8\log K_i)/T}$, then with probability at least $1- \delta$, \begin{equation*} R^i(T) = \mathcal{O}\left(\sqrt{T \log K_i} + \sqrt{ T \log(2/\delta)} + B\sqrt{T \gamma_T} + \sqrt{T\gamma_T(\gamma_T + \log(2/\delta))}\right) \,. \end{equation*} \end{theorem} The proof of this theorem follows by the decomposition of the regret of \textsc{GP-MW} into the sum of two terms. The first term corresponds to the regret that player $i$ incurs with respect to the sequence of computed upper confidence bounds. The second term is due to not knowing the true reward function $r^i$. The proof of Theorem~\ref{thm:1} then proceeds by bounding the first term using standard results from adversarial online learning ~\cite{cesa-bianchi2006}, while the second term is upper bounded by using regret bounding techniques from GP optimization ~\cite{srinivas2010, bogunovic2018}. Theorem~\ref{thm:1} can be made more explicit by substituting bounds on $\gamma_T$. For instance, in the case of the squared exponential kernel, the regret bound becomes $R^i(T) = \mathcal{O}\Big( \big( (\log K_i)^{1/2} + (\log T)^{d+1} \big) \sqrt{T} \Big)$. 
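For a fixed set of outcomes, the log-determinant inside the definition of $\gamma_t$ is straightforward to evaluate; the hardness of $\gamma_t$ itself comes from the maximization over all size-$t$ sets. A small sketch (the `kernel` argument is any kernel function, as above):

```python
import numpy as np

def information_gain(A_pts, kernel, sigma=1.0):
    """Evaluate (1/2) log det(I_t + sigma^{-2} K_t) for a fixed outcome set A_pts;
    gamma_t is the maximum of this quantity over all sets of size t."""
    K = np.array([[kernel(a, b) for b in A_pts] for a in A_pts])
    sign, logdet = np.linalg.slogdet(np.eye(len(A_pts)) + K / sigma**2)
    return 0.5 * logdet
```

Two well-separated (nearly independent) points yield a larger gain than a duplicated point, reflecting that redundant observations reduce uncertainty about $r^i$ less.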
In comparison to the standard multi-armed bandit regret bound $\mathcal{O}(\sqrt{T K_i \log K_i})$ (e.g., \cite{auer2003}), this regret bound does not depend on $\sqrt{K_i}$, similarly to the ideal full information setting. \subsection*{The case of continuous action sets} In this section, we consider the case when $\mathcal{A}^i$ is a (continuous) compact subset of $\mathbb{R}^{d_i}$. In this case, further assumptions are required on $r^i$ and $\mathcal{A}_i$ to achieve sublinear regret. Hence, we assume a bounded set $\mathcal{A}_i \subset \mathbb{R}^{d_i}$ and $r^i$ to be Lipschitz continuous in $a^i$. Under the same assumptions, existing regret bounds are $\mathcal{O}(\sqrt{d_i T \log T })$ and $\mathcal{O}(T^\frac{d_i + 1}{d_i + 2} \log T)$ in the full information \cite{maillard2010} and bandit setting \cite{slivkins2014}, respectively. By using a discretization argument, we obtain a high probability regret bound for \textsc{GP-MW}. \begin{corollary}\label{cor:infinite_arms} Let $\delta \in (0,1)$ and $\epsilon^i_t$ be $\sigma_i$-sub-Gaussian with independence over time. Assume $\Vert r^i \Vert_k \leq B$, $\mathcal{A}_i \subset [0, b]^{d_i}$, and $r^i$ is $L$-Lipschitz in its first argument, and consider the discretization $[\mathcal{A}^i]_T$ with $|[\mathcal{A}^i]_T| = (L b \sqrt{d_i T})^{d_i}$ such that $\Vert a - [a]_T \Vert_1 \leq \sqrt{d_i/T} /L$ for every $a \in \mathcal{A}^i$, where $[a]_T$ is the closest point to $a$ in $[\mathcal{A}^i]_T$. If player $i$ plays actions from $[\mathcal{A}^i]_T$ according to \mbox{\textsc{GP-MW}} with $\beta_t = B + \sqrt{2(\gamma_{t-1} + \log(2/\delta))}$ and $\eta = \sqrt{8d_i\log(L b \sqrt{d_i T})/T}$, then with probability at least $1- \delta$, \begin{equation*} R^i(T) = \mathcal{O}\left(\sqrt{d_i T \log(L b \sqrt{d_i T})} + \sqrt{ T \log(2/\delta)} + B\sqrt{T \gamma_T} + \sqrt{T\gamma_T(\gamma_T + \log(2/\delta))}\right) \,. 
\end{equation*} \end{corollary} By substituting bounds on $\gamma_T$, our bound becomes $R^i(T) = \mathcal{O}(T^{1/2} \,\mathrm{polylog}(T))$ in the case of the SE kernel (for fixed $d$). Such a bound has a strictly better dependence on $T$ than the existing bandit bound $\mathcal{O}(T^\frac{d_i + 1}{d_i + 2} \log T)$ from \cite{slivkins2014}. Similarly to \cite{slivkins2014,maillard2010}, the algorithm resulting from Corollary~\ref{cor:infinite_arms} is not efficient in high-dimensional settings, as its computational complexity is exponential in $d_i$. \looseness=-1 \section{Experiments} In this section, we consider random matrix games and a traffic routing model and compare \textsc{GP-MW} with the existing algorithms for playing repeated games. Then, we show an application of \textsc{GP-MW} to robust BO and compare it with existing baselines on a movie recommendation problem. \subsection{Repeated random matrix games} We consider a repeated matrix game between two players with actions $\mathcal{A}_1 = \mathcal{A}_2 = \{ 0,1, \ldots, K-1\}$ and payoff matrices $A^i \in \reals{K \times K}, i=1,2$. At every time step, each player $i$ receives a payoff $r^i(a_t^1 , a_t^2) = [A^i]_{a_t^1, a_t^2}$, where $[A^i]_{i,j}$ indicates the $(i,j)$-th entry of matrix $A^i$. We select $K = 30$ and generate $10$ random matrices with $r^1 = r^2 \sim \mathcal{GP}(0, k(\cdot, \cdot))$, where $k = k_{SE}$ with $l=6$. We set the noise to $\epsilon^i_t \sim \mathcal{N}(0,1)$, and use $T = 200$. For every game, we distinguish between two settings:\looseness=-1 \textbf{Against random opponent.} In this setting, player-2 plays actions uniformly at random from $\mathcal{A}^2$ at every round $t$, while player-1 plays according to a no-regret algorithm. In \reffig{fig:random_opponent}, we compare the time-averaged regret of player-1 when playing according to \textsc{Hedge} \cite{freund1997}, \textsc{Exp3.P} \cite{auer2003}, and \textsc{GP-MW}.
\textsc{GP-MW} is run with the true GP prior, while \textsc{Hedge} receives (unrealistic) noiseless full-information feedback at every round $t$ and, as expected, attains the lowest regret. When only the noisy bandit feedback is available, \textsc{GP-MW} significantly outperforms \textsc{Exp3.P}. \textbf{\textsc{GP-MW} vs \textsc{Exp3.P}.} Here, player-1 plays according to \textsc{GP-MW} while player-2 is an adaptive adversary and plays using \textsc{Exp3.P}. In Figure~\ref{fig:GPHedge_vs_Exp3}, we compare the regret of the two players averaged over the game instances. \textsc{GP-MW} outperforms \textsc{Exp3.P} and ensures a smaller regret for player-1. \begin{figure*}[t] \centering \begin{subfigure}{.48\linewidth} \setlength\abovecaptionskip{-0.0em} \centering \includegraphics[width=1\linewidth]{rep_matrix_Figure_1.png} \caption{Against random opponent} \label{fig:random_opponent} \end{subfigure} $\quad$ \begin{subfigure}{.48\linewidth} \setlength\abovecaptionskip{-0.0em} \centering \includegraphics[width=1\linewidth]{rep_matrix_Figure_2.png} \caption{\textsc{GP-MW} vs. \textsc{Exp3.P}.} \label{fig:GPHedge_vs_Exp3} \end{subfigure} \caption{ \textsc{GP-MW} leads to smaller regret compared to \textsc{Exp3.P}. \textsc{Hedge} is an idealized benchmark which upper bounds the achievable performance. Shaded areas represent $\pm$ one standard deviation.} \label{fig:repeated_matrix_games} \end{figure*} \subsection{Repeated traffic routing}\label{sec:exp_routing} We consider the Sioux-Falls road network \cite{leblanc1975, website_transp_test}, a standard benchmark model in the transportation literature. The network is a directed graph with 24 nodes and 76 edges ($e \in E$). In this experiment, we have $N = 528$ agents and every agent $i$ seeks to send some number of units $u^i$ from a given origin to a given destination node. To do so, agent $i$ can choose among $K_i = 5$ possible routes consisting of network edges $E(i) \subset E$.
A route chosen by agent $i$ corresponds to action $a^i \in \mathbb{R}^{|E(i)|}$ with $[a^i]_e = u^i$ in case $e$ belongs to the route and $[a^i]_e = 0$ otherwise. The goal of each agent $i$ is to minimize the travel time weighted by the number of units $u^i$. The travel time of an agent is unknown and depends on the total occupancy of the traversed edges within the chosen route. Hence, the travel time increases when more agents use the same edges. The number of units $u^i$ for every agent, as well as travel time functions for each edge, are taken from \cite{leblanc1975, website_transp_test}. A more detailed description of our experimental setup is provided in Appendix~\ref{app:routing}.\looseness=-1 We consider a repeated game, where agents choose routes using either of the following algorithms: \begin{itemize}[leftmargin=1em] \item \textsc{Hedge}. To run \textsc{Hedge}, each agent has to observe the travel time incurred had she chosen any different route. This requires knowing the exact travel time functions. Although these assumptions are unrealistic, we use \textsc{Hedge} as an idealized benchmark. \item \textsc{Exp3.P}. In the case of \textsc{Exp3.P}, agents only need to observe their incurred travel time. This corresponds to the standard bandit feedback. \item \textsc{GP-MW}. Let $\psi(a^{-i}_t) \in \mathbb{R}^{|E(i)|}$ be the total occupancy (by other agents) of edges $E(i)$ at time $t$. To run \textsc{GP-MW}, agent $i$ needs to observe a noisy measurement of the travel time as well as the corresponding $\psi(a^{-i}_t)$. \item \textsc{Q-BRI} (Q-learning Better Replies with Inertia algorithm \cite{chapman2013}). This algorithm requires the same feedback as \textsc{GP-MW} and is proven to asymptotically converge to a Nash equilibrium (as the considered game is a potential game \cite{monderer1996}). We use the same set of algorithm parameters as in \cite{chapman2013}. 
\end{itemize} For every agent $i$ to run \textsc{GP-MW}, we use a composite kernel $k^i$ such that for every $\mathbf{a}_1, \mathbf{a}_2 \in \mathbf{\mathcal{A}}$, $k^i( (a_1^i, a_1^{-i}) , (a_2^i, a_2^{-i}) ) = k_1^i(a_1^i,a_2^i) \cdot k_2^i(a_1^i + \psi(a_1^{-i}) ,a_2^i + \psi(a_2^{-i}))$ , where $k_1^i$ is a linear kernel and $k_2^i$ is a polynomial kernel of degree $n \in \lbrace 2,4,6 \rbrace$. \begin{figure*}[] \setlength\abovecaptionskip{-0.0em} \setlength\belowcaptionskip{-0.0em} \centering \begin{subfigure}{.48\linewidth} \centering \includegraphics[width=1\linewidth]{traffic_Figure_1_new.png} \label{fig:avg_regrets} \end{subfigure} $\quad$ \begin{subfigure}{.48\linewidth} \centering \includegraphics[width=1\linewidth]{traffic_Figure_2_new.png} \label{fig:avg_congestions} \end{subfigure} \vspace{-0em} \begin{subfigure}{.48\linewidth} \centering \includegraphics[width=1\linewidth]{traffic_Figure_3_new.png} \label{fig:avg_avg_regrets} \end{subfigure} $\quad$ \begin{subfigure}{.48\linewidth} \centering \includegraphics[width=1\linewidth]{traffic_Figure_4_new.png} \label{fig:avg_avg_congestions} \end{subfigure} \caption{\textsc{GP-MW} leads to a significantly smaller average regret compared to \textsc{Exp3.P} and \textsc{Q-BRI} and improves the overall congestion in the network. \textsc{Hedge} represents an idealized full information benchmark which upper bounds the achievable performance.} \label{fig:repeated_routing_games_1} \end{figure*} First, we consider a random subset of $100$ agents that we refer to as learning agents. These agents choose actions (routes) according to the aforementioned no-regret algorithms for $T = 100$ game rounds. The remaining non-learning agents simply choose the shortest route, ignoring the presence of the other agents. In Figure~\ref{fig:repeated_routing_games_1} (top plots), we compare the average regret (expressed in hours) of the learning agents when they use the different no-regret algorithms. 
We also show the associated average congestion in the network (see~\eqref{eq:congestion} in Appendix~\ref{app:routing} for a formal definition). When playing according to \textsc{GP-MW}, agents incur significantly smaller regret and the overall congestion is reduced in comparison to \textsc{Exp3.P} and \textsc{Q-BRI}. In our second experiment, we consider the same setup as before, but we vary the number of learning agents. In Figure~\ref{fig:repeated_routing_games_1} (bottom plots), we show the final (when $T=100$) average regret and congestion as a function of the number of learning agents. We observe that \textsc{GP-MW} systematically leads to a smaller regret and reduced congestion in comparison to \textsc{Exp3.P} and \textsc{Q-BRI}. Moreover, as the number of learning agents increases, both \textsc{Hedge} and \textsc{GP-MW} reduce the congestion in the network, while this is not the case with \textsc{Exp3.P} or \textsc{Q-BRI} (due to a slower convergence). \subsection{\textsc{GP-MW} and robust Bayesian Optimization}\label{sec:BO} In this section, we apply \textsc{GP-MW} to a novel robust Bayesian Optimization (BO) setting, similar to the one considered in~\cite{bogunovic2018}. The goal is to optimize an unknown function $f$ (under the same regularity assumptions as in Section~\ref{sec:problem_formulation}) from a sequence of queries and corresponding noisy observations. Very often, the actual queried points may differ from the selected ones due to various input perturbations, or the function may depend on external parameters that cannot be controlled (see \cite{bogunovic2018} for examples). This scenario can be modelled via a two player repeated game, where a player is competing against an adversary. The unknown reward function is given by $f : \mathcal{X} \times \Delta \rightarrow \mathbb{R}$. At every round $t$ of the game, the player selects a point $x_t \in \mathcal{X}$, and the adversary chooses $\delta_t \in \Delta$. 
The player then observes the parameter $\delta_t$ and a noisy estimate of the reward: $f(x_t, \delta_t) + \epsilon_t$. After $T$ time steps, the player incurs the regret $$R(T) = \max_{x \in \mathcal{X}} \sum_{t=1}^T f(x, \delta_t) - \sum_{t=1}^T f(x_t,\delta_t).$$ Note that both the regret definition and feedback model are the same as in Section~\ref{sec:problem_formulation}. In the standard (non-adversarial) Bayesian optimization setting, the \textsc{GP-UCB} algorithm~\cite{srinivas2010} ensures no-regret. On the other hand, the \textsc{StableOpt} algorithm \cite{bogunovic2018} attains strong regret guarantees against the worst-case adversary which perturbs the final reported point $x_T$. Here instead, we consider the case where the adversary is \emph{adaptive} at every time $t$, i.e., it can adapt to past selected points $x_1,\ldots, x_{t-1}$. We note that both \textsc{GP-UCB} and \textsc{StableOpt} fail to achieve no-regret in this setting, as both algorithms are deterministic conditioned on the history of play. On the other hand, \textsc{GP-MW} is a no-regret algorithm in this setting according to Theorem~\ref{thm:1} (and Corollary~\ref{cor:infinite_arms}). Next, we demonstrate these observations experimentally in a movie recommendation problem. \textbf{Movie recommendation.} We seek to recommend movies to users according to their preferences. A~priori it is unknown which user will see the recommendation at any time $t$. We assume that such a user is chosen arbitrarily (possibly adversarially), simultaneously to our recommendation. We use the MovieLens-100K dataset \cite{movielens} which provides a matrix of ratings for $1682$ movies rated by $943$ users. We apply non-negative matrix factorization with $p=15$ latent factors on the incomplete rating matrix and obtain feature vectors $\mathbf{m}_i , \mathbf{u}_j \in \mathbb{R}^p$ for movies and users, respectively. Hence, $\mathbf{m}_i^\top \mathbf{u}_j$ represents the rating of movie $i$ by user $j$. 
At every round $t$, the player selects $\mathbf{m}_t \in \{ \mathbf{m}_1, \ldots, \mathbf{m}_{1682}\}$, the adversary chooses (without observing $\mathbf{m}_t$) a user index $i_t \in \{ 1,\ldots, 943\}$, and the player receives reward $f(\mathbf{m}_t, i_t) = \mathbf{m}_t^\top \mathbf{u}_{i_t}$. We model $f$ via a GP with composite kernel $k((\mathbf{m},i),(\mathbf{m}',i')) = k_1(\mathbf{m},\mathbf{m}') \cdot k_2(i, i')$ where $k_1$ is a linear kernel and $k_2$ is a diagonal kernel. \begin{figure*}[] \centering \begin{subfigure}{.48\linewidth} \setlength\abovecaptionskip{-0.0em} \centering \includegraphics[width=1\linewidth]{movies_Figure_1.png} \caption{Users chosen at random.} \label{fig:Movie:random_adv} \end{subfigure} $\quad$ \begin{subfigure}{.48\linewidth} \setlength\abovecaptionskip{-0.0em} \setlength\belowcaptionskip{-0.0em} \centering \includegraphics[width=1\linewidth]{movies_Figure_2.png} \caption{Users chosen by adaptive adversary.} \label{fig:Movie:adaptive_adv} \end{subfigure} \caption{ \textsc{GP-MW} ensures no-regret against both randomly and adaptively chosen users, while \textsc{GP-UCB} and \textsc{StableOpt} attain constant average regret.} \label{fig:MovieRec} \end{figure*} We compare the performance of \textsc{GP-MW} against the ones of \textsc{GP-UCB} and \textsc{StableOpt} when sequentially recommending movies. In this experiment, we let \textsc{GP-UCB} select $\mathbf{m}_t = \arg\max_\mathbf{m} \max_{i } UCB_t( \mathbf{m}, i)$, while \textsc{StableOpt} chooses $\mathbf{m}_t = \argmax_\mathbf{m} \min_{i} UCB_t( \mathbf{m}, i)$ at every round $t$. Both algorithms update their posteriors with measurements at $(\mathbf{m}_t, \hat{i}_t)$ with $\hat{i}_t = \argmax_{i} UCB_t( \mathbf{m}_t, i)$ in the case of \textsc{GP-UCB} and $\hat{i}_t = \argmin_{i} LCB_t( \mathbf{m}_t, i)$ for \textsc{StableOpt}. Here, $LCB_t$ represents a lower confidence bound on $f$ (see \cite{bogunovic2018} for details). 
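The selection rules just described differ only in how they aggregate the UCB scores over users. A toy sketch with a hypothetical table of $UCB_t(\mathbf{m}, i)$ values (rows index movies, columns index users; the table itself is illustrative, not from the experiments):

```python
import numpy as np

def select_gp_ucb(ucb_table):
    """GP-UCB rule: m_t = argmax_m max_i UCB_t(m, i)."""
    return int(np.argmax(ucb_table.max(axis=1)))

def select_stableopt(ucb_table):
    """StableOpt rule: m_t = argmax_m min_i UCB_t(m, i)."""
    return int(np.argmax(ucb_table.min(axis=1)))

# row 0: a movie that is great for one user but poor for the other;
# row 1: a movie that is robustly decent for both users
table = np.array([[1.0, 0.0],
                  [0.6, 0.5]])
```

Both rules are deterministic functions of the history, which is precisely why an adaptive adversary can exploit them, whereas \textsc{GP-MW} randomizes over its mixed strategy.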
In Figure~\ref{fig:Movie:random_adv}, we show the average regret of the algorithms when the adversary chooses users uniformly at random at every $t$. In our second experiment (Figure~\ref{fig:Movie:adaptive_adv}), we show their performance when the adversary is adaptive and selects $i_t$ according to the \textsc{Hedge} algorithm. We observe that in both experiments \textsc{GP-MW} is no-regret, while the average regrets of both \textsc{GP-UCB} and \textsc{StableOpt} do not vanish. \looseness=-1 \section{Conclusions} We have proposed \textsc{GP-MW}, a no-regret bandit algorithm for playing unknown repeated games. In addition to the standard bandit feedback, the algorithm requires observing the actions of other players after every round of the game. By exploiting the correlation among different game outcomes, it computes upper confidence bounds on the rewards and uses them to simulate unavailable full information feedback. Our algorithm attains high probability regret bounds that can substantially improve upon the existing bandit regret bounds. In our experiments, we have demonstrated the effectiveness of \textsc{GP-MW} on synthetic games, and real-world problems of traffic routing and movie recommendation. \looseness=-1 \subsubsection*{Acknowledgments} This work was gratefully supported by Swiss National Science Foundation, under the grant SNSF $200021$\textunderscore$172781$, and by the European Union’s Horizon 2020 ERC grant $815943$. \bibliographystyle{plain}
\section{INTRODUCTION} In recent years, unmanned aerial vehicles (UAVs) have found success in challenging applications such as event safety, search and rescue, and security surveillance. For some specific operating domains such as indoors, underground or in urban centers, global navigation satellite systems (GNSS) information is not always available or reliable onboard the UAV. Hence, the feasibility of UAV deployment in these locales depends highly on the accuracy of the vision-based localization as a complement and replacement to GNSS systems. Failure to correctly perform the localization task with the sensors available to the UAV might lead to serious damage to the vehicle or to people in the vicinity. It is therefore imperative to not only perform accurate UAV localization with visual information, but to understand the reliability of the current localization estimate and to minimize the possibility of undetected failures in the visual localization process. Many UAV visual localization or visual simultaneous localization and mapping (SLAM) algorithms have been developed that can provide robust state estimation over an extended period of time, such as~\cite{mur2017orb, leutenegger2013keyframe}. Although these algorithms are able to operate with a low failure rate, it remains possible to further improve the robustness of UAV state estimation by continuously assessing the integrity of the state estimate computed onboard the UAV. Integrity measures the degree of trust that can be placed on the correctness of the localization solution~\cite{ochieng2002assessment}. In robot visual localization, 3$\sigma$ ($\pm$3 standard deviations) is often used as a measure of the uncertainty bounds on state estimates, which corresponds to a 99.7\% probability that the ground truth state is within the region. 
However, 3$\sigma$ only bounds the error of the state assuming there are no outliers in the measurements, that is, no measurements are drawn from outside the modeled measurement distribution. For real-life UAV visual localization, it is unreasonable to assume that the visual front-end works perfectly and that there are never any outliers in the measurements, as visual front-ends rely on correspondence of features from frame to frame, a process that is susceptible to error. As a result, the 3$\sigma$ approach is often too aggressive and does not bound the error of the true state of the UAV. The work of this paper is inspired by receiver autonomous integrity monitoring (RAIM), an algorithm developed to assess the integrity of GNSS signals. It is used especially in automated aircraft landing and other safety-critical GNSS applications. RAIM first uses redundant measurements to check the consistency of the measurements received, and detects possible faulty measurements from individual satellites. Afterwards, it determines a Horizontal Protection Level (HPL) and a Vertical Protection Level (VPL), which are the maximum errors in the horizontal and vertical directions that the fault detection algorithm is not expected to detect~\cite{walter1995weighted}. The idea of adapting RAIM for robot localization has recently been developed for filter-based algorithms~\cite{2019imekf, 2019imekf2}, but not for optimization-based algorithms. Although integrity monitoring is a well-developed area for GNSS applications, it is still non-trivial to adapt it to optimization-based UAV visual localization. Firstly, in most UAV visual localization algorithms, the system relies on features rather than satellites to provide measurements, and there are considerably more measurements than in GNSS applications. Secondly, in RAIM, it is rare for two satellites to be faulty at the same time.
However, for visual localization, the absolute number of incorrect measurements, or outliers, is much higher, and the outlier ratio can often be greater than 10\%. As a result, a different outlier rejection method is needed to handle multiple outliers. In addition, in GNSS applications, each satellite provides one measurement, but in visual localization, each feature gives 2 or 3 measurements for the monocular and rectified stereo cases, respectively. In this paper, we propose a novel approach to monitor the integrity of the state estimation in optimization-based UAV visual localization. We first detect and isolate inconsistent visual measurements using a statistical method that is based on~\cite{tong2011batch} and expands on our previous work~\cite{das2014outlier}. We then calculate the largest translational error that can exist in the state of the UAV. The main contributions of this paper are: \begin{itemize} \item A novel metric called relaxed bound tightness that quantitatively evaluates the performance of error bounds for applications where the error is assumed to follow a Gaussian distribution. \item An approach to determine an approximate upper bound of the error in the state estimation of the UAV in optimization-based visual localization that is significantly more reliable than the typical 3$\sigma$ approach. \end{itemize} The remainder of this paper is outlined as follows: related work is discussed in Sec.~\ref{related_work}, details of the outlier rejection and protection calculation algorithms are discussed in Sec.~\ref{algorithms}, the proposed metric for evaluating error bounds is shown in Sec.~\ref{metric}, and finally, validation results of the proposed approach are shown in Sec.~\ref{results}. \section{RELATED WORK}\label{related_work} \subsection{Receiver Autonomous Integrity Monitoring} The concept of integrity monitoring has been developed and applied to GNSS applications. 
It is a necessary component for safety-critical applications where unreliable solutions might lead to serious injury or death. RAIM is one of the more commonly used integrity monitoring algorithms for aviation applications. It has two basic functions. The first is to detect whether there is a satellite failure using a fault detection and exclusion algorithm. The second is to calculate the HPL and VPL, which are the largest horizontal and vertical errors that the fault detection algorithm is not expected to detect~\cite{liu2005gps}. Brown et al.~\cite{brown1994gps} propose the first mathematically rigorous RAIM algorithm, which uses statistical theory to check the consistency of the redundant GPS measurements and calculates the HPL with the measurement group that provides adequate consistency. Walter et al.~\cite{walter1995weighted} use Brown's method as a basis to develop weighted RAIM. The main idea of their work is that the measurements coming from each individual satellite have different levels of noise. As a result, instead of assuming a fixed covariance value for all the satellites, they propose to weight the trust placed in each satellite by assigning measurement noise with a per-satellite covariance to the corresponding measurement models. Both methods assume that there is at most one faulty measurement in the system after the fault detection algorithm, and then they calculate the protection levels based on this assumption. Angus~\cite{angus2006raim} revises the algorithms proposed by Brown and Walter to account for multiple faulty satellites in the measurements. The ability to handle multiple faulty measurements is useful in the context of visual localization because each feature corresponds to 2 or 3 measurements depending on the camera settings. As a result, to account for one faulty feature in visual localization, we need to account for 2 or 3 faulty measurements.
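The consistency check at the core of these RAIM variants can be sketched for a linearized measurement model $\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{e}$. This is a hedged illustration, not the exact algorithm of any cited work; equal measurement variances $\sigma^2$ and a chosen false-alarm rate `alpha` are assumed:

```python
import numpy as np
from scipy.stats import chi2

def raim_consistency_test(H, y, sigma=1.0, alpha=0.001):
    """Global chi-square test on the weighted sum of squared residuals.
    With m measurements and n states, SSE / sigma^2 ~ chi2(m - n) under the
    no-fault hypothesis; exceeding the threshold flags an inconsistency."""
    m, n = H.shape
    x_hat, *_ = np.linalg.lstsq(H, y, rcond=None)  # least-squares state estimate
    r = y - H @ x_hat                               # measurement residuals
    test_stat = float(r @ r) / sigma**2
    threshold = chi2.ppf(1.0 - alpha, df=m - n)
    return bool(test_stat <= threshold)  # True: consistent, False: fault detected
```

A single gross fault inflates the residual sum well beyond the $\chi^2$ threshold when enough redundant measurements are available, which is the mechanism both the satellite and visual settings exploit.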
\subsection{Vision-Based Localization} Vision-based localization or SLAM algorithms can be divided into two main categories: direct methods and feature-based methods. Direct methods process the entire image as the measurement and aim to minimize the photometric error~\cite{engel2014lsd}. Feature-based methods extract keypoints from the image, and the objective is to minimize the reprojection error. ORB-SLAM2~\cite{mur2017orb} is a state-of-the-art feature-based SLAM system for monocular, stereo, and RGB-D cameras. Its SLAM mode performs bundle adjustment to construct the map. ORB-SLAM2 also has a localization mode that allows the reuse of the map to determine the state of the agent by pose optimization. In addition, combining visual and inertial measurements also improves performance, especially for UAV applications such as~\cite{leutenegger2013keyframe}. Although these algorithms show promising performance in experiments, it is unreasonable to assume that the measurements are fault-free at all times because finding correspondences of features from frame to frame is a process that is susceptible to error, especially in dynamic environments. The work of this paper is mainly applicable to feature-based visual localization methods, and we use the localization mode of ORB-SLAM2 to validate our method, as shown in Sec.~\ref{results}. \subsection{Outlier Rejection For Visual Measurements} RANSAC~\cite{fischler1981random} has been widely used in vision-based robot localization~\cite{kitt2010visual} to reject feature outliers. The basic idea of RANSAC is to use random sets of samples to form hypotheses and use the other samples to verify these hypotheses, and the hypothesis with the highest consensus is selected to be the inlier set~\cite{scaramuzza20111}.
An alternative approach is to use statistical tests to check whether the measurement set fits the assumed statistical model, such as the Parity Space Approach~\cite{das2014outlier} and Normalized Innovation Squared (NIS)~\cite{bar2004estimation}. Both methods assume that the noise of the measurement model follows a Gaussian distribution and check whether the weighted sum of squares residual follows a $\chi^2$ distribution. The Parity Space Approach does not require the state of the system; it checks the consistency of a group of redundant measurements by projecting them into the parity space~\cite{das2014outlier}. NIS, on the other hand, assumes that an estimated state is known and is usually used to check whether an individual measurement is an outlier. Tong et al.~\cite{tong2011batch} propose a batch innovation test that extends NIS to remove outliers for a batch of measurements. Recently, Tzoumas et al.~\cite{tzoumas2019outlier} propose an adaptive trimming algorithm that also removes outliers based on the value of the residuals; instead of using a fixed threshold, the threshold in this algorithm is updated at each iteration. In this paper, we adapt Tong's method and use it with the Parity Space Approach to remove multiple outliers iteratively, which leads to the calculation of the protection level. \section{INTEGRITY MONITORING FRAMEWORK}\label{algorithms} \subsection{Problem Formulation} In this section, we show the problem formulation for a stereo camera setting. We formulate vision-based localization as a nonlinear pose optimization whose objective is to determine the camera state at the current frame by minimizing the reprojection error of the features. We follow the notation introduced by Barfoot~\cite{barfoot2017state}. There are two frames: the inertial frame $\mathcal{F}_i$, and the camera frame $\mathcal{F}_c$, which corresponds to the left camera center. 
The state of the camera is defined as $\textbf{x} = $\{${\textbf{r}^{ci}_i}$, $\textbf{C}_{ci}$\}, which is the transformation from the inertial frame to the camera frame, where ${\textbf{r}^{ci}_i} \in \mathbb{R}^3$ and $\textbf{C}_{ci} \in \mathbb{SO}(3)$. Before performing localization, a map is built with features, and $\textbf{r}^{p_ji}_i$ denotes the position of feature $j$ in the inertial frame. The feature position is transformed from the inertial frame to the camera frame first and then projected to the image coordinate plane using the following equations: \begin{equation} \textbf{r}^{p_j c}_{c} =\begin{bmatrix}x\\ y\\z\end{bmatrix} =\textbf{C}_{ci}(\textbf{r}^{p_ji}_i-\textbf{r}^{ci}_i) \end{equation} \begin{align} \begin{bmatrix} u_l\\ v_l\\ \delta_d\end{bmatrix} = \pi(\textbf{r}^{p_j c}_{c}) = \frac{1}{z} \begin{bmatrix} f_ux \\ f_vy \\ f_ub\end{bmatrix} + \begin{bmatrix} c_u\\ c_v \\ 0 \end{bmatrix} + \mathbf{e}_j \end{align} \begin{align} \label{measurement noise} \mathbf{e}_j \sim \mathcal{N}(\mathbf{0},\;\mathbf{Q}_j) \end{align} where $u_l,v_l$ are the left image coordinates, $\delta_d$ is the disparity, $f_u, f_v$ are the focal lengths, $c_u, c_v$ are the principal points, $b$ is the baseline, $\pi$ is the stereo camera projection function, and $\mathbf{e}_j$ is the measurement noise, which is assumed to follow a Gaussian distribution with covariance matrix $\mathbf{Q}_j$. 
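A minimal numeric sketch of the stereo projection model above; the intrinsics $f_u$, $f_v$, $c_u$, $c_v$ and baseline $b$ are illustrative values, not those of any particular camera:

```python
import numpy as np

# illustrative intrinsics: focal lengths, principal point, baseline (assumed values)
fu, fv, cu, cv, b = 400.0, 400.0, 320.0, 240.0, 0.1

def project_stereo(p_c):
    """Stereo projection pi(.): camera-frame point -> (u_l, v_l, disparity)."""
    x, y, z = p_c
    return np.array([fu * x / z + cu, fv * y / z + cv, fu * b / z])

# transform a feature from the inertial frame to the camera frame, then project
C_ci = np.eye(3)                    # camera attitude (identity for this sketch)
r_ci = np.array([0.0, 0.0, 0.0])    # camera position in the inertial frame
p_i  = np.array([0.5, -0.2, 2.0])   # feature position in the inertial frame
y_meas = project_stereo(C_ci @ (p_i - r_ci))   # -> [420., 200., 20.]
```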
At each frame, the pose optimization can be formulated as follows: \begin{align} \label{eq:optimization} \mathbf{x^*} =\Bigl\{{\textbf{r}^{ci*}_i,\textbf{C}^*_{ci}} \Bigr\} = \argmin_{\mathbf{r}_i^{ci},\mathbf{C}_{ci}} &\sum_{j}^{} \rho(\mathbf{e}_{y,j}^T\mathbf{Q}^{-1}_j\mathbf{e}_{y,j}) \end{align} where $\rho$ is the Huber loss function, and $\mathbf{e}_{y,j}$ is the measurement error term for feature $j$, determined using the formula below: \begin{equation} \mathbf{e}_{y,j}(\mathbf{x}) = \mathbf{y}_{j} - \pi(\textbf{C}_{ci}(\textbf{r}^{p_ji}_i-\textbf{r}^{ci}_i)) \end{equation} where $\mathbf{y}_{j}$ is the measurement for feature $j$. \subsection{Fault Detection and Exclusion} This subsection shows how to apply fault detection and exclusion to visual measurements, in a manner inspired by RAIM~\cite{walter1995weighted}, using the Parity Space Method. First, we have the measurement model for each feature $j$: \begin{ceqn} \begin{align} \mathbf{y}_{j} = h_j(\textbf{x}) + \mathbf{e}_j \end{align} \end{ceqn} where the measurement function for feature $j$ is $h_j(\textbf{x}): \mathbb{R}^6 \rightarrow \mathbb{R}^{3}$, defined as \begin{ceqn} \begin{align} h_j(\textbf{x}) = \pi(\textbf{C}_{ci}(\textbf{r}^{p_ji}_i-\textbf{r}^{ci}_i)). \end{align} \end{ceqn} Then we linearize the measurement function $h_j(\textbf{x})$ about an operating point $\textbf{x}_0$ to obtain the linearized measurement model: \begin{ceqn} \begin{align} \label{eq:linearized_measurement} d\mathbf{y}_{j} = \mathbf{H}_j\cdot d\textbf{x} + \mathbf{e}_j \end{align} \end{ceqn} where $d\textbf{x} \in \mathbb{R}^6$ is the state perturbation, $\mathbf{H}_j \in \mathbb{R}^{3\times6}$ is the Jacobian of the measurement model, and $d\textbf{y}_j \in \mathbb{R}^3$ is the shifted measurement according to the operating point $\mathbf{x}_0$ for feature $j$. 
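In practice, an analytic Jacobian $\mathbf{H}_j$ can be checked against a finite-difference approximation. The sketch below perturbs only the position part of the state, assumes identity attitude, and reuses illustrative intrinsics (all assumptions, not the authors' implementation):

```python
import numpy as np

fu, fv, cu, cv, b = 400.0, 400.0, 320.0, 240.0, 0.1  # illustrative intrinsics

def h(r_ci, p_i):
    """Measurement function for one feature (identity attitude for this sketch)."""
    x, y, z = p_i - r_ci
    return np.array([fu * x / z + cu, fv * y / z + cv, fu * b / z])

def jac_fd(r_ci, p_i, eps=1e-6):
    """Central finite-difference Jacobian of h with respect to the camera position."""
    J = np.zeros((3, 3))
    for k in range(3):
        d = np.zeros(3); d[k] = eps
        J[:, k] = (h(r_ci + d, p_i) - h(r_ci - d, p_i)) / (2 * eps)
    return J

r0 = np.zeros(3)
p  = np.array([0.5, -0.2, 2.0])
H_pos = jac_fd(r0, p)   # position block of the 3x6 Jacobian H_j
```

For this geometry the analytic position block is $-\partial\pi/\partial\mathbf{r}^{p_jc}_c$, which the finite-difference result should match to numerical precision.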
We make the following definitions by stacking $d\mathbf{y}_j$, $\mathbf{H}_j$, and the covariance matrices $\mathbf{Q}_j$ for the $N$ features observed in the current frame: \begin{ceqn} \begin{align} d\mathbf{y} = \begin{bmatrix} d\mathbf{y}_1\\\vdots \\ d\mathbf{y}_N \end{bmatrix}, \mathbf{Q} = \begin{bmatrix} \mathbf{Q}_1 & &\\& \ddots & \\ & & \mathbf{Q}_N \end{bmatrix}, \mathbf{H} = \begin{bmatrix} \mathbf{H}_1\\\vdots \\ \mathbf{H}_N \end{bmatrix}. \end{align} \end{ceqn} The stacked version of Eq.~\ref{eq:linearized_measurement} is: \begin{ceqn} \begin{align} \label{eq:stacked linearized measurement model} d\mathbf{y} = \mathbf{H}\cdot d\textbf{x} + \mathbf{e} \end{align} \end{ceqn} where \begin{align} \mathbf{e} \sim \mathcal{N}(\mathbf{0},\;\mathbf{Q}). \end{align} We define the information matrix $\mathbf{W}$ as the inverse of $\mathbf{Q}$: \begin{ceqn} \begin{align} \textbf{W} =\mathbf{Q}^{-1}. \end{align} \end{ceqn} The estimated perturbation is the weighted least squares solution of Eq.~\ref{eq:stacked linearized measurement model}: \begin{ceqn} \begin{align}\label{sol_linear} d\hat{\mathbf{x}} = (\mathbf{H}^T\mathbf{W}\mathbf{H})^{-1}\mathbf{H}^T\mathbf{W}d\mathbf{y}. \end{align} \end{ceqn} The residual can then be calculated as follows: \begin{ceqn} \begin{align} \label{eq:residual} \mathbf{ \epsilon} = d\mathbf{y} - \mathbf{H}d\hat{\mathbf{x}} = (\mathbf{I} - \mathbf{H}(\mathbf{H}^T\mathbf{W}\mathbf{H})^{-1}\mathbf{H}^T\mathbf{W})d\mathbf{y}. \end{align} \end{ceqn} However, if there are outliers in the measurements, Eq.~\ref{eq:stacked linearized measurement model} is rewritten as: \begin{ceqn} \begin{align} \label{eq:fault} d\mathbf{y} = \mathbf{H}\cdot d\textbf{x} + \mathbf{e} + \mathbf{f} \end{align} \end{ceqn} where $\mathbf{f} \in \mathbb{R}^{3N}$ is the fault vector, which models the error in the measurements under the assumption that outliers are present~\cite{das2014outlier}. 
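Before introducing the fault term, the outlier-free stacked weighted least-squares step and its residual can be sketched on a small synthetic linear system (random Jacobian and noise-free measurements, so the estimate is exact and the residual essentially vanishes):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5                                  # features -> 3N measurements, 6 states
H = rng.standard_normal((3 * N, 6))    # stacked Jacobian (synthetic stand-in)
W = np.eye(3 * N)                      # information matrix (inverse covariance)
dx_true = rng.standard_normal(6)       # true state perturbation
dy = H @ dx_true                       # noise-free shifted measurements

# weighted least-squares perturbation and residual
HtW = H.T @ W
dx_hat = np.linalg.solve(HtW @ H, HtW @ dy)
eps = dy - H @ dx_hat
lam = eps @ W @ eps                    # weighted sum of squares residual
```

With noise or faults added to `dy`, `lam` grows and becomes the quantity tested against the $\chi^2$ threshold described next.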
With the fault term included, Eq.~\ref{eq:residual} becomes: \begin{ceqn} \begin{align} \mathbf{ \epsilon} = (\mathbf{I} - \mathbf{H}(\mathbf{H}^T\mathbf{W}\mathbf{H})^{-1}\mathbf{H}^T\mathbf{W})(d\mathbf{y} -\mathbf{f}). \end{align} \end{ceqn} We then calculate the weighted sum of squares residual: \begin{ceqn} \begin{align} \lambda = \mathbf{\epsilon}^T\mathbf{ W} \mathbf{\epsilon}. \end{align} \end{ceqn} We can use $\lambda$ to determine whether outliers exist in the measurement set; $\sqrt{\lambda}$ is defined as the test statistic. If there are no outliers in the measurement data, $\lambda$ follows a central $\chi^2$ distribution with $n-m$ degrees of freedom based on the Parity Space Method~\cite{liu2005gps}, where $n$ is the number of measurements and $m$ is the number of states; in the case of vision-based localization, $n = 3N$ and $m = 6$. If there are outliers in the measurements, $\lambda$ follows a non-central $\chi^2$ distribution. The weighted sum of squares residual $\lambda$ can thus be used to determine whether the position solution is outlier-free. A false alarm threshold $\delta$ can be selected analytically based on a desired probability of false alarm $P_{fa}$ by computing the $1-P_{fa}$ quantile of the central $\chi^2$ distribution with $3N-6$ degrees of freedom. If $\lambda < \delta$, the test indicates that the measurements are consistent with each other, and it is hypothesized that there is no outlier in the measurement set; if $\lambda > \delta$, there is a $1-P_{fa}$ probability that one or more outliers are present in the measurement set. If the measurement set passes the outlier detection algorithm, we can directly calculate the protection level as described in Sec.~\ref{PL}. If the measurement set is found to contain outliers, we need to perform outlier rejection. Unlike in GNSS applications, where it is rare for two satellites to fail at the same time, the outlier ratio for visual measurements is significantly higher. 
As a result, a different approach is required to remove the outliers in the measurements for vision-based localization. We modify the approach in~\cite{tong2011batch} and propose the Iterative Parity Space Outlier Rejection (IPSOR) method to remove multiple outliers for integrity monitoring. Both algorithms calculate the weighted sum of squares residual of a group of measurements and remove outliers iteratively until the remaining measurements pass the residual threshold test. Specifically, the IPSOR strategy iteratively removes the feature that contributes the most to $\lambda$ and classifies it as an outlier until $\lambda$ is less than the threshold $\delta$ for the given number of measurements. Afterward, we classify the remaining measurements as inliers and calculate $\lambda$ again. If the updated $\lambda$ is less than the updated threshold $\delta$, the outlier rejection is completed; otherwise, we repeat this process. If the number of inliers falls below a threshold during this process, we deem that there are too many outliers in the measurements and declare the localization solution unsafe. This method relies on the assumption that inlier measurements agree with each other while outlier measurements vary much more widely. Using the Gauss-Newton algorithm, the IPSOR method can be applied iteratively to obtain a better linearization state $\mathbf{x}_0$ and to avoid removing inliers. 
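The core IPSOR idea can be sketched on a deliberately simplified problem: a scalar state observed directly by each measurement (one scalar per "feature", so the $\chi^2$ test has $n-1$ degrees of freedom), with the estimate recomputed after each removal rather than the two-stage inner/outer loop of the full algorithm below. SciPy supplies the $\chi^2$ quantile:

```python
import numpy as np
from scipy.stats import chi2

def ipsor_1d(y, sigma=0.1, p_fa=0.05, min_inliers=2):
    """Simplified IPSOR sketch for y_j = x + e_j with known noise sigma.

    Iteratively removes the measurement contributing most to the weighted
    residual until the chi^2 consistency test passes.
    """
    idx = np.arange(len(y))
    while True:
        n = len(idx)
        x_hat = y[idx].mean()                    # WLS solution (equal weights)
        r2 = ((y[idx] - x_hat) / sigma) ** 2     # weighted squared residuals
        lam = r2.sum()
        delta = chi2.ppf(1.0 - p_fa, n - 1)      # n measurements, 1 state
        if lam <= delta or n <= min_inliers:
            return idx, x_hat
        idx = np.delete(idx, np.argmax(r2))      # drop the worst contributor

y = np.array([1.02, 0.97, 1.01, 0.99, 3.0])      # last measurement is an outlier
inliers, x_hat = ipsor_1d(y)
```

The gross outlier dominates the residual and is removed first; the surviving measurements then pass the consistency test.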
\begin{algorithm} \caption{Given a set of $N$ 3D features $Y$ and an initial state estimate $\mathbf{x}_0$, we can obtain a set of inliers $Y_{inlier}$ which pass the outlier detection algorithm using IPSOR method.} \begin{algorithmic}[1] \STATE{$Y_{inlier} \leftarrow Y$} \STATE{$\mathbf{H} \leftarrow [\frac{\partial}{\partial x} h({\mathbf{x}})_1 \vert_{{\mathbf{x_0}}}, \frac{\partial}{\partial x} h({\mathbf{x}})_2 \vert_{{\mathbf{x_0}}} \ldots \frac{\partial}{\partial x} h({\mathbf{x}})_N \vert_{{\mathbf{x_0}}}]$} \STATE{$d\mathbf{y} \leftarrow [ d\mathbf{y}_1 \; d\mathbf{y}_2 \ldots d\mathbf{y}_{N}]$} \STATE{$d\hat{\mathbf{x}} = (\mathbf{H}^T\mathbf{W}\mathbf{H})^{-1}\mathbf{H}^T\mathbf{W}d\mathbf{y} $} \STATE{$ \mathbf{ \epsilon} = d\mathbf{y} - \mathbf{H}d\hat{\mathbf{x}} = (\mathbf{I} - \mathbf{H}(\mathbf{H}^T\mathbf{W}\mathbf{H})^{-1}\mathbf{H}^T\mathbf{W})d\mathbf{y} $} \STATE{$\lambda \leftarrow \epsilon^T\mathbf{W}\epsilon$} \STATE{$\delta \leftarrow \chi^2_{(3N-6,1-P_{fa})}$} \WHILE{$ \lambda > \delta$ } \FOR{$j < N$} \STATE{$ \mathbf{\lambda}_i(j)\leftarrow \epsilon_j^TW(j,j)\epsilon_j$} \STATE{$j \leftarrow j + 1$} \ENDFOR \WHILE{$ \lambda > \delta$} \STATE{$ \lambda_{max} \leftarrow \max(\mathbf{\lambda}_i(j))$} \STATE{$ i^* \leftarrow \argmax\limits_{i}(\mathbf{\lambda}_i(j))$} \STATE{$ \lambda \leftarrow \lambda - \lambda_{max}$} \STATE{$N \leftarrow N -1$} \STATE{$Y_{inlier} \leftarrow Y_{inlier} - Y_{i^*} $} \STATE{$\delta \leftarrow \chi^2_{(3N-6,1-P_{fa})}$} \ENDWHILE \IF{$N_I < N_T$} \STATE{\emph{Break}} \ENDIF \STATE{$\mathbf{H} \leftarrow [\frac{\partial}{\partial x} h({\mathbf{x}})_1 \vert_{{\mathbf{x_0}}}, \frac{\partial}{\partial x} h({\mathbf{x}})_2 \vert_{{\mathbf{x_0}}} \ldots \frac{\partial}{\partial x} h({\mathbf{x}})_{N_{inlier}} \vert_{{\mathbf{x_0}}}]$} \STATE{$d\mathbf{y} \leftarrow [ d\mathbf{y}_1 \; d\mathbf{y}_2 \ldots d\mathbf{y}_{N_{inlier}}]$} \STATE{$d\hat{\mathbf{x}} = 
(\mathbf{H}^T\mathbf{W}\mathbf{H})^{-1}\mathbf{H}^T\mathbf{W}d\mathbf{y} $} \STATE{$ \mathbf{ \epsilon} = d\mathbf{y} - \mathbf{H}d\hat{\mathbf{x}} = (\mathbf{I} - \mathbf{H}(\mathbf{H}^T\mathbf{W}\mathbf{H})^{-1}\mathbf{H}^T\mathbf{W})d\mathbf{y} $} \STATE{$\lambda \leftarrow \epsilon^T\mathbf{W}\epsilon$} \STATE{$\delta \leftarrow \chi^2_{(3N-6,1-P_{fa})}$} \ENDWHILE \end{algorithmic} \end{algorithm} \subsection{Protection Level Calculation}\label{PL} In visual localization, we define the protection level as the maximum translational error in each direction that the outlier detection algorithm is not expected to detect, denoted as $PL_x$, $PL_y$, $PL_z$; these have to take account of both the error introduced by possible outliers and that introduced by noise. A protection level for the total distance can also be calculated using the same approach, but we use axis-specific protection levels so that we can compare with the $3\sigma$ method. We follow and modify the approach proposed by Walter~\cite{walter1995weighted} and Angus~\cite{angus2006raim} to calculate the protection level. Once the measurements pass the outlier rejection algorithm, we assume that there will be at most one feature outlier in the measurements, since it is unlikely for the measurements to pass the outlier rejection test if there is more than one faulty feature. In addition, depending on the application, we can always take account of a higher number of faulty features to obtain a more conservative protection level. To analytically determine $PL_x$, $PL_y$, $PL_z$, we follow the problem formulation of~\cite{angus2006raim}. Because each feature produces 3 measurements, we denote the fault vector $\textbf{f}$ in Eq.~\ref{eq:fault} as \begin{ceqn} \begin{align} \mathbf{f} = \mathbf{P}_j\mathbf{f}^* \end{align} \end{ceqn} where $\mathbf{f}^* \in \mathbb{R}^{3}$ and $\mathbf{P}_j \in \mathbb{R}^{3N\times 3}$, in which each $3 \times 3$ block represents a feature. 
For $\mathbf{P}_j$, the $j$th $3 \times 3$ block is an identity matrix, and the rest of the $3 \times 3$ blocks are zero. For example, if the first feature is an outlier and the rest of the features are inliers, then $\mathbf{P}_j$ is defined as \begin{align} \mathbf{P}_j = \mathbf{P}_1 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \\ \vdots &\vdots& \vdots \\ 0 & 0 & 0\end{bmatrix}. \end{align} The weighted sum of squares residual $\lambda_f$ that is introduced by the fault vector $\mathbf{f}$ can be calculated as: \begin{ceqn} \begin{align} \lambda_f =\mathbf{f}^T\mathbf{S}\mathbf{f} = \mathbf{f^*}^T\mathbf{P}_j^T\mathbf{S}\mathbf{P}_j\mathbf{f}^* \end{align} \end{ceqn} where \begin{ceqn} \begin{align} \mathbf{S} = \mathbf{W}(\mathbf{I} - \mathbf{H}(\mathbf{H}^T\mathbf{W}\mathbf{H})^{-1}\mathbf{H}^T\mathbf{W}). \end{align} \end{ceqn} Based on Eq.~\ref{sol_linear}, the square of the outlier-induced error $(\varepsilon_{f})^2$ in the localization solution introduced by the fault vector $\mathbf{f}$ can also be calculated as: \begin{ceqn} \begin{align}\label{eq:16} (\varepsilon_{f})^2 = \mathbf{f}^T\mathbf{D}_i\mathbf{f} = \mathbf{f^*}^T\mathbf{P}_j^T\mathbf{D}_i\mathbf{P}_j\mathbf{f}^* \end{align} \end{ceqn} where $\mathbf{D}_i$ depends on the direction and can be calculated as follows, assuming the first three components of the pose correspond to the position: \begin{ceqn} \begin{align} \mathbf{D}_i = \mathbf{W}\mathbf{H}(\mathbf{H}^T\mathbf{W}\mathbf{H})^{-1}\mathbf{A}^T_{i}\mathbf{A}_{i}(\mathbf{H}^T\mathbf{W}\mathbf{H})^{-1}\mathbf{H}^T\mathbf{W} \end{align} \end{ceqn} \begin{ceqn} \begin{align} \mathbf{A}_1 &= \begin{bmatrix} 1 & 0 & 0 & 0&0&0\end{bmatrix} \\ \mathbf{A}_2 &= \begin{bmatrix} 0 & 1 & 0 & 0&0&0\end{bmatrix}\\ \mathbf{A}_3 &= \begin{bmatrix} 0 & 0 & 1 & 0&0&0\end{bmatrix}. \end{align} \end{ceqn} where $i$ = 1, 2, 3 for the $x$-direction, $y$-direction, and $z$-direction, respectively. 
From Eq.~\ref{eq:16}, we find that a larger fault vector $\mathbf{f^*}$ introduces a larger error in the localization solution. However, because the outlier rejection test is passed, the maximum value of $\mathbf{f^*}^T\mathbf{P}_j^T\mathbf{S}\mathbf{P}_j\mathbf{f}^*$ is constrained to be less than $\delta$. As a result, the vector $\mathbf{f}^*$ introduced by the outlier needs to be allocated to maximize the outlier-induced error in the solution while still meeting the condition that $\mathbf{f^*}^T\mathbf{P}_j^T\mathbf{S}\mathbf{P}_j\mathbf{f}^* = \delta$~\cite{angus2006raim}. The square of the outlier-induced error that the protection level takes account of can be determined by solving the optimization problem: \begin{equation} \begin{array}{rrclcl} \displaystyle \max_{\mathbf{f}^*, \mathbf{P}_j \in \mathbf{P}} & \multicolumn{3}{l}{\mathbf{f^*}^T\mathbf{P}_j^T\mathbf{D}_i\mathbf{P}_j\mathbf{f}^*}\\ \textrm{s.t.} & \mathbf{f^*}^T\mathbf{P}_j^T\mathbf{S}\mathbf{P}_j\mathbf{f}^* = \delta. \end{array} \end{equation} According to~\cite{angus2006raim}, we can eliminate $\mathbf{f}^*$ and simplify this constrained optimization problem to an unconstrained one as follows: \begin{equation} \begin{array}{rrclcl}\label{eq:simplified op} \displaystyle \max_{\mathbf{P}_j \in \mathbf{P}} & \multicolumn{3}{l}{ \mathbf{\Lambda}_{\max}(\mathbf{D}_i,\mathbf{P}_j,\mathbf{S})\delta} \end{array} \end{equation} where $\mathbf{\Lambda}_{\max}(\mathbf{D}_i,\mathbf{P}_j,\mathbf{S})$ is the largest eigenvalue of \begin{ceqn} \begin{align} \mathbf{P}_j^T\mathbf{D}_i\mathbf{P}_j(\mathbf{P}_j^T\mathbf{S}\mathbf{P}_j)^{-1}. 
\end{align} \end{ceqn} We can iterate through all the features to find the $\mathbf{P}_j$ that gives the maximum value of Eq.~\ref{eq:simplified op}, which is defined as: \begin{equation} \mathbf{P}^* = \begin{array}{rrclcl} \displaystyle \argmax_{\mathbf{P}_j \in \mathbf{P}} & \multicolumn{3}{l}{ \mathbf{\Lambda}_{\max}(\mathbf{D}_i,\mathbf{P}_j,\mathbf{S})\delta}. \end{array} \end{equation} As a result, the component of the protection level that takes account of the outlier-induced error can be found as: \begin{ceqn} \begin{align} \varepsilon_f &= \sqrt{ \mathbf{\Lambda}_{\max}(\mathbf{D}_i,\mathbf{P}^*,\mathbf{S})\delta}. \end{align} \end{ceqn} The noise-induced error, $\varepsilon_n$, in the localization solution can be calculated using the following equations: \begin{ceqn} \begin{align}\label{eq:pl_noise} \varepsilon_n &= k\sqrt{[(\mathbf{H}^T\mathbf{W}\mathbf{H})^{-1}]_{i,i}} \end{align} \end{ceqn} where $i$ = 1, 2, 3 for $x$-direction, $y$-direction, $z$-direction, respectively, and $k$ is the number of standard deviations corresponding to the specified detection probability $P_d$. In real-life applications, $P_d$ is normally set to be 99.73\% and the corresponding $k = 3$. Incorporating both the outlier-induced error and the noise-induced error, we can define the protection level for visual localization as follows: \begin{flalign} \label{eq:plx} PL_i &= \varepsilon_f + \varepsilon_n \\ &=\sqrt{ \mathbf{\Lambda}_{\max}(\mathbf{D}_i,\mathbf{P}^*,\mathbf{S})\delta}+k\sqrt{[(\mathbf{H}^T\mathbf{W}\mathbf{H})^{-1}]_{i,i}} \end{flalign} where $i$ = 1, 2, 3 for $x$-direction, $y$-direction, and $z$-direction, respectively. $PL_x$, $PL_y$, $PL_z$ are equal to $PL_1$, $PL_2$, and $PL_3$, respectively. \section{PERFORMANCE EVALUATION}\label{metric} The best error bound for an estimation process should be as tight as possible while still bounding the error at all times. 
However, for nonlinear systems with outliers, it is not possible to guarantee that a proposed bound will successfully bound the true error at all times. To the best of our knowledge, there is no existing metric that quantitatively evaluates error bound performance. We therefore propose a novel relaxed bound tightness (RBT) metric that quantitatively evaluates the performance of error bounds for applications where the error is assumed to follow a Gaussian distribution. The metric, $\mathcal{Z}$, is calculated as follows: \begin{ceqn} \begin{align} \mathcal{Z} = \sqrt{\frac{\sum_{i=1}^{N}\varrho (\frac{\nu_i -e_i}{\sigma_i})^2}{N}} \end{align} \end{ceqn} where $\nu_i$ and $e_i$ are respectively the error bound and the error for a sample $i$, $N$ is the number of samples, $\sigma_i$ is the standard deviation of the error for the given sample, and $\varrho$ is a weight function defined as follows: \begin{equation} \varrho = \begin{dcases} 1 & \nu_i \geq e_i \\ \tau & \nu_i < e_i \end{dcases}. \end{equation} Because the bound is supposed to be an upper bound rather than a prediction of the expectation of the error, the weight function $\varrho$ should penalize error bounding failures more heavily than loose bounds. The coefficient $\tau$ decides to what extent the metric favors a tight error bound with a higher probability of failure over a conservative bound with a low probability of failure, and vice versa; it is set depending on a user-defined detection probability $P_d$, the minimum probability, required by the user, that the method correctly bounds the error. To determine the value of $\tau$, we assume the error follows a Gaussian distribution without outliers. Let $\Phi^{-1}$ be the quantile function for a Gaussian distribution, and define the ideal error bound, $\upsilon^*$, to be \begin{equation} \upsilon^* = \Phi^{-1}\left(1-\frac{1-P_d}{2}\right). 
\end{equation} This choice of bound corresponds exactly with the definition of the detection probability $P_d$ for a Gaussian distribution. Given the bound $\upsilon^*$, we then solve for the value of $\tau$ that minimizes the metric $\mathcal{Z}$. Note that because the RBT metric divides the difference between the bound and the true error by the standard deviation for each sample, the value of $\tau$ is independent of $\sigma_i$, given that the error is assumed to follow a Gaussian distribution. For a safety-critical application, $P_d$ should be set close to 100\% so that the metric prefers a method that is conservative but is able to correctly bound the error for a higher percentage of the time. For applications where safety is less critical, $P_d$ can be set less aggressively, so that the metric favors a method that provides tighter bounds but might fail to bound the error at certain times. \section{EXPERIMENTAL RESULTS}\label{results} In order to validate the proposed method and determine whether the protection level correctly bounds the translational errors in each direction, we perform experiments on the machine hall sequences of the well-known EuRoC dataset~\cite{burri2016euroc}. The EuRoC dataset is collected using a micro aerial vehicle (MAV), and the ground truth poses of the MAV in the machine hall are captured by a Leica MS50 laser tracker. We take advantage of the localization mode of ORB-SLAM2~\cite{mur2017orb} and add the proposed outlier rejection and protection level calculation modules. We first construct the map using the ground truth poses and then run the localization mode of ORB-SLAM2 with the proposed integrity monitoring algorithm to obtain the estimated poses and the protection levels for each frame. For stereo vision-based localization, no initialization is needed to recover the scale. As a result, we start the estimation when the MAV is static, after the initialization phase at the start of each sequence. 
Afterwards, we align the ground truth with the estimated trajectory and evaluate the error. Finally, we compare the protection level and the $3\sigma$ bound in each direction with the true translational error for each frame. In this experiment, the false alarm probability $P_{fa}$ is set to 0.05 and $k$ in Eq.~\ref{eq:pl_noise} is set to 3, which are typical values used for robotics applications~\cite{ahn2012board}. In ORB-SLAM2, the covariance matrix $\mathbf{Q}_j$ for each feature is related to the scale at which the keypoint is extracted~\cite{mur2017orb}. Because ORB-SLAM2 is a vision-only SLAM algorithm, it is the relative value between the covariances of the features that decides their importance in determining the pose of the camera, rather than the absolute value of the covariances. At first, we perform the experiments using the default ORB-SLAM2 covariance value, which is set to 1 pixel for features detected in the base scale level. Fig.~\ref{fig:mh01_z} shows the protection level, the $3\sigma$ bound, and the true errors in the $z$-direction for the entire trajectory of \textit{MH\_01\_easy} using the 1-pixel assumption. It is found that the protection level is able to bound the error for more than 95\% of the frames, but $3\sigma$ is only able to bound it for around 50\% of the frames. Possible causes for this drastic failure of the $3\sigma$ bound are, first, that there are outlier features in the frames where $3\sigma$ does not bound the error, and second, that the covariance assumed for the inlier measurement noise in the measurement model is too small. To avoid the second case, we perform the experiments with assumptions of 1, 1.5, and 2 pixel covariance for features detected in the base scale level. These are typical values used for the covariance of the camera measurement model~\cite{huang2014towards, wu2015square}. 
\begin{figure}[ht] \begin{center} \includegraphics[width=0.9\linewidth]{figs/mh01_z.png} \end{center} \caption{Protection level, $3\sigma$, and errors vs. frames in the $z$-direction for the \textit{MH\_01\_easy} sequence in the EuRoC dataset using the $1$-pixel covariance assumption. The $3\sigma$ approach only correctly bounds the error for around 54\% of the frames, while the protection level correctly bounds the error for more than 95\%.} \label{fig:mh01_z} \end{figure} \begin{figure*}% \centering \begin{subfigure}{0.95\columnwidth} \includegraphics[width=\columnwidth]{figs/mh01_cdf.png}% \caption{MH\_01\_easy}% \end{subfigure}\hfill% \begin{subfigure}{0.95\columnwidth} \includegraphics[width=\columnwidth]{figs/mh02_cdf.png}% \caption{MH\_02\_easy}% \end{subfigure}\hfill% \begin{subfigure}{0.95\columnwidth} \includegraphics[width=\columnwidth]{figs/mh03_cdf.png}% \caption{MH\_03\_medium}% \end{subfigure}\hfill% \begin{subfigure}{0.95\columnwidth} \includegraphics[width=\columnwidth]{figs/mh04_cdf.png}% \caption{MH\_04\_difficult}% \end{subfigure}\hfill% \begin{subfigure}{0.95\columnwidth} \includegraphics[width=\columnwidth]{figs/mh05_cdf.png}% \caption{MH\_05\_difficult}% \end{subfigure}\hfill% \caption{Cumulative distribution plots for the difference between the error bound and the error for the machine hall sequences in the EuRoC dataset. Compared with the $3\sigma$ method, the protection level correctly bounds the error for a higher percentage of frames for all sequences.} \label{fig:cdf} \end{figure*} In Fig.~\ref{fig:cdf}, we compare the protection level method and the $3\sigma$ method by plotting the cumulative distribution of the difference between the error bound and the true error in the $x$-direction for different assumptions of the measurement covariance values. 
The ideal error bound would be entirely contained on the positive side of the x-axis in the plot, because we want the error bound to bound the true error exclusively, and also distributed as close to 0 as possible, because we want the error bound to be as tight as possible. The intersection between the curves and the vertical line $x = 0$ gives the percentage of frames for which the error bound method fails to bound the error. As expected, the protection level provides a more conservative error bound than $3\sigma$ but is able to bound the error for a higher percentage of frames. If we use the 1-pixel assumption, which is the default covariance value of ORB-SLAM2, we find that $3\sigma$ is only able to correctly bound the error in the $x$-direction for about 35\% of the frames for \textit{MH\_01\_easy}, 54\% for \textit{MH\_02\_easy}, 22\% for \textit{MH\_03\_medium}, 43\% for \textit{MH\_04\_difficult}, and 45\% for \textit{MH\_05\_difficult}. However, the protection level is able to bound the error for about 85\% of the frames for \textit{MH\_03\_medium} and around 95\% for the rest of the sequences. If we use the 1.5-pixel and 2-pixel assumptions, the $3\sigma$ method is able to correctly bound the error at a higher percentage rate, but still significantly less often than the protection level method, which correctly bounds the error for approximately 95\% of the frames for \textit{MH\_03\_medium} and near 100\% for the rest of the sequences. Similar results hold for the $y$-direction and $z$-direction. We also compare the proposed protection level with the $3\sigma$ method using the metric proposed in Sec.~\ref{metric}. We set the parameter $P_d$ to 99.73\%, because this is one of the most commonly used detection probabilities in real-life applications and also corresponds with the $3\sigma$ bound used for comparison. 
Since we determine the penalty coefficient $\tau$ in the metric using a Gaussian distribution and choose $P_d$ to be 99.73\%, the metric favors the $3\sigma$ method if the error truly follows a zero-mean Gaussian distribution. However, from Table~\ref{performance}, it is found that the proposed protection level outperforms the $3\sigma$ method for all sequences and all three covariance assumptions. The results show that feature-based visual measurements are susceptible to error, and thus the error in the position solution does not perfectly follow a Gaussian distribution. Further, the protection level approach is able to correctly bound the error at a higher percentage rate and outperforms $3\sigma$ on the chosen metric. \captionsetup[table]{skip=0pt} \begin{table*}[!ht] \centering \captionof{table}{Performance comparison using the proposed RBT metric} \label{performance} \begin{tabular}{||c | c| c c| c c| c c||} \hline \multirow{2}{*}{Sequence} & \multirow{2}{*}{Axis} & \multicolumn{2}{c|}{1 pixel} &\multicolumn{2}{c|}{1.5 pixels}& \multicolumn{2}{c||}{2 pixels} \\ & & PL & $3\sigma$ & PL & $3\sigma$ & PL & $3\sigma$ \\ \hline \multirow{3}{*}{MH\_01\_easy} &x & 126.2 & 398.0 & 38.1 & 235.1 & \textbf{17.4} & 159.3 \\ &y & 59.8 & 358.3 & \textbf{18.8} & 203.1 & 19.0 & 134.5 \\ &z & 127.3 & 206.0 & 45.7 & 104.6 & \textbf{25.7} & 68.6 \\ \hline \multirow{3}{*}{MH\_02\_easy} &x & 121.2 & 317.3 &30.8 & 179.1 & \textbf{20.3} & 103.8 \\ &y & 60.5 & 353.9 &25.2 & 205.7 & \textbf{21.7} & 134.8 \\ &z & 95.2 & 221.2 &25.7 & 125.0 & \textbf{23.7} & 71.1 \\ \hline \multirow{3}{*}{MH\_03\_medium} &x & 1053.5 & 1277.5 & 641.1 & 830.1 & \textbf{441.0} & 607 \\ &y & 682.2 & 971.8 & 389.3 & 622.7 & \textbf{252.3} & 449.8 \\ &z & 1056.1 & 1265.3 & 609.1 & 819.9 & \textbf{397.0} & 598.5 \\ \hline \multirow{3}{*}{MH\_04\_difficult} &x & 143.8 & 340.9 & 48.1 & 200.9 &\textbf{21.8} & 132.6 \\ &y & 105.3 & 336.6 & 41.9 & 199.5 & \textbf{26.3} & 133.4\\ &z & 89.8 & 165.5 & 48.7 & 88.7 & \textbf{32.2} 
& 53.8 \\ \hline \multirow{3}{*}{MH\_05\_difficult} &x & 51.8 & 256.6 & 25.6 & 146.9 & \textbf{21.8} &132.6\\ &y & 30.0 & 215.3 & 28.3 & 118.1 & \textbf{26.3} &133.4\\ &z & 50.1 & 173.3 & \textbf{26.9} & 93.8 & 32.2 &53.8 \\ \hline \end{tabular} \end{table*} The results for both the protection level and $3\sigma$ are significantly worse for the \textit{MH\_03\_medium} sequence, because the true errors obtained in this sequence are significantly larger than in the other sequences. Our hypothesis is that the map points might contain a number of errors for this sequence. To verify this hypothesis, we perform the experiments using the map constructed by the SLAM mode of ORB-SLAM2, instead of the map constructed using ground truth, and evaluate the localization solution against the SLAM solution instead of the ground truth poses, which removes the effect of the map errors on the solution. For the \textit{MH\_03\_medium} sequence, the translation RMSE is 0.011 m in this case, which is much smaller than the 0.032 m RMSE obtained when we evaluate the errors against the ground truth. The performance for the protection level in the $x$-direction evaluated using the proposed metric is 17.8, 18.4, and 18.7 for the 1-pixel, 1.5-pixel, and 2-pixel assumptions, respectively, while the performance for the $3\sigma$ approach is 90.0, 43.2, and 23.2. The protection level still outperforms the $3\sigma$ method using the proposed RBT metric. The same holds for the other two directions and the other sequences. We notice that although the protection level bounds the error more reliably, it produces a looser error bound than $3\sigma$. The potential issue is that it could result in more false positive warnings during operation. We believe the selection of the error bound should depend on the safety standard of the application, and the proposed protection level is more suitable for life-critical applications. 
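For concreteness, the protection-level computation of Sec.~\ref{PL} can be sketched numerically. This is an illustrative sketch only: the stacked Jacobian is a random stand-in for real feature geometry, $\mathbf{W}$ is identity, and SciPy supplies the $\chi^2$ quantile for $\delta$:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
N = 10                                  # features -> 3N measurements
H = rng.standard_normal((3 * N, 6))     # stacked Jacobian (synthetic stand-in)
W = np.eye(3 * N)                       # information matrix
delta = chi2.ppf(0.95, 3 * N - 6)       # threshold for P_fa = 0.05
k = 3.0                                 # sigma multiplier for P_d = 99.73%

G = np.linalg.inv(H.T @ W @ H)          # (H^T W H)^{-1}
S = W @ (np.eye(3 * N) - H @ G @ H.T @ W)

def protection_level(i):
    """PL_i = sqrt(Lambda_max * delta) + k * sqrt(G[i, i]) for axis i."""
    A = np.zeros((1, 6)); A[0, i] = 1.0
    D = W @ H @ G @ A.T @ A @ G @ H.T @ W
    # search over single-feature fault hypotheses P_j (3x3 blocks)
    lam_max = max(
        np.linalg.eigvals(
            D[3*j:3*j+3, 3*j:3*j+3] @ np.linalg.inv(S[3*j:3*j+3, 3*j:3*j+3])
        ).real.max()
        for j in range(N)
    )
    return np.sqrt(lam_max * delta) + k * np.sqrt(G[i, i])

PL_x, PL_y, PL_z = (protection_level(i) for i in range(3))
```

Because $\mathbf{P}_j$ simply selects the three rows of feature $j$, the eigenvalue problem reduces to $3 \times 3$ blocks of $\mathbf{D}_i$ and $\mathbf{S}$, which keeps the search over fault hypotheses cheap.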
\section{CONCLUSIONS AND FUTURE WORK} This work presents an integrity monitoring algorithm to estimate the maximum possible translational error in the visual localization solution. The framework is inspired by RAIM and modified to fit the problem formulation of visual localization. It first detects outliers based on the Parity Space Method and then calculates the maximum possible error that the outlier detection algorithm is not expected to detect. In addition, we proposed a relaxed bound tightness metric to quantitatively evaluate the performance of error bounds. Finally, by performing experiments on the EuRoC dataset and evaluating the results using the proposed metric, it is determined that the proposed protection level produces more reliable bounds than the typical $3\sigma$ method and provides an approach to assess the integrity of the solution. Future work will include taking the uncertainty of the map and possible incorrect feature covariance into consideration when calculating the protection level, and further developing the concept to provide integrity monitoring for visual odometry. { \bibliographystyle{IEEEtran}
\section{Introduction} Sea surface temperature (SST) is a common indicator of primary productivity in aquaculture \cite{odonncha2019precision}, critical for operation of marine-based industries such as power plants \cite{huang_coastal_2014}, while being central to better understanding interactions between the ocean and the atmosphere \cite{bayr2019error}. Recent decades have seen enormous progress in approaches to sample SST. In particular, satellite technology has vastly increased the granularity of measurements that are possible, providing long-term global measurements at varying spatial and temporal resolution. MODIS (or Moderate Resolution Imaging Spectroradiometer) is a key instrument aboard the Terra and Aqua satellites, which acquire imagery data for 36 spectral bands, from which information on a range of oceanic processes, including SST, can be extracted. Concurrently, improvements in high-resolution ocean models together with increased computational capabilities have made sophisticated data-assimilation (DA) schemes feasible -- leading to a number of reanalysis products that provide accurate forecasts across broad spatial and temporal scales. Reanalyses yield numerical estimates of the true ocean state by combining models with observations, improving short-term predictions by providing more representative initial conditions. A~state-of-the-art reanalysis is the ERA5 global dataset from the European Centre for Medium-Range Weather Forecasts (ECMWF) \cite{hirahara_era5_2016}. It provides short-term SST forecasts (and hindcasts) on a 32\,km horizontal grid at hourly intervals from a numerical synthesis of ocean models, atmospheric forcing fluxes, and SST measurements. These analysis and forecasting systems face a number of scientific, technical, and practical challenges.
\begin{itemize} \item The computational and operational requirements for ocean simulations at appropriate scales are immense and require high performance computing (HPC) facilities to provide forecasts and services in practical time frames \cite{bell_godae_2015}. \item Operational forecasting systems require robust data assimilation schemes that take account of biases and errors in models and observations \citep{rawlins2007met}. \end{itemize} A consequence of these challenges is that operational forecasting systems are only feasible for large research centres or collaborations that have access to large-scale computing resources and scientific expertise. An alternative approach based on data-intensive computing \cite{hey2009fourth} leverages the large datasets generated by ocean monitoring and modelling tools to train machine-learning-based forecasting models. Once trained, the computational expense of these products is negligible, and conceptually, one can develop transportable models that can be trained to learn features at different geographical locations. This paper presents a suite of data-driven modelling approaches for developing robust systems to predict sea-surface temperature (SST). An automatic feature-engineering module was implemented to identify the key features at disparate geographical locations to provide a transportable forecasting system. Finally, the different models were averaged using a model-scoring and weighting approach to provide an ensemble prediction that outperformed the best-performing individual model. Contributions are as follows: \begin{itemize} \item We evaluated the predictive skill of a range of data-driven modelling approaches from the perspective of (1)~balancing computational complexity with predictive skill and (2)~leveraging ensemble aggregation to improve robustness.
\item We developed an autonomous feature-engineering module to (1)~improve the portability of the model to different geographical locations and (2)~reduce the appetite for training data by providing a more intelligent supply of explanatory variables. \item Finally, we assessed the performance of the modelling framework globally, against a state-of-the-art physics-based model. \end{itemize} While the idea of using machine learning (ML) to provide computationally cheaper surrogate models has been previously explored, the distinctive characteristics of SST lie in their complex temporal dependence structure and multi-level seasonality. To our knowledge, this application has not yet been considered in the existing literature. We demonstrate the viability of the approach to capture the short- and long-term trends: integrating different ML-based models, each with different temporal performance characteristics, in an ensemble approach provides accuracy on par with large-scale complex models. In the next section, we discuss prior research in the domain. Subsequently, the different models are introduced along with the feature extraction and ensemble aggregation techniques. Section \ref{sec:results} compares the performance of the different models along with the predictive accuracy of individual and ensemble aggregated models. The portability of the system to different geographical locations is discussed. Finally, we present conclusions from the research and discuss future work. \section{Related Work} A~wide variety of operational SST forecasting products exist that leverage physics-based circulation modelling and data assimilation to resolve temperature distributions. A~representative example is the forecasting system for the North-West Atlantic from the NEMO Community Ocean Model, which provides a variety of ocean variables at 12\,km resolution.
Inputs to the system include: lateral boundary conditions from the open-ocean supplied by a (coarser) global model, atmospheric fluxes from the Met Office Unified Model and river inputs from 320 European rivers \cite{odea_operational_2012}. Other examples include the National Centers for Environmental Prediction Climate Forecast System, which provides global predictions of SST at 110\,km resolution \cite{saha_ncep_2014}, and the US Navy HYCOM Global Forecasting System, which provides 5-day forecasts at resolution ranging from 4--20\,km \cite{chassignet_godae_2009}, together with localised, regional models at higher resolution \cite{haidvogel2008ocean, chao_development_2009,odonncha2015characterizing}. The common feature of these modelling systems is the high computational demands that generally limit either the precision (coarse global models) or the size of the domain (high-resolution, regional models). Due to the heavy computational overhead of physical models, there is an increasing trend to apply data-driven deep-learning (DL) / machine-learning methods to model physical phenomena \cite{bezenac_deep_2017,wiewel_latent_2018}. Application of ML-based approaches has been categorised into three areas \cite{walker_machine_2016}: \begin{enumerate} \item The system's deterministic model is computationally expensive and ML can be used as a code accelerator. \item There is no deterministic model but an empirical ML-based model can be derived using existing data. \item Classification problems where one wishes to identify specific spatial processes or events. \end{enumerate} A~number of studies have investigated data-driven approaches to provide computationally cheaper surrogate models, applied to such things as wave forecasting \cite{james_machine_2018}, air pollution \citep{hahnel2020using}, viscoelastic earthquake simulation \cite{devries_enabling_2017}, and water-quality investigation \cite{arandia_surrogate_2018}.
Pertinent examples include: ML based approaches to spatially interpolate environmental variables and improve precision of solution \cite{li_application_2011}; DL-based approaches to increase the resolution of satellite imagery through down-scaling techniques \cite{Ducournau_deep_2016}; and data-mining applied to the large datasets generated by ocean monitoring and modelling tools to identify pertinent events such as harmful algal blooms \cite{Gokaraju_machine_2011}. Distinctive characteristics of SST are their complex temporal-dependence structure and multi-level seasonality. There are only a few options to describe systems with such characteristics, including: (1)~Generalised Additive Models (GAMs) from classic statistics, (2)~Random Forest (RF) and extreme gradient boosting (XGBoost) from ML, and (3)~Multi-Layer Perceptron (MLP) and Long Short-Term Memory (LSTM) models from DL. These five models are all considered in this paper. \section{Machine Learning} \label{sec:ML} Given sufficient data, ML models have the potential to successfully detect, quantify, and predict various phenomena in the geosciences. While physics-based modelling involves providing a set of inputs to a model which generates the corresponding outputs based on a non-linear mapping encoded from a set of governing equations, supervised machine learning instead learns the requisite mapping by being shown a large number of corresponding inputs and outputs. In ML parlance, the model is trained by being shown a set of inputs (called features) and corresponding outputs (termed labels) from which it learns the prediction task -- in our case, given some specific atmospheric measurements we wish to predict the sea surface temperature. With the availability of sufficient data, the challenge reduces to selecting the appropriate ML model or algorithm, and prescribing suitable model settings or \textit{hyperparameters}.
A hyperparameter is a characteristic of a model that is external to the model and whose value cannot be estimated from data. In contrast, a parameter is an internal characteristic of the model and its value can be estimated from data during training. Classical work in machine learning and optimisation introduced the ``no free lunch'' theorem \cite{wolpert1997no}, demonstrating that no single machine learning algorithm can be universally better than any other in all domains -- in effect, one must try multiple models and find one that works best for a particular problem. This study considers five different machine learning algorithms to predict SST. The study aims to 1) evaluate the performance of each to predict SST, 2) investigate whether simple model aggregation techniques can improve predictive skill and 3) provide insight that can be used to guide selection of an appropriate model for future studies. While the specifics of each individual model vary, the fundamental approach consists of solving an optimisation problem on the training data until the outputs of the machine learning model consistently approximate the results of the training data. In the remainder of this section, we will describe each ML model used and provide heuristics for the selection of appropriate hyperparameters. The objective can be summarised as relating a univariate response variable $y$ to a set of explanatory variables $\mathbf{x} = \{x_1, x_2, \ldots, x_i\}$ (representing, for example, air temperature, seasonal identifier, current SST, etc.). \subsection{Generalised Additive Models} \label{subsec:GAM} Linear regression models are ubiquitous in statistical modelling and prediction, providing a simple technique to relate predictors or features to the outcome. The relationship is linear and can be written for a single instance as: \begin{equation} y = \beta_0 + \beta_1 x_1 + \ldots
+ \beta_i x_i + \epsilon \end{equation} where the $\beta_i$'s are unknown parameters or coefficients that must be determined, the variables $x_i$ are features that can explain the response variable $y$, and the error $\epsilon$ is a Gaussian random variable with expectation zero. The appeal of the linear regression model lies primarily with its simplicity and ease of interpretability. Since prediction is modelled as a weighted sum of the features, one can easily quantify the effect of changes to features on the outcome. This simplicity is also its greatest weakness, since in many real-world situations the relationship between the features and the outcome might be nonlinear, features may interact with each other, and the assumption of a Gaussian distribution of errors may be untrue. Generalised Additive Models (GAMs) extend linear models by instead relating the outcome to unknown smooth \textit{functions} of the features. Predicting $y$ from the vector of covariates $\mathbf{x}$ at time $t$ takes the form \cite{Hastie1990}: \begin{equation} g(y) = \alpha + f_1\left(x_{1}\right) + f_2\left(x_{2}\right) + \cdots + f_i\left(x_{i}\right) + \epsilon, \end{equation} where each $f_i\left(\cdot\right)$ is an unspecified function and $g(\cdot)$ is a link function defining how the response variable relates to the linear predictor of explanatory variables (e.g. binomial, normal, Poisson) \cite{wijaya_forecasting_2015}. The functions $f_i\left(\cdot\right)$ can be estimated in many ways, most of which involve computer-intensive statistical methods. The basic building block of all these variations is a scatterplot smoother, which takes a scatter plot and returns a fitted function that reasonably balances smoothness of the function against fit to the data. The estimated function $f_i\left(x_i\right)$ can then reveal possible nonlinearities in the effect of the explanatory variable $x_i$.
GAM models are particularly appealing for analysing time-series datasets in the geosciences due to interpretability, additivity of signal and regularisation: as mentioned, GAM lends itself towards interpretable models where the contribution of each explanatory variable is easily visualised and interpreted; time-series signals can often be explained by multiple additive components such as trends, seasonality and daily fluctuations which can be readily incorporated in GAM models; as opposed to simpler regression models targeted only at reducing the error, GAM admits a tuning parameter $\lambda$ that guides the ``smoothness'' of the model prediction (allowing us to explicitly balance the bias/variance tradeoff) \cite{friedman2001elements}. This parameter as well as the number of splines and polynomial-spline order are typically specified by the user based on heuristics, experience and model performance. \subsection{Random Forest} \label{subsec:RF} Moving from statistical learning models such as GAM to those from the machine learning literature, Random Forests (RF) have demonstrated excellent performance in complex prediction problems characterised by a large number of explanatory variables and nonlinear dynamics. RF is a classification and regression method based on the \textit{aggregation} of a large number of decision trees. Decision trees are a conceptually simple yet powerful prediction tool that breaks down a dataset into smaller and smaller subsets while at the same time an associated decision tree is incrementally developed. The resulting intuitive pathway from explanatory variables to outcome serves to provide an easily interpretable model. In RF \cite{breiman_randomForest_2001}, each tree is a standard Classification or Regression Tree (CART) that uses what is termed node ``impurity'' as a splitting criterion and selects the splitting predictor from a randomly selected subset of predictors (the subset is different at each split).
Each node in the regression tree corresponds to the average of the response within the subdomains of the features corresponding to that node. The node impurity gives a measure of how badly the observations at a given node fit the model. In regression trees this is typically measured by the residual sum of squares within that node. Each tree is constructed from a bootstrap sample drawn with replacement from the original data set, and the predictions of all trees are finally aggregated through majority voting (or, for regression, averaging) \citep{boulesteix2012overview}. While RF is popular for its relatively good performance with little hyperparameter tuning (i.e. works well with the default values specified in the software library), as with all machine learning models it is necessary to consider the bias-variance tradeoff -- the balance between a model that tracks the training data perfectly but does not generalise to new data and a model that is biased or incapable of learning the training data characteristics. Some of the hyperparameters to tune include the number of trees, maximum depth of each tree, number of features to consider when looking for the best split, and splitting criteria \citep{probst2019hyperparameters}. \subsection{XGBoost} \label{subsec:xgboost} While XGBoost shares many characteristics and advantages with RF (namely interpretability, predictive performance and simplicity), a key difference facilitating performance gain is that decision trees are built \textit{sequentially} rather than \textit{independently}. The XGBoost algorithm was developed at the University of Washington in 2016 and since its introduction has been credited with winning numerous Kaggle competitions and being used in multiple industry applications.
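The two ensemble strategies (independent trees in RF versus sequentially grown trees in boosting) can be sketched with scikit-learn on synthetic data; GradientBoostingRegressor is used here only as a stand-in for XGBoost, whose package exposes a similar fit/predict interface:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))   # hypothetical features (lags, weather, ...)
y = X[:, 0] * X[:, 1] + np.sin(X[:, 2]) + rng.normal(0, 0.1, 1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Random forest: trees grown independently on bootstrap samples, then averaged.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
# Boosting: each new tree fits the residuals of the current ensemble.
gb = GradientBoostingRegressor(n_estimators=200, learning_rate=0.1,
                               random_state=0).fit(X_tr, y_tr)

print(f"RF R^2 = {rf.score(X_te, y_te):.3f}, "
      f"boosting R^2 = {gb.score(X_te, y_te):.3f}")
```

The hyperparameters shown (tree count, learning rate) are the ones singled out above for tuning.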
XGBoost provides algorithmic improvements such as a sparsity-aware algorithm for sparse data and a weighted quantile sketch for approximate tree learning, together with optimisation towards distributed computing, to build a scalable tree boosting system that can process billions of examples \cite{chen_boost_2006}. The tree ensemble model follows a similar framework to RF with prediction of the form \cite{chen_boost_2006}: \begin{equation} \hat{y}_i = \phi\left(x_i\right) = \sum _{k=1}^{K} f_k\left(x_i\right), \quad f_k \in \mathcal{F}, \end{equation} where we consider $K$ trees, $\mathcal{F} = \{f\left(x\right) = w_{q\left(x\right)}\}$ represents a set of classification and regression trees (CART), $q$ represents each independent decision-tree structure, and $w_{q(x)}$ is the weight of the leaf to which the input $x$ is assigned. $\mathcal{F}$ is computed by minimising the objective function \cite{chen_boost_2006}: \begin{equation} \label{eqn:tree} \begin{split} \mathcal{L}_\phi & = \sum_i l\left(\hat{y}_i, y_i\right) + \sum_k \Omega\left(f_k\right), \\ & \mathrm{with} \quad \Omega\left(f\right) = \frac{1}{2} \lambda \|w\|^2, \end{split} \end{equation} with $l$ being a differentiable convex loss function (for example the mean squared error) of the difference between the prediction $\hat{y}_i$ and the observation $y_i$ for each realisation $i$. The regularisation term, $\Omega$, smooths the final weights to avoid over-fitting ($\lambda$ is a regularisation coefficient). Furthermore, a restriction to a maximal tree depth serves to regulate model complexity. \subsection{Multi-Layer Perceptron} \label{subsec:MLP} The first DL-based approach investigated was a Multi-Layer Perceptron (MLP) model.
An MLP network solves an optimisation problem to compute the weights and biases that represent the nonlinear function mapping inputs to the best representation of outputs, $\mathbf{\hat{{{y}}}}$: \begin{equation} \label{eqn:ANN} g\left(\mathbf{x};\mathbf{\Theta}\right)=\mathbf{\hat{y}}. \end{equation} $\mathbf{\Theta}$ denotes the mapping matrix of weights and biases that represents the relationship between SST and explanatory variables, $\mathbf{x}$ in the form of a neural network. An MLP model is organised in sequential layers made up of interconnected neurons. As illustrated in Figure~\ref{fig:deeplearning}, the value of neuron $n$ in hidden layer $\ell$ is calculated as: \begin{linenomath*} \begin{equation} a_{n}^{\left(\ell\right)} = f\left(\sum^{\mathcal{N}_{\ell-1}}_{k=1} w_{k,n}^{\left(\ell\right)} a_{k}^{\left(\ell-1\right)} + b_{n}^{\left(\ell\right)}\right), \end{equation} \end{linenomath*} \noindent where $f$ is the activation function, $\mathcal{N}_{\ell-1}$ is the number of nodes in layer $\ell-1$, $w_{k,n}^{\left(\ell\right)}$ is the weight projecting from node $k$ in layer $\ell-1$ to node $n$ in layer $\ell$, $a_{k}^{\left(\ell-1\right)}$ is the activation of neuron $k$ in hidden layer $\ell-1$, and $b_{n}^{\left(\ell\right)}$ is the bias added to hidden layer $\ell$ contributing to the subsequent layer. The activation function selected for this application was the rectified linear unit (ReLU) \citep{nair2010rectified}: \begin{linenomath*} \begin{equation} f\left(z\right)=\max\left(0,z\right). \end{equation} \end{linenomath*} \begin{figure}[ht!] 
\centering \includegraphics[width=1.0\textwidth]{DeepLearning_Yushan.png} \caption{Schematic of an MLP machine learning network as illustrated in \citep{james_machine_2018}.} \label{fig:deeplearning} \end{figure} A~loss function is defined in terms of the squared error between the observations and the machine-learning prediction plus a regularisation contribution controlled by $\lambda$: \begin{linenomath*} \begin{equation} \label{eqn:lossfunction} \vartheta=\frac{1}{2}\sum_{k=1}^m\lVert{{\mathbf y}}^{\left( k\right)}-{\hat{{\mathbf{y}}}}^{\left( k\right)}\rVert_2^2 + \lambda \lVert{{\boldsymbol{\Theta}}}\rVert_2^2, \end{equation} \end{linenomath*} where $\lVert\cdot\rVert_2$ denotes the $L_2$ norm. The regularisation term penalises complex models by enforcing weight decay, which prevents the magnitude of the weight vector from growing too large because large weights can lead to overfitting -- a condition where the model fits the training data well but does not generalise to new data \citep{goodfellow2016deep}. By minimising the loss function, the supervised machine learning algorithm identifies the ${{\boldsymbol{\Theta}}}$ that yields $\hat{{\mathbf{y}}}\approx{\mathbf y}$. As shown in Figure~\ref{fig:deeplearning}, a machine learning algorithm transforms an input vector (layer) to an output layer through a number of hidden layers. The machine learning model is trained on a data set to establish the weights parameterising the space of nonlinear functions mapping from ${\mathbf x}$ to ${\mathbf y}$. Hyperparameter tuning is required to balance the effective capacity of the model and the complexity of the task. In neural network type approaches, increasing the number of layers and of hidden units per layer increases the capacity of the model to represent complicated functions. Hence increasing the depth of the network can improve performance on the \textit{training} data but run the risk of overfitting -- thereby reducing generalisation potential.
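The neuron update, ReLU activation, and regularised loss defined above translate directly into NumPy. The following sketch uses purely illustrative layer sizes, random initialisation, and target value (a single-example version of the loss):

```python
import numpy as np

def relu(z):
    # f(z) = max(0, z), applied element-wise
    return np.maximum(0.0, z)

def mlp_forward(x, weights, biases):
    """Forward pass: a^(l) = f(W^(l) a^(l-1) + b^(l)), linear output layer."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(W @ a + b)
    return weights[-1] @ a + biases[-1]

rng = np.random.default_rng(0)
sizes = [6, 16, 16, 1]   # hypothetical: 6 features -> two hidden layers -> SST
weights = [rng.normal(0, 0.5, (n, m)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

x = rng.normal(size=6)
y_hat = mlp_forward(x, weights, biases)

# Regularised squared-error loss: data term + lambda * ||Theta||^2
lam, y = 1e-3, np.array([12.5])
loss = 0.5 * np.sum((y - y_hat) ** 2) + lam * sum(np.sum(W ** 2) for W in weights)
print(y_hat, loss)
```

Training would then adjust the weights and biases to minimise this loss over the full training set.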
Standard hyperparameters to tune in neural networks include the number of layers, number of nodes and the regularisation coefficient $\lambda$. \subsection{Long short-term Memory Model} \label{subsec:lstm} Cognisant of the temporal nature of the data, we investigated the performance of recurrent neural network (RNN) type models. A~fundamental extension of RNNs compared to MLP is parameter sharing across different parts of the model. This has intuitive applicability to the forecasting of time-series variables with historical dependency. An RNN with a single cell recursively computes the hidden vector sequence $\mathbf{h}$ and output vector sequence $\mathbf{y}$ for $t=1,\ldots,T$ in the form \cite{Graves2013}: \begin{equation} \begin{split} & h_t = \mathcal{H} \left(W_{xh}x_t + W_{hh}h_{t-1} + b_h \right), \\ & y_t = W_{hy}h_t + b_y, \end{split} \end{equation} where the $\mathbf{W}$ terms denote weight matrices (e.g. $W_{xh}$ is the input-hidden layer weight matrix), the $\mathbf{b}$ terms denote bias vectors (e.g. $\mathbf{b}_h$ is the hidden-layer bias vector) and $\mathcal{H}$ is the hidden layer function, which is typically implemented as a sigmoid function. In effect, the RNN has two inputs, the present state and the past. Standard RNN approaches have been shown to fail when lags between response and explanatory variables exceed 5--10 discrete timesteps \cite{gers_lstm_1999}. Repeated applications of the same parameters can give rise to vanishing or exploding gradients, leading to model stagnation or instability \cite{goodfellow2016deep}. A number of approaches have been proposed in the literature to address this, with the most popular being LSTM. Instead of a simple weighted dependency, `LSTM cells' also have an internal recurrence (a self-loop) that serves to guide the flow of information and reduce susceptibility to vanishing or exploding gradients.
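The plain RNN recurrence above, with shared weights reused at every timestep, can be sketched in NumPy; all dimensions below are hypothetical:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rnn_forward(xs, W_xh, W_hh, W_hy, b_h, b_y):
    """Unroll h_t = H(W_xh x_t + W_hh h_{t-1} + b_h), y_t = W_hy h_t + b_y."""
    h = np.zeros(W_hh.shape[0])
    ys = []
    for x_t in xs:               # the same weight matrices are reused at every t
        h = sigmoid(W_xh @ x_t + W_hh @ h + b_h)
        ys.append(W_hy @ h + b_y)
    return np.array(ys)

rng = np.random.default_rng(0)
n_in, n_hid, n_out, T = 4, 8, 1, 30   # hypothetical: 4 features, 30 timesteps
W_xh = rng.normal(0, 0.3, (n_hid, n_in))
W_hh = rng.normal(0, 0.3, (n_hid, n_hid))
W_hy = rng.normal(0, 0.3, (n_out, n_hid))
b_h, b_y = np.zeros(n_hid), np.zeros(n_out)

xs = rng.normal(size=(T, n_in))
ys = rnn_forward(xs, W_xh, W_hh, W_hy, b_h, b_y)
print(ys.shape)
```

An LSTM cell replaces the single sigmoid update with the gated update described next, but the unrolling over timesteps is the same.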
Each cell has the same inputs and outputs as an ordinary recurrent network, but also has more parameters and a system of gating units that controls the flow of information. An LSTM model has a number of gates: input, forget, and output gates that decide whether to let new information in, forget information because it is not important, or let the cell state impact the output at the current timestep, respectively. As new input comes in, its impact can be accumulated in the cell, forgotten, or propagated to the final state depending on the activation of the relevant gates \cite{Shi_2015_CNNLSTM}. In analogy to the MLP, we use L2-regularisation of weights represented by the parameter $\lambda$, in an equivalent manner to equation \ref{eqn:lossfunction}. More details on LSTM are provided in \citet{gers_lstm_1999}. \subsection{Feature Engineering} \label{sec:feature_engineering} In traditional modelling based on solving a set of partial differential equations (PDE), the relationship between inputs and outputs is clear -- founded on well-understood physics. Machine learning on the other hand relies on the concept of learning complex, nonlinear relationships between inputs and outputs. While the outputs are clear (the variable we wish to predict), the inputs are more opaque and one wishes to consider all variables that potentially contribute to the output response, while avoiding superfluous data that may hinder performance. When predicting SST, some of the variables that may contribute include a wide range of atmospheric conditions (air temperature, solar radiation, cloud cover, precipitation, wind speed, etc.), autoregressive features (i.e. past values of the response variable -- SST), temporal information (e.g. season, day of year, time of day), and potentially values at neighbouring spatial locations.
Feature engineering is the process of using domain expertise and statistical analysis to extract the most appropriate set of features for a particular problem from the entire set of data that may contribute. The role of feature engineering is to improve predictive accuracy and expedite model convergence by selecting the most appropriate features that explain the response variable and provide maximum value. Excluding important data will limit the predictive skill of the model while superfluous data tends to add noise to the model. Figure \ref{fig:data_exploration} shows multi-year SST data illustrating primary patterns. A~monthly rolling mean of the data (middle plot) was subtracted from the raw data (top plot) with residuals presented (bottom plot). The seasonal pattern of the data is evident, with the yearly cycle capturing a significant portion of the data variance. The data residuals largely represent short-term fluctuations in the data (together with a sensor-uncertainty component). The objective of the modelling was to learn the nonlinear relationships between the explanatory variables and the long- and short-term signals of the data. For machine learning forecasts, the raw data themselves are rarely the most informative and a number of combinations and transformations of the raw data must be considered. The feature variables used for this study consisted of SST historical time series data from the MODIS Aqua satellite, atmospheric data from The Weather Company (TWC), and time features (season, day of year, etc.). From these raw data, several different types of features were designed and investigated. The feature engineering process combined domain expertise (to initially select known variables influencing SST) with statistical analysis to explore the strength of the relationship between a large number of features and the response variable.
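Such a univariate screening of candidate features against the response can be sketched with scikit-learn's feature-selection utilities. The candidate features below (air temperature, lagged SST, wind, plus a pure-noise column) are synthetic stand-ins for the study's actual inputs:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression

rng = np.random.default_rng(0)
n = 365
# Hypothetical candidate features with an annual cycle, plus uninformative ones
air_temp = 15 + 8 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 1, n)
sst_lag1 = 14 + 6 * np.sin(2 * np.pi * np.arange(n) / 365 - 0.2) + rng.normal(0, 0.5, n)
wind = rng.normal(5, 2, n)
noise = rng.normal(size=n)
X = np.column_stack([air_temp, sst_lag1, wind, noise])
y = 0.6 * air_temp + 0.4 * sst_lag1 + rng.normal(0, 0.3, n)   # synthetic SST target

# Univariate F-test per feature; keep the k highest-scoring columns
selector = SelectKBest(score_func=f_regression, k=2).fit(X, y)
print(selector.scores_)        # F-score for each candidate feature
print(selector.get_support())  # boolean mask of retained features
```

Here the two seasonally driven features receive high F-scores and are retained, while the uncorrelated columns are discarded.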
The dependence or correlation between each feature and the response variable was determined based on a univariate feature selection using the SciKit-Learn \citep{sklearn_api} feature selection library. A subset of features with the highest $F$-scores \cite{guyon2008feature} was retained. The implementation of the feature selection approach is described in more detail in Section \ref{subsec:modsetup}. \begin{figure}[h!] \centering \includegraphics[width=0.8\textwidth]{fig1.pdf} \caption{SST time series from MODIS measurements (upper panel), monthly rolling mean (middle panel), and residuals after subtraction of the monthly rolling mean from the SST data (lower panel).} \label{fig:data_exploration} \end{figure} \section{Methodology} \label{sec:methodology} Application of ML techniques can be reduced to a number of steps related to: selection of appropriate ML algorithms, providing sufficient data to train the models, and selecting the correct model hyperparameters (settings or parameters that must be defined outside the learning algorithm) for the model. In the remainder of this section, we will describe the training data used, provide details on each model considered and outline the application of each model to the problem of forecasting SST. \subsection{Input Data} \label{sec:data} Training data were from the MODIS instrument aboard the NASA \textit{Aqua} satellite. MODIS SSTs are produced and made available to the public by the NASA GSFC Ocean Biology Processing Group. The MODIS sensor measures ocean temperature (along with other ocean products such as salinity and chlorophyll concentration) from a layer less than 1\,mm thick at the sea surface. Data are available from 2002 to present at 4\,km horizontal resolution and daily intervals \citep{obpg_modis}.
Calibration of the Pathfinder algorithm coefficients and tuning of instrument configurations produce accurate measurements of SST with mean squared error (MSE) against \textit{in situ} sensors $<\mathrm{0.2}^\circ$C \cite{kilpatric_modis_2015}. These accurate global SST measurements over a multi-decade period serve as an ideal dataset to extract insights using ML. Daily, weekly (8-day), monthly and annual MODIS SST products are available at both 4.63 and 9.26 km spatial resolution and for both daytime and nighttime passes. The particular dataset we used was the MODIS Aqua, thermal-IR SST level 3, 4\,km, daily, daytime product downloaded from the Physical Oceanography Distributed Active Archive Center (PODAAC) \citep{obpg_modis}. This MODIS SST data served as \textit{labels} to the machine learning algorithm while the data were also used as autoregressive (lagged) \textit{features} to the model. As described in Section \ref{sec:feature_engineering}, various combinations of atmospheric variables were provided as features to the model, extracted from The Weather Company through their public API \cite{IBM2018}. The variables used were the 18 atmospheric quantities included as part of the standard weather variables described in the API documentation \citep{TWC-cleanedHistorical}. While we do not have rights to redistribute The Weather Company data, a free API key can be obtained to download the data from the vendor. A key part of any modelling study is validation of the prediction and comparison against benchmark values. While not provided as inputs to the models, we used ECMWF model data to assess predictive skill. The ECMWF ERA5 dataset provides an atmospheric reanalysis of the global climate on a 32\,km horizontal grid at hourly intervals from a numerical synthesis of ocean models and atmospheric forcing fluxes \cite{hirahara_era5_2016}.
We downloaded SST data at the nearest grid cell to the MODIS dataset using the ECMWF Climate Data Store API (CDSAPI) to serve as a validation dataset. \subsection{Model Setup and Training} \label{subsec:modsetup} As described in Section \ref{sec:ML}, there are three primary steps to deployment of a machine learning model: \begin{itemize} \item Feature engineering, where the requisite explanatory data are extracted, processed and combined to be fed to the model (described in Section \ref{sec:feature_engineering}). \item Selection of the most appropriate hyperparameters for the model. \item Training the model by feeding it the training data, from which the learning algorithm finds patterns that map the input attributes to the target. \end{itemize} The first two steps were conducted on a dataset extracted from an arbitrary location in the North Atlantic: ($27^\circ 28^\prime 46.45^{\prime\prime}$ N, $32^\circ 25^\prime43.71^{\prime\prime}$ W). MODIS SST data were collected over 16 years from July 2002 (earliest available data) to December 2018. Satellite measurements are prone to missing data -- for instance, due to cloud cover. For this location, data were missing on 57\% of days, which was representative of data availability at other locations. As data gaps are problematic for the training of time-series models (due to autoregressive dependencies), linear interpolation between adjacent values replaced the missing data. While this can introduce artefacts, the fact that missing values were evenly distributed across the entire dataset, together with the time-series nature of the data, made interpolation the most suitable approach. Moreover, the secondary weather input, the TWC reanalysis data, was complete. As previously described, the experiments compared a number of feature selection approaches to incorporate atmospheric and autoregressive effects.
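The interpolation-based gap filling described above can be sketched with pandas; the short toy series standing in for a cloud-gapped MODIS record is an illustrative assumption:

```python
import numpy as np
import pandas as pd

# Toy daily SST series with gaps (NaN) standing in for cloud-obscured
# MODIS retrievals.
dates = pd.date_range("2018-01-01", periods=7, freq="D")
sst = pd.Series([20.0, np.nan, np.nan, 23.0, 22.0, np.nan, 24.0], index=dates)

# Linear interpolation between adjacent valid values, as used in the text
# to produce a gap-free training series.
filled = sst.interpolate(method="linear")
print(filled.tolist())  # [20.0, 21.0, 22.0, 23.0, 22.0, 23.0, 24.0]
```

Because the daily grid is evenly spaced, positional linear interpolation is equivalent to interpolating in time.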
Initially, the most appropriate number of lags to specify as autoregressive SST features was selected based on heuristics (the different temporal scales involved, such as daily, seasonal and annual) and a trial-and-error search over a limited number of possible lags. To simplify analysis of lag selection, the models were treated as purely autoregressive at this stage (i.e. we only supplied SST at previous timesteps as inputs and did not include weather data). We observed that these simple autoregressive models provided adequate predictive skill for short-term forecasting of up to two days (for longer-term predictions, atmospheric features were critical for performance). Nevertheless, this simplified modelling study enabled insight into the most suitable number of lags (or number of AR steps) to include in each model deployment. For the GAM, RF and XGBoost models, the optimal lag was found to be approximately 30 days, which balanced computational tractability with predictive skill. To incorporate seasonal effects (and also due to greater computational efficiency), the MLP and LSTM models were fed data from up to the previous 400 days (to extend beyond one year of historical trend). It is worth noting that including AR features introduces a temporal dependency, which is important if one wishes to forecast multiple days in advance -- i.e. to make a forecast for day $t+2$, the predicted SST for day $t+1$ is provided as a feature. This allowed for long-term prediction but introduced the possibility of systematic model error and bias (i.\,e.~prediction error accumulated). This is analogous to the model drift observed in numerical modelling studies, where a model forecast can diverge from the true state over time \citep{doi:10.1175/1520-0469(1963)020<0130:DNF>2.0.CO;2}. The AR features described above were combined with time features (season, month and week of year) and various combinations of atmospheric features to construct different model scenario inputs.
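A minimal sketch of assembling the AR-plus-time feature inputs described above; the toy series, the two-day lag count (30 days in the study), and the column names are illustrative assumptions:

```python
import numpy as np
import pandas as pd

# Toy daily SST series standing in for the MODIS record.
idx = pd.date_range("2018-01-01", periods=10, freq="D")
df = pd.DataFrame({"sst": np.linspace(20.0, 24.5, 10)}, index=idx)

# AR features: SST at each of the previous `n_lags` days (30 in the study).
n_lags = 2
for lag in range(1, n_lags + 1):
    df[f"sst_lag{lag}"] = df["sst"].shift(lag)

# Time features: month and ISO week of year, as described in the text.
df["month"] = df.index.month
df["week"] = df.index.isocalendar().week.astype(int)

# Drop the initial rows that lack a full lag history.
features = df.dropna()
print(list(features.columns))  # ['sst', 'sst_lag1', 'sst_lag2', 'month', 'week']
```

Atmospheric columns from the TWC data would be joined onto this frame in the same way to form the full scenario inputs.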
Weather features were selected from atmospheric data consisting of 18 time-dependent atmospheric quantities reporting standard meteorological variables such as air temperature, solar radiation flux, cloud cover and winds \cite{IBM2018}. Three different model scenarios were created from these with different combinations of atmospheric data, namely: \begin{itemize} \item All 18 atmospheric quantities at the desired time were fed to the model (we refer to this scenario as TWC1). \item To reduce the number of covariates (and hence network size and associated demands for training data), a feature-selection module quantified the most important variables. Univariate feature selection was performed by computing $F$-scores from the correlation of each single feature with the output label \cite{guyon2008feature} and retaining the \textit{three} atmospheric features with the highest scores, as described in Section \ref{sec:feature_engineering} (referred to as TWC2). \item This concept was further extended with time-dependent information by assigning univariate scores to lagged values of the atmospheric features selected in scenario 2, and choosing the lags with the highest scores as features (referred to as TWC3). This reflects that SST is also likely to be influenced by atmospheric conditions (e.g. air temperature) on previous days. \end{itemize} The resultant sets of features considered for each model were: AR features with a specific lag (AR), time features (time), and the most appropriate combination of weather features (TWC1, TWC2 or TWC3). For all five models these sets of features were investigated, and for each model the best-performing combination, i.e. the one that minimised error against the test dataset, was selected. Given the different characteristics and complexities of the models, it was not expected that a single feature combination would provide the best performance across all models.
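The univariate $F$-score selection of the TWC2 scenario can be sketched with SciKit-Learn; the synthetic data and the informative-column indices are illustrative assumptions, not the TWC variables:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression

rng = np.random.default_rng(0)

# Synthetic stand-in for the atmospheric feature matrix: 500 samples,
# 18 candidate weather variables (placeholder data, not TWC values).
X = rng.normal(size=(500, 18))
# The response (an SST stand-in) depends only on columns 0, 3 and 7.
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 1.0 * X[:, 7] + 0.1 * rng.normal(size=500)

# Score each feature individually against the label and keep the top three,
# mirroring the TWC2 scenario.
selector = SelectKBest(score_func=f_regression, k=3).fit(X, y)
selected = [int(i) for i in selector.get_support(indices=True)]
print(sorted(selected))  # [0, 3, 7]
```

The TWC3 scenario repeats the same scoring with lagged copies of the retained columns appended to `X`.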
Instead, effective machine learning implementation requires a careful balance of appropriate features, model or algorithm complexity, and hyperparameter selection. The next stage of model setup considered hyperparameter optimisation for each of the models. In general, machine learning models have a number of hyperparameters, and selection of the most appropriate values is a combination of heuristics, expertise and trial-and-error. For each model, hyperparameter optimisation adopted a greedy, grid-search approach over the user-defined parameter ranges summarised in Table~\ref{tab:hyper_params}. The $\mathbf{x}$ and $\mathbf{y}$ data were split into two groups: the training-data set composed of 90\% of the 6018 rows of data, and the test-data set of the remaining 10\%. For each model, the learning algorithm was trained on the training data and then applied to the test data set, and the MSE between the test data vector, $\mathbf{y}$, and its machine-learning representation $\mathbf{\hat{y}}$ was calculated. The hyperparameter combination that minimised this MSE was selected for each model. The selected values are presented in Table \ref{table:hp_selected} and discussed in more detail in Section \ref{sec:results} where we evaluate model performance. \begin{table}[t!] \caption{Hyperparameters and ranges used for model design.
See Section \ref{sec:ML} for details on each model hyperparameter} \centering \begin{tabular}{ll} \toprule \textbf{Model} & \textbf{Hyperparameters} \\ \midrule GAM & \# of splines/features $\in \{10, 15, 20\}$ \\ &polynomial-spline order $\in \{3, 5, 8\}$ \\ & $\lambda \in \{0.001, 0.01, 0.1, 1, 10, 100\}$ \\ \rule{0pt}{3ex} RF & \# of trees/features $\in \{100, 200, 500\}$ \\ & max \# of features $\in \{3, 5, 10\}$ \\ & max depth $\in \{5, 10, 15, 20 \}$ \\ \rule{0pt}{3ex} XGBoost & \# of trees/features $\in \{500, 700, 1000\}$ \\ & max depth $\in \{5, 10, 15, 20 \}$ \\ & $\lambda \in \{ 0.01, 0.05, 0.1, 0.5\}$ \\ \rule{0pt}{3ex} MLP & \# of layers $\in \{ 5, 10, 20\}$ \\ & \# of nodes/layer $\in \{ 20, 50, 75 \}$ \\ & $\lambda \in \{ 0.001, 0.01, 0.1, 1\}$ \\ \rule{0pt}{3ex} LSTM & \# of layers $\in \{ 1, 2\}$ \\ & \# of units/layer $\in \{ 1, 2, 3 \}$ \\ & $\lambda \in \{ 0.001, 0.01, 0.1, 1\}$\\ \bottomrule \end{tabular} \label{tab:hyper_params} \end{table} A number of Python toolkit libraries were used to access high-level programming interfaces to statistical and machine learning libraries and to cross validate results. The GAM model was implemented using the LinearGAM API from pyGAM \citep{daniel_serven_2018}, Random Forest from the widely-used SciKit-Learn \citep{sklearn_api} toolkit, and XGBoost from the python implementation of the software library \citep{chen_boost_2006}. The deep learning models, namely MLP and LSTM were implemented using the popular Keras library which serves as a high-level neural network API \citep{chollet2015keras}. \subsection{Model Scoring and Aggregation} To assess the different modelling approaches, hyperparameters, and combinations of input features, the time series was split into training (90\%) and test (10\%) sets. The models were trained to make a prediction one day ahead based on feeding the previously described features and labels (measured value of SST for that day). 
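The greedy grid search over the ranges in Table~\ref{tab:hyper_params} can be sketched as follows; the tiny grid, the random-forest stand-in and the synthetic data are illustrative assumptions, not the study's actual configuration:

```python
import numpy as np
from itertools import product
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)

# Synthetic stand-in for the (features, SST) training table.
X = rng.normal(size=(300, 5))
y = X[:, 0] ** 2 + X[:, 1] + 0.1 * rng.normal(size=300)

# 90%/10% train/test split, as in the text.
n_train = int(0.9 * len(X))
X_tr, X_te, y_tr, y_te = X[:n_train], X[n_train:], y[:n_train], y[n_train:]

# Small illustrative grid (the study searched the ranges in the table above).
grid = {"n_estimators": [50, 100], "max_depth": [3, 10]}

best = None
for n_est, depth in product(grid["n_estimators"], grid["max_depth"]):
    model = RandomForestRegressor(
        n_estimators=n_est, max_depth=depth, random_state=0
    ).fit(X_tr, y_tr)
    mse = mean_squared_error(y_te, model.predict(X_te))
    # Keep the combination with the lowest test-set MSE.
    if best is None or mse < best[0]:
        best = (mse, {"n_estimators": n_est, "max_depth": depth})

print(best[1])
```

The same loop applies to any of the five models by swapping the estimator and its grid.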
The test datasets were then used to evaluate the performance of the model prediction against measured values. As prediction depended on historic estimates of SST (i.e. a prediction for one day ahead required information on the current SST), the model prediction was fed back as a feature to the model in a recurrent fashion. Specifically, the first test prediction ($t=1$) was made with \textit{measured} values of SST (at time $t=0$) as a feature. For subsequent predictions, the measured SST feature was replaced with the \textit{prediction} from the previous day, i.e. the prediction at time $t=2$ received as an input feature the model prediction for time $t=1$ instead of the satellite-derived value of SST, which in practice would not be available for forecasting multiple days in advance. Scoring for the entire test dataset (562 days) proceeded in this manner. This made the study sensitive to propagation of error, where a low-skill prediction propagates through the entire forecasting period (in a similar manner that error in initial conditions or boundary forcing formulation can propagate through the prediction of a physics-based model). The mean absolute error (MAE) and mean absolute percentage error (MAPE) assessed the accuracy of each model: \begin{align} \textsc{MAE} = \frac{1} {N_\mathrm{test} } \sum_{i=1}^{N_\mathrm{test}} \left\vert y_i - \hat y_i \right\vert, && \textsc{MAPE} = \frac{100}{N_\mathrm{test}}\sum_{i=1}^{N_\mathrm{test}} \left\vert \frac{y_i - \hat y_i}{y_i} \right\vert, \end{align} \noindent where $N_\mathrm{test}$ is the size of the test data, $y_i$ is the measured data, and $\hat{y}_i$ the model-predicted equivalent. Finally, the models were aggregated into a single best prediction weighted by the inverse MAPE of the test data \cite{Adhikari2012_inverseMAPE_average}.
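The recursive scoring procedure described above can be sketched as follows; the one-step model here is a stand-in (an assumed linear AR(2) rule), not one of the study's fitted models:

```python
import numpy as np

def one_step_model(lags):
    # Stand-in for any fitted one-step-ahead model: an assumed AR(2) rule
    # y_{t+1} = 0.6*y_t + 0.4*y_{t-1} (NOT one of the study's fitted models).
    return 0.6 * lags[-1] + 0.4 * lags[-2]

def recursive_score(history, y_test):
    """Predict over the test period, feeding each prediction back in as the
    most recent AR feature (measured SST seeds only the first step), then
    compute MAE and MAPE against the held-out measurements."""
    lags = list(history)
    preds = []
    for _ in range(len(y_test)):
        pred = one_step_model(lags)
        preds.append(pred)
        lags.append(pred)  # prediction replaces the unavailable measurement
    y, y_hat = np.asarray(y_test, float), np.asarray(preds, float)
    mae = np.mean(np.abs(y - y_hat))
    mape = 100.0 * np.mean(np.abs((y - y_hat) / y))
    return preds, mae, mape

preds, mae, mape = recursive_score([20.0, 22.0], y_test=[21.0, 21.5, 21.3])
print(round(mae, 3), round(mape, 2))  # 0.104 0.49
```

The sensitivity to error propagation noted in the text arises because each `pred` appended to `lags` carries its error into every later step.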
Convex weights for the models considered in this study were computed as: \begin{equation} \label{eqn:invMape} W_m = \frac{\mathrm{MAPE}_m^{-1}}{\sum_{j=1}^{p}\mathrm{MAPE}_j^{-1}} \end{equation} where $W_m$ is the weight for model $m$, $p$ is the number of models, and $\mathrm{MAPE}_m$ is the MAPE of model $m$. \section{Results} \label{sec:results} For each model implementation, we focused on identifying the optimal combination of features and hyperparameters that maximise predictive skill. Table~\ref{tab:model_metrics} presents model-selection results considering hyperparameters and feature engineering. The interplay between model complexity and the size of the feature dataset is evident. Relatively simple models like GAM and RF provided best performance with more sophisticated feature engineering that reduced the size of the dataset. In contrast, MLP and XGBoost both yielded their lowest test MAPE when provided with the full atmospheric dataset and allowed to infer relationships from all variables and data labels. In addition to MAPE accuracy, Table~\ref{tab:model_metrics} also lists the run times needed to train the corresponding models (on a commodity laptop). Training times were within acceptable limits for all models, although significant variability existed. As expected, the LSTM had the largest computational demand -- however, it also had the highest MAPE. This non-intuitive result demonstrates the need to balance model complexity with the nature of the data. That is, na\"{i}ve selection might suggest that an RNN-based model such as LSTM is most suitable for a time-series dataset. However, results demonstrated that the LSTM model failed to capture the high-frequency variations in the data and only captured the general seasonal patterns (the monthly rolling-mean trends reported in Figure \ref{fig:data_exploration}). The inability to capture short-scale variations is due to the ``long memory'' of this model, which interfered with learning short-term variations.
In contrast, simpler models with time-series information explicitly included as features better learned the short-term dynamics. \begin{table}[t] \caption{Model-selection result for the two North Atlantic locations.} \label{table:hp_selected} \centering \resizebox{\textwidth}{!}{% \begin{tabular}{lllcc} \toprule \bf{Model} & \bf{Features} & \bf{Hyperparameters} & \bf{Training time [sec]} & \bf{MAPE}\\ \midrule \small GAM & TWC3, time & \# of splines = 20, spline order = 8, $\lambda =10$ & ~~1.67 & 2.27 \\ RF & AR, TWC3, time & \# of trees = 500, max \# of features = 3, max depth = 20 & 19.09 & 1.97 \\ XGBoost & TWC1, time & \# of trees = 1,000, max depth = 5, $\lambda=0.05$ & ~~0.21 & 2.17 \\ MLP & TWC1, time & \# of layers = 20, \# of nodes/layer = 20, $\lambda=0.01$ & 11.18 & 2.07 \\ LSTM & TWC2 & \# of layers = 1, \# of units/layer = 2, $\lambda=0.1$ & 71.67 & 2.54\\ \bottomrule \end{tabular} } \label{tab:model_performance} \label{tab:model_metrics} \end{table} \begin{figure}[h!] \centering \includegraphics[width=0.7\textwidth]{fig2.pdf} \caption{Test prediction (orange curve) of SST at $27^\circ 28^\prime 46.45^{\prime\prime}$ N, $32^\circ 25^\prime 43.71^{\prime\prime}$ W from different ML models trained on 12 years of preceding historical data, compared to measured SSTs (blue curve). The bottom panel presents the ensemble average of all models based on individual model MAPE. Feature combinations and hyperparameters adopted for each model are summarised in Table \ref{tab:model_metrics}.} \label{fig:different_models} \end{figure} Figure~\ref{fig:different_models} compares model predictions (see Table~\ref{tab:model_metrics}) for the test period to MODIS data. Observing the time evolution of SST reveals that a suitable model must represent two distinct time-scale components. On the one hand, there is the smooth SST evolution governed by seasonality. This component of SST evolution benefited from suppression of large fluctuations.
Of the models studied, this criterion was fulfilled by the GAM approach, which yielded its lowest test MAPE with a large regularisation parameter $\lambda = 10$ (i.\,e.,~the penalty on the second-order derivative of fitted single-feature functions). The large regularisation effect, together with the piecewise polynomial components of GAM models, contributed to a smoother time-series prediction that still captured long-term trends, including correlation of data between years. Similarly, the RF approach led to a comparably smooth SST evolution but at significantly lower MAPE than the GAM model. The most obvious reflection of the seasonal pattern is evident in the LSTM prediction, which produces a highly smoothed representation of the training data. The model fails to capture any small-scale dynamics at the daily or weekly level, instead reproducing the seasonal heating/cooling effects only. Further analysis of model parameters suggested this to be a result of the retained long-term memory informing the broader trend only. On the other hand, the seasonal cycle has superimposed on it short-term behaviour dominated by peak events occurring at daily to weekly time scales. This is particularly evident in the XGBoost and MLP approaches, where both yielded best performance for smaller $\lambda$, which enabled them to better capture short-term events. It is worth noting that while XGBoost and MLP captured the small-scale fluctuations better, RF returned the lowest MAPE. To simultaneously take both aspects into account, the final plot aggregates models based on the inverse MAPE weighting presented in equation \ref{eqn:invMape}. As a preprocessing step, due to the comparably poor performance of the LSTM model, it was excluded from the ensemble. The ensemble average generates the lowest MAPE, indicating that a relatively simple model-weighting aggregation approach can outperform the individual best-performing model \citep{odonncha2018integrated, o2019ensemble}. \begin{figure}[h!]
\centering \includegraphics[width=0.9\textwidth]{Test_MAE_GlobalAverage.png} \includegraphics[width=0.9\textwidth]{Test_MAPE_GlobalAverage.png} \caption{Predictive skill of a weighted ensemble average of GAM, RF, XGBoost, and MLP models at a set of locations distributed equally between $\pm$ 54$^\circ$ latitude. MAE (top) and MAPE (bottom) metrics are presented to inform on absolute and relative errors. The features and hyperparameters prescribed are presented in Table~\ref{tab:model_metrics}. The blue circles on the top plot denote locations that are analysed in more detail in Section \ref{sec:r&d} and presented in Figure \ref{fig:simulation_comparison}. } \label{fig:globalskill} \end{figure} \subsection{Transportability and Comparison to State-of-the-art} As the feature engineering and hyperparameter selection process is complex and cumbersome, it is desirable to execute this procedure once and then use the selected model at different locations. The objective is to identify the most appropriate model inputs (features) and settings (hyperparameters) from a small dataset, which are then used to train (on new data) and deploy (i.e. make forecasts with) the models at any location where we wish to forecast SST. We investigated the performance of the model at a set of globally distributed locations. Data (SST measurements and TWC weather variables) were collected on a 6$^\circ \times$ 6$^\circ$ grid of points within 1$^\circ$ of shorelines between $\pm$ 54$^\circ$ latitude. This resulted in 730 locations globally. While the features and hyperparameters were selected as noted in Table~\ref{tab:model_performance}, the models were retrained at each location in a similar manner as previously described, using a 90\%/10\% train and test data split. The resulting prediction was again an aggregation of GAM, RF, XGBoost, and MLP results, where each model was weighted by its inverse MAPE at each location to favour better-performing models in the weighted average.
Figure~\ref{fig:globalskill} presents the MAE and MAPE computed at these 730 locations. Results demonstrate that MAE and MAPE were less than 1$^\circ$C and 10\%, respectively, at most locations. Table \ref{tab:globmape} presents average error metrics over all locations. The MAPE-weighted ensemble average returned MAE and MAPE of 0.68$^\circ$C and 7.9\%, respectively. These values are comparable to ECMWF estimates of SST, which returned values of 0.56$^\circ$C and 12.3\%, respectively. While ECMWF reports lower absolute error, its relative errors are noticeably higher. This suggests a tendency of the numerical outputs to perform more poorly in periods when temperatures are lower (increasing relative error). \begin{table}[h!] \caption{MAE and MAPE averaged across all spatial locations presented in Figure~\ref{fig:globalskill}. Metrics are presented for each model individually and for an ensemble average weighted by the inverse MAPE; the final column presents error metrics for a benchmark ECMWF model against MODIS measurements.} \centering \begin{tabular}{c c c c c c c} \toprule \textbf{Metric} & \textbf{GAM} & \textbf{RF} & \textbf{XGBoost} & \textbf{MLP} & \textbf{Ens. Ave.} & \textbf{ECMWF} \\ \midrule MAE & 0.78 & 0.72 & 0.79 & 0.89 & 0.68 & 0.56 \\ MAPE & 9.7 & 9.4 & 8.8 & 10.6 & 7.9 & 12.3\\ \bottomrule \end{tabular} \label{tab:globmape} \end{table} Figure~\ref{fig:globalskill} indicates some spatial variation in performance. In general, MAE is lower in the inter-tropics region than in southern or northern latitudes. This effect is more pronounced when we consider MAPE values, due to lower ambient temperatures making relative differences more pronounced. Further analysis indicates that this spatial bias is largely driven by reduced data availability at locations away from the tropics. Figure \ref{fig:perccove} presents the percentage of days for which data was available for the study period (100\% indicates that data is available every day).
We see a distinct pattern of higher data availability over the inter-tropical regions, which is possibly a result of increased cloud coverage in southern and northern latitudes \citep{ruiz2013assessment, o2019observational}. For all locations we replaced missing data using linear interpolation, which enables the models to act on data-sparse regions but limits the amount of true data available to learn the complex SST relationship. Further, all error metrics were computed on the raw data (without linear interpolation), which biases the evaluation further towards locations with higher data coverage. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{Percentage_ModisDataCoverage.png} \caption{Percentage of the time (number of days over the entire 2002-2019 study period) that the MODIS Aqua sensor reported SST estimates for all global points considered in Figure \ref{fig:globalskill}. } \label{fig:perccove} \end{figure} \subsection{Discussion of results} \label{sec:r&d} Time-series forecasts are vital in many areas of scientific, industrial, and economic activity. Many ML methods have been applied to such problems, and the advantages of RNN-type approaches are well documented. The ability of these DL algorithms to implicitly include effects from preceding time steps is intuitively a natural fit. However, this study demonstrated that when the relationship between predicted values was not based solely on AR features -- well-designed feature selection in conjunction with simpler ML methods capable of adjusting more rapidly to short-scale fluctuations outperformed the DL approaches. Table~\ref{tab:model_performance} demonstrated that the important features needed to predict at time $t+1$ are SST values at time $t$ (and values at earlier time steps, depending on the selected AR features), atmospheric information at time $t+1$ (and potentially AR features of those), and time-of-year information.
Hence, prediction required inclusion of AR features while inferring relationships between forecasted values of atmospheric data and the response variable, SST. Figure~\ref{fig:different_models} illustrated that the LSTM model failed to adequately learn the relationship between explanatory variables and SST. Specifically, the model closely approximated seasonal behaviour (i.\,e.,~the long-term characteristics of the SST) while failing to capture high-frequency variations (i.\,e.,~variations in response to atmospheric inputs). In effect, the DL approach maintained ``memory'' of the long-term SST trends to the detriment of incorporating effects at shorter time scales. A~more focused feature-engineering module that guided the data length fed to the LSTM model might improve performance. However, this contravenes the philosophy of RNN-type approaches, which aim to implicitly learn the nature of cyclic data. Another point worth noting is that DL approaches have a larger appetite for training data than some of the simpler models adopted. Some reduction in MAPE may be possible by extending the size of the training data. Again, however, when evaluating different modelling approaches, aspects such as computational complexity and the ability to learn on smaller datasets are key points that demand consideration (further, there are practical limits on the amount of available data). This study considered a framework to develop a transportable model suite applied to a nonlinear, real-world dataset. Key points considered were the design of an automatic feature-engineering module which, together with a standard hyperparameter optimisation routine, facilitated ready deployment at disparate geographical locations. Results demonstrated that the different models adopted had inherent characteristics that governed accuracy and the level of regularisation or overfit to training data. We compared performance of the ML models with a state-of-the-art physics-based approach from ECMWF.
As expected, the physics-based model provided close agreement with satellite measurements -- the ECMWF prediction is a reanalysis product which assimilates measurement (including satellite) data daily to update the accuracy of the product. This study demonstrated, however, that the machine learning based approaches achieve accuracy comparable to the ECMWF model at a fraction of the computational expense. Aggregating the models improved the robustness of this approach and served to regularise small-scale fluctuations or seasonal biases in individual models. Figure~\ref{fig:simulation_comparison} compares the ensemble predictions to the ECMWF results, satellite-measured SST, and predictions from a selected ML model at four locations across the globe (location details provided in Figure \ref{fig:globalskill} and Table \ref{tab:ts_mse}). The four plots illustrate the varying temporal characteristics of SST data at different geographical points and the ability of the ML models to capture those characteristics. Generally, the models are seen to capture both the seasonal patterns and shorter-scale fluctuations (e.g. unseasonably warm autumn temperatures at location [0, -150]). The individual model prediction (green line) provides good predictive skill comparable to ECMWF, while the aggregated model is `smoother' (possibly more robust to short-scale fluctuations) while achieving comparable accuracy. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{MultipleTimeseries.png} \caption{Time series plots comparing the performance of individual model (green line) and weighted ensemble average of GAM, RF, XGBoost, and MLP models (orange line) against 1) ECMWF model estimate at nearest grid cell (red line) and 2) SST test data from MODIS satellite (blue circles).
The location of the four points considered are illustrated in Figure \ref{fig:globalskill} and described in more detail in Table \ref{tab:ts_mse} (note that the subfigure titles denote the latitude and longitude coordinates). For each of the four plots the orange line presents a different model to provide an illustration of different model characteristics, namely (from top-left) GAM, RF, XGBoost and MLP. } \label{fig:simulation_comparison} \end{figure} \begin{table}[h!] \caption{MAE (top) and MAPE (bottom) for individual models and the ensemble average of all models (GAM, RF, XGBoost, and MLP) for the selected locations presented in Figure \ref{fig:simulation_comparison}. Forecast skill of the ECMWF model is also presented for illustrative purposes. } \centering \begin{tabular}{c c c c c c c c } \toprule \textbf{Lat} & \textbf{Lon} & \textbf{GAM} & \textbf{RF} & \textbf{XGBoost} & \textbf{MLP} & \textbf{Ens. Ave.} & \textbf{ECMWF} \\ \midrule & & & & \multicolumn{1}{c}{\textbf{MAE}} & & &\\ \midrule 0 & -150 & 0.41 & 0.44 & 0.43 & 0.42 & 0.38 & 0.33 \\ -42 & 6 & 0.60 & 0.73 & 0.88 & 0.70 & 0.67 & 0.63 \\ -12 & 60 & 0.48 & 0.49 & 0.55 & 0.50 & 0.45 & 0.37 \\ 42 & 156 & 1.96 & 1.31 & 1.53 & 1.09 & 1.16 & 1.03 \\ \midrule & & & & \multicolumn{1}{c}{\textbf{MAPE}} & & &\\ \midrule 0 & -150 & 1.52 & 1.64 & 1.61 & 1.58 & 1.42 & 1.27 \\ -42 & 6 & 5.38 & 6.80 & 8.00 & 6.52 & 6.16 & 5.78 \\ -12 & 60 & 1.77 & 1.79 & 2.01 & 1.86 & 1.66 & 1.38 \\ 42 & 156 & 12.5 & 8.27 & 10.4 & 7.26 & 7.56 & 6.62 \\ \bottomrule \end{tabular} \label{tab:ts_mse} \end{table} Classical works on ensemble forecasting demonstrated that the ensemble mean should give a better forecast than a single deterministic forecast \citep{epstein1969stochastic, leith1974theoretical}. Assigning inverse MAPE weights to individual models provides a simple and effective method to rank model contributions based on performance.
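The inverse-MAPE weighting of equation \ref{eqn:invMape} can be sketched in a few lines, normalised so the weights are convex (sum to one); the MAPE values are taken from the model-selection table:

```python
import numpy as np

def inverse_mape_weights(mapes):
    """Convex weights proportional to 1/MAPE: better (lower-error) models
    receive larger weights, and the weights sum to one."""
    inv = 1.0 / np.asarray(mapes, dtype=float)
    return inv / inv.sum()

# Test-set MAPEs for GAM, RF, XGBoost and MLP from the model-selection table.
mapes = [2.27, 1.97, 2.17, 2.07]
w = inverse_mape_weights(mapes)
print(w.round(3))  # RF (lowest MAPE) receives the largest weight
```

The ensemble forecast is then the weighted sum of the individual model predictions, `sum(w_i * pred_i)`.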
To illustrate the forecast skill of the different models, Table \ref{tab:ts_mse} presents MAE and MAPE for each individual model, an ensemble model aggregation, and the ECMWF estimate against MODIS measurements for the locations plotted in Figure \ref{fig:simulation_comparison}. Results demonstrate that the variation in error of individual models can be ``regularised'' by the ensemble approach. We observe that individual models perform better at different locations (e.g. GAM performs best at location [-42, 6], while MLP performs best at [42, 156]), illustrating the ``no free lunch'' concept -- no single machine learning algorithm necessarily outperforms all others, and one must select the appropriate algorithm for the problem. However, the ensemble averaging approach outperforms individual models, providing a framework to improve average predictive skill. The low computational cost of prediction enabled by machine learning is particularly amenable to ensemble modelling approaches, where multiple models can be readily deployed \citep{odonncha2018integrated,o2019ensemble}. Interrogating the temporal evolution of model error over the 18-month test period revealed some biases in individual models -- e.g. GAM outperformed RF during the summer period but was significantly poorer during periods of lower temperature. The ensemble aggregation framework we implemented reduced error over the duration of the test period compared to arbitrarily selected individual models but, more importantly, also served to reduce error and biases at distinct periods of the prediction window. Table \ref{tab:seas_mse} presents seasonal MSE against satellite data for each individual model and the ensemble aggregation over the duration of the study period, averaged over the same locations as Figure~\ref{fig:simulation_comparison}. \begin{table}[h!]
\caption{Seasonal MSE for individual models and the ensemble average} \centering \begin{tabular}{c c c c c c} \toprule \textbf{Season} & \textbf{GAM} & \textbf{RF} & \textbf{XGBoost} & \textbf{MLP} & \textbf{Ens. Ave.}\\ \midrule Spring & 1.37 & 1.04 & 1.01 & 0.90 & 0.94 \\ Summer & 0.18 & 0.40 & 0.45 & 0.68 & 0.30 \\ Autumn & 0.06 & 0.10 & 0.40 & 0.80 & 0.17 \\ Winter & 0.26 & 0.14 & 0.39 & 0.34 & 0.22 \\ \bottomrule \end{tabular} \label{tab:seas_mse} \end{table} This study presented a time-series forecasting framework applied to satellite measurements of SST. We considered the SST data as a set of disparate points. In reality, the ocean surface more closely resembles an image, with interaction between neighbouring points. Results demonstrated that treating the data as distinct time-series points provided good results. However, scope exists to combine this approach with image-processing techniques such as convolutional neural networks (CNNs) to incorporate neighbouring effects into predictions. Future work will explore the viability and value of combining CNNs with time-series forecasting models to further improve the robustness of the framework. \section{Conclusions} \label{sec:concl} This paper demonstrates the viability of applying ML-based approaches to SST forecasting, addressing transportability, biases and robustness by combining feature selection and disparate models with specific characteristics in a weighted aggregation based on average model performance. This study aimed to assess the ability of data-driven approaches to accurately predict SST, which is characterised by seasonal patterns, temporal dependencies and short-term fluctuations. Results demonstrate performance comparable to physics-based model simulations at low computational cost, with an approach that is easily parametrised to other geographical locations. The low computational cost of the approach has many advantages.
First, it enables separation of SST forecasting models from HPC centres -- the suite of models presented here can be trained on a laptop and applied to any geographic location. Once trained, the inference step is of negligible computational expense and can be readily deployed on edge-type devices (e.g. in-situ devices deployed in the ocean). Deploying large-scale models is a complex task highly dependent on user skill to correctly configure and parametrise to specific locations. Data-driven approaches can present an alternative approach that enables rapid prediction, contingent on availability of sufficient data. \section*{Acknowledgements} \noindent This project has received funding from the European Union’s Horizon 2020 research and innovation programme as part of the RIA GAIN project under grant agreement No. 773330. \clearpage
\section{Adaptive Identification} \label{adaptive_identification} The direct adaptive control approach fails to cancel the effects of unmatched uncertainty on the system behavior, as the uncertainty term falls outside the span of the control matrix $\mathcal{B}$. While this is true, the unmatched terms can still be learned online using a system identifier approach and be accounted for in the total controller for reference model tracking. This section provides details of the companion observer model and the adaptive identification law for estimating the total uncertainty present in the system. We also show the guaranteed boundedness of the observer tracking errors and network parameters under the adaptive identification law \eqref{eq:18}. \subsection{System Observer Model} \label{observer_dynamics} Based on the system dynamics \eqref{eq:unified uncertainty dynamics}, consider a Luenberger state observer of the form \begin{eqnarray} \dot{\hat x}(t) &=& A\hat x(t) + Bu(t) + \hat \Delta(x) + L_{\tau}\left(x(t)-\hat x(t)\right) \label{eq:6} \end{eqnarray} where $\hat x(t) \in \mathbb{R}^n$ is the state of the observer model. The term $\hat \Delta(x) \in \mathbb{R}^n$ in \eqref{eq:6} represents the estimate of the total uncertainty in \eqref{eq:unified uncertainty dynamics} and $L_{\tau}$ is the observer feedback gain. This feedback term helps place the poles of the observer tracking error dynamics at desired locations, further into the left-half plane than the poles of the reference plant, to make the observer tracking error dynamics faster \cite{lavretsky2012robust}. This is particularly helpful when the observer information is used in the control synthesis. The true uncertainty $\Delta(x)$ is unknown, but it is assumed to be continuous over a compact domain $\mathcal{D}_x \subset \mathbb{R}^n$.
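A minimal numerical sketch of the observer gain selection is given below. The plant matrix and desired pole locations are hypothetical; with full-state measurement ($C = I$) and the observer correction $+L_{\tau}(x-\hat x)$, the error matrix is $A - L_{\tau}C$ (sign conventions for $L_{\tau}$ vary across texts), so the gain follows directly from the desired observer poles.

```python
import numpy as np

# Hypothetical two-state plant matrix; the paper leaves A unspecified.
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
C = np.eye(2)  # full-state measurement, consistent with C = I in the text

# Desired observer poles, placed further left than typical reference-model
# poles so the observer error dynamics are faster.
desired = np.array([-8.0, -10.0])

# For the correction +L_tau (x - x_hat) the error matrix is A - L_tau C,
# so with C = I the gain follows directly from the desired pole locations.
L_tau = A - np.diag(desired)
A_tau = A - L_tau @ C

eigs = np.sort(np.linalg.eigvals(A_tau).real)  # should match `desired`
```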
Neural networks (NNs) have been widely used to represent unstructured uncertainties whose basis is not known. Using a NN, the network estimate of the uncertainty can be written as \begin{equation} \hat \Delta(x) \triangleq \hat W^T\phi(x) \label{eq:nn_estimate_defn} \end{equation} where $\hat W \in \mathbb{R}^{k \times n}$ are the network weights and $\phi(x) = [1,\phi_1(x),\phi_2(x), \ldots,\phi_k(x)]^T$ is a vector of $k$ chosen basis functions with a leading bias term. The basis vector $\phi(x)$ is taken to be Lipschitz continuous to ensure the existence and uniqueness of the solution of (\ref{eq:unified uncertainty dynamics}). From the structure of the total uncertainty in (\ref{eq:unified uncertainty dynamics}), the estimates of the individual matched and unmatched components can be expressed as \begin{equation} \Omega\left[\begin{array}{c} \alpha \hat \Delta_m(x)\\ \beta \hat \Delta_{u}(x) \end{array} \right] \triangleq \hat \Delta(x) = \hat W^T\phi(x) \end{equation} \begin{equation} \Rightarrow \left[\begin{array}{c} \alpha \hat \Delta_m(x)\\ \beta \hat \Delta_{u}(x) \end{array} \right] = \Omega^{-1}\hat W^T\phi(x) = \xi(x) \end{equation} Therefore the estimates of the matched and unmatched uncertainty are \begin{eqnarray} \hat \Delta_m(x) &=& \alpha^\dagger\xi_m(x)\\ \hat \Delta_u(x) &=& \beta^\dagger\xi_u(x) \end{eqnarray} where $\alpha^\dagger,\beta^\dagger$ are the left pseudo-inverses of $\alpha$ and $\beta$.
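A sketch of the network estimate $\hat\Delta(x) = \hat W^T\phi(x)$ with a Gaussian RBF basis follows. The Gaussian choice, the centers, and the weight values are illustrative assumptions; the text only requires a Lipschitz basis with a leading bias term.

```python
import numpy as np

def rbf_basis(x, centers, width=1.0):
    """phi(x) = [1, phi_1(x), ..., phi_k(x)]: a bias term followed by
    Gaussian radial basis functions (Lipschitz on any compact domain)."""
    r = np.linalg.norm(x[None, :] - centers, axis=1)
    return np.concatenate(([1.0], np.exp(-(r / width) ** 2)))

rng = np.random.default_rng(0)
n, k = 3, 5                                  # state dimension, number of RBFs
centers = rng.uniform(-1.0, 1.0, size=(k, n))
W_hat = rng.normal(size=(k + 1, n))          # network weights (bias row included)

x = np.array([0.2, -0.1, 0.4])
phi = rbf_basis(x, centers)                  # shape (k + 1,)
delta_hat = W_hat.T @ phi                    # uncertainty estimate, in R^n
```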
The projection of the uncertainty onto the range and the null space of $\mathcal{B}$ is represented by $\xi(x) = [\xi_m(x), \xi_u(x)]^T \in \mathbb{R}^n$, where $\xi_m(x) \in \mathbb{R}^{r}$, $\xi_u(x) \in \mathbb{R}^{n-r}$, and $r$ is the column rank of $\mathcal{B}$. Appealing to the universal approximation property of neural networks \cite{park1991universal}, we have that, given a fixed number of basis functions $\phi(x) \in \mathbb{R}^k$, there exist ideal weights $W^* \in \mathbb{R}^{k \times n}$ and an approximation error $\epsilon(x) \in \mathbb{R}^{n}$ such that the following approximation holds \begin{equation} \Delta(x) = W^{*T}\phi(x) + \epsilon(x), \hspace{2mm} \forall x(t) \in \mathcal{D}_x \subset \mathbb{R}^{n} \label{eq:3} \end{equation} The network approximation error $\epsilon(x)$ is upper bounded, s.t. $\bar \epsilon = \sup_{x \in \mathcal{D}_x}\|\epsilon(x)\|$, and can be made arbitrarily small given a sufficiently large number of basis functions. \textbf{\emph{Assumption 1:}} For uncertainty parameterized by an unknown true weight $W^* \in \mathbb{R}^{k \times n}$ and a known nonlinear basis $\phi(x)$, the ideal weight matrix is assumed to be upper bounded s.t. $\|W^*\| \leq \mathcal{W}$.
Substituting \eqref{eq:nn_estimate_defn} in (\ref{eq:6}), the observer plant can be written as \begin{eqnarray} \dot{\hat x}(t) &=& A\hat x(t) + Bu(t) + \hat W^T\phi(x) + L_{\tau}\left(x(t)-\hat x(t)\right) \label{eq:7} \nonumber \\ \end{eqnarray} The observer model tracking error is defined as \begin{equation} e(t) = x(t)-\hat x(t) \end{equation} Using (\ref{eq:unified uncertainty dynamics}) and (\ref{eq:6}), the tracking error dynamics can be written as \begin{equation} \dot e(t) = \dot x(t) - \dot{\hat x}(t) \label{eq:13} \end{equation} \begin{equation} \dot e(t) = \left(A - L_{\tau}C\right)e(t) + \tilde W^T\phi(x) + \epsilon(x) \label{eq:14} \end{equation} where $C = I_{n \times n}$ and $L_{\tau} = \mathrm{diag}\left(l_{\tau 1}, \ldots, l_{\tau n}\right) \in \mathbb{R}^{n \times n}$ is the tracking error feedback gain in (\ref{eq:7}). Therefore the observer tracking error dynamics can be written as \begin{equation} \dot e(t) = A_\tau e(t) + \tilde W^T\phi(x) + \epsilon(x) \label{eq:15} \end{equation} where $A_\tau = \left(A - L_{\tau}C\right)$ is Hurwitz, with $\lambda_{min}(A_\tau) < \lambda_{min}(A_{rm})$, where $\lambda_{min}(.)$ denotes the minimum eigenvalue of $A_{\tau}$ and $A_{rm}$. \subsection{Online Parameter Estimation law} \label{Identification} The estimates of the unknown true network parameters $W^*$ are evaluated online using a gradient-descent algorithm, correcting the weight estimates in the direction that minimizes the instantaneous tracking error $e(t) = x(t)-\hat x(t)$.
The resulting update rule for the network weights estimating the total uncertainty in the system is \begin{equation} \dot {\hat W} = \Gamma \, Proj(\hat W,\phi(x)e(t)^TP) \label{eq:18} \end{equation} \subsubsection{Lyapunov Analysis} The online adaptive identification law (\ref{eq:18}) guarantees asymptotic convergence of the observer tracking error $e(t)$ and the parameter error $\tilde W(t)$ under the condition of persistency of excitation \cite{aastrom2013adaptive,ioannou1988theory} for structured uncertainty. Under the assumption of unstructured uncertainty, we show the tracking error is uniformly ultimately bounded (UUB). \begin{theorem} Consider the actual and observer plant models (\ref{eq:unified uncertainty dynamics}) \& (\ref{eq:7}). If the weights parameterizing the total uncertainty in the system are updated according to the identification law \eqref{eq:18}, then the observer tracking error and the error in the network weights, $\|e\|$ and $\|\tilde W\|$, are bounded. \end{theorem} \begin{proof} Let $V(e,\tilde W) > 0$ be a differentiable, positive definite, radially unbounded Lyapunov candidate function, \begin{equation} V(e,\tilde W) = e^TPe + \frac{\tilde W^T \Gamma^{-1} \tilde W}{2} \label{eq:20} \end{equation} where $\Gamma >0$ is the adaptation rate. The time derivative of the Lyapunov function (\ref{eq:20}) along the trajectory (\ref{eq:15}) can be evaluated as \begin{equation} \dot V(e,\tilde W) = \dot e^TPe + e^TP \dot e - \tilde W^T\Gamma^{-1}\dot{\hat W} \label{eq:25} \end{equation} \begin{eqnarray} \dot V(e,\tilde W) &=& -e^TQe + 2\tilde W^T\left(\phi(x)e^TP - \Gamma^{-1}\dot{\hat W}\right) \nonumber\\ && + 2e^TP\epsilon(x) \label{eq:21} \end{eqnarray} where $P = P^T >0$ is the solution of the Lyapunov equation $A^T_{\tau}P + PA_{\tau} = -Q$ for some $Q > 0$, with $A_{\tau}$ Hurwitz.
Using the weight update rule (\ref{eq:18}) in (\ref{eq:21}), the time derivative of the Lyapunov function reduces to \begin{eqnarray} \dot V(e,\tilde W) &=& -e^TQe + 2e^TP\epsilon(x) \label{eq:22} \end{eqnarray} \begin{eqnarray} \dot V(e,\tilde W) &\leq& -\lambda_{min}(Q)\|e\|^2 + 2\lambda_{max}(P)\bar\epsilon \|e\| \label{eq:23} \end{eqnarray} Hence $\dot V(e,\tilde W) \leq 0$ outside a compact neighborhood of the origin $e = 0$, namely for \begin{equation} \|e(t)\| \geq \frac{2\lambda_{max}(P)\bar\epsilon}{\lambda_{min}(Q)} \end{equation} Hence the observer tracking error $\|e(t)\|$ is uniformly ultimately bounded. Furthermore, from the BIBO assumption, $x_{rm}(t)$ is bounded for a bounded reference signal $r(t)$, and thereby $x(t)$ remains bounded. Since $V(e,\tilde W)$ is radially unbounded, the result holds for all $x(0) \in \mathcal{D}_x$. The boundedness of the estimation parameters follows from Lyapunov theory and Barbalat’s Lemma \cite{narendra2012stable}. \end{proof} \section{Sample Complexity and Stability Analysis for DMRAC} In this section, we present sample complexity results, generalization error bounds, and a stability guarantee for DMRAC. We show that the DMRAC controller is characterized by the memory of the features learned over previously observed training data. We further demonstrate in simulation that a trained DMRAC used as a feed-forward network with frozen weights can still produce bounded tracking performance on reference tracking tasks that are related to, but reasonably different from, those seen during network training. We ascribe this property of DMRAC to the very low generalization error bounds of the DNN. We prove this property in two steps. First, we bound the generalization error of the DNN, using Lyapunov theory, such that asymptotic convergence of the tracking error is achieved.
Second, we show information-theoretically the lower bound on the number of independent samples we need to train on before we can claim that the DNN generalization error is below the level determined by the Lyapunov analysis. \subsection{Stability Analysis} The generalization error of a machine learning model is defined as the difference between the empirical loss on the training set and the expected loss on the test set \cite{2018arXiv180801174J}. This measure represents the ability of the trained model to generalize from the learning data to new, unseen data. Hence the generalization error can be bounded as \begin{equation} \left\|\hat{\Delta}(x) - f_{\boldsymbol{\theta}}(x)\right\| \leqslant \epsilon \end{equation} Using the DMRAC (as a frozen network) controller in \eqref{eq:total_Controller} and the system \eqref{eq:0}, we can write the system dynamics as \begin{equation} \dot x(t) = Ax(t) + B(-Kx(t) + K_rr(t) -f_{\boldsymbol{\theta}}(x(t)) + \Delta(x)) \end{equation} We can simplify the above equation as \begin{equation} \dot x(t) = A_{rm}x(t) + B_{rm}r(t)+B(\Delta(x)-f_{\boldsymbol{\theta}}(x(t))) \end{equation} Adding and subtracting the term ${\Delta}'(x)$ in the above expression and using the training and generalization error definitions, we can write \begin{eqnarray} \dot x(t) &=& A_{rm}x(t) + B_{rm}r(t)\\ &&+B(\Delta(x)-{\Delta}'(x(t))+{\Delta}'(x(t))-f_{\boldsymbol{\theta}}(x(t))) \nonumber \end{eqnarray} The term $\left(\Delta(x)-{\Delta}'(x(t))\right)$ is the D-MRGeN training error and $\left({\Delta}'(x(t))-f_{\boldsymbol{\theta}}(x(t))\right)$ is the generalization error of the DMRAC DNN network. For simplicity of analysis we assume the training error is zero; this assumption is not very restrictive, since the training error can be made arbitrarily small by tuning the network architecture and training epochs.
The reference tracking error dynamics can then be written as \begin{equation} \dot e(t) = A_{rm}e(t) + \epsilon \label{eq:DMRAC_error_dynamics} \end{equation} To analyze the asymptotic tracking performance of the error dynamics under the DMRAC controller, we define a Lyapunov candidate function $V(e) = e^TPe$, whose time derivative along the error dynamics \eqref{eq:DMRAC_error_dynamics} can be written as \begin{equation} \dot V(e) = -e^TQe + 2\epsilon^TPe \end{equation} where $P$ is the solution of the Lyapunov equation $A_{rm}^TP + PA_{rm} = -Q$ for some $Q > 0$. To satisfy the condition $\dot V(e) < 0$, we obtain the following upper bound on the generalization error, \begin{equation} \|\epsilon\| < \frac{\lambda_{min}(Q)\|e\|}{2\lambda_{max}(P)} \label{eq:generalization_bound} \end{equation} The idea is that if the DNN produces a generalization error lower than the specified bound \eqref{eq:generalization_bound}, then we can claim Lyapunov stability of the system under the DMRAC controller. \subsection{Sample Complexity of DMRAC} In this section, we study sample complexity results from computational learning theory and show that, for a network learning real-valued functions, the number of training samples required to achieve a specified generalization error grows at least linearly with the number of tunable parameters. \begin{theorem} Suppose a neural network with arbitrary activation functions has an output that takes values in $[-1,1]$. Let $\mathcal{H}$ be the hypothesis class characterized by $N$ weights, each represented using $k$ bits. Then any squared error minimization (SEM) algorithm $\mathcal{A}$ over $\mathcal{H}$, to achieve a generalization error \eqref{eq:generalization_bound}, admits a sample complexity bounded as \begin{equation} m_{\mathcal{A}}(\epsilon, \delta) \leqslant \frac{2}{\epsilon^2} \left(kN\ln2 + \ln\left(\frac{2}{\delta}\right)\right) \end{equation} where $N$ is the total number of tunable weights in the DNN.
\end{theorem} \begin{proof} Let $\mathcal{H}$ be a finite hypothesis class of functions mapping $\mathcal{X} \to [-1,1]$, and let $\mathcal{A}$ be a SEM algorithm for $\mathcal{H}$. Then by the Hoeffding inequality, for any fixed $f_{\boldsymbol{\theta}} \in \mathcal{H}$ the following event holds with small probability \begin{eqnarray} &&P^m\{|L(\boldsymbol{Z, \theta}) - \mathbb{E}_P(\ell(\boldsymbol{Z, \theta}))| \geq \epsilon\}\\ &=& P^m\left\{\left|\sum_{i=1}^m \ell(\boldsymbol{Z, \theta}) - m\mathbb{E}_P(\ell(\boldsymbol{Z, \theta}))\right| \geq m\epsilon\right\}\\ &\leq& 2e^{-\epsilon^2m/2} \end{eqnarray} Hence, by the union bound,\vspace{-5mm} \begin{eqnarray} &&P^m\{ \exists f_{\boldsymbol{\theta}} \in \mathcal{H}: \left|L(\boldsymbol{Z, \theta}) - \mathbb{E}_P(\ell(\boldsymbol{Z, \theta}))\right| \geq \epsilon\} \nonumber \\ &\leq& 2|\mathcal{H}|e^{-\epsilon^2m/2} = \delta \label{eq:sample_complexity} \end{eqnarray} We note that the total number of possible states that can be assigned to the weights is $\left(2^k\right)^N$, since there are $2^k$ possibilities for each weight. Therefore $\mathcal{H}$ is finite and $|\mathcal{H}| \leq 2^{kN}$. The result follows immediately from solving Eq-\eqref{eq:sample_complexity} for $m$. \end{proof} \section{Conclusion} \label{conclusions} In this paper, we presented the DMRAC adaptive controller, which uses a model reference generative network architecture to address the issue of feature design for unstructured uncertainty. The proposed controller uses a DNN to model significant uncertainties without knowledge of the system's domain of operation. We provide theoretical proofs of the controller's generalization capability over unseen data points and of the boundedness of the tracking error.
Numerical simulations with a 6-DOF quadrotor model demonstrate the controller's performance in achieving reference model tracking in the presence of significant matched uncertainties, as well as its learning retention when used as a feed-forward adaptive network on similar but unseen tasks. We therefore argue that DMRAC is a powerful architecture for high-performance control of nonlinear systems with robustness and long-term learning properties. \vspace{-1mm} \section{Overview of the Presented Method} The presented hybrid architecture for the control of systems with matched and unmatched uncertainties uses a combination of direct and indirect adaptive controllers, Fig.~\ref{fig:flowdiagram}. This hybrid direct-indirect approach is required since a direct adaptive controller alone cannot cancel the effect of unmatched uncertainty. The total controller is realized through a two-step process: (1) learning an observer model, such that $\hat x(t) \to x(t)$, and (2) reference model tracking, such that $x(t) \to x_{rm}(t)$. The goal of the direct adaptive controller is to make the uncertain system track a stable reference model that characterizes the desired closed-loop response. The details of the direct adaptive controller are provided in Section \ref{direct}. The reference model is assumed to be linear, and the desired transient and steady-state performance is therefore defined by selecting the system eigenvalues in the open left-half plane. The desired closed-loop response of the reference system is given by \begin{equation} \dot x_{rm}(t) = A_{rm}x_{rm}(t) + B_{rm}r(t) \label{eq:ref model} \end{equation} where $x_{rm}(t) \in \mathcal{D}_x \subset \mathbb{R}^{n}$ and $A_{rm} \in \mathbb{R}^{n \times n}$ is Hurwitz. Furthermore, the command $r(t)$ denotes a bounded, piecewise continuous reference signal, and we assume the reference model (\ref{eq:ref model}) is bounded-input bounded-output (BIBO) stable \cite{ioannou1988theory}.
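As a concrete illustration, the reference model \eqref{eq:ref model} can be simulated directly. The second-order $A_{rm}$ and $B_{rm}$ below are hypothetical, chosen with poles in the open left-half plane and unit DC gain.

```python
import numpy as np

# Hypothetical second-order reference model: Hurwitz A_rm, unit DC gain
A_rm = np.array([[0.0, 1.0],
                 [-4.0, -2.8]])
B_rm = np.array([0.0, 4.0])

def simulate_reference(r, dt=0.01, steps=1000):
    """Forward-Euler integration of x_rm' = A_rm x_rm + B_rm r(t)."""
    x = np.zeros(2)
    for i in range(steps):
        x = x + dt * (A_rm @ x + B_rm * r(i * dt))
    return x

x_final = simulate_reference(lambda t: 1.0)  # unit-step reference command
```

For a unit-step command the state settles at $-A_{rm}^{-1}B_{rm}$, i.e. position one and zero velocity, which fixes the desired steady-state behavior of the closed loop.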
The observer plant model provides the estimates of the uncertainties in the system. The details of the companion observer plant model are provided in Section \ref{observer_dynamics}. The $\ell_2$-norm of the observer tracking error is used as a measure of confidence in the estimate of the total uncertainty. Upon convergence of the observer, that is, as the tracking error falls below a determinable, sufficiently small threshold ``$\gamma$'', the unmatched part of the total uncertainty is remodeled into state dependent coefficient (SDC) \cite{cloutier1997state} form to augment the nominal dynamics. The details of the SDC formulation of the unmatched uncertainty are given in Section \ref{SDC}. Further, this augmented model is used for the synthesis of the direct adaptive controller for the nonlinear system to ensure the required reference model tracking.
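A minimal end-to-end sketch of the adaptive identification scheme of Section \ref{adaptive_identification} is given below, Euler-discretized. The plant, basis, adaptation gain, and the norm-ball projection set are all illustrative assumptions; the text does not fix them.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical plant x' = A x + B u + Delta(x) with an unknown uncertainty.
A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = np.array([0.0, 1.0])
true_delta = lambda x: np.array([0.0, 0.5 * np.sin(x[0])])

phi = lambda x: np.array([1.0, np.sin(x[0]), np.cos(x[0])])  # chosen basis
W_hat = np.zeros((3, 2))                                     # (k+1) x n weights

L_tau = 10.0 * np.eye(2)        # observer gain; A - L_tau is Hurwitz here
A_tau = A - L_tau
P = solve_continuous_lyapunov(A_tau.T, -np.eye(2))  # A_tau^T P + P A_tau = -Q

def proj(W, max_norm=50.0):
    """Norm-ball projection; the projection set is an assumption."""
    nrm = np.linalg.norm(W)
    return W if nrm <= max_norm else W * (max_norm / nrm)

dt, gamma = 1e-3, 20.0
x, x_hat = np.array([1.0, 0.0]), np.zeros(2)
for _ in range(20000):
    u = -x[0]                                       # any bounded control input
    e = x - x_hat                                   # observer tracking error
    dx = A @ x + B * u + true_delta(x)
    dxh = A @ x_hat + B * u + W_hat.T @ phi(x) + L_tau @ e
    W_hat = proj(W_hat + dt * gamma * np.outer(phi(x), P @ e))  # eq. (18)
    x, x_hat = x + dt * dx, x_hat + dt * dxh

final_error = np.linalg.norm(x - x_hat)
```

The observer state converges to the plant state while the projection keeps the weight estimates bounded, mirroring the UUB guarantee of the Lyapunov analysis.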
The term $\hat \Delta(x) \in \mathbb{R}^n$ in \eqref{eq:6} represent the estimate of the total uncertainty in \eqref{eq:unified uncertainty dynamics} and $L_{\tau}$ is the observer feedback gain. This feedback term helps placing the poles of observer tracking error dynamics to desired location further away from poles of reference plant, to make the observer tracking error dynamics faster \cite{lavretsky2012robust}. This condition is particularly helpful if using the observer information in the control synthesis. The true uncertainty $\Delta(x)$ in unknown, but it is assumed to be continues over a compact domain $\mathcal{D}_x \subset \mathbb{R}^n$. Neural Networks (NN) have been widely used to represent unstructured uncertainties whose basis is not known, Using NN, the network estimate of the uncertainty can be written as \begin{equation} \hat \Delta(x) \triangleq \hat W^T\phi(x) \label{eq:nn_estimate_defn} \end{equation} where $\hat W \in \mathbb{R}^{k \times n}$ are network weights and $\phi(x) = [1,\phi_1(x),\phi_1, \ldots,\phi_k(x)]^T$ is a $k$ dimensional vector of chosen basis function. The basis vector $\phi(x)$ is considered to be Lipschitz continuous to ensure the existence and uniqueness of the solution (\ref{eq:unified uncertainty dynamics}). 
From definition of the structure of total uncertainty from (\ref{eq:unified uncertainty dynamics}) the estimate of the individual components of matched and unmatched uncertainty can be expressed as, \begin{equation} \Omega\left[\begin{array}{c} \alpha \hat \Delta_m(x)\\ \beta \hat \Delta_{u}(x) \end{array} \right] \triangleq \hat \Delta(x) = \hat W^T\phi(x) \end{equation} \begin{equation} \Rightarrow \left[\begin{array}{c} \alpha \hat \Delta_m(x)\\ \beta \hat \Delta_{u}(x) \end{array} \right] = \Omega^{-1}\hat W^T\phi(x) = \xi(x) \end{equation} Therefore the estimate of matched and unmatched uncertainty are as follows, \begin{eqnarray} \hat \Delta_m(x) &=& \alpha^\dagger\xi_m(x)\\ \hat \Delta_u(x) &=& \beta^\dagger\xi_u(x) \end{eqnarray} where $\alpha^\dagger,\beta^\dagger$ are the left pseudo-inverse of $\alpha,\beta$. The projection of uncertainty on the range and the null space of $\mathcal{B}$ is represented by $\xi(x) = [\xi_m(x), \xi_u(x)]^T \in \mathbb{R}^n$ where $\xi_m(x) \in {\rm I\!R}^{r}$, $\xi_u(x) \in \mathbb{R}^{n-r}$ where $r$ being column rank of $\mathcal{B}$ Appealing to the universal approximation property of Neural Networks \cite{park1991universal} we have that, given a fixed number of basis functions $\phi(x) \in \mathbb{R}^k$ there exists ideal weights $W^* \in {\rm I\!R}^{k \times n}$ and $\epsilon(x) \in ma^{n}$ such that the following approximation holds \begin{equation} \Delta(x) = W^{*T}\phi(x) + \epsilon(x), \hspace{2mm} \forall x(t) \in \mathcal{D}_x \subset \mathbb{R}^{n} \label{eq:3} \end{equation} The network approximation error $\epsilon(x)$ is upper bounded, s.t $\bar \epsilon = \sup_{x \in \mathcal{D}_x}\|\epsilon(x)\|$, and can be made arbitrarily small given sufficiently large number of basis functions. 
\textbf{\emph{Assumption 1:}} For uncertainty parameterized by unknown true weight $W^* \in \mathbb{R}^{k \times n}$ and known nonlinear basis $\phi(x)$, the ideal weight matrix is assumed to be upper bounded s.t $\|W^*\| \leq \mathcal{W}$. Substituting \eqref{eq:nn_estimate_defn} in (\ref{eq:6}), the observer plant can be written as \begin{eqnarray} \dot{\hat x}(t) &=& A\hat x(t) + Bu(t) + \hat W^T\phi(x) + L_{\tau}\left(x(t)-\hat x(t)\right) \label{eq:7} \nonumber \\ \end{eqnarray} The observer model tracking error is defined as \begin{equation} e(t) = x(t)-\hat x(t) \end{equation} Using (\ref{eq:unified uncertainty dynamics}),(\ref{eq:6}) the tracking error dynamics can be written as \begin{equation} \dot e(t) = \dot x(t) - \dot{\hat x}(t) \label{eq:13} \end{equation} \begin{equation} \dot e(t) = \left(A + L_{\tau}C\right)e(t) + \tilde W^T\phi(x) + \epsilon(x) \label{eq:14} \end{equation} where $C = I_{n \times n}$ and $L_{\tau} = \left[l_{\tau 1} \ldots l_{\tau n}\right] \in \mathbb{R}^{n}$ is the tracking error feed-back gain in (\ref{eq:7}). Therefore the observer tracking error dynamics can be written as \begin{equation} \dot e(t) = A_\tau e(t) + \tilde W^T\phi(x) + \epsilon(x) \label{eq:15} \end{equation} where $A_\tau = \left(A + L_{\tau}C\right)$ is Hurwitz s.t $\lambda_{min}(A_\tau) < \lambda_{min}(A_{rm})$, where $\lambda_{min}(.)$ are minimum eigen values of $A_{\tau}$ and $A_{rm}$. \subsection{Online Parameter Estimation law} \label{Identification} The estimate to unknown true network parameters $W^*$ are evaluated on-line using gradient descent algorithm; correcting the weight estimates in the direction of minimizing the instantaneous tracking error $e(t) = x(t)-\hat x(t)$. 
The resulting update rule for network weights in estimating the total uncertainty in the system is as follows \begin{equation} \dot {\hat W} = \Gamma Proj(\hat W,\phi(x)e(t)'P) \label{eq:18} \end{equation} \subsubsection{Lyapunov Analysis} The on-line adaptive identification law (\ref{eq:18}) guarantees the asymptotic convergence of the observer tracking errors $e(t)$ and parameter error $\tilde W(t)$ under the condition of persistency of excitation \cite{aastrom2013adaptive,ioannou1988theory} for the structured uncertainty. Under the assumption of unstructured uncertainty, we show tracking error is uniformly ultimately bounded (UUB). \begin{theorem} Consider the actual and observer plant model (\ref{eq:unified uncertainty dynamics}) \& (\ref{eq:7}). If the weights parameterizing total uncertainty in the system are updated according to identification law \eqref{eq:18} Then the observer tracking error and error in network weights $\|\tilde e\|$, $\|\tilde W\|$ are bounded. \end{theorem} \begin{proof} Let $V(e,\tilde W) > 0$ be a differentiable, positive definite radially unbounded Lyapunov candidate function, \begin{equation} V(e,\tilde W) = e^TPe + \frac{\tilde W^T \Gamma^{-1} \tilde W}{2} \label{eq:20} \end{equation} where $\Gamma >0$ is the adaption rate. The time derivative of the lyapunov function (\ref{eq:20}) along the trajectory (\ref{eq:15}) can be evaluated as \begin{equation} \dot V(e,\tilde W) = \dot e^TPe + e^TP \dot e - \tilde W^T\Gamma^{-1}\dot{\hat W} \label{eq:25} \end{equation} \begin{eqnarray} \dot V(e,\tilde W) &=& -e^TQe + \left(\phi(x)e'P - \Gamma^{-1}\dot{\hat W}\right)2\tilde W \nonumber\\ && + 2e^TP\epsilon(x) \label{eq:21} \end{eqnarray} for $P = P^T >0$ and $A_{\tau}$ be Hurwitz matrix, $P$ is the solution of lyapunov equation $A^T_{\tau}P + PA_{\tau} = -Q$ for some $Q > 0$. 
Using the expressions for weight update rule (\ref{eq:18}) in (\ref{eq:21}), the time derivative of the lyanpunov function reduces to \begin{eqnarray} \dot V(e,\tilde W) &=& -e^TQe + 2e^TP\epsilon(x) \label{eq:22} \end{eqnarray} \begin{eqnarray} \dot V(e,\tilde W) &\leq& -\lambda_{min}(Q)e^Te + 2\lambda_{max}(P)\bar\epsilon e \label{eq:23} \end{eqnarray} Hence $\dot V(e,\tilde W) \leq 0$ outside compact neighborhood of the origin $e = 0$, for some sufficiently large $\lambda_{min}(Q)$. \begin{equation} \|e(t)\| \geq \frac{2\lambda_{max}(P)\bar\epsilon}{\lambda_{min}(Q)} \end{equation} Hence the observer tracking tracking error $\|e(t)\|$ is uniformly ultimately bounded. Furthermore, from the BIBO assumption $x_{rm}(t)$ is bounded for bounded reference signal $r(t)$, thereby $x(t)$ remains bounded. Since $V(e,\tilde W)$ is radially unbounded the result holds for all $x(0) \in \mathcal{D}_x$. The boundedness of estimation parameters follows from Lyapunov theory and Barbalat’s Lemma \cite{narendra2012stable} \end{proof} \section{Sample Complexity and Stability Analysis for DMRAC} In this section, we present the sample complexity results, generalization error bounds and stability guarantee proof for DMRAC. We show that DMRAC controller is characterized by the memory of the features learned over previously observed training data. We further demonstrate in simulation that when a trained DMRAC is used as a feed-forward network with frozen weights, can still produce bounded tracking performance on reference tracking tasks that are related but reasonably different from those seen during network training. We ascribe this property of DMRAC to the very low generalization error bounds of the DNN. We will prove this property in two steps. Firstly we will prove the bound on the generalization error of DNN using Lyapunov theory such that we achieve an asymptotic convergence in tracking error. 
Further, we will show information theoretically the lower bound on the number of independent samples we need to train through before we can claim the DNN generalization error is well below a determined lower level given by Lyapunov analysis. \subsection{Stability Analysis} The generalization error of a machine learning model is defined as the difference between the empirical loss of the training set and the expected loss of test set \cite{2018arXiv180801174J}. This measure represents the ability of the trained model to generalize well from the learning data to new unseen data, thereby being able to extrapolate from training data to new test data. Hence generalization error can be defined as \begin{equation} \hat{\Delta}(x) - f_{\boldsymbol{\theta}}(x) \leqslant \epsilon \end{equation} Using the DMRAC (as frozen network) controller in \eqref{eq:total_Controller} and using systems \eqref{eq:0} we can write the system dynamics as \begin{equation} \dot x(t) = Ax(t) + B(-Kx(t) + K_rr(t) -f_{\boldsymbol{\theta}}(x(t)) + \Delta(x)) \end{equation} We can simplify the above equation as \begin{equation} \dot x(t) = A_{rm}x(t) + B_{rm}r(t)+B(\Delta(x)-f_{\boldsymbol{\theta}}(x(t))) \end{equation} Adding and subtracting the term ${\Delta}'(x)$ in above expression and using the training and generalization error definitions we can write, \begin{eqnarray} \dot x(t) &=& A_{rm}x(t) + B_{rm}r(t)\\ &&+B(\Delta(x)-{\Delta}'(x(t))+{\Delta}'(x(t))-f_{\boldsymbol{\theta}}(x(t))) \nonumber \end{eqnarray} The term $\left(\Delta(x)-{\Delta}'(x(t))\right)$ is the D-MRGeN training error and $\left({\Delta}'(x(t))-f_{\boldsymbol{\theta}}(x(t))\right)$ is the generalization error of the DMRAC DNN network. For simplicity of analysis we assume the training error is zero, this assumption is not very restrictive since training error can be made arbitrarily small by tuning network architecture and training epochs. 
The reference tracking error dynamics can be written as, \begin{equation} \dot e(t) = A_{rm}e(t) + \epsilon \label{eq:DMRAC_error_dynamics} \end{equation} To analyze the asymptotic tracking performance of the error dynamics under DMRAC controller we can define a Lyapunov candidate function as $V(e) = e^TPe$ and its time derivative along the error dynamics \eqref{eq:DMRAC_error_dynamics} can be written as \begin{equation} \dot V(e) = -e^TQe + 2\epsilon Pe \end{equation} where $Q$ is solution for the Lyaunov equation $A_{rm}^TP + PA_{rm} = -Q$. To satisfy the condition $\dot V(e) < 0$ we get the following upper bound on generalization error, \begin{equation} \|\epsilon\| < \frac{\lambda_{max}(Q)\|e\|}{\lambda_{min}(P)} \label{eq:generalization_bound} \end{equation} The idea is, that if the DNN produces a generalization error lower than the specified bound \eqref{eq:generalization_bound}, then we can claim Lyanpunov stability of the system under DMRAC controller. \subsection{Sample Complexity of DMRAC} In this section, we will study the sample complexity results from computational theory and show that when applied to a network learning real-valued functions the number of training samples grows at least linearly with the number of tunable parameters to achieve specified generalization error. \begin{theorem} Suppose a neural network with arbitrary activation functions and an output that takes values in $[-1,1]$. Let $\mathcal{H}$ be the hypothesis class characterized by N-weights and each weight represented using k-bits. Then any squared error minimization (SEM) algorithm $\mathcal{A}$ over $\mathcal{H}$, to achieve a generalization error \eqref{eq:generalization_bound} admits a sample complexity bounded as follows \begin{equation} m_{\mathcal{A}}(\epsilon, \delta) \leqslant \frac{1}{\epsilon^2} \left(kN\ln2 + \ln\left(\frac{2}{\delta}\right)\right) \end{equation} where $N$ is total number of tunable weights in the DNN. 
\end{theorem} \begin{proof} Let $\mathcal{H}$ be finite hypothesis class of function mapping s.t $\mathcal{H}: \mathcal{X} \to [-1,1] \in \mathbb{R}^m$ and $\mathcal{A}$ is SEM algorithm for $\mathcal{H}$. Then by Hoeffding inequality for any fixed $f_{\boldsymbol{\theta}} \in \mathcal{H}$ the following event holds with a small probability $\delta$ \begin{eqnarray} &&P^m\{|L(\boldsymbol{Z, \theta}) - \mathbb{E}_P(\ell(\boldsymbol{Z, \theta}))| \geq \epsilon\}\\ &=& P^m\left\{\left|\sum_{i=1}^m \ell(\boldsymbol{Z, \theta}) - m\mathbb{E}_P(\ell(\boldsymbol{Z, \theta}))\right| \geq m\epsilon\right\}\\ &\leq& 2e^{-\epsilon^2m/2} \end{eqnarray} Hence\vspace{-5mm} \begin{eqnarray} &&P^m\{ \forall f_{\boldsymbol{\theta}} \in \mathcal{H}, | \left|L(\boldsymbol{Z, \theta}) - \mathbb{E}_P(\ell(\boldsymbol{Z, \theta}))\right| \geq \epsilon\} \nonumber \\ &\leq& 2|\mathcal{H}|e^{-\epsilon^2m/2} = \delta \label{eq:sample_complexity} \end{eqnarray} We note that the total number of possible states that is assigned to the weights is $\left(2^k\right)^N$ since there are $2^k$ possibilities for each weights. Therefore $\mathcal{H}$ is finite and $|\mathcal{H}| \leq 2^{kN}$. The result follows immediately from simplifying Eq-\eqref{eq:sample_complexity}. \end{proof} \section{Conclusion} \label{conclusions} In this paper, we presented a DMRAC adaptive controller using model reference generative network architecture to address the issue of feature design in unstructured uncertainty. The proposed controller uses DNN to model significant uncertainties without knowledge of the system's domain of operation. We provide theoretical proofs of the controller generalizing capability over unseen data points and boundedness properties of the tracking error. 
Numerical simulations with a 6-DOF quadrotor model demonstrate the controller's performance in achieving reference model tracking in the presence of significant matched uncertainties, and its learning retention when used as a feed-forward adaptive network on similar but unseen new tasks. We therefore claim that DMRAC is a powerful architecture for high-performance control of nonlinear systems with robustness and long-term learning properties. \vspace{-1mm} \section{Overview of the Presented Method} The presented hybrid controller architecture for systems with matched and unmatched uncertainties uses a combination of direct and indirect adaptive controllers, Fig \ref{fig:flowdiagram}. This hybrid direct-indirect approach is required since a direct adaptive controller alone cannot cancel the effect of unmatched uncertainty. The total controller is realized through a two-step process: (1) learning an observer model, such that $\hat x(t) \to x(t)$, and (2) reference model tracking, $x(t) \to x_{rm}(t)$. The goal of the direct adaptive controller is to enforce the uncertain system to track a stable reference model that characterizes the desired closed-loop response. The details of the direct adaptive controller are provided in Section \ref{direct}. The reference model is assumed to be linear, and therefore the desired transient and steady-state performance is defined by selecting the system eigenvalues in the open left half plane. The desired closed-loop response of the reference system is given by \begin{equation} \dot x_{rm}(t) = A_{rm}x_{rm}(t) + B_{rm}r(t) \label{eq:ref model} \end{equation} where $x_{rm}(t) \in \mathcal{D}_x \subset \mathbb{R}^{n}$ and $A_{rm} \in \mathbb{R}^{n \times n}$ is Hurwitz. Furthermore, the command $r(t)$ denotes a bounded, piecewise continuous reference signal, and we assume the reference model (\ref{eq:ref model}) is bounded input-bounded output (BIBO) stable \cite{ioannou1988theory}.
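As a concrete illustration, the reference model \eqref{eq:ref model} can be simulated directly. The sketch below uses an illustrative two-state Hurwitz $A_{rm}$ (not from the paper) and forward-Euler integration to show the BIBO-stable response to a constant command.

```python
# Sketch: forward-Euler simulation of the reference model
#   x_rm_dot = A_rm x_rm + B_rm r(t)
# for an illustrative two-state example (matrices are not from the paper).
# A_rm has eigenvalues -1 +/- j*sqrt(3), hence is Hurwitz.

A_rm = [[0.0, 1.0],
        [-4.0, -2.0]]
B_rm = [0.0, 4.0]   # single command channel

def step(x, r, dt=0.001):
    """One forward-Euler step of the reference model."""
    dx0 = A_rm[0][0] * x[0] + A_rm[0][1] * x[1] + B_rm[0] * r
    dx1 = A_rm[1][0] * x[0] + A_rm[1][1] * x[1] + B_rm[1] * r
    return [x[0] + dt * dx0, x[1] + dt * dx1]

def simulate(r, T=10.0, dt=0.001):
    x = [0.0, 0.0]
    for _ in range(int(T / dt)):
        x = step(x, r, dt)
    return x

# For a constant command r, the state settles at -A_rm^{-1} B_rm r = [r, 0]
x_final = simulate(r=1.0)
```

For a bounded command the Hurwitz $A_{rm}$ guarantees a bounded state, which is the BIBO property assumed above.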
The observer plant model provides the estimates of the uncertainties in the system. The details of the companion observer plant model are provided in Section \ref{observer_dynamics}. The $\ell_2$-norm of the observer tracking error is used as a measure of confidence in the estimate of the total uncertainty. Upon convergence of the observer, that is, as the tracking error falls below a determinable, sufficiently small threshold ``$\gamma$", the unmatched part of the total uncertainty is remodeled into state dependent coefficient (SDC) \cite{cloutier1997state} form to augment the nominal dynamics. The details of the SDC formulation of the unmatched uncertainty are given in Section \ref{SDC}. Further, this augmented model is used for the synthesis of the direct adaptive controller for the nonlinear system to ensure the required reference model tracking. \section{Adaptive Control using Deep nets (DMRAC)} \label{section_DMRAC} The DNN architecture for MRAC is trained in two steps. We separate the DNN into two networks, as shown in Fig-\ref{DNN_architecture}: a fast-learning outer adaptive network and a slower deep feature network. DMRAC learns the underlying deep feature vector for the system uncertainty using locally exciting uncertainty estimates obtained from a generative network. Between successive updates of the inner layer weights, the feature provided by the inner layers of the deep network is used as the fixed feature vector for the outer layer adaptive network update and evaluation. The algorithm for DNN learning and the DMRAC controller is provided in Algorithm-\ref{alg:DMRAC}. Through this architecture mixing two time-scale learning, we fuse the benefits of DNN memory, through the retention of relevant exciting features, with the robustness and boundedness guarantees of reference tracking. This key feature of the presented framework ensures robustness while guaranteeing long-term learning and memory in the adaptive network.
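The two time-scale separation described above can be sketched as follows. The class, the `inner_period` parameter, and the bookkeeping fields are hypothetical scaffolding; the actual outer-layer MRAC update and batch DNN training are elided.

```python
# Sketch of the dual time-scale pattern described above (bookkeeping only):
# the outer-layer weights W would be updated at every measurement, while the
# inner feature layers are retrained in batch mode every `inner_period` steps.
# The class and its fields are hypothetical, not the paper's implementation.

class DualTimescaleAdapter:
    def __init__(self, inner_period=50):
        self.inner_period = inner_period
        self.fast_updates = 0    # pointwise outer-layer (W) updates
        self.slow_updates = 0    # batch inner-feature retraining events
        self.buffer = []

    def on_measurement(self, x, target):
        # Fast loop: outer adaptive weights updated at every sample.
        self.fast_updates += 1
        self.buffer.append((x, target))
        # Slow loop: inner feature layers retrained on a batch, less often;
        # between retrainings the features stay frozen for the outer update.
        if self.fast_updates % self.inner_period == 0:
            self.slow_updates += 1
            # (batch DNN training over self.buffer would happen here)

adapter = DualTimescaleAdapter(inner_period=50)
for t in range(500):
    adapter.on_measurement(x=t * 0.05, target=0.0)
```

The frozen inner features between batch retrainings are what allow the outer-layer update to be analyzed as a standard linear-in-parameters adaptive law.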
Also, as indicated in the controller architecture Fig-\ref{DNN_architecture}, we can use contextual states `$c_i$' other than the system state $x(t)$ to extract relevant features. These contextual states could be relevant model information not captured in the system states; for example, for an aircraft system, vehicle parameters such as pitot tube measurements, the angle of attack, engine thrust, and so on. These contextual states can yield features which help in decision making in case of faults. DMRAC with contextual states will be dealt with in follow-on work. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{Fig/DNN} \caption{DMRAC training and controller details} \label{DNN_architecture} \vspace{-5mm} \end{figure} The DNN in the DMRAC controller is trained over the training dataset $Z^M = \{x_i, {\Delta}'(x_i)\}_{i=1}^M$, where the ${\Delta}'(x_i)$ are D-MRGeN estimates of the uncertainty. The training dataset $Z^M$ is randomly drawn from a larger data buffer $\mathcal{B}$. Not every pair of data $\{x_i, {\Delta}'(x_i)\}$ from D-MRGeN is added to the training buffer $\mathcal{B}$: we qualify each input-target pair using a kernel independence test to ensure that we collect locally exciting, independent information which provides a sufficiently rich representation of the operating domain. Since the state-uncertainty data is the realization of a Markov process, such a method for qualifying data as sufficiently independent of previous data points is necessary. The details of how a data point is qualified and added to the buffer are provided in subsection \ref{subsection-buffer}. \subsection{Details of Deep Feature Training using D-MRGeN} This section provides the details of the DNN training over data samples observed over the n-dimensional input subspace $x(t) \in \mathcal{X} \subset \mathbb{R}^{n}$ and the m-dimensional target subspace $y\in\mathcal{Y} \subset \mathbb{R}^m$.
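The kernel-independence qualification of buffer candidates mentioned above can be sketched as follows; the 2-D feature map `phi` and the specific test points are hypothetical stand-ins for the DNN inner layers.

```python
import math

# Sketch of the kernel-independence qualification: a candidate feature vector
# Phi(x) is recorded only if ||Phi(x) - Phi_p||^2 / ||Phi(x)|| >= zeta_tol for
# every Phi_p already in the buffer. The 2-D feature map `phi` below is a
# hypothetical stand-in for the DNN inner layers.

def phi(x):
    return [math.tanh(x), math.tanh(0.5 * x)]

def qualifies(phi_new, buffer, zeta_tol=0.2):
    norm = math.sqrt(sum(v * v for v in phi_new))
    if norm == 0.0:
        return False
    for phi_p in buffer:
        dist_sq = sum((a - b) ** 2 for a, b in zip(phi_new, phi_p))
        if dist_sq / norm < zeta_tol:
            return False   # too close to an already-recorded feature
    return True

buffer = []
for x in [1.0, 1.001, -1.0]:   # the near-duplicate 1.001 should be rejected
    if qualifies(phi(x), buffer):
        buffer.append(phi(x))
```

Rejecting near-duplicates in feature space is what keeps the recorded set close to the i.i.d. ideal despite the Markovian state trajectory.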
The sample set is denoted $\mathcal{Z}$, where $\mathcal{Z} = \mathcal{X} \times \mathcal{Y}$. We are interested in function approximation tasks for the DNN. The function $f_{\boldsymbol{\theta}}$ is the learned approximation to the model uncertainty with parameters $\boldsymbol{\theta} \in \boldsymbol{\Theta}$, where $\boldsymbol{\Theta}$ is the space of parameters, i.e. $f_{\boldsymbol{\theta}}: \mathbb{R}^n \to \mathbb{R}^m$. We assume the training data buffer $\mathcal{B}$ holds $p_{max}$ training examples, such that the set $Z^{p_{max}} = \{Z_i | Z_i \in \mathcal{Z}\}_{i=1}^{p_{max}} = \{(x_i, y_i) \in \mathcal{X}\times\mathcal{Y}\}_{i=1}^{p_{max}}$. The samples are independently drawn from the buffer $\mathcal{B}$ over a probability distribution $P$. The hypothesis set, which consists of all possible functions $f_{\boldsymbol{\theta}}$, is denoted $\mathcal{H}$. Therefore a learning algorithm $\mathcal{A}$ (in our case SGD) is a mapping $\mathcal{A}: \mathcal{Z}^{p_{max}} \to \mathcal{H}$. The loss function, which measures the discrepancy between the true target $y$ and the algorithm's estimate $f_{\boldsymbol{\theta}}(x)$, is denoted $L(y, f_{\boldsymbol{\theta}}(x))$. Specific to the work presented in this paper, we use the $\ell_2$-norm between values, i.e. $\mathbb{E}_P(\ell(y, f_{\boldsymbol{\theta}}(x))) = \mathbb{E}_P \left(\|y_i - f_{\boldsymbol{\theta}}(x_i)\|_2\right)$, as the loss function for DNN training. The empirical loss \eqref{empirical_loss} is used to approximate the expected loss, since the distribution $P$ is unknown to the learning algorithm. The weights are updated using SGD in the direction of the negative gradient of the loss function, as given in \eqref{SGD2}. Unlike conventional DNN training, where the true target values $y \in \mathcal{Y}$ are available for every input $x \in \mathcal{X}$, in DMRAC the true system uncertainties are not available as labeled targets for network training.
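A minimal sketch of this training step, mini-batch SGD on an empirical $\ell_2$-type loss, is shown below for a hypothetical scalar linear model standing in for the DNN; the norm is squared here for differentiability, and the data generator is illustrative only.

```python
import random

# Sketch: mini-batch SGD on the empirical loss (1/M) sum_i (y_i - f_theta(x_i))^2
# for a hypothetical scalar linear model f_theta(x) = theta * x standing in
# for the DNN. The data generator (y = 2x) is illustrative only.

random.seed(0)
data = [(x, 2.0 * x) for x in [random.uniform(-1.0, 1.0) for _ in range(200)]]

def sgd(theta, data, eta=0.1, epochs=100, batch=20):
    for _ in range(epochs):
        minibatch = random.sample(data, batch)
        # gradient of the mini-batch empirical squared loss w.r.t. theta
        grad = sum(-2.0 * (y - theta * x) * x for x, y in minibatch) / batch
        theta -= eta * grad
    return theta

theta = sgd(theta=0.0, data=data)  # should approach the generating slope 2.0
```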
We use part of the network itself (the last layer), with weights updated pointwise in time according to the MRAC rule, as the generative model for the data. The D-MRGeN uncertainty estimates $y = {W}^T\Phi(x,\theta_1,\theta_2, \ldots \theta_{n-1})= {\Delta}'(x)$, together with the inputs $x_i$, make up the training data set $Z^{p_{max}} = \{x_i, {\Delta}'(x_i)\}_{i=1}^{p_{max}}$. Note that we use $x_i$ and $x(t)$ interchangeably as the discrete representation of the continuous state vector for DNN training. The main purpose of the DNN in the adaptive network is to extract relevant features of the system uncertainties, which are otherwise very tedious to obtain without limits on the domain of operation. We also demonstrate empirically that the DNN features trained over past i.i.d representative data retain the memory of past instances and can be used as a frozen feed-forward network over similar reference tracking tasks without loss of the guaranteed tracking performance. \subsection{Method for Recording Data using MRGeN for DNN Training} \label{subsection-buffer} In statistical inference, one always assumes, implicitly or explicitly, that the training set $Z^M = \{x_i, y_i\}_{i=1}^M$ is composed of $M$ input-target tuples that are independently drawn from the buffer $\mathcal{B}$ over the same joint distribution $P(x,y)$. The i.i.d assumption on the data is required for robustness and consistency of the network training and for bounds on the generalization error \cite{xu2012robustness, vandegeer2009}. In classical generalization proofs one such condition is that $\frac{1}{p_{max}}\mathbb{X}^T\mathbb{X} \to \gamma$ as ${p_{max}} \to \infty$, where $\mathbb{X}$ denotes the design matrix with rows $\Phi_i^T$. The i.i.d assumption implies the above condition is fulfilled, and is hence a sufficient but not necessary condition for consistency and error bounds for generative modeling. The key capability brought about by DMRAC is relevant feature extraction from the data.
Feature extraction in the DNN is achieved by using recorded data concurrently with current data. The recorded data include the state $x_i$, the feature vector $\Phi(x_i)$ and the associated D-MRGeN estimate of the uncertainty ${\Delta}'(x_i)$. For a given $\zeta_{tol} \in \mathbb{R}_+$, a simple way to select the instantaneous data point $\{x_i, \Delta'(x_i)\}$ for recording is to require it to satisfy the following condition \begin{equation} \gamma_i = \frac{\|\Phi(x_i) - \Phi_p\|^2}{\|\Phi(x_i)\|} \geq \zeta_{tol} \label{eq:kernel_test} \end{equation} where the index $p$ ranges over the data points in the buffer $\mathcal{B}$. This method ensures that only those data points are selected for recording that are sufficiently different from all previously recorded data points in the buffer. Since the buffer $\mathcal{B}$ is of finite dimension, the data is stored in a cyclic manner. Once the number of data points reaches the buffer budget, a new data point is added only upon removing an existing data point such that the minimum singular value of the buffer is maximized. The singular value maximization approach for the training data buffer update is provided in \cite{5991481}. \begin{algorithm}[h!] \caption{D-MRAC Controller Training} \label{alg:DMRAC} \begin{algorithmic}[1] \STATE {\bfseries Input:} $\Gamma, \eta, \zeta_{tol}, p_{max}$ \WHILE{New measurements are available} \STATE Update the D-MRGeN weights $W$ using Eq:\eqref{eq:18} \STATE Compute $y_{\tau+1} = \hat{W}^T\Phi(x_{\tau+1})$ \STATE Given $x_{\tau+1}$ compute $\gamma_{\tau+1}$ by Eq-\eqref{eq:kernel_test}.
\IF{$\gamma_{\tau+1} \geqslant \zeta_{tol}$} \STATE Update $\mathcal{B}:\boldsymbol{Z}(:) = \{x_{\tau+1}, y_{\tau+1}\}$ and $\mathbb{X}: \Phi(x_{\tau+1})$ \IF{$|\mathcal{B}| > p_{max}$} \STATE Delete element in $\mathcal{B}$ by SVD maximization \cite{5991481} \ENDIF \ENDIF \IF{$|\mathcal{B}| \geq M$} \STATE Sample a mini-batch of data $\boldsymbol{Z}^M \subset \mathcal{B}$ \STATE Train the DNN network over mini-batch data using Eq-\eqref{SGD2} \STATE Update the feature vector $\Phi$ for the D-MRGeN network \ENDIF \ENDWHILE \end{algorithmic} \end{algorithm} \section{Introduction} Deep Neural Networks (DNNs) have lately shown tremendous empirical performance in many applications and various fields such as computer vision, speech recognition, translation, natural language processing, robotics, autonomous driving and many more \cite{goodfellow2016deep}. Unlike their counterparts such as shallow networks with Radial Basis Function features \cite{sanner:TNN:92,liu2018gaussian}, deep networks learn features by learning the weights of nonlinear compositions of weighted features arranged in a directed acyclic graph \cite{2013arXiv1301.3605Y}. It is now well established that deep neural networks outperform other classical machine-learning techniques \cite{hinton2012deep}. Leveraging these successes, there have been many exciting new claims regarding the control of complex dynamical systems in simulation using deep reinforcement learning \cite{mnih2015human}. However, Deep Reinforcement Learning (D-RL) methods typically do not guarantee stability or even the boundedness of the system during the learning transient. Hence, despite significant simulation success, D-RL has seldom been used in safety-critical applications. D-RL methods often make the ergodicity assumption, requiring that there is a nonzero probability of the system states returning to the origin. In practice, such a condition is typically enforced by resetting the simulation when a failure occurs.
Unfortunately, however, real-world systems do not have this reset option. Unlike D-RL, much effort has been devoted in the field of adaptive control to ensuring that the system stays stable during learning. Model Reference Adaptive Control (MRAC) is one such leading method for adaptive control that seeks to learn a high-performance control policy in the presence of significant model uncertainties \cite{ioannou1988theory,tao2003adaptive, Pomet_92_TAC}. The key idea in MRAC is to find an update law for a parametric model of the uncertainty that ensures that a candidate Lyapunov function is non-increasing. Many update laws have been proposed and analyzed, which include, but are not limited to, $\sigma$-modification \cite{ioannou1996robust}, $e$-modification \cite{annaswamy_CDC_89}, and projection-based updates \cite{Pomet_92_TAC}. More modern laws extending the classical parametric setting, including $\ell_1$-adaptive control \cite{hovakimyan2010ℒ1} and concurrent learning \cite{chowdhary2013concurrent}, have also been studied. A more recent approach introduced by the authors is Gaussian Process Model Reference Adaptive Control (GP-MRAC), which utilizes a GP as a model of the uncertainty. A GP is a Bayesian nonparametric adaptive element that can adapt both its weights and the structure of the model in response to the data. The authors and others have shown that GP-MRAC has strong long-term learning properties as well as high control performance \cite{chowdhary2015bayesian, joshi2018adaptive}. However, GPs are ``shallow'' machine learning models and do not utilize the power of learning complex features through compositions as deep networks do (see Section \ref{sec:feature}). Hence, one wonders whether the power of deep learning could lead to even more powerful learning-based MRAC architectures than those utilizing GPs. In this paper, we address this critical question: How can MRAC utilize deep networks while guaranteeing stability?
Towards that goal, our contributions are as follows: a) We develop an MRAC architecture that utilizes DNNs as the adaptive element; b) We propose an algorithm for the online update of the weights of the DNN by utilizing a dual time-scale adaptation scheme, in which the weights of the outermost layer are adapted in real time, while the weights of the inner layers are adapted using batch updates; c) We develop theory to guarantee Uniform Ultimate Boundedness (UUB) of the entire DMRAC controller; d) We demonstrate through simulation results that this architecture has desirable long-term learning properties. We demonstrate how DNNs can be utilized in stable learning schemes for adaptive control of safety-critical systems. This provides an alternative to deep reinforcement learning for adaptive control applications requiring stability guarantees. Furthermore, the dual time-scale analysis scheme used here should be generalizable to other DNN-based learning architectures, including reinforcement learning. \section{Background} \subsection{Deep Networks and Feature spaces in machine learning}\label{sec:feature} The key idea in machine learning is that a given function can be encoded with weighted combinations of a \textit{feature} vector $\Phi \in \mathcal{F}$, s.t $\Phi(x)=[\phi_1(x), \phi_2(x),...,\phi_k(x)]^T\in \mathbb{R}^k$, and $W^*\in\mathbb{R}^{k \times m}$ a matrix of `ideal' weights s.t $\|y(x)-W^{*^T}\Phi(x)\|_\infty<\epsilon(x)$. Instead of hand-picking features, or relying on polynomials, Fourier basis functions, comparison-type features used in support vector machines \cite{schoelkofp:01,scholkopf2002learning} or Gaussian Processes \cite{rasmussen2006gaussian}, DNNs utilize composite functions of features arranged in a directed acyclic graph, i.e. $\Phi(x)=\phi_n(\theta_{n-1},\phi_{n-1}(\theta_{n-2},\phi_{n-2}(\ldots)))$ where $\theta_i$'s are the layer weights.
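The composed feature map can be sketched as nested layer applications; the weight matrices, layer sizes, and readout below are illustrative only.

```python
import math

# Sketch of the composed feature map Phi(x) = phi_n(theta_{n-1}, phi_{n-1}(...)):
# each hidden layer applies a weight matrix followed by a tanh activation.
# The weight values and layer sizes below are illustrative only.

def layer(theta, v):
    # theta: list of rows (k_out x k_in); v: input vector of length k_in
    return [math.tanh(sum(w * vi for w, vi in zip(row, v))) for row in theta]

def Phi(x, thetas):
    """Compose the inner-layer features over the list of layer weights."""
    v = x
    for theta in thetas:
        v = layer(theta, v)
    return v

thetas = [
    [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]],   # theta_1: R^2 -> R^3
    [[0.2, 0.7, -0.5], [0.6, -0.1, 0.3]],     # theta_2: R^3 -> R^2
]
features = Phi([1.0, -1.0], thetas)
# A linear readout W^T Phi(x) then gives the function approximation
W = [0.9, -0.4]
estimate = sum(wi * fi for wi, fi in zip(W, features))
```

The final linear readout over the composed features is exactly the structure that the adaptive element exploits later in the paper.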
The universal approximation property of the DNN with commonly used feature functions such as sigmoid, tanh, and ReLU is proved in the work by Hornik \cite{Hornik:NN89} and shown empirically to be true by recent results \cite{2016arXiv160300988M,Poggio2017,2016arXiv161103530Z}. Hornik et al. argued that a network with at least one hidden layer (also called a Single Hidden Layer (SHL) network) is a universal approximator. However, empirical results show that networks with more hidden layers exhibit better generalization capability in approximating complex functions. While the theoretical reasons behind the better generalization ability of DNNs are still being investigated \cite{2016Matus}, for the purpose of this paper we will assume that it is indeed true, and focus our efforts on designing a practical and stable control scheme using DNNs. \subsection{Neuro-adaptive control} Neural networks in adaptive control have been studied for a very long time. The seminal paper by Lewis \cite{Lewis:AJC:99} utilized Taylor series approximations to demonstrate uniform ultimate boundedness with a single hidden layer neural network. SHL networks are nonlinear in the parameters; hence, the analysis previously introduced for linear-in-parameters radial basis function neural networks by Sanner and Slotine does not directly apply \cite{sanner:TNN:92}. The back-propagation-type scheme with a non-increasing Lyapunov candidate as a constraint, introduced in Lewis' work, has been widely used in neuro-adaptive MRAC. Concurrent Learning MRAC (CL-MRAC) is a method for learning-based neuro-adaptive control developed by the authors to improve the learning properties and provide exponential tracking and weight error convergence guarantees. However, similar guarantees have not been available for SHL networks.
There has been much work towards including deeper neural networks in control; however, strong guarantees like those in MRAC on the closed-loop stability during online learning are not available. In this paper, we propose a dual time-scale learning approach which ensures such guarantees. Our approach should be generalizable to other applications of deep neural networks, including policy gradient Reinforcement Learning (RL) \cite{sutton1992reinforcement}, which is very close to adaptive control in its formulation, and also to more recent work in RL for control \cite{modares2014integral}. \subsection{Stochastic Gradient Descent and Batch Training} We consider a deep network model with parameters $\boldsymbol{\theta}$, and consider the problem of optimizing a non-convex loss function $L(\boldsymbol{Z, \theta})$ with respect to $\boldsymbol{\theta}$. Let $L(\boldsymbol{Z, \theta})$ be defined as the average loss over $M$ training sample data points, \begin{equation} L(\boldsymbol{Z, \theta}) = \frac{1}{M}\sum_{i=1}^M \ell(\boldsymbol{Z_i, \theta}) \label{empirical_loss} \end{equation} where $M$ denotes the size of the sample training set. For each sample size $M$, the training data are in the form of an $M$-tuple $Z^M = (Z_1, Z_2, \dots Z_M)$ of $Z$-valued random variables drawn according to some unknown distribution $P \in \mathcal{P}$, where each $Z_i = \{x_i, y_i\}$ is a labeled pair of input and target values. For each $P$, the expected loss can be computed as $\mathbb{E}_P(\ell(\boldsymbol{Z,\theta}))$. The empirical loss \eqref{empirical_loss} is used as a proxy for the expected value of the loss with respect to the true data-generating distribution. Optimization based on the Stochastic Gradient Descent (SGD) algorithm uses a stochastic approximation of the gradient of the loss $L(\boldsymbol{Z, \theta})$ obtained over a mini-batch of $M$ training examples drawn from the buffer $\mathcal{B}$.
The resulting SGD weight update rule is \begin{eqnarray} \boldsymbol{\theta}_{k+1} &=& \boldsymbol{\theta}_k - \eta \frac{1}{M}\sum_{i=1}^M \nabla_{ \boldsymbol{\theta}}\ell(\boldsymbol{Z_i, \theta}_k) \label{SGD2} \end{eqnarray} where $\eta$ is the learning rate. Further details on generating i.i.d samples for DNN learning and the training details of the network are provided in Section \ref{section_DMRAC}. \section{Simulations} \label{results} In this section, we evaluate the presented DMRAC adaptive controller using a 6-DOF quadrotor model for the reference trajectory tracking problem. The quadrotor model is completely described by 12 states: three positions and velocities in the North-East-Down reference frame, and three body angles and angular velocities. The full description of the dynamic behavior of a quadrotor is beyond the scope of this paper; interested readers can refer to \cite{joshi2017robust} and references therein. The control law treats the moments and forces on the vehicle due to the unknown true inertia/mass of the vehicle, and the moments due to aerodynamic forces of the crosswind, as unmodeled uncertainty terms, which are captured online through the DNN adaptive element. The outer-loop control of the quadrotor is achieved through a Dynamic Inversion (DI) controller, and we use DMRAC for the inner-loop attitude control. A simple wind model with a boundary layer effect is used to simulate the effect of crosswind on the vehicle. A second-order reference model with natural frequency $4\,rad/s$ and damping ratio $0.5$ is used. Further stochasticity is added to the system by adding Gaussian white noise to the states with a variance of $\omega_n = 0.01$. The simulation runs for $150$\,s with a time step of $0.05$\,s. The maximum number of points ($p_{max}$) to be stored in the buffer $\mathcal{B}$ is arbitrarily set to $250$, and the SVD maximization algorithm is used to cyclically update $\mathcal{B}$ when the budget is reached; for details refer to \cite{5991481}.
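The quoted second-order reference model can be stepped at the paper's $0.05$\,s sample time; this sketch reproduces only the commanded step response it prescribes, not the full quadrotor simulation.

```python
# Sketch of the quoted second-order reference model (natural frequency 4 rad/s,
# damping ratio 0.5), i.e. yddot + 2*zeta*wn*ydot + wn^2*y = wn^2*r, stepped
# with forward Euler at the paper's 0.05 s time step over the 150 s horizon.

wn, zeta, dt = 4.0, 0.5, 0.05
y, ydot = 0.0, 0.0
r = 1.0   # unit step command
for _ in range(int(150.0 / dt)):
    yddot = wn * wn * (r - y) - 2.0 * zeta * wn * ydot
    y, ydot = y + dt * ydot, ydot + dt * yddot
# y should settle at the commanded value r
```

With $\zeta = 0.5$ the response is underdamped but settles well within the simulated horizon, which is the transient behavior the reference model imposes on the closed loop.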
The controller is designed to track a stable reference command $r(t)$. The goal of the experiment is to evaluate the tracking performance of the proposed DMRAC controller on the system with uncertainties over an unknown domain of operation. The learning rates for the D-MRGeN network and the DMRAC-DNN network are chosen to be $\Gamma = 0.5I_{6 \times 6}$ and $\eta = 0.01$. The DNN is composed of 2 hidden layers with $200$ and $100$ neurons, tan-sigmoid activations, and an output layer with linear activation. We use ``Levenberg-Marquardt backpropagation'' \cite{yu2011levenberg} for updating the DNN weights over $100$ epochs. The tolerance threshold for the kernel independence test is selected to be $\zeta_{tol} = 0.2$ for updating the buffer $\mathcal{B}$. Figure-\ref{fig:plot_1} and Fig-\ref{fig:plot_2} show the closed-loop system performance in tracking the reference signal for the DMRAC controller, and the learning retention when it is used as a feed-forward network on a similar (circular) trajectory with no learning. We demonstrate that the proposed DMRAC controller, under uncertainty and without domain information, is successful in producing the desired reference tracking. Since DMRAC, unlike traditional MRAC, uses a DNN for uncertainty estimation, it is capable of retaining past learning and can therefore be used on tasks with similar features without active online adaptation, Fig-\ref{fig:plot_2}, whereas traditional MRAC is a ``pointwise in time" learning algorithm and cannot generalize across tasks. The presented controller achieves tighter tracking with smaller tracking error in both outer- and inner-loop states, as shown in Fig-\ref{fig:plot_2} and Fig-\ref{fig:plot_3}, both with adaptation and as a feed-forward adaptive network without adaptation. Figure-\ref{fig:plot_4} demonstrates the DNN learning performance vs epochs.
The training, testing, and validation errors over the data buffer for the DNN demonstrate the network's performance in learning a model of the system uncertainties and its generalization capability over unseen test data. \begin{figure*}[tbh] \centering \begin{subfigure}{0.9\columnwidth} \includegraphics[width=1.05\textwidth, height = 0.6\textwidth]{Fig/LearningGeneralization_traj_xy.jpg} \vspace{-0.2in} \caption{} \label{fig:plot_1} \end{subfigure} \begin{subfigure}{1.1\columnwidth} \includegraphics[width=\textwidth, height = 0.5\textwidth]{Fig/LearningGeneralization_InnerLoopStates} \vspace{-0.2in} \caption{} \label{fig:plot_2} \end{subfigure} \vspace{-0.12in} \caption{DMRAC Controller Evaluation on 6DOF Quadrotor dynamics model (a) DMRAC vs MRAC vs GP-MRAC Controllers on quadrotor trajectory tracking with active learning and DMRAC as frozen feed-forward network (Circular Trajectory) to test network generalization (b) Closed-loop system response in roll rate $\phi(t)$ and pitch $\theta(t)$} \label{fig:grid_world} \vspace{-4mm} \end{figure*} \begin{figure*}[tbh] \centering \begin{subfigure}{1.2\columnwidth} \includegraphics[width=\textwidth, height = 0.5\textwidth]{Fig/LearningGeneralization_OuterLoopStates} \vspace{-0.2in} \caption{} \label{fig:plot_3} \end{subfigure} \begin{subfigure}{0.80\columnwidth} \includegraphics[width=1.1\textwidth]{Fig/netTrain_perf} \vspace{-0.2in} \caption{} \label{fig:plot_4} \end{subfigure} \vspace{-0.12in} \caption{(a) Position Tracking performance of DMRAC vs MRAC vs GP-MRAC controller with active learning and Learning retention test over Circular Trajectory for DMRAC (b) DNN Training, Test and Validation performance.} \label{fig:tracking_perf} \vspace{-5mm} \end{figure*} \section{System Description} \label{system_description} This section discusses the formulation of model reference adaptive control (see e.g. \cite{ioannou1988theory}).
We consider the following system with uncertainty $\Delta(x)$: \begin{equation} \dot x(t) = Ax(t) + B(u(t) + \Delta(x)) \label{eq:0} \end{equation} where $x(t) \in \mathbb{R}^n$, $t \geqslant 0$ is the state vector, $u(t) \in \mathbb{R}^m$, $t \geqslant 0$ is the control input, and $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$ are known system matrices; we assume the pair $(A,B)$ is controllable. The term $\Delta(x) : \mathbb{R}^n \to \mathbb{R}^m$ is the matched system uncertainty and is assumed to be Lipschitz continuous in $x(t) \in \mathcal{D}_x$. Let $ \mathcal{D}_x \subset \mathbb{R}^n$ be a compact set; the control $u(t)$ is assumed to belong to a set of admissible control inputs of measurable and bounded functions, ensuring the existence and uniqueness of the solution to \eqref{eq:0}. The reference model is assumed to be linear, and therefore the desired transient and steady-state performance is defined by selecting the system eigenvalues in the open left half plane. The desired closed-loop response of the reference system is given by \begin{equation} \dot x_{rm}(t) = A_{rm}x_{rm}(t) + B_{rm}r(t) \label{eq:ref model} \end{equation} where $x_{rm}(t) \in \mathcal{D}_x \subset \mathbb{R}^{n}$, $A_{rm} \in \mathbb{R}^{n \times n}$ is Hurwitz and $B_{rm} \in \mathbb{R}^{n \times r}$. Furthermore, the command $r(t) \in \mathbb{R}^{r}$ denotes a bounded, piecewise continuous reference signal, and we assume the reference model (\ref{eq:ref model}) is bounded input-bounded output (BIBO) stable \cite{ioannou1988theory}. The true uncertainty $\Delta(x)$ is unknown, but it is assumed to be continuous over a compact domain $\mathcal{D}_x \subset \mathbb{R}^n$. Deep Neural Networks (DNNs) have been widely used to represent a function when the basis vector is not known.
Using DNNs, a nonlinearly parameterized network estimate of the uncertainty can be written as $\hat \Delta(x) \triangleq \theta_n^T\Phi(x)$, where $\theta_n \in \mathbb{R}^{k \times m}$ are the network weights for the final layer and $\Phi(x)=\phi_n(\theta_{n-1},\phi_{n-1}(\theta_{n-2},\phi_{n-2}(\ldots)))$ is a $k$-dimensional feature vector which is a function of the inner layer weights, activations and inputs. The basis vector $\Phi(x) \in \mathcal{F}: \mathbb{R}^{n} \to \mathbb{R}^{k}$ is considered to be Lipschitz continuous to ensure the existence and uniqueness of the solution to (\ref{eq:0}). \subsection{Total Controller} \label{adaptive_identification} The aim is to construct a feedback law $u(t)$, $t \geqslant 0$, such that the state of the uncertain dynamical system (\ref{eq:0}) asymptotically tracks the state of the reference model (\ref{eq:ref model}) despite the presence of matched uncertainty. A tracking control law consisting of a linear feedback term $u_{pd} = Kx(t)$, a linear feed-forward term $u_{crm} = K_rr(t)$ and an adaptive term $\nu_{ad}(t)$ forms the total controller \begin{equation} u = u_{pd} + u_{crm} - \nu_{ad} \label{eq:total_Controller} \end{equation} The baseline full-state feedback and feed-forward controller is designed to satisfy the matching conditions such that $A_{rm} = A-BK$ and $B_{rm} = BK_r$. For the adaptive controller, ideally we want $\nu_{ad}(t) = \Delta(x(t))$. Since we do not have the true uncertainty information, we use a DNN estimate of the system uncertainties in the controller, $\nu_{ad}(t) = \hat{\Delta}(x(t))$. \subsection{Deep Model Reference Generative Network (D-MRGeN) for uncertainty estimation} Unlike the traditional MRAC or SHL-MRAC weight update rule, where the weights are moved in the direction of diminishing tracking error, training a deep neural network is much more involved. Feed-forward networks like DNNs are trained in a supervised manner over a batch of i.i.d data.
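The matching conditions can be checked numerically; the sketch below uses a hypothetical double-integrator example with hand-picked gains (none of the numbers are from the paper).

```python
# Sketch checking the matching conditions A_rm = A - B K and B_rm = B K_r on a
# hypothetical double integrator with hand-picked gains (not from the paper).

A = [[0.0, 1.0], [0.0, 0.0]]
B = [[0.0], [1.0]]
K = [[4.0, 2.0]]     # state-feedback gain
K_r = [[4.0]]        # feed-forward gain

# A_rm = A - B K
A_rm = [[A[i][j] - B[i][0] * K[0][j] for j in range(2)] for i in range(2)]
# B_rm = B K_r
B_rm = [[B[i][0] * K_r[0][0]] for i in range(2)]

# For a 2x2 matrix, trace < 0 and det > 0 certify that A_rm is Hurwitz
trace = A_rm[0][0] + A_rm[1][1]
det = A_rm[0][0] * A_rm[1][1] - A_rm[0][1] * A_rm[1][0]
```

With these gains the baseline loop alone already realizes the reference dynamics, so the adaptive term $\nu_{ad}$ is needed only to cancel $\Delta(x)$.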
Deep learning optimization is based on Stochastic Gradient Descent (SGD) or its variants. The SGD update rule relies on a stochastic approximation of the expected value of the gradient of the loss function over a training set or mini-batches. To train a deep network to estimate the system uncertainties, unlike MRAC, we need i.i.d samples of labeled state-uncertainty pairs $\{x(t),\Delta(x(t))\}$. Since we do not have access to the true uncertainties ($\Delta(x)$), we use a generative network to generate estimates of $\Delta(x)$ to create the labeled targets for deep network training. For details of the generative network architecture in the adaptive controller, please see \cite{joshi2018adaptive}. This generative network is derived by separating the DNN into the inner feature layers and the final output layer of the network. We also separate the weight updates of these two parts of the DNN in time-scale. The temporally separated weight update algorithm for the DNN approximating the system uncertainty is presented in more detail in further sections. \subsection{Online Parameter Estimation law} \label{Identification} The last layer of the DNN, with learned features from the inner layers, forms the Deep Model Reference Generative Network (D-MRGeN). We use the MRAC learning rule to update, pointwise in time, the weights of the D-MRGeN in the direction of achieving asymptotic tracking of the reference model by the actual system. Since we use the D-MRGeN estimates to train the DNN model, we first study the admissibility and stability characteristics of the generative model estimate ${\Delta}'(x)$ in the controller (\ref{eq:total_Controller}).
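A scalar sketch of this pointwise-in-time update is given below: the MRAC-style law with the projection operator omitted, on an illustrative toy plant, and with the tracking error taken as $e = x - x_{rm}$ for this sign convention. All numbers are hypothetical.

```python
import math

# Scalar sketch of the pointwise-in-time D-MRGeN weight update (projection
# omitted):  W_dot = gamma * phi(x) * e * P,  with e = x - x_rm,
# on an illustrative toy plant x_dot = a x + b (u + Delta(x)) where
# Delta(x) = W_star * tanh(x). All numbers are hypothetical.

a, b = 1.0, 1.0
a_rm, b_rm = -2.0, 2.0        # Hurwitz reference model
K, K_r = 3.0, 2.0             # matching: a - b*K = a_rm, b*K_r = b_rm
W_star = 2.0                  # "true" last-layer weight, unknown to controller
P = 0.25                      # solves a_rm*P + P*a_rm = -Q with Q = 1
gamma, dt = 10.0, 0.001

x = x_rm = W = 0.0
r = 1.0
for _ in range(int(20.0 / dt)):
    phi = math.tanh(x)
    e = x - x_rm
    u = -K * x + K_r * r - W * phi        # total controller structure
    x += dt * (a * x + b * (u + W_star * phi))
    x_rm += dt * (a_rm * x_rm + b_rm * r)
    W += dt * (gamma * phi * e * P)       # adaptive law, projection omitted
```

In this scalar setting the tracking error decays and the generative weight approaches the true value, illustrating why the D-MRGeN estimates are usable as training targets for the deep network.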
To achieve the asymptotic convergence of the reference model tracking error to zero, we use the D-MRGeN estimate in the controller \eqref{eq:total_Controller} as $\nu_{ad} = \Delta'(x)$ \begin{equation} \nu_{ad}(t) = W^T\phi_n(\theta_{n-1},\phi_{n-1}(\theta_{n-2},\phi_{n-2}(\ldots))) \end{equation} To differentiate the weights of the D-MRGeN from the last layer weights of the DNN ``$\theta_n$", we denote the D-MRGeN weights as ``$W$". \textbf{\emph{Assumption 1:}} Appealing to the universal approximation property of neural networks \cite{park1991universal} we have that, for every given basis function $\Phi(x) \in \mathcal{F}$, there exist unique ideal weights $W^* \in \mathbb{R}^{k \times m}$ and $\epsilon_1(x) \in \mathbb{R}^{m}$ such that the following approximation holds \begin{equation} \Delta(x) = W^{*T}\Phi(x) + \epsilon_1(x), \hspace{2mm} \forall x(t) \in \mathcal{D}_x \subset \mathbb{R}^{n} \label{eq:3} \end{equation} \textbf{\emph{Fact 1:}} The network approximation error $\epsilon_1(x)$ is upper bounded, s.t $\bar{\epsilon}_1 = \sup_{x \in \mathcal{D}_x}\|\epsilon_1(x)\|$, and can be made arbitrarily small given a sufficiently large number of basis functions. The reference model tracking error is defined as $e(t) = x_{rm}(t)- x(t)$. Using (\ref{eq:0}) \& (\ref{eq:ref model}) and the controller of the form (\ref{eq:total_Controller}) with adaptation term $\nu_{ad}$, the tracking error dynamics can be written as \begin{equation} \dot e(t) = \dot x_{rm}(t) - \dot{x}(t) \label{eq:13} \end{equation} \begin{equation} \dot e(t) = A_{rm}e(t) + \tilde W^T\Phi(x) + \epsilon_1(x) \label{eq:14} \end{equation} where $\tilde W = W^*-W$ is the error in the parameters. The estimates of the unknown true network parameters $W^*$ are calculated on-line using the weight update rule (\ref{eq:18}), correcting the weight estimates in the direction of minimizing the instantaneous tracking error $e(t)$.
The resulting update rule for the network weights in estimating the total uncertainty in the system is \begin{equation} \dot {W} = \Gamma proj(W,\Phi(x)e(t)'P) \hspace{5mm} {W}(0) = {W}_0 \label{eq:18} \end{equation} where $\Gamma \in \mathbb{R}^{k \times k}$ is the learning rate and $P \in \mathbb{R}^{n \times n}$ is a positive definite matrix. For a given Hurwitz $A_{rm}$, the matrix $P \in \mathbb{R}^{n \times n}$ is the positive definite solution of the Lyapunov equation $A_{rm}^TP + PA_{rm} + Q = 0$ for a given $Q > 0$. \textbf{\emph{Assumption 2:}} For uncertainty parameterized by the unknown true weights ${W}^* \in \mathbb{R}^{k \times m}$ and known nonlinear basis $\Phi(x)$, the ideal weight matrix is assumed to be upper bounded s.t. $\|{W}^*\| \leq \mathcal{W}_b$. This is not a restrictive assumption. \subsubsection{Lyapunov Analysis} The on-line adaptive identification law (\ref{eq:18}) guarantees asymptotic convergence of the tracking error $e(t)$ and the parameter error $\tilde W(t)$ under the condition of persistency of excitation \cite{aastrom2013adaptive,ioannou1988theory} for structured uncertainty. Similar to the results by Lewis for SHL networks \cite{lewis1999nonlinear}, we show here that under the assumption of unstructured uncertainty represented by a deep neural network, the tracking error is uniformly ultimately bounded (UUB). We prove the following theorem under a switching feature vector assumption. \begin{theorem} Consider the actual and reference plant models (\ref{eq:0}) \& (\ref{eq:ref model}). If the weights parameterizing the total uncertainty in the system are updated according to the identification law \eqref{eq:18}, then the tracking error $\|e\|$ and the error in network weights $\|\tilde W\|$ are bounded for all $\Phi \in \mathcal{F}$. \label{Theorem-1} \end{theorem} \begin{proof} The feature vectors belong to a function class characterized by the inner-layer network weights $\theta_i$, s.t. $\Phi \in \mathcal{F}$.
We prove Lyapunov stability under the assumption that the inner layers of the DNN provide a feature that results in the worst possible approximation error relative to the network with the features before the switch. For the purpose of this proof, let $\Phi(x)$ denote the feature before the switch and $\bar{\Phi}(x)$ the feature after the switch. We define the error $\epsilon_2(x)$ as \begin{equation} \epsilon_2(x) = \sup_{\bar{\Phi} \in \mathcal{F}}\left|W^T\bar{\Phi}(x) - W^T\Phi(x)\right| \end{equation} Similar to \textbf{\emph{Fact-1}}, we can upper bound the error $\epsilon_2(x)$ as $\bar{\epsilon}_2 = \sup_{x \in \mathcal{D}_x}\|\epsilon_2(x)\|$. By adding and subtracting the term $W^T\bar{\Phi}(x)$, we can rewrite the error dynamics \eqref{eq:14} with the switched basis as \begin{eqnarray} \dot e(t) &=& A_{rm}e(t) + W^{*T}\Phi(x) - W^T\Phi(x) \nonumber \\ && + W^T\bar{\Phi}(x) - W^T\bar{\Phi}(x) + \epsilon_1(x) \label{eq:14_1} \end{eqnarray} From \textbf{\emph{Assumption-1}} we know there exists a $W^*$ $\forall \Phi \in \mathcal{F}$.
Therefore we can replace $W^{*T}\Phi(x)$ by $W^{*T}\bar{\Phi}(x)$ and rewrite Eq-\eqref{eq:14_1} as \begin{eqnarray} \dot e(t) &=& A_{rm}e(t) + \tilde{W}^{T}\bar{\Phi}(x) + W^T(\bar{\Phi}(x) - \Phi(x)) + \epsilon_1(x) \nonumber \\ \label{eq:14_2} \end{eqnarray} For arbitrary switching, for any $\bar{\Phi}(x) \in \mathcal{F}$, we can prove boundedness by considering the worst possible approximation error, and therefore write \begin{eqnarray} \dot e(t) &=& A_{rm}e(t) + \tilde{W}^{T}\bar{\Phi}(x) + \epsilon_2(x) + \epsilon_1(x) \label{eq:14_3} \end{eqnarray} Now let $V(e,\tilde W) > 0$ be a differentiable, positive definite, radially unbounded Lyapunov candidate function, \begin{equation} V(e,\tilde W) = e^TPe + \frac{\tilde W^T \Gamma^{-1} \tilde W}{2} \label{eq:20} \end{equation} The time derivative of the Lyapunov function (\ref{eq:20}) along the trajectory (\ref{eq:14_3}) can be evaluated as \begin{equation} \dot V(e,\tilde W) = \dot e^TPe + e^TP \dot e - \tilde W^T\Gamma^{-1}\dot{\hat W} \label{eq:25} \end{equation} Using (\ref{eq:14_3}) \& (\ref{eq:18}) in (\ref{eq:25}), the time derivative of the Lyapunov function reduces to \begin{eqnarray} \dot V(e,\tilde W) &=& -e^TQe + 2e^TP\epsilon(x) \label{eq:22} \end{eqnarray} where $\epsilon(x) = \epsilon_1(x) +\epsilon_2(x)$ and $\bar{\epsilon} = \bar{\epsilon_1} + \bar{\epsilon_2}$.\\ Hence $\dot V(e,\tilde W) \leq 0$ outside a compact neighborhood of the origin $e = 0$, for sufficiently large $\lambda_{min}(Q)$: \begin{equation} \|e(t)\| \geq \frac{2\lambda_{max}(P)\bar\epsilon}{\lambda_{min}(Q)} \label{eq:error_bound} \end{equation} Using the BIBO assumption, $x_{rm}(t)$ is bounded for a bounded reference signal $r(t)$; thereby $x(t)$ remains bounded. Since $V(e,\tilde W)$ is radially unbounded, the result holds for all $x(0) \in \mathcal{D}_x$.
Using the fact that the parameter error $\tilde{W}$ is bounded through the projection operator \cite{larchev2010projection}, and further using Lyapunov theory and Barbalat's Lemma \cite{narendra2012stable}, we can show that $e(t)$ is uniformly ultimately bounded in the vicinity of the zero solution. \end{proof} From Theorem-\ref{Theorem-1} \& (\ref{eq:14}), and using system theory \cite{kailath1980linear}, we can infer that as $e(t) \to 0$, ${\Delta}'(x) \to \Delta(x)$ in a pointwise sense. Hence the D-MRGeN estimates $y_{\tau} = {\Delta}'(x_{\tau})$ are admissible target values for training the DNN features over the data $Z^M = \{\{x_{\tau},y_{\tau}\}\}_{\tau = 1}^{M}$. The details of DNN training and the implementation of the DMRAC controller are presented in the following section. \section{Adaptive Control using Deep nets (DMRAC)} \label{section_DMRAC} The DNN architecture for MRAC is trained in two steps. We separate the DNN into two networks, as shown in Fig-\ref{DNN_architecture}: a faster-learning outer adaptive network and a slower deep feature network. DMRAC learns a deep feature representation of the system uncertainty using locally exciting uncertainty estimates obtained from the generative network. Between successive updates of the inner-layer weights, the feature provided by the inner layers of the deep network is used as the fixed feature vector for the outer-layer adaptive network update and evaluation. The algorithm for DNN learning and the DMRAC controller is provided in Algorithm-\ref{alg:DMRAC}. Through this architecture mixing two time-scale learning, we fuse the benefits of DNN memory, through the retention of relevant exciting features, with the robustness and boundedness guarantees of reference tracking. This key feature of the presented framework ensures robustness while guaranteeing long-term learning and memory in the adaptive network.
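The pointwise D-MRGeN update (\ref{eq:18}) that generates these training targets can be sketched numerically. This is a hedged sketch: the dimensions, the fixed feature map, and the norm-clipping stand-in for the projection operator are illustrative choices, not the paper's implementation:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Reference dynamics A_rm (Hurwitz) and Lyapunov solution P of
# A_rm^T P + P A_rm + Q = 0, as required by update law (18).
A_rm = np.array([[0.0, 1.0], [-4.0, -2.0]])
Q = np.eye(2)
P = solve_continuous_lyapunov(A_rm.T, -Q)   # symmetric positive definite

k, n = 3, 2
Gamma = 0.5 * np.eye(k)        # learning-rate matrix
W_bound = 10.0                 # assumed bound for the projection (Assumption 2)
W = np.zeros((k, n))           # D-MRGeN weights (here m = n for simplicity)

def phi(x):
    """Illustrative fixed feature vector Phi(x)."""
    return np.tanh(np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]) @ x)

def step(W, x, e, dt=0.01):
    """One Euler step of W_dot = Gamma * proj(W, Phi(x) e' P)."""
    W_dot = Gamma @ np.outer(phi(x), e @ P)
    W_new = W + dt * W_dot
    nrm = np.linalg.norm(W_new)
    if nrm > W_bound:          # crude norm clipping standing in for proj(.)
        W_new *= W_bound / nrm
    return W_new

W = step(W, x=np.array([0.2, -0.1]), e=np.array([0.05, 0.0]))
```

Running `step` at every measurement, between batch refreshes of the inner-layer weights, is the fast time scale of the dual-rate scheme described above.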
Also, as indicated in the controller architecture Fig-\ref{DNN_architecture}, we can use contextual states `$c_i$' other than the system state $x(t)$ to extract relevant features. These contextual states could be relevant model information not captured in the system states; for example, for an aircraft, vehicle parameters such as the pitot tube measurement, the angle of attack, engine thrust, and so on. Such contextual states can yield features that help in decision making in case of faults. DMRAC with contextual states will be dealt with in follow-on work. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{Fig/DNN} \caption{DMRAC training and controller details} \label{DNN_architecture} \vspace{-5mm} \end{figure} The DNN in the DMRAC controller is trained over the training dataset $Z^M = \{x_i, {\Delta}'(x_i)\}_{i=1}^M$, where the ${\Delta}'(x_i)$ are D-MRGeN estimates of the uncertainty. The training dataset $Z^M$ is randomly drawn from a larger data buffer $\mathcal{B}$. Not every pair of data $\{x_i, {\Delta}'(x_i)\}$ from the D-MRGeN is added to the training buffer $\mathcal{B}$. We qualify each input-target pair with a kernel independence test to ensure that we collect locally exciting, independent information which provides a sufficiently rich representation of the operating domain. Since the state-uncertainty data is the realization of a Markov process, such a method for qualifying data as sufficiently independent of previous data points is necessary. The algorithm details for qualifying and adding a data point to the buffer are provided in subsection \ref{subsection-buffer}. \subsection{Details of Deep Feature Training using D-MRGeN} This section provides the details of the DNN training over data samples observed over the n-dimensional input subspace $x(t) \in \mathcal{X} \in \mathbb{R}^{n}$ and the m-dimensional target subspace $y\in\mathcal{Y} \in \mathbb{R}^m$.
The sample set is denoted as $\mathcal{Z}$, where $\mathcal{Z} \in \mathcal{X} \times \mathcal{Y}$. We are interested in function approximation tasks for the DNN. The function $f_{\boldsymbol{\theta}}$ is the learned approximation to the model uncertainty with parameters $\boldsymbol{\theta} \in \boldsymbol{\Theta}$, where $\boldsymbol{\Theta}$ is the space of parameters, i.e. $f_{\boldsymbol{\theta}}: \mathbb{R}^n \to \mathbb{R}^m$. We assume a training data buffer $\mathcal{B}$ has $p_{max}$ training examples, such that the set $Z^{p_{max}} = \{Z_i | Z_i \in \mathcal{Z}\}_{i=1}^{p_{max}} = \{(x_i, y_i) \in \mathcal{X}\times\mathcal{Y}\}_{i=1}^{p_{max}}$. The samples are independently drawn from the buffer $\mathcal{B}$ over probability distribution $P$. The hypothesis set, which consists of all possible functions $f_{\boldsymbol{\theta}}$, is denoted as $\mathcal{H}$. Therefore a learning algorithm $\mathcal{A}$ (in our case SGD) is a mapping $\mathcal{A}: \mathcal{Z}^{p_{max}} \to \mathcal{H}$. The loss function, which measures the discrepancy between the true target $y$ and the algorithm's estimate $f_{\boldsymbol{\theta}}(x)$, is denoted by $L(y, f_{\boldsymbol{\theta}}(x))$. Specific to the work presented in this paper, we use the $\ell_2$-norm between values, i.e. $\mathbb{E}_p(\ell(y, f_{\boldsymbol{\theta}}(x))) = \mathbb{E}_P \left(\|y_i - f_{\boldsymbol{\theta}}(x_i)\|_2\right)$, as the loss function for DNN training. The empirical loss \eqref{empirical_loss} is used to approximate the loss function since the distribution $P$ is unknown to the learning algorithm. The weights are updated using SGD in the direction of the negative gradient of the loss function, as given in \eqref{SGD2}. Unlike conventional DNN training, where the true target values $y \in \mathcal{Y}$ are available for every input $x \in \mathcal{X}$, in DMRAC the true system uncertainties are not available as labeled targets for network training.
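The empirical-loss and mini-batch SGD machinery can be illustrated on a toy linear-in-parameter model; the real DMRAC adaptive element is a multi-layer DNN, so this hedged sketch (illustrative data, learning rate, and batch size) only shows the update mechanics:

```python
import numpy as np

# Toy regression standing in for f_theta: targets come from a known weight
# vector plus small noise, mimicking D-MRGeN-generated labels.
rng = np.random.default_rng(1)
X = rng.standard_normal((64, 3))                  # buffered inputs x_i
w_true = np.array([1.0, -2.0, 0.5])
Y = X @ w_true + 0.01 * rng.standard_normal(64)   # labeled targets y_i

theta = np.zeros(3)
eta = 0.1                                         # learning rate

def empirical_loss(theta, Xb, Yb):
    """Mean l2 discrepancy between targets and model outputs."""
    return np.mean(np.abs(Yb - Xb @ theta))

for _ in range(200):
    idx = rng.choice(len(X), size=16, replace=False)   # draw mini-batch Z^M
    Xb, Yb = X[idx], Y[idx]
    # gradient of the mean squared l2 loss over the mini-batch (used here
    # for a differentiable update; the paper states the l2-norm loss)
    grad = -2.0 * Xb.T @ (Yb - Xb @ theta) / len(idx)
    theta -= eta * grad                                # SGD step \eqref{SGD2}
```

After a few hundred mini-batch steps the empirical loss over the whole buffer settles near the label-noise floor.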
We use part of the network itself (the last layer), with weights updated pointwise in time according to the MRAC rule, as the generative model for the data. The D-MRGeN uncertainty estimates $y = {W}^T\Phi(x,\theta_1,\theta_2, \ldots \theta_{n-1})= {\Delta}'(x)$, along with the inputs $x_i$, make up the training data set $Z^{p_{max}} = \{x_i, {\Delta}'(x_i)\}_{i=1}^{p_{max}}$. Note that we use $x_i$ and $x(t)$ interchangeably as the discrete representation of the continuous state vector for DNN training. The main purpose of the DNN in the adaptive network is to extract relevant features of the system uncertainties, which are otherwise very tedious to obtain without limits on the domain of operation. We also demonstrate empirically that the DNN features trained over past i.i.d representative data retain the memory of past instances and can be used as a frozen feed-forward network over similar reference tracking tasks without loss of the guaranteed tracking performance. \subsection{Method for Recording Data using MRGeN for DNN Training} \label{subsection-buffer} In statistical inference, implicitly or explicitly, one always assumes that the training set $Z^M = \{x_i, y_i\}_{i=1}^M$ is composed of $M$ input-target tuples that are independently drawn from the buffer $\mathcal{B}$ over the same joint distribution $P(x,y)$. The i.i.d assumption on the data is required for robustness and consistency of the network training, and for bounds on the generalization error \cite{xu2012robustness, vandegeer2009}. In classical generalization proofs, one such condition is that $\frac{1}{p_{max}}\mathbb{X}^T\mathbb{X} \to \gamma$ as ${p_{max}} \to \infty$, where $\mathbb{X}$ denotes the design matrix with rows $\Phi_i^T$. The i.i.d assumption implies the above condition is fulfilled and hence is a sufficient, but not necessary, condition for consistency and error bounds for generative modeling. The key capability brought about by DMRAC is relevant feature extraction from the data.
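The recording scheme of this subsection can be sketched in a few lines. This is a hedged illustration under two assumptions not fixed by the text: the independence distance is taken as a minimum over buffered feature vectors, and the cited SVD-maximization eviction is taken to maximize the smallest singular value of the stacked feature matrix; all names and dimensions are illustrative:

```python
import numpy as np

zeta_tol = 0.2   # independence threshold
p_max = 5        # buffer budget

def gamma(phi_new, Phi_buf):
    """min_p ||phi_new - phi_p||^2 / ||phi_new|| over buffered features."""
    if not Phi_buf:
        return np.inf
    d = min(np.linalg.norm(phi_new - p) ** 2 for p in Phi_buf)
    return d / np.linalg.norm(phi_new)

def try_record(phi_new, Phi_buf):
    """Add phi_new if sufficiently independent; evict to keep sigma_min large."""
    if gamma(phi_new, Phi_buf) < zeta_tol:
        return Phi_buf                      # not sufficiently new: discard
    Phi_buf = Phi_buf + [phi_new]
    if len(Phi_buf) > p_max:
        def score(i):                       # sigma_min after removing point i
            X = np.stack(Phi_buf[:i] + Phi_buf[i + 1:])
            return np.linalg.svd(X, compute_uv=False)[-1]
        worst = max(range(len(Phi_buf)), key=score)
        Phi_buf = Phi_buf[:worst] + Phi_buf[worst + 1:]
    return Phi_buf

rng = np.random.default_rng(2)
buf = []
for _ in range(20):
    buf = try_record(np.tanh(rng.standard_normal(3)), buf)
print(len(buf))   # never exceeds p_max
```

The same qualification test then gates which $\{x_i, \Delta'(x_i)\}$ pairs enter the training buffer $\mathcal{B}$.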
Feature extraction in the DNN is achieved by using recorded data concurrently with current data. The recorded data include the state $x_i$, the feature vector $\Phi(x_i)$, and the associated D-MRGeN estimate of the uncertainty ${\Delta}'(x_i)$. For a given $\zeta_{tol} \in \mathbb{R}_+$, a simple way to select the instantaneous data point $\{x_i, \Delta'(x_i)\}$ for recording is to require it to satisfy the following condition \begin{equation} \gamma_i = \frac{\|\Phi(x_i) - \Phi_p\|^2}{\|\Phi(x_i)\|} \geq \zeta_{tol} \label{eq:kernel_test} \end{equation} where the index $p$ runs over the data points in the buffer $\mathcal{B}$. The above method ensures that only those data points are selected for recording that are sufficiently different from all previously recorded data points in the buffer. Since the buffer $\mathcal{B}$ is of finite dimension, the data is stored in a cyclic manner. As the number of data points reaches the buffer budget, a new data point is added only after an existing data point is removed such that the singular value of the buffer is maximized. The singular value maximization approach for the training data buffer update is provided in \cite{5991481}. \begin{algorithm}[h!] \caption{D-MRAC Controller Training} \label{alg:DMRAC} \begin{algorithmic}[1] \STATE {\bfseries Input:} $\Gamma, \eta, \zeta_{tol}, p_{max}$ \WHILE{New measurements are available} \STATE Update the D-MRGeN weights $W$ using Eq:\eqref{eq:18} \STATE Compute $y_{\tau+1} = \hat{W}^T\Phi(x_{\tau+1})$ \STATE Given $x_{\tau+1}$ compute $\gamma_{\tau+1}$ by Eq-\eqref{eq:kernel_test}.
\IF{$\gamma_{\tau+1} \geqslant \zeta_{tol}$} \STATE Update $\mathcal{B}:\boldsymbol{Z}(:) = \{x_{\tau+1}, y_{\tau+1}\}$ and $\mathbb{X}: \Phi(x_{\tau+1})$ \IF{$|\mathcal{B}| > p_{max}$} \STATE Delete element in $\mathcal{B}$ by SVD maximization \cite{5991481} \ENDIF \ENDIF \IF{$|\mathcal{B}| \geq M$} \STATE Sample a mini-batch of data $\boldsymbol{Z}^M \subset \mathcal{B}$ \STATE Train the DNN network over mini-batch data using Eq-\eqref{SGD2} \STATE Update the feature vector $\Phi$ for D-MRGeN network \ENDIF \ENDWHILE \end{algorithmic} \end{algorithm} \section{Introduction} Deep Neural Networks (DNNs) have lately shown tremendous empirical performance in many applications and various fields such as computer vision, speech recognition, translation, natural language processing, robotics, autonomous driving, and many more \cite{goodfellow2016deep}. Unlike their counterparts such as shallow networks with Radial Basis Function features \cite{sanner:TNN:92,liu2018gaussian}, deep networks learn features by learning the weights of nonlinear compositions of weighted features arranged in a directed acyclic graph \cite{2013arXiv1301.3605Y}. It is now clear that deep neural networks are outshining other classical machine-learning techniques \cite{hinton2012deep}. Leveraging these successes, there have been many exciting new claims regarding the control of complex dynamical systems in simulation using deep reinforcement learning \cite{mnih2015human}. However, Deep Reinforcement Learning (D-RL) methods typically do not guarantee stability or even boundedness of the system during the learning transient. Hence, despite significant simulation success, D-RL has seldom been used in safety-critical applications. D-RL methods often make the ergodicity assumption, requiring that there is a nonzero probability of the system states returning to the origin. In practice, such a condition is typically enforced by resetting the simulation when a failure occurs.
Unfortunately, real-world systems do not have this reset option. Unlike D-RL, much effort has been devoted in the field of adaptive control to ensuring that the system stays stable during learning. Model Reference Adaptive Control (MRAC) is one such leading method for adaptive control that seeks to learn a high-performance control policy in the presence of significant model uncertainties \cite{ioannou1988theory,tao2003adaptive, Pomet_92_TAC}. The key idea in MRAC is to find an update law for a parametric model of the uncertainty that ensures that a candidate Lyapunov function is non-increasing. Many update laws have been proposed and analyzed, which include, but are not limited to, $\sigma$-modification \cite{ioannou1996robust}, $e$-modification \cite{annaswamy_CDC_89}, and projection-based updates \cite{Pomet_92_TAC}. More modern laws extending the classical parametric setting, including $\ell_1$-adaptive control \cite{hovakimyan2010ℒ1} and concurrent learning \cite{chowdhary2013concurrent}, have also been studied. A more recent approach introduced by the authors is Gaussian Process Model Reference Adaptive Control (GP-MRAC), which utilizes a GP as a model of the uncertainty. A GP is a Bayesian nonparametric adaptive element that can adapt both its weights and the structure of the model in response to the data. The authors and others have shown that GP-MRAC has strong long-term learning properties as well as high control performance \cite{chowdhary2015bayesian, joshi2018adaptive}. However, GPs are ``shallow'' machine learning models and do not utilize the power of learning complex features through compositions as deep networks do (see \ref{sec:feature}). Hence, one wonders whether the power of deep learning could lead to even more powerful learning-based MRAC architectures than those utilizing GPs. In this paper, we address this critical question: How can MRAC utilize deep networks while guaranteeing stability?
Towards that goal, our contributions are as follows: a) we develop an MRAC architecture that utilizes DNNs as the adaptive element; b) we propose an algorithm for the online update of the weights of the DNN utilizing a dual time-scale adaptation scheme, in which the weights of the outermost layer are adapted in real time while the weights of the inner layers are adapted using batch updates; c) we develop theory to guarantee Uniform Ultimate Boundedness (UUB) of the entire DMRAC controller; d) we demonstrate through simulation results that this architecture has desirable long-term learning properties. We demonstrate how DNNs can be utilized in stable learning schemes for adaptive control of safety-critical systems. This provides an alternative to deep reinforcement learning for adaptive control applications requiring stability guarantees. Furthermore, the dual time-scale analysis scheme used here should be generalizable to other DNN-based learning architectures, including reinforcement learning. \section{Background} \subsection{Deep Networks and Feature spaces in machine learning}\label{sec:feature} The key idea in machine learning is that a given function can be encoded with weighted combinations of a \textit{feature} vector $\Phi \in \mathcal{F}$, s.t. $\Phi(x)=[\phi_1(x), \phi_2(x),...,\phi_k(x)]^T\in \mathbb{R}^k$, and $W^*\in\mathbb{R}^{k \times m}$ a matrix of `ideal' weights s.t. $\|y(x)-W^{*^T}\Phi(x)\|_\infty<\epsilon(x)$. Instead of hand-picking features, or relying on polynomials, Fourier basis functions, comparison-type features used in support vector machines \cite{schoelkofp:01,scholkopf2002learning}, or Gaussian Processes \cite{rasmussen2006gaussian}, DNNs utilize composite functions of features arranged in a directed acyclic graph, i.e. $\Phi(x)=\phi_n(\theta_{n-1},\phi_{n-1}(\theta_{n-2},\phi_{n-2}(...))))$ where $\theta_i$'s are the layer weights.
The universal approximation property of the DNN with commonly used feature functions such as sigmoidal, tanh, and RELU was proved in the work by Hornik \cite{Hornik:NN89} and shown empirically to hold by recent results \cite{2016arXiv160300988M,Poggio2017,2016arXiv161103530Z}. Hornik et al. argued that a network with at least one hidden layer (also called a Single Hidden Layer (SHL) network) is a universal approximator. However, empirical results show that networks with more hidden layers show better generalization capability in approximating complex functions. While the theoretical reasons behind the better generalization ability of DNNs are still being investigated \cite{2016Matus}, for the purpose of this paper we will assume that it is indeed true, and focus our efforts on designing a practical and stable control scheme using DNNs. \subsection{Neuro-adaptive control} Neural networks in adaptive control have been studied for a very long time. The seminal paper by Lewis \cite{Lewis:AJC:99} utilized Taylor series approximations to demonstrate uniform ultimate boundedness with a single hidden layer neural network. SHL networks are nonlinear in the parameters; hence, the analysis previously introduced for linear-in-parameter radial basis function neural networks by Sanner and Slotine does not directly apply \cite{sanner:TNN:92}. The back-propagation-type scheme with a non-increasing Lyapunov candidate as a constraint, introduced in Lewis' work, has been widely used in neuro-adaptive MRAC. Concurrent Learning MRAC (CL-MRAC) is a method for learning-based neuro-adaptive control developed by the author to improve the learning properties and provide exponential tracking and weight error convergence guarantees. However, similar guarantees have not been available for SHL networks.
There has been much work towards including deeper neural networks in control; however, strong guarantees like those in MRAC on closed-loop stability during online learning are not available. In this paper, we propose a dual time-scale learning approach which ensures such guarantees. Our approach should be generalizable to other applications of deep neural networks, including policy gradient Reinforcement Learning (RL) \cite{sutton1992reinforcement}, which is very close to adaptive control in its formulation, and also to more recent work in RL for control \cite{modares2014integral}. \subsection{Stochastic Gradient Descent and Batch Training} We consider a deep network model with parameters $\boldsymbol{\theta}$, and consider the problem of optimizing a non-convex loss function $L(\boldsymbol{Z, \theta})$ with respect to $\boldsymbol{\theta}$. Let $L(\boldsymbol{Z, \theta})$ be defined as the average loss over $M$ training sample data points, \begin{equation} L(\boldsymbol{Z, \theta}) = \frac{1}{M}\sum_{i=1}^M \ell(\boldsymbol{Z_i, \theta}) \label{empirical_loss} \end{equation} where $M$ denotes the size of the sample training set. For each sample size $M$, the training data are in the form of an $M$-tuple $Z^M = (Z_1, Z_2, \dots Z_M)$ of $Z$-valued random variables drawn according to some unknown distribution $P \in \mathcal{P}$, where each $Z_i = \{x_i, y_i\}$ is a labelled pair of input and target values. For each $P$ the expected loss can be computed as $\boldsymbol{E}_p(\ell(\boldsymbol{Z,\theta}))$. The empirical loss \eqref{empirical_loss} is used as a proxy for the expected value of the loss with respect to the true data-generating distribution. Optimization based on the Stochastic Gradient Descent (SGD) algorithm uses a stochastic approximation of the gradient of the loss $L(\boldsymbol{Z, \theta})$ obtained over a mini-batch of $M$ training examples drawn from buffer $\mathcal{B}$.
The resulting SGD weight update rule is \begin{eqnarray} \boldsymbol{\theta}_{k+1} &=& \boldsymbol{\theta}_k - \eta \frac{1}{M}\sum_{i=1}^M \nabla_{ \boldsymbol{\theta}}\ell(\boldsymbol{Z_i, \theta}_k) \label{SGD2} \end{eqnarray} where $\eta$ is the learning rate. Further details on generating i.i.d samples for DNN learning and the training details of the network are provided in Section \ref{section_DMRAC}. \section{Simulations} \label{results} In this section, we evaluate the presented DMRAC adaptive controller using a 6-DOF quadrotor model for the reference trajectory tracking problem. The quadrotor model is completely described by 12 states: three positions and velocities in the North-East-Down reference frame, and three body angles and angular velocities. The full description of the dynamic behavior of a quadrotor is beyond the scope of this paper; interested readers can refer to \cite{joshi2017robust} and references therein. The control law treats the moments and forces on the vehicle due to the unknown true inertia/mass of the vehicle, and the moments due to aerodynamic forces of the crosswind, as unmodeled uncertainty terms, which are captured online through the DNN adaptive element. The outer-loop control of the quadrotor is achieved through a Dynamic Inversion (DI) controller, and we use DMRAC for the inner-loop attitude control. A simple wind model with a boundary layer effect is used to simulate the effect of crosswind on the vehicle. A second-order reference model with natural frequency $4~\mathrm{rad/s}$ and damping ratio of $0.5$ is used. Further stochasticity is added to the system through Gaussian white noise on the states with a variance of $\omega_n = 0.01$. The simulation runs for $150\,$s with a time step of $0.05\,$s. The maximum number of points ($p_{max}$) to be stored in the buffer $\mathcal{B}$ is arbitrarily set to $250$, and the SVD maximization algorithm is used to cyclically update $\mathcal{B}$ when the budget is reached; for details refer to \cite{5991481}.
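The reference-model settings quoted above (natural frequency 4 rad/s, damping ratio 0.5, 0.05 s time step) can be reproduced in a minimal single-axis sketch; this illustrates only the reference dynamics, not the full 12-state quadrotor:

```python
import numpy as np

# Second-order reference model x_rm_dot = A_rm x_rm + B_rm r with the
# paper's stated parameters, integrated by forward Euler at dt = 0.05 s.
wn, zeta, dt = 4.0, 0.5, 0.05
A_rm = np.array([[0.0,      1.0],
                 [-wn**2, -2.0 * zeta * wn]])
B_rm = np.array([0.0, wn**2])        # unity DC gain to the commanded state

x_rm = np.zeros(2)                   # [position-like state, its rate]
r = 1.0                              # unit step reference command
for _ in range(int(5.0 / dt)):       # 5 s of simulated time
    x_rm = x_rm + dt * (A_rm @ x_rm + B_rm * r)

print(x_rm[0])   # settles near the commanded value of 1
```

With $\zeta\omega_n = 2$ the transient dies out in roughly two seconds, so after five seconds the state has converged to the command.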
The controller is designed to track stable reference commands $r(t)$. The goal of the experiment is to evaluate the tracking performance of the proposed DMRAC controller on the system with uncertainties over an unknown domain of operation. The learning rates for the D-MRGeN network and the DMRAC-DNN network are chosen to be $\Gamma = 0.5I_{6 \times 6}$ and $\eta = 0.01$. The DNN is composed of 2 hidden layers with $200,100$ neurons with tan-sigmoid activations, and an output layer with linear activation. We use ``Levenberg-Marquardt backpropagation'' \cite{yu2011levenberg} for updating the DNN weights over $100$ epochs. The tolerance threshold for the kernel independence test is selected to be $\zeta_{tol} = 0.2$ for updating the buffer $\mathcal{B}$. Figure-\ref{fig:plot_1} and Fig-\ref{fig:plot_2} show the closed-loop system performance in tracking the reference signal for the DMRAC controller, and the learning retention when used as a feed-forward network on a similar (circular) trajectory with no learning. We demonstrate that the proposed DMRAC controller, under uncertainty and without domain information, is successful in producing the desired reference tracking. Since DMRAC, unlike traditional MRAC, uses a DNN for uncertainty estimation, it is capable of retaining past learning and can thereby be used in tasks with similar features without active online adaptation, Fig-\ref{fig:plot_2}. Traditional MRAC, in contrast, is a ``pointwise in time" learning algorithm and cannot generalize across tasks. The presented controller achieves tighter tracking with smaller tracking error in both outer- and inner-loop states, as shown in Fig-\ref{fig:plot_2} and Fig-\ref{fig:plot_3}, both with adaptation and as a feed-forward adaptive network without adaptation. Figure-\ref{fig:plot_4} demonstrates the DNN learning performance vs epochs.
The training, testing, and validation errors over the data buffer for the DNN demonstrate the network performance in learning a model of the system uncertainties and its generalization capabilities over unseen test data. \begin{figure*}[tbh] \centering \begin{subfigure}{0.9\columnwidth} \includegraphics[width=1.05\textwidth, height = 0.6\textwidth]{Fig/LearningGeneralization_traj_xy.jpg} \vspace{-0.2in} \caption{} \label{fig:plot_1} \end{subfigure} \begin{subfigure}{1.1\columnwidth} \includegraphics[width=\textwidth, height = 0.5\textwidth]{Fig/LearningGeneralization_InnerLoopStates} \vspace{-0.2in} \caption{} \label{fig:plot_2} \end{subfigure} \vspace{-0.12in} \caption{DMRAC Controller Evaluation on 6DOF Quadrotor dynamics model (a) DMRAC vs MRAC vs GP-MRAC Controllers on quadrotor trajectory tracking with active learning and DMRAC as frozen feed-forward network (Circular Trajectory) to test network generalization (b) Closed-loop system response in roll rate $\phi(t)$ and Pitch $\theta(t)$} \label{fig:grid_world} \vspace{-4mm} \end{figure*} \begin{figure*}[tbh] \centering \begin{subfigure}{1.2\columnwidth} \includegraphics[width=\textwidth, height = 0.5\textwidth]{Fig/LearningGeneralization_OuterLoopStates} \vspace{-0.2in} \caption{} \label{fig:plot_3} \end{subfigure} \begin{subfigure}{0.80\columnwidth} \includegraphics[width=1.1\textwidth]{Fig/netTrain_perf} \vspace{-0.2in} \caption{} \label{fig:plot_4} \end{subfigure} \vspace{-0.12in} \caption{(a) Position Tracking performance of DMRAC vs MRAC vs GP-MRAC controller with active learning and Learning retention test over Circular Trajectory for DMRAC (b) DNN Training, Test and Validation performance.} \label{fig:tracking_perf} \vspace{-5mm} \end{figure*} \section{System Description} \label{system_description} This section discusses the formulation of model reference adaptive control (see e.g. \cite{ioannou1988theory}).
We consider the following system with uncertainty $\Delta(x)$: \begin{equation} \dot x(t) = Ax(t) + B(u(t) + \Delta(x)) \label{eq:0} \end{equation} where $x(t) \in \mathbb{R}^n$, $t \geqslant 0$ is the state vector, $u(t) \in \mathbb{R}^m$, $t \geqslant 0$ is the control input, and $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$ are known system matrices; we assume the pair $(A,B)$ is controllable. The term $\Delta(x) : \mathbb{R}^n \to \mathbb{R}^m$ is the matched system uncertainty and is assumed to be Lipschitz continuous in $x(t) \in \mathcal{D}_x$. Let $ \mathcal{D}_x \subset \mathbb{R}^n$ be a compact set; the control $u(t)$ is assumed to belong to a set of admissible control inputs of measurable and bounded functions, ensuring the existence and uniqueness of the solution to \eqref{eq:0}. The reference model is assumed to be linear, and therefore the desired transient and steady-state performance is defined by selecting the system eigenvalues in the open left half plane. The desired closed-loop response of the reference system is given by \begin{equation} \dot x_{rm}(t) = A_{rm}x_{rm}(t) + B_{rm}r(t) \label{eq:ref model} \end{equation} where $x_{rm}(t) \in \mathcal{D}_x \subset \mathbb{R}^{n}$, $A_{rm} \in \mathbb{R}^{n \times n}$ is Hurwitz, and $B_{rm} \in \mathbb{R}^{n \times r}$. Furthermore, the command $r(t) \in \mathbb{R}^{r}$ denotes a bounded, piecewise continuous reference signal, and we assume the reference model (\ref{eq:ref model}) is bounded-input bounded-output (BIBO) stable \cite{ioannou1988theory}. The true uncertainty $\Delta(x)$ is unknown, but it is assumed to be continuous over a compact domain $\mathcal{D}_x \subset \mathbb{R}^n$. Deep Neural Networks (DNNs) have been widely used to represent a function when the basis vector is not known.
Using DNNs, a nonlinearly parameterized network estimate of the uncertainty can be written as $\hat \Delta(x) \triangleq \theta_n^T\Phi(x)$, where $\theta_n \in \mathbb{R}^{k \times m}$ are the network weights for the final layer and $\Phi(x)=\phi_n(\theta_{n-1},\phi_{n-1}(\theta_{n-2},\phi_{n-2}(...))))$ is a $k$-dimensional feature vector which is a function of the inner-layer weights, activations, and inputs. The basis vector $\Phi(x) \in \mathcal{F}: \mathbb{R}^{n} \to \mathbb{R}^{k}$ is considered to be Lipschitz continuous to ensure the existence and uniqueness of the solution of (\ref{eq:0}). \subsection{Total Controller} \label{adaptive_identification} The aim is to construct a feedback law $u(t)$, $t \geqslant 0$, such that the state of the uncertain dynamical system (\ref{eq:0}) asymptotically tracks the state of the reference model (\ref{eq:ref model}) despite the presence of matched uncertainty. A tracking control law consisting of a linear feedback term $u_{pd} = Kx(t)$, a linear feed-forward term $u_{crm} = K_rr(t)$, and an adaptive term $\nu_{ad}(t)$ forms the total controller \begin{equation} u = u_{pd} + u_{crm} - \nu_{ad} \label{eq:total_Controller} \end{equation} The baseline full-state feedback and feed-forward controller is designed to satisfy the matching conditions such that $A_{rm} = A-BK$ and $B_{rm} = BK_r$. For the adaptive controller, ideally we want $\nu_{ad}(t) = \Delta(x(t))$. Since we do not have the true uncertainty information, we use a DNN estimate of the system uncertainties in the controller as $\nu_{ad}(t) = \hat{\Delta}(x(t))$. \subsection{Deep Model Reference Generative Network (D-MRGEN) for uncertainty estimation} Unlike the traditional MRAC or SHL-MRAC weight update rule, where the weights are moved in the direction of diminishing tracking error, training a deep neural network is much more involved. Feed-forward networks like DNNs are trained in a supervised manner over a batch of i.i.d data.
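The total control law and the matching conditions can be sketched for a toy plant. All matrices and gains below are illustrative; note that we implement the feedback term as $-Kx$ so that the closed loop realizes $A_{rm} = A - BK$ as stated by the matching conditions:

```python
import numpy as np

# Hedged sketch of u = u_pd + u_crm - nu_ad with matching conditions
# A_rm = A - B K and B_rm = B K_r, for a double-integrator toy plant.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[16.0, 4.0]])    # illustrative state-feedback gain
K_r = np.array([[16.0]])       # illustrative feed-forward gain

A_rm = A - B @ K               # reference dynamics implied by matching
B_rm = B @ K_r

def control(x, r, nu_ad):
    """u = -K x + K_r r - nu_ad (feedback sign chosen to match A_rm = A - BK)."""
    return -K @ x + K_r @ np.atleast_1d(r) - nu_ad

u = control(np.array([0.1, 0.0]), 1.0, nu_ad=np.zeros(1))
print(u)   # prints [14.4]
```

In the full controller, `nu_ad` is the D-MRGeN/DNN estimate $\hat\Delta(x)$ rather than the zero placeholder used here.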
Deep learning optimization is based on Stochastic Gradient Descent (SGD) or its variants. The SGD update rule relies on a stochastic approximation of the expected value of the gradient of the loss function over a training set or mini-batches. To train a deep network to estimate the system uncertainties, unlike MRAC, we need i.i.d.\ labeled pairs of states and true uncertainties $\{x(t),\Delta(x(t))\}$. Since we do not have access to the true uncertainties ($\Delta(x)$), we use a generative network to generate estimates of $\Delta(x)$ to create the labeled targets for deep network training. For details of the generative network architecture in the adaptive controller, please see \cite{joshi2018adaptive}. This generative network is derived by separating the DNN into an inner feature layer and the final output layer of the network. We also separate, in time scale, the weight updates of these two parts of the DNN. The temporally separated weight update algorithm for the DNN approximating the system uncertainty is presented in more detail in the following sections. \subsection{Online Parameter Estimation law} \label{Identification} The last layer of the DNN, with learned features from the inner layers, forms the Deep-Model Reference Generative Network (D-MRGeN). We use the MRAC learning rule to update, pointwise in time, the weights of the D-MRGeN in the direction of achieving asymptotic tracking of the reference model by the actual system. Since we use the D-MRGeN estimates to train the DNN model, we first study the admissibility and stability characteristics of the generative model estimate ${\Delta}'(x)$ in the controller (\ref{eq:total_Controller}).
To achieve asymptotic convergence of the reference model tracking error to zero, we use the D-MRGeN estimate in the controller \eqref{eq:total_Controller} as $\nu_{ad} = \Delta'(x)$ \begin{equation} \nu_{ad}(t) = W^T\phi_n(\theta_{n-1},\phi_{n-1}(\theta_{n-2},\phi_{n-2}(\cdots))) \end{equation} To differentiate the weights of the D-MRGeN from the last layer weights of the DNN ``$\theta_n$", we denote the D-MRGeN weights as ``$W$". \textbf{\emph{Assumption 1:}} Appealing to the universal approximation property of neural networks \cite{park1991universal}, we have that for every given basis function $\Phi(x) \in \mathcal{F}$ there exist unique ideal weights $W^* \in \mathbb{R}^{k \times m}$ and an error $\epsilon_1(x) \in \mathbb{R}^{m}$ such that the following approximation holds \begin{equation} \Delta(x) = W^{*T}\Phi(x) + \epsilon_1(x), \hspace{2mm} \forall x(t) \in \mathcal{D}_x \subset \mathbb{R}^{n} \label{eq:3} \end{equation} \textbf{\emph{Fact 1:}} The network approximation error $\epsilon_1(x)$ is upper bounded, such that $\bar{\epsilon}_1 = \sup_{x \in \mathcal{D}_x}\|\epsilon_1(x)\|$, and can be made arbitrarily small given a sufficiently large number of basis functions. The reference model tracking error is defined as $e(t) = x_{rm}(t)- x(t)$. Using (\ref{eq:0}) \& (\ref{eq:ref model}) and the controller of the form (\ref{eq:total_Controller}) with adaptation term $\nu_{ad}$, the tracking error dynamics can be written as \begin{equation} \dot e(t) = \dot x_{rm}(t) - \dot{x}(t) \label{eq:13} \end{equation} \begin{equation} \dot e(t) = A_{rm}e(t) + \tilde W^T\Phi(x) + \epsilon_1(x) \label{eq:14} \end{equation} where $\tilde W = W^*-W$ is the parameter estimation error. The estimates of the unknown true network parameters $W^*$ are computed online using the weight update rule (\ref{eq:18}), which corrects the weight estimates in the direction of minimizing the instantaneous tracking error $e(t)$.
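For concreteness, the update law (\ref{eq:18}) and the Lyapunov equation defining $P$ can be sketched numerically (Python/NumPy). The dimensions and gains are illustrative; the simple norm clamp standing in for the projection operator, and the factor $B$ appended so that the dimensions of $W \in \mathbb{R}^{k\times m}$ work out, are assumptions of this sketch.

```python
import numpy as np

def solve_lyapunov(A_rm, Q):
    """Solve A_rm^T P + P A_rm + Q = 0 for P via vectorization:
    (I kron A_rm^T + A_rm^T kron I) vec(P) = -vec(Q)."""
    n = A_rm.shape[0]
    M = np.kron(np.eye(n), A_rm.T) + np.kron(A_rm.T, np.eye(n))
    vecP = np.linalg.solve(M, -Q.reshape(-1, order="F"))
    return vecP.reshape(n, n, order="F")

def proj(W, Y, bound):
    """Crude stand-in for the projection operator: pass Y through,
    but freeze growth once ||W|| reaches the assumed bound."""
    if np.linalg.norm(W) >= bound and np.sum(W * Y) > 0:
        return np.zeros_like(Y)
    return Y

def weight_update_step(W, Phi_x, e, P, B, Gamma, dt, bound=50.0):
    """One Euler step of Wdot = Gamma * proj(W, Phi(x) e^T P B)."""
    Y = np.outer(Phi_x, e @ P @ B)   # shape (k, m)
    return W + dt * Gamma @ proj(W, Y, bound)
```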
The resulting update rule for the network weights estimating the total uncertainty in the system is \begin{equation} \dot {W} = \Gamma\, \mathrm{proj}(W,\Phi(x)e(t)^TP) \hspace{5mm} {W}(0) = {W}_0 \label{eq:18} \end{equation} where $\Gamma \in \mathbb{R}^{k \times k}$ is the learning rate and $P \in \mathbb{R}^{n \times n}$ is a positive definite matrix. For a given Hurwitz $A_{rm}$, the matrix $P \in \mathbb{R}^{n \times n}$ is the positive definite solution of the Lyapunov equation $A_{rm}^TP + PA_{rm} + Q = 0$ for a given $Q > 0$. \textbf{\emph{Assumption 2:}} For the uncertainty parameterized by the unknown true weights ${W}^* \in \mathbb{R}^{k \times m}$ and known nonlinear basis $\Phi(x)$, the ideal weight matrix is assumed to be upper bounded such that $\|{W}^*\| \leq \mathcal{W}_b$. This is not a restrictive assumption. \subsubsection{Lyapunov Analysis} The online adaptive identification law (\ref{eq:18}) guarantees asymptotic convergence of the tracking error $e(t)$ and parameter error $\tilde W(t)$ under the condition of persistency of excitation \cite{aastrom2013adaptive,ioannou1988theory} for structured uncertainty. Similar to the results by Lewis for SHL networks \cite{lewis1999nonlinear}, we show here that, under the assumption of unstructured uncertainty represented by a deep neural network, the tracking error is uniformly ultimately bounded (UUB). We will prove the following theorem under a switching feature vector assumption. \begin{theorem} Consider the actual and reference plant models (\ref{eq:0}) \& (\ref{eq:ref model}). If the weights parameterizing the total uncertainty in the system are updated according to the identification law \eqref{eq:18}, then the tracking error $\|e\|$ and the error in network weights $\|\tilde W\|$ are bounded for all $\Phi \in \mathcal{F}$. \label{Theorem-1} \end{theorem} \begin{proof} The feature vectors belong to a function class characterized by the inner layer network weights $\theta_i$, such that $\Phi \in \mathcal{F}$.
We will prove Lyapunov stability under the assumption that the inner layers of the DNN present us with a feature that results in the worst possible approximation error compared to the network with the features before the switch. For the purpose of this proof, let $\Phi(x)$ denote the feature before the switch and $\bar{\Phi}(x)$ the feature after the switch. We define the error $\epsilon_2(x)$ as \begin{equation} \epsilon_2(x) = \sup_{\bar{\Phi} \in \mathcal{F}}\left|W^T\bar{\Phi}(x) - W^T\Phi(x)\right| \end{equation} Similar to \textbf{\emph{Fact-1}}, we can upper bound the error $\epsilon_2(x)$ as $\bar{\epsilon}_2 = \sup_{x \in \mathcal{D}_x}\|\epsilon_2(x)\|$. By adding and subtracting the term $W^T\bar{\Phi}(x)$, we can rewrite the error dynamics \eqref{eq:14} with the switched basis as \begin{eqnarray} \dot e(t) &=& A_{rm}e(t) + W^{*T}\Phi(x) - W^T\Phi(x) \nonumber \\ && + W^T\bar{\Phi}(x) - W^T\bar{\Phi}(x) + \epsilon_1(x) \label{eq:14_1} \end{eqnarray} From \textbf{\emph{Assumption-1}} we know there exists a $W^*$ $\forall \Phi \in \mathcal{F}$.
Therefore we can replace $W^{*T}\Phi(x)$ by $W^{*T}\bar{\Phi}(x)$ and rewrite Eq-\eqref{eq:14_1} as \begin{eqnarray} \dot e(t) &=& A_{rm}e(t) + \tilde{W}^{T}\bar{\Phi}(x) + W^T(\bar{\Phi}(x) - \Phi(x)) + \epsilon_1(x) \nonumber \\ \label{eq:14_2} \end{eqnarray} For arbitrary switching, for any $\bar{\Phi}(x) \in \mathcal{F}$, we can prove boundedness by considering the worst possible approximation error, and therefore write \begin{eqnarray} \dot e(t) &=& A_{rm}e(t) + \tilde{W}^{T}\bar{\Phi}(x) + \epsilon_2(x) + \epsilon_1(x) \label{eq:14_3} \end{eqnarray} Now let $V(e,\tilde W) > 0$ be a differentiable, positive definite, radially unbounded Lyapunov candidate function, \begin{equation} V(e,\tilde W) = e^TPe + \frac{\tilde W^T \Gamma^{-1} \tilde W}{2} \label{eq:20} \end{equation} The time derivative of the Lyapunov function (\ref{eq:20}) along the trajectory (\ref{eq:14_3}) can be evaluated as \begin{equation} \dot V(e,\tilde W) = \dot e^TPe + e^TP \dot e - \tilde W^T\Gamma^{-1}\dot{W} \label{eq:25} \end{equation} Using (\ref{eq:14_3}) \& (\ref{eq:18}) in (\ref{eq:25}), the time derivative of the Lyapunov function reduces to \begin{eqnarray} \dot V(e,\tilde W) &=& -e^TQe + 2e^TP\epsilon(x) \label{eq:22} \end{eqnarray} where $\epsilon(x) = \epsilon_1(x) +\epsilon_2(x)$ and $\bar{\epsilon} = \bar{\epsilon_1} + \bar{\epsilon_2}$.\\ Hence $\dot V(e,\tilde W) \leq 0$ outside a compact neighborhood of the origin $e = 0$, for some sufficiently large $\lambda_{min}(Q)$, namely whenever \begin{equation} \|e(t)\| \geq \frac{2\lambda_{max}(P)\bar\epsilon}{\lambda_{min}(Q)} \label{eq:error_bound} \end{equation} Using the BIBO assumption, $x_{rm}(t)$ is bounded for a bounded reference signal $r(t)$, and thereby $x(t)$ remains bounded. Since $V(e,\tilde W)$ is radially unbounded, the result holds for all $x(0) \in \mathcal{D}_x$.
Using the fact that the parameter error $\tilde{W}$ is bounded through the projection operator \cite{larchev2010projection}, and further using Lyapunov theory and Barbalat's Lemma \cite{narendra2012stable}, we can show that $e(t)$ is uniformly ultimately bounded in the vicinity of the zero solution. \end{proof} From Theorem-\ref{Theorem-1} \& (\ref{eq:14}), and using system theory \cite{kailath1980linear}, we can infer that as $e(t) \to 0$, ${\Delta}'(x) \to \Delta(x)$ in a pointwise sense. Hence the D-MRGeN estimates $y_{\tau} = {\Delta}'(x_{\tau})$ are admissible target values for training the DNN features over the data $Z^M = \{\{x_{\tau},y_{\tau}\}\}_{\tau = 1}^{M}$. The details of DNN training and the implementation of the DMRAC controller are presented in the following section.
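The two-time-scale training described above (fast D-MRGeN estimates $y_\tau = \Delta'(x_\tau)$ buffered as labels, and a slower SGD loop training the DNN features) can be sketched as follows. Everything here is illustrative: the target function standing in for $\Delta'(x)$, the network size, and the hyperparameters are assumptions of this sketch, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the D-MRGeN estimates Delta'(x_tau); in DMRAC these come
# from the fast MRAC update of the outer-layer weights W (assumption).
def dmrgen_targets(X):
    return np.sin(X[:, :1]) + 0.5 * X[:, 1:2]

# Buffer Z^M = {x_tau, y_tau} of labeled pairs for supervised training.
X = rng.normal(size=(256, 2))
Y = dmrgen_targets(X)

# One-hidden-layer network: inner feature layer Phi(x) plus linear output.
W1 = rng.normal(scale=0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1))

def forward(Xb):
    H = np.tanh(Xb @ W1 + b1)   # features Phi(x)
    return H, H @ W2            # network output theta_n^T Phi(x)

def mse():
    return float(np.mean((forward(X)[1] - Y) ** 2))

mse_before = mse()
lr, batch = 0.05, 32
for _ in range(2000):                          # slow time-scale SGD loop
    idx = rng.integers(0, len(X), size=batch)  # i.i.d. mini-batch
    H, pred = forward(X[idx])
    err = pred - Y[idx]                        # d(loss)/d(pred), up to a constant
    gW2 = H.T @ err / batch
    gH = (err @ W2.T) * (1.0 - H ** 2)         # back-prop through tanh
    W2 -= lr * gW2
    W1 -= lr * (X[idx].T @ gH / batch)
    b1 -= lr * gH.mean(axis=0)
mse_after = mse()
```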
\section{Introduction} \label{sec:intro} In a system of athermal granular particles with only repulsive contact interactions, as the packing fraction of particles $\phi$ increases, the system undergoes a jamming transition \cite{OHern,LiuNagel} at a critical $\phi_J$. For $\phi<\phi_J$ the system behaves similarly to a liquid, while for $\phi>\phi_J$ the system behaves like a rigid but disordered solid. One way to probe the jamming transition is through the application of a simple shear deformation to the system. For an infinite system in the ``thermodynamic limit," if one applies a simple shear stress $\sigma$ no matter how small, then if the system is below $\phi_J$ the system responds with a simple shear flow, with a velocity profile that varies linearly in the direction transverse to the flow. Above $\phi_J$, the application of a small shear stress causes the system to have an elastic shear distortion determined by the finite shear modulus of the solid phase; the system does not flow. However, if $\sigma$ exceeds a critical yield stress $\sigma_0$, then plastic deformations cause the solid to flow. The point where this yield stress $\sigma_0(\phi)$ vanishes upon decreasing $\phi$ then determines the shear-driven jamming transition $\phi_J$ \cite{OlssonTeitelPRL,OlssonTeitelPRE,VagbergOlssonTeitel}. For frictionless particles, such as those considered in this work, $\sigma_0$ vanishes continuously \cite{OlssonTeitelPRL,OlssonTeitelPRE} as $\phi\to\phi_J$ from above. Many numerical studies of the jamming transition, and granular materials more generally, have used spherically shaped particles for simplicity. It is therefore interesting to ask how behavior is modified if the particles have shapes with a lower rotational symmetry \cite{Borzsonyi.Soft.2013}. 
In a recent work \cite{MT1} we considered the shear-driven jamming of athermal, bidisperse, overdamped, frictionless, spherocylinders in two dimensions (2D), uniformly sheared at a fixed strain rate $\dot\gamma$. In that work we considered the global rheology of the system, investigating how pressure, deviatoric shear stress, and macroscopic friction vary with particle packing fraction $\phi$, shear strain rate $\dot\gamma$ and particle asphericity $\alpha$. We determined the jamming packing fraction $\phi_J(\alpha)$ as a function of the spherocylinder asphericity, and the average number of contacts per particle at jamming, $Z_J(\alpha)$. We also studied the probability for an inter-particle contact to form at a particular angle $\vartheta$ along the surface of the spherocylinder, and argued that the $\alpha\to 0$ limit approaching a circular particle was singular; we found that the total probability for a contact to form somewhere on one of the flat sides of the spherocylinder stays constant as $\alpha\to 0$, even as the length of those flat sides becomes a vanishing fraction of the total particle perimeter. In the present work we continue our studies of this 2D spherocylinder model, but now concentrating on the rotational motion of particles and their orientational ordering. As this work is a continuation of our work in Ref.~\cite{MT1}, the introduction and description of the model presented here are abbreviated. We therefore refer the reader to Ref.~\cite{MT1} for a discussion of the broader context of, and motivation for, our model, a more complete list of references, and more details of the derivation of our equations of motion. Some of our results in the present work have been presented previously \cite{MKOT}; here we broaden these prior investigations and present greater detail. When sheared, aspherical particles are known to undergo orientational ordering due to the torques induced on the particles by the shear flow. 
Several numerical works focused on this shear-induced orientational ordering of ellipsoids \cite{Campbell} and rod-shaped particles \cite{Guo1,Guo2} of different aspect ratios in three dimensions (3D) approaching, but staying below, jamming. They found that orientational order increased with increasing packing $\phi$, and that particles were preferentially oriented at a finite angle $\theta_2>0$ with respect to the direction of the shear flow. Experiments and simulations of rod-shaped particles in 3D \cite{Borzsonyi1,Borzsonyi2,Wegner,Wegner2} found similar results, while also studying the rotation of particles in steady-state simple shear, and the transient approaches to the steady state. Other experimental works have studied the transient behavior of orientational ordering and pressure $p$ of ellipses in 2D under quasistatic shearing \cite{Farhadi,Wang}. Numerical simulations, measuring rheological properties as well as orientational ordering in the hard-core limit below jamming, have been carried out for frictional 3D spherocylinders sheared by biaxial compression \cite{Azema2010, Azema2012}, frictionless 3D spherocylinders in steady-state simple shear \cite{Nagy}, and for both frictionless and frictional 2D ellipses in steady-state simple shear \cite{Trulsson}. The rheology of 3D frictional and frictionless spherocylinders in steady simple shear has also recently been simulated \cite{Nath}. In this work we consider the uniform steady-state simple shearing of a system of 2D spherocylinders, covering a broad range of particle asphericities, from moderately elongated to very nearly circular. The above previous works \cite{Campbell, Guo1, Guo2, Borzsonyi1,Borzsonyi2,Wegner,Wegner2,Azema2010, Azema2012,Nagy,Trulsson,Nath} modeled dry granular materials, in which energy is dissipated in particle collisions, rheology is Bagnoldian, and there may be microscopic inter-particle Coulombic friction.
In contrast, here we model particles in suspension, where the rheology is Newtonian at small strain rates below jamming. We use a simple model that has been widely used in studies of the shear-driven jamming of spherical and circular particles \cite{OlssonTeitelPRL,OlssonTeitelPRE,MT1,MKOT,Durian,Hatano,Heussinger,Andreotti,OT3,Wyart1,Vagberg.PRL.2014,Wyart2,Berthier}. In this model, particles are frictionless with a soft-core, one-sided, harmonic repulsive interaction, and energy is dissipated by a viscous drag with respect to an affinely sheared host medium. Particles obey an overdamped equation of motion and inertial effects are thus ignored. Our simple model omits several physical processes that may be relevant to real physical suspensions, such as hydrodynamic forces \cite{hydro}, lubrication forces \cite{lub1,lub2,lub3}, inertial effects \cite{inertia}, and frictional contact interactions which have recently been proposed as a possible mechanism for shear thickening \cite{DST0,DST1,DST2,DST3,DST4,DST5,DST6}. However, the greater simplicity of our model allows a more thorough investigation over a wide range of the parameter space, in particular going to smaller values of the strain rate $\dot\gamma$ and smaller values of the particle asphericity $\alpha$. Our work is carried out in the spirit that it is useful to first understand the behavior of simple models before adding more realistic complexities. The remainder of this paper is organized as follows. In Sec.~\ref{sec:modelMethod} we define our model and give details of our numerical simulations. In Sec.~\ref{sec:isolated} we consider the behavior of an isolated spherocylinder in an affinely sheared host medium, considering the rotational motion and the probability for the particle to be at a particular orientation. Understanding the motion of an isolated single particle will help inform our understanding of the many particle system. 
In Sec.~\ref{sec:ResultsRotation} we present our numerical results for the rotational motion of particles and their orientational ordering as the packing $\phi$ of particles increases through the jamming transition. We compute the average angular velocity of particles scaled by the strain rate, $\langle\dot\theta_i\rangle/\dot\gamma$, and the nematic orientational order parameter $\mathbf{S}_2$. We address two basic questions in this section: (1) What underlying physical processes are reflected in the observed non-monotonic behavior of both $\langle\dot\theta_i\rangle/\dot\gamma$ and the magnitude of the nematic order parameter $S_2$ as the packing $\phi$ increases, and (2) is the finite nematic ordering $\mathbf{S}_2$ a cooperative effect of multi-particle coherent motion, or is it a consequence of shearing acting like an ordering field? We address these questions by considering (i) the time dependence of particle rotations, (ii) the behavior of the system under pure, as opposed to simple, shearing, and (iii) the relaxation of $\mathbf{S}_2$ when it is perturbed away from its steady-state value, and (iv) by constructing a numerical mean-field model for the rotation of particles. We also use these results to explain the singular behavior we previously found \cite{MKOT} as the particle asphericity $\alpha\to 0$, and particles approach a circular shape. In Sec.~\ref{sec:discus} we summarize our results. We find that the non-monotonic behavior of $S_2$ and $\langle\dot\theta_i\rangle/\dot\gamma$ can be viewed as a crossover from a single particle-like behavior at small $\phi$, where the imposed simple shear results in a steady but non-uniform rotation of the particles, to a many particle behavior at large $\phi$, where the geometry of the dense packing and the decreasing free volume inhibits particle rotation, which becomes more of a random Poisson-like process.
We conclude that the orientational ordering is a consequence of the shear serving as an ordering field rather than due to cooperative behavior among the particles. Finally, in the Appendices we consider several ancillary matters. In Appendix~\ref{aOrientations} we consider the distribution of particle orientations in steady-state shear flow and relate that distribution to the orientation of the nematic order parameter. In Appendix~\ref{sAtoZ} we present further analysis of the singular $\alpha\to 0$ limit, and explore how this limit is affected if we consider a system of particles polydisperse in shape. \section{Model and Simulation Method} \label{sec:modelMethod} Our model system is one of $N$ two dimensional, athermal, frictionless spherocylinders, consisting of a rectangle with two semi-circular end caps, as illustrated in Fig.~\ref{sphero}. The half-length of the rectangle of particle $i$ is $A_i$, the radius is $R_i$, and we define the asphericity $\alpha_i$ as, \begin{equation} \alpha_i=A_i/R_i \end{equation} so that $\alpha=0$ is a pure circular particle. The ``spine" of the spherocylinder is the axis of length $2A_i$ that goes down the center of the rectangle. For every point on the perimeter of the spherocylinder, the shortest distance from the spine is $R_i$. The center of mass of the particle is $\mathbf{r}_i$ and the angle $\theta_i$ denotes the orientation of the spine with respect to the $\mathbf{\hat x}$ direction. Our system box has lengths $L_x$ and $L_y$ in the $\mathbf{\hat x}$ and $\mathbf{\hat y}$ directions, respectively. We will in general take $L_x=L_y\equiv L$ unless otherwise noted. If $\mathcal{A}_i$ is the area of spherocylinder $i$, the packing fraction $\phi$ is, \begin{equation} \phi=\frac{1}{L^2}\sum_{i=1}^N\mathcal{A}_i. \end{equation} Unless otherwise stated, all our particles have equal asphericity $\alpha$, and are bidisperse in size with equal numbers of big and small particles with length scales in the ratio $R_b/R_s=1.4$. 
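The geometry above fixes the particle areas: a 2D spherocylinder of spine half-length $A_i=\alpha R_i$ is a $2A_i \times 2R_i$ rectangle plus two semicircular caps, so $\mathcal{A}_i = 4\alpha R_i^2 + \pi R_i^2$. A small sketch (Python; the box size and radii below are arbitrary examples, not the paper's units) computes $\phi$ for the bidisperse mixture:

```python
import numpy as np

def spherocylinder_area(R, alpha):
    """Area of a 2D spherocylinder: a 2A x 2R rectangle (A = alpha*R)
    plus two semicircular end caps of radius R."""
    return 4.0 * alpha * R**2 + np.pi * R**2

def packing_fraction(L, alpha, N=1024, R_s=0.5, ratio=1.4):
    """phi = (1/L^2) sum_i A_i for N/2 small and N/2 big particles
    with R_b / R_s = 1.4 in an L x L box."""
    total = (N // 2) * (spherocylinder_area(R_s, alpha)
                        + spherocylinder_area(ratio * R_s, alpha))
    return total / L**2
```

Note that $\alpha=0$ recovers the circle area $\pi R^2$.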
\begin{figure} \centering \includegraphics[width=2.5in]{sphero} \caption{ An isolated spherocylinder indicating the spine half-length $A_i$, end cap radius $R_i$, center of mass position $\mathbf{r}_i$, and angle of orientation $\theta_i$. } \label{sphero} \end{figure} The dynamics of our model has been described in detail in Ref.~\cite{MT1}, here we summarize the main features. Periodic boundary conditions are taken along $\mathbf{\hat x}$, while Lees-Edward boundary conditions \cite{LeesEdwards} are taken along $\mathbf{\hat y}$ to introduce a simple shear strain $\gamma$. We take $\gamma =\dot\gamma t$ to model simple shear flow in the $\mathbf{\hat x}$ direction at a fixed finite strain rate $\dot\gamma$. Particles interact with each other via elastic contact interactions. Energy dissipation is due to a viscous drag between the particles and an affinely sheared host medium, \begin{equation} \mathbf{v}_\mathrm{host}(\mathbf{r})=\dot\gamma y \mathbf{\hat x}, \end{equation} modeling the behavior of particles in a uniform non-Brownian suspension. Defining $r_{ij}$ as the shortest distance between the spines of spherocylinders $i$ and $j$ \cite{Pournin.GranulMat.2005}, and $d_{ij}=R_i+R_j$, two spherocylinders are in contact whenever $r_{ij}<d_{ij}$. In this case there is a repulsive harmonic interaction between the particles with the force on $i$ being given by, \begin{equation} \mathbf{F}_{ij}^\mathrm{el}=\frac{k_e}{d_{ij}}\left(1-\frac{r_{ij}}{d_{ij}}\right)\mathbf{\hat n}_{ij}, \end{equation} where $k_e$ is the particle stiffness and $\mathbf{\hat n}_{ij}$ the unit vector pointing normally inwards to particle $i$ at the point of contact with $j$. 
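The contact criterion above requires the shortest distance $r_{ij}$ between two line segments (the spines). A standard 2D routine for this, together with the harmonic force magnitude, is sketched below (Python; written from the equations above, not from the authors' code):

```python
import numpy as np

def segment_distance(p1, q1, p2, q2):
    """Shortest distance between 2D segments p1-q1 and p2-q2 (the spines)."""
    def point_seg(p, a, b):
        # Distance from point p to segment a-b.
        ab = b - a
        t = np.clip(np.dot(p - a, ab) / max(np.dot(ab, ab), 1e-30), 0.0, 1.0)
        return float(np.linalg.norm(p - (a + t * ab)))
    def orient(u, v, w):
        # Sign of the 2D cross product (v-u) x (w-u).
        return np.sign((v[0]-u[0])*(w[1]-u[1]) - (v[1]-u[1])*(w[0]-u[0]))
    # Properly crossing spines would mean zero distance (unphysical overlap).
    if orient(p1, q1, p2) * orient(p1, q1, q2) < 0 and \
       orient(p2, q2, p1) * orient(p2, q2, q1) < 0:
        return 0.0
    return min(point_seg(p1, p2, q2), point_seg(q1, p2, q2),
               point_seg(p2, p1, q1), point_seg(q2, p1, q1))

def contact_force_magnitude(r_ij, d_ij, k_e=1.0):
    """|F_ij^el| = (k_e/d_ij)(1 - r_ij/d_ij) for r_ij < d_ij, else 0."""
    return 0.0 if r_ij >= d_ij else (k_e / d_ij) * (1.0 - r_ij / d_ij)
```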
The force $\mathbf{F}_{ij}^\mathrm{el}$ acts at the contact point, which is located a distance $(R_i/d_{ij})r_{ij}$ from the spine of particle $i$, along the cord $r_{ij}$, and gives rise to a torque on particle $i$, \begin{equation} \boldsymbol{\tau}_{ij}^\mathrm{el}=\mathbf{\hat z} \tau_{ij}^\mathrm{el}=\mathbf{s}_{ij}\times\mathbf{F}_{ij}^\mathrm{el}, \end{equation} where $\mathbf{s}_{ij}$ is the moment arm from the center of mass of $i$ to its point of contact with $j$. The total elastic force and torque on particle $i$ are then \begin{equation} \mathbf{F}_i^\mathrm{el}=\sum_j \mathbf{F}_{ij}^\mathrm{el},\qquad \tau_i^\mathrm{el}=\sum_j \tau_{ij}^\mathrm{el} \end{equation} where the sums are over all particles $j$ in contact with $i$. The viscous drag between particle $i$ and the host medium gives rise to a dissipative force, \begin{equation} \mathbf{F}_i^\mathrm{dis}=\int_i d^2r\,\mathbf{f}_i^\mathrm{dis}(\mathbf{r}), \end{equation} where the integral is over the area of particle $i$ and the dissipative force per unit area acting at position $\mathbf{r}$ on the particle is given by the local velocity difference between the particle and the host medium, \begin{equation} \mathbf{f}_i^\mathrm{dis}(\mathbf{r})=-k_d[\mathbf{v}_i(\mathbf{r})-\mathbf{v}_\mathrm{host}(\mathbf{r})], \end{equation} where $k_d$ is a viscous damping coefficient and $\mathbf{v}_i(\mathbf{r})$ is the local velocity of the particle at position $\mathbf{r}$, \begin{equation} \mathbf{v}_i(\mathbf{r})=\mathbf{\dot r}_i+\dot\theta_i\mathbf{\hat z}\times (\mathbf{r}-\mathbf{r}_i). \end{equation} Here $\dot{\mathbf{r}}_i=d\mathbf{r}_i/dt$ is the center of mass velocity of the particle and $\dot\theta_i$ is its angular velocity about the center of mass. The corresponding dissipative torque is, \begin{equation} \boldsymbol{\tau}_i^\mathrm{dis}=\mathbf{\hat z}\tau_i^\mathrm{dis}=\int_i d^2r\,(\mathbf{r}-\mathbf{r}_i)\times \mathbf{f}_i^\mathrm{dis}(\mathbf{r}). 
\end{equation} The above elastic and dissipative forces are the only forces included in our model; there are no inter-particle dissipative or frictional forces. We will carry out our simulations in the overdamped (low particle mass) limit, where the total force and torque on each particle are damped to zero, \begin{equation} \mathbf{F}_i^{\mathrm{el}} + \mathbf{F}_i^{\mathrm{dis}} = 0, \quad \tau_i^{\mathrm{el}} + \tau_i^{\mathrm{dis}} = 0. \end{equation} The resulting translational and rotational equations of motion for particle $i$ can then be written as \cite{MT1}, \begin{align} \dot{\mathbf{r}}_i &= \dot{\gamma}y_i{\mathbf{\hat x}}+\dfrac{\mathbf{F}_i^{\mathrm{el}}}{k_d \mathcal{A}_i}, \label{eq:ri_eom} \\ \dot{\theta}_i &= - \dot{\gamma} f(\theta_i)+ \dfrac{\tau_i^{\mathrm{el}}}{k_d \mathcal{A}_i I_i}, \label{eq:theta_eom} \end{align} where $\mathcal{A}_i$ is the area of particle $i$, $I_i$ is the trace of the particle's moment of inertia tensor, and \begin{equation} f(\theta)=\frac{1}{2}\left[1-\left({\Delta I_i}/{I_i}\right)\cos 2\theta\right], \label{eftheta} \end{equation} where $\Delta I_i$ is the absolute value of the difference of the two eigenvalues of the moment of inertia tensor. We assume a uniform constant mass density for both our small and big particles. For our simulations we take $2 R_s = 1$ as the unit of distance, $k_e = 1$ as the unit of energy, and $t_0 = (2 R_s)^2 k_d\mathcal{A}_s / k_e = 1$ as the unit of time. For simplicity, we take the damping coefficient $k_d$ to vary with particle size, so that $k_d\mathcal{A}_i=1$ for all particles. We numerically integrate the equations of motion (\ref{eq:ri_eom}) and (\ref{eq:theta_eom}) using a two-stage Heun method with a step size of $\Delta t = 0.02$. Unless otherwise stated, we begin each shearing run in a finite energy configuration at the desired packing fraction $\phi$ with random initial particle positions and orientations.
To generate such initial configurations we place the spherocylinders in the system one-by-one, while rejecting and retrying any time a new placement would lead to an unphysical overlap where the spines of two spherocylinders intersect. In general we use $N=1024$ particles. We have found this to be sufficiently large to avoid any significant finite size effects for the behaviors discussed in this work. Most of our simulations typically extend to strains of at least $\gamma\approx 150$. Discarding an initial $\Delta\gamma\approx 20$ of the strain from the averaging so as to eliminate transient effects, we find that our steady-state averages are generally insensitive to the particular starting configuration \cite{Vagberg.PRE.2011}. See the Supplemental Material to Ref.~\cite{MKOT} for tests that these simulation parameters, in particular $N$ and $\Delta t$, are sufficient to obtain accurate results for particles with our smallest asphericity, $\alpha=0.001$. Note that we restrict the strain coordinate $\gamma$ used in our Lees-Edwards boundary condition to the range $\gamma\in \left(-L_x/2L_y, L_x/2L_y\right]$; whenever it exceeds this maximum it is reset by taking $\gamma \to \gamma - L_x/L_y$, allowing us to shear to arbitrarily large total strains. \section{Isolated Particles: Rotations and Orientational Ordering} \label{sec:isolated} Although the main objective of this work is to study the behavior of many interacting particles, it is of interest to first consider the case of an isolated particle, for which $\mathbf{F}_i^\mathrm{el}=\boldsymbol{\tau}_i^\mathrm{el}=0$. In this case Eq.~(\ref{eq:ri_eom}) gives that the particle flows with the local host velocity, $\dot{\mathbf{r}}_i=\dot\gamma y_i\mathbf{\hat x}$, while from Eq.~(\ref{eq:theta_eom}) the rotational motion obeys the deterministic equation, $\dot\theta_i=-\dot\gamma f(\theta_i)$, with $f(\theta)$ as in Eq.~(\ref{eftheta}).
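This deterministic equation is easily integrated with the same two-stage Heun (predictor-corrector) scheme used in the simulations. A minimal sketch (Python; the value of $\Delta I_i/I_i$ and the integration time are arbitrary examples):

```python
import numpy as np

def f_theta(theta, dI_over_I):
    """f(theta) of Eq. (eftheta)."""
    return 0.5 * (1.0 - dI_over_I * np.cos(2.0 * theta))

def heun_step(theta, dt, gammadot, dI_over_I):
    """Two-stage Heun (predictor-corrector) step for
    d(theta)/dt = -gammadot * f(theta)."""
    k1 = -gammadot * f_theta(theta, dI_over_I)
    k2 = -gammadot * f_theta(theta + dt * k1, dI_over_I)  # predictor slope
    return theta + 0.5 * dt * (k1 + k2)                   # corrector

def integrate(theta0, t_end, dt=0.02, gammadot=1.0, dI_over_I=0.8):
    theta = theta0
    for _ in range(int(round(t_end / dt))):
        theta = heun_step(theta, dt, gammadot, dI_over_I)
    return theta
```

For a circular particle ($\Delta I_i/I_i=0$) this reduces to steady clockwise rotation at rate $\dot\gamma/2$.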
Since in general $f(\theta)>0$, the particle will rotate continuously clockwise, but with a non-uniform angular velocity that is slowest at $\theta_i=0$ or $\pi$ where $f(\theta_i)$ is at its minimum, and fastest at $\theta_i=\pi/2$ or $3\pi/2$ where $f(\theta_i)$ is at its maximum. This is analogous to the Jeffrey orbits of ellipsoids in a viscous fluid \cite{Jeffery.RSPA.1922}. The particle will thus spend more time oriented at $\theta_i=0$, aligned parallel to the flow direction $\mathbf{\hat x}$. We show this explicitly by integrating the equation of motion and plotting $\theta_i(t)$ vs $\gamma=\dot\gamma t$ in Fig.~\ref{theta-vs-g}(a) for spherocylinders of several different $\alpha$. \begin{figure} \centering \includegraphics[width=3.5in]{theta-vs-g} \caption{For an isolated spherocylinder in a uniform shear flow, (a) orientation $\theta_i$ vs net shear strain $\gamma=\dot\gamma t$, and (b) probability density $\mathcal{P}(\theta)$ vs $\theta$ for the spherocylinder to be oriented at angle $\theta$. From bottom to top in (a) the curves are for spherocylinders with asphericity $\alpha=0.1$, 0.5, 1.0, 2.0 and 4.0, and similarly for the curves at $\theta=\pi$ in (b). } \label{theta-vs-g} \end{figure} For such an isolated particle tumbling in the flow field of the host medium, we can compute the probability density for the particle's orientation to be at a particular angle $\theta$, \begin{align} \mathcal{P}(\theta)&=\frac{1}{T}\int_0^T\!\!dt\,\delta (\theta_i(t)-\theta)\\[10pt] &=\frac{1}{T}\int_0^{2\pi}\!\!d\theta_i\,\frac{\delta(\theta_i-\theta)}{|\dot\theta_i|} =\dfrac{1}{T\dot\gamma f(\theta)}, \label{Piso} \end{align} where $T$ is the period of the rotation. We plot $\mathcal{P}(\theta)$ vs $\theta$ for spherocylinders with different $\alpha$ in Fig.~\ref{theta-vs-g}(b). 
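The normalization of Eq.~(\ref{Piso}) and the resulting single-particle averages can be checked by quadrature of $\mathcal{P}(\theta)\propto 1/f(\theta)$ (a Python sketch; the uniform-grid sum is spectrally accurate here because the integrand is smooth and periodic):

```python
import numpy as np

def f_theta(theta, c):
    """f(theta) of Eq. (eftheta), with c = Delta I_i / I_i."""
    return 0.5 * (1.0 - c * np.cos(2.0 * theta))

def dI_over_I(alpha):
    """Delta I_i / I_i for a 2D spherocylinder of asphericity alpha."""
    num = 2.0 * alpha * (4.0 + 3.0 * np.pi * alpha + 4.0 * alpha**2)
    den = 3.0 * np.pi + 24.0 * alpha + 6.0 * np.pi * alpha**2 + 8.0 * alpha**3
    return num / den

def isolated_stats(c, n=4096):
    """Mean scaled rotation rate -<thetadot>/gammadot and nematic S2
    from the single-particle distribution P(theta) = 1/(T gammadot f)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    w = 1.0 / f_theta(theta, c)            # unnormalized P(theta)
    Z = w.sum()                            # normalization on the uniform grid
    mean_rate = n / Z                      # = 2*pi / integral of dtheta/f
    S2 = (w * np.cos(2.0 * theta)).sum() / Z
    return mean_rate, S2
```

The returned values reproduce the closed forms quoted in the text for the average angular velocity and for $S_2$.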
Normalization of $\mathcal{P}(\theta)$ then determines the period $T$ and thus gives for the average angular velocity, \begin{equation} -\dfrac{\langle\dot\theta_i\rangle}{\dot\gamma} = \dfrac{2\pi}{\dot\gamma T} = \frac{1}{2}\sqrt{1-(\Delta I_i/I_i)^2}. \label{eomegasingle} \end{equation} For a circular particle one has $\Delta I_i/I_i=0$ and so $-\langle\dot\theta\rangle/\dot\gamma=1/2$. More generally, since $0\le \Delta I_i/I_i< 1$, one then has $0<-\langle\dot\theta\rangle/\dot\gamma \le 1/2$. Since $\mathcal{P}(\theta+\pi)=\mathcal{P}(\theta)$, corresponding to the fact that the particle has neither a head nor a tail, orientational ordering will be nematic. The direction of the nematic order parameter $\mathbf{S}_2$ is $\theta_2=0$, aligned with the flow, while the magnitude is given by, \begin{equation} S_2=\int_0^{2\pi}\!\!d\theta\,\mathcal{P}(\theta)\cos 2\theta = \dfrac{1-\sqrt{1-(\Delta I_i/I_i)^2}}{(\Delta I_i/I_i)}. \label{eS2single} \end{equation} In Fig.~\ref{S2-vs-C}(a) we plot $-\langle \dot\theta\rangle/\dot\gamma$ and $S_2$ vs $\Delta I_i/I_i$ for an isolated particle, using Eqs.~(\ref{eomegasingle}) and (\ref{eS2single}). We see, not surprisingly, an anti-correlation between the two quantities; $-\langle\dot\theta\rangle/\dot\gamma$ decreases as the particle becomes more aspherical (i.e., as $\Delta I_i/I_i$ increases), while $S_2$ increases. For spherocylinders of asphericity $\alpha$ we have, \begin{equation} \dfrac{\Delta I_i}{I_i}=\dfrac{2\alpha(4+3\pi\alpha+4\alpha^2)}{3\pi+24\alpha+6\pi\alpha^2+8\alpha^3}, \label{eDI} \end{equation} which we plot in Fig.~\ref{S2-vs-C}(b). \begin{figure} \centering \includegraphics[width=3.5in]{S2-vs-C} \caption{(a) Average scaled angular velocity $-\langle\dot\theta_i\rangle/\dot\gamma$ and magnitude of the nematic order parameter $S_2$ vs $\Delta I_i/I_i$ for an isolated particle in a uniform shear flow. (b) Plot of $\Delta I_i/I_i$ vs $\alpha$ for spherocylinders of asphericity $\alpha$. 
} \label{S2-vs-C} \end{figure} As the packing $\phi$ increases from zero, the above single particle behavior will be modified due to collisions that occur between particles, giving rise to elastic forces and torques. It is interesting to consider a naive model in which, at small $\phi$, we regard these collisions as introducing uncorrelated random torques, as if the particle were at a finite temperature. We therefore rewrite Eq.~(\ref{eq:theta_eom}) as, \begin{equation} \dfrac{\dot\theta_i}{\dot\gamma}=\dfrac{d\theta_i}{d\gamma}=-f(\theta_i) + \zeta(\gamma) \label{enoisy} \end{equation} where $\zeta = \tau_i^\mathrm{el}/(k_d \mathcal{A}_i I_i\dot\gamma)$ and we assume, \begin{equation} \langle \zeta(\gamma)\rangle=0,\qquad \langle\zeta(\gamma)\zeta(\gamma^\prime)\rangle = \varepsilon^2\delta(\gamma-\gamma^\prime). \end{equation} Numerically integrating Eq.~(\ref{enoisy}), in Fig.~\ref{noisy}(a) we plot the resulting probability density $\mathcal{P}(\theta)$ for a spherocylinder of $\alpha=4$, for various noise levels $\varepsilon$. We see several significant changes from the noiseless $\varepsilon=0$ case. As $\varepsilon$ increases, we see that the amplitude of the variation in $\mathcal{P}(\theta)$ decreases, and the location of the peak shifts from $\theta=0$ to larger $\theta >0$. This indicates that the magnitude of the nematic order $S_2$ is decreasing and the nematic director becomes oriented at a finite positive angle with respect to the shear flow. To quantify this observation, we compute the nematic order parameter as follows: For a particle in 2D, the magnitude $S_2$ and orientation $\theta_2$ of the nematic order parameter $\mathbf{S}_2$ are given by \cite{Torquato}, \begin{equation} S_2=\max_{\theta_2}\left[\langle\cos(2[\theta-\theta_2])\rangle\right], \label{eS2iso0} \end{equation} where $\langle \dots\rangle$ denotes an average over time, or equivalently over strain $\gamma=\dot\gamma t$. 
From this one can show, \begin{equation} S_2=\sqrt{\langle\cos 2\theta\rangle^2+\langle\sin 2\theta\rangle^2} \label{eS2iso1} \end{equation} and \begin{equation} \tan 2\theta_2 =\langle\sin 2\theta\rangle/\langle\cos 2\theta\rangle. \label{eS2iso2} \end{equation} In Fig.~\ref{noisy}(b) we plot $\theta_2$ vs noise level $\varepsilon$ for several different spherocylinder asphericities $\alpha$. The values of $\theta_2$ for $\alpha=4$ coincide with the locations of the peaks in $\mathcal{P}(\theta)$ in Fig.~\ref{noisy}(a). We see that there is no strong dependence of $\theta_2$ on $\alpha$, except at small $\varepsilon$, and that $\theta_2$ saturates to $45^\circ$ as $\varepsilon$ gets large; $45^\circ$ corresponds to the eigen-direction of expansion of the affine strain rate tensor, and hence also the direction of minimal stress. In Fig.~\ref{noisy}(c) we plot $S_2$ vs $\varepsilon$ for different $\alpha$ and see that $S_2$ decays to zero as $\varepsilon$ increases; we find the large $\varepsilon$ tail of this decay to be well fit to an exponential $\sim \exp(-\varepsilon/\varepsilon_0)$, with $\varepsilon_0\approx 1.16$ for all $\alpha$. Finally in Fig.~\ref{noisy}(d) we plot the scaled average angular velocity $-\langle\dot\theta_i\rangle/\dot\gamma$ vs $\varepsilon$ for different $\alpha$. As $\varepsilon$ increases, $-\langle\dot\theta_i\rangle/\dot\gamma$ saturates to 1/2, the rotational velocity of the affinely sheared host medium, as well as the value expected for a circular particle. We find the large $\varepsilon$ behavior to be well fit to the form $\sim \frac{1}{2}[1-c\exp(-\varepsilon/\varepsilon_0^\prime)]$, with $\varepsilon_0^\prime\approx 0.34$ for all $\alpha$. As in Fig.~\ref{S2-vs-C}(a) we see that $S_2$ and $-\langle\dot\theta_i\rangle/\dot\gamma$ are anticorrelated; as one increases, the other decreases. 
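The noisy single-particle model can be sketched with a few lines of code. The following Euler--Maruyama integration of Eq.~(\ref{enoisy}) (Python, with illustrative run length and step size, again assuming the Jeffery-type form $f(\theta)=\frac{1}{2}[1-C\cos 2\theta]$ with $C=\Delta I_i/I_i$) evaluates $S_2$ and $\theta_2$ from the strain averages of Eqs.~(\ref{eS2iso1}) and (\ref{eS2iso2}):

```python
import numpy as np
from math import cos

def noisy_orbit(C, eps, gamma_max=2000.0, dg=2e-3, seed=0):
    """Euler-Maruyama integration of d(theta)/d(gamma) = -f(theta) + zeta,
    with <zeta(g) zeta(g')> = eps^2 delta(g - g').  Assumes the
    Jeffery-type form f(theta) = (1/2)[1 - C cos(2 theta)]."""
    rng = np.random.default_rng(seed)
    n = int(gamma_max/dg)
    kicks = eps*np.sqrt(dg)*rng.standard_normal(n)  # delta-correlated noise
    th = np.empty(n)
    x = 0.0
    for i in range(n):
        x += -0.5*(1.0 - C*cos(2.0*x))*dg + kicks[i]
        th[i] = x
    # nematic order parameter from strain averages, Eqs. (eS2iso1)-(eS2iso2)
    c, s = np.mean(np.cos(2*th)), np.mean(np.sin(2*th))
    S2 = np.hypot(c, s)
    theta2 = 0.5*np.arctan2(s, c)
    omega = -x/gamma_max        # average -<thetadot>/gammadot
    return S2, theta2, omega
```

In the noiseless limit $\varepsilon=0$ this recovers the isolated-particle closed forms of Eqs.~(\ref{eomegasingle}) and (\ref{eS2single}) (for $C=0.8$: $-\langle\dot\theta_i\rangle/\dot\gamma=0.3$, $S_2=0.5$, $\theta_2=0$), while increasing $\varepsilon$ drives $\theta_2$ positive, suppresses $S_2$, and pushes $-\langle\dot\theta_i\rangle/\dot\gamma$ towards $1/2$, as in Fig.~\ref{noisy}.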
\begin{figure} \centering \includegraphics[width=3.5in]{noisy} \caption{(a) Probability density $\mathcal{P}(\theta)$ for a spherocylinder of asphericity $\alpha=4$ to be oriented at angle $\theta$, for various strengths $\varepsilon$ of uncorrelated random torque noise. (b) Orientation $\theta_2$ of the nematic order parameter, (c) magnitude $S_2$ of the nematic order parameter, and (d) scaled particle angular velocity $-\langle\dot\theta_i\rangle/\dot\gamma$ vs noise strength $\varepsilon$, for spherocylinders of various $\alpha$. } \label{noisy} \end{figure} These results are easy to understand. The nematic ordering, in the isolated particle limit, is determined by how long the particle spends at the preferred alignment $\theta=0$ or $\pi$, where $f(\theta)$ has its minimum. When a particle oriented near $\theta=0$ receives a random kick directed counter-clockwise, the particle deflects to positive $\theta$, but then quickly relaxes back towards $\theta=0$ under the influence of the driving term $-f(\theta)$; however if the random kick is directed clockwise, the particle will rapidly rotate through $\pi$, before relaxing towards the preferred alignment at $\theta=\pi$. This effect results in the particle spending more time at angles $\theta>0$ than at corresponding angles $\theta < 0$, and as a consequence $\theta_2$ becomes finite and positive, growing with the strength of the random kicks. At the same time, the occurrence of clockwise directed random kicks serves to shorten the time the particle spends in the preferred aligned direction $\theta=0$ or $\pi$, resulting in an increase in the average angular velocity $-\langle\dot\theta\rangle/\dot\gamma$ and a decrease in the magnitude of the nematic ordering $S_2$. In the following sections we explore what happens as the packing $\phi$ increases in a true model of $N$ interacting spherocylinders.
We will see that, as $\phi$ increases from small values, $\theta_2$ increases from zero in accord with the above naive model. However we will see that $S_2$ and $-\langle\dot\theta_i\rangle/\dot\gamma$ behave qualitatively the opposite of this naive model; as $\phi$ increases from small values, $S_2$ {\em increases} while $-\langle\dot\theta_i\rangle/\dot\gamma$ {\em decreases}. As we will see in Sec.~\ref{sec:MF}, the reason for this difference is that, while our naive model above assumed the collisions provided no net torque $\langle\zeta\rangle=0$, in fact the collisions that occur due to shearing create an orientation-dependent elastic torque on a particle which on average is finite and counter-clockwise, thus slowing down the rotation of particles and increasing orientational ordering. \section{Numerical results: Rotations and Orientational Ordering} \label{sec:ResultsRotation} At finite packing $\phi$, particles will come into contact, $\tau_i^\mathrm{el}$ will no longer be zero, and the isolated particle behavior of the previous section will be modified. Here we report on our numerical results for systems of particles with different asphericity from $\alpha=0.001$ to 4, for a range of packings $\phi$ from dilute, to jamming, and above. We will look in greater detail at the two specific cases of moderately elongated particles with $\alpha=4$, and nearly circular particles with $\alpha=0.01$. In Fig.~\ref{configs} we show snapshots of typical steady-state configurations for these two cases, sheared at a rate $\dot\gamma=10^{-6}$. For $\alpha=4$ we show a dense configuration at $\phi=0.905$, close to its jamming $\phi_J=0.906$; for $\alpha=0.01$ we show a configuration at its jamming $\phi_J=0.845$.
\begin{figure} \centering \includegraphics[width=3.5in]{configs} \caption{Snapshot configurations in simple sheared steady-state with strain rate $\dot\gamma=10^{-6}$ for spherocylinders of asphericity (a) $\alpha=4$ at packing $\phi=0.905$ near the jamming $\phi_J=0.906$, and (b) $\alpha=0.01$ at packing $\phi_J=0.845$. For the nearly circular particles at $\alpha=0.01$, the black line bisecting each particle indicates the direction of the spherocylinder axis. Colors are used to help distinguish different particles and have no other meaning. Corresponding animations, showing the evolution of these configurations under shearing, are available in the Supplemental Material \cite{SM}. } \label{configs} \end{figure} When comparing results for systems of different $\alpha$, we will find it convenient to plot quantities in terms of a reduced packing fraction, $\phi/\phi_J(\alpha)$, where $\phi_J(\alpha)$ is the shear-driven jamming packing fraction for particles of that particular value of $\alpha$. For reference, in Fig.~\ref{phiJ-vs-alpha} we plot this $\phi_J$ vs $\alpha$, as we have determined in our earlier work \cite{MT1}. Note that this $\phi_J(\alpha)$ monotonically increases with $\alpha$, for the range of $\alpha$ studied here. This is in contrast to compression-driven jamming where $\phi_J(\alpha)$ reaches a maximum near $\alpha\approx 1$ and then decreases as $\alpha$ increases further \cite{MTCompress}. This difference is because there is no nematic ordering for athermal isotropic compression \cite{MTCompress}, while (as we will see below) there is nematic ordering in the sheared system; the orientational ordering of the sheared system allows the particles to pack more efficiently and so results in a larger $\phi_J$ that continues to increase with increasing $\alpha$.
\begin{figure} \centering \includegraphics[width=3.5in]{phiJ-vs-alpha} \caption{Critical packing fraction $\phi_J$ for shear-driven jamming vs spherocylinder asphericity $\alpha$, from Ref.~\cite{MT1}. } \label{phiJ-vs-alpha} \end{figure} \subsection{Average Angular Velocity} \label{sAV} \begin{figure} \centering \includegraphics[width=3.5in]{dthdg-vs-phiophiJ} \caption{Average particle angular velocity scaled by strain rate $-\langle\dot\theta_i\rangle/\dot\gamma$ vs reduced packing fraction $\phi/\phi_J$ for spherocylinders of different asphericity $\alpha$. For each $\alpha$ we show results for two different small strain rates $\dot\gamma_1$ (solid symbols) $<\dot\gamma_2$ (open symbols) (see Table~\ref{tab1} for values). The vertical dashed line locates the jamming transition $\phi/\phi_J=1$. The horizontal dashed line denotes the rotation $1/2$ of the affinely sheared host medium. } \label{dthdg-vs-phiophiJ} \end{figure} We first consider the angular velocity of the particles' rotational motion. For the coordinate system of our model, a counterclockwise rotation is a positive angular velocity, while a clockwise rotation is negative. Since our particles have a net rotation that is clockwise, it is therefore convenient to consider $-\dot\theta_i$. It will also be convenient to measure this rotation in units of the strain rate, which we will find gives a finite value in the quasistatic limit $\dot\gamma\to 0$. Hence, when we refer to the angular velocity of particle $i$, we will generally mean $-\dot\theta_i/\dot\gamma$. From Eq.~(\ref{eq:theta_eom}) we can write for the average angular velocity of individual particles, \begin{equation} -\dfrac{\langle\dot\theta_i\rangle}{\dot\gamma} =\left\langle\dfrac{1}{N}\sum_{i=1}^N\left[f(\theta_i)-\dfrac{\tau_i^\mathrm{el}}{\dot\gamma k_d \mathcal{A}_i I_i}\right]\right\rangle, \end{equation} where $\langle\dots\rangle$ indicates an average over configurations in the steady state.
In an earlier letter \cite{MKOT} we plotted the resulting $-\langle\dot\theta_i\rangle/\dot\gamma$ vs the packing fraction $\phi$, for spherocylinders of different asphericity. In Fig.~\ref{dthdg-vs-phiophiJ} we reproduce those results for asphericities $\alpha=0.001$ to 4, but now plotting vs the reduced packing fraction $\phi/\phi_J$, so as to more easily compare behaviors near the $\alpha$-dependent jamming transition. For each $\alpha$ we show results at two different small strain rates, $\dot\gamma_1 <\dot\gamma_2$, in order to demonstrate that our results, except for the largest $\phi$ near and above jamming, are in the quasistatic limit where $\langle\dot\theta_i\rangle/\dot\gamma$ is independent of $\dot\gamma$. The values of $\dot\gamma_1$ and $\dot\gamma_2$ used for each $\alpha$ are given in Table~\ref{tab1}. That $-\langle\dot\theta_i\rangle/\dot\gamma>0$ indicates that the particles continuously rotate in a clockwise direction, and such rotation persists even in dense configurations above jamming. Here, and in subsequent plots, error bars represent one standard deviation of estimated statistical error; when error bars are not visible, they are smaller than the size of the symbol representing the data point. In Fig.~\ref{av_v_phi-allgdot} we similarly plot $-\langle\dot\theta_i\rangle/\dot\gamma$ vs $\phi$, but now showing results for multiple different strain rates $\dot\gamma$, for the two particular cases of moderately extended rods, with $\alpha=4$, and nearly circular particles, with $\alpha=0.01$. We see, as mentioned above, that the $\dot\gamma$ dependence of the angular velocity increases as one approaches and goes above $\phi_J$, but seems to be approaching a finite limiting value as $\dot\gamma\to 0$. \begin{table}[h!] 
\caption{Strain rate values used for data in Figs.~\ref{dthdg-vs-phiophiJ}, \ref{S2-vs-phiophiJ} and \ref{th2D-vs-phiophiJ}} \begin{center} \begin{tabular}{|c|c|c|} \hline $\alpha$ & $\dot\gamma_1$ & $\dot\gamma_2$ \\ \hline 0.001 & $1\times 10^{-7}$ & $4\times10^{-7}$ \\ 0.01 & $4\times 10^{-7}$ & $1\times10^{-6}$ \\ $\alpha\ge0.06$ & $1\times 10^{-5}$ & $4\times 10^{-5}$ \\ \hline \end{tabular} \end{center} \label{tab1} \end{table}% \begin{figure} \centering \includegraphics[width=3.5in]{av_v_phi-allgdot} \caption{Average particle angular velocity $-\langle\dot\theta_i\rangle/\dot\gamma$ vs packing $\phi$ for different strain rates $\dot\gamma$, for spherocylinders of asphericity (a) $\alpha=4$ and (b) $\alpha=0.01$. Vertical dashed lines indicate the location of the jamming transitions, $\phi_J=0.906$ and $\phi_J=0.845$, respectively. } \label{av_v_phi-allgdot} \end{figure} There are several obvious features to note in Figs.~\ref{dthdg-vs-phiophiJ} and \ref{av_v_phi-allgdot}: (i) The angular velocity $-\langle \dot\theta_i\rangle/\dot\gamma$ is non-monotonic in $\phi$, initially decreasing as $\phi$ increases from the dilute limit, reaching a minimum at a $\phi_{\dot\theta\,\mathrm{min}}$ close to but below the jamming $\phi_J$, and then increasing again as $\phi$ further increases towards $\phi_J$ and goes above. As $\alpha$ decreases, this variation in $-\langle \dot\theta_i\rangle/\dot\gamma$ gets squeezed into a narrower range of $\phi$, closer to $\phi_J$. One of our main objectives in this work will be to understand the physical origin of this non-monotonic behavior. (ii) For small $\alpha$, at both small $\phi$ and large $\phi>\phi_J$, the angular velocity $-\langle \dot\theta_i\rangle/\dot\gamma\approx 1/2$, the value expected for perfectly circular particles. 
However, even for the very nearly circular particles with $\alpha=0.001$, the dip in $-\langle \dot\theta_i\rangle/\dot\gamma$ at $\phi_{\dot\theta\,\mathrm{min}}$ remains sizable, about $20\%$ below 1/2. The main result of our earlier Letter \cite{MKOT} was to argue that this dip remains finite in the $\alpha\to 0$ limit approaching circular disks. In this work we will provide further understanding of what causes this singular behavior as $\alpha\to 0$. (iii) In the dilute limit at small $\phi$, the angular velocity $-\langle \dot\theta_i\rangle/\dot\gamma$ is decreasing as $\phi$ increases, which is the opposite of the behavior seen in Fig.~\ref{noisy}(d) for the noisy isolated particle model. Thus one should not regard the elastic collisions in the dilute ``gas" limit as behaving simply like an effective temperature. Finally, we make one last point concerning the angular velocity. Since our system is bidisperse in particle size, one can separately compute the average angular velocity for big particles as compared to small particles. In Figs.~\ref{av-bs}(a) and \ref{av-bs}(b) we plot these for spherocylinders with $\alpha=4$ and 0.01, respectively. Not surprisingly, we see that big particles rotate more slowly than the average, while small particles rotate more quickly. \begin{figure} \centering \includegraphics[width=3.5in]{av-bs} \caption{Average angular velocity $-\langle\dot\theta_i\rangle/\dot\gamma$ vs $\phi$ for big and small particles separately, for spherocylinders with (a) $\alpha=4$ at $\dot\gamma=10^{-5}$ and (b) $\alpha=0.01$ at $\dot\gamma=10^{-6}$. The average over all particles is given by the dashed line. } \label{av-bs} \end{figure} \subsection{Nematic Orientational Ordering} \label{sS2} In this section we consider the orientational ordering of the interacting particles. 
For a system in $d$ dimensions, the nematic order parameter $\mathbf{S}_2$ can be obtained from the traceless, symmetric, ordering tensor of an $N$ particle configuration, \begin{equation} \mathbf{T}=\dfrac{d}{(d-1)N}\sum_{i=1}^N\left[\boldsymbol{\hat\ell}_i\otimes\boldsymbol{\hat\ell}_i-\dfrac{1}{d}\mathbf{I}\right], \end{equation} where $\boldsymbol{\hat\ell}_i$ is a unit vector that lies along the spine of particle $i$, and $\mathbf{I}$ is the identity tensor. The magnitude $S_2$ of the nematic order parameter is given by the largest eigenvalue of $\mathbf{T}$, and the corresponding eigenvector $\boldsymbol{\hat\ell}_2$ gives the orientation of the nematic director. We will define the nematic order parameter as $\mathbf{S}_2=S_2\boldsymbol{\hat\ell}_2$. For our system in $d=2$ dimensions, the angle of $\boldsymbol{\hat\ell}_2$ with respect to the flow direction $\mathbf{\hat x}$ will define the orientation angle $\theta_2$ of the nematic director. We define the instantaneous nematic order parameter, given by $S_2(\gamma)$ and $\theta_2(\gamma)$, in terms of the tensor $\mathbf{T}(\gamma)$ for the specific configuration of the system after a total strain $\gamma$. We define the ensemble averaged nematic order parameter, given by $S_2$ and $\theta_2$, in terms of the ensemble averaged tensor $\langle\mathbf{T}\rangle$, which is an average over configurations in the steady state. Note that while $\langle\mathbf{T}\rangle$ is a linear average over the instantaneous $\mathbf{T}(\gamma)$, the same is not in general true of $S_2$ and $\theta_2$ because of variations in the eigenvector directions of $\mathbf{T}(\gamma)$, due either to fluctuations about a steady-state, or to possible systematic variations of $\mathbf{T}(\gamma)$ with $\gamma$. For a $d=2$ dimensional system, one can show that the above definitions for $S_2$ and $\theta_2$ are equivalent to generalizations of Eqs.~(\ref{eS2iso0})-(\ref{eS2iso2}). 
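The claimed equivalence between the tensor definition and the direct angle averages of Eqs.~(\ref{eS2iso1})--(\ref{eS2iso2}) is easy to verify numerically in $d=2$. The following sketch (Python; the configuration is a synthetic set of angles, not simulation data) diagonalizes $\mathbf{T}$ and compares with the direct averages; the two routes agree to machine precision, with the director defined modulo $\pi$:

```python
import numpy as np

def nematic_from_tensor(theta):
    """S2 and theta2 from the d=2 ordering tensor
    T = (2/N) sum_i [ l_i (x) l_i - I/2 ], with l_i = (cos th_i, sin th_i)."""
    l = np.column_stack((np.cos(theta), np.sin(theta)))
    T = 2.0*np.mean(l[:, :, None]*l[:, None, :], axis=0) - np.eye(2)
    w, v = np.linalg.eigh(T)          # eigenvalues in ascending order
    e = v[:, -1]                      # eigenvector of the largest eigenvalue
    return w[-1], np.arctan2(e[1], e[0]) % np.pi

def nematic_direct(theta):
    """S2 and theta2 from the direct angle averages."""
    c, s = np.mean(np.cos(2*theta)), np.mean(np.sin(2*theta))
    return np.hypot(c, s), (0.5*np.arctan2(s, c)) % np.pi

# synthetic configuration: particles roughly aligned near 0.3 rad
rng = np.random.default_rng(1)
th = 0.3 + 0.5*rng.standard_normal(1000)

S2_t, th2_t = nematic_from_tensor(th)
S2_d, th2_d = nematic_direct(th)
```

The agreement follows because in $d=2$ one has $T_{xx}=\langle\cos 2\theta\rangle$ and $T_{xy}=\langle\sin 2\theta\rangle$, so the largest eigenvalue of the traceless symmetric $\mathbf{T}$ is exactly $\sqrt{\langle\cos 2\theta\rangle^2+\langle\sin 2\theta\rangle^2}$.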
For a given configuration after total strain $\gamma$ we have for the instantaneous order parameter, \begin{equation} S_2(\gamma)=\max_{\theta^\prime}\left[\frac{1}{N}\sum_{i=1}^N\cos (2[\theta_i-\theta^\prime])\right], \label{eS2g0} \end{equation} with $\theta_2(\gamma)$ being the maximizing value of $\theta^\prime$. From this one can show \cite{Torquato} that \begin{equation} S_2(\gamma)=\sqrt{\left[\frac{1}{N}\sum_{i=1}^N \cos (2\theta_i)\right]^2 +\left[\frac{1}{N}\sum_{i=1}^N \sin (2\theta_i)\right]^2} \label{eS2g1} \end{equation} and \begin{equation} \tan [2\theta_2(\gamma)] = {\left[ \displaystyle{\frac{1}{N}\sum_{i=1}^N\sin (2\theta_i)}\right]}\bigg/ {\left[ \displaystyle{\frac{1}{N}\sum_{i=1}^N\cos (2\theta_i)}\right]}. \label{eS2g2} \end{equation} The ensemble averaged order parameter, given by $S_2$ and $\theta_2$, are similarly obtained, except by replacing the large square brackets $[\dots]$ in Eqs.~(\ref{eS2g0})-(\ref{eS2g2}), which represent sums over particles in a particular configuration, by ensemble averages $\langle\dots\rangle$ over the many different configurations in the steady-state. \subsubsection{Time Dependence of Nematic Ordering} \label{sTNO} The athermal shearing of aspherical rod-shaped particles has been compared to the thermalized shearing of nematic liquid crystals \cite{Borzsonyi1,Borzsonyi2,Wegner}. In the latter case, several different types of behavior may occur depending on material parameters \cite{Jenkins,Larson2,Hess,Larson}. The system may settle into a steady-state with constant $S_2$ and $\theta_2$; the system may ``tumble," with the orientation of the nematic director $\theta_2$ rotating through $\pi$ over a well defined period; or the system might show ``wagging," in which $\theta_2$ has periodic variations back and forth within a fixed interval without rotating. We thus wish to investigate whether such time varying behavior exists in our athermal system. 
Given that we do find that individual particles continue to rotate even as the system gets dense, is there any coherent rotation of particles that would lead to a systematic variation of $\mathbf{S}_2(\gamma)$ with $\gamma$? For our 2D spherocylinders we do indeed see both tumbling and wagging of the nematic director; however, we believe that these occur only as transient effects due to poor equilibration of the rotational degrees of freedom, either because the density $\phi$ is so small that collisions are rare, or because $\alpha$ is so small that small moment arms lead to small elastic torques and so take long times to reach proper equilibration. \begin{figure} \centering \includegraphics[width=3.5in]{S2-v-g-a4} \caption{For spherocylinders of asphericity $\alpha=4$ at $\dot\gamma=10^{-5}$: instantaneous (a) magnitude $S_2(\gamma)$ and (b) orientation $\theta_2(\gamma)$ of the nematic order parameter vs total strain $\gamma=\dot\gamma t$, for several different packing fractions $\phi$. Horizontal dotted lines indicate the ensemble averaged values of $S_2$ and $\theta_2$. } \label{S2-v-g-a4} \end{figure} In Fig.~\ref{S2-v-g-a4} we plot the instantaneous $S_2(\gamma)$ and $\theta_2(\gamma)$ vs total strain $\gamma=\dot\gamma t$, for spherocylinders of $\alpha=4$ at $\dot\gamma=10^{-5}$ for a few different packings $\phi$. Our shearing starts from a random initial configuration for which $S_2(0)\approx 0$. For the very small $\phi=0.1$ we see damped oscillations in both $S_2(\gamma)$ and $\theta_2(\gamma)$ with a period $\Delta\gamma\approx 16.1$, almost equal to the period $16.04$ of an isolated particle. The behavior of $\theta_2(\gamma)$ identifies this as a wagging of the order parameter. As $\gamma$ increases, the amplitude of these oscillations decays, but the periodicity remains. For $\phi=0.3$, the behavior at small $\gamma$ is similar to that at $\phi=0.1$, but the amplitude of the oscillations dies out faster.
At larger $\gamma$ there is no longer any remnant of the initial periodic behavior, and $S_2(\gamma)$ and $\theta_2(\gamma)$ show only random fluctuations about the ensemble averaged values $S_2$ and $\theta_2$. For larger $\phi$, the initial transient dies out even more quickly. \begin{figure} \centering \includegraphics[width=3.5in]{S2-v-g-a01} \caption{For spherocylinders of asphericity $\alpha=0.01$ at $\dot\gamma=10^{-6}$: instantaneous (a) magnitude $S_2(\gamma)$ and (b) orientation $\theta_2(\gamma)$ of the nematic order parameter vs total strain $\gamma=\dot\gamma t$ for several different packing fractions $\phi$. Horizontal dotted lines indicate the ensemble averaged values $S_2$ and $\theta_2$; for $\phi=0.77$ this average is taken only over the latter part of the run $\gamma>125$. } \label{S2-v-g-a01} \end{figure} In Fig.~\ref{S2-v-g-a01} we show similar plots of $S_2(\gamma)$ and $\theta_2(\gamma)$, but now for particles of $\alpha=0.01$ at $\dot\gamma=10^{-6}$. For the smallest $\phi=0.77$ shown we see strong oscillations in $S_2(\gamma)$, and $\theta_2(\gamma)$ initially makes full clockwise rotations with a period $\Delta\gamma\approx 6.7$, close to the period $6.28$ for an isolated particle. As $\gamma$ increases, the rotations become a wagging and the amplitude of the oscillations in $S_2(\gamma)$ decreases, but there remains a clear periodic behavior. For $\phi=0.81$ there are no longer any initial rotations, but the wagging continues with a small erratic amplitude but definite periodicity out to the largest $\gamma$. For $\phi=0.83$ and above, we see only random fluctuations about the ensemble averaged values. We conclude from Figs.~\ref{S2-v-g-a4} and \ref{S2-v-g-a01} that the rotating and wagging of the nematic order parameter $\mathbf{S}_2$ are only transient effects that should die out if the simulation is run long enough, rather than being stable periodic motions of the macroscopic order parameter. 
\subsubsection{Ensemble Averaged Nematic Ordering} \label{sec:NO} \begin{figure} \centering \includegraphics[width=3.5in]{S2-vs-phiophiJ} \caption{Magnitude of the ensemble averaged nematic order parameter $S_2$ vs reduced packing fraction $\phi/\phi_J$ for spherocylinders of different asphericity $\alpha$. For each $\alpha$ we show results for two different small strain rates $\dot\gamma_1$ (solid symbols) $< \dot\gamma_2$ (open symbols) (see Table~\ref{tab1} for values). The vertical dashed line locates the jamming transition $\phi/\phi_J=1$. } \label{S2-vs-phiophiJ} \end{figure} Having argued in the preceding section that we expect no coherent time variation of the instantaneous nematic order parameter $\mathbf{S}_2(\gamma)$ in a well equilibrated system, we turn now to consider the ensemble averaged nematic order parameter, given by its magnitude $S_2$ and orientation angle $\theta_2$. In an earlier Letter \cite{MKOT} we plotted the ensemble averaged $S_2$ vs the packing $\phi$ for spherocylinders of different aspect ratios. In Fig.~\ref{S2-vs-phiophiJ} we reproduce those results for asphericities $\alpha=0.001$ to 4, but now plotting vs the reduced packing fraction $\phi/\phi_J$. For each $\alpha$ we show results at two different strain rates $\dot\gamma_1<\dot\gamma_2$, whose values are given in Table~\ref{tab1}, to demonstrate that our results are in the quasistatic limit where $S_2$ becomes independent of $\dot\gamma$, except for the largest $\phi$ approaching and going above jamming. In Fig.~\ref{S2_v_phi_a4-01} we similarly plot $S_2$ vs $\phi$, but now showing results for a wider range of strain rates $\dot\gamma$, for the two particular cases $\alpha=4$ and $\alpha=0.01$. We see that the dependence of $S_2$ on $\dot\gamma$ is strongest near the jamming transition, but that $S_2$ appears to be approaching a finite limit as $\dot\gamma\to 0$. 
\begin{figure} \centering \includegraphics[width=3.5in]{S2_v_phi_a4-01} \caption{Magnitude of the ensemble averaged nematic order parameter $S_2$ vs packing fraction $\phi$ at different strain rates $\dot\gamma$, for spherocylinders of asphericity (a) $\alpha=4$ and (b) $\alpha=0.01$. Vertical dashed lines locate the jamming transitions, $\phi_J=0.906$ and $\phi_J=0.845$, respectively. } \label{S2_v_phi_a4-01} \end{figure} Similar to what we observed for the angular velocity $-\langle\dot\theta_i\rangle/\dot\gamma$ in Figs.~\ref{dthdg-vs-phiophiJ} and \ref{av_v_phi-allgdot}, our results for $S_2$ show several significant features: (i) As was found for $-\langle\dot\theta_i\rangle/\dot\gamma$, $S_2$ is non-monotonic in $\phi$, reaching a maximum at $\phi_{S_2\,\mathrm{max}}$ somewhat below the jamming $\phi_J$. As was found for an isolated particle in Fig.~\ref{S2-vs-C}(a), comparing Figs.~\ref{dthdg-vs-phiophiJ} and \ref{S2-vs-phiophiJ} we see an anti-correlation between angular velocity and nematic ordering; roughly speaking, when $-\langle\dot\theta_i\rangle/\dot\gamma$ decreases $S_2$ increases, and vice versa. In Fig.~\ref{phi-extreme} we plot $\phi_{S_2\,\mathrm{max}}$, the location of the maximum in $S_2$, and $\phi_{\dot\theta\,\mathrm{min}}$, the location of the minimum in $-\langle\dot\theta_i\rangle/\dot\gamma$, vs $\alpha$. We see that they are close and become roughly equal for $\alpha\lesssim 0.5$. (ii) As $\alpha$ decreases, the variation in $S_2$ gets squeezed into an increasingly narrow range of $\phi$, closer to $\phi_J$, and the degree of ordering $S_2$ decreases. However, even for the very nearly circular particles with $\alpha=0.001$, the maximum value $S_{2\,\mathrm{max}}=0.33$ remains relatively large. This is another reflection of the singular $\alpha\to 0$ limit, discussed above in connection with the angular velocity $-\langle\dot\theta_i\rangle/\dot\gamma$, and reported in our earlier letter \cite{MKOT}. 
(iii) In the dilute limit at small $\phi$, we see $S_2$ is increasing as $\phi$ increases, which is the opposite of the behavior seen in Fig.~\ref{noisy}(c) for the noisy isolated particle. Thus, as we concluded also from the behavior of $-\langle\dot\theta_i\rangle/\dot\gamma$, one cannot regard the elastic collisions in the dilute ``gas" limit as behaving similarly to an effective temperature. In subsequent sections we will develop an understanding of the behaviors (i) and (ii). \begin{figure} \centering \includegraphics[width=3.5in]{phi-extreme} \caption{Location $\phi_{S_2\,\mathrm{max}}$ of the maximum in the nematic order parameter $S_2$ of Fig.~\ref{S2-vs-phiophiJ}, and location $\phi_{\dot\theta\,\mathrm{min}}$ of the minimum in the angular velocity $-\langle\dot\theta_i\rangle/\dot\gamma$ of Fig.~\ref{dthdg-vs-phiophiJ}, vs particle asphericity $\alpha$. } \label{phi-extreme} \end{figure} Next we consider the orientation angle $\theta_2$ of the nematic director. In Fig.~\ref{th2D-vs-phiophiJ} we plot $\theta_2$ vs the reduced packing $\phi/\phi_J$ for different asphericities $\alpha$, showing results for the two values of strain rate $\dot\gamma_1<\dot\gamma_2$ (see Table~\ref{tab1} for values). For an isolated particle, $\theta_2=0$, indicating average alignment parallel to the flow direction $\mathbf{\hat x}$. As $\phi$ increases from this small $\phi$ isolated particle limit, we see that $\theta_2$ initially goes negative. Increasing $\phi$ further, $\theta_2$ increases, becomes positive, and upon approaching $\phi_J$ saturates to a value that increases towards $45^\circ$ as $\alpha$ decreases; as $\phi$ gets close to and goes above $\phi_J$, we see a slight decrease in $\theta_2$. \begin{figure} \centering \includegraphics[width=3.5in]{th2D-vs-phiophiJ} \caption{Orientation of the ensemble averaged nematic order parameter $\theta_2$ vs reduced packing fraction $\phi/\phi_J$ for spherocylinders of different asphericity $\alpha$. 
For each $\alpha$ we show results for two different small strain rates $\dot\gamma_1$ (solid symbols) $< \dot\gamma_2$ (open symbols) (see Table~\ref{tab1} for values). The vertical dashed line locates the jamming transition $\phi/\phi_J=1$, the horizontal dashed line denotes $\theta_2=45^\circ$, while the horizontal solid line denotes $\theta_2=0$. } \label{th2D-vs-phiophiJ} \end{figure} While at very small packing $\phi$ the particles tend to align with the flow direction, one might think that, as the particle packing increases, the nematic director would align with the direction of minimal stress. However we find that this is in general not so. If $p$ is the pressure and $\sigma$ is the deviatoric shear stress, the orthogonal eigenvectors of the stress tensor, corresponding to eigenvalues $p\pm\sigma$, are oriented at angles $\theta_\pm$ with respect to the flow direction $\mathbf{\hat x}$. In an earlier work \cite{MT1} we computed the angle of the minimum stress eigenvector, $\theta_-$. At small $\phi$ for any $\alpha$ we find $\theta_-\approx 45^\circ$, as it would be for a uniformly sheared continuum. At dense $\phi$, near and above jamming, we find that $\theta_- \to 45^\circ$ as $\alpha\to 0$, but otherwise decreases from $45^\circ$ as $\alpha$ increases. In between, $\theta_-$ can vary non-monotonically as $\phi$ increases. In Fig.~\ref{th2X-vs-phiophiJ} we plot $\theta_2-\theta_-$ vs $\phi$ for different $\alpha$, at the strain rate $\dot\gamma_1$ (see Table~\ref{tab1} for values). We see that only for the smaller values $\alpha\lesssim 0.25$, and only approaching $\phi_J$ and going above, do we find $\theta_2\approx \theta_-$, i.e. the nematic order parameter is aligning close to the minimum stress direction.
\begin{figure} \centering \includegraphics[width=3.5in]{th2X-vs-phiophiJ} \caption{Difference between nematic order parameter orientation $\theta_2$ and the orientation of the minimal stress eigenvector $\theta_-$, vs reduced packing fraction $\phi/\phi_J$ for spherocylinders of different asphericity $\alpha$ at small strain rates $\dot\gamma_1$ (see Table~\ref{tab1} for values). The vertical dashed line locates the jamming transition $\phi/\phi_J=1$, the horizontal dashed line denotes $\theta_2-\theta_-=-45^\circ$, and the horizontal solid line denotes $\theta_2-\theta_-=0$. } \label{th2X-vs-phiophiJ} \end{figure} In Appendix \ref{aOrientations} we discuss further properties of particle orientations. By considering the distribution of particle orientations $\mathcal{P}(\theta_i)$, we show that the angle $\theta_2$ of the nematic order parameter is in general not equal to the most likely particle orientation, determined by the maximum in $\mathcal{P}(\theta_i)$, although the two are close. \subsection{Time Dependence of Particle Rotations} \label{stimedep} A principal result of the preceding two sections is the observation that $-\langle\dot\theta_i\rangle/\dot\gamma$ and $S_2$ both vary non-monotonically as the packing $\phi$ increases. In this section we provide a physical understanding of this behavior by demonstrating that the minimum in $-\langle\dot\theta_i\rangle/\dot\gamma$ represents a crossover from small packings $\phi$, where particle rotations are qualitatively like the periodic rotations of an isolated particle (perturbed by inter-particle collisions), to large packings $\phi$, where the geometry of the dense packing becomes the dominant factor influencing rotations, which then behave similarly to a random Poisson process. We will show this by considering the distribution of strain intervals $\Delta\gamma$ between successive rotations of a particle by $\pi$.
In Sec.~\ref{sAV} we discussed the average angular velocity of individual particle rotations, $-\langle\dot\theta_i\rangle/\dot\gamma$. Now we consider the time evolution of a particle's rotation. We consider first the case of elongated particles with $\alpha=4$. In Fig.~\ref{theta-a4} we plot $\theta_i(\gamma)$ vs $\gamma=\dot\gamma t$ for six randomly selected particles, three big and three small, at several different packing fractions $\phi$ and $\dot\gamma=10^{-5}$. The average motion, $\theta_i = [\langle\dot\theta_i\rangle/\dot\gamma]\gamma$, is indicated by the dashed diagonal line. Comparing Fig.~\ref{theta-a4} with the corresponding curve for an isolated particle shown in Fig.~\ref{theta-vs-g}(a), we see a general similarity: There are plateaus near integer values $\theta_i=-n\pi$, separated by regions where $\theta_i$ rapidly transitions by an amount $-\pi$, representing a clockwise flipping of the orientation of the particle. Upon further inspection, however, there are two important differences. For the case of the isolated particle in Fig.~\ref{theta-vs-g}(a), the plateaus show a small downwards slope due to the finite angular velocity $\dot\theta_i/\dot\gamma =d\theta_i/d\gamma =- f(0)=-[1-(\Delta I_i/I_i)]/2$ when the particle is oriented parallel to the flow. In Fig.~\ref{theta-a4}, however, the plateaus appear on average to be mostly flat. For the isolated particle, the jumps in $\theta_i$ by $-\pi$, as the particle flips orientation, occur in a perfectly periodic fashion. In Fig.~\ref{theta-a4}, however, the timing between such jumps appears to be more random. In the densest system at $\phi=0.95 >\phi_J$, shown in Fig.~\ref{theta-a4}(d), we also see that particle 1 makes a counterclockwise flip of $+\pi$ at small $\gamma$; however, for $\alpha=4$ these counterclockwise flips are rare events, occurring infrequently for $\phi=0.95$, and even less so for smaller $\phi$, over the length of our simulations.
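The isolated-particle reference behavior can be reproduced with a few lines of numerical integration. The sketch below (illustrative, not our simulation code) integrates $d\theta/d\gamma = -f(\theta) = -[1-(\Delta I_i/I_i)\cos 2\theta]/2$ through one rotation by $-\pi$ and recovers the analytic strain period $2\pi/\sqrt{1-(\Delta I_i/I_i)^2}$; the value of $\Delta I_i/I_i$ is an assumed choice, picked only so that the period comes out near the $\Delta\gamma\approx 16$ flip interval found for $\alpha=4$:

```python
import numpy as np

# Isolated particle under simple shear: d(theta)/d(gamma) = -f(theta),
# f(theta) = [1 - a*cos(2 theta)]/2 with a = dI/I (assumed value below).
a = 0.92                  # illustrative dI/I, giving a period near 16
dgamma = 1e-4
theta, gamma = 0.0, 0.0
while theta > -np.pi:     # integrate one clockwise rotation by pi
    theta += -0.5 * (1.0 - a * np.cos(2.0 * theta)) * dgamma
    gamma += dgamma

period_exact = 2.0 * np.pi / np.sqrt(1.0 - a * a)   # analytic strain period
# gamma matches period_exact: the particle lingers near theta = 0 (the slow
# plateau, angular velocity (1-a)/2) and flips quickly through theta = -pi/2.
```

The slow plateau and fast flip visible in the trajectory are exactly the structure seen in Fig.~\ref{theta-vs-g}(a).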
In Fig.~\ref{theta-a4} we see that the average value of $\theta_i$ on these plateaus lies slightly above the values $-n\pi$ at the larger values of $\phi$; the particles are thus at some small finite angle $[\theta_i \text{ modulo } \pi]>0$ with respect to the flow direction. This is a consequence of the increasing orientation angle of the nematic director $\theta_2$ as $\phi$ increases, as shown in Fig.~\ref{th2D-vs-phiophiJ}. We also see that the fluctuations about the plateaus tend to increase as $\phi$ increases. This is a consequence of the broadening of the distribution of orientations $\mathcal{P}(\theta_i)$ as $\phi$ increases, as shown in Appendix~\ref{aOrientations}. \begin{figure} \centering \includegraphics[width=3.5in]{theta-a4} \caption{For spherocylinders of asphericity $\alpha=4$ at strain rate $\dot\gamma=10^{-5}$, particle orientation $\theta_i$ vs net strain $\gamma=\dot\gamma t$ for six randomly selected particles at packings (a) $\phi=0.50$, (b) $\phi=0.80$, (c) $\phi=0.905\approx \phi_J$, and (d) $\phi=0.95$. In each case particles 1, 2 and 3 are big particles, while 4, 5 and 6 are small particles. The diagonal dashed lines indicate the average rotation, $\theta_i=[\langle\dot\theta_i\rangle/\dot\gamma]\gamma$. } \label{theta-a4} \end{figure} Measuring the strain $\Delta\gamma$ between two successive rotational flips of a particle by $-\pi$, we plot the distribution $\mathcal{P}_\gamma(\Delta\gamma)$ vs $\Delta\gamma$ for different $\phi$ at fixed $\dot\gamma=10^{-5}$ in Fig.~\ref{flipHist-a4}(a). For the smaller values of $\phi$ we find that $\mathcal{P}_\gamma$ peaks at the value $\Delta\gamma\approx 16$, which is the same as the strain interval between the periodic flips by $-\pi$ for an isolated particle, as seen in Fig.~\ref{theta-vs-g}(a); however as $\phi$ increases, the distribution broadens and is increasingly skewed towards values on the large $\Delta\gamma$ side of the peak. 
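Measuring these flip intervals requires locating the strains at which a trajectory $\theta_i(\gamma)$ completes a rotation by $-\pi$. One simple way, sketched below on a synthetic trajectory (the flip strains and step width are made up for illustration; this is not simulation data), is to detect downward crossings of the half-integer levels $-(n+1/2)\pi$:

```python
import numpy as np

def flip_strains(gamma, theta):
    """Strains at which theta(gamma) crosses successive levels -(n+1/2)*pi
    while rotating clockwise; differences give the flip intervals dgamma."""
    level = np.floor(-theta / np.pi + 0.5).astype(int)
    crossings = np.nonzero(np.diff(level) > 0)[0] + 1
    return gamma[crossings]

# Synthetic trajectory: plateaus at -n*pi separated by quick -pi flips,
# with made-up flip strains at gamma = 20, 45, 60, 90.
gamma = np.linspace(0.0, 100.0, 100001)
theta = np.zeros_like(gamma)
for g0 in [20.0, 45.0, 60.0, 90.0]:
    theta += -np.pi / (1.0 + np.exp(-(gamma - g0) / 0.5))   # smooth -pi step

g_cross = flip_strains(gamma, theta)      # ~ [20, 45, 60, 90]
dgamma = np.diff(g_cross)                 # flip intervals ~ [25, 15, 30]
```

Histogramming the resulting intervals over all particles and long runs gives a distribution of the same kind as $\mathcal{P}_\gamma(\Delta\gamma)$.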
As $\phi$ increases further, we see that the location of the peak in $\mathcal{P}_\gamma$ steadily shifts to smaller values of $\Delta\gamma$ and the large $\Delta\gamma$ tail of the distribution becomes exponential, as seen by the roughly linear decrease of the distributions on our semi-log plot. This exponential distribution of waiting times between flips, $\Delta t=\Delta\gamma/\dot\gamma$, suggests that at large $\phi$ particle flips are a Poisson-like process, and that, aside from an initial waiting time corresponding to the rise of $\mathcal{P}_\gamma$ to its peak, the time until the next particle flip is independent of how long the particle has spent since its last flip. Thus, unlike the case of an isolated particle for which the particle undergoes periodic rotation with a non-uniform angular velocity, here our results suggest a scenario in which, as the particle density increases, the reduced free volume between particles blocks particle rotations, leaving particles to spend most of their time having small angular deflections about a fixed value. Then, after some random strain $\Delta\gamma$, a local rearrangement appears that allows the particle to rotate rapidly through $\Delta\theta_i=-\pi$. The exponential distribution of the waiting times implies that the appearance of such local rearrangements is uncorrelated, except for a minimal waiting time. \begin{figure} \centering \includegraphics[width=3.5in]{flipHist-a4} \caption{For spherocylinders of asphericity $\alpha=4$ at strain rate $\dot\gamma=10^{-5}$: (a) Distribution $\mathcal{P}_\gamma(\Delta\gamma)$ of the strain interval $\Delta\gamma=\dot\gamma\Delta t$ between successive clockwise rotations of a particle by $\pi$ for different packings $\phi$.
(b) With $\Delta\gamma_0$ obtained from fitting the exponentially decaying large $\Delta\gamma$ tail of $\mathcal{P}_\gamma$ to $\exp[-\Delta\gamma/\Delta\gamma_0]$, a comparison of $\pi/\Delta\gamma_0$ vs the average particle angular velocity $-\langle\dot\theta_i\rangle/\dot\gamma$. The vertical dashed line locates the jamming $\phi_J$. } \label{flipHist-a4} \end{figure} Fitting the large $\Delta\gamma$ tail of the distribution to $\mathcal{P}_\gamma\propto\exp[-\Delta\gamma/\Delta\gamma_0]$, we determine the rate of particle flips $1/\Delta\gamma_0$. This rate, which is just the slope of the linearly decreasing distributions in the semi-log plot of Fig.~\ref{flipHist-a4}(a), is seen to be non-monotonic in $\phi$, reaching a minimum value near $\phi\approx 0.80$. In Fig.~\ref{flipHist-a4}(b) we plot this rate as $\pi/\Delta\gamma_0$ vs $\phi$ and compare it to the average angular velocity $-\langle\dot\theta_i\rangle/\dot\gamma$, shown previously in Fig.~\ref{av_v_phi-allgdot}(a). If the $\mathcal{P}_\gamma$ were exactly exponential distributions, these two curves would be equal. But $\mathcal{P}_\gamma$ is not precisely exponential, due to the waiting time represented by the rise of $\mathcal{P}_\gamma$ to its peak value. Because of this waiting time we expect $\langle\Delta\gamma\rangle > \Delta\gamma_0$, and so $-\langle\dot\theta_i\rangle/\dot\gamma=\pi/\langle\Delta\gamma\rangle$ will lie below $\pi/\Delta\gamma_0$, as we indeed find to be the case. Nevertheless we see that at the larger $\phi$, $\pi/\Delta\gamma_0$ behaves qualitatively the same as $-\langle\dot\theta_i\rangle/\dot\gamma$, with a similar minimum around $\phi_{\dot\theta\,\mathrm{min}}\approx 0.80$; the difference between the two curves becomes greatest as $\phi$ decreases below the minimum. We thus form the following picture. 
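The tail-fitting step used here can be sketched as follows, with synthetic waiting-time data (a shifted exponential with made-up parameters standing in for the measured $\mathcal{P}_\gamma$) rather than our simulation output:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic flip intervals: a minimal waiting time plus an exponential tail,
# mimicking the qualitative shape of P_gamma at large phi.  The parameters
# dg_min and dg0 are made up for illustration.
dg_min, dg0 = 2.0, 8.0
samples = dg_min + rng.exponential(dg0, size=200_000)

# Histogram, then fit the large-interval tail to exp(-dgamma/dgamma0) via a
# straight-line fit to log P_gamma, i.e. the slope on a semi-log plot.
counts, edges = np.histogram(samples, bins=60, range=(0.0, 60.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
tail = centers > 10.0                     # fit well past the peak
slope, _ = np.polyfit(centers[tail], np.log(counts[tail]), 1)
dgamma0_fit = -1.0 / slope                # recovers ~ dg0 = 8

# Because of the minimal wait, <dgamma> = dg_min + dg0 > dgamma0, so
# pi/<dgamma> lies below pi/dgamma0.
```

The final comment is the same ordering argued in the text: the rise of $\mathcal{P}_\gamma$ to its peak makes $\langle\Delta\gamma\rangle > \Delta\gamma_0$, so $-\langle\dot\theta_i\rangle/\dot\gamma=\pi/\langle\Delta\gamma\rangle$ lies below $\pi/\Delta\gamma_0$.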
At small $\phi$ particles behave similarly to isolated particles, with the typical strain $\Delta\gamma$ between particle flips being roughly equal to that of an isolated particle, but with random fluctuations due to particle collisions; these fluctuations are skewed to larger $\Delta\gamma$, thus causing the decrease in $-\langle\dot\theta_i\rangle/\dot\gamma$. The average $\langle\Delta\gamma\rangle$ at these small $\phi$ is significantly different from the $\Delta\gamma_0$ that describes the large $\Delta\gamma$ tail of the distribution. As $\phi$ increases, however, the flips become more of a Poisson-like process in which the average time until the next particle flip is independent of the time since the last flip. The exponential part of the distribution $\mathcal{P}_\gamma$ dominates the behavior and $\Delta\gamma_0$ gives a qualitative explanation for the average angular velocity $-\langle\dot\theta_i\rangle/\dot\gamma$ in the range of $\phi$ approaching the minimum $\phi_{\dot\theta\,\mathrm{min}}$ and going above. Note that, although we described the rotations by $\pi$ in Figs.~\ref{theta-a4}(c) and \ref{theta-a4}(d) as ``rapid," this is meant as rapid relative to the strain interval $\Delta\gamma$ between successive particle rotations. Upon closer examination, a particle rotation takes place over a typical strain scale of $\delta\gamma\sim 5$; this is roughly the strain needed for particles of tip-to-tip length $5D_s$, in neighboring rows parallel to the flow direction, to slide past one another. Thus the entire configuration has undergone substantial change over the time it takes the particle to rotate; moreover, although as we will argue later there is no long range coherence in particle motion, there are strong correlations in particle motion on short length scales. It is therefore difficult to visually identify the particular configurational fluctuations that facilitate particle rotations.
\begin{figure} \centering \includegraphics[width=3.5in]{theta-a01} \caption{For spherocylinders of asphericity $\alpha=0.01$ at strain rate $\dot\gamma=10^{-6}$, particle orientation $\theta_i$ vs net strain $\gamma=\dot\gamma t$ for six randomly selected particles at packings (a) $\phi=0.81$, (b) $\phi=0.83$, (c) $\phi=0.84\approx \phi_J=0.845$, and (d) $\phi=0.86$. In each case particles 1, 2 and 3 are big particles, while 4, 5 and 6 are small particles. The dashed lines indicate the average rotation, $\theta_i=[\langle\dot\theta_i\rangle/\dot\gamma]\gamma$. } \label{theta-a01} \end{figure} Next we consider the case of nearly circular particles with $\alpha=0.01$. For an isolated particle, $\Delta I_i/I_i=0.0085$ is so small that a plot of $\theta_i$ vs $\gamma$ would look like a straight line of slope $-1/2$; no plateaus are observable to the eye. In Fig.~\ref{theta-a01} we plot $\theta_i(\gamma)$ vs $\gamma=\dot\gamma t$ for six randomly selected particles, three big and three small, at several different packing fractions $\phi$ and $\dot\gamma=10^{-6}$. The average motion, $\theta_i = [\langle\dot\theta_i\rangle/\dot\gamma]\gamma$, is indicated by the dashed diagonal line. For $\phi=0.81$, below the minimum in $-\langle\dot\theta_i\rangle/\dot\gamma$ at $\phi_{\dot\theta\,\mathrm{min}}$ (see Fig.~\ref{av_v_phi-allgdot}(b)), we see in Fig.~\ref{theta-a01}(a) small fluctuations about the isolated particle behavior. For $\phi=0.83\approx \phi_{\dot\theta\,\mathrm{min}}$ in Fig.~\ref{theta-a01}(b), near the minimum in $-\langle\dot\theta_i\rangle/\dot\gamma$, we see larger fluctuations, some small isolated plateaus where particles stay at a fixed orientation, but for the most part particles are rotating nearly uniformly. However, for $\phi=0.84$ in Fig.~\ref{theta-a01}(c), just below the jamming $\phi_J=0.845$, and for $\phi=0.86$ in Fig.~\ref{theta-a01}(d), above $\phi_J$, we see dramatically different behavior.
Fluctuations are now extremely large, and rotation is highly non-uniform. Compared to Fig.~\ref{theta-a4} for $\alpha=4$, here it is hard to identify clear plateaus, and there is considerable counterclockwise rotation (where $\theta_i$ increases with increasing $\gamma$) in addition to clockwise rotation (where $\theta_i$ decreases with increasing $\gamma$). Nevertheless, we can still carry out an analysis of flipping times in analogy with what we did for $\alpha=4$ in Fig.~\ref{flipHist-a4}. If we denote by $\gamma_1$ the strain at which a given particle trajectory first passes through $\theta_i=-(n+1/2)\pi$ upon rotating clockwise, and by $\gamma_2$ the strain at which it next passes through $\theta_i=-(n+3/2)\pi$, then $\Delta\gamma_-=\gamma_2-\gamma_1$ can be taken as the net strain displacement over which the particle has flipped its orientation, rotating clockwise through an angle $\pi$. In a similar way we can determine $\Delta\gamma_+$, the net strain displacement for the particle to flip its orientation rotating counterclockwise through an angle $\pi$. In Figs.~\ref{flipHist-a01}(a) and \ref{flipHist-a01}(b) we plot the distributions $\mathcal{P}_{\gamma}^{-}(\Delta\gamma_-)$ for clockwise flips, and $\mathcal{P}^{+}_{\gamma}(\Delta\gamma_+)$ for counterclockwise flips, respectively, for different packings $\phi$ at $\dot\gamma=10^{-6}$. Despite the qualitative differences in the trajectories $\theta_i(\gamma)$ for $\alpha=0.01$, shown in Fig.~\ref{theta-a01}, from those for $\alpha=4$, shown in Fig.~\ref{theta-a4}, the distribution $\mathcal{P}_\gamma^-$ for $\alpha=0.01$ shows the same qualitative behavior as the $\mathcal{P}_\gamma$ found for $\alpha=4$ in Fig.~\ref{flipHist-a4}(a). For small $\phi\lesssim 0.82$, the peak in $\mathcal{P}_\gamma^-$ lies close to $\Delta\gamma_-\approx 6.3$, which is the same as the strain interval between the periodic rotations by $\pi$ of an isolated particle. 
However, as $\phi$ increases, approaching the minimum in $-\langle\dot\theta_i\rangle/\dot\gamma$ at $\phi_{\dot\theta\,\mathrm{min}}\approx 0.83$, the distribution broadens and an exponential tail appears on the large $\Delta\gamma_-$ side of the peak. As $\phi$ increases above $0.83$ the location of the peak in $\mathcal{P}_\gamma^-$ shifts towards smaller $\Delta\gamma_-$ and the exponential tails grow, until at our largest values of $\phi$ the distribution $\mathcal{P}_\gamma^-$ is almost a pure exponential. Fitting to the large $\Delta\gamma_-$ tail of $\mathcal{P}_\gamma^-$ we determine the exponential rate $1/\Delta\gamma_{0-}$, which is just the slope of the linearly decreasing distributions in the semi-log plot of Fig.~\ref{flipHist-a01}(a). We see that this rate is non-monotonic, having its smallest value at $\phi\approx 0.83\approx \phi_{\dot\theta\,\mathrm{min}}$ where the average angular velocity $-\langle\dot\theta_i\rangle/\dot\gamma$ is minimum. \begin{figure} \centering \includegraphics[width=3.5in]{flipHist-a01} \caption{For spherocylinders of asphericity $\alpha=0.01$ at strain rate $\dot\gamma=10^{-6}$: Distributions (a) $\mathcal{P}^-_\gamma(\Delta\gamma_-)$ for the strain interval $\Delta\gamma_-$ between successive clockwise rotations of a particle by $\pi$ for different packings $\phi$, and (b) $\mathcal{P}^+_\gamma(\Delta\gamma_+)$ for the strain interval $\Delta\gamma_+$ between successive counterclockwise rotations of a particle by $\pi$ for different packings $\phi$. } \label{flipHist-a01} \end{figure} For counterclockwise rotations, we see that the distributions of $\mathcal{P}_\gamma^+$, shown in Fig.~\ref{flipHist-a01}(b), are close to exponential, with a rate that rapidly decreases as $\phi$ decreases from above jamming towards the $\phi_{\dot\theta\,\mathrm{min}}\approx0.83$ that locates the minimum in $-\langle\dot\theta_i\rangle/\dot\gamma$.
For $\phi<0.835$, counterclockwise rotations are so rare over the length of our simulation runs that we are unable to determine the distribution $\mathcal{P}_\gamma^+$ at such small $\phi$. For $\phi\ge0.835$ we fit the large $\Delta\gamma_+$ tails of $\mathcal{P}_\gamma^+$ to determine the exponential rate $1/\Delta\gamma_{0+}$. In Fig.~\ref{flipRate-a01}(a) we plot the clockwise and counterclockwise rates as $\pi/\Delta\gamma_{0-}$ and $\pi/\Delta\gamma_{0+}$ vs $\phi$. As found for $\pi/\Delta\gamma_0$ for $\alpha=4$ in Fig.~\ref{flipHist-a4}(b), we see that $\pi/\Delta\gamma_{0-}$ has a minimum at $\phi=0.83\approx \phi_{\dot\theta\,\mathrm{min}}$ where $-\langle\dot\theta_i\rangle/\dot\gamma$ is minimum. In contrast, $\pi/\Delta\gamma_{0+}$ becomes small, and perhaps vanishes, as $\phi\to0.83$ from above. If the distributions $\mathcal{P}_\gamma^-$ and $\mathcal{P}_\gamma^+$ were exactly exponential, then the average angular velocity would just be $(\pi/\Delta\gamma_{0-})-(\pi/\Delta\gamma_{0+})$. In Fig.~\ref{flipRate-a01}(b) we compare this quantity with the exactly computed $-\langle\dot\theta_i\rangle/\dot\gamma$, plotting both vs the packing $\phi$. As for the case of spherocylinders with $\alpha=4$, shown in Fig.~\ref{flipHist-a4}(b), we see that these two curves qualitatively agree upon approaching the minimum at $\phi_{\dot\theta\,\mathrm{min}}=0.83$ and going above, but they quickly separate as $\phi$ decreases below 0.83. As with $\alpha=4$, the difference between the two curves results from the fact that the distributions $\mathcal{P}_\gamma^-$ and $\mathcal{P}_\gamma^+$ are not exactly exponential, with $\langle\Delta\gamma_{\pm}\rangle > \Delta\gamma_{0\pm}$ due to the rise of the distributions to their peak at a finite $\Delta\gamma_\pm$; this difference becomes most pronounced at the smaller $\phi < 0.83$.
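The clockwise/counterclockwise bookkeeping defined above amounts to tracking level crossings in both directions. The sketch below does this on a synthetic trajectory (the flip strains and signs are made up for illustration, not taken from simulation data); the difference of the two counts sets the net rotation:

```python
import numpy as np

def signed_flips(gamma, theta):
    """Count clockwise (-pi) and counterclockwise (+pi) flips by tracking
    crossings of the half-integer levels -(n+1/2)*pi in either direction."""
    level = np.floor(-theta / np.pi + 0.5).astype(int)
    dlev = np.diff(level)
    cw = int(np.sum(dlev > 0))     # theta decreased through a level
    ccw = int(np.sum(dlev < 0))    # theta increased back through a level
    return cw, ccw

# Synthetic trajectory: net clockwise drift with one counterclockwise
# excursion, built from smooth +/- pi steps at made-up strains.
gamma = np.linspace(0.0, 60.0, 60001)
theta = np.zeros_like(gamma)
for g0, sign in [(10.0, -1), (25.0, -1), (40.0, +1), (50.0, -1)]:
    theta += sign * np.pi / (1.0 + np.exp(-(gamma - g0) / 0.4))

cw, ccw = signed_flips(gamma, theta)      # cw = 3, ccw = 1
# Net rotation over the run is -(cw - ccw)*pi = -2*pi, so an estimate of the
# average angular velocity is -<dtheta>/dgamma ~ pi*(cw - ccw)/gamma[-1].
```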
\begin{figure} \centering \includegraphics[width=3.5in]{flipRate-a01} \caption{For spherocylinders of asphericity $\alpha=0.01$ at strain rate $\dot\gamma=10^{-6}$: (a) Rates $\pi/\Delta\gamma_{0-}$ and $\pi/\Delta\gamma_{0+}$ characterizing the exponential tails of the distributions $\mathcal{P}_\gamma^-$ and $\mathcal{P}_\gamma^+$ for the wait times for clockwise and counterclockwise rotations of a particle by $\pi$, and (b) average particle angular velocity $-\langle\dot\theta_i\rangle/\dot\gamma$ compared to $(\pi/\Delta\gamma_{0-})-(\pi/\Delta\gamma_{0+})$ vs packing $\phi$. The dashed vertical line locates the jamming $\phi_J$. } \label{flipRate-a01} \end{figure} Our analysis of spherocylinders with both $\alpha=4$ and $\alpha=0.01$ thus points to a common scenario. The minimum in $-\langle\dot\theta_i\rangle/\dot\gamma$ at $\phi_{\dot\theta\,\mathrm{min}}$ results from a crossover between two different types of behavior as $\phi$ varies. For $\phi\ll\phi_{\dot\theta\,\mathrm{min}}$, particles behave qualitatively like isolated particles. While an isolated particle will have perfectly periodic rotations by $\pi$ given by a strain period of $\Delta\bar\gamma=2\pi/\sqrt{1-(\Delta I_i/I_i)^2}$ (see Eq.~(\ref{eomegasingle})), the interacting particles will have a distribution of $\Delta\gamma$ that peaks near $\Delta\bar\gamma$ but has a finite width, with a skew to the large $\Delta\gamma$ side of the peak; the width of the distribution and the skew increase as $\phi$ increases, giving a decreasing $-\langle\dot\theta_i\rangle/\dot\gamma$. This effect is presumably a result of the reduction in free volume between the particles as $\phi$ increases, thereby inhibiting rotations. For $\phi\gtrsim\phi_{\dot\theta\,\mathrm{min}}$, however, the distribution peak shifts down towards zero, and the distribution becomes increasingly exponential, as $\phi$ increases. 
This exponential distribution suggests that rotations by $\pi$ become a Poisson-like process; particles in general fluctuate about fixed orientations, while flips with a $\pi$ rotation occur at uncorrelated random times set by a rate $1/\Delta\gamma_0$. The time until the next flip is largely independent of the time since the last flip, except for a minimum waiting time. As $\phi$ increases above $\phi_{\dot\theta\,\mathrm{min}}$, the flipping rate $1/\Delta\gamma_0$ increases and so $-\langle\dot\theta_i\rangle/\dot\gamma$ increases. \subsection{Pure vs Simple Shearing} \label{sPure} In this section we present another analysis that again suggests that the non-monotonic behavior of $-\langle\dot\theta_i\rangle/\dot\gamma$ and $S_2$, as $\phi$ increases, results from a crossover from single particle like behavior to behavior dominated by the geometry of the dense packing. Our analysis here focuses on the magnitude of the nematic order parameter $S_2$. Our results will also offer an explanation for the singular behavior reported in our earlier Letter \cite{MKOT}, in which we found for simple shearing that as $\alpha\to 0$, and particles approach a circular shape, $S_2$ vanishes for $\phi<\phi_J$ but $S_2$ remains finite at and just above $\phi_J$. All the results elsewhere in this paper involve the behavior of our system under simple shearing. Here, however, we consider the behavior of our system under pure shearing. As we discuss below, the behavior of an isolated single particle is dramatically different under pure vs simple shearing. We will find that the behavior of $S_2$ of our many particle system is similarly qualitatively different for pure vs simple shearing at small packings, but that they are qualitatively the same at large packings, thus suggesting the crossover described above. 
In our model, dissipation arises due to a viscous drag between the local velocity of the particle and the local velocity $\mathbf{v}_\mathrm{host}(\mathbf{r})$ of the suspending host medium. For simple shear in the $\mathbf{\hat x}$ direction, $\mathbf{v}_\mathrm{host}(\mathbf{r})=\dot\gamma y\mathbf{\hat x}$. For a more general linear deformation of the host medium we can write, \begin{equation} \mathbf{v}_\mathrm{host}(\mathbf{r})=\dot{\boldsymbol{\Gamma}}\cdot\mathbf{r}, \end{equation} with $\dot{\boldsymbol{\Gamma}}$ the strain rate tensor. For simple shear we can write, \begin{equation} \dot{\boldsymbol{\Gamma}}_\mathrm{ss}= \left[ \begin{array}{cc} 0 & \dot\gamma\\ 0 & 0 \end{array} \right] = \left[ \begin{array}{cc} 0 & \dot\gamma/2\\ \dot\gamma/2 & 0 \end{array} \right] +\left[ \begin{array}{cc} 0 & \dot\gamma/2\\ -\dot\gamma/2 & 0 \end{array} \right]. \label{epures} \end{equation} The first term on the rightmost side of Eq.~(\ref{epures}) represents a pure shear distortion, in which the host medium is expanded in the $\mathbf{\hat x}+\mathbf{\hat y}$ direction, while being compressed in the $\mathbf{\hat x}-\mathbf{\hat y}$ direction, both at a rate $\dot\gamma/2$, so as to preserve the system area. The second term represents a clockwise rotation $(-\dot\gamma/2)\mathbf{\hat z}\times \mathbf{r}$, with angular velocity $-\dot\gamma/2$. Thus a simple shear can be viewed as the sum of a pure shear and a rotation. It is this rotational part which gives rise to the constant term $1/2$ in the angular driving function $f(\theta)$ of Eq.~(\ref{eftheta}), while the pure shear part gives rise to the $\cos 2\theta$ term. It is the rotational part that drives the continuous rotation of particles under simple shear, resulting in the finite $-\langle\omega_{zi}\rangle/\dot\gamma >0$ found in steady-state, as seen in Fig.~\ref{dthdg-vs-phiophiJ}. Studying pure shear thus allows us to study the orientational ordering of the system in the absence of the rotational drive.
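This decomposition is simply the split of the strain rate tensor into its symmetric and antisymmetric parts, which a few lines of numpy verify directly (the numerical value of $\dot\gamma$ is arbitrary here):

```python
import numpy as np

gdot = 1.0e-5   # strain rate; the value is arbitrary for this check

# Simple-shear strain rate tensor and its symmetric/antisymmetric split:
# pure shear (expansion along x+y, compression along x-y, each at gdot/2)
# plus a rigid clockwise rotation at angular velocity -gdot/2.
G_ss = np.array([[0.0, gdot],
                 [0.0, 0.0]])
G_pure = 0.5 * (G_ss + G_ss.T)
G_rot  = 0.5 * (G_ss - G_ss.T)
assert np.allclose(G_pure + G_rot, G_ss)

vals, vecs = np.linalg.eigh(G_pure)   # eigenvalues -gdot/2, +gdot/2
# The expansive eigenvector (eigenvalue +gdot/2) lies along (1,1)/sqrt(2).
# The antisymmetric part acts as omega x r with omega_z = -gdot/2:
omega_z = G_rot[1, 0]
```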
For our pure shear simulations we choose $\mathbf{\hat x}$ as the expansive direction and $\mathbf{\hat y}$ as the compressive direction, using periodic boundary conditions in both directions. In this case, the translational and rotational equations of motion for pure shear become, \begin{equation} \dot{\mathbf{r}}_i=\dfrac{\dot\gamma}{2}[x_i\mathbf{\hat x}-y_i\mathbf{\hat y}] +\dfrac{\mathbf{F}^\mathrm{el}_i}{k_d\mathcal{A}_i}, \end{equation} \begin{equation} \dot\theta_i=-\dfrac{\dot\gamma}{2}\dfrac{\Delta I_i}{I_i}\sin 2\theta_i + \dfrac{\tau_i^\mathrm{el}}{k_d \mathcal{A}_i I_i}. \end{equation} For an isolated particle, where $\tau_i^\mathrm{el}=0$, one can solve the rotational equation of motion analytically, \begin{equation} |\tan\theta_i(t)| = \mathrm{e}^{-\dot\gamma t \Delta I_i/I_i}|\tan \theta_i(0)|. \label{ethetatps} \end{equation} An isolated particle will relax exponentially to $\theta_i=0$ or $\pi$ with a relaxation time $t_\mathrm{relax}$ set by a total strain $\gamma_\mathrm{relax}=\dot\gamma t_\mathrm{relax} = I_i/\Delta I_i$. Unlike simple shearing, there is no continuing rotation of the particle. Thus, for an isolated particle under pure shearing, we find perfect nematic ordering with $S_2=1$ and $\theta_2=0$ for particles of {\em any} asphericity $\alpha$. This is in contrast to the behavior under simple shearing where, due to continuing particle rotation, Eq.~(\ref{eS2single}) gives $S_2 < 1$. This difference between pure and simple shearing is most dramatic for the case of a nearly circular particle with small $\alpha$. For small $\alpha$, Eq.~(\ref{eDI}) gives a small $\Delta I_i/I_i\sim \alpha$. For pure shearing, an isolated particle will relax to perfect ordered alignment with the minimal stress direction, $S_2=1$ and $\theta_2=0$, although the relaxation strain to achieve that ordered state, $\gamma_\mathrm{relax}=I_i/\Delta I_i\sim 1/\alpha$, grows large as $\alpha$ decreases. 
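The analytic relaxation of Eq.~(\ref{ethetatps}) is easy to check numerically. The sketch below integrates the isolated-particle rotational equation of motion under pure shear with a forward Euler step (the values of $\Delta I_i/I_i$ and the initial angle are arbitrary illustrative choices) and compares against the exponential decay of $\tan\theta_i$:

```python
import numpy as np

# Isolated particle under pure shear: integrate
#   d(theta)/d(gamma) = -(1/2)(dI/I) sin(2 theta)
# and compare to the analytic solution
#   tan(theta(gamma)) = exp(-gamma * dI/I) * tan(theta(0)).
dI_over_I = 0.5                   # illustrative asphericity parameter
theta0 = 1.0                      # initial angle in radians, within (0, pi/2)

dgamma, n_steps = 1e-4, 100_000   # total strain gamma = 10
theta = theta0
for _ in range(n_steps):
    theta += -0.5 * dI_over_I * np.sin(2.0 * theta) * dgamma

gamma_tot = n_steps * dgamma
theta_exact = np.arctan(np.tan(theta0) * np.exp(-gamma_tot * dI_over_I))
# theta has relaxed exponentially toward alignment at theta = 0,
# in agreement with theta_exact; there is no continuing rotation.
```

The absence of any rotational term in this equation of motion is what distinguishes it from the simple-shear case.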
For simple shearing, however, an isolated particle with small $\alpha$ will continue to rotate, with a nearly uniform angular velocity $\dot\theta_i\approx -\dot\gamma/2$, so that Eq.~(\ref{eS2single}) gives $S_2\sim \Delta I_i/I_i\sim \alpha$, which thus vanishes as $\alpha$ decreases to zero. To investigate the response to pure shear at a finite packing $\phi$, in particular near and above jamming, we carry out numerical simulations. Unlike simple shear, where the system lengths $L_x$ and $L_y$ remain constant as the system strains, under pure shear these lengths change with the total strain $\gamma$ according to $L_x(\gamma)=L_x(0)\mathrm{e}^{\gamma/2}$ and $L_y(\gamma)=L_y(0)\mathrm{e}^{-\gamma/2}$. Thus a practical limitation of pure shear simulations is that, unlike for simple shear, there is a limit to the total strain $\gamma$ that can be applied to a finite numerical system before the system collapses to a narrow height of order one particle length. Therefore, to increase the total possible strain $\gamma$, we use systems with an initial system aspect ratio of $L_y(0)/L_x(0)=\beta$, and shear to a strain $\gamma$ such that $L_y(\gamma)/L_x(\gamma)=1/\beta$, thus allowing a maximum strain of $\gamma_\mathrm{max}=2\ln \beta$. The value of $\beta$ and the number of particles $N$ are varied with $\alpha$, so that the final system height after the maximal strain is comparable to the fixed system length of our simple shear simulations. In particular, for $\alpha\le0.01$ we use $\beta=12$ and $N=4096$; for $0.01<\alpha<4$ we use $\beta=16$ and $N=8192$; for $\alpha=4$ we use $\beta=20$ and $N=16384$. All our results below use a fixed strain rate $\dot\gamma=10^{-6}$, and start from random initial configurations, constructed in the same manner as for our simple shear simulations. 
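The geometric constraint on the total pure shear strain can be made explicit with a few lines (using the aspect ratio $\beta=12$ quoted above for $\alpha\le0.01$):

```python
import numpy as np

def box_dims(Lx0, Ly0, gamma):
    """Box dimensions after pure shear strain gamma: expansion along x,
    compression along y, preserving the system area."""
    return Lx0 * np.exp(gamma / 2.0), Ly0 * np.exp(-gamma / 2.0)

# Start at aspect ratio Ly/Lx = beta and strain until Ly/Lx = 1/beta,
# giving the maximum usable strain gamma_max = 2 ln(beta).
beta = 12.0                       # value used for alpha <= 0.01
Lx0, Ly0 = 1.0, beta
gamma_max = 2.0 * np.log(beta)    # ~ 4.97
Lx, Ly = box_dims(Lx0, Ly0, gamma_max)
# The aspect ratio has inverted (Ly/Lx = 1/beta) while the area is unchanged.
```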
\begin{figure} \centering \includegraphics[width=3.5in]{S2-vs-gamma} \caption{For a pure shear deformation, (a) and (c) show the magnitude of the nematic order parameter $S_2$ vs total strain $\gamma=\dot\gamma t$ at different packing fractions $\phi$, for particles of asphericity $\alpha=4$ and 0.01, respectively; (b) and (d) show the corresponding orientation $\theta_2$ of the nematic order parameter. Results are for a strain rate $\dot\gamma=10^{-6}$ with the number of particles $N$ as indicated in each panel. Solid lines connect data points; symbols are shown only on a dilute set of the data points, so as to aid identification of the different curves. The jamming packing fraction is $\phi_J=0.906$ for $\alpha=4$ and $\phi_J=0.845$ for $\alpha=0.01$. } \label{S2-vs-gamma} \end{figure} In Fig.~\ref{S2-vs-gamma}(a) we plot $S_2$ vs strain $\gamma$ at several different packings $\phi$, for our elongated particles with $\alpha=4$. We see that as $\gamma$ increases, $S_2$ rises from its near zero value in the initial random configuration and saturates to a constant steady-state value at large $\gamma$. As $\phi$ increases, this steady-state value of $S_2$ decreases, as the decreasing free volume associated with the increasing particle density blocks particles from perfect alignment. In Fig.~\ref{S2-vs-gamma}(b) we plot the corresponding orientation of the nematic order parameter $\theta_2$ vs $\gamma$. We see that $\theta_2$ starts at some finite value, depending on the small, randomly directed, residual $\mathbf{S}_2$ in the initial random configuration, and then rapidly decays to $\theta_2=0$ as $\gamma$ increases. Thus, as expected, the pure shearing orders the particles with a nematic order parameter oriented parallel to the minimal stress direction. Our results in Figs.~\ref{S2-vs-gamma}(a) and \ref{S2-vs-gamma}(b) are from a single pure shear run at each $\phi$. 
In Figs.~\ref{S2-vs-gamma}(c) and \ref{S2-vs-gamma}(d) we show corresponding results for $S_2$ and $\theta_2$ vs $\gamma$ for the case of nearly circular particles with $\alpha=0.01$. Again we see that $S_2$ increases from zero to saturate at a steady-state value as $\gamma$ increases. Unlike the very slow relaxation $\gamma_\mathrm{relax}\sim 1/\alpha$ we expect for an isolated particle, here we see that relaxation to the steady-state is relatively rapid at large packings $\phi$; the frequent collisions between particles at large densities act to quickly equilibrate the system. However as $\phi$ decreases, the relaxation strain $\gamma_\mathrm{relax}$ increases, and at our smallest packing $\phi=0.82$, $S_2$ fails to saturate to the steady-state value within our maximum strain $\gamma_\mathrm{max}=2\ln 12\approx 5$. We previously reported similar results for $\alpha=0.001$ in the Supplemental Material to Ref.~\cite{MKOT}. Our results in Figs.~\ref{S2-vs-gamma}(c) and \ref{S2-vs-gamma}(d) are from the average of two independent runs at each $\phi$. We note that similar simulations have been carried out by Az{\'e}ma and Radja{\"i} in Ref.~\cite{Azema2010} for {\em frictional} 2D spherocylinders near the jamming packing, but using a constant lateral pressure rather than a constant volume, and shearing only to much smaller total strains than we do here. They similarly find that particles orient parallel to the minimal stress direction as they are sheared, but they seem to reach the large strain steady-state only for relatively small particle asphericities. \begin{figure} \centering \includegraphics[width=3.5in]{S2-pure-simple} \caption{Magnitude of the steady-state nematic order parameter $S_2$ vs packing $\phi$ for pure shear (solid symbols, dotted lines) compared to simple shear (open symbols, solid lines), for several small values of particle asphericity $\alpha$. For pure shear the strain rate is $\dot\gamma=10^{-6}$. 
For simple shear $\dot\gamma=10^{-6}$ for $\alpha=0.001$ and 0.01; for larger $\alpha$ a larger $\dot\gamma$ is used, but one that is still in the quasistatic limit where $S_2$ becomes independent of $\dot\gamma$. } \label{S2-pure-simple} \end{figure} In Fig.~\ref{S2-pure-simple} we plot the pure shear steady-state value of $S_2$ vs $\phi$ (solid symbols, dotted lines) at several of our smaller $\alpha$, showing only results where $S_2(\gamma)$ has saturated to the large $\gamma$ steady-state value. We see that as $\phi$ decreases, $S_2$ monotonically increases. Based on the behavior of an isolated particle, given by Eq.~(\ref{ethetatps}), we believe that $S_2$ will continue to increase and approach unity as $\phi\to 0$, however we cannot see this explicitly since we would need larger strains $\gamma$ to reach the steady-state as $\phi$ decreases. For comparison, we also show in Fig.~\ref{S2-pure-simple} our results for the steady-state value of $S_2$ vs $\phi$ obtained from simple shearing (open symbols, solid lines). For $\alpha=0.001$ and 0.01 we show results for $\dot\gamma=10^{-6}$, the same rate as we used in the pure shear simulations. For $\alpha=0.06$ we use $\dot\gamma=4\times 10^{-6}$ and for $\alpha>0.06$ we use $\dot\gamma=10^{-5}$; however, in these cases the results of Fig.~\ref{S2-vs-phiophiJ} show that these larger $\dot\gamma$ have already reached the quasistatic limit, where $S_2$ becomes independent of $\dot\gamma$, for the range of $\phi$ of interest. While at the largest $\phi$ we see that $S_2$ from pure shearing is somewhat smaller than that from simple shearing, the two are qualitatively similar, and remain so as $\phi$ decreases. However as $\phi$ approaches and decreases below $\phi_{S_2\,\mathrm{max}}$, the location of the peak in $S_2$ for simple shearing, we see that $S_2$ for pure shearing continues to increase while $S_2$ for simple shearing reaches its maximum and then decreases. 
Thus above $\phi_{S_2\,\mathrm{max}}$ pure and simple shearing induce qualitatively similar orientational ordering, while below $\phi_{S_2\,\mathrm{max}}$ they become dramatically different. The non-monotonic behavior of $S_2$ under simple shearing can thus be understood as a competition between rotational drive and free volume. At large $\phi$, the small free volume inhibits particles from aligning. As $\phi$ decreases, the free volume increases, allowing better particle alignment and a larger $S_2$. In such dense configurations, particles undergoing simple shear still rotate with a finite $\langle\dot\theta_i\rangle/\dot\gamma$; however, according to the results of Sec.~\ref{stimedep}, these rotations occur randomly as a Poisson-like process with the average rotation rate being determined by the long waiting time tails of the distribution (see Figs.~\ref{flipHist-a4}(a) and \ref{flipHist-a01}(a)); particle orientations are driven primarily by the interactions with other particles. As $\phi$ decreases below $\phi_{S_2\,\mathrm{max}}$, the rotational drive of the simple shear becomes dominant, and particle rotation becomes more similar to the periodic rotations of an isolated particle, but with random perturbations due to particle collisions (see Sec.~\ref{stimedep}, particularly Figs.~\ref{flipHist-a4}(a) and \ref{flipHist-a01}(a)). In this case, the particle rotations act to reduce the orientational ordering (and destroy it as $\alpha\to 0$), and $S_2$ decreases; this is unlike the case of pure shearing, where there is no such rotational driving term [i.e. the second term on the right hand side of Eq.~(\ref{epures})] and $S_2$ continues to increase as $\phi$ decreases. The above scenario also helps to understand the singular $\alpha\to 0$ behavior under simple shearing, discussed in our recent Letter \cite{MKOT}, in which as particles approach a circular shape, $S_2$ vanishes for $\phi<\phi_J$ but $S_2$ remains finite at and just above $\phi_J$.
Such singular behavior is suggested in Fig.~\ref{S2-pure-simple} where we see that, for nearly circular particles with $\alpha=0.001$ undergoing simple shearing, the peak value of $S_{2\,\mathrm{max}}\approx 0.3$ remains relatively large, even though the fraction of the particle perimeter occupied by the two flat sides is only $0.064\%$. In Appendix~\ref{sAtoZ} we present further analysis to determine the $\phi$ dependence of both $S_2$ and $-\langle \dot\theta_i\rangle/\dot\gamma$ in the $\alpha\to 0$ limit (see Fig.~\ref{S2-av-alpha-to-0}). For nearly circular particles with small $\alpha$, at small $\phi$ well below $\phi_{S_2\,\mathrm{max}}$, the rotational drive causes the particles to rotate almost uniformly with $-\langle\dot\theta_i\rangle/\dot\gamma\approx 1/2$, which by Eqs.~(\ref{eS2single}) and (\ref{eDI}) results in a small $S_2\propto \alpha$. Particle collisions that give significant torques that increase $S_2$ only occur as the particle density increases to $\phi_{S_2\,\mathrm{max}}$, which itself increases to the $\alpha=0$ jamming fraction $\phi_J^{(0)}$ as $\alpha\to 0$ \cite{MKOT}. Thus we expect that as $\alpha\to 0$, $S_2\propto\alpha\to 0$ for all $\phi<\phi_J^{(0)}$. Above $\phi_J^{(0)}$, however, particle interactions dominate over the rotational drive, and $S_2$ behaves as it would under pure shearing, with a finite $S_2$ that decreases as $\phi$ increases. Moreover, as $\alpha\to 0$, we found in Fig.~\ref{th2D-vs-phiophiJ} that the orientation of the nematic order parameter becomes $\theta_2\approx 45^\circ$ above $\phi_J^{(0)}$, hence $\mathbf{S}_2$ is aligning along the minimal stress direction (see also Fig.~\ref{th2X-vs-phiophiJ}), again just as it does under pure shearing. Thus the singular behavior of $S_2$ as $\alpha\to 0$ for simple shearing is due to a sharp transition from the domination by rotational drive at $\phi<\phi_J$, to domination by geometric effects of the dense packings at $\phi>\phi_J$.
We have thus explained the non-monotonic behavior we have found for $S_2$ in terms of the competition between rotation and free volume. However, recent simulations by Trulsson \cite{Trulsson}, on the simple shearing of 2D ellipses, found that the non-monotonic behavior of $S_2$, seen for frictionless particles as $\phi$ increases, goes away once inter-particle frictional forces are added. Instead of $S_2$ decreasing as $\phi$ increases above some $\phi_{S_2\,\mathrm{max}}$, for frictional particles $S_2$ seems to saturate to a constant value as $\phi$ increases. However Trulsson simulates in the hard-core particle limit, and so all his simulations take place for $\phi\lesssim\phi_J(\mu_p)$, where $\phi_J(\mu_p)$ is the jamming packing fraction for particles with inter-particle frictional coefficient $\mu_p$. For frictional particles, the additional frictional forces act to stabilize particle packings at smaller densities than the geometric jamming limit found for frictionless particles \cite{Makse,Otsuki}, and so $\phi_J(\mu_p)<\phi_J(\mu_p=0)$. The difference between $\phi_J(\mu_p)$ and $\phi_J(\mu_p=0)$ increases as $\alpha$ increases \cite{Trulsson}. Whereas for simple shear-driven jamming $\phi_J(\mu_p=0)$ seems to monotonically increase as $\alpha$ increases, $\phi_J(\mu_p)$ initially increases, reaches a maximum, and then decreases; the difference in $\phi_J$ between the frictionless and the frictional cases becomes more dramatic as $\mu_p$ increases (see Fig.~6 of Ref.~\cite{Trulsson}). Thus Trulsson's simulations do not probe the large density limit approaching geometric random close packing, and so might not reach the dense limit where free volume effects are dominating the behavior of $S_2$. Fixed volume simulations with soft-core frictional particles, allowing one to investigate the range of $\phi$ above $\phi_J(\mu_p)$, might thus help to clarify the situation. 
\subsection{Relaxation to the Steady-State} \label{sRSS} In this section we address a second issue concerning the nematic orientational ordering of aspherical particles in simple shear flow. Since there is a finite orientational order $\mathbf{S}_2$ even for an isolated single particle, is the finite $\mathbf{S}_2$ observed in the many particle system just a consequence of shearing acting like an ordering field? Or is the macroscopic $\mathbf{S}_2$ in the many particle system a consequence of cooperative behavior among the particles, as in an equilibrium ordering transition? In this section we investigate this question by considering the relaxation of the system when perturbed away from the steady-state. In Sec.~\ref{sTNO} we argued that the nematic order parameter $\mathbf{S}_2$ does not show any coherent time-dependent behavior, but rather has a constant value in the sheared steady-state. However, if $\mathbf{S}_2$ is perturbed away from this steady-state value by a coherent rotation of all particles, it will relax back to the steady-state. In Ref.~\cite{Wegner} Wegner et al. suggested, by analogy with behavior in ordered nematic liquid crystals, that the relaxation of $\mathbf{S}_2$ should obey a macroscopic equation of motion that can be written in the form, \begin{equation} \dot\theta_2=-\dot\gamma C(1-\kappa\cos 2\theta_2). \label{dth2dt} \end{equation} If such an equation holds, it would suggest that $\mathbf{S}_2$ reflects a macroscopic ordering resulting from the coherent interaction of many particles. The macroscopic equation~(\ref{dth2dt}) is similar to Eq.~(\ref{eq:theta_eom}) for the rotation of an isolated particle, except now it is assumed that $\kappa >1$. This gives a stable steady-state equilibrium value of $\theta^\mathrm{ss}_{2} = \frac{1}{2}\arccos(1/\kappa)$ and an unstable equilibrium value ($\dot\theta_2=0$) at $\theta_2=-\theta_2^\mathrm{ss}$. 
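This prediction can be illustrated with a minimal numerical sketch (Python; the values $C=0.5$ and $\kappa=2$ are illustrative, not fitted to our data) that integrates Eq.~(\ref{dth2dt}) forward in strain:

```python
import numpy as np

def relax_theta2(theta_init, C=0.5, kappa=2.0, dgamma=1e-3, gamma_max=20.0):
    """Euler-integrate d(theta_2)/d(gamma) = -C (1 - kappa cos 2 theta_2)
    with kappa > 1, keeping theta_2 in the interval (-pi/2, pi/2]."""
    theta = theta_init
    for _ in range(int(gamma_max / dgamma)):
        theta -= C * (1.0 - kappa * np.cos(2.0 * theta)) * dgamma
        if theta <= -np.pi / 2:    # wrap: theta_2 is defined modulo pi
            theta += np.pi
        elif theta > np.pi / 2:
            theta -= np.pi
    return theta

kappa = 2.0
theta_ss = 0.5 * np.arccos(1.0 / kappa)  # stable fixed point, here pi/6
# |theta_init| < theta_ss relaxes counter-clockwise; theta_init well below
# -theta_ss relaxes clockwise, wrapping through -pi/2; both reach theta_ss
print(relax_theta2(0.1), relax_theta2(-1.2), theta_ss)
```

Both initial conditions converge to $\theta_2^\mathrm{ss}=\pi/6$, the first by a counter-clockwise rotation and the second clockwise through the wrap at $-\pi/2$, in line with the macroscopic equation of motion.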
One can then rewrite Eq.~(\ref{dth2dt}) as, \begin{equation} \dot\theta_2=-\dot\gamma C \left( 1-\dfrac{\cos 2\theta_2}{\cos 2\theta_2^\mathrm{ss}}\right). \label{dth2dt2} \end{equation} Defining $\theta_2\in (-\pi/2,\pi/2]$, the above equation of motion predicts that when $|\theta_2| <\theta_2^\mathrm{ss}$, then $\mathbf{S}_2$ will relax to the steady state by rotating counter-clockwise to approach $\theta_2^\mathrm{ss}$; however, when $\theta_2$ lies outside this interval, $\mathbf{S}_2$ will relax to the steady state by rotating clockwise to approach $\theta_2^\mathrm{ss}$. To test this prediction we prepare numerical samples in which the steady-state $\mathbf{S}_2$ is rotated clockwise by a predetermined amount, and then measure the relaxation of $S_2$ and $\theta_2$ back to the steady-state as the system is sheared. To create these samples with rotated $\mathbf{S}_2$ we use the method illustrated in Fig.~\ref{flipS2}. A system with shear strain $\gamma$, sampled from our steady-state ensemble, is rotated clockwise by the angle $\psi=\mathrm{cot}^{-1}\gamma$, so that the two sides of the system boundary which were previously slanted now become the horizontal sides parallel to the flow direction. We then continue to shear the system in the horizontal direction. \begin{figure} \centering \includegraphics[width=3.5in]{flipS2} \caption{Schematic of the procedure to construct a configuration in which the nematic order parameter $\mathbf{S}_2$ is rotated clockwise by an angle $\psi$. Start with a configuration with a net shear strain $\gamma=\cot\psi$ (left figure) and rotate by $\psi$ to create the new configuration (right figure). Under this transformation the configuration boundary conditions are preserved, as indicated by the shaded circles and squares on the various sides of the system boundary, but the system aspect ratio changes, $L_y/L_x \to L_x/[L_y(1+\gamma^2)]$. 
} \label{flipS2} \end{figure} Such a rotation preserves the boundary conditions of the original configuration; the periodic boundary condition previously obeyed at the slanted sides now becomes the Lees-Edwards boundary condition at the new horizontal sides, and vice versa, as illustrated by the shaded circles and squares on the various sides in Fig.~\ref{flipS2}. If the original configuration had a length $L_x$ and a height $L_y$, the new rotated configuration has length $L_y\sqrt{1+\gamma^2}$ and height $L_x/\sqrt{1+\gamma^2}$. If the original $\mathbf{S}_2$ was at an angle $\theta_2$, close to but not necessarily exactly equal to $\theta_2^\mathrm{ss}$ because of fluctuations, the new $\mathbf{S}_2$ will be at an angle $\theta_2-\psi$. By choosing different strains $\gamma$ at which to make this system rotation, we wind up with configurations in which the original steady-state $\mathbf{S}_2$ has been rotated by various angles $\psi=\cot^{-1}\gamma$. To avoid a too elongated system when we rotate at a large $\gamma$ (so as to produce a small rotation angle $\psi$), we start with an initial system in which $L_x>L_y$, instead of our usual $L_x=L_y$. \begin{figure} \centering \includegraphics[width=3.5in]{relax-a4} \caption{For spherocylinders of asphericity $\alpha=4$ at strain rate $\dot\gamma=10^{-5}$: (a) and (b) instantaneous angle $\theta_2$, and (c) and (d) instantaneous magnitude $S_2$ of the nematic order parameter $\mathbf{S}_2$, vs shear strain $\gamma=\dot\gamma t$, after a rotation of a configuration in the steady-state by different angles $\psi$ as illustrated in Fig.~\ref{flipS2}. (a) and (c) are for $\phi=0.80$ near the minimum in $-\langle\dot\theta_i\rangle/\dot\gamma$, while (b) and (d) are for $\phi=0.95$ above the jamming $\phi_J=0.906$. 
In (a) and (b) the leftmost point on each curve gives the initial value $\theta_2^\mathrm{init}$ after the system rotation; the horizontal dashed lines give the ensemble averaged steady state values of $\pm\theta_2^\mathrm{ss}$. In (c) and (d) the horizontal dashed line gives the ensemble averaged steady state value of $S_2$. For ease of comparison, the strain axis has been shifted for each curve so that the point where $\theta_2=0$ or $90^\circ$ occurs at $\gamma=0$. The two thicker curves denote (i) the largest of our $\theta_2^\mathrm{init}$ that results in a pure clockwise relaxation to the steady-state, and (ii) the smallest of our $\theta_2^\mathrm{init}$ that results in a mostly counter-clockwise relaxation. } \label{relax-a4} \end{figure} We first consider the relaxation of a system of moderately elongated spherocylinders with asphericity $\alpha=4$. Using a system sheared at a strain rate $\dot\gamma=10^{-5}$, Fig.~\ref{relax-a4} shows the relaxation of the rotated nematic order parameter $\mathbf{S}_2$ back to the steady state. In Figs.~\ref{relax-a4}(a) and \ref{relax-a4}(b) we show the relaxation of the orientation $\theta_2$ vs net strain $\gamma=\dot\gamma t$, at packing fractions $\phi=0.80$ and $\phi=0.95$, respectively; $\phi=0.80$ is the packing that gives the minimum in $-\langle\dot\theta_i\rangle/\dot\gamma$, while $\phi=0.95$ is above the jamming $\phi_J=0.906$. Figures~\ref{relax-a4}(c) and \ref{relax-a4}(d) show the corresponding relaxation of the magnitude $S_2$. For each $\phi$ we show results for rotations through several different angles $\psi$, giving different initial values of $\theta_2^\mathrm{init}=\theta_2^\mathrm{ss}-\psi$.
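The geometry of the rotated-configuration construction of Fig.~\ref{flipS2} can be sanity-checked numerically. The following sketch (an illustrative helper, with $L_x$, $L_y$, and $\gamma$ chosen arbitrarily) applies the clockwise rotation by $\psi=\cot^{-1}\gamma$ to the two side vectors of the sheared cell:

```python
import numpy as np

def rotate_sheared_box(Lx, Ly, gamma):
    """Rotate a simple-sheared cell clockwise by psi = arccot(gamma)
    so its slanted sides become the new horizontal sides."""
    psi = np.arctan2(1.0, gamma)          # cot(psi) = gamma
    R = np.array([[np.cos(psi), np.sin(psi)],
                  [-np.sin(psi), np.cos(psi)]])   # clockwise rotation
    bottom = np.array([Lx, 0.0])          # horizontal side of the cell
    slant = np.array([gamma * Ly, Ly])    # Lees-Edwards slanted side
    new_bottom = R @ slant                # becomes the new horizontal side
    new_slant = R @ bottom                # becomes the new slanted side
    return new_bottom, new_slant

Lx, Ly, gamma = 2.0, 1.0, 1.5
nb, ns = rotate_sheared_box(Lx, Ly, gamma)
print(nb)  # new length Ly*sqrt(1+gamma^2), purely horizontal
print(ns)  # vertical extent -Lx/sqrt(1+gamma^2) gives the new height
```

The slanted side indeed becomes horizontal with length $L_y\sqrt{1+\gamma^2}$, while the rotated horizontal side gives the new height $L_x/\sqrt{1+\gamma^2}$, so the cell area is preserved.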
For ease of comparison, for each curve the strain axis has been shifted so that the point where $\theta_2=0$ occurs at $\gamma=0$; this also corresponds to the point where $|d\theta_2/d\gamma|$ is largest (for the cases with the smallest $\theta_2^\mathrm{init}$, where particles relax by a pure clockwise rotation, this point corresponds to where $\theta_2$, consistent with our definition of $\theta_2\in (-\pi/2,\pi/2]$, takes a discontinuous jump from $-90^\circ$ to $+90^\circ$). Denoting the values of $\pm\theta_2^\mathrm{ss}$ by horizontal dashed lines, in Figs.~\ref{relax-a4}(a) and \ref{relax-a4}(b) we see that for $\theta_2^\mathrm{init}$ sufficiently more negative than $-\theta_2^\mathrm{ss}$, the order parameter angle $\theta_2$ does relax back to the steady state by rotating clockwise, in agreement with Eq.~(\ref{dth2dt2}). Similarly, for $-\theta_2^\mathrm{ss}<\theta_2^\mathrm{init}<0$ we see that $\theta_2$ relaxes by rotating counter-clockwise, again in agreement with Eq.~(\ref{dth2dt2}). However, there exists a region of $\theta_2^\mathrm{init} \lesssim -\theta_2^\mathrm{ss}$ where the order parameter starts rotating clockwise, then reverses direction to rotate counter-clockwise, overshoots $\theta_2^\mathrm{ss}$, then reverses direction again, rotating clockwise to relax back to $\theta_2^\mathrm{ss}$. The two curves that separate the region where $\theta_2$ relaxes in a purely clockwise fashion from the region where it starts clockwise but then reverses to counter-clockwise, are indicated by thicker lines in the figures. Since Eq.~(\ref{dth2dt2}) predicts a monotonic increase (i.e., counter-clockwise rotation) or monotonic decrease (i.e., clockwise rotation) of $\theta_2$ as the system relaxes, it cannot be describing the system well for such $\theta_2^\mathrm{init}$.
Moreover, being a first order differential equation, Eq.~(\ref{dth2dt2}) would predict that $\theta_2(\gamma)$ would follow a fixed trajectory determined solely by the initial value $\theta_2^\mathrm{init}$. However, in Figs.~\ref{relax-a4}(a) and \ref{relax-a4}(b) we see curves that pass through the same value of $\theta_2$ (for example $\theta_2=0$) but do not then follow the same trajectory as $\gamma$ increases. The reason for this more complex behavior lies in the behavior of the magnitude of the order parameter, which in Eq.~(\ref{dth2dt2}) is presumed to stay constant. In contrast, we see in Figs.~\ref{relax-a4}(c) and \ref{relax-a4}(d) that the rapid change in $\theta_2$ at $\gamma=0$ is accompanied by a pronounced drop in the magnitude of the order parameter $S_2$. The largest drop in $S_2$, almost but not quite to zero, occurs for those $\theta_2^\mathrm{init}$ which give curves that are on the border between a pure clockwise relaxation and where the relaxation reverses from initially clockwise to counter-clockwise (indicated by the thicker curves in the figure). \begin{figure} \centering \includegraphics[width=3.5in]{relax-a4_intensity} \caption{For spherocylinders of asphericity $\alpha=4$ at strain rate $\dot\gamma=10^{-5}$ and packing $\phi=0.80$: Intensity plot showing the number of particles oriented at a particular angle $\theta_i$ vs net strain $\gamma=\dot\gamma t$, as the system relaxes back to steady-state after an initial rotation of a configuration sampled from the steady-state ensemble. The nematic order parameter $\mathbf{S}_2$ is rotated to have the value of $\theta_2^\mathrm{init}$ that corresponds to the curve in Fig.~\ref{relax-a4}(c) that has the largest drop in the magnitude $S_2$ at $\gamma=0$. The strain scale $\gamma$ has been shifted so that the left edge of the figure corresponds to the initial configuration after the rotation, while $\gamma=0$ corresponds to the strain at which $\theta_2=0$. 
Horizontal dashed lines indicate the values of $\pm\theta_2^\mathrm{ss}$; the vertical dashed line indicates $\gamma=0$. } \label{relax-a4_intensity} \end{figure} \begin{figure} \centering \includegraphics[width=3.5in]{relax-a01} \caption{For spherocylinders of asphericity $\alpha=0.01$ at strain rate $\dot\gamma=10^{-6}$: (a) and (b) instantaneous angle $\theta_2$, and (c) and (d) instantaneous magnitude $S_2$ of the nematic order parameter $\mathbf{S}_2$, vs shear strain $\gamma=\dot\gamma t$, after a rotation of a configuration in the steady-state by different angles $\psi$ as illustrated in Fig.~\ref{flipS2}. (a) and (c) are for $\phi=0.83$ near the minimum in $-\langle\dot\theta_i\rangle/\dot\gamma$, while (b) and (d) are for $\phi=0.86$ above the jamming $\phi_J=0.845$. In (a) and (b) the leftmost point on each curve gives the initial value $\theta_2^\mathrm{init}$ after the system rotation; the horizontal dashed lines give the ensemble averaged steady state values of $\pm\theta_2^\mathrm{ss}$. In (c) and (d) the horizontal dashed line gives the ensemble averaged steady state value of $S_2$. For ease of comparison, the strain axis has been shifted for each curve so that the point where $\theta_2=0$ or $90^\circ$ occurs at $\gamma=0$. The two thicker curves denote (i) the largest of our $\theta_2^\mathrm{init}$ that results in a pure clockwise relaxation to the steady-state, and (ii) the smallest of our $\theta_2^\mathrm{init}$ that results in a mostly counter-clockwise relaxation. } \label{relax-a01} \end{figure} \begin{figure} \centering \includegraphics[width=3.5in]{relax-a01_intensity} \caption{For spherocylinders of asphericity $\alpha=0.01$ at strain rate $\dot\gamma=10^{-6}$ and packing $\phi=0.83$: Intensity plot showing the number of particles oriented at a particular angle $\theta_i$ vs net strain $\gamma=\dot\gamma t$, as the system relaxes back to steady-state after an initial rotation of a configuration sampled from the steady state ensemble.
The nematic order parameter $\mathbf{S}_2$ is rotated to have the value of $\theta_2^\mathrm{init}$ that corresponds to the curve in Fig.~\ref{relax-a01}(c) that has the largest drop in the magnitude $S_2$ at $\gamma=0$. The strain scale $\gamma$ has been shifted so that the left edge of the figure corresponds to the initial configuration after the rotation, while $\gamma=0$ corresponds to the strain at which $\theta_2=0$. Horizontal dashed lines indicate the values of $\pm\theta_2^\mathrm{ss}$; the vertical dashed line indicates $\gamma=0$.} \label{relax-a01_intensity} \end{figure} To understand this behavior of $S_2$, in Fig.~\ref{relax-a4_intensity} we show an intensity plot of the orientations $\theta_i$ of the individual particles, as a function of the net shear strain $\gamma=\dot\gamma t$, as the system relaxes following the rotation of a configuration sampled from the steady-state. At each $\gamma$, the range of angles $\theta_i$ is binned into $2^\circ$ intervals and we count the number of particles with orientation $\theta_i$ in each bin; this count is then imaged by the grayscale as shown. We use the same system as in Figs.~\ref{relax-a4}(a) and \ref{relax-a4}(c), with $\alpha=4$ and $\dot\gamma=10^{-5}$ at packing $\phi=0.80$; a rotation is chosen that corresponds to the curve with the largest drop in $S_2$ seen in Fig.~\ref{relax-a4}(c). We see that some fraction of the particles relax by rotating clockwise, while the others relax by rotating counter-clockwise. At $\gamma=0$, corresponding to the smallest value of $S_2$, we see the broadest distribution of values of $\theta_i$. The sharp drop in $S_2$ as the system relaxes back to steady state is thus due to the lack of coherence in the relaxation of the individual particles. We find qualitatively the same behavior if we look at other packing fractions near and above jamming.
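The connection between this spread of individual orientations and the drop in $S_2$ follows directly from the standard 2D definition of the nematic order parameter; a minimal sketch (assuming the convention $S_2\,e^{2\mathrm{i}\theta_2}=\langle e^{2\mathrm{i}\theta_j}\rangle$, which may differ from our precise definitions in notation only):

```python
import numpy as np

def nematic_order(thetas):
    """Magnitude S2 and orientation theta2 of the 2D nematic order
    parameter from an array of particle angles."""
    c = np.mean(np.cos(2 * thetas))
    s = np.mean(np.sin(2 * thetas))
    return np.hypot(c, s), 0.5 * np.arctan2(s, c)

rng = np.random.default_rng(0)
# incoherent angles -> S2 near 0; identical angles -> S2 = 1
S2_rand, _ = nematic_order(rng.uniform(-np.pi / 2, np.pi / 2, 100000))
S2_aligned, th2 = nematic_order(np.full(1000, np.pi / 6))
print(round(S2_aligned, 6), round(np.degrees(th2), 1))  # prints: 1.0 30.0
```

A broad distribution of $\theta_i$, as seen in the intensity plot at $\gamma=0$, thus necessarily produces a small $S_2$ even though each particle retains a well-defined orientation.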
We note that results similar to those in our Figs.~\ref{relax-a4} and \ref{relax-a4_intensity} have been observed experimentally by B{\"o}rzs{\"o}nyi et al. for the relaxation of shear-reversed dry granular 3D packings of glass cylinders \cite{Borzsonyi2}. Finally, in Figs.~\ref{relax-a01} and \ref{relax-a01_intensity} we show similar plots, but now for nearly circular particles with $\alpha=0.01$. We see the same qualitative features as were found for the more elongated particles with $\alpha=4$. We thus conclude from these relaxation simulations that the nematic ordering $\mathbf{S}_2$ in our simple sheared system is a consequence of the shearing acting as an ordering field, and not due to large scale cooperative behavior among the particles. The sharp drop in the magnitude $S_2$ to small values, as the system relaxes back to steady-state, demonstrates that the relaxation takes place through the incoherent rotation of individual particles, not a coherent rotation of many particles that would preserve the magnitude of the ordering. We will confirm the absence of long range coherence in particle orientations in a separate work \cite{MTStructure} where we directly compute the spatial correlation function of $\mathbf{S}_2$ and find it to be short ranged. \subsection{A Numerical Mean-Field Model} \label{sec:MF} In the preceding section we have argued that, although there is a finite nematic ordering in the system, there is no macroscopic coherence among the particles. In this section we therefore explore whether one can make a mean-field-like model for the rotation of a particle, that depends only on the state of the individual particle itself, but reproduces reasonably the observed ensemble averages for the nematic order parameter $\mathbf{S}_2$ and the angular velocity $-\langle\dot\theta_i\rangle/\dot\gamma$, as time averages of the single particle.
The rotational motion of a particle is governed by Eq.~(\ref{eq:theta_eom}), which we can rewrite as, \begin{equation} \dfrac{\dot\theta_i}{\dot\gamma}=\dfrac{d\theta_i}{d\gamma}=-f(\theta_i)+g_i,\quad\text{where}\quad g_i=\dfrac{\tau_i^\mathrm{el}}{k_d \mathcal{A}_i I_i\dot\gamma} \end{equation} gives the interaction with other particles due to the torques from elastic collisions. We consider four different approximations to $g_i$, replacing the term from the fluctuating collisional torques by \begin{flalign} &\text{(i)}\qquad\qquad\qquad g_i\to \bar g=\langle g_i\rangle& \end{flalign} where we average over both different particles in a given configuration, and over different configurations in the steady-state ensemble, and \begin{flalign} &\text{(ii)}\qquad\qquad\qquad g_i\to\bar g+\delta g(\gamma)& \end{flalign} where $\delta g(\gamma)$ is an uncorrelated Gaussian white noise with \begin{align} &\langle \delta g(\gamma)\rangle=0 \\ &\langle\delta g(\gamma)\,\delta g(\gamma^\prime)\rangle = [\delta g]^2\delta(\gamma-\gamma^\prime), \end{align} with $[\delta g]^2=\mathrm{var}[g_i]$, where the variance is computed from the steady-state ensemble. In the mean-field models (i) and (ii) the elastic torque that the particle experiences is independent of the orientation of the particle. As a next level of approximation, we consider mean-field models in which the elastic torque will be a function of the particle's orientation $\theta$. \begin{flalign} &\text{(iii)}\qquad\qquad\qquad g_i\to\bar g(\theta)=\langle g_i\rangle_\theta,& \end{flalign} where now the average is restricted to particles oriented at a particular angle $\theta$. 
\begin{flalign} &\text{(iv)}\qquad\qquad\qquad g_i\to\bar g(\theta)+\delta g(\theta;\gamma)& \end{flalign} where $\delta g(\theta;\gamma)$ is an uncorrelated Gaussian white noise with \begin{align} &\langle \delta g(\theta;\gamma)\rangle=0\\ &\langle \delta g(\theta;\gamma)\,\delta g(\theta;\gamma^\prime)\rangle=[\delta g(\theta)]^2\delta(\gamma-\gamma^\prime), \end{align} with $[\delta g(\theta)]^2 =\mathrm{var}[g_i]_\theta$, where the variance is taken only over particles with orientation $\theta$. These different approximations allow us to examine the relative importance of average torque vs torque noise, and the sensitivity of behavior to the variation of elastic torque with particle orientation. \begin{figure} \centering \includegraphics[width=3.5in]{MF1-2} \caption{For mean-field models (i) and (ii): average elastic torque $\bar g= \langle \tau^\mathrm{el}_i/k_d \mathcal{A}_i I_i\dot\gamma\rangle$ and associated noise magnitude $\delta g$ vs packing $\phi$ for (a) $\alpha=0.01$ at $\dot\gamma=10^{-6}$, and (b) $\alpha=4$ at $\dot\gamma=10^{-5}$. Horizontal dashed lines $f_\mathrm{min}$ and $f_\mathrm{max}$ denote the minimum $f(0)$ and maximum $f(\pi/2)$ values of $f(\theta)=(1-[\Delta I_i/I_i]\cos 2\theta)/2$ in Eq.~(\ref{eftheta}); note that for $\alpha=0.01$ these two are nearly indistinguishable since $\Delta I_i/I_i=0.00847$ is so small. Vertical dashed lines locate the jamming packings, $\phi_J=0.845$ for $\alpha=0.01$ and $\phi_J=0.906$ for $\alpha=4$. } \label{MF1-2} \end{figure} \begin{figure} \centering \includegraphics[width=3.5in]{MF3-4} \caption{For mean-field models (iii) and (iv): average elastic torque $\bar g(\theta)= \langle \tau^\mathrm{el}_i/k_d \mathcal{A}_i I_i\dot\gamma\rangle_\theta$ and associated noise $\delta g(\theta)$ for particles oriented at angle $\theta$. Top row (a) and (b) is for $\alpha=0.01$ at $\dot\gamma=10^{-6}$, with $\phi_J=0.845$; bottom row (c) and (d) is for $\alpha=4$ at $\dot\gamma=10^{-5}$, with $\phi_J=0.906$. 
(a) and (c): $f(\theta)-\bar g(\theta)$ vs $\theta$ at different packings $\phi$, where $f(\theta)=(1-[\Delta I_i/I_i]\cos 2\theta)/2$ as in Eq.~(\ref{eftheta}). The thick solid black line is just $f(\theta)$, corresponding to $\phi\to0$ where $\bar g(\theta)=0$. Thin colored lines are the Fourier series approximation to the data at each $\phi$, as given by Eq.~(\ref{eFourier2}). (b) and (d): magnitude of the noise $\delta g(\theta)$ vs $\theta$ at different packings $\phi$. Note the logarithmic vertical scale. } \label{MF3-4} \end{figure} In Fig.~\ref{MF1-2} we plot our results for $\bar g$ and $\delta g$ vs $\phi$, which are used in constructing the mean-field (MF) models (i) and (ii). In Fig.~\ref{MF1-2}(a) we show results for nearly circular particles with $\alpha=0.01$ at strain rate $\dot\gamma=10^{-6}$; in Fig.~\ref{MF1-2}(b) we show results for elongated particles with $\alpha=4$ at $\dot\gamma=10^{-5}$. The horizontal black dashed lines in each panel are the values of $f_\mathrm{min}\equiv f(0)=(1-\Delta I_i/I_i)/2$ and $f_\mathrm{max}\equiv f(\pi/2)=(1+\Delta I_i/I_i)/2$, which are the minimum and maximum values of $f(\theta)=(1-[\Delta I_i/I_i]\cos 2\theta)/2$ given in Eq.~(\ref{eftheta}). If ever we have $f_\mathrm{min}<\bar g<f_\mathrm{max}$, then in MF model (i) the direction $\theta_i$ such that $f(\theta_i)=\bar g$ is a stationary point where $\dot\theta_i/\dot\gamma =0$. From Fig.~\ref{MF1-2} we see that this situation never arises for $\alpha=0.01$, however it does occur for $\alpha=4$ when $\phi>0.5$. Note that in both cases the average elastic torque $\bar g = \langle \tau^\mathrm{el}_i/k_d \mathcal{A}_i I_i \dot\gamma\rangle$ is positive, showing that, on average, the elastic torques serve to slow down the clockwise rotation of the particles. Note also that in both cases the magnitude of the noise $\delta g$ is one or more orders of magnitude larger than the average $\bar g$ for the range of $\phi$ considered.
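To make concrete how these single-particle models are integrated, the following sketch applies the Euler--Maruyama method to MF (ii), with illustrative values of $\Delta I_i/I_i$, $\bar g$, and $\delta g$ (not the measured ones of Fig.~\ref{MF1-2}), and extracts $-\langle\dot\theta_i\rangle/\dot\gamma$, $S_2$, and $\theta_2$ as time averages of the single particle:

```python
import numpy as np

def mf_model_ii(dI_over_I, g_bar, dg, dgamma=0.01, gamma_max=500.0, seed=0):
    """Euler-Maruyama integration of MF (ii):
    dtheta/dgamma = -f(theta) + g_bar + white noise of strength dg,
    with f(theta) = (1 - [dI/I] cos 2 theta)/2.
    Returns time averages of -dtheta/dgamma, S2, and theta2 (degrees)."""
    rng = np.random.default_rng(seed)
    n = int(gamma_max / dgamma)
    theta, c, s = 0.0, 0.0, 0.0
    for _ in range(n):
        f = 0.5 * (1.0 - dI_over_I * np.cos(2.0 * theta))
        theta += (g_bar - f) * dgamma \
                 + dg * np.sqrt(dgamma) * rng.standard_normal()
        c += np.cos(2.0 * theta)
        s += np.sin(2.0 * theta)
    S2 = np.hypot(c / n, s / n)             # time-averaged nematic order
    theta2 = 0.5 * np.degrees(np.arctan2(s, c))
    # theta(0) = 0, so -theta/gamma_max is the mean rotation rate
    return -theta / gamma_max, S2, theta2

# nearly circular particle: rotates almost uniformly, so S2 stays small
avg_rate, S2, theta2 = mf_model_ii(dI_over_I=0.00847, g_bar=0.05, dg=0.5)
print(avg_rate, S2, theta2)
```

With a small positive $\bar g$ the mean rotation rate comes out near $1/2-\bar g$ while $S_2$ stays small, consistent with the observation that MF (i) and (ii) capture $-\langle\dot\theta_i\rangle/\dot\gamma$ but not the ordering for nearly circular particles.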
In Fig.~\ref{MF3-4} we show results for $\bar g(\theta)$ and $\delta g(\theta)$ vs $\theta$, which are used for constructing the models MF (iii) and MF (iv). In Figs.~\ref{MF3-4}(a) and \ref{MF3-4}(b) we show results for $\alpha=0.01$ at $\dot\gamma=10^{-6}$, while in Figs.~\ref{MF3-4}(c) and \ref{MF3-4}(d) we show results for $\alpha=4$ at $\dot\gamma=10^{-5}$. In each case we show results at four different typical values of $\phi$: below $\phi_{S_2\,\mathrm{max}}$, near $\phi_{S_2\,\mathrm{max}}$, near $\phi_J$ and above $\phi_J$. Rather than show $\bar g(\theta)$ directly, in Figs.~\ref{MF3-4}(a) and \ref{MF3-4}(c) we instead plot $f(\theta)-\bar g(\theta)=-\dot\theta_i/\dot\gamma$, since this more directly gives the rotational motion of the particle. A positive value of $f(\theta)-\bar g(\theta)$ indicates a clockwise rotation. A value of $\theta$ such that $f(\theta)-\bar g(\theta)=0$ indicates a stationary point in MF (iii), where $\dot\theta_i/\dot\gamma=0$; if $d[f(\theta)-\bar g(\theta)]/d\theta >0$ this is a stable stationary point. At the larger values of $\phi$ our data for $f(\theta)-\bar g(\theta)$ become quite scattered, particularly for $\alpha=4$. To get a smooth $\bar g(\theta)$ for integrating our mean-field single particle equation of motion we therefore approximate $\bar g(\theta)$ by expanding our data as a Fourier series and keeping only the lowest several terms, \begin{align} \bar g(\theta)&=\dfrac{a_0}{\pi}+\frac{2}{\pi}\sum_{n=1}\left[a_n\cos 2n\theta + b_n\sin 2n\theta\right],\label{eFourier2}\\ a_n&=\int_{-\pi/2}^{\pi/2}\!\!d\theta\,\bar g(\theta)\cos 2n\theta,\\ b_n&=\int_{-\pi/2}^{\pi/2}\!\!d\theta\,\bar g(\theta)\sin 2n\theta. \end{align} For the largest $\phi$, where the data are most scattered, we use up to $n=3$ terms for our approximate $\bar g(\theta)$; for smaller $\phi$, where the data are smoother but where there are regions of $\theta$ where $\bar g(\theta)$ is rather flat, we use up to $n=16$ terms. 
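This smoothing step can be sketched as follows, with synthetic noisy data standing in for the measured $\bar g(\theta)$ and the coefficient integrals evaluated by simple quadrature on a uniform grid:

```python
import numpy as np

def fourier_smooth(theta, g, n_max):
    """Approximate pi-periodic data g(theta), theta on (-pi/2, pi/2],
    by a truncated Fourier series in cos(2n theta) and sin(2n theta),
    with coefficients computed by quadrature on a uniform grid."""
    dtheta = theta[1] - theta[0]
    a = [np.sum(g * np.cos(2 * n * theta)) * dtheta for n in range(n_max + 1)]
    b = [np.sum(g * np.sin(2 * n * theta)) * dtheta for n in range(n_max + 1)]
    def smooth(t):
        out = a[0] / np.pi
        for n in range(1, n_max + 1):
            out += (2 / np.pi) * (a[n] * np.cos(2 * n * t)
                                  + b[n] * np.sin(2 * n * t))
        return out
    return smooth

# synthetic noisy "data" standing in for the measured g(theta)
rng = np.random.default_rng(1)
theta = np.linspace(-np.pi / 2, np.pi / 2, 181)[1:]  # uniform grid
true = 0.3 + 0.2 * np.cos(2 * theta) - 0.1 * np.sin(4 * theta)
noisy = true + 0.05 * rng.standard_normal(theta.size)
smooth = fourier_smooth(theta, noisy, n_max=3)
print(np.max(np.abs(smooth(theta) - true)))  # small residual
```

Keeping only the lowest few harmonics filters the scatter while preserving the $\pi$-periodic structure of $\bar g(\theta)$.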
This Fourier approximation gives the solid lines in Figs.~\ref{MF3-4}(a) and \ref{MF3-4}(c). We now consider how well these mean-field models do in describing the behavior of our interacting many particle system. In Fig.~\ref{MF-av-S2-th2} we show our results for $-\langle\dot\theta_i\rangle/\dot\gamma$, $S_2$, and $\theta_2$ (top, middle, and bottom rows respectively) vs the packing $\phi$, comparing our $N=1024$ particle simulations against that of the single particle mean-field models MF (i), (ii), (iii), and (iv). The left column is for nearly circular particles with $\alpha=0.01$ at $\dot\gamma=10^{-6}$, while the right column is for elongated particles with $\alpha=4$ at $\dot\gamma=10^{-5}$. \begin{figure} \centering \includegraphics[width=3.5in]{MF-av-S2-th2} \caption{Comparison of $-\langle\dot\theta_i\rangle/\dot\gamma$, $S_2$, and $\theta_2$ vs $\phi$ (top, middle, and bottom rows respectively) between our $N=1024$ interacting particle simulations and the single-particle mean-field approximations MF (i), (ii), (iii) and (iv). The left column is for nearly circular particles of $\alpha=0.01$ at $\dot\gamma=10^{-6}$, while the right column is for elongated particles of $\alpha=4$ at $\dot\gamma=10^{-5}$. The vertical dashed lines locate the jamming packings, $\phi_J=0.845$ and 0.906 for $\alpha=0.01$ and 4, respectively. } \label{MF-av-S2-th2} \end{figure} We discuss $\alpha=0.01$ first. We see in Fig.~\ref{MF-av-S2-th2}(a) that all the models MF (i) -- (iv) do a good job in predicting the angular velocity $-\langle\dot\theta_i\rangle/\dot\gamma$. This is not surprising. For $\alpha=0.01$, the term $\Delta I_i/I_i=0.00847$ is so small that the variation in $f(\theta)$ is exceedingly slight, and so to good approximation one can take $f(\theta)\approx 1/2$; an isolated particle is essentially rotating uniformly. 
The elastic torque of MF (i), modeled by the $\theta$-independent $\bar g$, with $\bar g<f_\mathrm{min}$ at all $\phi$ (see Fig.~\ref{MF1-2}(a)), then just subtracts from this average drive $f\approx 1/2$ to give the correct average angular velocity. Adding the noise $\delta g$ in MF (ii), or using an orientationally dependent $\bar g(\theta)$ in MF (iii) and corresponding noise $\delta g(\theta)$ in MF (iv), does not change this average rotational behavior. Only as one goes above $\phi_J$, and correlations between particles become longer ranged, do we see a difference between the interacting many particle system and our single particle mean-field models. In contrast, if we consider $S_2$, we see in Fig.~\ref{MF-av-S2-th2}(b) that the simple MF (i) does an exceedingly poor job. Again, this is not surprising. As discussed above, since for $\alpha=0.01$ the model MF (i) results in a particle that rotates almost uniformly, there is no mechanism for $S_2$ to grow above the very small value $S_2=0.0042$ that is found for an isolated particle. Similarly, as seen in Fig.~\ref{MF-av-S2-th2}(c), MF (i) gives $\theta_2=0$, just as for an isolated particle. Adding noise, as in MF (ii), does nothing to improve the results for $S_2$ or $\theta_2$. However, using the orientationally dependent average elastic torque $\bar g(\theta)$ of MF (iii) results in excellent agreement for both $S_2$ and $\theta_2$. The strong variation of $\bar g(\theta)$ with $\theta$, as seen in Fig.~\ref{MF3-4}(a), results in the non-uniform rotation of the particle that is essential to dramatically increase $S_2$ over the isolated particle limit. No further improvement is found by adding the orientationally dependent noise $\delta g(\theta)$ of MF (iv). Turning to elongated particles with $\alpha=4$, we see in Fig.~\ref{MF-av-S2-th2}(d) that now MF (i) fails dramatically even when considering $-\langle\dot\theta_i\rangle/\dot\gamma$. 
While agreement is not bad at the smallest $\phi$, once $\phi$ increases above 0.5 and $\bar g$ increases above $f_\mathrm{min}=f(0)$ (see Fig.~\ref{MF1-2}(b)), the particle locks into a stationary state where $\dot\theta_i/\dot\gamma=0$, and consequently one has $S_2=1$, as seen in Fig.~\ref{MF-av-S2-th2}(e). The orientation $\theta_2$, shown in Fig.~\ref{MF-av-S2-th2}(f), then increases with $\phi$ so as to obey $f(\theta_2)=\bar g$. Adding the noise $\delta g$ of MF (ii) is not sufficient to allow the particle to escape from this stationary state, until $\phi$ gets close to and goes above jamming. To get good agreement for $\alpha=4$ it is thus necessary, as we found for $\alpha=0.01$, to consider the orientational dependence of the average elastic torque. Using the $\bar g(\theta)$ of MF (iii) we see that we get excellent agreement for all three quantities, $-\langle\dot\theta_i\rangle/\dot\gamma$, $S_2$, and $\theta_2$, for all $\phi$ {\em except} upon approaching close to the jamming $\phi_J$. Close to $\phi_J$, Fig.~\ref{MF3-4}(c) shows that $f(\theta)-\bar g(\theta)$ can go negative, giving rise to a stationary state when $f(\theta)-\bar g(\theta)=0$. Thus we see in Fig.~\ref{MF-av-S2-th2}(d) that as $\phi$ approaches $\phi_J$, $-\langle\dot\theta_i\rangle/\dot\gamma$ drops to zero, while in Fig.~\ref{MF-av-S2-th2}(e) we see that $S_2$ jumps to unity. However adding the noise $\delta g(\theta)$ of MF (iv) is sufficient to allow the particle to escape this stationary state, and restore good agreement with the many particle simulation, until one goes above $\phi_J$. We thus conclude that our single-particle mean-field model gives an excellent description of the rotational motion of our particles, over a wide range of asphericities $\alpha$ and packings $\phi$, provided one includes the proper orientational dependence to the average torque from the elastic interactions, as in MF (iii). 
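The locked orientation $\theta_2$ of MF (i), obeying $f(\theta_2)=\bar g$ as described above, is easy to find numerically. The sketch below is illustrative only: it assumes a Jeffery-like drive $f(\theta)=\tfrac{1}{2}[1-(\Delta I/I)\cos 2\theta]$, a stand-in with the same qualitative structure as the $f(\theta)$ defined earlier in the paper, and hypothetical parameter values:

```python
import math

# Assumed Jeffery-like form for the rotational drive f(theta); the paper's
# exact f(theta) is defined earlier in the text -- this stand-in just has
# the same qualitative structure, with amplitude set by Delta I / I.
def f_drive(theta, dI_over_I):
    return 0.5 * (1.0 - dI_over_I * math.cos(2.0 * theta))

def locked_angle(g_bar, dI_over_I, tol=1e-12):
    """Solve f(theta2) = g_bar for the locked orientation by bisection
    on [0, pi/2], where this f is monotonically increasing."""
    lo, hi = 0.0, math.pi / 2.0
    f_lo = f_drive(lo, dI_over_I) - g_bar
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        f_mid = f_drive(mid, dI_over_I) - g_bar
        if f_lo * f_mid <= 0.0:
            hi = mid            # root lies in [lo, mid]
        else:
            lo, f_lo = mid, f_mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# illustrative values only: as g_bar grows with phi, theta2 rotates
# further away from the flow direction
theta2 = locked_angle(g_bar=0.6, dI_over_I=0.8)
```

Since $\bar g$ increases with $\phi$, this reproduces the observed trend of $\theta_2$ increasing with packing in the stationary regime.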
Agreement at large $\phi$ approaching jamming is further improved by adding the noise term of MF (iv). However, our mean-field model seems to do less well as $\phi$ increases above $\phi_J$. Whether this is an effect of increasing correlations between particles as they jam, or is instead due to poor statistics limiting the accuracy of our estimate of $\bar g(\theta)$, remains unclear. \section{Summary} \label{sec:discus} In this work we have considered a model of sheared, athermal, frictionless two dimensional spherocylinders in suspension at constant volume. The simplicity of our model, in which the only interactions are pairwise repulsive elastic forces and a viscous damping with respect to the suspending host medium, allows us to shear to very long total strains and completely characterize the behavior of the system over a wide range of packing fractions $\phi$, strain rates $\dot\gamma$, and particle asphericities $\alpha$. In a prior work we focused on the rheological properties of this model and the variation of the jamming transition $\phi_J$ with particle asphericity \cite{MT1}. In the present work we have focused on the shear-induced rotation of particles and their nematic orientational ordering. We found that, under simple shearing, particles continue to rotate at all packings, even above jamming, and that the nematic order parameter $\mathbf{S}_2$ has a constant, time-independent, value in the sheared steady-state. We have found that the average angular velocity of particles $-\langle\dot\theta_i\rangle/\dot\gamma$ and the magnitude of the nematic order parameter $S_2$ are non-monotonic as the packing $\phi$ increases, with the minimum of $-\langle\dot\theta_i\rangle/\dot\gamma$ and the maximum of $S_2$ occurring below the jamming transition.
By considering the distribution of strain intervals $\Delta\gamma$ between successive rotations of a particle by $\pi$ in Sec.~\ref{stimedep}, and by comparing the response of the system under pure shear as opposed to simple shear in Sec.~\ref{sPure}, the following scenario emerges. At the smaller packings $\phi$, behavior is qualitatively similar to that of an isolated particle. The rotational drive implicit in simple shearing (but absent in pure shearing) causes particles to rotate with a non-uniform angular velocity that depends on the particle's orientation. As $\phi$ increases, the rate of collisions between particles increases, leading to a broadening of the distribution of rotation times, but still with a typical rotation time comparable to the average. The average $S_2$ is dominated by the average particle rotation, as evidenced by the observed difference in $S_2$ between simple and pure shearing; in contrast to the increase in $S_2$ as $\phi$ increases under simple shearing, under pure shearing, which has no rotational driving term, $S_2$ shows perfect ordering at small $\phi$ and is monotonically decreasing as $\phi$ increases. At larger $\phi$, however, the system becomes so dense that the decreasing free volume inhibits rotations. Particles tend to lock into the local configuration, with rotational rattling about a particular orientation, until a shear-induced fluctuation in the local particle structure allows a rotation to take place. Particle rotations become a Poisson-like process in which the time until the next particle rotation is largely independent of the time since the last rotation. The average $S_2$ is now dominated by the local structure of the dense packing, rather than the particle rotations, as evidenced by the qualitative agreement now found for the behavior of $S_2$ comparing simple and pure shear (see Fig.~\ref{S2-pure-simple}). 
The above scenario helps to explain our surprising result of Ref.~\cite{MKOT}, further discussed in Appendix~\ref{sAtoZ}, that the $\alpha\to 0$ limit, approaching perfectly circular particles, is singular. As particles approach the rotationally invariant circular shape, one would naively expect that the nematic orientational order parameter $\mathbf{S}_2$ should vanish. However, in the $\alpha\to 0$ limit, we found that $S_2$ vanishes below $\phi_J$, but remains finite at $\phi_J$ and above. To explain this, consider first the behavior under pure shear, where we have argued that particles of any finite $\alpha$, no matter how small, will exponentially relax their orientation to the minimal stress direction, and so eventually order with $S_2\approx 1$, at sufficiently small packings $\phi$. As $\phi$ increases, the decreasing free volume inhibits particle rotation, limiting the extent of ordering, and leading to an $S_2$ that decreases monotonically as $\phi$ increases; we found numerically that $S_2$, under pure shear, remains finite above jamming even for very small $\alpha$. Consider now the behavior under simple shear. According to the above scenario, above the peak in $S_2$ under simple shear, behavior is dominated by the local structure of the dense configuration, and simple and pure shear result in qualitatively similar ordering. As $\alpha\to 0$ the location of the peak in $S_2$ moves to the jamming transition. Hence we expect that, even as $\alpha\to 0$, the simple sheared system will order with finite $S_2$ for $\phi\ge\phi_J$. However for $\phi<\phi_J$, the rotational drive of the simple shear, absent for pure shear, will dominate and cause the particles to rotate with an increasingly uniform (i.e., independent of the particle orientation) angular velocity as $\alpha$ gets small. As $\alpha\to 0$ this uniform rotation will drive $S_2\to 0$.
Hence our scenario leads one to expect that, as $\alpha\to 0$, one will have $S_2=0$ for $\phi<\phi_J$ but $S_2>0$ for $\phi\ge\phi_J$, just as we found to be the case. Finally, although our sheared system of aspherical particles displays finite nematic orientational ordering at any packing $\phi$, this ordering is not due to long range coherence between particles as in an equilibrium liquid crystal, but rather is due to the shearing acting as an ordering field. This conclusion is supported by our results in Sec.~\ref{sRSS}, where we investigated the relaxation of $\mathbf{S}_2$ upon being rotated away from its steady-state direction. The sharp drop in the magnitude $S_2$ to small values, as the system relaxes back to steady-state, indicates that the relaxation takes place through the incoherent rotation of individual particles, not a coherent rotation of many particles that would preserve the magnitude of the ordering. Additionally, the success of our numerical mean-field model of Sec.~\ref{sec:MF}, in which we modeled the system by an isolated particle being acted upon by an orientation dependent average elastic torque and random incoherent torque noise, indicates that correlations between particles are not important to describe the behavior of the system. We will give further evidence for this conclusion in a separate paper \cite{MTStructure}, where we directly compute the spatial correlation function for $\mathbf{S}_2$ and show that it is short ranged. \section*{Acknowledgements} This work was supported in part by National Science Foundation Grants No. CBET-1435861 and No. DMR-1809318. Computations were carried out at the Center for Integrated Research Computing at the University of Rochester.
\section{Introduction} \label{sec:chiSF} Obtaining precise physical results from lattice calculations requires a well controlled continuum limit and, for many quantities, non-perturbative renormalisation. Ideally the renormalisation scheme should be not only \emph{non-perturbative} but also \emph{mass independent} and preferably \emph{gauge invariant}. Schr\"odinger functional (SF) schemes~\cite{luescher1,sint1,luescher2} are known to fulfill these properties. Additionally, to ease the burden of taking the continuum limit, {$O(a)$ improvement} is highly desirable. However, to eliminate the many counterterms necessary when applying the standard {$O(a)$ improvement} program with Wilson fermions, we would like to capitalize on the automatic {$O(a)$ improvement} provided by maximally twisted mass fermions~\cite{FR1} (see \cite{andrea} for a review). Unfortunately, bulk automatic {$O(a)$ improvement} with Wilson fermions and the standard SF (sSF) boundary conditions (BCs) are not compatible. {$O(a)$ improvement} is only possible by introducing a number of additional bulk improvement counter-terms to the action and operators. Since there are extensive calculations with maximally twisted mass fermions \cite{Boucaud:2007uk,Boucaud:2008xu}, it would clearly be desirable to employ the SF scheme while keeping automatic $O(a)$-improvement. A new formulation of the SF, which we will refer to as the \emph{chirally rotated} SF ({$\chi$SF}), has been developed in \Ref{\cite{sint2}}; it implements an SF scheme while maintaining automatic {$O(a)$ improvement} for massless Wilson fermions. The {$\chi$SF} is related (in the continuum) to the {sSF} by means of a non-singlet chiral transformation, i.e. they are equivalent in the continuum limit. However, when using massless Wilson fermions as a lattice regulator, {$\chi$SF} BCs are invariant under a subgroup of the chiral symmetry transformations broken by the Wilson term (in contrast to {sSF} BCs).
As a result {$\chi$SF} BCs are compatible with automatic {$O(a)$ improvement}. The three-dimensional boundaries of the SF lead to an unavoidable dimension four boundary operator. Additionally, regulating the {$\chi$SF} with Wilson fermions induces the usual bulk mass operator as well as a dimension three boundary operator. The dimension four boundary operator is irrelevant, and hence the corresponding coefficient can be safely fixed by perturbation theory in order to eliminate the corresponding $O(a)$ boundary contributions. The bulk operator is relevant and is handled by the standard non-perturbative tuning of the bare quark mass, equivalently $\kappa$, to its critical value. The dimension three operator is also relevant and can spoil not only the automatic {$O(a)$ improvement} but also the universality of the continuum limit. This requires an additional non-perturbative tuning of one more counterterm, $z_{\mathrm{f}}$. However, having tuned both $\kappa$ and $z_{\mathrm{f}}$, all operators are automatically $O(a)$ improved and no further counterterms are necessary. Here we present the non-perturbative tuning of $\kappa$ and $z_{\mathrm{f}}$ for the {$\chi$SF} in the quenched approximation. We demonstrate the feasibility of tuning both parameters simultaneously. In particular, the inclusion of the bulk dimension five operator, with corresponding counterterm $c_{\mathrm{sw}}$, as used in \Ref{\cite{sintleder}}, is found to be unnecessary. \section{Boundary conditions} The {$\chi$SF} is related to the {sSF} by a non-singlet chiral transformation, $\chi = \exp(-i\pi \gamma_5 \tau^3/4)\psi$, where $\psi$ is the fermion doublet in the $N_{\rm f} = 2$ standard formulation, $\chi$ is the corresponding doublet in the rotated basis and $\tau^3$ is a Pauli matrix. 
This field transformation maps the {sSF} BCs to the {$\chi$SF} BCs, \begin{align} \label{eq:contbc} Q_{+}\chi(x)|_{x_{0} = 0}& =0& Q_{-}\chi(x)|_{x_{0} = T}& =0\\ \overline{\chi}(x)Q_{+}|_{x_{0} = 0}& =0& \overline{\chi}(x)Q_{-}|_{x_{0} = T}& =0\,, \nonumber \end{align} where $T$ is the Euclidean time extent and $Q_{\pm}$ are projectors given by \begin{displaymath} Q_{\pm} = \frac{1}{2}\, \left( \mathbbm{1} \pm i\, \gamma_{0}\gamma_{5}\tau^{3} \right)\,. \end{displaymath} Thus the $Q_{\pm}$ are simply the chirally rotated projectors corresponding to the {sSF} projectors, $P_{\pm} = 1/2(1\pm\gamma_0)$. However, once the theory is regularised on the lattice, we must ensure that the BCs in \eqref{eq:contbc} are in fact recovered in the continuum limit. Using orbifolding techniques, it was shown that the BCs can be implemented at finite lattice spacing by a simple modification of the standard Wilson-Dirac operator, ${D}_{\rm W}$, near the time boundaries~\cite{sint4}. The resulting action is \begin{equation} \label{eq:action} S = a^4 \sum_{x_0=0}^{T}\sum_{\rm{\vec{x}}}\overline\chi(x)\left({\mathcal D}_{\rm W} + m_0\right)\chi(x) \end{equation} and the modified Wilson-Dirac operator is given by \begin{equation} a {\mathcal D}_{\rm W}\chi(x) = \left\{ \begin{array}{ l l } -U(x,0)P_-\chi(x+a\hat{0}) + (aK +i \gamma_5\tau^3 P_- )\chi(x) & \qquad {\rm if} \quad x_0=0 \\ a {D}_{\rm W}\chi(x) & \qquad {\rm if} \quad 0 < x_0 < T \\ (aK +i \gamma_5\tau^3 P_+ )\chi(x)-U^{\dagger}(x-a\hat{0},0)P_+\chi(x-a\hat{0}) & \qquad {\rm if} \quad x_0=T \\ \end{array} \right. \label{eq:latact} \end{equation} where $K$ is the time-diagonal contribution to ${D}_{\rm W}$. \section{Boundary counterterms} To ensure the correct continuum limit, we must account for all relevant operators allowed by the symmetries of the action above. This means dimension four or less for the bulk action. 
There is one such operator, $\overline{\chi}\chi$, and the corresponding counterterm is the term proportional to the critical quark mass, $m_{\rm cr}$, or equivalently $\kappa_{\rm cr}$. This is the standard operator that is present for all Wilson actions due to the breaking of chiral symmetry by the Wilson term. Similarly, we must include all permitted boundary operators of dimension three or less. Again, the one allowed operator is $\overline{\chi}\chi$~\cite{sint2}, which gives rise to the following counterterm to the lattice action, \begin{displaymath} \delta S_3 = (z_{\mathrm{f}} - 1) a^{3}\sum_{\vec{x}}\, \left( \overline{\chi}\chi|_{x_{0}=0} + \overline{\chi}\chi|_{x_{0}=T} \right)\,. \end{displaymath} Such an operator would be forbidden in the continuum action, but the reduced symmetries of the Wilson action do not allow us to exclude this operator on the lattice. The presence of $\delta S_3$ can then be understood as necessary to restore the symmetries broken by the Wilson term in the continuum limit. The fact that it is a relevant operator implies that we must compute the bare coupling dependence of $z_{\mathrm{f}}$ non-perturbatively, just as for $\kappa$. Furthermore, we must examine those irrelevant operators that lead to $O(a)$ contributions. In the bulk, there is the dimension five Sheikholeslami-Wohlert term, but automatic {$O(a)$ improvement} eliminates the need for this operator. Yet, there does remain an $O(a)$ contribution from the boundary due to the irrelevant dimension four operator~\cite{sintleder}, \begin{displaymath} \delta S_4 = (d_\mathrm{s} - 1) a^{4}\sum_{\vec{x}}\, \left( \overline{\chi}\gamma_{k}D_{k}\chi|_{x_{0}=0} + \overline{\chi}\gamma_{k}D_{k}\chi|_{x_{0}=T} \right). \end{displaymath} Such a contribution is present in all SF formulations~\cite{luescher2} and is not due to the particular lattice action or BCs we have chosen. 
In fact, $d_\mathrm{s}$ plays a role that is analogous to the $\tilde{c}_t$ counterterm in the {sSF}~\cite{ct_tilde}. Given that $\delta S_4$ is an irrelevant operator, $d_\mathrm{s}$ can be computed in perturbation theory. For the investigation presented here, we simply use the tree-level value of $1/2$. \section{Tuning conditions} The non-perturbative determination of $\kappa$ and $z_{\mathrm{f}}$ requires imposing conditions at finite lattice spacing that ensure the restoration of all expected symmetries in the continuum limit: parity and flavour symmetries in the $\chi$-basis\footnote{We recall that in the $\chi$-basis parity and flavour symmetries take a slightly different form (see ref.~\cite{andrea} for a discussion about the dependence of the symmetries on the basis adopted).}. Moreover, these conditions should be imposed at each lattice spacing while fixing a suitable renormalised quantity. In this work, we keep the renormalised SF coupling, $\overline{g}$, fixed. This is equivalent to fixing the physical size of the box, $L$. All other dimensionful quantities must scale with $L$, so we choose $T=L$, evaluate all correlation functions at $x_0 = T/2$ and use periodic boundary conditions with $\theta=0$.
Before specifying the tuning conditions, we define the following boundary to bulk correlation functions \begin{displaymath} g_{\mathrm{A}_{\pm}}^{ab}(x_{0}) = -\langle A_{0}^{a}(x)\mathcal{Q}_{\pm}^{b}\rangle \qquad g_{\mathrm{P}_{\pm}}^{ab}(x_{0}) = -\langle P^{a}(x)\mathcal{Q}_{\pm}^{b}\rangle \end{displaymath} where the boundary operator, $\mathcal{Q}_{\pm}^{a}$, is defined for the $x_0=0$ boundary by \begin{displaymath} \mathcal{Q}_{\pm}^{a} = a^{6}\sum_{{\vec{y}},{\vec{z}}} \overline{\zeta}({\vec{y}})\gamma_{5}\frac{1}{2}\tau^{a}Q_{\pm}\zeta({\vec{z}})\, e^{i{\vec{p}}({\vec{y}}-{\vec{z}})}\,, \end{displaymath} the bulk operators $A_{\mu}^{a}(x)$ and $P^{a}(x)$ are the axial current and pseudoscalar density in the $\chi$-basis, and the boundary fields for $x_0=0$ are defined as \begin{displaymath} \zeta({\vec{x}}) = U(x_{0}-a,{\vec{x}};0)\chi(x)|_{x_{0}=a} \qquad \overline{\zeta}({\vec{x}}) = \overline{\chi}(x)U^{\dagger}(x_{0}-a,{\vec{x}};0)|_{x_{0}=a}. \end{displaymath} To tune $\kappa$ to its critical value, we adopt the standard procedure of imposing a vanishing PCAC mass. To tune $z_{\mathrm{f}}$, we require the $\gamma_{5}\tau_{1}$-odd correlation function $g_{\mathrm{A}_{-}}^{11}$ to vanish, \begin{equation} m_\mathrm{PCAC} \equiv \frac{\partial_{0}^{\mathrm{latt}} g_{\mathrm{A}_{-}}^{11}(T/2)}{2g_{\mathrm{P}_{-}}^{11}(T/2)} = 0 \qquad g_{\mathrm{A-}} \equiv g_{\mathrm{A}_{-}}^{11}(T/2) = 0\,. \end{equation} The second condition in particular is sensitive to the symmetries broken by the lattice action \eqref{eq:action}, and both conditions together ensure that in the continuum limit all broken symmetries are indeed restored. Imposing different symmetry restoration conditions would give rise to different values of $\kappa$ and $z_{\mathrm{f}}$ that would differ amongst themselves by cutoff effects. 
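In practice both tuning observables are evaluated at the midpoint from the time-sliced correlators. The following is a minimal sketch (hypothetical arrays stand in for measured $g_{\mathrm{A}_-}^{11}(x_0)$ and $g_{\mathrm{P}_-}^{11}(x_0)$; not the simulation code), using a symmetric difference for the lattice time derivative at $x_0=T/2$:

```python
import numpy as np

def tuning_observables(g_a, g_p, a=1.0):
    """Evaluate m_PCAC and g_{A-} at the midpoint x0 = T/2 from
    time-sliced correlator arrays (index = x0/a).  The time derivative
    is the symmetric lattice difference."""
    t_mid = (len(g_a) - 1) // 2
    d0_ga = (g_a[t_mid + 1] - g_a[t_mid - 1]) / (2.0 * a)
    m_pcac = d0_ga / (2.0 * g_p[t_mid])
    return m_pcac, g_a[t_mid]

# hypothetical correlator data on a T/a = 8 lattice (9 time slices)
x0 = np.arange(9)
g_a = 0.30 * x0 - 1.2        # stand-in for g_{A-}^{11}(x0)
g_p = np.full(9, 1.5)        # stand-in for g_{P-}^{11}(x0)
m, ga_mid = tuning_observables(g_a, g_p)
```

Both conditions of the tuning are then statements that these two midpoint numbers vanish simultaneously.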
It will be important to study the sensitivity of $\kappa$ and $z_{\mathrm{f}}$ to the particular definitions used in order to better understand the intrinsic uncertainty in the determination of these counterterms. \section{Tuning results} To check the practicality of tuning both $\kappa$ and $z_{\mathrm{f}}$ non-perturbatively for the {$\chi$SF}, we perform the tuning at three values of the renormalisation scale $\mu=1/L$, corresponding to a hadronic ($\overline{g}^{2}$ fixed with $L = 1.436r_{0}$), an intermediate ($\overline{g}^{2}=2.4484$) and a perturbative ($\overline{g}^{2}=0.9944$) scale. The results at these three points are summarised in \Tab{\ref{tab:tuning}}. \begin{table} \begin{center} \begin{tabular}[c]{|r|l|l|l|l|}\hline \multicolumn{1}{|c|}{$L/a$} & \multicolumn{1}{|c|}{$\beta$} & \multicolumn{1}{|c|}{$z_{\mathrm{f}}^\ast$ ({$\chi$SF})} & \multicolumn{1}{|c|}{$\kappa_{\mathrm{cr}}$ ({$\chi$SF})} & \multicolumn{1}{|c|}{$\kappa_{\mathrm{cr}}$ ({sSF})} \\\hline \multicolumn{5}{|c|}{Tuning at a hadronic scale, $\mu \sim 300\textrm{ MeV}$} \\\hline 8 & 6.0219 & 1.8090\,(32) & 0.153530\,(24) & 0.153371\,(10) \\ 10 & 6.1628 & 1.7920\,(30) & 0.152134\,(17) & 0.152012\,(7) \\ 12 & 6.2885 & 1.7664\,(51) & 0.150815\,(22) & 0.150752\,(10) \\ 16 & 6.4956 & 1.7212\,(83) & 0.148945\,(25) & 0.148876\,(13) \\\hline \multicolumn{5}{|c|}{Tuning at an intermediate scale, $\mu \sim 1\textrm{ GeV}$} \\\hline 8 & 7.0197 & 1.5467\,(15) & 0.144501\,(13) & 0.144454\,(7) \\ 12 & 7.3551 & 1.5126\,(23) & 0.143113\,(12) & 0.143113\,(6) \\ 16 & 7.6101 & 1.4942\,(37) & 0.142112\,(13) & 0.142107\,(6) \\\hline \multicolumn{5}{|c|}{Tuning at a perturbative scale, $\mu \sim 30\textrm{ GeV}$} \\\hline 8 & 10.3000 & 1.29730\,(67) & 0.1354609\,(54) & 0.135457\,(5) \\ 12 & 10.6086 & 1.2954\,(11) & 0.1351758\,(56) & 0.135160\,(4) \\ 16 & 10.8910 & 1.2858\,(15) & 0.1348440\,(61) & 0.134849\,(6) \\\hline \end{tabular} \caption{Tuning results at a hadronic, intermediate and perturbative 
scale. We give the critical values, $z_{\mathrm{f}}^\ast$ and $\kappa_{\mathrm{cr}}$, calculated in this work for the {$\chi$SF}. For reference, we also give $\kappa_{\mathrm{cr}}$ for the {sSF}~\cite{ref1,ref2,ref3}.} \label{tab:tuning} \end{center} \end{table} We now briefly explain the procedure we used to perform the tuning, showing examples from our most difficult point at the hadronic scale and for the smallest lattice, $L/a=8$. The values of $\beta$ used are given in \Tab{\ref{tab:tuning}} and are taken from \Ref{\cite{alpha1}}. The tuning is performed in several steps. First, we calculate $m_\mathrm{PCAC}$ and $g_{\mathrm{A-}}$ at four values of $z_{\mathrm{f}}$, and for each value of $z_{\mathrm{f}}$, we use four values of $\kappa$, thus giving 16 pairs of $\kappa$ and $z_{\mathrm{f}}$. This allows us to determine $g_{A_{-}}$ as a function of $m_\mathrm{PCAC}$ for each value of $z_{\mathrm{f}}$, as illustrated in \Fig{\ref{fig:ga_vs_mpcac}}. \begin{figure} \begin{minipage}{214pt} \includegraphics[width=165pt,angle=270]{./plots/ga_vs_mpcac_np8.eps} \caption{Plot of $g_{\mathrm{A-}}$ versus $m_\mathrm{PCAC}$.} \label{fig:ga_vs_mpcac} \end{minipage} \begin{minipage}{214pt} \includegraphics[width=165pt,angle=270]{./plots/ga_vs_zf_np8.eps} \caption{Plot of $g_{\mathrm{A-}}$ versus $z_{\mathrm{f}}$. \label{fig:ga_vs_zf}} \end{minipage} \end{figure} For each value of $z_{\mathrm{f}}$, we perform a linear interpolation of $g_{\mathrm{A-}}$ in terms of $m_\mathrm{PCAC}$ to the point $m_\mathrm{PCAC}=0$. This determines the values of $g_{\mathrm{A-}}$ at $m_\mathrm{PCAC}=0$ for each of the four values of $z_{\mathrm{f}}$, as shown in \Fig{\ref{fig:ga_vs_zf}}. We now interpolate these values of $g_{\mathrm{A-}}$ as a function of $z_{\mathrm{f}}$ to the point of vanishing $g_{A_{-}}$, thus giving us the critical value $z_{\mathrm{f}}^\ast$. Next we determine $\kappa_{\mathrm{cr}}$. 
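Before doing so, the interpolation chain leading to $z_{\mathrm{f}}^\ast$ can be sketched numerically. In the toy example below, synthetic linear functions stand in for the 16 measured $(\kappa, z_{\mathrm{f}})$ points; with real data the fits are the same linear interpolations:

```python
import numpy as np

def linear_root(x, y):
    """Fit y(x) linearly and return the x where y = 0."""
    slope, intercept = np.polyfit(x, y, 1)
    return -intercept / slope

# synthetic measurements on the 4x4 grid of (kappa, z_f); the true
# critical point of this toy model is z_f* = 1.809
zf_vals = np.array([1.70, 1.75, 1.80, 1.85])
kappa_vals = np.array([0.1530, 0.1533, 0.1536, 0.1539])
m_pcac = lambda k, zf: 80.0 * (k - 0.15353)                    # ~ zf-independent
g_aminus = lambda k, zf: 0.5 * (zf - 1.809) + 0.02 * (k - 0.15353)

# step 1: at each z_f, interpolate g_{A-} in m_PCAC to the point m_PCAC = 0
ga_at_mzero = []
for zf in zf_vals:
    m = m_pcac(kappa_vals, zf)
    ga = g_aminus(kappa_vals, zf)
    slope, intercept = np.polyfit(m, ga, 1)
    ga_at_mzero.append(intercept)          # value of g_{A-} at m_PCAC = 0

# step 2: interpolate those values in z_f to the point g_{A-} = 0
zf_star = linear_root(zf_vals, np.array(ga_at_mzero))
```

The determination of $\kappa_{\mathrm{cr}}$ described next follows the same pattern, interpolating $m_\mathrm{PCAC}$ in $\kappa$ at each $z_{\mathrm{f}}$ and then evaluating at $z_{\mathrm{f}}^\ast$.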
Using the same 16 pairs of $\kappa$ and $z_{\mathrm{f}}$, we calculate $m_\mathrm{PCAC}$ as a function of $\kappa$ for each $z_{\mathrm{f}}$. This is shown in \Fig{\ref{fig:mpcac_vs_kappa}}. Note that $m_\mathrm{PCAC}$ has a very mild dependence on $z_{\mathrm{f}}$, so the four curves at fixed $z_{\mathrm{f}}$ are nearly indistinguishable. Interpolating in $\kappa$ to the point of vanishing PCAC mass, we obtain the critical values of $\kappa$ at each $z_{\mathrm{f}}$. The resulting values of $\kappa$ as a function of $z_{\mathrm{f}}$ are shown in \Fig{\ref{fig:kappac_vs_zf}}. \begin{figure} \begin{minipage}{214pt} \includegraphics[width=165pt,angle=270]{./plots/mpcac_vs_k_np8.eps} \caption{Plot of $m_\mathrm{PCAC}$ versus $\kappa$. \label{fig:mpcac_vs_kappa}} \end{minipage} \begin{minipage}{214pt} \includegraphics[width=165pt,angle=270]{./plots/kc_vs_zf_np8.eps} \caption{Plot of $\kappa_{\mathrm{cr}}$ versus $z_{\mathrm{f}}$. \label{fig:kappac_vs_zf}} \end{minipage} \end{figure} We now interpolate these results in $z_{\mathrm{f}}$ to the previously determined value of $z_{\mathrm{f}}^\ast$, thus determining the value of $\kappa_{\mathrm{cr}}$. A key observation of this work is the mild dependence of $m_\mathrm{PCAC}$ on $z_{\mathrm{f}}$, at least in the region near $\kappa_{\mathrm{cr}}$ and $z_{\mathrm{f}}^\ast$. This is easily seen in \Fig{\ref{fig:mpcac_vs_kappa}}. The consequence of this is clear in \Fig{\ref{fig:kappac_vs_zf}}:\ the determination of $\kappa_{\mathrm{cr}}$ also has a weak dependence on $z_{\mathrm{f}}^\ast$ and the errors of both are relatively independent. If this behaviour persists with dynamical calculations, it could ease the numerical effort necessary to perform the tuning, thus reducing the number of required simulations. \section{Conclusions} We have presented the results of the non-perturbative tuning of $\kappa$ and $z_{\mathrm{f}}$ for the {$\chi$SF} at three physical scales and for a range of lattice spacings.
This demonstrates that the tuning of these two coefficients is indeed feasible, at least in the quenched approximation. Moreover, we observe that the tunings of $z_{\mathrm{f}}$ and $\kappa$ are nearly independent. Note that even with non-improved Wilson fermions in the bulk, $\kappa$ and $z_{\mathrm{f}}$ are the only parameters that must be tuned within the {$\chi$SF} setup in order to guarantee bulk automatic {$O(a)$ improvement}, thus eliminating the need for the bulk counterterm, $c_{\mathrm{sw}}$, and for the many operator improvement coefficients necessary in the {sSF}. Our next step is to perform a universality test of this formulation as well as a demonstration that automatic {$O(a)$ improvement} holds. This can be done by reproducing a variety of quantities already computed in the standard setup. A natural candidate would be the computation of the step-scaling function of the pseudoscalar renormalisation factor, $Z_\mathrm{P}$, which could be compared to the results of~\cite{alpha1}. We recall that the {$\chi$SF} and the {sSF} are equivalent in the continuum limit, therefore, it is not necessary to recompute the entire evolution of an operator. The only quantity that must be recomputed is the renormalisation factor at the most non-perturbative scale. We also plan to explore whether the value of $\kappa_{\mathrm{cr}}$ determined from the finite volume simulations can be used in large volume, preserving the nice scaling behaviour obtained in \Refs{\cite{chilf1,Dimopoulos:2008sy}}, without the need for a large volume determination of $\kappa_{\mathrm{cr}}$. A lattice perturbation theory computation of $d_\mathrm{s}$ and $z_{\mathrm{f}}$ is also planned. The final goal is to perform dynamical simulations. \acknowledgments We thank S.~Sint and B.~Leder for many discussions and the private communication of the unpublished results of ref.~\cite{sint4}. We also acknowledge the support of the computer center in DESY-Zeuthen and the NW-grid in Lancaster.
\section{Introduction} The renormalized square of white noise (RSWN) was first introduced by Accardi--Lu--Volovich in \cite{AcLuVo}. Later, Sniady studied the connection between the RSWN and the free white noise (cf \cite{Sn}). Subsequently, its relation with the L\'evy processes on real Lie algebras was established in \cite{AcFrSk}. Recently, in \cite{AcDh1}--\cite{AcDh2}, the authors obtained the Fock representation of the RSWN. They began by defining the quadratic Fock space and the quadratic second quantization. They then characterized the operators on the one-particle Hilbert algebra whose quadratic second quantization is isometric (resp. unitary). A sufficient condition for the contractivity of the quadratic second quantization was derived too. It is well known that the first order second quantization $\Gamma_1(p)$ of an operator $p$, defined on the usual Fock space, is an orthogonal projection if and only if $p$ is an orthogonal projection (cf \cite{Par}). Within this paper, it is shown that the set of orthogonal projections $p$ for which the quadratic second quantization $\Gamma_2(p)$ is an orthogonal projection is quite small. More precisely, we prove that $\Gamma_2(p)$ is an orthogonal projection if and only if $p$ is a multiplication operator by a characteristic function $\chi_I$, $I\subset\mathbb{R}^d$. This paper is organized as follows. In Section \ref{section1}, we recall some main properties of the quadratic Fock space and the quadratic second quantization. The main result is proved in Section \ref{section2}.
\section{Quadratic Fock functor}\label{section1} The algebra of the renormalized square of white noise (RSWN) with test function Hilbert algebra $$ {\cal A} := L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d) $$ is the $*$-Lie-algebra, with central element denoted $1$, generators $\{B^+_f, B_h, N_g:\, f,g,h\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)\}$, involution $$ (B^+_f)^*=B_f \qquad , \qquad N_f^*=N_{\bar{f}} $$ and commutation relations \begin{eqnarray}\label{commutation} [B_f,B^+_g]=2c\langle f,g\rangle+4N_{\bar fg},\,\;[N_a,B^+_f]=2B^+_{af} \end{eqnarray} $$[B^+_f,B^+_g]=[B_f,B_g]=[N_a,N_{a'}]=0,$$ for all $a$, $a'$, $f$, $g\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$. The Fock representation of the RSWN is characterized by a cyclic vector $\Phi$ satisfying $$ B_f\Phi=N_g\Phi=0 $$ for all $f,g\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$ (cf \cite{AcAmFr}, \cite{AcFrSk}). \subsection{Quadratic Fock space} In this subsection, we recall some basic definitions and properties of the quadratic exponential vectors and the quadratic Fock space. We refer the interested reader to \cite{AcDh1}--\cite{AcDh2} for more details. The quadratic Fock space $\Gamma_2(L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d))$ is the closed linear span of $\big\{B^{+n}_f\Phi,\; n\in\mathbb{N},\; f\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)\big\}$, where $B^{+0}_f\Phi=\Phi$, for all $f\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$. From \cite{AcDh2} it follows that $\Gamma_2(L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d))$ is an interacting Fock space. Moreover, the scalar product between two $n$-particle vectors is given by the following (cf \cite{AcDh1}). \begin{proposition} \label{prop1}For all $f,\,g\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$, one has \begin{eqnarray*} \langle B^{+n}_f\Phi,B^{+n}_g\Phi\rangle&=&c\sum^{n-1}_{k=0}2^{2k+1}{n!(n-1)!\over((n-k-1)!)^2}\,\langle f^{k+1}, g^{k+1}\rangle\\ \;\;\;\;\;&&\langle B^{+(n-k-1)}_f\Phi,B^{+(n-k-1)}_g\Phi\rangle.
\end{eqnarray*} \end{proposition} The quadratic exponential vector of an element $f\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$, if it exists, is given by $$ \Psi(f)=\sum_{n\geq0}\frac{B^{+n}_f\Phi}{n!} $$ where by definition \begin{equation}\label{Psi(0)Phi} \Psi(0)= B^{+0}_f\Phi = \Phi. \end{equation} It is proved in \cite{AcDh1} that the quadratic exponential vector $\Psi(f)$ exists if and only if $\|f\|_\infty<\frac{1}{2}$. Furthermore, the scalar product between two exponential vectors, $\Psi(f)$ and $\Psi(g)$, is given by \begin{equation}\label{Form} \langle \Psi(f),\Psi(g)\rangle=e^{-\frac{c}{2}\int_{\mathbb{R}^d}\ln(1-4\bar{f}(s)g(s))ds}. \end{equation} We refer to \cite{AcDh1} for the proof of the following theorem. \begin{theorem} The quadratic exponential vectors are linearly independent. Moreover, the set of quadratic exponential vectors is a total set in $\Gamma_2(L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d))$. \end{theorem} \subsection{Quadratic second quantization} For any linear operator $T$ on $L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$, we define its quadratic second quantization, whenever it is well defined, by $$\Gamma_2(T)\Psi(f)=\Psi(Tf)$$ for all $f\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$. Note that in \cite{AcDh2}, the authors proved that $\Gamma_2(T)$ is well defined if and only if $T$ is a contraction on $L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$ with respect to the norm $\|.\|_\infty$. Moreover, they gave a characterization of the operators $T$ on $L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$ whose quadratic second quantization is isometric (resp. unitary). The contractivity of $\Gamma_2(T)$ was also investigated.
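The closed form (\ref{Form}) can be cross-checked numerically against the recursion of Proposition \ref{prop1}. The sketch below is only an illustration, not part of the theory: it assumes $d=1$ and constant test functions supported on $[0,1]$ with product of amplitudes $a=fg$, so that $\langle f^{k+1},g^{k+1}\rangle=a^{k+1}$ and (\ref{Form}) reduces to $(1-4a)^{-c/2}$.

```python
import math

def nparticle_inner(a, c, nmax):
    # S[n] = <B^{+n}_f Phi, B^{+n}_g Phi> for constant test functions on [0,1],
    # with a = <f, g>, computed via the recursion of Proposition 1 (S[0] = 1).
    S = [1.0]
    for n in range(1, nmax + 1):
        s = 0.0
        for k in range(n):
            coef = (c * 2 ** (2 * k + 1) * math.factorial(n)
                    * math.factorial(n - 1) / math.factorial(n - k - 1) ** 2)
            s += coef * a ** (k + 1) * S[n - k - 1]
        S.append(s)
    return S

def exp_inner_series(a, c, nmax):
    # <Psi(f), Psi(g)> = sum_n S_n / (n!)^2, truncated at nmax
    S = nparticle_inner(a, c, nmax)
    return sum(S[n] / math.factorial(n) ** 2 for n in range(nmax + 1))

a, c = 0.04, 1.0                       # amplitudes 0.2 < 1/2, so Psi exists
closed_form = (1 - 4 * a) ** (-c / 2)  # exp(-(c/2) ln(1 - 4a)) over [0, 1]
print(exp_inner_series(a, c, 40), closed_form)
```

The terms of the series decay geometrically at rate $4a$, consistent with the existence condition $\|f\|_\infty<\frac{1}{2}$.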
\section{Main result}\label{section2} Given a contraction $p$ on $L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$ with respect to $\|.\|_\infty$, the aim of this section is to determine under which condition $\Gamma_2(p)$ is an orthogonal projection on $\Gamma_2(L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d))$. \begin{lemma}\label{lem} For all $f,g\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$, one has $$\langle B^{+n}_f\Phi,B^{+n}_g\Phi\rangle=n!\frac{d^n}{dt^n}\Big|_{t=0}\langle\Psi(\sqrt{t}f),\Psi(\sqrt{t}g)\rangle.$$ \end{lemma} \begin{proof} Let $f,\,g\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$ be such that $\|f\|_\infty>0$ and $\|g\|_\infty>0$. Consider $0\leq t\leq \delta$, where $$\delta<\frac{1}{4}\inf\Big(\frac{1}{\|f\|_\infty^2},\frac{1}{\|g\|_\infty^2}\Big).$$ It is clear that $\|\sqrt{t}f\|_\infty< \frac{1}{2}$ and $\|\sqrt{t}g\|_\infty<\frac{1}{2}$. Moreover, one has $$\langle\Psi(\sqrt{t}f),\Psi(\sqrt{t}g)\rangle=\sum_{m\geq0}\frac{t^m}{(m!)^2}\langle B^{+m}_f\Phi,B^{+m}_g\Phi\rangle.$$ Note that for all $m\geq n$, one has \begin{eqnarray*} &&\frac{d^n}{dt^n}\Big(\frac{t^m}{(m!)^2}\langle B^{+m}_{f}\Phi, B^{+m}_{g}\Phi\rangle\Big)\\ &&=\frac{m!t^{m-n}}{(m!)^2(m-n)!}\langle B^{+m}_{f}\Phi, B^{+m}_{g}\Phi\rangle\\ &&=\frac{t^{m-n}}{m!(m-n)!}\langle B^{+m}_{f}\Phi, B^{+m}_{g}\Phi\rangle.
\end{eqnarray*} Put $$K_m=\frac{\delta^{m-n}}{m!(m-n)!}\|B^{+m}_{f}\Phi\| \|B^{+m}_{g}\Phi\|.$$ Then, from Proposition \ref{prop1}, it follows that \begin{eqnarray*} ||B^{+m}_f\Phi||^2&=&c\sum_{k=0}^{m-1}2^{2k+1}\frac{m!(m-1)!}{((m-k-1)!)^2}\|f^{k+1}\|^2_2\|B_f^{+(m-k-1)}\Phi\|^2\\ &=&c\sum_{k=1}^{m-1}2^{2k+1}\frac{m!(m-1)!}{((m-k-1)!)^2}\|f^{k+1}\|^2_2\|B_f^{+(m-k-1)}\Phi\|^2\\ &&+2mc\|f\|^2_2\|B^{+(m-1)}_f\Phi\|^2\\ &=&c\sum_{k=0}^{m-2}2^{2k+3}\frac{m!(m-1)!}{(((m-1)-k-1)!)^2}\|f^{k+2}\|^2_2\|B_f^{+((m-1)-k-1)}\Phi\|^2\\ &&+2mc\|f\|^2_2\|B^{+(m-1)}_f\Phi\|^2\\ &\leq&\Big(4m(m-1)\|f\|^2_\infty\Big)\Big[c\sum_{k=0}^{m-2}2^{2k+1}\frac{(m-1)!(m-2)!}{(((m-1)-k-1)!)^2}\|f^{k+1}\|^2_2\\ &&\;\;\;\;\|B_f^{+((m-1)-k-1)}\Phi\|^2\Big]+2mc\|f\|^2_2\|B^{+(m-1)}_f\Phi\|^2. \end{eqnarray*} But, one has $$||B^{+(m-1)}_f\Phi||^2=c\sum_{k=0}^{m-2}2^{2k+1}\frac{(m-1)!(m-2)!}{(((m-1)-k-1)!)^2}\|f^{k+1}\|^2_2\|B_f^{+((m-1)-k-1)}\Phi\|^2.$$ Therefore, one gets $$||B^{+m}_f\Phi||^2\leq\Big[4m(m-1)\|f\|^2_\infty+2m\|f\|^2_2\Big]\|B^{+(m-1)}_f\Phi\|^2.$$ This proves that $$\frac{K_m}{K_{m-1}} \leq \frac{\sqrt{4m(m-1)\|f\|^2_\infty+2m\|f\|^2_2}\sqrt{4m(m-1)\|g\|^2_\infty+2m\|g\|^2_2}}{m(m-n)}\,\delta.$$ It follows that $$\lim_{m\rightarrow\infty}\frac{K_m}{K_{m-1}} \leq 4\|f\|_\infty\|g\|_\infty\delta<1.$$ Hence, the series $\sum_{m}K_m$ converges. Finally, we have proved that \begin{eqnarray}\label{chi} \frac{d^n}{dt^n}\langle\Psi(\sqrt{t}f),\Psi(\sqrt{t}g)\rangle=\sum_{m\geq n}\frac{t^{m-n}}{m!(m-n)!}\langle B^{+m}_{f}\Phi, B^{+m}_{g}\Phi\rangle. \end{eqnarray} Thus, taking $t=0$ in the right hand side of (\ref{chi}), the result of the above lemma follows. \end{proof} As a consequence of the above lemma, we prove the following. \begin{lemma}\label{lemm} Let $p$ be a contraction on $L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$ with respect to the norm $\|.\|_\infty$.
If $\Gamma_2(p)$ is an orthogonal projection on $\Gamma_2(L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d))$, then one has $$\langle B^{+n}_{p(f)}\Phi, B^{+n}_{p(g)}\Phi\rangle=\langle B^{+n}_{p(f)}\Phi, B^{+n}_{g}\Phi\rangle=\langle B^{+n}_{f}\Phi, B^{+n}_{p(g)}\Phi\rangle$$ for all $f,\,g\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$ and all $n\geq1$. \end{lemma} \begin{proof} If $\Gamma_2(p)$ is an orthogonal projection on $\Gamma_2(L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d))$, then \begin{eqnarray}\label{ab} \langle\Psi(\sqrt{t}p(f)),\Psi(\sqrt{t}p(g))\rangle=\langle\Psi(\sqrt{t}p(f)),\Psi(\sqrt{t}g)\rangle=\langle\Psi(\sqrt{t}f),\Psi(\sqrt{t}p(g))\rangle \end{eqnarray} for all $f,\,g\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$ (with $\|f\|_\infty>0$ and $\|g\|_\infty>0$) and all $0\leq t\leq\delta$ such that $$\delta<\frac{1}{4}\inf\Big(\frac{1}{\|f\|_\infty^2},\frac{1}{\|g\|_\infty^2}\Big).$$ Indeed, these identities follow from the fact that $\Gamma_2(p)$ is self-adjoint and idempotent. Therefore, the result of the above lemma follows from Lemma \ref{lem} and identity (\ref{ab}). \end{proof} Lemma \ref{lemm} ensures that the following result holds true. \begin{lemma}\label{lemmm} Let $p$ be a contraction on $L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$ with respect to the norm $\|.\|_\infty$. If $\Gamma_2(p)$ is an orthogonal projection on $\Gamma_2(L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d))$, then one has \begin{eqnarray}\label{sassi} \langle (p(f))^n,(p(g))^n\rangle=\langle f^n,(p(g))^n\rangle= \langle (p(f))^n,g^n\rangle \end{eqnarray} for all $f,\,g\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$ and all $n\geq1$. \end{lemma} \begin{proof} Suppose that $\Gamma_2(p)$ is an orthogonal projection on $\Gamma_2(L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d))$. We proceed by induction.
- For $n=1$: Lemma \ref{lemm} implies that $$\langle B^{+}_{p(f)}\Phi, B^{+}_{p(g)}\Phi\rangle=\langle B^{+}_{p(f)}\Phi, B^{+}_{g}\Phi\rangle=\langle B^{+}_{f}\Phi, B^{+}_{p(g)}\Phi\rangle$$ for all $f,\,g\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$. Using the fact that $B_f\Phi=0$ for all $f\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$ and the commutation relations in (\ref{commutation}), we get \begin{eqnarray*} \langle B^{+}_{p(f)}\Phi, B^{+}_{p(g)}\Phi\rangle&=&\langle \Phi,B_{p(f)}B^{+}_{p(g)}\Phi\rangle=2c\langle p(f),p(g)\rangle\\ \langle B^{+}_{p(f)}\Phi, B^{+}_{g}\Phi\rangle&=&\langle \Phi,B_{p(f)}B^{+}_{g}\Phi\rangle=2c\langle p(f),g\rangle\\ \langle B^{+}_{f}\Phi, B^{+}_{p(g)}\Phi\rangle&=&\langle \Phi,B_{f}B^{+}_{p(g)}\Phi\rangle=2c\langle f,p(g)\rangle. \end{eqnarray*} This proves that identity (\ref{sassi}) holds true for $n=1$. - Let $n\geq1$ and suppose that identity (\ref{sassi}) is satisfied. Then, from Lemma \ref{lemm}, it follows that for all $f,\,g\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$ one has \begin{eqnarray}\label{rg} \langle B^{+(n+1)}_{p(f)}\Phi, B^{+(n+1)}_{p(g)}\Phi\rangle=\langle B^{+(n+1)}_{p(f)}\Phi, B^{+(n+1)}_{g}\Phi\rangle=\langle B^{+(n+1)}_{f}\Phi, B^{+(n+1)}_{p(g)}\Phi\rangle.
\end{eqnarray} Identity (\ref{rg}) and Proposition \ref{prop1} imply that \begin{eqnarray}\label{roch} \langle B^{+(n+1)}_{p(f)}\Phi, B^{+(n+1)}_{p(g)}\Phi\rangle&=&2^{2n+3}c n!(n+1)!\langle (p(f))^{n+1}, (p(g))^{n+1}\rangle\nonumber\\ &&+c\sum^{n-1}_{k=0}2^{2k+1}{n!(n+1)!\over((n-k)!)^2}\,\langle (p(f))^{k+1}, (p(g))^{k+1}\rangle\nonumber\\ &&\;\;\;\;\;\;\;\langle B^{+(n-k)}_{p(f)}\Phi,B^{+(n-k)}_{p(g)}\Phi\rangle\nonumber\\ &=&2^{2n+3}c n!(n+1)!\langle f^{n+1}, (p(g))^{n+1}\rangle\nonumber\\ &&+c\sum^{n-1}_{k=0}2^{2k+1}{n!(n+1)!\over((n-k)!)^2}\,\langle f^{k+1}, (p(g))^{k+1}\rangle\\ &&\;\;\;\;\;\;\;\langle B^{+(n-k)}_f\Phi,B^{+(n-k)}_{p(g)}\Phi\rangle\nonumber\\ &=&2^{2n+3}c n!(n+1)!\langle (p(f))^{n+1}, g^{n+1}\rangle\nonumber\\ &&+c\sum^{n-1}_{k=0}2^{2k+1}{n!(n+1)!\over((n-k)!)^2}\,\langle (p(f))^{k+1}, g^{k+1}\rangle\nonumber\\ &&\;\;\;\;\;\;\;\langle B^{+(n-k)}_{p(f)}\Phi,B^{+(n-k)}_g\Phi\rangle.\nonumber \end{eqnarray} Note that by induction assumption, one has \begin{eqnarray}\label{sam} \langle (p(f))^{k+1}, (p(g))^{k+1}\rangle=\langle f^{k+1}, (p(g))^{k+1}\rangle=\langle (p(f))^{k+1}, g^{k+1}\rangle \end{eqnarray} for all $k=0,\dots,n-1$. Therefore, from Lemma \ref{lemm} and identity (\ref{sam}), one gets \begin{eqnarray*} &&c\sum^{n-1}_{k=0}2^{2k+1}{n!(n+1)!\over((n-k)!)^2}\,\langle (p(f))^{k+1}, (p(g))^{k+1}\rangle\langle B^{+(n-k)}_{p(f)}\Phi,B^{+(n-k)}_{p(g)}\Phi\rangle\\ &&=c\sum^{n-1}_{k=0}2^{2k+1}{n!(n+1)!\over((n-k)!)^2}\,\langle f^{k+1}, (p(g))^{k+1}\rangle\langle B^{+(n-k)}_f\Phi,B^{+(n-k)}_{p(g)}\Phi\rangle\\ &&=c\sum^{n-1}_{k=0}2^{2k+1}{n!(n+1)!\over((n-k)!)^2}\,\langle (p(f))^{k+1}, g^{k+1}\rangle\langle B^{+(n-k)}_{p(f)}\Phi,B^{+(n-k)}_g\Phi\rangle. \end{eqnarray*} Finally, from (\ref{roch}) one can conclude. 
\end{proof} The following lemma shows that a contraction $p$ on $L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$ with respect to $\|.\|_\infty$, such that $\Gamma_2(p)$ is an orthogonal projection on $\Gamma_2(L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d))$, must commute with complex conjugation. \begin{lemma}\label{bar} Let $p$ be a contraction on $L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$ with respect to the norm $\|.\|_\infty$. If $\Gamma_2(p)$ is an orthogonal projection on $\Gamma_2(L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d))$, then $$p(\bar{f})=\overline{p(f)}$$ for all $f\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$. \end{lemma} \begin{proof} Let $p$ be a contraction on $L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$ with respect to the norm $\|.\|_\infty$ such that $\Gamma_2(p)$ is an orthogonal projection on $\Gamma_2(L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d))$. Then, from Lemma \ref{lemmm} it is clear that $p=p^*=p^2$ (taking $n=1$ in (\ref{sassi})). Moreover, for all $f_1,f_2,g_1,g_2\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$, one has \begin{eqnarray*} \langle (p(f_1+f_2))^2,\,(g_1+g_2)^2\rangle=\langle (p(f_1+f_2))^2,\,(p(g_1+g_2))^2\rangle. \end{eqnarray*} It follows that \begin{eqnarray}\label{p} &&\langle (p(f_1))^2,g_1^2+g_2^2\rangle+\langle (p(f_2))^2,g_1^2+g_2^2\rangle+4\langle p(f_1)p(f_2),g_1g_2\rangle\nonumber\\ &&=\langle (p(f_1))^2,(p(g_1))^2+(p(g_2))^2\rangle+\langle (p(f_2))^2,(p(g_1))^2+(p(g_2))^2\rangle\\ &&+4\langle p(f_1)p(f_2),p(g_1)p(g_2)\rangle.\nonumber \end{eqnarray} Then, using (\ref{sassi}) and (\ref{p}), we obtain \begin{eqnarray}\label{bah} \langle p(f_1)p(f_2),g_1g_2\rangle=\langle p(f_1)p(f_2),p(g_1)p(g_2)\rangle \end{eqnarray} for all $f_1,f_2,g_1,g_2\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$. Now, denote by $\mathcal{M}_a$ the multiplication operator by the function $a\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$.
Then, identity (\ref{bah}) implies that $$\langle \mathcal{M}_{p(f_2)\bar{g_2}}p(f_1),g_1\rangle=\langle \mathcal{M}_{p(f_2)\overline{p(g_2)}}p(f_1),p(g_1)\rangle$$ for all $f_1,f_2,g_1,g_2\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$. This gives \begin{equation}\label{multiplication} \mathcal{M}_{p(f_2)\bar{g_2}}p=p\mathcal{M}_{p(f_2)\overline{p(g_2)}}p \end{equation} for all $f_2, g_2\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$. Taking the adjoint in (\ref{multiplication}), one gets \begin{equation}\label{adjoint} p\mathcal{M}_{\overline{p(f_2)}\,g_2}=p\mathcal{M}_{\overline{p(f_2)}p(g_2)}p. \end{equation} Note that, for all $f,g\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$, identity (\ref{multiplication}) implies that \begin{eqnarray}\label{identification} \mathcal{M}_{p(f)\bar{g}}p=p\mathcal{M}_{p(f)\,\overline{p(g)}}p. \end{eqnarray} Moreover, from (\ref{adjoint}), one has \begin{eqnarray}\label{iden} p\mathcal{M}_{p(f)\,\overline{p(g)}}p=p\mathcal{M}_{f\,\overline{p(g)}}. \end{eqnarray} Therefore, identities (\ref{identification}) and (\ref{iden}) yield \begin{equation}\label{dual} \mathcal{M}_{p(f)\bar{g}}p=p\mathcal{M}_{f\,\overline{p(g)}} \end{equation} for all $f,g\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$. Hence, for all $f,g,h\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$, identity (\ref{dual}) gives \begin{equation}\label{dhahri} p(f\,\overline{p(g)}h)=p(f)\bar{g}p(h). \end{equation} Taking $f=g=p(h)$ in (\ref{dual}), we get \begin{equation}\label{17} \mathcal{M}_{|p(h)|^2}p=p\mathcal{M}_{|p(h)|^2}. \end{equation} Then, if we put $f=h=p(g)$ in (\ref{dhahri}), one has $$p(\overline{p(g)}\,p(g)^2)=p(|p(g)|^2p(g))=(p(g))^2\bar{g}.$$ But, from (\ref{17}), one has $$p(|p(g)|^2p(g))=(p\mathcal{M}_{|p(g)|^2})(p(g))=\mathcal{M}_{|p(g)|^2}p(p(g))=|p(g)|^2p(g).$$ Hence, one obtains \begin{equation}\label{fin} |p(g)|^2p(g)=(p(g))^2\bar{g} \end{equation} for all $g\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$.
Now, let $g$ be a real function in $L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$. So, the polar decomposition of $p(g)$ is given by $$p(g)=|p(g)|e^{i\theta_{p(g)}}.$$ Thus, identity (\ref{fin}) implies that $$|p(g)|^3e^{-i\theta_{p(g)}}=|p(g)|^2g.$$ This proves that $\theta_{p(g)}(x)=k_x\pi$, $k_x\in\mathbb{Z}$, for all $x\in\mathbb{R}^d$ such that $p(g)(x)\neq0$. Therefore, $p(g)$ is a real function. Now, take $f=f_1+if_2\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$, where $f_1,f_2$ are real functions on $\mathbb{R}^d$. It is clear that $$p(\bar{f})=p(f_1-if_2)=\overline{p(f_1)+ip(f_2)}=\overline{p(f)}.$$ This completes the proof of the above lemma. \end{proof} As a consequence of Lemmas \ref{lemmm} and \ref{bar}, we prove the following theorem. \begin{theorem} Let $p$ be a contraction on $L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$ with respect to the norm $\|.\|_\infty$. Then, $\Gamma_2(p)$ is an orthogonal projection on $\Gamma_2(L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d))$ if and only if $p=\mathcal{M}_{\chi_I}$, where $\mathcal{M}_{\chi_I}$ is the multiplication operator by a characteristic function $\chi_I$, $I\subset\mathbb{R}^d$. \end{theorem} \begin{proof} Note that if $p=\mathcal{M}_{\chi_I}$, $I\subset\mathbb{R}^d$, then from identity (\ref{Form}) it is clear that \begin{eqnarray*} e^{-\frac{c}{2}\int_{I}\ln(1-4\bar{f}(s)g(s))ds}&=&\langle \Psi(p(f)),\Psi(g)\rangle\\ &=&\langle \Psi(f),\Psi(p(g))\rangle\\ &=&\langle \Psi(p(f)),\Psi(p(g))\rangle \end{eqnarray*} for all $f,\,g\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$ such that $\|f\|_\infty<\frac{1}{2}$ and $\|g\|_\infty<\frac{1}{2}$. Hence, $\Gamma_2(p)$ is an orthogonal projection on $\Gamma_2(L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d))$. Now, suppose that $\Gamma_2(p)$ is an orthogonal projection on $\Gamma_2(L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d))$.
Then, Lemma \ref{lemmm} implies that \begin{eqnarray*} \langle (p(f))^n,(p(g))^n\rangle=\langle f^n,(p(g))^n\rangle= \langle (p(f))^n,g^n\rangle \end{eqnarray*} for all $f,\,g\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$ and all $n\geq1$. In particular, if $n=2$ one has \begin{eqnarray*} \langle (p(f_1+\bar{f_2}))^2, g^2\rangle=\langle (f_1+\bar{f_2})^2,(p(g))^2\rangle \end{eqnarray*} for all $f_1,\,f_2,\,g\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$. This gives \begin{eqnarray}\label{samiha} &&\langle (p(f_1))^2, g^2\rangle+2\langle p(f_1)p(\bar{f_2}), g^2\rangle+ \langle (p(\bar{f_2}))^2, g^2\rangle\nonumber\\ &&=\langle f_1^2, (p(g))^2\rangle+2\langle f_1\bar{f_2}, (p(g))^2\rangle+ \langle (\bar{f_2})^2, (p(g))^2\rangle. \end{eqnarray} Using identity (\ref{samiha}) and Lemma \ref{lemmm}, we get $$\langle p(f_1)p(\bar{f_2}), g^2\rangle=\langle f_1\bar{f_2}, (p(g))^2\rangle.$$ This yields \begin{eqnarray}\label{erri} \int_{\mathbb{R}^d}\bar{f_1}(x)f_2(x)(p(g))^2(x)dx=\int_{\mathbb{R}^d}\overline{p(f_1)}(x)p(f_2)(x)g^2(x)dx. \end{eqnarray} But, from Lemma \ref{bar}, one has $\overline{p(f)}=p(\bar{f})$, for all $f\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$. Then, identity (\ref{erri}) implies that \begin{eqnarray*} \langle f_1, \mathcal{M}_{(p(g))^2}f_2\rangle=\langle f_1, (p\mathcal{M}_{g^2}p)f_2\rangle \end{eqnarray*} for all $f_1,\,f_2,\,g\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$. Hence, one obtains \begin{eqnarray}\label{war} \mathcal{M}_{(p(g))^2}=p\mathcal{M}_{g^2}p. \end{eqnarray} In particular, for $g=\chi_I$ where $I\subset\mathbb{R}^d$, one has \begin{equation}\label{hed} \mathcal{M}_{(p(\chi_I))^2}=p\mathcal{M}_{\chi_I}p. \end{equation} As $I$ increases to $\mathbb{R}^d$, the operator $\mathcal{M}_{\chi_I}$ converges to the identity of $L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$ in the strong operator topology. From (\ref{hed}), it follows that $$p(f)=p^2(f)=\lim_{I\uparrow\mathbb{R}^d}\mathcal{M}_{(p(\chi_I))^2}f$$ for all $f\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$.
But, the set of multiplication operators is closed in the strong operator topology. This proves that $p=\mathcal{M}_{a}$, where $a\in L^2(\mathbb{R}^d)\cap L^\infty(\mathbb{R}^d)$. Note that $p=p^2$ is a positive operator. This implies that $a$ is a positive function. Moreover, one has $p^n=p$ for all $n\in\mathbb{N}^*$. This gives $\mathcal{M}_{a^n}=\mathcal{M}_{a}$ for all $n\in\mathbb{N}^*$. It follows that $a^n=a$ for all $n\in\mathbb{N}^*$. Therefore, the function $a$ is necessarily a characteristic function on $\mathbb{R}^d$. \end{proof} \bigskip {\bf\large Acknowledgments}\bigskip The author gratefully acknowledges stimulating discussions with Eric Ricard.
\section{Introduction\protect\bigskip} \subsection{The model} Functional data analysis has become in recent years an important field of statistical research, with many possible applications in numerous domains (climatology, remote sensing, linguistics, economics, \ldots ). When one is interested in a phenomenon continuously indexed by time for instance, it seems appropriate to consider this phenomenon as a whole curve. Practical aspects also point in this direction, since current technologies allow data to be collected on fine discretized grids. The papers by Ramsay and Dalzell (1991) and Frank and Friedman (1993) began to pave the way for this idea of taking into account the functional nature of these data, and highlighted the drawbacks of a multivariate point of view. Major references in this domain are the monographs by Ramsay and Silverman (2002, 2005), which give an overview of the philosophy and the basic models involving functional data. Important nonparametric issues are treated in the monograph by Ferraty and Vieu (2006). A classical problem in statistics is to predict the value of a variable of interest $Y$ knowing a covariate $X$. An underlying model can then be written \begin{equation*} Y=r(X)+\varepsilon , \end{equation*} \noindent where $r$ is an operator representing the link between the variables $X$ and $Y$ and $\varepsilon $ is a noise random variable. In our functional data context, we consider that both variables $X$ and $Y$ are of a functional nature, \textit{i.e.} are random functions defined on an interval $I=[a,b]$ of $\mathbb{R}$. We assume that $X$ and $Y$ take values in the space $L^{2}(I)$ of functions square integrable on $I$. In the following, and in order to simplify, we assume that $I=[0,1]$, which is not restrictive since the simple transformation $x\longmapsto (x-a)/(b-a)$ allows to come back to that case. We assume as well that $X$ and $Y$ are centered.
The issue of estimating the means $\mathbb{E}\left( X\right) $ and $\mathbb{E}\left( Y\right) $ in order to center the data was exhaustively treated in the literature and is of minor interest in our setting. The objective of this paper is to consider the model with functional input and output: \begin{equation} Y\left( t\right) =\int_{0}^{1}\mathcal{S}\left( s,t\right) X\left( s\right) ds+\varepsilon \left( t\right) ,\quad \mathbb{E}\left( \varepsilon |X\right) =0, \label{model-kernel} \end{equation} \noindent where $\mathcal{S}\left( \cdot ,\cdot \right) $ is an integrable kernel : $\int \int \left\vert \mathcal{S}\left( s,t\right) \right\vert dsdt<+\infty $. The kernel $\mathcal{S}$ may be represented on a $3D$-plot by a surface. The functional historical model (Malfait and Ramsay, 2003) is \begin{equation*} Y\left( t\right) =\int_{0}^{t}\mathcal{S}_{hist}\left( s,t\right) X\left( s\right) ds+\varepsilon \left( t\right) , \end{equation*} and may be recovered from the first model by setting $\mathcal{S}\left( s,t\right) =\mathcal{S}_{hist}\left( s,t\right) 1\!1_{\left\{ s\leq t\right\} }$; the surface defining $\mathcal{S}$ is then null when $\left( s,t\right) $ is located in the triangle above the first diagonal of the unit square. Model (\ref{model-kernel}) may be viewed as a random Fredholm equation where both the input and the output are random (or noisy). This model has already been the subject of some studies, as for instance Chiou, M\"{u}ller and Wang (2004) or Yao, M\"{u}ller and Wang (2005), which propose an estimation of the functional parameter $\mathcal{S}$ using functional PCAs of the curves $X$ and $Y$. One of the first studies about this model is due to Cuevas, Febrero and Fraiman (2002), who considered the case of a fixed design. In this somewhat different context, they study an estimation of the functional coefficient of the model and give consistency results for this estimator.
Recently, Antoch \textit{et al.} (2008) proposed a spline estimator of the functional coefficient in the functional linear model with a functional response, while Aguilera, Oca\~{n}a and Valderrama (2008) proposed a wavelet estimation of this coefficient. We start with a sample $\left( Y_{i}, X_{i} \right)_{1 \leq i \leq n}$ with the same law as $(Y,X)$, and we consider a new observation $X_{n+1}$. Throughout the paper, our goal will be to predict the value of $Y_{n+1}$. The model (\ref{model-kernel}) may be revisited if one acknowledges that $\int_{0}^{1}\mathcal{S}\left( s,t\right) X\left( s\right) ds$ is the image of $X$ through a general linear integral operator. Denoting $S$ the operator defined on and with values in $L^{2}\left( \left[ 0,1\right] \right) $ by $\left( Sf\right) \left( t\right) =\int_{0}^{1}\mathcal{S}\left( s,t\right) f\left( s\right) ds$, we obtain from (\ref{model-kernel}) that $Y\left( t\right) =S\left( X\right) \left( t\right) +\varepsilon \left( t\right) $ or \begin{equation*} Y=SX+\varepsilon ,\quad \text{where} \quad S\left( X\right) \left( t\right) =\int \mathcal{S}\left( s,t\right) X\left( s\right) ds. \end{equation*} This fact motivates a more general framework : it may be interesting to consider Sobolev spaces $W^{m,p}$ instead of $L^{2}\left( \left[ 0,1\right] \right) $ in order to allow some intrinsic smoothness for the data. It turns out that, amongst this class of spaces, we should privilege Hilbert spaces. Indeed the unknown parameter is a linear operator and spectral theory of these operators acting on Hilbert space allows enough generality, intuitive approaches and easier practical implementation.
That is why in all the sequel we consider a sample $\left( Y_{i},X_{i}\right) _{1\leq i\leq n}$ of independent, identically distributed pairs, where $X$ and $Y$ take values in the same Hilbert space $H$ endowed with inner product $\left\langle \cdot ,\cdot \right\rangle $ and associated norm $\left\Vert \cdot \right\Vert .$ Obviously the model we consider generalizes the regression model with a real output $y$: \begin{equation} y=\int_{0}^{1}\beta \left( s\right) X\left( s\right) ds+\varepsilon =\left\langle \beta ,X\right\rangle +\varepsilon , \label{scalar-model} \end{equation} and all our results hold in this direction. The literature is wide about (\ref{scalar-model}) but we picked articles which are close to our present concerns and will be cited again later in this work : Yao, M\"{u}ller and Wang (2005), Hall and Horowitz (2007), Crambes, Kneip, Sarda (2009)... Since the unknown parameter is here an operator, the infinite-dimensional equivalent of a matrix, it is worth giving some basic information about operator theory on Hilbert spaces. The interested reader can find basics and complements about this topic in the following reference monographs : Akhiezer and Glazman (1981), Dunford and Schwartz (1988), Gohberg, Goldberg and Kaashoek (1991). We denote by $\mathcal{L}$ the space of bounded (hence continuous) operators on a Hilbert space $H$. For our statistical or probabilistic purposes, we restrict this space to the space of compact operators $\mathcal{L}_{c}$.
Then, any compact and symmetric operator $T$ belonging to $\mathcal{L}_{c}$ admits a unique Schmidt decomposition of the form $T=\sum_{j\in \mathbb{N}}\mu _{j}\phi _{j}\otimes \phi _{j}$ where the $\left( \mu _{j},\phi _{j}\right) $'s are called the eigenelements of $T$, and the tensor product notation $\otimes $ is defined in the following way: for any functions $f$, $g$ and $h$ belonging to $H$, we define $f\otimes g=\left\langle g,.\right\rangle f$ or \begin{equation*} \left[ f\otimes g\right] \left( h\right) \left( s\right) =\left( \int g\left( t\right) h\left( t\right) dt\right) f\left( s\right) . \end{equation*} Finally we mention two subclasses of $\mathcal{L}_{c}$, one of which will be our parameter space. The space of Hilbert-Schmidt operators and the space of trace class operators are defined respectively by \begin{equation*} \mathcal{L}_{2}=\left\{ T\in \mathcal{L}_{c}:\sum_{j\in \mathbb{N}}\mu _{j}^{2}<+\infty \right\} ,\ \mathcal{L}_{1}=\left\{ T\in \mathcal{L}_{c}:\sum_{j\in \mathbb{N}}\mu _{j}<+\infty \right\} . \end{equation*} It is well known that if $S$ is the linear operator associated to the kernel $\mathcal{S}$ as in display (\ref{model-kernel}), then $S$ is Hilbert-Schmidt whenever $\int \int \left\vert \mathcal{S}\left( s,t\right) \right\vert ^{2}dsdt<+\infty $, and $S$ is trace class if $\mathcal{S}\left( s,t\right) $ is continuous as a function of $\left( s,t\right) $. \subsection{Estimation} Our purpose here is first to introduce the estimator. This estimate looks basically like the one studied in Yao, M\"{u}ller and Wang (2005). Our second goal is to justify from a more theoretical position the choice of such a candidate. Two strategies may be carried out to propose an estimate of $S.$ They finally coincide, as in the finite-dimensional framework.
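The tensor product $f\otimes g=\left\langle g,\cdot\right\rangle f$ and the Hilbert-Schmidt norm have transparent finite-dimensional analogues, which can be useful for implementation. The sketch below (the grid identification of $H$ with $\mathbb{R}^{p}$ and all numerical values are illustrative assumptions, not part of the paper) represents $f\otimes g$ as the rank-one matrix $fg^{T}$, whose Hilbert-Schmidt (Frobenius) norm is $\left\Vert f\right\Vert \left\Vert g\right\Vert$.

```python
import numpy as np

def tensor(f, g):
    # (f ⊗ g)(h) = <g, h> f, i.e. the rank-one matrix f g^T on R^p
    return np.outer(f, g)

rng = np.random.default_rng(0)
f, g, h = rng.standard_normal((3, 5))
T = tensor(f, g)

# Action on h agrees with <g, h> f
assert np.allclose(T @ h, np.dot(g, h) * f)

# Hilbert-Schmidt norm = Frobenius norm = sqrt(sum of squared singular
# values); for a rank-one operator it equals ||f|| * ||g||
hs_norm = np.linalg.norm(T, "fro")
assert np.isclose(hs_norm, np.linalg.norm(f) * np.linalg.norm(g))
```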
One could consider the theoretical mean square program (convex in $S$) \begin{equation*} \min_{S\in \mathcal{L}_{2}}\mathbb{E}\left\Vert Y-SX\right\Vert ^{2}, \end{equation*} whose solution $S_{\ast }$ is defined by the equation $\mathbb{E}\left[ Y\otimes X\right] =S_{\ast }\mathbb{E}\left[ X\otimes X\right] .$ On the other hand it is plain that the moment equation \begin{equation*} \mathbb{E}\left[ Y\otimes X\right] =\mathbb{E}\left[ S\left( X\right) \otimes X\right] +\mathbb{E}\left[ \varepsilon \otimes X\right] \end{equation*} leads to the same solution. Finally, denoting $\Delta =\mathbb{E}\left[ Y\otimes X\right] ,\quad \Gamma =\mathbb{E}\left[ X\otimes X\right] ,$ we get $\Delta =S\Gamma .$ Turning to empirical counterparts with \begin{equation*} \Delta _{n}=\frac{1}{n}\sum_{i=1}^{n}Y_{i}\otimes X_{i},\quad \Gamma _{n}=\frac{1}{n}\sum_{i=1}^{n}X_{i}\otimes X_{i}, \end{equation*} the estimate $\widehat{S}_{n}$ of $S$ should naturally be defined by $\Delta _{n}=\widehat{S}_{n}\Gamma _{n}.$ Once again the moment method and the minimization of the mean square program coincide. By the way, note that $\Delta _{n}=S\Gamma _{n}+U_{n}$ with $U_{n}=\frac{1}{n}\sum_{i=1}^{n}\varepsilon _{i}\otimes X_{i}$. The trouble is that, from $\Delta _{n}=\widehat{S}_{n}\Gamma _{n},$ we cannot directly derive an explicit form for $\widehat{S}_{n}.$ Indeed $\Gamma _{n}$ is not invertible on the whole $H$ since it has finite rank. The next section proposes solutions to this inverse problem by classical methods. As a last point we note that if $\widehat{S}_{n}$ is an estimate of $S$, a statistical predictor given a new input $X_{n+1}$ is \begin{equation} \widehat{Y}_{n+1}\left( t\right) =\widehat{S}_{n}\left( X_{n+1}\right) \left( t\right) =\int\widehat{\mathcal{S}}\left( s,t\right) X_{n+1}\left( s\right) ds.
\label{predictor} \end{equation} \subsection{Identifiability, inverse problem and regularization issues} We turn again to the equation which defines the operator $S$ : $\Delta =S\Gamma .$ Assuming that $\Gamma $ is one-to-one is a first and basic requirement for identifiability. It is simple to check that if $v\in \ker \Gamma \neq \left\{ 0\right\} ,$ then $\Delta =S\Gamma =\left( S+v\otimes v\right) \Gamma $ for instance, and the uniqueness of $S$ is no longer ensured. More precisely, the inference based on the equation $\Delta =S\Gamma $ does not ensure the identifiability of the model. From now on we assume that $\ker \Gamma =\left\{ 0\right\} .$ At this point, some more theoretical concerns should be mentioned. Indeed, writing $S=\Delta \Gamma ^{-1}$ is incorrect. The operator $\Gamma ^{-1}$ exists whenever $\ker \Gamma =\left\{ 0\right\} $ but is unbounded, that is, not continuous. We refer once again to Dunford and Schwartz (1988) for instance for developments on unbounded operators. It turns out that $\Gamma ^{-1}$ is a linear mapping defined on a dense domain $\mathcal{D}$ of $H$ which is measurable but continuous at no point of its domain. Let us denote $\left( \lambda _{j},e_{j}\right) $ the eigenelements of $\Gamma $. Elementary facts of functional analysis show that $S_{|\mathcal{D}}=\Delta \Gamma ^{-1}$ where $\mathcal{D}$ is the domain of $\Gamma ^{-1}$, \textit{i.e.} the range of $\Gamma $, defined by \begin{equation*} \mathcal{D}=\left\{ x=\sum_{j}x_{j}e_{j}\in H:\sum_{j}\frac{x_{j}^{2}}{\lambda _{j}^{2}}<+\infty \right\} . \end{equation*} An illustrative link with probability and Gaussian analysis is possible.
If $\Gamma $ is the covariance operator of a Gaussian random element $X$ on $H$ (a process, a random function, etc.) then the Reproducing Kernel Hilbert Space of $X$ coincides with the domain of $\Gamma ^{-1/2}$ and the range of $\Gamma ^{1/2}$ : $RKHS\left( X\right) =\left\{ x=\sum_{j}x_{j}e_{j}\in H:\sum_{j}x_{j}^{2}/\lambda _{j}<+\infty \right\} .$ The last stumbling block comes from switching from population parameters to empirical ones. We construct our estimate from the equation $\Delta _{n}=S\Gamma_{n}+U_{n}$ as seen above, setting $\Delta_{n}=\widehat{S}_{n}\Gamma_{n}$. Here the inverse of $\Gamma_{n}$ does not even exist since this covariance operator is finite-rank. If $\Gamma_{n}$ were invertible we could set $\widehat{S}_{n}=\Delta_{n}\Gamma_{n}^{-1},$ but we have to regularize $\Gamma_{n}$ first. We carry out techniques which are classical in inverse problems theory. Indeed, the spectral decomposition of $\Gamma_{n}$ is $\Gamma_{n}=\sum_{j}\widehat{\lambda}_{j}\left( \widehat{e}_{j}\otimes \widehat{e}_{j}\right) $ where $\left( \widehat{\lambda}_{j},\widehat{e}_{j}\right) $ are the empirical eigenelements of $\Gamma_{n}$ (the $\widehat{\lambda}_{j}$'s are sorted in decreasing order and some of them may be null) derived from the functional PCA. The spectral cut regularized inverse is given for some integer $k$ by \begin{equation} \Gamma_{n}^{\dag}=\sum_{j=1}^{k}\widehat{\lambda}_{j}^{-1}\left( \widehat{e}_{j}\otimes\widehat{e}_{j}\right). \label{gamma-dag} \end{equation} The choice of $k=k_{n}$ is crucial ; none of the $\left( \widehat{\lambda}_{j}\right) _{1\leq j\leq k}$ can be null, and one should stress that $\widehat{\lambda }_{j}^{-1}\uparrow +\infty $ when $j$ increases. The reader will note that we could define equivalently $\Gamma ^{\dag }=\sum_{j=1}^{k}\lambda _{j}^{-1}\left( e_{j}\otimes e_{j}\right) .$ From the definition of the regularized inverse above, we can derive a useful equation.
Indeed, let $\widehat{\Pi }_{k}$ denote the projection on the $k$ first eigenvectors of $\Gamma _{n}$, that is the projection on $\mathrm{span}\left( \widehat{e}_{1},...,\widehat{e}_{k}\right) .$ Then $\Gamma _{n}^{\dag }\Gamma _{n}=\Gamma _{n}\Gamma _{n}^{\dag }=\widehat{\Pi }_{k}.$ For further purpose we define as well $\Pi _{k}$ to be the projection operator on (the space spanned by) the $k$ first eigenvectors of $\Gamma .$ \begin{remark} The regularization method we propose is the most intuitive to us but may be changed by considering $\Gamma_{n,f}^{\dag}=\sum_{j=1}^{k}f_{n}\left( \widehat{\lambda}_{j}\right) \left( \widehat{e}_{j}\otimes\widehat{e}_{j}\right) $ where $f_{n}$ is a smooth function which converges pointwise to $x\rightarrow1/x.$ For instance, we could choose $f_{n}\left( \widehat{\lambda}_{j}\right) =\left( \alpha_{n}+\widehat{\lambda}_{j}\right) ^{-1}$ where $\alpha_{n}>0$ and $\alpha_{n}\downarrow0$, and $\Gamma_{n}^{\dag}$ would be the penalized-regularized inverse of $\Gamma_{n}.$ Taking $f_{n}\left( \widehat{\lambda}_{j}\right) =\widehat{\lambda}_{j}\left( \alpha _{n}+\widehat{\lambda}_{j}^{2}\right) ^{-1}$ leads to a Tikhonov regularization. We refer to the remarks within section 3 of Cardot, Mas, Sarda (2007) to check that additional assumptions on $f_{n}$ (controlling the rate of convergence of $f_{n}$ to $x\rightarrow1/x$) allow one to generalize the overall approach of this work to the class of estimates $\Gamma_{n,f}^{\dag}$. \end{remark} To conclude this subsection, we refer the reader interested in the topic of inverse problem solving to the following books: Tikhonov and Arsenin (1977), Groetsch (1993), Engl, Hanke and Neubauer (2000).
\subsection{Assumptions\label{assumptions}} The assumptions we need are classically of three types: regularity of the regression parameter $S$, moment assumptions on $X$, and regularity assumptions on $X$ which are often expressed in terms of spectral properties of $\Gamma $ (especially the rate of decrease to zero of its eigenvalues). \textbf{Assumption on }$S$ As announced earlier, we assume that $S$ is Hilbert-Schmidt, which may be rewritten: for any basis $\left( \phi_{j}\right) _{j\in\mathbb{N}}$ of $H$ \begin{equation} \sum_{j,\ell}\left\langle S\left( \phi_{\ell}\right) ,\phi_{j}\right\rangle ^{2}<+\infty. \label{assumpt-s} \end{equation} This assumption finally echoes the assumption $\sum_{j}\beta _{j}^{2}<+\infty $ in the functional linear model (\ref{scalar-model}) with real outputs. We already underlined that (\ref{assumpt-s}) is equivalent to assuming that $\mathcal{S}$ is doubly integrable if $H$ is $L^{2}\left( \left[ 0,T\right] \right) $. Finally, no continuity or smoothness is required for the kernel $\mathcal{S}$ at this point. \textbf{Moment assumptions on }$X$ In order to better understand the moment assumptions on $X$, we recall the Karhunen-Loeve development, which is nothing but the decomposition of $X$ in the basis of the eigenvectors of $\Gamma $, $X=\sum_{j=1}^{+\infty }\sqrt{\lambda _{j}}\xi _{j}e_{j}\quad a.s.$ where the $\xi _{j}$'s are independent centered real random variables with unit variance. We need higher moment assumptions because we need to apply Bernstein's exponential inequality to functionals of $\Gamma -\Gamma _{n}.$ We assume that for all $j,\ell \in \mathbb{N}$ there exists a constant $b$ such that \begin{equation} \mathbb{E}\left( \left\vert \xi _{j}\right\vert ^{\ell }\right) \leq \frac{\ell !}{2}b^{\ell -2}\cdot \mathbb{E}\left( \left\vert \xi _{j}\right\vert ^{2}\right) \label{assumpt-bernstein} \end{equation} which echoes assumption (2.19) p. 49 in Bosq (2000).
As a consequence, we see that \begin{equation} \mathbb{E}\left\langle X,e_{j}\right\rangle ^{4}\leq C\left( \mathbb{E}\left\langle X,e_{j}\right\rangle ^{2}\right) ^{2}. \label{H2} \end{equation} This requirement already appears in several papers. It asserts that the sequence of the fourth moments of the margins of $X$ tends to $0$ quickly enough. The assumptions above always hold for a gaussian $X$. These assumptions are close to the moment assumptions usually required when rates of convergence are addressed. \textbf{Assumptions on the spectrum of }$\Gamma$ The covariance operator $\Gamma $ is assumed to be injective, hence with \textit{strictly} positive eigenvalues arranged in decreasing order. Let the function $\lambda :\mathbb{R}^{+}\rightarrow \mathbb{R}^{+\ast }$ be defined by $\lambda \left( j\right) =\lambda _{j}$ for any $j\in \mathbb{N}$ (the $\lambda _{j}$'s are continuously interpolated between $j$ and $j+1$). From the assumption above we already know that $\sum_{j}\lambda _{j}<+\infty $. Indeed the summability of the eigenvalues of $\Gamma $ is ensured whenever $\mathbb{E}\left\Vert X\right\Vert ^{2}<+\infty .$ Besides, assume that for $x$ large enough \begin{equation} x\rightarrow \lambda \left( x\right) \text{ \textrm{is convex}}. \label{H1} \end{equation} These last conditions are mild and match a very large class of eigenvalues: with arithmetic decay $\lambda _{j}=Cj^{-1-\alpha }$ where $\alpha >0$ (like in Hall and Horowitz (2007)), with exponential decay $\lambda _{j}=Cj^{-\beta }\exp \left( -\alpha j\right) $, Laurent series $\lambda _{j}=Cj^{-1-\alpha }\left( \log j\right) ^{-\beta }$ or even $\lambda _{j}=Cj^{-1}\left( \log j\right) ^{-1-\alpha }.$ Such a rate of decay occurs for extremely irregular processes, even more irregular than the Brownian motion for which $\lambda _{j}=Cj^{-2}$.
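The gaussian case mentioned above can be checked directly: the $\xi _{j}$'s are then standard normal, so
\begin{equation*}
\mathbb{E}\left\langle X,e_{j}\right\rangle ^{4}
=\lambda _{j}^{2}\,\mathbb{E}\left( \xi _{j}^{4}\right)
=3\lambda _{j}^{2}
=3\left( \mathbb{E}\left\langle X,e_{j}\right\rangle ^{2}\right) ^{2},
\end{equation*}
so that (\ref{H2}) holds with $C=3$, while for even $\ell $ one has $\mathbb{E}\left( \left\vert \xi _{j}\right\vert ^{\ell }\right) =\left( \ell -1\right) !!\leq \ell !/2$, so that (\ref{assumpt-bernstein}) is satisfied with $b=1$.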
In fact our framework initially relaxes prior assumptions on the rate of decay of the eigenvalues, hence on the regularity of $X.$ It will be seen later that exact risk and optimality are obtained when considering specific classes of eigenvalues. Assumption (\ref{H1}) is crucial however since the most general Lemmas rely on convexity inequalities for the eigenvalues. \section{Asymptotic results} We are now in a position to introduce our estimate. \begin{definition} \label{defestim} The estimate $\widehat{S}_{n}$ of $S$ is defined by $\widehat{S}_{n}=\Delta _{n}\Gamma _{n}^{\dag }$; the associated predictor is $\widehat{Y}_{n+1}=\widehat{S}_{n}\left( X_{n+1}\right) =\Delta _{n}\Gamma _{n}^{\dag }\left( X_{n+1}\right) .$ It is possible to provide a kernel form. We deduce from $S_{n}=\Delta _{n}\Gamma _{n}^{\dag }$ that \begin{equation*} \mathcal{S}_{n}\left( s,t\right) =\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{k}\frac{\int X_{i}\widehat{e}_{j}}{\widehat{\lambda }_{j}}\cdot Y_{i}\left( t\right) \widehat{e}_{j}\left( s\right) . \end{equation*} \end{definition} Though distinct, this estimate remains close to the one proposed in Yao, M\"{u}ller and Wang (2005), the difference consisting in the fact that we do not consider a Karhunen-Loeve development of $Y$. In the sequel, our main results are usually given in terms of $\widehat{S}_{n}$ but we frequently switch to the 'kernel' viewpoint since it may sometimes be more illustrative. Then we implicitly assume that $H=L^{2}\left( \left[ 0,T\right] \right) .$ We insist on our philosophy. Estimating $S$ is not our primary concern. We focus on the predictor at a random design point $X_{n+1}$, independent from the initial sample. The issue of estimating $S$ itself may arise typically for testing. As shown later in this work and as mentioned in Crambes, Kneip and Sarda (2009), considering the prediction mean square error finally comes down to studying the mean square error of $S$ for a smooth, intrinsic norm depending on $\Gamma$.
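On discretized data the definition above amounts to two matrix products. The sketch below is a toy illustration, not the authors' implementation: the rank-one kernel `S_true`, the noise level and the dimensions are all assumptions of the example, and grid vectors stand in for elements of $H$. It forms $\Delta _{n}$, $\Gamma _{n}^{\dag }$ and the predictor $\widehat{Y}_{n+1}=\Delta _{n}\Gamma _{n}^{\dag }\left( X_{n+1}\right) $.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, k = 40, 500, 4
t = np.linspace(0.0, 1.0, p)

# Synthetic discretized model Y_i = S(X_i) + eps_i; S_true is an
# illustrative rank-one kernel operator.
cov_X = np.exp(-5.0 * (t[:, None] - t[None, :]) ** 2)
S_true = np.outer(np.sin(np.pi * t), np.cos(np.pi * t)) / p
X = rng.multivariate_normal(np.zeros(p), cov_X, size=n)
Y = X @ S_true.T + 0.01 * rng.standard_normal((n, p))

# Empirical covariance and cross-covariance operators.
Gamma_n = X.T @ X / n
Delta_n = Y.T @ X / n        # Delta_n(x) = (1/n) sum_i <X_i, x> Y_i

# Spectral cut inverse and the estimate S_hat = Delta_n Gamma_n^dag.
lam, e = np.linalg.eigh(Gamma_n)
lam, e = lam[::-1], e[:, ::-1]
Gamma_dag = (e[:, :k] / lam[:k]) @ e[:, :k].T
S_hat = Delta_n @ Gamma_dag

# Predictor at a new random design point X_{n+1}.
x_new = rng.multivariate_normal(np.zeros(p), cov_X)
y_pred = S_hat @ x_new
```

Since $\Delta _{n}=S\Gamma _{n}+U_{n}$, the computed `S_hat` approximates $S\widehat{\Pi }_{k}$ plus a small noise term, which is exactly the bias-variance decomposition studied below.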
From now on, all our results are stated when the assumptions of subsection \ref{assumptions} hold. \subsection{Mean square prediction error and optimality} We start with an upper bound from which we deduce, as a Corollary, the exact asymptotic risk of the predictor. What is considered here is the predictor $\widehat{Y}_{n+1}$ based on $\widehat{S}_{n}$ and $X_{n+1}.$ It is compared with $\mathbb{E}\left( Y_{n+1}|X_{n+1}\right) =S\left( X_{n+1}\right) .$ Let $\Gamma_{\varepsilon}=\mathbb{E}\left( \varepsilon\otimes\varepsilon \right) $ be the covariance operator of the noise and denote $\sigma _{\varepsilon}^{2}=\mathrm{tr}\Gamma_{\varepsilon}.$ \begin{theorem} \label{TH2}The mean square prediction error of our estimate has the following exact asymptotic development \begin{equation} \mathbb{E}\left\Vert \widehat{S}_{n}\left( X_{n+1}\right) -S\left( X_{n+1}\right) \right\Vert ^{2}=\sigma _{\varepsilon }^{2}\frac{k}{n}+\sum_{j=k+1}^{+\infty }\lambda _{j}\left\Vert S\left( e_{j}\right) \right\Vert ^{2}+A_{n}+B_{n}, \label{mse} \end{equation} where $A_{n}\leq C_{A}\left\Vert S\right\Vert _{\mathcal{L}_{2}}k^{2}\lambda _{k}/n$ and $B_{n}\leq C_{B}k^{2}\log k/n^{2}$, where $C_{A}$ and $C_{B}$ are constants which do not depend on $k$, $n$ or $S$. \end{theorem} The first two terms determine the convergence rate: the variance effect appears through $\sigma _{\varepsilon }^{2}k/n$ and the bias (related to smoothness) through $\sum_{j=k+1}^{+\infty }\lambda _{j}\left\Vert S\left( e_{j}\right) \right\Vert ^{2}$. Several comments are needed at this point. The term $A_{n}$ comes from the bias decomposition and $B_{n}$ is a residue from the variance. Both are negligible with respect to the first two terms. Indeed, $k\lambda _{k}\rightarrow 0$ since $\sum_{k}\lambda _{k}<+\infty $, hence $A_{n}=o\left( k/n\right) .$ Turning to $B_{n}$ is a little trickier.
It can be seen from the lines just above the forthcoming Proposition \ref{univ-bound} that necessarily $\left( k\log k\right) ^{2}/n\rightarrow 0$, which ensures that $B_{n}=o\left( k/n\right) .$ A second interesting property arises from Theorem \ref{TH2}. Rewriting $\lambda _{j}\left\Vert S\left( e_{j}\right) \right\Vert ^{2}=\left\Vert S\Gamma ^{1/2}\left( e_{j}\right) \right\Vert ^{2}$ we see that the only regularity assumptions needed may be made from the spectral decomposition of the operator $S\Gamma ^{1/2}$ itself and not from $X$ (or $\Gamma $ as well) and $S$ separately. Before turning to optimality we introduce the class of parameters $S$ over which optimality will be obtained. \begin{definition} Let $\varphi:\mathbb{R}^{+}\rightarrow\mathbb{R}^{+}$ be a $C^{1}$ decreasing function such that $\sum_{j=1}^{+\infty}\varphi\left( j\right) =1$ and let $\mathcal{L}_{2}\left( \varphi,L\right) $ be the class of linear operators from $H$ to $H$ defined by \begin{equation*} \mathcal{L}_{2}\left( \varphi,L\right) =\left\{ T\in\mathcal{L}_{2},\left\Vert T\right\Vert _{\mathcal{L}_{2}}\leq L:\left\Vert T\left( e_{j}\right) \right\Vert \leq L\sqrt{\varphi\left( j\right) }\right\}. \end{equation*} \end{definition} The set $\mathcal{L}_{2}\left( \varphi ,L\right) $ is entirely determined by the bounding constant $L$ and the function $\varphi $. Hall and Horowitz (2007) consider the case when $\varphi \left( j\right) =Cj^{-\left( \alpha +2\beta \right) }$ where $\alpha >1$ and $\beta >1/2.$ As mentioned earlier we are free here to take any $\varphi $ such that $\int^{+\infty }\varphi \left( s\right) ds<+\infty $ and which leaves assumption (\ref{H1}) unchanged. As an easy consequence, we derive the uniform bound with exact constants below.
\begin{theorem} Set $L=\left\Vert S\Gamma ^{1/2}\right\Vert _{\mathcal{L}_{2}}$, $\varphi \left( j\right) =\lambda _{j}\left\Vert S\left( e_{j}\right) \right\Vert ^{2}/L^{2}$ and $k_{n}^{\ast }$ as the integer part of the unique solution of the integral equation (in $x$) \begin{equation} \frac{1}{x}\int_{x}^{+\infty }\varphi \left( s\right) ds=\frac{1}{n}\frac{\sigma _{\varepsilon }^{2}}{L^{2}}. \label{k-opt} \end{equation} Let $\mathcal{R}_{n}\left( \varphi ,L\right) $ be the uniform prediction risk of the estimate $\widehat{S}_{n}$ over the class $\mathcal{L}_{2}\left( \varphi ,L\right) $: \begin{equation*} \mathcal{R}_{n}\left( \varphi ,L\right) =\sup_{S\Gamma ^{1/2}\in \mathcal{L}_{2}\left( \varphi ,L\right) }\mathbb{E}\left\Vert \widehat{S}_{n}\left( X_{n+1}\right) -S\left( X_{n+1}\right) \right\Vert ^{2}, \end{equation*} then \begin{equation*} \lim \sup_{n\rightarrow +\infty }\frac{n}{k_{n}^{\ast }}\mathcal{R}_{n}\left( \varphi ,L\right) =2\sigma _{\varepsilon }^{2}. \end{equation*} \end{theorem} Display (\ref{k-opt}) has a unique solution because the function of $x$ on the left-hand side is strictly decreasing. The integer $k_{n}^{\ast }$ is the optimal dimension: the parameter which minimizes the prediction risk. It plays the same role as the optimal bandwidth in nonparametric regression. The upper bound in the display above is obvious from (\ref{mse}). This upper bound is attained when taking for $S$ the diagonal operator defined in the basis of eigenvectors by $Se_{j}=L\varphi ^{1/2}\left( j\right) \lambda _{j}^{-1/2}e_{j}$. The proof of this Theorem is an easy consequence of Theorem \ref{TH2}, hence omitted. The next Corollary is an attempt to illustrate the consequences of the previous Theorem by taking explicit sequences $\left( \varphi\left( j\right) \right) _{j\in\mathbb{N}}$. We chose to treat the case of general Laurent series (including very irregular input and parameter when $\alpha=0$) and the case of exponential decay.
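For a concrete $\varphi $ the optimal dimension can be computed by solving (\ref{k-opt}) numerically. In the sketch below, $\varphi \left( s\right) =Cs^{-\left( 2+\alpha \right) }$ and the values of $\alpha $, $\sigma _{\varepsilon }^{2}$, $L^{2}$ and $n$ are illustrative choices; for this $\varphi $ the left-hand side has a closed form, so the bisection root can be cross-checked.

```python
# Solve (1/x) * int_x^inf phi(s) ds = sigma_eps^2 / (n L^2) for x; the
# integer part of the root is k_n^*. All numerical choices are illustrative.
alpha, C = 1.0, 1.0
sigma2, L2, n = 1.0, 1.0, 10_000
rhs = sigma2 / (n * L2)

# For phi(s) = C s^(-(2+alpha)): int_x^inf phi(s) ds = C x^(-(1+alpha))/(1+alpha),
# hence the left-hand side of (k-opt) equals C x^(-(2+alpha))/(1+alpha).
def lhs(x):
    return C * x ** (-(2.0 + alpha)) / (1.0 + alpha)

# lhs is strictly decreasing, so the root is unique: plain bisection.
lo, hi = 1.0, 1e6
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if lhs(mid) > rhs else (lo, mid)
x_star = 0.5 * (lo + hi)
k_star = int(x_star)                      # k_n^* = integer part of the root

# Closed-form solution for this phi, as a cross-check.
x_closed = (C / ((1.0 + alpha) * rhs)) ** (1.0 / (2.0 + alpha))
assert abs(x_star - x_closed) < 1e-6
```

The monotonicity used to justify the bisection is exactly the argument given after the Theorem for the uniqueness of the solution of (\ref{k-opt}).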
\begin{corollary} Set $\varphi _{a}\left( j\right) =C_{\alpha ,\beta }\left( j^{2+\alpha }\left( \log j\right) ^{\beta }\right) ^{-1}$ and $\varphi _{b}\left( j\right) =C_{\alpha }^{\prime }\exp \left( -\alpha j\right) $ where either $\alpha >0$ and $\beta \in \mathbb{R}$ or $\alpha =0$ and $\beta >1$, and $C_{\alpha ,\beta }$ and $C_{\alpha }^{\prime }$ are normalizing constants; then \begin{align*} \mathcal{R}_{n}\left( \varphi _{a},L\right) & \sim \frac{\left( \log n\right) ^{\beta /\left( 2+\alpha \right) }}{n^{\left( 1+\alpha \right) /\left( 2+\alpha \right) }}\left( \frac{C_{\alpha ,\beta }L^{2}}{2\sigma _{\varepsilon }^{2}}\right) ^{1/\left( 2+\alpha \right) }, \\ \mathcal{R}_{n}\left( \varphi _{b},L\right) & \leq \frac{\log n}{\alpha n}. \end{align*} \end{corollary} In the second display we could not compute an exact bound because equation (\ref{k-opt}) has no explicit solution. But the term $\left( \log n\right) /\alpha n$ is obviously sharp since it is parametric up to a $\log n$ factor. The special case $\beta =0$ and $\alpha >1$ matches the optimal rate derived in Hall and Horowitz (2007) with a slight loss due to the fact that the model shows more complexity ($\mathcal{S}$ is a function of two variables whereas $\beta $, the slope parameter in the latter article and in model (\ref{scalar-model}), was a function of a single variable). We also refer the reader to Stone (1982), who underlines this effect of dimension on the convergence rates, in order to check that our result matches the ones announced by Stone. In our setting the data $Y$ are infinite dimensional. Obtaining a lower bound for optimality in minimax version is slightly different than in the cases studied in Hall and Horowitz (2007) and Crambes, Kneip and Sarda (2009). In order to get a lower bound, our method is close to the one carried out by Cardot and Johannes (2010), based on a variant of Assouad's Lemma. We consider gaussian observations under $2^{k_{n}}$ distinct models.
\begin{theorem} \label{TH2bis}The following bound on the minimax asymptotic risk up to constants proves that our estimator is optimal in the minimax sense \begin{equation*} \inf_{\widehat{S}_{n}}\sup_{S\in\mathcal{L}_{2}\left( \varphi,L\right) }\mathbb{E}\left\Vert \widehat{S}_{n}\left( X_{n+1}\right) -S\left( X_{n+1}\right) \right\Vert ^{2}\asymp\frac{k_{n}^{\ast}}{n}. \end{equation*} \end{theorem} It appears that another upper bound may be derived from (\ref{mse}). We can avoid introducing the class $\mathcal{L}_{2}\left( \varphi ,L\right) .$ From $\sum_{j}\lambda _{j}=\mathbb{E}\left\Vert X\right\Vert ^{2}<+\infty $ and $\sum_{j}\left\Vert S\left( e_{j}\right) \right\Vert ^{2}=\left\Vert S\right\Vert _{\mathcal{L}_{2}}^{2}$ we see that the sequences $\lambda _{j}$ and $\left\Vert S\left( e_{j}\right) \right\Vert ^{2}$ may both be bounded by $j^{-1}(\log j)^{-1}$, hence that $\lambda _{j}\left\Vert S\left( e_{j}\right) \right\Vert ^{2}\leq j^{-2}(\log j)^{-2}.$ A classical sum-integral comparison then yields $\sum_{j\geq k+1}\lambda _{j}\left\Vert S\left( e_{j}\right) \right\Vert ^{2}\leq Ck^{-1}(\log k)^{-2}.$ We obtain in the Proposition below a new bound for which no regularity assumption is needed for $S$. \begin{proposition} \label{univ-bound}The following bound shows uniformity with respect to all Hilbert-Schmidt operators $S$ (hence any integrable kernel $\mathcal{S}$) and all functional data matching the moment assumptions mentioned above \begin{equation*} \sup_{\left\Vert S\right\Vert _{\mathcal{L}_{2}}\leq L}\mathbb{E}\left\Vert \widehat{S}_{n}\left( X_{n+1}\right) -S\left( X_{n+1}\right) \right\Vert ^{2}\leq \sigma _{\varepsilon }^{2}\frac{k}{n}+C\frac{L^{2}}{k\log ^{2}k}, \end{equation*} where $C$ is a universal constant.
We deduce the uniform bound with no regularity assumption on the data or on $S$ \begin{equation*} \lim \sup_{n\rightarrow +\infty }\sqrt{n}\log n\sup_{\left\Vert S\right\Vert _{\mathcal{L}_{2}}\leq L}\mathbb{E}\left\Vert \widehat{S}_{n}\left( X_{n+1}\right) -S\left( X_{n+1}\right) \right\Vert ^{2}\leq \sigma _{\varepsilon }^{2}+CL^{2}. \end{equation*} \end{proposition} The bound above is rough. The constant $C$ does not really matter. The fundamental idea of the Proposition is to provide an upper bound for the rate uniformly on balls of $\mathcal{L}_{2}$ without regularity restrictions: if $\alpha _{n}$ is the rate of prediction error in square norm considered above, then necessarily $\alpha _{n}\leq n^{-1/2}(\log n)^{-1}$ (in fact we even have $\alpha _{n}=o\left( n^{-1/2}(\log n)^{-1}\right) $) whatever the unknown parameter $S$. \begin{remark} \label{tradeoff}The bound above holds with highly irregular data (for instance when $\lambda _{j}\asymp Cj^{-1}(\log j)^{-1-\alpha }$ with $\alpha >0$), with very regular data featuring a flat spectrum with $\lambda _{j}\asymp Cj^{-\gamma }\exp \left( -\alpha j\right) $, or even in the intermediate situation like $\lambda _{j}\asymp Cj^{-1-\beta }(\log j)^{1+\alpha }$. The literature on linear regression with functional data usually addressed such issues in restricted cases with prior knowledge of the eigenvalues like $\lambda _{j}\asymp Cj^{-1-\beta }$. The same remarks are valid when turning to the regularity of the kernel $\mathcal{S}$ or of the operator $S$ expressed through the sequence $\left\Vert S\left( e_{j}\right) \right\Vert ^{2}$. Obviously, in the case of rapid decay (say at an exponential rate $\lambda _{j}\asymp C\exp \left( -\alpha j\right) $) one may argue that multivariate methods would fit the data with much accuracy. We answer that, conversely, in such a situation (fitting a linear regression model) the usual mean square methods turn out to be extremely unstable due to ill-conditioning.
Our method of proof shows that smooth, regular processes (with rapid decay of $\lambda _{j}$) have good approximation properties but an ill-conditioned $\Gamma _{n}^{\dag }$ (\textit{i.e.} with rapidly increasing norm) damaging the rate of convergence of $\widehat{S}_{n}$, which depends on it. But we readily see that irregular processes (with slowly decreasing $\lambda _{j}$), despite their poor approximation properties, lead to a slowly increasing $\Gamma _{n}^{\dag }$ and to solving an easier inverse problem. \end{remark} \begin{remark} \label{conv-k}At this point it is worth giving a general comment on the rate of increase of the sequence $k_{n}.$ From the few lines above Proposition \ref{univ-bound}, we always have $\left( k\log k\right) ^{2}/n\rightarrow0$ whatever the parameter $S$ in the space of Hilbert-Schmidt operators. This property will be useful for the asymptotics and the mathematical derivations given in the last section.\bigskip \end{remark} \subsection{Weak convergence} The next and last result deals with weak convergence. We start with a negative result which shows that, due to the underlying inverse problem, the issue of weak convergence cannot be addressed under too strong topologies. \begin{theorem} \label{TH1}It is impossible for $\widehat{S}_{n}$ to converge in distribution for the Hilbert-Schmidt norm. \end{theorem} Once again, turning to the predictor, hence smoothing the estimated operator, will produce a positive result. We improve the results by Cardot, Mas and Sarda (2007) in two ways: first, the model is more general and second, we remove the bias term. Weak convergence (convergence in distribution) is denoted $\overset{w}{\rightarrow }.$ The reader should pay attention to the fact that the following Theorem holds in a space of functions (here $H$). Within this theorem, two results are proved. The first assesses weak convergence for the predictor with a bias term.
The second removes this bias at the expense of a more specific assumption on the sequence $k_{n}$. \begin{theorem} \label{TH3} If the condition $\left( k\log k\right) ^{2}/n\rightarrow 0$ holds, then \begin{equation*} \sqrt{\frac{n}{k}}\left[ \widehat{S}_{n}\left( X_{n+1}\right) -S\Pi _{k}\left( X_{n+1}\right) \right] \overset{w}{\rightarrow }\mathcal{G}_{\varepsilon } \end{equation*} where $\mathcal{G}_{\varepsilon }$ is a centered gaussian random element with values in $H$ and covariance operator $\Gamma _{\varepsilon }$. Besides, denoting $\gamma _{k}=\sup_{j\geq k}\left\{ j\log j\left\Vert S\left( e_{j}\right) \right\Vert \sqrt{\lambda _{j}}\right\} $ (it is plain that $\gamma _{k}\rightarrow 0$) and choosing $k$ such that $n\leq \left( k\log k\right) ^{2}/\gamma _{k}$ (which means that $\left( k\log k\right) ^{2}/n$ should not decay too quickly to zero), the bias term can be removed and we obtain \begin{equation*} \sqrt{\frac{n}{k}}\left[ \widehat{S}_{n}\left( X_{n+1}\right) -S\left( X_{n+1}\right) \right] \overset{w}{\rightarrow }\mathcal{G}_{\varepsilon }. \end{equation*} \end{theorem} \begin{remark} We pointed out above the improvement in estimating the rate of decrease of the bias. The proof of the Theorem comes down to proving weak convergence of a series with values in the space $H$. More precisely, an array $\sum_{i=1}^{n}z_{i,n}\varepsilon_{i}$ appears where the $z_{i,n}$ are real valued random variables with increasing variances (when $n\rightarrow+\infty$) which are not independent but turn out to be martingale differences.
\end{remark} From Theorem \ref{TH3} we deduce general confidence sets for the predictor: let $\mathcal{K}$ be a continuity set for the measure induced by $\mathcal{G}_{\varepsilon }$, that is $\mathbb{P}\left( \mathcal{G}_{\varepsilon }\in \partial \mathcal{K}\right) =0$ where $\partial \mathcal{K}=\overline{\mathcal{K}}\backslash \mathrm{int}\left( \mathcal{K}\right) $ is the frontier of $\mathcal{K}$; then $\mathbb{P}\left( \widehat{S}_{n}\left( X_{n+1}\right) \in S\left( X_{n+1}\right) +\sqrt{\frac{k}{n}}\mathcal{K}\right) \rightarrow \mathbb{P}\left( \mathcal{G}_{\varepsilon }\in \mathcal{K}\right) $ when $n\rightarrow +\infty $. As an application, we propose the two following corollaries of Theorem \ref{TH3}. The notation $Y_{n+1}^{\ast }$ stands for $S\left( X_{n+1}\right) =\mathbb{E}\left( Y_{n+1}|X_{n+1}\right) $. The first corollary deals with asymptotic confidence sets for general functionals of the theoretical predictor such as weighted integrals. \begin{corollary} Let $m$ be a fixed function in the space $H=L^{2}\left( \left[ 0,1\right] \right) $. We have the following asymptotic confidence interval for $\int Y_{n+1}^{\ast }\left( t\right) m\left( t\right) dt$ at level $1-\alpha $: \begin{equation*} \mathbb{P}\left( \int_{0}^{1}Y_{n+1}^{\ast }\left( t\right) m\left( t\right) dt\in \left[ \int_{0}^{1}\widehat{Y}_{n+1}\left( t\right) m\left( t\right) dt\pm \sqrt{\frac{k}{n}}\sigma _{m}q_{1-\alpha /2}\right] \right) \rightarrow 1-\alpha , \end{equation*} where $\sigma _{m}^{2}=\left\langle m,\Gamma _{\varepsilon }m\right\rangle =\int \int \Gamma _{\varepsilon }\left( s,t\right) m\left( t\right) m\left( s\right) dtds$ rewritten in 'kernel' form and $q_{1-\alpha /2}$ is the quantile of order $1-\alpha /2$ of the $\mathcal{N}\left( 0,1\right) $ distribution. \end{corollary} Theorem \ref{TH3} holds for the Hilbert norm.
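Numerically, $\sigma _{m}^{2}$ is a double quadrature of the noise kernel against $m$. The fragment below is a sketch under stated assumptions (the kernel $\Gamma _{\varepsilon }$, the weight $m$, the values of $k$, $n$ and the level $\alpha =0.05$ are all illustrative); it computes the half-width $\sqrt{k/n}\,\sigma _{m}q_{1-\alpha /2}$ of the interval in the Corollary.

```python
import numpy as np
from statistics import NormalDist

p = 200
t = np.linspace(0.0, 1.0, p)
dt = t[1] - t[0]

# Illustrative noise covariance kernel Gamma_eps(s, t) and weight m.
Gamma_eps = 0.5 * np.exp(-np.abs(t[:, None] - t[None, :]) / 0.2)
m = np.sin(2.0 * np.pi * t)

# sigma_m^2 = int int Gamma_eps(s, t) m(t) m(s) dt ds, approximated by a
# Riemann sum on the grid (enough for a sketch).
sigma_m2 = float(m @ Gamma_eps @ m) * dt * dt
q = NormalDist().inv_cdf(1.0 - 0.05 / 2.0)   # q_{1 - alpha/2}

k, n = 10, 1000
half_width = np.sqrt(k / n) * np.sqrt(sigma_m2) * q
```

In practice $\Gamma _{\varepsilon }$ is unknown and must itself be estimated, which is precisely the difficulty pointed out in the concluding discussion.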
In order to derive a confidence interval for $Y_{n+1}^{\ast}\left( t_{0}\right) $ (where $t_{0}$ is fixed in $\left[ 0,1\right] $), we have to make sure that the evaluation (linear) functional $f\in H\longmapsto f\left( t_{0}\right) $ is continuous for the norm $\left\Vert \cdot\right\Vert .$ This functional is always continuous on the space $\left( C\left( \left[ 0,1\right] \right) ,\left\vert \cdot\right\vert _{\infty}\right) $ but is not on the space $L^{2}\left( \left[ 0,1\right] \right) .$ A slight change in $H$ will yield the desired result, stated in the next Corollary. \begin{corollary} \label{coro1} When $H=W_{0}^{2,1}\left( \left[ 0,1\right] \right) =\left\{ f\in L^{2}\left( \left[ 0,1\right] \right) :f\left( 0\right) =0,f^{\prime }\in L^{2}\left( \left[ 0,1\right] \right) \right\} $ endowed with the inner product $\left\langle u,v\right\rangle =\int_{0}^{1}u^{\prime }v^{\prime }$, the evaluation functional is continuous with respect to the norm of $H$ and we can derive from Theorem \ref{TH3} \begin{equation*} \mathbb{P}\left( Y_{n+1}^{\ast }\left( t_{0}\right) \in \left[ \widehat{Y}_{n+1}\left( t_{0}\right) \pm \sqrt{\frac{k}{n}}\sigma _{t_{0}}q_{1-\alpha /2}\right] \right) \rightarrow 1-\alpha \end{equation*} where $\sigma _{t_{0}}^{2}=\Gamma _{\varepsilon }\left( t_{0},t_{0}\right) .$ \end{corollary} Note that data $\left( Y_{i}\right) _{1\leq i\leq n}$ reconstructed by cubic splines and correctly rescaled to match the condition $f\left( 0\right) =0$ belong to the space $W_{0}^{2,1}\left( \left[ 0,1\right] \right) $ mentioned in the Corollary. \begin{remark} It is out of the scope of this article to go through all the testing issues which can be solved by Theorem \ref{TH3}.
It is interesting to note that if $S=0$, the Theorem ensures that \begin{equation*} \sqrt{\frac{n}{k}}\left[ \widehat{S}_{n}\left( X_{n+1}\right) \right] \overset{w}{\rightarrow }\mathcal{G}_{\varepsilon }, \end{equation*} which may be the starting point for a testing procedure of $S=0$ versus various alternatives. \end{remark} \subsection{Comparison with existing results - Conclusion} The literature on linear models for functional data gave birth to impressive and brilliant recent works. We discuss briefly here our contribution with respect to some articles close in spirit to the present paper. We consider exactly the same model (with functional outputs) as Yao, M\"{u}ller and Wang (2005) and our estimate is particularly close to the one they propose. In their work the case of longitudinal data was studied with care, with possibly sparse and irregular data. They introduce a very interesting functional version of the $R^{2}$ and prove convergence in probability of their estimates in Hilbert-Schmidt norm. We complete their work by providing the rates and optimality for convergence in mean square. Our initial philosophy is close to the article by Crambes, Kneip and Sarda (2009). Like these authors we consider prediction with random design. We think that this way seems to be the most justified from a statistical point of view. The case of a fixed design gives birth to several situations and different rates (with possible oversmoothing, which entails parametric rates of convergence which are odd in this truly nonparametric model) and does not necessarily correspond to the statistical reality. The main difference lies in the fact that our results hold in mean square norm rather than in probability, for a larger class of data and parameters, at the expense of more restrictive moment assumptions. Our methodology is closer to the article by Hall and Horowitz (2007).
They studied the prediction risk at a fixed design in the model with real outputs (\ref{scalar-model}) but with specified eigenvalues, namely $\lambda _{j}\sim Cj^{-1-\alpha }$ and parameter spectral decomposition $\left\langle \beta ,e_{j}\right\rangle \sim Cj^{-1-\gamma }$ with $\alpha ,\gamma >0$. The comparisons may be simpler with these works since we share the approach through spectral decomposition of operators or the Karhunen-Loeve development for the design $X.$ The problem of weak convergence is considered only in Yao, M\"{u}ller and Wang (2005): they provide very useful and practical pointwise confidence sets, which require estimating the covariance of the noise. Our result may allow one to consider a larger class of testing issues through delta-methods (we have in mind testing of hypotheses like $S=S_{0}$ versus $S_{\left( n\right) }=S_{0}+\eta _{n}v$ where $\eta _{n}\rightarrow 0$ and $v$ belongs to a well-chosen set in $H$). The contribution of this article essentially deals with a linear regression model; the concerns related to the functional outputs concentrate on lower bounds in optimality results and on proving weak convergence with specific techniques adapted to functional data. We hope that our methods will demonstrate that optimal results are possible in a general framework and that regularity assumptions can often be relaxed thanks to the compensation (or regularity/inverse problem trade-off) phenomenon mentioned within Remark \ref{tradeoff}. The Hilbert space framework is necessary at least in the section devoted to weak convergence. Generalizations to Banach spaces of functions could be investigated, for instance in $C\left( \left[ 0,1\right] \right) $, H\"{o}lder or Besov spaces. Finally, we do not investigate in this paper the practical point of view of this prediction method. It is a work in progress. Many directions can be considered. The practical choice of $k_{n}$ is crucial.
Since we provide the exact theoretical formula for the optimal projection dimension in (\ref{k-opt}), it would be interesting to compare it with the results of a cross-validation method on a simulated dataset. The covariance structure of the noise is a central and major concern: the covariance operator appears in the limiting distribution, and its trace determines the optimal choice of the dimension $k_{n}^{\ast}.$ Estimating $\Gamma _{\varepsilon}$ turns out to be challenging from both a theoretical and a practical point of view. \section{Mathematical derivations} In the sequel, the generic notation $C$ stands for a constant which does not depend on $k$, $n$ or $S.$ All our results are related to the decomposition given below \begin{equation} \widehat{S}_{n}=S\Gamma _{n}\Gamma _{n}^{\dag }+U_{n}\Gamma _{n}^{\dag }=S\widehat{\Pi }_{k}+\frac{1}{n}\sum_{i=1}^{n}\varepsilon _{i}\otimes \Gamma _{n}^{\dag }X_{i}. \label{decomp} \end{equation} It is plain that a bias-variance decomposition is exhibited just above. The random projection $\widehat{\Pi }_{k}$ is not a satisfactory term and we intend to remove it and to replace it with its non-random counterpart. When turning to the predictor, (\ref{decomp}) may be refined: \begin{align} & \widehat{S}_{n}\left( X_{n+1}\right) -S\left( X_{n+1}\right) \label{decomp-pred} \\ & =S\left( \Pi _{k}-I\right) \left( X_{n+1}\right) +S\left[ \widehat{\Pi }_{k}-\Pi _{k}\right] \left( X_{n+1}\right) +\frac{1}{n}\sum_{i=1}^{n}\varepsilon _{i}\left\langle \Gamma _{n}^{\dag }X_{i},X_{n+1}\right\rangle , \notag \end{align} \noindent where $\Pi_{k}$ is defined in the same way as we defined $\widehat{\Pi}_{k}$ previously, \textit{i.e.} the projection on the $k$ first eigenvectors of $\Gamma$.
In terms of mean square error, the following easily stems from $\mathbb{E}\left( \varepsilon_{i}|X\right) =0$: \begin{align*} & \mathbb{E}\left\Vert \widehat{S}_{n}\left( X_{n+1}\right) -S\left( X_{n+1}\right) \right\Vert ^{2} \\ & =\mathbb{E}\left\Vert S\widehat{\Pi}_{k}\left( X_{n+1}\right) -S\left( X_{n+1}\right) \right\Vert ^{2}+\mathbb{E}\left\Vert \frac{1}{n}\sum_{i=1}^{n}\varepsilon_{i}\left\langle \Gamma_{n}^{\dag}X_{i},X_{n+1}\right\rangle \right\Vert ^{2}. \end{align*} We prove below that \begin{equation} \mathbb{E}\left\Vert S\left[ \widehat{\Pi}_{k}-\Pi_{k}\right] \left( X_{n+1}\right) \right\Vert ^{2}=o\left( \mathbb{E}\left\Vert \frac{1}{n}\sum_{i=1}^{n}\varepsilon_{i}\left\langle \Gamma_{n}^{\dag}X_{i},X_{n+1}\right\rangle \right\Vert ^{2}\right), \label{interm} \end{equation} and that the two terms that actually influence the mean square error are the first and the third in display (\ref{decomp-pred}). The first term $S\left( \Pi_{k}-I\right) \left( X_{n+1}\right) $ is the bias term and the third is a variance term (see display (\ref{mse})). The proofs are split into two parts. In the first part, we provide some technical lemmas which are collected there to ease the reading of the second part, devoted to the proof of the main results. In all the sequel, the sequence $k=k_{n}$ depends on $n$ even if this index is dropped. We assume that all the assumptions mentioned earlier in the paper hold; they will however be recalled when addressing crucial steps. We assume once and for all that $\left( k\log k\right) ^{2}/n\rightarrow 0$ as announced in Remark \ref{conv-k} above. The rate of convergence to $0$ of $\left( k\log k\right) ^{2}/n$ will be tuned when dealing with weak convergence. \subsection{Preliminary material} \bigskip All along the proofs, we will make intensive use of perturbation theory for bounded operators. It may be useful to have basic notions about spectral representation of bounded operators and perturbation theory.
We refer to Kato (1976), Dunford and Schwartz (1988, Chapter VII.3) or Gohberg, Goldberg and Kaashoek (1991) for an introduction to the functional calculus for operators related to Riesz integrals. Roughly speaking, several results mentioned below and throughout the article may be easily understood by considering the residue formula for analytic functions on the complex plane (see Rudin (1987)) and extending it to functions still defined on the complex plane but with values in the space of operators. The introduction of Gohberg, Goldberg and Kaashoek (1991, pp. 4-16) is illuminating with respect to this issue. Let us denote by $\mathcal{B}_{j}$ the oriented circle of the complex plane with center $\lambda _{j}$ and radius $\delta _{j}/2$, where $\delta _{j}=\min \left\{ \lambda _{j}-\lambda _{j+1},\lambda _{j-1}-\lambda _{j}\right\} =\lambda _{j}-\lambda _{j+1}$, the last equality coming from the convexity of the sequence of the $\lambda _{j}$'s. Let us define $\mathcal{C}_{k}=\bigcup_{j=1}^{k}\mathcal{B}_{j}$. The open domain whose boundary is $\mathcal{C}_{k}$ is not connected, but we can apply the functional calculus for bounded operators (see Dunford and Schwartz, Section VII.3, Definitions 8 and 9). With this formalism at hand it is easy to prove the following formulas:
\begin{align}
\Pi _{k_{n}}& =\frac{1}{2\pi \iota }\int_{\mathcal{C}_{k}}\left( zI-\Gamma \right) ^{-1}dz, \\
\Gamma ^{\dag }& =\frac{1}{2\pi \iota }\int_{\mathcal{C}_{k}}\frac{1}{z}\left( zI-\Gamma \right) ^{-1}dz. \label{resinv}
\end{align}
The same is true with the random $\Gamma _{n}$, but the contour $\mathcal{C}_{k}$ must be replaced by its random counterpart $\widehat{\mathcal{C}}_{k}=\bigcup_{j=1}^{k_{n}}\widehat{\mathcal{B}}_{j}$, where each $\widehat{\mathcal{B}}_{j}$ is a random ball of the complex plane with center $\widehat{\lambda }_{j}$ and, for instance, radius $\widehat{\delta }_{j}/2$ with plain notations.
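Both Riesz-integral formulas are easy to illustrate numerically in finite dimension: discretizing each circle $\mathcal{B}_{j}$ and integrating the resolvent of a small symmetric matrix recovers $\Pi_{k}$ and $\Gamma^{\dag}$. A sketch follows (the spectrum and the contour discretization are our own choices):

```python
import numpy as np

lam = np.array([1.0, 0.5, 0.25, 0.125, 0.0625])   # convex decreasing spectrum
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
Gamma = Q @ np.diag(lam) @ Q.T                    # Gamma = sum_j lam_j e_j ⊗ e_j
k, d = 2, 5

Pi_k = np.zeros((d, d), dtype=complex)
Gamma_dag = np.zeros((d, d), dtype=complex)
t = np.linspace(0.0, 2.0*np.pi, 2000, endpoint=False)
for m in range(k):
    # circle B_m of center lam_m and radius delta_m/2
    delta = lam[m] - lam[m+1] if m == 0 else min(lam[m]-lam[m+1], lam[m-1]-lam[m])
    z = lam[m] + (delta/2.0)*np.exp(1j*t)
    dz = 1j*(delta/2.0)*np.exp(1j*t)*(2.0*np.pi/t.size)
    for zi, dzi in zip(z, dz):
        R = np.linalg.inv(zi*np.eye(d) - Gamma)   # resolvent (zI - Gamma)^{-1}
        Pi_k += R*dzi/(2j*np.pi)                  # (1/2πι) ∮ (zI-Γ)^{-1} dz
        Gamma_dag += R*dzi/(zi*2j*np.pi)          # (1/2πι) ∮ (1/z)(zI-Γ)^{-1} dz

# Pi_k is the projection on the 2 leading eigenvectors; Gamma_dag inverts Gamma there
assert np.allclose(Pi_k, Q[:, :k] @ Q[:, :k].T, atol=1e-8)
assert np.allclose(Gamma_dag, Q[:, :k] @ np.diag(1/lam[:k]) @ Q[:, :k].T, atol=1e-8)
```

The periodic trapezoidal rule converges exponentially fast here because the resolvent is analytic in a neighbourhood of each circle.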
Then
\begin{equation*}
\widehat{\Pi }_{k_{n}}=\frac{1}{2\pi \iota }\int_{\widehat{\mathcal{C}}_{k}}\left( zI-\Gamma _{n}\right) ^{-1}dz,\quad \Gamma _{n}^{\dag }=\frac{1}{2\pi \iota }\int_{\widehat{\mathcal{C}}_{k}}\frac{1}{z}\left( zI-\Gamma _{n}\right) ^{-1}dz.
\end{equation*}
The first lemma is based on convex inequalities. In the sequel, much depends on the bounds derived in this lemma.
\begin{lemma}
\label{L1}Consider two large enough positive integers $j$ and $k$ such that $k>j$. Then
\begin{gather}
j\lambda _{j}\ \geq \ k\lambda _{k},\quad \lambda _{j}-\lambda _{k}\geq \left( 1-\frac{j}{k}\right) \lambda _{j},\quad \sum_{j^{\prime }\geq k}\lambda _{j^{\prime }}\leq \left( k+1\right) \lambda _{k}, \label{t2} \\
\sum_{j^{\prime }\geq 1,\,j^{\prime }\neq k}\frac{\lambda _{j^{\prime }}}{\left\vert \lambda _{k}-\lambda _{j^{\prime }}\right\vert }\leq Ck\log k. \notag
\end{gather}
Besides,
\begin{equation*}
\mathbb{E}\sup_{z\in \mathcal{B}_{j}}\left\Vert \left( zI-\Gamma \right) ^{-1/2}\left( \Gamma -\Gamma _{n}\right) \left( zI-\Gamma \right) ^{-1/2}\right\Vert _{\mathcal{L}_{2}}^{2}\leq \frac{C}{n}\left( j\log j\right) ^{2}.
\end{equation*}
\end{lemma}
The proof of this lemma can be found in Cardot, Mas and Sarda (2007), pp. 339-342. We introduce the event
\begin{equation*}
\mathcal{A}_{n}=\left\{ \forall j\in \left\{ 1,...,k_{n}\right\} ,\;\;\frac{\left\vert \widehat{\lambda }_{j}-\lambda _{j}\right\vert }{\delta _{j}}<1/2\right\},
\end{equation*}
which describes the way the estimated eigenvalues concentrate around the population ones: the higher the index $j$, the closer the $\widehat{\lambda }_{j}$'s must be to the $\lambda _{j}$'s.
\begin{proposition}
\label{P1}If $\left( k\log k\right) ^{2}/n\rightarrow 0$, then
\begin{equation*}
\mathbb{P}\left( \limsup \overline{\mathcal{A}}_{n}\right) =0.
\end{equation*}
\end{proposition}
\textbf{Proof :} We check that the Borel-Cantelli lemma applies, that is, that $\sum_{n=1}^{+\infty }\mathbb{P}\left( \overline{\mathcal{A}}_{n}\right) <+\infty $, where
\begin{align*}
\mathbb{P}\left( \overline{\mathcal{A}}_{n}\right) & =\mathbb{P}\left( \exists j\in \left\{ 1,...,k_{n}\right\} :\left\vert \widehat{\lambda }_{j}-\lambda _{j}\right\vert /\delta _{j}>1/2\right) \\
& \leq \sum_{j=1}^{k}\mathbb{P}\left( \left\vert \widehat{\lambda }_{j}-\lambda _{j}\right\vert /\lambda _{j}>\delta _{j}/\left( 2\lambda _{j}\right) \right) \leq \sum_{j=1}^{k}\mathbb{P}\left( \left\vert \widehat{\lambda }_{j}-\lambda _{j}\right\vert /\lambda _{j}>1/\left( 2\left( j+1\right) \right) \right),
\end{align*}
the last inequality following from $\delta _{j}\geq \lambda _{j}/\left( j+1\right) $, a consequence of (\ref{t2}). Now, applying the asymptotic results proved in Bosq (2000), pages 122-124, we see that the asymptotic behaviour of $\mathbb{P}\left( \left\vert \widehat{\lambda }_{j}-\lambda _{j}\right\vert /\lambda _{j}>\frac{1}{2j}\right) $ is the same as that of
\begin{equation*}
\mathbb{P}\left( \left\vert \frac{1}{n}\sum_{i=1}^{n}\left\langle X_{i},e_{j}\right\rangle ^{2}-\lambda _{j}\right\vert >\frac{\lambda _{j}}{2\left( j+1\right) }\right).
\end{equation*}
We apply Bernstein's exponential inequality to the latter, which is possible due to assumption (\ref{assumpt-bernstein}), and we obtain (for the sake of brevity, $j+1$ was replaced by $j$ in the right side of the probability, but this does not change the final result)
\begin{equation*}
\mathbb{P}\left( \left\vert \frac{1}{n}\sum_{i=1}^{n}\left\langle X_{i},e_{j}\right\rangle ^{2}-\lambda _{j}\right\vert >\frac{\lambda _{j}}{2j}\right) \leq 2\exp \left( -\frac{n}{j^{2}}\frac{1}{8c+1/\left( 6j\right) }\right) \leq 2\exp \left( -C\frac{n}{j^{2}}\right),
\end{equation*}
and then
\begin{equation*}
\sum_{j=1}^{k}\mathbb{P}\left( \left\vert \widehat{\lambda }_{j}-\lambda _{j}\right\vert >\frac{\lambda _{j}}{2j}\right) \leq 2k\exp \left( -C\frac{n}{k^{2}}\right).
\end{equation*}
Now it is plain from $\left( k\log k\right) ^{2}/n\rightarrow 0$ that $k\exp \left( -C\frac{n}{k^{2}}\right) \leq 1/n^{1+\varepsilon }$ for some $\varepsilon >0$, which leads to checking that $\sum_{n}k_{n}\exp \left( -C\frac{n}{k_{n}^{2}}\right) <+\infty $, and to the statement of Proposition \ref{P1} through the Borel-Cantelli lemma.
\begin{corollary}
\label{coro2} We may write
\begin{equation*}
\widehat{\Pi }_{k_{n}}=\frac{1}{2\pi \iota }\int_{\mathcal{C}_{k}}\left( zI-\Gamma _{n}\right) ^{-1}dz,\quad \Gamma _{n}^{\dag }=\frac{1}{2\pi \iota }\int_{\mathcal{C}_{k}}\frac{1}{z}\left( zI-\Gamma _{n}\right) ^{-1}dz\quad a.s.,
\end{equation*}
where this time the contour is $\mathcal{C}_{k}$, hence no longer random.
\end{corollary}
\textbf{Proof :} From Proposition \ref{P1}, it is plain that we may assume that almost surely $\widehat{\lambda}_{j}\in\mathcal{B}_{j}$ for $j\in\left\{ 1,...,k\right\} $. Then the formulas above easily stem from perturbation theory (see Kato (1976) or Dunford and Schwartz (1988) for instance).

\subsection{Proofs of the main results}

We start with proving (\ref{interm}), as announced in the foreword of this section. What we bound here is nothing but the term $A_{n}$ in Theorem \ref{TH2}.
\begin{proposition}
\label{ks}The following bound holds:
\begin{equation*}
\mathbb{E}\left\Vert S\left( \widehat{\Pi}_{k}-\Pi_{k}\right) \left( X_{n+1}\right) \right\Vert ^{2}\leq C\frac{k^{2}\lambda_{k}}{n}\left\Vert S\right\Vert _{\mathcal{L}_{2}}.
\end{equation*}
\end{proposition}
\textbf{Proof : }We start by noting that
\begin{align*}
\mathbb{E}\left\Vert S\left( \widehat{\Pi }_{k}-\Pi _{k}\right) \left( X_{n+1}\right) \right\Vert ^{2}& =\mathbb{E}\left[ \mathrm{tr}\left( \Gamma \left( \widehat{\Pi }_{k}-\Pi _{k}\right) S^{\ast }S\left( \widehat{\Pi }_{k}-\Pi _{k}\right) \right) \right] \\
& =\mathbb{E}\left\Vert S\left( \widehat{\Pi }_{k}-\Pi _{k}\right) \Gamma ^{1/2}\right\Vert _{\mathcal{L}_{2}}^{2} \\
& =\sum_{j=1}^{+\infty }\sum_{\ell =1}^{+\infty }\left\langle S\left( \widehat{\Pi }_{k}-\Pi _{k}\right) \Gamma ^{1/2}\left( e_{j}\right) ,e_{\ell }\right\rangle ^{2}.
\end{align*}
By Corollary \ref{coro2}, we have
\begin{equation}
\widehat{\Pi }_{k}-\Pi _{k}=\frac{1}{2\pi \iota }\sum_{m=1}^{k}\int_{\mathcal{B}_{m}}\left\{ \left( zI-\Gamma _{n}\right) ^{-1}-\left( zI-\Gamma \right) ^{-1}\right\} dz=\sum_{m=1}^{k}T_{m,n}, \label{sw}
\end{equation}
where $T_{m,n}=\frac{1}{2\pi \iota }\int_{\mathcal{B}_{m}}\left( zI-\Gamma _{n}\right) ^{-1}\left( \Gamma -\Gamma _{n}\right) \left( zI-\Gamma \right) ^{-1}dz$. To go ahead now, we ask the reader to accept momentarily that, for all $m\leq k$, the asymptotic behaviour of $T_{m,n}$ is the same as that of
\begin{equation*}
T_{m,n}^{\ast }=\frac{1}{2\pi \iota }\int_{\mathcal{B}_{m}}\left( zI-\Gamma \right) ^{-1}\left( \Gamma -\Gamma _{n}\right) \left( zI-\Gamma \right) ^{-1}dz,
\end{equation*}
where the random $\left( zI-\Gamma _{n}\right) ^{-1}$ was replaced by the non-random $\left( zI-\Gamma \right) ^{-1}$, and that studying $\widehat{\Pi }_{k}-\Pi _{k}$ comes down to studying
\begin{equation*}
\frac{1}{2\pi \iota }\sum_{m=1}^{k}\int_{\mathcal{B}_{m}}\left( zI-\Gamma \right) ^{-1}\left( \Gamma -\Gamma _{n}\right) \left( zI-\Gamma \right) ^{-1}dz.
\end{equation*}
The proof that this switch is allowed is postponed to Lemma \ref{switch}.
We go on with
\begin{align*}
& \left\langle S\left( \widehat{\Pi }_{k}-\Pi _{k}\right) \Gamma ^{1/2}\left( e_{j}\right) ,e_{\ell }\right\rangle =\frac{1}{2\pi \iota }\sum_{m=1}^{k}\int_{\mathcal{B}_{m}}\left\langle \left( zI-\Gamma \right) ^{-1}\left( \Gamma -\Gamma _{n}\right) \left( zI-\Gamma \right) ^{-1}\Gamma ^{1/2}\left( e_{j}\right) ,S^{\ast }e_{\ell }\right\rangle dz \\
& =\frac{\sqrt{\lambda _{j}}}{2\pi \iota }\sum_{m=1}^{k}\int_{\mathcal{B}_{m}}\left\langle \left( zI-\Gamma \right) ^{-1}\left( \Gamma -\Gamma _{n}\right) \left( e_{j}\right) ,S^{\ast }e_{\ell }\right\rangle \frac{dz}{z-\lambda _{j}},
\end{align*}
where $S^{\ast }$ is the adjoint operator of $S$. We obtain
\begin{align*}
& \int_{\mathcal{B}_{m}}\left\langle \left( zI-\Gamma \right) ^{-1}\left( \Gamma -\Gamma _{n}\right) \left( e_{j}\right) ,S^{\ast }e_{\ell }\right\rangle \frac{dz}{z-\lambda _{j}} \\
& =\int_{\mathcal{B}_{m}}\sum_{j^{\prime }=1}^{+\infty }\left\langle \left( zI-\Gamma \right) ^{-1}\left( \Gamma -\Gamma _{n}\right) \left( e_{j}\right) ,e_{j^{\prime }}\right\rangle \left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \frac{dz}{z-\lambda _{j}} \\
& =\int_{\mathcal{B}_{m}}\sum_{j^{\prime }=1}^{+\infty }\left\langle \left( \Gamma -\Gamma _{n}\right) \left( e_{j}\right) ,e_{j^{\prime }}\right\rangle \left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \frac{dz}{\left( z-\lambda _{j}\right) \left( z-\lambda _{j^{\prime }}\right) }.
\end{align*}
We deduce that
\begin{equation*}
\left\langle S\left( \widehat{\Pi }_{k}-\Pi _{k}\right) \Gamma ^{1/2}\left( e_{j}\right) ,e_{\ell }\right\rangle =\frac{\sqrt{\lambda _{j}}}{2\pi \iota }\sum_{j^{\prime }=1}^{+\infty }\left\langle \left( \Gamma -\Gamma _{n}\right) \left( e_{j}\right) ,e_{j^{\prime }}\right\rangle \left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \sum_{m=1}^{k}\int_{\mathcal{B}_{m}}\frac{dz}{\left( z-\lambda _{j}\right) \left( z-\lambda _{j^{\prime }}\right) },
\end{equation*}
then
\begin{align*}
& \left\langle S\left( \widehat{\Pi }_{k}-\Pi _{k}\right) \Gamma ^{1/2}\left( e_{j}\right) ,e_{\ell }\right\rangle \\
& =\frac{\sqrt{\lambda _{j}}}{2\pi \iota }\sum_{j^{\prime }=1}^{k}\left\langle \left( \Gamma -\Gamma _{n}\right) \left( e_{j}\right) ,e_{j^{\prime }}\right\rangle \left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \sum_{m=1}^{k}\int_{\mathcal{B}_{m}}\frac{dz}{\left( z-\lambda _{j}\right) \left( z-\lambda _{j^{\prime }}\right) } \\
& +\frac{\sqrt{\lambda _{j}}}{2\pi \iota }\sum_{j^{\prime }=k+1}^{+\infty }\left\langle \left( \Gamma -\Gamma _{n}\right) \left( e_{j}\right) ,e_{j^{\prime }}\right\rangle \left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \sum_{m=1}^{k}\int_{\mathcal{B}_{m}}\frac{dz}{\left( z-\lambda _{j}\right) \left( z-\lambda _{j^{\prime }}\right) },
\end{align*}
where
\begin{equation*}
\sum_{m=1}^{k}\int_{\mathcal{B}_{m}}\frac{dz}{\left( z-\lambda _{j}\right) \left( z-\lambda _{j^{\prime }}\right) }=\left\{
\begin{array}{l}
0\text{ if $j,j^{\prime }>k$}, \\
\left( \lambda _{j}-\lambda _{j^{\prime }}\right) ^{-1}\text{ if $j^{\prime }>k,\;j\leq k$}, \\
\left( \lambda _{j^{\prime }}-\lambda _{j}\right) ^{-1}\text{ if $j^{\prime }\leq k,\;j>k$}, \\
1-1=0\text{ if $j,j^{\prime }\leq k$}.
\end{array}
\right.
\end{equation*}
Then
\begin{align*}
& \sum_{j=1}^{+\infty }\left\langle S\left( \widehat{\Pi }_{k}-\Pi _{k}\right) \Gamma ^{1/2}\left( e_{j}\right) ,e_{\ell }\right\rangle ^{2}=\sum_{j=1}^{k}\left[ \frac{\sqrt{\lambda _{j}}}{2\pi \iota }\sum_{j^{\prime }=k+1}^{+\infty }\frac{\left\langle \left( \Gamma -\Gamma _{n}\right) \left( e_{j}\right) ,e_{j^{\prime }}\right\rangle }{\left( \lambda _{j}-\lambda _{j^{\prime }}\right) }\left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \right] ^{2} \\
& +\sum_{j=k+1}^{+\infty }\left[ \frac{\sqrt{\lambda _{j}}}{2\pi \iota }\sum_{j^{\prime }=1}^{k}\frac{\left\langle \left( \Gamma -\Gamma _{n}\right) \left( e_{j}\right) ,e_{j^{\prime }}\right\rangle }{\left( \lambda _{j^{\prime }}-\lambda _{j}\right) }\left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \right] ^{2}=A+B,
\end{align*}
where
\begin{align*}
A& =\frac{1}{4\pi ^{2}}\sum_{j=1}^{k}\lambda _{j}\left[ \sum_{j^{\prime }=k+1}^{+\infty }\frac{\left\langle \left( \Gamma -\Gamma _{n}\right) \left( e_{j}\right) ,e_{j^{\prime }}\right\rangle }{\left( \lambda _{j}-\lambda _{j^{\prime }}\right) }\left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \right] ^{2}, \\
B& =\frac{1}{4\pi ^{2}}\sum_{j=k+1}^{+\infty }\lambda _{j}\left[ \sum_{j^{\prime }=1}^{k}\frac{\left\langle \left( \Gamma -\Gamma _{n}\right) \left( e_{j}\right) ,e_{j^{\prime }}\right\rangle }{\left( \lambda _{j^{\prime }}-\lambda _{j}\right) }\left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \right] ^{2}.
\end{align*}
We first compute $\mathbb{E}A$.
To that aim we focus on
\begin{align*}
& \mathbb{E}\left[ \sum_{j^{\prime }=k+1}^{+\infty }\frac{\left\langle \left( \Gamma -\Gamma _{n}\right) \left( e_{j}\right) ,e_{j^{\prime }}\right\rangle }{\left( \lambda _{j}-\lambda _{j^{\prime }}\right) }\left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \right] ^{2}=\sum_{j^{\prime }=k+1}^{+\infty }\frac{\mathbb{E}\left\langle \left( \Gamma _{n}-\Gamma \right) \left( e_{j}\right) ,e_{j^{\prime }}\right\rangle ^{2}}{\left( \lambda _{j}-\lambda _{j^{\prime }}\right) ^{2}}\left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle ^{2} \\
& +\sum_{\substack{j^{\prime },j^{\prime \prime }=k+1 \\ j^{\prime }\neq j^{\prime \prime }}}^{+\infty }\mathbb{E}\left\langle \left( \Gamma _{n}-\Gamma \right) \left( e_{j}\right) ,e_{j^{\prime }}\right\rangle \left\langle \left( \Gamma -\Gamma _{n}\right) \left( e_{j}\right) ,e_{j^{\prime \prime }}\right\rangle \frac{\left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \left\langle S^{\ast }e_{\ell },e_{j^{\prime \prime }}\right\rangle }{\left( \lambda _{j}-\lambda _{j^{\prime }}\right) \left( \lambda _{j}-\lambda _{j^{\prime \prime }}\right) } \\
& =\frac{1}{n}\sum_{j^{\prime }=k+1}^{+\infty }c_{j,j^{\prime }}\frac{\lambda _{j}\lambda _{j^{\prime }}}{\left( \lambda _{j}-\lambda _{j^{\prime }}\right) ^{2}}\left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle ^{2} \\
& +\frac{1}{n}\sum_{\substack{j^{\prime },j^{\prime \prime }=k+1 \\ j^{\prime }\neq j^{\prime \prime }}}^{+\infty }\mathbb{E}\left[ \left\langle X,e_{j}\right\rangle ^{2}\left\langle X,e_{j^{\prime }}\right\rangle \left\langle X,e_{j^{\prime \prime }}\right\rangle \right] \frac{\left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \left\langle S^{\ast }e_{\ell },e_{j^{\prime \prime }}\right\rangle }{\left( \lambda _{j}-\lambda _{j^{\prime }}\right) \left( \lambda _{j}-\lambda _{j^{\prime \prime }}\right) }.
\end{align*}
Then
\begin{align*}
& \mathbb{E}\left[ \sum_{j^{\prime }=k+1}^{+\infty }\frac{\left\langle \left( \Gamma -\Gamma _{n}\right) \left( e_{j}\right) ,e_{j^{\prime }}\right\rangle }{\left( \lambda _{j}-\lambda _{j^{\prime }}\right) }\left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \right] ^{2} \\
& \leq C_{1}\frac{\lambda _{j}}{n}\sum_{j^{\prime }=k+1}^{+\infty }\frac{\lambda _{j^{\prime }}}{\left( \lambda _{j}-\lambda _{j^{\prime }}\right) ^{2}}\left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle ^{2}+C_{2}\frac{\lambda _{j}}{n}\sum_{\substack{j^{\prime },j^{\prime \prime }=k+1 \\ j^{\prime }\neq j^{\prime \prime }}}^{+\infty }\sqrt{\lambda _{j^{\prime }}\lambda _{j^{\prime \prime }}}\frac{\left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \left\langle S^{\ast }e_{\ell },e_{j^{\prime \prime }}\right\rangle }{\left( \lambda _{j}-\lambda _{j^{\prime }}\right) \left( \lambda _{j}-\lambda _{j^{\prime \prime }}\right) } \\
& \leq C\frac{\lambda _{j}}{n}\left( \sum_{j^{\prime }=k+1}^{+\infty }\frac{\sqrt{\lambda _{j^{\prime }}}}{\left( \lambda _{j}-\lambda _{j^{\prime }}\right) }\left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \right) ^{2}.
\end{align*}
We could prove in exactly the same way that
\begin{equation}
\mathbb{E}\left[ \sum_{j^{\prime }=1}^{k}\frac{\left\langle \left( \Gamma _{n}-\Gamma \right) \left( e_{j}\right) ,e_{j^{\prime }}\right\rangle }{\left( \lambda _{j^{\prime }}-\lambda _{j}\right) }\left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \right] ^{2}\leq C^{\prime }\frac{\lambda _{j}}{n}\left( \sum_{j^{\prime }=1}^{k}\frac{\sqrt{\lambda _{j^{\prime }}}}{\left( \lambda _{j^{\prime }}-\lambda _{j}\right) }\left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \right) ^{2}.
\label{L2}
\end{equation}
We turn back to
\begin{align*}
& \left\vert \sum_{j^{\prime }=k+1}^{+\infty }\frac{\sqrt{\lambda _{j^{\prime }}}}{\left( \lambda _{j}-\lambda _{j^{\prime }}\right) }\left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \right\vert \leq \sum_{j^{\prime }=k+1}^{+\infty }\frac{\sqrt{\lambda _{j^{\prime }}}}{\left( \lambda _{j}-\lambda _{j^{\prime }}\right) }\left\vert \left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \right\vert \\
& =\sum_{j^{\prime }=k+1}^{2k}\frac{\sqrt{\lambda _{j^{\prime }}}}{\left( \lambda _{j}-\lambda _{j^{\prime }}\right) }\left\vert \left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \right\vert +\sum_{j^{\prime }=2k+1}^{+\infty }\frac{\sqrt{\lambda _{j^{\prime }}}}{\left( \lambda _{j}-\lambda _{j^{\prime }}\right) }\left\vert \left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \right\vert \\
& \leq \frac{\sqrt{\lambda _{k+1}}}{\left( \lambda _{j}-\lambda _{k+1}\right) }\sum_{j^{\prime }=k+1}^{2k}\left\vert \left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \right\vert +\frac{2}{\lambda _{j}}\sum_{j^{\prime }=2k+1}^{+\infty }\sqrt{\lambda _{j^{\prime }}}\left\vert \left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \right\vert ,
\end{align*}
hence
\begin{eqnarray}
\mathbb{E}A &\leq &\frac{C}{n}\sum_{j=1}^{k}\lambda _{j}^{2}\left[ \frac{\lambda _{k+1}}{\left( \lambda _{j}-\lambda _{k+1}\right) ^{2}}\left( \sum_{j^{\prime }=k+1}^{2k}\left\vert \left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \right\vert \right) ^{2}\right] \label{ring} \\
&&+\frac{Ck}{n}\left( \sum_{j^{\prime }=2k+1}^{+\infty }\sqrt{\lambda _{j^{\prime }}}\left\vert \left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \right\vert \right) ^{2}. \notag
\end{eqnarray}
By the Cauchy-Schwarz inequality, the second term is bounded by
\begin{equation*}
\frac{Ck}{n}\left( \sum_{j^{\prime }=2k+1}^{+\infty }\lambda _{j^{\prime }}\right) \left( \sum_{j^{\prime }=2k+1}^{+\infty }\left\vert \left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \right\vert ^{2}\right) \leq \frac{Ck^{2}}{n}\lambda _{k}\sum_{j^{\prime }=2k+1}^{+\infty }\left\vert \left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \right\vert ^{2},
\end{equation*}
because $\sum_{j^{\prime }=2k+1}^{+\infty }\lambda _{j^{\prime }}\leq \left( 2k+1\right) \lambda _{2k+1}\leq k\lambda _{k}$ by Lemma \ref{L1}. We focus on the term on line (\ref{ring}):
\begin{align*}
& \sum_{j=1}^{k}\lambda _{j}^{2}\left[ \frac{\lambda _{k+1}}{\left( \lambda _{j}-\lambda _{k+1}\right) ^{2}}\left( \sum_{j^{\prime }=k+1}^{2k}\left\vert \left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \right\vert \right) ^{2}\right] \leq \lambda _{k+1}\sum_{j=1}^{k}\left[ \left( \frac{k+1}{k+1-j}\right) ^{2}\left( \sum_{j^{\prime }=k+1}^{2k}\left\vert \left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \right\vert \right) ^{2}\right] \\
& \leq k\left( \sum_{j^{\prime }=k+1}^{2k}\left\vert \left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \right\vert ^{2}\right) \left( k+1\right) ^{2}\lambda _{k+1}\sum_{j=1}^{k}\frac{1}{j^{2}}\leq C\left( \sum_{j^{\prime }=k+1}^{2k}\left\vert \left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \right\vert ^{2}\right) k^{2}\lambda _{k+1},
\end{align*}
hence $\mathbb{E}A\leq \frac{C}{n}\left( \sum_{j^{\prime }=k+1}^{+\infty }\left\vert \left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \right\vert ^{2}\right) k^{2}\lambda _{k}.$ We now turn to proving a similar bound for $B$. We detail the method because it is significantly different.
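The two ingredients used in the bound of $\mathbb{E}A$, the Cauchy-Schwarz inequality and the tail inequality of Lemma \ref{L1}, are easy to sanity-check numerically for a concrete convex spectrum; we take $\lambda_j=j^{-2}$ and random square-summable coefficients (all choices below are illustrative, and we use the lemma's constant $k+1$ in the tail bound):

```python
import numpy as np

rng = np.random.default_rng(1)
J = np.arange(1, 20001)
lam = 1.0/J**2                         # λ_j = j^{-2}: convex and decreasing

for k in (5, 50, 500):
    tail = lam[2*k:]                   # λ_{j'} for j' >= 2k+1 (1-based indexing)
    a = rng.standard_normal(tail.size)/J[2*k:]   # stand-ins for <S* e_l, e_{j'}>
    # Cauchy-Schwarz: (Σ √λ |a|)^2 <= (Σ λ)(Σ a^2)
    assert np.sum(np.sqrt(tail)*np.abs(a))**2 <= np.sum(tail)*np.sum(a**2) + 1e-12
    # Lemma L1 applied at 2k+1: Σ_{j'>=2k+1} λ_{j'} <= (2k+2) λ_{2k+1} <= k λ_k
    assert np.sum(tail) <= (2*k + 2)*lam[2*k]
    assert (2*k + 2)*lam[2*k] <= k*lam[k-1]
```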
We start from (\ref{L2}) and denote by $\lfloor x\rfloor $ the greatest integer not exceeding $x$:
\begin{align*}
& \frac{\lambda _{j}}{n}\left( \sum_{j^{\prime }=1}^{k}\frac{\sqrt{\lambda _{j^{\prime }}}}{\left( \lambda _{j^{\prime }}-\lambda _{j}\right) }\left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \right) ^{2} \\
& \leq \frac{\lambda _{j}}{n}\left[ \left( \sum_{j^{\prime }=1}^{\left\lfloor k/2\right\rfloor }\frac{\sqrt{\lambda _{j^{\prime }}}}{\left( \lambda _{j^{\prime }}-\lambda _{j}\right) }\left\vert \left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \right\vert \right) ^{2}+\left( \sum_{j^{\prime }=\left\lfloor k/2\right\rfloor }^{k}\frac{\sqrt{\lambda _{j^{\prime }}}}{\left( \lambda _{j^{\prime }}-\lambda _{j}\right) }\left\vert \left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \right\vert \right) ^{2}\right] \\
& \leq C\frac{\lambda _{j}}{n}\left[ \left( \sum_{j^{\prime }=1}^{\left\lfloor k/2\right\rfloor }\frac{1}{\sqrt{\lambda _{j^{\prime }}}}\left\vert \left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \right\vert \right) ^{2}+\frac{1}{\lambda _{k}-\lambda _{j}}\frac{j}{j-k}k\sum_{j^{\prime }=\left\lfloor k/2\right\rfloor }^{k}\left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle ^{2}\right] \\
& \leq C\frac{\lambda _{j}k}{n\lambda _{k}}\sum_{j^{\prime }=1}^{\left\lfloor k/2\right\rfloor }\left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle ^{2}+\frac{1}{n}\frac{\lambda _{j}}{\lambda _{k}-\lambda _{j}}\frac{j}{j-k}k\sum_{j^{\prime }=\left\lfloor k/2\right\rfloor }^{k}\left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle ^{2} \\
& \leq C\frac{k}{n}\sum_{j^{\prime }=1}^{k}\left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle ^{2}+\frac{k}{n}\left( \frac{j}{j-k}\right) ^{2}\sum_{j^{\prime }=\left\lfloor k/2\right\rfloor }^{k}\left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle ^{2}.
\end{align*}
From the definition of $B$, we finally get
\begin{equation*}
\mathbb{E}B\leq C\frac{k}{n}\left( \sum_{j^{\prime }=1}^{k}\left\vert \left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \right\vert ^{2}\right) \sum_{j=k+1}^{+\infty }\lambda _{j}+\left( \sum_{j^{\prime }=\left\lfloor k/2\right\rfloor }^{k}\left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle ^{2}\right) \frac{k}{n}\sum_{j=k+1}^{+\infty }\lambda _{j}\left( \frac{j}{j-k}\right) ^{2}.
\end{equation*}
It is plain that, for sufficiently large $k$, $\sum_{j^{\prime }=\left\lfloor k/2\right\rfloor }^{k}\left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle ^{2}\leq C/k$ (otherwise $\sum_{j^{\prime }}\left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle ^{2}$ could not converge), whence
\begin{align*}
& \left( \sum_{j^{\prime }=\left\lfloor k/2\right\rfloor }^{k}\left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle ^{2}\right) \frac{k}{n}\sum_{j=k+1}^{+\infty }\lambda _{j}\left( \frac{j}{j-k}\right) ^{2}\leq \frac{C}{n}\left[ \sum_{j=k+1}^{2k}\lambda _{j}\left( \frac{j}{j-k}\right) ^{2}+\sum_{j=2k}^{+\infty }\lambda _{j}\left( \frac{j}{j-k}\right) ^{2}\right] \\
& \leq \frac{C}{n}\left[ \sum_{j=k+1}^{2k}\lambda _{j}\left( \frac{j}{j-k}\right) ^{2}+4\sum_{j=2k}^{+\infty }\lambda _{j}\right] .
\end{align*}
Denoting $\varkappa _{k}=\sup_{k+1\leq j\leq 2k}\left( j\log j\,\lambda _{j}\right) $, we get at last
\begin{align*}
\sum_{j=k+1}^{2k}\lambda _{j}\left( \frac{j}{j-k}\right) ^{2}& \leq \sup_{k+1\leq j\leq 2k}\left( j\log j\,\lambda _{j}\right) \frac{1}{\log k}\sum_{j=k+1}^{2k}\frac{j}{j-k} \\
& \leq \varkappa _{k}\frac{1}{\log k}\sum_{j=1}^{k}\frac{k+j}{j}\leq Ck\varkappa _{k},
\end{align*}
and $\mathbb{E}B\leq C\frac{k}{n}\varkappa _{k}\left( \sum_{j^{\prime }=1}^{k}\left\vert \left\langle S^{\ast }e_{\ell },e_{j^{\prime }}\right\rangle \right\vert ^{2}\right) $, with $\varkappa _{k}\rightarrow 0$.
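For the same model spectrum $\lambda_j=j^{-2}$ (so that $\varkappa_k=\log (k+1)/(k+1)\rightarrow 0$), the final bound $\sum_{j=k+1}^{2k}\lambda_j\left( j/(j-k)\right)^2\leq Ck\varkappa_k$ can be confirmed numerically; the constant $C=5$ below is an illustrative choice:

```python
import numpy as np

for k in (10, 100, 1000):
    j = np.arange(k+1, 2*k+1, dtype=float)
    lam = j**-2.0                                   # λ_j = j^{-2} on the range k+1..2k
    kappa = np.max(j*np.log(j)*lam)                 # ϰ_k = sup_{k+1<=j<=2k} j log(j) λ_j
    lhs = np.sum(lam*(j/(j - k))**2)                # Σ_{j=k+1}^{2k} λ_j (j/(j-k))^2
    assert lhs <= 5.0*k*kappa                       # the bound C k ϰ_k with C = 5
    assert np.isclose(kappa, np.log(k+1)/(k+1))     # ϰ_k indeed tends to 0 here
```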
Finally,
\begin{equation*}
\sum_{j=1}^{+\infty }\sum_{\ell =1}^{+\infty }\left\langle S\left( \widehat{\Pi }_{k}-\Pi _{k}\right) \Gamma ^{1/2}\left( e_{j}\right) ,e_{\ell }\right\rangle ^{2}\leq C\frac{k}{n}\varkappa _{k}\sum_{j=1}^{+\infty }\sum_{\ell =1}^{+\infty }\left\vert \left\langle S^{\ast }e_{\ell },e_{j}\right\rangle \right\vert ^{2}.
\end{equation*}
This last bound almost concludes the rather long proof of Proposition \ref{ks}. It remains to ensure that switching $T_{m,n}^{\ast }$ and $T_{m,n}$, as announced just below display (\ref{sw}), is possible.
\begin{lemma}
\label{switch}We have
\begin{equation*}
\mathbb{E}\sum_{j=1}^{+\infty}\sum_{\ell=1}^{+\infty}\left\langle S\left( \widehat{\Pi}_{k}-\Pi_{k}\right) \Gamma^{1/2}\left( e_{j}\right) ,e_{\ell}\right\rangle ^{2}\sim\mathbb{E}\sum_{j=1}^{+\infty}\sum_{\ell=1}^{+\infty }\sum_{m=1}^{k}\left\langle ST_{m,n}^{\ast}\Gamma^{1/2}\left( e_{j}\right) ,e_{\ell}\right\rangle ^{2}.
\end{equation*}
In other words, switching $T_{m,n}^{\ast}$ and $T_{m,n}$ is possible in display (\ref{sw}).
\end{lemma}
The proof of this lemma is close to the control of the second order term at pages 351-352 of Cardot, Mas and Sarda (2007), and we only give a sketch of it. We start from
\begin{align*}
T_{m,n}& =\frac{1}{2\pi \iota }\int_{\mathcal{B}_{m}}\left( zI-\Gamma _{n}\right) ^{-1}\left( \Gamma -\Gamma _{n}\right) \left( zI-\Gamma \right) ^{-1}dz \\
& =\frac{1}{2\pi \iota }\int_{\mathcal{B}_{m}}\left( zI-\Gamma \right) ^{-1/2}R_{n}\left( z\right) \left( zI-\Gamma \right) ^{-1/2}\left( \Gamma -\Gamma _{n}\right) \left( zI-\Gamma \right) ^{-1}dz,
\end{align*}
with $R_{n}\left( z\right) =\left( zI-\Gamma \right) ^{1/2}\left( zI-\Gamma _{n}\right) ^{-1}\left( zI-\Gamma \right) ^{1/2}$. Besides, as can be seen from Lemma 4 in Cardot, Mas and Sarda (2007),
\begin{equation*}
\left[ I+\left( zI-\Gamma \right) ^{-1/2}\left( \Gamma -\Gamma _{n}\right) \left( zI-\Gamma \right) ^{-1/2}\right] R_{n}\left( z\right) =I.
\end{equation*}
Denoting $S_{n}\left( z\right) =\left( zI-\Gamma \right) ^{-1/2}\left( \Gamma -\Gamma _{n}\right) \left( zI-\Gamma \right) ^{-1/2}$, it is plain that, when $\left\Vert S_{n}\left( z\right) \right\Vert \leq 1$ for all $z\in \mathcal{C}_{k}$,
\begin{equation*}
R_{n}\left( z\right) =I+\sum_{m=1}^{+\infty }\left( -1\right) ^{m}S_{n}^{m}\left( z\right) :=I+R_{n}^{0}\left( z\right),
\end{equation*}
with $\left\Vert R_{n}^{0}\left( z\right) \right\Vert _{\infty }\leq C\left\Vert S_{n}\left( z\right) \right\Vert _{\infty }$ for all $z\in \mathcal{C}_{k}$. Turning back to our initial equation we get, conditionally on $\left\Vert S_{n}\left( z\right) \right\Vert \leq 1$ for all $z\in \mathcal{C}_{k}$,
\begin{equation*}
T_{m,n}-T_{m,n}^{\ast }=\frac{1}{2\pi \iota }\int_{\mathcal{B}_{m}}\left( zI-\Gamma \right) ^{-1/2}R_{n}^{0}\left( z\right) \left( zI-\Gamma \right) ^{-1/2}\left( \Gamma -\Gamma _{n}\right) \left( zI-\Gamma \right) ^{-1}dz,
\end{equation*}
and we confine ourselves to considering only the first term in the development of $R_{n}^{0}\left( z\right) $, which writes $\left( 2\pi \iota \right) ^{-1}\int_{\mathcal{B}_{m}}\left( zI-\Gamma \right) ^{-1/2}S_{n}^{2}\left( z\right) \left( zI-\Gamma \right) ^{-1/2}dz$. Now split $S\left( \widehat{\Pi }_{k}-\Pi _{k}\right) \Gamma ^{1/2}=S\left( \widehat{\Pi }_{k}-\Pi _{k}\right) \Gamma ^{1/2}1\mathrm{I}_{\mathcal{J}}+S\left( \widehat{\Pi }_{k}-\Pi _{k}\right) \Gamma ^{1/2}1\mathrm{I}_{\overline{\mathcal{J}}}$, where
\begin{equation*}
\mathcal{J}=\left\{ \sup_{z\in \mathcal{C}_{k}}\left\Vert \left( zI-\Gamma \right) ^{-1/2}\left( \Gamma -\Gamma _{n}\right) \left( zI-\Gamma \right) ^{-1/2}\right\Vert _{\mathcal{L}_{2}}^{2}<\tau _{n}k_{n}/n\right\}
\end{equation*}
and $\tau _{n}$ will be tuned later.
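The expansion of $R_{n}(z)$ is the usual Neumann series: when $\Vert S_{n}(z)\Vert<1$, one has $(I+S_{n}(z))^{-1}=I+\sum_{m\geq 1}(-1)^{m}S_{n}^{m}(z)$, with remainder controlled by $\Vert S_{n}(z)\Vert$. A finite-dimensional illustration (the matrix and the norm level are our choices):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 6))
Sn = 0.4*A/np.linalg.norm(A, 2)                # a perturbation with spectral norm 0.4 < 1

# R = (I + Sn)^{-1} = I + Σ_{m>=1} (-1)^m Sn^m = I + R0 (series truncated far in the tail)
R0 = sum((-1)**m * np.linalg.matrix_power(Sn, m) for m in range(1, 60))
R = np.linalg.inv(np.eye(6) + Sn)
assert np.allclose(R, np.eye(6) + R0)
# the remainder R0 is dominated by the geometric series: ||R0|| <= ||Sn||/(1 - ||Sn||)
assert np.linalg.norm(R0, 2) <= 0.4/(1 - 0.4) + 1e-12
```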
We have
\begin{equation}
\mathbb{E}\left\Vert S\left( \widehat{\Pi }_{k}-\Pi _{k}\right) \Gamma ^{1/2}1\mathrm{I}_{\overline{\mathcal{J}}}\right\Vert _{\mathcal{L}_{2}}^{2}\leq 4\left\Vert S\Gamma ^{1/2}\right\Vert _{\mathcal{L}_{2}}^{2}\mathbb{P}\left( \overline{\mathcal{J}}\right), \label{rem}
\end{equation}
and
\begin{eqnarray*}
&&\left\Vert S\left[ \left( \widehat{\Pi }_{k}-\Pi _{k}\right) -\sum_{m=1}^{k}T_{m,n}^{\ast }\right] \Gamma ^{1/2}1\mathrm{I}_{\mathcal{J}}\right\Vert _{\mathcal{L}_{2}} \\
&\leq &\left\Vert S\left[ \sum_{m=1}^{k}\left( 2\pi \iota \right) ^{-1}\int_{\mathcal{B}_{m}}\left( zI-\Gamma \right) ^{-1/2}S_{n}^{2}\left( z\right) \left( zI-\Gamma \right) ^{-1/2}dz\right] \Gamma ^{1/2}1\mathrm{I}_{\mathcal{J}}\right\Vert _{\mathcal{L}_{2}} \\
&\leq &\left( 2\pi \right) ^{-1}\frac{\tau _{n}^{2}k_{n}^{2}}{n^{2}}\sum_{m=1}^{k}\delta _{m}\sup_{z\in \mathcal{B}_{m}}\left\{ \left\Vert \left( zI-\Gamma \right) ^{-1/2}\Gamma ^{1/2}\right\Vert _{\infty }\left\Vert S\left( zI-\Gamma \right) ^{-1/2}\right\Vert _{\infty }\right\} \\
&\leq &\left( 2\pi \right) ^{-1}\left\Vert S\right\Vert _{\infty }\frac{\tau _{n}^{2}k_{n}^{2}}{n^{2}}\sum_{m=1}^{k}\sqrt{\delta _{m}m}.
\end{eqnarray*}
Now from $\sum_{m=1}^{+\infty }m\delta _{m}<+\infty $ we get $\sqrt{\delta _{m}m}\leq c/\sqrt{m\log m}$, hence $\frac{\tau _{n}^{2}k_{n}^{2}}{n^{2}}\sum_{m=1}^{k}\sqrt{\delta _{m}m}=o\left( \sqrt{k_{n}/n}\right) $ whenever $k_{n}^{4}\tau _{n}^{4}/n^{3}\rightarrow 0$. The last step consists in controlling the right hand side of (\ref{rem}). In Cardot, Mas and Sarda (2007) this is done by classical Markov moment assumptions, under the condition that $k_{n}^{5}\log ^{4}n/n$ tends to zero. Here, Bernstein's exponential inequality yields a tighter bound and ensures that $\mathbb{P}\left( \overline{\mathcal{J}}\right) =o\left( k_{n}/n\right) $ when $k_{n}^{2}\log ^{2}k_{n}/n$ tends to zero. The method of proof is close in spirit to, though slightly more intricate than, that of Proposition \ref{P1}.
\begin{proposition}
\label{variance}Let $T_{n}=\frac{1}{n}\sum_{i=1}^{n}\varepsilon _{i}\left\langle \Gamma _{n}^{\dag }X_{i},X_{n+1}\right\rangle $; then
\begin{equation*}
\mathbb{E}\left\Vert T_{n}\right\Vert ^{2}=\frac{\sigma _{\varepsilon }^{2}}{n}k+\frac{\mathrm{tr}\left[ \Gamma \mathbb{E}\left( \Gamma _{n}^{\dag }-\Gamma ^{\dag }\right) \right] }{n}.
\end{equation*}
\end{proposition}
\begin{remark}
We see that the right hand side in the display above matches the decomposition in (\ref{mse}), and $\mathrm{tr}\left[ \Gamma\mathbb{E}\left( \Gamma _{n}^{\dag}-\Gamma^{\dag}\right) \right] /n$ is precisely $B_{n}$ in Theorem \ref{TH2}.
\end{remark}
\textbf{Proof : } We have
\begin{equation*}
\left\Vert T_{n}\right\Vert ^{2}=\frac{1}{n^{2}}\sum_{i=1}^{n}\left\Vert \varepsilon _{i}\right\Vert ^{2}\left\langle \Gamma _{n}^{\dag }X_{i},X_{n+1}\right\rangle ^{2}+\frac{1}{n^{2}}\sum_{i\neq i^{\prime }}^{n}\left\langle \varepsilon _{i},\varepsilon _{i^{\prime }}\right\rangle \left\langle \Gamma _{n}^{\dag }X_{i},X_{n+1}\right\rangle \left\langle \Gamma _{n}^{\dag }X_{i^{\prime }},X_{n+1}\right\rangle .
\end{equation*}
We take expectations in the display above and note that the distribution of each member of the first series on the right hand side does not depend on $n$ or $i$, and that, due to the linearity of expectation and to $\mathbb{E}\left( \varepsilon _{i}|X_{i}\right) =0$, the expectation of the second series is null, hence
\begin{align*}
\mathbb{E}\left\Vert T_{n}\right\Vert ^{2}& =\frac{1}{n}\mathbb{E}\left[ \left\Vert \varepsilon _{1}\right\Vert ^{2}\left\langle \Gamma _{n}^{\dag }X_{1},X_{n+1}\right\rangle ^{2}\right] \\
& =\frac{1}{n}\mathbb{E}\left\{ \mathbb{E}\left[ \left\Vert \varepsilon _{1}\right\Vert ^{2}\left\langle \Gamma _{n}^{\dag }X_{1},X_{n+1}\right\rangle ^{2}|\varepsilon _{1},X_{1},...,X_{n}\right] \right\} \\
& =\frac{1}{n}\mathbb{E}\left[ \left\Vert \varepsilon _{1}\right\Vert ^{2}|X_{1}\right] \mathbb{E}\left\langle \Gamma _{n}^{\dag }\Gamma \Gamma _{n}^{\dag }X_{1},X_{1}\right\rangle .
\end{align*}
We focus on $\mathbb{E}\left\langle \Gamma _{n}^{\dag }\Gamma \Gamma _{n}^{\dag }X_{1},X_{1}\right\rangle $, which is nothing but the expectation of the trace of the operator $\Gamma _{n}^{\dag }\Gamma \Gamma _{n}^{\dag }\cdot \left( X_{1}\otimes X_{1}\right) $, hence
\begin{equation*}
\mathbb{E}\left\langle \Gamma _{n}^{\dag }\Gamma \Gamma _{n}^{\dag }X_{1},X_{1}\right\rangle =\mathbb{E}\left\langle \Gamma _{n}^{\dag }\Gamma \Gamma _{n}^{\dag }X_{i},X_{i}\right\rangle =\mathbb{E}\left[ \mathrm{tr}\,\Gamma _{n}^{\dag }\Gamma \Gamma _{n}^{\dag }\cdot \left( X_{i}\otimes X_{i}\right) \right],
\end{equation*}
and
\begin{align*}
\mathbb{E}\left\langle \Gamma _{n}^{\dag }\Gamma \Gamma _{n}^{\dag }X_{1},X_{1}\right\rangle & =\frac{1}{n}\mathbb{E}\left[ \mathrm{tr}\,\Gamma _{n}^{\dag }\Gamma \Gamma _{n}^{\dag }\cdot \sum_{i=1}^{n}\left( X_{i}\otimes X_{i}\right) \right] \\
& =\mathbb{E}\,\mathrm{tr}\left[ \Gamma _{n}^{\dag }\Gamma \Gamma _{n}^{\dag }\Gamma _{n}\right] =\mathbb{E}\,\mathrm{tr}\left[ \Gamma _{n}^{\dag }\Gamma \widehat{\Pi }_{k}\right] =\mathbb{E}\,\mathrm{tr}\left[ \widehat{\Pi }_{k}\Gamma _{n}^{\dag }\Gamma \right] =\mathrm{tr}\left[ \Gamma \mathbb{E}\Gamma _{n}^{\dag }\right] .
\end{align*}
At last we get
\begin{align*}
\mathbb{E}\left\langle \Gamma _{n}^{\dag }\Gamma \Gamma _{n}^{\dag }X_{1},X_{1}\right\rangle & =\mathrm{tr}\left[ \Gamma \Gamma ^{\dag }\right] +\mathrm{tr}\left[ \Gamma \mathbb{E}\left( \Gamma _{n}^{\dag }-\Gamma ^{\dag }\right) \right] \\
& =k+\mathrm{tr}\left[ \Gamma \mathbb{E}\left( \Gamma _{n}^{\dag }-\Gamma ^{\dag }\right) \right] .
\end{align*}
From Lemma \ref{approx-var} just below, we deduce that $\mathrm{tr}\left[ \Gamma \mathbb{E}\left( \Gamma _{n}^{\dag }-\Gamma ^{\dag }\right) \right] =o\left( k\right) $, which finishes the proof of Proposition \ref{variance}.
\begin{lemma}
\label{approx-var}We have $\mathrm{tr}\left[ \Gamma \mathbb{E}\left( \Gamma _{n}^{\dag }-\Gamma ^{\dag }\right) \right] \leq Ck^{2}\left( \log k\right) /n,$ where $C$ does not depend on $S$, $n$ or $k.$ The preceding bound is an $o\left( k\right) $ since $k\left( \log k\right) /n\rightarrow 0$.
\end{lemma}

\textbf{Proof : }We focus on
\begin{align*}
\Gamma _{n}^{\dag }-\Gamma ^{\dag }& =-\int_{C_{n}}\frac{1}{z}\left( zI-\Gamma _{n}\right) ^{-1}\left( \Gamma _{n}-\Gamma \right) \left( zI-\Gamma \right) ^{-1}dz \\
& =-\int_{C_{n}}\frac{1}{z}\left( zI-\Gamma \right) ^{-1}\left( \Gamma _{n}-\Gamma \right) \left( zI-\Gamma \right) ^{-1}dz \\
& \quad -\int_{C_{n}}\frac{1}{z}\left( zI-\Gamma _{n}\right) ^{-1}\left( \Gamma _{n}-\Gamma \right) \left( zI-\Gamma \right) ^{-1}\left( \Gamma _{n}-\Gamma \right) \left( zI-\Gamma \right) ^{-1}dz.
\end{align*}
But $\mathbb{E}\int_{C_{n}}\frac{1}{z}\left( zI-\Gamma \right) ^{-1}\left( \Gamma _{n}-\Gamma \right) \left( zI-\Gamma \right) ^{-1}dz=\int_{C_{n}}\frac{1}{z}\left( zI-\Gamma \right) ^{-1}\mathbb{E}\left( \Gamma _{n}-\Gamma \right) \left( zI-\Gamma \right) ^{-1}dz=0$, so we consider the second term above,
\begin{align*}
R_{n}& =\int_{C_{n}}\frac{1}{z}\left( zI-\Gamma _{n}\right) ^{-1}\left( \Gamma _{n}-\Gamma \right) \left( zI-\Gamma \right) ^{-1}\left( \Gamma _{n}-\Gamma \right) \left( zI-\Gamma \right) ^{-1}dz \\
& =\int_{C_{n}}\frac{1}{z}\left( zI-\Gamma \right) ^{-1/2}T_{n}\left( z\right) A_{n}\left( z\right) A_{n}\left( z\right) \left( zI-\Gamma \right) ^{-1/2}dz,
\end{align*}
where
\begin{equation*}
T_{n}\left( z\right) =\left( zI-\Gamma \right) ^{1/2}\left( zI-\Gamma _{n}\right) ^{-1}\left( zI-\Gamma \right) ^{1/2},\quad A_{n}\left( z\right) =\left( zI-\Gamma \right) ^{-1/2}\left( \Gamma _{n}-\Gamma \right) \left( zI-\Gamma \right) ^{-1/2},
\end{equation*}
whence
\begin{align*}
\mathrm{tr}\left[ \Gamma R_{n}\right] & =\sum_{j=1}^{+\infty }\int_{C_{n}}\frac{\lambda _{j}}{z-\lambda _{j}}\left\langle T_{n}\left( z\right) A_{n}\left( z\right) A_{n}\left( z\right) \left( e_{j}\right) ,\left( e_{j}\right) \right\rangle dz \\
& =\int_{C_{n}}\sum_{j=1}^{+\infty }\frac{\lambda _{j}}{z-\lambda _{j}}\left\langle T_{n}\left( z\right) A_{n}\left( z\right) A_{n}\left( z\right) \left( e_{j}\right) ,\left( e_{j}\right) \right\rangle dz=\int_{C_{n}}\mathrm{tr}\left[ \left( zI-\Gamma \right) ^{-1}\Gamma T_{n}\left( z\right) A_{n}\left( z\right) A_{n}\left( z\right) \right] dz,
\end{align*}
and $\left\vert \mathrm{tr}\left[ \Gamma R_{n}\right] \right\vert \leq \int_{C_{n}}\left[ \left\Vert \left( zI-\Gamma \right) ^{-1}\Gamma T_{n}\left( z\right) \right\Vert _{\infty }\left\Vert A_{n}\left( z\right) \right\Vert _{\mathcal{L}_{2}}^{2}\right] dz.$ Indeed, if we denote
\begin{equation*}
\mathrm{tr}\left[ \left( zI-\Gamma \right) ^{-1}\Gamma T_{n}\left( z\right) A_{n}\left( z\right) A_{n}\left( z\right) \right] =\mathrm{tr}\left[ A_{n}\left( z\right) \widetilde{T}_{n}\left( z\right) A_{n}\left( z\right) \right]
\end{equation*}
with $\widetilde{T}_{n}\left( z\right) =\Gamma ^{1/2}\left( zI-\Gamma _{n}\right) ^{-1}\Gamma ^{1/2}$ symmetric, we obtain
\begin{equation*}
\mathrm{tr}\left[ A_{n}\left( z\right) \widetilde{T}_{n}\left( z\right) A_{n}\left( z\right) \right] =\left\Vert \widetilde{T}_{n}^{1/2}\left( z\right) A_{n}\left( z\right) \right\Vert _{\mathcal{L}_{2}}^{2}\leq \left\Vert \widetilde{T}_{n}^{1/2}\left( z\right) \right\Vert _{\infty }^{2}\left\Vert A_{n}\left( z\right) \right\Vert _{\mathcal{L}_{2}}^{2}.
\end{equation*}
Now let us fix $m$.
We have $\left\Vert \widetilde{T}_{n}^{1/2}\left( z\right) \right\Vert _{\infty }^{2}\leq \left\Vert \widetilde{T}_{n}\left( z\right) \right\Vert _{\infty }$ and $\sup_{z\in \mathcal{B}_{m}}\left\Vert \widetilde{T}_{n}\left( z\right) \right\Vert _{\infty }\leq Cm\quad a.s.$ The first inequality comes from the fact that $\widetilde{T}_{n}\left( z\right) $ is symmetric, hence $\left\Vert \widetilde{T}_{n}\left( z\right) \right\Vert _{\infty }=\sup_{\left\Vert u\right\Vert \leq 1}\left\vert \left\langle \widetilde{T}_{n}\left( z\right) u,u\right\rangle \right\vert .$ The last one comes from
\begin{equation*}
\widetilde{T}_{n}\left( z\right) =\Gamma ^{1/2}\left( zI-\Gamma \right) ^{-1/2}\left( zI-\Gamma \right) ^{1/2}\left( zI-\Gamma _{n}\right) ^{-1}\left( zI-\Gamma \right) ^{1/2}\left( zI-\Gamma \right) ^{-1/2}\Gamma ^{1/2},
\end{equation*}
and
\begin{equation*}
\left\Vert \widetilde{T}_{n}\left( z\right) \right\Vert _{\infty }\leq \left\Vert \left( zI-\Gamma \right) ^{1/2}\left( zI-\Gamma _{n}\right) ^{-1}\left( zI-\Gamma \right) ^{1/2}\right\Vert _{\infty }\left\Vert \left( zI-\Gamma \right) ^{-1}\Gamma \right\Vert _{\infty }.
\end{equation*}
These facts prove (\ref{interm}).
Now, by Lemma \ref{L1}, we can write $\mathbb{E}\left\Vert A_{n}\left( z\right) \right\Vert _{\mathcal{L}_{2}}^{2}\leq C\left( j\log j\right) ^{2}/n,$ and consequently $\mathbb{E}\left\vert \mathrm{tr}\left[ \Gamma R_{n}\right] \right\vert \leq C\sum_{j=1}^{k}\delta _{j}\frac{j^{3}\left( \log j\right) ^{2}}{n}=C\frac{1}{n}\sum_{j=1}^{k}\left( \lambda _{j}-\lambda _{j+1}\right) j^{3}\left( \log j\right) ^{2}.$ By an Abel transform, we get
\begin{align*}
\frac{1}{n}\sum_{j=1}^{k}\left( \lambda _{j}-\lambda _{j+1}\right) j^{3}\left( \log j\right) ^{2}& \leq \frac{\lambda _{k+1}}{n}k^{3}\left( \log k\right) ^{2}+\frac{1}{n}\sum_{j=1}^{k}\lambda _{j}j^{2}\left( \log j\right) ^{2} \\
& \leq \frac{k^{2}\left( \log k\right) }{n}+\frac{1}{n}\sum_{j=1}^{k}j\left( \log j\right) \leq C\frac{k^{2}\left( \log k\right) }{n},
\end{align*}
which yields $\mathbb{E}\left\vert \mathrm{tr}\left[ \Gamma R_{n}\right] \right\vert \leq C\frac{k^{2}\left( \log k\right) }{n},$ where $C$ is a universal constant. Finally $\left\vert \mathrm{tr}\left[ \Gamma \mathbb{E}\left( \Gamma _{n}^{\dag }-\Gamma ^{\dag }\right) \right] \right\vert /k\leq Ck\left( \log k\right) /n\rightarrow 0$, and we have proved Lemma \ref{approx-var}. Now we are ready to turn to Theorem \ref{TH2}.\bigskip

\textbf{Proof of Theorem \ref{TH2} :} From equation (\ref{decomp-pred}), we obtain
\begin{equation*}
\mathbb{E}\left\Vert S_{n}\left( X_{n+1}\right) -S\left( X_{n+1}\right) \right\Vert ^{2}=\mathbb{E}\left\Vert S\widehat{\Pi }_{k}\left( X_{n+1}\right) -S\left( X_{n+1}\right) \right\Vert ^{2}+\mathbb{E}\left\Vert \frac{1}{n}\sum_{i=1}^{n}\varepsilon _{i}\left\langle \Gamma _{n}^{\dag }X_{i},X_{n+1}\right\rangle \right\Vert ^{2}.
\end{equation*}
From Proposition \ref{variance} followed by Lemma \ref{approx-var}, the second term is $\frac{\sigma _{\varepsilon }^{2}}{n}k+B_{n}.$ It follows from Proposition \ref{ks} and basic calculations that
\begin{equation*}
\mathbb{E}\left\Vert S\widehat{\Pi }_{k}\left( X_{n+1}\right) -S\left( X_{n+1}\right) \right\Vert ^{2}=\mathbb{E}\left\Vert S\left( \Pi _{k}-I\right) \left( X_{n+1}\right) \right\Vert ^{2}+A_{n},
\end{equation*}
where $A_{n}$ matches the bound of the Theorem. At last $\mathbb{E}\left\Vert S\left( \Pi _{k}-I\right) \left( X_{n+1}\right) \right\Vert ^{2}=\sum_{j\geq k+1}\lambda _{j}\left\Vert Se_{j}\right\Vert ^{2}$, which finishes the proof.\bigskip

\textbf{Proof of Theorem \ref{TH2bis} :} Our proof follows the lines of Cardot, Johannes (2010) through a modified version of Assouad's lemma. To simplify notations we set $k_{n}^{\ast }=k_{n}.$ Take $S^{\theta }=\sum_{i=1}^{k_{n}}\eta _{i}\omega _{i}e_{i}\otimes e_{1}$, where $\omega _{i}\in \left\{ -1,1\right\} $, $\theta =\left[ \omega _{1},...,\omega _{k}\right] $, and $\eta _{i}\in \mathbb{R}^{+}$ will be fixed later such that $S^{\theta }\in \mathcal{L}_{2}\left( \varphi ,C\right) $ for all $\theta $. Denote $\theta _{-i}=\left[ \omega _{1},...,-\omega _{i},...,\omega _{k}\right] $ and let $\mathbb{P}_{\theta }:=\mathbb{P}_{\theta }\left[ \left( Y_{1},X_{1}\right) ,...,\left( Y_{n},X_{n}\right) \right] $ denote the distribution of the data when $S=S^{\theta }$. Let $\rho $ stand for Hellinger's affinity, $\rho \left( \mathbb{P}_{0},\mathbb{P}_{1}\right) =\int \sqrt{d\mathbb{P}_{0}d\mathbb{P}_{1}}$, and $\mathbf{KL}\left( \mathbb{P}_{0},\mathbb{P}_{1}\right) $ for the Kullback-Leibler divergence; then $\rho \left( \mathbb{P}_{0},\mathbb{P}_{1}\right) \geq 1-\frac{1}{2}\mathbf{KL}\left( \mathbb{P}_{0},\mathbb{P}_{1}\right) .$ Note that considering models based on $S^{\theta }$ above comes down to projecting the model on a one-dimensional space.
We are then faced with a linear model with real output, and finally confine ourselves to proving that the optimal rate is unchanged (see Hall, Horowitz (2007)).
\begin{eqnarray*}
\mathcal{R}_{n}\left( T_{n}\right) &=&\sup_{S\in \mathcal{L}_{2}\left( \varphi ,C\right) }\mathbb{E}\left\Vert \left( T_{n}-S\right) \Gamma ^{1/2}\right\Vert _{2}^{2}\geq \frac{1}{2^{k}}\sum_{\omega \in \left\{ -1,1\right\} ^{k}}\sum_{i=1}^{k_{n}}\lambda _{i}\mathbb{E}_{\theta }\left\langle \left( T_{n}-S^{\theta }\right) e_{i},e_{1}\right\rangle ^{2} \\
&=&\frac{1}{2^{k}}\sum_{\omega \in \left\{ -1,1\right\} ^{k}}\frac{1}{2}\sum_{i=1}^{k_{n}}\lambda _{i}\left[ \mathbb{E}_{\theta }\left\langle \left( T_{n}-S^{\theta }\right) e_{i},e_{1}\right\rangle ^{2}+\mathbb{E}_{\theta _{-i}}\left\langle \left( T_{n}-S^{\theta _{-i}}\right) e_{i},e_{1}\right\rangle ^{2}\right] \\
&\geq &\frac{1}{2^{k}}\sum_{\omega \in \left\{ -1,1\right\} ^{k}}\sum_{i=1}^{k_{n}}\lambda _{i}\eta _{i}^{2}\rho ^{2}\left( \mathbb{P}_{\theta },\mathbb{P}_{\theta _{-i}}\right) .
\end{eqnarray*}
The last line was obtained by a slight variant of the bound (A.9) in Cardot, Johannes (2010), p.405, detailed below:
\begin{eqnarray*}
\rho \left( \mathbb{P}_{\theta },\mathbb{P}_{\theta _{-i}}\right) &\leq &\int \frac{\left\vert \left\langle \left( T_{n}-S^{\theta }\right) e_{i},e_{1}\right\rangle \right\vert }{\left\vert \left\langle \left( S^{\theta _{-i}}-S^{\theta }\right) e_{i},e_{1}\right\rangle \right\vert }\sqrt{d\mathbb{P}_{0}d\mathbb{P}_{1}}+\int \frac{\left\vert \left\langle \left( T_{n}-S^{\theta _{-i}}\right) e_{i},e_{1}\right\rangle \right\vert }{\left\vert \left\langle \left( S^{\theta _{-i}}-S^{\theta }\right) e_{i},e_{1}\right\rangle \right\vert }\sqrt{d\mathbb{P}_{0}d\mathbb{P}_{1}} \\
&\leq &\frac{1}{2\eta _{i}}\left[ \left( \int \left\langle \left( T_{n}-S^{\theta }\right) e_{i},e_{1}\right\rangle ^{2}d\mathbb{P}_{\theta }\right) ^{1/2}+\left( \int \left\langle \left( T_{n}-S^{\theta _{-i}}\right) e_{i},e_{1}\right\rangle ^{2}d\mathbb{P}_{\theta _{-i}}\right) ^{1/2}\right]
\end{eqnarray*}
by the Cauchy-Schwarz inequality and since $\left\vert \left\langle \left( S^{\theta _{-i}}-S^{\theta }\right) e_{i},e_{1}\right\rangle \right\vert =2\eta _{i}$. Then
\begin{equation*}
2\eta _{i}^{2}\rho ^{2}\left( \mathbb{P}_{\theta },\mathbb{P}_{\theta _{-i}}\right) \leq \mathbb{E}_{\theta }\left\langle \left( T_{n}-S^{\theta }\right) e_{i},e_{1}\right\rangle ^{2}+\mathbb{E}_{\theta _{-i}}\left\langle \left( T_{n}-S^{\theta _{-i}}\right) e_{i},e_{1}\right\rangle ^{2}
\end{equation*}
yields
\begin{equation*}
\mathcal{R}_{n}\left( T_{n}\right) \geq \inf_{\omega \in \left\{ -1,1\right\} ^{k}}\inf_{i}\rho \left( \mathbb{P}_{\theta },\mathbb{P}_{\theta _{-i}}\right) \sum_{i}\lambda _{i}\eta _{i}^{2}.
\end{equation*}
We show below that $\mathbf{KL}\left( \mathbb{P}_{\theta },\mathbb{P}_{\theta _{-i}}\right) \leq 4n\lambda _{i}\eta _{i}^{2}/\sigma _{1}^{2}$. Choosing $\eta _{i}=\sigma _{1}/2\sqrt{n\lambda _{i}}$ for $1\leq i\leq k_{n}$ gives $S^{\theta }\in \mathcal{L}_{2}\left( \varphi ,1\right) $, $\sup_{\omega ,i}\mathbf{KL}\left( \mathbb{P}_{\theta },\mathbb{P}_{\theta _{-i}}\right) \leq 1$, $\inf_{\omega ,i}\rho \left( \mathbb{P}_{\theta },\mathbb{P}_{\theta _{-i}}\right) \geq 1/2$, and
\begin{equation*}
\mathcal{R}_{n}\left( T_{n}\right) \geq \frac{1}{2}\sum_{i=1}^{k_{n}}\lambda _{i}\eta _{i}^{2}=\frac{1}{2}\frac{k_{n}}{n},
\end{equation*}
whatever the choice of the estimate $T_{n}$. This proves the lower bound
\begin{equation*}
\lim \sup_{n\rightarrow +\infty }\varphi _{n}^{-1}\inf_{T_{n}}\sup_{S\in \mathcal{L}_{2}\left( \varphi ,C\right) }\mathbb{E}\left\Vert \left( T_{n}-S\right) \Gamma ^{1/2}\right\Vert ^{2}>\frac{1}{2},
\end{equation*}
and the Theorem stems from this last display.
We finish by proving that $\mathbf{KL}\left( \mathbb{P}_{\theta },\mathbb{P}_{\theta _{-i}}\right) \leq 4n\lambda _{i}\eta _{i}^{2}/\sigma _{1}^{2}.$ It suffices to notice that
\begin{equation*}
\mathbf{KL}\left( \mathbb{P}_{\theta },\mathbb{P}_{\theta _{-i}}\right) =\int \log \left( d\mathbb{P}_{\theta |X}/d\mathbb{P}_{\theta _{-i}|X}\right) d\mathbb{P}_{\theta },
\end{equation*}
where $\mathbb{P}_{\theta |X}$ stands for the likelihood of $Y$ conditionally on $X$. In this Hilbert setting we must clarify the existence of this likelihood ratio. It suffices to prove that $\mathbb{P}_{\theta |X}\left( Y\right) \ll \mathbb{P}_{0|X}\left( Y\right) $, which in turn is true when $S^{\theta }X$ belongs to the RKHS associated to $\varepsilon $ (see Lifshits (1995)). In other words, we need that almost surely $\Gamma _{\varepsilon }^{-1/2}S^{\theta }X$ is finite, where $\Gamma _{\varepsilon }$ is the covariance operator of the noise. But $\Gamma _{\varepsilon }^{-1/2}S^{\theta }=S^{\theta }/\sigma _{1}$.
Set $\omega _{l}^{\prime }=\omega _{l}$ if $l\neq i$ and $\omega _{i}^{\prime }=-\omega _{i}$. Then
\begin{eqnarray*}
\log \frac{d\mathbb{P}_{\theta |X}\left( Y\right) }{d\mathbb{P}_{\theta _{-i}|X}\left( Y\right) } &=&-\left( \left\langle Y,e_{1}\right\rangle -\sum_{l=1}^{k_{n}}\omega _{l}\eta _{l}\left\langle X,e_{l}\right\rangle \right) ^{2}+\left( \left\langle Y,e_{1}\right\rangle -\sum_{l=1}^{k_{n}}\omega _{l}^{\prime }\eta _{l}\left\langle X,e_{l}\right\rangle \right) ^{2} \\
&=&-2\omega _{i}\eta _{i}\frac{\left\langle X,e_{i}\right\rangle }{\sigma _{1}^{2}}\left( 2\left\langle \varepsilon ,e_{1}\right\rangle +\sum_{l=1}^{k_{n}}\omega _{l}\eta _{l}\left\langle X,e_{l}\right\rangle -\sum_{l=1}^{k_{n}}\omega _{l}^{\prime }\eta _{l}\left\langle X,e_{l}\right\rangle \right) \\
&=&-2\omega _{i}\eta _{i}\frac{\left\langle X,e_{i}\right\rangle }{\sigma _{1}^{2}}\left( 2\left\langle \varepsilon ,e_{1}\right\rangle +2\omega _{i}\eta _{i}\left\langle X,e_{i}\right\rangle \right)
\end{eqnarray*}
and $\mathbb{E}_{\theta }\left[ \log d\mathbb{P}_{\theta |X}\left( Y\right) /d\mathbb{P}_{\theta _{-i}|X}\left( Y\right) \right] =4\eta _{i}^{2}\mathbb{E}_{\theta }\left\langle X,e_{i}\right\rangle ^{2}/\sigma _{1}^{2}=4\eta _{i}^{2}\lambda _{i}/\sigma _{1}^{2}$.\bigskip

Now we focus on the problem of weak convergence.\bigskip

\textbf{Proof of Theorem \ref{TH1} :} Consider (\ref{decomp}). We claim that the weak convergence of $S_{n}$ will depend on the series $\left( 1/n\right) \sum_{i=1}^{n}\varepsilon _{i}\otimes \Gamma _{n}^{\dag }X_{i}$. This fact can be checked by inspecting the proof of Theorem \ref{TH2}. We are going to prove that $\left( 1/n\right) \sum_{i=1}^{n}\varepsilon _{i}\otimes \Gamma ^{\dag }X_{i}$ cannot converge for the classical (supremum) operator norm. We replace the random $\Gamma _{n}^{\dag }$ by the non-random $\Gamma ^{\dag }$. It is plain that non-convergence of the second series implies non-convergence of the first.
Suppose that for some sequence $\alpha _{n}\uparrow +\infty $ the centered series $\left( \alpha _{n}/n\right) \sum_{i=1}^{n}\varepsilon _{i}\otimes \Gamma ^{\dag }X_{i}\overset{w}{\rightarrow }Z$ in operator norm, where $Z$ is a fixed random operator (not necessarily Gaussian). Then for all fixed $x$ and $y$ in $H$, $\frac{\alpha _{n}}{n}\sum_{i=1}^{n}\left\langle \varepsilon _{i},y\right\rangle \left\langle \Gamma ^{\dag }X_{i},x\right\rangle \overset{w}{\rightarrow }\left\langle Zx,y\right\rangle $ as real random variables. First take $x$ in the domain of $\Gamma ^{-1}$. From $\left\Vert \Gamma ^{-1}x\right\Vert <+\infty $, we see that $\mathbb{E}\left\langle \varepsilon _{i},y\right\rangle ^{2}\left\langle \Gamma ^{\dag }X_{i},x\right\rangle ^{2}<+\infty $ implies that $\alpha _{n}=\sqrt{n}$ (and $Z$ is Gaussian since we apply the central limit theorem for independent random variables). Now take an $x$ such that $\left\Vert \Gamma ^{-1}x\right\Vert =+\infty $; then $\mathbb{E}\left\langle \varepsilon _{1},y\right\rangle ^{2}\left\langle \Gamma ^{\dag }X_{1},x\right\rangle ^{2}=\mathbb{E}\left\langle \varepsilon _{1},y\right\rangle ^{2}\mathbb{E}\left\langle \Gamma ^{\dag }x,x\right\rangle $, and it is easily seen from the definition of $\Gamma ^{\dag }$ that $\mathbb{E}\left\langle \Gamma ^{\dag }x,x\right\rangle $ -- which is positive and implicitly depends on $n$ through $k$ -- tends to infinity. Consequently $\left( 1/\sqrt{n}\right) \sum_{i=1}^{n}\varepsilon _{i}\otimes \Gamma ^{\dag }X_{i}$ cannot converge weakly, since the margins related to such $x$'s do not converge in distribution. This proves the Theorem.\bigskip

The two next Lemmas prepare the proof of Theorem \ref{TH3}. We set $T_{n}=\frac{1}{n}\sum_{i=1}^{n}\varepsilon _{i}\left\langle \Gamma _{n}^{\dag }X_{i},X_{n+1}\right\rangle $; this series is the crucial term that determines weak convergence.
We go quickly through the first Lemma since it is close to Lemma 8, p.355 in Cardot, Mas, Sarda (2007).

\begin{lemma}
\label{conv.loi.proj.pred}Fix $x$ in $H$; then $\sqrt{n/k_{n}}\left\langle T_{n},x\right\rangle \overset{w}{\rightarrow }\mathcal{N}\left( 0,\sigma _{\varepsilon ,x}^{2}\right) $, where $\sigma _{\varepsilon ,x}^{2}=\mathbb{E}\left\langle \varepsilon _{k},x\right\rangle ^{2}$.
\end{lemma}

\textbf{Proof :} Let $\mathcal{F}_{n}$ be the $\sigma $-algebra generated by $\left( \varepsilon _{1},...,\varepsilon _{n},X_{1},...,X_{n}\right) $. We see that $Z_{i,n}^{x}=\left\langle \varepsilon _{i},x\right\rangle \left\langle \Gamma _{n}^{\dag }X_{i},X_{n+1}\right\rangle $ is a real-valued martingale difference; besides,
\begin{equation*}
\mathbb{E}\left[ \left( Z_{i,n}^{x}\right) ^{2}|\mathcal{F}_{n}\right] =\sigma _{\varepsilon ,x}^{2}\left\langle \Gamma _{n}^{\dag }X_{i},X_{n+1}\right\rangle ^{2}.
\end{equation*}
Applying Lemma \ref{approx-var} and results by McLeish (1974) on weak convergence for martingale difference arrays yields the Lemma.

\begin{lemma}
\label{flat-conc}The random sequence $\sqrt{\frac{n}{k_{n}}}T_{n}$ is flatly concentrated and uniformly tight. In fact, if $\mathcal{P}_{m}$ is the projection operator on the $m$ first eigenvectors of $\Gamma _{\varepsilon }$ and $\eta >0$ is a real number,
\begin{equation*}
\limsup_{m\rightarrow +\infty }\sup_{n}\mathbb{P}\left( \left\Vert \sqrt{\frac{n}{k_{n}}}\left( I-\mathcal{P}_{m}\right) T_{n}\right\Vert >\eta \right) =0.
\end{equation*}
\end{lemma}

\textbf{Proof :} Let $\mathcal{P}_{m}$ be the projection operator on the $m$ first eigenvectors of $\Gamma _{\varepsilon }$. For $\sqrt{n/k_{n}}\,T_{n}$ to be flatly concentrated it is sufficient to prove that for any $\eta >0$,
\begin{equation*}
\limsup_{m\rightarrow +\infty }\sup_{n}\mathbb{P}\left( \left\Vert \sqrt{\frac{n}{k_{n}}}\left( I-\mathcal{P}_{m}\right) T_{n}\right\Vert >\eta \right) =0.
\end{equation*}
We have
\begin{align*}
& \mathbb{P}\left( \left\Vert \sqrt{\frac{n}{k_{n}}}\left( I-\mathcal{P}_{m}\right) T_{n}\right\Vert >\eta \right) \\
& \leq \frac{1}{\eta ^{2}}\mathbb{E}\left\Vert \sqrt{\frac{n}{k_{n}}}\left( I-\mathcal{P}_{m}\right) T_{n}\right\Vert ^{2}=\frac{1}{\eta ^{2}k_{n}}\mathbb{E}\left\langle \Gamma _{n}^{\dag }X_{1},X_{n+1}\right\rangle ^{2}\,\mathbb{E}\left\Vert \left( I-\mathcal{P}_{m}\right) \varepsilon _{1}\right\Vert ^{2}.
\end{align*}
We see first that $\sup_{n}\mathbb{P}\left( \left\Vert \sqrt{\frac{n}{k_{n}}}\left( I-\mathcal{P}_{m}\right) T_{n}\right\Vert >\eta \right) \leq \frac{C}{\eta ^{2}}\mathbb{E}\left\Vert \left( I-\mathcal{P}_{m}\right) \varepsilon _{1}\right\Vert ^{2}$, where $C$ is some constant, once again by Lemma \ref{approx-var}. Now it is plain that
\begin{equation*}
\limsup_{m\rightarrow +\infty }\mathbb{E}\left\Vert \left( I-\mathcal{P}_{m}\right) \varepsilon _{1}\right\Vert ^{2}=0,
\end{equation*}
because $\mathcal{P}_{m}$ was precisely chosen to be the projector on the $m$ first eigenvectors of the trace-class operator $\Gamma _{\varepsilon }$. In fact $\mathbb{E}\left\Vert \left( I-\mathcal{P}_{m}\right) \varepsilon _{1}\right\Vert ^{2}=\mathrm{tr}\left[ \left( I-\mathcal{P}_{m}\right) \Gamma _{\varepsilon }\left( I-\mathcal{P}_{m}\right) \right] $, and this trace is nothing but the series summing the eigenvalues of $\Gamma _{\varepsilon }$ from order $m+1$ to infinity, hence the result.\bigskip

\textbf{Proof of Theorem \ref{TH3} : }We only prove the second part of the theorem: weak convergence with no bias. The first part follows immediately. We start again from the decomposition (\ref{decomp-pred}). As announced just above, the first two terms vanish with respect to convergence in distribution.
For $S\left[ \widehat{\Pi }_{k}-\Pi _{k}\right] \left( X_{n+1}\right) $, we invoke Proposition \ref{ks} to claim that, whenever $k^{2}\log ^{2}k/n\rightarrow 0$, then $\left( n/k\right) \mathbb{E}\left\Vert S\left[ \widehat{\Pi }_{k}-\Pi _{k}\right] \left( X_{n+1}\right) \right\Vert ^{2}\rightarrow 0$, and we just have to deal with the first term, related to bias: $S\left( \Pi _{k}-I\right) \left( X_{n+1}\right) .$ Assume first that the mean square of this remainder, $\left( n/k\right) \sum_{j=k+1}^{+\infty }\lambda _{j}\left\Vert S\left( e_{j}\right) \right\Vert ^{2}$, decays to zero. Then the proof of the Theorem is immediate from Lemmas \ref{conv.loi.proj.pred} and \ref{flat-conc}. The sequence $\sqrt{n/k_{n}}T_{n}$ is uniformly tight and its finite dimensional distributions (in the sense of "all finite-dimensional projections of $\sqrt{n/k_{n}}T_{n}$") converge weakly to $\mathcal{N}\left( 0,\sigma _{\varepsilon ,x}^{2}\right) $. This is enough to claim that Theorem \ref{TH3} holds. We refer for instance to de Acosta (1970) or Araujo and Gin\'{e} (1980) for checking the validity of this conclusion. Finally, the only fact to be proved is $\lim_{n\rightarrow +\infty }\left( n/k\right) \sum_{j=k+1}^{+\infty }\lambda _{j}\left\Vert S\left( e_{j}\right) \right\Vert ^{2}=0$ when tightening the conditions on the sequence $k_{n}.$ This looks like an Abelian theorem, which could be proved by special techniques, but we prove it in a simple direct way. First, we know by previous remarks (since $\lambda _{j}$ and $\left\Vert S\left( e_{j}\right) \right\Vert ^{2}$ are convergent series) that $\lambda _{j}\left\Vert S\left( e_{j}\right) \right\Vert ^{2}=\tau _{j}/\left( j^{2}\log ^{2}j\right) $, where $\tau _{j}$ tends to zero.
Taking as in the first part of the theorem $n=k^{2}\log ^{2}k/\sqrt{\gamma _{k}}$, we can focus on $\lim_{k\rightarrow +\infty }\frac{k\log ^{2}k}{\sqrt{\gamma _{k}}}\sum_{j=k+1}^{+\infty }\tau _{j}/\left( j^{2}\log ^{2}j\right) .$ We know that for sufficiently large $k$ and for all $j\geq k$, $0\leq \tau _{j}\leq \epsilon $, where $\epsilon >0$ is fixed. Then
\begin{align*}
\frac{1}{\sqrt{\gamma _{k}}}\sum_{j=k+1}^{+\infty }\tau _{j}\frac{k\log ^{2}k}{j^{2}\log ^{2}j}& =\frac{1}{\sqrt{\gamma _{k}}}\sum_{m=1}^{+\infty }\sum_{j=km+1}^{km+k}\tau _{j}\frac{k\log ^{2}k}{j^{2}\log ^{2}j} \\
& \leq \frac{1}{\sqrt{\gamma _{k}}}\sum_{m=1}^{+\infty }\left( \sup_{km+1\leq j\leq km+k}\tau _{j}\right) \frac{k^{2}\log ^{2}k}{k^{2}m^{2}\log ^{2}km} \\
& \leq \frac{1}{\sqrt{\gamma _{k}}}\left( \sup_{k\leq j}\tau _{j}\right) \sum_{m=1}^{+\infty }\frac{1}{m^{2}}=C\sqrt{\gamma _{k}}\rightarrow 0,
\end{align*}
which removes the bias term and is the desired result.
\section{Introduction \label{sec:Introduction}} A variety of systems display dramatically different thermal diffusivities. For example, the thermal conductivity is estimated at 3000 W/mK for an isolated multiwall carbon nanotube (CNT) and between 1750 and 6600 W/mK for a single wall carbon nanotube at room temperature.\cite{ESChoi, JChe, JHone,SBerber} Typical polymer matrices, in contrast, have thermal conductivities that are three orders of magnitude smaller. Composites using carbon nanotubes have therefore been suggested as inexpensive materials with enhanced average thermal conductivity.\cite{RBBird} However, making polymer composites with improved thermal conduction has been hindered by the Kapitza thermal resistance and various processing issues. It may be possible to alter this resistance by functionalizing the ends or surface of the CNTs, although this may decrease the thermal conductivity of the material. The problem of optimizing the thermal conductivity of CNT composites presents an intriguing combination of high conductivity contrast, strong disorder, and incorporated materials with an extremely high aspect ratio. Thus an efficient and reliable method to calculate the effective thermal conductivity and varying interface resistance in this two-phase medium is desirable. This problem has been studied in the context of electrical conductivity.\cite{elect1,elect2} However, the inclusions in thermal transport can be quite asymmetric and entangled; in these cases the isotropic averaging models may not apply. One approach to modeling the thermal transport in composites is to use a random walk algorithm in which the transport is assumed to be diffusive. For instance, Tomadakis and Sotirchos \cite{Toma1,Toma2} have used this approach to find the effective transport properties of random arrays of cylinders in a conductive matrix.
Recently, Duong et al.\ \cite{cnt1,cnt2,cnt3} developed a random walk algorithm to model thermal transport in carbon nanotube-polymer composites, and the simulation results showed reasonable agreement with the experimental data for epoxy-SWNT composites.\cite{cnt2} In this approach thermal transport is described by random jumps of thermal markers, each carrying a certain amount of energy ($\Delta E$). The step size ($\Delta x=x_2-x_1$) of these thermal markers follows a Gaussian distribution (see Eq.~\ref{eq:oneDgaussian}). The standard deviation ($\sigma $) of the Gaussian step distribution in each of the space dimensions is $\sigma_{M/I}=\sqrt{2 D_{M/I} \Delta t}$, where $\Delta t$ is the time increment, $M/I$ refers to the matrix or the inclusions, and $D_{M/I}$ is the corresponding thermal diffusivity. However, a problem arises when there is a high contrast between the thermal diffusivities of the matrix and the inclusions. The step size in the highly conducting inclusions, $\Delta x_{I}$, becomes very large compared to that in the poorly conducting matrix, $\Delta x_{M}$. Eventually this leads the markers to jump out of the inclusions as soon as they enter. This can be avoided by taking very small steps inside the matrix so that the steps inside the inclusions remain within the dimensions of the inclusions, but this is computationally expensive. When the thermal diffusivity of the inclusion is very high relative to the matrix, it is reasonable to assume that the thermal diffusivity of the inclusions is infinite. This obviates the need to model random walks inside the inclusions. In this approach markers entering an infinite conductivity inclusion (ICI) are distributed uniformly inside the inclusion on the next time step. Some fraction will leave on the next time step, and they always leave from the surface of the inclusion. (Otherwise the simulation wastes time on walkers that hop within the ICI.)
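The diffusive step convention just described can be checked with a short sketch (ours, not code from the cited implementations; the parameter values are arbitrary): a walker taking Gaussian steps of standard deviation $\sigma=\sqrt{2D\,\Delta t}$ per time step should have a one dimensional mean squared displacement approaching $2Dt$.

```python
import random

# Illustrative sketch only: verify that Gaussian steps with
# sigma = sqrt(2*D*dt) reproduce diffusive spreading, MSD ~ 2*D*t.
def simulate_msd(D, dt, n_steps, n_walkers, seed=0):
    rng = random.Random(seed)
    sigma = (2.0 * D * dt) ** 0.5
    total = 0.0
    for _ in range(n_walkers):
        x = 0.0
        for _ in range(n_steps):
            x += rng.gauss(0.0, sigma)
        total += x * x
    return total / n_walkers  # mean squared displacement

D, dt, n_steps = 1.0e-3, 0.1, 100      # arbitrary illustrative values
msd = simulate_msd(D, dt, n_steps, n_walkers=2000)
expected = 2.0 * D * (n_steps * dt)    # diffusive prediction 2*D*t
```

With these values the simulated mean squared displacement agrees with $2Dt$ to within Monte Carlo noise, confirming the step-size convention.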
However, we must be careful in choosing how the walkers leave the ICI, since incorrect approaches can lead to the unphysical result of a system at uniform temperature spontaneously developing a temperature gradient at the interface between the inclusion and the medium.\cite{cnt2} While the effect is apparently small, it must be remembered that diffusion occurs at these same interfaces. In this paper we provide a rigorous approach for implementing a random walk algorithm, with emphasis on the treatment of the interface between the inclusions and the matrix material for high conductivity contrast composites, and we quantify the errors made when Gaussian and modified step distributions are employed. This paper is divided into four parts. In the first, we briefly describe the algorithm for ``infinite conductivity'' inclusions. Next, we show the rigorous way to handle inclusions in one dimensional systems. We verify our results numerically in ordered and disordered systems, and compare them to results obtained by assuming that the walkers leave the surface with a Gaussian step distribution. In the next section we develop this approach for spheres in three dimensions and again verify it numerically, showing quantitatively the errors that develop if a Gaussian step distribution from the surface is used. Interestingly, the errors in thermal conductivity are larger in 3D than in 1D, and larger for random arrays than for regular ones. In the final section we conclude with a summary and a discussion of future work. \section{The Model} The goal is to calculate the thermal conductivity of a composite composed of a matrix containing a distribution of ``infinite conductors'' (ICs). This conductivity is calculated by fixing the heat flux through the computational volume and measuring the resulting average temperature gradient. The diffusion of heat is modelled by the motion of random walkers within the domain.
The computational cell is divided into bins, and the temperature distribution is calculated from the number of walkers in each bin. To maintain a constant heat flux in the $x$ direction through the computational cell, random walkers carrying $+\Delta E$ energy are periodically added at the surface $x=x_{\mathrm{min}}$, and then allowed to move with random jumps that follow a Gaussian distribution into the computational cell. In order to fix an ``outward'' energy flux on the opposite surface, random walkers carrying $-\Delta E$ energy are added at the surface $x=x_{\mathrm{max}}$ at the same rate as the positive markers. The $+\Delta E$ and $-\Delta E$ thermal markers are often called ``hot'' and ``cold'' walkers. The exact size of $\Delta E$ is arbitrary: the heat flux might be modelled by many small walkers or one large one. However, using too many walkers is computationally inefficient, while too few produces noisy results that require more runs to get better averages. In the $y$ and $z$ directions the computational domain is assumed to be periodic. The steady-state solution yields a linear temperature profile, and the thermal conductivity can be extracted from Fourier's law. To incorporate the effect of the Kapitza thermal resistance, walkers in the matrix that would normally attempt to jump into the IC can only do so with a probability $f_{m,IC}$.\cite{Kap,Shen,JLB,CJTwu} Thus they stay in the matrix phase with a probability $1-f_{m,IC}$. The value of $f_{m,IC}$ is determined by the Kapitza resistance. This can be estimated using the acoustic mismatch model when the physical properties of the materials are known.\cite{SchwartzPohl} Similarly, random walkers located within the IC have a probability to hop out on each time step. Exactly what fraction of the walkers should leave in each time step, and the exact nature of the probability distribution for the steps they should take from the surface, are determined in the next two sections.
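Per time step, the interface rules above amount to simple Bernoulli trials. A minimal sketch (the function names and the exit probability are placeholders of ours; the correct exit fraction and outgoing step distribution are precisely what is derived in the following sections):

```python
import random

def try_enter_ic(rng, f_m_ic):
    """A matrix walker whose step would cross into the IC enters with
    probability f_m_ic (set by the Kapitza resistance); otherwise it
    stays in the matrix phase for this time step."""
    return rng.random() < f_m_ic

def try_exit_ic(rng, p_exit, sample_surface_point):
    """A walker inside the IC leaves with probability p_exit, exiting
    from a uniformly sampled point on the IC surface; returns the exit
    location, or None if the walker stays inside."""
    if rng.random() < p_exit:
        return sample_surface_point(rng)
    return None

rng = random.Random(42)
n = 100000
entry_rate = sum(try_enter_ic(rng, 0.3) for _ in range(n)) / n
exit_rate = sum(try_exit_ic(rng, 0.5, lambda r: 0.0) is not None
                for _ in range(n)) / n
# entry_rate ~ 0.3 and exit_rate ~ 0.5 up to Monte Carlo noise
```

The empirical crossing frequencies simply recover the prescribed probabilities; the physics enters through how $f_{m,IC}$ and the exit rule are chosen.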
However, those that do leave exit at random positions on the IC. This is done to model the ``infinite'' conductivity of the IC, so that the walker distribution within the IC is uniform. Collisions between walkers are ignored. The random walk reflects the scattering of phonons in the disordered matrix material. Walker-walker scattering would reflect nonlinear thermal conductivities, which are typically small. Similarly, we assume that the properties of the materials (e.g. density, specific heat, thermal relaxation length) do not change with temperature over the range modelled. Finally, we assume that the product of the mass density and specific heat capacity of the IC equals that of the matrix, so that in thermal equilibrium the walker density would be uniform inside and outside of the ICs. This is done for simplicity, so that the local temperature is simply proportional to the difference of the average density of hot and cold walkers. Without this assumption we would have to alter the probability of walkers entering and leaving the ICs so that in thermal equilibrium, the ratio of the average walker density inside the IC to that of the matrix equals the ratio of their volumetric heat capacities. Only then would the equilibrium walker distribution represent a uniform temperature. \section{Random walks with infinite conductivity inclusions in 1D. \label{sec:1D}} Below we describe how to efficiently handle the random walks in a fashion that satisfies the second law of thermodynamics. The difficulty lies in properly handling the random walkers that jump out from the high conductivity material. To make the explanation clear, we first look at the one dimensional case. We subsequently address the three dimensional case for spherical inclusions in section \ref{sec:3D}.
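As a concrete illustration of the update rule described in the model section, the sketch below (our own minimal Python rendering; all names and parameter values are illustrative, not from the text) advances matrix and IC walkers by one time step: matrix walkers take Gaussian steps, cross into the IC with probability $f_{m,IC}$, and a fraction $\lambda$ of the IC walkers exits per step at a randomly chosen surface point. For simplicity the exiting walkers are placed exactly on the surface; the correct distribution for the step they should then take is the subject of the following sections.

```python
import random

# Schematic one-time-step update for a 1D periodic domain [0, length)
# containing a single IC occupying (ic_lo, ic_hi).  Illustrative sketch only.
def step_walkers(matrix_pos, ic_count, *, sigma=0.1, f_m_ic=0.5, lam=0.2,
                 ic_lo=0.4, ic_hi=0.6, length=1.0, rng=random):
    new_matrix, entered = [], 0
    for x in matrix_pos:
        trial = (x + rng.gauss(0.0, sigma)) % length      # Gaussian step
        if ic_lo < trial < ic_hi:                         # would land in the IC
            if rng.random() < f_m_ic:                     # Kapitza crossing
                entered += 1                              # absorbed by the IC
            else:
                new_matrix.append(x)                      # stays in the matrix
        else:
            new_matrix.append(trial)
    # A fraction lam of the IC walkers leaves, exiting at a random surface
    # point (one of the two interval endpoints in 1D).
    left = 0
    for _ in range(ic_count + entered):
        if rng.random() < lam:
            left += 1
            new_matrix.append(ic_lo if rng.random() < 0.5 else ic_hi)
    return new_matrix, ic_count + entered - left
```

Note that the walker count is conserved by construction; the energy bookkeeping for hot and cold walkers would simply attach a sign to each position.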
\subsection{Analytic results for one dimensional walks \label{subsec:oneD}} We consider a set of random walkers moving in a one dimensional ring, half made from an ``infinitely conducting material'', as shown in fig.\ref{fig:oneDmodel}. We can view this as a one dimensional line with boundaries at $x=\pm 1$. We know that in equilibrium the density of random walkers throughout the whole ring should be uniform. Consider a surface located at $x=s$, as shown in fig.\ref{fig:oneDmodel}.b. The flux of random walkers from the left through the surface must equal that from the right. This is not a problem for a surface located near the center of the interval. However, if $1-s<\sigma$, so that the surface lies within roughly one step length of the boundary, the flux of random walkers from the right \textit{in the matrix medium} cannot balance those from the left; there are too few of them. The solution is that the difference must be made up from random walkers leaving the \textquotedblleft infinite\textquotedblright\ conductivity material. If they were distributed uniformly throughout the infinite conducting material, their flux would maintain the equilibrium. \begin{figure}[bt] \centering \includegraphics[scale=0.5]{oneDmodel.eps} \caption{{\protect\small An illustration of the one dimensional model. In (a) the darker material on the left is of ``infinite'' conductivity. The one dimensional system therefore has two boundaries as shown in (b), where we assume $\protect\sigma<<1$. The difficulty is that in equilibrium the net flux through any surface (e.g. the dashed line) must be zero. Walkers hopping through the surface from the shaded region in Fig.(b) must be balanced by walkers leaving the right hand infinite conductor. This implies that if random walkers always leave the \textit{surface} of the infinite conductor (IC), they must have a different jump distribution than random walkers inside the ``normal'' region.
}} \label{fig:oneDmodel} \end{figure} We do not wish to model the inside of the IC inclusions because random walkers within them move on a much faster time scale than those outside. We assume that a random walker instantly leaves from any point on the surface (in this case from $x=\pm 1$). However, since they always leave exactly from the surface, their step distribution must be different from that of random walkers within the matrix medium. In each step of the simulation we move the walkers inside the interval $-1<x<1$, as well as those outside. We wish to do this in a fashion that is in agreement with the second law of thermodynamics. Let the probability that a walker in the matrix medium jumps from $x_{1}$ to $x_{2}$ be given by: \begin{equation} P(x_{1},x_{2})={\frac{1}{\sqrt{2\pi }\sigma }}e^{\frac{-(x_{2}-x_{1})^{2}}{ 2\sigma ^{2}}} \label{eq:oneDgaussian} \end{equation} In each time interval we can see that only a fraction of the walkers inside the IC will leave, or else their density would not equal that in the normal medium. In this simple model, we require that the number of walkers in the matrix medium ($N_{i}$) equals the number in the IC ($N_{o}$). We also require that the net flux through a surface located at $x=s$ is zero. \vspace{25pt} The flux to the left from particles lying in the matrix region, $s<x<1$, is balanced by the flux to the right for those lying between $2s-1<x<s$. However, those in the shaded region of fig.\ref{fig:oneDmodel} are not so compensated. They must be balanced by a net flux of walkers leaving the right-hand boundary. Denote the flux from the shaded region to the right by $N_{r}$; it is: \begin{eqnarray} N_{r} &=&\rho _{0}\int_{-1}^{2s-1}dx_{1}\int_{s}^{\infty }dx_{2}P(x_{1},x_{2}) \\ &=&{\frac{\rho _{0}}{2}}\int_{0}^{\infty }\mathrm{\,Erfc\,{\left( \frac{z-s+1 }{\sqrt{2}\sigma }\right) }}\,dz \end{eqnarray} where $\mathrm{\,Erfc\,{(x)}}$ is the complementary error function, $\mathrm{\,Erfc\,{(x)}}=1-\mathrm{\,Erf\,{(x)}}$.
We are summing over any walker starting in the shaded region and ending up anywhere to the right of the barrier. We have let the upper limit of the endpoint of the jump go to infinity since $\sigma <<1$; we extend the lower limit of the first integral to $-\infty $, and we have shifted variables to $z=x_{2}-s$. We neglect any walkers leaping from the IC on the left boundary $x=-1$ all the way through $s$. The flux $N_{r}$ must be balanced by the flux from walkers leaving the IC on the right. Let the probability that a random walker in the IC leaves it be given by $\lambda $, and the probability that it jumps to a point $x$, leaving from the right hand boundary, be $f(x)$. Then the flux to the left through the surface at $x=s$ due to these walkers is \begin{equation} N_{\ell }=N_{o}\lambda \int_{-\infty }^{s}f(x)\,dx \end{equation} We set $N_{\ell }=N_{r}$, and take the derivative of both sides with respect to $s$. This gives us an expression for $f(s)$: \begin{equation} f(s)={\frac{\rho _{0}}{2N_{o}\lambda }}\left[ \mathrm{\,Erf\,{\left( \frac{ s-1}{\sqrt{2}\sigma }\right) }}+1\right] \end{equation} The requirements that $N_{i}=N_{o}$ and the balancing of the fluxes when $s=1$ are enough to solve for $f(s)$. The distribution of steps, $\tilde{f}(u)\equiv f(1-u)$, is given by: \begin{equation} \tilde{f}(u)=\sqrt{\frac{\pi }{2}}{\frac{1}{\sigma }}\left( 1-\mathrm{\,Erf\, {\left( \frac{u}{\sqrt{2}\sigma }\right) }}\right) \label{eq:oneDstep} \end{equation} \subsection{Numerical results for thermal conductivity in 1D \label{subsec:1Dtc}} The above analytical calculation provides the correct step distribution for walkers leaving the edge of the infinite conductors. We can compare it to a simple model where we simply have the walkers take a step with a Gaussian probability distribution (mean size 0.20) from the surface.
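For implementation purposes, the cumulative distribution of eq.(\ref{eq:oneDstep}) can be written in closed form and inverted numerically. The Python sketch below (our own; the function names are illustrative) tabulates the cumulative distribution on a mesh and inverts it by linear interpolation, the same strategy the text later uses for the three dimensional distribution:

```python
import bisect
import math

def exit_step_cdf(u, sigma):
    # Closed-form integral of the exit-step distribution, eq. (oneDstep):
    # F(u) = sqrt(pi/2)(u/sigma) Erfc(u/(sqrt(2) sigma)) + 1 - exp(-u^2/(2 sigma^2)),
    # with F(0) = 0 and F(u) -> 1 as u -> infinity.
    a = math.sqrt(2.0) * sigma
    return (math.sqrt(math.pi / 2.0) * (u / sigma) * math.erfc(u / a)
            + 1.0 - math.exp(-u * u / (2.0 * sigma * sigma)))

def make_exit_step_sampler(sigma, n_mesh=2000, u_max=8.0):
    # Tabulate F on a mesh over [0, u_max*sigma] and invert by linear
    # interpolation; sample(p) maps p in (0, 1) to an exit step size.
    us = [i * u_max * sigma / (n_mesh - 1) for i in range(n_mesh)]
    cs = [exit_step_cdf(u, sigma) for u in us]
    def sample(p):
        j = bisect.bisect_left(cs, p)
        if j <= 0:
            return us[0]
        if j >= n_mesh or cs[j] == cs[j - 1]:
            return us[min(j, n_mesh - 1)]
        t = (p - cs[j - 1]) / (cs[j] - cs[j - 1])
        return us[j - 1] + t * (us[j] - us[j - 1])
    return sample
```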
Fig.\ref{fig:oneDres} shows the spatial distribution of random walkers in such a one dimensional system.\cite{probability} Plotted are the average number of walkers in each of 20 bins, equally spaced $-1<x<1$, over the course of $10^{5}$ Monte Carlo steps. Starting with 500 random walkers, half should be in the \textquotedblleft matrix\textquotedblright\ region, so that the average number/bin should be 12.5. The dashed line is the result of performing the simulation incorrectly, and letting the walkers have a Gaussian step distribution as in eq.\ref{eq:oneDgaussian}; there are too many walkers in the interval, and their distribution is not uniform. The solid line is the result of using eq.\ref{eq:oneDstep}, which yields the correct result, and is uniform. \begin{figure}[bt] \centering \includegraphics[scale=0.35]{oneDres.eps} \caption{{\protect\small (Color online) Results for the one dimensional model of a random walk on a ring with an infinite conductivity inclusion. The steps in the random walk have a Gaussian distribution with a mean of $0.20$. Plotted are the average number of walkers in each of 20 bins, equally spaced $-1<x<1$, over the course of $10^5$ Monte Carlo steps. Starting with 500 random walkers, half should be in the ``normal'' region, so that the average number/bin should be 12.5. The dashed line is the result of performing the simulation incorrectly, and letting the walkers have a Gaussian step distribution as in eq.\protect\ref{eq:oneDgaussian}; there are too many walkers in the interval, and their distribution is not uniform. The solid line is the result of using eq.\protect\ref{eq:oneDstep}, which yields the correct result. }} \label{fig:oneDres} \end{figure} In order to determine the significance of this error, we place several ICs in the computational volume and run at constant heat flux until the temperature distribution converges. We then extract the gradient in walker density and calculate the thermal conductivity.
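The extraction step just mentioned amounts to a straight-line fit of the binned walker density followed by an application of Fourier's law. A minimal sketch (our own; the sample profile and flux value are made up for illustration):

```python
# Fit the slope of the binned temperature profile and apply Fourier's
# law, q = -k dT/dx, to recover the thermal conductivity k.
def fit_slope(xs, ts):
    n = len(xs)
    mx, mt = sum(xs) / n, sum(ts) / n
    num = sum((x - mx) * (t - mt) for x, t in zip(xs, ts))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def thermal_conductivity(xs, ts, flux):
    return -flux / fit_slope(xs, ts)

# Example: a profile T(x) = 5 - 2x under an applied flux q = 4
# gives k = -q / (dT/dx) = 2.
```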
Sample results are plotted in fig.(\ref{fig:1Ddens}), where we show the results for Gaussian steps (lower curve) and steps governed by eq.(\ref{eq:oneDstep}) (upper curve). The latter gives physically reasonable results (with noise), in which the temperature is constant in the ICs and uniformly decreasing in the matrix. The thermal conductivity is extracted from the ratio of the slope of the temperature (the walker density) to the applied flux. In fig.(\ref{fig:1Dvar}) we plot the average value of the percent error in the thermal conductivity as a function of the transmission probability, $f_{m,IC}$, for regular and random 1D arrays. In this simulation the ICs were 0.50 units long and the material between them was 1.00 units wide. The results are averaged over five runs, each lasting for 40,000 time steps. The percent error is defined as the difference between the results of simulations using the Gaussian steps and the results using eq.\ref{eq:oneDstep}. The error bars represent the variation in thermal conductivities over the runs. Thus we see that the error can be as large as five percent, and can vary substantially. \begin{figure}[bt] \centering \vbox{\includegraphics[width=3.25in]{1Ddens.eps}} \caption{{\protect\small (Color online) Plots of the temperature distribution in a periodic array of infinite conductivity inclusions in a 1D matrix. The temperature is determined by the average of the difference between the number of positive and negative walkers in a given region. This plot was generated over a run of 62,500 time steps where 10,000 positive (negative) walkers were added every 10 time steps at the left (right) border. The ICs are 0.50 units long and the matrix spacer between them is 1.00 units. The probability to cross into an IC from the matrix is 0.16. Lower line: The temperature distribution generated by having random walkers depart the ICs with Gaussian steps.
Upper line: The temperature distribution (shifted up one unit for clarity) given by an algorithm using the step probability distribution of eq.\protect\ref{eq:oneDstep}. Note that temperature gradients spontaneously appear at interfaces when the incorrect jump distribution is used. }} \label{fig:1Ddens} \end{figure} \begin{figure}[bt] \centering \vbox{\includegraphics[width=3.25in]{1Dvar.eps}} \caption{{\protect\small (Color online) Plot of the relative percent error for the thermal conductivity calculated using a Gaussian step distribution ($\protect\sigma=0.1$) as compared to that of eq.(\protect\ref{eq:oneDstep}) as a function of $f_{m,IC}$, the probability for a walker to enter into an inclusion. The results are for a one dimensional system with 20 inclusions; the diamonds are for a regular array of ICs and the circles are for randomly placed ICs. The error bars are based on a sample of five different configurations and are included to give an indication of how large the errors can be. }} \label{fig:1Dvar} \end{figure} \section{Three dimensional models \label{sec:3D}} We have shown above that the errors in one dimensional simulations are avoidable, but amount to only a few percent. Below we generalize the above problem to three dimensions for spherical inclusions, and show that the effect can be significant. \subsection{Analytic Derivation of Random Walks with Spherical Inclusions \label{subsec:Sphere}} In our model random walkers that land inside the sphere are immediately moved to a random point on the surface of the sphere. On the next time step they can move in the radial direction away from the sphere. We assume that if we choose the fraction that leave and their step distribution correctly, then in equilibrium we will obtain a uniform, stationary density outside and inside the sphere.
The number of random walkers from a region $d\vec{r}$ near $\vec{r}$ that land inside the sphere is given by: \begin{equation} n(\vec{r})=\,d\vec{r}\,\,\,{\frac{\rho _{0}f_{k}}{(2\pi )^{3/2}\sigma ^{3}}} \int_{r^{\prime }<R}d\vec{r^{\prime}}\,e^{\frac{-(\vec{r}-\vec{r^{\prime}})^2}{2\sigma^2}} \end{equation} where we have generalized the probability distribution of eq.(\ref{eq:oneDgaussian}) to three dimensions. The factor $f_{k}$ represents the Kapitza resistance; it is the probability that a random walker will enter the spherical inclusion. When $f_{k}=0$, no walkers enter the inclusion and the Kapitza resistance is infinite; when $f_{k}=1$ walkers can freely step into the inclusion and the Kapitza resistance is zero. The total number entering a sphere of radius $R$ is \begin{equation} N_{\mathrm{in}}(R)=\int_{r>R} d\vec{r}\,\,\,{\frac{\rho _{0}f_{k}}{(2\pi )^{3/2}\sigma ^{3}}} \int_{r^{\prime }<R}d\vec{r^{\prime}}\,e^{\frac{-(\vec{r}-\vec{r^{\prime}})^2}{2\sigma^2}} \end{equation} For any value of ${\vec{r}}$ we can rotate our primed coordinate system so that $\hat{z}^{\prime }\parallel {\vec{r}}$, so that the angle between ${\vec{r}}$ and ${\vec{r}\,^{\prime }}$ is simply $\theta ^{\prime }$, the spherical polar angle in the primed system.
The angular integrals can then all be done in closed form giving \begin{equation} N_{\mathrm{in}}(R)=\sqrt{8\pi }\,{\frac{\rho _{0}f_{k}}{\sigma }} \int_{r>R}r\,dr\int_{r^{\prime }<R}r^{\prime }dr^{\prime }\,\left( e^{\frac{ -(r+r^{\prime})^2}{2\sigma ^{2}}}-e^{\frac{-(r-r^{\prime})^2}{2\sigma ^{2}} }\right) \end{equation} The resulting integral can also be found exactly yielding \begin{equation} N_{\mathrm{in}}(R)={\frac{2}{3}}\rho _{0}f_{k}\left[ \sqrt{2\pi }\sigma \left( 3R^{2}-\sigma ^{2}\right) +2\pi R^{3}\,\mathrm{\,Erfc\,{({\frac{\sqrt{ 2}R}{\sigma }})}}+\sqrt{2\pi }\sigma e^{\frac{-2R^{2}}{{\sigma ^{2}}} }(\sigma ^{2}-R^{2})\right] \label{eq:Nin} \end{equation} If the density of walkers is uniform then the number inside the sphere is $V_s \rho_0$, where $V_s$ is the volume of the sphere. (This is only true when the products of the density and specific heat capacity of the matrix and of the ICs are equal. When this constraint does not hold, the number of walkers inside the IC is $V_s \rho_0 \frac{C_M \rho_M}{C_{IC} \rho_{IC}}$, where $C_{IC}$ ($C_{M}$) and $\rho_{IC}$ ($\rho_{M}$) are the specific heat capacity and mass density of the IC (matrix).) In each time step we allow a fraction $\lambda$ of them to leave. In equilibrium the flux into the sphere (Eq.\ref{eq:Nin}) equals the flux out ($V_s \rho_0 \lambda$), allowing us to calculate $\lambda$: \begin{equation} \lambda={\frac{2}{3}} {\frac{f_k }{V_s}} \left[ \sqrt{2\pi} \sigma (3 R^2 - \sigma^2) +2 \pi R^3 \mathrm{\,erfc\,{({\frac{\sqrt{2} R }{\sigma}})}}+ \sqrt{2 \pi} \sigma e^{\frac{-2 R^2 }{{\sigma^2}}} (\sigma^2 -R^2) \right] \label{eq:lambda} \end{equation} When $R>>\sigma$, we expect the geometry of the inclusion to be irrelevant. In this limit, if the random walkers had a \textit{flat} distribution of steps bounded by $\sigma$, then the flux into the sphere would come from a thin spherical shell of thickness $\sigma$ and radius $R$.
The volume of this shell is $\sigma A_s$, where $A_s$ is the surface area of the sphere. The flow in from this shell is balanced by the flow out of the volume, $V_s \rho_0 \lambda$. It is useful to write this in terms of a new constant, $c_0$, defined via \begin{equation} c_0 \equiv {\frac{\lambda V }{\sigma A}} \end{equation} which is dimensionless and becomes shape independent as $\sigma\to 0$. In this case \begin{equation} c_0 = f_k \left[ {\frac{1}{\sqrt{2 \pi}}} \left( 1-{\frac{\sigma^2 }{3 R^2}} \right) + {\frac{R}{3 \sigma}} \mathrm{\,erfc\,{({\frac{ \sqrt{2} R}{\sigma } })}} \right] \end{equation} This quantity is bounded by $f_k/\sqrt{2\pi}$, the result one would get for an infinite slab. The factor $1/\sqrt{2\pi}$ arises from the fact that walkers have a Gaussian distribution of step sizes, and not a flat one. \begin{figure}[bt] \centering \includegraphics[scale=0.4]{sphere.eps} \caption{{\protect\small An illustration of the three dimensional model for spheres. The grey material is a sphere of ``infinite'' conductivity with radius $R$. We wish to calculate the distribution of step sizes taken by random walkers leaving the surface of the sphere. We do this by requiring that in equilibrium the net flux through any surface (e.g. the dashed line indicating a sphere of radius $s$) must be zero. The number of walkers hopping in through the dotted surface ($N_{in}(s)$) must be balanced by the total number of walkers hopping out, both those from the matrix material ($N_{out}(R,s)$) and those from the surface of the sphere ($N_{\mathrm{sphere}}(s)$). This condition allows us to calculate the step distribution. }} \label{fig:sphere} \end{figure} Next we have to calculate the distribution of steps for random walkers leaving the surface of the sphere. As in the one dimensional case of subsection \ref{subsec:oneD} above, we can calculate the desired result by balancing fluxes in equilibrium. We draw an imaginary surface of radius $s$ about the spherical inclusion.
In equilibrium, the net flux through this surface must be zero, as illustrated in fig.(\ref{fig:sphere}). \vspace{25pt} \begin{equation} N_{\mathrm{in}}(s)= N_{\mathrm{out}}(R,s) + N_{\mathrm{sphere}}(s) \label{eq:sphereBalance} \end{equation} The flux in through the sphere of radius $s$, $N_{\mathrm{in}}(s)$, is the result of eq.(\ref{eq:Nin}) evaluated for a radius of $s$. The flux outward from the matrix material is given by the integral: \begin{equation} N_{\mathrm{out}}(R,s) = \int_{R<r<s} d{\vec r}\, {\frac{\rho_0 f_k }{(2 \pi )^{3/2} \sigma^3}} \int_{r^{\prime }> s} d\vec r\,^{\prime }\, e^{\frac{-(\vec r - \vec r\,^{\prime })^2}{2\sigma^2}} \end{equation} We can again evaluate this integral analytically to obtain: \begin{eqnarray} N_{\mathrm{out}}(R,s)= {\frac{2}{3}} f_k \rho_0 \left[ 2 \pi s^3 \mathrm{ \,erfc\,{(\frac{\sqrt{2} s}{\sigma})}} +\sqrt{2 \pi} \sigma (\sigma^2 - s^2) e^{\frac{-2 s^2}{\sigma^2}} - \sqrt{2 \pi}\sigma (\sigma^2-3 s^2) \right. \nonumber \\ \left.+ \sqrt{2 \pi}\sigma (\sigma^2-s^2-R^2-Rs) e^\frac{-(s-R)^2}{2 \sigma^2 } - \sqrt{2 \pi}\sigma (\sigma^2-s^2-R^2+Rs) e^\frac{-(s+R)^2}{2 \sigma^2} \right. \nonumber \\ \left. +\pi (s^3-R^3)\mathrm{\,erfc\,{(\frac{s-R}{\sqrt{2}\sigma} )}}- \pi (s^3+R^3) \mathrm{\,erfc\,{(\frac{s+R}{\sqrt{2}\sigma})}} \right] \label{eq:NoutRs} \end{eqnarray} Finally, we can write an expression for the flux of random walkers (originating on the inclusion surface) that hop out through the sphere of radius $s$: \begin{equation} N_{\mathrm{sphere}}(s)= V_s \rho_0 \lambda \int_s^\infty f(r)\, dr \label{eq:Nsphere} \end{equation} where $f(r)\, dr$ gives the fraction of walkers that jump radially outward to a distance between $r$ and $r+dr$ from the center of the inclusion. Eqns.(\ref{eq:Nin}), (\ref{eq:NoutRs}) and (\ref{eq:Nsphere}) give us enough information to calculate the step distribution function $f(r)$. However, in computer applications we do not actually use $f(r)$.
Rather, algorithms typically generate a random number, $p$, from a flat distribution $0<p<1$, and use it to select a random step $\delta(p)$ from the center of the sphere, $\delta>R$. We can do this by first calculating the integral of $f(r)$: \begin{equation} P(\delta) \equiv \int_R^{\delta} f(r^{\prime }) \, dr^{\prime } \label{eq:integdist} \end{equation} Note that $P(R) = 0$ and $\lim_{\delta\to \infty} P(\delta) = 1$. We then must invert this functional relationship to get $\delta(P)$, which gives us the step generating function we desire. We note that from eq.(\ref{eq:Nsphere}) and (\ref{eq:integdist}) we have: \begin{equation} N_{\mathrm{sphere}}(s) = V_s\, \rho_0 \lambda \left[ 1-P(s) \right] \end{equation} Substituting this into eq.(\ref{eq:sphereBalance}) and dropping exponentially small terms we have \begin{eqnarray} P(s) = 1- \left[ \pi(R^3-s^3) \mathrm{\,erfc\,}{\left({\frac{{s-R}}{{\sqrt{2}\sigma}}}\right)} - \sqrt{2 \pi}\sigma (\sigma^2-s^2-R^2-Rs) e^{\frac{{-(s-R)^2}}{{2 \sigma^2}}} \right. \nonumber \\ \left. {} + \sqrt{2 \pi}\sigma (\sigma^2-s^2-R^2+Rs) e^{\frac{-(s+R)^2}{2 \sigma^2}} +\pi (s^3+R^3)\mathrm{\,erfc\,}{\left(\frac{s+R}{\sqrt{2}\sigma}\right)}\right] \nonumber \\ {} \Big/ \left[ \sqrt{2 \pi} \sigma (3R^2 -\sigma^2) +\sqrt{2 \pi} \sigma (\sigma^2-R^2)e^{\frac{-2 R^2}{\sigma^2}} + 2\pi R^3 \mathrm{\,erfc\,}\left(\frac{\sqrt{2}R}{\sigma}\right) \right] \label{eq:Prob3D} \end{eqnarray} This result has the desired behavior at the limits, $P(R)=0$ and $\lim_{s\to \infty} P(s)=1$. This function is not analytically invertible; in implementation it is evaluated on a mesh and the inverse is calculated via interpolation. \subsection{Numerical Results in 3D \label{subsec:NumResults3D}} We implemented a random walk algorithm in three dimensions similar to that of section \ref{sec:1D} above. In three dimensions we applied periodic boundary conditions in the $y$ and $z$ directions.
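The mesh-and-interpolation inversion just described can be outlined as follows (a Python sketch of our own; the names are illustrative). The helper evaluates eq.(\ref{eq:Prob3D}) directly, so it satisfies $P(R)=0$ and $P(s)\to 1$:

```python
import bisect
import math

SQ2PI = math.sqrt(2.0 * math.pi)

def radial_cdf(s, R, sigma):
    # Integrated step distribution P(s) of eq. (Prob3D): the probability
    # that a walker leaving the sphere lands at a radius r <= s.
    num = (math.pi * (R**3 - s**3) * math.erfc((s - R) / (math.sqrt(2) * sigma))
           - SQ2PI * sigma * (sigma**2 - s**2 - R**2 - R * s)
             * math.exp(-(s - R)**2 / (2 * sigma**2))
           + SQ2PI * sigma * (sigma**2 - s**2 - R**2 + R * s)
             * math.exp(-(s + R)**2 / (2 * sigma**2))
           + math.pi * (s**3 + R**3) * math.erfc((s + R) / (math.sqrt(2) * sigma)))
    den = (SQ2PI * sigma * (3 * R**2 - sigma**2)
           + SQ2PI * sigma * (sigma**2 - R**2) * math.exp(-2 * R**2 / sigma**2)
           + 2 * math.pi * R**3 * math.erfc(math.sqrt(2) * R / sigma))
    return 1.0 - num / den

def make_radial_sampler(R, sigma, n_mesh=2000):
    # Tabulate P on a mesh over [R, R + 8 sigma] and invert by linear
    # interpolation; delta(p) maps p in (0, 1) to an exit radius.
    ss = [R + i * 8.0 * sigma / (n_mesh - 1) for i in range(n_mesh)]
    ps = [radial_cdf(s, R, sigma) for s in ss]
    def delta(p):
        j = bisect.bisect_left(ps, p)
        if j <= 0:
            return ss[0]
        if j >= n_mesh or ps[j] == ps[j - 1]:
            return ss[min(j, n_mesh - 1)]
        t = (p - ps[j - 1]) / (ps[j] - ps[j - 1])
        return ss[j - 1] + t * (ss[j] - ss[j - 1])
    return delta
```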
A temperature profile in the $x$ direction was obtained simply by binning all walkers in a given range of $x$ for all $y$ and $z$; such slices would cross inclusions as well as matrix material. Walkers that were labelled as inside a given inclusion were assigned a random position inside the inclusion for the purpose of doing this averaging. The simulation volume was $10\times 10 \times 10$, and the random walk in the matrix was described by a Gaussian distribution with an rms value of 0.10 in these units. The transition probability $f_{m,IC}$ was fixed at 1.0. In fig.(\ref{fig:evn3D}) the percent error (defined as the ratio of the difference of thermal conductivities measured using the Gaussian step distribution and that of eq.(\ref{eq:Prob3D}), divided by the former) is plotted as a function of the volume fraction of infinite conductivity inclusions, for a fixed surface area for the inclusions. (If there were only a single spherical inclusion, it would have had a volume fraction of 5\%.)\cite{fudge} As the number of inclusions at fixed surface area increases, their total volume decreases as $N^{-3/2}$. (For example, the largest volume fraction, 0.20, corresponds to $100$ spheres of radius $0.9772$). The results at several values of $N$ were calculated for five random configurations and the average and standard deviation are plotted. The effect of using a simplified step distribution is \textit{larger} in three dimensions, and can affect the results by up to 18\%. The percent error was also calculated as a function of the surface area for fixed volume fraction and plotted in fig.(\ref{fig:eva3D}). The volume fraction was fixed at 5\%, and the surface area increases with $N$ as $N^{2/3}$. Five simulations were run for $N=100, 200, \dots, 1000$ and the average and standard deviation were plotted as a function of the surface area relative to the minimum surface area, $A_0$, the area of a sphere that is 5\% of the volume.
Again, the effect of using the wrong simulation algorithm is shown to be substantial. \begin{figure}[bt] \centering \vbox{\includegraphics[width=3.25in]{evv3D.eps}} \caption{{\protect\small Plot of the percent error in the thermal conductivity as a function of the volume fraction for fixed surface area. The percent error is defined as the ratio of the difference of thermal conductivities measured using the Gaussian step distribution and that of eq.(\protect\ref{eq:Prob3D}) divided by the former. Note that the error varies only slightly with volume fraction. }} \label{fig:evn3D} \end{figure} \begin{figure}[bt] \centering \vbox{\includegraphics[width=3.25in]{eva3D.eps}} \caption{{\protect\small Plot of the percent error in the thermal conductivity as a function of the surface area of the spherical inclusions at a fixed volume fraction of 5\%. The surface area is measured in terms of $A_0$, the surface area of a sphere with 5\% of the total volume. The percent error is defined as the ratio of the difference of thermal conductivities measured using the Gaussian step distribution and that of eq.(\protect\ref{eq:Prob3D}) divided by the former. }} \label{fig:eva3D} \end{figure} \section{Conclusions and Future work \label{sec:Conclusions}} Transport in composites with a large disparity in conductivities is important to a large number of systems. In this paper we have demonstrated an efficient and physically sound algorithm for calculating effective conductivities of composites with large contrasts in conductivity. We have shown that the errors introduced are small but measurable in one dimension, and moderately significant in 3D. The spherical inclusion case is the simplest 3D problem, but not the most relevant to many systems. Carbon nanotubes might be approximated as cylinders, to lowest order. However, in that case the 1D integrals of section (\ref{subsec:Sphere}) become more complicated and handling the endcaps of the cylinders becomes problematic.
A simple approach might be to simply ignore transport through the endcaps, or treat a nanotube as an extremely prolate spheroid so that diffusion from the inclusion can again be treated as a one dimensional walk normal to the surface. These approximations are the subject of current research. \begin{acknowledgments} This project was supported in part by the US National Science Foundation under Grant~\mbox{MRSEC DMR-0080054}, and \mbox{EPS-9720651}, and~\mbox{PHY--0071031}. Dimitrios Papavassiliou acknowledges support from the DoE-funded Carbon Nanotubes Technology Center - (CANTEC, Award Register\#: ER64239 0012293). \end{acknowledgments}
\section{Introduction} \label{sec:intro} Arguably the most basic model of individual and collective choice is a \emph{choice function}, which associates with each set~$A$ of feasible alternatives a non-empty subset~$S(A)\subseteq A$. Apparently, not every choice function complies with our intuitive understanding of rationality. Consider, for example, the choice function~$S$ with $S(\set{a,b})=\set a$ and $S(\set{a,b,c})=\set{b}$. Doubts as to an agent's rationality could be raised, if, when offered the choice between apple pie and brownies, he were to choose the former, but the latter, when told that chocolate mousse is also an option.\footnote{\citet{Sen93a,Sen97a} gives examples where rational choosers actually make choices as described and generally argues against imposing internal consistency conditions on rational choice. His examples usually involve a kind of context-dependence. For instance, a modest person may be unwilling to take the largest piece of cake and thus his choice depends on the other pieces that are available. Our view is that violation of internal consistency conditions need not necessarily point at irrational behavior as such but can just as well indicate the presence of situational features that affect choice but are not (appropriately) represented in the mathematical model. In some cases the set of alternatives can be redefined so as to capture all aspects that affect their choice. For instance, the choice of the modest person above is arguably not between mere pieces of cake, but rather between tuples that consist of a piece of cake and the pieces left for others to choose from.} In microeconomic theory, the existence of a binary relation~$R$ on all alternatives such that~$S$ returns precisely the maximal elements according to~$R$ from any feasible set is commonly taken as a minimal rationality condition on choice functions. 
Choice functions for which this is the case are called \emph{rationalizable} \citep[see, \eg][]{Rich66a,Herz73a,BBKS76a,Moul85a}.\footnote{Rationalizable choice functions have also been referred to as \emph{binary} \citep{Schw76a}, \emph{normal} \citep{Sen77a}, and \emph{reasonable} \citep{Alli99a}.} Rationalizable choice functions have been characterized using two \emph{consistency} conditions that relate choices within feasible sets of variable size, namely conditions $\alpha$ and $\gamma$ \citep{Sen71a}. Clearly, acyclicity of the strict part~$P$ of~$R$ is necessary and sufficient for~$S$ to be rationalizable if every finite set of alternatives is feasible. Stronger rationality conditions can be obtained by requiring the rationalizing relation~$R$ to satisfy certain structural restrictions, such as completeness, transitivity, or quasi-transitivity (\ie transitivity of~$P$). The above considerations have had a profound impact on the theory of social choice, in particular on the interpretation of Arrow's general impossibility theorem \citep{Arro51a}, which states the impossibility of social choice functions that satisfy four intuitive criteria, including rationalizability via a transitive preference relation. An obvious way around Arrow's disturbing result is to try to relax this condition, \eg by requiring social choice functions to be merely rationalizable. Although this approach does allow for some social choice functions that also meet the remaining three criteria, these functions turned out to be highly objectionable, usually on grounds of involving a weak kind of dictatorship or violating other conditions deemed to be indispensable for rational social choice \citep[for an overview of the extensive literature, see ][]{BBKS76a,Kell78a,Schw86a,Sen77a,Sen86a,CaKe02a}. 
\citet[][page 5]{Sen95a} concludes that \begin{quotation} [\dots] the arbitrariness of power of which Arrow's case of dictatorship is an extreme example, lingers in one form or another even when transitivity is dropped, so long as \emph{some} regularity is demanded (such as the absence of cycles).\hfill \end{quotation} One possibility to escape the haunting impossibility of rationalizable social choice is to require only~$\alpha$ or~$\gamma$ but not both at the same time. It turns out that $\alpha$ (and even substantially weakened versions of $\alpha$) give rise to impossibility results that retain Arrow's spirit \citep{Sen77a}. By contrast, there are a number of social choice functions that satisfy~$\gamma$. An attractive one among these based on majority rule is the \emph{uncovered set} \citep{Fish77a,Mill80a,Moul86a}. In this paper, we approach the matter from a slightly different angle. Choice functions are defined so as to select \emph{subsets} of alternatives from each feasible set, rather than a single alternative. Still, the consistency and rationality conditions on choice functions have been defined in terms of alternatives. Taking cue from this observation, we propose an alternative notion of rationality called \emph{set-rationalizability}. A choice function~$S$ is \emph{set-rationalizable} if a binary relation~$R$ on all non-empty subsets of alternatives can be found such that for each feasible subset~$A$, $S(A)$ is maximal with respect to~$R$ among all non-empty subsets of~$A$. We find that set-rationalizable choice functions can be characterized by $\widehat\alpha$, a natural variant of~$\alpha$ defined in terms of sets rather than alternatives. Despite its intuitive appeal, $\widehat\alpha$ has played a remarkably small role in (social) choice theory \citep[][]{Cher54a,AiAl95a}. Yet, it differentiates quite a number of well-known choice functions. 
In particular, we will show that various prominent social choice functions---such as all scoring rules, all scoring runoff rules, and all weak Condorcet extensions---do {not} satisfy $\widehat\alpha$, whereas several Condorcet extensions---such as weak closure maximality, the minimal covering set, and the essential set---do. For our second result, we introduce a new property~$\widehat\gamma$, which is obtained from~$\gamma$ in the same way as~$\widehat\alpha$ is obtained from~$\alpha$. It turns out that~$\widehat\alpha$ and~$\widehat\gamma$ characterize the class of \emph{self-stable} choice functions, whose definition is inspired by earlier work of~\citet{Dutt88a} and~\citet{Bran08a}. Despite the logical independence of $\widehat\alpha$ and $\widehat\gamma$, the class of self-stable social choice functions also contains the Condorcet extensions mentioned above. These Condorcet extensions also satisfy all conditions typically appearing in Arrovian impossibility results except rationalizability, \ie $\alpha$ and $\gamma$. Accordingly, by replacing~$\alpha$ and~$\gamma$ with~$\widehat\alpha$ and~$\widehat\gamma$, the impossibility of rationalizable social choice can be avoided and turned into a possibility result. \section{Preliminaries}\label{sec:prelim} We assume there to be a universe~$U$ of at least three \emph{alternatives}. Any subset of~$U$ from which alternatives are to be chosen is a \emph{feasible set} (sometimes also called an \emph{issue} or \emph{agenda}). Throughout this paper we assume the set of feasible subsets of~$U$ to be given by~$\mathcal{F}(U)$, the set of finite and non-empty subsets of~$U$, and generally refer to finite non-empty subsets of~$U$ as feasible sets. Our central objects of study are \emph{choice functions}, \ie functions $S:\mathcal{F}(U)\rightarrow \mathcal{F}(U)$ such that $S(A)\subseteq A$ for all feasible sets~$A$.
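Since feasible sets are finite, the notions studied in this paper lend themselves to direct computational illustration. The following Python sketch (purely illustrative; the dictionary encoding and helper names are ours, not part of the formal development) represents a choice function over a small universe as a mapping from feasible sets to choice sets and checks the defining condition that $S(A)$ is a non-empty subset of~$A$:

```python
from itertools import combinations

def feasible_sets(universe):
    """All non-empty subsets of a finite universe U, as frozensets."""
    xs = sorted(universe)
    return [frozenset(c) for r in range(1, len(xs) + 1)
            for c in combinations(xs, r)]

def is_choice_function(S, universe):
    """Check that S(A) is defined, non-empty, and a subset of A
    for every feasible set A."""
    return all(A in S and S[A] and S[A] <= A
               for A in feasible_sets(universe))

# Example: the choice function selecting the alphabetically smallest
# alternative from every feasible set of a three-element universe.
U = {"a", "b", "c"}
S = {A: frozenset({min(A)}) for A in feasible_sets(U)}
print(is_choice_function(S, U))  # True
```

Representing feasible sets as frozensets makes them usable as dictionary keys, which keeps such finite checks straightforward.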
A choice function~$S$ is called \emph{rationalizable} if there exists a binary relation~$R$ on~$U$ such that for each feasible set~$A$, \[ S(A)=\set{a\in A\colon \text{$x\mathrel Pa$ for no~$x\in A$}} \] where~$P$ is the strict part of~$R$. Observe that acyclicity of~$P$ is required to guarantee that~$S$ invariably returns a non-empty set. Two typical candidates for the rationalizing relation are the \emph{base relation} $\overline R_S$~\citep{Herz73a} and the \emph{revealed preference relation} $R_S$~\citep{Samu38a}, which, for all alternatives~$a$ and~$b$, are given by \begin{gather*} \text{$a\mathrel{\overline R_S}b$ if and only if $a\in S(\set{a,b})$, and}\\ \text{ $a\mathrel{R_S}b$ if and only if $a\in S(X)$ for some~$X$ with~$b\in X$. } \end{gather*} Thus, the revealed preference relation relates~$a$ to~$b$ if $a$ is chosen in the presence of~$b$ and possibly other alternatives, whereas the base relation only relates~$a$ to~$b$ if $a$ is chosen in the exclusive presence of~$b$. Rationalizable choice functions are characterized by a consistency axiom, which \citet{Schw76a} defined such that for all feasible sets~$A$ and~$B$ and all alternatives~$x\in A\cap B$, % \[ \text{ $x\in S(A\cup B)$ if and only if $x\in S(A)$ and $x\in S(B)$. } \] The above equivalence can be factorized into two implications, viz. the conditions~$\alpha$ and $\gamma$ \citep{Sen71a} for feasible sets~$A$ and~$B$ and alternatives~$x\in A\cap B$,\footnote{The definitions of $\alpha$ and $\gamma$ given here are equivalent, but not syntactically identical, to Sen's original ones. They are chosen so as to reveal their similarity to $\widehat\alpha$ and $\widehat\gamma$ below.} \settowidth{\WORDWIDTH}{$\alpha$} \begin{gather} \tag{\text{$\alpha$}} \text{ if $x\in S(A\cup B)$ then $x\in S(A)$ and $x\in S(B)$, }\\[1ex] \tag{\makebox[\WORDWIDTH][c]{$\gamma$}} \text{ if $x\in S(A)$ and $x\in S(B)$ then $x\in S(A\cup B)$.
} \end{gather} Axiom~$\alpha$ is a \emph{contraction} consistency property, which states that alternatives that are chosen in a feasible set are still chosen in feasible subsets. By contrast,~$\gamma$ is an \emph{expansion} consistency property, which states that alternatives chosen in two feasible sets are also chosen in their union. \citet{Sen71a} proved that a choice function~$S$ is rationalizable if and only if it satisfies both~$\alpha$ and~$\gamma$, with the witnessing relations~$\overline R_S$ and $R_S$, which are identical in the presence of~$\alpha$. \begin{theorem}[\citeauthor{Sen71a}, \citeyear{Sen71a}]\label{thm:Sen71} A choice function is rationalizable if and only if it satisfies both~$\alpha$ and~$\gamma$. \end{theorem} Similar results can also be obtained if stronger requirements are imposed on the rationalizing relation \citep[see, \eg][]{Sen77a,Moul85a,Schw76a}. For instance, \citet{Arro59a} showed that a choice function can be rationalized by a complete and transitive relation if and only if it satisfies the \emph{weak axiom of revealed preference (WARP)}---a consistency condition, first proposed by \citet{Samu38a}, which is stronger than the conjunction of~$\alpha$ and~$\gamma$ and central to large parts of microeconomic theory. Formally, WARP is defined such that for all feasible sets~$A$ and~$B$ with $B\subseteq A$, \[ \tag{WARP} \text{if $S(A)\cap B\neq\emptyset$ then $S(A)\cap B=S(B)$. } \] \section{Set-Rationalizable Choice} In analogy to the definitions of \secref{sec:prelim}, we now define the concept of set-rationalizability, the base and revealed preference relations over sets of alternatives, and properties~$\widehat\alpha$ and $\widehat\gamma$. The main result of this section is that set-rationalizable choice is completely characterized by~$\widehat\alpha$. We say a choice function is \emph{set-rationalizable} if it can be rationalized via a preference relation on sets of alternatives. 
\begin{definition} A choice function~$S$ is \emph{set-rationalizable} if there exists a binary relation~$R\subseteq\mathcal{F}(U)\times\mathcal{F}(U)$ such that for each feasible set~$A$ there is no~$X\in \mathcal{F}(A)$ with~$X\mathrel P S(A)$ where~$P$ is the strict part of~$R$. \end{definition} We define the \emph{base relation $\overline R_S$} and the \emph{revealed preference relation~$\widehat R_S$} of a choice function~$S$ on sets as follows:\footnote{Given a choice function~$S$, the base relation on sets is a natural extension of the base relation on alternatives and, hence, both are denoted by~$\overline R_S$.} \begin{gather*} \text{$A\mathrel{\overline R_S} B$ if and only if $A=S(A\cup B)$,}\\ \text{$A\mathrel{\widehat R_S} B$ if and only if $A=S(X)$ for some $X$ with $B\subseteq X$.} \end{gather*} \subsection{Set-Contraction Consistency} Condition $\widehat\alpha$ is defined as a natural variant of~$\alpha$ that makes reference to the entire set of chosen alternatives rather than its individual elements. \begin{definition} A choice function~$S$ satisfies $\widehat\alpha$ if for all feasible sets~$A$, $B$, and~$X$ with $X\subseteq A\cap B$, \[ \tag{$\widehat \alpha$} \text{ if $X=S(A\cup B)$ then $X=S(A)$ and $X=S(B)$\text. } \] \end{definition} $\widehat\alpha$ is not implied by the standard contraction consistency condition $\alpha$ (see \exref{ex:setrevrel2}). Moreover, $\widehat\alpha$ is not a contraction consistency property according to Sen's original terminology \citep[see, \eg][]{Sen77a}. It requires not only that chosen alternatives remain in the choice set when the feasible set is reduced, but also that unchosen alternatives remain outside the choice set. Thus, it has the flavor of both contraction and expansion consistency.
$\widehat \alpha$ can be split into two conditions that fall in Sen's categories: an expansion condition known as $\epsilon^+$ \citep{Bord83a} and \emph{Aizerman} \citep{Moul86a}, which requires that $S(B)\subseteq S(A)$ for all $S(A)\subseteq B\subseteq A$, and a corresponding contraction condition. Similarly, $\widehat \gamma$ can be factorized into two conditions. In this paper, however, we are concerned with the choice set as a whole and $\widehat\alpha$ merely says that the set~$S(A)$ chosen from a feasible set~$A$ is also chosen from any subset~$B$ of~$A$, provided~$B$ contains~$S(A)$. This reading is reflected by the useful characterization of $\widehat\alpha$ given in the following lemma, which reveals that $\widehat\alpha$ is equivalent to such established notions as \citeauthor{Cher54a}'s \emph{postulate $5^*$} \citep{Cher54a}, the \emph{strong superset property} \citep{Bord79a}, and \emph{outcast}~\citep{AiAl95a}. \begin{lemma}\label{lemma:setSSP} A choice function~$S$ satisfies~$\widehat\alpha$ if and only if for all feasible sets $A$ and $B$, \[ \text{if $S(A)\subseteq B\subseteq A$ then $S(A)=S(B)$.} \] \end{lemma} \begin{proof} For the direction from left to right, let $S(A)\subseteq B\subseteq A$. Then, both $A\cup B=A$ and $B=A\cap B$. Hence, $S(A\cup B)=S(A)\subseteq B=A\cap B\text.$ Since $S$ satisfies~$\widehat\alpha$, $S(A)=S(B)$. For the opposite direction, assume, for arbitrary feasible sets~$A$ and~$B$ and a non-empty set~$X$, both $X\subseteq A\cap B$ and $X=S(A\cup B)$. Then, obviously, both $S(A\cup B)\subseteq A\subseteq A\cup B$ and $S(A\cup B)\subseteq B\subseteq A\cup B$. It follows that $S(A\cup B)=S(A)$ and $S(A\cup B)=S(B)$. \end{proof} As a corollary of \lemref{lemma:setSSP}, we have that choice functions~$S$ satisfying~$\widehat\alpha$, like those satisfying~$\alpha$, are \emph{idempotent}, \ie $S(S(A))=S(A)$ for all feasible sets~$A$.
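\lemref{lemma:setSSP} can also be verified exhaustively on a small universe. The following Python sketch (an illustration under our own encoding, not part of the formal development) enumerates all $3^3\cdot 7=189$ choice functions over a three-element universe and confirms that the definition of~$\widehat\alpha$ and the condition of the lemma agree on every one of them:

```python
from itertools import combinations, product

def nonempty_subsets(A):
    """All non-empty subsets of a finite set, as frozensets."""
    xs = sorted(A)
    return [frozenset(c) for r in range(1, len(xs) + 1)
            for c in combinations(xs, r)]

FEASIBLE = nonempty_subsets({"a", "b", "c"})

def satisfies_alpha_hat(S):
    """Definition: X <= A & B and X = S(A | B) imply X = S(A) = S(B)."""
    return all(not (S[A | B] <= A & B) or S[A] == S[A | B] == S[B]
               for A in FEASIBLE for B in FEASIBLE)

def satisfies_ssp(S):
    """Lemma: S(A) <= B <= A implies S(A) = S(B)."""
    return all(not (S[A] <= B) or S[A] == S[B]
               for A in FEASIBLE for B in nonempty_subsets(A))

# Enumerate every choice function on the universe (one choice of
# non-empty subset per feasible set) and compare the two conditions.
all_S = (dict(zip(FEASIBLE, pick))
         for pick in product(*(nonempty_subsets(A) for A in FEASIBLE)))
agree = all(satisfies_alpha_hat(S) == satisfies_ssp(S) for S in all_S)
print(agree)  # True
```

Such a brute-force check is of course no substitute for the proof, but it is a convenient safeguard when experimenting with further conditions.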
An influential and natural consistency condition that also has the flavor of both contraction and expansion is \emph{path independence} \citep{Plot73a}. Choice function~$S$ satisfies path independence if $S(A\cup B)=S(S(A)\cup S(B))$ for all feasible sets $A$ and $B$. \citet{AiMa81a} have shown that path independence is equivalent to the conjunction of $\alpha$ and $\epsilon^+$. Since $\alpha$ is the strongest contraction consistency property and implies the contraction part of $\widehat \alpha$, we obtain the following alternative characterization of path independence. \begin{proposition} A choice function satisfies path independence if and only if it satisfies $\alpha$ and $\widehat \alpha$. \end{proposition} It can easily be verified that the revealed preference relation on sets~$\widehat R_S$ of any choice function~$S$ that satisfies~$\widehat\alpha$ is closed under intersection, \ie for all feasible sets~$X$, $Y$, and~$Z$ such that $Y\cap Z\neq\emptyset$, \[ \text{$X\mathrel{\widehat R_S} Y$ and $X\mathrel{\widehat R_S} Z$ imply $X\mathrel{\widehat R_S} Y\cap Z$.} \] \subsection{Set-expansion Consistency} We define $\widehat\gamma$ in analogy to~$\gamma$ as follows. \begin{definition} A choice function~$S$ satisfies $\widehat\gamma$ if for all feasible sets~$A$, $B$, and~$X$, \[ \tag{$\widehat\gamma$} \text{ if $X=S(A)$ and $X=S(B)$ then $X=S(A\cup B)$\text. } \] \end{definition} Thus, a choice function satisfies~$\widehat\gamma$, if whenever it chooses $X$ from two different sets, it also chooses $X$ from their union. \exref{ex:setrevrel2} shows that $\widehat \alpha$ is not a weakening of $\alpha$ (and not even of the conjunction of $\alpha$ and $\gamma$). 
To see that $\widehat \gamma$ is not implied by $\gamma$, consider the following choice function over the universe $\{a,b,c\}$, which satisfies $\gamma$ but not $\widehat \gamma$: \[ \begin{array}{ll} X & S(X)\\\midrule \set{a,b} & \set{a} \\ \set{a,c} & \set{a} \\ \set{b,c} & \set{b} \\ \set{a,b,c} & \set{a,b,c} \end{array} \] However, $\widehat \gamma$ is implied by the conjunction of~$\alpha$ and~$\gamma$. \begin{proposition}\label{prop:properties} Every rationalizable choice function satisfies~$\widehat\gamma$. \end{proposition} \begin{proof} Assume both~$\alpha$ and~$\gamma$ to hold for an arbitrary choice function~$S$ and consider arbitrary feasible sets~$X$,~$A$, and~$B$ with $X=S(A)$ and $X=S(B)$. The inclusion of $X$ in $S(A\cup B)$ follows immediately from~$\gamma$. To appreciate that also $S(A\cup B)\subseteq X$, consider an arbitrary $x\notin X$ and assume for contradiction that $x\in S(A\cup B)$. Then, either $x\in A$ or $x\in B$. Without loss of generality we may assume the former. Clearly, $x\in (A\cup B)\cap A$ and $\alpha$ now implies that $x\in S(A)$, a contradiction. \end{proof} \citet{Schw76a} has shown that quasi-transitive rationalizability is equivalent to the conjunction of $\alpha$, $\gamma$, and $\epsilon^+$. Since $\alpha$ implies the contraction part of $\widehat \alpha$ and $\alpha$ and $\gamma$ imply $\widehat \gamma$, we obtain the following alternative characterization of quasi-transitive rationalizability. As a consequence, WARP implies both~$\widehat \alpha$ and~$\widehat \gamma$. \begin{proposition} A choice function is quasi-transitively rationalizable if and only if it satisfies $\alpha$, $\widehat \alpha$, and $\widehat \gamma$.
\end{proposition} $\widehat\gamma$ is reminiscent of the generalized Condorcet condition \citep[see, \eg][]{BBKS76a}, which requires that for all feasible sets~$A$ and all $a\in A$, \[ \text{if $S(\{a,b\})=\set a$ for all $b\in A$ then $S(A)=\set a$.} \] Choice functions that satisfy this condition we will refer to as \emph{generalized Condorcet extensions}. It is easily appreciated that $\widehat\gamma$ implies the generalized Condorcet condition. In the setting of social choice, \emph{Condorcet extensions} are commonly understood to be social choice functions for which additionally choice over pairs is determined by majority rule (see \secref{sec:condorcet}). In analogy to the relationship between closure under intersection of $\widehat R_S$ and $\widehat\alpha$, $\widehat R_S$ of a choice function~$S$ that satisfies~$\widehat\gamma$ is closed under union,\footnote{This condition is also known as \emph{robustness} \citep[][see also \citet{BBP04a}]{Arle03a}.} \ie for all feasible sets~$X$, $Y$, and~$Z$, \[ \text{$X\mathrel{\widehat R_S} Y$ and $X\mathrel{\widehat R_S} Z$ imply $X\mathrel{\widehat R_S} Y\cup Z$.} \] \subsection{Set-Rationalizability} As in the case of $\alpha$ and $\gamma$, a single intuitive consistency condition summarizes the conjunction of $\widehat\alpha$ and $\widehat\gamma$: for all feasible sets $A$, $B$, and~$X$ with $X\subseteq A\cap B$, \[ \text{ $X=S(A)$ and $X=S(B)$ if and only if $X=S(A\cup B)$\text. } \] For illustrative purposes consider the following examples. \begin{example}\label{ex:setrevrel} Let the choice function~$S$ over the universe~$\set{a,b,c}$ be given by the table in~\figref{fig:Srevprefrel}. For~$S$ the revealed preference relation on sets~$\widehat R_{S}$ and the base relation on sets~$\overline R_S$ coincide and are depicted in the graph on the right. A routine check reveals that this choice function satisfies both~$\widehat\alpha$ and~$\widehat\gamma$ (while it fails to satisfy $\alpha$). 
Also observe that each feasible set~$X$ contains a subset that is maximal (with respect to~$\widehat R_S$) among the non-empty subsets of~$X$, \eg $\set{a,b,c}$ in~$\set{a,b,c}$ and $\set{a}$ in~$\set{a,b}$. \thmref{thm:SetRatiffhatalphahatgamma}, below, shows that this is no coincidence. \end{example} \begin{figure} \centering \begin{minipage}{7em} $\begin{array}{c} \begin{array}[b]{ll} X & S(X)\\\midrule \set{a,b} & \set{a}\\ \set{a,c} &\set{c}\\ \set{b,c} &\set{b}\\ \set{a,b,c} &\set{a,b,c}\\ \end{array} \end{array}$ \end{minipage} \hspace{2em} \begin{minipage}{20em}\centering \begin{tikzpicture}[xscale=.7,yscale=1.5] \tikzstyle{every node}=[inner sep=1pt] \draw (0,0) node(a){\set{a}}; \draw (3,0) node(b){\set{b}}; \draw (6,0) node(c){\set{c}}; \draw (0,1) node(ab){\set{a,b}}; \draw (6,1) node(ac){\set{a,c}}; \draw (3,1) node(bc){\set{b,c}}; \draw (3,2) node(abc){\set{a,b,c}}; \draw[latex-] (a) ..controls ++(2,-.75) and ++(-2,-.75).. (c); \draw[-latex] (a) -- (b); \draw[-latex] (a) -- (ab); \draw[-latex] (b) -- (c); \draw[-latex] (b) -- (bc); \draw[-latex] (c) -- (ac); \draw[-latex] (abc) ..controls ++(-1.7,-.5) and ++(.25,.25).. (a); \draw[-latex] (abc) ..controls ++(-1,-.5) and ++(-1,.5).. (b); \draw[-latex] (abc) ..controls ++(1.5,-.5) and ++(-.25,.25).. (c); \draw[-latex] (abc) ..controls ++(-1.5,-.25) and ++(.5,.35).. (ab); \draw[-latex] (abc) ..controls ++(1.5,-.25) and ++(-.5,.35).. (ac); \draw[-latex] (abc) -- (bc); \draw[-latex] (a) ..controls ++(-.6,-.6) and ++(.6,-.6).. (a); \draw[-latex] (b) ..controls ++(-.6,-.6) and ++(.6,-.6).. (b); \draw[-latex] (c) ..controls ++(-.6,-.6) and ++(.6,-.6).. (c); \draw[-latex] (abc) ..controls ++(-.6,.6) and ++(.6,.6).. 
(abc); \end{tikzpicture} \end{minipage} \caption{\label{fig:Srevprefrel} The revealed preference relation~${\widehat R_S}$ of the choice function~$S$ as in \exref{ex:setrevrel}.} \end{figure} \begin{figure} \centering \begin{minipage}{7em} $\begin{array}{c} \begin{array}{ll} X & S(X)\\\midrule \set{a,b} & \set{a,b}\\ \set{a,c} &\set{a}\\ \set{b,c} &\set{c}\\ \set{a,b,c} &\set{a}\\ \end{array} \end{array}$ \end{minipage} \hspace{2em} \begin{minipage}{20em}\centering \begin{tikzpicture}[xscale=.7,yscale=1.5] \tikzstyle{every node}=[inner sep=1pt] \draw (0,0) node(a){\set{a}}; \draw (3,0) node(b){\set{b}}; \draw (6,0) node(c){\set{c}}; \draw (0,1) node(ab){\set{a,b}}; \draw (6,1) node(ac){\set{a,c}}; \draw (3,1) node(bc){\set{b,c}}; \draw (3,2.1) node(abc){\set{a,b,c}}; \draw[-latex] (a) -- (b); \draw[-latex] (a) ..controls ++(2,-.75) and ++(-2,-.75).. (c); \draw[-latex] (a) .. controls ++(-.85,.5) and ++(-.35,-.25).. (ab); \draw[-latex] (a) ..controls ++(-4.4,.9) and ++(-7,2).. (ac); \draw[-latex] (a) ..controls ++(-3.15,1.2) and ++(-3,1.2).. (bc); \draw[-latex] (a) ..controls ++(-4,.3) and ++(-6.5,.4).. (abc); \draw[-latex] (c) -- (b); \draw[latex-] (bc) -- (c); \draw[-latex] (ab) ..controls ++(-.6,.6) and ++(.6,.6).. (ab); \draw[-latex] (ab) .. controls ++(.85,-.5) and ++( .35,.25).. (a); \draw[-latex] (ab) -- (b); \draw[-latex] (a) ..controls ++(-.6,-.6) and ++(.6,-.6).. (a); \draw[-latex] (b) ..controls ++(-.6,-.6) and ++(.6,-.6).. (b); \draw[-latex] (c) ..controls ++(-.6,-.6) and ++(.6,-.6).. (c); \end{tikzpicture} \end{minipage} \caption{\label{fig:Srevprefrel2} The revealed preference relation~${\widehat R_S}$ of the choice function~$S$ as in \exref{ex:setrevrel2}.} \end{figure} \exref{ex:setrevrel} also shows that the revealed preference relation over sets need not be complete. Some reflection reveals that the relation is always incomplete. 
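The claims of \exref{ex:setrevrel} can also be checked mechanically. The following Python sketch (illustrative only; the encoding is ours) spells out the tabulated choice function and tests $\alpha$, $\widehat\alpha$, and $\widehat\gamma$ directly against their definitions:

```python
from itertools import combinations

def nonempty_subsets(A):
    """All non-empty subsets of a finite set, as frozensets."""
    xs = sorted(A)
    return [frozenset(c) for r in range(1, len(xs) + 1)
            for c in combinations(xs, r)]

F = nonempty_subsets({"a", "b", "c"})

# The choice function of the example; singletons choose themselves.
S = {A: A for A in F if len(A) == 1}
S[frozenset("ab")] = frozenset("a")
S[frozenset("ac")] = frozenset("c")
S[frozenset("bc")] = frozenset("b")
S[frozenset("abc")] = frozenset("abc")

def alpha(S):
    return all(x in S[A] and x in S[B]
               for A in F for B in F for x in S[A | B] if x in A & B)

def alpha_hat(S):
    return all(S[A] == S[A | B] == S[B]
               for A in F for B in F if S[A | B] <= A & B)

def gamma_hat(S):
    return all(S[A | B] == S[A]
               for A in F for B in F if S[A] == S[B])

print(alpha(S), alpha_hat(S), gamma_hat(S))  # False True True
```

In line with the example, the function violates $\alpha$ (for instance, $c\in S(\{a,b,c\})$ but $c\notin S(\{b,c\})$) while satisfying both set-consistency conditions.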
\begin{example}\label{ex:setrevrel2} The table in \figref{fig:Srevprefrel2} summarizes a choice function that is rationalizable by any acyclic relation~$P$ with $a\mathrel{P}c\mathrel{P}b$. Nevertheless, the revealed preference relation over sets does not set-rationalize this choice function. Observe that both $\set a$ and $\set{a,b}$ are maximal in~$\set{a,b}$ with respect to the strict part of~$\widehat R_S$. As $S(\set{a,b,c})=\set{a}$ and $S(\set{a,b})=\set{a,b}$,~$S$ clearly does not satisfy~$\widehat\alpha$. Again \thmref{thm:SetRatiffhatalphahatgamma}, below, shows that this is no coincidence. \end{example} By definition, the base relation $\overline R_S$ of any choice function $S$ is anti-symmetric, \ie $X\mathrel{\overline R_S}Y$ and $Y\mathrel{\overline R_S}X$ imply $X=Y$. In the presence of~$\widehat\alpha$,~$\widehat R_S$ and~$\overline R_S$ coincide and are thus both anti-symmetric. Set-rationalizable choice functions are characterized by~$\widehat\alpha$.\footnote{\citeauthor{Moul85a} shows a similar statement for single-valued choice functions~\citep{Moul85a}.} \begin{theorem}\label{thm:SetRatiffhatalphahatgamma} A choice function is set-rationalizable if and only if it satisfies~$\widehat\alpha$. \end{theorem} \begin{proof} For the direction from left to right, assume~$S$ is set-rationalizable and let $\mathrel R$ be the witnessing binary relation on sets. Now consider arbitrary feasible sets~$A$, $B$ and~$X$ with $X\subseteq A\cap B$ and assume that $X=S(A\cup B)$. Then, $S(A\cup B)\subseteq A\cap B$. Hence, $Y\mathrel P S(A\cup B)$, for no non-empty subset $Y\subseteq A\cup B$. It follows that there is no non-empty subset~$Y$ of $A$ such that $Y\mathrel P S(A\cup B)$ either. Hence, $S(A\cup B)$ is maximal with respect to $\mathrel R$ within~$A$. As~$S$ is set-rationalizable, $S(A\cup B)$ has also to be the unique such subset in~$A$. The argument that $S(A\cup B)$ is also the unique maximal element of~${R}$ in~$B$ runs along analogous lines. 
It follows that both $S(A)=S(A\cup B)$ and $S(B)=S(A\cup B)$. For the opposite direction assume~$S$ to satisfy~$\widehat\alpha$ and consider arbitrary feasible sets~$A$ and~$B$ such that $B\subseteq A$ and $S(A)\neq B$. Then, \[ S(A)\subseteq B\cup S(A)\subseteq A\text. \] Hence, by virtue of \lemref{lemma:setSSP}, \[ S(A)=S(B\cup S(A))\text, \] which implies that $S(A)\mathrel{\overline R_S}B$. Since $\overline R_S$ is anti-symmetric, it is thus impossible that $B\mathrel{\overline R_S} S(A)$. A similar argument holds for~$\widehat R_S$, which coincides with $\overline R_S$ in the presence of $\widehat \alpha$. \end{proof} In the proof of \thmref{thm:SetRatiffhatalphahatgamma}, it is precisely the revealed preference relation on sets that is witness to the fact that choice functions satisfying~$\widehat\alpha$ are set-rationalizable. In contrast to \citeauthor{Sen71a}'s \thmref{thm:Sen71}, however, the revealed preference relation on sets is not the unique relation that can achieve this. It is also worth observing that the proof shows that for each feasible set~$X$ and choice function~$S$ satisfying~$\widehat\alpha$, the selected set $S(X)$ is not merely a maximal set but also the unique \emph{maximum} set within~$X$ given~$\widehat R_S$, \ie $S(X)\mathrel{\widehat R_S} Y$ for all non-empty subsets~$Y$ of~$X$. \section{Self-Stability} It turns out that the notions of set consistency introduced in the previous section bear a strong relationship to the stability of choice sets as introduced by \citet{Dutt88a} and generalized by \citet{Bran08a}. Stability of choice sets is based on the notions of internal and external stability by \citet{vNM44a}, which can be merged in the following fixed-point characterization. \begin{definition} Let $A,X$ be feasible sets and $S$ a choice function. $X$ is \emph{$S$-stable in~$A$} if \[ X=\set{a\in A\colon a\in S(X\cup\set a)}\text. 
\] \end{definition} Alternatively,~$X$ is $S$-stable in $A$ if it satisfies both \emph{internal} and \emph{external $S$-stability in~$A$}: \begin{gather} \tag{internal $S$-stability} S(X)=X \text, \\[1ex] \tag{external $S$-stability} a\not\in S(X\cup\{a\}) \text{ for all } a\in A\setminus X\text. \end{gather} The intuition underlying stable sets is that there should be no reason to restrict the selection by excluding some alternative from it and, secondly, there should be an argument against each proposal to include an outside alternative into the selection. For some choice functions~$S$, a unique inclusion-minimal $S$-stable set exists in every feasible set. If that is the case, we use~$\widehat S$ to denote the choice function that returns the unique minimal $S$-stable set in each feasible set and say that $\widehat S$ is \emph{well-defined}. Within the setting of social choice, a prominent example is \citeauthor{Dutt88a}'s \emph{minimal covering set $\mc$}~\citep{Dutt88a,DuLa99a}, which is defined as the unique minimal stable set with respect to the uncovered set~$\uc$, \ie $\mc=\widehat\uc$. Proving that a choice function~$\widehat S$ is well-defined frequently turns out to be highly non-trivial~\citep{Bran08a}. We find that there is a close connection between~$\widehat\gamma$ and minimal $S$-stable sets. \begin{lemma}\label{lem:hatplusuniqueimphatgamma} Let $S$ be a choice function such that $\widehat S$ is well-defined. Then $\widehat S$ satisfies~$\widehat\gamma$. \end{lemma} \begin{proof} Consider arbitrary feasible sets~$A,B,X$ and assume that $\widehat S(A)=\widehat S(B)=X$. Trivially, as~$X$ is internally $S$-stable in~$A$, so is~$X$ in~$A\cup B$. To appreciate that~$X$ is also externally $S$-stable in~$A\cup B$, consider an arbitrary~$x\in(A\cup B)\setminus X$. Then, $x\in A\setminus X$ or $x\in B\setminus X$. In either case, $x\notin S(X\cup\set{x})$, by external $S$-stability of~$X$ in~$A$ if the former, and by external $S$-stability of~$X$ in~$B$ if the latter.
Also observe that any subset of~$X$ that is $S$-stable in~$A\cup B$, would also have been $S$-stable in both~$A$ and~$B$. Hence,~$X$ is minimal $S$-stable in~$A\cup B$. Having assumed that~$\widehat S$ is well-defined, we may conclude that~$\widehat S(A\cup B)=X$. \end{proof} We now introduce the notion of self-stability. A choice function~$S$ is said to be {self-stable} if for each feasible set $A$, $S(A)$ is the unique (minimal) $S$-stable set in~$A$. \begin{definition} A choice function~$S$ is \emph{self-stable} if $\widehat S$ is well-defined and $S=\widehat S$. \end{definition} In the next section we argue that self-stability defines an interesting class of choice functions, containing a number of well-known and important social choice functions. First, and on a more abstract level, however, we establish that the class of self-stable choice functions is characterized by the conjunction of $\widehat\alpha$ and~$\widehat{\gamma}$. \begin{theorem}\label{thm:selfstabilityiffhatalphahatgamma} A choice function is self-stable if and only if it satisfies both~$\widehat\alpha$ and~$\widehat\gamma$. \end{theorem} \begin{proof} For the direction from left to right, assume~$S$ to be self-stable. \lemref{lem:hatplusuniqueimphatgamma} implies that~$S$ satisfies~$\widehat \gamma$. For~$\widehat\alpha$, consider arbitrary feasible sets $A,B$ such that $S(A)\subseteq B\subseteq A$. By virtue of \lemref{lemma:setSSP}, it suffices to show that $S(B)=S(A)$. As, moreover, $\widehat S(B)=S(B)$ and $\widehat S$ is well-defined, $S(B)$ is the unique $S$-stable set in~$B$. Hence, it suffices to show that~$S(A)$ is both internally and externally $S$-stable in~$B$. Internal $S$-stability of~$S(A)$ in~$B$ is trivial since $S(S(A))=S(A)$ by $S(A)$'s being internally $S$-stable in~$A$. To appreciate that $S(A)$ is also externally $S$-stable in~$B$, consider an arbitrary $x\in B\setminus S(A)$. 
Then also $x\in A\setminus S(A)$ and by $S(A)$'s being externally $S$-stable in~$A$, we obtain $S(A)=S(S(A)\cup\set x)$. It follows that $S(A)$ is also externally $S$-stable in~$B$. For the other direction, assume~$S$ satisfies both~$\widehat\alpha$ and~$\widehat\gamma$ and consider an arbitrary feasible set~$A$. To show that~$S$ satisfies internal $S$-stability, observe that trivially $S(A)\subseteq S(A)\subseteq A$. Hence, by \lemref{lemma:setSSP}, $S(A)=S(S(A))$. To appreciate that~$S$ also satisfies external $S$-stability, let $x\in A\setminus S(A)$. Then, $S(A)\subseteq S(A)\cup\set{x}\subseteq A$ and, again by \lemref{lemma:setSSP}, $S(A)=S(S(A)\cup\set x)$. To see that $\widehat S$ is well-defined, consider an arbitrary $S$-stable set~$Y$ in~$A$ and let $A\setminus Y=\set{x_1,\ldots,x_k}$, \ie $A=Y\cup\set{x_1,\ldots,x_k}$. By external $S$-stability of~$Y$, then $S(Y\cup\set{x_i})=S(Y)$ for each~$i$ with $1\le i\le k$. Thus by $k-1$ applications of $\widehat\gamma$, we obtain $S(Y)=S(Y\cup\set{x_1,\ldots,x_k})=S(A)$. \end{proof} As an immediate consequence of \thmref{thm:selfstabilityiffhatalphahatgamma} and the observation that~$\widehat\gamma$ implies the generalized Condorcet condition, we have the following corollary. \begin{corollary}\label{cor:condorcet} Every self-stable choice function is a generalized Condorcet extension. \end{corollary} \noindent Examples of self-stable Condorcet extensions will be given in \secref{sec:condorcet}. \section{Social Choice}\label{sec:social_choice} In this section, we assess the consequences of the reflections in the previous two sections on the theory of social choice. Before we do so, however, we introduce some additional terminology and notation. \subsection{Social Choice Functions} We consider a finite set~$N=\{1,\dots,n\}$ of at least two agents. Each agent~$i$ entertains preferences over the alternatives in~$U$, which are represented by a transitive and complete preference relation~$R_i$. 
In some cases, we will assume preferences to be linear, \ie also satisfying anti-symmetry, but otherwise we impose no further restrictions on preference relations. We write~$a \mathrel{R_i} b$ to denote that agent~$i$ values alternative~$a$ at least as much as alternative~$b$. We write~$P_i$ for the strict part of~$R_i$, \ie~$a \mathrel{P_i} b$ if~$a \mathrel{R_i} b$ but not~$b \mathrel{R_i} a$. Similarly,~$I_i$ denotes~$i$'s indifference relation, \ie $a \mathrel{I_i} b$ if both~$a \mathrel{R_i} b$ and~$b \mathrel{R_i} a$. The set of all preference relations over the universal set of alternatives $U$ will be denoted by~$\mathcal{R}(U)$. The set of \emph{preference profiles}, with typical element~$R=(R_1,\ldots,R_n)$, is then given by~$\mathcal R(U)^N$. The central objects of study in this section are \emph{social choice functions}, \ie functions that map the individual preferences of the agents and a feasible set to a set of socially preferred alternatives. \begin{definition} A \emph{social choice function (SCF)} is a function $f:\mathcal{R}(U)^N\times\mathcal{F}(U) \rightarrow \mathcal{F}(U)$ such that $f(R,A)\subseteq A$ for all preference profiles~$R$ and feasible sets~$A$. \end{definition} Clearly, every SCF~$f$ together with a preference profile~$R$ in~$\mathcal R(U)^N$ defines a choice function $S_{f,R}$ on $U$ in a natural way by letting for each feasible set~$A$, $S_{f,R}(A)= f(R,A)$. We say that $f$ satisfies WARP, rationalizability, or any other condition defined for choice functions, if $S_{f,R}$ does for every preference profile~$R$. \emph{Pareto-optimality}, \emph{independence of irrelevant alternatives}, and \emph{non-dictatorship} are conditions that are more specifically defined for SCFs. Pareto-optimality requires that an alternative should not be chosen if there exists another alternative that \emph{all} agents unanimously prefer to the former.
\begin{definition}\label{def:Pareto_optimality} An SCF $f$ satisfies (pairwise) \emph{Pareto-optimality} if for all preference profiles~$R$ and all alternatives $a,b$, if $b \mathrel{P_i} a$ for all $i\in N$ then $a\not\in f(R,\{a,b\})$. \end{definition} Independence of irrelevant alternatives reflects the idea that choices from a set of feasible alternatives should not depend on preferences over alternatives that are not contained in this set. \begin{definition} An SCF $f$ satisfies \emph{independence of irrelevant alternatives (IIA)} if $f(R,A)=f(R',A)$ for all feasible sets~$A$ and preference profiles $R,R'$ such that $R|_A=R'|_{A}$. \end{definition} In the context of SCFs, IIA constitutes no more than a framework requirement for social choice. Another minimal requirement for any SCF is that it should be sensitive to the preferences of more than one agent. In particular, there should not be a single agent who can enforce the inclusion of alternatives in the choice set no matter which preferences the other agents have. Such an agent is usually called a (weak) \emph{dictator}.\footnote{For presentational purposes we employ the notion of a \emph{weak dictator} or \emph{vetoer} in all impossibility theorems, although \thmref{thm:arrow} holds for an even weaker notion of non-dictatorship.} \begin{definition}\label{def:Non-dictatorship} An SCF $f$ is (pairwise) \emph{non-dictatorial} if there is no agent~$i$ such that for all preference profiles~$R$ and alternatives $a,b$, if $a \mathrel{P_i} b$ then $a\in f(R,\{a,b\})$. \end{definition} \defref{def:Pareto_optimality} through \defref{def:Non-dictatorship} are also referred to as the \emph{Arrovian conditions}. Other useful and frequently imposed requirements on SCFs are \emph{neutrality}, \emph{anonymity}, and \emph{positive responsiveness}. Neutrality can be seen as a strengthening of IIA and requires SCFs to be invariant under renaming alternatives, \ie all alternatives are to be treated equally. 
\begin{definition} An SCF $f$ is \emph{neutral} if $\pi(f(R,A))=f(R',A)$ for all feasible sets $A$, preference profiles $R$, $R'$, and permutations $\pi:A\rightarrow A$ such that $a \mathrel{R'_i} b$ if and only if $\pi(a)\mathrel{R_i}\pi(b)$ for all alternatives $a,b$ and agents~$i$. \end{definition} By contrast, anonymity requires that SCFs be invariant under renaming agents and as such is a strong variant of non-dictatorship. \begin{definition} An SCF $f$ is \emph{anonymous} if $f(R,A)=f(R',A)$ for all feasible sets~$A$, preference profiles~$R$ and $R'$, and permutations $\pi:N\rightarrow N$ such that $R'_i=R_{\pi(i)}$ for all agents~$i$. \end{definition} It also appears reasonable to demand that SCFs are monotonic in the sense that increased support may not hurt an alternative. \begin{definition} An SCF $f$ is (pairwise) \emph{positive responsive} if for all alternatives~$a,b$ and all preference profiles $R$, $R'$ for which there is some agent~$i$ such that $R_j=R'_j$ for all agents $j\ne i$ and either both $a\mathrel{I_i}b$ and $a\mathrel{P'_i}b$ or both $b\mathrel{P_i}a$ and $a\mathrel{R'_i}b$, \[ \text{if $a\in f(R,\set{a,b})$ then $f(R',\set{a,b})=\set a$.} \] \end{definition} \subsection{Impossibility Results} Famously, \citeauthor{Arro51a}'s general impossibility theorem, as formulated for SCFs, states that no SCF that satisfies all of the Arrovian conditions exists. \begin{theorem}[\citeauthor{Arro51a}, \citeyear{Arro51a,Arro59a}]\label{thm:arrow} No SCF satisfies Pareto-optimality, IIA, WARP, and non-dictatorship. \end{theorem} As the Arrovian conditions cannot be satisfied by any SCF, at least one of them needs to be excluded or relaxed to obtain positive results. Clearly, dropping non-dictatorship is unacceptable and, as already mentioned, IIA merely states that the SCF represents a reasonable model of preference aggregation \citep[see, \eg][]{Schw86a,BoTi91a}.
\citet{Wils72a} has shown that without Pareto-optimality only SCFs that are constant (\ie completely unresponsive) or fully determined by the preferences of a single agent are possible. Moreover, it could be argued that not requiring Pareto-optimality runs counter to the very idea of \emph{social} choice. Accordingly, the only remaining possibility is to exclude WARP. Imposing weaker rationality conditions than WARP, however, offers little relief as it turns out that the vicious essence of Arrow's impossibility remains. There is a range of results stating the impossibility of SCFs satisfying weaker versions of WARP in a satisfactory way \citep[see, \eg][]{Kell78a,Schw86a,CaKe02a,Bank95a}. Among these, the results by \citet{MCSo72a} and \citet{BlDe77a} deserve special mention as they concern rationalizability instead of WARP. We will employ a variant of \citeauthor{BlDe77a}'s theorem due to \citet{AuBa00a}. \begin{theorem}[\citeauthor{MCSo72a}, \citeyear{MCSo72a}]\label{thm:MCSo72a} No SCF satisfies Pareto-optimality, positive responsiveness, IIA, rationalizability, and non-dictatorship, provided that $n> 3$. \end{theorem} By strengthening IIA to neutrality and assuming that the number of alternatives exceeds the number of agents, positive responsiveness is no longer required. \begin{theorem}[\citeauthor{AuBa00a}, \citeyear{AuBa00a}]\label{thm:AuBa00a} No SCF satisfies Pareto-optimality, neutrality, rationalizability, and non-dictatorship, provided that $|U|> n$. \end{theorem} For further characterizations of rationalizable social choice the reader is referred to \citet{Moul85b}, \citet{Bank95a}, and \citet{AuBa00a}. \subsection{Condorcet Extensions and Scoring Rules} In light of the severe problems that $\alpha$ and $\gamma$ entail in social choice, we now investigate which of the well-known SCFs satisfy~$\widehat\alpha$ and~$\widehat\gamma$. We focus on two types of SCFs, namely \emph{Condorcet extensions} and \emph{scoring rules}. 
\subsubsection{Condorcet Extensions}\label{sec:condorcet} Despite the Arrovian impossibility results, social choice over two alternatives is unproblematic. \citet{May52a} has shown that the \emph{simple majority rule}---choosing the alternative that a majority prefers to the other alternative and, in case of a tie, both alternatives---can be characterized by neutrality, anonymity, and positive responsiveness. Thus, it seems reasonable to require of SCFs~$f$ that they reflect majority rule on pairs. Extending any such SCF~$f$ to feasible sets with more than two alternatives, one immediately runs into arguably one of the earliest Arrovian impossibility results, viz. the \emph{Condorcet paradox} \citep{Cond85a}. Consider the preference profile depicted in \tabref{tab:Condorcet}. \begin{table} \[ \begin{array}{ccc} 1 & 1 & 1\\ \midrule a &c &b\\ b &a &c\\ c &b &a \end{array} \quad\quad\quad\quad \begin{array}{ll} \set{x,y} & f(R,\set{x,y})\\\midrule \set{a,b} & \set{a}\\ \set{b,c} & \set{b}\\ \set{a,c} & \set{c} \end{array} \] \caption{\label{tab:Condorcet}On the left a preference profile (figures indicate numbers of agents) leading to the Condorcet paradox. On the right the corresponding choice function on pairs if determined by majority rule.} \end{table} Then, if $f$ on pairs is determined by majority rule, the base relation~$\overline R_{S_{f,R}}$ fails to be acyclic, and therefore $S_{f,R}$ does not satisfy~$\alpha$. Observe, moreover, that $f(R,\set{a,b,c})=\set{a,b,c}$ if~$f$ satisfies~$\widehat\alpha$. For suppose otherwise; then, without loss of generality, we may assume that either $f(R,\set{a,b,c})=\set{a}$ or $f(R,\set{a,b,c})=\set{a,b}$. Assuming that~$f$ satisfies~$\widehat\alpha$, however, the former is at variance with $f(R,\set{a,c})=\set{c}$, and the latter with $f(R,\set{a,b})=\set{a}$. 
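The cyclic majority comparisons behind the paradox are easy to verify mechanically. The following Python sketch (all names are ours and purely illustrative) tallies the pairwise majorities for the three-agent profile of \tabref{tab:Condorcet} and recovers the choice function on pairs shown there:

```python
from itertools import combinations

# The Condorcet profile: one agent with each of the rankings
# a > b > c,  c > a > b,  b > c > a  (best alternative first).
profile = [["a", "b", "c"], ["c", "a", "b"], ["b", "c", "a"]]

def majority_choice(x, y, rankings):
    """Simple majority rule on the pair {x, y}: choose the alternative
    a majority prefers to the other; choose both in case of a tie."""
    prefer_x = sum(r.index(x) < r.index(y) for r in rankings)
    prefer_y = len(rankings) - prefer_x
    if prefer_x > prefer_y:
        return {x}
    if prefer_y > prefer_x:
        return {y}
    return {x, y}

choices = {frozenset(pair): majority_choice(*pair, profile)
           for pair in combinations("abc", 2)}
# a beats b, b beats c, and c beats a: the base relation is cyclic,
# so no acyclic relation rationalizes the choices on pairs.
```

Running the sketch yields $\{a\}$, $\{b\}$, and $\{c\}$ on the pairs $\{a,b\}$, $\{b,c\}$, and $\{a,c\}$, respectively, exactly as in the right half of \tabref{tab:Condorcet}.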
Thus, $S_{f,R}$ coincides with the choice function~$S$ of \exref{ex:setrevrel} and \figref{fig:Srevprefrel} depicts its weak revealed preference relation~$\widehat R_{S_{f,R}}$. By \thmref{thm:selfstabilityiffhatalphahatgamma}, the class of SCFs that satisfy both $\widehat\alpha$ and~$\widehat\gamma$ consists precisely of all self-stable SCFs. By virtue of \thmref{thm:SetRatiffhatalphahatgamma} and \coref{cor:condorcet} the SCFs in this class are all set-rationalizable generalized Condorcet extensions. May's characterization furthermore implies that all self-stable SCFs that satisfy anonymity, neutrality, and positive responsiveness are Condorcet extensions. Among them are well-known rules like \emph{weak closure maximality} (also known as the \emph{top cycle}, \emph{GETCHA}, or the \emph{Smith set}), the \emph{minimal covering set}, the \emph{essential set}, and their generalizations \citep{Bord76a,DuLa99a,Lasl00a}.\footnote{\citet{Bran08a} defines an infinite hierarchy of self-stable SCFs. If we assume an odd number of agents with linear preferences, the class of self-stable SCFs is also conjectured to contain the \emph{tournament equilibrium set} \citep{Schw90a} and the \emph{minimal extending set}. Whether this is indeed the case depends on a certain graph-theoretic conjecture \citep{LLL93a,Bran08a}.} Interestingly, well-known Condorcet extensions that satisfy only one of~$\widehat\alpha$ and $\widehat\gamma$ appear to be less common. Still, Schwartz's \emph{strong} closure maximality \citep{Schw72a}, which he refers to as GOCHA, is an example of an SCF that satisfies~$\widehat\gamma$ but not~$\widehat\alpha$. By contrast, the \emph{iterated uncovered set} \citep[\eg][]{Dutt88a} satisfies~$\widehat\alpha$ but not~$\widehat\gamma$. When pairwise choice is determined via majority rule, the set of \emph{weak Condorcet winners} for a given preference profile~$R$ and feasible set~$A$ is defined as $\{a\in A \colon a\in f(R,\{a,b\}) \text{ for all }b\in A\}$. 
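This definition translates directly into a computation over pairwise majority choices. The following Python sketch (names are ours and purely illustrative) returns the set of weak Condorcet winners; for the cyclic profile of \tabref{tab:Condorcet} the set comes out empty, while a profile with an unbeaten alternative yields a singleton:

```python
def majority_choice(x, y, rankings):
    """Majority rule on {x, y}; both alternatives in case of a tie."""
    prefer_x = sum(r.index(x) < r.index(y) for r in rankings)
    if 2 * prefer_x > len(rankings):
        return {x}
    if 2 * prefer_x < len(rankings):
        return {y}
    return {x, y}

def weak_condorcet_winners(A, rankings):
    """{a in A : a in f(R, {a, b}) for all b in A}, with f = majority rule."""
    return {a for a in A
            if all(a in majority_choice(a, b, rankings)
                   for b in A if b != a)}

# Cyclic profile: one agent each with a > b > c, c > a > b, b > c > a.
cyclic = [["a", "b", "c"], ["c", "a", "b"], ["b", "c", "a"]]
# Six agents: 3 with a > c > b, 2 with b > a > c, 1 with c > b > a.
# Alternative a ties with b and beats c, so it is the unique winner.
skewed = [["a", "c", "b"]] * 3 + [["b", "a", "c"]] * 2 + [["c", "b", "a"]]
```

In the cyclic profile every alternative is majority-beaten by some other, so the set of weak Condorcet winners is empty; the six-agent profile is the one of \tabref{tab:Scoring_rules} used in the proofs below.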
An SCF is called a \emph{weak Condorcet extension} if it returns the set of weak Condorcet winners whenever this set is non-empty. Clearly, every weak Condorcet extension is a Condorcet extension. The converse is not generally the case, but many Condorcet extensions (such as Kemeny's rule, Dodgson's rule, Nanson's rule, and the minimax rule) are also weak Condorcet extensions \citep[see][]{Fish77a}. It turns out that no weak Condorcet extension is set-rationalizable. \begin{theorem}\label{thm:NoWChatalpha} No weak Condorcet extension satisfies~$\widehat\alpha$. \end{theorem} \begin{proof} Let $f$ be a weak Condorcet extension and consider the linear preference profile~$R$ with preferences over~$A=\set{a,b,c}$ as given in \tabref{tab:Scoring_rules}. \begin{table} \[ \begin{array}{cccl} 3 & 2 & 1\\\cmidrule{1-3} a & b & c\\ c & a & b\\ b & c & a \end{array} \] \caption{\label{tab:Scoring_rules} A preference profile (figures indicate numbers of agents) showing that no weak Condorcet extension and scoring rule satisfies~${\widehat\alpha}$. For every weak Condorcet extension and every scoring rule the choice function for this profile is as in \exref{ex:setrevrel2} (also compare \figref{fig:Srevprefrel2}).} \end{table} Since alternative~$a$ is the unique weak Condorcet winner---three out of six agents prefer it to~$b$ and all agents but one prefer it over~$c$---$f(R,\{a,b,c\})=\{a\}$. Now observe that the preferences of the same agents over the subset~$\set{a,b}$ are such that three agents prefer~$a$ to~$b$ and three~$b$ to~$a$. Accordingly, $f(R,\set{a,b})=\set{a,b}$. As $c\notin f(R,\set{a,b,c})$ but $f(R,\set{a,b,c})\neq f(R,\set{a,b})$, we may conclude that~$f$ does not satisfy~$\widehat\alpha$. \end{proof} \subsubsection{Scoring Rules} Scoring rules are based on the idea that the voters each rank the alternatives in a feasible set according to their preferences, which, for technical convenience we will here assume to be linear. 
Each time an alternative is ranked $m$th by some voter it gets a particular score~$s_m$. The scores of each alternative are then added and the alternatives with the highest cumulative score are selected. The class of scoring rules includes several well-known SCFs, like the \emph{Borda rule}---alternative~$a$ gets $k$ points from agent~$i$ if~$i$ prefers~$a$ to $k$ other alternatives---and the \emph{plurality rule}---the cumulative score of an alternative equals the number of agents by which it is ranked first. Formally, we define a \emph{score vector of length~$k$} as a vector $s=(s_1,\ldots,s_k)$ in~$\mathbb R^k$ such that $s_1\ge\cdots\ge s_k$ and $s_1>s_k$. For example, $(1,0,0)$, $(2,1,0)$, and $(1,1,0)$ are the score vectors of length~$3$ for the \emph{plurality rule}, the \emph{Borda rule}, and the \emph{anti-plurality rule}, respectively. Given a feasible set~$X$ of~$k$ alternatives, an $x\in X$, and a linear preference profile~$R$, we let $s(x,i)$ denote the score alternative~$x$ obtains from voter~$i$, \ie $s(x,i)=s_m$ if and only if $x$ is ranked $m$th by~$i$ within~$X$. Then, the (cumulative) score~$s(x)$ of an alternative~$x$ within~$X$ given~$R$ is defined such that \[ s(x)= \sum_{i\in N}s(x,i)\text. \] A \emph{scoring rule} is an SCF that selects from each feasible set~$X$ for each preference profile the set of alternatives~$x$ in~$X$ with the highest score~$s(x)$ according to some score vector~$s$ of length~$|X|$. Observe that no restrictions are imposed on how the score vectors for different lengths are to be related. As every scoring rule fails to select the Condorcet winner for some preference profile \citep{Fish73a} and coincides with majority rule on two alternatives, scoring rules generally do not satisfy~$\widehat\gamma$. We find that no scoring rule can satisfy~$\widehat\alpha$ either. It follows that no scoring rule is set-rationalizable. \begin{theorem}\label{thm:NoSRhatalpha} No scoring rule satisfies~$\widehat\alpha$. 
\end{theorem} \begin{proof} Let $f$ be a scoring rule. Let further $s=(s_1,s_2,s_3)$ and $s'=(s'_1,s'_2)$ be its associated score vectors of lengths~3 and~2, respectively. Without loss of generality we may assume that $s_1=1$ and $s_3=0$. Consider the preference profile~$R$ with preferences over~$A=\set{a,b,c}$ as depicted in \tabref{tab:Scoring_rules}. Then, $s(a)=3+2s_2$, $s(b)=2+s_2$ and $s(c)=1+3s_2$. Since $0\le s_2\le 1$, it can easily be appreciated that $s(a)>s(b)$ as well as $s(a)>s(c)$. Hence, $f(R,\set{a,b,c})=\set{a}$. As in the proof of \thmref{thm:NoWChatalpha}, $f(R,\set{a,b})=\set{a,b}$ and we may conclude that~$f$ does not satisfy~$\widehat\alpha$. \end{proof} Using the same example as in the proof of \thmref{thm:NoSRhatalpha}, the reader can easily verify that all scoring run-off rules---such as single transferable vote (STV)---also fail to satisfy~$\widehat\alpha$ and as such are not set-rationalizable. An interesting question in this context is whether the impossibility shown in \thmref{thm:NoSRhatalpha} can be generalized to \emph{rank-based} SCFs, \ie SCFs that merely take into account the positions of alternatives in the individual rankings \citep{Lasl96a}. It turns out that this is not the case because, for instance, the (rather unattractive) SCF that chooses all alternatives that are ranked first by at least one voter is rank-based and satisfies $\widehat\alpha$.\footnote{This SCF has also been mentioned by \citet{Gard76a} and \citet{Kell77a}. \citet{Tayl05a} calls it the \emph{omninomination rule}.} It might also be worth observing that this SCF happens to satisfy~$\widehat\gamma$ and Pareto-optimality as well. \section{Summary and Conclusion} Problems relating to the possibility of reasonable social choice functions have proved to be rather tenacious. 
In particular, attempts to circumvent \citeauthor{Arro51a}'s impossibility result by replacing the weak axiom of revealed preference (WARP), which requires a transitive and complete preference relation on alternatives underlying choice, by weaker conditions on the underlying preference relation have generally failed to deliver. By weakening WARP to set-rationalizability, we have shown that social choice functions that also satisfy the other Arrovian postulates do exist. These social choice functions are generally characterized by their satisfying~$\widehat\alpha$. This condition, which also goes by the names of strong superset property, $5^*$, and outcast, is no stranger in choice theory, but nevertheless has played a surprisingly small role therein. In an early publication, \citeauthor{Cher54a} writes that ``{\it postulate~$\mathit{5^*}$ is not imposed in our definition of a rational solution}'' \citep[][page~430]{Cher54a} and \citeauthor{AiAl95a} chime in by stating that ``\textit{this property} [$\widehat\alpha$] \textit{did not find wide use in the choice theory literature}'' \citep[][page~21]{AiAl95a}. The characterization of set-rationalizable choice functions via $\widehat\alpha$ can be interpreted as a strong argument for this postulate. By also imposing~$\widehat\gamma$, the set-expansion property that forms the counterpart of~$\widehat\alpha$, we obtain the class of self-stable generalized Condorcet extensions. In the context of social choice, this class comprises appealing social choice functions like the minimal covering set and the essential set, yet excludes other well-known rules like all scoring rules, all scoring runoff rules, and all weak Condorcet extensions. As such, self-stability defines a fascinating class of social choice functions, which offers an interesting way around the impossibility results that have haunted social choice theory for such a long time. 
\section*{Acknowledgements} We thank Nicholas Baigent and Cristopher Tyson for helpful suggestions. This material is based upon work supported by the Deutsche Forschungsgemeinschaft under grant BR~2312/3-3.
\subsection{Existence and uniqueness under non-resonance conditions} Consider the boundary value problem \begin{equation}\label{eqnonres} X_t^\prime + \int_0^t f(s,X_s) ds = X_0^\prime + \omega_t,\qquad X_0=X_1=0, \;\; \omega \in \Omega, \end{equation} and assume that $f:[0,1]\times {\mathbb R} \to{\mathbb R}$ is continuous and differentiable with respect to its second argument with bounded derivative. \noindent By Lemma \ref{equiv2} and Proposition \ref{serve}, solvability of \eqref{eqnonres} is proved if, for any $\omega \in \Omega$, there exists a unique function $u \in C^1_0$ which satisfies \begin{align} \label{fo} u_t - {\cal K}(f(\cdot, u ))(t) = \int_0^1\frac{\partial K}{\partial s}(t,s)\omega_s ds,\;\; t \in [0,1]. \end{align} Write $ H =L^2(0,1)$ and introduce \begin{equation}\label{mapPhi} \Phi:H \longrightarrow H, \qquad u \mapsto f(\cdot, u (\cdot) ). \end{equation} Notice that the existence and uniqueness of the solution of (\ref{fo}) for every $\omega\in \Omega$ are guaranteed, {\it in particular}, if the map $$ (I - {\cal K}\Phi):H \longrightarrow H, \qquad u \mapsto u - {\cal K}(f(\cdot, u (\cdot))) $$ is a global homeomorphism. In order to apply a variant of the abstract global implicit function theorem (cf. \cite[Theorem 3.9, page 29]{CH}) to (\ref{mapPhi}), we shall need the following \begin{lemma}\label{operatori} (\cite[Lemma 3.4, page 95]{CH}) Let $M$ be a real Hilbert space and ${ K}:M \to M$ be a compact, symmetric, positive definite operator. Let $0<\lambda_1 \leq \lambda_2 \leq \dots \leq \lambda_n \leq \dots$ be its eigenvalues (counted according to their multiplicity). 
Consider a family ${\cal A}$ of symmetric linear operators on $M$, and assume that, for some $n \ge 1$, there exist $\mu_n,\mu_{n+1}$ such that \begin{equation} \label{ipooper} \lambda_n I < \mu_n I \leq A \leq \mu_{n+1} I < \lambda_{n+1} I \end{equation} for each $A \in {\cal A}$. Then, for each $A \in {\cal A}$, the linear map $F: M \to M$, $x \mapsto x - { K} A x$, has a bounded inverse and there exists $N>0$ such that \begin{equation} \label{stimaoper} \|(I - { K} A)^{-1}\|_{{\cal L}(M, M)} \leq N, \qquad {\rm for \, all}\,\, A \in {\cal A}. \end{equation} \end{lemma} \noindent We can now state and prove the main result of this section. \begin{theorem}\label{Nonres} Assume that \begin{equation}\label{ipononres} \pi^2m^2< h \leq \frac{\partial f}{\partial x}(t,x) \leq k < \pi^2(m+1)^2, \quad t\in [0,1],\; x\in {\mathbb R}, \end{equation} where $m\geq 0$ is an integer and $h,k$ are real constants. Then (\ref{eqnonres}) has a unique solution. \end{theorem} \noindent The assumption on $ \frac{\partial f}{\partial x}$ is a non-resonance condition in the sense that zero is the only solution to the BVP associated to the linear problem $ v''_t+ \frac{\partial f}{\partial x}(\tau,\xi)\, v_t = 0,$ for any fixed $\tau \in [0,1]$ and $\xi \in {\mathbb R}$. \begin{proof} We only give a sketch of the proof, since it is similar to the second proof of \cite[Theorem 3.3, page 93]{CH}. This proof consists of an application of \cite[Theorem 3.9, page 29]{CH} and Lemma \ref{operatori}. As mentioned above, we have to show that $(I - {\cal K} \Phi)$ is a global homeomorphism from $H$ onto $H$. To this end, it is sufficient to check that $\Phi$ in \eqref{mapPhi} is of class $C^1$ on $H$ and that $(I - {\cal K}D\Phi(u))^{-1}$ exists, for any $u \in H$ ($D\Phi(u)$ being the Fr\'echet derivative of $\Phi$ at $u \in H$) and satisfies, for some $N>0,$ the inequality \begin{equation}\label{difi} \|(I - {\cal K} D\Phi(u))^{-1}\|_{{\cal L}(H,H)} \leq N, \qquad {\rm for \, all} \,\, u \in H. 
\end{equation} \noindent From the assumptions on $f$, it follows that $\Phi$ is of class $C^1$. In order to verify \eqref{difi}, it suffices to apply Lemma \ref{operatori} with $M=H, K={\cal K}$, $\lambda_n=(n \pi)^2$, taking as $\cal A$ the family of all bounded linear operators on $H$ defined by $Ay(t)= D\Phi(u) [y](t)$ $ = {\frac{\partial f }{\partial x}}(t, u(t))y(t)$, for every $u \in H$. It is clear that the non-resonance hypothesis allows us to apply Lemma \ref{operatori}. \end{proof} \noindent We close this section with a short discussion of the Fredholm alternative in our context. Consider a linear BVP for which \begin{equation}\label{linear} f(t,X_t,X_t^\prime)=\mu X_t, \end{equation} with $\mu>0$ a real constant. By Lemma \ref{equiv2} we know that (\ref{np}) with (\ref{linear}) is equivalent to the linear integral equation \begin{equation} (I-\mu {\cal K})X(\omega)=Y(\omega), \;\;\; \omega \in \Omega, \label{fredholm} \end{equation} with $Y$ given by (\ref{g}). The operator ${\cal K}$ is self-adjoint in $L^2(0,1)$; its eigenvalues are $\frac {1}{k^{2}\pi^{2}}$, with corresponding eigenfunctions $\sin(k\pi t)$, for $k$ integer, $k \ge 1$. The classical Fredholm alternative states that, if $\mu\neq k^2\pi^2$, then $\ker(I-\mu{\cal K})=\{0\}$ and (\ref{fredholm}) admits a unique solution $X(\omega)=(I-\mu{\cal K})^{-1}Y(\omega) $, while for $\mu=k^2\pi^2$ there exist solutions if and only if $Y$ is orthogonal in $L^{2}$ to the corresponding eigenfunctions of ${\cal K}$. \noindent In our case, the requirement that $Y$ be orthogonal in $L^2(0,1)$ to the eigenfunction $\sin(k\pi t)$ yields \begin{align}\label{ortho} \nonumber \int_{0}^{1} Y_t(\omega)\sin(k\pi t)\,dt =- \int_{0}^{1}\sin(k\pi t)\, \Big( \int_{0}^{1} { K}(t,s)d\omega_s \Big) dt \\ = - \int_{0}^{1}\, \Big( \int_{0}^{1} { K}(t,s) \sin(k\pi t) dt \Big) d\omega_s = - \frac{1}{k^2 \pi^2} \int_{0}^{1} \sin(k\pi s) d\omega_s =0. 
\end{align} However, the stochastic integral $\frac{1}{k^2\pi^2} \int_{0}^{1}\sin (k\pi s )\,d\omega_s $ is a non-degenerate Gaussian random variable (with mean 0 and variance $ \frac{1}{k^4\pi^4} \int_{0}^{1}\sin^2 (k\pi s )\,ds= \frac{1}{2 k^4\pi^4}$). It follows that the probability that (\ref{ortho}) holds is zero. This implies that (\ref{linear}) does not have a solution for $\sqrt\mu=k\pi$. \noindent Hence, we have proved \begin{proposition}\label{fredstoc} (i) If $\mu \neq k^2 \pi^2$ for every integer $k \ge 1$, the linear Dirichlet BVP associated to (\ref{linear}) has a unique solution. \par \noindent (ii) If $\mu = k^2 \pi^2$ for some $k \ge 1$, the linear Dirichlet BVP associated to (\ref{linear}) has no solution. \end{proposition} \begin{remark} {\rm As in the deterministic case, the above result can also be deduced from the explicit expression of the solution using Fourier series. } \end{remark} \begin{remark} {\rm A standard argument shows that the above result still holds in the general case \begin{equation}\label{linear0} f(t,X_t,X_t^\prime)=a X_t+b X_t^\prime ,\qquad a,b\in{\mathbb R}, \end{equation} where the condition for the existence and uniqueness of the solution of (\ref{eqnonres}) is now $a-b^2/4\ne k^2\pi^2$, with $k\in{\mathbb Z}$. } \end{remark} \subsection{Existence and uniqueness under Lipschitz-type conditions} In this section we give some other existence and pathwise uniqueness results for our BVP, taking into account Proposition \ref{serve} and using some tools of the theory of classical nonlinear ODEs. To this end, we will consider the solution $Y$ (see \eqref{y1}). Let $\omega \in \Omega$ and define $\hat f: [0,1] \times {\mathbb R}^2 \to {\mathbb R}$ by $$ \hat f (t,x,y):= f(t,x+Y_t(\omega),y+Y_t^\prime(\omega)).$$ A straightforward computation leads to \begin{lemma}\label{funge} Let $\omega \in \Omega$ be fixed. 
A function $u \in C^1_0 $ is a solution of \begin{equation} \label{vecchia} \left\{\begin{array}{l} u_t^\prime + \int_0^t f(s,u_s,u_s^\prime) ds = u_0^\prime + \omega_t\\ u_0=0=u_1 \end{array} \right. \end{equation} if and only if $z_t:= u_t-Y_t(\omega)$ belongs to $C^2([0,1])$ and is a solution of \begin{equation} \label{nuovaODE} \left\{\begin{array}{l} z_t^{\prime\prime}+ \hat f (t,z_t,z_t^\prime)=0 \\ z_0=0=z_1. \end{array} \right. \end{equation} \end{lemma} \noindent Note that, as a consequence of its definition, the function $\hat f$ has the same regularity as $f$ with respect to the second and third arguments. \noindent Lemma \ref{funge} allows us to apply the classical existence and uniqueness results for boundary value problems by Bailey, Shampine and Waltman \cite{BSW}. To do this, let $K,L$ be real numbers and define \begin{equation} \label{DefAlfa} \alpha(L,K)=\left\{\begin{array}{ll} \frac{2}{\sqrt{4K-L^2}} \arccos \frac{L}{2\sqrt{K}} & \mbox{ if } 4K-L^2 >0 \\[6pt] \frac{2}{\sqrt{L^2-4K}}\, \text{arccosh} \frac{L}{2\sqrt{K}} & \mbox{ if } 4K-L^2 <0,L>0,K>0 \\[6pt] \frac{2}{L} & \mbox{ if } 4K-L^2 =0,L>0 \\[6pt] +\infty & \mbox{ otherwise } \end{array} \right. \end{equation} \noindent and \begin{equation} \label{DefBeta} \beta(L,K)=\alpha(-L,K). \end{equation} The first result of \cite{BSW} that we use here is based on the contraction mapping principle, and its proof consists in showing the existence and uniqueness of a fixed point of an operator defined through the Green's function for problem $(\ref{nuovaODE})$ (analogous to the integral operator introduced in Section 2). However, more work is needed in order to get an optimal result. \begin{theorem} \label{PrimoLip} (\cite[Theorem 3.5]{BSW}). Assume that there exist $K,L$ such that \begin{equation}\label{IpoLipPrima} |\hat f (t,x,y)-\hat f (t,\tilde x, \tilde y)| \leq K|x- \tilde x|+L|y- \tilde y|, \end{equation} for all $t\in [0,1]$ and for all $x,\tilde x,y, \tilde y \in {\mathbb R}$. 
Assume also that $1<2\alpha(L,K)$. Then $(\ref{nuovaODE})$ has a unique solution. \end{theorem} \begin{remark} {\rm The above result is optimal, in the sense that neither existence nor uniqueness is guaranteed when $1=2\alpha(L,K)$. } \end{remark} \noindent Recalling Proposition \ref{serve} and Lemma \ref{funge}, we obtain \begin{corollary} \label{PrimoCor} Assume that there exist $K,L$ such that \begin{equation}\label{IpoPrimaCor} |f (t,x,y)-f (t,\tilde x, \tilde y)| \leq K|x- \tilde x|+L|y- \tilde y|, \end{equation} for all $t\in [0,1]$ and for all $x,\tilde x,y, \tilde y \in {\mathbb R}$. Assume also that $1<2\alpha(L,K)$. Then \eqref{np} has a unique solution. In particular, if \begin{equation}\label{Inpartic} |f (t,x,y)-f (t,\tilde x, \tilde y)| \leq L(|x- \tilde x|+|y- \tilde y|), \end{equation} for all $t,x,\tilde x,y, \tilde y$ and $0<L<4$, then \eqref{np} has a unique solution. \end{corollary} \begin{proof} It is sufficient to apply Theorem $\ref{PrimoLip}$, Lemma $\ref{funge}$ and the definition of $\hat f$. As for the particular case when \eqref{Inpartic} holds, it is easy to check that if $0<L<4$ then we can get \begin{equation}\label{alfa} 1< \frac{4}{\sqrt{4L-L^2}} \arccos \frac{\sqrt{L}}{2}. \end{equation} From the definition of $\alpha$ it follows that the above inequality is equivalent to $1<2\alpha(L,L)$ and thus Theorem $\ref{PrimoLip}$ applies with $K=L$. \end{proof} \noindent Corollary $\ref{PrimoCor}$ improves Proposition 1.4 in \cite{NP}, which shows existence and uniqueness under the assumption that \begin{equation}\label{LipNP} |f (t,x,y)-f (t,\tilde x, \tilde y)| \leq L(|x- \tilde x|+|y- \tilde y|), \end{equation} for all $t\in [0,1]$ and for all $x,\tilde x,y, \tilde y \in {\mathbb R}$, and $L<1/3$. \noindent Corollary $\ref{PrimoCor}$ can be further improved by means of a generalized Lipschitz condition. To this end, we recall \begin{theorem} \label{SecondoLip} (\cite[Theorem 7.6]{BSW}). 
Assume that $\hat f$ is locally Lipschitz and that there exist $K,L_1,L_2$ such that \begin{equation}\label{IpoLipSec} \hat f (t,x,y)-\hat f (t,\tilde x, y) \leq K(x-\tilde x), \end{equation} for all $x \geq \tilde x, t\in [0,1], y\in {\mathbb R}$, \begin{equation}\label{IpoLipTerza} L_1(y-\tilde y) \leq \hat f (t,x,y)-\hat f (t,x, \tilde y) \leq L_2 (y-\tilde y), \end{equation} for all $y \geq \tilde y, t\in [0,1], x\in {\mathbb R}$. Assume also that $1<\alpha(L_2,K)+\beta(L_1,K)$. Then $(\ref{nuovaODE})$ has a unique solution. \end{theorem} \noindent Arguing as above, we obtain \begin{corollary} \label{SecondoCor} Assume that $f$ is locally Lipschitz and that there exist $K,L_1,L_2$ such that \begin{equation}\label{IpoSecCor} f (t,x,y)-f (t,\tilde x, y) \leq K(x-\tilde x), \end{equation} for all $x \geq \tilde x, t\in [0,1], y\in {\mathbb R}$, \begin{equation}\label{IpoTerzaCor} L_1(y-\tilde y) \leq f (t,x,y)-f (t,x, \tilde y) \leq L_2 (y-\tilde y), \end{equation} for all $y \geq \tilde y, t\in [0,1], x\in {\mathbb R}$. Assume also that $1<\alpha(L_2,K)+\beta(L_1,K)$. Then $(\ref{vecchia})$ has a unique solution. \end{corollary} \noindent Corollary $\ref{SecondoCor}$ can be compared with Proposition 1.3 in \cite{NP}, where it is assumed that $f=f(x,y)$ is nonincreasing in each coordinate and that it has linear growth. More precisely, the monotonicity condition in $x,y$ is contained in $(\ref{IpoSecCor}),(\ref{IpoTerzaCor})$ when we take $K=0$ and $L_2=0$, respectively. Moreover, it follows from the definitions that $\beta(L_1,0)=+\infty$. Notice that no linear growth restriction is required in Corollary $\ref{SecondoCor}$; the assumptions are satisfied also (as remarked in \cite{BSW}) by a nonlinearity of the form $f(t,x)=-e^x$. \section {Uniqueness in law} In this section we will give sufficient conditions to have uniqueness in law for solutions to the BVP associated to equation \eqref{np}. 
These conditions are not covered by the pathwise uniqueness results of previous sections. In this section (excluding Remark \ref{exi}) we will always assume that \begin{hypothesis} \label{law} The function $f : [0,1] \times {\mathbb R}^2 \to {\mathbb R}$ is continuous and bounded and has first and second spatial partial derivatives $f_x$, $f_y$, $f_{xx}$, $f_{xy}$ and $f_{yy}$ which are continuous and bounded. \end{hypothesis} \subsection{$H$-differentiability} Let $H = L^2(0,1)$ and $H_0 $ be the subspace of $\Omega$ introduced at the end of Section 1. Recall that a Hilbert-Schmidt operator $K : H \to H$ can be represented by a kernel $K(t,s) \in L^2[(0,1)^2]$, i.e., $ K_t h = \int_0^1 K(t,s) h_s ds,$ $ t \in [0,1]. $ Identifying $H$ with $H_0$, a Hilbert-Schmidt operator $R: H_0 \to H_0$ can be represented by a kernel $R(t,s) \in L^2[(0,1)^2]$ as follows: \begin{align} \label{f7} R_t f = \int_0^t dr\int_0^1 R(r,s) f_s' ds,\;\;\; f \in H_0, \;\; t \in [0,1]. \end{align} {\it In the sequel we will identify Hilbert-Schmidt operators from $H_0 $ into $H_0$ with their corresponding kernels in $ L^2[(0,1)^2]$; to stress this fact, we will also write $H_0 \otimes H_0 \simeq L^2[(0,1)^2]$.} The following definition is inspired by \cite{S} (compare also with \cite[Chapter 4]{N}, \cite[Section 3.3]{UZ1} and \cite[Definition B.6.2]{UZ1}). \begin{definition} \label{Hdiff} Let $K$ be a real separable Hilbert space. A measurable map ${\mathcal G} : \Omega \to K $ is said to be $H$-differentiable if the following conditions hold: \vskip 1mm \noindent (1) For any $\omega \in \Omega$, ${\mathbb P}$-a.s., the mapping ${\mathcal G} (\omega + \cdot ) : H_0 \to K$, $h \mapsto {\mathcal G} (\omega + h )$, is Fr\'echet differentiable on $H_0$. 
\noindent (2) For any $\omega \in \Omega$, ${\mathbb P}$-a.s., the $H$-derivative $D_H {\mathcal G}(\omega)$, which is defined by \begin{equation}\label{ramer4} D_H {\mathcal G}(\omega)[h] = \lim_{r \to 0} \frac{ {\mathcal G} (\omega + r h) - {\mathcal G} (\omega)}{r}, \;\;\;\; h \in H_0, \end{equation} is a Hilbert-Schmidt operator from $H_0$ into $K$. \noindent(3) the map $ \omega \mapsto \, D_H {\mathcal G} (\omega ) $ is measurable from $\Omega$ into $ H_0 \otimes K$. \end{definition} \begin{remark} {\rm In condition (1) we are requiring that ${\mathcal G}$ is differentiable along the directions of $H_0$ (the Cameron-Martin space or the space of admissible shifts for ${\mathbb P}$, see \cite{UZ1}). The space $H_0$ is densely and continuously embedded in $\Omega$ (the immersion $i: H_0 \to \Omega$ is even compact). The triple ($\Omega, H_0, {\mathbb P}$) is an important example of abstract Wiener space (see \cite[Section 4.1]{N}). The notion of $H$-differentiability can be more generally formulated in abstract Wiener spaces. } \end{remark} \noindent In the special case when $K = H_0$ we obtain (see \eqref{f7} and compare with \cite[Theorem 2.1]{NP}) \begin{definition} \label{Hdiff1} A measurable map ${\mathcal G} : \Omega \to H_0 $ is said to be $H$-differentiable if the following conditions hold: \vskip 1mm \noindent (1) For any $\omega \in \Omega$, ${\mathbb P}$-a.s., the mapping ${\mathcal G} (\omega + \cdot ) : H_0 \to H_0$ is Fr\'echet differentiable on $H_0$. 
\noindent (2) For any $\omega \in \Omega$, ${\mathbb P}$-a.s., there exists the $H$-derivative, i.e., a kernel $D_H {\mathcal G}(\omega) \in L^2 ([0,1]^2)$, such that, for any $\omega \in \Omega$, ${\mathbb P}$-a.s., \begin{equation}\label{ramer} \lim_{r \to 0} \frac{ {\mathcal G} (\omega + r h) - {\mathcal G} (\omega)}{r}= \int_0^{\, \cdot} (D_H {\mathcal G} (\omega)) [h'](s)ds, \;\;\; h \in H_0, \end{equation} where $(D_H {\mathcal G} (\omega) ) [h'](t) = \int_0^1 D_H {\mathcal G} (\omega)(t,s) h_s' ds$, $t \in [0,1].$ \noindent(3) the map $ \omega \mapsto \, D_H {\mathcal G} (\omega ) $ is measurable from $\Omega$ into $ L^2 ([0,1]^2) $. \end{definition} \noindent The concept of $H$-differentiability goes back to Gross in the early 1960s, and it is now well understood that it is closely related to Malliavin calculus (see also Appendix A). The relation between $H$-differentiability and the Malliavin derivative is completely clarified in \cite{S} (see also \cite[Section 4.1.3]{N}). It turns out that $D_H {\mathcal G} $ is {\it the Malliavin derivative of ${\mathcal G}$}. More precisely, we have the following result as a special case of \cite[Theorem 3.1]{S}. \begin{theorem} (Sugita \cite{S}) \label{sugita} Let $K$ be a real separable Hilbert space. Let us consider a measurable map ${\mathcal G} : \Omega \to K $ which is $H$-differentiable and such that ${\mathcal G} \in L^2(\Omega; K)$ and $$ D_H {\mathcal G} \in L^2(\Omega; H_0 \otimes K). $$ Then ${\mathcal G}$ belongs to $D^{1,2} (K)$ (see Appendix A). Moreover, we have $D_M {\mathcal G} = D_H {\mathcal G}$, ${\mathbb P}$-a.s. \end{theorem} \noindent Let us go back to the map $T$ given in \eqref{t1}; $T : \Omega \to \Omega$, $T = I + G$, where $G : \Omega \to H_0$, \begin{equation} \label{ciao1} G_t(\omega) = \int_0^t f (s, Y_s (\omega), Y_s'(\omega) ) ds, \;\;\; \omega \in \Omega,\;\; t \in [0,1]. \end{equation} We have the following lemma. 
\begin{lemma} \label{fre} The following assertions hold: \noindent (i) The mapping $T: \Omega \to \Omega$ is continuously Fr\'echet differentiable on $\Omega$, with Fr\'echet derivative $DT(\omega) : \Omega \to \Omega$, \begin{align*} DT(\omega)[\theta] & = \theta + \int_0^{\; \cdot} \Big ( f_x (s, Y_s (\omega), Y'_s(\omega)) \, Y_s (\theta) + f_y (s, Y_s (\omega), Y'_s(\omega))Y_s' (\theta) \Big)ds \\ & = \theta + DG(\omega)[\theta], \;\; \omega, \, \theta \in \Omega. \end{align*} (ii) The mapping $G: \Omega \to H_0$ is $H$-differentiable, with the following $H$-derivative $D_H G(\omega)$, for any $\omega \in \Omega$, \begin{align*} D_H G(\omega)[h](t) = f_x (t, Y_t (\omega), Y_t'(\omega)) \, Y_t ( \tilde h) + f_y (t, Y_t (\omega), Y'_t(\omega)) \, Y_t' ( \tilde h) \\ = - a_t (\omega) \int_0^1 K (t,s) h_s ds - b_t (\omega) \int_0^1 \partial_t K(t,s) h_s ds , \;\; h \in H, \; t \in [0,1], \end{align*} where $a_t = a_t (\omega)= f_x (t,Y_t(\omega), Y_t'(\omega)) $ and $b_t = b_t (\omega)= f_y (t,Y_t(\omega), Y_t'(\omega))$. Moreover, the following relation between Fr\'echet and $H$-derivative holds: \begin{align} \label{cf} D G (\omega)[h] (t) = \int_0^{t} D_H G(\omega)[h'](s) ds, \;\;\; h \in H_0,\;\; t \in [0,1],\;\; \omega \in \Omega. \end{align} \end{lemma} \begin{proof} (i) It is straightforward to check that $T$ is continuously Fr\'echet differentiable on $\Omega$. First one verifies its G\^ateaux-differentiability at a fixed $\omega$, finding the G\^ateaux derivative $DT(\omega)$. The computations are easy; we only note the estimate $$ \sup_{s, r \in [0,1]} |Y_s (\omega + r \theta)| \le \| \omega\|_{\infty} + \| \theta\|_{\infty}. $$ Then one proves in a straightforward way that the mapping: $ \omega \mapsto DT(\omega) $ from $\Omega$ into ${\cal L}(\Omega)$ (${\cal L}(\Omega)$ denotes the Banach space of all linear and bounded operators from $\Omega$ into $\Omega$ endowed with the operator norm) is continuous, and this gives the assertion.
\noindent (ii) First note that the operator $$ h \mapsto D_H G(\omega) [h] = - a_t(\omega)\, \int_0^1 K (t,s) h_s ds - b_t (\omega) \int_0^1 \partial_t K(t,s) h_s ds , \;\; h \in H, $$ is a Hilbert-Schmidt operator on $H$. To check the $H$-differentiability of $G$, it is enough to verify that (the limit is in $H$) \begin{equation}\label{ramer1} \lim_{r \to 0} \frac{ G_t' \left(\omega + r \int_0^{\cdot} h_s ds \right) - G_t' (\omega) }{r} = f_x (t,Y_t (\omega), Y_t'(\omega)) \, Y_t ( \tilde h) + f_y (t,Y_t (\omega), Y_t'(\omega)) \, Y_t' ( \tilde h), \end{equation} $h \in H$, where $\tilde h_t $ $= \int_0^{t} h_s ds$, and also that \begin{equation} \label{ci} h \mapsto \, D_H G \left(\omega + \int_0^{\cdot} h_s ds \right) \;\; \text{ is continuous from } \; H \; \text{into}\; L^2 ([0,1]^2), \end{equation} for any $\omega \in \Omega$. The proof of \eqref{ramer1} is straightforward (formula \eqref{ramer1} also appears in \cite{NP}), and so is the verification of \eqref{ci}. It remains to show the measurability property, i.e., that $\omega \mapsto D_H G (\omega)$ is measurable from $\Omega$ into $L^2([0,1]^2)$. We fix an orthonormal basis $(e_i)$ in $H$ and consider the orthonormal basis $(e_i \otimes e_j)$ in $ L^2([0,1]^2) $; recall that $e_i \otimes e_j (t,s) = e_i(t) e_j(s)$, $s,t \in [0,1]$ (see \cite[Chapter VI]{RS}). To obtain the measurability property, it is enough to verify that, for any $i, j \ge 1$, the mapping: \begin{align} \label{measur} \omega \mapsto \int_0^1 \int_0^1 D_H G(\omega)(s,t) e_i(t) e_j(s) dt ds \end{align} is measurable from $\Omega$ into ${\mathbb R}$ and this follows easily. The proof is complete. \end{proof} \begin{remark} We have, for any $\omega \in \Omega$, \begin{align} \label{c4} \| D_H G(\omega) \|_{L^2([0,1]^2)} \le (\| f_x \|_0 + \| f_y \|_0)\, ( \| K \|_{L^2([0,1]^2)} + \| \partial_t K \|_{L^2([0,1]^2)} ).
\end{align} \end{remark} \begin{lemma} \label{fre1} For any $\omega \in \Omega$, the Fr\'echet derivative $DT(\omega) : \Omega \to \Omega $ is such that \begin{equation} \label{f} \begin{aligned} D T (\omega) = I + D G(\omega) : \Omega \to \Omega \; \text {is an isomorphism } \; \Leftrightarrow\; \\ \text {the linearized equation} \;\; u_t'' + b_t u_t' + a_t u_t =0,\;\; u_0= u_1=0, \;\;\; \\ \text{with} \;\; a_t = a_t (\omega)= f_x (t,Y_t(\omega), Y_t'(\omega)),\; b_t = b_t (\omega)= f_y (t,Y_t(\omega), Y_t'(\omega)), \\ \text{has only the zero solution.} \end{aligned} \end{equation} \end{lemma} \begin{proof} Since $DG (\omega)$ is a compact operator on $\Omega$, by the Fredholm alternative it is enough to check that $ I + DG(\omega) $ is one-to-one. Fix $\omega$ and let $\theta \in \Omega$ be such that $$ \theta_t + \int_0^{t} \big( f_x ( s,Y_s(\omega), Y_s'(\omega) ) \, Y_s (\theta) + f_y ( s,Y_s(\omega), Y_s'(\omega) ) \, Y_s' (\theta) \big) ds =0,\;\;\; t \in [0,1]. $$ It follows that $\theta $ is differentiable and $$ \theta'_t + a_t (\omega) \, Y_t (\theta) + b_t (\omega) \, Y_t' (\theta)=0. $$ Recalling that $\theta'_t = Y_t'' (\theta)$, we find that $Y_t (\theta) = u_t$ solves the boundary value problem $ u_t'' + b_t u_t' + a_t u_t =0,$ $u_0= u_1=0.$ Hence $Y (\theta) =0$ and so $\theta =0$. \end{proof} \subsection { An anticipative Girsanov theorem involving a Carleman-Fredholm determinant } Here we present a non-adapted version of the Girsanov theorem proved recently in \cite[Theorem 3.3]{UZ}. This result will be used in the sequel to prove uniqueness in law for our boundary value problem \eqref{np}. Its formulation requires some concepts of Malliavin Calculus (see Appendix A). Recall that $H_0 \otimes H_0 \simeq L^2([0,1]^2)$. \begin{hypothesis} \label{uz} (i) Let $F: \Omega \to H_0$ be a measurable mapping which belongs to $D^{2,2}(H_0)$.
(ii) If $\delta (F)$ denotes the Skorohod integral of $F$ and $D_M F$ its Malliavin derivative, it holds \begin{align} \label{di} \exp \Big( - \delta (F) + \| D_M F \|_{L^2([0,1]^2)} \Big) \in L^{4}(\Omega). \end{align} \end{hypothesis} \noindent Let us comment on the previous assumptions: (i) and (ii) are obtained directly from the corresponding assumptions in \cite[Theorem 3.2]{UZ} with $r=2$ and $\gamma =3$. Consider $\Lambda_F : \Omega \to {\mathbb R}$, \begin{align} \label{lf} \Lambda_F(\omega) = \det{_{2}}(I + D_M F(\omega)) \, \exp \Big( - \delta (F)(\omega) \, - \, \frac{1}{2} |F(\omega)|_{H_0}^2 \Big). \end{align} As pointed out after \cite[Theorem 3.2]{UZ} (see also Appendix A.2 in \cite{UZ1}), under Hypothesis \ref{uz} we have $ \Lambda_F ,$ $ \Lambda_F (I + D_M F)^{-1} v \in L^{4}(\Omega), $ for any $v \in H_0$. \begin{theorem}\label{ustu} (\"Ust\"unel-Zakai \cite{UZ}) (H1) Assume that $F$ satisfies Hypothesis \ref{uz} and consider the associated measurable transformation ${\mathcal T} = {\mathcal T}_F : \Omega \to \Omega$, \begin{align} \label{t5} {\mathcal T} (\omega) = \omega + F(\omega),\;\;\; \omega \in \Omega. \end{align} (H2) Assume that, for any $\omega \in \Omega$, ${\mathbb P}$-a.s., $ [I + D_M F \,(\omega)] : H_0 \to H_0 $ is an isomorphism (here $I =I_{H_0}$). (H3) Assume that there exists a measurable (left inverse) transformation ${\mathcal T}_l : \Omega \to \Omega $ such that $$ {\mathcal T}_l ({\mathcal T} (\omega)) = \omega,\;\;\; \omega \in \Omega, \; {\mathbb P}\text{-a.s..} $$ Then there exists a (Borel) probability measure ${\mathbb Q}$ on $\Omega$, which is equivalent to the Wiener measure ${\mathbb P}$, having density $\frac{d {\mathbb Q}}{d {\mathbb P}} = \Lambda_F$, and such that \begin{align} \label{qq} {\mathbb Q}({\mathcal T}^{-1}(A)) = {\mathbb Q} (\{ \omega \in \Omega\,:\, {\mathcal T} (\omega) \in A \}) = {\mathbb P} (A), \;\; \text{for any Borel set} \;\; A \subset \Omega.
\end{align} \end{theorem} \noindent Note that the assertion says that the process $({\mathcal T}_t(\omega))_{t \in [0,1]}$ is a Wiener process on $(\Omega, {\mathcal F}, {\mathbb Q} ).$ The measure ${\mathbb Q}$ is called a Girsanov measure in \cite{UZ}. \begin{remark} \label{kusuoka} {\em It is useful to compare the previous theorem with another non-adapted extension of the Girsanov theorem known as the Ramer-Kusuoka theorem (see \cite{K}, \cite[Theorem 4.1.2]{N} and \cite[Section 3.5]{UZ1}). This result has also been applied in \cite{AN}, \cite{D}, \cite{DM} and \cite{NP}. Its formulation requires the following assumptions. } {\it \smallskip (H1) Assume that $F : \Omega \to H_0$ is $H$-differentiable and that the mapping: $ h \mapsto \, D_H F \left(\omega + h \right) $ is continuous from $H_0$ into $H_0 \otimes H_0$, for any $\omega \in \Omega$, ${\mathbb P}$-a.s.. (H2) Assume that the measurable transformation ${\mathcal T} = I +F : \Omega \to \Omega$ (see \eqref{t5}) is {\rm bijective.} (H3) Assume that, for any $\omega \in \Omega$, ${\mathbb P}$-a.s., $ [I + D_H F \,(\omega)] : H_0 \to H_0 $ is an isomorphism. If (H1)-(H3) hold, then there exists a (Borel) probability measure ${\mathbb Q}$ on $\Omega$, which is equivalent to ${\mathbb P}$, having density $\frac{d {\mathbb Q}}{d {\mathbb P}} = |\Lambda_F|$, such that \eqref{qq} holds. } \smallskip \noindent {\em Note that Theorem \ref{ustu} does not require the invertibility of ${\mathcal T}$. On the other hand, additional integrability assumptions on $F$ are imposed. There is also a difference in the expression of $\frac{d {\mathbb Q}}{d {\mathbb P}}$. Indeed Theorem \ref{ustu} claims that ${\det}_{2} (I + D_HF ) $ is positive, ${\mathbb P}$-a.s., while in the Ramer-Kusuoka theorem we have to consider $|{\det}_{2} (I + D_HF )|$. } \end{remark} \subsection {Some results on $H$-differentiability and Malliavin derivatives} Let $X = (X_t)$, $X :\Omega \to \Omega$ be a measurable transformation.
We introduce an associated measurable mapping $S^X = S : \Omega \to \Omega$, as follows \begin{equation} \label{s} \begin{aligned} & S_t(\omega) = \omega_t - \int_0^t f (s, X_s (\omega), X_s' (\omega)) ds = [(I + F)(\omega)]_t, \;\; \text {where} \; F = F^X : \Omega \to H_0, \\ & F_t (\omega) = - \int_0^t f (s, X_s (\omega), X_s' (\omega)) ds,\;\; t \in [0,1]. \end{aligned} \end{equation} \begin{proposition} \label{rt} A measurable mapping $X : \Omega \to \Omega $ is a solution if and only if there exists an admissible open set $\Gamma \subset \Omega$, such that $$ X_t(\omega) = Y_t (S(\omega)), \;\;\; \omega \in \Gamma, \;\; t \in [0,1]. $$ \end{proposition} \begin{proof} Recall that $ Y_t (\omega) = - t \int_0^1 \omega_s ds + \int_0^t \omega_s ds, $ so that $$ Y_t (\omega) = \int_0^1 \partial_s \big ( t \wedge s \, - ts \big) \, \omega_s ds. $$ Let $X $ be a solution. By Lemma \ref{equiv2} we have, for any $\omega \in \Gamma$, $$ X_t (\omega) = \int_0^1 \partial_s \big ( t \wedge s \, - ts \big) \, \big (\omega_s - \int_0^s f (r, X_r (\omega), X_r'(\omega)) dr \big) \, ds = Y_t (S(\omega)). $$ The reverse implication follows similarly. \end{proof} \noindent Let us go back to the continuous map $T : \Omega \to \Omega$. Recall that pathwise uniqueness can be characterized by the fact that $T$ is {\it bijective} (see the precise statement in Proposition \ref{serve}). In this section we are mainly interested in situations in which we do not know if $T$ is bijective or not. \noindent The following two results will be important. The first one says that $T$ is always a measurable {\it left inverse of $S$} (compare with Theorem \ref{ustu}). \begin{lemma} \label{1} Let $X$ be a solution to \eqref{integro} and let $S$ be the associated measurable mapping (see \eqref{s}). 
We have on the admissible open set $\Gamma \subset \Omega$ (see \eqref{integro}) \begin{equation} T \circ S = I; \end{equation} in particular $S$ is always {\it injective} on $\Gamma$ and $T$ {\it surjective} from $S(\Gamma)$ onto $\Gamma$. \end{lemma} \begin{proof} We have, for any $\omega \in \Gamma$, using Proposition \ref{rt}, \begin{eqnarray*} T_t (S(\omega)) = S_t(\omega) + \int_0^t f (s,Y_s (S (\omega)), Y_s' (S (\omega))) ds \\ = \omega_t - \int_0^t f (s, X_s (\omega), X_s'(\omega)) ds + \int_0^t f (s,Y_s (S (\omega)), Y_s' (S (\omega))) ds = \omega_t, \;\; t \in [0,1]. \end{eqnarray*} \end{proof} \noindent We now introduce an assumption on solutions to the boundary value problem under consideration. Let $X$ be a solution to \eqref{integro}. We say that {\it $X$ satisfies the hypothesis (L)} if there exists an admissible Borel set $\Omega_0 \subset \Omega$ such that \begin{align} \label{ll} \textbf {(L)} \begin{cases} \text{for any $\omega \in \Omega_0$, the linearized BVP} \; u_t'' + b_t u_t' + a_t u_t =0,\;\; u_0= u_1=0, \\ \text{where} \; a_t = a_t (\omega)= f_x (t,X_t(\omega), X_t'(\omega)) \; \text{ and} \; b_t = b_t (\omega)= f_y (t,X_t(\omega), X_t'(\omega)) \\ \text{\it has only the zero solution.} \end{cases} \end{align} \noindent If $T: \Omega \to \Omega$ is bijective (as is always the case in \cite{NP}) a condition which implies (L) is \begin{align} \label{ll1} \textbf {(LY)} \begin{cases} \text{for any $\omega \in \Omega$, the linearized BVP} \; u_t'' + b_t u_t' + a_t u_t =0,\;\; u_0= u_1=0, \\ \text{where} \; a_t = a_t (\omega)= f_x (t,Y_t(\omega), Y_t'(\omega)) \; \text{ and} \; b_t = b_t (\omega)= f_y (t,Y_t(\omega), Y_t'(\omega)) \\ \text{\it has only the zero solution.} \end{cases} \end{align} \noindent Using Lemmas \ref{fre} and \ref{fre1} we can prove the following result (recall the admissible open set $\Gamma \subset \Omega$ given in \eqref{integro} and the fact that $T = I + G$ in (\ref{ciao1})).
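To make conditions (L) and (LY) concrete, the following remark records a standard Sturm-Liouville special case; the linear drift below is an illustration chosen here, not an example treated in the surrounding text.

```latex
\begin{remark}
{\em As an illustration of (LY), consider the linear drift $f(t,x,y) = \lambda x$,
$\lambda \in {\mathbb R}$. Then $a_t \equiv \lambda$, $b_t \equiv 0$, and the
linearized BVP in (LY) reads
$$ u_t'' + \lambda u_t = 0, \;\;\; u_0 = u_1 = 0. $$
This problem admits the nonzero solutions $u_t = c \sin (n \pi t)$ exactly when
$\lambda = n^2 \pi^2$ for some integer $n \ge 1$; for every other $\lambda$ only
the zero solution exists. Hence (LY) holds precisely when $\lambda$ avoids the
Dirichlet spectrum $\{ n^2 \pi^2 \, : \, n \ge 1 \}$ of $- d^2 / dt^2$ on $[0,1]$.
}
\end{remark}
```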
\begin{theorem} \label{2} Assume Hypothesis \ref{law}. Let $X$ be a solution to \eqref{integro} which satisfies (L) and let $S = I + F $ be the associated measurable mapping (see (\ref{s})). \noindent Then the map $F$ is $H$-differentiable and we have, for any $\omega \in \Omega$, ${\mathbb P}$-a.s., \begin{equation} \label{inv} [D_H F (\omega)] = [I + D_H G \,(S (\omega))]^{-1} - I = - D_H G (S(\omega))\, \big(I + D_H G (S(\omega)) \big)^{-1}. \end{equation} Moreover, for any $\omega \in \Omega$, ${\mathbb P}$-a.s. (setting $I = I_{H_0}$), $$ [I + D_H F \,(\omega)] : H_0 \to H_0 \;\;\; \text{is an isomorphism.} $$ \end{theorem} \begin{proof} The proof is divided into several steps. \noindent {\it I Step. We show that there exists an admissible open set $\Gamma_0 \subset \Gamma $, such that $S$ and $F$ are Fr\'echet differentiable at any $\omega \in \Gamma_0$. } \smallskip \noindent According to formula \eqref{f} the Fr\'echet derivative $D T (S (\omega))$ is an isomorphism from $\Omega $ into $\Omega$ if and only if \eqref{ll} holds for $\omega$ (recall that $X = Y \circ S$). Let $\Omega_0 \subset \Omega $ be the admissible Borel set such that \eqref{ll} holds for any $\omega \in \Omega_0$. Define $\Omega' = \Omega_0 \cap \Gamma.$ Clearly ${\mathbb P} (\Omega') =1$ and also $H_0 + \omega \subset \Omega'$, for any $\omega \in \Omega'$, ${\mathbb P}$-a.s.. Thus $\Omega'$ is an admissible Borel set in $\Omega$. \noindent Fix $\omega \in \Omega'$. Since $D T (S (\omega))$ is an isomorphism, we can apply the inverse function theorem and deduce that $T $ is a local diffeomorphism from an open neighborhood $U_{S(\omega)}$ of $S(\omega)$ onto an open neighborhood $V_{T(S(\omega))}= V_{\omega}$ of $T(S(\omega)) = \omega $. We may also assume that $V_{\omega} \subset \Gamma$, for any $\omega \in \Omega'$. Let us denote by $T^{-1}$ the local inverse function (we have $T^{-1} (V_{\omega}) = U_{S(\omega)}$).
By Proposition \ref{rt}, we know that $$ \{ \theta \in \Gamma \, :\, S(\theta) \in T^{-1} (V_{\omega}) \} = V_{\omega}. $$ It follows that $S$ is Fr\'echet differentiable in any $\omega' \in V_{\omega}$ and that $$ D S(\omega' ) = (DT (S(\omega')))^{-1} = (I + DG (S(\omega')))^{-1}. $$ Introduce the open set $$ \Gamma_0 = \bigcup_{\omega \in \Omega'} \, V_{\omega} \subset \Gamma. $$ Since $ \Omega' \subset \Gamma_0$, we have that ${\mathbb P} (\Gamma_0) =1$. In addition $H_0 + \omega \subset \Gamma_0$, for any $\omega \in \Gamma_0$, ${\mathbb P}$-a.s.. \noindent The restriction of $S$ to $\Gamma_0$ is a Fr\'echet-differentiable function with values in $\Omega$. It follows that also $F$ is Fr\'echet differentiable at any $\omega \in \Gamma_0$ with Fr\'echet derivative \begin{align} \label{f4} DF(\omega) = (I + DG (S(\omega)))^{-1} - I. \end{align} {\it II Step. We check that, for any $\omega \in \Gamma_0 $, $DF (\omega) [h] \in H_0$, if $h \in H_0$, and, moreover, for any $\omega \in \Gamma_0 $, $DF (\omega)\in H_0 \otimes H_0 $ (when considered as an operator from $H_0$ into $H_0$). We also show that, for any $\omega \in \Gamma_0$, ${\mathbb P}$-a.s., the map: \begin{align} \label{lim} DF(\omega + \cdot ) : H_0 \to H_0 \otimes H_0 \;\; \text{is continuous} \end{align} and that $ DF(\cdot) $ is measurable from $\Gamma_0 $ into $H_0 \otimes H_0$. } \smallskip \noindent Let us consider, for $\omega \in \Gamma_0$, $k= (I + DG (S(\omega)))^{-1} [h]$. We have $k + DG (S(\omega))[k] =h$. It follows that $k \in H_0, $ since $DG (S(\omega))[k] \in H_0$. By \eqref{cf} in Lemma \ref{fre}, we obtain that if $h \in H_0$, then $$ (I + DG (S(\omega)))^{-1} [h] = (I + D_H G (S(\omega)))^{-1} [h]. 
$$ By using the identity $$ (I + D_H G (S(\omega)))^{-1} - I = - D_H G (S(\omega)) (I + D_H G (S(\omega)))^{-1}, \;\; \omega \in \Gamma_0, $$ since $(I + D_H G (S(\omega)))^{-1}$ is a bounded operator and $D_H G (S(\omega))$ is Hilbert-Schmidt, we deduce that $ (I + D_H G (S(\omega)))^{-1} - I $ is a Hilbert-Schmidt operator on $H_0$ (see \eqref{hil}). We verify now the continuity property \eqref{lim}, i.e., that for any $\omega \in \Gamma_0$, ${\mathbb P}$-a.s., for any $k \in H_0$, $$ \lim_{h \to k,\; h \in H_0} \! D_H G (S(\omega + h )) (I + D_H G (S(\omega +h )))^{-1} \! \! = D_H G (S(\omega + k)) (I + D_H G (S(\omega+k)))^{-1} $$ (note that, for any $\omega \in \Gamma_0$, ${\mathbb P}$-a.s., $DF(\omega + h)$ is well-defined at any $h \in H_0$). This requires the following considerations. \vskip 1mm \noindent (a) The mapping: $D_H G : \Omega \to H_0 \otimes H_0$ is continuous. Indeed we know (see Lemma \ref{fre}) $$ D_H G (\omega) = - f_x (t,Y_t (\omega), Y_t'(\omega)) \, K(t,s) - f_y (t,Y_t (\omega), Y_t'(\omega)) \, \partial_t K(t,s) $$ (identifying operators in $H_0 \otimes H_0$ with corresponding kernels in $L^2([0,1]^2)$). Since $Y $ and $Y'$ are continuous from $\Omega$ into $\Omega$ we get easily our assertion using Hypothesis \ref{law}. \vskip 1mm \noindent (b) Since $S : \Gamma_0 \to \Omega$ is continuous and $\Gamma_0$ is admissible, we get that, for any $\omega \in \Gamma_0$, ${\mathbb P}$-a.s., the map: $S(\omega \, + \, \cdot) : H_0 \to \Omega$ is continuous. Using also (a), we obtain that, for any $\omega \in \Gamma_0$, ${\mathbb P}$-a.s., $(D_H G \circ S) (\omega + \cdot) : H_0 \to H_0 \otimes H_0$ is continuous. \vskip 1mm\noindent (c) To get the assertion we use \eqref{hil} and the following fact: for any $\omega \in \Gamma_0$, ${\mathbb P}$-a.s., we have $$ \lim_{h \to k} (I + D_H G (S(\omega+h)))^{-1} = (I + D_H G (S(\omega +k)))^{-1} $$ (limit in ${\mathcal L}(H_0, H_0)$) for any $k \in H_0$. 
This holds since, for any $\omega \in \Gamma_0$, ${\mathbb P}$-a.s., $(I+ D_H G (S(\omega +h )))$ is invertible for any $h \in H_0$, and, moreover, for any $\omega \in \Gamma_0$, ${\mathbb P}$-a.s., $\lim_{h \to k } (I + D_H G (S(\omega+h))) $ $= (I+ D_H G (S(\omega +k)))$ in ${\mathcal L}(H_0, H_0)$, for any $k \in H_0$. \smallskip \noindent To check the measurability property, we can repeat the argument before formula \eqref{measur}. \smallskip \noindent {\it III Step. There exists $c_0 >0$, depending on $\| f_x \|_0$ and $\| f_y\|_0$ such that, for any $\omega \in \Gamma_0$,} \begin{align} \label{c411} |DS(\omega) h|_{H_0} = | (I + D_H G (S(\omega)))^{-1} h |_{H_0} \le c_0 |h|_{H_0},\;\;\; h \in H_0. \end{align} This estimate follows from Corollary \ref{carle1} applied to $L = D_H G (S(\omega)). $ \smallskip \noindent {\it IV Step. We prove that $F$ is $H$-differentiable with $D_H F(\omega) = DF(\omega)$ (see \eqref{f7}), for any $\omega \in \Gamma_0$. } \smallskip \noindent The assertion will be proved if we show that there exists, for any $\omega \in \Gamma_0$, $R (\omega) \in H_0 \otimes H_0$, such that \begin{align} \label{gat1} \lim_{r \to 0} \frac{ {F} (\omega + r h) - {F} (\omega)}{r} = R(\omega) [h],\;\;\; h \in H_0 \end{align} (the limit is in $H_0$). Indeed, once this is checked we will get that $R(\omega) = DF(\omega)$ (because the topology of $H_0$ is stronger than the one in $\Omega$). Moreover, we will obtain (since $\Gamma_0$ is admissible) that, for any $\omega \in \Gamma_0$, ${\mathbb P}$-a.s., $F(\omega + \cdot ) : H_0 \to H_0$ is G\^ateaux differentiable on $H_0$. Combining this fact with \eqref{lim}, we will deduce the required property (1) in Definition \ref{Hdiff1}. 
\noindent To prove \eqref{gat1}, we first show that, for any $t \in [0,1]$, $\omega \in \Gamma_0$, and $h \in H_0$, \begin{align} \label{xt} (i) \; \lim_{r \to 0} \frac{ {X_t} (\omega + r h) - {X_t} (\omega)}{r} = Y_t ( DS(\omega)[h] ), \end{align} $$ (ii) \; \; \lim_{r \to 0} \frac{ {X_t'} (\omega + r h) - {X_t'} (\omega)}{r} = Y_t' ( DS(\omega)[h] ). $$ Let us only check (ii) (the proof of (i) is similar). Using the fact that $X = Y\circ S$ on $\Gamma_0$, we have (for $r$ small enough) $$ \frac{ {X_t'} (\omega + r h) - {X_t'} (\omega)}{r} = - \int_0^1 \Big ( \frac{ {S_s} (\omega + r h) - {S_s} (\omega)}{r} \Big) ds + \frac{ {S_t} (\omega + r h) - {S_t} (\omega)}{r} $$ and the assertion follows passing to the limit as $r \to 0$ (using also \eqref{c411}). \noindent Let us go back to \eqref{gat1}. Define, for $\omega \in \Gamma_0$, and $h \in H_0$, $$ R(\omega)[h](t) = \int_0^t \Big( a_s (\omega) Y_s ( DS(\omega)[h] ) + b_s (\omega) Y_s' ( DS(\omega)[h] ) \Big) ds, \;\; t \in [0,1]. $$ We have $$ \lim_{r \to 0} \Big| \frac{ {F} (\omega + r h) - {F} (\omega)}{r} - R(\omega) [h] \Big|_{H_0}^2 $$ $$ = \lim_{r \to 0} \int_0^1 \Big | - \frac{ f (s, X_s (\omega + rh), X_s'(\omega + rh)) - f (s, X_s (\omega), X_s'(\omega)) } {r} $$ $$ - a_s (\omega) Y_s ( DS(\omega)[h] ) - b_s (\omega) Y_s' ( DS(\omega)[h] )\Big|^2 ds. $$ Now an application of the dominated convergence theorem shows that the previous limit exists and is 0. The proof is complete. \end{proof} \noindent Next we provide useful properties of the Malliavin derivative of $F$, taking advantage of the techniques in \cite{GGK} (see Appendix B). The first one is an $L^{\infty}$-estimate for $D_H F$ and will be important in Section 4.5. \begin{proposition} \label{ne1} Under the assumptions of Theorem \ref{2}, there exists $C>0$, depending on $\| f_x \|_0$ and $\| f_y\|_0$, such that, for any $\omega \in \Omega$, ${\mathbb P}$-a.s. 
(identifying $L^2([0,1]^2)$ with $H_0 \otimes H_0$), \begin{align} \label{c41} \| D_H F(\omega) \|_{L^2([0,1]^2)} \le C. \end{align} \end{proposition} \begin{proof} Using \eqref{hil}, estimates \eqref{c4} and \eqref{c411} lead to the assertion. \end{proof} \noindent The following result provides an ``explicit expression'' for the Malliavin derivative $D_H F$. The formula follows from \eqref{inv} and Theorem \ref{carle2}. \begin{proposition} \label{esplic} Under the assumptions of Theorem \ref{2} (identifying $H_0 \otimes H_0$ with $L^2([0,1]^2)$), we have, for any $y \in L^2(0,1)$, $\omega \in \Omega$, ${\mathbb P}$-a.s., $$ D_H F(\omega)[y] = - \int_0^1 \gamma(t,s) y(s)ds,\;\;\; t \in [0,1], $$ with $$ \gamma(t,s) = \begin{cases} ({\frac{1}{W}}) [a_t u_2(s)\psi(t)+b_t u_2(s) \psi'(t)],\;\; \;\; 0 \le s < t \le 1, \\ ({\frac{1}{W}}) (a_t u_2(t) + b_t u'_2(t)) \varphi(s),\;\; \;\; 0 \le t < s \le 1. \end{cases} $$ Here $u_k, k =1,\, 2,$ denote the solutions to $u''_k + b_t u_k' + a_t u_k =0$ (the coefficients $a_t$ and $b_t$ depend on $\omega$ and are given in (\ref{ll})) with initial conditions $u_1 (0) = u_2' (0) =1$, $u_1' (0) = u_2 (0) =0$, respectively. Moreover, $W=u_1u'_2-u_2 u'_1$, $M=u_1(1)/u_2(1)$, and $$ \varphi(s)=-u_2(s)M+u_1(s), \; \psi(t)=u_2(t)M-u_1(t), \;\; t \in [0,1],\;\; s \in [0,1]. $$ \end{proposition} \noindent The next result is needed in Section 4.5. \begin{proposition} \label{dafare} Under the assumptions of Theorem \ref{2}, we have that $F \in D^{2,2}(H_0).$ \end{proposition} \begin{proof} The proof is divided into several steps. \smallskip \noindent {\it I Step. We check that $G \in D^{2,2}(H_0)$.} \noindent Since we already know that $G \in D^{1,2}(H_0)$, we only need to show that $D_H G \in D^{1,2}(H_0 \otimes H_0). $ Applying Theorem \ref{sugita}, it is enough to prove that $D_H G : \Omega \to H_0 \otimes H_0$ is $H$-differentiable and that $D_H(D_H G) \in L^{\infty} (\Omega, {\mathcal HS } (H_0, H_0 \otimes H_0)) $.
We proceed similarly to the proof of Lemma \ref{fre} (with more involved computations). Recall that $H_0 \otimes H_0 \simeq L^2([0,1]^2)$. First we introduce a suitable operator $R(\omega) \in {\mathcal HS}(H_0, H_0 \otimes H_0)$, for any $\omega \in \Omega$. This operator can be identified with an integral operator acting from $L^2(0,1)$ into $L^2([0,1]^2)$, i.e., with a kernel in $L^2([0,1]^3)$. For any $\omega \in \Omega$, we set $$ c_t = c_t(\omega)= f_{xx} (t,Y_t (\omega), Y_t'(\omega)), \;\; d_t= d_t(\omega)= f_{xy} (t,Y_t (\omega), Y_t'(\omega)),$$ $$e_t=e_t(\omega)= f_{yy} (t,Y_t (\omega), Y_t'(\omega)).$$ Now $R(\omega) $ can be identified with the following kernel in $L^2([0,1]^3)$: $$ c_t K(t,s )K(t,r) + d_t \, \partial_t K(t,s)\, K(t,r) + d_t K(t,s) \, \partial_t K(t,r) + e_t \, \partial_t K(t,s) \, \partial_t K(t,r), $$ $t,s,r \in [0,1]$. We have, for any $h \in H$, $\omega \in \Omega$, $$ \lim_{r \to 0} \Big | \frac{ D_H G \left(\omega + r \int_0^{\cdot} h_s ds \right) - D_H G (\omega) }{r} - R(\omega)[h] \Big|_{L^2([0,1]^2)}=0. $$ It is easy to check that $ h \mapsto R \left(\omega + \int_0^{\cdot} h_s ds \right)$ is continuous from $ H $ into $L^2 ([0,1]^3)$, for any $\omega \in \Omega$. In addition the mapping $\omega \mapsto R (\omega)$ is measurable from $\Omega$ into $L^2 ([0,1]^3)$ (this can be done using the argument before formula \eqref{measur}). This shows that $D_H G$ is $H$-differentiable and moreover that $D_H^2 G (\omega) = R(\omega)$, $\omega \in \Omega$. Finally, it is easy to see that $D_H^2 G \in L^{\infty} (\Omega, L^2([0,1]^3)) $ (recall that $L^2([0,1]^3) \simeq {\mathcal HS } (H_0, H_0 \otimes H_0)$). \smallskip \noindent {\it II Step. We prove that $D_H F$ is $H$-differentiable.
} \noindent In order to check condition (1) in Definition \ref{Hdiff}, we use the admissible open set $\Gamma_0 \subset \Omega$ given in the proof of Theorem \ref{2} and prove that, for any $\omega \in \Gamma_0$, ${\mathbb P}$-a.s., the mapping: $$ h \mapsto D_H F(\omega + h) $$ from $H_0$ into $H_0 \otimes H_0$ is Fr\'echet differentiable on $H_0$. Let us consider a Borel set $\Omega'' \subset \Gamma_0$, with ${\mathbb P} (\Omega'')=1$, such that, for any $\omega \in \Omega''$, $\omega +H_0 \subset \Gamma_0$. Fix any $\omega \in \Omega''$. We would like to differentiate in formula \eqref{inv}, i.e., to differentiate the mapping \begin{align} \label{g6} h \mapsto (I + D_H G (S(\omega +h)))^{-1} - I \end{align} from $H_0$ into $H_0 \otimes H_0$, applying the usual composition rules for Fr\'echet derivatives. The only problem is that the mapping $h \mapsto S(\omega + h) = \omega + h + F(\omega + h) $ does not take values in $H_0$. This is the reason why we verify the Fr\'echet differentiability directly at a fixed $h_0 \in H_0$.
By setting $(I + D_H G (S(\omega + h))) = M (h)$, we have, for any $h \in H_0$, $$ {M^{-1}(h) - M^{-1}(h_0 )} = M^{-1}(h) \big( M(h_0 ) - M(h) \big) M^{-1}(h_0 ) $$ $$ = - M^{-1}(h) \big( D_H G ( [S(\omega + h)- S(\omega + h_0 )] + S(\omega+ h_0)) - D_H G (S(\omega+ h_0)) \big) M^{-1}(h_0 ) $$ $$ = - M^{-1}(h) \Big( D_H^2 G ( S(\omega + h_0)) \big[S(\omega + h)- S(\omega + h_0) \big] \Big) M^{-1}(h_0 )$$ $$ + \, M^{-1}(h) \, o ([S(\omega + h)- S(\omega + h_0)]) \, M^{-1}(h_0 ) $$ $$ = - M^{-1}(h) \Big( D_H^2 G ( S(\omega + h_0)) \big\{ (h-h_0) + D_H F(\omega +h_0)[h-h_0] \big \} \Big) M^{-1}(h_0 )$$ $$ - \, M^{-1}(h) \Big( D_H^2 G (S(\omega + h_0)) \, [ o (h- h_0) ] \Big) M^{-1}(h_0) $$ $$ + \, M^{-1}(h) \, o ([S(\omega + h)- S(\omega + h_0)]) M^{-1}(h_0 ), $$ as $h \to h_0 $; we have used I Step together with the fact that $S(\omega + h)- S(\omega + h_0) = (h- h_0) + \big( F(\omega + h)- F(\omega+ h_0) \big) \in H_0$ and $ S(\omega + h)- S(\omega + h_0)$ $= (h- h_0) + D_H F(\omega + h_0)[h- h_0] + o(h-h_0)$ as $h \to h_0$. This shows the Fr\'echet differentiability of the mapping in \eqref{g6} at $h_0$, with Fr\'echet derivative along the direction $k \in H_0$ given by $$ V(\omega)[k]= - M^{-1}(h_0 ) \, \Big( (D^2_H G(S(\omega + h_0)) \big[ k + D_H F(\omega + h_0)[k] \big] \Big) \, M^{-1}(h_0 ). $$ Let $(e_j) $ be an orthonormal basis in $H_0$. Using \eqref{hil}, we find, for any $j \ge 1$, \begin{align} \label{che} \| V(\omega)[e_j]\|_{H_0 \otimes H_0} \le \| M^{-1}(h_0 )\|_{{\mathcal L}(H_0, H_0)}^2 \big( \| (D^2_H G(S(\omega + h_0)) [e_j]\|_{H_0 \otimes H_0} \end{align} $$ + \, \| D^2_H G(S(\omega + h_0)) \|_{{\mathcal L}(H_0, H_0 \otimes H_0)} \, | D_H F(\omega + h_0)[e_j] |_{H_0} \big). $$ It follows that, for any $\omega \in \Omega'',$ $V(\omega) \in {\mathcal HS}(H_0, H_0 \otimes H_0)$. Up to now we know that condition (1) in Definition \ref{Hdiff} holds for $ {\mathcal G}= D_H F$, with $D_H (D_H F)(\omega) = V(\omega)$, $\omega \in \Omega''$. 
It remains to check that $V(\cdot)$ is measurable from $\Omega''$ into ${\mathcal HS}(H_0, H_0 \otimes H_0)$. This holds if, for any $k \in H_0$, the mapping: $$ \omega \mapsto V(\omega)[k] $$ is measurable from $\Omega''$ into ${\mathcal HS}(H_0, H_0)$, and this is easy to check. The assertion is proved. \smallskip \noindent {\it III Step. We prove that $D_H(D_H F) \in L^{\infty} (\Omega, {\mathcal HS } (H_0, H_0 \otimes H_0)) $.} \noindent By Theorem \ref{sugita} this will imply that $F \in D^{2,2}(H_0)$. Taking into account the bounds \eqref{c411} and \eqref{c41} and the fact that $D_H^2 G \in L^{\infty} (\Omega, {\mathcal HS } (H_0, H_0 \otimes H_0)) $, we find (see \eqref{che}), for any $\omega \in \Omega$, ${\mathbb P}$-a.s., $$ \| V(\omega)\|_{ {\mathcal HS}(H_0, H_0 \otimes H_0) }^2 = \sum_{j \ge 1}\| V(\omega)[e_j]\|_{H_0 \otimes H_0}^2 \, \le \, C, $$ where $C>0$ depends on $\| f_x\|_0$, $\| f_y\|_0$, $\| f_{xx}\|_0$, $\| f_{xy}\|_0$ and $\| f_{yy}\|_0$. The proof is complete. \end{proof} \subsection{ Exponential integrability of the Skorohod integral $ \delta (F)$} We start with a technical result from \cite[Section 3.1]{N} which requires introducing the space $L^{1,2}$ (see \cite[page 42]{N}). \noindent A real stochastic process $u \in L^{2} ([0,1] \times \Omega)$ belongs to the class $L^{1,2}$ if, for almost all $t \in [0,1]$, $u_t \in D^{1,2}({\mathbb R})$, and there exists a measurable version of the two-parameter process $D_M u_t $ which still belongs to $L^{2} ([0,1] \times \Omega)$. One can prove that $L^{1,2} \subset Dom(\delta)$. Moreover, $L^{1,2}$ is a Hilbert space and has norm $$ \| u\|_{L^{1,2}}^2 = \| u\|^2_{L^2 ([0,1] \times \Omega) } + \| D_M u \|^2_{L^2([0,1] \times \Omega)}. $$ Let $u \in L^{1,2}$. Fix a partition $\pi $ of $[0,1]$, $\pi = \{t_0 =0 < t_1 < \ldots< t_N =1 \}$.
Let $|\pi| = \sup_{0 \le i \le N-1} |t_{i+1} - t_i|$ and define the following random variable $$ \hat S^{\pi}(\omega) = \sum_{i=0}^{N-1} \frac{1}{t_{i+1} - t_i} \Big(\int_{t_i}^{t_{i+1}} {\mathbb E} \big[u_s / {\mathcal F}_{[t_i, t_{i+1}]^c} \big](\omega) \, ds \Big) \, (\omega(t_{i+1}) - \omega(t_i)),\;\; \omega \in \Omega, $$ ${\mathbb P}$-a.s.; here ${\mathbb E} \big[u_s / {\mathcal F}_{[t_i, t_{i+1}]^c} \big]$ denotes the conditional expectation of $u_s \in L^{2}(\Omega)$ with respect to the $\sigma$-algebra ${\mathcal F}_{[t_i, t_{i+1}]^c}$ (where $[t_i, t_{i+1}]^c = [0,1] \setminus [t_i, t_{i+1}])$. This is the $\sigma$-algebra (completed with respect to ${\mathbb P}$) generated by the random variables $\int_0^1 1_A(s) \, d\omega_s $, when $A$ varies over all Borel subsets of $[t_i, t_{i+1}]^c$ (see \cite[page 33]{N}). \smallskip \noindent According to \cite[page 173]{N}, when $u \in L^{1,2}$ there exists a sequence of partitions $(\pi^n)$ such that $\lim_{n \to \infty} |\pi^n|=0$ and \begin{align} \label{nua} \hat S^{\pi^n} \to \delta(u),\;\; \text{as} \; n \to \infty, \;\; {\mathbb P}-a.s.\;\; \text{and in } \;\; L^2(\Omega). \end{align} We can now prove the following estimate. \begin{proposition} \label{stima} Let $u \in L^{1,2} \cap L^{\infty}([0,1] \times \Omega) $. Then, for any $a >0$, we have $$ {\mathbb E} [\exp ( a \, |\delta (u)| \,)] \le 2 e^{\frac{a^2 \, \| u \|^2_{\infty}}{2}}, $$ where $\| u \|_{\infty} = \| u \|_{ L^{\infty}([0,1] \times \Omega) }.$ \end{proposition} \begin{proof} We will use assertion \eqref{nua}, with the previous notation. It is enough to prove the following bound, for any $n \ge 1$, \begin{align} \label{exp} {\mathbb E} [\exp ( a \, | \hat S^{\pi^n} | \,)] \le 2 e^{a^2 \, \frac{\| u \|^2_{\infty}}{2}}. \end{align} Once \eqref{exp} is proved, an application of the Fatou lemma will allow us to get the assertion. 
\smallskip \noindent By elementary properties of conditional expectation, we have, for almost all $s \in [0,1]$, $\omega$, ${\mathbb P}$-a.s., $$ |{\mathbb E} \big[u_s / {\mathcal F}_{[t_0, t_1]^c} \big] | \le \|u \|_{\infty}, $$ for any $0 \le t_0 < t_1 \le 1$. It follows that, for any $n \ge 1$, $\omega$, ${\mathbb P}$-a.s., $$ \Big| \frac{1}{t_{i+1}^n - t_i^n} \int_{t_i^n}^{t_{i+1}^n} {\mathbb E} \big[u_s / {\mathcal F}_{[t_i^n, t_{i+1}^n]^c} \big](\omega) \, ds \Big| \le \|u \|_{\infty}. $$ Setting $Z_{i ,n} = \frac{1}{t_{i+1}^n - t_i^n} \int_{t_i^n}^{t_{i+1}^n} {\mathbb E} \big[u_s / {\mathcal F}_{[t_i^n, t_{i+1}^n]^c} \big](\omega) \, ds$, we get $$ {\mathbb E} [\exp ( a \, | \hat S^{\pi^n} | \,)] = {\mathbb E} \Big [ e^{a | \sum_{i=0}^{N_n - 1} Z_{i ,n} (\omega(t_{i+1}^n) - \omega(t_i^n)) | } \Big] \le {\mathbb E} \Big [ e^{a \sum_{i=0}^{N_n - 1} |Z_{i ,n}| \, |\omega(t_{i+1}^n) - \omega(t_i^n) | } \Big] $$ $$ \le {\mathbb E} \Big [ e^{a \| u \|_{\infty} \sum_{i=0}^{N_n - 1} |\omega(t_{i+1}^n) - \omega(t_i^n) | } \Big] = {\mathbb E} \Big [ \prod_{i=0}^{N_n - 1} e^{a \| u \|_{\infty} |\omega(t_{i+1}^n) - \omega(t_i^n) | } \Big] $$ $$ = \prod_{i=0}^{N_n - 1} {\mathbb E} \Big [ e^{a \| u \|_{\infty} |\omega(t_{i+1}^n) - \omega(t_i^n) | } \Big] = \prod_{i=0}^{N_n - 1} {\mathbb E} \Big [ e^{a \| u \|_{\infty} |\omega(t_{i+1}^n - t_i^n) | } \Big] $$ (in the last step we have used the independence and stationarity of the increments of the Wiener process). Now the bound \eqref{exp} follows easily, noting that $$ {\mathbb E} \big[ e^{c |\omega(t)|} \big] \le 2e^{\frac{c^2 \, t }{2}}, \;\;\; c>0,\;\; t \ge 0. $$ Indeed, we have, for any $n \ge 1$, $$ {\mathbb E} [\exp ( a \, | \hat S^{\pi^n} | \,)] \le \prod_{i=0}^{N_n - 1} 2 {\mathbb E} \Big [ e^{\frac{a^2}{2} \| u \|_{\infty}^2 \, (t_{i+1}^n - t_i^n) } \Big] = 2 e^{a^2 \, \frac{\| u \|^2_{\infty}}{2}}.
$$ \end{proof} \noindent Identifying $F_t(\omega) = - \int_0^t f (s,X_s (\omega), X_s'(\omega)) ds$, $t \in [0,1]$, with the associated stochastic process $u \in L^{1,2}$, $$ u(t, \omega) = f (t, X_t (\omega), X_t'(\omega)), \;\; t \in [0,1], \;\; \omega \in \Omega $$ (see also \cite[Section 4.1.4]{N}), and applying the previous result, we obtain \begin{corollary} \label{cia} Assume that $f : {\mathbb R} \to {\mathbb R}$ is a bounded function. Then, for any $a > 0$, it holds that \begin{align} \label{d67} {\mathbb E} [\exp ( a \, |\delta (F)| \,)] \le 2 e^{\frac{a^2}{2} \| f \|^2_0}. \end{align} \end{corollary} \subsection{The main results} We now state our main result. This theorem implies as a corollary that uniqueness in law holds for our boundary value problem \eqref{np} in the class of solutions such that the corresponding linearized equations (see condition (L) in \eqref{ll}) have only the zero solution. {\it Hence uniqueness in law holds for \eqref{np} whenever all solutions $X$ to \eqref{np} satisfy (L).} For a concrete example, we refer to Section 4.6. \noindent We remark that a statement similar to the result below is given in \cite[Theorem 2.3]{NP} {\it assuming in addition that there is pathwise uniqueness} for the boundary value problem \eqref{np}. Indeed, pathwise uniqueness and uniqueness for the linearized equation (see \eqref{ll1}) lead, by the Ramer--Kusuoka theorem (see Remark \ref{kusuoka}), to Theorem 2.3 in \cite{NP}. More information on \cite[Theorem 2.3]{NP} is collected in Remark \ref{nual}. \begin{theorem} \label{ab} Assume Hypothesis \ref{law}. Suppose that there exists a solution $X$ to \eqref{integro} such that (L) in \eqref{ll} holds. \noindent Then there exists a probability measure $\tilde {\mathbb Q}$ on $(\Omega, {\mathcal F})$, which is equivalent to ${\mathbb P}$, having (positive ${\mathbb P}$-a.s.)
density \begin{align} \label{fr} \frac{ d \tilde {\mathbb Q}}{d {\mathbb P}} = \eta = \det{_{2}}(I + D_H G) \, \exp \Big( - \delta (G) \, - \, \frac{1}{2} |G|_{H_0}^2 \Big) \end{align} ($G$ is defined in (\ref{ciao1})), such that the law of $X$ under ${\mathbb P}$ is the same as that of $Y$ under $\tilde {\mathbb Q}$, i.e., \begin{align} \label{00} {\mathbb P} (\omega \,:\, X^{}(\omega) \in A) = {\mathbb P} (X \in A) = \tilde {\mathbb Q} (Y \in A),\;\;\; A \in {\mathcal F}. \end{align} \end{theorem} \begin{proof} {\it Part I.} We verify the applicability of Theorem \ref{ustu} with $$ {\mathcal T} := S = S^X $$ ($S$ is defined in \eqref{s} and $S = I + F$). First, hypothesis (H3) of Theorem \ref{ustu} holds with ${\mathcal T}_l = T$ by Lemma \ref{1} ($T$ is defined in \eqref{t1}). Moreover, (H2) holds by Theorem \ref{2}. It remains to check (H1), i.e., assumptions (i) and (ii) in Hypothesis \ref{uz}. Note that (i) holds by Corollary \ref{dafare}. The main point is to check (ii). By \eqref{c41}, we easily find that $$ \exp \Big( \| D_M F \|_{L^2}^2 \Big) \in L^{4}(\Omega). $$ Thus, to prove \eqref{di} it remains to check that $\exp ( - \delta (F) ) \in L^{4}(\Omega)$, and this follows from Corollary \ref{cia}. {\vskip 1mm \noindent} {\it Part II.} We introduce the measure $\tilde {\mathbb Q}$ and establish \eqref{fr} (without proving the positivity of $\eta$). Recall that Theorem \ref{ustu} says that \begin{align} \label{gir} {\mathbb P} ( A ) = {\mathbb Q} (S^{-1} (A)), \;\;\; A \in {\cal F}, \end{align} where ${\mathbb Q}$ is a probability measure on $(\Omega, {\cal F})$, equivalent to ${\mathbb P}$, with the following (positive ${\mathbb P}$-a.s.)
density $$ \frac{ d {\mathbb Q}}{d {\mathbb P}} = \Lambda_F = \det{_{2}}(I + D_M F) \, \exp \Big( - \delta (F) \, - \, \frac{1}{2} |F|_{H_0}^2 \Big); $$ recall that $X = Y \circ S$, i.e., $X_t (\omega)= Y_t (S(\omega))$, $\omega \in \Omega$, $t \in [0,1]$, and so (see \eqref{ciao1} and \eqref{s}) $$ F = - G \circ S. $$ {\vskip 1mm \noindent} We denote by ${\mathbb E} ^{{\mathbb P}}$ and ${\mathbb E}^{{\mathbb Q}}$ the expectations with respect to ${\mathbb P}$ and ${\mathbb Q}$. \noindent Let $A \in {\mathcal F}.$ Introducing $\Lambda_F^{-1} : \Omega \to {\mathbb R}_+$, where $\Lambda_F^{-1}(\omega) = \frac{1}{\Lambda_F^{} (\omega)}$ if $\Lambda_F(\omega) >0 $ and 0 otherwise (see \cite[Section 1.1]{UZ1}), we find \begin{eqnarray*} {\mathbb P} (X \in A) = {\mathbb P} (\omega\, :\, Y (S(\omega)) \in A ) = {\mathbb P} (\omega \, : \, S(\omega) \in Y^{-1} (A) ) \\ = {\mathbb E}^{{\mathbb P}} [ 1_{(S(\omega) \in Y^{-1} (A))}] = {\mathbb E}^{{\mathbb P}} \Big[ 1_{(S(\omega) \in Y^{-1} (A))} \, \frac{d {\mathbb Q} }{d {\mathbb P}} \frac{d {\mathbb P} }{d {\mathbb Q}} \Big] \\ = {\mathbb E}^{{\mathbb Q}} \Big [ 1_{(S(\omega) \in Y^{-1} (A))} \, \frac{d {\mathbb P} }{d {\mathbb Q}} \Big ] = {\mathbb E}^{{\mathbb Q}} \Big [ 1_{(S(\omega) \in Y^{-1} (A))} \, \Lambda_F^{-1} \Big ] \\ = \, {\mathbb E}^{{\mathbb Q}} \Big [ 1_{(S(\omega) \in Y^{-1} (A))} \, (\det{_{2}}(I + D_H F ) )^{-1} \, \exp \Big ( \delta (F) \, + \, \frac{1}{2} |F|_{H_0}^2 \Big) \Big]. \end{eqnarray*} By the properties of the Carleman-Fredholm determinant (see \cite[Lemma A.2.2]{UZ1}), setting $R = D_H F(\omega)$, $\omega \in \Omega$, we know that $$ (\det{_{2}}(I + R ) )^{-1} = \det{_{2}}\big( (I + R )^{-1} \big) \exp \big({\text{Trace}}(R^2 \, (I + R )^{-1}) \big), $$ where ${\text{Trace}}(R^2 \, (I + R )^{-1}) $ denotes the trace of the trace class (or nuclear) operator $R^2 \, (I + R )^{-1}$ (recall that the composition of two Hilbert-Schmidt operators is a trace class operator). 
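The inversion formula for the Carleman–Fredholm determinant quoted above can be sanity-checked in finite dimensions, where it reduces to $\det_2(I+R) = \det(I+R)\,e^{-{\text{Trace}}\,R}$ for a matrix $R$. The sketch below (matrix size, scaling and seed are arbitrary choices of ours, not taken from the text) verifies $(\det_2(I+R))^{-1} = \det_2\big((I+R)^{-1}\big) \exp\big({\text{Trace}}(R^2(I+R)^{-1})\big)$ numerically:

```python
import numpy as np

def det2(R):
    # Finite-dimensional Carleman-Fredholm determinant:
    # det_2(I + R) = det(I + R) * exp(-Trace(R))
    n = R.shape[0]
    return np.linalg.det(np.eye(n) + R) * np.exp(-np.trace(R))

rng = np.random.default_rng(0)
n = 6
R = 0.3 * rng.standard_normal((n, n))    # small perturbation, so I + R is invertible

I = np.eye(n)
inv_minus_I = np.linalg.inv(I + R) - I   # (I + R)^{-1} written as I + inv_minus_I

lhs = 1.0 / det2(R)
rhs = det2(inv_minus_I) * np.exp(np.trace(R @ R @ np.linalg.inv(I + R)))
assert abs(lhs - rhs) < 1e-10 * abs(lhs)
```

The identity holds exactly in finite dimensions because $R - R^2(I+R)^{-1} = R(I+R)^{-1}$ and ${\text{Trace}}\big(R(I+R)^{-1} + (I+R)^{-1} - I\big) = 0$, which is the cancellation the regularizing exponentials encode.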
Using \eqref{inv}, and the fact that ${\text{Trace}} (MN) = {\text{Trace}} (NM)$ for any Hilbert-Schmidt operators $M$ and $N$, we get \begin{eqnarray*} {\mathbb P} (X \in A) = {\mathbb E}^{{\mathbb Q}} \Big [ 1_{Y^{-1} (A)}(S(\cdot) ) \, \det{_{2}}(I + D_H G (S (\cdot) )) \cdot \\ \cdot\, \exp \Big( {\text{Trace}}\big( (D_H G)(S(\cdot))^2 \, (I + D_H G (S(\cdot)))^{-1} \big) \Big) \, \cdot \, \exp \Big ( - \delta (G \circ S) \, + \, \frac{1}{2} |G \circ S|_{H_0}^2 \Big) \Big]. \end{eqnarray*} Now remark that the law ${\mathbb P}_0$ of $S$ under ${\mathbb P}$, i.e., ${\mathbb P}_0(A) = {\mathbb P} (S^{-1}(A))$, $A \in {\mathcal F}$, is equivalent to ${\mathbb P}$ by Theorem \ref{ustu} (see \cite[Lemma 2.1]{UZ}). Using this fact we can apply Theorem B.6.4 in \cite{UZ1} and obtain the following identity (${\mathbb P}$-a.s. and so also ${\mathbb Q}$-a.s.) $$ \delta (G \circ S) = (\delta (G)) \circ S - \langle G \circ S , F \rangle_{H_0} - {\text{Trace}}\big( (D_H G)(S(\cdot)) \, D_H F \big) $$ $$ =(\delta (G)) \circ S - \langle G \circ S , F \rangle_{H_0} + {\text{Trace}}\big( (D_H G)(S(\cdot))^2 \, (I + D_H G (S(\cdot)))^{-1} \big). $$ We get, since $F = - G \circ S$, \begin{align*} {\mathbb P} (X \in A) = {\mathbb E}^{{\mathbb Q}} \Big [ 1_{Y^{-1} (A)}(S(\cdot) ) \, \det{_{2}}(I + D_H G (S (\cdot) )) \, \cdot \\ \cdot \, \exp \Big ( - (\delta (G)) \circ S + \langle G \circ S , F \rangle_{H_0} \, + \, \frac{1}{2} |G \circ S|_{H_0}^2 \Big) \Big] \\ = {\mathbb E}^{{\mathbb Q}} \Big [ 1_{Y^{-1} (A)}(S(\cdot) ) \, \det{_{2}}(I + D_H G (S (\cdot) )) \, \cdot \\ \cdot \, \exp \Big ( - (\delta (G)) \circ S - \langle G \circ S , G \circ S \rangle_{H_0} + \, \frac{1}{2} |G \circ S|_{H_0}^2 \Big ) \Big] \\ = {\mathbb E}^{{\mathbb Q}} \Big [ 1_{Y^{-1} (A)} (S(\cdot) ) \, \det{_{2}}(I + D_H G (S (\cdot) )) \, \exp \Big ( - (\delta (G)) \circ S \, - \, \frac{1}{2} |G \circ S|_{H_0}^2 \Big ) \Big].
\end{align*} The previous calculations show that $$\det{_{2}}(I + D_H G (S (\cdot) )) \, \exp \Big ( - (\delta (G)) \circ S \, - \, \frac{1}{2} |G \circ S|_{H_0}^2 \Big ) = \eta \circ S \in L^1(\Omega , {\mathbb Q}) $$ and that it is positive ${\mathbb Q}$-a.s. (or ${\mathbb P}$-a.s.). Using that ${\mathbb Q}$ is a Girsanov measure (i.e., that the law of $S$ under ${\mathbb Q}$ is ${\mathbb P}$), it is elementary to check that $\eta \in L^1 (\Omega, {\mathbb P})$ and moreover \begin{align} \label{f9} {\mathbb P} (X \in A)= {\mathbb E}^{{\mathbb P}} \Big [ 1_{A}(Y) \, \det{_{2}}(I + D_H G ) \, \exp \Big ( - \delta (G) \, - \, \frac{1}{2} |G |_{H_0}^2 \Big) \Big]. \end{align} Up to now we know that $\eta \in L^{1}(\Omega)$ and ${\mathbb E}^{{\mathbb P}}[\eta ] =1$. {\vskip 1mm \noindent} {\it Part III.} It remains to show that $\eta >0$, ${\mathbb P}$-a.s., i.e., that $\gamma = \det{_{2}}(I + D_H G) >0$, ${\mathbb P}$-a.s. \smallskip \noindent By Theorem \ref{ustu}, we know that $\det{_{2}}(I + D_H F ) >0$, ${\mathbb P}$-a.s. (or ${\mathbb Q}$-a.s.). This is equivalent to saying that $\gamma \circ S >0$, ${\mathbb P}$-a.s. Assume by contradiction that there exists $A \in {\mathcal F}$ with ${\mathbb P} (A) >0$ such that $\gamma(\omega) \le 0$, for any $\omega \in A$. We have $$ 0 \ge {\mathbb E}^{{\mathbb P}} [1_A \cdot \gamma ] = {\mathbb E}^{{\mathbb Q}} [1_A(S(\cdot) ) \gamma(S(\cdot)) ]. $$ But ${\mathbb E}^{{\mathbb Q}} [1_A(S(\cdot) ) \gamma(S(\cdot)) ]$ is positive if ${\mathbb Q} (S^{-1}(A))>0$. This holds, since ${\mathbb E}^{{\mathbb Q}} [1_{S^{-1}(A)} ]$ $ = {\mathbb E}^{{\mathbb Q}} [1_{A}(S(\cdot)) ] = $ ${\mathbb E}^{{\mathbb P}}[1_A] >0 $. We have found a contradiction. The proof is complete. \end{proof} \noindent The assertion of the theorem implies that $\det{_{2}}(I + D_H G) >0$, ${\mathbb P}$-a.s. This means that under the assumptions of Theorem \ref{ab}, condition (LY) in \eqref{ll1} holds ${\mathbb P}$-a.s.
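Whether a condition of type (LY) holds for a given frozen coefficient can be probed numerically by shooting: every solution of the linear second-order ODE vanishing at $t=0$ is a multiple of the Cauchy solution with $u_0=0$, $u_0'=1$, so the Dirichlet problem has only the zero solution iff that Cauchy solution satisfies $u_1 \neq 0$. A small illustrative sketch with a hand-rolled RK4 integrator (the constant coefficients and step count are our choices, not from the text); the threshold $\pi^2$ is the first Dirichlet eigenvalue of $-d^2/dt^2$ on $[0,1]$:

```python
import math

def cauchy_u1(rho, n_steps=4000):
    # Solve u'' + rho(t) u = 0, u(0)=0, u'(0)=1 by RK4 and return u(1).
    # The BVP u'' + rho u = 0, u(0)=u(1)=0 has only the zero solution iff u(1) != 0.
    h = 1.0 / n_steps
    u, v, t = 0.0, 1.0, 0.0
    def f(t, u, v):
        return v, -rho(t) * u
    for _ in range(n_steps):
        k1u, k1v = f(t, u, v)
        k2u, k2v = f(t + h/2, u + h/2*k1u, v + h/2*k1v)
        k3u, k3v = f(t + h/2, u + h/2*k2u, v + h/2*k2v)
        k4u, k4v = f(t + h, u + h*k3u, v + h*k3v)
        u += h/6 * (k1u + 2*k2u + 2*k3u + k4u)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        t += h
    return u

# below the first Dirichlet eigenvalue pi^2: non-resonance, u(1) > 0
assert cauchy_u1(lambda t: 0.5 * math.pi ** 2) > 0.1
# exactly at rho = pi^2 the Cauchy solution is sin(pi t)/pi, so u(1) = 0 (resonance)
assert abs(cauchy_u1(lambda t: math.pi ** 2)) < 1e-8
```

This is the same dichotomy exploited in the application below, where non-resonance is enforced through the bound $f' < \pi^2$ almost everywhere.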
\smallskip \noindent Since $\eta$ in Theorem \ref{ab} does not depend on $X$, we get immediately \begin{corollary} \label{dtt} Assume Hypothesis \ref{law}. Suppose that we have two solutions to \eqref{np}, $X^1$ and $X^2$, which both satisfy hypothesis (L) in \eqref{ll}. Then $X^1$ and $X^2$ have the same law (i.e., for any Borel set $A \subset \Omega$, we have $ {\mathbb P} (\omega \,:\, X^{1}(\omega) \in A)$ $ = {\mathbb P} (\omega \,:\, X^{2}(\omega) \in A)$). \end{corollary} \begin{remark} \label{nual} {\em In \cite[Theorem 2.3]{NP} it is shown that the assertion of our Theorem \ref{ab} holds with $|\det{_{2}}(I + D_H G)|$ instead of $\det{_{2}}(I + D_H G)$ if one assumes that (i) $f : {\mathbb R} \times {\mathbb R} \to {\mathbb R}$ is of class $C^1$; (ii) $T$ is bijective; (iii) the condition of \cite[Proposition 2.2]{NP} holds (such a condition guarantees the validity of \eqref{ll1}, for any $\omega \in \Omega$, and so it implies \eqref{ll}, for any $\omega \in \Omega$). \noindent We point out that, in the notation of \cite{NP}, $\det{_{2}}(I + D_H G)$ is written as $\det{_{c}}(-D_H G)$. } \end{remark} \subsection{An application} Here we show an explicit stochastic boundary value problem for which uniqueness in law holds, but for which it seems that no known method allows one to prove pathwise uniqueness (see, among others, \cite{CH} and the seminal paper \cite{MW}). For such a problem we can also establish existence of solutions (see also Remark \ref{exi} for a more general existence theorem). The result looks similar to Theorem \ref{Nonres} (where we have proved existence and pathwise uniqueness). However, note that here {\it the non-resonance condition \eqref{ipononres} can be violated in a discrete set of points.} \begin{theorem}\label{Nonres1} Let us consider the boundary value problem \eqref{np} with $f(t,x,y) = f(x)$.
Assume that $f \in C^2_b({\mathbb R})$ and, moreover, that \begin{align}\label{rese} & (i) \; 0< f^{\prime} (x) \le \pi^2, \quad \text{for any} \; x\in {\mathbb R}; \\ \nonumber & (ii) \; A = \{ x \in {\mathbb R} \, :\, f'(x) = \pi^2 \} \;\; \text{is discrete}. \end{align} Then there exists a solution $X$. Moreover, uniqueness in law holds for \eqref{np} (i.e., any solution $Z$ of \eqref{np} has the same law as $X$). \end{theorem} \begin{proof} \underline{\it Uniqueness.} We will suitably apply Corollary \ref{dtt}. To this purpose it is enough to show that any solution $X$ of \eqref{np} verifies condition (L), i.e., there exists an admissible Borel set $\Omega_0 \subset \Omega$ such that $$ \label{123} (L) \begin{cases} \text{for any $\omega \in \Omega_0$, ${\mathbb P}$-a.s., the BVP:} \;\; u_t'' + f'(X_t(\omega)) u_t =0,\;\; u_0= u_1=0, \\ \text{ has only the zero solution.} \end{cases} $$ Let us consider the following set $\Omega_0$: $$ \Omega_0 = \{ \omega \in \Omega \; :\; f'(X_t(\omega)) < \pi^2, \;\;\; t \in [0,1], \; a.e.\}. $$ By looking at $\Omega \setminus \Omega_0$, it is not difficult to prove that $\Omega_0$ is Borel. Note that $\Omega \setminus \Omega_0$ contains all $\omega \in \Omega$ such that there exists an interval $I_{\omega} \subset [0,1]$ on which $f'(X_t(\omega)) = \pi^2$. The proof is now divided into three steps. {\vskip 1mm \noindent} {\it I Step.} We show that, for any $\omega \in \Omega_0$, (L) holds. We will use the following well-known result (a straightforward consequence of \cite[Lemma 3.1, page 92]{CH}). Let $\rho_t$, $t \in [0,1]$, be a real and measurable function. Assume that there exists $h >0$ such that $h < \rho_t < \pi^2$, $t \in [0,1]$, {\em a.e.} Then the linear boundary value problem $v_t'' + \rho_t v_t =0,\;\; v_0= v_1=0,$ has only the zero solution. \noindent Let $\omega \in \Omega_0$.
In order to apply the previous result, we remark that, \begin{align} \label{re} h_{\omega} < f'(X_t(\omega)) < \pi^2, \;\;\; t \in [0,1], \; a.e., \end{align} for some $h_{\omega} >0$. This follows, since $t \mapsto f'( X_t(\omega))$ is continuous and positive on $[0,1]$. {\vskip 1mm \noindent} {\it II Step.} We show that ${\mathbb P} (\Omega_0)=1$. \noindent Take any $\omega \in \Omega \setminus \Omega_0$. There exists a time interval $I_{\omega} \subset [0,1]$ such that $$ \pi^2 = f'(X_t(\omega)), \;\;\; t \in I_{\omega}. $$ By using the continuity of the mapping $t \mapsto f'(X_t(\omega)) $ and the fact that $A$ is discrete, we infer that there exists $x_{\omega} \in A$ such that $X_t(\omega) = x_{\omega}$, $t \in J_{\omega}$, for some time interval $J_{\omega}$ contained in $I_{\omega}.$ This means that $$ \int_{0}^{1} K (t,s) f(X_s(\omega))ds + Y_t(\omega) = x_{\omega}, $$ for any $t \in J_{\omega}$ (see Lemma \ref{equiv2}). Differentiating with respect to $t$, we get $$ \int_{0}^{1} \frac{\partial K}{\partial t}(t,s) f(X_s(\omega)) ds = - Y_t'(\omega) = \int_0^1 \omega_s ds - \omega _t, \;\;\; t \in J_{\omega}. $$ It is well-known that the map $\xi_t(\omega) = \int_{0}^{1} \frac{\partial K} {\partial t}(t,s) f(X_s(\omega)) ds $ belongs to $C^1([0,1])$. We have found $$ \omega _t = \int_0^1 \omega_s ds - \xi_t(\omega),\;\;\; t \in J_{\omega}. $$ On the right hand side, we have a function which is $C^1$ on $J_{\omega}$. This means that, for any $\omega \in \Omega \setminus \Omega_0$, there exists a time interval on which $\omega$ is a $C^1$-function. Since the Wiener process (see \eqref{wie}), ${\mathbb P}$-a.s., has trajectories which are never of bounded variation in any time interval of $[0, 1]$, we have that ${\mathbb P} (\Omega \setminus \Omega_0)=0$. {\vskip 1mm \noindent} {\it III Step. } We prove that, for any $\omega \in \Omega_0$, ${\mathbb P}$-a.s., we have $\omega + H_0 \subset \Omega_0$. Assume by contradiction that this is not true. 
This means that, there exists a Borel set $\Omega' \subset \Omega_0$ with ${\mathbb P} (\Omega') >0$, such that, for any $\omega \in \Omega' $ there exists $h \in H_0$ with $\omega + h \not \in \Omega_0$. Let us consider such $\omega$ and $h$. \noindent Arguing as before, we find that there exists a time interval $J_{\omega +h} \subset [0,1]$ and some $x_{\omega +h } \in A$ such that $X_t(\omega +h) = x_{\omega+h}$, $t \in J_{\omega +h}$. This means that $$ \omega _t + h_t = \int_0^1 \omega_s ds + \int_0^1 h_s ds - \xi_t(\omega +h),\;\;\; t \in J_{\omega +h}. $$ We have found that for each $\omega \in \Omega'$ there exists a time interval on which $\omega$ is of bounded variation. This contradicts the fact that $ {\mathbb P} (\Omega') >0$ and finishes the proof of uniqueness. {\vskip 1mm \noindent} \underline{\it Existence.} The proof is divided into three steps. {\vskip 1mm \noindent} {\it I Step.} For any $\omega \in \Omega$, consider the sequence $(X^n(\omega)) $, with $X^1_t(\omega) = 0$, $t \in [0,1]$, and $$ X^{n+1}_t(\omega) = \int_{0}^{1} K (t,s) f(X_s^n(\omega))ds + Y_t(\omega), \;\; n \ge 1, \;\; t \in [0,1]. $$ Using the boundedness of $f$, an application of the Ascoli-Arzel\`a theorem shows that, for any $\omega \in \Omega$, there exists a subsequence $(X^k(\omega))$ (possibly depending on $\omega$) which converges in $C([0,1])$ to a continuous function $X(\omega)$. It is then clear that, for any $\omega \in \Omega$, we have \begin{equation} \label{cr} X_t(\omega) = \int_{0}^{1} K (t,s) f(X_s(\omega))ds + Y_t(\omega), \; \;\; t \in [0,1]. \end{equation} The main difficulty is that the previous construction does not clarify the measurable dependence of $X$ on $\omega$. To this purpose we will suitably modify $X$ in order to obtain the required measurability property. 
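Step I of the existence proof can be mimicked numerically. The sketch below iterates $X^{n+1}_t = \int_0^1 K(t,s) f(X^n_s)\,ds + Y_t$ on a grid. We assume the standard Dirichlet Green's function $K(t,s) = \min(t,s)(1-\max(t,s))$ of $-u''$ on $[0,1]$ (the kernel is defined earlier in the paper), and take an illustrative bounded $f$ and a fixed stand-in path $Y$ — both our choices. The paper extracts a convergent subsequence via Ascoli–Arzelà; for a $1$-Lipschitz $f$ the full sequence already contracts, since $\sup_t \int_0^1 K(t,s)\,ds = 1/8$:

```python
import numpy as np

def green_K(t, s):
    # Dirichlet Green's function of -u'' on [0,1]: K(t,s) = min(t,s) * (1 - max(t,s))
    return np.minimum(t, s) * (1.0 - np.maximum(t, s))

m = 400
t = np.linspace(0.0, 1.0, m)
h = t[1] - t[0]
w = np.full(m, h); w[0] = w[-1] = h / 2.0      # trapezoidal quadrature weights
K = green_K(t[:, None], t[None, :])

f = np.sin                                      # bounded, 1-Lipschitz (illustrative)
Y = 0.3 * np.sin(2.0 * np.pi * t)               # stand-in for the path Y_t(omega)

X = np.zeros(m)                                 # X^1 = 0, as in Step I
for _ in range(60):                             # X^{n+1} = int K f(X^n) ds + Y
    X = K @ (w * f(X)) + Y

# residual of the fixed-point equation after iterating
residual = np.max(np.abs(X - (K @ (w * f(X)) + Y)))
assert residual < 1e-12
```

The limit satisfies the integral equation and the Dirichlet boundary conditions $X_0 = X_1 = 0$ up to quadrature error, mirroring \eqref{cr}.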
{\vskip 1mm \noindent} {\it II Step.} We investigate when condition (LY) in \eqref{ll1} holds, i.e., for which $\omega \in \Omega $ \begin{align} \label{ll12} \begin{cases} \text{ the linearized BVP:} \; u_t'' + f' (Y_t(\omega)) u_t =0,\;\; u_0= u_1=0, \\ \text{ has only the zero solution.} \end{cases} \end{align} Arguing as in the proof of uniqueness, condition \eqref{ll12} holds in particular if $\omega$ satisfies \begin{align} \label{re2} h_{\omega} < f'(Y_t(\omega)) < \pi^2, \;\;\; t \in [0,1], \; a.e., \end{align} for some $h_{\omega} >0$. On the other hand, if \eqref{re2} does not hold for $\omega^0 \in \Omega$, then there exists $x_{\omega^0} \in A$ such that $Y_t(\omega^0) = x_{\omega^0}$, $t \in J_{\omega^0}$, for some time interval $J_{\omega^0} \subset [0,1]$. Differentiating with respect to $t$, we get $$ 0 = -\int_0^1 \omega_s^0 ds + \omega _t^0, \;\;\; t \in J_{\omega^0}. $$ This implies that $\omega _t^0 = \int_0^1 \omega_s^0 ds$, $t \in J_{\omega^0}$. Let us introduce the set $\Lambda \subset \Omega$ of all $\omega$ such that there exists a time interval $I_{\omega} \subset [0,1]$ on which $\omega$ is a function of bounded variation. It is not difficult to prove that $\Lambda$ is a Borel subset of $\Omega$. Moreover, $ {\mathbb P} (\Lambda ) =0. $ We have just verified that \eqref{ll12} holds for any $\omega \in \Omega \setminus \Lambda$. {\vskip 1mm \noindent} {\it III Step.} Let us consider the mapping $X(\omega)$ of Step I and introduce $S : \Omega \to \Omega$, $$ S_t(\omega) = \omega_t - \int_0^t f (X_s (\omega)) ds. $$ We have $X(\omega) = Y(S(\omega))$ and $T(S(\omega)) = \omega$, for any $\omega \in \Omega$, as in Section 4.3. Although $S$ {\it is not necessarily measurable,} one can easily check that $$ S^{-1} (\Lambda) = \Lambda. $$ This implies that $S (\Omega \setminus \Lambda) = \Omega \setminus \Lambda$ (clearly ${\mathbb P} (\Omega \setminus \Lambda)=1$).
Now we argue as in the proof of Theorem \ref{2} with its notations. Since we know that \eqref{ll12} is verified when $\omega = S(\theta)$, for some $\theta \in \Omega \setminus \Lambda$, we deduce that the Fr\'echet derivative $D T (S (\omega))$ is an isomorphism from $\Omega $ into $\Omega$, for any $\omega \in \Omega \setminus \Lambda$. \noindent By the inverse function theorem, $T $ is a local diffeomorphism from an open neighborhood $U_{S(\omega)}$ of $S(\omega)$ to an open neighborhood $V_{T(S(\omega))}= V_{\omega}$ of $T(S(\omega)) = \omega $, for any $\omega \in \Omega \setminus \Lambda$. Let us denote by $T^{-1}$ the local inverse function. We deduce that, for any $\omega \in \Omega \setminus \Lambda$, $ S(\theta) = T^{-1}(\theta),\;\;\; \theta \in V_{\omega}. $ \noindent Introduce the open set $$ \Phi = \bigcup_{\omega \in \Omega \setminus \Lambda} \, V_{\omega}. $$ Since $ \Omega \setminus \Lambda \subset \Phi $, we have that ${\mathbb P} (\Phi) =1$. In addition $\Phi$ is an admissible open set in $\Omega$, since, for any $ \omega \in \Omega \setminus \Lambda$, we have that $\omega + H_0 \subset \Omega \setminus \Lambda \subset \Phi.$ The restriction of $S$ to $\Phi$ is a $C^{1}$-function with values in $\Omega$. We define the measurable mapping $$ \hat S : \Omega \to \Omega, \;\;\; \hat S (\omega) = \begin{cases}S(\omega),\;\;\; \omega \in \Phi \\ 0,\;\;\; \omega \in \Omega \setminus \Phi \end{cases} $$ and introduce $ \hat X : \Omega \to \Omega $, $ \hat X_t (\omega) = Y_t(\hat S(\omega)),\;\;\; \omega \in \Omega, \;\; t \in [0,1]. $ \noindent It is clear that $\hat X$ is measurable. Moreover, since $\hat X (\omega) = X(\omega)$, when $\omega \in \Phi$, we have that $\hat X$ verifies \eqref{cr} for any $\omega \in \Phi$. This shows that $\hat X$ is a solution to \eqref{np} and finishes the proof. \end{proof} \noindent An example of $f$ which is covered by the previous result is $$f(x) = {\pi^2 } \int_0^x e^{-t^2} dt, \;\; x \in {\mathbb R}. 
$$ Indeed, $f'(x) = \pi^2 e^{-x^2}$, so that condition \eqref{rese} holds with $A = \{0\}$. \begin{remark} \label{exi} {\em The previous proof shows that an existence result for \eqref{np} holds, more generally, if the following three conditions hold: {\vskip 1mm \noindent} (i) \; $f(t,x,y) = f(x)$ with $f \in C_b({\mathbb R}) \cap C^1({\mathbb R})$; {\vskip 1mm \noindent} (ii) \; there exists a Borel set $\Lambda \subset \Omega$ such that $\Omega \setminus \Lambda$ is admissible and, moreover, $S (\Omega \setminus \Lambda ) \subset \Omega \setminus \Lambda$, where $S : \Omega \to \Omega$ is defined by $$ S_t(\omega) = \omega_t - \int_0^t f (Z_s (\omega)) ds, \;\;\; \omega \in \Omega, \;\; t \in [0,1], $$ where $Z : \Omega \to \Omega$ is any mapping (not necessarily measurable); {\vskip 1mm \noindent} (iii) condition \eqref{ll12} holds, for any $\omega \in \Omega \setminus \Lambda.$ \noindent Under (i)-(iii), the existence of a solution can be proved by adapting the proof of Theorem \ref{Nonres1}. } \end{remark} \section{Remarks on the computation of the Carleman-Fredholm determinant $\det_2(I + D_H G)$} \medskip When dealing with non-adapted versions of the Girsanov theorem (see \cite{K}, \cite{R}, \cite{UZ}), one delicate problem is to find an explicit expression for the Carleman-Fredholm determinant appearing also in \eqref{lf} of Section 4.2. This problem has also been considered in \cite{D}, \cite{DM}, \cite{NP} and \cite{UZ1} for different measurable transformations $\mathcal T$. In particular, the Radon--Nikodym derivative appearing in Theorem \ref{ab} and \cite[Theorem 2.3]{NP} (see also Remark \ref{nual}) contains the explicit term $$ \det{_{2}}(I + D_H G(\omega)) $$ (in the notation of \cite{NP}, $\det{_{2}}(I + D_H G(\omega))$ becomes $\det{_{c}}(-D_H G(\omega))$). \noindent The assertion in our next result is a reformulation of \cite[Lemma 2.4]{NP}. It provides an explicit formula for $\det{_{2}}(I + D_H G(\omega))$.
It is important to point out that our computation of the Carleman-Fredholm determinant $\det_2(I+ D_H G(\omega))$ has been developed with techniques which are completely different from those (based on Malliavin calculus) used for the proof of Lemma 2.4 in \cite{NP}. \noindent Our approach comes from \cite{GGK} and uses functional analysis and the theory of linear ordinary differential equations. For the reader's convenience, we have collected in Appendix B some of the ideas (taken from \cite{GGK}) which have enabled us to perform our computation of the Carleman-Fredholm determinant, together with some important consequences of this approach. \noindent We believe that this method could be useful in other situations (cf. \cite{AN}, \cite{D}, \cite{DM}, \cite{UZ1}). \begin{lemma} \label{new} Assume that $f \in C^1$ and that the linearized BVP $$ u_t'' + b_t(\omega)u_t' + a_t (\omega) u_t =0,\;\; u_0= u_1=0, $$ where $a_t = f_x (t, Y_t(\omega), Y_t'(\omega))$, $b_t = f_y (t, Y_t(\omega), Y_t'(\omega))$, has only the zero solution, for any $\omega \in \Omega$. \noindent Then the following relation holds \begin{equation*} {\det}_{2} (I + D_H G(\omega) ) = Z_1(\omega) \, \exp \Big (\int_0^1 (t a_t + (1-t)b_t) dt \Big), \end{equation*} where $Z_t$ solves the Cauchy problem $$ u_t'' + b_t u'_t + a_t u_t =0,\;\; u_0=0,\;\; u'_0=1. $$ \end{lemma} \begin{proof} The proof is based on ideas developed in Appendix B. More precisely, observe that, by \eqref{BElle}, the assumption in Lemma \ref{new} guarantees that we can apply Theorem \ref{carle0} with $L = D_H G(\omega)$. \end{proof}
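For constant coefficients the scalar reduction in Lemma \ref{new} can be checked against a closed form: with $b_t \equiv 0$ and $a_t \equiv a$ the Cauchy solution is $Z_t = \sin(\sqrt{a}\,t)/\sqrt{a}$, so the formula evaluates to $\frac{\sin\sqrt{a}}{\sqrt{a}}\, e^{a/2}$, which is strictly positive for $0 < a < \pi^2$, consistently with the positivity of $\det_2(I+D_H G)$ obtained in Theorem \ref{ab}. A numerical sketch (the RK4 solver and the parameter $a$ are our choices):

```python
import math

def Z1(a, b, n_steps=4000):
    # u(1) for the Cauchy problem u'' + b(t) u' + a(t) u = 0, u(0)=0, u'(0)=1 (RK4)
    h = 1.0 / n_steps
    u, v, t = 0.0, 1.0, 0.0
    def rhs(t, u, v):
        return v, -b(t) * v - a(t) * u
    for _ in range(n_steps):
        k1 = rhs(t, u, v)
        k2 = rhs(t + h/2, u + h/2 * k1[0], v + h/2 * k1[1])
        k3 = rhs(t + h/2, u + h/2 * k2[0], v + h/2 * k2[1])
        k4 = rhs(t + h, u + h * k3[0], v + h * k3[1])
        u += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += h
    return u

# constant coefficients a_t = a0, b_t = 0: Z_t = sin(sqrt(a0) t)/sqrt(a0)
a0 = 4.0
z1 = Z1(lambda t: a0, lambda t: 0.0)
assert abs(z1 - math.sin(math.sqrt(a0)) / math.sqrt(a0)) < 1e-10

# Lemma: det_2(I + D_H G) = Z_1 * exp(int_0^1 (t a_t + (1-t) b_t) dt) = Z_1 * e^{a0/2}
det2_const = z1 * math.exp(a0 / 2.0)
```

The point of the lemma is precisely this reduction: an infinite-dimensional regularized determinant is obtained from a single scalar initial value problem.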
\section{\bf\boldmath Introduction } \label{Sec-Intro} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0} Quantum Chromodynamics (QCD) has been established as the theory of the strong interaction and explains the properties of hadrons, such as the proton or the neutron, in particular at short distances. Hadrons are composite objects and made up of quarks and antiquarks, which are bound together by the exchange of gluons, the gauge field of the strong force. The corresponding charge is called color, leading to a $SU(3)_c$ gauge theory. This is analogous to the electric charge, which induces the $U(1)$ gauge group of electromagnetism. \\ The path to the discovery of QCD started in the 1960s. By that time, a large number of hadrons had been observed in cosmic ray and accelerator experiments. Hadrons are strongly interacting particles which occur as mesons (spin$~=0,~1$) or baryons (spin$~=1/2,~3/2$). In the early 1960s, investigations were undertaken to classify all hadrons, based on their properties such as flavor and spin quantum numbers and masses. In 1964, M.~Gell-Mann, \cite{GellMann:1964nj}, and G.~Zweig, \cite{Zweig:1964jf}, proposed the quark model as a mathematical description for these hadrons. Three fractionally charged quark flavors, up ($u$), down ($d$) and strange ($s$), known as valence quarks, were sufficient to describe the quantum numbers of the hadron spectrum which had been discovered by then. Baryons are thus considered as bound states of three quarks, and mesons of a quark-antiquark pair. Assuming an approximate $SU(3)$ flavor symmetry, ``the eightfold way'', \cite{GellMann:1964xy,Kokkedee:1969,Close:1979bt}, mass formulas for hadrons built on the basis of quark states could be derived. A great success for the quark model was marked by the prediction of the mass of the $\Omega^-$-baryon before it was finally observed, \cite{Barnes:1964pd}.
In the same year, G\"ursey and Radicati, \cite{Gursey:1992dc}, introduced spin into the model and proposed a larger $SU(6)_{spin-flavor}=SU(2)_{spin}\otimes SU(3)_{flavor}$ symmetry. This allowed the unification of the mass formulas for the spin--$1/2$ and spin--$3/2$ baryons and provided the tool to calculate the ratio of the magnetic moments of the proton and the neutron to be $\approx-3/2$, which is in agreement with experiment within $3\%$, \cite{Beg:1964nm,Sakita:1964qr}. However, this theory required the quarks that gave the correct low-lying baryons to be in a symmetric state under permutations, which contradicts the spin--statistics theorem, \cite{Pauli:1940zz}, since quarks have to be fermions. Greenberg, \cite{Greenberg:1964pe}, resolved this contradiction by introducing a ``symmetric quark model''. It allows quarks to have a new hidden three--valued charge, called color, which is expressed in terms of parafermi statistics. Finally, in 1965, Nambu, \cite{Nambu:1966}, and Han and Nambu, \cite{Han:1965pf}, proposed a new symmetry, $SU(3)_{color}$, which makes the hidden three--valued charge degree of freedom explicit and is equivalent to Greenberg's description. Since there was no explicit experimental evidence of this new degree of freedom, the assumption was made that all physical bound states must be color-neutral,~\cite{Nambu:1966,Han:1965pf,Fritzsch:1972jv}. \\ The possibility to study the substructure of nucleons arose at the end of the $1960$ies with the advent of the Stanford Linear Accelerator {\sf SLAC}, \cite{Mo:1965dv,*Taylor:1967qv}. This facility allowed to perform deeply inelastic lepton-nucleon scattering (DIS) experiments at much higher resolutions than previously possible. The cross section can be parametrized quite generally in terms of several structure functions $F_i$ of the nucleon, \cite{Drell:1963ej,*Derman:1978iz}. 
These were measured for the proton by the {\sf SLAC-MIT} experiments and depend both on the energy transfer $\nu$ and the $4$-momentum transfer $q^2=-Q^2$ from the lepton to the nucleon in the nucleon's rest frame. In the Bjorken limit, $\{Q^2,~\nu~\rightarrow~\infty$,~$Q^2/\nu=$~fixed$\}$, \cite{Bjorken:1968dy}, it was found that the structure functions depend on the ratio of $Q^2$ and $\nu$ only, $F_i(\nu,Q^2)=F_i(Q^2/\nu)$. This phenomenon was called scaling,~\cite{Coward:1967au,*Panofsky:1968pb,*Bloom:1969kc,*Breidenbach:1969kd}, cf. also \cite{Kendall:1991np,*Taylor:1991ew,*Friedman:1991nq}, and had been predicted by Bjorken in his field theoretic analysis based on current algebra,~\cite{Bjorken:1968dy}. As the relevant parameter in the deep-inelastic limit he introduced the Bjorken-scaling variable $x=Q^2/2M\nu$, where $M$ is the mass of the nucleon. After scaling was discovered, R.~Feynman gave a phenomenological explanation for this behavior of the structure functions within the parton model, \cite{Feynman:1969wa,Feynman:1969ej,Feynman:1973xc}. According to this model, the proton consists of several point-like constituents, the partons. His assumption was that during the interaction time -- which is very short since high energies are involved -- these partons behave as free particles off which the electrons scatter elastically. Therefore, the total cross section is just the incoherent sum of the individual electron-parton cross sections, weighted by the probability to find the particular parton inside the proton. The latter is described by the parton density $f_i(z)$. It denotes the probability to find parton $i$ in the proton, carrying the fraction $z$ of the total proton momentum $P$. In the limit considered by Feynman, $z$ becomes equal to $x$, giving an explanation for scaling. This is a direct consequence of the {\sf rigid} correlation $M\nu=q\cdot P$, as observed in experiment.
Even more important for the acceptance of the quark parton model was the observation that the Callan-Gross relation, \cite{Callan:1969uq}, holds, namely that the longitudinal structure function $F_L$ vanishes in the situation of strict scaling. This experimental result favored the idea of the proton containing spin--$1/2$, point-like constituents and ruled out different approaches, such as the algebra of fields, \cite{Lee:1967iu}, or explanations assuming vector--meson dominance, \cite{Sakurai:1969,*Sakurai:1969ss,*Tsai:1969yk,*Fraas:1970vj}. Finally, Bjorken and Paschos,~\cite{Bjorken:1969ja}, linked the parton model to the group theoretic approach by identifying quarks and partons. \\ Today QCD forms one part of the Standard Model of elementary particle physics, supplementing the electroweak $SU_L(2) \times U_Y(1)$ sector, which had been proposed by S.~Weinberg in 1967,~\cite{Weinberg:1967tq}, extending earlier work by S.~Glashow,~\cite{Glashow:1961tr}, cf. also \cite{Salam:1964ry,*Salam:1968rm}, for the leptonic sector. This theory was proved to be renormalizable by G.~'t~Hooft and M.~Veltman in $1972$, \cite{'tHooft:1972ue},~see also \cite{Taylor:1971ff,*Slavnov:1972fg,*Lee:1972fjxLee:1973fn}, if anomalies are canceled, \cite{Bell:1969ts,*Adler:1969gk,Bertlmann:1996xk}, requiring an appropriate representation for {\sf all} fermions. G.~'t~Hooft also proved renormalization for massless Yang-Mills theories, \cite{'tHooft:1971fh}. These gauge theories had first been studied by C.N.~Yang and R.L.~Mills in $1954$,~\cite{Yang:1954ek}, and have the distinctive property that their gauge group is non-abelian, leading to interactions between the gauge--bosons, \cite{Fritzsch:1972jv}, contrary to the case of Quantum Electrodynamics. In $1972/73$, M.~Gell-Mann, H.~Fritzsch and H.~Leutwyler,~\cite{Fritzsch:1973pi}, cf. 
also \cite{Nambu:1966}, proposed to gauge color, which led to an extension of the Standard Model to $SU_L(2)\times U_Y(1)\times SU_c(3)$, including the strongly interacting sector. The dynamical theory of quarks and gluons, Quantum Chromodynamics, is thus a massless Yang-Mills theory which describes the interaction of different quark flavors via massless gluons. Among the semi-simple compact Lie-groups, $SU(3)_c$ turns out to be the only possible gauge group for this theory,~cf.~\cite{Reya:1979zk,Muta:1998vi}. In $1973$, D.~Gross and F.~Wilczek, \cite{Gross:1973id}, and H.~Politzer,~\cite{Politzer:1973fx}, proved by a $1$-loop calculation that Quantum Chromodynamics is an asymptotically free gauge theory, cf. also \cite{tHooft:unpub}, which allows one to perform perturbative calculations for processes at large enough scales. There, the strong coupling constant becomes a sufficiently small perturbative parameter. In the beginning, QCD was not an experimentally well--established theory, which was mainly due to its non--perturbative nature. The large value of the strong coupling constant over a wide energy range prevents one from using perturbation theory. In the course of performing precision tests of QCD, the operator product expansion near the light--cone, the light--cone expansion (LCE),~\cite{Wilson:1969zs,*Zimmermann:1970,*Frishman:1971qn,*Brandt:1970kg}, proved to be important. By applying it to deep--inelastic processes, one facilitates a separation of hadronic bound state effects and the short distance effects. This is possible, since the cross sections of deeply inelastic processes receive contributions from two different resolution scales $\mu^2$. One is the short distance region, where perturbative techniques can be applied. The other describes the long distance region. Here bound state effects are essential and a perturbative treatment is not possible due to the large coupling involved. 
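Asymptotic freedom may be illustrated by the standard $1$--loop solution of the renormalization group equation for the strong coupling constant, quoted here for reference:
```latex
\begin{eqnarray}
\alpha_s(Q^2) &=& \frac{4\pi}{\beta_0 \ln\left(Q^2/\Lambda_{\rm QCD}^2\right)}~,
\qquad \beta_0 = 11 - \frac{2}{3} n_f~,
\end{eqnarray}
```
with $n_f$ the number of active quark flavors. Since $\beta_0>0$ for $n_f<17$, the coupling decreases logarithmically with growing $Q^2$, while it grows large toward the hadronic scale $\Lambda_{\rm QCD}$, separating the two resolution regions described above.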
By means of the LCE, the two energy scales of the process are associated with two different quantities: the Wilson coefficients and the hadronic operator matrix elements or parton densities. The former contain the large scale contributions and can therefore be calculated perturbatively, whereas the latter describe the low scale behavior and are quantities which have to be extracted from experimental data or can be calculated by applying rigorous non--perturbative methods. Using the LCE, one may derive Feynman's parton model and show the equivalence of the approaches by Feynman and Bjorken in the twist--$2$ approximation,~\cite{Gross:1971wn}. The LCE also allows one to go beyond the naive partonic description, which is formulated in the renormalization group improved parton model. Shortly after the formulation of QCD, logarithmic scaling violations of the deep inelastic cross section were observed,~\cite{Chang:1975sv,*Watanabe:1975su}, which had to be expected since QCD is neither an essentially free field theory nor conformally invariant, \cite{Ferrara:1973eg}. The theoretical explanation involves the calculation of higher order corrections to the Wilson coefficients as well as to the anomalous dimensions of the composite operators emerging in the LCE, \cite{Gross:1973juxGross:1974cs,*Georgi:1951sr}, and predicts the correct logarithmic $Q^2$ dependence of the structure functions. In fact, the prediction of scaling violations is one of the strongest experimental evidences for QCD. Thus deeply inelastic scattering played a crucial role in formulating and testing QCD as the theory governing the dynamics of quark systems. Its two most important properties are the confinement postulate - all physical states have to be color singlets - and asymptotic freedom - the strength of the interaction becomes weaker at higher scales, i.e. at shorter distances, cf. e.g. 
\cite{PHOHAD:1971,*Politzer:1974fr,*Marciano:1977su,*Ellis:1979kt,Buras:1979yt,Reya:1979zk,Altarelli:1981ax,*Wilczek:1982yx,Jaffe:1985je,Collins:1987pm,Ellis:1988vi,Mueller:1989hs,Roberts:1990ww,Sterman:1994ce,*Ellis:1991qj,*Brock:1993sz,*Blumlein:1993ar,Mulders:1996}. An important step toward completing the Standard Model was the observation of the three heavy quarks charm (c), bottom (b) and top (t). In $1974$, two narrow resonances, called $\Psi$ and $\Psi'$, were observed at ${\sf SLAC}$ in $e^+e^-$ collisions at $3.1~{\rm GeV}$ and $3.7~{\rm GeV}$, respectively,~\cite{Augustin:1974xw,*Abrams:1974yy}. At the same time another resonance called $J$ was discovered in proton-proton collisions at ${\sf BNL}$,~\cite{Aubert:1974js}, which turned out to be the same particle. Its existence could not be explained in terms of the three known quark flavors and was interpreted as a meson consisting of a new quark, the charm quark. This was an important success of the Standard Model since the existence of the charm had been postulated before,~\cite{Maki:1964ux,*Hara:1963gw,*Bjorken:1964gz}. The charm quark is necessary to cancel the anomalies of the $2$nd family, as well as for the GIM--mechanism, \cite{Glashow:1970gm}, which explains the absence of flavor changing neutral currents. With its mass of $m_c \approx 1.3~{\rm GeV}$ it is much heavier than the light quarks, $m_u \approx 2~{\rm MeV}~,m_d \approx 5~{\rm MeV}~,m_s \approx 104~{\rm MeV}$, \cite{Amsler:2008zzb}, and heavier than the nucleons. In later experiments, two other heavy quarks were detected. In $1977$, the $\Upsilon$ ($=b\overline{b}$) resonance was observed at ${\sf FERMILAB}$, \cite{Herb:1977ek}, and interpreted as a bound state of the even heavier bottom quark, with $m_b \approx 4.2~{\rm GeV}$,~\cite{Amsler:2008zzb}. 
Ultimately, the quark picture was completed in the case of three fermionic families by the discovery of the heaviest quark, the top-quark, in $p\overline{p}$ collisions at the ${\sf TEVATRON}$ in $1995$, \cite{Abe:1994xtxAbe:1994stxAbe:1995hr,*Abachi:1995iq}. Its mass is roughly $m_t \approx 171~{\rm GeV}$,~\cite{Amsler:2008zzb}. Due to their large masses, heavy quarks cannot be considered as constituents of hadrons at rest or bound in atomic nuclei. They are rather excited in high energy experiments and may form short-lived hadrons, with the exception of the top-quark, which decays before it can form a bound state. \\ The theoretical calculation in this thesis relates to the production of heavy quarks in unpolarized deeply inelastic scattering via single photon exchange. In this case, the double differential scattering cross-section can be expressed in terms of the structure functions $F_2(x,Q^2)$ and $F_L(x,Q^2)$. Throughout the last forty years, many DIS experiments have been performed, \cite{Stein:1975yy,*Atwood:1976ys,*Bodek:1979rx,*Mestayer:1982ba,Allkover:1981,*Aubert:1985fx,Bollini:1982ac,*Benvenuti:1984duxBenvenuti:1987zjxBenvenuti:1989rhxBenvenuti:1989fm,Amaudruz:1991nwxAmaudruz:1992bf,*Arneodo:1995cq,Chang:1975sv,*Watanabe:1975su,Anderson:1979mt,Adams:1989emxAdams:1996gu,Jonker:1981dc,*Bergsma:1982ckxBergsma:1984ny,Berge:1989hr,Jones:1994pw,Shaevitz:1995yc,Bosetti:1978kz,*deGroot:1978hr,*Heagy:1980wj,*Morfin:1981kg,*Bosetti:1982yy,*Abramowicz:1982re,*MacFarlane:1983ax,*Allasia:1985hw}. The proton was probed to shortest distances at the Hadron-Elektron-Ring-Anlage {\sf HERA} at {\sf DESY} in Hamburg, \cite{:1981uka,Abt:1993wz,Derrick:1992nw,Ackerstaff:1998av,Hartouni:1995cf}. In these experiments, a large amount of data has been acquired, and in the case of {\sf HERA} it is still being processed, especially for those of the last running period, which was also devoted to the measurement of $F_L(x,Q^2)$, \cite{:2008tx,*Collaboration:2009na}. 
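For single photon exchange and neglecting target mass corrections, the double differential cross section referred to above takes the standard form
```latex
\begin{eqnarray}
\frac{d^2\sigma}{dx\,dQ^2} &=& \frac{2\pi\alpha^2}{xQ^4}
\left[\left(1+(1-y)^2\right)F_2(x,Q^2) - y^2 F_L(x,Q^2)\right]~,
\end{eqnarray}
```
with $\alpha$ the fine structure constant and $y=Q^2/(xs)$ the inelasticity; the kinematic variables are defined in detail in Section~\ref{SubSec-DISKin}.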
Up to now, the structure function $F_2(x,Q^2)$ is measured in a wide kinematic region, \cite{Amsler:2008zzb}, whereas $F_L(x,Q^2)$ was mainly measured in fixed target experiments, \cite{Whitlow:1990gk,*Dasu:1993vk,*Tao:1995uh,*Arneodo:1996qexArneodo:1996kd,*Liang:2004tk}, and determined in the region of large $\nu$,~\cite{Adloff:1996yz}. In the analysis of DIS data, the contributions of heavy quarks play an important role, cf. e.g. \cite{Klein:2004DISProc,Feltesse:2005aa,Lipka:2006ny,Thompson:2007mx,Jung:2009eq}. One finds that the scaling violations of the heavy quark contributions differ significantly from those of the light partons in a rather wide range starting from lower values of $Q^2$. This demands a detailed description. Additionally, it turns out that the heavy quark contributions to the structure functions may reach $25$--$35\%$, especially in the small--$x$ region,~\cite{Lipka:2006ny,Chekanov:2008yd,Blumlein:1996sc,Thompson:2007mx}, which requires a more precise theoretical evaluation of these terms. \\ Due to the kinematic range of {\sf HERA} and the previous DIS experiments, charm is produced much more abundantly and gives a higher contribution to the cross section than bottom, \cite{Thompson:2007mx}. Therefore we subsequently limit our discussion to one species of a heavy quark. Intrinsic heavy quark production is not considered, since data from {\sf HERA} show that this production mechanism hardly gives any contribution, cf. \cite{Brodsky:1980pb,*Hoffmann:1983ah,*Derrick:1995sc,*Harris:1995jx,Adloff:1996xq}. The need for considering heavy quark production has several aspects. One of them is to obtain a better description of heavy flavor production and its contribution to the structure functions of the nucleon. 
On the other hand, increasing our knowledge on the perturbative part of deep--inelastic processes allows for a more precise determination of the QCD--scale $\Lambda_{\rm QCD}$ and the strong coupling constant $\alpha_s$, as well as of the parton--densities from experimental data. For the former, sufficient knowledge of the ${\sf NNLO}$ massive corrections in DIS is required to control the theory--errors on the level of the experimental accuracy and below, \cite{Bethke:2000ai,*Bethke:2004uy,Blumlein:2004ip,Alekhin:2005dxxAlekhin:2005dy,Dittmar:2005ed,Gluck:2006yz,Alekhin:2006zm,Blumlein:2006be,Blumlein:2007dk,Jung:2008tq}. The parton distribution functions are process independent quantities and can be used to describe not only deeply inelastic scattering, but also a large variety of scattering events at (anti--)proton--proton colliders such as the ${\sf TEVATRON}$ at ${\sf FERMILAB}$, and the Large--Hadron--Collider (${\sf LHC}$) at ${\sf CERN}$, \cite{Jung:2009eq}. Heavy quark production is well suited to extract the gluon density since at leading order (LO) only the photon--gluon fusion process contributes to the cross section, \cite{Witten:1975bh,*Babcock:1977fi,*Shifman:1977yb,*Leveille:1978px,Gluck:1980cp}. Next-to-leading order (NLO) calculations, as performed in Refs.~\cite{Laenen:1992zkxLaenen:1992xs,*Riemersma:1994hv}, showed that this process is still dominant, although now other processes contribute, too. The gluon density plays a special role, since it carries roughly $50\%$ of the proton momentum, as data from ${\sf FERMILAB}$ and ${\sf CERN}$ showed already in the 1970s, \cite{Taylor:1976rk}. Improved knowledge on the gluon distribution $G(x,Q^2)$ is also necessary to describe gluon-initiated processes at the ${\sf TEVATRON}$ and at the ${\sf LHC}$. 
The study of heavy quark production will also help to further understand the small-$x$ behavior of the structure functions, showing a steep rise, which is mainly attributed to properties of the gluon density. \\ The perturbatively calculable contributions to the DIS cross section are the Wilson coefficients. In the case of light flavors only, these are denoted by $C_{(q,g),(2,L)}(x,Q^2/\mu^2)$~\footnote{$q$=quark, $g$=gluon} and at present they are known up to the third order in the strong coupling constant,~\cite{Zee:1974du,Bardeen:1978yd,Furmanski:1981cw,Duke:1981ga,*Devoto:1984wu,*Kazakov:1987jk,*Kazakov:1990fu,*SanchezGuillen:1990iq,*vanNeerven:1991nnxZijlstra:1991qcxZijlstra:1992qd,*Kazakov:1992xj,*Larin:1991fv,Moch:1999eb,Larin:1993vu,Larin:1996wd,Retey:2000nq,Moch:2004xu,Blumlein:2004xt,Vermaseren:2005qc}. Including massive quarks in the analysis, the corresponding terms are known exactly at ${\sf NLO}$. The ${\sf LO}$ terms have been derived in the late seventies,~\cite{Witten:1975bh,*Babcock:1977fi,*Shifman:1977yb,*Leveille:1978px,Gluck:1980cp}, and the ${\sf NLO}$ corrections semi--analytically in $z$--space in the mid--1990s, \cite{Laenen:1992zkxLaenen:1992xs,*Riemersma:1994hv}. A fast numerical implementation was given in \cite{Alekhin:2003ev}. In order to describe DIS at the level of twist $\tau=2$, also the anomalous dimensions of the local composite operators emerging in the LCE are needed. These have to be combined with the Wilson coefficients and describe, e.g., the scaling violations of the structure functions and parton densities, \cite{Gross:1973juxGross:1974cs,*Georgi:1951sr}. This description is equivalent to the picture in $z$--space in terms of the splitting functions, \cite{Altarelli:1977zs}. The unpolarized anomalous dimensions are known up to ${\sf NNLO}$~\footnote{In Ref.~\cite{Baikov:2006ai}, the $2$nd moment of the $4$--loop ${\sf NS^+}$ anomalous dimension was calculated.}. 
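The equivalence of the Mellin-- and $z$--space descriptions mentioned above rests on the Mellin transform; schematically, and up to convention--dependent signs and normalization factors, the anomalous dimensions are the moments of the splitting functions,
```latex
\begin{eqnarray}
\gamma_{ij}(N) &=& - \int_0^1 dz\, z^{N-1}\, P_{ij}(z)~, \qquad i,j = q,g~.
\end{eqnarray}
```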
At leading,~\cite{Gross:1973juxGross:1974cs,*Georgi:1951sr}, and at next--to--leading--order level,~\cite{Floratos:1977auxFloratos:1977aue1,Floratos:1978ny,GonzalezArroyo:1979df,GonzalezArroyo:1979he,*Curci:1980uw,*Furmanski:1980cm,Hamberg:1991qt}, they have been known for a long time and were confirmed several times. The ${\sf NNLO}$ anomalous dimensions were calculated by Vermaseren et al. First, fixed moments were calculated in Refs.~\cite{Larin:1996wd,Retey:2000nq,Blumlein:2004xt} and the complete result was obtained in Refs.~\cite{Moch:2004pa,Vogt:2004mw}. \\ The main part of this thesis is the extension of the description of the contributions of heavy quark mass--effects to the deep--inelastic Wilson coefficients to ${\sf NNLO}$. In the course of this, we also obtain a first independent calculation of fixed moments of the fermionic parts of the ${\sf NNLO}$ anomalous dimensions given in Refs.~\cite{Larin:1996wd,Retey:2000nq} before. The calculation of the 3-loop heavy flavor Wilson coefficients in the whole $Q^2$ region is currently not within reach. However, as noticed in Ref. \cite{Buza:1995ie}, a very precise description of the heavy flavor Wilson coefficients contributing to the structure function $F_2(x,Q^2)$ at ${\sf NLO}$ is obtained for $Q^2 \raisebox{-0.07cm}{$\:\stackrel{>}{\sim}\:$} 10~m^2_Q$, disregarding the power corrections $\propto (m_Q^2/Q^2)^k, k \geq 1$. If one considers the charm quark, this covers an important region for deep--inelastic physics at ${\sf HERA}$. In this limit, the massive Wilson coefficients factorize into universal massive operator matrix elements (OMEs) $A_{ij}(x,\mu^2/m^2_Q)$ and the light flavor Wilson coefficients $C_{(q,g),(2,L)}(x,Q^2/\mu^2)$. The former are process independent quantities and describe all quark mass effects. They are given by matrix elements of the leading twist local composite operators $O_i$ between partonic states $j$ ($i,j=q,g$), including quark masses. 
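The factorization just described may be written schematically as follows, suppressing the flavor decomposition and scheme details; the formula serves only to exhibit the structure of the asymptotic representation:
```latex
\begin{eqnarray}
C^{\rm heavy}_{j,(2,L)}\left(x,\frac{Q^2}{m_Q^2}\right) &\simeq&
\sum_{i=q,g} A_{ij}\left(x,\frac{\mu^2}{m_Q^2}\right) \otimes
C_{i,(2,L)}\left(x,\frac{Q^2}{\mu^2}\right)~, \qquad Q^2 \gg m_Q^2~,
\end{eqnarray}
```
where $\otimes$ denotes the Mellin convolution, $[f\otimes g](x)=\int_x^1 (dz/z)\, f(z)\, g(x/z)$, and power corrections $\propto (m_Q^2/Q^2)^k,~k\ge 1$, are neglected.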
The process dependence is described by the massless Wilson coefficients. This factorization has been applied in Ref.~\cite{Blumlein:2006mh} to obtain the asymptotic limit for $F_L^{c\overline{c}}(x,Q^2)$ at {\sf NNLO}. However, unlike the case for $F_2^{c\overline{c}}$, the asymptotic result in this case is only valid for much higher values $Q^2 \raisebox{-0.07cm}{$\:\stackrel{>}{\sim}\:$} 800~m^2_Q$, outside the kinematic domain at ${\sf HERA}$ for this quantity. An analytic result for the ${\sf NLO}$ quarkonic massive operator matrix elements $A_{qj}$ needed for the description of the structure functions at this order was derived in Ref.~\cite{Buza:1995ie} and confirmed in Ref.~\cite{Bierenbaum:2007qe}. A related application of the massive OMEs concerns the formulation of a variable flavor number scheme (VFNS) to describe parton densities of massive quarks at sufficiently high scales. This procedure has been described in detail in Ref.~\cite{Buza:1996wv}, where the remaining gluonic massive OMEs $A_{gj}$ were calculated up to $2$--loop order, thereby giving a full ${\sf NLO}$ description. This calculation was confirmed and extended in \cite{Bierenbaum:2009zt}. \\ In this work, fixed moments of all contributing massive OMEs at the $3$--loop level are calculated and presented, which is a new result, \cite{Bierenbaum:2008dk,Bierenbaum:2008tt,Bierenbaum:2009HERA,Bierenbaum:2009mv}. The OMEs are then matched with the corresponding known $O(\alpha_s^3)$ light flavor Wilson coefficients to obtain the heavy flavor Wilson coefficients in the limit $Q^2\gg~m^2$, which leads to a precise description for $Q^2/m^2 \raisebox{-0.07cm}{$\:\stackrel{>}{\sim}\:$} 10$ in the case of $F_2(x,Q^2)$. It is now possible to calculate all logarithmic contributions $\propto\ln(Q^2/m^2)^k$ to the massive Wilson coefficients in the asymptotic region for general values of the Mellin variable $N$. This applies as well for a large part of the constant term, where also the $O(\varepsilon)$ contributions at the $2$--loop level occur. 
The first calculation of the latter for all--$N$ forms a part of this thesis, too, \cite{Bierenbaum:2007rg,Bierenbaum:2008dk,Bierenbaum:2008tm,Bierenbaum:2008yu,Bierenbaum:2009zt,Bierenbaum:2009HERA}. Thus the constant terms of the unrenormalized $3$--loop results are the only ingredients which are at present known for fixed moments only. Since the OMEs are given by the twist $\tau=2$ composite operators between on--shell partonic states, also fixed moments of the fermionic contributions to the ${\sf NNLO}$ unpolarized anomalous dimensions are obtained, which are thereby confirmed for the first time in an independent calculation, \cite{Bierenbaum:2008dk,Bierenbaum:2008tt,Bierenbaum:2009HERA,Bierenbaum:2009mv}. A more technical aspect of this thesis is the study of the mathematical structure of single scale quantities in renormalizable quantum field theories, \cite{Blumlein:2007dj,Bierenbaum:2007zu,Blumlein:2009tm,Blumlein:2009tj}. One finds that the known results for a large number of different hard scattering processes are most simply expressed in terms of nested harmonic sums, cf. \cite{Blumlein:1998if,Vermaseren:1998uu}. This holds at least up to 3--loop order for massless Yang--Mills theories, cf.~\cite{Blumlein:2004bb,Moch:2004pa,Vogt:2004mw,Vermaseren:2005qc,Dittmar:2005ed,Blumlein:2005im,*Blumlein:2006rr,Blumlein:2007dj}, including the $3$--loop Wilson coefficients and anomalous dimensions. By studying properties of harmonic sums, one may thus obtain significant simplifications, \cite{GonzalezArroyo:1979df}, since they obey algebraic, \cite{Blumlein:2003gb}, and structural relations, \cite{Blumlein:2009ta,Blumlein:2009fz}. Performing the calculation in Mellin--space, one is naturally led to harmonic sums, which is an approach we thoroughly adopt in our calculation. In the course of this, new types of infinite sums occur compared to massless calculations. In the latter case, summation algorithms such as presented in Refs. 
\cite{Vermaseren:1998uu,Weinzierl:2002hv,Moch:2005uc} may be used to calculate the respective sums. The new sums which emerge were calculated using the recent summation package~{\sf Sigma},~\cite{Refined,Schneider:2007,sigma1,sigma2}, written in ${\sf MATHEMATICA}$, which opens up completely new possibilities in symbolic summation and has been tested extensively in the course of this work, \cite{Bierenbaum:2007zu}. For fixed values of $N$, single scale quantities reduce to zero--scale quantities, which can be expressed by rational numbers and certain special numbers such as \emph{multiple zeta values} (MZVs), \cite{Borwein:1999js,Blumlein:2009Zet}, and related quantities. Zero scale problems are much easier to calculate than single scale problems. By working in Mellin--space, single scale quantities are discrete and one can seek a description in terms of difference equations. One may think of an automated reconstruction of the all--$N$ relation out of a \emph{finite number} of Mellin moments given in analytic form. This is possible for recurrent quantities. At least up to 3--loop order, presumably to even higher orders, single scale quantities belong to this class. In this work, \cite{Blumlein:2009tm,Blumlein:2009tj}, we report on a general algorithm for this purpose, which we applied to a problem being currently one of the most sophisticated ones: the determination of the anomalous dimensions and Wilson coefficients to 3--loop order for unpolarized deeply-inelastic scattering, \cite{Moch:2004pa,Vogt:2004mw,Vermaseren:2005qc}. \\ The thesis is based on the publications Refs.~\cite{Bierenbaum:2008yu,Bierenbaum:2009zt,Blumlein:2009tj,Bierenbaum:2009mv}, the conference contributions \cite{Bierenbaum:2007pn,Bierenbaum:2007zz,Bierenbaum:2007rg,Bierenbaum:2007zu,Blumlein:2007dj,Bierenbaum:2008tt,Bierenbaum:2008dk,Bierenbaum:2008tm,Bierenbaum:2009HERA,Blumlein:2009tm} and the papers in preparation \cite{Bierenbaum:prep1,Blumlein:trans}. It is organized as follows. 
Deeply inelastic scattering within the parton model, the LCE and how one obtains improved results using the renormalization group are described in Section~\ref{Sec-DIS}. Section~\ref{Sec-HQDIS} is devoted to the production mechanisms of heavy quarks and their contributions to the cross section. We also discuss the framework of obtaining the heavy flavor Wilson coefficients using massive OMEs in the asymptotic limit $Q^2 \gg m_Q^2$ and comment on the different schemes one may apply to treat heavy quark production, \cite{Bierenbaum:2009zt,Bierenbaum:2009mv}. The massive operator matrix elements are considered in Section~\ref{Sec-REN} and we describe in detail the renormalization of these objects to $3$--loop order, cf. \cite{Bierenbaum:2008dk,Bierenbaum:2008tt,Bierenbaum:2008yu,Bierenbaum:2009HERA,Bierenbaum:2009zt,Bierenbaum:2009mv}. Section~\ref{Sec-REP} contains transformation formulas between the different renormalization schemes. We clarify an apparent inconsistency which we find in the renormalization of the massive contributions to the ${\sf NLO}$ Wilson coefficients given in Refs.~\cite{Laenen:1992zkxLaenen:1992xs,*Riemersma:1994hv} and the massive OMEs as presented in Refs.~\cite{Buza:1995ie,Buza:1996wv}. This is due to the renormalization scheme chosen, cf. Ref.~\cite{Bierenbaum:2009zt,Bierenbaum:2009mv}. In Section~\ref{Sec-2L} the calculation and the results for the $2$--loop massive operator matrix elements up to $O(\varepsilon)$ in dimensional regularization are presented. This confirms the results of Ref. \cite{Buza:1996wv}, cf. \cite{Bierenbaum:2009zt}. The $O(\varepsilon)$ terms are new results and are needed for renormalization at $O(\alpha_s^3)$, cf. \cite{Bierenbaum:2007rg,Bierenbaum:2008tm,Bierenbaum:2008dk,Bierenbaum:2008yu,Bierenbaum:2009zt,Bierenbaum:2009HERA}. We describe the calculation using hypergeometric functions to set up infinite sums containing the parameter $N$ as well. 
These sums are solved using the summation package {\sf Sigma}, cf. \cite{Bierenbaum:2008yu,Bierenbaum:2007zu}. All sums can then be expressed in terms of nested harmonic sums. The same structure is expected for the $3$--loop terms, of which we calculate fixed moments ($N=2,...,10(14)$) using the programs {\sf QGRAF}, \cite{Nogueira:1991ex}, {\sf FORM}, \cite{Vermaseren:2000nd,vanRitbergen:1998pn}, and {\sf MATAD}, \cite{Steinhauser:2000ry} in Section~\ref{Sec-3L}, cf. \cite{Bierenbaum:2008dk,Bierenbaum:2008tt,Bierenbaum:2009HERA,Bierenbaum:2009mv}. Thus we confirm the corresponding moments of the fermionic contributions to all unpolarized $3$--loop anomalous dimensions which have been calculated before in Refs.~\cite{Larin:1996wd,Retey:2000nq,Blumlein:2004xt,Moch:2004pa,Vogt:2004mw}. In Section~\ref{Sec-POL} we calculate the asymptotic heavy flavor Wilson coefficients for the polarized structure function $g_1(x,Q^2)$ to $O(\alpha_s^2)$ following Ref.~\cite{Buza:1996xr} and compare them with the results given there. We present, for the first time, the terms of $O(\alpha_s^2\varepsilon)$ which contribute to the polarized massive OMEs at $O(\alpha_s^3)$ through renormalization, \cite{Bierenbaum:2007zz,Bierenbaum:2007pn,Bierenbaum:prep1}. One may also consider the local flavor non--singlet tensor operator for transversity, \cite{Barone:2001sp}. This is done in Section~\ref{sec-1}. We derive the corresponding massive OMEs for general values of $N$ up to $O(\alpha_s^2\varepsilon)$ and for the fixed moments $N=1\ldots 13$ at $O(\alpha_s^3)$, \cite{Blumlein:trans}. A calculation keeping the full $N$ dependence has not been performed yet. In Section~\ref{Sec-FULL3L} we describe several steps which have been undertaken in this direction so far. This involves the calculation of several non--trivial $3$--loop scalar integrals for all $N$ and the description of a technique to reconstruct the complete result starting from a fixed number of moments, cf. \cite{Blumlein:2009tm,Blumlein:2009tj}. 
Section~\ref{Sec-CONC} contains the conclusions. Our conventions are summarized in Appendix \ref{App-Con}. The set of Feynman--rules used, in particular for the composite operators, is given in Appendix \ref{App-FeynRules}. In Appendix \ref{App-SpeFun} we summarize properties of special functions which frequently occurred in this work. Appendix \ref{App-Sums} contains examples of different types of infinite sums which had to be computed in the present calculation. The main results are shown in Appendices \ref{App-AnDim}--\ref{App-Trans}: various anomalous dimensions and the constant contributions of the different massive OMEs for fixed values of $N$ at $O(\alpha_s^3)$. All Figures in this work have been drawn using ${\sf Axodraw}$, \cite{Vermaseren:1994je}. \newpage \section{\bf\boldmath Deeply Inelastic Scattering} \label{Sec-DIS} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0} Deep--inelastic scattering experiments provide one of the cleanest possibilities to probe the space--like short distance structure of hadrons through the reactions \begin{eqnarray} l^{\pm} N &\rightarrow& l^{\pm} + X \\ \nu_l (\overline{\nu}_l) N &\rightarrow& l^{\mp} + X \\ l^{\mp} N &\rightarrow& \nu_l (\overline{\nu}_l) + X~, \end{eqnarray} with $l = e, \mu$, $\nu_l = \nu_{e, \mu, \tau}$, $N = p, d$ or a nucleus, and $X$ the inclusive hadronic final state. The $4$-momentum transfers $q^2=-Q^2$ involved are at least of the order of $Q^2 \ge 4~{\rm GeV}^2$ and one may resolve spatial scales of approximately $1/\sqrt{Q^2}$. The different deep inelastic charged-- and neutral current reactions offer complementary sensitivity to unfold the quark flavor and gluonic structure of the nucleons. Furthermore, polarized lepton scattering off polarized targets is studied in order to investigate the spin structure of the nucleons. 
The electron--proton experiments performed at ${\sf SLAC}$ in $1968$, \cite{Mo:1965dv,*Taylor:1967qv,Coward:1967au,*Panofsky:1968pb,*Bloom:1969kc,*Breidenbach:1969kd}, cf. also \cite{Kendall:1991np,*Taylor:1991ew,*Friedman:1991nq}, and at ${\sf DESY}$, \cite{Albrecht:1969zyxAlbrecht:1969qmxAlbrecht:1969zb}, found the famous scaling behavior of the structure functions which had been predicted by Bjorken before, \cite{Bjorken:1968dy}. These measurements led to the creation of the parton model, \cite{Feynman:1969wa,Feynman:1969ej,Bjorken:1969ja}. Several years later, after a series of experiments had confirmed its main predictions, the partons were identified with the quarks, anti-quarks and gluons as real quantum fields, which are confined inside hadrons. Being formerly merely mathematical objects,~\cite{GellMann:1964nj,Zweig:1964jf}, they became essential building blocks of the Standard Model of elementary particle physics, besides the leptons and the electroweak gauge fields, thereby solving the anomaly-problem,~\cite{Bell:1969ts,*Adler:1969gk,Bertlmann:1996xk}. In the following years, more studies were undertaken at higher energies, such as the electron--proton/neutron scattering experiments at ${\sf SLAC}$,~\cite{Stein:1975yy,*Atwood:1976ys,*Bodek:1979rx,*Mestayer:1982ba}. Muons were used as probes of the nucleons by ${\sf EMC}$,~\cite{Allkover:1981,*Aubert:1985fx}, ${\sf BCDMS}$,~\cite{Bollini:1982ac,*Benvenuti:1984duxBenvenuti:1987zjxBenvenuti:1989rhxBenvenuti:1989fm}, and ${\sf NMC}$,~\cite{Amaudruz:1991nwxAmaudruz:1992bf,*Arneodo:1995cq}, at the ${\sf SPS}$,~\cite{Clifft:1974zt}, at ${\sf CERN}$, as well as by the ${\sf E26}$--,~\cite{Fox:1974ry,Chang:1975sv,*Watanabe:1975su}, ${\sf~CHIO}$--,~\cite{Anderson:1979mt}, and ${\sf E665}$--,~\cite{Adams:1989emxAdams:1996gu}, collaborations at ${\sf FERMILAB}$. For a general review of $\mu^{\pm}~N$--scattering, see \cite{Sloan:1988qj}. 
The latter experiments were augmented by several high energy neutrino scattering experiments by the ${\sf CHARM}$-- and ${\sf CDHSW}$--collaborations, \cite{Jonker:1981dc,*Bergsma:1982ckxBergsma:1984ny,Holder:1977gn,*VonRuden:1982fp,Berge:1989hr}, and the ${\sf WA21/25}$--experiments, \cite{Harigel:1977,Jones:1994pw}, at the ${\sf SPS}$, and by the ${\sf CCFR}$--collaboration,~\cite{Sakumoto:1990py,*King:1991gs,Shaevitz:1995yc}, at ${\sf FERMILAB}$. Further results on neutrinos were reported in Refs.~\cite{Bosetti:1978kz,*deGroot:1978hr,*Heagy:1980wj,*Morfin:1981kg,*Bosetti:1982yy,*Abramowicz:1982re,*MacFarlane:1983ax,*Allasia:1985hw}, cf. also \cite{Diemoz:1986kt,*Eisele:1986uz,*Mishra:1989jc,*Winter:1991ua,*Schmitz:1997}. The data of these experiments confirmed QCD as the theory describing the strong interactions within hadrons, most notably by the observation of logarithmic scaling violations of the structure functions at higher energies and lower values of $x$, which had been precisely predicted by theoretical calculations,~\cite{Gross:1973juxGross:1974cs,*Georgi:1951sr}. All these experiments had in common that they were fixed target experiments and therefore could only probe a limited region of phase space, $x\ge~10^{-3},~Q^2\le~500~{\rm GeV}^2$. The first electron--proton collider experiments became possible with the advent of the ${\sf HERA}$ facility, which began operating in the beginning of the 1990s at {\sf DESY},~\cite{:1981uka}. This allowed measurements at much larger values of $Q^2$ and at far smaller values of $x$ than before, $x\ge~10^{-4},~Q^2\le~20000~{\rm GeV}^2$. The physics potential for the deep--inelastic experiments at ${\sf HERA}$ was studied during a series of workshops, see \cite{Peccei:1988pa,*Buchmuller:1992rq,*Blumlein:1992qi,*Faessler:1993ku,*Mathiot:1995ir,Bluemlein:1995uc,Ingelman:1996ge,Blumlein:1997ch,Blumlein:2001je}. 
${\sf HERA}$ collected a vast amount of data until its shutdown in $2007$, a part of which is still being analyzed, reaching an unprecedented experimental precision below the level of $1\%$. Two general purpose experiments, ${\sf H1}$, \cite{Abt:1993wz}, and ${\sf ZEUS}$, \cite{Derrick:1992nw}, were operated to study inclusive and various semi-inclusive unpolarized deep--inelastic reactions. Both experiments measured the structure functions $F_{2,L}(x,Q^2)$ as well as the heavy quark contributions to these structure functions to high precision. The theoretical calculations in this thesis are important for the analysis and understanding of the latter, as will be outlined in Section~\ref{Sec-HQDIS}. The {\sf HERMES}--experiment, \cite{Ackerstaff:1998av}, studied scattering of polarized electrons and positrons off polarized gas--targets. {\sf HERA-B}, \cite{Hartouni:1995cf}, was dedicated to the study of {\sf CP}--violation in the $B$--sector. In the following, we give a brief introduction to the theory of DIS and the theoretical tools which are used to predict the properties of structure functions, such as asymptotic scaling and scaling violations. In Section~\ref{SubSec-DISKin}, we discuss the kinematics of the DIS process and derive the cross section for unpolarized electromagnetic electron-proton scattering. In Section~\ref{SubSec-DISParton}, we give a description of the naive parton model, which was employed to explain the results obtained at {\sf SLAC} and gave a first correct qualitative prediction of the observed experimental data. A rigorous treatment of DIS can be obtained by applying the light--cone expansion to the forward Compton amplitude,~\cite{Wilson:1969zs,*Zimmermann:1970,*Frishman:1971qn,*Brandt:1970kg}, which is described in Section~\ref{SubSec-DISComptLCE}. This is equivalent to the QCD--improved parton model at the level of twist $\tau=2$, cf. e.g. \cite{Kogut:1972di,*Yan:1976np,Reya:1979zk,Roberts:1990ww,Muta:1998vi}.
One obtains evolution equations for the structure functions and the parton densities with respect to the mass scales considered. The evolution is governed by the splitting functions,~\cite{Altarelli:1977zs}, or the anomalous dimensions,~\cite{Gross:1973juxGross:1974cs,*Georgi:1951sr}, cf. Section~\ref{SubSec-DISEvol}. \subsection{\bf\boldmath Kinematics and Cross Section} \label{SubSec-DISKin} The schematic diagram for the Born cross section of DIS is shown in Figure~\ref{DISLO1} for single gauge boson exchange. \begin{figure}[h] \begin{center} \includegraphics[angle=0, width=8.0cm]{picmain1.eps} \end{center} \begin{center} \caption{\sf Schematic graph of deeply inelastic scattering for single boson exchange.} \label{DISLO1} \noindent \small \end{center} \normalsize \end{figure} A lepton with momentum $l$ scatters off a nucleon of mass $M$ and momentum $P$ via the exchange of a virtual vector boson with momentum $q$. The momenta of the outgoing lepton and the set of hadrons are given by $l'$ and $P_F$, respectively. Here $F$ can consist of any combination of hadronic final states allowed by quantum number conservation. We consider inclusive final states and thus all the hadronic states contributing to $F$ are summed over. The kinematics of the process can be measured from the scattered lepton or the hadronic final states, cf. e.g. \cite{Blumlein:1992we,Blumlein:1994ii,Arbuzov:1995id}, depending on the respective experiment. The virtual vector boson has space-like momentum with a virtuality $Q^2$ \begin{eqnarray} Q^2&\equiv&-q^2~,\quad q=l-l'~. \label{virtuality} \end{eqnarray} There are two additional independent kinematic variables for which we choose \begin{eqnarray} s &\equiv&(P+l)^2~, \label{sdeep} \\ W^2&\equiv&(P+q)^2=P_F^2~. \label{pdeepf} \end{eqnarray} Here, $s$ is the total cms energy squared and $W$ denotes the invariant mass of the hadronic final state. 
In order to describe the process, one usually refers to Bjorken's scaling variable $x$, the inelasticity $y$, and the total energy transfer $\nu$ of the lepton to the nucleon in the nucleon's rest frame,~\cite{Bjorken:1969mm}. They are defined by \begin{eqnarray} \nu&\equiv&\frac{P.q}{M}~ \,\,\, =~\frac{W^2+Q^2-M^2}{2M}~, \label{nudef} \\ x &\equiv&\frac{-q^2}{2P. q}~ =~\frac{Q^2}{2M\nu}~ \,\,\,\,\, =~\frac{Q^2}{W^2+Q^2-M^2}~, \label{Bjorkenx} \\ y &\equiv&\frac{P.q}{P.l}\hspace{3mm} =~\frac{2M\nu}{s-M^2}~ =~\frac{W^2+Q^2-M^2}{s-M^2}~, \label{Bjorkeny} \end{eqnarray} where lepton masses are disregarded. In general, the virtual vector boson exchanged can be a $\gamma,~Z$ or $W^{\pm}$--boson with the in-- and outgoing lepton, respectively, being an electron, muon or neutrino. In the following, we consider only unpolarized neutral current charged lepton--nucleon scattering. In addition, we will disregard weak gauge boson effects caused by the exchange of a $Z$--boson. This is justified as long as the virtuality is not too large, i.e. $Q^2 < 500~{\rm GeV}^2$, cf.~\cite{Blumlein:1987xk}. We assume the QED- and electroweak radiative corrections to have been carried out,~\cite{Kwiatkowski:1990es,Blumlein:1994ii,Arbuzov:1995id}. The kinematic region of DIS is limited by a series of conditions. The hadronic mass obeys \begin{eqnarray} W^2 \ge M^2~. \label{physreg} \end{eqnarray} Furthermore, \begin{eqnarray} \nu \ge 0~, \quad 0\le y \le 1~, s\ge M^2~. \label{physreg2} \end{eqnarray} From (\ref{physreg}) follows the kinematic region for Bjorken-$x$ via \begin{eqnarray} &&W^2=(P+q)^2=M^2-Q^2\Bigl(1-\frac{1}{x}\Bigr) \ge M^2 \hspace{6mm}\Longrightarrow~0 \le x \le 1~. \label{xregion} \end{eqnarray} Note that $x=1$ describes the elastic process, while the inelastic region is defined by $x < 1$. Additional kinematic constraints follow from the design parameters of the accelerator, \cite{Engelen:1998rf,*Abramowicz:1998ii}. 
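The kinematic relations above are easily cross-checked numerically. The following Python sketch (illustrative toy values, not experimental data) computes $\nu$, $x$ and $y$ from $(W^2,Q^2,s,M)$ and verifies that the equivalent representations in Eqs.~(\ref{nudef})--(\ref{Bjorkeny}) agree, together with the bounds of the physical region:

```python
# Numerical cross-check of the DIS kinematics (illustrative toy values;
# units GeV and GeV^2, lepton masses neglected as in the text).
M  = 0.938    # nucleon mass (assumed value)
Q2 = 10.0     # virtuality Q^2
W2 = 50.0     # invariant mass squared of the hadronic final state
s  = 300.0    # total cms energy squared

nu = (W2 + Q2 - M**2) / (2.0 * M)   # energy transfer, Eq. (nudef)
x  = Q2 / (2.0 * M * nu)            # Bjorken variable, Eq. (Bjorkenx)
y  = 2.0 * M * nu / (s - M**2)      # inelasticity, Eq. (Bjorkeny)

# the equivalent representations must agree
assert abs(x - Q2 / (W2 + Q2 - M**2)) < 1e-12
assert abs(y - (W2 + Q2 - M**2) / (s - M**2)) < 1e-12
# x*y*(s - M^2) = Q^2 follows directly from the definitions
assert abs(x * y * (s - M**2) - Q2) < 1e-9
# physical region: W^2 >= M^2 implies 0 <= x <= 1; likewise 0 <= y <= 1
assert W2 >= M**2 and 0.0 <= x <= 1.0 and 0.0 <= y <= 1.0
```

Note that the product relation $xy(s-M^2)=Q^2$ checked here is a purely algebraic consequence of the three definitions.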
In the case of {\sf HERA}, these were $820(920)~{\rm GeV}$ for the proton beam and $27.5~{\rm GeV}$ for the electron beam, resulting in a cms--energy $\sqrt{s}$ of $300.3(319)~{\rm GeV}$~\footnote{During the final running period of ${\sf HERA}$, low--energy measurements were carried out with $E_p=460~(575)~{\rm GeV}$ in order to extract the longitudinal structure function $F_L(x,Q^2)$,~\cite{:2008tx,*Collaboration:2009na}.}. This additionally imposes kinematic constraints which follow from \begin{eqnarray} Q^2&=&xy(s-M^2)~, \label{kincon1} \end{eqnarray} correlating $s$ and $Q^2$. For the kinematics at ${\sf HERA}$, this implies \begin{eqnarray} Q^2 \le sx \approx 10^5 x~. \label{kincon2} \end{eqnarray} In order to calculate the cross section of deeply inelastic $ep$--scattering, one considers the tree--level transition matrix element for the electromagnetic current. It is given by, cf. e.g. \cite{Reya:1979zk,Roberts:1990ww,Muta:1998vi}, \begin{eqnarray} M_{fi} =e^2\overline{u}(l',\eta')\gamma^{\mu}u(l,\eta) \frac{1}{q^2}\bra{P_F}J^{em}_{\mu}(0)\ket{P,\sigma}~.\label{mfi} \end{eqnarray} Here, the spin of the charged lepton or nucleon is denoted by $\eta (\eta')$ and $\sigma$, respectively. The state vectors of the initial--state nucleons and the hadronic final state are $\ket{P,\sigma}$ and $\ket{P_F}$. The Dirac--matrices are denoted by $\gamma_{\mu}$ and bi--spinors by $u$, see Appendix~\ref{App-Con}. Further $e$ is the electric unit charge and $J^{em}_{\mu}(\xi)$ the quarkonic part of the electromagnetic current operator, which is self-adjoint~: \begin{eqnarray} J_{\mu}^{\dagger}(\xi)=J_{\mu}(\xi)~.\label{jself} \end{eqnarray} In QCD, it is given by \begin{eqnarray} J^{em}_{\mu}(\xi)=\sum_{f,f'} \overline{\Psi}_f(\xi)\gamma_{\mu} \lambda^{em}_{ff'}\Psi_{f'}(\xi)~, \label{current} \end{eqnarray} where $\Psi_f(\xi)$ denotes the quark field of flavor $f$. 
For three light flavors, $\lambda^{em}$ is given by the following combination of Gell--Mann matrices of the flavor group $SU(3)_{flavor}$, cf. \cite{Blumlein:1999sc,Yndurain:1999ui}, \begin{eqnarray} \lambda^{em}=\frac{1}{2}\Bigl(\lambda_{flavor}^3 +\frac{1}{\sqrt{3}}\lambda_{flavor}^8 \Bigr)~. \label{lambdaem} \end{eqnarray} According to standard definitions,~\cite{Reya:1979zk,Field:1989uq,Roberts:1990ww,Muta:1998vi}, the differential inclusive cross section is then given by \begin{eqnarray} l_0'\frac{d\sigma}{d^3l'}=\frac{1}{32(2\pi)^3(l.P)} \sum_{\eta',\eta ,\sigma ,F} (2\pi)^4\delta^4(P_F+l'-P-l) |M_{fi}|^2~. \label{scatcro} \end{eqnarray} Inserting the transition matrix element (\ref{mfi}) into the relation for the scattering cross section (\ref{scatcro}), one notices that the trace over the leptonic states forms a separate tensor, $L^{\mu\nu}$. Similarly, the hadronic tensor $W_{\mu\nu}$ is obtained, \begin{eqnarray} L_{\mu \nu}(l,l')&=&\sum_{\eta',\eta} \Bigl[\overline{u}(l',\eta')\gamma^{\mu}u(l, \eta)\Bigr]^* \Bigl[\overline{u}(l',\eta')\gamma^{\nu}u(l ,\eta)\Bigr]~, \label{leptontens}\\ W_{\mu\nu}(q,P)&=&\frac{1}{4\pi}\sum_{\sigma ,F} (2\pi)^4\delta^4(P_F-q-P) \bra{P,\sigma}J^{em}_{\mu}(0)\ket{P_F} \bra{P_F}J^{em}_{\nu}(0)\ket{P,\sigma}~. \nonumber\\ \label{hadrontens} \end{eqnarray} Thus one arrives at the following relation for the cross section \begin{eqnarray} l_0'\frac{d\sigma}{d^3l'}&=&\frac{1}{4 P.l} \frac{\alpha^2}{Q^4} L^{\mu\nu}W_{\mu\nu} =\frac{1}{2(s-M^2)} \frac{\alpha^2}{Q^4} L^{\mu\nu}W_{\mu\nu}~,\label{crosssec} \end{eqnarray} where $\alpha$ denotes the fine-structure constant, see Appendix~\ref{App-Con}. The leptonic tensor in~(\ref{crosssec}) can be easily computed in the context of the Standard Model, \begin{eqnarray} L_{\mu \nu}(l,l')&=&Tr[l \hspace*{-1.3mm}/ \gamma^{\mu} l' \hspace*{-1.7mm}/ \gamma^{\nu}] =4\left(l_{\mu}l_{\nu}'+l_{\mu}'l_{\nu}-\frac{Q^2}{2} g_{\mu \nu}\right)~. 
\label{leptontens2} \end{eqnarray} This is not the case for the hadronic tensor, which contains non--perturbative hadronic contributions due to long-distance effects. Calculating these effects from first principles requires non-perturbative methods, such as lattice QCD simulations. In recent years, these calculations have been performed with increasing systematic and numerical accuracy,~cf.~e.g.~\cite{Dolgov:2002zm,Gockeler:2007qs,*Baron:2007ti,*Bietenholz:2008fe,*Syritsyn:2009np}. The general structure of the hadronic tensor can be fixed using $S$--matrix theory and the global symmetries of the process. In order to obtain a form suitable for the subsequent calculations, one rewrites Eq.~(\ref{hadrontens}) as, cf. \cite{Itzykson:1980rh,Muta:1998vi}, \begin{eqnarray} W_{\mu\nu}(q,P) &=&\frac{1}{4\pi}\sum_{\sigma} \int d^4\xi\exp(iq\xi) \bra{P}[J^{em}_{\mu}(\xi), J^{em}_{\nu}(0)]\ket{P} \nonumber\\ &=&\frac{1}{2\pi} \int d^4\xi\exp(iq\xi) \bra{P}[J^{em}_{\mu}(\xi), J^{em}_{\nu}(0)]\ket{P}~. \label{hadrontens4} \end{eqnarray} Here, the following notation for the spin-average is introduced in Eq.~(\ref{hadrontens4}) \begin{eqnarray} \frac{1}{2}\sum_{\sigma}\bra{P,\sigma}X\ket{P,\sigma} \equiv \bra{P}X\ket{P}~. \label{spinshort} \end{eqnarray} Further, $[a,b]$ denotes the commutator of $a$ and $b$. Using symmetry and conservation laws, the hadronic tensor can be decomposed into different scalar structure functions and thus be stripped of its Lorentz--structure. In the most general case, including polarization, there are $14$ independent structure functions,~\cite{Blumlein:1996vs,Blumlein:1998nv}, which contain all information on the structure of the proton. However, in the case considered here, only two structure functions contribute. One uses Lorentz-- and time--reversal invariance,~\cite{Wilson:1969zs,*Zimmermann:1970,*Frishman:1971qn,*Brandt:1970kg}, and additionally the fact that the electromagnetic current is conserved.
This enforces electromagnetic gauge invariance for the hadronic tensor, \begin{eqnarray} q_{\mu}W^{\mu\nu}=0~. \end{eqnarray} The leptonic tensor (\ref{leptontens2}) is symmetric and thus $W_{\mu\nu}$ can be taken to be symmetric as well, since all antisymmetric parts are canceled in the contraction. By making a general ansatz for the hadronic tensor using these properties, one obtains \begin{eqnarray} W_{\mu \nu}(q,P)=&& \frac{1}{2x}\left(g_{\mu \nu}+\frac{q_{\mu}q_{\nu}}{Q^2} \right)F_{L}(x,Q^2) \nonumber\\ &+&\frac{2x}{Q^2}\left( P_{\mu}P_{\nu}+\frac{q_{\mu}P_{\nu}+q_{\nu}P_{\mu}}{2x} -\frac{Q^2}{4x^2}g_{\mu\nu}\right)F_{2}(x,Q^2)~. \label{hadrontens2} \end{eqnarray} The dimensionless structure functions $F_2(x,Q^2)$ and $F_L(x,Q^2)$ depend on two variables, Bjorken-$x$ and $Q^2$, contrary to the case of elastic scattering, in which only one variable, e.g. $Q^2$, determines the cross section. Due to hermiticity of the hadronic tensor, the structure functions are real. The decomposition (\ref{hadrontens2}) of the hadronic tensor leads to the differential cross section of unpolarized DIS in case of single photon exchange \begin{eqnarray} \frac{d\sigma}{dxdy}=\frac{2\pi\alpha^2}{xyQ^2} \Bigg\{\Bigl[1+(1-y)^2\Bigr]F_2(x,Q^2) -y^2F_L(x,Q^2)\Biggr\}~.\label{crosssec1} \end{eqnarray} A third structure function, $F_1(x,Q^2)$, \begin{eqnarray} F_1(x,Q^2)=\frac{1}{2x}\Bigl[F_2(x,Q^2)-F_L(x,Q^2)\Bigr]~,\label{F1} \end{eqnarray} which is often found in the literature, is not independent of the previous ones. For completeness, we finally give the full Born cross section for the neutral current, including the exchange of $Z$--bosons, cf.~\cite{Arbuzov:1995id}. 
Not neglecting the lepton mass $m$, it is given by \begin{eqnarray} \frac {d^2 \sigma_{\mathrm{NC}}} {dx dy} &=& \frac{2\pi\alpha^2 }{xyQ^2} \Biggl\{ \Biggl[ 2\left(1-y\right)-2xy\frac{M^2}{s} +\left(1-2\frac{m^2}{Q^2}\right) \left(1+4x^2\frac{M^2}{Q^2} \right) \nonumber \\ && \times \frac{y^2}{1+R(x,Q^2)} \Biggr] {\cal F}_{2}(x,Q^2) +~x y(2-y){\cal F}_{3}(x,Q^2) \Biggr\}~. \label{born} \end{eqnarray} Here, $R(x,Q^2)$ denotes the ratio \begin{eqnarray} R(x,Q^2) = \frac{\sigma_L}{\sigma_T} &=& \left(1+4x^2\frac{M^2}{Q^2}\right) \frac{{\cal F}_{2}(x,Q^2)} {2x{\cal F}_{1}(x,Q^2)} - 1~, \label{rqcd} \end{eqnarray} and the {\sf effective} structure functions ${\cal F}_{l}(x,Q^2),~l = 1 ... 3$ are represented by the structure functions $F_l, G_l$ and $H_l$ via \begin{eqnarray} {\cal F}_{1,2}(x,Q^2) &=& F_{1,2}(x,Q^2) + 2 |Q_{e}| \left( v_{e} + \lambda a_e \right) \chi(Q^2) G_{1,2}(x,Q^2) \nonumber \\ && +~4 \left( v_{e}^{2} + a_{e}^{2} + 2 \lambda v_e a_e \right) \chi^2(Q^2) H_{1,2}(x,Q^2)~, \label{f112} \\ x{\cal F}_3(x,Q^2) &=& -2~\mbox{\rm{sign}}(Q_e) \Biggl\{ |Q_{e}|\left( a_{e} + \lambda v_e \right) \chi(Q^2) xG_{3}(x,Q^2) \nonumber \\ && +\left[2v_{e} a_e + \lambda \left(v_e^2 + a_e^2 \right) \right]\chi^2(Q^2) xH_{3}(x,Q^2) \Biggr\}~. \label{f123} \end{eqnarray} Here, $Q_e=-1,~a_e=1$ in case of electrons and \begin{eqnarray} \lambda&=&\xi \, \mbox{\rm{sign}}(Q_e)~, \label{laxi} \\ v_e&=&1-4 \sin^{2}\theta_{W}^{\mathrm{eff}}~,\\ \chi (Q^2) &=& {G_\mu \over\sqrt{2}}{M_{Z}^{2} \over{8\pi\alpha(Q^2)}}{Q^2 \over{Q^2+M_{Z}^{2}}}~, \label{chiq} \end{eqnarray} with $\xi$ the electron polarization, $\theta_{W}^{\mathrm{eff}}$ the effective weak mixing angle, $G_\mu$ the Fermi constant and $M_Z$ the $Z$--boson mass. \subsection{\bf\boldmath The Parton Model} \label{SubSec-DISParton} The structure functions (\ref{hadrontens2}) depend on two kinematic variables, $x$ and $Q^2$. 
Based on an analysis using current algebra, Bjorken predicted scaling of the structure functions, cf. \cite{Bjorken:1968dy}, \begin{eqnarray} \lim_{\{Q^2,~\nu\}~\rightarrow~\infty,~x=const.} F_{(2,L)}(x,Q^2)=F_{(2,L)}(x)~. \label{scaling} \end{eqnarray} This means that in the Bjorken limit $\{Q^2,~\nu~\}\rightarrow~\infty$, with $x$ fixed, the structure functions depend on the ratio $Q^2/\nu$ only. Soon after this prediction, approximate scaling was observed experimentally in electron-proton collisions at ${\sf SLAC}~(1968)$, \cite{Coward:1967au,*Panofsky:1968pb,*Bloom:1969kc,*Breidenbach:1969kd}, cf. also \cite{Kendall:1991np,*Taylor:1991ew,*Friedman:1991nq}~\footnote{The results obtained at ${\sf DESY}$, \cite{Albrecht:1969zyxAlbrecht:1969qmxAlbrecht:1969zb}, pointed in the same direction, but were less decisive, because values of $Q^2$ as large as at ${\sf SLAC}$ could not be reached.}. Similar to the $\alpha-$particle scattering experiments by Rutherford in $1911$, \cite{Rutherford:1911zz}, the cross section remained large at high momentum transfer $Q^2$, a behavior which is known from point--like targets. This contradicted the expectation that the cross section should decrease rapidly with increasing $Q^2$, since the size of the proton had been determined to be about $10^{-13}~$cm with a smooth charge distribution, \cite{Mcallister:1956ng,*Schopper:1961,*Hofstadter:1963}. However, only in rare cases was a single proton detected in the final state; instead, it consisted of a large number of hadrons. A proposal by Feynman contained the correct ansatz. To account for the observations, he introduced the parton model, \cite{Feynman:1969wa,Feynman:1969ej}, cf. also \cite{Bjorken:1969ja,Feynman:1973xc,Roberts:1990ww,Reya:1979zk,Kogut:1972di,*Yan:1976np}. He assumed the proton to be an extended object consisting of several point-like particles, the partons.
They are bound together by their mutual interaction and behave like free particles during the interaction with the highly virtual photon in the Bjorken-limit~\footnote{Asymptotic freedom, which was discovered later, is instrumental for this property.}. One arrives at the picture of the proton being ``frozen'' while the scattering takes place. The electron scatters elastically off the partons and this process does not interfere with the other partonic states, the ``spectators''. The DIS cross section is then given by the incoherent sum over the individual virtual electron--parton cross sections. Since no information on the particular proton structure is known, Feynman described parton $i$ by the parton distribution function (PDF) $f_i(z)$. It gives the probability to find parton $i$ in the ``frozen'' proton, carrying the fraction $z$ of its momentum. Figure \ref{partmod} shows a schematic picture of the parton model in Born approximation. The in-- and outgoing parton momenta are denoted by $p$ and $p'$, respectively. \begin{figure}[H] \begin{center} \includegraphics[angle=0, width=8.0cm]{picmain2.eps} \end{center} \begin{center} \caption{\sf Deeply inelastic electron-proton scattering in the parton model.} \label{partmod} \noindent \small \end{center} \normalsize \end{figure} \noindent Similar to the scaling variable $x$, one defines the partonic scaling variable $\tau$, \begin{eqnarray} \tau\equiv\frac{Q^2}{2 p.q}~. \label{taudef} \end{eqnarray} It plays the same role as the Bjorken-variable, but for the partonic sub-process. In the collinear parton model~\footnote{For other parton models, as the covariant parton model, cf. \cite{Nash:1971aw,*Landshoff:1971xb,Jackson:1989ph,*Roberts:1996ub,Blumlein:1996tp,Blumlein:2003wk}.}, which is applied throughout this thesis, $p=zP$ holds, i.e., the momentum of the partons is taken to be collinear to the proton momentum. From (\ref{taudef}) one obtains \begin{eqnarray} \tau z=x~. 
\end{eqnarray} Feynman's original parton model, referred to as the naive parton model, neglects the mass of the partons and enforces the strict correlation \begin{eqnarray} \delta\left(\frac{q.p}{M}-\frac{Q^2}{2M}\right)~,\label{feyncor} \end{eqnarray} due to the {\sf experimentally observed scaling behavior}, which leads to $z=x$. The naive parton model then assumes, in accordance with the quark hypothesis, \cite{Bjorken:1969ja,GellMann:1964nj,Zweig:1964jf}, that the proton is made up of three valence quarks, two up and one down type, cf. e.g. \cite{Close:1979bt}. This conclusion was generally accepted only several years after the introduction of the parton model, when various experiments had verified its predictions. Let us consider a simple example, which reproduces the naive parton model at {\sf LO} and already incorporates some aspects of the improved parton model. The latter also allows virtual quark states (sea quarks) and gluons as partons. In the QCD--improved parton model, cf. \cite{Roberts:1990ww,Reya:1979zk,Kogut:1972di,*Yan:1976np}, besides the $\delta$-distribution, (\ref{feyncor}), a function ${\cal W}^i_{\mu\nu}(\tau,Q^2)$ contributes to the hadronic tensor. It is called the partonic tensor and is obtained from the hadronic tensor, Eq.~(\ref{hadrontens4}), by replacing the hadronic states with partonic states $i$. The basic assumption is that the hadronic tensor can be factorized into the PDFs and the partonic tensor, cf. e.g. \cite{Amati:1978wx,*Libby:1978qf,*Libby:1978bx,*Mueller:1978xu,*Collins:1981ta,*Bodwin:1984hc,*Collins:1985ue,Collins:1987pm}. The PDFs are non-perturbative quantities and have to be extracted from experiment, whereas the partonic tensors are calculable perturbatively. A more detailed discussion of this using the LCE is given in Section~\ref{SubSec-DISComptLCE}. The hadronic tensor reads, cf.
\cite{Mulders:1996}, \begin{eqnarray} W_{\mu\nu}(x,Q^2)=\frac{1}{4\pi}\sum_i \int_0^1 dz \int_0^1 d\tau \left(f_i(z)+f_{\overline{i}}(z) \right){\cal W}_{\mu\nu}^i(\tau,Q^2) \delta(x-z \tau)~. \label{hadrontens8} \end{eqnarray} Here, the number of partons and their respective type are not yet specified and we have included the corresponding PDF of the anti-parton, denoted by $f_{\overline{i}}(z)$. Let us assume that the electromagnetic parton current takes the simple form \begin{eqnarray} \bra{i}j^i_{\mu}(\tau)\ket{i}=-ie_i\overline{u}^i\gamma_{\mu}u^i~, \end{eqnarray} similar to the leptonic current, (\ref{mfi}). Here $e_i$ is the electric charge of parton $i$. At {\sf LO} one finds \begin{eqnarray} {\cal W}^i_{\mu\nu}(\tau,Q^2)=\frac{2\pi e_i^2}{q. p^i}\delta(1-\tau) \Bigl[2p^i_{\mu}p^i_{\nu}+p^i_{\mu}q_{\nu} +p^i_{\nu}q_{\mu} -g_{\mu\nu}q.p^i\Bigr]~.\label{partontensLO} \end{eqnarray} The $\delta$-distribution in (\ref{partontensLO}), together with the $\delta$-distribution in (\ref{hadrontens8}), just reproduces Feynman's assumption of the naive parton model, $z=x$. Substitution of (\ref{partontensLO}) into (\ref{hadrontens8}) and projecting onto the structure functions via the decomposition (\ref{hadrontens2}) yields \begin{eqnarray} F_L(x,Q^2)&=&0~,\nonumber\\ F_2(x,Q^2)&=&x\sum_ie^2_i\left(f_i(x)+f_{\overline{i}}(x)\right)~. \label{resfeynLO} \end{eqnarray} This result, at {\sf LO}, is the same as in the naive parton model. It predicts \begin{itemize} \item the Callan-Gross relation, cf. \cite{Callan:1969uq}, \begin{eqnarray} F_L(x,Q^2)=F_2(x,Q^2)-2xF_1(x,Q^2)=0~. \end{eqnarray} \item that the structure functions are scale-independent. \end{itemize} These findings were a success of the parton model, since they reproduced the general behavior of the data as observed by the ${\sf MIT/SLAC}$ experiments.
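To illustrate Eq.~(\ref{resfeynLO}), the following Python sketch evaluates $F_2$ for a purely illustrative set of toy valence distributions; the functional forms and their normalizations are assumptions made for this example only, not fitted PDFs. The sketch checks the quark counting rules and the Callan-Gross relation numerically:

```python
# LO parton-model structure function, Eq. (resfeynLO), for *toy* valence
# distributions (illustrative ansatz, not fitted PDFs):
#   u_v(x) ~ x^{1/2}(1-x)^3,  d_v(x) ~ x^{1/2}(1-x)^4,
# normalized to the quark counting rules  int u_v = 2,  int d_v = 1.
from math import gamma

def beta(a, b):
    # Euler Beta function B(a,b) = Gamma(a)Gamma(b)/Gamma(a+b)
    return gamma(a) * gamma(b) / gamma(a + b)

def u_v(x): return 2.0 / beta(1.5, 4.0) * x**0.5 * (1.0 - x)**3
def d_v(x): return 1.0 / beta(1.5, 5.0) * x**0.5 * (1.0 - x)**4

def F2(x):
    # F_2(x) = x sum_i e_i^2 (f_i + f_ibar)(x); valence only, e_u = 2/3, e_d = -1/3
    return x * ((2.0 / 3.0)**2 * u_v(x) + (1.0 / 3.0)**2 * d_v(x))

# check the counting-rule normalizations by the trapezoidal rule
n = 20000
h = 1.0 / n
int_u = h * sum(u_v(i * h) for i in range(1, n))
int_d = h * sum(d_v(i * h) for i in range(1, n))
assert abs(int_u - 2.0) < 1e-3 and abs(int_d - 1.0) < 1e-3

# Callan-Gross relation at LO: F_L = 0, hence F_2 = 2 x F_1, Eq. (F1)
x0 = 0.3
F1 = (F2(x0) - 0.0) / (2.0 * x0)
assert abs(F2(x0) - 2.0 * x0 * F1) < 1e-12
```

The Beta-function normalization simply enforces $\int_0^1 dx\, u_v(x) = 2$ and $\int_0^1 dx\, d_v(x) = 1$ for the chosen ansatz.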
Finally, we present for completeness the remaining structure functions $G_{2,3}$ and $H_{2,3}$ at the Born level for the complete neutral current, cf. Eq.~(\ref{born}), \begin{eqnarray} G_2(x,Q^2)&=&x\sum_i|e_i|v_i\left(f_i(x)+f_{\overline{i}}(x)\right), \\ H_2(x,Q^2)&=&x\sum_i\frac{1}{4}\left(v_i^2+a_i^2\right) \left(f_i(x)+f_{\overline{i}}(x)\right), \\ xG_3(x,Q^2)&=&x\sum_i|e_i|a_i\left(f_i(x)-f_{\overline{i}}(x)\right), \\ xH_3(x,Q^2)&=&x\sum_i\frac{1}{2}v_i a_i \left(f_i(x)-f_{\overline{i}}(x)\right), \end{eqnarray} with $a_i=1$ and \begin{eqnarray} v_i&=&1-4 |e_i| \sin^{2}\theta_{W}^{\mathrm{eff}}. \end{eqnarray} \subsubsection{Validity of the Parton Model} \label{SubSubSec-DISValpart} The validity of the parton picture can be justified by considering an impulse approximation of the scattering process as seen from a certain class of reference frames, in which the proton momentum is taken to be very large ($P_\infty$-frames). Two things happen to the proton when combining this limit with the Bjorken--limit: The internal interactions of its partons are time dilated, and it is Lorentz contracted in the direction of the collision. As the cms energy increases, the parton lifetimes are lengthened and the time it takes the electron to interact with the proton is shortened. Therefore the condition for the validity of the parton model is given by, cf. \cite{Drell:1970yt,Bjorken:1969ja}, \begin{eqnarray} \frac{\tau_{\rm int}}{\tau_{\rm life}} \ll 1~.\label{cond} \end{eqnarray} Here $\tau_{\rm int}$ denotes the interaction time and $\tau_{\rm life}$ the average life time of a parton. If (\ref{cond}) holds, the proton will be in a single virtual state characterized by a certain number of partons during the entire interaction time. This justifies the assumption that parton $i$ carries a definite momentum fraction $z_i$, $0 \le z_i \le 1$, of the proton in the cms. 
This parton model is also referred to as the collinear parton model, since the proton is assumed to consist of a stream of partons with parallel momenta. Further $\sum_i z_i =1$ holds. In order to estimate the ratio of time scales in (\ref{cond}), one aligns the coordinate system parallel to the proton's momentum. Thus one obtains in the limit ${P^2_3} \gg M^2$, \cite{Blumlein:1997}, \begin{eqnarray} P=\left(\sqrt{P_3^2+M^2};0,0,P_3\right) \approx \left(P_3+\frac{M^2}{2P_3};0,0,P_3\right)~. \label{Pnuc} \end{eqnarray} The photon momentum can be parametrized by \begin{eqnarray} q=(q_{0};q_{3},\vec q_{\bot})~, \label{qinf} \end{eqnarray} where $\vec q_{\bot}$ denotes its transverse momentum with respect to the proton. By choosing the cms of the initial states as reference and requiring that $\nu M$ and $q^2$ approach a limit independent of ${ P_3}$ as ${P_3} \rightarrow \infty$, one finds for the characteristic interaction time scale, using an (approximate) time--energy uncertainty relation, \begin{eqnarray} \tau_{\rm int} &\simeq& \frac{1}{q_0}=\frac{4P_3x}{Q^2(1-x)}~. \label{tauint} \end{eqnarray} The lifetime of the individual partons is estimated accordingly to be inversely proportional to the energy fluctuations of the partons around the average energy $E$ \begin{eqnarray} \tau_{\rm life} \simeq \frac{1}{\sum_{i}E_{i}-E} \label{taulife}~. \end{eqnarray} Here $E_i$ denote the energies of the individual partons. After introducing the two-momentum $\vec{k}_{\bot i}$ of the partons perpendicular to the direction of motion of the proton as given in (\ref{Pnuc}), a simple calculation yields,~cf.~\cite{Blumlein:1997}, \begin{eqnarray} \frac{\tau_{\rm int}}{\tau_{\rm life}} &=& \displaystyle\frac{2x}{Q^2(1-x)}\left( \sum_{i}\frac{(m_i^2+k_{\bot i}^2)}{z_i}-M^2 \right)~, \label{cond2} \end{eqnarray} where $m_i$ denotes the mass of the $i$-th parton. This expression is independent of $P_3$.
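The general expression (\ref{cond2}) can be checked numerically. The short Python sketch below (toy values, chosen only for illustration) evaluates it for a two-parton configuration with momentum fractions $x$ and $1-x$, equal transverse momenta and vanishing masses, and verifies the closed form $2k_\bot^2/(Q^2(1-x)^2)$ for this configuration:

```python
# Evaluate the time-scale ratio of Eq. (cond2) for a toy two-parton state
# (illustrative values; GeV units): fractions z1 = x, z2 = 1-x, equal
# transverse momenta k_t and vanishing parton and nucleon masses.
def ratio(x, Q2, zs, kts, ms, M):
    # Eq. (cond2): 2x/(Q^2 (1-x)) * ( sum_i (m_i^2 + k_i^2)/z_i - M^2 )
    s = sum((m * m + kt * kt) / z for z, kt, m in zip(zs, kts, ms))
    return 2.0 * x / (Q2 * (1.0 - x)) * (s - M * M)

x, Q2, kt = 0.2, 100.0, 0.3
r = ratio(x, Q2, [x, 1.0 - x], [kt, kt], [0.0, 0.0], 0.0)

# closed form for this configuration: 2 k_t^2 / (Q^2 (1-x)^2)
assert abs(r - 2.0 * kt**2 / (Q2 * (1.0 - x)**2)) < 1e-12
# Q^2 >> k_t^2 and x away from 0 and 1: the ratio is small, as required by (cond)
assert r < 1e-2
```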
The above procedure therefore allows one to estimate the probability for deeply inelastic scattering to occur independently of the large momentum of the proton. Accordingly, we now consider the case of two partons with momentum fractions $x$~and~$1-x$ and equal perpendicular momentum, neglecting all masses. One obtains \begin{eqnarray} \frac{\tau_{\rm int}}{\tau_{\rm life}}\approx \frac{2k_{\bot}^2}{Q^2(1-x)^2}~. \end{eqnarray} This example leads to the conclusion that deeply inelastic scattering probes single partons if the virtuality of the photon is much larger than the transverse momenta squared of the partons and Bjorken-$x$ is neither close to one nor zero. In the latter case, $xP_3$ would be the large momentum to be considered. If one does not neglect the quark masses, one has to adjust this picture, as will be described in Section~\ref{SubSec-HQFlav}. \subsection{\bf\boldmath The Light--Cone Expansion} \label{SubSec-DISComptLCE} In quantum field theory one usually considers time-ordered products, denoted by ${\sf T}$, rather than a commutator as it appears in the hadronic tensor in Eq.~(\ref{hadrontens4}). The hadronic tensor can be expressed as the imaginary part of the forward Compton amplitude for virtual gauge boson--nucleon scattering, $T_{\mu\nu}(q,P)$. The optical theorem, depicted graphically in Figure \ref{picopt}, yields \begin{eqnarray} W_{\mu\nu}(q,P) &=&\frac{1}{\pi} {\sf Im}~T_{\mu\nu}(q,P) \label{opttheo}~, \end{eqnarray} where the Compton amplitude is given by, cf.~\cite{Blumlein:1999sc}, \begin{eqnarray} T_{\mu \nu}(q,P)&=&i\int d^4\xi~\exp(iq\xi)\bra{P} {\sf T} J_{\mu}(\xi) J_{\nu}(0)\ket{P}~.
\label{comptontensor} \end{eqnarray} \begin{figure}[H] \begin{center} \includegraphics[angle=0, width=10.0cm]{picmain3.eps} \end{center} \begin{center} \caption{\sf Schematic picture of the optical theorem.} \label{picopt} \noindent \small \end{center} \normalsize \end{figure} By applying the same invariance and conservation conditions as for the hadronic tensor, the Compton amplitude can be expressed in the unpolarized case by two amplitudes $T_L(x,Q^2)$ and $T_2(x,Q^2)$. It is then given by \begin{eqnarray} T_{\mu\nu}(q,P)=&&\frac{1}{2x}\left( g_{\mu\nu} +\frac{q_{\mu}q_{\nu}}{Q^2} \right) T_{L}(x,Q^2) \nonumber\\ &+& \frac{2x}{Q^2}\left( P_{\mu}P_{\nu} +\frac{ q_{\mu}P_{\nu} +q_{\nu}P_{\mu} }{2x} -\frac{Q^2}{4x^2}g_{\mu\nu} \right) T_{2}(x,Q^2)~. \label{comptontens} \end{eqnarray} Using translation invariance, one can show that (\ref{comptontensor}) is crossing symmetric under $q~\rightarrow~-q$, cf. \cite{Jackiw:1972ee,Blumlein:1996vs}, \begin{eqnarray} T_{\mu\nu}(q,P)=T_{\mu\nu}(-q,P)~,\label{crossing1} \end{eqnarray} with $q \rightarrow -q$ being equivalent to $\nu,x \rightarrow (-\nu),(-x)$. The corresponding relations for the amplitudes are then obtained by considering (\ref{comptontens}) \begin{eqnarray} T_{(2,L)}(x,Q^2)&=&T_{(2,L)}(-x,Q^2)~.\label{crossing2} \end{eqnarray} By (\ref{opttheo}) these amplitudes relate to the structure functions $F_L$ and $F_2$ as \begin{eqnarray} F_{(2,L)}(x,Q^2)&=&\frac{1}{\pi}{\sf Im}~T_{(2,L)} (x,Q^2)~.\label{structamp} \end{eqnarray} Another general property of the Compton amplitude is that $T_L$ and $T_2$ are real analytic functions of $x$ at fixed $Q^2$, cf.~\cite{Jaffe:1985je}, i.e. 
\begin{eqnarray} T_{(2,L)}(x^*,Q^2)&=&T^*_{(2,L)}(x,Q^2)~.\label{realan} \end{eqnarray} Using this description one can perform the LCE, \cite{Wilson:1969zs,*Zimmermann:1970,*Frishman:1971qn,*Brandt:1970kg}, or the cut--vertex method in the time--like case, \cite{Frishman:1973pp,Geyer:1977gv,Mueller:1981sg}, respectively, and derive general properties of the moments of the structure functions as will be shown in the subsequent Section. A technical aspect which has been proved very useful is to work in Mellin space rather than in $x$--space. The $N$th Mellin moment of a function $f$ is defined through the integral \begin{eqnarray} \mbox{\rm\bf M}[f](N)\equiv\int_0^1 dz~z^{N-1}f(z)~. \label{Mellintrans} \end{eqnarray} This transform diagonalizes the Mellin--convolution $f\otimes~g$ of two functions $f,~g$ \begin{eqnarray} [f \otimes g](z) = \int_0^1 dz_1 \int_0^1 dz_2~~ \delta(z - z_1 z_2) ~f(z_1) g(z_2)~. \label{Mellinconz} \end{eqnarray} The convolution (\ref{Mellinconz}) decomposes into a simple product of the Mellin-transforms of the two functions, \begin{eqnarray} \mbox{\rm\bf M}[f \otimes g](N) = \mbox{\rm\bf M}[f](N)\mbox{\rm\bf M}[g](N)~. \label{MellinconN} \end{eqnarray} In Eqs. (\ref{Mellintrans}, \ref{MellinconN}), $N$ is taken to be an integer. However, later on one may perform an analytic continuation to arbitrary complex values of $N$,~\cite{Blumlein:2000hw,*Blumlein:2005jg}. Note that it is enough to know all even {\sf or} odd integer moments -- as is the case for inclusive DIS -- of the functions $f,~g$ to perform an analytic continuation to arbitrary complex values $N\in\mathbb{C}$, \cite{Carlson:thesis,*Titchmarsh:1939}. Then Eq.~(\ref{Mellinconz}) can be obtained from the relation for the moments, (\ref{MellinconN}), by an inverse Mellin--transform. Hence in this case the $z$-- and $N$--space description are equivalent, which we will frequently use later on. 
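The diagonalization property (\ref{MellinconN}) is easily verified numerically. The Python sketch below (toy test functions, chosen for illustration only) computes the Mellin convolution (\ref{Mellinconz}) and its moments by simple quadrature and compares with the product of the individual moments:

```python
# Numerical check of Eq. (MellinconN): the Mellin transform maps the
# convolution (Mellinconz) onto an ordinary product.  The test functions
# f and g are toy choices for illustration only.
def mellin(f, N, n=2000):
    # midpoint rule for M[f](N) = int_0^1 dz z^(N-1) f(z)
    h = 1.0 / n
    return h * sum(((i + 0.5) * h)**(N - 1) * f((i + 0.5) * h) for i in range(n))

def conv(f, g, z, n=2000):
    # [f (x) g](z) = int_z^1 dy/y f(y) g(z/y), the Mellin convolution
    h = (1.0 - z) / n
    total = 0.0
    for i in range(n):
        y = z + (i + 0.5) * h
        total += f(y) * g(z / y) / y
    return h * total

f = lambda z: z                    # M[f](N) = 1/(N+1)
g = lambda z: 6.0 * z * (1.0 - z)  # M[g](N) = 6/((N+1)(N+2))

for N in (2, 3, 5):
    lhs = mellin(lambda z: conv(f, g, z), N, n=400)
    rhs = (1.0 / (N + 1)) * (6.0 / ((N + 1) * (N + 2)))
    assert abs(lhs - rhs) < 1e-4
```

For these choices the convolution can also be done analytically, $[f \otimes g](z) = 6z(z - 1 - \ln z)$, whose moments indeed reproduce the product $6/((N+1)^2(N+2))$.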
\subsubsection{Light--Cone Dominance} \label{SubSubSec-LCEdomLCE} It can be shown that in the Bjorken limit, $Q^2\rightarrow \infty,~ \nu \rightarrow \infty$, $x$ fixed, the hadronic tensor is dominated by its contribution near the light--cone, i.e. by the values of the integrand in (\ref{hadrontens4}) at $\xi^2 \approx 0$,~cf.~\cite{Wilson:1969zs,*Zimmermann:1970,*Frishman:1971qn,*Brandt:1970kg}. This can be understood by considering the infinite momentum frame, see Section~\ref{SubSubSec-DISValpart}, \begin{eqnarray} P&=&(P_3;0,0,P_3)~,\\ q&=&\Bigl(\frac{\nu}{2 P_3};\sqrt{Q^2},0,\frac{-\nu}{2 P_3}\Bigr)~,\\ P_3 &\approx& \sqrt{\nu} \rightarrow \infty~. \end{eqnarray} According to the Riemann--Lebesgue theorem, the integral in (\ref{hadrontens4}) is dominated by the region where $q.\xi \approx 0$ due to the rapidly oscillating exponential $\exp(iq.\xi)$, \cite{Reya:1979zk}. One can now rewrite the dot product as, cf. \cite{Yndurain:1999ui}, \begin{eqnarray} q. \xi=\frac{1}{2}(q^0-q^3)(\xi^0+\xi^3) +\frac{1}{2}(q^0+q^3)(\xi^0-\xi^3) -q^1\xi^1 \label{qdotz}~, \end{eqnarray} and infer that the condition $q.\xi \approx 0$ in the Bjorken-limit is equivalent to \begin{eqnarray} \xi^0 \pm \xi^3 \propto \frac{1}{\sqrt{\nu}}~,\quad \xi^1 \propto \frac{1}{\sqrt{\nu}}~, \label{limit} \end{eqnarray} which results in \begin{eqnarray} \xi^2~\approx~0~, \end{eqnarray} called {\sf light--cone dominance}: for DIS in the Bjorken-limit the dominant contribution to the hadronic tensor $W_{\mu\nu}(q,P)$ and the Compton amplitude comes from the region where $\xi^2 \approx 0$. This property allows one to apply the LCE to the current--current correlation in Eq.~(\ref{hadrontens4}) and to the time ordered product in Eq.~(\ref{comptontensor}), respectively. In the latter case it reads for scalar currents, cf.
\cite{Wilson:1969zs,*Zimmermann:1970,*Frishman:1971qn,*Brandt:1970kg}, \begin{eqnarray} \lim_{\xi^2 \rightarrow 0} {\sf T} J(\xi) J(0) \propto \sum_{i,N,\tau} \overline{C}^N_{i,\tau}(\xi^2,\mu^2) \xi_{\mu_1}... \xi_{\mu_N} O_{i,\tau}^{\mu_1...\mu_N}(0,\mu^2)~.\label{lighex} \end{eqnarray} The $O_{i,\tau}(\xi,\mu^2)$ are local operators which are finite as $\xi^2 \rightarrow 0$. The singularities which appear for the product of two operators as their arguments become equal are shifted to the $c$-number coefficients $\overline{C}^N_{i,\tau}(\xi^2,\mu^2)$, the Wilson coefficients, and can therefore be treated separately. In Eq.~(\ref{lighex}), $\mu^2$ is the factorization scale describing the point at which the separation between the perturbative and non--perturbative contributions takes place. The summation index $i$ runs over the set of allowed operators in the model, while the sum over $N$ extends to infinity. Dimensional analysis shows that the degree of divergence of the functions $\overline{C}_{i,\tau}^N$ as $\xi^2 \rightarrow 0$ is given by \begin{eqnarray} \overline{C}^{N}_{i,\tau}(\xi^2,\mu^2) \propto \Biggl(\frac{1}{\xi^2}\Biggr)^{-\tau/2+d_J}~. \label{Cdiv} \end{eqnarray} Here, $d_J$ denotes the canonical dimension of the current $J(\xi)$ and $\tau$ is the twist of the local operator $O_{i,\tau}^{\mu_1..\mu_N}(\xi,\mu^2)$, which is defined by,~cf.~\cite{Gross:1971wn}, \begin{eqnarray} \tau\equiv D_O-N~. \label{twist} \end{eqnarray} $D_O$ is the canonical (mass) dimension of $O_{i,\tau}^{\mu_1..\mu_N}(\xi,\mu^2)$ and $N$ is called its spin. From (\ref{Cdiv}) one can infer that the most singular coefficients are those related to the operators of lowest twist, i.e. in the case of the LCE of the electromagnetic current (\ref{current}), twist $\tau=2$. The contributions due to higher twist operators are suppressed by factors of $(\overline{\mu}^2/Q^2)^k$, with $\overline{\mu}$ a typical hadronic mass scale of $O(1~{\rm GeV})$.
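Eq.~(\ref{twist}) can be illustrated for a quark bilinear operator with $N-1$ covariant derivatives, $\overline{\psi}\gamma_{\mu_1}D_{\mu_2}\ldots D_{\mu_N}\psi$, cf.~Eq.~(\ref{COMP2}) below: with the canonical dimensions $[\psi]=3/2$, $[D_\mu]=1$ one finds
\begin{eqnarray}
D_O=\frac{3}{2}+\frac{3}{2}+(N-1)=N+2~,\qquad \tau=D_O-N=2
\end{eqnarray}
for all $N$. Likewise, the gluonic operator built from two field--strength tensors, $[F_{\mu\nu}]=2$, and $N-2$ covariant derivatives has $D_O=N+2$ and thus also twist $\tau=2$.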
In a wide range of phase--space it is thus sufficient to consider the leading twist contributions only, which we will do in the following and omit the index $\tau$. \subsubsection{A Simple Example} \label{SubSubSec-DISex} In this Section, we consider a simple example of the LCE applied to the Compton amplitude and its relation to the hadronic tensor, neglecting all Lorentz--indices and model dependence, cf. Ref.~\cite{Muta:1998vi,Bardeen:1978yd}. The generalization to the case of QCD is straightforward and hence we will already make some physical arguments which apply in both cases. The scalar expressions corresponding to the hadronic tensor and the Compton amplitude are given by \begin{eqnarray} W(x,Q^2)&=& \frac{1}{2\pi} \int d^4\xi\exp(iq\xi) \bra{P}[J(\xi), J(0)]\ket{P}~, \label{hadrontensEx} \\ T(x,Q^2)&=&i\int d^4\xi~\exp(iq\xi)\bra{P} {\sf T} J(\xi) J(0)\ket{P}~. \label{comptontensorEx} \end{eqnarray} Eq.~(\ref{comptontensorEx}) can be evaluated in the limit $\xi^2 \rightarrow 0$ for twist $\tau=2$ by using the LCE given in Eq.~(\ref{lighex}), where for brevity only one local operator is considered. The coefficient functions in momentum space are defined as \begin{eqnarray} \int \exp(iq.\xi) \xi_{\mu_1}..\xi_{\mu_N} \overline{C}^{N}(\xi^2,\mu^2) &\equiv& -i\Bigl(\frac{2}{-q^2}\Bigr)^{N} q_{\mu_1}...q_{\mu_N} C^{N}\Bigl(\frac{Q^2}{\mu^2}\Bigr) ~. \label{CLExFour} \end{eqnarray} The nucleon states act on the composite operators only and the corresponding matrix elements can be expressed as \begin{eqnarray} \bra{P} O^{\mu_1...\mu_N}(0,\mu^2) \ket{P} &=& A^{N}\Bigl(\frac{P^2}{\mu^2}\Bigr) P^{\mu_1}...P^{\mu_N} + \mbox{\rm trace terms}. \label{HadOMEs} \end{eqnarray} The trace terms in the above equation can be neglected, because due to dimensional counting they would give contributions of the order $1/Q^2,~1/\nu$ and hence are irrelevant in the Bjorken--limit. Thus the Compton amplitude reads, cf. e.g. 
\cite{Roberts:1990ww,Muta:1998vi}, \begin{eqnarray} T(x',Q^2)&=&2\sum_{N=0,2,4,..} C^{N}\Bigl(\frac{Q^2}{\mu^2}\Bigr) A^{N}\Bigl(\frac{P^2}{\mu^2}\Bigr) {x'}^N~,~x'=\frac{1}{x} \label{CompAmplEx1} \end{eqnarray} In (\ref{CompAmplEx1}) only the even moments contribute. This is a consequence of crossing symmetry, Eq.~(\ref{crossing2}), and holds as well in the general case of unpolarized DIS for single photon exchange. In other cases the projection is onto the odd moments. Depending on the type of the observable the series may start at different initial values, cf. e.g.~\cite{Blumlein:1996vs,Blumlein:1998nv}. The sum in Eq. (\ref{CompAmplEx1}) is convergent in the unphysical region $x\ge~1$ and an analytic continuation to the physical region $0 \leq x \leq 1$ has to be performed. Here, one of the assumptions is that scattering amplitudes are analytic in the complex plane except at values of kinematic variables allowing intermediate states to be on mass--shell. This general feature has been proved to all orders in perturbation theory, \cite{Landau:1959fi,Bjorken:1959fd}. In QCD, it is justified on grounds of the parton model. When $\nu \ge Q^2/2M$, i.e. $0\le~x\le~1$, the virtual photon-proton system can produce a physical hadronic intermediate state, so the $T_{(2,L)}(x,Q^2)$ and $T(x,Q^2)$, respectively, have cuts along the positive (negative) real $x$-axis starting from $1$($-1$) and poles at $\nu=Q^2/2M$ $(x = 1,-1)$. The discontinuity along the cut is then just given by (\ref{opttheo}). The Compton amplitude can be further analyzed by applying (subtracted) dispersion relations, cf. \cite{Blumlein:1996vs,Blumlein:1998nv}. Equivalently, one can divide both sides of Eq.~(\ref{CompAmplEx1}) by ${x'}^{m}$ and integrate along the path shown in Figure \ref{CONTOUR}, cf. \cite{Muta:1998vi,Mulders:1996}. 
\begin{figure}[H] \begin{center} \includegraphics[angle=0, width=4.0cm]{picmain4.eps} \end{center} \begin{center} \caption{\sf Integration contour in the complex $x'$-plane.} \label{CONTOUR} \noindent \small \end{center} \normalsize \end{figure} \noindent For the left--hand side of (\ref{CompAmplEx1}) one obtains \begin{eqnarray} \frac{1}{2\pi i}\oint dx' \frac{T(x',Q^2)}{{x'}^m} = \frac{2}{\pi} \int_1^{\infty} \frac{dx'}{{x'}^m} {\sf Im} T(x',Q^2) = 2\int_0^1 dx~x^{m-2} W(x,Q^2)~, \label{ContInt1} \end{eqnarray} where the optical theorem, (\ref{opttheo}), and crossing symmetry, (\ref{crossing2}), have been used. The right--hand side of (\ref{CompAmplEx1}) yields \begin{eqnarray} \frac{1}{\pi i}\sum_{N=0,2,4,..} C^{N}\Bigl(\frac{Q^2}{\mu^2}\Bigr) A^{N}\Bigl(\frac{P^2}{\mu^2}\Bigr) \oint dx'~{x'}^{N-m}=2 C^{m-1}\Bigl(\frac{Q^2}{\mu^2}\Bigr) A^{m-1}\Bigl(\frac{P^2}{\mu^2}\Bigr)~. \label{ContInt2} \end{eqnarray} Here the residue theorem, $\oint dx'~{x'}^{N-m}=2\pi i~\delta_{N,m-1}$ for the closed contour around the origin, projects out the single term $N=m-1$ of the series. Thus from Eqs. (\ref{ContInt1}) and (\ref{ContInt2}) one obtains for the moments of the scalar hadronic tensor defined in Eq.~(\ref{hadrontensEx}) \begin{eqnarray} \int_0^1 dx~x^{N-1} W(x,Q^2)&=& C^{N}\Bigl(\frac{Q^2}{\mu^2}\Bigr) A^{N}\Bigl(\frac{P^2}{\mu^2}\Bigr)~. \label{MomHadEx} \end{eqnarray} \subsubsection{The Light--Cone Expansion applied to DIS} \label{SubSec-LCE} In order to derive the moment--decomposition of the structure functions one essentially has to go through the same steps as in the previous Section. The LCE of the physical forward Compton amplitude (\ref{comptontensor}) at the level of twist $\tau=2$ in the Bjorken--limit is given by, cf.
\cite{Bardeen:1978yd,Buras:1979yt}, \begin{eqnarray} T_{\mu\nu}(q,P)\!\!&\rightarrow&\!\!\sum_{i,N}\Biggl\{ \hspace{1mm} \Bigl[ Q^2g_{\mu\mu_1}g_{\nu\mu_2} +g_{\mu\mu_1}q_{\nu}q_{\mu_2} +g_{\nu\mu_2}q_{\mu}q_{\mu_1} -g_{\mu\nu}q_{\mu_1}q_{\mu_2} \Bigr] C_{i,2}\Bigl(N,\frac{Q^2}{\mu^2}\Bigr) \nonumber\\ && \hspace{-4mm} +\Bigl[ g_{\mu\nu} +\frac{q_{\mu}q_{\nu}}{Q^2} \Bigr] q_{\mu_1}q_{\mu_2} C_{i,L} \Bigl(N,\frac{Q^2}{\mu^2}\Bigr) \Biggr\} q_{\mu_3}...q_{\mu_N} \Bigl(\frac{2}{Q^2}\Bigr)^N \bra{P}O_i^{\mu_1...\mu_N}(\mu^2)\ket{P}~. \nonumber\\ \label{LCETmunu} \end{eqnarray} In contrast to the simple example of Section~\ref{SubSubSec-DISex}, the index $i$ now runs over the allowed operators which emerge from the expansion of the product of two electromagnetic currents, Eq.~(\ref{current}). The possible twist--2 operators are given by~\footnote{Here we consider only the spin--averaged case for single photon exchange. Other operators contribute for parity--violating processes, in the polarized case and for transversity, cf. Sections \ref{Sec-POL} and \ref{sec-1}.},~\cite{Geyer:1977gv}, \begin{eqnarray} \label{COMP1} O^{\sf NS}_{q,r;\mu_1, \ldots, \mu_N} &=& i^{N-1} {\bf S} [\overline{\psi}\gamma_{\mu_1} D_{\mu_2} \ldots D_{\mu_N} \frac{\lambda_r}{2}\psi] - {\rm trace~terms}~, \\ \label{COMP2} O^{\sf S}_{q;\mu_1, \ldots, \mu_N} &=& i^{N-1} {\bf S} [\overline{\psi}\gamma_{\mu_1} D_{\mu_2} \ldots D_{\mu_N} \psi] - {\rm trace~terms}~, \\ \label{COMP3} O^{\sf S}_{g;\mu_1, \ldots, \mu_N} &=& 2 i^{N-2} {\bf S} {\rm \bf Sp}[F_{\mu_1 \alpha}^a D_{\mu_2} \ldots D_{\mu_{N-1}} F_{\mu_N}^{\alpha,a}] - {\rm trace~terms}~. \end{eqnarray} Here, $\bf S$ denotes the symmetrization operator of the Lorentz indices $\mu_1, \ldots, \mu_N$. $\lambda_r$ is the flavor matrix of $SU(n_f)$ with $n_f$ light flavors, $\psi$ denotes the quark field, $F_{\mu\nu}^a$ the gluon field--strength tensor, and $D_{\mu}$ the covariant derivative. The indices $q,~g$ represent the quark-- and gluon--operator, respectively.
${\bf Sp}$ in (\ref{COMP3}) is the color--trace and $a$ the color index in the adjoint representation, cf. Appendix \ref{App-Con}. The quark--fields carry color indices in the fundamental representation, which have been suppressed. The classification of the composite operators (\ref{COMP1}--\ref{COMP3}) in terms of flavor singlet (${\sf S}$) and non-singlet (${\sf NS}$) refers to their symmetry properties with respect to the flavor group $SU(n_f)$. The operator in Eq.~(\ref{COMP1}) belongs to the adjoint representation of $SU(n_f)$, whereas the operators in Eqs. (\ref{COMP2}, \ref{COMP3}) are singlets under $SU(n_f)$. Neglecting the trace terms, one rewrites the matrix element of the composite operators in terms of its Lorentz structure and the scalar operator matrix elements, cf. \cite{Yndurain:1999ui,Roberts:1990ww}, \begin{eqnarray} \bra{P}O_i^{\mu_1...\mu_N}\ket{P}&=& A_i\Bigl(N,\frac{P^2}{\mu^2}\Bigr) P^{\mu_1}...P^{\mu_N}~. \label{NucOMEs} \end{eqnarray} Eq.~(\ref{LCETmunu}) then becomes \begin{eqnarray} T_{\mu\nu}(q,P)&=&2\sum_{i,N}\Biggl\{ \hspace{3mm} \frac{2x}{Q^2}\Bigl[ P_{\mu}P_{\nu} +\frac{P_{\mu}q_{\nu}+P_{\nu}q_{\mu}}{2x} -\frac{Q^2}{4x^2}g_{\mu\nu} \Bigr] C_{i,2}\Bigl(N,\frac{Q^2}{\mu^2}\Bigr) \nonumber \\ && \hspace{12mm} +\frac{1}{2x}\Bigl[ g_{\mu\nu} +\frac{q_{\mu}q_{\nu}}{Q^2} \Bigr] C_{i,L} \Bigl(N,\frac{Q^2}{\mu^2}\Bigr) \Biggr\} \frac{1}{x^{N-1}} A_i\Bigl(N,\frac{P^2}{\mu^2}\Bigr)~. \label{LCETmunu2} \end{eqnarray} Comparing Eq.~(\ref{LCETmunu2}) with the general Lorentz structure expected for the Compton amplitude, Eq.~(\ref{comptontens}), the relations of the scalar forward amplitudes to the Wilson coefficients and nucleon matrix elements can be read off \begin{eqnarray} \label{eqT2l} T_{(2,L)}(x,Q^2)&=&2\sum_{i,N}\frac{1}{x^{N-1}} C_{i,(2,L)}\Bigl(N,\frac{Q^2}{\mu^2}\Bigr) A_i\Bigl(N,\frac{P^2}{\mu^2}\Bigr)~.
\label{T2LMOM} \end{eqnarray} Eq.~(\ref{eqT2l}) is of the same type as Eq.~(\ref{CompAmplEx1}) and one thus obtains for the moments of the structure functions \begin{eqnarray} F_{(2,L)}(N,Q^2)&=&\mbox{\rm\bf M}[F_{(2,L)}(x,Q^2)](N) \label{eqMOM} \\ &=&\sum_{i} C_{i,(2,L)}\Bigl(N,\frac{Q^2}{\mu^2}\Bigr) A_i\Bigl(N,\frac{P^2}{\mu^2}\Bigr)~. \label{F2LMOM} \end{eqnarray} The above equations have already been written in Mellin space, which we will always do from now on, if not indicated otherwise. Eqs. (\ref{T2LMOM}, \ref{F2LMOM}), together with the general structure of the Compton amplitude, Eqs. (\ref{comptontens}, \ref{LCETmunu2}), and the hadronic tensor, Eq.~(\ref{hadrontens2}), are the basic equations for theoretical or phenomenological analyses of DIS in the kinematic regions where higher twist effects can be safely disregarded. Note that the generalization of these equations to electroweak or polarized interactions is straightforward by including additional operators and Wilson coefficients. In order to interpret Eqs. (\ref{T2LMOM}, \ref{F2LMOM}), one uses the fact that the Wilson coefficients $C_{i,(2,L)}$ are independent of the proton state. This is obvious since the wave function of the proton only enters into the definition of the operator matrix elements, cf. Eq.~(\ref{NucOMEs}). In order to calculate the Wilson coefficients, the proton state has therefore to be replaced by a suitably chosen quark or gluon state $i$ with momentum $p$. The corresponding partonic tensor is denoted by ${\cal W}^i_{\mu\nu}(q,p)$, cf. below Eq.~(\ref{feyncor}), with scalar amplitudes ${\cal F}^i_{(2,L)}(\tau,Q^2)$. Here $\tau$ is the partonic scaling variable defined in Eq.~(\ref{taudef}). The LCE of the electromagnetic current does not change and the replacement only affects the operator matrix elements.
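At lowest order this matching is particularly transparent (a schematic statement; normalization factors such as the quark charges are suppressed): the tree--level matrix element of the quark operator between massless quark states equals unity, so that the $O(a_s^0)$ partonic amplitude directly determines the Wilson coefficient,
\begin{eqnarray}
\hat{A}^{(0)}_q(N)=1~,\qquad {\cal F}^{q,(0)}_{2}(N)=C^{(0),{\sf NS}}_{q,2}(N)=1~,\qquad
{\cal F}^{q,(0)}_{L}(N)=C^{(0),{\sf NS}}_{q,L}(N)=0~.
\end{eqnarray}
Beyond leading order both ${\cal F}^i_{(2,L)}$ and the operator matrix elements receive corrections, and the Wilson coefficients follow by dividing out the latter order by order.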
The forward Compton amplitude for photon--quark (gluon) scattering corresponding to ${\cal W}^i_{\mu\nu}(q,p)$ can be calculated order by order in perturbation theory, provided the scale $Q^2$ is large enough for the strong coupling constant to be small. In the same manner, the contributing operator matrix elements with external partons may be evaluated. Finally, one can read off the Wilson coefficients from the partonic equivalent of Eq.~(\ref{T2LMOM})~\footnote{Due to the optical theorem, one may also obtain the Wilson coefficients by calculating the inclusive hard scattering cross sections of a virtual photon with a quark(gluon) using the standard Feynman--rules and phase--space kinematics.}. By identifying the nucleon OMEs (\ref{NucOMEs}) with the PDFs, one obtains the QCD improved parton model. At ${\sf LO}$ it coincides with the naive parton model, which we described in Section~\ref{SubSec-DISParton}, as can be inferred from the discussion below Eq.~(\ref{hadrontens8}). The improved parton model states that in the Bjorken limit at the level of twist $\tau=2$ the unpolarized nucleon structure functions $F_i(x,Q^2)$ are obtained in Mellin space as products of the universal parton densities $f_i(N,\mu^2)$ with process--dependent Wilson coefficients $C_{i,(2,L)}(N,Q^2/\mu^2)$ \begin{eqnarray} F_{(2,L)}(N,Q^2) = \sum_i C_{i,(2,L)}\Bigl(N,\frac{Q^2}{\mu^2}\Bigr) f_i(N,\mu^2) \label{STR} \end{eqnarray} to all orders in perturbation theory. This property is also formulated in the factorization theorems,~\cite{Amati:1978wx,*Libby:1978qf,*Libby:1978bx,*Mueller:1978xu,*Collins:1981ta,*Bodwin:1984hc,*Collins:1985ue}, cf. also \cite{Collins:1987pm}, where it is essential that an inclusive, infrared--safe cross section is considered, \cite{Bassetto:1984ik}. We have not yet dealt with the question of how renormalization is being performed. However, we have already introduced the scale $\mu^2$ into the right--hand side of Eq. (\ref{STR}). 
This scale is called factorization scale. It describes a mass scale at which the separation of the structure functions into the perturbative hard scattering coefficients $C_{i,(2,L)}$ and the non--perturbative parton densities $f_i$ can be performed. This choice is arbitrary at large enough scales and the physical structure functions do not depend on it. This {\sf independence} is used in turn to establish the corresponding renormalization group equation, \cite{Stuckelberg:1951gg,*GellMann:1954fq,*Bogolyubov:1980nc,Symanzik:1970rt,*Callan:1970yg}, which describes the scale--evolution of the Wilson coefficients, parton densities and structure functions w.r.t. $\mu^2$ and $Q^2$, cf. Refs.~\cite{Buras:1979yt,Reya:1979zk,Altarelli:1989ue,Owens:1992hd,Roberts:1990ww,Blumlein:1995cm} and Section~\ref{SubSec-DISEvol}. These evolution equations then predict scaling violations and are used to analyze experimental data in order to unfold the twist--2 parton distributions at some scale $Q^2_0$, together with the QCD--scale $\Lambda_{\rm QCD}$, cf. \cite{Duke:1984ge,Altarelli:1989ue,Bethke:1992gh}. Before finishing this Section, we describe the quantities appearing in Eq.~(\ref{STR}) in detail. Starting from the operators defined in Eqs. (\ref{COMP1})--(\ref{COMP3}), three types of parton densities are expected. Since the question of how heavy quarks are treated in this framework will be discussed in Section~\ref{Sec-HQDIS}, we write the following equations for $n_f$ light flavors in massless QCD. The gluon density is denoted by $G(n_f,N,\mu^2)$ and multiplies the gluonic Wilson coefficients $C_{g,(2,L)}(n_f,N,Q^2/\mu^2)$, which describe the interaction of a gluon with a photon and emerge for the first time at $O(\alpha_s)$. Each quark and its anti--quark have a parton density, denoted by $f_{k(\overline{k})}(n_f,N,\mu^2)$.
These are grouped together into the flavor singlet combination $\Sigma(n_f,N,\mu^2)$ and a non--singlet combination $\Delta_k(n_f,N,\mu^2)$ as follows \begin{eqnarray} \Sigma(n_f,N,\mu^2) &=& \sum_{l=1}^{n_f} \Big[ f_l(n_f,N,\mu^2) + f_{\bar l}(n_f,N,\mu^2) \Big]~, \label{SIGMAPDF}\\ \Delta_k(n_f,N,\mu^2) &=& f_k(n_f,N,\mu^2) + f_{\bar k}(n_f,N,\mu^2) -\frac{1}{n_f}\Sigma(n_f,N,\mu^2)~. \label{DELTAPDF} \end{eqnarray} The distributions multiply the quarkonic Wilson coefficients $C^{\sf S,NS}_{q,(2,L)}(n_f,N,Q^2/\mu^2)$, which describe the hard scattering of a photon with a light quark. The complete factorization formula for the structure functions is then given by \begin{eqnarray} F_{(2,L)}(n_f,N,Q^2)=\frac{1}{n_f} \sum_{k=1}^{n_f} e_k^2 \Biggl[&& \Sigma(n_f,N,\mu^2) C_{q,(2,L)}^{\sf S}\Big(n_f,N,\frac{Q^2}{\mu^2}\Big) \nonumber \\ &+&G(n_f,N,\mu^2) C_{g,(2,L)}^{\sf S}\Big(n_f,N,\frac{Q^2}{\mu^2}\Big) \nonumber \\ &+&n_f \Delta_k(n_f,N,\mu^2) C_{q,(2,L)}^{\sf NS}\Big(n_f,N,\frac{Q^2}{\mu^2}\Big)\Biggr]~. \nonumber\\ \label{FACT2} \end{eqnarray} Note, that one usually splits the quarkonic ${\sf S}$ contributions into a ${\sf NS}$ and pure--singlet (${\sf PS}$) part via ${\sf S~=PS+NS}$. The perturbative expansions of the Wilson coefficients read \begin{eqnarray} C_{g,(2,L)}^{\sf S}\Big(n_f,N,\frac{Q^2}{\mu^2}\Big)&=& \sum_{i=1}^{\infty}a_s^i C_{g,(2,L)}^{(i),{\sf S}}\Big(n_f,N,\frac{Q^2}{\mu^2}\Big)~, \label{Cg2Lpert} \\ C_{q,(2,L)}^{\sf PS}\Big(n_f,N,\frac{Q^2}{\mu^2}\Big)&=& \sum_{i=2}^{\infty}a_s^i C_{q,(2,L)}^{(i),{\sf PS}}\Big(n_f,N,\frac{Q^2}{\mu^2}\Big)~, \label{Cq2LPSpert} \\ C_{q,(2,L)}^{\sf NS}\Big(n_f,N,\frac{Q^2}{\mu^2}\Big)&=& \delta_2+\sum_{i=1}^{\infty}a_s^i C_{q,(2,L)}^{(i),{\sf NS}}\Big(n_f,N,\frac{Q^2}{\mu^2}\Big)~, \label{Cq2LNSpert} \end{eqnarray} where $a_s\equiv\alpha_s/(4\pi)$ and \begin{eqnarray} \delta_2&=&1\mbox{ for }F_2\mbox{ and }\delta_2=0\mbox{ for }F_L~. 
\label{delta2def} \end{eqnarray} These terms are at present known up to $O(a_s^3)$. The $O(a_s)$ terms have been calculated in Refs.~\cite{Zee:1974du,Bardeen:1978yd,Furmanski:1981cw} and the $O(a_s^2)$ contributions by various groups in Refs.~\cite{Duke:1981ga,*Devoto:1984wu,*Kazakov:1987jk,*Kazakov:1990fu,*SanchezGuillen:1990iq,*vanNeerven:1991nnxZijlstra:1991qcxZijlstra:1992qd,*Kazakov:1992xj,*Larin:1991fv,Moch:1999eb}. The $O(a_s^3)$ terms have first been calculated for fixed moments in Refs.~\cite{Larin:1993vu,Larin:1996wd,Retey:2000nq,Blumlein:2004xt} and the complete result for all $N$ has been obtained in Ref.~\cite{Vermaseren:2005qc}~\footnote{Recently, the $O(a_s^3)$ Wilson coefficient for the structure function $xF_3(x,Q^2)$ was calculated in Ref.~\cite{Moch:2008fj}.}. \subsection{\bf\boldmath RGE--improved Parton Model and Anomalous Dimensions} \label{SubSec-DISEvol} In the following, we present a derivation of the RGEs for the Wilson coefficients, and subsequently, the evolution equations for the parton densities. When calculating scattering cross sections in quantum field theories, they usually contain divergences of different origin. The infrared and collinear singularities are connected to the limit of soft-- and collinear radiation, respectively. Due to the Bloch--Nordsieck theorem,~\cite{Bloch:1937pw,*Yennie:1961ad}, it is known that the infrared divergences cancel between virtual and bremsstrahlung contributions. The structure functions are inclusive quantities. Therefore, all final state collinear (mass) singularities cancel as well, which is formulated in the Kinoshita--Lee--Nauenberg theorem, \cite{Kinoshita:1962ur,*Lee:1964is}. Thus in case of the Wilson coefficients, only the initial state collinear divergences of the external light partons and the ultraviolet divergences remain.
The latter are connected to the large scale behavior and are renormalized by a redefinition of the parameters of the theory, such as the coupling constant, the masses, the fields, and the composite operators, \cite{Peterman:1978tb,Collins:1984xc}. This introduces a renormalization scale $\mu_r$, which forms the subtraction point for renormalization. The scale which appears in the factorization formulas (\ref{STR}, \ref{FACT2}) is denoted by $\mu_f$ and called factorization scale, cf. \cite{Amati:1978wx,*Libby:1978qf,*Libby:1978bx,*Mueller:1978xu,*Collins:1981ta,*Bodwin:1984hc,*Collins:1985ue,Collins:1987pm}. Its origin lies in the arbitrariness of the point at which short-- and long--distance effects are separated and is connected to the redefinition of the bare parton densities by absorbing the initial state collinear singularities of the Wilson coefficients into them. Note that one usually adopts dimensional regularization to regularize the infinities in perturbative calculations, cf. Section~\ref{Sec-REN}, which causes another scale $\mu$ to appear. It is associated with the mass dimension of the coupling constant in $D\neq 4$ dimensions. In principle all these three scales have to be treated separately, but we will set them equal in the subsequent analysis, $\mu = \mu_r = \mu_f$. The renormalization group equations are obtained using the argument that all these scales are arbitrary and therefore physical quantities do not alter when changing these scales, \cite{Stuckelberg:1951gg,*GellMann:1954fq,*Bogolyubov:1980nc,Symanzik:1970rt,*Callan:1970yg,Collins:1984xc,Peterman:1978tb}. One therefore defines the total derivative w.r.t. $\mu^2$ \begin{eqnarray} {\cal D}(\mu^2)&\equiv& \mu^2\frac{\partial}{\partial\mu^2} +\beta(a_s(\mu^2))\frac{\partial}{\partial a_s(\mu^2)} -\gamma_m(a_s(\mu^2)) m(\mu^2)\frac{\partial}{\partial m(\mu^2)}.
\label{totdiff} \end{eqnarray} Here the $\beta$--function and the anomalous dimension of the mass, $\gamma_m$, are given by \begin{eqnarray} \beta(a_s(\mu^2))&\equiv& \mu^2\frac{\partial a_s(\mu^2)}{\partial \mu^2}~, \label{betdef1} \\ \gamma_m(a_s(\mu^2))&\equiv& -\frac{\mu^2}{m(\mu^2)} \frac{\partial m(\mu^2)}{\partial \mu^2}~, \end{eqnarray} cf. Sections \ref{SubSec-RENMa}, \ref{SubSec-RENCo}. The derivatives have to be performed keeping the bare quantities $\hat{a}_s$, $\hat{m}$ fixed. Additionally, we work in Feynman--gauge and therefore the gauge--parameter is not present in Eq.~(\ref{totdiff}). In the following we will consider only one mass $m$. The composite operators (\ref{COMP1})--(\ref{COMP3}) are renormalized by introducing operator $Z$--factors \begin{eqnarray} O^{\sf NS}_{q,r;\mu_1,...,\mu_N}&=& Z^{\sf NS}(\mu^2)\hat{O}^{\sf NS}_{q,r;\mu_1,...,\mu_N}~, \label{ZNSdef}\\ O^{\sf S}_{i;\mu_1,...,\mu_N}&=& Z^{\sf S}_{ij}(\mu^2) \hat{O}^{\sf S}_{j;\mu_1,...,\mu_N}~,\quad~i=q,g~, \label{ZSijdef} \end{eqnarray} where in the singlet case mixing occurs since these operators carry the same quantum numbers. The anomalous dimensions of the operators are defined by \begin{eqnarray} \gamma_{qq}^{\sf NS}&=& \mu Z^{-1, {\sf NS}}(\mu^2) \frac{\partial}{\partial \mu} Z^{\sf NS}(\mu^2)~, \label{gammazetNS}\\ \gamma_{ij}^{\sf S}&=& \mu Z^{-1, {\sf S}}_{il}(\mu^2) \frac{\partial}{\partial \mu} Z_{lj}^{\sf S}(\mu^2)~. \label{gammazetS} \end{eqnarray} We begin by considering the partonic structure functions calculated with external fields $l$. Here we would like to point out that we calculate matrix elements of currents, operators, etc. and not vacuum expectation values of time--ordered products with the external fields included. The anomalous dimensions of the latter therefore do not contribute, \cite{Yndurain:1999ui}, and they are parts of the anomalous dimensions of the composite operators, respectively.
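For later use we note the well--known one--loop solution of Eq.~(\ref{betdef1}). Writing $\beta(a_s)=-\beta_0a_s^2+O(a_s^3)$, with $\beta_0=11-2n_f/3$ in the normalization $a_s=\alpha_s/(4\pi)$, separation of variables yields
\begin{eqnarray}
a_s(\mu^2)=\frac{a_s(\mu_0^2)}{1+\beta_0\,a_s(\mu_0^2)\ln(\mu^2/\mu_0^2)}
=\frac{1}{\beta_0\ln(\mu^2/\Lambda_{\rm QCD}^2)}~.
\end{eqnarray}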
The RGE reads \begin{eqnarray} {\cal D}(\mu^2) {\cal F}^l_{(2,L)}(N,Q^2) =0~. \label{RENFi2L} \end{eqnarray} On the partonic level, Eq.~(\ref{F2LMOM}) takes the form \begin{eqnarray} {\cal F}^l_{(2,L)}(N,Q^2)=\sum_j C_{j,(2,L)} \Bigl(N,\frac{Q^2}{\mu^2}\Bigr)\bra{l}O_j(\mu^2)\ket{l}~. \label{Fi2Lfac} \end{eqnarray} From the operator renormalization constants of the $O_i$, Eqs. (\ref{ZNSdef}, \ref{ZSijdef}), the following RGE is derived for the matrix elements, \cite{Buras:1979yt}, \begin{eqnarray} \label{EVOL1} \sum_j \Bigl({\cal D}(\mu^2)~\delta_{ij} +\frac{1}{2}\gamma_{ij}^{\sf S, NS}\Bigr) \bra{l}O_j(\mu^2)\ket{l}&=&0~, \end{eqnarray} where we write the ${\sf S}$ and ${\sf NS}$ case in one equation for brevity and we remind the reader that in the latter case, $i,j,l=q$ only. Combining Eqs. (\ref{RENFi2L}, \ref{Fi2Lfac}, \ref{EVOL1}), one can determine the RGE for the Wilson coefficients. It reads \begin{eqnarray} \label{EVOL2} \sum_i \Bigl({\cal D}(\mu^2)~\delta_{ij} -\frac{1}{2}\gamma_{ij}^{\sf S, NS}\Bigr) C_{i,(2,L)}\Bigl(N,\frac{Q^2}{\mu^2}\Bigr)&=&0~. \end{eqnarray} The structure functions, which are observables, obey the same RGE as on the partonic level \begin{eqnarray} {\cal D}(\mu^2)F_{(2,L)}(N,Q^2) = \mu^2 \frac{d}{d\mu^2}F_{(2,L)}(N,Q^2) = 0~. \end{eqnarray} Using the factorization of the structure functions into Wilson coefficients and parton densities, Eqs. (\ref{STR}, \ref{FACT2}), together with the RGE derived for the Wilson coefficients in Eq. (\ref{EVOL2}), one obtains from the above formula the QCD evolution equations for the parton densities, cf. e.g. \cite{Buras:1979yt,Reya:1979zk,Altarelli:1989ue,Owens:1992hd,Roberts:1990ww,Blumlein:1995cm}, \begin{eqnarray} \label{EVOL3} \frac{d}{d\ln \mu^2} f^{\sf S, NS}_i(n_f,N,\mu^2) &=&- \frac{1}{2} \sum_j \gamma_{ij}^{\sf S, NS} f_j^{\sf S, NS}(n_f,N,\mu^2)~. \end{eqnarray} Eq.~(\ref{EVOL3}) describes the change of the parton densities w.r.t. the scale $\mu$. 
In the more familiar matrix notation, these equations read \begin{eqnarray} \frac{d}{d\ln \mu^2} \begin{pmatrix} \Sigma(n_f,N,\mu^2) \\ G(n_f,N,\mu^2) \end{pmatrix} &=&-\frac{1}{2} \begin{pmatrix} \gamma_{qq} & \gamma_{qg} \\ \gamma_{gq} & \gamma_{gg} \end{pmatrix} \begin{pmatrix} \Sigma(n_f,N,\mu^2) \\ G(n_f,N,\mu^2) \end{pmatrix}~, \label{EVOL4} \\ \frac{d}{d\ln \mu^2} \Delta_k(n_f,N,\mu^2) &=& -\frac{1}{2}\gamma_{qq}^{\sf NS} \Delta_k(n_f,N,\mu^2)~,\label{EVOL5} \end{eqnarray} where we have used the definition for the parton densities in Eqs.~(\ref{SIGMAPDF}, \ref{DELTAPDF}). The anomalous dimensions in the above equations can be calculated order by order in perturbation theory. At ${\sf LO}$, \cite{Gross:1973juxGross:1974cs,*Georgi:1951sr}, and ${\sf NLO}$, \cite{Floratos:1977auxFloratos:1977aue1,Floratos:1978ny,GonzalezArroyo:1979df,GonzalezArroyo:1979he,*Curci:1980uw,*Furmanski:1980cm,Hamberg:1991qt}, they have been known for a long time. The ${\sf NNLO}$ anomalous dimensions were first calculated for fixed moments in Refs.~\cite{Larin:1996wd,Retey:2000nq,Blumlein:2004xt} and the complete result for all moments has been obtained in Refs.~\cite{Moch:2004pa,Vogt:2004mw}~\footnote{Note that from our convention in Eqs. (\ref{gammazetNS}, \ref{gammazetS}) follows a relative factor $2$ between the anomalous dimensions considered in this work compared to Refs.~\cite{Moch:2004pa,Vogt:2004mw}.}. As described, the PDFs are non--perturbative quantities and have to be extracted at a certain scale from experimental data using the factorization relation (\ref{STR}). If the scale $\mu^2$ is large enough to apply perturbation theory, the evolution equations can be used to calculate the PDFs at another perturbative scale, which provides a detailed test of QCD when compared to precision data. There are similar evolution equations for the structure functions and Wilson coefficients, cf. e.g. \cite{Buras:1979yt,Reya:1979zk,Altarelli:1989ue,Owens:1992hd,Roberts:1990ww,Blumlein:1995cm}.
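At leading order, Eq.~(\ref{EVOL5}) can be solved in closed form: inserting $\gamma_{qq}^{\sf NS}=a_s\gamma_{qq}^{(0),{\sf NS}}(N)+O(a_s^2)$ and the one--loop running coupling gives $\Delta_k(N,\mu^2)=\Delta_k(N,\mu_0^2)\bigl[a_s(\mu^2)/a_s(\mu_0^2)\bigr]^{\gamma_{qq}^{(0),{\sf NS}}(N)/(2\beta_0)}$. The following sketch checks this against a direct numerical integration; all numbers ($n_f$, $\Lambda^2$, the moment value $\gamma^{(0)}=8$, and the initial condition) are illustrative choices, not fitted values:

```python
import math

BETA0 = 11.0 - 2.0 / 3.0 * 4    # one-loop beta coefficient, n_f = 4
LAMBDA2 = 0.04                  # illustrative QCD scale squared (GeV^2)

def a_s(mu2):
    """One-loop running coupling, a_s = alpha_s/(4 pi)."""
    return 1.0 / (BETA0 * math.log(mu2 / LAMBDA2))

def evolve_ns(delta0, mu2_0, mu2_1, gamma0, steps=20000):
    """RK4 integration of d Delta / d ln mu^2 = -(1/2) gamma0 a_s Delta."""
    t, d = math.log(mu2_0), delta0
    h = (math.log(mu2_1) - t) / steps
    rhs = lambda t, d: -0.5 * gamma0 * a_s(math.exp(t)) * d
    for _ in range(steps):
        k1 = rhs(t, d)
        k2 = rhs(t + h / 2, d + h / 2 * k1)
        k3 = rhs(t + h / 2, d + h / 2 * k2)
        k4 = rhs(t + h, d + h * k3)
        d += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return d

def evolve_ns_analytic(delta0, mu2_0, mu2_1, gamma0):
    """Closed-form LO solution of the same equation."""
    return delta0 * (a_s(mu2_1) / a_s(mu2_0)) ** (gamma0 / (2.0 * BETA0))

num = evolve_ns(1.0, 4.0, 100.0, gamma0=8.0)
ana = evolve_ns_analytic(1.0, 4.0, 100.0, gamma0=8.0)
assert abs(num - ana) < 1e-8
```

The same integrator generalizes to the coupled singlet system (\ref{EVOL4}) by promoting the right-hand side to a matrix product.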
Different groups analyze the evolution of the parton distribution functions based on precision data from deep--inelastic scattering experiments and other hard scattering cross sections. Analyses were performed by the Dortmund group, \cite{Gluck:1980cp,Gluck:1988xx,Gluck:1989ze,Gluck:1991ng,Gluck:1994uf,Gluck:1998xa,Gluck:2006yz,Gluck:2007ck,JimenezDelgado:2008hf}, by Alekhin et al., \cite{Alekhin:2005gq,Alekhin:2006zm}, Bl{\"u}mlein et al., \cite{Blumlein:2004ip,Blumlein:2006be}, the ${\sf MSTW}$--, \cite{Martin:2009iq}, ${\sf CTEQ}$--, \cite{Lai:1999wy}, and the ${\sf NNPDF}$--collaborations, \cite{Ball:2008by}. The PDFs determined in this way can e.g. be used as input data for the $pp$ collisions at the LHC, since they are universal quantities and only relate to the structure of the proton and not to the particular kind of scattering events considered. Apart from performing precision analyses of the PDFs, one can also use the evolution equations to determine $a_s$ more precisely, \cite{Gluck:1980cp,Blumlein:2004ip,Alekhin:2005gq,Alekhin:2006zm,Blumlein:2006be,Gluck:2006yz,Gluck:2007ck,JimenezDelgado:2008hf,Martin:2009iq}. The evolution equations (\ref{EVOL3}, \ref{EVOL4}, \ref{EVOL5}) are written for moments only. The representation in $x$--space is obtained by using~(\ref{Mellintrans},~\ref{Mellinconz},~\ref{MellinconN}) and is usually expressed in terms of the splitting functions $P_{ij}(x)$, \cite{Altarelli:1977zs}. At the level of twist--$2$ the latter are connected to the anomalous dimensions by the Mellin--transform \begin{eqnarray} \gamma_{ij}(N)=-\mbox{\rm\bf M}[P_{ij}](N)~.
\label{splitan} \end{eqnarray} The behavior of parton distribution functions in the small $x$ region has attracted special interest due to possibly new dynamical contributions, such as Glauber--model based screening corrections, \cite{Gribov:1981ac,*Mueller:1985wy,*Collins:1990cw,*Bartels:1990zk,*Altmann:1992vm,*DelDuca:1995hf,*Lipatov:1996ts}, and the so-called BFKL contributions, a `leading singularity' resummation in the anomalous dimensions for all orders in the strong coupling constant, \cite{Fadin:1975cb,*Balitsky:1978ic,*Kirschner:1983di,*Bartels:1996wc,*Fadin:1998py}. For both effects there is no evidence yet in the data for either $F_2(x,Q^2)$ or $F_L(x,Q^2)$, beyond the known perturbative contributions to $O(a_s^3)$. This does not exclude that at even smaller values of $x$ contributions of this kind will be found. The BFKL contributions were investigated on the basis of a consistent renormalization group treatment, together with the fixed order contributions in Refs.~\cite{Blumlein:1995jpxBlumlein:1996ddxBlumlein:1996hbxBlumlein:1997em,Blumlein:1998pp}. One main characteristic, compared with the fixed order case, is that several sub-leading series, which are unknown, are required to stabilize the results, see also~\cite{Blumlein:1998mg}. This aspect also has to be studied within the framework of recent approaches, \cite{Altarelli:2008aj,*Ciafaloni:2007gf}. \newpage \section{\bf\boldmath Heavy Quark Production in DIS} \label{Sec-HQDIS} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0} In the Standard Model, the charm, bottom and top quark are treated as heavy quarks, all of which have a mass larger than the QCD--scale $\Lambda_{\sf QCD}(n_f=4)\approx~240~{\rm MeV}$,~\cite{Alekhin:2002fv,Blumlein:2004ip,Alekhin:2006zm,Blumlein:2006be,Gluck:2006yz,Gluck:2007ck,JimenezDelgado:2008hf}. The up, down and strange quark are usually treated as massless.
Because of confinement, the quarks can only be observed via the asymptotic states baryons and mesons, in which they are contained. In the following, we concentrate on the inclusive production of one species of a heavy quark, denoted by $Q(\overline{Q})$, with mass $m$. In the case of ${\sf HERA}$ kinematics, $Q=c$. This is justified to a certain extent by the observation that bottom quark contributions to DIS structure functions are much smaller, cf. \cite{Thompson:2007mx}~\footnote{Likewise, for even higher scales the $b$--quark could be considered as the heavy quark with $u,d,s,c$ being effectively massless, cf. e.g. \cite{Chuvakin:2000jm}.}. Since the ratio $m_c^2/m_b^2\approx~1/10$ is not small, there are regions in which both masses are potentially important. The description of these effects is beyond the scope of the formalism outlined below and of comparable order as the $m_c^2/Q^2$ corrections. Top--quark production in $l^{\pm}N$ scattering is usually treated as a semi--inclusive process, cf. \cite{Schuler:1987wj,*Baur:1987ai,Gluck:1987ukxGluck:1987uke1}. Charmed mesons are more abundantly produced at ${\sf HERA}$ than baryons. $D$--mesons are bound states of charm and lighter quarks, e.g. $D_u=\overline{u}c$, $D_d=\overline{d}c$ etc. Furthermore also $c\overline{c}$ resonances contribute, such as $J/\Psi$, by the observation of which charm was discovered,~\cite{Augustin:1974xw,*Abrams:1974yy,Aubert:1974js}. The charm contributions to the structure functions are determined in experiment by tagging charm quarks in the final state, e.g. through the $D$--meson decay channel $D^*\rightarrow~D^0\pi_s\rightarrow~K\pi\pi_s$. In the case of DIS, the measured visible cross section is then extrapolated to the full inclusive phase space using theoretical models if structure functions are considered,~\cite{Adloff:1996xq,Breitweg:1999ad,*Adloff:2001zj,Chekanov:2003rb,*Aktas:2004az,Aktas:2005iw,Thompson:2007mx}. 
Within the approach of this thesis, the main objective for studying heavy quark production in DIS is to provide a framework allowing for more precise measurements of $a_s$ and of the parton densities and for a better description of the structure functions $F_2^{c\overline{c}}$, $F_2^{b\overline{b}}$. The current world data for the nucleon structure functions $F_2^{p,d}(x,Q^2)$ have reached a precision of a few per cent over a wide kinematic region. Both the measurements of the heavy flavor contributions to the deep-inelastic structure functions, cf.~\cite{Chekanov:2003rb,*Aktas:2004az,Lipka:2006ny,Thompson:2007mx}, and numerical studies,~\cite{Eichten:1984eu,Gluck:1993dpa,Blumlein:1996sc}, based on the leading, \cite{Witten:1975bh,*Babcock:1977fi,*Shifman:1977yb,*Leveille:1978px,Gluck:1980cp}, and next-to-leading order heavy flavor Wilson coefficients, \cite{Laenen:1992zkxLaenen:1992xs,*Riemersma:1994hv}, show that the scaling violations of the light and the heavy contributions to (\ref{FACT2}) exhibit a different behavior over a wide range of $Q^2$. This is due both to the logarithmic contributions $\ln^k(Q^2/m^2)$ and to power corrections $\propto (m^2/Q^2)^k,~k \geq 1$. Moreover, in the region of smaller values of $x$ the heavy flavor contributions amount to 20--40\%. Therefore, the precision measurement of the QCD parameter $\Lambda_{\rm QCD}$,~\cite{Gluck:1980cp,Alekhin:2002fv,Bethke:2000ai,*Bethke:2004uy,Blumlein:2004ip,Blumlein:2006be,Gluck:2006yz,Alekhin:2006zm,Gluck:2007ck,Blumlein:2007dk,JimenezDelgado:2008hf}, and of the parton distribution functions in deeply inelastic scattering requires an analysis at the level of the $O(a_s^3)$ corrections to control the theory errors at or below the level of the experimental accuracy, \cite{Bethke:2000ai,*Bethke:2004uy,Gluck:2006yz,Alekhin:2006zm,Blumlein:2006be,Blumlein:2007dk}. 
The precise value of $\Lambda_{\rm QCD}$, a fundamental parameter of the Standard Model, is of central importance for the quantitative understanding of all strongly interacting processes. Moreover, the possible unification of the gauge forces, \cite{Georgi:1974sy,*Fritzsch:1974nn}, depends crucially on its value. In recent non--singlet analyses, \cite{Blumlein:2004ip,Blumlein:2006be,Gluck:2006yz}, errors for $a_s(M_Z^2)$ of $O(1.5~\%)$ were obtained, partially extending the analysis effectively to ${\sf N^3LO}$. In the flavor singlet case the so far unknown 3--loop heavy flavor Wilson coefficients still prevent a consistent 3--loop analysis,~\cite{Alekhin:2005dxxAlekhin:2005dy,Dittmar:2005ed,Jung:2008tq}. Due to the large statistics in the lower $x$ region, one may hope to eventually improve the accuracy of $a_s(M_Z^2)$ beyond the above value. Of similar importance is the detailed knowledge of the PDFs for all hadron-induced processes, notably for the interpretation of all scattering cross sections measured at the ${\sf TEVATRON}$ and the LHC. For example, the process of Higgs-boson production at the LHC, cf. e.g. \cite{Djouadi:2005gixDjouadi:2005gj}, depends on the gluon density, and its accuracy is largely determined by this distribution. In Section~\ref{SubSec-HQElProd}, we describe the general framework of electroproduction of heavy quarks in DIS within the fixed--flavor--number--scheme (FFNS), treating only the light quarks and the gluon as constituents of the nucleon. In the following Section,~\ref{SubSec-HQAsym}, we outline the method which we use to extract all but the power suppressed contributions $\propto~(m^2/Q^2)^k,k\ge~1$ of the heavy flavor Wilson coefficients, \cite{Buza:1995ie}. The latter are equivalent to the Wilson coefficients introduced in Section~\ref{SubSec-LCE}, including heavy quarks. 
Finally, in Section~\ref{SubSec-HQFlav} we comment on the possibility to define heavy quark parton densities within a variable--flavor--number--scheme~(VFNS), \cite{Buza:1996wv}. \subsection{\bf\boldmath Electroproduction of Heavy Quarks} \label{SubSec-HQElProd} We study electroproduction of heavy quarks in unpolarized DIS via single photon exchange, cf. \cite{Witten:1975bh,*Babcock:1977fi,*Shifman:1977yb,*Leveille:1978px,Gluck:1979aw,Gluck:1980cp}, at sufficiently large virtualities $Q^2$, $Q^2\ge~5~{\rm GeV}^2$~\footnote{One may, however, also consider photoproduction of heavy quarks in $ep$ collisions where $Q^2\approx~0$, which is to a large extent a hadronic process, cf. \cite{Gluck:1978bf,Berger:1980ni}, and especially important for the production of heavy quark resonances, such as the $J/\Psi$.}. Here, one can distinguish two possible production mechanisms for heavy quarks: extrinsic production and intrinsic heavy quark excitation. In the latter case, one introduces a heavy quark state in the nucleon wave function, i.e. the heavy quark is treated at the same level as the light quarks in the factorization of the structure functions, cf. Eqs.~(\ref{FACT2})--(\ref{Cq2LNSpert}). The ${\sf LO}$ contribution is then given by the flavor excitation process shown in Figure \ref{CintLO}, \begin{eqnarray} \gamma^*+Q(\overline{Q})\rightarrow~Q(\overline{Q})~. \label{CintLOa} \end{eqnarray} \begin{figure}[htb] \begin{center} \includegraphics[angle=0, width=7.0cm]{picmain5.eps} \end{center} \begin{center} \caption{\sf ${\sf LO}$ intrinsic heavy quark production.} \label{CintLO} \noindent \small \end{center} \normalsize \end{figure} Several experimental and theoretical studies suggest that the intrinsic contribution to the heavy flavor cross section is of the order of $1\%$ or smaller, \cite{Brodsky:1980pb,*Hoffmann:1983ah,*Derrick:1995sc,*Harris:1995jx,Adloff:1996xq}, and we will not consider it any further. 
In extrinsic heavy flavor production, the heavy quarks are produced as final states in virtual gauge boson scattering off massless partons. This description is also referred to as the fixed flavor number scheme. At higher orders, one has to distinguish whether one considers the complete inclusive structure functions or only those heavy quark contributions which can be determined in experiments by tagging the final state heavy quarks. In the former case, virtual corrections containing heavy quark loops have to be included into the theoretical calculation as well, cf. also Section~\ref{SubSec-HQElProdWave}. We consider only twist-2 parton densities in the Bjorken limit. Therefore no transverse momentum effects in the initial parton distributions will be allowed, since these contributions are related, in the kinematic sense, to higher twist operators. From the conditions for the validity of the parton model, Eqs. (\ref{cond}, \ref{cond2}), it follows that in the region of neither too small nor too large values of the Bjorken variable $x$, the partonic description holds for massless partons. Evidently, if $Q^2 (1-x)^2/m^2 \not\gg 1$, {\sf no} partonic description for a potential heavy quark distribution can be obtained. The question under which circumstances one may introduce a heavy flavor parton density will be further discussed in Section~\ref{SubSec-HQFlav}. In a general kinematic region the parton densities in Eq.~(\ref{FACT2}) are taken to be massless and the heavy quark mass effects are contained in the inclusive Wilson coefficients. These are calculable perturbatively and denoted by \begin{eqnarray} {\cal C}^{\sf S,PS,NS}_{i, (2,L)} \Bigl(\tau,n_f+1,\frac{Q^2}{\mu^2},\frac{m^2}{\mu^2}\Bigr)~. \label{Calldef} \end{eqnarray} The argument $n_f+1$ denotes the presence of $n_f$ light and one heavy flavor. 
$\tau$ is the partonic scaling variable defined in Eq.~(\ref{taudef}) and we will present some of the following equations in $x$--space rather than in Mellin space. One may identify the massless flavor contributions in Eq.~(\ref{Calldef}) and separate the Wilson coefficients into a purely light part $C_{i,(2,L)}$, cf. Eq.~(\ref{FACT2}), and a heavy part \begin{eqnarray} \label{eqLH} {\cal C}^{\sf S,PS,NS}_{i,(2,L)} \left(\tau,n_f+1,\frac{Q^2}{\mu^2},\frac{m^2}{\mu^2}\right) = && C_{i,(2,L)}^{\sf S,PS,NS}\left(\tau,n_f,\frac{Q^2}{\mu^2}\right) \nonumber\\ &&\hspace{-25mm} + H_{i,(2,L)}^{\sf S,PS} \left(\tau,n_f+1,\frac{Q^2}{\mu^2},\frac{m^2}{\mu^2}\right) + L_{i,(2,L)}^{\sf S,PS,NS} \left(\tau,n_f+1,\frac{Q^2}{\mu^2},\frac{m^2}{\mu^2}\right)~.\nonumber\\ \label{Callsplit} \end{eqnarray} Here, we denote the heavy flavor Wilson coefficients by $L_{i,j}$ and $H_{i,j}$, respectively, depending on whether the photon couples to a light $(L)$ or heavy $(H)$ quark line. From this it follows that the light flavor Wilson coefficients $C_{i,j}$ depend on $n_f$ light flavors only, whereas $H_{i,j}$ and $L_{i,j}$ may contain light flavors in addition to the heavy quark, indicated by the argument $n_f+1$. 
The perturbative series of the heavy flavor Wilson coefficients read \begin{eqnarray} H_{g,(2,L)}^{\sf S} \left(\tau,n_f+1,\frac{Q^2}{\mu^2},\frac{m^2}{\mu^2}\right)&=& \sum_{i=1}^{\infty}a_s^i H_{g,(2,L)}^{(i), \sf S} \left(\tau,n_f+1,\frac{Q^2}{\mu^2},\frac{m^2}{\mu^2}\right)~, \label{Hg2Lpert} \\ H_{q,(2,L)}^{\sf PS} \left(\tau,n_f+1,\frac{Q^2}{\mu^2},\frac{m^2}{\mu^2}\right)&=& \sum_{i=2}^{\infty}a_s^i H_{q,(2,L)}^{(i), \sf PS} \left(\tau,n_f+1,\frac{Q^2}{\mu^2},\frac{m^2}{\mu^2}\right)~, \label{Hq2LPSpert} \\ L_{g,(2,L)}^{\sf S} \left(\tau,n_f+1,\frac{Q^2}{\mu^2},\frac{m^2}{\mu^2}\right)&=& \sum_{i=2}^{\infty}a_s^i L_{g,(2,L)}^{(i), \sf S} \left(\tau,n_f+1,\frac{Q^2}{\mu^2},\frac{m^2}{\mu^2}\right)~, \label{Lg2Lpert} \end{eqnarray} \begin{eqnarray} L_{q,(2,L)}^{\sf S} \left(\tau,n_f+1,\frac{Q^2}{\mu^2},\frac{m^2}{\mu^2}\right)&=& \sum_{i=2}^{\infty}a_s^i L_{q,(2,L)}^{(i), \sf S} \left(\tau,n_f+1,\frac{Q^2}{\mu^2},\frac{m^2}{\mu^2}\right)~. \label{Lq2LSpert} \end{eqnarray} Note that we have not yet specified a scheme for treating $a_s$, but one has to use the same scheme when combining the above terms with the light flavor Wilson coefficients. At ${\sf LO}$, only the term $H_{g,(2,L)}$ contributes via the photon--gluon fusion process shown in Figure \ref{CCbarLO}, \begin{eqnarray} \gamma^*+g\rightarrow~Q+\overline{Q}~. \label{VBfusion} \end{eqnarray} \begin{figure}[h] \begin{center} \includegraphics[angle=0, width=7.0cm]{picmain6.eps} \end{center} \begin{center} \caption{\sf ${\sf LO}$ extrinsic heavy quark production.} \label{CCbarLO} \noindent \small \end{center} \normalsize \end{figure} The ${\sf LO}$ Wilson coefficients corresponding to this process are given by, \cite{Witten:1975bh,*Babcock:1977fi,*Shifman:1977yb,*Leveille:1978px,Gluck:1979aw,Gluck:1980cp}~\footnote{Eqs. 
(16), (17) in Ref.~\cite{Bierenbaum:2009zt} contain misprints}, \begin{eqnarray} H_{g,2}^{(1)}\left(\tau,\frac{m^2}{Q^2}\right) &=& 8T_F\Bigl\{ v\Bigl[ -\frac{1}{2}+4\tau(1-\tau)+2\frac{m^2}{Q^2}\tau(\tau-1) \Bigr] \nonumber\\ &+& \Bigl[ -\frac{1}{2}+\tau-\tau^2+2\frac{m^2}{Q^2}\tau(3\tau-1) +4\frac{m^4}{Q^4}\tau^2 \Bigr]\ln\left(\frac{1-v}{1+v}\right) \Bigr\} \label{Hg2LO} ~,\\ H_{g,L}^{(1)}\left(\tau,\frac{m^2}{Q^2}\right) &=& 16T_F\Bigl[ \tau(1-\tau)v +2\frac{m^2}{Q^2}\tau^2\ln\left(\frac{1-v}{1+v}\right) \Bigr] \label{HgLLO}~. \end{eqnarray} The cms velocity $v$ of the produced heavy quark pair is given by \begin{eqnarray} v &=& \sqrt{1 - \frac{4 m^2 \tau}{Q^2(1-\tau)}}~. \end{eqnarray} The ${\sf LO}$ heavy flavor contributions to the structure functions are then \begin{eqnarray} F^{Q\overline{Q}}_{(2,L)}(x,Q^2,m^2) = e_Q^2 a_s \int_{ax}^1 \frac{dz}{z} H^{(1)}_{g,(2,L)}\left(\frac{x}{z}, \frac{m^2}{Q^2}\right) G(n_f,z,Q^2)~,~~a=1+4 m^2/Q^2~, \label{FcLO} \end{eqnarray} where the integration boundaries follow from the kinematics of the process. Here $e_Q$ denotes the electric charge of the heavy quark. At $O(a_s^2)$, the terms $H_{q,(2,L)}^{\sf PS}$ and $L_{q,(2,L)}^{\sf S}$ contribute as well. They result from the process \begin{eqnarray} \gamma^*+q(\overline{q})\rightarrow~q(\overline{q})+X~, \end{eqnarray} where $X=Q+\overline{Q}$ in case of extrinsic heavy flavor production. The latter is of phenomenological relevance if the heavy quarks are detected in the final states, e.g. via the produced $D_c$--mesons in case $Q=c$. For a complete inclusive analysis summing over all final states, both light and heavy, one has to include radiative corrections containing virtual heavy quark contributions as well. 
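As an illustration of Eqs.~(\ref{Hg2LO})--(\ref{FcLO}), the leading order photon--gluon fusion contribution can be evaluated numerically. The following sketch uses a toy gluon density $G(z) = 3\,z^{-0.2}(1-z)^5$ and illustrative values for $m$, $Q^2$ and $a_s$; all inputs are assumptions for demonstration and not part of the analysis of this thesis:

```python
import math

TF = 0.5  # color factor T_F = 1/2

def v(tau, m2_over_Q2):
    """cms velocity of the produced heavy quark pair."""
    return math.sqrt(1.0 - 4.0 * m2_over_Q2 * tau / (1.0 - tau))

def Hg2_LO(tau, r):
    """H_{g,2}^{(1)}(tau, m^2/Q^2), Eq. (Hg2LO); r = m^2/Q^2."""
    w = v(tau, r)
    L = math.log((1.0 - w) / (1.0 + w))
    return 8.0 * TF * (w * (-0.5 + 4.0 * tau * (1.0 - tau) + 2.0 * r * tau * (tau - 1.0))
                       + (-0.5 + tau - tau**2 + 2.0 * r * tau * (3.0 * tau - 1.0)
                          + 4.0 * r**2 * tau**2) * L)

def HgL_LO(tau, r):
    """H_{g,L}^{(1)}(tau, m^2/Q^2), Eq. (HgLLO)."""
    w = v(tau, r)
    return 16.0 * TF * (tau * (1.0 - tau) * w
                        + 2.0 * r * tau**2 * math.log((1.0 - w) / (1.0 + w)))

def F2_QQbar_LO(x, Q2, m2, eQ, a_s, gluon, n=2000):
    """Eq. (FcLO): e_Q^2 a_s int_{ax}^1 dz/z H_{g,2}^{(1)}(x/z, m^2/Q^2) G(z)."""
    a = 1.0 + 4.0 * m2 / Q2
    lo = a * x
    if lo >= 1.0:
        return 0.0  # below the production threshold
    h = (1.0 - lo) / n
    s = 0.0
    for i in range(n):  # simple midpoint rule, endpoints excluded
        z = lo + (i + 0.5) * h
        s += Hg2_LO(x / z, m2 / Q2) / z * gluon(z)
    return eQ**2 * a_s * s * h

# toy gluon density (illustrative assumption only)
gluon = lambda z: 3.0 * z**-0.2 * (1.0 - z)**5

# charm-like example: m^2 ~ 2.25 GeV^2, Q^2 = 20 GeV^2, e_c = 2/3
F2c = F2_QQbar_LO(x=0.01, Q2=20.0, m2=2.25, eQ=2.0 / 3.0, a_s=0.02, gluon=gluon)
print(F2c)
```

The integration boundary $a = 1 + 4m^2/Q^2$ keeps the partonic variable in the physical region $\tau \le 1/a$, where the velocity $v$ is real, in accordance with the phase space kinematics stated below Eq.~(\ref{FcLO}).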
The term $L_{q,(2,L)}^{\sf S}$ can be split into a ${\sf NS}$ and a ${\sf PS}$ piece via \begin{eqnarray} L_{q,(2,L)}^{\sf S}=L_{q,(2,L)}^{\sf NS}+L_{q,(2,L)}^{\sf PS}, \end{eqnarray} where the ${\sf PS}$--term emerges for the first time at $O(a_s^3)$ and the ${\sf NS}$--term at $O(a_s^2)$. Finally, $L_{g,(2,L)}^{\sf S}$ contributes for the first time at $O(a_s^3)$ in case of heavy quarks in the final state, but there is an $O(a_s^2)$ term involving radiative corrections, which will be commented on in Section~\ref{SubSec-HQElProdWave}. The terms $H_{g,(2,L)}^{(2)}$, $H_{q,(2,L)}^{(2), {\sf PS}}$ and $L_{q,(2,L)}^{(2), {\sf NS}}$ have been calculated in $x$--space in the complete kinematic range in semi-analytic form in Refs.~\cite{Laenen:1992zkxLaenen:1992xs,*Riemersma:1994hv}~\footnote{A precise representation in Mellin space was given in \cite{Alekhin:2003ev}.}, considering heavy quarks in the final states only. The heavy quark contribution to the structure functions $F_{(2,L)}(x,Q^2)$ for one heavy quark of mass $m$ and $n_f$ light flavors is then given by, cf. 
\cite{Buza:1996wv} and Eq.~(\ref{FACT2}), \begin{eqnarray} \label{eqF2} F_{(2,L)}^{Q\overline{Q}}(x,n_f\!\!\!&+&\!\!\!1,Q^2,m^2) =\nonumber\\ &&\sum_{k=1}^{n_f}e_k^2\Biggl\{ L_{q,(2,L)}^{\sf NS}\left(x,n_f+1,\frac{Q^2}{m^2} ,\frac{m^2}{\mu^2}\right) \otimes \Bigl[f_k(x,\mu^2,n_f)+f_{\overline{k}}(x,\mu^2,n_f)\Bigr] \nonumber\\ &&\hspace{14mm} +\frac{1}{n_f}L_{q,(2,L)}^{\sf PS}\left(x,n_f+1,\frac{Q^2}{m^2} ,\frac{m^2}{\mu^2}\right) \otimes \Sigma(x,\mu^2,n_f) \nonumber\\ &&\hspace{14mm} +\frac{1}{n_f}L_{g,(2,L)}^{\sf S}\left(x,n_f+1,\frac{Q^2}{m^2} ,\frac{m^2}{\mu^2}\right) \otimes G(x,\mu^2,n_f) \Biggr\} \nonumber\\ &+&e_Q^2\Biggl[ H_{q,(2,L)}^{\sf PS}\left(x,n_f+1,\frac{Q^2}{m^2} ,\frac{m^2}{\mu^2}\right) \otimes \Sigma(x,\mu^2,n_f) \nonumber\\ &&\hspace{7mm} +H_{g,(2,L)}^{\sf S}\left(x,n_f+1,\frac{Q^2}{m^2} ,\frac{m^2}{\mu^2}\right) \otimes G(x,\mu^2,n_f) \Biggr]~, \end{eqnarray} where the integration boundaries of the Mellin--convolutions follow from phase space kinematics, cf. Eq.~(\ref{FcLO}). \subsection{\bf\boldmath Asymptotic Heavy Quark Coefficient Functions} \label{SubSec-HQAsym} An important part of the kinematic region in case of heavy flavor production in DIS is located at larger values of $Q^2$, cf. e.g. \cite{Gluck:1987ukxGluck:1987uke1,Ingelman:1988qn}. As has been shown in Ref.~\cite{Buza:1995ie}, cf. also ~\cite{Berends:1987abxBerends:1987abe1,*vanNeerven:1997gf,Buza:1996wv}, the heavy flavor Wilson coefficients $H_{i,j},~L_{i,j}$ factorize in the limit $Q^2\gg~m^2$ into massive operator matrix elements $A_{ki}$ and the massless Wilson coefficients $C_{i,j}$, if one heavy quark flavor and $n_f$ light flavors are considered. The massive OMEs are process independent quantities and contain all the mass dependence except for the power corrections $\propto~(m^2/Q^2)^k,~k\ge~1$. The process dependence is given by the light flavor Wilson coefficients only. 
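The quality of this factorization can be made explicit already at leading order: taking the $m^2/Q^2 \to 0$ limit of Eq.~(\ref{Hg2LO}) term by term, only the logarithm $\ln(Q^2/m^2)$ and mass-independent pieces survive. The short numeric sketch below (illustrative values only; a check, not part of the original derivation) compares the exact coefficient with this limit:

```python
import math

TF = 0.5  # T_F = 1/2

def Hg2_exact(tau, r):
    """Exact LO coefficient H_{g,2}^{(1)}(tau, m^2/Q^2); r = m^2/Q^2."""
    v = math.sqrt(1.0 - 4.0 * r * tau / (1.0 - tau))
    L = math.log((1.0 - v) / (1.0 + v))
    return 8.0 * TF * (v * (-0.5 + 4.0 * tau * (1.0 - tau) + 2.0 * r * tau * (tau - 1.0))
                       + (-0.5 + tau - tau**2 + 2.0 * r * tau * (3.0 * tau - 1.0)
                          + 4.0 * r**2 * tau**2) * L)

def Hg2_asymp(tau, r):
    """m^2/Q^2 -> 0 limit of Hg2_exact: only ln(Q^2/m^2) survives from the mass."""
    pqg = tau**2 + (1.0 - tau)**2
    return 4.0 * TF * (pqg * (math.log(1.0 / r) + math.log((1.0 - tau) / tau))
                       + 8.0 * tau * (1.0 - tau) - 1.0)

tau = 0.1
dev = {}
for ratio in (2.0, 10.0, 100.0):
    r = 1.0 / ratio
    dev[ratio] = abs(Hg2_asymp(tau, r) / Hg2_exact(tau, r) - 1.0)
    print(f"Q^2/m^2 = {ratio:5.0f}: relative deviation {dev[ratio]:.2%}")
```

The deviation shrinks with growing $Q^2/m^2$, as expected from the power suppressed terms $\propto (m^2/Q^2)^k$ which are dropped in the asymptotic expression.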
This allows the analytic calculation of the ${\sf NLO}$ heavy flavor Wilson coefficients, \cite{Buza:1995ie,Bierenbaum:2007qe}. Comparing these asymptotic expressions with the exact ${\sf LO}$ and ${\sf NLO}$ results obtained in Refs.~\cite{Witten:1975bh,*Babcock:1977fi,*Shifman:1977yb,*Leveille:1978px,Gluck:1980cp} and \cite{Laenen:1992zkxLaenen:1992xs,*Riemersma:1994hv}, respectively, one finds that this approximation becomes valid in case of $F_2^{Q\overline{Q}}$ for $Q^2/m^2 \gtrsim 10$. These scales are sufficiently low and match the region analyzed in deeply inelastic scattering for precision measurements. In case of $F_L^{Q\overline{Q}}$, this approximation is only valid for $Q^2/m^2 \gtrsim 800$, \cite{Buza:1995ie}. For the latter case, the 3--loop corrections were calculated in Ref.~\cite{Blumlein:2006mh}. This difference is due to the emergence of terms $\propto (m^2/Q^2) \ln(m^2/Q^2)$, which vanish only slowly in the limit $Q^2/m^2 \rightarrow \infty$. In order to derive the factorization formula, one considers the inclusive Wilson coefficients ${\cal C}^{\sf S,PS,NS}_{i,j}$, which have been defined in Eq.~(\ref{Calldef}). After applying the LCE to the partonic tensor, or the forward Compton amplitude, corresponding to the respective Wilson coefficients, one arrives at the factorization relation, cf. Eq.~(\ref{F2LMOM}), \begin{eqnarray} {\cal C}^{{\sf S,PS,NS}, {\sf asymp}}_{j,(2,L)} \Bigl(N,n_f+1,\frac{Q^2}{\mu^2},\frac{m^2}{\mu^2}\Bigr) &=& \nonumber\\ && \hspace{-55mm} \sum_{i} A^{\sf S,PS,NS}_{ij}\Bigl(N,n_f+1,\frac{m^2}{\mu^2}\Bigr) C^{\sf S,PS,NS}_{i,(2,L)} \Bigl(N,n_f+1,\frac{Q^2}{\mu^2}\Bigr) +O\Bigl(\frac{m^2}{Q^2}\Bigr)~. \label{CallFAC} \end{eqnarray} Here $\mu$ refers to the factorization scale between the heavy and light contributions in ${\cal C}_{j,i}$ and ``{\sf asymp}'' denotes the limit $Q^2\gg~m^2$. The $C_{i,j}$ are precisely the light Wilson coefficients, cf. Eqs. 
(\ref{FACT2})--(\ref{Cq2LPSpert}), taken at $n_f+1$ flavors. This can be inferred from the fact that in the LCE, Eq.~(\ref{lighex}), the Wilson coefficients describe the singularities for very large values of $Q^2$, which cannot depend on the presence of a quark mass. The mass dependence is given by the OMEs $A_{ij}$, cf. Eqs. (\ref{HadOMEs},\ref{NucOMEs}), between partonic states. Eq.~(\ref{CallFAC}) accounts for all mass effects except for corrections which are power suppressed, $(m^2/Q^2)^k, k\ge~1$. This factorization is only valid if the heavy quark coefficient functions are defined in such a way that all radiative corrections containing heavy quark loops are included. Otherwise, Eq.~(\ref{CallFAC}) would not show the correct asymptotic $Q^2$--behavior, \cite{Buza:1996wv}. An equivalent way of describing Eq.~(\ref{CallFAC}) is obtained by considering the calculation of the massless Wilson coefficients. Here, the initial state collinear singularities are given by evaluating the massless OMEs between off--shell partons, leading to transition functions $\Gamma_{ij}$. The $\Gamma_{ij}$ are given in terms of the anomalous dimensions of the twist--$2$ operators and transfer the initial state singularities to the bare parton--densities due to mass factorization, cf. e.g. \cite{Buza:1995ie,Buza:1996wv}. In the case at hand, something similar happens: The initial state collinear singularities are transferred to the parton densities except for those which are regulated by the quark mass and described by the OMEs. Instead of absorbing these terms into the parton densities as well, they are used to reconstruct the asymptotic behavior of the heavy flavor Wilson coefficients. Here, \begin{eqnarray} \label{eqAij} A_{ij}^{\sf S,NS}\Bigl(N,n_f+1,\frac{m^2}{\mu^2}\Bigr) = \langle j| O_i^{\sf S,NS}|j \rangle =\delta_{ij}+\sum_{l=1}^{\infty}a_s^l A_{ij}^{(l),{\sf S,NS}} \label{pertomeren} \end{eqnarray} are the operator matrix elements of the local twist--2 operators being defined in Eqs. 
(\ref{COMP1})--(\ref{COMP3}) between on--shell partonic states $|j\rangle,~~j = q, g$. As usual, the ${\sf S}$ contribution can be split into a ${\sf NS}$ and ${\sf PS}$ part via \begin{eqnarray} A_{qq}^{\sf S} = A_{qq}^{\sf NS} + A_{qq}^{\sf PS}~. \label{splitS} \end{eqnarray} Due to the on--shell condition, all contributions but the $O(a_s^0)$ terms vanish~\footnote{In Ref.~\cite{Larin:1996wd} use was made of this fact to calculate the massless Wilson coefficients without having to calculate the massless OMEs.} if no heavy quark is present in the virtual loops. This is due to the fact that scaleless integrals vanish in dimensional regularization, cf. Section~\ref{SubSec-RENReg}. Hence only those terms with a mass remain, and these are referred to as massive OMEs. The calculation of these massive OMEs is the main objective of this thesis. In case of the gluon operator, (\ref{COMP3}), the contributing terms are denoted by $A_{gq,Q}$ and $A_{gg,Q}$, where the perturbative series of the former starts at $O(a_s^2)$ and the one of the latter at $O(a_s^0)$~\footnote{The $O(a_s^0)$ term of $A_{gg}$ does not contain a heavy quark, but still remains in Eq.~(\ref{CallFAC}) because no loops have to be calculated.}. For the quark operator, one distinguishes whether the operator couples to a heavy or a light quark. In the ${\sf NS}$--case, the operator by definition couples to the light quark. Thus there is only one term, $A_{qq,Q}^{\sf NS}$, which contributes at $O(a_s^0)$. In the ${\sf S}$-- and ${\sf PS}$--case, two OMEs can be distinguished, $\displaystyle{\{A_{qq,Q}^{\sf PS},~A_{qg,Q}^{\sf S}\}}$ and $\displaystyle{\{A_{Qq}^{\sf PS},~A_{Qg}^{\sf S}\}}$, where in the former case the operator couples to a light quark and in the latter case to a heavy quark. The terms $A_{qi,Q}$ emerge for the first time at $O(a_s^3)$, $A_{Qq}^{\sf PS}$ at $O(a_s^2)$ and $A_{Qg}^{\sf S}$ at $O(a_s)$. In this work we refer only to the even moments, cf. Section~\ref{SubSec-DISComptLCE}. 
In the non--singlet case we will obtain, however, besides the ${\sf NS^+}$ contributions for the even moments, also the ${\sf NS^-}$ terms, which correspond to the odd moments. Eq.~(\ref{CallFAC}) can now be split into its parts by considering the different $n_f$--terms. We adopt the following notation for a function $f(n_f)$ \begin{eqnarray} {\tilde{f}}(n_f)&\equiv&\frac{f(n_f)}{n_f}~. \label{gammapres2} \end{eqnarray} This is necessary in order to separate the different types of contributions in Eq.~(\ref{eqF2}), weighted by the electric charges of the light and heavy flavors, respectively. Since we concentrate on only the heavy flavor part, we define as well for later use \begin{eqnarray} \hat{f}(n_f)&\equiv&f(n_f+1)-f(n_f)~, \label{gammapres1} \end{eqnarray} with $\hat{\hspace*{-1mm}{\tilde{f}}}(n_f) \equiv \widehat{[{\tilde{f}}(n_f)]}$. The following Eqs. (\ref{LNSFAC})--(\ref{HgFAC}) are the same as Eqs.~(2.31)--(2.35) in Ref.~\cite{Buza:1996wv}. We present these terms here again, however, since Ref.~\cite{Buza:1996wv} contains a few inconsistencies regarding the $\tilde{f}$--description. Contrary to the latter reference, the argument corresponding to the number of flavors stands for all flavors, light or heavy. The separation for the ${\sf NS}$--term is given by \begin{eqnarray} C_{q,(2,L)}^{\sf NS}\Bigl(N,n_f,\frac{Q^2}{\mu^2}\Bigr) + L_{q,(2,L)}^{\sf NS} \Bigl(N,n_f+1,\frac{Q^2}{\mu^2},\frac{m^2}{\mu^2}\Bigr) &=& \nonumber\\ && \hspace{-50mm} A_{qq,Q}^{\sf NS}\Bigl(N,n_f+1,\frac{m^2}{\mu^2}\Bigr) C_{q,(2,L)}^{\sf NS}\Bigl(N,n_f+1,\frac{Q^2}{\mu^2}\Bigr)~. \label{LNSFAC} \end{eqnarray} Here and in the following, we omit the index ``{\sf asymp}'' to denote the asymptotic heavy flavor Wilson coefficients, since no confusion is to be expected. For the remaining terms, we suppress for brevity the arguments $N$, $Q^2/\mu^2$ and $m^2/\mu^2$, all of which can be inferred from Eqs. (\ref{Callsplit}, \ref{CallFAC}). 
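Eqs.~(\ref{gammapres2}) and (\ref{gammapres1}) are pure bookkeeping and may be illustrated by a minimal sketch; the test function $f(n_f) = n_f^2$ is an arbitrary choice for demonstration only:

```python
def tilde(f):
    """f~(n_f) = f(n_f)/n_f, Eq. (gammapres2)."""
    return lambda nf: f(nf) / nf

def hat(f):
    """f^(n_f) = f(n_f+1) - f(n_f), Eq. (gammapres1)."""
    return lambda nf: f(nf + 1) - f(nf)

f = lambda nf: nf**2          # arbitrary test function
print(tilde(f)(3))            # 9/3 = 3.0
print(hat(f)(3))              # 16 - 9 = 7
print(hat(tilde(f))(3))       # 16/4 - 9/3 = 1.0, i.e. the combination hat-tilde-f
```

The last line realizes the composed operation $\hat{\tilde{f}}(n_f) = \widetilde{f}(n_f+1)-\widetilde{f}(n_f)$ used in Eqs.~(\ref{eqWIL1})--(\ref{eqWIL5}).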
Additionally, we will suppress from now on the index ${\sf S}$ and label only the ${\sf NS}$ and ${\sf PS}$ terms explicitly. The contributions to $L_{i,j}$ read \begin{eqnarray} C_{q,(2,L)}^{\sf PS}(n_f) +L_{q,(2,L)}^{\sf PS} (n_f+1) &=& \Bigl[ A_{qq,Q}^{\sf NS}(n_f+1) +A_{qq,Q}^{\sf PS}(n_f+1) +A_{Qq}^{\sf PS}(n_f+1) \Bigr] \nonumber\\ && \times n_f \tilde{C}_{q,(2,L)}^{\sf PS}(n_f+1) +A_{qq,Q}^{\sf PS}(n_f+1) C_{q,(2,L)}^{\sf NS}(n_f+1) \nonumber\\ && +A_{gq,Q}(n_f+1) n_f \tilde{C}_{g,(2,L)}(n_f+1)~, \nonumber\\ \label{LPSFAC} \\ C_{g,(2,L)}(n_f) +L_{g,(2,L)}(n_f+1) &=& A_{gg,Q}(n_f+1) n_f \tilde{C}_{g,(2,L)}(n_f+1) \nonumber\\ && + A_{qg,Q}(n_f+1) C_{q,(2,L)}^{\sf NS}(n_f+1) \nonumber\\ && +\Bigl[ A_{qg,Q}(n_f+1) +A_{Qg}(n_f+1) \Bigr] n_f\tilde{C}_{q,(2,L)}^{\sf PS}(n_f+1)~.\nonumber\\ \label{LgFAC} \end{eqnarray} The terms $H_{i,j}$ are given by \begin{eqnarray} H_{q,(2,L)}^{\sf PS} (n_f+1) &=& A_{Qq}^{\sf PS}(n_f+1) \Bigl[ C_{q,(2,L)}^{\sf NS}(n_f+1) +\tilde C_{q,(2,L)}^{\sf PS} (n_f+1) \Bigr] \nonumber\\ && +\Bigl[ A_{qq,Q}^{\sf NS}(n_f+1) +A_{qq,Q}^{\sf PS}(n_f+1) \Bigr] \tilde{C}_{q,(2,L)}^{\sf PS}(n_f+1) \nonumber\\ && +A_{gq,Q}(n_f+1) \tilde{C}_{g,(2,L)}(n_f+1)~, \label{HPSFAC} \\ H_{g,(2,L)}(n_f+1) &=& A_{gg,Q}(n_f+1) \tilde{C}_{g,(2,L)}(n_f+1) +A_{qg,Q}(n_f+1) \tilde{C}_{q,(2,L)}^{\sf PS}(n_f+1) \nonumber\\ && + A_{Qg}(n_f+1) \Bigl[ C_{q,(2,L)}^{\sf NS}(n_f+1) +\tilde{C}_{q,(2,L)}^{\sf PS}(n_f+1) \Bigr]~. \label{HgFAC} \end{eqnarray} Expanding the above equations up to $O(a_s^3)$, we obtain, using Eqs. 
(\ref{gammapres2}, \ref{gammapres1}), the heavy flavor Wilson coefficients in the asymptotic limit~: \begin{eqnarray} \label{eqWIL1} L_{q,(2,L)}^{\sf NS}(n_f+1) &=& a_s^2 \Bigl[A_{qq,Q}^{(2), {\sf NS}}(n_f+1)~\delta_2 + \hat{C}^{(2), {\sf NS}}_{q,(2,L)}(n_f)\Bigr] \nonumber\\ &+& a_s^3 \Bigl[A_{qq,Q}^{(3), {\sf NS}}(n_f+1)~\delta_2 + A_{qq,Q}^{(2), {\sf NS}}(n_f+1) C_{q,(2,L)}^{(1), {\sf NS}}(n_f+1) \nonumber \\ && \hspace*{5mm} + \hat{C}^{(3), {\sf NS}}_{q,(2,L)}(n_f)\Bigr]~, \\ \label{eqWIL2} L_{q,(2,L)}^{\sf PS}(n_f+1) &=& a_s^3 \Bigl[~A_{qq,Q}^{(3), {\sf PS}}(n_f+1)~\delta_2 + A_{gq,Q}^{(2)}(n_f)~~n_f\tilde{C}_{g,(2,L)}^{(1)}(n_f+1) \nonumber \\ && \hspace*{5mm} + n_f \hat{\tilde{C}}^{(3), {\sf PS}}_{q,(2,L)}(n_f)\Bigr]~, \\ \label{eqWIL3} L_{g,(2,L)}^{\sf S}(n_f+1) &=& a_s^2 A_{gg,Q}^{(1)}(n_f+1)n_f \tilde{C}_{g,(2,L)}^{(1)}(n_f+1) \nonumber\\ &+& a_s^3 \Bigl[~A_{qg,Q}^{(3)}(n_f+1)~\delta_2 + A_{gg,Q}^{(1)}(n_f+1)~~n_f\tilde{C}_{g,(2,L)}^{(2)}(n_f+1) \nonumber\\ && \hspace*{5mm} + A_{gg,Q}^{(2)}(n_f+1)~~n_f\tilde{C}_{g,(2,L)}^{(1)}(n_f+1) \nonumber\\ && \hspace*{5mm} + ~A^{(1)}_{Qg}(n_f+1)~~n_f\tilde{C}_{q,(2,L)}^{(2), {\sf PS}}(n_f+1) + n_f \hat{\tilde{C}}^{(3)}_{g,(2,L)}(n_f)\Bigr]~, \\ \nonumber \\ H_{q,(2,L)}^{\sf PS}(n_f+1) &=& a_s^2 \Bigl[~A_{Qq}^{(2), {\sf PS}}(n_f+1)~\delta_2 +~\tilde{C}_{q,(2,L)}^{(2), {\sf PS}}(n_f+1)\Bigr] \nonumber\\ &+& a_s^3 \Bigl[~A_{Qq}^{(3), {\sf PS}}(n_f+1)~\delta_2 +~\tilde{C}_{q,(2,L)}^{(3), {\sf PS}}(n_f+1) \nonumber \end{eqnarray} \begin{eqnarray} && \hspace*{-20mm} + A_{gq,Q}^{(2)}(n_f+1)~\tilde{C}_{g,(2,L)}^{(1)}(n_f+1) + A_{Qq}^{(2), {\sf PS}}(n_f+1)~C_{q,(2,L)}^{(1), {\sf NS}}(n_f+1) \Bigr]~, \label{eqWIL4} \\ \nonumber\\ H_{g,(2,L)}^{\sf S}(n_f+1) &=& a_s \Bigl[~A_{Qg}^{(1)}(n_f+1)~\delta_2 +~\tilde{C}^{(1)}_{g,(2,L)}(n_f+1) \Bigr] \nonumber\\ &+& a_s^2 \Bigl[~A_{Qg}^{(2)}(n_f+1)~\delta_2 +~A_{Qg}^{(1)}(n_f+1)~C^{(1), {\sf NS}}_{q,(2,L)}(n_f+1)\nonumber\\ && \hspace*{5mm} 
+~A_{gg,Q}^{(1)}(n_f+1)~\tilde{C}^{(1)}_{g,(2,L)}(n_f+1) +~\tilde{C}^{(2)}_{g,(2,L)}(n_f+1) \Bigr] \nonumber\\ &+& a_s^3 \Bigl[~A_{Qg}^{(3)}(n_f+1)~\delta_2 +~A_{Qg}^{(2)}(n_f+1)~C^{(1), {\sf NS}}_{q,(2,L)}(n_f+1) \nonumber\\ && \hspace*{5mm} +~A_{gg,Q}^{(2)}(n_f+1)~\tilde{C}^{(1)}_{g,(2,L)}(n_f+1) \nonumber\\ && \hspace*{5mm} +~A_{Qg}^{(1)}(n_f+1)\Bigl\{ C^{(2), {\sf NS}}_{q,(2,L)}(n_f+1) +~\tilde{C}^{(2), {\sf PS}}_{q,(2,L)}(n_f+1)\Bigr\} \nonumber\\ && \hspace*{5mm} +~A_{gg,Q}^{(1)}(n_f+1)~\tilde{C}^{(2)}_{g,(2,L)}(n_f+1) +~\tilde{C}^{(3)}_{g,(2,L)}(n_f+1) \Bigr]~. \label{eqWIL5} \end{eqnarray} Note that $\delta_2$ has been defined in Eq.~(\ref{delta2def}). The above equations include the radiative corrections to the Wilson coefficients due to heavy quark loops. Therefore, in order to compare e.g. with the calculation in Refs.~\cite{Laenen:1992zkxLaenen:1992xs,*Riemersma:1994hv}, these terms still have to be subtracted. Since the light flavor Wilson coefficients were calculated in the $\overline{\sf MS}$--scheme, the {\sf same} scheme has to be used for the massive OMEs. It should also be used throughout the renormalization to derive consistent results in QCD analyses of deep-inelastic scattering data and to be able to compare to other analyses directly. This means that one has to pay special attention to which scheme for the definition of $a_s$ was used. In Section~\ref{SubSec-RENCo} we will describe a scheme for $a_s$, to which one is naturally led in the course of renormalization. We refer to this scheme as the ${\sf MOM}$--scheme and present the transformation formula to the $\overline{\sf MS}$--scheme as well. How this affects the asymptotic heavy flavor Wilson coefficients is described in Section~\ref{SubSec-HQElProdWave}, where we compare Eqs. (\ref{eqWIL1})--(\ref{eqWIL5}) to those presented in Ref.~\cite{Buza:1995ie}. 
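The expansion leading to Eqs.~(\ref{eqWIL1})--(\ref{eqWIL5}) is mechanical series algebra and can be cross-checked symbolically. The sketch below encodes only the starting orders of the individual series quoted in the text, multiplies them according to Eq.~(\ref{HgFAC}), and recovers the $O(a_s)$ and $O(a_s^2)$ brackets of Eq.~(\ref{eqWIL5}); the term names are schematic labels, not actual functions:

```python
from collections import defaultdict
from itertools import product

def mul(A, B):
    """Product of two power series in a_s; a series maps order -> set of term-label tuples."""
    out = defaultdict(set)
    for (i, ts), (j, us) in product(A.items(), B.items()):
        for t in ts:
            for u in us:
                # drop the identity label "1" from products
                term = tuple(sorted(x for x in t + u if x != "1")) or ("1",)
                out[i + j].add(term)
    return out

def add(*series):
    """Sum of power series: union of term sets, order by order."""
    out = defaultdict(set)
    for s in series:
        for i, ts in s.items():
            out[i] |= ts
    return out

one = ("1",)
# starting orders as stated in the text; "delta2" is the tree-level delta_2
A_Qg   = {1: {("AQg1",)}, 2: {("AQg2",)}, 3: {("AQg3",)}}
A_ggQ  = {0: {one}, 1: {("AggQ1",)}, 2: {("AggQ2",)}}   # contains the O(a_s^0) term
A_qgQ  = {3: {("AqgQ3",)}}                               # first arises at O(a_s^3)
C_qNS  = {0: {("delta2",)}, 1: {("CqNS1",)}, 2: {("CqNS2",)}}
Cg_t   = {1: {("Cg1",)}, 2: {("Cg2",)}, 3: {("Cg3",)}}   # tilde C_g, starts at O(a_s)
CqPS_t = {2: {("CqPS2",)}, 3: {("CqPS3",)}}              # tilde C_q^PS, starts at O(a_s^2)

# Eq. (HgFAC): H_g = A_ggQ Cg~ + A_qgQ CqPS~ + A_Qg (C_qNS + CqPS~)
H_g = add(mul(A_ggQ, Cg_t), mul(A_qgQ, CqPS_t), mul(A_Qg, add(C_qNS, CqPS_t)))

print(sorted(H_g[1]))  # O(a_s):   A_Qg^(1) delta_2 + Cg~^(1)
print(sorted(H_g[2]))  # O(a_s^2): the bracket of Eq. (eqWIL5)
```

The same bookkeeping reproduces the $O(a_s^3)$ bracket of Eq.~(\ref{eqWIL5}) and, applied to Eqs.~(\ref{LNSFAC})--(\ref{HPSFAC}), the remaining expansions.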
\subsection{\bf\boldmath Heavy Quark Parton Densities} \label{SubSec-HQFlav} The FFNS forms a general starting point to describe and to calculate the heavy flavor contributions to the DIS structure functions. Approaching higher values of $Q^2$, one may think of the heavy quark becoming effectively light and thus acquiring a parton density of its own. Different variable flavor scheme treatments were considered in the past, cf. e.g. \cite{Aivazis:1993pi,*Thorne:2008xf}. Here we follow \cite{Buza:1996wv} to obtain a description in complete accordance with the renormalization group in the ${\sf \overline{MS}}$--scheme. In the kinematic region in which the factorization relation (\ref{CallFAC}) holds, one may redefine the results obtained in the FFNS, which allows for a partonic description at the level of $(n_f+1)$ flavors. In the strict sense, only massless particles can be interpreted as partons in hard scattering processes, since the lifetime of these quantum fluctuations off the hadronic background, $\tau_{\sf life}\propto 1/(k_\perp^2+m_Q^2)$, has to be large compared to the interaction time $\tau_{\sf int} \propto 1/Q^2$ in the infinite momentum frame, \cite{Drell:1970yt}, cf. also Section~\ref{SubSubSec-DISValpart}. In the massive case, $\tau_{\sf life}$ is necessarily finite and there exists a larger scale $Q^2_0$ below which any partonic description fails. From this it follows that the heavy quark effects are genuinely described by the {\sf process dependent} Wilson coefficients. Since parton densities are {\sf process independent} quantities, only process independent pieces of the Wilson coefficients can be used to define them for heavy quarks at all. Clearly this is impossible in the region close to threshold but requires $Q^2/m_Q^2 = r \gg 1$, with $r \gtrsim 10$ in case of $F_2(x,Q^2)$. For $F_L(x,Q^2)$ the corresponding ratio even turns out to be $r \gtrsim 800$,~\cite{Buza:1995ie,Blumlein:2006mh,Gluck:1993dpa}. 
Heavy flavor parton distributions can thus be constructed only for scales $\mu^2 \gg m_Q^2$. This is done under the further assumption that for the other heavy flavors the masses $m_{Q_i}$ form a hierarchy $m_{Q_1}^2 \ll m_{Q_2}^2 \ll~~{\sf etc.}$ Their use in observables is restricted to a region in which the power corrections can be safely neglected. This range may strongly depend on the observable considered, as the examples of $F_2$ and $F_L$ show. Also in case of the structure functions associated with transverse virtual gauge boson polarizations, like $F_2(x,Q^2)$, the factorization (\ref{CallFAC}) only occurs far above threshold, $Q_{{\sf thr}}^2 \sim 4 m_Q^2 x/(1-x)$, and at even larger scales for $F_L(x,Q^2)$. In order to maintain the process independence of the parton distributions, we define them for $(n_f+1)$ flavors from the light flavor parton distribution functions for $n_f$ flavors together with the massive operator matrix elements. The following set of parton densities is obtained in Mellin--space,~\cite{Buza:1996wv}~: \begin{eqnarray} && f_k(n_f+1,N,\mu^2,m^2) + f_{\overline{k}}(n_f+1,N,\mu^2,m^2)= \nonumber\\ && \hspace{10mm} A_{qq,Q}^{\sf NS}\left(N,n_f+1,\frac{\mu^2}{m^2}\right) \cdot\bigl[f_k(n_f,N,\mu^2)+f_{\overline{k}}(n_f,N,\mu^2)\bigr] \nonumber\\ && \hspace{10mm} +\frac{1}{n_f}A_{qq,Q}^{\sf PS}\left(N,n_f+1,\frac{\mu^2}{m^2}\right) \cdot\Sigma(n_f,N,\mu^2) \nonumber\\ && \hspace{10mm} +\frac{1}{n_f}A_{qg,Q}\left(N,n_f+1,\frac{\mu^2}{m^2}\right) \cdot G(n_f,N,\mu^2), \label{HPDF1} \\ && f_Q(n_f+1,N,\mu^2,m^2) + f_{\overline{Q}}(n_f+1,N,\mu^2,m^2)= \nonumber\\ && \hspace{10mm} A_{Qq}^{\sf PS}\left(N,n_f+1,\frac{\mu^2}{m^2}\right) \cdot \Sigma(n_f,N,\mu^2) \nonumber\\ && \hspace{10mm} +A_{Qg}\left(N,n_f+1,\frac{\mu^2}{m^2}\right) \cdot G(n_f,N,\mu^2)~. 
\label{fQQB} \end{eqnarray} The flavor singlet, non--singlet and gluon densities for $(n_f+1)$ flavors are given by \begin{eqnarray} \Sigma(n_f+1,N,\mu^2,m^2) &=& \Biggl[ A_{qq,Q}^{\sf NS}\left(N,n_f+1,\frac{\mu^2}{m^2}\right) +A_{qq,Q}^{\sf PS}\left(N,n_f+1,\frac{\mu^2}{m^2}\right) \nonumber \\ && \hspace*{-20mm} +A_{Qq}^{\sf PS}\left(N,n_f+1,\frac{\mu^2}{m^2}\right) \Biggr] \cdot \Sigma(n_f,N,\mu^2) \nonumber \\ && \hspace*{-23mm} +\left[ A_{qg,Q}\left(N,n_f+1,\frac{\mu^2}{m^2}\right) +A_{Qg}\left(N,n_f+1,\frac{\mu^2}{m^2}\right) \right] \cdot G(n_f,N,\mu^2)~, \nonumber\\ \\ \Delta_k(n_f+1,N,\mu^2,m^2) &=& f_k(n_f+1,N,\mu^2,m^2)+f_{\overline{k}}(n_f+1,N,\mu^2,m^2) \nonumber\\ && -\frac{1}{n_f+1}\Sigma(n_f+1,N,\mu^2,m^2)~, \\ \label{HPDF2} G(n_f+1,N,\mu^2,m^2) &=& A_{gq,Q}\left(N,n_f+1,\frac{\mu^2}{m^2}\right) \cdot \Sigma(n_f,N,\mu^2) \nonumber\\ && +A_{gg,Q}\left(N,n_f+1,\frac{\mu^2}{m^2}\right) \cdot G(n_f,N,\mu^2)~. \end{eqnarray} Note that the {\sf new} parton densities depend on the renormalized heavy quark mass $m^2=m^2(a_s^2(\mu^2))$. As will be outlined in Sections \ref{Sec-REN}, \ref{Sec-REP}, the corresponding relations for the operator matrix elements depend on the mass--renormalization scheme. This has to be taken into account in QCD analyses; in particular, $m^2$ cannot be chosen constant. The quarkonic and gluonic operators obtained in the light--cone expansion can be normalized arbitrarily. It is, however, convenient to choose the relative factor such that the non-perturbative nucleon-state expectation values, $\Sigma(n_f,N,\mu^2)$ and $G(n_f,N,\mu^2)$, obey \begin{eqnarray} \Sigma(n_f,N=2,\mu^2)+G(n_f,N=2,\mu^2) = 1 \end{eqnarray} due to 4-momentum conservation. As a consequence, the OMEs fulfill the relations, \cite{Buza:1996wv}, \begin{eqnarray} A_{qq,Q}^{\sf NS}(N=2) +A_{qq,Q}^{\sf PS}(N=2) +A_{Qq}^{\sf PS}(N=2) +A_{gq,Q}(N=2) &=& 1~, \label{sumrule1} \\ A_{qg,Q}(N=2) +A_{Qg}(N=2) + A_{gg,Q}(N=2) &=& 1~.
\label{sumrule2} \end{eqnarray} The above scenario can be easily followed up to 2-loop order. Also here diagrams contribute which carry two different heavy quark flavors. At this level, the additional heavy degree of freedom may be absorbed into the coupling constant and thus decoupled temporarily. Beginning with 3-loop order the situation becomes more involved since there are graphs in which two different heavy quark flavors occur in nested topologies, i.e., the corresponding diagrams depend on the ratio $\rho = m_c^2/m_b^2$ yielding power corrections in $\rho$. There is no strong hierarchy between these two masses. The above picture, leading to heavy flavor parton distributions whenever $Q^2 \gg m^2$ will not hold anymore, since, in case of the two-flavor graphs, one cannot decide immediately whether they belong to the $c$-- or the $b$--quark distribution. Hence, the partonic description can only be maintained within a certain approximation by {\sf assuming} $\rho \ll 1$. Conversely, one may extend the kinematic regime for deep-inelastic scattering to define the distribution functions (\ref{HPDF1})--(\ref{HPDF2}) upon knowing the power corrections which occur in the heavy flavor Wilson coefficients ${\sf H}_{i,j}=H_{i,j},~L_{i,j}$. This is the case for 2-loop order. We separate \begin{eqnarray} {\sf H}_{i,j}\left(x,\frac{Q^2}{m^2},\frac{m^2}{\mu^2}\right) = {\sf H}_{i,j}^{{\sf asymp}}\left(x,\frac{Q^2}{m^2}, \frac{m^2}{\mu^2}\right) + {\sf H}_{i,j}^{{\sf power}}\left(x,\frac{Q^2}{m^2} ,\frac{m^2}{\mu^2}\right)~, \label{splitpower} \end{eqnarray} where ${\sf H}_{i,j}^{{\sf asymp}}(x,Q^2/m^2,m^2/\mu^2)$ denotes the part of the Wilson coefficient given in Eq.~(\ref{CallFAC}). If one accounts for ${\sf H}_{i,j}^{{\sf power}}(x,Q^2/m^2,m^2/\mu^2)$ in the fixed flavor number scheme, Eqs.~(\ref{HPDF1})--(\ref{HPDF2}) are still valid, but they do not necessarily yield the dominant contributions in the region closer to threshold. 
There, the kinematics of heavy quarks is far from collinear, which is the main reason that a partonic description has to fail. Moreover, relation Eq.~(\ref{cond}) may be violated. In any case, it is not possible to use the partonic description (\ref{HPDF1})--(\ref{HPDF2}) alone for other hard processes in a kinematic domain with significant power corrections. For processes in the high $p_\perp$ region at the LHC, in which condition (\ref{cond}) is fulfilled and the characteristic scale $\mu^2$ obeys $\mu^2\gg~m^2$, one may use heavy flavor parton distributions by proceeding as follows. In the region $Q^2 \gtrsim 10~m^2$ the heavy flavor contributions to the $F_2(x,Q^2)$--world data are very well described by the asymptotic representation in the FFNS. For large scales one can then form a variable flavor representation including one heavy flavor distribution, \cite{Buza:1996wv}. This process can be iterated towards the next heavier flavor, provided the {\sf universal} representation holds and all power corrections can be safely neglected. One has to take special care of the fact that the matching scale in the coupling constant, at which the transition $n_f \rightarrow n_f+1$ is to be performed, often differs rather significantly from $m$, cf. \cite{Blumlein:1998sh}. \newpage \section{\bf\boldmath Renormalization of Composite Operator Matrix Elements} \label{Sec-REN} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0} Before the massive OMEs can be renormalized, they have to be calculated using a suitable regularization scheme; we apply dimensional regularization in $D=4+\varepsilon$ dimensions, see Section~\ref{SubSec-RENReg}. The unrenormalized massive OMEs are then denoted by a double--hat and are expanded into a perturbative series in the bare coupling constant $\hat{a}_s$~\footnote{We would like to remind the reader of the definition of the hat--symbol for a function $f$, Eq.
(\ref{gammapres1}), which is not to be confused with the hat--symbol denoting unrenormalized quantities.} via \begin{eqnarray} \hat{\hspace*{-1mm}\hat{A}}_{ij}\Bigl(\frac{\hat{m}^2}{\mu^2},\varepsilon,N\Bigr)&=& \delta_{ij}+ \sum_{l=1}^{\infty} \hat{a}_s^l~ \hat{\hspace*{-1mm}\hat{A}}_{ij}^{(l)} \Bigl(\frac{\hat{m}^2}{\mu^2},\varepsilon,N\Bigr) \nonumber\\ &=&\delta_{ij}+ \sum_{l=1}^{\infty} \hat{a}_s^l\Bigl(\frac{\hat{m}^2}{\mu^2}\Bigr)^{l\varepsilon/2} ~\hat{\hspace*{-1mm}\hat{A}}_{ij}^{(l)} \Bigl(\hat{m}^2=\mu^2,\varepsilon,N\Bigr)~. \label{pertome1} \end{eqnarray} The OMEs in Eq.~(\ref{pertome1}) depend on $\varepsilon$, the Mellin parameter $N$, the bare mass $\hat{m}$ and the renormalization scale $\mu=\mu_R$. Also the factorization scale $\mu_F$ will be identified with $\mu$ in the following. Note that in the last line of (\ref{pertome1}), the dependence on the ratio of the mass and the renormalization scale was made explicit for each order in $\hat{a}_s$. The possible values of the indices $ij$ have been described in Section~\ref{SubSec-HQAsym}, below Eq.~(\ref{pertomeren}). The factorization between the massive OMEs and the massless Wilson coefficients (\ref{CallFAC}) requires the external legs of the operator matrix elements to be on--shell, \begin{eqnarray} \label{OS} p^2 = 0~, \end{eqnarray} where $p$ denotes the external momentum. Unlike in the massless case, where the scale of the OMEs is set by an off--shell momentum $-p^2 < 0$, in our framework the internal heavy quark mass yields the scale. In the former case, one observes a mixing of the physical OMEs with non--gauge invariant (NGI) operators, cf. \cite{Hamberg:1991qt,Collins:1994ee,*Harris:1994tp}, and contributions originating in the violation of the equations of motion (EOM). Terms of this kind do not contribute in the present case, as will be discussed in Section~\ref{SubSec-RENProj}. Renormalizing the OMEs then consists of four steps. First, mass and charge renormalization have to be performed.
The former is done in the on--mass--shell--scheme and described in Section~\ref{SubSec-RENMa}. For the latter, we present the final result in the $\overline{\sf MS}$--scheme, but in an intermediate step, we adopt an on--shell subtraction scheme (${\sf MOM}$--scheme) for the gluon propagator, cf. Section~\ref{SubSec-RENCo}. This is necessary to maintain condition (\ref{OS}), i.e., to keep the external massless partons on--shell. Note that there are other, differing ${\sf MOM}$--schemes used in the literature, cf. e.g.~\cite{Chetyrkin:2008jk}. After mass and coupling constant renormalization, we denote the OMEs with a single hat, $\hat{A}_{ij}$. The remaining singularities are then connected to the composite operators and the particle kinematics of the corresponding Feynman--diagrams. One can distinguish between ultraviolet (UV) and collinear (C) divergences. In Section~\ref{SubSec-RENOp}, we describe how the former are renormalized via the operator $Z$--factors. The UV--finite OMEs are denoted by a bar, $\bar{A}_{ij}$. Finally, the C--divergences are removed via mass factorization, cf. Section~\ref{SubSec-RENMassFac}. The renormalized OMEs are then denoted by $A_{ij}$. Section~\ref{SubSec-RENPred} contains the general structure of the massive OMEs up to $O(a_s^3)$ in terms of renormalization constants and lower order contributions. \subsection{\bf\boldmath Regularization Scheme} \label{SubSec-RENReg} When evaluating momentum integrals of Feynman diagrams in $D=4$ dimensions, one encounters singularities, which have to be regularized. A convenient method is to apply $D$-dimensional regularization, \cite{'tHooft:1972fi,Ashmore:1972uj,*Cicuta:1972jf,*Bollini:1972ui}. The dimensionality of space--time is analytically continued to values $D\neq 4$, for which the corresponding integrals converge.
After performing a Wick rotation, integrals in Euclidean space of the form \begin{eqnarray} \int \frac{d^Dk}{(2\pi)^D}\frac{(k^2)^r}{(k^2+R^2)^m}= \frac{1}{(4\pi)^{D/2}}\frac{\Gamma(r+D/2)\Gamma(m-r-D/2)}{\Gamma(D/2) \Gamma(m)}(R^2)^{r+D/2-m}~ \label{Dint} \end{eqnarray} are obtained. Note that within dimensional regularization, this integral vanishes if $R=0$, i.e. if it does not contain a scale, \cite{Collins:1984xc}. The properties of the $\Gamma$--function in the complex plane are well known, see Appendix~\ref{App-SpeFun}. Therefore one can analytically continue the right-hand side of Eq.~(\ref{Dint}) from integer values of $D$ to arbitrary complex values. In order to recover the physical space-time dimension, we set $D=4+\varepsilon$. The singularities can now be isolated by expanding the $\Gamma$--functions into Laurent-series around $\varepsilon=0$. Note that this method regularizes both UV- and C- singularities and one could in principle distinguish their origins by a label, $\varepsilon_{UV},~\varepsilon_{C}$, but we treat all singularities by a common parameter $\varepsilon$ in the following. Additionally, all other quantities have to be considered in $D$ dimensions. This applies for the metric tensor $g_{\mu\nu}$ and the Clifford-Algebra of $\gamma$--matrices, see Appendix~\ref{App-Con}. Also the bare coupling constant $\hat{g}_s$, which is dimensionless in $D=4$, has to be continued to $D$ dimensions. Due to this it acquires the dimension of mass, \begin{eqnarray} \hat{g}_{s,D}=\mu^{-\varepsilon/2}\hat{g}_s \label{mudef}~, \end{eqnarray} which is described by a scale $\mu$ corresponding to the renormalization scale in Eq.~(\ref{pertome1}). From now on, Eq.~(\ref{mudef}) is understood to have been applied and we set \begin{eqnarray} \frac{\hat{g}^2_s}{(4\pi)^2}=\hat{a}_s~. 
\label{asdef} \end{eqnarray} Dimensional regularization has the advantage, unlike the Pauli--Villars regularization, \cite{Pauli:1949zm}, that it obeys all physical requirements such as Lorentz-invariance, gauge invariance and unitarity, \cite{'tHooft:1972fi,Speer:1974cz}. Hence it is suitable to be applied in perturbative calculations in quantum field theory including Yang--Mills fields. Using dimensional regularization, the poles of the unrenormalized results appear as terms $1/\varepsilon^i$, where in the calculations in this thesis $i$ can run from $1$ to the number of loops. In order to remove these singularities, one has to perform renormalization and mass factorization. To do this, a suitable scheme has to be chosen. The most commonly used schemes in perturbation theory are the ${\rm MS}$-scheme, \cite{'tHooft:1973mm}, and the $\overline{\sf MS}$-scheme, \cite{Bardeen:1978yd}, to which we will refer in the following. \\ In the ${\rm MS}$-scheme only the pole terms in $\varepsilon$ are subtracted. More generally, the $\overline{\sf MS}$-scheme makes use of the observation that $1/\varepsilon$--poles always appear in combination with the spherical factor \begin{eqnarray} S_{\varepsilon} \equiv \exp \Bigl[\frac{\varepsilon}{2}(\gamma_E-\ln(4\pi))\Bigr] \label{Sep}~, \end{eqnarray} which may be bracketed out for each loop order. Here $\gamma_E$ denotes the Euler-Mascheroni constant \begin{eqnarray} \gamma_E\equiv\lim_{N\rightarrow \infty} \Bigl(\sum_{k=1}^{N}\frac{1}{k} -\ln(N)\Bigr) \approx 0.577215664901\ldots~. \label{gammaesum} \end{eqnarray} By subtracting the poles in the form $S_{\varepsilon}/\varepsilon$ in the $\overline{\sf MS}$-scheme, no terms containing $\ln^k(4\pi ),~\gamma^k_E$ will appear in the renormalized result, simplifying the expression. 
This is due to the fact that for a $k$--loop calculation, one will always obtain the overall term \begin{eqnarray} \frac{\Gamma(1-k\frac{\varepsilon}{2})}{(4\pi)^{\frac{k\varepsilon}{2}}}&=&S_{\varepsilon}^k \exp \Bigl(\sum_{i=2}^{\infty}\frac{\zeta_i}{i} \Bigl(\frac{k\varepsilon}{2}\Bigr)^{i}\Bigr)~, \label{SepA} \end{eqnarray} with $\zeta_i$ being Riemann's $\zeta$--values, cf. Appendix~\ref{App-SpeFun}. In the following, we will always assume that the $\overline{\sf MS}$-scheme is applied and set $S_{\varepsilon}\equiv 1$. \subsection{\bf\boldmath Projectors } \label{SubSec-RENProj} We consider the expectation values of the local operators (\ref{COMP1})--(\ref{COMP3}) between partonic states $j$ \begin{eqnarray} G_{ij,Q}=\bra{j}O_i\ket{j}_Q~. \label{Greens} \end{eqnarray} Here, $i,j=q,g$ and the subscript $Q$ denotes the presence of one heavy quark. In case of massless QCD, one has to take the external parton $j$ of momentum $p$ off--shell, $p^2 < 0$, which implies that the OMEs derived from Eq.~(\ref{Greens}) are not gauge invariant. As has been outlined in Ref.~\cite{Matiounine:1998ky}, they acquire unphysical parts which are due to the breakdown of the equations of motion (EOM) and the mixing with additional non--gauge--invariant (NGI) operators. The EOM terms may be dealt with by applying a suitable projection operator to eliminate them, \cite{Matiounine:1998ky}. The NGI terms are more difficult to deal with, since they affect the renormalization constants and one has to consider additional ghost-- and alien-- OMEs, see \cite{Hamberg:thesis,Hamberg:1991qt,Collins:1994ee,*Harris:1994tp,Matiounine:1998ky} for details. In the case of massive OMEs, these difficulties do not occur. The external particles are massless and taken to be on--shell. Hence the equations of motion are not violated. 
Additionally, the OMEs remain gauge invariant quantities, since the external states are physical and therefore no mixing with NGI--operators occurs, \cite{Hamberg:thesis,Hamberg:1991qt,Matiounine:1998ky,Collins:1984xc}. \\ \noindent The computation of the Green's functions will reveal trace terms which do not contribute since the local operators are traceless and symmetric under the Lorentz group. It is convenient to project these terms out from the beginning by contracting with an external source term \begin{eqnarray} J_N \equiv \Delta_{\mu_1}...\Delta_{\mu_N}~. \label{Jsource} \end{eqnarray} Here $\Delta_{\mu}$ is a light-like vector, $\Delta^2 = 0$. In this way, the Feynman--rules for composite operators can be derived, cf. Appendix \ref{App-FeynRules}. In addition, one has to amputate the external field. Note that we nonetheless choose to renormalize the mass and the coupling multiplicatively and include self--energy insertions containing massive lines on external legs into our calculation. The Green's functions in momentum space corresponding to the OMEs with external gluons are then given by \begin{eqnarray} \epsilon^\mu(p) G^{ab}_{Q,\mu\nu} \epsilon^\nu(p)&=& \epsilon^\mu(p) J_N \bra{A^a_{\mu}(p)} O_{Q;\mu_1 ... \mu_N} \ket{A^b_{\nu}(p)} \epsilon^\nu(p)~, \label{GabmnQgdef} \\ \epsilon^\mu(p) G^{ab}_{q,Q,\mu\nu} \epsilon^\nu(p)&=& \epsilon^\mu(p) J_N \bra{A^a_{\mu}(p)} O_{q;\mu_1 ... \mu_N} \ket{A^b_{\nu}(p)}_Q \epsilon^\nu(p)~, \label{GabmnqgQdef} \\ \epsilon^\mu(p) G^{ab}_{g,Q,\mu\nu} \epsilon^\nu(p)&=& \epsilon^\mu(p) J_N \bra{A^a_{\mu}(p)} O_{g;\mu_1 ... \mu_N} \ket{A^b_{\nu}(p)}_Q \epsilon^\nu(p)~. \label{GabmnggQdef} \end{eqnarray} In Eqs. (\ref{GabmnQgdef}-\ref{GabmnggQdef}), $A^{a}_{\mu}$ denote the external gluon fields with color index $a$, Lorentz index $\mu$ and momentum $p$. The polarization vector of the external gluon is given by $\epsilon^{\mu}(p)$. Note that in Eq.~(\ref{GabmnQgdef}), the operator couples to the heavy quark. In Eqs.
(\ref{GabmnqgQdef},~\ref{GabmnggQdef}) it couples to a light quark or gluon, respectively, with the heavy quark still being present in virtual loops. \\ \noindent In the flavor non--singlet case, there is only one term which reads \begin{eqnarray} \overline{u}(p,s) G^{ij, {\sf NS}}_{q,Q} \lambda_r u(p,s)&=& J_N \bra{\overline{\Psi}_i(p)}O_{q,r;\mu_1...\mu_N}^{\sf NS}\ket{\Psi^j(p)}_Q~ \label{GijNS}~, \end{eqnarray} with $u(p,s),~\overline{u}(p,s)$ being the bi--spinors of the external massless quark and anti--quark, respectively. The remaining Green's functions with an external quark are given by \begin{eqnarray} \overline{u}(p,s) G^{ij}_{Q} u(p,s)&=& J_N\bra{\overline{\Psi}_i(p)} O_{Q,\mu_1 ... \mu_N} \ket{\Psi^j(p)}~, \label{GijQqPS} \\ \overline{u}(p,s) G^{ij}_{q,Q} u(p,s)&=& J_N\bra{\overline{\Psi}_i(p)}O_{q,\mu_1...\mu_N} \ket{\Psi^j(p)}_Q \label{GijqqQPS} ~, \\ \overline{u}(p,s) G^{ij}_{g,Q} u(p,s)&=& J_N\bra{\overline{\Psi}_i(p)}O_{g,\mu_1...\mu_N} \ket{\Psi^j(p)}_Q \label{GijgqQ}~. \end{eqnarray} Note that in the quarkonic case the fields $\overline{\Psi},~\Psi$ with color indices $i,j$ stand for the external light quarks only. Further, we recall that the ${\sf S}$-- contributions are split up according to Eq.~(\ref{splitS}), which is of relevance for Eq.~(\ref{GijqqQPS}). The above tensors have the general form, cf.
\cite{Buza:1995ie,Matiounine:1998ky}, \begin{eqnarray} \hat{G}^{ab}_{Q,\mu\nu}&=& \hat{\hspace*{-1mm}\hat{A}}_{Qg} \Bigl(\frac{\hat{m}^2}{\mu^2},\varepsilon,N\Bigr) \delta^{ab} (\Delta \cdot p)^N \Big [- g_{\mu\nu} +\frac{p_{\mu}\Delta_{\nu}+\Delta_{\mu}p_{\nu}} {\Delta \cdot p} \Big ] ~, \label{omeGluOpQ} \\ \hat{G}^{ab}_{l,Q,\mu\nu}&=& \hat{\hspace*{-1mm}\hat{A}}_{lg,Q} \Bigl(\frac{\hat{m}^2}{\mu^2},\varepsilon,N\Bigr) \delta^{ab} (\Delta \cdot p)^N \Big [- g_{\mu\nu} +\frac{p_{\mu}\Delta_{\nu}+\Delta_{\mu}p_{\nu}} {\Delta \cdot p} \Big ] ~,~~ l=g,q~, \label{omeGluOpgq} \\ \hat{G}_{Q}^{ij} &=& \hat{\hspace*{-1mm}\hat{A}}_{Qq}^{\sf PS} \Bigl(\frac{\hat{m}^2}{\mu^2},\varepsilon,N\Bigr) \delta^{ij} (\Delta \cdot p)^{N-1} /\!\!\!\! \Delta ~,\label{omeQuaPS} \\ \hat{G}_{l,Q}^{ij,r} &=& \hat{\hspace*{-1mm}\hat{A}}_{lq,Q}^{r} \Bigl(\frac{\hat{m}^2}{\mu^2},\varepsilon,N\Bigr) \delta^{ij} (\Delta \cdot p)^{N-1} /\!\!\!\! \Delta ~, \quad l=g,q~,~ \quad r={\sf S,~NS,~PS}~. \label{omelqproj} \end{eqnarray} Here, we have denoted the Green's function with a hat to signify that the above equations are written on the unrenormalized level. In order to simplify the evaluation, it is useful to define projection operators which, applied to the Green's function, yield the corresponding OME. For outer gluons, one defines \begin{eqnarray} P^{(1)}_g \hat{G}^{ab}_{l,(Q),\mu\nu} &\equiv& - \frac{\delta_{ab}}{N_c^2-1} \frac{g^{\mu\nu}}{D-2} (\Delta\cdot p)^{-N} \hat{G}^{ab}_{l,(Q),\mu\nu} ~, \label{projG1} \\ P^{(2)}_g \hat{G}^{ab}_{l,(Q),\mu\nu} &\equiv& \frac{\delta_{ab} }{N_c^2-1} \frac{1}{D-2} (\Delta\cdot p)^{-N} \Bigl(-g^{\mu\nu} +\frac{p^{\mu}\Delta^{\nu}+p^{\nu}\Delta^{\mu}}{\Delta\cdot p} \Bigr)\hat{G}^{ab}_{l,(Q),\mu\nu} ~. \label{projG2} \end{eqnarray} The difference between the gluonic projectors, Eq. (\ref{projG1}) and Eq.~(\ref{projG2}), can be traced back to the fact that in the former case, the summation over indices $\mu,\nu$ includes unphysical transverse gluon states. 
These have to be compensated by adding diagrams with external ghost lines, which is not the case when using the physical projector in Eq. (\ref{projG2}). In the case of external quarks there is only one projector which reads \begin{eqnarray} P_q \hat{G}^{ij}_{l,(Q)} &\equiv& \frac{\delta^{ij}}{N_c} ( \Delta\cdot p)^{-N} \frac{1}{4} {\sf Tr}[~/\!\!\!\! p~\hat{G}^{ij}_{l,(Q)}]~. \label{projQ} \end{eqnarray} In Eqs. (\ref{projG1})--(\ref{projQ}), $N_c$ denotes the number of colors, cf. Appendix \ref{App-Con}. The unrenormalized OMEs are then obtained by \begin{eqnarray} \hat{\hspace*{-1mm}\hat{A}}_{lg}\Bigl(\frac{\hat{m}^2}{\mu^2},\varepsilon,N\Bigr) &=&P_g^{(1,2)}\hat{G}^{ab}_{l,(Q),\mu\nu}~, \label{proGa}\\ \hat{\hspace*{-1mm}\hat{A}}_{lq}\Bigl(\frac{\hat{m}^2}{\mu^2},\varepsilon,N\Bigr) &=&P_q \hat{G}^{ij}_{l,(Q)}~. \label{proQa} \end{eqnarray} The advantage of these projection operators is that one does not have to resort to complicated tensorial reduction. In perturbation theory, the expressions in Eqs. (\ref{proGa}, \ref{proQa}) can then be evaluated order by order in the coupling constant by applying the Feynman-rules given in Appendix~\ref{App-FeynRules}. \subsection{\bf\boldmath Renormalization of the Mass} \label{SubSec-RENMa} In a first step, we perform mass renormalization. There are two traditional schemes for mass renormalization: the on--shell--scheme and the $\overline{\sf MS}$--scheme. In the following, we will apply the on--shell--scheme, defining the renormalized mass $m$ as the pole of the quark propagator. The differences to the $\overline{\sf MS}$--scheme will be discussed in Section~\ref{Sec-REP}. The bare mass in Eq.~(\ref{pertome1}) is replaced by the renormalized on--shell mass $m$ via \begin{eqnarray} \hat{m}&=&Z_m m = m \Bigl[ 1 + \hat{a}_s \Bigl(\frac{m^2}{\mu^2}\Bigr)^{\varepsilon/2} \delta m_1 + \hat{a}_s^2 \Bigl(\frac{m^2}{\mu^2}\Bigr)^{\varepsilon} \delta m_2 \Bigr] + O(\hat{a}_s^3)~. 
\label{mren1} \end{eqnarray} The constants in the above equation are given by~\footnote{Note that there is a misprint in the double--pole term of Eq.~(28) in Ref.~\cite{Bierenbaum:2008yu}.} \begin{eqnarray} \delta m_1 &=&C_F \Bigl[\frac{6}{\varepsilon}-4+\Bigl(4+\frac{3}{4}\zeta_2\Bigr)\varepsilon \Bigr] \label{delm1} \\ &\equiv& \frac{\delta m_1^{(-1)}}{\varepsilon} +\delta m_1^{(0)} +\delta m_1^{(1)}\varepsilon~, \label{delm1exp} \\ \delta m_2 &=& C_F \Biggl\{\frac{1}{\varepsilon^2}\Bigl(18 C_F-22 C_A+8T_F(n_f+N_h) \Bigr) +\frac{1}{\varepsilon}\Bigl(-\frac{45}{2}C_F+\frac{91}{2}C_A \nonumber\\ && -14T_F(n_f+N_h)\Bigr) +C_F\Bigl(\frac{199}{8}-\frac{51}{2}\zeta_2+48\ln(2)\zeta_2 -12\zeta_3\Bigr) \nonumber\\ && +C_A\Bigl(-\frac{605}{8} +\frac{5}{2}\zeta_2-24\ln(2)\zeta_2+6\zeta_3\Bigr) \nonumber\\ && +T_F\Bigl[n_f\Bigl(\frac{45}{2}+10\zeta_2\Bigr)+N_h \Bigl(\frac{69}{2}-14\zeta_2\Bigr)\Bigr]\Biggr\} \label{delm2} \\ &\equiv& \frac{\delta m_2^{(-2)}}{\varepsilon^2} +\frac{\delta m_2^{(-1)}}{\varepsilon} +\delta m_2^{(0)}~. \label{delm2exp} \end{eqnarray} Eq.~(\ref{delm1}) is easily obtained. In Eq.~(\ref{delm2}), $n_f$ denotes the number of light flavors and $N_h$ the number of heavy flavors, which we will set equal to $N_h=1$ from now on. The pole contributions were given in Refs.~\cite{Tarrach:1980up,Nachtmann:1981zg}, and the constant term was derived in Refs.~\cite{Gray:1990yh,*Broadhurst:1991fy}, cf. also \cite{Fleischer:1998dw}. In Eqs. (\ref{delm1exp}, \ref{delm2exp}), we have defined the expansion coefficients in $\varepsilon$ of the corresponding quantities. 
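As a quick numerical cross-check of the Laurent expansions in Eqs. (\ref{delm1exp}) and (\ref{delm2exp}), the following sympy sketch (not part of the original derivation) inserts the QCD color factors $C_F=4/3$, $C_A=3$, $T_F=1/2$ and the illustrative choice $n_f=3$, $N_h=1$, and extracts the pole coefficients implied by Eqs. (\ref{delm1})--(\ref{delm2}).

```python
# Numerical check (sympy) of the Laurent coefficients of the on-shell
# mass-renormalization constants for C_F = 4/3, C_A = 3, T_F = 1/2,
# with n_f = 3 light and N_h = 1 heavy flavor (illustrative choice).
import sympy as sp

eps = sp.symbols('eps')
CF, CA, TF = sp.Rational(4, 3), sp.Integer(3), sp.Rational(1, 2)
nf, Nh = 3, 1
z2, z3 = sp.zeta(2), sp.zeta(3)

delta_m1 = CF*(6/eps - 4 + (4 + sp.Rational(3, 4)*z2)*eps)
delta_m2 = CF*((18*CF - 22*CA + 8*TF*(nf + Nh))/eps**2
               + (-sp.Rational(45, 2)*CF + sp.Rational(91, 2)*CA
                  - 14*TF*(nf + Nh))/eps
               + CF*(sp.Rational(199, 8) - sp.Rational(51, 2)*z2
                     + 48*sp.log(2)*z2 - 12*z3)
               + CA*(-sp.Rational(605, 8) + sp.Rational(5, 2)*z2
                     - 24*sp.log(2)*z2 + 6*z3)
               + TF*(nf*(sp.Rational(45, 2) + 10*z2)
                     + Nh*(sp.Rational(69, 2) - 14*z2)))

print(delta_m1.expand().coeff(eps, -1))   # delta m_1^(-1) = 6 C_F = 8
print(delta_m2.expand().coeff(eps, -2))   # delta m_2^(-2) = -104/3
print(delta_m2.expand().coeff(eps, -1))   # delta m_2^(-1) = 314/3
```

The pole coefficients come out as exact rationals, while the $\zeta$- and $\ln 2$-dependence resides entirely in the finite part, as the structure of Eqs. (\ref{delm1})--(\ref{delm2}) requires.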
After mass renormalization, the OMEs read up to $O(\hat{a}_s^3)$ \begin{eqnarray} \hat{\hspace*{-1mm}\hat{A}}_{ij}\Bigl(\frac{m^2}{\mu^2},\varepsilon,N\Bigr) &=&\delta_{ij}+ \hat{a}_s~ \hat{\hspace*{-1mm}\hat{A}}_{ij}^{(1)}\Bigl(\frac{m^2}{\mu^2},\varepsilon,N\Bigr) \nonumber\\ && \hspace{-25mm} + \hat{a}_s^2 \left[~ \hat{\hspace*{-1mm}\hat{A}}^{(2)}_{ij} \Bigl(\frac{m^2}{\mu^2},\varepsilon,N\Bigr) + \delta m_1 \Bigl(\frac{m^2}{\mu^2}\Bigr)^{\varepsilon/2} \frac{md}{dm}~ \hat{\hspace*{-1mm}\hat{A}}_{ij}^{(1)} \Bigl(\frac{m^2}{\mu^2},\varepsilon,N\Bigr) \right] \nonumber\\ && \hspace{-25mm} + \hat{a}_s^3 \Biggl[~ \hat{\hspace*{-1mm}\hat{A}}^{(3)}_{ij} \Bigl(\frac{m^2}{\mu^2},\varepsilon,N\Bigr) +\delta m_1 \Bigl(\frac{m^2}{\mu^2}\Bigr)^{\varepsilon/2} \frac{md}{dm}~ \hat{\hspace*{-1mm}\hat{A}}_{ij}^{(2)} \Bigl(\frac{m^2}{\mu^2},\varepsilon,N\Bigr) \nonumber\\ && \hspace{-15mm} + \delta m_2 \Bigl(\frac{m^2}{\mu^2}\Bigr)^{\varepsilon} \frac{md}{dm}~ \hat{\hspace*{-1mm}\hat{A}}_{ij}^{(1)} \Bigl(\frac{m^2}{\mu^2},\varepsilon,N\Bigr) + \frac{\delta m_1^2}{2} \Bigl(\frac{m^2}{\mu^2}\Bigr)^{\varepsilon} \frac{m^2d^2}{dm^2}~ \hat{\hspace*{-1mm}\hat{A}}_{ij}^{(1)} \Bigl(\frac{m^2}{\mu^2},\varepsilon,N\Bigr) \Biggr]~. \nonumber \\ \label{maren} \end{eqnarray} \subsection{\bf\boldmath Renormalization of the Coupling} \label{SubSec-RENCo} Next, we consider charge renormalization. At this point it becomes important to define in which scheme the strong coupling constant is renormalized, cf. Section~\ref{SubSec-HQAsym}. We briefly summarize the main steps in the massless case for $n_f$ flavors in the $\overline{\sf MS}$--scheme. 
The bare coupling constant $\hat{a}_s$ is expressed by the renormalized coupling $a_s^{\overline{{\sf MS}}}$ via \begin{eqnarray} \hat{a}_s &=& {Z_g^{\overline{{\sf MS}}}}^2(\varepsilon,n_f) a^{\overline{{\sf MS}}}_s(\mu^2) \nonumber\\ &=& a^{\overline{{\sf MS}}}_s(\mu^2)\left[ 1 + \delta a^{\overline{{\sf MS}}}_{s, 1}(n_f) a^{\overline{{\sf MS}}}_s(\mu^2) + \delta a^{\overline{{\sf MS}}}_{s, 2}(n_f) {a^{\overline{{\sf MS}}}_s}^2(\mu^2) \right] + O({a^{\overline{{\sf MS}}}_s}^3)~. \label{asrenMSb} \end{eqnarray} The coefficients in Eq.~(\ref{asrenMSb}) are, \cite{Khriplovich:1969aa,tHooft:unpub,Politzer:1973fx,Gross:1973id} and \cite{Caswell:1974gg,*Jones:1974mm}, \begin{eqnarray} \delta a^{\overline{{\sf MS}}}_{s, 1}(n_f) &=& \frac{2}{\varepsilon} \beta_0(n_f)~, \label{deltasMSb1} \\ \delta a^{\overline{{\sf MS}}}_{s, 2}(n_f) &=& \frac{4}{\varepsilon^2} \beta_0^2(n_f) + \frac{1}{\varepsilon} \beta_1(n_f)~, \label{deltasMSb2} \end{eqnarray} with \begin{eqnarray} \beta_0(n_f) &=& \frac{11}{3} C_A - \frac{4}{3} T_F n_f \label{beta0}~, \\ \beta_1(n_f) &=& \frac{34}{3} C_A^2 - 4 \left(\frac{5}{3} C_A + C_F\right) T_F n_f \label{beta1}~. \end{eqnarray} From the above equations, one can determine the $\beta$--function, Eq.~(\ref{betdef1}), which describes the running of the strong coupling constant and leads to asymptotic freedom in case of QCD, \cite{Politzer:1973fx,Gross:1973id}. It can be calculated using the fact that the bare strong coupling constant does not depend on the renormalization scale $\mu$. Using Eq.~(\ref{mudef}), one obtains \begin{eqnarray} 0&=&\frac{d\hat{a}_{s,D}}{d \ln \mu^2} =\frac{d}{d \ln \mu^2} \hat{a}_s \mu^{-\varepsilon} =\frac{d}{d \ln \mu^2} a_s(\mu^2)Z_g^2(\varepsilon,n_f,\mu^2) \mu^{-\varepsilon}~, \\ \Longrightarrow \beta &=& \frac{\varepsilon}{2}a_s(\mu^2) -2a_s(\mu^2) \frac{d}{d \ln \mu^2}\ln Z_g(\varepsilon,n_f,\mu^2)~. 
\label{betdef2} \end{eqnarray} Note that in Eq.~(\ref{betdef2}) we have not specified a scheme yet and kept a possible $\mu$--dependence for $Z_g$, which is not present in case of the $\overline{\sf MS}$--scheme. From (\ref{betdef2}), one can calculate the expansion coefficients of the $\beta$--function. Combining it with the result for $Z_g^{\overline{{\sf MS}}}$ in Eqs. (\ref{deltasMSb1}, \ref{deltasMSb2}), one obtains in the $\overline{\sf MS}$-scheme for $n_f$ light flavors, cf. \cite{Khriplovich:1969aa,Gross:1973id,Politzer:1973fx,tHooft:unpub,Caswell:1974gg,*Jones:1974mm}, \begin{eqnarray} \beta^{\overline{{\sf MS}}}(n_f)&=&-\beta_0(n_f){a^{\overline{{\sf MS}}}_s}^2-\beta_1(n_f){a^{\overline{{\sf MS}}}_s}^3 +O({a^{\overline{{\sf MS}}}_s}^4)~. \end{eqnarray} Additionally, it follows \begin{eqnarray} \frac{d a_s(\mu^2)}{d \ln(\mu^2)} &=& \frac{1}{2} \varepsilon a_s(\mu^2)-\sum_{k=0}^\infty \beta_k a_s^{k+2}(\mu^2)~. \label{runningas} \end{eqnarray} The factorization relation (\ref{CallFAC}) strictly requires that the external massless particles are on--shell. Massive loop corrections to the gluon propagator violate this condition, which has to be enforced by subtracting the corresponding corrections. These can be uniquely absorbed into the strong coupling constant by applying the background field method,~\cite{Abbott:1980hw,*Rebhan:1985yf,*Jegerlehner:1998zg}, to maintain the Slavnov-Taylor identities of QCD. We thus determine the coupling constant renormalization in the $\overline{\sf MS}$-scheme as far as the light flavors and the gluon are concerned. In addition, we make the choice that the heavy quark decouples in the running coupling constant $a_s(\mu^2)$ for $\mu^2 < m^2$ and thus from the renormalized OMEs. This implies the requirement that $\Pi_H(0, m^2) = 0$, where $\Pi_H(p^2, m^2)$ is the contribution to the gluon self-energy due to the heavy quark loops, \cite{Buza:1995ie}.
Since this condition introduces higher order terms in $\varepsilon$ into $Z_g$, we leave the $\overline{\sf MS}$--scheme. This new scheme is a ${\sf MOM}$--scheme. After mass renormalization in the on--shell--scheme via Eq.~(\ref{mren1}), we obtain for the heavy quark contributions to the gluon self--energy in the background field formalism \begin{eqnarray} \hat{\Pi}^{\mu\nu}_{H,ab,\mbox{\tiny{BF}}}(p^2,m^2,\mu^2,\varepsilon,\hat{a}_s)&=& i(-p^2g^{\mu\nu}+p^{\mu}p^{\nu})\delta_{ab} \hat{\Pi}_{H,\mbox{\tiny{BF}}}(p^2,m^2,\mu^2,\varepsilon,\hat{a}_s)~, \nonumber\\ \hat{\Pi}_{H,\mbox{\tiny{BF}}}(0,m^2,\mu^2,\varepsilon,\hat{a}_s)&=& \hat{a}_s \frac{2\beta_{0,Q}}{\varepsilon} \Bigl(\frac{m^2}{\mu^2}\Bigr)^{\varepsilon/2} \exp \Bigl(\sum_{i=2}^{\infty}\frac{\zeta_i}{i} \Bigl(\frac{\varepsilon}{2}\Bigr)^{i}\Bigr) \nonumber\\ && \hspace{-18mm} +\hat{a}_s^2 \Bigl(\frac{m^2}{\mu^2}\Bigr)^{\varepsilon} \Biggl[ \frac{1}{\varepsilon}\Bigl( -\frac{20}{3}T_FC_A -4T_FC_F \Bigr) -\frac{32}{9}T_FC_A +15T_FC_F \nonumber\\ && \hspace{-18mm} +\varepsilon \Bigl( -\frac{86}{27}T_FC_A -\frac{31}{4}T_FC_F -\frac{5}{3}\zeta_2T_FC_A -\zeta_2T_FC_F \Bigr) \Biggr]~, \label{GluSelfBack} \end{eqnarray} with \begin{eqnarray} \beta_{0,Q} &=&\hat{\beta}_0(n_f)=-\frac{4}{3}T_F~. \label{b0Q} \end{eqnarray} Note that Eq.~(\ref{GluSelfBack}) holds only up to order $O(\varepsilon)$, although we have partially included higher orders in $\varepsilon$ in order to keep the expressions shorter. We have used the Feynman--rules of the background field formalism as given in Ref.~\cite{Yndurain:1999ui}. In the following, we define \begin{eqnarray} f(\varepsilon)&\equiv& \Bigl(\frac{m^2}{\mu^2}\Bigr)^{\varepsilon/2} \exp \Bigl(\sum_{i=2}^{\infty}\frac{\zeta_i}{i} \Bigl(\frac{\varepsilon}{2}\Bigr)^{i}\Bigr)~. \label{fep} \end{eqnarray} The renormalization constant of the background field $Z_A$ is related to $Z_g$ via \begin{eqnarray} Z_A=Z_g^{-2}~.
\label{ZAZg} \end{eqnarray} The light flavor contributions to $Z_A$, $Z_{A,l}$, can thus be determined by combining Eqs.~(\ref{deltasMSb1},~\ref{deltasMSb2},~\ref{ZAZg}). The heavy flavor part follows from the condition \begin{eqnarray} \Pi_{H,\mbox{\tiny{BF}}}(0,m^2)+Z_{A,H}\equiv 0~, \label{ZAcond} \end{eqnarray} which ensures that the on--shell gluon remains strictly massless. Thus we define the new renormalization constant of the strong coupling with $n_f$ light and one heavy flavor as \begin{eqnarray} Z^{\tiny{\mbox{MOM}}}_g(\varepsilon,n_f+1,\mu^2,m^2) \equiv \frac{1}{(Z_{A,l}+Z_{A,H})^{1/2}}~ \label{Zgnfp1} \end{eqnarray} and obtain \begin{eqnarray} {Z_g^{\tiny{\mbox{MOM}}}}^2(\varepsilon,n_f+1,\mu^2,m^2)&=& 1+a^{\tiny{\mbox{MOM}}}_s(\mu^2) \Bigl[ \frac{2}{\varepsilon} (\beta_0(n_f)+\beta_{0,Q}f(\varepsilon)) \Bigr] \nonumber\\ && \hspace{-18mm} +{a^{\tiny{\mbox{MOM}}}_s}^2(\mu^2) \Bigl[ \frac{\beta_1(n_f)}{\varepsilon} +\frac{4}{\varepsilon^2} (\beta_0(n_f)+\beta_{0,Q}f(\varepsilon))^2 \nonumber\\ && \hspace{-18mm} +\frac{1}{\varepsilon}\Bigl(\frac{m^2}{\mu^2}\Bigr)^{\varepsilon} \Bigl(\beta_{1,Q}+\varepsilon\beta_{1,Q}^{(1)} +\varepsilon^2\beta_{1,Q}^{(2)} \Bigr) \Bigr]+O({a^{\tiny{\mbox{MOM}}}_s}^3)~, \label{Zgheavy2} \end{eqnarray} with \begin{eqnarray} \beta_{1,Q} &=&\hat{\beta}_1(n_f)= - 4 \left(\frac{5}{3} C_A + C_F \right) T_F~, \label{b1Q} \\ \beta_{1,Q}^{(1)}&=& -\frac{32}{9}T_FC_A +15T_FC_F~, \label{b1Q1} \\ \beta_{1,Q}^{(2)}&=& -\frac{86}{27}T_FC_A -\frac{31}{4}T_FC_F -\zeta_2\left(\frac{5}{3}T_FC_A +T_FC_F\right)~.
\label{b1Q2} \end{eqnarray} The coefficients corresponding to Eq.~(\ref{asrenMSb}) then read in the ${\sf MOM}$--scheme \begin{eqnarray} \delta a_{s,1}^{\tiny{\mbox{MOM}}}&=&\Bigl[\frac{2\beta_0(n_f)}{\varepsilon} +\frac{2\beta_{0,Q}}{\varepsilon}f(\varepsilon) \Bigr]~,\label{dela1} \\ \delta a_{s,2}^{\tiny{\mbox{MOM}}}&=&\Bigl[\frac{\beta_1(n_f)}{\varepsilon}+ \Bigl(\frac{2\beta_0(n_f)}{\varepsilon} +\frac{2\beta_{0,Q}}{\varepsilon}f(\varepsilon)\Bigr)^2 \nonumber\\ && +\frac{1}{\varepsilon}\Bigl(\frac{m^2}{\mu^2}\Bigr)^{\varepsilon} \Bigl(\beta_{1,Q}+\varepsilon\beta_{1,Q}^{(1)} +\varepsilon^2\beta_{1,Q}^{(2)} \Bigr)\Bigr] +O(\varepsilon^2)~.\label{dela2} \end{eqnarray} Since the $\overline{\sf MS}$--scheme is commonly used, we transform our results back from the {\sf MOM}--description into the $\overline{\sf MS}$--scheme, in order to be able to compare to other analyses. This is achieved by observing that the bare coupling does not change under this transformation and one obtains the condition \begin{eqnarray} {Z_g^{\overline{{\sf MS}}}}^2(\varepsilon,n_f+1) a^{\overline{{\sf MS}}}_s(\mu^2) = {Z_g^{\tiny{\mbox{MOM}}}}^2(\varepsilon,n_f+1,\mu^2,m^2) a^{\tiny{\mbox{MOM}}}_s(\mu^2) \label{condas1}~.
\end{eqnarray} The following relations hold~: \begin{eqnarray} a_s^{\tiny{\mbox{MOM}}}&=& a_s^{\overline{{\sf MS}}} -\beta_{0,Q}\ln \Bigl(\frac{m^2}{\mu^2}\Bigr) {a_s^{\overline{{\sf MS}}}}^2 \nonumber \\ && +\Biggl[ \beta^2_{0,Q}\ln^2 \Bigl(\frac{m^2}{\mu^2}\Bigr) -\beta_{1,Q}\ln \Bigl(\frac{m^2}{\mu^2}\Bigr) -\beta_{1,Q}^{(1)} \Biggr] {a_s^{\overline{{\sf MS}}}}^3 +O({a_s^{\overline{{\sf MS}}}}^4)~, \label{asmoma} \end{eqnarray} or, \begin{eqnarray} a_s^{\overline{{\sf MS}}}&=& a_s^{\tiny{\mbox{MOM}}} +{a_s^{\tiny{\mbox{MOM}}}}^2\Biggl( \delta a^{\tiny{\mbox{MOM}}}_{s, 1} -\delta a^{\overline{{\sf MS}}}_{s, 1}(n_f+1) \Biggr) +{a_s^{\tiny{\mbox{MOM}}}}^{3}\Biggl( \delta a^{\tiny{\mbox{MOM}}}_{s, 2} -\delta a^{\overline{{\sf MS}}}_{s, 2}(n_f+1) \nonumber\\ && -2\delta a^{\overline{{\sf MS}}}_{s, 1}(n_f+1)\Bigl[ \delta a^{\tiny{\mbox{MOM}}}_{s, 1} -\delta a^{\overline{{\sf MS}}}_{s, 1}(n_f+1) \Bigr] \Biggr)+O({a_s^{\tiny{\mbox{MOM}}}}^4)~, \label{asmsa} \end{eqnarray} vice versa. Eq.~(\ref{asmsa}) is valid to all orders in $\varepsilon$. Here, $a_s^{\sf \overline{{\sf MS}}} = a_s^{\sf \overline{{\sf MS}}}(n_f + 1)$. 
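The mutual consistency of Eqs.~(\ref{asmoma}) and (\ref{asmsa}) can be verified by symbolic series inversion. A minimal sympy sketch, with the hypothetical symbol names $b0Q$, $b1Q$, $b1Q1$ for $\beta_{0,Q}$, $\beta_{1,Q}$, $\beta_{1,Q}^{(1)}$ and $L=\ln(m^2/\mu^2)$:

```python
import sympy as sp

a, L, b0Q, b1Q, b1Q1 = sp.symbols('a L b0Q b1Q b1Q1')

# Eq. (asmoma): a_s^MOM as a series in a = a_s^MSbar
aMOM = a - b0Q*L*a**2 + (b0Q**2*L**2 - b1Q*L - b1Q1)*a**3

# invert with the ansatz a_s^MSbar = x + c2*x^2 + c3*x^3, x = a_s^MOM,
# by demanding that the composition equals x order by order
x, c2, c3 = sp.symbols('x c2 c3')
comp = sp.expand(aMOM.subs(a, x + c2*x**2 + c3*x**3))
poly = sp.Poly(comp - x, x)
sol = sp.solve([poly.coeff_monomial(x**2), poly.coeff_monomial(x**3)], [c2, c3])

# the inverse series a_s^MSbar(a_s^MOM) through O(x^3)
print(sp.expand(sol[c2]), '|', sp.expand(sol[c3]))
```

The resulting coefficients, $c_2=\beta_{0,Q}L$ and $c_3=\beta_{0,Q}^2L^2+\beta_{1,Q}L+\beta_{1,Q}^{(1)}$, indeed invert Eq.~(\ref{asmoma}) through this order.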
Applying the on--shell--scheme for mass renormalization and the described {\sf MOM}--scheme for the renormalization of the coupling, one obtains as general formula for mass and coupling constant renormalization up to $O({a^{\tiny{\mbox{MOM}}}_s}^3)$ \begin{eqnarray} {\hat{A}}_{ij} &=& \delta_{ij} + a^{\tiny{\mbox{MOM}}}_s~\hat{\hspace*{-1mm}\hat{A}}_{ij}^{(1)} + {a^{\tiny{\mbox{MOM}}}_s}^2 \left[~\hat{\hspace*{-1mm}\hat{A}}^{(2)}_{ij} + \delta m_1 \Bigl(\frac{m^2}{\mu^2}\Bigr)^{\varepsilon/2} m \frac{d}{dm}~\hat{\hspace*{-1mm}\hat{A}}_{ij}^{(1)} + \delta a^{\tiny{\mbox{MOM}}}_{s,1}~\hat{\hspace*{-1mm}\hat{A}}_{ij}^{(1)}\right] \nonumber\\ && + {a^{\tiny{\mbox{MOM}}}_s}^3 \Biggl[~\hat{\hspace*{-1mm}\hat{A}}^{(3)}_{ij} + \delta a^{\tiny{\mbox{MOM}}}_{s,2}~\hat{\hspace*{-1mm}\hat{A}}_{ij}^{(1)} + 2 \delta a^{\tiny{\mbox{MOM}}}_{s,1} \left(~\hat{\hspace*{-1mm}\hat{A}}^{(2)}_{ij} + \delta m_1 \Bigl(\frac{m^2}{\mu^2}\Bigr)^{\varepsilon/2} m \frac{d}{dm}~\hat{\hspace*{-1mm}\hat{A}}_{ij}^{(1)} \right) \nonumber\\ && + \delta m_1 \Bigl(\frac{m^2}{\mu^2}\Bigr)^{\varepsilon/2} m \frac{d}{dm}~\hat{\hspace*{-1mm}\hat{A}}_{ij}^{(2)} + \delta m_2 \Bigl(\frac{m^2}{\mu^2}\Bigr)^{\varepsilon} m \frac{d}{dm}~\hat{\hspace*{-1mm}\hat{A}}_{ij}^{(1)} \nonumber\\ && + \frac{\delta m_1^2}{2} \Bigl(\frac{m^2}{\mu^2}\Bigr)^{\varepsilon} m^2 \frac{d^2}{{dm}^2}~\hat{\hspace*{-1mm}\hat{A}}_{ij}^{(1)} \Biggr]~,\label{macoren} \end{eqnarray} where we have suppressed the dependence on $m,~\varepsilon$ and $N$ in the arguments~\footnote{Here we corrected a typographical error in \cite{Bierenbaum:2008yu}, Eq.~(48).}. \subsection{\bf\boldmath Operator Renormalization} \label{SubSec-RENOp} The renormalization of the UV singularities of the composite operators is being performed introducing the corresponding $Z_{ij}$-factors, which have been defined in Eqs. (\ref{ZNSdef}, \ref{ZSijdef}). 
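The coefficient pattern of Eq.~(\ref{macoren}) is a Taylor expansion of the bare series $\hat{A}_{ij}=\sum_k \hat{a}_s^k\,\hat{\hspace*{-1mm}\hat{A}}_{ij}^{(k)}(\hat{m})$ under the substitutions $\hat{a}_s=a_s(1+\delta a_{s,1}a_s+\delta a_{s,2}a_s^2)$ and, assuming the mass counterterm is organized in powers of $\hat{a}_s$, $\hat{m}=m(1+\delta m_1\hat{a}_s+\delta m_2\hat{a}_s^2)$. A sympy sketch verifying the $O(a_s^2)$ and $O(a_s^3)$ coefficients, with the $(m^2/\mu^2)^{\varepsilon/2}$ factors accompanying $\delta m_k$ suppressed and our own placeholder symbols (A1p, A1pp, ... for the $m$-derivatives):

```python
import sympy as sp

a, m = sp.symbols('a m')
dm1, dm2, da1, da2 = sp.symbols('dm1 dm2 da1 da2')
# values and m-derivatives of the k-loop OMEs at the renormalized mass
A1, A1p, A1pp, A2, A2p, A3 = sp.symbols('A1 A1p A1pp A2 A2p A3')

ahat = a*(1 + da1*a + da2*a**2)        # coupling renormalization
mhat = m*(1 + dm1*ahat + dm2*ahat**2)  # mass renormalization
dm = mhat - m

# Taylor expansion of the bare OME coefficients around m
A1h = A1 + dm*A1p + dm**2/2*A1pp
A2h = A2 + dm*A2p
Ahat = sp.expand(ahat*A1h + ahat**2*A2h + ahat**3*A3)

# coefficients claimed in Eq. (macoren), with m d/dm A^(1) -> m*A1p etc.
pred2 = A2 + dm1*m*A1p + da1*A1
pred3 = (A3 + da2*A1 + 2*da1*(A2 + dm1*m*A1p)
         + dm1*m*A2p + dm2*m*A1p + sp.Rational(1, 2)*dm1**2*m**2*A1pp)

assert sp.expand(sp.Poly(Ahat, a).coeff_monomial(a**2) - pred2) == 0
assert sp.expand(sp.Poly(Ahat, a).coeff_monomial(a**3) - pred3) == 0
```

In particular, the relative factor $2$ in the $\delta a_{s,1}\delta m_1$ cross term of Eq.~(\ref{macoren}) emerges automatically from the expansion.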
We first consider only $n_f$ massless flavors, cf.~\cite{Matiounine:1998ky}, and subsequently include one heavy quark. In the former case, renormalization proceeds in the $\overline{\sf MS}$--scheme via \begin{eqnarray} A_{qq}^{\sf NS}\Bigl(\frac{-p^2}{\mu^2},a_s^{\overline{{\sf MS}}},n_f,N\Bigr) &=&Z^{-1,{\sf NS}}_{qq}(a_s^{\overline{{\sf MS}}},n_f,\varepsilon,N) \hat{A}_{qq}^{\sf NS}\Bigl(\frac{-p^2}{\mu^2},a_s^{\overline{{\sf MS}}},n_f ,\varepsilon,N\Bigr) \label{renAqqnf}~, \\ A_{ij}\Bigl(\frac{-p^2}{\mu^2},a_s^{\overline{{\sf MS}}},n_f,N\Bigr) &=&Z^{-1}_{il}(a_s^{\overline{{\sf MS}}},n_f,\varepsilon,N) \hat{A}_{lj}\Bigl(\frac{-p^2}{\mu^2},a_s^{\overline{{\sf MS}}},n_f,\varepsilon,N\Bigr) ~,~i,j=q,g, \nonumber\\ \label{renAijnf} \end{eqnarray} with $p$ a space--like momentum. As is well known, operator mixing occurs in the singlet case, Eq.~(\ref{renAijnf}). As mentioned before, we neglect all terms associated with EOM and NGI parts, since they do not contribute to the renormalization of the massive on--shell operator matrix elements. The ${\sf NS}$ and ${\sf PS}$ contributions are separated via \begin{eqnarray} Z_{qq}^{-1}&=&Z_{qq}^{-1, {\sf PS}}+Z_{qq}^{-1, {\sf NS}}~,\\ A_{qq} &=&A_{qq}^{\sf PS}+A_{qq}^{\sf NS} \label{ZPSNS}~. \end{eqnarray} The anomalous dimensions $\gamma_{ij}$ of the operators are defined in Eqs. (\ref{gammazetNS}, \ref{gammazetS}) and can be expanded in a perturbative series as follows \begin{eqnarray} \gamma_{ij}^{{\sf S,PS,NS}}(a_s^{\overline{{\sf MS}}},n_f,N) &=&\sum_{l=1}^{\infty}{a^{\overline{{\sf MS}}}_s}^l \gamma_{ij}^{(l), {\sf S,PS,NS}}(n_f,N)~. \label{pertgamma} \end{eqnarray} Here, the ${\sf PS}$ contribution starts at $O(a_s^2)$. In the following, we do not write the dependence on the Mellin--variable $N$ for the OMEs, the operator $Z$--factors and the anomalous dimensions explicitly. Further, we will suppress the dependence on $\varepsilon$ for unrenormalized quantities and $Z$--factors. From Eqs.
(\ref{gammazetNS}, \ref{gammazetS}), one can determine the relation between the anomalous dimensions and the $Z$--factors order by order in perturbation theory. In the general case, one finds up to $O({a_s^{\overline{{\sf MS}}}}^3)$ \begin{eqnarray} Z_{ij}(a^{\overline{{\sf MS}}}_s,n_f) &=& \delta_{ij} +a^{\overline{{\sf MS}}}_s \frac{\gamma_{ij}^{(0)}}{\varepsilon} +{a^{\overline{{\sf MS}}}_s}^2 \Biggl\{ \frac{1}{\varepsilon^2} \Bigl( \frac{1}{2} \gamma_{il}^{(0)} \gamma_{lj}^{(0)} + \beta_0 \gamma_{ij}^{(0)} \Bigr) + \frac{1}{2 \varepsilon} \gamma_{ij}^{(1)} \Biggr\} \nonumber \\ && + {a^{\overline{{\sf MS}}}_s}^3 \Biggl\{ \frac{1}{\varepsilon^3} \Bigl( \frac{1}{6}\gamma_{il}^{(0)} \gamma_{lk}^{(0)} \gamma_{kj}^{(0)} + \beta_0 \gamma_{il}^{(0)} \gamma_{lj}^{(0)} + \frac{4}{3} \beta_0^2 \gamma_{ij}^{(0)} \Bigr) \nonumber\\ && + \frac{1}{\varepsilon^2} \Bigl( \frac{1}{6} \gamma_{il}^{(1)} \gamma_{lj}^{(0)} + \frac{1}{3} \gamma_{il}^{(0)} \gamma_{lj}^{(1)} + \frac{2}{3} \beta_0 \gamma_{ij}^{(1)} + \frac{2}{3} \beta_1 \gamma_{ij}^{(0)} \Bigr) + \frac{\gamma_{ij}^{(2)}}{3 \varepsilon} \Biggr\}~. 
\label{Zijnf} \end{eqnarray} The ${\sf NS}$ and ${\sf PS}$ $Z$--factors are given by~\footnote{In Eq.~(\ref{ZqqPSnf}) we corrected typographical errors contained in Eq.~(34), \cite{Bierenbaum:2008yu}.} \begin{eqnarray} Z_{qq}^{\sf NS}(a^{\overline{{\sf MS}}}_s,n_f) &=& 1 +a^{\overline{{\sf MS}}}_s \frac{\gamma_{qq}^{(0),{\sf NS}}}{\varepsilon} +{a^{\overline{{\sf MS}}}_s}^2 \Biggl\{ \frac{1}{\varepsilon^2} \Bigl( \frac{1}{2}{\gamma_{qq}^{(0),{\sf NS}}}^2 + \beta_0 \gamma_{qq}^{(0),{\sf NS}} \Bigr) + \frac{1}{2 \varepsilon} \gamma_{qq}^{(1),{\sf NS}} \Biggr\} \nonumber\\ && +{a^{\overline{{\sf MS}}}_s}^3 \Biggl\{ \frac{1}{\varepsilon^3} \Bigl( \frac{1}{6} {\gamma_{qq}^{(0),{\sf NS}}}^3 + \beta_0 {\gamma_{qq}^{(0),{\sf NS}}}^2 + \frac{4}{3} \beta_0^2 \gamma_{qq}^{(0),{\sf NS}} \Bigr) \nonumber\\ && + \frac{1}{\varepsilon^2} \Bigl( \frac{1}{2} \gamma_{qq}^{(0),{\sf NS}} \gamma_{qq}^{(1),{\sf NS}} +\frac{2}{3} \beta_0 \gamma_{qq}^{(1),{\sf NS}} +\frac{2}{3} \beta_1 \gamma_{qq}^{(0),{\sf NS}} \Bigr) + \frac{1}{3 \varepsilon} \gamma_{qq}^{(2), {\sf NS}} \Biggr\}~, \nonumber \\ \label{ZqqNSnf}\\ Z_{qq}^{\sf PS}(a^{\overline{{\sf MS}}}_s,n_f) &=& {a^{\overline{{\sf MS}}}_s}^2 \Biggl\{ \frac{1}{2\varepsilon^2} \gamma_{qg}^{(0)} \gamma_{gq}^{(0)} + \frac{1}{2\varepsilon} \gamma_{qq}^{(1), {\sf PS}} \Biggr\} +{a^{\overline{{\sf MS}}}_s}^3 \Biggl\{ \frac{1}{\varepsilon^3} \Bigl( \frac{1}{3}\gamma_{qq}^{(0)} \gamma_{qg}^{(0)} \gamma_{gq}^{(0)} \nonumber\\ && +\frac{1}{6}\gamma_{qg}^{(0)} \gamma_{gg}^{(0)} \gamma_{gq}^{(0)} +\beta_0 \gamma_{qg}^{(0)} \gamma_{gq}^{(0)} \Bigr) + \frac{1}{\varepsilon^2} \Bigl( \frac{1}{3}\gamma_{qg}^{(0)} \gamma_{gq}^{(1)} \nonumber\\ && +\frac{1}{6}\gamma_{qg}^{(1)} \gamma_{gq}^{(0)} +\frac{1}{2} \gamma_{qq}^{(0)} \gamma_{qq}^{(1), {\sf PS}} +\frac{2}{3} \beta_0 \gamma_{qq}^{(1), {\sf PS}} \Bigr) +\frac{\gamma_{qq}^{(2), {\sf PS}}}{3\varepsilon} \Biggr\}~. \label{ZqqPSnf} \end{eqnarray} All quantities in Eqs. 
(\ref{Zijnf})--(\ref{ZqqPSnf}) refer to $n_f$ light flavors and renormalize the massless off--shell OMEs given in Eqs. (\ref{renAqqnf}, \ref{renAijnf}). In the next step, we consider an additional heavy quark with mass $m$. We keep the external momentum artificially off--shell for the moment, in order to deal with the UV--singularities only. For the additional massive quark, one has to account for the renormalization of the coupling constant we defined in Eqs.~(\ref{dela1}, \ref{dela2}). The $Z$--factors including one massive quark are then obtained by taking Eqs. (\ref{Zijnf})--(\ref{ZqqPSnf}) at $(n_f+1)$ flavors and performing the scheme transformation given in (\ref{asmsa}). The emergence of $\delta a_{s,k}^{\sf MOM}$ in $Z_{ij}$ is due to the finite mass effects and cancels singularities which emerge for real radiation and virtual processes at $p^2 \rightarrow 0$. Thus one obtains up to $O({a_s^{\tiny{\mbox{MOM}}}}^3)$ \begin{eqnarray} Z_{ij}^{-1}(a_s^{\tiny{\mbox{MOM}}},n_f+1,\mu^2)&=& \delta_{ij} -a_s^{\tiny{\mbox{MOM}}}\frac{\gamma_{ij}^{(0)}}{\varepsilon} +{a^{\tiny{\mbox{MOM}}}_s}^2\Biggl[ \frac{1}{\varepsilon}\Bigl( -\frac{1}{2}\gamma_{ij}^{(1)} -\delta a^{\tiny{\mbox{MOM}}}_{s,1}\gamma_{ij}^{(0)} \Bigr) \nonumber\\ && +\frac{1}{\varepsilon^2}\Bigl( \frac{1}{2}\gamma_{il}^{(0)}\gamma_{lj}^{(0)} +\beta_0\gamma_{ij}^{(0)} \Bigr) \Biggr] +{a^{\tiny{\mbox{MOM}}}_s}^3\Biggl[ \frac{1}{\varepsilon}\Bigl( -\frac{1}{3}\gamma_{ij}^{(2)} -\delta a^{\tiny{\mbox{MOM}}}_{s,1}\gamma_{ij}^{(1)} \nonumber\\ && -\delta a^{\tiny{\mbox{MOM}}}_{s,2}\gamma_{ij}^{(0)} \Bigr) +\frac{1}{\varepsilon^2}\Bigl( \frac{4}{3}\beta_0\gamma_{ij}^{(1)} +2\delta a^{\tiny{\mbox{MOM}}}_{s,1}\beta_0\gamma_{ij}^{(0)} +\frac{1}{3}\beta_1\gamma_{ij}^{(0)} \nonumber\\ && +\delta a^{\tiny{\mbox{MOM}}}_{s,1}\gamma_{il}^{(0)}\gamma_{lj}^{(0)} +\frac{1}{3}\gamma_{il}^{(1)}\gamma_{lj}^{(0)} +\frac{1}{6}\gamma_{il}^{(0)}\gamma_{lj}^{(1)} \Bigr) +\frac{1}{\varepsilon^3}\Bigl( 
-\frac{4}{3}\beta_0^{2}\gamma_{ij}^{(0)} \nonumber\\ && -\beta_0\gamma_{il}^{(0)}\gamma_{lj}^{(0)} -\frac{1}{6}\gamma_{il}^{(0)}\gamma_{lk}^{(0)} \gamma_{kj}^{(0)} \Bigr) \Biggr]~, \label{ZijInfp1} \end{eqnarray} and \begin{eqnarray} Z_{qq}^{-1,{\sf NS}}(a_s^{\tiny{\mbox{MOM}}},n_f+1,\mu^2)&=& 1 -a^{\tiny{\mbox{MOM}}}_s\frac{\gamma_{qq}^{(0),{\sf NS}}}{\varepsilon} +{a^{\tiny{\mbox{MOM}}}_s}^2\Biggl[ \frac{1}{\varepsilon}\Bigl( -\frac{1}{2}\gamma_{qq}^{(1),{\sf NS}} -\delta a^{\tiny{\mbox{MOM}}}_{s,1}\gamma_{qq}^{(0),{\sf NS}} \Bigr) \nonumber\\ && +\frac{1}{\varepsilon^2}\Bigl( \beta_0\gamma_{qq}^{(0),{\sf NS}} +\frac{1}{2}{\gamma_{qq}^{(0),{\sf NS}}}^{2} \Bigr) \Biggr] +{a^{\tiny{\mbox{MOM}}}_s}^3\Biggl[ \frac{1}{\varepsilon}\Bigl( -\frac{1}{3}\gamma_{qq}^{(2),{\sf NS}} \nonumber\\ && -\delta a^{\tiny{\mbox{MOM}}}_{s,1}\gamma_{qq}^{(1),{\sf NS}} -\delta a^{\tiny{\mbox{MOM}}}_{s,2}\gamma_{qq}^{(0),{\sf NS}} \Bigr) +\frac{1}{\varepsilon^2}\Bigl( \frac{4}{3}\beta_0\gamma_{qq}^{(1),{\sf NS}} \nonumber\\ && +2\delta a^{\tiny{\mbox{MOM}}}_{s,1}\beta_0\gamma_{qq}^{(0),{\sf NS}} +\frac{1}{3}\beta_1\gamma_{qq}^{(0),{\sf NS}} +\frac{1}{2}\gamma_{qq}^{(0),{\sf NS}} \gamma_{qq}^{(1),{\sf NS}} \nonumber\\ && +\delta a^{\tiny{\mbox{MOM}}}_{s,1}{\gamma_{qq}^{(0),{\sf NS}}}^{2} \Bigr) +\frac{1}{\varepsilon^3}\Bigl( -\frac{4}{3}\beta_0^{2}\gamma_{qq}^{(0),{\sf NS}} \nonumber\\ && -\beta_0{\gamma_{qq}^{(0),{\sf NS}}}^{2} -\frac{1}{6}{\gamma_{qq}^{(0),{\sf NS}}}^{3} \Bigr) \Biggr]~, \label{ZNSInfp1} \\ Z_{qq}^{-1,{\sf PS}}(a_s^{\tiny{\mbox{MOM}}},n_f+1,\mu^2)&=& {a^{\tiny{\mbox{MOM}}}_s}^2\Biggl[ \frac{1}{\varepsilon}\Bigl( -\frac{1}{2}\gamma_{qq}^{(1), {\sf PS}} \Bigr) +\frac{1}{\varepsilon^2}\Bigl( \frac{1}{2}\gamma_{qg}^{(0)}\gamma_{gq}^{(0)} \Bigr) \Biggr] \nonumber\\ && +{a^{\tiny{\mbox{MOM}}}_s}^3\Biggl[ \frac{1}{\varepsilon}\Bigl( -\frac{1}{3}\gamma_{qq}^{(2), {\sf PS}} -\delta a^{\tiny{\mbox{MOM}}}_{s,1}\gamma_{qq}^{(1), {\sf PS}} \Bigr) +\frac{1}{\varepsilon^2}\Bigl( 
\frac{1}{6}\gamma_{qg}^{(0)}\gamma_{gq}^{(1)} \nonumber\\ && +\frac{1}{3}\gamma_{gq}^{(0)}\gamma_{qg}^{(1)} +\frac{1}{2}\gamma_{qq}^{(0)}\gamma_{qq}^{(1), {\sf PS}} +\frac{4}{3}\beta_0\gamma_{qq}^{(1), {\sf PS}} +\delta a^{\tiny{\mbox{MOM}}}_{s,1}\gamma_{qg}^{(0)}\gamma_{gq}^{(0)} \Bigr) \nonumber\\ && +\frac{1}{\varepsilon^3}\Bigl( -\frac{1}{3}\gamma_{qg}^{(0)}\gamma_{gq}^{(0)} \gamma_{qq}^{(0)} -\frac{1}{6}\gamma_{gq}^{(0)}\gamma_{qg}^{(0)} \gamma_{gg}^{(0)} -\beta_0\gamma_{qg}^{(0)}\gamma_{gq}^{(0)} \Bigr) \Biggr]~.\nonumber\\ \label{ZPSInfp1} \end{eqnarray} The above equations are given for $n_f+1$ flavors. One re-derives the expressions for $n_f$ light flavors by setting $(n_f+1) =: n_f$ and $\delta a^{\tiny{\mbox{MOM}}}_s=\delta a^{\overline{{\sf MS}}}_s$. As a next step, we split the OMEs into a part involving only light flavors and the heavy flavor part \begin{eqnarray} {\hat{A}}_{ij}(p^2,m^2,\mu^2,a_s^{\tiny{\mbox{MOM}}},n_f+1)&=& {\hat{A}}_{ij}\Bigl(\frac{-p^2}{\mu^2},a_s^{\overline{{\sf MS}}},n_f\Bigr) \nonumber\\ && + {\hat{A}}^Q_{ij}(p^2,m^2,\mu^2,a_s^{\tiny{\mbox{MOM}}},n_f+1)~. \label{splitNSHL1} \end{eqnarray} In (\ref{splitNSHL1}, \ref{eqXX}), the light flavor part depends on $a_s^{\overline{{\sf MS}}}$, since the prescription adopted for coupling constant renormalization only applies to the massive part. ${\hat{A}}^Q_{ij}$ denotes any massive OME we consider. 
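In the massless limit, $\delta a_{s,k}^{\tiny\mbox{MOM}}\to\delta a_{s,k}^{\overline{\sf MS}}(n_f+1)$, Eq.~(\ref{ZNSInfp1}) must reduce to the series inverse of Eq.~(\ref{ZqqNSnf}) taken at $n_f+1$ flavors. A sympy sketch of this check, with all quantities understood at $n_f+1$ flavors and our own shorthand symbols ($g0$, $g1$, $g2$ for the ${\sf NS}$ anomalous dimensions):

```python
import sympy as sp

a, eps, g0, g1, g2, b0, b1 = sp.symbols('a eps g0 g1 g2 b0 b1')

# MSbar coupling counterterms (massless limit of dela1/dela2)
da1 = 2*b0/eps
da2 = b1/eps + 4*b0**2/eps**2

# Z_qq^NS, Eq. (ZqqNSnf)
Z = (1 + a*g0/eps
     + a**2*((g0**2/2 + b0*g0)/eps**2 + g1/(2*eps))
     + a**3*((g0**3/6 + b0*g0**2 + sp.Rational(4, 3)*b0**2*g0)/eps**3
             + (g0*g1/2 + sp.Rational(2, 3)*b0*g1
                + sp.Rational(2, 3)*b1*g0)/eps**2
             + g2/(3*eps)))

# Z_qq^{-1,NS}, Eq. (ZNSInfp1), with the massless counterterms inserted
Zinv = (1 - a*g0/eps
        + a**2*((-g1/2 - da1*g0)/eps + (b0*g0 + g0**2/2)/eps**2)
        + a**3*((-g2/3 - da1*g1 - da2*g0)/eps
                + (sp.Rational(4, 3)*b0*g1 + 2*da1*b0*g0 + b1*g0/3
                   + g0*g1/2 + da1*g0**2)/eps**2
                + (-sp.Rational(4, 3)*b0**2*g0 - b0*g0**2 - g0**3/6)/eps**3))

# the product must be the identity through O(a^3)
prod = sp.expand(Z*Zinv)
prod_trunc = sum(prod.coeff(a, k)*a**k for k in range(4))
assert sp.simplify(prod_trunc) == 1
```

All $1/\varepsilon^3$, $1/\varepsilon^2$ and $1/\varepsilon$ terms cancel order by order, so only the identity survives.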
The correct UV--renormalization prescription for the massive contribution is obtained by subtracting from Eq.~(\ref{splitNSHL1}) the terms applying to the light part only~: \begin{eqnarray} \bar{A}^Q_{ij}(p^2,m^2,\mu^2,a_s^{\tiny{\mbox{MOM}}},n_f+1)&=& Z^{-1}_{il}(a_s^{\tiny{\mbox{MOM}}},n_f+1,\mu^2) \hat{A}^Q_{ij}(p^2,m^2,\mu^2,a_s^{\tiny{\mbox{MOM}}},n_f+1) \nonumber\\ && +Z^{-1}_{il}(a_s^{\tiny{\mbox{MOM}}},n_f+1,\mu^2) \hat{A}_{ij}\Bigl(\frac{-p^2}{\mu^2},a_s^{\overline{{\sf MS}}},n_f\Bigr) \nonumber\\ && -Z^{-1}_{il}(a_s^{\overline{{\sf MS}}},n_f,\mu^2) \hat{A}_{ij}\Bigl(\frac{-p^2}{\mu^2},a_s^{\overline{{\sf MS}}},n_f\Bigr)~, \label{eqXX} \end{eqnarray} where \begin{eqnarray} Z_{ij}^{-1} = \delta_{ij} + \sum_{k=1}^\infty a_s^k Z_{ij}^{-1, (k)}~. \end{eqnarray} In the limit $p^2=0$, integrals without a scale vanish within dimensional regularization. Hence for the light flavor OMEs only the term $\delta_{ij}$ remains and one obtains the UV--finite massive OMEs after expanding in $a_s$ \begin{eqnarray} \bar{A}^Q_{ij}\Bigl(\frac{m^2}{\mu^2},a_s^{\tiny{\mbox{MOM}}},n_f+1\Bigr) &=& a_s^{\tiny{\mbox{MOM}}}\Biggl( \hat{A}_{ij}^{(1),Q} \Bigl(\frac{m^2}{\mu^2}\Bigr) +Z^{-1,(1)}_{ij}(n_f+1,\mu^2) -Z^{-1,(1)}_{ij}(n_f) \Biggr) \nonumber\\ && \hspace{-45mm} + {a_s^{\tiny{\mbox{MOM}}}}^2\Biggl( \hat{A}_{ij}^{(2),Q} \Bigl(\frac{m^2}{\mu^2}\Bigr) +Z^{-1,(2)}_{ij}(n_f+1,\mu^2) -Z^{-1,(2)}_{ij}(n_f) \nonumber\\ && \hspace{-45mm} \phantom{{a_s^{\tiny{\mbox{MOM}}}}^2\Biggl(} +Z^{-1,(1)}_{ik}(n_f+1,\mu^2) \hat{A}_{kj}^{(1),Q}\Bigl(\frac{m^2}{\mu^2}\Bigr) \Biggr) \nonumber\\ && \hspace{-45mm} +{a_s^{\tiny{\mbox{MOM}}}}^3\Biggl( \hat{A}_{ij}^{(3),Q} \Bigl(\frac{m^2}{\mu^2}\Bigr) +Z^{-1,(3)}_{ij}(n_f+1,\mu^2) -Z^{-1,(3)}_{ij}(n_f) \nonumber\\ && \hspace{-45mm} \phantom{{a_s^{\tiny{\mbox{MOM}}}}^3\Biggl(} +Z^{-1,(1)}_{ik}(n_f+1,\mu^2) \hat{A}_{kj}^{(2),Q}\Bigl(\frac{m^2}{\mu^2}\Bigr) +Z^{-1,(2)}_{ik}(n_f+1,\mu^2) \hat{A}_{kj}^{(1),Q}\Bigl(\frac{m^2}{\mu^2}\Bigr) \Biggr)~. 
\label{GenRen1} \end{eqnarray} The $Z$--factors at $n_f+1$ flavors refer to Eqs. (\ref{ZijInfp1})--(\ref{ZPSInfp1}), whereas those at $n_f$ flavors correspond to the massless case. \subsection{\bf\boldmath Mass Factorization} \label{SubSec-RENMassFac} Finally, we have to remove the collinear singularities contained in $\bar{A}_{ij}$, which emerge in the limit $p^2 = 0$. They are absorbed into the parton distribution functions and are not present in the case of the off--shell massless OMEs. As a generic renormalization formula, generalizing Eqs. (\ref{renAqqnf}, \ref{renAijnf}), one finds \begin{eqnarray} A_{ij}&=&Z^{-1}_{il} \hat{A}_{lk} \Gamma_{kj}^{-1}~. \label{genren} \end{eqnarray} The renormalized operator matrix elements are obtained by \begin{eqnarray} A^Q_{ij}\Bigl(\frac{m^2}{\mu^2},a_s^{\tiny{\mbox{MOM}}},n_f+1\Bigr)&=& \bar{A}^Q_{il}\Bigl(\frac{m^2}{\mu^2},a_s^{\tiny{\mbox{MOM}}},n_f+1\Bigr) \Gamma_{lj}^{-1}~. \label{genren1} \end{eqnarray} If all quarks were massless, the identity, \cite{Buza:1995ie}, \begin{eqnarray} \Gamma_{ij} = Z^{-1}_{ij} \label{GammaZ} \end{eqnarray} would hold. However, due to the presence of a heavy quark $Q$, the transition functions $\Gamma(n_f)$ refer only to massless sub-graphs. Hence the $\Gamma$--factors contribute up to $O(a_s^2)$ only and do not involve the special scheme adopted for the renormalization of the coupling. Due to Eq.~(\ref{GammaZ}), they can be read off from Eqs. (\ref{Zijnf})--(\ref{ZqqPSnf}).
The renormalized operator matrix elements are then given by: \begin{eqnarray} && A^Q_{ij}\Bigl(\frac{m^2}{\mu^2},a_s^{\tiny{\mbox{MOM}}},n_f+1\Bigr)= \nonumber\\&&\phantom{+} a^{\tiny{\mbox{MOM}}}_s~\Biggl( \hat{A}_{ij}^{(1),Q}\Bigl(\frac{m^2}{\mu^2}\Bigr) +Z^{-1,(1)}_{ij}(n_f+1) -Z^{-1,(1)}_{ij}(n_f) \Biggr) \nonumber\\&& +{a^{\tiny{\mbox{MOM}}}_s}^2\Biggl( \hat{A}_{ij}^{(2),Q}\Bigl(\frac{m^2}{\mu^2}\Bigr) +Z^{-1,(2)}_{ij}(n_f+1) -Z^{-1,(2)}_{ij}(n_f) +Z^{-1,(1)}_{ik}(n_f+1)\hat{A}_{kj}^{(1),Q} \Bigl(\frac{m^2}{\mu^2}\Bigr) \nonumber\\ &&\phantom{+{a^{\tiny{\mbox{MOM}}}_s}^2\Biggl(} +\Bigl[ \hat{A}_{il}^{(1),Q} \Bigl(\frac{m^2}{\mu^2}\Bigr) +Z^{-1,(1)}_{il}(n_f+1) -Z^{-1,(1)}_{il}(n_f) \Bigr] \Gamma^{-1,(1)}_{lj}(n_f) \Biggr) \nonumber\\ && +{a^{\tiny{\mbox{MOM}}}_s}^3\Biggl( \hat{A}_{ij}^{(3),Q}\Bigl(\frac{m^2}{\mu^2}\Bigr) +Z^{-1,(3)}_{ij}(n_f+1) -Z^{-1,(3)}_{ij}(n_f) +Z^{-1,(1)}_{ik}(n_f+1)\hat{A}_{kj}^{(2),Q} \Bigl(\frac{m^2}{\mu^2}\Bigr) \nonumber\\ &&\phantom{+{a^{\tiny{\mbox{MOM}}}_s}^3\Biggl(} +Z^{-1,(2)}_{ik}(n_f+1)\hat{A}_{kj}^{(1),Q} \Bigl(\frac{m^2}{\mu^2}\Bigr) +\Bigl[ \hat{A}_{il}^{(1),Q} \Bigl(\frac{m^2}{\mu^2}\Bigr) +Z^{-1,(1)}_{il}(n_f+1) \nonumber\\ &&\phantom{+{a^{\tiny{\mbox{MOM}}}_s}^3\Biggl(} -Z^{-1,(1)}_{il}(n_f) \Bigr] \Gamma^{-1,(2)}_{lj}(n_f) +\Bigl[ \hat{A}_{il}^{(2),Q} \Bigl(\frac{m^2}{\mu^2}\Bigr) +Z^{-1,(2)}_{il}(n_f+1) -Z^{-1,(2)}_{il}(n_f) \nonumber\\ &&\phantom{+{a^{\tiny{\mbox{MOM}}}_s}^3\Biggl(} +Z^{-1,(1)}_{ik}(n_f+1)\hat{A}_{kl}^{(1),Q} \Bigl(\frac{m^2}{\mu^2}\Bigr) \Bigr] \Gamma^{-1,(1)}_{lj}(n_f) \Biggr)+O({a_s^{\tiny{\mbox{MOM}}}}^4)~. \label{GenRen3} \end{eqnarray} From (\ref{GenRen3}) it is obvious that the renormalization of $A^Q_{ij}$ to $O(a_s^3)$ requires the $1$--loop terms up to $O(\varepsilon^2)$ and the $2$--loop terms up to $O(\varepsilon)$, cf.~\cite{Buza:1995ie,Buza:1996wv,Bierenbaum:2007qe, Bierenbaum:2008yu,Bierenbaum:2009zt}. These terms are calculated in Section~\ref{Sec-2L}.
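The bookkeeping of Eq.~(\ref{GenRen3}) is the matrix product of the series (\ref{GenRen1}) with $\Gamma^{-1}=1+a_s\Gamma^{-1,(1)}+a_s^2\Gamma^{-1,(2)}$. A sympy sketch with generic $2\times 2$ matrices (symbol names ours; $dZk$ abbreviates the difference $Z^{-1,(k)}(n_f+1)-Z^{-1,(k)}(n_f)$):

```python
import sympy as sp

a = sp.Symbol('a')

def M(tag):
    """2x2 matrix of independent symbols, indices i,j = q,g."""
    return sp.Matrix(2, 2, sp.symbols(f'{tag}qq {tag}qg {tag}gq {tag}gg'))

Ah1, Ah2, Ah3 = M('Ah1'), M('Ah2'), M('Ah3')   # \hat A^{(k),Q}
dZ1, dZ2, dZ3 = M('dZ1'), M('dZ2'), M('dZ3')   # Z^{-1,(k)}(nf+1) - Z^{-1,(k)}(nf)
Z1, Z2 = M('Z1'), M('Z2')                      # Z^{-1,(k)}(nf+1)
G1, G2 = M('G1'), M('G2')                      # Gamma^{-1,(k)}(nf)
one = sp.eye(2)

# Eq. (GenRen1): UV-renormalized massive OMEs
Abar = (a*(Ah1 + dZ1)
        + a**2*(Ah2 + dZ2 + Z1*Ah1)
        + a**3*(Ah3 + dZ3 + Z1*Ah2 + Z2*Ah1))

# Eq. (genren1): mass factorization A = Abar * Gamma^{-1}
A = sp.expand(Abar*(one + a*G1 + a**2*G2))

# a^2 and a^3 coefficients claimed in Eq. (GenRen3)
c2 = Ah2 + dZ2 + Z1*Ah1 + (Ah1 + dZ1)*G1
c3 = (Ah3 + dZ3 + Z1*Ah2 + Z2*Ah1
      + (Ah1 + dZ1)*G2 + (Ah2 + dZ2 + Z1*Ah1)*G1)

assert sp.expand(A.applyfunc(lambda e: e.coeff(a, 2)) - c2) == sp.zeros(2)
assert sp.expand(A.applyfunc(lambda e: e.coeff(a, 3)) - c3) == sp.zeros(2)
```

The matrix ordering matters here; the check confirms that all $\Gamma^{-1}$ factors in Eq.~(\ref{GenRen3}) stand to the right, as dictated by Eq.~(\ref{genren1}).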
Finally, we transform the coupling constant back into the $\overline{\sf MS}$--scheme by using Eq.~(\ref{asmoma}). We do not give the explicit formula here, but present the individual renormalized OMEs after this transformation in the next Section as perturbative series in $a_s^{\overline{{\sf MS}}}$, \begin{eqnarray} A_{ij}^Q\Bigl(\frac{m^2}{\mu^2},a_s^{\overline{{\sf MS}}},n_f+1\Bigr)&=& \delta_{ij}+ a_s^{\overline{{\sf MS}}} A_{ij}^{Q, (1)}\Bigl(\frac{m^2}{\mu^2},n_f+1\Bigr) +{a_s^{\overline{{\sf MS}}}}^2 A_{ij}^{Q, (2)}\Bigl(\frac{m^2}{\mu^2},n_f+1\Bigr) \nonumber \\ \phantom{A_{ij}^Q\Bigl(\frac{m^2}{\mu^2},a_s^{\overline{{\sf MS}}},n_f+1\Bigr)}&& \!\! +{a_s^{\overline{{\sf MS}}}}^3 A_{ij}^{Q, (3)}\Bigl(\frac{m^2}{\mu^2},n_f+1\Bigr) +O({a_s^{\overline{{\sf MS}}}}^4)~. \label{PertOmeren} \end{eqnarray} As stated in Section~\ref{Sec-HQDIS}, one has to use the same scheme when combining the massive OMEs with the massless Wilson coefficients in the factorization formula (\ref{CallFAC}). The effects of the transformation between the ${\sf MOM}$-- and $\overline{\sf MS}$--scheme are discussed in Section~\ref{Sec-REP}. The subscript $Q$ was introduced in this Section to make the distinction between the massless and massive OMEs explicit and will be dropped from now on, since no confusion is expected. Comparing Eqs. (\ref{GenRen3}) and (\ref{PertOmeren}), one notices that the term $\delta_{ij}$ is not present in the former because it was subtracted together with the light flavor contributions. However, as one infers from Eq.~(\ref{CallFAC}) and the discussion below, this term is necessary when calculating the massive Wilson coefficients in the asymptotic limit and we therefore have re--introduced it into Eq.~(\ref{PertOmeren}). 
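Although the explicit back-transformation formula is not given above, it follows directly from Eq.~(\ref{asmoma}); a sketch generating it symbolically (B1, B2, B3 are our placeholders for the {\sf MOM}-scheme expansion coefficients of a generic OME):

```python
import sympy as sp

a, L, b0Q, b1Q, b1Q1 = sp.symbols('a L b0Q b1Q b1Q1')
B1, B2, B3 = sp.symbols('B1 B2 B3')  # MOM-scheme coefficients

# Eq. (asmoma): a_s^MOM in terms of a = a_s^MSbar
aMOM = a - b0Q*L*a**2 + (b0Q**2*L**2 - b1Q*L - b1Q1)*a**3

A = sp.expand(B1*aMOM + B2*aMOM**2 + B3*aMOM**3)
for k in (1, 2, 3):
    print(f'a^{k}:', sp.collect(A.coeff(a, k), L))
```

One obtains $A^{(1)}$ unchanged, $A^{(2)}-\beta_{0,Q}L\,A^{(1)}$ at second, and $A^{(3)}-2\beta_{0,Q}L\,A^{(2)}+(\beta_{0,Q}^2L^2-\beta_{1,Q}L-\beta_{1,Q}^{(1)})A^{(1)}$ at third order, i.e. only mass logarithms and the heavy quark contributions to the $\beta$-function enter the transformation.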
\subsection{\bf\boldmath General Structure of the Massive Operator Matrix Elements} \label{SubSec-RENPred} In the following, we present the general structure of the unrenormalized and renormalized massive operator matrix elements for the specific partonic channels. The former are expressed as a Laurent--series in $\varepsilon$ via \begin{eqnarray} \hat{\hspace*{-1mm}\hat{A}}_{ij}^{(l)}\Bigl(\frac{\hat{m}^2}{\mu^2},\varepsilon,N\Bigr) &=& \Bigl(\frac{\hat{m}^2}{\mu^2}\Bigr)^{l\varepsilon/2} \sum_{k=0}^{\infty} \frac{a^{(l,k)}_{ij}}{\varepsilon^{l-k}}~. \end{eqnarray} Additionally, we set \begin{eqnarray} a^{(l,l)}_{ij}\equiv a^{(l)}_{ij}~,\quad a^{(l,l+1)}_{ij}\equiv \overline{a}^{(l)}_{ij}~, \mbox{etc.}~. \end{eqnarray} The pole terms can all be expressed by known renormalization constants and lower order contributions to the massive OMEs, which provides us with a strong check on our calculation. In particular, the complete ${\sf NLO}$ anomalous dimensions, as well as their $T_F$--terms at ${\sf NNLO}$, contribute at $O(a_s^3)$. The moments of the $O(\varepsilon^0)$--terms of the unrenormalized OMEs at the $3$--loop level, $a_{ij}^{(3)}$, are a new result of this thesis and will be calculated in Section~\ref{Sec-3L}, cf. \cite{Bierenbaum:2009mv}. The $O(\varepsilon)$ terms at the $2$--loop level, $\overline{a}_{ij}^{(2)}$, contribute to the non--logarithmic part of the renormalized $3$--loop OMEs and are calculated for general values of $N$ in Section~\ref{Sec-2L}, cf. \cite{Bierenbaum:2008yu,Bierenbaum:2009zt}. The pole terms and the $O(\varepsilon^0)$ terms, $a_{ij}^{(2)}$, at $2$--loop have been calculated for the first time in Refs.~\cite{Buza:1995ie,Buza:1996wv}. The terms involving the quark operator, (\ref{COMP1}, \ref{COMP2}), were confirmed in \cite{Bierenbaum:2007qe} and the terms involving the gluon operator (\ref{COMP3}) by the present work, cf. \cite{Bierenbaum:2009zt}. 
To conform with the notation used in \cite{Buza:1995ie,Buza:1996wv}, we define the 2--loop terms $a_{ij}^{(2)},~\overline{a}_{ij}^{(2)}$ {\sf after} performing mass renormalization in the on--shell--scheme. We {\sf do not} apply this to the $3$--loop terms. We choose to calculate one--particle reducible diagrams and therefore have to include external self--energies containing massive quarks into our calculation. Before presenting the operator matrix elements up to three loops, we first summarize the necessary self--energy contributions in the next Section. The remaining Sections, (\ref{Sec-NS})--(\ref{SubSec-AggQ}), contain the general structure of the unrenormalized and renormalized massive OMEs up to $3$--loops. In these Sections, we always proceed as follows: From Eqs. (\ref{macoren},~\ref{GenRen3}), one predicts the pole terms of the respective unrenormalized OMEs by demanding that these terms have to cancel through renormalization. The unrenormalized expressions are then renormalized in the ${\sf MOM}$--scheme. Finally, Eq.~(\ref{asmoma}) is applied and the renormalized massive OMEs are presented in the $\overline{\sf MS}$--scheme. \subsubsection{Self--energy contributions} \label{Sec-elf} The gluon and quark self--energy contributions due to heavy quark lines are given by \begin{eqnarray} \hat{\Pi}_{\mu\nu}^{ab}(p^2,\hat{m}^2,\mu^2,\hat{a}_s) &=& i\delta^{ab} \left[-g_{\mu\nu}p^2 +p_\mu p_\nu\right] \hat{\Pi}(p^2,\hat{m}^2,\mu^2,\hat{a}_s)~, \\ \hat{\Pi}(p^2,\hat{m}^2,\mu^2,\hat{a}_s)&=& \sum_{k=1}^{\infty}\hat{a}_s^k\hat{\Pi}^{(k)}(p^2,\hat{m}^2,\mu^2)~, \label{pertPiGlu} \\ \hat{\Sigma}_{ij}(p^2,\hat{m}^2,\mu^2,\hat{a}_s)&=& i~~\delta_{ij}~/\!\!\!\! p~~ \hat{\Sigma}(p^2,\hat{m}^2,\mu^2,\hat{a_s})~, \\ \hat{\Sigma}(p^2,\hat{m}^2,\mu^2,\hat{a}_s)&=& \sum_{k=2}^{\infty}\hat{a}_s^k\hat{\Sigma}^{(k)}(p^2,\hat{m}^2,\mu^2)~. \label{pertSiQu} \end{eqnarray} Note that the quark self--energy contributions start at 2--loop order.
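The exponential $\exp(\sum_{i\ge 2}\zeta_i/i\,(\varepsilon/2)^i)$, which appears both in $f(\varepsilon)$, Eq.~(\ref{fep}), and in the 1--loop self--energy below, resums to a $\Gamma$-function, since $\ln\Gamma(1-x)=\gamma_E x+\sum_{k\ge 2}\zeta_k x^k/k$ for $|x|<1$. A quick numerical confirmation with sympy at a sample point $x=\varepsilon/2=1/10$:

```python
import sympy as sp

x = sp.Rational(1, 10)  # x = eps/2, inside the radius of convergence

# truncated zeta-sum vs. the Gamma-function closed form
series = sp.exp(sum(sp.zeta(i)/i * x**i for i in range(2, 60)))
closed = sp.gamma(1 - x)*sp.exp(-sp.EulerGamma*x)

assert abs((series - closed).evalf(30)) < sp.Float('1e-20')
```

The truncation at $i<60$ leaves a remainder far below the working precision, so the two expressions agree to all displayed digits.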
These self--energies are easily calculated using {\sf MATAD}, \cite{Steinhauser:2000ry}, cf. Section~\ref{Sec-3L}. The expansion coefficients for $p^2=0$ of Eqs.~(\ref{pertPiGlu},~\ref{pertSiQu}) are needed for the calculation of the gluonic and quarkonic OMEs, respectively. The contributions to the gluon vacuum polarization for general gauge parameter $\xi$ are \begin{eqnarray} \label{eqPI1} \hat{\Pi}^{(1)}\Bigl(0,\frac{\hat{m}^2}{\mu^2}\Bigr)&=& T_F\Bigl(\frac{\hat{m}^2}{\mu^2}\Bigr)^{\varepsilon/2} \Biggl( -\frac{8}{3\varepsilon} \exp \Bigl(\sum_{i=2}^{\infty}\frac{\zeta_i}{i} \Bigl(\frac{\varepsilon}{2}\Bigr)^{i}\Bigr) \Biggr)~, ~\label{GluSelf1} \\ \hat{\Pi}^{(2)}\Bigl(0,\frac{\hat{m}^2}{\mu^2}\Bigr)&=& T_F\Bigl(\frac{\hat{m}^2}{\mu^2}\Bigr)^{\varepsilon}\Biggl( -\frac{4}{\varepsilon^2} C_A + \frac{1}{\varepsilon} \Bigl\{-12 C_F + 5 C_A\Bigr\} + C_A \Bigl(\frac{13}{12} -\zeta_2\Bigr) - \frac{13}{3} C_F \nonumber\\ &&\hspace{-10mm} + \varepsilon \left\{C_A \Bigl(\frac{169}{144} + \frac{5}{4} \zeta_2 - \frac{\zeta_3}{3} \Bigr) + C_F \Bigl( - \frac{35}{12} -3 \zeta_2 \Bigr) \right\}\Biggr) + O(\varepsilon^2)~, \label{GluSelf2} \\ \hat{\Pi}^{(3)}\Bigl(0,\frac{\hat{m}^2}{\mu^2}\Bigr)&=& T_F\Bigl(\frac{\hat{m}^2}{\mu^2}\Bigr)^{3\varepsilon/2}\Biggl( \frac{1}{\varepsilon^3}\Biggl\{ -\frac{32}{9}T_FC_A\Bigl(2n_f+1\Bigr) +C_A^2\Bigl( \frac{164}{9} +\frac{4}{3}\xi \Bigr) \Biggr\} \nonumber\\ &&\hspace{-10mm} +\frac{1}{\varepsilon^2}\Biggl\{ \frac{80}{27}\Bigl( C_A-6C_F \Bigr)n_fT_F +\frac{8}{27} \Bigl( 35C_A-48C_F \Bigr)T_F +\frac{C_A^2}{27} \Bigl( -781 \nonumber\\ &&\hspace{-10mm} +63\xi \Bigr) +\frac{712}{9}C_AC_F \Biggr\} +\frac{1}{\varepsilon}\Biggl\{ \frac{4}{27}\Bigl( C_A(-101-18\zeta_2) -62C_F \Bigr)n_fT_F \nonumber\\ &&\hspace{-10mm} +\frac{2}{27} \Bigl( C_A(-37-18\zeta_2) -80C_F \Bigr)T_F +C_A^2 \Bigl( -12\zeta_3 +\frac{41}{6}\zeta_2 +\frac{3181}{108} +\frac{\zeta_2}{2}\xi \nonumber\\ &&\hspace{-10mm} +\frac{137}{36}\xi \Bigr) +C_AC_F \Bigl( 16\zeta_3 
-\frac{1570}{27} \Bigr) +\frac{272}{3}C_F^2 \Biggr\} +n_fT_F \Biggl\{ C_A\Bigl( \frac{56}{9}\zeta_3 +\frac{10}{9}\zeta_2 \nonumber\\ &&\hspace{-10mm} -\frac{3203}{243} \Bigr) +C_F\Bigl( -\frac{20}{3}\zeta_2 -\frac{1942}{81} \Bigr) \Biggr\} +T_F \Biggl\{ C_A\Bigl( -\frac{295}{18}\zeta_3 +\frac{35}{9}\zeta_2 +\frac{6361}{486} \Bigr) \nonumber\\ &&\hspace{-10mm} +C_F\Bigl( -7\zeta_3 -\frac{16}{3}\zeta_2 -\frac{218}{81} \Bigr) \Biggr\} +C_A^2 \Biggl\{ 4{\sf B_4} -27\zeta_4 +\frac{1969}{72}\zeta_3 -\frac{781}{72}\zeta_2 \nonumber\\ &&\hspace{-10mm} +\frac{42799}{3888} -\frac{7}{6}\zeta_3\xi +\frac{7}{8}\zeta_2\xi +\frac{3577}{432}\xi \Biggr\} +C_AC_F \Biggl\{ -8{\sf B_4} +36\zeta_4 -\frac{1957}{12}\zeta_3\nonumber \end{eqnarray} \begin{eqnarray} && +\frac{89}{3}\zeta_2 +\frac{10633}{81} \Biggr\} +C_F^2 \Biggl\{ \frac{95}{3}\zeta_3 +\frac{274}{9} \Biggr\} \Biggr) + O(\varepsilon)~, \label{GluSelf3} \end{eqnarray} and for the quark self--energy, \begin{eqnarray} \hat{\Sigma}^{(2)}(0,\frac{\hat{m}^2}{\mu^2}) &=& T_F C_F \Bigl(\frac{\hat{m}^2}{\mu^2}\Bigr)^{\varepsilon} \left\{\frac{2}{\varepsilon} +\frac{5}{6} + \left[\frac{89}{72} + \frac{\zeta_2}{2} \right] \varepsilon \right\} + O(\varepsilon^2)~, \label{QuSelf2} \\ \hat{\Sigma}^{(3)}(0,\frac{\hat{m}^2}{\mu^2}) &=& T_F C_F \Bigl(\frac{\hat{m}^2}{\mu^2}\Bigr)^{3\varepsilon/2} \Biggl( \frac{8}{3\varepsilon^3}C_A \{1-\xi\} +\frac{1}{\varepsilon^2} \Bigl\{ \frac{32}{9}T_F(n_f+2) -C_A\Bigl(\frac{40}{9}+4\xi\Bigr) \nonumber\\ && -\frac{8}{3}C_F \Bigl\} +\frac{1}{\varepsilon} \Biggl\{ \frac{40}{27}T_F(n_f+2) +C_A\Bigl\{ \zeta_2 +\frac{454}{27} -\zeta_2\xi -\frac{70}{9}\xi \Bigr\} -26C_F \Biggl\} \nonumber\\ && +n_fT_F\Bigl\{ \frac{4}{3}\zeta_2 +\frac{674}{81} \Bigr\} + T_F\Bigl\{ \frac{8}{3}\zeta_2 +\frac{604}{81} \Bigr\} + C_A\Bigl\{ \frac{17}{3}\zeta_3 -\frac{5}{3}\zeta_2 +\frac{1879}{162} \nonumber\\ && +\frac{7}{3}\zeta_3\xi -\frac{3}{2}\zeta_2\xi -\frac{407}{27}\xi \Bigr\} + C_F\Bigl\{ -8\zeta_3 -\zeta_2 -\frac{335}{18} 
\Bigr\} \Biggr) + O(\varepsilon)~, \label{QuSelf3} \end{eqnarray} see also~\cite{Chetyrkin:1999ys,*Chetyrkin:1999qi,Chetyrkin:2008jk}. In Eq.~(\ref{GluSelf3}) the constant \begin{eqnarray} {\sf B_4}&=&-4\zeta_2\ln^2(2) +\frac{2}{3}\ln^4(2) -\frac{13}{2}\zeta_4 +16 {\sf Li}_4\Bigl(\frac{1}{2}\Bigr) ~\approx~ -1.762800093...~ \label{B4} \end{eqnarray} appears due to genuine massive effects, cf. \cite{Broadhurst:1991fi,Avdeev:1994db,*Laporta:1996mq,Broadhurst:1998rz,Boughezal:2004ef}. \subsubsection{$A_{qq,Q}^{\sf NS}$} \label{Sec-NS} The lowest non--trivial ${\sf NS}$--contribution is of $O(a_s^2)$, \begin{eqnarray} A_{qq,Q}^{\sf NS}&=&1 +a_s^2A_{qq,Q}^{(2), {\sf NS}} +a_s^3A_{qq,Q}^{(3), {\sf NS}} +O(a_s^4)~. \label{NSpert} \end{eqnarray} The expansion coefficients are obtained in the ${\sf MOM}$--scheme from the bare quantities, using Eqs.~(\ref{macoren},~\ref{GenRen3}). After mass-- and coupling constant renormalization, the OMEs are given by \begin{eqnarray} A_{qq,Q}^{(2), \sf NS, \tiny{\mbox{MOM}}}&=& \hat{A}_{qq,Q}^{(2),{\sf NS},\tiny{\mbox{MOM}}} +Z^{-1,(2), {\sf NS}}_{qq}(n_f+1) -Z^{-1,(2), {\sf NS}}_{qq}(n_f) ~, \label{2LNSRen1} \\ A_{qq,Q}^{(3), \sf NS, \tiny{\mbox{MOM}}}&=& \hat{A}_{qq,Q}^{(3), {\sf NS},\tiny{\mbox{MOM}}} +Z^{-1,(3), {\sf NS}}_{qq}(n_f+1) -Z^{-1,(3), {\sf NS}}_{qq}(n_f) \nonumber\\ && +Z^{-1,(1), {\sf NS}}_{qq}(n_f+1) \hat{A}_{qq,Q}^{(2), {\sf NS},\tiny{\mbox{MOM}}} +\Bigl[ \hat{A}_{qq,Q}^{(2), {\sf NS},\tiny{\mbox{MOM}}} \nonumber\\ && +Z^{-1,(2), {\sf NS}}_{qq}(n_f+1) -Z^{-1,(2), {\sf NS}}_{qq}(n_f) \Bigr]\Gamma^{-1,(1)}_{qq}(n_f) ~. \label{3LNSRen1} \end{eqnarray} From (\ref{macoren}, \ref{GenRen3}, \ref{2LNSRen1}, \ref{3LNSRen1}), one predicts the pole terms of the unrenormalized OME.
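The constant ${\sf B_4}$ of Eq.~(\ref{B4}) can be evaluated directly from its definition; a numerical cross-check with sympy, asserting only the leading digits:

```python
import sympy as sp

ln2 = sp.log(2)
# Eq. (B4): B4 = -4 zeta_2 ln^2(2) + 2/3 ln^4(2) - 13/2 zeta_4 + 16 Li_4(1/2)
B4 = (-4*sp.zeta(2)*ln2**2 + sp.Rational(2, 3)*ln2**4
      - sp.Rational(13, 2)*sp.zeta(4) + 16*sp.polylog(4, sp.Rational(1, 2)))

print(B4.evalf(15))
assert abs(B4.evalf(20) + sp.Float('1.762800', 20)) < 1e-5
```

This constant is characteristic of massive 3--loop tadpole integrals, cf. \cite{Broadhurst:1991fi}.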
At second and third order they read \begin{eqnarray} \hat{\hspace*{-1mm}\hat{A}}_{qq,Q}^{(2),\sf NS}&=& \Bigl(\frac{\hat{m}^2}{\mu^2}\Bigr)^{\varepsilon}\Biggl( \frac{\beta_{0,Q}\gamma_{qq}^{(0)}}{\varepsilon^2} +\frac{\hat{\gamma}_{qq}^{(1), {\sf NS}}}{2\varepsilon} +a_{qq,Q}^{(2),{\sf NS}} +\overline{a}_{qq,Q}^{(2),{\sf NS}}\varepsilon \Biggr)~, \label{Ahhhqq2NSQ} \\ \hat{\hspace*{-1mm}\hat{A}}_{qq,Q}^{(3),{\sf NS}}&=& \Bigl(\frac{\hat{m}^2}{\mu^2}\Bigr)^{3\varepsilon/2}\Biggl\{ -\frac{4\gamma_{qq}^{(0)}\beta_{0,Q}}{3\varepsilon^3} \Bigl(\beta_0+2\beta_{0,Q}\Bigr) +\frac{1}{\varepsilon^2} \Biggl( \frac{2\gamma_{qq}^{(1),{\sf NS}}\beta_{0,Q}}{3} -\frac{4\hat{\gamma}_{qq}^{(1),{\sf NS}}}{3} \Bigl[\beta_0+\beta_{0,Q}\Bigr] \nonumber\\ && +\frac{2\beta_{1,Q}\gamma_{qq}^{(0)}}{3} -2\delta m_1^{(-1)}\beta_{0,Q}\gamma_{qq}^{(0)} \Biggr) +\frac{1}{\varepsilon} \Biggl( \frac{\hat{\gamma}_{qq}^{(2), {\sf NS}}}{3} -4a_{qq,Q}^{(2),{\sf NS}}\Bigl[\beta_0+\beta_{0,Q}\Bigr] +\beta_{1,Q}^{(1)}\gamma_{qq}^{(0)}\nonumber \end{eqnarray} \begin{eqnarray} && +\frac{\gamma_{qq}^{(0)}\beta_0\beta_{0,Q}\zeta_2}{2} -2 \delta m_1^{(0)} \beta_{0,Q} \gamma_{qq}^{(0)} -\delta m_1^{(-1)}\hat{\gamma}_{qq}^{(1),{\sf NS}} \Biggr) +a_{qq,Q}^{(3), {\sf NS}} \Biggr\}~. \label{Ahhhqq3NSQ} \end{eqnarray} Note that we have already used the general structure of the unrenormalized lower order OME in the evaluation of the $O(\hat{a}_s^3)$ term, as we will always do in the following. Using Eqs.~(\ref{macoren}, \ref{2LNSRen1}, \ref{3LNSRen1}), one can renormalize the above expressions. Finally, we transform back to the $\overline{\sf MS}$--scheme using Eq.~(\ref{asmoma}).
Thus one obtains the renormalized expansion coefficients of Eq.~(\ref{NSpert}) \begin{eqnarray} A_{qq,Q}^{(2),\sf NS, \overline{{\sf MS}}}&=& \frac{\beta_{0,Q}\gamma_{qq}^{(0)}}{4} \ln^2 \Bigl(\frac{m^2}{\mu^2}\Bigr) +\frac{\hat{\gamma}_{qq}^{(1), {\sf NS}}}{2} \ln \Bigl(\frac{m^2}{\mu^2}\Bigr) +a_{qq,Q}^{(2),{\sf NS}} -\frac{\beta_{0,Q}\gamma_{qq}^{(0)}}{4}\zeta_2~, \label{Aqq2NSQMSren} \\ A_{qq,Q}^{(3),{\sf NS}, \overline{{\sf MS}}}&=& -\frac{\gamma_{qq}^{(0)}\beta_{0,Q}}{6} \Bigl( \beta_0 +2\beta_{0,Q} \Bigr) \ln^3 \Bigl(\frac{m^2}{\mu^2}\Bigr) +\frac{1}{4} \Biggl\{ 2\gamma_{qq}^{(1),{\sf NS}}\beta_{0,Q} -2\hat{\gamma}_{qq}^{(1),{\sf NS}} \Bigl( \beta_0 +\beta_{0,Q} \Bigr) \nonumber\\ && +\beta_{1,Q}\gamma_{qq}^{(0)} \Biggr\} \ln^2 \Bigl(\frac{m^2}{\mu^2}\Bigr) +\frac{1}{2} \Biggl\{ \hat{\gamma}_{qq}^{(2),{\sf NS}} -\Bigl( 4a_{qq,Q}^{(2),{\sf NS}} -\zeta_2\beta_{0,Q}\gamma_{qq}^{(0)} \Bigr)(\beta_0+\beta_{0,Q}) \nonumber\\ && +\gamma_{qq}^{(0)}\beta_{1,Q}^{(1)} \Biggr\} \ln \Bigl(\frac{m^2}{\mu^2}\Bigr) +4\overline{a}_{qq,Q}^{(2),{\sf NS}}(\beta_0+\beta_{0,Q}) -\gamma_{qq}^{(0)}\beta_{1,Q}^{(2)} -\frac{\gamma_{qq}^{(0)}\beta_0\beta_{0,Q}\zeta_3}{6} \nonumber\\ && -\frac{\gamma_{qq}^{(1),{\sf NS}}\beta_{0,Q}\zeta_2}{4} +2 \delta m_1^{(1)} \beta_{0,Q} \gamma_{qq}^{(0)} +\delta m_1^{(0)} \hat{\gamma}_{qq}^{(1),{\sf NS}} +2 \delta m_1^{(-1)} a_{qq,Q}^{(2),{\sf NS}} \nonumber\\ && +a_{qq,Q}^{(3),{\sf NS}} ~. \label{Aqq3NSQMSren} \end{eqnarray} Note that in the ${\sf NS}$--case, one is generically provided with even and odd moments due to a Ward--identity relating the results in the polarized and unpolarized case. The former refer to the anomalous dimensions $\gamma_{qq}^{{\sf NS},+}$ and the latter to $\gamma_{qq}^{{\sf NS},-}$, respectively, as given in Eqs. (3.5, 3.7) and Eqs. (3.6, 3.8) in Ref.~\cite{Moch:2004pa}. 
The relations above also apply to other twist--2 non--singlet massive OMEs, e.g. transversity, for which the 2-- and 3--loop heavy flavor corrections are given in Section~\ref{sec-1}, cf. also \cite{Blumlein:trans}. \subsubsection{$A_{Qq}^{\sf PS}$ and $A_{qq,Q}^{\sf PS}$} \label{SubSec-PS} There are two different ${\sf PS}$--contributions, cf. the discussion below Eq.~(\ref{splitS}), \begin{eqnarray} A_{Qq}^{\sf PS}&=& a_s^2A_{Qq}^{(2), {\sf PS}} +a_s^3A_{Qq}^{(3), {\sf PS}} +O(a_s^4)~, \label{PSQqpert}\\ A_{qq,Q}^{\sf PS}&=& a_s^3A_{qq,Q}^{(3), {\sf PS}} +O(a_s^4)~. \label{PSqqQpert} \end{eqnarray} Separating these contributions is not straightforward, since the generic renormalization formula for operator renormalization and mass factorization, Eq.~(\ref{GenRen3}), applies only to the sum of these terms. At $O(a_s^2)$, this problem does not occur and renormalization proceeds in the {\sf MOM}--scheme via \begin{eqnarray} A_{Qq}^{(2), \sf PS, \tiny{\mbox{MOM}}}&=& \hat{A}_{Qq}^{(2),{\sf PS}, \tiny{\mbox{MOM}}} +Z^{-1,(2), {\sf PS}}_{qq}(n_f+1) -Z^{-1,(2), {\sf PS}}_{qq}(n_f) \nonumber\\ && +\Bigl[ \hat{A}_{Qg}^{(1), \tiny{\mbox{MOM}}} +Z_{qg}^{-1,(1)}(n_f+1) -Z_{qg}^{-1,(1)}(n_f) \Bigr]\Gamma^{-1,(1)}_{gq}(n_f)~.
\end{eqnarray} The unrenormalized expression is given by \begin{eqnarray} \hat{\hspace*{-1mm}\hat{A}}_{Qq}^{(2),\sf PS}&=& \Bigl(\frac{\hat{m}^2}{\mu^2}\Bigr)^{\varepsilon}\Biggl( -\frac{\hat{\gamma}_{qg}^{(0)} \gamma_{gq}^{(0)}}{2\varepsilon^2} +\frac{\hat{\gamma}_{qq}^{(1), {\sf PS}}}{2\varepsilon} +a_{Qq}^{(2),{\sf PS}} +\overline{a}_{Qq}^{(2),{\sf PS}}\varepsilon \Biggr)~.\label{AhhhQq2PS} \end{eqnarray} The renormalized result in the ${\sf \overline{{\sf MS}}}$--scheme reads \begin{eqnarray} A_{Qq}^{(2),\sf PS, \overline{{\sf MS}}}&=& -\frac{\hat{\gamma}_{qg}^{(0)} \gamma_{gq}^{(0)}}{8} \ln^2 \Bigl(\frac{m^2}{\mu^2}\Bigr) +\frac{\hat{\gamma}_{qq}^{(1), {\sf PS}}}{2} \ln \Bigl(\frac{m^2}{\mu^2}\Bigr) +a_{Qq}^{(2),{\sf PS}} +\frac{\hat{\gamma}_{qg}^{(0)} \gamma_{gq}^{(0)}}{8}\zeta_2~. \label{AQq2PSMSON} \end{eqnarray} The corresponding renormalization relation at third order is given by \begin{eqnarray} &&A_{Qq}^{(3), \sf PS, \tiny{\mbox{MOM}}}+ A_{qq,Q}^{(3), \sf PS, \tiny{\mbox{MOM}}}= \hat{A}_{Qq}^{(3), {\sf PS}, \tiny{\mbox{MOM}}} +\hat{A}_{qq,Q}^{(3), {\sf PS}, \tiny{\mbox{MOM}}} +Z^{-1,(3), {\sf PS}}_{qq}(n_f+1) \nonumber\\ && \phantom{abc} -Z^{-1,(3), {\sf PS}}_{qq}(n_f) +Z^{-1,(1)}_{qq}(n_f+1)\hat{A}_{Qq}^{(2), {\sf PS}, \tiny{\mbox{MOM}}} +Z^{-1,(1)}_{qg}(n_f+1)\hat{A}_{gq,Q}^{(2), \tiny{\mbox{MOM}}} \nonumber\\ && \phantom{abc} +\Bigl[ \hat{A}_{Qg}^{(1), \tiny{\mbox{MOM}}} +Z^{-1,(1)}_{qg}(n_f+1) -Z^{-1,(1)}_{qg}(n_f) \Bigr]\Gamma^{-1,(2)}_{gq}(n_f) +\Bigl[ \hat{A}_{Qq}^{(2), {\sf PS}, \tiny{\mbox{MOM}}} \nonumber\\ && \phantom{abc} +Z^{-1,(2), {\sf PS}}_{qq}(n_f+1) -Z^{-1,(2), {\sf PS}}_{qq}(n_f) \Bigr]\Gamma^{-1,(1)}_{qq}(n_f) +\Bigl[ \hat{A}_{Qg}^{(2), \tiny{\mbox{MOM}}} +Z^{-1,(2)}_{qg}(n_f+1) \nonumber\\ && \phantom{abc} -Z^{-1,(2)}_{qg}(n_f) +Z^{-1,(1)}_{qq}(n_f+1)A_{Qg}^{(1), \tiny{\mbox{MOM}}} +Z^{-1,(1)}_{qg}(n_f+1)A_{gg,Q}^{(1), \tiny{\mbox{MOM}}} \Bigr]\Gamma^{-1,(1)}_{gq}(n_f)~.\nonumber\\ \label{AQqq3PSRen} \end{eqnarray} Taking into 
account the structure of the UV-- and collinear singularities of the contributing Feynman--diagrams, these two contributions can be separated. For the bare quantities we obtain \begin{eqnarray} \hat{\hspace*{-1mm}\hat{A}}_{Qq}^{(3),{\sf PS}}&=& \Bigl(\frac{\hat{m}^2}{\mu^2}\Bigr)^{3\varepsilon/2}\Biggl[ \frac{\hat{\gamma}_{qg}^{(0)}\gamma_{gq}^{(0)}}{6\varepsilon^3} \Biggl( \gamma_{gg}^{(0)} -\gamma_{qq}^{(0)} +6\beta_0 +16\beta_{0,Q} \Biggr) +\frac{1}{\varepsilon^2}\Biggl( -\frac{4\hat{\gamma}_{qq}^{(1),{\sf PS}}}{3} \Bigl[ \beta_0 +\beta_{0,Q} \Bigr] \nonumber\\ && -\frac{\gamma_{gq}^{(0)}\hat{\gamma}_{qg}^{(1)}}{3} +\frac{\hat{\gamma}_{qg}^{(0)}}{6} \Bigl[ 2\hat{\gamma}_{gq}^{(1)} -\gamma_{gq}^{(1)} \Bigr] +\delta m_1^{(-1)} \hat{\gamma}_{qg}^{(0)} \gamma_{gq}^{(0)} \Biggr) +\frac{1}{\varepsilon}\Biggl( \frac{\hat{\gamma}_{qq}^{(2),{\sf PS}}}{3} -n_f\frac{\hat{\tilde{\gamma}}_{qq}^{(2),{\sf PS}}}{3} \nonumber\\ && +\hat{\gamma}_{qg}^{(0)}a_{gq,Q}^{(2)} -\gamma_{gq}^{(0)}a_{Qg}^{(2)} -4(\beta_0+\beta_{0,Q})a_{Qq}^{(2),{\sf PS}} -\frac{\hat{\gamma}_{qg}^{(0)}\gamma_{gq}^{(0)}\zeta_2}{16} \Bigl[ \gamma_{gg}^{(0)} -\gamma_{qq}^{(0)} +6\beta_0 \Bigr] \nonumber\\ && +\delta m_1^{(0)} \hat{\gamma}_{qg}^{(0)} \gamma_{gq}^{(0)} -\delta m_1^{(-1)} \hat{\gamma}_{qq}^{(1),{\sf PS}} \Biggr) +a_{Qq}^{(3),{\sf PS}} \Biggr]~, \label{AhhhQq3PS} \\ \hat{\hspace*{-1mm}\hat{A}}_{qq,Q}^{(3),{\sf PS}}&=& n_f\Bigl(\frac{\hat{m}^2}{\mu^2}\Bigr)^{3\varepsilon/2}\Biggl[ \frac{2\hat{\gamma}_{qg}^{(0)}\gamma_{gq}^{(0)}\beta_{0,Q}}{3\varepsilon^3} +\frac{1}{3\varepsilon^2} \Biggl( 2\hat{\gamma}_{qq}^{(1),{\sf PS}}\beta_{0,Q} +\hat{\gamma}_{qg}^{(0)}\hat{\gamma}_{gq}^{(1)} \Biggr) \nonumber\\ && +\frac{1}{\varepsilon} \Biggl( \frac{\hat{\tilde{\gamma}}_{qq}^{(2),{\sf PS}}}{3} +\hat{\gamma}_{qg}^{(0)}a_{gq,Q}^{(2)} -\frac{\hat{\gamma}_{qg}^{(0)}\gamma_{gq}^{(0)}\beta_{0,Q}\zeta_2}{4} \Biggr) +\frac{a_{qq,Q}^{(3), {\sf PS}}}{n_f} \Biggr]~. 
\label{Ahhhqq3PSQ} \end{eqnarray} The renormalized terms in the ${\overline{{\sf MS}}}$--scheme are given by \begin{eqnarray} A_{Qq}^{(3),{\sf PS}, \overline{{\sf MS}}}&=& \frac{\hat{\gamma}_{qg}^{(0)}\gamma_{gq}^{(0)}}{48} \Biggl\{ \gamma_{gg}^{(0)} -\gamma_{qq}^{(0)} +6\beta_0 +16\beta_{0,Q} \Biggr\} \ln^3 \Bigl(\frac{m^2}{\mu^2}\Bigr) + \frac{1}{8}\Biggl\{ -4\hat{\gamma}_{qq}^{(1),{\sf PS}} \Bigl( \beta_0 +\beta_{0,Q} \Bigr) \nonumber\\ && +\hat{\gamma}_{qg}^{(0)} \Bigl( \hat{\gamma}_{gq}^{(1)} -\gamma_{gq}^{(1)} \Bigr) -\gamma_{gq}^{(0)}\hat{\gamma}_{qg}^{(1)} \Biggr\} \ln^2 \Bigl(\frac{m^2}{\mu^2}\Bigr) + \frac{1}{16}\Biggl\{ 8\hat{\gamma}_{qq}^{(2),{\sf PS}} -8n_f\hat{\tilde{\gamma}}_{qq}^{(2),{\sf PS}} \nonumber\\ && -32a_{Qq}^{(2),{\sf PS}}(\beta_0+\beta_{0,Q}) +8\hat{\gamma}_{qg}^{(0)}a_{gq,Q}^{(2)} -8\gamma_{gq}^{(0)}a_{Qg}^{(2)} -\hat{\gamma}_{qg}^{(0)}\gamma_{gq}^{(0)}\zeta_2\ \Bigl( \gamma_{gg}^{(0)} -\gamma_{qq}^{(0)} \nonumber\\ && +6\beta_0 +8\beta_{0,Q} \Bigr) \Biggr\} \ln \Bigl(\frac{m^2}{\mu^2}\Bigr) +4(\beta_0+\beta_{0,Q})\overline{a}_{Qq}^{(2),{\sf PS}} +\gamma_{gq}^{(0)}\overline{a}_{Qg}^{(2)} -\hat{\gamma}_{qg}^{(0)}\overline{a}_{gq,Q}^{(2)} \nonumber\\ && +\frac{\gamma_{gq}^{(0)}\hat{\gamma}_{qg}^{(0)}\zeta_3}{48} \Bigl( \gamma_{gg}^{(0)} -\gamma_{qq}^{(0)} +6\beta_0 \Bigr) +\frac{\hat{\gamma}_{qg}^{(0)}\gamma_{gq}^{(1)}\zeta_2}{16} -\delta m_1^{(1)} \hat{\gamma}_{qg}^{(0)} \gamma_{gq}^{(0)} +\delta m_1^{(0)} \hat{\gamma}_{qq}^{(1),{\sf PS}} \nonumber \end{eqnarray} \begin{eqnarray} && +2 \delta m_1^{(-1)} a_{Qq}^{(2),{\sf PS}} +a_{Qq}^{(3),{\sf PS}}~. 
\label{AQq3PSMSren} \\ A_{qq,Q}^{(3),{\sf PS}, \overline{{\sf MS}}}&=&n_f\Biggl\{ \frac{\gamma_{gq}^{(0)}\hat{\gamma}_{qg}^{(0)}\beta_{0,Q}}{12} \ln^3 \Bigl(\frac{m^2}{\mu^2}\Bigr) +\frac{1}{8}\Bigl( 4\hat{\gamma}_{qq}^{(1), {\sf PS}}\beta_{0,Q} +\hat{\gamma}_{qg}^{(0)}\hat{\gamma}_{gq}^{(1)} \Bigr)\ln^2 \Bigl(\frac{m^2}{\mu^2}\Bigr) \nonumber\\ && +\frac{1}{4}\Bigl( 2\hat{\tilde{\gamma}}_{qq}^{(2), {\sf PS}} +\hat{\gamma}_{qg}^{(0)}\Bigl\{ 2a_{gq,Q}^{(2)} -\gamma_{gq}^{(0)}\beta_{0,Q}\zeta_2 \Bigr\} \Bigr)\ln \Bigl(\frac{m^2}{\mu^2}\Bigr) \nonumber\\ && -\hat{\gamma}_{qg}^{(0)}\overline{a}_{gq,Q}^{(2)} +\frac{\gamma_{gq}^{(0)}\hat{\gamma}_{qg}^{(0)} \beta_{0,Q}\zeta_3}{12} -\frac{\hat{\gamma}_{qq}^{(1), {\sf PS}}\beta_{0,Q}\zeta_2}{4} \Biggr\} +a_{qq,Q}^{(3), {\sf PS}}~. \label{Aqq3PSQMSren} \end{eqnarray} \subsubsection{$A_{Qg}$ and $A_{qg,Q}$} \label{SubSec-AQqg} The OME $A_{Qg}$ is the most complex expression. As in the ${\sf PS}$--case, there are two different contributions \begin{eqnarray} A_{Qg}&=& a_s A_{Qg}^{(1)} +a_s^2A_{Qg}^{(2)} +a_s^3A_{Qg}^{(3)} +O(a_s^4)~. \label{AQgpert}\\ A_{qg,Q}&=& a_s^3A_{qg,Q}^{(3)} +O(a_s^4)~. \label{AqgQpert} \end{eqnarray} In the {\sf MOM}--scheme the $1$-- and $2$--loop contributions obey the following relations \begin{eqnarray} A_{Qg}^{(1), \tiny{\mbox{MOM}}}&=& \hat{A}_{Qg}^{(1), \tiny{\mbox{MOM}}} +Z^{-1,(1)}_{qg}(n_f+1) -Z^{-1,(1)}_{qg}(n_f) ~, \\ A_{Qg}^{(2), \tiny{\mbox{MOM}}}&=& \hat{A}_{Qg}^{(2), \tiny{\mbox{MOM}}} +Z^{-1,(2)}_{qg}(n_f+1) -Z^{-1,(2)}_{qg}(n_f) +Z^{-1,(1)}_{qg}(n_f+1)\hat{A}_{gg,Q}^{(1), \tiny{\mbox{MOM}}} \nonumber\\ && +Z^{-1,(1)}_{qq}(n_f+1)\hat{A}_{Qg}^{(1), \tiny{\mbox{MOM}}} +\Bigl[ \hat{A}_{Qg}^{(1), \tiny{\mbox{MOM}}} +Z_{qg}^{-1,(1)}(n_f+1) \nonumber\\ && -Z_{qg}^{-1,(1)}(n_f) \Bigr]\Gamma^{-1,(1)}_{gg}(n_f)~. 
\label{RenAQg2MOM} \end{eqnarray} The unrenormalized terms are given by \begin{eqnarray} \hat{\hspace*{-1mm}\hat{A}}_{Qg}^{(1)}&=& \Bigl(\frac{\hat{m}^2}{\mu^2}\Bigr)^{\varepsilon/2} \frac{\hat{\gamma}_{qg}^{(0)}}{\varepsilon} \exp \Bigl(\sum_{i=2}^{\infty}\frac{\zeta_i}{i} \Bigl(\frac{\varepsilon}{2}\Bigr)^{i}\Bigr)~, \label{AhhhQg1} \\ \hat{\hspace*{-1mm}\hat{A}}_{Qg}^{(2)}&=& \Bigl(\frac{\hat{m}^2}{\mu^2}\Bigr)^{\varepsilon} \Biggl[ -\frac{\hat{\gamma}_{qg}^{(0)}}{2\varepsilon^2} \Bigl( \gamma_{gg}^{(0)} -\gamma_{qq}^{(0)} +2\beta_{0} +4\beta_{0,Q} \Bigr) +\frac{ \hat{\gamma}_{qg}^{(1)} -2\delta m_1^{(-1)} \hat{\gamma}_{qg}^{(0)}} {2\varepsilon} +a_{Qg}^{(2)} \nonumber\\ && -\delta m_1^{(0)} \hat{\gamma}_{qg}^{(0)} -\frac{\hat{\gamma}_{qg}^{(0)}\beta_{0,Q}\zeta_2}{2} +\varepsilon\Bigl( \overline{a}_{Qg}^{(2)} -\delta m_1^{(1)} \hat{\gamma}_{qg}^{(0)} -\frac{\hat{\gamma}_{qg}^{(0)}\beta_{0,Q}\zeta_2}{12} \Bigr) \Biggr] ~.\label{AhhhQg2} \end{eqnarray} Note that we have already made the one--particle reducible contributions to Eq.~(\ref{AhhhQg2}) explicit, which are given by the ${\sf LO}$--term multiplied with the $1$--loop gluon self--energy, cf. Eq.~(\ref{GluSelf1}). Furthermore, Eq.~(\ref{AhhhQg2}) already contains terms in the $O(\varepsilon^0)$ and $O(\varepsilon)$ expressions which result from mass renormalization. At this stage of the renormalization procedure they should not be present; however, we have included them here in order to have the same notation as in Refs.~\cite{Buza:1995ie,Buza:1996wv} at the $2$--loop level.
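The all--order $\varepsilon$--dependence of the $1$--loop term in Eq.~(\ref{AhhhQg1}) is carried by an exponential of $\zeta$--values. As an illustrative cross--check, the sketch below (a minimal Python example; the series routine, the truncation order, and the numeric stand--in for $\ln(\hat{m}^2/\mu^2)$ are choices of this illustration, not part of the calculation) expands the prefactor and confirms that the coefficient of the single pole is $1$ while the finite part is $\ln(\hat{m}^2/\mu^2)/2$, consistent with the structure of the renormalized result $A_{Qg}^{(1),\overline{\sf MS}}=\tfrac{1}{2}\hat{\gamma}_{qg}^{(0)}\ln(m^2/\mu^2)$ in Eq.~(\ref{AQg1MSren}).

```python
import math

def exp_series(g, order):
    """Taylor coefficients of exp(g(eps)) from those of g(eps),
    via the standard recurrence n*h_n = sum_{k=1}^{n} k*g_k*h_{n-k}."""
    h = [math.exp(g[0])] + [0.0] * order
    for n in range(1, order + 1):
        h[n] = sum(k * g[k] * h[n - k] for k in range(1, n + 1)) / n
    return h

# zeta_2 .. zeta_4; sufficient for a check through O(eps^3)
zetas = {2: math.pi**2 / 6, 3: 1.2020569031595943, 4: math.pi**4 / 90}

L = 1.3        # numeric stand-in for ln(mhat^2/mu^2); arbitrary for the check
order = 4
g = [0.0] * (order + 1)
g[1] = L / 2   # from (mhat^2/mu^2)^{eps/2} = exp(L*eps/2)
for i in (2, 3, 4):
    g[i] = zetas[i] / (i * 2**i)   # coefficient of eps^i in zeta_i/i (eps/2)^i

f = exp_series(g, order)

# Ahat_Qg^(1) = (gamma_qg^(0)/eps) * f(eps):
# f[0] = 1 multiplies the 1/eps pole, f[1] = L/2 is the finite part
print(f[0], f[1], f[2])
```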
The renormalized terms then become in the $\overline{\sf MS}$--scheme \begin{eqnarray} A_{Qg}^{(1), \overline{{\sf MS}}}&=& \frac{\hat{\gamma}_{qg}^{(0)}}{2} \ln \Bigl(\frac{m^2}{\mu^2}\Bigr) ~, \label{AQg1MSren} \\ A_{Qg}^{(2), \overline{{\sf MS}}}&=& -\frac{\hat{\gamma}_{qg}^{(0)}}{8} \Biggl[ \gamma_{gg}^{(0)} -\gamma_{qq}^{(0)} +2\beta_{0} +4\beta_{0,Q} \Biggr] \ln^2 \Bigl(\frac{m^2}{\mu^2}\Bigr) +\frac{\hat{\gamma}_{qg}^{(1)}}{2} \ln \Bigl(\frac{m^2}{\mu^2}\Bigr)\nonumber \end{eqnarray} \begin{eqnarray} && +a_{Qg}^{(2)} +\frac{\hat{\gamma}_{qg}^{(0)}\zeta_2}{8} \Bigl( \gamma_{gg}^{(0)} -\gamma_{qq}^{(0)} +2\beta_{0} \Bigr)~. \label{AQg2MSren} \end{eqnarray} The generic renormalization relation at the $3$--loop level is given by \begin{eqnarray} && A_{Qg}^{(3), \tiny{\mbox{MOM}}}+A_{qg,Q}^{(3), \tiny{\mbox{MOM}}} = \hat{A}_{Qg}^{(3), \tiny{\mbox{MOM}}} +\hat{A}_{qg,Q}^{(3), \tiny{\mbox{MOM}}} +Z^{-1,(3)}_{qg}(n_f+1) -Z^{-1,(3)}_{qg}(n_f) \nonumber\\ && \phantom{abc} +Z^{-1,(2)}_{qg}(n_f+1)\hat{A}_{gg,Q}^{(1), \tiny{\mbox{MOM}}} +Z^{-1,(1)}_{qg}(n_f+1)\hat{A}_{gg,Q}^{(2), \tiny{\mbox{MOM}}} +Z^{-1,(2)}_{qq}(n_f+1)\hat{A}_{Qg}^{(1), \tiny{\mbox{MOM}}} \nonumber\\ && \phantom{abc} +Z^{-1,(1)}_{qq}(n_f+1)\hat{A}_{Qg}^{(2), \tiny{\mbox{MOM}}} +\Bigl[ \hat{A}_{Qg}^{(1), \tiny{\mbox{MOM}}} +Z^{-1,(1)}_{qg}(n_f+1) \nonumber\\ && \phantom{abc} -Z^{-1,(1)}_{qg}(n_f) \Bigr]\Gamma^{-1,(2)}_{gg}(n_f) +\Bigl[ \hat{A}_{Qg}^{(2), \tiny{\mbox{MOM}}} +Z^{-1,(2)}_{qg}(n_f+1) -Z^{-1,(2)}_{qg}(n_f) \nonumber\\ && \phantom{abc} +Z^{-1,(1)}_{qq}(n_f+1)A_{Qg}^{(1), \tiny{\mbox{MOM}}} +Z^{-1,(1)}_{qg}(n_f+1)A_{gg,Q}^{(1), \tiny{\mbox{MOM}}} \Bigr]\Gamma^{-1,(1)}_{gg}(n_f) \nonumber\\ && \phantom{abc} +\Bigl[ \hat{A}_{Qq}^{(2), {\sf PS}, \tiny{\mbox{MOM}}} +Z^{-1,(2), {\sf PS}}_{qq}(n_f+1) -Z^{-1,(2), {\sf PS}}_{qq}(n_f) \Bigr]\Gamma^{-1,(1)}_{qg}(n_f) \nonumber\\ && \phantom{abc} +\Bigl[ \hat{A}_{qq,Q}^{(2), {\sf NS}, \tiny{\mbox{MOM}}} +Z^{-1,(2), {\sf NS}}_{qq}(n_f+1) -Z^{-1,(2), {\sf 
NS}}_{qq}(n_f) \Bigr]\Gamma^{-1,(1)}_{qg}(n_f)~. \end{eqnarray} Similar to the ${\sf PS}$--case, the different contributions can be separated and one obtains the following unrenormalized results \begin{eqnarray} \hat{\hspace*{-1mm}\hat{A}}_{Qg}^{(3)}&=& \Bigl(\frac{\hat{m}^2}{\mu^2}\Bigr)^{3\varepsilon/2} \Biggl[ \frac{\hat{\gamma}_{qg}^{(0)}}{6\varepsilon^3} \Biggl( (n_f+1)\gamma_{gq}^{(0)}\hat{\gamma}_{qg}^{(0)} +\gamma_{qq}^{(0)} \Bigl[ \gamma_{qq}^{(0)} -2\gamma_{gg}^{(0)} -6\beta_0 -8\beta_{0,Q} \Bigr] +8\beta_0^2 \nonumber\\ && +28\beta_{0,Q}\beta_0 +24\beta_{0,Q}^2 +\gamma_{gg}^{(0)} \Bigl[ \gamma_{gg}^{(0)} +6\beta_0 +14\beta_{0,Q} \Bigr] \Biggr) +\frac{1}{6\varepsilon^2} \Biggl( \hat{\gamma}_{qg}^{(1)} \Bigl[ 2\gamma_{qq}^{(0)} -2\gamma_{gg}^{(0)} \nonumber\\ && -8\beta_0 -10\beta_{0,Q} \Bigr] +\hat{\gamma}_{qg}^{(0)} \Bigl[ \hat{\gamma}_{qq}^{(1), {\sf PS}}\{1-2n_f\} +\gamma_{qq}^{(1), {\sf NS}} +\hat{\gamma}_{qq}^{(1), {\sf NS}} +2\hat{\gamma}_{gg}^{(1)} -\gamma_{gg}^{(1)} -2\beta_1 \nonumber\\ && -2\beta_{1,Q} \Bigr] + 6 \delta m_1^{(-1)} \hat{\gamma}_{qg}^{(0)} \Bigl[ \gamma_{gg}^{(0)} -\gamma_{qq}^{(0)} +3\beta_0 +5\beta_{0,Q} \Bigr] \Biggr) +\frac{1}{\varepsilon} \Biggl( \frac{\hat{\gamma}_{qg}^{(2)}}{3} -n_f \frac{\hat{\tilde{\gamma}}_{qg}^{(2)}}{3} \nonumber\\ && +\hat{\gamma}_{qg}^{(0)}\Bigl[ a_{gg,Q}^{(2)} -n_fa_{Qq}^{(2),{\sf PS}} \Bigr] +a_{Qg}^{(2)} \Bigl[ \gamma_{qq}^{(0)} -\gamma_{gg}^{(0)} -4\beta_0 -4\beta_{0,Q} \Bigr] +\frac{\hat{\gamma}_{qg}^{(0)}\zeta_2}{16} \Bigl[ \gamma_{gg}^{(0)} \Bigl\{ 2\gamma_{qq}^{(0)} \nonumber\\ && -\gamma_{gg}^{(0)} -6\beta_0 +2\beta_{0,Q} \Bigr\} -(n_f+1)\gamma_{gq}^{(0)}\hat{\gamma}_{qg}^{(0)} +\gamma_{qq}^{(0)} \Bigl\{ -\gamma_{qq}^{(0)} +6\beta_0 \Bigr\} -8\beta_0^2 \nonumber\\ && +4\beta_{0,Q}\beta_0 +24\beta_{0,Q}^2 \Bigr] + \frac{\delta m_1^{(-1)}}{2} \Bigl[ -2\hat{\gamma}_{qg}^{(1)} +3\delta m_1^{(-1)}\hat{\gamma}_{qg}^{(0)} +2\delta m_1^{(0)}\hat{\gamma}_{qg}^{(0)} \Bigr] \nonumber\\ && + \delta 
m_1^{(0)}\hat{\gamma}_{qg}^{(0)} \Bigl[ \gamma_{gg}^{(0)} -\gamma_{qq}^{(0)} +2\beta_0 +4\beta_{0,Q} \Bigr] -\delta m_2^{(-1)}\hat{\gamma}_{qg}^{(0)} \Biggr) +a_{Qg}^{(3)} \Biggr]~. \label{AhhhQg3} \\ \hat{\hspace*{-1mm}\hat{A}}_{qg,Q}^{(3)}&=& n_f\Bigl(\frac{\hat{m}^2}{\mu^2}\Bigr)^{3\varepsilon/2} \Biggl[ \frac{\hat{\gamma}_{qg}^{(0)}}{6\varepsilon^3} \Biggl( \gamma_{gq}^{(0)}\hat{\gamma}_{qg}^{(0)} +2\beta_{0,Q}\Bigl[ \gamma_{gg}^{(0)} -\gamma_{qq}^{(0)} +2\beta_0 \Bigr] \Biggr) +\frac{1}{\varepsilon^2} \Biggl( \frac{\hat{\gamma}_{qg}^{(0)}}{6} \Bigl[ 2\hat{\gamma}_{gg}^{(1)} \nonumber\\ && +\hat{\gamma}_{qq}^{(1), {\sf PS}} -2\hat{\gamma}_{qq}^{(1), {\sf NS}} +4\beta_{1,Q} \Bigr] +\frac{\hat{\gamma}_{qg}^{(1)}\beta_{0,Q}}{3} \Biggr) +\frac{1}{\varepsilon} \Biggl( \frac{\hat{\tilde{\gamma}}_{qg}^{(2)}}{3} +\hat{\gamma}_{qg}^{(0)}\Bigl[ a_{gg,Q}^{(2)} -a_{qq,Q}^{(2),{\sf NS}} \nonumber\\ && +\beta_{1,Q}^{(1)} \Bigr] -\frac{\hat{\gamma}_{qg}^{(0)}\zeta_2}{16}\Bigl[ \gamma_{gq}^{(0)}\hat{\gamma}_{qg}^{(0)} +2\beta_{0,Q}\Bigl\{ \gamma_{gg}^{(0)} -\gamma_{qq}^{(0)} +2\beta_0 \Bigr\} \Bigr] \Biggr) +\frac{a_{qg,Q}^{(3)}}{n_f} \Biggr]~.\label{Ahhhqg3Q} \end{eqnarray} The renormalized expressions are \begin{eqnarray} A_{Qg}^{(3), \overline{{\sf MS}}}&=& \frac{\hat{\gamma}_{qg}^{(0)}}{48} \Biggl\{ (n_f+1)\gamma_{gq}^{(0)}\hat{\gamma}_{qg}^{(0)} +\gamma_{gg}^{(0)}\Bigl( \gamma_{gg}^{(0)} -2\gamma_{qq}^{(0)} +6\beta_0 +14\beta_{0,Q} \Bigr) +\gamma_{qq}^{(0)}\Bigl( \gamma_{qq}^{(0)} \nonumber\\ && -6\beta_0 -8\beta_{0,Q} \Bigr) +8\beta_0^2 +28\beta_{0,Q}\beta_0 +24\beta_{0,Q}^2 \Biggr\} \ln^3 \Bigl(\frac{m^2}{\mu^2}\Bigr) +\frac{1}{8}\Biggl\{ \hat{\gamma}_{qg}^{(1)} \Bigl( \gamma_{qq}^{(0)} -\gamma_{gg}^{(0)} \nonumber\\ && -4\beta_0 -6\beta_{0,Q} \Bigr) +\hat{\gamma}_{qg}^{(0)} \Bigl( \hat{\gamma}_{gg}^{(1)} -\gamma_{gg}^{(1)} +(1-n_f) \hat{\gamma}_{qq}^{(1), {\sf PS}} +\gamma_{qq}^{(1), {\sf NS}} +\hat{\gamma}_{qq}^{(1), {\sf NS}} -2\beta_1 \nonumber\\ && -2\beta_{1,Q} 
\Bigr) \Biggr\} \ln^2 \Bigl(\frac{m^2}{\mu^2}\Bigr) +\Biggl\{ \frac{\hat{\gamma}_{qg}^{(2)}}{2} -n_f\frac{\hat{\tilde{\gamma}}_{qg}^{(2)}}{2} +\frac{a_{Qg}^{(2)}}{2} \Bigl( \gamma_{qq}^{(0)} -\gamma_{gg}^{(0)} -4\beta_0 -4\beta_{0,Q} \Bigr) \nonumber\\ && +\frac{\hat{\gamma}_{qg}^{(0)}}{2} \Bigl( a_{gg,Q}^{(2)} -n_fa_{Qq}^{(2), {\sf PS}} \Bigr) +\frac{\hat{\gamma}_{qg}^{(0)}\zeta_2}{16} \Bigl( -(n_f+1)\gamma_{gq}^{(0)} \hat{\gamma}_{qg}^{(0)} +\gamma_{gg}^{(0)}\Bigl[ 2\gamma_{qq}^{(0)} -\gamma_{gg}^{(0)} -6\beta_0 \nonumber\\ && -6\beta_{0,Q} \Bigr] -4\beta_0[2\beta_0+3\beta_{0,Q}] +\gamma_{qq}^{(0)}\Bigl[ -\gamma_{qq}^{(0)} +6\beta_0 +4\beta_{0,Q} \Bigr] \Bigr) \Biggr\} \ln \Bigl(\frac{m^2}{\mu^2}\Bigr) +\overline{a}_{Qg}^{(2)} \Bigl( \gamma_{gg}^{(0)} \nonumber\\ && -\gamma_{qq}^{(0)} +4\beta_0 +4\beta_{0,Q} \Bigr) +\hat{\gamma}_{qg}^{(0)}\Bigl( n_f\overline{a}_{Qq}^{(2), {\sf PS}} -\overline{a}_{gg,Q}^{(2)} \Bigr) +\frac{\hat{\gamma}_{qg}^{(0)}\zeta_3}{48} \Bigl( (n_f+1)\gamma_{gq}^{(0)} \hat{\gamma}_{qg}^{(0)} \nonumber\\ && +\gamma_{gg}^{(0)}\Bigl[ \gamma_{gg}^{(0)} -2\gamma_{qq}^{(0)} +6\beta_0 -2\beta_{0,Q} \Bigr] +\gamma_{qq}^{(0)}\Bigl[ \gamma_{qq}^{(0)} -6\beta_0 \Bigr] +8\beta_0^2 -4\beta_0\beta_{0,Q} \nonumber\\ && -24\beta_{0,Q}^2 \Bigr) +\frac{\hat{\gamma}_{qg}^{(1)}\beta_{0,Q}\zeta_2}{8} +\frac{\hat{\gamma}_{qg}^{(0)}\zeta_2}{16} \Bigl( \gamma_{gg}^{(1)} -\hat{\gamma}_{qq}^{(1), {\sf NS}} -\gamma_{qq}^{(1), {\sf NS}} -\hat{\gamma}_{qq}^{(1),{\sf PS}} +2\beta_1 \nonumber\\ && +2\beta_{1,Q} \Bigr) +\frac{\delta m_1^{(-1)}}{8} \Bigl( 16 a_{Qg}^{(2)} +\hat{\gamma}_{qg}^{(0)}\Bigl[ -24 \delta m_1^{(0)} -8 \delta m_1^{(1)} -\zeta_2\beta_0 -9\zeta_2\beta_{0,Q} \Bigr] \Bigr) \nonumber\\ && +\frac{\delta m_1^{(0)}}{2} \Bigl( 2\hat{\gamma}_{qg}^{(1)} -\delta m_1^{(0)} \hat{\gamma}_{qg}^{(0)} \Bigr) +\delta m_1^{(1)}\hat{\gamma}_{qg}^{(0)} \Bigl( \gamma_{qq}^{(0)} -\gamma_{gg}^{(0)} -2\beta_0 -4 \beta_{0,Q} \Bigr) \nonumber\\ && +\delta 
m_2^{(0)}\hat{\gamma}_{qg}^{(0)} +a_{Qg}^{(3)}~. \label{AQg3MSren} \\ A_{qg,Q}^{(3), \overline{{\sf MS}}}&=&n_f\Biggl[ \frac{\hat{\gamma}_{qg}^{(0)}}{48}\Biggl\{ \gamma_{gq}^{(0)}\hat{\gamma}_{qg}^{(0)} +2\beta_{0,Q}\Bigl( \gamma_{gg}^{(0)} -\gamma_{qq}^{(0)} +2\beta_0 \Bigr) \Biggr\} \ln^3 \Bigl(\frac{m^2}{\mu^2}\Bigr) +\frac{1}{8}\Biggl\{ 2\hat{\gamma}_{qg}^{(1)}\beta_{0,Q} \nonumber\\ && +\hat{\gamma}_{qg}^{(0)} \Bigl( \hat{\gamma}_{qq}^{(1), {\sf PS}} -\hat{\gamma}_{qq}^{(1), {\sf NS}} +\hat{\gamma}_{gg}^{(1)} +2\beta_{1,Q} \Bigr) \Biggr\} \ln^2 \Bigl(\frac{m^2}{\mu^2}\Bigr) +\frac{1}{2}\Biggl\{ \hat{\tilde{\gamma}}_{qg}^{(2)} +\hat{\gamma}_{qg}^{(0)} \Bigl( a_{gg,Q}^{(2)} \nonumber\\ && -a_{qq,Q}^{(2),{\sf NS}} +\beta_{1,Q}^{(1)} \Bigr) -\frac{\hat{\gamma}_{qg}^{(0)}}{8}\zeta_2 \Bigl( \gamma_{gq}^{(0)}\hat{\gamma}_{qg}^{(0)} +2\beta_{0,Q}\Bigl[ \gamma_{gg}^{(0)} -\gamma_{qq}^{(0)} +2\beta_0 \Bigr] \Bigr) \Biggr\} \ln \Bigl(\frac{m^2}{\mu^2}\Bigr) \nonumber\\ && +\hat{\gamma}_{qg}^{(0)}\Bigl( \overline{a}_{qq,Q}^{(2),{\sf NS}} -\overline{a}_{gg,Q}^{(2)} -\beta_{1,Q}^{(2)} \Bigr) +\frac{\hat{\gamma}_{qg}^{(0)}}{48}\zeta_3\Bigl( \gamma_{gq}^{(0)}\hat{\gamma}_{qg}^{(0)} +2\beta_{0,Q}\Bigl[ \gamma_{gg}^{(0)} -\gamma_{qq}^{(0)} +2\beta_0 \Bigr] \Bigr) \nonumber\\ && -\frac{\zeta_2}{16}\Bigl( \hat{\gamma}_{qg}^{(0)}\hat{\gamma}_{qq}^{(1), {\sf PS}} +2\hat{\gamma}_{qg}^{(1)}\beta_{0,Q} \Bigr) +\frac{a_{qg,Q}^{(3)}}{n_f} \Biggr]~. \label{Aqg3QMSren} \end{eqnarray} \subsubsection{$A_{gq,Q}$} \label{SubSec-AgqQ} The $gq$--contributions start at $O(a_s^2)$, \begin{eqnarray} A_{gq,Q}&=& a_s^2A_{gq,Q}^{(2)} +a_s^3A_{gq,Q}^{(3)} +O(a_s^4)~. 
\label{AgqQpert} \end{eqnarray} The renormalization formulas in the {\sf MOM}--scheme read \begin{eqnarray} A_{gq,Q}^{(2),\tiny{\mbox{MOM}}}&=& \hat{A}_{gq,Q}^{(2),\tiny{\mbox{MOM}}} +Z_{gq}^{-1,(2)}(n_f+1) -Z_{gq}^{-1,(2)}(n_f) \nonumber\\ && +\Bigl( \hat{A}_{gg,Q}^{(1),\tiny{\mbox{MOM}}} +Z_{gg}^{-1,(1)}(n_f+1) -Z_{gg}^{-1,(1)}(n_f) \Bigr)\Gamma_{gq}^{-1,(1)}~, \\ A_{gq,Q}^{(3),\tiny{\mbox{MOM}}}&=& \hat{A}_{gq,Q}^{(3),\tiny{\mbox{MOM}}} +Z^{-1,(3)}_{gq}(n_f+1) -Z^{-1,(3)}_{gq}(n_f) +Z^{-1,(1)}_{gg}(n_f+1)\hat{A}_{gq,Q}^{(2),\tiny{\mbox{MOM}}} \nonumber\\ && +Z^{-1,(1)}_{gq}(n_f+1)\hat{A}_{qq}^{(2),\tiny{\mbox{MOM}}} +\Bigl[ \hat{A}_{gg,Q}^{(1),\tiny{\mbox{MOM}}} +Z^{-1,(1)}_{gg}(n_f+1) \nonumber\\ && -Z^{-1,(1)}_{gg}(n_f) \Bigr] \Gamma^{-1,(2)}_{gq}(n_f) +\Bigl[ \hat{A}_{gq,Q}^{(2),\tiny{\mbox{MOM}}} +Z^{-1,(2)}_{gq}(n_f+1) \nonumber\\ && -Z^{-1,(2)}_{gq}(n_f) \Bigr] \Gamma^{-1,(1)}_{qq}(n_f) +\Bigl[ \hat{A}_{gg,Q}^{(2),\tiny{\mbox{MOM}}} +Z^{-1,(2)}_{gg}(n_f+1) \nonumber\\ && -Z^{-1,(2)}_{gg}(n_f) +Z^{-1,(1)}_{gg}(n_f+1)\hat{A}_{gg,Q}^{(1),\tiny{\mbox{MOM}}} \nonumber\\ && +Z^{-1,(1)}_{gq}(n_f+1)\hat{A}_{Qg}^{(1),\tiny{\mbox{MOM}}} \Bigr] \Gamma^{-1,(1)}_{gq}(n_f) \label{AgqQRen1}~, \end{eqnarray} while the unrenormalized expressions are \begin{eqnarray} \hat{\hspace*{-1mm}\hat{A}}_{gq,Q}^{(2)}&=&\Bigl(\frac{\hat{m}^2}{\mu^2}\Bigr)^{\varepsilon}\Biggl[ \frac{2\beta_{0,Q}}{\varepsilon^2}\gamma_{gq}^{(0)} +\frac{\hat{\gamma}_{gq}^{(1)}}{2\varepsilon} +a_{gq,Q}^{(2)} +\overline{a}_{gq,Q}^{(2)}\varepsilon \Biggr]~, \label{Ahhhgq2Q} \\ \hat{\hspace*{-1mm}\hat{A}}_{gq,Q}^{(3)}&=& \Bigl(\frac{\hat{m}^2}{\mu^2}\Bigr)^{3\varepsilon/2}\Biggl\{ -\frac{\gamma_{gq}^{(0)}}{3\varepsilon^3} \Biggl( \gamma_{gq}^{(0)}\hat{\gamma}_{qg}^{(0)} +\Bigl[ \gamma_{qq}^{(0)} -\gamma_{gg}^{(0)} +10\beta_0 +24\beta_{0,Q} \Bigr]\beta_{0,Q} \Biggr) \nonumber\\ && +\frac{1}{\varepsilon^2} \Biggl( \gamma_{gq}^{(1)}\beta_{0,Q} +\frac{\hat{\gamma}_{gq}^{(1)}}{3}\Bigl[ \gamma_{gg}^{(0)} 
-\gamma_{qq}^{(0)} -4\beta_0 -6\beta_{0,Q} \Bigr] +\frac{\gamma_{gq}^{(0)}}{3}\Bigl[ \hat{\gamma}_{qq}^{(1), {\sf NS}} +\hat{\gamma}_{qq}^{(1), {\sf PS}} -\hat{\gamma}_{gg}^{(1)} \nonumber\\ && +2\beta_{1,Q} \Bigr] -4\delta m_1^{(-1)}\beta_{0,Q}\gamma_{gq}^{(0)} \Biggr) +\frac{1}{\varepsilon} \Biggl( \frac{\hat{\gamma}_{gq}^{(2)}}{3} +a_{gq,Q}^{(2)}\Bigl[ \gamma_{gg}^{(0)} -\gamma_{qq}^{(0)} -6\beta_{0,Q} -4\beta_0 \Bigr] \nonumber\\ && +\gamma_{gq}^{(0)}\Bigl[ a_{qq,Q}^{(2),{\sf NS}} +a_{Qq}^{(2),{\sf PS}} -a_{gg,Q}^{(2)} \Bigr] +\gamma_{gq}^{(0)}\beta_{1,Q}^{(1)} +\frac{\gamma_{gq}^{(0)}\zeta_2}{8}\Bigl[ \gamma_{gq}^{(0)}\hat{\gamma}_{qg}^{(0)} +\beta_{0,Q} ( \gamma_{qq}^{(0)} \nonumber\\&& -\gamma_{gg}^{(0)} +10\beta_0 ) \Bigr] -\delta m_1^{(-1)}\hat{\gamma}_{gq}^{(1)} -4\delta m_1^{(0)}\beta_{0,Q}\gamma_{gq}^{(0)} \Biggr) +a_{gq,Q}^{(3)} \Biggr\}~. \label{AhhhgqQ3} \end{eqnarray} The contributions to the renormalized operator matrix element are given by \begin{eqnarray} A_{gq,Q}^{(2), \overline{{\sf MS}}}&=&\frac{\beta_{0,Q}\gamma_{gq}^{(0)}}{2} \ln^2 \Bigl(\frac{m^2}{\mu^2}\Bigr) +\frac{\hat{\gamma}_{gq}^{(1)}}{2} \ln \Bigl(\frac{m^2}{\mu^2}\Bigr) +a_{gq,Q}^{(2)}-\frac{\beta_{0,Q}\gamma_{gq}^{(0)}}{2}\zeta_2~, \label{Agq2QMSren} \\ A_{gq,Q}^{(3), \overline{{\sf MS}}}&=& -\frac{\gamma_{gq}^{(0)}}{24} \Biggl\{ \gamma_{gq}^{(0)}\hat{\gamma}_{qg}^{(0)} +\Bigl( \gamma_{qq}^{(0)} -\gamma_{gg}^{(0)} +10\beta_0 +24\beta_{0,Q} \Bigr)\beta_{0,Q} \Biggr\} \ln^3 \Bigl(\frac{m^2}{\mu^2}\Bigr) \nonumber\\ && +\frac{1}{8}\Biggl\{ 6\gamma_{gq}^{(1)}\beta_{0,Q} +\hat{\gamma}_{gq}^{(1)}\Bigl( \gamma_{gg}^{(0)} -\gamma_{qq}^{(0)} -4\beta_0 -6\beta_{0,Q} \Bigr) +\gamma_{gq}^{(0)}\Bigl( \hat{\gamma}_{qq}^{(1), {\sf NS}} +\hat{\gamma}_{qq}^{(1), {\sf PS}} \nonumber\\ && -\hat{\gamma}_{gg}^{(1)} +2\beta_{1,Q} \Bigr) \Biggr\} \ln^2 \Bigl(\frac{m^2}{\mu^2}\Bigr) +\frac{1}{8}\Biggl\{ 4\hat{\gamma}_{gq}^{(2)} + 4a_{gq,Q}^{(2)} \Bigl( \gamma_{gg}^{(0)} -\gamma_{qq}^{(0)} -4\beta_0 
\nonumber\\ && -6\beta_{0,Q} \Bigr) + 4\gamma_{gq}^{(0)} \Bigl( a_{qq,Q}^{(2),{\sf NS}} +a_{Qq}^{(2),{\sf PS}} -a_{gg,Q}^{(2)} +\beta_{1,Q}^{(1)} \Bigr) + \gamma_{gq}^{(0)}\zeta_2 \Bigl( \gamma_{gq}^{(0)}\hat{\gamma}_{qg}^{(0)} +\Bigl[ \gamma_{qq}^{(0)}\nonumber \end{eqnarray} \begin{eqnarray} && -\gamma_{gg}^{(0)} +12\beta_{0,Q} +10\beta_0 \Bigr]\beta_{0,Q} \Bigr) \Biggr\} \ln \Bigl(\frac{m^2}{\mu^2}\Bigr) + \overline{a}_{gq,Q}^{(2)} \Bigl( \gamma_{qq}^{(0)} -\gamma_{gg}^{(0)} +4\beta_0 +6\beta_{0,Q} \Bigr) \nonumber\\ && + \gamma_{gq}^{(0)} \Bigl( \overline{a}_{gg,Q}^{(2)} -\overline{a}_{Qq}^{(2),{\sf PS}} -\overline{a}_{qq,Q}^{(2),{\sf NS}} \Bigr) -\gamma_{gq}^{(0)}\beta_{1,Q}^{(2)} -\frac{\gamma_{gq}^{(0)}\zeta_3}{24} \Bigl( \gamma_{gq}^{(0)}\hat{\gamma}_{qg}^{(0)} +\Bigl[ \gamma_{qq}^{(0)} -\gamma_{gg}^{(0)} \nonumber\\ && +10\beta_0 \Bigr]\beta_{0,Q} \Bigr) -\frac{3\gamma_{gq}^{(1)}\beta_{0,Q}\zeta_2}{8} +2 \delta m_1^{(-1)} a_{gq,Q}^{(2)} +\delta m_1^{(0)} \hat{\gamma}_{gq}^{(1)} +4 \delta m_1^{(1)} \beta_{0,Q} \gamma_{gq}^{(0)} +a_{gq,Q}^{(3)}~. \nonumber \\ \label{Agq3QMSren} \end{eqnarray} \subsubsection{$A_{gg,Q}$} \label{SubSec-AggQ} The $gg$--contributions start at $O(a_s^0)$, \begin{eqnarray} A_{gg,Q}&=&1+ a_sA_{gg,Q}^{(1)} +a_s^2A_{gg,Q}^{(2)} +a_s^3A_{gg,Q}^{(3)} +O(a_s^4)~. 
\label{AggQpert} \end{eqnarray} The corresponding renormalization formulas read in the {\sf MOM}--scheme \begin{eqnarray} A_{gg,Q}^{(1), \tiny{\mbox{MOM}}}&=& \hat{A}_{gg,Q}^{(1), \tiny{\mbox{MOM}}} +Z^{-1,(1)}_{gg}(n_f+1) -Z^{-1,(1)}_{gg}(n_f) ~, \label{AggQ1ren1} \\ A_{gg,Q}^{(2), \tiny{\mbox{MOM}}}&=& \hat{A}_{gg,Q}^{(2), \tiny{\mbox{MOM}}} +Z^{-1,(2)}_{gg}(n_f+1) -Z^{-1,(2)}_{gg}(n_f) \nonumber\\ && +Z^{-1,(1)}_{gg}(n_f+1)\hat{A}_{gg,Q}^{(1), \tiny{\mbox{MOM}}} +Z^{-1,(1)}_{gq}(n_f+1)\hat{A}_{Qg}^{(1), \tiny{\mbox{MOM}}} \nonumber\\ && +\Bigl[ \hat{A}_{gg,Q}^{(1), \tiny{\mbox{MOM}}} +Z_{gg}^{-1,(1)}(n_f+1) -Z_{gg}^{-1,(1)}(n_f) \Bigr]\Gamma^{-1,(1)}_{gg}(n_f) ~, \label{AggQ1ren2} \\ A_{gg,Q}^{(3), \tiny{\mbox{MOM}}}&=& \hat{A}_{gg,Q}^{(3), \tiny{\mbox{MOM}}} +Z^{-1,(3)}_{gg}(n_f+1) -Z^{-1,(3)}_{gg}(n_f) +Z^{-1,(2)}_{gg}(n_f+1)\hat{A}_{gg,Q}^{(1), \tiny{\mbox{MOM}}} \nonumber\\ && +Z^{-1,(1)}_{gg}(n_f+1)\hat{A}_{gg,Q}^{(2), \tiny{\mbox{MOM}}} +Z^{-1,(2)}_{gq}(n_f+1)\hat{A}_{Qg}^{(1), \tiny{\mbox{MOM}}} \nonumber\\ && +Z^{-1,(1)}_{gq}(n_f+1)\hat{A}_{Qg}^{(2), \tiny{\mbox{MOM}}} +\Bigl[ \hat{A}_{gg,Q}^{(1), \tiny{\mbox{MOM}}} +Z^{-1,(1)}_{gg}(n_f+1) \nonumber\\ && -Z^{-1,(1)}_{gg}(n_f) \Bigr]\Gamma^{-1,(2)}_{gg}(n_f) +\Bigl[ \hat{A}_{gg,Q}^{(2), \tiny{\mbox{MOM}}} +Z^{-1,(2)}_{gg}(n_f+1) \nonumber\\ && -Z^{-1,(2)}_{gg}(n_f) +Z^{-1,(1)}_{gq}(n_f+1)A_{Qg}^{(1), \tiny{\mbox{MOM}}} \nonumber\\ && +Z^{-1,(1)}_{gg}(n_f+1)A_{gg,Q}^{(1), \tiny{\mbox{MOM}}} \Bigr]\Gamma^{-1,(1)}_{gg}(n_f) \nonumber\\ && +\Bigl[ \hat{A}_{gq,Q}^{(2), \tiny{\mbox{MOM}}} +Z^{-1,(2)}_{gq}(n_f+1) -Z^{-1,(2)}_{gq}(n_f) \Bigr]\Gamma^{-1,(1)}_{qg}(n_f) ~.\label{AggQ1ren3} \end{eqnarray} The general structure of the unrenormalized $1$--loop result is then given by \begin{eqnarray} \hat{\hspace*{-1mm}\hat{A}}_{gg,Q}^{(1)}&=& \Bigl(\frac{\hat{m}^2}{\mu^2}\Bigr)^{\varepsilon/2}\Biggl( \frac{\hat{\gamma}_{gg}^{(0)}}{\varepsilon} +a_{gg,Q}^{(1)} +\varepsilon\overline{a}_{gg,Q}^{(1)} 
+\varepsilon^2\overline{\overline{a}}_{gg,Q}^{(1)} \Biggr) ~. \label{AggQ1unren1} \end{eqnarray} One obtains \begin{eqnarray} \hat{\hspace*{-1mm}\hat{A}}_{gg,Q}^{(1)}&=& \Bigl(\frac{\hat{m}^2}{\mu^2}\Bigr)^{\varepsilon/2} \Bigl(-\frac{2\beta_{0,Q}}{\varepsilon}\Bigr) \exp \Bigl(\sum_{i=2}^{\infty}\frac{\zeta_i}{i} \Bigl(\frac{\varepsilon}{2}\Bigr)^{i}\Bigr) ~. \label{AggQ1unren2} \end{eqnarray} Using Eq.~(\ref{AggQ1unren2}), the $2$--loop term is given by \begin{eqnarray} \hat{\hspace*{-1mm}\hat{A}}_{gg,Q}^{(2)}&=& \Bigl(\frac{\hat{m}^2}{\mu^2}\Bigr)^{\varepsilon} \Biggl[ \frac{1}{2\varepsilon^2} \Bigl\{ \gamma_{gq}^{(0)}\hat{\gamma}_{qg}^{(0)} +2\beta_{0,Q} \Bigl( \gamma_{gg}^{(0)} +2\beta_0 +4\beta_{0,Q} \Bigr) \Bigr\} +\frac{\hat{\gamma}_{gg}^{(1)} +4\delta m_1^{(-1)}\beta_{0,Q}}{2\varepsilon} \nonumber\\ && +a_{gg,Q}^{(2)} +2\delta m_1^{(0)}\beta_{0,Q} +\beta_{0,Q}^2\zeta_2 +\varepsilon\Bigl[\overline{a}_{gg,Q}^{(2)} +2\delta m_1^{(1)}\beta_{0,Q} +\frac{\beta_{0,Q}^2\zeta_3}{6} \Bigr] \Biggr]~. \label{AhhhggQ2} \end{eqnarray} Again, we have made explicit one--particle reducible contributions and terms stemming from mass renormalization in order to refer to the notation of Refs.~\cite{Buza:1995ie,Buza:1996wv}, cf. the discussion below (\ref{AhhhQg2}). 
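The same expansion can be applied to Eq.~(\ref{AggQ1unren2}): the finite part of $-2\beta_{0,Q}/\varepsilon$ times the exponential prefactor is $-\beta_{0,Q}\ln(m^2/\mu^2)$, which matches the renormalized $1$--loop result in Eq.~(\ref{AggQ1MSren}). The short sketch below illustrates this numerically; the values $T_F=1/2$ and $\beta_{0,Q}=-\tfrac{4}{3}T_F$, as well as the numeric stand--in for the logarithm, are assumptions of this example only.

```python
import math

# Assumed conventions for this sketch: T_F = 1/2, beta_{0,Q} = -(4/3) T_F
TF = 0.5
beta0Q = -4.0 / 3.0 * TF
zeta2 = math.pi**2 / 6

L = 0.7   # numeric stand-in for ln(mhat^2/mu^2)

# (mhat^2/mu^2)^{eps/2} * exp(zeta2/2 (eps/2)^2 + ...) through O(eps^2):
# 1 + (L/2) eps + (L^2/8 + zeta2/8) eps^2 + ...
c0, c1, c2 = 1.0, L / 2, L**2 / 8 + zeta2 / 8

# Ahat_gg,Q^(1) = (-2 beta0Q/eps) * (c0 + c1*eps + c2*eps^2 + ...)
pole = -2 * beta0Q * c0     # single pole, removed by operator renormalization
finite = -2 * beta0Q * c1   # eps^0 part, equal to -beta0Q * L

print(pole, finite, -beta0Q * L)
```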
The $3$--loop contribution becomes \begin{eqnarray} \hat{\hspace*{-1mm}\hat{A}}_{gg,Q}^{(3)}&=& \Bigl(\frac{\hat{m}^2}{\mu^2}\Bigr)^{3\varepsilon/2} \Biggl[ \frac{1}{\varepsilon^3} \Biggl( -\frac{\gamma_{gq}^{(0)}\hat{\gamma}_{qg}^{(0)}}{6} \Bigl[ \gamma_{gg}^{(0)} -\gamma_{qq}^{(0)} +6\beta_0 +4n_f\beta_{0,Q} +10\beta_{0,Q} \Bigr] \nonumber\\ && -\frac{2\gamma_{gg}^{(0)}\beta_{0,Q}}{3} \Bigl[ 2\beta_0 +7\beta_{0,Q} \Bigr] -\frac{4\beta_{0,Q}}{3} \Bigl[ 2\beta_0^2 +7\beta_{0,Q}\beta_0 +6\beta_{0,Q}^2 \Bigr] \Biggr) \nonumber\\ && +\frac{1}{\varepsilon^2} \Biggl( \frac{\hat{\gamma}_{qg}^{(0)}}{6} \Bigl[ \gamma_{gq}^{(1)} -(2n_f-1)\hat{\gamma}_{gq}^{(1)} \Bigr] +\frac{\gamma_{gq}^{(0)}\hat{\gamma}_{qg}^{(1)}}{3} -\frac{\hat{\gamma}_{gg}^{(1)}}{3} \Bigl[ 4\beta_0 +7\beta_{0,Q} \Bigr] \nonumber\\ && +\frac{2\beta_{0,Q}}{3} \Bigl[ \gamma_{gg}^{(1)} +\beta_1 +\beta_{1,Q} \Bigr] +\frac{2\gamma_{gg}^{(0)}\beta_{1,Q}}{3} +\delta m_1^{(-1)} \Bigl[ -\hat{\gamma}_{qg}^{(0)}\gamma_{gq}^{(0)} -2\beta_{0,Q}\gamma_{gg}^{(0)} \nonumber\\ && -10\beta_{0,Q}^2 -6\beta_{0,Q}\beta_0 \Bigr] \Biggr) +\frac{1}{\varepsilon} \Biggl( \frac{\hat{\gamma}_{gg}^{(2)}}{3} -2(2\beta_0+3\beta_{0,Q})a_{gg,Q}^{(2)} -n_f\hat{\gamma}_{qg}^{(0)}a_{gq,Q}^{(2)} \nonumber\\ && +\gamma_{gq}^{(0)}a_{Qg}^{(2)} +\beta_{1,Q}^{(1)} \gamma_{gg}^{(0)} +\frac{\gamma_{gq}^{(0)}\hat{\gamma}_{qg}^{(0)}\zeta_2}{16} \Bigl[ \gamma_{gg}^{(0)} - \gamma_{qq}^{(0)} +2(2n_f+1)\beta_{0,Q} +6\beta_0 \Bigr] \nonumber\\ && +\frac{\beta_{0,Q}\zeta_2}{4} \Bigl[ \gamma_{gg}^{(0)} \{2\beta_0-\beta_{0,Q}\} +4\beta_0^2 -2\beta_{0,Q}\beta_0 -12\beta_{0,Q}^2 \Bigr] \nonumber\\ && +\delta m_1^{(-1)} \Bigl[ -3\delta m_1^{(-1)}\beta_{0,Q} -2\delta m_1^{(0)}\beta_{0,Q} -\hat{\gamma}_{gg}^{(1)} \Bigr] +\delta m_1^{(0)} \Bigl[ -\hat{\gamma}_{qg}^{(0)}\gamma_{gq}^{(0)} \nonumber\\ && -2\gamma_{gg}^{(0)}\beta_{0,Q} -4\beta_{0,Q}\beta_0 -8\beta_{0,Q}^2 \Bigr] +2 \delta m_2^{(-1)} \beta_{0,Q} \Biggr) +a_{gg,Q}^{(3)} \Biggr]~. 
\label{Ahhhgg3Q} \end{eqnarray} The renormalized results are \begin{eqnarray} A_{gg,Q}^{(1), \overline{{\sf MS}}}&=& - \beta_{0,Q} \ln\left(\frac{m^2}{\mu^2}\right)~, \label{AggQ1MSren}\\ A_{gg,Q}^{(2), \overline{{\sf MS}}}&=& \frac{1}{8}\Biggl\{ 2\beta_{0,Q} \Bigl( \gamma_{gg}^{(0)} +2\beta_0 \Bigr) +\gamma_{gq}^{(0)}\hat{\gamma}_{qg}^{(0)} +8\beta_{0,Q}^2 \Biggr\} \ln^2 \Bigl(\frac{m^2}{\mu^2}\Bigr) +\frac{\hat{\gamma}_{gg}^{(1)}}{2} \ln \Bigl(\frac{m^2}{\mu^2}\Bigr) \nonumber\\ && -\frac{\zeta_2}{8}\Bigl[ 2\beta_{0,Q} \Bigl( \gamma_{gg}^{(0)} +2\beta_0 \Bigr) +\gamma_{gq}^{(0)} \hat{\gamma}_{qg}^{(0)} \Bigr] +a_{gg,Q}^{(2)}~, \label{AggQ2MSren} \\ A_{gg,Q}^{(3), \overline{{\sf MS}}}&=& \frac{1}{48}\Biggl\{ \gamma_{gq}^{(0)}\hat{\gamma}_{qg}^{(0)} \Bigl( \gamma_{qq}^{(0)} -\gamma_{gg}^{(0)} -6\beta_0 -4n_f\beta_{0,Q} -10\beta_{0,Q} \Bigr) -4 \Bigl( \gamma_{gg}^{(0)}\Bigl[ 2\beta_0 +7\beta_{0,Q} \Bigr] \nonumber\\ && +4\beta_0^2 +14\beta_{0,Q}\beta_0 +12\beta_{0,Q}^2 \Bigr)\beta_{0,Q} \Biggr\} \ln^3 \Bigl(\frac{m^2}{\mu^2}\Bigr) +\frac{1}{8}\Biggl\{ \hat{\gamma}_{qg}^{(0)} \Bigl( \gamma_{gq}^{(1)} +(1-n_f)\hat{\gamma}_{gq}^{(1)} \Bigr) \nonumber\\ && +\gamma_{gq}^{(0)}\hat{\gamma}_{qg}^{(1)} +4\gamma_{gg}^{(1)}\beta_{0,Q} -4\hat{\gamma}_{gg}^{(1)}[\beta_0+2\beta_{0,Q}] +4[\beta_1+\beta_{1,Q}]\beta_{0,Q} \nonumber\\ && +2\gamma_{gg}^{(0)}\beta_{1,Q} \Biggr\} \ln^2 \Bigl(\frac{m^2}{\mu^2}\Bigr) +\frac{1}{16}\Biggl\{ 8\hat{\gamma}_{gg}^{(2)} -8n_fa_{gq,Q}^{(2)}\hat{\gamma}_{qg}^{(0)} -16a_{gg,Q}^{(2)}(2\beta_0+3\beta_{0,Q}) \nonumber\\ && +8\gamma_{gq}^{(0)}a_{Qg}^{(2)} +8\gamma_{gg}^{(0)}\beta_{1,Q}^{(1)} +\gamma_{gq}^{(0)}\hat{\gamma}_{qg}^{(0)}\zeta_2 \Bigl( \gamma_{gg}^{(0)} -\gamma_{qq}^{(0)} +6\beta_0 +4n_f\beta_{0,Q} +6\beta_{0,Q} \Bigr) \nonumber\\ && +4\beta_{0,Q}\zeta_2 \Bigl( \gamma_{gg}^{(0)} +2\beta_0 \Bigr) \Bigl( 2\beta_0 +3\beta_{0,Q} \Bigr) \Biggr\} \ln \Bigl(\frac{m^2}{\mu^2}\Bigr) +2(2\beta_0+3\beta_{0,Q})\overline{a}_{gg,Q}^{(2)}\nonumber 
\end{eqnarray} \begin{eqnarray} && +n_f\hat{\gamma}_{qg}^{(0)}\overline{a}_{gq,Q}^{(2)} -\gamma_{gq}^{(0)}\overline{a}_{Qg}^{(2)} -\beta_{1,Q}^{(2)} \gamma_{gg}^{(0)} +\frac{\gamma_{gq}^{(0)}\hat{\gamma}_{qg}^{(0)}\zeta_3}{48} \Bigl( \gamma_{qq}^{(0)} -\gamma_{gg}^{(0)} -2[2n_f+1]\beta_{0,Q} \nonumber\\ && -6\beta_0 \Bigr) +\frac{\beta_{0,Q}\zeta_3}{12} \Bigl( [\beta_{0,Q}-2\beta_0]\gamma_{gg}^{(0)} +2[\beta_0+6\beta_{0,Q}]\beta_{0,Q} -4\beta_0^2 \Bigr) \nonumber\\ && -\frac{\hat{\gamma}_{qg}^{(0)}\zeta_2}{16} \Bigl( \gamma_{gq}^{(1)} +\hat{\gamma}_{gq}^{(1)} \Bigr) +\frac{\beta_{0,Q}\zeta_2}{8} \Bigl( \hat{\gamma}_{gg}^{(1)} -2\gamma_{gg}^{(1)} -2\beta_1 -2\beta_{1,Q} \Bigr) +\frac{\delta m_1^{(-1)}}{4} \Bigl( 8 a_{gg,Q}^{(2)} \nonumber\\ && +24 \delta m_1^{(0)} \beta_{0,Q} +8 \delta m_1^{(1)} \beta_{0,Q} +\zeta_2 \beta_{0,Q} \beta_0 +9 \zeta_2 \beta_{0,Q}^2 \Bigr) +\delta m_1^{(0)} \Bigl( \beta_{0,Q} \delta m_1^{(0)} +\hat{\gamma}_{gg}^{(1)} \Bigr) \nonumber\\ && +\delta m_1^{(1)} \Bigl( \hat{\gamma}_{qg}^{(0)} \gamma_{gq}^{(0)} +2 \beta_{0,Q} \gamma_{gg}^{(0)} +4 \beta_{0,Q} \beta_0 +8 \beta_{0,Q}^2 \Bigr) -2 \delta m_2^{(0)} \beta_{0,Q} +a_{gg,Q}^{(3)}~. \label{Agg3QMSren} \end{eqnarray} \newpage \section{\bf\boldmath Representation in Different Renormalization Schemes} \label{Sec-REP} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0} As outlined in Section~\ref{Sec-REN}, there are different obvious possibilities to choose a scheme for the renormalization of the mass and the coupling constant. Concerning the coupling constant, we intermediately worked in a ${\sf MOM}$--scheme, which derives from the condition that the external gluon lines have to be kept on--shell. In the end, we transformed back to the $\overline{\sf MS}$--description via Eq.~(\ref{asmoma}), since this is the commonly used renormalization scheme.
If masses are involved, it is useful to renormalize them in the on--mass--shell--scheme, as it was done in the previous Section. In this scheme, one defines the renormalized mass $m$ as the pole of the quark propagator. In this Section, we present the relations required to transform the renormalized results from Section~\ref{SubSec-RENPred} into the different, related schemes. In Section~\ref{SubSec-HQElProdWave}, we show how these scheme transformations affect the ${\sf NLO}$ results. Denoting the $\overline{\sf MS}$--mass by $\overline{m}$, there are in addition to the $\{a_s^{\overline{{\sf MS}}},~m\}$--scheme adopted in Section~\ref{SubSec-RENPred} the following schemes \begin{eqnarray} \Bigl\{a_s^{\tiny{\mbox{MOM}}},~m\Bigr\}~,\quad~ \Bigl\{a_s^{\tiny{\mbox{MOM}}},~\overline{m}\Bigr\}~,\quad~ \Bigl\{a_s^{\overline{{\sf MS}}},~\overline{m}\Bigr\}~. \end{eqnarray} In the case of mass renormalization in the $\overline{\sf MS}$--scheme, Eq.~(\ref{mren1}) becomes \begin{eqnarray} \hat{m}=Z_m^{\overline{{\sf MS}}} \overline{m} &=&\overline{m} \Bigl[ 1 + \hat{a}_s \delta \overline{m}_1 + \hat{a}_s^2 \delta \overline{m}_2 \Bigr] + O(\hat{a}_s^3)~. \label{mrenms} \end{eqnarray} The corresponding coefficients read, \cite{Tarrach:1980up}, \begin{eqnarray} \delta \overline{m}_1 &=& \frac{6}{\varepsilon}C_F \label{delm1MSbar} \equiv \frac{\delta \overline{m}_1^{(-1)}}{\varepsilon}~, \\ \delta \overline{m}_2 &=& \frac{C_F}{\varepsilon^2}\left(18 C_F-22 C_A+8T_F(n_f+1) \right) +\frac{C_F}{\varepsilon}\left(\frac{3}{2}C_F+\frac{97}{6}C_A -\frac{10}{3}T_F(n_f+1)\right) \nonumber\\ &\equiv& \frac{\delta \overline{m}_2^{(-2)}}{\varepsilon^2} +\frac{\delta \overline{m}_2^{(-1)}}{\varepsilon}~.
\label{delm2MSbar} \end{eqnarray} One notices that the following relations hold between the expansion coefficients in $\varepsilon$ of the on--shell-- and $\overline{\sf MS}$--terms \begin{eqnarray} \delta \overline{m}_1^{(-1)}&=&\delta m_1^{(-1)}~, \\ \delta \overline{m}_2^{(-2)}&=&\delta m_2^{(-2)}~, \\ \delta \overline{m}_2^{(-1)}&=&\delta m_2^{(-1)} -\delta m_1^{(-1)}\delta m_1^{(0)} +2\delta m_1^{(0)}(\beta_0+\beta_{0,Q})~. \end{eqnarray} One has to be careful, since the choice of this scheme also affects the renormalization constant of the coupling in the ${\sf MOM}$--scheme. This is due to the fact that in Eq.~(\ref{GluSelfBack}) mass renormalization had been performed in the on--shell--scheme. Going through the same steps as in Eqs. (\ref{GluSelfBack})--(\ref{Zgnfp1}), but using the $\overline{\sf MS}$--mass, we obtain for $Z_g$ in the ${\sf MOM}$--scheme \begin{eqnarray} {Z_g^{\tiny{\mbox{MOM}}}}^2(\varepsilon,n_f+1,\mu^2,\overline{m}^2)&=& 1+a^{\tiny{\mbox{MOM}}}_s(\mu^2) \Bigl[ \frac{2}{\varepsilon} (\beta_0(n_f)+\beta_{0,Q}f(\varepsilon)) \Bigr] \nonumber\\ &&\hspace{-35mm} +{a^{\tiny{\mbox{MOM}}}_s}^2(\mu^2) \Bigl[ \frac{\beta_1(n_f)}{\varepsilon} +\frac{4}{\varepsilon^2} (\beta_0(n_f)+\beta_{0,Q}f(\varepsilon))^2 +\frac{2\beta_{0,Q}}{\varepsilon}\delta \overline{m}_1^{(-1)} f(\varepsilon) \nonumber\\ &&\hspace{-35mm} +\frac{1}{\varepsilon}\Bigl(\frac{\overline{m}^2}{\mu^2}\Bigr)^{\varepsilon} \Bigl( \overline{\beta}_{1,Q}+ \varepsilon\overline{\beta}_{1,Q}^{(1)} +\varepsilon^2\overline{\beta}_{1,Q}^{(2)} \Bigr) \Bigr]+O({a^{\tiny{\mbox{MOM}}}_s}^3)~, \label{Zgheavy2MSmass} \end{eqnarray} where in the term $f(\varepsilon)$, cf. Eq.~(\ref{fep}), the $\overline{\sf MS}$--mass has to be used. The coefficients differing from the on--shell--scheme in the above equation are given by, cf.
Eqs.~(\ref{b1Q1},~\ref{b1Q2}) \begin{eqnarray} \overline{\beta}_{1,Q}&=&\beta_{1,Q}-2\beta_{0,Q}\delta m_1^{(-1)}~,\\ \overline{\beta}_{1,Q}^{(1)}&=& \beta_{1,Q}^{(1)}-2\beta_{0,Q}\delta m_1^{(0)}~,\\ \overline{\beta}_{1,Q}^{(2)}&=&\beta_{1,Q}^{(2)} -\frac{\beta_{0,Q}}{4}\Bigl( 8\delta m_1^{(1)} +\delta m_1^{(-1)}\zeta_2 \Bigr)~. \end{eqnarray} The transformation formulas between the different schemes follow from the condition that the unrenormalized terms are equal. In order to transform from the $\{a_s^{\overline{{\sf MS}}},~m\}$--scheme to the $\{a_s^{\tiny{\mbox{MOM}}},~m\}$--scheme, the inverse of Eq.~(\ref{asmoma}) \begin{eqnarray} a_s^{\overline{{\sf MS}}}(m^2)&=& a_s^{\tiny{\mbox{MOM}}}\Bigl[ 1 +\beta_{0,Q} \ln \Bigl(\frac{m^2}{\mu^2}\Bigr) a_s^{\tiny{\mbox{MOM}}} \nonumber\\ && \phantom{a_s^{\tiny{\mbox{MOM}}}\Bigl[1} +\Bigl\{ \beta_{0,Q}^2 \ln^2 \Bigl(\frac{m^2}{\mu^2}\Bigr) +\beta_{1,Q} \ln \Bigl(\frac{m^2}{\mu^2}\Bigr) +\beta_{1,Q}^{(1)} \Bigr\}{a_s^{\tiny{\mbox{MOM}}}}^2 \Bigr] \label{aMSON2aMOMON} \end{eqnarray} is used. 
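The inverse relation quoted above is a formal series reversion through $O(a_s^3)$. As a consistency sketch (all $\beta$--coefficients and $L=\ln(m^2/\mu^2)$ kept as abstract symbols; this is an illustration, not the derivation used in the text), one can revert Eq.~(\ref{aMSON2aMOMON}) with sympy and read off the coefficients of the forward transformation:

```python
import sympy as sp

# Abstract symbols: L = ln(m^2/mu^2); beta-coefficients kept generic.
A, a = sp.symbols('A a')                      # A = a_s^MSbar, a = a_s^MOM
L, b0Q, b1Q, b1Q1 = sp.symbols('L beta_0Q beta_1Q beta_1Q1')

# Eq. (aMSON2aMOMON): a_s^MSbar expressed through a_s^MOM
a_msbar = a*(1 + b0Q*L*a + (b0Q**2*L**2 + b1Q*L + b1Q1)*a**2)

# Ansatz for the reverted series: a_s^MOM = A*(1 + c1*A + c2*A^2)
c1, c2 = sp.symbols('c1 c2')
diff = sp.expand(a_msbar.subs(a, A*(1 + c1*A + c2*A**2)) - A)

# Demand that the composition is the identity through O(A^3)
sol = sp.solve([diff.coeff(A, 2), diff.coeff(A, 3)], [c1, c2], dict=True)[0]
print(sol[c1])   # expected: -beta_0Q*L
print(sol[c2])   # expected: beta_0Q^2*L^2 - beta_1Q*L - beta_1Q1
```

The reverted coefficients reproduce the sign pattern one expects for the direct relation $a_s^{\tiny{\mbox{MOM}}}(a_s^{\overline{{\sf MS}}})$; the same reversion technique applies to the mass relations below.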
For the transformation to the $\{a_s^{\overline{{\sf MS}}},~\overline{m}\}$--scheme one obtains \begin{eqnarray} m(a_s^{\overline{{\sf MS}}})&=&\overline{m}(a_s^{\overline{{\sf MS}}})\Biggl( 1 +\Biggl\{ -\frac{\delta m^{(-1)}_1}{2} \ln \Bigl(\frac{\overline{m}^2}{\mu^2}\Bigr) -\delta m_1^{(0)} \Biggr\}a_s^{\overline{{\sf MS}}} \nonumber\\ && +\Biggl\{ \frac{\delta m_1^{(-1)}}{8} \Bigl[ 2\beta_0 +2\beta_{0,Q} +\delta m_1^{(-1)} \Bigr] \ln^2 \Bigl(\frac{\overline{m}^2}{\mu^2}\Bigr) +\frac{1}{2}\Bigl[ -\delta m_1^{(0)} \Bigl( 2\beta_0 +2\beta_{0,Q} \nonumber\\ && -3\delta m_1^{(-1)} \Bigr) +{\delta m_1^{(-1)}}^{2} -2\delta m_2^{(-1)} \Bigr] \ln \Bigl(\frac{\overline{m}^2}{\mu^2}\Bigr) +\delta m_1^{(1)} \Bigl[ \delta m_1^{(-1)} -2\beta_0 -2\beta_{0,Q} \Bigr] \nonumber\\ && +\delta m_1^{(0)} \Bigl[\delta m_1^{(-1)}+\delta m_1^{(0)}\Bigr] -\delta m_2^{(0)} \Biggr\}{a_s^{\overline{{\sf MS}}}}^2 \Biggr)~.\label{mMSON2MSMS} \end{eqnarray} Finally, the transformation to the $\{a_s^{\tiny{\mbox{MOM}}},~\overline{m}\}$--scheme is achieved via \begin{eqnarray} a_s^{\overline{{\sf MS}}}(m^2)&=& a_s^{\tiny{\mbox{MOM}}}\Bigl[ 1 +\beta_{0,Q} \ln \Bigl(\frac{\overline{m}^2}{\mu^2}\Bigr) a_s^{\tiny{\mbox{MOM}}} +\Bigl\{ \beta_{0,Q}^2\ln^2 \Bigl(\frac{\overline{m}^2}{\mu^2}\Bigr) \nonumber\\ && +\Bigl( \beta_{1,Q} -\beta_{0,Q}\delta m_1^{(-1)} \Bigr) \ln \Bigl(\frac{\overline{m}^2}{\mu^2}\Bigr) +\beta_{1,Q}^{(1)} -2\delta m_1^{(0)}\beta_{0,Q} \Bigr\}{a_s^{\tiny{\mbox{MOM}}}}^2 \Bigr]~, \label{aMSON2aMOMMS} \end{eqnarray} and \begin{eqnarray} m(a_s^{\overline{{\sf MS}}})&=&\overline{m}(a_s^{\tiny{\mbox{MOM}}})\Biggl( 1 +\Biggl\{ -\frac{\delta m^{(-1)}_1}{2} \ln \Bigl(\frac{\overline{m}^2}{\mu^2}\Bigr) -\delta m_1^{(0)} \Biggr\}a_s^{\tiny{\mbox{MOM}}} \nonumber\\ && +\Biggl\{ \frac{\delta m_1^{(-1)}}{8} \Bigl[ 2\beta_0 -2\beta_{0,Q} +\delta m_1^{(-1)} \Bigr] \ln^2 \Bigl(\frac{\overline{m}^2}{\mu^2}\Bigr) +\frac{1}{2}\Bigl[ -\delta m_1^{(0)} \Bigl( 2\beta_0 +4\beta_{0,Q} \nonumber\\ && -3\delta
m_1^{(-1)} \Bigr) +{\delta m_1^{(-1)}}^{2} -2\delta m_2^{(-1)} \Bigr] \ln \Bigl(\frac{\overline{m}^2}{\mu^2}\Bigr) +\delta m_1^{(1)} \Bigl[ \delta m_1^{(-1)} -2\beta_0 -2\beta_{0,Q} \Bigr] \nonumber\\ && +\delta m_1^{(0)} \Bigl[\delta m_1^{(-1)}+\delta m_1^{(0)}\Bigr] -\delta m_2^{(0)} \Biggr\}{a_s^{\tiny{\mbox{MOM}}}}^2 \Biggr)~.\label{mMSON2MOMMS} \end{eqnarray} The expressions for the OMEs in different schemes are then obtained by inserting the relations (\ref{aMSON2aMOMON})--(\ref{mMSON2MOMMS}) into the general expression (\ref{PertOmeren}) and expanding in the coupling constant. \subsection{\bf\boldmath Scheme Dependence at ${\sf NLO}$} \label{SubSec-HQElProdWave} Finally, we would like to comment on how the factorization formulas for the heavy flavor Wilson coefficients, (\ref{eqWIL1})--(\ref{eqWIL5}), have to be applied to obtain a complete description. Here, the renormalization of the coupling constant has to be carried out in the same way for all quantities contributing. The general factorization formula (\ref{CallFAC}) holds only for completely inclusive quantities, including radiative corrections containing heavy quark loops, \cite{Buza:1996wv}. One has to distinguish one-particle irreducible and reducible diagrams, which both contribute in the calculation. We would like to remind the reader of the background of this aspect. If one evaluates the heavy-quark Wilson coefficients, diagrams of the type shown in Figure \ref{2LOOPIRR} may appear as well. Diagram (a) contains a virtual heavy quark loop correction to the gluon propagator in the initial state and contributes to the terms $L_{g,i}$ and $H_{g,i}$, respectively, depending on whether a light or heavy quark pair is produced in the final state. Diagrams (b), (c) contribute to $L_{q,i}^{\sf NS}$ and contain radiative corrections to the gluon propagator due to heavy quarks as well. 
The latter diagrams contribute to $F_{(2,L)}(x,Q^2)$ in the inclusive case, but are absent in the semi--inclusive $Q\overline{Q}$--production cross section. The same holds for diagram (a) if a $q\overline{q}$--pair is produced. \begin{figure}[htb] \begin{center} \includegraphics[angle=0, width=10.0cm]{picmain7.eps} \end{center} \begin{center} \caption{\sf $O(a_s^2)$ virtual heavy quark corrections.} \label{2LOOPIRR} \noindent \small \end{center} \normalsize \end{figure} In Refs.~\cite{Laenen:1992zkxLaenen:1992xs,*Riemersma:1994hv}, the coupling constant was renormalized in the ${\sf MOM}$--scheme by absorbing the contributions of diagram (a) into the coupling constant, as a consequence of which the term $L_{g,i}$ appears for the first time at $O(a_s^3)$. This can be made explicit by considering the complete gluonic Wilson coefficient up to $O(a_s^2)$, including one heavy quark, cf. Eqs. (\ref{eqWIL3}, \ref{eqWIL5}), \begin{eqnarray} &&C_{g,(2,L)}(n_f)+L_{g,(2,L)}(n_f+1)+H_{g,(2,L)}(n_f+1) = a_s^{\overline{{\sf MS}}} \Bigl[~A_{Qg}^{(1), \overline{{\sf MS}}}~\delta_2 +C^{(1)}_{g,(2,L)}(n_f+1) \Bigr] \nonumber\\ &&\hspace{2mm} + {a_s^{\overline{{\sf MS}}}}^2 \Bigl[~A_{Qg}^{(2), \overline{{\sf MS}}}~\delta_2 +A_{Qg}^{(1), \overline{{\sf MS}}}~C^{(1), {\sf NS}}_{q,(2,L)}(n_f+1) +A_{gg,Q}^{(1), \overline{{\sf MS}}}~C^{(1)}_{g,(2,L)}(n_f+1) \nonumber\\ &&\hspace{18mm} +C^{(2)}_{g,(2,L)}(n_f+1) \Bigr]~. \label{Sec5Ex1} \end{eqnarray} The above equation is given in the $\overline{\sf MS}$--scheme, and the structure of the OMEs can be inferred from Eqs. (\ref{AQg1MSren},~\ref{AQg2MSren}). Here, diagram (a) gives a contribution, corresponding exactly to the color factor $T_F^2$. The transformation to the ${\sf MOM}$--scheme for $a_s$, cf. 
Eqs.~(\ref{asmoma}, \ref{asmsa}), yields \begin{eqnarray} &&C_{g,(2,L)}(n_f)+L_{g,(2,L)}(n_f+1)+H_{g,(2,L)}(n_f+1) = a_s^{\tiny{\mbox{MOM}}} \Bigl[~A_{Qg}^{(1), \overline{{\sf MS}}}~\delta_2 +C^{(1)}_{g,(2,L)}(n_f+1) \Bigr] \nonumber\\ &&\hspace{2mm} + {a_s^{\tiny{\mbox{MOM}}}}^2 \Bigl[~A_{Qg}^{(2), \overline{{\sf MS}}}~\delta_2 +~\beta_{0,Q}\ln\Bigl(\frac{m^2}{\mu^2}\Bigr)A_{Qg}^{(1), \overline{{\sf MS}}}\delta_2 +A_{Qg}^{(1), \overline{{\sf MS}}}~C^{(1), {\sf NS}}_{q,(2,L)}(n_f+1) \nonumber\\ &&\hspace{2mm} +A_{gg,Q}^{(1), \overline{{\sf MS}}}~C^{(1)}_{g,(2,L)}(n_f+1) +\beta_{0,Q}\ln\Bigl(\frac{m^2}{\mu^2}\Bigr)~C^{(1)}_{g,(2,L)}(n_f+1) +C^{(2)}_{g,(2,L)}(n_f+1) \Bigr]~. \label{Sec5Ex2} \end{eqnarray} By using the general structure of the renormalized OMEs, Eqs.~(\ref{AQg1MSren},~\ref{AQg2MSren},~\ref{AggQ1MSren}), one notices that all contributions due to diagram (a) cancel in the ${\sf MOM}$--scheme, i.e., the color factor $T_F^2$ does not occur at the $2$--loop level. Thus the factorization formula reads \begin{eqnarray} &&C_{g,(2,L)}(n_f)+L_{g,(2,L)}(n_f+1)+H_{g,(2,L)}(n_f+1) = \nonumber\\ &&\hspace{5mm} a_s^{\tiny{\mbox{MOM}}} \Bigl[~A_{Qg}^{(1), \tiny{\mbox{MOM}}}~\delta_2 +C^{(1)}_{g,(2,L)}(n_f+1) \Bigr] \nonumber\\ &&\hspace{2mm} + {a_s^{\tiny{\mbox{MOM}}}}^2 \Bigl[~A_{Qg}^{(2), \tiny{\mbox{MOM}}}~\delta_2 +A_{Qg}^{(1), \tiny{\mbox{MOM}}}~C^{(1), {\sf NS}}_{q,(2,L)}(n_f+1) +C^{(2)}_{g,(2,L)}(n_f+1) \Bigr]~. \label{Sec5Ex3} \end{eqnarray} Splitting up Eq.~(\ref{Sec5Ex3}) into $H_{g,i}$ and $L_{g,i}$, one observes that $L_{g,i}$ vanishes at $O(a_s^2)$ and the term $H_{g,i}$ is the one calculated in Ref.~\cite{Buza:1995ie}. This is the asymptotic expression of the gluonic heavy flavor Wilson coefficient as calculated in Refs.~\cite{Laenen:1992zkxLaenen:1992xs,*Riemersma:1994hv}. 
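The cancellation of the heavy-quark-loop contributions in the transition from Eq.~(\ref{Sec5Ex1}) to Eq.~(\ref{Sec5Ex3}) can be made explicit with a few lines of computer algebra. In the following minimal sketch, all OMEs and Wilson coefficients are generic symbols; only $A_{gg,Q}^{(1),\overline{{\sf MS}}}=-\beta_{0,Q}\ln(m^2/\mu^2)$, Eq.~(\ref{AggQ1MSren}), is inserted:

```python
import sympy as sp

A = sp.Symbol('A')                              # A = a_s^MOM
a = sp.Symbol('a')                              # a = a_s^MSbar
b0Q, L, d2 = sp.symbols('beta_0Q L delta_2')    # L = ln(m^2/mu^2)
AQg1, AQg2, C1, C1NS, C2 = sp.symbols('A_Qg1 A_Qg2 C_g1 C_qNS1 C_g2')

AggQ1 = -b0Q*L                                  # Eq. (AggQ1MSren)

# Eq. (Sec5Ex1): gluonic Wilson coefficient in the MSbar scheme
W = a*(AQg1*d2 + C1) + a**2*(AQg2*d2 + AQg1*C1NS + AggQ1*C1 + C2)

# Coupling transformation to the MOM scheme, truncated to the order needed
W_mom = sp.expand(W.subs(a, A*(1 + b0Q*L*A)))

coeff2 = sp.expand(W_mom.coeff(A, 2))
print(coeff2)
# The C_g1-terms AggQ1*C1 and +beta_0Q*L*C1 cancel:
print(coeff2.coeff(C1))   # 0
```

The surviving second-order coefficient is $A_{Qg}^{(2)}\delta_2+\beta_{0,Q}L\,A_{Qg}^{(1)}\delta_2+A_{Qg}^{(1)}C_q^{(1),{\sf NS}}+C_g^{(2)}$, i.e., exactly the structure of Eq.~(\ref{Sec5Ex3}) with $A_{Qg}^{(2),\tiny{\mbox{MOM}}}=A_{Qg}^{(2),\overline{{\sf MS}}}+\beta_{0,Q}L\,A_{Qg}^{(1),\overline{{\sf MS}}}$.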
Note that the observed cancellation was due to the fact that the term $A_{gg,Q}^{(1)}$ receives only contributions from the heavy quark loops of the gluon self--energy, which also enters into the definition of the ${\sf MOM}$--scheme. It is not clear whether this can be achieved at the $3$--loop level as well, i.e., transforming the general inclusive factorization formula (\ref{CallFAC}) in such a way that only the contributions due to heavy flavors in the final state remain. Therefore one should use these asymptotic expressions only for completely inclusive analyses, where heavy and light flavors are treated together. This approach has also been adopted in Ref.~\cite{Buza:1996wv} for the renormalization of the massive OMEs, which was performed in the $\overline{\sf MS}$--scheme and not in the ${\sf MOM}$--scheme, as previously in Ref.~\cite{Buza:1995ie}. The radiative corrections in the ${\sf NS}$--case can be treated in the same manner. Here the scheme transformation affects only the light Wilson coefficients and not the OMEs at the $2$--loop level. In the ${\overline{\sf MS}}$--scheme, one obtains the following asymptotic expression up to $O(a_s^2)$ from Eqs. (\ref{LNSFAC}, \ref{eqWIL1}). \begin{eqnarray} \label{Sec5Ex4} &&C_{q,(2,L)}^{\sf NS}(n_f) +L_{q,(2,L)}^{\sf NS}(n_f+1) = \nonumber\\ && \hspace{2mm} 1+a_s^{\overline{{\sf MS}}} C_{q,(2,L)}^{(1), {\sf NS}}(n_f+1) +{a_s^{\overline{{\sf MS}}}}^2 \Bigl[A_{qq,Q}^{(2), {\sf NS}, \overline{{\sf MS}}}(n_f+1)~\delta_2 + C^{(2), {\sf NS}}_{q,(2,L)}(n_f+1)\Bigr]~.
\end{eqnarray} Transformation to the ${\sf MOM}$--scheme yields \begin{eqnarray} \label{Sec5Ex5} &&C_{q,(2,L)}^{\sf NS}(n_f) +L_{q,(2,L)}^{\sf NS}(n_f+1) = \nonumber\\ && \hspace{10mm} 1+a_s^{\tiny{\mbox{MOM}}} C_{q,(2,L)}^{(1), {\sf NS}}(n_f+1) +{a_s^{\tiny{\mbox{MOM}}}}^2 \Bigl [A_{qq,Q}^{(2), {\sf NS}, \tiny{\mbox{MOM}}}(n_f+1)~\delta_2 \nonumber \\ && \hspace{20mm} +~\beta_{0,Q}\ln\Bigl(\frac{m^2}{\mu^2}\Bigr) C_{q,(2,L)}^{(1), {\sf NS}}(n_f+1) +C^{(2), {\sf NS}}_{q,(2,L)}(n_f+1) \Bigr]~. \end{eqnarray} Note that $A_{qq,Q}^{(2),{\sf NS}}$, Eq.~(\ref{Aqq2NSQMSren}), is not affected by this scheme transformation. As is obvious from Figure~\ref{2LOOPIRR}, the logarithmic term in Eq. (\ref{Sec5Ex5}) can therefore only be attributed to the massless Wilson coefficient. Separating the light from the heavy part one obtains \begin{eqnarray} \label{Sec5Ex6} &&L_{q,(2,L)}^{(2),{\sf NS}, \tiny{\mbox{MOM}}}(n_f+1)= \nonumber\\ && \hspace{5mm} A_{qq,Q}^{(2), {\sf NS}, \tiny{\mbox{MOM}}}(n_f+1)~\delta_2 +~\beta_{0,Q}\ln\Bigl(\frac{m^2}{\mu^2}\Bigr) C_{q,(2,L)}^{(1), {\sf NS}}(n_f+1) +\hat{C}^{(2), {\sf NS}}_{q,(2,L)}(n_f)~. \label{LNSFAC3} \end{eqnarray} This provides the same results as Eqs.~(4.23)--(4.29) of Ref.~\cite{Buza:1995ie}. These are the asymptotic expressions of the ${\sf NS}$ heavy flavor Wilson coefficients from Refs.~\cite{Laenen:1992zkxLaenen:1992xs,*Riemersma:1994hv}, where only the case of $Q\overline{Q}$--production in the final state has been considered. Hence the logarithmic term in Eq. (\ref{LNSFAC3}) just cancels the contributions due to diagrams (b), (c) in Figure~\ref{2LOOPIRR}. 
\newpage \section{\bf\boldmath Calculation of the Massive Operator Matrix Elements up to $O(a_s^2\varepsilon)$} \label{Sec-2L} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0} The quarkonic $2$--loop massive OMEs $A_{Qg}^{(2)}, A_{Qq}^{(2), {\sf PS}}$ and $A_{qq,Q}^{(2)}$ have been calculated for the first time in Ref.~\cite{Buza:1995ie} to construct asymptotic expressions for the ${\sf NLO}$ heavy flavor Wilson Coefficients in the limit $Q^2~\gg~m^2$, cf. Section~\ref{SubSec-HQAsym}. The corresponding gluonic OMEs $A_{gg,Q}^{(2)}$ and $A_{gq,Q}^{(2)}$ were calculated in Ref.~\cite{Buza:1996wv}, where they were used within a VFNS description of heavy flavors in high--energy scattering processes, see Section~\ref{SubSec-HQFlav}. In these calculations, the integration--by--parts technique, \cite{Chetyrkin:1980pr}, has been applied to reduce the number of propagators occurring in the momentum integrals. Subsequently, the integrals were calculated in $z$--space, which led to a variety of multiple integrals of logarithms, partially with complicated arguments. The final results were given in terms of polylogarithms and Nielsen--integrals, see Appendix \ref{App-SpeFunHarm}. The quarkonic terms have been confirmed in Ref.~\cite{Bierenbaum:2007qe}, cf. also \cite{SKdiploma}, where a different approach was followed. The calculation was performed in Mellin--$N$ space and by avoiding the integration--by--parts technique. Using representations in terms of generalized hypergeometric functions, the integrals could be expressed in terms of multiple finite and infinite sums with one free parameter, $N$. The advantage of this approach is that the evaluation of these sums can be automatized using various techniques, simplifying the calculation. The final result is then obtained in Mellin--space in terms of nested harmonic sums or $Z$--sums, cf. \cite{Blumlein:1998if,Vermaseren:1998uu} and Appendix \ref{App-SpeFunHarm}. 
An additional simplification was found since the final result, e.g., for $A_{Qg}^{(2)}$ can be expressed in terms of {\sf two} basic harmonic sums only, using algebraic, \cite{Blumlein:2003gb}, and structural relations, \cite{Blumlein:2009ta,Blumlein:2009fz}, between them. This is another example of an observation which has been made for many different single scale quantities in high--energy physics, namely that the Mellin--space representation is better suited to the problem than the $z$--space representation. As has been outlined in Section~\ref{Sec-REN}, the $O(\varepsilon)$--terms of the unrenormalized $2$--loop massive OMEs are needed in the renormalization of the $3$--loop contributions. In this Section, we calculate these terms based on the approach advocated in Ref.~\cite{Bierenbaum:2007qe}, which is a new result, \cite{Bierenbaum:2008yu,Bierenbaum:2009zt}. Additionally, we re--calculate the gluonic OMEs up to the constant term in $\varepsilon$ for the first time, cf. \cite{Bierenbaum:2009zt,Buza:1996wv}. Example diagrams for each OME are shown in Figure~\ref{diaex2L}. In Section~\ref{SubSec-2LF32}, we explain how the integrals are obtained in terms of finite and infinite sums using representations in terms of generalized hypergeometric functions, cf. \cite{Slater,Bailey,*Roy:2001} and Appendix~\ref{App-SpeFunFPQ}. For the calculation of these sums we mainly used the {\sf MATHEMATICA}--based program {\sf Sigma}, \cite{sigma1,sigma2}, which is discussed in Section~\ref{SubSec-2LInfSum}. The results are presented in Section~\ref{SubSec-2LRes}. Additionally, we make several remarks on the ${\sf MOM}$--scheme, which has to be adopted intermediately for the renormalization of the coupling constant, cf. Section~\ref{SubSec-RENCo}. In Section~\ref{SubSec-2LChecks}, different checks of the results are presented. 
\begin{figure}[htb] \begin{center} \includegraphics[angle=0, width=3cm]{picmain9.eps} \includegraphics[angle=0, width=2cm]{axogq.eps} \includegraphics[angle=0, width=3cm]{axom.eps} \includegraphics[angle=0, width=3cm]{axonsb.eps} \includegraphics[angle=0, width=3cm]{axopsb.eps} \end{center} {\small \hspace*{17mm} ($\sf Qg$) \hspace{1.5cm} ($\sf gq,Q$) \hspace{1.3cm} ($\sf gg,Q$) \hspace{1.8cm} ($\sf NS$) \hspace{1.7cm} ($\sf Qq,~PS$) \begin{center} \caption[{\sf Examples for 2--loop diagrams contributing to the massive OMEs.}] {\sf Examples for 2--loop diagrams contributing to the massive OMEs. Thick lines: heavy quarks, curly lines: gluons, full lines: quarks.} \label{diaex2L} \end{center}} \end{figure} \subsection{\bf\boldmath Representation in Terms of Hypergeometric Functions} \label{SubSec-2LF32} All diagrams contributing to the massive OMEs are shown in Figures 1--4 in Ref.~\cite{Buza:1995ie} and in Figures 3,4 in Ref.~\cite{Buza:1996wv}, respectively. They represent $2$--point functions with on--shell external momentum $p$, $p^2=0$. They are expressed in two parameters, the heavy quark mass $m$ and the Mellin--parameter $N$. Since the mass can be factored out of the integrals, the problem effectively contains a single scale. The parameter $N$ represents the spin of the composite operators, (\ref{COMP1})--(\ref{COMP3}), and enters the calculation via the Feynman--rules for these objects, cf. Appendix \ref{App-FeynRules}. Since the external momentum does not appear in the final result, the corresponding scalar integrals reduce to massive tadpoles if one sets $N=0$. In order to explain our method, we consider first the massive $2$--loop tadpole shown in Figure \ref{2LMa}, from which all OMEs can be derived at this order, by attaching $2$ outer legs and inserting the composite operator in all possible ways, i.e., both on the lines and on the vertices. 
\begin{figure}[H] \begin{center} \includegraphics[angle=0, height=2cm]{picmain8.eps} \end{center} \begin{center} \caption[{\sf Basic $2$--loop massive tadpole }] {\sf Basic $2$--loop massive tadpole } \label{2LMa} \small \end{center} \normalsize \end{figure} In Figure \ref{2LMa}, the wavy line is massless and the full lines are massive. Here $\nu_i$ labels the power of the propagator. We adopt the convention ${\nu_{i...j}}\equiv{\nu_i}+...+{\nu_j}$ etc. The corresponding dimensionless momentum integral reads in Minkowski--space \begin{eqnarray} I_1&=& \int\int \frac{d^D k_1d^Dk_2}{(4\pi)^{4D}} \frac{(4\pi)^4(-1)^{\nu_{123}-1}(m^2)^{\nu_{123}-D}} {(k_1^2-m^2)^{\nu_1}(k_1^2-k_2^2)^{\nu_2} (k_2^2-m^2)^{\nu_3}}~, \label{momint2L} \end{eqnarray} where we have attached a factor $(4\pi)^4(-1)^{\nu_{123}-1}$ for convenience. Using standard Feynman--parametrization and Eq.~(\ref{Dint}) for momentum integration, one obtains the following Feynman--parameter integral \begin{eqnarray} I_1&=& \Gamma\Biggl[\frac[0pt]{\nu_{123}-4-\varepsilon}{\nu_1,\nu_2,\nu_3}\Biggr] \iint_0^1~dxdy~ \frac{ x^{1+\varepsilon/2-\nu_2} (1-x)^{\nu_{23}-3-\varepsilon/2} y^{\nu_3-1} (1-y)^{\nu_{12}-3-\varepsilon/2}} {(4\pi)^{\varepsilon}(1-xy)^{\nu_{123}-4-\varepsilon}}~,\nonumber\\ \label{parint2L} \end{eqnarray} which belongs to the class of the hypergeometric function $\empty_3F_2$ with argument $z=1$, see Appendix \ref{App-SpeFunFPQ}. 
Applying Eq.~(\ref{FPQint}), one obtains \begin{eqnarray} I_1&=& S_{\varepsilon}^2 \exp\Bigl( \sum_{i=2}^{\infty} \frac{\zeta_i}{i}\varepsilon^i\Bigr) \Gamma\Biggl[\frac[0pt]{{\nu_{123}}-4-\varepsilon,2+\varepsilon/2-{\nu_2}, {\nu_{23}}-2-\varepsilon/2,{\nu_{12}}-2-\varepsilon/2} {1-\varepsilon,{\nu_1},{\nu_2},{\nu_3} ,{\nu_{123}}-2-\varepsilon/2}\Biggr] \nonumber \\ && \times~\phantom{}_{{3}}{F}_{{2}}\Biggl[\frac[0pt]{{\nu_{123}}-4-\varepsilon ,2+\varepsilon/2-{\nu_2},{\nu_3}} {{\nu_3},{\nu_{123}}-2-\varepsilon/2};{1}\Biggr]~, \label{F322L} \end{eqnarray} where we have used Eq.~(\ref{SepA}). The term $\nu_3$ in the argument of the $\empty_3F_2$ cancels between numerator and denominator and thus one can use Gauss's theorem, Eq.~(\ref{Gauss}), to write the result in terms of $\Gamma$--functions \begin{eqnarray} I_1&=&\Gamma\Biggl[\frac[0pt]{ \nu_{123}-4-\varepsilon, 2+\varepsilon/2-\nu_2, \nu_{12}-2-\varepsilon/2, \nu_{23}-2-\varepsilon/2 }{ 1-\varepsilon, 2+\varepsilon/2, \nu_1, \nu_3, \nu_{123}+\nu_2-4-\varepsilon } \Biggr] S_{\varepsilon}^2 \exp\Bigl( \sum_{i=2}^{\infty} \frac{\zeta_i}{i}\varepsilon^i\Bigr)~. \nonumber\\ \label{F212L} \end{eqnarray} This calculation is of course trivial and Eq.~(\ref{F212L}) can be easily checked using {\sf MATAD}, cf. Ref.~\cite{Steinhauser:2000ry} and Section~\ref{SubSec-3LMatad}. Next, let us consider the case of arbitrary moments in the presence of the complete numerator structure. Since the final result contains the factor $(\Delta.p)^N$, one cannot set $p$ to zero anymore. This increases the number of propagators and hence the number of Feynman--parameters in Eq.~(\ref{parint2L}). Additionally, the terms $(\Delta.q)^A$ in the integral lead to polynomials in the Feynman--parameters raised to a symbolic power, which cannot be integrated trivially. Hence neither Eq.~(\ref{FPQint}) nor Gauss's theorem can be applied anymore in the general case.
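Before turning to the general case, the $\Gamma$--function reduction of the fixed-moment tadpole can be spot-checked numerically. The sketch below compares the bare double integral of Eq.~(\ref{parint2L}) against the $\Gamma$--ratio implied by Eqs.~(\ref{F322L})/(\ref{F212L}), with the overall $\Gamma$--prefactor and the $S_\varepsilon$/$\exp(\zeta)$-normalization stripped off according to our own bookkeeping, for the test point $\nu_1=\nu_2=\nu_3=2$, $\varepsilon=1$:

```python
from mpmath import mp, mpf, gamma, quad

mp.dps = 25

# Fixed propagator powers and a finite epsilon for the spot check
nu1, nu2, nu3, eps = 2, 2, 2, mpf(1)
n12, n23, n123 = nu1 + nu2, nu2 + nu3, nu1 + nu2 + nu3

# Bare Feynman-parameter double integral of Eq. (parint2L),
# with the Gamma-prefactor and the (4 pi)^eps factor stripped off
def f(x, y):
    return (x**(1 + eps/2 - nu2) * (1 - x)**(n23 - 3 - eps/2)
            * y**(nu3 - 1) * (1 - y)**(n12 - 3 - eps/2)
            / (1 - x*y)**(n123 - 4 - eps))

D = quad(f, [0, 1], [0, 1])   # tanh-sinh handles the x^(-1/2) endpoint

# The same quantity after the 3F2 -> 2F1 reduction and Gauss's theorem:
# Gamma-ratio of Eq. (F212L) with the common prefactors removed
D_gamma = (gamma(2 + eps/2 - nu2) * gamma(n12 - 2 - eps/2)
           * gamma(n23 - 2 - eps/2) * gamma(nu2)
           / (gamma(2 + eps/2) * gamma(n123 + nu2 - 4 - eps)))

print(D)          # ~ 0.5235987... = pi/6 for this parameter choice
print(D_gamma)
```

For this choice of parameters the $\Gamma$--ratio collapses to $\pi/6$, which the quadrature reproduces to high accuracy.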
However, the structure of the integral in Eq.~(\ref{parint2L}) does not change. For {\sf any diagram} deriving from the $2$--loop tadpole, a general integral of the type \begin{eqnarray} I_2&=&C_2 \iint_0^1~dxdy~ \frac{x^a(1-x)^by^c(1-y)^d}{ (1-xy)^e}\int_0^1dz_1...\int_0^1dz_i~ {{\sf P}}\Bigl(x,y,z_1 \ldots z_i,{N}\Bigr)~ \label{Gen2L} \end{eqnarray} is obtained. Here ${\sf P}$ is a rational function of $x,y$ and possibly more parameters $z_1$...$z_i$. $N$ denotes the Mellin--parameter and occurs in some exponents. Note that operator insertions with more than two legs give rise to additional finite sums in ${\sf P}$, see Appendix \ref{App-FeynRules}. For fixed values of $N$, one can expand ${\sf P}$ and the integral $I_2$ turns into a finite sum over integrals of the type $I_1$. The terms $\nu_i$ in these integrals might have been shifted by integers, but after expanding in $\varepsilon$, the one--fold infinite sum can be performed, e.g., using the ${\sf FORM}$--based code ${\sf Summer}$, \cite{Vermaseren:1998uu}. To illustrate the complications that arise once the complete dependence on $N$ is kept, we consider as an example the scalar integral contributing to $A_{Qg}^{(2)}$ shown in Figure \ref{diaex2L}. After momentum integration, it reads \begin{eqnarray} I_3 &=&\frac{(\Delta p)^{N-2}\Gamma(1-\varepsilon)}{(4\pi)^{4+\varepsilon}(m^2)^{1-\varepsilon}} \iiiint dudzdydx \frac{(1-u)^{-\varepsilon/2}z^{-\varepsilon/2}(1-z)^{\varepsilon/2-1}}{(1-u+uz)^{1-\varepsilon}(x-y)} \nonumber\\ &&\Biggl[ \Bigl(zyu+x(1-zu)\Bigr)^{N-1} -\Bigl((1-u)x+uy\Bigr)^{N-1}\Biggr]~, \end{eqnarray} where we have performed the finite sum already, which stems from the operator insertion. Here and below, the Feynman-parameter integrals are carried out over the respective unit-cube. This integral is of the type of Eq.~(\ref{Gen2L}) and the term $x-y$ in the denominator cancels for fixed values of $N$.
Due to the operator insertion on an internal vertex, it is one of the more involved integrals in the $2$--loop case. For almost all other integrals, all but two parameters can be integrated automatically, leaving only a single infinite sum of the type of Eq.~(\ref{F322L}) with $N$ appearing in the parameters of the hypergeometric function, cf. e.g. \cite{Bierenbaum:2007qe,SKdiploma,Bierenbaum:2007dm}. In order to render this example calculable, suitable variable transformations, as, e.g., given in Ref.~\cite{Hamberg:thesis}, are applied, \cite{Bierenbaum:2007qe,SKdiploma}. Thus one arrives at the following double sum \begin{eqnarray} I_3 &=&\frac{S_{\varepsilon}^2(\Delta p)^{N-2}}{(4\pi)^4(m^2)^{1-\varepsilon}} \exp\Biggl\{\sum_{l=2}^{\infty}\frac{\zeta_l}{l}\varepsilon^l\Biggr\} \frac{2\pi}{N\sin(\frac{\pi}{2}\varepsilon)} \sum_{j=1}^{N}\Biggl\{\binom{N}{j}(-1)^j+\delta_{j,N}\Biggr\} \nonumber \\ && \times\Biggl\{ \frac{\Gamma(j)\Gamma(j+1-\frac{\varepsilon}{2})} {\Gamma(j+2-\varepsilon)\Gamma(j+1+\frac{\varepsilon}{2})} -\frac{B(1-\frac{\varepsilon}{2},1+j)}{j}~ _3F_2\Biggl[\frac[0pt]{1-\varepsilon,\frac{\varepsilon}{2},j+1}{1,j+2 -\frac{\varepsilon}{2}};1\Biggr] \Biggr\} \nonumber\\ &=&\frac{S_{\varepsilon}^2(\Delta p)^{N-2}}{(4\pi)^4(m^2)^{1-\varepsilon}} \Bigl\{I_3^{(0)}+I_3^{(1)}\varepsilon +O(\varepsilon^2)\Bigr\}~.\label{ffinal} \end{eqnarray} Note that in our approach no expansion in $\varepsilon$ is needed until a sum--representation of the kind of Eq.~(\ref{ffinal}) is obtained. Having performed the momentum integrations, the expressions of almost all diagrams were given in terms of single generalized hypergeometric series $_3F_2$ at $z=1$, with possibly additional finite summations. These infinite sums could then be safely expanded in $\varepsilon$, leading to different kinds of sums depending on the Mellin--parameter $N$. 
The summands are typically products of harmonic sums with different arguments, weighted by summation parameters and contain hypergeometric terms~\footnote{$f(k)$ is hypergeometric in $k$ iff $f(k+1)/f(k)=g(k)$ for some fixed rational function $g(k)$.}, like binomials or Beta--function factors $B(N,i)$, cf. Eq.~(\ref{betafun1}). Here $i$ is a summation--index. In the most difficult cases, double sums as in Eq.~(\ref{ffinal}) or even triple sums were obtained, which had to be treated accordingly. In general, these sums can be expressed in terms of nested harmonic sums and $\zeta$--values. Note that sums containing Beta--functions with different arguments, e.g. $B(i,i),~B(N+i,i)$, usually do not lead to harmonic sums in the final result. Some of these sums can be performed by the existing packages \cite{Vermaseren:1998uu,Weinzierl:2002hv,Moch:2005uc}. However, there exists so far no automatic computer program to calculate sums which contain Beta--function factors of the type $B(N,i)$ and single harmonic sums in the summand. These sums can be calculated applying analytic methods, such as integral representations, and general summation methods, as encoded in the {\sf Sigma}~package \cite{Refined,Schneider:2007,sigma1,sigma2}. In the next Section, we will present details on this. Before finishing this Section, we give the result in terms of harmonic sums for the double sum in Eq.~(\ref{ffinal}) applying these summation methods. The $O(\varepsilon^0)$--term of Eq.~(\ref{ffinal}) is needed for the constant term $a_{Qg}^{(2)}$, cf. Refs.~\cite{Bierenbaum:2007qe,Bierenbaum:2007dm}. The linear term in $\varepsilon$ reads \begin{eqnarray} I_3^{(1)}&=&\frac{1}{N} \Biggl[-2{S_{2,1}}+2{S_3} +\frac{4N+1}{N}{S_2} -\frac{{S^2_1}}{N}-\frac{4}{N}{S_1}\Biggr]~, \end{eqnarray} where we adopt the convention that harmonic sums are taken at argument $N$, if not stated otherwise.
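For concreteness, the harmonic sums entering this result are, in the convention of Appendix~\ref{App-SpeFunHarm}, $S_k(N)=\sum_{i=1}^{N}1/i^k$ and $S_{2,1}(N)=\sum_{i=1}^{N}S_1(i)/i^2$. A minimal sketch evaluating the expression above with exact rational arithmetic (the helper names are illustrative):

```python
from fractions import Fraction

def S(k, N):
    """Single harmonic sum S_k(N) = sum_{i=1}^N 1/i^k."""
    return sum(Fraction(1, i**k) for i in range(1, N + 1))

def S21(N):
    """Nested harmonic sum S_{2,1}(N) = sum_{i=1}^N S_1(i)/i^2."""
    return sum(S(1, i) / i**2 for i in range(1, N + 1))

def I3_1(N):
    """Linear epsilon-term of Eq. (ffinal) as quoted in the text."""
    return (-2*S21(N) + 2*S(3, N) + Fraction(4*N + 1, N)*S(2, N)
            - S(1, N)**2 / N - 4*S(1, N) / N) / N

print(S(1, 3), S21(2))    # 11/6 11/8
print(I3_1(1), I3_1(2))   # 0 1/2
```

In particular, the expression vanishes at $N=1$, as the individual sums there collapse to $S_k(1)=S_{2,1}(1)=1$.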
\subsection{\bf\boldmath Difference Equations and Infinite Summation} \label{SubSec-2LInfSum} Single scale quantities in renormalizable quantum field theories are most simply represented in terms of nested harmonic sums, cf. \cite{Blumlein:1998if,Vermaseren:1998uu} and Appendix \ref{App-SpeFunHarm}, which holds at least up to 3--loop order for massless Yang--Mills theories and for a wide class of different processes. This includes the anomalous dimensions and massless Wilson coefficients for unpolarized and polarized space- and time-like processes to 3--loop order, the Wilson coefficients for the Drell-Yan process and pseudoscalar and scalar Higgs--boson production in hadron scattering in the heavy quark mass limit, as well as the soft- and virtual corrections to Bhabha scattering in the on--mass--shell--scheme to 2--loop order, cf.~\cite{Blumlein:2004bb,Blumlein:2005im,*Blumlein:2006rr,Dittmar:2005ed,Blumlein:2007dj,Vermaseren:2005qc,Moch:2004pa,Vogt:2004mw}. The corresponding Feynman--parameter integrals are such that nested harmonic sums appear in a natural way, working in Mellin space, \cite{Blumlein:2009ta,Blumlein:2009fz}. Single scale massive quantities at 2 loops, like the unpolarized and polarized heavy-flavor Wilson coefficients in the region $Q^2 \gg m^2$ as considered in this thesis, belong also to this class, \cite{Buza:1995ie,Buza:1996xr,Blumlein:2006mh,Bierenbaum:2007qe,Bierenbaum:2007pn,Blumlein:pol,Bierenbaum:2006mq,Bierenbaum:2007dm,Blumlein:trans}. Finite harmonic sums obey algebraic, cf. \cite{Blumlein:2003gb}, and structural relations, \cite{Blumlein:2009ta}, which can be used to obtain simplified expressions, both shortening the calculations and yielding compact final results. These representations have to be mapped to momentum-fraction space to use the respective quantities in experimental analyses. This is achieved by an inverse Mellin transform, which requires the analytic continuation of the harmonic sums w.r.t.
the Mellin index $N~{\in}~\mathbb{C}$, \cite{Blumlein:2000hw,*Blumlein:2005jg,Blumlein:2009ta,Blumlein:2009fz}. Calculating the massive OMEs in Mellin space, new types of infinite sums occur as compared to massless calculations. In the latter case, summation algorithms such as {\sf Summer},~\cite{Vermaseren:1998uu}, {\sf Nestedsums},~\cite{Weinzierl:2002hv}, and {\sf Xsummer},~\cite{Moch:2005uc}, may be used to calculate the respective sums. {\sf Summer} and {\sf Xsummer} are based on {\sf FORM}, while {\sf Nestedsums} is based on {\sf GiNaC}, \cite{Bauer:2000cp}. The new sums which emerge in \cite{Bierenbaum:2007qe,Bierenbaum:2007pn,Bierenbaum:2006mq,Bierenbaum:2007dm,Blumlein:pol,Bierenbaum:2008yu} can be calculated in different ways. In Refs.~\cite{Bierenbaum:2007qe,Bierenbaum:2007pn}, we chose analytic methods, and in the former reference all sums are given which are needed to calculate the constant term of the massive OMEs. Only a few of these sums can be calculated using general theorems, such as Gauss' theorem,~(\ref{Gauss}), Dixon's theorem, \cite{Slater}, or summation tables in the literature, cf.~\cite{Vermaseren:1998uu,Devoto:1983tc,*Blumlein:2004bs,*Blumleinunp}. In order to calculate the gluonic OMEs as well as the $O(\varepsilon)$--terms, many new sums had to be evaluated. For this we adopted a more systematic technique based on difference equations, which are the discrete equivalent of differential equations, cf.~\cite{Norlund,*Thomson}. This is a promising approach, since it allowed us to obtain all sums needed automatically, and it may be applied to entirely different single--scale processes as well. It is based on applying general summation algorithms in computer algebra. A first method is Gosper's telescoping algorithm,~\cite{GOSPER}, for hypergeometric terms. For practical applications, Zeilberger's extension of Gosper's algorithm to creative telescoping,~\cite{Zeil:91,AequalB1}, can be considered the breakthrough in symbolic summation.
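The principle underlying both algorithms is telescoping: for a hypergeometric summand $f(k)$ one seeks an antidifference $g(k)$ with $g(k+1)-g(k)=f(k)$, so that the sum collapses to boundary terms. A toy illustration (the certificate $g(k)=k!$ for $f(k)=k\cdot k!$ is found here by inspection, not by actually running Gosper's algorithm):

```python
from math import factorial

def f(k):
    # hypergeometric summand: f(k+1)/f(k) = (k+1)**2 / k is rational in k
    return k * factorial(k)

def g(k):
    # Gosper-style antidifference: g(k+1) - g(k) = f(k)
    return factorial(k)

n = 20
# telescoping certificate check
assert all(g(k + 1) - g(k) == f(k) for k in range(1, n + 1))
# hence sum_{k=1}^{n} k*k! telescopes to g(n+1) - g(1) = (n+1)! - 1
assert sum(f(k) for k in range(1, n + 1)) == factorial(n + 1) - 1
```

Creative telescoping extends this idea to sums carrying an extra parameter $N$, producing a recurrence in $N$ instead of a closed antidifference.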
The recent summation package~{\sf Sigma},~\cite{Refined,Schneider:2007,sigma1,sigma2}, written in ${\sf MATHEMATICA}$, opens up completely new possibilities in symbolic summation. Based on Karr's $\Pi\Sigma$-difference fields,~\cite{Karr:1981}, and further refinements,~\cite{Refined,RefinedDF}, the package contains summation algorithms,~\cite{SigmaAlg}, that allow one to solve not only hypergeometric sums, like Gosper's and Zeilberger's algorithms, but also sums involving indefinite nested sums. In this algebraic setting, one can represent indefinite nested sums and products completely algorithmically, without introducing any algebraic relations between them. Note that this general class of expressions covers as special cases the harmonic sums or generalized nested harmonic sums,~cf.~\cite{GON,Borwein:1999js,PET,Moch:2001zr}. Given such an optimal representation, introducing as few sums as possible, various summation principles are available in~{\sf Sigma}. In this work, we applied the following strategy, which has been generalized from the hypergeometric case,~\cite{AequalB1,AequalB2}, to the $\Pi\Sigma$-field setting. \begin{enumerate} \item Given a definite sum that involves an extra parameter $N$, we compute a recurrence relation in $N$ that is fulfilled by the input sum. The underlying difference field algorithms exploit Zeilberger's creative telescoping principle,~\cite{AequalB1,AequalB2}. \item Then we solve the derived recurrence in terms of the so-called d'Alembertian solutions,~\cite{AequalB1,AequalB2}. Since this class covers the~harmonic~sums, we find all solutions in terms of~harmonic~sums. \item Taking the initial values of the original input sum, we can combine the solutions found in step~2 in order to arrive at a closed representation in terms of harmonic sums. \end{enumerate} In the following, we give some examples on how {\sf Sigma}~works.
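The interplay of the three steps can be made explicit on a deliberately trivial toy sum, $\sum_{k=1}^{N}1/(k(k+1))$, for which the recurrence, a homogeneous solution, a particular solution, and the initial-value matching are all written out by hand below ({\sf Sigma}~automates the analogous steps for the actual sums):

```python
from fractions import Fraction

def T_direct(N):
    # toy "input sum": sum_{k=1}^N 1/(k(k+1))
    return sum(Fraction(1, k * (k + 1)) for k in range(1, N + 1))

def h(N):
    return Fraction(1)           # homogeneous solution of T(N+1) - T(N) = 0

def p(N):
    return Fraction(-1, N + 1)   # particular solution of the recurrence below

# step 1: the recurrence T(N+1) - T(N) = 1/((N+1)(N+2)), checked on the sum
for N in range(0, 10):
    assert T_direct(N + 1) - T_direct(N) == Fraction(1, (N + 1) * (N + 2))
    assert p(N + 1) - p(N) == Fraction(1, (N + 1) * (N + 2))

# step 3: fix the constant from the initial value T(0) = 0
c = (T_direct(0) - p(0)) / h(0)
assert c == 1
# closed form: T(N) = 1 - 1/(N+1) = N/(N+1)
assert all(T_direct(N) == c * h(N) + p(N) for N in range(20))
```

Everything here is hand-made and purely illustrative of the general solution = particular + homogeneous structure exploited in steps 2 and 3.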
A few typical sums we had to calculate are listed in Appendix \ref{App-Sums} and a complete set of sums needed to calculate the 2--Loop OMEs up to $O(\varepsilon)$ can be found in Appendix B of Refs.~\cite{Bierenbaum:2007qe,Bierenbaum:2008yu}. Note that in this calculation also more well-known sums occur, which can easily be solved using, e.g., ${\sf Summer}$. \subsubsection{The Sigma-Approach} \label{SubSubSec-2LSigma} As a first example we consider the sum \begin{eqnarray} T_1(N)\equiv \sum_{i=1}^{\infty} \frac{B(N,i)}{i+N+2}S_1(i)S_1(N+i)~. \label{1Beta25} \end{eqnarray} We treat the upper bound of the sum as a finite integer, i.e., we consider the truncated version $$T_1(a,N)\equiv\sum_{i=1}^{a} \frac{B(N,i)}{i+N+2}S_1(i)S_1(N+i) ,$$ for $a\in\mathbb{N}$. Given this sum as input, we apply {\sf Sigma}'s creative telescoping algorithm and find a recurrence for $T_1(a,N)$ of the form \begin{equation} \label{Equ:Rec} c_0(N)T_1(a,N)+\dots+c_d(N)T_1(a,N+d)=q(a,N) \end{equation} with order $d=4$. Here, the $c_i(N)$ and $q(a,N)$ are known functions of $N$ and $a$.
Finally, we perform the limit $a\to\infty$ and we end up with the recurrence \begin{multline*} -N (N+1)(N+2)^2 \Bigl\{4 N^5+68 N^4+455 N^3+1494 N^2+2402 N+1510\Bigr\} T_1(N)\\ -(N+1)(N+2)(N+3) \Bigl\{16 N^5+260N^4+1660 N^3+5188 N^2+7912 N+4699\Bigr\} \\ \times T_1(N+1) +(N+2)(N+4)(2 N+5) \Bigl\{4 N^6+74 N^5+542 N^4+1978 N^3+3680 N^2 \\ +3103N+767\Bigr\} T_1(N+2) +(N+4)(N+5) \Bigl\{16 N^6+276 N^5+1928 N^4+6968 N^3 \\ +13716 N^2+13929N+5707\Bigr\}T_1(N+3) -(N+4)(N+5)^2 (N+6) \Bigl\{4 N^5+48 N^4\\ +223 N^3 +497 N^2+527 N +211\Bigr\}T_1(N+4) =P_1(N)+P_2(N)S_1(N) \end{multline*} where \begin{align*} P_1(N)&=\Big(32 N^{18}+1232 N^{17}+21512 N^{16}+223472 N^{15}+1514464 N^{14}+6806114 N^{13}\\ &+18666770N^{12}+15297623 N^{11}-116877645 N^{10}-641458913 N^9-1826931522N^8\\ &-3507205291 N^7-4825457477 N^6-4839106893 N^5-3535231014 N^4\\ &-1860247616 N^3 -684064448 N^2-160164480 N-17395200\Big) \\ &\big/\big(N^3 (N+1)^3 (N+2)^3 (N+3)^2 (N+4) (N+5)\big) \end{align*} and \begin{align*} P_2(N)&=-4\Big(4 N^{14}+150 N^{13}+2610 N^{12}+27717 N^{11}+199197 N^{10}+1017704 N^9 \\ &+3786588N^8 +10355813 N^7+20779613 N^6+30225025 N^5+31132328 N^4\\ &+21872237 N^3+9912442N^2 +2672360 N+362400\Big) \\ &\big/\big(N^2 (N+1)^2 (N+2)^2 (N+3) (N+4) (N+5)\big)~.
\end{align*} In the next step, we apply {\sf Sigma}'s recurrence solver to the computed recurrence and find the four linearly independent solutions \begin{eqnarray*} h_1(N)=\frac{1}{N+2},&h_2(N)&=\frac{(-1)^N}{N(N+1)(N+2)},\\ h_3(N)=\frac{S_1(N)}{N+2},& h_4(N)&=(-1)^N\frac{\big(1+(N+1)S_1(N)\big)}{N(N+1)^2(N+2)}~, \end{eqnarray*} of the homogeneous version of the recurrence and the particular solution \begin{eqnarray*} p(N) &=& \frac{2(-1)^N}{N(N+1)(N+2)}\Biggl[ 2S_{-2,1}(N) -3S_{-3}(N) -2S_{-2}(N)S_1(N) -\zeta_2S_1(N)\nonumber\\ && -\zeta_3 -\frac{2S_{-2}(N)+\zeta_2}{N+1} \Biggr] -2\frac{S_3(N)-\zeta_3}{N+2} -\frac{S_2(N)-\zeta_2}{N+2}S_1(N)\nonumber\\ && +\frac{2+7N+7N^2+5N^3+N^4} {N^3(N+1)^3(N+2)}S_1(N) +2\frac{2+7N+9N^2+4N^3+N^4} {N^4(N+1)^3(N+2)} \end{eqnarray*} of the recurrence itself. Finally, we look for constants $c_1,\dots,c_4$ such that $$T_1(N)=c_1\,h_1(N)+c_2\,h_2(N)+c_3\,h_3(N)+c_4\,h_4(N)+p(N)~.$$ The calculation of the necessary initial values for $N=0,1,2,3$ does not pose a problem for {\sf Sigma}\ and we conclude that $c_1=c_2=c_3=c_4=0$. Hence the final result reads \begin{eqnarray} T_1(N) &=& \frac{2(-1)^N}{N(N+1)(N+2)}\Biggl[ 2S_{-2,1}(N) -3S_{-3}(N) -2S_{-2}(N)S_1(N) -\zeta_2S_1(N)\nonumber\\ && -\zeta_3 -\frac{2S_{-2}(N)+\zeta_2}{N+1} \Biggr] -2\frac{S_3(N)-\zeta_3}{N+2} -\frac{S_2(N)-\zeta_2}{N+2}S_1(N)\nonumber\\ && +\frac{2+7N+7N^2+5N^3+N^4} {N^3(N+1)^3(N+2)}S_1(N) +2\frac{2+7N+9N^2+4N^3+N^4} {N^4(N+1)^3(N+2)}~. \nonumber\\ \label{2Beta25} \end{eqnarray} Using more refined algorithms of {\sf Sigma}, see e.g. 
\cite{AhlgrenPade1,*AhlgrenPade2}, even a first order difference equation can be obtained \begin{eqnarray} &&(N+2)T_1(N)-(N+3)T_1(N+1)\nonumber\\\nonumber\\ &=& 2\frac{(-1)^N}{N(N+2)}\Biggl( -\frac{3N+4} {(N+1)(N+2)}\Bigl(\zeta_2+2S_{-2}(N)\Bigr) -2\zeta_3-2S_{-3}(N)-2\zeta_2S_1(N) \nonumber\\ && -4S_{1,-2}(N) \Biggr) +\frac{N^6+8N^5+31N^4+66N^3+88N^2+64N+16}{N^3(N+1)^2(N+2)^3}S_1(N)\nonumber\\ &&+\frac{S_2(N)-\zeta_2}{N+1} +2\frac{N^5+5N^4+21N^3+38N^2+28N+8}{N^4(N+1)^2(N+2)^2}~. \label{Beta25eq2} \end{eqnarray} However, in deriving Eq.~(\ref{Beta25eq2}), use had to be made of further sums of lower complexity, which had to be calculated separately. As above, we can easily solve the recurrence and obtain again the result~\eqref{2Beta25}. Here and in the following, we applied various algebraic relations between harmonic sums to obtain a simplification of our results, cf.~\cite{Blumlein:2003gb}. \subsubsection{Alternative Approaches} \label{SubSubSec-2LAlt} As a second example we consider the sum \begin{eqnarray} T_2(N)\equiv\sum_{i=1}^{\infty}\frac{S^2_1(i+N)}{i^2}~,\label{1Harm8} \end{eqnarray} which does not contain a Beta--function. In a first attempt, we proceed as for the first example, $T_1(N)$. The naive application of {\sf Sigma}\ yields a fifth order difference equation, which is clearly too complex for this sum. However, similar to the situation for $T_1(N)$, {\sf Sigma}\ can reduce it to a third order relation, which reads \begin{eqnarray} \label{eq2} &&T_2(N)(N+1)^2 -T_2(N+1)(3N^2+10N+9)\nonumber\\ &&+T_2(N+2)(3N^2+14N+17) -T_2(N+3)(N+3)^2\nonumber\\\nonumber\\ &=&\frac{ 6N^5+48N^4+143N^3+186N^2+81N-12 } {(N+1)^2(N+2)^3(N+3)^2} - 2\frac{2N^2+7N+7 } {(N+1)(N+2)^2(N+3)}S_1(N)\nonumber\\ && +\frac{-2N^6-24N^5-116N^4-288N^3-386N^2-264N-72 } {(N+1)^2(N+2)^3(N+3)^2}\zeta_2\label{1diffeqT3}~. \end{eqnarray} Solving this recurrence relation in terms of harmonic sums gives a closed form, see~\eqref{Harm8} below.
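Closed forms of this kind are easily validated numerically against the defining series. A sketch comparing Eq.~(\ref{2Beta25}) with a truncation of the sum (\ref{1Beta25}) (truncation point and tolerance are chosen ad hoc; for $N\geq 2$ the neglected tail is far below the tolerance):

```python
import math

def S(m, N):
    """Single harmonic sum S_m(N); negative m denotes the alternating sum."""
    if m > 0:
        return sum(1.0 / k**m for k in range(1, N + 1))
    return sum((-1)**k / k**(-m) for k in range(1, N + 1))

def Sm21(N):
    """S_{-2,1}(N) = sum_{k=1}^N (-1)^k / k^2 * S_1(k)."""
    return sum((-1)**k / k**2 * S(1, k) for k in range(1, N + 1))

z2, z3 = math.pi**2 / 6, 1.2020569031595943   # zeta(2), zeta(3)

def T1_closed(N):
    """Right-hand side of Eq. (2Beta25)."""
    s1, s2, s3 = S(1, N), S(2, N), S(3, N)
    sm2, sm3 = S(-2, N), S(-3, N)
    return (2 * (-1)**N / (N * (N + 1) * (N + 2))
            * (2 * Sm21(N) - 3 * sm3 - 2 * sm2 * s1 - z2 * s1 - z3
               - (2 * sm2 + z2) / (N + 1))
            - 2 * (s3 - z3) / (N + 2)
            - (s2 - z2) / (N + 2) * s1
            + (2 + 7*N + 7*N**2 + 5*N**3 + N**4)
              / (N**3 * (N + 1)**3 * (N + 2)) * s1
            + 2 * (2 + 7*N + 9*N**2 + 4*N**3 + N**4)
              / (N**4 * (N + 1)**3 * (N + 2)))

def T1_truncated(N, M=20000):
    """Partial sum of Eq. (1Beta25); B(N,i) evaluated via log-Gammas."""
    total, s1_i, s1_Ni = 0.0, 0.0, S(1, N)
    for i in range(1, M + 1):
        s1_i += 1.0 / i                        # S_1(i)
        s1_Ni += 1.0 / (N + i)                 # S_1(N+i)
        B = math.exp(math.lgamma(N) + math.lgamma(i) - math.lgamma(N + i))
        total += B / (i + N + 2) * s1_i * s1_Ni
    return total
```

At $N=1$ the closed form collapses to $\frac{1}{2}+\frac{5}{6}\zeta_2+\zeta_3$, which provides an additional hand-checkable value.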
Still (\ref{eq2}) represents a rather involved way to solve the problem. It is advantageous to map the numerator $S_1^2(i+N)$ into a linear representation, which can be achieved using Euler's relation \begin{eqnarray} S_a^2(N) = 2 S_{a,a}(N) - S_{2a}(N),~~~a > 0~. \end{eqnarray} This is realized in ${\sf Summer}$ by the {\sf basis}--command for general--type harmonic sums, \begin{eqnarray} T_2(N)=\sum_{i=1}^{\infty}\frac{2S_{1,1}(i+N)-S_2(i+N)}{i^2} ~.\label{2Harm8} \end{eqnarray} As outlined in Ref.~\cite{Vermaseren:1998uu}, sums of this type can be evaluated by considering the difference \begin{eqnarray} D_2(j)&=&T_2(j)-T_2(j-1) =2 \sum_{i=1}^{\infty}\frac{S_1(j+i)}{i^2(j+i)} -\sum_{i=1}^{\infty} \frac{1}{i^2(j+i)^2}~.\label{2diffeqT2} \end{eqnarray} The solution is then obtained by summing (\ref{2diffeqT2}) to \begin{eqnarray} T_2(N)=\sum_{j=1}^ND_2(j)+T_2(0)~. \end{eqnarray} The sums in Eq.~(\ref{2diffeqT2}) are now calculable trivially or are of less complexity than the original sum. In the case considered here, only the first sum on the right--hand side is not trivial. However, after partial fractioning, one can repeat the same procedure, resulting in another difference equation, which is now easily solved. Thus, using this technique, the solution of Eq.~(\ref{1Harm8}) can be obtained by summing two first order difference equations or solving a second order one. The above procedure is well known and some of the summation--algorithms of ${\sf Summer}$ are based on it. As a consequence, infinite sums with an arbitrary number of harmonic sums with the same argument can be performed using this package. Note that sums containing harmonic sums with different arguments, see, e.g., Eq.~(\ref{1Harm27}), can in principle be summed automatically using the same approach. However, this feature is not yet built into ${\sf Summer}$. A third way to obtain the sum (\ref{1Harm8}) consists of using integral representations for harmonic sums, \cite{Blumlein:1998if}.
One finds \begin{eqnarray} T_2(N) &=&2\sum_{i=1}^{\infty} \int_0^1dx\frac{x^{i+N}}{i^2}\Bigl(\frac{\ln(1-x)}{1-x}\Bigr)_+ -\sum_{i=1}^{\infty} \Biggl(\int_0^1dx\frac{x^{i+N}}{i^2}\frac{\ln(x)}{1-x} +\frac{\zeta_2}{i^2}\Biggr)\nonumber\\ &=&2\mbox{\rm\bf M}\Bigl[\Bigl(\frac{\ln(1-x)}{1-x}\Bigr)_+\mbox{Li}_2(x)\Bigr](N+1) \nonumber\\ && -\Biggl(\mbox{\rm\bf M}\Bigl[\frac{\ln(x)}{1-x}\mbox{Li}_2(x)\Bigr](N+1)+\zeta_2^2 \Biggr)~. \label{Harm8mellin} \end{eqnarray} Here the Mellin--transform is defined in Eq.~(\ref{Mellintrans}). Eq.~(\ref{Harm8mellin}) can then be easily calculated since the corresponding Mellin--transforms are well--known, \cite{Blumlein:1998if}. Each of the three methods above leads to \begin{eqnarray} T_2(N)= \frac{17}{10}\zeta_2^2 +4S_1(N)\zeta_3 +S^2_1(N)\zeta_2 -S_2(N)\zeta_2 -2S_1(N)S_{2,1}(N) -S_{2,2}(N) ~. \label{Harm8} \end{eqnarray} As a third example we would like to evaluate the sum \begin{eqnarray} \label{1Harm27} T_3(N) = \sum_{i=1}^{\infty} \frac{S_1^2(i+N) S_1(i)}{i}~. \end{eqnarray} Note that (\ref{1Harm27}) is divergent. In order to treat this divergence, the symbol $\sigma_1$, cf. Eq.~(\ref{sigmaval}), is used. The application of {\sf Sigma}~to this sum yields a fourth order difference equation \begin{eqnarray} &&(N+1)^2(N+2)T_3(N) -(N+2)\left(4N^2+15N+15\right)T_3(N+1) \nonumber\\ &&+(2N+5)\left(3N^2+15N+20\right)T_3(N+2) -(N+3)\left(4N^2+25N+40\right)T_3(N+3)\nonumber\\ &&+(N+3)(N+4)^2T_3(N+4)\nonumber\\ &=& \frac{6N^5+73N^4+329N^3+684N^2+645N+215} {(N+1)^2(N+2)^2(N+3)^2} +\frac{6N^2+19N+9} {(N+1)(N+2)(N+3)}S_1(N)\label{1diffeqT2}~, \nonumber\\ \end{eqnarray} which can be solved.
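The closed form (\ref{Harm8}) can likewise be checked against the definition (\ref{1Harm8}); here the convergence of the partial sums is only logarithmic, $\sim \ln^2(M)/M$, hence the generous truncation and tolerance in this sketch:

```python
import math

z2, z3 = math.pi**2 / 6, 1.2020569031595943   # zeta(2), zeta(3)

def S(m, N):
    return sum(1.0 / k**m for k in range(1, N + 1))

def S_nested(m1, m2, N):
    """S_{m1,m2}(N) for positive indices, built incrementally."""
    total, inner = 0.0, 0.0
    for k in range(1, N + 1):
        inner += 1.0 / k**m2        # S_{m2}(k)
        total += inner / k**m1
    return total

def T2_closed(N):
    """Right-hand side of Eq. (Harm8)."""
    s1, s2 = S(1, N), S(2, N)
    return (1.7 * z2**2 + 4 * s1 * z3 + s1**2 * z2 - s2 * z2
            - 2 * s1 * S_nested(2, 1, N) - S_nested(2, 2, N))

def T2_truncated(N, M=500000):
    """Partial sum of Eq. (1Harm8); S_1(N+i) is extended term by term."""
    s1, total = S(1, N), 0.0
    for i in range(1, M + 1):
        s1 += 1.0 / (N + i)         # S_1(N+i)
        total += s1**2 / i**2
    return total
```

At $N=0$ the closed form reduces to the known value $\sum_{i\geq 1}S_1^2(i)/i^2=\frac{17}{10}\zeta_2^2$.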
As in the foregoing example, the better way to calculate the sum is to first change $S_1^2(i+N)$ into a linear basis representation \begin{eqnarray} T_3(N)=\sum_{i=1}^{\infty}\frac{2S_{1,1}(i+N)-S_2(i+N)}{i}S_1(i) ~.\label{2Harm27} \end{eqnarray} One may now calculate $T_3(N)$ using telescoping for the difference \begin{eqnarray} D_3(j)&=&T_3(j)-T_3(j-1) =2\sum_{i=1}^{\infty}\frac{S_{1}(i+j)S_1(i)}{i(i+j)} -\sum_{i=1}^{\infty}\frac{S_1(i)}{i(i+j)^2}~,\label{2diffeqT3} \end{eqnarray} with \begin{eqnarray} T_3(N)=\sum_{j=1}^ND_3(j)+T_3(0)~. \end{eqnarray} One finally obtains \begin{eqnarray} T_3(N)&=& \frac{\sigma_1^4}{4} +\frac{43}{20}\zeta_2^2 +5S_1(N)\zeta_3 +\frac{3S^2_1(N)-S_2(N)}{2}\zeta_2 -2S_1(N)S_{2,1}(N) \nonumber\\ &&+S^2_1(N)S_2(N) +S_1(N)S_3(N) -\frac{S^2_2(N)}{4} +\frac{S^4_1(N)}{4} ~.\label{Harm27} \end{eqnarray} \subsection{\bf\boldmath Results} \label{SubSec-2LRes} For the singlet contributions, we leave out an overall factor \begin{eqnarray} \frac{1+(-1)^N}{2}~ \end{eqnarray} in the following. This factor emerges naturally in our calculation and is due to the fact that in the light--cone expansion, only even values of $N$ contribute to $F_2$ and $F_L$, cf. Section~\ref{SubSec-DISComptLCE}. Additionally, we do not choose a linear representation in terms of harmonic sums as was done in Refs.~\cite{Vermaseren:2005qc,Moch:2004pa,Vogt:2004mw}, since these are non--minimal w.r.t. the corresponding quasi--shuffle algebra, \cite{Hoffman:1997,*Hoffman:2004bf}. Due to this, a much smaller number of harmonic sums contributes. Remainder terms can be expressed by polynomials $P_i(N)$. Single harmonic sums with negative index are expressed in terms of the function $\beta(N+1)$, cf. Appendix \ref{App-SpeFunHarm}. For completeness, we also give all pole terms and the constant terms of the quarkonic OMEs. The latter have been obtained before in Refs.~\cite{Buza:1995ie,Bierenbaum:2007qe}.
The pole terms can be expressed via the ${\sf LO}$--, \cite{Gross:1973juxGross:1974cs,*Georgi:1951sr}, and the fermionic parts of the ${\sf NLO}$,~\cite{Floratos:1977auxFloratos:1977aue1,Floratos:1978ny,GonzalezArroyo:1979df,GonzalezArroyo:1979he,*Curci:1980uw,*Furmanski:1980cm,Hamberg:1991qt}, anomalous dimensions and the $1$--loop $\beta$--function, \cite{Khriplovich:1969aa,tHooft:unpub,Politzer:1973fx,Gross:1973id}. We first consider the matrix element $A_{Qg}^{(2)}$, which is the most complex of the $2$--loop OMEs. For the calculation we used the projector given in Eq.~(\ref{projG1}) and therefore have to include diagrams with external ghost lines as well. The $1$--loop result is straightforward to calculate and has already been given in Eqs. (\ref{AhhhQg1},~\ref{AQg1MSren}). As explained in Section~\ref{Sec-REN}, we perform the calculation accounting for 1--particle reducible diagrams. Hence the 1--loop massive gluon self--energy term, Eq.~(\ref{GluSelf1}), contributes. The unrenormalized $2$--loop OME is then given in terms of 1--particle irreducible and reducible contributions by \begin{eqnarray} \hat{\hspace*{-1mm}\hat{A}}_{Qg}^{(2)}&=& \hat{\hspace*{-1mm}\hat{A}}_{Qg}^{(2), \mbox{irr}}- ~\hat{\hspace*{-1mm}\hat{A}}_{Qg}^{(1)} \hat{\Pi}^{(1)}~\Bigl(0,\frac{\hat{m}^2}{\mu^2}\Bigr)~. \end{eqnarray} Using the techniques described in the previous Sections, the pole--terms predicted by renormalization in Eq.~(\ref{AhhhQg2}) are obtained, which have been given in Refs.~\cite{Buza:1995ie,Bierenbaum:2007qe} before. 
Here, the contributing $1$--loop anomalous dimensions are \begin{eqnarray} \gamma_{qq}^{(0)}&=& 4 C_F\Biggl\{2S_1-\frac{3N^2+3N+2}{2N(N+1)}\Biggr\}~, \\ \hat{\gamma}_{qg}^{(0)} &=&-8T_F\frac{N^2+N+2}{N(N+1)(N+2)}~, \\ \gamma_{gg}^{(0)} &=&8C_A\Biggl\{S_1-\frac{2(N^2+N+1)} {(N-1)N(N+1)(N+2)}\Biggr\}-2 \beta_0~, \end{eqnarray} and the $2$--loop contribution reads \begin{eqnarray} \hat{\gamma}_{qg}^{(1)}&=& 8C_FT_F \Biggl\{ 2\frac{N^2+N+2} {N(N+1)(N+2)} \left[ S_2 -S_1^2 \right] +\frac{4}{N^2}S_1 -\frac{P_1} {N^3(N+1)^3(N+2)} \Biggr\} \nonumber\\ && +16C_AT_F \Biggl\{ \frac{N^2+N+2} {N(N+1)(N+2)} \left[ S_2 +S_1^2 -2\beta' -\zeta_2 \right] \nonumber\\ && -\frac{4(2N+3)S_1} {(N+1)^2(N+2)^2} -\frac{P_2} {(N-1)N^3(N+1)^3(N+2)^3} \Biggr\}~, \\ \nonumber\\ P_1&=&5N^6+15N^5+36N^4+51N^3+25N^2+8N+4~, \nonumber\\ P_2&=&N^9+6N^8+15N^7+25N^6+36N^5+85N^4+128N^3 \nonumber\\ && +104N^2+64N+16~. \end{eqnarray} These terms agree with the literature and provide a strong check on the calculation. The constant term in $\varepsilon$ in Eq.~(\ref{AhhhQg2}) is determined after mass renormalization, \cite{Buza:1995ie,Bierenbaum:2007qe,SKdiploma}. 
\begin{eqnarray} a_{Qg}^{(2)}&=& T_FC_F\Biggl\{ \frac{4(N^2+N+2)} {3N(N+1)(N+2)} \Bigl( 4S_3 -3S_2S_1 -S_1^3 -6S_1\zeta_2 \Bigr) +4\frac{3N+2} {N^2(N+2)}S_1^2 \nonumber\\ && \hspace{-10mm} +4\frac{N^4+17N^3+17N^2-5N-2} {N^2(N+1)^2(N+2)}S_2 +2\frac{(3N^2+3N+2)(N^2+N+2)} {N^2(N+1)^2(N+2)}\zeta_2 \nonumber\\ &&\hspace{-10mm} +4\frac{N^4-N^3-20N^2-10N-4} {N^2(N+1)^2(N+2)}S_1 +\frac{2P_3} {N^4(N+1)^4(N+2)} \Biggr\} \nonumber\\ &&\hspace{-10mm} +T_FC_A\Biggl\{ \frac{2(N^2+N+2)} {3N(N+1)(N+2)} \Bigl( -24S_{-2,1} +6\beta'' +16S_3 -24\beta'S_1 +18S_2S_1 +2S_1^3 \nonumber\\ &&\hspace{-10mm} -9\zeta_3 \Bigr) -16\frac{N^2-N-4} {(N+1)^2(N+2)^2}\beta' -4\frac{7N^5+21N^4+13N^3+21N^2+18N+16} {(N-1)N^2(N+1)^2(N+2)^2}S_2 \nonumber\\ &&\hspace{-10mm} -4\frac{N^3+8N^2+11N+2} {N(N+1)^2(N+2)^2}S_1^2 -8\frac{N^4-2N^3+5N^2+2N+2} {(N-1)N^2(N+1)^2(N+2)}\zeta_2 \nonumber\\ &&\hspace{-10mm} -\frac{4P_4} {N(N+1)^3(N+2)^3}S_1 +\frac{4P_5} {(N-1)N^4(N+1)^4(N+2)^4} \Biggr\}~, \label{aQg2} \end{eqnarray} where the polynomials in Eq.~(\ref{aQg2}) are given by \begin{eqnarray} P_3&=&12N^8+52N^7+132N^6+216N^5+191N^4+54N^3-25N^2\nonumber\\ && -20N-4~, \\ P_4&=&N^6+8N^5+23N^4+54N^3+94N^2+72N+8~, \\ P_5&=&2N^{12}+20N^{11}+86N^{10}+192N^9+199N^8-N^7-297N^6-495N^5 \nonumber\\ && -514N^4-488N^3-416N^2-176N-32~. 
\end{eqnarray} The newly calculated $O(\varepsilon)$ contribution to $A_{Qg}^{(2)}$, \cite{Bierenbaum:2008yu}, reads after mass renormalization \begin{eqnarray} \overline{a}_{Qg}^{(2)}&=& T_FC_F\Biggl\{ \frac{N^2+N+2} {N(N+1)(N+2)} \Bigl( 16S_{2,1,1} -8S_{3,1} -8S_{2,1}S_1 +3S_4 -\frac{4}{3}S_3S_1 -\frac{1}{2}S^2_2 \nonumber\\&&\hspace{-10mm} -S_2S^2_1 -\frac{1}{6}S^4_1 +2\zeta_2S_2 -2\zeta_2S^2_1 -\frac{8}{3}\zeta_3S_1 \Bigr) -8\frac{N^2-3N-2} {N^2(N+1)(N+2)}S_{2,1} \nonumber\\&&\hspace{-10mm} +\frac{2}{3}\frac{3N+2} {N^2(N+2)}S^3_1 +\frac{2}{3}\frac{3N^4+48N^3+43N^2-22N-8} {N^2(N+1)^2(N+2)}S_3 +2\frac{3N+2} {N^2(N+2)}S_2S_1 \nonumber\\&&\hspace{-10mm} +4\frac{S_1} {N^2}\zeta_2 +\frac{2}{3}\frac{(N^2+N+2)(3N^2+3N+2)} {N^2(N+1)^2(N+2)}\zeta_3 +\frac{P_{6}} {N^3(N+1)^3(N+2)}S_2 \nonumber\\&&\hspace{-10mm} +\frac{N^4-5N^3-32N^2-18N-4} {N^2(N+1)^2(N+2)}S^2_1 -2\frac{2N^5-2N^4-11N^3-19N^2-44N-12} {N^2(N+1)^3(N+2)}S_1 \nonumber\\&&\hspace{-10mm} -\frac{5N^6+15N^5+36N^4+51N^3+25N^2+8N+4} {N^3(N+1)^3(N+2)}\zeta_2 -\frac{P_{7}} {N^5(N+1)^5(N+2)} \Biggr\} \nonumber\\&&\hspace{-10mm} +T_F{C_A}\Biggl\{ \frac{N^2+N+2} {N(N+1)(N+2)} \Bigl( 16S_{-2,1,1} -4S_{2,1,1} -8S_{-3,1} -8S_{-2,2} -4S_{3,1} -\frac{2}{3}\beta''' \nonumber\\&&\hspace{-10mm} +9S_4 -16S_{-2,1}S_1 +\frac{40}{3}S_1S_3 +4\beta''S_1 -8\beta'S_2 +\frac{1}{2}S^2_2 -8\beta'S^2_1 +5S^2_1S_2 +\frac{1}{6}S^4_1 \nonumber\\&&\hspace{-10mm} -\frac{10}{3}S_1\zeta_3 -2S_2\zeta_2 -2S^2_1\zeta_2 -4\beta'\zeta_2 -\frac{17}{5}\zeta_2^2 \Bigr) -8\frac{N^2+N-1} {(N+1)^2(N+2)^2}\zeta_2S_1 \nonumber\\ \nonumber \\&&\hspace{-10mm} +\frac{4(N^2-N-4)} {(N+1)^2(N+2)^2} \Bigl( -4S_{-2,1} +\beta'' -4\beta'S_1 \Bigr) -\frac{2}{3}\frac{N^3+8N^2+11N+2} {N(N+1)^2(N+2)^2}S^3_1 \nonumber\\ \nonumber\\&&\hspace{-10mm} -\frac{16}{3}\frac{N^5+10N^4+9N^3+3N^2+7N+6} {(N-1)N^2(N+1)^2(N+2)^2}S_3 +8\frac{N^4+2N^3+7N^2+22N+20} {(N+1)^3(N+2)^3}\beta' \nonumber\\ \nonumber\\&&\hspace{-10mm} +2\frac{3N^3-12N^2-27N-2} {N(N+1)^2(N+2)^2}S_2S_1 
-\frac{2}{3}\frac{9N^5-10N^4-11N^3+68N^2+24N+16} {(N-1)N^2(N+1)^2(N+2)^2}\zeta_3 \nonumber\\ \nonumber\\ &&\hspace{-10mm} -\frac{P_{8}S_2} {(N-1)N^3(N+1)^3(N+2)^3} -\frac{P_{10}S^2_1} {N(N+1)^3(N+2)^3} +\frac{2P_{11}S_1} {N(N+1)^4(N+2)^4} \nonumber\\ \nonumber\\ &&\hspace{-10mm} -\frac{2P_{9}\zeta_2} {(N-1)N^3(N+1)^3(N+2)^2} -\frac{2P_{12}} {(N-1)N^5(N+1)^5(N+2)^5} \Biggr\}~,\label{aQg2bar} \end{eqnarray} with the polynomials \begin{eqnarray} P_{6}&=&3N^6+30N^5+15N^4-64N^3-56N^2-20N-8~, \end{eqnarray} \begin{eqnarray} P_{7}&=&24N^{10}+136N^9+395N^8+704N^7+739N^6 +407N^5+87N^4 \nonumber\\ && +27N^3+45N^2+24N+4~, \\ P_{8}&=&N^9+21N^8+85N^7+105N^6+42N^5+290N^4+600N^3+456N^2\nonumber\\ && +256N+64~, \\ P_{9}&=&(N^3+3N^2+12N+4)(N^5-N^4+5N^2+N+2)~,\\ P_{10}&=&N^6+6N^5+7N^4+4N^3+18N^2+16N-8~,\\ P_{11}&=&2N^8+22N^7+117N^6+386N^5+759N^4+810N^3+396N^2\nonumber\\ && +72N+32~,\\ P_{12}&=&4N^{15}+50N^{14}+267N^{13}+765N^{12}+1183N^{11}+682N^{10} -826N^9 \nonumber\\&& -1858N^8-1116N^7+457N^6+1500N^5+2268N^4+2400N^3 \nonumber\\ && +1392N^2+448N+64~. \end{eqnarray} Note that the terms $\propto~\zeta_3$ in Eq.~(\ref{aQg2}) and $\propto~\zeta_2^2$ in Eq.~(\ref{aQg2bar}) are only due to the representation using the $\beta^{(k)}$--functions and are absent in representations using harmonic sums. The results for the individual diagrams contributing to $A_{Qg}^{(2)}$ can be found up to $O(\varepsilon^0)$ in Ref.~\cite{Bierenbaum:2007qe} and at $O(\varepsilon)$ in Ref.~\cite{Bierenbaum:2008yu}. Since harmonic sums appear in a wide variety of applications, it is interesting to study the pattern in which they emerge. In Table~\ref{table:CompRes}, we list the harmonic sums contributing to each individual diagram~\footnote{Cf. Ref.~\cite{Buza:1995ie} for the labeling of the diagrams.}. 
{\tiny \begin{table}[htb] \caption{\sf Complexity of the results for the individual diagrams contributing to $A_{Qg}^{(2)}$ } \label{table:CompRes} \begin{center} \renewcommand{\arraystretch}{1.1} \begin{tabular}{||l|c|c|c|c|c|c|c|c|c|c|c|c|c|r||} \hline \hline Diagram & $S_1$ & $S_2$ & $S_3$ & $S_4$ & $S_{-2}$ & $S_{-3}$ & $S_{-4}$ & $S_{2,1}$ & $S_{-2,1}$ & $S_{-2,2}$ & $S_{3,1}$ & $S_{-3,1}$ & $S_{2,1,1}$& $S_{-2,1,1}$ \\ \hline \hline A & &+&+& & & & & & & & & & & \\ B &+&+&+&+& & & &+& & &+& &+& \\ C & &+&+& & & & & & & & & & & \\ D &+&+&+& & & & &+& & & & & & \\ E &+&+&+& & & & &+& & & & & & \\ F &+&+&+&+& & & &+& & & & &+& \\ G &+&+&+& & & & &+& & & & & & \\ H &+&+&+& & & & &+& & & & & & \\ I &+&+&+&+&+&+&+&+&+&+&+&+&+& +\\ J & &+&+& & & & & & & & & & & \\ K & &+&+& & & & & & & & & & & \\ L &+&+&+&+& & & &+& & &+& &+& \\ M & &+&+& & & & & & & & & & & \\ N &+&+&+&+&+&+&+&+&+&+&+&+&+&+ \\ O &+&+&+&+& & & &+& & &+& &+& \\ P &+&+&+&+& & & &+& & &+& &+& \\ S & &+&+& & & & & & & & & & & \\ T & &+&+& & & & & & & & & & & \\ \hline\hline \end{tabular} \renewcommand{\arraystretch}{1.0} \end{center} \end{table} } The $\beta$--function and its derivatives can be traced back to the single non--alternating harmonic sums, allowing for half-integer arguments, cf. \cite{Blumlein:1998if} and Appendix \ref{App-SpeFunHarm}. Therefore, all single harmonic sums form an equivalence class represented by the sum $S_1$, from which the other single harmonic sums are easily derived through differentiation and half-integer relations. Additionally, we have already made use of the algebraic relations, \cite{Blumlein:2003gb}, between harmonic sums in deriving Eqs. (\ref{aQg2},~\ref{aQg2bar}). Moreover, the sums $S_{-2,2}$ and $S_{3,1}$ obey structural relations to other harmonic sums, i.e., they lie in corresponding equivalence classes and may be obtained by either rational argument relations and/or differentiation w.r.t. $N$.
Reference to these equivalence classes is useful since the representation of these sums for $N\in\mathbb{C}$ need not be derived anew, except for straightforward differentiations. All functions involved are meromorphic, with poles at the negative integers. Thus the $O(\varepsilon^0)$--term depends on two basic functions only, $S_1$ and $S_{-2,1}$~\footnote{The Mellin transform associated to this sum was first discussed in Ref.~\cite{GonzalezArroyo:1979df}.}. This has to be compared to the $z$--space representation used in Ref.~\cite{Buza:1995ie}, in which 48 different functions were needed. As shown in \cite{Blumlein:1998if}, several of these functions have Mellin transforms containing triple sums, which do not occur in our approach even on the level of individual diagrams. Thus the method applied here allowed us to compactify the representation of the heavy flavor matrix elements and Wilson coefficients significantly. The $O(\varepsilon)$--term depends on 6 basic functions only, which are given by \begin{eqnarray} && \{S_1,~S_2,~S_3,~S_4,~S_{-2},~S_{-3},~S_{-4}\},~ S_{2,1},~ S_{-2,1},~ S_{-3,1},~ S_{2,1,1},~ S_{-2,1,1}~, \label{6basic} \\ && S_{-2,2} \quad:\quad \mbox{depends on}~\quad S_{-2,1},~S_{-3,1} \nonumber\\ && S_{3,1}\hspace{2.4mm}\quad:\quad \mbox{depends on}~\quad S_{2,1}~. \nonumber \end{eqnarray} The absence of harmonic sums containing $\{-1\}$ as index was noted before for all other classes of space-- and time--like anomalous dimensions and Wilson coefficients, including those for other hard processes having been calculated so far, cf.~\cite{Blumlein:2004bb,Blumlein:2005im,*Blumlein:2006rr,Dittmar:2005ed}. This cannot be seen if one applies the $z$--space representation or the linear representation in Mellin--space, \cite{Moch:1999eb}.
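The reduction of the single alternating sums to $\beta(N+1)$ and its derivatives can be made explicit: in the convention $\beta(x)=\sum_{k\geq 0}(-1)^k/(x+k)$ one has $S_{-1}(N)=(-1)^N\beta(N+1)-\ln 2$ and $S_{-2}(N)=(-1)^{N+1}\beta'(N+1)-\zeta_2/2$. A numerical sketch (series truncations ad hoc; summing in pairs makes the alternating series monotone):

```python
import math

def beta_fn(x, M=200000):
    """beta(x) = sum_{k>=0} (-1)^k/(x+k), summed in pairs for stability."""
    return sum(1.0 / ((x + 2*m) * (x + 2*m + 1)) for m in range(M))

def beta_prime(x, M=200000):
    """beta'(x) = -sum_{k>=0} (-1)^k/(x+k)^2, summed in pairs."""
    return sum(1.0 / (x + 2*m + 1)**2 - 1.0 / (x + 2*m)**2 for m in range(M))

def S(m, N):
    """Single harmonic sum; negative m denotes the alternating sum."""
    if m > 0:
        return sum(1.0 / k**m for k in range(1, N + 1))
    return sum((-1)**k / k**(-m) for k in range(1, N + 1))

z2 = math.pi**2 / 6
for N in range(1, 7):
    assert abs(S(-1, N) - ((-1)**N * beta_fn(N + 1) - math.log(2))) < 1e-5
    assert abs(S(-2, N) - ((-1)**(N + 1) * beta_prime(N + 1) - z2 / 2)) < 1e-6
```

Analogous relations hold for the higher derivatives $\beta^{(k)}$, so only $\beta$ itself has to be continued analytically.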
Analytic continuation, e.g., for $S_{-2,1}$ proceeds via the equality, \begin{eqnarray} \mbox{\rm\bf M}\left[\frac{\mbox{Li}_2(x)}{1+x}\right](N+1) - \zeta_2 \beta(N+1) = (-1)^{N+1} \left[S_{-2,1}(N) + \frac{5}{8} \zeta_3\right] ~\label{SM21ANCONT} \end{eqnarray} with similar representations for the remaining sums, \cite{Blumlein:1998if} ~\footnote{Note that the argument of the Mellin-transform in Eq.~(36), Ref.~\cite{Blumlein:2006mh}, should read $(N+1)$.}. As discussed in \cite{Blumlein:2006mh}, the result for $a_{Qg}^{(2)}$ agrees with that in $z$--space given in Ref.~\cite{Buza:1995ie}. However, there is a difference concerning the complete renormalized expression for $A_{Qg}^{(2)}$. This is due to the scheme--dependence for the renormalization of the coupling constant, which has been described in Sections~\ref{SubSec-RENCo},~\ref{SubSec-HQElProdWave} and emerges for the first time at $O(a_s^2)$. Comparing Eq.~(\ref{AQg2MSren}) for the renormalized result in the ${\sf \overline{{\sf MS}}}$--scheme for the coupling constant with the transformation formula to the ${\sf MOM}$--scheme, Eq. (\ref{aMSON2aMOMON}), this difference is given by \begin{eqnarray} \label{AA1} A_{Qg}^{(2), \overline{{\sf MS}}} &=& A_{Qg}^{(2),\tiny{\mbox{MOM}}} - \beta_{0,Q} \frac{\hat{\gamma}_{qg}^{(0)}}{2} \ln^2\left(\frac{m^2}{\mu^2}\right)~. 
\end{eqnarray} As an example, the second moment of the massive OME up to $2$--loops reads in the ${\sf \overline{{\sf MS}}}$--scheme for coupling constant renormalization \begin{eqnarray} A_{Qg}^{\overline{{\sf MS}}}&=& a_s^{\overline{{\sf MS}}}\Biggl\{ -\frac{4}{3} T_F\ln \Bigl(\frac{m^2}{\mu^2}\Bigr) \Biggr\} +{a_s^{\overline{{\sf MS}}}}^2\Biggl\{ T_F\Bigl[ \frac{22}{9}C_A -\frac{16}{9}C_F -\frac{16}{9}T_F \Bigr] \ln^2\Bigl(\frac{m^2}{\mu^2}\Bigr) \nonumber\\ &+& T_F\Bigl[ -\frac{70}{27}C_A -\frac{148}{27}C_F \Bigr] \ln \Bigl(\frac{m^2}{\mu^2}\Bigr) -\frac{7}{9}C_AT_F +\frac{1352}{81}C_FT_F \Biggr\}~, \label{AQg2N2MSON} \end{eqnarray} and in the ${\sf MOM}$--scheme \begin{eqnarray} A_{Qg}^{\tiny{\mbox{MOM}}}&=& a_s^{\tiny{\mbox{MOM}}}\Biggl\{ -\frac{4}{3} T_F\ln \Bigl(\frac{m^2}{\mu^2}\Bigr) \Biggr\} +{a_s^{\tiny{\mbox{MOM}}}}^2\Biggl\{ T_F\Bigl[ \frac{22}{9}C_A -\frac{16}{9}C_F \Bigr] \ln^2\Bigl(\frac{m^2}{\mu^2}\Bigr) \nonumber\\ &+& T_F\Bigl[ -\frac{70}{27}C_A -\frac{148}{27}C_F \Bigr] \ln \Bigl(\frac{m^2}{\mu^2}\Bigr) -\frac{7}{9}C_AT_F +\frac{1352}{81}C_FT_F \Biggr\} ~. \label{AQg2N2MOMON} \end{eqnarray} As one infers from the above formulas, this difference affects at the $2$--loop level only the double logarithmic term and stems from the treatment of the 1--particle--reducible contributions. In Ref.~\cite{Buza:1995ie}, these contributions were absorbed into the coupling constant, applying the ${\sf MOM}$--scheme. This was motivated by the need to eliminate the virtual contributions due to heavier quarks (b, t) and was also extended to the charm--quark, thus adopting the same renormalization scheme as has been used in Refs.~\cite{Laenen:1992zkxLaenen:1992xs,*Riemersma:1994hv} for the exact calculation of the heavy flavor contributions to the Wilson coefficients. In contrast, in Ref.~\cite{Buza:1996wv}, the ${\sf \overline{{\sf MS}}}$--prescription was applied and the strong coupling constant depends on $n_f+1$ flavors, cf.
the discussion in Section~\ref{SubSec-HQElProdWave}. The remaining massive OMEs are less complex than the term $A_{Qg}^{(2)}$ and depend only on single harmonic sums, i.e. on only one basic function, $S_1$. In the ${\sf PS}$--case, the ${\sf LO}$ and ${\sf NLO}$ anomalous dimensions \begin{eqnarray} \gamma_{gq}^{(0)} &=&-4C_F\frac{N^2+N+2}{(N-1) N (N+1)}~, \label{ggq0} \\ \hat{\gamma}_{qq}^{(1), {\sf PS}}&=& -16C_FT_F\frac{5N^5+32N^4+49N^3+38N^2+28N+8} {(N-1)N^3(N+1)^3(N+2)^2}~ \label{gqqhat1PS} \end{eqnarray} contribute. The pole--terms are given by Eq.~(\ref{AhhhQq2PS}) and we obtain for the higher order terms in $\varepsilon$ \begin{eqnarray} a_{Qq}^{(2), {\sf PS}}&=& C_FT_F\Biggl\{ -\frac{4(N^2+N+2)^2\left(2S_2+\zeta_2\right)} {(N-1)N^2(N+1)^2(N+2)} +\frac{4P_{13}} {(N-1)N^4(N+1)^4(N+2)^3} \Biggr\}, \nonumber\\ \label{aQq2PS}\\ \nonumber\\ P_{13}&=&N^{10}+8N^9+29N^8+49N^7-11N^6-131N^5-161N^4 \nonumber\\ && -160N^3-168N^2-80N -16~, \\ \nonumber\\ \overline{a}_{Qq}^{(2), {\sf PS}}&=& C_FT_F\Biggl\{ -2\frac{(5N^3+7N^2+4N+4)(N^2+5N+2)} {(N-1)N^3(N+1)^3(N+2)^2}\left(2S_2+\zeta_2\right) \nonumber\\ && -\frac{4(N^2+N+2)^2\left(3S_3+\zeta_3\right)} {3(N-1)N^2(N+1)^2(N+2)} +\frac{2P_{14}} {(N-1)N^5(N+1)^5(N+2)^4} \Biggr\}, \label{aQq2PSbar} \\ \nonumber\\ P_{14}&=&5N^{11}+62N^{10}+252N^9+374N^8-400N^6+38N^7-473N^5\nonumber\\&& -682N^4-904N^3-592N^2-208N-32~. \end{eqnarray} Since the ${\sf PS}$--OME emerges for the first time at $O(a_s^2)$, there is no difference between its representation in the ${\sf MOM}$-- and the ${\sf \overline{{\sf MS}}}$--scheme. The renormalized OME $A_{Qq}^{(2)\sf PS}$ is given in Eq.~(\ref{AQq2PSMSON}) and the second moment reads \begin{eqnarray} A_{Qq}^{{\sf PS}, \overline{{\sf MS}}}&=& {a_s^{\overline{{\sf MS}}}}^2\Biggl\{ -\frac{16}{9}\ln^2 \Bigl(\frac{m^2}{\mu^2}\Bigr) -\frac{80}{27}\ln \Bigl(\frac{m^2}{\mu^2}\Bigr) -4 \Biggr\}C_FT_F +O({a_s^{\overline{{\sf MS}}}}^3)~. 
\label{AQq2PSN2MSON} \end{eqnarray} The flavor non-singlet ${\sf NLO}$ anomalous dimension is given by \begin{eqnarray} \hat{\gamma}_{qq}^{(1), {\sf NS}}&=& \frac{4C_FT_F}{3} \Biggl\{ 8S_2 -\frac{40}{3}S_1 +\frac{3N^4+6N^3+47N^2+20N-12} {3N^2(N+1)^2} \Biggr\}~. \label{gqqhat1NS} \end{eqnarray} The unrenormalized OME is obtained from the 1--particle irreducible graphs and the contributions of heavy quark loops to the quark self--energy. The latter is given at $O(\hat{a}_s^2)$ in Eq.~(\ref{QuSelf2}). One obtains \begin{eqnarray} \hat{\hspace*{-1mm}\hat{A}}_{qq,Q}^{(2), {\sf NS}}&=& \hat{\hspace*{-1mm}\hat{A}}_{qq,Q}^{(2), {\sf NS}, \mbox{irred}}- \hat{\Sigma}^{(2)}(0,\frac{\hat{m}^2}{\mu^2})~. \end{eqnarray} Our result is of the structure given in Eq.~(\ref{Ahhhqq2NSQ}) and the higher order terms in $\varepsilon$ read \begin{eqnarray} a_{qq,Q}^{(2), {\sf NS}}&=& \frac{C_FT_F}{3}\Biggl\{ -8S_3 -8\zeta_2S_1 +\frac{40}{3}S_2 +2\frac{3N^2+3N+2} {N(N+1)}\zeta_2 -\frac{224}{9}S_1 \nonumber\\ && +\frac{219N^6+657N^5+1193N^4+763N^3-40N^2-48N+72} {18N^3(N+1)^3}\Biggr\} \label{aqq2NSQ}, \\ \nonumber\\ \overline{a}_{qq,Q}^{(2), {\sf NS}}&=& \frac{C_FT_F}{3}\Biggl\{ 4S_4 +4S_2\zeta_2 -\frac{8}{3}S_1\zeta_3 +\frac{112}{9}S_2 +\frac{3N^4+6N^3+47N^2+20N-12} {6N^2(N+1)^2}\zeta_2 \nonumber\\&& -\frac{20}{3}S_1\zeta_2 -\frac{20}{3}S_3 -\frac{656}{27}S_1 +2\frac{3N^2+3N+2} {3N(N+1)}\zeta_3 +\frac{P_{15}}{216N^4(N+1)^4} \Biggr\} ~, \label{aqq2NSQbar} \\ \nonumber\\ P_{15}&=&1551N^8+6204N^7+15338N^6+17868N^5+8319N^4 \nonumber\\ && +944N^3+528N^2-144N-432~. \end{eqnarray} The anomalous dimensions in Eqs.~(\ref{ggq0},~\ref{gqqhat1PS},~\ref{gqqhat1NS}) agree with the literature. Eqs.~(\ref{aQq2PS},~\ref{aqq2NSQ}), cf. Ref.~\cite{Bierenbaum:2007qe}, were first given in Ref.~\cite{Buza:1995ie} and agree with the results presented there. Eqs.~(\ref{aQq2PSbar},~\ref{aqq2NSQbar}), \cite{Bierenbaum:2008yu}, are new results of this thesis. 
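The analytic continuation in Eq.~(\ref{SM21ANCONT}) can be checked numerically for fixed integer values of $N$. The following sketch uses Python's {\sf mpmath}; the conventions $\beta(z) = \frac{1}{2}[\psi(\frac{z+1}{2})-\psi(\frac{z}{2})]$ and $S_{-2,1}(N) = \sum_{k=1}^N \frac{(-1)^k}{k^2} S_1(k)$ are assumed here.

```python
from mpmath import mp, mpf, quad, polylog, psi, zeta

mp.dps = 25
N = 2

# left-hand side: M[Li_2(x)/(1+x)](N+1) - zeta_2 * beta(N+1)
mellin = quad(lambda x: x**N * polylog(2, x) / (1 + x), [0, 1])
beta   = (psi(0, mpf(N + 2) / 2) - psi(0, mpf(N + 1) / 2)) / 2
lhs    = mellin - zeta(2) * beta

# right-hand side: (-1)^(N+1) * [ S_{-2,1}(N) + 5/8 * zeta_3 ]
S1   = lambda k: sum(mpf(1) / j for j in range(1, k + 1))
Sm21 = sum((-1)**k * S1(k) / k**2 for k in range(1, N + 1))
rhs  = (-1)**(N + 1) * (Sm21 + mpf(5) / 8 * zeta(3))

print(lhs, rhs)   # both sides agree numerically
```

Both sides agree to the working precision; increasing $N$ probes the continuation at larger arguments.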
As in the ${\sf PS}$ case, the ${\sf NS}$ OME emerges for the first time at $O(a_s^2)$. The corresponding renormalized OME $A_{qq,Q}^{(2),{\sf NS}}$ is given in Eq.~(\ref{Aqq2NSQMSren}) and the second moment reads \begin{eqnarray} A_{qq,Q}^{{\sf NS}, \overline{{\sf MS}}}&=& {a_s^{\overline{{\sf MS}}}}^2\Biggl\{ -\frac{16}{9}\ln^2 \Bigl(\frac{m^2}{\mu^2}\Bigr) -\frac{128}{27}\ln \Bigl(\frac{m^2}{\mu^2}\Bigr) -\frac{128}{27} \Biggr\}C_FT_F +O({a_s^{\overline{{\sf MS}}}}^3)~. \label{Aqq2NSQN2MSON} \end{eqnarray} Note that the first moment of the ${\sf NS}$--OME vanishes, even on the unrenormalized level up to $O(\varepsilon)$. This provides a check on the results in Eqs. (\ref{aqq2NSQ}, \ref{aqq2NSQbar}), since it is required by fermion number conservation. At this point an additional comment on the difference between the ${\sf MOM}$ and the ${\sf \overline{{\sf MS}}}$--scheme is in order. The ${\sf MOM}$--scheme was applied in Ref.~\cite{Buza:1995ie} for two different purposes. The first one is described below Eq.~(\ref{AQg2N2MOMON}). It was introduced to absorb the contributions of one--particle reducible diagrams and heavier quarks into the definition of the coupling constant. However, in the case of $A_{Qg}^{(2)}$, renormalization in the ${\sf MOM}$--scheme and the scheme transformation from the ${\sf MOM}$--scheme to the ${\sf \overline{{\sf MS}}}$--scheme accidentally commute. This means that one could apply Eq.~(\ref{RenAQg2MOM}) in the ${\sf \overline{{\sf MS}}}$--scheme, i.e., set \begin{eqnarray} \delta a_{s,1}^{\tiny{\mbox{MOM}}}=\delta a_{s,1}^{\overline{{\sf MS}}}(n_f+1) \end{eqnarray} from the start and obtain Eq.~(\ref{AQg2MSren}) for the renormalized result. This is not the case for $A_{qq,Q}^{(2), {\sf NS}}$. As mentioned earlier, the scheme transformation does not have an effect on this term at $2$--loop order.
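The vanishing of the first moment can be made explicit by evaluating Eqs.~(\ref{aqq2NSQ}, \ref{aqq2NSQbar}) at $N=1$. A small {\sf sympy} sketch, transcribing the two expressions with $\zeta_2, \zeta_3$ kept symbolic:

```python
import sympy as sp

N = 1
S1, S2, S3, S4 = (sum(sp.Rational(1, j**w) for j in range(1, N + 1))
                  for w in (1, 2, 3, 4))
z2, z3, CF, TF = sp.symbols('zeta2 zeta3 C_F T_F')

# Eq. (aqq2NSQ) at N = 1
a = CF*TF/3*(-8*S3 - 8*z2*S1 + sp.Rational(40, 3)*S2
             + 2*sp.Rational(3*N**2 + 3*N + 2, N*(N + 1))*z2
             - sp.Rational(224, 9)*S1
             + sp.Rational(219*N**6 + 657*N**5 + 1193*N**4 + 763*N**3
                           - 40*N**2 - 48*N + 72, 18*N**3*(N + 1)**3))

# Eq. (aqq2NSQbar) at N = 1, with P15 as given in the text
P15 = (1551*N**8 + 6204*N**7 + 15338*N**6 + 17868*N**5 + 8319*N**4
       + 944*N**3 + 528*N**2 - 144*N - 432)
abar = CF*TF/3*(4*S4 + 4*S2*z2 - sp.Rational(8, 3)*S1*z3
                + sp.Rational(112, 9)*S2
                + sp.Rational(3*N**4 + 6*N**3 + 47*N**2 + 20*N - 12,
                              6*N**2*(N + 1)**2)*z2
                - sp.Rational(20, 3)*S1*z2 - sp.Rational(20, 3)*S3
                - sp.Rational(656, 27)*S1
                + 2*sp.Rational(3*N**2 + 3*N + 2, 3*N*(N + 1))*z3
                + sp.Rational(P15, 216*N**4*(N + 1)**4))

print(sp.expand(a), sp.expand(abar))   # 0 0
```

The $\zeta_2$--, $\zeta_3$-- and rational parts cancel separately.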
This means that Eq.~(\ref{2LNSRen1}) should yield the same renormalized result in the ${\sf MOM}$-- and in the ${\sf \overline{{\sf MS}}}$--scheme. However, in the latter case, the difference of $Z$--factors does not contain the mass. Thus a term \begin{eqnarray} \propto \frac{1}{\varepsilon}\ln \Bigl(\frac{m^2}{\mu^2}\Bigr)~, \end{eqnarray} which stems from the expansion of the unrenormalized result in Eq.~(\ref{Ahhhqq2NSQ}), cannot be subtracted. The reason for this is the following. As pointed out in Ref.~\cite{Buza:1995ie}, the term $\hat{A}_{qq,Q}^{(2), {\sf NS}}$ is only UV--divergent. However, this is only the case if one imposes the condition that the heavy quark contributions to the gluon self--energy vanish for on--shell momentum of the gluon. This is exactly the condition we imposed for renormalization in the ${\sf MOM}$--scheme, cf. Section~\ref{SubSec-RENCo}. Hence in this case, the additional divergences absorbed into the coupling are of the collinear type, contrary to the term in $A_{Qg}^{(2)}$. By applying the transformation back to the ${\sf \overline{{\sf MS}}}$--scheme, we treat these two different terms in a consistent way. This is especially important at the three--loop level, since in this case both effects are observed for all OMEs and the renormalization would not be possible without applying the ${\sf MOM}$--scheme first. \\ Let us now turn to the gluonic OMEs $A_{gg,Q}^{(2)},~A_{gq,Q}^{(2)}$, which are not needed for the asymptotic 2--loop heavy flavor Wilson coefficients. They contribute, however, in the VFNS--description of heavy flavor parton densities, cf. Ref.~\cite{Buza:1996wv} and Section~\ref{SubSec-HQFlav}. The $1$--loop term $A_{gg,Q}^{(1)}$ has already been given in Eqs. (\ref{AggQ1unren2}, \ref{AggQ1MSren}).
In case of $A_{gg,Q}^{(2)}$, the part \begin{eqnarray} \hat{\gamma}_{gg}^{(1)}&=& 8C_FT_F \frac{N^8+4N^7+8N^6+6N^5-3N^4-22N^3-10N^2-8N-8} {(N-1)N^3(N+1)^3(N+2)} \nonumber\\ &&\hspace{-10mm} +\frac{32C_AT_F}{9} \Biggl\{ -5S_1 +\frac{3N^6+9N^5+22N^4+29N^3+41N^2+28N+6} {(N-1)N^2(N+1)^2(N+2)} \Biggr\}~ \end{eqnarray} of the $2$--loop anomalous dimension is additionally needed. As for $A_{Qg}^{(2)}$, the massive parts of the gluon self--energy contribute, Eqs. (\ref{GluSelf1},~\ref{GluSelf2}). The unrenormalized OME at the $2$--loop level is then given in terms of reducible and irreducible contributions via \begin{eqnarray} \hat{\hspace*{-1mm}\hat{A}}_{gg,Q}^{(2)}&=& \hat{\hspace*{-1mm}\hat{A}}_{gg,Q}^{(2), \mbox{irred}}- ~\hat{\hspace*{-1mm}\hat{A}}_{gg,Q}^{(1)} \hat{\Pi}^{(1)}\Bigl(0,\frac{\hat{m}^2}{\mu^2}\Bigr) -\hat{\Pi}^{(2)}\Bigl(0,\frac{\hat{m}^2}{\mu^2}\Bigr)~. \end{eqnarray} In the unrenormalized result, we observe the same pole structure as predicted in Eq.~(\ref{AhhhggQ2}). The constant and $O(\varepsilon)$ contributions $a_{gg,Q}^{(2)}$ and $\overline{a}_{gg,Q}^{(2)}$ are \begin{eqnarray} a_{gg,Q}^{(2)}&=& T_FC_A\Biggl\{ -\frac{8}{3}\zeta_2S_1 +\frac{16(N^2+N+1)\zeta_2} {3(N-1)N(N+1)(N+2)} -4\frac{56N+47} {27(N+1)}S_1 \nonumber\\ &&\hspace{-10mm} +\frac{2P_{16}} {27(N-1)N^3(N+1)^3(N+2)} \Biggr\} \nonumber\\ &&\hspace{-10mm} +T_FC_F\Biggl\{ \frac{4(N^2+N+2)^2\zeta_2} {(N-1)N^2(N+1)^2(N+2)} -\frac{P_{17}} {(N-1)N^4(N+1)^4(N+2)} \Biggr\}~, \label{agg2Q} \end{eqnarray} \begin{eqnarray} \overline{a}_{gg,Q}^{(2)}&=& T_FC_A\Biggl\{ -\frac{8}{9}\zeta_3S_1 -\frac{20}{9}\zeta_2S_1 +\frac{16(N^2+N+1)} {9(N-1)N(N+1)(N+2)}\zeta_3 +\frac{2N+1} {3(N+1)}S_2 \nonumber\\ &&\hspace{-10mm} -\frac{S_1^2}{3(N+1)} -2\frac{328N^4+256N^3-247N^2-175N+54} {81(N-1)N(N+1)^2}S_1 \nonumber\\ && \hspace{-10mm} +\frac{4P_{18}\zeta_2} {9(N-1)N^2(N+1)^2(N+2)} +\frac{P_{19}} {81(N-1)N^4(N+1)^4(N+2)} \Biggr\} \nonumber\\ &&\hspace{-10mm} +T_FC_F\Biggl\{ \frac{4(N^2+N+2)^2\zeta_3} 
{3(N-1)N^2(N+1)^2(N+2)} +\frac{P_{20}\zeta_2} {(N-1)N^3(N+1)^3(N+2)} \nonumber\\ && +\frac{P_{21}} {4(N-1)N^5(N+1)^5(N+2)} \Biggr\}~, \label{agg2Qbar} \end{eqnarray} \begin{eqnarray} P_{16}&=&15N^8+60N^7+572N^6+1470N^5+2135N^4 \nonumber\\ && +1794N^3+722N^2-24N-72~,\\ P_{17}&=&15N^{10}+75N^9+112N^8+14N^7-61N^6+107N^5+170N^4 +36N^3 \nonumber\\ && -36N^2-32N-16~,\\ P_{18}&=&3N^6+9N^5+22N^4+29N^3+41N^2+28N+6~,\\ P_{19}&=&3N^{10}+15N^9+3316N^8+12778N^7+22951N^6+23815N^5+14212N^4\nonumber\\ && +3556N^3-30N^2+288N+216~,\\ P_{20}&=&N^8+4N^7+8N^6+6N^5-3N^4-22N^3-10N^2-8N-8~,\\ P_{21}&=&31N^{12}+186N^{11}+435N^{10}+438N^9-123N^8-1170N^7-1527N^6 \nonumber\\ && -654N^5 +88N^4 -136N^2-96N-32~. \end{eqnarray} We agree with the result for $a_{gg,Q}^{(2)}$ given in \cite{Buza:1996wv}, which is presented in Eq. (\ref{agg2Q}). The new term $\overline{a}_{gg,Q}^{(2)}$, Eq.~(\ref{agg2Qbar}), contributes to all OMEs $A_{ij}^{(3)}$ through renormalization. The renormalized OME is then given by Eq. (\ref{AggQ2MSren}). Since this OME already emerges at ${\sf LO}$, the $O(a_s^2)$ term changes replacing the ${\sf MOM}$-- by the ${\sf \overline{{\sf MS}}}$--scheme. The second moment in the ${\sf \overline{{\sf MS}}}$--scheme reads \begin{eqnarray} A_{gg,Q}^{\overline{{\sf MS}}}&=& a_s^{\overline{{\sf MS}}}\Biggl\{ \frac{4}{3}T_F\ln \Bigl(\frac{m^2}{\mu^2}\Bigr) \Biggr\} +{a_s^{\overline{{\sf MS}}}}^2\Biggl\{ T_F\Bigl[ -\frac{22}{9}C_A +\frac{16}{9}C_F +\frac{16}{9}T_F \Bigr] \ln^2 \Bigl(\frac{m^2}{\mu^2}\Bigr)\nonumber \end{eqnarray} \begin{eqnarray} &+& T_F\Bigl[ \frac{70}{27}C_A +\frac{148}{27}C_F \Bigr] \ln \Bigl(\frac{m^2}{\mu^2}\Bigr) +\frac{7}{9}C_AT_F -\frac{1352}{81}C_FT_F \Biggr\} +O({a_s^{\overline{{\sf MS}}}}^3)~. 
\label{Agg2QN2MSON} \end{eqnarray} In the ${\sf MOM}$--scheme it is given by \begin{eqnarray} A_{gg,Q}^{\tiny{\mbox{MOM}}}&=& a_s^{\tiny{\mbox{MOM}}}\Biggl\{ \frac{4}{3}T_F\ln \Bigl(\frac{m^2}{\mu^2}\Bigr) \Biggr\} +{a_s^{\tiny{\mbox{MOM}}}}^2\Biggl\{ T_F\Bigl[ -\frac{22}{9}C_A +\frac{16}{9}C_F \Bigr] \ln^2 \Bigl(\frac{m^2}{\mu^2}\Bigr) \nonumber\\ &+& T_F\Bigl[ \frac{70}{27}C_A +\frac{148}{27}C_F \Bigr] \ln \Bigl(\frac{m^2}{\mu^2}\Bigr) +\frac{7}{9}C_AT_F -\frac{1352}{81}C_FT_F \Biggr\} +O({a_s^{\tiny{\mbox{MOM}}}}^3)~. \label{Agg2QN2MOMON} \end{eqnarray} The difference between the schemes reads \begin{eqnarray} \label{AA2} A_{gg,Q}^{(2), \overline{{\sf MS}}} = A_{gg,Q}^{(2), \tiny{\mbox{MOM}}} + \beta_{0,Q}^2 \ln^2\left(\frac{m^2}{\mu^2}\right)~. \end{eqnarray} The need to apply the ${\sf MOM}$--scheme as an intermediate step of the renormalization becomes obvious again for the term $A_{gg,Q}^{(2)}$. As in the ${\sf NS}$--case, renormalization in the ${\sf \overline{{\sf MS}}}$--scheme for the coupling constant does not cancel all singularities. The remaining term is $A_{gq,Q}^{(2)}$, which emerges for the first time at $O(a_s^2)$, so that the same result is obtained in the ${\sf \overline{{\sf MS}}}$-- and ${\sf MOM}$--schemes. The corresponding ${\sf NLO}$ anomalous dimension is given by \begin{eqnarray} \hat{\gamma}_{gq}^{(1)}&=&\frac{32C_FT_F}{3}\Biggl\{ -\frac{(N^2+N+2)S_1}{(N-1)N(N+1)} +\frac{8N^3+13N^2+27N+16}{3(N-1)N(N+1)^2} \Biggr\}~. \end{eqnarray} Again, we obtain the pole terms as predicted in Eq.~(\ref{Ahhhgq2Q}).
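At $N=2$, the scheme relations in Eqs.~(\ref{AA1}, \ref{AA2}) can be checked directly against the second moments given above. The following {\sf sympy} sketch assumes $\beta_{0,Q} = -\frac{4}{3}T_F$ and $\hat{\gamma}_{qg}^{(0)}(N) = -8T_F(N^2+N+2)/[N(N+1)(N+2)]$ in the conventions of the text:

```python
import sympy as sp

TF, CA, CF, L = sp.symbols('T_F C_A C_F L')   # L = ln(m^2/mu^2)
N = 2

beta0Q  = -sp.Rational(4, 3)*TF
gqg0hat = -8*TF*sp.Rational(N**2 + N + 2, N*(N + 1)*(N + 2))

# ln^2 coefficients of the O(a_s^2) terms of A_{Qg} at N = 2,
# Eqs. (AQg2N2MSON) and (AQg2N2MOMON)
AQg_MS  = TF*(sp.Rational(22, 9)*CA - sp.Rational(16, 9)*CF
              - sp.Rational(16, 9)*TF)*L**2
AQg_MOM = TF*(sp.Rational(22, 9)*CA - sp.Rational(16, 9)*CF)*L**2
print(sp.expand(AQg_MS - AQg_MOM + beta0Q*gqg0hat/2*L**2))   # 0, Eq. (AA1)

# ln^2 coefficients of the O(a_s^2) terms of A_{gg,Q} at N = 2,
# Eqs. (Agg2QN2MSON) and (Agg2QN2MOMON)
Agg_MS  = TF*(-sp.Rational(22, 9)*CA + sp.Rational(16, 9)*CF
              + sp.Rational(16, 9)*TF)*L**2
Agg_MOM = TF*(-sp.Rational(22, 9)*CA + sp.Rational(16, 9)*CF)*L**2
print(sp.expand(Agg_MS - Agg_MOM - beta0Q**2*L**2))          # 0, Eq. (AA2)
```

In both cases only the $T_F^2$--part of the double logarithm is affected, as stated in the text.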
The constant and $O(\varepsilon)$ contributions $a_{gq,Q}^{(2)}$ and $\overline{a}_{gq,Q}^{(2)}$ then read \begin{eqnarray} a_{gq,Q}^{(2)}&=& T_FC_F\Biggl\{ \frac{4}{3}\frac{N^2+N+2}{(N-1)N(N+1)} \Bigl(2\zeta_2+S_2+S_1^2\Bigr) \nonumber\\ && -\frac{8}{9}\frac{8N^3+13N^2+27N+16} {(N-1)N(N+1)^2}S_1 +\frac{8}{27}\frac{P_{22}} {(N-1)N(N+1)^3} \Biggr\}~, \label{agq2Q} \\ \overline{a}_{gq,Q}^{(2)}&=& T_FC_F\Biggl\{ \frac{2}{9}\frac{N^2+N+2}{(N-1)N(N+1)} \Bigl(-2S_3-3S_2S_1-S_1^3+4\zeta_3-6\zeta_2S_1\Bigr)\nonumber\\ && +\frac{2}{9}\frac{8N^3+13N^2+27N+16} {(N-1)N(N+1)^2} \Bigl(2\zeta_2+S_2+S_1^2\Bigr) -\frac{4}{27}\frac{P_{22}S_1} {(N-1)N(N+1)^3}\nonumber\\ && +\frac{4}{81}\frac{P_{23}} {(N-1)N(N+1)^4} \Biggr\}~, \label{agq2Qbar} \end{eqnarray} with \begin{eqnarray} P_{22}&=&43N^4+105N^3+224N^2+230N+86~, \\ P_{23}&=&248N^5+863N^4+1927N^3+2582N^2+1820N+496~. \end{eqnarray} The second moment of the renormalized result, cf. Eq.~(\ref{Agq2QMSren}), reads \begin{eqnarray} A_{gq,Q}^{\overline{{\sf MS}}}&=& {a_s^{\overline{{\sf MS}}}}^2\Biggl\{ \frac{32}{9}\ln^2 \Bigl(\frac{m^2}{\mu^2}\Bigr) +\frac{208}{27}\ln \Bigl(\frac{m^2}{\mu^2}\Bigr) +\frac{236}{27} \Biggr\}C_FT_F +O({a_s^{\overline{{\sf MS}}}}^3)~. \label{Agq2QN2MSON} \end{eqnarray} We agree with the result for $a_{gq,Q}^{(2)}$ given in \cite{Buza:1996wv}, which is presented in Eq.~(\ref{agq2Q}). Let us summarize so far. In this Section, we newly calculated the $O(\varepsilon)$ terms of the $2$--loop massive OMEs. We additionally presented the first independent recalculation of the terms $a_{gg,Q}^{(2)}$, Eq.~(\ref{agg2Q}), and $a_{gq,Q}^{(2)}$, Eq.~(\ref{agq2Q}), which were given in Ref.~\cite{Buza:1996wv}, and find full agreement. For completeness, we showed as well the terms $a_{qq,Q}^{(2), {\sf NS}}$, $a_{Qq}^{(2), {\sf PS}}$ and $a_{Qg}^{(2)}$, which have been calculated for the first time in Ref.~\cite{Buza:1995ie} and were recalculated in Refs.~\cite{Bierenbaum:2007qe,SKdiploma}.
The latter terms contribute to the heavy flavor Wilson coefficients in deeply inelastic scattering to the non power-suppressed contributions at $O(a_s^2)$. In the renormalization of the heavy flavor Wilson coefficients to 3--loop order, all these terms contribute together with lower order single pole terms. The $O(a_s^2 \varepsilon)$ contributions form parts of the constant terms of the 3--loop heavy flavor unpolarized operator matrix elements needed to describe the 3--loop heavy flavor Wilson coefficients in the region $Q^2 \gg m^2$. The mathematical structure of our results is as follows. The terms $\overline{a}_{ij}^{(2)}$ can be expressed in terms of polynomials of the basic nested harmonic sums up to weight ${\sf w=4}$ and derivatives thereof. They belong to the complexity-class of the general two-loop Wilson coefficients or hard scattering cross sections in massless QED and QCD and are described by six basic functions and their derivatives in Mellin space. Their analytic continuation to complex values of $N$ is known in explicit form. The package {\sf Sigma}, \cite{Refined,Schneider:2007,sigma1,sigma2}, proved to be a useful tool to solve the sums occurring in the present problem and was extended accordingly by its author. \subsection{\bf\boldmath Checks on the Calculation} \label{SubSec-2LChecks} There are several checks which we can use for our results. First of all, the terms up to $O(\varepsilon^0)$ have been calculated in Refs.~\cite{Buza:1995ie,Buza:1996wv} and we agree with all unrenormalized results. As described in Sections \ref{SubSec-2LF32}, \ref{SubSec-2LInfSum}, we keep the complete $\varepsilon$--dependence until we expand the summand of the finite or infinite sums, which serves as a consistency check on the $O(\varepsilon)$ results. Another test is provided by the sum rules in Eqs. (\ref{sumrule1},~\ref{sumrule2}) for $N=2$, which are fulfilled by the renormalized OMEs presented here and in Refs.~\cite{Buza:1995ie,Buza:1996wv}. 
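The $N=2$ second moments given in this Section indeed satisfy these relations: the combinations $A_{qq,Q}^{\sf NS}+A_{Qq}^{\sf PS}+A_{gq,Q}$ and $A_{Qg}+A_{gg,Q}$ vanish at $O(a_s^2)$, as required by momentum conservation in the quark and gluon channels. A {\sf sympy} sketch transcribing Eqs.~(\ref{Aqq2NSQN2MSON}, \ref{AQq2PSN2MSON}, \ref{Agq2QN2MSON}, \ref{AQg2N2MSON}, \ref{Agg2QN2MSON}):

```python
import sympy as sp

CF, CA, TF, L = sp.symbols('C_F C_A T_F L')   # L = ln(m^2/mu^2)

# quark channel: Eqs. (Aqq2NSQN2MSON), (AQq2PSN2MSON), (Agq2QN2MSON)
AqqNS = CF*TF*(-sp.Rational(16, 9)*L**2 - sp.Rational(128, 27)*L
               - sp.Rational(128, 27))
AQqPS = CF*TF*(-sp.Rational(16, 9)*L**2 - sp.Rational(80, 27)*L - 4)
AgqQ  = CF*TF*( sp.Rational(32, 9)*L**2 + sp.Rational(208, 27)*L
               + sp.Rational(236, 27))

# gluon channel: Eqs. (AQg2N2MSON), (Agg2QN2MSON)
AQg  = (TF*(sp.Rational(22, 9)*CA - sp.Rational(16, 9)*CF
            - sp.Rational(16, 9)*TF)*L**2
        + TF*(-sp.Rational(70, 27)*CA - sp.Rational(148, 27)*CF)*L
        - sp.Rational(7, 9)*CA*TF + sp.Rational(1352, 81)*CF*TF)
AggQ = (TF*(-sp.Rational(22, 9)*CA + sp.Rational(16, 9)*CF
            + sp.Rational(16, 9)*TF)*L**2
        + TF*(sp.Rational(70, 27)*CA + sp.Rational(148, 27)*CF)*L
        + sp.Rational(7, 9)*CA*TF - sp.Rational(1352, 81)*CF*TF)

print(sp.expand(AqqNS + AQqPS + AgqQ))   # 0
print(sp.expand(AQg + AggQ))             # 0
```

The cancellation holds coefficient by coefficient in $\ln^2$, $\ln$ and the constant term.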
These rules are obeyed regardless of the renormalization scheme. We observe that they hold on the unrenormalized level as well, even up to $O(\varepsilon)$. For the term $A_{Qg}^{(2)}$, we evaluated fixed moments of $N$ for the contributing unrenormalized diagrams using the Mellin--Barnes method, \cite{MB1a,*MB1b,*MB2,MB3,*MB4,Paris:2001}, cf. also Appendix \ref{App-SpeFunMB}. Here, we used an extension of a method developed for massless propagators in Ref.~\cite{Bierenbaum:2003ud} to massive on--shell operator matrix elements, \cite{Bierenbaum:2006mq,Bierenbaum:2007zz,Bierenbaum:2007dm}. The Mellin--Barnes integrals are then evaluated numerically using the package {\sf MB}, \cite{Czakon:2005rk}. Using this method, we calculated the even moments $N=2,4,6,8$ and agree with the corresponding fixed moments of our all--$N$ result~\footnote{In Table~2 of Ref. \cite{Bierenbaum:2008yu}, the moments $N=2$ and $N=6$ for the more difficult two--loop diagrams are presented.}. For the first moment of the Abelian part of the unrenormalized term $~\hat{\hspace*{-1mm}\hat{A}}_{Qg}^{(2)}$, there is yet another check. After analytic continuation from the {\sf even} values of $N$ to $N \in \mathbb{C}$ is performed, one may consider the limit $N \rightarrow 1$. In this procedure the term $(1 + (-1)^N)/2$ equals 1. At $O(a_s^2)$ the terms $\propto T_F C_A$ contain $1/z$ contributions in momentum fraction space and their first moment diverges. For the other contributions to the unrenormalized operator matrix element, after mass renormalization to 2--loop order, the first moment is related to the Abelian part of the transverse contribution to the gluon propagator $\Pi_V(p^2,m^2)|_{p^2=0}$, except the term $\propto T_F^2$ which results from wave function renormalization. This was shown in \cite{Buza:1995ie} up to the constant term in $\varepsilon$.
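To illustrate the type of integral the package {\sf MB} evaluates, consider the standard Mellin--Barnes representation $1/(1+x)^{\lambda} = \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} dz\, \frac{\Gamma(\lambda+z)\Gamma(-z)}{\Gamma(\lambda)}\, x^z$, with $-\lambda < c < 0$. A minimal numerical sketch with Python's {\sf mpmath}, not one of the actual diagram integrals:

```python
from mpmath import mp, mpf, mpc, gamma, quad, inf, pi

mp.dps = 20
lam = mpf(2)         # exponent lambda
x   = mpf('0.3')
c   = mpf('-0.5')    # contour Re(z) = c, with -lam < c < 0

def integrand(t):
    # parametrize the vertical contour z = c + i t; dz = i dt cancels
    # the i of 1/(2 pi i), leaving a real 1/(2 pi) prefactor
    z = mpc(c, t)
    return gamma(lam + z) * gamma(-z) / gamma(lam) * x**z

val = quad(integrand, [-inf, inf]) / (2 * pi)
print(val.real, (1 + x)**(-lam))   # both approximately 0.591716
```

The $\Gamma$--functions decay exponentially along the contour, so a direct quadrature converges quickly; for the physical integrals, {\sf MB} additionally automates the contour choice and the resolution of the singularities in $\varepsilon$.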
One obtains \begin{eqnarray} \hat{\Pi}_V(p^2,m^2) = \hat{a}_s T_F \hat{\Pi}_V^{(1)}(p^2,m^2) + \hat{a}_s^2 C_F T_F \hat{\Pi}_V^{(2)}(p^2,m^2) + O(\hat{a}_s^3)~, \end{eqnarray} with \begin{eqnarray} \label{eqPI1abel} \lim_{p^2 \rightarrow 0} \hat{\Pi}_V^{(1)}(p^2,m^2) &=& \frac{1}{2} ~\hat{\hspace*{-1mm}\hat{A}}_{Qg}^{(1), N=1}\\ \label{eqPI2abel} \lim_{p^2 \rightarrow 0} \hat{\Pi}_V^{(2)}(p^2,m^2) &=& \frac{1}{2} ~\hat{\hspace*{-1mm}\hat{A}}_{Qg}^{(2), N=1}|_{C_F}~. \end{eqnarray} Here, we extend the relation to the linear terms in $\varepsilon$. For the first moment the double pole contributions in $\varepsilon$ vanish in Eq.~(\ref{eqPI2abel}). We compare with the corresponding QED--expression for the photon--propagator, $\Pi_T^{V, (k)}$, which has been obtained in Ref.~\cite{Djouadi:1993ss}. Due to the transition from QED to QCD, the relative color factor at the $2$--loop level has to be adjusted to $1/4 = 1/(C_F C_A)$. After asymptotic expansion in $m^2/p^2$, the comparison can be performed up to the linear term in $\varepsilon$. One obtains \begin{eqnarray} \label{eqrPI1} \lim_{p^2 \rightarrow 0} \frac{1}{p^2} \hat{\Pi}_T^{V, (1)}(p^2,m^2) &=& \frac{1}{2 T_F} ~\hat{\hspace*{-1mm}\hat{A}}_{Qg}^{(1), N=1} = - \left(\frac{m^2}{\mu^2}\right)^{\varepsilon/2} \left[\frac{8}{3 \varepsilon} + \frac{\varepsilon}{3} \zeta_2 \right] \\ \label{eqrPI2} \lim_{p^2 \rightarrow 0} \frac{1}{p^2} \hat{\Pi}_T^{V, (2)}(p^2,m^2) &=& \frac{1}{2 T_F C_F} ~\hat{\hspace*{-1mm}\hat{A}}_{Qg}^{(2), N=1}|_{C_F} = \left(\frac{m^2}{\mu^2}\right)^{\varepsilon} \left[ - \frac{4}{\varepsilon} +15 - \left(\frac{31}{4} + \zeta_2\right) \varepsilon \right]~. \nonumber\\ \end{eqnarray} Additionally, we notice that the renormalized results no longer contain $\zeta_2$--terms. The renormalized terms in Eqs.
(\ref{Aqq2NSQMSren}, \ref{AQq2PSMSON}, \ref{AQg2MSren}, \ref{Agq2QMSren}, \ref{AggQ2MSren}) contain expressions proportional to $\zeta_2$ in the non--logarithmic contributions, which just cancel the corresponding $\zeta_2$--terms in $a_{ij}^{(2)}$, cf. Eqs. (\ref{aQg2}, \ref{aQq2PS}, \ref{aqq2NSQ}, \ref{agg2Q}, \ref{agq2Q}). For explicit examples of this cancellation, one may compare the second moments of the renormalized OMEs presented in Eqs. (\ref{AQg2N2MSON}, \ref{AQg2N2MOMON}, \ref{AQq2PSN2MSON}, \ref{Aqq2NSQN2MSON}, \ref{Agg2QN2MSON}, \ref{Agg2QN2MOMON}, \ref{Agq2QN2MSON}). The latter provides no stringent test, but is in accordance with general observations made in higher loop calculations, namely that even $\zeta$--values cancel for massless calculations in even dimensions in the renormalized results if presented in the ${\sf \overline{{\sf MS}}}$--scheme, \cite{Broadhurst:private1}. In the present work, this observation holds for the $\zeta_2$--terms in a single--scale massive calculation as well. The most powerful test is provided by the {\sf FORM}--based program {\sf MATAD},~\cite{Steinhauser:2000ry}, which we used to calculate fixed moments of the $2$--loop OMEs up to $O(\varepsilon)$. The setup is the same as in the $3$--loop case and is explained in the next Section. At the $2$--loop level we worked in general $R_{\xi}$--gauges and explicitly observe the cancellation of the gauge parameter. For the terms $A_{Qg}^{(2)},~A_{gg}^{(2)}$ we used both projection operators given in Eqs. (\ref{projG1},~\ref{projG2}), which serves as another consistency check. In the singlet case, we calculated the even moments $N=2,4,...,12$ and found full agreement with the results presented in this Section up to $O(\varepsilon)$. The same holds in the non--singlet case, where we calculated the odd moments as well, $N=1,2,3,...,12$. 
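The cancellation can be made explicit at $N=2$ in the ${\sf PS}$--channel: the $\zeta_2$--free part of $a_{Qq}^{(2), {\sf PS}}$, Eq.~(\ref{aQq2PS}), reproduces the non--logarithmic term $-4\,C_FT_F$ of the renormalized moment, Eq.~(\ref{AQq2PSN2MSON}). A {\sf sympy} sketch:

```python
import sympy as sp

z2, CF, TF = sp.symbols('zeta2 C_F T_F')
N = 2
S2 = sum(sp.Rational(1, j**2) for j in range(1, N + 1))

# Eq. (aQq2PS) at N = 2, with P13 as given in the text
P13 = (N**10 + 8*N**9 + 29*N**8 + 49*N**7 - 11*N**6 - 131*N**5 - 161*N**4
       - 160*N**3 - 168*N**2 - 80*N - 16)
aQqPS = CF*TF*(-sp.Rational(4*(N**2 + N + 2)**2,
                            (N - 1)*N**2*(N + 1)**2*(N + 2))*(2*S2 + z2)
               + sp.Rational(4*P13, (N - 1)*N**4*(N + 1)**4*(N + 2)**3))

expr = sp.expand(aQqPS)      # equals -4*C_F*T_F - (16/9)*C_F*T_F*zeta2
print(expr.coeff(z2, 0))     # -4*C_F*T_F, the constant of Eq. (AQq2PSN2MSON)
```

The $\zeta_2$--piece, $-\frac{16}{9}C_FT_F\zeta_2$, is precisely the one removed by the renormalization terms.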
\newpage \section{\bf\boldmath Calculation of Moments at $O(a_s^3)$} \label{Sec-3L} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0} In this Section, we describe the computation of the $3$--loop corrections to the massive operator matrix elements in detail, cf. \cite{Bierenbaum:2009mv}. Typical Feynman diagrams contributing to the different processes are shown in Figure~\ref{diaex}, where $\otimes$ denotes the corresponding composite operator insertions, cf. Appendix~\ref{App-FeynRules}. The generation of these diagrams with the {\sf FORTRAN}--based program {\sf QGRAF}, \cite{Nogueira:1991ex}, is described in Section~\ref{SubSec-3LGen} along with the subsequent steps to prepare the input for the {\sf FORM}--based program {\sf MATAD}, \cite{Steinhauser:2000ry}. The latter allows the calculation of massive tadpole integrals in $D$ dimensions up to three loops and relies on the {\sf MINCER} algorithm, \cite{Gorishnii:1989gt,Larin:1991fz}. The use of {\sf MATAD} and the projection onto fixed moments are explained in Section~\ref{SubSec-3LMatad}. Finally, we present our results for the fixed moments of the $3$--loop OMEs and the fermionic contributions to the anomalous dimensions in Section~\ref{SubSec-3LResUn}. The calculation is mainly performed using {\sf FORM} programs, while in a few cases codes have also been written in {\sf MAPLE}.
\begin{figure}[H] \begin{center} \includegraphics[angle=0, width=1.8cm]{picmain11.eps} \includegraphics[angle=0, width=1.8cm]{picmain12.eps} \includegraphics[angle=0, width=1.8cm]{picmain13.eps} \includegraphics[angle=0, width=1.8cm]{picmain14.eps} \includegraphics[angle=0, width=1.8cm]{picmain15.eps} \includegraphics[angle=0, width=1.8cm]{picmain16.eps} \includegraphics[angle=0, width=1.8cm]{picmain17.eps} \includegraphics[angle=0, width=1.8cm]{picmain18.eps} \end{center} {\small \hspace*{5mm} ($\sf NS$) \hspace{0.9cm} ($\sf PS_H$) \hspace{0.9cm} ($\sf PS_l$) \hspace{0.85cm} ($\sf qg_H$) \hspace{0.9cm} ($\sf qg_l$) \hspace{0.9cm} ($\sf gq$) \hspace{0.9cm} ($\sf gg$) \hspace{0.9cm} {\sf ghost}} \begin{center} \caption[{\sf Examples for 3--loop diagrams contributing to the massive OMEs.}] {\sf Examples for 3--loop diagrams contributing to the massive operator matrix elements: NS - non--singlet, ${\sf PS_{H,l}}$ - pure--singlet, singlet ${\sf qg_{H,l}}$, {\sf gq}, gg and ghost contributions. Here the coupling of the gauge boson to a heavy or light fermion line is labeled by {\sf H} and {\sf l}, respectively. Thick lines: heavy quarks, curly lines: gluons, full lines: quarks, dashed lines: ghosts.} \label{diaex} \end{center} \end{figure} \noindent \vspace{-18mm} \subsection{\bf\boldmath Generation of Diagrams} \label{SubSec-3LGen} {\sf QGRAF} is a quite general program for the generation of Feynman diagrams and allows one to specify various kinds of particles and interactions. Our main task is to generate diagrams which contain composite operator insertions, cf. (\ref{COMP1})--(\ref{COMP3}) and Appendix~\ref{App-FeynRules}, as special vertices. To give an example, let us consider the contributions to $A_{Qg}^{(1)}$. Within the light--cone expansion, Section~\ref{SubSec-DISComptLCE}, this term derives from the Born diagrams squared of the photon--gluon fusion process shown in Figure~\ref{GENOPINS6}, cf. Section~\ref{SubSec-HQElProd} and Figure \ref{CCbarLO}.
\begin{figure}[H] \begin{center} \includegraphics[angle=0, width=14.0cm]{picmain19.eps} \end{center} \begin{center} \caption[{\sf Diagrams contributing to $H_{g,(2,L)}^{(1)}$ via the optical theorem.}] {\sf Diagrams contributing to $H_{g,(2,L)}^{(1)}$ via the optical theorem. Wavy lines: photons; \\ curly lines: gluons; full lines: quarks.} \label{GENOPINS6} \end{center} \end{figure} \noindent After expanding these diagrams with respect to the virtuality of the photon, the mass effects are given by the diagrams in Figure~\ref{GENOPINS7}. These are obtained by contracting the lines between the external photons. \begin{figure}[H] \begin{center} \includegraphics[angle=0, width=14.0cm]{picmain20.eps} \end{center} \begin{center} \caption{\sf Diagrams contributing to $A_{Qg}^{(1)}$.} \label{GENOPINS7} \end{center} \end{figure} \vspace{-8mm} \noindent Thus, one may think of the operator insertion as being coupled to two external particles, an incoming and an outgoing one, which carry the same momentum. Therefore, one defines in the model file of {\sf QGRAF} vertices which resemble the operator insertions in this manner, using a scalar field $\phi$, which shall not propagate in order to ensure that there is only one of these vertices for each diagram. For the quarkonic operators, one defines the vertices \begin{eqnarray} \phi+\phi+q+\overline{q}+n~g~~, \hspace*{3mm} 0 \le n \le 3~, \label{phiquark} \end{eqnarray} which is illustrated in Figure~\ref{GENOPINS1}. \begin{figure}[H] \begin{center} \includegraphics[angle=0, width=8.0cm]{picmain21.eps} \end{center} \begin{center} \caption{\sf Generation of the operator insertion.} \label{GENOPINS1} \end{center} \end{figure} \vspace{-8mm} \noindent The same procedure can be used for the purely gluonic interactions and one defines in this case \begin{eqnarray} \phi+\phi+n~g~~, \hspace*{3mm} 0 \le n \le 4~. 
\label{phigluon} \end{eqnarray} The Green's functions we have to consider and their relation to the respective OMEs were given in Eqs. (\ref{omeGluOpQ})--(\ref{omelqproj}). The number of diagrams we obtain contributing to each OME is shown in Table \ref{table:numdiags}. \begin{table}[H] \begin{center} \newcommand{\cc}[1]{\multicolumn{1}{c}{#1}} \renewcommand{\arraystretch}{1.3} \begin{tabular}{|llllllll|} \hline\hline Term & \phantom{00}\# & Term & \phantom{00}\# & Term & \phantom{00}\# & Term & \phantom{00}\# \\ \hline\hline $A_{Qg}^{(3)}$ & 1358 & $A_{qg,Q}^{(3)}$ & \phantom{0}140 & $A_{Qq}^{(3),{\sf PS}}$ & \phantom{0}125 & $A_{qq,Q}^{(3),{\sf PS}}$ & \phantom{000}8 \\[0.75em] $A_{qq,Q}^{(3),{\sf NS}}$ & \phantom{0}129 & $A_{gq,Q}^{(3)}$ & \phantom{00}89 & $A_{gg,Q}^{(3)}$ & \phantom{0}886 & & \\ [3mm] \hline\hline \end{tabular}\\[2pt] \caption{\sf Number of diagrams contributing to the $3$--loop massive OMEs.} \label{table:numdiags} \end{center} \end{table} \renewcommand{\arraystretch}{1.0} \noindent The next step consists in rewriting the output provided by {\sf QGRAF} in such a way that the Feynman rules given in Appendix~\ref{App-FeynRules} can be inserted. Thus, one has to introduce Lorentz and color indices and align the fermion lines. Additionally, the integration momenta have to be written in such a way that {\sf MATAD} can handle them. For the latter step, all information on the types of particles, the operator insertion and the external momentum is irrelevant, leading to only two basic topologies to be considered at the $2$--loop level, which are shown in Figure~\ref{MATADTOP1}.
\begin{figure}[H] \begin{center} \includegraphics[angle=0, width=10.0cm]{picmain22.eps} \end{center} \begin{center} \caption[{\sf $2$--Loop topologies for ${\sf MATAD}$}] {\sf $2$--Loop topologies for ${\sf MATAD}$, indicating labeling of momenta.} \label{MATADTOP1} \end{center} \end{figure} \vspace{-8mm} \noindent Note that in the case at hand the topology on the right--hand side of Figure~\ref{MATADTOP1} always yields zero after integration. At the $3$--loop level, the master topology is given in Figure~\ref{MATADTOP2}. \begin{figure}[H] \begin{center} \includegraphics[angle=0, width=4.0cm]{picmain23.eps} \end{center} \begin{center} \caption[{\sf Master $3$--loop topology for MATAD.}] {\sf Master $3$--loop topology for MATAD, indicating labeling of momenta.} \label{MATADTOP2} \end{center} \end{figure} \vspace{-8mm} \noindent From this topology, five types of diagrams are derived by shrinking various lines. These diagrams are shown in Figure~\ref{MATADTOP3}. \begin{figure}[htb] \begin{center} \includegraphics[angle=0, width=12.0cm]{picmain24.eps} \end{center} \begin{center} \caption{\sf Additional $3$--loop topologies for ${\sf MATAD}$.} \label{MATADTOP3} \end{center} \vspace{-8mm} \end{figure} \noindent Finally, the projectors given in Eqs. (\ref{projG1},~\ref{projQ}) are applied to project onto the scalar massive OMEs. We only use the physical projector (\ref{projG2}) as a check for lower moments, since it causes a significant increase of the computation time. To calculate the color factor of each diagram, we use the program provided in Ref.~\cite{vanRitbergen:1998pn} and for the calculation of fermion traces we use ${\sf FORM}$. Up to this point, all operations have been performed for general values of Mellin $N$ and the dimensional parameter $\varepsilon$. The integrals do not contain any Lorentz or color indices anymore. In order to use {\sf MATAD}, one now has to assign to $N$ a specific value.
Additionally, the unphysical momentum $\Delta$ has to be replaced by a suitable projector, which we define in the following Section. \subsection{\bf\boldmath Calculation of Fixed $3$--Loop Moments Using {\sf MATAD}} \label{SubSec-3LMatad} We consider integrals of the type \begin{eqnarray} I_l(p,m,n_1\ldots n_j) &\equiv& \int \frac{d^Dk_1}{(2\pi)^D}\ldots \int \frac{d^Dk_l}{(2\pi)^D} (\Delta.q_1)^{n_1}\ldots (\Delta.q_j)^{n_j} f(k_1\ldots k_l,p,m)~. \nonumber \\ \label{ExInt1} \end{eqnarray} Here $p$ denotes the external momentum, $p^2=0$, $m$ is the heavy quark mass, and $\Delta$ is a light--like vector, $\Delta^2=0$. The momenta $q_{i}$ are given by any linear combination of the loop momenta $k_i$ and external momentum $p$. The exponents $n_i$ are integers or possibly sums of integers, see the Feynman rules in Appendix \ref{App-FeynRules}. Their sum is given by \begin{eqnarray} \sum_{i=1}^j n_i = N~. \end{eqnarray} The function $f$ in Eq.~(\ref{ExInt1}) contains propagators, of which at least one is massive, dot-products of its arguments and powers of $m$. If one sets $N=0$, (\ref{ExInt1}) is given by \begin{eqnarray} I_l(p,m,0\ldots 0)=I_l(m)= \int \frac{d^Dk_1}{(2\pi)^D}\ldots \int \frac{d^Dk_l}{(2\pi)^D} f(k_1\ldots k_l,m)~. \label{ExInt2} \end{eqnarray} From $p^2=0$ it follows that the result cannot depend on $p$ anymore. The above integral is a massive tadpole integral and thus of the type {\sf MATAD} can process. Additionally, {\sf MATAD} can calculate the integral up to a given order as a power series in $p^2/m^2$. Let us return to the general integral given in Eq.~(\ref{ExInt1}). One notes that for fixed moments of $N$, each integral of this type splits up into one or more integrals of the same type with the $n_i$ having fixed integer values. At this point, it is useful to recall that the auxiliary vector $\Delta$ has only been introduced to get rid of the trace terms of the expectation values of the composite operators and has no physical significance.
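As an illustration of this splitting, consider a factor $(\Delta.q)^N$ with $q = k_1 + p$ at the fixed moment $N=3$: expanding it yields monomials $(\Delta.k_1)^a(\Delta.p)^b$ with fixed integer exponents, each of the type occurring in Eq.~(\ref{ExInt1}). A small {\sf sympy} sketch, where the symbols stand for the invariants $\Delta.k_1$ and $\Delta.p$:

```python
import sympy as sp

# Dk1 and Dp represent the scalar invariants Delta.k1 and Delta.p
Dk1, Dp = sp.symbols('Delta_k1 Delta_p')
N = 3
print(sp.expand((Dk1 + Dp)**N))
# Delta_k1**3 + 3*Delta_k1**2*Delta_p + 3*Delta_k1*Delta_p**2 + Delta_p**3
```

Each monomial then corresponds to one tadpole-type integral with fixed powers $n_i$.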
By undoing the contraction with $\Delta$, these trace terms appear again. Consider as an example \begin{eqnarray} I_l(p,m,2,1) &=& \int \frac{d^Dk_1}{(2\pi)^D}\ldots \int \frac{d^Dk_l}{(2\pi)^D} (\Delta.q_1)^2 (\Delta.q_2) f(k_1\ldots k_l,p,m) \label{ExInt5} \\ &=&\Delta^{\mu_1}\Delta^{\mu_2}\Delta^{\mu_3} \int \frac{d^Dk_1}{(2\pi)^D}\ldots \int \frac{d^Dk_l}{(2\pi)^D} q_{1,\mu_1}q_{1,\mu_2}q_{2,\mu_3} f(k_1\ldots k_l,p,m)~. \nonumber \\ \label{ExInt3} \end{eqnarray} One notices that the way of distributing the indices in Eq.~(\ref{ExInt3}) is somewhat arbitrary, since after the contraction with the totally symmetric tensor $\Delta^{\mu_1}\Delta^{\mu_2}\Delta^{\mu_3}$ only the completely symmetric part of the corresponding tensor integral contributes. This is made explicit by distributing the indices among the $q_i$ in all possible ways and dividing by the number of permutations one has used. Thus Eq.~(\ref{ExInt3}) is written as \begin{eqnarray} I_l(p,m,2,1) &=&\Delta^{\mu_1}\Delta^{\mu_2}\Delta^{\mu_3} \frac{1}{3} \int \frac{d^Dk_1}{(2\pi)^D}\ldots \int \frac{d^Dk_l}{(2\pi)^D} ( q_{1,\mu_2}q_{1,\mu_3}q_{2,\mu_1} +q_{1,\mu_1}q_{1,\mu_3}q_{2,\mu_2} \nonumber \\ && +q_{1,\mu_1}q_{1,\mu_2}q_{2,\mu_3} ) f(k_1\ldots k_l,p,m)~. \label{ExInt4} \end{eqnarray} Generally speaking, the symmetrization of the tensor resulting from \begin{eqnarray} \prod_{i=1}^j (\Delta.q_i)^{n_i} \end{eqnarray} can be achieved by shuffling indices, \cite{Borwein:1999js,Blumlein:1998if,Vermaseren:1998uu,Remiddi:1999ew, Moch:2001zr,Blumlein:2003gb}, and dividing by the number of terms. The shuffle product is given by \begin{eqnarray} C \left[\underbrace{(q_1, \ldots, q_1)}_{ \small n_1} \SHU \underbrace{(q_2, \ldots, q_2)}_{ \small n_2} \SHU \ldots \SHU \underbrace{(q_j, \ldots, q_j)}_{ \small n_j} \right]~, \end{eqnarray} where $C$ is the normalization constant \begin{eqnarray} C = \binom{N}{n_1, \ldots, n_j}^{-1}~.
\end{eqnarray} As an example, the symmetrization of \begin{eqnarray} q_{1,\mu_1} q_{1,\mu_2} q_{2,\mu_3} \end{eqnarray} can be inferred from Eq.~(\ref{ExInt4}). After undoing the contraction with $\Delta$ in (\ref{ExInt1}) and shuffling the indices, one may make the following ansatz for the result of this integral, which follows from the necessity of complete symmetry in the Lorentz indices \begin{eqnarray} R_{\{\mu_1\ldots \mu_N\}} &\equiv&\sum_{j=1}^{[N/2]+1} A_j \Bigl(\prod_{k=1}^{j-1} g_{\{\mu_{2k}\mu_{2k-1}} \Bigr) \Bigl(\prod_{l=2j-1}^N p_{\mu_l\}} \Bigr) ~. \label{GenResInt} \end{eqnarray} In the above equation, $[~~]$ denotes the Gauss--bracket and $\{\}$ symmetrization with respect to the indices enclosed and dividing by the number of terms, as outlined above. The first few terms are then given by \begin{eqnarray} R_0 &\equiv& 1~, \\ R_{\{\mu_1\}} &=& A_1 p_{\mu_1}~, \\ R_{\{\mu_1\mu_2\}} &=& A_1 p_{\mu_1}p_{\mu_2}+A_2 g_{\mu_1\mu_2} ~, \\ R_{\{\mu_1\mu_2\mu_3\}} &=& A_1 p_{\mu_1}p_{\mu_2}p_{\mu_3} +A_2 g_{\{\mu_1\mu_2}p_{\mu_3\}} ~. \end{eqnarray} The scalars $A_j$ have in general different mass dimensions. By contracting again with $\Delta$, all trace terms vanish and one obtains \begin{eqnarray} I_l(p,m,n_1\ldots n_j) &=&\Delta^{\mu_1}\ldots \Delta^{\mu_N} R_{\{\mu_1\ldots \mu_N\}} \\ &=& A_1 (\Delta.p)^N \end{eqnarray} and thus the coefficient $A_1$ in Eq.~(\ref{GenResInt}) gives the desired result. To obtain it, one constructs a different projector, which is made up only of the external momentum $p$ and the metric tensor. By making a general ansatz for this projector, applying it to Eq.~(\ref{GenResInt}) and demanding that the result shall be equal to $A_1$, the coefficients of the different Lorentz structures can be determined. 
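The counting entering the normalization constant $C$ above can be made concrete: the number of distinct index assignments produced by the shuffle is the multinomial coefficient $\binom{N}{n_1,\ldots,n_j}$, i.e.\ $3$ for the example of Eq.~(\ref{ExInt4}). A short script (an illustrative sketch of ours, not part of the original computation) confirms this counting by brute-force enumeration:

```python
from itertools import permutations
from math import factorial

def distinct_shuffles(multiplicities):
    """Number of distinct ways to distribute N = n_1 + ... + n_j Lorentz
    indices among momenta q_1, ..., q_j, with q_i carrying n_i indices."""
    word = [i for i, n in enumerate(multiplicities) for _ in range(n)]
    return len(set(permutations(word)))

def multinomial(multiplicities):
    """Closed form: N! / (n_1! * ... * n_j!)."""
    res = factorial(sum(multiplicities))
    for n in multiplicities:
        res //= factorial(n)
    return res

# Eq. (ExInt4): (Delta.q_1)^2 (Delta.q_2) -> 3 symmetrized terms, C = 1/3
assert distinct_shuffles((2, 1)) == multinomial((2, 1)) == 3
# (Delta.q_1)^2 (Delta.q_2)^2 -> 6 symmetrized terms, C = 1/6
assert distinct_shuffles((2, 2)) == multinomial((2, 2)) == 6
```

For larger $N$ the enumeration grows factorially, so in practice one uses the closed multinomial formula directly.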
The projector reads \begin{eqnarray} \Pi_{\mu_1 \ldots \mu_N}&=&F(N) \sum_{i=1}^{[N/2]+1}C(i,N) \Bigl(\prod_{l=1}^{[N/2]-i+1} \frac{g_{\mu_{2l-1}\mu_{2l}}}{p^2} \Bigr) \Bigl(\prod_{k=2[N/2]-2i+3}^N \frac{p_{\mu_k}}{p^2} \Bigr)~. \label{Proj1} \end{eqnarray} For the overall pre-factors $F(N)$ and the coefficients $C(i,N)$, one has to distinguish between even and odd values of $N$, \begin{eqnarray} C^{odd}(k,N)&=&(-1)^{N/2+k+1/2} \frac{2^{2k-N/2-3/2}\Gamma(N+1)\Gamma(D/2+N/2+k-3/2)} {\Gamma(N/2-k+3/2)\Gamma(2k)\Gamma(D/2+N/2-1/2)}~,\nonumber\\ \\ F^{odd}(N) &=&\frac{2^{3/2-N/2}\Gamma(D/2+1/2)} {(D-1)\Gamma(N/2+D/2-1)}~, \\ C^{even}(k,N)&=&(-1)^{N/2+k+1} \frac{2^{2k-N/2-2}\Gamma(N+1)\Gamma(D/2+N/2-2+k)} {\Gamma(N/2-k+2)\Gamma(2k-1)\Gamma(D/2+N/2-1)}~, \\ F^{even}(N) &=&\frac{2^{1-N/2}\Gamma(D/2+1/2)} {(D-1)\Gamma(N/2+D/2-1/2)}~. \end{eqnarray} The projector obeys the normalization condition \begin{eqnarray} \Pi_{\mu_1\ldots \mu_N}R^{\mu_1\ldots \mu_N} &=&A_1 ~, \end{eqnarray} which implies \begin{eqnarray} \Pi_{\mu_1\ldots \mu_N}p^{\mu_1}\ldots p^{\mu_N}=1~. \end{eqnarray} As an example for the above procedure, we consider the case $N=3$, \begin{eqnarray} \Pi_{\mu_1\mu_2\mu_3} &=&\frac{1}{D-1} \Bigl( -3\frac{g_{\mu_{1}\mu_{2}}p_{\mu_3}}{p^4} +(D+2) \frac{p_{\mu_1}p_{\mu_2}p_{\mu_3}}{p^6} \Bigr)~. \end{eqnarray} Applying this projector to (\ref{ExInt4}) yields \begin{eqnarray} I_l(p,m,2,1) &=& \frac{1}{(D-1)p^6} \int \frac{d^Dk_1}{(2\pi)^D}\ldots \int \frac{d^Dk_l}{(2\pi)^D} \Bigl( -2 p^2 q_1.q_2 p.q_1 \nonumber\\[1em] && -p^2 q_1^2 p.q_2 +(D+2) (q_1.p)^2 q_2.p \Bigr) f(k_1\ldots k_l,p,m)~. \label{ExInt6} \end{eqnarray} Integrals of the type (\ref{ExInt6}) can be calculated by {\sf MATAD} up to $3$--loop order as a Taylor series in $p^2/m^2$. It is important to keep $p$ artificially off--shell until the end of the calculation. By construction, the overall result will not contain any term $\propto~1/p^2$, since the integral one starts with is free of such terms.
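The coefficients in Eq.~(\ref{Proj1}) can be checked numerically at a fixed test dimension $D$. The following sketch (ours, for illustration only) implements $F(N)$ and $C(i,N)$ as given above, reproduces the $N=3$ projector, and verifies the normalization $\Pi_{\mu_1\ldots \mu_N}p^{\mu_1}\ldots p^{\mu_N}=1$, which amounts to $F(N)\sum_i C(i,N)=1$ since each Lorentz structure contracted with $N$ vectors $p$ yields unity:

```python
from math import gamma

def proj_coeffs(N, D):
    """Return the coefficients F(N)*C(i,N), i = 1, ..., [N/2]+1, of the
    projector of Eq. (Proj1), implementing the formulas for F and C."""
    if N % 2:  # odd N; the sign (-1)^(N/2+k+1/2) has integer exponent (N+1)/2+k
        F = 2**(1.5 - N/2) * gamma(D/2 + 0.5) / ((D - 1) * gamma(N/2 + D/2 - 1))
        C = [(-1)**((N + 1)//2 + k) * 2**(2*k - N/2 - 1.5) * gamma(N + 1)
             * gamma(D/2 + N/2 + k - 1.5)
             / (gamma(N/2 - k + 1.5) * gamma(2*k) * gamma(D/2 + N/2 - 0.5))
             for k in range(1, N//2 + 2)]
    else:      # even N
        F = 2**(1 - N/2) * gamma(D/2 + 0.5) / ((D - 1) * gamma(N/2 + D/2 - 0.5))
        C = [(-1)**(N//2 + k + 1) * 2**(2*k - N/2 - 2) * gamma(N + 1)
             * gamma(D/2 + N/2 - 2 + k)
             / (gamma(N/2 - k + 2) * gamma(2*k - 1) * gamma(D/2 + N/2 - 1))
             for k in range(1, N//2 + 2)]
    return [F * c for c in C]

D = 6.0                      # arbitrary test dimension
c3 = proj_coeffs(3, D)
assert abs(c3[0] - (-3)/(D - 1)) < 1e-12      # coefficient of g p / p^4
assert abs(c3[1] - (D + 2)/(D - 1)) < 1e-12   # coefficient of p p p / p^6
# Normalization Pi p...p = 1: the coefficients must sum to unity.
for N in range(1, 7):
    assert abs(sum(proj_coeffs(N, D)) - 1.0) < 1e-10
```

The same check passes for any test value of $D$, as the normalization holds identically in the dimension.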
Thus, at the end, the $1/p^2$ terms have to cancel. The remaining constant term in $p^2$ is the desired result. The above projectors are similar to the harmonic projectors used in the ${\sf MINCER}$--program, cf. \cite{Larin:1991fz,Vermaseren:mincer}. These are, however, applied to the virtual forward Compton--amplitude to determine the anomalous dimensions and the moments of the massless Wilson coefficients up to 3--loop order. In general, the calculation was performed in Feynman gauge. Part of the calculation was carried out keeping the gauge parameter in $R_\xi$--gauges, in particular for the moments $N=2,4$ in the singlet case and for $N=1,2,3,4$ in the non--singlet case, yielding agreement with the results obtained using Feynman gauge. In addition, for the moments $N=2,4$ in the terms with external gluons, we applied the physical projector in Eq.~(\ref{projG2}), which serves as another verification of our results. The computation of the more complicated diagrams was performed on various 32/64~GB machines using {\sf FORM}, and for part of the calculation {\sf TFORM},~\cite{Tentyukov:2007mu}, was used. The complete calculation required about 250 CPU days. \subsection{\bf\boldmath Results} \label{SubSec-3LResUn} We calculated the unrenormalized operator matrix elements treating the 1PI-contributions explicitly. They contribute to $A_{Qg}^{(3)}, A_{gg,Q}^{(3)}$ and $A_{qq,Q}^{(3), {\sf NS}}$.
One obtains the following representations \begin{eqnarray} \hat{\hspace*{-1mm}\hat{A}}_{Qg}^{(3)}&=& \hat{\hspace*{-1mm}\hat{A}}_{Qg}^{(3), \mbox{\small \sf irr}} -~\hat{\hspace*{-1mm}\hat{A}}_{Qg}^{(2), \mbox{\small \sf irr}} \hat{\Pi}^{(1)}\Bigl(0,\frac{\hat{m}^2}{\mu^2}\Bigr) -~\hat{\hspace*{-1mm}\hat{A}}_{Qg}^{(1)} \hat{\Pi}^{(2)}\Bigl(0,\frac{\hat{m}^2}{\mu^2}\Bigr) \nonumber\\ && +~\hat{\hspace*{-1mm}\hat{A}}_{Qg}^{(1)} \hat{\Pi}^{(1)}\Bigl(0,\frac{\hat{m}^2}{\mu^2}\Bigr) \hat{\Pi}^{(1)}\Bigl(0,\frac{\hat{m}^2}{\mu^2}\Bigr)~,\\ \hat{\hspace*{-1mm}\hat{A}}_{gg,Q}^{(3)}&=& \hat{\hspace*{-1mm}\hat{A}}_{gg,Q}^{(3), \mbox{\small \sf irr}} -\hat{\Pi}^{(3)}\Bigl(0,\frac{\hat{m}^2}{\mu^2}\Bigr) -~\hat{\hspace*{-1mm}\hat{A}}_{gg,Q}^{(2), \mbox{\small \sf irr}} \hat{\Pi}^{(1)}\Bigl(0,\frac{\hat{m}^2}{\mu^2}\Bigr) \nonumber\\ && -2~\hat{\hspace*{-1mm}\hat{A}}_{gg,Q}^{(1)} \hat{\Pi}^{(2)}\Bigl(0,\frac{\hat{m}^2}{\mu^2}\Bigr) +~\hat{\hspace*{-1mm}\hat{A}}_{gg,Q}^{(1)} \hat{\Pi}^{(1)}\Bigl(0,\frac{\hat{m}^2}{\mu^2}\Bigr) \hat{\Pi}^{(1)}\Bigl(0,\frac{\hat{m}^2}{\mu^2}\Bigr)~, \\ \hat{\hspace*{-1mm}\hat{A}}_{qq,Q}^{(3), {\sf NS}}&=& \hat{\hspace*{-1mm}\hat{A}}_{qq,Q}^{(3), {\sf NS}, \mbox{\small \sf irr}} -~\hat{\Sigma}^{(3)}\Bigl(0,\frac{\hat{m}^2}{\mu^2}\Bigr)~. \end{eqnarray} The self-energies are given in Eqs.~(\ref{GluSelf1}, \ref{GluSelf2}, \ref{GluSelf3}, \ref{QuSelf3}). The calculation of the one-particle irreducible 3--loop contributions is performed as described in the previous Section~\footnote{Partial results of the calculation were presented in \cite{Bierenbaum:2008dk,Bierenbaum:2008tt}.}. The number of moments that could be calculated depended on the available computer resources with respect to memory and computational time, as well as on the possible parallelization using {\sf TFORM}. Increasing the Mellin moment from $N$ to $N+2$ demands a factor of 6--8 more memory and CPU time.
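Returning to the representations above: for $\hat{\hspace*{-1mm}\hat{A}}_{Qg}^{(3)}$, whose expansion starts at $O(a_s)$, the subtraction terms are precisely the $O(a_s^3)$ coefficient of the formal expansion of $\hat{\hspace*{-1mm}\hat{A}}_{Qg}^{\mbox{\small \sf irr}}\,[1+\hat{\Pi}]^{-1}$. This purely algebraic observation of ours can be verified with a short script, in which random numbers stand in for the expansion coefficients:

```python
import random

def mul(p, q, order=3):
    """Multiply two truncated power series in a_s, represented as
    {power: coefficient} dictionaries, dropping terms beyond 'order'."""
    r = {}
    for i, ci in p.items():
        for j, cj in q.items():
            if i + j <= order:
                r[i + j] = r.get(i + j, 0.0) + ci * cj
    return r

random.seed(42)
A1, A2, A3 = [random.random() for _ in range(3)]  # stand-ins for A^(1), A^(2,irr), A^(3,irr)
P1, P2 = random.random(), random.random()         # stand-ins for Pi^(1), Pi^(2)

A_irr = {1: A1, 2: A2, 3: A3}
Pi = {1: P1, 2: P2}

# (1 + Pi)^{-1} = 1 - Pi + Pi^2 - ..., truncated at order a_s^3
inv = {0: 1.0}
for k, v in Pi.items():
    inv[k] = inv.get(k, 0.0) - v
for k, v in mul(Pi, Pi).items():
    inv[k] = inv.get(k, 0.0) + v

full = mul(A_irr, inv)
# The O(a_s^3) coefficient reproduces the quoted subtraction terms.
assert abs(full[3] - (A3 - A2 * P1 - A1 * P2 + A1 * P1 ** 2)) < 1e-12
```

The analogous combination for $\hat{\hspace*{-1mm}\hat{A}}_{gg,Q}^{(3)}$ differs because of the tree--level term in that matrix element.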
We have calculated the even moments $N = 2, \ldots, 10$ for $A_{Qg}^{(3)}$, $A_{gg,Q}^{(3)}$, and $A_{qg,Q}^{(3)}$, for $A_{Qq}^{(3), \rm PS}$ up to $ N = 12$, and for $A_{qq,Q}^{(3), \rm NS}, A_{qq,Q}^{(3),\rm PS}, A_{gq,Q}^{(3)}$ up to $N=14$. In the ${\sf NS}$--case, we also calculated the odd moments $N=1,\ldots, 13$, which correspond to the ${\sf NS}^-$--terms. \vspace*{7mm}\noindent \underline {\large \sf $(i)$ Anomalous Dimensions:} \vspace*{2mm}\noindent The pole terms of the unrenormalized OMEs emerging in the calculation agree with the general structure we presented in Eqs.~(\ref{Ahhhqq3NSQ}, \ref{AhhhQq3PS}, \ref{Ahhhqq3PSQ}, \ref{AhhhQg3}, \ref{Ahhhqg3Q}, \ref{AhhhgqQ3}, \ref{Ahhhgg3Q}). Using lower order renormalization coefficients and the constant terms of the $2$--loop results, \cite{Buza:1995ie,Buza:1996wv,Bierenbaum:2007qe,Bierenbaum:2009zt}, allows one to determine the fixed moments of the 2--loop anomalous dimensions and the contributions $\propto T_F$ of the $3$--loop anomalous dimensions, cf. Appendix~\ref{App-AnDim}. All our results agree with the results of Refs.~\cite{Gracey:1993nn,Larin:1996wd,Retey:2000nq,Moch:2002sn,Moch:2004pa,Vogt:2004mw}. The anomalous dimensions $\gamma_{qg}^{(2)}$ and $\gamma_{qq}^{(2), {\sf PS}}$ are obtained completely. The present calculation is fully independent of Refs.~\cite{Larin:1996wd,Retey:2000nq,Moch:2002sn,Moch:2004pa,Vogt:2004mw}, both in the algorithms and in the codes used, and thus provides a stringent check on these results. \vspace*{7mm}\noindent \underline {\large \sf $(ii)$ The constant terms $a_{ij}^{(3)}(N)$:} \vspace*{2mm}\noindent The constant terms at $O(a_s^3)$, cf. Eqs. (\ref{Ahhhqq3NSQ}, \ref{AhhhQq3PS}, \ref{Ahhhqq3PSQ}, \ref{AhhhQg3}, \ref{Ahhhqg3Q}, \ref{AhhhgqQ3}, \ref{Ahhhgg3Q}), are the new contributions to the non--logarithmic part of the 3--loop massive operator matrix elements, which cannot be constructed from other renormalization constants calculated previously.
They are given in Appendix~\ref{App-OMEs}. All other contributions to the heavy flavor Wilson coefficients in the region $Q^2 \gg m^2$ are known for general values of $N$, cf. Sections~\ref{SubSec-RENPred} and \ref{Sec-2L}. The functions $a_{ij}^{(3)}(N)$ still contain coefficients $\propto \zeta_2$, and we will see below under which circumstances these terms contribute to the heavy flavor corrections to the deep--inelastic structure functions. The constant ${\sf B_4}$, (\ref{B4}), emerges as in other massive single--scale calculations, \cite{Broadhurst:1991fi,Avdeev:1994db,*Laporta:1996mq,Broadhurst:1998rz,Boughezal:2004ef}. \vspace*{7mm}\noindent \underline {\large \sf $(iii)$ Moments of the Constant Terms of the $3$--loop Massive OMEs} \vspace*{2mm}\noindent The logarithmic terms of the renormalized $3$--loop massive OMEs are determined by known renormalization constants and lower order contributions to the massive OMEs. They can be inferred from Eqs. (\ref{Aqq3NSQMSren}, \ref{AQq3PSMSren}, \ref{Aqq3PSQMSren}, \ref{AQg3MSren}, \ref{Aqg3QMSren}, \ref{Agq3QMSren}, \ref{Agg3QMSren}). In the following, we consider as examples the non--logarithmic contributions to the second moments of the renormalized massive OMEs. We refer to coupling constant renormalization in the $\overline{\sf MS}$--scheme and compare the results obtained by performing the mass renormalization in the on--shell--scheme $(m)$ and in the $\overline{\sf MS}$--scheme $(\overline{m})$, cf. Section~\ref{Sec-REP}.
For the matrix elements with external gluons, we obtain~: \begin{eqnarray} A_{Qg}^{(3), \overline{{\sf MS}}}(\mu^2=m^2,2) &=& T_FC_A^2 \Biggl( \frac{174055}{4374} -\frac{88}{9}{\sf B_4}+72\zeta_4 -\frac{29431}{324}\zeta_3 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-35mm} +T_FC_FC_A \Biggl( -\frac{18002}{729} +\frac{208}{9}{\sf B_4}-104\zeta_4 +\frac{2186}{9}\zeta_3 -\frac{64}{3}\zeta_2+64\zeta_2\ln(2) \Biggr) \nonumber \end{eqnarray} \begin{eqnarray} && \hspace{-35mm} +T_FC_F^2 \Biggl( -\frac{8879}{729} -\frac{64}{9}{\sf B_4}+32\zeta_4 -\frac{701}{81}\zeta_3+80\zeta_2-128\zeta_2\ln(2) \Biggr) \nonumber \\ \nonumber \\ && \hspace{-35mm} +T_F^2C_A \Biggl( -\frac{21586}{2187} +\frac{3605}{162}\zeta_3 \Biggr) +T_F^2C_F \Biggl( -\frac{55672}{729} +\frac{889}{81}\zeta_3 +\frac{128}{3}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-35mm} +n_fT_F^2C_A \Biggl( -\frac{7054}{2187} -\frac{704}{81}\zeta_3 \Biggr) +n_fT_F^2C_F \Biggl( -\frac{22526}{729} +\frac{1024}{81}\zeta_3 -\frac{64}{3}\zeta_2 \Biggr)~.\label{AQg3N2ONMS} \\ A_{Qg}^{(3), \overline{{\sf MS}}}(\mu^2=\overline{m}^2,2) &=& T_FC_A^2 \Biggl( \frac{174055}{4374} -\frac{88}{9}{\sf B_4}+72\zeta_4 -\frac{29431}{324}\zeta_3 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-35mm} +T_FC_FC_A \Biggl( -\frac{123113}{729} +\frac{208}{9}{\sf B_4}-104\zeta_4 +\frac{2330}{9}\zeta_3 \Biggr) +T_FC_F^2 \Biggl( -\frac{8042}{729} -\frac{64}{9}{\sf B_4} \nonumber \\ \nonumber\\ && \hspace{-35mm} +32\zeta_4-\frac{3293}{81}\zeta_3 \Biggr) +T_F^2C_A \Biggl( -\frac{21586}{2187} +\frac{3605}{162}\zeta_3 \Biggr) +T_F^2C_F \Biggl( -\frac{9340}{729} +\frac{889}{81}\zeta_3 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-35mm} +n_fT_F^2C_A \Biggl( -\frac{7054}{2187} -\frac{704}{81}\zeta_3 \Biggr) +n_fT_F^2C_F \Biggl( \frac{478}{729} +\frac{1024}{81}\zeta_3 \Biggr) ~. 
\label{AQg3N2MSMS} \\ A_{qg,Q}^{(3), \overline{{\sf MS}}}(\mu^2=m^2,2) &=& n_fT_F^2C_A \Biggl( \frac{64280}{2187} -\frac{704}{81}\zeta_3 \Biggr) +n_fT_F^2C_F \Biggl( -\frac{7382}{729} +\frac{1024}{81}\zeta_3 \Biggr) ~. \nonumber \\ \label{Aqg3QN2ONMS} \\ A_{gg,Q}^{(3), \overline{{\sf MS}}}(\mu^2=m^2,2) &=& T_FC_A^2 \Biggl( -\frac{174055}{4374} +\frac{88}{9}{\sf B_4}-72\zeta_4 +\frac{29431}{324}\zeta_3 \Biggr) \nonumber \\ \nonumber\\ && \hspace{-35mm} +T_FC_FC_A \Biggl( \frac{18002}{729} -\frac{208}{9}{\sf B_4}+104\zeta_4 -\frac{2186}{9}\zeta_3 +\frac{64}{3}\zeta_2-64\zeta_2\ln(2) \Biggr) \nonumber \\ \nonumber \\ && \hspace{-35mm} +T_FC_F^2 \Biggl( \frac{8879}{729} +\frac{64}{9}{\sf B_4}-32\zeta_4 +\frac{701}{81}\zeta_3-80\zeta_2+128\zeta_2\ln(2) \Biggr) \nonumber \\ \nonumber \\ && \hspace{-35mm} +T_F^2C_A \Biggl( \frac{21586}{2187} -\frac{3605}{162}\zeta_3 \Biggr) +T_F^2C_F \Biggl( \frac{55672}{729} -\frac{889}{81}\zeta_3 -\frac{128}{3}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-35mm} +n_fT_F^2C_A \Biggl( -\frac{57226}{2187} +\frac{1408}{81}\zeta_3 \Biggr) +n_fT_F^2C_F \Biggl( \frac{29908}{729} -\frac{2048}{81}\zeta_3 +\frac{64}{3}\zeta_2 \Biggr)~. 
\label{Agg3QN2ONMS} \\ A_{gg,Q}^{(3), \overline{{\sf MS}}}(\mu^2=\overline{m}^2,2) &=& T_FC_A^2 \Biggl( -\frac{174055}{4374} +\frac{88}{9}{\sf B_4}-72\zeta_4 +\frac{29431}{324}\zeta_3 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-35mm} +T_FC_FC_A \Biggl( \frac{123113}{729} -\frac{208}{9}{\sf B_4}+104\zeta_4 -\frac{2330}{9}\zeta_3 \Biggr) +T_FC_F^2 \Biggl( \frac{8042}{729} +\frac{64}{9}{\sf B_4} \nonumber \\ \nonumber \\ && \hspace{-35mm} -32\zeta_4 +\frac{3293}{81}\zeta_3 \Biggr) +T_F^2C_A \Biggl( \frac{21586}{2187} -\frac{3605}{162}\zeta_3 \Biggr) +T_F^2C_F \Biggl( \frac{9340}{729} -\frac{889}{81}\zeta_3 \Biggr)\nonumber \end{eqnarray} \begin{eqnarray} && \hspace{-35mm} +n_fT_F^2C_A \Biggl( -\frac{57226}{2187} +\frac{1408}{81}\zeta_3 \Biggr) +n_fT_F^2C_F \Biggl( \frac{6904}{729} -\frac{2048}{81}\zeta_3 \Biggr)~. \label{Agg3QN2MSMS} \end{eqnarray} Comparing the operator matrix elements in the on--shell--scheme and the $\overline{\sf MS}$--scheme, one notices that the terms $\ln(2) \zeta_2$ and $\zeta_2$ are absent in the latter. The $\zeta_2$ terms, which contribute to $a_{ij}^{(3)}$, are canceled by other contributions through renormalization. Although the present process is massive, this observation resembles the known result that $\zeta_2$--terms do not contribute in space--like massless higher order calculations in even dimensions,~\cite{Broadhurst:private1}. This behavior is found for all calculated moments. The occurring $\zeta_4$--terms may partly cancel with those in the $3$--loop light Wilson coefficients, \cite{Vermaseren:2005qc}. Note that Eq.~(\ref{Aqg3QN2ONMS}) is not sensitive to mass renormalization due to the structure of the contributing diagrams. An additional check is provided by the sum rule (\ref{sumrule2}), which is fulfilled in all renormalization schemes and also on the unrenormalized level. Unlike the operator matrix elements with external gluons, the second moments of the quarkonic OMEs emerge for the first time at $O(a_s^2)$.
To 3--loop order, the renormalized quarkonic OMEs do not contain terms $\propto \zeta_2$. Due to their simpler structure, mass renormalization in the on--shell--scheme does not give rise to terms $\propto \zeta_2, \ln(2) \zeta_2$. Only the rational contribution in the color factor $\propto T_F C_F^2$ differs between the $\overline{\sf MS}$-- and the on--mass--shell--scheme, while $A_{qq,Q}^{\sf PS, (3)}$, (\ref{eqqqQ3}), is not affected at all. This holds again for all moments we calculated. The non--logarithmic contributions are given by \begin{eqnarray} A_{Qq}^{(3), {\sf PS}, \overline{{\sf MS}}}(\mu^2=m^2,2) &=& T_FC_FC_A \Biggl( \frac{830}{2187} +\frac{64}{9}{\sf B_4}-64\zeta_4 +\frac{1280}{27}\zeta_3 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-35mm} +T_FC_F^2 \Biggl( \frac{95638}{729} -\frac{128}{9}{\sf B_4}+64\zeta_4 -\frac{9536}{81}\zeta_3 \Biggr) +T_F^2C_F \Biggl( \frac{53144}{2187} -\frac{3584}{81}\zeta_3 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-35mm} +n_fT_F^2C_F \Biggl( -\frac{34312}{2187} +\frac{1024}{81}\zeta_3 \Biggr) ~. \\ A_{Qq}^{(3),{\sf PS}, \overline{{\sf MS}}}(\mu^2=\overline{m}^2,2) &=& T_FC_FC_A \Biggl( \frac{830}{2187} +\frac{64}{9}{\sf B_4}-64\zeta_4 +\frac{1280}{27}\zeta_3 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-35mm} +T_FC_F^2 \Biggl( \frac{78358}{729} -\frac{128}{9}{\sf B_4}+64\zeta_4 -\frac{9536}{81}\zeta_3 \Biggr) +T_F^2C_F \Biggl( \frac{53144}{2187} -\frac{3584}{81}\zeta_3 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-35mm} +n_fT_F^2C_F \Biggl( -\frac{34312}{2187} +\frac{1024}{81}\zeta_3 \Biggr)~. \\ A_{qq,Q}^{(3),{\sf PS}, \overline{{\sf MS}}}(\mu^2=m^2,2) &=& n_fT_F^2C_F \Biggl( -\frac{52168}{2187} +\frac{1024}{81}\zeta_3 \Biggr)~.
\label{eqqqQ3} \\ A_{qq,Q}^{(3),{\sf NS}, \overline{{\sf MS}}}(\mu^2=m^2,2) &=& T_FC_FC_A \Biggl( -\frac{101944}{2187} +\frac{64}{9}{\sf B_4}-64\zeta_4 +\frac{4456}{81}\zeta_3 \Biggr) \nonumber \nonumber \\ \nonumber \\ && +T_FC_F^2 \Biggl( \frac{283964}{2187} -\frac{128}{9}{\sf B_4}+64\zeta_4 -\frac{848}{9}\zeta_3 \Biggr)\nonumber \end{eqnarray} \begin{eqnarray} &&\hspace{-35mm} +T_F^2C_F \Biggl( \frac{25024}{2187} -\frac{1792}{81}\zeta_3 \Biggr) +n_fT_F^2C_F \Biggl( -\frac{46336}{2187} +\frac{1024}{81}\zeta_3 \Biggr) ~. \label{Aqq3NSQN2MSON} \\ A_{qq,Q}^{(3),{\sf NS}, \overline{{\sf MS}}}(\mu^2=\overline{m}^2,2) &=& T_FC_FC_A \Biggl( -\frac{101944}{2187} +\frac{64}{9}{\sf B_4}-64\zeta_4 +\frac{4456}{81}\zeta_3 \Biggr) \nonumber \\ \nonumber \\ &&\hspace{-35mm} +T_FC_F^2 \Biggl( \frac{201020}{2187} -\frac{128}{9}{\sf B_4}+64\zeta_4 -\frac{848}{9}\zeta_3 \Biggr) \nonumber \\ \nonumber \\ &&\hspace{-35mm} +T_F^2C_F \Biggl( \frac{25024}{2187} -\frac{1792}{81}\zeta_3 \Biggr) +n_fT_F^2C_F \Biggl( -\frac{46336}{2187} +\frac{1024}{81}\zeta_3 \Biggr) ~. \\ A_{gq,Q}^{(3), \overline{{\sf MS}}}(\mu^2=m^2,2) &=& T_FC_FC_A \Biggl( \frac{101114}{2187} -\frac{128}{9}{\sf B_4}+128\zeta_4 -\frac{8296}{81}\zeta_3 \Biggr) \nonumber \\ \nonumber \\ &&\hspace{-35mm} +T_FC_F^2 \Biggl( -\frac{570878}{2187} +\frac{256}{9}{\sf B_4}-128\zeta_4 +\frac{17168}{81}\zeta_3 \Biggr) \nonumber \\ \nonumber \\ &&\hspace{-35mm} +T_F^2C_F \Biggl( -\frac{26056}{729} +\frac{1792}{27}\zeta_3 \Biggr) +n_fT_F^2C_F \Biggl( \frac{44272}{729} -\frac{1024}{27}\zeta_3 \Biggr) ~. 
\\ A_{gq,Q}^{(3), \overline{{\sf MS}}}(\mu^2=\overline{m}^2,2) &=& T_FC_FC_A \Biggl( \frac{101114}{2187} -\frac{128}{9}{\sf B_4}+128\zeta_4 -\frac{8296}{81}\zeta_3 \Biggr) \nonumber \\ \nonumber \\ &&\hspace{-35mm} +T_FC_F^2 \Biggl( -\frac{436094}{2187} +\frac{256}{9}{\sf B_4}-128\zeta_4 +\frac{17168}{81}\zeta_3 \Biggr) \nonumber \\ \nonumber \\ &&\hspace{-35mm} +T_F^2C_F \Biggl( -\frac{26056}{729} +\frac{1792}{27}\zeta_3 \Biggr) +n_fT_F^2C_F \Biggl( \frac{44272}{729} -\frac{1024}{27}\zeta_3 \Biggr) ~. \end{eqnarray} Finally, the sum rule (\ref{sumrule2}) holds on the unrenormalized level, as well as for the renormalized expressions in all schemes considered. {\sf FORM}--codes for the constant terms $a_{ij}^{(3)}$, Appendix~\ref{App-OMEs}, and the corresponding moments of the renormalized massive operator matrix elements, both for the mass renormalization carried out in the on--shell-- and $\overline{\sf MS}$--scheme, are attached to Ref.~\cite{Bierenbaum:2009mv} and can be obtained upon request. Phenomenological studies of the 3--loop heavy flavor Wilson coefficients in the region $Q^2 \gg m^2$ will be given elsewhere, \cite{Blumlein:prep1}. \newpage \section{\bf\boldmath Heavy Flavor Corrections to Polarized Deep-Inelastic Scattering} \label{Sec-POL} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0} The composition of the proton spin in terms of partonic degrees of freedom has attracted much interest after the initial experimental finding, \cite{Alguard:1976bmxAlguard:1978gf,*Baum:1983ha,*Ashman:1987hvxAshman:1989ig}, that the polarization of the three light quarks alone does not add up to the required value of $1/2$.
Subsequently, the polarized proton structure functions have been measured in great detail by various experiments, \cite{Adeva:1993km,*Anthony:1996mw,*Ackerstaff:1997ws,*Abe:1997cx,*Adeva:1998vv,*Abe:1998wq,*Airapetian:1998wi,*Anthony:1999rm,*Anthony:2000fn,*Zheng:2003un,*Airapetian:2004zf,*Ageev:2005gh,*Ageev:2007du,*Airapetian:2007mh}~\footnote{For theoretical surveys see \cite{Reya:1992ye,Lampe:1998eu,Burkardt:2008jw}.}. To determine the different contributions to the nucleon spin, both the flavor dependence and the contributions due to gluons and orbital angular momentum at virtualities $Q^2$ in the perturbative region have to be studied in more detail in the future. As the nucleon spin contributions are related to the first moments of the respective distribution functions, it is desirable to measure down to very small values of $x$ at high energies, cf. \cite{Bluemlein:1995uc,Blumlein:1997ch,Blumlein:1995gh}. A detailed treatment of the flavor structure requires the inclusion of heavy flavor. As in the unpolarized case, this contribution is driven by the gluon and sea--quark densities. Exclusive data on charm--quark pair production in polarized deep--inelastic scattering are available only in the region of very low photon virtualities at present, \cite{Kurek:2006fw,*Brona:2007ug}. However, the inclusive measurement of the structure functions $g_1(x,Q^2)$ and $g_2(x,Q^2)$ contains the heavy flavor contributions for hadronic masses $W^2 \geq (2 m + M)^2$. The polarized heavy flavor Wilson coefficients are known to first order in the whole kinematic range, \cite{Watson:1981ce,Gluck:1990in,Vogelsang:1990ug}. In these references, numerical illustrations for the ${\sf LO}$ contributions were given as well, cf. also \cite{Blumlein:2003wk}. The polarized parton densities have been extracted from deep-inelastic scattering data in \cite{Altarelli:1998nb,Gluck:2000dy,Bluemlein:2002be,*Hirai:2006sr,*Leader:2006xc,deFlorian:2009vb}.
Unlike the case for photo-production, \cite{Bojak:1998zm}, the ${\sf NLO}$ Wilson coefficients have not been calculated for the whole kinematic domain, but only in the region $Q^2 \gg m^2$, \cite{Buza:1996xr}, applying the same technique as described in Section~\ref{SubSec-HQAsym}. As outlined in the same Section, the heavy flavor contributions to the structure function $F_2(x,Q^2)$ are very well described by the asymptotic representation for $Q^2/m^2 \gtrsim 10$, i.e., $Q^2 \gtrsim 22.5~{\rm GeV}^2$, in case of charm. A similar approximation should hold in case of the polarized structure function $g_1(x,Q^2)$. In this chapter, we present the first re--calculation of the heavy flavor contributions to the longitudinally polarized structure function $g_1(x,Q^2)$ to $O(a_s^2)$ in the asymptotic region $Q^2 \gg m^2$, \cite{Buza:1996xr}. The corresponding contributions to the structure function $g_2(x,Q^2)$ can be obtained by using the Wandzura--Wilczek relation, \cite{Wandzura:1977qf}, at the level of twist--2 operators, as has been shown in Refs.~\cite{Jackson:1989ph,*Roberts:1996ub,Blumlein:1996vs,Blumlein:1996tp,Blumlein:2003wk} within the covariant parton model. In the polarized case, the twist-$2$ heavy flavor Wilson coefficients factorize in the limit $Q^2\gg~m^2$ in the same way as in the unpolarized case, cf. Section~\ref{SubSec-HQAsym} and \cite{Buza:1996xr}. The corresponding light flavor Wilson coefficients were obtained in Ref.~\cite{Zijlstra:1993shxZijlstra:1993she1xZijlstra:1993she2}. We proceed by calculating the $2$--loop polarized massive quarkonic OMEs, as has been done in Ref.~\cite{Buza:1996xr}. Additionally, we calculate for the first time the $O(\varepsilon)$ terms of these objects, which will be needed to evaluate the $O(a_s^3)$ corrections, cf. Section~\ref{Sec-REN}. The calculation is performed in the same way as described in Section~\ref{Sec-2L} and we therefore only discuss aspects that are specific to the polarized case.
The notation for the heavy flavor Wilson coefficients is the same as in Eq.~(\ref{Calldef}) and below, except that the index $(2,L)$ has to be replaced by $(g_1,g_2)$. The polarized massive operator matrix elements are denoted by $\Delta A_{ij}$ and obey the same relations as in Sections \ref{Sec-HQDIS} and \ref{Sec-REN}, if one replaces the anomalous dimensions, cf. Eqs.~(\ref{gammazetNS}, \ref{gammazetS}), by their polarized counterparts, $\Delta \gamma_{ij}$. The asymptotic heavy flavor corrections for polarized deeply inelastic scattering to $O(a_s^2)$, \cite{Buza:1996xr}, were calculated in a specific scheme for the treatment of $\gamma_5$ in dimensional regularization. This was done in order to use the same scheme as has been applied in the calculation of the massless Wilson coefficients in \cite{Zijlstra:1993shxZijlstra:1993she1xZijlstra:1993she2}. Here, we refer to the version prior to an Erratum submitted in 2007, which connected the calculation to the $\overline{\sf MS}$--scheme. In this chapter we would like to compare to the results given in Ref.~\cite{Buza:1996xr}, which requires applying the conventions used there. In Section~\ref{sec-P2}, we summarize the main relations, such as the differential cross sections for polarized deeply inelastic scattering and the leading order heavy flavor corrections. We give a brief outline of the representation of the asymptotic heavy flavor corrections at ${\sf NLO}$. In Sections~\ref{sec-P4}--\ref{sec-P5}, the contributions to the operator matrix elements $\Delta A_{qq,Q}^{(2), {\sf NS}}$, $\Delta A_{Qg}^{(2)}$ and $\Delta A_{Qq}^{{\sf PS},(2)}$ are calculated up to the linear terms in $\varepsilon$.
\subsection{\bf \boldmath Polarized Scattering Cross Sections} \label{sec-P2} We consider the process of deeply inelastic longitudinally polarized charged lepton scattering off longitudinally (L) or transversely (T) polarized nucleons in case of single photon exchange~\footnote{For the basic kinematics of DIS, see Section~\ref{SubSec-DISKin}.}. The differential scattering cross section is given by \begin{eqnarray} \frac{d^3 \sigma}{dx dy d \theta} = \frac{y \alpha^2}{Q^4} L^{\mu\nu} W_{\mu\nu}~, \end{eqnarray} cf.~\cite{Lampe:1998eu,Blumlein:1996tp}. Here, $\theta$ is the azimuthal angle of the final state lepton. One may define an asymmetry between the differential cross sections for opposite nucleon polarization \begin{equation} A(x,y,\theta)_{L,T} = \frac{d^3 \sigma_{L,T}^{\rightarrow}}{dx dy d \theta} -\frac{d^3 \sigma_{L,T}^{\leftarrow}}{dx dy d \theta}~, \end{equation} which projects onto the asymmetric part of both the leptonic and hadronic tensors, $L^A_{\mu\nu}$ and $W^A_{\mu\nu}$. The hadronic tensor is then expressed by two nucleon structure functions \begin{equation} W^A_{\mu\nu} = i \varepsilon_{\mu\nu\lambda\sigma} \left[\frac{q^\lambda S^\sigma}{P.q} g_1(x,Q^2) + \frac{q^\lambda (P.q S^\sigma - S.q P^\sigma)}{(P.q)^2} g_2(x,Q^2) \right]~. \end{equation} Here $S$ denotes the nucleon's spin vector \begin{eqnarray} S_L &=& (0,0,0,M) \nonumber\\ S_T &=& M(0,\cos(\bar{\theta}),\sin(\bar{\theta}),0)~, \end{eqnarray} with $\bar{\theta}$ a fixed angle in the plane transverse to the nucleon beam. $\varepsilon_{\mu\nu\lambda\sigma}$ is the Levi--Civita symbol. 
The asymmetries $A(x,y,\theta)_{L,T}$ read \begin{eqnarray} A(x,y)_{L} &=& 4 \lambda \frac{\alpha^2}{Q^2} \left[ \left(2 - y - \frac{2xy M^2}{s} \right) g_1(x,Q^2) + 4 \frac{yx M^2}{s} g_2(x,Q^2) \right]~, \\ A(x,y,\bar{\theta},\theta)_{T} &=& -8 \lambda \frac{\alpha^2}{Q^2} \sqrt{\frac{M^2}{s}} \sqrt{\frac{x}{y} \left[1-y-\frac{xy M^2}{s}\right]} \cos(\bar{\theta} - \theta) [ y g_1(x,Q^2) \nonumber\\ && + 2 g_2(x,Q^2)]~, \end{eqnarray} where $\lambda$ is the degree of polarization. In case of $A(x,y)_{L}$, the azimuthal angle was integrated out, since the differential cross section depends on it only through phase space. The twist--2 heavy flavor contributions to the structure function $g_1(x,Q^2)$ are calculated using the collinear parton model. This is not possible in case of the structure function $g_2(x,Q^2)$. As has been shown in Ref.~\cite{Blumlein:2003wk}, the Wandzura--Wilczek relation holds for the gluonic heavy flavor contributions as well \begin{eqnarray} g_2^{\tau = 2}(x,Q^2) = - g_1^{\tau = 2}(x,Q^2) +\int_x^1 \frac{dz}{z} g_1^{\tau = 2}(z,Q^2)~, \end{eqnarray} from which $g_2(x,Q^2)$ can be calculated for twist $\tau=2$. At leading order the heavy flavor corrections are known for the whole kinematic region, \cite{Watson:1981ce,Gluck:1990in,Vogelsang:1990ug}, \begin{eqnarray} g_1^{Q\overline{Q}}(x,Q^2,m^2) = 4 e_Q^2 a_s \int_{ax}^1 \frac{dz}{z} H_{g,g_1}^{(1)}\left(\frac{x}{z},\frac{m^2}{Q^2}\right) \Delta G(z,n_f,Q^2)~ \end{eqnarray} and are of the same structure as in the unpolarized case, cf. Eq. (\ref{FcLO}). Here, $\Delta G$ is the polarized gluon density. The ${\sf LO}$ heavy flavor Wilson coefficient then reads \begin{eqnarray} H_{g,g_1}^{(1)}\left(\tau,\frac{m^2}{Q^2}\right) = 4 T_F \left[v (3- 4\tau) + (1- 2\tau) \ln \Biggl( \frac{1 - v}{1 + v} \Biggr)\right]~, \end{eqnarray} with $v$ the cms velocity of the produced heavy quark pair. The support of $H_{g,g_1}^{(1)}\left(\tau,{m^2}/{Q^2}\right)$ is $\tau \in [0,1/a]$.
As is well known, the first moment of $H_{g,g_1}^{(1)}$ vanishes \begin{eqnarray} \label{eq2new} \int_0^{1/a} d\tau H_{g,g_1}^{(1)}\left(\tau,\frac{m^2}{Q^2}\right) = 0~, \end{eqnarray} which has phenomenological implications for the heavy flavor contributions to polarized structure functions, resulting in an oscillatory profile, \cite{Blumlein:2003wk}. The unpolarized heavy flavor Wilson coefficients, \cite{Laenen:1992zkxLaenen:1992xs,*Riemersma:1994hv,Buza:1995ie,Bierenbaum:2007qe,Bierenbaum:2007dm,Bierenbaum:2006mq,Bierenbaum:2007rg,Bierenbaum:2008tm}, do not obey a relation like (\ref{eq2new}) but exhibit a rising behavior towards smaller values of $x$. At asymptotic values $Q^2 \gg m^2$ one obtains \begin{eqnarray} \label{eq1new} H_{g,g_1}^{(1),{\rm as}}\left(\tau,\frac{m^2}{Q^2}\right) = 4 T_F \left[(3- 4 \tau) - (1- 2\tau) \ln \left(\frac{Q^2}{m^2} \frac{1-\tau}{\tau}\right)\right]~. \end{eqnarray} The factor in front of the logarithmic term $\ln(Q^2/m^2)$ in (\ref{eq1new}) is proportional to the leading order splitting function $\Delta P_{qg}(\tau)$,~\cite{Altarelli:1977zs,Ito:1975pf,*Sasaki:1975hk,*Ahmed:1976ee}~\footnote{Early calculations of the leading order polarized singlet splitting functions in Refs.~\cite{Ito:1975pf,*Sasaki:1975hk,*Ahmed:1976ee} still contained some errors.}, \begin{eqnarray} \Delta P_{qg}(\tau) =8T_F\left[\tau^2-(1-\tau)^2\right]=8T_F\left[2\tau- 1\right]~. \end{eqnarray} The sum--rule (\ref{eq2new}) also holds in the asymptotic case, extending the range of integration to $\tau \in [0,1]$, \begin{eqnarray} \label{eq2as} \int_0^{1}d\tau H_{g,g_1}^{(1), {\rm as}}\left(\tau,\frac{m^2}{Q^2}\right) =0~. \end{eqnarray} \subsection{\bf \boldmath Polarized Massive Operator Matrix Elements} \label{sec-P2a} The asymptotic heavy flavor Wilson coefficients obey the same factorization relations in the limit $Q^2\gg~m^2$ as in the unpolarized case, Eqs. (\ref{LNSFAC})--(\ref{HgFAC}), if one replaces all quantities by their polarized counterparts.
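As a brief numerical aside, the asymptotic sum rule (\ref{eq2as}) can be checked directly for the explicit form (\ref{eq1new}); the $\ln(Q^2/m^2)$ piece drops out since $\int_0^1 d\tau\,(1-2\tau)=0$. The following sketch (ours, for illustration only) uses a simple midpoint rule, which handles the integrable logarithmic endpoint singularities:

```python
from math import log

def H_as(tau, L, TF=0.5):
    """Asymptotic LO Wilson coefficient, Eq. (eq1new); L = ln(Q^2/m^2)."""
    return 4.0 * TF * ((3.0 - 4.0 * tau)
                       - (1.0 - 2.0 * tau) * (L + log((1.0 - tau) / tau)))

def first_moment(L, n=100000):
    """Midpoint-rule approximation of int_0^1 dtau H_as(tau, L)."""
    h = 1.0 / n
    return sum(H_as((i + 0.5) * h, L) for i in range(n)) * h

# The first moment vanishes independently of the value of ln(Q^2/m^2).
for L in (0.0, 2.3, 10.0):
    assert abs(first_moment(L)) < 1e-3
```

The cancellation between the polynomial part, integrating to $1$, and the $\ln[(1-\tau)/\tau]$ part, integrating to $-1$, produces the oscillatory profile mentioned above.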
The corresponding polarized twist--$2$ composite operators, cf. Eqs. (\ref{COMP1})--(\ref{COMP3}), are given by \begin{eqnarray} \label{COMP1pol} O^{\sf NS}_{q,r;\mu_1, \ldots, \mu_N} &=& i^{N-1} {\bf S} [\overline{\psi}\gamma_5\gamma_{\mu_1} D_{\mu_2} \ldots D_{\mu_N} \frac{\lambda_r}{2}\psi] - {\rm trace~terms}~, \\ \label{COMP2pol} O^{\sf S}_{q;\mu_1, \ldots, \mu_N} &=& i^{N-1} {\bf S} [\overline{\psi}\gamma_5 \gamma_{\mu_1} D_{\mu_2} \ldots D_{\mu_N} \psi] - {\rm trace~terms}~, \\ \label{COMP3pol} O^{\sf S}_{g;\mu_1, \ldots, \mu_N} &=& 2 i^{N-2} {\bf S} {\rm \bf Sp}[\frac{1}{2} \varepsilon^{\mu_1 \alpha \beta \gamma} F_{\beta\gamma}^a D^{\mu_2} \ldots D^{\mu_{N-1}} F_{\alpha, a}^{\mu_N}] - {\rm trace~terms}~. \end{eqnarray} The Feynman rules needed are given in Appendix~\ref{App-FeynRules}. The polarized anomalous dimensions of these operators are defined in the same way as in Eqs. (\ref{gammazetNS}, \ref{gammazetS}), as is the case for the polarized massive OMEs, cf. Eq.~(\ref{pertomeren}) and below. In the subsequent investigation, we will follow Ref.~\cite{Buza:1996xr} and calculate the quarkonic heavy quark contributions to $O(a_s^2)$. The diagrams contributing to the corresponding massive OMEs are the same as in the unpolarized case and are shown in Figures 1--4 in Ref.~\cite{Buza:1995ie}. The formal factorization relations for the heavy flavor Wilson coefficients can be inferred from Eqs. (\ref{eqWIL1}, \ref{eqWIL4}, \ref{eqWIL5}). Here, we perform the calculation in the ${\sf MOM}$--scheme, cf. Section~\ref{SubSec-HQElProdWave}, to account for heavy quarks in the final state only. The same scheme has been adopted in Ref.~\cite{Buza:1996xr}. 
Identifying $\mu^2=Q^2$, the heavy flavor Wilson coefficients in the limit $Q^2\gg~m^2$ become, \cite{Buza:1996xr}, \begin{eqnarray} H_{g,g_1}^{(1)}\left(\frac{Q^2}{m^2}, N\right)&=& -\frac{1}{2} \Delta \hat{\gamma}_{qg}^{(0)} \ln\left(\frac{Q^2}{m^2}\right) +\hat{c}^{(1)}_{g,g_1}, \label{POLHgLO} \\ H_{g,g_1}^{(2)}\left(\frac{Q^2}{m^2}, N\right)&=& \Biggl\{ \frac{1}{8}\Delta\hat{\gamma}^{(0)}_{qg} \left[ \Delta \gamma_{qq}^{(0)} -\Delta \gamma_{gg}^{(0)} -2\beta_0 \right] \ln^2\left(\frac{Q^2}{m^2}\right) \nonumber\\ && -\frac{1}{2}\left[ \Delta\hat{\gamma}_{qg}^{(1)} +\Delta\hat{\gamma}_{qg}^{(0)}c_{q,g_1}^{(1)} \right] \ln\left(\frac{Q^2}{m^2}\right) \nonumber\\ && +\left[ \Delta \gamma_{gg}^{(0)} -\Delta \gamma_{qq}^{(0)} +2\beta_0 \right] \frac{\Delta\hat{\gamma}_{qg}^{(0)}\zeta_2}{8} +\hat{c}_{g,g_1}^{(2)} +\Delta a_{Qg}^{(2)} \Biggr\} ~, \label{POLHgNLO} \\ H_{q,g_1}^{(2), {\sf PS}}\left(\frac{Q^2}{m^2}, N\right)&=& \Biggl\{ -\frac{1}{8}\Delta\hat{\gamma}_{qg}^{(0)} \Delta\gamma_{gq}^{(0)} \ln^2\left(\frac{Q^2}{m^2}\right) -\frac{1}{2}\Delta\hat{\gamma}_{qq}^{(1), {\sf PS}} \ln\left(\frac{Q^2}{m^2}\right) \nonumber\\ && +\frac{\Delta\hat{\gamma}_{qg}^{(0)}\Delta\gamma_{gq}^{(0)}}{8} \zeta_2 +\hat{c}_{q,g_1}^{(2), {\sf PS}} +\Delta a_{Qq}^{(2), {\sf PS}} \Biggr\}~, \label{POLHPSNLO} \\ L_{q,g_1}^{(2), {\sf NS}}\left(\frac{Q^2}{m^2},N\right)&=& \Biggl\{ \frac{1}{4}\beta_{0,Q}\Delta\gamma^{(0)}_{qq} \ln^2\left(\frac{Q^2}{m^2}\right) -\left[ \frac{1}{2}\Delta\hat{\gamma}_{qq}^{(1), {\sf NS}} +\beta_{0,Q} c_{q,g_1}^{(1)} \right] \ln\left(\frac{Q^2}{m^2}\right) \nonumber\\ && -\frac{1}{4}\beta_{0,Q}\zeta_2\Delta\gamma_{qq}^{(0)} +\hat{c}_{q,g_1,Q}^{(2), {\sf NS}} +\Delta a_{qq,Q}^{(2), {\sf NS}} \Biggr\}~.\label{POLLNSNLO} \end{eqnarray} $c_{i,g_1}^{(k)}$ are the $k$th order non--logarithmic terms of the polarized coefficient functions. 
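The ${\sf LO}$ relation (\ref{POLHgLO}) can be cross--checked in low integer Mellin moments against the $x$--space expression (\ref{eq1new}), using the explicit forms of $\Delta \hat{\gamma}_{qg}^{(0)}$ and $c_{g,g_1}^{(1)}$ quoted in Eqs.~(\ref{eq11}, \ref{eq11a}) below. A small symbolic sketch of this check (illustrative only; sympy assumed):

```python
# Cross-check: int_0^1 dtau tau^(N-1) H^(1),as_{g,g1}(tau)
#            = -1/2 * Delta gamma^(0)_qg * L + c^(1)_{g,g1},  L = ln(Q^2/m^2)
import sympy as sp

tau, L = sp.symbols('tau L', positive=True)
TF = sp.Rational(1, 2)

# asymptotic LO coefficient in x-space, cf. Eq. (eq1new)
H_as = 4*TF*((3 - 4*tau) - (1 - 2*tau)*(L + sp.log(1 - tau) - sp.log(tau)))

def S1(n):                                    # harmonic sum S_1(n)
    return sum(sp.Rational(1, j) for j in range(1, n + 1))

for n in range(1, 5):
    moment = sp.integrate(tau**(n - 1) * H_as, (tau, 0, 1))
    gamma0_qg = -8*TF*sp.Rational(n - 1)/(n*(n + 1))               # Eq. (eq11)
    c1 = -4*TF*sp.Rational(n - 1)/(n*(n + 1))*(S1(n) + sp.Rational(n - 1, n))
    assert sp.simplify(moment - (-gamma0_qg*L/2 + c1)) == 0
```

At $N=1$ both sides vanish, reflecting the sum rule (\ref{eq2as}).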
As has been described in \cite{Buza:1996xr}, the relations (\ref{POLHgNLO})--(\ref{POLLNSNLO}) hold if one uses the same scheme for the description of $\gamma_5$ in dimensional regularization for the massive OMEs and the light flavor Wilson coefficients. This is the case for the massive OMEs as calculated in \cite{Buza:1996xr}, to which we refer, and the light flavor Wilson coefficients as calculated in Ref.~\cite{Zijlstra:1993shxZijlstra:1993she1xZijlstra:1993she2}. \subsubsection{$\Delta A_{qq,Q}^{(2), {\sf NS}}$} \label{sec-P4} The non--singlet operator matrix element $\Delta A_{qq,Q}^{(2), {\sf NS}}$ has to be the same as in the unpolarized case due to the Ward--Takahashi identity, \cite{Ward:1950xp,*Takahashi:1957xn}. Since it is obtained as a zero--momentum insertion on a graph for the transition $\langle p| \rightarrow |p \rangle$, one may write it equivalently in terms of the momentum derivative of the self--energy. The latter is independent of the operator insertion and therefore yields the same result for the insertions $/\!\!\!\! \Delta (\Delta.p)^{N-1}$ and $/\!\!\!\! \Delta \gamma_5 (\Delta.p)^{N-1}$. Hence, $\Delta A_{qq,Q}^{(2), {\sf NS}}$ reads, cf. Eq.~(\ref{Aqq2NSQMSren}), \begin{eqnarray} \Delta A_{qq,Q}^{(2), {\sf NS}}\left(N,\frac{m^2}{\mu^2}\right) &&\hspace{-5mm}= A_{qq,Q}^{(2), {\sf NS}}\left(N,\frac{m^2}{\mu^2}\right) \nonumber \\ &&\hspace{-5mm}= \frac{\beta_{0,Q} \gamma_{qq}^{(0)}}{4} \ln^2\left(\frac{m^2}{\mu^2}\right) +\frac{\hat{\gamma}_{qq}^{(1), {\sf NS}}}{2} \ln\left(\frac{m^2}{\mu^2}\right) + a_{qq,Q}^{(2), {\sf NS}} - \frac{\gamma_{qq}^{(0)}}{4} \beta_{0,Q}\zeta_2~, \nonumber \\ \end{eqnarray} where the constant term in $\varepsilon$ of the unrenormalized result, Eq.~(\ref{Ahhhqq2NSQ}), is given in Eq.~(\ref{aqq2NSQ}) and the $O(\varepsilon)$--term in Eq.~(\ref{aqq2NSQbar}).
\subsubsection{$\Delta A_{Qg}^{(2)}$} \label{sec-P3} To calculate the OME $\Delta A_{Qg}$ up to $O(a_s^2)$, the Dirac matrix $\gamma_5$ is represented in $D = 4 + \varepsilon$ dimensions via, \cite{Buza:1996xr,'tHooft:1972fi,Akyeampong:1973xixAkyeampong:1973vkxAkyeampong:1973vj,*Breitenlohner:1976te}, \begin{eqnarray} /\!\!\!\! \Delta \gamma^5 &=& \frac{i}{6}\varepsilon_{\mu\nu\rho\sigma}\Delta^{\mu} \gamma^{\nu}\gamma^{\rho}\gamma^{\sigma}~. \label{gamma5} \end{eqnarray} The Levi--Civita symbol will be contracted later with a second Levi--Civita symbol emerging in the general expression for the Green's function, cf. Eq.~(\ref{omeGluOpQ}), \begin{eqnarray} \Delta \hat{G}^{ab}_{Q,\mu\nu}&=& \Delta~\hat{\hspace*{-1mm}\hat{A}}_{Qg} \Bigl(\frac{\hat{m}^2}{\mu^2},\varepsilon,N\Bigr) \delta^{ab}(\Delta \cdot p)^{N-1} \varepsilon_{\mu\nu\alpha\beta}\Delta^{\alpha}p^{\beta}~, \label{greensingpol} \end{eqnarray} by using the following relation in $D$--dimensions, \cite{Itzykson:1980rh}, \begin{eqnarray} \varepsilon_{\mu\nu\rho\sigma}\varepsilon^{\alpha\lambda\tau\gamma}&=& -{\sf Det} \left[g^{\beta}_{\omega}\right]~, \label{levidet} \\ &&\beta=\alpha,\lambda,\tau,\gamma~,\quad \omega=\mu,\nu,\rho,\sigma~. \nonumber \end{eqnarray} In particular, antisymmetry relations of the Levi--Civita tensor or the relation $\gamma_5^2 = {\bf 1}$, holding in four dimensions, are not used. The projector for the gluonic OME then reads \begin{eqnarray} \Delta~\hat{\hspace*{-1mm}\hat{A}}_{Qg}&=& \frac{\delta^{ab}}{N_c^2-1} \frac{1}{(D-2)(D-3)} (\Delta.p)^{-N-1}\varepsilon^{\mu\nu\rho\sigma} \Delta \hat{G}^{ab}_{Q,\mu\nu} \Delta_{\rho}p_{\sigma}\label{projecsing}~. \end{eqnarray} In the following, we will present the results for the operator matrix element using the above prescription for $\gamma_5$.
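In four dimensions, relation (\ref{levidet}) reduces to the familiar determinant identity for the product of two Levi--Civita tensors. A brute--force check over all index values (a sketch; the conventions $\varepsilon^{0123}=+1$ and metric ${\rm diag}(1,-1,-1,-1)$, i.e. $\varepsilon_{0123}=-1$, are assumed here):

```python
# Check eps_{mu nu rho sigma} eps^{alpha lambda tau gamma} = -Det[delta^beta_omega]
# in D = 4, for all 4^8 index combinations.
from itertools import permutations, product

def perm_sign(idx):
    # sign of idx as a permutation of (0,1,2,3); 0 if an index repeats
    if len(set(idx)) < 4:
        return 0
    sign, lst = 1, list(idx)
    for i in range(4):                    # bubble sort, counting swaps
        for j in range(3):
            if lst[j] > lst[j + 1]:
                lst[j], lst[j + 1] = lst[j + 1], lst[j]
                sign = -sign
    return sign

def eps_up(idx):                          # epsilon^{...}, eps^{0123} = +1
    return perm_sign(idx)

def eps_dn(idx):                          # epsilon_{...} = -epsilon^{...}
    return -perm_sign(idx)

PERMS = [(p, perm_sign(p)) for p in permutations(range(4))]

def det4(m):                              # determinant via the permutation sum
    return sum(s * m[0][p[0]] * m[1][p[1]] * m[2][p[2]] * m[3][p[3]]
               for p, s in PERMS)

for lower in product(range(4), repeat=4):        # mu, nu, rho, sigma
    for upper in product(range(4), repeat=4):    # alpha, lambda, tau, gamma
        delta = [[1 if b == w else 0 for w in lower] for b in upper]
        assert eps_dn(lower) * eps_up(upper) == -det4(delta)
```

In $D$ dimensions the right--hand side of (\ref{levidet}) serves as the definition of the contraction, without recourse to four--dimensional identities such as $\gamma_5^2 = {\bf 1}$.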
This representation allows a direct comparison to Ref.~\cite{Buza:1996xr} despite the fact that in this scheme even some of the anomalous dimensions are not those of the $\overline{\rm MS}$--scheme. We will discuss operator matrix elements for which only mass renormalization was carried out, cf. Section~\ref{SubSec-RENMa}. Due to the crossing relations of the forward Compton amplitude corresponding to the polarized case, only odd moments contribute. Therefore the overall factor \begin{eqnarray} \frac{1}{2} \left[1 - (-1)^N\right],~~N~\in~{\mathbb{N}}, \end{eqnarray} is implied in the following. To obtain the results in $x$--space the analytic continuation to complex values of $N$ can be performed starting from the odd integers. The $O(a_s)$ calculation is straightforward \begin{eqnarray} \Delta~\hat{\hspace*{-1mm}\hat{A}}^{(1)}_{Qg} &=& \left(\frac{m^2}{\mu^2}\right)^{\varepsilon/2} \left[\frac{1}{\varepsilon}+\frac{\zeta_2}{8}\varepsilon +\frac{\zeta_3}{24}\varepsilon^2 \right] ~ \Delta \hat{\gamma}_{qg}^{(0)} + O(\varepsilon^3) \\ &=& \left(\frac{m^2}{\mu^2}\right)^{\varepsilon/2} \left[\frac{1}{\varepsilon} \Delta \hat{\gamma}_{qg}^{(0)} +\Delta a_{Qg}^{(1)} +\varepsilon\Delta \overline{a}_{Qg}^{(1)} +\varepsilon^2\Delta\overline{\overline{a}}_{Qg}^{(1)} \right] + O(\varepsilon^3)~. \end{eqnarray} The matrix element contains the leading order anomalous dimension $\Delta \hat{\gamma}_{qg}^{(0)}$, \begin{eqnarray} \Delta A_{Qg}^{(1)}=\frac{1}{2} \Delta \hat{\gamma}_{qg}^{(0)} \ln\left(\frac{m^2}{\mu^2}\right)~, \end{eqnarray} where \begin{eqnarray} \label{eq11} \Delta \hat{\gamma}_{qg}^{(0)}&=&-8 T_F \frac{N-1}{N(N+1)}~. \end{eqnarray} The leading order polarized Wilson coefficient $c_{g,g_1}^{(1)}$ reads,~\cite{Bodwin:1989nz,Vogelsang:1990ug,Zijlstra:1993shxZijlstra:1993she1xZijlstra:1993she2}, \begin{eqnarray} \label{eq11a} c_{g,g_1}^{(1)} &=& -4 T_F\frac{N-1}{N(N+1)} \left[S_1+\frac{N-1}{N}\right]~.
\end{eqnarray} The Mellin transform of Eq.~(\ref{eq1new}) then yields the same expression as one obtains from Eq.~(\ref{POLHgLO}), \begin{eqnarray} H_{g,g_1}^{(1),\rm as}\left(N,\frac{m^2}{Q^2}\right)= \left[ -\frac{1}{2} \Delta \hat{\gamma}_{qg}^{(0)} \ln\left(\frac{Q^2}{m^2}\right) +c_{g,g_1}^{(1)}\right]~, \end{eqnarray} for which the proportionality \begin{eqnarray} H_{g, g_1}^{(1),\rm as}\left(N,\frac{m^2}{Q^2}\right) \propto (N-1)~ \end{eqnarray} holds, leading to a vanishing first moment. At the $2$--loop level, we express the operator matrix element $\Delta~\hat{\hspace*{-1mm}\hat{A}}_{Qg}^{(2)}$, after mass renormalization, in terms of anomalous dimensions, cf. \cite{Buza:1995ie,Bierenbaum:2007qe,Bierenbaum:2007dm,Bierenbaum:2006mq,Bierenbaum:2007rg,Bierenbaum:2008tm}, by \begin{eqnarray} \label{eq30} \Delta~\hat{\hspace*{-1mm}\hat{A}}_{Qg}^{(2)}&=& \Bigl(\frac{m^2}{\mu^2}\Bigr)^\varepsilon \Biggl[ \frac{\Delta \hat{\gamma}^{(0)}_{qg}}{2\varepsilon^2} \Bigl\{ \Delta \gamma^{(0)}_{qq} -\Delta \gamma^{(0)}_{gg} -2\beta_0 \Bigr\} +\frac{\Delta \hat{\gamma}^{\prime (1)}_{qg}}{\varepsilon} +\Delta a^{\prime (2)}_{Qg} +\Delta \overline{a}^{\prime (2)}_{Qg}\varepsilon \Biggr] \nonumber\\ && -\frac{2}{\varepsilon}\beta_{0,Q}\Bigl(\frac{m^2}{\mu^2}\Bigr)^{\varepsilon/2} \Biggl(1+\frac{\varepsilon^2}{8}\zeta_2+\frac{\varepsilon^3}{24}\zeta_3 \Biggr)\Delta~\hat{\hspace*{-1mm}\hat{A}}^{(1)}_{Qg} +O(\varepsilon^2)~. \label{AhhhQg2polD} \end{eqnarray} The remaining ${\sf LO}$ anomalous dimensions are \begin{eqnarray} \Delta \gamma_{qq}^{(0)} \!&=&\! -C_F\Biggl(-8S_1+2\frac{3N^2+3N+2}{N(N+1)}\Biggr)~, \label{eq42}\\ \Delta \gamma_{gg}^{(0)}\!&=&\!-C_A\Biggl( -8S_1 +2\frac{11N^2+11N+24}{3N(N+1)} \Biggr) +\frac{8}{3}T_Fn_f~.
\label{eq38} \end{eqnarray} The renormalized expression in the ${\sf MOM}$--scheme is given by \begin{eqnarray} \Delta A_{Qg}^{\prime (2), \tiny{\mbox{MOM}}} &=& \frac{\Delta \hat{\gamma}^{(0)}_{qg}}{8} \Bigl[ \Delta \gamma^{(0)}_{qq} -\Delta \gamma^{(0)}_{gg} -2\beta_0 \Bigr] \ln^2\left(\frac{m^2}{\mu^2}\right) +\frac{\Delta \hat{\gamma}^{\prime (1)}_{qg}}{2} \ln\left(\frac{m^2}{\mu^2}\right) \nonumber\\ && +\left( \Delta \gamma^{(0)}_{gg} -\Delta \gamma^{(0)}_{qq} +2\beta_0\right ) \frac{\Delta \hat{\gamma}_{qg}^{(0)}}{8}\zeta_2 +\Delta a^{\prime (2)}_{Qg}~. \label{AQg2polD} \end{eqnarray} The ${\sf LO}$ anomalous dimensions which enter the double pole term in Eq.~(\ref{AhhhQg2polD}) and the $\ln^2(m^2/\mu^2)$ term in Eq.~(\ref{AQg2polD}), respectively, are scheme--independent. This is not the case for the remaining terms, which depend on the particular scheme we adopted in Eqs.~(\ref{levidet},~\ref{gamma5}) and are therefore denoted by a prime. The ${\sf NLO}$ anomalous dimension we obtain is given by \begin{eqnarray} \label{eq39} \Delta \hat{\gamma}_{qg}^{\prime (1)}\!&=&\! -T_FC_F\Biggl( -16\frac{N-1} {N(N+1)}S_2 +16\frac{N-1} {N(N+1)}S_1^2 -32\frac{N-1} {N^2(N+1)}S_1 \nonumber\\ &&\! +8\frac{(N-1)(5N^4+10N^3+8N^2+7N+2)} {N^3(N+1)^3} \Biggr) \nonumber\\ &&\! +T_FC_A\Biggl( 32\frac{N-1} {N(N+1)}\beta' +16\frac{N-1} {N(N+1)}S_2 +16\frac{N-1} {N(N+1)}S_1^2 \nonumber\\ &&\! -16\frac{N-1} {N(N+1)}\zeta_2 -\frac{64S_1} {N(N+1)^2} -16\frac{N^5+N^4-4N^3+3N^2-7N-2} {N^3(N+1)^3} \Biggr)~.\nonumber\\ \end{eqnarray} It differs from the result in the $\overline{\sf MS}$--scheme, \cite{Mertig:1995ny,Vogelsang:1995vh,*Vogelsang:1996im}, by a finite renormalization. This is due to the fact that we contracted the Levi--Civita symbols in $D$ dimensions. The correct ${\sf NLO}$ anomalous dimension is obtained by \begin{eqnarray} \Delta \hat{\gamma}_{qg}^{(1)}=\Delta \hat{\gamma}_{qg}^{\prime (1)} +64T_FC_F\frac{N-1}{N^2(N+1)^2}~.
\label{Pqg1r} \end{eqnarray} In an earlier version of Ref.~\cite{Zijlstra:1993shxZijlstra:1993she1xZijlstra:1993she2}, $\Delta \hat{\gamma}_{qg}^{\prime (1)}$ was used as the anomalous dimension departing from the $\overline{\sf MS}$ scheme. Therefore, in Ref.~\cite{Buza:1996xr} the finite renormalization (\ref{Pqg1r}), as the corresponding one for $c_{g,g_1}^{(2)}$, \cite{Zijlstra:1993shxZijlstra:1993she1xZijlstra:1993she2}, was not used for the calculation of $\Delta A_{Qg}^{(2)}$. For the higher order terms in $\varepsilon$ in Eq. (\ref{AhhhQg2polD}) we obtain \begin{eqnarray} \Delta a_{Qg}^{\prime (2)}&=& -T_F C_F\Biggl\{ \frac{4(N-1)}{3N(N+1)}\Bigl(-4S_3 +S^3_1 +3S_1S_2 +6S_1\zeta_2 \Bigr) -\frac{4(3N^2+3N-2)S^2_1} {N^2(N+1)(N+2)} \nonumber\\ &&\hspace{-15mm} -4\frac{N^4+17N^3+43N^2+33N+2} {N^2(N+1)^2(N+2)}S_2 -2\frac{(N-1)(3N^2+3N+2)} {N^2(N+1)^2}\zeta_2 \nonumber\\ &&\hspace{-15mm} -4\frac{N^3-2N^2-22N-36} {N^2(N+1)(N+2)}S_1 +\frac{2P_1} {N^4(N+1)^4(N+2)} \Biggr\} \nonumber\\ &&\hspace{-15mm} -T_FC_A\Biggl\{ 4\frac{N-1}{3N(N+1)}\Bigl( 12\mbox{\rm\bf M}\left[\frac{\mbox{Li}_2(x)}{1+x}\right](N+1) +3\beta'' -8S_3 -S^3_1 -9S_1S_2 \nonumber\\ &&\hspace{-15mm} -12S_1\beta' -12\beta\zeta_2 -3\zeta_3 \Bigr) -16\frac{N-1} {N(N+1)^2}\beta' +4\frac{N^2+4N+5} {N(N+1)^2(N+2)}S^2_1 \nonumber\\ &&\hspace{-15mm} +4\frac{7N^3+24N^2+15N-16} {N^2(N+1)^2(N+2)}S_2 +8\frac{(N-1)(N+2)} {N^2(N+1)^2}\zeta_2 \nonumber\\ &&\hspace{-15mm} +4\frac{N^4+4N^3-N^2-10N+2} {N(N+1)^3(N+2)}S_1 -\frac{4P_2} {N^4(N+1)^4(N+2)} \Biggr\}~, \label{aQg2t} \\ \Delta \overline{a}_{Qg}^{\prime (2)}&=& T_F C_F \Biggl\{ \frac{N-1} {N(N+1)} \Bigl( 16S_{2,1,1} -8S_{3,1} -8S_{2,1}S_1 +3S_4 -\frac{4}{3}S_3S_1 -\frac{1}{2}S^2_2 -\frac{1}{6}S^4_1 \nonumber\\&&\hspace{-15mm} -\frac{8}{3}S_1\zeta_3 -S_2S^2_1 +2S_2\zeta_2 -2S^2_1\zeta_2 \Bigr) -8\frac{S_{2,1}}{N^2} +\frac{3N^2+3N-2} {N^2(N+1)(N+2)} \Bigl( 2S_2S_1 +\frac{2}{3}S^3_1 \Bigl) \nonumber\\&&\hspace{-15mm} +2\frac{3N^4+48N^3+123N^2+98N+8} 
{3N^2(N+1)^2(2+N)}S_3 +\frac{4(N-1)} {N^2(N+1)}S_1\zeta_2 \nonumber\\&&\hspace{-15mm} +\frac{2}{3} \frac{(N-1)(3N^2+3N+2)} {N^2(N+1)^2}\zeta_3 +\frac{P_3S_2} {N^3(N+1)^3(N+2)} +\frac{N^3-6N^2-22N-36} {N^2(N+1)(N+2)}S^2_1 \nonumber\\&&\hspace{-15mm} +\frac{P_4\zeta_2} {N^3(N+1)^3} -2\frac{2N^4-4N^3-3N^2+20N+12} {N^2(N+1)^2(N+2)}S_1 +\frac{P_5} {N^5(N+1)^5(N+2)} \Biggr\} \nonumber\\&&\hspace{-15mm} + T_F C_A \Biggl\{ \frac{N-1} {N(N+1)} \Bigl( 16S_{-2,1,1} -4S_{2,1,1} -8S_{-3,1} -8S_{-2,2} -4S_{3,1} +\frac{2}{3}\beta''' \nonumber\\&&\hspace{-15mm} -16S_{-2,1} S_1 -4\beta'' S_1 +8\beta' S_2 +8 \beta' S^2_1 +9S_4 +\frac{40}{3} S_3 S_1 +\frac{1}{2}S^2_2 +5 S_2 S^2_1 +\frac{1}{6} S^4_1 \nonumber\\&&\hspace{-15mm} +4\zeta_2 \beta' -2\zeta_2 S_2 -2\zeta_2 S^2_1 -\frac{10}{3} S_1\zeta_3 -\frac{17}{5}\zeta_2^2 \Bigr) -\frac{N-1} {N(N+1)^2} \Bigl( 16 S_{-2,1} +4\beta'' -16 \beta' S_1 \Bigr) \nonumber\\&&\hspace{-15mm} -\frac{16}{3}\frac{N^3+7N^2+8N-6} {N^2(N+1)^2(N+2)}S_3 +\frac{2(3N^2-13)S_2S_1} {N(N+1)^2(N+2)} -\frac{2(N^2+4N+5)} {3N(N+1)^2(N+2)}S^3_1 \nonumber\\&&\hspace{-15mm} -\frac{8\zeta_2S_1} {(N+1)^2} -\frac{2}{3} \frac{(N-1)(9N+8)} {N^2(N+1)^2}\zeta_3 -\frac{8(N^2+3)} {N(N+1)^3}\beta' -\frac{P_6S_2} {N^3(N+1)^3(N+2)} \nonumber\\&&\hspace{-15mm} -\frac{N^4+2N^3-5N^2-12N+2} {N(N+1)^3(N+2)}S^2_1 -\frac{2P_7\zeta_2} {N^3(N+1)^3} +\frac{2P_8S_1} {N(N+1)^4(N+2)} \nonumber\\&&\hspace{-15mm} -\frac{2P_9} {N^5(N+1)^5(N+2)} \Biggr\}~, \label{aQg2tbar} \end{eqnarray} with the polynomials \begin{eqnarray} P_1 &=& 4N^8+12N^7+4N^6-32N^5-55N^4-30N^3-3N^2-8N-4~, \\ P_2 &=& 2N^8+10N^7+22N^6+36N^5+29N^4+4N^3+33N^2+12N+4~,\\ P_3 &=&3N^6+30N^5+107N^4+124N^3+48N^2+20N+8~, \\ P_4 &=&(N-1)(7N^4+14N^3+4N^2-7N-2)~, \\ P_5 &=&8N^{10}+24N^9-11N^8-160N^7-311N^6-275N^5 -111N^4-7N^3 \nonumber\\ && +11N^2+12N+4 ~, \\ P_6 &=&N^6+18N^5+63N^4+84N^3+30N^2-64N-16~, \\ P_7 &=&N^5-N^4-4N^3-3N^2-7N-2~, \\ P_8 &=&2N^5+10N^4+29N^3+64N^2+67N+8~, \\ P_9 &=&4N^{10}+22N^9+45N^8+36N^7-11N^6 -15N^5+25N^4-41N^3 
\nonumber\\ && -21N^2-16N-4~. \end{eqnarray} The Mellin transform in Eq.~(\ref{aQg2t}) is given in Eq. (\ref{SM21ANCONT}) in terms of harmonic sums. As a check, we calculated several lower moments ($N=1\ldots 9$) of each individual diagram contributing to $A_{Qg}^{(2)}$~\footnote{These are shown in Figure 2 of Ref.~\cite{Buza:1995ie}.} using the Mellin--Barnes method, \cite{Czakon:2005rk,Bierenbaum:2007dm}. In Table \ref{table:MBcheckpol}, we present the numerical results we obtain for the moments $N = 3,7$ of the individual diagrams. We agree with the results obtained for general values of $N$. The contributions from the individual diagrams are given in \cite{Bierenbaum:prep1}. Our results up to $O(\varepsilon^0)$, Eqs. (\ref{AhhhQg2polD}, \ref{aQg2t}), agree with the results presented in \cite{Buza:1996xr}, which we thereby confirm for the first time. Eq.~(\ref{aQg2tbar}) is a new result. In this calculation, extensive use was made of the representation of the Feynman-parameter integrals in terms of generalized hypergeometric functions, cf. Section~\ref{Sec-2L}. The infinite sums, which occur in the polarized calculation, are largely the same as in the unpolarized case, \cite{Bierenbaum:2007qe,Bierenbaum:2007dm,Bierenbaum:2006mq,Bierenbaum:2007rg,Bierenbaum:2008tm}. The structure of the result for the higher order terms in $\varepsilon$ is the same as in the unpolarized case as well, see Eq.~(\ref{aQg2}) and the following discussion. In particular, the structural relations between the finite harmonic sums, \cite{Blumlein:2004bb,Blumlein:2007dj,Blumlein:2009ta,Blumlein:2009fz}, allow one to express $\Delta a_{Qg}^{\prime (2)}$ by only two basic Mellin transforms, $S_1$ and $S_{-2,1}$. This has to be compared to the $24$ functions needed in Ref.~\cite{Buza:1996xr} to express the constant term in $z$--space. Thus we arrive at a more compact representation.
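The reduction to few basic Mellin transforms rests on the algebraic (quasi--shuffle) relations among the finite harmonic sums, \cite{Blumlein:2004bb}. A small illustration in exact rational arithmetic (the standard definition $S_{a_1,\ldots,a_m}(N)=\sum_{k=1}^{N}({\rm sign}(a_1))^k k^{-|a_1|} S_{a_2,\ldots,a_m}(k)$ is assumed):

```python
# Nested harmonic sums and an instance of their quasi-shuffle (product) algebra.
from fractions import Fraction

def S(idx, N):
    # S_{a1,...,am}(N); a negative index a contributes a sign factor (-1)^k
    if not idx:
        return Fraction(1)
    a, rest = idx[0], idx[1:]
    return sum(((-1)**k if a < 0 else 1) * Fraction(1, k**abs(a)) * S(rest, k)
               for k in range(1, N + 1))

# product relation for single sums: S_a S_b = S_{a,b} + S_{b,a} - S_{a+b}
for N in range(1, 9):
    for a, b in [(1, 1), (1, 2), (2, 3)]:
        assert S([a], N) * S([b], N) == S([a, b], N) + S([b, a], N) - S([a + b], N)

print(S([-2, 1], 4))   # one of the two basic sums needed for Delta a'^(2)_Qg
```

Relations of this type, together with the structural relations of \cite{Blumlein:2009ta,Blumlein:2009fz}, reduce the number of independent functions needed in the final result.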
$\Delta \overline{a}_{Qg}^{\prime (2)}$ depends on the six sums $S_1(N), S_{\pm 2,1}(N), S_{-3,1}(N), S_{\pm 2,1,1}(N)$, after applying the structural relations. The $O(\varepsilon^0)$ term has the same complexity as the 2--loop anomalous dimensions, whereas the complexity of the $O(\varepsilon)$ term corresponds to the level observed for 2--loop Wilson coefficients and other hard scattering processes which depend on a single scale, cf.~\cite{Blumlein:2005im,*Blumlein:2006rr,Dittmar:2005ed}. \subsubsection{$\Delta A_{Qq}^{(2), {\sf PS}}$} \label{sec-P5} The operator matrix element $\Delta A_{Qq}^{(2), {\sf PS}}$ is obtained from the diagrams shown in Figure~3 of Ref.~\cite{Buza:1995ie}. In this calculation, we did not adopt any specific scheme for $\gamma_5$, but calculated the corresponding integrals without performing any traces or (anti)commuting $\gamma_5$. \begin{table}[H] \begin{center} \includegraphics[angle=0, width=10.0cm]{picmain25.eps} \end{center} \begin{center} \caption{\sf Numerical values for moments of individual diagrams of $\Delta~\hat{\hspace*{-1mm}\hat{A}}_{Qg}^{(2)}$.} \label{table:MBcheckpol} \small \end{center} \normalsize \end{table} \noindent The result can then be represented in terms of three bi--spinor structures \begin{eqnarray} \label{eq12} C_1(\varepsilon)&=&\frac{1}{\Delta.p} Tr\Bigl\{/\!\!\!\! \Delta~/\!\!\!\! p\gamma^{\mu}\gamma^{\nu}\gamma_5\Bigr\} /\!\!\!\! \Delta\gamma_{\mu}\gamma_{\nu} =\frac{24}{3+\varepsilon}/\!\!\!\! \Delta \gamma_5 \\ \label{eq13} C_2(\varepsilon)&=& Tr\Bigl\{/\!\!\!\! \Delta\gamma^{\mu}\gamma^{\nu}\gamma^{\rho}\gamma_5\Bigr\} \gamma_{\mu}\gamma_{\nu}\gamma_{\rho} =24 /\!\!\!\! \Delta \gamma_5 \\ C_3(\varepsilon)&=&\frac{1}{m^2} Tr\Bigl\{/\!\!\!\! \Delta~/\!\!\!\! p\gamma^{\mu}\gamma^{\nu}\gamma_5\Bigr\} /\!\!\!\! p \gamma_{\mu}\gamma_{\nu}~.\label{factors} \end{eqnarray} These are placed between the states $\langle p| \ldots |p \rangle$, with \begin{eqnarray} /\!\!\!\! 
p |p \rangle &=& m_0 |p \rangle \end{eqnarray} and $m_0$ the light quark mass. Therefore, the contribution due to $C_3(\varepsilon)$ vanishes in the limit $m_0 \rightarrow 0$. The results for $C_{1,2}(\varepsilon)$ in the r.h.s. of Eqs.~(\ref{eq12},~\ref{eq13}) can be obtained by applying the projector \begin{eqnarray} \frac{-3}{2(D-1)(D-2)(D-3)}{\sf Tr}[~/\!\!\!\! p \gamma_5~C_i] \end{eqnarray} and performing the trace in $D$--dimensions using relations (\ref{gamma5}, \ref{levidet}). Note that the result in $4$--dimensions is recovered by setting $\varepsilon=0$. One obtains from the truncated $2$--loop Green's function $\Delta~G_{Qq}^{ij,(2)}$ \begin{eqnarray} \Delta~G_{Qq}^{ij,(2)} &= & \Delta~\hat{\hspace*{-1mm}\hat{A}}^{(2), {\sf PS}}_{Qq}~/\!\!\!\! \Delta \gamma_5 \delta^{ij}(\Delta.p)^{N-1}~, \label{GreenfPS} \label{eq14} \end{eqnarray} the following result for the massive OME \begin{eqnarray} \label{eq32} \Delta~\hat{\hspace*{-1mm}\hat{A}}^{(2),{\sf PS}}_{Qq}/\!\!\!\! \Delta \gamma_5&=& \Bigl(\frac{m^2}{\mu^2}\Bigr)^{\varepsilon} C(\varepsilon) 8(N+2) \Biggl\{ -\frac{1}{\varepsilon^2}\frac{2(N-1)} {N^2(N+1)^2} +\frac{1}{\varepsilon}\frac{N^3+2N+1} {N^3(N+1)^3} \nonumber\\&&\hspace{-15mm} -\frac{(N-1)(\zeta_2+2S_2)} {2N^2(N+1)^2} -\frac{4N^3-4N^2-3N-1} {2N^4(N+1)^4} +\varepsilon\Bigl[ \frac{(N^3+2N+1) (\zeta_2+2S_2)} {4N^3(N+1)^3} \nonumber\\ &&\hspace{-15mm} -\frac{(N-1)(\zeta_3+3S_3)} {6N^2(N+1)^2} +\frac{N^5-7N^4+6N^3+7N^2+4N+1} {4N^5(N+1)^5} \Bigr] \Biggr\}+O(\varepsilon^2)~, \label{AhhhQq2PSpol} \end{eqnarray} where \begin{eqnarray} C(\varepsilon)&=&\frac{C_1(\varepsilon)\cdot(N-1)+C_2(\varepsilon)}{8(N+2)} = /\!\!\!\! \Delta \gamma_5 \frac{3(N+2+\varepsilon)}{(N+2)(3+\varepsilon)} \nonumber\\ &=&1+\frac{N-1}{3(N+2)}\Bigl( -\varepsilon +\frac{\varepsilon^2}{3} -\frac{\varepsilon^3}{9} \Bigr)+O(\varepsilon^4)~. 
\end{eqnarray} Comparing our result, Eq.~(\ref{AhhhQq2PSpol}), to the result obtained in \cite{Buza:1996xr}, one notices that there the factor $C(\varepsilon)$ was calculated in $4$--dimensions, i.e. $C(\varepsilon)=1$. Therefore, we do the same and obtain \begin{eqnarray} \Delta~\hat{\hspace*{-1mm}\hat{A}}^{(2), {\sf PS}}_{Qq}&=& S_\varepsilon^2 \Bigl(\frac{m^2}{\mu^2}\Bigr)^{\varepsilon} \Biggl[ -\frac{\Delta\hat{\gamma}^{(0)}_{qg} \Delta\gamma^{(0)}_{gq}}{2\varepsilon^2} +\frac{\Delta \hat{\gamma}^{(1), {\sf PS}}_{qq}}{2\varepsilon} + \Delta a^{(2), {\sf PS}}_{Qq} + \Delta \overline{a}^{(2), {\sf PS}}_{Qq}\varepsilon \Biggr]~, \nonumber \\ \label{AQq2f} \end{eqnarray} with \begin{eqnarray} \label{eq41} \Delta \gamma_{gq}^{(0)}&=&-4C_F\frac{N+2}{N(N+1)}~,\\ \Delta \hat{\gamma}_{qq}^{(1), {\sf PS}}&=& 16T_FC_F\frac{(N+2)(N^3+2N+1)}{N^3(N+1)^3} \label{splitpolPS}~, \\ \Delta a^{(2), {\sf PS}}_{Qq}&=& -\frac{(N-1)(\zeta_2+2S_2)} {2N^2(N+1)^2} -\frac{4N^3-4N^2-3N-1} {2N^4(N+1)^4} ~, \\ \Delta\overline{a}^{(2), {\sf PS}}_{Qq}&=& -\frac{(N-1)(\zeta_3+3S_3)} {6N^2(N+1)^2} +\frac{(N^3+2N+1) (\zeta_2+2S_2)} {4N^3(N+1)^3} \nonumber\\ && +\frac{N^5-7N^4+6N^3+7N^2+4N+1} {4N^5(N+1)^5}~. \label{aQq2PStbar} \end{eqnarray} Here, we agree up to $O(\varepsilon^0)$ with Ref.~\cite{Buza:1996xr}, and Eq.~(\ref{aQq2PStbar}) is a new result. Note that Eq.~(\ref{splitpolPS}) is already the $\overline{\sf MS}$ anomalous dimension as obtained in Refs.~\cite{Mertig:1995ny,Vogelsang:1995vh,*Vogelsang:1996im}. Therefore any additional scheme dependence due to $\gamma_5$ can only be contained in the higher order terms in $\varepsilon$. For comparison, the anomalous dimension $\Delta \hat{\gamma}_{qq}^{\prime (1), {\sf PS}}$, which is obtained by calculating $C(\varepsilon)$ in $D$ dimensions, is related to the $\overline{\sf MS}$ one by \begin{eqnarray} \label{eq43} \Delta\hat{\gamma}_{qq}^{(1), {\sf PS}} &=& \Delta\hat{\gamma}_{qq}^{\prime (1), {\sf PS}} -T_FC_F\frac{16(N-1)^2}{3N^2(N+1)^2}~.
\end{eqnarray} The renormalized result becomes \begin{eqnarray} \Delta A_{Qq}^{(2), {\sf PS}}&=& -\frac{\Delta\hat{\gamma}_{qg}^{(0)}\Delta\gamma_{gq}^{(0)}}{8} \ln^2\left(\frac{m^2}{\mu^2}\right) +\frac{\Delta\hat{\gamma}_{qq}^{(1), {\sf PS}}}{2} \ln\left(\frac{m^2}{\mu^2}\right) +\Delta a_{Qq}^{(2), {\sf PS}} +\frac{\Delta \hat{\gamma}_{qg}^{(0)} \Delta \gamma_{gq}^{(0)}}{8} \zeta_2 ~. \nonumber\\ \label{eqA2} \end{eqnarray} The results in this Section constitute a partial step towards the calculation of the asymptotic heavy flavor contributions at $O(a_s^2)$ in the ${\sf \overline{MS}}$--scheme, thereby going beyond the results of Ref.~\cite{Buza:1996xr}. The same holds for the $O(a_s^2\varepsilon)$--terms, which we calculated for the first time, using the same description for $\gamma_5$ as has been done in \cite{Buza:1996xr}. The correct finite renormalization to transform to the ${\sf \overline{MS}}$--scheme remains to be worked out and will be presented elsewhere, \cite{Bierenbaum:prep1}. \newpage \section{Heavy Flavor Contributions to Transversity} \label{sec-1} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0} The transversity distribution $\Delta_T f(x,Q^2)$ is one of the three possible quarkonic twist-2 parton distributions besides the unpolarized and the longitudinally polarized quark distribution, $f(x,Q^2)$ and $\Delta f(x,Q^2)$, respectively. Unlike the latter distribution functions, it cannot be measured in inclusive deeply inelastic scattering in case of massless partons since it is chirally odd. However, it can be extracted from semi--inclusive deep-inelastic scattering (SIDIS) studying isolated meson production, \cite{Ralston:1979ys,*Jaffe:1991kpxJaffe:1991ra,Cortes:1991ja}, and in the Drell-Yan process,~\cite{Artru:1989zv,Cortes:1991ja,Collins:1992kk,*Jaffe:1993xb,*Tangerman:1994bb,*Boer:1997nt}, off transversely polarized targets~\footnote{For a review see Ref.~\cite{Barone:2001sp}.}. 
Measurements of the transversity distribution in different polarized hard scattering processes are currently being performed or are in preparation, \cite{Airapetian:2004twxAirapetian:2008sk,*Alexakhin:2005iw,*Afanasev:2007qh,*:2008dn,*Lutz:2009ff}. In the past, phenomenological models for the transversity distribution were developed based on bag-like models, chiral models, light--cone models, spectator models, and non-perturbative QCD calculations, cf. Section~8 of Ref.~\cite{Barone:2001sp}. The main characteristics of the transversity distributions are that they vanish by some power law both at small and large values of Bjorken--$x$ and exhibit a shifted bell-like shape. Recent attempts to extract the distribution from data were made in Refs.~\cite{Anselmino:2007fs,*Anselmino:2008jk}. The moments of the transversity distribution can be measured in lattice simulations, which help to constrain it ab initio; first results were given in Refs.~\cite{Aoki:1996pi,*Gockeler:1996es,*Khan:2004vw,*Diehl:2005ev,*Gockeler:2006zu,*Renner:private1,Dolgov:2002zm}. From these investigations there is evidence that the up-quark distribution is positive while the down-quark distribution is negative, with first moments between $\{0.85 \ldots 1.0\}$ and $\{-0.20 \ldots -0.24\}$, respectively. This is in qualitative agreement with phenomenological fits. Some of the processes which have been proposed to measure transversity contain $k_\perp-$ and higher twist effects, cf.~\cite{Barone:2001sp}. We will limit our considerations to the class of purely twist--2 contributions, for which the formalism to calculate the heavy flavor corrections is established, cf. Section~\ref{Sec-HQDIS}. As for the unpolarized flavor non--singlet contributions, we apply the factorization relation of the heavy flavor Wilson coefficient (\ref{LNSFAC}) in the region $Q^2 \gg m^2$.
As physical processes one may consider the SIDIS process $l N \rightarrow l' h + X$ off transversely polarized targets in which the transverse momentum of the produced final state hadron $h$ is integrated. The differential scattering cross section in case of single photon exchange reads \begin{eqnarray} \label{sidis1} \frac{d^3\sigma}{dxdydz}&=& \frac{4\pi\alpha^2}{xyQ^2} \sum_{i=q,\overline{q}}e_i^2 x \Biggl\{ \frac{1}{2}\Bigl[1+(1-y)^2\Bigr] F_i(x,Q^2) \tilde{D}_i(z,Q^2) \nonumber\\ && -(1-y)|{\bf S}_\perp||{\bf S}_{h\perp}| \cos\left(\phi_S + \phi_{S_h}\right)\Delta_T F_i(x,Q^2) \Delta_T \tilde{D}_i(z,Q^2)\Biggr\}~. \end{eqnarray} Here, in addition to the Bjorken variables $x$ and $y$, the fragmentation variable $z$ occurs. ${\bf S}_\perp$ and ${\bf S}_{h\perp}$ are the transverse spin vectors of the incoming nucleon $N$ and the measured hadron $h$. The angles $\phi_{S,S_h}$ are measured in the plane perpendicular to the $\gamma^* N$ (z--) axis between the $x$-axis and the respective vector. The transversity distribution can be obtained from Eq.~(\ref{sidis1}) for a {\sf transversely} polarized hadron $h$ by measuring its polarization. The functions $F_i, \tilde{D}_i, \Delta_T F_i, \Delta_T \tilde{D}_i$ are given by \begin{eqnarray} F_i(x,Q^2) &=& {\cal C}_i(x,Q^2) \otimes f_i(x,Q^2) \\ \tilde{D}_i(z,Q^2) &=& \tilde{{\cal C}}_i(z,Q^2) \otimes {D}_i(z,Q^2) \\ \Delta_T F_i(x,Q^2) &=& \Delta_T {\cal C}_i(x,Q^2) \otimes \Delta_T f_i(x,Q^2)\\ \Delta_T \tilde{D}_i(z,Q^2) &=& \Delta_T \tilde{{\cal C}}_i(z,Q^2) \otimes \Delta_T {D}_i(z,Q^2)~. \end{eqnarray} Here, $D_i, \Delta_T {D}_i$ are the fragmentation functions and $\tilde{{\cal C}}_i,~\Delta_T {\cal C}_i,~\Delta_T \tilde{{\cal C}}_i$ are the corresponding space- and time-like Wilson coefficients. The functions ${\cal C}_i$ are the Wilson coefficients as have been considered in the unpolarized case, cf. Sections \ref{Sec-DIS} and \ref{Sec-HQDIS}. 
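Here $\otimes$ denotes the Mellin convolution, $(C \otimes f)(x) = \int_x^1 \frac{dz}{z}\, C(z)\, f(x/z)$, which turns into an ordinary product of moments in Mellin--$N$ space. A toy symbolic sketch of this factorization (the power--law inputs are illustrative assumptions only):

```python
# Mellin convolution (C ⊗ f)(x) and its factorization in moment space.
import sympy as sp

x, z = sp.symbols('x z', positive=True)
C = z          # toy "coefficient function" (assumption, for illustration)
f = z**2       # toy "parton density" (assumption, for illustration)

# (C ⊗ f)(x) = int_x^1 dz/z C(z) f(x/z)
conv = sp.integrate(C * f.subs(z, x / z) / z, (z, x, 1))

def mellin(g, n):
    # n-th Mellin moment int_0^1 dx x^(n-1) g(x)
    return sp.integrate(x**(n - 1) * g, (x, 0, 1))

# moments of the convolution equal the product of the individual moments
for n in range(1, 6):
    assert sp.simplify(mellin(conv, n)
                       - mellin(C.subs(z, x), n) * mellin(f.subs(z, x), n)) == 0
```

This moment-space factorization is what underlies the $N$--space representations used throughout this section.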
The Wilson coefficient for transversity, $\Delta_T {\cal C}_i(x,Q^2)$, contains light-- and heavy flavor contributions, cf. Eq.~(\ref{Callsplit}), \begin{eqnarray} \Delta_T {\cal C}_i(x,Q^2) = \Delta_T C_i(x,Q^2) + \Delta_T H_i(x,Q^2)~. \end{eqnarray} $\Delta_T C_i$ denotes the light flavor transversity Wilson coefficient and $\Delta_T H_i(x,Q^2)$ the heavy flavor part. We dropped arguments of the type $n_f,~m^2,~\mu^2$ for brevity, since they can all be inferred from the discussion in Section~\ref{Sec-HQDIS}. Eq.~(\ref{sidis1}) holds for spin--$1/2$ hadrons in the final state, but the transversity distribution may also be measured in the leptoproduction process of spin--1 hadrons, \cite{Ji:1993vw}. In this case, the ${\bf P}_{h \perp}$-integrated Born cross section reads \begin{eqnarray} \frac{d^3\sigma}{dxdydz}&=&\frac{4\pi\alpha^2}{xyQ^2} \sin\left(\phi_S + \phi_{S_{LT}}\right) |{\bf S}_\perp||{S}_{LT}| (1-y) \sum_{i =q,\overline{q}} e_i^2 x \Delta_T F_i(x,Q^2) \widehat{H}_{i,1,LT}(z,Q^2)~.\nonumber\\ \label{sidis2} \end{eqnarray} Here, the polarization state of a spin--1 particle is described by a tensor with five independent components, \cite{Bacchetta:2000jk}. $\phi_{S_{LT}}$ denotes the azimuthal angle of $\vec{S}_{LT}$, with \begin{eqnarray} |S_{LT}| = \sqrt{\left(S_{LT}^x\right)^2 +\left(S_{LT}^y\right)^2}~. \end{eqnarray} $\widehat{H}_{i,1,LT}(z,Q^2)$ is a $T$- and chirally odd twist-2 fragmentation function at vanishing $k_\perp$. The process (\ref{sidis2}) has the advantage that the transverse polarization of the produced hadron can be measured from its decay products. The transversity distribution can also be measured in the transversely polarized Drell--Yan process, see Refs.~\cite{Vogelsang:1992jn,Vogelsang:1997ak,Shimizu:2005fp}. However, the SIDIS processes have the advantage that in high luminosity experiments the heavy flavor contributions can be tagged as in deep-inelastic scattering.
This is not the case for the Drell-Yan process, where the heavy flavor effects appear as inclusive radiative corrections in the Wilson coefficients only. The same argument as in Section~\ref{SubSec-HQAsym} can be applied to obtain the heavy flavor Wilson coefficients for transversity in the asymptotic limit $Q^2\gg~m^2$. Since transversity is a ${\sf NS}$ quantity, the relation is the same as in the unpolarized ${\sf NS}$ case and reads up to $O(a_s^3)$, cf. Eq.~(\ref{eqWIL1}), \begin{eqnarray} \Delta_T H^{\mbox{\small Asym}}_q(n_f+1)=&& a_s^2\Bigl[ \Delta_T A_{qq,Q}^{(2),\sf NS}(n_f+1) +\Delta_T \hat{C}_q^{(2)}(n_f) \Bigr] \nonumber \\ &+&a_s^3\Bigl[ \Delta_T A_{qq,Q}^{(3),\sf NS}(n_f+1) +\Delta_T A_{qq,Q}^{(2),\sf NS}(n_f+1) \Delta_T C_q^{(1)}(n_f+1) \nonumber\\ && \hspace{5mm} +\Delta_T\hat{C}_q^{(3)}(n_f)\Bigr]~. \label{HwcofTR} \end{eqnarray} The operator matrix elements $\Delta_T A_{qq,Q}^{(2,3),\sf NS}$ are -- as in the unpolarized case -- universal and account for all mass contributions but power corrections. The respective asymptotic heavy flavor Wilson coefficients are obtained in combination with the light flavor process--dependent Wilson coefficients~\footnote{Apparently, the light flavor Wilson coefficients for SIDIS have not yet been calculated even at $O(a_s)$, although this calculation and the corresponding soft-exponentiation should be straightforward. For the transversely polarized Drell-Yan process the $O(a_s)$ light flavor Wilson coefficients were given in \cite{Vogelsang:1997ak} and higher order terms due to soft resummation were derived in \cite{Shimizu:2005fp}.}. In the following, we will concentrate on the calculation of the massive operator matrix elements. The twist--$2$ local operator in the case of transversity has a different Lorentz--structure compared to Eqs.
(\ref{COMP1})--(\ref{COMP3}) and is given by \begin{eqnarray} \label{op2} O_{q,r;\mu, \mu_1, \ldots, \mu_N}^{\sf TR,NS} &=& i^{N-1} {\bf S} [\overline{\psi} \sigma^{\mu \mu_1} D^{\mu_2} \ldots D^{\mu_N} \frac{\lambda_r}{2} \psi] - {\rm trace~terms}~, \end{eqnarray} with $\sigma^{\nu\mu} = (i/2)\left[\gamma_\nu \gamma_\mu - \gamma_\mu \gamma_\nu \right]$ and the definition of the massive operator matrix element is the same as in Section~\ref{SubSec-HQAsym}. Since (\ref{op2}) denotes a twist--2 flavor non--singlet operator, it does not mix with other operators. After multiplying with the external source $J_N$, cf. Eq.~(\ref{Jsource}) and below, the Green's function in momentum space corresponding to the transversity operator between quarkonic states is given by \begin{eqnarray} \overline{u}(p,s) G^{ij, {\sf TR,NS}}_{\mu,q,Q} \lambda_r u(p,s)&=& J_N\bra{\overline{\Psi}_i(p)}O_{q,r;\mu, \mu_1, \ldots, \mu_N}^{{\sf TR,NS}} \ket{\Psi^j(p)}_Q~\label{GijTRNS}~. \end{eqnarray} It relates to the unrenormalized transversity OME via \begin{eqnarray} \hat{G}^{ij, {\sf TR,NS}}_{\mu,q,Q}&=& \delta_{ij}(\Delta \cdot p)^{N-1} \Bigl( \Delta_{\rho}\sigma^{\mu\rho} \Delta_T~\hat{\hspace*{-1mm}\hat{A}}_{qq,Q}^{\sf NS} \Bigl(\frac{\hat{m}^2}{\mu^2},\varepsilon,N\Bigr) +c_1 \Delta^\mu + c_2 p^\mu +c_3 \gamma^\mu p \hspace*{-2mm} / \nonumber\\ && \hspace{30mm} +c_4 \Delta \hspace*{-3mm}/~p \hspace*{-2mm}/ \Delta^\mu +c_5 \Delta \hspace*{-3mm}/~p \hspace*{-2mm}/ p^\mu \Bigr)~. \end{eqnarray} The Feynman rules for the operators multiplied with the external source are given in Appendix \ref{App-FeynRules}. 
The projection onto the massive OME is found to be \begin{eqnarray} \label{eqc3} \Delta_T~\hat{\hspace*{-1mm}\hat{A}}_{qq,Q}^{\sf NS}\Bigl(\frac{\hat{m}^2}{\mu^2},\varepsilon,N\Bigr) &=& - \frac{i\delta^{ij}}{4N_c(\Delta.p)^2 (D-2)} \Bigl\{ {\sf Tr}[ \Delta\hspace*{-3mm}/~p\hspace*{-2mm}/~ p^{\mu}\hat{G}^{ij, {\sf TR,NS}}_{\mu,q,Q}] -\Delta.p {\sf Tr}[p^{\mu}\hat{G}^{ij, {\sf TR,NS}}_{\mu,q,Q}] \nonumber\\ && \hspace{40mm} +i\Delta.p {\sf Tr}[\sigma_{\mu \rho} p^\rho \hat{G}^{ij, {\sf TR,NS}}_{\mu,q,Q}] \Bigr\}~. \end{eqnarray} Renormalization for transversity proceeds in the same manner as in the ${\sf NS}$--case. The structure of the unrenormalized expressions at the $2$-- and $3$--loop level are the same as shown in Eqs.~(\ref{Ahhhqq2NSQ},~\ref{Ahhhqq3NSQ}), if one inserts the respective transversity anomalous dimensions. The expansion coefficients of the renormalized OME then read up to $O(a_s^3)$ in the $\overline{\sf MS}$--scheme, cf. Eqs.~(\ref{Aqq2NSQMSren},~\ref{Aqq3NSQMSren}), \begin{eqnarray} \Delta_T A_{qq,Q}^{(2),{\sf NS, \overline{{\sf MS}}}}&=& \frac{\beta_{0,Q}\gamma_{qq}^{(0),\sf TR}}{4} \ln^2 \Bigl(\frac{m^2}{\mu^2}\Bigr) +\frac{\hat{\gamma}_{qq}^{(1), {\sf TR}}}{2} \ln \Bigl(\frac{m^2}{\mu^2}\Bigr) +a_{qq,Q}^{(2),{\sf TR}} -\frac{\beta_{0,Q}\gamma_{qq}^{(0),\sf TR}}{4}\zeta_2~, \nonumber \\ \label{Aqq2NSTRQMSren} \\ \Delta_T A_{qq,Q}^{(3),{\sf NS}, \overline{{\sf MS}}}&=& -\frac{\gamma_{qq}^{(0),{\sf TR}}\beta_{0,Q}}{6} \Bigl( \beta_0 +2\beta_{0,Q} \Bigr) \ln^3 \Bigl(\frac{m^2}{\mu^2}\Bigr) +\frac{1}{4} \Biggl\{ 2\gamma_{qq}^{(1),{\sf TR}}\beta_{0,Q}\nonumber \end{eqnarray} \begin{eqnarray} && -2\hat{\gamma}_{qq}^{(1),{\sf TR}} \Bigl( \beta_0 +\beta_{0,Q} \Bigr) +\beta_{1,Q}\gamma_{qq}^{(0),{\sf TR}} \Biggr\} \ln^2 \Bigl(\frac{m^2}{\mu^2}\Bigr) +\frac{1}{2} \Biggl\{ \hat{\gamma}_{qq}^{(2),{\sf TR}} \nonumber\\ && -\Bigl( 4a_{qq,Q}^{(2),{\sf TR}} -\zeta_2\beta_{0,Q}\gamma_{qq}^{(0),{\sf TR}} \Bigr)(\beta_0+\beta_{0,Q}) +\gamma_{qq}^{(0),{\sf 
TR}}\beta_{1,Q}^{(1)} \Biggr\} \ln \Bigl(\frac{m^2}{\mu^2}\Bigr) \nonumber\\&& +4\overline{a}_{qq,Q}^{(2),{\sf TR}}(\beta_0+\beta_{0,Q}) -\gamma_{qq}^{(0)}\beta_{1,Q}^{(2)} -\frac{\gamma_{qq}^{(0),{\sf TR}}\beta_0\beta_{0,Q}\zeta_3}{6} -\frac{\gamma_{qq}^{(1),{\sf TR}}\beta_{0,Q}\zeta_2}{4} \nonumber\\ \nonumber \\&& +2 \delta m_1^{(1)} \beta_{0,Q} \gamma_{qq}^{(0),{\sf TR}} +\delta m_1^{(0)} \hat{\gamma}_{qq}^{(1),{\sf TR}} +2 \delta m_1^{(-1)} a_{qq,Q}^{(2),{\sf TR}} +a_{qq,Q}^{(3),{\sf TR}}~. \label{Aqq3NSTRQMSren} \end{eqnarray} Here, $\gamma_{qq}^{(k),{\sf TR}},~\{k = 0, 1, 2\}$, denote the transversity quark anomalous dimensions at $O(a_s^{k+1})$ and $a_{qq,Q}^{(2,3),{\sf TR}}, \overline{a}_{qq,Q}^{(2),{\sf TR}}$ are the constant and $O(\varepsilon)$ terms of the massive operator matrix element at 2-- and 3--loop order, respectively, cf. the discussion in Section~\ref{Sec-REN}. At ${\sf LO}$ the transversity anomalous dimension was calculated in~\cite{Baldracchini:1980uq,*Shifman:1980dk,*Bukhvostov:1985rn,*Mukherjee:2001zx,Artru:1989zv,Blumlein:2001ca}~\footnote{The small $x$ limit of the ${\sf LO}$ anomalous dimension was calculated in \cite{Kirschner:1996jj}.}, and at ${\sf NLO}$ in~\cite{Hayashigaki:1997dn,*Kumano:1997qp,Vogelsang:1997ak}~\footnote{For calculations in the non-forward case, see \cite{Belitsky:1997rh,*Hoodbhoy:1998vm,*Belitsky:2000yn,Blumlein:2001ca}.}. At three-loop order the moments $N=1 \ldots 8$ are known, \cite{Gracey:2003yrxGracey:2003mrxGracey:2006zrxGracey:2006ah}. The 2--loop calculation for all $N$ proceeds in the same way as described in Section~\ref{Sec-2L}. We also calculated the unprojected Green's function to check the projector (\ref{eqc3}). Fixed moments at the $2$-- and $3$--loop level were calculated using ${\sf MATAD}$ as described in Section~\ref{Sec-3L}. 
From the pole terms of the unrenormalized $2$--loop OMEs, the leading and next-to-leading order anomalous dimensions $\gamma_{qq}^{(0),{\sf TR}}$ and $\hat{\gamma}_{qq}^{(1),{\sf TR}}$ can be determined. We obtain \begin{eqnarray} \gamma_{qq}^{(0),{\sf TR}} = 2 C_F \left[ -3 + 4 S_1\right], \label{gqqTR0} \end{eqnarray} and \begin{eqnarray} \hat{\gamma}_{qq}^{(1), {\sf TR}} = \frac{32}{9} C_F T_F \left[ 3 S_2 - 5 S_1 + \frac{3}{8} \right]~, \label{gqqTR1} \end{eqnarray} confirming earlier results, \cite{Hayashigaki:1997dn,*Kumano:1997qp,Vogelsang:1997ak}. The finite and $O(\varepsilon)$ contributions are given by \begin{eqnarray} a_{qq,Q}^{(2), {\sf TR}} &=& C_F T_F \Biggl\{ -\frac{8}{3} S_3 +\frac{40}{9} S_2 -\left[ \frac{224}{27} +\frac{8}{3}\zeta_2 \right] S_1 +2 \zeta_2 +{\frac{ \left( 24+73\,N+73\,{N}^{2} \right)}{18 N \left( N+1 \right) }} \Biggr\}~, \nonumber\\ \label{aqqTR2} \\ \overline{a}_{qq,Q}^{(2), {\sf TR}} &=& C_F T_F \Biggl\{ - \left[ {\frac {656}{81}}\, +{\frac {20}{9}}\, \zeta_2 +{\frac {8}{9}}\, \zeta_3 \right] S_1 +\left[{\frac {112}{27}}\, +\frac{4}{3}\, \zeta_2 \right] S_2 -{\frac {20}{9}}\, S_3 \nonumber\\ && +\frac{4}{3}\, S_4 +\frac{1}{6}\, \zeta_2 +\frac{2}{3}\, \zeta_3 +\frac{ \left( -144-48\,N+757\,{N}^{2}+1034\,{N}^{3}+517\,{N}^{4} \right) } {216 {N}^{2} \left( N+1 \right) ^{2}} \Biggr\}~. \nonumber\\ \label{aqqTR2bar} \end{eqnarray} The renormalized $2$--loop massive OME (\ref{Aqq2NSTRQMSren}) reads \begin{eqnarray} \Delta_T A_{qq,Q}^{(2),{\sf NS, \overline{{\sf MS}}}} &=& C_F T_F \Biggl\{\left[ -\frac{8}{3} S_1 + 2 \right] \ln^2\left(\frac{m^2}{\mu^2}\right) + \left[-{\frac {80}{9}} S_1 + \frac{2}{3} +\frac{16}{3} S_2 \right] \ln\left(\frac{m^2}{\mu^2}\right) \nonumber\\ && \hspace*{1cm} - \frac{8}{3} S_3 + \frac{40}{9} S_2 - \frac{224}{27} S_1 + \frac {24+73N+73{N}^{2}}{18 N \left( N+1\right) } \Biggr\}~. \end{eqnarray} Using ${\sf MATAD}$, we calculated the moments $N=1\ldots 13$ at $O(a_s^2)$ and $O(a_s^3)$. 
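The renormalized expression just quoted can be cross-checked numerically: inserting (\ref{gqqTR0}), (\ref{gqqTR1}) and (\ref{aqqTR2}) into the general formula (\ref{Aqq2NSTRQMSren}) must reproduce it, including the cancellation of the $\zeta_2$--terms. A minimal sketch at $N=2$, assuming $\beta_{0,Q} = -\frac{4}{3}T_F$ in the conventions used here:

```python
import math

# Cross-check at N = 2: inserting (gqqTR0), (gqqTR1) and (aqqTR2) into the
# general renormalization formula (Aqq2NSTRQMSren) must reproduce the explicit
# renormalized 2-loop OME quoted above; the zeta_2 terms cancel.
# Assumption: beta_{0,Q} = -4/3 T_F in the conventions used here.
CF, TF = 4.0 / 3.0, 0.5
z2 = math.pi**2 / 6.0
N = 2
S1, S2, S3 = 1.5, 1.25, 1.125               # S_k(2) = 1 + 1/2^k
R = (24 + 73 * N + 73 * N**2) / (18.0 * N * (N + 1))

g0 = 2 * CF * (-3 + 4 * S1)                 # gamma_qq^(0),TR
g1hat = 32.0 / 9.0 * CF * TF * (3 * S2 - 5 * S1 + 3.0 / 8.0)
a2 = CF * TF * (-8.0 / 3.0 * S3 + 40.0 / 9.0 * S2
                - (224.0 / 27.0 + 8.0 / 3.0 * z2) * S1 + 2 * z2 + R)
b0Q = -4.0 / 3.0 * TF
L = 0.7                                     # ln(m^2/mu^2), arbitrary value

general = b0Q * g0 / 4 * L**2 + g1hat / 2 * L + a2 - b0Q * g0 / 4 * z2
explicit = CF * TF * ((-8.0 / 3.0 * S1 + 2) * L**2
                      + (-80.0 / 9.0 * S1 + 2.0 / 3.0 + 16.0 / 3.0 * S2) * L
                      - 8.0 / 3.0 * S3 + 40.0 / 9.0 * S2
                      - 224.0 / 27.0 * S1 + R)
assert abs(general - explicit) < 1e-9
```

The value of $\ln(m^2/\mu^2)$ drops out of the comparison order by order, so any numerical value may be chosen for it.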
At the $2$--loop level, we find complete agreement with the results presented in Eqs. (\ref{gqqTR0})--(\ref{aqqTR2bar}). At $O(a_s^3)$, we also obtain $\hat{\gamma}_{qq}^{(2),{\sf TR}}$, which can be compared to the $T_F$-terms in the calculation \cite{Gracey:2003yrxGracey:2003mrxGracey:2006zrxGracey:2006ah} for $N = 1 \ldots 8$. This is the first re-calculation of these terms and we find agreement. For the moments $N = 9 \ldots 13$ this contribution to the transversity anomalous dimension is calculated for the first time. We list the anomalous dimensions in Appendix~\ref{App-Trans}. There, also the constant contributions $a_{qq,Q}^{(3),{\sf TR}}$ are given for $N=1\ldots 13$, which is a new result. Furthermore, we obtain in the $3$--loop calculation the moments $N = 1 \ldots 13$ of the complete 2--loop anomalous dimensions. These are in accordance with Refs.~\cite{Vogelsang:1997ak,Hayashigaki:1997dn,*Kumano:1997qp}. Finally, we show as examples the first moments of the $\overline{\sf MS}$--renormalized $O(a_s^3)$ massive transversity OME. Unlike the case for the vector current, the first moment does not vanish, since there is no conservation law to enforce this.
\begin{eqnarray} \Delta_T~A_{qq,Q}^{(3),{\sf NS},\overline{{\sf MS}}}(1)&=&C_FT_F\Biggl\{ \Bigl( \frac{16}{27}T_F(1-n_f) +\frac{44}{27}C_A \Bigr)\ln^3\Bigl(\frac{m^2}{\mu^2}\Bigr) +\Bigl( \frac{104}{27}T_F \nonumber\\&&\hspace{-25mm} -\frac{106}{9}C_A +\frac{32}{3}C_F \Bigr)\ln^2\Bigl(\frac{m^2}{\mu^2}\Bigr) +\Biggl[ -\frac{604}{81}n_fT_F -\frac{4}{3}T_F +\Bigl( -\frac{2233}{81} -16\zeta_3 \Bigr)C_A \nonumber\\&&\hspace{-25mm} +\Bigl( 16\zeta_3 +\frac{233}{9} \Bigr)C_F \Biggr]\ln\Bigl(\frac{m^2}{\mu^2}\Bigr) +\Bigl( -\frac{6556}{729} +\frac{128}{27}\zeta_3 \Bigr)T_Fn_f \nonumber\\&&\hspace{-25mm} +\Bigl( \frac{2746}{729} -\frac{224}{27}\zeta_3 \Bigr)T_F +\Bigl( \frac{8}{3}B_4 +\frac{437}{27}\zeta_3 -24\zeta_4 -\frac{34135}{729} \Bigr)C_A \nonumber\\&&\hspace{-25mm} +\Bigl( -\frac{16}{3}B_4 +24\zeta_4 -\frac{278}{9}\zeta_3 +\frac{7511}{81} \Bigr)C_F \Biggr\}~,\\ \Delta_T~A_{qq,Q}^{(3),{\sf NS}, \overline{{\sf MS}}}(2)&=&C_FT_F\Biggl\{ \Bigl( \frac{16}{9}T_F(1-n_f) +\frac{44}{9}C_A \Bigr)\ln^3\Bigl(\frac{m^2}{\mu^2}\Bigr) +\Bigl( -\frac{34}{3}C_A \nonumber\\&&\hspace{-25mm} +8T_F \Bigr)\ln^2\Bigl(\frac{m^2}{\mu^2}\Bigr) +\Bigl[ -\frac{196}{9}n_fT_F -\frac{92}{27}T_F +\Bigl( -48\zeta_3 -\frac{73}{9} \Bigr)C_A +\Bigl( 48\zeta_3 \nonumber\\&&\hspace{-25mm} +15 \Bigr)C_F \Bigr]\ln\Bigl(\frac{m^2}{\mu^2}\Bigr) +\Bigl( \frac{128}{9}\zeta_3 -\frac{1988}{81} \Bigr)T_Fn_f +\Bigl( \frac{338}{27} -\frac{224}{9}\zeta_3 \Bigr)T_F +\Bigl( -56 \nonumber\\&&\hspace{-25mm} -72\zeta_4 +8B_4 +\frac{533}{9}\zeta_3 \Bigr)C_A +\Bigl( -16B_4 +\frac{4133}{27} +72\zeta_4 -\frac{310}{3}\zeta_3 \Bigr)C_F \Biggr\}~. \end{eqnarray} The structure of the result and the contributing numbers are the same as in the unpolarized case, cf. Eq.~(\ref{Aqq3NSQN2MSON}). We checked the moments $N=1\ldots 4$ keeping the complete dependence on the gauge--parameter $\xi$ and find that it cancels in the final result. Again, we observe that the massive OMEs do not depend on $\zeta_2$, cf. 
Section~\ref{SubSec-3LResUn}. Since the light flavor Wilson coefficients for the processes from which the transversity distribution can be extracted are not known to 2-- and 3--loop order, phenomenological studies on the effect of the heavy flavor contributions cannot yet be performed. However, the results of this Section can be used in comparisons with upcoming lattice simulations with (2+1+1)-dynamical fermions including the charm quark. More details on this calculation are given in~\cite{Blumlein:trans}. \newpage \section{\bf\boldmath First Steps Towards a Calculation of $A_{ij}^{(3)}$ for all Moments.} \label{Sec-FULL3L} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0} In Section \ref{Sec-3L}, we described how the various massive OMEs are calculated for fixed integer values of the Mellin variable $N$ at $3$--loop order using ${\sf MATAD}$. The ultimate goal is to calculate these quantities for general values of $N$. So far no massive {\sf single scale calculation} at $O(a_s^3)$ has been performed. In the following we would like to present some first results and a general method, which may be of use in later work calculating the general $N$--dependence of the massive OMEs $A_{ij}^{(3)}$. In Section~\ref{Sec-FULL3LF1}, we solve, as an example, a $3$--loop ladder graph contributing to $A_{Qg}^{(3)}$ for general values of $N$ by direct integration, avoiding the integration--by--parts method. In Section~\ref{SubSec-FULL3LGuess}, following Refs.~\cite{Blumlein:2009tm,Blumlein:2009tj}, we discuss a general algorithm which allows one to determine the general $N$--dependence of a recurrent quantity from a sufficiently large but ${\sf finite}$ number of moments. This algorithm has been successfully applied in \cite{Blumlein:2009tj} to reconstruct the $3$--loop anomalous dimensions, \cite{Moch:2004pa,Vogt:2004mw}, and massless $3$--loop Wilson coefficients, \cite{Vermaseren:2005qc}, from their moments.
These are the largest single scale quantities known at the moment and are well suited to demonstrate the power of this formalism. Similarly, one may apply this method to new problems of smaller size which emerge in the course of the calculation of the OMEs $A_{ij}^{(3)}$ for general values of $N$. \subsection{\bf\boldmath Results for all--$N$ Using Generalized Hypergeometric Functions} \label{Sec-FULL3LF1} In Section \ref{SubSec-2LF32}, we showed that there is only one basic $2$--loop massive tadpole which needs to be considered. From it, all diagrams contributing to the massive $2$--loop OMEs can be derived by attaching external quark--, gluon-- and ghost--lines, respectively, and including one operator insertion according to the Feynman rules given in Appendix~\ref{App-FeynRules}. The corresponding parameter--integrals are then all of the same structure, Eq. (\ref{Gen2L}). If one knows a method to calculate the basic topology for arbitrary integer powers of the propagators, the calculation of the $2$--loop OMEs is straightforward for fixed values of $N$. In the general case, we arrived at infinite sums containing the parameter $N$. To calculate these sums, additional tools are needed, e.g. the program {\sf Sigma}, cf. Section~\ref{SubSec-2LInfSum}. \begin{figure}[H] \begin{center} \includegraphics[angle=0, height=3.7cm]{picmain27.eps} \end{center} \begin{center} \caption[{\sf Basic $3$--loop topologies}] {\sf Basic $3$--loop topologies. Straight lines: quarks, curly lines: gluons. \\ \phantom{Figurex17:y}The gluon loop in (a) can also be replaced by a ghost loop.} \label{3La} \small \end{center} \normalsize \end{figure} \noindent We would like to follow the same approach in the $3$--loop case. Here, five basic topologies need to be considered, which are shown in Figures~\ref{3La},~\ref{3Lb}. Diagrams (a) and (b) -- if one of the quark loops corresponds to a massless quark -- can be reduced to $2$--loop integrals, because the massless loop can be integrated trivially.
For the remaining terms, this is not the case. Diagrams (c) and (d) are the most complex topologies, the former giving rise to the number $B_4$, Eq. (\ref{B4}), whereas the latter yields single $\zeta$--values up to weight $4$, cf. e.g. \cite{Broadhurst:1991fi}. Diagram (b) -- if both quarks are massive -- and (e) are ladder topologies and of less complexity. Let us, as an example, consider diagram (e). \begin{figure}[H] \begin{center} \includegraphics[angle=0, height=3.3cm]{picmain26.eps} \end{center} \begin{center} \caption[{\sf $3$--loop ladder graph }] {\sf $3$--loop ladder graph} \label{3Lb} \small \end{center} \normalsize \end{figure} \noindent Our notation is the same as in Section \ref{SubSec-2LF32}. The scalar $D$--dimensional integral corresponding to diagram (e) reads for arbitrary exponents of the propagators \begin{eqnarray} T_e&=&\iiint \frac{d^Dqd^Dkd^Dl}{(2\pi)^{3D}} \frac{i (-1)^{\nu_{12345}} (m^2)^{\nu_{12345}-3D/2} (4\pi)^{3D/2}} { (k^2)^{\nu_1}((k-l)^2-m^2)^{\nu_2} (l^2-m^2)^{\nu_3} ((q-l)^2-m^2)^{\nu_4}(q^2)^{\nu_5} }~. \nonumber\\ \end{eqnarray} Again, we have attached suitable normalization factors for convenience. After loop--by--loop integration of the momenta $k,q,l$ (in this order) using Feynman--parameters, one obtains after a few steps the following parameter integral \begin{eqnarray} T_e&=& \Gamma\Biggl[\frac[0pt]{\nu_{12345}-6-3\varepsilon/2}{\nu_1,\nu_2,\nu_3,\nu_4,\nu_5}\Biggr] \nonumber\\ && \int_0^1~dw_1\ldots \int_0^1~dw_4~\theta(1-w_1-w_2) \frac{w_1^{-3-\varepsilon/2+\nu_{12}}w_2^{-3-\varepsilon/2+\nu_{45}}(1-w_1-w_2)^{\nu_3-1}} {\displaystyle{(1+w_1\frac{w_3}{1-w_3}+w_2\frac{w_4}{1-w_4})^{\nu_{12345}-6-3\varepsilon/2}}} \nonumber\\ \nonumber\\&& \times w_3^{1+\varepsilon/2-\nu_1}(1-w_3)^{1+\varepsilon/2-\nu_{2}} w_4^{1+\varepsilon/2-\nu_5}(1-w_4)^{1+\varepsilon/2-\nu_{4}}~. \label{bubble3} \end{eqnarray} The $\theta$--function enforces $w_1+w_2\le 1$. 
In order to perform the $\{w_1,~w_2\}$ integration, one considers \begin{eqnarray} I&=&\int_0^1~dw_1\int_0^1~dw_2~\theta(1-w_1-w_2)w_1^{b-1}w_2^{b'-1} (1-w_1-w_2)^{c-b-b'-1}(1-w_1x-w_2y)^{-a}~. \nonumber\\ \end{eqnarray} The parameters $a,b,b',c$ shall be such that this integral is convergent. It can be expressed in terms of the Appell function $F_1$ via, \cite{Slater}~\footnote{Note that Eq.~(8.2.2) of Ref.~\cite{Slater} contains typos.}, \begin{eqnarray} I&=&\Gamma \Biggl[\frac[0pt]{b,b',c-b-b'}{c}\Biggr] \sum_{m,n=0}^{\infty} \frac{(a)_{m+n}(b)_n(b')_m} {(1)_m(1)_n(c)_{m+n}} x^ny^m~ \\ &=& \Gamma \Biggl[\frac[0pt]{b,b',c-b-b'}{c}\Biggr] F_1\Bigl[a;b,b';c;x,y\Bigr]~. \label{F1def} \end{eqnarray} The parameters $x,~y$ correspond to $w_3/(1-w_3)$ and $w_4/(1-w_4)$ in Eq.~(\ref{bubble3}), respectively. Hence the integral over these variables would yield a divergent sum. Therefore one uses the following analytic continuation relation for $F_1$, \cite{Slater}, \begin{eqnarray} F_1[a;b,b';c;\frac{x}{x-1},\frac{y}{y-1}] = (1-x)^b(1-y)^{b'}F_1[c-a;b,b';c;x,y]~. \label{F1ancont} \end{eqnarray} Finally one arrives at an infinite double sum \begin{eqnarray} T_e&=& \Gamma\Biggl[\frac[0pt]{-2-\varepsilon/2+\nu_{12},-2-\varepsilon/2+\nu_{45},-6-3\varepsilon/2+\nu_{12345}}{\nu_2,\nu_4,-4-\varepsilon+\nu_{12345}}\Biggr] \nonumber\\ && \times \sum_{m,n=0}^{\infty} \Gamma\Biggl[\frac[0pt]{2+m+\varepsilon/2-\nu_1,2+n+\varepsilon/2-\nu_5}{1+m,1+n,2+m+\varepsilon/2,2+n+\varepsilon/2}\Biggr] \nonumber\\ && \times \frac{(2+\varepsilon/2)_{n+m}(-2-\varepsilon/2+\nu_{12})_m(-2-\varepsilon/2+\nu_{45})_n}{(-4-\varepsilon+\nu_{12345})_{n+m}}~. \label{bubble3Res} \end{eqnarray} Here, we have adopted the notation for the $\Gamma$--function defined in (\ref{gammashort}) and $(a)_b$ is Pochhammer's symbol, Eq. (\ref{pochhammer}). As one expects, Eq.~(\ref{bubble3Res}) is symmetric w.r.t. exchanges of the indices $\{\nu_1,~\nu_2\}~\leftrightarrow~\{\nu_4,\nu_5\}$. 
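The analytic continuation (\ref{F1ancont}) can be checked numerically against the defining double series (\ref{F1def}), which converges for $|x|, |y| < 1$. A minimal sketch with arbitrarily chosen parameter values:

```python
import math

# Numerical check of the continuation formula (F1ancont), using the defining
# double series (F1def); the series converges for |x|, |y| < 1.

def poch(a, k):
    # Pochhammer symbol (a)_k = a (a+1) ... (a+k-1)
    r = 1.0
    for i in range(k):
        r *= a + i
    return r

def appell_f1(a, b, bp, c, x, y, terms=40):
    # F1(a; b, b'; c; x, y) = sum_{m,n} (a)_{m+n} (b)_m (b')_n
    #                         / (m! n! (c)_{m+n}) x^m y^n
    s = 0.0
    for m in range(terms):
        for n in range(terms):
            s += (poch(a, m + n) * poch(b, m) * poch(bp, n)
                  / (math.factorial(m) * math.factorial(n) * poch(c, m + n))
                  * x**m * y**n)
    return s

a, b, bp, c = 0.3, 0.5, 0.7, 2.1
x, y = 0.2, 0.3
lhs = appell_f1(a, b, bp, c, x / (x - 1), y / (y - 1))
rhs = (1 - x)**b * (1 - y)**bp * appell_f1(c - a, b, bp, c, x, y)
assert abs(lhs - rhs) < 1e-10
```

For $b'=0$ the series degenerates to a Gauss function ${}_2F_1$ and (\ref{F1ancont}) reduces to Pfaff's transformation, which provides an additional consistency check.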
For any values of $\nu_i$ of the type $\nu_i=a_i+b_i\varepsilon$, with $a_i\in~{\mathbb N},~b_i\in~{\mathbb C}$, the Laurent--series in $\varepsilon$ of Eq.~(\ref{bubble3Res}) can be calculated using e.g. ${\sf Summer}$, \cite{Vermaseren:1998uu}. We have checked Eq.~(\ref{bubble3Res}) for various values of the $\nu_i$ using ${\sf MATAD}$, cf. Section \ref{SubSec-3LMatad}. Next, let us consider the diagram shown in Figure \ref{3Lc}, which contributes to $A_{Qg}^{(3)}$ and derives from diagram (e). We treat the case where all exponents of the propagators are equal to one. \begin{figure}[H] \begin{center} \includegraphics[angle=0, height=3.3cm]{axo1.eps} \end{center} \begin{center} \caption[{\sf Example $3$--loop graph }] {\sf Example $3$--loop graph } \label{3Lc} \small \end{center} \normalsize \end{figure} \noindent Including the factor $i(m^2)^{2-3\varepsilon/2}(4\pi)^{3D/2}$ and integrating $q,k,l$ (in this order), we obtain \begin{eqnarray} I_{ex}&=&\Gamma(2-3\varepsilon/2)\int_0^1 dw_i~ \theta(1-w_1-w_2) \frac{w_1^{-\varepsilon/2} w_2^{-\varepsilon/2} (1-w_1-w_2)} {\displaystyle{(1+w_1\frac{1-w_3}{w_3}+w_2\frac{1-w_4}{w_4})^{2-3\varepsilon/2}}} \nonumber \\ \nonumber \\&& \hspace{35mm} \times w_3^{-1+\varepsilon/2}(1-w_3)^{\varepsilon/2} w_4^{-1+\varepsilon/2} (1-w_4)^{\varepsilon/2} \nonumber \\ && \hspace{35mm} \times (1-w_5w_1-w_6w_2-(1-w_1-w_2)w_7)^N~, \label{IL1} \end{eqnarray} where all parameters $w_1\ldots w_7$ have to be integrated from $0\ldots 1$. As in the $2$--loop case, (\ref{Gen2L}), one observes that the integral--kernel given by the corresponding massive tadpole integral (\ref{bubble3}) is multiplied with a polynomial containing various integration parameters to the power $N$. The same holds true for the remaining $3$--loop diagrams. 
Hence, if a general sum representation for the corresponding tadpole integrals is known and one knows how to evaluate the corresponding sums, at least fixed moments of the $3$--loop massive OMEs can be calculated right away. The presence of the polynomial to the power $N$ (which may also involve a finite sum, cf. the Feynman--rules in Appendix~\ref{App-FeynRules}) complicates the calculation further. One has to find a suitable way to deal with this situation, which depends on the integral considered. For $I_{ex}$, we split it up into several finite sums, rendering the integrals calculable in the same way as for $T_e$. We obtain \begin{eqnarray} I_{ex}&=& \frac{-\Gamma(2-3\varepsilon/2)}{(N+1)(N+2)(N+3)}\sum_{m,n=0}^{\infty} \Biggl\{ \nonumber\\ && \hspace{5mm} \sum_{t=1}^{N+2} \binom{3+N}{t} \frac{(t-\varepsilon/2)_m(2+N+\varepsilon/2)_{n+m}(3-t+N-\varepsilon/2)_n} {(4+N-\varepsilon)_{n+m}} \nonumber \\ && \hspace{10mm} \times \Gamma\Biggl[\frac[0pt]{t,t-\varepsilon/2,1+m+\varepsilon/2,1+n+\varepsilon/2,3-t+N,3-t+N-\varepsilon/2}{4+N-\varepsilon,1+m,1+n,1+t+m+\varepsilon/2,4-t+n+N+\varepsilon/2}\Biggr] \nonumber\\ && \hspace{1mm} -\sum_{s=1}^{N+3}\sum_{r=1}^{s-1} \binom{s}{r}\binom{3+N}{s}(-1)^s \frac{(r-\varepsilon/2)_m(-1+s+\varepsilon/2)_{n+m}(s-r-\varepsilon/2)_n} {(1+s-\varepsilon)_{n+m}} \nonumber\\ && \hspace{10mm} \times \Gamma\Biggl[\frac[0pt]{r,r-\varepsilon/2,s-r,1+m+\varepsilon/2,1+n+\varepsilon/2,s-r-\varepsilon/2}{1+m,1+n,1+r+m+\varepsilon/2,1+s-r+n+\varepsilon/2,1+s-\varepsilon}\Biggr] \Biggr\}~. \nonumber \\\label{IL2} \end{eqnarray} After expanding in $\varepsilon$, the summation can be performed using {\sf Sigma}~and the summation techniques explained in Section~\ref{SubSec-2LInfSum}.
The result reads \begin{eqnarray} I_{ex}&=& -\frac{4(N+1){S_1}+4}{(N+1)^2(N+2)}{\zeta_3} +\frac{2{S_{2,1,1}}}{(N+2)(N+3)} +\frac{1}{(N+1)(N+2)(N+3)}\Biggl\{ \nonumber\\ && -2(3N+5){S_{3,1}} -\frac{{S_1}^4}{4} +\frac{4(N+1){S_1}-4N}{N+1}{S_{2,1}} +2\Bigl( (2N+3){S_1} +\frac{5N+6}{N+1} \Bigr){S_3} \nonumber\\ && +\frac{9+4N}{4}{S_2}^2 +\Bigl( 2\frac{7N+11}{(N+1)(N+2)} +\frac{5N}{N+1}{S_1} -\frac{5}{2}{S_1}^2 \Bigr){S_2} +\frac{2(3N+5){S_1}^2}{(N+1)(N+2)} \nonumber\\ && +\frac{N}{N+1}{S_1}^3 +\frac{4(2N+3){S_1}}{(N+1)^2(N+2)} -\frac{(2N+3){S_4}}{2} +8\frac{2N+3}{(N+1)^3(N+2)} \Biggr\} \nonumber\\ && +O(\varepsilon)~, \end{eqnarray} which agrees with the fixed moments $N=1\ldots 10$ obtained using ${\sf MATAD}$, cf. Section \ref{SubSec-3LMatad}. We have shown that in principle one can apply similar techniques to those used at the $2$--loop level, Section \ref{SubSec-2LF32}, to calculate the massive $3$--loop OMEs considering only the five basic topologies. In this approach the integration-by-parts method is not used. We have given the necessary formulas for one non--trivial topology $(e)$ and showed for one of the cases how the calculation proceeds keeping the all--$N$ dependence. In order to obtain complete results for the massive OMEs, suitable integral representations for diagrams (b), (c) and (d) of Figure~\ref{3La} have to be derived first. This will allow for a calculation of fixed moments not relying on ${\sf MATAD}$. Next, an automatization of the step from (\ref{IL1}) to (\ref{IL2}) has to be found in order to obtain sums which can be handled e.g. by {\sf Sigma}. The latter step is not trivial, since it depends on the respective diagram and the flow of the outer momentum $p$ through it.
\subsection{\bf\boldmath Reconstructing General--$N$ Relations \\ from a Finite Number of Mellin--Moments} \label{SubSec-FULL3LGuess} Higher order calculations in Quantum Field Theories easily become tedious due to the large number of terms emerging and the sophisticated form of the contributing Feynman parameter integrals. This applies already to zero scale and single scale quantities. This is even more the case for problems containing at least two scales. While in the latter case the mathematical structure of the solution of the Feynman integrals is widely unknown, it is explored to a certain extent for zero- and single scale quantities. Zero scale quantities emerge as the expansion coefficients of the running couplings and masses, as fixed moments of splitting functions, etc. They can be expressed by rational numbers and certain special numbers such as multiple zeta-values (MZVs), \cite{Borwein:1999js,Blumlein:2009Zet} and related quantities. Single scale quantities depend on a scale $z$ which may be given as a ratio of Lorentz invariants $s'/s$ in the respective physical problem. One may perform a Mellin transform over $z$, Eq.~(\ref{Mellintrans}). All subsequent calculations are then carried out in Mellin space and one assumes $N \in {\mathbb N},~N > 0$. By this transformation, the problem at hand becomes discrete and one may seek a description in terms of difference equations, \cite{Norlund,*Thomson}. Zero scale problems are obtained from single scale problems by treating $N$ as a fixed integer or considering the limit $N \rightarrow \infty$. A main question concerning zero scale quantities is: Do the corresponding Feynman integrals always lead to MZVs? In the lower orders this is the case. However, starting at some order, even for single-mass problems, other special numbers will occur, \cite{Broadhurst:1998rz,Andre:2008,*Brown:2008um}.
Since one has to know the respective basis completely, it is difficult to use methods like ${\sf PSLQ}$, \cite{Bailey:1991}, to determine the analytic structure of the corresponding terms even if one may calculate them numerically at high enough precision. Zero scale problems are much easier to calculate than single scale problems. In some analogy to the determination of the analytic structure in zero scale problems through integer relations over a known basis ({\sf PSLQ}) one may think of an automated reconstruction of the all--$N$ relation out of a finite number of Mellin moments given in analytic form. This is possible for recurrent quantities. At least up to 3-loop order, presumably even to higher orders, single scale quantities belong to this class. Here we report on a {\sf general algorithm} for this purpose, which we applied to what is currently the most sophisticated problem of this type: the anomalous dimensions and massless Wilson coefficients to $3$--loop order for unpolarized DIS, \cite{Moch:2004pa,Vogt:2004mw,Vermaseren:2005qc}. Details of our calculation are given in Refs.~\cite{Blumlein:2009tj,Blumlein:2009tm}. \subsubsection{Single Scale Feynman Integrals as Recurrent Quantities} \label{SubSec-FULL3LGuessFeyn} For a large variety of massless problems single scale Feynman integrals can be represented as polynomials in the ring formed by the nested harmonic sums, cf. Appendix \ref{App-SpeFunHarm}, and the MZVs $\zeta_{a_1, \dots, a_l}$, which we set equal to the $\sigma$--values defined in Eq. (\ref{sigmaval}). Rational functions in $N$ and harmonic sums obey recurrence relations. Thus, due to closure properties, \cite{Salvy:1994,Mallinger:1996}, also any polynomial expression in such terms is a solution of a recurrence. Consider as an example the recursion \begin{eqnarray} F(N+1) - F(N) = \frac{{\rm sign}(a)^{N+1}}{(N+1)^{|a|}}~. \end{eqnarray} It is solved by the harmonic sum $S_a(N)$.
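Both the recurrence and the algebraic relations among harmonic sums can be checked exactly with rational arithmetic; a minimal sketch, using the recursive definition of nested harmonic sums:

```python
from fractions import Fraction as Fr

# Exact checks with rational arithmetic.  Nested harmonic sums are defined
# recursively by S_{a,b,...}(N) = sum_{i=1}^N sign(a)^i / i^|a| * S_{b,...}(i),
# with the empty sum S_{}(N) = 1.

def S(idx, N):
    if not idx:
        return Fr(1)
    a, rest = idx[0], idx[1:]
    sg = 1 if a > 0 else -1
    return sum(Fr(sg**i, i**abs(a)) * S(rest, i) for i in range(1, N + 1))

# S_a(N) solves the first order recurrence quoted above:
for a in (1, 2, -1, -3):
    sg = 1 if a > 0 else -1
    for N in range(1, 15):
        assert S((a,), N + 1) - S((a,), N) == Fr(sg**(N + 1), (N + 1)**abs(a))

# a typical polynomial relation in the ring of harmonic sums:
for N in range(1, 15):
    assert S((1, 1), N) == (S((1,), N)**2 + S((2,), N)) / 2
```

The last relation illustrates why polynomial expressions in harmonic sums again satisfy recurrences: products of solutions of recurrences remain recurrent due to the closure properties mentioned above.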
Corresponding difference equations hold for harmonic sums of deeper nestedness. Feynman integrals can often be decomposed into a combination containing terms of the form \begin{eqnarray} \int_0^1 dz \frac{z^{N-1}-1}{1-z} H_{\vec{a}}(z),~~~~\int_0^1 dz \frac{(-z)^{N-1}-1}{1+z} H_{\vec{a}}(z)~, \end{eqnarray} with $H_{\vec{a}}(z)$ being a harmonic polylogarithm, \cite{Remiddi:1999ew}. This structure also leads to recurrences, \cite{Blumlein:2008pa}. Therefore, it is very likely that single scale Feynman diagrams do always obey difference equations. \subsubsection{Establishing and Solving Recurrences} \label{SubSec-FULL3LGuessRec} Suppose we are given a finite array of rational numbers, \begin{alignat*}1 &q_1,\ q_2,\ \dots,\ q_K~, \end{alignat*} which are the first terms of an infinite sequence~$F(N)$,~i.e., $F(1)=q_1$, $F(2)=q_2$, etc. Let us assume that $F(N)$ represents a physical quantity and satisfies a recurrence of type \begin{eqnarray} \label{DEQ} \sum_{k=0}^l\Bigl(\sum_{i=0}^d c_{i,k} N^i\Bigr)F(N+k)=0~, \end{eqnarray} which we would like to deduce from the given numbers $q_m$. In a strict sense, this is not possible without knowing how the sequence continues for $N>K$. One thing we can do is to determine the recurrence equations satisfied by the data we are given. Any recurrence for $F(N)$ must certainly be among those. To find the recurrence equations of $F(N)$ valid for the first terms, the simplest way to proceed is by making an ansatz with undetermined coefficients. Let us fix an order~$l\in\mathbb{N}$ and a degree $d\in\mathbb{N}$ and consider the generic recurrence (\ref{DEQ}), where the $c_{i,k}$ are unknown. For each specific choice $N=1,2,\dots,K-l$, we can evaluate the ansatz, because we know all the values of $F(N+k)$ in this range, and we obtain a system of $K-l$ homogeneous linear equations for $(l+1)(d+1)$ unknowns~$c_{i,j}$. If $K-l<(l+1)(d+1)$, this system is under-determined and is thus guaranteed to have nontrivial solutions.
All these solutions will be valid recurrences for $F(N)$ for $N=1,\dots,K-l$, but they will most typically fail to hold beyond. If, on the other hand, $K-l\geq(l+1)(d+1)$, then the system is overdetermined and nontrivial solutions are not to be expected. But at least recurrence equations valid for all~$N$, if there are any, must appear among the solutions. We therefore expect in this case that the solution set will precisely consist of the recurrences of~$F(N)$ of order~$l$ and degree~$d$ valid for all~$N$. As an example, let us consider the contribution to the gluon splitting function $\propto C_A$ at leading order, $P_{gg}^{(0)}(N)$. The first 20 terms, starting with $N=3$, of the sequence $F(N)$ are \begin{alignat*}1 &\tfrac{14}{5},\ \tfrac{21}{5},\ \tfrac{181}{35},\ \tfrac{83}{14},\ \tfrac{4129}{630},\ \tfrac{319}{45},\ \tfrac{26186}{3465},\ \tfrac{18421}{2310},\ \tfrac{752327}{90090},\ \tfrac{71203}{8190},\ \tfrac{811637}{90090},\ \tfrac{128911}{13860},\ \tfrac{29321129}{3063060},\\ & \tfrac{2508266}{255255},\ \tfrac{292886261}{29099070},\ \tfrac{7045513}{684684},\ \tfrac{611259269}{58198140},\ \tfrac{1561447}{145860},\ \tfrac{4862237357}{446185740},\ \tfrac{988808455}{89237148}~. \end{alignat*} Making an ansatz for a recurrence of order~3 with polynomial coefficients of degree~3 leads to an overdetermined homogeneous linear system with 16 unknowns and 17 equations. Despite being overdetermined and dense, this system has two linearly independent solutions. Using bounds for the absolute value of determinants depending on the size of a matrix and the bit size of its coefficients, one can very roughly estimate the probability for this to happen ``by coincidence'' to about~$10^{-65}$. And in fact, it did not happen by coincidence.
The solutions to the system correspond to the two recurrence equations \begin{alignat}1 &(7 N^3+113 N^2+494 N+592) F(N)-(12 N^3+233 N^2+1289 N+2156) F(N+1) \notag\\ &{}+(3 N^3+118 N^2+1021 N+2476) F(N+2)+(2 N^3+2 N^2-226 N-912) F(N+3) \notag\\ &{}=0 \label{eq1} \end{alignat} and \begin{alignat}1 &(4 N^3+64 N^2+278 N+332) F(N)-(7 N^3+134 N^2+735N+1222) F(N+1)\notag\\ &{}+(2 N^3+71 N^2+595 N+1418) F(N+2)+(N^3-N^2-138 N-528) F(N+3) \notag\\ &{}=0, \end{alignat} which are both valid for all~$N\geq1$. If we had found that the linear system did not have a nontrivial solution, then we could have concluded that the sequence $F(N)$ would \emph{definitely} (i.e.\ without any uncertainty) not satisfy a recurrence of order~3 and degree~3. It might then still have satisfied recurrences of larger order or degree, but more terms of the sequence would have to be known to detect those. The method of determining (potential) recurrence equations for sequences as just described is not new. It is known to the experimental mathematics community as automated guessing and is frequently applied in the study of combinatorial sequences. Standard software packages for generating functions such as \textsf{gfun}~\cite{Salvy:1994} for ${\sf MAPLE}$ or \textsf{GeneratingFunctions.m}~\cite{Mallinger:1996} for ${\sf MATHEMATICA}$ provide functions which take as input a finite array of numbers, thought of as the first terms of some infinite sequence, and produce as output recurrence equations that are, with high probability, satisfied by the infinite sequence. These packages apply the method described above more or less literally, and this is perfectly sufficient for small examples. But if thousands of terms of a sequence are needed, there is no way to solve the linear systems using rational number arithmetic. Even worse, already for medium sized problems from our collection, the size of the linear system exceeds by far typical memory capacities of 16--64~Gb.
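For illustration, the guessing step itself can be realized in a few lines of code. The following Python sketch (a hypothetical toy implementation in exact rational arithmetic, not the \textsf{gfun}/\textsf{GeneratingFunctions.m} routines referred to above) sets up the homogeneous linear system for the ansatz (\ref{DEQ}) and computes its null space. Applied to the harmonic numbers $H_N$, which obey the order-2, degree-1 recurrence $(N+1)F(N)-(2N+3)F(N+1)+(N+2)F(N+2)=0$, it recovers exactly this recurrence:

```python
from fractions import Fraction

def harmonic(K):
    """First K values H(1), ..., H(K) of the harmonic numbers."""
    h, out = Fraction(0), []
    for n in range(1, K + 1):
        h += Fraction(1, n)
        out.append(h)
    return out

def nullspace(rows):
    """Null space basis of a rational matrix, via exact Gaussian elimination."""
    m = [row[:] for row in rows]
    ncols = len(m[0])
    pivots, r = [], 0
    for c in range(ncols):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue                      # no pivot: c is a free column
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(len(m)):           # reduce to row echelon form
            if i != r and m[i][c] != 0:
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    basis = []
    for fc in (c for c in range(ncols) if c not in pivots):
        v = [Fraction(0)] * ncols         # one basis vector per free column
        v[fc] = Fraction(1)
        for row_idx, pc in enumerate(pivots):
            v[pc] = -m[row_idx][fc]
        basis.append(v)
    return basis

def guess_recurrence(q, l, d):
    """All coefficient vectors c[i,k] (k outer, i inner) with
    sum_{k=0..l} (sum_{i=0..d} c[i,k] N^i) F(N+k) = 0 on the data q,
    where q[0] = F(1), q[1] = F(2), ..."""
    K = len(q)
    rows = [[Fraction(N) ** i * q[N + k - 1]
             for k in range(l + 1) for i in range(d + 1)]
            for N in range(1, K - l + 1)]
    return nullspace(rows)

# order l = 2, degree d = 1, from the first 20 harmonic numbers:
sols = guess_recurrence(harmonic(20), 2, 1)
# one solution: (N+1) F(N) - (2N+3) F(N+1) + (N+2) F(N+2) = 0
```

With $K=20$, $l=2$, $d=1$ this gives $18$ equations for $6$ unknowns; the solution space is one-dimensional and spanned by the coefficients of the recurrence just quoted.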
Let us consider as an example the difference equation associated with the contribution of the color factor $C_F^3$ to the 3-loop Wilson coefficient $C_{2,q}^{(3)}$ in unpolarized deeply inelastic scattering. 11~Tb of memory would be required to establish~(\ref{DEQ}) in a naive way. Therefore refined methods have to be applied. We use arithmetic in finite fields together with Chinese remaindering, \cite{Geddes:1992,*Gathen:1999,*Kauers:2008zz}, which reduces the storage requirements to a few Gb of memory. The size of the linear system is approximately minimal for $l \approx d$. If more than one recurrence is found, the different recurrences are combined to reduce $l$ to a minimal value. It seems to be a general phenomenon that the recurrence of minimal order is the one with the smallest integer coefficients, cf. also \cite{bostan:08}. For even larger problems than those dealt with in the present analysis, a series of further technical improvements may be carried out, \cite{Beckermann:1992,*Beckermann:2000}. For the solution of the recurrence, low orders are clearly preferred. It is solved in depth-optimal $\Pi\Sigma$ fields, \cite{Karr:1981,Karr:1985,sigma1,Schneider:2007,Schneider:2001,*Schneider:2007a,*RISC3389,*Schneider:2008}; here we apply advanced symbolic summation methods, such as efficient recurrence solvers and refined telescoping algorithms. They are available in the summation package {\sf Sigma}, \cite{sigma1,sigma2}. The solutions are found as linear combinations of rational terms in $N$ combined with functions which cannot be further reduced in the $\Pi\Sigma$ fields. In the present application they turn out to be nested harmonic sums, cf. Appendix \ref{App-SpeFunHarm}. Other or higher order applications may also lead to sums of different types, which are uniquely found by the present algorithm.
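The modular strategy just mentioned can be sketched as follows: one solves the linear system modulo several word-size primes, combines the images via the Chinese remainder theorem, and recovers the rational coefficients by rational reconstruction. The Python fragment below (a toy illustration of these two reconstruction steps under the stated assumptions, not the code actually used here) recovers the fraction $-7/12$ from its images modulo three primes:

```python
from math import isqrt

def crt(residues, moduli):
    """Combine x = r_i (mod m_i) for pairwise coprime m_i into x mod prod(m_i)."""
    x, M = 0, 1
    for r, m in zip(residues, moduli):
        t = ((r - x) * pow(M, -1, m)) % m   # solve x + M*t = r (mod m)
        x, M = x + M * t, M * m
    return x % M, M

def rational_reconstruction(a, m):
    """Recover p/q with p = a*q (mod m) and |p|, q <= sqrt(m/2), by stopping
    the extended Euclidean algorithm halfway (Wang's algorithm)."""
    bound = isqrt(m // 2)
    r0, r1, s0, s1 = m, a % m, 0, 1         # invariant: r_i = s_i * a (mod m)
    while r1 > bound:
        q = r0 // r1
        r0, r1, s0, s1 = r1, r0 - q * r1, s1, s0 - q * s1
    if s1 == 0 or abs(s1) > bound:
        raise ValueError("no reconstruction within the bound")
    if s1 < 0:
        r1, s1 = -r1, -s1                   # normalize the denominator to be positive
    return r1, s1                           # numerator, denominator

# toy run: recover -7/12 from its images modulo three primes
primes = [10007, 10009, 10037]
images = [(-7 * pow(12, -1, p)) % p for p in primes]
a, m = crt(images, primes)
p, q = rational_reconstruction(a, m)        # p/q = -7/12
```

Taking sufficiently many primes makes the combined modulus $m$ large enough that the true rational coefficients fall below the reconstruction bound $\sqrt{m/2}$, while only one word-size image of the system has to be held in memory at a time.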
\subsubsection{Determination of the 3-Loop Anomalous Dimensions \\ and Wilson Coefficients} \label{SubSec-FULL3LGuessWil} We apply the method to determine the unpolarized anomalous dimensions and massless Wilson coefficients to $3$--loop order, treating separately the contributions stemming from a single color/$\zeta_i$-factor. These are 186 terms. As input we use the respective Mellin moments, which were calculated by a {\sf MAPLE}--code based on the harmonic sum representation calculated in Refs.~\cite{Moch:2004pa,Vogt:2004mw,Vermaseren:2005qc}. We need very high moments and calculate the input recursively. As an example, let us illustrate the size of the moments for the $C_F^3$-contribution to the Wilson coefficient $C^{(3)}_{2,q}$. The highest moment required is $N = 5114$. It cannot be calculated simply using {\sf Summer}, \cite{Vermaseren:1998uu}, and we used a recursive algorithm in {\sf MAPLE} for it. The corresponding difference equations (\ref{DEQ}) are determined by a recurrence finder. Furthermore, the order of the difference equation is reduced to the smallest value possible. The difference equations are then solved order by order using the summation package {\sf Sigma}. For the $C_F^3$-term in $C^{(3)}_{2,q}$, the recurrence was established after 20.7 days of CPU time. Here 4~h were required for the modular prediction of the dimension of the system, 5.8 days were spent on solving modular linear systems, and 11 days on the modular operator GCDs. The Chinese remainder method and rational reconstruction took 3.8 days. 140 word-size primes were needed. As output one obtains a recurrence of 31 Mb, which is of order 35 and degree 938, with a largest integer of 1227 digits. The recurrence was solved by {\sf Sigma}~after 5.9 days. We reached a compactification from 289 harmonic sums needed in \cite{Moch:2004pa,Vogt:2004mw,Vermaseren:2005qc} to 58 harmonic sums. The determination of the $3$--loop anomalous dimensions is a much smaller problem.
Here the computation took only about 18~h for the complete result. For the three most complicated cases, establishing and solving the difference equations took $3+1$ weeks each, requiring $\leq 10$~Gb on a 2~GHz processor. This led to an overall computation time of about sixteen weeks. In the final representation, we account for algebraic reduction, \cite{Blumlein:2003gb}. For this task we used the package {\sf HarmonicSums}, \cite{Ablinger:09}, which complements the functionality of~{\sf Sigma}. One observes that different color factor contributions lead to the same, or nearly the same, number of sums for a given quantity. This points to the fact that the number of contributing sums, after the algebraic reduction has been carried out, is governed by topology rather than by the field- and color structures involved. The linear harmonic sum representations used in \cite{Moch:2004pa,Vogt:2004mw,Vermaseren:2005qc} require many more sums than the representation reached in the present analysis. A further reduction can be obtained using the structural relations, which leads to at most 35 different sums up to the level of the $3$--loop Wilson coefficients, \cite{Blumlein:2008pa,Blumlein:2009ta,Blumlein:2009fz}. It is not unlikely that the present method can be applied to single scale problems at even higher orders. As has been found before in \cite{Blumlein:2008pa,Blumlein:2005im,*Blumlein:2006rr,Blumlein:2007dj,Blumlein:prep2,Blumlein:2006mh,Buza:1995ie,Bierenbaum:2008yu,Bierenbaum:2007qe,Berends:1987abxBerends:1987abe1,Blumlein:2007tr}, representing a large number of 2- and 3-loop processes in terms of harmonic sums, the basis elements emerging are always the same. In practice, no method exists yet to calculate ab initio the high number of moments required for the determination of the all--$N$ formulas in the 3--loop case.
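The algebraic relations entering this reduction can be checked directly in exact arithmetic. As a hypothetical illustration in Python (the actual reductions were performed with {\sf HarmonicSums} and {\sf Sigma}), the following sketch implements the nested harmonic sums of Appendix~\ref{App-SpeFunHarm} and verifies the depth-2 relation $S_{m,n}+S_{n,m}=S_m S_n+S_{m\wedge n}$ for a range of indices and arguments:

```python
from fractions import Fraction

def S(indices, N):
    """Nested harmonic sum S_{a_1,...,a_m}(N), with summands
    sign(a_i)^{n_i} / n_i^{|a_i|}, as an exact rational number."""
    if not indices:
        return Fraction(1)                  # empty sum S_{} = 1
    a, rest = indices[0], indices[1:]
    sign = -1 if a < 0 else 1
    total = Fraction(0)
    for n in range(1, N + 1):               # inner sums run up to the outer index
        total += Fraction(sign ** n, n ** abs(a)) * S(rest, n)
    return total

def wedge(a, b):
    """a ^ b = sign(a) sign(b) (|a| + |b|)."""
    s = (1 if a > 0 else -1) * (1 if b > 0 else -1)
    return s * (abs(a) + abs(b))

# depth-2 algebraic relation: S_{m,n} + S_{n,m} = S_m * S_n + S_{m ^ b n}
for m in (1, 2, -1, -3):
    for n in (1, -2, 3):
        for N in (1, 2, 5, 8):
            lhs = S([m, n], N) + S([n, m], N)
            rhs = S([m], N) * S([n], N) + S([wedge(m, n)], N)
            assert lhs == rhs
```

Each such relation expresses one of the sums $S_{m,n}$, $S_{n,m}$ through products of lower-depth sums, which is the mechanism behind the algebraic reduction described above.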
On the other hand, a proof of existence has been delivered of a quite general and powerful automatic difference-equation solver, which has stood rather demanding tests. It opens up good prospects for the development of even more powerful methods, which can be applied in establishing and solving difference equations for single scale quantities such as the classes of Feynman--parameter integrals contributing to the massive operator matrix elements for general values of $N$. \newpage \section{\bf\boldmath Conclusions} \label{Sec-CONC} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0} In this thesis, we extended the description of the contributions of a single heavy quark to the unpolarized Wilson coefficients ${\cal C}_{(q,g),2}^{\sf S,PS,NS}$ to $O(a_s^3)$. In upcoming precision analyses of deep--inelastic data, this will allow more precise determinations of parton distribution functions and of the strong coupling constant. We applied a factorization relation for the complete inclusive heavy flavor Wilson coefficients, which holds in the limit $Q^2\gg 10m^2$ in case of $F_2(x,Q^2)$, \cite{Buza:1995ie}, at the level of twist--$2$. It relates the asymptotic heavy flavor Wilson coefficients to a convolution of the corresponding light flavor Wilson coefficients, which are known up to $O(a_s^3)$, \cite{Vermaseren:2005qc}, and describe all process dependence, with the massive operator matrix elements. The latter are process independent quantities and describe all mass--dependent contributions but the power--suppressed terms ($(m^2/Q^2)^k,~k\ge~1$). They are obtained from the unpolarized twist--$2$ local composite operators stemming from the light--cone expansion of the electromagnetic current between on--shell partonic states, including virtual heavy quark lines. The first calculation of fixed moments of all $3$--loop massive OMEs is the main result of this thesis.
\\ In Section~\ref{SubSec-HQAsym}, we applied the factorization formula at the $O(a_s^3)$--level. It holds for the inclusive heavy flavor Wilson coefficients, including radiative corrections due to heavy quark loops. In order to describe the production of heavy quarks in the final states only, further assumptions have to be made. This description succeeded at the $2$--loop level in Ref.~\cite{Buza:1995ie} because a comparison with the exact calculation in Refs.~\cite{Laenen:1992zkxLaenen:1992xs,*Riemersma:1994hv} was possible and because the contributing virtual heavy flavor corrections are easily identified, cf. Section~\ref{SubSec-HQElProdWave}. At $O(a_s^3)$ this is no longer possible and only the inclusive description should be used, as has been done in Ref.~\cite{Buza:1996wv} in order to derive heavy flavor parton densities. These are obtained as convolutions of the light flavor densities with the massive OMEs, cf. Section~\ref{SubSec-HQFlav}. \\ In Section~\ref{Sec-REN}, we derived and presented in detail the renormalization of the massive operator matrix elements up to $O(a_s^3)$. This led to an intermediary representation in a defined ${\sf MOM}$--scheme to maintain the partonic description required for the factorization of the heavy flavor Wilson coefficients into OMEs and the light flavor Wilson coefficients. Finally, we applied the $\overline{\sf MS}$--scheme for coupling constant renormalization in order to refer to the inclusive heavy flavor Wilson coefficients and to be able to combine our results with the light flavor Wilson coefficients, which have been calculated in the same scheme. For mass renormalization we chose the on--mass--shell--scheme and provided in Section~\ref{Sec-REP} all necessary formulas to transform between the ${\sf MOM}$-- and the on--shell--scheme, respectively, and the ${\sf \overline{MS}}$--scheme.
\\ For renormalization at $O(a_s^3)$, all $O(a_s^2)$ massive OMEs $A_{Qg},~A_{Qq}^{\sf PS},~A_{qq,Q}^{\sf NS},~A_{gg,Q},~A_{gq,Q}$ are needed up to $O(\varepsilon)$ in dimensional regularization. In Section~\ref{Sec-2L}, we newly calculated all the corresponding $O(\varepsilon)$ contributions in Mellin space for general values of $N$. This involved a first re--calculation of the complete terms $A_{gg,Q}^{(2)}$ and $A_{gq,Q}^{(2)}$, in which we agree with the literature, \cite{Buza:1996wv}. We made use of the representation of the Feynman--parameter integrals in terms of generalized hypergeometric functions. The $O(\varepsilon)$--expansion led to new infinite sums which had to be solved by analytic and algebraic methods. The results can be expressed in terms of polynomials of the basic nested harmonic sums up to weight ${\sf w=4}$ and derivatives thereof. They belong to the complexity class of the general two-loop Wilson coefficients or hard scattering cross sections in massless QED and QCD and are described by six basic functions and their derivatives in Mellin space. The package {\sf Sigma}, \cite{sigma1,sigma2,Refined,Schneider:2007}, proved to be a useful tool to solve the sums occurring in the present problem, leading to extensions of this code by the author.\\ The main part of the thesis was the calculation of fixed moments of all 3--loop massive operator matrix elements $A_{Qg},~A_{qg,Q},~A_{Qq}^{\sf PS},~A_{qq,Q}^{\sf PS},~A_{qq,Q}^{\sf NS},~A_{gq,Q},~A_{gg,Q}$, cf. Section~\ref{Sec-3L}. These are needed to describe the asymptotic heavy flavor Wilson coefficients at $O(a_s^3)$ and to derive massive quark--distributions at the same level, \cite{Buza:1996wv}. We developed computer algebra codes which allow, based on ${\sf QGRAF}$, \cite{Nogueira:1991ex}, the automatic generation of $3$--loop Feynman diagrams with local operator insertions. These were then projected onto massive tadpole diagrams for fixed values of the Mellin variable $N$.
For the final calculation of the diagrams, use was made of the ${\sf FORM}$--code ${\sf MATAD}$, \cite{Steinhauser:2000ry}. The representation of the massive OMEs is available for general values of $N$ in analytic form, apart from the constant terms $a_{ij}^{(3)}$ of the unrenormalized 3--loop OMEs. This is achieved by combining our general expressions for the renormalized results, the all--$N$ results up to $O(a_s^2\varepsilon)$ and results given in the literature. A number of fixed Mellin moments of the terms $a_{ij}^{(3)}$ were calculated, reaching up to $N = 10, 12, 14$, depending on the complexity of the corresponding operator matrix element. The computation required about $250$ CPU days on $32/64~Gb$--machines. Through the renormalization of the massive OMEs, the corresponding moments of the complete 2-loop anomalous dimensions and the $T_F$--terms of the 3--loop anomalous dimensions were obtained, as were the moments of the complete anomalous dimensions $\gamma_{qq}^{(2), \sf PS}$ and $\gamma_{qg}^{(2)}$, in which we agree with the literature. This provides a first independent check of the moments of the fermionic contributions to the $3$--loop anomalous dimensions, which have been obtained in Refs.~\cite{Larin:1996wd,Retey:2000nq}. \\ In Section~\ref{Sec-POL}, we presented results on the effects of heavy quarks in polarized deep--inelastic scattering, using essentially the same description as in the unpolarized case. We worked in the scheme for $\gamma_5$ in dimensional regularization used in Ref.~\cite{Buza:1996xr} and could confirm the results given there for the $2$--loop massive OMEs $\Delta A_{Qq}^{\sf PS}$ and $\Delta A_{Qg}$. Additionally, we newly presented the $O(\varepsilon)$ contributions of these terms.\\ We calculated the $2$--loop massive OMEs of transversity for all--$N$ and the $3$--loop terms for the moments $N=1,\ldots,13$ in Section~\ref{sec-1}. 
This calculation is not yet of phenomenological use, since the corresponding light flavor Wilson coefficients have not been calculated so far. However, these results could be obtained by making only minor changes to the computer programs written for the unpolarized case. We confirmed for the first time the moments $N=1,\ldots,8$ of the fermionic contributions to the $3$--loop transversity anomalous dimension obtained in Refs.~\cite{Gracey:2003yrxGracey:2003mrxGracey:2006zrxGracey:2006ah}. Our results can, however, be used in comparison with lattice calculations. \\ Several steps were undertaken towards an all--$N$ calculation of the massive OMEs. Four non--trivial $3$--loop massive topologies contribute. As an example, we presented a first all--$N$ result for a ladder topology in Section~\ref{Sec-FULL3LF1}. In Section~\ref{SubSec-FULL3LGuess}, we described a general algorithm to calculate the exact expression for single scale quantities from a finite (suitably large) number of moments, which are zero scale quantities. The latter are much more easily calculable than single scale quantities. We applied the method to the anomalous dimensions and massless Wilson coefficients up to $3$--loop order, \cite{Moch:2004pa,Vogt:2004mw,Vermaseren:2005qc}. Solving $3$--loop problems in this way directly is not possible at present, since the number of required moments is too large for the methods available. Yet this method constitutes a proof of principle and may find application in medium--sized problems in the future.
\newpage \thispagestyle{empty} \begin{flushleft} \end{flushleft} \vspace{70mm} \begin{center} \end{center} \newpage \thispagestyle{empty} \newpage \begin{flushleft} \end{flushleft} \newpage \begin{appendix} \section{\bf \boldmath Conventions} \label{App-Con} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0} We use natural units \begin{eqnarray} \hbar=1~,\quad c=1~,\quad \varepsilon_0=1~, \end{eqnarray} where $\hbar$ denotes the reduced Planck constant, $c$ the vacuum speed of light and $\varepsilon_0$ the permittivity of the vacuum. The electromagnetic fine--structure constant $\alpha$~is given by \begin{eqnarray} \alpha=\alpha'(\mu^2=0)=\frac{e^2}{4\pi\varepsilon_0\hbar c} =\frac{e^2}{4\pi}\approx \frac{1}{137.03599911(46)}~. \end{eqnarray} In this convention, energies and momenta are given in the same units, electron volt (${\rm eV}$). The space--time dimension is taken to be $D=4+\varepsilon$ and the metric tensor $g_{\mu\nu}$ in Minkowski--space is defined as \begin{eqnarray} g_{00}=1~,\quad g_{ii}=-1~,i=1\ldots D-1~,\quad g_{ij}=0~,i\neq j~. \label{metricDdim} \end{eqnarray} Einstein's summation convention is used, i.e. \begin{eqnarray} x_{\mu}y^{\mu}:=\sum^{D-1}_{\mu=0}x_{\mu}y^{\mu}~. \end{eqnarray} Bold--faced symbols represent $(D-1)$--dimensional spatial vectors: \begin{eqnarray} x=(x_0,{\bf x})~. \end{eqnarray} If not stated otherwise, Greek indices refer to the $D$--component space--time vector and Latin ones to the $D-1$ spatial components only. The dot product of two vectors is defined by \begin{eqnarray} p.q=p_0q_0-\sum_{i=1}^{D-1}p_iq_i~. \end{eqnarray} The $\gamma$--matrices $\gamma_{\mu}$ are taken to be of dimension $D$ and fulfill the anti--commutation relation \begin{eqnarray} \{\gamma_{\mu},\gamma_{\nu}\}=2g_{\mu\nu} \label{gammaanticom}~.
\end{eqnarray} It follows that \begin{eqnarray} \gamma_{\mu}\gamma^{\mu}&=&D \\ Tr \left(\gamma_{\mu}\gamma_{\nu}\right)&=&4g_{\mu\nu} \\ Tr \left(\gamma_{\mu}\gamma_{\nu}\gamma_{\alpha}\gamma_{\beta}\right) &=&4[g_{\mu\nu}g_{\alpha\beta}+ g_{\mu\beta}g_{\nu\alpha}- g_{\mu\alpha}g_{\nu\beta}] \label{gammarelations}~. \end{eqnarray} The slash--symbol for a $D$-momentum $p$ is defined by \begin{eqnarray} /\!\!\!\! p:=\gamma_{\mu}p^{\mu} \label{dagger}~. \end{eqnarray} The conjugate of a bi--spinor $u$ of a particle is given by \begin{eqnarray} \overline{u}=u^{\dagger}\gamma_0~, \end{eqnarray} where $\dagger$ denotes Hermitian and $*$ complex conjugation, respectively. The bi--spinors $u$ and $v$ fulfill the free Dirac--equation, \begin{eqnarray} (/\!\!\!\! p-m )u(p)&=&0~,~\quad \overline{u}(p)(/\!\!\!\! p-m )=0 \\ (/\!\!\!\! p+m )v(p)&=&0~,~\quad \overline{v}(p)(/\!\!\!\! p+m )=0~. \end{eqnarray} Bi--spinors and polarization vectors are normalized to \begin{eqnarray} \sum_{\sigma}u(p,\sigma)\overline{u}(p,\sigma)&=&/\!\!\!\! p+m \\ \sum_{\sigma}v(p,\sigma)\overline{v}(p,\sigma)&=&/\!\!\!\! p-m \\ \sum_{\lambda}\epsilon^{\mu}(k,\lambda )\epsilon^{\nu}(k,\lambda) &=&-g^{\mu \nu}~, \end{eqnarray} where $\lambda$ and $\sigma$ represent the spin. The commonly used caret~``~$\hat{\empty}~$''~to signify an operator, e.g. $\hat{O}$, is omitted if confusion is not to be expected. The gauge symmetry group of QCD is the Lie--Group $SU(3)_c$. We consider the general case of $SU(N_c)$. The non--commutative generators are denoted by $t^a$, where $a$ runs from $1$ to $N_c^2-1$. The generators can be represented by Hermitian, traceless matrices, \cite{Muta:1998vi}. The structure constants $f^{abc}$ and $d^{abc}$ of $SU(N_c)$ are defined via the commutation and anti--commutation relations of its generators, \cite{Yndurain:1999ui}, \begin{eqnarray} [t^a,t^b]&=&if^{abc}t^c \label{structconstf} \\ \{t^a,t^b\}&=& d^{abc}t^c+\frac{1}{N_c}\delta_{ab} \label{structconstd}~. 
\end{eqnarray} The indices of the color matrices, in a certain representation, are denoted by $i,j,k,l,\ldots$. The color invariants most commonly encountered are \begin{eqnarray} \delta_{ab} C_A&=&f^{acd}f^{bcd} \label{CA} \\ \delta_{ij} C_F&=&t^a_{il}t^a_{lj} \label{CF} \\ \delta_{ab} T_F&=&t^a_{ik}t^b_{ki} \label{TR}~. \end{eqnarray} These constants evaluate to \begin{eqnarray} C_A=N_c~,\quad~C_F=\frac{N_c^2-1}{2N_c}~,\quad~T_F=\frac{1}{2}~.\label{Cval} \end{eqnarray} At higher loops, more color--invariants emerge. At $3$--loop order, one additionally obtains \begin{eqnarray} d^{abc}d_{abc}=(N_c^2-1)(N_c^2-4)/N_c~. \label{dabc2} \end{eqnarray} In case of $SU(3)_c$, $C_A=3~,~C_F=4/3~,~d^{abc}d_{abc}=40/3$ holds. \newpage \section{\bf \boldmath Feynman Rules} \label{App-FeynRules} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0} For the QCD Feynman rules, Figure \ref{feynrulesqcd}, we follow Ref. \cite{Yndurain:1999ui}, cf. also Refs.~\cite{Veltman:1994wz,*'tHooft:1973pz}. $D$--dimensional momenta are denoted by $p_i$ and Lorentz-indices by Greek letters. Color indices are $a,b,...$ and $i,j$ are indices of the color matrices. Solid lines represent fermions, wavy lines gluons and dashed lines ghosts. Arrows denote the direction of the momenta. A factor $(-1)$ has to be included for each closed fermion-- or ghost loop. \begin{figure}[H] \begin{center} \includegraphics[angle=0, height=17cm]{picapp1.eps} \end{center} \begin{center} \caption{\sf Feynman rules of QCD.} \label{feynrulesqcd} \noindent \small \end{center} \normalsize \end{figure} \noindent The Feynman rules for the quarkonic composite operators are given in Figure \ref{feynrulescompqua}. Up to $O(g^2)$ they can be found in Ref. \cite{Floratos:1977auxFloratos:1977aue1} and also in \cite{Mertig:1995ny}. Note that the $O(g)$ term in the former reference contains a typographical error.
We have checked these terms and agree up to normalization factors, which may be due to other conventions being applied there. We newly derived the rule with three external gluons. The terms $\gamma_{\pm}$ refer to the unpolarized ($+$) and polarized ($-$) case, respectively. Gluon momenta are taken to be incoming. \begin{figure}[H] \begin{center} \includegraphics[angle=0, height=16.5cm]{picapp2.eps} \end{center} \begin{center} \caption[{\sf Feynman rules for quarkonic composite operators.}] {\sf Feynman rules for quarkonic composite operators. $\Delta$ denotes a light-like $4$-vector,\\ \phantom{abcdefghijk} $\Delta^2=0$; $N$ is a suitably large positive integer.} \label{feynrulescompqua} \noindent \small \end{center} \normalsize \end{figure} \newpage \noindent The Feynman rules for the unpolarized gluonic composite operators are given in Figure \ref{feynrulescompglu}. Up to $O(g^2)$, they can be found in Refs. \cite{Floratos:1978ny} and \cite{Hamberg:1991qt}. We have checked these terms and agree up to $O(g^0)$. At $O(g)$, we agree with \cite{Floratos:1978ny}, but not with \cite{Hamberg:1991qt}. At $O(g^2)$, we do not agree with either of these results, which even differ from each other\footnote{We would like to thank J. Smith for the possibility to compare with their {\sf FORM}--code used in Refs. \cite{Buza:1995ie,Buza:1996xr,Matiounine:1998re,Matiounine:1998ky}, to which we agree.}. \begin{figure}[H] \begin{center} \includegraphics[angle=0, height=17cm]{picapp3.eps} \end{center} \begin{center} \caption[{\sf Feynman rules for gluonic composite operators.}] {\sf Feynman rules for gluonic composite operators. 
$\Delta$ denotes a light-like $4$-vector,\\ \phantom{abcdefghijk} $\Delta^2=0$; $N$ is an integer.} \label{feynrulescompglu} \noindent \small \end{center} \normalsize \end{figure} \newpage \section{\bf \boldmath Special Functions} \label{App-SpeFun} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0} In the following we summarize for convenience some relations for special functions which occur repeatedly in quantum field theory and are used in this thesis. \subsection{The $\Gamma$--function} \label{App-SpeFunGA} The $\Gamma$-function, cf. \cite{stegun,Nielsen:1906}, is analytic in the whole complex plane except for simple poles at the non-positive integers. Its inverse is given by Euler's infinite product \begin{eqnarray} \frac{1}{\Gamma(z)}=z\exp(\gamma_Ez) \prod_{i=1}^{\infty} \Biggl[\Bigl(1+\frac{z}{i}\Bigr)\exp(-z/i)\Biggr]~. \label{eulerprod} \end{eqnarray} The residues of the $\Gamma$-function at its poles are given by \begin{eqnarray} {\sf Res}[\Gamma(z)]_{z=-N}=\frac{(-1)^N}{N!}~,\quad N \in {\mathbb{N}}\cup 0~. \label{gammares} \end{eqnarray} In case of ${\sf Re}(z) > 0$, it can be expressed by Euler's integral \begin{eqnarray} \Gamma(z)=\int_0^{\infty} \exp(-t)t^{z-1}dt~, \end{eqnarray} from which one infers the well-known functional equation of the $\Gamma$-function \begin{eqnarray} \Gamma(z+1)=z\Gamma(z)~,\label{funcrelgam} \end{eqnarray} which is used for analytic continuation. Around $z=1$, the following series expansion is obtained \begin{eqnarray} \Gamma(1-\varepsilon) &=&\exp(\varepsilon \gamma_E) \exp\Biggl\{\sum_{i=2}^{\infty}\zeta_i\frac{\varepsilon^i}{i}\Biggr\}~, \label{gammaser}\\ |\varepsilon|&<&1~. \end{eqnarray} Here and in (\ref{eulerprod}), $\gamma_E$ denotes the Euler-Mascheroni constant, see Eq. (\ref{gammaesum}). In (\ref{gammaser}) Riemann's $\zeta$--function is given by \begin{eqnarray} \zeta_k=\sum_{i=1}^{\infty}\frac{1}{i^k}~,\quad 2\le k \in \mathbb{N}~.
\label{zeta} \end{eqnarray} A shorthand notation for rational functions of $\Gamma$--functions is \begin{eqnarray} \Gamma\Biggl[\frac[0pt]{a_1,...,a_i}{b_1,...,b_j}\Biggr]:= \frac{\Gamma(a_1)...\Gamma(a_i)}{\Gamma(b_1)...\Gamma(b_j)}~. \label{gammashort} \end{eqnarray} Functions closely related to the $\Gamma$-function are the function $\psi(x)$, the Beta-function $B(A,C)$ and the function $\beta(x)$. The Beta-function can be defined by Eq.~(\ref{gammashort}) \begin{eqnarray} B(A,C)=\Gamma\Biggl[\frac[0pt]{A,C}{A+C}\Biggr]~. \label{betafun1} \end{eqnarray} If ${\sf Re}(A),{\sf Re}(C) > 0 $, the following integral representation is valid \begin{eqnarray} B(A,C)=\int_0^1 dx~x^{A-1}(1-x)^{C-1}~. \label{betafun2} \end{eqnarray} For arbitrary values of $A$ and $C$, (\ref{betafun2}) can be continued analytically using Eqs. (\ref{eulerprod},~\ref{betafun1}). Its expansion around singularities can be performed via Eqs. (\ref{gammares},~\ref{gammaser}). The $\psi$-function and $\beta(x)$ are defined as derivatives of the $\Gamma$-function via \begin{eqnarray} \psi(x) &=& \frac{1}{\Gamma(x)} \frac{d}{dx} \Gamma(x)~. \label{psifun}\\ \beta(x) &=& \frac{1}{2} \left[ \psi\Bigl(\frac{x+1}{2}\Bigr) - \psi\Bigl(\frac{x}{2}\Bigr)\right]~. \label{smallbeta} \end{eqnarray} \subsection{The Generalized Hypergeometric Function} \label{App-SpeFunFPQ} The generalized hypergeometric function $\empty_{P}F_Q$ is defined by, cf. \cite{Slater,Bailey,*Roy:2001}, \begin{eqnarray} \empty_{P}F_Q\Biggl[\frac[0pt]{a_1,...,a_P} {b_1,...,b_Q} ;z\Biggr] =\sum_{i=0}^{\infty} \frac{(a_1)_i...(a_P)_i} {(b_1)_i...(b_Q)_i} \frac{z^i}{\Gamma(i+1)}~. \label{fpq} \end{eqnarray} Here $(c)_n$ is Pochhammer's symbol \begin{eqnarray} (c)_n=\frac{\Gamma(c+n)}{\Gamma(c)} \label{pochhammer}~, \end{eqnarray} for which the following relation holds \begin{eqnarray} (N+1)_{-i}&=&\frac{(-1)^i}{(-N)_i}~,~N\in~\mathbb{N}~. 
\label{reflect} \end{eqnarray} In (\ref{fpq}), there are $P$ numerator parameters $a_1...a_P$, $Q$ denominator parameters $b_1...b_Q$ and one variable $z$, all of which may be real or complex. Additionally, the denominator parameters must not be non-positive integers, since in that case (\ref{fpq}) is not defined. The generalized hypergeometric series $\empty_{P}F_Q$ are evaluated at a certain value of $z$, which in this thesis is always $z=1$ for the final integrals. \\ Gauss was the first to study functions of this kind, introducing the Gauss function $\empty_2F_1$, and proved the theorem, cf. \cite{Slater}, \begin{eqnarray} \empty_{2}F_1[a,b;c;1]= \Gamma\Biggl[\frac[0pt]{c,c-a-b}{c-a,c-b}\Biggr] \label{Gauss}~, \quad {\sf Re}(c-a-b)>0 \end{eqnarray} which is called Gauss' theorem. An integral representation for the Gauss function is given by, cf. \cite{Slater}, \begin{eqnarray} \empty_2F_1\Biggl[\frac[0pt]{a,b+1}{c+b+2};z\Biggr]= \Gamma\Biggl[\frac[0pt]{c+b+2}{c+1,b+1}\Biggr] \int_0^1 dx~x^{b}(1-x)^c(1-zx)^{-a}~,\label{pochint} \end{eqnarray} provided that the conditions \begin{eqnarray} |z|< 1~,\quad {\sf Re}(c+1),~{\sf Re}(b+1)~> 0~,\label{condpoch} \end{eqnarray} are obeyed. Applying Eq. (\ref{pochint}) recursively, one obtains the following integral representation for a general $\empty_{P+1}F_P$--function \begin{eqnarray} &&\empty_{P+1}F_P\Biggl[\frac[0pt]{a_0,a_1,\ldots ,a_P} {b_1,\ldots ,b_P} ;z\Biggr] = \Gamma\Biggl[\frac[0pt]{b_1,\ldots ,b_P} {a_1,\ldots ,a_P,b_1-a_1,\ldots ,b_P-a_P}\Biggr] \times \nonumber\\ &&~ \int_0^1dx_1\ldots \int_0^1dx_P x_1^{a_1-1}(1-x_1)^{b_1-a_1-1}\ldots x_P^{a_P-1}(1-x_P)^{b_P-a_P-1} (1-zx_1\ldots x_P)^{-a_0}~,\nonumber \\ \label{FPQint} \end{eqnarray} under similar conditions as in Eq. (\ref{condpoch}). \subsection{Mellin--Barnes Integrals} \label{App-SpeFunMB} For the Gauss function, there exists a representation in terms of a complex contour integral over $\Gamma$-functions. It is given by, cf.
\cite{Slater}, \begin{eqnarray} _2F_1\Biggl[\frac[0pt]{a,b}{c};z\Biggr]= \frac{\Gamma(c)}{2\pi i \Gamma(a)\Gamma(b)} \int_{-i\infty +\alpha}^{i\infty +\alpha} \frac{\Gamma(a+s)\Gamma(b+s)\Gamma(-s)}{\Gamma(c+s)}(-z)^s ds~, \label{intrepgauss} \end{eqnarray} under the conditions \begin{eqnarray} |z| < 1~,\quad |\arg(-z)| < \pi~. \label{comcon} \end{eqnarray} (\ref{intrepgauss}) only holds if one chooses the integration contour in the complex plane and the positive constant $\alpha$ in such a way that the poles of the $\Gamma$-functions containing $(+s)$ are separated from those arising from the $\Gamma$-functions containing $(-s)$ and closes the contour to the right. \\ Setting $b=1,~c=1$ in (\ref{intrepgauss}) one obtains \begin{eqnarray} _1F_0[a;z]=\frac{1}{(1-z)^a}~, \end{eqnarray} which yields the Mellin-Barnes transformation, cf. \cite{MB1a,*MB1b,*MB2,Paris:2001,Smirnov:2004ym}, \begin{eqnarray} \frac{1}{(X+Y)^{\lambda}}=\frac{1}{2\pi i\Gamma(\lambda)} \int_{-i\infty+\alpha}^{+i\infty+\alpha} ds \Gamma(\lambda + s) \Gamma(-s) \frac{Y^s}{X^{\lambda +s}}~. \label{mbtrafo} \end{eqnarray} The contour has to be chosen as in (\ref{intrepgauss}) and the conditions $0 < \alpha < {\sf Re}(\lambda)$~, $|\arg(Y/X)|< \pi$ have to be fulfilled. \subsection{Harmonic Sums and Nielsen--Integrals} \label{App-SpeFunHarm} Expanding the $\Gamma$--function in $\varepsilon$, its logarithmic derivatives, the $\psi^{(k)}$-functions, emerge. In many applications of perturbative QCD and QED, harmonic sums occur, cf. \cite{Blumlein:1998if,Vermaseren:1998uu}, which can be considered as generalization of the $\psi$-function and the $\beta$-function. 
These are defined by \begin{eqnarray} S_{a_1, \ldots, a_m}(N)&=& \sum_{n_1=1}^N \sum_{n_2=1}^{n_1} \ldots \sum_{n_m=1}^{n_{m-1}} \frac{({\rm sign}(a_1))^{n_1}}{n_1^{|a_1|}} \frac{({\rm sign}(a_2))^{n_2}}{n_2^{|a_2|}} \ldots \frac{({\rm sign}(a_m))^{n_m}}{n_m^{|a_m|}}~, \nonumber\\ && N~\in~{\mathbb{N}},~\forall~l~a_l~\in {\mathbb{Z}}\setminus 0~, \label{harmdef} \\ S_{\emptyset}&=&1~. \end{eqnarray} We adopt the convention \begin{eqnarray} S_{a_1, \ldots ,a_m} \equiv S_{a_1, \ldots ,a_m}(N)~, \end{eqnarray} i.e. harmonic sums are taken at argument $(N)$, if no argument is indicated. Related quantities are the $Z$--sums defined by \begin{eqnarray} \label{SZSums} Z_{m_1, \ldots, m_k}(N) &= \sum_{N \geq i_1 > i_2 \ldots > i_k > 0} \displaystyle{\frac {\prod_{l=1}^k [{\rm sign}(m_l)]^{i_l}}{i_l^{|m_l|}}}~. \end{eqnarray} The depth $d$ and the weight $w$ of a harmonic sum are given by \begin{eqnarray} d&:=&m~,\label{depth} \\ w&:=&\sum_{i=1}^m |a_i| ~.\label{weight} \end{eqnarray} Harmonic sums of depth $d=1$ are referred to as single harmonic sums. The complete set of algebraic relations connecting harmonic sums to other harmonic sums of the same or lower weight is known~\cite{Blumlein:2003gb}, see also \cite{Vermaseren:1998uu} for an implementation in ${\sf FORM}$. Thus the number of independent harmonic sums can be reduced significantly, e.g., for $w=3$ the $18$ possible harmonic sums can be expressed algebraically in terms of $8$ basic harmonic sums only. One introduces a product for the harmonic sums, the shuffle product \SH, cf.~\cite{Blumlein:2003gb}. For the product of a single and a general finite harmonic sum it is given by \begin{eqnarray} S_{a_1}(N) \SH S_{b_1, \ldots, b_m}(N) = S_{a_1, b_1, \ldots, b_m}(N) + S_{b_1, a_1, b_2, \ldots, b_m}(N) + \ldots + S_{b_1, b_2, \ldots, b_m, a_1}(N)~. 
\nonumber\\ \label{shuffle} \end{eqnarray} For sums $S_{a_1, \ldots, a_n}(N)$ and $S_{b_1, \ldots,b_m}(N)$ of arbitrary depth, the shuffle product is then the sum of all harmonic sums of depth $m+n$ in the index set of which $a_i$ occurs left of $a_j$ for $i < j$, likewise for $b_k$ and $b_l$ for $k < l$. Note that the shuffle product is symmetric. \\ One can show that the following relation holds, cf.~\cite{Blumlein:2003gb}, \begin{eqnarray} S_{a_1}(N) \cdot S_{b_1, \ldots, b_m}(N) &=& S_{a_1}(N) \SH S_{b_1, \ldots, b_m}(N) \nonumber\\ & & -S_{a_1 \wedge b_1, b_2, \ldots, b_m}(N) - \ldots - S_{b_1, b_2, \ldots, a_1 \wedge b_m}(N)~, \label{genshuff} \end{eqnarray} where the $\wedge$ symbol is defined as \begin{eqnarray} a \wedge b = {\rm sign}(a) {\rm sign}(b) \left(|a| + |b|\right)~. \label{wedge} \end{eqnarray} Due to the additional terms containing wedges ($\wedge$) between indices, harmonic sums form a quasi--shuffle algebra, \cite{Hoffman:1997,*Hoffman:2004bf}. By summing (\ref{genshuff}) over permutations, one obtains the symmetric algebraic relations between harmonic sums. At depth $2$ and $3$ these read,~\cite{Blumlein:1998if}, \begin{eqnarray} S_{m,n} + S_{n,m} &=& S_m S_n + S_{m \wedge n}~, \label{algrel1}\\ \sum_{{\rm perm}\{l,m,n\}} S_{l,m,n} &=& S_l S_m S_n + \sum_{{\rm inv~perm}\{l,m,n\}} S_{l} S_{m \wedge n} + 2~S_{l \wedge m \wedge n}~, \label{algrel2} \end{eqnarray} \normalsize which we used extensively to simplify our expressions. In (\ref{algrel1},~\ref{algrel2}), ``{\sf perm}'' denotes all permutations and ``{\sf inv perm}'' invariant ones. The limit $N \rightarrow \infty$ of finite harmonic sums exists only if $a_1\neq 1$ in (\ref{harmdef}). Additionally, one defines the $\sigma$-values symbolically as \begin{eqnarray} \sigma_{a_1, \ldots, a_l} = \lim_{N \rightarrow \infty} S_{a_1, \ldots, a_l}(N)~.
\label{sigmaval} \end{eqnarray} The finite $\sigma$-values are related to multiple $\zeta$-values,~\cite{Blumlein:1998if,Vermaseren:1998uu,Euler:1775,Zagier:1994,Borwein:1999js}, Eq.~(\ref{zeta}). Further we define the symbol \begin{eqnarray} \sigma_0:=\sum_{i=1}^{\infty}1~. \end{eqnarray} It is useful to include these $\sigma$-values into the algebra, since they allow one to treat parts of sums individually, accounting for the respective divergences, cf. also \cite{Vermaseren:1998uu}. These divergent pieces cancel in the end if the overall sum is finite. The relation of single harmonic sums with positive or negative indices to the $\psi^{(k)}$--functions is then given by \begin{eqnarray} S_1(N) &=&\psi(N+1)+\gamma_E~,\label{s1psi} \\ S_a(N) &=&\frac{(-1)^{a-1}}{\Gamma(a)}\psi^{(a-1)}(N+1) +\zeta_a~,\quad a \ge 2~,\label{sapsi} \\ S_{-1}(N)&=&(-1)^N\beta(N+1)-\ln(2)~,\label{sm1beta}\\ S_{-a}(N)&=&-\frac{(-1)^{N+a}}{\Gamma(a)}\beta^{(a-1)}(N+1)- \left(1-2^{1-a}\right)\zeta_a~,\quad a \ge 2~. \label{smabeta} \end{eqnarray} Thus single harmonic sums can be analytically continued to complex values of $N$ by these relations. At higher depths, harmonic sums can be expressed in terms of Mellin--transforms of polylogarithms and the more general Nielsen-integrals,~\cite{Nielsen:1909,Kolbig:1983qt,Devoto:1983tc}. The latter are defined by \begin{eqnarray} {\rm S}_{n,p}(z) = \frac{(-1)^{n+p-1}}{(n-1)! p!} \int_0^1 \frac{dx}{x} \log^{n-1}(x) \log^p(1-zx)~ \label{nielsenint} \end{eqnarray} and fulfill the relation \begin{eqnarray} \frac{d \mbox{S}_{n,p}(x)}{d \log(x)} = \mbox{S}_{n-1,p}(x)~. \end{eqnarray} If $p=1$, one obtains the polylogarithms \begin{eqnarray} \mbox{Li}_n(x) = \mbox{S}_{n-1,1}(x)~, \end{eqnarray} where \begin{eqnarray} \mbox{Li}_0(x) = \frac{x}{1-x}~. \end{eqnarray} These functions do not suffice for arbitrary harmonic sums; in the general case the harmonic polylogarithms have to be considered, \cite{Remiddi:1999ew}. The latter functions obey a direct shuffle algebra, cf.
\cite{Borwein:1999js,Blumlein:2003gb}. The representation in terms of Mellin--transforms then allows an analytic continuation of arbitrary harmonic sums to complex $N$, cf. \cite{Carlson:thesis,*Titchmarsh:1939,Blumlein:2000hw,*Blumlein:2005jg}. Equivalently, one may express harmonic sums by factorial series, \cite{Nielsen:1906,Knopp:1947,*Landau:1906}, up to polynomials of $S_1(N)$ and harmonic sums of lower degree, and use this representation for the analytic continuation to $N \in \mathbb{C}$, cf. \cite{GonzalezArroyo:1979df,Blumlein:2009ta}. \newpage \section{\bf \boldmath Finite and Infinite Sums} \label{App-Sums} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0} \vspace{1mm}\noindent In this appendix, we list some examples of infinite sums that were needed in the present analysis and have been newly calculated. The calculation was done using the ${\sf Sigma}$--package as explained in Section \ref{SubSec-2LInfSum}. A complete set of sums contributing to the calculation of the 2--loop massive OMEs can be found in Appendix B of Refs. \cite{Bierenbaum:2007qe,Bierenbaum:2008yu}. \begin{eqnarray} \sum_{i=1}^{\infty} \frac{B(N-2,i)}{(i+N)^3} &=& (-1)^N \frac{ 4S_{1,-2} +2S_{-3} +2\zeta_2S_1 +2\zeta_3 -6S_{-2} -3\zeta_2 } {(N-2)(N-1)N} \nonumber\\ && +\frac{1}{(N-2)(N-1)N^2}~. \label{Beta2} \end{eqnarray} \begin{eqnarray} \sum_{i=1}^{\infty} \frac{B(N-2,i)}{(i+N)^2}S_1(i+N-2) &=& \frac{(-1)^{N+1}}{(N-2)(N-1)N} \Bigl( 8S_{1,-2} -4S_{-3} -4S_1S_{-2} -2\zeta_3 \nonumber\\ && +2\zeta_2S_1 -10S_{-2} -5\zeta_2 \Bigr) +\frac{N^2-3N+3}{(N-2)(N-1)^2N^2}S_1 \nonumber\\ && -\frac{N^3-5N+3}{(N-2)(N-1)^3N^3}~.
\end{eqnarray} \begin{eqnarray} \sum_{i=1}^{\infty}\frac{B(N,i)}{i+N+2}S_1(i)S_1(N+i) &=& \frac{(-1)^N}{N(N+1)(N+2)} \Bigl( 4S_{-2,1} -6S_{-3} -4S_{-2}S_1 -2\zeta_3 \nonumber\\ && -2\zeta_2S_1 -2\frac{\zeta_2}{(N+1)} -4\frac{S_{-2}}{(N+1)} \Bigr) \nonumber\\ && +\frac{ -2S_3 -S_1 S_2 +\zeta_2 S_1 +2 \zeta_3 }{N+2} \nonumber\\ && +\frac{2+7N+7N^2+5N^3+N^4} {N^3(N+1)^3(N+2)}S_1 \nonumber\\ && +2\frac{2+7N+9N^2+4N^3+N^4} {N^4(N+1)^3(N+2)} ~. \label{Beta25} \end{eqnarray} \begin{eqnarray} \sum_{i=1}^{\infty}\frac{S_{1}(i+N)S^2_1(i)}{i+N} &=& \frac{\sigma^4_1}{4} -\frac{3\zeta^2_2}{4} +\Bigl(\frac{2}{N} -2S_1\Bigr)\zeta_3 +\Bigl(\frac{S_1}{N} -\frac{S^2_1}{2} -\frac{S_2}{2}\Bigr)\zeta_2 +\frac{S^3_1}{N} \nonumber\\ && -\frac{S^4_1}{4} +S^2_1\Bigl( -\frac{1}{N^2} -\frac{3S_2}{2} \Bigr) -\frac{S_2}{N^2} -\frac{S^2_2}{4} -\frac{S_{2,1}}{N} \nonumber\\ && +S_1\Bigl( 3\frac{S_2}{N} +S_{2,1} -2S_3 \Bigr) +2\frac{S_3}{N} +S_{3,1} -S_4 ~. \label{Harm58} \end{eqnarray} \begin{eqnarray} \sum_{i=1}^{\infty}\Bigl(S_1(i+N)-S_1(i)\Bigr)^3 &=& -\frac{3}{2}S^2_1 -S^3_1 -\frac{1}{2}S_2 +3NS_{2,1} -NS_3 +N\zeta_3 ~. \label{Harm37} \end{eqnarray} \begin{eqnarray} \sum_{k=1}^{\infty}\frac{B(k+\varepsilon/2,N+1)}{N+k} &=& (-1)^N\Bigl[2S_{-2}+\zeta_2\Bigr] \nonumber\\ &+&\frac{\varepsilon}{2}(-1)^N\Bigl[ -\zeta_3+\zeta_2S_1+2S_{1,-2}-2S_{-2,1} \Bigr] \nonumber\\ &+&\frac{\varepsilon^2}{4}(-1)^N\Biggl[ \frac{2}{5}\zeta_2^2-\zeta_3S_1+\zeta_2S_{1,1} \nonumber\\ && +2\Bigl\{S_{1,1,-2}+S_{-2,1,1}-S_{1,-2,1}\Bigr\} \Biggr] \nonumber\\ &+& \varepsilon^3(-1)^N\Biggl[ -\frac{\zeta_5}{8}+\frac{S_1}{20}\zeta_2^2 -\frac{S_{1,1}}{8}\zeta_3 +\frac{S_{1,1,1}}{8}\zeta_2\nonumber\\ &&+\frac{S_{1,-2,1,1}+S_{1,1,1,-2}-S_{-2,1,1,1} -S_{1,1,-2,1}}{4} \Biggr]\nonumber\\ &&+O(\varepsilon^4)~. 
\label{specialsum1} \end{eqnarray} An example of a double infinite sum we encountered is given by \begin{eqnarray} N~\sum_{i,j=1}^{\infty}\frac{S_1(i)S_1(i+j+N)}{i(i+j)(j+N)} &=& 4S_{2,1,1} -2S_{3,1} +S_1\Bigl( -3S_{2,1}+\frac{4S_3}{3} \Bigr) -\frac{S_4}{2} \nonumber\\ && -S^2_2 +S^2_1S_2 +\frac{S^4_1}{6} +6S_1\zeta_3 +\zeta_2\Bigl( 2S^2_1+S_2 \Bigr)~. \label{DoubleSum1} \end{eqnarray} A detailed description of the method to calculate this sum can be found in Appendix B of Ref. \cite{Bierenbaum:2008yu}. \newpage \section{\bf \boldmath Moments of the Fermionic Contributions to the \\ $3$--Loop Anomalous Dimensions} \label{App-AnDim} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0} The pole terms of the unrenormalized OMEs in our calculation agree with the general structure we presented in Eqs. (\ref{Ahhhqq3NSQ}, \ref{AhhhQq3PS}, \ref{Ahhhqq3PSQ}, \ref{AhhhQg3}, \ref{Ahhhqg3Q}, \ref{AhhhgqQ3}, \ref{Ahhhgg3Q}). Using the lower order renormalization coefficients and the constant terms of the $2$--loop results, Eqs. (\ref{aQg2}, \ref{aQq2PS}, \ref{aqq2NSQ}, \ref{agg2Q}, \ref{agq2Q}), one can determine the $T_F$--terms of the $3$--loop anomalous dimensions for fixed values of $N$. All our results agree with the results of Refs. \cite{Gracey:1993nn,Larin:1996wd,Retey:2000nq,Moch:2002sn,Moch:2004pa,Vogt:2004mw}. Note that in this way we obtain the complete expressions for the terms $\gamma_{qg}^{(2)}$ and $\gamma_{qq}^{(2), {\sf PS}}$, since they always involve an overall factor $T_F$.
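For orientation, the lowest moment $\hat{\gamma}_{qg}^{(2)}(2)$ given below can be evaluated for the QCD color factors $C_A=3$, $C_F=4/3$, $T_F=1/2$. A minimal sketch in Python, assembling the rational and $\zeta_3$--pieces with exact rational arithmetic; the choice $n_f=3$ is purely illustrative, and the quoted numerical value is our own evaluation:

```python
from fractions import Fraction as F

# standard SU(3) color factors; nf = 3 is an illustrative choice
CA, CF, TF = F(3), F(4, 3), F(1, 2)
nf = 3
zeta3 = 1.2020569031595943  # zeta(3)

# hat{gamma}_qg^(2)(2): rational piece and coefficient of zeta_3
rational = TF * ((1 + 2 * nf) * TF * (F(8464, 243) * CA - F(1384, 243) * CF)
                 - F(7178, 81) * CA**2 + F(556, 9) * CA * CF
                 - F(8620, 243) * CF**2)
zeta3_coeff = TF * F(1, 3) * (-416 * CA * CF + 288 * CA**2 + 128 * CF**2)

value = float(rational) + float(zeta3_coeff) * zeta3
print(rational, zeta3_coeff)  # -300011/2187 5200/27
print(value)                  # ~ 94.33
```

The same pattern applies to all moments listed in this appendix; only the rational coefficients change.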
For them we obtain \vspace*{2mm}\noindent \underline{$(i)$~~~\large $\hat{\gamma}_{qg}^{(2)}$}~: \begin{eqnarray} \hat{\gamma}_{qg}^{(2)}(2)&=&T_F\Biggl[ (1+2n_f)T_F \Bigl( \frac{8464}{243}C_A -\frac{1384}{243}C_F \Bigr) +\frac{\zeta_3}{3} \Bigl( -416{C_AC_F} +288{C_A^2} \nonumber\\ \nonumber\\ && \hspace{-15mm} +128{C_F^2} \Bigr) -\frac{7178}{81}{C_A^2} +\frac{556}{9}{C_AC_F} -\frac{8620}{243}{C_F^2}~\Biggr]~, \\ \nonumber \\ \hat{\gamma}_{qg}^{(2)}(4)&=&T_F\Biggl[ (1+2n_f)T_F \Bigl( \frac{4481539}{303750}C_A +\frac{9613841}{3037500}C_F \Bigr) +\frac{\zeta_3}{25} \Bigl( 2832{C_A^2} -3876{C_AC_F} \nonumber\\ \nonumber\\ && \hspace{-15mm} +1044{C_F^2} \Bigr) -\frac{295110931}{3037500}{C_A^2} +\frac{278546497}{2025000}{C_AC_F} -\frac{757117001}{12150000}{C_F^2}~\Biggr]~, \\ \nonumber \\ \hat{\gamma}_{qg}^{(2)}(6)&=&T_F\Biggl[ (1+2n_f)T_F \Bigl( \frac{86617163}{11668860}C_A +\frac{1539874183}{340341750}C_F \Bigr) +\frac{\zeta_3}{735} \Bigl( 69864{C_A^2} \nonumber\\ \nonumber\\ && \hspace{-15mm} -94664{C_AC_F} +24800{C_F^2} \Bigr) -\frac{58595443051}{653456160}{C_A^2} +\frac{1199181909343}{8168202000}{C_AC_F} \nonumber\\ \nonumber\\ && \hspace{-15mm} -\frac{2933980223981}{40841010000}{C_F^2}~\Biggr]~, \\ \nonumber \\ \hat{\gamma}_{qg}^{(2)}(8)&=&T_F\Biggl[ (1+2n_f)T_F \Bigl( \frac{10379424541}{2755620000}C_A +\frac{7903297846481}{1620304560000}C_F \Bigr) \nonumber\\ \nonumber\\ && \hspace{-15mm} +\zeta_3 \Bigl( \frac{128042}{1575}{C_A^2} -\frac{515201}{4725}{C_AC_F} +\frac{749}{27}{C_F^2} \Bigr) -\frac{24648658224523}{289340100000}{C_A^2} \nonumber\\ \nonumber\\ && \hspace{-15mm} +\frac{4896295442015177}{32406091200000}{C_AC_F} -\frac{4374484944665803}{56710659600000}{C_F^2}~\Biggr]~, \\ \nonumber \\ \hat{\gamma}_{qg}^{(2)}(10)&=&T_F\Biggl[ (1+2n_f)T_F \Bigl( \frac{1669885489}{988267500}C_A +\frac{1584713325754369}{323600780868750}C_F \Bigr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +\zeta_3 \Bigl( \frac{1935952}{27225}{C_A^2} -\frac{2573584}{27225}{C_AC_F} 
+\frac{70848}{3025}{C_F^2} \Bigr) -\frac{21025430857658971}{255684567600000}{C_A^2} \nonumber \end{eqnarray} \begin{eqnarray} && \hspace{-15mm} +\frac{926990216580622991}{6040547909550000}{C_AC_F} -\frac{1091980048536213833}{13591232796487500}{C_F^2}~\Biggr]~. \end{eqnarray} \vspace*{2mm}\noindent \underline{$(ii)$~~~\large $\hat{\gamma}_{qq}^{(2), {\sf PS}}$}~: \begin{eqnarray} \hat{\gamma}_{qq}^{(2),{\sf PS}}(2)&=&T_FC_F\Biggl[ -(1+2n_f)T_F\frac{5024}{243} +\frac{256}{3}\Bigl(C_F-C_A\Bigl)\zeta_3 +\frac{10136}{243}C_A \nonumber\\ \nonumber\\ && \hspace{-15mm} -\frac{14728}{243}C_F~\Biggr]~, \\ \nonumber \\ \hat{\gamma}_{qq}^{(2),{\sf PS}}(4)&=&T_FC_F\Biggl[ -(1+2n_f)T_F\frac{618673}{151875} +\frac{968}{75}\Bigl(C_F-C_A\Bigl)\zeta_3 +\frac{2485097}{506250}C_A \nonumber\\ \nonumber\\ && \hspace{-15mm} -\frac{2217031}{675000}C_F~\Biggr]~, \\ \nonumber \\ \hat{\gamma}_{qq}^{(2),{\sf PS}}(6)&=&T_FC_F\Biggl[ -(1+2n_f)T_F\frac{126223052}{72930375} +\frac{3872}{735}\Bigl(C_F-C_A\Bigl)\zeta_3 \nonumber\\ \nonumber\\ && \hspace{-15mm} +\frac{1988624681}{4084101000}C_A +\frac{11602048711}{10210252500}C_F~\Biggr], \\ \nonumber \\ \hat{\gamma}_{qq}^{(2),{\sf PS}}(8)&=&T_FC_F\Biggl[ -(1+2n_f)T_F\frac{13131081443}{13502538000} +\frac{2738}{945}\Bigl(C_F-C_A\Bigl)\zeta_3 \nonumber\\ \nonumber\\ && \hspace{-15mm} -\frac{343248329803}{648121824000}C_A +\frac{39929737384469}{22684263840000}C_F~\Biggr]~, \\ \nonumber \\ \hat{\gamma}_{qq}^{(2),{\sf PS}}(10)&=&T_FC_F\Biggl[ -(1+2n_f)T_F\frac{265847305072}{420260754375} +\frac{50176}{27225}\Bigl(C_F-C_A\Bigl)\zeta_3 \nonumber\\ \nonumber\\ && \hspace{-15mm} -\frac{1028766412107043}{1294403123475000}C_A +\frac{839864254987192}{485401171303125}C_F~\Biggr]~, \\ \nonumber \\ \hat{\gamma}_{qq}^{(2),{\sf PS}}(12)&=&T_FC_F\Biggl[ -(1+2n_f)T_F\frac{2566080055386457}{5703275664286200} +\frac{49928}{39039}\Bigl(C_F-C_A\Bigl)\zeta_3 \nonumber\\ \nonumber\\ && \hspace{-15mm} -\frac{69697489543846494691}{83039693672007072000}C_A 
+\frac{86033255402443256197}{54806197823524667520}C_F~\Biggr]~. \end{eqnarray} For the remaining terms, only the projection onto the color factor $T_F$ can be obtained~: \vspace*{2mm}\noindent \underline{$(iii)$~~~\large $\hat{\gamma}_{qq}^{(2), {\sf NS,+}}$}~: \begin{eqnarray} \hat{\gamma}_{qq}^{(2),{\sf NS},+}(2)&=&T_FC_F\Biggl[ -(1+2n_f)T_F\frac{1792}{243} +\frac{256}{3}\Bigl(C_F-C_A\Bigl)\zeta_3 -\frac{12512}{243}C_A \nonumber \end{eqnarray} \begin{eqnarray} && \hspace{-15mm} -\frac{13648}{243}C_F~\Biggr]~, \\ \nonumber \\ \hat{\gamma}_{qq}^{(2),{\sf NS},+}(4)&=&T_FC_F\Biggl[ -(1+2n_f)T_F\frac{384277}{30375} +\frac{2512}{15}\Bigl(C_F-C_A\Bigl)\zeta_3 \nonumber\\ \nonumber\\ && \hspace{-15mm} -\frac{8802581}{121500}C_A -\frac{165237563}{1215000}C_F~\Biggr]~,\\ \nonumber \\ \hat{\gamma}_{qq}^{(2),{\sf NS},+}(6)&=&T_FC_F\Biggl[ -(1+2n_f)T_F\frac{160695142}{10418625} +\frac{22688}{105}\Bigl(C_F-C_A\Bigl)\zeta_3 \nonumber\\ \nonumber\\ && \hspace{-15mm} -\frac{13978373}{171500}C_A -\frac{44644018231}{243101250}C_F~\Biggr]~,\\ \nonumber \\ \hat{\gamma}_{qq}^{(2),{\sf NS},+}(8)&=&T_FC_F\Biggl[ -(1+2n_f)T_F\frac{38920977797}{2250423000} +\frac{79064}{315}\Bigl(C_F-C_A\Bigl)\zeta_3 \nonumber\\ \nonumber\\ && \hspace{-15mm} -\frac{1578915745223}{18003384000}C_A -\frac{91675209372043}{420078960000}C_F~\Biggr]~, \\ \nonumber \\ \hat{\gamma}_{qq}^{(2),{\sf NS},+}(10)&=&T_FC_F\Biggl[ -(1+2n_f)T_F\frac{27995901056887}{1497656506500} +\frac{192880}{693}\Bigl(C_F-C_A\Bigl)\zeta_3 \nonumber\\ \nonumber\\ && \hspace{-15mm} -\frac{9007773127403}{97250422500}C_A -\frac{75522073210471127}{307518802668000}C_F~\Biggr]~, \\ \nonumber \\ \hat{\gamma}_{qq}^{(2),{\sf NS},+}(12)&=&T_FC_F\Biggl[ -(1+2n_f)T_F\frac{65155853387858071}{3290351344780500} +\frac{13549568}{45045}\Bigl(C_F-C_A\Bigl)\zeta_3 \nonumber\\ \nonumber\\ && \hspace{-15mm} -\frac{25478252190337435009}{263228107582440000}C_A -\frac{35346062280941906036867}{131745667845011220000}C_F~\Biggr]~, \nonumber \\ \\ 
\hat{\gamma}_{qq}^{(2),{\sf NS},+}(14)&=&T_FC_F\Biggl[ -(1+2n_f)T_F\frac{68167166257767019}{3290351344780500} +\frac{2881936}{9009}\Bigl(C_F-C_A\Bigr)\zeta_3 \nonumber\\ && \hspace{-15mm} -\frac{92531316363319241549}{921298376538540000}C_A -\frac{37908544797975614512733}{131745667845011220000}C_F \Biggr]~.\nonumber\\ \end{eqnarray} \vspace*{2mm}\noindent \underline{$(iv)$~~~\large $\hat{\gamma}_{qq}^{(2), {\sf NS,-}}$}~: \begin{eqnarray} {\hat{\gamma}_{qq}^{(2),{\sf NS},-}(1)}&=& 0~, \\ \hat{\gamma}_{qq}^{(2),{\sf NS},-}(3)&=&T_FC_F\Biggl[ -(1+2n_f)T_F\frac{2569}{243} +\frac{400}{3}\Bigl(C_F-C_A\Bigl)\zeta_3 -\frac{62249}{972}C_A \nonumber \end{eqnarray} \begin{eqnarray} && \hspace{-15mm} -\frac{203627}{1944}C_F~\Biggr]~, \\ \nonumber \\ \hat{\gamma}_{qq}^{(2),{\sf NS},-}(5)&=&T_FC_F\Biggl[ -(1+2n_f)T_F\frac{431242}{30375} +\frac{2912}{15}\Bigl(C_F-C_A\Bigl)\zeta_3 \nonumber\\ \nonumber\\ && \hspace{-15mm} -\frac{38587}{500}C_A -\frac{5494973}{33750}C_F~\Biggr]~, \\ \nonumber \\ \hat{\gamma}_{qq}^{(2),{\sf NS},-}(7)&=&T_FC_F\Biggl[ -(1+2n_f)T_F\frac{1369936511}{83349000} +\frac{8216}{35}\Bigl(C_F-C_A\Bigl)\zeta_3 \nonumber\\ \nonumber\\ && \hspace{-15mm} -\frac{2257057261}{26671680}C_A -\frac{3150205788689}{15558480000}C_F~\Biggr]~,\\ \nonumber \\ \hat{\gamma}_{qq}^{(2),{\sf NS},-}(9)&=&T_FC_F\Biggl[ -(1+2n_f)T_F\frac{20297329837}{1125211500} +\frac{16720}{63}\Bigl(C_F-C_A\Bigl)\zeta_3 \nonumber\\ \nonumber\\ && \hspace{-15mm} -\frac{126810403414}{1406514375}C_A -\frac{1630263834317}{7001316000}C_F~\Biggr]~,\\ \nonumber \\ \hat{\gamma}_{qq}^{(2),{\sf NS},-}(11)&=&T_FC_F\Biggl[ -(1+2n_f)T_F\frac{28869611542843}{1497656506500} +\frac{1005056}{3465}\Bigl(C_F-C_A\Bigl)\zeta_3 \nonumber\\ \nonumber\\ && \hspace{-15mm} -\frac{1031510572686647}{10892047320000}C_A -\frac{1188145134622636787}{4612782040020000}C_F~\Biggr]~, \\ \nonumber \\ \hat{\gamma}_{qq}^{(2),{\sf NS},-}(13)&=&T_FC_F\Biggl[ -(1+2n_f)T_F\frac{66727681292862571}{3290351344780500} 
+\frac{13995728}{45045}\Bigl(C_F-C_A\Bigl)\zeta_3 \nonumber\\ \nonumber\\ && \hspace{-15mm} -\frac{90849626920977361109}{921298376538540000}C_A -\frac{36688336888519925613757}{131745667845011220000}C_F~\Biggr]~. \nonumber \\ \end{eqnarray} \vspace*{2mm}\noindent \underline{$(v)$~~~\large $\hat{\gamma}_{gg}^{(2)}$}~: \begin{eqnarray} \hat{\gamma}_{gg}^{(2)}(2)&=&T_F\Biggl[ (1+2n_f)T_F \Bigl( -\frac{8464}{243}C_A +\frac{1384}{243}C_F \Bigr) +\frac{\zeta_3}{3} \Bigl( -288{C_A^2} +416C_AC_F \nonumber \\ \nonumber \\ && \hspace{-15mm} -128{C_F^2} \Bigr) +\frac{7178}{81}{C_A^2} -\frac{556}{9}C_AC_F +\frac{8620}{243}{C_F^2}~\Biggr]~, \\ \nonumber \\ \hat{\gamma}_{gg}^{(2)}(4)&=&T_F\Biggl[ (1+2n_f)T_F \Bigl( -\frac{757861}{30375}C_A -\frac{979774}{151875}C_F \Bigr) +\frac{\zeta_3}{25} \Bigl( -6264{C_A^2} +6528C_AC_F \nonumber \end{eqnarray} \begin{eqnarray} && \hspace{-15mm} -264{C_F^2} \Bigr) +\frac{53797499}{607500}{C_A^2} -\frac{235535117}{1012500}C_AC_F +\frac{2557151}{759375}{C_F^2}~\Biggr]~,\\ \nonumber \\ \hat{\gamma}_{gg}^{(2)}(6)&=&T_F\Biggl[ (1+2n_f)T_F \Bigl( -\frac{52781896}{2083725}C_A -\frac{560828662}{72930375}C_F \Bigr) +\zeta_3 \Bigl( -\frac{75168}{245}{C_A^2} \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{229024}{735}C_AC_F -\frac{704}{147}{C_F^2} \Bigr) +\frac{9763460989}{116688600}{C_A^2} -\frac{9691228129}{32672808}C_AC_F \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{11024749151}{10210252500}{C_F^2}~\Biggr]~, \\ \nonumber \\ \hat{\gamma}_{gg}^{(2)}(8)&=&T_F\Biggl[ (1+2n_f)T_F \Bigl( -\frac{420970849}{16074450}C_A -\frac{6990254812}{843908625}C_F \Bigr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +\zeta_3 \Bigl( -\frac{325174}{945}{C_A^2} +\frac{327764}{945}C_AC_F -\frac{74}{27}{C_F^2} \Bigr) +\frac{2080130771161}{25719120000}{C_A^2} \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{220111823810087}{648121824000}C_AC_F -\frac{14058417959723}{5671065960000}{C_F^2}~\Biggr]~, \\ \nonumber \\ \hat{\gamma}_{gg}^{(2)}(10)&=&T_F\Biggl[ (1+2n_f)T_F 
\Bigl( -\frac{2752314359}{101881395}C_A -\frac{3631303571944}{420260754375}C_F \Bigr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +\zeta_3 \Bigl( -\frac{70985968}{190575}{C_A^2} +\frac{71324656}{190575}C_AC_F -\frac{5376}{3025}{C_F^2} \Bigr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{43228502203851731}{549140719050000}{C_A^2} -\frac{3374081335517123191}{9060821864325000}C_FC_A \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{3009386129483453}{970802342606250}{C_F^2}~\Biggr]~. \end{eqnarray} \vspace*{2mm}\noindent \underline{$(vi)$~~~\large $\hat{\gamma}_{gq}^{(2)}$}~: \begin{eqnarray} \hat{\gamma}_{gq}^{(2)}(2)&=&T_FC_F\Biggl[ (1+2n_f)T_F\frac{2272}{81} +\frac{512}{3}\Bigl(C_A-C_F\Bigl)\zeta_3 +\frac{88}{9}C_A +\frac{28376}{243}C_F~\Biggr]~, \nonumber \\ \\ \hat{\gamma}_{gq}^{(2)}(4)&=&T_FC_F\Biggl[ (1+2n_f)T_F\frac{109462}{10125} +\frac{704}{15}\Bigl(C_A-C_F\Bigl)\zeta_3 -\frac{799}{12150}C_A \nonumber\\ \nonumber\\ && \hspace{-15mm} +\frac{14606684}{759375}C_F~\Biggr]~,\\ \nonumber \\ \hat{\gamma}_{gq}^{(2)}(6)&=&T_FC_F\Biggl[ (1+2n_f)T_F\frac{22667672}{3472875} +\frac{2816}{105}\Bigl(C_A-C_F\Bigl)\zeta_3 -\frac{253841107}{145860750}C_A \nonumber \end{eqnarray} \begin{eqnarray} && \hspace{-15mm} +\frac{20157323311}{2552563125}C_F~\Biggr]~,\\ \nonumber \\ \hat{\gamma}_{gq}^{(2)}(8)&=&T_FC_F\Biggl[ (1+2n_f)T_F\frac{339184373}{75014100} +\frac{1184}{63}\Bigl(C_A-C_F\Bigl)\zeta_3 \nonumber\\ \nonumber\\ && \hspace{-15mm} -\frac{3105820553}{1687817250}C_A +\frac{8498139408671}{2268426384000}C_F~\Biggr]~, \\ \nonumber \\ \hat{\gamma}_{gq}^{(2)}(10)&=&T_FC_F\Biggl[ (1+2n_f)T_F\frac{1218139408}{363862125} +\frac{7168}{495}\Bigl(C_A-C_F\Bigl)\zeta_3 \nonumber\\ \nonumber\\ && \hspace{-15mm} -\frac{18846629176433}{11767301122500}C_A +\frac{529979902254031}{323600780868750}C_F~\Biggr]~, \\ \nonumber \\ \hat{\gamma}_{gq}^{(2)}(12)&=&T_FC_F\Biggl[ (1+2n_f)T_F\frac{13454024393417}{5222779912350} +\frac{5056}{429}\Bigl(C_A-C_F\Bigl)\zeta_3 \nonumber\\ \nonumber\\ && 
\hspace{-15mm} -\frac{64190493078139789}{48885219979596000}C_A +\frac{1401404001326440151}{3495293228541114000}C_F~\Biggr]~, \\ \nonumber \\ \hat{\gamma}_{gq}^{(2)}(14)&=&T_FC_F\Biggl[ (1+2n_f)T_F\frac{19285002274}{9495963477} +\frac{13568}{1365}\Bigl(C_A-C_F\Bigr) \zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{37115284124613269}{35434552943790000}C_A -\frac{40163401444446690479}{104797690331258925000}C_F \Biggr]~. \end{eqnarray} \newpage \section{\bf \boldmath The $O(\varepsilon^0)$ Contributions to $\hat{\hspace*{-1mm}\hat{A}}_{ij}^{(3)}$} \label{App-OMEs} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0} Finally, we present all moments we calculated. We only give the constant term in $\varepsilon$ of the unrenormalized result, cf. Eqs. (\ref{Ahhhqq3NSQ}, \ref{AhhhQq3PS}, \ref{Ahhhqq3PSQ}, \ref{AhhhQg3}, \ref{Ahhhqg3Q}, \ref{AhhhgqQ3}, \ref{Ahhhgg3Q}). These terms have to be inserted into the general results on the renormalized level, cf. Eqs. (\ref{Aqq3NSQMSren}, \ref{AQq3PSMSren}, \ref{Aqq3PSQMSren}, \ref{AQg3MSren}, \ref{Aqg3QMSren}, \ref{Agq3QMSren}, \ref{Agg3QMSren}). 
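The constant ${\sf B_4}$ entering the expressions below is, in calculations of this type, commonly the weight--4 constant ${\sf B_4} = -4\zeta_2\ln^2(2) + \frac{2}{3}\ln^4(2) - \frac{13}{2}\zeta_4 + 16\,{\rm Li}_4(1/2)$; assuming this identification, its numerical value can be obtained e.g. with the {\tt mpmath} library:

```python
import mpmath as mp

mp.mp.dps = 25
ln2 = mp.log(2)
# B_4 as commonly defined in massive multi-loop calculations (assumed here)
B4 = (-4 * mp.zeta(2) * ln2**2 + mp.mpf(2) / 3 * ln2**4
      - mp.mpf(13) / 2 * mp.zeta(4) + 16 * mp.polylog(4, mp.mpf(1) / 2))
print(B4)  # ~ -1.7628001
```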
We obtain \vspace*{2mm}\noindent \underline{$(i)$~\large $a_{Qq}^{(3), \sf PS}$}~: \begin{eqnarray} a_{Qq}^{(3), {\sf PS}}(2)&=& T_FC_FC_A \Biggl( \frac{117290}{2187} +\frac{64}{9}{\sf B_4}-64\zeta_4 +\frac{1456}{27}\zeta_3 +\frac{224}{81}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_F^2 \Biggl( \frac{42458}{243} -\frac{128}{9}{\sf B_4}+64\zeta_4 -\frac{9664}{81}\zeta_3 +\frac{704}{27}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_F^2C_F \Biggl( -\frac{36880}{2187} -\frac{4096}{81}\zeta_3 -\frac{736}{81}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( -\frac{76408}{2187} +\frac{896}{81}\zeta_3 -\frac{112}{81}\zeta_2 \Biggr)~, \\ a_{Qq}^{(3), {\sf PS}}(4)&=& T_FC_FC_A \Biggl( \frac{23115644813}{1458000000} +\frac{242}{225}{\sf B_4} -\frac{242}{25}\zeta_4 +\frac{1403}{180}\zeta_3 +\frac{283481}{270000}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_F^2 \Biggl( -\frac{181635821459}{8748000000} -\frac{484}{225}{\sf B_4} +\frac{242}{25}\zeta_4 +\frac{577729}{40500}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{4587077}{1620000}\zeta_2 \Biggr) +T_F^2C_F \Biggl( -\frac{2879939}{5467500} -\frac{15488}{2025}\zeta_3 -\frac{1118}{2025}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( -\frac{474827503}{109350000} +\frac{3388}{2025}\zeta_3 -\frac{851}{20250}\zeta_2 \Biggr)~, \\ a_{Qq}^{(3), {\sf PS}}(6)&=& T_FC_FC_A \Biggl( \frac{111932846538053}{10291934520000} +\frac{968}{2205}{\sf B_4} -\frac{968}{245}\zeta_4 +\frac{2451517}{1852200}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{5638039}{7779240}\zeta_2 \Biggr) +T_FC_F^2 \Biggl( -\frac{238736626635539}{5145967260000} -\frac{1936}{2205}{\sf B_4} +\frac{968}{245}\zeta_4 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{19628197}{555660}\zeta_3 +\frac{8325229}{10804500}\zeta_2 \Biggr) +T_F^2C_F \Biggl( \frac{146092097}{1093955625} -\frac{61952}{19845}\zeta_3 \nonumber \\ \nonumber \\ && 
\hspace{-15mm} -\frac{7592}{99225}\zeta_2 \Biggr) +n_fT_F^2C_F \Biggl( -\frac{82616977}{45378900} +\frac{1936}{2835}\zeta_3 -\frac{16778}{694575}\zeta_2 \Biggr)~, \\ a_{Qq}^{(3), {\sf PS}}(8)&=& T_FC_FC_A \Biggl( \frac{314805694173451777}{32665339929600000} +\frac{1369}{5670}{\sf B_4} -\frac{1369}{630}\zeta_4\nonumber \end{eqnarray} \begin{eqnarray} && \hspace{-15mm} -\frac{202221853}{137168640}\zeta_3 +\frac{1888099001}{3429216000}\zeta_2 \Biggr) +T_FC_F^2 \Biggl( -\frac{25652839216168097959}{457314759014400000} \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{1369}{2835}{\sf B_4} +\frac{1369}{630}\zeta_4 +\frac{2154827491}{48988800}\zeta_3 +\frac{12144008761}{48009024000}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_F^2C_F \Biggl( \frac{48402207241}{272211166080} -\frac{43808}{25515}\zeta_3 +\frac{1229}{142884}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( -\frac{16194572439593}{15122842560000} +\frac{1369}{3645}\zeta_3 -\frac{343781}{14288400}\zeta_2 \Biggr)~, \\ a_{Qq}^{(3), {\sf PS}}(10)&=& T_FC_FC_A \Biggl( \frac{989015303211567766373}{107642563748181000000} +\frac{12544}{81675}{\sf B_4} -\frac{12544}{9075}\zeta_4 \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{1305489421}{431244000}\zeta_3 +\frac{2903694979}{6670805625}\zeta_2 \Biggr) +T_FC_F^2 \Biggl( -\frac{4936013830140976263563}{80731922811135750000} \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{25088}{81675}{\sf B_4} +\frac{12544}{9075}\zeta_4 +\frac{94499430133}{1940598000}\zeta_3 +\frac{282148432}{4002483375}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_F^2C_F \Biggl( \frac{430570223624411}{2780024890190625} -\frac{802816}{735075}\zeta_3 +\frac{319072}{11026125}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( -\frac{454721266324013}{624087220246875} +\frac{175616}{735075}\zeta_3 -\frac{547424}{24257475}\zeta_2 \Biggr)~, \\ a_{Qq}^{(3), {\sf PS}}(12)&=& T_FC_FC_A \Biggl( 
\frac{968307050156826905398206547}{107727062441920086477312000} +\frac{12482}{117117}{\sf B_4} -\frac{12482}{13013}\zeta_4 \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{64839185833913}{16206444334080}\zeta_3 +\frac{489403711559293}{1382612282251200}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_F^2 \Biggl( -\frac{190211298439834685159055148289}{2962494217152802378126080000} -\frac{24964}{117117}{\sf B_4} +\frac{12482}{13013}\zeta_4 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{418408135384633}{8103222167040}\zeta_3 -\frac{72904483229177}{15208735104763200}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_F^2C_F \Biggl( \frac{1727596215111011341}{13550982978344011200} -\frac{798848}{1054053}\zeta_3 +\frac{11471393}{347837490}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( -\frac{6621557709293056160177}{12331394510293050192000} +\frac{24964}{150579}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{1291174013}{63306423180}\zeta_2 \Biggr)~. 
\end{eqnarray} \vspace*{2mm}\noindent \underline{$(ii)$~\large $a_{qq,Q}^{(3), \sf PS}$}~: \begin{eqnarray} a_{qq,Q}^{(3), {\sf PS}}(2)&=& n_fT_F^2C_F \Biggl( -\frac{100096}{2187} +\frac{896}{81}\zeta_3 -\frac{256}{81}\zeta_2 \Biggr)~, \\ a_{qq,Q}^{(3), {\sf PS}}(4)&=& n_fT_F^2C_F \Biggl( -\frac{118992563}{21870000} +\frac{3388}{2025}\zeta_3 -\frac{4739}{20250}\zeta_2 \Biggr)~, \\ a_{qq,Q}^{(3), {\sf PS}}(6)&=& n_fT_F^2C_F \Biggl( -\frac{17732294117}{10210252500} +\frac{1936}{2835}\zeta_3 -\frac{9794}{694575}\zeta_2 \Biggr)~,\\ a_{qq,Q}^{(3), {\sf PS}}(8)&=& n_fT_F^2C_F \Biggl( -\frac{20110404913057}{27221116608000} +\frac{1369}{3645}\zeta_3 +\frac{135077}{4762800}\zeta_2 \Biggr)~,\\ a_{qq,Q}^{(3), {\sf PS}}(10)&=& n_fT_F^2C_F \Biggl( -\frac{308802524517334}{873722108345625} +\frac{175616}{735075}\zeta_3 +\frac{4492016}{121287375}\zeta_2 \Biggr)~,\\ a_{qq,Q}^{(3), {\sf PS}}(12)&=& n_fT_F^2C_F \Biggl( -\frac{6724380801633998071}{38535607844665781850} +\frac{24964}{150579}\zeta_3 +\frac{583767694}{15826605795}\zeta_2 \Biggr)~, \nonumber \\ \\ a_{qq,Q}^{(3), {\sf PS}}(14)&=& n_fT_F^2C_F \Biggl( -\frac{616164615443256347333}{7545433703850642600000} +\frac{22472}{184275}\zeta_3 \nonumber\\ \nonumber \\ && \hspace{-15mm} +\frac{189601441}{5533778250}\zeta_2 \Biggr)~. 
\nonumber\\ \end{eqnarray} \vspace*{2mm}\noindent \underline{$(iii)$~\large $a_{Qg}^{\rm (3)}$}~: \begin{eqnarray} a_{Qg}^{(3)}(2)&=& T_FC_A^2 \Biggl( \frac{170227}{4374} -\frac{88}{9}{\sf B_4} +72\zeta_4 -\frac{31367}{324}\zeta_3 +\frac{1076}{81}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_FC_A \Biggl( -\frac{154643}{729} +\frac{208}{9}{\sf B_4} -104\zeta_4 +\frac{7166}{27}\zeta_3 -54\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_F^2 \Biggl( -\frac{15574}{243} -\frac{64}{9}{\sf B_4}+32\zeta_4 -\frac{3421}{81}\zeta_3 +\frac{704}{27}\zeta_2 \Biggr) +T_F^2C_A \Biggl( -\frac{20542}{2187} \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{4837}{162}\zeta_3 -\frac{670}{81}\zeta_2 \Biggr) +T_F^2C_F \Biggl( \frac{11696}{729} +\frac{569}{81}\zeta_3 +\frac{256}{9}\zeta_2 \Biggr) -\frac{64}{27}T_F^3\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_A \Biggl( -\frac{6706}{2187} -\frac{616}{81}\zeta_3 -\frac{250}{81}\zeta_2 \Biggr) +n_fT_F^2C_F \Biggl( \frac{158}{243} +\frac{896}{81}\zeta_3 +\frac{40}{9}\zeta_2 \Biggr)~, \nonumber\\ \end{eqnarray} \begin{eqnarray} a_{Qg}^{(3)}(4)&=& T_FC_A^2 \Biggl( -\frac{425013969083}{2916000000} -\frac{559}{50}{\sf B_4} +\frac{2124}{25}\zeta_4 -\frac{352717109}{5184000}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{4403923}{270000}\zeta_2 \Biggr) +T_FC_FC_A \Biggl( -\frac{95898493099}{874800000} +\frac{646}{25}{\sf B_4} -\frac{2907}{25}\zeta_4 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{172472027}{864000}\zeta_3 -\frac{923197}{40500}\zeta_2 \Biggr) +T_FC_F^2 \Biggl( -\frac{87901205453}{699840000} -\frac{174}{25}{\sf B_4} \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{783}{25}\zeta_4 +\frac{937829}{12960}\zeta_3 +\frac{62019319}{3240000}\zeta_2 \Biggr) +T_F^2C_A \Biggl( \frac{960227179}{29160000} +\frac{1873781}{51840}\zeta_3 \nonumber \\ \nonumber \\&& \hspace{-15mm} +\frac{120721}{13500}\zeta_2 \Biggr) +T_F^2C_F \Biggl( -\frac{1337115617}{874800000} 
+\frac{73861}{324000}\zeta_3 +\frac{8879111}{810000}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{176}{135}T_F^3\zeta_3 +n_fT_F^2C_A \Biggl( \frac{947836283}{72900000} -\frac{18172}{2025}\zeta_3 -\frac{11369}{13500}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( \frac{8164734347}{4374000000} +\frac{130207}{20250}\zeta_3 +\frac{1694939}{810000}\zeta_2 \Biggr)~, \\ a_{Qg}^{(3)}(6)&=& T_FC_A^2 \Biggl( -\frac{48989733311629681}{263473523712000} -\frac{2938}{315}{\sf B_4} +\frac{17466}{245}\zeta_4 -\frac{748603616077}{11379916800}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{93013721}{3457440}\zeta_2 \Biggr) +T_FC_FC_A \Biggl( \frac{712876107019}{55319040000} +\frac{47332}{2205}{\sf B_4} -\frac{23666}{245}\zeta_4 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{276158927731}{1896652800}\zeta_3 +\frac{4846249}{11113200}\zeta_2 \Biggr) +T_FC_F^2 \Biggl( -\frac{38739867811364113}{137225793600000} \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{2480}{441}{\sf B_4} +\frac{1240}{49}\zeta_4 +\frac{148514798653}{711244800}\zeta_3 +\frac{4298936309}{388962000}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_F^2C_A \Biggl( \frac{706058069789557}{18819537408000} +\frac{3393002903}{116121600}\zeta_3 +\frac{6117389}{555660}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_F^2C_F \Biggl( -\frac{447496496568703}{54890317440000} -\frac{666922481}{284497920}\zeta_3 +\frac{49571129}{9724050}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{176}{189}T_F^3\zeta_3 +n_fT_F^2C_A \Biggl( \frac{12648331693}{735138180} -\frac{4433}{567}\zeta_3 +\frac{23311}{111132}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( -\frac{8963002169173}{1715322420000} +\frac{111848}{19845}\zeta_3 +\frac{11873563}{19448100}\zeta_2 \Biggr)~, \\ a_{Qg}^{(3)}(8)&=& T_FC_A^2 \Biggl( -\frac{358497428780844484961}{2389236291993600000} -\frac{899327}{113400}{\sf B_4} 
+\frac{64021}{1050}\zeta_4\nonumber \end{eqnarray}\begin{eqnarray} \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{12321174818444641}{112368549888000}\zeta_3 -\frac{19581298057}{612360000}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_FC_A \Biggl( \frac{941315502886297276939}{8362327021977600000} +\frac{515201}{28350}{\sf B_4} -\frac{515201}{6300}\zeta_4 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{5580970944338269}{56184274944000}\zeta_3 +\frac{495290785657}{34292160000}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_F^2 \Biggl( -\frac{23928053971795796451443}{36585180721152000000} -\frac{749}{162}{\sf B_4} +\frac{749}{36}\zeta_4 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{719875828314061}{1404606873600}\zeta_3 +\frac{2484799653079}{480090240000}\zeta_2 \Biggr) +T_F^2C_A \Biggl( \frac{156313300657148129}{4147979673600000} \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{58802880439}{2388787200}\zeta_3 +\frac{46224083}{4082400}\zeta_2 \Biggr) +T_F^2C_F \Biggl( -\frac{986505627362913047}{87107573145600000} \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{185046016777}{50164531200}\zeta_3 +\frac{7527074663}{3429216000}\zeta_2 \Biggr) -\frac{296}{405}T_F^3\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_A \Biggl( \frac{24718362393463}{1322697600000} -\frac{125356}{18225}\zeta_3 +\frac{2118187}{2916000}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( -\frac{291376419801571603}{32665339929600000} +\frac{887741}{174960}\zeta_3 -\frac{139731073}{1143072000}\zeta_2 \Biggr)~, \\ a_{Qg}^{(3)}(10)&=& T_FC_A^2 \Biggl( \frac{6830363463566924692253659}{685850575063965696000000} -\frac{563692}{81675}{\sf B_4} \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{483988}{9075}\zeta_4 -\frac{103652031822049723}{415451499724800}\zeta_3 -\frac{20114890664357}{581101290000}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_FC_A \Biggl( 
\frac{872201479486471797889957487}{2992802509370032128000000} +\frac{1286792}{81675}{\sf B_4} \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{643396}{9075}\zeta_4 -\frac{761897167477437907}{33236119977984000}\zeta_3 +\frac{15455008277}{660342375}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_F^2 \Biggl( -\frac{247930147349635960148869654541}{148143724213816590336000000} -\frac{11808}{3025}{\sf B_4} \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{53136}{3025}\zeta_4 +\frac{9636017147214304991}{7122025709568000}\zeta_3 +\frac{14699237127551}{15689734830000}\zeta_2 \Biggr)\nonumber \end{eqnarray}\begin{eqnarray} && \hspace{-15mm} +T_F^2C_A \Biggl( \frac{23231189758106199645229}{633397356480430080000} +\frac{123553074914173}{5755172290560}\zeta_3 +\frac{4206955789}{377338500}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_F^2C_F \Biggl( -\frac{18319931182630444611912149}{1410892611560158003200000} -\frac{502987059528463}{113048027136000}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{24683221051}{46695639375}\zeta_2 \Biggr) -\frac{896}{1485}T_F^3\zeta_3 +n_fT_F^2C_A \Biggl( \frac{297277185134077151}{15532837481700000} \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{1505896}{245025}\zeta_3 +\frac{189965849}{188669250}\zeta_2 \Biggr) +n_fT_F^2C_F \Biggl( -\frac{1178560772273339822317}{107642563748181000000} \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{62292104}{13476375}\zeta_3 -\frac{49652772817}{93391278750}\zeta_2 \Biggr)~. 
\end{eqnarray} \vspace*{2mm}\noindent \underline{$(iv)$~\large $a_{qg,Q}^{\rm (3)}$}~: \begin{eqnarray}\hspace{-6mm} a_{qg,Q}^{(3)}(2)&=& n_fT_F^2C_A \Biggl( \frac{83204}{2187} -\frac{616}{81}\zeta_3 +\frac{290}{81}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( -\frac{5000}{243} +\frac{896}{81}\zeta_3 -\frac{4}{3}\zeta_2 \Biggr)~, \\ a_{qg,Q}^{(3)}(4)&=& n_fT_F^2C_A \Biggl( \frac{835586311}{14580000} -\frac{18172}{2025}\zeta_3 +\frac{71899}{13500}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( -\frac{21270478523}{874800000} +\frac{130207}{20250}\zeta_3 -\frac{1401259}{810000}\zeta_2 \Biggr)~,\\ a_{qg,Q}^{(3)}(6)&=& n_fT_F^2C_A \Biggl( \frac{277835781053}{5881105440} -\frac{4433}{567}\zeta_3 +\frac{2368823}{555660}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( -\frac{36123762156197}{1715322420000} +\frac{111848}{19845}\zeta_3 -\frac{26095211}{19448100}\zeta_2 \Biggr)~,\\ a_{qg,Q}^{(3)}(8)&=& n_fT_F^2C_A \Biggl( \frac{157327027056457}{3968092800000} -\frac{125356}{18225}\zeta_3 +\frac{7917377}{2268000}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( -\frac{201046808090490443}{10888446643200000} +\frac{887741}{174960}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{3712611349}{3429216000}\zeta_2 \Biggr)~, \end{eqnarray}\begin{eqnarray} a_{qg,Q}^{(3)}(10)&=& n_fT_F^2C_A \Biggl( \frac{6542127929072987}{191763425700000} -\frac{1505896}{245025}\zeta_3 +\frac{1109186999}{377338500}\zeta_2 \Biggr) \nonumber \end{eqnarray}\begin{eqnarray} && \hspace{-15mm} +n_fT_F^2C_F \Biggl( -\frac{353813854966442889041}{21528512749636200000} +\frac{62292104}{13476375}\zeta_3 -\frac{83961181063}{93391278750}\zeta_2 \Biggr)~. 
\nonumber \\ \end{eqnarray} \vspace*{2mm}\noindent \underline{$(v)$~\large $a_{gq,Q}^{\rm (3)}$}~: \begin{eqnarray} a_{gq,Q}^{(3)}(2)&=& T_FC_FC_A \Biggl( -\frac{126034}{2187} -\frac{128}{9}{\sf B_4}+128\zeta_4 -\frac{9176}{81}\zeta_3 -\frac{160}{81}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_F^2 \Biggl( -\frac{741578}{2187} +\frac{256}{9}{\sf B_4}-128\zeta_4 +\frac{17296}{81}\zeta_3 -\frac{4496}{81}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_F^2C_F \Biggl( \frac{21872}{729} +\frac{2048}{27}\zeta_3 +\frac{416}{27}\zeta_2 \Biggr) +n_fT_F^2C_F \Biggl( \frac{92200}{729} -\frac{896}{27}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{208}{27}\zeta_2 \Biggr)~,\\ a_{gq,Q}^{(3)}(4)&=& T_FC_FC_A \Biggl( -\frac{5501493631}{218700000} -\frac{176}{45}{\sf B_4} +\frac{176}{5}\zeta_4 -\frac{8258}{405}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{13229}{8100}\zeta_2 \Biggr) +T_FC_F^2 \Biggl( -\frac{12907539571}{145800000} +\frac{352}{45}{\sf B_4} -\frac{176}{5}\zeta_4 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{132232}{2025}\zeta_3 -\frac{398243}{27000}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_F^2C_F \Biggl( \frac{1914197}{911250} +\frac{2816}{135}\zeta_3 +\frac{1252}{675}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( \frac{50305997}{1822500} -\frac{1232}{135}\zeta_3 +\frac{626}{675}\zeta_2 \Biggr)~,\\ a_{gq,Q}^{(3)}(6)&=& T_FC_FC_A \Biggl( -\frac{384762916141}{24504606000} -\frac{704}{315}{\sf B_4} +\frac{704}{35}\zeta_4 -\frac{240092}{19845}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{403931}{463050}\zeta_2 \Biggr) +T_FC_F^2 \Biggl( -\frac{40601579774533}{918922725000} +\frac{1408}{315}{\sf B_4} -\frac{704}{35}\zeta_4 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{27512264}{694575}\zeta_3 -\frac{24558841}{3472875}\zeta_2 \Biggr) +T_F^2C_F \Biggl( -\frac{279734446}{364651875} \nonumber \\ \nonumber \\ && \hspace{-15mm} 
+\frac{11264}{945}\zeta_3 +\frac{8816}{33075}\zeta_2 \Biggr) +n_fT_F^2C_F \Biggl( \frac{4894696577}{364651875} -\frac{704}{135}\zeta_3\nonumber \end{eqnarray}\begin{eqnarray} && \hspace{-15mm} +\frac{4408}{33075}\zeta_2 \Biggr)~,\\ a_{gq,Q}^{(3)}(8)&=& T_FC_FC_A \Biggl( -\frac{10318865954633473}{816633498240000} -\frac{296}{189}{\sf B_4} +\frac{296}{21}\zeta_4 -\frac{1561762}{178605}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{30677543}{85730400}\zeta_2 \Biggr) +T_FC_F^2 \Biggl( -\frac{305405135103422947}{11432868975360000} +\frac{592}{189}{\sf B_4} -\frac{296}{21}\zeta_4 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{124296743}{4286520}\zeta_3 -\frac{4826251837}{1200225600}\zeta_2 \Biggr) +T_F^2C_F \Biggl( -\frac{864658160833}{567106596000} \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{4736}{567}\zeta_3 -\frac{12613}{59535}\zeta_2 \Biggr) +n_fT_F^2C_F \Biggl( \frac{9330164983967}{1134213192000} -\frac{296}{81}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{12613}{119070}\zeta_2 \Biggr)~,\\ a_{gq,Q}^{(3)}(10)&=& T_FC_FC_A \Biggl( -\frac{1453920909405842897}{130475834846280000} -\frac{1792}{1485}{\sf B_4} +\frac{1792}{165}\zeta_4 -\frac{1016096}{147015}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{871711}{26952750}\zeta_2 \Biggr) +T_FC_F^2 \Biggl( -\frac{11703382372448370173}{667205973645750000} +\frac{3584}{1485}{\sf B_4} \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{1792}{165}\zeta_4 +\frac{62282416}{2695275}\zeta_3 -\frac{6202346032}{2547034875}\zeta_2 \Biggr) +T_F^2C_F \Biggl( -\frac{1346754066466}{756469357875} \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{28672}{4455}\zeta_3 -\frac{297472}{735075}\zeta_2 \Biggr) +n_fT_F^2C_F \Biggl( \frac{4251185859247}{756469357875} -\frac{12544}{4455}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{148736}{735075}\zeta_2 \Biggr)~,\\ a_{gq,Q}^{(3)}(12)&=& T_FC_FC_A \Biggl( -\frac{1515875996003174876943331}{147976734123516602304000} -\frac{1264}{1287}{\sf B_4} 
+\frac{1264}{143}\zeta_4 \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{999900989}{173918745}\zeta_3 -\frac{693594486209}{3798385390800}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_F^2 \Biggl( -\frac{48679935129017185612582919}{4069360188396706563360000} +\frac{2528}{1287}{\sf B_4} -\frac{1264}{143}\zeta_4 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{43693776149}{2260943685}\zeta_3 -\frac{2486481253717}{1671289571952}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_F^2C_F \Biggl( -\frac{2105210836073143063}{1129248581528667600} +\frac{20224}{3861}\zeta_3 -\frac{28514494}{57972915}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( \frac{9228836319135394697}{2258497163057335200} -\frac{8848}{3861}\zeta_3 -\frac{14257247}{57972915}\zeta_2 \Biggr)~, \end{eqnarray}\begin{eqnarray} a_{gq,Q}^{(3)}(14)&=& T_FC_FC_A \Biggl( -\frac{1918253569538142572718209}{199199449781656964640000} -\frac{3392}{4095}{\sf B_4} \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{3392}{455}\zeta_4 -\frac{2735193382}{553377825}\zeta_3 -\frac{1689839813797}{5113211103000}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_F^2 \Biggl( -\frac{143797180510035170802620917}{17429951855894984406000000} +\frac{6784}{4095}{\sf B_4} \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{3392}{455}\zeta_4 +\frac{12917466836}{774728955}\zeta_3 -\frac{4139063104013}{4747981738500}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_F^2C_F \Biggl( -\frac{337392441268078561}{179653183425015300} +\frac{54272}{12285}\zeta_3 -\frac{98112488}{184459275}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( \frac{222188365726202803}{71861273370006120} -\frac{3392}{1755}\zeta_3 -\frac{49056244}{184459275}\zeta_2 \Biggr)~. 
\end{eqnarray} \vspace*{2mm}\noindent \underline{$(vi)$~\large $a_{gg,Q}^{\rm (3)}$}~: \begin{eqnarray} a_{gg,Q}^{(3)}(2)&=& T_FC_A^2 \Biggl( -\frac{170227}{4374} +\frac{88}{9}{\sf B_4}-72\zeta_4 +\frac{31367}{324}\zeta_3 -\frac{1076}{81}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_FC_A \Biggl( \frac{154643}{729} -\frac{208}{9}{\sf B_4} +104\zeta_4 -\frac{7166}{27}\zeta_3+54\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_F^2 \Biggl( \frac{15574}{243} +\frac{64}{9}{\sf B_4}-32\zeta_4 +\frac{3421}{81}\zeta_3 -\frac{704}{27}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_F^2C_A \Biggl( \frac{20542}{2187} -\frac{4837}{162}\zeta_3 +\frac{670}{81}\zeta_2 \Biggr) +T_F^2C_F \Biggl( -\frac{11696}{729} -\frac{569}{81}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{256}{9}\zeta_2 \Biggr) +\frac{64}{27}T_F^3\zeta_3 +n_fT_F^2C_A \Biggl( -\frac{76498}{2187} +\frac{1232}{81}\zeta_3 -\frac{40}{81}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( \frac{538}{27} -\frac{1792}{81}\zeta_3 -\frac{28}{9}\zeta_2 \Biggr)~,\\ a_{gg,Q}^{(3)}(4)&=& T_FC_A^2 \Biggl( \frac{29043652079}{291600000} +\frac{533}{25}{\sf B_4} -\frac{4698}{25}\zeta_4 +\frac{610035727}{2592000}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{92341}{6750}\zeta_2 \Biggr) +T_FC_FC_A \Biggl( \frac{272542528639}{874800000} -\frac{1088}{25}{\sf B_4} +\frac{4896}{25}\zeta_4 \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{3642403}{17280}\zeta_3 +\frac{73274237}{810000}\zeta_2 \Biggr) +T_FC_F^2 \Biggl( \frac{41753961371}{1749600000} \nonumber \end{eqnarray}\begin{eqnarray} && \hspace{-15mm} +\frac{44}{25}{\sf B_4} -\frac{198}{25}\zeta_4 +\frac{2676077}{64800}\zeta_3 -\frac{4587077}{1620000}\zeta_2 \Biggr) +T_F^2C_A \Biggl( -\frac{1192238291}{14580000} \nonumber \\ \nonumber \\&& \hspace{-15mm} -\frac{2134741}{25920}\zeta_3 -\frac{16091}{675}\zeta_2 \Biggr) +T_F^2C_F \Biggl( -\frac{785934527}{43740000} 
-\frac{32071}{8100}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{226583}{8100}\zeta_2 \Biggr) +\frac{64}{27}T_F^3\zeta_3 +n_fT_F^2C_A \Biggl( -\frac{271955197}{1822500} +\frac{13216}{405}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{6526}{675}\zeta_2 \Biggr) +n_fT_F^2C_F \Biggl( -\frac{465904519}{27337500} -\frac{6776}{2025}\zeta_3 -\frac{61352}{10125}\zeta_2 \Biggr)~,\\ a_{gg,Q}^{(3)}(6)&=& T_FC_A^2 \Biggl( \frac{37541473421359}{448084224000} +\frac{56816}{2205}{\sf B_4} -\frac{56376}{245}\zeta_4 +\frac{926445489353}{2844979200}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{11108521}{555660}\zeta_2 \Biggr) +T_FC_FC_A \Biggl( \frac{18181142251969309}{54890317440000} -\frac{114512}{2205}{\sf B_4} +\frac{57256}{245}\zeta_4 \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{12335744909}{67737600}\zeta_3 +\frac{94031857}{864360}\zeta_2 \Biggr) +T_FC_F^2 \Biggl( \frac{16053159907363}{635304600000} +\frac{352}{441}{\sf B_4} \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{176}{49}\zeta_4 +\frac{3378458681}{88905600}\zeta_3 -\frac{8325229}{10804500}\zeta_2 \Biggr) +T_F^2C_A \Biggl( -\frac{670098465769}{6001128000} \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{25725061}{259200}\zeta_3 -\frac{96697}{2835}\zeta_2 \Biggr) +T_F^2C_F \Biggl( -\frac{8892517283287}{490092120000} -\frac{12688649}{2540160}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{2205188}{77175}\zeta_2 \Biggr) +\frac{64}{27}T_F^3\zeta_3 +n_fT_F^2C_A \Biggl( -\frac{245918019913}{1312746750} +\frac{3224}{81}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{250094}{19845}\zeta_2 \Biggr) +n_fT_F^2C_F \Biggl( -\frac{71886272797}{3403417500} -\frac{3872}{2835}\zeta_3 -\frac{496022}{77175}\zeta_2 \Biggr)~,\\ a_{gg,Q}^{(3)}(8)&=& T_FC_A^2 \Biggl( \frac{512903304712347607}{18665908531200000} +\frac{108823}{3780}{\sf B_4} -\frac{162587}{630}\zeta_4 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{2735007975361}{6502809600}\zeta_3 
+\frac{180224911}{7654500}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_FC_A \Biggl( \frac{13489584043443319991}{43553786572800000} -\frac{163882}{2835}{\sf B_4} +\frac{81941}{315}\zeta_4 \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{3504113623243}{25082265600}\zeta_3 +\frac{414844703639}{3429216000}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_F^2 \Biggl( \frac{5990127272073225467}{228657379507200000}\nonumber +\frac{37}{81}{\sf B_4} -\frac{37}{18}\zeta_4 +\frac{3222019505879}{87787929600}\zeta_3 \nonumber \nonumber \end{eqnarray}\begin{eqnarray} && \hspace{-15mm} -\frac{12144008761}{48009024000}\zeta_2 \Biggr) +T_F^2C_A \Biggl( -\frac{16278325750483243}{124439390208000}\nonumber \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{871607413}{7962624}\zeta_3 -\frac{591287}{14580}\zeta_2 \Biggr) +T_F^2C_F \Biggl( -\frac{7458367007740639}{408316749120000} \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{291343229}{52254720}\zeta_3 -\frac{2473768763}{85730400}\zeta_2 \Biggr) +\frac{64}{27}T_F^3\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_A \Biggl( -\frac{102747532985051}{486091368000} +\frac{54208}{1215}\zeta_3 -\frac{737087}{51030}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( -\frac{1145917332616927}{51039593640000} -\frac{2738}{3645}\zeta_3 -\frac{70128089}{10716300}\zeta_2 \Biggr)~,\\ a_{gg,Q}^{(3)}(10)&=& T_FC_A^2 \Biggl( -\frac{15434483462331661005275759}{327337774462347264000000} +\frac{17788828}{571725}{\sf B_4} \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{17746492}{63525}\zeta_4 +\frac{269094476549521109}{519314374656000}\zeta_3 +\frac{1444408720649}{55468759500}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_FC_A \Biggl( \frac{207095356146239371087405921}{771581896946961408000000} -\frac{35662328}{571725}{\sf B_4} \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{17831164}{63525}\zeta_4 -\frac{3288460968359099}{37093883904000}\zeta_3 
+\frac{6078270984602}{46695639375}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_F^2 \Biggl( \frac{553777925867720521493231}{20667372239650752000000} +\frac{896}{3025}{\sf B_4} -\frac{4032}{3025}\zeta_4 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{7140954579599}{198717235200}\zeta_3 -\frac{282148432}{4002483375}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_F^2C_A \Biggl( -\frac{63059843481895502807}{433789788579840000} -\frac{85188238297}{729907200}\zeta_3 -\frac{33330316}{735075}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_F^2C_F \Biggl( -\frac{655690580559958774157}{35787657557836800000} -\frac{71350574183}{12043468800}\zeta_3 -\frac{3517889264}{121287375}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{64}{27}T_F^3\zeta_3 +n_fT_F^2C_A \Biggl( -\frac{6069333056458984}{26476427525625} +\frac{215128}{4455}\zeta_3 -\frac{81362132}{5145525}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( -\frac{100698363899844296}{4368610541728125} -\frac{351232}{735075}\zeta_3 -\frac{799867252}{121287375}\zeta_2 \Biggr)~. 
\end{eqnarray} \vspace*{2mm}\noindent \underline{$(vii)$~\large $a_{qq,Q}^{(3), \sf NS}$}~: \begin{eqnarray} a_{qq,Q}^{(3), {\sf NS}}(1)&=& 0~,\\ a_{qq,Q}^{(3), {\sf NS}}(2)&=& T_FC_FC_A \Biggl( \frac{8744}{2187} +\frac{64}{9}{\sf B_4} -64\zeta_4 +\frac{4808}{81}\zeta_3 -\frac{64}{81}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_F^2 \Biggl( \frac{359456}{2187} -\frac{128}{9}{\sf B_4} +64\zeta_4 -\frac{848}{9}\zeta_3 +\frac{2384}{81}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_F^2C_F \Biggl( -\frac{28736}{2187} -\frac{2048}{81}\zeta_3 -\frac{512}{81}\zeta_2 \Biggr) +n_fT_F^2C_F \Biggl( -\frac{100096}{2187} \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{896}{81}\zeta_3 -\frac{256}{81}\zeta_2 \Biggr)~,\\ a_{qq,Q}^{(3), {\sf NS}}(3)&=& T_FC_FC_A \Biggl( \frac{522443}{34992} +\frac{100}{9}{\sf B_4}-100\zeta_4 +\frac{15637}{162}\zeta_3 +\frac{175}{162}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_F^2 \Biggl( \frac{35091701}{139968} -\frac{200}{9}{\sf B_4}+100\zeta_4 -\frac{1315}{9}\zeta_3 +\frac{29035}{648}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_F^2C_F \Biggl( -\frac{188747}{8748} -\frac{3200}{81}\zeta_3 -\frac{830}{81}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( -\frac{1271507}{17496} +\frac{1400}{81}\zeta_3 -\frac{415}{81}\zeta_2 \Biggr)~,\\ a_{qq,Q}^{(3), {\sf NS}}(4)&=& T_FC_FC_A \Biggl( \frac{419369407}{21870000} +\frac{628}{45}{\sf B_4} -\frac{628}{5}\zeta_4 +\frac{515597}{4050}\zeta_3 +\frac{10703}{4050}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_F^2 \Biggl( \frac{137067007129}{437400000} -\frac{1256}{45}{\sf B_4} +\frac{628}{5}\zeta_4 -\frac{41131}{225}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{4526303}{81000}\zeta_2 \Biggr) +T_F^2C_F \Biggl( -\frac{151928299}{5467500} -\frac{20096}{405}\zeta_3 -\frac{26542}{2025}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( 
-\frac{1006358899}{10935000} +\frac{8792}{405}\zeta_3 -\frac{13271}{2025}\zeta_2 \Biggr)~,\\ a_{qq,Q}^{(3), {\sf NS}}(5)&=& T_FC_FC_A \Biggl( \frac{816716669}{43740000} +\frac{728}{45}{\sf B_4} -\frac{728}{5}\zeta_4 +\frac{12569}{81}\zeta_3 +\frac{16103}{4050}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_F^2 \Biggl( \frac{13213297537}{36450000} -\frac{1456}{45}{\sf B_4} +\frac{728}{5}\zeta_4 -\frac{142678}{675}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{48391}{750}\zeta_2 \Biggr) +T_F^2C_F \Biggl( -\frac{9943403}{303750} -\frac{23296}{405}\zeta_3 -\frac{31132}{2025}\zeta_2 \Biggr) \end{eqnarray}\begin{eqnarray} && \hspace{-15mm} +n_fT_F^2C_F \Biggl( -\frac{195474809}{1822500} +\frac{10192}{405}\zeta_3 -\frac{15566}{2025}\zeta_2 \Biggr)~, \\ a_{qq,Q}^{(3), {\sf NS}}(6)&=& T_FC_FC_A \Biggl( \frac{1541550898907}{105019740000} +\frac{5672}{315}{\sf B_4} -\frac{5672}{35}\zeta_4 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{720065}{3969}\zeta_3 +\frac{1016543}{198450}\zeta_2 \Biggr) +T_FC_F^2 \Biggl( \frac{186569400917}{463050000} \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{11344}{315}{\sf B_4} +\frac{5672}{35}\zeta_4 -\frac{7766854}{33075}\zeta_3 +\frac{55284811}{771750}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_F^2C_F \Biggl( -\frac{26884517771}{729303750} -\frac{181504}{2835}\zeta_3 -\frac{1712476}{99225}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( -\frac{524427335513}{4375822500} +\frac{11344}{405}\zeta_3 -\frac{856238}{99225}\zeta_2 \Biggr)~,\\ a_{qq,Q}^{(3), {\sf NS}}(7)&=& T_FC_FC_A \Biggl( \frac{5307760084631}{672126336000} +\frac{2054}{105}{\sf B_4} -\frac{6162}{35}\zeta_4 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{781237}{3780}\zeta_3 +\frac{19460531}{3175200}\zeta_2 \Biggr) +T_FC_F^2 \Biggl( \frac{4900454072126579}{11202105600000} \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{4108}{105}{\sf B_4} +\frac{6162}{35}\zeta_4 
-\frac{8425379}{33075}\zeta_3 +\frac{1918429937}{24696000}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_F^2C_F \Biggl( -\frac{8488157192423}{210039480000} -\frac{65728}{945}\zeta_3 -\frac{3745727}{198450}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( -\frac{54861581223623}{420078960000} +\frac{4108}{135}\zeta_3 -\frac{3745727}{396900}\zeta_2 \Biggr)~,\\ a_{qq,Q}^{(3), {\sf NS}}(8)&=& T_FC_FC_A \Biggl( -\frac{37259291367883}{38887309440000} +\frac{19766}{945}{\sf B_4} -\frac{19766}{105}\zeta_4 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{1573589}{6804}\zeta_3 +\frac{200739467}{28576800}\zeta_2 \Biggr) +T_FC_F^2 \Biggl( \frac{3817101976847353531}{8166334982400000} \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{39532}{945}{\sf B_4} +\frac{19766}{105}\zeta_4 -\frac{80980811}{297675}\zeta_3 +\frac{497748102211}{6001128000}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_F^2C_F \Biggl( -\frac{740566685766263}{17013197880000} -\frac{632512}{8505}\zeta_3 -\frac{36241943}{1786050}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( -\frac{4763338626853463}{34026395760000} +\frac{39532}{1215}\zeta_3 -\frac{36241943}{3572100}\zeta_2 \Biggr)~, \end{eqnarray}\begin{eqnarray} a_{qq,Q}^{(3), {\sf NS}}(9)&=& T_FC_FC_A \Biggl( -\frac{3952556872585211}{340263957600000} +\frac{4180}{189}{\sf B_4} -\frac{4180}{21}\zeta_4 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{21723277}{85050}\zeta_3 +\frac{559512437}{71442000}\zeta_2 \Biggr) +T_FC_F^2 \Biggl( \frac{1008729211999128667}{2041583745600000} \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{8360}{189}{\sf B_4} +\frac{4180}{21}\zeta_4 -\frac{85539428}{297675}\zeta_3 +\frac{131421660271}{1500282000}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_F^2C_F \Biggl( -\frac{393938732805271}{8506598940000} -\frac{133760}{1701}\zeta_3 -\frac{19247947}{893025}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} 
+n_fT_F^2C_F \Biggl( -\frac{2523586499054071}{17013197880000} +\frac{8360}{243}\zeta_3 -\frac{19247947}{1786050}\zeta_2 \Biggr)~,\\ a_{qq,Q}^{(3), {\sf NS}}(10)&=& T_FC_FC_A \Biggl( -\frac{10710275715721975271}{452891327565600000} +\frac{48220}{2079}{\sf B_4} \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{48220}{231}\zeta_4 +\frac{2873636069}{10291050}\zeta_3 +\frac{961673201}{112266000}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_F^2 \Biggl( \frac{170291990048723954490137}{328799103812625600000} -\frac{96440}{2079}{\sf B_4} \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{48220}{231}\zeta_4 -\frac{10844970868}{36018675}\zeta_3 +\frac{183261101886701}{1996875342000}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_F^2C_F \Biggl( -\frac{6080478350275977191}{124545115080540000} -\frac{1543040}{18711}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{2451995507}{108056025}\zeta_2 \Biggr) +n_fT_F^2C_F \Biggl( -\frac{38817494524177585991}{249090230161080000} \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{96440}{2673}\zeta_3 -\frac{2451995507}{216112050}\zeta_2 \Biggr)~, \\ a_{qq,Q}^{(3), {\sf NS}}(11)&=& T_FC_FC_A \Biggl( -\frac{22309979286641292041}{603855103420800000} +\frac{251264}{10395}{\sf B_4} \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{251264}{1155}\zeta_4 +\frac{283300123}{935550}\zeta_3 +\frac{1210188619}{130977000}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_F^2 \Biggl( \frac{177435748292579058982241}{328799103812625600000} -\frac{502528}{10395}{\sf B_4} \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{251264}{1155}\zeta_4 -\frac{451739191}{1440747}\zeta_3 +\frac{47705202493793}{499218835500}\zeta_2 \Biggr) \nonumber \end{eqnarray}\begin{eqnarray} && \hspace{-15mm} +T_F^2C_F \Biggl( -\frac{6365809346912279423}{124545115080540000} -\frac{8040448}{93555}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{512808781}{21611205}\zeta_2 \Biggr) +n_fT_F^2C_F \Biggl( 
-\frac{40517373495580091423}{249090230161080000} \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{502528}{13365}\zeta_3 -\frac{512808781}{43222410}\zeta_2 \Biggr)~,\\ a_{qq,Q}^{(3), {\sf NS}}(12)&=& T_FC_FC_A \Biggl( -\frac{126207343604156227942043}{2463815086971638400000} +\frac{3387392}{135135}{\sf B_4} \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{3387392}{15015}\zeta_4 +\frac{51577729507}{158107950}\zeta_3 +\frac{2401246832561}{243486243000}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_F^2 \Biggl( \frac{68296027149155250557867961293}{122080805651901196900800000} -\frac{6774784}{135135}{\sf B_4} \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{3387392}{15015}\zeta_4 -\frac{79117185295}{243486243}\zeta_3 +\frac{108605787257580461}{1096783781593500}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_F^2C_F \Biggl( -\frac{189306988923316881320303}{3557133031815302940000} -\frac{108396544}{1216215}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{90143221429}{3652293645}\zeta_2 \Biggr) +n_fT_F^2C_F \Biggl( -\frac{1201733391177720469772303}{7114266063630605880000} \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{6774784}{173745}\zeta_3 -\frac{90143221429}{7304587290}\zeta_2 \Biggr)~,\\ a_{qq,Q}^{(3), {\sf NS}}(13)&=& T_FC_FC_A \Biggl( -\frac{12032123246389873565503373}{181090408892415422400000} +\frac{3498932}{135135}{\sf B_4} \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{3498932}{15015}\zeta_4 +\frac{2288723461}{6548850}\zeta_3 +\frac{106764723181157}{10226422206000}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_F^2 \Biggl( \frac{10076195142551036234891679659}{17440115093128742414400000} -\frac{6997864}{135135}{\sf B_4} \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{3498932}{15015}\zeta_4 -\frac{81672622894}{243486243}\zeta_3 +\frac{448416864235277759}{4387135126374000}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_F^2C_F \Biggl( 
-\frac{196243066652040382535303}{3557133031815302940000} -\frac{111965824}{1216215}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{93360116539}{3652293645}\zeta_2 \Biggr) +n_fT_F^2C_F \Biggl( -\frac{1242840812874342588467303}{7114266063630605880000}\nonumber \end{eqnarray} \begin{eqnarray} && \hspace{-15mm} +\frac{6997864}{173745}\zeta_3 -\frac{93360116539}{7304587290}\zeta_2 \Biggr)~,\\ a_{qq,Q}^{(3), {\sf NS}}(14)&=& T_FC_FC_A \Biggl( -\frac{994774587614536873023863}{12072693926161028160000} +\frac{720484}{27027}{\sf B_4} \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{720484}{3003}\zeta_4 +\frac{6345068237}{17027010}\zeta_3 +\frac{37428569944327}{3408807402000}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_F^2 \Biggl( \frac{72598193631729215117875463981}{122080805651901196900800000} -\frac{1440968}{27027}{\sf B_4} \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{720484}{3003}\zeta_4 -\frac{2101051892878}{6087156075}\zeta_3 +\frac{461388998135343407}{4387135126374000}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_F^2C_F \Biggl( -\frac{40540032063650894708251}{711426606363060588000} -\frac{23055488}{243243}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{481761665447}{18261468225}\zeta_2 \Biggr) +n_fT_F^2C_F \Biggl( -\frac{256205552272074402170491}{1422853212726121176000} \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{1440968}{34749}\zeta_3 -\frac{481761665447}{36522936450}\zeta_2 \Biggr)~. 
\end{eqnarray} \newpage \section{\bf \boldmath $3$--loop Moments for Transversity} \label{App-Trans} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0} We obtain the following fixed moments of the fermionic contributions to the $3$--loop transversity anomalous dimension $\gamma_{qq}^{(2),{\sf TR}}(N)$ \begin{eqnarray} \hat{\gamma}_{qq}^{(2),{\sf TR}}(1)&=&C_FT_F\Biggl[ -\frac{8}{3}T_F(1+2n_f) -\frac{2008}{27}C_A \nonumber\\ && \hspace{-15mm} +\frac{196}{9}C_F +32(C_F-C_A)\zeta_3 \Biggr]~,\\ \hat{\gamma}_{qq}^{(2),{\sf TR}}(2)&=&C_FT_F\Biggl[ -\frac{184}{27}T_F(1+2n_f) -\frac{2084}{27}C_A \nonumber\\ && \hspace{-15mm} -60C_F +96(C_F-C_A)\zeta_3 \Biggr]~,\\ \hat{\gamma}_{qq}^{(2),{\sf TR}}(3)&=&C_FT_F\Biggl[ -\frac{2408}{243}T_F(1+2n_f) -\frac{19450}{243}C_A \nonumber\\ && \hspace{-15mm} -\frac{25276}{243}C_F +\frac{416}{3}(C_F-C_A)\zeta_3 \Biggr]~,\\ \hat{\gamma}_{qq}^{(2),{\sf TR}}(4)&=&C_FT_F\Biggl[ -\frac{14722}{1215}T_F(1+2n_f) -\frac{199723}{2430}C_A \nonumber\\ && \hspace{-15mm} -\frac{66443}{486}C_F +\frac{512}{3}(C_F-C_A)\zeta_3 \Biggr]~,\nonumber\\ \\ \hat{\gamma}_{qq}^{(2),{\sf TR}}(5)&=&C_FT_F\Biggl[ -\frac{418594}{30375}T_F(1+2n_f) -\frac{5113951}{60750}C_A \nonumber\\ && \hspace{-15mm} -\frac{49495163}{303750}C_F +\frac{2944}{15}(C_F-C_A)\zeta_3 \Biggr]~,\\ \hat{\gamma}_{qq}^{(2),{\sf TR}}(6)&=&C_FT_F\Biggl[ -\frac{3209758}{212625}T_F(1+2n_f) -\frac{3682664}{42525}C_A \nonumber\\ && \hspace{-15mm} -\frac{18622301}{101250}C_F +\frac{1088}{5}(C_F-C_A)\zeta_3 \Biggr]~,\\ \hat{\gamma}_{qq}^{(2),{\sf TR}}(7)&=&C_FT_F\Biggl[ -\frac{168501142}{10418625}T_F (1+2n_f) -\frac{1844723441}{20837250}C_A \nonumber\\ && \hspace{-15mm} -\frac{49282560541}{243101250}C_F +\frac{8256}{35}(C_F-C_A)\zeta_3 \Biggr] \\ \hat{\gamma}_{qq}^{(2),{\sf TR}}(8)&=&C_FT_F\Biggl[ -\frac{711801943}{41674500} T_F(1+2n_f) -\frac{6056338297}{66679200}C_A \nonumber \end{eqnarray} \begin{eqnarray} \nonumber\\ && \hspace{-15mm} 
-\frac{849420853541}{3889620000}C_F +\frac{8816}{35}(C_F-C_A)\zeta_3 \Biggr]~. \end{eqnarray} These moments $(N=1,\ldots,8)$ agree with the corresponding terms obtained in \cite{Gracey:2003yr,Gracey:2003mr,Gracey:2006zr,Gracey:2006ah}. The newly calculated moments read \begin{eqnarray} \hat{\gamma}_{qq}^{(2),{\sf TR}}(9)&=&C_FT_F\Biggl[ -\frac{20096458061}{1125211500}T_F(1+2n_f) -\frac{119131812533}{1285956000}C_A \nonumber\\ && \hspace{-15mm} -\frac{24479706761047}{105019740000}C_F +\frac{83824}{315}(C_F-C_A)\zeta_3 \Biggr]~, \\ \hat{\gamma}_{qq}^{(2),{\sf TR}}(10)&=&C_FT_F\Biggl[ -\frac{229508848783}{12377326500}T_F(1+2n_f) -\frac{4264058299021}{45008460000}C_A \nonumber\\ && \hspace{-15mm} -\frac{25800817445759}{105019740000}C_F +\frac{87856}{315}(C_F-C_A)\zeta_3 \Biggr]~, \\ \hat{\gamma}_{qq}^{(2),{\sf TR}}(11)&=&C_FT_F\Biggl[ -\frac{28677274464343}{1497656506500}T_F(1+2n_f) -\frac{75010870835743}{778003380000}C_A \nonumber \\ && \hspace{-15mm} -\frac{396383896707569599}{1537594013340000}C_F +\frac{1006736}{3465}(C_F-C_A)\zeta_3 \Biggr]~, \\ \hat{\gamma}_{qq}^{(2),{\sf TR}}(12)&=&C_FT_F\Biggl[ -\frac{383379490933459}{19469534584500}T_F(1+2n_f) \nonumber\\ && \hspace{-15mm} -\frac{38283693844132279}{389390691690000}C_A \nonumber\\ && \hspace{-15mm} -\frac{1237841854306528417}{4612782040020000}C_F +\frac{1043696}{3465}(C_F-C_A)\zeta_3 \Biggr]~, \\ \hat{\gamma}_{qq}^{(2),{\sf TR}}(13)&=&C_FT_F\Biggl[ -\frac{66409807459266571}{3290351344780500}T_F(1+2n_f) \nonumber\\ && \hspace{-15mm} -\frac{6571493644375020121}{65807026895610000}C_A \nonumber\\ && \hspace{-15mm} -\frac{36713319015407141570017}{131745667845011220000}C_F +\frac{14011568}{45045}(C_F-C_A)\zeta_3 \Biggr]~. \end{eqnarray} The fixed moments of the constant terms $a_{qq,Q}^{(3), \rm TR}(N)$ of the unrenormalized OME, see Eq.
(\ref{Aqq3NSTRQMSren}), are given by \begin{eqnarray} a_{qq,Q}^{(3), {\rm TR}}(1)&=& T_FC_FC_A \Biggl( -\frac{26441}{1458} +\frac{8}{3}{\sf B_4} -24\zeta_4 +\frac{481}{27}\zeta_3 -\frac{61}{27}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_F^2 \Biggl( \frac{15715}{162} -\frac{16}{3}{\sf B_4} +24\zeta_4 -\frac{278}{9}\zeta_3 +\frac{49}{3}\zeta_2 \Biggr)\nonumber \end{eqnarray}\begin{eqnarray} && \hspace{-15mm} +T_F^2C_F \Biggl( -\frac{6548}{729} -\frac{256}{27}\zeta_3 -\frac{104}{27}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( -\frac{15850}{729} +\frac{112}{27}\zeta_3 -\frac{52}{27}\zeta_2 \Biggr) ~, \\ a_{qq,Q}^{(3), {\rm TR}}(2)&=& T_FC_FC_A \Biggl( \frac{1043}{162}+8{\sf B_4}-72\zeta_4 +\frac{577}{9}\zeta_3+\frac{\zeta_2}{3} \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_F^2 \Biggl( \frac{10255}{54}-16{\sf B_4}+72\zeta_4 -\frac{310}{3}\zeta_3+33\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_F^2C_F \Biggl( -\frac{1388}{81} -\frac{256}{9}\zeta_3-8\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( -\frac{4390}{81} +\frac{112}{9}\zeta_3-4\zeta_2 \Biggr) ~, \\ a_{qq,Q}^{(3), {\rm TR}}(3)&=& T_FC_FC_A \Biggl( \frac{327967}{21870} +\frac{104}{9}{\sf B_4}-104\zeta_4 +\frac{40001}{405}\zeta_3 +\frac{121}{81}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_F^2 \Biggl( \frac{1170943}{4374} -\frac{208}{9}{\sf B_4}+104\zeta_4 -\frac{1354}{9}\zeta_3 +\frac{3821}{81}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_F^2C_F \Biggl( -\frac{52096}{2187} -\frac{3328}{81}\zeta_3 -\frac{904}{81}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( -\frac{168704}{2187} +\frac{1456}{81}\zeta_3 -\frac{452}{81}\zeta_2 \Biggr) ~, \\ a_{qq,Q}^{(3), {\rm TR}}(4)&=& T_FC_FC_A \Biggl( \frac{4400353}{218700} +\frac{128}{9}{\sf B_4}-128\zeta_4 +\frac{52112}{405}\zeta_3 +\frac{250}{81}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && 
\hspace{-15mm} +T_FC_F^2 \Biggl( \frac{56375659}{174960} -\frac{256}{9}{\sf B_4}+128\zeta_4 -\frac{556}{3}\zeta_3 +\frac{4616}{81}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_F^2C_F \Biggl( -\frac{3195707}{109350} -\frac{4096}{81}\zeta_3 -\frac{1108}{81}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( -\frac{20731907}{218700} +\frac{1792}{81}\zeta_3 -\frac{554}{81}\zeta_2 \Biggr) ~, \\ a_{qq,Q}^{(3), {\rm TR}}(5)&=& T_FC_FC_A \Biggl( \frac{1436867309}{76545000} +\frac{736}{45}{\sf B_4} -\frac{736}{5}\zeta_4 +\frac{442628}{2835}\zeta_3 +\frac{8488}{2025}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_F^2 \Biggl( \frac{40410914719}{109350000} -\frac{1472}{45}{\sf B_4} +\frac{736}{5}\zeta_4 -\frac{47932}{225}\zeta_3 +\frac{662674}{10125}\zeta_2 \Biggr)\nonumber \end{eqnarray}\begin{eqnarray} && \hspace{-15mm} +T_F^2C_F \Biggl( -\frac{92220539}{2733750} -\frac{23552}{405}\zeta_3 -\frac{31924}{2025}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( -\frac{596707139}{5467500} +\frac{10304}{405}\zeta_3 -\frac{15962}{2025}\zeta_2 \Biggr) ~, \\ a_{qq,Q}^{(3), {\rm TR}}(6)&=& T_FC_FC_A \Biggl( \frac{807041747}{53581500} +\frac{272}{15}{\sf B_4} -\frac{816}{5}\zeta_4 +\frac{172138}{945}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{10837}{2025}\zeta_2 \Biggr) +T_FC_F^2 \Biggl( \frac{14845987993}{36450000} -\frac{544}{15}{\sf B_4} +\frac{816}{5}\zeta_4 -\frac{159296}{675}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{81181}{1125}\zeta_2 \Biggr) +T_F^2C_F \Biggl( -\frac{5036315611}{133953750} -\frac{8704}{135}\zeta_3 -\frac{35524}{2025}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( -\frac{32472719011}{267907500} +\frac{3808}{135}\zeta_3 -\frac{17762}{2025}\zeta_2 \Biggr) ~, \\ a_{qq,Q}^{(3), {\rm TR}}(7)&=& T_FC_FC_A \Biggl( \frac{413587780793}{52509870000} +\frac{688}{35}{\sf B_4} -\frac{6192}{35}\zeta_4 
+\frac{27982}{135}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{620686}{99225}\zeta_2 \Biggr) +T_FC_F^2 \Biggl( \frac{12873570421651}{29172150000} -\frac{1376}{35}{\sf B_4} +\frac{6192}{35}\zeta_4 \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{8454104}{33075}\zeta_3 +\frac{90495089}{1157625}\zeta_2 \Biggr) +T_F^2C_F \Biggl( -\frac{268946573689}{6563733750} -\frac{22016}{315}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{1894276}{99225}\zeta_2 \Biggr) +n_fT_F^2C_F \Biggl( -\frac{1727972700289}{13127467500} +\frac{1376}{45}\zeta_3 -\frac{947138}{99225}\zeta_2 \Biggr) ~, \\ a_{qq,Q}^{(3), {\rm TR}}(8)&=& T_FC_FC_A \Biggl( -\frac{91321974347}{112021056000} +\frac{2204}{105}{\sf B_4} -\frac{6612}{35}\zeta_4 +\frac{87613}{378}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{11372923}{1587600}\zeta_2 \Biggr) +T_FC_F^2 \Biggl( \frac{1316283829306051}{2800526400000} -\frac{4408}{105}{\sf B_4} +\frac{6612}{35}\zeta_4 \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{9020054}{33075}\zeta_3 +\frac{171321401}{2058000}\zeta_2 \Biggr) +T_F^2C_F \Biggl( -\frac{4618094363399}{105019740000} -\frac{70528}{945}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{2030251}{99225}\zeta_2 \Biggr) +n_fT_F^2C_F \Biggl( -\frac{29573247248999}{210039480000} +\frac{4408}{135}\zeta_3 -\frac{2030251}{198450}\zeta_2 \Biggr) ~, \nonumber \\ \\ a_{qq,Q}^{(3), {\rm TR}}(9)&=& T_FC_FC_A \Biggl( -\frac{17524721583739067}{1497161413440000} +\frac{20956}{945}{\sf B_4} -\frac{20956}{105}\zeta_4 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{9574759}{37422}\zeta_3 +\frac{16154189}{2041200}\zeta_2 \Biggr) +T_FC_F^2 \Biggl( \frac{1013649109952401819}{2041583745600000} -\frac{41912}{945}{\sf B_4}\nonumber \end{eqnarray}\begin{eqnarray} && \hspace{-15mm} +\frac{20956}{105}\zeta_4 -\frac{85698286}{297675}\zeta_3 +\frac{131876277049}{1500282000}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_F^2C_F \Biggl( -\frac{397003835114519}{8506598940000} 
-\frac{670592}{8505}\zeta_3 -\frac{19369859}{893025}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( -\frac{2534665670688119}{17013197880000} +\frac{41912}{1215}\zeta_3 -\frac{19369859}{1786050}\zeta_2 \Biggr) ~, \\ a_{qq,Q}^{(3), {\rm TR}}(10)&=& T_FC_FC_A \Biggl( -\frac{176834434840947469}{7485807067200000} +\frac{21964}{945}{\sf B_4} -\frac{21964}{105}\zeta_4 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{261607183}{935550}\zeta_3 +\frac{618627019}{71442000}\zeta_2 \Biggr) +T_FC_F^2 \Biggl( \frac{11669499797141374121}{22457421201600000} \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{43928}{945}{\sf B_4} +\frac{21964}{105}\zeta_4 -\frac{3590290}{11907}\zeta_3 +\frac{137983320397}{1500282000}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_F^2C_F \Biggl( -\frac{50558522757917663}{1029298471740000} -\frac{702848}{8505}\zeta_3 -\frac{4072951}{178605}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( -\frac{321908083399769663}{2058596943480000} +\frac{43928}{1215}\zeta_3 -\frac{4072951}{357210}\zeta_2 \Biggr) ~, \\ a_{qq,Q}^{(3), {\rm TR}}(11)&=& T_FC_FC_A \Biggl( -\frac{436508000489627050837}{11775174516705600000} +\frac{251684}{10395}{\sf B_4} -\frac{251684}{1155}\zeta_4 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{3687221539}{12162150}\zeta_3 +\frac{149112401}{16038000}\zeta_2 \Biggr) +T_FC_F^2 \Biggl( \frac{177979311179110818909401}{328799103812625600000} \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{503368}{10395}{\sf B_4} +\frac{251684}{1155}\zeta_4 -\frac{452259130}{1440747}\zeta_3 +\frac{191230589104127}{1996875342000}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_F^2C_F \Biggl( -\frac{6396997235105384423}{124545115080540000} -\frac{8053888}{93555}\zeta_3 -\frac{514841791}{21611205}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +n_fT_F^2C_F \Biggl( -\frac{40628987857774916423}{249090230161080000} +\frac{503368}{13365}\zeta_3 
-\frac{514841791}{43222410}\zeta_2 \Biggr) ~, \\ a_{qq,Q}^{(3), {\rm TR}}(12)&=& T_FC_FC_A \Biggl( -\frac{245210883820358086333}{4783664647411650000} +\frac{260924}{10395}{\sf B_4} -\frac{260924}{1155}\zeta_4 \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{3971470819}{12162150}\zeta_3 +\frac{85827712409}{8644482000}\zeta_2 \Biggr) +T_FC_F^2 \Biggl( \frac{2396383721714622551610173}{4274388349564132800000} \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{521848}{10395}{\sf B_4} +\frac{260924}{1155}\zeta_4 -\frac{468587596}{1440747}\zeta_3 +\frac{198011292882437}{1996875342000}\zeta_2 \Biggr) \nonumber \end{eqnarray}\begin{eqnarray} && \hspace{-15mm} +T_F^2C_F \Biggl( -\frac{1124652164258976877487}{21048124448611260000} -\frac{8349568}{93555}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{535118971}{21611205}\zeta_2 \Biggr) +n_fT_F^2C_F \Biggl( -\frac{7126865031281296825487}{42096248897222520000} \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{521848}{13365}\zeta_3 -\frac{535118971}{43222410}\zeta_2 \Biggr) ~,\\ a_{qq,Q}^{(3), {\rm TR}}(13)&=& T_FC_FC_A \Biggl( -\frac{430633219615523278883051}{6467514603300550800000} +\frac{3502892}{135135}{\sf B_4} \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{3502892}{15015}\zeta_4 +\frac{327241423}{935550}\zeta_3 +\frac{15314434459241}{1460917458000}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_FC_F^2 \Biggl( \frac{70680445585608577308861582893}{122080805651901196900800000} -\frac{7005784}{135135}{\sf B_4} \nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{3502892}{15015}\zeta_4 -\frac{81735983092}{243486243}\zeta_3 +\frac{449066258795623169}{4387135126374000}\zeta_2 \Biggr) \nonumber \\ \nonumber \\ && \hspace{-15mm} +T_F^2C_F \Biggl( -\frac{196897887865971730295303}{3557133031815302940000} -\frac{112092544}{1216215}\zeta_3 \nonumber \\ \nonumber \\ && \hspace{-15mm} -\frac{93611152819}{3652293645}\zeta_2 \Biggr) +n_fT_F^2C_F \Biggl( -\frac{1245167831299024242467303}{7114266063630605880000} 
\nonumber \\ \nonumber \\ && \hspace{-15mm} +\frac{7005784}{173745}\zeta_3 -\frac{93611152819}{7304587290}\zeta_2 \Biggr) ~. \end{eqnarray} \end{appendix} \newpage \bibliographystyle{h-physrev3}
\section{Introduction} \label{introduction} In a directed graph $G=(V,E)$, each edge $e\in E$ is directed from its source $s(e)$ to its target $t(e)$. The directed line graph $\mathcal{L} G$ of $G$ is the directed graph with vertex set $E$, and with an edge $(e,f)$ for every pair of edges in $G$ such that $t(e)=s(f)$. A spanning tree of $G$ rooted at a vertex $r$ is an edge-induced subgraph of $G$ in which there is a unique path from $v$ to $r$, for all $v\in V$. We denote the indegree and outdegree of a vertex $v$ by $\text{indeg}(v)$ and $\text{outdeg}(v)$, respectively, and we denote the number of spanning trees of $G$ by $\kappa(G)$. Knuth proved that if every vertex of $G$ has indegree greater than 0, then \[\kappa(\mathcal{L} G)=\kappa(G)\prod_{v\in V} \text{outdeg}(v)^{\text{indeg}(v)-1}.\] Knuth's proof relied on the Matrix-Tree Theorem. In his paper, he noted that the simple form of this result suggested that a bijective proof was possible, but that it was not at all obvious how to find such a bijection \cite{Kn}. In fact, there are even stronger relations between $\kappa(\mathcal{L} G)$ and $\kappa(G)$. Let $\{x_v|v\in V\}$ and $\{x_e|e\in E\}$ be variables indexed by the vertices and edges of $G$. The vertex and edge generating functions of $G$ are defined as follows, where the sums are taken over all rooted spanning trees $T$ of $G$. \[\kappa^{edge}(G)=\sum_T \prod_{e\in T}x_e,\ \ \kappa^{vertex}(G)=\sum_T \prod_{e\in T}x_{t(e)}\] Levine used linear algebraic methods to prove the following generalization of Knuth's result \cite{Le}. Our first result in this paper is a bijective proof of Levine's theorem, which yields a bijective proof of Knuth's theorem as a special case. \\ \\ \noindent \textbf{Theorem~\ref{thm1.1}.} \textit{Let $G=(V,E)$ be a directed graph in which every vertex has indegree greater than 0.
Then} \[\kappa^{vertex}(\mathcal{L}G)=\kappa^{edge}(G)\prod_{v\in V} \left(\sum_{s(e)=v} x_e\right)^{\text{indeg}(v)-1} \] \\ Using this bijection, we are able to answer the following open question posed by Stanley. \\ \\ \textbf{Exercise 5.73 from \cite{St}.} Let $\mathcal{B}(n)$ be the set of binary de Bruijn sequences of degree $n$, and let $\mathcal{S}_n$ be the set of all binary sequences of length $2^n$. Find an explicit bijection $\mathcal{B}(n)\times \mathcal{B}(n)\rightarrow \mathcal{S}_n$. \\ \\ The critical group $K(G)$ of a graph $G$ is a finite abelian group whose order is the number of spanning trees of $G$. Critical groups have applications in statistical physics \cite{Ho}, algebraic combinatorics \cite{Le}, and arithmetic geometry \cite{Be}. We review the definition of this group in Section~\ref{definitions}. The Kautz graphs $\text{Kautz}_n(m)$ and the de Bruijn graphs $DB_n(m)$ are families of iterated line graphs. $\text{Kautz}_1(m)$ is the complete directed graph on $m+1$ vertices, without self-loops, and $DB_1(m)$ is the complete directed graph on $m$ vertices, with self-loops. These families are defined for $n>1$ as follows. \[\text{Kautz}_n(m)=\mathcal{L}^{n-1}\text{Kautz}_1(m),\ \ DB_n(m)=\mathcal{L}^{n-1}DB_1(m)\] Levine recently determined $K(DB_n(2))$ and $K(\text{Kautz}_n(m))$, where $m$ is prime \cite{Le}. We generalize these results, proving the following characterizations of the critical groups of all the Kautz and de Bruijn graphs.
\\ \\ \noindent \textbf{Theorem~\ref{db}.} \textit{The critical group of $DB_n(m)$ is} \[K\left(DB_n(m)\right)=\left(\mathbb{Z}_{m^n}\right)^{m-2}\oplus\bigoplus_{i=1}^{n-1} \left(\mathbb{Z}_{m^i}\right)^{m^{n-1-i}(m-1)^2}\] \\ \noindent \textbf{Theorem~\ref{kautz}.} \textit{The critical group of $\text{Kautz}_n(m)$ is} \[K\left(\text{Kautz}_n(m)\right)=\left(\mathbb{Z}_{m+1}\right)^{m-1}\oplus \left(\mathbb{Z}_{m^{n-1}}\right)^{m^2-2}\oplus\bigoplus_{i=1}^{n-2} \left(\mathbb{Z}_{m^i}\right)^{m^{n-2-i}(m-1)^2(m+1)}\] \\ The rest of this paper is organized as follows. In Section~\ref{definitions} we provide background and definitions. In Section~\ref{bijection}, we introduce a bijection which proves Theorem~\ref{thm1.1}. We apply this bijection in Section~\ref{dbsequence} to construct a bijection between binary de Bruijn sequences of order $n$ and binary sequences of length $2^{n-1}$. Finally, in Section~\ref{kautzsection}, we prove Theorems~\ref{db} and~\ref{kautz}, giving a complete description of the critical groups of the Kautz and de Bruijn graphs. \section{Background and definitions} \label{definitions} In a directed graph $G=(V,E)$, each edge $e\in E$ is directed from its \textit{source} $s(e)$ to its \textit{target} $t(e)$. \begin{definition}[Directed line graph] Let $G=(V,E)$ be a directed graph. The \textit{directed line graph} $\mathcal{L}G$ is a directed graph with vertex set $E$, and with an edge $(e,f)$ for every pair of edges $e$ and $f$ of $G$ with $t(e)=s(f)$. \end{definition} \begin{center}\includegraphics[height=2.4cm]{dlg} \\ \textit{Figure 2.1. A directed graph and its line graph.} \end{center} At times we may speak of a subset $F$ of $E$ as a subgraph of $G$ - in this case we mean the subgraph $(V,F)$. If $H$ is a subgraph of $G$ and $v$ is in $H$, we denote the indegree of $v$ in $H$ by $\text{indeg}_H(v)$, and the outdegree by $\text{outdeg}_H(v)$. \begin{definition}[Oriented spanning tree] Let $G=(V,E)$ be a directed graph.
An \textit{oriented spanning tree} of $G$ is an acyclic subgraph of $G$ with a distinguished node, the \textit{root}, in which there is a unique path from every vertex $v\in V$ to the root. We refer to these trees as \textit{spanning trees}. \end{definition} Let $T$ be a spanning tree of $G$. Every vertex of $G$ has outdegree 1 in $T$, except the root, which has outdegree 0. We denote the number of spanning trees of $G$ by $\kappa(G)$, and the number of spanning trees rooted at $r$ by $\kappa(G,r)$. Let $G=(V,E)$ be a strongly-connected directed graph, and let $\mathbb{Z}^V$ be the free abelian group generated by vertices of $G$ -- the group of formal linear combinations of vertices of $G$. We define $\Delta_v\in \mathbb{Z}^V$, for all $v\in V$, as follows. \[\Delta_v = \sum_{e\in E\ s.t\ s(e)=v} (t(e)-v) \] The \textit{sandpile group} $K(G,r)$ with \textit{sink} $r$ is the quotient group \[K(G,r)=\mathbb{Z}^V/(r,\Delta_v|v\in V\backslash r)\] It is well-known that the order of $K(G,r)$ is $\kappa(G,r)$. A directed graph $G$ is \textit{Eulerian} if $\text{indeg}(v)=\text{outdeg}(v)$ for all vertices $v$ in $V$. According to Lemma 4.12 of \cite{Ho}, if $G$ is Eulerian, the sandpile groups $K(G,r_1)$ and $K(G,r_2)$ are isomorphic for any two $r_1,r_2$ in $V$. In this case, we call the group the \textit{critical group} $K(G)$. \begin{definition}[The Laplacian] Let $G=(V,E)$ be a finite directed graph with vertices $v_1,v_2,\ldots v_{|V|}$. The \textit{adjacency matrix} $A(G)$ of $G$ is the $|V|\times |V|$ matrix in which $A(G)_{ij}$ is the multiplicity of the edge $(v_i,v_j)$ in $E$. The \textit{degree matrix} $D(G)$ is the $|V|\times |V|$ diagonal matrix in which $D_{ii}=\text{outdeg}(v_i)$. The \textit{Laplacian} $L(G)$ of $G$ is defined as $A(G)-D(G)$. \end{definition} Note that the row vectors of $L(G)$ are the elements $\Delta_v$. We consider $L(G)^T$ as a $\mathbb{Z}$-linear operator on $\mathbb{Z}^V$ -- its image is the subgroup generated by the $\Delta_v$.
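Since the order of $K(G,r)$ is $\kappa(G,r)$, the group order can be computed directly from the Laplacian just defined. The sketch below is ours, not the paper's; it relies on the standard directed Matrix-Tree theorem (which the introduction alludes to): with the convention $L(G)=A(G)-D(G)$ used here, $\kappa(G,r)$ is the absolute value of the determinant of $L(G)$ with row and column $r$ deleted.

```python
def det(M):
    # integer determinant by cofactor expansion along the first row;
    # adequate for the tiny matrices used in this illustration
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def laplacian(n, edges):
    # L(G) = A(G) - D(G), following the convention in the text;
    # edges are (source, target) pairs on vertices 0..n-1
    L = [[0] * n for _ in range(n)]
    for s, t in edges:
        L[s][t] += 1   # adjacency contribution
        L[s][s] -= 1   # outdegree contribution
    return L

def kappa(n, edges, r):
    # number of oriented spanning trees rooted at r: |det| of the
    # Laplacian with row and column r deleted (directed Matrix-Tree theorem)
    L = laplacian(n, edges)
    minor = [[L[i][j] for j in range(n) if j != r]
             for i in range(n) if i != r]
    return abs(det(minor))

# bidirected triangle: each of its 3 undirected spanning trees orients
# uniquely toward any chosen root, so kappa(G, r) = 3 = |K(G, r)|
triangle = [(0, 1), (1, 0), (1, 2), (2, 1), (0, 2), (2, 0)]
```

For instance, `kappa(3, triangle, 0)` evaluates to 3, while a directed 3-cycle has a single spanning tree at each root.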
For a strongly-connected Eulerian graph $G$, the Laplacian has exactly one eigenvalue 0, so for such a graph $G$, we have \[\mathbb{Z}^V/\text{im}\ L(G)\cong \mathbb{Z}^V/\text{im}\ L(G)^T\cong \mathbb{Z}\oplus K(G)\] The following elementary row and column operations on matrices with entries in a ring $R$ are invertible over $R$. \begin{enumerate}[-] \item Permuting two rows (columns) \item Adding a multiple of a row (column) by an element of $R$ to another row (column) \item Multiplying the entries of a row (column) by a unit \end{enumerate} If $L'$ is obtained from $L(G)$ by invertible row and column operations over $\mathbb{Z}$, then $\mathbb{Z}^V/\text{Im}\ L' \cong \mathbb{Z}^V/\text{Im}\ L(G)$. Suppose that $R$ is a principal ideal domain. Under these operations, any matrix with entries in $R$ is equivalent to a matrix in \textit{Smith normal form}. A matrix in this form is diagonal, and its diagonal entries $x_{11},x_{22},\ldots x_{nn}$ are elements of $R$ such that $x_{(i+1)(i+1)}$ is a multiple of $x_{ii}$ for all $i<n$. These entries are called the \textit{invariant factors} of the original integer matrix, and they are unique up to multiplication by units. If the invariant factors of $L(G)$ over $\mathbb{Z}$ are $x_{11},x_{22},\ldots x_{nn}$ then \[\mathbb{Z}^V/\text{Im}\ L(G) = \bigoplus_{i=1}^n \mathbb{Z}_{x_{ii}}\] Thus, row-reducing the Laplacian yields information about the critical group. \section{Counting spanning trees} \label{bijection} Let $G=(V,E)$ be a directed graph, and let $\{x_v\}_{v\in V}$ and $\{x_e\}_{e\in E}$ be variables indexed by the vertices and edges of $G$. The \textit{edge and vertex generating functions}, which enumerate the spanning trees of $G$, are defined as follows \[\kappa^{edge}(G)=\sum_T \prod_{e\in T}x_e\] \[\kappa^{vertex}(G)=\sum_T \prod_{e\in T}x_{t(e)}\] \noindent where $T$ ranges over all spanning trees of $G$. 
In this section, we give a bijective proof of the following identity, solving a problem posed by Levine in \cite{Le}. \begin{theorem} \label{thm1.1} Let $G=(V,E)$ be a directed graph in which every vertex has indegree greater than 0. Then: \begin{equation} \label{kthm} \kappa^{vertex}(\mathcal{L}G)=\kappa^{edge}(G)\prod_{v\in V} \left(\sum_{s(e)=v} x_e\right)^{\text{indeg}(v)-1} \end{equation} \end{theorem} In order to find a bijection, we adopt the following strategy. We put an arbitrary total order on the edges in $E$. \begin{enumerate}[-] \item We provide a bijection between monomial terms on the right-hand side of Eq. (\ref{kthm}) and \textit{tree arrays}, which are arrays of lists, one list for each vertex $v\in V$. \item Then we present a map $\sigma$ that takes a tree array to a spanning tree of $\mathcal{L} G$ which contributes the same term to the left-hand side of Eq. (\ref{kthm}). \item Finally, we show that $\sigma$ is bijective by constructing an inverse map $\pi$ which takes a spanning tree of $\mathcal{L} G$ to a tree array. \end{enumerate} We define a \textit{list} to be an ordered tuple of edges. We \textit{append} an element $x$ to a list $l$ by adding $x$ to the end of $l$. We \textit{pop} list $l$ by removing the first element of $l$. We denote the number of times an element $e$ appears in a list $l$ by $N(l,e)$. Let $v$ be a vertex of $G$ and let $l'_v$ be a list with $\text{indeg}(v)-1$ elements, all of which are edges with source $v$. We map $l'_v$ to a monomial term of $(\sum_{s(e)=v} x_e)^{\text{indeg}(v)-1}$, as follows. \[l'_v=(e_1,e_2,\ldots e_{\text{indeg}(v)-1})\rightarrow x_{e_1}x_{e_2}\ldots x_{e_{\text{indeg}(v)-1}}\] This map provides a bijection between lists $l'_v$ and terms of $(\sum_{s(e)=v} x_e)^{\text{indeg}(v)-1}$. Therefore, a term on the right-hand side of Eq. (\ref{kthm}) corresponds to a choice of spanning tree $T$ of $G$ and a choice of one such list $l'_v$ for each vertex $v$.
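The strategy just outlined can be made concrete in code. The following is a minimal Python sketch (all names are ours) of the maps $\sigma$ and $\pi$ as they are described step by step in this section, assuming a simple graph whose edges are (source, target) pairs ordered lexicographically, a tree array given as a dict from vertices to lists, and a sentinel `OMEGA` standing for $\Omega$.

```python
OMEGA = object()   # sentinel playing the role of the symbol Omega

def sigma(edges, tree_array):
    # edges: ordered list of (source, target) pairs of G
    # tree_array: dict vertex -> list l_v, with OMEGA last in the root's list
    # returns the edge set of the resulting spanning tree of the line graph
    lists = {v: l[:] for v, l in tree_array.items()}   # work on a copy
    tree, has_outedge = set(), {e: False for e in edges}
    while True:
        # Step 1: edges e with N(l_{s(e)}, e) = 0 and outdegree 0 in T'
        R = [e for e in edges
             if not has_outedge[e] and lists[e[0]].count(e) == 0]
        f = min(R)                 # smallest such edge under the order on E
        g = lists[f[1]].pop(0)     # Step 2: pop the first element of l_{t(f)}
        if g is OMEGA:
            return tree
        tree.add((f, g))           # Step 3: add (f, g) to T' and repeat
        has_outedge[f] = True

def pi(edges, tree):
    # tree: set of line-graph edges forming a spanning tree on vertex set `edges`
    # returns the reconstructed tree array
    tree = set(tree)
    lists = {v: [] for v in {e[0] for e in edges} | {e[1] for e in edges}}
    remaining = set(edges)
    while True:
        targets = {g for (_, g) in tree}
        f = min(e for e in remaining if e not in targets)   # smallest leaf
        out = [(a, b) for (a, b) in tree if a == f]
        if not out:                        # f is the root of T'
            lists[f[1]].append(OMEGA)
            return lists
        g = out[0][1]
        tree.remove((f, g))
        remaining.remove(f)
        lists[f[1]].append(g)

# Example: DB_1(2) has edges (0,0),(0,1),(1,0),(1,1); the tree array
# l_0 = [(0,1), OMEGA], l_1 = [(1,1), (1,0)] encodes the spanning tree
# {(1,0)} of G rooted at 0 plus one extra edge choice per vertex.
edges = [(0, 0), (0, 1), (1, 0), (1, 1)]
arr = {0: [(0, 1), OMEGA], 1: [(1, 1), (1, 0)]}
```

On this example `sigma` produces the Hamiltonian path $(0,0)\to(0,1)\to(1,1)\to(1,0)$ in $DB_2(2)$, and `pi` recovers the original tree array, illustrating that the two maps are mutually inverse.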
Suppose a monomial term on the right-hand side of Eq. (\ref{kthm}) corresponds to a spanning tree $T$ rooted at $r$ and an array of lists $\langle l'_v\rangle$. For each vertex $v\in V\backslash r$, we obtain $l_v$ by appending the unique edge $e$ in $T$ with source $v$ to the list $l'_v$. We obtain $l_r$ by appending a new variable $\Omega$ to $l'_r$. Each list $l_v$ has length $\text{indeg}(v)$, for $v\in V$. We call an array of lists $\langle l_v \rangle_{v\in V}$ obtained in this way a \textit{tree array}. By construction, terms on the right-hand side of Eq. (\ref{kthm}) are in bijection with tree arrays. We now define the bijective map $\sigma$, which takes a tree array of $G$ to a spanning tree of $\mathcal{L} G$. \vspace{.1in} \noindent \textbf{The bijection $\sigma$:} We start with a tree array $\langle l_v \rangle$ and an empty subgraph $T'$ of $\mathcal{L} G$. Then we run the following algorithm. \begin{enumerate}[Step 1.] \item Let $R$ be the subset of edges $e$ of $G$ for which $N(l_{s(e)},e)=0$ and $\text{outdeg}_{T'}(e)=0$. Let $f$ be the smallest edge in $R$ under the order on $E$. \item Pop the first element $g$ from the list $l_{t(f)}$. If $g$ is $\Omega$, then $\sigma(\langle l_v\rangle)=T'$. \item Otherwise, $g\in E$ and $s(g)=t(f)$. Add the edge $(f,g)$ to $T'$, and then return to step 1. \end{enumerate} We also define a map $\pi$ which takes a spanning tree of $\mathcal{L} G$ to a tree array of $G$. \vspace{.1in} \noindent \textbf{The inverse map $\pi$:} We start with a spanning tree $T'$ of $\mathcal{L} G$, and an empty list $l_v$ at each vertex $v\in V$. This map is given by another algorithm. \begin{enumerate}[Step 1.] \item Let $S$ be the set of leaves of $T'$. Let $f$ be the smallest edge in $S$ under the order on $E$. \item If $f$ is not the root of $T'$, remove $f$ and its outedge $(f,g)$ from $T'$, and append $g$ to $l_{t(f)}$. Go back to step 1.
\item If $f$ is the root of $T'$, append $\Omega$ to $l_{t(f)}$, and return the array of lists. \end{enumerate} As an example, we apply $\sigma$ to a tree array in a small directed graph $G$. We order the edges of $G$ by the lexicographic order. \begin{center}\includegraphics[height=2.4cm]{bij1} \\ \textit{Figure 3.1. The graph $G$, with a spanning tree $T$ highlighted in red. Below the graph is a monomial term of $\kappa^{vertex}(G)$, where $x_{ij}$ is the variable for edge $(i,j)$. The tree array corresponding to this term is shown to the right. In the term and the tree array, red elements correspond to edges of the tree.} \end{center} \vspace{.025in} \begin{center}\includegraphics[height=2.1cm]{bij2}\vspace{.025in} \\ \textit{Figure 3.2. The first two edges added to $T'$ by the algorithm for $\sigma$. Initially, the edges $(1,2)$ and $(1,3)$ do not appear in the lists. We pop $(2,5)$ from $l_2$ and add the edge $((1,2),(2,5))$ to $T'$. Then the edges $(1,3)$ and $(5,4)$ have outdegree 0 in $T'$ and do not appear in the lists. We pop $(3,4)$ from $l_3$ and add $((1,3),(3,4))$ to $T'$. } \end{center} \begin{center}\includegraphics[height=5.1cm]{bij3}\vspace{.025in} \\ \textit{Figure 3.3. The last three edges added to $T'$, and the final tree. The last element left in the lists of the tree array is $\Omega$.} \end{center} In order to prove Theorem~\ref{thm1.1}, we first prove three lemmas. In the definition of the algorithm for the map $\sigma$, we assumed that the set $R$ is always non-empty in step 1 and that the list $l_{t(f)}$ is always non-empty in step 2. In Lemma~\ref{welldef}, we show that both assumptions are valid. \begin{lemma} \label{welldef} The algorithm used to define the map $\sigma$ is well-defined: at step 1, the set $R$ is non-empty, and at step 2, the list $l_{t(f)}$ is non-empty. \end{lemma} \begin{proof} After $k$ edges have been added to $T'$, there are $|E|-k$ elements left in all the lists $l_v$, where one of the elements is $\Omega$.
There are $|E|-k-1$ edges left in the lists, but there are $|E|-k$ edges of $G$ which do not have an outedge in $T'$, so $R$ must be non-empty in step 1. Every time we pop $l_v$, we add an edge $(f,g)$ to $T'$, where $t(f)=v$. When we are at step 2, $\text{outdeg}_{T'}(f)=0$, so at most $\text{indeg}(t(f))-1$ of the elements of $l_{t(f)}$ have been popped. Therefore, the list $l_{t(f)}$ is always non-empty at step 2. The algorithm is well-defined. \end{proof} \vspace{.05in} The following lemma shows that $\sigma$ takes a tree array corresponding to a term on the right-hand side of Eq. (\ref{kthm}) to a spanning tree which contributes the same term to the left-hand side. \begin{lemma} \label{correctterm} Suppose that $\langle l_v \rangle$ is a tree array and that $\sigma(\langle l_v \rangle)=T'$. Then $T'$ is a spanning tree of $\mathcal{L} G$, and $\text{indeg}_{T'}(e)=N(l_{s(e)},e)$ for all $e\in E$, where $N(l_{s(e)},e)$ is computed in the original tree array. \end{lemma} \begin{proof} Let $I(e)$ be the initial value of $N(l_{s(e)},e)$. By the definition of a tree array, the edges which are the last elements of the lists $l_v$ form a spanning tree $T$ of $G$. We claim that $T'$ is acyclic, because the last edge of a cycle is never included in $T'$. While the algorithm is running, suppose that $(e_n,e_1)$ is not an edge of $T'$, and that it completes a cycle $(e_1,e_2),(e_2,e_3),\ldots (e_{n-1},e_n)$ of edges in $T'$. Since $(e_1,e_2)$ was already added to $T'$, $N(l_{s(e_1)},e_1)$ must be 0. Therefore, $(e_n,e_1)$ will never be added to $T'$. We say a vertex $v\in V$ is \textit{cleared} if all the elements of its list are popped. Suppose that $e=(v,w)$ is an edge in $T$. The list $l_w$ is cleared when all the edges of $G$ with target $w$ have an outedge in $T'$. Then $w$ can only be cleared after an outedge $(e,f)$ of $e$ is added to $T'$. The edge $(e,f)$ can only be added to $T'$ when $N(l_v,e)=0$. Because $e$ is an edge of $T$, it is the last element of $l_v$, so $v$ must be cleared before $w$ can be cleared.
The algorithm terminates when $\Omega$ is popped from $l_r$, which occurs when $r$ is cleared. There is a path $(v,v_1,v_2,\ldots v_k, r)$ in $T$ from any vertex $v$ to $r$. Therefore $r$ can only be cleared after all the vertices on this path are cleared. Thus, all the vertices of $G$ are cleared when the algorithm finishes, so there are $|E|-1$ edges in the subgraph $T'$. All the vertices of $\mathcal{L} G$ have an outedge in $T'$, except one. Since $T'$ is acyclic, it is a spanning tree of $\mathcal{L} G$. Because $\text{indeg}_{T'}(e)+N(l_{s(e)},e)$ is constant, when the algorithm returns $T'$, $\text{indeg}_{T'}(e)=I(e)$ for all $e\in E$. \end{proof} \vspace{.05in} In our final lemma, we show that $\pi$ will take a spanning tree $T'$ of $\mathcal{L} G$ and reconstruct a tree array $\langle l_v\rangle$. \begin{lemma} \label{Ttree} Suppose $T'$ is a spanning tree of $\mathcal{L} G$ with root $r'$, and that $\pi$ takes $T'$ to the array of lists $\langle l_v \rangle$. Then $\langle l_v \rangle$ is a tree array, which means that \begin{enumerate}[(a)] \item The length of $l_v$ is $\text{indeg}(v)$, for all $v\in V$. \item Every element of $l_v$ is an edge with source $v$, for all vertices $v$ except $t(r')$. The last element of $l_{t(r')}$ is $\Omega$, and every other element of $l_{t(r')}$ is an edge with source $t(r')$. \item The set $T$ of edges which are the last elements of the lists $\{l_v | v\in V\backslash t(r')\}$ is a spanning tree of $G$. \end{enumerate} \end{lemma} \begin{proof} We first show parts (a) and (b). Each time an edge $e\in E$ is removed from $T'$, an element is appended to the list $l_{t(e)}$. Since $r'$ can only be removed after all the other vertices of $T'$, this algorithm adds $\text{indeg}(v)$ elements to $l_v$ for all $v\in V$, so part (a) holds. Every element of the list $l_v$ is an edge with source $v$, with the exception of $\Omega$, which is the last element of $l_{t(r')}$, so part (b) holds.
While the algorithm $\pi$ is running, we say a vertex $v\in V$ is \textit{filled} if $l_v$ has $\text{indeg}(v)$ elements. Every vertex is eventually filled, so the order in which vertices are filled is a total order on $V$. We claim that this order is a topological sort of the subgraph $T$. Suppose that $f=(v,w)$ is the last element of $l_v$ for some vertex $v$ other than $t(r')$. Vertex $v$ was filled at step 2, right after some leaf $e$ and its outedge $(e,f)$ were removed from $T'$. However, $w$ cannot be filled until $f$ is removed from $T'$, which happens after $e$ and $(e,f)$ are removed from $T'$. Therefore $v$ is filled before $w$, so the filling ordering on $V$ is a topological sort of $T$, and $T$ is acyclic. After $\pi$ terminates, $t(r')$ has no outedge in $T$ and every other vertex of $G$ has one outedge. Then $T$ is a spanning tree of $G$, and part (c) holds. \end{proof} \vspace{.05in} We now prove the main result. \begin{proof}[\textbf{Proof of Theorem~\ref{thm1.1}}] Let $\langle l_v \rangle$ be a tree array and let $T'=\sigma(\langle l_v \rangle)$. We show the following claim by induction on $n$: after $n$ edges have been added to $T'$ by the algorithm for $\sigma(\langle l_v \rangle)$, and $n$ edges have been removed from $T'$ by the algorithm for $\pi(T')$, we have \begin{enumerate}[(a)] \item The set of edges added to $T'$ by $\sigma$ is the set of edges removed by $\pi$. \item The elements popped from $l_v$ by $\sigma$ are exactly the elements added to $l_v$ by $\pi$, in the same order.
Both claims hold for $n=k+1$. By induction, they hold for all $n\le |E|-1$. When $n=|E|-1$, condition (b) implies that $\pi(T')=\langle l_v \rangle$. Then $\pi$ is a left inverse of $\sigma$, and $\sigma$ is injective. By similar reasoning, $\pi$ is a right inverse of $\sigma$, and $\sigma$ is surjective. So $\sigma$ is a bijection between tree arrays in $G$ and spanning trees of $\mathcal{L} G$. The correspondence that $\sigma$ induces between equal terms in Eq. (\ref{kthm}) proves Theorem~\ref{thm1.1}. \end{proof} \section{The de Bruijn bijection} \label{dbsequence} A \textit{binary de Bruijn sequence of degree $n$} is a cyclic binary sequence $B$ such that every binary sequence of length $n$ appears as a subsequence of consecutive elements of $B$ exactly once. For example, 0011 is a binary de Bruijn sequence of degree 2, since its cyclic subsequences of length 2 are 00, 01, 11, and 10. It is well-known that there are $2^{2^{n-1}}$ binary de Bruijn sequences of degree $n$. Stanley posed the following open problem in \cite{St}. \vspace{.1in} \noindent \textbf{Exercise 5.73 of \cite{St}.} Let $\mathcal{B}(n)$ be the set of binary de Bruijn sequences of degree $n$, and let $\mathcal{S}_n$ be the set of all binary sequences of length $2^n$. Find an explicit bijection $\mathcal{B}(n)\times \mathcal{B}(n) \rightarrow \mathcal{S}_n$. \vspace{.1in} Our solution to this problem involves the de Bruijn graphs, which are closely related to de Bruijn sequences. \begin{definition}[de Bruijn graph] The \textit{de Bruijn graph} $DB_n(m)$ has $m^n$ vertices, which are identified with the strings of length $n$ on $m$ symbols. The edges of the graph are labeled with the strings of length $n+1$ on $m$ symbols. The edge $s_0s_1\ldots s_n$ has source $s_0s_1\ldots s_{n-1}$ and target $s_1s_2\ldots s_n$. \end{definition} An edge of $DB_n(m)$ can be identified with the vertex of $DB_{n+1}(m)$ that is labeled with the same string of length $n+1$.
With this identification, we have \[DB_n(m)=\mathcal{L}DB_{n-1}(m)\] Each vertex $v=s_0s_1\ldots s_{n-1}$ of $DB_n(2)$ has two outedges, $s_0s_1\ldots s_{n-1}0$ and $s_0s_1\ldots s_{n-1}1$. We call these edges the \textit{zero edge} of $v$ and the \textit{one edge} of $v$, respectively. It is well-known that binary de Bruijn sequences of degree $n$ are in bijection with Hamiltonian paths in $DB_n(2)$. Let $B=b_0b_1\ldots b_{2^n-1}$ be a binary de Bruijn sequence of degree $n$. Let $v_i=b_ib_{i+1}\ldots b_{i+n-1}$, for $0\le i\le 2^n-1$, where indices are taken mod $2^n$. The path $(v_0,v_1,\ldots v_{2^n-1})$ is the corresponding Hamiltonian path in $DB_n(2)$. \begin{theorem} There is an explicit bijection between $\mathcal{B}(n)$ and the set of binary sequences of length $2^{n-1}$, for $n>1$. \end{theorem} \begin{proof} We describe a bijection between Hamiltonian paths in $DB_n(2)$ and binary sequences of length $2^{n-1}$. By composing this bijection with the map between de Bruijn sequences and Hamiltonian paths, we construct the desired bijection. We order the vertices in $DB_k(2)$ by the lexicographic order on their associated binary strings, for $1\le k\le n$. Let $(v_1,\ldots v_{2^n})$ be a Hamiltonian path in $DB_n(2)$. This path is an oriented spanning tree of $DB_n(2)$, so we can apply the inverse map $\pi$ defined in Section \ref{bijection} to it. Let $A_{n-1}$ be the tree array $A_{n-1}=\pi(v_1,\ldots v_{2^n})$. We recursively define a sequence of tree arrays $A_k$, for $1\le k\le n-1$. Suppose we have a tree array $A_{k+1}$ in $DB_{k+1}(2)$. Let $T_{k+1}$ be the spanning tree consisting of the edges which are the last elements of the lists in $A_{k+1}$. We define $A_k$ to be $\pi(T_{k+1})$. We construct a binary sequence $s_1s_2\ldots s_{2^{n-1}}$ from these tree arrays. We denote vertex $w$'s list in the tree array $A_k$ by $(A_k)_w$. Let $s_{2^{n-1}}$ be 0 if the first element of $(A_{n-1})_{s(v_{2^n})}$ is the zero edge of $s(v_{2^n})$, and 1 otherwise.
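The correspondence between de Bruijn sequences and vertex paths recalled above is easy to sketch in code (Python; the degree-3 sequence used below is a standard example):

```python
def seq_to_path(B, n):
    """Map a de Bruijn sequence B = b_0 ... b_{2^n - 1} to the vertex
    sequence v_i = b_i b_{i+1} ... b_{i+n-1}, indices taken mod 2^n."""
    L = len(B)
    return [tuple(B[(i + j) % L] for j in range(n)) for i in range(L)]

B = (0, 0, 0, 1, 0, 1, 1, 1)          # a de Bruijn sequence of degree 3
path = seq_to_path(B, 3)

# The path visits each vertex of DB_3(2) exactly once ...
assert len(set(path)) == len(path) == 2 ** 3
# ... and consecutive vertices are joined by an edge of DB_3(2):
# the target of an edge drops the first symbol and appends one new symbol.
for v, w in zip(path, path[1:]):
    assert v[1:] == w[:-1]
```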
We define $s_{2^k}$ through $s_{2^{k+1}-1}$, for $1\le k\le n-2$, as follows. Let $w_1,w_2,\ldots w_{2^k}$ be the vertices of $DB_k(2)$, in lexicographic order. Let $s_{2^k+i-1}$ be 0 if the first element of $(A_k)_{w_i}$ is the zero edge of $w_i$, and 1 otherwise. Let $s_1$ be 0 if $T_1$ is rooted at vertex 0, and 1 otherwise. The string $s_1s_2\ldots s_{2^{n-1}}$ is the binary sequence that corresponds to the Hamiltonian path we began with. Now we construct the inverse map, from binary sequences to Hamiltonian paths. Given any binary sequence $S$ of length $2^{n-1}$, we use the first $2^{n-1}-1$ characters of the sequence to invert the previous procedure and construct a sequence of spanning trees $T_1,T_2,\ldots T_{n-1}$. The tree $T_k$ will be a spanning tree of $DB_k(2)$. We determine $T_k$ recursively. The tree $T_1$ in $DB_1(2)$ is rooted at 0 if $s_1$ is 0, and rooted at 1 otherwise. Assume that the first $2^k-1$ characters of $S$ determine a spanning tree $T_k$ of $DB_k(2)$, where $k\le n-2$. We choose a tree array $A_k$ of $DB_k(2)$ using this tree and the next $2^k$ characters of $S$, as follows. Let the vertices of $DB_k(2)$ be $w_1,w_2,\ldots w_{2^k}$, in lexicographic order. The first element of $(A_k)_{w_i}$ is the zero edge of $w_i$ if $s_{2^k+i-1}$ is 0, and the one edge of $w_i$ otherwise. The second element of $(A_k)_{w_i}$ comes from $T_k$. We define $T_{k+1}$ to be $\sigma(A_k)$, using the map defined in Section \ref{bijection}. We use $T_{n-1}$ to construct a tree array $A_{n-1}$ such that $\sigma(A_{n-1})$ is a Hamiltonian path in $DB_n(2)$. Let $r$ be the root of $T_{n-1}$, and let $v$ be another arbitrary vertex. The list $l_v$ in the array $A_{n-1}$ must contain two distinct edges, if $\sigma(A_{n-1})$ is a Hamiltonian path. The second edge in $l_v$ must be the unique edge in $T_{n-1}$ with source $v$, so $l_v$ is determined. 
Our only remaining choice is which of the two edges of $DB_{n-1}(2)$ with source $r$ to include in $l_r$, which we determine by $s_{2^{n-1}}$. Clearly, this map from binary sequences to Hamiltonian paths inverts the map from Hamiltonian paths to binary sequences. Therefore, our first map is the bijection we need. \end{proof} \vspace{.05in} This bijection can easily be generalized to count the $k$-ary de Bruijn sequences, in which the 2-symbol alphabet $\{0,1\}$ is replaced with the $k$-symbol alphabet $\{0,1,\ldots k-1\}$. \section{The Kautz and de Bruijn graphs} \label{kautzsection} In this section, we determine the critical groups of all the Kautz graphs and the de Bruijn graphs. The critical groups of these graphs have been found in some special cases by Levine \cite{Le}. The Kautz graphs are similar to the de Bruijn graphs, except that the vertices are indexed by \textit{Kautz strings}. A Kautz string is a string in which no two adjacent characters are the same. \begin{definition}[Kautz graph] The \textit{Kautz graph} $\text{Kautz}_n(m)$ has $(m+1)m^{n-1}$ vertices, identified with the Kautz strings of length $n$ on $m+1$ symbols. The edges of the graph are labeled with the Kautz strings of length $n+1$ on $m+1$ symbols, such that the edge $s_0s_1\ldots s_n$ has source $s_0s_1\ldots s_{n-1}$ and target $s_1s_2\ldots s_n$. \end{definition} We also consider the Kautz and de Bruijn graphs as families of iterated line graphs. $\text{Kautz}_1(m)$ is the complete directed graph on $m+1$ vertices, without self-loops, and $DB_1(m)$ is the complete directed graph on $m$ vertices, with self-loops. Then for $n>1$, we have \[\text{Kautz}_{n+1}(m)=\mathcal{L}\text{Kautz}_n(m)=\mathcal{L}^n\text{Kautz}_1(m)\] \[DB_{n+1}(m)=\mathcal{L}DB_n(m)=\mathcal{L}^nDB_1(m)\] We say a directed graph $G=(V,E)$ is \textit{balanced $k$-regular} if $\text{indeg}(v)=\text{outdeg}(v)=k$ for all $v\in V$.
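The definitions above can be sanity-checked computationally; the sketch below (Python, helper names ours) generates both vertex sets and confirms the vertex counts and the balanced $m$-regularity used in what follows:

```python
from itertools import product

def db_vertices(n, m):
    """Vertices of DB_n(m): all strings of length n on m symbols."""
    return list(product(range(m), repeat=n))

def kautz_vertices(n, m):
    """Vertices of Kautz_n(m): Kautz strings of length n on m+1 symbols,
    i.e. no two adjacent characters are equal."""
    return [s for s in product(range(m + 1), repeat=n)
            if all(a != b for a, b in zip(s, s[1:]))]

def out_neighbors(v, symbols, kautz):
    """Targets of v's outedges: drop the first character and append any
    symbol that keeps the string valid."""
    return [v[1:] + (c,) for c in symbols if not (kautz and c == v[-1])]

n, m = 3, 2
assert len(db_vertices(n, m)) == m ** n                      # m^n vertices
assert len(kautz_vertices(n, m)) == (m + 1) * m ** (n - 1)   # (m+1)m^(n-1)

# Outdegree m at every vertex of both graphs:
assert all(len(out_neighbors(v, range(m + 1), True)) == m
           for v in kautz_vertices(n, m))
assert all(len(out_neighbors(v, range(m), False)) == m
           for v in db_vertices(n, m))

# Indegree m as well, so DB_n(m) is balanced m-regular:
indeg = {v: 0 for v in db_vertices(n, m)}
for v in db_vertices(n, m):
    for w in out_neighbors(v, range(m), False):
        indeg[w] += 1
assert all(d == m for d in indeg.values())
```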
Both $\text{Kautz}_n(m)$ and $DB_n(m)$ are balanced $m$-regular, for all $n\in \mathbb{N}$, which implies that they are Eulerian. Since these graphs are also strongly-connected, their critical groups are defined. Levine found the critical groups of the de Bruijn graphs $DB_n(2)$ and the Kautz graphs $K_n(p)$, where $p$ is prime \cite{Le}. In this section we characterize the critical groups of all the Kautz and de Bruijn graphs. We prove the following theorems. \begin{theorem} \label{db} The critical group of $DB_n(m)$ is \[K\left(DB_n(m)\right)=\left(\mathbb{Z}_{m^n}\right)^{m-2}\oplus\bigoplus_{i=1}^{n-1} \left(\mathbb{Z}_{m^i}\right)^{m^{n-1-i}(m-1)^2}\] \end{theorem} \begin{theorem} \label{kautz} The critical group of $\text{Kautz}_n(m)$ is \[K\left(\text{Kautz}_n(m)\right)=\left(\mathbb{Z}_{m+1}\right)^{m-1}\oplus \left(\mathbb{Z}_{m^{n-1}}\right)^{m^2-2}\oplus\bigoplus_{i=1}^{n-2} \left(\mathbb{Z}_{m^i}\right)^{m^{n-2-i}(m-1)^2(m+1)}\] \end{theorem} In order to prove these theorems, we first prove two lemmas about row-reducing the Laplacians $L(\text{Kautz}_n(m))$ and $L(DB_n(m))$. We refer to the row and column of a vertex $v$ in the Laplacian by $R(v)$ and $C(v)$, respectively. We also use $L(v,w)$ to denote the entry in the row of $v$ and the column of $w$. We say two strings of length $n$ are \textit{similar} if their last $n-1$ characters are equal. Similarity is an equivalence relation. We partition the vertices of $\text{Kautz}_n(m)$ and $DB_n(m)$ into equivalence classes, by grouping vertices labeled with similar strings in the same class. There are $m$ vertices in each class. \begin{lemma} \label{cycle} Let $G=(V,E)$ be a Kautz graph $\text{Kautz}_{n+1}(m)$ or a de Bruijn graph $DB_{n+1}(m)$, where $n\in \mathbb{N}$. Then $G$ contains a cycle $(v_1,v_2,\ldots v_c)$ of length $c=|V|/m$ which contains one vertex from each class. 
\end{lemma} \begin{proof} Let $G'$, the predecessor of $G$, be $\text{Kautz}_n(m)$ if $G$ is $\text{Kautz}_{n+1}(m)$, and $DB_n(m)$ if $G$ is $DB_{n+1}(m)$. First we show that there is a Hamiltonian cycle in $G'$. Such a cycle exists in the complete graphs $K_m$ and $K_{m+1}$, so the case $n=1$ is done. There is an Eulerian tour of $\text{Kautz}_{n-1}(m)$ and of $DB_{n-1}(m)$ for $n>1$, since graphs in both families are Eulerian. Because $G'$ is either $\mathcal{L} \text{Kautz}_{n-1}(m)$ or $\mathcal{L} DB_{n-1}(m)$, one of these Eulerian tours induces a Hamiltonian cycle in $G'$, for $n>1$. The Hamiltonian cycle in $G'$ can be represented as a string $S=s_1s_2\ldots s_{n+c-1}$, where the $i$th vertex of the cycle is labeled with $s_is_{i+1}\ldots s_{i+n-1}$. We use string $S$ to find a cycle in $G$. Let $v_i=s_is_{i+1}\ldots s_{i+n}$ for $i<c$, and let $v_c=s_cs_{c+1}\ldots s_{n+c-1}s_1$. By the construction of $S$, $(v_1,v_2,\ldots v_c)$ is a cycle which contains one vertex from each class. \end{proof} \vspace{.05in} In the next lemma we show that every invariant factor of $L(\text{Kautz}_{n+1}(m))$ and $L(DB_{n+1}(m))$ is either a multiple of $m$ or relatively prime to $m$. We prove this lemma by row-reducing the Laplacian in an order derived from the cycle in Lemma~\ref{cycle}. \begin{lemma} \label{divbym} Let $G=(V,E)$ be a Kautz graph $\text{Kautz}_{n+1}(m)$ or a de Bruijn graph $DB_{n+1}(m)$, where $n\in \mathbb{N}$. The first $c=|V|/m$ invariant factors of $L(G)$ are relatively prime to $m$, and all of the rest are divisible by $m$. \end{lemma} \begin{proof} We reduce the Laplacian $L(G)$ over the principal ideal ring $\mathbb{Z}_m$. Let the invariant factors of $L(G)$ over $\mathbb{Z}$ be $x_1,x_2,\ldots x_{|V|}$. Any invertible row or column operation over $\mathbb{Z}$ descends to an invertible operation over $\mathbb{Z}_m$, so the invariant factors of $L(G)$ over $\mathbb{Z}_m$ are the $x_i$ mod $m$.
Let $(v_1,v_2,\ldots v_c)$ be the cycle in $G$ from Lemma~\ref{cycle}, and let $[v_i]$ be the set of vertices in the class of $v_i$. We take indices mod $c$, so $v_{c+1}$ is $v_1$. Note that if $u$ and $v$ are vertices in the same class, then $(u,w)$ is an edge if and only if $(v,w)$ is, for all $w$. Therefore, the rows of $u$ and $v$ in the adjacency matrix $A(G)$ are the same. Because every vertex of $G$ has outdegree $m$, $L(G)\equiv A(G)$ mod $m$. Therefore rows $R(u)$ and $R(v)$ are congruent mod $m$ if $u$ and $v$ are in the same class. We reduce the Laplacian in $c$ stages. In the $i$th stage, we subtract row $R(v_i)$ from $R(v)$ for all $v\in [v_i]\backslash v_i$. After this operation, every entry of $R(v)$ is divisible by $m$. {\small \[\bordermatrix{ & 01 & 02 & 10 & 12 & 20 & 21 \cr 01 & -2 & 0 & 1 & 1 & 0 & 0 \cr 02 & 0 & -2 & 0 & 0 & 1 & 1 \cr 10 & 1 & 1 & -2 & 0 & 0 & 0 \cr 12 & 0 & 0 & 0 & -2 & 1 & 1 \cr 20 & 1 & 1 & 0 & 0 & -2 & 0 \cr 21 & 0 & 0 & 1 & 1 & 0 & -2 \cr} \rightarrow \bordermatrix{ & 01 & 02 & 10 & 12 & 20 & 21 \cr 01 & -2 & 0 & 1 & 1 & 0 & 0 \cr 02 & 0 & -2 & 0 & 2 & 0 & 0 \cr 10 & 0 & 0 & -2 & 0 & 2 & 0 \cr 12 & 0 & 0 & 0 & -2 & 1 & 1 \cr 20 & 1 & 1 & 0 & 0 & -2 & 0 \cr 21 & 2 & 0 & 0 & 0 & 0 & -2 \cr} \] } \begin{center} \textit{Figure 5.1. Reducing $L(\text{Kautz}_2(2))$ using the cycle $(01,12,20)$. The original Laplacian is on the left. We obtain the reduced Laplacian on the right by subtracting $R(01)$ from $R(21)$, $R(12)$ from $R(02)$, and $R(20)$ from $R(10)$. Every entry of rows $R(02)$, $R(10)$, and $R(21)$ is divisible by 2.}\end{center} The entry $L(v_i,v_{i+1})$ is 1 before and after these row operations, for $1\le i\le c$. We claim that in the reduced Laplacian, every entry of $C(v_{i+1})$ is divisible by $m$ except $L(v_i,v_{i+1})$. There are $m$ edges with target $v_{i+1}$ in $G$. The sources of these edges are the $m$ vertices in $[v_i]$.
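The congruences used in this reduction can be verified directly on the example of Figure 5.1 (a Python sketch, with the Laplacian entered by hand):

```python
# Laplacian of Kautz_2(2) from Figure 5.1; rows and columns are indexed
# by the vertices 01, 02, 10, 12, 20, 21 in lexicographic order.
verts = ["01", "02", "10", "12", "20", "21"]
L = {
    "01": [-2, 0, 1, 1, 0, 0],
    "02": [0, -2, 0, 0, 1, 1],
    "10": [1, 1, -2, 0, 0, 0],
    "12": [0, 0, 0, -2, 1, 1],
    "20": [1, 1, 0, 0, -2, 0],
    "21": [0, 0, 1, 1, 0, -2],
}
m = 2

# Key observation of the proof: two vertices are similar when their last
# n-1 = 1 characters agree, and similar vertices have rows congruent
# mod m, since L(G) = A(G) mod m when every outdegree is m.
for u in verts:
    for v in verts:
        if u != v and u[-1] == v[-1]:
            assert all((a - b) % m == 0 for a, b in zip(L[u], L[v]))
```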
After the row operations, every entry of $R(v)$ is divisible by $m$ for $v\in [v_i]\backslash v_i$, so $L(v_i,v_{i+1})$ is the only entry in $C(v_{i+1})$ which is non-zero mod $m$, for $1\le i\le c$. By permuting rows and columns, we move $L(v_i,v_{i+1})$ to the $i$th diagonal entry $L_{ii}$ of the Laplacian. The reduced Laplacian is now in the form \[\begin{pmatrix}I_c & A \\ 0 & 0\end{pmatrix}\ \text{mod } m\] where $I_c$ is the $c\times c$ identity matrix. Using column operations, we can make all the entries in $A$ divisible by $m$, without changing the rest of the matrix mod $m$. After these column operations, the Laplacian is in Smith normal form. The first $c$ invariant factors of $L(G)$ over $\mathbb{Z}_m$ are 1, so the first $c$ invariant factors of $L(G)$ over $\mathbb{Z}$ are relatively prime to $m$. The last $|V|-c$ invariant factors of $L(G)$ over $\mathbb{Z}_m$ are 0, so the last $|V|-c$ invariant factors of $L(G)$ over $\mathbb{Z}$ are divisible by $m$. The lemma holds. \end{proof} We use Lemmas~\ref{cycle} and \ref{divbym} to characterize the critical groups of the Kautz and de Bruijn graphs. The first step is finding the orders of these groups. We apply Theorem~\ref{thm1.1} to $DB_n(m)$, and we let all the variables $x_e$ equal 1, to find that \[\kappa\left(DB_{n+1}(m)\right)=\kappa\left(DB_n(m) \right)\left( m^{(m-1)m^n}\right)\] The number of spanning trees of the complete graph $DB_1(m)$ is $m^{m-1}$ \cite{Ho}. By a simple induction, we have \[\kappa\left(DB_n(m)\right)=m^{m^n-1}\] In an Eulerian graph, the sandpile groups $K(G,v)$ are isomorphic for all vertices $v$, so $|K(G)|=\kappa(G)/|V|$.
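The count $\kappa(DB_n(m))=m^{m^n-1}$ can be confirmed for small cases with the directed matrix-tree theorem, summing over roots the minors of $D_{\text{out}}-A$ (a Python sketch; the convention that spanning trees are oriented toward the root, and the use of exact rational arithmetic for the determinants, are our implementation choices):

```python
from fractions import Fraction
from itertools import product

def det(M):
    """Exact determinant by Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    n, sign, d = len(M), 1, Fraction(1)
    for i in range(n):
        p = next((r for r in range(i, n) if M[r][i] != 0), None)
        if p is None:
            return Fraction(0)
        if p != i:
            M[i], M[p] = M[p], M[i]
            sign = -sign
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return sign * d

def db_arborescences(n, m):
    """Total number of oriented spanning trees of DB_n(m), summed over
    all roots, via minors of the Laplacian D_out - A."""
    V = list(product(range(m), repeat=n))
    idx = {v: i for i, v in enumerate(V)}
    N = len(V)
    L = [[0] * N for _ in range(N)]
    for v in V:
        for c in range(m):
            w = v[1:] + (c,)
            if w != v:                      # self-loops do not affect L
                L[idx[v]][idx[v]] += 1
                L[idx[v]][idx[w]] -= 1
    total = 0
    for r in range(N):
        minor = [[L[i][j] for j in range(N) if j != r]
                 for i in range(N) if i != r]
        total += int(det(minor))
    return total

# kappa(DB_n(m)) = m^(m^n - 1), as derived above.
assert db_arborescences(2, 2) == 2 ** (2 ** 2 - 1)   # 8
assert db_arborescences(2, 3) == 3 ** (3 ** 2 - 1)   # 6561
```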
Therefore, we have \begin{equation}\label{dborder} |K\left(DB_n(m)\right)|=m^{m^n-n-1}\end{equation} Similarly, we have \[\kappa\left(\text{Kautz}_n(m)\right)=(m+1)^mm^{\left(m^{n-1}-1\right)(m+1)}\] \begin{equation}\label{order}|K\left(\text{Kautz}_n(m)\right)|=(m+1)^{m-1}m^{\left(m^n+m^{n-1}-m-n\right)}\end{equation} We are ready to prove Theorems~\ref{db} and~\ref{kautz}. \begin{proof}[\textbf{Proof of Theorem~\ref{db}}] We proceed by induction on $n$. The critical group of the complete graph on $m$ vertices is $(\mathbb{Z}_m)^{m-2}$, so the base case holds. Assume that Theorem~\ref{db} holds for $n-1$, where $n>1$. We prove it for $n$. As shown by Levine \cite{Le}, if $G$ is a balanced $k$-regular graph, then \begin{equation} \label{homo} k K(\mathcal{L} G) \cong K(G)\end{equation} We will use this fact to determine $\text{Syl}_p(K(DB_n(m)))$, the Sylow-$p$ subgroup of $K(DB_n(m))$, for any prime $p$. We break into two cases: either $p$ does not divide $m$, or $p$ divides $m$. If $p$ does not divide $m$, then by Eq. (\ref{homo}), we have \[\text{Syl}_p\left(K\left(DB_n(m)\right)\right)\cong \text{Syl}_p\left(K\left(DB_{n-1}(m)\right)\right)\] By the inductive hypothesis, $\text{Syl}_p(K(DB_n(m)))$ is the trivial group. Now let $p$ be a prime that divides $m$, and suppose $p^k$ is the largest power of $p$ that divides $m$. Let the Sylow-$p$ subgroup of $K(DB_n(m))$ be \[\text{Syl}_p\left(K\left(DB_n(m)\right)\right) = \mathbb{Z}_p^{a_1}\oplus \mathbb{Z}_{p^2}^{a_2} \oplus \ldots \oplus \mathbb{Z}_{p^l}^{a_l}\] By Lemma~\ref{divbym}, $K(DB_n(m))$ can be written as a direct sum of cyclic groups, such that the order of each group is either non-zero mod $p$ or divisible by $p^k$. Thus, $a_i=0$ for $i<k$. Further, we can derive the order of $\text{Syl}_p(K(DB_n(m)))$ from Eq. (\ref{dborder}).
We find that \begin{equation}\label{dbporder} \sum_{i=k}^l ia_i = k\left(m^n-n-1\right) \end{equation} because the expression on the right-hand side equals the number of factors of $p$ in $m^{m^n-n-1}$. By Eq. (\ref{homo}) and the inductive hypothesis, we know that \[p^k\text{Syl}_p\left(K\left(DB_n(m)\right)\right)=\mathbb{Z}_p^{a_{k+1}}\oplus \mathbb{Z}_{p^2}^{a_{k+2}} \oplus \ldots \oplus \mathbb{Z}_{p^{l-k}}^{a_{l}} \cong \text{Syl}_p\left(K\left(DB_{n-1}(m)\right)\right)\] \begin{equation}\label{dbindstep} \mathbb{Z}_p^{a_{k+1}}\oplus \mathbb{Z}_{p^2}^{a_{k+2}} \oplus \ldots \oplus \mathbb{Z}_{p^{l-k}}^{a_{l}} \cong \left(\mathbb{Z}_{p^{(n-1)k}}\right)^{m-2}\oplus \bigoplus_{i=1}^{n-2}\left(\mathbb{Z}_{p^{ik}}\right)^{m^{n-2-i}(m-1)^2} \end{equation} Eq. (\ref{dbindstep}) implies that $a_{nk}=m-2$, that $a_{(i+1)k}=m^{n-2-i}(m-1)^2$ for $1\le i \le n-2$, and that $a_i=0$ for $i>nk$ or $k\nmid i$. The only $a_i$ which we have not yet determined is $a_k$. We solve Eq. (\ref{dbporder}) for $a_k$, by moving all the other $a_{ik}$ to the right-hand side and dividing by $k$. \[a_k = \left(m^n-n-1\right)-n(m-2)-\sum_{i=2}^{n-1} i\left(m^{n-1-i}(m-1)^2\right)\] By evaluating the geometric series, we find that $a_k=m^{n-2}(m-1)^2$. With these values, we may write \[\text{Syl}_p\left(K\left(DB_n(m)\right)\right)=\left(\mathbb{Z}_{p^{nk}}\right)^{m-2}\oplus \bigoplus_{i=1}^{n-1}\left(\mathbb{Z}_{p^{ik}}\right)^{m^{n-1-i}(m-1)^2}\] The Sylow-$p$ subgroups of $K(DB_n(m))$ are trivial for $p\nmid m$. Taking the direct sum of the Sylow-$p$ subgroups over primes $p$ which divide $m$, we find \[ K\left(DB_n(m)\right)\cong \bigoplus_{p\mid m}\text{Syl}_p\left(K\left(DB_n(m)\right)\right)\cong\left(\mathbb{Z}_{m^n}\right)^{m-2}\oplus\bigoplus_{i=1}^{n-1} \left(\mathbb{Z}_{m^i}\right)^{m^{n-1-i}(m-1)^2} \] With this equation we complete the inductive step, as desired.
\end{proof} \vspace{.05in} \begin{proof}[\textbf{Proof of Theorem~\ref{kautz}}] This proof is similar to the proof of Theorem~\ref{db}. Again, we induct on $n$. Because the critical group of the complete graph on $m+1$ vertices is $(\mathbb{Z}_{m+1})^{m-1}$, the base case holds. Assume that Theorem~\ref{kautz} holds for $n-1$, where $n>1$. Using Eq. (\ref{homo}), we calculate the direct sum of the Sylow-$p$ subgroups of $K(\text{Kautz}_n(m))$ over primes $p$ which do not divide $m$, as follows \begin{equation}\label{nodivide} \bigoplus_{p\nmid m} \text{Syl}_p\left( K\left(\text{Kautz}_n(m)\right) \right)= \bigoplus_{p\nmid m} \text{Syl}_p \left(K\left(\text{Kautz}_1(m)\right)\right) = \left(\mathbb{Z}_{m+1}\right)^{m-1} \end{equation} Now let $p$ be a prime that divides $m$. Suppose that $p^k$ is the largest power of $p$ that divides $m$, and that the Sylow-$p$ subgroup of $K(\text{Kautz}_n(m))$ is \[\text{Syl}_p\left(K\left(\text{Kautz}_n(m)\right)\right) = \mathbb{Z}_p^{a_1}\oplus \mathbb{Z}_{p^2}^{a_2} \oplus \ldots \oplus \mathbb{Z}_{p^l}^{a_l}\] Lemma~\ref{divbym} implies that $a_i=0$ for $i<k$. Furthermore, we know the order of the Sylow-$p$ subgroup of $K(\text{Kautz}_n(m))$ from Eq. (\ref{order}), which implies that \begin{equation}\label{porder} \sum_{i=k}^l ia_i = k\left(m^n+m^{n-1}-m-n\right) \end{equation} because the expression on the right-hand side equals the number of factors of $p$ in $m^{(m^n+m^{n-1}-m-n)}$. By Eq. (\ref{homo}), we have \[p^k\text{Syl}_p\left(K\left(\text{Kautz}_n(m)\right)\right)=\mathbb{Z}_p^{a_{k+1}}\oplus \mathbb{Z}_{p^2}^{a_{k+2}} \oplus \ldots \oplus \mathbb{Z}_{p^{l-k}}^{a_{l}} \cong \text{Syl}_p\left(K\left(\text{Kautz}_{n-1}(m)\right)\right)\] By the inductive hypothesis, we find that $a_{(i+1)k}={m^{n-3-i}(m-1)^2(m+1)}$ for $1\le i \le n-3$, that $a_{(n-1)k}=m^2-2$, and that $a_i=0$ for $i>(n-1)k$ or $k\nmid i$. We solve Eq. (\ref{porder}) for $a_k$. 
We find that $a_k=m^2-2$ if $n=2$ and that $a_k=m^n-m^{n-1}-m^{n-2}+m^{n-3}$ if $n>2$. Then we have \begin{equation} \label{divide} \text{Syl}_p\left(K\left(\text{Kautz}_n(m)\right)\right)=\left(\mathbb{Z}_{p^{(n-1)k}}\right)^{m^2-2}\oplus \bigoplus_{i=1}^{n-2}\left(\mathbb{Z}_{p^{ik}}\right)^{m^{n-2-i}(m-1)^2(m+1)} \end{equation} By taking the direct sum of Eq. (\ref{nodivide}) and Eq. (\ref{divide}) over all primes which divide $m$, we complete the inductive step and prove Theorem~\ref{kautz}, as desired. \end{proof}\vspace{.1in}
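As a numerical sanity check of Theorems~\ref{db} and \ref{kautz}, the orders of the stated groups can be compared with Eqs. (\ref{dborder}) and (\ref{order}); the sketch below (Python, helper names ours) checks the exponents of $m$ for several small $n$ and $m$:

```python
def db_exponent(n, m):
    """Exponent of m in the order of the group stated in Theorem db."""
    return n * (m - 2) + sum(i * m ** (n - 1 - i) * (m - 1) ** 2
                             for i in range(1, n))

def kautz_exponent(n, m):
    """Exponent of m in the order of the group stated in Theorem kautz."""
    return (n - 1) * (m ** 2 - 2) + sum(
        i * m ** (n - 2 - i) * (m - 1) ** 2 * (m + 1)
        for i in range(1, n - 1))

for n in range(2, 7):
    for m in range(2, 6):
        # |K(DB_n(m))| = m^(m^n - n - 1), Eq. (dborder)
        assert db_exponent(n, m) == m ** n - n - 1
        # |K(Kautz_n(m))| = (m+1)^(m-1) m^(m^n + m^(n-1) - m - n), Eq. (order)
        assert kautz_exponent(n, m) == m ** n + m ** (n - 1) - m - n
```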
\section{Introduction} \PARstart{M}{any} computer vision and video analytics algorithms rely on background subtraction as the engine of choice for detecting areas of interest (change). Although a number of models have been developed for background subtraction, from a single Gaussian \cite{Wren97} and a mixture of Gaussians \cite{Stau00} to non-parametric kernel methods \cite{Elga02}\footnote{Although models that account for spatial relationships in background subtraction are known, their discussion is beyond the scope of this paper.}, they all share the underlying assumption that photometric scene properties (e.g., luminance, color) are either static or exhibit temporal stationarity. The static background assumption works quite well for some applications, e.g., indoor scenes under constant illumination, while the temporally-stationary background assumption is needed in other cases, such as outdoor scenes with natural phenomena (e.g., fluttering leaves). However, both models fail if one is interested in discovering {\it changes in scene dynamics} rather than those taking place in a static background. Examples of such scenarios are: detection of unusual motor traffic patterns (e.g., too fast or too slow), detection of a moving group of individuals where a single walking person is expected, detection of a moving object against a shimmering or turbulent water surface (background motion). Although each of these challenges can be addressed by a custom-built method, e.g., explicitly estimating object trajectories or discovering the number of moving objects, there is no approach to date that can address {\it all} such scenarios in a single framework. In order to address this challenge, instead of searching for photometric deviations in time, one should look for dynamic deviations in time. To date, the problem has been attacked primarily by analyzing two-dimensional motion paths resulting from tracking objects or people \cite{Hu04,Johnson96,saleemi09,Sumpter00,Stau00}.
Usually, reference motion paths are computed from a training video sequence first. Then, the same tracking algorithm is applied to an observed video sequence, and the resulting paths are compared with the reference motion paths. Unfortunately, such methods require many computing stages, from low-level detection to high-level inferencing \cite{Hu04}, and often result in failure due to multiple, sequential steps. In this paper, we propose a new model and computational framework that extend background subtraction to, what we call, {\it behavior subtraction} \cite{Jodo08vcip}, while at the same time addressing deficiencies of motion-path-based algorithms. Whereas in background subtraction static or stationary photometric properties (e.g., luminance or color) are assumed as the background image, we propose to use stationary scene dynamics as a ``background'' activity with which observed scene dynamics are compared. The approach we propose requires neither computation of motion nor object tracking, and, as such, is less prone to failure. Central to our approach is the concept of an {\it event}, that we define as short-term scene dynamics captured over a time window at a specific spatial location in the camera field of view. We compute events by time-aggregating motion labels and/or suitable object descriptors (e.g., size). Subsequently, we characterize events probabilistically as random variables that are independent and identically distributed ({\it iid}) in time. Since the estimation of a probability density function (PDF) at each location is both memory- and CPU-intensive, in practical implementation we resort to a low-memory, low-complexity surrogate. Using such a surrogate amounts to behavior subtraction, a new algorithm with some surprising properties. 
As we demonstrate experimentally, behavior subtraction is an effective tool in anomaly detection, including localization, but can also serve as a motion detector very resilient to spurious background motion, e.g., resulting from camera jitter. Furthermore, it is content-blind, i.e., applicable to humans, cars, animals, and other objects in both uncluttered and highly-cluttered scenes. This paper is organized as follows. In Section~\ref{sec:previous}, we review previous work. In Section~\ref{sec:backsub}, we recall background subtraction and introduce notation. In Section~\ref{sec:behspace}, we introduce behavior space and the notion of an event, while in Section~\ref{sec:behsubframework} we describe the behavior subtraction framework. In Section~\ref{sec:expres}, we discuss our experimental results and in Section~\ref{sec:concl} we draw conclusions. \section{Previous work} \label{sec:previous} There are two fundamental approaches to anomaly detection. One approach is to explicitly model all anomalies of interest, thus constructing a dictionary of anomalies, and for each observed video to check if a match in the dictionary can be found. This is a typical case of classification, and requires that {\it all} anomaly types be known {\em a priori}. Although feasible in very constrained scenarios, such as detecting people carrying boxes/suitcases/handbags \cite{Chuang09}, detecting abandoned objects \cite{Smith06} or identifying specific crowd behavior anomalies \cite{Mehran09}, in general this approach is not practical for its inability to deal with unknown anomalies. An alternative approach is to model normality and then detect deviations from it. In this case, no dictionary of anomalies is needed but defining and modeling what constitutes normality is a very difficult task. One way of dealing with this difficulty is by applying machine learning that automatically models normal activity based on some training video.
Then, any monitored activity different from the normal pattern is labeled as anomaly. A number of methods have been developed that apply learning to two-dimensional motion paths resulting from tracking of objects or people \cite{Hu04}. Typically, the approach is implemented in two steps. In the first step, a large number of ``normal'' individuals or objects are tracked over time. The resulting paths are then summarized by a set of motion trajectories, often translated into a symbolic representation of the background activity. In the second step, new paths are extracted from the monitored video and compared to those computed in the training phase. Whether one models anomaly or normality, the background activity must be somehow captured. One common approach is through graphical state-based representations, such as hidden Markov models or Bayesian networks \cite{Hu04,Kumar05,Bennewitz03,Oliver00,Vaswani05}. To the best of our knowledge Johnson and Hogg \cite{Johnson96} were the first to consider human trajectories in this context. The method begins by vector-quantizing tracks and clustering the result into a predetermined number of PDFs using a neural network. Based on the training data, the method predicts trajectory of a pedestrian and decides if it is anomalous or not. This approach was subsequently improved by simplifying the training step \cite{Sumpter00} and embedding it into a hierarchical structure based on co-occurrence statistics \cite{Stau00}. More recently, Saleemi {\em et al.} \cite{saleemi09} proposed a stochastic, non-parametric method for modeling scene tracks. The authors claim that the use of predicted trajectories and tracking method robust to occlusions jointly permit the analysis of more general scenes, unlike other methods that are limited to roads and walkways. Although there are advantages to using paths as motion features, there are clear disadvantages as well. First, tracking is a difficult task, especially in real time. 
Since anomaly detection is directly related to the quality of tracking, a tracking error will inevitably bias the detection step. Second, since each individual or object monitored is related to a single path, it is hard to deal with people occluding each other. For this reason, path-based methods are not well suited to highly-cluttered environments. Recently, a number of anomaly detection methods have been proposed that do not use tracking. These methods work at the pixel level and use either motion vectors \cite{Kim09,Dong09,Adam08} or motion labels \cite{Cui07,Xiang06,Oh03} to describe activity in the scene. They all store motion features in an image-like 2D structure (be it probabilistic or not) thus easing memory and CPU requirements. For example, Xiang {\em et al.} \cite{Xiang06} represent moving objects by their position, size, temporal gradient and the so-called ``pixel history change'' (PHC) image that accumulates temporal intensity differences. During the training phase, an EM-based algorithm is used to cluster the moving blobs, while at run-time each moving object is compared to the pre-calculated clusters. The outlying objects are labeled as anomalous. Although the concept of the PHC image is somewhat similar to the behavior image proposed here, Xiang {\it et al.} do not use it for anomaly detection but for identification of regions of interest to be further processed. A somewhat different approach using spatio-temporal intensity correlation has been proposed by Shechtman and Irani \cite{Boiman07}. Here, an observed sequence is built from spatio-temporal segments extracted from a training sequence. In this analysis-by-synthesis method, only regions that can be built from large contiguous chunks of the training data are considered normal. Our approach falls into the category of methods that model normality and look for outliers; however, it is not based on motion paths but on simple pixel attributes instead.
Thus, it avoids the pitfalls of tracking while affording explicit modeling of normality at low memory and CPU requirements. Our contributions are as follows. We introduce the concept of an event, or short-term scene dynamics captured over a time window at a specific spatial location in the camera field of view. With each event we associate features, such as size, direction, speed, busy time, color, etc., and propose a probabilistic model based on a time-stationary random process. Finally, we develop a simple implementation of this model by using surrogate quantities that allow low-memory and low-CPU implementation. \section{Background Subtraction: Anomaly Detection in Photometric Space} \label{sec:backsub} We assume in this paper that the monitored video is captured by a fixed camera (no PTZ functionality) that at most undergoes jitter, e.g., due to wind load or other external factors. Let $\vec{I}$ denote a color video sequence with $\vec{I}_t({\vec{x}})$ denoting color attributes (e.g., $R,G,B$) at a specific spatial location ${\vec{x}}$ and time $t$. We assume that $\vec{I}_t({\vec{x}})$ is spatially sampled on a 2-D lattice $\Lambda$, i.e., ${\vec{x}}\in\Lambda\subset \mathbb{R}^2$ is a pixel location. We also assume that it is sampled temporally, i.e., $t=k\Delta t$, $k\in \mathbb{Z}$, where $\Delta t$ is the temporal sampling period dependent on the frame rate at which the camera operates. For simplicity, we assume $\Delta t=1$ in this paper, i.e., normalized time. We denote by $\vec{I}_t$ a frame, i.e., a restriction of video $\vec{I}$ to a specific time $t$. In traditional video analysis, color and luminance are pivotal quantities in the processing chain. For example, in background subtraction, the driving engine of many video analysis tasks, the color of the background is assumed either static or stationary.
Although simple frame subtraction followed by thresholding may sometimes suffice in the static case, unfortunately it often fails due to acquisition noise or illumination changes. If the background includes spurious motion, such as environmental effects (e.g., rain, snow), fluttering tree leaves, or shimmering water, then determining outliers based on frame differences is insufficient. A significant improvement is obtained by determining outliers based on PDF estimates of features such as color. Assume that $P_{RGB}$ is a joint PDF of the three color components estimated using a 3-D variant of the mixture-of-Gaussians model \cite{Stau00} or the non-parametric model \cite{Elga02} applied to a training video sequence. $P_{RGB}$ can be used to test if a color at specific pixel and time in the monitored video is sufficiently probable, i.e., if $P_{RGB}(\vec{I}_t({\vec{x}})) > \tau$, where $\tau$ is a scalar threshold, then $\vec{I}_t({\vec{x}})$ is likely to be part of the modeled background, otherwise it is deemed moving. Although the thresholding of a PDF is more effective than the thresholding of frame differences, it is still executed in the space of photometric quantities (color, luminance, etc.), and thus unable to directly account for scene dynamics. However, modeling of background dynamics (activities) in the photometric space is very challenging. We propose an alternative that is both conceptually simple and computationally efficient. First, we remove the photometric component by applying background subtraction and learn the underlying stationary statistical characterization of scene dynamics based on a two-state (moving/static) renewal model. Then, we reliably infer novelty as a departure from the normality. \section{Behavior Space: From Frames to Events} \label{sec:behspace} As color and luminance contain little direct information on scene dynamics, we depart from this common representation and adopt motion label as our atomic unit. 
Let $L_t({\vec{x}})$ be a binary random variable embodying the presence of motion ($L=1$) or its absence ($L=0$) at position ${\vec{x}}$ and time $t$. Let $l_t({\vec{x}})$ be a specific realization of $L_t({\vec{x}})$ that can be computed by any of the methods discussed in Section~\ref{sec:backsub}, or by more advanced methods accounting for spatial label correlation \cite{Migd05,Shei05,McHu09spl}. While some of these methods are robust to noise and background activity, such as rain/snow or fluttering leaves, they often require a large amount of memory and are computationally intensive. Since simplicity and computational efficiency are key concerns in our approach, we detect motion by means of a very simple background subtraction method instead, namely \begin{eqnarray} l_t({\vec{x}}) = |I_t({\vec{x}})-b_t({\vec{x}})|>\tau, \label{eq:md} \end{eqnarray} where $\tau$ is a fixed threshold and $b_t$ is the background image computed as follows \begin{eqnarray} b_{t+1}({\vec{x}}) = (1-\rho)b_t({\vec{x}}) + \rho I_t({\vec{x}}) \end{eqnarray} with $\rho$ in the range 0.001-0.01. This linear background update accounts for long-term changes. Although this method is sensitive to noise and background activity, it is trivial to implement, requires very little memory and processing power, and depends on one parameter only. Clearly, replacing this method with any of the advanced techniques will only improve the performance of our approach. Fig.~\ref{fig:MDTraining} shows an example realization of the motion label field $L_t$ computed by the above method, as well as a binary waveform showing the temporal evolution of the motion label at a specific location ${\vec{x}}$ (Fig.~\ref{fig:MDTraining}.b). Each such waveform captures the amount of activity occurring at a given spatial location during a certain period of time and thus can be considered a simple {\it behavior signature}.
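As an illustration, the label computation (\ref{eq:md}) together with the linear background update can be sketched in a few lines of NumPy for grayscale frames. This is only a hedged sketch of the method described above, not a reference implementation; the values of $\tau$ and $\rho$ are placeholders.

```python
import numpy as np

def detect_motion(frames, tau=40.0, rho=0.005):
    """Simple background subtraction with a running-average background.

    frames: iterable of 2-D grayscale arrays.
    Returns a list of binary motion-label fields l_t = |I_t - b_t| > tau.
    """
    frames = [np.asarray(f, dtype=float) for f in frames]
    b = frames[0].copy()                     # initialize background with first frame
    labels = []
    for I in frames:
        labels.append(np.abs(I - b) > tau)   # label field l_t of eq. (\ref{eq:md})
        b = (1.0 - rho) * b + rho * I        # linear background update
    return labels
```

Because $\rho$ is small, a transient bright object barely perturbs $b_t$, so only genuine intensity departures are flagged.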
For instance, patterns associated with random activity (fluttering leaves), periodic activity (highway traffic), bursty activity (sudden vehicle movement after onset of green light), or no activity, all have a specific behavior signature. Other behavior signatures than a simple on/off motion label are possible. \begin{figure}[tb] \centering \begin{tabular}{cc} \footnotesize Video frame $\vec{I}_{t=t_0}$ & \footnotesize Motion label field $l_{t=t_0}$\\ \includegraphics [width=4cm]{figures/training_c_1.eps} & \includegraphics [width=4cm]{figures/training_c_2.eps} \end{tabular} \centerline{\small (a)} \footnotesize Motion label at pixel ``C'': $l({\vec{x}}=\vec{x}_C)$\\ \centerline{\includegraphics [width=8.5cm]{figures/training_c_8.eps}} \footnotesize Motion label at pixel ``D'': $l({\vec{x}}=\vec{x}_D)$\\ \centerline{\includegraphics [width=8.5cm]{figures/training_c_5.eps}} \centerline{\small (b)} \medskip \footnotesize Behavior signature at pixel ``C'': $f({\vec{x}}=\vec{x}_C)$\\ \centerline{\includegraphics [width=8.5cm]{figures/training_c_4.eps}} \footnotesize Behavior signature at pixel ``D'': $f({\vec{x}}=\vec{x}_D)$\\ \centerline{\includegraphics [width=8.5cm]{figures/training_c_3.eps}} \centerline{\small (c)} \caption{\small (a) Video frame $\vec{I}_{t=t_0}$ captured by a vibrating camera and the corresponding motion label field $l_{t=t_0}$. (b) Binary waveforms showing the time evolution of motion labels $l$ at two locations (marked $C$ and $D$ in (a)). (c) Behavior signatures at the same locations computed using the object-size descriptor (\ref{eqn:motobjdes}). The pixel located near intensity edge ($D$) is ``busy'', due to camera vibrations, compared to the pixel located in a uniform-intensity area ($C$). 
The large bursts of activity in behavior signatures correspond to pedestrians.} \label{fig:MDTraining} \vglue -0.4cm \end{figure} \paragraph{Object descriptor} A moving object leaves a behavior signature that depends on its features such as size, shape, speed, direction of movement, etc. For example, a large moving object will leave a wider impulse than a small object (Fig.~\ref{fig:MDTraining}.b), but this impulse will get narrower as the object accelerates. One can combine several features in a descriptor in order to make the behavior signature more unique. In fact, one can even add color/luminance to this descriptor in order to account for photometric properties as well. Thus, one can think of events as spatio-temporal units that describe what type of activity occurs and also what the moving object looks like. Let a random variable $F$ embody the object description\footnote{$F$ is a random vector if the descriptor includes multiple features.}, with $f$ being its realization. In this paper, we concentrate on an object descriptor based on the moving object's size for two reasons. First, we found that despite its simplicity it performs well on a wide range of video material (motor traffic, pedestrians, objects on water, etc.); it seems that moving-object size is a sufficiently discriminative characteristic. Second, the size descriptor can be efficiently approximated as follows: \begin{eqnarray}\label{eqn:motobjdes} f_t({\vec{x}}) = \frac{1}{N\times N} \sum_{{\vec{y}}\in {\cal N}({\vec{x}}); {\vec{y}}\Join{\vec{x}}} \delta(l_t({\vec{x}}),l_t({\vec{y}})), \end{eqnarray} where ${\cal N}({\vec{x}})$ is an $N\times N$ window centered at ${\vec{x}}$ and ${\vec{y}}\Join{\vec{x}}$ means that ${\vec{y}}$ and ${\vec{x}}$ are connected (are within the same connected component). $\delta(\cdot) = 1$ if and only if $l_t({\vec{x}})=l_t({\vec{y}})=1$, i.e., if both ${\vec{x}}$ and ${\vec{y}}$ are deemed moving, otherwise $\delta(\cdot)=0$.
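For illustration, the size descriptor can be approximated by a windowed count of moving pixels. The NumPy sketch below is hedged: it omits the connectivity constraint ${\vec{y}}\Join{\vec{x}}$ of (\ref{eqn:motobjdes}) for brevity, so it slightly overestimates $f_t$ when separate objects fall within the same window.

```python
import numpy as np

def size_descriptor(l, N=5):
    """Approximate object-size descriptor f_t from a binary label field l.

    For each pixel, compute the fraction of moving pixels in the N x N
    window centered on it, then zero out static pixels. NOTE: this sketch
    omits the connected-component constraint of the paper's definition.
    """
    l = np.asarray(l, dtype=float)
    h = N // 2
    padded = np.pad(l, h)                    # zero-pad so windows fit at borders
    f = np.zeros_like(l)
    for dy in range(N):                      # accumulate the N x N box sum
        for dx in range(N):
            f += padded[dy:dy + l.shape[0], dx:dx + l.shape[1]]
    f /= N * N
    return f * l                             # f_t(x) = 0 wherever l_t(x) = 0
```

A single moving pixel thus yields $f_t=1/N^2$ at its own location, while a pixel deep inside a large object approaches $f_t=1$.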
Note that $f_t({\vec{x}})=0$ whenever $l_t({\vec{x}})=0$. This descriptor is zero for a pixel away from the object, increases non-linearly as the pixel moves closer to the object, and saturates at 1.0 for pixels inside a large object fully covering the window $\cal N$. Fig.~\ref{fig:MDTraining}.c shows an example of a behavior signature based on the size descriptor. Clearly, $f_t({\vec{x}})=0$ means inactivity while $f_t({\vec{x}})>0$ means activity caused by a moving object; the larger the object, the larger the $f_t({\vec{x}})$ until it saturates at 1. The video frame shown has been captured by a vibrating camera, hence the noisy behavior signature for pixel ``D'', which is close to an intensity edge. \paragraph{Event model} An event needs to be associated with a time scale. For example, a short time scale is required to capture an illegal U-turn of a car, whereas a long time scale is required to capture a traffic jam. We define an event $E_t({\vec{x}})$ for the pixel at ${\vec{x}}$ as the behavior signature (object size, speed, direction as a function of time $t$) left by moving objects over a $w$-frame time window, and model it by a Markov model shown in Fig.~\ref{fig:event_model}. \begin{figure}[tb] \centering \includegraphics[width=6.5cm]{figures/Markov_model.eps} \vglue -0.2cm \caption{Markov chain model for dynamic event $E$: $p,q$ are state probabilities (static and moving, respectively), and $1-p,1-q$ are transition probabilities. $\beta_1,\iota_1,\beta_2,\iota_2$ denote consecutive busy and idle intervals. With each busy interval is associated an object descriptor $F$, such as its size, speed/direction of motion, color, luminance, etc.} \label{fig:event_model} \vglue -0.4cm \end{figure} For now, consider only the presence/absence of activity ($L$) as the object descriptor.
Assuming $\pi$ to be the initial busy-state probability ($L=1$), the probability of sequence $\{L_i=l_i\}_{\cal W} = (l_{t-w+1}({\vec{x}}),\,l_{t-w+2}({\vec{x}}),\,\ldots,\,l_t({\vec{x}}))$, at location ${\vec{x}}$ and within the time window ${\cal W}=[t-w+1,t]$, can be written as follows: \begin{eqnarray}\label{eqn:event_prob} P_{{\vec{x}}}(\{L_i=l_i\}_{\cal W}) &=& \pi q^{\beta_1} (1-q) p^{\iota_1} (1-p) q^{\beta_2} (1-q) p^{\iota_2} ...\nonumber\\ &=& \pi q^{\sum_k \beta_k} p^{\sum_k \iota_k} (1-q)^m (1-p)^n\\ &=& \pi (q/p)^{\sum_k \beta_k} p^w (1-q)^m (1-p)^n,\nonumber \end{eqnarray} where the binary sequence of 0's and 1's is implicitly expressed through the busy intervals $\beta_k$ (Fig.~\ref{fig:event_model}). Note that $m,n$ are the numbers of transitions ``moving $\rightarrow$ static'' and ``static $\rightarrow$ moving'', respectively. The last line in (\ref{eqn:event_prob}) stems from the fact that the sum of busy and idle intervals equals the length of time window ${\cal W}$. This expression can be simplified by taking the negative logarithm: \begin{eqnarray}\label{eqn:event_prob_log} -\log P_{\vec{x}}(\{L_i=l_i\}_{\cal W}) &=& -\log\pi - (\log q/p)\sum_k \beta_k - w \log p -\nonumber\\ & & m \log (1-q) - n \log (1-p),\\ &=& A_0 + A_1 \sum_{k=t-w+1}^t l_k({\vec{x}}) + A_2 \kappa_t({\vec{x}}),\nonumber \end{eqnarray} where $A_0,A_1,A_2$ are constants, the second term measures the total busy time using motion labels, and $\kappa_t({\vec{x}})$ is proportional to the total number of transitions in time window ${\cal W}$ at ${\vec{x}}$. Thus far we have assumed that the moving object was described only by motion labels $L_t({\vec{x}})$. Suppose now that also a descriptor $F_t({\vec{x}})$, such as the size, is associated with the moving object at location ${\vec{x}}$ and time $t$ within a busy period in time window ${\cal W}$, i.e., $t\in\beta_k\subset{\cal W}$.
The random variable (vector) $F_t({\vec{x}})$ is described by a conditional distribution dependent on the state of the Markov process, as illustrated in Fig.~\ref{fig:event_model}. We assume that $F_t({\vec{x}})$ is conditionally independent of other random variables $F_{t_0}({\vec{x}}), t_0\neq t$ when conditioned on the underlying state of the Markov process, and that its distribution has an exponential form when busy and is a point mass at zero when idle: \begin{eqnarray}\label{eqn:Gibbs_1} P_{\vec{x}}(F_t=f_t \mid L_t = k) = \left\{ \begin{array}{ll} \frac{1}{Z_1} e^{-A_3 f_t({\vec{x}})}, & k=1,\\ \delta(f_t({\vec{x}})), & k=0, \end{array}\right. \end{eqnarray} where $Z_1$ is a partition function and $\delta$ is the Kronecker delta ($\delta(f)=1$ if $f=0$, and $0$ otherwise). If the descriptor $F$ includes object size, the above distribution suggests that the larger the object passing through ${\vec{x}}$ the less likely it is, and also that with probability 1 it has size zero in idle intervals (consistent with Fig.~\ref{fig:event_model}). This is motivated by the observation that small-size detections are usually associated with false positives when computing $L_t$. Should $F$ include speed, faster objects would be less likely, a realistic assumption in an urban setting. The model would have to be modified should the descriptor include direction of motion (e.g., horizontal motion more likely for highway surveillance with a suitably-oriented camera) or luminance/color (e.g., all photometric properties equally likely). Note that more advanced descriptor models can be incorporated as well.
For instance, one can enforce temporal smoothness of the descriptor (e.g., size) for object passing through location ${\vec{x}}$ {\it via} a (temporal) Gibbs distribution with 2-element cliques: \begin{eqnarray*} \lefteqn{P_{\vec{x}}(\{F_i=f_i\}_{\cal W} \mid L=1 ) = } \\ && \frac{1}{Z_2} e^{-A_4 \sum_{k: \beta_k\subset{\cal W}} \sum_{(j,j+1)\in\beta_k} f_j({\vec{x}}) f_{j+1}({\vec{x}})}, \end{eqnarray*} where $\{F_i=f_i\}_{\cal W}$ denotes a sequence of descriptors appearing in the temporal window ${\cal W}$, and $A_4$ is a constant. This model controls temporal smoothness of the descriptor $F$, and can be used to limit, for example, size variations in time. Nevertheless, for simplicity we omit this model in our further developments. Combining the descriptor model (\ref{eqn:Gibbs_1}) with the $L$-based event model (\ref{eqn:event_prob}-\ref{eqn:event_prob_log}) leads to a joint distribution: \begin{eqnarray}\label{eqn:fl-jointdist} \lefteqn{P_{\vec{x}}(\{L_i=l_i\}_{\cal W},\{F_i=f_i\}_{\cal W}) =} \nonumber\\ && P_{\vec{x}}(\{F_i=f_i\}_{\cal W} \mid \{L_i=l_i\}_{\cal W})\cdot P_{\vec{x}}(\{L_i=l_i\}_{\cal W}) =\\ && \prod_{i\in{\cal W}} P_{\vec{x}}(F_i=f_i \mid L_i=l_i)\cdot P_{\vec{x}}(\{L_i=l_i\}_{\cal W})\nonumber \end{eqnarray} where the last line stems from the conditional independence of $F_i$'s when conditioned on $L$'s assumed earlier. Taking the negative logarithm and using equations (\ref{eqn:event_prob_log}) and (\ref{eqn:Gibbs_1}) results in: \begin{eqnarray}\label{eqn:event_prob_total} \lefteqn{-\log P_{\vec{x}}(\{L_i=l_i\}_{\cal W},\{F_i=f_i\}_{\cal W}) = A_0^\prime\ + }\\ && A_1 \sum_{k=t-w+1}^t l_k({\vec{x}}) + A_2 \kappa_t({\vec{x}}) + A_3 \sum_{k=t-w+1}^t f_k({\vec{x}})l_k({\vec{x}}),\nonumber \end{eqnarray} where $A_0^\prime$ accounts for $Z_1$ (\ref{eqn:Gibbs_1}) and the last term is the sum of descriptors in all busy periods in ${\cal W}$. 
Note that the constant $A_2$ is positive, thus reducing the probability when frequent ``moving $\rightarrow$ static'' and ``static $\rightarrow$ moving'' transitions take place. The constant $A_1$ may be negative or positive depending on the particular values of $q$ and $p$ in the Markov model; increasing busy periods within ${\cal W}$ will lead to an increased ($q>p$) or decreased ($q<p$) joint probability. Note that at each location ${\vec{x}}$ the above model implicitly assumes independence among the busy and idle periods as well as conditional independence of $F_t$ when conditioned on $L_t=l_t$. This assumption is reasonable since different busy periods at a pixel correspond to different objects while different idle periods correspond to temporal distances between different objects. Typically, these are all independent\footnote{We have performed extensive experiments ranging from highway traffic to urban scenarios and the results appear to be consistent with these assumptions.}. With each time $t$ and position ${\vec{x}}$ we associate an event $E_t$ that represents the statistic described in (\ref{eqn:event_prob_total}), namely, \begin{eqnarray} E_t({\vec{x}}) = \sum_{k=t-w+1}^t (A_1 L_k({\vec{x}}) + A_3 F_k({\vec{x}})L_k({\vec{x}})) + A_2 {\cal K}_t({\vec{x}}), \label{eqn:event_prob_suff} \end{eqnarray} where the constant $A_0^\prime$ was omitted as it does not contribute to the characterization of dynamic behavior (identical value across all ${\vec{x}}$ and $t$) and ${\cal K}$ is a random variable associated with realization $\kappa$ (number of transitions). The main implication of the above event description is that it serves as a sufficient statistic for determining optimal decision rules \cite{Poor94}. \paragraph{Anomaly Detection Problem} We first describe anomaly detection abstractly. We are given data, $\omega \in \Omega \subset \mathbb{R}^d$. The nominal data are sampled from a multivariate density $g_0(\cdot)$ supported on the compact set $\Omega$.
Anomaly detection~\cite{Zhao09} can be formulated as a composite hypothesis testing problem. Suppose the test data, $\omega$, come from a mixture distribution, namely, $f(\cdot) = (1-\xi) g_0(\cdot) + \xi g_1(\cdot)$ where $g_1(\cdot)$ is also supported on $\Omega$. Anomaly detection involves testing the following nominal hypothesis \begin{eqnarray*} H_0: \xi = 0\,\,\, \mbox{versus the alternative (anomaly)}\,\,\, H_1 : \xi>0. \end{eqnarray*} The goal is to maximize the detection power subject to false alarm level $\alpha$, namely, $\mbox{Prob}(\mbox{declare } H_1 \mid H_0) \leq \alpha$. Since the mixing density is unknown, it is usually assumed to be uniform. In this case the uniformly most powerful test (over all values of $\xi$) amounts to thresholding the nominal density \cite{Poor94}. We choose a threshold $\tau(\alpha)$ and declare the observation, $\omega$, an outlier according to the following log-likelihood test: \begin{eqnarray}\label{eqn:gtest} -\log(g_0(\omega)) \decide{H_1}{H_0} \tau(\alpha) \end{eqnarray} where $\tau(\alpha)$ is chosen to ensure that the false alarm probability is smaller than $\alpha$. It follows that such a choice is the uniformly most powerful decision rule. The main problem that arises is that $g_0(\cdot)$ is unknown and has to be learned in some way from the data. The issue is that $\omega$ could be high-dimensional and learning such distributions may not be feasible. This is further compounded in video processing by the fact that it is even unclear what $\omega$, i.e., the features, should be. It is worth reflecting on how we have addressed these issues through our specific setup. We are given $w$ video frames, $I_{t-w+1},\,I_{t-w+2},\,\ldots,\,I_t$ and a specific location ${\vec{x}}$, and our task is to determine whether this sequence is consistent with nominal activity or, alternatively, is anomalous. We also have training data that describes the nominal activity.
In this context, our Markovian model provides a representation for the observed video frames. This representation admits a natural factorization, wherein increasingly complex features can be incorporated, for example through Markov-Gibbs models. Furthermore, the log-likelihood is shown to reduce to a scalar sufficient statistic, which is parameterized by a finite set of parameters ($A_j$'s in (\ref{eqn:event_prob_suff})). Consequently, the issue of learning a high-dimensional distribution is circumvented and one is left with estimating the finite number of parameters, which can be done efficiently using standard regression techniques. The problem of anomaly detection now reduces to thresholding the event $E_t=e_t$ according to (\ref{eqn:gtest}): \begin{eqnarray*} e_t({\vec{x}}) \decide{H_1}{H_0}\tau(\alpha), \end{eqnarray*} or, explicitly, \begin{eqnarray}\label{eqn:event_hyptest} \sum_{k=t-w+1}^t (A_1 l_k({\vec{x}}) + A_3 f_k({\vec{x}})l_k({\vec{x}})) + A_2 \kappa_t({\vec{x}})\decide{H_1}{H_0}\tau(\alpha). \end{eqnarray} Our task is to find an appropriate threshold $\tau(\alpha)$ so that the false alarms are bounded by $\alpha$. Note that our events are now scalar and learning the density function of a 1-D random variable can be done efficiently. The main requirement is that $E_t({\vec{x}})$ be a stationary ergodic stochastic process, which will ensure that the CDF can be accurately estimated: \begin{eqnarray*} \frac{1}{w} \sum_{k=t-w+1}^t {1{\hskip -2.5 pt}\hbox{I}}_{\{E\geq\eta\}}(e_k({\vec{x}})) \longrightarrow\mbox{Prob}_{{\vec{x}}} \{E \geq \eta \}, \end{eqnarray*} where ${1{\hskip -2.5 pt}\hbox{I}}_{\{E\geq\eta\}}(e_k({\vec{x}}))$ is an indicator function, equal to $1$ when $e_k({\vec{x}})\geq\eta$ and $0$ otherwise, while $\mbox{Prob}_{\vec{x}}$ denotes the representative stationary distribution for $E_t$ at any time $t$. For Markovian processes this type of ergodicity is standard~\cite{Karl75}.
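Operationally, since the event is a scalar, the test (\ref{eqn:event_hyptest}) amounts to computing one statistic per pixel and comparing it against an empirical quantile of the nominal training events. The sketch below is a hedged illustration, not the authors' implementation; the constants $A_1,A_2,A_3$ are arbitrary placeholders standing in for the values derived from the Markov parameters.

```python
import numpy as np

def event_statistic(l, f, A1=-0.2, A2=0.5, A3=1.0):
    """Scalar event statistic E_t over a w-frame window at one pixel.

    l: binary motion labels (length w); f: object-size descriptor values.
    A1, A2, A3 are placeholder model constants.
    """
    l = np.asarray(l, dtype=float)
    f = np.asarray(f, dtype=float)
    kappa = float(np.sum(l[:-1] != l[1:]))   # number of state transitions
    return A1 * l.sum() + A3 * (f * l).sum() + A2 * kappa

def calibrate_threshold(training_events, alpha):
    """tau(alpha): empirical (1 - alpha)-quantile of nominal training events.

    alpha = 0 recovers the zero-false-alarm rule tau(0) = max_t e_t.
    """
    e = np.sort(np.asarray(training_events, dtype=float))
    return e[int(np.ceil((1.0 - alpha) * (len(e) - 1)))]
```

An observed event is then declared anomalous whenever it exceeds the calibrated threshold.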
One extreme situation is to choose a threshold that ensures zero false alarms. This corresponds to choosing $\tau(0) = \max_t e_t$, i.e., the maximum value of the support of all events in the training data. Although the anomaly detection algorithm we describe in the next section requires no explicit estimation of the above CDF, it is nevertheless instructive to understand its properties. Fig.~\ref{fig:histograms} shows example PDFs for our test statistic $e_t(x)$ estimated from training data using smoothed histograms. Note the different histogram shapes depending on the nature of local activity. \begin{figure}[tb] \centering \includegraphics[width=7cm]{figures/histograms.eps} \vglue -0.2cm \caption{Event model PDF estimated for four different pixels. The two pixels in traffic lanes have similar histograms due to the fact that their behaviors are very similar (continuous highway traffic). The pixel above the traffic is in the idle area of the video, so its histogram has a high peak near zero, while the pixel on the overpass has a bimodal distribution caused by the traffic light.} \label{fig:histograms} \vglue -0.5cm \end{figure} \section{Behavior Subtraction Framework} \label{sec:behsubframework} In the previous section, we presented object and event models, and explained how they fit into the problem of anomaly detection. In principle, once the event model is known, various statistical techniques can be applied, but this would require a significant memory commitment and computational resources. Below, we propose an alternative that is memory-light and processor-fast and yet produces very convincing results. \subsection{Behavior Images} \label{ssec:behimg} As mentioned in the previous section, one extreme situation in anomaly detection is to ensure zero false alarms. This requires a suitable threshold, namely $\tau(0) = \max_t e_t$, equal to the maximum value of the support of all events in the training data.
This threshold is space-variant and can be captured by a 2-D array: \begin{eqnarray}\label{eqn:bbehimg1} B({\vec{x}}) = \max_{t\in[1,M]} e_t({\vec{x}}), \end{eqnarray} where $M$ is the length of the training sequence. We call $B$ the {\em background behavior image} \cite{Jodo08vcip} as it captures the background activity (in the training data) in a low-dimensional representation (one scalar per location ${\vec{x}}$). This specific $B$ image captures peak activity in the training sequence, and can be efficiently computed as it requires no estimation of the event PDF; maximum activity is employed as a surrogate for normality. As shown in Fig.~\ref{fig:Comm}, the $B$ image succinctly synthesizes the ongoing activity in a training sequence, here a busy urban intersection at peak hour. It implicitly includes the paths followed by moving objects as well as the amount of activity registered at every point in the training sequence. The event model (\ref{eqn:event_prob_suff}) is based on binary random variables $L$ whose realizations $l$ are computed, for example, using background subtraction. Since the computed labels $l$ will necessarily be noisy, i.e., will include false positives and misses, a positive bias will be introduced into the event model (even if the noise process is {\it iid}, its mean is positive since labels $l$ are either $0$ or $1$). The simplest method of noise suppression is lowpass filtering. Thus, in scenarios with severe event noise (e.g., unstable camera, unreliable background subtraction), instead of seeking a zero false-alarm rate we opt for event-noise suppression using a simple averaging filter to compute the background behavior image \cite{Jodo08icip}: \begin{eqnarray}\label{eqn:bbehimg2} B({\vec{x}}) = \frac{1}{M} \sum_{t=1}^M e_t({\vec{x}}). \end{eqnarray} This background behavior image estimates a space-variant bias from the training data.
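Both surrogates (\ref{eqn:bbehimg1})-(\ref{eqn:bbehimg2}) are one-pass reductions over the stack of training event images; the following NumPy sketch illustrates them under the assumption that the per-frame event images have already been computed.

```python
import numpy as np

def background_behavior_image(events, mode="max"):
    """Compute B(x) from a training stack of per-pixel event statistics.

    events: array of shape (M, H, W), one event image e_t per frame.
    mode 'max'  -> zero-false-alarm surrogate (peak activity);
    mode 'mean' -> noise-suppressing averaging surrogate.
    """
    events = np.asarray(events, dtype=float)
    if mode == "max":
        return events.max(axis=0)
    elif mode == "mean":
        return events.mean(axis=0)
    raise ValueError("mode must be 'max' or 'mean'")
```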
A non-zero bias can be considered as a temporal stationarity, and therefore normality, against which observed data can be compared. \begin{figure}[t] \centering \begin{tabular}{cc} \footnotesize Video frame $I_t$ & \footnotesize Motion label field $l_t$\\ \includegraphics[width=4.0cm]{figures/commAveResults_2.eps}& \includegraphics[width=4.0cm]{figures/commAveResults_7.eps}\\ \footnotesize $B$ image & \footnotesize Anomaly map\\ \includegraphics[width=4.0cm]{figures/commAveResults_5.eps}& \includegraphics[width=4.0cm]{figures/commAveResults_6.eps} \end{tabular} \caption{Behavior subtraction results for the maximum-activity surrogate (\ref{eqn:bbehimg1}) on data captured by a stationary, although vibrating, camera. This is a highly-cluttered intersection of two streets and interstate highway. Although the jitter induces false positives during background subtraction ($L_t$), only the tramway is detected by behavior subtraction; the rest of the scene is considered normal.} \label{fig:Comm} \vglue -0.4cm \end{figure} \subsection{Behavior Subtraction} \label{ssec:behsub} Having defined the zero-false-alarm threshold $\tau(0)$ or event-noise bias {\it via} the background behavior image $B$ (\ref{eqn:bbehimg1}-\ref{eqn:bbehimg2}), we can now apply the event hypothesis test (\ref{eqn:event_hyptest}) as follows: \begin{eqnarray*} e_t({\vec{x}}) - B({\vec{x}}) \decide{abnormal}{normal} \Theta \end{eqnarray*} where $\Theta$ is a user-selectable constant allowing for non-zero tolerance ($\Theta=0$ leads to a strict test). In analogy to calling $B$ a background behavior image, we call $e_t$ an {\it observed behavior image} as it captures events observed in the field of view of the camera over a window of $w$ video frames. The above test requires the accumulation of motion labels $l$, object sizes $f$, and state transitions ($\kappa_t$) over $w$ frames. All these quantities can be easily and efficiently computed. 
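Given $B$ and an observed behavior image $e_t$, the test above is a single vectorized operation per frame; a minimal sketch with a placeholder tolerance $\Theta$:

```python
import numpy as np

def behavior_subtraction(e_t, B, theta=0.5):
    """Per-pixel anomaly map: declare 'abnormal' where e_t(x) - B(x) > theta."""
    return (np.asarray(e_t, dtype=float) - np.asarray(B, dtype=float)) > theta
```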
Clearly, abnormal behavior detection in this case simplifies to the subtraction of the background behavior image $B$, containing an aggregate of long-term activity in the training sequence, from the observed behavior image $e_t$, containing a snapshot of activity just prior to time $t$, and subsequent thresholding. This explains the name {\em behavior subtraction} that we gave to this method. \begin{figure*}[tb] \centering \begin{tabular}{cccc} \footnotesize Video frame $I_t$ & \footnotesize Motion label field $l_t$ & \footnotesize Object-size descriptor $f_t$ & \footnotesize Anomaly map\\ \includegraphics[width=4.0cm]{figures/resultThumbnail_1.eps}& \includegraphics[width=4.0cm]{figures/resultThumbnail_2.eps}& \includegraphics[width=4.0cm]{figures/resultThumbnail_3.eps}& \includegraphics[width=4.0cm]{figures/resultThumbnail_4.eps}\\ \includegraphics[width=4.0cm]{figures/resultThumbnail_17.eps}& \includegraphics[width=4.0cm]{figures/resultThumbnail_18.eps}& \includegraphics[width=4.0cm]{figures/resultThumbnail_19.eps}& \includegraphics[width=4.0cm]{figures/resultThumbnail_20.eps}\\ \includegraphics[width=4.0cm]{figures/resultThumbnail_21.eps}& \includegraphics[width=4.0cm]{figures/resultThumbnail_22.eps}& \includegraphics[width=4.0cm]{figures/resultThumbnail_23.eps}& \includegraphics[width=4.0cm]{figures/resultThumbnail_24.eps}\\ \includegraphics[width=4.0cm]{figures/resultThumbnail_13.eps}& \includegraphics[width=4.0cm]{figures/resultThumbnail_14.eps}& \includegraphics[width=4.0cm]{figures/resultThumbnail_15.eps}& \includegraphics[width=4.0cm]{figures/resultThumbnail_16.eps} \end{tabular} \caption{Behavior subtraction results for maximum-activity surrogate (\ref{eqn:bbehimg1}) on video sequences containing shimmering water surface (two top rows), strong shadows (third row) and very small abnormally-behaving object (bottom row).} \label{fig:robustness} \vglue -0.6cm \end{figure*} \section{Experimental Results} \label{sec:expres} We tested our behavior subtraction 
algorithm for both the maximum- and average-activity surrogates on black-and-white and color, indoor and outdoor, urban and natural-environment video sequences. In all cases, we computed the label fields $l_t$ using simple background subtraction (\ref{eq:md}) with $\tau = 40$ and background $b$ updated with $\rho$ between $10^{-3}$ and $10^{-2}$, depending on the sequence. Although we have performed experiments on a wide range of model parameters, we present here the results for the event model based on the size descriptor (\ref{eqn:motobjdes}) ($A_1=A_2=0$). The results of behavior subtraction using the maximum-activity surrogate (\ref{eqn:bbehimg1}) are shown in Figs.~\ref{fig:Comm}-\ref{fig:groupCampus}. Each result was obtained using a training sequence of length $M$=1000-5000 frames, $w=100$, and $\Theta \in [0.5, 0.7]$. As is clear from the figures, the proposed method is robust to inaccuracies in motion labels $l_t$. Even if moving objects are not precisely detected, the resulting anomaly map is surprisingly precise. This is especially striking in Fig.~\ref{fig:Comm}, where a highly-cluttered environment results in a high density of motion labels while camera jitter corrupts many of those labels. Behavior subtraction is also effective in the removal of unstructured, parasitic motion such as that due to water activity (fountain, rain, shimmering surface), as illustrated in Fig.~\ref{fig:robustness}. Note that although the motion label fields $l_t$ include unstructured detections due to water droplets, only the excessive motion is captured by the anomaly maps (passenger car and truck with trailer). Similarly, the shimmering water surface is removed by behavior subtraction, producing a fairly clean boat outline in this difficult scenario. Our method also manages to detect abandoned objects and lingering people, as seen in the two bottom rows of Fig.~\ref{fig:robustness}. Fig.~\ref{fig:groupCampus} shows yet another interesting outcome of behavior subtraction.
In this case the background behavior image was trained on a video with a single pedestrian and fluttering leaves. While the object-size descriptor captures both individual pedestrians and groups thereof, anomalies are detected only when a large group of pedestrians passes in front of the camera. The results of behavior subtraction using the average-activity surrogate are shown in Fig.~\ref{fig:motiondetection}. The video sequence has been captured by a vibrating camera (structural vibrations of the camera mount). It is clear that behavior subtraction with the average-activity surrogate outperforms background subtraction based on the single-Gaussian model \cite{Wren97} and the non-parametric-kernel model \cite{Elga02}. As can be seen, behavior subtraction effectively eliminates false positives without significantly increasing misses. As already mentioned, the proposed method is efficient in terms of processing power and memory use, and thus can be implemented on modest-power processors (e.g., embedded architectures). Per pixel, it requires one floating-point number for each of $B$ and $e$, and $w/8$ bytes for $l$. This corresponds to a total of 11 bytes per pixel for $w=24$. This is significantly less than the 12 floating-point numbers per pixel needed by a tri-variate Gaussian for color video data (3 floating-point numbers for the $R,G,B$ means and 9 numbers for the covariance matrix). Our method currently runs in {\em Matlab} at 20 fps on $352\times 240$-pixel video using a 2.1 GHz dual-core Intel processor. More experimental results can be found in our preliminary work \cite{Jodo08vcip,Jodo08icip}, while complete video sequences can be downloaded from {\small\tt www.dmi.usherb.ca/$\sim$jodoin/projects/PAMI\_2009}. \section{Conclusions} \label{sec:concl} In this paper, we proposed a framework for the characterization of dynamic events and, more generally, behavior.
We defined events as spatio-temporal signatures composed of various moving-object features, and modeled them using stationary random processes. We also proposed a computationally-efficient implementation of these models, called behavior subtraction. In fact, due to the simple surrogates of activity/behavior statistics used, behavior subtraction is very easy to implement, uses little memory and can run on an embedded architecture. Furthermore, the proposed framework is content-blind, i.e., equally applicable to pedestrians, motor vehicles or animals. Among the applications that can benefit from the proposed framework are suspicious behavior detection and motion detection in the presence of strong parasitic background motion. Yet, challenges remain. One challenge is to extend the proposed concepts to multiple cameras so that a mutual reinforcement of decisions takes place; some of our preliminary work can be found in \cite{Ermi08icdsc}. Another challenge is to detect anomalies at the object level while using only the pixel-level decisions proposed here. \begin{figure}[t] \centering \begin{tabular}{cc} \footnotesize Video frame $I_{t=0}$ & \footnotesize Video frame $I_{t=2240}$ \\ \includegraphics[width=4.0cm]{figures/groupCampus_1.eps}& \includegraphics[width=4.0cm]{figures/groupCampus_2.eps}\\ \footnotesize Object-size descriptor $f_{t=0}$ & \footnotesize Object-size descriptor $f_{t=2240}$ \\ \includegraphics[width=4.0cm]{figures/groupCampus_3.eps}& \includegraphics[width=4.0cm]{figures/groupCampus_4.eps}\\ \footnotesize Anomaly map at $t=0$ & \footnotesize Anomaly map at $t=2240$ \\ \includegraphics[width=4.0cm]{figures/groupCampus_5.eps}& \includegraphics[width=4.0cm]{figures/groupCampus_6.eps} \end{tabular} \caption{Results of behavior subtraction for the maximum-activity surrogate (\ref{eqn:bbehimg1}) with training performed on a video containing a single pedestrian and fluttering leaves (top of the frame).
The group of pedestrians is associated with a large amount of activity and is thus detected by our method as an anomaly.} \label{fig:groupCampus} \vglue -0.6cm \end{figure} \begin{figure}[t] \centering \begin{tabular}{cc} \footnotesize Video frame $I_t$ & \footnotesize Single-Gaussian method \\ \includegraphics[width=4.0cm]{figures/chapel_1.eps} & \includegraphics[width=4.0cm]{figures/chapel_2.eps} \\ \footnotesize Parzen-window method & \footnotesize Behavior subtraction \\ \includegraphics[width=4.0cm]{figures/chapel_3.eps} & \includegraphics[width=4.0cm]{figures/chapel_4.eps} \end{tabular} \caption{Results for background subtraction based on single-Gaussian \cite{Wren97} and non-parametric-kernel \cite{Elga02} hypothesis tests, as well as for behavior subtraction, on data captured by a severely vibrating camera. Camera jitter introduces excessive false positives in both background subtraction methods, while behavior subtraction is relatively immune to jitter.} \label{fig:motiondetection} \vglue -0.6cm \end{figure} \end{document}
\section{Introduction} In the coming year the RHIC experiments will take data at very low beam energies ($\sqrt{s}\sim 5-50$ GeV/u) in order to explore the QCD phase transition critical point. At these energies spectator nucleons from beam breakup are emitted at large angles and must be considered when calculating the trigger efficiency. Whereas existing heavy ion event generators produce spectators with either no momentum or according to a simple Fermi step distribution, existing data favor a more complicated distribution. We show that for this calculation the above approximations lead to significant errors. In an earlier note the PHENIX Zero Degree Calorimeter (ZDC) efficiency at low $\sqrt{s}$ was calculated using Thomas-Fermi (TF) and Feshbach-Huang (FH) (see \cite{Feshbach}) distributions. Feshbach-Huang fits the available data (E814 \cite{E814} and also Bevalac) better than Thomas-Fermi. It yields a Gaussian momentum distribution with the same r.m.s. value as the Fermi one. Of the two distributions, F-H gave a better low-energy efficiency for the ZDC. We also calculated the acceptance for evaporation neutrons \cite{Evaporation} and found that they dominate the ZDC trigger efficiency. In this note we discuss the efficiency of the PHENIX Beam-Beam Counters (BBC) (which cover $3.1<\eta<3.9$) due to spectator protons. We discuss in some detail the physics of the spectator proton distributions. We also show that a hard component in the spectrum, usually neglected and due to short-range correlations in cold nuclear matter, alters significantly the BBC efficiency. This component is not ruled out by E814, for example, because their acceptance is limited to $p_T<250$ MeV. On the other hand it is confirmed by a number of experiments at JLab, BNL, etc. A popular review can be found in CERN Cour.49N1:22-24,2009.
For the experiments see refs.\cite{Tang:2002ww}-\cite{Egiyan:2005hs}.\\ It also significantly impacts the low-beam-energy BBC acceptance, as can be seen in the following video: http://www.phenix.bnl.gov/phenix/WWW/publish/swhite/all3.avi In the following we first calculate the detection efficiency of the BBC for single spectator protons. We then use the NA49 \cite{NA49} measured proton multiplicities to calculate the trigger efficiency of the BBC. \\ \begin{figure} \centering \includegraphics[width=0.99\linewidth]{aafragmentation} \caption{Spectator emission.} \label{fig:Layout} \end{figure} \subsection{The Physics of Fragmentation Distributions} It has been known for a long time that the scattering of high-energy photons, leptons, hadrons, and nuclei off stationary nuclei results in significant production of nucleons with momenta $p\ge p_F$ ($p_F$ is the Fermi momentum) in a wide range of angles, including backward angles where fragmentation of wounded nucleons does not contribute. It was suggested that the main mechanism of nucleon production is the destruction of short-range correlations in nuclei \cite{Frankfurt:1981mk}. This model allows one to describe a number of universal features of these processes, including absolute rates. It also led to the prediction \cite{Frankfurt:1981mk} of correlations in semiexclusive reactions like A(e,e'pN) and A(p,2pn) and scaling of the ratios of the (e,e') cross sections at $x> 1.4$. These predictions were recently confirmed by experiments at the BNL AGS and JLab. For a review see \cite{Frankfurt:2008zv}. In the process of nucleus-nucleus interactions, fragmentation of the nucleus at a given impact parameter, b, should depend only weakly on the incident energy (the main effect is a change of $\sigma_{inel}(NN)$). However one would expect different dynamics for small and large b.
{\it Small impact parameters.} For small b most of the nucleons are knocked out, and as a result there is a crescent-shaped volume -- practically a two-dimensional surface. The average distance between surviving nucleons will then be comparable to the internucleon distances, and hence the spectator system would primarily decay into a system of nucleons with momenta similar to those in the initial wave function -- that is, the spectator mechanism will be effective. Hence in this limit we can describe the spectrum approximately as in the spectator mechanism (for simplicity we give non-relativistic expressions for the cross section). In this case the distribution of produced nucleons over momenta (in the nucleus rest frame) is \begin{equation} \frac{d N}{d^3p} = (1+p_3/m_N)\, n(p), \end{equation} where the $3$-direction is along the projectile direction and $(1+p_3/m_N)$ is the flux factor, so fewer nucleons are emitted backward. Here $n(p)$ is the momentum distribution, which for momenta $p\ge p_0=300$ MeV/c is dominated by two-nucleon short-range correlations (SRC). The probability of these SRC is $\sim 25\%$ for heavy nuclei ($\int d^3p \, n(p) \, \theta (p-p_0)$), and $n(p) \propto \exp (- \lambda p)$, $\lambda \sim 5.6$ GeV$^{-1}$, for $p\ge p_0$. One could also use a more realistic wave function than this simplified form. Since the momentum tail in this case is much larger and drops more slowly with $p$, the rate of large-angle emission is much stronger than in the Fermi step model. {\it Medium impact parameters.} If $b\sim R_A$ the spectator system is roughly a hemisphere. In this case emission of nucleons due to the destruction of SRC near the surface of the hemisphere $\sim \pi R_A^2$ leads to emission into the open volume, which is given by Eq.(1).
A rough estimate of the number of nucleons near the surface facing the collision volume, given the mean density of nucleons in the nuclear medium of $\rho_0\sim 0.17$ nucleons/fm$^3$, is $\pi R_A^2\rho_0 \sim 25 - 30$ nucleons. This is about 1/4 of the spectator nucleons. The nucleons emitted into the volume of the hemisphere interact with the nuclear medium and are mostly absorbed, leading to heating of the volume of the nucleus. There is an additional excitation of the spectator system due to the creation of holes near the surface due to the removal of low momentum nucleons. Since the overall excitation energy is $\sim 40$ MeV $\times\, A^{2/3}$, this energy is divided between $A/2$ nucleons, corresponding to 10 - 15 MeV per nucleon. Hence the spectrum of produced nucleons should be quite soft -- and a significant part of the fragmentation should be into nuclear fragments. Accordingly, in this case, the production of nucleons should contain two components -- a faster one, predominantly emitted in the direction of the open hemisphere, and a slower, mostly isotropic one. Other mechanisms of fast proton production involve secondary hadron interactions with the nucleus and elastic/diffractive NN scattering near the periphery of one of the nuclei. These contributions do not change qualitatively our conclusions as they also lead to a significant fraction of nucleons being produced with transverse momenta $> p_F$. \subsection{The momentum distributions} We use normalized distributions so the integral over momentum space is always one. The Thomas-Fermi distribution is isotropic and uniform in momentum up to $p_F$, for which we use 270 MeV/c -- roughly what is found for heavy nuclei. The mean of $p^2$ is $3p_F^2/5$. \begin{equation} \frac{dN^{TF}}{dp_T}=\frac{3p_T \sqrt{p_F^2-p_T^2}}{p_F^3}\cdot \theta(p_F-p_T) \end{equation} Feshbach and Huang, and also Goldhaber \cite{Goldhaber}, derived the observed Gaussian spectra (at the Bevalac) of light fragments.
The calculation is based on clustering in the nucleus. Their distribution yields the same mean, i.e., $\langle p^2\rangle = 3p_F^2/5$, but their distribution in $p_T$ peaks lower and extends to higher momenta. Although the calculation accounts well for the measured spectra, it does less well when compared with today's picture of the wave function of light and heavy nuclei. \begin{figure} \centering \includegraphics[width=0.99\linewidth]{plot2} \caption{The Feshbach-Huang distribution extends to slightly larger $p_L$ and $p_T$ than the Fermi distribution.} \label{fig:Layout1} \end{figure} We now consider the distribution discussed in the two-nucleon short-range correlation model by Frankfurt and Strikman \cite{Frankfurt:1981mk}. This has a dominant component which is Thomas-Fermi like and a harder component beyond the Fermi momentum. The latter is normalized by the observed 25\% short-range correlation (SRC) probability. For our purposes it is not that significant that the wave function for $p<p_F$ is uniform rather than the Gaussian of F-H. The critical aspect is the hard component.\\ A simplified version of their wave function which is sufficient for the purposes of our analysis is given by \begin{equation} \psi ^2 (p) = a\cdot \theta(p_F - p) + b\cdot \exp (-\lambda p)\cdot \theta (p - p_F) \end{equation} It enters in Eq.~(1) and leads to the spectator distribution in the nucleus rest frame, which is not isotropic.\\ \begin{figure} \centering \includegraphics[width=0.99\linewidth]{bbc_text2_gr2} \caption{The Frankfurt-Strikman distribution shown as a cumulative distribution.} \label{fig:Layout2} \end{figure} The effect of Fermi smearing on longitudinal momentum is small. Let us evaluate it using a Thomas-Fermi distribution. We find an r.m.s. of 120 MeV/c. Similarly, if we define the light cone fraction, $\alpha$, given below, which is boost invariant, we find that $\langle\alpha\rangle = 1$ and $\delta\alpha \simeq \pm 0.13$.
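To make the simplified wave function above concrete, the constants $a$ and $b$ can be fixed by requiring unit normalization of the three-dimensional momentum distribution together with the quoted 25\% SRC probability. The following is a rough numerical sketch of ours (not the authors' analysis code); the parameter values $p_F = 270$ MeV/c and $\lambda = 5.6$ GeV$^{-1}$ are taken from the text:

```python
import math

p_F = 0.270      # Fermi momentum, GeV/c
lam = 5.6        # SRC tail slope, GeV^-1: n(p) ~ exp(-lam*p) above p_F
P_SRC = 0.25     # probability carried by the short-range-correlation tail

# Normalization integrals of psi^2(p) over d^3p = 4*pi*p^2 dp.
# Uniform part: integral_0^{p_F} 4*pi*p^2 dp = (4/3)*pi*p_F^3
I_soft = (4.0 / 3.0) * math.pi * p_F**3

# Tail part (closed form): integral_{p0}^inf 4*pi*p^2 exp(-lam*p) dp
def tail_integral(p0):
    return 4.0 * math.pi * math.exp(-lam * p0) * (
        p0**2 / lam + 2.0 * p0 / lam**2 + 2.0 / lam**3)

I_hard = tail_integral(p_F)

# Solve a*I_soft = 1 - P_SRC and b*I_hard = P_SRC
a = (1.0 - P_SRC) / I_soft
b = P_SRC / I_hard

def n(p):
    """Simplified Frankfurt-Strikman momentum distribution (Eq. 3)."""
    return a if p < p_F else b * math.exp(-lam * p)

# Fraction of nucleons with p above 2*p_F -- entirely from the hard tail,
# i.e., invisible to a pure Fermi-step model.
frac_hard = b * tail_integral(2.0 * p_F)
```

In this sketch roughly 13\% of the nucleons sit above $2p_F$, which is why the hard component matters for large-angle acceptance even though it carries only a quarter of the probability.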
Since $y\cong -\ln((E-p_L)/m_p)$, the rapidity spread relative to that of the nucleus is $\ln(\alpha)$, and so the spread in rapidity due to the longitudinal motion is $\pm 0.12$ and we will ignore it. \begin{equation} \alpha (p_L)=\frac{\sqrt{m_p^2+p_L^2}-p_L}{m_p} \end{equation} The angular coverage of the PHENIX BBC is effectively $2.3^\circ$ to $5.2^\circ$, or $3.1<\eta<3.9$. The Cerenkov response is fully efficient above $\sim$1.5 GeV proton momentum. The BBC acceptance is determined by the integration limits on the $p_T$ distribution over this angular range. Naively we wouldn't expect much when $P_{beam}\geq 7$ GeV/u, since then the BBC coverage falls outside the Fermi surface. The ZDC aperture is roughly 2.8 mrad relative to the forward direction. \begin{figure} \centering \includegraphics[width=0.99\linewidth]{plott} \caption{BBC acceptance relative to the Fermi distribution.} \label{fig:Layout3} \end{figure} \begin{equation} Acceptance(P_{beam})= \int_{P_{beam}\cdot \theta_{min}}^{P_{beam} \cdot\theta_{max}} \frac{dN}{d p_T} d p_T \end{equation} \begin{figure} \centering \includegraphics[width=0.99\linewidth]{bbc_text2_gr4} \caption{Calculated acceptance for one proton in the 3 distributions.} \label{fig:Layout4} \end{figure} We now combine the NA49 measured spectator proton multiplicity with the above acceptances to calculate the BBC trigger efficiency due to protons from fragmentation. The NA49 results are presented vs. impact parameter, b, calculated from VENUS and listed here with b (fm) in the first and N(proton) in the fifth column.\\ \begin{figure} \centering \includegraphics[width=0.99\linewidth]{bbc_text2_gr5} \caption{NA49's measured spectator proton multiplicity (column 5) vs. calculated b (column 1).} \label{fig:Layout5} \end{figure} Below we illustrate the significance of the hard component in the BBC trigger efficiency at low energy. The efficiency for a given multiplicity, $N$, and acceptance, $A$, is given by $\varepsilon = 1-(1-A)^N$.
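The acceptance integral and the efficiency formula can be evaluated in closed form for the Thomas-Fermi shape of Eq.~(2). The following is our illustrative sketch, not the analysis code; the angular limits $2.3^\circ$-$5.2^\circ$ and $p_F = 270$ MeV/c are taken from the text, and the small-angle approximation $p_T \simeq P_{beam}\,\theta$ is assumed:

```python
import math

P_F = 0.270                    # Fermi momentum, GeV/c
THETA_MIN = math.radians(2.3)  # inner edge of BBC coverage
THETA_MAX = math.radians(5.2)  # outer edge of BBC coverage

def tf_cdf(p_t):
    """Cumulative Thomas-Fermi p_T distribution from Eq. (2):
    N(<p_T) = 1 - (p_F^2 - p_T^2)^{3/2} / p_F^3 for p_T <= p_F."""
    p_t = min(p_t, P_F)
    return 1.0 - (P_F**2 - p_t**2) ** 1.5 / P_F**3

def acceptance(p_beam):
    """Single-proton BBC acceptance, Eq. (5), with p_T ~ p_beam * theta."""
    lo = p_beam * THETA_MIN
    hi = p_beam * THETA_MAX
    if lo >= P_F:
        return 0.0  # BBC coverage entirely outside the Fermi surface
    return tf_cdf(hi) - tf_cdf(lo)

def efficiency(p_beam, n_protons):
    """One-arm trigger efficiency: eps = 1 - (1 - A)^N."""
    a = acceptance(p_beam)
    return 1.0 - (1.0 - a) ** n_protons

A6 = acceptance(6.0)
```

At $p_{beam}=6$ GeV this gives $A\approx 0.09$ for the Fermi distribution, consistent with the $A\sim 0.1$ quoted in the example that follows, and the acceptance vanishes once $P_{beam}\,\theta_{min}$ exceeds $p_F$.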
For example, with the Fermi distribution and at $p_{beam}=6$ GeV, $A$ is $\sim 0.1$, so $\varepsilon=0.5$ for b=2 and 0.9 for b=8. Of course $\varepsilon$ is computed for one arm.\\ \begin{figure} \centering \includegraphics[width=0.99\linewidth]{plot5} \caption{BBC trigger efficiency calculated from the NA49 multiplicities.} \label{fig:Layout6} \end{figure} There is another mechanism for producing spectators at large angles which is usually neglected. Elastic scattering and quasi-elastic (diffractive) scattering of nucleons will also yield spectators beyond the Fermi surface. We estimate that this is comparable in magnitude to the hard component from the wave function discussed above. In the case of $pA$ collisions a significant contribution to the fast spectator yield comes from the reinteractions of the few-GeV hadrons produced in $pA$ scattering. This mechanism is probably less efficient in the case of AA nuclear collisions, since production of hadrons in the nuclear fragmentation region in AA interactions is likely to be suppressed as compared to the case of pA interactions. \subsection{Conclusion} In conclusion we plot the overall minimum-bias trigger efficiency combining both the ZDC and BBC two-arm coincidence acceptances. For the ZDC we also include the acceptance for evaporation neutrons. It is interesting that the ZDC and BBC very nearly complement one another and, when combined, give full coverage over the beam energy range from 2 to 100 GeV. Study of the correlation between the direction of the reaction plane and the direction of the proton emission will provide a novel probe of heavy ion collision dynamics and will also allow one to study it as a function of the incident energy. In particular, if the dynamics were energy independent, the $\eta$ distribution of the spectators would shift with a change of energy simply by $\frac{1}{2} \ln (s_1/s_2)$.
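The size of the $\frac{1}{2}\ln(s_1/s_2)$ shift quoted above is easy to evaluate; a short illustrative check (the $\sqrt{s}$ values are our choice, taken from the 5-50 GeV range mentioned in the introduction):

```python
import math

def spectator_eta_shift(sqrt_s1, sqrt_s2):
    """Shift of the spectator pseudorapidity distribution,
    Delta(eta) = (1/2) * ln(s1 / s2), assuming energy-independent dynamics."""
    return 0.5 * math.log(sqrt_s1**2 / sqrt_s2**2)

# Scanning from sqrt(s) = 50 GeV down to 5 GeV moves an energy-independent
# spectator distribution by ln(10), about 2.3 units of eta.
shift = spectator_eta_shift(50.0, 5.0)
```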
\begin{figure} \centering \includegraphics[width=0.99\linewidth]{bbc_text2_gr7} \caption{Overall trigger efficiency due to spectator nucleons when requiring a North-arm and South-arm coincidence of either the BBC or the ZDC.} \label{fig:Layout7} \end{figure}
\section{Introduction} The theory of frames in Hilbert spaces provides a central tool in many areas and has developed rapidly in the past decade. The motivation has come from applications to engineering, e.g., signal analysis, as well as from applications to different areas of mathematics, such as sampling theory \cite{AG}, operator theory \cite{HL}, harmonic analysis \cite{Gr}, nonlinear sparse approximation \cite{DE}, pseudo-differential operators \cite{GH}, and quantum computing \cite{EF}. Recently, the theory of frames has also shown connections to theoretical problems such as the Kadison-Singer Problem \cite{CFTW}. A standard frame for a Hilbert space $H$ is a family of vectors $x_i\in H$, $i\in\mathbb{N}$, such that there are constants $A, B>0$ for which $$A\,\|x\|^2\le \sum|\langle x, x_i\rangle|^2\le B\,\|x\|^2, \mbox{ whenever } x\in H.$$ In this paper we consider Schauder frames in Banach spaces, which, on the one hand, generalize Hilbert frames and, on the other, extend the notion of a Schauder basis. In \cite{CL}, D. Carando and S. Lassalle consider the duality theory for atomic decompositions. In our independent work, we concentrate mostly on properties of Schauder frames which do not depend on the choice of associated space, introduce the notions of minimal and maximal (associated) spaces and the corresponding minimal and maximal (associated) bases with respect to Schauder frames, and connect them closely to the duality theory. Moreover, we extend James' well-known results characterizing the reflexivity of spaces with an unconditional basis to spaces with unconditional frames. In Section 2 we recall the basic definitions and properties of Schauder frames. Then we introduce the concept of shrinking and boundedly complete frames and prove some elementary facts.
Section 3 deals with the concept of associated spaces, and introduces the definitions of minimal and maximal (associated) spaces and the corresponding minimal and maximal (associated) bases with respect to Schauder frames. In Section 4 we extend James' results on shrinking and boundedly complete bases to frames \cite{Ja} and prove the following theorems. All necessary definitions can be found in Sections 2 and 3 below. \begin{thmA} Let $(x_i,f_i)\subset X\times X^*$ be a Schauder frame of a Banach space $X$ and assume that for all $m\in\mathbb{N}$ $$\lim_{n\to\infty} \|f_m|_{\text{span}(x_i:i\ge n)}\|=0.$$ Then the following are equivalent. \begin{enumerate} \item $(x_i,f_i)$ is shrinking. \item Every normalized block of $(x_i)$ is weakly null. \item $X^*=\overline{\text{span}(f_i:i \in\mathbb{N})}$. \item The minimal associated basis is shrinking. \end{enumerate} \end{thmA} \begin{thmB} Let $(x_i,f_i)\subset X\times X^*$ be a Schauder frame of a Banach space $X$ and assume that for all $m\in\mathbb{N}$ $$\lim_{n\to\infty} \|f_m|_{\text{span}(x_i:i\ge n)}\|=0 \text{ and } \lim_{n\to\infty} \|x_m|_{\text{span}(f_i:i\ge n)}\|=0.$$ Then the following are equivalent. \begin{enumerate} \item $(x_i,f_i)$ is boundedly complete. \item $X$ is isomorphic to $ \overline{\text{span}(f_i:i \in\mathbb{N})}^*$ under the natural canonical map. \item The maximal associated basis is boundedly complete. \end{enumerate} \end{thmB} In Section 5, we discuss unconditional Schauder frames. We obtain a generalization of James' theorem and prove that a Banach space with a locally shrinking and unconditional Schauder frame is either reflexive or contains isomorphic copies of $\ell_1$ or $c_0$.
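In a finite-dimensional Hilbert space the Schauder-frame identity $x=\sum f_i(x) x_i$ defined in Section 2 reduces to the familiar Hilbert-frame reconstruction with the canonical dual. As a concrete numerical illustration (ours, not part of the paper): the "Mercedes-Benz" frame of three unit vectors in $\mathbb{R}^2$ is a tight frame with frame bound $3/2$, so the functionals $f_i=\frac{2}{3}\langle\,\cdot\,,x_i\rangle$ reconstruct every vector exactly:

```python
import math

# Three unit vectors in R^2 at 120-degree spacing. Their outer products sum
# to (3/2) * Identity, so f_i = (2/3) <., x_i> gives exact reconstruction.
frame = [(math.cos(2 * math.pi * k / 3), math.sin(2 * math.pi * k / 3))
         for k in range(3)]

def reconstruct(x):
    """Apply the frame reconstruction x -> sum_i f_i(x) x_i."""
    out = [0.0, 0.0]
    for v in frame:
        coeff = (2.0 / 3.0) * (x[0] * v[0] + x[1] * v[1])  # f_i(x)
        out[0] += coeff * v[0]
        out[1] += coeff * v[1]
    return out

y = reconstruct((1.7, -0.4))
```

The redundancy (three vectors spanning a two-dimensional space) is exactly what distinguishes a frame from a basis.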
\begin{thmC} Let $(x_i,f_i)\subset X\times X^*$ be an unconditional Schauder frame of a Banach space $X$ and assume that for all $m\in\mathbb{N}$ $$\lim_{n\to\infty} \|f_m|_{\text{span}(x_i:i\ge n)}\|=0.$$ Then $X$ is reflexive if and only if $X$ does not contain isomorphic copies of $c_0$ and $\ell_1$. \end{thmC} All Banach spaces in this paper are considered to be spaces over the real number field $\mathbb{R}$. The unit sphere and the unit ball of a Banach space $X$ are denoted by $S_X$ and $B_X$, respectively. The vector space of scalar sequences $(a_i)$ which vanish eventually is denoted by $c_{00}$. The usual unit vector basis of $c_{00}$, as well as the unit vector bases of $c_0$ and $\ell_p$ ($1\le p<\infty$), and the corresponding coordinate functionals will be denoted by $(e_i)$ and $(e^*_i)$, respectively. Given two sequences $(x_i)$ and $(y_i)$ in some Banach space, and given a constant $C>0$, we say that \emph{$(y_i)$ $C$-dominates $(x_i)$}, or that \emph{$(x_i)$ is $C$-dominated by $(y_i)$}, if \begin{eqnarray*} \Big\|\sum a_i x_i\Big\|\le C\Big\|\sum a_i y_i\Big\| \quad \mbox{ for all } (a_i)\in c_{00}. \end{eqnarray*} We say that \emph{$(y_i)$ dominates $(x_i)$}, or that \emph{$(x_i)$ is dominated by $(y_i)$}, if $(y_i)$ $C$-dominates $(x_i)$ for some constant $C>0$. \section{Frames in Banach Spaces} In this section, we give a short review of the concept of frames in Banach spaces and make some preparatory observations. \begin{defin} Let $X$ be a (finite or infinite dimensional) separable Banach space. A sequence $(x_i,f_i)_{i\in \mathbb{I}}$, with $(x_i)_{i\in \mathbb{I}}\subset X$ and $(f_i)_{i\in \mathbb{I}}\subset X^*$, where $\mathbb{I}=\mathbb{N}$ or $\mathbb{I}=\{1,2,\dots,N\}$ for some $N\in\mathbb{N}$, is called a {\em (Schauder) frame of $X$} if for every $x\in X$, \begin{equation}\label{def.frame.eq1} x=\displaystyle\sum_{i\in\mathbb{I}} f_i(x) x_i.
\end{equation} In case that $\mathbb{I}=\mathbb{N}$, we mean that the series in (\ref{def.frame.eq1}) converges in norm, that is, \begin{equation} x=\|\cdot\|-\displaystyle\lim_{n\rightarrow\infty}\sum_{i=1}^n f_i(x)x_i. \end{equation} An \emph{unconditional frame of $X$} is a frame $(x_i,f_i)_{i\in\mathbb{N}}$ for $X$ for which the convergence in (\ref{def.frame.eq1}) is unconditional. We call a frame $(x_i,f_i)$ {\em bounded } if $$\sup_i\|x_i\|<\infty \, \mbox{ and } \, \sup_i\|f_i\|<\infty,$$ and \emph{semi-normalized} if $(x_i)$ and $(f_i)$ both are semi-normalized, that is, if $$0<\inf_i\|x_i\|\le \sup_i\|x_i\|<\infty \, \text{ and } \, 0<\inf_i\|f_i\|\le \sup_i\|f_i\|<\infty.$$ \end{defin} \begin{remark} Throughout this paper, it will be our convention that we only consider non-zero frames $(x_i,f_i)$ indexed by $\mathbb{N}$, that is, the index set $\mathbb{I}$ will always be $\mathbb{N}$ and we assume that $x_i\neq 0$ and $f_i\neq 0$ for all $i\in\mathbb{N}$. \end{remark} In the following proposition we recall some easy observations from \cite{CHL} and \cite{CDOSZ}. \begin{prop}\cite{CHL,CDOSZ}\label{basic.pp1} Let $(x_i,f_i)$ be a frame of $X$. \begin{enumerate} \item[(a)] \begin{enumerate} \item[(i)] Using the Uniform Boundedness Principle we deduce that $$K=\sup_{x\in B_X}\sup_{m\le n} \Big\|\sum_{i=m}^n f_i(x) x_i\Big\|<\infty.$$ We call $K$ the \textrm{projection constant of $(x_i,f_i)$}. \item[(ii)] If $(x_i,f_i)$ is an unconditional frame, then it also follows from the Uniform Boundedness Principle that $$K_u=\sup_{x\in B_X}\sup_{(\sigma_i)\subset\{\pm1\}} \Big\|\sum \sigma_i f_i(x) x_i\Big\|<\infty.$$ We call $K_u$ the \textrm{unconditional constant} of $(x_i,f_i)$. 
\end{enumerate} \item[(b)] The sequence $(f_i,x_i)$ is a $w^*$-Schauder frame of $X^*$, that is to say, for every $f\in X^*$, $$f=w^*-\lim\limits_{n\to\infty}\sum_{i=1}^n f(x_i) f_i.$$ \item[(c)] For any $f\in X^*$ and $m\le n$ in $\mathbb{N}$, we have \begin{equation}\label{E:2.5a.1} \Big\| \sum_{i=m}^n f(x_i) f_i\Big\|= \sup_{x\in B_X}\Big|\sum_{i=m}^n f(x_i) f_i(x)\Big|\le \|f\| \sup_{x\in B_X}\Big\|\sum_{i=m}^n f_i(x) x_i\Big\|\le K \|f\|, \end{equation} and \begin{align}\label{E:2.5a.2} \Big\| \sum_{i=m}^n f(x_i) f_i\Big\|&=\sup_{x\in B_X}\Big|\sum_{i=m}^n f(x_i) f_i(x)\Big| =\sup_{x\in B_X}\Big|f\Big(\sum_{i=m}^n f_i(x) x_i\Big)\Big|\\ & \le \sup\limits_{ z \in \, \text{\rm span}(x_i:i\ge m), \|z\|\le K }|f(z)|= K\|f|_{\text{\rm span}(x_i:i\ge m)}\|,\notag \end{align} where $K$ is the projection constant of $(x_i,f_i)$. \end{enumerate} \end{prop} Next, we present some basic properties of frames in Banach spaces. \begin{prop}\label{norming} Let $(x_i, f_i)$ be a frame of a Banach space $X$ and denote by $K$ the projection constant of $(x_i, f_i)$. Then ${\overline{\textrm{span}}(f_i:i\in\mathbb{N})}$ is a norming subspace of $X^*$. \end{prop} \begin{proof} By Proposition \ref{basic.pp1} (b) and (c) (\ref{E:2.5a.1}), for all $f\in B_{X^*}$ and $n\in \mathbb{N}$ we have $$f=w^*\mbox{-}\lim\limits_{n\to\infty}\sum\limits_{i=1}^n f(x_i) f_i, \ \ \left\|\sum_{i=1}^n f(x_i) f_i\right\|\le K,$$ where $K$ is the projection constant of $(x_i,f_i)$. Thus, we obtain that $$B_{X^*}\subset \overline{K\cdot B_{X^*}\bigcap\mathrm{span}(f_i:i\in\mathbb{N})}^{w^*}\subset K\cdot B_{X^*}.$$ Then it is easy to deduce that $\overline{\textrm{span}}(f_i:i\in\mathbb{N})$ is norming for $X$. \end{proof} \begin{defin}\label{def.local} Let $(x_i,f_i)$ be a frame of a Banach space $X$. $(x_i,f_i)$ is called \emph{locally shrinking} if for all $m\in\mathbb{N}$ $\|f_m|_{\textrm{span}(x_i:i\geq n)}\|\to0$ as $n\to \infty$. 
$(x_i,f_i)$ is called \emph{locally boundedly complete} if for all $m\in\mathbb{N}$ $\|x_m|_{\textrm{span}(f_i:i\geq n)}\|\to0$ as $n\to \infty$. $(x_i,f_i)$ is called \emph{weakly localized} if it is locally shrinking and locally boundedly complete. The frame $(x_i,f_i)$ is called \emph{pre-shrinking} if $(f_i,x_i)$ is a frame of $X^*$. It is called \emph{pre-boundedly complete} if for all $x^{**}\in X^{**}$, $\sum\limits_{i=1}^\infty x^{**}(f_i) x_i$ converges. We call $(x_i,f_i)$ \emph{shrinking} if it is locally shrinking and pre-shrinking, and we call $(x_i,f_i)$ \emph{boundedly complete} if it is weakly localized and pre-boundedly complete. \end{defin} It is clear that every basis for a Banach space is weakly localized. However, this can fail for frames. The following example is an unconditional and semi-normalized frame for $\ell_1$ which is not locally shrinking or locally boundedly complete. We leave the proof to the reader. \begin{ex} Let $(e_i)$ denote the usual unit vector basis of $\ell_1$ and let $(e_i^*)$ be the corresponding coordinate functionals, and set ${\bf 1}=(1,1,1,\dots)\in \ell_\infty$. Then define a sequence $(x_i,f_i)\subset \ell_1 \times \ell_\infty$ by putting $x_{2i-1}=x_{2i}=e_i$ for all $i\in\mathbb{N}$ and $$ f_i=\left\{\begin{array}{ll} {\bf 1}, & \hbox{if $i\!=\!1$;} \\ e_1^*-{\bf 1}, & \hbox{if $i\!=\!2$;} \\ e_k^*-e_1^*/2^k, & \hbox{if $i\!=\!2k\!-\!1$ for $k\!\in\!\mathbb{N}\setminus\{1\}$;} \\ e_1^*/2^k, & \hbox{if $i\!=\!2k$ for $k\!\in\!\mathbb{N}\setminus\{1\}$.} \\ \end{array}\right.$$ \end{ex} \begin{prop}\label{self} Let $(x_i,f_i)$ be a frame of a Banach space $X$. Then the space $$X_0=\Big\{x\in X: \|x|_{\textrm{span}(f_i:i\geq n)}\|\to 0 \mbox{ as } n\to\infty \Big\}$$ is a norm closed subspace of $X$.
Moreover, if $(x_i,f_i)$ is locally boundedly complete, then $X_0=X.$ \end{prop} \begin{proof} If $(x_k)\subset X_0$ with $x_k\to x$ in $X$, then given any $\varepsilon>0$, there are $k_0$ with $\|x-x_{k_0} \|\le \varepsilon$, and $n_0\in\mathbb{N}$ such that for all $n\ge n_0$, $$\|x|_{\textrm{span}(f_i:i\geq n)}\| \le \|x-x_{k_0} \|+\|x_{k_0}|_{\textrm{span}(f_i:i\geq n)}\|\le 2\varepsilon,$$ which implies that $x\in X_0.$ If $(x_i,f_i)$ is locally boundedly complete, then $x_i\in X_0$ for all $i\in\mathbb{N}$. It follows that $X=\overline{\textrm{span}}(x_i:i\in\mathbb{N})\subset X_0$. Thus, we complete the proof. \end{proof} \begin{prop}\label{P:2.5b} Let $(x_i,f_i)$ be a frame of a Banach space $X$. Then the space $$Y=\Big\{f\in X^*: f=\|\cdot\|-\lim_{n\to\infty}\sum_{i=1}^n f(x_i) f_i\Big\},$$ is a norm closed subspace of $X^*$. Moreover, if $(x_i,f_i)$ is locally shrinking, then $$Y=\overline{\mathrm{span}}(f_i:i\in\mathbb{N}),$$ and, thus, $(f_i,x_i)$ is a frame for $Y$. \end{prop} \begin{proof} First, define a new norm $|\!|\!|\cdot|\!|\!|$ on $X^*$ as follows $$|\!|\!|f|\!|\!|=\sup\limits_{m\le n}\Big\| \sum\limits_{i=m}^n f(x_i) f_i\Big\| \quad \mbox{ for all } f\in X^*.$$ By Proposition \ref{basic.pp1} (c) this is an equivalent norm of $(X^*, \|\cdot\|)$. Thus, if $(g_k)\subset Y$ with $g_k\to g$ in $X^*$, it follows that, $$\lim_{k\to\infty}|\!|\!|g-g_k |\!|\!| =\lim_{k\to\infty}\sup_{m\le n}\Big\| \sum_{i=m}^n g(x_i) f_i - \sum_{i=m}^n g_k(x_i) f_i\Big\|=0.$$ Thus, given any $\varepsilon>0$, there are $k_0$ with $|\!|\!|g-g_{k_0} |\!|\!|\le \varepsilon$, and $m_0\in\mathbb{N}$ such that for all $n\ge m\ge m_0$, $\Big\| \sum\limits_{i=m}^n g_{k_0}(x_i) f_i\Big\|\le\varepsilon$, and thus, $$\Big\| \sum_{i=m}^n g(x_i) f_i\Big\|\le |\!|\!|g-g_{k_0} |\!|\!|+\Big\| \sum_{i=m}^n g_{k_0}(x_i) f_i\Big\| \le 2\varepsilon,$$ which implies that $\sum\limits_{i=1}^\infty g(x_i) f_i$ converges. 
By Proposition \ref{basic.pp1} (b), we get $g=\sum\limits_{i=1}^\infty g(x_i) f_i\in Y$. If $(x_i,f_i)$ is locally shrinking, it follows from Proposition \ref{basic.pp1} (c) that for all $i\in\mathbb{N}$, $f_i\in Y$. Hence $\overline{\text{\rm span}}(f_i:i\in\mathbb{N})\subset Y$. On the other hand, it is clear from the definition of $Y$ that $Y\subset\overline{\text{\rm span}}(f_i:i\in\mathbb{N})$. Therefore, $Y=\overline{\textrm{span}}(f_i:i\in\mathbb{N}).$ \end{proof} \section{Associated Spaces} \begin{defin}\label{ass.def} Let $(x_i,f_i)$ be a frame of a Banach space $X$ and let $Z$ be a Banach space with a basis $(z_i)$. We call $Z$ an {\em associated space to $(x_i,f_i)$} and $(z_i)$ an \emph{associated basis}, if \begin{align*} S:Z\to X,\quad \sum a_i z_i\mapsto \sum a_i x_i \ \ \text{ and } \ \ T:X\to Z,\quad x=\sum f_i(x) x_i\mapsto \sum f_i(x) z_i, \end{align*} are bounded operators. We call $S$ the {\em associated reconstruction operator} and $T$ the {\em associated decomposition operator} or \emph{analysis operator}. \end{defin} \begin{remark} If $(x_i,f_i)$ is a frame of a Banach space $X$ and $Z$ a corresponding associated space with an associated basis $(z_i)$, then (see \cite[Definition 2.1]{CHL} or \cite{Ch}) $(x_i,f_i)$ is an \emph{atomic decomposition} of $X$ with respect to $Z$. In our paper, we will mostly concentrate on frames and properties which are independent of the associated spaces. \end{remark} \begin{prop}\label{P:2.7a} Let $(x_i,f_i)$ be a frame of a Banach space $X$ and let $Z$ be an associated space with an associated basis $(z_i)$. Let $S$ and $T$ be the associated reconstruction operator and the associated decomposition operator, respectively. Then $S$ is a surjection onto $T(X)$, and $T$ is an isomorphic embedding from $X$ into $Z$. Moreover, for all $i\in\mathbb{N}$, $S(z_i)=x_i$ and\, $T^*(z^*_i)=f_i$. 
\end{prop} \begin{proof} Note that for any $x\in X$, it follows that $$S\circ T(x)=S\circ T\Big(\sum f_i(x) x_i\Big)=S\Big(\sum f_i(x) z_i\Big)= \sum f_i(x) x_i=x.$$ Therefore, $T$ must be an isomorphic embedding and $S$ a surjection onto the space $T(X)=\Big\{ \sum f_i(x) z_i : x\in X\Big\}$. Moreover, the map $P: Z\to Z$, $z\mapsto T\circ S(z)$, is a projection onto $T(X)$. By Definition \ref{ass.def}, it is clear that $S(z_i)=x_i$ for all $i\in\mathbb{N}$. Secondly, it follows that for any $x\in X$ and $i\in\mathbb{N}$, $$T^*(z^*_i)(x)=z^*_i \circ T\Big(\sum f_j(x) x_j\Big)= z^*_i\Big(\sum f_j(x) z_j\Big)= f_i(x),$$ and thus, $T^*(z^*_i)=f_i$, which proves the claim. \end{proof} We now introduce the notion of minimal bases. \begin{defin}\label{minimal.def} Let $(x_i)$ be a non zero sequence in a Banach space $X$. Define a norm on $c_{00}$ as follows \begin{equation}\label{minimal.def.eq1} \Big\|\sum a_i e_i\Big\|_{Min}=\max_{m\le n} \Big\|\sum_{i=m}^n a_i x_i\Big\|_X \quad \mbox{ for all } \sum a_i e_i\in c_{00}, \end{equation} Denote by $Z_{Min}$ the completion of $c_{00}$ endowed with the norm $\|\cdot\|_{Min}$. It is easy to prove that $(e_i)$, denoted by $(e_i^{Min})$, is a bi-monotone basis of $Z_{Min}$. By the following Theorem \ref{minimal.1} (b), we call $Z_{Min}$ and $(e_i^{Min})$ the \emph{minimal space} and the \emph{minimal basis with respect to $(x_i)$}, respectively. Note that the operator $$S_{Min}:Z_{Min}\to X, \quad \sum a_i e_i^{Min}\mapsto \sum a_i x_i,$$ is linear and bounded with $\|S_{Min}\|=1$. If $(x_i,f_i)$ is a frame the {\em minimal space} (or the {\em minimal basis) with respect to $(x_i,f_i)$} is the minimal space (or the minimal basis) with respect to $(x_i)$. \end{defin} As the following result from \cite[Theorem 2.6]{CHL} shows, associated spaces always exist. \begin{thm}\cite[Theorem 2.6]{CHL}\label{minimal.1} Let $(x_i,f_i)$ be a frame of a Banach space $X$ and let $Z_{Min}$ be the minimal space with the minimal basis $(e_i^{Min})$.
\begin{enumerate} \item[(a)] $Z_{\textrm{Min}}$ is an associated space to $(x_i,f_i)$ with the associated basis $(e_i^{Min})$. \item[(b)] For any associated space $Z$ with an associated basis $(z_i)$, $(e_i^{Min})$ is dominated by $(z_i)$. \end{enumerate} Thus, we will call $Z_{Min}$ and $(e_i^{Min})$ the \emph{minimal associated space} and the \emph{minimal associated basis} to $(x_i,f_i)$, respectively. \end{thm} We give a sketch of the proof. \begin{proof} (a) Let $K$ be the projection constant of $(x_i,f_i)$. It follows that the map $T_{Min}: X\to Z_{Min}$ defined by $$T_{Min}: X\to Z_{Min}, \quad x=\sum f_i(x) x_i\mapsto \sum f_i(x) e_i^{Min},$$ is well-defined, linear and bounded with $\|T_{Min}\|\le K$. As already noted in Definition \ref{minimal.def}, the operator $S_{Min}: Z_{Min}\to X$ is linear and bounded. (b) If $Z$ is an associated space with an associated basis $(z_i)$ and $S: Z\to X$ is the corresponding associated reconstruction operator, then it follows that for any $(a_i)\in c_{00}$, \begin{eqnarray}\label{minimal.remark1.eq1} \Big\|\sum a_i e_i^{Min}\Big\|&=&\max_{m\le n}\Big\|\sum_{i=m}^n a_i x_i\Big\|=\max_{m\le n}\Big\|\sum_{i=m}^n a_i S(z_i)\Big\|\\ &\le& \|S\| \max_{m\le n}\Big\|\sum_{i=m}^n a_i z_i\Big\|\le K_Z\|S\| \Big\|\sum a_i z_i\Big\|,\nonumber \end{eqnarray} where $K_Z$ is the projection constant of $(z_i)$. \end{proof} Next we introduce the notion of the maximal space and the maximal basis. \begin{defin}\label{max.def} Let $(x_i,f_i)$ be a frame of a Banach space $X$. Define a norm on $c_{00}$ as follows \begin{equation}\label{max.def.eq1} \Big\|\sum a_i e_i\Big\|_{Max}=\sup_{\stackrel{(b_i)\in c_{00}}{\max\limits_{m\leq n}\|\sum\limits_{i=m}^n b_i f_i\|\leq 1}}\Big|\sum a_i b_i\Big| \quad \mbox{ for all } \sum a_i e_i\in c_{00}. \end{equation} Denote by $Z_{Max}$ the completion of $c_{00}$ under $\|\cdot\|_{Max}$. Clearly, $(e_i)$ is a bi-monotone basis of $Z_{Max}$, which will be denoted by $(e_i^{Max})$.
We call $Z_{Max}$ and $(e_i^{Max})$ the \emph{maximal space} and the \emph{maximal basis with respect to $(x_i,f_i)$}, respectively. \end{defin} \begin{thm}\label{associated.maximal.pp1} Let $(x_i, f_i)$ be a frame of a Banach space $X$ and let $Z_{Max}$ be the maximal space with the maximal basis $(e_i^{Max})$. \begin{enumerate} \item[(a)] If $Z$ is an associated space with an associated basis $(z_i)$, then $(e_i^{Max})$ dominates $(z_i)$. \item[(b)] The mapping \begin{equation}\label{max.S.eq} S_{Max}:Z_{Max}\to X,\quad z=\sum a_i e_i^{Max}\mapsto \sum a_i x_i, \end{equation} is well-defined, linear and bounded. \item[(c)] If $(x_i,f_i)$ is locally boundedly complete, then $Z_{\textrm{Max}}$ is an associated space to $(x_i,f_i)$ with the associated basis $(e_i^{Max})$. In this case, we call $Z_{Max}$ and $(e_i^{Max})$ the \emph{maximal associated space} and the \emph{maximal associated basis to $(x_i, f_i)$}. \end{enumerate} \end{thm} \begin{proof} (a) Let $Z$ be an associated space with an associated basis $(z_i)$, let $(z_i^*)$ be the corresponding coordinate functionals, and let $T:X\to Z$ be the associated decomposition operator. By Proposition \ref{P:2.7a}, $T^*(z_i^*)=f_i$ for all $i\in\mathbb{N}$.
Thus, for any $(a_i)\in c_{00}$, we have \begin{eqnarray}\label{max.eq2} \Big\|\sum a_i z_i\Big\|&\leq& K_Z \sup_{\stackrel{(b_i)\in c_{00}}{\|\sum b_i z_i^*\|\leq 1}}\Big|\Big\langle\sum a_i z_i, \sum b_i z_i^* \Big\rangle\Big| \\ &\leq& K^2_Z \sup_{\stackrel{(b_i)\in c_{00}}{\max\limits_{m\leq n}\|\sum\limits_{i=m}^n b_i z_i^*\|\leq 1}}\Big|\sum a_i b_i\Big|\nonumber\\ &\leq& K^2_Z \sup_{\stackrel{(b_i)\in c_{00}}{\max\limits_{m\leq n}\|T^*(\sum\limits_{i=m}^n b_i z_i^*)\|\leq \|T^*\|}}\Big|\sum a_i b_i\Big| \nonumber\\ &\leq& K^2_Z \ \ \|T^*\| \sup_{\stackrel{(b_i)\in c_{00}}{\max\limits_{m\leq n}\|\sum\limits_{i=m}^n b_i f_i\|\leq 1}}\Big|\sum a_i b_i\Big| \leq K^2_Z \ \ \|T^*\| \ \ \Big\| \sum a_i e_i^{Max} \Big\|,\nonumber \end{eqnarray} where $K_Z$ is the projection constant of $(z_i, z_i^*)$. (b) Let $(Z_{Min}, (e_i^{Min}))$ be the minimal space to $(x_i,f_i)$ and by Theorem \ref{minimal.1} (a) let $T_{Min}: X\to Z_{Min}$ be the corresponding associated decomposition operator. Then by (\ref{max.eq2}), for any $(a_i)\in c_{00}$, we have \begin{eqnarray}\label{max.eq4} \max_{m\leq n}\Big\|\sum_{i=m}^na_i x_i\Big\|&=& \Big\|\sum a_i e_i^{Min}\Big\|\leq C \Big\|\sum a_i e_i^{Max}\Big\|, \end{eqnarray} where $C=K^2_{Min} \, \|T^*_{Min}\|$ and $K_{Min}$ is the projection constant of $(e_i^{Min})$. Thus, the map $S_{Max} : Z_{Max}\rightarrow X$ with $S_{Max}(e_i^{Max})=x_i$, for $i\in\mathbb{N}$, is well defined, linear and bounded with $\|S_{Max}\|\leq K^2_{Min} \|T^*_{Min}\|$. (c) If $(x_i,f_i)$ is locally boundedly complete, then for any $x\in X$ and $l\leq r$, we have \begin{eqnarray*} \Big\|\sum_{i=l}^r f_i(x) e_i^{Max}\Big\|=\sup_{\stackrel{(b_i)\in c_{00}}{\max\limits_{m\leq n}\|\sum\limits_{i=m}^n b_i f_i\|\leq 1}}\Big|\sum_{i=l}^r b_i f_i(x)\Big|\leq \|x|_{\text{\rm span}(f_i : i\geq l)}\|, \end{eqnarray*} which by Proposition \ref{self}, tends to zero as $l\to\infty$. 
Thus, the map \begin{equation} T_{Max} : X\rightarrow Z_{Max}, \ \ x=\sum f_i(x) x_i \mapsto \sum f_i(x) e_i^{Max}, \end{equation} is well-defined, linear and bounded with $\|T_{Max}\|\le1$, which completes our proof. \end{proof} The following result emphasizes that, for every frame, associated bases dominate $(e^{Min}_i)$ and are dominated by $(e^{Max}_i)$. \begin{cor} Let $(x_i,f_i)$ be a frame of a Banach space $X$. Assume that $(e_i^{Min})$ and $(e_i^{Max})$ are the minimal basis and the maximal basis with respect to $(x_i,f_i)$, respectively. Then for any associated space $Z$ with an associated basis $(z_i)$, there are $C_1,C_2>0$ such that \begin{equation} C_1 \Big\|\sum a_i e_i^{Min}\Big\|\leq \Big\|\sum a_i z_i\Big\|\leq C_2 \Big\|\sum a_i e_i^{Max}\Big\| \quad \mbox{for all \,$(a_i)\in c_{00}$}. \end{equation} \end{cor} \section{Applications of frames to duality theory} The following results extend James' work on shrinking and boundedly complete bases \cite{Ja} from bases to frames. Theorem \ref{shrinking.1} obviously yields Theorem A and Theorem \ref{cor.bdd} implies Theorem B. \begin{thm}\label{shrinking.1} Let $(x_i,f_i)$ be a Schauder frame of a Banach space $X$. Assume that $Z_{Min}$ and $(e_i^{Min})$ are the minimal space and minimal basis with respect to $(x_i,f_i)$, respectively. Then the following conditions are equivalent. \begin{enumerate} \item[a)] Every normalized block sequence of $(x_i)$ is weakly null. \item[b)] \begin{enumerate} \item[i)] $(x_i,f_i)$ is locally shrinking. \item[ii)] If $(u_n)\subset B_X$ with $\lim\limits_{n\to\infty} f_m(u_n)=0$ for all $m\in\mathbb{N}$, then $(u_n)$ is weakly null. \end{enumerate} \item[c)] \ \ $(x_i,f_i)$ is locally shrinking and pre-shrinking. \item[d)] \begin{enumerate} \item[i)] $(x_i,f_i)$ is locally shrinking. \item[ii)] $X^*=\overline{\text{\rm span}}(f_i:i\in\mathbb{N})$. \end{enumerate} \item[e)] \begin{enumerate} \item[i)] $(x_i,f_i)$ is locally shrinking.
\item[ii)] $(e_i^{Min})$ is a shrinking basis of $Z_{Min}$. \end{enumerate} \end{enumerate} \end{thm} \begin{thm}\label{cor.bdd} Let $(x_i,f_i)$ be a frame of a Banach space $X$. Then the following conditions are equivalent. \begin{enumerate} \item[a)] $(x_i,f_i)$ is locally shrinking and for all $x^{**}\in X^{**}$, $\| x^{**}|_{\text{\rm span}(f_i:i\ge n)}\|\to0$ as $n\to\infty$. \item[b)] $(x_i,f_i)$ is locally shrinking, locally boundedly complete and pre-boundedly complete. \item[c)]\begin{enumerate} \item[i)] $(x_i,f_i)$ is locally shrinking and locally boundedly complete. \item[ii)] For every $x^{**}\in X^{**}$, $\sum x^{**}(f_i) x_i$ converges under the topology $\sigma(X, \overline{\textrm{span}(f_i:i\in\mathbb{N})})$. \end{enumerate} \item[d)]\begin{enumerate} \item[i)] $(x_i,f_i)$ is locally shrinking and locally boundedly complete. \item[ii)] $X$ is isomorphic to $\overline{\textrm{span}(f_i:i\in\mathbb{N})}^*$ under the natural canonical map. \end{enumerate} \item[e)]\begin{enumerate} \item[i)] $(x_i,f_i)$ is locally shrinking and locally boundedly complete. \item[ii)] $(e^{Max}_i)$ is a boundedly complete basis of $Z_{Max}$. \end{enumerate} \end{enumerate} \end{thm} For the above main theorems, we need the following results. \begin{prop}\label{pre.pre} \begin{enumerate} \item[(a)] Every frame satisfying (a) of Theorem \ref{shrinking.1} is pre-shrinking. \item[(b)] Every frame satisfying (a) of Theorem \ref{cor.bdd} is pre-boundedly complete. \end{enumerate} \end{prop} \begin{proof} Assume that $(x_i,f_i)$ is a frame of a Banach space $X$. (a) Notice that every normalized block sequence of $(x_i)$ is weakly null if and only if for all $f\in X^*$, $\|f|_{\textrm{span}(x_i:i\ge n)}\|\to 0$, as $n\to\infty$. This easily implies our claim by Proposition \ref{basic.pp1} (b) and (c).
\noindent (b) For $ m\le n$ in $\mathbb{N}$ we have \begin{align} \Big\| \sum_{i=m}^n x^{**}(f_i) x_i\Big\|&= \sup_{f\in B_{X^*}}\Big|\sum_{i=m}^n x^{**}(f_i) f(x_i)\Big|\\ &=\sup_{f\in B_{X^*}}x^{**}\Big(\sum_{i=m}^n f(x_i) f_i\Big)\notag\\ &\le \sup\limits_{ g \in \, \text{\rm span}(f_i:i\ge m), \|g\|\le K }x^{**}(g) = K \, \|x^{**}|_{\text{\rm span}(f_i:i\ge m)}\|,\notag \end{align} where $K$ is the projection constant of $(x_i,f_i)$. \end{proof} \begin{prop}\label{bdd.2} Let $(x_i,f_i)$ be a Schauder frame of a Banach space $X$. Assume that $Z$ is an associated space with an associated basis $(z_i)$ to $(x_i,f_i)$. \begin{enumerate} \item[a)] If $(z_i)$ is shrinking, then $(x_i,f_i)$ is pre-shrinking. \item[b)] If $(z_i)$ is boundedly complete, then $(x_i,f_i)$ is pre-boundedly complete. \end{enumerate} \end{prop} \begin{proof} Assume that $S$ and $T$ are the corresponding associated reconstruction and decomposition operators, respectively. By Proposition \ref{P:2.7a}, $S(z_i)=x_i$ and $T^*(z_i^*)=f_i$ for all $i\in\mathbb{N}$. (a) If $(z_i)$ is shrinking, we have, for any $f\in X^*$, \begin{align} f=T^*S^*(f)=T^*\Big( \sum \langle S^*(f), z_i\rangle z_i^* \Big)= \sum \langle f, S(z_i)\rangle T^*(z_i^*)=\sum f(x_i) f_i, \end{align} which proves our claim. \noindent (b) For any $x^{**}\in X^{**}$ and $m,n\in\mathbb{N}$ with $m\le n$, \begin{align}\label{bdd.3} \Big\|\sum\limits_{i=m}^n x^{**}(f_i)x_i\Big\|&=\Big\|\sum\limits_{i=m}^n x^{**}(T^*(z_i^*))S(z_i)\Big\|=\Big\|S\Big(\sum\limits_{i=m}^n T^{**}(x^{**})(z_i^*)z_i\Big)\Big\| \\ &\leq\|S\|\cdot\Big\|\sum\limits_{i=m}^n T^{**}(x^{**})(z_i^*)z_i\Big\|.\notag \end{align} Since $(z_i)$ is boundedly complete, $\sum\limits_{i=1}^\infty T^{**}(x^{**})(z_i^*)z_i$ converges, and by (\ref{bdd.3}) so does $\sum\limits_{i=1}^\infty x^{**}(f_i)x_i$, which completes the proof. \end{proof} \begin{prop}\label{shr.bdd} Let $(x_i,f_i)$ be a Schauder frame of a Banach space $X$.
\begin{enumerate} \item[a)] Assume that $Z_{Min}$ and $(e_i^{Min})$ are the minimal space and minimal basis with respect to $(x_i,f_i)$, respectively. If $(x_i,f_i)$ satisfies (a) of Theorem \ref{shrinking.1}, then $(e_i^{Min})$ is shrinking. \item[b)] Assume that $Z_{Max}$ is the maximal space with the maximal basis $(e_i^{Max})$ with respect to $(x_i,f_i)$. If $(x_i,f_i)$ satisfies (a) of Theorem \ref{cor.bdd}, then $(e_i^{Max})$ is boundedly complete. \end{enumerate} \end{prop} For the proof of Proposition \ref{shr.bdd}, we will need the following result, which is a slight variation of Lemma 2.10 of \cite{OS}. \begin{lem}\label{L:3.2} Let $X$ be a Banach space, let $(x_i)\subset X\setminus \{0\}$ be a sequence, and let $Z_{Min}$ and $(e^{Min}_i)$ be the associated minimal space and basis, respectively. \begin{enumerate} \item[a)] Let $(y_i)\subset B_{Z_{Min}}$ be a block basis of $(e_i^{Min})$ in $Z_{Min}$. Assume that the sequence $(w_i)=(S_{Min}(y_i))$ is a semi-normalized basic sequence in $X$. Then for $(a_i)\in c_{00}$, $$\Big\|\sum a_i w_i\Big\| \le\Big\|\sum a_i y_i\Big\| \le\Big(\frac{2K}{a}+K\Big) \Big\|\sum a_iw_i\Big\|,$$ where $K$ is the projection constant of $(w_i)$ and $a:=\inf\limits_{i\in\mathbb{N}} \|w_i\|$. \item[b)] If every normalized block sequence of $(x_i)$ is weakly null, then $(e_i^{Min})$ is shrinking. \end{enumerate} \end{lem} \begin{proof} Let $S_{Min}:Z_{Min}\to X$ be defined as in Definition \ref{minimal.def}. \noindent (a) For $i\in\mathbb{N}$, write $$y_i=\sum_{j=k_{i-1}+1}^{k_i} \beta_j^{(i)} e_j^{Min},\text{ with $0=k_0<k_1<k_2\ldots$ and $\beta_j^{(i)}\in\mathbb{R}$, for $i,j\in\mathbb{N}$},$$ and set $$w_i=S_{Min}(y_i)=\sum_{j=k_{i-1}+1}^{k_i} \beta_j^{(i)} x_j.$$ Let $(a_i)\in c_{00}$.
We use the definition of $Z_{Min}$ to find $1\le i_1\le i_2+1$ and $\ell_1\in[k_{i_1-1}+1,k_{i_1}]$ and $\ell_2\in[k_{i_2}+1,k_{i_2+1}]$ in $\mathbb{N}$ so that, when $i_1\le i_2-1$, \begin{align*} \Big\|\sum a_i w_i\Big\|&\le \Big\|\sum a_i y_i\Big\| \text{\ \ (since $\|S_{Min}\|\le 1$)}\\ &=\Big\|a_{i_1} \sum_{j=\ell_1}^{k_{i_1}} \beta_j^{(i_1)} x_j +\sum_{s=i_1+1}^{i_2} a_s w_s+a_{i_2+1}\sum_{j=k_{i_2}+1}^{\ell_2} \beta_j^{(i_2+1)} x_j\Big\|\\ &\le \Big\|a_{i_1} \sum_{j=\ell_1}^{k_{i_1}} \beta_j^{(i_1)} x_j \Big\| + \Big\|\sum_{s=i_1+1}^{i_2} a_s w_s \Big\|+ \Big\|a_{i_2+1}\sum_{j=k_{i_2}+1}^{\ell_2} \beta_j^{(i_2+1)} x_j\Big\|\\ &\le |a_{i_1}|\| y_{i_1}\|+ |a_{i_2+1}|\| y_{i_2+1}\|+K\Big\|\sum a_i w_i\Big\|\\ &\le |a_{i_1}|+ |a_{i_2+1}|+K\Big\|\sum a_i w_i\Big\| \le \Big(\frac{2K}{a}+K\Big)\Big\|\sum a_i w_i\Big\|. \end{align*} The other two cases $i_1=i_2$ and $i_1=i_2+1$ can be obtained in a similar way. \noindent (b) Assume that $(y_i)$ is a normalized block sequence of $(e_i^{Min})$. For $i\in\mathbb{N}$, we write $$y_i=\sum_{j=k_{i-1}+1}^{k_i} a_j e_j^{Min},\text{ with $0=k_0<k_1<k_2\ldots$ and $a_j\in\mathbb{R}$}.$$ Then, by the definition of $S_{Min}$, $(S_{Min}(y_i))$ is a bounded block sequence of $(x_i)$. It is enough to show that $(y_i)$ has a weakly null subsequence. If $\liminf\limits_{i\to\infty}\|S_{Min}(y_i)\|>0$, then our claim follows from (a).
In the case that $\lim\limits_{i\to\infty}\|S_{Min}(y_i)\|=0$, we use the definition of $Z_{Min}$ to find $k_0<m_1\le n_1\le k_1<m_2\le n_2<\ldots$ so that for all $i\in\mathbb{N}$, $1=\|y_i\|=\big\|\sum\limits_{j=m_i}^{n_i} a_j x_j\big\|.$ Thus, by (a), the sequences $(w_i^{(1)})$ and $(w_i^{(2)})$ with $$w_i^{(1)}=\sum_{j=m_i}^{n_i} a_j x_j\text{ and } w_i^{(2)}=S_{Min}(y_i)- \sum_{j=m_i}^{n_i} a_j x_j=\sum_{j=k_{i-1}+1}^{k_i} a_j x_j- \sum_{j=m_i}^{n_i} a_j x_j \, \text{ for $i\in\mathbb{N}$},$$ both can, after passing to a further subsequence, be assumed to be semi-normalized and, by hypothesis, are weakly null, which implies that we can, after passing to a subsequence again, also assume that they are basic. Claim (a) implies that the sequences $(y_i^{(1)})$ and $(y_i^{(2)})$ with $$y_i^{(1)}=\sum_{j=m_i}^{n_i} a_j e_j^{Min}\text{ and } y_i^{(2)}=\sum_{j=k_{i-1}+1}^{k_i} a_j e_j^{Min}- \sum_{j=m_i}^{n_i} a_j e_j^{Min} \, \text{ for $i\in\mathbb{N}$},$$ are weakly null in $Z_{Min}$, which implies that $(y_i)$ is weakly null. \end{proof} \begin{proof}[Proof of Proposition \ref{shr.bdd}] (a) It can be directly obtained by Lemma \ref{L:3.2} (b). \noindent (b) Denote by $(e_i^*)$ the coordinate functionals of $(e_i^{Max})$. Since $(x_i,f_i)$ is boundedly complete, Proposition \ref{associated.maximal.pp1} (c) yields that $Z_{Max}$ is an associated space. Let $T_{Max}: X\to Z_{Max}$ be the associated decomposition operator, and recall that by Proposition \ref{P:2.7a}, $T_{Max}^*(e_i^*)=f_i$, for $i\in\mathbb{N}$. Then for any $(a_i)\in c_{00}$, \begin{equation} \max_{m\le n}\Big\|\sum_{i=m}^n a_i f_i\Big\|=\max_{m\le n}\Big\|\sum_{i=m}^n a_i T_{Max}^*(e_i^*)\Big\|\le \|T_{Max}^*\| \max_{m\le n}\Big\|\sum_{i=m}^n a_i e_i^*\Big\|\le K \|T_{Max}^*\| \Big\|\sum a_i e_i^*\Big\|, \end{equation} where $K$ is the projection constant of $(e_i^*)$.
Moreover, \begin{eqnarray*} \Big\|\sum a_i e_i^*\Big\|&=&\sup_{\stackrel{(b_i)\in c_{00}}{\|\sum b_i e_i^{Max}\|\leq 1}} \Big|\sum a_i b_i\Big|\\ &\leq& \sup_{\stackrel{(b_i)\in c_{00}}{\|\sum b_i e_i^{Max}\|\leq 1}} \quad \sup_{\stackrel{(c_i)\in c_{00}}{\max\limits_{m\le n}\|\sum\limits_{i=m}^n c_i f_i\|\leq \max\limits_{m\le n}\|\sum\limits_{i=m}^n a_i f_i\|}} \Big|\sum c_i b_i\Big| \\ &=&\max\limits_{m\le n}\Big\|\sum\limits_{i=m}^n a_i f_i\Big\| \sup_{\stackrel{(b_i)\in c_{00}}{\|\sum b_i e_i^{Max}\|\leq 1}} \ \ \sup_{\stackrel{(c_i)\in c_{00}}{\max\limits_{m\le n}\|\sum\limits_{i=m}^n c_i f_i\|\leq 1}} \Big|\sum c_i b_i\Big|\\ &=&\max\limits_{m\le n}\Big\|\sum\limits_{i=m}^n a_i f_i\Big\| \sup_{\stackrel{(b_i)\in c_{00}}{\|\sum b_i e_i^{Max}\|\leq 1}}\Big\|\sum b_i e_i^{Max}\Big\| \leq \max\limits_{m\le n}\Big\|\sum\limits_{i=m}^n a_i f_i\Big\|. \end{eqnarray*} Thus, $(e_i^*)$ is equivalent to the minimal basis $(e_i^{Min})$ with respect to $(f_i)\subset X^*$ constructed in Lemma \ref{L:3.2}. By Proposition \ref{P:2.5b}, $(f_i,x_i)$ is a frame for $\overline{\text{span}}(f_i:i\in\mathbb{N})$. Since by assumption $\| x^{**}|_{\text{\rm span}(f_i:i\ge n)}\|\to0$ as $n\to\infty$, every normalized block sequence of $(f_i)$ is weakly null. Therefore Lemma \ref{L:3.2} (b) yields that $(e_i^*)$ is shrinking. Thus, $(e_i^{Max})$ is boundedly complete, which proves our claim. \end{proof} We are now ready to present a proof of our main theorems: \begin{proof}[Proof of Theorem \ref{shrinking.1}] \noindent (a)$\Rightarrow$(b) It is clear that (a) implies (b)(i), while (b)(ii) follows from (a) and the fact that the frame representation (\ref{def.frame.eq1}) implies that every sequence $(u_n)\subset B_X$ for which $\lim\limits_{n\to\infty}f_i(u_n)=0$, whenever $i\in\mathbb{N}$, has a subsequence which is an arbitrarily small perturbation of a block sequence of $(x_i)$ in $B_X$.
\noindent (b)$\Rightarrow$(c) By Proposition \ref{basic.pp1} (c), every $f\in X^*$ can be written as $$f=w^*-\lim\limits_{n\to\infty}\sum_{i=1}^n f(x_i) f_i.$$ If for some $f$, this sum did not converge in norm, we could find a sequence $(u_k)\subset B_X$ and $m_1\le n_1<m_2\le n_2<\ldots$ in $\mathbb{N}$ and $\varepsilon>0$ so that for all $k\in\mathbb{N}$, \begin{equation}\label{E:3.1.1} f\Big(\sum_{i=m_k}^{n_k} f_i(u_k) \, x_i\Big)=\sum_{i=m_k}^{n_k} f(x_i) f_i(u_k)\ge \frac{1}{2}\Big\|\sum_{i=m_k}^{n_k} f(x_i) f_i\Big\| \ge \frac{\varepsilon}{2}.\end{equation} By Proposition \ref{basic.pp1} (b), $(\tilde u_k)\subset K\cdot B_X$, where $\tilde u_k=\sum\limits_{i=m_k}^{n_k} x_i f_i(u_k)$, for $k\in\mathbb{N}$. Thus, $(\tilde u_k)$ is a bounded block sequence of $(x_i)$, which contradicts (b)(ii). \noindent (c)$\Rightarrow$(d) is trivial. \noindent (d)$\Rightarrow$(a) by Proposition \ref{self}. Thus, we have verified (a)$\Leftrightarrow$(b)$\Leftrightarrow$(c)$\Leftrightarrow$(d). \noindent (a)$\Leftrightarrow$(e) by Proposition \ref{shr.bdd} (a). \noindent (e)$\Leftrightarrow$(c) by Proposition \ref{bdd.2} (a). \end{proof} \begin{proof}[Proof of Theorem \ref{cor.bdd}] \noindent (a)$\Rightarrow$(b) by Proposition \ref{pre.pre}. \noindent (b)$\Rightarrow$(c) is trivial. \noindent (c)$\Rightarrow$(d) Let $Y=\overline{\textrm{span}(f_i:i\in\mathbb{N})}$. Define $J: X\rightarrow Y^*$ by $J(x): f \mapsto f(x)$, which is the natural canonical map. Then we have $$\|J(x)\|=\sup\limits_{f\in B_Y} |J(x)f|=\sup\limits_{f\in B_Y} |f(x)|\leq \|x\|,$$ which implies that $J$ is a bounded linear operator. Next we will show that $J$ is bijective. Since, by Proposition \ref{norming}, $Y$ is a norming set of $X$, $J$ is injective. On the other hand, any $y^{*}\in Y^*$ can, by the Hahn-Banach Theorem, be extended to an element $x^{**}\in X^{**}$.
Then by hypothesis, there is an $x\in X$ such that $x=\lim\limits_{n\rightarrow\infty}\sum\limits_{i=1}^n x^{**}(f_i) x_i$ under the topology $\sigma(X, Y)$. Thus, for any $f\in Y$, \begin{equation} J(x)(f)=f(x)=\lim_{n\rightarrow\infty} f\Big(\sum_{i=1}^n x^{**}(f_i) x_i\Big)=\lim_{n\rightarrow\infty}x^{**}\Big(\sum_{i=1}^n f(x_i) f_i\Big)=x^{**}(f), \end{equation} which implies that $J$ is surjective. Then by the Banach Open Mapping Principle, $J$ is an isomorphism from $X$ onto $Y^*$. \noindent (d)$\Rightarrow$(a) Let $x^{**}\in X^{**}$ and put $f^*=x^{**}|_Y\in Y^*$ (i.e., $f^*(f)=x^{**}(f)$ for $f\in Y$). By assumption (d) there is an $x\in X$ so that $f(x)= f^*(f)=x^{**}(f)$ for all $f\in Y$. Thus (a) follows from Proposition \ref{self}. Note that we have now verified the equivalences (a)$\Leftrightarrow$(b)$\Leftrightarrow$(c)$\Leftrightarrow$(d). \noindent (a)$\Rightarrow$(e) by Proposition \ref{shr.bdd}. \noindent (e)$\Rightarrow$(b) by Proposition \ref{bdd.2} (b) and Theorem \ref{associated.maximal.pp1} (c). \end{proof} \begin{ex}\label{Ex:3.3} The following example shows that there is a semi-normalized tight Hilbert frame for $\ell_2$ satisfying (b)(ii) and (d)(ii) in Theorem \ref{shrinking.1} but not condition (b)(i). Choose $c>0$ and $(c_i)\subset(0,1)$ so that \begin{equation}\label{E:3.3.1} c^2+\sum c_i^2=1\text{ and } \sum c_i=\infty. \end{equation} In $\ell_2$ put $x_1=ce_1$ and for $i\in\mathbb{N}$, $$x_{2i}=\frac1{\sqrt2}e_{i+1} +\frac{c_i}{\sqrt2} e_1\text{ and }x_{2i+1}=\frac1{\sqrt2}e_{i+1} -\frac{c_i}{\sqrt2} e_1.$$ It follows for any $x=\sum a_i e_i\in\ell_2$ that \begin{align*} \sum_{i=1}^\infty \langle x_i, x\rangle^2&= c^2 a_1^2+\frac12\sum_{j=2}^\infty \big[(a_j+c_{j-1} a_1)^2+ (a_j-c_{j-1} a_1)^2\big]\\ &= c^2a_1^2+\sum_{j=2}^\infty a_j^2+a_1^2\sum_{j=1}^\infty c_j^2=\|x\|^2. \end{align*} Thus, $(x_i)$ is a tight frame, which implies (b)(ii), (c)(ii) and (d)(ii).
Using the second part of (\ref{E:3.3.1}) we can choose $0=n_0<n_1<n_2<\ldots$ so that $$\lim_{i\to\infty} y_i=e_1, \text{ where }y_i=\sum_{j=n_{i-1}+1}^{n_i}(x_{2j}- x_{2j+1}) \text{ for }i\in\mathbb{N},$$ which implies that (b)(i) is not satisfied. \end{ex} \begin{prop}\label{pre.shr} Let $(x_i,f_i)$ be a Schauder frame of a Banach space $X$. Then the following conditions are equivalent: \begin{enumerate} \item[a)] $(x_i,f_i)$ is a pre-shrinking Schauder frame of $X$. \item[b)] $(f_i,x_i)$ is a pre-boundedly complete Schauder frame of $X^*.$ \item[c)] $(f_i,x_i)$ is a pre-boundedly complete Schauder frame of $\overline{\textrm{span}(f_i:i\in\mathbb{N})}.$ \end{enumerate} \end{prop} \begin{proof} (a)$\Rightarrow$(b) Since $(x_i,f_i)$ is pre-shrinking, $(f_i,x_i)$ is a Schauder frame of $X^*$. For any $x^{***}\in X^{***}$, $x^{***}|_{X}$ is a continuous linear functional on $X$. Then $\sum\limits_{i=1}^\infty x^{***}(x_i)f_i=\sum\limits_{i=1}^\infty x^{***}|_X (x_i)f_i$ converges in $X^*$, which completes the claim. \noindent (b)$\Rightarrow$(c) is trivial. \noindent (c)$\Rightarrow$(a) Let $Y=\overline{\textrm{span}(f_i:i\in\mathbb{N})}$ and let $f\in X^*$. By Proposition \ref{norming}, $X$ can be isomorphically embedded into $Y^*$ under the natural canonical map. By the Hahn-Banach Theorem, extend $f$ to an element in $Y^{**}$; thus, assumption (c) yields that $\sum\limits_{i=1}^\infty f(x_i)f_i$ converges in $Y$. Since this series converges $w^*$ to $f$ by Proposition \ref{basic.pp1}, this completes the proof.\end{proof} \begin{prop}\label{reflexive} Let $(x_i,f_i)$ be a Schauder frame of a Banach space $X$. If $(x_i,f_i)$ is pre-shrinking and pre-boundedly complete, then $X$ is reflexive. \end{prop} \begin{proof} Since $(x_i,f_i)$ is pre-shrinking, we can write every $f\in X^*$ as $f=\sum f(x_i) f_i$. Since $(x_i,f_i)$ is pre-boundedly complete, we can choose for each $x^{**}\in X^{**}$ an $x\in X$ so that $x=\sum x^{**}(f_i) x_i$.
Thus for any $f\in X^*$ $$x^{**}(f)=\sum f(x_i)x^{**}(f_i)=f(x),$$ which proves our claim. \end{proof} \section{Unconditional Schauder Frames} The following result extends a well-known result of James \cite{Ja} on unconditional bases to unconditional frames. \begin{thm}\label{un.bdd} Let $(x_i, f_i)$ be an unconditional and locally shrinking Schauder frame of a Banach space $X$. \begin{enumerate} \item[(a)] If $(x_i,f_i)$ is not pre-boundedly complete, then $X$ contains an isomorphic copy of $c_0.$ \item[(b)] If $(x_i, f_i)$ is not shrinking, then $X$ contains an isomorphic copy of $\ell_1.$ \end{enumerate} \end{thm} Then by Proposition \ref{reflexive} and Theorem \ref{un.bdd}, we obtain Theorem C. For the proof, we need the following lemma. \begin{lem}\label{Basic subseq lem} Let $X$ be a separable Banach space and $(x_i,f_i)\subset X\times X^*$ be a locally shrinking Schauder frame of $X$ with projection constant $K$. Let $Y$ be a finite-dimensional subspace of $X$. Then for every $\varepsilon>0$, there exists $N\in \mathbb{N}$ such that $\|y\|\leq (K+\varepsilon)\|y+ x\|$ whenever $x\in \text{\rm span}(x_i:i\geq N)$ and $y\in Y$. \end{lem} \begin{proof} W.l.o.g. $\varepsilon<1/2$. Let $(y_i)_{i=1}^n$ be an $\frac{\varepsilon}{8K^2}$-net of $S_{Y}$, and $(x^*_i)_{i=1}^n\subset S_{X^*}$ with $x_i^*(y_i)=1$ for $1\le i\le n$. For large enough $k$ it follows that $(\tilde x^*_i)_{i=1}^n$, with $\tilde x^*_i=\sum_{j=1}^k x^*_i(x_j) f_j$, $i=1,2,\ldots, n$, satisfies $$\|\tilde x^*_i\|\le K \ \ (1\le i\le n) \ \ \mbox{ and } \, \max_{1\le i\le n} |\tilde x_i^*(y)|\ge 1-\frac{\varepsilon}{4K} \ \ \mbox{ for all } y\in S_Y.$$ Using our assumption that $(x_i,f_i)$ is locally shrinking we can choose $N\in\mathbb{N}$ so that $\|\tilde x^*_i|_{\text{span}(x_j:j\ge N)}\|\le \frac{\varepsilon}{8K}$ for $i=1,\ldots,n$.
If $y\in Y$ and $x\in \text{\rm span}(x_i:i\geq N)$, then either $\|x\|\ge 2 \|y\|$, in which case $\|y+x\|\ge \|x\|-\|y\|\ge \|y\|$, or $\|x\|\le 2 \|y\|$, and then $$K \,\|y+x\|\ge \max_{i\le n} |\tilde x^*_i(y+x)|\ge \Big(1-\frac{\varepsilon}{4K}\Big)\|y\| -\frac{\varepsilon}{8K} \|x\|\ge \Big(1-\frac{\varepsilon}{2K}\Big)\|y\|\ge \frac{\|y\|}{1+\varepsilon/K}.$$ \end{proof} \begin{cor}\label{Basic subseq} Let $X$ be a separable Banach space and $(x_i,f_i)\subset X\times X^*$ be a locally shrinking Schauder frame of $X$ with projection constant $K$. Then for every normalized block sequence $(u_i)$ of $(x_i)$ and every $\varepsilon >0$, there is a basic subsequence of $(u_i)$ whose basis constant is not larger than $K+\varepsilon$. \end{cor} \begin{proof} Using at each step Lemma \ref{Basic subseq lem}, we can choose a subsequence $(v_i)$ of $(u_i)$ so that for all $N\in\mathbb{N}$ $$\|y+ x\|\geq \frac{\|y\|}{K+\varepsilon} \ \ \mbox{ for all } y\in \text{\rm span}(v_i:i\leq N) \mbox{ and } x\in \text{\rm span}(v_i:i\geq N+1).$$ It follows then that $(v_i)$ is basic and its basis constant does not exceed $K+\varepsilon$. \end{proof} \begin{lem}\label{uncond.block} Assume that $(x_i,f_i)$ is an unconditional and locally shrinking frame for a Banach space $X$. Let $K_u$ be the constant of unconditionality of $(x_i,f_i)$ and let $(u_i)$ be a block basis of $(x_i)$. For any $\varepsilon>0$ there is a subsequence $(v_i)$ of $(u_i)$ which is $(K_u+\varepsilon)$-unconditional. \end{lem} \begin{proof} W.l.o.g. $\|x_n\|=1$, for $n\in\mathbb{N}$; otherwise replace $x_n$ by $x_n/\|x_n\|$ and $f_n$ by $f_n\|x_n\|$. By Corollary \ref{Basic subseq} we can assume that $(u_i)$ is $2K_u$-basic (note that the projection constant of $(x_i,f_i)$ is at most $K_u$). Let $(\delta_i)\subset (0,1)$ with $\sum_{j>i}\delta_j< \delta_i$, $i\in\mathbb{N}$, and $\sum \delta_i< \varepsilon/8K^2_u$.
Then we choose recursively increasing sequences $(n_i)$ and $(k_i)$ in $\mathbb{N}$ so that \begin{align} \label{E1}&|f_s(u_{n_i})|< \frac{\delta_i}{k_{i-1}} \text{whenever $s\le k_{i-1}$},\text{ and }\\ \label{E2}&\Big\|\sum_{s=k_i}^N f_s\Big( \sum_{j=1}^{i} \lambda_j u_{n_j}\Big) x_s\Big\| < \delta_{i+1}\text{ whenever $N\ge k_i$ and $(\lambda_j)_{j=1}^i \subset[-1,1]$}. \end{align} Indeed, assume $k_{i-1}$ was chosen ($k_0=1$). Since $(x_i,f_i)$ is locally shrinking, we can choose $n_{i}$ so that \eqref{E1} is satisfied. Secondly, using the compactness of the set $\{ \sum_{j=1}^i \lambda_j u_{n_j}:(\lambda_j)_{j=1}^i \subset[-1,1]\}$, we can choose $k_i$ so that \eqref{E2} is satisfied. We are given now $(\lambda_i)\subset c_{00}$ with $\max |\lambda_i|=1$ and $(\varepsilon_i)\subset\{-1,1\}$. For $u=\sum \lambda_iu_{n_i}$ and $\overline{u}=\sum \varepsilon_i \lambda_i u_{n_i} $ we compute: \begin{align*} \|\overline{u}\|&=\Big\|\sum_{s=1}^\infty f_s(\overline{u}) x_s\Big\|\\ & =\Big\|\sum_{i=1}^\infty \sum_{s=k_{i-1}}^{k_i-1} f_s(\overline{u}) x_s\Big\|\\ &\le K_u\Big\|\sum_{i=1}^\infty \varepsilon_i \sum_{s=k_{i-1}}^{k_i-1} f_s(\overline{u}) x_s\Big\| \\ &\le K_u\Big\|\sum_{i=1}^\infty \sum_{s=k_{i-1}}^{k_i-1} \lambda_i f_s(u_{n_i}) x_s\Big\| \!+\! K_u\sum_{i=1}^\infty \Big\|\sum_{s=k_{i-1}}^{k_i-1} f_s\Big( \sum_{j=1}^{i-1} \varepsilon_j\lambda_j u_{n_j} \Big) x_s\Big\|\\ & \qquad \!+\!K_u\sum_{i=1}^\infty \sum_{s=k_{i-1}}^{k_i-1} \sum_{j={i+1}}^\infty |f_s(u_{n_j})| \\ & \le K_u\Big\|\sum_{i=1}^\infty \sum_{s=k_{i-1}}^{k_i-1} \lambda_i f_s(u_{n_i}) x_s\Big\| + \frac\varepsilon{8K_u} + K_u \sum_{i=1}^\infty k_i \sum_{j=i+1}^\infty \frac{\delta_j}{k_i}\\ & \le K_u\Big\|\sum_{i=1}^\infty \sum_{s=k_{i-1}}^{k_i-1} \lambda_i f_s(u_{n_i}) x_s\Big\| + \frac{\varepsilon}{4K_u}. 
\end{align*} By switching the role of $u$ and $\overline{u}$, we compute also $$\Big\|\sum_{i=1}^\infty \sum_{s=k_{i-1}}^{k_i-1} \lambda_i f_s(u_{n_i}) x_s\Big\| \le \Big\|\sum_{i=1}^\infty \sum_{s=k_{i-1}}^{k_i-1} f_s(u) x_s\Big\| +\frac{\varepsilon}{4K_u}= \|u\| +\frac{\varepsilon}{4K_u}.$$ Since the basis constant of $(u_i)$ does not exceed $2K_u$, it follows that $\|\overline{u}\|, \|u\|\ge \frac{1}{2K_u}$, and thus $$\|\overline{u}\|\le \|u\|+\frac{\varepsilon}{2K_u}\le( K_u+\varepsilon)\|u\|,$$ which proves our claim. \end{proof} \begin{proof}[Proof of Theorem \ref{un.bdd}.] (a) By assumption, there is some $x_0^{**}\in S_{X^{**}}$ such that $\sum_{i=1}^n x_0^{**}(f_i) x_i$ does not converge. By the Cauchy criterion, there are $\delta>0$ and natural numbers $p_1<q_1<p_2<q_2<\cdots$ such that for $u_j=\sum_{i=p_j}^{q_j} x_0^{**}(f_i) x_i$ we have $\|u_j\|\geq\delta$ for every $j$. By Corollary \ref{Basic subseq}, we can find a basic subsequence $(u_{n_j})$ of $(u_j)$ with basis constant $C>1$. Then for every sequence $(\lambda_j)_{j=1}^m$ of scalars and every $i\in\{1,\ldots,m\}$, we have $ \left\|\sum_{j=1}^m \lambda_j u_{n_j}\right\|\geq\frac{1}{2C}\|\lambda_i u_{n_i}\|\geq \frac{\delta}{2C}|\lambda_i|. $ That is, $\|\sum_{j=1}^m \lambda_j u_{n_j}\|\geq \frac{\delta}{2C}\|(\lambda_j)\|_\infty$. Recall that the unconditional constant of $(x_i,f_i)$ is defined by \begin{equation*}K_u=\sup_{x\in B_X} \sup_{(\varepsilon_i)\subset\{\pm 1\}}\Big\|\sum_{i=1}^\infty \varepsilon_i f_i(x) x_i\Big\|= \sup_{x\in B_X} \sup_{(\lambda_i)\subset[-1,1]}\Big\|\sum_{i=1}^\infty \lambda_i f_i(x) x_i\Big\|<\infty \end{equation*} (the second ``$=$'' follows from a simple convexity argument).
Secondly we compute \begin{align*} \sup_{(\lambda_i)\in c_{00}\cap[-1,1]^\mathbb{N}} \big\| \sum \lambda_i u_i\big\|& =\sup_{(\lambda_i)\in c_{00}\cap[-1,1]^\mathbb{N}} \big\| \sum_i \lambda_i \sum_{s=p_i}^{q_i} x^{**}_0(f_s)\, x_s\big\|\\ &\le \sup_{x^{**}\in B_{X^{**}}} \sup_{(\lambda_s)\in c_{00}\cap[-1,1]^\mathbb{N}} \big\| \sum_s \lambda_s x^{**}(f_s) x_s\big\| \\ &= \sup_{x\in B_{X}} \sup_{(\lambda_s)\in c_{00}\cap[-1,1]^\mathbb{N}} \big\| \sum_s \lambda_s f_s(x) x_s\big\| =K_u. \end{align*} \noindent (b) Since $(x_i, f_i)$ is not shrinking, there exist $f\in S_{X^*}$, a normalized block basis $(u_n)$ of $(x_n)$ and a $\delta>0$, so that $f(u_n)\ge \delta$, for $n\in\mathbb{N}$. Since by Lemma \ref{uncond.block} we can assume that $(u_n)$ is $2K_u$-unconditional, it follows for $(\lambda_i)\in c_{00}$ that $$\Big\|\sum \lambda_i u_i\Big\|\ge \frac{1}{2K_u} \Big\|\sum |\lambda_i| u_i\Big\|\ge \frac{1}{2K_u} f\Big(\sum |\lambda_i| u_i\Big)\ge\frac{\delta}{2K_u}\sum |\lambda_i|. $$ \end{proof} \vskip3mm \noindent\textbf{Acknowledgment.} This paper forms a portion of the author's doctoral dissertation, which is being prepared at Texas A$\&$M University and Nankai University under the direction of Thomas Schlumprecht and Guanggui Ding. The author thanks Professors Schlumprecht and Ding for their invaluable help, guidance and patience.
\section{Features of multifragmentation} \label{section1} When studying heavy-ion collisions at non-relativistic energies, multifragmentation can be observed for the most central ones, in a range of projectile-ion bombarding energies from tens of MeV/A up to a few hundred MeV/A, depending on the properties of the nuclei under consideration. Many issues of this phenomenon are still under discussion, in particular con\-cer\-ning the stage at which it occurs in the evolution of a reaction, e.g. whether a nuclear system undergoing multifragmentation is equilibrated or not, how a simultaneous break-up into multiple fragments can occur, and whether multifragmentation is the result of a phase transition. According to the currently most widely accepted scenario, during the overlapping stage of heavy-ion collisions (typical time $\simeq$ 100 fm/c) matter can undergo compression, leading to large excitation e\-ner\-gies. As a consequence, the blob of nuclear matter starts to expand and can go on expanding down to sub-saturation densities ($\rho$ $\simeq$ 0.1 - 0.3 $\rho_0$, where $\rho_0$ is the normal nuclear matter density) and reach temperatures $\simeq$ 3 - 8 MeV, where it becomes unstable and breaks up into multiple fragments. These conditions are typical of a liquid-gas coexistence region~\cite{bondorf,buyuk}. As already mentioned, one of the open issues is whether equilibration is reached in these reactions. A statistical description of multifragmentation is based on this assumption, whereas a dynamical description is not. Difficulties in coming to a non-controversial conclusion are largely due to the fact that most of the experimental data refer to patterns of particles detected just in the last stage of the reactions, involving channels fed by the sequential decays, which heavily affect and modify the primary fragment distribution.
Multifragmentation can be distinguished from other decay channels on the basis of the excitation energy: a typical scenario at small excitation energies (E $<$ 2 - 3 MeV/A) is characterized by the formation of a compound-like system and by its evolution through binary sequential decays (evaporation/fission), whereas at high excitation energies (E $>$ 3 MeV/A) multifragmentation in a finite volume and a simultaneous break-up into multiple fragments can occur. The excitation energy is indeed related to the mass asymmetry $(A_{proj} - A_{target})$: in the case of symmetric central reactions the compression is responsible for the high excitation energy, whereas in the case of asymmetric reactions only a partial compression can occur and a large part of the excitation energy appears in the form of thermal energy~\cite{singh}. In all cases, multifragmentation is typically driven by the subsequent expansion. Due to these features, multifragmentation, occurring during the expansion phase of the nuclear system formed in an ion-ion (central) collision, allows one to study the nuclear Equation of State (EoS) at subnormal nuclear densities. In particular, it is possible to infer useful information concerning the symmetry energy and its density dependence by investigating the isotopic yield distributions of the emitted fragments. The isoscaling technique, based on the analysis of isotopic yield ratios obtained in reactions with different isospin asymmetry, has been developed with this purpose. After an overview of the models used to study multifragmentation in Section~\ref{section2}, and a brief presentation of the one used in this work in Subsection~\ref{subsection2.1}, examples of its application to the isoscaling technique and to the reconstruction of multifragmenting sources at energies of a few tens of MeV/A are provided in Subsections~\ref{subsection3.1} and~\ref{subsection3.2}, respectively.
Finally, our perspectives on further applications of our model are drawn in Section~\ref{section4}. \section{Models to study multifragmentation} \label{section2} One can distinguish between \begin{itemize} \item Dynamical Models: some of them are 1-body approaches, inspired by the BUU/BNV/Landau-Vlasov transport theory. Alternatively, n-body approaches have been developed, such as QMD/AMD/FMD. n-body approaches are very powerful in the description of the simultaneous break-up of a nuclear system into multiple fragments, since they preserve correlations among nucleons. \item Statistical Models: they assume an equilibrated excited source at freeze-out (thermal equilibrium). Taking into account that the nuclear system undergoes an expansion, leading to decreasing densities, down to subnormal values, the freeze-out~\cite{trautmann} occurs when the mutual nuclear interaction among fragments can be neglected. Statistical models have been worked out both in the grand-canonical framework (see e.g. Ref.~\cite{chauduri}) and in the micro-canonical framework. The most widespread among the latter is the SMM~\cite{botvina,bondorf,gupta} and its modifications ISMM~\cite{tan} and SMM-TF~\cite{souza}. \end{itemize} We emphasize that the onset of multifragmentation according to dynamical models is different from the description of multifragmentation according to statistical models. In fact, in the statistical models a source in thermal equilibrium is assumed to fragment. This means that memory effects concerning how the source has been originated are neglected. On the other hand, in the dynamical models multifragmentation is a fast process: the involved nucleons do not have time to come to equilibrium. Fragments originate from the density fluctuations (nucleon-nucleon correlations) due to collisions in the ion-ion overlapping stage, which survive the expansion phase (memory effects).
The chemical composition of hot fragments is expected to play a role in helping to disentangle the nature (dynamical / statistical) of the multifragmentation mechanism~\cite{milazzo}. Different models reproduce different features of the collisions with different success. A mixed model, inspired by the QMD dynamical approach to describe the fast stage of ion-ion collisions and by a statistical approach to describe the further decay of the multiple primary excited fragments produced by QMD down to their ground state, has been used to obtain the results presented in this work. Due to the crucial role of dynamics, as supported by our results, in the following we mainly concentrate on the description of the dynamical aspects of multifragmentation. \subsection{QMD/AMD approaches} \label{subsection2.1} In these microscopic models a nucleus is considered as a set of mutually interacting nucleons. The propagation of each nucleon occurs according to a classical Hamiltonian with quantum effects~\cite{aiche}. In particular, nucleons are described by gaussian wave packets. Each of them moves under the effect of a potential given by the sum of the contributions of all other nucleons (2-body effects). Furthermore, when two nucleons come very close to each other, they can undergo elastic collisions (nucleon-nucleon stochastic scattering cross-sections) with Pauli blocking. A proper treatment of antisymmetrization is implemented in AMD~\cite{ono}. On the other hand, QMD codes do not provide any antisymmetrization of the nuclear wave-function. An approximate effect can be obtained through the inclusion in the Hamiltonian of a Pauli potential term, or through the implementation of specific constraints. Still open questions in molecular dynamics approaches concern the functional form of the nucleon-nucleon potential (each working group that developed a molecular dynamics code has its preferred choice of terms), the potential parameters and their relation to the nuclear matter EoS.
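To make the Hamiltonian propagation of the wave-packet centroids concrete, the following is a minimal illustrative sketch, not the actual QMD implementation used in this work: the two-body interaction is a toy Gaussian attraction (parameters `v0`, `sigma` are arbitrary placeholders, and the step is non-relativistic), standing in for the full Skyrme-like, symmetry and Pauli terms of a production code.

```python
import numpy as np

def propagate(r, p, dt, mass=938.0, v0=-100.0, sigma=1.5):
    """One velocity-Verlet step for nucleon centroids.

    r, p: (N, 3) arrays of positions (fm) and momenta (MeV/c).
    Illustrative only: a toy Gaussian two-body attraction replaces the
    Skyrme-like potential terms of a real QMD code."""
    def forces(r):
        f = np.zeros_like(r)
        for i in range(len(r)):
            for j in range(len(r)):
                if i == j:
                    continue
                d = r[i] - r[j]
                dist2 = np.dot(d, d)
                # F_i = -grad_i of V(d) = v0 * exp(-|d|^2 / (2 sigma^2))
                f[i] += (v0 / sigma**2) * np.exp(-dist2 / (2 * sigma**2)) * d
        return f
    f_old = forces(r)
    p_half = p + 0.5 * dt * f_old
    r_new = r + dt * p_half / mass       # non-relativistic for simplicity
    p_new = p_half + 0.5 * dt * forces(r_new)
    return r_new, p_new
```

Because the pairwise forces are antisymmetric, the total momentum is conserved step by step, one of the "key quantities" whose conservation is monitored in the actual simulations.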
Nowadays, many groups prefer parameter sets leading to a soft EoS. Nevertheless, there are open questions concerning the symmetry term~\cite{li,baran}. In particular, a stiff dependence for this term means that the symmetry energy always increases with increasing density. On the other hand, a soft dependence means that the symmetry energy decreases at high densities. At present, a stiff dependence seems more reliable than a soft one. Many uncertainties come from the fact that our observations are mainly based on symmetric nuclear matter (N / Z $\simeq$ 1) near normal nuclear density ($\simeq$ 0.16 $\mathrm{fm}^{-3}$), since it is difficult to obtain highly asymmetric nuclear matter in terrestrial laboratories. On the other hand these studies are crucial to understand features of astrophysical objects (such as neutron star formation and structure), where conditions of extreme neutron-proton asymmetry can be present. Other open issues concern the gaussian width, the use of in-medium nucleon-nucleon cross-sections instead of free nucleon-nucleon cross-sections (in QMD the free choice is usually implemented, whereas in AMD the in-medium choice has been implemented), the question of how long the dynamical simulation has to be carried on, and the problem of the development of a fully relativistic approach (on the last point see e.g. Ref.~\cite{mancusi}). A QMD code has been developed by us~\cite{mvg} in Fortran 90. It includes a 3-body repulsive potential and a surface term (attractive at long distances and repulsive at short distances). Pauli blocking is implemented by means of the CoMD constraint~\cite{papa}. Neutrons and protons are fully distinguished by means of a symmetry term and an isospin-dependent nucleon-nucleon stochastic scattering cross-section. The kinematics is relativistic and attention is paid to the conservation of key quantities (total energy/momentum, etc.) in each ion-ion collision.
Simulations are performed by means of our code from the ion-ion overlapping stage up to t $\simeq$ 200 - 300 fm/c (fast stage of the reaction). The description of the de-excitation of the excited fragments present at the end of the fast stage is obtained through the coupling of our QMD with the statistical model taken from the PEANUT module available in the FLUKA Monte Carlo code~\cite{fluka0,fluka1,fluka2,fluka3} in a version for the g95 compiler. Up to now, the QMD + FLUKA interface has been tested in collisions of ions with charge up to Z=86 (radon isotopes), providing interesting results (see e.g. Ref.~\cite{nd2007} and references therein). \section{Results} \subsection{Isospin dependence in fragment production: application of the isoscaling technique} \label{subsection3.1} The isoscaling technique, already mentioned in Section~\ref{section1}, is based on ratios of yields taken in multifragmentation reactions with similar total size, but different isospin asymmetries ($N - Z$) / ($N + Z$)~\cite{tsang, ono2}: \begin{eqnarray} R_{21}(N,Z) = Y_2(N,Z) / Y_1(N,Z) = Const \,\, \exp( A_{coeff} N + B_{coeff} Z ) \, . \label{eq1} \end{eqnarray} The numerator of this formula refers to the yield of a given fragment ($N,Z$) obtained from a neutron-rich nucleus-nucleus reaction system, whereas the denominator refers to the yield of the same fragment from a neutron-poor (more symmetric) reaction at the same energy. $A_{coeff}$ is related to the symmetry energy and is increasingly larger for pairs of reactions with increasingly different isospin composition $N/Z$. In particular, we have considered the neutron-rich systems Ar + Fe ($N/Z$ = 1.18) and Ar + Ni ($N/Z$ = 1.13) with respect to the neutron-poor system Ca + Ni ($N/Z$ = 1.04). These systems have been previously studied, among others, in Ref.~\cite{shetty} (see also Refs.~\cite{shetty2,wuenschel}).
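The extraction of the isoscaling parameter $A_{coeff}$ from Eq.~(\ref{eq1}) amounts to a linear fit of $\ln R_{21}$ versus $N$ at fixed $Z$. The following sketch illustrates the procedure on synthetic yields generated to obey Eq.~(\ref{eq1}) exactly (the values of $A$, $B$, $C$ and the yield table are made up for the demonstration and are not results of this work):

```python
import numpy as np

def isoscaling_slope(yields_1, yields_2):
    """Estimate A_coeff of Eq. (1) from fragment yields of two reactions.

    yields_k: dict mapping (N, Z) -> yield.  For each fixed Z, ln R21 is
    fit linearly in N; the per-Z slopes are then averaged (illustrative)."""
    slopes = []
    for z in sorted({z for (_, z) in yields_1}):
        ns = sorted(n for (n, zz) in yields_1 if zz == z and (n, z) in yields_2)
        if len(ns) < 2:
            continue
        log_r = [np.log(yields_2[(n, z)] / yields_1[(n, z)]) for n in ns]
        slopes.append(np.polyfit(ns, log_r, 1)[0])   # slope = A_coeff at this Z
    return float(np.mean(slopes))

# synthetic yields obeying R21 = C * exp(A*N + B*Z) exactly, with A = 0.3
A, B, C = 0.3, -0.2, 1.0
y1 = {(n, z): 100.0 for z in range(3, 9) for n in range(z - 1, z + 3)}
y2 = {k: y1[k] * C * np.exp(A * k[0] + B * k[1]) for k in y1}
print(round(isoscaling_slope(y1, y2), 3))   # recovers A = 0.3
```

On real (noisy) yields the per-Z slopes differ, which is exactly the sensitivity to the number of fitted isotopes discussed below.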
Isotopic yield ratios for light fragment (Z $\le$ 8) emission have been obtained from our QMD + FLUKA simulations for the reaction pairs Ar + Ni / Ca + Ni and Ar + Fe / Ca + Ni at 45 MeV/A projectile bombarding energy. When plotted in the logarithmic plane, isotopic yield ratios for each fixed $Z$ turn out to be approximately linear, with a slope given by $A_{coeff}$, as expected from Eq.~(\ref{eq1}). As for the isoscaling parameter $A_{coeff}$, our simulations give the following insights: \begin{itemize} \item The results of our analysis are quite sensitive to the number of isotopes included in the linear fit at fixed $Z$ (i.e. to the goodness of the gaussian approximation to the fragment isotopic distribution). \item $A_{coeff}$ differs with $Z$, in agreement with~\cite{tsang}, which claims that isoscaling is observed for a variety of reaction mechanisms, from multifragmentation to evaporation to deep inelastic scattering, with different slopes in the logarithmic plane. \item $A_{coeff}$ is larger for the pair of reactions with the larger difference in isospin compositions ($N_1/Z_1$ - $N_2/Z_2$). \item Our average values $A_{coeff}$ = 0.18 for Ar + Ni / Ca + Ni and $A_{coeff}$ = 0.31 for Ar + Fe / Ca + Ni are larger than the experimental values~\cite{shetty}, but the comparison is not so meaningful, since it is largely affected by the fact that we include fragments emitted in all directions in our preliminary analysis, whereas in the experiment only fragments emitted at 44$^o$ were selected. \item $A_{coeff}$ turns out to be affected by the choice of the impact parameter and decreases significantly when selecting only the most central events. \item $A_{coeff, hot}$ at the end of the overlapping stage can be larger than $A_{coeff}$ at the end of the full simulation by no more than 20\%, at least for the reaction systems under study.
\end{itemize} As far as the emissions at preequilibrium are concerned, our simulations lead to the following results: \begin{itemize} \item For central collisions of Ca + Ni the yield of emitted protons turns out to be larger than the yield of emitted neutrons by 20\%. For central collisions of Ar + Ni and Ar + Fe, on the other hand, the yield of emitted protons turns out to be lower than the yield of emitted neutrons by 10 - 15\%. \item For each of the three systems under study, the fragment asymmetry of the liquid phase $(Z/A)_{liq}$ at the end of the preequilibrium stage turns out to be lower than the corresponding value at $t=0$, in qualitative agreement with the AMD simulations~\cite{shetty}. \item No traces of isospin fractionation appear, which is indeed expected for systems with a higher $N/Z$ content (e.g. $^{60}$Ca + $^{60}$Ca). \end{itemize} The dependence of our results on the projectile bombarding energy is currently under study, by considering the same reactions at different bombarding energies. \subsection{Multifragmenting source reconstruction in Nb + Mg reactions at 30 MeV/A} \label{subsection3.2} Multifragmentation has been observed in Nb + Mg reactions at a 30 MeV/A projectile bombarding energy in an experiment performed at the INDRA detector by the INDRA + CHIMERA collaborations~\cite{manduci}. Event selection has been performed according to experimental cuts on the momentum along the beam axis, $p_{z,det} > 0.6 \, p_{z,tot}$, and on the angular acceptance of the INDRA detector, $4^o < \theta < 176^o$. The selected events have then been assigned to different regions, corresponding to portions of the plane identified by the total transverse energy and the total multiplicity of charged particles detected in each event. Three regions have been singled out this way, as shown in Fig. 2 of Ref.~\cite{manduci}. We have applied the same selection procedure by implementing proper cuts and filters on the simulated events obtained by our QMD + FLUKA.
The selected theoretical events are plotted in Fig.~\ref{nostrafigura1}, which can be directly compared with Fig. 2 of Ref.~\cite{manduci} and turns out to be in good agreement. The events plotted in the T1 region (red) are the least dissipative ones (more peripheral collisions), whereas the events in the T3 region (blue) correspond to more dissipative (central) collisions. \begin{figure}[h!] \begin{center} \includegraphics[width=8cm]{cut2.eps} \caption{Multifragmentation of Nb + Mg at 30 MeV/A: event selection and identification of the different regions T1 (red), T2 (green) and T3 (blue) by our QMD + FLUKA simulations. Each point corresponds to a different ion-ion reaction event in the plane identified by the total multiplicity of detected charged particles and the detected total transverse energy.} \label{nostrafigura1} \end{center} \end{figure} For each of the three regions, average values of interesting quantities have been obtained both in the experiment and in the theoretical simulations. Our results, concerning the transverse energy, the multiplicity of charged particles, and the velocity and charge of the biggest residual, averaged over all events belonging respectively to the T1, T2 and T3 regions, are shown in Table~\ref{tabellagenerale}. As far as the average transverse energy and multiplicity of charged particles are concerned, the results of our simulations turn out to be in good agreement with the experimental data, within the experimental uncertainties, in regions T1 and T2, corresponding respectively to peripheral and semiperipheral collisions, whereas in the case of central collisions the theoretical average transverse energy underestimates the experimental one and the theoretical multiplicity of charged particles slightly overestimates the experimental result.
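The event-selection step described above can be sketched as a simple classification in the (transverse energy, multiplicity) plane. The region boundaries below are hypothetical placeholders; the actual regions are defined graphically in Fig. 2 of Ref.~\cite{manduci}:

```python
def assign_region(e_trans, mult, cuts=((90.0, 9), (140.0, 13))):
    """Assign an event to region T1/T2/T3 from its total transverse energy
    (MeV) and charged-particle multiplicity.

    The cut values are hypothetical placeholders, not the experimental
    boundaries of Ref. [manduci]."""
    (e1, m1), (e2, m2) = cuts
    if e_trans < e1 and mult < m1:
        return "T1"          # least dissipative (peripheral) collisions
    if e_trans < e2 and mult < m2:
        return "T2"
    return "T3"              # most dissipative (central) collisions

events = [(72.9, 8), (120.9, 11), (176.4, 14)]
print([assign_region(e, m) for e, m in events])   # ['T1', 'T2', 'T3']
```

The same classifier is applied unchanged to experimental and simulated events, which is what makes the region-by-region averages of Table~\ref{tabellagenerale} directly comparable.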
On the other hand, as for the properties of the largest residual, the results of the theoretical simulations show good agreement with experimental data especially for the most central collisions, belonging to the T3 region, whereas for the more peripheral ones the theory overestimates the velocity and the charge of the largest residual. These results, considered all together, seem to indicate that in the experiment the interacting nuclei are slightly more stopped than in the simulation. \begin{table}[h!] \begin{center} \caption{Multifragmentation in Nb + Mg at 30 MeV/A: average values of interesting quantities in the regions T1, T2 and T3.} \label{tabellagenerale} \begin{tabular}{p{5cm}cccc} \hline\hline \textbf{Region} & \textbf{$<E_{transv}>$} & \textbf{$<M_{tot}>$} & \textbf{$<v_{maxres}>$} & \textbf{$<Z_{maxres}>$} \\ \hline \multicolumn{5}{l} {Experiment (from Ref.~\cite{manduci}):}\\ \hline T1 & 72.9 (10.1) & 8.1 (0.8) & 6.6 (0.2) & 34.1 (2.3) \\ T2 & 120.9 (10.6) & 11.2 (0.9) & 6.4 (0.2) & 31.9 (2.7) \\ T3 & 176.4 (13.8) & 13.4 (1.0) & 6.3 (0.2) & 30.5 (2.8) \\ \hline \multicolumn{5}{l} {Theoretical simulations (QMD + FLUKA de-exc.):} \\ \hline T1 & 74.8 & 8.4 & 7.0 & 38.8 \\ T2 & 115.8 & 12.0 & 6.6 & 35.7 \\ T3 & 155.1 & 15.3 & 6.2 & 31.1 \\ \hline \hline \end{tabular} \end{center} \end{table} Furthermore, the considered experiment aims at the reconstruction of the properties of the so-called source, the blob of matter formed by compression in the ion-ion overlapping stage, which undergoes multifragmentation. Since the experiment detects final cold fragments, i.e. fragments in their ground state after the de-excitation, a procedure has been established to reconstruct the properties of the source by using the observed properties of the final fragments and emitted protons. In particular, the source is isolated by a selection in parallel velocity of the different fragments (velocity cuts), by considering different cuts in different regions.
The velocity cuts implemented are summarized in Table 2 of Ref.~\cite{manduci}. As far as protons are concerned, the parallel velocity cut is fixed to 3 cm/ns, whereas for increasingly heavier fragments the velocity cuts are fixed to increasing values. The velocities of the emitted fragments are easily obtained from our QMD + FLUKA simulation as well, so it is possible to apply the same procedure for the reconstruction of the source properties also in the case of our simulation. As an example, the velocities of the emitted protons obtained by our simulation for events in each of the three regions are shown in Fig.~\ref{nostrafigura2}, by plotting their perpendicular component $v_{perp}$ vs. their component along the beam axis $v_{par}$. This figure can be compared with Fig. 3 of Ref.~\cite{manduci}. The vertical line in each panel corresponds to the $v_{par}$ cut implemented in the reconstruction of the source. \begin{figure}[ht!] \begin{center} \includegraphics[bb=51 51 410 200, width=15cm]{vel2.eps} \caption{Multifragmentation of Nb + Mg at 30 MeV/A: $v_{perp}$ vs. $v_{par}$ for protons emitted in the regions T1 (left panel), T2 (central panel) and T3 (right panel), respectively, as obtained by our QMD + FLUKA simulations. Each point in each panel corresponds to a different emitted proton. The vertical lines correspond to the velocity cuts implemented for the reconstruction of the multifragmenting sources.} \label{nostrafigura2} \end{center} \end{figure} Since the experiment is able to detect the charge of the emitted fragments but not their mass, the velocity cuts can be directly used just to obtain the charge of the source $Z_s$. To calculate the mass of the source $A_s$ a further hypothesis is needed. The authors of Ref.~\cite{manduci} assume that the source has the same isotopic ratio as the projectile, i.e. $A_s/Z_s = A_{proj}/Z_{proj}$.
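The reconstruction procedure just described can be sketched as follows: sum the charges of the fragments passing their per-Z parallel-velocity cut, then infer the mass from the projectile isotopic ratio. The per-Z cut values below (except the 3 cm/ns proton cut quoted above) are hypothetical placeholders; the experimental ones are listed in Table 2 of Ref.~\cite{manduci}:

```python
def reconstruct_source(fragments, z_proj, a_proj, cuts):
    """Reconstruct the source charge Z_s and mass A_s from final fragments.

    fragments: list of (Z, v_par) pairs with v_par in cm/ns.
    cuts: dict Z -> minimum parallel velocity; values other than the
    3 cm/ns proton cut are placeholders, not the experimental ones."""
    z_s = sum(z for z, v_par in fragments if v_par >= cuts.get(z, 3.0))
    a_s = z_s * a_proj / z_proj   # hypothesis: source N/Z = projectile N/Z
    return z_s, a_s

cuts = {1: 3.0, 2: 3.5, 3: 4.0}   # per-Z velocity cuts (mostly hypothetical)
frags = [(1, 3.2), (1, 2.1), (2, 4.0), (30, 6.3)]
# Nb projectile: Z = 41, A = 93
z_s, a_s = reconstruct_source(frags, z_proj=41, a_proj=93, cuts=cuts)
print(z_s, round(a_s, 1))   # 33 74.9
```

The slow proton (v_par = 2.1 cm/ns) is excluded as not belonging to the source, which is exactly the role of the vertical lines in Fig.~\ref{nostrafigura2}.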
Source properties in the three regions T1, T2 and T3, as reconstructed both from the experiment and from our simulation, are shown in Table~\ref{tabella123}. Since the theoretical model also allows one to simulate the process of source formation in a straightforward way, the properties of the source can be directly obtained before its de-excitation and break-up into multiple fragments, without using an a-posteriori reconstruction based on velocity cuts. If we identify the source with the biggest fragment present at the end of the QMD simulation, we obtain very good agreement with the experimental results of Ref.~\cite{manduci}, especially in regions T1 and T2, even if the experimental results are based on the a-posteriori reconstruction, as can be inferred from Table~\ref{tabella123}. On the other hand, if we use a reconstruction procedure analogous to the one adopted by the authors of Ref.~\cite{manduci}, we overestimate the size of the source, especially for the most central collisions. Finally, as for the average multiplicity of IMF fragments (Z $\ge$ 3) subsequently emitted from the source, we obtain good agreement with the experiment in all regions.
\begin{table}[h] \begin{center} \caption{Multifragmentation in Nb + Mg at 30 MeV/A: reconstruction of the source properties in the regions T1, T2 and T3.} \label{tabella123} \begin{tabular}{p{5cm}ccccc} \hline\hline \multicolumn{6}{l} {Experiment (from Ref.~\cite{manduci}):}\\ \hline \textbf{Region} & \textbf{$<Z_s>$} & \textbf{$<A_s>$} & \textbf{$<M_p>$} & \textbf{$<M_\alpha>$} & \textbf{$<M_{frag}>$}\\ \hline T1 & 40.7 (2.0) & 91.2 (4.7) & 2.0 (0.6) & 1.1 (0.5) & 1.2 (0.4) \\ T2 & 42.8 (2.1) & 96.0 (4.9) & 2.7 (0.7) & 1.8 (0.6) & 1.4 (0.3) \\ T3 & 45.1 (2.0) & 101.3 (4.6) & 3.1 (0.7) & 2.5 (0.7) & 1.6 (0.3) \\ \hline \hline \multicolumn{6}{l} {Theoretical simulation:}\\ \hline \textbf{Region} & \textbf{$<Z_s>$} & \textbf{$<A_s>$} & \textbf{$<M_p>$} & \textbf{$<M_\alpha>$} & \textbf{$<M_{frag}>$}\\ \hline \multicolumn{6}{l} {Source properties reconstruction from final secondary fragments (QMD + FLUKA de-exc.):}\\ \hline T1 & 42.6 & 96.2 & 2.2 & 0.4 & 1.0 \\ T2 & 45.1 & 101.5 & 3.5 & 1.6 & 1.25 \\ T3 & 48.5 & 109.5 & 4.2 & 3.8 & 1.5 \\ \hline \multicolumn{6}{l} {Source properties at the end of the QMD simulation (primary fragments):}\\ \hline T1 & 41.0 & 91.0 & $\,$& $\,$& $\,$ \\ T2 & 43.3 & 96.6 & $\,$& $\,$& $\,$ \\ T3 & 47.5 & 106.1 & $\,$ & $\,$ & $\,$ \\ \hline \hline \end{tabular} \end{center} \end{table} \section{Conclusions and perspectives} \label{section4} The QMD model developed in Milano and coupled to the de-excitation module of the Monte Carlo FLUKA code has been used to study reactions between ions of intermediate mass which exhibit multifragmentation features.
The results presented in this paper are encouraging, and can be further refined by investigating more precisely to what extent the statistical de-excitation process from FLUKA modifies the pattern of primary fragments originated dynamically by QMD, and how the results of the simulation change when the time of the transition from the dynamical description of the nuclear system to a statistical description is modified. Further studies at non-relativistic energies that we are going to perform with our theoretical simulation tool concern: \begin{itemize} \item the isospin distillation effect: it occurs in the multifragmentation of charge-asymmetric systems, and leads to IMF fragments (liquid) more symmetric with respect to the initial matter, and light fragments (gas) more neutron rich. This effect is related to the density dependence of the symmetry energy. \item The bimodality in the probability distribution of the largest fragment as a function of the mass number $A_{max}$ of the largest fragment, as a signature of a phase transition. Experimental data on this effect have been obtained by the CHIMERA collaboration (see e.g. Ref.~\cite{pichon}). \item (Complete) fusion cross-sections (this kind of analysis has already been performed by other groups, e.g. by means of the ImQMD model~\cite{zhao}). \end{itemize} \section*{Acknowledgements} We wish to thank L. Manduci for enlightening comments on the data on the reaction Nb + Mg at 30 MeV/A collected at the INDRA detector. The QMD code developed by us and used in this study is the fruit of a collaboration involving many people over the years. In particular, we would like to mention F. Ballarini, G. Battistoni, F. Cerutti, A. Fass\`o, E. Gadioli, A. Ottolenghi, M. Pelliccioni, L.S. Pinsky and J. Ranft for their support and suggestions. The FLUKA code is under continuous development and maintenance by the FLUKA collaboration and is copyrighted by INFN and CERN.
\section{Introduction} Consider a group of $K$ sensors measuring a common phenomenon, like weather. In this paper, we investigate a communication scenario in which some sensors desire to obtain measurements of the other nodes with the help of some existing relay nodes in the network. In the language of information theory, we can consider measurements of sensors as outputs of discrete memoryless correlated sources and model the communication network as a cooperative relay network in which each node can simultaneously be a transmitter, a relay and a receiver. So the problem can be defined as follows:\par \emph{Given a set of sources $U_{{\mathcal A}}=\{U_{a_j}:a_j\in{\mathcal A}\}$ observed at nodes ${\mathcal A}=\{a_1,\cdots,a_M\}\subseteq{\mathcal V}$ respectively (${\mathcal V}=\{1,\cdots,V\}$ is the set of nodes in the network) and a set of receivers at nodes ${\mathcal D}=\{d_1,\cdots,d_K\}\subseteq{\mathcal V}$ which is not necessarily disjoint from ${\mathcal A}$, what conditions must be satisfied to enable us to reliably multicast $U_{{\mathcal A}}$ to all the nodes in ${\mathcal D}$ over the cooperative relay network?} \par The problem of Slepian-Wolf (SW) coding over multi-user channels has been considered for some special networks. First, in \cite{tuncel}, Tuncel investigated the problem of multicasting a source over a broadcast channel with side information at the receivers. He proposed a joint source-channel coding scheme which achieves \emph{operational separation} between source coding and channel coding in the sense that the source and channel variables are separated. He also proved the optimality of his scheme. In a recent work \cite{gunduz}, this problem was generalized to the problem of lossy multicasting of a source over a broadcast channel with side information. In \cite{babu}, a necessary and sufficient condition for multicasting a set of correlated sources over acyclic Aref networks \cite{aref} was derived.
The problem of multicasting correlated sources over networks has also been studied in the network coding literature \cite{ho,effros}.\par Cooperative relay networks have been widely studied in terms of achievable rate regions for relay networks \cite{xie,kramer2005}, multiple access relay channels \cite{marc} and multi-source, multi-relay and multi-destination networks \cite{xie:2007}. In all the mentioned works, the two main strategies of Cover and El Gamal for relay channels \cite{cover}, namely, \emph{decode and forward} (DF) and \emph{compress and forward} (CF), were generalized to cooperative relay networks. In a more general setting \cite{goldsmith}, G\"{u}nd\"{u}z et al. consider a compound multiple access channel with a relay, in which three transmitters, one of which acts as a relay for the others, want to multicast their messages to two receivers. Several inner bounds to the capacity region of this network were derived using DF, CF and also structured lattice codes. Although finding the capacity of the simple relay channel is a longstanding open problem, an approximation for the Gaussian relay network with multicast demands has recently been found in \cite{aves:isit,aves:phd,aves:sub}. In these works, the authors propose a scheme that uses Wyner-Ziv coding at the relays and a distinguishability argument at the receivers.\par In this paper, we first study the problem of multi-layer Slepian-Wolf coding of multi-component correlated sources, in which each source should encode its components according to a given hierarchy. Using the sub-modularity of the entropy function and a covering lemma, we prove an identity which states that for any point of the SW-region with respect to joint encoding/decoding of the components, there exists a multi-layer SW-coding which achieves it. To the best of our knowledge, this identity is new and we call it the SW-identity.
Then, we propose a \emph{joint Source-Wyner-Ziv encoding/sliding window decoding} scheme for Slepian-Wolf coding over cooperative networks. In this scheme, each node compresses its channel observation using Wyner-Ziv coding and then jointly maps its source observation and compressed channel observation to a channel codeword. For decoding, each receiver uses sliding window decoding with respect to an ordered partition of the other nodes. For each ordered partition, we obtain a set of DMCS which can reliably be multicast over the cooperative relay network. By utilizing the SW-identity, we obtain the union of the sets of all feasible DMCS with respect to all ordered partitions. Our scheme results in \emph{operational separation} between the source and channel coding. In addition, this scheme does not depend on the graph of the network, so the result can easily be applied to any arbitrary network. We show that the sufficient conditions for our scheme are indeed necessary conditions for Slepian-Wolf coding over arbitrary Aref networks and linear finite-field cooperative relay networks. Moreover, we prove the feasibility of multicasting of all DMCS whose Slepian-Wolf region overlaps the cut-set bound within a constant number of bits over a Gaussian cooperative relay network. This establishes a large set of DMCS that belongs to the set of DMCS which can reliably be multicast in the operational separation sense. Note that the model considered in this paper encompasses the model of multiple access channels with correlated sources. So the set of feasible DMCS in the operational separation sense is a subset of all feasible DMCS. We extract an achievable rate region for cooperative relay networks by reducing the sufficient conditions for reliable multicasting. We show that this achievable rate region subsumes some recent achievable rates based on the CF strategy \cite{kramer2005,yassaee}.
In addition, we estimate the capacity region of Gaussian cooperative relay networks within a constant number of bits from the cut-set bound. Our result improves the capacity approximation of Gaussian relay networks given in \cite{aves:sub}.\par The rest of the paper is organized as follows. In Section \ref{sec:2}, we introduce the notations and definitions used in this paper. Section \ref{sec:3} derives necessary conditions for reliable multicasting of DMCS over cooperative networks. Section \ref{sec:4} studies multi-layer Slepian-Wolf coding; in particular, a novel identity related to the entropy function is derived. In Section \ref{sec:5}, we obtain the feasibility constraints which are the main results of the paper. In Sections \ref{sec:6} and \ref{sec:7}, we derive necessary and sufficient conditions for multicasting of DMCS over some classes of semi-deterministic networks and Gaussian cooperative relay networks, respectively. Section \ref{sec:8} employs the results of the previous sections to derive an inner bound and an outer bound for the capacity region of cooperative relay networks. Section \ref{sec:9} concludes the paper. \section{Preliminaries and Definitions}\label{sec:2} \subsection{Notation} We denote discrete random variables with capital letters, e.g., $X$, $Y$, and their realizations with lower case letters $x$, $y$. A random variable $X$ takes values in a set ${\mathcal X}$. We use $|{\mathcal X}|$ to denote the cardinality of a finite discrete set ${\mathcal X}$, and $p_X(x)$ to denote the probability mass function (p.m.f.) of $X$ on ${\mathcal X}$; for brevity we may omit the subscript $X$ when it is obvious from the context. We denote vectors with boldface letters, e.g. $\mathbf{x}$, $\mathbf{y}$. The superscript identifies the number of samples to be included in a given vector, e.g., $X^i=(X_1,\cdots,X_i)$. We use $\mathit{T}_{\epsilon}^n(X)$ to denote the set of $\epsilon$-strongly typical sequences of length $n$, with respect to the p.m.f.
$p_X(x)$ on ${\mathcal X}$. Further, we use $\mathit{T}_{\epsilon}^n(Y|\mathbf{x})$ to denote the set of all $n$-sequences $\mathbf{y}$ such that $(\mathbf{x},\mathbf{y})$ are jointly typical, w.r.t. $p_{XY}(x,y)$. We denote the vectors in the $j$th block by a subscript $[j]$. For a given set ${\mathcal S}$, we use the shortcuts $X_{{\mathcal S}}=\{X_i:i\in{\mathcal S}\}$ and $R_{{\mathcal S}}=\sum_{i\in{\mathcal S}}R_i$. We use ${\mathcal S}\backslash{\mathcal T}$ to denote the set theoretic difference of ${\mathcal S}$ and ${\mathcal T}$. We say that $a_n\stackrel{.}{\le}2^{nb}$, if for each $\epsilon>0$ and sufficiently large $n$, the relation $a_n\le 2^{n(b-\epsilon)}$ holds. \subsection{Sub-modular Function} Let ${\mathcal V}$ be a finite set and $2^{{\mathcal V}}$ be its power set, i.e., the collection of all subsets of ${\mathcal V}$. A function $f:2^{{\mathcal V}}\rightarrow\mathbb{R}$ is called sub-modular, if for each ${\mathcal S},{\mathcal T}\subseteq{\mathcal V}$, \begin{equation} f({\mathcal S}\cap{\mathcal T})+f({\mathcal S}\cup{\mathcal T})\le f({\mathcal S})+f({\mathcal T}) \end{equation} Function $f$ is called super-modular, if $-f$ is sub-modular. Given two sets ${\mathcal S},{\mathcal T}$ and a sub-modular function $f$, we define $f({\mathcal S}|{\mathcal T})\triangleq f({\mathcal S}\cup{\mathcal T})-f({\mathcal T})$.\\ Let $X_{{\mathcal A}}$ be DMCS with distribution $p(x_{{\mathcal A}})$. For each ${\mathcal S}\subseteq{\mathcal A}$, we define the entropy function $h$ as $h({\mathcal S})=H(X_{{\mathcal S}})$, where $H(X)$ denotes the entropy of random variable $X$. It is well known that the entropy function $h$ is a sub-modular function over the set ${\mathcal A}$ \cite{yeoung:book}. The sub-modularity property of the entropy function plays an essential role in the remainder of the paper (in contrast to the non-decreasing property of the entropy, i.e., $h({\mathcal S})\ge h({\mathcal T}),\ \forall {\mathcal T}\subseteq{\mathcal S}$).
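The sub-modularity of the entropy function can be checked numerically on a small example. The sketch below (illustrative only; the joint distribution is an arbitrary choice, not one from this paper) computes $h({\mathcal S})=H(X_{{\mathcal S}})$ for a three-variable source with $X_2 = X_0 \oplus X_1$ and verifies the defining inequality:

```python
from itertools import product
from math import log2

def entropy(pmf, coords):
    """H(X_S): entropy of the marginal on the coordinates in `coords`."""
    marg = {}
    for x, p in pmf.items():
        key = tuple(x[i] for i in coords)
        marg[key] = marg.get(key, 0.0) + p
    return -sum(p * log2(p) for p in marg.values() if p > 0)

# joint pmf of (X0, X1, X2): X0, X1 fair coins, X2 = X0 XOR X1
pmf = {(a, b, a ^ b): 0.25 for a, b in product((0, 1), repeat=2)}

h = lambda S: entropy(pmf, sorted(S))
S, T = {0, 2}, {1, 2}
lhs = h(S & T) + h(S | T)   # h({2}) + h({0,1,2}) = 1 + 2
rhs = h(S) + h(T)           # h({0,2}) + h({1,2}) = 2 + 2
print(lhs <= rhs + 1e-12)   # True: h(S∩T) + h(S∪T) <= h(S) + h(T)
```

Note that this example also shows the inequality can be strict (here $3 < 4$), since $X_0$ and $X_1$ become dependent given $X_2$.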
\subsection{Some Geometry} A polytope is a generalization of a polygon to higher dimensions. A point, a segment, and a polygon are polytopes of dimension $0$, $1$, and $2$, respectively. A polytope of dimension $d\ge 3$ can be considered as a space bounded by a set of polytopes of dimension $d-1$. A boundary polytope of dimension $d-1$ is called a facet. For a given polytope $\mathbf{P}$, a collection of polytopes $\{\mathbf{P}_1,\cdots,\mathbf{P}_n\}$ is called a \emph{closed covering} of $\mathbf{P}$, if $\mathbf{P}=\cup_{i=1}^n \mathbf{P}_i$. \begin{lemma}\label{le:cover} Let $\mathbf{P}$ be a polytope and ${\mathcal F}=\{\mathbf{P}_1,\mathbf{P}_2,\cdots,\mathbf{P}_n\}$ be a collection of polytopes with the same dimension as $\mathbf{P}$. If $\mathbf{P}$ and ${\mathcal F}$ satisfy the following conditions: \begin{enumerate} \item $\forall i:\quad\mathbf{P}_i\subset\mathbf{P}$ \item Each facet of $\mathbf{P}$ is covered by some facets of some polytopes $(\mathbf{P}_{i1},\cdots,\mathbf{P}_{ik})$. \item For each facet of $\mathbf{P}_i$ inside $\mathbf{P}$, there is $\mathbf{P}_j\neq\mathbf{P}_i$ such that $\mathbf{P}_i$ and $\mathbf{P}_j$ have only that facet as the common part. \end{enumerate} \par then ${\mathcal F}$ is a \emph{closed covering} of $\mathbf{P}$. \end{lemma} \begin{proof} The proof is provided in Appendix \ref{app:1}. \end{proof} Lemma \ref{le:cover} provides a powerful tool for dealing with regions that are described by a set of inequalities. \begin{definition} \label{def:majorize} A point $Q=(q_1,\cdots,q_d)$ in $\mathbb{R}^d$ is said to majorize a point $P=(p_1,\cdots,p_d)$, if $q_i\ge p_i$ for all $i$. In addition, a point $Q$ is said to majorize a set ${\mathcal P}$ (denoted by $Q\succ{\mathcal P}$), if there exists a point $X\in{\mathcal P}$ that is majorized by $Q$.
\end{definition} It is easy to show that majorization has the following simple property: \begin{equation} \label{eq:maj-property} Q\succ {\mathcal P}_1\cup{\mathcal P}_2 \quad \Leftrightarrow\quad Q\succ {\mathcal P}_1\ \mbox{or}\ Q\succ {\mathcal P}_2 \end{equation} \begin{definition} \label{def:associate} Let $f$ be a sub-modular function over the set ${\mathcal V}$. The \emph{essential polytope} associated with $f$ is: \begin{equation} \label{eq:esspoly} \mathbf{P}_f=\{\mathbf{x}\in\mathbb{R}^{|{\mathcal V}|}: x_{{\mathcal V}}= f({\mathcal V})\ \mbox{and}\ \forall {\mathcal S}\subset{\mathcal V}, x_{{\mathcal S}}\ge f({\mathcal S}|{\mathcal S}^C)\} \end{equation} where $\mathbf{x}=[x_1,x_2,\cdots,x_{|{\mathcal V}|}]$ and $x_{{\mathcal S}}=\sum_{i\in{\mathcal S}}x_i$. \end{definition} % \par The essential polytope of the sub-modular function $f$ over the set ${\mathcal V}$ is a polytope of dimension $|{\mathcal V}|-1$, which has $2^{|{\mathcal V}|}-2$ facets, each corresponding to the intersection of the hyperplane $x_{{\mathcal T}}=f({\mathcal T}|{\mathcal T}^C)$ with $\mathbf{P}_{f}$ for each non-empty subset ${\mathcal T}\subset{\mathcal V}$. By $\mathbf{F}_{f,{\mathcal T}}$, we denote the facet corresponding to the subset ${\mathcal T}$. Since $g({\mathcal T})=f({\mathcal T}|{\mathcal T}^C)=f({\mathcal V})-f({\mathcal T}^C)$ is a super-modular function, one can easily show that $\mathbf{F}_{f,{\mathcal T}}$ is a non-empty polytope of dimension $|{\mathcal V}|-2$ (see, for example, \cite{polytope}). \begin{lemma} \label{le:facet} The facet $\mathbf{F}_{f,{\mathcal T}}$ of the polytope $\mathbf{P}_f$ can be decomposed into projections of $\mathbf{P}_f$ on $\mathbb{R}^{{\mathcal T}}$ and $\mathbb{R}^{{\mathcal T}^C}$ (in which $\mathbb{R}^{{\mathcal S}}$ stands for the space $\{\mathbf{x}\in\mathbb{R}^{|{\mathcal V}|}:\forall s\in{\mathcal S}^C, x_{s}=0\}$).
More precisely, \begin{equation} \label{eq:app} \mathbf{F}_{f,{\mathcal T}}=\{\mathbf{x}\in\mathbb{R}^{|{\mathcal V}|}:\mathbf{x}_{{\mathcal T}}\in \mathbf{F}_{f,{\mathcal T}}^{(1)}, \mathbf{x}_{{\mathcal T}^C}\in\mathbf{F}_{f,{\mathcal T}}^{(2)}\} \end{equation} where \begin{equation} \label{eq:pface1} \mathbf{F}_{f,{\mathcal T}}^{(1)}=\{\mathbf{x}\in\mathbb{R}^{{\mathcal T}}: x_{{\mathcal T}}=f({\mathcal T}|{\mathcal T}^C),\ \mbox{and}\ \forall{\mathcal S}\subset{\mathcal T}, x_{{\mathcal S}}\ge f({\mathcal S}|{\mathcal S}^C)\} \end{equation} and \begin{equation} \label{eq:pface2} \mathbf{F}_{f,{\mathcal T}}^{(2)}=\{\mathbf{x}\in\mathbb{R}^{{\mathcal T}^C}: x_{{\mathcal T}^C}=f({\mathcal T}^C),\ \mbox{and}\ \forall{\mathcal S}\subset{\mathcal T}^C, x_{{\mathcal S}}\ge f({\mathcal S}|{\mathcal T}^C\backslash{\mathcal S})\}. \end{equation} Moreover, $\mathbf{F}_{f,{\mathcal T}}^{(1)}$ and $\mathbf{F}_{f,{\mathcal T}}^{(2)}$ are the essential polytopes of the functions $f_1:2^{{\mathcal T}}\rightarrow\mathbb{R}$ and $f_2:2^{{\mathcal T}^C}\rightarrow\mathbb{R}$ respectively, where $f_1({\mathcal S})=f({\mathcal S}|{\mathcal T}^C)$ and $f_2({\mathcal S})=f({\mathcal S})$. \end{lemma} \begin{proof} The proof is provided in Appendix \ref{app:2}. \end{proof} \begin{lemma}[\cite{polytope}] \label{le:sum-polytope} Let $f_1$ and $f_2$ be two sub-modular functions defined on a set ${\mathcal V}$. Then, \begin{equation} \mathbf{P}_{f_1+f_2}=\mathbf{P}_{f_1}+\mathbf{P}_{f_2} \end{equation} where the sum of two sets is defined as ${\mathcal X}+{\mathcal Y}=\{x+y:x\in{\mathcal X},y\in{\mathcal Y}\}$. \end{lemma} \subsection{System Model} A cooperative relay network is a discrete memoryless network with $V$ nodes ${\mathcal V} =\{1, 2,\cdots, V\}$, and a channel of the form \[ ({\mathcal X}_1,{\mathcal X}_2,\cdots,{\mathcal X}_V,p(y_1,y_2,\cdots,y_V|x_1,x_2,\cdots,x_V),{\mathcal Y}_1,{\mathcal Y}_2,\cdots,{\mathcal Y}_V). 
\] At each time $t = 1, 2,\cdots$, every node $v\in{\mathcal V}$ sends an input $X_{v,t}\in{\mathcal X}_v$, and receives an output $Y_{v,t}\in{\mathcal Y}_v$, which are related via $p(Y_{1,t},\cdots,Y_{V,t}|X_{1,t},\cdots,X_{V,t})$. \begin{definition}[Reliable multicasting of correlated sources over cooperative networks] \label{def:sw} Let ${\mathcal A}$ and ${\mathcal D}$ be two subsets of ${\mathcal V}$ corresponding to the sets of the sources and the destinations, respectively. We say that the set of DMCS, $U_{\mathcal A}$, can reliably be multicast over a discrete memoryless cooperative network to all nodes in ${\mathcal D}$, if there exists a sequence of pairs of positive integers $(s_n,r_n)$ such that $s_n\rightarrow\infty,\ r_n\rightarrow\infty,\ \dfrac{r_n}{s_n}\rightarrow 1$ as $n\rightarrow\infty$ and a sequence of encoding functions \[ f_{v,t}^{(s_n)}:{\mathcal U}_v^{s_n}\times{\mathcal Y}_v^{t-1}\rightarrow{\mathcal X}_v\quad \mbox{for}\quad t=1,\cdots,r_n\] at all nodes $v\in{\mathcal V}$, where, for the non-source nodes, we let ${\mathcal U}_v=\emptyset$, and a set of decoding functions defined at each node $d_i\in{\mathcal D}$: \[g_{d_i}^{(s_n,r_n)}:{\mathcal U}_{d_i}^{s_n}\times{\mathcal Y}_{d_i}^{r_n}\rightarrow{\mathcal U}_{{\mathcal A}}^{s_n}\] such that the probability of error \[ P_{e,d_i}^{(s_n,r_n)}=\Pr\left(g_{d_i}^{(s_n,r_n)}(U_{d_i}^{s_n},Y_{d_i}^{r_n})\neq U_{{\mathcal A}}^{s_n}\right) \] vanishes for all $d_i\in{\mathcal D}$ as $n$ goes to infinity.
\end{definition} According to Definition \ref{def:sw}, the joint probability distribution of the random variables factors as \begin{equation} p(\mathbf{u}_{{\mathcal A}},\mathbf{x}_{{\mathcal V}},\mathbf{y}_{{\mathcal V}})=\prod_{j=1}^{s_n} p(u_{{\mathcal A},j})\prod_{t=1}^{r_n}\prod_{v=1}^V p(x_{v,t}|y_v^{t-1},\mathbf{u}_v)p(y_{{\mathcal V},t}|x_{{\mathcal V},t}) \end{equation} \begin{remark} The network model described in Definition \ref{def:sw} includes several network models such as the MAC with feedback, relay networks, and multi-way channels (i.e., a generalization of the two-way channel). \end{remark} \section{Cut-set type necessary conditions for reliable multicasting}\label{sec:3} In this section, we prove necessary conditions for reliable multicasting of correlated sources over a cooperative network. \begin{proposition} \label{pro:ob} A set of DMCS $U_{{\mathcal A}}$ can reliably be multicast over a cooperative network, only if there exists a joint p.m.f. $p(x_{{\mathcal V}})$ such that \begin{equation} \label{eq:sw2} H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})< \min_{d_i\in{\mathcal D}\backslash{\mathcal S}}\min_{{\mathcal V}\supseteq{\mathcal W}\supseteq{\mathcal S}:\atop d_i\in{\mathcal W}^C} I(X_{{\mathcal W}};Y_{{\mathcal W}^C}|X_{{\mathcal W}^C}) \end{equation} \end{proposition} \begin{proof} Using Fano's inequality and imposing the condition $P_{e,d_i}^{(s_n,r_n)}\rightarrow 0$ as $n\rightarrow\infty$, it follows that \begin{equation} \label{eq:ob}\forall {\mathcal S}\subseteq{\mathcal V}, d_i\in{\mathcal D}\backslash{\mathcal S}: \frac{1}{s_n}H(U_{{\mathcal A}}^{s_n}|Y_{d_i}^{r_n},U_{d_i}^{s_n})\leq\epsilon_n \end{equation} with $\epsilon_n\rightarrow 0$ as $n\rightarrow\infty$. We also have $\frac{1}{s_n}H(U_{{\mathcal S}}^{s_n}|U_{{\mathcal A}\backslash{\mathcal S}}^{s_n}Y_{d_i}^{r_n}U_{d_i}^{s_n})\leq\epsilon_n$.
For each $({\mathcal W},d_i)$ such that ${\mathcal S}\subseteq{\mathcal W}\subseteq{\mathcal V}$ and $d_i\in{\mathcal W}^C$, we have: \begin{align} H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})&=\frac{1}{s_n}H(U_{{\mathcal S}}^{s_n}|U_{{\mathcal A}\backslash{\mathcal S}}^{s_n}) \label{fa:1}\\ &=\frac{1}{s_n}(I(U_{{\mathcal S}}^{s_n};Y_{d_i}^{r_n}|U_{{\mathcal A}\backslash{\mathcal S}}^{s_n})+H(U_{{\mathcal S}}^{s_n}|U_{{\mathcal A}\backslash{\mathcal S}}^{s_n}Y_{d_i}^{r_n})) \label{fa:2}\\ &\leq\frac{1}{s_n}I(U_{{\mathcal S}}^{s_n};Y_{{\mathcal W}^C}^{r_n}|U_{{\mathcal A}\backslash{\mathcal S}}^{s_n})+\epsilon_n \label{fa:3}\\ &=\frac{1}{s_n}\sum_{i=1}^{r_n} I(U_{{\mathcal S}}^{s_n};Y_{{\mathcal W}^C,i}|U_{{\mathcal A}\backslash{\mathcal S}}^{s_n}Y_{{\mathcal W}^C}^{i-1}X_{{\mathcal W}^C,i})+\epsilon_n \label{fa:4}\\ &=\frac{1}{s_n}\sum_{i=1}^{r_n}H(Y_{{\mathcal W}^C,i}|U_{{\mathcal A}\backslash{\mathcal S}}^{s_n}Y_{{\mathcal W}^C}^{i-1}X_{{\mathcal W}^C,i})- H(Y_{{\mathcal W}^C,i}|U_{{\mathcal A}}^{s_n}Y_{{\mathcal W}^C}^{i-1}X_{{\mathcal W}^C,i})+\epsilon_n \label{fa:5}\\ &\leq\frac{1}{s_n}\sum_{i=1}^{r_n}H(Y_{{\mathcal W}^C,i}|X_{{\mathcal W}^C,i})-H(Y_{{\mathcal W}^C,i}|U_{{\mathcal A}}^{s_n}Y_{{\mathcal V}}^{i-1}X_{{\mathcal V},i})+\epsilon_n \label{fa:6}\\ &=\frac{1}{s_n}\sum_{i=1}^{r_n}I(X_{{\mathcal W},i};Y_{{\mathcal W}^C,i}|X_{{\mathcal W}^C,i})+\epsilon_n \label{fa:7}\\ &=\frac{r_n}{s_n} I(X_{{\mathcal W},Q};Y_{{\mathcal W}^C,Q}|X_{{\mathcal W}^C,Q},Q)+\epsilon_n \label{fa:8}\\ &\leq\frac{r_n}{s_n}I(X_{{\mathcal W},Q};Y_{{\mathcal W}^C,Q}|X_{{\mathcal W}^C,Q})+\epsilon_n \label{fa:9}\\ &\rightarrow I(X_{{\mathcal W}};Y_{{\mathcal W}^C}|X_{{\mathcal W}^C}) \label{fa:10} \end{align} \noindent where \eqref{fa:4} follows from the fact that $X_{{\mathcal W}^C,i}$ is a function of $(Y_{{\mathcal W}^C}^{i-1},U_{{\mathcal W}^C\cap{\mathcal A}}^{s_n})$ and the fact that ${\mathcal W}^C\cap{\mathcal A}\subseteq{\mathcal A}\backslash{\mathcal S}$, 
\eqref{fa:6} follows since conditioning reduces entropy, \eqref{fa:7} follows because $(U_{{\mathcal A}}^{s_n},Y_{{\mathcal V}}^{i-1})-X_{{\mathcal V},i}-Y_{{\mathcal V},i}$ form a Markov chain, \eqref{fa:8} is obtained by introducing a time-sharing random variable $Q$ which is uniformly distributed over the set $\{1,2,\cdots,r_n\}$ and is independent of everything else, \eqref{fa:10} follows by allowing $s_n,r_n\rightarrow\infty$ with $\frac{r_n}{s_n}\rightarrow 1$ and defining $Y_{{\mathcal V}}\triangleq Y_{{\mathcal V},Q}$ and $X_{{\mathcal V}}\triangleq X_{{\mathcal V},Q}$. \end{proof} \section{Multi-Layer Slepian-Wolf Coding}\label{sec:4} Before describing our scheme and the related results, in this section, we deal with the problem of \emph{multi-layer Slepian-Wolf coding} (ML-SW). The study of ML-SW provides a new tool for analyzing the main problem. In previous works (for example, \cite{mlsw}, \cite{mlsw1}), ML-SW is used to describe a source by several smaller components (for example, by a binary representation of it) and then to successively encode these components with SW-coding, instead of encoding the whole source at once. For example, if we describe an i.i.d. source $S$ by $(X,Y)$, i.e., $S=(X,Y)$, instead of encoding $S$ by $R=H(S)$ bits/symbol, we can first describe $X$ by $R_X=H(X)$ bits/symbol and then apply SW-coding to describe $Y$ by $R_Y=H(Y|X)$ bits/symbol, assuming that the receiver knows $X$, from decoding the previous-layer information, as side information. Since the total number of bits required to describe $S$ in two layers is $R_X+R_Y=H(X,Y)=H(S)$, it follows that there is no loss in the two-layer SW-coding compared with jointly encoding the source components. A natural question is: \emph{How can this result be generalized to a more general setting of multi-terminal SW-coding}? \begin{figure}[t] \centering \includegraphics[scale=.8]{2L} \caption{Two-Layer Slepian-Wolf coding for a pair of two-component correlated sources.
This coding is suboptimal in the sense that it does not achieve the entire Slepian-Wolf region. } \label{fig:2L} \end{figure} First, let us look at two-terminal SW-coding. Suppose two sources $S_1=(X_1,Y_1)$ and $S_2=(X_2,Y_2)$ are given. Joint SW-coding yields that lossless description of $(S_1,S_2)$ with rates $(R_1,R_2)$ is feasible, provided that $(R_1,R_2)\in\{(r_1,r_2):r_1\ge H(X_1Y_1|X_2Y_2),r_2\ge H(X_2Y_2|X_1Y_1), r_1+r_2\ge H(X_1X_2Y_1Y_2)\}$. Now consider the following simple ML-SW scheme. Assume that in the first layer, $X_1$ and $X_2$ are encoded by SW-coding with rates $(R_{11},R_{21})$, and in the next layer, $Y_1$ and $Y_2$ are encoded by SW-coding with rates $(R_{12},R_{22})$, assuming that the receiver knows $(X_1,X_2)$ from decoding the previous-layer information (see Fig. \ref{fig:2L}). The lossless description of $(S_1,S_2)$ in this manner is possible if: \begin{align*} R_1=R_{11}+R_{12}&\ge H(X_1|X_2)+H(Y_1|X_1X_2Y_2)\ge H(X_1Y_1|X_2Y_2)\\ R_2=R_{21}+R_{22}&\ge H(X_2|X_1)+H(Y_2|X_1X_2Y_1)\ge H(X_2Y_2|X_1Y_1)\\ R_1+R_2&\ge H(X_1X_2)+H(Y_1Y_2|X_1X_2)=H(X_1X_2Y_1Y_2) \end{align*} \begin{figure}[t] \centering \includegraphics[scale=.7]{SW-region} \caption{Slepian-Wolf rate region vs. the rate regions of two- and three-layer Slepian-Wolf coding. Segment $AC$ corresponds to a three-layer SW-coding, in which $X_2$ is encoded in the first layer; then, in the second layer, $(Y_2,X_1)$ is encoded assuming that $X_2$ has already been decoded at the receiver; and in the third layer, $Y_1$ is encoded assuming that $(X_2,Y_2,X_1)$ is already available at the receiver. Segment $CD$ corresponds to the two-layer SW-coding of Fig. \ref{fig:2L}. Segment $DB$ is obtained from a three-layer SW-coding similar to that of segment $AC$. Notice that each corner point of any multi-layer SW-coding that lies inside the SW-region coincides with a corner point of another multi-layer SW-coding.
} \label{fig:SW} \end{figure} This shows that this simple layering cannot achieve all the points in the SW-region; in particular, the corner points $A=(H(X_1Y_1|X_2Y_2),H(X_2Y_2))$ and $B=(H(X_1Y_1),H(X_2Y_2|X_1Y_1))$ cannot be achieved by this scheme (see Fig. \ref{fig:SW}). However, the point $A$ can be achieved by successive SW-coding of $X_2$, $Y_2$, $X_1$, and $Y_1$, given that the previously encoded sources are available at the receiver. This method suggests that instead of dividing the SW-coding into two layers, SW-coding can be performed in three layers: in the first layer, $X_2$ is described for the receiver with rate $R_{21}\ge H(X_2)$; in the second layer, $(Y_2,X_1)$ are encoded by SW-coding in the presence of $X_2$ at the receiver; and finally, in the last layer, $Y_1$ is described using SW-coding, assuming $(X_2,Y_2,X_1)$ are available to the receiver. Analyzing this strategy yields that $(R_1,R_2)$ is achievable if \begin{align*} R_1=R_{11}+R_{12}&\ge H(X_1|X_2Y_2)+H(Y_1|X_1X_2Y_2)= H(X_1Y_1|X_2Y_2)\\ R_2=R_{21}+R_{22}&\ge H(X_2)+H(Y_2|X_1X_2)\ge H(X_2Y_2|X_1Y_1)\\ R_1+R_2&\ge H(X_2)+H(X_1Y_2|X_2)+H(Y_1|X_2Y_2X_1)=H(X_1X_2Y_1Y_2) \end{align*} This strategy achieves the corner point $A$, but not the corner point $B$. In addition, as can be seen in Fig. \ref{fig:SW}, the other corner point of this scheme ($C$) coincides with one of the corner points of the two-layer scheme. By symmetry, the corner point $B$ is achieved by a three-layer scheme in which $X_1$, $(X_2,Y_1)$, and $Y_2$ are encoded in the first, second, and third layers, respectively. In addition, as can be seen in Fig. \ref{fig:SW}, the union of the regions of the three different layering schemes is a closed covering of the SW-region. Note that in all three schemes, there is a hierarchy in the sense that the first component of each source (i.e., $X_i$) is encoded prior to the second component of it (i.e., $Y_i$).
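The corner-point discussion above can be illustrated numerically. The following Python sketch (illustrative only; all identifiers are ours) draws a generic joint p.m.f. on $(X_1,Y_1,X_2,Y_2)$ and compares the minimal $R_1$ of the two-layer and three-layer schemes with the first coordinate of the corner point $A$: the two-layer scheme falls short of $A$ by exactly $I(X_1;Y_2|X_2)$, while the three-layer scheme attains it by the chain rule.

```python
import itertools
import math
import random

# Coordinate indices of the four binary variables: X1, Y1, X2, Y2.
X1, Y1, X2, Y2 = 0, 1, 2, 3

def H(joint, subset):
    """Joint entropy of the variables whose coordinate indices are in `subset`."""
    marginal = {}
    for outcome, p in joint.items():
        key = tuple(outcome[i] for i in subset)
        marginal[key] = marginal.get(key, 0.0) + p
    return -sum(p * math.log2(p) for p in marginal.values() if p > 0)

def Hc(joint, a, b):
    """Conditional entropy H(X_a | X_b) = H(X_{a ∪ b}) - H(X_b)."""
    return H(joint, tuple(set(a) | set(b))) - H(joint, b)

# A generic (randomly drawn) joint pmf on (X1, Y1, X2, Y2).
random.seed(1)
w = [random.random() for _ in range(16)]
joint = {o: v / sum(w) for o, v in zip(itertools.product([0, 1], repeat=4), w)}

# Corner point A = (H(X1 Y1 | X2 Y2), H(X2 Y2)) of the SW-region.
A = (Hc(joint, (X1, Y1), (X2, Y2)), H(joint, (X2, Y2)))

# Two-layer scheme: minimal R1 = H(X1 | X2) + H(Y1 | X1 X2 Y2).
two_layer_R1 = Hc(joint, (X1,), (X2,)) + Hc(joint, (Y1,), (X1, X2, Y2))

# Three-layer scheme (X2, then (Y2, X1), then Y1):
# minimal R1 = H(X1 | X2 Y2) + H(Y1 | X1 X2 Y2) = H(X1 Y1 | X2 Y2).
three_layer_R1 = Hc(joint, (X1,), (X2, Y2)) + Hc(joint, (Y1,), (X1, X2, Y2))

# Two-layer gap = I(X1; Y2 | X2) > 0 for a generic source; three-layer gap = 0.
print(two_layer_R1 - A[0] > 1e-9, abs(three_layer_R1 - A[0]) < 1e-9)
```

The strictly positive two-layer gap is exactly why the corner point $A$ forces the finer, three-layer ordering of the components.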
The result of the two-terminal SW-coding suggests that to obtain the entire SW-region of a multi-component DMCS, it suffices to consider all possible layering schemes such that a given hierarchy on each source is satisfied. \begin{definition} An ordered partition $\mathbf{C}$ of a set ${\mathcal V}$ is a sequence $[{\mathcal L}_1,{\mathcal L}_2,\cdots,{\mathcal L}_K]$ of subsets of ${\mathcal V}$, with union ${\mathcal V}$, which are non-empty and pairwise disjoint. Denote the family of all ordered partitions of a given set ${\mathcal V}$ by ${\mathcal F}_{{\mathcal V}}$. \end{definition} \par Consider a DMCS $S_{{\mathcal V}}$ with two-component sources, i.e., $S_v=(X_v,Y_v)$. Now we describe ML-SW with respect to a given ordered partition $\mathbf{C}=[{\mathcal L}_1,\cdots,{\mathcal L}_K]$. In addition, we assume that the decoder has access to side information $Z$, which is correlated with $(X_{{\mathcal V}},Y_{{\mathcal V}})$ according to an arbitrary distribution $p(x_{{\mathcal V}},y_{{\mathcal V}},z)$. \begin{enumerate} \item In the first layer, using SW-coding, $X_{{\mathcal L}_1}^n$ is encoded with rates $R_1=(R_{11},R_{12},\cdots,R_{1V})$, in which, for $v\notin{\mathcal L}_1$, we set $R_{1v}=0$.
The receiver can reliably decode $X^n_{{\mathcal L}_1}$ provided that \begin{equation} \label{eq:sw-l1} \forall{\mathcal S}\subseteq{\mathcal L}_1: R_{1{\mathcal S}}\ge H(X_{{\mathcal S}}|X_{{\mathcal L}_1\backslash{\mathcal S}}Z) \end{equation} Define the function $h_{\mathbf{C},1}:2^{{\mathcal V}}\rightarrow \mathbb{R}$ as \[ h_{\mathbf{C},1}({\mathcal S})=H(X_{{\mathcal S}\cap{\mathcal L}_1}|Z) \] Now, using the sub-modularity of the entropy function, we have \begin{align} h_{\mathbf{C},1}({\mathcal S}\cap{\mathcal T})+h_{\mathbf{C},1}({\mathcal S}\cup{\mathcal T})&=H(X_{{\mathcal S}\cap{\mathcal T}\cap{\mathcal L}_1}| Z)+H(X_{({\mathcal S}\cup{\mathcal T})\cap{\mathcal L}_1}|Z)\nonumber\\ &=H(X_{({\mathcal S}\cap{\mathcal L}_1)\cap({\mathcal T}\cap{\mathcal L}_1)}|Z)+H(X_{({\mathcal S}\cap{\mathcal L}_1)\cup({\mathcal T}\cap{\mathcal L}_1)}|Z)\nonumber\\ &\le H(X_{{\mathcal S}\cap{\mathcal L}_1}|Z)+H(X_{{\mathcal T}\cap{\mathcal L}_1}|Z)\nonumber\\ &=h_{\mathbf{C},1}({\mathcal S})+h_{\mathbf{C},1}({\mathcal T})\label{eq:sub-h} \end{align} Hence $h_{\mathbf{C},1}$ is sub-modular. In addition, we have: $h_{\mathbf{C},1}({\mathcal S}|{\mathcal S}^C)=H(X_{{\mathcal V}\cap{\mathcal L}_1}|Z)-H(X_{{\mathcal S}^C\cap{\mathcal L}_1}|Z)=H(X_{{\mathcal S}\cap{\mathcal L}_1}|X_{{\mathcal L}_1\backslash{\mathcal S}},Z)$. Note that $R_{1{\mathcal S}}=R_{1{\mathcal S}\cap{\mathcal L}_1}$; thus, \eqref{eq:sw-l1} is equivalent to \begin{equation} \label{eq:sw-eq1} \forall{\mathcal S}\subseteq{\mathcal V}: R_{1{\mathcal S}}\ge h_{\mathbf{C},1}({\mathcal S}|{\mathcal S}^C) \end{equation} Now it follows from Definition \ref{def:associate} that $R_1$ is contained in the SW-region of the first layer iff it majorizes the essential polytope of $h_{\mathbf{C},1}$, i.e., $R_1\succ \mathbf{P}_{h_{\mathbf{C},1}}$.
\item In layer $i$, for $2\le i\le K+1$, assuming that $(X^n_{{\mathcal L}^i},Y^n_{{\mathcal L}^{i-1}})$ has been decoded at the receiver from the previous layers (where ${\mathcal L}^i=\cup_{k=1}^{i-1} {\mathcal L}_k$), using SW-coding, $(X_{{\mathcal L}_i}^n,Y^n_{{\mathcal L}_{i-1}})$ is encoded with rates $R_i=(R_{i1},R_{i2},\cdots,R_{iV})$, in which, for $v\notin{\mathcal L}_{i-1}\cup{\mathcal L}_i$, we set $R_{iv}=0$. The receiver can reliably decode $(X_{{\mathcal L}_i}^n,Y^n_{{\mathcal L}_{i-1}})$ provided that \begin{equation} \label{eq:sw-li} \forall{\mathcal S}\subseteq{\mathcal L}_{i-1}\cup{\mathcal L}_i: R_{i{\mathcal S}}\ge H(X_{{\mathcal S}\cap{\mathcal L}_i}Y_{{\mathcal S}\cap{\mathcal L}_{i-1}}|X_{{\mathcal L}_i\backslash{\mathcal S}}Y_{{\mathcal L}_{i-1}\backslash{\mathcal S}}X_{{\mathcal L}^i}Y_{{\mathcal L}^{i-1}}Z) \end{equation} Define the function $h_{\mathbf{C},i}:2^{{\mathcal V}}\rightarrow \mathbb{R}$ as follows: \[ h_{\mathbf{C},i}({\mathcal S})=H(X_{{\mathcal S}\cap{\mathcal L}_i}Y_{{\mathcal S}\cap{\mathcal L}_{i-1}}|X_{{\mathcal L}^i}Y_{{\mathcal L}^{i-1}}Z) \] Now, in a manner similar to \eqref{eq:sub-h}, it can be shown that $h_{\mathbf{C},i}$ is sub-modular. Following steps similar to those described in the previous stage, we conclude that $R_i$ is contained in the SW-region of layer $i$ iff it majorizes the essential polytope of $h_{\mathbf{C},i}$, i.e., $R_i\succ \mathbf{P}_{h_{\mathbf{C},i}}$. \end{enumerate} Define $R\triangleq\sum_{k=1}^{K+1}R_k$ (which is the overall rate vector) and $h_{\mathbf{C}}\triangleq \sum_{k=1}^{K+1}h_{\mathbf{C},k}$. We showed that $R\succ \mathbf{P}_{h_{\mathbf{C}}}$. Conversely, suppose that the point $R$ majorizes $\mathbf{P}_{h_{\mathbf{C}}}$; then there is a point $R^*\in\mathbf{P}_{h_{\mathbf{C}}}$ such that $R\succ R^*$. Applying Lemma \ref{le:sum-polytope} to $(h_{\mathbf{C},k}:1\le k\le K+1)$, we have $\mathbf{P}_{h_{\mathbf{C}}}=\sum_{k=1}^{K+1}\mathbf{P}_{h_{\mathbf{C},k}}$.
Hence, there are points $(R^*_k\in\mathbf{P}_{h_{\mathbf{C},k}}:1\le k\le K+1)$ such that $R^*=\sum_{k=1}^{K+1}R^*_k$. Let $R_k=R^*_k+\frac{\Delta R}{K+1}$, where $\Delta R=R-R^*$. Now we have $R=\sum_{k=1}^{K+1}R_k$ and, for all $k$, $R_k\succ \mathbf{P}_{h_{\mathbf{C},k}}$. Thus, each rate vector $R$ satisfying $R\succ\mathbf{P}_{h_{\mathbf{C}}}$ can be achieved using ML-SW coding with respect to $\mathbf{C}$. Therefore, the set of all achievable rates with respect to $\mathbf{C}$ is given by: \begin{align} \label{eq:sw-bc} \mathcal{R}_{\mathbf{C}}&= \{R\in\mathbb{R}^{|{\mathcal V}|}:R\succ\mathbf{P}_{h_{\mathbf{C}}}\}\nonumber\\ &=\{R\in\mathbb{R}^{|{\mathcal V}|}:\forall {\mathcal S}\subseteq{\mathcal V}, R_{{\mathcal S}}\ge\sum_{i=1}^{K+1} H(X_{{\mathcal S}\cap{\mathcal L}_i}Y_{{\mathcal S}\cap{\mathcal L}_{i-1}}|X_{{\mathcal L}_i\backslash{\mathcal S}}Y_{{\mathcal L}_{i-1}\backslash{\mathcal S}}X_{{\mathcal L}^i}Y_{{\mathcal L}^{i-1}}Z) \} \end{align} \par The next theorem is the main result of this section. \begin{theorem}[SW-identity] \label{thm:sw-covering} The set $\{\mathcal{R}_{\mathbf{C}}:\mathbf{C}\in{\mathcal F}_{{\mathcal V}}\}$ is a closed covering of ${\mathcal R}_{SW}$, which is the SW-region defined by: \begin{equation} \label{eq:sw-region} {\mathcal R}_{SW}= \{R\in\mathbb{R}^{|{\mathcal V}|}:\forall {\mathcal S}\subseteq{\mathcal V}, R_{{\mathcal S}}\ge H(X_{{\mathcal S}} Y_{{\mathcal S}}|X_{{\mathcal S}^C}Y_{{\mathcal S}^C}Z)\} \end{equation} \end{theorem} \begin{proof} Define the function $h:2^{{\mathcal V}}\rightarrow\mathbb{R}$ with $h({{\mathcal S}})=H(X_{{\mathcal S}}Y_{{\mathcal S}}|Z)$. $h$ is a sub-modular function with the essential polytope $\mathbf{P}_h$. By definition, a point $R$ belongs to the SW-region iff it majorizes $\mathbf{P}_h$.
To prove the theorem, we must show that \begin{equation} \label{eq:eq-thm} {\mathcal R}_{SW}=\bigcup_{\mathbf{C}\in{\mathcal F}_{{\mathcal V}}}{\mathcal R}_{\mathbf{C}} \end{equation} Applying Equation \eqref{eq:maj-property} to the RHS of \eqref{eq:eq-thm} yields \begin{equation} \label{eq:un} \bigcup_{\mathbf{C}\in{\mathcal F}_{{\mathcal V}}}{\mathcal R}_{\mathbf{C}}= \{R\in\mathbb{R}^{|{\mathcal V}|}:R\succ\bigcup_{\mathbf{C}\in{\mathcal F}_{{\mathcal V}}}\mathbf{P}_{h_{\mathbf{C}}}\} \end{equation} \par Thus, to prove the theorem, we only need to show that $\{\mathbf{P}_{h_{\mathbf{C}}}:\mathbf{C}\in{\mathcal F}_{{\mathcal V}}\}$ is a closed covering of $\mathbf{P}_h$. We prove this by strong induction on $|{\mathcal V}|$. For $N=1$, as the base of the induction, the claim is clear (the case $N=2$ was proved separately at the beginning of this section). For $|{\mathcal V}|=N\ge 2$, assume that the theorem holds for any set of size at most $N-1$. We show that $\{\mathbf{P}_{h_{\mathbf{C}}}:\mathbf{C}\in{\mathcal F}_{{\mathcal V}}\}$ and $\mathbf{P}_h$ satisfy the conditions of Lemma \ref{le:cover}; thus, $\{\mathbf{P}_{h_{\mathbf{C}}}:\mathbf{C}\in{\mathcal F}_{{\mathcal V}}\}$ is a closed covering of $\mathbf{P}_h$. \begin{claim} \label{cl:1} For any ordered partition $\mathbf{C}$ of ${\mathcal V}$, we have \[ \mathbf{P}_{h_{\mathbf{C}}}\subseteq\mathbf{P}_h.\] \end{claim} \emph{Proof of Claim \ref{cl:1}}.
First, note that (see Equation \eqref{eq:sw-li}) \begin{align} h_{\mathbf{C}}({\mathcal S}|{\mathcal S}^C)&=\sum_{i=1}^{K+1} H(X_{{\mathcal S}\cap{\mathcal L}_i}Y_{{\mathcal S}\cap{\mathcal L}_{i-1}}|X_{{\mathcal L}_i\backslash{\mathcal S}}Y_{{\mathcal L}_{i-1}\backslash{\mathcal S}}X_{{\mathcal L}^i}Y_{{\mathcal L}^{i-1}}Z) \label{eq:app-1}\\ &\ge \sum_{i=1}^{K+1}H(X_{{\mathcal S}\cap{\mathcal L}_i}Y_{{\mathcal S}\cap{\mathcal L}_{i-1}}|X_{{\mathcal S}^C}Y_{{\mathcal S}^C}X_{{\mathcal S}\cap{\mathcal L}^i}Y_{{\mathcal S}\cap{\mathcal L}^{i-1}}Z)\label{eq:p1-1}\\ &= H(X_{{\mathcal S}}Y_{{\mathcal S}}|X_{{\mathcal S}^C}Y_{{\mathcal S}^C}Z)\label{eq:p1-2}\\ &= h({\mathcal S}|{\mathcal S}^C)\label{eq:p1-3} \end{align} where \eqref{eq:p1-1} follows from the fact that $({\mathcal L}_i\backslash{\mathcal S})\cup{\mathcal L}^i\subseteq{\mathcal S}^C\cup({\mathcal S}\cap{\mathcal L}^i)$ (with equality if ${\mathcal S}={\mathcal V}$) and the fact that conditioning does not increase entropy, and \eqref{eq:p1-2} follows by the chain rule, since $\{{\mathcal L}_i\cap{\mathcal S}\}_{i=1}^K$ is a partition of ${\mathcal S}$. The claim now follows from \eqref{eq:p1-3}. $\square$ \begin{claim} \label{cl:2} Suppose ${\mathcal F}_{{\mathcal T}^C,{\mathcal T}}$ is the subset of ${\mathcal F}_{{\mathcal V}}$ that consists of all ordered partitions which are generated by concatenating an ordered partition of ${\mathcal T}^C$ and an ordered partition of ${\mathcal T}$, i.e., \[{\mathcal F}_{{\mathcal T}^C,{\mathcal T}}=\{\mathbf{C}\in{\mathcal F}_{{\mathcal V}}:\mathbf{C}=[\mathbf{C}_1,\mathbf{C}_2],\mathbf{C}_1\in{\mathcal F}_{{\mathcal T}^C}\ \mbox{and}\ \mathbf{C}_2\in{\mathcal F}_{{\mathcal T}}\}\] Then, the set of facets $\{\mathbf{F}_{h_{\mathbf{C}},{\mathcal T}}:\mathbf{C}\in{\mathcal F}_{{\mathcal T}^C,{\mathcal T}}\}$ is a closed covering of $\mathbf{F}_{h,{\mathcal T}}$.
\end{claim} \emph{Proof of Claim \ref{cl:2}}. By Lemma \ref{le:facet}, $\mathbf{F}_{h,{\mathcal T}}$ is given by: \begin{equation} \mathbf{F}_{h,{\mathcal T}}=\{\mathbf{x}\in\mathbb{R}^{{\mathcal V}}:\mathbf{x}_{{\mathcal T}^C}\in\mathbf{P}_{h_1},\mathbf{x}_{{\mathcal T}}\in\mathbf{P}_{h_2}\} \end{equation} in which $\mathbf{P}_{h_1}$ and $\mathbf{P}_{h_2}$ are the associated essential polytopes of the sub-modular functions $h_1({\mathcal S})=H(X_{{\mathcal S}}Y_{{\mathcal S}}|Z)$ and $h_2({\mathcal S})=H(X_{{\mathcal S}}Y_{{\mathcal S}}|X_{{\mathcal T}^C}Y_{{\mathcal T}^C}Z)$ with domains $2^{{\mathcal T}^C}$ and $2^{{\mathcal T}}$, respectively. More precisely, $\mathbf{P}_{h_1}$ and $\mathbf{P}_{h_2}$ are given by: \begin{small} \begin{gather*} \mathbf{P}_{h_1}=\{\mathbf{x}\in\mathbb{R}^{{\mathcal T}^C}:x_{{\mathcal T}^C}=H(X_{{\mathcal T}^C}Y_{{\mathcal T}^C}|Z) ,\ \mbox{and}\ \forall{\mathcal S}\subset{\mathcal T}^C,x_{{\mathcal S}}\ge H(X_{{\mathcal S}}Y_{{\mathcal S}}|X_{{\mathcal S}^C\cap{\mathcal T}^C}Y_{{\mathcal S}^C\cap{\mathcal T}^C}Z)\} \\ \mathbf{P}_{h_2}=\{\mathbf{x}\in\mathbb{R}^{{\mathcal T}}:x_{{\mathcal T}}=H(X_{{\mathcal T}}Y_{{\mathcal T}}|X_{{\mathcal T}^C}Y_{{\mathcal T}^C}Z) ,\ \mbox{and}\ \forall{\mathcal S}\subset{\mathcal T},x_{{\mathcal S}}\ge H(X_{{\mathcal S}}Y_{{\mathcal S}}|X_{{\mathcal S}^C\cap{\mathcal T}}Y_{{\mathcal S}^C\cap{\mathcal T}}X_{{\mathcal T}^C}Y_{{\mathcal T}^C}Z)\} \end{gather*} \end{small} Now, since the sizes of ${\mathcal T}^C$ and ${\mathcal T}$ are smaller than $N$, by applying the induction assumption to the essential polytopes $\mathbf{P}_{h_1}$ and $\mathbf{P}_{h_2}$ (with side information $\tilde{Z}=(X_{{\mathcal T}^C},Y_{{\mathcal T}^C},Z)$ at the decoder for $\mathbf{P}_{h_2}$), we obtain: \begin{align} \mathbf{P}_{h_1}&=\bigcup_{\mathbf{C}_1\in{\mathcal F}_{{\mathcal T}^C}}\mathbf{P}_{h_{1,\mathbf{C}_1}}\nonumber\\ \mathbf{P}_{h_2}&=\bigcup_{\mathbf{C}_2\in{\mathcal F}_{{\mathcal T}}}\mathbf{P}_{h_{2,\mathbf{C}_2}}\label{eq:p-1-1}
\end{align} \par where $\mathbf{C}_1=[{\mathcal L}_{1,1},\cdots,{\mathcal L}_{1,K_1}]$, $\mathbf{C}_2=[{\mathcal L}_{2,1},\cdots,{\mathcal L}_{2,K_2}]$, and the functions $h_{1,\mathbf{C}_1}$ and $h_{2,\mathbf{C}_2}$, whose domains are $2^{{\mathcal T}^C}$ and $2^{{\mathcal T}}$, are defined by: \begin{align} h_{1,\mathbf{C}_1}({\mathcal S})&=\sum_{k=1}^{K_1+1}H(X_{{\mathcal S}\cap{\mathcal L}_{1,k}}Y_{{\mathcal S}\cap{\mathcal L}_{1,k-1}}|X_{{\mathcal L}_1^k}Y_{{\mathcal L}_1^{k-1}}Z)\label{eq:p2-1}\\ h_{2,\mathbf{C}_2}({\mathcal S})&=\sum_{k=1}^{K_2+1}H(X_{{\mathcal S}\cap{\mathcal L}_{2,k}}Y_{{\mathcal S}\cap{\mathcal L}_{2,k-1}}|X_{{\mathcal L}_2^k}Y_{{\mathcal L}_2^{k-1}}\tilde{Z}) \label{eq:p2-2} \end{align} Using \eqref{eq:p2-1} and \eqref{eq:p2-2}, we obtain $\mathbf{P}_{h_{1,\mathbf{C}_1}}$ and $\mathbf{P}_{h_{2,\mathbf{C}_2}}$ as: \begin{align} \mathbf{P}_{h_{1,\mathbf{C}_1}}=&\big\{\mathbf{x}\in\mathbb{R}^{{\mathcal T}^C}:x_{{\mathcal T}^C}=H(X_{{\mathcal T}^C}Y_{{\mathcal T}^C}|Z),\ \mbox{and}\ \forall{\mathcal S}\subset{\mathcal T}^C\nonumber\\ & x_{{\mathcal S}}\ge\sum_{k=1}^{K_1+1}H(X_{{\mathcal S}\cap{\mathcal L}_{1,k}}Y_{{\mathcal S}\cap{\mathcal L}_{1,k-1}}|X_{{\mathcal L}_{1,k}\backslash{\mathcal S}}Y_{{\mathcal L}_{1,k-1}\backslash{\mathcal S}}X_{{\mathcal L}_1^k}Y_{{\mathcal L}_1^{k-1}}Z)\big\}\label{eq:p0-1} \\ \mathbf{P}_{h_{2,\mathbf{C}_2}}=& \big\{\mathbf{x}\in\mathbb{R}^{{\mathcal T}}:x_{{\mathcal T}}=H(X_{{\mathcal T}}Y_{{\mathcal T}}|\tilde{Z}),\ \mbox{and}\ \forall{\mathcal S}\subset{\mathcal T}\nonumber\\ & x_{{\mathcal S}}\ge\sum_{k=1}^{K_2+1}H(X_{{\mathcal S}\cap{\mathcal L}_{2,k}}Y_{{\mathcal S}\cap{\mathcal L}_{2,k-1}}|X_{{\mathcal L}_{2,k}\backslash{\mathcal S}}Y_{{\mathcal L}_{2,k-1}\backslash{\mathcal S}}X_{{\mathcal L}_2^k}Y_{{\mathcal L}_2^{k-1}}X_{{\mathcal T}^C}Y_{{\mathcal T}^C}Z)\big\}\label{eq:p0-2} \end{align} Let $\mathbf{C}=[{\mathcal L}_{1,1},\cdots,{\mathcal L}_{1,K_1},{\mathcal L}_{2,1},\cdots,{\mathcal L}_{2,K_2}]$ be
the concatenation of $\mathbf{C}_1$ and $\mathbf{C}_2$. We assert that \begin{equation} \label{eq:p3} \mathbf{F}_{h_{\mathbf{C}},{\mathcal T}}=\{\mathbf{x}\in\mathbb{R}^{{\mathcal V}}:\mathbf{x}_{{\mathcal T}^C}\in\mathbf{P}_{h_{1,\mathbf{C}_1}},\mathbf{x}_{{\mathcal T}}\in\mathbf{P}_{h_{2,\mathbf{C}_2}}\} \end{equation} By Lemma \ref{le:facet}, $\mathbf{x}$ belongs to $\mathbf{F}_{h_{\mathbf{C}},{\mathcal T}}$, iff \begin{equation} \label{eq:p4} \begin{array}{lccr} {\mathcal S}\subseteq{\mathcal T}^C:& x_{{\mathcal S}}\ge h_{\mathbf{C}}({\mathcal S}|{\mathcal T}^C\cap{\mathcal S}^C) &\mbox{with equality for}& {\mathcal S}={\mathcal T}^C\\ {\mathcal S}\subseteq{\mathcal T}:& x_{{\mathcal S}}\ge h_{\mathbf{C}}({\mathcal S}|{\mathcal S}^C) &\mbox{with equality for}& {\mathcal S}={\mathcal T} \end{array} \end{equation} To evaluate \eqref{eq:p4}, consider \begin{align} h_{\mathbf{C}}({\mathcal S})&=\sum_{k=1}^{K_1}H(X_{{\mathcal S}\cap{\mathcal L}_{1,k}}Y_{{\mathcal S}\cap{\mathcal L}_{1,k-1}}|X_{{\mathcal L}_1^k}Y_{{\mathcal L}_1^{k-1}}Z)\nonumber\\&\qquad+H(X_{{\mathcal S}\cap{\mathcal L}_{2,1}}Y_{{\mathcal S}\cap{\mathcal L}_{1,K_1}}|X_{{\mathcal T}^C}Y_{{\mathcal L}_1^{K_1}}Z)+\sum_{k=2}^{K_2+1}H(X_{{\mathcal S}\cap{\mathcal L}_{2,k}}Y_{{\mathcal S}\cap{\mathcal L}_{2,k-1}}|X_{{\mathcal L}_2^k}Y_{{\mathcal L}_2^{k-1}}\tilde{Z}) \end{align} where we have used the fact that ${\mathcal L}_1^{K_1+1}={\mathcal T}^C$. 
Now, we compute the RHS of \eqref{eq:p4}: \begin{align} h_{\mathbf{C}}({\mathcal S}|{\mathcal T}^C\cap{\mathcal S}^C)&=h_{\mathbf{C}}({\mathcal T}^C)-h_{\mathbf{C}}({\mathcal T}^C\cap{\mathcal S}^C)\nonumber\\ &=\sum_{k=1}^{K_1+1}H(X_{{\mathcal L}_{1,k}}Y_{{\mathcal L}_{1,k-1}}|X_{{\mathcal L}_1^k}Y_{{\mathcal L}_1^{k-1}}Z)-H(X_{{\mathcal S}^C\cap{\mathcal L}_{1,k}}Y_{{\mathcal S}^C\cap{\mathcal L}_{1,k-1}}|X_{{\mathcal L}_1^k}Y_{{\mathcal L}_1^{k-1}}Z)\label{eq:p5-1}\\ &=\sum_{k=1}^{K_1+1}H(X_{{\mathcal S}\cap{\mathcal L}_{1,k}}Y_{{\mathcal S}\cap{\mathcal L}_{1,k-1}}|X_{{\mathcal L}_{1,k}\backslash{\mathcal S}}Y_{{\mathcal L}_{1,k-1}\backslash{\mathcal S}}X_{{\mathcal L}_1^k}Y_{{\mathcal L}_1^{k-1}}Z)\label{eq:p5-2}\\ h_{\mathbf{C}}({\mathcal S}|{\mathcal S}^C)&=\sum_{k=1}^{K_2+1}H(X_{{\mathcal S}\cap{\mathcal L}_{2,k}}Y_{{\mathcal S}\cap{\mathcal L}_{2,k-1}}|X_{{\mathcal L}_{2,k}\backslash{\mathcal S}}Y_{{\mathcal L}_{2,k-1}\backslash{\mathcal S}}X_{{\mathcal L}_2^k}Y_{{\mathcal L}_2^{k-1}}\tilde{Z})\label{eq:p5-3} \end{align} where \eqref{eq:p5-1} follows because ${\mathcal S}\subseteq{\mathcal T}^C$ and ${\mathcal T}^C$ is disjoint from all ${\mathcal L}_{2,i}$, and \eqref{eq:p5-3} follows from the fact that ${\mathcal T}$ is disjoint from all ${\mathcal L}_{1,i}$.\par Now \eqref{eq:p0-1}, \eqref{eq:p0-2}, \eqref{eq:p5-2} and \eqref{eq:p5-3} together establish the assertion. Finally, the assertion together with \eqref{eq:p-1-1} implies that for each point $\mathbf{x}\in\mathbf{F}_{h,{\mathcal T}}$, there exists an ordered partition $\mathbf{C}\in{\mathcal F}_{{\mathcal T}^C,{\mathcal T}}$ for which $\mathbf{F}_{h_{\mathbf{C}},{\mathcal T}}$ contains $\mathbf{x}$. This completes the proof of Claim \ref{cl:2}.
$\square$ \begin{claim} \label{cl:3} For each facet $\mathbf{F}_{h_{\mathbf{C}},{\mathcal T}}$ of the essential polytope of a given ordered partition $\mathbf{C}$ inside $\mathbf{P}_h$, there exists an ordered partition $\mathbf{C}^*\neq\mathbf{C}$, such that \begin{equation} \label{eq:common} \mathbf{P}_{h_{\mathbf{C}}}\bigcap\mathbf{P}_{h_{\mathbf{C}^*}}=\mathbf{F}_{h_{\mathbf{C}},{\mathcal T}} \end{equation} \end{claim} \par \emph{Proof of Claim \ref{cl:3}}. Let $\mathbf{C}=[{\mathcal L}_1,\cdots,{\mathcal L}_K]$. From the proof of Claim \ref{cl:2}, the facets corresponding to $({\mathcal L}_{i}^{K}=\cup_{k=i}^{K}{\mathcal L}_k:i\ge2)$ lie on the boundary of $\mathbf{P}_h$. Thus, we only consider the facets corresponding to ${\mathcal T}\neq{\mathcal L}_{i}^{K}=\cup_{k=i}^{K}{\mathcal L}_k$. For such ${\mathcal T}$, set $\mathbf{C}^*=[{\mathcal L}_1^*,\cdots,{\mathcal L}_K^*,{\mathcal L}_{K+1}^*]$, where ${\mathcal L}_k^*=({\mathcal T}\cap{\mathcal L}_{k-1})\cup({\mathcal T}^C\cap{\mathcal L}_k)$. Now we show that \begin{equation} \label{eq:cl3} \mathbf{F}_{h_{\mathbf{C}},{\mathcal T}}=\mathbf{F}_{h_{\mathbf{C}^*},{\mathcal T}^C}. \end{equation} This proves Claim \ref{cl:3}, because $\mathbf{P}_{h_{\mathbf{C}}}$ attains the minimum of $x_{{\mathcal T}}$ on $\mathbf{F}_{h_{\mathbf{C}},{\mathcal T}}$, and $\mathbf{P}_{h_{\mathbf{C}^*}}$ attains the maximum of $x_{{\mathcal T}}$ on $\mathbf{F}_{h_{\mathbf{C}^*},{\mathcal T}^C}$ (since $x_{{\mathcal T}}=H(X_{{\mathcal V}}Y_{{\mathcal V}}|Z)-x_{{\mathcal T}^C}$). \par We provide the formal proof of \eqref{eq:cl3} in Appendix \ref{app:3}; here, we give the main idea behind the construction of $\mathbf{C}^*$. First, consider the simple SW-coding of a DMCS $X_{{\mathcal V}}$ with rate-tuple $R_{{\mathcal V}}$.
It is well-known that the minimum of $R_{{\mathcal T}}$ is achieved by joint decoding of $X_{{\mathcal T}^C}$ with sum-rate $R_{{\mathcal T}^C}=H(X_{{\mathcal T}^C}|Z)$ followed by joint decoding of $X_{{\mathcal T}}$ in the presence of $X_{{\mathcal T}^C}$ at the decoder with sum-rate $R_{{\mathcal T}}=H(X_{{\mathcal T}}|X_{{\mathcal T}^C}Z)$. Also, Lemma \ref{le:facet} confirms this result for any submodular function. Moreover, this lemma tells us that each point which achieves the minimum of $R_{{\mathcal T}}$ can be obtained by this two-level decoding. Now consider the ML-SW coding with respect to $\mathbf{C}$. Each point of the ML-SW region can be written in the form $R=\sum_{k=1}^{K+1}R_k$, where $R_k$ lies in the SW-region of layer $k$. So $R_{{\mathcal T}}$ can be split into the rates $R_{k,{\mathcal T}}=R_{k,{\mathcal T}\cap({\mathcal L}_k\cup{\mathcal L}_{k-1})}$. Thus, to minimize $R_{{\mathcal T}}$, we need to minimize each of $R_{k,{\mathcal T}\cap({\mathcal L}_k\cup{\mathcal L}_{k-1})}$. In layer $k$, SW-coding is done over $(X_{{\mathcal L}_k},Y_{{\mathcal L}_{k-1}})$, therefore to achieve the minimum of $R_{k,{\mathcal T}\cap({\mathcal L}_k\cup{\mathcal L}_{k-1})}$, it suffices to consider two levels of decoding: the decoder first decodes $(X_{{\mathcal T}^C\cap{\mathcal L}_k},Y_{{\mathcal T}^C\cap{\mathcal L}_{k-1}})$ in the presence of $(X_{{\mathcal L}^k},Y_{{\mathcal L}^{k-1}},Z)$, then decodes $(X_{{\mathcal T}\cap{\mathcal L}_k},Y_{{\mathcal T}\cap{\mathcal L}_{k-1}})$ in the presence of $(X_{{\mathcal L}^k},Y_{{\mathcal L}^{k-1}},X_{{\mathcal T}^C\cap{\mathcal L}_k},Y_{{\mathcal T}^C\cap{\mathcal L}_{k-1}},Z)$.
Overall, to minimize $R_{{\mathcal T}}$ with respect to $\mathbf{C}$, one can consider the following $2K+2$ levels of decoding: \begin{equation} \label{eq:chain-1} X_{{\mathcal T}^C\cap{\mathcal L}_1},X_{{\mathcal T}\cap{\mathcal L}_1},\cdots,(X_{{\mathcal T}^C\cap{\mathcal L}_k},Y_{{\mathcal T}^C\cap{\mathcal L}_{k-1}}),(X_{{\mathcal T}\cap{\mathcal L}_k},Y_{{\mathcal T}\cap{\mathcal L}_{k-1}}),\cdots,Y_{{\mathcal T}^C\cap{\mathcal L}_{K}},Y_{{\mathcal T}\cap{\mathcal L}_K} \end{equation} On the other hand, to maximize $R_{{\mathcal T}}$ (or equivalently, to minimize $R_{{\mathcal T}^C}$) with respect to $\mathbf{C}^{*}$, the following order on SW-coding is required: \begin{equation} \label{eq:chain-2} X_{{\mathcal T}\cap{\mathcal L}_1^*},X_{{\mathcal T}^C\cap{\mathcal L}_1^*},\cdots,(X_{{\mathcal T}\cap{\mathcal L}_k^*},Y_{{\mathcal T}\cap{\mathcal L}_{k-1}^*}),(X_{{\mathcal T}^C\cap{\mathcal L}_k^*},Y_{{\mathcal T}^C\cap{\mathcal L}_{k-1}^*}),\cdots,Y_{{\mathcal T}\cap{\mathcal L}_{K+1}^*},Y_{{\mathcal T}^C\cap{\mathcal L}_{K+1}^*} \end{equation} Now, note that ${\mathcal T}^C\cap{\mathcal L}_k^*={\mathcal T}^C\cap(({\mathcal T}\cap{\mathcal L}_{k-1})\cup({\mathcal T}^C\cap{\mathcal L}_k))={\mathcal T}^C\cap{\mathcal L}_k$ and ${\mathcal T}\cap{\mathcal L}_k^*={\mathcal T}\cap(({\mathcal T}\cap{\mathcal L}_{k-1})\cup({\mathcal T}^C\cap{\mathcal L}_k))={\mathcal T}\cap{\mathcal L}_{k-1}$; in particular ${\mathcal T}\cap{\mathcal L}_1^*={\mathcal T}^C\cap{\mathcal L}_{K+1}^*=\emptyset$. Comparing \eqref{eq:chain-1} with \eqref{eq:chain-2}, we see that these two multi-level decoding schemes are the same, thus the intersection of $\mathbf{P}_{h_{\mathbf{C}}}$ and $\mathbf{P}_{h_{\mathbf{C}^*}}$ is $\mathbf{F}_{h_{\mathbf{C}},{\mathcal T}}=\mathbf{F}_{h_{\mathbf{C}^*},{\mathcal T}^C}$. $\square$ \par Now Claims \ref{cl:1}--\ref{cl:3} ensure that $\mathbf{P}_{h_{\mathbf{C}}}$ and $\mathbf{P}_h$ satisfy the conditions of Lemma \ref{le:cover}.
This completes the proof. \end{proof} \section{Feasible Constraints for reliable multicasting of a DMCS over cooperative networks}\label{sec:5} In this section, we obtain a set of DMCS which can reliably be multicast over a cooperative network. Our approach is based on a generalization of the CF strategy for relay networks. Two types of generalization have been considered in previous works \cite{kramer2005,rost}. In \cite{kramer2005}, the CF strategy was generalized in the following manner: \begin{enumerate} \item Each relay and the destination partially decode the messages of the other relays. \item Each relay compresses its observation $Y_v$ in the presence of side information from the messages of the other relays. \item Each relay sends its compressed observation through a Multiple Access Channel (MAC). Finally, the destination decodes the source message. \end{enumerate} This scenario deals with the relays in a symmetric way, i.e., all relays lie in a single MAC layer. In \cite{rost}, a generalization of the \emph{mixed strategy} of \cite[Theorem 7]{cover} is proposed. By relaxing the partial decode-and-forward part of the mixed strategy, one obtains a generalization of the CF strategy. In this scenario, the relays are ordered according to a given permutation. Each relay compresses its observation using the multiple description (MD) method and sends these descriptions through a broadcast channel with a degraded message set. Each relay and the destination decode their respective descriptions after decoding their broadcast messages according to a sequential decoding scheme. However, if the relays use simple Wyner-Ziv coding rather than MD, the result is a special case of \cite[Theorem 3]{kramer2005}. In another scenario, proposed in \cite{rost:asilomar}, CF is generalized for half-duplex channels. Although this method is proposed for half-duplex relay networks, it can be generalized to arbitrary relay networks as well. In this scenario, each relay uses simple Wyner-Ziv coding.
This scenario differs from the previous generalization of CF in that the destination considers an ordering of the relays and decodes the compressed observation of relay $k$ in the presence of the compressed observations of relays $(k+1,k+2,\cdots,N-1)$, which were decoded in the previous blocks. This is similar to ML-SW coding. \par We propose a joint source coding and Wyner-Ziv coding scheme for multicasting a DMCS $U_{{\mathcal A}}$ over cooperative networks. In this scenario, in each block, each node compresses its observation using Wyner-Ziv coding, then in the next block jointly maps the compressed observation and its current source sequence to a channel input codeword and transmits the codeword. The joint encoding used in this scheme benefits from the advantage of joint source-channel coding over source-channel separation in the multicast scenario, which is illustrated in \cite{tuncel}. Moreover, in this scheme, each node has two types of sources, namely the compressed observation and the source sequence, both of which must be decoded at each destination. By the nature of relaying, it is not possible to decode these sources simultaneously. This situation is similar to ML-SW coding, in which the two components of the source are not decoded simultaneously. Motivated by the results of ML-SW coding, e.g., Theorem \ref{thm:sw-covering}, each destination groups the other nodes into layers according to its ability to decode their information.
Using insights from ML-SW coding, in the first level of decoding the destination directly decodes the first component of the information of the nodes in the first layer, i.e., the source sequences of the first layer, through a MAC layer between layer one and the destination. In level $k$ of decoding, the destination jointly decodes the source sequences of layer $k$ and the compressed observations of layer $k-1$ (the second component of the information of layer $k-1$) through the MAC layer between layer $k$ and the destination, in the presence of the information decoded in levels $(1,2,\cdots,k-1)$ as side information. This side information plays two roles in improving the decoding: \begin{enumerate} \item It is side information for Slepian-Wolf coding, which enlarges the SW-region. \item It is side information for the MAC, which enlarges the MAC-region. Unlike the first role, this role does not arise from ML-SW coding. \end{enumerate} \par Enlarging the SW-region and the MAC-region creates the possibility of an intersection between the two regions, which results in reliable transmission of the source sequences of the nodes in layer $k$ and the compressed observations of the nodes in layer $k-1$ in an operational separation sense, even if the original MAC-region does not intersect the original SW-region. \par The next theorem is the main result of the paper.
\begin{theorem} \label{thm:sw} The set of DMCS $U_{{\mathcal A}}$ can reliably be multicast over a cooperative network to nodes in ${\mathcal D}$, if there exist auxiliary random variables $\hat{Y}_{{\mathcal V}}$ and $Q$, such that for each ${\mathcal S}\subseteq{\mathcal A}$, we have \begin{equation} \label{eq:sw} H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})< \min_{d_i\in{\mathcal D}\backslash{\mathcal S}}\min_{{\mathcal V}\supseteq{\mathcal W}\supseteq{\mathcal S}: \atop d_i\in{\mathcal W}^C} [I(X_{{\mathcal W}};Y_{d_i}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}|X_{{\mathcal W}^C}Q)-I(Y_{{\mathcal W}};\hat{Y}_{{\mathcal W}}|X_{{\mathcal V}}Y_{d_i}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}Q)] \end{equation} \noindent where the joint p.m.f. of the random variables factors as \begin{equation} \label{eq:dist} p(q)p(u_{{\mathcal A}})[\prod_{v\in{\mathcal V}}p(x_v|q)p(\hat{y}_v|x_v,y_v,q)]p(y_{{\mathcal V}}|x_{{\mathcal V}}). \end{equation} \end{theorem} \begin{remark} The constraint \eqref{eq:sw} separates source coding from channel coding in the operational separation sense \cite{tuncel}. To see this, observe that the constraint \eqref{eq:sw} is equivalent to the following constraint, \begin{equation}\label{eq:sw-5} \forall{\mathcal W}\subseteq{\mathcal V},d_i\in{\mathcal W}^C: H(U_{{\mathcal W}\cap{\mathcal A}}|U_{{\mathcal A}\backslash{\mathcal W}})+I(Y_{{\mathcal W}};\hat{Y}_{{\mathcal W}}|X_{{\mathcal V}}Y_{d_i}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}Q)<I(X_{{\mathcal W}};Y_{d_i}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}|X_{{\mathcal W}^C}Q). \end{equation} Consider a cut $\Lambda=({\mathcal W},{\mathcal W}^C)$. The RHS of \eqref{eq:sw-5} provides an achievable flow through the cut $\Lambda$.
The first term in the LHS of \eqref{eq:sw-5} represents the rate of the Slepian-Wolf coding for describing $U_{{\mathcal W}\cap{\mathcal A}}$ to the destinations on the other side of the cut in the presence of $U_{{\mathcal A}\backslash{\mathcal W}}$, which is available in ${\mathcal W}^C$. The second term in the LHS of \eqref{eq:sw-5} can be interpreted as the rate of the Wyner-Ziv coding for describing a compression of the observation $Y_{{\mathcal W}}$, i.e. $\hat{Y}_{{\mathcal W}}$, to the other side of the cut in the presence of $(X_{{\mathcal W}^C},\hat{Y}_{{\mathcal W}^C},Y_{d_i})$ and $X_{{\mathcal W}}$, where the latter can be regarded as the output of the channel decoder. Since the compression rate of the sources is less than the information flow, one can expect that the multicasting of the sources is feasible, due to the source-channel separation approach. \end{remark} \begin{proof}[Proof of Theorem \ref{thm:sw}] For the sake of simplicity, we assume that $|{\mathcal Q}|=1$ where $Q$ is a time-sharing random variable. First, we characterize a set of DMCS which can reliably be multicast over a cooperative network, with respect to given ordered partitions at each destination. For each destination node $d_i$, let ${\mathcal V}_{-d_i}={\mathcal V}\backslash\{d_i\}$. The following lemma establishes a set of sufficient conditions for reliable multicasting of $U_{{\mathcal A}}$ over the cooperative network. We provide its proof in Subsection \ref{sub:a}.
\begin{lemma} \label{le:first} The set of DMCS $U_{{\mathcal A}}$ can reliably be multicast over a cooperative network to subset ${\mathcal D}$ of the nodes, if for each $d_i\in{\mathcal D}$, there exists an ordered partition $\mathbf{C}^{(d_i)}=[{\mathcal L}_1,{\mathcal L}_2,\cdots,{\mathcal L}_{\ell}]$ of ${\mathcal V}_{-d_i}$ such that for each ${\mathcal S}\subseteq{\mathcal V}_{-d_i}$, the following constraint is satisfied: \begin{align} \label{eq:sufficient} \sum_{t\in{\mathcal S}}H(X_t)+H(\hat{Y}_t|X_tY_t)\geq& \sum_{k=1}^{\ell+1}\Big(H(U_{{\mathcal S}\cap{\mathcal L}_k}|U_{{\mathcal L}_k\backslash{\mathcal S}}U_{{\mathcal L}^k}U_{d_i})+\nonumber\\ &\quad\qquad H(X_{{\mathcal S}\cap{\mathcal L}_k}\hat{Y}_{{\mathcal S}\cap{\mathcal L}_{k-1}}|X_{{\mathcal L}_k\backslash{\mathcal S}}\hat{Y}_{{\mathcal L}_{k-1}\backslash{\mathcal S}}X_{{\mathcal L}^k}\hat{Y}_{{\mathcal L}^{k-1}}Y_{d_i}X_{d_i})\Big), \end{align} \noindent where the random variables $(X_{{\mathcal V}},Y_{{\mathcal V}},\hat{Y}_{{\mathcal V}})$ are distributed according to \eqref{eq:dist}. \end{lemma} \par This lemma gives a partial solution to the problem of reliable multicasting of $U_{{\mathcal A}}$ over the cooperative network, in the sense that, to determine whether multicasting of $U_{{\mathcal A}}$ is feasible, we must consider all possible ordered partitions of each set ${\mathcal V}_{-d_i}$ and check whether the constraint \eqref{eq:sufficient} is satisfied. If for each destination node $d_i$, there exists at least one ordered partition of ${\mathcal V}_{-d_i}$ such that \eqref{eq:sufficient} is satisfied, then reliable multicasting is feasible. Since the number of ordered partitions of a set ${\mathcal V}$ grows rapidly with $|{\mathcal V}|$, such an approach (checking the constraint \eqref{eq:sufficient} for all ordered partitions) quickly becomes intractable.
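To make the growth concrete: the number of ordered partitions of an $n$-element set is the $n$-th ordered Bell (Fubini) number, which grows super-exponentially, while a single unified condition over subsets requires only $2^n$ checks. The following Python sketch (purely illustrative, not part of the coding scheme) computes the comparison via the standard recurrence:

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def ordered_bell(n: int) -> int:
    """Number of ordered partitions (ordered set partitions) of an n-element
    set, via the recurrence a(n) = sum_{k=1..n} C(n,k) * a(n-k), a(0) = 1."""
    if n == 0:
        return 1
    return sum(comb(n, k) * ordered_bell(n - k) for k in range(1, n + 1))

# Number of ordered partitions versus number of subsets of an n-set:
# already for n = 8 there are 545835 ordered partitions but only 256 subsets.
for n in range(1, 9):
    print(n, ordered_bell(n), 2 ** n)
```

The disparity between the two columns is what makes a unified, subset-indexed sufficient condition valuable.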
However, using Theorem \ref{thm:sw-covering}, we show that there exists a set of constraints that unifies the set of constraints \eqref{eq:sufficient} with respect to all the ordered partitions. The following lemma establishes such a result. \begin{lemma}[Unified Sufficient Conditions]\label{le:6} For a destination node $d_i$, there exists at least one ordered partition $\mathbf{C}^{(d_i)}$ of ${\mathcal V}_{-d_i}$ for which the constraint \eqref{eq:sufficient} is satisfied, if and only if the following constraint is satisfied: \begin{equation} \label{eq:uniuni} \forall {\mathcal S}\subseteq{\mathcal V}_{-d_i}:\sum_{t\in{\mathcal S}}H(X_t)+H(\hat{Y}_t|X_tY_t)\geq H(\hat{Y}_{{\mathcal S}}X_{{\mathcal S}}|X_{{\mathcal S}^C}\hat{Y}_{{\mathcal S}^C}Y_{d_i}X_{d_i})+H(U_{{\mathcal S}}|U_{{\mathcal S}^C}U_{d_i}). \end{equation} \end{lemma} \emph{Proof of Lemma \ref{le:6}}. For each $v\in{\mathcal V}$, define $R_v=H(X_v)+H(\hat{Y}_v|Y_vX_v)$ and \[R^{(d_i)}=(R_1,\cdots,R_{d_i-1},R_{d_i+1},\cdots,R_V).\] Consider the RHS of \eqref{eq:sufficient}. Since the random variables $U_{{\mathcal A}}$ and $(X_{{\mathcal V}},\hat{Y}_{{\mathcal V}},Y_{{\mathcal V}})$ are independent, the constraint \eqref{eq:sufficient} can be rewritten as \begin{equation}\label{eq:correc} \forall{\mathcal S}\subseteq{\mathcal V}_{-d_i}: R^{(d_i)}_{{\mathcal S}}\ge \sum_{k=1}^{\ell+1}H(U_{{\mathcal S}\cap{\mathcal L}_k}X_{{\mathcal S}\cap{\mathcal L}_k}\hat{Y}_{{\mathcal S}\cap{\mathcal L}_{k-1}}|U_{{\mathcal L}_k\backslash{\mathcal S}}X_{{\mathcal L}_k\backslash{\mathcal S}}\hat{Y}_{{\mathcal L}_{k-1}\backslash{\mathcal S}}U_{{\mathcal L}^k}X_{{\mathcal L}^k}\hat{Y}_{{\mathcal L}^{k-1}}U_{d_i}Y_{d_i}X_{d_i}).
\end{equation} The RHS of \eqref{eq:correc} can be expressed in the form of \eqref{eq:sw-bc} with ${\mathcal V}={\mathcal V}_{-d_i}$, $X_v=(X_v,U_v)$, $Y_v=\hat{Y}_v$ and $Z=(Y_{d_i},X_{d_i},U_{d_i})$, thus the constraint \eqref{eq:sufficient} is equivalent to $R^{(d_i)}\in{\mathcal R}_{\mathbf{C}^{(d_i)}}$. Therefore, for node $d_i$, there exists at least one ordered partition of ${\mathcal V}_{-d_i}$ such that \eqref{eq:sufficient} is satisfied, iff $R^{(d_i)}\in\cup_{\mathbf{C}^{(d_i)}\in{\mathcal F}_{{\mathcal V}_{-d_i}}}{\mathcal R}_{\mathbf{C}^{(d_i)}}$. Applying Theorem \ref{thm:sw-covering}, we conclude that such $\mathbf{C}^{(d_i)}$ exists iff \eqref{eq:uniuni} is satisfied. $\square$ \par The constraint \eqref{eq:uniuni} can be rewritten in the following form: \begin{subequations} \label{eq:adi}\begin{align} \forall {\mathcal S}\subseteq{\mathcal A}\backslash\{d_i\} :& H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})\le \min_{{\mathcal W}\supseteq{\mathcal S}\atop d_i\in{\mathcal W}^C} R^{(d_i)}_{{\mathcal W}}-H(\hat{Y}_{{\mathcal W}}X_{{\mathcal W}}|X_{{\mathcal W}^C}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}Y_{d_i}),\label{eq:adi1}\\ \label{eq:adi2} \forall {\mathcal S}\subseteq{\mathcal A}^C\backslash\{d_i\}: &R^{(d_i)}_{{\mathcal S}} - H(\hat{Y}_{{\mathcal S}}X_{{\mathcal S}}|X_{{\mathcal S}^C}\hat{Y}_{{\mathcal S}^C\backslash\{d_i\}}Y_{d_i})\ge 0. \end{align}\end{subequations} Consider the constraint \eqref{eq:adi}. In Appendix \ref{app:simplify}, using the joint p.m.f.
\eqref{eq:dist} we will show that this constraint is equivalent to the following constraint \begin{subequations} \label{eq:adi-adi}\begin{align} \forall {\mathcal S}\subseteq{\mathcal A}\backslash\{d_i\} :& H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})< \min_{{\mathcal W}\supseteq{\mathcal S}: \atop d_i\in{\mathcal W}^C} [I(X_{{\mathcal W}};Y_{d_i}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}|X_{{\mathcal W}^C})-I(Y_{{\mathcal W}};\hat{Y}_{{\mathcal W}}|X_{{\mathcal V}}Y_{d_i}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}})],\label{eq:adi-adi1}\\ \label{eq:adi-adi2} \forall {\mathcal S}\subseteq{\mathcal A}^C\backslash\{d_i\}: & I(\hat{Y}_{{\mathcal S}};Y_{{\mathcal S}}|X_{{\mathcal V}}\hat{Y}_{{\mathcal S}^C\backslash\{d_i\}}Y_{d_i})\le I(X_{{\mathcal S}};\hat{Y}_{{\mathcal S}^C\backslash\{d_i\}}Y_{d_i}|X_{{\mathcal S}^C}). \end{align}\end{subequations} The first constraint \eqref{eq:adi-adi1} is the same as the constraint \eqref{eq:sw}, so we only need to show that the second constraint \eqref{eq:adi-adi2} can be dropped. The second constraint represents a sufficient condition for reliable multicasting of the compressed observations of the non-source nodes to the destinations. Since the destinations only need to decode the sources and do not need to decode any other information, it is natural to neglect the second constraint, which completes the proof of Theorem \ref{thm:sw}. We provide a rigorous proof of this fact in Subsection \ref{sub:c}. \end{proof} \subsection{Multi-Layer Slepian-Wolf coding over a cooperative network (Proof of Lemma \ref{le:first})} \label{sub:a} We transmit a source sequence of length $s_{nB}=nB$ over the cooperative network in $B+2V-3$ blocks of length $n$, where $V$ is the cardinality of ${\mathcal V}$. Observe that $r_{nB}=n(B+2V-3)$ and $\dfrac{r_{nB}}{s_{nB}}\rightarrow 1$ as $B\rightarrow \infty$, thus the sequence $\{(s_{nB},r_{nB})\}_{B=1}^{\infty}$ satisfies the condition of Definition \ref{def:sw}.
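Before detailing the codebooks, it may help to see the block structure concretely. The following Python sketch (illustrative only; the helper name and the string encoding of indices are ours) reproduces the message schedule $w_{v,[b]}$ carried by each source node in the four-node example of Table \ref{ta:enc}, where $V=4$ and $B=2$: informative message indices are sent only in blocks $V\le b\le B+V-1$, and the default index $1$ otherwise.

```python
def message_schedule(B: int, V: int) -> dict:
    """For each block b = 1..B+2V-3, return the source-message index w_[b]
    carried by a source node: w_[b] = m_[b-V+1] for V <= b <= B+V-1,
    and the default index 1 otherwise."""
    n_blocks = B + 2 * V - 3
    sched = {}
    for b in range(1, n_blocks + 1):
        if V <= b <= B + V - 1:
            sched[b] = f"m[{b - V + 1}]"   # informative message block
        else:
            sched[b] = "1"                 # default index
    return sched

# The 4-node, B = 2 example: blocks 4 and 5 carry m[1] and m[2].
print(message_schedule(B=2, V=4))
```

This matches Table \ref{ta:enc}: every node transmits $\mathbf{x}_v(m_{[1]},\cdot)$ in block $4$, $\mathbf{x}_v(m_{[2]},\cdot)$ in block $5$, and the default message index in the remaining five blocks.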
\subsubsection*{Codebook generation at node $v$} Fix $0<\epsilon''<\epsilon'<\epsilon$. Also fix $\delta>0$ such that $|\mathit{T}_{\epsilon}^n(U_v)|<2^{n(H(U_{v})+\delta)}$. To each element of $\mathit{T}_{\epsilon}^n(U_v)$, assign a number $w_v\in[1,2^{n(H(U_{v})+\delta)}]$ using a one-to-one mapping. Moreover, for each non-typical sequence, set $w_v =1$. Denote the result by $\mathbf{u}_{v}(w_v)$. For channel coding, independently repeat the following procedure $V$ times. Denote the resulting $k$-th codebook by ${\mathcal C}_v(k)$.\par Choose $2^{n(H(U_{v})+I(Y_v;\hat{Y}_v|X_v)+2\delta)}$ codewords $\mathbf{x}_v(w_v,z_v)$, each drawn uniformly and independently from the set $\mathit{T}_{\epsilon''}^n(X_v)$ where $z_v\in[1,2^{n(I(Y_v;\hat{Y}_v|X_v)+\delta)}]$. For Wyner-Ziv coding, for each $\mathbf{x}_v(w_v,z_v)$ choose $2^{n(I(Y_v;\hat{Y}_v|X_v)+\delta)}$ codewords $\mathbf{\hat{y}}_v(z'_v|\mathbf{x}_v)$, each drawn uniformly and independently from the set $\mathit{T}_{\epsilon'}^n(\hat{Y}_v|\mathbf{x}_v)$ where $z'_v\in[1,2^{n(I(Y_v;\hat{Y}_v|X_v)+\delta)}]$. \subsubsection*{Encoding at node $v$} Divide the $nB$-length source stream $u_v^{nB}$ into $B$ vectors $(\mathbf{u}_{v,[j]}:1\leq j\leq B)$ where $\mathbf{u}_{v,[j]}=(u_{v,(j-1)n+1},\cdots,u_{v,jn})$. We say that the channel encoder receives $\mathbf{m}_v=(m_{v,[1]},\cdots,m_{v,[B]})$, if for $1\leq j\leq B$, $\mathbf{u}_{v,[j]}$ is assigned to $m_{v,[j]}\in[1,2^{n(H(U_{v})+\delta)}]$. Encoding is performed in $B+2V-3$ blocks where in block $b$, we use the codebook ${\mathcal C}_v(b\mod V)$. For $1\leq b\leq B+2V-3$, define: \[ w_{v,[b]}= \left\{ \begin{array}{ll} m_{v,[b-V+1]} &, V\le b\le B+V-1\\ 1 & ,\mbox{otherwise}. \end{array}\right. \] \par In block $1$, a default codeword, $\mathbf{x}_v(1,1)$ is transmitted. In block $b>1$, knowing $z_{v,[b-1]}$ from Wyner-Ziv coding at the end of block $b-1$ (described below), node $v$ transmits $\mathbf{x}_v(w_{v,[b]},z_{v,[b-1]})$. 
\subsubsection*{Wyner-Ziv coding} At the end of block $b$, node $v$ knows $(\mathbf{x}_{v,[b-1]},\mathbf{y}_{v,[b-1]})$ and declares that $z_{v,[b-1]}=z_v$ is received if $z_{v}$ is the smallest index such that $(\mathbf{\hat{y}}_{v,[b-1]}(z_{v}|\mathbf{x}_{v,[b-1]}),\mathbf{x}_{v,[b-1]},\mathbf{y}_{v,[b-1]})$ are jointly typical. Since we have more than $2^{nI(Y_v;\hat{Y}_v|X_v)}$ codewords, such a $z_v$ exists with high probability. (See Table \ref{ta:enc} which illustrates encoding for a network with four nodes in which node $4$ is only a destination, i.e., ${\mathcal U}_4={\mathcal X}_4=\emptyset$.) \begin{table*} \centering \caption{Encoding Scheme for Multicasting of two blocks of source sequences over a network with ${\mathcal V}=\{1,2,3,4\}$, ${\mathcal A}=\{1,2,3\}$, ${\mathcal D}=\{3,4\}$ and node $4$ has no channel input, i.e., ${\mathcal U}_4={\mathcal X}_4=\emptyset$.} \label{ta:enc} \vspace{-0.6cm} \resizebox{\textwidth}{!} {% \begin{tabular}[t]{|c|c|c|c|c|c|c|c|} \hline Node &Block 1& Block 2& Block 3& Block 4& Block 5& Block 6& Block 7 \\ \hline \hline & & & & $\mathbf{u}_1(m_{1[1]})$ & $\mathbf{u}_1(m_{1[2]})$ & & \\ 1 & $\mathbf{x}_1(1,1)$ & $\mathbf{x}_1(1,z_{1[1]})$ & $\mathbf{x}_1(1,z_{1[2]})$ & $\mathbf{x}_1(m_{1[1]},z_{1[3]})$ & $\mathbf{x}_1(m_{1[2]},z_{1[4]})$ & $\mathbf{x}_1(1,z_{1[5]})$ & $\mathbf{x}_1(1,z_{1[6]})$ \\ & $\mathbf{\hat{y}}_1(z_{1[1]}|\mathbf{x}_{1[1]})$ & $\mathbf{\hat{y}}_1(z_{1[2]}|\mathbf{x}_{1[2]})$ & $\mathbf{\hat{y}}_1(z_{1[3]}|\mathbf{x}_{1[3]})$ & $\mathbf{\hat{y}}_1(z_{1[4]}|\mathbf{x}_{1[4]})$ & $\mathbf{\hat{y}}_1(z_{1[5]}|\mathbf{x}_{1[5]})$ & $\mathbf{\hat{y}}_1(z_{1[6]}|\mathbf{x}_{1[6]})$ & $\mathbf{\hat{y}}_1(z_{1[7]}|\mathbf{x}_{1[7]})$\\ \hline & & & & $\mathbf{u}_2(m_{2[1]})$ & $\mathbf{u}_2(m_{2[2]})$ & & \\ 2 & $\mathbf{x}_2(1,1)$ & $\mathbf{x}_2(1,z_{2[1]})$ & $\mathbf{x}_2(1,z_{2[2]})$ & $\mathbf{x}_2(m_{2[1]},z_{2[3]})$ & $\mathbf{x}_2(m_{2[2]},z_{2[4]})$ & $\mathbf{x}_2(1,z_{2[5]})$ & 
$\mathbf{x}_2(1,z_{2[6]})$ \\ & $\mathbf{\hat{y}}_2(z_{2[1]}|\mathbf{x}_{2[1]})$ & $\mathbf{\hat{y}}_2(z_{2[2]}|\mathbf{x}_{2[2]})$ & $\mathbf{\hat{y}}_2(z_{2[3]}|\mathbf{x}_{2[3]})$ & $\mathbf{\hat{y}}_2(z_{2[4]}|\mathbf{x}_{2[4]})$ & $\mathbf{\hat{y}}_2(z_{2[5]}|\mathbf{x}_{2[5]})$ & $\mathbf{\hat{y}}_2(z_{2[6]}|\mathbf{x}_{2[6]})$ & $\mathbf{\hat{y}}_2(z_{2[7]}|\mathbf{x}_{2[7]})$\\ \hline & & & & $\mathbf{u}_3(m_{3[1]})$ & $\mathbf{u}_3(m_{3[2]})$ & & \\ 3 & $\mathbf{x}_3(1,1)$ & $\mathbf{x}_3(1,z_{3[1]})$ & $\mathbf{x}_3(1,z_{3[2]})$ & $\mathbf{x}_3(m_{3[1]},z_{3[3]})$ & $\mathbf{x}_3(m_{3[2]},z_{3[4]})$ & $\mathbf{x}_3(1,z_{3[5]})$ & $\mathbf{x}_3(1,z_{3[6]})$ \\ & $\mathbf{\hat{y}}_3(z_{3[1]}|\mathbf{x}_{3[1]})$ & $\mathbf{\hat{y}}_3(z_{3[2]}|\mathbf{x}_{3[2]})$ & $\mathbf{\hat{y}}_3(z_{3[3]}|\mathbf{x}_{3[3]})$ & $\mathbf{\hat{y}}_3(z_{3[4]}|\mathbf{x}_{3[4]})$ & $\mathbf{\hat{y}}_3(z_{3[5]}|\mathbf{x}_{3[5]})$ & $\mathbf{\hat{y}}_3(z_{3[6]}|\mathbf{x}_{3[6]})$ & $\mathbf{\hat{y}}_3(z_{3[7]}|\mathbf{x}_{3[7]})$ \\ \hline \end{tabular}} \end{table*} \subsubsection*{Decoding at node $d_i$} Let $\mathbf{C}^{(d_i)}=[{\mathcal L}_1,\cdots,{\mathcal L}_{\ell}]$ be an ordered partition of the set ${\mathcal V}_{-d_i}={\mathcal V}\backslash\{d_i\}$. We propose a sliding window decoding with respect to $\mathbf{C}^{(d_i)}$. Define $s_{v,[b]}=(w_{v,[b]},z_{v,[b-1]})$. Suppose that $(s_{{\mathcal L}_1,[b-1]},s_{{\mathcal L}_2,[b-2]},\cdots,s_{{\mathcal L}_\ell,[b-\ell]})$ have been correctly decoded at the end of block $b-1$. 
Node $d_i$ declares that $(\hat{s}_{{\mathcal L}_1,[b]},\cdots,\hat{s}_{{\mathcal L}_{\ell},[b-\ell+1]})$ has been sent if it is the unique tuple that satisfies the following conditions for each $1\le k\le\ell+1$: \begin{small} \begin{equation} \label{eq:typ} \begin{array}{lc} \begin{split} \Big(\mathbf{x}_{{\mathcal L}_k}(\hat{s}_{{\mathcal L}_k,[b-k+1]}),\mathbf{\hat{y}}_{{\mathcal L}_{k-1}}(\hat{z}_{{\mathcal L}_{k-1},[b-k+1]}|\mathbf{x}_{{\mathcal L}_{k-1},[b-k+1]}),\mathbf{x}_{{\mathcal L}^k,[b-k+1]},&\\ \mathbf{\hat{y}}_{{\mathcal L}^{k-1},[b-k+1]},\mathbf{y}_{d_i,[b-k+1]},\mathbf{x}_{d_i,[b-k+1]}\Big)\in\mathit{T}_{\epsilon}^n,&\quad\mbox{for all $k$ such that $k\le b$} \\ (\mathbf{u}_{{\mathcal L}_k}(\hat{w}_{{\mathcal L}_k,[b-k+1]}),\mathbf{u}_{{\mathcal L}^k}(w_{{\mathcal L}^k,[b-k+1]}), \mathbf{u}_{d_i}(w_{d_i,[b-k+1]}))\in\mathit{T}_{\epsilon}^n,&\quad \mbox{for all $k$ such that $V\le b-k+1\le V+B-1$} \end{split} \end{array} \end{equation} \end{small} \noindent where $\hat{s}_{{\mathcal L}_k,[b-k+1]}=(\hat{w}_{{\mathcal L}_k,[b-k+1]},\hat{z}_{{\mathcal L}_k,[b-k]})$. Note that at the end of block $b+V+\ell-2$, the vector $w_{{\mathcal A},[b+V-1]}=m_{{\mathcal A},[b]}$ is decoded. Since each $(\mathbf{u}_{v,[b]}:v\in{\mathcal A})$ is jointly typical with high probability, we find the source sequence $\mathbf{u}_{{\mathcal A},[b]}$ with small probability of error. Hence at the end of block $B+V+\ell-2$, $u_{{\mathcal A}}^{nB}$ is decoded with small probability of error. \par Note that in the first $V-1$ blocks of decoding, no source information is decoded. The advantage of decoding the compressed observations in these blocks is to provide side information at the receiver, which improves decoding in the next blocks.
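The sliding-window schedule above can be stated compactly: at the end of block $b$, layer ${\mathcal L}_k$ contributes the message index of block $b-k+1$ and the compression index of block $b-k$, with informative message indices occurring only for $V\le b-k+1\le B+V-1$. The following Python sketch (illustrative only; the string encoding of the indices is ours) reproduces the decoding schedule of Table \ref{ta:dec} for the four-node example with $\mathbf{C}^{(4)}=[\{1,2\},\{3\}]$:

```python
from typing import List, Set

def decode_schedule(partition: List[Set[int]], b: int, V: int, B: int):
    """Unknown indices decoded at the destination at the end of block b for
    the sliding-window decoder with ordered partition [L_1,...,L_ell]:
    layer L_k contributes (w_{L_k,[b-k+1]}, z_{L_k,[b-k]}).  The message
    w_{v,[j]} is informative (equals m_{v,[j-V+1]}) only for V <= j <= B+V-1;
    outside that range it is the known default index and is not decoded."""
    out = []
    for k, L in enumerate(partition, start=1):
        j = b - k + 1                     # block index of the w-component
        if j >= 1:
            if V <= j <= B + V - 1:
                out.append((sorted(L), f"m[{j - V + 1}]"))
            if j - 1 >= 1:
                out.append((sorted(L), f"z[{j - 1}]"))
    return out

# End of block 5 in the four-node example with C^{(4)} = [{1,2},{3}]:
print(decode_schedule([{1, 2}, {3}], b=5, V=4, B=2))
```

For instance, at the end of block $5$ the decoder obtains $\hat{m}_{\{1,2\},[2]}$, $\hat{z}_{\{1,2\},[4]}$, $\hat{m}_{3,[1]}$ and $\hat{z}_{3,[3]}$, matching the corresponding column of Table \ref{ta:dec}.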
\begin{table*} \centering \caption{Illustration of decoding scheme of four-node network depicted in Table \ref{ta:enc}, at node $4$ with respect to the ordered partition $\mathbf{C}^{(4)}=[\{1,2\},\{3\}]$ at the end of blocks $2$ and $5$. The gray cells highlight the random variables corresponding to the unknown indices and the yellow cells highlight the random variables available at decoder which will be used for decoding of the unknown indices through a joint typicality condition between them and the gray random variables.} \label{ta:dec} \vspace{-0.6cm} \resizebox{\textwidth}{!} {% \begin{tabular}[t]{|c|c|c||c|c|c|c|c|} \hline Node &Block 1& Block 2& Block 3& Block 4& Block 5& Block 6& Block 7 \\ \hline \hline & & & &\cellcolor{gray!20!yellow} $\mathbf{u}_1(m_{1[1]})$ & \cellcolor{gray!30}$\mathbf{u}_1(m_{1[2]})$ & & \\ 1 &\cellcolor{gray!20!yellow} $\mathbf{x}_1(1,1)$ & $\cellcolor{gray!30}\mathbf{x}_1(1,z_{1[1]})$&\cellcolor{gray!20!yellow} $\mathbf{x}_1(1,z_{1[2]})$ & \cellcolor{gray!20!yellow}$\mathbf{x}_1(m_{1[1]},z_{1[3]})$ & \cellcolor{gray!30}$\mathbf{x}_1(m_{1[2]},z_{1[4]})$ & $\mathbf{x}_1(1,z_{1[5]})$ & $\mathbf{x}_1(1,z_{1[6]})$ \\ & \cellcolor{gray!30}$\mathbf{\hat{y}}_1(z_{1[1]}|\mathbf{x}_{1[1]})$ & $\mathbf{\hat{y}}_1(z_{1[2]}|\mathbf{x}_{1[2]})$ &\cellcolor{gray!20!yellow} $\mathbf{\hat{y}}_1(z_{1[3]}|\mathbf{x}_{1[3]})$ & \cellcolor{gray!30}$\mathbf{\hat{y}}_1(z_{1[4]}|\mathbf{x}_{1[4]})$ & $\mathbf{\hat{y}}_1(z_{1[5]}|\mathbf{x}_{1[5]})$ & $\mathbf{\hat{y}}_1(z_{1[6]}|\mathbf{x}_{1[6]})$ & $\mathbf{\hat{y}}_1(z_{1[7]}|\mathbf{x}_{1[7]})$\\ \hline & & & & \cellcolor{gray!20!yellow}$\mathbf{u}_2(m_{2[1]})$ &\cellcolor{gray!30} $\mathbf{u}_2(m_{2[2]})$ & & \\ 2 &\cellcolor{gray!20!yellow} $\mathbf{x}_2(1,1)$ & \cellcolor{gray!30}$\mathbf{x}_2(1,z_{2[1]})$ &\cellcolor{gray!20!yellow} $\mathbf{x}_2(1,z_{2[2]})$ & \cellcolor{gray!20!yellow}$\mathbf{x}_2(m_{2[1]},z_{2[3]})$ &\cellcolor{gray!30} $\mathbf{x}_2(m_{2[2]},z_{2[4]})$ & 
$\mathbf{x}_2(1,z_{2[5]})$ & $\mathbf{x}_2(1,z_{2[6]})$ \\ & \cellcolor{gray!30}$\mathbf{\hat{y}}_2(z_{2[1]}|\mathbf{x}_{2[1]})$ & $\mathbf{\hat{y}}_2(z_{2[2]}|\mathbf{x}_{2[2]})$ &\cellcolor{gray!20!yellow} $\mathbf{\hat{y}}_2(z_{2[3]}|\mathbf{x}_{2[3]})$ & \cellcolor{gray!30}$\mathbf{\hat{y}}_2(z_{2[4]}|\mathbf{x}_{2[4]})$ & $\mathbf{\hat{y}}_2(z_{2[5]}|\mathbf{x}_{2[5]})$ & $\mathbf{\hat{y}}_2(z_{2[6]}|\mathbf{x}_{2[6]})$ & $\mathbf{\hat{y}}_2(z_{2[7]}|\mathbf{x}_{2[7]})$\\ \hline & & & & \cellcolor{gray!30}$\mathbf{u}_3(m_{3[1]})$ & $\mathbf{u}_3(m_{3[2]})$ & & \\ 3 & $\mathbf{x}_3(1,1)$ & $\mathbf{x}_3(1,z_{3[1]})$ &\cellcolor{gray!20!yellow} $\mathbf{x}_3(1,z_{3[2]})$ & \cellcolor{gray!30}$\mathbf{x}_3(m_{3[1]},z_{3[3]})$ & $\mathbf{x}_3(m_{3[2]},z_{3[4]})$ & $\mathbf{x}_3(1,z_{3[5]})$ & $\mathbf{x}_3(1,z_{3[6]})$ \\ & $\mathbf{\hat{y}}_3(z_{3[1]}|\mathbf{x}_{3[1]})$ & $\mathbf{\hat{y}}_3(z_{3[2]}|\mathbf{x}_{3[2]})$ &\cellcolor{gray!30} $\mathbf{\hat{y}}_3(z_{3[3]}|\mathbf{x}_{3[3]})$ & $\mathbf{\hat{y}}_3(z_{3[4]}|\mathbf{x}_{3[4]})$ & $\mathbf{\hat{y}}_3(z_{3[5]}|\mathbf{x}_{3[5]})$ & $\mathbf{\hat{y}}_3(z_{3[6]}|\mathbf{x}_{3[6]})$ & $\mathbf{\hat{y}}_3(z_{3[7]}|\mathbf{x}_{3[7]})$ \\ \hline & &&& \cellcolor{gray!20!yellow}$\mathbf{u}_{4[4]}$ & \cellcolor{gray!20!yellow}$\mathbf{u}_{4[5]}$ & &\\ 4& $\cellcolor{gray!20!yellow}\mathbf{y}_{4[1]}$ & \cellcolor{gray!20!yellow}$\mathbf{y}_{4[2]}$ &\cellcolor{gray!20!yellow} $\mathbf{y}_{4[3]}$ & \cellcolor{gray!20!yellow}$\mathbf{y}_{4[4]}$ & \cellcolor{gray!20!yellow}$\mathbf{y}_{4[5]}$ & $\mathbf{y}_{4[6]}$ & $\mathbf{y}_{4[7]}$\\ \hline Decoding& & & & $\hat{m}_{\{1,2\},[1]}$&\cellcolor{gray!30}$\hat{m}_{\{1,2\},[2]}$,$\hat{m}_{3,[1]}$ &$\hat{m}_{3,[2]}$ &\\ at node 4& $\emptyset$& 
\cellcolor{gray!30}$\hat{z}_{\{1,2\},[1]}$&$\hat{z}_{\{1,2\},[2]}$,$\hat{z}_{3,[1]}$&$\hat{z}_{\{1,2\},[3]}$,$\hat{z}_{3,[2]}$&\cellcolor{gray!30}$\hat{z}_{\{1,2\},[4]}$,$\hat{z}_{3,[3]}$&$\hat{z}_{\{1,2\},[5]}$,$\hat{z}_{3,[4]}$&$\hat{z}_{\{1,2\},[6]}$,$\hat{z}_{3,[5]}$\\ \hline \end{tabular}} \end{table*} \begin{example} Consider the four-node network of Table \ref{ta:enc}. Here we assume that node $4$ observes source $U_4$ correlated with the other sources. Let $\mathbf{C}^{(4)}=[\{1,2\},\{3\}]$. Decoding at node $4$ begins at the end of block $2$. In block $2$, node $4$ declares that $\hat{z}_{\{1,2\},[1]}$ is decoded if $(\mathbf{x}_1(1,\hat{z}_{1,[1]}),\mathbf{x}_2(1,\hat{z}_{2,[1]}),\mathbf{y}_{4[2]})$ and \\ $(\mathbf{\hat{y}}_1(\hat{z}_{1,[1]}|\mathbf{x}_{1[1]}),\mathbf{\hat{y}}_2(\hat{z}_{2,[1]}|\mathbf{x}_{2[1]}),\mathbf{x}_{1[1]}(1,1),\mathbf{x}_{2[1]}(1,1),\mathbf{y}_{4[1]})$ are jointly typical. In the next block, $(\hat{z}_{\{1,2\},[2]},\hat{z}_{3,[1]})$ are decoded and in block $b$, $(\hat{w}_{\{1,2\},[b]},\hat{z}_{\{1,2\},[b-1]},\hat{w}_{3,[b-1]},\hat{z}_{3,[b-2]})$ are decoded, if (See Table \ref{ta:dec}) \begin{align} (\mathbf{u}_{\{1,2\}}(\hat{w}_{\{1,2\},[b]}),\mathbf{u}_{4}(w_{4[b]}))&\in\mathit{T}_{\epsilon}^n\\ (\mathbf{x}_{\{1,2\}}(\hat{w}_{\{1,2\},[b]},\hat{z}_{\{1,2\},[b-1]}),\mathbf{y}_{4[b]})&\in\mathit{T}_{\epsilon}^n\\ (\mathbf{u}_{3}(\hat{w}_{3,[b-1]}),\mathbf{u}_{\{1,2\},[b-1]},\mathbf{u}_{4}(w_{4[b-1]}))&\in\mathit{T}_{\epsilon}^n\\ (\mathbf{x}_{3}(\hat{w}_{3,[b-1]},\hat{z}_{3,[b-2]}),\mathbf{\hat{y}}_{\{1,2\}}(\hat{z}_{\{1,2\},[b-1]}|\mathbf{x}_{\{1,2\}[b-1]}),\mathbf{x}_{\{1,2\}[b-1]},\mathbf{y}_{4[b-1]})&\in\mathit{T}_{\epsilon}^n\\ (\mathbf{\hat{y}}_{3}(\hat{z}_{3,[b-2]}|\mathbf{x}_{3[b-2]}),\mathbf{x}_{\{1,2,3\}[b-2]},\mathbf{\hat{y}}_{\{1,2\}[b-2]},\mathbf{y}_{4[b-2]})&\in\mathit{T}_{\epsilon}^n \end{align} \end{example} \subsubsection*{ Error Probability Analysis} Let $\mathbf{U}_{v,[b-V+1]}$ be the observed sequence at node 
$v$, which is used for encoding in block $b$. We bound the error probability of decoding at the end of block $b$ averaged over $(\mathbf{U}_{{\mathcal A}[b-V+1]},\mathbf{U}_{{\mathcal A}[b-V]},\cdots,\mathbf{U}_{{\mathcal A}[b-\ell-V+2]})$ and all random codebooks, assuming that no error occurred in the decoding of the previous blocks. Let $S_{v[j]}=(W_{v[j]},Z_{v[j-1]})$, in which $W_{v[j]}$ and $Z_{v[j-1]}$ are the indices of $\mathbf{U}_{v,[b-V+1]}$ and $\mathbf{\hat{Y}}_{v,[b-1]}$, respectively. Define $\mathbf{S}_b=(S_{{\mathcal L}_1[b]},\cdots,S_{{\mathcal L}_{\ell}[b-\ell+1]})$. Also, let $\mathbf{s}=(s_{{\mathcal L}_1},\cdots,s_{{\mathcal L}_{\ell}})$, in which $s_v=(w_v,z_v):w_v\in[1,2^{n(H(U_v)+\delta)}],z_v\in[1,2^{n(I(Y_v;\hat{Y}_v|X_v)+\delta)}]$. Define the events, \begin{align} {\mathcal E}_0(b,k)&:=\{(\mathbf{U}_{{\mathcal L}_k,[b-k-V+2]},\mathbf{U}_{{\mathcal L}^k,[b-k-V+2]},\mathbf{U}_{d_i,[b-k-V+2]})\notin\mathit{T}_{\epsilon}^n\} \nonumber\\ {\mathcal E}_1(b,k,v)&:=\{(\mathbf{X}_{v,[b-k+1]},\mathbf{Y}_{v,[b-k+1]},\mathbf{\hat{Y}}_{v}(z_v|\mathbf{X}_{v,[b-k+1]}))\notin\mathit{T}_{\epsilon'}^n,\ \mbox{for all $z\in[1,2^{n(I(Y_v;\hat{Y}_v)+\delta)}]$}\}\nonumber\\ {\mathcal E}_2(b,k,\mathbf{s})&:=\{(\mathbf{u}_{{\mathcal L}_k}(w_{{\mathcal L}_k}),\mathbf{U}_{{\mathcal L}^k,[b-k-V+2]},\mathbf{U}_{d_i,[b-k-V+2]})\in\mathit{T}_{\epsilon}^n\}\nonumber\\ {\mathcal E}_3(b,k,\mathbf{s})&:=\{(\mathbf{X}_{{\mathcal L}_k}(s_{{\mathcal L}_k}),\mathbf{\hat{Y}}_{{\mathcal L}_{k-1}}(z_{{\mathcal L}_{k-1}}|\mathbf{X}_{{\mathcal L}_{k-1}[b-k+1]}),\mathbf{X}_{{\mathcal L}^k,[b-k+1]}, \mathbf{\hat{Y}}_{{\mathcal L}^{k-1},[b-k+1]},\mathbf{Y}_{d_i,[b-k+1]},\mathbf{X}_{d_i,[b-k+1]})\in\mathit{T}_{\epsilon}^n\}. 
\end{align} Then the error event ${\mathcal E}(b)$ corresponding to decoding at the end of block $b$ can be expressed as \[ {\mathcal E}(b)=\cup_{k=1}^{\ell+1}\big({\mathcal E}_0(b,k)\bigcup\cup_{v\in{\mathcal V}}{\mathcal E}_1(b,k,v)\bigcup{\mathcal E}_3^C(b,k,\mathbf{S}_b)\big)\bigcup\cup_{\mathbf{s}\neq\mathbf{S}_b}\big(\cap_{k=1}^{\ell+1}{\mathcal E}_2(b,k,\mathbf{s})\cap{\mathcal E}_3(b,k,\mathbf{s})\big).\] Using the union bound, we upper-bound the probability of error as follows: \begin{align} \mathbb{P}[{\mathcal E}(b)]&\le \mathbb{P}[\cup_{k=1}^{\ell+1}{\mathcal E}_0(b,k)]+\mathbb{P}[\cup_{k=1}^{\ell+1}\cup_{v\in{\mathcal V}}{\mathcal E}_1(b,k,v)]+\mathbb{P}[\cup_{k=1}^{\ell+1}({\mathcal E}_3^C(b,k,\mathbf{S}_b)\bigcap\cap_{v\in{\mathcal V}}{\mathcal E}_1^C(b,k,v))]+\nonumber\\ &\qquad \mathbb{P}[\cup_{\mathbf{s}\neq\mathbf{S}_b}\big(\cap_{k=1}^{\ell+1}{\mathcal E}_2(b,k,\mathbf{s})\cap{\mathcal E}_3(b,k,\mathbf{s})\big)]. \end{align} By the typicality lemma \cite[Theorem 1.1]{kramer:book}, the first term vanishes as $n\rightarrow\infty$; the second term vanishes since at each node $v$ and for each input $\mathbf{x}_{v,[b-k+1]}$, there are more than $2^{nI(Y_v;\hat{Y}_v|X_v)}$ codewords $\mathbf{\hat{y}}_v(z_v|\mathbf{x}_{v,[b-k+1]})$; and the third term vanishes by \cite[Theorem 1.2]{kramer:book}. For the last term, let ${\mathcal E}_1(b)=\cup_{\mathbf{s}\neq\mathbf{S}_b}\big(\cap_{k=1}^{\ell+1}{\mathcal E}_2(b,k,\mathbf{s})\cap{\mathcal E}_3(b,k,\mathbf{s})\big)$.
\begin{align} \mathbb{P}[{\mathcal E}_1(b)]&\le\sum_{\mathpalette\mathclapinternal{\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]},\mathbf{s}_b}}p(\mathbf{u}_{{\mathcal A}[b-\ell-V+2]})\cdots p(\mathbf{u}_{{\mathcal A}[b-V+1]})\mathbb{P}[\mathbf{S}_b=\mathbf{s}_b|\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]}]\nonumber\\ &\qquad\sum_{\mathbf{s}\neq\mathbf{s}_b}\mathbb{P}[\cap_{k=1}^{\ell+1}{\mathcal E}_2(b,k,\mathbf{s})|\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]}]\mathbb{P}[\cap_{k=1}^{\ell+1}{\mathcal E}_3(b,k,\mathbf{s})|\mathbf{S}_b=\mathbf{s}_b]\label{sal:1}\\ &=\sum_{\mathpalette\mathclapinternal{\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]},\mathbf{s}_b}}p(\mathbf{u}_{{\mathcal A}[b-\ell-V+2]})\cdots p(\mathbf{u}_{{\mathcal A}[b-V+1]})\mathbb{P}[\mathbf{S}_b=\mathbf{s}_b|\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]}]\nonumber\\ &\qquad\sum_{\mathbf{s}\neq\mathbf{s}_b}\prod_{k=1}^{\ell+1} \mathbb{P}[{\mathcal E}_2(b,k,\mathbf{s})|\mathbf{u}_{{\mathcal A}[b-k-V+2]}]\mathbb{P}[{\mathcal E}_3(b,k,\mathbf{s})|\mathbf{S}_b=\mathbf{s}_b]\label{sal:2}\\ &=\sum_{\mathpalette\mathclapinternal{\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]},\mathbf{s}_b}}p(\mathbf{u}_{{\mathcal A}[b-\ell-V+2]})\cdots p(\mathbf{u}_{{\mathcal A}[b-V+1]})\mathbb{P}[\mathbf{S}_b=\mathbf{s}_b|\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]}]\nonumber\\ &\qquad\sum_{\mathbf{s}\neq\mathbf{s}_b}\prod_{k=1}^{\ell+1} \mathbf{1}[(\mathbf{u}_{{\mathcal L}_k}(w_{{\mathcal L}_k}),\mathbf{u}_{{\mathcal L}^k,[b-k-V+2]},\mathbf{u}_{d_i,[b-k-V+2]})\in\mathit{T}_{\epsilon}^n]\mathbb{P}[{\mathcal E}_3(b,k,\mathbf{s})|\mathbf{S}_b=\mathbf{s}_b]\label{sal:3} \end{align} where \eqref{sal:1} follows from the fact that the codebook generation is independent of the sources $U_{{\mathcal A}}^{nB}$, 
\eqref{sal:2} follows since the codebooks used in any $\ell\le V$ consecutive blocks are generated independently and the sources are i.i.d., so the source sequences of consecutive blocks are generated independently; \eqref{sal:3} follows from the definition of ${\mathcal E}_2(b,k,\mathbf{s})$, in which $\mathbf{1}$ denotes the indicator function. Define, \begin{equation}\label{sal:def-mn}\begin{split} {\mathcal N}_{{\mathcal S},{\mathcal Z}}(\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]},\mathbf{s}_b)=\Big\{\mathbf{s}: s_t\neq s_{t,[b-k+1]}, z_{t'}\neq z_{t',[b-k]},\\ \mbox{for all}\ k\in[1,\ell+1],t\in{\mathcal S}\cap{\mathcal L}_k,t'\in{\mathcal Z}\cap{\mathcal L}_k, \ \mbox{and}\ s_{{\mathcal L}_k\backslash{\mathcal S}}=s_{{\mathcal L}_k\backslash{\mathcal S},[b-k+1]}, \mbox{for all $k\in[1,\ell]$}, \\ \mbox{and}\quad (\mathbf{u}_{{\mathcal L}_k}(w_{{\mathcal L}_k}),\mathbf{u}_{{\mathcal L}^k,[b-k-V+2]},\mathbf{u}_{d_i,[b-k-V+2]})\in\mathit{T}_{\epsilon}^n,\ \mbox{for all $k\in[1,\ell]$}\Big\}.
\end{split}\end{equation} Then, \eqref{sal:3} can be rewritten as, \begin{align} \mathbb{P}[{\mathcal E}_1(b)]&\le\sum_{\mathpalette\mathclapinternal{\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]},\mathbf{s}_b}}p(\mathbf{u}_{{\mathcal A}[b-\ell-V+2]})\cdots p(\mathbf{u}_{{\mathcal A}[b-V+1]})\mathbb{P}[\mathbf{S}_b=\mathbf{s}_b|\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]}]\nonumber\\ &\qquad \sum_{\emptyset\neq{\mathcal S}\subseteq{\mathcal V}_{-d_i}}\sum_{{\mathcal Z}\subseteq{\mathcal S}}\sum_{\mathbf{s}\in {\mathcal N}_{{\mathcal S},{\mathcal Z}}(\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]},\mathbf{s}_b)}\prod_{k=1}^{\ell+1}\mathbb{P}[{\mathcal E}_3(b,k,\mathbf{s})|\mathbf{S}_b=\mathbf{s}_b]\label{sal:5} \end{align} Define, \begin{equation}\label{sal:le} \mathbb{P}_{{\mathcal S},{\mathcal Z}}=\sum_{\mathbf{s}\in {\mathcal N}_{{\mathcal S},{\mathcal Z}}(\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]},\mathbf{s}_b)}\prod_{k=1}^{\ell+1}\mathbb{P}[{\mathcal E}_3(b,k,\mathbf{s})|\mathbf{S}_b=\mathbf{s}_b].\end{equation} Notice that there are $3^{|{\mathcal V}_{-d_i}|}$ pairs $({\mathcal S},{\mathcal Z})$ such that ${\mathcal Z}\subseteq{\mathcal S}\subseteq{\mathcal V}_{-d_i}$. Using this fact, $\mathbb{P}[{\mathcal E}_1(b)]$ is upper bounded by, \begin{equation} \mathbb{P}[{\mathcal E}_1(b)]\le 3^{|{\mathcal V}_{-d_i}|}\max_{{\mathcal Z}\subseteq{\mathcal S}\subseteq{\mathcal V}_{-d_i}}\mathbb{P}_{{\mathcal S},{\mathcal Z}}. \end{equation} Therefore to show that $\mathbb{P}[{\mathcal E}_1(b)]$ vanishes as $n\rightarrow\infty$, it suffices to show that each of $\mathbb{P}_{{\mathcal S},{\mathcal Z}}$ vanishes as $n\rightarrow\infty$. To bound above the probability $\mathbb{P}_{{\mathcal S},{\mathcal Z}}$, we use the following lemmas which provide an upper bound on the probability inside the last summation. 
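The count of pairs used above has a simple combinatorial explanation: each element of ${\mathcal V}_{-d_i}$ independently lies outside ${\mathcal S}$, in ${\mathcal S}\backslash{\mathcal Z}$, or in ${\mathcal Z}$, giving three choices per element. The brute-force sketch below (an illustrative Python snippet, not part of the proof; the function names are ours) confirms the $3^{|{\mathcal V}_{-d_i}|}$ count:

```python
from itertools import chain, combinations

def subsets(xs):
    """Enumerate all subsets of xs."""
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def count_nested_pairs(n):
    """Brute-force count of pairs (S, Z) with Z subset of S subset of V, |V| = n."""
    V = list(range(n))
    return sum(1 for S in subsets(V) for Z in subsets(S))

# each element is either outside S, in S \ Z, or in Z: three choices apiece
for n in range(7):
    assert count_nested_pairs(n) == 3 ** n
```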
\begin{lemma}\label{le:9} For each $\mathbf{s}\in {\mathcal N}_{{\mathcal S},{\mathcal Z}}(\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]},\mathbf{s}_b)$, we have \begin{equation*} \mathbb{P}[{\mathcal E}_3(b,k,\mathbf{s})|\mathbf{S}_b=\mathbf{s}_b]\stackrel{.}{\le} 2^{-n\beta_{{\mathcal S},{\mathcal Z}}(k)}, \end{equation*} where $\beta_{{\mathcal S},{\mathcal Z}}(k)$ is given by \begin{equation}\label{sal:simp1} \beta_{{\mathcal S},{\mathcal Z}}(k)=\sum_{t\in{\mathcal S}\cap{\mathcal L}_k}H(X_t)+\sum_{t'\in{\mathcal Z}\cap{\mathcal L}_{k-1}}H(\hat{Y}_{t'}|X_{t'})-H(X_{{\mathcal S}\cap{\mathcal L}_k}\hat{Y}_{{\mathcal Z}\cap{\mathcal L}_{k-1}}|X_{{\mathcal S}^C\cap{\mathcal L}_k}\hat{Y}_{{\mathcal Z}^C\cap{\mathcal L}_{k-1}}X_{{\mathcal L}^k}\hat{Y}_{{\mathcal L}^{k-1}}X_{d_i}Y_{d_i}). \end{equation} \end{lemma} \begin{proof} See Appendix \ref{app:5}. \end{proof} \begin{lemma}\label{le:7} For each $(\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]},\mathbf{s}_b)$, we have \begin{equation*} {\mathcal N}_{{\mathcal S},{\mathcal Z}}(\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]},\mathbf{s}_b)\stackrel{.}{\le} 2^{n(\sum_{k=1}^{\ell}H(U_{{\mathcal S}\cap{\mathcal L}_k}|U_{{\mathcal L}_k\backslash{\mathcal S}}U_{{\mathcal L}^k}U_{d_i})+\sum_{t\in{\mathcal Z}}I(Y_t;\hat{Y}_t|X_t))}. \end{equation*} \end{lemma} \begin{proof} See Appendix \ref{app:6}. \end{proof} Applying Lemma \ref{le:9} and Lemma \ref{le:7} to \eqref{sal:le} yields, \begin{equation} \mathbb{P}_{{\mathcal S},{\mathcal Z}}\stackrel{.}{\le} 2^{-n(\sum_{k=1}^{\ell+1}\beta_{{\mathcal S},{\mathcal Z}}(k)-\sum_{k=1}^{\ell}H(U_{{\mathcal S}\cap{\mathcal L}_k}|U_{{\mathcal L}_k\backslash{\mathcal S}}U_{{\mathcal L}^k}U_{d_i})-\sum_{t'\in{\mathcal Z}}I(Y_{t'};\hat{Y}_{t'}|X_{t'}))}. 
\end{equation} Thus $\mathbb{P}_{{\mathcal S},{\mathcal Z}}$ vanishes as $n\rightarrow\infty$, provided that \begin{equation}\label{sal:simp2} \sum_{k=1}^{\ell+1}\beta_{{\mathcal S},{\mathcal Z}}(k)-\sum_{k=1}^{\ell}H(U_{{\mathcal S}\cap{\mathcal L}_k}|U_{{\mathcal L}_k\backslash{\mathcal S}}U_{{\mathcal L}^k}U_{d_i})-\sum_{t'\in{\mathcal Z}}I(Y_{t'};\hat{Y}_{t'}|X_{t'})> 0. \end{equation} Substituting \eqref{sal:simp1} in \eqref{sal:simp2}, simplifies it as follows, \begin{align} 0&<\sum_{t\in{\mathcal S}}H(X_t)+\sum_{t'\in{\mathcal Z}}H(\hat{Y}_t|X_tY_t)-\sum_{k=1}^{\ell+1}\left(H(X_{{\mathcal S}\cap{\mathcal L}_k}\hat{Y}_{{\mathcal Z}\cap{\mathcal L}_{k-1}}|X_{{\mathcal L}_k\backslash{\mathcal S}}\hat{Y}_{{\mathcal L}_{k-1}\backslash{\mathcal Z}}X_{{\mathcal L}^{k}}\hat{Y}_{{\mathcal L}^{k-1}}X_{d_i}Y_{d_i})\right.\nonumber\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\left.+H(U_{{\mathcal S}\cap{\mathcal L}_k}|U_{{\mathcal L}_k\backslash{\mathcal S}}U_{{\mathcal L}^k}U_{d_i})\right)\label{sal:simp}\\ &=\sum_{k=1}^{\ell+1}\left(H(X_{{\mathcal S}\cap{\mathcal L}_k})+H(\hat{Y}_{{\mathcal Z}\cap{\mathcal L}_{k-1}}|X_{{\mathcal Z}\cap{\mathcal L}_{k-1}}Y_{{\mathcal Z}\cap{\mathcal L}_{k-1}})-H(X_{{\mathcal S}\cap{\mathcal L}_k}\hat{Y}_{{\mathcal Z}\cap{\mathcal L}_{k-1}}|X_{{\mathcal L}_k\backslash{\mathcal S}}\hat{Y}_{{\mathcal L}_{k-1}\backslash{\mathcal Z}}X_{{\mathcal L}^{k}}\hat{Y}_{{\mathcal L}^{k-1}}Y_{d_i})\right.\nonumber\\ &\qquad\qquad\qquad\left.-H(U_{{\mathcal S}\cap{\mathcal L}_k}|U_{{\mathcal L}_k\backslash{\mathcal S}}U_{{\mathcal L}^k}U_{d_i})\right)\label{sal:7}\\ &= \sum_{k=1}^{\ell+1}\left(I(X_{{\mathcal S}\cap{\mathcal L}_k};Y_{d_i}\hat{Y}_{({\mathcal L}_{k-1}\backslash{\mathcal Z})\cup{\mathcal L}^{k-1}}|X_{d_i}X_{({\mathcal L}_k\backslash{\mathcal S})\cup {\mathcal L}^k})-I(Y_{{\mathcal Z}\cap{\mathcal L}_{k-1}};\hat{Y}_{{\mathcal Z}\cap{\mathcal L}_{k-1}}|X_{d_i}X_{{\mathcal L}^{k+1}}Y_{d_i}\hat{Y}_{({\mathcal 
L}_{k-1}\backslash{\mathcal Z})\cup{\mathcal L}^{k-1}})\right.\nonumber\\ &\qquad\qquad\left. -H(U_{{\mathcal S}\cap{\mathcal L}_k}|U_{{\mathcal L}_k\backslash{\mathcal S}}U_{{\mathcal L}^k}U_{d_i})\right)\label{sal:8} \end{align} where in \eqref{sal:7} and \eqref{sal:8}, we have used the fact that the $X_t$'s are independent and that $\hat{Y}_t$ given $(X_t,Y_t)$ is independent of all the other random variables. Now consider the RHS of \eqref{sal:8}. Since ${\mathcal Z}\subseteq{\mathcal S}$, it can easily be shown that the first term inside the summation takes its minimum for ${\mathcal Z}={\mathcal S}$, while the second term simultaneously takes its maximum for ${\mathcal Z}={\mathcal S}$. On the other hand, ${\mathcal Z}={\mathcal S}$ corresponds to the probability $\mathbb{P}_{{\mathcal S},{\mathcal S}}$. Hence if $\mathbb{P}_{{\mathcal S},{\mathcal S}}$ vanishes, then all $\mathbb{P}_{{\mathcal S},{\mathcal Z}}:{\mathcal Z}\subseteq{\mathcal S}$ vanish as $n\rightarrow\infty$. Therefore $\mathbb{P}[{\mathcal E}_1(b)]$ vanishes if all $\mathbb{P}_{{\mathcal S},{\mathcal S}}$ (${\mathcal S}\subseteq{\mathcal V}_{-d_i}$) vanish. Finally, substituting ${\mathcal Z}={\mathcal S}$ in \eqref{sal:simp} results in \eqref{eq:sufficient}, which completes the proof of Lemma \ref{le:first}. \par \begin{remark} If there is only a single destination, Lemma \ref{le:first} can be proved using the offset encoding scheme of \cite{xie} and \cite{kramer2005}, which incurs less delay than the proposed encoding scheme. In general, however, since the ordered partitions required for reliable decoding differ from one receiver to another, a single offset encoding scheme cannot serve all the destinations. This explains why the encoding scheme does not transmit any information in the first $V-1$ blocks.
\end{remark} \subsection{Removing additional constraints} \label{sub:c} This subsection shows that, for each $d_i$, the constraints in \eqref{eq:adi} can be reduced to its first term. A special case of this claim, for the single relay channel, was studied in \cite{kang:itw}. We prove the claim by induction on $|{\mathcal V}_{-d_i}|$. For $|{\mathcal V}_{-d_i}|=1$, the claim holds trivially. Now suppose the claim holds for all $k<|{\mathcal V}_{-d_i}|$. For each ${\mathcal Z}\subseteq{\mathcal V}$ containing $d_i$ and each ${\mathcal S}\subseteq{\mathcal Z}\backslash\{d_i\}$, let \[h^{(d_i)}_{{\mathcal Z}}({\mathcal S})=R^{(d_i)}_{{\mathcal S}} - H(\hat{Y}_{{\mathcal S}}X_{{\mathcal S}}|X_{{\mathcal Z}\backslash{\mathcal S}}\hat{Y}_{{\mathcal Z}\backslash({\mathcal S}\cup\{d_i\})}Y_{d_i}).\] \par Assume there exists a subset ${\mathcal T}$ of ${\mathcal A}^C\backslash\{d_i\}$ such that $h^{(d_i)}_{{\mathcal V}}({\mathcal T})<0$. For each ${\mathcal W}\subseteq{\mathcal V}_{-d_i}$, observe that \begin{IEEEeqnarray}{rLl} h^{(d_i)}_{{\mathcal V}}({\mathcal W}\cup{\mathcal T})&=&h^{(d_i)}_{{\mathcal V}}({\mathcal T})+R^{(d_i)}_{{\mathcal W}\backslash{\mathcal T}}- H(\hat{Y}_{{\mathcal W}}X_{{\mathcal W}}|X_{{\mathcal W}^C\backslash{\mathcal T}}\hat{Y}_{{\mathcal W}^C\backslash({\mathcal T}\cup\{d_i\})}Y_{d_i}) \nonumber\\ &<&R^{(d_i)}_{{\mathcal W}\backslash{\mathcal T}}-H(\hat{Y}_{{\mathcal W}}X_{{\mathcal W}}|X_{{\mathcal W}^C\backslash{\mathcal T}}\hat{Y}_{{\mathcal W}^C\backslash({\mathcal T}\cup\{d_i\})}Y_{d_i}) \nonumber\\ &\le&R^{(d_i)}_{{\mathcal W}}-H(\hat{Y}_{{\mathcal W}}X_{{\mathcal W}}|X_{{\mathcal W}^C}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}Y_{d_i}) \nonumber\\ \label{eq:symp} &=&h^{(d_i)}_{{\mathcal V}}({\mathcal W}) \end{IEEEeqnarray} Using \eqref{eq:symp}, \eqref{eq:adi1} can be simplified as follows: \begin{IEEEeqnarray}{rCl} H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})&\le& \min_{{\mathcal V}\supset{\mathcal
W}\supseteq{\mathcal S}:\atop d_i\in{\mathcal W}^C} h^{(d_i)}_{{\mathcal V}}({\mathcal W})\nonumber\\ &\stackrel{(a)}{=}&\min_{{\mathcal V}\supset{\mathcal W}\supseteq{\mathcal S}:\atop d_i\in{\mathcal W}^C} h^{(d_i)}_{{\mathcal V}}({\mathcal W}\cup{\mathcal T})\nonumber\\ &\stackrel{(b)}{\le}&\min_{{\mathcal V}\supset{\mathcal W}\supseteq{\mathcal S}:\atop d_i\in{\mathcal W}^C}h^{(d_i)}_{{\mathcal V}\backslash{\mathcal T}}({\mathcal W}\backslash{\mathcal T})\nonumber\\ \label{eq:fin} &=& \min_{{\mathcal V}\backslash{\mathcal T}\supset{\mathcal W}\supseteq{\mathcal S}:\atop d_i\in{\mathcal W}^C}h^{(d_i)}_{{\mathcal V}\backslash{\mathcal T}}({\mathcal W}) \end{IEEEeqnarray} where (a) follows from \eqref{eq:symp}, since ${\mathcal S}\subset{\mathcal W}\cup{\mathcal T}$ and $d_i\notin{\mathcal T}$, and (b) follows from the first inequality in \eqref{eq:symp}.\par Now by the induction assumption, the last term of \eqref{eq:fin} corresponds to the feasibility constraints for reliable transmission of $U_{{\mathcal A}}$ to node $d_i$ over the cooperative network with node set ${\mathcal V}\backslash{\mathcal T}$. Hence node $d_i$ can decode $U_{{\mathcal A}}$ by treating $(X_{{\mathcal T}},\hat{Y}_{{\mathcal T}})$ as noise. Note that, compared with an encoding/decoding scheme designed for the cooperative network with node set ${\mathcal V}\backslash{\mathcal T}$, the proposed encoding scheme only incurs additional delay. Therefore, the encoding scheme needs no changes, and decoding is performed with respect to the cooperative network with node set ${\mathcal V}\backslash{\mathcal T}$.
This proves our claim. \section{Slepian-Wolf coding over some classes of cooperative networks}\label{sec:6} In this section, we derive some corollaries of Proposition \ref{pro:ob} and Theorem \ref{thm:sw} for semi-deterministic networks, Aref networks, and linear finite-field and state-dependent deterministic networks, for which the conditions of Proposition \ref{pro:ob} and Theorem \ref{thm:sw} (partially) match. \begin{definition} A cooperative network with one destination $d$ is said to be \emph{semi-deterministic} if each node $v\in{\mathcal V}\backslash\{d\}$ observes a deterministic function of all the channel inputs and the destination channel output, i.e., $Y_v=f_v(X_{{\mathcal V}},Y_d)$. \end{definition} \begin{remark} The semi-deterministic cooperative network generalizes the semi-deterministic relay channel \cite{aref} and a class of deterministic relay channels recently defined in \cite{yhk}. \end{remark} \begin{definition} A cooperative network is said to be \emph{deterministic} if each node observes a deterministic function of all the channel inputs, i.e., $Y_v=f_v(X_{{\mathcal V}})$. \end{definition} \begin{definition} A deterministic network is said to be an \emph{Aref network} if each channel output $Y_v$ can be decomposed into $|{\mathcal V}|-1$ components $(Y_{v',v}:v'\in{\mathcal V}\backslash\{v\})$, where $Y_{v',v}$ is a deterministic function of $X_{v'}$. A semi-deterministic network with destination node $d$ is said to be a \emph{semi-deterministic Aref network} if each channel output $Y_v$ can be decomposed into $|{\mathcal V}|-1$ components $(Y_{v',v}:v'\in{\mathcal V}\backslash\{v\})$, where $Y_{v',v}$ is a deterministic function of $X_{v'}$ for $v\in{\mathcal V}_{-d}$ and $Y_{v',d}$ is a stochastic function of $X_{v'}$.
\end{definition} \begin{definition} A deterministic network is said to be a \emph{linear finite-field deterministic network}, if all the channel inputs and outputs lie in the same field $\mathbf{GF}(q)$ and each channel output can be expressed as a linear combination of all the channel inputs. The relation between the channel inputs and the channel outputs can be determined via a matrix product, $Y_{{\mathcal V}}=\mathbf{G}X_{{\mathcal V}}$, where $\mathbf{G}$ is called the channel matrix of the network. $\mathbf{G}_{{\mathcal T}_1,{\mathcal T}_2}$ is a sub-matrix obtained by deleting the rows and columns of $\mathbf{G}$ corresponding to ${\mathcal T}_1$ and ${\mathcal T}_2$, respectively. \end{definition} \begin{definition} A cooperative network is state-dependent (SD) \cite{yhk:isit09}, if there exists a set of states ${\mathcal S}$ such that the channel inputs and the channel outputs at each time are related via the current state of the network. An SD-cooperative network is said to be deterministic if each node observes a deterministic function of all the channel inputs and the state of the network, i.e., $Y_v=f_v(X_{{\mathcal V}},S)$. An SD-deterministic network is said to be an Aref network, if each channel output $Y_v$ can be decomposed into $|{\mathcal V}|-1$ components $(Y_{v',v}:v'\in{\mathcal V}\backslash\{v\})$, where $Y_{v',v}$ is a deterministic function of $(X_{v'},S)$. An SD-linear finite-field deterministic network is a network described by $Y_{{\mathcal V}}=\mathbf{G}(S)X_{{\mathcal V}}$, where $\mathbf{G}(S)$ is the matrix of coefficients corresponding to state $S$.
\end{definition} \begin{proposition} \label{pro:semi} The set of DMCS $U_{{\mathcal A}}$ can reliably be transmitted over a semi-deterministic network if there exists a random variable $Q$ such that for each ${\mathcal S}\subseteq{\mathcal V}$, we have: \begin{equation} \label{eq:semi} H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})<\min_{{\mathcal V}_{-d}\supseteq{\mathcal W}\supseteq{\mathcal S}}I(X_{{\mathcal W}};Y_{{\mathcal W}^C}|X_{{\mathcal W}^C}Q) \end{equation} where the joint p.m.f. of the random variables factors as $p(q)[\prod_{v\in{\mathcal V}}p(x_v|q)]p(y_{{\mathcal V}}|x_{{\mathcal V}})$.\\ Conversely, multicasting is feasible only if there exists a joint p.m.f. $p(x_{{\mathcal V}})$ such that \begin{equation} H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})<\min_{{\mathcal V}_{-d}\supseteq{\mathcal W}\supseteq{\mathcal S}}I(X_{{\mathcal W}};Y_{{\mathcal W}^C}|X_{{\mathcal W}^C}). \end{equation} \end{proposition} \begin{proposition} \label{pro:det} The set of DMCS $U_{{\mathcal A}}$ can reliably be multicast over a deterministic network if there exists a product distribution $\prod_{v\in{\mathcal V}}p(x_v)$ such that for each ${\mathcal S}\subseteq{\mathcal V}$, we have: \begin{equation} \label{eq:det} H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})<\min_{d_i\in{\mathcal D}}\min_{{\mathcal V}_{-d_i}\supseteq{\mathcal W}\supseteq{\mathcal S}}H(Y_{{\mathcal W}^C}|X_{{\mathcal W}^C}) \end{equation} Conversely, multicasting is feasible only if there exists a joint p.m.f. $p(x_{{\mathcal V}})$ such that \begin{equation} H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})<\min_{d_i\in{\mathcal D}}\min_{{\mathcal V}_{-d_i}\supseteq{\mathcal W}\supseteq{\mathcal S}}H(Y_{{\mathcal W}^C}|X_{{\mathcal W}^C}).
\end{equation} \end{proposition} \begin{remark} Comparing the direct and converse parts of Propositions \ref{pro:semi} and \ref{pro:det}, we see that the sufficient conditions partially match the necessary conditions; they match completely if the set of joint p.m.f.s in the converse part can be restricted to product distributions. \end{remark} \begin{proposition} \label{pro:aref} The set of DMCS $U_{{\mathcal A}}$ can reliably be multicast over an Aref network if and only if there exists a product distribution $\prod_{v\in{\mathcal V}}p(x_v)$ such that for each ${\mathcal S}\subseteq{\mathcal V}$, we have: \begin{equation} \label{eq:aref} H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})<\min_{d_i\in{\mathcal D}}\min_{{\mathcal V}_{-d_i}\supseteq{\mathcal W}\supseteq{\mathcal S}}\sum_{v\in{\mathcal W}}H(Y_{v,{\mathcal W}^C}) \end{equation} \end{proposition} \begin{remark} This proposition was partially proved in \cite{babu} for acyclic Aref networks.
\end{remark} \begin{proposition} \label{pro:semi-aref} The set of DMCS $U_{{\mathcal A}}$ can reliably be transmitted over a semi-deterministic Aref network if and only if there exists a product distribution $\prod_{v\in{\mathcal V}}p(x_v)$ such that for each ${\mathcal S}\subseteq{\mathcal V}$, we have: \begin{equation} \label{eq:semi-aref} H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})<\min_{{\mathcal V}_{-d}\supseteq{\mathcal W}\supseteq{\mathcal S}}\sum_{v\in{\mathcal W}}I(X_v;Y_{v,{\mathcal W}^C}) \end{equation} \end{proposition} \begin{proposition} \label{pro:finite} The set of DMCS $U_{{\mathcal A}}$ can reliably be multicast over a linear finite-field deterministic network if and only if \begin{equation} \label{eq:finite} H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})<\min_{d_i\in{\mathcal D}}\min_{{\mathcal V}_{-d_i}\supseteq{\mathcal W}\supseteq{\mathcal S}}\mbox{rank}(\mathbf{G}_{{\mathcal W},{\mathcal W}^C})\log q \end{equation} \end{proposition} Now consider SD-networks. In the sequel, assume that the state $S$ is an i.i.d. random process. \begin{proposition} \label{pro:state} For reliable multicasting over an SD-deterministic network, if all destinations have the state information $S$, then a sufficient condition is given by, \begin{equation} \label{eq:state} \forall{\mathcal S}\subseteq{\mathcal A}: H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})<\min_{d_i\in{\mathcal D}}\min_{{\mathcal V}_{-d_i}\supseteq{\mathcal W}\supseteq{\mathcal S}}H(Y_{{\mathcal W}^C}|X_{{\mathcal W}^C},S) \end{equation} Moreover, condition \eqref{eq:state} is a necessary condition for reliable multicasting over an SD-Aref network and an SD-linear finite-field deterministic network with state information available at the destinations.
In these cases, \eqref{eq:state} is simplified to, \begin{align*} \mbox{SD-Aref network}:& H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})<\min_{d_i\in{\mathcal D}}\min_{{\mathcal V}_{-d_i}\supseteq{\mathcal W}\supseteq{\mathcal S}}\sum_{v\in{\mathcal W}}H(Y_{v,{\mathcal W}^C}|S)\\ \mbox{SD-linear finite-field deterministic network}:& H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})<\min_{d_i\in{\mathcal D}}\min_{{\mathcal V}_{-d_i}\supseteq{\mathcal W}\supseteq{\mathcal S}}\mathbb{E}_{S}[\mbox{rank}(\mathbf{G}_{{\mathcal W},{\mathcal W}^C}(S))] \log q \end{align*} \end{proposition} \begin{proof}[Proof of Propositions 2-7] The direct parts of Propositions \ref{pro:semi} and \ref{pro:det} follow from Theorem \ref{thm:sw} by setting $\hat{Y}_v=Y_v$, because $(Y_{{\mathcal W}}: {\mathcal W}\subseteq{\mathcal V}_{-d})$ and $(Y_{{\mathcal W}}: {\mathcal W}\subseteq{\mathcal V})$ are deterministic functions of $(Y_{d},X_{{\mathcal V}})$ and $X_{{\mathcal V}}$, respectively. The converse parts of Propositions \ref{pro:semi} and \ref{pro:det} are direct consequences of Proposition \ref{pro:ob}. The direct part of Proposition \ref{pro:aref} follows from Proposition \ref{pro:det}, and the converse is deduced from Proposition \ref{pro:ob} as follows: \begin{align*} H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})&<H(Y_{{\mathcal W}^C}|X_{{\mathcal W}^C})\\ &\le H(\cup_{v\in{\mathcal W}}Y_{v,{\mathcal W}^C})\\ &\le \sum_{v\in{\mathcal W}}H(Y_{v,{\mathcal W}^C}) \end{align*} Now, since $Y_{v,{\mathcal W}^C}$ depends only on $X_v$, the last term in this chain of inequalities depends only on the marginal p.m.f.s of the random variables. Thus, we can restrict the set of joint p.m.f.s in Proposition \ref{pro:ob} to product distributions, which completes the proof of Proposition \ref{pro:aref}.
The direct part of Proposition \ref{pro:semi-aref} follows from Proposition \ref{pro:semi} and the converse part is obtained from Proposition \ref{pro:ob} as follows: \begin{align} H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})&<I(X_{{\mathcal W}};Y_{{\mathcal W},{\mathcal W}^C}Y_{{\mathcal W}^C,{\mathcal W}^C}|X_{{\mathcal W}^C})\label{eq:semi-aref-1}\\ &=I(X_{{\mathcal W}};Y_{{\mathcal W},{\mathcal W}^C}|X_{{\mathcal W}^C}Y_{{\mathcal W}^C,{\mathcal W}^C})\label{eq:semi-aref-2}\\ &\le I(X_{{\mathcal W}};Y_{{\mathcal W},{\mathcal W}^C})\label{eq:semi-aref-3}\\ &\le \sum_{v\in{\mathcal W}}I(X_v;Y_{v,{\mathcal W}^C})\label{eq:semi-aref-4} \end{align} where \eqref{eq:semi-aref-2} follows, because $X_{{\mathcal W}}-X_{{\mathcal W}^C}-Y_{{\mathcal W}^C,{\mathcal W}^C}$ form a Markov chain and \eqref{eq:semi-aref-3} follows from the fact that $(X_{{\mathcal W}^C}Y_{{\mathcal W}^C,{\mathcal W}^C})-X_{{\mathcal W}}-Y_{{\mathcal W},{\mathcal W}^C}$ form a Markov chain and \eqref{eq:semi-aref-4} follows, since $Y_{v,{\mathcal W}^C}$ given $X_v$ is independent of other random variables. Finally, note that the RHS of \eqref{eq:semi-aref-4} only depends on the marginal p.m.f. of the random variables $X_{{\mathcal V}}$ which implies the converse.\par The direct part of Proposition \ref{pro:finite} is deduced from Proposition \ref{pro:det}, by computing the RHS of \eqref{eq:det} for the product distribution $\prod_{v\in{\mathcal V}}p(x_{v})$, in which each $X_v$ is uniformly distributed over the field $\mathbf{GF}(q)$. 
The converse follows from Proposition \ref{pro:ob}, since the product and the uniform distribution simultaneously maximize the RHS of \eqref{eq:sw2} for all ${\mathcal W}\subseteq{\mathcal V}$.\par The sufficient condition of Proposition \ref{pro:state} is deduced from Theorem \ref{thm:sw}, by treating the state information at each destination as an additional output of the network and the fact that $(Y_{{\mathcal W}}:{\mathcal W}\subseteq{\mathcal V}_{-d_i})$ is a deterministic function of $(X_{{\mathcal V}},S)$. The necessary conditions for the SD-Aref network and the SD-linear finite-field deterministic network follow from arguments similar to the converses for these networks without state. \end{proof} \section{Slepian-Wolf coding over Gaussian cooperative networks}\label{sec:7} In the previous section, we focused on some networks for which the cut-set type necessary conditions became sufficient conditions, at least for product distributions of the channel inputs. In this section, we focus on Gaussian networks, for which simple forwarding of the observations of each node is impossible. Instead, following \cite{aves:phd,aves:sub}, each node quantizes its observations at the noise level, then transmits these to the destinations. We compute sufficient conditions corresponding to this approach and compare them with the necessary conditions. \par Consider a Gaussian cooperative network, in which the received signal $\mathbf{y}_{v}$ is given by, \begin{equation} \mathbf{y}_{v}=\sum_{v'\in{\mathcal V}_{-v}}h_{v',v}\mathbf{x}_{v'}+\mathbf{z}_{v} \end{equation} \noindent where $h_{v',v}$ is a complex number representing the channel gain from node $v'$ to node $v$. Furthermore, we assume that each node has an average power constraint equal to one on its transmitted signal. Moreover, $Z_v$ is an i.i.d. complex Gaussian random process with variance $\sigma^2_v$. Theorem \ref{thm:gauss} is the main result of this section.
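The cut quantities appearing in the theorem that follows can be evaluated numerically. The sketch below is ours, for illustration only: logarithms are taken base $2$, and the identity input covariance is an assumption that merely lower-bounds $C_{wf}$, which maximizes over input distributions. It computes the rate of a ${\mathcal W}\rightarrow{\mathcal W}^C$ MIMO cut and the gap term $\kappa_{{\mathcal W}}$:

```python
import numpy as np

def cut_rate_iid(H, sigma2=1.0):
    """Rate across a W -> W^C MIMO cut, log2 det(I + H H* / sigma2),
    for i.i.d. unit-power complex Gaussian inputs (a lower bound on C_wf)."""
    m = H.shape[0]                       # |W^C| receive antennas
    G = np.eye(m) + (H @ H.conj().T) / sigma2
    _, logdet = np.linalg.slogdet(G)
    return logdet / np.log(2.0)          # bits per channel use

def kappa(w, wc, V):
    """Gap term kappa_W for |W| = w, |W^C| = wc in a V-node network
    (as written in the theorem, with log taken base 2 here)."""
    m = min(w, wc)
    return m * np.log2(1.0 + w / m) + V - 1

# with no channel (H = 0) the cut carries no information
print(cut_rate_iid(np.zeros((2, 3))))    # 0.0
# a balanced cut in a 4-node network: kappa = 2*log2(2) + 3 = 5 bits
print(kappa(2, 2, 4))                    # 5.0
```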
\begin{theorem}\label{thm:gauss} A set of DMCS $U_{{\mathcal A}}$ can reliably be transmitted over a Gaussian network, if for each ${\mathcal S}\subseteq{\mathcal V}$, we have: \begin{equation} \label{eq:gauss-in} H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})<\min_{d_i\in{\mathcal D}}\min_{{\mathcal V}_{-d_i}\supseteq{\mathcal W}\supseteq{\mathcal S}}C_{wf}({\mathcal W}\rightarrow{\mathcal W}^C)-\kappa_{{\mathcal W}} \end{equation} where \[ C_{wf}({\mathcal W}\rightarrow{\mathcal W}^C)=\max_{p(x_{{\mathcal W}}):\sum_{v\in{\mathcal W}}\mathbb{E}X^2_v=|{\mathcal W}|}I(X_{{\mathcal W}};Y_{{\mathcal W}^C}|X_{{\mathcal W}^C}) \] and \[ \kappa_{{\mathcal W}}=\min\{|{\mathcal W}|,|{\mathcal W}^C|\}\log(1+\dfrac{|{\mathcal W}|}{\min\{|{\mathcal W}|,|{\mathcal W}^C|\}})+V-1 \] Moreover, $\kappa_{{\mathcal W}}$ is bounded above by $\frac{3}{2}V-1$.\\ Conversely, multicasting is feasible only if: \begin{equation} \label{eq:gauss-out} H(U_{{\mathcal S}}|U_{{\mathcal A}\backslash{\mathcal S}})<\min_{d_i\in{\mathcal D}}\min_{{\mathcal V}_{-d_i}\supseteq{\mathcal W}\supseteq{\mathcal S}}C_{wf}({\mathcal W}\rightarrow{\mathcal W}^C) \end{equation} \end{theorem} \begin{remark} This theorem establishes that multicasting of all DMCS whose Slepian-Wolf region intersects the cut-set bound region within $\dfrac{3}{2}V-1$ bits is feasible. \end{remark} \begin{proof} $C_{wf}({\mathcal W}\rightarrow{\mathcal W}^C)$ is the capacity of the ${\mathcal W}\times{\mathcal W}^C$ MIMO channel with antenna input $X_{{\mathcal W}}$ and antenna output $Y_{{\mathcal W}^C}$. Constraint \eqref{eq:gauss-out} is then a direct result of Proposition \ref{pro:ob}, since there is an average power constraint equal to one at each node $v\in{\mathcal W}$. To show \eqref{eq:gauss-in}, we apply Theorem \ref{thm:sw} to the Gaussian network. Let $(X_v:v\in{\mathcal W})$ be jointly complex Gaussian random variables with covariance matrix $I_{V\times V}$. 
Let $\hat{Y}_v=Y_v+\hat{Z}_v$ where $\hat{Z}_v$ is a complex Gaussian random variable with variance equal to $\sigma_v^2$ (In other words, $\hat{Y}_{v}$ quantizes $Y_v$ at the noise level, \cite{aves:sub}). Now consider, \begin{align} I(X_{{\mathcal W}};Y_{{\mathcal W}^C}|X_{{\mathcal W}^C})&=I(X_{{\mathcal W}};Y_{{\mathcal W}^C}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}|X_{{\mathcal W}^C})\label{eq:g-1}\\ &=I(X_{{\mathcal W}};Y_{d_i}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}|X_{{\mathcal W}^C})+I(X_{{\mathcal W}};Y_{{\mathcal W}^C\backslash\{d_i\}}|X_{{\mathcal W}^C}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}})\label{eq:g-2} \end{align} where \eqref{eq:g-1} follows, since $X_{{\mathcal W}}-(X_{{\mathcal W}^C},Y_{{\mathcal W}^C})-\hat{Y}_{{\mathcal W}^C}$ form a Markov chain. Next consider, \begin{align} I(X_{{\mathcal W}};Y_{{\mathcal W}^C\backslash\{d_i\}}|X_{{\mathcal W}^C}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}})&=I(X_{{\mathcal W}};\hat{Z}_{{\mathcal W}^C\backslash\{d_i\}}|X_{{\mathcal W}^C},Y_{{\mathcal W}^C\backslash\{d_i\}}+\hat{Z}_{{\mathcal W}^C\backslash\{d_i\}})\label{eq:g-3}\\ &\le h(\hat{Z}_{{\mathcal W}^C\backslash\{d_i\}})-h(\hat{Z}_{{\mathcal W}^C\backslash\{d_i\}}|Z_{{\mathcal W}^C\backslash\{d_i\}}+\hat{Z}_{{\mathcal W}^C\backslash\{d_i\}})\label{eq:g-4}\\ &= I(\hat{Z}_{{\mathcal W}^C\backslash\{d_i\}};Z_{{\mathcal W}^C\backslash\{d_i\}}+\hat{Z}_{{\mathcal W}^C\backslash\{d_i\}})\label{eq:g-5}\\ &= |{\mathcal W}^C|-1\label{eq:g-6} \end{align} where \eqref{eq:g-3} follows from the definition of $\hat{Y}_{v}$, \eqref{eq:g-4} follows from the fact that conditioning does not increase entropy and the fact that conditioning on $(Y_{{\mathcal W}^C}+\hat{Z}_{{\mathcal W}^C},X_{{\mathcal V}})$ is equivalent to conditioning on $(Z_{{\mathcal W}^C}+\hat{Z}_{{\mathcal W}^C},X_{{\mathcal V}})$ and $(Z_{{\mathcal W}^C},\hat{Z}_{{\mathcal W}^C})$ is independent of $X_{{\mathcal V}}$. 
\eqref{eq:g-6} follows because $\{(Z_v,\hat{Z}_v):v\in{\mathcal W}^C\}$ are independent and $Z_v$ and $\hat{Z}_v$ are complex Gaussian random variables with the same variance. In a similar way, consider \begin{align} I(Y_{{\mathcal W}};\hat{Y}_{{\mathcal W}}|X_{{\mathcal V}}Y_{d_i}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}})&=I(Z_{{\mathcal W}};Z_{{\mathcal W}}+\hat{Z}_{{\mathcal W}}|X_{{\mathcal V}},Z_{d_i},Z_{{\mathcal W}^C}+\hat{Z}_{{\mathcal W}^C})\nonumber \\&=I(Z_{{\mathcal W}};Z_{{\mathcal W}}+\hat{Z}_{{\mathcal W}})\nonumber\\ &=|{\mathcal W}|\label{eq:g} \end{align} Next, we derive a slightly modified version of the beam-forming lemma \cite[Appendix F]{aves:sub}. Under water-filling, $C_{wf}({\mathcal W}\rightarrow{\mathcal W}^C)$ is given by \[ C_{wf}({\mathcal W}\rightarrow{\mathcal W}^C)=\sum_{i=1}^n \log(1+Q_{ii}\lambda_i) \] where $n=\min(|{\mathcal W}|,|{\mathcal W}^C|)$, the $\lambda_i$'s are the singular values of the channel matrix of the MIMO channel, and $Q_{ii}$ is given by the water-filling solution satisfying $\sum_{i=1}^n Q_{ii}=|{\mathcal W}|$. Following \cite[Appendix F, Equations 140-143]{aves:sub}, we obtain \begin{align}\label{eq:g-7} C_{wf}({\mathcal W}\rightarrow{\mathcal W}^C)-I(X_{{\mathcal W}};Y_{{\mathcal W}^C}|X_{{\mathcal W}^C})&\le n\log(1+\dfrac{|{\mathcal W}|}{n})\\ &\le n\log(\dfrac{V}{n})\label{eq:gg} \end{align} Finally, comparing \eqref{eq:g-2}, \eqref{eq:g-6}, \eqref{eq:g} and \eqref{eq:g-7}, we get \begin{equation} I(X_{{\mathcal W}};Y_{d_i}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}|X_{{\mathcal W}^C})\ge C_{wf}({\mathcal W}\rightarrow{\mathcal W}^C)-\kappa_{{\mathcal W}} \end{equation} Substituting it in \eqref{eq:sw}, we conclude that the constraint \eqref{eq:gauss-in} is a sufficient condition. Now note that $n\in [1,\frac{V}{2})$. Define $f(x)=x\log(\dfrac{V}{x})$ on $[1,\frac{V}{2}]$. $f$ is a convex function and attains its maximum at the endpoint $\frac{V}{2}$. 
Hence the RHS of \eqref{eq:gg} is at most $\dfrac{V}{2}$, which yields $\kappa_{{\mathcal W}}\le\frac{3}{2}V-1$. \end{proof} \section{Achievable rate region for cooperative relay networks}\label{sec:8} Let ${\mathcal A}={\mathcal V}$ and let the sources $(U_v:v\in{\mathcal V})$ be statistically independent and uniformly distributed over the sets ${\mathcal M}_v=\{1,2,\cdots,2^{R_v}\}$, so that $H(U_v)=R_v$. Substituting these values in Theorem \ref{thm:sw}, we obtain an achievable rate region, based on the CF scheme, for \emph{cooperative relay networks} with multicast demands. \begin{theorem} \label{thm:ach} A V-tuple $(R_1,R_2,\cdots,R_V)$ is contained in the achievable rate region of a cooperative network with multicast demands at each node $d_i\in{\mathcal D}$, if for each ${\mathcal S}\subseteq{\mathcal V}$ the following constraint holds: \begin{equation} \label{eq:ach} R_{{\mathcal S}}< \min_{d_i\in{\mathcal D}\backslash{\mathcal S}}\min_{{\mathcal V}\supseteq{\mathcal W}\supseteq{\mathcal S}: \atop d_i\in{\mathcal W}^C} \big[I(X_{{\mathcal W}};Y_{d_i}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}|X_{{\mathcal W}^C}Q)-I(Y_{{\mathcal W}};\hat{Y}_{{\mathcal W}}|X_{{\mathcal V}}Y_{d_i}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}Q)\big]^+ \end{equation} where $[x]^+=\max\{x,0\}$ and the joint p.m.f. of $(q,x_{{\mathcal V}},y_{{\mathcal V}},\hat{y}_{{\mathcal V}})$ factors as $p(q)\prod_{v\in{\mathcal V}}p(x_v|q)p(\hat{y}_v|x_v,y_v,q)p(y_{{\mathcal V}}|x_{{\mathcal V}})$. \end{theorem} \begin{proof} Let ${\mathcal T}$ be the largest subset of ${\mathcal V}$ such that the RHS of \eqref{eq:sw} is non-negative for each ${\mathcal S}\subseteq{\mathcal T}$ (note that if two subsets ${\mathcal T}_1,{\mathcal T}_2$ have this property, then ${\mathcal T}_1\cup{\mathcal T}_2$ also has this property; hence ${\mathcal T}$ is unique). 
Substituting $R_{{\mathcal S}}=H(U_{{\mathcal S}}|U_{{\mathcal S}^C})$ in Theorem \ref{thm:sw} yields that $U_{{\mathcal T}}$ can reliably be multicast if \eqref{eq:ach} holds. Hence $(R_1,\cdots,R_V)$ is achievable (note that $R_v=0$ for each node $v\in{\mathcal T}^C$). \end{proof} \begin{corollary} \label{cor:rel-1} Consider a relay network with node $1$ as a transmitter which has no channel output, i.e., $Y_1=\emptyset$, $V-2$ relay nodes $\{2,\cdots,V-1\}$ and node $V$ as a destination which has no channel input, i.e., $X_V=\emptyset$. Substituting $R_2=\cdots=R_{V}=0$ in Theorem \ref{thm:ach} gives the following achievable rate ($R_{CF}$) for the relay network. \begin{equation} \label{eq:ach:rel} R_{CF}=\min_{{\mathcal S}\subseteq{\mathcal V}:\atop 1\in{\mathcal S},V\in{\mathcal S}^C}\big[I(X_{{\mathcal S}};\hat{Y}_{{\mathcal S}^C\backslash\{V\}}Y_V|X_{{\mathcal S}^C}Q)-\\I(Y_{{\mathcal S}};\hat{Y}_{{\mathcal S}}|X_{{\mathcal V}}Y_V\hat{Y}_{{\mathcal S}^C\backslash\{V\}}Q)\big]^+ \end{equation} \end{corollary} \begin{remark} For the single relay channel, the achievable rate reduces to the CF rate with time-sharing as given in \cite{elgamal}. \end{remark} \begin{remark} In \cite{yassaee}, we obtained an achievable rate based on CF, which subsumes the CF rate given in \cite{kramer2005} when the partial decoding part of the CF strategy is relaxed. The CF rate in \cite[Theorem 3]{yassaee} is given by: \begin{equation} R^*_{CF}=I(X_1;Y_V\hat{Y}_{{\mathcal V}_{-V}}|X_{{\mathcal V}_{-V}}) \end{equation} \emph{subject to the constraints} \begin{equation} \label{eq:cf-cons} \forall{{\mathcal S}\subseteq{\mathcal V}\backslash\{1,V\}}: I(Y_{{\mathcal S}};\hat{Y}_{{\mathcal S}}|X_{{\mathcal V}_{-1}}Y_V\hat{Y}_{{\mathcal S}^C\backslash\{V\}})\le I(X_{{\mathcal S}};Y_V\hat{Y}_{{\mathcal S}^C\backslash\{V\}}|X_{{\mathcal S}^C\backslash\{V\}}) \end{equation} Now let ${\mathcal Q}=\emptyset$ in Corollary \ref{cor:rel-1}. 
It can easily be shown that when the constraints \eqref{eq:cf-cons} hold, ${\mathcal S}={\mathcal V}$ attains the minimum of the RHS of \eqref{eq:ach:rel}. Therefore, the rate of Corollary \ref{cor:rel-1} subsumes the CF-rate given in \cite[Theorem 3]{yassaee}. \end{remark} \begin{corollary} Consider a two-way relay network with nodes $1$ and $V$ as the two transmitters, each demanding the message of the other, and $V-2$ relay nodes $\{2,\cdots,V-1\}$. Substituting $R_2=\cdots=R_{V-1}=0$ and $\hat{Y}_1=\hat{Y}_V=\emptyset$ in Theorem \ref{thm:ach} gives the following achievable rate region for the two-way relay network. \begin{equation} k=1,V:\ R_{k}=\min_{{\mathcal S}\subseteq{\mathcal V}:\atop k\in{\mathcal S},\bar{k}\in{\mathcal S}^C}\big[I(X_{{\mathcal S}};\hat{Y}_{{\mathcal S}^C\backslash\{\bar{k}\}}Y_{\bar{k}}|X_{{\mathcal S}^C})-\\I(Y_{{\mathcal S}\backslash\{k\}};\hat{Y}_{{\mathcal S}\backslash\{k\}}|X_{{\mathcal V}}Y_{\bar{k}}\hat{Y}_{{\mathcal S}^C\backslash\{\bar{k}\}})\big]^+ \end{equation} \noindent where $\bar{1}=V$ and $\bar{V}=1$. \end{corollary} \begin{remark} Propositions \ref{pro:semi}-\ref{pro:state} generalize several recent results on deterministic relay networks, including \cite[Theorem 3.9]{aref}, \cite[Theorem 4.2]{aves:sub}, \cite[Theorem 4.4]{aves:sub}, \cite[Theorem 1]{yhk}, \cite[Theorem 1]{multicast} and \cite[Theorem 1]{yhk:isit09}. \end{remark}\par Next, consider the Gaussian cooperative network. Applying Theorem \ref{thm:gauss} to $U_{{\mathcal V}}$, we obtain the following corollary, which shows that the cut-set bound region is achievable within a constant number of bits. 
\begin{corollary}\label{cor:gauss} A V-tuple $(R_1,R_2,\cdots,R_V)$ is contained in the achievable rate region of a Gaussian cooperative network with multicast demands at each node $d_i\in{\mathcal D}$, if for each ${\mathcal S}\subseteq{\mathcal V}$ the following constraint holds: \begin{equation} \label{eq:rel-gauss-in} R_{{\mathcal S}}<\min_{d_i\in{\mathcal D}}\min_{{\mathcal V}_{-d_i}\supseteq{\mathcal W}\supseteq{\mathcal S}}C_{wf}({\mathcal W}\rightarrow{\mathcal W}^C)-\kappa_{{\mathcal W}} \end{equation} where $C_{wf}$ and $\kappa_{{\mathcal W}}$ are as defined in Theorem \ref{thm:gauss}. \end{corollary} \begin{remark} In \cite[Theorem 4.6]{aves:sub}, the authors have shown that, by quantization at the noise level, the Gaussian relay network achieves the cut-set bound within $14V$ bits. Corollary \ref{cor:gauss} implies that quantization at the noise level achieves the cut-set bound within $\dfrac{3}{2} V-1$ bits; thus we have tightened the gap between the achievable rate and the cut-set bound. A similar result holds for the two-way Gaussian relay network. \end{remark} \section{Conclusions}\label{sec:9} We derived sufficient and necessary conditions for reliable multicasting of DMCS over cooperative networks. The necessary conditions are based on the cut-set type outer bound for the relay network. The sufficient conditions are based on joint source-channel coding, the compress-and-forward strategy for the relay network, and an identity related to the sub-modularity property of the entropy function. We showed that the sufficient conditions are indeed necessary conditions for some classes of deterministic networks, including Aref networks and the linear finite-field deterministic networks. We also proved that multicasting of DMCS whose Slepian-Wolf region intersects the cut-set outer bound within a constant number of bits is feasible. 
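The constant gap invoked in the remark above can be sanity-checked numerically. The sketch below is illustrative and not part of any proof: it assumes natural logarithms and simply enumerates all nontrivial cut sizes $|{\mathcal W}|=w$ of a $V$-node network, evaluating $\kappa_{{\mathcal W}}$ as defined in Theorem \ref{thm:gauss}.

```python
import math

def kappa(w, V):
    """kappa_W for a cut with |W| = w and |W^C| = V - w.
    Logs are taken in nats; the log base is an assumption of this sketch."""
    n = min(w, V - w)
    return n * math.log(1 + w / n) + V - 1

# Check kappa_W <= (3/2) V - 1 over every nontrivial cut of every small network.
worst = max(kappa(w, V) - (1.5 * V - 1)
            for V in range(2, 60) for w in range(1, V))
assert worst <= 1e-9
```

The exhaustive check over small $V$ is of course no substitute for the analytic bound, but it confirms the claimed constant on concrete instances.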
In particular, we specialized the results of the paper to obtain achievable rate regions for multiple-message multicast over cooperative relay networks. We showed that this achievable rate region subsumes some recently reported achievable rates (regions) for relay networks. \appendices \section{Proof of Lemma \ref{le:cover}}\label{app:1} We prove this lemma by contradiction. Let $d$ be the dimension of $\mathbf{P}$. Suppose ${\mathcal F}$ is not a closed covering of $\mathbf{P}$, so there exists a point $A$ inside $\mathbf{P}$ which is not covered by ${\mathcal F}$ (note that by assumption 2, the points that lie on the boundary of $\mathbf{P}$ are covered). Let $B$ be the closest point in $\cup_{i=1}^n\mathbf{P}_i$ to $A$. Clearly, $B$ must lie on a facet of at least one of the polytopes $(\mathbf{P}_i:1\le i\le n)$. Denote this facet by $\mathbf{F}_{\mathbf{P}_j}$. Two situations arise: \begin{enumerate} \item $\mathbf{F}_{\mathbf{P}_j}$ lies inside $\mathbf{P}$. By assumption 3, there exists $k\neq j$ such that $\mathbf{P}_j\cap\mathbf{P}_k=\mathbf{F}_{\mathbf{P}_j}$. Let $\mathbf{S}(B,\epsilon)$ be a $d$-\emph{dimensional} sphere with center $B$ and radius $\epsilon$ small enough that $\mathbf{S}(B,\epsilon)$ is contained in $\mathbf{P}_j\cup\mathbf{P}_k$. Then the segment $AB$ intersects $\mathbf{S}(B,\epsilon)$ at a point $C$ which belongs to one of $\mathbf{P}_j$ or $\mathbf{P}_k$. Now $C$ is closer than $B$ to $A$ and lies on $\cup_{i=1}^n\mathbf{P}_i$. This results in a contradiction, which proves the lemma in this case. \item $\mathbf{F}_{\mathbf{P}_j}$ lies on the boundary of $\mathbf{P}$. Let $\mathbf{S}(B,\epsilon)$ be a sphere with center $B$ and radius $\epsilon$ small enough that $\mathbf{S}(B,\epsilon)$ only intersects $\mathbf{P}_j$. Since $A$ lies inside $\mathbf{P}$, the segment $AB$ intersects $\mathbf{S}(B,\epsilon)$ at a point $C$ inside $\mathbf{P}$. 
By assumption, $C$ belongs to $\mathbf{P}_j$, which again results in a contradiction, proving the lemma. \end{enumerate} \section{Proof of Lemma \ref{le:facet}}\label{app:2} Denote the RHS of \eqref{eq:app} by $\mathbf{F}^*_{f,{\mathcal T}}$. First, we prove that $\mathbf{F}^*_{f,{\mathcal T}}\subseteq\mathbf{F}_{f,{\mathcal T}}$. Suppose $\mathbf{x}$ belongs to $\mathbf{F}^*_{f,{\mathcal T}}$. Now for each ${\mathcal U}\subseteq{\mathcal V}$, we have: \begin{align} x_{{\mathcal U}}&=x_{{\mathcal U}\cap{\mathcal T}}+x_{{\mathcal U}\cap{\mathcal T}^C}\label{eq:app21}\\ &\ge f({\mathcal U}\cap{\mathcal T}|({\mathcal U}\cap{\mathcal T})^C)+f({\mathcal U}\cap{\mathcal T}^C|{\mathcal T}^C\cap{\mathcal U}^C)\label{eq:app22}\\ &=f({\mathcal V})-f({\mathcal U}^C\cup{\mathcal T}^C)+f({\mathcal T}^C)-f({\mathcal U}^C\cap{\mathcal T}^C)\label{eq:app23}\\ &\ge f({\mathcal V})-f({\mathcal U}^C)\label{eq:app24}\\ &=f({\mathcal U}|{\mathcal U}^C)\label{eq:app25} \end{align} where \eqref{eq:app22} follows from the definition of $\mathbf{F}^*_{f,{\mathcal T}}$ and \eqref{eq:app24} follows since $f$ is a sub-modular function. Now, \eqref{eq:app25} yields $\mathbf{x}\in\mathbf{F}_{f,{\mathcal T}}$. Hence $\mathbf{F}^*_{f,{\mathcal T}}\subseteq\mathbf{F}_{f,{\mathcal T}}$. Conversely, assume $\mathbf{x}\in\mathbf{F}_{f,{\mathcal T}}$. Note that by definition, $\mathbf{x}_{{\mathcal T}}\in\mathbf{F}^{(1)}_{f,{\mathcal T}}$. 
For each ${\mathcal S}\subseteq{\mathcal T}^C$, consider: \begin{align} x_{{\mathcal S}}&=x_{{\mathcal T}\cup{\mathcal S}}-x_{{\mathcal T}}\\ &=x_{{\mathcal T}\cup{\mathcal S}}-f({\mathcal T}|{\mathcal T}^C)\label{eq:app26}\\ &\ge f({\mathcal T}\cup{\mathcal S}|{\mathcal T}^C\cap{\mathcal S}^C)-f({\mathcal T}|{\mathcal T}^C)\\ &=f({\mathcal T}^C)-f({\mathcal T}^C\cap{\mathcal S}^C)\\ &=f({\mathcal S}|{\mathcal T}^C\backslash{\mathcal S})\label{eq:app27} \end{align} where \eqref{eq:app26} follows, because $\mathbf{x}$ lies on the hyperplane $x_{{\mathcal T}}=f({\mathcal T}|{\mathcal T}^C)$. Now, \eqref{eq:app27} implies that $\mathbf{x}_{{\mathcal T}^C}\in\mathbf{F}^{(2)}_{f,{\mathcal T}}$ which results in $\mathbf{F}_{f,{\mathcal T}}\subseteq\mathbf{F}^*_{f,{\mathcal T}}$. Thus $\mathbf{F}^*_{f,{\mathcal T}}=\mathbf{F}_{f,{\mathcal T}}$. \par Next, we show that $\mathbf{F}_{f,{\mathcal T}}^{(1)}=\mathbf{P}_{f_1}$ and $\mathbf{F}_{f,{\mathcal T}}^{(2)}=\mathbf{P}_{f_2}$. First observe that since $f$ is sub-modular, $f_1$ and $f_2$ are sub-modular functions. Hence $\mathbf{P}_{f_1}$ and $\mathbf{P}_{f_2}$ are well defined. Moreover, note that \begin{align} \forall{\mathcal S}\subseteq{\mathcal T}: f_1({\mathcal S}|{\mathcal T}\backslash{\mathcal S}) &= f_1({\mathcal S}\cup{\mathcal T})-f_1({\mathcal T}\backslash{\mathcal S})\nonumber\\ &= f({\mathcal S}\cup{\mathcal T}|{\mathcal T}^C)-f({\mathcal T}\backslash{\mathcal S}|{\mathcal T}^C)\nonumber\\ &= f({\mathcal V})-f([{\mathcal T}\backslash{\mathcal S}]\cup{\mathcal T}^C)\nonumber\\ &= f({\mathcal V})-f({\mathcal S}^C)\nonumber\\ &= f({\mathcal S}|{\mathcal S}^C)\label{eq:eq} \end{align} Comparing \eqref{eq:eq} and \eqref{eq:pface1} with Definition \ref{def:associate}, we conclude that $\mathbf{F}_{f,{\mathcal T}}^{(1)}$ is the essential polytope of $f_1$ with dimension $|{\mathcal T}|-1$. 
Likewise, we can show that $\mathbf{F}_{f,{\mathcal T}}^{(2)}$ is the essential polytope of $f_2$ with dimension $|{\mathcal T}^C|-1$. This completes the proof. \section{Formal Proof of Equation \eqref{eq:cl3}}\label{app:3} By Lemma \ref{le:facet}, it suffices to prove the following identities: \begin{align} {\mathcal S}\subseteq{\mathcal T}^C:\quad & h_{\mathbf{C}}({\mathcal S}|{\mathcal T}^C\backslash{\mathcal S})=h_{\mathbf{C}^*}({\mathcal S}|{\mathcal S}^C)\\ {\mathcal S}\subseteq{\mathcal T}:\quad & h_{\mathbf{C}}({\mathcal S}|{\mathcal S}^C)=h_{\mathbf{C}^*}({\mathcal S}|{\mathcal T}\backslash{\mathcal S}) \end{align} We prove the first identity. Proof of the second identity is similar. For each ${\mathcal S}\subseteq{\mathcal T}^C$ consider, \begin{align} h_{\mathbf{C}}({\mathcal S}|{\mathcal T}^C\backslash{\mathcal S})&=h_{\mathbf{C}}({\mathcal T}^C)-h_{\mathbf{C}}({\mathcal T}^C\cap{\mathcal S}^C)\\ &=\sum_{k=1}^{K+1}H(X_{{\mathcal T}^C\cap{\mathcal L}_k}Y_{{\mathcal T}^C\cap{\mathcal L}_{k-1}}|X_{{\mathcal L}^k}Y_{{\mathcal L}^{k-1}})-H(X_{{\mathcal T}^C\cap{\mathcal S}^C\cap{\mathcal L}_k}Y_{{\mathcal T}^C\cap{\mathcal S}^C\cap{\mathcal L}_{k-1}}|X_{{\mathcal L}^k}Y_{{\mathcal L}^{k-1}})\\ &=\sum_{k=1}^{K+1}H(X_{{\mathcal S}\cap{\mathcal L}_k}Y_{{\mathcal S}\cap{\mathcal L}_{k-1}}|X_{{\mathcal T}^C\cap{\mathcal S}^C\cap{\mathcal L}_k}Y_{{\mathcal T}^C\cap{\mathcal S}^C\cap{\mathcal L}_{k-1}}X_{{\mathcal L}^k}Y_{{\mathcal L}^{k-1}})\label{eq:app:c1} \end{align} Note that ${\mathcal L}^{*k}={\mathcal L}^{k-1}\cup({\mathcal T}^C\cap{\mathcal L}_{k-1})$. 
Moreover, for each ${\mathcal S}\subseteq{\mathcal T}^C$, simple calculations yield: \begin{align} {\mathcal S}\cap{\mathcal L}_k^*&={\mathcal S}\cap\left[({\mathcal T}\cap{\mathcal L}_{k-1})\cup({\mathcal T}^C\cap{\mathcal L}_k)\right]={\mathcal S}\cap{\mathcal L}_k\nonumber\\ {\mathcal S}^C\cap{\mathcal L}_k^*&=\left[{\mathcal T}\cap{\mathcal L}_{k-1}\right]\cup\left[{\mathcal S}^C\cap{\mathcal T}^C\cap{\mathcal L}_k\right]\nonumber\\ {\mathcal L}^{*k}\cup({\mathcal S}^C\cap{\mathcal L}_k^*)&={\mathcal L}^k\cup\left[{\mathcal S}^C\cap{\mathcal T}^C\cap{\mathcal L}_k\right]\label{eq:app:c2} \end{align} substituting \eqref{eq:app:c2} in \eqref{eq:app:c1} gives: \begin{align} h_{\mathbf{C}}({\mathcal S}|{\mathcal T}^C\backslash{\mathcal S})&=\sum_{k=1}^{K+1}H(X_{{\mathcal S}\cap{\mathcal L}_k^*}Y_{{\mathcal S}\cap{\mathcal L}_{k-1}^*}|X_{{\mathcal S}^C\cap{\mathcal L}_k^*}Y_{{\mathcal S}^C\cap{\mathcal L}_{k-1}^*}X_{{\mathcal L}^{*k}}Y_{{\mathcal L}^{*k-1}}Z )\\ &=\sum_{k=1}^{K+2}H(X_{{\mathcal S}\cap{\mathcal L}_k^*}Y_{{\mathcal S}\cap{\mathcal L}_{k-1}^*}|X_{{\mathcal S}^C\cap{\mathcal L}_k^*}Y_{{\mathcal S}^C\cap{\mathcal L}_{k-1}^*}X_{{\mathcal L}^{*k}}Y_{{\mathcal L}^{*k-1}}Z)\\ &=h_{\mathbf{C}^*}({\mathcal S}|{\mathcal S}^C) \end{align} where in the last step, we have used the fact that ${\mathcal S}\cap{\mathcal L}_{K+1}^*={\mathcal S}\cap{\mathcal L}_{K+1}=\emptyset$. This completes the proof.$\square$ \section{Equivalence of Constraints \eqref{eq:adi} and \eqref{eq:adi-adi}}\label{app:simplify} It is sufficient to show that the RHS of \eqref{eq:adi1} and \eqref{eq:adi-adi1} are equal. 
Substituting $R_v=H(X_v)+H(\hat{Y}_v|X_vY_v)$ in the RHS of \eqref{eq:adi1} gives \begin{align} R_{{\mathcal W}}^{(d_i)}-H(\hat{Y}_{{\mathcal W}}X_{{\mathcal W}}|X_{{\mathcal W}^C}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}Y_{d_i})&=H(X_{{\mathcal W}})+H(\hat{Y}_{{\mathcal W}}|X_{{\mathcal W}}Y_{{\mathcal W}})-H(\hat{Y}_{{\mathcal W}}X_{{\mathcal W}}|X_{{\mathcal W}^C}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}Y_{d_i})\label{eq:sal:7}\\ &=I(X_{{\mathcal W}};\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}Y_{d_i}|X_{{\mathcal W}^C})+H(\hat{Y}_{{\mathcal W}}|X_{{\mathcal W}}Y_{{\mathcal W}})\nonumber\\&\qquad\qquad -H(\hat{Y}_{{\mathcal W}}|X_{{\mathcal V}}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}Y_{d_i})\nonumber\\ &=I(X_{{\mathcal W}};\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}Y_{d_i}|X_{{\mathcal W}^C})-I(Y_{{\mathcal W}};\hat{Y}_{{\mathcal W}}|X_{{\mathcal V}}\hat{Y}_{{\mathcal W}^C\backslash\{d_i\}}Y_{d_i})\label{eq:sal:8} \end{align} where \eqref{eq:sal:7} follows from the fact that $d_i\notin{\mathcal W}$, the $X_t$'s are independent, and $\hat{Y}_t$ given $(X_t,Y_t)$ is independent of all other random variables; \eqref{eq:sal:8} follows since $(X_{{\mathcal W}^C}\hat{Y}_{{\mathcal W}^C}Y_{d_i})-(X_{{\mathcal W}},Y_{{\mathcal W}})-\hat{Y}_{{\mathcal W}}$ form a Markov chain. Substituting \eqref{eq:sal:8} in \eqref{eq:adi1} shows that \eqref{eq:adi1} and \eqref{eq:adi-adi1} are equal. Also, using \eqref{eq:sal:8} with ${\mathcal W}={\mathcal S}$ shows that \eqref{eq:adi2} and \eqref{eq:adi-adi2} are equal. 
\section{Proof of Lemma \ref{le:9}}\label{app:5} According to the codebook generation and the definition of ${\mathcal N}_{{\mathcal S},{\mathcal Z}}(\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]},\mathbf{s}_b)$ in \eqref{sal:def-mn}, $(\mathbf{X}_t(s_t):t\in{\mathcal S}\cap{\mathcal L}_k)$ and $(\mathbf{X}_{{\mathcal V}}(s_{{\mathcal V},[b-k+1]}))$ are drawn independently from the sets $\mathit{T}_{\epsilon''}^n(X_t)$ and $\mathit{T}_{\epsilon}^n(X_{{\mathcal V}})$. Also given $\mathbf{X}_{t',[b-k+1]}( t'\in{\mathcal Z}\cap{\mathcal L}_{k-1})$, $\mathbf{\hat{Y}}_{t'}(z_{t'}|\mathbf{X}_{t',[b-k+1]})$ is drawn uniformly from the set $\mathit{T}_{\epsilon}^n(\hat{Y}_t|\mathbf{X}_{t',[b-k+1]})$ and is independent from other random variables. Hence the joint p.m.f. of\\ $(\mathbf{x}_{{\mathcal S}\cap{\mathcal L}_k}(s_{{\mathcal S}\cap{\mathcal L}_k}),\mathbf{x}_{{\mathcal V}}(s_{{\mathcal V},[b-k+1]}),\mathbf{\hat{y}}_{{\mathcal Z}\cap{\mathcal L}_{k-1}}(z_{{\mathcal Z}\cap{\mathcal L}_{k-1}}),\mathbf{\hat{y}}_{{\mathcal L}^{k-1}\cup({\mathcal L}_{k-1}\backslash{\mathcal Z}),[b-k+1]},\mathbf{y}_{d_i,[b-k+1]})$ factors as \begin{equation}\label{sal:p} \mathbb{P}[\mathbf{x}_{{\mathcal V}}(s_{{\mathcal V},[b-k+1]}),\mathbf{\hat{y}}_{{\mathcal L}^{k-1}\cup({\mathcal L}_{k-1}\backslash{\mathcal Z}),[b-k+1]},\mathbf{y}_{d_i,[b-k+1]}]\prod_{t\in{\mathcal S}\cap{\mathcal L}_k}P_{\mathbf{X}_t}(\mathbf{x}_{t}(s_t))\prod_{t'\in{\mathcal Z}\cap{\mathcal L}_{k-1}}P_{\mathbf{\hat{Y}}_t|\mathbf{X}_t}(\mathbf{\hat{y}}_{t'}(z_{t'}|\mathbf{x}_{t',[b-k+1]})), \end{equation} where $P_{\mathbf{X}_t}$ and $P_{\mathbf{\hat{Y}}_t|\mathbf{X}_t}$ are uniform distributions on the sets $\mathit{T}_{\epsilon''}^n(X_t)$ and $\mathit{T}_{\epsilon'}^n(\hat{Y}_t|X_t)$, respectively. 
Now, we upper bound $\mathbb{P}[{\mathcal E}_3(b,k,\mathbf{s})]$ for each $\mathbf{s}\in {\mathcal N}_{{\mathcal S},{\mathcal Z}}(\mathbf{u}_{{\mathcal A}[b-\ell-V+2]},\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]},\mathbf{s}_b)$ as follows, \begin{align} \mathbb{P}[{\mathcal E}_3(b,k,\mathbf{s})]&=\sum_{\mathpalette\mathrlapinternal{\big(\mathbf{x}_{{\mathcal L}_k\backslash{\mathcal S}}(s_{{\mathcal L}_k\backslash{\mathcal S},[b-k+1]}),\mathbf{\hat{y}}_{{\mathcal L}_{k-1}\backslash{\mathcal Z}}(z_{{\mathcal L}_{k-1}\backslash{\mathcal Z},[b-k]}),\mathbf{x}_{{\mathcal L}^k,[b-k+1]},\mathbf{\hat{y}}_{{\mathcal L}^{k-1},[b-k+1]},\mathbf{x}_{d_i},\mathbf{y}_{d_i}\big)\in\mathit{T}_{\epsilon}^n}} \mathbb{P}[\mathbf{x}_{{\mathcal L}_k\backslash{\mathcal S}}(s_{{\mathcal L}_k\backslash{\mathcal S},[b-k+1]}),\mathbf{\hat{y}}_{{\mathcal L}_{k-1}\backslash{\mathcal Z}}(z_{{\mathcal L}_{k-1}\backslash{\mathcal Z},[b-k]}),\mathbf{\hat{y}}_{{\mathcal L}^{k-1}[b-k+1]},\mathbf{x}_{d_i,[b-k+1]},\mathbf{y}_{d_i,[b-k+1]}]\nonumber\\ &\qquad\quad\sum_{\mathpalette\mathclapinternal{\mathbf{x}_{{\mathcal S}\cap{\mathcal L}_k}(s_{{\mathcal S}\cap{\mathcal L}_k}),\mathbf{\hat{y}}_{{\mathcal Z}\cap{\mathcal L}_{k-1}}(z_{{\mathcal Z}\cap{\mathcal L}_{k-1}})\in\atop\mathit{T}_{\epsilon}^n(X_{{\mathcal S}\cap{\mathcal L}_k}\hat{Y}_{{\mathcal Z}\cap{\mathcal L}_{k-1}}|\mathbf{x}_{{\mathcal L}_k\backslash{\mathcal S},[b-k+1]},\mathbf{\hat{y}}_{{\mathcal L}_{k-1}\backslash{\mathcal Z},[b-k]},\mathbf{\hat{y}}_{{\mathcal L}^{k-1}[b-k+1]},\mathbf{y}_{d_i,[b-k+1]})}}\qquad\qquad\qquad\prod_{t\in{\mathcal S}\cap{\mathcal L}_k}P_{\mathbf{X}_t}(\mathbf{x}_{t}(s_t))\prod_{t'\in{\mathcal Z}\cap{\mathcal L}_{k-1}}P_{\mathbf{\hat{Y}}_t|\mathbf{X}_t}(\mathbf{\hat{y}}_{t'}(z_{t'}|\mathbf{x}_{t',[b-k+1]}))\label{sal:p1}\\ &=\mathbb{P}[(\mathbf{X}_{{\mathcal L}_k\backslash{\mathcal S}}(s_{{\mathcal L}_k\backslash{\mathcal S},[b-k+1]}),\mathbf{\hat{Y}}_{{\mathcal L}_{k-1}\backslash{\mathcal Z}}(z_{{\mathcal 
L}_{k-1}\backslash{\mathcal Z},[b-k]}),\mathbf{\hat{Y}}_{{\mathcal L}^{k-1}[b-k+1]},\mathbf{X}_{d_i,[b-k+1]},\mathbf{Y}_{d_i,[b-k+1]})\in\mathit{T}_{\epsilon}^n]\nonumber\\& \qquad\dfrac{|\mathit{T}_{\epsilon}^n(X_{{\mathcal S}\cap{\mathcal Z}}\hat{Y}_{{\mathcal Z}\cap{\mathcal L}_{k-1}}|X_{{\mathcal L}_k\backslash{\mathcal S}}\hat{Y}_{{\mathcal L}_{k-1}\backslash{\mathcal Z}}X_{{\mathcal L}^{k}}\hat{Y}_{{\mathcal L}^{k-1}}X_{d_i}Y_{d_i})|}{\prod_{t\in{\mathcal S}\cap{\mathcal L}_k}|\mathit{T}_{\epsilon''}^n(X_t)|\prod_{t'\in{\mathcal Z}\cap{\mathcal L}_{k-1}}|\mathit{T}_{\epsilon'}^n(\hat{Y}_{t'}|X_{t'})|}\label{sal:p2}\\ &\stackrel{.}{\le} 2^{-n\beta_{{\mathcal S},{\mathcal Z}}(k)}\label{sal:p3} \end{align} where \eqref{sal:p1} follows from \eqref{sal:p}, \eqref{sal:p2} follows from the definition of $P_{X_t}$ and $P_{\hat{Y}_t|X_t}$ and \eqref{sal:p3} is a result of the properties of jointly typical sequences. \section{Proof of Lemma \ref{le:7}}\label{app:6} According to the definition of ${\mathcal N}_{{\mathcal S},{\mathcal Z}}(\mathbf{u}_{{\mathcal A}[b-\ell-V+2]}\cdots,\mathbf{u}_{{\mathcal A}[b-V+1]},\mathbf{s}_b)$, for $\mathbf{s}=(w_{{\mathcal L}_1},z_{{\mathcal L}_1},\cdots,w_{{\mathcal L}_{\ell}},z_{{\mathcal L}_{\ell}})$ in \eqref{sal:def-mn}, each $(z_v:v\in{\mathcal Z})$ takes $2^{n(I(\hat{Y}_v;Y_v|X_v)+\delta)}-1$ different values and each $(z_v:v\in{\mathcal V}_{-d_i}\backslash{\mathcal Z})$ takes a fixed value, thus $z_{{\mathcal V}_{-d_i}}$ takes less than $2^{n(\sum_{t\in{\mathcal Z}}I(Y_t;\hat{Y}_t|X_t))}$ different values. 
Also, according to the definition, for each $k\in[1,\ell]$, $w_{{\mathcal L}_k\backslash{\mathcal S}}$ takes the fixed value $w_{{\mathcal L}_k\backslash{\mathcal S},[b-k+1]}$ and $w_{{\mathcal S}\cap{\mathcal L}_k}$ must satisfy the following relation: \[ \mathbf{u}_{{\mathcal S}\cap{\mathcal L}_k}(w_{{\mathcal L}_k\cap{\mathcal S}})\in\mathit{T}_{\epsilon}^n(U_{{\mathcal L}_k}(w_{{\mathcal L}_k\cap{\mathcal S}})|\mathbf{u}_{{\mathcal L}_k\backslash{\mathcal S}}(w_{{\mathcal L}_k\backslash{\mathcal S},[b-k+1]}),\mathbf{u}_{{\mathcal L}^k,[b-k-V+2]},\mathbf{u}_{d_i,[b-k-V+2]}). \] Thus $\mathbf{u}_{{\mathcal S}\cap{\mathcal L}_k}(w_{{\mathcal L}_k\cap{\mathcal S}})$ (or equivalently $w_{{\mathcal L}_k\cap{\mathcal S}}$) takes at most $2^{n(H(U_{{\mathcal S}\cap{\mathcal L}_k}|U_{{\mathcal L}_k\backslash{\mathcal S}}U_{{\mathcal L}^k}U_{d_i}))}$ different values. Therefore, $w_{{\mathcal V}_{-d_i}}$ takes at most $2^{n(\sum_{k=1}^{\ell}H(U_{{\mathcal S}\cap{\mathcal L}_k}|U_{{\mathcal L}_k\backslash{\mathcal S}}U_{{\mathcal L}^k}U_{d_i}))}$ different values. Now, comparing the bounds on the number of possible choices for $z_{{\mathcal V}_{-d_i}}$ and $w_{{\mathcal V}_{-d_i}}$ yields the lemma. \section*{Acknowledgement} We would like to thank the anonymous reviewers and the Associate Editor for their suggestions, which greatly improved the paper in terms of its presentation as well as its technical clarity and context. We would also like to thank the members of the Information Theory and Security Lab at Sharif University for their comments.
\section{Introduction} \label{intro} During recent years important progress has been achieved in understanding the entropy of the so-called small black holes (for reviews see~\cite{deWit:2005ya, Mohaupt:2005jd, Sen:2007qy}) which have vanishing horizon area at the supergravity level~\cite{Gibbons:1982ih, Gibbons:1985ac, Gibbons:1987ps, Garfinkle:1990qj}. The discrepancy with the microscopic counting which gives the finite entropy was resolved by the discovery that the area of the horizon is stretched to finite radius once curvature corrections are included. Indeed such corrections have long been known to exist in the low-energy effective theories of superstrings~\cite{Zwiebach:1985uq, Callan:1986jb, Gross:1986iv, Metsaev:1987zx, Gross:1986mw}. The classically computed entropy then differs from the Bekenstein-Hawking value~\cite{Wald:1993nt, Jacobson:1993vj, Iyer:1994ys, Jacobson:1994qe}, but agrees (at least up to a coefficient) with microscopic counting~\cite{Behrndt:1998eq, LopesCardoso:1998wt, LopesCardoso:1999cv, LopesCardoso:1999ur, LopesCardoso:1999xn, Mohaupt:2000mj, LopesCardoso:2000qm, LopesCardoso:2000fp, Dabholkar:2004yr, Dabholkar:2004dq, Sen:2004dp, Hubeny:2004ji, Bak:2005mt} including non-BPS cases~\cite{Goldstein:2005hq, Kallosh:2005ax, Tripathy:2005qp, Giryavets:2005nf, Goldstein:2005rr, Kallosh:2006bt, Kallosh:2006bx, Prester:2005qs,Cvitan:2007pk,Cvitan:2007en,Prester:2008iu, Alishahiha:2006ke, Sinha:2006yy, Chandrasekhar:2006kx, Parvizi:2006uz, Sahoo:2006rp, Astefanesei:2006sy}. Within the models in which the supersymmetric versions of the curvature square terms are available, the correspondence was checked using the exact classical solutions~\cite{Dabholkar:2004yr, Dabholkar:2004dq, Bak:2005mt}. It was also observed that good agreement is achieved if the curvature corrections are taken in the form of the Gauss-Bonnet (GB) term both in 4D and higher dimensions~\cite{Prester:2005qs, Cvitan:2007pk, Cvitan:2007en, Prester:2008iu}. 
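For reference, the Gauss-Bonnet combination referred to above is the quadratic curvature invariant shown schematically below, together with a dilaton-coupled form; the sign in the exponent and the overall normalization vary between conventions, so this is indicative only:

```latex
% Schematic only: the dilaton coupling and normalization are
% convention-dependent and not fixed by the discussion above.
\mathcal{L}_{\rm GB}
  = R^{2} - 4\,R_{\mu\nu}R^{\mu\nu}
    + R_{\mu\nu\lambda\tau}R^{\mu\nu\lambda\tau},
\qquad
\mathcal{L} \supset \alpha\,\mathrm{e}^{a\phi}\,\mathcal{L}_{\rm GB}.
```

In four dimensions $\mathcal{L}_{\rm GB}$ is topological and affects the equations of motion only through its dilaton coupling, which is why the coupling constant $a$ plays such a prominent role in what follows.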
To compute the entropy of extremal black holes with the horizon $AdS_2 \times S^{D-2}$ from the classical side it is enough to construct local solutions in the vicinity of the horizon, which is easily done analytically~\cite{Sen:2005wa, Sen:2005iz, Cai:2007cz}. But this does not guarantee the existence of global asymptotically flat solutions. Construction of solutions with curvature corrections, apart from purely perturbative probes~\cite{Callan:1988hs, Mignemi:1992nt, Mignemi:1993ce}, requires numerical integration of the field equations. For non-extremal black holes this was done in~\cite{Kanti:1995vq,Torii:1996yi,Alexeev:1997ua, Melis:2005xt, Melis:2005ji, Alexeev:1996vs, Guo:2008hf, Guo:2008eq, Ohta:2009tb, Ohta:2009pe, Maeda:2009uy}. The global existence of extremal black holes in the 4D model with the GB terms endowed with an arbitrary dilaton coupling $a$ was proven in~\cite{Chen:2006ge,Chen:2008px}. It turned out that global asymptotically flat black holes with the horizon $AdS_2 \times S^2$ exist only for dilaton couplings below a critical value $a_{\rm cr}$, which is of the order of, and slightly less than, $\frac12$. This range includes neither the four-dimensional heterotic string value $a = 1$ nor the ten-dimensional value $a = \frac12$. This result is modified in the presence of the magnetic charge~\cite{Chen:2008hk}, which extends the region of allowed couplings and serves as an order parameter ensuring a continuous transition to the theory without curvature corrections. It is worth noting that our model has neither continuous nor discrete S-duality, so the properties of the purely electric solution differ essentially from those of dyons. The purpose of the present paper is to investigate the existence of global solutions for small stretched purely electric black holes in higher dimensional Einstein-Maxwell-Dilaton theory with the Gauss-Bonnet term (EMDGB). 
We construct the local solutions in terms of series expansions around the degenerate event horizon for an arbitrary space-time dimension $D$ and calculate the discrete sequence of black hole entropies using Sen's entropy function approach. The entropy is found to be monotonically increasing with $D$. Then we numerically continue these local solutions and show that in dimensions higher than four the heterotic string value of the dilaton coupling lies inside the range of the existence of global asymptotically flat static black holes. We also investigate the physical significance of the so-called turning points which were encountered in numerical solutions within the four-dimensional EMDGB theory~\cite{Kanti:1995vq, Torii:1996yi, Alexeev:1996vs, Alexeev:1997ua, Chen:2006ge, Chen:2008hk}. They correspond to mild singularities at finite radii outside the horizon where the metric and its first derivatives are finite, but the second derivatives diverge. Numerical solutions can be extended through these singularities, which we call `cusps' in this paper, by a suitable redefinition of the integration variable~\cite{Alexeev:1997ua, Pomazanov:2000}. In four dimensions the solution extended this way then meets a stronger singularity at finite distance, so the cusp is actually just a precursor of the strong singularity. In higher dimensions ($D \ge 7$) we encounter an interesting new feature: the cusps come out in pairs of right and left turning points, so the extended solution may finally even be asymptotically flat. This could correspond to a novel type of black hole coated by cusp pairs. But somewhat disappointingly, our analysis shows that continuation of geodesics through the cusp singularities in the extended manifolds cannot be performed in a smooth way. Thus we are inclined to reject such extended manifolds as physical black hole solutions. Instead, we interpret the occurrence of a cusp singularity as a failure to produce asymptotically flat black holes. 
This gives an upper bound on the dilaton coupling. We find numerically the sequence of critical dilaton couplings for $4 \leq D \leq 10$, which turns out to be increasing with $D$. Another novel feature of EMDGB black holes with a degenerate horizon in higher dimensions is that the role of the GB term in the near-critical solutions may still be significant. In four dimensions, as was shown in~\cite{Chen:2006ge}, the near-critical solutions saturate the BPS bounds of the corresponding theory without curvature corrections. This means that the relative contribution of the GB term becomes negligible when the dilaton coupling approaches its upper boundary. We find that for $D \ge 7$ this is not so, and the BPS conditions are not satisfied in this limit. This paper is organized as follows. In Sec.~II, we define the action, present the field equations in various forms and discuss symmetries of the system. In Sec.~III, we review solutions for small black holes without GB corrections as well as the solutions with the GB term but without dilaton. Then we construct the local series solutions near the horizon and calculate the entropy of stretched black holes using Sen's entropy function. We obtain the discrete sequence of the entropies of curvature corrected black holes in various dimensions, ranging from twice the Bekenstein-Hawking value, $A/2$, for $D = 4$ up to $41A/52$ for $D = 10$. In Sec.~IV, we present asymptotic expansions of the desired solutions, introduce global charges and discuss the BPS conditions. The next Sec.~V is devoted to the cusp problem. We explain why extension of solutions through the cusp singularity is physically unacceptable. Finally, in Sec.~VI we present numerical results for various dimensions and explore the fulfillment of the BPS conditions on the boundary of the allowed dilaton couplings. 
\section{Setup} \label{setup} A low-energy bosonic effective action for the heterotic string theory with the curvature corrections is given by~\cite{Metsaev:1987zx, Gross:1986mw} \begin{equation} I = \frac1{16 \pi G} \int d^Dx \sqrt{-\tilde g} \Phi \left( \tilde R + \Phi^{-2} \tilde \partial_\mu \Phi \tilde \partial^\mu \Phi - \tilde F_{\mu\nu} \tilde F^{\mu\nu} + \frac{\alpha'}8 \tilde{\cal L}_{\rm GB} \right), \end{equation} where $\tilde F^{\mu\nu}$ is the Maxwell field (we use a truncation involving only one $U(1)$ field), $\tilde{{\cal L}}_{\rm GB}$ is the Euler density \begin{equation} \tilde{{\cal L}}_{\rm GB} = \tilde{R}^2 - 4 \tilde{R}_{\mu\nu} \tilde{R}^{\mu\nu} + \tilde{R}_{\alpha\beta\mu\nu} \tilde{R}^{\alpha\beta\mu\nu}, \end{equation} and $\alpha'$ is the Regge slope parameter. The tilde denotes the quantities related to the string frame metric $\tilde g_{\mu\nu}$. The action can be transformed to the Einstein frame with metric $g_{\mu\nu}$ by the conformal rescaling \begin{equation} g_{\mu\nu} = \Phi^{\frac2{D-2}} \, \tilde g_{\mu\nu}, \end{equation} giving \begin{equation} I = \frac1{16 \pi G} \int d^Dx \sqrt{- g} \left( R - \frac{\Phi^{-2}}{D-2} \partial_\mu \Phi \partial^\mu \Phi - \Phi^{\frac2{D-2}} F_{\mu\nu} F^{\mu\nu} + \frac{\alpha'}8 \Phi^{\frac2{D-2}} {\cal L}_{\rm GB} + \mathcal{F}(\partial \Phi, R) \right), \end{equation} where $\mathcal{F}(\partial \Phi, R)$ denotes the cross terms of $\partial \Phi$ and curvature coming from the GB term under the frame transformation. For simplicity, we do not include these terms in our analysis. We expect that inclusion of these terms might affect the black hole properties only quantitatively but not qualitatively. 
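As a cross-check of the exponent bookkeeping in this frame transformation, one can verify numerically that the substitution $\Phi = \mathrm{e}^{\sqrt{2(D-2)}\,\phi}$ introduced next turns the kinetic term $\Phi^{-2}\,\partial_\mu\Phi\,\partial^\mu\Phi/(D-2)$ into $2\,\partial_\mu\phi\,\partial^\mu\phi$ and the conformal factor $\Phi^{\frac2{D-2}}$ into $\mathrm{e}^{2 a_{\rm str}\phi}$ with $a_{\rm str}^2 = 2/(D-2)$. The following pure-Python sketch (ours, for illustration only) checks these identities for $4 \le D \le 10$:

```python
import math

# Our illustration: check the exponent bookkeeping of the dilaton
# redefinition Phi = exp(c*phi), c = sqrt(2(D-2)), applied below.
def a_str(D):
    """String dilaton coupling a_str = sqrt(2/(D-2)) of Eq. (astring)."""
    return math.sqrt(2.0 / (D - 2))

for D in range(4, 11):
    c = math.sqrt(2.0 * (D - 2))
    # kinetic term: Phi^{-2} dPhi dPhi / (D-2) = (c^2/(D-2)) dphi dphi = 2 dphi dphi
    assert math.isclose(c ** 2 / (D - 2), 2.0)
    # conformal factor: Phi^{2/(D-2)} = exp((2c/(D-2)) phi) = exp(2 a_str phi)
    assert math.isclose(2.0 * c / (D - 2), 2.0 * a_str(D))

# endpoint values quoted in the text
assert math.isclose(a_str(4), 1.0) and math.isclose(a_str(10), 0.5)
```

In particular, the endpoint values $a_{\rm str} = 1$ in four dimensions and $a_{\rm str} = 1/2$ in ten dimensions come out automatically.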
Then redefining the dilaton field as \begin{equation} \Phi = \mathrm{e}^{\sqrt{2(D-2)} \; \phi}, \end{equation} we obtain the action \begin{equation} I = \frac1{16 \pi G} \int d^Dx \sqrt{-g} \left( R - 2 \partial_\mu \phi \partial^\mu \phi - {\rm e}^{2 \sqrt{2/(D-2)} \, \phi} F_{\mu\nu} F^{\mu\nu} + \frac{\alpha'}8 {\rm e}^{2 \sqrt{2/(D-2)} \, \phi} {\cal L}_{\rm GB} \right). \end{equation} In this action we have the sequence of the dilaton couplings \begin{equation}\label{astring} a_{\rm str}^2 = \frac2{D-2}, \end{equation} relevant for the string theory. If we do this in 4 dimensions, we have the dilaton coupling $a_{\rm str} = 1$, but if we do this in 10 dimensions, we have $a_{\rm str} = 1/2$. It will be convenient, however, to consider the above action for two arbitrary dilaton couplings $a$ and $b$: \begin{equation} \label{action} I = \frac1{16 \pi G} \int d^Dx \sqrt{-g} \left( R - 2 \partial_\mu \phi \partial^\mu \phi - {\rm e}^{2 a \phi} F_{\mu\nu} F^{\mu\nu} + \alpha {\rm e}^{2 b \phi} {\cal L}_{\rm GB} \right), \end{equation} where we also denoted the GB coupling $\alpha'/8 = \alpha$. The space-time metric is parametrized by two functions $\omega(r)$ and $\rho(r)$: \begin{equation}\label{met} ds^2 = - \omega(r) dt^2 + \frac{dr^2}{\omega(r)} + \rho^2(r) d\Omega_{D-2}^2. \end{equation} For convenience, we list in Appendix A the relevant geometric quantities for more general static spherically symmetric metrics. We will consider only purely electric static spherically symmetric configurations of the $D$-dimensional Maxwell field \begin{equation} A = - f(r) \, dt. 
\end{equation} Then, integrating the Maxwell equations \begin{equation} \left( \rho^{D-2} f' {\rm e}^{2 a \phi} \right)' = 0, \end{equation} one obtains \begin{equation}\label{Solf} f'(r) = q_e \rho^{2-D} {\rm e}^{- 2 a \phi}, \end{equation} where $q_e$ is the electric charge, which is considered as a free parameter (note that the physical electric charge defined asymptotically differs from this quantity, see Sec.~\ref{global}). \subsection{Field equations} \label{field} We present the Einstein equations in the form \begin{equation} G_{\mu\nu} = 8 \pi G ( T_{\mu\nu}^{\rm mat} + T_{\mu\nu}^{\rm GB} ), \end{equation} where $T_{\mu\nu}^{\rm mat}$ is the matter stress-tensor \begin{equation} 8 \pi G \, T_{\mu\nu}^{\rm mat} = 2 \left[ \partial_\mu \phi \partial_\nu \phi - \frac12 \partial_\alpha \phi \partial^\alpha \phi \, g_{\mu\nu} + {\rm e}^{2 a \phi} \left( F_{\mu\alpha} F_\nu{}^\alpha - \frac14 F_{\alpha\beta} F^{\alpha\beta} \, g_{\mu\nu} \right) \right], \end{equation} and the $T_{\mu\nu}^{\rm GB}$ is the effective gravitational stresses due to the GB term \begin{equation} 8 \pi G \, T_{\mu\nu}^{\rm GB} = - \alpha {\rm e}^{2 b \phi} \left[ H_{\mu\nu} + 8 \left( 2 b^2 \nabla^\alpha \phi \nabla^\beta \phi + b \nabla^\alpha \nabla^\beta \phi \right) P_{\mu\alpha\nu\beta} \right], \end{equation} where \begin{eqnarray} H_{\mu\nu} &=& 2 ( R R_{\mu\nu} - 2 R_{\mu\alpha} R^\alpha{}_\nu - 2 R^{\alpha\beta} R_{\mu\alpha\nu\beta} + R_{\mu\alpha\beta\gamma} R_\nu{}^{\alpha\beta\gamma} ) - \frac12 {\cal L}_{GB} \; g_{\mu\nu}, \\ P_{\mu\alpha\nu\beta} &=& R_{\mu\alpha\nu\beta} + 2 g_{\mu[\beta} R_{\nu]\alpha} + 2 g_{\alpha[\nu} R_{\beta]\mu} + R g_{\mu[\nu} g_{\beta]\alpha}. 
\end{eqnarray} For the metric (\ref{met}), the components of $G_{\mu\nu}$ are \begin{eqnarray} G_{tt} &=& - \frac{(D-2) \omega}{2 \rho^2} \left[ 2 \omega \rho \rho'' + \rho \omega' \rho' + (D - 3) (\omega \rho'^2 - 1) \right], \nonumber\\ G_{rr} &=& \frac{D-2}{2 \omega \rho^2} \left[ \rho \omega' \rho' + (D - 3) (\omega \rho'^2 - 1) \right], \nonumber\\ G_{\theta\theta} &=& \frac12 \rho^2 \omega'' + \frac{D-3}2 \left[ 2 \omega \rho \rho'' + 2 \rho \omega' \rho' + (D - 4) (\omega \rho'^2 - 1) \right], \end{eqnarray} while the energy-momentum due to matter fields is given by \begin{eqnarray} 8 \pi G T_{tt}^{\rm mat} &=& \omega^2 \phi'^2 + {\rm e}^{2 a \phi} \omega f'^2, \nonumber\\ 8 \pi G T_{rr}^{\rm mat} &=& \phi'^2 - {\rm e}^{2 a \phi} \frac{f'^2}{\omega}, \nonumber\\ 8 \pi G T_{\theta\theta}^{\rm mat} &=& - \rho^2 \left( \omega \phi'^2 - {\rm e}^{2 a \phi} f'^2 \right). \end{eqnarray} The energy-stress tensor due to the GB term is more complicated \begin{eqnarray} \frac{8 \pi G}{\alpha \mathrm{e}^{2 b \phi}} T^{GB}_{tt} &=& - \frac{D^2_4 \omega}{\rho^3} (2 \omega \rho'' + \omega' \rho') (\omega \rho'^2 - 1) - \frac{D^2_5 \omega}{2 \rho^4} (\omega \rho'^2 - 1)^2 \nonumber\\ && - \frac{2 b D^2_3 \omega}{\rho^2} \left[ (2 \omega \phi'' + \omega' \phi') (\omega \rho'^2 - 1) + 2 \omega \rho' \phi' (2 \omega \rho'' + \omega' \rho') \right] \nonumber\\ && - \frac{4 b D^2_4 \omega^2 \rho' \phi'}{\rho^3} (\omega \rho'^2 - 1) - \frac{8 b^2 D^2_3 \omega^2 \phi'^2}{\rho^2} (\omega \rho'^2 - 1), \nonumber\\ \frac{8 \pi G}{\alpha \mathrm{e}^{2 b \phi}} T^{GB}_{rr} &=& \frac{D^2_4 \omega' \rho'}{\omega \rho^3} (\omega \rho'^2 - 1) + \frac{D^2_5}{2 \omega \rho^4} (\omega \rho'^2 - 1)^2 \nonumber\\ && + 2 b \left[ \frac{D^2_3 \omega' \phi'}{\omega \rho^2} (3 \omega \rho'^2 - 1) + \frac{2 D^2_4 \rho' \phi'}{\rho^3} (\omega \rho'^2 - 1) \right], \nonumber\\ \frac{8 \pi G}{\alpha \mathrm{e}^{2 b \phi}} T^{GB}_{\theta\theta} &=& D^3_4 [ \omega'' (\omega \rho'^2 - 1) + 2 \omega 
\omega' \rho' \rho'' + \omega'^2 \rho'^2] + \frac{2 D^3_5}{\rho} (\omega \rho')' (\omega \rho'^2 - 1) + \frac{D^3_6}{2 \rho^2} (\omega \rho'^2 - 1)^2 \nonumber\\ && + 4 b \Biggl[ D^3_3 \rho (\omega \omega' \rho' \phi')' + D^3_4 (\omega \phi')' (\omega \rho'^2 - 1) + 2 D^3_4 \omega \rho' \phi' (\omega \rho')' \nonumber\\ && + \frac{D^3_5 \omega \rho' \phi'}{\rho} (\omega \rho'^2 - 1) \Biggr] + 8 b^2 \left[ D^3_3 \omega \omega' \rho \rho' \phi'^2 + D^3_4 \omega \phi'^2 (\omega \rho'^2 - 1) \right], \end{eqnarray} where we have introduced the dimension-dependent coefficients \begin{equation} D^m_n = (D - m)_n = (D - m) (D - m - 1) \cdots (D - n), \qquad n \ge m. \end{equation} The dilaton equation reads \begin{eqnarray} 2 (\omega \phi')' + 2 D^2_2 \omega \phi' \frac{\rho'}{\rho} + 2 a f'^2 {\rm e}^{2 a \phi} + \alpha b D^2_3 {\rm e}^{2 b \phi} \left\{ 2 \frac{[\omega' (\omega \rho'^2 - 1)]'}{\rho^2} + D^4_5 \frac{(\omega \rho'^2 - 1)^2}{\rho^4} + 4 D^4_4 (\omega \rho')' \frac{\omega \rho'^2 - 1}{\rho^3} \right\} = 0. 
\end{eqnarray} {}From the Einstein equation, one can derive the following two second order equations for the metric functions $\rho(r)$ and $\omega(r)$,~\footnote{Namely, the equation for $\rho$ is $- \frac{\rho^2}{\omega^2} \left[ (\mbox{Einstein equation})_{tt} + \omega^2 (\mbox{Einstein equation})_{rr} \right]$ and the equation for $\omega$ is $\frac2{\rho} (\mbox{Einstein equation})_{\theta\theta}$.} which are more convenient for numerical integration: \begin{equation} D^2_2 \rho \rho'' + 2 \rho^2 \phi'^2 - 4 \alpha b D^2_3 \left[ (\omega \rho'^2 - 1) \phi' {\rm e}^{2 b \phi} \right]' + 2 \alpha D^2_3 {\rm e}^{2 b \phi} \left( 2 b \omega' \rho'^2 \phi' - D^4_4 \frac{\omega \rho'^2 - 1}{\rho} \rho'' \right) = 0, \label{Eqrho} \end{equation} \begin{eqnarray} \rho \omega'' + 2 D^3_3 (\omega \rho')' + D^3_4 \frac{\omega \rho'^2 - 1}{\rho} + 2 \omega \rho \phi'^2 - 2 \rho f'^2 {\rm e}^{2 a \phi} - 8 \alpha b D^3_3 \left( \omega \omega' \rho' \phi' {\rm e}^{2 b \phi} \right)' && \nonumber\\ - \alpha D^3_4 {\rm e}^{2 b \phi} \Biggl\{ D^5_6 \frac{(\omega \rho'^2 - 1)^2}{\rho^3} + 4 D^5_5 \left[ (\omega \rho')' + 2 b \omega \rho' \phi' \right] \frac{\omega \rho'^2 - 1}{\rho^2} && \nonumber\\ + 2 \left[ \omega'' + 4b(\omega \phi')' + 8 b^2 \omega \phi'^2 \right] \frac{\omega \rho'^2 - 1}{\rho} + 2 \frac{\rho'}{\rho} \left[ 2 \omega \omega' \rho'' + \omega'^2 \rho' + 8 b \omega \phi' (\omega \rho')' \right] \Biggr\} &=& 0. 
\label{Eqw} \end{eqnarray} \subsection{Symmetries of the reduced action} \label{symmetries} One can check that the equations of motion are invariant under a {\em three-parametric} group of global transformations which consists of the transformations of the field functions: \begin{equation} \label{symsol} \omega \to \omega \, {\rm e}^{\mu}, \qquad \rho \to \rho \, {\rm e}^{\delta}, \qquad \phi \to \phi + \frac{\delta}{b}, \qquad f \to f \, {\rm e}^{\frac{\mu}2 - \frac{a}{b} \delta}, \end{equation} accompanied by the shift and rescaling of the radial variable \begin{equation} \label{trr} r \to r \, {\rm e}^{\frac{\mu}2 + \delta} + \nu. \end{equation} Transformation of the electric potential is equivalent to rescaling of the electric charge \begin{equation} q_e \to q_e \, {\rm e}^{\left( D - 3 + \frac{a}{b} \right) \delta}. \end{equation} Not all of these symmetries are the symmetries of the Lagrangian, however. Integrating the action (\ref{action}) over the $(D-2)$-dimensional sphere and dropping the integration over time, one obtains the one-dimensional reduced Lagrangian from the relation $I = \int L dr$. Up to a total derivative one has: \begin{eqnarray} L &=& D^2_2 \rho' \left( \omega \rho^{D-3} \right)' + D^2_3 \rho^{D - 4} - 2 \rho^{D - 2} (\omega \phi'^2 - f'^2 {\rm e}^{2 a \phi}) \nonumber\\ &-& \frac43 \alpha D^2_4 \rho'^3 \left( \omega^2 \rho^{D - 5} {\rm e}^{2 b \phi} \right)' + 4 \alpha D^2_4 \rho' \left( \omega \rho^{D - 5} {\rm e}^{2 b \phi} \right)' \nonumber\\ &-& \alpha {\rm e}^{2 b \phi} \left[ 4 b D^2_3 \rho^{D - 4} \omega' \phi' - 2 D^2_4 \rho^{D - 5} \omega' \rho' - D^2_5 \rho^{D - 6} (\omega \rho'^2 - 1) \right] (\omega \rho'^2 - 1). 
\end{eqnarray} It is easy to check that the one-dimensional action remains invariant under the above transformations provided \begin{equation} \mu = - 2 (D - 3) \delta, \end{equation} namely under the following {\em two-parametric} group of global transformations: \begin{equation} \label{symL} r \to r \, {\rm e}^{- (D - 4) \delta} + \nu, \quad \omega \to \omega \, {\rm e}^{- 2 (D - 3) \delta}, \quad \rho \to \rho \, {\rm e}^{\delta}, \quad \phi \to \phi + \frac{\delta}{b}, \quad f \to f \, {\rm e}^{- \left( D - 3 + \frac{a}{b} \right) \delta}. \end{equation} They generate two conserved Noether currents \begin{equation} J_g := \left( \frac{\partial L}{\partial \Phi'^A} \Phi'^A - L \right) \partial_g r \bigg|_{g=0} - \frac{\partial L}{\partial \Phi'^A} \, \partial_g \Phi^A \bigg|_{g=0}, \qquad \partial_r J_g = 0, \end{equation} where $\Phi^A$ stands for $\omega, \rho, \phi, f$, and $g = \delta, \nu$. The conserved quantity corresponding to $\nu$ is the Hamiltonian \begin{eqnarray} \label{J1} H &=& D^2_2 \rho' \left( \omega \rho^{D-3} \right)' - D^2_3 \rho^{D - 4} - 2 \omega \rho^{D - 2} \phi'^2 + 2 \rho^{D - 2} f'^2 {\rm e}^{2 a \phi} \nonumber\\ &-&\! \alpha {\rm e}^{2 b \phi} \left[ 4 b D^2_3 \rho^{D - 4} \omega' \phi' (3 \omega \rho'^2 \!-\! 1) \!-\! 2 D^2_4 \rho^{D - 5} \omega' \rho' (3 \omega \rho'^2 \!-\! 1) \!-\! D^2_5 \rho^{D - 6} (\omega \rho'^2 \!-\! 1) (3 \omega \rho'^2 \!+\! 1) \right] \nonumber\\ &-& 4 \alpha D^2_4 \rho'^3 \left( \omega^2 \rho^{D - 5} {\rm e}^{2 b \phi} \right)' + 4 \alpha D^2_4 \rho' \left( \omega \rho^{D - 5} {\rm e}^{2 b \phi} \right)'. \end{eqnarray} This is known to vanish on shell for diffeomorphism invariant theories, $H = 0$. 
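These scaling statements can be checked mechanically. In the following pure-Python sketch (our own bookkeeping aid, taking $b = a$) each field is assigned a weight $(c_\mu, c_\delta)$, meaning that it acquires a factor $\mathrm{e}^{c_\mu \mu + c_\delta \delta}$ under (\ref{symsol})--(\ref{trr}); every monomial of $L\,dr$ then carries the common weight $\mathrm{e}^{\mu/2 + (D-3)\delta}$, so invariance of the reduced action is equivalent to $\mu = -2(D-3)\delta$, while $q_e$ rescales with exponent $(D-3+a/b)\delta$ independently of $\mu$:

```python
from fractions import Fraction as F

# Scaling weights (c_mu, c_delta): under (symsol)-(trr) with b = a,
# a quantity X transforms as X -> X exp(c_mu*mu + c_delta*delta).
W_OMEGA = (F(1), F(0))       # omega -> omega e^mu
W_RHO   = (F(0), F(1))       # rho   -> rho e^delta
W_EXP   = (F(0), F(2))       # e^{2a phi} picks up e^{2 delta}  (phi -> phi + delta/a)
W_F     = (F(1, 2), F(-1))   # f -> f e^{mu/2 - delta}
W_PRIME = (F(-1, 2), F(-1))  # each d/dr picks up e^{-(mu/2 + delta)}
W_DR    = (F(1, 2), F(1))    # the measure dr -> dr e^{mu/2 + delta}

def add(*ws):
    return (sum(w[0] for w in ws), sum(w[1] for w in ws))

def mul(w, n):
    return (n * w[0], n * w[1])

for D in range(4, 11):
    # representative monomials of the reduced Lagrangian L
    # (factors (omega rho'^2 - 1) have weight (0, 0) and are omitted)
    terms = [
        add(W_RHO, W_PRIME, W_OMEGA, mul(W_RHO, D - 3), W_PRIME),  # rho' (omega rho^{D-3})'
        mul(W_RHO, D - 4),                                          # rho^{D-4}
        add(mul(W_RHO, D - 2), W_OMEGA, mul(W_PRIME, 2)),           # rho^{D-2} omega phi'^2
        add(mul(W_RHO, D - 2), mul(add(W_F, W_PRIME), 2), W_EXP),   # rho^{D-2} f'^2 e^{2a phi}
        add(mul(add(W_RHO, W_PRIME), 3),                            # rho'^3 (omega^2 rho^{D-5} e^{2a phi})'
            mul(W_OMEGA, 2), mul(W_RHO, D - 5), W_EXP, W_PRIME),
    ]
    # every monomial of L dr carries weight exp(mu/2 + (D-3) delta):
    # invariance of the reduced action  <=>  mu = -2(D-3) delta
    for t in terms:
        assert add(t, W_DR) == (F(1, 2), F(D - 3))
    # q_e = f' rho^{D-2} e^{2a phi} rescales with exponent (D-2) delta,
    # i.e. (D - 3 + a/b) delta for a = b, with no mu dependence
    assert add(W_F, W_PRIME, mul(W_RHO, D - 2), W_EXP) == (F(0), F(D - 2))
```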
The Noether current corresponding to the parameter $\delta$ leads to the conservation equation $\partial_r J_\delta = 0$, where \begin{eqnarray} \label{J2} J_\delta &=& - D^4_4 r H - D^2_2 \omega' \rho^{D-2} + \frac4{b} \omega \rho^{D-2} \phi' + 4 \left( D - 3 + \frac{a}{b} \right) q_e f \nonumber\\ &+& \alpha {\rm e}^{2 b \phi} \left[ (\omega \rho'^2 - 1) ( 2 D^2_2 D^2_3 \omega' \rho^{D-4} - 8 b D^2_3 \omega \rho^{D-4} \phi' ) + 8 b D^2_3 \omega \omega' \rho^{D-3} \rho' \phi' \right]. \end{eqnarray} Symmetry transformations will be used to rescale numerically obtained solutions to desired asymptotic form and obtain true physical parameters of the solution. \section{Stretching the horizon of small black hole} \label{stretch} \subsection{Small dilatonic $D$-dimensional black hole without GB term} \label{small} Let us first discuss the black hole solution without the GB term. It can be presented in the form~\cite{Gal'tsov:2005vf} \begin{eqnarray} && ds^2 = - f_+ f_-^{-1 + \frac{4(D-3)}{(D-2)\Delta}} dt^2 + f_+^{-1} f_-^{-1 + \frac2{D-3} - \frac4{(D-2)\Delta} } dr^2 + r^2 f_-^{\frac2{D-3} - \frac4{(D-2)\Delta}} d\Omega_{D-2}^2, \\ && \mathrm{e}^{2 a \phi} = \mathrm{e}^{2 a \phi_\infty} f_-^{- \frac{2a^2}{\Delta}}, \qquad F_{tr} = 4 (D - 3) \frac{(r_+ r_-)^\frac{D-3}{2}}{\sqrt{\Delta}} \mathrm{e}^{- a \phi_\infty} \frac1{r^{D-2}}, \end{eqnarray} where \begin{equation} f_\pm = 1 - \frac{r_\pm^{D-3}}{r^{D-3}}, \qquad \Delta = a^2 + \frac{2(D-3)}{D-2}. \end{equation} The mass and the electric and dilaton charges are given by \begin{eqnarray} \mathcal{M} &=& \frac{\Omega_{D-2}}{16 \pi G} \left[ (D-2) (r_+^{D-3} - r_-^{D-3}) + \frac{4(D-3)}{\Delta} r_-^{D-3} \right], \\ Q_e &=& \frac{(D-3) \Omega_{D-2}}{4 \pi G} \sqrt{\frac{(r_+ r_-)^{D-3}}{\Delta}} \; \mathrm{e}^{a \phi_\infty}, \\ \mathcal{D} &=& - \frac{(D-3) a \Omega_{D-2}}{4 \pi G \Delta} r_-^{D-3}. \end{eqnarray} For $a = 0$, this solution reduces to the $D$-dimensional Reissner-Nordstr\"om solution. 
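The reduction to the Reissner-Nordstr\"om solution at $a = 0$ can be verified exactly by evaluating the powers of $f_-$ in the metric: with $\Delta = 2(D-3)/(D-2)$ they become $1$, $-1$ and $0$ in $g_{tt}$, $g_{rr}$ (beyond the explicit $f_+^{-1}$) and the sphere part respectively, and the dilaton becomes constant. A small rational-arithmetic sketch (ours):

```python
from fractions import Fraction as F

# Exact check that for a = 0 the solution above reduces to the
# D-dimensional Reissner-Nordstrom black hole.
def exponents(a2, D):
    """Powers of f_- in the metric and in e^{2 a phi}; a2 = a^2."""
    Delta = a2 + F(2 * (D - 3), D - 2)
    e_tt = -1 + F(4 * (D - 3)) / ((D - 2) * Delta)
    e_rr = -1 + F(2, D - 3) - F(4) / ((D - 2) * Delta)
    e_sphere = F(2, D - 3) - F(4) / ((D - 2) * Delta)
    e_dilaton = -2 * a2 / Delta
    return e_tt, e_rr, e_sphere, e_dilaton

for D in range(4, 11):
    # g_tt = -f_+ f_-, g_rr = (f_+ f_-)^{-1}, rho = r, constant dilaton
    assert exponents(F(0), D) == (1, -1, 0, 0)
# at the extremal point r_+ = r_- = r_0 the factor f_+ f_- degenerates
# to f_0^2, producing a degenerate event horizon
```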
In the extremal limit $r_+ = r_- = r_0$, it reduces to \begin{equation}\label{ReNo} ds^2 = - f_0^2 dt^2 + f_0^{-2} dr^2 + r^2 d\Omega_{D-2}^2, \qquad f_0 = 1 - \frac{r_0^{D-3}}{r^{D-3}}, \end{equation} and has a degenerate event horizon $AdS_2 \times S^{D-2}$. Note that for $a = 0$, the GB term decouples from the system, so this solution remains valid in the full theory with $\alpha \neq 0$. For $a \neq 0$, the extremal solution reads \begin{equation}\label{edblh} ds^2 = - f_0^{\frac{4(D-3)}{(D-2)\Delta}} dt^2 + f_0^{- \frac{2(D-4)}{D-3} - \frac4{(D-2)\Delta} } dr^2 + r^2 f_0^{\frac2{D-3} - \frac4{(D-2)\Delta}} d\Omega_{D-2}^2. \end{equation} This has a null singularity at the horizon. The Ricci scalar in the vicinity of this point diverges as \begin{equation} R \sim (r^{D-3} - r_0^{D-3})^{-\frac2{D-3} + \frac{4}{(D-2) \Delta}}, \end{equation} together with the dilaton function \begin{equation} {\rm e}^{2 a \phi} \sim (r^{D-3} - r_0^{D-3})^{-\frac{2 a^2}{\Delta}}. \end{equation} The divergence of the GB term near the horizon is~\footnote{We use this occasion to correct Eq.~(33) of our previous paper for $D = 4$ \cite{Chen:2006ge}.} \begin{equation}\label{GBdiv} \mathrm{e}^{2 a \phi}{\cal L}_{GB}|_{r=r_+} \sim (r_+ - r_-)^{-\frac{a^2(D^2-4)}{a^2 (D-2) + 2 (D-3)}}, \end{equation} so one can expect that the GB term will substantially modify the dilaton black hole solution in the extremal limit. The mass, the dilaton charge and the electric charge for this solution (defined as in Sec.~IV below) are \begin{equation} \mathcal{M} = \frac{\Omega_{D-2}}{4 \pi G} \frac{D^3_3}{\Delta} \, r_0^{D-3}, \qquad \mathcal{Q}_e = Q_e \mathrm{e}^{-a \phi_\infty} = \frac{\Omega_{D-2}}{4 \pi G} \frac{D^3_3}{\sqrt{\Delta}} r_0^{D-3}, \qquad \mathcal{D} = - \frac{\Omega_{D-2}}{4 \pi G} \frac{a D^3_3}{\Delta} r_0^{D-3}. 
\end{equation} They are determined by a single parameter $r_0$, so we have the following relations among the three quantities \begin{equation} \mathcal{D} = - a \mathcal{M}, \qquad \mathcal{Q}_e = \sqrt{\Delta} \mathcal{M}, \end{equation} which imply the following BPS condition \begin{equation} a^2 \mathcal{M}^2 + \mathcal{D}^2 = \frac{2 a^2}{\Delta} \mathcal{Q}_e^2. \end{equation} \subsection{Wiltshire black hole} \label{wiltshire} Another limit in which our action admits an exact solution is that of vanishing dilaton. This is consistent with the field equations for $a = b = 0$. In this case an exact solution was found by Wiltshire~\cite{Wiltshire:1988uq} \begin{equation} \omega(r) = 1 + \frac{r^2}{2 D^3_4 \alpha} \left( 1 \mp \sqrt{1 + \frac{64 \pi D^3_4 \alpha \mathcal{M}}{D^2_2 \Omega_{D-2} r^{D-1}} - \frac{8 D^4_4 \alpha \, q_e^2}{D^2_2 r^{2(D-2)}} } \right), \quad \rho(r) = r. \end{equation} The lower sign corresponds to an asymptotically AdS space-time for $\alpha > 0$ and to an asymptotically de Sitter solution for $\alpha < 0$. The upper sign leads to an asymptotically flat solution which asymptotically coincides with the $D$-dimensional Reissner-Nordstr\"om solution. These solutions exist in dimensions $D \geq 5$, where the GB term is not a total derivative. The asymptotically flat solution has two horizons which coincide in the extremal limit for a special value of the electric charge. For the extremal solution, the mass and the charge can be expressed in terms of a single parameter, the radius of the horizon $r_0$: \begin{equation} \label{gpWe} {\cal M} = \frac{\Omega_{D-2}}{8\pi} (D-2)[r_0^2+(D-4)^2 \alpha] r_0^{D-5}, \qquad q_e^2 = \frac{D^2_3}{2} [r_0^2+ D^4_5 \alpha] r_0^{2(D-4)}. \end{equation} Conversely, the radius can be expressed as \begin{equation} r_0^{D-3} = - \frac{4 \pi D^5_5 \mathcal{M}}{D^2_2 \Omega_{D-2}} + \sqrt{ \left( \frac{4 \pi D^5_5 \mathcal{M}}{D^2_2 \Omega_{D-2}} \right)^2 + \frac{2 D^4_4 q_e^2}{D^2_3}}. 
\end{equation} \subsection{Local solution near the horizon} \label{local} In what follows we set $b = a$ as relevant for the heterotic string theory case, but still keeping $a$ arbitrary. Assuming that the full system with the GB term admits the $AdS_2 \times S^{D-2}$ horizon, $r = r_H$, we look for the series expansions of the metric functions in powers of $x = r - r_H$: \begin{equation} \omega(r) = \sum_{i=2}^\infty \omega_i x^i, \qquad \rho(r) = \sum_{i=0}^\infty \rho_i x^i, \qquad P(r) := {\rm e}^{2 a \phi(r)} = \sum_{i=0}^\infty P_i x^i. \end{equation} The function $\omega$ starts with the quadratic term in view of the degeneracy of the horizon, while the two other functions have generic Taylor expansions. Denoting by $\rho_0 = \rho(r_H)$ the physical radius of the horizon, we obtain for the leading order coefficients: \begin{eqnarray} \label{NHSol} \omega_2 = \frac{D^2_3}{2 \rho_0^2}, \qquad P_0 = \frac{\rho_0^2}{4 \alpha (2 D - 7)}, \end{eqnarray} and $\rho_0$ is related to the electric charge via \begin{equation} \label{defqe} \rho_0^{D-2} = q_e \frac{4 \sqrt{2 \alpha} (2D - 7)}{\sqrt{D^2_3 (D^2 - D - 8)}}. \end{equation} Note that the expression under the square root, and the right-hand side as a whole, are positive for $D \geq 4$. The horizon radius is fixed entirely by the electric charge, as in the extremal Reissner-Nordstr\"om case. In our units the GB parameter $\alpha$ has dimension $L^2$. When the GB term is switched off ($\alpha \to 0$), the horizon radius shrinks, as expected for small extremal black holes. Higher order expansion coefficients depend on only one free parameter, namely the coefficient $P_1$ in the dilaton expansion. 
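A quick numeric illustration (ours) of the relation (\ref{defqe}): at fixed charge the horizon radius scales as $\rho_0 \propto \alpha^{1/(2(D-2))}$, so it indeed shrinks to zero together with the GB coupling, and the expression under the square root is positive for all $D \geq 4$:

```python
import math

# Numeric illustration of Eq. (defqe): rho_0^{D-2} is proportional to
# q_e sqrt(alpha), so at fixed charge the stretched horizon radius
# scales as rho_0 ~ alpha^{1/(2(D-2))} and shrinks to zero together
# with the GB coupling.
def rho0(D, alpha, q_e=1.0):
    D23 = (D - 2) * (D - 3)
    return (q_e * 4.0 * math.sqrt(2.0 * alpha) * (2 * D - 7)
            / math.sqrt(D23 * (D * D - D - 8))) ** (1.0 / (D - 2))

for D in range(4, 11):
    assert D * D - D - 8 > 0             # the square root is real for D >= 4
    # quadrupling alpha multiplies rho_0 by 4^{1/(2(D-2))} = 2^{1/(D-2)}
    assert math.isclose(rho0(D, 4.0) / rho0(D, 1.0), 2.0 ** (1.0 / (D - 2)))
    assert rho0(D, 1e-8) < rho0(D, 1.0)  # the horizon shrinks with alpha
```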
Other coefficients are expressed in terms of the horizon radius $\rho_0$ and $P_1$, the first sub-leading coefficients being \begin{eqnarray} \omega_3 &=& - \frac{2 \alpha P_1}{3 a^2 \rho_0^4 (D^3 - 9D^2 + 16D + 8)} \Bigl[ D^2_3 (3D^5 - 43D^4 + 213D^3 - 421D^2 + 236D + 76) a^2 \nonumber\\ && + D^3_3 (2D - 7) (3D^4 - 25D^3 + 70D^2 - 56D - 40) \Bigr], \nonumber\\ \rho_1 &=& \frac{4 \alpha P_1 [ (D^4 - 13D^3 + 54D^2 - 72D - 2) a^2 + (2D - 7)(D^2 - 3D -2)]}{a^2 \rho_0 (D^3 - 9D^2 + 16D + 8)}. \end{eqnarray} One can notice that the free parameter enters the expansion coefficients always in the combination $P_1/a^2$. This facilitates the transition to the Wiltshire case. In the limit of decoupled dilaton $a = 0$, the parameter $P_1 \to 0$, while the ratio $P_1/a^2$ remains finite. In this case we have nonvanishing coefficients $P_0, \; \rho_1$ and $\omega_i$: \begin{equation} P(r) = P_0, \qquad \rho(r) = \rho_0 + \rho_1 (r - r_0). \end{equation} The asymptotic flatness requires $\rho_1 = 1$, so we have $\rho_0 = r_0$. For the Wiltshire solution $P_0 = 1$, and we obtain the following relation in the extremal case: \begin{equation} \rho_0^2 = r_0^2 = 4 \alpha (2D - 7). \label{ddc} \end{equation} Substituting (\ref{ddc}) into (\ref{gpWe}), we find \begin{equation} \label{gpWe1} {\cal M} = \frac{\Omega_{D-2}}{32\pi} \frac{D^2_2 (D^2 - 12)}{2D-7} \, r_0^{D-3}, \qquad q_e^2 = \frac1{8} \frac{D^2_3 (D^2-D-8)}{2D-7} \, r_0^{2(D-3)}, \end{equation} and so our solution with the decoupled dilaton coincides with the extremal case of the Wiltshire solution. We can consider the subgroup of global symmetry transformations defined by the two parameters $\delta$ and $\mu$. We can eliminate the parameter $\rho_0$ from the expansion on the horizon if we apply the transformation with parameters $\mu = -2 \ln\rho_0$ and $\delta = - \ln\rho_0$. The other transformation, with parameters $\mu = 2 \ln|P_1|, \, \delta = 0$, can remove $P_1$ from the expansions. 
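The consistency of the decoupled-dilaton limit with the extremal Wiltshire solution, i.e. that substituting (\ref{ddc}) into (\ref{gpWe}) reproduces (\ref{gpWe1}) and also matches the near-horizon relation (\ref{defqe}) with $P_0 = 1$, can be checked with exact rational arithmetic. A sketch of ours, with the overall factors $\Omega_{D-2}/8\pi$ stripped off and $r_0 = 1$ (the relations are homogeneous in $r_0$):

```python
from fractions import Fraction as F

# Exact consistency check of the decoupled-dilaton limit: substitute
# rho_0^2 = r_0^2 = 4 alpha (2D-7), Eq. (ddc), into the extremal
# Wiltshire relations (gpWe) and compare with (gpWe1) and (defqe).
for D in range(5, 11):                 # Wiltshire solutions need D >= 5
    alpha = F(1, 4 * (2 * D - 7))      # from r_0^2 = 4 alpha (2D-7), r_0 = 1
    D23 = (D - 2) * (D - 3)
    # mass: (D-2)[r_0^2 + (D-4)^2 alpha] vs (D-2)(D^2-12)/(4(2D-7))
    lhs_m = (D - 2) * (1 + (D - 4) ** 2 * alpha)
    rhs_m = F((D - 2) * (D * D - 12), 4 * (2 * D - 7))
    assert lhs_m == rhs_m
    # charge: q_e^2 = (D23/2)[r_0^2 + (D-4)(D-5) alpha]
    qe2 = F(D23, 2) * (1 + (D - 4) * (D - 5) * alpha)
    assert qe2 == F(D23 * (D * D - D - 8), 8 * (2 * D - 7))
    # the same q_e^2 follows from the near-horizon relation (defqe)
    # with P_0 = 1: q_e^2 = D23 (D^2-D-8) / (32 alpha (2D-7)^2)
    qe2_nh = F(D23 * (D * D - D - 8)) / (32 * alpha * (2 * D - 7) ** 2)
    assert qe2 == qe2_nh
```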
Choosing the absolute value $|P_1|$ in the second transformation leaves in the horizon expansion only the discrete parameter $\xi = \frac{P_1}{|P_1|} = \pm 1$, which fixes the sign of $\rho_1$ (one needs $\rho_1 > 0$ to obtain a global solution). As a result, we have a map between the expansion parameters $P_1, \, \rho_0$ and the parameters $\mu, \, \delta$ of the global transformations. Typically one can first investigate a special solution with a simple choice of near horizon data, such as $P_1 = 1$ and $\rho_0 = 1$, which are the values we use for our numerical analysis below. Then general solutions with arbitrary values of the free parameters can be obtained simply by the global transformation with $\mu = 2\ln\frac{|P_1|}{\rho_0}, \, \delta = -\ln\rho_0$. Also, it is clear from the relation between $\rho_0$ and $q_e$ that the electric charge plays the role of a rescaling parameter. Finally, the free parameter $P_1$ can be fixed in accordance with the boundary condition at infinity. We can also eliminate the GB coupling constant $\alpha$ from the system by introducing a new dilaton function $F = \alpha\, P$ and rescaling the charge $q_e$. In this way, only two parameters, the space-time dimension $D$ and the dilaton coupling $a$, affect the dynamics of solutions. The values of the integrals of motion~(\ref{J1}) and (\ref{J2}) in terms of the parameters of the local solution are \begin{equation} H = - \alpha \rho_0^{D-6} D^2_5 P_0 - D^2_3 \rho_0^{D-4} + \frac{2 q_e^2}{P_0} \rho_0^{2-D}, \qquad J_{\delta} = 4 (D-2) q_e f_0. \end{equation} \subsection{The entropy} \label{entropy} Knowledge of the local solution near the horizon is enough to calculate the entropy of the black hole, assuming that the local solution can be extended to infinity. To compute the entropy, we apply Sen's entropy function approach~\cite{Sen:2007qy}, which is valid for black holes with the near horizon geometry $AdS_2 \times S^{D-2}$. 
Using the notation of~\cite{Sen:2007qy} we parametrize the near horizon geometry by two constants, $v_1$ and $v_2$, related to the radii of $AdS_2$ and $S^{D-2}$, as \begin{equation} ds^2 = v_1 \left( - r^2 d\tau^2 + \frac{dr^2}{r^2} \right) + v_2 d\Omega^2_{D-2}. \end{equation} The scalar curvature and the GB term will read \begin{equation} R = - \frac2{v_1} + \frac{D^2_3}{v_2}, \qquad \mathcal{L}_{GB} = \frac{D^2_5}{v_2^2} - \frac{4 D^2_3}{v_1 v_2}. \end{equation} The dilaton field and gauge field strength are constant on the horizon \begin{equation} \phi = u, \qquad F_{\tau r} = p. \end{equation} Sen's entropy function is defined to be the integrand of the action after integrating over the angular coordinates of $S^{D-2}$. Using (\ref{action}) we obtain \begin{equation} f = \frac{\Omega_{D-2}}{16 \pi G} v_1 v_2^{\frac{D-2}2} \left[ - \frac2{v_1} + \frac{D^2_3}{v_2} + \mathrm{e}^{2 a u} \frac{2 p^2}{v_1^2} + \alpha \mathrm{e}^{2 a u} \left( \frac{D^2_5}{v_2^2} - \frac{4 D^2_3}{v_1 v_2} \right) \right]. \end{equation} The parameters $v_1, v_2, u, p$ are related to the near horizon expansion coefficients by (note the rescaling of the time coordinate, $\tau = \omega_2 t$) \begin{equation} \omega_2 = \frac1{v_1}, \qquad \rho_0 = \sqrt{v_2}, \qquad P_0 = \mathrm{e}^{2 a u}, \qquad q_e = p \, v_1^{-1} \, v_2^{\frac{D-2}2} \,\mathrm{e}^{2 a u}. \end{equation} According to the equations of motion, the values of the parameters should extremize the entropy function: \begin{equation} \partial_{v_1} f = 0, \qquad \partial_{v_2} f = 0, \qquad \partial_u f = 0, \end{equation} which lead to the following constraints \begin{equation} v_2 = \frac{D^2_3}2 v_1, \qquad v_1 = \frac{4 (2 D - 7) p^2}{D^2 - D- 8} \mathrm{e}^{2 a u}, \qquad p^2 = \frac{2(D^2 - D - 8)}{D^2_3} \alpha, \end{equation} and furthermore imply $f = 0$. These three constraints are exactly identical to the relations (\ref{NHSol}) and (\ref{defqe}) from the near horizon analysis. The physical electric charge, $q$ (i.e. 
$Q_e$ defined in subsection~\ref{global}), can be obtained via $q = \partial_p f$ \begin{equation} q = \frac{\Omega_{D-2}}{4 \pi G} \; p \, v_1^{-1} \, v_2^{\frac{D-2}2} \mathrm{e}^{2 a u} = \frac{\Omega_{D-2}}{4 \pi G} \; q_e. \end{equation} The entropy of black holes is related to the entropy function by a Legendre transformation \begin{equation} S = 2 \pi (q p - f) = 2 \pi q p = \frac{D^2 - D - 8}{8 (2 D - 7) G} \; \Omega_{D-2} v_2^{\frac{D-2}2}. \end{equation} The horizon area (the volume of $S^{D-2}$) is $A = \Omega_{D-2} v_2^{\frac{D-2}2}$, thus the entropy can be expressed in terms of the horizon area as \begin{equation} S = \frac{D^2 - D - 8}{8 (2 D - 7) G} A = \frac{A}{4 G} + \frac{D^2 - 5 D + 6}{8 (2 D - 7) G} A = S_{BH} + S_{GB}, \end{equation} and the deviation of the entropy from the Bekenstein-Hawking value due to the GB term increases with the dimension. For example, the ratios $S_{GB}/S_{BH}$ for $D = 4$ to $10$ are \begin{equation} \frac{S_{GB}}{S_{BH}} = \left\{ 1, 1, \frac65, \frac{10}7, \frac53, \frac{21}{11}, \frac{28}{13} \right\}. \end{equation} A general discussion on the entropy of theories with quadratic curvature corrections and Lovelock theory is given in \cite{Cai:2007cz}. 
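Since the prefactor $v_1 v_2^{(D-2)/2}\,\Omega_{D-2}/(16\pi G)$ is non-vanishing, stationarity of the entropy function is equivalent to the vanishing of the square bracket and of its partial derivatives at the quoted point. The following sketch (ours; $\alpha = 1$, $u = 0$) checks this with exact fractions, together with the entropy ratios listed above:

```python
from fractions import Fraction as F

# f = [Omega/(16 pi G)] v1 v2^{(D-2)/2} B(v1, v2, u, p).  Since the
# prefactor is nonzero, df = 0 and f = 0 reduce to B = 0 and dB = 0.
# We set alpha = 1 and u = 0 (so e^{2au} = 1) and evaluate B and its
# partials at the claimed extremum
#   v1 = 8(2D-7)/D23,   v2 = (D23/2) v1,   p^2 = 2(D^2-D-8)/D23 .
for D in range(4, 11):
    D23 = F((D - 2) * (D - 3))
    D25 = D23 * (D - 4) * (D - 5)
    v1 = F(8 * (2 * D - 7)) / D23
    v2 = D23 * v1 / 2
    p2 = 2 * F(D * D - D - 8) / D23
    B = -2 / v1 + D23 / v2 + 2 * p2 / v1 ** 2 + D25 / v2 ** 2 - 4 * D23 / (v1 * v2)
    dB_v1 = 2 / v1 ** 2 - 4 * p2 / v1 ** 3 + 4 * D23 / (v1 ** 2 * v2)
    dB_v2 = -D23 / v2 ** 2 - 2 * D25 / v2 ** 3 + 4 * D23 / (v1 * v2 ** 2)
    dB_u = 2 * p2 / v1 ** 2 + D25 / v2 ** 2 - 4 * D23 / (v1 * v2)  # times 2a e^{2au}
    assert B == 0 and dB_v1 == 0 and dB_v2 == 0 and dB_u == 0

# entropy ratios S_GB/S_BH = (D^2 - 5D + 6)/(2(2D - 7)) for D = 4..10
ratios = [F(D * D - 5 * D + 6, 2 * (2 * D - 7)) for D in range(4, 11)]
assert ratios == [F(1), F(1), F(6, 5), F(10, 7), F(5, 3), F(21, 11), F(28, 13)]
# S/A = (D^2 - D - 8)/(8(2D - 7) G): A/(2G) for D = 4, 41A/(52G) for D = 10
assert F(4 ** 2 - 4 - 8, 8 * (2 * 4 - 7)) == F(1, 2)
assert F(10 ** 2 - 10 - 8, 8 * (2 * 10 - 7)) == F(41, 52)
```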
\section{Asymptotics} \label{asymptotics} Now consider the asymptotic expansions of the metric functions, obtained by substituting the following expansions into the equations of motion: \begin{equation} \omega(r) = 1 + \sum_{i=1} \frac{\bar\omega_i}{r^i}, \qquad \rho(r) = r + \sum_{i=1} \frac{\bar\rho_i}{r^i}, \qquad \phi(r) = \bar\phi_\infty + \sum_{i=1} \frac{\bar\phi_i}{r^i}. \end{equation} In accordance with the falloff of the Newton potential in different dimensions, the first non-vanishing terms in the expansions of $\omega$ and of the dilaton start at $i = D-3$, while $\rho$ first differs from $r$ at order $2D-7$: \begin{eqnarray} \omega(r) &=& 1 + \frac{\bar\omega_{D-3}}{r^{D-3}} + \frac{2 q_e^2 \, \mathrm{e}^{- 2 a \bar\phi_\infty}}{D^2_3} \frac1{r^{2(D-3)}} + O\left( \frac1{r^{2D-4}} \right), \nonumber\\ \rho(r) &=& r - \frac{(D-3) \bar\phi_{D-3}^2}{(D - 2)(2D - 7)} \frac1{r^{2D - 7}} + O\left( \frac1{r^{2D-5}} \right), \\ \phi(r) &=& \bar\phi_\infty + \frac{\bar\phi_{D-3}}{r^{D-3}} - \frac12 \left[ \frac{a q_e^2 \, \mathrm{e}^{- 2 a \bar\phi_\infty}}{(D-3)^2} + \bar\omega_{D-3} \bar\phi_{D-3} \right] \frac1{r^{2(D-3)}} + O\left( \frac1{r^{2D-4}} \right). \nonumber \end{eqnarray} One can notice that these terms of the expansion do not contain the GB coupling $\alpha$. The contribution of the GB term first becomes manifest in the third non-vanishing coefficient of $\rho$. If the GB term is switched off ($\alpha = 0$), the third non-vanishing coefficient of $\rho$ is \begin{equation} \bar\rho_{3D-10} = \frac{4}{3 D^2_3 (3D-10)} \bar\phi_{D-3} \left[ (D-3)^2 \bar\omega_{D-3} \bar\phi_{D-3} + a q_e^2 \, \mathrm{e}^{- 2 a \bar\phi_\infty} \right]. \end{equation} In the presence of the GB term it is \begin{equation} \bar\rho_{2D-5} = \frac{2(D-3)^2}{2D-5} \alpha a \bar\omega_{D-3} \bar\phi_{D-3} \mathrm{e}^{2 a \bar\phi_\infty}.
\end{equation} Thus in $D = 4$ the GR contribution dominates, appearing as $\bar\rho_2$; in $D = 5$ the GR and GB contributions both appear in $\bar\rho_5$; in higher dimensions the GB contribution is leading. For the asymptotically flat geometry the global physical quantities, such as the mass and charges, can be read off from the asymptotic expansion. Since the first sub-leading coefficients are independent of the GB coupling, we can still use the formulas for the global charges of the theories without higher curvature corrections. \subsection{Global charges} \label{global} The ADM mass is given in our notation by~\footnote{The volume of $S^{D-2}$ is $\Omega_{D-2} = \frac{2 \pi^{\frac{D-1}2}}{\Gamma(\frac{D-1}2)}$, where the gamma function is $\Gamma(n+1) = n!$ for integer $n$ and $\Gamma(\frac{n}2+1) = \sqrt\pi \frac{n!!}{2^{\frac{n+1}2}}$ for odd integer $n$. This gives $\Omega_{D-2} = \left\{ 4\pi, 2\pi^2, \frac83 \pi^2, \pi^3, \frac{16}{15} \pi^3, \frac13 \pi^4, \frac{32}{105} \pi^4 \right\}$ for $D = 4, \cdots, 10$.} \begin{equation} \mathcal{M} = \frac{\Omega_{D-2}}{8 \pi G} (D - 2) \left[ r^{D-3} \left( \frac1{\sqrt\omega} - \frac{\rho}{r} \right) - r^{D-2} \left( \frac{\rho}{r} \right)' \right]_{r \to \infty}, \end{equation} which reduces to \begin{equation} \mathcal{M} = - \frac{\Omega_{D-2}}{16 \pi G} (D - 2) \left[ \bar\omega_{D-3} - 2 (D-4) \bar\rho_{D-4} \right]. \end{equation} For $D > 4$, in general, the ADM mass could depend not only on the first sub-leading coefficient $\bar\omega_{D-3}$ of $\omega$, but also on the sub-leading coefficient $\bar\rho_{D-4}$ of $\rho$. But we have seen that this coefficient vanishes, so we have \begin{equation} \bar\omega_{D-3} = - \frac{16 \pi G \mathcal{M}}{(D-2) \Omega_{D-2}}.
\end{equation} The dilaton charge $\mathcal{D}$ is defined as \begin{equation} \mathcal{D} = \frac1{4 \pi G} \int_{r \to \infty} d\Omega_{D-2} \, r^{D-2} \, \partial_r \phi, \end{equation} which receives a contribution from the expansion coefficient \begin{equation} \bar\phi_{D-3} = -\frac{4 \pi G \mathcal{D}}{(D-3) \Omega_{D-2}}. \end{equation} The physical electric charge can be computed from the flux \begin{equation} Q_e = \frac1{4 \pi G} \int_{r \to \infty} d\Omega_{D-2} \, r^{D-2} \, \mathrm{e}^{2 a \phi_\infty} \, F_{tr}, \end{equation} so we have the following relation between this quantity and the charge introduced as an integration constant in the previous section: \begin{equation} q_e = \frac{4 \pi G}{\Omega_{D-2}} Q_e. \end{equation} The asymptotic values of the two integrals of motion~(\ref{J1}) and (\ref{J2}) are \begin{eqnarray} H^\infty &=& \left[ D_3^2 (r \rho')_\infty^{D-4} - \alpha D_5^2 P_{\infty}(r \rho')_\infty^{D-6} \right] (\omega_\infty \rho'_\infty{}^2 - 1), \label{J1a} \\ J_\delta^\infty &=& \left[ \alpha \mathrm{e}^{2 a \phi_\infty} D^2_5 D^4_4 (r \rho')_\infty^{D-5} (\omega_\infty \rho_\infty'^2 - 1) - D^2_4 (r \rho')_\infty^{D-3} \right] \frac{\omega_\infty \rho_\infty'^2 - 1}{\rho_\infty'} \nonumber\\ && + 4 D_2^2 q_e f_\infty - 2 D_2^2 D_3^2 \mathcal{M} \rho_\infty'^{D-2} - 4 D_3^3 \omega_\infty \rho_\infty'^{D-2} \frac{\mathcal{D}}{a}. \end{eqnarray} From $H = 0$, we can see that $\omega_\infty \rho_\infty'^2 \to 1$ for $r \to \infty$, which also regularizes the second integral of motion. For Minkowski space ($\omega_\infty = \rho'_{\infty} = 1$), we have \begin{equation} J_\delta^\infty = 4 D_2^2 q_e f_\infty - 2 D_2^2 D_3^2 \mathcal{M} - 4 D_3^3 \frac{\mathcal{D}}{a}. \end{equation} It is possible to apply a global transformation enforcing the asymptotic flatness condition, which fixes one of the free parameters of the near-horizon expansion, $P_1$.
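The unit-sphere volumes quoted in the footnote of this subsection can be verified directly; a minimal check:

```python
from math import pi, gamma

# Check the values Omega_{D-2} = 2 pi^{(D-1)/2} / Gamma((D-1)/2) quoted in
# the footnote for D = 4, ..., 10.
def omega(D):
    return 2 * pi ** ((D - 1) / 2) / gamma((D - 1) / 2)

expected = [4 * pi, 2 * pi ** 2, 8 * pi ** 2 / 3, pi ** 3,
            16 * pi ** 3 / 15, pi ** 4 / 3, 32 * pi ** 4 / 105]
for D, val in zip(range(4, 11), expected):
    assert abs(omega(D) - val) < 1e-12 * val
print([round(omega(D), 6) for D in range(4, 11)])
```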
Note that the values of the integrals of motion are four times those of~\cite{Chen:2006ge}: \begin{equation*} H = \frac12 \left( \omega_\infty \rho'^2_\infty - 1 \right), \quad J_\delta = 2 \, q_e \,f_{\infty} - \mathcal{M} - \frac{\mathcal{D}}a. \end{equation*} \subsection{BPS condition} \label{bpsCond} The theory we are considering here does not necessarily have an underlying supersymmetry. However, it is instructive to investigate the fulfilment of the no-force condition which is usually associated with supersymmetry. In particular, in the $D = 4$ case the supersymmetric embedding into the heterotic string theory in the supergravity limit gives the BPS condition for the extremal small black holes ($\mathcal{Q}_e = Q_e \, \mathrm{e}^{- a \phi_\infty}$) \begin{equation} \mathcal{M}^2 + \mathcal{D}^2 = \mathcal{Q}_e^2. \end{equation} This corresponds to the vanishing of the sum of the gravitational and dilaton attractive forces and the electric repulsion. This does not hold if the GB term is turned on. However, it was demonstrated in~\cite{Chen:2006ge} that on the boundary of the allowed domain of the dilaton coupling the role of the GB term is diminished, and the BPS condition is restored. Our aim here is to confirm this property. In the higher-dimensional cases the gravitational, Coulomb and dilaton forces are \begin{eqnarray} F_g &\sim& - \frac{8 \pi G (D-3)}{(D-2) \Omega_{D-2}} \frac{\mathcal{M}^2}{r^{D-2}}, \nonumber\\ F_A &\sim& \frac{4 \pi G}{\Omega_{D-2}} \frac{\mathcal{Q}_e^2}{r^{D-2}}, \nonumber\\ F_\phi &\sim& - \frac{4 \pi G}{\Omega_{D-2}} \frac{\mathcal{D}^2}{r^{D-2}}, \end{eqnarray} so the no-force condition reads \begin{equation} \label{noforce} 2 (D - 3) \mathcal{M}^2 + (D - 2) \mathcal{D}^2 = (D - 2) \mathcal{Q}_e^2. \end{equation} In the case that the GB term is decoupled, i.e.
$\alpha = 0$, the no-force condition (\ref{noforce}) at infinity is equivalent to the degenerate-horizon condition obtained for the exact extremal dilatonic black hole solutions in the previous section \begin{equation} \mathcal{D} = a \mathcal{M}, \quad \mathcal{Q}_e = \sqrt{\Delta} \mathcal{M} \quad \Rightarrow \quad a^2 \mathcal{M}^2 + \mathcal{D}^2 = \frac{2 a^2}{\Delta} \mathcal{Q}_e^2. \label{bps} \end{equation} Note that the relation (\ref{noforce}) does not involve explicitly the dilaton coupling (though it appears in the definition of $\mathcal{Q}_e$). The special case of $D = 4$ ($\Delta = a^2 + 1$) was earlier discussed in~\cite{Chen:2006ge}, in which case $\bar\omega_1 = - 2 \mathcal{M}, \bar\phi_1 = - \mathcal{D}$ (with a different sign convention). \section{Cusps} \label{cusp} It was discovered in~\cite{Chen:2006ge} that the 4D EGBD static spherically symmetric gravity typically develops cusps at some points $r = r_c$ in the vicinity of the points where the metric functions vanish. There the metric functions have expansions in terms of \begin{equation} y = |r - r_c|. \end{equation} The metric and its first derivative are regular there, while the second derivatives diverge as $y^{-1/2}$. The cusp hypersurfaces are therefore spheres $S^{D-2}$ of finite radius. These cusp spheres carry a curvature singularity which is rather mild (the Ricci scalar diverges only as $y^{-1/2}$, and the Kretschmann scalar as $1/y$). They are in fact the singular turning points of the radial variable $\rho(r)$. The presence of turning points in the numerical solutions was encountered in the case $D = 4$ in~\cite{Alexeev:1996vs, Pomazanov:2000, Chen:2006ge}. The numerical solution can be extended through these points using the technique of~\cite{Pomazanov:2000}, and then the solution evolves into a strong singularity. Here we find that the situation is similar in higher dimensions $D \leq 6$, but starting from $D = 7$ the solution can be extended to an asymptotically flat one.
\subsection{Expansion near the turning points} \label{expansion} The general property of the turning points is that the metric functions and the exponential of the dilaton field, $f = \{ \omega(r), \rho(r), F(r) = \alpha \mathrm{e}^{2 a \phi(r)} \}$, have a finite first derivative and a divergent second derivative, i.e. \begin{equation} f'(r_\mathrm{tp}) = \mathrm{constant}, \qquad f''(r_\mathrm{tp}) \to \infty. \end{equation} The metric functions and the dilaton can be expanded in terms of fractional powers of the variable $y$ \begin{equation} f(y) = f_0 + \sum\limits_{i=2} f_i \, y^{\frac{i}2}, \end{equation} where we have either $y = r_\mathrm{tp} - r$ (the right turning point) or $y = r - r_\mathrm{tp}$ (the left turning point). These two types of turning points have opposite signs of the odd-order derivatives, \begin{equation} f^{(2n+1)}(r - r_\mathrm{tp}) = - f^{(2n+1)}(r_\mathrm{tp} - r), \end{equation} and the expansion coefficients, $\{ f_i \}$, obey the same ``iterative'' relations for both types of turning points. The expansions read \begin{eqnarray} \omega &=& \omega_0 + \omega_2 \, y + \omega_3 \, y^{\frac32} + O(y^2), \\ \rho &=& \rho_0 + \rho_2 \, y + \rho_3 \, y^{\frac32} + O(y^2), \\ F &=& F_0 + F_2 \, y + F_3 \, y^{\frac32} + O(y^2). \end{eqnarray} They contain four free parameters, namely $\omega_0,\, \rho_0,\, F_0$ and $\rho_2$ (for the fixed charge parameter $q_e$), with the other coefficients depending on them. The coefficient $\rho_3$ is given by the roots of a quadratic equation, which has two branches (positive and negative) corresponding to a double-valued solution near the turning points. Similarly, $\omega_3$ and $F_3$ also have two-branch solutions. The exponents in the turning point expansions are independent of $D$. Therefore, the rate of divergence of the geometric quantities is universal.
More precisely, the scalar curvature is \begin{equation} R \sim - \frac34 \frac{\omega_3 \rho_0 + 2(D-2) \omega_0 \rho_3}{\rho_0} y^{-1/2}, \end{equation} while the matter stress tensor remains finite. Indeed, $\omega_3$ is proportional to $\rho_3$, which takes two values (of opposite sign) near the turning point; therefore the sign of the divergent scalar curvature also changes. Moreover, one would expect the GB combination to have a $y^{-1}$ divergence, but it is actually weaker, namely $y^{-1/2}$. For the numerical integration, we rewrite the equations of motion as a matrix equation of the dynamical system \begin{equation} \bm{A} \bm{x}' = \bm{b}, \end{equation} where the six-dimensional vector $\bm{x}$ denotes $\bm{x}(r) = \{ \omega(r), \omega'(r), \rho(r), \rho'(r), F(r), F'(r) \}$. The solution is ill-defined at the points where $\det\bm{A} = 0$. The turning points are special cases of this general situation (see~\cite{Pomazanov:2000} for a complete classification). We can extend solutions through the turning points by introducing a suitable new parameter $\sigma$: \begin{equation} \dot r = \frac{dr}{d\sigma} = \lambda \det\bm{A}, \end{equation} and generalizing the dynamical system by one dimension, i.e. $\tilde{\bm{x}}(\sigma) = \{ \omega(\sigma), \dot\omega(\sigma), \rho(\sigma), \dot\rho(\sigma), F(\sigma), \dot F(\sigma), r(\sigma) \}$. The matrix equation then becomes \begin{equation} \tilde{\bm{A}} \dot{\tilde{\bm{x}}} = \tilde{\bm{b}}, \qquad \tilde{\bm{A}} = \left( \begin{array}{cc} \bm{A} & - \bm{b} \\ 0 & 1 \end{array} \right), \qquad \tilde{\bm{b}} = \left( \begin{array}{c} 0 \\ \lambda \det\bm{A} \end{array} \right). \end{equation} The parameter $\lambda$ can be fixed by a normalization of $\dot{\tilde{\bm{x}}}$. We choose $\dot{\tilde{\bm{x}}}^2 = \dot{\bm{x}}^2 + \dot r^2 = 1$, which ensures that $\dot r$ stays finite for both small and large values of $\det\bm{A}$ and is useful for the numerical calculation.
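The reparametrization trick can be illustrated on a toy system (not the EMDGB equations): for $y\,y' = -r$, whose solutions are circles, the ``matrix'' $A = y$ degenerates at the turning point $r = 1$; promoting $r$ to a dynamical variable with $\dot r = \lambda \det\bm{A}$ lets even a simple Euler integrator pass smoothly through it. All names below are illustrative.

```python
import math

# Toy illustration of the desingularization: integrate y y' = -r (circles
# r^2 + y^2 = const), for which det A = y vanishes at the turning point.
# Promote r to a dynamical variable: dr/dsigma = lam*detA, dy/dsigma = lam*b,
# with lam fixed by unit "velocity" of the extended state, as in the text.
def step(state, h):
    r, y = state
    det_a, b = y, -r
    lam = 1.0 / math.hypot(det_a, b)   # normalization: |d(state)/dsigma| = 1
    return (r + h * lam * det_a, y + h * lam * b)

state = (0.0, 1.0)                      # start on the unit circle
for _ in range(40000):                  # total parameter length sigma = 4
    state = step(state, 1e-4)
r, y = state
print(r, y, r * r + y * y)              # trajectory passed through (1, 0)
```

A naive integration of $y' = -r/y$ in the original variable $r$ would blow up at $y = 0$; in the $\sigma$ parametrization the turning point is a regular point of the extended system.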
In terms of $\sigma$, we have the following behavior near a turning point \begin{equation} r(\sigma) = r_\mathrm{tp} + r_1 (\sigma_\mathrm{tp} - \sigma)^2 + \cdots, \end{equation} where $r_1 < 0$ for the right turning points and $r_1 >0$ for the left ones. The metric can be rewritten as \begin{equation} \label{metricsigma} ds^2 = - \omega(\sigma) dt^2 + \frac{d\sigma^2}{W(\sigma)} + \rho^2(\sigma) d\Omega_{D-2}^2, \end{equation} where $W = \omega / \dot r^2$. Now the functions $\omega, W, \rho$ are single-valued functions of $\sigma$. \subsection{Geodesics near the turning points} \label{geodesics} As mentioned before, the curvature diverges only weakly at a turning point, so one would expect the behavior near the turning points to be much milder than near a genuine singularity. Let us therefore examine the radial geodesics, with $t$ and $\sigma$ as functions of the proper time $\lambda$ ($d\theta/d\lambda = 0 = d\phi/d\lambda$). The relevant Christoffel symbols are \begin{equation} \Gamma^t{}_{t\sigma} = \frac12 \frac{\dot \omega}{\omega}, \qquad \Gamma^\sigma{}_{tt} = \frac12 W \dot \omega, \qquad \Gamma^\sigma{}_{\sigma\sigma} = - \frac12 \frac{\dot W}{W}. \end{equation} The geodesic equation for $t$, \begin{equation} \label{GeoT} \frac{d^2 t}{d\lambda^2} + \frac{\dot \omega}{\omega} \frac{dt}{d\lambda} \frac{d\sigma}{d\lambda} = 0, \end{equation} can be simplified to \begin{equation} \frac{d}{d\lambda}\left( \omega \frac{dt}{d\lambda} \right) = 0, \end{equation} or \begin{equation} \frac{dt}{d\lambda} = \frac{C}{\omega}, \end{equation} where the integration constant $C > 0$ is the ``energy'' per unit mass of a test particle at infinity. The geodesic equation for the $\sigma$ coordinate is \begin{equation} \label{Geqsigma} \frac{d^2 \sigma}{d\lambda^2} + \frac12 \left[ W \dot \omega \left( \frac{dt}{d\lambda} \right)^2 - \frac{\dot W}{W} \left( \frac{d\sigma}{d\lambda} \right)^2 \right] = 0.
\end{equation} After integration, it reduces to \begin{equation} k = \omega \left( \frac{dt}{d\lambda} \right)^2 - \frac1{W} \left( \frac{d\sigma}{d\lambda} \right)^2, \end{equation} or \begin{equation} \left(\frac{d\sigma}{d\lambda} \right)^2 = W \left( \frac{C^2}{\omega} - k \right) = \frac{C^2 - k \omega}{\dot r^2} , \end{equation} where $k = 1$ for a time-like geodesic and $k = 0$ for a null geodesic. The geodesic solutions are \begin{equation} \label{solsigma2} t = \frac{C}{\omega_0} \lambda, \qquad (\sigma - \sigma_\mathrm{tp})^2 = - \frac{k \omega_2}{4 r_1^2} \lambda^2 \pm \frac{\sqrt{C^2 - k \omega_0}}{|r_1|} \lambda. \end{equation} There are two possible solutions for $\sigma$: the minus branch is valid for $-\infty < \lambda \le 0$ and the plus branch for $0 \le \lambda < \infty$, and both geodesics terminate at the turning point ($\lambda = 0$). The only possible extension of the geodesic solution is ``gluing'' these two solutions, i.e. \begin{equation} (\sigma - \sigma_\mathrm{tp})^2 = - \frac{k \omega_2}{4 r_1^2} \lambda^2 + \mathrm{sign}(\lambda) \frac{\sqrt{C^2 - k \omega_0}}{|r_1|} \lambda. \end{equation} However, one can easily check that the result is not a smooth solution at $\lambda = 0$, and the second derivative of $\sigma$ with respect to $\lambda$ generates a delta function. Therefore, such an extension, in general, is not a solution of (\ref{Geqsigma}). A similar situation occurs for the time-like geodesic. Hence the geodesics cannot be extended through the cusp turning points. There is, however, an exception for the special values $C^2 = k \omega_0$ with $k \omega_2 < 0$ (if $\omega_2 > 0$ the time-like geodesic cannot reach the turning point). It is instructive to compare the divergences in various cases. To study this, we use the Kretschmann scalar $K = R^{\alpha\beta\gamma\delta} R_{\alpha\beta\gamma\delta}$.
The Schwarzschild black hole has $K \sim r^{-2(D-1)}$ at $r = 0$, the Reissner-Nordstr\"{o}m black hole has $K \sim r^{-4(D-2)}$ at $r = 0$, and the pure GB gravity black hole ($D > 5$) has, at $r=0$, $K \sim r^{-2(D-1)}$ for the charged solution and $K \sim r^{-(D-1)}$ for the neutral one. For the extremal dilatonic black hole with the GB term, $K \sim y^{-1}$ at a turning point in any dimension, the scalar curvature being \begin{equation} R = \pm \frac3{4\rho_0} \frac{2 \, \omega_0 \rho_3 (D-2) + \rho_0\, \omega_3}{\sqrt{y}}, \end{equation} where $y = r_{tp} - r$ for the upper sign (the first type of turning point) and $y = r - r_{tp}$ for the lower one (the second type). If $\rho_3 = 0$, the scalar curvature at such a point is regular, but this imposes a constraint on the parameters, on which $\rho_3$ depends. \section{Numerical Results} \label{numerical} Let us first consider the special limit $a \to 0$ in which the dilaton field decouples. The general analytic solution is \begin{equation} \omega(r) = 1 + \frac{r^2}{2 D^3_4 \alpha} \left( 1 \mp \sqrt{1 + \frac{64 \pi D^3_4 \alpha \mathcal{M}}{(D-2) \mathrm{vol}(\Omega_{D-2}) r^{D-1}} - \frac{8 D^4_4 \alpha \, q_e^2}{D^2_2 r^{2(D-2)}} + \frac{4 D^3_4 \alpha \Lambda}{D^1_2}} \right), \quad \rho(r) = r. \end{equation} For the case $\Lambda = 0$, the radius of the degenerate (extremal) horizon is $r_0^2 = 4 (2D - 7) \alpha$, which can be obtained from (\ref{ddc}). It is clear that the horizon shrinks to a point when we turn off the GB term. Although the dilaton field is decoupled, in the dimensions $D \geq 5$ the GB term still gives a non-trivial contribution to the equations of motion, which breaks the BPS (no-force) condition. In more detail, the ratio of the mass to the electric charge for an extremal solution can be computed as \begin{equation} \frac{\Delta \, \mathcal{M}}{Q_e^2} = \frac{(D^2 - 12)^2}{4 (D^2 - D - 8) (2D - 7)} \ge 1, \end{equation} and our numerical analysis gives consistent results in the decoupling limit.
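The decoupling-limit ratio just quoted can be tabulated explicitly; a quick check that it equals 1 at $D = 4$ (BPS saturation) and exceeds 1 for $5 \le D \le 10$:

```python
# Evaluate (D^2 - 12)^2 / (4 (D^2 - D - 8)(2D - 7)) for D = 4..10: the bound
# is saturated only in D = 4, in agreement with the text.
def ratio(D):
    return (D ** 2 - 12) ** 2 / (4 * (D ** 2 - D - 8) * (2 * D - 7))

values = {D: ratio(D) for D in range(4, 11)}
print({D: round(v, 4) for D, v in values.items()})
```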
Moreover, the numerical analysis also indicates that the dilaton charge is proportional to the dilaton coupling times the mass, with \begin{equation} \frac{a^2 \mathcal{M}^2}{\mathcal{D}^2} \ge 1, \end{equation} but the exact form of the ratio is still unknown. In these two ratios, the equality holds for $D = 4$, which saturates the BPS condition~(\ref{bps}). In the numerical results, we are going to show the following quantities. From the symmetries (\ref{symL}), we know that if we regard the electric charge as a scaling parameter, the following ratios will depend only on the dilaton coupling: \begin{equation} k_M(a) = \frac{\mathcal{M}^\frac{D-2}{D-3}}{Q_e}, \qquad k_D(a) = \frac{\mathcal{D}^\frac{D-2}{D-3}}{Q_e}, \qquad k_F(a) = \frac{\alpha \, \mathrm{e}^{(D-2) a \phi_\infty}}{Q_e}. \label{ratio} \end{equation} To verify the BPS conditions, we will analyze the ratios \begin{equation} \frac{a^2 \mathcal{M}^2}{\mathcal{D}^2}, \qquad \frac{\Delta \, \mathcal{M}^2}{\mathcal{Q}_e^2}, \qquad k_{BPS} = \frac{\Delta (a^2 \mathcal{M}^2 + \mathcal{D}^2)}{2 a^2 \mathcal{Q}_e^2}. \end{equation} We now present our numerical results. \subsection{$D = 4$ } \label{d4} For convenience, we first recall the results for $D = 4$ found in~\cite{Chen:2006ge}. Starting with small $a$ (for $a=0$ one has the Reissner-Nordstr\"om solution), one finds that asymptotically flat black holes with degenerate event horizons exist up to $a = a_{\rm cr} = 0.488219703$. The critical value of the dilaton coupling $a_{\rm cr}$ separates the region where regular asymptotically flat solutions exist, $a < a_{\rm cr}$, from the singular one, $a > a_{\rm cr}$, where the solutions first develop a throat ($\rho' = 0$), then a turning point where $\rho'$ changes sign and $\rho''$ diverges, and finally a singular point. In the limit $a \to a_{\rm cr}$, the mass $\mathcal{M}$ diverges and, somewhat surprisingly, the BPS condition of the theory without curvature corrections holds for the ratios of the parameters.
This can be understood as dominance of the Einstein term over the Gauss-Bonnet one. Indeed, if we keep the mass fixed, the limit corresponds to the gravitational constant $G$ going to zero. Then the Einstein term becomes greater, unless the GB term increases similarly. The critical value of the dilaton coupling is less than the heterotic string value $a=1$ (or $1/2$, depending on conventions), so no asymptotically flat extremal EMDGB black holes exist in $D=4$. \subsection{$D = 5$ and $6$} \label{d5} For $D = 5$ and $6$, the dynamics of the system is similar: the asymptotically flat solutions exist up to the critical value of the dilaton coupling $a_{\rm cr}$, and in the limit $a \to a_{\rm cr}$ the BPS conditions are satisfied, namely $k_{BPS} \to 1$, as shown in Figs.~\ref{BPS5} and \ref{BPS6}. \begin{figure}[ht] \includegraphics[width=12cm]{k_ratio5.eps} \caption{Left: $k_M(a), k_D(a)$ and $k_F(a)$ (multiplied by a factor $10^3$) as functions of $a$ in $D=5$. Right: $k_{BPS}$ and the ratios $\frac{\Delta \, \mathcal{M}^2}{\mathcal{Q}_e^2}, \frac{a^2 \mathcal{M}^2}{\mathcal{D}^2}$.} \label{BPS5} \end{figure} The critical value of the dilaton coupling for $D=5$ is found to be $a_{\rm cr}^{D=5} = 0.872422988$. The corresponding string value (\ref{astring}) is smaller: $a^{D=5}_{\rm str}=0.816496580$, so asymptotically flat stretched dilatonic black holes exist in five dimensions. For $D=6$ we obtain $a_{\rm cr}^{D=6} = 1.432881972$, the corresponding string value (\ref{astring}) being $a^{D=6}_{\rm str}=0.707106781$. We observe that while the critical dilaton coupling increases with $D$, the string value (\ref{astring}) decreases, so we expect that for $D>4$ we will always have $a_{\rm str} < a_{\rm cr}$. The metric functions and the dilaton exponential $F$ for asymptotically flat black holes are given in Fig.~\ref{AF5} for $a = 0.3$ and $0.8$ (both smaller than the critical value in $D = 5$), and in Fig.~\ref{AF6} for $a = 0.4$ and $1.4$ in $D = 6$.
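The string values quoted here and in the following subsections are all consistent with the closed form $a_{\rm str}(D) = \sqrt{2/(D-2)}$; this is our reading of (\ref{astring}), which is defined elsewhere in the paper, so treat the formula as an assumption used only for this cross-check.

```python
from math import sqrt

# Hedged cross-check: the quoted heterotic string couplings should match
# a_str(D) = sqrt(2/(D-2)) -- an inferred closed form, not taken verbatim
# from eq. (astring), which lives outside this section.
quoted = {5: 0.816496580, 6: 0.707106781, 7: 0.632455532,
          8: 0.577350269, 9: 0.534522483, 10: 0.5}
for D, val in quoted.items():
    assert abs(sqrt(2 / (D - 2)) - val) < 1e-8
print("all quoted a_str values match sqrt(2/(D-2))")
```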
Those for $a$ larger than the critical value are displayed in Fig.~\ref{NAF5} in $D = 5$ and in Fig.~\ref{NAF6} in $D = 6$. \begin{figure}[ht] \includegraphics[width=12cm]{w_rhop_F5AF.eps} \caption{Radial dependence of the metric functions $\omega, \rho'$ and the dilaton exponential $F$ for asymptotically flat black holes ($a < a_{\rm cr}^{D=5}$) with dilaton couplings $a_1 = 0.3$ and $a_2 = 0.8$ in $D=5$.} \label{AF5} \end{figure} \begin{figure}[ht] \includegraphics[width=12cm]{w_rhop5NAF.eps} \caption{Radial dependence of the metric functions $\omega, \rho'$ for singular solutions ($a > a_{\rm cr}^{D=5}$) with dilaton coupling $a = 0.9$ in $D=5$.} \label{NAF5} \end{figure} \begin{figure}[ht] \includegraphics[width=12cm]{k_ratio6.eps} \caption{Left: $k_M(a), k_D(a)$ and $k_F(a)$ (multiplied by a factor $10^4$) in $D=6$. Right: the ratios $k_{BPS}$ and $\frac{\Delta \, \mathcal{M}^2}{\mathcal{Q}_e^2}, \frac{a^2 \mathcal{M}^2}{\mathcal{D}^2}$.} \label{BPS6} \end{figure} \begin{figure}[ht] \includegraphics[width=12cm]{w_rhop_F6AF.eps} \caption{Radial dependence of the metric functions $\omega, \rho'$ and the dilaton exponential $F$ for asymptotically flat black holes ($a < a_{\rm cr}^{D=6}$) with dilaton couplings $a_1 = 0.4$, $a_2 = 1.4$ in $D=6$.} \label{AF6} \end{figure} \begin{figure}[ht] \includegraphics[width=12cm]{w_rhop6NAF.eps} \caption{Radial dependence of the metric functions $\omega, \rho'$ for singular solutions ($a > a_{\rm cr}^{D=6}$) with dilaton coupling $a = 1.5$ in $D=6$.} \label{NAF6} \end{figure} \subsection{$D = 7$} \label{d7} The critical value of the dilaton coupling is $a^{D=7}_\mathrm{cr} = 1.793909999$, the heterotic string value being $a^{D=7}_{\rm str} = 0.632455532$. As in lower dimensions, the critical value corresponds to the appearance of the first cusp (turning point) in the solution. The novel feature in $D = 7$ is that after the right turning point a left one appears, and the solution can be extended along the lines of~\cite{Alexeev:1997ua}.
Using the same procedure one can extend the solution to an asymptotically flat one, as shown in Fig.~\ref{AF7a}. With further increase of the dilaton coupling the number of pairs of turning points grows, so the asymptotically flat extended solutions look as shown in Fig.~\ref{AF7b}. The global parameters change in a step-function-like manner each time a new turning point is created, see Fig.~\ref{k7}. However, the extended solution cannot be considered as a true black hole solution, since geodesics, as we have shown in Sec.~V~B, cannot be continued smoothly through the cusp singularities. Therefore we have to consider the critical value of the dilaton coupling in $D = 7$ as the true boundary of the range of $a$. In the case $D = 4$~\cite{Chen:2006ge} it was observed that on both boundaries of the dilaton coupling range, $a \to 0$ and $a \to a_\mathrm{cr}$, the BPS condition of the EMD theory is saturated. This can be understood as an indication that the GB term is decoupled in these two limits. Indeed, for $a=0$ it is obvious, while in the limit $a \to a_\mathrm{cr}$ the mass tends to infinity, in which case the Einstein term turns out to be dominant. In higher dimensions the situation is different. For $D \ge 5$ decoupling of the dilaton ($a=0$) does not switch off the GB term; instead we have to deal with Wiltshire solutions of the EMGB theory. So the BPS saturation is not expected for $a=0$, and this is confirmed by numerical calculations. For $D=5, 6$ the BPS condition still holds on the right boundary of the dilaton coupling $a \to a_\mathrm{cr}$ (see Figs.~\ref{BPS5}, \ref{BPS6}). However, in $D=7$ the BPS condition does not hold anymore (see Fig.~\ref{BPS7}). This means that the GB term does not decouple in the limit $a \to a_\mathrm{cr}$ as it does in lower dimensions. \begin{figure}[ht] \includegraphics[width=10cm]{w_rhop7AFa.eps} \caption{Dependence of the metric functions $w$ and $\rho'$ on the radial coordinate for $a=1.8$ in $D=7$ with two turning points.
(Insets show the behavior of the solution near the turning points.)} \label{AF7a} \end{figure} \begin{figure}[ht] \includegraphics[width=10cm]{w_rhop7AFb.eps} \caption{Dependence of the metric functions $w$ and $\rho'$ on the radial coordinate for $a=1.81$ in $D=7$. There are fourteen turning points.} \label{AF7b} \end{figure} \begin{figure}[ht] \includegraphics[width=12cm]{k7.eps} \caption{Left: $k_M(a), k_D(a)$ and $k_F(a)$ in the region before the formation of turning points in $D=7$. Right: the same quantities after the formation of turning points (the number of turning points is denoted by italic font).} \label{k7} \end{figure} \begin{figure}[ht] \includegraphics[width=12cm]{ratio7.eps} \caption{Left: Ratios $k_{BPS}$ and $\frac{\Delta \, \mathcal{M}^2}{\mathcal{Q}_e^2}, \frac{a^2 \mathcal{M}^2}{\mathcal{D}^2}$ in the region before the formation of turning points in $D=7$. Right: those after the formation of turning points (the number of turning points is denoted by italic font).} \label{BPS7} \end{figure} \begin{figure}[ht] \includegraphics[width=10cm]{w_rhop7s1.eps} \caption{Dependence of the metric functions $w$ and $\rho'$ on the parameter $\sigma$ for $a = 1.8$ in $D=7$ with two turning points. (Insets show the behavior of the solution near the turning points.)} \label{B8} \end{figure} \begin{figure}[ht] \includegraphics[width=10cm]{w_rhop7s2.eps} \caption{Dependence of the metric functions $w$ and $\rho'$ on the parameter $\sigma$ for $a=1.81$ in $D=7$ with fourteen turning points.} \label{B9} \end{figure} \begin{figure}[ht] \includegraphics[width=4cm]{r7s.eps} \caption{Dependence of the original radial coordinate $r$ on the parameter $\sigma$ for $a=1.8$ in $D=7$ with two turning points.} \label{B10} \end{figure} \subsection{$D = 8, 9, 10$} \label{d8} Properties similar to those in the case $D=7$ were observed for the higher dimensional solutions $D=8, 9, 10$.
The critical values of the dilaton coupling are $a^{D=8}_\mathrm{cr} = 1.887653885$ ($a^{D=8}_\mathrm{str} = 0.577350269$), $a^{D=9}_\mathrm{cr} = 2.002906751$ ($a^{D=9}_\mathrm{str} = 0.534522483$) and $a^{D=10}_\mathrm{cr} = 2.121748877$ ($a^{D=10}_\mathrm{str} = 0.5$). The numerical results are presented in Figs.~\ref{B11}, \ref{r8}, \ref{B12}, \ref{r9}, \ref{B13}, \ref{r10}. The BPS condition is not fulfilled on either boundary of $a$. Supercritical solutions can be formally continued to infinity (as asymptotically flat) through the cusps, which come in pairs as in the case of $D=7$. However, we do not qualify them as physical black holes for the reasons explained in Sec.~V. \begin{figure}[ht] \includegraphics[width=12cm]{k8.eps} \caption{Left: $k_M(a), k_D(a)$ and $k_F(a)$ in $D=8$ in the region before the formation of turning points. Right: those after the formation of turning points.} \label{B11} \end{figure} \begin{figure}[ht] \includegraphics[width=12cm]{ratio8.eps} \caption{Left: Ratios $k_{BPS}$ and $\frac{\Delta \, \mathcal{M}^2}{\mathcal{Q}_e^2}, \frac{a^2 \mathcal{M}^2}{\mathcal{D}^2}$ in $D=8$ in the region before the formation of turning points. Right: those after the formation of turning points.} \label{r8} \end{figure} \begin{figure}[ht] \includegraphics[width=12cm]{k9.eps} \caption{Left: $k_M(a), k_D(a)$ and $k_F(a)$ in $D=9$ in the region before the formation of turning points. Right: those after the formation of turning points.} \label{B12} \end{figure} \begin{figure}[ht] \includegraphics[width=12cm]{ratio9.eps} \caption{Left: Ratios $k_{BPS}$ and $\frac{\Delta \, \mathcal{M}^2}{\mathcal{Q}_e^2}, \frac{a^2 \mathcal{M}^2}{\mathcal{D}^2}$ in $D=9$ in the region before the formation of turning points. Right: those after the formation of turning points.} \label{r9} \end{figure} \begin{figure}[ht] \includegraphics[width=12cm]{k10.eps} \caption{Left: $k_M(a), k_D(a)$ and $k_F(a)$ in $D=10$ in the region before the formation of turning points.
Right: those after the formation of turning points.} \label{B13} \end{figure} \begin{figure}[ht] \includegraphics[width=12cm]{ratio10.eps} \caption{Left: Ratios $k_{BPS}$ and $\frac{\Delta \, \mathcal{M}^2}{\mathcal{Q}_e^2}, \frac{a^2 \mathcal{M}^2}{\mathcal{D}^2}$ in $D=10$ in the region before the formation of turning points. Right: those after the formation of turning points.} \label{r10} \end{figure} \section {Conclusions} \label{concl} Here we summarize our findings. First, we have constructed explicit local solutions of the EMDGB static extremal black holes in the vicinity of the horizon and calculated the corresponding entropies. The ratio of the entropy to the Bekenstein-Hawking value $A/4$ increases from $2$ for $D = 4, 5$ to $41/13$ for $D = 10$ (equivalently, the correction $S_{GB}/S_{BH}$ grows from $1$ to $28/13$). The entropy does not depend on the dilaton coupling. Contrary to this, the asymptotic behavior of the solutions crucially depends on the dilaton coupling, and asymptotically flat black holes exist only for $a < a_{\rm cr}$. The critical value of the dilaton coupling depends on $D$ and increases with it. For $D = 4$, $a_{\rm cr}$ is smaller than the heterotic string value, therefore no stretched black holes exist in the effective heterotic theory. In contrast, for $D \ge 5$ the heterotic values of $a$ lie inside the allowed region. Numerical solutions for asymptotically flat black holes are constructed for $4 \le D \le 10$. We investigated the ratios of the mass, dilaton charge and electric charge which show, as functions of the dilaton coupling, the degree of deviation from the BPS bounds of the theory in the absence of the GB term. It is observed that for $D \le 6$ the BPS bound is saturated near the threshold value $a_{\rm cr}$, thus demonstrating that the contribution of the GB term is effectively small there. For larger $D$ such a behavior was not observed, indicating that the GB term remains important on the boundary.
The failure to reach the flat asymptotic region in numerical integration manifests itself as the emergence of turning points of the radial variable, at which the scalar curvature has a very mild divergence. The solutions then exhibit typical cusp-shaped behavior. It was suggested before that these turning points should be passed by changing the integration variable in a suitable way so that the solution can be continued through these singularities. We have found that in dimensions $D \ge 7$ the turning points come in pairs, and the solution can be formally extended to the flat asymptotic region. However, an inspection of radial geodesics reveals that they cannot be analytically continued through the cusp singularities, so we do not believe that continuation of numerical solutions through the cusps is physically meaningful. \section*{Acknowledgments} CMC is grateful to the AEI, Potsdam, for its hospitality in the early stages of this work. The work of CMC and DGO was supported by the National Science Council of the R.O.C. under the grant NSC 96-2112-M-008-006-MY3 and in part by the National Center of Theoretical Sciences (NCTS). The work of DG was supported by the RFBR under the project 08-02-01398-a. The work of NO was supported in part by the Grant-in-Aid for Scientific Research Fund of the JSPS Nos. 20540283, and also by the Japan-U.K. Research Cooperative Program. \begin{appendix} \section{Geometric Quantities for the Spherically Symmetric Metric} This appendix gives detailed geometric quantities associated with the following spherically symmetric metric in $D$ dimensions \begin{equation} ds^2 = - \mathrm{e}^{2 u(r)} dt^2 + \mathrm{e}^{2 v(r)} dr^2 + \mathrm{e}^{2 w(r)} d\Omega_{D-2, k}^2, \end{equation} where $k$ denotes the spatial curvature.
The Riemann and Ricci tensors have the following components \begin{eqnarray} R_{trtr} &=& \mathrm{e}^{2u} (u'' + u'^2 - u' v'), \nonumber\\ R_{tatb} &=& \mathrm{e}^{2u - 2v + 2w} u' w' \; g_{ab}, \nonumber\\ R_{rarb} &=& - \mathrm{e}^{2w} (w'' + w'^2 - v' w') \; g_{ab}, \nonumber\\ R_{acbd} &=& \left( - \mathrm{e}^{4w - 2v} w'^2 + k \, \mathrm{e}^{2w} \right) (1 - \delta_{ac} \delta_{bd}) \; g_{ab} g_{cd}, \nonumber\\ R_{tt} &=& \mathrm{e}^{2u - 2v} (u'' + u' H'), \nonumber\\ R_{rr} &=& - (u'' + u'^2 - u' v') - (D - 2) (w'' + w'^2 - v' w'), \nonumber\\ R_{ab} &=& \left[ - \mathrm{e}^{2w - 2v} (w'' + w' H') + k (D - 3) \right] g_{ab}, \end{eqnarray} and then the scalar curvature, Ricci square ($R_{\mu\nu}^2 = R_{\mu\nu} R^{\mu\nu}$) and Riemann square ($R_{\alpha\beta\mu\nu}^2 = R_{\alpha\beta\mu\nu} R^{\alpha\beta\mu\nu}$) are \begin{eqnarray} R &=& - \mathrm{e}^{-2 v} \left[ 2 u'' + u' H' + u'^2 - u' v' + D^2_2 ( 2 w'' + w' H' + w'^2 - v' w' ) \right] + k D^2_3 \, \mathrm{e}^{-2 w}, \nonumber\\ &=& - \mathrm{e}^{-2 v} \left[ 2 (u'' + u'^2 - u' v') + 2 D^2_2 (w'' + u' w' - v' w' + w'^2) + D^2_3 \left( w'^2 - k \mathrm{e}^{2v - 2w} \right) \right], \nonumber\\ R_{\mu\nu}^2 &=& \mathrm{e}^{-4 v} (u'' + u' H')^2 + \mathrm{e}^{-4 v} [ u'' + u'^2 - u' v' + D^2_2 ( w'' + w'^2 - v' w') ]^2 \nonumber\\ &+& D^2_2 \left[ \mathrm{e}^{-2 v} (w'' + w' H') - k D^3_3 \, \mathrm{e}^{-2 w} \right]^2, \\ R_{\alpha\beta\mu\nu}^2 &=& 4 \, \mathrm{e}^{-4 v} (u'' + u'^2 - u' v')^2 + 4 D^2_2 \, \mathrm{e}^{-4 v} u'^2 w'^2 + 4 D^2_2 \, \mathrm{e}^{-4 v} ( w'' + w'^2 - v' w')^2 \nonumber\\ &+& 2 D^2_3 \left( \mathrm{e}^{-2 v} w'^2 - k \, \mathrm{e}^{-2 w} \right)^2, \nonumber \end{eqnarray} where $H$ is defined \begin{equation} H = u - v + (D - 2) w, \end{equation} and the following notation is used \begin{equation} D^m_n = (D - m)_n = (D - m) (D - m - 1) \cdots (D - n), \qquad n \ge m. 
\end{equation} The GB combination is \begin{eqnarray} {\cal L}_{\rm GB} &=& D^2_3 \, \mathrm{e}^{- 4 v} \biggl\{ D^4_5 \left( w'^2 - k \, \mathrm{e}^{2v - 2w} \right)^2 + 8 u' w' (w'' + w'^2 - w' v') \nonumber\\ &+& 4 \left[ u'' + u'^2 - u' v' + D^4_4 (w'' + u' w' - v' w' + w'^2) \right] \left( w'^2 - k \, \mathrm{e}^{2v - 2w} \right) \biggr\} \nonumber\\ &=& 4 D^2_3 \, \mathrm{e}^{- u - v - 2w} \left[ \mathrm{e}^{u - 3v + 2w} u' \left( w'^2 - k \, \mathrm{e}^{2v - 2w} \right) \right]' \nonumber\\ &+& 4 D^2_4 \, \mathrm{e}^{- 4 v} (w'' + u' w' - v' w' + w'^2) \left( w'^2 - k \, \mathrm{e}^{2v - 2w} \right) \nonumber\\ &+& D^2_5 \, \mathrm{e}^{- 4 v} \left( w'^2 - k \, \mathrm{e}^{2v - 2w} \right)^2. \end{eqnarray} One can easily check that, in four dimensions, the GB term $\sqrt{-g} {\cal L}_{\rm GB}$ is a total derivative. For the gauge choice of coordinates \begin{equation} u = - v = \frac12 \ln\omega, \qquad w = \ln\rho, \end{equation} the relevant quantities become \begin{eqnarray} R &=& - \rho^{-2} \left[ \omega'' \rho^2 + 2 D^2_2 \rho (\omega \rho'' + \omega' \rho') + D^2_3 (\omega \rho'^2 - k) \right], \nonumber\\ &=& \rho^{2-D} \left[ - \left( \omega' \rho^{D-2} + 2 D^2_2 \omega \rho' \rho^{D-3} \right)' + D^2_2 \rho' (\omega \rho^{D-3})' + D^2_3 \rho^{D-4} k \right], \\ {\cal L}_{\rm GB} &=& D^2_3 \rho^{-4} \left\{ 2 \omega' \rho' \rho^2 (2 \omega \rho'' + \omega' \rho') + 2 \rho [\omega'' \rho + 2 D^4_4 (\omega \rho')'] (\omega \rho'^2 - k) + D^4_5 (\omega \rho'^2 - k)^2 \right\}, \nonumber\\ &=& 2 D^2_3 \rho^{-2} [\omega' (\omega \rho'^2 - k)]' + 4 D^2_4 \rho^{-3} (\omega \rho'' + \omega' \rho') (\omega \rho'^2 - k) + D^2_5 \rho^{-4} (\omega \rho'^2 - k)^2. \end{eqnarray} \end{appendix}
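The heterotic string values of the dilaton coupling quoted in this section are consistent with the standard relation $a_{\rm str} = \sqrt{2/(D-2)}$ (which also gives $a_{\rm str} = 1$ in $D=4$, matching the statement in the conclusions). A quick numerical check; the closed-form relation is our reading of the quoted digits, not stated explicitly in the text:

```python
import math

def a_str(D):
    # Assumed heterotic-string dilaton coupling in D dimensions:
    # a_str = sqrt(2 / (D - 2)); gives a_str = 1 for D = 4.
    return math.sqrt(2.0 / (D - 2))

# Values quoted in the text for D = 8, 9, 10.
quoted = {8: 0.577350269, 9: 0.534522483, 10: 0.5}
for D, val in quoted.items():
    assert abs(a_str(D) - val) < 1e-9
```

All three quoted values agree with the relation to the number of digits given.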
\section{Introduction}\label{sec:intro} Supernova (SN) explosions blow out the various heavy elements generated by the nucleosynthesis processes inside the progenitor stars. Meanwhile, the blast wave generated by the SN explosion sweeps up and heats the interstellar matter (ISM), forming the characteristic structure of each supernova remnant (SNR). In this way, the morphology of the shell structure provides information about the ambient density of the ISM. The \object{Cygnus Loop} is one of the brightest SNRs in the X-ray sky. Its age is estimated to be $\sim10,000$ yr and it is comparatively close to us (540 pc; Blair et al. 2005). The large apparent size (2.5\arcdeg$\times$3.5\arcdeg; Levenson et al. 1997) enables us to study the plasma structure of the Loop. The origin of the \object{Cygnus Loop} is thought to be a cavity explosion \citep{McCray79, Hester86, Hester94, Levenson97}. The progenitor is presumed to be a B0, 15$M_\odot$ \ star \citep{Levenson98} and some X-ray studies also estimated the progenitor mass to be 12-15$M_\odot$ \ (\textit{e.g.}, Tsunemi et al. 2007). From the morphological point of view, the \object{Cygnus Loop} is a typical shell-like SNR and it is almost circular in shape. However, a large break is seen in the south, known as the ``blowout'' region \citep{Aschenbach99}. The origin of the ``blowout'' is not well understood. \citet{Aschenbach99} explained this extended structure as a breakout into a lower density ISM. On the other hand, based on a radio observation, \citet{Uyaniker02} suggested the existence of a secondary SNR in the south. Some other radio observations support this conclusion \citep{Uyaniker04, Sun06}. Recently, \citet{Uchida08} observed this region with the \textit{XMM-Newton} observatory and found that the X-ray spectra of this region consist of two plasma components with different electron temperatures.
Judging from the plasma structures and the metal distributions, they concluded that the X-ray emission is consistent with a \object{Cygnus Loop} origin and that the two plasma components are derived from the ejecta and the cavity material, respectively. They also showed that the X-ray shell is thin in their fields of view (FOV) and concluded that the origin of the blowout can be explained as a breakout into a lower density ISM, as proposed by \citet{Aschenbach99}. It is natural to consider that such a breakout may also exist along the line of sight. \citet{Tsunemi07} observed the \object{Cygnus Loop} along the diameter with \textit{XMM-Newton} and discussed this possibility. They divided their FOV into a north path and a south path and found the thin shell region to be 5\arcmin \ in the south path and 20\arcmin \ in the north path. They estimated this thin shell region to have a diameter of 1\arcdeg, centering on $\alpha = 20^{\mathrm h}49^{\mathrm m}11^{\mathrm s}$, $\delta = 31\arcdeg05\arcmin20\arcsec$. \citet{Kimura09} expanded this observation northward with \textit{Suzaku} and found that the flux of the swept-up matter in the southwest is about a third of that in the northeast. The width of this region is $\sim50\arcmin$. \citet{Kimura09} presumed that there is a blowout region in the direction of our line of sight in the middle west of the Loop. In this paper, we use data from 41 observations obtained with the \textit{Suzaku} and the \textit{XMM-Newton} observatories. We reanalyzed all the data to conduct a comprehensive study of the shell structure of the \object{Cygnus Loop}. \section{Observations} We summarize the 41 observations in table \ref{tab:sum}. All the data were taken between 2002 and 2008. The observing regions are shown in the top panel of figure \ref{fig:HRI}. The circles and rectangles represent the FOV of the \textit{XMM-Newton} MOS and the \textit{Suzaku} XIS, respectively. All of the \textit{Suzaku} data were analyzed with version 6.5 of the HEAsoft tools.
For reduction of the \textit{Suzaku} data, we used version 9 of the Suzaku Software. The calibration database (CALDB) used was updated in July 2008. We used revision 2.2 of the cleaned event data and combined the 3$\times$3 and 5$\times$5 event files. We applied the spaced-row charge injection (SCI) method \citep{Prigozhin08} to the P1, P2, P3, P4, P5, P6, P7, P9, P10, P11, P18, P19, P20, P21, P22, P23, P24 and P25 data. This method reduces the effect of radiation damage of the XIS and recovers the energy resolution, for example, from 205$\pm6$ eV to 157$\pm4$ eV at 5.9 keV. In order to exclude background flare events, we obtained good time intervals (GTIs) including only times at which the count rates were within $\pm2\sigma$ of the mean count rate. Since the \object{Cygnus Loop} is a large diffuse source and our FOV are filled with the SNR's emission, we could not obtain background spectra from our FOV. We also had no background data from the neighborhood of the \object{Cygnus Loop}. We therefore applied the Lockman Hole data for background subtraction. We examined the effect of the Galactic ridge X-ray emission (GRXE). The flux of the GRXE at $l = 62\arcdeg$, $|b| < 0\arcdeg.4$ is $6\times10^{-12}$erg cm$^{-2}$s$^{-1}$deg$^{-2}$ (0.7-2.0 keV) \citep{Sugizaki01}. Although the \object{Cygnus Loop} ($l = 74\arcdeg$, $b = -8\arcdeg.6$) is located outside the FOV of \citet{Sugizaki01}, this value gives us an upper limit on the GRXE at the \object{Cygnus Loop}. Meanwhile, the flux of the \object{Cygnus Loop} is estimated to be $8.2\times10^{-10}$erg cm$^{-2}$s$^{-1}$deg$^{-2}$ (0.7-2.0 keV), assuming that the \object{Cygnus Loop} is a circle with a diameter of $3\arcdeg.0$. Therefore, we concluded that the effect of the GRXE on the \object{Cygnus Loop} is vanishingly small. The solar wind charge exchange (SWCX) is also considered to correlate with the soft X-ray background below 1 keV \citep{Fujimoto07}.
However, in terms of the \object{Cygnus Loop}, we consider that the SWCX is negligible because of the prominent surface brightness of the Loop. Thus, the Lockman Hole data obtained in 2006, 2007 and 2008 were applied for background subtraction. We selected data whose observation dates were close to those of the \object{Cygnus Loop} observations. Since there were no photons above 3.0 keV after background subtraction, the energy ranges of 0.2-3.0 keV and 0.4-3.0 keV were used for XIS1 (back-illuminated CCD; BI CCD) and XIS0,2,3 (front-illuminated CCD; FI CCD), respectively \citep{Koyama07}. All the \textit{XMM-Newton} data were processed with version 7.1.0 of the \textit{XMM} Science Analysis System (SAS). The current calibration files (CCFs) used were updated in June 2008. We used data obtained with the EPIC MOS and pn cameras. These data were taken using the medium filters and the prime full-window mode. We selected X-ray events corresponding to patterns 0-12 and flag = 0 for MOS 1 and 2, and patterns 0-4 and flag = 0 for pn. In order to exclude background flare events, we determined the GTIs in the same way as for the \textit{Suzaku} data. After filtering the data, they were vignetting-corrected by using the SAS task \textbf{evigweight}. For background subtraction, we employed the blank-sky observations prepared by \citet{Read03} for the same reason as in the \textit{Suzaku} case. After the background subtraction, the energy range of 0.3-3.0 keV was used for each instrument. \section{Spectral Analysis}\label{sec:specana} To investigate the plasma structure of the \object{Cygnus Loop}, we divided the entire FOV into small box regions. In order to equalize the statistics, we initially divided all images of XIS1 or MOS2 into two parts; if a divided region had more than 10,000 photons, it was divided once again. In this way, we obtained 1042 box regions. Each region contained 5,000-10,000 photons for XIS1 and MOS2.
The side length of each box ranges from 2.2\arcmin \ to 14\arcmin. Therefore, the box sizes are not smaller than the angular resolution capability of the \textit{Suzaku} XIS. We grouped the 1042 spectra into bins with a minimum of 20 counts such that $\chi^2$ statistics are appropriate. In order to generate a response matrix file (RMF) and an ancillary response file (ARF), we employed xisrmfgen \citep{Ishisaki07} and xissimarfgen for the \textit{Suzaku} data, and rmfgen and arfgen for the \textit{XMM-Newton} data. First, we fitted all the spectra by a single-component variable-abundance non-equilibrium ionization (VNEI) model. We employed \textbf{TBabs} (T\"{u}bingen-Boulder ISM absorption model; Wilms et al. 2000) and \textbf{VNEI} (NEI ver.2.0; Borkowski et al. 2001) in XSPEC version 12.5.0 \citep{Arnaud96}. In this model, the abundances of C, N, O, Ne, Mg, Si and Fe were free, while we set the abundance of S equal to that of Si, and Ni equal to Fe. The other elements were fixed to their solar values. The other parameters were all free: the electron temperature, $kT_e$; the ionization timescale, $\tau$ (a product of the electron density and the elapsed time after the shock heating); and the emission measure, EM ($=\int n_e n_{\rm H} dl$, where $n_e$ and $n_{\rm H}$ are the number densities of electron and hydrogen and $dl$ is the plasma depth). We also set the absorption column density, $\rm\textit{N}_H$, to be free. As a result, the spectra from the limb regions are well fitted by the single-component VNEI model. As shown by earlier observations of the northeast and the southeast limb \citep{Tsunemi07, Kimura09, Uchida09Nrim, Tsunemi09}, the spectra obtained from the limb regions of the Cygnus Loop are typically described by a single-component VNEI model. On the other hand, the spectra from the inner regions are generally not fitted by the single-component VNEI model.
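The adaptive binning described above can be sketched as follows (a minimal illustration, assuming a 2-D count image; the function name and the choice of split axis are our own and not taken from the actual reduction software):

```python
import numpy as np

def split_regions(counts, x0, y0, x1, y1, max_counts=10_000, out=None):
    """Recursively bisect the box [x0:x1, y0:y1] of a count image until
    each box holds at most max_counts photons, splitting the longer side.
    Sketch of the adaptive binning; the threshold is illustrative."""
    if out is None:
        out = []
    total = counts[y0:y1, x0:x1].sum()
    if total <= max_counts or (x1 - x0 <= 1 and y1 - y0 <= 1):
        out.append((x0, y0, x1, y1))  # small enough: keep this box
        return out
    if (x1 - x0) >= (y1 - y0):        # split the longer side in half
        xm = (x0 + x1) // 2
        split_regions(counts, x0, y0, xm, y1, max_counts, out)
        split_regions(counts, xm, y0, x1, y1, max_counts, out)
    else:
        ym = (y0 + y1) // 2
        split_regions(counts, x0, y0, x1, ym, max_counts, out)
        split_regions(counts, x0, ym, x1, y1, max_counts, out)
    return out

# A uniform 128x128 image with 2 counts per pixel (32,768 counts in total)
# splits into four boxes of 8,192 counts each.
img = np.full((128, 128), 2, dtype=int)
boxes = split_regions(img, 0, 0, 128, 128)
```

On a real exposure-corrected count image the boxes come out unequal in size, which is exactly the behavior reported in the text (side lengths from 2.2\arcmin \ to 14\arcmin).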
From earlier observations of the northeast to the southwest regions along the diameter, \citet{Tsunemi07} found that the spectra from the inner regions of the \object{Cygnus Loop} consist of a two-component VNEI plasma. They described the plasma structure of the \object{Cygnus Loop} as follows: the high-$kT_e$ ejecta component is surrounded by a low-$kT_e$ ISM component. \citet{Uchida09ejecta} showed that the two-component VNEI model is wholly applicable to the inner regions of the \object{Cygnus Loop}. Therefore, we next added a high-$kT_e$ VNEI component to the single-component VNEI model. In this model, we fixed the metal abundances of the low-$kT_e$ component to the values obtained from the result of \citet{Tsunemi07}, since a model with all abundances left free could not yield physically meaningful results. \citet{Tsunemi07} gave the abundances of the ISM component relative to the solar values as follows: C=0.27, N=0.10, O=0.11, Ne=0.21, Mg=0.17, Si=0.34, S=0.17, Fe(=Ni)=0.20. In addition, they fixed the other elements to their solar values \citep{Anders89}. Meanwhile, in the high-$kT_e$ component, the abundances of O, Ne, Mg, Si, and Fe were free, while we set the abundances of C and N equal to that of O, S equal to Si, and Ni equal to Fe. The other elements were fixed to their solar values. The other parameters, such as $kT_e$, $\tau$, EM, and $\rm\textit{N}_H$, were all free. We applied both the single-component VNEI model and the two-component VNEI model to all the spectra and determined which model is acceptable by using the F-test with a significance level of 99\%. As a result, roughly $<0.80R_{\rm s}$ of the northeast region and $<0.85R_{\rm s}$ of the southwest region need an additional component, where $R_{\rm s}$ is the shock radius. Here, we define the ``limb observations'' as the regions where the single-component VNEI model is acceptable and the ``inside observations'' as the remaining regions. Figure \ref{fig:spec} shows two example XIS1 spectra.
The spectral extraction regions are shown in figure \ref{fig:HRI}. Both regions are located in the inside observations, where the two-component VNEI model is applicable. The bottom two panels show the best-fit results of the two-component VNEI model. Blue and red lines represent the high-$kT_e$ and the low-$kT_e$ components, respectively. We also show the results with the single-component VNEI model in the top two panels for comparison. The best-fit parameters are shown in table \ref{tab:spec}. These results show that the reduced $\chi^2$ values are significantly improved with the two-component VNEI models. \section{Discussion} \subsection{Temperature distribution of the low-$kT_e$ component} All the spectra are well fitted by either the single-component VNEI model or the two-component VNEI model. From the best-fit parameters of the inside observations, we found that the electron temperature of the low-$kT_e$ component is almost uniform. The averaged value is 0.23 keV ($\sigma = 0.08$ keV) and it is clearly lower than that of the high-$kT_e$ component (0.52 keV, $\sigma = 0.17$ keV). The temperature of the low-$kT_e$ component is close to that of the limb observations (0.29 keV, $\sigma = 0.07$ keV). Therefore, we collectively call these components the ``low-$kT_e$ component'' hereafter. Figure \ref{fig:kTe} shows our FOV and the electron temperature distribution of the low-$kT_e$ component overlaid with the white contour from the \textit{ROSAT} HRI image. The averaged value is $\sim0.28$ keV and it ranges from 0.12 keV to 0.35 keV. Meanwhile, the temperature of the high-$kT_e$ component ranges from 0.4 keV to 0.9 keV, which is consistent with the previous observations \citep{Tsunemi07, Katsuda08ejecta, Kimura09, Uchida09ejecta}. Thus, we confirmed that the temperatures of the two components are clearly separated.
\citet{Uchida09ejecta} also showed that the temperature distribution of the high-$kT_e$ component is not uniform and that it is lower in the southwest part than in the northeast part. On the other hand, the temperature of the low-$kT_e$ component is relatively uniform (see figure \ref{fig:kTe}). The detailed distribution shows that the temperature near the center is lower than that of the surroundings. We also found that the temperature distribution is seamless at the boundary between the limb observations and the inside observations. Therefore, the low-$kT_e$ components of these regions must have the same origin. The spectra from the limb observations are obviously of swept-up ISM origin, and thus, we concluded that the low-$kT_e$ component originates from the ISM. \subsection{Line-of-sight Shell Structure of the Cygnus Loop} Taking into account the age of the \object{Cygnus Loop}, the reverse shocks should have already reached its center. Therefore, on the assumption that the density of the ejecta-origin plasma is homogeneous, the X-ray flux depends exclusively on its plasma depth. In figure \ref{fig:spec}, the blue line represents the high-$kT_e$ component of the two-component VNEI model. Since region-A and region-B are located at the same radial distance from the center ($R\sim50\arcmin$, where we define $R$ as the distance from the ``geometric center'' determined by Levenson et al. 1998), they should have almost the same plasma depths. Accordingly, the fluxes of the high-$kT_e$ components are actually not so different, even though the spectral extraction regions are well separated. Meanwhile, the contributions of the low-$kT_e$ components are quite different, as shown with the red lines in figure \ref{fig:spec}. As seen in the bottom left panel of figure \ref{fig:spec}, the flux of the low-$kT_e$ component in region-A overwhelms that of the high-$kT_e$ component at 0.2-1.0 keV.
On the other hand, the contribution of the low-$kT_e$ component in region-B is clearly smaller than that in region-A. Such a difference should be attributed to the difference in the surrounding shell of each region. The value of the flux is proportional to EM ($\propto n_{\rm H}^2l$), which means that the surface brightness is sensitive to changes in the density and the plasma depth there. In order to estimate the ambient density of the \object{Cygnus Loop}, we calculated the fluxes of the low-$kT_e$ component from all regions. The left panel of figure \ref{fig:flux} shows the 0.2-3.0 keV flux distribution of the low-$kT_e$ component. We also show that of the high-$kT_e$ component in the right panel. The flux distribution of the high-$kT_e$ component is relatively uniform compared with that of the low-$kT_e$ component. This reflects the fact that the ejecta component uniformly fills the inside of the Loop. In contrast, in the left panel, we clearly see the ``limb brightening'' which reflects the spherical shell structure. Therefore, we confirmed that the low-$kT_e$ component comes from the surrounding ISM. We also found that the northeast flux is higher than that in the southwest. This suggests that the density is higher in the northeast direction than in the southwest. The detailed shell structures are also seen in the left panel, for example, the ``V-shape'' knot at the southwest \citep{Aschenbach99, Leahy04}. From the left panel of figure \ref{fig:flux}, we found that the flux distribution inside the Loop is far from what we would expect for a uniform shell structure. This suggests that the ambient density and the shell thickness vary considerably from region to region. Thus, we can study the line-of-sight shell structure of the Loop. Considering the relation between the surface brightness and the plasma density, the flux of the low-$kT_e$ component reflects the local density of the ISM.
For example, in the bright region in the northeast part, the blast waves are considered to be expanding into the dense ISM there. In contrast, there is a low-flux region in the south of the Loop (see figure \ref{fig:flux}). This suggests that the ambient density there is extremely low compared with other areas of the Loop. As shown by \citet{Uchida08}, we noticed that there is a large break in the south where the ISM density is very low. In general, the velocity of the blast wave toward such tenuous ISM should become higher than in other regions. Therefore, it forms a blowout where the shell must be thin. From figure \ref{fig:flux}, we also found a large low-flux region slightly west of the \object{Cygnus Loop} center. Although our FOV does not cover the whole region, the structure is close to a circular form, and we estimated the diameter to be $\sim1.3\arcdeg$. The size is comparable to that of the south blowout. The existence of such a large low-flux region suggests that it has a blowout structure along the line of sight like the south blowout. This result confirms the prediction by \citet{Tsunemi07} and \citet{Kimura09}. From figure \ref{fig:flux}, the northeast of the center also has lower flux than the surrounding region. This strongly indicates that the line-of-sight ambient density there is locally low, as in the south blowout. This region has a C-shaped structure, which could be explained by the superposition of the circular low-flux region and the bright region where the blast wave interacts with a small cloud. We estimate the diameter of this low-flux region to be $\sim30\arcmin$. These results show that the ambient density of the \object{Cygnus Loop} is quite different from region to region. \subsection{Evidence of Cavity Explosion} To put our result into perspective, we plotted the flux of each component as a function of radius $R$, as shown in figure \ref{fig:flux_plot}.
From this figure, we found that the flux of the high-$kT_e$ component (shown as crosses) decreases from the center to the outside, which reflects the spherical structure of the ejecta filled inside the \object{Cygnus Loop}. On the other hand, the flux of the low-$kT_e$ component (circles) has a limb-brightening structure, as mentioned in the previous section. Furthermore, the low-$kT_e$ flux in the southwest ($R>0$) is systematically lower than that in the northeast. While the high-$kT_e$ flux distribution is approximately symmetric, the low-$kT_e$ flux is a few times higher in the northeast than in the southwest. In addition, looking at the inner region of the Loop, the flux distribution of the low-$kT_e$ component declines from $R=-50$ to $R=50$. This fact suggests that the ambient density of the \object{Cygnus Loop} globally decreases from the northeast to the southwest. In order to estimate the ambient density more quantitatively, we calculated the EM of the low-$kT_e$ component and plotted it as a function of $R$. Figure \ref{fig:EM_region} shows the EM distribution of the low-$kT_e$ component. We plotted the EM profiles from six rectangular regions with different azimuthal angles as shown in figure \ref{fig:EM_region} (NE-A to NE-E and SW). These profiles are shown in figure \ref{fig:EM_plots}. We simulated the EM profile of the shell component derived from the Sedov solution with different ambient densities $n_0$ and estimated $n_0$ by comparing our observations with the EM models. In this model, we assume the shock radius of the \object{Cygnus Loop} to be 13 pc and that the ejecta fill 90$\%$ of it. The results are shown in figure \ref{fig:EM_plots} with red lines. We also show the best-fit models using the data only in the limb-brightening regions with green lines. As for the northeast regions, the EM profiles inside the Loop are close to the models of $n_0$=0.3-0.4 cm$^{-3}$ (red), while the EM values at the limb-brightening regions are higher than these models.
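The qualitative limb-brightening behavior of such a shell model can be reproduced with a crude stand-in for the Sedov profile: a uniform-density shell between $0.9R_{\rm s}$ and $R_{\rm s}$, integrated along the line of sight. This is a sketch under simplifying assumptions, not the model actually fitted to the data; the strong-shock compression factor of 4 and the parameter names are illustrative:

```python
import numpy as np

def em_profile(b, n0, r_s=13.0, f_in=0.9, compression=4.0):
    """Emission measure n^2 * (path length) at projected radius b [pc] for
    a uniform-density shell between f_in*r_s and r_s [pc]. Crude stand-in
    for the Sedov shell; n0 is the ambient density in cm^-3, result in cm^-5."""
    pc_cm = 3.086e18                   # cm per parsec
    n = compression * n0               # strong-shock post-shock density
    b = np.asarray(b, dtype=float)
    # chord lengths through the outer sphere and the inner cavity
    outer = np.sqrt(np.clip(r_s**2 - b**2, 0.0, None))
    inner = np.sqrt(np.clip((f_in * r_s)**2 - b**2, 0.0, None))
    return n**2 * 2.0 * (outer - inner) * pc_cm

# The profile peaks just inside the shock radius (limb brightening) and is
# much lower toward the center, as in the observed low-kT_e EM profiles.
profile = em_profile(np.linspace(0.0, 13.0, 131), n0=0.4)
```

Rescaling $n_0$ shifts the whole curve by $n_0^2$, which is why no single value can match both the flat interior and the bright limbs at once, as found in the text.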
On the other hand, when we use only the data in the limb-brightening regions (green), $n_0$ increases to 0.7-0.9 cm$^{-3}$. In any case, there are no Sedov models which agree with the EM profiles of the northeast part of the \object{Cygnus Loop}. The result is the same for the southwest region, although the ambient density $n_0$ there is less than half of the northeast values. These results clearly show that the \object{Cygnus Loop} cannot be explained by a simple Sedov evolution model. \citet{McCray79} proposed that the \object{Cygnus Loop}'s SN explosion occurred in a preexisting cavity, and some other studies also supported this \citep{Hester86, Hester94, Levenson97}. Considering their results, it is natural that the EM distribution disagrees with a simple Sedov model, and thus, we conclude that our result also supports the cavity explosion as the origin of the \object{Cygnus Loop} from the standpoint of X-ray spectral analysis. It should be noted that the Cygnus Loop is almost perfectly circular in shape, although the EM (or flux) is globally higher in the northeast than in the southwest. This fact strongly suggests that the northeast and the southwest blast waves should have hit the cavity wall very recently, and that the cavity-wall density is higher in the northeast than in the southwest. \section{Conclusion} By analyzing the X-ray spectra, we clearly distinguished the ISM component from the ejecta component, and established a method to investigate the line-of-sight shell structure. From the flux distribution of the ISM component, we found three low-flux regions in the FOV; one is the well-known south blowout, which is evidence of a cavity-wall break, and we also found other low-flux regions at the west and the northeast of the \object{Cygnus Loop} center. From the EM distribution of the ISM component, we conclude that the \object{Cygnus Loop} originated from a cavity explosion.
Moreover, the ISM component, or cavity wall, does not have a uniform structure but contains many breaks or tenuous regions. We also found that the condition of the surrounding cavity wall is not uniform; its density is globally higher in the northeast than in the southwest. \acknowledgments H.U. would like to thank Professor Jacco Vink and his students for many useful discussions and their hospitality at Utrecht University. This work is partly supported by a Grant-in-Aid for Scientific Research by the Ministry of Education, Culture, Sports, Science and Technology (16002004). H.U. and S.K. are supported by the JSPS Research Fellowship for Young Scientists. \begin{figure} \begin{center} \includegraphics[width=120mm]{f1a.eps} \includegraphics[width=120mm]{f1b.eps} \end{center} \caption{\textit{Top}: \textit{ROSAT} HRI image of the entire Cygnus Loop. The circles and rectangles represent our FOV of the \textit{XMM-Newton} MOS and the \textit{Suzaku} XIS, respectively. \textit{Bottom}: Same as the top panel, but overlaid with the spectral extraction regions (small rectangles).}\label{fig:HRI} \end{figure} \begin{figure} \begin{center} \includegraphics[width=80mm]{f2a.eps} \includegraphics[width=80mm]{f2b.eps} \includegraphics[width=80mm]{f2c.eps} \includegraphics[width=80mm]{f2d.eps} \end{center} \caption{Example XIS1 spectra from the regions where the flux of the swept-up matter is high (region-A: left two panels) and low (region-B: right two panels), respectively (see figure~\ref{fig:HRI}). The best-fit curves for the single-component VNEI models are shown by solid black lines in the top two panels. The bottom two panels are the same as the top panels, but show the fitting results with the two-component VNEI models. In the bottom panels, blue and red lines represent the high-$kT_e$ component and the low-$kT_e$ component, respectively.
The residuals are shown in the lower panels.}\label{fig:spec} \end{figure} \clearpage \begin{figure} \begin{center} \includegraphics[width=120mm]{f3a.eps} \end{center} \caption{Our FOV and the electron temperature distribution of the low-$kT_e$ component overlaid with the white contour from the \textit{ROSAT} HRI image. The images are smoothed with a Gaussian kernel of $\sigma=2.8\arcmin$. The values are in units of keV.}\label{fig:kTe} \end{figure} \clearpage \begin{figure} \begin{center} \includegraphics[width=160mm]{f4a.eps} \end{center} \caption{0.2-3.0 keV flux distributions of the low-$kT_e$ (left) and the high-$kT_e$ (right) components in logarithmic scales, overlaid with the white contour of the \textit{ROSAT} HRI image. The images are smoothed with a Gaussian kernel of $\sigma=2.8\arcmin$. The values are in units of counts cm$^{-2}$s$^{-1}$arcmin$^{-2}$ and the scale parameters correspond to each other. Blue and red correspond to $\sim10^{-4}$ and $\sim10^{-3}$ counts cm$^{-2}$s$^{-1}$arcmin$^{-2}$, respectively.}\label{fig:flux} \end{figure} \begin{figure} \begin{center} \includegraphics[width=150mm]{f5a.eps} \end{center} \caption{Averaged flux profiles as a function of $R$. The circles and triangles represent the fluxes of the low-$kT_e$ and high-$kT_e$ components, respectively.}\label{fig:flux_plot} \end{figure} \begin{figure} \begin{center} \includegraphics[width=160mm]{f6a.eps} \end{center} \caption{EM distribution of the low-$kT_e$ component in logarithmic scales, overlaid with the white contour of the \textit{ROSAT} HRI image.}\label{fig:EM_region} \end{figure} \begin{figure} \begin{center} \includegraphics[width=75mm]{f7a.eps} \includegraphics[width=75mm]{f7b.eps} \includegraphics[width=75mm]{f7c.eps} \includegraphics[width=75mm]{f7d.eps} \includegraphics[width=75mm]{f7e.eps} \includegraphics[width=75mm]{f7f.eps} \end{center} \caption{EM profiles as a function of $R$ calculated from the data in the rectangular regions shown in figure \ref{fig:EM_region}.
The EM profiles based on the Sedov model and the estimated ambient densities $n_0$ are shown in red and green (see text).}\label{fig:EM_plots} \end{figure} \clearpage \begin{deluxetable}{lcccc} \tablewidth{0pt} \tablecaption{Summary of the 41 observations\label{tab:sum}} \tablehead{\colhead{Obs. ID} & \colhead{Obs. Date}& \colhead{RA, DEC (J2000)} & \colhead{Position Angle} & \colhead{Effective Exposure}} \startdata \sidehead{\textit{Suzaku Observations}} 501012010 (P1) & 2007-11-14 & 20$^{\mathrm h}$54$^{\mathrm m}$07.6$^{\mathrm s}$, 31\arcdeg57\arcmin22.0\arcsec & 240$^\circ$.0 & 9.8 ksec\\ 501013010 (P2) & 2007-11-14 & 20$^{\mathrm h}$53$^{\mathrm m}$08.5$^{\mathrm s}$, 31\arcdeg45\arcmin40.3\arcsec & 240$^\circ$.0 & 16.4 ksec\\ 501014010 (P3) & 2007-11-14 & 20$^{\mathrm h}$52$^{\mathrm m}$09.9$^{\mathrm s}$, 31\arcdeg36\arcmin43.4\arcsec & 240$^\circ$.0 & 16.9 ksec\\ 501015010 (P4) & 2007-11-14 & 20$^{\mathrm h}$51$^{\mathrm m}$11.8$^{\mathrm s}$, 31\arcdeg22\arcmin08.4\arcsec & 240$^\circ$.0 & 18.3 ksec\\ 501016010 (P5) & 2007-11-15 & 20$^{\mathrm h}$50$^{\mathrm m}$11.3$^{\mathrm s}$, 31\arcdeg10\arcmin48.0\arcsec & 240$^\circ$.0 & 19.3 ksec\\ 501017010 (P6) & 2007-11-11 & 20$^{\mathrm h}$49$^{\mathrm m}$11.3$^{\mathrm s}$, 30\arcdeg59\arcmin27.6\arcsec & 240$^\circ$.0 & 28.7 ksec\\ 501018010 (P7) & 2007-11-12 & 20$^{\mathrm h}$48$^{\mathrm m}$18.7$^{\mathrm s}$, 30\arcdeg46\arcmin33.6\arcsec & 240$^\circ$.0 & 21.0 ksec\\ 501028010 (P8) & 2006-05-13 & 20$^{\mathrm h}$55$^{\mathrm m}$56.3$^{\mathrm s}$, 31\arcdeg28\arcmin56.2\arcsec & 62$^\circ$.5 & 4.9 ksec\\ 501019010 (P9) & 2007-11-12 & 20$^{\mathrm h}$47$^{\mathrm m}$14.2$^{\mathrm s}$, 30\arcdeg36\arcmin10.8\arcsec & 240$^\circ$.0 & 16.2 ksec\\ 501020010 (P10) & 2007-11-13 & 20$^{\mathrm h}$46$^{\mathrm m}$20.8$^{\mathrm s}$, 30\arcdeg23\arcmin22.6\arcsec & 240$^\circ$.0 & 14.7 ksec\\ 503055010 (P11) & 2008-05-09 & 20$^{\mathrm h}$49$^{\mathrm m}$48.7$^{\mathrm s}$, 31\arcdeg30\arcmin18.0\arcsec & 
50$^\circ$.0 & 22.2 ksec\\ 501029010 (P12) & 2006-05-09 & 20$^{\mathrm h}$55$^{\mathrm m}$00.0$^{\mathrm s}$, 31\arcdeg15\arcmin46.8\arcsec & 62$^\circ$.1 & 13.2 ksec\\ 501030010 (P13) & 2006-05-10 & 20$^{\mathrm h}$53$^{\mathrm m}$59.3$^{\mathrm s}$, 31\arcdeg03\arcmin39.6\arcsec & 68$^\circ$.2 & 13.9 ksec\\ 501031010 (P14) & 2006-05-12 & 20$^{\mathrm h}$52$^{\mathrm m}$58.8$^{\mathrm s}$, 30\arcdeg51\arcmin32.4\arcsec & 62$^\circ$.4 & 18.2 ksec\\ 501032010 (P15) & 2006-05-25 & 20$^{\mathrm h}$51$^{\mathrm m}$58.6$^{\mathrm s}$, 30\arcdeg39\arcmin10.8\arcsec & 62$^\circ$.0 & 17.4 ksec\\ 501033010 (P16) & 2006-05-22 & 20$^{\mathrm h}$50$^{\mathrm m}$58.8$^{\mathrm s}$, 30\arcdeg27\arcmin00.0\arcsec & 62$^\circ$.0 & 20.0 ksec\\ 501034010 (P17) & 2006-05-22 & 20$^{\mathrm h}$48$^{\mathrm m}$49.7$^{\mathrm s}$, 30\arcdeg00\arcmin21.6\arcsec & 62$^\circ$.0 & 13.9 ksec\\ 501035010 (P18) & 2006-12-18 & 20$^{\mathrm h}$48$^{\mathrm m}$16.2$^{\mathrm s}$, 29\arcdeg42\arcmin07.2\arcsec & 237$^\circ$.5 & 11.2 ksec\\ 501036010 (P19) & 2006-12-18 & 20$^{\mathrm h}$47$^{\mathrm m}$17.3$^{\mathrm s}$, 30\arcdeg04\arcmin21.4\arcsec & 237$^\circ$.5 & 11.8 ksec\\ 503056010 (P20) & 2008-05-10 & 20$^{\mathrm h}$48$^{\mathrm m}$00.0$^{\mathrm s}$, 31\arcdeg10\arcmin30.0\arcsec & 50$^\circ$.0 & 22.5 ksec\\ 503057010 (P21) & 2008-06-02 & 20$^{\mathrm h}$52$^{\mathrm m}$43.8$^{\mathrm s}$, 32\arcdeg26\arcmin19.0\arcsec & 61$^\circ$.9 & 16.2 ksec\\ 503058010 (P22) & 2008-06-03 & 20$^{\mathrm h}$51$^{\mathrm m}$17.2$^{\mathrm s}$, 32\arcdeg25\arcmin24.6\arcsec & 61$^\circ$.4 & 19.3 ksec\\ 503059010 (P23) & 2008-06-03 & 20$^{\mathrm h}$49$^{\mathrm m}$50.6$^{\mathrm s}$, 32\arcdeg21\arcmin50.8\arcsec & 61$^\circ$.9 & 19.5 ksec\\ 503060010 (P24) & 2008-06-04 & 20$^{\mathrm h}$48$^{\mathrm m}$28.2$^{\mathrm s}$, 32\arcdeg17\arcmin44.5\arcsec & 61$^\circ$.4 & 18.5 ksec\\ 503061010 (P25) & 2008-06-04 & 20$^{\mathrm h}$47$^{\mathrm m}$22.7$^{\mathrm s}$, 32\arcdeg10\arcmin22.8\arcsec & 
60$^\circ$.9 & 26.0 ksec\\ 503062010 (P26) & 2008-05-13 & 20$^{\mathrm h}$56$^{\mathrm m}$26.5$^{\mathrm s}$, 30\arcdeg19\arcmin55.2\arcsec & 49$^\circ$.8 & 16.9 ksec\\ 503063010 (P27) & 2008-05-13 & 20$^{\mathrm h}$55$^{\mathrm m}$16.3$^{\mathrm s}$, 30\arcdeg01\arcmin44.0\arcsec & 49$^\circ$.6 & 22.8 ksec\\ 503064010 (P28) & 2008-05-14 & 20$^{\mathrm h}$53$^{\mathrm m}$51.6$^{\mathrm s}$, 29\arcdeg54\arcmin42.5\arcsec & 49$^\circ$.1 & 18.2 ksec\\ 500020010 (NE1) & 2005-11-23 & 20$^{\mathrm h}$56$^{\mathrm m}$48.9$^{\mathrm s}$, 31\arcdeg56\arcmin54.8\arcsec & 223$^\circ$.0 & 20.4 ksec\\ 500021010 (NE2) & 2005-11-24 & 20$^{\mathrm h}$55$^{\mathrm m}$56.0$^{\mathrm s}$, 31\arcdeg56\arcmin53.2\arcsec & 223$^\circ$.0 & 21.4 ksec\\ 500022010 (NE3) & 2005-11-29 & 20$^{\mathrm h}$55$^{\mathrm m}$05.6$^{\mathrm s}$, 32\arcdeg10\arcmin35.4\arcsec & 222$^\circ$.9 & 21.7 ksec\\ 500023010 (NE4) & 2005-11-30 & 20$^{\mathrm h}$54$^{\mathrm m}$03.8$^{\mathrm s}$, 32\arcdeg21\arcmin47.9\arcsec & 221$^\circ$.2 & 25.3 ksec\\ \sidehead{\textit{XMM-Newton Observations}} 0082540101 (Pos-1) & 2002-11-25 & 20$^{\mathrm h}$55$^{\mathrm m}$23.6$^{\mathrm s}$, 31\arcdeg46\arcmin17.0\arcsec & 241$^\circ$.7 & 14.7 ksec\\ 0082540201 (Pos-2) & 2002-12-03 & 20$^{\mathrm h}$54$^{\mathrm m}$07.2$^{\mathrm s}$, 31\arcdeg30\arcmin51.1\arcsec & 241$^\circ$.7 & 14.4 ksec\\ 0082540301 (Pos-3) & 2002-12-05 & 20$^{\mathrm h}$52$^{\mathrm m}$51.1$^{\mathrm s}$, 31\arcdeg15\arcmin25.7\arcsec & 241$^\circ$.7 & 11.6 ksec\\ 0082540401 (Pos-4) & 2002-12-07 & 20$^{\mathrm h}$51$^{\mathrm m}$34.7$^{\mathrm s}$, 31\arcdeg00\arcmin00.0\arcsec & 241$^\circ$.7 & 4.9 ksec\\ 0082540501 (Pos-5) & 2002-12-09 & 20$^{\mathrm h}$50$^{\mathrm m}$18.4$^{\mathrm s}$, 30\arcdeg44\arcmin34.3\arcsec & 231$^\circ$.4 & 12.6 ksec\\ 0082540601 (Pos-6) & 2002-12-11 & 20$^{\mathrm h}$49$^{\mathrm m}$02.0$^{\mathrm s}$, 30\arcdeg29\arcmin08.6\arcsec & 241$^\circ$.7 & 11.5 ksec\\ 0082540701 (Pos-7) & 2002-12-13 & 20$^{\mathrm 
h}$47$^{\mathrm m}$45.8$^{\mathrm s}$, 30\arcdeg13\arcmin42.9\arcsec & 241$^\circ$.7 & 13.7 ksec\\ 0405490101 (Pos-8) & 2006-05-13 & 20$^{\mathrm h}$50$^{\mathrm m}$32.2$^{\mathrm s}$, 30\arcdeg11\arcmin00.0\arcsec & 69$^\circ$.9 & 6.5 ksec\\ 0405490201 (Pos-9) & 2006-05-13 & 20$^{\mathrm h}$49$^{\mathrm m}$54.2$^{\mathrm s}$, 29\arcdeg42\arcmin25.0\arcsec & 69$^\circ$.8 & 3.6 ksec\\ \enddata \end{deluxetable} \begin{table} \caption{Spectral fit parameters}\label{tab:spec} \begin{center} \begin{tabular}{ccccc} \tableline \tableline & \multicolumn{2}{c}{\textit{single-component VNEI model}} & \multicolumn{2}{c}{\textit{two-component VNEI model}} \\ \tableline & region A & region B & region A & region B\\ \tableline N$\rm _H$ [10$^{20}$cm$^{-2}$] & 1.8 $\pm$ 0.3 & 3.4 $\pm$ 0.3 & 5.2 $\pm$ 0.2 & 7.0 $\pm$ 0.3 \\ & & & \multicolumn{2}{c}{\textit{Low-$kT_e$ component:}} \\ \ \ $kT_e$ [keV] & 0.59 $\pm$ 0.03 & 0.42 $\pm$ 0.02 & 0.24 $\pm$ 0.01 & 0.12 $\pm$ 0.01 \\ \ \ C & 0.24 $\pm$ 0.05 & 0.96 $\pm$ 0.21 & \multicolumn{2}{c}{0.27 (fixed)}\\ \ \ N & 0.22 $\pm$ 0.05 & 0.09 $\pm$ 0.03 & \multicolumn{2}{c}{0.10 (fixed)}\\ \ \ O & 0.23 $\pm$ 0.02 & 0.14 $\pm$ 0.02 & \multicolumn{2}{c}{0.11 (fixed)}\\ \ \ Ne & 0.44 $\pm$ 0.04 & 0.31 $\pm$ 0.03 & \multicolumn{2}{c}{0.21 (fixed)}\\ \ \ Mg & 0.26 $\pm$ 0.03 & 0.24 $\pm$ 0.03 & \multicolumn{2}{c}{0.17 (fixed)}\\ \ \ Si & 0.27 $\pm$ 0.06 & 0.30 $\pm$ 0.06 & \multicolumn{2}{c}{0.34 (fixed)}\\ \ \ S & (=Si) & (=Si) & \multicolumn{2}{c}{0.17 (fixed)}\\ \ \ Fe(=Ni) & 0.36 $\pm$ 0.04 & 0.22 $\pm$ 0.02 & \multicolumn{2}{c}{0.20 (fixed)}\\ \ \ log $\tau$ & 10.42 $\pm$ 0.03 & 10.82 $^{+ 0.07}_{- 0.08}$ & 11.32 $^{+ 0.12}_{- 0.16}$ & $<$ 12\\ \ \ flux [counts cm$^{-2}$s$^{-1}$arcmin$^{-2}$] & $8.90 \times 10^{-4}$ & $4.34 \times 10^{-4}$ & $7.49 \times 10^{-4}$ & $2.18 \times 10^{-4}$\\ & & & \multicolumn{2}{c}{\textit{High-$kT_e$ component:}} \\ \ \ $kT_e$ [keV] & \nodata & \nodata & 0.88 $\pm$ 0.13 & 0.43 $\pm$ 0.02\\ \ \ O(=C=N) & 
\nodata & \nodata & 0.34 $\pm$ 0.13 & 0.38 $\pm$ 0.07\\ \ \ Ne & \nodata & \nodata & 0.82 $\pm$ 0.26 & 0.74 $\pm$ 0.12\\ \ \ Mg & \nodata & \nodata & 0.56 $\pm$ 0.19 & 0.55 $\pm$ 0.10\\ \ \ Si(=S) & \nodata & \nodata & 1.28 $\pm$ 0.42 & 0.79 $\pm$ 0.16\\ \ \ Fe(=Ni) & \nodata & \nodata & $<$ 1.43 & 0.48 $\pm$ 0.08\\ \ \ log $\tau$ & \nodata & \nodata & 10.66 $^{+ 0.09}_{- 0.12}$ & 11.11 $\pm$ 0.06\\ \ \ flux [counts cm$^{-2}$s$^{-1}$arcmin$^{-2}$] & \nodata & \nodata & $1.41 \times 10^{-4}$ & $2.16 \times 10^{-4}$\\ $\chi ^2$/dof & 1043/739 & 728/548 & 868/738 & 637/547\\ \tableline \end{tabular} \tablecomments{Other elements are fixed to solar values.} \end{center} \end{table} \clearpage
\section{Introduction} A primary motivation for work towards a complete, consistent quantum theory on nonassociative spaces is the desire to model more physical forces with fewer assumptions. Theoretical reductionism is particularly important when the consequences of a proposed model are hard or impossible to measure, as is the case in quantum gravity. How can one unification proposal be evaluated against another? Today's description of physical law does not require nonassociativity as a fundamental notion in quantum mechanics. On the other hand, it is unclear whether quantum gravity and its unification with the Standard Model may ever be modeled using conventional associative geometry. If nonassociativity is a fundamental property of nature, as presumed here, then it must be shown how today's conventional formulations may emerge from such a foundation without contradicting observation. Section \ref{sec:TowardsNonassocQuantTh} follows up on recent investigations into nonassociative quantum theory and highlights the mathematical structure of certain modifications to conventional quantum theory: Nonassociative parts of quantum mechanical operators are unobservable in principle\citep{Dzhu2007ObsUnobs}, and decompositions exist for a supersymmetric nonrelativistic Hamiltonian\citep{Dzhun2007NonassocSuperAndHidden}, the spin-$\frac{1}{2}$ operator algebra and Lorentz Lie algebra\citep{Dzhu2009naQFT,Koepl2009octoocto}, and the hypothesized {}``glueball'' particle from strongly interacting fields\citep{Dzhu2009NonassocDecompStrong,Dzhu2010NonperturbQC,Dzhu2010SU3FluxTube}. It is shown how probability conservation in conventional quantum mechanics may become an emergent phenomenon, better described in a nonassociative geometry that remains to be found. There are many proposals today that introduce nonassociativity into conventional quantum formulations.
More direct approaches may use nonassociative algebras, such as the octonions or split-octonions, instead of customary complex number or matrix algebra. More indirect approaches embed observed Lie group symmetries into the exceptional Lie groups, which are automorphism groups of types of nonassociative algebras (for a review see e.g.~\citep{Baez2002TheOctonions}). The range of envisioned applications spans much of fundamental physics. A certainly incomplete list of nonassociativity in quantum physics over the past four decades includes: quark statistics in the Strong Force\citep{GunayidinGursey1973QuarkStructure,GunaydinGursey1974QuarkStatistics,GunaydinPironRuegg1978OctoQM,Okubo1995IntoOctoNonassocBook}, chirality and triality in fundamental particles\citep{SchrayManogueOcts1994}, Standard Model symmetries from spinors over the division algebras \citep{Dixon1994DivisionAlgs,Furey2010UnifThIdeal}, the Weak Force and Yang-Mills instantons\citep{Okubo1995IntoOctoNonassocBook}, octonionic quantum theory and Dirac equation from left/right-associating operators \citep{LeoAbdelKhalek1996OctoQM,LeoAbdelKhalek1996OctoDirac,LeoAbdelKhalek1998Octoworld}, fermion generations\citep{DrayManogue1999QuaternionicSpin,DrayManogue1998DimRed}, a geometric relation between Heisenberg uncertainty and the light cone\citep{Gogb2004OctonionicGeometry}, the Dirac equation with electromagnetic field\citep{Gogb2005OctoVersionsDiracEqn,Gogb2006OctonionElectrodyn,koepl2007GravEMconicSed}, a four dimensional Euclidean operator quantum gravity\citep{koepl2007hypernumbersRel,Koepl2009octoocto}, Lie group symmetries of the Standard Model \citep{DrayManogue2009OctoSpinorsAndE6,ManogueDray2009OctE6andParticle} with gravity\citep{Lisi2007E8TOE}, and supersymmetry with the Standard Model\citep{BaezHuerta2009DivisionAlgsAndSUSY,BaezHuertaDivAlgSUSY2}. 
These and other approaches introduce nonassociativity into existing formulations in physics, which requires modifying some assumptions while keeping others unchanged. Yet, with all these clues and hints, it remains entirely open whether a {}``better'' description of physical law may ever be found this way. If one believes this could be accomplished, the tantalizing question is: Which, and how many, of today's paradigms in physics need to be amended? Section \ref{sec:HopfCoquasiCandidateMethod} proposes a new prototype nonassociative quantum theory in one dimension that is built from algebraic and geometric rules. Rather than declaring physical principles up front (e.g.~conservation of probability, invariance of the speed of light, equivalence of energy and mass), the model builds wave functions from self-dual types of transformations. The Born rule, which governs observation in conventional quantum mechanics, is modified and requires the operator/eigenfunction/eigenvalue relation to be contained in a complex number subalgebra of the otherwise quaternionic or octonionic formulation. Solutions exist that are similar to what one could expect from a physical model. All work is done under the speculation that the prototype's current limitation of one spacial dimension may eventually be overcome so that the model captures nature's spacetime as we observe it. One possible way towards achieving this goal is shown in section \ref{sec:Hopf}. A further generalized Born rule requires the real eigenvalue of the operator/eigenfunction relation to remain invariant under changes between equivalent algebras. An understanding of the complete solution set of such a generalization appears contingent on a proper mathematical tool. The eigenvalue equations to be solved are shown to have Hopf coquasigroup structure \citep{KlimMajid2009HopfCoquasigroup}.
\section{Towards a nonassociative quantum theory} \label{sec:TowardsNonassocQuantTh}Conventional operator quantum mechanics uses unobservable wave functions $\Psi$ that are decomposed into orthogonal eigenfunctions $\psi_{n}$ of an operator $H$, to yield observable eigenvalues $h_{n}$. The expectation value of $H$ over some configuration space $V$ is then determined through expressions like:\begin{align*} \left\langle \Psi\right|H\left|\Psi\right\rangle & =\sum_{n}h_{n}\int_{V}\psi_{n}^{*}\psi_{n}dV.\end{align*} Probability density $\rho$ models the relative frequency of occurrence of measurement outcomes $h_{n}$, and is defined through:\begin{align*} \rho & :=\left\langle \Psi\right|1\left|\Psi\right\rangle =\sum_{n}\rho_{n}, & \rho_{n} & :=\int_{V}\psi_{n}^{*}\psi_{n}dV.\end{align*} What is also called the {}``Born rule'' gives the expectation value of $H$ as:\begin{align} \left\langle \Psi\right|H\left|\Psi\right\rangle & =\sum_{n}h_{n}\rho_{n}.\label{eq:expectationValueBornRule}\end{align} As a special case, the Hamiltonian operator of a quantum mechanical system makes it possible to describe the time dependence of other operators through Ehrenfest's theorem. If $H$ is the Hamiltonian and $L$ another operator on that same system, then the expectation value of $L$ as a function of time is:\begin{align} \left\langle \Psi\right|\frac{dL}{dt}\left|\Psi\right\rangle & =\left\langle \Psi\right|\frac{\partial L}{\partial t}+\frac{\imath}{\hbar}\left[H,L\right]\left|\Psi\right\rangle .\label{eq:defEhrenfestClassical}\end{align} Here, $\imath$ is the imaginary basis element of the complex numbers, $b_{\mathbb{C}}=\left\{ 1,\imath\right\} $.
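The Born rule (\ref{eq:expectationValueBornRule}) can be illustrated numerically on a finite-dimensional toy system; the following sketch (a two-state system with an arbitrarily chosen Hermitian $H$ and state, not taken from the text) checks that the direct expectation value equals $\sum_{n}h_{n}\rho_{n}$:

```python
import numpy as np

# Toy two-state system with an arbitrarily chosen Hermitian operator H
# (illustrative values only).
H = np.array([[1.0, 0.5j],
              [-0.5j, 2.0]])
h, V = np.linalg.eigh(H)                  # eigenvalues h_n, eigenfunctions psi_n (columns of V)

psi = np.array([1.0, 1.0j]) / np.sqrt(2)  # a normalized state |Psi>

direct = np.vdot(psi, H @ psi).real       # <Psi| H |Psi>
rho = np.abs(V.conj().T @ psi) ** 2       # rho_n = |<psi_n|Psi>|^2, sums to 1
born = float(np.sum(h * rho))             # sum_n h_n rho_n

print(direct, born)                       # both give the same expectation value
```

The agreement is exact up to floating-point roundoff, since `eigh` returns a complete orthonormal eigenbasis of a Hermitian matrix.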
Two operators $H_{1},H_{2}$ with eigenvalues $h_{1n},h_{2n}$ model physical quantities that can be observed simultaneously only if they commute:\begin{eqnarray*} H_{1}\left(H_{2}\left|\Psi\right\rangle \right)=H_{2}\left(H_{1}\left|\Psi\right\rangle \right) & \Longleftrightarrow & h_{1n}\textrm{ and }h_{2n}\textrm{ simultaneously observable}.\end{eqnarray*} For example, momentum operator $\hat{p}_{i}:=-\imath\hbar\partial/\partial x_{i}$ (with $i=1,2,3$) and angular momentum operator $\hat{L}_{i}:=-\imath\hbar\left(\vec{x}\times\nabla\right)_{i}$ allow only components with the same index $i$ to be measured simultaneously since $\hat{p}_{i}\left(\hat{L}_{i}\left|\Psi\right\rangle \right)=\hat{L}_{i}\left(\hat{p}_{i}\left|\Psi\right\rangle \right)$, but not two different components since generally $\hat{p}_{i}\left(\hat{L}_{j}\left|\Psi\right\rangle \right)\neq\hat{L}_{j}\left(\hat{p}_{i}\left|\Psi\right\rangle \right)$ for $j\neq i$. \subsection{Nonassociativity and unobservables} Noncommutativity of operators from conventional quantum mechanics is now extended to nonassociativity and speculated to be of use in a future nonassociative quantum theory. New kinds of operators $Q_{n}$ may now, in general, fail to associate:\begin{align} \left\langle \Psi\right|\left(Q_{n}\left|\Psi\right\rangle \right) & \neq\left(\left\langle \Psi\right|Q_{n}\right)\left|\Psi\right\rangle .\label{eq:defNonassocOperator}\end{align} This requires additional rules to be supplied to the Ehrenfest theorem (\ref{eq:defEhrenfestClassical}) or the Born rule (\ref{eq:expectationValueBornRule}), to obtain expectation values of the $Q_{n}$, understand their evolution over time and predict measurement outcomes unambiguously. This can be realized by having $Q_{n}$ and the $\left|\Psi\right\rangle $ in some nonassociative algebra. Such operators are interpreted to model {}``unobservables'' that cannot be measured in principle \citep{Dzhu2007ObsUnobs}.
The concept is distinct from conventional {}``hidden variables'' models, which contain information that could in principle be extracted from the quantum system. An example of an unobservable property in nature would be the color charge in the Strong Force, a property that is instrumental in the workings of the force but cannot be observed from the outside. It is pointed out that unobservables do not need to be quantum contributions on small scales. They may in general be of the same order of magnitude as conventional observable properties. There are many ways of bringing this general approach into agreement with observation. One way is to decompose known operators into unobservable parts, define dynamics of these parts and show how the conventional formulation emerges in the appropriate limit\@. For example, if $H$ is an operator from conventional quantum mechanics, it could be made from parts, $H:=Q_{1}Q_{2}$, where the $Q_{1}$ and $Q_{2}$ are unobservables. A nonassociative quantum theory, to be found, would then have to explain why such a decomposition is necessary or desirable, as opposed to merely being possible.
To give an example, without going into the model assumptions% \footnote{Here: An ansatz from nonrelativistic $N=1$ supersymmetry.% }, a Hamiltonian $H$ is proposed in \citep{Dzhun2007NonassocSuperAndHidden,Dzhu2009naQFT} to be made from $Q_{1}$ and $Q_{2}$ with the following properties:\begin{eqnarray*} H & := & \frac{1}{2}\left(Q_{1}+Q_{2}\right)^{2},\\ Q_{1}Q_{1}\left|\Psi\right\rangle =Q_{2}Q_{2}\left|\Psi\right\rangle & = & 0,\\ \left(Q_{1}Q_{2}\right)\left|\Psi\right\rangle & = & \left(Q_{2}Q_{1}\right)\left|\Psi\right\rangle ,\\ \left\langle \Psi\right|\left(Q_{n}\left|\Psi\right\rangle \right) & \neq & \left(\left\langle \Psi\right|Q_{n}\right)\left|\Psi\right\rangle \qquad\left(n\in\left\{ 1,2\right\} \right),\\ \left\langle \Psi\right|\left(\left(Q_{1}Q_{2}\right)\left|\Psi\right\rangle \right) & = & \left(\left\langle \Psi\right|\left(Q_{1}Q_{2}\right)\right)\left|\Psi\right\rangle .\end{eqnarray*} The $Q_{1}$ and $Q_{2}$ are unobservables per (\ref{eq:defNonassocOperator}). These relations can be satisfied when modeling the $Q_{1/2}$ as linear differential operators and using nonassociative split-octonion algebra (for details, see \citep{Dzhun2007NonassocSuperAndHidden,Dzhu2009naQFT}). A new quantum theory could then specify the dynamics of $Q_{1/2}$ and split-octonion wave functions $\left|\Psi\right\rangle $, but model the observable operator $H=Q_{1}Q_{2}=Q_{2}Q_{1}$ in agreement with conventional quantum theory. \subsection{Example: Spin operator and Lorentz Lie algebra from nonassociative algebra} For a new nonassociative quantum theory to be useful or desired, it has to do more than just recreating known physics. There have to be novel observable effects, or it has to describe known effects using fewer assumptions. This section gives an example that hints towards the latter. 
A nonassociative algebra is shown to have two associative subalgebras, each of which models an independent effect in physics: the algebra of spin operators from spin-$\frac{1}{2}$ particles, and the Lorentz Lie algebra from Special Relativity \citep{Dzhu2008HiddenStructures}. The finding demonstrates an opportunity for a future quantum theory that uses nonassociative algebra to let previously unrelated descriptions of natural law emerge from a single formalism. \subsubsection{Algebra of spin-$\frac{1}{2}$ operators} A spin in physics is a fundamental internal property of particles or bound quantum systems, such as atomic nuclei. It is independent of the space-time or energy-momentum parameters that describe other dynamic properties. A simple example is a spin-$\frac{1}{2}$ particle, where two spin states are possible when measured along any direction in space: {}``up'' or {}``down''. Conventional quantum mechanics describes spin observables through operators $\hat{s}_{j}$:\begin{align*} \hat{s}_{j} & :=\frac{1}{2}\sigma_{j}\qquad\left(j\in\left\{ 1,2,3\right\} \right).\end{align*} The index $j$ enumerates three orthogonal spacial axes $x_{j}$ along which to measure. In the choice of units here% \footnote{In SI units there is an additional constant here, the Planck constant $\hbar$. It becomes $1$ in the choice of units in this paper.% }, only a factor $\frac{1}{2}$ comes with the $\sigma_{j}$, which are the Pauli matrices over complex numbers.
To basis $b_{\mathbb{C}}=\left\{ 1,\imath\right\} $ these are:\begin{align} \sigma_{1} & :=\left(\begin{array}{rr} 0 & 1\\ 1 & 0\end{array}\right), & \sigma_{2} & :=\left(\begin{array}{rr} 0 & -\imath\\ \imath & 0\end{array}\right), & \sigma_{3} & :=\left(\begin{array}{rr} 1 & 0\\ 0 & -1\end{array}\right),\label{eq:defPauliMatAlg2}\\ \sigma_{j}\sigma_{j} & =\left(\begin{array}{rr} 1 & 0\\ 0 & 1\end{array}\right)=\left(\mathrm{id}\right), & \sigma_{j}\sigma_{k} & =\imath\sum_{l=1}^{3}\epsilon_{jkl}\sigma_{l} & & \left(j,k\in\left\{ 1,2,3\right\} ,\, j\neq k\right).\nonumber \end{align} Born's rule gives measurable spin states from eigenfunctions $\left|\Psi\right\rangle $ of the $\hat{s}_{j}$, so that $\hat{s}_{j}\left|\Psi\right\rangle =\lambda\left|\Psi\right\rangle $ with real eigenvalues $\lambda=+\frac{1}{2}$ for {}``spin up'' and $\lambda=-\frac{1}{2}$ for {}``spin down'' along an axis of measurement. Without addressing physical measurement, the algebra of $\sigma_{j}$ operators (\ref{eq:defPauliMatAlg2}) can be expressed in the associative complex quaternion algebra. Written to a quaternion basis $b_{\mathbb{H}}=\left\{ 1,i_{1},i_{2},i_{3}\right\} $ and complex number coefficients to $b_{\mathbb{C}}=\left\{ 1,\imath\right\} $, the $\sigma_{j}$ can be defined as:\begin{align} \sigma_{j} & :=\imath i_{j}.\label{eq:defPauliMatComplexQuats}\end{align} On a side note, the imaginary quaternions (here to basis elements $\left\{ i_{1},i_{2},i_{3}\right\} $) also generate the $\mathfrak{su}\left(2\right)$ Lie algebra.
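The identification $\sigma_{j}=\imath i_{j}$ can be spot-checked with a hand-rolled complex-quaternion product; a sketch, assuming the basis ordering $\left\{ 1,i_{1},i_{2},i_{3}\right\} $ and the standard quaternion multiplication table:

```python
import numpy as np

def qmul(a, b):
    """Product of complex quaternions given as coefficient vectors over
    the basis {1, i1, i2, i3}, using i_j i_k = -delta_jk + eps_jkl i_l."""
    return np.array([
        a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3],
        a[0]*b[1] + a[1]*b[0] + a[2]*b[3] - a[3]*b[2],
        a[0]*b[2] + a[2]*b[0] + a[3]*b[1] - a[1]*b[3],
        a[0]*b[3] + a[3]*b[0] + a[1]*b[2] - a[2]*b[1],
    ])

# sigma_j := i * i_j as complex coefficient vectors
sigma = [np.array([0, 1j, 0, 0]),
         np.array([0, 0, 1j, 0]),
         np.array([0, 0, 0, 1j])]

one = np.array([1, 0, 0, 0])  # the quaternion unit element
```

With this representation, `qmul(sigma[0], sigma[0])` gives the identity and `qmul(sigma[0], sigma[1])` gives $\imath\sigma_{3}$, matching the Pauli product rules above.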
\subsubsection{Lorentz Lie algebra} \label{sub:LorentzLieAlg}The Lorentz group is the matrix Lie group that preserves the quadratic form $\left|\cdot\right|$ on four-vectors $x:=\left(x_{0},x_{1},x_{2},x_{3}\right)$:\begin{align*} \left|\cdot\right| & \,:\,\mathbb{R}^{4}\rightarrow\mathbb{R}, & \left|x\right| & :=x_{0}^{2}-x_{1}^{2}-x_{2}^{2}-x_{3}^{2}=x_{0}^{2}-\left\Vert \vec{x}\right\Vert ^{2}.\end{align*} In Special Relativity in physics, this quadratic form is interpreted as the metric tensor $\eta$ of Minkowski spacetime:\begin{align*} \eta_{\mu\nu} & :=\left(\begin{array}{rrrr} 1 & 0 & 0 & 0\\ 0 & -1 & 0 & 0\\ 0 & 0 & -1 & 0\\ 0 & 0 & 0 & -1\end{array}\right), & \left|x\right| & =\sum_{\mu,\nu=0}^{3}x_{\mu}x_{\nu}\eta_{\mu\nu}.\end{align*} Here, $x_{0}$ is called the {}``time component'' and $\vec{x}:=\left(x_{1},x_{2},x_{3}\right)$ the {}``spacial components'' of the four-vector $x$. Examples of such four-vectors are energy-momentum $p:=\left(E,p_{1},p_{2},p_{3}\right)=\left(E,\vec{p}\right)$ or space-time intervals $dx:=\left(dt,dx_{1},dx_{2},dx_{3}\right)=\left(dx_{0},d\vec{x}\right)$. The preserved quadratic form of energy-momentum corresponds to invariant mass $m^{2}=E^{2}-\left\Vert \vec{p}\right\Vert ^{2}$, and space-time intervals model invariant proper time $d\tau^{2}=dt^{2}-\left\Vert d\vec{x}\right\Vert ^{2}$. These physical properties remain unchanged when translating between equivalent frames of reference. In addition to translation symmetry, the geometry of Minkowski spacetime is symmetric under rotations in space and under transformations between uniformly moving, nonaccelerated frames of reference. The last two symmetries together make up the Lorentz group.
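The defining invariance is easy to verify numerically; a minimal sketch with a boost along $x_{1}$ (the rapidity and the four-vector are arbitrary illustrative values, not from the text):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric eta_mu_nu

phi = 0.3                               # rapidity of a boost along x_1
L = np.eye(4)
L[0, 0] = L[1, 1] = np.cosh(phi)
L[0, 1] = L[1, 0] = np.sinh(phi)

x = np.array([2.0, 0.5, -1.0, 0.25])    # an example four-vector
quad = lambda v: float(v @ eta @ v)     # |x| = x0^2 - ||vec x||^2
```

The boost satisfies $\Lambda^{T}\eta\Lambda=\eta$, so the quadratic form of any four-vector is preserved, `quad(L @ x) == quad(x)` up to roundoff.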
The associated Lie algebra of the generators of Lorentz transformation $M_{\mu\nu}$ is:\begin{align} \left[M_{\mu\nu},M_{\rho\sigma}\right] & =\imath\left(\eta_{\nu\rho}M_{\mu\sigma}+\eta_{\mu\sigma}M_{\nu\rho}-\eta_{\mu\rho}M_{\nu\sigma}-\eta_{\nu\sigma}M_{\mu\rho}\right),\label{eq:defLorentzLieAlgebra}\\ x_{\mu}' & :=\sum_{\nu=0}^{3}M_{\mu\nu}x_{\nu},\qquad\mu,\nu,\rho,\sigma\in\left\{ 0,1,2,3\right\} .\nonumber \end{align} The $M_{\mu\nu}$ rotate the spacial components of a four-vector $x$, and perform so-called {}``Lorentz boosts''. Similar to the algebra of spin-$\frac{1}{2}$ operators above, the Lorentz Lie algebra can be generated with complex octonions $\mathbb{C}\otimes\mathbb{O}$ to basis $\left\{ 1,\imath\right\} \otimes\left\{ 1,i_{1},\ldots,i_{7}\right\} $ when defining \citep{Dzhu2008HiddenStructures,Koepl2009octoocto}:\begin{align*} R_{0} & :=\frac{i_{4}}{2}\left(1+\imath\right), & R_{j} & :=\frac{i_{\left(j+4\right)}}{2}\left(1-\imath\right), & & \left(j\in\left\{ 1,2,3\right\} \right)\\ M_{\mu\nu} & :=\frac{1}{2}\left[R_{\mu},R_{\nu}\right].\end{align*} All terms of the $M_{\mu\nu}$ are calculated in appendix A, and it follows directly that the defining relation of the Lorentz Lie algebra (\ref{eq:defLorentzLieAlgebra}) is satisfied.
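The construction relies on the octonions being alternative but nonassociative, with the four-tuple $\left\{ i_{4},i_{5},i_{6},i_{7}\right\} $ antiassociative. A small Cayley--Dickson sketch can verify this behavior; note that the basis ordering, and hence the overall sign of the associator, is a convention choice of this sketch and not taken from the text:

```python
import numpy as np

def qmul(a, b):
    """Quaternion product over the basis {1, i1, i2, i3}."""
    return np.array([
        a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3],
        a[0]*b[1] + a[1]*b[0] + a[2]*b[3] - a[3]*b[2],
        a[0]*b[2] + a[2]*b[0] + a[3]*b[1] - a[1]*b[3],
        a[0]*b[3] + a[3]*b[0] + a[1]*b[2] - a[2]*b[1],
    ])

def qconj(a):
    return np.array([a[0], -a[1], -a[2], -a[3]])

def omul(x, y):
    """Octonion product via Cayley-Dickson doubling of the quaternions:
    (a, b)(c, d) = (ac - conj(d) b, d a + b conj(c))."""
    a, b = x[:4], x[4:]
    c, d = y[:4], y[4:]
    return np.concatenate([qmul(a, c) - qmul(qconj(d), b),
                           qmul(d, a) + qmul(b, qconj(c))])

def associator(x, y, z):
    return omul(omul(x, y), z) - omul(x, omul(y, z))

e = np.eye(8)  # octonion basis elements: e[0] = 1, e[1..7] = i_1..i_7
```

In this convention the associator $\left(i_{4},i_{5},i_{6}\right)$ is $\pm2\, i_{7}$, it is antisymmetric under swapping arguments, and it vanishes when two arguments coincide (alternativity), which is the structure the $R_{\mu}$ relations above exploit.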
\subsubsection{Nonassociative algebra} The spin-$\frac{1}{2}$ operator generated by $\sigma_{j}$ (\ref{eq:defPauliMatComplexQuats}) and the Lorentz Lie algebra generated by $M_{\mu\nu}$ (\ref{eq:defLorentzLieAlgebra}) are both expressed in the complex octonions:\begin{align*} \sigma_{j} & :=-\frac{\imath}{4}\epsilon_{jkl}R_{k}R_{l}, & M_{\mu\nu} & :=\frac{1}{2}\left[R_{\mu},R_{\nu}\right].\end{align*} The four $R_{\mu}$ satisfy the additional associator relation:\begin{align*} \left(R_{\mu},R_{\nu},R_{\rho}\right) & :=\left(R_{\mu}R_{\nu}\right)R_{\rho}-R_{\mu}\left(R_{\nu}R_{\rho}\right)=2\sum_{\sigma,\xi=0}^{3}\epsilon_{\mu\nu\rho\xi}\eta_{\xi\sigma}R_{\sigma}.\end{align*} One can validate this expression from \begin{align*} \left(i_{\mu},i_{\nu},i_{\rho}\right) & =2\sum_{\sigma=4}^{7}\epsilon_{\mu\nu\rho\sigma}i_{\sigma}, & \mu,\nu,\rho & \in\left\{ 4,5,6,7\right\} ,\end{align*} which is a property of any antiassociative four-tuple in the octonions (here: $\left\{ i_{4},i_{5},i_{6},i_{7}\right\} $). The Minkowski tensor $\eta_{\xi\sigma}$ then comes from the difference in sign in the $\left(1\pm\imath\right)$ factor of $R_{0}$ and the $R_{j}$. With this, the four-element set $\left\{ R_{0},R_{1},R_{2},R_{3}\right\} $ that generates the Lorentz Lie algebra can be viewed as a four-dimensional {}``spacetime'' generalization of the $\left\{ R_{1},R_{2},R_{3}\right\} $ set that generates the algebra of spin-$\frac{1}{2}$ operators, $\sigma_{j}$, in three-dimensional space. \subsection{Example: Operator algebra of strongly interacting fields and glueball} The Strong Force in physics is the interaction between building blocks of matter, the quarks. It is mediated through exchange particles, the gluons. Both quarks and gluons carry a color charge, but neither charges nor particles can be isolated or directly observed. This is known as color confinement in physics.
In the context of this paper, this means that there exist no operators $A$ in conventional quantum mechanics that would allow measurement of the color charge with real eigenvalues $a_{n}$ after the Born rule (\ref{eq:expectationValueBornRule}) to $\left\langle \Psi\right|A\left|\Psi\right\rangle =\sum_{n}a_{n}\rho_{n}$. This section shows how a certain modification to this rule for observation in quantum mechanics makes room for nonassociative operator algebras. This may aid in modeling the \emph{glueball}, a hypothetical particle that is made from gluons only \citep{Dzhu2010NonperturbQC,Dzhu2010SU3FluxTube}. \subsubsection{Observables from nonassociative parts of an operator and modified Born rule} A product of two operators $A^{B}$ and $A^{C}$ is measured in conventional quantum mechanics as:\begin{align*} A^{B}A^{C}\left|\Psi\right\rangle & \overset{\mathrm{def}}{=}A^{B}\left(A^{C}\left|\Psi\right\rangle \right).\end{align*} The operators $A^{B},A^{C}$ are now proposed to be made from a product of operators $e$, $\Phi^{B}$ and $\Phi^{C}$:\begin{align*} A^{B} & :=e\Phi^{B}, & A^{C} & :=e\Phi^{C}.\end{align*} The operator algebra $\mathbb{G}$ of the $A^{B}$ and $A^{C}$ is associative by definition, whereas the $e$, $\Phi^{B}$ and $\Phi^{C}$ are elements in a nonassociative operator algebra% \footnote{Even though it is not specified what algebras the $\mathbb{A}$ and $\mathbb{G}$ exactly are, it compares on a very general level to the $R_{\mu}$ and $\sigma_{j}$ from the previous section. There, the spin operators $\sigma_{j}$ were elements in the associative complex quaternion algebra, $\sigma_{j}\in\mathbb{C}\otimes\mathbb{H}$, whereas the $R_{\mu}$ were from the nonassociative complex octonions, $R_{\mu}\in\mathbb{C}\otimes\mathbb{O}$. 
This comparison with the previous section does not hold much beyond this point, though.% } $\mathbb{A}$: \begin{align*} A^{B},A^{C} & \in\mathbb{G}, & e,\Phi^{B},\Phi^{C} & \in\mathbb{A}.\end{align*} A modification to the Born rule for observability in quantum mechanics can then be proposed. For operators that are made from nonassociative parts, measurement requires reassociating the parts:\begin{align*} A^{B}A^{C}\left|\Psi\right\rangle & =A^{B}\left(A^{C}\left|\Psi\right\rangle \right)=\left(e\Phi^{B}\right)\left(\left(e\Phi^{C}\right)\left|\Psi\right\rangle \right)\\ & =e\left(\Phi^{B}\left(e\left(\Phi^{C}\left|\Psi\right\rangle \right)\right)\right)+m^{BC}\left|\Psi\right\rangle .\end{align*} The $m^{BC}\left|\Psi\right\rangle $ term is the associator,\begin{align*} m^{BC} & :=\left(e\Phi^{B}\right)\left(e\Phi^{C}\right)-e\left(\Phi^{B}\left(e\Phi^{C}\right)\right),\\ \left\langle \Psi\right|A^{B}A^{C}\left|\Psi\right\rangle & :=\left\langle \Psi\right|e\left(\Phi^{B}\left(e\left(\Phi^{C}\left|\Psi\right\rangle \right)\right)\right)+\left\langle \Psi\right|m^{BC}\left|\Psi\right\rangle .\end{align*} Such operators $A^{B}$ and $A^{C}$ then model an observable physical quantity only if wave functions $\left|\Psi\right\rangle $ exist where applying the operators' reassociated constituents yields a real-valued function $\chi$ and a constant (but not necessarily real) factor $a^{BC}$ as: \begin{align*} \left\langle \Psi\left(x_{1},x_{2}\right)\right|e\left(x_{1}\right)\left(\Phi^{B}\left(x_{1}\right)\left(e\left(x_{2}\right)\left(\Phi^{C}\left(x_{2}\right)\left|\Psi\left(x_{1},x_{2}\right)\right\rangle \right)\right)\right) & \overset{!}{=}a^{BC}\chi\left(x_{1},x_{2}\right),\end{align*} \begin{align*} \Psi & \,:\,\mathbb{R}^{N}\otimes\mathbb{R}^{N}\rightarrow\mathbb{R}^{N}, & x_{1},x_{2} & \in\mathbb{R}^{N},\\ \chi & \,:\,\mathbb{R}^{N}\otimes\mathbb{R}^{N}\rightarrow\mathbb{R}, & a^{BC} & =\mathrm{const}.\end{align*} \subsubsection{Glueball} When the $A^{B}$ and
$A^{C}$ operators are interpreted as unobservable Strong Force fields and charges, the methodology can be applied to a quantum system that interacts purely through fields. This is possible in principle when particles that mediate a force carry charge themselves, as is the case with the gluons in the strong force. A bound state between gluons only, without quarks, has been referred to as \emph{glueball} in the literature. But conventional treatment of the strong force currently indicates that such a particle either does not exist or, if it does, would always be in a superposition with regular particles made from bound quarks, from which it would be indistinguishable. A solution for the glueball has been brought forward \citep{Dzhu2010NonperturbQC,Dzhu2010SU3FluxTube}, by adopting an approach similar to the one in this section and constraining degrees of freedom to model a force with known characteristics from the strong force in physics. Since the new glueball solution is obtained from a quantum theory that generally does not reduce to conventional quantum theory, there is an opportunity to predict novel effects from this nonstandard treatment. \subsection{Emergent probability from a nonassociative geometry?} Conventional quantum mechanics specifies a conservation rule for probability density $\rho$ and flux $\vec{j}$:\begin{align*} \frac{\partial}{\partial t}\rho+\mathrm{div}\,\vec{j} & =0.\end{align*} This relation can be extended to more than three spatial dimensions in the $\mathrm{div}\,\vec{j}$ term when the underlying geometry is locally flat and differentiable.
For a single time axis and three or more spatial dimensions, an $n$-dimensional vector space over the reals $\mathbb{R}^{n}$ can be equipped with a quadratic metric of the form:\begin{align*} ds^{2} & :=dt^{2}-\sum_{j=1}^{n-1}dx_{j}^{2}.\end{align*} For a static volume $X$ with no flux $\vec{j}$ on the surface, probability density $\rho$ is then required to be conserved as a function of time:\begin{align*} \frac{\partial}{\partial t}\int_{X}\rho\, d^{n-1}x & =0.\end{align*} When allowing nonassociative wave functions $\left|\Psi\right\rangle $ where generally $\left\langle \Psi\right|\left(Q\left|\Psi\right\rangle \right)\neq\left(\left\langle \Psi\right|Q\right)\left|\Psi\right\rangle $, this requires additional conditions on how to extract observable values $h_{n}$ with probabilities $\rho_{n}$. Recalling the Born rule for observability, clarification is required at the fundamental level of quantum mechanics: \begin{align*} \left\langle \Psi\right|Q\left|\Psi\right\rangle & \overset{?}{=}\sum_{n}h_{n}\int_{V}\psi_{n}^{*}\psi_{n}dV, & \rho_{n} & \overset{?}{=}\int_{V}\psi_{n}^{*}\psi_{n}dV.\end{align*} Rather than trying to somehow fit nonassociative algebra into these relations from conventional quantum mechanics, it is now speculated that classical probability becomes an emergent phenomenon, where nonassociative values of $\left|\Psi\right\rangle $ suggest some kind of nonassociative geometry in which to better understand the fluxes involved. This notion of theoretical reductionism ultimately has to prove itself in an actual model. It must show whether simplification is indeed achievable and whether natural law can be described with fewer assumptions. It is noted that probability does not have to be abandoned as a concept altogether. But there may be an opportunity for modeling physical law in approaches where nonconservation of probability forced investigators to give up in the past.
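The conservation requirement can be illustrated with a minimal numerical sketch from conventional quantum mechanics. Assuming the free-particle Schr\"odinger equation with $\hbar=m=1$ and its well-known spreading Gaussian solution (all names below are illustrative, not part of the theory), the total probability $\int\rho\,dx$ stays constant while the density itself spreads:

```python
import math

def rho(x, t):
    # |psi|^2 of the analytic free-particle Gaussian packet (hbar = m = 1):
    # psi(x, t) = pi^(-1/4) * (1 + i t)^(-1/2) * exp(-x^2 / (2 (1 + i t)))
    return math.exp(-x*x / (1.0 + t*t)) / math.sqrt(math.pi * (1.0 + t*t))

def total_probability(t, L=50.0, n=100000):
    # midpoint Riemann sum of rho over [-L, L]; the flux vanishes at the boundary
    dx = 2.0 * L / n
    return sum(rho(-L + (k + 0.5)*dx, t) for k in range(n)) * dx
```

Both total_probability(0.0) and total_probability(3.0) evaluate to 1 within numerical accuracy, even though the density has spread by a factor of $\sqrt{1+t^{2}}$.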
\section{Prototype nonassociative quantum theory in one dimension} \label{sec:HopfCoquasiCandidateMethod}A new nonassociative quantum theory will have to be consistent in itself and reproduce known results in parameter ranges that have been tested experimentally. To be considered for comparison with existing theories, it will have to predict new testable effects or describe known effects with fewer assumptions. This section brings forward a prototype for such a theory that is built from algebraic rules: two types of transformations in a vector space, a self-duality principle, and an eigenvalue invariance condition. In a strong simplification as compared to nature, all physical fields and charges are placed along a single, preferred real axis in $\mathbb{R}^{d}$. Wave functions $\psi$ are made from two types of transformations, active $T^{A}$ and passive $T^{P}$, which map the preferred real axis into the unit sphere in $d$ dimensions, $S^{d-1}$. Active and passive transformations are considered dual to one another, and relate through a condition that can be satisfied in the normed division algebras $\mathbb{C}$ ($d=2$), $\mathbb{H}$ ($d=4$), and $\mathbb{O}$ ($d=8$). Fields and particles in physics are mapped to active and passive transformations respectively. The mathematical duality between the allowed types of transformations becomes a self-duality principle between physical fields and particles. Physical measurement requires an eigenfunction/eigenvalue rule similar to the Born rule in conventional quantum mechanics, with the additional requirement that the eigenvalue relation must be reducible to a complex number description% \footnote{This requirement leads to a class of quaternion and octonion algebras that are equivalent in the sense that the eigenvalue relation in complex number form remains unchanged when switching between equivalent algebras. This is discussed in section \ref{sec:Hopf}.% }.
Solutions are shown and the prototype is advertised for further exploration. Using complex numbers and asking for the influence of many fields on a single particle, the solutions reproduce the Dirac equation with $1/r$ fields in a timeless physical world with only one dimension in space. Quaternionic solutions exist only if all fields (or particles) are local to the point that marks a particle in the complex numbers. There must be at least two contributing fields% \footnote{Or particles; fields and particles are required to be equivalent duals. The side note {}``(or particles)'' is therefore omitted going forward when talking about fields, but it is always implied.% } that cannot be observed or probed independently. The real eigenvalue from the modified Born rule remains invariant under general rotation of the imaginary quaternion basis. Therefore, the eigenvalue relation is said to have local $\mathrm{SU}\left(2\right)$ Lie group symmetry. Octonionic solutions are further restricted by requiring at least three contributing fields. One solution is shown and said to have local $\mathrm{G}_{2}$ symmetry. Without claiming completeness, the solution set of the prototype appears wide enough to sufficiently resemble physical reality, given the model's current limitation of only one spatial dimension and no time concept. \subsection{Configuration space, self-duality, active and passive transformations} The model is built in a $d$-dimensional vector space over the reals, $\mathbb{R}^{d}$. One preferred axis in $\mathbb{R}^{d}$ corresponds to physical space and is denoted with $x$.
A physical system is described through a combination of \emph{active} and \emph{passive} transformations, which map the $x$-axis onto the unit sphere in $\mathbb{R}^{d}$:\begin{align*} T^{A},T^{P} & \,:\,\mathbb{R}\rightarrow S^{d-1}.\end{align*} Active and passive transformations are modeled as exponentials% \footnote{A different type of active and passive transformation was discussed in \citep{Gogber2008SplitOctoRotations}, where a {}``passive'' transformation was a rotation of the coordinate basis elements of $\mathbb{R}^{8}$ that leaves the norm of a split-octonion invariant. {}``Active'' rotations were actions on the split-octonion basis elements that result in another split-octonion basis.% } using normed division algebras:\begin{eqnarray*} \theta & \in & \begin{cases} \mathbb{C} & \left(d=2\right),\\ \mathbb{H} & \left(d=4\right),\\ \mathbb{O} & \left(d=8\right),\end{cases}\qquad\left|\theta\right|=1,\qquad\theta^{2}=-1,\\ T^{A}\left(x\right) & := & \left|x-a\right|^{\theta t_{A}}:=\exp\left(\theta t_{A}\ln\left|x-a\right|\right)\in S^{d-1},\\ T^{P}\left(x\right) & := & \theta^{\left(x-a\right)t_{P}}:=\exp\left(\left(x-a\right)t_{P}\ln\theta\right)\\ & = & \exp\left(\theta t_{P}\left(x-a\right)\left(\frac{\pi}{2}+2\pi N\right)\right)\in S^{d-1},\\ x,a,t_{A},t_{P} & \in & \mathbb{R},\, x\neq a,\, N\in\mathbb{Z}.\end{eqnarray*} The natural logarithm with real argument, $\ln\left|x-a\right|$, is chosen to be real-valued by definition. This choice omits possible terms $\pm2\pi\theta$ in the exponent of $T^{A}\left(x\right)$. 
For $x\neq a$ it is always possible to find $\left\{ x,a,t_{A},t_{P},\theta\right\} $ such that:\begin{eqnarray*} T^{A}\left(x\right) & \overset{!}{=} & T^{P}\left(x\right),\\ \exp\left(\theta t_{A}\ln\left|x-a\right|\right) & \overset{!}{=} & \exp\left(\theta t_{P}\left(x-a\right)\left(\frac{\pi}{2}+2\pi N\right)\right),\\ \Rightarrow\, t_{A} & = & t_{P}\frac{x-a}{\ln\left|x-a\right|}\left(\frac{\pi}{2}+2\pi N\right)\qquad\textrm{for }x\neq a.\end{eqnarray*} This relation allows $T^{A}$ and $T^{P}$ to be called equivalent duals under a map $\widetilde{\cdot}$ that exchanges base and exponent:\begin{align*} T^{A} & \sim\alpha^{\beta}, & T^{P} & \sim\beta^{\alpha}, & \alpha^{\beta} & \sim\widetilde{\beta^{\alpha}}.\end{align*} \subsection{Fields and particles} When modeling the electromagnetic force in physics, the {}``first quantization'' particle point of view of Quantum Electrodynamics is fully equivalent to the {}``second quantization'' field point of view of Quantum Field Theory. The speculation here is that this equivalence can be carried forward for modeling physical forces beyond electromagnetism, given a suitable quantum theory (to be found). The mathematical duality between $T^{A}$ and $T^{P}$ becomes a \emph{self-duality principle} when declaring active transformations to model physical fields, and passive transformations to model physical particles. The proposed field-particle duality is thereby reflected in mathematical properties of the model. It is arbitrary to assign active transformations $T^{A}$ to fields, as opposed to particles. Since $T^{A}$ and $T^{P}$ are equivalent duals, the choice is irrelevant for the model predictions as long as one adheres to it throughout the entire calculation.
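Restricted to the complex case $\theta=\imath$ ($d=2$) for brevity, the two transformation types and their duality relation can be checked numerically. The sketch below (function names are illustrative only) confirms that both maps land on the unit circle and coincide at $x$ once $t_{A}$ is chosen according to the relation above:

```python
import cmath, math

def T_active(x, a, t_A):
    # T^A(x) = exp(theta * t_A * ln|x - a|), here with theta = i (complex case, d = 2)
    return cmath.exp(1j * t_A * math.log(abs(x - a)))

def T_passive(x, a, t_P, N=0):
    # T^P(x) = theta^((x - a) t_P) = exp(theta * t_P * (x - a) * (pi/2 + 2*pi*N))
    return cmath.exp(1j * t_P * (x - a) * (math.pi / 2 + 2 * math.pi * N))

def dual_t_A(x, a, t_P, N=0):
    # duality relation: t_A = t_P * (x - a)/ln|x - a| * (pi/2 + 2*pi*N), valid for x != a
    return t_P * (x - a) / math.log(abs(x - a)) * (math.pi / 2 + 2 * math.pi * N)
```

For example, with $x=2$, $a=0.5$, $t_{P}=0.3$, both values have unit modulus and T_active(x, a, dual_t_A(x, a, 0.3)) coincides with T_passive(x, a, 0.3).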
Interaction between many particles and fields is then modeled by an effective transformation $T^{\mathrm{eff}}$ that is a product of any number of active and passive transformations:\begin{eqnarray*} f & : & T_{1}\otimes\ldots\otimes T_{n}\rightarrow T^{\mathrm{eff}},\\ T_{i} & \in & \left\{ T_{i}^{A},T_{i}^{P}\right\} \quad\textrm{for }i=1,\ldots,n,\\ T^{\mathrm{eff}} & : & \mathbb{R}\rightarrow S^{d-1}.\end{eqnarray*} The tensor symbol $\otimes$ indicates possible complex number, quaternion, and octonion multiplication rules. Terms from the $\left\{ T_{i}\right\} $ may be expressed as their Taylor polynomials. After choosing an algebra, the $f$ become polynomial functions in $\mathbb{R}^{d}$. The effective transformations $T^{\mathrm{eff}}$ will also be written with the symbol $\psi$ to be similar to notation customary for wave functions in physics. \subsection{Physical measurement, modified Born rule, and select solutions} The Born rule from conventional quantum mechanics governs physical measurement. As was recalled in section \ref{sec:TowardsNonassocQuantTh}, operators $\hat{D}$ model observable physical quantities when they act on wave functions $\psi\equiv T^{\mathrm{eff}}$ that are eigenfunctions to $\hat{D}$ with real eigenvalues $\lambda$. 
An additional condition is now supplied for the prototype quantum theory here, which requires the eigenfunctions $\psi$ to fall into a complex number subspace of the algebra:\begin{align*} \hat{D}\psi & \overset{!}{=}\lambda\psi, & \lambda & \in\mathbb{R}, & \psi & \in\mathbb{C}\subset\left\{ \mathbb{H},\mathbb{O}\right\} .\end{align*} \subsubsection{Complex numbers} In the complex number case with basis $b_{\mathbb{C}}:=\left\{ 1,\imath\right\} $, a solution exists for a linear differential operator $\hat{D}$, wave function $\psi$ and eigenvalue $m\in\mathbb{R}$:\begin{eqnarray*} \hat{D} & := & -\imath\frac{\partial}{\partial x}-\sum_{i=1}^{n-1}\frac{t_{i}}{x-a_{i}},\\ \psi & := & \imath^{t_{n}x}\left(\prod_{i=1}^{n-1}\left|x-a_{i}\right|^{\imath t_{i}}\right)\\ & = & \exp\left(\imath\pi t_{n}x\left(\frac{1}{2}\pm2M\right)\right)\prod_{i=1}^{n-1}\exp\left(\imath t_{i}\ln\left|x-a_{i}\right|\right),\end{eqnarray*} \begin{align} \Rightarrow\,\hat{D}\psi & =m\psi, & m & =\pi t_{n}\left(\frac{1}{2}\pm2M\right), & M & \in\mathbb{N}.\label{eq:ComplexesOpEqDirac}\end{align} The wave function $\psi$ contains a product of active transformations $T_{i}^{A}$ which are interpreted as fields or external influences on a quantum system:\begin{align*} T_{i}^{A} & =\left|x-a_{i}\right|^{\imath t_{i}}.\end{align*} The passive transformation $T^{P}$ is interpreted as particle property, or characteristic property of the system under investigation:\begin{align*} T^{P} & =\imath^{t_{n}x}.\end{align*} In comparison with physics, the Dirac equation with electromagnetic field can be written with complex-valued $4\times4$ matrices $\gamma_{j}$, four vectors $\Psi$ and spacetime coordinates $\left\{ x_{0},\ldots,x_{3}\right\} $ as:\begin{align*} \hat{D}_{\mathrm{EM}} & :=\sum_{j=0}^{3}\gamma_{j}\left(-\imath\frac{\partial}{\partial x_{j}}-\sum_{i=1}^{n-1}\frac{t_{i}}{x_{j}-a_{i}}\right),\\ \gamma_{j} & \in\mathbb{C}^{4}\times\mathbb{C}^{4},\qquad\Psi\in\mathbb{C}^{4},\\
\hat{D}_{\mathrm{EM}}\Psi & =m\Psi.\end{align*} These equations use four spacetime coordinates $x_{j}$ instead of a single coordinate $x$ in (\ref{eq:ComplexesOpEqDirac}), and four $\gamma_{j}$ matrices instead of a mere multiplicative identity. The operator equation (\ref{eq:ComplexesOpEqDirac}) is interpreted as a state equation of a test particle under the influence of linearly superposed $1/x$-type fields of different strength $t_{i}$ and poles at places $a_{i}$ along a single $x$ axis. The particle is located at $x=0$ and has a characteristic property $m\in\mathbb{R}$. On this primitive level it supports the conjecture that the prototype quantum theory is similar enough to known physics to be of interest for further investigation of spacetime-internal isospin properties of a quantum system. Of course, the argument cannot be made conclusively as long as it is unknown how, or even if, today's description of observed spacetime can be made to emerge in a future generalization of the prototype. Equation (\ref{eq:ComplexesOpEqDirac}) models a test particle under the influence of $\left(n-1\right)$ fields. Any number of further external influences can be supplied independently as:\begin{align*} \psi' & :=\psi\exp\left(\imath\alpha\right), & \alpha & \in\mathbb{R}.\end{align*} Using terminology customary in physics, this property is now called a \emph{global symmetry}. Additional fields can be supplied independently at any place along the real axis through superposition with the existing fields. Since the $\exp\left(\imath\alpha\right)$ terms generate the $\mathrm{U}\left(1\right)$ Lie group under multiplication, \begin{align*} \left\{ \exp\left(\imath\alpha\right),\,\alpha\in\mathbb{R}\right\} & \cong\mathrm{U}\left(1\right),\end{align*} the eigenvalue relation is said to have global $\mathrm{U}\left(1\right)$ symmetry.
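The eigenvalue relation (\ref{eq:ComplexesOpEqDirac}) lends itself to a direct numerical check. The sketch below (parameter values are illustrative) builds $\psi$ for the sign choice $+2M$, applies $\hat{D}$ with a central-difference derivative, and recovers $m=\pi t_{n}\left(\frac{1}{2}+2M\right)$:

```python
import cmath, math

def psi(x, t_n, M, fields):
    # psi = exp(i*pi*t_n*x*(1/2 + 2M)) * prod_i |x - a_i|^(i*t_i), fields = [(t_i, a_i)]
    val = cmath.exp(1j * math.pi * t_n * x * (0.5 + 2*M))
    for t_i, a_i in fields:
        val *= cmath.exp(1j * t_i * math.log(abs(x - a_i)))
    return val

def apply_D(x, t_n, M, fields, h=1e-6):
    # D psi = -i * dpsi/dx - sum_i t_i/(x - a_i) * psi  (central-difference derivative)
    dpsi = (psi(x + h, t_n, M, fields) - psi(x - h, t_n, M, fields)) / (2*h)
    return -1j * dpsi - sum(t_i/(x - a_i) for t_i, a_i in fields) * psi(x, t_n, M, fields)
```

With $t_{n}=0.8$, $M=1$ and two fields $\left(t_{1},a_{1}\right)=\left(0.3,-1\right)$, $\left(t_{2},a_{2}\right)=\left(0.5,2.5\right)$, the ratio apply_D/psi comes out as $\pi\cdot0.8\cdot2.5\approx6.283$ at any $x$ away from the poles, independent of the field parameters.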
\subsubsection{Quaternions} Not every combination of active and passive transformations in the quaternions can be written as a single effective transformation using complex numbers only. For example, a combination of a single active and passive transformation may fall into a complex number subspace only if both are already contained in that same subspace:\begin{align*} T_{1}^{A} & :=\left|x-a_{1}\right|^{\theta_{1}t_{1}}=\exp\left(\theta_{1}t_{1}\ln\left|x-a_{1}\right|\right),\\ T_{2}^{P} & :=\theta_{2}^{t_{2}\left(x-a_{2}\right)}=\exp\left(\theta_{2}\pi\left(x-a_{2}\right)t_{2}\left(\frac{1}{2}+2M_{2}\right)\right),\\ \psi & :=T_{1}^{A}T_{2}^{P},\qquad\psi\in\mathbb{C}\,\Longleftrightarrow\,\theta_{1}=\pm\theta_{2}.\end{align*} Due to the different $x$-dependency in $T_{1}^{A}$ and $T_{2}^{P}$, the two unit quaternions $\theta_{1}$ and $\theta_{2}$ must necessarily be linearly dependent for $\psi$ to remain in the same complex number subalgebra for any $x\neq a_{1},a_{2}$. A truly quaternionic solution may therefore only come from wave functions that are a product of a single type of transformation, active or passive. The following examines the example of interacting particles% \footnote{The same reasoning from the example is valid for interacting fields. This must be the case since the transformations satisfy the self-duality requirement.% }.
A pair of passive transformations in the quaternions is in general:\begin{align*} T_{1}^{P}T_{2}^{P} & =\exp\left(\theta_{1}\pi\left(x-a_{1}\right)t_{1}\left(\frac{1}{2}+2M_{1}\right)\right)\exp\left(\theta_{2}\pi\left(x-a_{2}\right)t_{2}\left(\frac{1}{2}+2M_{2}\right)\right)\\ & =\exp\left(\theta_{1}c_{1}\right)\exp\left(\theta_{1}d_{1}x\right)\exp\left(\theta_{2}c_{2}\right)\exp\left(\theta_{2}d_{2}x\right),\end{align*} \begin{align*} c_{1} & :=-\pi a_{1}t_{1}\left(\frac{1}{2}+2M_{1}\right), & d_{1} & :=\pi t_{1}\left(\frac{1}{2}+2M_{1}\right),\\ c_{2} & :=-\pi a_{2}t_{2}\left(\frac{1}{2}+2M_{2}\right), & d_{2} & :=\pi t_{2}\left(\frac{1}{2}+2M_{2}\right).\end{align*} The $c_{1,2},d_{1,2}\in\mathbb{R}$ and $M_{1,2}\in\mathbb{Z}$ are constants and independent of $x$. The imaginary unit quaternions $\theta_{1,2}$ are elements of the $\mathfrak{su}\left(2\right)$ Lie algebra made from the imaginary quaternion basis elements $\left\{ i_{1},i_{2},i_{3}\right\} $. The Baker-Campbell-Hausdorff formula for Lie algebras gives existence of an imaginary unit quaternion $\tilde{\theta}$ from the same $\mathfrak{su}\left(2\right)$ algebra such that:\begin{align*} \exp\left(\tilde{\theta}\tilde{d}\right) & =\exp\left(\theta_{1}d_{1}\right)\exp\left(\theta_{2}d_{2}\right).\end{align*} $\tilde{d}$ is a real constant. The $x$-dependency can be written to a single unit quaternion $\tilde{\theta}$:\begin{align*} T_{1}^{P}T_{2}^{P} & =\exp\left(\theta_{1}c_{1}\right)\exp\left(\theta_{1}d_{1}x\right)\exp\left(\theta_{2}d_{2}x\right)\exp\left(\theta_{2}c_{2}\right)\\ & =\exp\left(\theta_{1}c_{1}\right)\exp\left(\tilde{\theta}\tilde{d}x\right)\exp\left(\theta_{2}c_{2}\right).\end{align*} To express $\psi$ using a single term $\exp\left(\tilde{\theta}\tilde{d}x\right)$ there may be two cases: \begin{enumerate} \item The $\theta_{1},\theta_{2}$ are identical (except for a possible sign change). 
In this case, $\psi$ is fully contained in a complex number subalgebra within the quaternions and the eigenvalue equation reduces to the complex number case. \item The $a_{1}$ and $a_{2}$ are the same, $a:=a_{1}=a_{2}$, so that all particles are located at the same position along $x$. This makes $c_{1}$ and $c_{2}$ multiples of $d_{1}$ and $d_{2}$ by the same real factor, $c_{1,2}=ad_{1,2}$, and allows for a new quaternionic solution $\exp\left(\tilde{\theta}\tilde{d}\left(x-a\right)\right)=\exp\left(\theta_{1}d_{1}\left(x-a\right)\right)\exp\left(\theta_{2}d_{2}\left(x-a\right)\right)$. \end{enumerate} In the second case there is a new quaternionic type of solution where $\psi$ can be expressed as:\begin{align*} \psi & =\exp\left(\tilde{\theta}\tilde{d}\left(x-a\right)\right).\end{align*} An operator $\hat{D}$ exists that commutes with $\psi$ and has a real eigenvalue $\tilde{d}$:\begin{align*} \hat{D} & :=-\tilde{\theta}\frac{\partial}{\partial x}, & \hat{D}\psi & =\psi\overleftarrow{\hat{D}}=\tilde{d}\psi.\end{align*} This satisfies the modified Born rule as the eigenvalue relation can be reduced to a complex number subalgebra. It may become complicated to find an explicit expression for $\tilde{\theta}$ and $\tilde{d}$ from a given $\left\{ d_{1},d_{2},\theta_{1},\theta_{2}\right\} $. But with its existence proven it is concluded that the prototype quantum theory yields a novel type of solution for the quaternion case. All particles have to be at the same place $x=a$ which makes it a \emph{local} solution. Finding a $\tilde{\theta}$ depends on both $\theta_{1}$ and $\theta_{2}$. It is not possible anymore to supply a third particle with arbitrary $\theta_{3}$ independently, since any new term with a $\theta_{3}$ generally requires a change of $\tilde{\theta}$. This is different from the notion of describing influences from a given system on an independent test particle, as it was possible in the complex number case. 
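The existence statement can be illustrated numerically: the product of two unit-quaternion exponentials is again a single exponential, whose axis $\tilde{\theta}$ and angle $\tilde{d}$ can be read off from the polar form of the product. A minimal sketch (function names are illustrative; quaternions are written as 4-tuples $\left(w,x,y,z\right)$):

```python
import math

def qmul(p, q):
    # Hamilton product of quaternions (w, x, y, z)
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qexp(theta, d):
    # exp(theta * d) = cos(d) + theta * sin(d) for a unit imaginary quaternion theta
    ux, uy, uz = theta
    return (math.cos(d), ux*math.sin(d), uy*math.sin(d), uz*math.sin(d))

def qpolar(q):
    # read off (theta_tilde, d_tilde) from q = cos(d_tilde) + theta_tilde * sin(d_tilde)
    w, x, y, z = q
    s = math.sqrt(x*x + y*y + z*z)
    return (x/s, y/s, z/s), math.atan2(s, w)
```

For example, with $\theta_{1}=i_{1}$, $\theta_{2}=i_{2}$, $d_{1}=0.7$, $d_{2}=1.1$, the product qmul(qexp(theta1, d1), qexp(theta2, d2)) is recovered exactly as qexp applied to the pair returned by qpolar, with $\tilde{\theta}$ again a unit imaginary quaternion.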
The $\left\{ \theta_{1},\theta_{2},\tilde{\theta}\right\} $ in the quaternion example of two interacting particles are elements of an $\mathfrak{su}\left(2\right)$ Lie algebra. The example can be extended to any number of interaction partners since the Baker-Campbell-Hausdorff formula may be repeated any number of times, as long as one remains within the same Lie algebra. The quaternionic solution is therefore said to have \emph{local} $\mathrm{SU}\left(2\right)$ \emph{symmetry}. It models an internal \emph{isospin} property that does not depend on $x$. This resembles observed properties from the Weak Force in physics. \subsubsection{Octonions} \label{sub:octonionPrototype}Use of octonion algebra justifies calling the formulation here a \emph{nonassociative} prototype quantum theory. Wave functions $\psi$ are made from transformations $T_{i}$ which contain a number of octonionic imaginary unit vectors $\left\{ \theta_{i}\right\} $. These fall into one of four cases: \begin{enumerate} \item All $\theta_{i}$ are identical (except for a possible difference in sign) and the eigenvalue relation reduces to the complex number case. \item All $\theta_{i}$ are associative under multiplication and are therefore elements of the same $\mathfrak{su}\left(2\right)$ algebra. This reduces to the quaternion case. \item The automorphism group of the $\left\{ \theta_{i}\right\} $ is $\mathrm{G}_{2}$, which is the automorphism group of the octonions. \item The automorphism group of the $\left\{ \theta_{i}\right\} $ is $\mathrm{SU}\left(3\right)$, which is the subgroup of $\mathrm{G}_{2}$ that leaves one imaginary octonion unit unchanged. \end{enumerate} Cases 3 and 4 are new with the octonions. They require a product of at least three $T_{i}$, since any two octonion basis elements are always part of an $\mathfrak{su}\left(2\right)$ algebra and therefore have an automorphism group no larger than $\mathrm{SU}\left(2\right)$. 
Octonions $\left\{ \theta_{i}\right\} $ generally do not form a Lie algebra. It is not possible to find an octonion $\tilde{\theta}$ and real number $\tilde{d}$ for any given $\theta_{i},d_{i}$ ($i=1,2,3$) such that:\begin{eqnarray*} \left(\exp\left(\theta_{1}d_{1}\right)\exp\left(\theta_{2}d_{2}\right)\right)\exp\left(\theta_{3}d_{3}\right) & \overset{?}{=} & \exp\left(\tilde{\theta}\tilde{d}\right).\end{eqnarray*} But there are subalgebras in the octonions that are larger than the quaternions, for which the Baker-Campbell-Hausdorff equation is still applicable. The nonassociative Lie algebras $\mathfrak{g}_{2}$ or $\mathfrak{su}\left(3\right)\subset\mathfrak{g}_{2}$ can be expressed in terms of octonions (e.g.~\citep{Dixon1994DivisionAlgs}). The $\mathfrak{g}_{2}$ can be written as algebra of derivations over the octonions, $\mathfrak{der}\left(\mathbb{O}\right)$, in form of \citep{Baez2002TheOctonions,Schafer1995nonassIntro}:\begin{align*} D_{u,v}\left(a\right) & =\left[\left[u,v\right],a\right]-3\left(\left(uv\right)a-u\left(va\right)\right), & u,v,a & \in\mathbb{O}.\end{align*} Since the $D_{u,v}\left(a\right)$ form a Lie algebra it is possible to find $\tilde{\theta}$ and $\tilde{d}$ for any given $\theta_{i},d_{i}$ ($i=1,2,3$) and $u,v$:\begin{align} \left(\exp\left(D_{u,v}\left(\theta_{1}\right)d_{1}\right)\exp\left(D_{u,v}\left(\theta_{2}\right)d_{2}\right)\right)\exp\left(D_{u,v}\left(\theta_{3}\right)d_{3}\right) & =\exp\left(D_{u,v}\left(\tilde{\theta}\right)\tilde{d}\right).\label{eq:waveFunctionAsG2}\end{align} A wave function $\psi$ and operator $\hat{D}$ exist that model a solution for the nonassociative prototype quantum theory: \begin{align*} \psi & :=\exp\left(\tilde{\theta}_{D}\tilde{d}\left(x-a\right)\right), & \hat{D} & :=-\tilde{\theta}_{D}\frac{\partial}{\partial x}, & \hat{D}\psi & =\tilde{d}\psi.\end{align*} Solutions in this ansatz are restricted similarly to the quaternion case.
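The defining property of a derivation, $D_{u,v}\left(ab\right)=D_{u,v}\left(a\right)b+aD_{u,v}\left(b\right)$, can be checked numerically with an explicit octonion multiplication table. The sketch below is illustrative only; it uses the triplet convention $\left\{ 123,761,572,653,145,246,347\right\} $ that is also adopted in section \ref{sec:Hopf}:

```python
import random

TRIPLETS = [(1,2,3), (7,6,1), (5,7,2), (6,5,3), (1,4,5), (2,4,6), (3,4,7)]

# structure constants: e_a e_b = s * e_c for distinct imaginary units a, b
MUL = {}
for a, b, c in TRIPLETS:
    for x, y, z in ((a, b, c), (b, c, a), (c, a, b)):
        MUL[(x, y)] = (1, z)
        MUL[(y, x)] = (-1, z)

def omul(p, q):
    # octonion product; p, q are 8-lists with index 0 the real part
    out = [0.0] * 8
    out[0] = p[0]*q[0] - sum(p[i]*q[i] for i in range(1, 8))
    for i in range(1, 8):
        out[i] = p[0]*q[i] + p[i]*q[0]
    for a in range(1, 8):
        for b in range(1, 8):
            if a != b:
                s, c = MUL[(a, b)]
                out[c] += s * p[a] * q[b]
    return out

def sub(p, q):
    return [u - v for u, v in zip(p, q)]

def comm(p, q):
    return sub(omul(p, q), omul(q, p))

def deriv(u, v, a):
    # D_{u,v}(a) = [[u,v], a] - 3*((uv)a - u(va))
    associator = sub(omul(omul(u, v), a), omul(u, omul(v, a)))
    return sub(comm(comm(u, v), a), [3*t for t in associator])
```

For random $u,v,a,b$ the derivation property holds to machine precision, while the associator of, for example, $i_{1},i_{2},i_{4}$ is nonzero, which confirms that the underlying algebra is genuinely nonassociative.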
All particles% \footnote{The same reasoning applies to fields as well and is always implied, just as in the quaternion case. A wave function $\psi_{F}:=\exp\left(\tilde{\theta}_{D}\tilde{d}\ln\left(x-a\right)\right)$ would be an eigenfunction to $\hat{D}_{F}:=-\tilde{\theta}_{D}\left(x-a\right)\left(\partial/\partial x\right)$ with eigenvalue $\lambda=\tilde{d}$.% } have to be located at the same place. Observable wave functions are made from interactions between particles, but without the ability to insert an independent test probe. For a truly octonionic eigenvalue relation at least three particles are required to model a wave function (\ref{eq:waveFunctionAsG2}). Through the $\left\{ D_{u,v}\left(\theta_{i}\right)\right\} $, their imaginary unit vectors $\left\{ \theta_{i}\right\} $ may generate the nonassociative algebra $\mathfrak{g}_{2}$ or its $\mathfrak{su}\left(3\right)$ subalgebra. On this very high level it appears that the prototype continues to be a candidate for future usefulness in physics. The octonion case contains spaces that have the observed $\mathrm{SU}\left(3\right)$ symmetry of the Strong Force between quarks. The minimum number of quarks that can enter into a bound state is three, not counting quark-antiquark states% \footnote{Time as a concept is absent from the current prototype theory, and since particle-antiparticle duality is related to time symmetry in nature, it is conjectured that the prototype quantum theory does not contradict this.% }. \section{Hopf coquasigroup symmetry of the octonionic eigenvalue relation} \label{sec:Hopf}This section looks at ways to extend the prototype nonassociative quantum theory to more than one dimension in space, as nature is obviously more than one-dimensional. An equivalence class for normed division algebras is introduced, and a further generalized Born rule for physical observation requires a real eigenvalue to remain invariant under changes between equivalent algebras.
The prototype nonassociative quantum theory in one dimension from the previous section is shown to be contained in such an approach, which now leaves room for supplying additional dimensions independently. Understanding the mathematical structure of the new equivalence class is advertised as key to understanding the physical meaning of its associated solution spaces. \subsection{A further generalization to the Born rule} It appears natural to assume that a solution along the $x$ axis,\begin{align} \hat{D}_{x}\psi\left(x\right) & =\tilde{d}\psi\left(x\right),\label{eq:eigenvalueWithSingleAxis}\end{align} should still be valid if properties $\phi\left(y\right)$ along some other axis $y$ orthogonal to $x$ exist:\begin{eqnarray} \hat{D}_{x}\psi\left(x\right)=\tilde{d}\psi\left(x\right) & \Longrightarrow & \hat{D}_{x}\psi\left(x\right)\phi\left(y\right)=\tilde{d}\psi\left(x\right)\phi\left(y\right).\label{eq:eigenvalueWithOtherPart}\end{eqnarray} Even though the factors may be using nonassociative octonion algebra, no brackets are needed on the right-hand side since $\hat{D}_{x}$ only acts on $\psi\left(x\right)$, $\hat{D}_{x}\psi\left(x\right)$ is in the same complex number subalgebra as $\psi\left(x\right)$, and all normed division algebras are alternative. The requirement to express the entire eigenvalue equation (\ref{eq:eigenvalueWithOtherPart}) in the complex plane appears too narrow for a modified Born rule, as it would restrict the $\phi\left(y\right)$ to the same complex number subalgebra for any $y$, without need. To make room for expansion, a class is now introduced such that the real eigenvalue in (\ref{eq:eigenvalueWithOtherPart}) is to remain invariant when switching between equivalent normed division algebras. Any two algebras are equivalent under this class if they share the same axes in their associative imaginary basis triplets, but allow for a change in sign (or order, or parity) of these triplets.
Equivalent algebras will be labeled $N=0,\ldots,M-1$. Parity of imaginary quaternion basis triplets gives rise to algebraic noncommutativity. A quaternion algebra $\mathbb{H}\left[0\right]$ may be defined with $i_{1}i_{2}:=-i_{2}i_{1}:=i_{3}$, and another $\mathbb{H}\left[1\right]$ with $i_{1}i_{2}:=-i_{2}i_{1}:=-i_{3}$. These $M=2$ algebras are equivalent under the new class. In the octonions, parity of the basis triplets gives rise to noncommutativity as well as nonassociativity. There are $M=16$ equivalent octonion algebras. If the eigenvalue equation (\ref{eq:eigenvalueWithSingleAxis}) is confined to a complex number subalgebra, then $\tilde{d}$ remains invariant under changes between equivalent algebras. The one-dimensional prototype quantum theory is therefore contained in an extension that requires eigenvalue invariance under changes of algebra in its generalized Born rule. In turn this allows for introduction of independent axes as in (\ref{eq:eigenvalueWithOtherPart}) that may be modeled in other algebra subspaces. In symbolic form, wave functions $\psi$ are polynomial functions $\psi\equiv f\left[N\right]$ made from polynomials $f$ supplied with an algebra multiplication that is octonionic in general. A functor $A$ maps the polynomial $f\in P$ into the set of polynomial functions $\left\{ f\left[N\right]\right\} $ that are made from equivalent algebras:\begin{align*} A & \,:\, P\rightarrow\left\{ \mathbb{R}\otimes\ldots\otimes\mathbb{R}\rightarrow S^{7}\right\} , & A\left(f\right) & :=\left\{ f\left[N\right],\, N=0,\ldots,M-1\right\} .\end{align*} Here, the variable list of real parameters $\mathbb{R}\otimes\ldots\otimes\mathbb{R}$ denotes the possible physical axes or dimensions. The 7-sphere $S^{7}$ is the unit sphere in $\mathbb{R}^{8}$.
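The two equivalent quaternion algebras $\mathbb{H}\left[0\right]$ and $\mathbb{H}\left[1\right]$ defined above can be made explicit in a few lines. The sketch below (illustrative names) implements the product with a parity parameter $s=\pm1$ for $i_{1}i_{2}=si_{3}$ and cyclic. Both choices satisfy the composition property $\left|pq\right|=\left|p\right|\left|q\right|$, and products of elements confined to the complex subalgebra spanned by $\left\{ 1,i_{1}\right\} $ come out identical in both algebras, so an eigenvalue relation confined to that subalgebra is invariant under the change of algebra:

```python
def qmul(p, q, s=1):
    # quaternion product with triplet parity s: i1*i2 = s*i3 (and cyclic);
    # s = +1 gives H[0], s = -1 gives H[1]; p, q are 4-tuples (w, x1, x2, x3)
    w1, a1, b1, c1 = p
    w2, a2, b2, c2 = q
    return (w1*w2 - a1*a2 - b1*b2 - c1*c2,
            w1*a2 + a1*w2 + s*(b1*c2 - c1*b2),
            w1*b2 + b1*w2 + s*(c1*a2 - a1*c2),
            w1*c2 + c1*w2 + s*(a1*b2 - b1*a2))
```

In particular, qmul((0,1,0,0), (0,0,1,0), s) returns $si_{3}$, which is the only place the two algebras differ for this pair of factors.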
Requiring an eigenvalue $\lambda\in\mathbb{R}$ to remain invariant under changes of algebra then becomes the generalized Born rule for observation: \begin{align} a & \overset{!}{=}\lambda f\left[0\right]\textrm{ for any }a\in A\left(\hat{D}f\right);\qquad\textrm{or equivalently:}\nonumber \\ A\left(\hat{D}f\right) & \overset{!}{=}\lambda\left\{ \underbrace{f\left[0\right],\ldots,f\left[0\right]}_{M\textrm{ times}}\right\} .\label{eq:defGenerlizedBornRuleDef}\end{align} The index $\left[0\right]$ is by choice and refers to one of the $M$ possible algebras. Any of the equivalent algebras from $A$ may be selected at will for a certain index number. Once selected, however, this choice has to remain throughout the entire calculation. \subsection{Hopf quasigroup structure} In the octonions with basis $b_{\mathbb{O}}=\left\{ 1,i_{1},\ldots,i_{7}\right\} $ there are seven associative permutation triplets of imaginary basis elements, which are now chosen for an octonion algebra $\mathbb{O}\left[0\right]$ (i.e., $\mathbb{O}$ with index $\left[0\right]$) as:\begin{align} \mathbb{O}\left[0\right] & :=\left\langle \mathbb{R}^{8},+,\times\right\rangle ,\nonumber \\ i_{\mu}\times i_{\nu} & :=\epsilon_{\mu\nu\rho}i_{\rho}-\delta_{\mu\nu}\textrm{, with}\nonumber \\ \mu\nu\rho & \in t_{\mathbb{O}\left[0\right]}:=\left\{ 123,761,572,653,145,246,347\right\} ,\label{eq:chosenOzeroTriplets}\\ 1\times i_{\mu} & :=i_{\mu}\times1=i_{\mu}.\nonumber \end{align} The product of any three imaginary basis elements is nonassociative when these basis elements are not contained in a single permutation triplet% \footnote{Other choices of triplet labeling are of course possible, for example, the cyclically symmetric $\left\{ 124,235,346,457,561,672,713\right\} $ used by Dixon \citep{Dixon1994DivisionAlgs}.
The choice here has $\left\{ i_{1},i_{2},i_{3}\right\} $ recalling the associative triplet from the quaternions and $\left\{ i_{4},i_{5},i_{6},i_{7}\right\} $ as a nonassociative quadruplet that extends quaternions to the octonions.% } from $t_{\mathbb{O}\left[0\right]}$ (\ref{eq:chosenOzeroTriplets}). It follows directly from the $\epsilon_{\mu\nu\rho}$ that even permutations of $\left\{ \mu\nu\rho\right\} $ produce the identical algebra, whereas odd permutations change the sign of the corresponding product of basis elements. An odd permutation can be understood as changing the parity of the triplet. There are seven basis element triplets in $t_{\mathbb{O}\left[0\right]}$ which allow for $2^{7}$ possible combinations of sign changes. However, only $16$ of the combinations generate an alternative composition algebra that is octonionic. These $M=16$ combinations are written as $\left\{ t_{\mathbb{O}\left[N\right]}\textrm{ with }N=0,\ldots,15\right\} $ and represent the set of equivalent algebras $\left\{ \mathbb{O}\left[N\right]\right\} $. To construct these, one can start from a given $\mathbb{O}\left[0\right]$ and four duality automorphisms $\mathcal{T}_{0},\ldots,\mathcal{T}_{3}$ that act on the $\left\{ \mathbb{O}\left[N\right]\right\} $:\begin{align*} \mathcal{T}_{n} & :\left\{ \mathbb{O}\left[N\right]\right\} \rightarrow\left\{ \mathbb{O}\left[N\right]\right\} , & \left\{ \left(\mathrm{id}\right),\mathcal{T}_{n}\right\} & \cong\mathbb{Z}_{2},\\ \mathcal{T}_{n}\mathcal{T}_{n} & =\left(\mathrm{id}\right), & n & \in\left\{ 0,1,2,3\right\} .\end{align*} $\mathbb{Z}_{2}$ is the cyclic group with two elements. 
When acting on the $t_{\mathbb{O}\left[N\right]}$ the $\mathcal{T}_{n}$ either leave the parity of a permutation triplet unchanged, $\left(\mathrm{id}\right)$, or swap it, $\left(\mathrm{sw}\right)$:\begin{align} \mathcal{T}_{0} & :=\left\{ \left(\mathrm{id}\right),\left(\mathrm{id}\right),\left(\mathrm{id}\right),\left(\mathrm{id}\right),\left(\mathrm{sw}\right),\left(\mathrm{sw}\right),\left(\mathrm{sw}\right)\right\} ,\label{eq:defOctoAutomorphisms}\\ \mathcal{T}_{1} & :=\left\{ \left(\mathrm{sw}\right),\left(\mathrm{sw}\right),\left(\mathrm{sw}\right),\left(\mathrm{sw}\right),\left(\mathrm{id}\right),\left(\mathrm{id}\right),\left(\mathrm{id}\right)\right\} ,\nonumber \\ \mathcal{T}_{2} & :=\left\{ \left(\mathrm{id}\right),\left(\mathrm{sw}\right),\left(\mathrm{id}\right),\left(\mathrm{sw}\right),\left(\mathrm{sw}\right),\left(\mathrm{id}\right),\left(\mathrm{sw}\right)\right\} ,\nonumber \\ \mathcal{T}_{3} & :=\left\{ \left(\mathrm{id}\right),\left(\mathrm{id}\right),\left(\mathrm{sw}\right),\left(\mathrm{sw}\right),\left(\mathrm{id}\right),\left(\mathrm{sw}\right),\left(\mathrm{sw}\right)\right\} .\nonumber \end{align} All possible combinations of the $\left\{ \mathcal{T}_{n}\right\} $ acting on $t_{\mathbb{O}\left[0\right]}$ then generate the $16$ triplet sets $t_{\mathbb{O}\left[N\right]}$ for the $\mathbb{O}\left[N\right]$ respectively. For previous approaches that use this construction see e.g. the {}``left-handed'' and {}``right-handed'' multiplication tables from \citep{Lockyer2008Octospace}, or the group action $T$ from \citep{SchrayManogueOcts1994} (equation 30 therein). Octonions that are here mapped through $\mathcal{T}_{0}$ are called {}``opposite algebra'' in \citep{SchrayManogueOcts1994} (equation 33 therein) and correspond to octonionic spinors of opposite chirality. Whereas $\mathcal{T}_{0}$ changes the parity of three triplets, the $\left\{ \mathcal{T}_{1},\mathcal{T}_{2},\mathcal{T}_{3}\right\} $ each change the parity of four triplets. 
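The counting of $M=16$ equivalent algebras can be verified combinatorially. The sketch below (our bookkeeping, not the paper's code) writes each $\mathcal{T}_{n}$ as a bit vector over the seven triplets of $t_{\mathbb{O}\left[0\right]}$ and composes automorphisms by bitwise XOR:

```python
from itertools import product

# Combinatorial check (our bookkeeping, not from the paper): write each T_n
# as a bit vector over the seven triplets of t_O[0], 1 = (sw), 0 = (id);
# composing automorphisms is then bitwise XOR.  All combinations of T_0..T_3
# yield exactly M = 16 distinct parity patterns, and T_1, T_2, T_3 alone
# generate the 8-element chirality-preserving subgroup Z_2^3.

T = {
    0: (0, 0, 0, 0, 1, 1, 1),   # (id,id,id,id,sw,sw,sw)
    1: (1, 1, 1, 1, 0, 0, 0),   # (sw,sw,sw,sw,id,id,id)
    2: (0, 1, 0, 1, 1, 0, 1),   # (id,sw,id,sw,sw,id,sw)
    3: (0, 0, 1, 1, 0, 1, 1),   # (id,id,sw,sw,id,sw,sw)
}

def compose(a, b):
    return tuple(x ^ y for x, y in zip(a, b))

def span(generators):
    """All parity patterns reachable from identity via the generators."""
    out = set()
    for bits in product((0, 1), repeat=len(generators)):
        p = (0,) * 7
        for g, on in zip(generators, bits):
            if on:
                p = compose(p, g)
        out.add(p)
    return out

patterns = span([T[0], T[1], T[2], T[3]])   # the 16 equivalent algebras
same_chirality = span([T[1], T[2], T[3]])   # Z_2^3, 8 elements
```

Note also that no element of the span flips a single triplet, so a lone parity swap never yields another octonion algebra, and that $\mathcal{T}_{0}\mathcal{T}_{1}$ flips all seven triplets at once.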
$\mathcal{T}_{0}$ is an algebra isomorphism that transitions between opposite algebras of different chirality \citep{SchrayManogueOcts1994}. It is not an isomorphism in the sense that opposite algebras could be transformed into one another through transformation of the basis vectors in $\mathbb{R}^{8}$ alone \citep{Lockyer2008Octospace} (they cannot). The combined $\mathcal{T}_{0}\mathcal{T}_{1}$ inverts the sign of all seven nonreal octonion elements and corresponds to complex conjugation. The structure of the generalized Born rule on octonions (\ref{eq:defGenerlizedBornRuleDef}) is therefore given by the structure of the $\mathcal{T}_{n}$ from (\ref{eq:defOctoAutomorphisms}). For a select $n$, the pair $\left\{ \left(\mathrm{id}\right),\mathcal{T}_{n}\right\} $ forms the two element cyclic group $\mathbb{Z}_{2}$. The possible unique combinations of the $\left\{ \mathcal{T}_{1},\mathcal{T}_{2},\mathcal{T}_{3}\right\} $ form the set\[ \left\{ \mathcal{T}_{1},\,\mathcal{T}_{2},\,\mathcal{T}_{3},\,\mathcal{T}_{1}\mathcal{T}_{2},\,\mathcal{T}_{1}\mathcal{T}_{3},\,\mathcal{T}_{2}\mathcal{T}_{3},\,\mathcal{T}_{1}\mathcal{T}_{2}\mathcal{T}_{3}\right\} \] which transitions between octonions $\mathbb{O}\left[N\right]$ of the same chirality. It can be graphed in the Fano plane where the combination of any two automorphisms on a line yields the third one (figure \ref{fig:T1T2T3Fano-1}).% \begin{figure} \centering{}\caption{\label{fig:T1T2T3Fano-1}All unique automorphisms from repeat application of the $\left\{ \mathcal{T}_{1},\mathcal{T}_{2},\mathcal{T}_{3}\right\} $ can be graphed in the Fano plane (left), where the product of each two automorphisms on a line yields the third. 
Together with identity $\left(\mathrm{id}\right)$ this forms the group $\mathbb{Z}_{2}^{3}=\mathbb{Z}_{2}\times\mathbb{Z}_{2}\times\mathbb{Z}_{2}$ (right).} \includegraphics[clip,scale=0.7]{T1T2T3Fano} \end{figure} Together with the identity element, $\left(\mathrm{id}\right)$, this forms the group% \footnote{The structure of octonion algebra and its relation to $\mathbb{Z}_{2}^{3}$ and Hadamard transforms is also investigated in \citep{AlbuMajidQuasialg}.% } $\mathbb{Z}_{2}^{3}=\mathbb{Z}_{2}\times\mathbb{Z}_{2}\times\mathbb{Z}_{2}$. Together with the chirality-changing $\left\{ \left(\mathrm{id}\right),\mathcal{T}_{0}\right\} \cong\mathbb{Z}_{2}$, the automorphism group between all the $f\left[N\right]$ is then:\begin{align*} \mathrm{Aut}\left(\left\{ f_{\mathbb{O}}\left[N\right]\right\} \right) & \cong\mathbb{Z}_{2}^{4}.\end{align*} The $f\left[N\right]$ themselves are functions that map their real arguments into the 7-sphere $S^{7}$, the unit sphere in $\mathbb{R}^{8}$:\begin{eqnarray*} f\left[N\right] & : & \mathbb{R}\otimes\ldots\otimes\mathbb{R}\rightarrow S^{7}.\end{eqnarray*} Writing $\left[S^{7}\right]$ as the set of all functions on the 7-sphere, the symmetry $S^{D}$ of the generalized Born rule (\ref{eq:defGenerlizedBornRuleDef}) becomes:\begin{align*} S^{D} & \cong\left[S^{7}\right]\rtimes\mathbb{Z}_{2}^{4}.\end{align*} Such $S^{D}$ is not a group due to nonassociativity of the $\left[S^{7}\right]$. It may instead have Hopf (co)quasigroup structure as in \citep{KlimMajid2009HopfCoquasigroup}. If true, it can be equipped with a differential calculus and Fourier transformation \citep{Klim2010IntegralHopfQuasi}. This mathematical flexibility would make it a promising structure to investigate the solution set of the generalized Born rule, which in turn would allow one to identify its applicability in physics. 
\subsection{Next steps and outlook} The prototype nonassociative quantum theory in one dimension was developed as a self-consistent formalism under the speculation that it may be further developed into a working quantum theory for the description of nature. Time and space will need to be modeled, and the set of solutions needs to be understood much more deeply. Only then will it be possible to compare it with other models that are built from observed or speculated properties of nature. Active and passive transformations here use exponentiation between an imaginary vector of unit length and a real number. These are the central pieces for modeling physical fields and particles. As a morphism over a two dimensional vector space, exponentiation $\mathcircumflex\,:\,\mathbb{R}^{2}\otimes\mathbb{R}^{2}\rightarrow\mathbb{R}^{2}$ in the complex numbers $\mathbb{C}$ is noncommutative, nonassociative, nonalternative, a left inverse is generally different from a right inverse, and the morphism does not distribute over addition. One might speculate about building new kinds of algebras from requiring existence of an exponential function that preserves a certain geometric simplicity, rather than attempting to preserve algebraic rules from pairwise morphisms (commutativity, associativity, distributivity, and similar). Two such examples in the two dimensional plane have been brought forward \citep{ShusterKoeplWSpace,ShusterKoeplPQSpace}, and may be of interest for evaluating new kinds of transformations for applicability in modeling nature. In all, mathematical properties from nonassociative algebras, spinors, symmetries, and observed properties of nature at the smallest scales continue to offer enigmatic similarity, yet it is unclear whether nonassociativity in physics may ever overcome its current status of being an incidental curiosity: Is the known circumstantial evidence, with its suspicious parallels, the result of a yet undiscovered theory of consequence? Time (emergent or not) will tell. 
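The listed failures of exponentiation as a pairwise morphism are easy to exhibit numerically; real inputs already suffice, while over all of $\mathbb{C}$ one would additionally have to fix a principal branch. A quick illustration of ours:

```python
import math

# Quick numerical illustration (ours, not from the paper) that exponentiation,
# viewed as a binary operation, fails the usual pairwise algebraic rules.

commutative  = (2.0**3.0 == 3.0**2.0)                     # 8 vs 9
associative  = ((2.0**3.0)**2.0 == 2.0**(3.0**2.0))       # 64 vs 512
alternative  = ((2.0**2.0)**3.0 == 2.0**(2.0**3.0))       # 64 vs 256
distributive = (2.0**(1.0 + 2.0) == 2.0**1.0 + 2.0**2.0)  # 8 vs 6

# Left vs right inverse: solving x**3 = 8 gives x ~ 2, while solving
# 2**x = 8 gives x = 3 -- "division" from the left and right disagree.
inverses_agree = (8.0**(1.0 / 3.0) == math.log(8.0, 2.0))
```

All five flags come out `False`, matching the list of failures in the text.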
\section*{Acknowledgments} \thanks{Many thanks to the conference organizers of the {}``2nd Mile High Conference on Nonassociative Mathematics'' at Denver University, CO (2009), as well as the {}``Special Session on Quasigroups, Loops, and Nonassociative Division Algebras'' at the AMS Fall Central Section Meeting at Notre Dame University, South Bend, IN (2010), for allowing presentation of material from this paper. We are grateful for the NSF travel grant that allowed VD to participate in person in Denver. Our best thanks extend to Tevian Dray, Shahn Majid, John Huerta, and Geoffrey Dixon for open discussions, criticism, and thoughts that helped develop the material. A special thank you to the referee of the initial version of the paper, for going to extraordinary length and detail in the review.} \section*{Appendix A: Lorentz Lie algebra from nonassociative product} \label{sec:AppALorentzLieAlg}\setcounter{equation}{0}\renewcommand{\theequation}{A.\arabic{equation}} Section \ref{sub:LorentzLieAlg} shows the Lorentz Lie algebra,\begin{align} \left[M_{\mu\nu},M_{\rho\sigma}\right] & =\imath\left(\eta_{\nu\rho}M_{\mu\sigma}+\eta_{\mu\sigma}M_{\nu\rho}-\eta_{\mu\rho}M_{\nu\sigma}-\eta_{\nu\sigma}M_{\mu\rho}\right),\label{eq:LLA2}\\ & \qquad\textrm{with }\mu,\nu,\rho,\sigma\in\left\{ 0,1,2,3\right\} ,\nonumber \end{align} and states that this relation can be satisfied in the algebra of complex octonions $\mathbb{C}\otimes\mathbb{O}$ with basis $\left\{ 1,\imath\right\} \otimes\left\{ 1,i_{1},\ldots,i_{7}\right\} $ when defining \citep{Dzhu2008HiddenStructures,Koepl2009octoocto}:\begin{align*} R_{0} & :=\frac{i_{4}}{2}\left(1+\imath\right), & R_{j} & :=\frac{i_{\left(j+4\right)}}{2}\left(1-\imath\right) & & \left(j\in\left\{ 1,2,3\right\} \right),\\ M_{\mu\nu} & :=\frac{1}{2}\left[R_{\mu},R_{\nu}\right].\end{align*} This appendix calculates relation (\ref{eq:LLA2}) explicitly from the $R_{\mu}$ to provide a proof: Written in matrix form, the $M_{\mu\nu}$ are:\begin{align*} 
M_{\mu\nu}=\frac{1}{2}\left[R_{\mu},R_{\nu}\right] & =\frac{1}{2}\left(\begin{array}{rrrr} 0 & i_{1} & i_{2} & i_{3}\\ -i_{1} & 0 & \imath i_{3} & \,-\imath i_{2}\\ -i_{2} & \,-\imath i_{3} & 0 & \imath i_{1}\\ -i_{3} & \imath i_{2} & -\imath i_{1} & 0\end{array}\right).\end{align*} The possible combinations of indices $\left\{ \mu,\nu,\rho,\sigma\right\} $ from the Lorentz Lie algebra (\ref{eq:LLA2}) fall in the following four cases: \begin{itemize} \item The case $M_{\mu\nu}=M_{\rho\sigma}$ is trivially satisfied. \item If $\mu=\nu$ or $\rho=\sigma$ then either $M_{\mu\nu}=0$ or $M_{\rho\sigma}=0$. The four terms of the right-hand side of relation (\ref{eq:LLA2}) cancel each other out pairwise. \item If all four elements in $\left\{ \mu,\nu,\rho,\sigma\right\} $ are different then $M_{\mu\nu}$ must be $\pm\imath M_{\rho\sigma}$:\begin{align*} M_{01} & =\frac{1}{2}\left[R_{0},R_{1}\right]=\frac{i_{1}}{2}=-\imath\frac{1}{2}\left[R_{2},R_{3}\right]=-\imath M_{23},\\ M_{02} & =\frac{1}{2}\left[R_{0},R_{2}\right]=\frac{i_{2}}{2}=\imath\frac{1}{2}\left[R_{1},R_{3}\right]=\imath M_{13},\\ M_{03} & =\frac{1}{2}\left[R_{0},R_{3}\right]=\frac{i_{3}}{2}=-\imath\frac{1}{2}\left[R_{1},R_{2}\right]=-\imath M_{12}.\end{align*} This makes the commutator $\left[M_{\mu\nu},M_{\rho\sigma}\right]=0$ as required per (\ref{eq:LLA2}). 
\item The remaining case $\mu=\rho$ and $\nu\neq\sigma$ yields:\begin{align*} \left[M_{01},M_{02}\right] & =\frac{i_{3}}{2}=-\imath\eta_{00}M_{12}, & \left[M_{01},M_{03}\right] & =-\frac{i_{2}}{2}=-\imath\eta_{00}M_{13},\\ \left[M_{12},M_{13}\right] & =-\frac{i_{1}}{2}=-\imath\eta_{11}M_{23}, & \left[M_{10},M_{13}\right] & =\frac{\imath i_{3}}{2}=-\imath\eta_{11}M_{03},\\ \left[M_{20},M_{21}\right] & =\frac{\imath i_{1}}{2}=-\imath\eta_{22}M_{01}, & \left[M_{20},M_{23}\right] & =\frac{\imath i_{3}}{2}=-\imath\eta_{22}M_{03},\\ \left[M_{30},M_{31}\right] & =\frac{\imath i_{1}}{2}=-\imath\eta_{33}M_{01}, & \left[M_{30},M_{32}\right] & =\frac{\imath i_{2}}{2}=-\imath\eta_{33}M_{02}.\end{align*} All other index combinations are obtained from either switching the arguments of the commutator bracket, or from switching the indices of both $M$ terms. From the Minkowski tensor $\eta$ in (\ref{eq:LLA2}) there will only be one nonzero term on the right-hand side. These are exactly the terms obtained from the complex octonion algebra of the $R_{\mu}$ above. $\square$\end{itemize}
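The case-by-case proof above can also be cross-checked numerically. The sketch below (ours, independent of the paper's calculation) builds the $\mathbb{O}\left[0\right]$ multiplication table from the triplet set given in the text, forms the complexified $R_{\mu}$ and $M_{\mu\nu}$, and scans all index combinations of the Lorentz Lie algebra relation:

```python
# Numerical cross-check (ours, not part of the paper) that the M_munu built
# from the complex octonions R_mu close the Lorentz Lie algebra.  Octonion
# products use the triplet set t_O[0] = {123,761,572,653,145,246,347}.

TRIPLETS = [(1, 2, 3), (7, 6, 1), (5, 7, 2), (6, 5, 3),
            (1, 4, 5), (2, 4, 6), (3, 4, 7)]
TABLE = {}
for a, b, c in TRIPLETS:
    for x, y, z in ((a, b, c), (b, c, a), (c, a, b)):
        TABLE[(x, y)] = (+1, z)   # i_x * i_y = +i_z
        TABLE[(y, x)] = (-1, z)   # i_y * i_x = -i_z

def omul(x, y):
    """Product of complexified octonions, stored as 8 complex coefficients
    (real part first, then i_1..i_7); the complex unit commutes with all."""
    z = [0j] * 8
    z[0] = x[0]*y[0] - sum(x[m]*y[m] for m in range(1, 8))
    for m in range(1, 8):
        z[m] += x[0]*y[m] + x[m]*y[0]
    for (a, b), (s, c) in TABLE.items():
        z[c] += s * x[a]*y[b]
    return z

def comm(x, y):
    xy, yx = omul(x, y), omul(y, x)
    return [p - q for p, q in zip(xy, yx)]

def scale(l, x):
    return [l*v for v in x]

def unit(m):
    e = [0j] * 8
    e[m] = 1
    return e

# R_0 = i4 (1+i)/2,  R_j = i_{j+4} (1-i)/2 for j = 1,2,3
R = [scale(0.5*(1+1j), unit(4))] + [scale(0.5*(1-1j), unit(j+4)) for j in (1, 2, 3)]
M = [[scale(0.5, comm(R[m], R[n])) for n in range(4)] for m in range(4)]
eta = [1, -1, -1, -1]   # Minkowski metric, diag(+,-,-,-)

def lorentz_residual():
    """Max deviation from the Lorentz Lie algebra relation, all indices."""
    worst = 0.0
    for m in range(4):
        for n in range(4):
            for r in range(4):
                for s in range(4):
                    lhs = comm(M[m][n], M[r][s])
                    rhs = scale(1j, [
                        eta[n]*(n == r)*a + eta[m]*(m == s)*b
                        - eta[m]*(m == r)*c - eta[n]*(n == s)*d
                        for a, b, c, d in zip(M[m][s], M[n][r],
                                              M[n][s], M[m][r])])
                    worst = max(worst,
                                max(abs(p - q) for p, q in zip(lhs, rhs)))
    return worst
```

The residual vanishes to machine precision, and the same multiplication table also exhibits the nonassociativity of products outside a single triplet.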
\section{Introduction} The top quark was discovered in 1995 at the Tevatron proton-antiproton collider at Fermilab by the CDF and D0 collaborations \cite{Abe,Abachi}. The most intriguing aspect of the top quark is its mass. It is approximately 35 times the mass of the next most massive fermion, the b quark, and it is very close to the electroweak scale. Because of its mass, the top quark gives the largest contribution to loop corrections in the W boson propagator. Within the Standard Model (SM), the correlation between the top quark mass (${\rm M_{t}}$) and the W boson mass induced by these corrections allows for setting limits on the mass of the yet undiscovered Higgs boson, and favors a relatively light Higgs. According to the SM, at the Tevatron's 1.96 TeV center-of-mass energy top quarks are predominantly produced in pairs, by ${\rm q\overline{q}}$ annihilation in $\sim$85\% of the cases and by gluon-gluon fusion in the remaining $\sim$15\% \cite{Cacciari}. Due to its very short lifetime, which in the SM is expected to be about 10$^{-25}$ s, the top quark decays before hadronizing. In the SM the top quark decays into a W boson and a b quark in almost 100\% of the cases. The W boson can decay either into quarks as a ${\rm q\overline{q}^{\prime}}$ pair which subsequently hadronize or into a charged lepton-neutrino pair. This allows for a classification of the ${\rm t\overline{t}}$ candidate events into three non-overlapping samples, or decay channels, which are characterized by different final-state signatures, branching ratios (BRs), and background contaminations. The {\it all-hadronic} sample, where both W bosons decay hadronically, is characterized by six or more jets in the event (about 55\% of the ${\rm t\overline{t}}$ events). 
The {\it lepton+jets} sample, where one W decays leptonically and the other hadronically, is characterized by one electron or muon, four or more jets, and large missing transverse energy $\not\!{\rm E}_{\rm T}$ in the event (about 38\% of the ${\rm t\overline{t}}$ events). The {\it dilepton} sample, where both W bosons decay leptonically, is characterized by two leptons, electrons or muons, two or more jets, and large $\not\!{\rm E}_{\rm T}$ in the event (about 7\% of the ${\rm t\overline{t}}$ events). The lepton+jets sample has the best compromise between statistics and background contamination. The dilepton sample is the cleanest at the cost of having the poorest statistics. The background contamination in all three samples can be greatly suppressed by ``tagging'' the jets associated with the b quarks. The most common tagging technique is based on the displacement of the reconstructed jet vertex from the event's primary vertex due to the relatively long life time of the b-flavored hadrons. \section{Top quark mass measurements} The top quark mass is a free parameter in the SM which can be directly measured at the Tevatron. Top mass measurements have been performed in each channel using a variety of methods. The best result has been achieved in the lepton+jets channel, due to its relatively high BR and moderate background. Recently a boost has been given to the mass accuracy by an innovative technique which exploits the hadronic products of the W decay in order to constrain the largest source of systematic uncertainty: the jet energy scale (JES). In this technique the mass of the two jets from the W decay is required to match the W mass, allowing for the so called ``JES in situ'' calibration. Thanks to this technique analyses in the all-hadronic sample have also achieved a better sensitivity than those in the dilepton channel. 
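The channel fractions quoted above follow from naive counting of W decay modes. The sketch below (ours, not from the paper) counts each W as decaying democratically into nine fermion doublets, keeps only e and $\mu$ as "leptons", and drops $\tau$-containing events, which roughly reproduces the quoted 55\%/38\%/7\% split; phase-space and QCD corrections are ignored.

```python
from fractions import Fraction

# Naive branching-ratio counting (our sketch): each W decays into one of
# nine equally weighted doublets -- six quark color/flavor combinations
# and three lepton flavors.  Counting only e and mu as "leptons" and
# excluding tau-containing events from the total roughly reproduces the
# quoted channel fractions.

had = Fraction(6, 9)      # BR(W -> qq'), naive counting
lep = Fraction(2, 9)      # BR(W -> e nu) + BR(W -> mu nu)

total = (had + lep)**2    # ttbar events with no tau in either W decay
all_hadronic = had**2 / total         # 9/16 ~ 56%
lepton_jets  = 2*lep*had / total      # 3/8  ~ 38%
dilepton     = lep**2 / total         # 1/16 ~  6%
```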
Complementary to this technique, new measurement methods have been recently applied which make use of only lepton or track-based information in the event and therefore are free of the JES systematic uncertainty. Two general methods have been established to measure the top quark mass at the Tevatron. In the {\bf Template Method (TM)} distributions, or ``templates'', of variables strongly correlated with the top mass (the most typical example is the event-by-event reconstructed top mass itself) are reconstructed on signal and background simulated events. In the {\bf Matrix Element Method (ME)} an event-by-event probability for signal and background is computed as a function of the top mass (for the signal only) and of the reconstructed observables. The ME method exploits all of the information in the event by making use of a leading order ${\rm t\overline{t}}$ production matrix element, convoluted with parton distribution functions which model the structure of the colliding protons and transfer functions which are needed to step back from the reconstructed jets to the hadronizing partons. Both methods use a likelihood to compare data with the simulated events and extract the top mass. This likelihood is defined using a combination of signal and background templates (TM) or probabilities (ME), weighted according to the expected fraction of signal events in the data. In the next subsections the top quark mass measurements reaching the highest sensitivity in CDF are described sample by sample. For brevity, not all of the measurements are reported in this paper. \subsection{Dilepton} The dilepton channel is characterized by a final-state signature of two high-P$_{\rm T}$ charged leptons (electrons or muons), two high-E$_{\rm T}$ b-jets, and large $\not\!{\rm E}_{\rm T}$ from the neutrinos. The largest amount of background comes from diboson events, Drell-Yan events, and W+jets events where one jet fakes a charged-lepton signature. 
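The template-method likelihood construction described above can be illustrated with a toy fit. The sketch below is ours, not CDF's analysis code, and the resolution, background shape, and normalization are invented for illustration only: templates of a mass-sensitive observable are built as a function of the assumed top mass, and a binned Poisson likelihood picks the mass whose template best matches the data.

```python
import math

# Toy template-method fit (ours, not CDF's code).  All numbers invented.

def gauss(x, mu, sigma=12.0):
    return math.exp(-0.5*((x - mu)/sigma)**2) / (sigma*math.sqrt(2*math.pi))

BINS = [100.0 + 5.0*i for i in range(41)]   # reconstructed-mass bin centers (GeV)

def template(mtop, f_sig=0.8):
    """Signal peaks near the assumed mtop; background is a broad fixed shape."""
    return [f_sig*gauss(x, mtop) + (1 - f_sig)*gauss(x, 90.0, 40.0) for x in BINS]

def nll(data, mtop):
    """Binned Poisson negative log-likelihood of the data given a template."""
    pred = [1000.0*p + 1e-12 for p in template(mtop)]
    return sum(p - d*math.log(p) for d, p in zip(data, pred))

# Asimov pseudo-data generated at a "true" mass of 172.5 GeV:
data = [1000.0*p for p in template(172.5)]
scan = [160.0 + 0.5*i for i in range(51)]       # mass hypotheses, 160-185 GeV
best = min(scan, key=lambda m: nll(data, m))    # recovers 172.5
```

An ME-style fit differs mainly in replacing the binned templates by a per-event probability built from the matrix element, but the final likelihood scan over the mass hypothesis is the same in spirit.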
The signal-to-background (S/B) ratio is relatively high ($\sim$2 without b-tagging). The greatest challenge in this channel is the impossibility of ``in situ'' JES calibration. In addition, the kinematics is under-constrained due to the undetected neutrinos. The ME method deals with this issue by integrating over neutrino momenta while computing the event probability, whereas the top mass TM needs some assumptions to constrain the kinematics and reconstruct the event. \begin{figure}[h] \centering \includegraphics[width=80mm]{figure1.ps} \caption{The likelihood fit of the neutrino $\phi$ weighting method which determines the top quark mass from dilepton events.} \label{dilepton} \end{figure} The most accurate CDF measurement in this channel is based on an ME method \cite{dilME}. It exploits an evolutionary neural network (NN) optimized directly on the mass resolution rather than some intermediate or approximate figure of merit, such as the S/B ratio. The use of an NN improves the mass uncertainty by 20\% compared to the previous analysis using the same method \cite{dilMEold}. This measurement yields ${\rm M_{t}=[171.2\pm 2.7(stat.)\pm 2.9(syst.)]}$ GeV/c$^{2}$ for an integrated luminosity of 2.9/fb. The TM is also used on the basis of an event-by-event top mass reconstruction \cite{dilTM}. The azimuthal angles of the neutrinos are integrated in order to constrain the kinematics, hence the method is named ``neutrino $\phi$ weighting''. The likelihood fit, shown in Figure~\ref{dilepton}, yields ${\rm M_{t}=[165.1^{+3.3}_{-3.2}(stat.)\pm 3.1(syst.)]}$ GeV/c$^{2}$ for an integrated luminosity of 2.8/fb. \subsection{Lepton+jets} The lepton+jets channel is characterized by a signature of a high-P$_{\rm T}$ electron or muon, four high-E$_{\rm T}$ jets, and high $\not\!{\rm E}_{\rm T}$. 
The background is mainly composed of W+jets events and multi-jet QCD events in which a jet is faking the signature of a charged lepton and $\not\!{\rm E}_{\rm T}$ comes from calorimeter mis-measurements. In order to enhance the S/B ratio from $\sim$0.5 to $\sim$4 and decrease the possible jet-to-parton assignments from 12 to 6 the presence of at least one b-tagged jet is usually required, with an efficiency of $\sim$55\%. \begin{figure}[h] \centering \includegraphics[width=80mm]{figure2.ps} \caption{The fit of the L$_{\rm 2d}$ signal and background templates which determines the top quark mass from lepton+jets events.} \label{Lxy} \end{figure} The most accurate CDF analysis applies the ME method with ``in situ'' JES calibration \cite{ljME}. The method uses angular and energetic transfer functions while computing the event probability. The measurement yields ${\rm M_{t}=[172.1\pm 1.1(stat.+JES)\pm 1.1(syst.)]}$ GeV/c$^{2}$ for an integrated luminosity of 3.2/fb. The b-JES remains the largest source of systematic uncertainty. \begin{figure}[h] \centering \includegraphics[width=80mm]{figure3.ps} \caption{The likelihood fit of the lepton P$_{\rm T}$ signal and background distributions which determines the top quark mass from lepton+jets events.} \label{leptonPt} \end{figure} Two novel TM techniques have been applied to CDF data making no direct use of jets for measuring the top quark mass. Both make use of kinematic variables sensitive to the top mass but insensitive to the JES. One makes use of the transverse decay length L$_{\rm xy}$ or L$_{\rm 2d}$ of the b-tagged jets together with the transverse momentum P$_{\rm T}$ of the leptons and has been applied to 1.9/fb of lepton+jets data, yielding a result of ${\rm M_{t}=[175.3\pm 6.2(stat.)\pm 3.0(syst.)]}$ GeV/c$^{2}$ \cite{L2d}. Figure~\ref{Lxy} shows the fit of the L$_{\rm 2d}$ templates to the data. 
The other makes use of the transverse momentum P$_{\rm T}$ of the leptons only and has been applied to 2.8/fb of lepton+jets and dilepton data yielding a combined result of ${\rm M_{t}=[172.8\pm 7.2(stat.)\pm 2.3(syst.)]}$ GeV/c$^{2}$ \cite{leptonPt}. Figure~\ref{leptonPt} shows the fit of the lepton P$_{\rm T}$ distribution to the data in the lepton+jets channel only. Both techniques are fast and accurate candidates for the LHC, where the statistics will not limit the precision of the measurements. \subsection{All-hadronic} The all-hadronic channel is characterized by a signature of six high-E$_{\rm T}$ jets. Current analyses accept events with six to eight jets in the final state in order to include signal events with additional jets from initial or final state gluon radiation. At least one b-tagged jet is required. The all-hadronic sample is challenging because of the huge amount of QCD multi-jet background. For this reason NNs are needed to optimize the event selection in order to drastically enhance the S/B ratio from $\sim$1/400 up to $\sim$1/4. \begin{figure}[h] \centering \includegraphics[width=80mm]{figure4.ps} \caption{The fit of the top mass signal and background templates which determines the top quark mass from all-hadronic events.} \label{alljets} \end{figure} So far only CDF has measured the top quark mass from this sample. The most sensitive analysis in this channel is a 2-dimensional TM \cite{alljets}. Variables used to build templates are the event-by-event reconstructed top mass and the JES, which allows for JES ``in situ'' calibration. This measurement uses an NN for event selection. This NN was recently upgraded to also include variables related to the jet shape for a better separation between gluon jets and light-quark jets from ${\rm t\overline{t}}$ decays. 
Figure~\ref{alljets} shows the fit of the top mass signal and background templates which yields a result of ${\rm M_{t}=[174.8\pm 2.4(stat.+JES)^{+1.2}_{-1.0}(syst.)]}$ GeV/c$^{2}$ with an integrated luminosity of 2.9/fb. \section{Tevatron combination, future perspective and electroweak implications} With the increasing integrated luminosity available at the Tevatron the systematic uncertainty has started dominating over the statistical uncertainty in the top quark mass measurements. The JES uncertainty remains the largest one among the various types of systematics. This is still the case in the lepton+jets and all-hadronic channels, despite the ``in situ'' JES calibration. New analyses which follow a different approach to measure the top quark mass are now emerging in CDF and D0. Such are the b-jet transverse decay length and the lepton-P$_{\rm T}$ TM analyses described above. Even if these measurements do not reach a competitive statistical sensitivity, they are reported here because they are sensitive to different systematic uncertainties compared with the analyses directly involving jets. \begin{figure}[h] \centering \includegraphics[width=80mm]{figure5.ps} \caption{Summary of the top quark mass measurements included in the Tevatron combination, along with the combined result.} \label{combo} \end{figure} Results from most of the analyses discussed above have been used to update the Tevatron top quark mass combination. Figure~\ref{combo} summarizes the measurements included in the combination along with the Tevatron combined top quark mass of ${\rm M_{t}=[173.1\pm 0.6(stat.)\pm 1.1(syst.)]}$ GeV/c$^{2}$, as of March 2009, which has a relative precision of 0.75\% \cite{TevMt}. CDF by itself has a top quark mass combination yielding ${\rm M_{t}=[172.6\pm 0.9(stat.)\pm 1.2(syst.)]}$ GeV/c$^{2}$ \cite{CDFMt}. The precision achieved is below 1\%, already better than the Run II goal. 
\begin{figure}[h] \centering \includegraphics[width=80mm]{figure6.ps} \caption{Projection of the CDF top quark mass uncertainty as a function of the Tevatron integrated luminosity.} \label{extra} \end{figure} The future perspective of CDF for the precision of the top quark mass measurements is shown in Figure~\ref{extra}. There the top mass total uncertainty (statistical plus systematic added in quadrature) is shown as a function of the Tevatron integrated luminosity, with the points representing the CDF top mass combined results which are obtained so far. The red dashed line above the last point represents the Run II goal of 1\% relative total uncertainty. The continuous blue line beyond the last point is an extrapolation of the total uncertainty assuming that the statistical part will scale with the luminosity and the systematic part will remain constant, i.e. if no improvements are made in the measurement methods. The blue dotted-dashed line beyond the last point is an extrapolation of the total uncertainty assuming that both the statistical and systematic parts will scale with the luminosity. This in turn assumes improvements such that only data driven sources of uncertainty (e.g. JES calibrations or fakes background estimates) will dominate the systematic uncertainty. \begin{figure}[h] \centering \includegraphics[width=80mm]{figure7.ps} \caption{1$\sigma$-level expectation for the SM Higgs boson mass derived from the measurements of the W boson and top quark masses.} \label{ewkfit} \end{figure} The importance of the high precision of the top quark mass measurements achieved at the Tevatron for the localization of the SM Higgs boson mass, as discussed in the Introduction, is shown in Figure~\ref{ewkfit}. There the green band represents regions in the W mass vs. top mass plane corresponding to different values of the Higgs mass. 
The ellipsoids represent the expectation limits set on that plane by the measured W and top masses at the 1$\sigma$ confidence level. The expectation arising from the latest Tevatron top mass measurement and the combined Tevatron and LEP2 W mass measurements \cite{Wmass}, which is shown by the ellipsoid in continuous blue line, points to a low Higgs mass, as mentioned in the Introduction. The improvement in the localization of the Higgs mass thanks to the Tevatron top mass precision is shown by comparing with the old expectation (red dashed line ellipsoid), for which the top mass was not yet measured but was instead constrained by a global electroweak fit. \begin{acknowledgments} The author wishes to thank the CDF top-quark working group conveners for their help in preparing this presentation and the organizers of the ``DPF 2009 Conference'' for their hard work to set up the conference. \end{acknowledgments} \bigskip
\section{Introduction} The B-factories and the Tevatron have been remarkably successful in producing a wealth of data needed to constrain the flavor sector of the Standard Model. Inconsistencies between independent determinations of the elements of the Cabibbo-Kobayashi-Maskawa (CKM) matrix~\cite{Cabibbo:1963yz,Kobayashi:1973fv} and its CP-violating phase would provide evidence for new physics. The search for such inconsistencies has received a great deal of attention because generic new physics scenarios lead to additional CP-violating phases beyond the single one of the Standard Model. Physicists generally believe that the Standard Model cannot be the whole story, despite its great experimental successes, because it does not describe the large matter-antimatter asymmetry of the universe, nor can it account for dark matter. Although there is reasonably good agreement with the Standard Model prediction of a single CP-violating phase, as encoded in the CKM matrix, some tensions have been pointed out recently~\cite{Lunghi:2007ak,Lunghi:2008aa,Buras:2008nn,Buras:2009pj,Lunghi:2009sm}. Most of the constraints on the CKM matrix are limited by theoretical uncertainties in the hadronic matrix elements that encode the nonperturbative QCD contributions to weak processes. These hadronic matrix elements are calculated using numerical lattice QCD, and great progress has been made in reducing the errors in the last few years.\footnote{\textcolor{black}{For a pedagogical review of lattice QCD methods see Ref.~\cite{Bazavov:2009bb}}.} Three flavors of light quarks ($u$, $d$, and $s$) are now being included in the vacuum polarization, so that the quenched approximation has become a thing of the past for most quantities of interest. 
Lattice calculations of many quantities have now been done by various groups with all sources of systematic error under control.\footnote{\textcolor{black}{See, for example, Refs.~\cite{Gamiz:2008bd,Lellouch:2009fg}, for the status of lattice calculations of kaon and heavy-light physics.}} In order to maximize the impact of lattice input on phenomenology, it is necessary to average the different results. This is not entirely straightforward, since correlations between various lattice errors must be taken into account. For example, statistical errors in quantities computed on the same gauge ensembles will be highly correlated. Systematic errors can also be correlated, so familiarity with the lattice methods used in each calculation is needed in order to understand and account for these correlations in the averaging procedure. In this work we present lattice QCD averages of the hadronic weak matrix elements that enter the global fit of the CKM unitarity triangle. We provide results for the neutral kaon mixing parameter ($B_K$), for the neutral $B$-meson decay constants and mixing matrix elements, for the inclusive determinations of the CKM matrix elements $|V_{ub}|$ and $|V_{cb}|$, for the Standard Model correction to $\epsilon_K$ ($\kappa_\epsilon$), and for the kaon decay constant ($f_K$). Although we do not know the exact correlations between different lattice calculations of the same quantity, we still attempt to account for correlations in a reasonable manner. We do so by assuming that, whenever a source of error is at all correlated between two lattice calculations, the degree-of-correlation is 100\%. This assumption is conservative and will lead to somewhat of an overestimate in the total error of the lattice averages; nevertheless, it is the most systematic treatment possible without knowledge of the correlation matrices between the various calculations, which do not exist. 
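The conservative correlation treatment just described amounts to a weighted average built from a covariance matrix whose off-diagonal entries collect the shared systematic sources. A possible sketch (our function and variable names, not the paper's code):

```python
import numpy as np

# Sketch (ours) of a correlated weighted average in which each named
# systematic source is treated as 100% correlated between results, while
# statistical errors are treated as uncorrelated.

def conservative_average(values, stat_errs, syst_errs):
    """syst_errs[i][k] is result i's error from systematic source k."""
    v = np.asarray(values, dtype=float)
    S = np.asarray(syst_errs, dtype=float)     # shape (n_results, n_sources)
    cov = np.diag(np.asarray(stat_errs, dtype=float)**2) + S @ S.T
    w = np.linalg.solve(cov, np.ones_like(v))  # weights proportional to C^{-1} 1
    mean = float(w @ v / w.sum())
    error = float(w.sum() ** -0.5)
    return mean, error

# Invented illustration: two results sharing one fully correlated systematic.
# The shared source does not average down, so the combined error keeps a
# floor at the size of that systematic.
mean, err = conservative_average([0.72, 0.74],
                                 stat_errs=[0.02, 0.03],
                                 syst_errs=[[0.03], [0.03]])
```

One known caveat of covariance-based averages of this kind: when the correlated systematics of the inputs are very unequal, the weights can turn negative (Peelle's pertinent puzzle), so the output should be sanity-checked against the inputs.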
These averages are intended for use in the global CKM unitarity triangle fit, as well as other phenomenological analyses.\footnote{\textcolor{black}{There are some additional correlations between different lattice quantities that enter the unitarity triangle fit, since collaborations often calculate more than one quantity using the same gauge configurations. Although we do not include these effects in our analysis, such correlations are reduced in the procedure of averaging several lattice results for the same quantity that have been computed with different configurations. Therefore these correlations should become less significant over time as more independent $N_f = 2+1$ flavor lattice results become available to include in the averages.}} Our emphasis in this work is different from that of the CKMfitter and UTfit collaborations~\cite{CKMfitter,UTFit}. Their focus is primarily on the statistical techniques used to extract information from a given set of inputs (\textcolor{black}{see Ref.~\cite{Battaglia:2003in} for a quantitative comparison of the Bayesian versus frequentist approaches}), whereas our focus is on the lattice QCD inputs themselves. Nevertheless, it is useful to point out some important differences between the manner in which they compute their lattice averages and the treatment that we use in this paper. Both CKMfitter and UTfit combine two- and three-flavor results in their lattice averages~\cite{CKMfitter_lat,UTFit_lat}, a practice we do not follow. This is because, although the systematic uncertainty in a particular quantity due to omitting the dynamical strange quark may in fact be small, this is impossible to quantify until the equivalent calculation has been done with both two and three flavors. We therefore only consider $N_f = 2+1$ lattice results in our averages.
CKMfitter assigns the smallest systematic error of any of the individual lattice calculations to the average~\cite{CKMfitter_lat} instead of combining the systematic uncertainties between different lattice calculations. This treatment does not take full advantage of current lattice QCD results, since it prevents the average value from having a smaller systematic uncertainty than any of the individual values. Thus, although this treatment is conservative, it may obscure the presence of new physics by being overly cautious. The method for obtaining the central values and errors used by UTfit is not fully spelled out in Ref.~\cite{UTFit_lat}. In this work we take all quoted lattice errors at face value when averaging results, but we only include results with complete systematic error budgets that have been sufficiently documented in either a publication or conference proceeding. This paper is organized as follows. In Sec.~\ref{sec:LatticeInputs} we average lattice QCD results for hadronic weak matrix elements that enter the standard global fit of the CKM unitarity triangle. We present the individual results used in the averages, briefly describe the methods used in each lattice calculation, and spell out which errors we consider correlated between the calculations. Next, for completeness, we summarize the other inputs used in our unitarity triangle fit in Sec.~\ref{sec:OtherInputs}. We then illustrate the impact of the new lattice averages on the unitarity triangle fit in Sec.~\ref{sec:SMpredictions}. We observe a (2--3)$\sigma$ tension in the fit, depending upon whether we use the inclusive or exclusive determination of $|V_{cb}|$. In Sec.~\ref{sec:NewPhys} we interpret the tension as a sign of new physics in either neutral kaon or $B$-meson mixing. We find that the current data prefer the scenario in which the new physics is in the kaon sector. Finally, we summarize our results and conclude in Sec.~\ref{sec:Conc}.
\textcolor{black}{Summary plots of all of the lattice averages are provided in Appendix~\ref{sec:App}.} \section{Lattice QCD inputs to the fit of the unitarity triangle} \label{sec:LatticeInputs} Many of the constraints on the CKM unitarity triangle rely upon knowledge of hadronic matrix elements that parameterize the nonperturbative QCD contributions to weak decays and mixing. In the past, these hadronic weak matrix elements have often been difficult to compute precisely, and have enabled only mild ($10-15\%$ level) constraints on the apex of the CKM unitarity triangle that have been insufficient to probe the presence of new physics. Recent advances in computers, algorithms, and actions, however, now allow reliable lattice QCD calculations of hadronic weak matrix elements with all sources of systematic error under control. State-of-the-art lattice computations now regularly include the effects of the dynamical up, down, and strange quarks. They also typically simulate at pion masses below 300 MeV, and sometimes even below 200 MeV, in order to control the extrapolation to the physical pion mass. For many hadronic weak matrix elements of interest, there are now at least two reliable lattice calculations. Just as with experimental measurements, some of the errors are correlated among the lattice QCD calculations, and such correlations must be taken into account when averaging lattice inputs to be used in the CKM unitarity triangle analysis. In this section, we average the latest lattice QCD results and provide values that should be used in current fits of the CKM unitarity triangle. In the averages, we only include results from simulations with three dynamical quark flavors, and with associated proceedings or publications that include comprehensive error budgets. Fortunately, for all quantities of interest, there is at least one calculation that satisfies these criteria.
In taking the averages we assume that all errors are normally distributed and follow the prescription outlined in Ref.~\cite{Schmelling:1994pz} to take the correlations into account. The degree of correlation induced by a given source of uncertainty onto the errors of different lattice calculations is extremely difficult to estimate. In order to be conservative, whenever there are arguments that suggest some correlation between errors in distinct lattice results, we take it to be 100\%. Finally, we adopt the PDG prescription to combine several measurements whose spread is wider than what is expected from the quoted errors: the error on the average is increased by the square root of the minimum chi-square per degree of freedom (constructed following Ref.~\cite{Schmelling:1994pz}). \subsection{$B_K$} \label{sec:BK} The experimental measurement of indirect CP-violation in the kaon sector, $\varepsilon_K$, when combined with a nonperturbative determination of the neutral kaon mixing parameter, $B_K$, places a constraint on the apex of the unitarity triangle. There have been three realistic lattice QCD calculations of $B_K$ since 2006; the results are summarized in Table~\ref{tab:LQCD_BK}. \begin{table}[t] \begin{center} \begin{tabular}{lccc} \hline \hline & $\widehat B_K$ & $\;\;\;\;( \delta \widehat B_K)_{\rm stat} $ & $\;\;\;\;( \delta \widehat B_K)_{\rm syst}$ \\ HPQCD/UKQCD '06~\cite{Gamiz:2006sq} & 0.83 & 0.02 & 0.18 \\ RBC/UKQCD '07~\cite{Antonio:2007pb} & 0.720 & 0.013 & 0.037 \\ Aubin, Laiho \& Van de Water '09~\cite{Aubin:2009jh} & 0.724 & 0.008 & 0.028 \\ Average & $0.725 \pm 0.026$ & & \\ \hline \hline \end{tabular} \caption{Unquenched lattice QCD determinations of the neutral kaon mixing parameter $\widehat{B}_K$.
\textcolor{black}{A plot showing the three $N_f = 2+1$ results and their average is given in Fig.~\ref{fig:BK}.}\label{tab:LQCD_BK}} \end{center} \end{table} The first, by the HPQCD and UKQCD Collaborations~\cite{Gamiz:2006sq}, uses the ``2+1'' flavor asqtad-improved staggered gauge configurations\textcolor{black}{~\cite{Susskind:1976jm,Lepage:1998vj,Orginos:1999cr}} generated by the MILC Collaboration~\cite{Bernard:2001av,Bazavov:2009bb}, which include the effects of two degenerate light quarks and one heavier quark with a mass close to that of the physical strange quark. The calculation also uses staggered valence quarks in the four-fermion operator used to compute $B_K$. The result for the renormalization-group invariant quantity $\widehat{B}_K$ has a $\sim 22\%$ total uncertainty, which is primarily due to the omission of operators specific to staggered fermions that break flavor symmetry in the lattice-to-continuum operator matching calculation. Because the other determinations of $B_K$ have much smaller total errors, this result has little impact on the weighted average. The second calculation by the RBC and UKQCD Collaborations~\cite{Antonio:2007pb} uses 2+1 flavor domain-wall gauge configurations, as well as domain-wall valence quarks in the four-fermion operator used to compute $B_K$. Because domain-wall quarks have an approximate chiral symmetry~\cite{Kaplan:1992bt,Shamir:1993zy}, it is easier to calculate the renormalization factor needed to determine $B_K$ in the continuum and in the $\overline{\textrm{MS}}$ scheme for domain-wall quarks than for staggered quarks. Therefore the RBC and UKQCD Collaborations compute the renormalization factor in the RI-MOM scheme nonperturbatively using the method of Rome-Southampton~\cite{Martinelli:1994ty}, and convert it to the $\overline{\textrm{MS}}$ scheme using 1-loop continuum perturbation theory~\cite{Ciuchini:1997bw}. They estimate that the truncation error due to the perturbative matching is small, $\sim 2\%$.
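The averaging prescription described at the start of this section can be sketched numerically. The following Python fragment (an illustrative sketch only, not the analysis code) applies a simple error-weighted average with the PDG scale factor to the three $\widehat{B}_K$ results of Table~\ref{tab:LQCD_BK}, with statistical and systematic errors added in quadrature; correlations are deliberately omitted here, whereas the average in the text additionally propagates the 100\%-correlated error components:

```python
import math

def weighted_average(values, errors):
    """Error-weighted average with the PDG scale factor.

    Sketch only: errors are taken as uncorrelated here; the averages
    in the text also propagate 100%-correlated components through the
    full covariance matrix, following Schmelling's prescription."""
    weights = [1.0 / e ** 2 for e in errors]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    err = math.sqrt(1.0 / wsum)
    # PDG prescription: inflate the error when the spread of the
    # inputs is wider than expected from the quoted uncertainties.
    dof = len(values) - 1
    chi2 = sum(w * (v - mean) ** 2 for w, v in zip(weights, values))
    if dof > 0 and chi2 / dof > 1.0:
        err *= math.sqrt(chi2 / dof)
    return mean, err

# The three B_K results, stat and syst errors added in quadrature:
values = [0.83, 0.720, 0.724]
errors = [math.hypot(0.02, 0.18),
          math.hypot(0.013, 0.037),
          math.hypot(0.008, 0.028)]
mean, err = weighted_average(values, errors)
# ~0.724 +/- 0.023 when correlations are ignored; the quoted
# average, 0.725 +/- 0.026, additionally accounts for them.
```

The scale factor is inactive in this example, since the spread of the three results is consistent with their quoted errors ($\chi^2/{\rm d.o.f.} < 1$).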
Although the RBC/UKQCD result is obtained from data at only a single lattice spacing of $a \approx$ 0.11 fm, they include a reasonable $\sim 4\%$ estimate of discretization errors in their total uncertainty, which is based on the scaling behavior of quenched data with the same gluon and valence quark action. The total error in the RBC/UKQCD calculation of $\widehat{B}_K$ is $\sim 5\%$. Recently, Aubin, Laiho, and Van de Water (ALV) calculated $B_K$ using domain-wall valence quarks on the MILC staggered gauge configurations~\cite{Aubin:2009jh}. The use of domain-wall valence quarks allows them to compute the renormalization factor nonperturbatively in the RI-MOM scheme just like RBC/UKQCD. Their result, however, includes a more conservative estimate of the truncation error, $\sim 3\%$, which is based on a comparison with an independent calculation of the renormalization factor using lattice perturbation theory. Their calculation also improves upon the work of RBC and UKQCD by analyzing data at two lattice spacings, so that they can extrapolate $B_K$ to the continuum limit. They obtain a total error in $\widehat{B}_K$ of $\sim 4\%$. In order to average the three determinations of $\widehat{B}_K$, we must determine which sources of error are correlated between the various calculations. The HPQCD/UKQCD and ALV calculations both use the staggered gauge configurations generated by the MILC Collaboration. Aubin, Laiho, and Van de Water, however, use nine independent ensembles of gauge configurations for their analysis, whereas HPQCD/UKQCD only use two ensembles. Thus the overlap in the two sets of data is quite small, and the statistical errors of the ALV result are sufficiently independent of the statistical errors of the HPQCD/UKQCD result that we treat them as uncorrelated in the average. 
The RBC/UKQCD and ALV calculations both use the same 1-loop continuum perturbation theory expression to convert the $B_K$ renormalization factor from the RI-MOM scheme to the $\overline{\textrm{MS}}$ scheme. Thus we treat the truncation errors as 100\% correlated between the two calculations. Given these assumptions, we obtain \begin{equation} \widehat B_K = 0.725 \pm 0.026 \label{eq:bk} \end{equation} for the weighted average, and we use this value for the unitarity triangle fit presented in Sec.~\ref{sec:SMpredictions}. \subsection{$B$-meson decay constants and mixing matrix elements} \label{sec:xi} The $B$-meson decay constant $f_B$ places a constraint on the CKM unitarity triangle when combined with the experimental branching fraction for $B \to \tau \nu$ leptonic decay. Because the experimental measurement is difficult, the $B \to \tau \nu$ unitarity triangle constraint is currently quite weak, and is not included in the standard global unitarity triangle fits~\cite{CKMfitter,UTFit}. Nevertheless, we present the $B$-meson decay constant here with the expectation that the experimental branching fraction will improve and the constraint will be more useful in the future. Furthermore, we can use the average values for the $B_d$- and $B_s$-meson decay constants, which have smaller errors than any of the individual determinations, to reduce the total uncertainty in the $B_d$- and $B_s$-meson mixing matrix elements, as discussed later in this section. There have been two 2+1 flavor lattice calculations of the $B$-meson decay constants; the results for $f_B$ are summarized in the upper panel of Table~\ref{tab:LQCD_fB}, while those for $f_{B_s}$ are in the lower panel.
\begin{table}[t] \begin{center} \begin{tabular}{lccc} \hline \hline & $f_B ({\rm MeV})$ & $\;\;\;\;( \delta f_B)_{\rm stat}$ & $\;\;\;\;( \delta f_B)_{\rm syst}$ \\ FNAL/MILC '08~\cite{Bernard:2009wr} & 195 & 7 & 9 \\ HPQCD '09~\cite{Gamiz:2009ku} & 190 & 7 & 11 \\ Average & $192.8 \pm 9.9$& & \\ \hline\hline & $f_{B_s} ({\rm MeV})$ & $\;\;\;\;( \delta f_{B_s})_{\rm stat}$ & $\;\;\;\;( \delta f_{B_s})_{\rm syst}$ \\ FNAL/MILC '08~\cite{Bernard:2009wr} & 243 & 6 & 9 \\ HPQCD '09~\cite{Gamiz:2009ku} & 231 & 5 & 14 \\ Average & $238.8 \pm 9.5$ & & \\ \hline \hline \end{tabular} \caption{Unquenched lattice QCD determinations of the $B$-meson decay constants $f_B$ and $f_{B_s}$. \textcolor{black}{Plots showing the $N_f = 2+1$ results and their averages are given in Figs.~\ref{fig:fB} and~\ref{fig:fBs}.}\label{tab:LQCD_fB}} \end{center} \end{table} The Fermilab Lattice and MILC Collaborations presented preliminary determinations of $f_{B}$ and $f_{B_s}$ at Lattice 2008~\cite{Bernard:2009wr}. They use the asqtad action for the light $u$, $d$, and $s$-quarks, and the Fermilab action~\cite{ElKhadra:1996mp} for the heavy $b$-quarks. The largest uncertainty in the decay constants is statistical, and is $\sim$ 3\% for both $f_{B}$ and $f_{B_s}$. Several systematic uncertainties --- the light-quark discretization error and chiral extrapolation, heavy-quark discretization error, and scale and light-quark mass determination --- all lead to comparable errors of $\sim$ 2\%. The HPQCD Collaboration recently published a determination of $f_{B}$ and $f_{B_s}$~\cite{Gamiz:2009ku} using staggered light quarks and NRQCD $b$-quarks~\cite{Lepage:1992tx}. The statistical plus chiral extrapolation errors are comparable to those of Fermilab/MILC. The largest systematic errors, however, are from the continuum extrapolation ($\sim$ 3\%) and operator matching ($\sim$ 4\%). 
Because both decay constant calculations rely upon the MILC gauge configurations, including many overlapping ensembles, we treat the statistical errors as 100\% correlated between the two calculations. Most of the systematic errors in the two calculations, however, such as those from tuning the quark masses, heavy-quark discretization effects, and operator matching, are independent, so we treat the systematic errors as uncorrelated. Given these assumptions, we obtain the weighted averages \begin{eqnarray} f_B & = & (192.8 \pm 9.9)\ {\rm MeV} \\ f_{B_s} & = & (238.8 \pm 9.5)\ {\rm MeV} . \end{eqnarray} In practice, the CKMfitter and UTfit Collaborations do not, in fact, use the $B$-meson decay constant to implement the unitarity triangle constraint from $B \to \tau \nu$ decay. Instead, they construct the ratio ${\rm{B.R.}}(B\to\tau\nu) / \Delta m_d$, where $\Delta m_d$ is the $B_d$-meson oscillation frequency, to reduce the uncertainty from hadronic matrix elements. The quantity $f_B^2$ cancels in this ratio, such that the ratio depends only on the $B$-meson bag parameter, $B_{B_d}$, which currently has a smaller relative uncertainty than $f_B^2$. Currently there is only one available 2+1 flavor calculation of the neutral $B$-meson bag parameters by the HPQCD Collaboration~\cite{Gamiz:2009ku}. They use the same lattice actions and analysis methods as for the decay constants, and obtain \begin{eqnarray} B_{B_d} & = & 1.26 \pm 0.11 \\ B_{B_s} & = & 1.33 \pm 0.06 . \end{eqnarray} These results are also presented in Table~\ref{tab:LQCD_Bq}. \begin{table} \begin{center} \begin{tabular}{lcc} \hline \hline & $\widehat{B}_{B_d}$ & $\widehat{B}_{B_s}$ \\ HPQCD '09~\cite{Gamiz:2009ku} &$\;\;\;\; 1.26 \pm 0.11\;\;\;\; $ & $1.33 \pm 0.06$ \\ \hline \hline \end{tabular} \caption{Unquenched lattice QCD determinations of the neutral $B$-meson bag parameters $\widehat{B}_{B_q}$.
\label{tab:LQCD_Bq}} \end{center} \end{table} The experimental measurements of the $B_d$- and $B_s$-meson oscillation frequencies, when combined with a calculation of the neutral $B$-meson mixing matrix elements, place additional constraints on the apex of the CKM unitarity triangle. The weaker of the two constraints comes from $\Delta m_d$, which is proportional to the hadronic matrix element $f_{B_d} \sqrt{\widehat{B}_{B_d}}$. Nevertheless, this constraint plays an important role in the search for new physics because, depending upon the type of physics beyond the Standard Model (BSM) that is present, new physics may affect $B_s$- and $B_d$-mixing independently. For example, in some minimal flavor violating scenarios, new physics will alter the separate constraints on the apex of the CKM unitarity triangle from $B_s$- and $B_d$-mixing, but not the constraint from their ratio. Although there has been only one 2+1 flavor calculation of the neutral $B$-meson mixing matrix elements by the HPQCD Collaboration~\cite{Gamiz:2009ku}, there have been two calculations of the decay constant $f_B$, as discussed earlier in this section. We can therefore use the average values of $f_B$ and $f_{B_s}$ to improve the lattice determinations of the mixing matrix elements $f_{B_d} \sqrt{\widehat{B}_{B_d}}$ and $f_{B_s} \sqrt{\widehat{B}_{B_s}}$. We do so by combining the average value of $f_B$ in Table~\ref{tab:LQCD_fB} with the HPQCD determination of $B_{B_d}$ in Table~\ref{tab:LQCD_Bq}. This procedure reduces the error in the mixing matrix element to below that from the HPQCD calculation alone, thereby improving the resulting constraint on the unitarity triangle. We add the errors of $f_B$ and $B_{B_d}$ in quadrature, despite the fact that the average $f_B$ value contains information from the HPQCD decay constant calculation, and is therefore somewhat correlated with the HPQCD $B_{B_d}$ value.
This error treatment is conservative, however, because adding the HPQCD errors for $f_B$ and $B_{B_d}$ in quadrature slightly overestimates the resulting error in $f_{B_d} \sqrt{\widehat{B}_{B_d}}$ since correlated statistical fluctuations lead to a smaller error in the product of the two quantities (this is also true for the $B_s$-meson case). The resulting 2+1 flavor lattice averages for $f_{B_d} \sqrt{\widehat{B}_{B_d}}$ and $f_{B_s} \sqrt{\widehat{B}_{B_s}}$ are given in Table~\ref{tab:LQCD_fB_Bq}. \begin{table} \begin{center} \begin{tabular}{lcc} \hline \hline & $\;\; f_{B} \sqrt{\widehat{B}_{B_d}} ({\rm MeV})\;\;$ & $\;\; f_{B_s} \sqrt{\widehat{B}_{B_s}} ({\rm MeV}) \;\;$ \\ Average & $216 \pm 15$ & $275 \pm 13$ \\ \hline \hline \end{tabular} \caption{Unquenched lattice QCD averages of the neutral $B$-meson mixing matrix elements $f_{B} \sqrt{\widehat{B}_{B_d}}$ and $f_{B_s} \sqrt{\widehat{B}_{B_s}}$. The results are obtained by combining the average decay constants given in Table~\ref{tab:LQCD_fB} with the HPQCD determinations of the bag-parameters presented in Table~\ref{tab:LQCD_Bq}, thereby minimizing the total uncertainties. \label{tab:LQCD_fB_Bq}} \end{center} \end{table} In practice, the hadronic matrix element $f_{B_d} \sqrt{\widehat{B}_{B_d}}$ has larger uncertainties than the corresponding quantity in $B_s$-mixing, $f_{B_s} \sqrt{\widehat{B}_{B_s}}$. This is primarily because current lattice QCD calculations can simulate directly at the physical $s$-quark mass, but must extrapolate to the $u$- and $d$-quark masses. Therefore the chiral extrapolation error, which is often the dominant systematic, is larger for $f_{B_d} \sqrt{\widehat{B}_{B_d}}$ than for $f_{B_s} \sqrt{\widehat{B}_{B_s}}$. 
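The quadrature combination just described can be reproduced with a short numerical sketch (leading-order error propagation only; as noted above, ignoring the residual correlations is a conservative choice). Using the averages of Table~\ref{tab:LQCD_fB} and the HPQCD bag parameters of Table~\ref{tab:LQCD_Bq}:

```python
import math

def combine(f, df, B, dB):
    """Combine a decay constant f and bag parameter B into f*sqrt(B),
    adding the relative errors in quadrature as described in the text.
    Sketch: the relative error of sqrt(B) is dB/(2B) to leading order."""
    val = f * math.sqrt(B)
    rel = math.hypot(df / f, 0.5 * dB / B)
    return val, val * rel

fBd_sqrtB, dBd = combine(192.8, 9.9, 1.26, 0.11)  # ~216 +/- 15 MeV
fBs_sqrtB, dBs = combine(238.8, 9.5, 1.33, 0.06)  # ~275 +/- 13 MeV
```

Rounding these values reproduces the entries of Table~\ref{tab:LQCD_fB_Bq}.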
In order to minimize the hadronic uncertainty in the $\Delta m_d$ constraint on the unitarity triangle, we therefore replace $f_{B_d} \sqrt{\widehat{B}_{B_d}}$ with $ f_{B_s} \sqrt{\widehat{B}_{B_s}}/\xi$, where $\xi \equiv f_{B_s} \sqrt{\widehat{B}_{B_s}}/f_{B_d} \sqrt{\widehat{B}_{B_d}}$ is an $SU(3)$-breaking ratio that can currently be determined more accurately in lattice calculations than the individual matrix elements, as discussed below. The more stringent neutral $B$-meson mixing constraint on the unitarity triangle comes from the ratio of the oscillation frequencies, $\Delta m_s / \Delta m_d$, because many uncertainties are reduced in the lattice calculation of $\xi$. There have been two recent 2+1 flavor lattice QCD calculations of $\xi$; the results are summarized in Table~\ref{tab:LQCD_xi}. \begin{table} \begin{center} \begin{tabular}{lccc} \hline \hline & $\xi$ & $\;\;\;\;(\delta \xi)_{\rm stat}$ & $\;\;\;\;(\delta \xi)_{\rm syst}$ \\ FNAL/MILC '08~\cite{Evans:2009} & 1.205 & 0.036 & 0.037 \\ HPQCD '09~\cite{Gamiz:2009ku} & 1.258 & 0.025 & 0.021 \\ Average & $1.243 \pm 0.028$ & & \\ \hline \hline \end{tabular} \caption{Unquenched lattice QCD determinations of the $SU(3)$-breaking ratio $\xi$. \textcolor{black}{A plot showing the two $N_f = 2+1$ results and their average is given in Fig.~\ref{fig:xi}.} \label{tab:LQCD_xi}} \end{center} \end{table} The Fermilab Lattice and MILC Collaborations presented a preliminary calculation of $\xi$ at Lattice 2008~\cite{Evans:2009}, while the HPQCD Collaboration has recently published a determination of $\xi$ to $2.6\%$ accuracy~\cite{Gamiz:2009ku}. Both groups use the same lattice actions and methods as they did for the $B$-meson decay constant calculations. The largest uncertainties in Fermilab/MILC's determination of $\xi$ are from statistics and from the chiral-continuum extrapolation, both of which are $\sim$ 3\%. They obtain a total error in $\xi$ of $\sim 4\%$.
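As a quick cross-check of the averaging machinery, combining the two $\xi$ results of Table~\ref{tab:LQCD_xi} with simple uncorrelated weights already lands close to the quoted average (a sketch only; the treatment of correlated statistical errors is discussed in the text):

```python
import math

# Total errors: statistical and systematic added in quadrature.
fnal  = (1.205, math.hypot(0.036, 0.037))  # FNAL/MILC '08
hpqcd = (1.258, math.hypot(0.025, 0.021))  # HPQCD '09

# Uncorrelated inverse-variance weights (illustrative sketch).
w1, w2 = 1.0 / fnal[1] ** 2, 1.0 / hpqcd[1] ** 2
xi = (w1 * fnal[0] + w2 * hpqcd[0]) / (w1 + w2)
dxi = math.sqrt(1.0 / (w1 + w2))
# xi ~ 1.243, dxi ~ 0.028, close to the average in the table.
```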
HPQCD's largest sources of uncertainty are also statistics and the chiral-continuum extrapolation, which together contribute $\sim 2\%$ to the total uncertainty. Because both calculations of $\xi$ rely on the MILC gauge configurations, in taking the average of the two results, we treat the statistical errors as 100\% correlated. We treat the systematic errors as uncorrelated because the two calculations use different heavy-quark actions and operator renormalization methods, so most of the systematic errors are independent. Given these assumptions, we obtain the weighted average \begin{equation} \xi = 1.243 \pm 0.028 \end{equation} for use in the unitarity triangle analysis. \subsection{$|V_{ub}|$} \label{sec:Vub} The CKM matrix element $|V_{ub}|$ also places a constraint on the apex of the CKM unitarity triangle. It can be determined by combining experimental measurements of the branching fraction for semileptonic $B \to \pi \ell \nu$ decay with lattice QCD calculations of the $B \to \pi \ell \nu$ form factor. There have been two exclusive determinations of $|V_{ub}|$ based on 2+1 flavor lattice calculations; the results are summarized in the upper panel of Table~\ref{tab:LQCD_Vqb}.
\begin{table} \begin{center} \begin{tabular}{lccc} \hline\hline & $\left|V_{ub}\right| \times 10^{-3} $ & $\;\;\;\;( \delta V_{ub})_{\rm exp} $ & $\;\;\;\;( \delta V_{ub})_{\rm theo}$ \\ HPQCD '06~\cite{Dalgic:2006dt} + HFAG Winter '09~\cite{HFAG_FPCP_09} & 3.40 & 0.20 & $^{+0.59}_{-0.39}$ \\ FNAL/MILC '08~\cite{Bailey:2008wp} + BABAR '06~\cite{Aubert:2006px} & 3.38 & $\sim$ 0.20 & $\sim$ 0.29 \\ Average & $3.42 \pm 0.37$ & & \\ \hline\hline & $\left|V_{cb}\right| \times 10^{-3} $ & $\;\;\;\;( \delta V_{cb})_{\rm exp} $ & $\;\;\;\;( \delta V_{cb})_{\rm theo}$ \\ $B\to D \ell \nu$: FNAL/MILC '04~\cite{Okamoto:2004xg} + HFAG Winter '09~\cite{HFAG_FPCP_09} & 39.1 & 1.4 & 0.9 \\ $B\to D^* \ell \nu$: FNAL/MILC '08~\cite{Bernard:2008dn} + HFAG Winter '09~\cite{HFAG_FPCP_09} & 38.3 & 0.5 & 1.0 \\ Average & $38.6 \pm 1.2$ & & \\ \hline\hline \end{tabular} \caption{Exclusive determinations of the CKM matrix elements $|V_{ub}|$ and $|V_{cb}|$ from unquenched lattice QCD calculations. \textcolor{black}{Plots showing the $N_f = 2+1$ results and their averages are given in Figs.~\ref{fig:Vcb} and~\ref{fig:Vub}.} \label{tab:LQCD_Vqb}} \end{center} \end{table} In 2006 the HPQCD Collaboration published the first unquenched computation of the $B \to \pi \ell \nu$ semileptonic form factor using asqtad valence light quarks and NRQCD valence $b$-quarks~\cite{Dalgic:2006dt} and MILC 2+1 flavor dynamical gauge configurations. The $B \to \pi \ell \nu$ form factor is more difficult to compute numerically than other lattice quantities such as $B_K$ or $\xi$, and consequently has a larger total error. Because of the poor statistics associated with lattice data at nonzero momentum, the largest source of uncertainty in the HPQCD form factor calculation is the 10\% statistical plus chiral extrapolation error.
When the HPQCD result for the form factor is combined with the latest Heavy Flavor Averaging Group (HFAG) average for the $B \to \pi \ell \nu$ branching fraction~\cite{HFAG_FPCP_09}, one obtains $|V_{ub}|$ with a total error of $\sim 16\%$, only $\sim 6\%$ of which comes from the experimental uncertainty in the branching fraction. The Fermilab Lattice and MILC Collaborations recently published an improved determination of the $B \to \pi \ell \nu$ semileptonic form factor and $|V_{ub}|$ using staggered light quarks and Fermilab $b$-quarks~\cite{Bailey:2008wp}. As in the case of the HPQCD calculation, the largest source of uncertainty is statistics plus chiral-continuum extrapolation, which leads to a $\sim 6\%$ error in the form factor. Fermilab/MILC, however, extract $|V_{ub}|$ in a different manner than HFAG. They perform a simultaneous fit to the lattice data and the 12-bin BABAR experimental data~\cite{Aubert:2006px} using a fit function based on analyticity and crossing symmetry~\cite{Bourrely:1980gp,Boyd:1994tt,Lellouch:1995yv,Boyd:1997qw}, leaving the relative normalization between lattice and experiment as a free parameter to be determined in the fit. With this method they reduce the total uncertainty in $|V_{ub}|$ by combining the lattice and experimental information in an optimal, model-independent manner. They obtain a total error in $|V_{ub}|$ of $\sim 11\%$. Because the HPQCD and Fermilab/MILC calculations both use the MILC gauge configurations, we treat their statistical errors as 100\% correlated when taking the average. We also treat the experimental errors in the two determinations of $|V_{ub}|$ as 100\% correlated. This is a conservative assumption because the HPQCD extraction comes from the HFAG average, which is obtained from many experimental measurements of the branching fraction including the 12-bin BABAR analysis. 
We treat the systematic errors in the two calculations as uncorrelated, since they use different actions for the heavy quarks and different methods for the lattice-to-continuum operator matching. Given these assumptions, we obtain \begin{equation} |V_{ub}| = ( 3.42 \pm 0.37) \times 10^{-3} \label{vubexclAV} \end{equation} for the weighted average. \textcolor{black}{Note that in our averaging procedure we symmetrize the HPQCD systematic error. For this reason the central value of the average (\ref{vubexclAV}) is slightly larger than both the HPQCD and Fermilab/MILC central values.} \subsection{$|V_{cb}|$} \label{sec:Vcb} The CKM matrix element $|V_{cb}|$ normalizes the base of the CKM unitarity triangle. Therefore it implicitly enters many of the constraints on the apex of the CKM unitarity triangle, including those coming from $B_K$ and $|V_{ub}|$. $|V_{cb}|$ can be determined by combining experimental measurements of the branching fractions for $B \to D \ell \nu$ or $B \to D^* \ell \nu$ semileptonic decay with lattice QCD calculations of the relevant form factor at zero recoil. There have been two exclusive determinations of $|V_{cb}|$ based on 2+1 flavor lattice calculations; the results are summarized in the lower panel of Table~\ref{tab:LQCD_Vqb}. The Fermilab Lattice and MILC Collaborations presented the first unquenched lattice determination of the $B \to D \ell \nu$ form factor at Lattice 2004~\cite{Okamoto:2004xg}. Like other Fermilab/MILC calculations of heavy-light meson quantities, this calculation uses staggered light quarks and Fermilab $b$- and $c$-quarks. Because the $B \to D \ell \nu$ form factor at zero recoil can be obtained by a carefully constructed double ratio of matrix elements, it can be computed very precisely from lattice calculations. The statistical error in the resulting form factor determination is $\sim 2\%$, and all systematic errors are $\sim 1\%$ or less.
Although the result is obtained from data at only a single lattice spacing, Fermilab/MILC include an estimate of discretization effects in their error budget. When the result for the $B \to D \ell \nu$ form factor is combined with the latest HFAG average of the experimental branching fraction~\cite{HFAG_FPCP_09}, it leads to a determination of $|V_{cb}|$ with a $\sim 4\%$ total error, of which $\sim 2\%$ is lattice theoretical and $\sim 4\%$ is experimental. More recently, the Fermilab Lattice and MILC Collaborations published the first unquenched lattice determination of the $B \to D^* \ell \nu$ form factor~\cite{Bernard:2008dn}. This calculation also uses staggered light quarks and Fermilab $b$- and $c$-quarks, and obtains the form factor at zero recoil from a double ratio of matrix elements. It improves upon the earlier determination of the $B \to D \ell \nu$ form factor, however, by using data at three lattice spacings, and performing a more sophisticated chiral-continuum extrapolation. When the result for the $B \to D^* \ell \nu$ form factor is combined with the latest HFAG average of the experimental branching fraction~\cite{HFAG_FPCP_09}, it leads to a determination of $|V_{cb}|$ with a $\sim 3\%$ total error, of which $\sim 2.5\%$ is lattice theoretical and $\sim 1.5\%$ is experimental. We note that the average of $B\to D^*$ data yields a poor chi-square per degree of freedom ($\chi^2_{\rm min}/{d.o.f.} = 39/21$). For this reason, we rescale the experimental error quoted in Table~\ref{tab:LQCD_Vqb} by a factor $\sim1.4$. Because the Fermilab/MILC calculations are computed on some of the same ensembles and using the same lattice actions and methods, we treat the theoretical errors as 100\% correlated between the extractions of $|V_{cb}|$ from $D$ and $D^*$ final states. We assume that the experimental errors are independent between the two measurements. 
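The rescaling of the experimental error quoted above follows directly from the PDG prescription stated at the beginning of this section; a one-line check (sketch):

```python
import math

# PDG scale factor for the B -> D* l nu experimental average,
# from the quoted chi-square per degree of freedom of 39/21:
scale = math.sqrt(39.0 / 21.0)  # ~1.36, quoted as ~1.4 in the text
```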
Given these assumptions we obtain the weighted average \begin{equation} |V_{cb}|_{\rm excl} = ( 38.6 \pm 1.2) \times 10^{-3} \; \label{vcbexclusive} \end{equation} for use in the unitarity triangle fit. \subsection{$\kappa_\varepsilon$} \label{sec:kappaepsilon} Buras and Guadagnoli~\cite{Buras:2008nn} have pointed out that corrections to $\varepsilon_K$ that had typically been neglected in the unitarity triangle analysis due to the large errors on $B_K$ and $|V_{cb}|$ \textcolor{black}{(with the exception of Refs.~\cite{Andriyash:2003ym,Andriyash:2005ax})} are actually substantial, and amount to a $\sim 8\%$ correction to the Standard Model prediction for $\varepsilon_K$. In this section, we \textcolor{black}{provide the first estimate of the correction factor $\kappa_\varepsilon$ using a 2+1 flavor lattice QCD calculation of ${\rm Im} \left[A (K\to \pi \pi(I=2))\right]$~\cite{Li:2008kc}. We follow the notation of Ref.~\cite{Buras:2008nn}. Our result supersedes previous estimates of $\kappa_\varepsilon$~\cite{Bardeen:1986vz,Anikeev:2001rk,Buras:2003zz,Blanke:2007wr}, and is in perfect agreement with the recent determination of Ref.~\cite{Buras:2008nn}.} It is conventional to define $K\to\pi\pi$ matrix elements in terms of definite isospin amplitudes by \begin{eqnarray} A(K^0\to\pi\pi(I))=A_I e^{i\delta_I}, \end{eqnarray} \begin{eqnarray} A(\overline{K}^0\to\pi\pi(I))=-A^*_I e^{i\delta_I}. \end{eqnarray} CP-violation in the kaon system is then parameterized in terms of \begin{eqnarray}\label{eq:epsilon_K} \varepsilon_K = e^{i\phi_\varepsilon}\sin{\phi_\varepsilon}\left(\frac{\textrm{Im}(M^K_{12})}{\Delta M_K}+P_0 \right), \end{eqnarray} and \begin{eqnarray} \varepsilon'_K = \frac{ie^{i(\delta_2-\delta_0)}}{\sqrt{2}}\omega[P_2-P_0], \end{eqnarray} where \begin{eqnarray}\label{eq:P_12} P_0 \equiv \frac{\textrm{Im} A_0}{\textrm{Re} A_0}, \ \ \ P_2 \equiv \frac{\textrm{Im}A_2}{\textrm{Re}A_2}, \ \ \ \omega\equiv \frac{\textrm{Re}A_2}{\textrm{Re}A_0}. 
\end{eqnarray} The first term in parenthesis in Eq.~(\ref{eq:epsilon_K}) is the short distance contribution to kaon mixing; it is the part that is conventionally normalized by the bag-parameter $B_K$. The second term, $P_0$, is due to long distance contributions to kaon mixing, and is related to the ratio of $K\to\pi\pi$ decay amplitudes in the $\Delta I=1/2$ channel defined in Eq.~(\ref{eq:P_12}). In the usual unitarity triangle analysis, $\phi_\varepsilon$ is taken to be $\pi$/4, and $P_0$ is taken to be negligible compared to the first term in parenthesis in Eq.~(\ref{eq:epsilon_K}). However, these corrections are not negligible in the current analysis, and the corrections coming from $\phi_\varepsilon \neq \pi/4$ and $P_0\neq 0$ are small but in the same direction \cite{Buras:2008nn}. Following Ref.~\cite{Buras:2008nn}, we define an overall multiplicative correction factor for $\varepsilon_K$ that accounts for $\phi_\varepsilon \neq \pi/4$ and $P_0\neq 0$, \begin{eqnarray}\label{eq:kappa_epsilon} \kappa_\varepsilon = \sqrt{2} \sin{\phi_\varepsilon} \overline{\kappa}_\varepsilon, \end{eqnarray} where $\overline{\kappa}_\varepsilon$ parameterizes the correction to $|\varepsilon_K|$ coming from $P_0$. To good approximation, we have \begin{eqnarray} \overline{\kappa}_\varepsilon \sim 1 + \frac{P_0}{\sqrt{2}|\varepsilon_K|}, \end{eqnarray} and given that \begin{eqnarray} \textrm{Re}\left(\varepsilon'_K/\varepsilon_K\right)\sim \frac{\omega}{\sqrt{2}|\varepsilon_K|}(P_2-P_0), \end{eqnarray} we see that \begin{eqnarray}\label{eq:kappabar_epsilon} \overline{\kappa}_\varepsilon = 1-\frac{1}{\omega}\textrm{Re}(\varepsilon'_K/\varepsilon_K) + \frac{P_2}{\sqrt{2}|\varepsilon_K|}. \end{eqnarray} All of the quantities entering $\kappa_\varepsilon$, Eqs.~(\ref{eq:kappa_epsilon}) and (\ref{eq:kappabar_epsilon}), are well determined from experiment, assuming the Standard Model, except for $P_2$. 
In Eq.~(\ref{eq:P_12}), we need theory input for $\textrm{Im} A_2$ to determine $P_2$. The amplitude $\textrm{Im}A_2$ is non-vanishing due to the electroweak penguin contribution to $K\to\pi\pi$ decays. We use the result of lattice calculations of this amplitude in our analysis. There is only one $2+1$ flavor result for this quantity with rather large systematic errors associated with the use of leading order chiral perturbation theory \cite{Li:2008kc}, obtained by the RBC/UKQCD Collaborations. Their value is given in Table~\ref{tab:ImA2unquench}, where the first error is statistical and the second is due to the systematic error in the determination of the leading order low energy constant of chiral perturbation theory ($\chi$PT), and the truncation of $\chi$PT to tree-level. Although the quoted error budget does not include typical lattice errors such as finite volume effects, scale setting, and discretization errors, they are almost certainly much smaller than the errors attributed to the use of leading order chiral perturbation theory. For comparison, we also mention lattice results for $\textrm{Im}A_2$ in the quenched approximation, collected in Table~\ref{tab:ImA2}. Some of these quenched calculations are more thorough than others at assessing systematic errors. However, all of their systematic error budgets are necessarily incomplete, due to the uncontrolled nature of the quenched approximation, and we do not attempt to estimate this error here. We take a simple average of the five quenched lattice results and note that it is very similar to the $2+1$ flavor result. We use only the $2+1$ flavor result for our determination of $\kappa_\epsilon$ for use in the unitarity triangle fits. \begin{table}[t] \begin{center} \begin{tabular}{lcc} \hline\hline 2+1 Flavor & \ $\textrm{Im}A_2 \times 10^{13}$ GeV & \\[0.5mm] RBC/UKQCD '08~\cite{Li:2008kc} & $-7.9\pm1.6 \pm3.9$ &\\ \hline\hline \end{tabular} \caption{2+1 flavor lattice value for $\textrm{Im}A_2$. 
Errors are statistical and systematic, respectively. \textcolor{black}{A plot comparing the $N_f = 2+1$ result with several quenched determinations is given in Fig.~\ref{fig:ImA2}.}\label{tab:ImA2unquench}} \end{center} \end{table} \begin{table}[t] \begin{center} \begin{tabular}{lcc} \hline\hline Quenched & \ $\textrm{Im}A_2 \times 10^{13}$ GeV & \\[0.5mm] RBC '01~\cite{Blum:2001xb} & $-12.6$ \\ CP-PACS '01~\cite{Noaki:2001un} & $-9.1$ \\ ${\rm SPQ_{CD}R}$ '04~\cite{Boucaud:2004aa} & $-5.5$ \\ Babich et al '06~\cite{Babich:2006bh} & $-9.2$\\ Yamazaki '08~\cite{Yamazaki:2008hg} & $-11.8$ \\ Average & $-9.6$ &\\ \hline\hline \end{tabular} \caption{Quenched lattice values for $\textrm{Im}A_2$. \label{tab:ImA2}} \end{center} \end{table} \begin{table}[t] \begin{center} \begin{tabular}{lcl} \hline\hline $\phi_\varepsilon=(43.51\pm0.05)^{\circ}$ & \;\;\;\;\; & $|\varepsilon_K|=(2.229\pm0.012)\times 10^{-3}$ \\ $\omega=0.0450$ & & $\textrm{Re}(\varepsilon'_K/\varepsilon_K)=(1.68\pm0.19)\times 10^{-3}$ \\ $\textrm{Re}A_2 = 1.50\times 10^{-8} \; {\rm GeV}$ & & $\textrm{Im}A_2 = (-7.9\pm 4.2)\times 10^{-13}\; {\rm GeV}$ \vphantom{\Big(} \\ \hline \hline \end{tabular} \caption{Inputs used to determine $\kappa_\varepsilon$.\label{tab:kappa_eps}}\end{center} \end{table} All inputs used to determine $\kappa_\varepsilon$ are given in Table~\ref{tab:kappa_eps}. We take the most recent experimental world average for $\textrm{Re}(\varepsilon'/\varepsilon)$~\cite{Blucher2009}, noting that we inflate the errors according to the PDG prescription because of the somewhat low confidence level ($13\%$) in the world average. Using these values in Eqs.~(\ref{eq:P_12}), (\ref{eq:kappa_epsilon}), and (\ref{eq:kappabar_epsilon}) we find \begin{eqnarray} \kappa_\varepsilon=0.92\pm0.01, \end{eqnarray} in agreement with Ref.~\cite{Buras:2008nn}. The $50\%$ error in the $2+1$ flavor determination of $\textrm{Im} A_2$ dominates the error in $\kappa_\varepsilon$. 
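The central value of $\kappa_\varepsilon$ follows directly from the inputs of Table~\ref{tab:kappa_eps} via Eqs.~(\ref{eq:P_12}), (\ref{eq:kappa_epsilon}), and (\ref{eq:kappabar_epsilon}); the short calculation below reproduces it (central values only, no error propagation):

```python
import math

# central values from Table tab:kappa_eps
phi_eps = math.radians(43.51)   # phi_epsilon
abs_eps = 2.229e-3              # |epsilon_K|
omega   = 0.0450                # Re A_2 / Re A_0
re_epp  = 1.68e-3               # Re(eps'_K / eps_K)
re_A2   = 1.50e-8               # GeV
im_A2   = -7.9e-13              # GeV, 2+1 flavor lattice value

P2 = im_A2 / re_A2                           # Eq. (eq:P_12)
kappa_bar = 1.0 - re_epp / omega + P2 / (math.sqrt(2.0) * abs_eps)
kappa_eps = math.sqrt(2.0) * math.sin(phi_eps) * kappa_bar
print(round(kappa_eps, 2))  # -> 0.92
```

Both correction terms (the $\varepsilon'/\varepsilon$ piece and the $P_2$ piece) are negative, which is why $\kappa_\varepsilon$ lies below unity.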
We note for comparison that if we use the average quenched value of $\textrm{Im} A_2$, assigning to it a $100\%$ error, we find $\kappa_\epsilon=0.92\pm0.02$. \subsection{$f_K$} \label{sec:fk} The kaon decay constant $f_K$ enters the CKM unitarity triangle through $\varepsilon_K$. \textcolor{black}{Because experiments can only measure the product $f_K \times |V_{us}|$, lattice calculations are needed to obtain $f_K$ by itself.} There have already been four 2+1 flavor lattice QCD determinations of $f_K$ using different valence and sea quark actions, and several more calculations are underway. Thus $f_K$ is one of the best-known hadronic weak matrix elements. Table~\ref{tab:LQCD_fK} summarizes the current status of 2+1 flavor lattice QCD calculations of $f_K$. \begin{table}[t] \begin{center} \begin{tabular}{lccc} \hline\hline & $f_K ({\rm MeV})$ & $\;\;\;\;(\delta f_K)_{\rm stat}$ & $\;\;\;\;(\delta f_K)_{\rm syst}$ \\ MILC '07~\cite{Bernard:2007ps} & 156.5 & 0.4 & $^{+1.0}_{-2.7}$ \\ HPQCD/UKQCD '07~\cite{Follana:2007uv} & 157 & 1 & 2 \\ RBC/UKQCD '08~\cite{Allton:2008pn} & 149.6 & 3.6 & 6.3 \\ Aubin, Laiho \& Van de Water '08~\cite{Aubin:2008ie} & 153.9 & 1.7 & 4.4 \\ Average & $155.8 \pm 1.7$ & &\\ \hline\hline \end{tabular} \caption{Unquenched lattice QCD determinations of the kaon decay constant $f_K$. \textcolor{black}{A plot showing the four $N_f = 2+1$ results and their average is given in Fig.~\ref{fig:fK}.} \label{tab:LQCD_fK}} \end{center} \end{table} The MILC Collaboration published the first 2+1 flavor determination of $f_K$ in 2004~\cite{Aubin:2004fs}, and updated the result at Lattice 2007 by including data with lighter quarks and finer lattice spacings~\cite{Bernard:2007ps}. The largest source of uncertainty in their calculation is from the extrapolation to the physical light quark masses and the continuum. 
A small but non-negligible error also arises due to the determination of the absolute lattice scale needed to convert dimensionful quantities into physical units. MILC first determines the relative scale $r_1/a$ from the heavy-quark potential. Next they obtain the absolute scale $r_1 = 0.3108(15)(^{+26}_{-79})$ by tuning $f_\pi$ to be equal to the experimental value. The uncertainty in $r_1$ leads to an uncertainty in $f_K$ of $^{+0.25}_{-0.75}$ MeV, which is 25\% of the total error. The remaining finite volume effects and EM effects are an order of magnitude or more smaller, and the total uncertainty in the MILC Collaboration's determination of $f_K$ is $\sim 2\%$. The HPQCD Collaboration published a determination of $f_K$ using a mixed-action method with highly-improved staggered quarks~\cite{Follana:2006rc} on the MILC asqtad-improved staggered gauge configurations~\cite{Follana:2007uv}. The largest source of uncertainty in their calculation is from the determination of the scale. They use the MILC Collaboration's determination of the relative scale $r_1/a$ to convert dimensionful quantities from lattice units to $r_1$ units. They obtain the value $r_1 = 0.321(5)$ independently, however, using the $\Upsilon$ spectrum computed with nonrelativistic $b$-quarks on the MILC ensembles~\cite{Gray:2005ur}. The uncertainty in $r_1$ leads to an uncertainty in $f_K$ of 1.1\%. The remaining statistical and systematic errors are all much smaller, and the total error in $f_K$ is 1.3\%. The RBC and UKQCD Collaborations published an independent determination of $f_K$ using domain-wall quarks~\cite{Allton:2008pn}. Because they obtain their result using only a single lattice spacing of $a \approx 0.11$ fm, the dominant uncertainty in their result is from discretization errors. They estimate these errors to be 6\% using power-counting arguments. Because the remaining statistical and systematic errors are all much smaller, the total error in $f_K$ is 6.3\%. 
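All the averages in this work follow the standard inverse-covariance prescription, $\bar x = (\mathbf{1}^T V^{-1} \mathbf{x})/(\mathbf{1}^T V^{-1} \mathbf{1})$ with $\sigma^2 = 1/(\mathbf{1}^T V^{-1} \mathbf{1})$. A minimal $n$-measurement sketch is given below (standard library only; the inputs in the test are illustrative numbers, since the actual $f_K$ average also requires symmetrizing asymmetric errors and building the block correlation structure described here):

```python
def gauss_solve(A, b):
    """Solve A x = b for a small matrix by Gauss-Jordan elimination
    with partial pivoting (adequate for a handful of measurements)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def correlated_average(x, cov):
    """Inverse-covariance weighted average of measurements x with
    covariance matrix cov; returns (mean, sigma)."""
    w = gauss_solve(cov, [1.0] * len(x))   # V^{-1} 1
    norm = sum(w)
    mean = sum(wi * xi for wi, xi in zip(w, x)) / norm
    return mean, (1.0 / norm) ** 0.5
```

For a diagonal covariance matrix this reduces to the familiar $1/\sigma_i^2$ weighting; off-diagonal (correlated) entries shift the weights and enlarge the combined error.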
Aubin, Laiho, and Van de Water presented a preliminary determination of $f_K$ using a mixed-action method with domain-wall valence quarks on the MILC staggered gauge configurations at Lattice 2008~\cite{Aubin:2008ie}. The largest source of uncertainty in their calculation is from the chiral and continuum extrapolation, which they estimate to be 2.3\%. The 1.6\% error from the uncertainty in the scale $r_1$, however, is close in size, so the total error in $f_K$ is 3.0\%. Because the HPQCD, MILC, and ALV calculations all use the MILC gauge configurations, we treat the statistical errors as 100\% correlated when taking the average. Because ALV use the MILC Collaboration's determination of the scale $r_1$ from $f_\pi$, the scale uncertainties are also 100\% correlated between the calculations. We take the scale uncertainty in HPQCD's calculation to be uncorrelated, however, because they use a largely independent determination of $r_1$ based on the $\Upsilon$ spectrum. We also treat the remaining systematic errors as uncorrelated between the HPQCD, MILC, and ALV calculations because they use different valence quark formulations. The calculation of the RBC and UKQCD Collaborations is independent of the other results, and we therefore take the errors to be completely uncorrelated. Given these assumptions, we obtain the weighted average \begin{equation} f_K = (155.8 \pm 1.7) \; {\rm MeV} \end{equation} to be used in the unitarity triangle analysis presented in Sec.~\ref{sec:SMpredictions}. \section{Other inputs to the fit of the unitarity triangle} \label{sec:OtherInputs} Table~\ref{tab:utinputs} summarizes the set of inputs that we use in the fit. We obtain $\alpha$ from the isospin analysis of $B\to (\pi\pi,\rho\rho,\rho\pi)$ decays (the description of the method we use can be found in Refs.~\cite{Gronau:1990ka,Snyder:1993mx,Quinn:2000by} and the experimental inputs are taken from Ref.~\cite{HFAG_Moriond_09}). 
We take the direct determination of $\gamma$ from the model-independent UTfit analysis of $B\to D^{(*)} K^{(*)}$ decays~\cite{Bona:2005vz,Bona:2006ah} (the experimental inputs used are taken from Ref.~\cite{HFAG_Moriond_09}). The inclusive determination of $|V_{cb}|$ deviates by more than 2 $\sigma$ from the average of exclusive results that we quote in Eq.~(\ref{vcbexclusive}). For use in the unitarity triangle fit we combine the three determinations of $|V_{cb}|$ from inclusive and exclusive ($D$ and $D^*$) modes. Taking into account the correlations between the errors of the two exclusive determinations of $|V_{cb}|$ and assuming no correlation between inclusive and exclusive analyses, we obtain: \begin{eqnarray} \left|V_{cb}\right|_{\rm excl + incl} = \left( 40.3 \pm 1.0 \right) \times 10^{-3} \; , \end{eqnarray} where the error has been appropriately rescaled following the PDG prescription. We quote the inclusive determination of $|V_{ub}|$ from the most recent GGOU analysis~\cite{Gambino:2007rp,HFAG_FPCP_09}. Because, however, the extraction of $|V_{ub}|_{\rm incl}$ depends strongly on the theoretical framework adopted~\cite{HFAG_FPCP_09}, we adopt a conservative stance and omit $|V_{ub}|_{\rm incl}$ from the set of measurements that we include in the full unitarity triangle fit. Our predictions for the Standard Model parameters in the following section are independent of $|V_{ub}|$, and our conclusions regarding indications of new physics in Sec.~\ref{sec:NewPhys} are relatively insensitive to the value of $|V_{ub}|$. 
\textcolor{black}{Apart from the inputs listed in Table~\ref{tab:utinputs}, we take $G_F$, $m_K$, $m_W$, $m_{B_d}$ and $m_{B_s}$ from the Particle Data Group~\cite{Yao:2006px}.} \begin{table}[t] \begin{center} \begin{tabular}{ll} \hline\hline $\left| V_{cb} \right|_{\rm incl} = (41.31 \pm 0.76) \times 10^{-3}$~\cite{HFAG_FPCP_09} $\;\;\;$& $\left| V_{ub} \right|_{\rm incl} = (40.3 \pm 1.5^{+2.0}_ {-2.5}) \times 10^{-4} $~\cite{HFAG_FPCP_09} \cr $\Delta m_{B_d} = (0.507 \pm 0.005)\; {\rm ps}^{-1}$~\cite{HFAG_PDG_09} & $\Delta m_{B_s} = (17.77 \pm 0.10 \pm 0.07)\; {\rm ps}^{-1}$~\cite{Evans:2007hq} \vphantom{\Big(} \\ $\alpha = (89.5 \pm 4.3)^{\rm o}$& $\gamma = (78 \pm 12)^{\rm o}$~\cite{Bona:2005vz,Bona:2006ah} \vphantom{\Big(} \\ $\eta_1 = 1.51 \pm 0.24$~\cite{Herrlich:1993yv} & $m_{t, pole} = (172.4 \pm 1.2) \; {\rm GeV}$~\cite{:2008vn} \vphantom{\Big(}\\ $\eta_2 = 0.5765 \pm 0.0065$~\cite{Buras:1990fn} & $m_c(m_c) = (1.268 \pm 0.009 ) \; {\rm GeV}$~\cite{Allison:2008xk}\vphantom{\Big(}\\ $\eta_3 = 0.47 \pm 0.04$~\cite{Herrlich:1995hh} & $\varepsilon_K = (2.229 \pm 0.012 ) \times 10^{-3}$~\cite{Yao:2006px} \vphantom{\Big(} \\ $\eta_B = 0.551 \pm 0.007$~\cite{Buchalla:1996ys} & $\lambda = 0.2255 \pm 0.0007$~\cite{Antonelli:2008jg}\vphantom{\Big(} \\ $S_{\psi K_S} = 0.672 \pm 0.024$~\cite{HFAG_Moriond_09} & \vphantom{\Big(} \\ \hline \hline \end{tabular} \caption{Inputs used in the unitarity triangle fit. \textcolor{black}{Note that the most precise determination of $m_c$ is obtained from lattice QCD~\cite{Allison:2008xk}.} \label{tab:utinputs}} \end{center} \end{table} \section{Standard Model Predictions} \label{sec:SMpredictions} In this section we extract the Standard Model predictions for $\widehat B_K$, $|V_{cb}|$ and $|V_{ub}/V_{cb}|$. 
We use only the three constraints from $S_{\psi K_S}$, $\Delta M_{B_s}/\Delta M_{B_d}$ and $\varepsilon_K$, and do not include the constraints from $|V_{ub}|$, $\alpha$ and $\gamma$ in the fit because predictions are almost completely insensitive to their impact. The analytical formulae for $\varepsilon_K$ and $\Delta M_{B_s}/\Delta M_{B_d}$ can be found, for instance, in Ref.~\cite{Lunghi:2009sm}. \bigskip We obtain the prediction for $\widehat B_K$ by excluding the direct lattice determination of $\widehat B_K$ from the chi-square. The dominant source of uncertainty in the extraction of $\widehat B_K$ stems from the strong dependence of $\varepsilon_K$ on $|V_{cb}|$ ($\varepsilon_K \propto |V_{cb}|^4$). This issue is even more problematic because of the discrepancy between the extraction of $|V_{cb}|$ from exclusive and inclusive decays. For this reason we perform the analysis both with and without the inclusive determination of $|V_{cb}|$. Note that when combining the inclusive and exclusive extractions of $|V_{cb}|$ we follow the PDG prescription for inflating the error when combining inconsistent measurements. We find: \begin{eqnarray} ( \widehat B_K )_{\rm fit} = \begin{cases} 1.09 \pm 0.12 & \left|V_{cb} \right|_{\rm excl} \cr 0.903 \pm 0.086 &\left|V_{cb} \right|_{\rm incl} \cr 0.98 \pm 0.10 & \left|V_{cb} \right|_{\rm excl+incl} \cr \end{cases} \end{eqnarray} The comparison of these predictions with the lattice determination of $\widehat B_K$ given in Eq.~(\ref{eq:bk}) yields a deviation at the $2.9\sigma$, $2\sigma$ and $2.4\sigma$ level, respectively. \begin{figure}[t] \begin{center} \includegraphics[width= 0.48\linewidth]{ut-excl.pdf} \includegraphics[width= 0.48\linewidth]{ut-incl.pdf} \caption{Impact of $\varepsilon_K$ on the UT fit. The solid, dashed and dotted contours are obtained by omitting $\varepsilon_K$, $S_{\psi K}$ and $\Delta M_{B_s}/\Delta M_{B_d}$, respectively. 
The left and right panels use the exclusive and inclusive $|V_{cb}|$ determinations, respectively. \label{fig:ut}} \end{center} \end{figure} We obtain the prediction for $|V_{cb}|$ in a similar fashion by excluding the inclusive and exclusive determinations of $|V_{cb}|$ from the chi-square. We find: \begin{eqnarray} \left| V_{cb} \right|_{\rm fit} = (43.0 \pm 0.9) \times 10^{-3} \; . \end{eqnarray} This prediction deviates by $3.0\sigma$ and $1.3\sigma$ from the exclusive and inclusive determinations of $|V_{cb}|$, respectively. Figure~\ref{fig:ut} illustrates the (2 -- 3)$\sigma$ tension in the fit to the unitarity triangle. In the left and right panels we use $|V_{cb}|$ from exclusive and inclusive semileptonic $B$ decays, respectively. The solid, dashed and dotted contours are obtained by omitting $\varepsilon_K$, $S_{\psi K}$ and $\Delta M_{B_s}/\Delta M_{B_d}$, respectively. They correspond to three scenarios in which new physics affects $K$ mixing, the phase and the amplitude of $B_d$ mixing. An alternative measure of the tension in the UT fit is the minimum $\chi^2$ per degree of freedom: when we include all three constraints we obtain $\chi^2_{\rm min}/{\rm dof} = 6.1 \; (2.6)$ using $|V_{cb}|_{\rm excl \; (incl)}$, corresponding to a confidence level of 0.2\% (7.4\%). It is interesting to note that the errors on the fitted values of $\widehat B_K$ ($\sim 10\%$ -- $12\%$) are much larger than the corresponding lattice uncertainty ($\sim 3.5 \%$); therefore, improvements on the latter are not poised to have a sizable effect on the UT tension. In contrast, the errors on the direct and indirect determinations of $|V_{cb}|$ are similar (about $2\%$), indicating that improvements on the theoretical predictions for exclusive and inclusive semileptonic $B$ decays will have a huge impact on our understanding of this issue. 
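The confidence levels quoted above follow from the $\chi^2$ survival function. Assuming two effective degrees of freedom in the fit (an assumption on our part, but one that reproduces the quoted percentages, since for 2 d.o.f. the survival function is simply $e^{-\chi^2_{\rm min}/2}$):

```python
import math

def fit_confidence_level(chi2_per_dof):
    """Confidence level of a fit assuming two effective degrees of
    freedom: the chi-square survival function for 2 d.o.f. is
    exp(-chi2_min / 2), with chi2_min = 2 * (chi2_min / dof)."""
    chi2_min = 2.0 * chi2_per_dof
    return math.exp(-chi2_min / 2.0)

print(round(100 * fit_confidence_level(6.1), 1))  # exclusive |V_cb|: 0.2 (%)
print(round(100 * fit_confidence_level(2.6), 1))  # inclusive |V_cb|: 7.4 (%)
```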
This discussion is summarized in Fig.~\ref{fig:ek}, which shows the relative impact of the present $|V_{cb}|_{\rm excl}$ and $\widehat B_K$ uncertainties on the total $\varepsilon_K$ error band. Finally, we note that the SM prediction for the ratio $|V_{ub}/V_{cb}|$ is in good agreement with the lattice expectation and deviates by only $1.6\sigma$ from the inclusive ratio: \begin{eqnarray} \left| \frac{V_{ub}}{V_{cb}} \right| = \begin{cases} 0.0846 \pm 0.0035 & \rm fit \cr 0.089 \pm 0.010 & \rm exclusive \cr 0.0969 \pm 0.0068 & \rm inclusive \cr \end{cases} \; . \end{eqnarray} \begin{figure}[t] \begin{center} \includegraphics[width= 0.48\linewidth]{ek-err.pdf} \caption{Impact of $|V_{cb}|$ (solid red line) and $\widehat B_K$ (dashed green line) on the $\varepsilon_K$ error band. The uncertainties induced by variations of $m_c$, $m_t$, $\eta_i$ and $\kappa_\varepsilon$ have a negligible impact on the $\varepsilon_K$ error budget. \label{fig:ek}} \end{center} \end{figure} \section{Interpretation as New Physics} \label{sec:NewPhys} In this section we assume that physics beyond the Standard Model does not affect tree-level processes at the current level of precision, and that any sign of new physics must arise due to higher-order loop effects. Given these assumptions, it is well known~\cite{Lunghi:2007ak, Buras:2008nn, Lunghi:2008aa, Buras:2009pj} that the $\sim 2 \sigma$ tension in the fit to the unitarity triangle can be interpreted as a manifestation of new physics effects in $\varepsilon_K$ and/or $B_d$ mixing. 
In order to test the consistency of these two hypotheses with the current measurements, we describe the two new physics possibilities using the following model-independent parametrization: \begin{eqnarray} \varepsilon_K & = & C_\varepsilon \left( \varepsilon_K\right)_{\rm SM} \; , \label{ekpar} \\ M_{12}^{d} & = & r_d^2 \; e^{i 2 \theta_d} \left( M_{12}^{d} \right)_{\rm SM}\; , \label{Bdpar} \end{eqnarray} where $M_{12}^d$ is the matrix element of the complete effective Hamiltonian between $B^0$ and $\overline B^0$ states. A value of $C_\varepsilon \neq 1$ would move the location of the $\varepsilon_K$ band, while the presence of $r_d \neq 1$ and a non-vanishing $\theta_d$ would alter the following three unitarity triangle constraints: \begin{eqnarray} \Delta M_{B_d} & = & \Delta M_{B_d}^{\rm SM} \; r_d^2 \; , \\ \beta_{\rm eff} & = & \beta + \theta_d \; , \\ \alpha_{\rm eff} & = & \alpha - \theta_d \; , \end{eqnarray} where $\beta_{\rm eff}$ and $\alpha_{\rm eff}$ are the angles extracted from the CP asymmetries in $B\to J/\psi K$ and $B\to (\pi\pi,\rho\rho,\rho\pi)$, respectively. \textcolor{black}{Although the presence of new physics effects in $B_s$ mixing is a very interesting possibility, we do not consider it in the model-independent analysis presented in this work for the following reason. Because of the smallness of the phase of the CKM element $V_{ts}$, any evidence of CP violation in $B_s$ decays translates immediately into evidence for physics beyond the SM. The golden mode studied at CDF and D0 is the time-dependent CP asymmetry in the decay $B_s \to J/\psi \phi$, which presently deviates by about $2\sigma$ from the SM expectation ($\sim 0$). Unfortunately, however, a new phase in $B_s$ mixing does not affect any of the observables that enter the unitarity triangle fit. 
Therefore, signs of new physics in $B_d /K$ and $B_s$ mixing can be connected only in specific new physics scenarios in which new phases in $K$, $B_d$ and $B_s$ mixing have a common origin (see for instance Refs.~\cite{Buras:2008nn,Lunghi:2009sm} for a discussion of this point).} From the inspection of Fig.~\ref{fig:ut} we see that, as long as we consider only $\varepsilon_K$, $\Delta M_{B_s}/\Delta M_{B_d}$, $S_{\psi K_S}$ and $|V_{cb}|$, the data are not able to distinguish between scenarios with new physics in the $K$ or $B_d$ sectors. In fact, both the solid (new physics in $K$ mixing) and dashed (new physics in $B_d$ mixing) contours in Fig.~\ref{fig:ut} have $\chi^2_{\rm min} \simeq 0$. This tie, however, is broken by the inclusion of constraints on $|V_{ub}|$ (from exclusive semileptonic $b\to u$ decays), $\alpha$ (from $B\to (\pi\pi, \rho\rho,\rho\pi)$ decays) and $\gamma$ (from $B\to D^{(*)} K^{(*)}$ decays). Figure~\ref{fig:utfit-tot} shows the resulting full fit to the unitarity triangle using the combined inclusive + exclusive determination of $|V_{cb}|$. \begin{figure}[t] \begin{center} \includegraphics[width= 0.48\linewidth]{utfit-tot.pdf} \caption{Fit to the unitarity triangle in the SM. We average the inclusive and exclusive determinations of $|V_{cb}|$, but use only the exclusive determination of $|V_{ub}|$. The black contour is obtained from the minimization of the complete chi-squared. \label{fig:utfit-tot}} \end{center} \end{figure} In order to test the hypothesis that new physics only affects neutral kaon mixing, we minimize the chi-square while excluding $\varepsilon_K$ from the fit. The solid contour in the left panel of Fig.~\ref{fig:utNP} shows the allowed $(\overline\rho,\overline\eta)$ region in this scenario. 
Adopting the parametrization in Eq.~(\ref{ekpar}) we obtain the following value for the new physics contribution to $\varepsilon_K$: \begin{eqnarray} ( C_\varepsilon)_{\rm fit} = \begin{cases} 1.47 \pm 0.17 & \left|V_{cb} \right|_{\rm excl} \; , \cr 1.21 \pm 0.11 &\left|V_{cb} \right|_{\rm incl} \; , \cr 1.32 \pm 0.14 & \left|V_{cb} \right|_{\rm excl+incl}\; . \cr \end{cases} \label{ce} \end{eqnarray} In the upper right and lower panels of Fig.~\ref{fig:utNP} we consider scenarios in which only new physics in $B_d$ mixing is allowed. For the sake of simplicity we consider only two extreme cases in which we take $(\theta_d\neq 0, r_d=1)$ and $(\theta_d=0,r_d \neq 1)$. In the former case $S_{\psi K}$ and the extraction of $\alpha$ are affected by new physics contributions and must be excluded from the fit; in the latter one, only $\Delta M_{B_s}/\Delta M_{B_d}$ receives contributions. The fitted values of the new physics parameters $\theta_d$ and $r_d$ are: \begin{eqnarray} ( \theta_d)_{\rm fit} = \begin{cases} (-4.3 \pm 2.1)^{\rm o} & \left|V_{cb} \right|_{\rm excl} \cr (-2.8 \pm 1.9)^{\rm o} &\left|V_{cb} \right|_{\rm incl} \cr (-3.4 \pm 2.0)^{\rm o} & \left|V_{cb} \right|_{\rm excl+incl}\cr \end{cases} \quad {\rm and} \quad ( r_d)_{\rm fit} = \begin{cases} (0.940 \pm 0.036) & \left|V_{cb} \right|_{\rm excl} \cr (0.950 \pm 0.036)&\left|V_{cb} \right|_{\rm incl} \cr (0.946 \pm 0.036) & \left|V_{cb} \right|_{\rm excl+incl} \; .\cr \end{cases} \label{td} \end{eqnarray} In this case, the tension between $\Delta M_{B_s}/\Delta M_{B_d}$, $\varepsilon_K$ and $|V_{ub}|$ reduces the quality of the fit: the fit omitting the constraint from $\varepsilon_K$ has a confidence level of 91\%, while the fits omitting the constraints from $S_{\psi K}$ and $\alpha$, and from $\Delta M_{B_s}/\Delta M_{B_d}$, have confidence levels of 23\% and 30\%, respectively. Thus the scenario with new physics in $K$ mixing is favored by present data. 
This can also be seen from the inspection of Eqs.~(\ref{ce}) and (\ref{td}): $C_\varepsilon$ deviates from $C_\varepsilon^{\rm SM} = 1$ at a higher confidence level than $\theta_d$ from $\theta_d^{\rm SM} = 0$. \textcolor{black}{Finally, it should be noted that the marked preference for new physics in the kaon sector is a direct consequence of our inclusion of the lower determination of $|V_{ub}|$ from exclusive semileptonic decays in the fit. As can be seen from the upper right-hand plot in Fig.~\ref{fig:utNP}, further removal of the $|V_{ub}/V_{cb}|$ constraint results in a fit with a high confidence level (CL=81\%). The overlap of the constraints from $\varepsilon_K$ and $\Delta M_s / \Delta M_d$, however, corresponds to a very large value of $|V_{ub}/V_{cb}| = 0.120 \pm 0.017$. The correlation between possible new physics in $B_d$ mixing and a large implied value of $|V_{ub}/V_{cb}|$ is well known, and has been discussed in Refs.~\cite{Buras:2008nn,Altmannshofer:2009ne,Buras:2009if}.} \begin{figure}[t] \begin{center} \includegraphics[width= 0.48\linewidth]{utfit-ek.pdf} \includegraphics[width= 0.48\linewidth]{utfit-bd.pdf} \includegraphics[width= 0.48\linewidth]{utfit-bdr.pdf} \caption{Full fit to the unitarity triangle. Upper left panel: The black contour is obtained without the inclusion of the $\varepsilon_K$ constraint. Upper right panel: The black contour is obtained without the inclusion of the $\alpha$ and $\beta$ constraints. Lower panel: The black contour is obtained without the inclusion of the $\Delta M_{B_d}$ \textcolor{black}{ and $\Delta M_{B_s}/\Delta M_{B_d}$ constraints.}\label{fig:utNP}} \end{center} \end{figure} \section{Conclusions} \label{sec:Conc} Lattice QCD calculations that include the effects of the dynamical up, down, and strange quarks are becoming standard, and allow reliable calculations of hadronic weak matrix elements with all sources of uncertainty under control. 
Because there are now multiple lattice calculations of most of the hadronic matrix elements that enter the unitarity triangle fit, it is essential to average these results in order to reduce the theoretical uncertainties and obtain the most sensitive test of new physics in the flavor sector possible. We have therefore presented averages for the hadronic weak matrix elements that enter the standard global fit of the CKM unitarity triangle. Although we do not know the precise correlations between different lattice calculations of the same quantity, we have accounted for correlations between the different lattice results in a conservative manner in our averages. Whenever there is any correlation between the statistical or a particular systematic error in different lattice calculations, we assume that the degree of correlation is 100\%. Our lattice averages of hadronic weak matrix elements are therefore appropriate for use in phenomenological analyses such as the global CKM unitarity triangle fit. When these up-to-date lattice averages of the hadronic weak matrix elements are used in a global fit of the CKM unitarity triangle, we find a (2--3)$\sigma$ tension. \textcolor{black}{As was first pointed out by Lunghi and Soni~\cite{Lunghi:2008aa}, this tension is primarily between the three most precise constraints on the unitarity triangle from $\sin(2 \beta)$, $\Delta M_{B_s} / \Delta M_{B_d}$, and $\varepsilon_K$, and is largely independent of the value of $|V_{ub}|$, which differs significantly between determinations using inclusive and exclusive semileptonic decays. We confirm their observation and put it on an even stronger footing by using lattice averages that include more recent lattice calculations and take into account correlations.} The significance of the tension depends upon whether we use the exclusive or inclusive determination of $|V_{cb}|$, which disagree by $\sim 2 \sigma$. 
If we assume that new physics does not affect tree-level processes at the current level of precision, this tension can be interpreted as a sign of new physics either in neutral kaon mixing or in neutral $B$-meson mixing. We find that the current data prefer the scenario in which the new physics is in kaon mixing; this can be seen by the fact that the confidence level of the global fit increases significantly when we remove the constraint from the $\epsilon_K$ band leaving all others unchanged. The tension between the $\epsilon_K$ band and the other constraints is enhanced by our inclusion of the correction factor $\kappa_\epsilon$, which lowers the Standard Model prediction for $\epsilon_K$ by 8\%. \textcolor{black}{This factor has been recently included by the UTfit collaboration~\cite{Bona:2009ze} (they find a similar tension in the fit), but not yet by the CKMfitter group~\cite{CKMfitter}.} The errors in the hadronic weak matrix elements needed as inputs to the unitarity triangle analysis will continue to decrease over the next few years, as the results included in these averages are updated and as new independent results using different lattice actions from other collaborations appear. If the tension observed in the current global fit persists as the theoretical errors are reduced, this may indeed be a sign of new physics. This will be difficult to ascertain conclusively, however, unless the inclusive and exclusive determinations of $|V_{cb}|$ converge. Thus a better understanding of the theoretical errors in both determinations is a high priority for flavor physics. Lattice QCD calculations of weak matrix elements are truly living up to their promise and may ultimately lead to the discovery of new physics in the quark flavor sector. \section*{Acknowledgments} \textcolor{black}{We thank Andreas Kronfeld and Steve Gottlieb for entertaining comments on the manuscript. 
We also acknowledge interesting discussions with Diego Guadagnoli, Andrzej Buras and Amarjit Soni.}
\section{Introduction} Investigations of traffic flows on substrates of various topologies and discussions of their efficiency have been a topic of recent research interest \cite{tadic,moreno}. The optimization of network structure and traffic protocols to achieve maximum efficiency is also a problem of practical importance. Congestion effects can occur in real networks like telephone networks, computer networks and the Internet. Congestion/decongestion transitions are seen in these systems. Recent studies on the `ping' experiment show $1/f$ fluctuations at the critical point \cite{Takayasu}. Message transport on different network geometries has been studied earlier on a linear chain \cite{Huisinga}, on two-dimensional lattices \cite{sat}, and on Cayley trees \cite{Arenas}, where messages are routed through shortest paths. Here, we consider a $1-d$ ring lattice of ordinary nodes and hubs similar to that considered in \cite{sat}. Networks based on ring geometries have been studied in the context of ATM networks and Local Area Networks (LAN). A realistic ring topology such as fiber distributed data interface (FDDI) sends messages clockwise or counter-clockwise through the shared link. Similarly, messages are deposited on our model $1-d$ ring lattice at regular intervals. We show that the network reproduces the experimental findings of Internet traffic flow. \begin{figure*}[!t] \centerline{\subfloat[]{\includegraphics[width=2.0in]{1dringI.eps} \label{fig_1}} \hfil \subfloat[]{\includegraphics[width=3.0in]{powerspecbsl1.eps} \label{fig_2}}} \caption{(a) A $1-d$ ring lattice of ordinary nodes ($X$) with nearest neighbor connections and randomly distributed hubs ($Y$). Each hub has $2k$ nearest neighbors, where $k=2$. 
(b) The plot of $S(f)$ against $f$ for posting rates of $N_{m}=10, 50, 100, 150$.} \label{fig_sim1} \end{figure*} \section{A $1-d$ Model of a Communication Network} Here we discuss a one-dimensional version of the communication network of nodes and hubs. The base network is a ring lattice of size $L$ with nearest neighbor connections. Hubs are distributed randomly in the lattice, where each hub has $2k$ nearest neighbors. No two hubs are separated by less than a minimum distance, $d_{min}$. In our simulation we have taken $k=4$ and $d_{min}=1$, although Fig.\ref{fig_sim1}(a) illustrates only $k=2$ connections. The distance between a source and target is defined by the Manhattan distance $D_{st}=|i_{s}-i_{t}|$, where $i_{s}$ and $i_{t}$ are the lattice indices of the source and the target. Messages are routed along the shortest path between a source $S$ and a target $T$ in the clockwise direction, taking advantage of all links in the direction of the target. Thus, if a message is routed from a source $S$ to a target $T$ on this lattice through the baseline mechanism, it takes the path $S$-1-2-Y-3-4-5-$T$ as in Fig.\ref{fig_sim1}(a). \section{Power Spectrum Analysis} In our simulation, a given number $N_{m}$ of source and target pairs start sending $N_{m}$ messages at every $100$-th time step for a total run time of $500000$ for a lattice size $L=10000$, and $D_{st}=2000$. The average load per node is given as $\bar p(N_{m},t)=R(N_{m},t)/L$, where $R(N_{m},t)$ is the total number of messages flowing on the lattice. For smaller values of the posting rate, the value of $\bar p(N_{m},t)$ is very small and the system is in the decongested phase. As the posting rate of messages is increased, the system attains the congested regime.
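The load bookkeeping used above can be sketched in a few lines of Python. This is a deliberately minimal illustration rather than the simulation of the paper: the function name \texttt{simulate\_load} is our own, and hubs, shortcut links and congestion queues are all omitted, so that every message simply needs $D_{st}$ clockwise hops.

```python
def simulate_load(L=10000, D_st=2000, N_m=10, post_every=100, T=5000):
    """Track the average load per node, p(N_m, t) = R(N_m, t) / L, where
    R(N_m, t) is the number of messages still in transit at time t.

    Simplifying assumption: no hubs and no congestion, so each message
    needs exactly D_st clockwise hops, advancing one hop per time step.
    """
    in_transit = []      # remaining hop counts of the live messages
    load_series = []
    for t in range(T):
        if t % post_every == 0:             # deposit N_m fresh messages
            in_transit.extend([D_st] * N_m)
        # advance every message one hop; drop those reaching the target
        in_transit = [d - 1 for d in in_transit if d > 1]
        load_series.append(len(in_transit) / L)  # average load per node
    return load_series
```

In this idealised setting the load simply saturates once the oldest messages start arriving; in the full model it is the fluctuations about that level, as a function of the posting rate $N_{m}$, that the power spectra probe.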
The autocorrelation function of the average load per node ($\bar p(N_{m},t)$) is defined as \cite{gautam}: \begin{equation} C(N_{m},t) = \frac { \langle {\bar p(N_{m},t')\bar p(N_{m},t+t')} \rangle - \langle {\bar p(N_{m},t')} \rangle^2} {\langle {\bar p^2(N_{m},t')} \rangle - \langle {\bar p(N_{m},t')} \rangle^2} \end{equation} The Fourier transform of the autocorrelation function $C(N_{m},t)$ is known as the spectral density or power spectrum $S(f)$, and is defined as \begin{equation} S(N_{m},f) = \int^{\infty}_{-\infty} e^{-ift}C(N_{m},t)dt \end{equation} We plot $S(f)$ against $f$ for posting rates of $N_{m}=10, 50, 100, 150$. The plots show a power law, $S(f) \sim f^{-\alpha}$, with spectral exponent $\alpha = 1$, thus indicating $1/f$ scaling irrespective of the posting rate (Fig.\ref{fig_sim1}(b)). \begin{figure*} \centerline{\subfloat[]{\includegraphics[width=2.0in]{1dringII.eps} \label{fig_5}} \hfil \subfloat[]{\includegraphics[width=3.0in]{Probintbslassrt1.eps} \label{fig_6}}} \caption{(a) The hubs are connected assortatively with two connections per hub. A message is routed along the path $S$-1-A-r-B-2-3-$T$. (b) The inter-arrival time distribution, which shows a stretched exponential behavior for the baseline mechanism and a power law tail for the random assortative mechanism.} \label{fig_sim3} \end{figure*} We also study the inter-arrival time of messages for the most congested hub. The most congested hub is identified by calculating the coefficient of betweenness centrality (CBC), which is defined as the ratio of the number of messages $N_{k}$ which pass through a given hub $k$ to the total number of messages which run simultaneously, i.e.\ $CBC=\frac{N_k}{R}$. Hubs with higher CBC values are more prone to congestion. We calculate the inter-arrival time of messages for the hub with the highest CBC.
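Equations~(1) and (2) can be evaluated numerically along the following lines (a sketch assuming \texttt{numpy}; we use the simple biased autocorrelation estimator and replace the continuous Fourier transform by a discrete one-sided transform):

```python
import numpy as np

def autocorrelation(p):
    """Normalised autocorrelation C(N_m, t) of a load time series, Eq. (1)."""
    p = np.asarray(p, dtype=float)
    p0 = p - p.mean()                                   # subtract the mean load
    c = np.correlate(p0, p0, mode="full")[len(p) - 1:]  # lags t >= 0
    c = c / np.arange(len(p), 0, -1)                    # average over t'
    return c / p0.var()                                 # so that C(0) = 1

def power_spectrum(p):
    """S(N_m, f): one-sided discrete analogue of Eq. (2)."""
    c = autocorrelation(p)
    return np.fft.rfftfreq(len(c)), np.abs(np.fft.rfft(c))
```

A least-squares fit of $\log S(f)$ against $\log f$ (excluding $f=0$) then estimates the spectral exponent $\alpha$.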
Inter-arrival times were studied earlier in the context of dynamics of information access on the web \cite{bara1} and also for human dynamics \cite{bara2}. For the baseline mechanism, the distribution of inter-arrival times is of the stretched exponential form, given by \begin{equation} P(t_{ia})=\exp(-b{t_{ia}}^{\delta}) \end{equation} where $\delta = 2.0$ for $N_{m}=50$ (Fig.\ref{fig_sim3}(b)). If the hubs in the lattice are connected by random assortative connections with two connections per hub, as shown in Fig.\ref{fig_sim3}(a), the inter-arrival times of messages show power-law behavior of the form \begin{equation} P(t_{ia})={t_{ia}}^{-\alpha} \end{equation} where $\alpha = 5.5$ for $N_{m}=50$ (Fig.\ref{fig_sim3}(b)). In the next section, we will discuss a double-ring variation of the $1-d$ network, and study another statistical characteriser, the travel time distribution. \section{The $1-d$ Double-Ring Communication Network} The $1-d$ ring lattice can be easily modified to the $1-d$ double-ring lattice as shown in Fig.\ref{fig_sim2}(a). Double-ring network topologies have been used earlier to model the head-direction system in animals \cite{seung} as well as for Local Area Networks (LAN). Our double-ring lattice consists of two concentric ring lattices (Fig.\ref{fig_sim2}(a)) of sizes $L_{i}$ and $L_{o}$ respectively, where $L_{i}$ is the size of the inner ring lattice and $L_{o}$ is the size of the outer ring lattice. The source-target pairs and the hubs are located in the outer lattice, with each hub having a connection to a node in the inner lattice. As before, a message is routed along the shortest path $S$-1-X-2-3-4-5-Y-6-$T$ in the clockwise direction as shown in Fig.\ref{fig_sim2}(a).
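Selecting the most congested hub by its CBC and extracting its inter-arrival times can be sketched as follows (a toy illustration: the mapping \texttt{arrivals\_by\_hub} from each hub to the arrival timestamps recorded there is our own assumed data structure, not part of the paper's code):

```python
def most_congested_hub(arrivals_by_hub, total_messages):
    """Return the hub with the highest coefficient of betweenness
    centrality, CBC = N_k / R, together with its CBC value and the
    inter-arrival times of the messages that passed through it."""
    cbc = {hub: len(times) / total_messages
           for hub, times in arrivals_by_hub.items()}
    hub = max(cbc, key=cbc.get)        # the hub most prone to congestion
    times = sorted(arrivals_by_hub[hub])
    inter_arrival = [b - a for a, b in zip(times, times[1:])]
    return hub, cbc[hub], inter_arrival
```

The histogram of \texttt{inter\_arrival} is then compared against the stretched-exponential and power-law forms above.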
\begin{figure*}[!t] \centerline{\subfloat[]{\includegraphics[width=2.0in]{1dtworing1.eps} \label{fig_3}} \hfil \subfloat[]{\includegraphics[width=3.0in]{cdt_TT_2ring1.eps} \label{fig_4}}} \caption{(a) Figure shows a $1-d$ double-ring lattice consisting of two concentric ring lattices. The outer ring consists of ordinary nodes with nearest neighbor connections and randomly distributed hubs ($X$, $Y$). Each hub has $2k$ nearest neighbors, where $k=2$. Each hub is connected to an ordinary node in the inner ring lattice. (b) The travel time distribution of messages flowing on this lattice shows a bimodal distribution.} \label{fig_sim2} \end{figure*} We study the travel time distribution of messages which are flowing on the lattice. The travel time is defined to be the time required for a message to travel from source to target, including the time spent waiting at congested hubs. A given number $N_{m}$ of source and target pairs start sending $N_{m}$ messages continuously at every $100$ time steps for a total run time of $30000$. In our simulation the travel time is calculated for a source-target separation of $D_{st}=2000$ on a $L_{i}=1000$ and $L_{o}=9000$ double-ring lattice, and averaged over $200$ hub realizations. The distribution of travel times of messages shows bimodal behaviour. The peak at higher travel times shows Gaussian behavior whereas the peak at lower travel times shows log-normal behavior. For the single $1-d$ ring lattice, a crossover from Gaussian to log-normal behavior was observed during the congestion-decongestion transition \cite{sat2}. Hence we conclude that the Gaussian peak at higher travel times for the double ring corresponds to the initial congestion in the system, whereas the log-normal peak at lower travel times corresponds to the later decongested stage. \section{Conclusion} To summarize, we have studied message transport on a model communication network of ordinary nodes and hubs, embedded on a ring lattice.
The properties of message traffic on such a lattice are largely consistent with those of real-world networks like the Internet. The power spectral analysis of the load time series data shows $1/f$-type fluctuations, confirming long-range correlations in the network load time series, as also seen in real-life networks. For the baseline mechanism the inter-arrival time distribution of messages shows a stretched exponential behavior. The behavior changes to a power law if random assortative connections are introduced in the lattice. We also studied a variation of the ring lattice, namely the double-ring lattice. The travel time distribution is bimodal, with one Gaussian peak and one log-normal peak. It would be interesting to see if our results have relevance in real-life communication networks like telephone networks, biological networks, etc. \section*{Acknowledgment} We thank CSIR, India for support under their extra-mural scheme. The authors also thank A. Prabhakar for helpful suggestions and comments.
\section{Introduction} Let $\psi:\mb{R}^+\to\mb{R}^+ $ be a real positive decreasing function with $\psi(r)\to{}0$ as $r\to\infty$. Such a function will be referred to as an \emph{approximation} function. An $m\times n$ matrix $X=(x_{ij})\in\mb{R}^{mn}$ is said to be \emph{$\psi$--approximable} if the system of inequalities \[ |q_{_{1}}x_{_{1}i}+q_{_{2}}x_{_{2}i}+\dots+q_{m}x_{mi}| \leq \psi(|\mbf{q}|)\text{\quad{}for\quad}1\leq i\leq n, \] is satisfied for infinitely many $\mbf{q}\in \mb{Z}^{m}\setminus\{\mbf{0}\}$. Here and throughout $|\mbf{q}|$ will denote the supremum norm of the vector $\mbf{q}$. Specifically, $\left\vert \mbf{q}\right\vert =\max \left\{ \left\vert q_{_{1}}\right\vert ,\left\vert q_{_{2}}\right\vert ,\dots,\left\vert q_{_{m}}\right\vert \right\}$. The system $q_{_{1}}x_{_{1}i}+q_{_{2}}x_{_{2}i}+\dots+q_{m}x_{mi}$ of $n$ linear forms in $m$ variables $q_{_{1}},q_{_{2}},\dots,q_{_{m}}$ will be written more concisely as $\mbf{q}X$, where the matrix $X$ is regarded as a point in $\mb{R}^{mn}.$ It is easily verified that $\psi$--approximability is unaffected by translation by integer vectors and we can therefore restrict attention to the unit cube $\mb{I}^{mn}:= [ -\frac{1}{2},\frac{1}{2}] ^{mn}$. The set of $\psi$--approximable points in $\mb{I}^{mn} $ will be denoted by $W_0( m,n;\psi)$; \[ W_0( m,n;\psi) :=\{ X\in \mb{I}^{mn}:|\mbf{q}X|<\psi(|\mbf{q}|)\text{\ for i.m.\ }\mbf{q}\in \mb{Z}^{m}\setminus \{\mbf{0}\}\}, \] where `i.m.' means `infinitely many'. In the case when $\psi(r)= r^{-\tau}$ for some $\tau>0$ we shall write $W_0\left( m,n;\tau \right)$ instead of $W_0\left( m,n;\psi \right)$. It is worth relating the above to the set of $\psi$--well approximable matrices as is often studied in classical Diophantine approximation. In such a setting studying the metric structure of the $\limsup$-set \[ W( m,n;\psi)=\{ X\in \mb{I}^{mn}:\| \mbf{q}X\| <\psi(|\mbf{q}|)\text{ for i.m.
\ } \mbf{q}\in \mb{Z}^{m}\setminus \{\mbf{0}\}\}, \] where $\|x\|$ denotes the distance of $x$ to the nearest integer vector, is a central problem and the theory is well established; see for example \cite{geo} or \cite{BDV}. Probably the main result in this setting is the Khintchine--Groshev theorem, which gives an elegant answer to the question of the size of the set $W(m,n;\psi)$. The result links the measure of the set to the convergence or otherwise of a series that depends only on the approximating function and is the template for many results in the field of metric number theory. It is clear then that the set $W_0\left( m,n;\psi \right)$ is an analogue of $W(m,n; \psi)$ with $|\cdot|$ replacing $\|\cdot\|$. The aim of this paper is to obtain the complete metric theory for the set $W_0(m,n;\psi)$. It is readily verified that $W_0( 1,n;\psi) = \{0\}$ as any $x=(x_1,x_2,\dots,x_n)\in{}W_0(1,n;\psi)$ must satisfy the inequality $\left\vert qx_{j}\right\vert < \psi(q) $ infinitely often. As $\psi(q)\to 0$ as $q\to\infty$ this is only possible if $x_j=0$ for all $j=1,2,\dots,n.$ Thus when $m=1$ the set $W_0(1, n; \psi)$ is a singleton and must have both zero measure and zero dimension. We will therefore assume that $m\geq 2.$ Before giving the main results of this paper we include a brief review of some of the work done previously on the measure theoretic structure of $W_0(m,n;\psi)$. The first result is due to Dickinson \cite{Dickinson}. \begin{Dickinson}\label{hd} When $\tau> \frac{m}{n}-1$ and $m\geq2$, \[ \dim(W_0(m,n;\tau))=(m-1)n+ \frac{m}{\tau+1}, \] and when $0<\tau\leq \frac{m}{n}-1,$ \[ \dim(W_0(m,n;\tau))=mn. \] \end{Dickinson} It turns out that Dickinson's original result is false when $m \leq n$. The correct statement is given in Corollary $5$, which is a consequence of Theorem \ref{Mumtaz2} proved below.
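To illustrate the formula in a case where it does hold, take for example $m=3$, $n=1$ and $\tau=5>\frac{m}{n}-1=2$. Then
\[
\dim(W_0(3,1;5))=(3-1)\cdot 1+\frac{3}{5+1}=\frac{5}{2},
\]
which lies strictly between $(m-1)n=2$ and the ambient dimension $mn=3$, whereas for $0<\tau\leq 2$ the set has full dimension $3$.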
To the best of our knowledge the only other result is due to Kemble \cite{kemble}, who established a Khintchine--Groshev type theorem for $W_0(m,1;\psi)$ under various conditions on the approximating function. We shall remove these conditions and prove the precise analogue of the Khintchine--Groshev theorem for $W_0(m,n;\psi)$. Finally, it is worth mentioning that the set is not only of number theoretic interest but appears naturally in operator theory; see \cite{perturbation} for further details. \emph{Notation.} To simplify notation the symbols $\ll$ and $\gg$ will be used to indicate an inequality with an unspecified positive multiplicative constant. If $a\ll b$ and $a\gg b$ we write $a\asymp b$, and say that the quantities $a$ and $b$ are comparable. For a set $A$, $|A|_k$ will be taken to mean the $k$-dimensional Lebesgue measure of the set $A$. \section{Statement of Main Results} The results of this paper depend crucially on the choice of $m$ and $n$. We shall see that when $m>n$, the metric theory is `independent' and for this particular case Dickinson's dimension result is correct. When $m\leq n$ the measure results depend on the independent case. Dickinson's result for this particular case is incorrect and we provide the correct result. In the following $\mcal{H}^{f}$ denotes $f$-dimensional Hausdorff measure which will be defined fully in \S\ref{hm}. Given an approximating function $\psi$ let $\Psi(r):=\frac{\psi(r)}{r}$.
\] \end{theorem} The requirement that $r^{-mn}f(r)$ be monotonic is a natural and not particularly restrictive condition. Note that if the dimension function $f$ is such that $r^{-mn}f(r)\to \infty$ as $r\to 0$ then $\mcal{H}^f(\mb{I}^{mn})=\infty$ and Theorem \ref{thm1} is the analogue of the classical result of Jarn\'{\i}k (see \cite{Jarnik}). Theorem \ref{thm1} implies analogues of both the Lebesgue and Hausdorff measure results familiar from classical Diophantine approximation. In the case when $f(r) := r^{mn}$ the Hausdorff measure $\mcal{H}^f$ is simply standard Lebesgue measure supported on $\mb{I}^{mn}$ and the result is the natural analogue of the Khintchine--Groshev theorem for $ W_0\left( m,n;\psi\right )$. \begin{cor}\label{cor1} Let $m>n$ and $\psi $ be an approximating function, then \begin{equation*} |W_0\left( m,n;\psi \right)|_{mn} = \begin{cases} 0\hspace{1cm} if\hspace{1cm} \sum \limits_{r=1}^{\infty}\psi (r)^{n}r^{m-n-1}<\infty, \\ 1\hspace{1cm} if\hspace{1cm} \sum \limits_{r=1}^{\infty}\psi (r)^{n}r^{m-n-1}=\infty. \end{cases} \end{equation*} \end{cor} If we now set $f:r\to r^s$ $(s>0)$ then Theorem \ref{thm1} reduces to the following $s$-dimensional Hausdorff measure statement, which is more discriminating than the Hausdorff dimension result of Dickinson. \begin{cor}\label{smesmgn}Let $m>n$ and $\psi$ be an approximating function. Let $s$ be such that $(m-1)n<s\leq mn.$ Then, \[ \mcal{H}^{s}\left( {W_0}\left( m,n;\psi \right) \right)= \left\{\begin{array}{lll} 0&\mbox{if}& \sum \limits_{r=1}^{\infty}\Psi(r)^{s-(m-1)n}r^{m-1}<\infty , \\ \mcal{H}^s(\mb{I}^{mn})&\mbox{if}& \sum \limits_{r=1}^{\infty}\Psi(r)^{s-(m-1)n}r^{m-1}=\infty. \end{array}\right.
\] \end{cor} Under the conditions of Corollary~\ref{smesmgn} it follows from the definition of Hausdorff dimension that \[ \dim\left(W_0\left( m,n;\psi \right) \right)=\inf \left\{s:\sum \limits_{r=1}^{\infty}\Psi(r)^{s-(m-1)n}r^{m-1}<\infty \right\}, \] and in particular the following dimension result for $W_0(m,n;\tau)$ holds. \begin{cor}\label{dimmln} Let $m>n$ and $\tau>\frac{m}{n}-1$. Then \[ \dim\left(W_0\left( m,n;\tau \right) \right)=(m-1)n+\frac{m}{\tau+1}. \] \end{cor} Theorem~\ref{thm1} establishes the metric theory for $W_0\left( m,n;\psi \right) $ when $m>n.$ For the cases when $m\leq{}n$ the statement of Theorem~\ref{thm1} changes somewhat. The sum which determines the $f$--measure remains the same but the conditions on the dimension functions are different. This is due to the fact that the set $W_0(m,n;\psi)$ can be shown to lie in a manifold $\Gamma\subset\mathbb{R}^{mn}$ of dimension $(m-1)(n+1)$, a fact we prove later in \S\ref{proofThm2}. In light of this remark, an upper bound for $\dim{}W_0\left( m,n;\psi \right)$ follows immediately. More specifically, \[ \dim W_0(m,n;\psi)\leq (m-1)(n+1). \] \begin{theorem}\label{Mumtaz2} Let $m\leq n$ and $\psi$ be an approximating function. Let $f$ and $g$ be dimension functions with $g(r)=r^{-(n-m+1)(m-1)}f(r)$. Assume that $r^{-(m-1)(n+1)}f(r)$ is monotonic and $r^{-(m-1)n}f(r)$ increasing. Then $\mcal{H}^{f}(W_0(m,n;\psi))=0$ if \[ \sum_{r=1}^\infty{}f(\Psi(r))\Psi(r)^{-(m-1)n}r^{m-1}<\infty. \] If \[ \sum_{r=1}^\infty{}f(\Psi(r))\Psi(r)^{-(m-1)n}r^{m-1}=\infty, \] then \[ \mcal{H}^{f}(W_0(m,n;\psi)) = \begin{cases} \infty & \text{\quad{}if\quad{}} r^{-(m-1)(n+1)}f(r)\to\infty \text{ as } r\to{}0, \\ \mathcal{H}^f(\Gamma) & \text{\quad{}if\quad{}} r^{-(m-1)(n+1)}f(r)\to{}C \text{ as } r\to{}0, \end{cases} \] where $C>0$ is some fixed constant.
\end{theorem} It is worth noting that for dimension functions $f$ such that $r^{-(m-1)(n+1)}f(r)\to{}C>0$ as $r\to{}0$ the measure $\mathcal{H}^f$ is comparable to standard $(m-1)(n+1)$-dimensional Lebesgue measure and in the case when $f(r)=r^{(m-1)(n+1)}$, we obtain the following analogue of the Khintchine-Groshev theorem. \begin{cor}\label{kgmleqn} Let $m\leq n$ and $\psi$ be an approximating function and assume that the conditions of Theorem~\ref{Mumtaz2} hold for the dimension function $f(r):=r^{(m-1)(n+1)}$. Then \[ \mu(W_0 (m,n;\psi))=\begin{cases} 0\hspace{1cm} if\hspace{1cm} \sum \limits_{r=1}^{\infty}\psi (r)^{m-1}<\infty, \\ 1\hspace{1cm} if\hspace{1cm} \sum \limits_{r=1}^{\infty}\psi (r)^{m-1}=\infty, \end{cases} \] where $\mu$ is the normalised measure on the manifold $\Gamma$. \end{cor} As above, if we set $f(r)=r^s$ we obtain the $m\leq n$ analogue of Corollary~\ref{smesmgn}. \begin{cor}\label{smesln}Let $m\leq n$ and $\psi$ be an approximating function. Let $s$ be such that $(m-1)n<s\leq (m-1)(n+1)$ and let $g:r\to r^{s-(n-(m-1))(m-1)}$ be a dimension function. Then, \[ \mcal{H}^{s}(W_0(m,n;\psi))=\left\{\begin{array}{lll} 0&\mbox{if}& \sum \limits_{r=1}^{\infty}\Psi(r)^{s-(m-1)n}r^{m-1}<\infty, \\ \mcal{H}^s(\Gamma)&{if}& \sum \limits_{r=1}^{\infty}\Psi(r)^{s-(m-1)n}r^{m-1}=\infty. \end{array}\right. \] \end{cor} Further, under the same conditions as Corollary~\ref{smesln}, but with the approximation function $\psi(x)= x^{-\tau}$, we have the following result: \begin{cor}\label{dimWmleqn} Let $ m\leq n$ and $\tau> \frac{m}{m-1}-1.$ Then \[ \dim(W_0(m,n;\tau))=(m-1)n+ \frac{m}{\tau+1}, \] and when $0<\tau\leq \frac{m}{m-1}-1,$ \[ \dim(W_0(m,n;\tau))=(m-1)(n+1). \] \end{cor} The paper is organized as follows. In Section~\ref{aux}, we give the definitions of Hausdorff measure and ubiquity, which is the main tool for proving Theorem~\ref{thm1}, in a manner appropriate to the setting of this paper. 
Section~\ref{aux} also includes the statement of the `Slicing' lemma (Lemma~\ref{slicing}) which is used to prove Theorem~\ref{Mumtaz2}. The paper continues with the proof of Theorem~\ref{thm1} in \S~\ref{sthm1}. As is common when proving such `zero-full' results the proof is split into two parts; the convergence case and the divergence case. We conclude the paper with the proof of Theorem~\ref{Mumtaz2}. \section{Basic Definitions and Auxiliary Results}\label{aux} In this section we give definitions of some fundamental concepts along with some auxiliary results which will be needed in the proofs of Theorems~\ref{thm1} and \ref{Mumtaz2}. \subsection{Hausdorff Measure and Dimension} \label{hm} Below we give a brief introduction to Hausdorff $f$-measure and dimension. For further details see \cite{Falc2}. Let $f:\mb{R}^+\rightarrow \mb{R}^+$ be an increasing continuous function such that $f(r)\to 0$ as $r\to 0$. Such a function $f$ is referred to as a \emph{dimension function}. We are now in a position to define the Hausdorff $f$--measure $\mathcal{H}^f(X) $ of a set $X\subset \mb{R}^n$. Let $B$ be a (Euclidean) ball in $\mb{R}^n$. That is a set of the form \[ B = \{ x\in\mb{R}^n: |x - c |_2 < \delta \} \] for some $c\in\mb{R}^n$ and some $\delta>0$. The \textit{diameter} $\mathrm{diam}(B)$ of $B$ is \[ \mathrm{diam}(B) := \sup\{ |x-y|_2: x,y\in{}B \}. \] Now for any $\rho>0$ a countable collection $\{B_i\}$ of balls in $\mb{R}^n$ with diameters $\mathrm{diam} (B_i)\le \rho$ such that $X\subset \bigcup\limits_i B_i$ is called a $\rho$--cover for $X$. Define \begin{equation*} \mcal{H}_\rho^f(X)=\inf\left\{\sum\limits_if(\mathrm{diam}(B_i))\;:\;\{B_i\}% \mbox{ is a $\rho$-cover for }X\right\}, \end{equation*} where the infimum is taken over all possible $\rho$--covers of $X$. The Hausdorff $f$-measure of $X$ is defined to be \[ \mcal{H}^f(X)=\lim\limits_{\rho\to 0}\mcal{H}_\rho^f(X). 
\] In the particular case when $f(r):=r^s$ $(s>0)$, we write $\mcal{H}^s (X)$ for $\mcal{H}^f$ and the measure is referred to as $s$--dimensional Hausdorff measure. The Hausdorff dimension of a set $X$ is denoted by $\dim(X)$ and is defined as follows, \[ \dim(X):=\inf\{s\in \mb{R}^+\;:\; \mcal{H}^s(X)=0\}=\sup\{s\in \mb{R}^+\;:\; \mcal{H}^s(X)=\infty\}. \] Note that the value of $\dim(X)$ is unique. At the critical exponent $s = \dim X$ the quantity ${\mcal H}^s(X) $ is either zero, infinite or strictly positive and finite. In the latter case, i.e.\ when \[ 0< {\mcal H}^s(X)<\infty, \] the set $X$ is said to be an $s$-set. \subsection{Ubiquitous Systems}\label{ubiq} To make this article as self-contained as possible we describe the main tool used in proving the divergence part of Theorem \ref{thm1}, the idea of a locally ubiquitous system. The set-up presented below is simplified for the current problem. The general framework is much more abstract and full details can be found in \cite{geo} and \cite{BDV}. Let $\Re =\left\{ R_{\mbf{q} }:\mbf{q} \in \mb{Z}^m\setminus\{\mbf{0}\}\right\} $ be the family of subsets $R_{\mbf{q} }:=\{X\in\mb{I}^{mn}:\mbf{q}X=\mbf{0}\}$. The sets $R_{\mbf{q}}$ will be referred to as \emph{resonant sets}. Let the function $\beta:\mb{Z}^m\setminus\{\mbf{0}\}\to\mb{R}^+:\mbf{q}\to |\mbf{q}|$ attach a weight to the resonant set $R_{\mbf{q}}.$ Now, given an approximating function $\psi$ and $R_{\mbf{q} }$, let \begin{equation*} \Delta\left( R_{\mbf{q}},\Psi (\vert\mbf{q}\vert )\right) :=\left\{ X\in \mb{I}^{mn}: \text{dist}\left( X,R_{\mbf{q}}\right) \leq \frac{\psi (\left\vert \mbf{q}\right\vert )}{\left\vert \mbf{q}\right\vert }\right\} \end{equation*} where $\mathrm{dist}(X,R_{\mbf{q}}):=\inf \{|X-Y|:Y\in R_{\mbf{q}}\}.$ Thus $\Delta\left( R_{\mbf{q}},\Psi (\vert\mbf{q}\vert )\right)$ is a $\Psi$--neighbourhood of $R_{\mbf{q}}$.
Notice that in the case when the resonant sets are points the sets $\Delta\left( R_{\mbf{q}},\Psi (\vert\mbf{q}\vert )\right)$ are simply balls centred at resonant points. Let \[ \Lambda(m,n;\psi) =\{X\in\mb{I}^{mn}:X\in \Delta\left( R_{\mbf{q}},\Psi (\vert\mbf{q}\vert )\right) \text{ for i.m.\ } \mbf{q}\in\mb{Z}^m\setminus\{\mbf{0}\}\}. \] The set $\Lambda(m,n; \psi)$ is a `limsup' set. It consists entirely of points in $\mb{I}^{mn}$ which lie in infinitely many of the sets $\Delta\left( R_{\mbf{q}},\Psi (\vert\mbf{q}\vert )\right)$. This is apparent if we restate $\Lambda(m,n;\psi)$ in a manner which emphasises its limsup structure. Fix $k>1$ and for any $t\in\mb{N}$, define \begin{equation} \label{limsup1} \Delta(\psi,t):= \bigcup_{k^{t-1}\leq |\mbf{q}|\leq{}k^t}\Delta( R_{\mbf{q} },\Psi(|\mbf{q}|)). \end{equation} It follows that \begin{equation} \label{limsup2} \Lambda(m,n;\psi) = \limsup_{t\rightarrow \infty }\Delta( \psi ,t) = \bigcap_{N=1}^{\infty}\bigcup_{t=N}^\infty{}\Delta( \psi,t). \end{equation} The key point by which ubiquity will be utilised is the fact that the sets $W_0(m,n;\psi)$ and $\Lambda(m,n;\psi)$ actually coincide. We now move on to the formal definition of a locally ubiquitous system. As stated above, the definition given below is in a much simplified form suitable to the problem at hand. In the more abstract setting given in \cite{BDV} there are specific conditions on both the measure on the ambient space and its interaction with neighbourhoods of the resonant set which must be shown to hold. These conditions are not stated below as they hold trivially for Lebesgue measure, the measure on our ambient space $\mathbb{I}^{mn}$, and stating the conditions would complicate the discussion somewhat. Nevertheless, the reader should be aware that in the more abstract notion of ubiquity these extra conditions exist and need to be established.
Let $\rho :\mb{R}^{+}\rightarrow\mb{R}^{+}$ be a function with $\rho \left( r\right) \rightarrow 0$ as $r\rightarrow \infty $ and let \[ \Delta \left( \rho ,t\right) := \underset{\mbf{q} \in J\left( t\right) }{\bigcup }\Delta\left( R_{\mbf{q} },\rho \left( k^t\right) \right) \] where $J(t)$ is defined to be the set \[ J(t):= \left\{\mbf{q}\in\mb{Z}^m\setminus\{\mbf{0}\}: |\mbf{q}|\leq k^t\right\} \] for a fixed constant $k>1$. \begin{defn} Let $B:=B\left( X,r\right) $ be an arbitrary ball with centre $X\in \mb{I}^{mn} $ and $r\leq r_{o}.$ Suppose there exists a function $\rho$ and an absolute constant $\kappa>0$ such that \[ |B\cap \Delta \left( \rho ,t\right)|_{mn} \geq \kappa | B|_{mn} \text{ for }t\geq t_{o}\left( B\right). \] Then the pair $\left( \Re ,\beta \right) $ is said to be a \emph{locally ubiquitous} system relative to $\left( \rho ,k\right).$ \end{defn} Loosely speaking, the definition of local ubiquity says that the set $\Delta(\rho,t)$ locally approximates the underlying space $\mb{I}^{mn}$ in terms of the Lebesgue measure. The function $\rho$ will be referred to as the {\em ubiquity function}. The actual values of the constants $\kappa$ and $k$ in the above definition are irrelevant; it is their existence that is important. In practice the local ubiquity of a system can be established using standard arguments concerning the distribution of the resonant sets in $\mb{I}^{mn}$, from which the function $\rho$ arises naturally. Clearly if $ | \Delta\left( \rho ,t\right)|_{mn} \to 1 \text{ as }t\to{}\infty$ then $\left( \Re ,\beta \right) $ is locally ubiquitous. To see this, let $B$ be any ball and assume without loss of generality that $|B|_{mn} =\epsilon >0.$ Then for $t$ sufficiently large, $$|\Delta\left( \rho ,t\right)|_{mn} > 1 -\epsilon /2.$$ Hence $|B\cap \Delta \left( \rho ,t\right)|_{mn}\geq \epsilon /2$ as required.
Given a positive real number $k>1$, a function $f$ will be said to be \textsl{$k$-regular} if there exists a positive constant $\lambda <1$ such that for $t$ sufficiently large $$ f(k^{t+1}) \leq \lambda f(k^t). $$ Finally, we set $\gamma=\mathrm{dim}(R_{\mathbf{q}})$, the common (Euclidean) dimension of the resonant sets $R_{\mathbf{q}}$. The following theorem is a simplified version of Theorem 1 from \cite{geo}. \begin{theorem}[BV] \label{BV} Suppose that $(\Re ,\beta)$ is locally ubiquitous relative to $(\rho, k)$ and $\psi$ is an approximation function. Let $f$ be a dimension function such that $r^{-\delta}f(r)$ is monotonic. Furthermore suppose that $r^{-\gamma}f(r)$ is increasing and $\rho$ is $k$--regular. Then \begin{equation}\label{sumcon} \mcal{H}^f(W_0(m,n;\psi)) = \mcal{H}^f(\mb{I}^{mn}) \quad \textrm{if} \quad \sum_{t=1}^\infty \frac{f(\Psi(k^t))\Psi(k^t)^{-\gamma}}{\rho(k^t)^{\delta-\gamma}}=\infty. \end{equation} \end{theorem} \subsection{Slicing}\label{auxi} We now state a result which is the key ingredient in the proof of Theorem \ref{Mumtaz2}. The result was used in \cite{slicing} to prove the Hausdorff measure version of W. M. Schmidt's inhomogeneous linear forms theorem in metric number theory. The authors refer to the technique as ``slicing''. We will merely state the result. For a more detailed discussion and proof see \cite{slicing} or \cite{Mat}. However, before we state the theorem it is necessary to introduce a little notation. Suppose that $V$ is a linear subspace of $\mb{I}^k$; $V^{\perp}$ will be used to denote the linear subspace of $\mb{I}^k$ orthogonal to $V$. Further, $V+a:=\left\{v+a:v\in V\right\}$ for $a\in V^{\perp}$. \begin{lemma}\label{slicing} Let $l, k \in \mb{N}$ be such that $l\leq k$ and let $f \;\textrm{and}\; g:r\to r^{-l}f(r)$ be dimension functions. Let $B\subset \mb{I}^k$ be a \emph{Borel} set and let $V$ be a $(k-l)$--dimensional linear subspace of $\mb{I}^k$.
If for a subset $S$ of $V^{\perp}$ of positive $\mcal{H}^l$ measure $$\mcal{H}^{g}\left(B\cap(V+b)\right)=\infty \hspace{.5cm} \forall \; b\in S,$$ \noindent then $\mcal{H}^{f}(B)=\infty.$ \end{lemma} We are now in a position to begin the proofs of Theorems~\ref{thm1} and \ref{Mumtaz2}. \section{The Proof of Theorem \ref{thm1}}\label{sthm1} As stated above, the proof of Theorem~\ref{thm1} is split into two parts: the convergence case and the divergence case. We begin with the convergence case as this is more straightforward than the divergence case. \subsection {The Convergence Case}\label{con} Recall that in the statement of Theorem~\ref{thm1} we assumed that $m>n$ and we imposed some conditions on the dimension function $f$. As it turns out these conditions are not needed in the convergence case and we can state and prove a much cleaner result which has the added benefit of also implying the convergence case of Theorem \ref{Mumtaz2}. \begin{theorem}\label{conv} Let $\psi$ be an approximating function and let $f$ be a dimension function. If \[ \sum \limits_{r=1}^{\infty}f(\Psi (r))\Psi(r)^{-(m-1)n}r^{m-1}<\infty, \] then \[ \mcal{H}^f\left(W_0\left( m,n;\psi \right)\right)=0. \] \end{theorem} Obviously Theorem~\ref{conv} implies the convergence cases of Theorems~\ref{thm1} and \ref{Mumtaz2}. \begin{proof} To prove Theorem~\ref{conv} we make use of the natural cover of $W_0\left( m,n;\psi \right)$ given by Equations~\eqref{limsup1} and \eqref{limsup2}. It follows almost immediately that for each $N\in \mb{N}$ the family \begin{equation*} \left\{ \underset{R_{\mbf{q}}:\left\vert \mbf{q}\right\vert =r}{\bigcup }\Delta\left( R_{\mbf{q}},\Psi (\vert\mbf{q}\vert )\right) :r=N,N+1,\dots\right\} \end{equation*} is a cover for the set $W_0\left( m,n;\psi \right) $. That is, \[ W_0\left( m,n;\psi \right) \subset \underset{r>N}{{\bigcup }}\underset{|\mbf{q}| =r}{{\bigcup }}\Delta( R_{\mbf{q}},\Psi(|\mbf{q}|)) \] for any $N\in\mathbb{N}$.
Now, for each resonant set $R_{\mbf{q}}$ let $\Delta (q)$ be a collection of $mn$-dimensional closed hypercubes $C$ with disjoint interiors and side length $\Psi (|\mbf{q}|)$ such that \[ C{}\bigcap \bigcup_{|\mathbf{q}|=r} \Delta( R_{\mbf{q}},\Psi(|\mbf{q}|)) \neq \emptyset \] and \[ \Delta(R_{\mbf{q}},\Psi (| \mbf{q}|)) \subset \underset{C\in \Delta (q)}{{\bigcup }}C. \] Then \[ \# \Delta(q)\ll(\Psi(|\mbf{q}|)) ^{-(m-1)n}, \] where $\#$ denotes cardinality. Note that \begin{equation*} W_0\left( m,n;\psi \right) \subset \underset{r>N}{{\bigcup }}\underset{\left\vert \mbf{q}\right\vert =r}{{\bigcup }}\Delta\left( R_{\mbf{q}},\Psi (\left\vert \mbf{q}\right\vert)\right) \subset \underset{r>N}{{\bigcup }}\underset{\Delta (q):\left\vert \mbf{q}\right\vert =r}{{\bigcup }}\underset{C\in \Delta (q)}{{\bigcup }}C. \end{equation*} It follows on setting $\rho(N)=\psi(N)$ that \begin{eqnarray*} \mcal{H}_{\rho}^f\left(W_0\left( m,n;\psi \right)\right) &\leq & \sum\limits_{r>N} \sum\limits_{\Delta (q):\left\vert \mbf{q}\right\vert =r} \sum \limits_{C\in \Delta (q)}f(\Psi (\left\vert \mbf{q}\right\vert )) \\ &\ll &\sum \limits_{r>N} r^{m-1}f\left(\Psi (r)\right)\Psi (r) ^{-(m-1)n}\to 0 \hspace{.5cm}\text{as} \hspace{.5cm}\rho\to 0, \end{eqnarray*} and thus from the definition of $\mcal{H}^{f}$--measure that $\mcal{H}^f(W_0\left( m,n;\psi \right))=0$, as required. \end{proof} \subsection{The Divergence Case}\label{div} When $m>n$, the divergence part of Theorem~\ref{thm1} relies on the notion of ubiquity and primarily Theorem~\ref{BV}. To use ubiquity we must show that $\left( \Re ,\beta \right) $ is locally ubiquitous with respect to $\left( \rho ,k\right)$ for a suitable ubiquity function $\rho$. For the sake of simplicity we fix $k=2$. To establish ubiquity we need two technical lemmas. The first is due to Dickinson~\cite{Dickinson} and is an analogue of Dirichlet's theorem. The second is a slight modification, again of a result of Dickinson from the same paper.
The key difference is the introduction of a function $\omega$ instead of $\log$. We prove only the second result here; the reader is referred to the previously mentioned paper for the proof of Lemma~\ref{dir}. \begin{lemma}\label{dir} For each $X\in \mb{I}^{mn}$, there exists a non-zero integer vector $\mbf{q}$ in $ \mb{Z}^{m}$ with $\left\vert \mbf{q}\right\vert \leq 2^{t}\left( t\in \mb{N} \right) $ such that \begin{equation*} \left\vert \mbf{q}X\right\vert <m\left( 2^{t}\right) ^{-\frac{m}{n}+1}. \end{equation*} \end{lemma} \begin{lemma} Let $\omega$ be a positive real increasing function such that $\frac{1}{\omega \left( t\right) }\rightarrow 0$ as $t\rightarrow \infty $ and such that for any $C>1$ and $t$ sufficiently large $\omega \left( 2t\right) <C\omega \left( t\right) .$ Then the family $(\Re,\beta)$ is locally ubiquitous with respect to the function $\rho : \mb{N} \rightarrow \mb{R} ^{+}$ where $\rho(t)=m(2^t)^{-\frac{m}{n} }\omega(t)$. \end{lemma} \begin{proof} Throughout this proof $\mbf{q}$ will refer to those integer vectors which satisfy the conclusion of Lemma~\ref{dir}. Note that a simple calculation will establish the fact that $\rho$ is $2$-regular for $t$ sufficiently large. Define now the sets \[ E(t) =\{ X\in \mb{I}^{mn}:|\mbf{q}|<\frac{2^{t}}{\omega(t) }\} \] and \[ \Delta(t) = \{ X\in \mb{I}^{mn}: | X-\partial \mb{I}^{mn} | \geq 2^{-t}\} \setminus E(t), \] where $\partial \mb{I}^{mn}$ denotes the boundary of the set $\mathbb{I}^{mn}$. \noindent Then \begin{equation*} E(t) \subseteq \underset{1\leq r\leq \frac{2^{t}}{\omega \left( t\right) }}{{\bigcup }}\ \underset{\left\vert \mbf{q}\right\vert =r}{{\bigcup }}\left\{ X\in \mb{I}^{mn}:\left\vert \mbf{q}X\right\vert <m\left( 2^{t}\right) ^{\frac{-m}{n}+1}\right\}.
\end{equation*} \noindent Therefore% \begin{eqnarray*} \left\vert E\left( t\right) \right\vert_{mn} &\leq&\sum \limits_{1\leq r\leq \frac{% 2^{t}}{\omega \left( t\right) }} \sum \limits_{\left\vert \mbf{q}% \right\vert =r}\frac{m^n\left( 2^{t}\right) ^{-m+n}}{\left\vert \mbf{q}\right\vert _{2}^{n}} \\ &\ll &\left( 2^{t}\right) ^{-m+n}\sum \limits_{1\leq r\leq \frac{2^{t}}{\omega \left( t\right) }} r^{m-n-1} \\ &\ll &\left( 2^{t}\right) ^{-m+n}\frac{2^{t}}{\omega \left( t\right) }\left( \frac{2^{t}}{\omega \left( t\right) }\right) ^{m-n-1} \\ &=&\left( \omega \left( t\right) \right) ^{-m+n}. \end{eqnarray*} \noindent Therefore, since $m>n, \lim\limits_{t\rightarrow \infty }\left\vert E\left( t\right) \right\vert_{mn} \rightarrow 0$ and $\underset{t\rightarrow \infty }{\lim}% \left\vert \mb{I}^{mn}\backslash \Delta \left( t\right) \right\vert_{mn} \rightarrow 0. $ Now to show that $\left\vert \left( \Delta\left( \rho ,t\right) \right) \right\vert_{mn} \rightarrow 1$ as $t\rightarrow \infty ,$ it would be enough to show that $\Delta \left( t\right) \subseteq \Delta \left( \rho ,t\right) .$ For this let $X\in \Delta \left( t\right) \Rightarrow X\notin E\left( t\right) $ and let $\overset{\sim }{\mbf{q}}$ be from lemma \ref{dir}, \begin{equation*} \frac{2^{t}}{\omega \left( t\right) }\leq \left\vert \overset{\sim }{\mbf{% q}}\right\vert \leq 2^{t}. \end{equation*} \noindent By definition $\left\vert \overset{\sim }{\mbf{q}}\right\vert =\left\vert \overset{\sim }{q}_{i}\right\vert $ for some $1\leq i\leq m.$ Let $\delta _{j}=\frac{-\overset{\sim }{\mbf{q}}\cdot \mbf{x}^{\left( j\right) }}{\left\vert \overset{\sim }{q}_{i}\right\vert },j=1,2\dots,n$ so that $\overset{\sim }{\mbf{q}}\cdot \left( \mbf{x}^{\left( j\right) }+\delta _{j}\mbf{e}^{\left( i\right) }\right) =0,$ where $% \mbf{e}^{\left( i\right) }$ denotes the i'th basis vector. 
Also \begin{equation*} \left\vert \delta _{j}\right\vert =\left\vert \frac{-\overset{\sim }{\mbf{q}}\cdot \mbf{x}^{\left( j\right) }}{\left\vert \overset{\sim }{q}_{i}\right\vert }\right\vert \leq m\left( 2^{t}\right) ^{\frac{-m}{n}}\omega \left( t\right). \end{equation*} \noindent Therefore $U=\left( \mbf{x}^{\left( j\right) }+\delta _{j}\mbf{e}^{\left( i\right) }\right) =\left( \mbf{x}^{\left( 1\right) }+\delta _{1}\mbf{e}^{\left( i\right) },\mbf{x}^{\left( 2\right) }+\delta _{2}\mbf{e}^{\left( i\right) },\dots,\mbf{x}^{\left( n\right) }+\delta _{n}\mbf{e}^{\left( i\right) }\right) $ is a point in $R_{\overset{\sim }{\mbf{q}}}$ and $\left\vert X-U\right\vert \leq m\left( 2^{t}\right) ^{\frac{-m}{n}}\omega \left( t\right) =\rho \left( t\right) .$ Hence $X\in \Delta \left( \rho ,t\right) =\underset{2^{t-1}<\left\vert \mbf{q}\right\vert \leq 2^{t}}{{\bigcup }}\Delta\left( R_{\mbf{q}},\rho \left( t\right) \right) $ so that \begin{equation*} \left\vert \left( \Delta\left( \rho ,t\right) \right) \right\vert_{mn} \rightarrow 1\text{ \ as \ }t\rightarrow \infty. \end{equation*} \end{proof} We are now almost in a position to apply Theorem~\ref{BV}. To this end consider the sum \begin{equation}\label{sumcomp} \sum \limits_{t=1}^{\infty}f\left(\Psi \left( k^{t}\right) \right) \left(\frac{\Psi(k^t)}{\rho(k^t)}\right)^{\delta-\gamma}\Psi(k^t)^{-\delta}, \end{equation} which is comparable to \begin{equation}\label{sumcomp2} \sum_{t=1}^{\infty}f(\Psi ( 2^{t}) ) \Psi(2^t)^{-(m-1)n}(2^t)^{m}\omega(t)^{-n}. \end{equation} Assume that $\psi:\mb{R}^{+}\rightarrow\mb{R}^{+}$ is a monotonic function, that $\alpha, \beta \in\mb{R}$ and $k>1$, and let $f$ be a dimension function. It is straightforward to show that the convergence or divergence of the sums \begin{equation*} \sum\limits_{t=1}^{\infty} k^{t\alpha }f(\psi \left( k^{t}\right))\psi \left( k^{t}\right)^{\beta} \text{ \ \ \ and \ \ \ }\sum \limits_{r=1}^{\infty}r^{\alpha -1}f(\psi \left( r\right))\psi \left( r\right)^{\beta} \end{equation*} coincide.
By virtue of this fact, the sum in Equation~\eqref{sumcomp} is the same as \begin{equation}\label{sumequ} \sum \limits_{r=1}^{\infty}f\left( \Psi \left( r\right) \right) \Psi\left( r\right) ^{-(m-1)n}r^{m-1}\omega \left( r\right) ^{-n}. \end{equation} To obtain the precise statement of Theorem~\ref{thm1} we need to remove the $\omega $ factor from the above. To do this we choose $\omega $ in such a way that the sum \[ \sum \limits_{r=1}^{\infty}f\left( \Psi \left( r\right) \right) \Psi\left( r\right) ^{-(m-1)n}r^{m-1}\omega \left( r\right) ^{-n} \] will converge (\textit{respec.} diverge) if and only if the sum \begin{equation}\label{eq:sum2} \sum \limits_{r=1}^{\infty}f\left( \Psi \left( r\right) \right) \Psi\left( r\right) ^{-(m-1)n}r^{m-1} \end{equation} converges (\textit{respec.} diverges). This is always possible. Firstly, note that if the sum in Equation~\eqref{sumequ} diverges then so does the sum in Equation~\eqref{eq:sum2}. On the other hand, if the sum in Equation~\eqref{eq:sum2} diverges, then we can find a strictly increasing sequence of positive integers $\{r_i\} _{i\in \mb{N}}$ such that \[ \sum_{r_{i-1}\leq r\leq r_{i}}f( \Psi(r) )\Psi(r)^{-(m-1)n}r^{m-1}>1 \] and $r_{i}>2r_{i-1}$. Now simply define $\omega$ to be the step function $\omega(r) =i^{\frac{1}{n}}$ for $r_{i-1}\leq r\leq r_{i}$; then $\omega$ satisfies the required properties, and each block $r_{i-1}\leq r\leq r_{i}$ contributes more than $i^{-1}$ to the sum in Equation~\eqref{sumequ}, which therefore diverges. This completes the proof of Theorem~\ref{thm1}. \section{Proof of Theorem \ref{Mumtaz2}}\label{proofThm2} In view of Theorem~\ref{conv} we need only prove the divergence part of Theorem~\ref{Mumtaz2}. The proof will be split into two sub-cases. The first, which we refer to as the ``infinite measure'' case, is for dimension functions $f$ such that $r^{-(m-1)(n+1)}f(r)\to\infty$. The second case corresponds to $f$ which satisfy $r^{-(m-1)(n+1)}f(r)\to{}C$ for some constant $C>0$, in which case the measure is comparable to $(m-1)(n+1)$-Lebesgue measure and we call this case the ``finite measure'' case.
We begin the proof of Theorem~\ref{Mumtaz2} with the key observation that if $m\leq n$, $W_0(m,n;\psi)$ lies in a manifold of dimension at most $(m-1)(n+1)$. Consider first the case when $m=n$. Take any $X\in{}W_0(m,m;\psi)$; then the column vectors of $X$ are linearly dependent. To prove this, assume, to the contrary, that the column vectors are linearly independent. Since $X$ is a member of $W_0(m, n; \psi)$ there exist infinitely many $\mbf{q}$ such that \[ |\mbf{q}X|<\psi(|\mbf{q}|). \] Set $\mbf{q}X=\theta$, where $|\theta|<\psi(|\mbf{q}|)$; as all the column vectors are linearly independent, $X$ is invertible. Thus \[ |\mbf{q}|=|\theta X^{-1}| \] and it follows that \[ 1\leq |\mbf{q}|=|\mbf{q} X X^{-1}|\leq C_2(X)\psi(|\mbf{q}|)\to 0 \text{\quad{}as\quad} |\mbf{q}|\to\infty, \] which is clearly impossible. Therefore the column vectors of $X$ must be linearly dependent and so $\det{}X=0$. This in turn implies that $X$ lies on some surface defined by the multinomial equation $\det{}Y=0$ where $Y\in\mathbb{I}^{m^2}$. As this equation defines a co-dimension $1$ manifold in $\mathbb{I}^{m^2}$, at most $m^{2}-1$ independent variables are needed to fully specify $X$. The above argument is essentially that needed to prove the more general case when $m\leq{}n$. We prove the result only for the case when $n=m+1$ as the general case follows with a straightforward modification of the argument given. Take any $X\in{}W_0(m,m+1;\psi)$ and think of $X$ as an $m$ by $m+1$ matrix as above. Let $x$ be the first column vector, $x_2$ the $(m+1)$--th column vector and $X^\prime$ the $m$ by $m-1$ matrix formed by taking the remaining $m-1$ columns of $X$. Further let $X_1$ be the $m\times{}m$ matrix with first column $x$ and remaining columns made up of $X^\prime$. Similarly let $X_2$ be the $m\times{}m$ matrix with first $m-1$ columns the same as $X^\prime$ and final column $x_2$.
Using the same argument as above we claim that both these sub-matrices of $X$ are in fact non-invertible and so each sub-matrix lies on a co-dimension $1$ manifold $\Gamma_i$ determined by the equation $\det{}Y_i=0$ with $i=1,2$. Here $Y_1$ is an $m\times{}m$ matrix consisting of all but the final $m$ variables of an arbitrary element $Y\in\mathbb{I}^{m(m+1)}$ and $Y_2$ is similarly defined but the first $m$ variables are now removed. Now $X$ must lie in the intersection of the two manifolds $\Gamma_1$ and $\Gamma_2$, say $\Gamma$. This is a co-dimension $2$ manifold and the result is proved. With the above observation in mind we begin the proof of Theorem~\ref{Mumtaz2} in earnest by defining the set \begin{equation} W_0\left( m,n;c\psi \right) :=\left\{ X\in \mb{I} ^{mn}:\left\vert \mbf{q}X\right\vert <c\psi (\left\vert \mbf{q}\right\vert )\text{ for i.m. }\mbf{q}\in \mb{Z}^{m}\setminus\{\mbf{0}\}\right\}, \end{equation} \noindent where $c=\max(\frac{m-1}{2},1).$ It is clear then that $W_0\left(m,n;\psi \right)\subseteq W_0\left( m,n;c\psi \right).$ Let $A$ be the set of points of the form \[ \left(X^{(1)},X^{(2)},\dots,X^{(m-1)},\sum\limits_{j=1}^{m-1} a_{j}^{(1)}{X^{(j)}} ,\dots, \sum\limits_{j=1}^{m-1} a_{j}^{(n-m+1)}X^{(j)}\right), \] where \[ \left(X^{(1)},X^{(2)},\dots,X^{(m-1)}\right)\in W_0(m,m-1;\psi) \] and $a^{(i)}_{j}\in\left(\frac{-1}{2}, \frac{1}{2}\right)$ for $1\leq{}i\leq(n-m+1)$. Note that \begin{align*} \left\vert\mbf{q}\cdot\sum\limits_{j=1}^{m-1}a_{j}^{(i)}X^{(j)}\right\vert &=\left\vert\sum\limits_{j=1}^{m-1}a_{j}^{(i)}\mbf{q}\cdot X^{(j)}\right\vert\\ &\leq \sum\limits_{j=1}^{m-1}\vert a_{j}^{(i)} \vert \vert \mbf{q} \cdot X^{(j)}\vert \\ &\leq \left(\sum\limits_{j=1}^{m-1}\vert a_{j}^{(i)}\vert\right)\psi(|\mbf{q}|)\\ &\leq c\psi(|\mbf{q}|)\hspace{1cm}\text{for } 1\leq i\leq(n-(m-1)), \end{align*} and it follows that $A\subseteq{}W_0(m,n;c\psi)$.
Now define the function \[ \eta:W_0(m,m-1,\psi)\times \left(\frac{-1}{2}, \frac{1}{2}\right)^{(n-(m-1))(m-1)}\to A \] by \begin{eqnarray*} \eta\left(X^{(1)},X^{(2)},\dots,X^{(m-1)}, a_{1}^{1},\dots,a_{m-1}^{1},\dots,a_{1}^{(n-(m-1))},\dots,a_{m-1}^{(n-(m-1))}\right)&=&\\ \left(X^{(1)},X^{(2)},\dots,X^{(m-1)},\sum\limits_{j=1}^{m-1} a_{j}^{(1)}X^{(j)},\dots,\sum\limits_{j=1}^{m-1} a_{j}^{(n-m+1)}X^{(j)}\right). \end{eqnarray*} Note that $\eta$ is surjective and that the vectors $X^{j}$, for $j=1,\dots,m-1$, are linearly independent. This ensures that $\eta$ is well defined, one-to-one and the Jacobian, $J(\eta)$, of $\eta$ is of maximal rank. The function $\eta$ is therefore an embedding and its range is diffeomorphic to $A$. This in turn implies that $\eta$ is (locally) bi-Lipschitz. \subsection{The Infinite Measure Case} As mentioned above the proof of Theorem~\ref{Mumtaz2} is split into two parts. In this section we concentrate on the infinite measure case which can be deduced from the following lemma. \begin{lemma} \label{lem:infmeslem} Let $\psi$ be an approximating function and let $f$ and $g:r\to r^{-(n-(m-1))(m-1)}f(r)$ be dimension functions with $r^{-(m-1)(n+1)}f(r)\to\infty$ as $r\to{}0$. Further, let $r^{-m(m-1)}g(r)$ be monotonic and $r^{-(m-1)^2}g(r)$ be increasing. If \[ \sum_{r=1}^{\infty}f(\Psi (r))\Psi(r)^{-(m-1)n}r^{m-1}=\infty, \] then \[ \mcal{H}^{f}(A)=\infty. \] \end{lemma} \begin{proof} As $\eta$ is bi-Lipschitz, we have that \begin{eqnarray*} \mcal{H}^{f}(A) &=&\mcal{H}^{f}\left(\eta\left(W_0(m,m-1,\psi)\times \mb{I}^{(n-(m-1))(m-1)}\right)\right)\notag\\&\asymp&\mcal{H}^{f}\left(W_0(m,m-1,\psi)\times \mb{I}^{(n-(m-1))(m-1)}\right). \end{eqnarray*} The proof relies on the slicing technique of Lemma~\ref{slicing}. Let $B:=W_0(m,m-1;\psi)\times \mb{I}^{(n-(m-1))(m-1)}\subseteq \mb{I}^{(m-1)(n+1)}$ and $V$ be the space $\mathbb{I}^{m(m-1)}\times{}\{0\}^{(m-1)(n+1-m)}$. As $W_0(m,m-1;\psi)$ is a $\limsup$ set, $B$ is a Borel set. 
We know that $\dim W_0(m,m-1; \psi) = m(m-1)$ by Theorem~\ref{thm1} and this means that $W_0(m,m-1;\psi)$ is dense in $\mathbb{I}^{m(m-1)}$. Let $S:=\{0\}^{m(m-1)}\times\mb{I}^{(n+1-m)(m-1)}$. Clearly $S$ is a subset of $V^\perp$, and further it has positive $\mcal{H}^{(n-(m-1))(m-1)}$-measure. Now for each $b\in{}S$, \begin{eqnarray*} \mcal{H}^{g}\left(B\cap(V+b)\right)&=&\mcal{H}^{g}\left((W_0(m,m-1;\psi)\times \mb{I}^{(n-(m-1))(m-1)})\cap(V+b)\right)\notag\\&=&\mcal{H}^{g}(\widetilde{W}_0(m,m-1;\psi)+b)\notag\\&=&\mcal{H}^{g}\left(\widetilde{W}_0(m,m-1;\psi)\right) \label{eqn:gmesB}, \end{eqnarray*} where $\widetilde{W}_0(m,m-1;\psi)=W_0(m,m-1;\psi)\times\{0\}^{(n+1-m)(m-1)}$. Thus the $g$-measure of $\widetilde{W}_0(m,m-1;\psi)$ coincides with the $g$-measure of $W_0(m,m-1;\psi)$. Now applying Theorem~\ref{thm1} with $n=m-1$ implies that $\mcal{H}^{g}\left(B\cap(V+b)\right)=\infty$ if \[ \sum_{r=1}^\infty r^{m-1}g(\Psi(r))\Psi(r)^{-(m-1)^2}=\infty. \] Applying Lemma~\ref{slicing}, we have $\mcal{H}^{f}(A)=\infty$ if \[ \sum_{r=1}^\infty r^{m-1}g(\Psi(r))\Psi(r)^{-(m-1)^2}=\infty \] and we conclude that $\mcal{H}^{f}(A)=\infty$ if \[ \sum_{r=1}^\infty f(\Psi(r))\Psi(r)^{-(m-1)n}r^{m-1}=\infty, \] as required. \end{proof} We can now complete the proof of Theorem~\ref{Mumtaz2}. As $A\subseteq W_0(m,n;c\psi)$, $\mcal{H}^f(A)=\infty$ implies that $\mcal{H}^f(W_0(m,n;c\psi))=\infty$ and we need only show that the value of the constant $c$ is irrelevant. Recall that $c\geq{}1.$ For convenience let $\psi_c(r):=\frac{\psi(r)}{c}$, $\Psi_c(r):=\frac{\Psi(r)}{c}$, $\sum:=\sum_{r=1}^{\infty} f(\Psi(r))\Psi(r)^{-(m-1)n}r^{m-1}$ and $\sum_c:=\sum_{r=1}^{\infty}f(\Psi_c(r))\Psi_c(r)^{-(m-1)n}r^{m-1}$. Since $r^{-(m-1)(n+1)}f(r)$ is decreasing it follows that \[ \infty=\sum\leq c_1\sum{}_c, \] where $c_1=c^{-(m-1)(n+1)}$. Therefore $\sum_c=\infty$ if $\sum=\infty$ and we have $\mcal{H}^{f}(W_0(m,n;c\psi_c))=\infty$ if $ \sum_{r=1}^\infty f(\Psi(r))\Psi(r)^{-(m-1)n}r^{m-1}=\infty. 
$ Finally, it follows that $\mcal{H}^{f}(W_0(m,n;\psi))=\infty$ if $ \sum_{r=1}^\infty f(\Psi(r))\Psi(r)^{-(m-1)n}r^{m-1}=\infty, $ as required. \subsection{Finite measure case} We now come to the case where $r^{-(m-1)(n+1)}f(r)\to C$ as $r\to 0$ and $C>0$ is finite. In this case $\mcal{H}^f$ is comparable to $(m-1)(n+1)$--dimensional Lebesgue measure. Note that in this case the divergence of the sum \[ \sum_{r=1}^\infty{}f(\Psi(r))\Psi(r)^{-(m-1)n}r^{m-1} \] is in direct correspondence with that of the sum \[ \sum_{r=1}^\infty{}\psi^{m-1}(r). \] We begin with the following general lemma, the proof of which we leave to the reader. \begin{lemma}\label{fulllemma} Suppose that $L\subset\mb{R}^l$, $M\subset\mb{R}^k$ and $\eta:L\to M$ is an onto bi--Lipschitz transformation. That is, there exist constants $c_1$ and $c_2$ with $0<c_1\leq c_2<\infty$, such that \[ c_1 d_L(x,y) \leq d_M(\eta(x),\eta(y)) \leq c_2{}d_L(x,y) \] for any $x,y\in{}L$, where $d_L$ and $d_M$ are the respective metrics on $L$ and $M$. Then for any $C\subseteq L$ with $|C|_L = 0$, we have $|\eta(C)|_M =0$, and for any $C^\prime\subseteq L$ with $|L\setminus C^\prime|_L=0$, $|\eta(L\setminus C^\prime)|_M \ = \ 0$, where $|\cdot|_L$ (\textit{respec.} $|\cdot|_M$) denotes the induced measure on $L$ (\textit{respec.} $M$). That is, $\eta$ preserves a ``zero--full'' law. \end{lemma} In applying Lemma~\ref{fulllemma}, we first need to show that $W_0(m,m-1;\psi)\times \mb{I}^{(n-m+1)(m-1)}$ has \textit{full} Lebesgue measure in $\mb{I}^{(m-1)(n+1)}$. Theorem~\ref{thm1} implies that $|W_0(m,m-1;\psi)|_{m(m-1)} = 1$ if $\sum_{r=1}^{\infty}\psi(r)^{m-1}=\infty$ and a straightforward application of Fubini's Theorem gives \[ |W_0(m,m-1;\psi)\times \mb{I}^{(n-(m-1))(m-1)}|_{(m-1)(n+1)}= 1, \] as the Lebesgue measure of a product of two sets is simply the product of the measures of the two sets. It follows then that $W_0(m,m-1;\psi)\times \mb{I}^{(n-m+1)(m-1)}$ is full in $\mathbb{I}^{(m-1)(n+1)}$, as required.
It remains to prove that $A$, the image of $W_0(m,m-1;\psi)\times \mb{I}^{(n-m+1)(m-1)}$ under $\eta$, is full in $\Gamma$. To do this we use local charts on $\Gamma$. As $\Gamma$ is an $(m-1)(n+1)$-dimensional smooth manifold, we know that there is a countable atlas for $\Gamma$. Take any chart in the atlas, say $(O,\nu)$, where $O$ is an open set in $\mathbb{R}^{(m-1)(n+1)}$ and $\nu$ is the (local) diffeomorphism from $O$ to $\Gamma$. Now, $\eta$ is invertible and $\eta^{-1}(\nu(O))$ is in $\mathbb{I}^{(m-1)(n+1)}$. We have just shown that $W_0(m,m-1;\psi)\times \mb{I}^{(n-m+1)(m-1)}$ has full measure, and so therefore does its intersection with $\eta^{-1}(\nu(O))$. It follows then that $\eta$ of this intersection must have the same induced measure on $\Gamma$ as $\nu(O)$ does. We can repeat this argument for each element of the atlas of $\Gamma$ and it follows that $A$ must be full in $\Gamma$, as required. This completes the proof of Theorem~\ref{Mumtaz2}.
\section{Introduction and motivation} \begin{question} Let $S$ be a set of effects on a separable Hilbert space $\mathbb H$. Is there a measurable space $(X,\mathcal A)$ and a POV-measure $\alpha:(X,\mathcal A)\to\mathcal E(\mathbb H)$ such that $S$ is a subset of the range of $\alpha$? \end{question} If $S$ consists only of orthogonal projections (that means, idempotent effects), then the answer is simple: $S$ is a subset of the range of a POV-measure iff the elements of $S$ commute. On the other hand, if there are non-idempotent effects in $S$, the answer is not known. In the present paper, we examine a related question: \begin{question} If $S$ is a subset of an effect algebra $E$, is there a Boolean algebra $B$ and a morphism of effect algebras $\alpha:B\to E$ such that $S\subseteq\alpha(B)$? \end{question} This can be considered as a quantum-logical version of Question 1. We prove that, given a subset $S$ of an effect algebra $E$ such that $1\in S$, there exist a Boolean algebra $B$ and a morphism $\alpha:B\to E$ with $S\subseteq\alpha(B)$ if and only if there is a mapping $\csm{~.~}{~.~}:\Fin(S)\times\Fin(S)\to E$ satisfying certain properties. We call them {\em compatibility support mappings}. The proof uses a modification of the limit techniques introduced in \cite{CatDAlGiuPul:EAaPM}. We show that compatibility support mappings, and hence pairs $(B,\alpha)$, exist whenever $S$ is an MV-algebra or $S$ is a pairwise commuting set of effects on a Hilbert space. We prove several properties of strong compatibility support mappings, generalizing the properties of the prototype Example \ref{ex:joinmeet}. The results presented in this paper are more general than the results from an earlier paper \cite{Jen:CiIEA}, where only interval effect algebras were considered. In that paper, a related notion of {\em witness mappings} was introduced to characterize coexistent subsets of interval effect algebras.
In the last section, we examine connections between compatibility support mappings and witness mappings. We prove that, for a subset $S$ of an interval effect algebra, every compatibility support mapping for $S$ gives rise to a witness mapping for $S$. We do not know whether this relationship is a one-to-one correspondence. \section{Definitions and basic relationships} An {\em effect algebra} is a partial algebra $(E;\oplus,0,1)$ with a binary partial operation $\oplus$ and two nullary operations $0,1$ satisfying the following conditions. \begin{enumerate} \item[(E1)]If $a\oplus b$ is defined, then $b\oplus a$ is defined and $a\oplus b=b\oplus a$. \item[(E2)]If $a\oplus b$ and $(a\oplus b)\oplus c$ are defined, then $b\oplus c$ and $a\oplus(b\oplus c)$ are defined and $(a\oplus b)\oplus c=a\oplus(b\oplus c)$. \item[(E3)]For every $a\in E$ there is a unique $a'\in E$ such that $a\oplus a'=1$. \item[(E4)]If $a\oplus 1$ exists, then $a=0$. \end{enumerate} Effect algebras were introduced by Foulis and Bennett in their paper \cite{FouBen:EAaUQL}. Independently, K\^opka and Chovanec introduced an essentially equivalent structure called a {\em D-poset} (see \cite{KopCho:DP}). Another equivalent structure, called a {\em weak orthoalgebra}, was introduced by Giuntini and Greuling in \cite{GiuGre:TaFLfUP}. For brevity, we denote the effect algebra $(E,\oplus,0,1)$ by $E$. In an effect algebra $E$, we write $a\leq b$ iff there is $c\in E$ such that $a\oplus c=b$. It is easy to check that every effect algebra is cancellative; thus $\leq$ is a partial order on $E$. In this partial order, $0$ is the least and $1$ is the greatest element of $E$. Moreover, it is possible to introduce a new partial operation $\ominus$; $b\ominus a$ is defined iff $a\leq b$ and then $a\oplus(b\ominus a)=b$. It can be proved that $a\oplus b$ is defined iff $a\leq b'$ iff $b\leq a'$. It is usual to denote the domain of $\oplus$ by $\perp$. If $a\perp b$, we say that $a$ and $b$ are {\em orthogonal}.
\begin{example} The prototype example of an effect algebra is the {\em standard effect algebra $\mathcal E(\mathbb H)$.} Let $\mathbb H$ be a Hilbert space. Let $\mathcal S(\mathbb H)$ be the set of all bounded self-adjoint operators on $\mathbb H$. Let $I$ be the identity operator on $\mathbb H$. For $A,B\in\mathcal S(\mathbb H)$, write $A\leq B$ if and only if, for all $x\in\mathbb H$, $\langle Ax,x\rangle\leq\langle Bx,x\rangle$. Put $\mathcal E(\mathbb H)=\{X\in\mathcal S(\mathbb H):0\leq X\leq I\}$ and for $A,B\in\mathcal E(\mathbb H)$ define $A\oplus B=A+B$ iff $A+B\leq I$. Then $(\mathcal E(\mathbb H),\oplus,0,I)$ is an effect algebra. The elements of $\mathcal E(\mathbb H)$ are called Hilbert space effects. \end{example} An effect algebra $E$ is {\em lattice ordered} iff $(E,\leq)$ is a lattice. An effect algebra is an {\em orthoalgebra} iff $a\perp a$ implies $a=0$. An orthoalgebra that is lattice ordered is an orthomodular lattice. An {\em MV-effect algebra} is a lattice ordered effect algebra $M$ in which, for all $a,b\in M$, $(a\lor b)\ominus a=b\ominus (a\land b)$. It is proved in \cite{ChoKop:BDP} that there is a natural, one-to-one correspondence between MV-effect algebras and MV-algebras given by the following rules. Let $(M,\oplus,0,1)$ be an MV-effect algebra. Let $\boxplus$ be a total operation given by $x \boxplus y=x\oplus(x'\land y)$. Then $(M,\boxplus,',0)$ is an MV-algebra. Similarly, let $(M,\boxplus,\lnot,0)$ be an MV-algebra. Restrict the operation $\boxplus$ to the pairs $(x,y)$ satisfying $x\leq y'$ and call the new partial operation $\oplus$. Then $(M,\oplus,0,\lnot 0)$ is an MV-effect algebra. Among lattice ordered effect algebras, MV-effect algebras can be characterized in a variety of ways. Three of them are given in the following proposition. \begin{proposition} \cite{BenFou:PSEA}, \cite{ChoKop:BDP} Let $E$ be a lattice ordered effect algebra.
The following are equivalent. \begin{enumerate} \item[(a)] $E$ is an MV-effect algebra. \item[(b)] For all $a,b\in E$, $a\land b=0$ implies $a\leq b'$. \item[(c)] For all $a,b\in E$, $a\ominus(a\land b)\leq b'$. \item[(d)] For all $a,b\in E$, there exist $a_1,b_1,c\in E$ such that $a_1\oplus b_1\oplus c$ exists, $a_1\oplus c=a$ and $b_1\oplus c=b$. \end{enumerate} \end{proposition} Let $B$ be a Boolean algebra and let $E$ be an effect algebra. An {\em observable} is a mapping $\alpha:B\to E$ such that $\alpha(0)=0$, $\alpha(1)=1$ and, for every $x,y\in B$ such that $x\wedge y=0$, $\alpha(x\vee y)=\alpha(x)\oplus\alpha(y)$. \section{Compatibility support mappings --- definition and examples} In this section we introduce (strong) compatibility support mappings and present two examples. \begin{definition} \label{def:csm} Let $E$ be an effect algebra, let $S\subseteq E$ be such that $1\in S$. We say that $\csm{~.~}{~.~}:\Fin(S)\times\Fin(S)\to E$ is a {\em compatibility support mapping for $S$} if and only if the following conditions are satisfied. \begin{enumerate} \item[(a)]If $V_1\subseteq V_2$, then $\csm{U}{V_1}\leq\csm{U}{V_2}$. \item[(b)]$\csm{U}{V}\leq\csm{U}{\{1\}}$. \item[(c)]$\csm{U}{\emptyset}=0$. \item[(d)]$\csm{\emptyset}{\{c\}}=c$. \item[(e)]If $c\notin U\cup V$, then $\csm{U\cup\{c\}}{\{1\}}\ominus\csm{U\cup\{c\}}{V}= \csm{U}{V\cup\{c\}}\ominus\csm{U}{V}$. \end{enumerate} A compatibility support mapping is {\em strong} if and only if the following condition is satisfied. \begin{enumerate} \item[(e*)]For all $c$, $\csm{U\cup\{c\}}{\{1\}}\ominus\csm{U\cup\{c\}}{V}= \csm{U}{V\cup\{c\}}\ominus\csm{U}{V}$. \end{enumerate} Note that (e*) implies (e). \end{definition} \begin{example} \label{ex:joinmeet} Let $M$ be an MV-effect algebra. Define $\csm{~.~}{~.~}:\Fin(M)\times\Fin(M)\to M$ by $$ \csm{U}{V}=(\bigwedge U)\wedge(\bigvee V). $$ Then $\csm{~.~}{~.~}$ is a strong compatibility support mapping. The conditions (a)-(d) are easy to prove. Let us prove (e*).
\begin{align*} \csm{U}{V\cup\{c\}}\ominus\csm{U}{V}= ((\bigwedge U)\wedge(c\vee(\bigvee V)))\ominus ((\bigwedge U)\wedge(\bigvee V))=\\ =(((\bigwedge U)\wedge c)\vee((\bigwedge U)\wedge(\bigvee V)))\ominus ((\bigwedge U)\wedge(\bigvee V))=\\ =((\bigwedge U)\wedge c)\ominus((\bigwedge U)\wedge c\wedge(\bigvee V))=\\ =\csm{U\cup\{c\}}{\{1\}}\ominus\csm{U\cup\{c\}}{V}. \end{align*} \end{example} \begin{example} \label{ex:product} Let $\sqcup$ be an operation on the set of all operators on a Hilbert space $\mathbb H$ given by $$ a\sqcup b:=a+b-ab. $$ It is easy to check that $\sqcup$ is associative with neutral element $0$. If $a$ and $b$ are commuting effects, then $a.b$ is an effect with $a.b\leq a,b$. Moreover, $a\sqcup b$ is an effect. Indeed, since $a,b$ are commuting effects, $1-a,1-b$ are commuting effects. Since $1-a,1-b$ are commuting effects, $(1-a).(1-b)$ is an effect and $$ 1-(1-a).(1-b)=1-(1-a-b+ab)=a+b-ab $$ is an effect. Let $S$ be a set of pairwise commuting effects with $1\in S$; then there exists a commutative $C^*$-algebra $A$ with $S\subseteq A$. The operations $\sqcup, .$ are commutative and associative on $A\cap\mathcal E(\mathbb H)\supseteq S$. Let $U,V$ be finite subsets of $S$. Write $\bigsqcap U$ for the product of elements of $U$. Write $\bigsqcup\emptyset=0$, $\bigsqcup\{c\}=c$ and, for $V=\{v_1,\dots,v_n\}$ with $n>1$, write $$ \bigsqcup V=v_1\sqcup\dots\sqcup v_n. $$ Define $\csm{~.~}{~.~}:\Fin(S)\times\Fin(S)\to \mathcal E(\mathbb H)$ by $$ \csm{U}{V}=(\bigsqcap U).(\bigsqcup V). $$ Let us prove that $\csm{~.~}{~.~}$ is a compatibility support mapping. Proof of condition (a): Suppose that $V_1\subseteq V_2$. We need to prove that $\csm{U}{V_1}\leq\csm{U}{V_2}$. Let us prove that $\bigsqcup V_1\leq\bigsqcup V_2$.
Since $V_1\subseteq V_2$, we may write \begin{align*} \bigsqcup V_2=(\bigsqcup V_1)\sqcup (\bigsqcup (V_2\setminus V_1))=\\ =(\bigsqcup V_1)+(\bigsqcup (V_2\setminus V_1)) -(\bigsqcup V_1).(\bigsqcup (V_2\setminus V_1)). \end{align*} Therefore, $$ (\bigsqcup V_2)-(\bigsqcup V_1)= (\bigsqcup (V_2\setminus V_1))-(\bigsqcup V_1).(\bigsqcup (V_2\setminus V_1))\geq 0, $$ so $\bigsqcup V_1\leq \bigsqcup V_2$. Since $\bigsqcup V_1\leq \bigsqcup V_2$, $$ \csm{U}{V_1}=(\bigsqcap U).(\bigsqcup V_1)\leq(\bigsqcap U).(\bigsqcup V_2)=\csm{U}{V_2}. $$ The conditions (b)-(d) are trivially satisfied. Proof of condition (e): \begin{align*} \csm{U}{V\cup\{c\}}-\csm{U}{V}= (\bigsqcap U).(c\sqcup\bigsqcup V)-(\bigsqcap U).(\bigsqcup V)=\\ =(\bigsqcap U).(c+\bigsqcup V-c.(\bigsqcup V))-(\bigsqcap U).(\bigsqcup V)=\\ =(\bigsqcap U).c+(\bigsqcap U).(\bigsqcup V)-(\bigsqcap U).c.(\bigsqcup V)- (\bigsqcap U).(\bigsqcup V)=\\ =(\bigsqcap U).c-(\bigsqcap U).c.(\bigsqcup V)= \csm{U\cup\{c\}}{\{1\}}\ominus\csm{U\cup\{c\}}{V}. \end{align*} Note that, if $S$ contains some non-idempotent $c$, then $\csm{~.~}{~.~}$ is not strong. To see that (e*) is not satisfied, put $U=V=\{c\}$ and compute \begin{align*} \csm{U\cup\{c\}}{\{1\}}\ominus\csm{U\cup\{c\}}{V}=c\ominus c.c\neq 0\\ \csm{U}{V\cup\{c\}}\ominus\csm{U}{V}=c.c\ominus c.c=0. \end{align*} \end{example} \section{Observables from compatibility support mappings} The aim of this section is to prove that if $S$ is such that $S\cup\{1\}$ admits a compatibility support mapping, then $S$ is coexistent. The direct limit method used here is a dual of the projective limit method introduced in \cite{CatDAlGiuPul:EAaPM}. See also \cite{Pul:AnoooM} for another application of the projective limit method. Several proofs in this section (Lemma 3 through Theorem 1) are very similar, or even the same as, those in \cite{Jen:CiIEA}.
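As an aside, the identities in Example~\ref{ex:product} are easy to check numerically in the scalar case, where effects are numbers in $[0,1]$ and trivially commute. The following Python sketch is our illustration only (the function names are ours); it verifies condition (e) on a sample and exhibits the failure of (e*) for a non-idempotent $c$, matching the computation $c\ominus c.c\neq 0$ versus $c.c\ominus c.c=0$ above.

```python
from functools import reduce
from math import isclose

def sq_join(a, b):
    """The operation a ⊔ b := a + b - a·b from Example ex:product."""
    return a + b - a * b

def csm(U, V):
    """⟨U | V⟩ = (∏ U)·(⊔ V) for finite sets U, V of scalar effects."""
    prod = reduce(lambda x, y: x * y, U, 1.0)   # empty product is 1
    join = reduce(sq_join, V, 0.0)              # ⊔ over the empty set is 0
    return prod * join

# Condition (e): for c not in U ∪ V,
# ⟨U∪{c} | {1}⟩ ⊖ ⟨U∪{c} | V⟩ = ⟨U | V∪{c}⟩ ⊖ ⟨U | V⟩.
U, V, c = frozenset([0.5]), frozenset([0.7]), 0.3
lhs = csm(U | {c}, frozenset([1.0])) - csm(U | {c}, V)
rhs = csm(U, V | {c}) - csm(U, V)
assert isclose(lhs, rhs)

# Condition (e*) fails for non-idempotent c with U = V = {c}:
# the left side is c - c·c ≠ 0, the right side is c·c - c·c = 0.
c = 0.5
U = V = frozenset([c])
lhs = csm(U | {c}, frozenset([1.0])) - csm(U | {c}, V)
rhs = csm(U, V | {c}) - csm(U, V)
assert not isclose(lhs, rhs)
```

Representing $U$ and $V$ as sets matters here: with $U=V=\{c\}$ the union $U\cup\{c\}$ collapses back to $\{c\}$, which is exactly where strongness fails for non-idempotent $c$.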
The reason for this is that they are basically an application of Lemma \ref{lemma:first}, which is Proposition 4 of \cite{Jen:CiIEA}. However, the author decided to include them here, to keep the present paper more streamlined. \begin{assumption} In this section, we assume the following. \begin{itemize} \item $E$ is an effect algebra. \item $S$ is a subset of $E$ with $1\in S$. \item $\csm{~.~}{~.~}:\Fin(S)\times \Fin(S)\to E$ is a compatibility support mapping. \end{itemize} \end{assumption} \begin{lemma} \label{lemma:c1isc} For all $c\in S$, $\csm{\{c\}}{\{1\}}=c$. \end{lemma} \begin{proof} Put $U=V=\emptyset$ in condition (e) of Definition \ref{def:csm}. We see that $$ \csm{\{c\}}{\{1\}}\ominus\csm{\{c\}}{\emptyset}= \csm{\emptyset}{\{c\}}\ominus\csm{\emptyset}{\emptyset}. $$ By conditions (c) and (d), this implies that $\csm{\{c\}}{\{1\}}=c$. \end{proof} Let us write, for $A,X\in\Fin(S)$ such that $X\subseteq A$, $$ D(X,A)=\csm{X}{\{1\}}\ominus\csm{X}{A\setminus X}. $$ \begin{lemma} \label{lemma:first} Let $A,X\in\Fin(S)$, $X\subseteq A$ and let $c\in S$ be such that $c\not\in A$. Then $$ D(X,A)=D(X,A\cup\{c\})\oplus D(X\cup\{c\},A\cup\{c\}). $$ \end{lemma} \begin{proof} We see that \begin{align*} D(X,A\cup\{c\})=&\csm{X}{\{1\}}\ominus\csm{X}{\{c\}\cup(A\setminus X)}\\ D(X\cup\{c\},A\cup\{c\})=&\csm{X\cup\{c\}}{\{1\}}\ominus\csm{X\cup\{c\}}{A\setminus X} \end{align*} and, by condition (e) of Definition \ref{def:csm}, we see that $$ \csm{X\cup\{c\}}{\{1\}}\ominus\csm{X\cup\{c\}}{A\setminus X}= \csm{X}{\{c\}\cup(A\setminus X)}\ominus\csm{X}{A\setminus X}. $$ Therefore, \begin{align*} D(X,A\cup\{c\})\oplus D(X\cup\{c\},A\cup\{c\})=\\ =(\csm{X}{\{1\}}\ominus\csm{X}{\{c\}\cup(A\setminus X)}) \oplus (\csm{X}{\{c\}\cup(A\setminus X)}\ominus\csm{X}{A\setminus X})=\\ =\csm{X}{\{1\}}\ominus\csm{X}{A\setminus X}=D(X,A). \end{align*} \end{proof} \begin{lemma} \label{lemma:second} Let $C,A,X\in\Fin(S)$ be such that $X\subseteq A$ and $C\cap A=\emptyset$.
Then $(D(X\cup Y,A\cup C))_{Y\subseteq C}$ is an orthogonal family and $$ \bigoplus_{Y\subseteq C}D(X\cup Y,A\cup C)=D(X,A). $$ \end{lemma} \begin{proof} The proof goes by induction with respect to $|C|$. For $C=\emptyset$, Lemma \ref{lemma:second} is trivially true. Assume that Lemma \ref{lemma:second} holds for all $C$ with $|C|=n$ and let $c\in S,c\not\in A\cup C$. Let us consider the family $$ (D(X\cup Z,A\cup C\cup\{c\}))_{Z\subseteq C\cup\{c\}}. $$ For every $Z\subseteq C\cup\{c\}$, either $c\in Z$ or $c\not\in Z$, so either $Z=Y\cup\{c\}$ or $Z=Y$, for some $Y\subseteq C$. Therefore, we can write \begin{align*} (D(X\cup Z,A\cup C\cup\{c\}))_{Z\subseteq C\cup\{c\}}=\\ (D(X\cup Y,A\cup C\cup\{c\}), D(X\cup Y\cup\{c\},A\cup C\cup\{c\}))_{Y\subseteq C}. \end{align*} By Lemma \ref{lemma:first}, $$ D(X\cup Y,A\cup C\cup\{c\})\oplus D(X\cup Y\cup\{c\},A\cup C\cup\{c\})= D(X\cup Y,A\cup C). $$ It only remains to apply the induction hypothesis to finish the proof. \end{proof} \begin{corollary} \label{coro:decomposition} For every $A\in\Fin(S)$, $(D(X,A))_{X\subseteq A}$ is a decomposition of unit. \end{corollary} \begin{proof} Obviously, $$D(\emptyset,\emptyset)=\csm{\emptyset}{\{1\}}\ominus \csm{\emptyset}{\emptyset}=1\ominus 0=1. $$ By Lemma \ref{lemma:second}, $$ \bigoplus_{X\subseteq A}(D(\emptyset\cup X,\emptyset\cup A)) =D(\emptyset,\emptyset). $$ \end{proof} \begin{corollary} \label{coro:alphaAobservable} For every $A\in\Fin(S)$, the mapping $\alpha_A:2^{(2^A)}\to E$ given by $$ \alpha_A(\mathbb X)=\bigoplus_{X\in\mathbb X}D(X,A) $$ is a simple observable. \end{corollary} \begin{proof} The atoms of $2^{(2^A)}$ are of the form $\{X\}$, where $X\subseteq A$. By Corollary \ref{coro:decomposition}, $(\alpha_A(\{X\}):X\subseteq A)$ is a decomposition of unit; the remainder of the proof is trivial. 
\end{proof} For $A,B\in\Fin(S)$ with $A\subseteq B$, let us define mappings $g^A_B:2^{(2^A)}\to 2^{(2^B)}$ $$ g^A_B(\mathbb X)=\{X\cup C_0:X\in\mathbb X \text{ and }C_0\subseteq (B\setminus A)\} $$ and let us write $\mathcal G$ for the collection of all such mappings. It is an easy exercise to prove that every $g^A_B\in\mathcal G$ is an injective homomorphism of Boolean algebras and that $((2^{(2^A)}:A\in\Fin(S)),\mathcal G)$ is a direct family of Boolean algebras. Let us prove that the mappings $g^A_B$ behave well with respect to the observables $\alpha_A$ and $\alpha_B$. \begin{lemma} \label{lemma:d1commutes} Let $A,B\in\Fin(S)$ with $A\subseteq B$. The diagram \begin{center} \includegraphics{d1} \end{center} commutes. \end{lemma} \begin{proof} For all $\mathbb X\in 2^{(2^A)}$, \begin{align*} \alpha_B(g^A_B(\mathbb X))= \alpha_B(\{X\cup C_0:X\in\mathbb X \text{ and }C_0\subseteq (B\setminus A)\})=\\ =\bigoplus( D(X\cup C_0,B):X\in\mathbb X \text{ and }C_0\subseteq (B\setminus A) )=\\ =\bigoplus_{X\in\mathbb X}\Bigl( \bigoplus_{C_0\subseteq (B\setminus A)} D(X\cup C_0,B)\Bigr) \end{align*} Put $Y:=C_0$, $C:=B\setminus A$; by Lemma \ref{lemma:second}, $$ \bigoplus_{C_0\subseteq (B\setminus A)} D(X\cup C_0,B)=D(X,A). $$ Therefore, $$ \alpha_B(g^A_B(\mathbb X))=\bigoplus_{X\in\mathbb X} D(X,A)= \alpha_A(\mathbb X) $$ and the diagram commutes. \end{proof} \begin{corollary} \label{coro:simplerange} For every $B\in\Fin(S)$, $B$ is a subset of the range of $\alpha_B$. \end{corollary} \begin{proof} We need to prove that every $a\in B$ is an element of the range of $\alpha_B$. For $B=\emptyset$, this is trivial. Suppose that $B$ is nonempty and let $a\in B$. Let $A=\{a\}$ and let $X=g^A_B(\{\{a\}\})$.
By Lemma \ref{lemma:d1commutes}, $$ \alpha_B(X)=\alpha_B(g^A_B(\{\{a\}\}))=\alpha_A(\{\{a\}\}), $$ and we see that, by (c) of Definition \ref{def:csm} and by Lemma \ref{lemma:c1isc}, $$ \alpha_A(\{\{a\}\})=\alpha_{\{a\}}(\{\{a\}\})=D(\{a\},\{a\})= \csm{\{a\}}{\{1\}}\ominus\csm{\{a\}}{\emptyset}=a\ominus 0=a. $$ \end{proof} \begin{theorem} \label{thm:obsfromcsm} Let $E$ be an effect algebra, let $S\subseteq E$. If $S\cup\{1\}$ admits a compatibility support mapping, then $S$ is coexistent. \end{theorem} \begin{proof} Suppose that $S\cup\{1\}$ admits a compatibility support mapping. Let us construct $F_B(S)$ as the direct limit of the direct family $(2^{(2^A)}:A\in\Fin(S))$, equipped with morphisms of the type $g^A_B$. After that, we shall define an observable $\alpha:F_B(S)\to E$. Consider the set $$ \Gamma_S=\bigcup_{A\in\Fin(S)}\{(\mathbb X,A):\mathbb X\subseteq 2^A\} $$ and define on it a binary relation $\equiv$ by $(\mathbb X,A)\equiv(\mathbb Y,B)$ if and only if $g^A_{A\cup B}(\mathbb X)=g^B_{A\cup B}(\mathbb Y)$, that is, $$ \{X\cup C_A:X\in\mathbb X \text{ and } C_A\subseteq (A\cup B)\setminus A\}= \{Y\cup C_B:Y\in\mathbb Y \text{ and } C_B\subseteq (A\cup B)\setminus B\}. $$ Then $F_B(S)=\Gamma_S/\equiv$ and the operations on $F_B(S)$ are defined by $$ [(\mathbb X,A)]_\equiv\vee[(\mathbb Y,B)]_\equiv= [(g^A_{A\cup B}(\mathbb X)\cup g^B_{A\cup B}(\mathbb Y),A\cup B)]_\equiv $$ and similarly for the other operations. Then $F_B(S)$ is a direct limit of Boolean algebras, hence a Boolean algebra. Let $\alpha_S:F_B(S)\to E$ be a mapping given by the rule $\alpha_S([(\mathbb X,A)]_\equiv)=\alpha_A(\mathbb X)$. We shall prove that $\alpha_S$ is an observable. Let us prove $\alpha_S$ is well-defined. Suppose that $(\mathbb X,A)\equiv (\mathbb Y,B)$, that is, $g^A_{A\cup B}(\mathbb X)=g^B_{A\cup B}(\mathbb Y)$.
By Lemma \ref{lemma:d1commutes}, $$ \alpha_A(\mathbb X)=\alpha_{A\cup B}(g^A_{A \cup B}(\mathbb X)) $$ and $$ \alpha_B(\mathbb Y)=\alpha_{A\cup B}(g^B_{A \cup B}(\mathbb Y)), $$ hence $\alpha_S$ is a well-defined mapping. Let us prove that $\alpha_S$ is an observable. The bounds of the Boolean algebra $F_B(S)$ are $[(\emptyset,A)]_\equiv$ and $[(2^A,A)]_\equiv$, where $A\in\Fin(S)$. Obviously, by Corollary \ref{coro:alphaAobservable}, $$ \alpha_S([(\emptyset,A)]_\equiv)=\alpha_A(\emptyset)=0 $$ and $$ \alpha_S([(2^A,A)]_\equiv)=\alpha_A(2^A)=1. $$ Let $[(\mathbb X,A)]_\equiv$ and $[(\mathbb Y,B)]_\equiv$ be disjoint elements of $F_B(S)$, that is, $g^A_{A\cup B}(\mathbb X)\cap g^B_{A\cup B}(\mathbb Y)=\emptyset$. Then \begin{align*} \alpha_S([(\mathbb X,A)]_\equiv\vee [(\mathbb Y,B)]_\equiv)= \alpha_S([g^A_{A\cup B}(\mathbb X) \cup g^B_{A\cup B}(\mathbb Y),A\cup B]_\equiv)=\\ =\alpha_{A\cup B} (g^A_{A\cup B}(\mathbb X)\cup g^B_{A\cup B}(\mathbb Y)). \end{align*} Since $\alpha_{A\cup B}$ is an observable, $$ \alpha_{A\cup B} (g^A_{A\cup B}(\mathbb X)\cup g^B_{A\cup B}(\mathbb Y))= \alpha_{A\cup B} (g^A_{A\cup B}(\mathbb X))\oplus \alpha_{A\cup B}(g^B_{A\cup B}(\mathbb Y)). $$ It remains to observe that $$ \alpha_{A\cup B} (g^A_{A\cup B}(\mathbb X))= \alpha_S([(\mathbb X,A)]_\equiv) $$ and that $$ \alpha_{A\cup B} (g^B_{A\cup B}(\mathbb Y))= \alpha_S([(\mathbb Y,B)]_\equiv). $$ Let us prove that the range of $\alpha_S$ includes $S$. Let $a\in S$. By Corollary \ref{coro:simplerange}, the range of $\alpha_{\{a\}}$ includes $a$ and, by an obvious direct limit argument, the range of $\alpha_{\{a\}}$ is a subset of the range of $\alpha_S$. \end{proof} \section{Compatibility support mappings from observables} The aim of the single theorem of this section is to prove that, for every subset $S$ of the range of an observable, $S\cup\{1\}$ admits a strong compatibility support mapping.
\begin{theorem}\label{thm:csmfromobs} For every coexistent subset $S$ of an effect algebra $E$, $S\cup\{1\}$ admits a strong compatibility support mapping. \end{theorem} \begin{proof} Since $S$ is coexistent, there are a Boolean algebra $B$ and an observable $\alpha:B\to E$ such that $S$ is a subset of the range of $\alpha$. For every $a\in S\cup\{1\}$, fix an element $p_a\in\alpha^{-1}(a)$ and define $$ \csm{U}{V}=\alpha((\bigwedge_{a\in U} p_a)\wedge(\bigvee_{b\in V}p_b)). $$ Let us check the strong condition (e*) in the definition of a strong compatibility support mapping. Let $c\not\in U,V$. Then \begin{align*} \csm{U\cup\{c\}}{\{1\}}\ominus\csm{U\cup\{c\}}{V}=\\ =\alpha((\bigwedge_{a\in U} p_a)\wedge p_c)\ominus \alpha(((\bigwedge_{a\in U} p_a)\wedge p_c)\wedge(\bigvee_{b\in V}p_b)). \end{align*} To simplify matters, write \begin{align*} m_U=(\bigwedge_{a\in U} p_a)\\ j_V=(\bigvee_{b\in V}p_b) \end{align*} We can write \begin{align*} \alpha((\bigwedge_{a\in U} p_a)\wedge p_c)\ominus \alpha(((\bigwedge_{a\in U} p_a)\wedge p_c)\wedge(\bigvee_{b\in V}p_b))= \alpha(m_U\wedge p_c)\ominus\alpha(m_U\wedge p_c\wedge j_V)=\\ =\alpha((m_U\wedge p_c)\ominus(m_U\wedge p_c\wedge j_V)). \end{align*} Similarly, $$ \csm{U}{V\cup\{c\}}\ominus\csm{U}{V}= \alpha((m_U\wedge(p_c\vee j_V))\ominus(m_U\wedge j_V)). $$ Since $B$ is a Boolean algebra, \begin{align*} (m_U\wedge p_c)\ominus(m_U\wedge p_c\wedge j_V)= (m_U\wedge(p_c\vee j_V))\ominus(m_U\wedge j_V). \end{align*} The remaining conditions are trivial to check. \end{proof} Let us note that, if we start with a non-strong compatibility support mapping, apply Theorem \ref{thm:obsfromcsm} to construct an observable and then apply Theorem \ref{thm:csmfromobs} to construct a compatibility support mapping, we cannot obtain the compatibility support mapping we started with, since Theorem \ref{thm:csmfromobs} always produces a strong compatibility support mapping.
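The construction in this proof is easy to instantiate in the simplest interval effect algebra $E=[0,1]$: take a finite set $\Omega$, let $\alpha$ be the uniform probability measure on the Boolean algebra $2^\Omega$, and fix a representative $p_a\subseteq\Omega$ for every $a$. The following Python sketch (the concrete sets below are our own toy choices, not taken from the paper) verifies the strong condition (e*) for the resulting mapping $\csm{U}{V}=\alpha(m_U\wedge j_V)$:

```python
from fractions import Fraction
from itertools import combinations

OMEGA = frozenset(range(6))             # finite probability space

def alpha(X):
    """The observable: the uniform probability measure on 2^OMEGA."""
    return Fraction(len(X), len(OMEGA))

# representatives p_a <= OMEGA; '1' is represented by the whole space
p = {'a': frozenset({0, 1, 2}),
     'b': frozenset({1, 2, 3}),
     'c': frozenset({2, 3, 4}),
     '1': OMEGA}

def m(U):                               # meet of representatives (empty meet = OMEGA)
    X = OMEGA
    for a in U:
        X = X & p[a]
    return X

def j(V):                               # join of representatives (empty join = empty set)
    X = frozenset()
    for b in V:
        X = X | p[b]
    return X

def csm(U, V):                          # <U|V> = alpha(m_U /\ j_V)
    return alpha(m(U) & j(V))

# strong condition (e*) for c not in U, V:
# <U+{c}|{1}> - <U+{c}|V> = <U|V+{c}> - <U|V>
subsets = [frozenset(s) for r in range(3) for s in combinations(['a', 'b'], r)]
for U in subsets:
    for V in subsets:
        lhs = csm(U | {'c'}, {'1'}) - csm(U | {'c'}, V)
        rhs = csm(U, V | {'c'}) - csm(U, V)
        assert lhs == rhs
```

With exact rational arithmetic the identity holds on the nose, reflecting the Boolean computation in the proof.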
\section{Properties of strong compatibility support mappings} The aim of this section is to prove that several properties of Example \ref{ex:joinmeet} are valid for all strong compatibility support mappings. It remains open which of these properties, if any, are valid for all compatibility support mappings. The main vehicle here is Proposition \ref{prop:csmfromD}, which is interesting in its own right: it shows that, for a given $S$, every strong compatibility support mapping on $S$ is determined by its $D(~.~,~.~)$. \begin{assumption} In this section, we assume the following. \begin{itemize} \item $E$ is an effect algebra. \item $S$ is a subset of $E$ with $1\in S$. \item $\csm{~.~}{~.~}:\Fin(S)\times \Fin(S)\to S$ is a strong compatibility support mapping. \end{itemize} \end{assumption} \begin{lemma} \label{lemma:nondisjoint} If $U,V$ are not disjoint, then $\csm{U}{V}=\csm{U}{\{1\}}$. \end{lemma} \begin{proof} Let $c\in U\cap V$. This implies that $U\cup\{c\}=U$ and $V\cup\{c\}=V$. Therefore, by (e*), $$ \csm{U}{\{1\}}\ominus\csm{U}{V}= \csm{U}{V}\ominus\csm{U}{V}=0, $$ hence $\csm{U}{V}=\csm{U}{\{1\}}$. \end{proof} \begin{proposition} \label{prop:trop} $\csm{U\cup\{c\}}{\{1\}}=\csm{U}{\{c\}}$. \end{proposition} \begin{proof} Put $V=\emptyset$ in (e*): $$ \csm{U\cup\{c\}}{\{1\}}\ominus\csm{U\cup\{c\}}{\emptyset}= \csm{U}{\{c\}}\ominus\csm{U}{\emptyset}. $$ By condition (c), $\csm{U\cup\{c\}}{\emptyset}=\csm{U}{\emptyset}=0$, therefore $$ \csm{U\cup\{c\}}{\{1\}}=\csm{U}{\{c\}}. $$ \end{proof} \begin{proposition} \label{prop:csmfromD} Let $U,V\subseteq S$. \begin{enumerate} \item If $U\cap V\not =\emptyset$, then $\csm{U}{V}=\csm{U}{\{1\}}=D(U,U)$. \item If $U\cap V=\emptyset$, then $$ \csm{U}{V}=\bigoplus_{\emptyset\neq Y\subseteq V} D(U\cup Y,U\cup V). $$ \end{enumerate} \end{proposition} \begin{proof}~ (1) By Lemma \ref{lemma:nondisjoint}, $\csm{U}{V}=\csm{U}{\{1\}}$ and $$ D(U,U)=\csm{U}{\{1\}}\ominus\csm{U}{\emptyset}= \csm{U}{\{1\}}\ominus 0=\csm{U}{\{1\}}.
$$ (2) By Lemma \ref{lemma:second}, $$ D(U,U)=\bigoplus_{Y\subseteq V} D(U\cup Y,U\cup V). $$ Therefore, $$ D(U,U)\ominus D(U,U\cup V)= \bigoplus_{\emptyset\neq Y\subseteq V} D(U\cup Y,U\cup V). $$ Moreover, \begin{align*} D(U,U)\ominus D(U,U\cup V)= (\csm{U}{\{1\}}\ominus\csm{U}{\emptyset})\ominus (\csm{U}{\{1\}}\ominus\csm{U}{V})=\\ =\csm{U}{V}\ominus\csm{U}{\emptyset}=\csm{U}{V}\ominus 0= \csm{U}{V}. \end{align*} \end{proof} \begin{proposition} \label{prop:formera} If $U_1\subseteq U_2$, then $\csm{U_1}{V}\geq\csm{U_2}{V}$. \end{proposition} \begin{proof}~ (Case 1) Suppose that $U_1\cap V\neq\emptyset$. Then $U_2\cap V\neq\emptyset$. By Proposition \ref{prop:csmfromD} and Lemma \ref{lemma:second}, $$ \csm{U_1}{V}=D(U_1,U_1)= \bigoplus_{Y\subseteq U_2\setminus U_1}D(U_1\cup Y,U_2)\geq D(U_2,U_2)=\csm{U_2}{V}. $$ (Case 2) Suppose that $U_2\cap V=\emptyset$. Then $U_1\cap V=\emptyset$. By Proposition \ref{prop:csmfromD}, $$ \csm{U_1}{V}=\bigoplus_{\emptyset\neq Y\subseteq V} D(U_1\cup Y,U_1\cup V). $$ By Lemma \ref{lemma:second}, for every $\emptyset\neq Y\subseteq V$, $$ D(U_1\cup Y,U_1\cup V)=\bigoplus_{W\subseteq U_2\setminus U_1} D(U_1\cup Y\cup W,U_2\cup V). $$ Obviously (put $W=U_2\setminus U_1$), this implies that $$ D(U_1\cup Y,U_1\cup V)\geq D(U_2\cup Y,U_2\cup V), $$ hence we may write $$ \csm{U_1}{V}=\bigoplus_{\emptyset\neq Y\subseteq V} D(U_1\cup Y,U_1\cup V)\geq \bigoplus_{\emptyset\neq Y\subseteq V} D(U_2\cup Y,U_2\cup V). $$ It remains to apply Proposition \ref{prop:csmfromD} again: $$ \bigoplus_{\emptyset\neq Y\subseteq V} D(U_2\cup Y,U_2\cup V)=\csm{U_2}{V}. $$ (Case 3) Suppose that $U_1\cap V=\emptyset$ and $U_2\cap V\neq\emptyset$. By Proposition \ref{prop:csmfromD}, $$ \csm{U_1}{V}=\bigoplus_{\emptyset\neq Y\subseteq V}D(U_1\cup Y,U_1\cup V). 
$$ By Lemma \ref{lemma:second}, $$ D(U_1\cup Y,U_1\cup V)= \bigoplus_{W\subseteq U_2\setminus(U_1\cup V)} D(U_1\cup W\cup Y,U_2\cup V) $$ We can put $W=U_2\setminus(U_1\cup V)$, proving that $$ D(U_1\cup Y,U_1\cup V)\geq D( (U_2\setminus V)\cup Y,U_2\cup V). $$ Therefore, \begin{align*} \csm{U_1}{V}= \bigoplus_{\emptyset\neq Y\subseteq V} D(U_1\cup Y,U_1\cup V)\geq \bigoplus_{\emptyset\neq Y\subseteq V} D( (U_2\setminus V)\cup Y,U_2\cup V)\geq\\ \bigoplus_{V\cap U_2\subseteq Y\subseteq V} D( (U_2\setminus V)\cup Y,U_2\cup V). \end{align*} For every $V\cap U_2\subseteq Y\subseteq V$, there is exactly one $Z\subseteq V\setminus U_2$ such that $$ (U_2\setminus V)\cup Y=U_2\cup Z. $$ Thus, we can rewrite $$ \bigoplus_{V\cap U_2\subseteq Y\subseteq V} D( (U_2\setminus V)\cup Y,U_2\cup V)= \bigoplus_{Z\subseteq V\setminus U_2} D(U_2\cup Z,U_2\cup V). $$ By Lemma \ref{lemma:second} and Proposition \ref{prop:csmfromD}, $$ \bigoplus_{Z\subseteq V\setminus U_2} D(U_2\cup Z,U_2\cup V)=D(U_2,U_2)=\csm{U_2}{V}. $$ \end{proof} \begin{proposition} \label{prop:lowbound} $\csm{U}{\{1\}}$ is a lower bound of $U$. \end{proposition} \begin{proof} Any element is a lower bound of $\emptyset$. Suppose that the proposition is true for some $U$ and pick $c\in S\setminus U$. By Proposition \ref{prop:formera}, $$ \csm{U\cup\{c\}}{\{1\}}\leq\csm{U}{\{1\}}. $$ By the induction hypothesis, $\csm{U}{\{1\}}$ is a lower bound of $U$. It remains to prove that $\csm{U\cup\{c\}}{\{1\}}\leq c$. By Proposition \ref{prop:trop}, $\csm{U\cup\{c\}}{\{1\}}=\csm{U}{\{c\}}$. By Proposition \ref{prop:formera} and condition (d), $\csm{U}{\{c\}}\leq\csm{\emptyset}{\{c\}}=c$. \end{proof} \begin{corollary} $\csm{U}{V}$ is a lower bound of $U$. \end{corollary} \begin{proof} By Proposition \ref{prop:lowbound}, $\csm{U}{\{1\}}$ is a lower bound of $U$. By condition (b), $\csm{U}{V}\leq\csm{U}{\{1\}}$. \end{proof} \begin{proposition} $\csm{\emptyset}{V}$ is an upper bound of $V$. 
\end{proposition} \begin{proof} Any element is an upper bound of $\emptyset$. Suppose that the proposition is true for some $V$ and pick $c\in S\setminus V$. By condition (a), $$ \csm{\emptyset}{V}\leq\csm{\emptyset}{V\cup\{c\}} $$ and by induction hypothesis, $\csm{\emptyset}{V}$ is an upper bound of $V$. It remains to prove that $c\leq \csm{\emptyset}{V\cup\{c\}}$. Put $U=\emptyset$ in condition (e*): $$ \csm{\{c\}}{\{1\}}\ominus\csm{\{c\}}{V}= \csm{\emptyset}{V\cup\{c\}}\ominus\csm{\emptyset}{V}. $$ Add $\csm{\emptyset}{V}$ to both sides to obtain $$ (\csm{\{c\}}{\{1\}}\ominus\csm{\{c\}}{V})\oplus\csm{\emptyset}{V}= \csm{\emptyset}{V\cup\{c\}}. $$ As $\csm{\{c\}}{V}\leq\csm{\emptyset}{V}$, $$ \csm{\{c\}}{\{1\}}\leq (\csm{\{c\}}{\{1\}}\ominus\csm{\{c\}}{V})\oplus\csm{\emptyset}{V}. $$ By Lemma \ref{lemma:c1isc}, $\csm{\{c\}}{\{1\}}=c$. \end{proof} \section{Compatibility support mappings and witness mappings} Let $(G,\leq)$ be a partially ordered abelian group and $u\in G$ be a positive element. For $0\leq a,b\leq u$, define $a\oplus b$ if and only if $a+b\leq u$ and put $a\oplus b=a+b$. With such a partial operation $\oplus$, the closed interval $$ [0,u]_G=\{x\in G:0\leq x\leq u\} $$ becomes an effect algebra $([0,u]_G,\oplus,0,u)$. Effect algebras which arise from partially ordered abelian groups in this way are called {\em interval effect algebras}, see \cite{BenFou:IaSEA}. Let $E$ be an interval effect algebra in a partially ordered abelian group $G$. Let $S\subseteq E$. Let us write $\Fin(S)$ for the set of all finite subsets of $S$. We write $I(\Fin(S))$ for the set of all comparable elements of the poset $(\Fin(S),\subseteq)$, that is, $$ I(\Fin(S))=\{(X,Y)\in\Fin(S)\times\Fin(S):X\subseteq Y\}. $$ For every mapping $\beta:\Fin(S)\to G$, we define a mapping $D_\beta:I(\Fin(S))\to G$. For $(X,A)\in I(\Fin(S))$, the value $D_\beta(X,A)\in G$ is given by the rule $$ D_\beta(X,A):=\sum_{X\subseteq Z\subseteq A}(-1)^{|X|+|Z|}\beta(Z).
$$ In \cite{Jen:CiIEA}, we introduced and studied the following notion: \begin{definition}\label{def:cm} Let $E$ be an interval effect algebra. We say that a mapping $\beta:\Fin(S)\to E$ is a {\em witness mapping for $S$} if and only if the following conditions are satisfied. \begin{enumerate} \item[(A1)]$\beta(\emptyset)=1$, \item[(A2)]for all $c\in S$, $\beta(\{c\})=c$, \item[(A3)]for all $(X,A)\in I(\Fin(S))$, $D_\beta(X,A)\geq 0$. \end{enumerate} \end{definition} We proved there that a subset $S$ of an interval effect algebra $E$ is coexistent if and only if there is a witness mapping $\beta:\Fin(S)\to E$. The aim of this section is to explore the connection between the notion of a witness mapping and the notion of compatibility support mappings. \begin{proposition} \label{prop:wmfromcsm} Let $E$ be an interval effect algebra, let $S$ be a subset of $E$ with $1\in S$. Suppose there is a compatibility support mapping $\csm{~.~}{~.~}:\Fin(S)\times \Fin(S)\to S$. Then $\beta:\Fin(S)\to E$, given by $\beta(X)=\csm{X}{\{1\}}$, is a witness mapping and $D(X,A)=D_\beta(X,A)$, for all $(X,A)\in I(\Fin(S))$. \end{proposition} \begin{proof} We see that, by condition (d) of Definition \ref{def:csm}, $$ \beta(\emptyset)=\csm{\emptyset}{\{1\}}=1, $$ so condition (A1) of Definition \ref{def:cm} is satisfied. By Lemma \ref{lemma:c1isc}, $$ \beta(\{c\})=\csm{\{c\}}{\{1\}}=c, $$ hence (A2) is satisfied. For the proof of (A3), it suffices to prove that $D(X,A)=D_\beta(X,A)$, for all $(X,A)\in I(\Fin(S))$. The positivity of $D_\beta$ then follows from the positivity of $D$. The proof goes by induction with respect to $|A\setminus X|$. If $|A\setminus X|=0$, then $A=X$ and $$ D_\beta(X,A)=\beta(X)=\csm{X}{\{1\}}= \csm{X}{\{1\}}\ominus 0= \csm{X}{\{1\}}\ominus \csm{X}{\emptyset}=D(X,A). $$ Suppose that $D(X,A)=D_\beta(X,A)$, for all $(X,A)\in I(\Fin(S))$ such that $|A\setminus X|=n$. Let $(Y,B)\in I(\Fin(S))$ be such that $|B\setminus Y|=n+1$.
Pick $c\in B\setminus Y$ and put $X=Y$, $A=B\setminus\{c\}$. By Lemma 1 of \cite{Jen:CiIEA}, for any mapping $\beta:\Fin(S)\to E$, for all $(X,A)\in I(\Fin(S))$ and for all $c\in S\setminus A$, the following equality is satisfied: $$ D_\beta(X,A)=D_\beta(X,A\cup\{c\})+D_\beta(X\cup\{c\},A\cup\{c\}). $$ Therefore, $$ D_\beta(Y,B)=D_\beta(X,A\cup\{c\})= D_\beta(X,A)-D_\beta(X\cup\{c\},A\cup\{c\}). $$ By the induction hypothesis, $D_\beta(X,A)=D(X,A)$ and $D_\beta(X\cup\{c\},A\cup\{c\})=D(X\cup\{c\},A\cup\{c\})$. Thus, $$ D_\beta(Y,B)= D(X,A)\ominus D(X\cup\{c\},A\cup\{c\}). $$ By Lemma \ref{lemma:first}, $$ D(X,A)\ominus D(X\cup\{c\},A\cup\{c\})=D(X,A\cup\{c\})=D(Y,B), $$ hence $D_\beta(Y,B)=D(Y,B)$ and the induction step is complete. \end{proof} The following problem remains open. \begin{problem} Let $E$ be an interval effect algebra, let $S\subseteq E$, let $\beta:\Fin(S)\to E$ be a witness mapping. Is there always a compatibility support mapping $\csm{~.~}{~.~}:\Fin(S)\times \Fin(S)\to S$ such that $\beta(X)=\csm{X}{\{1\}}$? \end{problem}
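The alternating sum defining $D_\beta$ and the recurrence from Lemma 1 of \cite{Jen:CiIEA} used in the proof above are purely combinatorial and hold for an arbitrary real-valued $\beta$, positivity aside. A small Python sanity check (all names and the random $\beta$ are ours):

```python
import random
from itertools import combinations

random.seed(0)
S = [0, 1, 2, 3]
# an arbitrary mapping beta : Fin(S) -> R; the recurrence below does not
# need positivity, only the inclusion-exclusion structure of D_beta
beta = {frozenset(Z): random.random()
        for r in range(len(S) + 1) for Z in combinations(S, r)}

def D(X, A):
    """D_beta(X, A) = sum over X <= Z <= A of (-1)^(|X|+|Z|) beta(Z)."""
    X, A = frozenset(X), frozenset(A)
    total = 0.0
    for r in range(len(A - X) + 1):
        for extra in combinations(A - X, r):
            Z = X | frozenset(extra)
            total += (-1) ** (len(X) + len(Z)) * beta[Z]
    return total

# Lemma 1: D(X, A) = D(X, A + {c}) + D(X + {c}, A + {c}) for c outside A
X, A, c = {0}, {0, 1}, 2
assert abs(D(X, A) - (D(X, A | {c}) + D(X | {c}, A | {c}))) < 1e-12
```

The terms containing $c$ cancel between the two summands, which is exactly the computation behind the lemma.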
\section{Introduction} In this paper we discuss the approximation of $f(A)b$, where $A \in {\mathbb{C}}^{n\times n}$ is non-Hermitian and $f$ is a function defined on the spectrum of $A$ such that the extension of $f$ to matrix arguments is defined.\footnote{The function $f$ can be extended to matrix arguments by, e.g., a spectral definition or a contour integral. For a thorough treatment of matrix functions see \cite{higham08}; a compact overview is given in \cite{frommer06}.} The motivation for this rather general setting comes from quantum chromodynamics (QCD) formulated on a discrete space-time lattice, where $f = \sign$ is of special interest. As the main object relevant for our discussion we are focusing on the overlap Dirac operator \cite{Narayanan:1994gw,Neuberger:1997fp}. The main numerical effort lies in the inversion of the overlap operator, which is done by iterative methods and requires the repeated application of the sign function of the usual ``symmetrized'' Wilson operator $H_W = \gamma_5 D_W$ (see \cite{Bloch:2006cd} for the notation) on a vector. At zero quark chemical potential $\mu$, the operator $H_W$ is Hermitian. However, one can also study QCD at nonzero $\mu$, which is relevant for many physical systems such as neutron stars, relativistic heavy ion collisions, or the physics of the early universe. The overlap operator has been generalized to this case \cite{Bloch:2006cd,Bloch:2007xi}. The computational challenge is the fact that at non-zero chemical potential $H_W$ becomes non-Hermitian. This contribution is organized as follows. In Section 2 we review multishift methods which have proven to be successful in the Hermitian ($\mu=0$) case. We will point out the problems that occur when applying these methods to the non-Hermitian ($\mu\ne0$) case. In Sections 3 and 4 we present two procedures, restarts and deflation, which --- especially when applied in combination --- make multishift methods applicable to non-Hermitian matrices. 
In Section 5 we combine these procedures into concrete algorithms. We present our numerical results in Section 6, and conclusions are drawn in Section 7. \section{Multishift methods} First we recall some results for the Hermitian case, i.e., we investigate the computation of $f(A)b$, where $A \in {\mathbb{C}}^{n\times n}$ is Hermitian. If $A$ is large, $f(A)$ is too costly to compute, while $f(A)b$ can still be obtained in an efficient manner if $A$ is sparse. Krylov subspace methods, i.e., methods that approximate $f(A)b$ in a Krylov subspace $K_k(A,b) = \mathrm{span}\{b, Ab, \dots, A^{k-1}b\}$, are suitable for this task. We distinguish between two Krylov subspace approaches: direct projection and multishift. Direct projection methods compute the sign function for the projection of $A$ onto $K_k(A,b)$ and lift the result back to the original space, see \cite{higham08, vdV87}, or \cite{Bloch:2007aw,Bloch:2008gh} in the context of QCD. These methods are not the topic of this paper, but we will use them for comparison in our numerical results. The idea of multishift methods is to approximate $f$ by a rational function $g$, \begin{align} \label{eq:g} f(x) \approx g(x) = \sum\limits_{i=1}^s \frac{\omega_i}{x - \sigma_i}\,. \end{align} The systems \begin{equation}\label{shiftedsystems:eq} (A - \sigma_i I) x^{(i)} = b\,, \quad i = 1, \dots, s \end{equation} are treated with standard Krylov subspace methods such as the conjugate gradient method (CG) or the minimal residual method (MINRES), approximating $x^{(i)}$ by ${x_k}^{(i)}$ from a Krylov subspace. Since Krylov subspaces are shift invariant, i.e., $K_k(A-\sigma_i I,b) = K_k(A,b)$, the approximations ${x_k}^{(i)}$ can be computed simultaneously using the same subspace for all systems. The desired approximation is then obtained by combining the approximations to the $s$ shifted systems \begin{align} f(A)b \approx x_k = \sum\limits_{i=1}^s \omega_i {x_k}^{(i)}\,. \end{align} The core of any such method is the computation of an appropriate basis for the Krylov subspace.
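In exact arithmetic, \eqref{eq:g} reduces the evaluation of $g(A)b$ to the $s$ shifted solves \eqref{shiftedsystems:eq}: $g(A)b=\sum_{i=1}^s\omega_i(A-\sigma_i I)^{-1}b$. The following Python sketch checks this partial-fraction identity on a small dense toy matrix, with arbitrary complex shifts and weights of our own choosing (not a lattice operator, and dense solves instead of Krylov iterations):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
A = rng.standard_normal((n, n))      # generic non-Hermitian test matrix
b = rng.standard_normal(n).astype(complex)

# an arbitrary rational function g(x) = sum_i omega_i / (x - sigma_i);
# complex shifts keep the shifted matrices safely nonsingular here
sigmas = np.array([1j, -1j, 2 + 1j])
omegas = np.array([0.5, 0.5, 1.0])

# g(A) b via the s shifted solves
x = sum(w * np.linalg.solve(A - s * np.eye(n), b)
        for w, s in zip(omegas, sigmas))

# g(A) b via the spectral definition A = W diag(lam) W^{-1}
lam, W = np.linalg.eig(A)
g_of_lam = sum(w / (lam - s) for w, s in zip(omegas, sigmas))
x_ref = W @ (g_of_lam * np.linalg.solve(W, b))

assert np.allclose(x, x_ref)
```

The multishift methods discussed below realize exactly these $s$ solves, but approximately and from a single shared Krylov basis.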
For Hermitian matrices an orthonormal basis can be built with short recurrences using the Lanczos process. These short recurrences are essential for the efficiency of the approach. Turning to non-Hermitian matrices, the computation of an orthogonal basis now requires long recurrences and is usually summarized via the Arnoldi relation \begin{align} A V_k = V_k H_k + h _{k+1,k} v_{k+1} e_k^T\,. \end{align} Here, $V_k = \left[ v_1 | \dots | v_k\right] \in {\mathbb{C}}^{n\times k}$ is the matrix which contains the computed basis vectors (the Arnoldi vectors), $H_k = {V_k}^\dagger A V_k$ is the upper Hessenberg matrix containing the recurrence coefficients $h_{i,j}$, and $e_k$ denotes the $k$-th unit vector of ${\mathbb{C}}^k$. For the rational approximation approach this means that the short-recurrence methods CG and MINRES have to be replaced by multishift versions of the corresponding long-recurrence methods, i.e., the full orthogonalization method (FOM) \cite{simoncini03} and the generalized minimal residual method (GMRES) \cite{frommer98}, respectively. Long recurrences slow down computation and increase storage requirements, and thus become inefficient or even infeasible if $k$, the dimension of the Krylov subspace, becomes large. In this paper we investigate restarts to circumvent this problem for non-Hermitian matrices. \section{Restarts} FOM to solve $A x = b$ consists of the Arnoldi process to compute the Arnoldi vectors $v_1, \dots, v_k$ as well as the upper Hessenberg matrix $H_k = {V_k}^\dagger A V_k$ and of approximating $x \approx x_k = \|b\|_2 V_k {H_k}^{-1} e_1$. The Arnoldi process applied to $A - \sigma_i I$ instead of $A$ produces the same matrices $V_k$ with $H_k$ replaced by the shifted counterpart $H_k - \sigma_i I$. The $k$-th approximation to $g(A)b$, with $g(x)$ defined in \eqref{eq:g}, is thus given by $\|b\|_2 \sum_{i=1}^s V_k (H_k-\sigma_i I)^{-1} e_1$. 
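To illustrate how one basis serves all shifts, the following sketch (our own minimal dense Arnoldi, not the code used in the paper) builds $V_k$ and $H_k$ once, forms the shifted FOM iterates $x_k^{(i)}=\|b\|_2\,V_k(H_k-\sigma_iI)^{-1}e_1$, and verifies that each residual is a scalar multiple of $v_{k+1}$ with factor $-h_{k+1,k}\,e_k^Ty_k^{(i)}$ — the collinearity that the restart procedure exploits:

```python
import numpy as np

def arnoldi(A, b, k):
    """k steps of Arnoldi: A V_k = V_k H_k + h_{k+1,k} v_{k+1} e_k^T."""
    n = len(b)
    V = np.zeros((n, k + 1), dtype=complex)
    H = np.zeros((k + 1, k), dtype=complex)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = np.vdot(V[:, i], w)
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(2)
n, k = 12, 6
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

V, H = arnoldi(A, b, k)
beta, e1 = np.linalg.norm(b), np.eye(k)[:, 0]

for sigma in [1j, -0.5 + 1j]:               # two arbitrary shifts
    y = beta * np.linalg.solve(H[:k, :k] - sigma * np.eye(k), e1)
    x = V[:, :k] @ y                        # FOM iterate for (A - sigma I) x = b
    r = b - (A - sigma * np.eye(n)) @ x
    rho = -H[k, k - 1] * y[k - 1]           # predicted collinearity factor
    assert np.allclose(r, rho * V[:, k])
```

The collinearity follows directly from the Arnoldi relation: $(A-\sigma I)V_ky = V_k(H_k-\sigma I)y + h_{k+1,k}(e_k^Ty)\,v_{k+1}$, and the first term equals $b$ for the FOM iterate.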
To prevent recurrences from becoming too long one can --- in this case --- use a restart procedure. This means that one stops the Arnoldi process after $k_\text{max}$ iterations. At this point we have a, possibly crude, approximation to $g(A) b$, and to allow for a restart one now has to express the error of this approximation anew as the action of a matrix function, $g_1(A) b_1$, say. A crucial observation concerning multishifts is that for any $k$ the individual residuals ${r_k}^{(i)} = b - (A - \sigma_i I) {x_k}^{(i)}$ of the FOM iterates ${x_k}^{(i)}$ are just scalar multiples of the Arnoldi vector $v_{k+1}$, see, e.g., \cite{simoncini03,frommer03}, i.e., \begin{align} {r_k}^{(i)} = {\rho_k}^{(i)} v_{k+1}\,, \quad i = 1, \dots, s \end{align} with collinearity factors ${\rho_k}^{(i)} \in {\mathbb{C}}$. The error $\Delta_k = g(A) b - x_k$ of the multishift approximation at step $k$ can therefore be expressed as \begin{align} \Delta_k = g_1(A) b_1\,, \quad \text{where } g_1(t) = \sum_{i=1}^s \frac{\omega_i {\rho_k}^{(i)}}{t - \sigma_i} \text{ and }b_1=v_{k+1}\,. \end{align} This allows for a simple restart at step $k_\text{max}$ of the Arnoldi process, with the new function $g_1$ again being rational with the same poles as $g$. This restart process can also be regarded as performing restarted FOM for each of the individual systems $(A-\sigma_i I) x = b$, $i=1,\ldots,s$ (and combining the individual iterates appropriately), the point being that, even after a restart, we need only a single Krylov subspace for all $s$ systems, see \cite{simoncini03}. There also exists a restarted version of multishift GMRES, see \cite{frommer98} for a detailed derivation. \section{Deflation} In \cite{Bloch:2007aw} two deflation approaches were proposed which use eigensystem information, namely Schur vectors (Schur deflation) or left and right eigenvectors (LR deflation) corresponding to some ``critical'' eigenvalues. 
Critical eigenvalues are those which are close to a singularity of $f$. If they are not reflected very precisely in the Krylov subspace, we get a poor approximation. In case of the sign function the critical eigenvalues are those close to the imaginary axis. Here, we describe LR deflation (see \cite{Bloch:2009in} for the reason why this is the method of choice) and show how it can be combined with multishifts and restarts. Let $R_m = \left[r_1 | \dots | r_m \right]$ be the matrix containing the right eigenvectors and ${L_m}^\dagger = \left[l_1 | \dots | l_m\right]^\dagger$ the matrix containing the left eigenvectors corresponding to $m$ critical eigenvalues of the matrix $A$. This means that we have \begin{align} A R_m = R_m \Lambda_m \quad \text{and} \quad {L_m}^\dagger A = \Lambda_m {L_m}^\dagger\,, \end{align} where $\Lambda_m$ is a diagonal matrix containing the $m$ critical eigenvalues. Since left and right eigenvectors are biorthogonal, we can normalize them such that ${L_m}^\dagger R_m = I_m$. The matrix $P = R_m {L_m}^\dagger$ represents an oblique projector onto the subspace $\Omega_R = \spann\{r_1, \dots,r_m\}$. We now split $f(A)b$ into the two parts \begin{align} f(A)b = f(A)(Pb) + f(A)(I-P)b\,. \end{align} Since we know the left and right eigenvectors which make up $P$, we directly obtain \begin{align} x_P \equiv f(A)(Pb) = f(A)R_m L_m^\dagger b = R_m f(\Lambda_m) (L_m^\dagger b)\,, \end{align} which can be computed exactly. The remaining part $f(A)(I-P)b$ can then be approximated iteratively by using a multishift method. Thus $f(A)b$ is now approximated in augmented Krylov subspaces $\Omega_R + K_k(A,(I - P)b)$, \begin{align} x_k = \underbrace{x_P\phantom{\Bigg|}\!\!\!\!}_{\in\Omega_R} + \;\;\underbrace{\sum_{i=1}^s \omega_i x_k^{(i)}}_{\makebox[0mm]{\scriptsize$\in K_k(A,(I - P)b)$}}\,. \end{align} Theoretically, we have \begin{align} K_k(A,(I-P)b) = (I-P)K_k(A,(I-P)b) \subseteq \range(I-P)\,, \end{align} see \cite{Bloch:2009in}. 
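For a small diagonalizable test matrix, the exactly computable part $x_P = R_m f(\Lambda_m) L_m^\dagger b$ and the obliqueness of $P=R_m L_m^\dagger$ can be checked directly. A hedged numpy sketch, in which the matrix, the selection rule for the ``critical'' eigenvalues, and the toy choice $f(z)=\sign(\mathrm{Re}\,z)$ are all our own:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 10, 3
A = rng.standard_normal((n, n))
b = rng.standard_normal(n).astype(complex)

lam, W = np.linalg.eig(A)               # right eigenvectors: A W = W diag(lam)
Winv = np.linalg.inv(W)                 # its rows are left eigenvectors, L^dag R = I

idx = np.argsort(np.abs(lam.real))[:m]  # m "critical" eigenvalues: smallest |Re|
R, L_dag = W[:, idx], Winv[idx, :]      # R_m and L_m^dagger
P = R @ L_dag                           # oblique projector onto span(R_m)

f = lambda z: np.sign(z.real)           # toy sign function acting on the spectrum
fA = W @ np.diag(f(lam)) @ Winv         # f(A) via the spectral definition

x_P = R @ np.diag(f(lam[idx])) @ (L_dag @ b)   # exact part R_m f(Lambda_m) L_m^dag b
assert np.allclose(fA @ (P @ b), x_P)
assert np.allclose(P @ P, P)            # P is a projector
```

Taking rows of $W^{-1}$ as left eigenvectors makes the biorthogonal normalization $L_m^\dagger R_m=I_m$ automatic; in the large sparse setting of the paper the critical eigentriplets are of course computed iteratively instead.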
In computational practice, however, components outside of $\range(I-P)$ will show up gradually when building $K_k(A,(I-P)b)$ due to rounding effects in floating-point arithmetic. It is thus necessary to reapply $I-P$ from time to time in order to eliminate these components. Since the only effect of LR deflation is the replacement of $b$ by $(I-P)b$, no modifications of the restart algorithm are necessary. \section{Algorithms} We combine multishift methods with restarts and deflation. We assume that the original function $f$ is replaced by a rational function (given by the shifts $\sigma_i$ and weights $\omega_i$) which approximates the original function sufficiently well after deflation. Depending on the underlying multishift method (FOM or GMRES), we get LR-deflated multishift FOM (FOM-LR) or LR-deflated multishift GMRES (GMRES-LR). Algorithm~\ref{fom:alg} gives an algorithmic description of FOM-LR. (For an algorithmic description of GMRES-LR we refer to \cite{Bloch:2009in}.) The notation FOM-LR($m,k$) indicates that we LR-deflate a subspace of dimension $m$ and that we restart FOM after a cycle of $k$ iterations. The vector $x$ is the approximation to $f(A)b$. After the completion of each cycle we perform a projection step to eliminate numerical contamination by components outside of $\range(I-P)$. 
\begin{algorithm} \begin{alg}\rm Restarted FOM-LR($m,k$)\label{fom:alg} \begin{algorithmic} \STATE \{{\bf Input} $m$, $k=k_\text{max}$, $A$, $\{\sigma_1, \dots, \sigma_s\}$, $\{\omega_1,\ldots, \omega_s\}$, $b$, $L=L_m$, $R=R_m$, $\Lambda=\Lambda_m$\} \STATE $x = x_P = R f(\Lambda) {L}^\dagger b$ \STATE $r = (I-P)b$ \STATE $\rho^{(i)} = 1$, $i = 1, \dots, s$ \WHILE[\emph{loop over restart cycles}]{not all systems are converged} \STATE $\beta = \|r\|_2$ \STATE $v_1 = r/\beta$ \STATE compute $V_k$, $H_k$ by running $k$ steps of Arnoldi with $A$ \STATE $y_k^{(i)} = \beta \rho^{(i)} (H_k - \sigma_i I_k)^{-1} e_1$, $i = 1, \dots, s$ \STATE $x = x + V_k\sum_{i=1}^s \omega_i y_k^{(i)}$ \STATE $r = v_{k+1}$ \STATE $\rho ^{(i)} = - h_{k+1,k} e_k^T y_k^{(i)}$, $i = 1, \dots, s$ \STATE $r = (I-P)r$ \COMMENT{\emph{projection step}} \ENDWHILE \end{algorithmic} \end{alg} \end{algorithm} Note that a combination of deflation and a multishift method based on the two-sided \mbox{Lanczos} algorithm is also possible, see \cite{Bloch:2009in}. Of course, since two-sided Lanczos already gives short recurrences, there is no need to restart here. \section{Numerical results} For our numerical experiments we turn to $f = \sign$. In the Hermitian case, the sign function of $A$ can be approximated using the Zolotarev best rational approximation, see \cite{zolotarev77} and, e.g., \cite{ingerman00, vandenEshof:2002ms}. Using the Zolotarev approximation on non-Hermitian matrices gives rather poor results, unless all eigenvalues are close to the real axis. A better choice for generic non-Hermitian matrices is the rational approximation originally suggested by Kenney and Laub \cite{KL} and used by Neuberger \cite{Neuberger:1998my, Neuberger:1999zk} for vanishing chemical potential, \begin{align} \sign(t) \approx g_s(t)\,, \quad \text{where } g_s(t) = \frac{(t+1)^{2s} - (t-1)^{2s}}{(t+1)^{2s} + (t-1)^{2s}}\,. 
\end{align} The partial fraction expansion of $g_s$ is known to be \begin{align} g_s(t) = t \sum_{i=1}^s \frac{\omega_i}{t^2 - \sigma_i} \quad \text{with } \omega_i = \frac{1}{s} \cos^{-2}\left( \frac{\pi}{2 s} \left(i-\frac{1}{2}\right)\right), \quad \sigma_i = -\tan^2\left(\frac{\pi}{2s}\left(i-\frac{1}{2}\right)\right), \end{align} see \cite{KL,Neuberger:1998my}. Note that actually one uses $g(ct)$, where the parameter $c>0$ is chosen to minimize the number of poles $s$ needed to achieve a given accuracy. If the spectrum of $A$ is known to be contained in the union of two circles $C(m,r) \cup C(-m,r)$, where $C(m,r)$ is the circle $\{ |z-m| \leq r\}$ and $m$ and $r$ are real with $0 < r <m$, then $c = ((m+r)(m-r))^{-1/2}$ is optimal, see \cite{Bloch:2009in, vandenEshof:2002ms}. Figure \ref{Fig:error} shows the performance of FOM-LR in comparison to the direct projection method. The $k$-th approximation in the latter is given as \begin{equation} \label{direct:eq} x_P+\| (I-P)b \|_2 V_k \sign(H_k) e_1\,, \end{equation} where $\sign(H_k)$ is computed via Roberts' method, see \cite{higham08}. The relative performance of the two approaches depends on the parameters of the problem, such as the lattice size, the deflation gap, and the size of the Krylov subspace. For more details, see \cite{Bloch:2009in}. We add that in the meantime an improved method to compute $\sign(H_k)$ in the direct approach has been developed, see \cite{Bloch:2009pos} in these proceedings. \begin{figure}[h] \includegraphics[width=0.48\textwidth]{error_vs_cpu_8888}\hfill \includegraphics[width=0.48\textwidth]{error_vs_cpu_aaaa} \caption{\label{Fig:error} Comparison of the accuracy of the restarted FOM-LR algorithm (rFOM) and the direct two-sided Lanczos-LR method (2sL) as a function of the CPU time in seconds for an $8^4$ (left) and a $10^4$ (right) lattice configuration, using $\mu=0.3$ in both cases. Each plot shows data for two different deflation gaps, given in parentheses. 
The restart size used in the restarted FOM-LR algorithm is $k_\text{max} = 30$ for the $8^4$ lattice and $k_\text{max} = 40$ for the $10^4$ lattice.} \end{figure} \pagebreak Figure \ref{Fig:projection} is meant to convey a warning. It shows that the projection step after each restart, as formulated in Algorithm~\ref{fom:alg}, may be crucial to ensure convergence. In both plots we give results for Algorithm~\ref{fom:alg} and a variant thereof in which the projection step is omitted. The right plot shows that omitting the projection step may destroy convergence, while the left plot shows that this is not necessarily so. Since the CPU time is increased only marginally by the projection step, the latter should always be included. \begin{figure}[h] \includegraphics[width=0.48\textwidth]{reorth_46}\hfill \includegraphics[width=0.48\textwidth]{reorth_8a} \caption{\label{Fig:projection}Error vs CPU time for the FOM-LR algorithm with and without re-orthogonalization for $4^4$ and $6^4$ (left) as well as $8^4$ and $10^4$ (right) lattices. We again used $\mu=0.3$ in all cases.} \end{figure} \section{Conclusion} We have presented an algorithm, FOM-LR, to approximate the action of the sign function of a non-Hermitian matrix on a vector. This algorithm combines LR deflation and a rational approximation to the sign function, which is computed by a restarted multishift method. The latter has fixed storage requirements determined by the restart parameter (maximum size of the Krylov subspace) and the degree of the rational approximation. Occasionally, additional projections of the Krylov vectors are necessary for numerical stability. Whether FOM-LR or a direct method (i.e., the two-sided Lanczos-LR method) performs better depends on many details of the problem. Some of them have been mentioned in Section 6. Others include implementation issues such as optimized linear algebra libraries, and ultimately parallelization. \bibliographystyle{JHEP}
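As a numerical aside (not part of the original paper), the closed form of the Kenney-Laub approximation $g_s$ and its partial fraction expansion can be checked against each other, and against $\sign$, with a short Python sketch; the function names are illustrative.

```python
import math

def g_closed(t, s):
    """Closed form g_s(t) = ((t+1)^{2s} - (t-1)^{2s}) / ((t+1)^{2s} + (t-1)^{2s})."""
    a = (t + 1.0) ** (2 * s)
    b = (t - 1.0) ** (2 * s)
    return (a - b) / (a + b)

def g_partial_fraction(t, s):
    """Partial fraction form g_s(t) = t * sum_i omega_i / (t^2 - sigma_i)."""
    total = 0.0
    for i in range(1, s + 1):
        phi = math.pi / (2 * s) * (i - 0.5)
        omega = (1.0 / s) / math.cos(phi) ** 2   # omega_i
        sigma = -math.tan(phi) ** 2              # sigma_i (the shifts)
        total += omega / (t * t - sigma)
    return t * total
```

For $s=1$ both forms reduce to $2t/(t^2+1)$, and already for $s=8$ the approximation is close to $\sign(t)$ away from the origin, which is why only a modest number of shifted systems is needed.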
\section{Introduction} Let $A$ be the class of functions \begin{equation} f(z)=z+a_2z^2+\cdots\, \label{1} \end{equation} which are analytic in the open unit disk $E=\{z\in\mathbb{C}:|z|<1\}$. A function $f\in A$ is said to be starlike of order $\lambda$, $0\leq\lambda<1$ if and only if, for $z\in E$, \[ Re\;\frac{zf'(z)}{f(z)}>\lambda. \] Also a function $f\in A$ is said to be convex of order $\lambda$, $0\leq\lambda<1$ if and only if, for $z\in E$, \[ Re\;\left\{1+\frac{zf''(z)}{f'(z)}\right\}>\lambda. \] Let $S^\ast(\lambda)$ and $K(\lambda)$ denote, as usual, the classes of starlike and convex functions of order $\lambda$ respectively. Salagean \cite{GS} introduced the operator $D^n$, $n\in\mathbb{N}$, as \[ D^nf(z)=D(D^{n-1}f(z))=z[D^{n-1}f(z)]' \] with $D^0f(z)=f(z)$, and used it to generalize the concepts of starlikeness and convexity of functions in the unit disk as follows: a function $f\in A$ is said to belong to the class $S_n(\lambda)$ if and only if \[ Re\frac{D^{n+1}f(z)}{D^nf(z)}>\lambda. \] For $n=0,1$, we have the classes of starlike and convex functions respectively. We refer to functions of the classes $S_n(\lambda)$ as $n$-starlike functions in the unit disk. For $\lambda=0$ we simply write $S^\ast$, $K$ and $S_n$. Let $\beta>0$, $\alpha\geq 0$ be real numbers, and $\gamma$ and $\delta$ complex constants with $\alpha+\delta=\beta+\gamma$. For $f\in A$, the generalized integral operator \begin{equation} \mathcal{J}(f)=\left\{\frac{\beta+\gamma}{z^\gamma}\int_0^zt^{\delta-1}f(t)^\alpha dt\right\}^{\frac{1}{\beta}},\;\;\;\beta+Re\;\gamma\geq 0,\ \label{2} \end{equation} and its many special cases (for example: $\beta=\alpha=1$, $\gamma=\delta$; $\beta=\alpha=1$, $\gamma=\delta=0$; $\beta=\alpha=1$, $\gamma=1$ and $\delta=1-\alpha$) have been studied repeatedly in the literature \cite{SA,BO,SD,WM,JK,ZL,RJ,EP,MM,MN,TO,WC,RS}, where $f(z)$ belongs to various favoured classes of functions.
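Since $Df(z)=zf'(z)$ multiplies the $k$-th Taylor coefficient by $k$, the operator $D^n$ sends $a_k$ to $k^na_k$. A small Python sketch (illustrative, not part of the paper) makes this concrete on truncated series:

```python
def salagean(coeffs, n=1):
    """Apply the Salagean operator D^n to f(z) = sum_k a_k z^k.

    coeffs[k] holds a_k (index 0 is the constant term, which is 0 for f in A).
    Since D f(z) = z f'(z), each coefficient a_k becomes k * a_k,
    hence D^n sends a_k to k**n * a_k.
    """
    return [a * (k ** n) for k, a in enumerate(coeffs)]

# f(z) = z + z^2  ->  D f = z + 2 z^2,  D^2 f = z + 4 z^2
f = [0, 1, 1]
```

Iterating `salagean(..., 1)` twice agrees with `salagean(..., 2)`, mirroring $D^nf=D(D^{n-1}f)$.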
More general integral operators were studied in \cite{MM}, where the authors used a new method of analysis to obtain results that are both more general and sharper than many earlier ones. Let $\beta>0$, $\alpha\geq 0$ be real numbers, and $\gamma$ and $\delta$ complex constants such that $\alpha+\delta=\beta+\gamma$. Define $\mathcal{J}_0^j(f)^\beta=f(z)^\alpha$, $j=1,2$, and for $m\in N$ define \[ \mathcal{J}_m^1(f)=\left\{\frac{(\beta+\gamma)^m}{z^\gamma\Gamma(m)}\int_0^z\left(\log\frac{z}{t}\right)^{m-1}t^{\delta-1}f(t)^\alpha dt\right\}^{\frac{1}{\beta}}, \] where Re $\gamma\geq 0$, and \[ \mathcal{J}_m^2(f)=\left\{\binom{\beta+\gamma+m-1}{\beta+\gamma-1}\frac{m}{z^\gamma}\int_0^z\left(1-\frac{t}{z}\right)^{m-1}t^{\delta-1}f(t)^\alpha dt\right\}^{\frac{1}{\beta}} \] also with $m-1$+Re $\gamma\geq 0$. The integrals $\mathcal{J}^j(f)$ are similar to the Jung-Kim-Srivastava one-parameter families of integral operators \cite{JK}. However, only in the case $\beta=\alpha=1$ with $\gamma$ real are $\mathcal{J}^j(f)$ special cases of those in \cite{JK}. Furthermore, if $m=1$, both integrals yield the integral operator ~(\ref{2}). In the present paper, we will study the integrals $\mathcal{J}^j(f)$ for $f$ belonging to the classes $S_n(\lambda)$. Furthermore, if $\gamma$ and $\delta$ are real constants, we will obtain the best possible inclusion for $\mathcal{J}^j(f)$ given that $f\in S_n(\lambda)$. Natural corollaries to the main results of this work are that: (i) for all real numbers $\beta>0$, $\alpha\geq 0$, the integrals $\mathcal{J}^j(f)$, $j=1,2$ preserve starlikeness and convexity in the open unit disk, and (ii) our results improve and extend many known ones in all the special cases. The main results are presented in Section 3, while we discuss the many special cases arising from taking $m=1$ in Section 4. In the next section we give some lemmas necessary for the proof of our results.
\section{Preliminary Lemmas} Let $P$ denote the class of functions $p(z)=1+c_1z+c_2z^2+\cdots$ which are regular in $E$ and satisfy Re $p(z)>0$, $z\in E$. We shall need the following lemmas. \begin{lemma}[\cite{BO}] Let $u=u_1+u_2i$, $v=v_1+v_2i$ and $\psi(u,v)$ a complex-valued function satisfying: {\rm(a)} $\psi(u,v)$ is continuous in a domain $\Omega$ of $\mathbb{C}^2$, {\rm(b)} $(1,0)\in\Omega$ and Re$\psi(1,0)>0$, {\rm(c)} Re$\psi(\lambda+(1-\lambda)u_2i, v_1)\leq\lambda$ when $(\lambda+(1-\lambda)u_2i, v_1)\in\Omega$ and $2v_1\leq -(1-\lambda)(1+u_2^2)$ for real number $0\leq\lambda<1$. If $p\in P$ such that $(p(z),zp'(z))\in\Omega$ and $Re$ $\psi(p(z),zp'(z))>\lambda$ for $z\in E$, then $Re$ $p(z)>\lambda$ in $E$. \end{lemma} The above lemma is an abridged form of a more detailed one in \cite{BO}. \begin{lemma}[\cite{EE}] Let $\eta$ and $\mu$ be complex constants and $h(z)$ a convex univalent function in $E$ satisfying $h(0)=1$ and $Re(\eta h(z)+\mu)>0$. Suppose $p\in P$ satisfies the differential subordination: \begin{equation} p(z)+\frac{zp'(z)}{\eta p(z)+\mu}\prec h(z),\;\;\;z\in E.\, \label{3} \end{equation} If the differential equation: \begin{equation} q(z)+\frac{zq'(z)}{\eta q(z)+\mu}=h(z),\;\;\;q(0)=1\, \label{4} \end{equation} has a univalent solution $q(z)$ in $E$, then $p(z)\prec q(z)\prec h(z)$ and $q(z)$ is the best dominant in $~(\ref{3})$. \end{lemma} The formal solution of ~(\ref{4}) is given as \[ q(z)=\frac{zF'(z)}{F(z)}=\frac{\eta+\mu}{\eta}\left(\frac{H(z)}{F(z)}\right)^\eta-\frac{\mu}{\eta} \] where \[ F(z)^\eta=\frac{\eta+\mu}{z^\mu}\int_0^zt^{\mu-1}H(t)^\eta dt \] and \[ H(z)=z\exp\left(\int_0^z\frac{h(t)-1}{t}dt\right) \] (see \cite{SS,HM}). The authors in \cite{SS} gave sufficient conditions for the univalence of the solution, $q(z)$, of ~(\ref{4}), as well as some generalised univalent solutions for some given $h(z)$. The second part of the next lemma is the completion of Lemma 2.2 in \cite{KT}.
\begin{lemma}[\cite{KT}] Let $f\in A$ and $\zeta>0$ be real. {\rm (i)} If for $z\in E$, $D^{n+1}f(z)^\zeta/D^nf(z)^\zeta$ is independent of $n$, then \begin{equation} \frac{D^{n+1}f(z)^\zeta}{D^nf(z)^\zeta}=\zeta\frac{D^{n+1}f(z)}{D^nf(z)}.\, \label{5} \end{equation} {\rm (ii)} The equality $~(\ref{5})$ also holds if $D^{n+1}f(z)/D^nf(z)$ is independent of $n$, $z\in E$. \end{lemma} \begin{proof} The proof of the first part of the above lemma was presented in \cite{KT}. As for (ii), let $D^{n+1}f(z)/D^nf(z)$ assume the same value for all $n\in\mathbb{N}$. For $n=0$, the assertion is easy to verify. Let $n=1$. Then \begin{align*} \frac{D^2f(z)^\zeta}{D^1f(z)^\zeta} &=1+\frac{zf''(z)}{f'(z)}+(\zeta-1)\frac{zf'(z)}{f(z)}\\ &=\frac{D^2f(z)}{D^1f(z)}+(\zeta-1)\frac{D^1f(z)}{D^0f(z)}. \end{align*} Since $D^1f(z)/D^0f(z)=D^2f(z)/D^1f(z)$ we have \[ \frac{D^2f(z)^\zeta}{D^1f(z)^\zeta}=\zeta\frac{D^2f(z)}{D^1f(z)}. \] Now suppose ~(\ref{5}) holds for some integer $k$. Then \begin{equation} \frac{D^{k+2}f(z)^\zeta}{D^{k+1}f(z)^\zeta}=\frac{D^{k+2}f(z)}{D^{k+1}f(z)}+(\zeta-1)\frac{D^{k+1}f(z)}{D^kf(z)}.\, \label{6} \end{equation} Since $D^{k+1}f(z)/D^kf(z)$ has the same value for each $k\in\mathbb{N}$, we can write ~(\ref{6}) as \[ \frac{D^{k+2}f(z)^\zeta}{D^{k+1}f(z)^\zeta}=\frac{D^{k+2}f(z)}{D^{k+1}f(z)}+(\zeta-1)\frac{D^{k+2}f(z)}{D^{k+1}f(z)} \] which implies \[ \frac{D^{k+2}f(z)^\zeta}{D^{k+1}f(z)^\zeta}=\zeta\frac{D^{k+2}f(z)}{D^{k+1}f(z)}. \] Thus the lemma follows by induction. \end{proof} \begin{remark} Let $f\in S_n(\lambda)$. Then there exists $p\in P$ such that \[ \frac{D^{n+1}f(z)}{D^nf(z)}=\lambda+(1-\lambda)p(z) \] independent of $n\in \mathbb{N}$. Hence for $f\in S_n(\lambda)$, the assertion of Lemma 2 holds. Thus we have \[ Re\frac{D^{n+1}f(z)^\zeta}{D^nf(z)^\zeta}=\zeta Re\frac{D^{n+1}f(z)}{D^nf(z)}>\zeta\lambda. 
\] In particular, if $\lambda=0$, then for $\zeta>0$ we have Re $\frac{D^{n+1}f(z)^\zeta}{D^nf(z)^\zeta}>0$ if and only if Re $\frac{D^{n+1}f(z)}{D^nf(z)}>0$. \end{remark} \section{Main Results} \begin{theorem} Let $\alpha\geq 0$. Suppose for $\alpha>0$, the real number $\lambda$ is defined such that $0\leq\alpha\lambda<1$. If $f\in S_n(\lambda)$, then $\mathcal{J}^j(f)\in S_n(\frac{\alpha}{\beta}\lambda)$, $j=1,2$. \end{theorem} \begin{proof} Let $f\in S_n(\lambda)$ have the form ~(\ref{1}). If $\alpha=0$, then $\mathcal{J}^j(f)=z$ by evaluation using the Beta and Gamma functions. Thus the result holds trivially in this case. Suppose $\alpha>0$; then we can write \[ f(z)^\alpha=z^\alpha+A_2(\alpha)z^{\alpha+1}+... \] where $A_k(\alpha)$, $k=2,3,...$, depends on the coefficients $a_k$ of $f(z)$ and the index $\alpha$. Thus evaluating the integrals in series form, also using the Beta and Gamma functions and noting that \[ \binom{\sigma}{\gamma}=\frac{\Gamma(\sigma+1)}{\Gamma(\sigma-\gamma+1)\Gamma(\gamma+1)} \] we obtain \[ \mathcal{J}_m^1(f)^\beta=z^\beta+\sum_{k=2}^\infty\left(\frac{\beta+\gamma}{\beta+\gamma+k-1}\right)^m A_k(\alpha)z^{\beta+k-1} \] and \[ \mathcal{J}_m^2(f)^\beta=z^\beta+\frac{\Gamma(\beta+\gamma+m)}{\Gamma(\beta+\gamma)}\sum_{k=2}^\infty\frac{\Gamma(\beta+\gamma+k-1)}{\Gamma(\beta+\gamma+m+k-1)}A_k(\alpha)z^{\beta+k-1}. \] From the above series expansions we can see that $\mathcal{J}_0^j(f)^\beta=f(z)^\alpha$, $j=1,2$ are well defined. Also from the series expansions we find the recursive relation \begin{equation} \mu \mathcal{J}_m^j(f)^\beta+z(\mathcal{J}_m^j(f)^\beta)'=\xi\mathcal{J}_{m-1}^j(f)^\beta,\;\;\;j=1,2\, \label{7} \end{equation} where $\mu=\gamma$ and $\xi=\beta+\gamma$ for $j=1$, and $\mu=\gamma+m-1$ and $\xi=\beta+\gamma+m-1$ if $j=2$. Furthermore let $\mu=\mu_1+\mu_2i$.
Now applying the operator $D^n$ on ~(\ref{7}) we have \[ \frac{D^{n+1}\mathcal{J}_{m-1}^j(f)^\beta}{D^n\mathcal{J}_{m-1}^j(f)^\beta}=\frac{\mu D^{n+1}\mathcal{J}_m^j(f)^\beta+D^{n+2}\mathcal{J}_m^j(f)^\beta}{\mu D^n\mathcal{J}_m^j(f)^\beta+D^{n+1}\mathcal{J}_m^j(f)^\beta}. \] Let $p(z)=\frac{D^{n+1}\mathcal{J}_m^j(f)^\beta}{D^n\mathcal{J}_m^j(f)^\beta}$. Then \begin{equation} \frac{D^{n+1}\mathcal{J}_{m-1}^j(f)^\beta}{D^n\mathcal{J}_{m-1}^j(f)^\beta}=p(z)+\frac{zp'(z)}{\mu+p(z)}.\, \label{8} \end{equation} Define $\psi(p(z),zp'(z))=p(z)+\frac{zp'(z)}{\mu+p(z)}$ for $\Omega=[\mathbb{C}-\{-\mu\}]\times\mathbb{C}$. Obviously $\psi$ satisfies the conditions (a) and (b) of Lemma 1. Now let $0\leq\lambda_0=\alpha\lambda<1$. Then $\psi(\lambda_0+(1-\lambda_0)u_2i,v_1)=\lambda_0+(1-\lambda_0)u_2i+\tfrac{v_1}{\mu+(\lambda_0+(1-\lambda_0)u_2i)}$ so that Re $\psi(\lambda_0+(1-\lambda_0)u_2i,v_1)=\lambda_0+\tfrac{(\mu_1+\lambda_0)v_1}{(\mu_1+\lambda_0)^2+(\mu_2+(1-\lambda_0)u_2)^2}$. If $v_1\leq-\tfrac{1}{2}(1-\lambda_0)(1+u_2^2)$, then Re $\psi(\lambda_0+(1-\lambda_0)u_2i,v_1)\leq\lambda_0$ if and only if $\mu_1+\lambda_0\geq 0$. This is true if Re $\mu=\mu_1\geq 0$ since $\lambda_0$ is nonnegative. Thus by Lemma 1, if Re $\mu\geq 0$, then Re $\psi(p(z),zp'(z))>\lambda_0$ implies Re $p(z)>\lambda_0$. That is \[ Re\;\frac{D^{n+1}\mathcal{J}_m^j(f)^\beta}{D^n\mathcal{J}_m^j(f)^\beta}>\lambda_0\;\;\text{if}\;\;Re\;\frac{D^{n+1}\mathcal{J}_{m-1}^j(f)^\beta}{D^n\mathcal{J}_{m-1}^j(f)^\beta}>\lambda_0. \] Since $\mathcal{J}_0^j(f)^\beta=f(z)^\alpha$ we have Re $\frac{D^{n+1}f(z)^\alpha}{D^nf(z)^\alpha}>\lambda_0\Rightarrow$ Re $\frac{D^{n+1}\mathcal{J}_1^j(f)^\beta}{D^n\mathcal{J}_1^j(f)^\beta}>\lambda_0\Rightarrow$ Re $\frac{D^{n+1}\mathcal{J}_2^j(f)^\beta}{D^n\mathcal{J}_2^j(f)^\beta}>\lambda_0\Rightarrow...$ and so on for all $m\in N$.
By Lemma 2, we have: Re $\frac{D^{n+1}f(z)}{D^nf(z)}>\frac{\lambda_0}{\alpha}\Rightarrow$ Re $\frac{D^{n+1}\mathcal{J}_1^j(f)}{D^n\mathcal{J}_1^j(f)}>\frac{\lambda_0}{\beta}\Rightarrow$ Re $\frac{D^{n+1}\mathcal{J}_2^j(f)}{D^n\mathcal{J}_2^j(f)}>\frac{\lambda_0}{\beta}\Rightarrow...$ and so on for all $m\in N$. By setting $\lambda_0=\alpha\lambda$ we have Theorem 1. \end{proof} The next theorem will lead us to the best possible inclusion relations. \begin{theorem} Let $\alpha\geq 0$. Suppose for $\alpha>0$, the real number $\lambda$ is defined such that $0\leq\alpha\lambda<1$. If \[ Re\;\frac{D^{n+1}\mathcal{J}_{m-1}^j(f)^\beta}{D^n\mathcal{J}_{m-1}^j(f)^\beta}>\alpha\lambda,\;\;then\;\; \frac{D^{n+1}\mathcal{J}_m^j(f)^\beta}{D^n\mathcal{J}_m^j(f)^\beta}\prec q(z) \] where \begin{equation} q(z)=\frac{z^{1+\mu}(1-z)^{-2(1-\alpha\lambda)}}{\int_0^zt^\mu(1-t)^{-2(1-\alpha\lambda)}dt}-\mu\, \label{9} \end{equation} and $\mu=\gamma$ for $j=1$ and $\mu=\gamma+m-1$ for $j=2$. \end{theorem} \begin{proof} As in the preceding theorem, the case $\alpha=0$ holds trivially. Now for $\alpha>0$, let $0\leq\lambda_0=\alpha\lambda<1$ and suppose \[ Re\;\frac{D^{n+1}\mathcal{J}_{m-1}^j(f)^\beta}{D^n\mathcal{J}_{m-1}^j(f)^\beta}>\lambda_0. \] Then from ~(\ref{8}), we have \[ p(z)+\frac{zp'(z)}{\mu+p(z)}\prec\frac{1+(1-2\lambda_0)z}{1-z}. \] Now by considering the differential equation \[ q(z)+\frac{zq'(z)}{\mu+q(z)}=\frac{1+(1-2\lambda_0)z}{1-z}, \] whose univalent solution is given by ~(\ref{9}) (see \cite{SS}), by Lemma 2 we have the subordination \[ p(z)=\frac{D^{n+1}\mathcal{J}_m^j(f)^\beta}{D^n\mathcal{J}_m^j(f)^\beta}\prec q(z)\prec\frac{1+(1-2\lambda_0)z}{1-z}, \] where $q(z)$ is the best dominant, which proves the theorem. \end{proof} \section{The special case $m=1$} \medskip In this section we discuss the integral ~(\ref{2}), which coincides with the case $m=1$ of both integrals $\mathcal{J}_m^j(f)$. In particular we take $\lambda=0$.
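As an independent consistency check (not in the original), the explicit best dominants used in this section for $\mu=0$ and $\mu=1$ can be verified against the differential equation with sympy; the helper name is hypothetical.

```python
import sympy as sp

z = sp.symbols('z')

def check_dominant(q, mu):
    """Verify q + z*q' / (mu + q) == (1 + z)/(1 - z), the lambda_0 = 0 case."""
    lhs = q + z * sp.diff(q, z) / (mu + q)
    rhs = (1 + z) / (1 - z)
    return sp.simplify(sp.together(lhs - rhs)) == 0

q0 = 1 / (1 - z)                                             # mu = 0 case
q1 = z**2 / ((1 - z) * ((1 - z) * sp.log(1 - z) + z)) - 1    # mu = 1 case
```

Both expressions also satisfy $q(0)=1$, as required of the solution of the dominant equation.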
In this case, our first corollary, a simple one from Theorem 1, is the following: \begin{corollary} The class $S_n$ is closed under $\mathcal{J}(f)$. \end{corollary} This result is more general than the result of Miller et al. \cite{MM} (Theorem 2, p. 162), in which case $\delta=\gamma$. A major breakthrough with our method is the fact that the integral ~(\ref{2}) passes through, preserving all the geometry (starlikeness and convexity, for example) of $f$, without having to drop any member of the sets on which the parameters $\alpha\geq 0$ and $\beta>0$ were defined, which was not the case in many earlier works. This will become more evident in the following more specific cases (cf. \cite{MM}). {\rm (i)} If $\alpha+\delta=\beta+\gamma=1$, we have \begin{corollary} If $f\in S_n$, then \[ \mathcal{J}(f)=\left\{z^{\beta-1}\int_0^z\left(\frac{f(t)}{t}\right)^\alpha dt\right\}^{\frac{1}{\beta}}=z+\cdots \] also belongs to $S_n$. \end{corollary} {\rm (ii)} If $\alpha+\delta=1$, $\beta=1$ and $\gamma=0$, we have \begin{corollary} If $f\in S_n$, then \[ \mathcal{J}(f)=\int_0^z\left(\frac{f(t)}{t}\right)^\alpha dt=z+\cdots \] also belongs to $S_n$. \end{corollary} {\rm (iii)} If $\alpha+\delta=\beta+\gamma=\alpha+\eta+\gamma$, we have \begin{corollary} If $f\in S_n$, then \[ \mathcal{J}(f)=\left\{\frac{\alpha+\gamma+\eta}{z^\gamma}\int_0^zt^{\gamma+\eta}f(t)^\alpha dt\right\}^{\frac{1}{\beta}}=z+\cdots \] also belongs to $S_n$. \end{corollary} From the above corollary, we can obtain various sequences of starlike and convex functions (and more generally, of $S_n$ functions): for example, if $\gamma+\eta=1$, $\alpha=1$ and $\eta=k=0,1,2\cdots$; and if $\gamma=0$, $\alpha=1$ and $\eta=k=0,1,2\cdots$, we obtain, respectively, the following sequences of $S_n$ functions: \[ \left\{2z^{k-1}\int_0^zf(t)dt\right\}^{\frac{1}{k+1}}=z+\cdots,\;\;\;k=0,\;1,\;2,\cdots \] and \[ \left\{(k+1)\int_0^zt^{k-1}f(t)dt\right\}^{\frac{1}{k+1}}=z+\cdots,\;\;\;k=0,\;1,\;2,\cdots.
\] For starlike functions, the above sequences are due to Miller et al. \cite{MM}. Next we consider the best possible inclusion for the integral $\mathcal{J}(f)$ for the two cases $\mu=0,1$. For these two cases, we have \[ q(z)=\frac{1}{1-z},\;\;\;\mu=0, \] and \[ q(z)=\frac{z^2}{(1-z)[(1-z)\ln(1-z)+z]}-1,\;\;\;\mu=1. \] But Re $q(z)\geq q(-r)$ for $|z|\leq r<1$. Thus we have \[ Re\;q(z)\geq\frac{1}{1+r},\;\;\;\mu=0, \] and \[ Re\;q(z)\geq\frac{r^2}{(1+r)[(1+r)\ln(1+r)-r]}-1,\;\;\;\mu=1. \] Letting $r\rightarrow 1^{-}$, we have Re $q(z)>\rho$ (say), $z\in E$. Since $\frac{D^{n+1}\mathcal{J}(f)^\beta}{D^n\mathcal{J}(f)^\beta}\prec q(z)$, we have Re $\frac{D^{n+1}\mathcal{J}(f)^\beta}{D^n\mathcal{J}(f)^\beta}>\rho$. By Lemma 3, this implies Re $\frac{D^{n+1}\mathcal{J}(f)}{D^n\mathcal{J}(f)}>\frac{\rho}{\beta}$ so that the following best possible inclusions follow. \begin{corollary} Let $f\in S_n$. If $\delta$ is a real number and $\gamma=0$, then \[ \mathcal{J}(f)=\left\{\beta\int_0^zt^{\delta-1}f(t)^\alpha dt\right\}^{\frac{1}{\beta}}=z+\cdots \] belongs to $S_n(\frac{1}{2\beta})$. \end{corollary} \begin{corollary} Let $f\in S_n$. If $\delta$ is a real number and $\gamma=1$, then \[ \mathcal{J}(f)=\left\{\frac{\beta+1}{z}\int_0^zt^{\delta-1}f(t)^\alpha dt\right\}^{\frac{1}{\beta}}=z+\cdots \] belongs to $S_n(\frac{3-4\ln2}{2\beta(2\ln2-1)})$. \end{corollary} On a final note, if we take $\beta=1$ in Corollaries 5 and 6 we then have the following special cases. \begin{corollary} Let $f\in S_n$. If $\delta$ is a real number and $\gamma=0$, then \[ \mathcal{J}(f)=\int_0^zt^{\delta-1}f(t)^\alpha dt=z+\cdots \] belongs to $S_n(\frac{1}{2})$. \end{corollary} \begin{corollary} Let $f\in S_n$. If $\delta$ is a real number and $\gamma=1$, then \[ \mathcal{J}(f)=\frac{2}{z}\int_0^zt^{\delta-1}f(t)^\alpha dt=z+\cdots \] belongs to $S_n(\frac{3-4\ln2}{2(2\ln2-1)})$.
\end{corollary} Miller et al. \cite{MM} proved that if $f\in S^\ast$, then $\mathcal{J}(f)\in S^\ast(\frac{\sqrt{17}-3}{4})$, and also that if $f\in K$, then $\mathcal{J}(f)\in K(\frac{\sqrt{17}-3}{4})$; our last corollaries have now raised these to best-possible status. \medskip {\it Acknowledgements.} This work was carried out at the Centre for Advanced Studies in Mathematics, CASM, Lahore University of Management Sciences, Lahore, Pakistan during the author's postdoctoral fellowship at the Centre. The author is indebted to all staff of CASM for their hospitality.
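A quick numeric comparison (illustrative, not from the paper) confirms that the order of starlikeness in the last corollary improves on the constant of Miller et al.:

```python
import math

# Order of starlikeness from the gamma = 1, beta = 1 corollary above
best_possible = (3 - 4 * math.log(2)) / (2 * (2 * math.log(2) - 1))
# Earlier constant of Miller et al. for the same integral
miller_et_al = (math.sqrt(17) - 3) / 4
```

Numerically `best_possible` is about 0.294 while `miller_et_al` is about 0.281, so the new bound is strictly larger.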
\section{Introduction}\label{intro} \subsection{Motivation}\label{motivation} The motivation of this paper is to provide a unified approach to the various ways that fluids are described in physics. In particular the methods used by relativists, fluid mechanists, and nuclear physicists have grown distinct. In many areas of physics a unified approach is provided by the lagrange method, which for fluids is developed here. \subsection{Methodology}\label{methodology} The methodology used is first to simplify a perfect fluid in order to investigate whether methods of field theory can be applied to it, and then to generalize a perfect fluid to try to establish contact with more physical fluids. \subsection{Other approaches to fluids}\label{contact} A perfect fluid has a variational formulation \cite{hargreaves,RWH,SW,taub} which uses the first law of thermodynamics. In such a formulation clebsch potentials \cite{clebsch,lamb,baldwin,balkovsky,GH,rund79,eckart} for the comoving fluid vector field are used. Here this approach is both applied to less general fluids and to more general fluids. Other approaches to fluids include the following {\bf twelve}. The {\it first} uses lagrangians dependent on combinations of clebsch potentials which do not necessarily form a vector \cite{mdr41}. The {\it second} is that the comoving vector can be thought of as $U^a=\dot{x}^a$, so that a perfect fluid is a type of generalization of a point particle; there then turns out to be a fluid generalization of a membrane \cite{mdrfm}. The {\it third} is that the charge substitution $\partial_a\rightarrow\partial_a+\imath eA_a$ can be applied to fluids as well as fields, and this leads to a model of symmetry breaking \cite{mdr25}. The {\it fourth} is that the navier-stokes equation has a lagrangian formulation \cite{CBB,constantin,DMB,finlayson,UC}, but the lagrangian has a different measure and also image fields.
The {\it fifth} is that hydrodynamics can be expressed using a grad expansion \cite{grad,IS,HL,EXG} which needs an entropy vector. The {\it sixth} is that contemporary bjorken models use the grad expansion \cite{bjorken,moss,muronga,NS,HJP}. The {\it seventh} is fluid plasmas \cite{anile,achterberg}. The {\it eighth} is elastic models \cite{ABS}\S3, where the density rather than the pressure is used as the lagrangian. The {\it ninth} is other quantization methods such as brst and path integral applied to fluids \cite{BH}. The {\it tenth} is superfluids \cite{CK}. The {\it eleventh} is spinning fluids \cite{HHKN,jackiw,NHN}. The {\it twelfth} is cosmology \cite{LR}, where clebsch potentials have been used. \subsection{Conventions}\label{conventions} The word potential is disambiguated by referring to potentials for a vector field as clebsch potentials and to potentials that occur in lagrange theory as coefficient functions. When a measure is suppressed it is $\int\sqrt{-g}dx^4$, not $\int d\tau$, unless otherwise stated. $\mu$ is the density and ${\cal P}$ is the pressure. $q$ is a clebsch potential. $\sigma$ is used for a clebsch potential, a pauli matrix and the shear of a vector; to disambiguate, the pauli matrix is always $\sigma_p$ and the shear is labelled with the vector it is taken with respect to, $\stackrel{U}{\sigma}$. Capital $\Pi$ indicates a momentum with respect to the proper time $\tau$, not the coordinate time $t$. $a,b,c,\ldots$ are spacetime indices, $i,j,k,\ldots$ label sets of fields and momenta, and $\iota,\kappa,\ldots$ label constraints. The signature is $-+++$. \section{The perfect fluid}\label{pf} For a perfect fluid the lagrangian is taken to be the pressure, ${\cal L}={\cal P}$, and the action is \begin{equation} I=\int dx^4{\cal P}.
\label{pfaction} \end{equation} The clebsch potentials are given by \begin{equation} hV_a=W_a=\sigma_a+\theta s_a,~~~~~~~ V_aV^a=-1, \label{simplecleb} \end{equation} where, if more potentials are needed, it is straightforward to instate them; there are several sign conventions for \ref{simplecleb}. The clebsch potentials are sometimes given names: $\sigma$ is called the higgs because it has a similar role to the higgs field in symmetry breaking using fluids \cite{mdr25,mdr41}, $\theta$ is called the thermasy and $s$ the entropy \cite{SW}. Variation is achieved via the first law of thermodynamics \begin{equation} \delta{\cal P}=n\delta h-nT\delta s=-nV_a\delta W^a-nT\delta s,~~~~~~~ nh=\mu+{\cal P}, \label{1law} \end{equation} where $n$ is the particle number and $h$ is the enthalpy. Metrical variation yields the stress \begin{equation} T_{ab}=(\mu+{\cal P})V_aV_b+{\cal P} g_{ab}, \label{pfstress} \end{equation} the n\"other currents $j^a=\delta I/\delta q_a$ are \begin{equation} j_\sigma^a=-nV^a,~~~ j_\theta^a=0,~~~ j_s^a=-n\theta V^a, \label{noether} \end{equation} and variation with respect to the clebsch potentials gives \begin{equation} (nV^a)_a=\dot{n}+n\Theta=0,~~~ \dot{s}=0,~~~ \dot{\theta}=T, \label{cbemo} \end{equation} where $\Theta\equiv V^a_{.;a}$ is the vector's expansion: thus the conservation of the n\"other currents \ref{noether} gives the same equations \ref{cbemo} as varying the clebsch potentials; the normalization condition $V_aV^a=-1$ and \ref{cbemo} give \begin{equation} \dot{\sigma}=-h. \label{shpf} \end{equation} The bianchi identity is \begin{equation} T^{ab}_{..;b}=n\dot{W}^a+{\cal P}^{,a}, \label{bid} \end{equation} and substituting for $W$ using \ref{simplecleb} and for ${\cal P}$ using \ref{1law}, this vanishes identically.
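As an illustrative aside (not part of the original), the perfect-fluid stress \ref{pfstress} can be checked componentwise for a comoving observer in flat spacetime with a short numpy sketch; the names are hypothetical:

```python
import numpy as np

# Minkowski metric, signature -+++, and a comoving observer V^a = (1,0,0,0),
# so that V_a V^a = -1 as required by the normalization condition.
g = np.diag([-1.0, 1.0, 1.0, 1.0])
V_up = np.array([1.0, 0.0, 0.0, 0.0])
V_down = g @ V_up

def stress(mu, P):
    """Perfect-fluid stress T_ab = (mu + P) V_a V_b + P g_ab (indices down)."""
    return (mu + P) * np.outer(V_down, V_down) + P * g

T = stress(mu=5.0, P=1.0)
trace = np.einsum('ab,ab->', np.linalg.inv(g), T)  # g^{ab} T_ab = 3P - mu
```

In the comoving frame $T_{00}$ reproduces the density and the spatial diagonal the pressure, and the trace is $3{\cal P}-\mu$, as expected for \ref{pfstress}.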
If one attempts to apply existing scalar field fourier oscillator quantization procedures to the above there is the equation \begin{equation} W^a_{.;a} =\Box\sigma+\theta_as^a+\theta\Box s =\left(hV^a\right)_a =\dot{h}+h\Theta =\dot{h}-\frac{\dot{n}}{n}h =h\left(\ln\left(\frac{h}{n}\right)\right)^\circ, \label{pfbxsig} \end{equation} and if this vanishes the enthalpy $h$ is proportional to the particle number $n$; for an example see the next section \S\ref{eospf}. The pressure ${\cal P}$ and density $\mu$ are only implicitly defined in terms of the clebsch potentials, so it is not clear what operators should correspond to them. Another possibility is to note that \ref{cbemo} are first order differential equations and to try to replace them with spinorial equations; however this would require a spinorial absolute derivative in place of the vectorial absolute derivative, see \cite{PR}\S4.4. The canonical clebsch momenta are given by $\Pi^i=\delta{I}/\delta\dot{q^i}$ \begin{equation} \Pi^\sigma=-n,~~~ \Pi^\theta=0,~~~ \Pi^s=-n\theta, \label{pfmom} \end{equation} and these allow the n\"other currents \ref{noether} to be expressed as \begin{equation} j^a_q=\Pi_qV^a. \label{jpi} \end{equation} The standard poisson bracket is defined by \begin{equation} \left\{A,B\right\}\equiv \frac{\delta A}{\delta q_i}\frac{\delta B}{\delta \Pi^i}- \frac{\delta A}{\delta \Pi_i}\frac{\delta B}{\delta q^i}, \label{poissonbracket} \end{equation} where $i$, which labels each field, is summed; the dirac matrix is defined by \begin{equation} C_{\iota\kappa}\equiv\left\{\phi_\iota,\phi_\kappa\right\}, \label{diracm} \end{equation} and the dirac bracket is defined by \begin{equation} \left\{A,B\right\}\ast\equiv\left\{A,B\right\}- \left\{A,\phi_\iota\right\}{\rm Inv}\left(C_{\iota\kappa}\right)\left\{\phi_\kappa,B\right\}, \label{diracb} \end{equation} where Inv$(C_{\iota\kappa})$ denotes the inverse of $C_{\iota\kappa}$.
For a perfect fluid the constraints are \begin{equation} \phi_1=\Pi^s-\theta \Pi^\sigma,~~~ \phi_2=\Pi^\theta, \label{pfcons} \end{equation} and the dirac matrix \ref{diracm} becomes \begin{equation} C_{\iota\kappa}=-\imath\sigma_{p2}\Pi^\sigma,~~~ {\rm Inv}C_{\iota\kappa}=+\imath\frac{\sigma_{p2}}{\Pi^\sigma}, \label{pfdm} \end{equation} where $\sigma_{p2}$ is the pauli matrix \begin{eqnarray} \sigma_{p1}\equiv \left(\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right),~~~ \sigma_{p2}\equiv \left(\begin{array}{cc} 0 & -\imath \\ +\imath & 0 \end{array} \right),~~~ \sigma_{p3}\equiv \left(\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right). \label{paulimatrix} \end{eqnarray} The dirac bracket \ref{diracb} for \ref{pfcons} is \begin{equation} \{A,B\}\ast=\{A,B\}+\frac{1}{\Pi^\sigma}\frac{\delta B}{\delta \theta} \left(\frac{\delta A}{\delta s}-\theta\frac{\delta A}{\delta \sigma} +\Pi^\sigma\frac{\delta A}{\delta \Pi^\theta}\right) -A\leftrightarrow B, \label{pfdb} \end{equation} acting on the clebsch momenta and clebsch potentials \begin{eqnarray} &&\{\Pi^\sigma,\sigma\}\ast=-1,~~~ \{\Pi^\theta,\theta\}\ast=0,~~~ \{\Pi^s,s\}\ast=-1,\nonumber\\ &&\Pi^\sigma\{\sigma,\theta\}\ast=-\theta,~~~ \Pi^\sigma\{\theta,s\}\ast=-1,~~~ \Pi^\sigma\{\sigma,s\}\ast=0, \label{pfcommun} \end{eqnarray} replacing the dirac brackets by quantum commutators there does not seem to be an easy representation of the resulting algebra as discussed in \cite{mdr27}. To quantize one replaces the dirac bracket with a quantum commutator \begin{equation} \left\{A,B\right\}\ast \rightarrow\frac{1}{\imath\hbar}\left[\hat{A}\hat{B}-\hat{B}\hat{A}\right], \label{qdb} \end{equation} in the present case the second equation of \ref{pfcommun} suggests that $\Pi^\theta=0$, which is assumed from now on. 
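The inverse quoted in \ref{pfdm} follows from $\sigma_{p2}^2=I$; a small numpy sketch (illustrative only, with a sample numerical value for the momentum) confirms it:

```python
import numpy as np

sigma2 = np.array([[0, -1j], [1j, 0]])   # pauli matrix sigma_p2
Pi_sigma = -3.7                          # sample value of the momentum Pi^sigma

C = -1j * sigma2 * Pi_sigma              # dirac matrix C_{iota kappa}
C_inv = 1j * sigma2 / Pi_sigma           # claimed inverse from the text

# Since sigma_p2 squares to the identity, C_inv @ C = sigma2 @ sigma2 = I.
```

The same cancellation works for any nonzero $\Pi^\sigma$, since the $(-\imath)(+\imath)$ and $\Pi^\sigma/\Pi^\sigma$ factors both reduce to one.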
There is the problem of how to realize the coordinate commutation relations in the last three of \ref{pfcommun}, which does not seem possible using the pauli matrices \ref{paulimatrix}, and so this is left for now. Suppose that $\hat{\Pi}^\sigma$ is replaced by a differential operator \begin{equation} \hat{\Pi}^\sigma=-\imath\hbar\partial_x, \label{diffop} \end{equation} where possibilities for $x$ include the spacetime coordinates $x_a$, the particle number $n$ or the proper time $\tau$, which restrict $\sigma$, as to a lesser extent does the dirac operator $\gamma^a\partial_a$; so the simplest choice, $\partial_\sigma$, is taken. To proceed it is necessary to have an explicit hamiltonian, which does not exist in the present case as the pressure ${\cal P}$ and the density $\mu$ are not directly expressible in terms of the clebsch potentials. \section{The explicit linear lagrangian}\label{eospf} The lagrangian is taken to be linear in $W_a^2$ \begin{equation} {\cal L}=-\frac{1}{2}{\cal F}(n,q)W_a^2-\mho(n,q), \label{feqs} \end{equation} where ${\cal F}(n,q)$ and $\mho(n,q)$ are the first and zeroth order coefficient functions of the particle number and clebsch potentials respectively. Metric variation yields \begin{equation} T_{ab}={\cal F}W_aW_b+{\cal L} g_{ab}={\cal F}h^2V_aV_b+{\cal L} g_{ab}, \label{fmet} \end{equation} and requiring this stress to be that of a perfect fluid places the restriction \begin{equation} {\cal F}h^2=nh={\cal P}+\mu,~~~{\rm or}~~~{\cal F}=\frac{n}{h}, \label{noverh} \end{equation} again requiring that the pressure is the lagrangian and using $W^2_a=-h^2$ gives \begin{equation} {\cal L}={\cal P}=\frac{1}{2}nh-\mho=\frac{1}{2}\left({\cal P}+\mu\right)-\mho, \label{nlg} \end{equation} which yields the linear equation of state \begin{equation} {\cal P}=\mu-2\mho, \label{eosl} \end{equation} this is a very restrictive equation of state, as it does not include even simple cases such as ${\cal P}=(\gamma-1)\mu$.
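The step from \ref{nlg} to \ref{eosl} is elementary and can be checked symbolically; the following sympy sketch (illustrative, with hypothetical names, `U` standing in for $\mho$) solves ${\cal P}=\frac{1}{2}({\cal P}+\mu)-\mho$ for the pressure:

```python
import sympy as sp

P, mu, U = sp.symbols('P mu U')  # P: pressure, mu: density, U: zeroth order coefficient function

# The lagrangian condition L = P = (1/2)(P + mu) - U, solved for the pressure.
eos = sp.solve(sp.Eq(P, sp.Rational(1, 2) * (P + mu) - U), P)[0]
```

The result is ${\cal P}=\mu-2\mho$, i.e. the linear equation of state of \ref{eosl}.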
Variation with respect to the clebsch potentials gives \begin{equation} \dot{n}+n\Theta-\bar{\mho}_{,\sigma}=0,~~~ n\dot{s}+\bar{\mho}_{,\theta}=0,~~~ n\dot{\theta}-\bar{\mho}_{,s}+\theta\bar{\mho}_{,\sigma}=0, \label{eaemo} \end{equation} where \begin{equation} \bar{\mho}_q=\mho_{,q}-\frac{1}{2}h^2{\cal F}_{,q}, \label{barmho} \end{equation} and \ref{noverh} has been used. Using the equations of motion \ref{eaemo} the bianchi identities \ref{bid} become \begin{equation} T^{ab}_{..;b}=\frac{1}{2}h^2{\cal F}_a -\left(\theta\bar{\mho}_\theta+\frac{1}{2}h^2{\cal F}_s\right)s_a -n\theta\left(\frac{1}{n}\bar{\mho}_\theta\right)_a -\sigma_a\mho_\sigma -\theta_a\mho_\theta, \label{lbid} \end{equation} for the thermodynamical case \ref{cbemo} and \ref{eaemo} give the zeroth order coefficient function $\mho=nTs$, and then the bianchi identity reduces to ${\cal F}_a=0$ or $n=kh$; thus what in lagrangian \ref{feqs} looked like an arbitrary function ${\cal F}$ has been reduced to a constant, and this will be further considered in the next section. Variation with respect to everything else except the particle number $n$ gives the same as the implicit approach of the previous section \S\ref{pf}. For variation with respect to the particle number $n$ there are three choices: the {\it first} is to simply do it, in which case, for lagrangians in which $n$ is separated and linear, the vectors $W,~V$ are forced to be null and the system is no longer that of a perfect fluid; the {\it second} is to ignore variation with respect to $n$, which for lagrangians in which $n$ is separated and linear amounts to assuming a first law of the form \ref{1law}, so that one is essentially working in the implicit formalism of the previous section \S\ref{pf}; the {\it third} is to alter lagrangians with coefficient functions ${\cal F},\mho$ that depend on $n$, so that variation with respect to $n$ can be chosen to give $h^2{\cal F}_{,n}=-2\mho_{,n}$.
Using \ref{pfmom} for $n$, \ref{shpf} for $h$ and \ref{simplecleb} for $V_a^2$ the lagrangian is \begin{equation} {\cal L}={\cal P}=\frac{1}{2}\dot{\sigma}\Pi^\sigma-\mho, \label{elag} \end{equation} using \ref{eaemo} the hamiltonian is \begin{equation} {\cal H}=\frac{1}{2}\dot{\sigma}\Pi^\sigma+\mho-\frac{\theta}{n}\Pi^\sigma\mho_{,\theta}, \label{eham} \end{equation} when $\mho_{,\theta}=0$ the hamiltonian equals the density ${\cal H}=\mu$. For this explicit form \ref{eham} it is possible to use the operator substitution \ref{diffop} so that \begin{equation} {\cal H}\Psi=-\frac{1}{2}\imath\hbar\dot{\sigma}\Psi_\sigma+\mho\Psi=0, \label{hschro} \end{equation} taking $\dot{\sigma}\Psi_\sigma=\dot{\Psi}$, \ref{hschro} becomes \begin{equation} \imath\hbar\dot{\Psi}=2\mho\Psi, \label{odepsi} \end{equation} integrating \begin{equation} \Psi=A\exp\left(-\frac{2\imath}{\hbar}\int\mho d\tau\right), \label{wavefn} \end{equation} this wavefunction turns out to be too restrictive to be of any use. \section{Simplest three clebsch potential fluid}\label{tpoq} The explicit linear thermodynamic lagrangian is \begin{equation} {\cal L}=-\frac{1}{2}W_a^2-nTs, \label{tol} \end{equation} see the remarks after \ref{lbid}. Varying with respect to the potentials gives \begin{equation} W^a_a=\Box\sigma+\theta_a s^a+\theta\Box s=0,~ s^a\left(\sigma_a+\theta s_a\right)=0,~ \theta^a\left(\sigma_a+\theta s_a\right)=hT. \label{teom} \end{equation} If these equations are thought of as functions of one variable then the second equation gives either $s_a=0$ or $W_a=0$, both of which are of no practical use; therefore two variables are used specifically to seek plane wave solutions.
A simple choice is \begin{equation} \sigma=A\exp\imath\left(k_0t+k_1x\right),~ \theta=B\exp a\imath\left(k_0t+k_1x\right),~ s=C\exp(1-a)\imath\left(k_0t+k_1x\right), \label{wave} \end{equation} which gives \begin{equation} W_a=\imath\left[k_0,k_1,0,0\right]{\cal K},~~~~~~~~ {\cal K}\equiv\left(\frac{A}{BC}+1-a\right)\theta s, \label{WK} \end{equation} the first and second equations of motion \ref{teom} give \begin{equation} (k_0^2-k_1^2){\cal K}=0, \label{12eom} \end{equation} and the third gives \begin{equation} (k_0^2-k_1^2){\cal K}=\frac{T}{a\theta}, \label{3eom} \end{equation} thus the system is decoupled, in other words as \ref{12eom} forces the wave vector to be null the terms in the first equation of \ref{teom} vanish separately, and by \ref{3eom} the system is at zero temperature. Working through in the same manner with the clebsch potentials having both left and right movers, i.e. like \ref{mode}, the same problems arise and there is no indication that mixed movers can generate non-zero temperature. Working through without the restriction that the clebsch potentials are co-directional produces equations too general to proceed with. The hamiltonian is \begin{equation} {\cal H}=-\frac{1}{2}\dot{\sigma}\Pi^\sigma-Ts\Pi^\sigma, \label{h3} \end{equation} in the present case $\dot{\sigma}=h=n=-\Pi^\sigma$ so that \begin{equation} {\cal H}=\frac{1}{2}\Pi^\sigma-Ts\Pi^\sigma \label{h3t} \end{equation} using the substitution \ref{diffop} with $x=\sigma$ gives \begin{equation} {\cal H}\Psi=-\frac{\imath\hbar}{2}\Psi_{\sigma\sigma}+Ts\Psi_\sigma=0, \label{hw3} \end{equation} which has the solution \begin{equation} \Psi=A\exp\left(-\frac{2\imath Ts\sigma}{\hbar}\right)+B, \label{psi3} \end{equation} which is again too restrictive.
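A sketch of the integration behind \ref{psi3}, under the assumption (implicit above) that $T$ and $s$ can be treated as independent of $\sigma$:

```latex
\Psi_{\sigma\sigma}=-\frac{2\imath Ts}{\hbar}\Psi_\sigma
~~\Rightarrow~~
\Psi_\sigma=C\exp\left(-\frac{2\imath Ts\sigma}{\hbar}\right)
~~\Rightarrow~~
\Psi=A\exp\left(-\frac{2\imath Ts\sigma}{\hbar}\right)+B ,
```

with $A=\imath\hbar C/2Ts$; at zero temperature this degenerates to the linear wavefunction $\Psi=C\sigma+B$, consistent with the decoupled zero temperature system found above.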
\section{One clebsch potential fluid with two coefficient functions}\label{opof} The lagrangian is taken to be \begin{equation} {\cal L}=-\frac{1}{2}{\cal F}(\sigma)\sigma_a^2-\mho(\sigma), \label{1potlag} \end{equation} the metric stress is \begin{equation} T_{ab}={\cal F}\sigma_a\sigma_b+{\cal L} g_{ab}, \label{1potms} \end{equation} ${\cal P},\mu,n,h$ are recovered in the same way as in the last section. Varying with respect to $\sigma$ \begin{equation} {\cal F}\Box\sigma+{\cal F}^a\sigma_a+\mho_\sigma=0, \label{1potwe} \end{equation} which for simple ${\cal F}$ has simple spherically symmetric solutions; for wave solutions use zeroth order coefficient function $\mho=m^2\sigma^2/2$ and \begin{equation} \sigma_\pm=A_+\exp(\imath k\cdot x)\pm A_-\exp(-\imath k\cdot x), \label{mode} \end{equation} with $\sigma=\sigma_+$, \ref{1potwe} becomes \begin{equation} \left({\cal F}k_a^2+m^2\right)\sigma+{\cal F}'k_a^2\sigma_-^2=0, \label{wesol} \end{equation} the last term forces either $A_+$ or $A_-$ to vanish. \section{One fluid described using one vector field}\label{ofove} In the one fluid one vector approach one hopes to find a lagrangian which recovers as much of the stress of \cite{MTW} eq.~22.16d as possible \begin{eqnarray} T_{ab}&=&(\mu+{\cal P}-\xi\Theta)U_aU_b-2\eta\stackrel{U}{\sigma}_{ab}+2q_{(a}U_{b)}+({\cal P}-\xi\Theta)g_{ab},\nonumber\\ S_a&=&nsU_a+\frac{q_a}{T}, \label{mtwfluidstress} \end{eqnarray} where $\mu$ is the density, ${\cal P}$ is the pressure, $\xi\ge0$ is the coefficient of bulk viscosity, $\eta\ge0$ is the coefficient of dynamic viscosity, $\stackrel{U}{\sigma}$ is the shear, $q$ is the heat flux, $S_a$ and $s$ are the entropy vector and scalar and $T$ is the temperature.
Consider a fluid lagrangian dependent on one velocity which can be expanded \begin{equation} {\cal L}={\cal L}(V)={\cal L}(V^{0+},V^1,V^{1+},V^2,\ldots) \label{lagexpan} \end{equation} where \begin{eqnarray} {\cal L}(V^{0+})&=&k^{0+}{\cal P},\\ {\cal L}(V^1)&=&k^1\Theta=k^1V^a_{.;a},\nonumber\\ {\cal L}(V^{1+})&=&k^{1+}{\cal P}\Theta,\nonumber\\ {\cal L}(V^2)&=&k^2_1\dot{\Theta}+k^2_2R_{ab}V^aV^b+k^2_3\omega^2+k^2_4\sigma^2+k^2_5\Theta^2+k^2_6\dot{V}^a_{.;a}\nonumber\\ &\ddots&\nonumber \label{lagterms} \end{eqnarray} so that the integer superscript on the velocity $V$ indicates the power in which it occurs in the lagrangian; the meaning of the superscript $+$ will become apparent later. Here just the first three terms are considered, terms of second or Raychaudhuri order and higher are ignored, as are any auxiliary, image, entropy or electromagnetic fields. From a technical point of view just the first term in this expansion has been considered in the previous section \S\ref{pf} and in this section the next two terms are considered. Metrical variation gives the stress \begin{equation} T_{ab}=(\mu+{\cal P})\left(k^{0+}+k^{1+}\Theta\right)V_aV_b-2k^1V_{(a;b)}+{\cal L} g_{ab}, \label{metstress3} \end{equation} \ref{metstress3} does not bear much resemblance to \ref{mtwfluidstress} except that there is a term similar to bulk viscosity appearing. Variation with respect to the clebsch potentials, particularly of ${\cal L}(V^1)$, gives long expressions which are not helpful. Variation with respect to the clebsch velocities gives the momenta \begin{equation} \Pi^\phi=-n\left(k^{0+}+k^{1+}\Theta\right),~~~ \Pi^s=\theta\Pi^\phi,~~~ \Pi^\theta=0, \label{oomom} \end{equation} and these give two constraints \begin{equation} \phi_1=\Pi^s-\theta\Pi^\phi,~~~ \phi_2=\Pi^\theta.
\label{ofovc} \end{equation} In the present case, as the constraints \ref{ofovc} are the same as for the perfect fluid, the dirac bracket between the coordinates and momenta and thus the quantum relations are the same as for the perfect fluid as given above and in \cite{mdr27}. \section{One fluid described using two vector fields}\label{oftve} If one tries to incorporate entropy by choosing it to move in the same direction as fluid flow then the entropy is proportional to the enthalpy and $s=kh$ and the first law becomes \begin{eqnarray} \delta p&=&n\delta h-nT\delta s =-nV_a\delta W^a-nTk\delta h\nonumber\\ &=&-nV_a\delta W^a+nTkV_a\delta W^a =-n^*V_a\delta W^a,\nonumber\\ &&{\rm where~~~}n^*\equiv(1-kT)n, \label{coment} \end{eqnarray} so the problem becomes the same as before with the particle number $n$ replaced by $n^*$. In the one fluid two vectors approach one proceeds as before except the clebsch decomposition \ref{simplecleb} is replaced by \begin{eqnarray} &hV_a=W_a=\sigma_a+\eta\chi_a,&V_aV^a=-1,\nonumber\\ &\ell U_a=X_a=\alpha_a+\beta\gamma_a,&U_aU^a=-1, \label{oftvc} \end{eqnarray} and the first law \ref{1law} is replaced by \begin{equation} \delta p=n\delta h-nT\delta S=-nV^a\delta W_a+nTU^a\delta X_a. \label{1law2} \end{equation} Metrical variation gives \begin{equation} T_{ab}=nhV_aV_b-nT\ell U_aU_b+{\cal P} g_{ab}, \label{oftvmv} \end{equation} when either $T ~ {\rm or} ~ \ell \rightarrow 0$ the second term vanishes. \ref{oftvmv} can be re-written as \begin{equation} T_{ab}=({\cal P}+\mu)\left(V_aV_b-\frac{T}{h \ell}X_aX_b\right)+{\cal P} g_{ab}, \label{oftvstress} \end{equation} There are two choices: {\it firstly} the free case where the clebsch potentials are all independent; {\it secondly} the thermodynamic case where the clebsch potentials are chosen so that the thermasy and entropy change are non-vanishing. The free case essentially duplicates the one vector case of the perfect fluid \S\ref{pf}.
For the thermodynamic case choose \begin{equation} \theta=\eta=\beta,~~~~~~~ s=\chi=\gamma, \label{thermopot} \end{equation} in \ref{oftvc}. Defining two absolute derivatives \begin{equation} \dot{T}_{abc\dots}=V^eT_{abc\dots;e},~~~ T'_{abc\dots}=U^eT_{abc\dots;e}, \label{defabsol} \end{equation} variation with respect to $\sigma,\alpha,\theta{\rm~and~}s$ gives \begin{equation} \dot{n}+n\stackrel{V}{\Theta}=0,~~~~~ (nT)'+nT\stackrel{U}{\Theta}=0,~~~~~~~ \dot{s}=Ts',~~~~~ \dot{\theta}=T\theta', \label{eqmopot} \end{equation} respectively. The relationship between dot and dash derivatives is needed, note \begin{equation} X_a=(\alpha-\sigma)_a+W_a, \label{relXW} \end{equation} thus for an arbitrary tensor $A_{abc\dots}$ there is the relationship between the derivatives \begin{equation} \ell A'_{abc\dots}=(\alpha-\sigma)^eA_{abc\dots;e}+h\dot{A}_{abc\dots}, \label{reldd} \end{equation} using \ref{reldd}, \ref{eqmopot} becomes \begin{eqnarray} &&h\stackrel{V}{\Theta}-\ell\stackrel{U}{\Theta}=\left(\ln(nT)\right)^a(\alpha-\sigma)_a+h\left(\ln(T)\right)^\circ,\nonumber\\ &&\dot{s}=\frac{\ell(\alpha-\sigma)^as_a}{\ell-Th},~~~~~~~ \dot{\theta}=\frac{\ell(\alpha-\sigma)^a\theta_a}{\ell-Th}. \label{sthetadot} \end{eqnarray} In the limit that the two vectors coincide $\alpha\rightarrow\sigma$ and \ref{relXW} and \ref{sthetadot} give $\dot{T}=\dot{s}=\dot{\theta}=0$ so that the thermasy relation is not recovered unless $T=0$. 
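The identity \ref{relXW} follows in one line from the thermodynamic identification \ref{thermopot} (no further input is needed):

```latex
X_a=\alpha_a+\beta\gamma_a=\alpha_a+\theta s_a,~~~~~~~
W_a=\sigma_a+\eta\chi_a=\sigma_a+\theta s_a,~~~~~~~
X_a-W_a=\left(\alpha-\sigma\right)_a ,
```

and contracting $X^e=\ell U^e$ onto an arbitrary tensor, using $hV_a=W_a$, then gives \ref{reldd}.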
The dot momenta are the same as those given by \ref{pfmom}, the dashed momenta are \begin{equation} \Pi^\alpha({\rm dashed})=nT,~~~ \Pi^s({\rm dashed})=n\theta T, \label{dmom} \end{equation} it is necessary to convert these to dot momenta using \begin{equation} \Pi^i\equiv\frac{\delta I}{\delta \dot{q}^i} =\frac{\delta q'^i}{\delta \dot{q}^i}\frac{\delta I}{\delta q'^i} =\frac{\delta q'^i}{\delta \dot{q}^i}\Pi^i({\rm dashed}) \equiv f(q)\Pi^i({\rm dashed}), \label{ddashed} \end{equation} where $i$ is not summed and $f$ is an undetermined function of the clebsch potentials; collecting together \ref{pfmom},\ref{dmom},\ref{ddashed} gives the total momenta \begin{equation} \Pi^\sigma=-n,~~~ \Pi^\alpha=nTf,~~~ \Pi^\theta=0,~~~ \Pi^s=-n\theta+n\theta Tf, \label{totmom} \end{equation} and the three constraints \begin{equation} \phi_1=\Pi^s-\theta\Pi^\sigma-\theta\Pi^\alpha,~~~ \phi_2=\Pi^\theta,~~~ \phi_3=\Pi^\alpha+Tf\Pi^\sigma, \label{totcons} \end{equation} the dirac matrix is \begin{equation} C_{12}=-\Pi^\sigma-\Pi^\alpha, C_{13}=\left(-\frac{\delta f}{\delta s} +\theta\frac{\delta f}{\delta \sigma} +\theta\frac{\delta f}{\delta \alpha}\right)T\Pi^\sigma, C_{23}=-\frac{\delta f}{\delta \theta}T\Pi^\sigma, \label{totdm} \end{equation} the inverse Inv$C$ can be found, but there is no simplification in further expressions. \section{Conclusion}\label{conc} In \S\ref{pf} the lagrangian approach to perfect fluids was presented, the clebsch potentials contain more information than is needed to describe the system so that it is constrained, dirac constraint analysis removes the superfluous degrees of freedom resulting in the algebra \ref{pfcommun}, see also \cite{mdr27}, which when quantized via \ref{qdb} does not seem to lead to a recognizable algebra; also as the pressure and density are implicit rather than explicit functions of the clebsch potentials it is not clear what quantum operators should represent them.
In \S\ref{eospf}, to overcome the lack of explicit expressions for the pressure and density, an explicit lagrangian \ref{feqs} was studied; this has a very restrictive equation of state \ref{eosl}, and for thermodynamic choices of the functions reduces to the example of the next section. In \S\ref{tpoq} the simplest three potential lagrangian \ref{tol} was studied, it seems to lead to an unrealistic quantum theory \ref{hw3}. In \S\ref{opof} a one potential lagrangian \ref{1potlag} was studied, it is just a simple generalization of the klein gordon lagrangian, but the generalization is obstructive enough to prevent successful investigation of it by the fourier oscillator method. In \S\ref{ofove} the perfect fluid is generalized so that it depends on higher powers of the comoving fluid velocity, a term similar to the bulk viscosity appears and the momentum constraints are the same as those for a perfect fluid. In \S\ref{oftve} the perfect fluid is generalized to include two vector fields in the hope that these can represent both density and entropy flow, equating some of the clebsch potentials in each vector leads to plausible thermodynamic equations \ref{sthetadot}. \section{Acknowledgements}\label{acknowledgements} I would like to thank Tom Bridges, Alex Craik and Mattias Marklund for their interest in parts of this work.
\section{Introduction} The idea that remote parts of the universe, or even that {\it different} universes, could be connected by smooth bridges or tunnels constitutes a very intriguing concept, which for many decades has captivated the imagination of science fiction lovers \cite{WHwiki}, \cite{TTwiki}. In physics, spacetimes containing such bridges (called ``wormholes" by J.A. Wheeler) appear in general relativity as solutions of the Einstein field equations. Nearly half a century ago, the concept of wormholes led Wheeler to the discussion of topological entities called geons\footnote{In \cite{geons} Wheeler provides the first diagram of a wormhole as a tunnel connecting two openings in different regions of spacetime.} \cite{geons} and to the conception of geometrodynamics \cite{Geometrodynamics}, where ``matter comes from no matter", and ``charge comes from no charge". Lately, after the fundamental papers by Morris, Thorne and Yurtsever \cite{Morris 1} and Morris and Thorne \cite{Morris 2}, the notion of traversable Lorentzian wormholes has gained much attention within the physics community. These authors showed that such wormholes could, in principle, allow humans not only to travel between universes, or distant parts of a single universe, but also to construct time machines. It has been suggested that black holes and wormholes are interconvertible. In particular, that stationary wormholes could be possible final states of black-hole evaporation \cite{Hayward}. Also, that astrophysical accretion of ordinary matter could convert wormholes into black holes \cite{Novikov} (this issue has recently been discussed in literature and different approaches give, in general, different results - see \cite{Kuhfittig1}, \cite{Sergey}).
Today, it is well known that a wormhole geometry can only appear as a solution of the Einstein field equations if the energy-momentum tensor (EMT) of the matter supporting such a geometry violates the null energy condition (NEC) at least in the neighborhood of the wormhole throat \cite{Visser 1}-\cite{Visser 3} (matter that violates the NEC is usually called {\it exotic}). Although in general relativity there are many examples of matter that are consistent with wormhole spacetimes (see, e.g., \cite{Barcelo}-\cite{Kuhfittig2}), none of them are observable in the real world of astrophysics\footnote{To be fair, we should mention that the solutions discussed in \cite{Barcelo}-\cite{Sushkov 1} were obtained as early as 1973 by Bronnikov \cite{AP} and also by Ellis \cite{Ellis} describing wormhole solutions with a massless, minimally coupled phantom scalar field. }: ``all these spacetimes still remain in the domain of fiction" \cite{Dadhich}. This is a very important and challenging issue in wormhole physics, which is known as the ``exotic matter problem". There are numerous attempts at solving this issue in the literature. Some consider alternative theories of gravity \cite{Nandy}-\cite{Hochberg1} or invoke quantum effects in curved spacetimes, considering wormholes as semiclassical objects \cite{Hochberg2}, \cite{Remo}. Recently, a general no-go theorem was proved by Bronnikov and Starobinsky showing the absence of wormholes in scalar-tensor gravity without ghosts \cite{BS}. A different approach to this issue arises in the context of higher-dimensional theories. In Kaluza-Klein gravity the exotic matter necessary for the formation of a wormhole can appear from the off-diagonal elements of the metric (the gauge fields) and from the $\gamma_{5 5}$ component of the metric (the scalar field), rather than coming from some externally given exotic matter \cite{Singleton}.
Also the so-called Einstein-Gauss-Bonnet theory admits wormhole solutions that would not violate the NEC provided the Gauss-Bonnet coupling constant is negative \cite{Bhawal}, \cite{Dotti}. In addition, it has been proposed that braneworld gravity provides a natural scenario for the existence of traversable wormholes \cite{Bronnikov 1}, \cite{Lobo}. This is because the local high-energy effects and non-local corrections from the Weyl curvature in the bulk could lead to an effective violation of the NEC on the brane even when the standard-model fields satisfy the energy conditions. In this paper we adhere to the latter framework and develop a number of spherically symmetric, static Lorentzian wormholes which are analytic solutions to the equations on the brane. In the Randall $\&$ Sundrum braneworld scenario \cite{Randall2} the effective equations for gravity in $4D$ were derived by Shiromizu, Maeda and Sasaki \cite{Shiromizu}. In vacuum, when matter on the brane is absent and the $4$-dimensional cosmological constant vanishes, these equations reduce to\footnote{Throughout the paper we use the conventions and definitions of Landau and Lifshitz \cite{Landau} and set $G = c = 1$.} \begin{equation} \label{SMS equations} ^{(4)}G_{\alpha \beta} = - \epsilon E_{\alpha\beta}, \end{equation} where $^{(4)}G_{\alpha \beta}$ is the usual Einstein tensor in $4D$; $\epsilon$ is taken to be $- 1$ or $+ 1$ depending on whether the extra dimension is spacelike or timelike, respectively; $E_{\alpha \beta}$ is the projection onto the brane of the Weyl tensor in $5D$. Explicitly, $E_{\alpha\beta} = {^{(5)}C}_{\alpha A \beta B}n^An^B$, where $n^{A}$ is the $5D$ unit vector $(n_{A}n^{A} = \epsilon)$ orthogonal to the brane. This quantity connects the physics in $4D$ with the geometry of the bulk. 
The vacuum field equations on the brane (\ref{SMS equations}) are formally equivalent to the Einstein equations of general relativity with an effective EMT given by $T_{\alpha \beta} = - \left(\epsilon/8 \pi\right) E_{\alpha \beta}$. The crucial point here is that due to its geometrical nature, $E_{\alpha \beta}$ does not have to satisfy the energy conditions applicable to ordinary matter. In fact, there are a number of examples in the literature where the effective EMT corresponds to exotic matter on the brane \cite{Vollick}, \cite{Bronnikov 2}. Thus, $E_{\alpha\beta}$ is the most natural ``matter" supporting wormholes \cite{Bronnikov 1}. However, the set of equations (\ref{SMS equations}) does not form a closed system in $4D$, because $E_{\alpha \beta}$ is unknown without specifying, {\it both} the metric in $5D$, and the way the $4D$ spacetime is identified, i.e., $n^{A}$. The only truly general thing we know is that $E_{\alpha\beta}$ is traceless. Therefore, the only quantity that can be unambiguously specified on the brane is the trace of the curvature scalar $^{(4)}R = {^{(4)}R}^{\alpha}_{\alpha}$. In particular, in empty space \begin{equation} \label{field eqs. for empty space} ^{(4)}R = 0. \end{equation} In this paper we investigate spherically symmetric solutions to this equation of the form \begin{equation} \label{the metric under study} ds^2 = A^2(r) dt^2 - B^2(r) dr^2 - r^2 C^2(r) \; d\Omega^2, \end{equation} where $d\Omega^2 = d\theta^2 + \sin^2{\theta} d\phi^2$ is the line element on a unit sphere. In these coordinates the field equation (\ref{field eqs. for empty space}) can be written as (a prime denotes differentiation with respect to $r$) \begin{equation} \label{field equation for the general spherical metric} \frac{A''}{A} + 2\left(\frac{A'}{A} - \frac{B'}{B}\right)\left(\frac{1}{r} + \frac{C'}{C}\right) - \frac{A' B'}{A B} + \frac{1}{r^2}\left(1 - \frac{B^2}{C^2}\right) + 2 \frac{C''}{C} + \frac{C'}{C}\left(\frac{6}{r} + \frac{C'}{C}\right) = 0. 
\end{equation} In curvature coordinates, i.e., in coordinates where $C(r) = 1$, this is a second-order differential equation for $A(r)$ and a first-order one for $B(r)$. Therefore, it has a nondenumerable infinity of solutions parameterized by some arbitrary function of the radial coordinate $r$ \cite{Visser}. In curvature coordinates the simplest (non-trivial) solutions to (\ref{field equation for the general spherical metric}) are obtained by setting either $A = 1$ or $B = 1$. The former case yields the spatial-Schwarzschild wormhole \cite{Dadhich}, while the latter one gives $A^2 = \left(1 - m/r\right)^2$, which is a black hole with total mass $M = m$ and a horizon at $r = M$. The next simple solution is the Schwarzschild metric \begin{equation} \label{Schwarzschild metric} ds^2 = \left(1 - \frac{2 m}{r}\right) d t^2 - \left(1 - \frac{2 m}{r}\right)^{- 1} d r^2 - r^2 d\Omega^2. \end{equation} If one chooses either $A(r)$ or $B(r)$ as in the Schwarzschild metric, then the asymptotic flatness of the solutions is guaranteed, and the vacuum braneworld solutions contain the Schwarzschild spacetime as a particular case. This choice generates the ``temporal Schwarzschild" metric \cite{Germani}, \cite{CFM}, \begin{equation} \label{T-Schw exterior} ds^2 = \left(1 - \frac{2{{m}}}{r}\right) dt^2 - \frac{(1 - 3{{m}}/2r)}{(1 - 2{{m}}/r)[1 - (3{{m}}/2r)\;\sigma]}dr^2 - r^2 d\Omega^2, \end{equation} and the ``spatial Schwarzschild" metric \cite{CFM} \begin{equation} \label{S-Schw exterior} ds^2 = \frac{1}{\alpha^2}\left(\alpha - 1 + \sqrt{1 - \frac{2 \alpha m}{r}}\right)^2\;dt^2 - \left( 1 - \frac{2 \alpha m}{r}\right)^{- 1}dr^2 - r^2 d\Omega^2, \end{equation} where $\sigma$ and $\alpha$ are dimensionless constant parameters. For $\sigma = 1$, $\alpha = 1$ the corresponding solutions reduce to the Schwarzschild spacetime. 
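As a quick check (pure algebra, nothing beyond \ref{T-Schw exterior} and \ref{S-Schw exterior} is assumed), the reduction at $\sigma = 1$, $\alpha = 1$ is immediate:

```latex
\frac{1-3m/2r}{\left(1-2m/r\right)\left[1-\left(3m/2r\right)\right]}
=\frac{1}{1-2m/r},
\qquad
\frac{1}{\alpha^2}\left(\alpha-1+\sqrt{1-\frac{2\alpha m}{r}}\right)^2\Bigg|_{\alpha=1}
=1-\frac{2m}{r},
```

so both line elements collapse to the Schwarzschild metric (\ref{Schwarzschild metric}).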
The above metrics have thoroughly been discussed in different contexts: as braneworld black holes \cite{Bronnikov 1}, \cite{CFM}; as possible non-Schwarzschild exteriors for spherical stars on the brane \cite{Germani}, \cite{JPdeL 1}, \cite{JPdeL 2}, and as wormhole spacetimes \cite{Bronnikov 2}. In this work we construct several families of new solutions to $^{(4)}R = 0$ of the form (\ref{the metric under study}), with $C(r) \neq 1$. The new solutions are specifically designed so that they contain the Schwarzschild black hole and generalize the temporal and spatial Schwarzschild solutions mentioned above. They generate new families of traversable Lorentzian wormholes as well as nakedly singular spacetimes. The solutions are obtained by demanding that the spacetime must contain, instead of the Schwarzschild geometry as in (\ref{T-Schw exterior})-(\ref{S-Schw exterior}), a simple static solution to $^{(4)}R = 0$ that follows from $5$-dimensional Kaluza-Klein gravity (see Eq. (\ref{Kramer-like solution}) below). Some interesting features of our models are that, for certain values of the parameters of the solutions, (i) the size of the throat can be less than the Schwarzschild radius $2 M$, which no longer defines the horizon, i.e., to a distant observer a particle or light falling down crosses the Schwarzschild radius in a finite time; (ii) they contain three spherical surfaces (instead of one as in Lorentzian wormholes {\it \`a la} Morris and Thorne) which are extremal and have finite area. Two of them have the same size and meet the ``flare-out" requirements\footnote{The flare-out condition defines the throat of a wormhole as a closed two-dimensional spatial minimal hypersurface, i.e., as an extremal surface of minimal area \cite{Visser 3}. Thus, as seen from outside, a wormhole entrance is a local object like a star or a black hole \cite{Lemos}.}, and show the typical violation of the energy conditions that characterizes a wormhole throat.
The other extremal sphere is ``flaring-in" in the sense that its sectional area is a local maximum and the weak, null and dominant energy conditions are satisfied in its neighborhood. After bouncing back at this second surface a traveler crosses into another space which is the double of the one she/he started in. The paper is organized as follows. In section $2$ we present our solutions on the brane and discuss their physical interpretation. In section $3$ we construct symmetric traversable wormholes from configurations which only have one asymptotic region. In section $4$ we give a brief summary of our results. \section{Schwarzschild-like solutions on the brane from Kaluza-Klein gravity} In this section we construct several new families of analytic solutions to the brane field equation $^{(4)}R = 0$, of the form (\ref{the metric under study}), which are inspired by five-dimensional Kaluza-Klein gravity and generalize the Schwarzschild-like spacetimes (\ref{T-Schw exterior}) and (\ref{S-Schw exterior}). In Kaluza-Klein gravity there is only one family of spherically symmetric exact solutions to the field equations $R_{A B} = 0$ which are asymptotically flat, static and independent of the ``extra" coordinates (see, e.g., Ref. \cite{JPdeL3} and references therein). In five dimensions, in the form given by Kramer \cite{Kramer}, they are described by the line element \begin{equation} \label{Kramer's solution in 5D} dS^2 = f^a \; dt^2 - f^{- (a + b)}\; dr^2 - r^2 f^{(1 - a - b)}d\Omega^2 - f^b dy^2, \end{equation} where $y$ is the coordinate along the fifth dimension; \begin{equation} \label{definition of f} f = 1 - \frac{2 m}{r},\;\;\;m = \mbox{constant}, \end{equation} and $a$, $b$ are parameters satisfying the consistency relation \begin{equation} \label{condition on a and b} a^2 + ab + b^2 = 1. 
\end{equation} When the extra dimension is large, instead of being rolled up to a small size, our spacetime can be identified with some $4D$ hypersurface $y = $ const, which is orthogonal to the extra dimension. In this case the metric induced in $4D$ is \begin{equation} \label{Kramer-like solution} ds^2 = f^a \; dt^2 - f^{- (a + b)}\; dr^2 - r^2 f^{(1 - a - b)}d\Omega^2. \end{equation} It is straightforward to verify that this line element, which in what follows we will call `Kramer-like', is an exact solution to the field equation $^{(4)}R = 0$. In this section, we present a number of new families of analytic solutions of the form (\ref{the metric under study}) on the brane obtained by choosing \begin{equation} \label{Choosing C} C^2(r) = f^{1 - a - b}, \end{equation} and fixing $A(r)$ or $B(r)$ as in the Kramer-like metric (\ref{Kramer-like solution}). Before discussing the new solutions, let us briefly examine some of the properties of this metric. When $a = 0$, from (\ref{condition on a and b}) it follows that $b = \pm 1$ in which case (\ref{Kramer-like solution}) becomes \begin{equation} \label{Spatial-Schw wormhole} ds^2 = d t^2 - \left(1 - \frac{R_{0}}{R}\right)^{- 1} d R^2 - R^2 d\Omega^2,\;\;\;R_{0} \equiv \frac{2 |b| m}{b}. \end{equation} For $R_{0} > 0$ this is the spatial-Schwarzschild wormhole \cite{Dadhich}, for $R_{0} < 0$ it is a naked singularity and for $R_{0} = 0$ it is Minkowski spacetime. When $b = 0$, from (\ref{condition on a and b}) we find $a = \pm 1$ and the line element (\ref{Kramer-like solution}) reduces to \begin{equation} \label{Schw solution with both signs} ds^2 = \left(1 - \frac{2 \bar{m}}{R}\right)d t^2 - \left(1 - \frac{2 \bar{m}}{R}\right)^{- 1} d R^2 - R^2 d\Omega^2,\;\;\;\bar{m} \equiv \frac{|a| m}{a}. \end{equation} For $\bar{m} > 0$ this is the Schwarzschild black hole solution of general relativity with gravitational mass $M = \bar{m}$, a naked singularity for $\bar{m} < 0$ and Minkowski spacetime for $\bar{m} = 0$.
In any other case the gravitational mass is $M = a m$, which follows from the comparison of the asymptotic behavior $(r \rightarrow \infty)$ of $g_{tt}$ with Newton's theory [see (\ref{Post-Newtonian parameters}) and (\ref{Post-Newtonian parameters for Kramer's solution}) below]. To ensure the positivity of $M$, in what follows without loss of generality we take $m > 0$ and $a \geq 0$. Consequently, the appropriate solution of (\ref{condition on a and b}) is \begin{equation} \label{a in terms of b} a = - \frac{b}{2} + \frac{\sqrt{4 - 3 b^2}}{2} \geq 0, \end{equation} which holds in the range $- 2/\sqrt{3} \leq b \leq 1$. The Schwarzschild spacetime is recovered when $b = 0$. To study the singularities of the solutions it is useful to calculate the Kretschmann scalar ${\cal{K}} = R_{\alpha\beta \gamma\delta}R^{\alpha\beta\gamma\delta}$. For the metric (\ref{the metric under study}) it is given by \begin{equation} \label{Kretschmann scalar} {\cal{K}} = R_{\alpha \beta \mu\nu}R^{\alpha \beta \mu\nu} = 4 K_{1}^2 + 8 K_{2}^2 + 8 K_{3}^2 + 4 K_{4}^2, \end{equation} where \begin{eqnarray} \label{K1, K2, K3, K4} K_{1} &=& \frac{1}{ B^2}\left[\frac{A''}{A} - \frac{A' B' }{A B}\right], \nonumber \\ K_{2} &=& \frac{\left(C + r C'\right) A'}{r C B^2 A}, \nonumber \\ K_{3} &=& \frac{1}{B^2}\left(\frac{B' C'}{B C} + \frac{B'}{r B} - \frac{2 C'}{r C} - \frac{C''}{C}\right), \nonumber \\ K_{4} &=& \frac{\left(C + r C'\right)^2 - B^2}{r^2 C^2 B^2}. \end{eqnarray} The finiteness of ${\cal{K}}$ is a necessary and sufficient criterion for the regularity of all curvature invariants.
For the Kramer-like metric (\ref{Kramer-like solution}) we obtain \begin{equation} \label{Kretschmann scalar for Kramer-like metric} {\cal{K}} = \frac{48 m^2 k}{r^8 f^{2(2 - a - b)}}, \;\;\; \end{equation} with \begin{equation} \label{k} k = m^2 \left[2 (a + 1) + b - \frac{2 b^2 \left(a + 2\right)}{3} + \frac{a b^3}{6} + \frac{b^4 }{3}\right] - m r\left[2 \left(a + 1\right) + b - \frac{b^2 \left(2 a + 3\right)}{3}\right] + \frac{r^2 \left(2 - b^2\right)}{2}. \end{equation} For $b = 0$, $(a = 1)$ the expression for $k$ reduces to $k = r^2 f^{2}$. Therefore, for $b = 0$ we recover the usual Schwarzschild singularity at $r = 0$, viz., ${\cal{K}} = 48 m^2/r^6$, as expected. Since $(2 - a - b) > 0$, it follows that for $b \neq 0$ there is a physical singularity at $f = 0$, i.e., at $r = 2 m$. The physical radius of a sphere with coordinate $r$ is given by $R = r f^{(1 - a - b)/2}$. In the limit $r \rightarrow 2 m$ it behaves either as $R \rightarrow 0$ or $R \rightarrow \infty$ depending on whether $b \in \left[- 2/\sqrt{3}, 0\right)$ or $b \in \left(0, 1\right)$, respectively\footnote{The cases $b = 0$ and $b = 1$ correspond to the Schwarzschild spacetime and to the spatial-Schwarzschild wormhole, respectively.}. In the former range we have $\left(1 - a - b\right) > 0$, and consequently $R$ is a monotonically increasing function of $r$. In the latter range, where $\left(1 - a - b\right) < 0$, the physical radius $R$ is not a monotonic function of $r$; it reaches a minimum at $r = \bar{r} = m (1 + a + b) > 2 m$, viz., \begin{equation} \label{Rmin} \bar{R}= R(\bar{r}) = m \sqrt{a b}\left(\frac{a + b + 1}{a + b - 1}\right)^{(a + b)/2}, \end{equation} and then re-expands in $2 m < r < \bar{r}$ in such a way that $R \rightarrow \infty$ as $r \rightarrow 2 m$. We note that $2 m < \bar{r} < \left(1 + 2/\sqrt{3}\right) m \approx 2.155 m$ and $\bar{R} > 2 M$.
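A sketch of where $\bar{r}$ and \ref{Rmin} come from (only the consistency relation (\ref{condition on a and b}) is used):

```latex
\frac{dR}{dr}=f^{(1-a-b)/2-1}\left[f+\frac{(1-a-b)m}{r}\right]=0
~~\Rightarrow~~
\bar{r}=m(1+a+b),~~~f(\bar{r})=\frac{a+b-1}{a+b+1},
\qquad
\bar{R}=\bar{r}\,f(\bar{r})^{(1-a-b)/2}
=m\sqrt{(a+b)^2-1}\left(\frac{a+b+1}{a+b-1}\right)^{(a+b)/2},
```

and $(a + b)^2 - 1 = a^2 + 2ab + b^2 - (a^2 + ab + b^2) = ab$ by (\ref{condition on a and b}), which recovers (\ref{Rmin}).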
Regarding $g_{tt}$ and $g_{rr}$, we find that $g_{tt} \rightarrow 0$ as $r \rightarrow 2 m$, in the whole range of $b$, i.e., $b \in \left[- 2/\sqrt{3}, 1\right)$. In the same limit we have $g_{rr} \rightarrow \left(0, - 1, - \infty \right)$ for $-2/\sqrt{3}\leq b < - 1$, $b = -1$ and $- 1 < b < 1$, respectively. The effective energy density $\rho = T_{0}^{0}$ is given by \begin{equation} \label{negative energy density} 8 \pi \rho = - \frac{m^2 a b}{r^4 f^{2 - a - b}}. \end{equation} Thus, for $b \neq 0$ the solutions have a naked singularity at $r = 2 m$ where $\rho$ diverges. For $b < 0$ the density is positive and a traveler moving radially towards the center $R = 0$ reaches the singularity. For $b > 0$ the density is negative and a traveler moving towards $r = 2m$ never reaches the singularity; instead (since there is a throat at $r = \bar{r}$, with physical radius $R = \bar{R}$) she/he crosses into another space, which does not have a second flat asymptotic because $g_{tt} \rightarrow 0$ as $r \rightarrow 2 m$, despite the fact that $R \rightarrow \infty$ in this limit. Consequently, even though the spacetime configurations with $b \in \left(0, 1\right)$ have a throat, and violate the null energy condition, they are topologically different from wormholes which, by definition, connect two asymptotic regions. \medskip In order to experimentally distinguish between different asymptotically flat metrics, it is useful to calculate the post-Newtonian parameters $\beta$ and $\gamma$ in the Eddington-Robertson expansion \cite{Weinberg} \begin{equation} \label{Post-Newtonian parameters} ds^2 = \left[1 - \frac{2 M}{R} + 2 (\beta - \gamma)\frac{M^2}{R^2} + \cdots\right]\; dt^2 - \left(1 + \frac{2 \gamma M}{R} + \cdots\right)\; dR^2 - R^2 d\Omega^2. \end{equation} The parameter $\beta$ affects the precession of the perihelion and the Nordtvedt effect, while $\gamma $ affects the deflection of light and the time delay of light \cite{Will}. 
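The sign dichotomy of the effective density quoted above ($\rho > 0$ for $b < 0$, $\rho < 0$ for $b > 0$) follows immediately from (\ref{negative energy density}), since $m, a, f > 0$ outside $r = 2m$; a brief numerical illustration (ours):

```python
import math

def a_of_b(b):
    # nonnegative root of a^2 + a b + b^2 = 1
    return -b / 2 + math.sqrt(max(0.0, 4 - 3 * b**2)) / 2

m = 1.0

def rho(r, a, b):
    # Eq. (negative energy density): 8 pi rho = -m^2 a b / (r^4 f^{2-a-b})
    f = 1 - 2 * m / r
    return -m**2 * a * b / (8 * math.pi * r**4 * f ** (2 - a - b))

rs = [2.0 + 0.1 * k for k in range(1, 100)]              # sample r > 2m
neg_b = all(rho(r, a_of_b(-0.5), -0.5) > 0 for r in rs)  # b < 0: rho > 0
pos_b = all(rho(r, a_of_b(0.5), 0.5) < 0 for r in rs)    # b > 0: rho < 0
```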
For the solution (\ref{Kramer-like solution}) we obtain \begin{eqnarray} \label{Post-Newtonian parameters for Kramer's solution} M &=& a m,\nonumber \\ \beta &=& 1,\nonumber \\ \gamma &=& 1 + \frac{b}{a}. \end{eqnarray} However, we should keep in mind that ``such hypothetical objects as braneworld black holes or wormholes, not necessarily of astrophysical size, need not necessarily conform to the restrictions on the post-Newtonian parameters obtained from the Solar system and binary pulsar observations, and it therefore makes sense to discuss the full range of parameters which are present in the solutions" \cite{Bronnikov 1}. \subsection{Temporal Schwarzschild-Kramer-like solution} Following the same philosophy as in curvature coordinates, the line element (\ref{Kramer-like solution}) can be used to generate other asymptotically flat vacuum solutions on the brane. For example, by demanding that \begin{equation} \label{A and C for the temporal Schwarzschild-Kramer-like solution} A^2(r) = f^a, \;\;\;C^2(r) = f^{1 - a - b}, \end{equation} we find that the field equation (\ref{field equation for the general spherical metric}) has two solutions. One of them is just $B^2(r) = f^{- (a + b)}$, which gives back (\ref{Kramer-like solution}). The other one is a general solution which can be written as \begin{equation} \label{B for the temporal Schwarzschild-Kramer-like solution} B^2(r) = \left[\frac{1 - \frac{m (2 + a + 2 b)}{2r}}{ 1 - \frac{3\sigma m }{2 r}}\right] f^{- (a + b)}, \end{equation} where $\sigma$ is a constant of integration. For an arbitrary $\sigma$ and $a = 1$ $(b = 0)$ we recover the temporal Schwarzschild metric (\ref{T-Schw exterior}). The total gravitational mass $M$ and the PPN parameters $\beta$ and $\gamma$ are given by \begin{eqnarray} \label{Post-Newtonian parameters for temporal Kramer's solution} M &=& a m,\nonumber \\ \beta &=& \frac{3 a - 2 b - 2 + 3 \sigma}{4 a},\nonumber \\ \gamma &=& \frac{3 a + 2 b - 2 + 3 \sigma}{4 a}. 
\end{eqnarray} If we denote \begin{equation} \label{definition of r tilde and r0} \tilde{r} = \frac{m (2 + a + 2 b)}{2}, \;\;\;\;r_{0} = \frac{3 \sigma m}{2}, \end{equation} the solution can be written as \begin{equation} \label{General temporal Schwarzschild-Kramer-like solution} ds^2 = f^a d t^2 - \left(\frac{r - \tilde{r}}{r - r_{0}}\right)f^{- (a + b)} dr^2 - r^2 f^{1 - a - b} d\Omega^2. \end{equation} Here the Kretschmann scalar diverges at $r = 2 m$ and at $r = \tilde{r}$. In contrast, $r = r_{0}$ is a coordinate singularity; not a physical one. It should be noted that $0 < \tilde{r} < 2 m$ for any $a > 0$. Therefore, the condition $g_{rr} < 0$ implies that the above solution makes sense only for $r \geq r_{0} = 3 m \sigma /2$. If $r_{0} \leq 2m$, i.e. $\sigma \leq 4/3$, the solution is a naked singularity. However for $r_{0} > 2 m$ $(\sigma > 4/3)$ it is a traversable wormhole. Since the Kretschmann scalar is regular at $r _{0}$, the singularity can be removed by introducing a new coordinate $x$ by the relation $r = r_{0} + x^2$. The explicit form of the solution in terms of $x$ is \begin{equation} \label{General temporal Schwarzschild-Kramer-like solution in terms of x} ds^2 = \left(\frac{x^2 + r_{0} - 2 m}{r_{0} + x^2}\right)^a d t^2 - \frac{4 \left(r_{0} + x^2\right)^{a + b}\left(x^2 + r_{0} - \tilde{r}\right)}{\left(x^2 + r_{0} - 2m\right)^{a + b}} \; dx^2 - \frac{\left(x^2 + r_{0}\right)^{a + b + 1}}{\left(x^2 + r_{0} - 2m\right)^{a + b - 1}} d\Omega^2. \end{equation} For $r_{0} > 2 m$ $(\sigma > 4/3)$, the metric is regular for all values of $x \in \left( - \infty , + \infty \right)$ and invariant under sign reversal $x \rightarrow - x$. Therefore, both $x \rightarrow \infty$ and $x \rightarrow - \infty$ are flat asymptotics. 
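A short numerical sketch (ours, with an arbitrary sample point $b = 0.5$, $\sigma = 1.5$) illustrating that the metric coefficients of (\ref{General temporal Schwarzschild-Kramer-like solution in terms of x}) are symmetric under $x \rightarrow - x$, finite and positive at $x = 0$ when $r_{0} > 2 m$, and that $g_{tt} \rightarrow 1$ at the flat asymptotics:

```python
import math

m, b = 1.0, 0.5
a = -b / 2 + math.sqrt(4 - 3 * b**2) / 2   # a(b), nonnegative root
sigma = 1.5                                # sigma > 4/3, i.e. r0 > 2m
r0 = 3 * sigma * m / 2
rt = m * (2 + a + 2 * b) / 2               # r-tilde

def coeffs(x):
    # |g_tt|, |g_xx|, angular coefficient of the x-form of the metric
    u = x * x + r0
    gtt = ((u - 2 * m) / u) ** a
    gxx = 4 * u ** (a + b) * (u - rt) / (u - 2 * m) ** (a + b)
    gOO = u ** (a + b + 1) / (u - 2 * m) ** (a + b - 1)
    return gtt, gxx, gOO

symmetric = all(coeffs(x) == coeffs(-x) for x in [0.3, 1.0, 5.0])
finite_at_throat = all(math.isfinite(c) and c > 0 for c in coeffs(0.0))
gtt_inf = coeffs(1e6)[0]   # tends to 1 as |x| -> infinity
```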
The physical radius $R$ of a spherical shell with coordinate $x$ is \begin{equation} \label{physical radius of a spherical shell with coordinate x} R(x) = \frac{\left({{x}}^2 + r_{0}\right)^{(a + b + 1)/2}}{\left({{x}}^2 + r_{0} - 2 m\right)^{(a + b - 1)/2}}. \end{equation} The equation $dR/d{x} = 0$ has the following roots \[ {{x}}_{0} = 0, \;\;\;{{x}}_{(\pm)} = \pm \sqrt{\bar{r} - r_{0}}, \;\;\;\; \] where \[ \bar{r} = m \left(1 + a + b\right). \] The ${{x}}_{(\pm)}$ solutions are real only if $\bar{r} > r_{0}$. Since $r_{0} > 2 m$, this imposes the condition $(a + b) > 1$, which requires $b \in \left(0, 1\right)$. Thus, the metric (\ref{General temporal Schwarzschild-Kramer-like solution in terms of x}) is regular for all values of $x$ and $R(x)$ has (real) extrema at $x = x_{(\pm)}$ if \begin{equation} \label{condition for minimum at x neq 0} 2 m < r_{0} < \bar{r} < \left(1 + \frac{2}{\sqrt{3}}\right) m \approx 2.155 m. \end{equation} In terms of the dimensionless quantities $\sigma = 2 r_{0}/3 m$ and $\bar{\sigma} = 2 \bar{r}/3 m$, this inequality can be written as\footnote{We note that $\bar{\sigma}$ is bounded from above, namely, $\bar{\sigma} < {\bar{\sigma}}_{max} = 2\left(3 + 2\sqrt{3}\right)/9 \approx 1.436$, which corresponds to $b = 1/\sqrt{3} \approx 0.577$. } \begin{equation} \label{condition for minimum at x neq 0 in terms of sigma} \frac{4}{3} < \sigma < \bar{\sigma} < \frac{2\left(3 + 2\sqrt{3}\right)}{9} \approx 1.436. \end{equation} In Table $1$ we provide $\bar{\sigma} = 2 (1 + a + b)/3$ calculated for various values of $b \in \left(0, 1\right)$. \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multicolumn{8}{|c|}{\bf Table 1. 
$\bar{\sigma}$ for various values of $b \in \left(0, 1\right)$} \\ \hline \multicolumn {1}{|c|}{$b$} & $0.01$ & $0.1$ &$0.3$&$0.5$&$0.8$&$0.9$&\multicolumn{1}{|c|}{$0.99$} \\ \hline\hline $\bar{\sigma}$ & $1.337$& $1.364$& $1.410$&$1.434$ &$1.414$&$1.384$&$1.340$ \\ \hline \end{tabular} \end{center} In summary, we have the following cases: \paragraph{Case 1:} The function $R = R({x})$ has only one extremum, which is located at $\bar{x} = 0$ if (i) $b \in \left[- 2/\sqrt{3}, 0\right) $ and {\it any} $r_{0} > 2 m$ $(\sigma > 4/3)$, or (ii) $b \in \left(0, 1\right)$ and $r_{0} > 3 m \bar{\sigma}/2$ (for $b > 0$, a sufficient criterion for one extremum is $r_{0} > 2.155 m$). Under these conditions $\bar{x} = 0$ is the minimum of (\ref{physical radius of a spherical shell with coordinate x}). Therefore, in this case the wormhole throat is located at $R = R_{0}$ given by \begin{equation} \label{location of the throat for temporal Schw. wormhole} R_{0} = \frac{3 \sigma M}{2 a}\left(1 - \frac{4}{3 \sigma}\right)^{(1 - a - b)/2}, \end{equation} with $(b < 0, \sigma > 4/3)$ and $(b > 0, \sigma > \bar{\sigma})$. We observe that the condition $\sigma > \bar{\sigma}$ is automatically satisfied when $b < 0$, because ${\bar{\sigma}}_{(b < 0)} < 4/3$, while on the contrary ${\bar{\sigma}}_{(b > 0)} > 4/3$. The metric obtained by Casadio, Fabbri and Mazzacurati \cite{CFM} for new braneworld black holes and the symmetric traversable wormhole solutions discussed by Bronnikov and Kim \cite{Bronnikov 1}, in their example $2$, are restored from (\ref{General temporal Schwarzschild-Kramer-like solution in terms of x})-(\ref{location of the throat for temporal Schw. wormhole}) in the case $a = 1$, $b = 0$, which in turn reduces to the Schwarzschild metric for $\sigma = 1$, i.e. $r_{0} = \tilde{r} = 3 m/2$. From (\ref{location of the throat for temporal Schw. wormhole}) we find that in this case we can have $R_{0} < 2 M$ or $R_{0} > 2 M$ depending on whether $b < 0$ or $b \geq 0$. 
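Table $1$ can be reproduced directly from $\bar{\sigma} = 2 (1 + a + b)/3$ with $a = a(b)$ taken from (\ref{a in terms of b}); a short check (ours), which also recovers the upper bound ${\bar{\sigma}}_{max} = 2(3 + 2\sqrt{3})/9$ at $b = 1/\sqrt{3}$:

```python
import math

def a_of_b(b):
    # nonnegative root of a^2 + a b + b^2 = 1
    return -b / 2 + math.sqrt(max(0.0, 4 - 3 * b**2)) / 2

def sigma_bar(b):
    # sigma_bar = 2 r_bar / 3m = 2 (1 + a + b) / 3
    return 2 * (1 + a_of_b(b) + b) / 3

# entries of Table 1 (b: sigma_bar), rounded to three decimals in the paper
table1 = {0.01: 1.337, 0.1: 1.364, 0.3: 1.410, 0.5: 1.434,
          0.8: 1.414, 0.9: 1.384, 0.99: 1.340}
reproduced = all(abs(sigma_bar(b) - v) < 5e-4 for b, v in table1.items())
s_max = sigma_bar(1 / math.sqrt(3))  # maximum of sigma_bar over b in (0, 1)
```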
To illustrate this we have evaluated (\ref{location of the throat for temporal Schw. wormhole}) for some specific values of $b$ and $\sigma$. The results are presented in Figure $1$. For a light signal the time for propagation, measured by an external observer, from the throat $x = 0$ to some $x = x_{0} > 0$ is given by \begin{equation} \label{time to the throat} \Delta t = 2 m \int^{{\bar{x}}_{0}}_0\sqrt{{\bar{x}}^2 + \frac{3 \sigma}{2} - \frac{2 + a + 2 b}{2}}\left[1 - \frac{2}{{\bar{x}}^2 + 3 \sigma/2}\right]^{- (a + b/2)} d\bar{x}, \end{equation} where $\bar{x}$ is a dimensionless quantity defined as $\bar{x} = x/\sqrt{m}$. To test the above equations, let us take, e.g., $b = - 0.2$ and $\sigma = 1.5$, which are in the range for which (\ref{location of the throat for temporal Schw. wormhole}) defines the location of the throat. For these values $R_{0} \approx 1.83 M$ and $\Delta t$ is finite for any ${\bar{x}}_{0} < \infty$. One can numerically verify that $\Delta t$ is finite for any $b < 0$ and $\sigma > 4/3$. The conclusion is that for a distant observer a light signal crosses the Schwarzschild radius $R = 2 M$, which now is not a horizon, in a finite time. Also, from (\ref{location of the throat for temporal Schw. wormhole}) we find \begin{equation} \label{dR/dsigma} \frac{1}{R_{0}}\frac{d R_{0}}{d \sigma} = \frac{\sigma - \bar{\sigma}}{\sigma \left(3 \sigma - 4\right)}. \end{equation} Thus, in the present case $\left(\sigma > \bar{\sigma}\right)$, $R_{0}$ increases monotonically with $\sigma.$ We note that at $x = 0$ \begin{eqnarray} \label{energy conditions} \rho > 0, \;\;\;&&\rho - p_{r} > 0, \;\;\;\rho + p_{r} \sim \frac{\bar{\sigma} - \sigma}{r_{0} - \tilde{r}},\nonumber \\ &&\rho + p_{\perp} > 0, \;\;\;\rho - p_{\perp} \sim \frac{\bar{\sigma} - \sigma}{r_{0} - \tilde{r}}. \end{eqnarray} where $T_{0}^{0} = \rho$, $T_{1}^{1} = - p_{r}$, $T_{2}^{2} = - p_{\perp}$. 
Thus, in the present case $\left(\sigma > \bar{\sigma}\right)$, in a neighborhood of the wormhole throat the effective EMT violates not only the null energy condition $(\rho + p_{r}) > 0$, which is in agreement with a well-known general result, but also violates the dominant energy condition. \begin{figure}[tbp] \centering \includegraphics[bb=0 0 991 991,width=3.00in,height=3.00in,keepaspectratio]{A1} \caption{In Case $1$ the wormhole throat is located at $R = R_{0}$. The figure shows that $R_{0}$ increases with $\sigma$. More interesting is that for $b < 0$ there is a range of values of $\sigma $ for which $R_{0} < 2 M$. For $b > 0$, $R_{0} > 2 M$.} \label{Figure $1$} \end{figure} \begin{figure}[tbp] \centering \includegraphics[bb=0 0 990 990,width=3.00in,height=3.00in,keepaspectratio]{A2} \caption{In Case $2$ there are three extrema, i.e., the spacelike surfaces $R_{(x_{\pm})}$ and $R_{0}$ are turning points for light and all material particles. The figure gives the physical radius as a function of $x$ for $b = 0.5$ and the values of $\sigma$ considered in Table $2$. Without loss of generality we have set $m = 1$, which is equivalent to introducing a dimensionless coordinate $x \rightarrow x/\sqrt{m}$ in $(30)$. As we approach $r = 2 m$ we never reach distances $R < R_{(x_{\pm})} = 4.038 M$ regardless of the choice of $\sigma$, although $R_{(x_{\pm})}$ is attained at different values of $x$. After a re-bounce at $R_{0}$, which does depend on $\sigma$ but corresponds to the same $x$, namely $x = 0$, we cross into another space which is the double of the one we started in. The two minima coalesce for $\sigma \rightarrow \bar{\sigma}$.} \label{Figure $2$} \end{figure} \paragraph{Case 2:} When $b \in \left(0, 1\right)$ and $2 m < r_{0} < \bar{r}$, or, equivalently, $4/3 < \sigma < \bar{\sigma}$, there are three extrema. One of them is at $x = 0$ and corresponds to $R = R_{0}$ given by (\ref{location of the throat for temporal Schw. 
wormhole}); the other two are at $x = x_{\pm}$ for which \begin{equation} \label{Rmin for b positive, temporal Schwarzschild-Kramer-like} R_{(x_{\pm})} = M \sqrt{\frac{b}{a}} \left(\frac{a + b + 1}{a + b - 1}\right)^{(a + b)/2} > 2 M. \end{equation} In this case $\left(1 - a - b\right) < 0$, therefore from (\ref{location of the throat for temporal Schw. wormhole}) it follows that $R_{0} \rightarrow \infty$ as $\sigma \rightarrow \left(4/3\right)^{+}$. In addition, $R_{0} \rightarrow R_{(x_{\pm})}^{+}$ as $\sigma \rightarrow \bar{\sigma}$, i.e. $R_{0}$ decreases with the increase of $\sigma$, which is in agreement with (\ref{dR/dsigma}) because in the present case $\sigma < \bar{\sigma}$. What this means is that in this case $R_{(x_{\pm})}$ is a local minimum and $R_{0}$ is a local maximum. This is illustrated in Figure $2$. As an example, let us choose some particular $b$ in the range $\left(0, 1\right)$, e.g. $b = 0.5$. For this choice $R_{(x_{\pm})} = 4.038 M$. Besides, from Table $1$ we get $\bar{\sigma} = 1.434$, $(\bar{r} = 2.151 m)$. In Table $2$ we compute $R_{0}$ for various values of $\sigma$ in the range $1.333 < \sigma < 1.434$ $(2 m < r_{0} < 2.151 m)$. Similar results, illustrating the fact that $R_{0} > R_{(x_{\pm})} > 2 M$, can be obtained for other values of $b$ and $\bar{\sigma}$ considered in Table $1$. \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{5}{|c|}{\bf Table 2. $R_{0}/M$ for $b = 0.5$ and $\sigma \in \left(4/3, 1.434\right)$} \\ \hline \multicolumn {1}{|c|}{$\sigma$} & $1.34$&$1.37$&$1.40$&\multicolumn{1}{|c|}{$1.43$} \\ \hline\hline $R_{0}/M$ & $4.610$ &$4.149$&$4.059$&$4.037$ \\ \hline \end{tabular} \end{center} From (\ref{energy conditions}) it follows that, in the present case $(\bar{\sigma} > \sigma)$, the effective EMT satisfies the weak, null and dominant energy conditions in a neighborhood of $x = 0$. 
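The entries of Table $2$, and the ordering $R_{0} > R_{(x_{\pm})} > 2 M$, can be reproduced numerically from (\ref{location of the throat for temporal Schw. wormhole}) and (\ref{Rmin for b positive, temporal Schwarzschild-Kramer-like}); a sketch (ours) for the sample case $b = 0.5$:

```python
import math

m, b = 1.0, 0.5
a = -b / 2 + math.sqrt(4 - 3 * b**2) / 2   # a(0.5), approximately 0.6514
M = a * m                                  # gravitational mass, M = a m

def R0(sigma):
    # Eq. (location of the throat for temporal Schw. wormhole):
    # R0 = (3 sigma M / 2a) (1 - 4/(3 sigma))^{(1-a-b)/2}
    return (3 * sigma * M / (2 * a)) * (1 - 4 / (3 * sigma)) ** ((1 - a - b) / 2)

# Eq. (Rmin for b positive, ...): the two local minima at x = x_(+-)
Rpm = M * math.sqrt(b / a) * ((a + b + 1) / (a + b - 1)) ** ((a + b) / 2)

# entries of Table 2 (sigma: R0/M), rounded to three decimals in the paper
table2 = {1.34: 4.610, 1.37: 4.149, 1.40: 4.059, 1.43: 4.037}
reproduced = all(abs(R0(s) / M - v) < 5e-3 for s, v in table2.items())
ordered = all(R0(s) > Rpm > 2 * M for s in table2)
```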
However, using (\ref{condition for minimum at x neq 0 in terms of sigma}) and Table $1$, it is not difficult to verify that these conditions are now violated in a neighborhood of $x = x_{(\pm)}$, which is where the wormhole throat is located in this case, as expected. \subsection{Spatial Schwarzschild-Kramer-like solution} In order to generalize the spatial-Schwarzschild metric (\ref{S-Schw exterior}) we now choose \begin{equation} \label{B and C for Spatial Kramer-Schwarzschild-like solution} B^2(r) = \left(1 - \frac{2 \alpha m}{r}\right)^{- (a + b)}, \;\;\;C^2(r) = \left(1 - \frac{2 \alpha m}{r}\right)^{1 - a - b}, \end{equation} where $\alpha$ is some positive constant. Substituting these expressions into (\ref{field equation for the general spherical metric}) we obtain a second order differential equation for $A(r)$. To determine the arbitrary constants of integration we impose two conditions. First, that $A \rightarrow 1$ as $r \rightarrow \infty$. Second, that for $a = 1$ $(b = 0)$ we recover the spatial Schwarzschild solution. With these conditions we obtain \begin{equation} \label{A for Spatial Kramer-Schwarzschild-like solution} A^2(r) = \frac{1}{\alpha^2}\left[\alpha - 1 + \left({1 - \frac{2 \alpha m}{r}}\right)^{(a - b)/2}\right]^{2}\left(1 - \frac{2 \alpha m}{r}\right)^b. \end{equation} For this solution the total gravitational mass and the PPN parameters are \begin{eqnarray} \label{Post-Newtonian parameters for the spatial Schwarzschild-Kramer-like solution} M &=& m \left[a + b(\alpha - 1)\right],\nonumber \\ \beta &=& \frac{1}{2}\left\{1 + \frac{\alpha \left[a^2 + \alpha b^2 (\alpha - 1)\right]}{\left[a + b \left(\alpha - 1\right)\right]^2}\right\},\nonumber \\ \gamma &=& \frac{\alpha (a + b)}{a + b (\alpha - 1)}. \end{eqnarray} The above solution for $\alpha = 1$ reduces to the Kramer-like solution (\ref{Kramer-like solution}), and for $\alpha \neq 1$, but $a = 1$ $(b = 0)$, gives back (\ref{S-Schw exterior}), as expected. 
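As a consistency check (ours), for $\alpha = 1$ the PPN formulas above reduce to the Kramer-like values (\ref{Post-Newtonian parameters for Kramer's solution}), i.e. $M = a m$, $\beta = 1$, $\gamma = 1 + b/a$:

```python
import math

def ppn(a, b, alpha):
    # Eq. (Post-Newtonian parameters for the spatial
    # Schwarzschild-Kramer-like solution), with M quoted in units of m
    M = a + b * (alpha - 1)
    beta = 0.5 * (1 + alpha * (a**2 + alpha * b**2 * (alpha - 1)) / M**2)
    gamma = alpha * (a + b) / M
    return M, beta, gamma

b = 0.3                                    # arbitrary sample value
a = -b / 2 + math.sqrt(4 - 3 * b**2) / 2   # a(b), nonnegative root
M, beta, gamma = ppn(a, b, alpha=1.0)

# alpha = 1 must reproduce the Kramer-like values M = a, beta = 1, gamma = 1 + b/a
ok = (abs(M - a) < 1e-12 and abs(beta - 1) < 1e-12
      and abs(gamma - (1 + b / a)) < 1e-12)
```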
In general, for $\alpha \neq 1$, $a \neq 1$ and $b \neq 0$ the quantities $K_{1}$ and $K_{2}$ in (\ref{K1, K2, K3, K4}) behave like $\sim \left[1/(r^2 Af^{2 - a - b})\right]$. Therefore the Kretschmann scalar diverges at $r = 0$, $A = 0$ and $r = 2 m \alpha$, as $\left(2 - a - b\right) > 0$ in the whole range of $b$. However, there are two cases in which the curvature invariants are regular at $r = 2 m \alpha$. One of them is $a = 1$, $b = 0$ discussed by Casadio, Fabbri and Mazzacurati \cite{CFM}, the other case is when $a = 0$ and $b = 1$. In the latter case the metric (\ref{B and C for Spatial Kramer-Schwarzschild-like solution})-(\ref{A for Spatial Kramer-Schwarzschild-like solution}) becomes \begin{equation} \label{a = 0, b = 1, Spatial Kramer-Schwarzschild-like solution} ds^2 = \frac{1}{\alpha^2}\left[1 + \left(\alpha - 1\right)\sqrt{1 - \frac{2 \alpha m }{r}}\right]^2 d t^2 - \left(1 - \frac{2 \alpha m}{r}\right)^{- 1} d r^2 - r^2 d\Omega^2. \end{equation} For $\alpha = 1$ we recover the spatial-Schwarzschild wormhole (\ref{Spatial-Schw wormhole}) with $R_{0} = 2 \alpha m$. For $\alpha < 1$, the equation $g_{tt} = 0$ has no positive solutions for $r$. Thus, there is no horizon. Since the Kretschmann scalar is regular at $r = r_{0} = 2 \alpha m$, the metric is regularized at $r = r_{0}$ by the substitution $x^2 = r - r_{0}$. As a result, (\ref{a = 0, b = 1, Spatial Kramer-Schwarzschild-like solution}) becomes \begin{equation} \label{a = 0, b = 1, Spatial Kramer-Schwarzschild-like solution in terms of x} ds^2 = \frac{1}{\alpha^2}\left[1 + \frac{(\alpha - 1) |x|}{\sqrt{x^2 + r_{0}}}\right]^2 d t^2 - 4 \left(x^2 + r_{0}\right)d x^2 - (x^2 + r_{0})^2 d\Omega^2,\;\;\;\alpha > 1, \end{equation} which is a symmetric traversable wormhole with throat at $R_{0} = 2 M \alpha/(\alpha - 1)$ and total gravitational mass $M = m \left(\alpha - 1\right)$. We emphasize that here $R_{0} > 2 M$ for all values of $\alpha$. 
In order to make contact with other works in the literature, let us notice that the solution for $\alpha = 1$ can alternatively be written as \begin{eqnarray} \label{solution for alpha = 1} d s^2 &=& \left[\kappa + \lambda f^{(a - b)/2}\right]^2 f^b d t^2 - f^{- (a + b)} d r^2 - r^2 f^{1 - a - b} d\Omega^2, \nonumber \\ \end{eqnarray} where $\kappa$ and $\lambda$ are arbitrary constants. The choice $\kappa = 0$ and $\lambda = 1$ gives back the Kramer-like solution (\ref{Kramer-like solution}); $\lambda = 0$ and $\kappa = 1$ yields a line element which is similar to (\ref{Kramer-like solution}) with $a$ replaced by $b$ and vice versa. The class of self-dual Lorentzian wormholes discussed in Dadhich {\it et al} \cite{Dadhich}, which in our notation are given by (\ref{a = 0, b = 1, Spatial Kramer-Schwarzschild-like solution in terms of x}), is recovered from (\ref{solution for alpha = 1}) in the special cases where $a = 1$, $b = 0$ or $a = 0$, $b = 1$. This class of solutions is a particular case of the wormhole spacetimes discussed by Bronnikov and Kim \cite{Bronnikov 1} in their example $4$. \medskip $\bullet$ A more detailed investigation shows that in the case where $\alpha = 1$, there is a family of ``spatial Kramer-like solutions" not included in (\ref{solution for alpha = 1}). Indeed, it is not difficult to verify that the line element \begin{equation} \label{spatial Kramer-like solution} ds^2 = f^{[a + b + \sqrt{1 - 3 a b}]/2} \; dt^2 - f^{- (a + b)}\; dr^2 - r^2 f^{(1 - a - b)}\; d\Omega^2, \end{equation} also satisfies the field equation (\ref{field equation for the general spherical metric}). Note that $a + b + \sqrt{1 - 3 a b} > 0$ for all $a > 0$ (i.e., $- 2/\sqrt{3} \leq b < 1$). The Schwarzschild geometry can be recovered in two distinct limits: either $a = 1$, $b = 0$ or $a = 0$, $b = 1$. From an observational point of view this solution is distinct from the ones considered above. 
This follows from the fact that the PPN parameters are different from the ones calculated in (\ref{Post-Newtonian parameters for the spatial Schwarzschild-Kramer-like solution}). Namely, for the line element (\ref{spatial Kramer-like solution}) we find \begin{eqnarray} \label{Post-Newtonian parameters for spatial Kramer solution} M &=& \frac{m}{2}\left(a + b + \sqrt{1 - 3 a b}\right),\nonumber \\ \beta &=& 1,\nonumber \\ \gamma &=& \frac{2(a + b)}{a + b + \sqrt{1 - 3 a b}}. \end{eqnarray} $\bullet$ It should be emphasized that we can use any of the above solutions to generate other asymptotically flat solutions to (\ref{field equation for the general spherical metric}) that contain the Schwarzschild spacetime. As an example, let us consider the case where the temporal part of the metric is identical to $g_{tt}$ in the spatial Kramer-like solution (\ref{spatial Kramer-like solution}). From (\ref{field equation for the general spherical metric}) we obtain a first-order differential equation for $B(r)$, which can be easily integrated. The new solution can be written as \begin{equation} \label{spatial Kramer-like solution modified} ds^2 = f^{[a + b + \sqrt{1 - 3 a b}]/2} \; dt^2 - \left(\frac{r - \hat{r}}{r - r_{0}}\right) f^{- (a + b)}\; dr^2 - r^2 f^{(1 - a - b)}\; d\Omega^2, \end{equation} with \[ \hat{r} = \frac{m \left[4 + 3 \left(a + b\right) - \sqrt{1 - 3 a b}\right]}{4}, \;\;\;\;r_{0} = \frac{3 m \sigma}{2} = \mbox{constant}, \] where $\sigma$ is a dimensionless, arbitrary, constant of integration. For $r_{0} = \hat{r}$ the metric (\ref{spatial Kramer-like solution modified}) reduces to (\ref{spatial Kramer-like solution}). In addition, for $a = 1$, $b = 0$ the solutions (\ref{General temporal Schwarzschild-Kramer-like solution}) and (\ref{spatial Kramer-like solution modified}) yield the temporal Schwarzschild metric (\ref{T-Schw exterior}). It should be noted that $0 < \hat{r} < 2 m$ in the whole range $- 2/\sqrt{3} \leq b < 1$. 
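The bound $0 < \hat{r} < 2 m$ can be verified numerically over the whole range of $b$; note also that, on the constraint $a^2 + a b + b^2 = 1$, one has $1 - 3 a b = (a - b)^2 \geq 0$, so the square root in $\hat{r}$ is always real. A sketch (ours):

```python
import math

def a_of_b(b):
    # nonnegative root of a^2 + a b + b^2 = 1
    return -b / 2 + math.sqrt(max(0.0, 4 - 3 * b**2)) / 2

def r_hat(b, m=1.0):
    # r_hat = m [4 + 3(a+b) - sqrt(1 - 3ab)] / 4
    a = a_of_b(b)
    # 1 - 3ab = (a - b)^2 >= 0 on the constraint; guard against round-off
    return m * (4 + 3 * (a + b) - math.sqrt(max(0.0, 1 - 3 * a * b))) / 4

# scan -2/sqrt(3) <= b < 1
bs = [-2 / math.sqrt(3) + k * (1 + 2 / math.sqrt(3)) / 400 for k in range(400)]
in_range = all(0 < r_hat(b) < 2 for b in bs)   # 0 < r_hat < 2m with m = 1
```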
The Kretschmann scalar diverges at $r = 0$ and $r = 2 m$, but is regular at $r = \hat{r}$ and $r = r_{0}$. Once again, smooth continuation at $r = r_{0} > 2 m$, which requires $\sigma > 4/3$, is achieved in terms of the coordinate $x$ defined by the relation $r = r_{0} + x^2$, viz., \begin{equation} \label{spatial Kramer-like solution modified in terms of x} ds^2 = \left(\frac{x^2 + r_{0} - 2 m}{x^2 + r_{0}}\right)^{[a + b + \sqrt{1 - 3 a b}]/2} \; dt^2 - \frac{4 \left(x^2 + r_{0} - \hat{r}\right)\left(x^2 + r_{0}\right)^{a + b}}{\left(x^2 + r_{0} - 2 m\right)^{a + b}}\; dx^2 - \frac{\left(x^2 + r_{0}\right)^{a + b + 1}}{\left(x^2 + r_{0} - 2 m\right)^{a + b - 1}}\; d\Omega^2. \end{equation} Here the physical radius of a spherical shell with coordinate $x$ is the same as in (\ref{physical radius of a spherical shell with coordinate x}). Therefore, under the same conditions on $b$ and $\sigma$ we have the Cases $1$ and $2$ discussed above. The only difference is that now the total gravitational mass is given by the first equation in (\ref{Post-Newtonian parameters for spatial Kramer solution}). Therefore, in Case $1$ the wormhole throat is at \begin{equation} \label{location of the throat for the spatial Kramer-like solution modified} R_{0} = \frac{3 \sigma M}{a + b + \sqrt{1 - 3 a b}}\left(1 - \frac{4}{3 \sigma}\right)^{(1 - a - b)/2}, \;\;\;\sigma > 4/3. \end{equation} In case $2$ we now have \begin{equation} \label{R + - for the spatial Kramer-like solution modified} R_{(x_{\pm})} = \frac{2 M \sqrt{a b}}{a + b + \sqrt{1 - 3 a b}}\left(\frac{a + b + 1}{a + b - 1}\right)^{(a + b)/2}. \end{equation} One can verify that in Case $2$, the weak, null and dominant energy conditions are satisfied in a neighborhood of $R_{0}$. Once again as in (\ref{location of the throat for temporal Schw. wormhole}) and (\ref{Rmin for b positive, temporal Schwarzschild-Kramer-like}), (i) if $b < 0$ we can have $R_{0} < 2 M$ for a wide range of values of $\sigma$ (see Fig. 
$1$); (ii) $R_{0} > 2 M$ for $b > 0$ (with $\sigma > \bar{\sigma}$), and (iii) $R_{(x_{\pm})} > 2 M$ for all $b > 0$. \section{Symmetric wormholes from solutions with one regular asymptotic} Let us note that, from a mathematical point of view, the Kramer-like metric (\ref{Kramer-like solution}) and its temporal Schwarzschild generalization (\ref{General temporal Schwarzschild-Kramer-like solution}) differ only by a factor in $g_{rr}$. However, from a physical point of view they are very different: these metrics have distinct PPN parameters and (\ref{Kramer-like solution}) only has one asymptotic region for $b > 0$. On the other hand, (\ref{General temporal Schwarzschild-Kramer-like solution}) describes symmetric wormholes in the whole range of $b$ provided $r_{0} > 2 m$. A similar situation occurs between the spatial Kramer-like solution (\ref{spatial Kramer-like solution}) and its generalization (\ref{spatial Kramer-like solution modified}). The aim of this section is to generate two more families of traversable symmetric wormholes from solutions that only have one regular asymptotic region. The new solutions arise from the observation that any solution to the field equation (\ref{field equation for the general spherical metric}) generates new ones given by \begin{equation} \label{old solutions originate new solutions} ds^2 = A^2(r) dt^2 - h(r) B^2(r) dr^2 - r^2 C^2(r) \; d\Omega^2, \end{equation} where \begin{equation} \label{equation for h} h(r) = \left[1 + c\; e^{- \int{\frac{B^2\; dr}{L^2\left(L'/L + A'/2 A\right)}}}\right]^{- 1}; \end{equation} $L \equiv r C(r)$, and $c$ is some arbitrary constant. \medskip $\bullet$ First, let us consider the solution (\ref{General temporal Schwarzschild-Kramer-like solution}). If we demand that it must contain the Schwarzschild spacetime for $b = 0$ $(a = 1)$, then we should set $\sigma = 1$. 
The result is the ``temporal Kramer-like" metric \begin{equation} \label{temporal Kramer-like solution} ds^2 = f^a \; dt^2 - \frac{2 (f + 1) + (f - 1)(a + 2 b)}{1 + 3 f}f^{- (a + b)}\; dr^2 - r^2 f^{(1 - a - b)}d \Omega^2. \end{equation} We note that $g_{rr} < 0$ for any positive $a$. If $a = 0$ and $b = 1$ (respectively, $b = - 1$), the solution describes a wormhole with throat at $R_{0} = 3 m/2$ (respectively, $R_{0} = - m/2$, with $m < 0$). In general, for any $b < 0$ there is a naked singularity at $r = 2m$. However, for $b > 0$ the physical radius has a minimum at $\bar{r} = m \left(1 + a + b\right) > 2 m$. Substituting (\ref{temporal Kramer-like solution}) into (\ref{equation for h}) we generate another solution to (\ref{field equation for the general spherical metric}), namely \begin{equation} \label{temporal Kramer-like solution, wormhole} ds^2 = f^a \; dt^2 - \left(\frac{r - 3 m/2}{r - r_{0}}\right)\left[\frac{2 (f + 1) + (f - 1)(a + 2 b)}{1 + 3 f}\right]f^{- (a + b)}\; dr^2 - r^2 f^{(1 - a - b)}d \Omega^2, \end{equation} where $r_{0} \equiv \left(3 m + c\right)/2$. \medskip $\bullet$ Second, we consider the line element (\ref{spatial Kramer-like solution modified}). If we set $\sigma = 1$ we recover the Schwarzschild metric when $b = 0$ $(a = 1)$. With this choice the solution becomes \begin{equation} \label{mixed Kramer-like solution} ds^2 = f^{[a + b + \sqrt{1 - 3 a b}]/2} \; dt^2 - \frac{4 \left(f + 1\right) + \left(f - 1\right)\left[3\left(a + b\right) - \sqrt{1 - 3 a b}\right]}{2 \left(1 + 3 f\right)} \;f^{- (a + b)}\; dr^2 - r^2 f^{(1 - a - b)} d\Omega^2. \end{equation} For this metric we find \begin{eqnarray} \label{Post-Newtonian parameters for mixed Kramer solution} M &=& \frac{m}{2}\left(a + b + \sqrt{1 - 3 a b}\right),\nonumber \\ \beta &=& \frac{2 + a + b + 5 \sqrt{1 - 3 a b}}{4(a + b + \sqrt{1 - 3 a b})},\nonumber \\ \gamma &=& \frac{5(a + b) + 2 + \sqrt{1 - 3 a b}}{4\left(a + b + \sqrt{1 - 3 a b}\right)}. 
\end{eqnarray} Substituting (\ref{mixed Kramer-like solution}) into (\ref{equation for h}) we get another solution to (\ref{field equation for the general spherical metric}), viz., \begin{equation} \label{mixed Kramer-like solution, wormhole} ds^2 = f^{[a + b + \sqrt{1 - 3 a b}]/2} \; dt^2 - \left(\frac{r - 3 m/2}{r - r_{0}}\right)\left[\frac{4 \left(f + 1\right) + \left(f - 1\right)\left[3\left(a + b\right) - \sqrt{1 - 3 a b}\right]}{2 \left(1 + 3 f\right)}\right] \;f^{- (a + b)}\; dr^2 - r^2 f^{(1 - a - b)} d\Omega^2. \end{equation} Thus, although the original metrics (\ref{temporal Kramer-like solution}) and (\ref{mixed Kramer-like solution}) only have one regular asymptotic region for $b > 0$, the new solutions (\ref{temporal Kramer-like solution, wormhole}) and (\ref{mixed Kramer-like solution, wormhole}) in terms of the coordinate $x$ defined by $r = x^2 + r_{0}$ are regular and symmetric for all values of $b$ and $r_{0} > 2 m$, so that both $x \rightarrow \infty$ and $x \rightarrow - \infty$ are flat asymptotics. All our symmetric wormholes have a factor proportional to $\left(r - r_{0}\right)^{- 1}$ in $g_{rr}$. In this regard, it is interesting to mention a general result obtained by Bronnikov and Kim \cite{Bronnikov 1} in curvature coordinates $(g_{\theta\theta} = -R^2)$. They showed that in traversable, twice asymptotically flat, wormhole solutions the metric function $g_{RR}$ near the throat must behave as $\left(R - R_{0}\right)^{- 1}$. \section{Summary} The aim of this work has been to generate new exact static, spherically symmetric Lorentzian wormhole solutions on the brane. Since (\ref{field equation for the general spherical metric}) is a second-order differential equation for $A(r)$ and $C(r)$ and first order for $B(r)$, the simplest way for generating static solutions is to provide some smooth functions $A(r)$ and $C(r)$. 
Then, the field equation (\ref{field equation for the general spherical metric}) reduces to a linear first-order differential equation for $B(r)$. In the context of curvature coordinates, $C(r) = 1$, Bronnikov and Kim \cite{Bronnikov 1} have given a thorough discussion of the general conditions on the so-called redshift function $\ln{A(r)}$ under which the solution describes symmetric and asymmetric wormholes. In this paper, to accomplish our goal we have solved (\ref{field equation for the general spherical metric}) using a different approach: (i) we have relaxed the condition $C(r) = 1$, which is required in curvature coordinates. Instead we have chosen $C(r) = f^{(1 - a - b)/2}$. From a physical point of view, this choice automatically incorporates the requirement of existence of a throat. Namely, $R(r)= r f^{(1 - a - b)/2}$ has at least one regular minimum (a throat) at some $r = \bar{r} > 2 m$, for $b > 0$ (see (\ref{Rmin})). In principle, one can always set $C = 1$ by redefining the radial coordinate. However, the line element (\ref{Kramer-like solution}) cannot in general (i.e., for any $b \neq 0$) be written in a simple analytical form in terms of the radial coordinate $R = r f^{(1 - a - b)/2}$. Therefore, from a practical point of view the choice $C(r) = f^{(1 - a - b)/2}$ generates solutions to (\ref{field equation for the general spherical metric}) which are algebraically simple in terms of $r$, but (in general) not expressible in terms of elementary functions of $R$; (ii) we have demanded that the spacetime must contain the Kramer-like spacetime (\ref{Kramer-like solution}), which is a vacuum solution on the brane constructed from the Kaluza-Klein $5D$ solution (\ref{Kramer's solution in 5D}). This assumption guarantees that the Schwarzschild spacetime is recovered for some particular choices of the parameters. 
We note that in the cosmological realm, $5D$ Kaluza-Klein solutions have been used to generate braneworld cosmological models (with vanishing bulk cosmological constant) via a relatively simple procedure \cite{Equiv}. For $b = 0$ and $a = 1$ we return to curvature coordinates and our solutions reduce to some well-known ones in the literature. For example, the line element (\ref{General temporal Schwarzschild-Kramer-like solution}) reproduces the temporal Schwarzschild spacetime (\ref{T-Schw exterior}) obtained by Casadio, Fabbri and Mazzacurati \cite{CFM} in search for new black holes on the brane and by Germani and Maartens \cite{Germani} as a possible external metric for an isolated braneworld star. Bronnikov and Kim \cite{Bronnikov 1}, in their example $2$, showed that these spacetimes allow the existence of symmetric traversable wormholes for $r_{0} > 2 m$ ($R_{0} > 2 M$ in our notation). For $b \neq 0$, our solutions display some interesting physical properties: \begin{enumerate} \item The models with $b < 0$ (Case $1$) represent traversable wormholes that can have throats located at $R_{0} < 2 M$. What this means is that, as seen from outside, a particle or light falling down reaches the Schwarzschild radius (which no longer defines the horizon) in a finite time. It should be noted that in general relativity, in order to have a throat larger (less) than the Schwarzschild radius for a given mass at a flat asymptotic, it is necessary to have matter with negative (positive) energy density \cite{B-R Book}. However, for the braneworld wormholes under consideration here this is not necessarily so. Indeed, following the reasoning of \cite{B-R Book} we obtain\footnote{In curvature coordinates $ds^2 = e^{\nu(R)}d t^2 - e^{\lambda(R)} d R^2 - R^2 d\Omega^2$, setting $e^{- \lambda} = 1 - 2 M(R)/R$, the effective field equation $G_{0}^{0} = 8 \pi T_{0}^{0}$ yields $M(R) = 4\pi \int{R^2\rho dR} + C$, where $C$ is a constant of integration. 
Evaluating this expression at the throat $R = R_{0}$ we obtain $C$ in such a way that $M(R) = M(R_{0}) + 4 \pi\int_{R_{0}}^{R}{R^2\rho dR}$. Now, taking into consideration that $e^{- \lambda(R_{0})} = 1 - 2 M(R_{0})/R_{0} = 0$ and that in the present case $M(\infty) = \gamma M$, which can be checked for all our solutions, we obtain (\ref{relation between the wormhole radius and the effective density}).} \begin{equation} \label{relation between the wormhole radius and the effective density} R_{0} = 2 \gamma M - 8\pi\int_{R_{0}}^{\infty}{R^2 \rho(R)dR}, \end{equation} where $\gamma$ is one of the post-Newtonian parameters in the Eddington-Robertson expansion (\ref{Post-Newtonian parameters}). In general relativity $\gamma = 1$, but in our braneworld solutions it can be smaller or larger than $1$ depending on the choice of the various parameters (see, e.g. (\ref{Post-Newtonian parameters for temporal Kramer's solution})). \item The models with $b > 0$ (Case $2$) are wormhole spacetimes which have three extremal spheres with finite area. This is in contrast to standard Lorentzian wormholes, {\it a la} Morris and Thorne, which have only one extremal surface of minimal area, identified with the throat. Here, although the wormholes are symmetric and twice asymptotically flat, the throat is not located at $x = 0$, as in \cite{Bronnikov 1} and \cite{CFM}, but instead at some $x \neq 0$ which explicitly depends on the choice of $\sigma$. However, the specific value of the wormhole radius is the same for the whole range $4/3 < \sigma < \bar{\sigma}$; it depends only on the choice of $b > 0$ (see equations (\ref{Rmin for b positive, temporal Schwarzschild-Kramer-like}), (\ref{R + - for the spatial Kramer-like solution modified})). We note that the extremal spheres have radii larger than $2 M$ in the whole range of allowed parameters $b$ and $\sigma$. These conclusions concerning Case $2$ are neatly summarized in figure $2$. 
\end{enumerate} The two-parameter solutions (\ref{General temporal Schwarzschild-Kramer-like solution}), (\ref{spatial Kramer-like solution modified}), (\ref{temporal Kramer-like solution, wormhole}), (\ref{mixed Kramer-like solution, wormhole}) share similar properties, viz., for different values of $b$ (or $a$) and $\sigma$ they can describe black holes, naked singularities and symmetric traversable wormholes of the types discussed in Cases $1$ and $2$. However, from an experimental point of view they are not equivalent. This follows from the fact that the PPN parameters and the total masses are distinct in each of these solutions. The one-parameter solutions (\ref{Kramer-like solution}), (\ref{spatial Kramer-like solution}), (\ref{temporal Kramer-like solution}), (\ref{mixed Kramer-like solution}), which yield the Schwarzschild spacetime for $b = 0$, naked singularities for $b < 0$ and configurations with only one asymptotically flat region for $b > 0$, display analogous characteristics. From a formal point of view the symmetric solutions are obtained from those with only one asymptotic region by replacing in the latter $g_{rr} \rightarrow \left(\frac{r - r_{s}}{r - r_{0}}\right) g_{rr}$ and keeping the other metric functions fixed. Here $r_{0} > 2 m$ is an arbitrary parameter, and $r_{s}$ is a constant determined by the field equations, which in all cases turns out to be less than $2 m$. Besides the geometric differences discussed above, the effective matter is quite different in the two cases. In particular, configurations with only one asymptotic region have $T_{0}^{0} < 0 $ everywhere (but their total gravitational mass $M$ is positive), while the symmetric wormholes have positive effective energy density. Thus, in this work we have obtained a number of models for wormholes on the brane with interesting physical properties. One can use them to generate new ones by means of iteration. 
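As an added consistency check (ours, not part of the original derivation), the coordinate substitution $r = x^{2} + r_{0}$ makes the regularity of the symmetric solutions explicit: writing the radial metric coefficient as $g_{rr} = F(r)/(r - r_{0})$, with $F$ smooth and nonvanishing at $r = r_{0}$, and using $dr = 2x\,dx$, one finds

```latex
\[
  \frac{F(r)}{r - r_{0}}\, dr^{2}
  \;=\; \frac{F(x^{2} + r_{0})}{x^{2}}\,\bigl(2x\, dx\bigr)^{2}
  \;=\; 4\, F(x^{2} + r_{0})\, dx^{2},
\]
```

which is regular at the throat $x = 0$ and invariant under $x \rightarrow -x$, so that $x \rightarrow \infty$ and $x \rightarrow -\infty$ indeed give the two flat asymptotics.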
We note that, although we have restricted our study to the solutions engendered by the Kramer-like metric (\ref{Kramer-like solution}) for which $C = A^{(1 - a - b)/a}$, our discussion can be generalized by considering $C \propto A^{p}$, where $p$ is some constant parameter (not necessarily $p = (1 - a - b)/a$). With this assumption we can follow the general approach of Bronnikov and Kim \cite{Bronnikov 1} to study how to choose $A(r)$ in (\ref{field equation for the general spherical metric}) so as to obtain solutions satisfying wormhole conditions. The next logical step, towards a complete wormhole model within the braneworld paradigm, is to investigate the extension of our solutions into the bulk. However, finding an exact solution in $5D$ which is consistent with a particular metric in $4D$ is not an easy task. In spite of this, the existence of such a solution is guaranteed by the Campbell-Magaard embedding theorems \cite{Seahra}. The coupling of our wormhole solutions to the bulk geometry, though important, is beyond the scope of the present paper. \paragraph{Acknowledgments:} I wish to thank Kirill Bronnikov for helpful comments and constructive suggestions.
\section{Introduction} Since the first quasar was discovered \citep{schmidt63}, methods have been developed to differentiate these rare objects from other astronomical sources in the sky. In the standard methods, it is assumed that QSOs have point-like morphology. They are then separated from the much more numerous stars by their photometric colors. The UVX selection, e.g. \citep{croom01}, can be largely complete ($>$90\%) for QSOs with 0.3~$<$~$z$~$<$~2.2, but this completeness drops at higher redshift. The selection purity was brought up to 97\% for $g<21$ using Kernel Density Estimation techniques applied to SDSS colors \citep{kde04} and extended to the infrared by \citet{richards09}, implying that spectroscopy is not needed to confirm the corresponding statistical sample of quasars at high galactic latitudes. This led to the definition of a one-million-QSO catalog \citep{kde09} down to $i=21.3$ from the photometry of SDSS Data Release 6 \citep{adelman08}. Extending quasar selection methods to higher redshifts and magnitudes presents several difficulties. For example, at fainter magnitudes, galaxies start to contaminate ``point-like'' photometric catalogs, both because of increasing photometric errors and because of non-negligible contributions of AGNs in certain bands. Nevertheless, such an extension is very desirable, not only to study the AGN population but also to use the quasars to study the foreground absorbers. In particular, studies of spatial correlations in the IGM from the Lyman-$\alpha$ forest and/or metal absorption lines are in need of higher target density at high redshift \citep{petitjean97,nusser99,pichon01,caucci08}. More recently, it was realized that the Baryonic Acoustic Oscillations (BAO) could be detected in the Lyman-$\alpha$ forest. BAO in the pre-recombination Universe imprint features in the matter power spectrum that have led to important constraints on the cosmological parameters. 
So far, BAO effects have been seen using galaxies of redshift $z<0.4$ to sample the matter density \citep{eisenstein,cole,percival}. The Baryon Oscillation Spectroscopic Survey (BOSS) \citep{boss} of the Sloan Digital Sky Survey (SDSS-III) \citep{sdss3} proposes to extend these studies using galaxies of higher redshifts, $z<0.9$. The BOSS project will also study BAO effects in the range $2.2<z<3.5$ using Lyman-$\alpha$ absorption towards high redshift quasars (QSOs) to sample the matter density as proposed by \citet{McDonald}. \begin{figure*}[htb] \centering \includegraphics[width=14.0cm]{Colors.eps} \caption{2D distributions of colors ($u-g$, $g-r$, $r-i$, $i-z$ and $g-i$) for objects classified as PLO in the SDSS photometric catalog (blue lines for contours) and for objects spectroscopically classified as QSO (red solid lines for contours). The PSF magnitudes ($ugriz$) have been corrected for Galactic extinction according to the model of \citet{schlegel98}. } \label{fig:Colors} \end{figure*} \begin{figure*}[htb] \centering \includegraphics[width=14.0cm]{compVar.eps} \caption{Distributions of the discriminating variables used as input in the NN for objects classified as PLO in the SDSS photometric catalog (blue dotted histogram) and for objects spectroscopically classified as QSO (red slashed histogram): {\bf a)} Distribution of the PSF $g$ magnitude, {\bf b), c), d), e) and f)} Distributions of, respectively, $\sigma(u)$, $\sigma(g)$, $\sigma(r)$, $\sigma(i)$ and $\sigma(z)$, the errors on the corresponding PSF magnitudes. } \label{fig:compVar} \end{figure*} The power spectrum has already been measured at $z\sim2.5$ via the 1-dimensional matter power spectrum derived from quasar spectra \citep{croft}. The observation of BAO effects will require a full 3-dimensional sampling of the matter density, requiring a much higher number of quasars than previously available. BOSS aims to study around 100,000 QSOs over 8,000 square degrees. 
The requirement that the Lyman-$\alpha$ absorption fall in the range of the BOSS spectrograph means that the quasars must be in the redshift range $2.2<z<3.5$. The quasars to be targeted must be chosen using only available photometric information, mostly from the SDSS-I point-source catalog. The target selection method must be able to reject the non-quasar point-like objects (PLOs; mainly stars) by more than two orders of magnitude with a selection efficiency of QSOs better than 50\%. The BOSS project needs a high density of faint $z>2.2$ QSOs ($\sim 20$ QSOs per square degree) and therefore requires the selection to be pushed up to $g\sim 22$. We developed a new method to select quasars using more information than the standard color selection methods. The classification of objects is a task that is generally performed by applying cuts on various distributions which distinguish signal objects from background objects. This approach is not optimal because all the information (the shapes of the variable distributions, the correlations between the variables) is not exploited, and this leads to a loss in classification efficiency. Statistical methods based on multivariate analysis have been developed to tackle this kind of problem. For historical reasons, these methods have focused on linear problems, which are easily tractable. In order to deal with nonlinearities, Artificial Neural Networks (NN) have been shown to be a powerful tool in the classification task (see for instance \citet{bishop}). By combining photometric measurements such as the magnitude values and their errors for the five bands ($ugriz$) of SDSS photometry, an NN approach will allow us both to select the QSO candidates and to predict their redshift. Similar methods such as Kernel Density Estimation (KDE) \citep{kde04,kde09} already exist to select QSOs. 
Our approach based on NN is an extension of these methods because we use more information (the magnitude errors and the absolute $g$ magnitude, instead of only colors, i.e. differences between two magnitudes). Moreover, we propose to treat in parallel the determination of the redshift with the same tool. This approach contrasts with the usual methods to compute photometric redshifts, which rely on $\chi^2$ minimization techniques \citep{photoz,weinstein04}. \section {QSO and Background Samples} The quasar candidates should be selected among a photometric catalog of objects including real quasars and what we will call background objects. Here, both for the background and QSO samples, the photometric information comes from the SDSS-DR7 imaging database of point-like objects \citep{dr7}, PLOs. We apply the same quality cuts on the photometry for the two samples and select objects with $g$ magnitude in the range $18 \leq g \leq 22$. Note that in the following, magnitudes will be point spread function (PSF) magnitudes \citep{lupton99} in the SDSS pseudo-AB magnitude system \citep{oke83}. \subsection{Background Sample} \label{sec:qsosample} For the background sample, we would ideally use an unbiased sample of spectroscopically confirmed SDSS point-like objects \emph{that are not QSOs}. Unfortunately, we have no unbiased sample of such objects because spectroscopic targets were chosen in SDSS-I to favor particular types of objects. Fortunately, the number of QSOs among PLOs is sufficiently small that using all PLOs as background does not affect the NN's ability to identify QSOs. We have verified that this strategy works by using the synthetic PLO catalog of \citet{fan}. We degraded the star sample by adding a few percent of QSOs into it; we then retrained the NN and compared its performance with that of the NN trained on the pure star sample. We did not observe any significant worsening of the NN performance. The background sample used in the following was drawn from the SDSS PLO sample. 
We used objects with galactic latitude $b$ around $45^\circ$ to average the effect of Galactic extinction. In the future, we may consider the possibility of having a different NN for each stripe of constant galactic latitude. The final sample contains $\sim$30,000 PLOs: half of them constituting the ``training'' sample, the other half the ``control'' sample, as explained in the next section. \subsection{QSO Sample} For the QSO training sample, we use a list of 122,818 spectroscopically-confirmed quasars obtained from the 2QZ quasar catalog \citep{croom04}, the SDSS-2dF LRG and QSO Survey (2SLAQ) \citep{croom09}, and the SDSS-DR7 spectroscopic database \citep{dr7}. These quasars have redshifts in the range $ 0.05 \leq z \leq 5.0 $ and $g$ magnitudes in the range $18 \leq g \leq 22$ (galactic extinction corrected). Since quasars will be observed over a limited blue wavelength range (down to about 3700~\AA), we will target only quasars with $z>2.2$. Therefore, the sample of known quasars includes 33,918 QSOs with $z\geq 1.8$: half of them constituting the effective ``training'' sample, the other half the ``control'' sample. For the determination of the photometric redshift, we use a wider sample of 95,266 QSOs with $z\geq 1$. In order to compare QSOs with background objects from different regions of the sky, the QSO magnitudes have been corrected for Galactic extinction with the model of \citet{schlegel98}. \subsection{Discriminating variables} The photometric information is extracted from the SDSS-DR7 imaging database \citep{dr7}. The 10 elementary variables are the PSF magnitudes for the 5 SDSS bands ($ugriz$) and their errors. As explained in \citet{kde09}, the most powerful variables are the four usual colors ($u-g$,$g-r$,$r-i$,$i-z$) which combine the PSF magnitudes. Fig.~\ref{fig:Colors} shows the 2D color-color distributions for the QSO and PLO samples. 
These plots give the impression that it is easy to disentangle the two classes of objects, but one needs to keep in mind that the final goal is to obtain a 50\% efficiency for QSOs with a non-quasar PLO efficiency of the order of $\sim10^{-3}$. Therefore, to improve the NN performance, we added the absolute magnitude $g$ and the five magnitude errors. Their distributions for the two classes are given in Fig.~\ref{fig:compVar}. An improvement can be expected from the additional variables and also from the correlations between the variables. Indeed, for example, it is expected that the errors will be larger for compact galaxies than for intrinsically point-like objects. Note that the $g$ distribution for the QSOs is likely to be biased by the spectroscopic selection. This issue will be addressed in the future with the first observations of BOSS. Indeed, the photometric selection of QSOs for these first observations is based on loose selection criteria and it should provide a ``less biased'' catalog of spectroscopically confirmed quasars, close to completeness up to $g=22$. \section{Neural Network Approach} The basic building block of the NN architecture \footnote{ For this study, both for target selection and redshift determination, we use a C++ package, TMultiLayerPerceptron, developed in the ROOT environment \citep{root}.} is a processing element called a neuron. The NN architecture used in this study is illustrated in Fig.~\ref{fig:ArchitectureNN}, where each neuron is placed on one of four ``layers'', with $N_l$ neurons in layer $l,\,l=1,2,3,4$. The output of each neuron on the first (input) layer is one of the $N_1$ variables defining an object, e.g. magnitudes, colors and uncertainties. The inputs of neurons on subsequent layers ($l=2,3,4$) are the $N_{l-1}$ outputs of the previous layer, i.e. the $x^{l-1}_j ,\,j=1,..,N_{l-1}$. 
The inputs of any neuron are first linearly combined according to ``weights'', $w^l_{ij}$, and ``offsets'', $\theta^l_j$, \begin{equation} y^l_j=\sum_{i=1}^{N_{l-1}} w^l_{ij}\, x^{l-1}_i + \theta^l_j\,\, \hspace*{5mm}l\,\geq\,2 \;. \end{equation} The output of neuron $j$ on layer $l$ is then defined by the non-linear function \begin{equation} x^{l}_j = \frac{1}{1+ \exp\left(-y^l_j\right)}\,\, \hspace*{5mm} 2\leq \,l\,\leq 3 \;. \label{eq:activation} \end{equation} The fourth layer has only one neuron giving an output $y_{NN}\equiv y^4_1$, reflecting the likelihood that the object defined by the $N_1$ input variables is a QSO. \begin{figure}[htb] \centering \includegraphics[width=7.cm]{SchemaNN.eps} \caption{ Schematic representation of the Neural Network used here, with $N_1$ input variables, two hidden layers and one output neuron.} \label{fig:ArchitectureNN} \end{figure} \begin{figure*}[htb] \centering \includegraphics[width=\textwidth]{NNOutputEffi.eps} \caption{{\bf a)} NN output for objects classified as PLO in the SDSS photometric catalog, i.e. background objects, (blue dotted histogram) and for objects spectroscopically classified as QSO (red slashed histogram) in the control samples, using 10 discriminating variables: 4 colors, $g$ magnitude and errors on the five ($u,g,r,i$ and $z$) magnitudes. {\bf b)} PLO efficiency as a function of the QSO efficiency for three NN configurations. Blue dashed line: 4 colors ($u-g,g-r,r-i,i-z$). Black dotted line: 4 colors + $g$ magnitude. Red solid line: 4 colors + $g$ magnitude + errors on the five ($u,g,r,i$ and $z$) magnitudes. The curves are obtained by varying the cut value, $y^{min}_{NN}$, for the two distributions of Fig.~\ref{fig:NNOutputEffi}-a. Efficiency is defined as the ratio of the number of objects with an NN output greater than $y^{min}_{NN}$ over the number of objects in the sample. The dots correspond, from left to right, to $y^{min}_{NN}$ equal to, respectively, 0.2, 0.5, 0.8, 0.9, 0.95 and 0.98. 
} \label{fig:NNOutputEffi} \end{figure*} Certain aspects of the NN procedure, especially the number of layers and the number of nodes per layer, are somewhat arbitrary and are chosen by experience and for simplicity. On the other hand, the weights and offsets must be optimized so that the NN output, $y_{NN}$, correctly reflects the probability that an input object is a QSO. The NN must therefore be ``trained'' with a set of objects that are known to be QSOs or not QSOs (background objects). More precisely, the weights and offsets are determined by minimizing the ``error'' function \begin{equation} E= \frac{1}{2n}\sum_{p=1}^{n}(y_{NN}(p)-y(p))^2\,\, , \label{eq:error} \end{equation} where the sum is over $n$ objects, $p$, and where $y(p)$ is a discrete value defined as $y(p)=1$ (resp. $y(p)=0$) if the object $p$ is a QSO (resp. is not a QSO). In the case of the NN developed to estimate a photometric redshift, the target value $y(p)$ is a continuous value equal to the true spectroscopic redshift, $z_{spectro}$. Note that in the NN architecture used for this study, the activation function, defined in Eq.~\ref{eq:activation}, is not applied to the last neuron, allowing the output variable to vary in a range wider than $[0;1]$. In this kind of classification analysis, the major risk is the ``over-training'' of the NN. It occurs when the NN has too many parameters ($w_{ij}$ and $\theta_j$) determined by too few training objects. Over-training leads to an apparent increase in the classification efficiency because the NN learns the objects of the training sample by heart. To prevent such a behavior, the QSO and background samples are split into two independent sub-samples, called ``training'' and ``control'' samples. The determination of the NN parameters ($w_{ij}$ and $\theta_j$) is obtained by minimizing the error $E$, computed over the QSO and background training samples. 
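As a minimal sketch of the forward pass and error function just described (illustrative only, not the TMultiLayerPerceptron code actually used; the hidden-layer sizes and random initialization are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(y):
    # non-linear activation applied on the hidden layers
    return 1.0 / (1.0 + np.exp(-y))

def init_layer(n_in, n_out):
    # weights w^l_ij and offsets theta^l_j (the random scale is an assumption)
    return rng.normal(scale=0.1, size=(n_in, n_out)), np.zeros(n_out)

def forward(x, layers):
    # sigmoid activation on the two hidden layers (l = 2, 3) ...
    for w, theta in layers[:-1]:
        x = sigmoid(x @ w + theta)
    # ... but no activation on the last neuron, so y_NN may leave [0, 1]
    w, theta = layers[-1]
    return (x @ w + theta).ravel()

def error(y_nn, y_true):
    # E = (1/2n) sum_p (y_NN(p) - y(p))^2
    return 0.5 * np.mean((y_nn - y_true) ** 2)

n_inputs = 10  # 4 colors + g magnitude + 5 magnitude errors
layers = [init_layer(n_inputs, 8), init_layer(8, 8), init_layer(8, 1)]

x = rng.normal(size=(5, n_inputs))            # five toy objects
y_true = np.array([1.0, 1.0, 0.0, 0.0, 1.0])  # y(p) = 1 for QSO, 0 otherwise
print(forward(x, layers).shape, error(forward(x, layers), y_true))
```

Training would then adjust the weights and offsets to reduce `error`, stopping when the error on the control sample no longer decreases.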
The minimization is suspended as soon as the error for the control samples stops decreasing, even if the error is still decreasing for the training samples. We have followed this procedure both for the target selection and for the determination of the photometric redshift. The result of the NN training procedure is shown in Fig.~\ref{fig:NNOutputEffi}-a. The histograms of $y_{NN}$ for the control QSO and background samples are overplotted. Most objects have either $y_{NN}\sim 1$ (corresponding to QSOs) or $y_{NN}\sim 0$ (corresponding to background objects). QSO target selection is achieved by defining a threshold value $y_{NN}^{min}$ to be chosen between $y_{NN}\sim 0$ and $y_{NN}\sim 1$. The optimal value of the threshold is obtained by balancing the number of accepted QSOs against the number of accepted background objects. A plot of the QSO efficiency vs. the background efficiency is shown in Fig.~\ref{fig:NNOutputEffi}-b. \section {Photometric Selection of Quasars} For illustration, we have considered three NN configurations that differ by the number of discriminating variables. The first one uses only the four standard colors ($u-g$,$g-r$,$r-i$,$i-z$). In the second configuration, we add the absolute magnitude $g$, and finally, in the third configuration, the errors on the five PSF magnitudes are also taken into account. For each configuration, we have optimized the number of neurons in the hidden layers and the number of iterations in the minimization to get the best ``PLO efficiency--QSO efficiency'' curve. The three curves are superimposed in Fig.~\ref{fig:NNOutputEffi}-b. Adding information, i.e.\ discriminating variables, clearly improves the classification performance. For instance, for a QSO efficiency of 50\%, the PLO rejection fraction increases from 98.8\% to 99.4\% and to 99.6\% when the number of variables increases respectively from 4 to 5 and to 10. 
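The threshold scan behind the efficiency curves of Fig.~\ref{fig:NNOutputEffi}-b can be sketched as follows (a toy illustration: the NN outputs are replaced here by arbitrary Gaussian-distributed stand-ins):

```python
import numpy as np

def efficiency(y_nn, y_min):
    # fraction of a sample with NN output above the threshold y_NN^min
    return np.mean(y_nn > y_min)

rng = np.random.default_rng(1)
y_qso = rng.normal(1.0, 0.2, size=10_000)   # QSOs cluster near y_NN ~ 1
y_plo = rng.normal(0.0, 0.2, size=10_000)   # background clusters near y_NN ~ 0

# scan the threshold, as for the dots in the efficiency plot
for y_min in (0.2, 0.5, 0.8, 0.9, 0.95, 0.98):
    print(f"y_min={y_min:4.2f}  "
          f"QSO eff={efficiency(y_qso, y_min):5.3f}  "
          f"PLO eff={efficiency(y_plo, y_min):.5f}")
```

Each threshold gives one point of the ``PLO efficiency--QSO efficiency'' curve; raising the threshold lowers both efficiencies.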
In the region of QSO efficiency in which we want to work, between 50\% and 80\%, the PLO background is reduced by a factor of 3 by adding 6 variables to the four usual colors. The small improvement found by using photometric errors may be due to a small contamination of the PLO catalog by compact galaxies. It is therefore apparent that the 10-variable NN should be used for the purpose of selecting quasars in any photometric catalog. In that case, the PLO rejection factors are, respectively, 99.6\%, 99.2\% and 98.5\% for QSO efficiencies of 50\%, 70\% and 85\%. \begin{figure*}[htb] \centering \includegraphics[width=15cm]{PhotozNN.eps} \caption{ {\bf a)} Photometric redshift determined with the NN ($z_{NN}$) as a function of the redshift measured from spectroscopy ($z_{spectro}$). {\bf b)} The $z_{NN}-z_{spectro}$ distribution is fitted with three Gaussians contributing $93.4\%$, $6.4\%$ and $0.2\%$ of the histogram and of width, respectively, $\sigma$~=~0.1, 0.4 and 1.0. The RMS of the $z_{NN}-z_{spectro}$ distribution is 0.18 and its mean is 0.00.} \label{fig:PhotozNN} \end{figure*} According to the \citet{McDonald} computation based on the \citet{jiang} survey of faint QSOs, we expect $\sim$20 QSOs per deg$^2$, with $g<22$ and $2.2 \lesssim z \lesssim 3.5$. For a galactic latitude $b\sim45^\circ$, the number of objects selected in the SDSS-DR7 imaging database is $\sim$4000. Thus, with a QSO efficiency of 70\% and a PLO efficiency\footnote{Note that by its definition in Sec.\ref{sec:qsosample}, the PLO sample contains QSOs.} of 0.8\%, we will select 32 objects per deg$^2$ including $\sim$14 ``true'' QSOs. These numbers correspond roughly to what is required for the BOSS project. \section {Photometric Redshift of Quasars} For the BOSS project, only quasars with a redshift in the range $2.2 \lesssim z \lesssim 3.5$ are useful. In the definition of the training sample, we have already applied a cut on the redshift, $z\geq 1.8$, to reinforce the selection of high-$z$ QSOs. 
But it is useful to add an additional constraint and select only QSOs with $u-g>0.4$. This a posteriori color cut helps to remove QSOs in the region $0.8 \lesssim z \lesssim 2.2$. However, we propose a more elegant method, which consists of estimating the redshift of the QSO from the photometric information with another NN. For the determination of the photometric redshift we use the same 10 variables as those in the NN for target selection. The difference is that in the definition of the error $E$, in Eq.~\ref{eq:error}, the target value $y(p)$ is a continuous value equal to the true spectroscopic redshift, $z_{spectro}$. Except for this difference, the NN architecture is the same as for target selection, with two hidden layers with the same number of hidden neurons. The minimization is computed with a single ``training'' sample of spectroscopically-confirmed QSOs and it is suspended as soon as the error $E$ for the QSO ``control'' sample stops decreasing. Fig.~\ref{fig:PhotozNN}-a shows the photometric redshift $z_{NN}$ determined with the NN versus the spectroscopic redshift of the spectroscopically-confirmed QSOs. Most of the objects are distributed along the diagonal, demonstrating the good agreement between the two measurements. This can be quantified by plotting the difference $z_{NN}-z_{spectro}$ (Fig.~\ref{fig:PhotozNN}-b). The fit of this distribution with three Gaussians gives $93.4\%$ and $6.4\%$ of the objects respectively in the core and wide Gaussians; the fraction of outliers, determined with the third Gaussian, is only $0.2\%$. The widths of the three Gaussians are, respectively, $\sigma$~=~0.1, 0.4 and 1.0. Therefore, as shown in Fig.~\ref{fig:QSORedshiftNN}, by applying a conservative cut on the photometric redshift, $z_{NN}>2.1$, we can remove 90.0\% of the QSOs with $z<2.2$. 
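A three-Gaussian fit of this kind can be sketched with SciPy (a hedged illustration: the residuals below are synthetic, generated with the quoted fractions and widths, and the binning and starting values are our assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

# synthetic z_NN - z_spectro residuals: a mixture of three centered
# Gaussians with the fractions and widths quoted in the text
n = 50_000
frac = np.array([0.934, 0.064, 0.002])
sigma = np.array([0.1, 0.4, 1.0])
comp = rng.choice(3, size=n, p=frac)
dz = rng.normal(0.0, sigma[comp])

def three_gauss(x, a1, s1, a2, s2, a3, s3):
    # sum of three centered Gaussians with free amplitudes and widths
    g = lambda a, s: a * np.exp(-0.5 * (x / s) ** 2)
    return g(a1, s1) + g(a2, s2) + g(a3, s3)

counts, edges = np.histogram(dz, bins=200, range=(-2.0, 2.0))
centers = 0.5 * (edges[:-1] + edges[1:])
p0 = [counts.max(), 0.1, 0.05 * counts.max(), 0.4, 0.001 * counts.max(), 1.0]
popt, _ = curve_fit(three_gauss, centers, counts, p0=p0)
print("fitted widths:", sorted(abs(s) for s in popt[1::2]))
```

The dominant narrow component is well constrained by the data; the widest component, carrying only a fraction of a percent of the objects, is naturally the least well determined.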
The fraction of lost QSOs with a redshift in the relevant region, $2.2< z <3.5$, stays at a reasonable level of 5.3\%. \begin{figure}[htb] \centering \includegraphics[width=10cm]{QSORedshiftNN.eps} \caption{Spectroscopic redshift distribution in the QSO sample (blue slashed histogram). The distribution for the QSOs passing the cut $z_{NN}>2.1$ is overplotted (red dotted histogram). After this cut, 90.0\% of the QSOs with $z<2.2$ are removed and only 5.3\% of the QSOs in the $2.2< z <3.5$ region are lost. } \label{fig:QSORedshiftNN} \end{figure} \section{Conclusions} In this paper we have presented a new promising approach to select quasars from photometric catalogs and to estimate their redshift. We use a Neural Network with a multilayer perceptron architecture. The input variables are photometric measurements, i.e. the magnitudes and their errors for the five bands ($ugriz$) of the SDSS photometry. For the target selection, we achieve a PLO rejection factor of 99.6\% and 98.5\% for, respectively, a quasar efficiency of 50\% and 85\%. The RMS of the difference between the photometric redshift and the spectroscopic redshift is of the order of 0.15 over the region relevant for BAO studies. These new statistical methods, developed in the context of the BOSS project, can easily be extended to any other analysis requiring QSO selection and/or the determination of photometric redshifts. \begin{acknowledgements} We thank N. P. Ross and D. H. Weinberg for triggering our interest in QSO target selection in the context of the BOSS project and for many interesting discussions. The authors are also grateful to G. T. Richards, A. D. Myers and E. Sheldon for important discussions and for providing the QSO catalog developed for the target selection in BOSS and used in this paper. We would also like to thank X. Fan, who provided us with some synthetic catalogs of PLOs. \end{acknowledgements}
\section{Introduction}\label{intro} Let ${\cal E}$ be a category with finite limits. For the bicategory $\mathrm{Span}\,{\cal E}$, the locally full subbicategory $\mathrm{Map}\mathrm{Span}\,{\cal E}$ determined by the left adjoint arrows is essentially locally discrete, meaning that each hom category $\mathrm{Map}\mathrm{Span}\,{\cal E}(X,A)$ is an equivalence relation, and so is equivalent to a discrete category. Indeed, a span $x{\kern -.25em}:{\kern -.25em} X\,{\s\toleft}\, S{\,\s\to}\, A{\kern -.25em}:{\kern -.25em} a$ has a right adjoint if and only if $x{\kern -.25em}:{\kern -.25em} S{\,\s\to}\, X$ is invertible. The functors $$\mathrm{Map}\mathrm{Span}\,{\cal E}(X,A)\to{\cal E}(X,A)\quad\mbox{given by}\quad(x,a)\mapsto ax^{-1}$$ provide equivalences of categories which are the effects on homs for a biequivalence $$\mathrm{Map}\mathrm{Span}\,{\cal E}{\,\s\to}\,{\cal E}\, .$$ Since ${\cal E}$ has finite products, $\mathrm{Map}\mathrm{Span}\,{\cal E}$ has finite products {\em as a bicategory}. We refer the reader to \cite{ckww} for a thorough treatment of bicategories with finite products. Each hom category $\mathrm{Span}\,{\cal E}(X,A)$ is {\em isomorphic} to the slice category ${\cal E}/(X\times A)$ which has binary products given by pullback in ${\cal E}$ and terminal object $1{\kern -.25em}:{\kern -.25em} X\times A{\,\s\to}\, X\times A$. Thus $\mathrm{Span}\,{\cal E}$ is a {\em precartesian bicategory} in the sense of \cite{ckww}. The canonical lax monoidal structure $$\mathrm{Span}\,{\cal E}\times \mathrm{Span}\,{\cal E}\to\mathrm{Span}\,{\cal E}\toleft\mathbf{1}$$ for this precartesian bicategory is seen to have its binary aspect given on arrows by $$(X\xleftarrow{x} S \xrightarrow{a} A\,,\, Y\xleftarrow{y} T\xrightarrow{b} B ) \mapsto (X\times Y \xleftarrow{x\times y} S\times T \xrightarrow{a\times b} A\times B)\, ,$$ and its nullary aspect provided by $$1\xleftarrow1 1 \xrightarrow1 1\, ,$$ the terminal object of $\mathrm{Span}\,{\cal E}(1,1)$. 
Both of these lax functors are readily seen to be pseudofunctors so that $\mathrm{Span}\,{\cal E}$ is a {\em cartesian bicategory} as in \cite{ckww}. The purpose of this paper is to characterize those cartesian bicategories $\mathbf{B}$ which are biequivalent to $\mathrm{Span}\,{\cal E}$, for some category ${\cal E}$ with finite limits. Certain aspects of a solution to the problem are immediate. A biequivalence $\mathbf{B}\sim\mathrm{Span}\,{\cal E}$ provides $$\mathrm{Map}\mathbf{B}\sim\mathrm{Map}\mathrm{Span}\,{\cal E}\sim{\cal E}$$ so that we must ensure firstly that $\mathrm{Map}\mathbf{B}$ is essentially locally discrete. From the characterization of bicategories of relations as locally ordered cartesian bicategories in \cite{caw} one suspects that the following axiom will figure prominently in providing essential local discreteness for $\mathrm{Map}\mathbf{B}$. \axm\label{frob}{\em Frobenius:}\quad A cartesian bicategory $\mathbf{B}$ is said to satisfy the {\em Frobenius} axiom if, for each $A$ in $\mathbf{B}$, $A$ is Frobenius. \eth \noindent Indeed Frobenius objects in cartesian bicategories were defined and studied in \cite{ww} where amongst other things it is shown that if $A$ is Frobenius in cartesian $\mathbf{B}$ then, for all $X$, $\mathrm{Map}\mathbf{B}(X,A)$ is a groupoid. (This theorem was generalized considerably in \cite{lsw} which explained further aspects of the Frobenius concept.) However, essential local discreteness for $\mathrm{Map}\mathbf{B}$ requires also that the $\mathrm{Map}\mathbf{B}(X,A)$ be ordered sets (which is automatic for locally ordered $\mathbf{B}$). 
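In the motivating case $\mathbf{B}=\mathrm{Span}\,{\cal E}$ both requirements can be seen directly. The following routine check, recorded here only for orientation, shows why each hom category of map spans is an equivalence relation.

```latex
% A 2-cell between map spans (x,a) and (x',a') from X to A
% is a morphism of spans
\theta \colon S \to S'
\qquad\text{with}\qquad
x'\theta = x , \qquad a'\theta = a .
% Since x and x' are invertible, \theta is forced to be
\theta = (x')^{-1}x ,
% hence unique and invertible: Map Span E(X,A) is an
% equivalence relation.
```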
Here we study also {\em separable} objects in cartesian bicategories for which we are able to show that if $A$ is separable in cartesian $\mathbf{B}$ then, for all $X$, $\mathrm{Map}\mathbf{B}(X,A)$ is an ordered set and a candidate axiom is: \axm\label{sepax}{\em Separable:}\quad A cartesian bicategory $\mathbf{B}$ is said to satisfy the {\em Separable} axiom if, for each $A$ in $\mathbf{B}$, $A$ is separable. \eth In addition to essential local discreteness, it is clear that we will need an axiom which provides {\em tabulation} of each arrow of $\mathbf{B}$ by a span of maps. Since existence of Eilenberg-Moore objects is a basic 2-dimensional limit concept, we will express tabulation in terms of this requirement; we note that existence of pullbacks in $\mathrm{Map}\mathbf{B}$ follows easily from tabulation. In the bicategory $\mathrm{Span}\,{\cal E}$, the comonads $G{\kern -.25em}:{\kern -.25em} A{\,\s\to}\, A$ are precisely the symmetric spans $g{\kern -.25em}:{\kern -.25em} A\,{\s\toleft}\, X{\,\s\to}\, A{\kern -.25em}:{\kern -.25em} g$; the map $g{\kern -.25em}:{\kern -.25em} X{\,\s\to}\, A$ together with $g\eta_g{\kern -.25em}:{\kern -.25em} g{\,\s\to}\, gg^*g$ provides an Eilenberg-Moore coalgebra for $g{\kern -.25em}:{\kern -.25em} A\,{\s\toleft}\, X{\,\s\to}\, A{\kern -.25em}:{\kern -.25em} g$. We will posit: \axm\label{emc}{\em Eilenberg-Moore for Comonads:}\quad Each comonad $(A,G)$ in $\mathbf{B}$ has an Eilenberg-Moore object. \eth Conversely, any map (left adjoint) $g{\kern -.25em}:{\kern -.25em} X{\,\s\to}\, A$ in $\mathrm{Span}\,{\cal E}$ provides an Eilenberg-Moore object for the comonad $gg^*$. \noindent We further posit: \axm\label{mc}{\em Maps are Comonadic:}\quad Each left adjoint $g{\kern -.25em}:{\kern -.25em} X{\,\s\to}\, A$ in $\mathbf{B}$ is comonadic. \eth \noindent from which, in our context, we can also deduce the Frobenius and Separable axioms. 
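In $\mathrm{Span}\,{\cal E}$ the Separable axiom holds for a simple reason, sketched here as a standard computation (using the description of separability, made precise in Section~\ref{Sep}, as invertibility of the unit of $d_A\dashv d_A^*$): diagonals are monomorphisms.

```latex
% In Span E the composite d_A^* d_A is computed by pulling back
% d_A : A --> A x A along itself; since d_A is a (split) mono,
% that pullback is A itself, so
d_A^*\, d_A
\;=\;
\bigl(A \xleftarrow{\,1_A\,} A \xrightarrow{\,1_A\,} A\bigr)
\;=\; 1_A ,
% and the unit \eta_{d_A} : 1_A --> d_A^* d_A is invertible:
% every object of Span E is separable.
```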
In fact we shall also give, in Proposition~\ref{easy} below, a straightforward proof that $\mathrm{Map}\mathbf{B}$ is locally essentially discrete whenever Axiom~\ref{mc} holds. But due to the importance of the Frobenius and separability conditions in other contexts, we have chosen to analyze them in their own right. \section{Preliminaries}\label{prelim} We recall from \cite{ckww} that a bicategory $\mathbf{B}$ (always, for convenience, assumed to be normal) is said to be {\em cartesian} if the subbicategory of maps (by which we mean left adjoint arrows), $\mathbf{M}=\mathrm{Map}\mathbf{B}$, has finite products $-\times-$ and $1$; each hom-category $\mathbf{B}(B,C)$ has finite products $-\wedge-$ and $\top$; and a certain derived tensor product $-\otimes-$ and $I$ on $\mathbf{B}$, extending the product structure of $\mathbf{M}$, is functorial. As in \cite{ckww}, we write $p$ and $r$ for the first and second projections at the global level, and similarly $\pi$ and $\rho$ for the projections at the local level. If $f$ is a map of $\mathbf{B}$ --- an arrow of $\mathbf{M}$ --- we will write $\eta_f,\epsilon_f{\kern -.25em}:{\kern -.25em} f\dashv f^*$ for a chosen adjunction in $\mathbf{B}$ that makes it so. It was shown that the derived tensor product of a cartesian bicategory underlies a symmetric monoidal bicategory structure. We recall too that in \cite{ww} Frobenius objects in a general cartesian bicategory were defined and studied. We will need the central results of that paper too. Throughout this paper, $\mathbf{B}$ is assumed to be a cartesian bicategory. 
As in \cite{ckww} we write $$\bfig \Atriangle/->`->`/[\mathbf{G}={\rm Gro}\mathbf{B}`\mathbf{M}`\mathbf{M};\partial_0`\partial_1`] \efig$$ for the Grothendieck span corresponding to $$\mathbf{M}{^\mathrm{op}}\times\mathbf{M}\to^{i{^\mathrm{op}}\times i}\mathbf{B}{^\mathrm{op}}\times\mathbf{B}\to^{\mathbf{B}(-,-)}\mathbf{CAT}$$ where $i{\kern -.25em}:{\kern -.25em}\mathbf{M}{\,\s\to}\,\mathbf{B}$ is the inclusion. A typical arrow of $\mathbf{G}$, $(f,\alpha,u){\kern -.25em}:{\kern -.25em}(X,R,A){\,\s\to}\,(Y,S,B)$ can be depicted by a square \begin{equation}\label{square} \bfig \square(0,0)[X`Y`A`B;f`R`S`u] \morphism(125,250)|m|<250,0>[`;\alpha] \efig \end{equation} and such arrows are composed by pasting. A 2-cell $(\phi,\psi){\kern -.25em}:{\kern -.25em}(f,\alpha,u){\,\s\to}\,(g,\beta,v)$ in $\mathbf{G}$ is a pair of 2-cells $\phi{\kern -.25em}:{\kern -.25em} f{\,\s\to}\, g$, $\psi{\kern -.25em}:{\kern -.25em} u{\,\s\to}\, v$ in $\mathbf{M}$ which satisfy the obvious equation. The (strict) pseudofunctors $\partial_0$ and $\partial_1$ should be regarded as {\em domain} and {\em codomain} respectively. Thus, applied to (\ref{square}), $\partial_0$ gives $f$ and $\partial_1$ gives $u$. The bicategory $\mathbf{G}$ also has finite products, which are given on objects by $-\otimes-$ and $I$; these are preserved by $\partial_0$ and $\partial_1$. The Grothendieck span can also be thought of as giving a double category (of a suitably weak flavour), although we shall not emphasize that point of view. \subsection{}\label{xredux} The arrows of $\mathbf{G}$ are particularly well suited to relating the various product structures in a cartesian bicategory. 
In 3.31 of \cite{ckww} it was shown that the local binary product, for $R,S{\kern -.25em}:{\kern -.25em} X{\scalefactor{0.5} \two}A$, can be recovered to within isomorphism from the defined tensor product by $$R\wedge S\cong d^*_A(R\otimes S)d_X$$ A slightly more precise version of this is that the mate of the isomorphism above, with respect to the single adjunction $d_A\dashv d^*_A$, defines an arrow in $\mathbf{G}$ $$\bfig \square(0,0)[X`X\otimes X`A`A\otimes A;d_X`R\wedge S`R\otimes S`d_A] \morphism(125,250)|m|<250,0>[`;] \efig$$ which when composed with the projections of $\mathbf{G}$, recovers the local projections as in $$ \bfig \square(0,0)|almb|[X`X\otimes X`A`A\otimes A;d_X`R\wedge S`R\otimes S`d_A] \morphism(125,250)|m|<250,0>[`;] \square(500,0)|amrb|[X\otimes X`X`A\otimes A`A;p_{X,X}`R\otimes S`R`p_{A,A}] \morphism(625,250)|m|<250,0>[`;\tilde p_{R,S}] \place(1250,250)[\cong] \square(1550,0)|almb|[X`X`A`A;1_X`R\wedge S`R`1_A] \morphism(1675,250)|m|<250,0>[`;\pi] \efig$$ for the first projection, and similarly for the second. The unspecified $\cong$ in $\mathbf{G}$ is given by a pair of convenient isomorphisms $p_{X,X}d_X\cong 1_X$ and $p_{A,A}d_A\cong 1_A$ in $\mathbf{M}$. Similarly, when $R\wedge S{\,\s\to}\, R\otimes S$ is composed with $(r_{X,X},\tilde r_{R,S},r_{A,A})$ the result is $(1_X,\rho,1_A){\kern -.25em}:{\kern -.25em} R\wedge S{\,\s\to}\, S$. \subsection{}\label{bc} Quite generally, an arrow of $\mathbf{G}$ as given by the square (\ref{square}) will be called a {\em commutative} square if $\alpha$ is invertible. An arrow of $\mathbf{G}$ will be said to satisfy the {\em Beck condition} if the mate of $\alpha$ under the adjunctions $f\dashv f^*$ and $u\dashv u^*$, as given in the square below (no longer an arrow of $\mathbf{G}$), is invertible. 
$$\bfig \square(1000,0)/<-`->`->`<-/[X`Y`A`B;f^*`R`S`u^*] \morphism(1125,250)|m|<250,0>[`;\alpha^*] \efig$$ Thus Proposition 4.7 of \cite{ckww} says that projection squares of the form $\tilde p_{R,1_Y}$ and $\tilde r_{1_X,S}$ are commutative while Proposition 4.8 of \cite{ckww} says that these same squares satisfy the Beck condition. If $R$ and $S$ are also maps and $\alpha$ is invertible then $\alpha^{-1}$ gives rise to another arrow of $\mathbf{G}$, from $f$ to $u$ with reference to the square above, which may or may not satisfy the Beck condition. The point here is that a commutative square of maps gives rise to two, generally distinct, Beck conditions. It is well known that, for bicategories of the form $\mathrm{Span}\,{\cal E}$ and $\mathrm{Rel}\,{\cal E}$, all pullback squares of maps satisfy both Beck conditions. A category with finite products has automatically a number of pullbacks which we might call {\em product-absolute} pullbacks because they are preserved by all functors which preserve products. In \cite{ww} the Beck conditions for the product-absolute pullback squares of the form $$\bfig \Square(1000,0)/->`<-`<-`->/[A\times A`A\times A\times A`A`A\times A;d\times A`d`A\times d`d] \efig$$ were investigated. (In fact, in this case it was shown that either Beck condition implies the other.) The objects for which these conditions are met are called {\em Frobenius} objects. \prp\label{mcifro} For a cartesian bicategory, the axiom {\em Maps are Comonadic} implies the axiom {\em Frobenius}. 
\eth \prf It suffices to show that the 2-cell $\delta_1$ below is invertible: $$\bfig \square(0,0)|alrb|/<-`->``<-/<750,500>[A`A\otimes A`A\otimes A`A\otimes(A\otimes A);d^*`d``1\otimes d^*] \morphism(750,500)|r|<0,-250>[A\otimes A`(A\otimes A)\otimes A;d\otimes 1] \morphism(750,250)|r|<0,-250>[(A\otimes A)\otimes A`A\otimes(A\otimes A);a] \morphism(175,250)|a|<150,0>[`;\delta_1] \square(0,-500)|blrb|/<-`->`->`<-/<750,500>[A\otimes A`A\otimes(A\otimes A)`A`A\otimes A;1\otimes d^*`r`r`d^*] \morphism(250,-250)|a|<150,0>[`;\tilde r_{1_A,d^*}] \efig$$ The paste composite of the squares is invertible (being essentially the identity 2-cell on $d^*$). The lower 2-cell is invertible by Proposition 4.7 of \cite{ckww} so that the whisker composite $r\delta_1$ is invertible. Since $r$ is a map it reflects isomorphisms, by Maps are Comonadic, and hence $\delta_1$ is invertible. \frp \rmk\label{frobclofin} It was shown in \cite{ww} that, in a cartesian bicategory, the Frobenius objects are closed under finite products. It follows that the full subbicategory of a cartesian bicategory determined by the Frobenius objects is a cartesian bicategory which satisfies the Frobenius axiom. \eth \section{Separable Objects and Discrete Objects in Cartesian Bicategories}\label{Sep} In this section we look at separability for objects of cartesian bicategories. Since for an object $A$ which is both separable and Frobenius, the hom-category $\mathrm{Map}\mathbf{B}(X,A)$ is essentially discrete, for all $X$, we shall then be able to show that $\mathrm{Map}\mathbf{B}$ is essentially discrete by showing that all objects in $\mathbf{B}$ are separable and Frobenius. But first we record the following direct argument: \begin{proposition}\label{easy} If $\mathbf{B}$ is a bicategory in which all maps are comonadic and $\mathrm{Map}\mathbf{B}$ has a terminal object, then $\mathrm{Map}\mathbf{B}$ is locally essentially discrete. 
\end{proposition} \prf We must show that for all objects $X$ and $A$, the hom-category $\mathrm{Map}\mathbf{B}(X,A)$ is essentially discrete. As usual, we write $1$ for the terminal object of $\mathrm{Map}\mathbf{B}$ and $t_A:A\to 1$ for the essentially unique map, which by assumption is comonadic. Let $f,g:X\to A$ be maps from $X$ to $A$. If $\alpha:f\to g$ is any 2-cell, then $t_A\alpha$ is invertible, since $1$ is terminal in $\mathrm{Map}\mathbf{B}$. But since $t_A$ is comonadic, it reflects isomorphisms, and so $\alpha$ is invertible. Furthermore, if $\beta:f\to g$ is another 2-cell, then $t_A\alpha=t_A\beta$ by the universal property of $1$ once again, and now $\alpha=\beta$ since $t_A$ is faithful. Thus there is at most one 2-cell from $f$ to $g$, and any such 2-cell is invertible. \frp In any (bi)category with finite products the diagonal arrows $d_A{\kern -.25em}:{\kern -.25em} A{\,\s\to}\, A\times A$ are (split) monomorphisms so that in the bicategory $\mathbf{M}$ the following square is a product-absolute pullback $$\bfig \square(0,0)[A`A`A`A\otimes A;1_A`1_A`d_A`d_A] \efig$$ that gives rise to a single $\mathbf{G}$ arrow. \dfn\label{sep} An object $A$ in a cartesian bicategory is said to be {\em separable} if the $\mathbf{G}$ arrow above satisfies the Beck condition. \eth Of course the invertible mate condition here says precisely that the unit $\eta_{d_A}\f1_A{\,\s\to}\, d_A^*d_A$ for the adjunction $d_A\dashv d_A^*$ is invertible. Thus Axiom \ref{sepax}, as stated in the Introduction, says that, for all $A$ in $\mathbf{B}$, $\eta_{d_A}$ is invertible. \rmk\label{sepcat} For a map $f$ it makes sense to define {\em $f$ is fully faithful} to mean that $\eta_f$ is invertible. For a {\em category $A$} the diagonal $d_A$ is fully faithful if and only if $A$ is an ordered set. 
\eth \prp\label{sepmeans} For an object $A$ in a cartesian bicategory, the following are equivalent: \begin{enumerate}[$i)$] \item $A$ is separable; \item for all $f{\kern -.25em}:{\kern -.25em} X{\,\s\to}\, A$ in $\mathbf{M}$, the diagram $f\,{\s\toleft}\, f{\,\s\to}\, f$ is a product in $\mathbf{B}(X,A)$; \item $1_A\,{\s\toleft}\, 1_A{\,\s\to}\, 1_A$ is a product in $\mathbf{B}(A,A)$; \item $1_A{\,\s\to}\,\top_{A,A}$ is a monomorphism in $\mathbf{B}(A,A)$; \item for all $G\ra1_A$ in $\mathbf{B}(A,A)$, the diagram $G\,{\s\toleft}\, G\ra1_A$ is a product in $\mathbf{B}(A,A)$. \end{enumerate} \eth \prf $[i)\Longrightarrow$ $ii)]$ A local product of maps is not generally a map but here we have: $$f\wedge f\cong d_A^*(f\otimes f)d_X\cong d_A^*(f\times f)d_X\cong d_A^*d_A f\cong f$$ $[ii)\Longrightarrow$ $iii)]$ is trivial. $[iii)\Longrightarrow$ $i)]$ Note the use of pseudo-functoriality of $\otimes$: $$d^*_Ad_A\cong d^*_A1_{A\otimes A}d_A\cong d^*_A(1_A\ox1_A)d_A\iso1_A\wedge1_A\iso1_A$$ $[iii)\Longrightarrow$ $iv)]$ To say that $1_A\,{\s\toleft}\, 1_A{\,\s\to}\, 1_A$ is a product in $\mathbf{B}(A,A)$ is precisely to say that $$\bfig \square(0,0)[1_A`1_A`1_A`\top_{A,A};1_{1_A}`1_{1_A}``] \efig$$ is a pullback in $\mathbf{B}(A,A)$, which in turn is precisely to say that $1_A{\,\s\to}\,\top_{A,A}$ is a monomorphism in $\mathbf{B}(A,A)$. $[iv)\Longrightarrow$ $v)]$ It is a generality that if an object $S$ in a category is subterminal then for any $G{\,\s\to}\, S$, necessarily unique, $G\,{\s\toleft}\, G{\,\s\to}\, S$ is a product diagram. $[v)\Longrightarrow$ $iii)]$ is trivial. \frp \cor\label{mcisep}{\rm [Of $iv)$]} For a cartesian bicategory, the axiom {\em Maps are Comonadic} implies the axiom {\em Separable}. \eth \prf We have $\top_{A,A}=t_A^*t_A$ for the map $t_A{\kern -.25em}:{\kern -.25em} A\ra1$. It follows that the unique $1_A{\,\s\to}\, t_A^*t_A$ is $\eta_{t_A}$. 
Since $t_A$ is comonadic, $\eta_{t_A}$ is the equalizer shown: $$1_A\to^{\eta_{t_A}}t_A^*t_A\two^{t_A^*t_A\eta_{t_A}}_{\eta_{t_A}t_A^*t_A}t_A^*t_At_A^*t_A$$ and hence a monomorphism. \frp \cor\label{copt}{\rm [Of $iv)$]} For separable $A$ in cartesian $\mathbf{B}$, an arrow $G{\kern -.25em}:{\kern -.25em} A{\,\s\to}\, A$ admits at most one copoint $G{\,\s\to}\, 1_A$; one exists precisely when the unique arrow $G{\,\s\to}\,\top_{A,A}$ factors through $1_A{\scalefactor{0.75}\mon}\top_{A,A}$. \frp \eth \prp\label{sepclofin} In a cartesian bicategory, the separable objects are closed under finite products. \eth \prf If $A$ and $B$ are separable objects then applying the homomorphism $\otimes{\kern -.25em}:{\kern -.25em}\mathbf{B}\times\mathbf{B}{\,\s\to}\,\mathbf{B}$ we have an adjunction $d_A\times d_B\dashv d_A^*\otimes d_B^*$ with unit $\eta_{d_A}\otimes\eta_{d_B}$ which, being an isomorph of the adjunction $d_{A\otimes B}\dashv d^*_{A\otimes B}$ with unit $\eta_{d_{A\otimes B}}$ (via middle-four interchange), shows that the separable objects are closed under binary products. On the other hand, $d_I$ is an equivalence so that $I$ is also separable. \frp \cor\label{eos} For a cartesian bicategory, the full subbicategory determined by the separable objects is a cartesian bicategory which satisfies the axiom {\em Separable}. \frp \eth \prp\label{ordhom} If $A$ is a separable object in a cartesian bicategory $\mathbf{B}$, then, for all $X$ in $\mathbf{B}$, the hom-category $\mathbf{M}(X,A)$ is an ordered set, meaning that the category structure forms a reflexive, transitive relation. \eth \prf Suppose that we have arrows $\alpha,\beta{\kern -.25em}:{\kern -.25em} g{\scalefactor{0.5} \two}f$ in $\mathbf{M}(X,A)$. 
In $\mathbf{B}(X,A)$ we have $$\bfig \Atriangle(0,0)/->`->`/[g`f`f;\alpha`\beta`] \morphism(500,500)|m|<0,-500>[g`f\wedge f;\gamma] \morphism(500,0)|b|<-500,0>[f\wedge f`f;\pi] \morphism(500,0)|b|<500,0>[f\wedge f`f;\rho] \efig$$ By Proposition \ref{sepmeans} we can take $f\wedge f =f$ and $\pi=1_f=\rho$ so that we have $\alpha=\gamma=\beta$. It follows that $\mathbf{M}(X,A)$ is an ordered set. \frp \dfn\label{discrete} An object $A$ in a cartesian bicategory is said to be {\em discrete} if it is both Frobenius and separable. We write $\mathrm{Dis}\mathbf{B}$ for the full subbicategory of $\mathbf{B}$ determined by the discrete objects. \eth \begin{remark} Beware that this is quite different to the notion of discreteness in a bicategory. An object $A$ of a bicategory is discrete if each hom-category $\mathbf{B}(X,A)$ is discrete; $A$ is essentially discrete if each $\mathbf{B}(X,A)$ is equivalent to a discrete category. The notion of discreteness for cartesian bicategories defined above turns out to mean that $A$ is essentially discrete in the bicategory $\mathrm{Map}\mathbf{B}$. \end{remark} From Proposition \ref{sepclofin} above and Proposition 3.4 of \cite{ww} we immediately have \prp\label{eod} For a cartesian bicategory $\mathbf{B}$, the full subbicategory $\mathrm{Dis}\mathbf{B}$ of discrete objects is a cartesian bicategory in which every object is discrete. \frp \eth And from Proposition \ref{ordhom} above and Theorem 3.13 of \cite{ww} we have \prp\label{dishom} If $A$ is a discrete object in a cartesian bicategory $\mathbf{B}$ then, for all $X$ in $\mathbf{B}$, the hom category $\mathbf{M}(X,A)$ is an equivalence relation. \frp \eth If both the {\em Frobenius} axiom of \cite{ww} and the {\em Separable} axiom of this paper hold for our cartesian bicategory $\mathbf{B}$, then every object of $\mathbf{B}$ is discrete. In this case, because $\mathbf{M}$ is a bicategory, the equivalence relations $\mathbf{M}(X,A)$ are stable under composition from both sides. 
Thus writing $|\mathbf{M}(X,A)|$ for the set of objects of $\mathbf{M}(X,A)$ we have a mere category ${\cal E}$, whose objects are those of $\mathbf{M}$ (and hence also those of $\mathbf{B}$) and whose hom sets are the quotients $|\mathbf{M}(X,A)|/\mathbf{M}(X,A)$. If the ${\cal E}(X,A)$ are regarded as discrete categories, so that ${\cal E}$ is a locally discrete bicategory, then the functors $\mathbf{M}(X,A){\,\s\to}\, |\mathbf{M}(X,A)|/\mathbf{M}(X,A)$ constitute the effect on homs functors for an identity on objects biequivalence $\mathbf{M}{\,\s\to}\, {\cal E}$. To summarize \thm\label{odibld} If a cartesian bicategory $\mathbf{B}$ satisfies both the Frobenius and Separable axioms then the bicategory of maps $\mathbf{M}$ is biequivalent to the locally discrete bicategory ${\cal E}$. \frp \eth In the following lemma we show that any copointed endomorphism of a discrete object can be made into a comonad; later on, we shall see that this comonad structure is in fact unique. \lem\label{diag} If $A$ is a discrete object in a cartesian bicategory $\mathbf{B}$ then, for any copointed endomorphism arrow $\epsilon{\kern -.25em}:{\kern -.25em} G{\,\s\to}\, 1_A{\kern -.25em}:{\kern -.25em} A{\,\s\to}\, A$, there is a 2-cell $\delta=\delta_G{\kern -.25em}:{\kern -.25em} G{\,\s\to}\, GG$ satisfying $$\bfig \Atriangle(0,0)/->`->`/[G`G`G;1`1`] \morphism(500,500)|m|<0,-500>[G`GG;\delta] \morphism(500,0)|b|<-500,0>[GG`G;G\epsilon] \morphism(500,0)|b|<500,0>[GG`G;\epsilon G] \efig$$ and if both $G,H{\kern -.25em}:{\kern -.25em} A{\s\two} A$ are copointed, so that $GH{\kern -.25em}:{\kern -.25em} A{\,\s\to}\, A$ is also copointed, and $\phi{\kern -.25em}:{\kern -.25em} G{\,\s\to}\, H$ is any 2-cell, then the $\delta$'s satisfy $$\bfig \square(0,0)[G`H`GG`HH;\phi`\delta`\delta`\phi\phi] \place(1000,0)[\mbox{and}] \Atriangle(1500,0)/<-`->`->/[GHGH`GH`GH;\delta`(G\epsilon)(\epsilon H)`1] \efig$$ \eth \prf We define $\delta=\delta_G$ to be the pasting composite $$\bfig 
\qtriangle(0,0)|amm|[A`AA`AAA;d`d_3`1d] \square(500,0)|amma|[AA`AA`AAA`AAA;G1`1d`1d`G11] \square(1000,0)|amma|[AA`A`AAA`AA;d^*`1d`d`d^*1] \qtriangle(500,-500)|abr|[AAA`AAA`AAA;G11`GGG`11G] \square(1000,-500)|arma|[AAA`AA`AAA`AA;d^*1`11G`1G`d^*1] \qtriangle(1000,-1000)|amm|[AAA`AA`A;d^*1`d_3^*`d^*] \morphism(0,500)|b|/{@{->}@/_4em/}/<1500,-1500>[A`A;G\wedge G\wedge G] \morphism(0,500)|b|/{@{->}@/_8em/}/<1500,-1500>[A`A;G] \morphism(0,500)|a|/{@{->}@/^3em/}/<1500,0>[A`A;G] \morphism(1500,500)|r|/{@{->}@/^3em/}/<0,-1500>[A`A;G] \morphism(800,-250)|m|<150,150>[`;G\epsilon G] \morphism(300,-800)|m|<150,150>[`;\delta_3] \place(1250,250)[1] \place(750,675)[2] \place(1700,-250)[3] \place(750,250)[4] \place(1250,-250)[5] \place(600,-450)[6] \efig$$ wherein $\otimes$ has been abbreviated by juxtaposition and all subregions not explicitly inhabited by a 2-cell are deemed to be inhabited by the obvious invertible 2-cell. A reference number has been assigned to those invertible 2-cells which arise from the hypotheses. As in \cite{ww}, $d_3$'s denote 3-fold diagonal maps and, similarly, we write $\delta_3$ for a local 3-fold diagonal. The invertible 2-cell labelled by `1' is that defining $A$ to be Frobenius. The 3-fold composite of arrows in the region labelled by `2' is $G\wedge1_A$ and, similarly, in that labelled by `3' we have $1_A\wedge G$. Each of these is isomorphic to $G$ because $A$ is separable and $G$ is copointed. The isomorphisms in `4' and `5' express the pseudo-functoriality of $\otimes$ in the cartesian bicategory $\mathbf{B}$. Finally `6' expresses the ternary local product in terms of the ternary $\otimes$ as in \cite{ww}. Demonstration of the equations is effected easily by pasting composition calculations. \frp \thm\label{wedge=.} If $G$ and $H$ are copointed endomorphisms on a discrete $A$ in a cartesian $\mathbf{B}$ then $$G\toleft^{G\epsilon}GH\to^{\epsilon H}H$$ is a product diagram in $\mathbf{B}(A,A)$. 
\eth \prf If we are given $\alpha{\kern -.25em}:{\kern -.25em} K{\,\s\to}\, G$ and $\beta{\kern -.25em}:{\kern -.25em} K{\,\s\to}\, H$ then $K$ is also copointed and we have $$K\to^\delta KK\to^{\alpha\beta}GH$$ as a candidate pairing. That this candidate satisfies the universal property follows from the equations of Lemma \ref{diag} which are precisely those in the equational description of binary products. We remark that the `naturality' equations for the projections follow immediately from uniqueness of copoints. \frp \cor\label{comsim} If $A$ is discrete in a cartesian $\mathbf{B}$, then an endo-arrow $G{\kern -.25em}:{\kern -.25em} A{\,\s\to}\, A$ admits a comonad structure if and only if $G$ has the copointed property, and any such comonad structure is unique. \eth \prf The Theorem shows that the arrow $\delta{\kern -.25em}:{\kern -.25em} G{\,\s\to}\, GG$ constructed in Lemma \ref{diag} is the product diagonal on $G$ in the category $\mathbf{B}(A,A)$ and, given $\epsilon{\kern -.25em}:{\kern -.25em} G{\,\s\to}\, 1_A$, this is the only comonad comultiplication on $G$. \frp \rmk It is clear that $1_A$ is terminal with respect to the copointed objects in $\mathbf{B}(A,A)$. \eth \prp\label{subterm} If an object $B$ in a bicategory $\mathbf{B}$ has $1_B$ subterminal in $\mathbf{B}(B,B)$ then, for any map $f{\kern -.25em}:{\kern -.25em} A{\,\s\to}\, B$, $f$ is subterminal in $\mathbf{B}(A,B)$ and $f^*$ is subterminal in $\mathbf{B}(B,A)$. In particular, in a cartesian bicategory in which every object is separable, every adjoint arrow is subterminal. \eth \prf Precomposition with a map preserves terminal objects and monomorphisms, as does postcomposition with a right adjoint. \frp \section{Bicategories of Comonads}\label{Coms} The starting point of this section is the observation, made in the introduction, that a comonad in the bicategory $\mathrm{Span}\,{\cal E}$ is precisely a span of the form $$A \xleftarrow{g} X \xrightarrow{g} A$$ in which both legs are equal. 
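The comonad structure carried by such a symmetric span can be made explicit; the following is a routine check in $\mathrm{Span}\,{\cal E}$, recorded here for concreteness.

```latex
% For G = (A <--g-- X --g--> A), the counit is the span morphism
\epsilon \;=\; g \colon X \to A ,
\qquad
G \to 1_A = \bigl(A \xleftarrow{\,1\,} A \xrightarrow{\,1\,} A\bigr),
% while GG has apex the kernel pair X x_A X of g, and the
% comultiplication is the diagonal into that kernel pair:
\delta \;=\; \langle 1_X , 1_X \rangle \colon X \to X \times_A X .
% The comonad equations reduce to the universal property
% of the pullback X x_A X.
```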
We will write $\mathbf{C}=\mathrm{Com}\mathbf{B}$ for the bicategory of comonads in $\mathbf{B}$, $\mathrm{Com}$ being one of the duals of Street's construction $\mathrm{Mnd}$ in \cite{ftm}. Thus $\mathbf{C}$ has objects given by the comonads $(A,G)$ of $\mathbf{B}$. The structure 2-cells for comonads will be denoted $\epsilon=\epsilon_G$ for the counit and $\delta=\delta_G$ for the comultiplication. An arrow in $\mathbf{C}$ from $(A,G)$ to $(B,H)$ is a pair $(F,\phi)$ as shown in $$\bfig \square(0,0)[A`B`A`B;F`G`H`F] \morphism(125,250)|a|<250,0>[`;\phi] \efig$$ satisfying \begin{equation}\label{comarrow} \bfig \square(0,0)[FG`HF`F1_A`1_BF;\phi`F\epsilon`\epsilon F`=] \place(1000,250)[\mbox{and}] \square(1500,0)/`->``->/[FG``FGG`HFG;`F\delta``\phi G] \square(2000,0)/``->`->/[`HF`HFG`HHF;``\delta F`H\phi] \morphism(1500,500)<1000,0>[FG`HF;\phi] \efig \end{equation} (where, as often, we have suppressed the associativity constraints of our normal, cartesian, bicategory $\mathbf{B}$). A 2-cell $\tau{\kern -.25em}:{\kern -.25em}(F,\phi){\,\s\to}\,(F',\phi'){\kern -.25em}:{\kern -.25em}(A,G){\,\s\to}\,(B,H)$ in $\mathbf{C}$ is a 2-cell $\tau{\kern -.25em}:{\kern -.25em} F{\,\s\to}\, F'$ in $\mathbf{B}$ satisfying \begin{equation}\label{comtrans} \bfig \square(0,0)[FG`HF`F'G`HF';\phi`\tau G`H\tau`\phi'] \efig \end{equation} There is a pseudofunctor $I{\kern -.25em}:{\kern -.25em}\mathbf{B}{\,\s\to}\,\mathbf{C}$ given by $$I(\tau{\kern -.25em}:{\kern -.25em} F{\,\s\to}\, F'{\kern -.25em}:{\kern -.25em} A{\,\s\to}\, B)= \tau{\kern -.25em}:{\kern -.25em} (F,1_{F}){\,\s\to}\, (F',1_{F'}){\kern -.25em}:{\kern -.25em} (A,1_A){\,\s\to}\, (B,1_B)$$ From \cite{ftm} it is well known that a bicategory $\mathbf{B}$ has Eilenberg-Moore objects for comonads if and only if $I{\kern -.25em}:{\kern -.25em}\mathbf{B}{\,\s\to}\,\mathbf{C}$ has a right biadjoint, which we will denote by $E{\kern -.25em}:{\kern -.25em}\mathbf{C}{\,\s\to}\,\mathbf{B}$. 
We write $E(A,G)=A_G$ and the counit for $I\dashv E$ is denoted by $$\bfig \square(0,0)[A_G`A`A_G`A;g_G`1_{A_G}`G`g_G] \morphism(125,250)|a|<250,0>[`;\gamma_G] \Ctriangle(2500,0)/<-`->`->/<500,250>[A`A_G`A;g_G`G`g_G] \place(1500,250)[\mbox{ or, using normality of $\mathbf{B}$, better by }] \morphism(2700,250)|m|<200,0>[`;\gamma_G] \efig$$ with $(g_G,\gamma_G)$ abbreviated to $(g,\gamma)$ when there is no danger of confusion. It is standard that each $g=g_G$ is necessarily a map (whence our lower case notation) and the mate $gg^*{\,\s\to}\, G$ of $\gamma$ is an isomorphism which identifies $\epsilon_g$ and $\epsilon_G$. We will write $\mathbf{D}$ for the locally full subbicategory of $\mathbf{C}$ determined by all the objects and those arrows of the form $(f,\phi)$, where $f$ is a map, and write $j{\kern -.25em}:{\kern -.25em}\mathbf{D}{\,\s\to}\,\mathbf{C}$ for the inclusion. It is clear that the pseudofunctor $I{\kern -.25em}:{\kern -.25em}\mathbf{B}{\,\s\to}\,\mathbf{C}$ restricts to give a pseudofunctor $J{\kern -.25em}:{\kern -.25em}\mathbf{M}{\,\s\to}\,\mathbf{D}$. We say that the bicategory $\mathbf{B}$ {\em has Eilenberg-Moore objects for comonads, as seen by $\mathbf{M}$}, if $J{\kern -.25em}:{\kern -.25em}\mathbf{M}{\,\s\to}\,\mathbf{D}$ has a right biadjoint. (In general, this property does not follow from that of Eilenberg-Moore objects for comonads.) \begin{remark}\label{rmk:D-for-dummies} In the case $\mathbf{B}=\mathrm{Span}\,{\cal E}$, a comonad in $\mathbf{B}$ can, as we have seen, be identified with a morphism in ${\cal E}$. This can be made into the object part of a biequivalence between the bicategory $\mathbf{D}$ and the category ${\cal E}^\mathbf{2}$ of arrows in ${\cal E}$. If we further identify $\mathbf{M}$ with ${\cal E}$, then the inclusion $j:\mathbf{D}\to\mathbf{C}$ becomes the diagonal ${\cal E}\to{\cal E}^\mathbf{2}$; of course this does have a right adjoint, given by the domain functor. 
\end{remark} \thm\label{simcom} If $\mathbf{B}$ is a cartesian bicategory in which every object is discrete, the bicategory $\mathbf{D}=\mathbf{D}(\mathbf{B})$ admits the following simpler description: \begin{enumerate} \item[$i)$] An object is a pair $(A,G)$ where $A$ is an object of $\mathbf{B}$ and $G{\kern -.25em}:{\kern -.25em} A{\,\s\to}\, A$ admits a copoint; \item[$ii)$] An arrow $(f,\phi){\kern -.25em}:{\kern -.25em}(A,G){\,\s\to}\,(B,H)$ is a map $f{\kern -.25em}:{\kern -.25em} A{\,\s\to}\, B$ and a 2-cell $\phi{\kern -.25em}:{\kern -.25em} fG{\,\s\to}\, Hf$; \item[$iii)$] A 2-cell $\tau{\kern -.25em}:{\kern -.25em}(f,\phi){\,\s\to}\,(f',\phi'){\kern -.25em}:{\kern -.25em}(A,G){\,\s\to}\,(B,H)$ is a 2-cell $\tau{\kern -.25em}:{\kern -.25em} f{\,\s\to}\, f'$ satisfying equation (\ref{comtrans}). \end{enumerate} \eth \prf We have i) by Corollary \ref{comsim} while iii) is precisely the description of a 2-cell in $\mathbf{D}$, modulo the description of the domain and codomain arrows. So, we have only to show ii), which is to show that the equations (\ref{comarrow}) hold automatically under the hypotheses. For the first equation of (\ref{comarrow}) we have uniqueness of any 2-cell $fG{\,\s\to}\, f$ because $f$ is subterminal by Proposition \ref{subterm}. For the second, observe that the terminating vertex, $HHf$, is the product $Hf\wedge Hf$ in $\mathbf{B}(A,B)$ because $HH$ is the product $H\wedge H$ in $\mathbf{B}(B,B)$ by Theorem~\ref{wedge=.} and precomposition with a map preserves all limits. For $HHf$ seen as a product, the projections are, again by Theorem~\ref{wedge=.}, $H\epsilon f$ and $\epsilon Hf$. Thus, it suffices to show that the diagram for the second equation commutes when composed with both $H\epsilon f$ and $\epsilon Hf$. 
We have $$\bfig \square(0,0)/`->``->/[fG``fGG`HfG;`f\delta``\phi G] \square(500,0)/``->`->/[`Hf`HfG`HHf;``\delta f`H\phi] \morphism(0,500)<1000,0>[fG`Hf;\phi] \qtriangle(500,-500)|blr|[HfG`HHf`Hf;H\phi`Hf\epsilon`H\epsilon f] \morphism(0,-500)|b|<1000,0>[fG`Hf;\phi] \morphism(0,0)<0,-500>[fGG`fG;fG\epsilon] \square(2000,0)/`->``->/[fG``fGG`HfG;`f\delta``\phi G] \square(2500,0)/``->`->/[`Hf`HfG`HHf;``\delta f`H\phi] \morphism(2000,500)<1000,0>[fG`Hf;\phi] \ptriangle(2000,-500)|blr|[fGG`HfG`fG;\phi G`f\epsilon G`\epsilon fG] \morphism(2000,-500)|b|<1000,0>[fG`Hf;\phi] \morphism(3000,0)|r|<0,-500>[HHf`Hf;\epsilon Hf] \efig$$ in which each of the lower triangles commutes by the first equation of (\ref{comarrow}) already established. Using comonad equations for $G$ and $H$, it is obvious that each composite is $\phi$. \frp Finally, let us note that $\mathbf{D}$ is a subbicategory, neither full nor locally full, of the Grothendieck bicategory $\mathbf{G}$ and write $K{\kern -.25em}:{\kern -.25em}\mathbf{D}{\,\s\to}\,\mathbf{G}$ for the inclusion. We also write $\iota{\kern -.25em}:{\kern -.25em}\mathbf{M}{\,\s\to}\,\mathbf{G}$ for the composite pseudofunctor $KJ$. Summarizing, we have introduced the following commutative diagram of bicategories and pseudofunctors $$\bfig \square(0,0)[\mathbf{M}`\mathbf{D}`\mathbf{B}`\mathbf{C};J`i`j`I] \morphism(500,500)<500,0>[\mathbf{D}`\mathbf{G};K] \morphism(0,500)|a|/{@{->}@/^2em/}/<1000,0>[\mathbf{M}`\mathbf{G};\iota] \efig ;$$ note also that in our main case of interest $\mathbf{B}=\mathrm{Span}\,{\cal E}$, each of $\mathbf{M}$, $\mathbf{D}$, and $\mathbf{G}$ is biequivalent to a mere category. Ultimately, we are interested in having a right biadjoint, say $\tau$, of $\iota$. 
For such a biadjunction $\iota\dashv\tau$ the counit at an object $R{\kern -.25em}:{\kern -.25em} X{\,\s\to}\, A$ in $\mathbf{G}$ will take the form \begin{equation}\label{tabcounit} \bfig \Ctriangle/<-`->`->/<500,250>[X`\tau R`A;u_R`R`v_R] \morphism(200,250)|m|<200,0>[`;\omega_R] \efig \end{equation} (where, as for a biadjunction $I\dashv E{\kern -.25em}:{\kern -.25em}\mathbf{C}{\,\s\to}\,\mathbf{B}$, a triangle rather than a square can be taken as the boundary of the 2-cell by the normality of $\mathbf{B}$). In fact, we are interested in the case where we have $\iota\dashv\tau$ and moreover the counit components $\omega_R{\kern -.25em}:{\kern -.25em} v_R{\,\s\to}\, Ru_R$ enjoy the property that their mates $v_Ru^*_R{\,\s\to}\, R$ with respect to the adjunction $u_R\dashv u^*_R$ are invertible. In this way we represent a general arrow of $\mathbf{B}$ in terms of a span of maps. Since biadjunctions compose we will consider adjunctions $J\dashv F$ and $K\dashv G$ and we begin with the second of these. 
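For orientation, it may help to keep in mind the motivating case (an illustrative sketch; the general construction occupies the rest of this section, and the computation below is made precise in Proposition \ref{tabonadic}): in $\mathbf{B}=\mathrm{Span}\,{\cal E}$ a general arrow $R{\kern -.25em}:{\kern -.25em} X{\,\s\to}\, A$ is a span $(x{\kern -.25em}:{\kern -.25em} S{\,\s\to}\, X,\ a{\kern -.25em}:{\kern -.25em} S{\,\s\to}\, A)$, and its tabulation is the vertex $S$ itself, with the two legs supplying the counit data:

```latex
% Illustrative computation in Span E: the map u_R = x is the span (1_S, x),
% its right adjoint u_R^* is the reversed span (x, 1_S), and the composite
% v_R u_R^* is computed by the trivial pullback of 1_S against 1_S over S:
$$u_R = x,\qquad v_R = a,\qquad
  v_R\,u_R^* \;=\; a\,x^* \;\cong\; (x,S,a) \;=\; R,$$
% so the mate of the counit 2-cell is invertible, as a tabulation requires.
```

This is exactly the sense in which tabulation represents a general arrow of $\mathbf{B}$ by a span of maps.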
\thm\label{G(R)} For a cartesian bicategory $\mathbf{B}$ in which every object is discrete, there is an adjunction $K\dashv G{\kern -.25em}:{\kern -.25em}\mathbf{G}{\,\s\to}\,\mathbf{D}$ where, for $R{\kern -.25em}:{\kern -.25em} X{\,\s\to}\, A$ in $\mathbf{G}$, the comonad $G(R)$ and its witnessing copoint $\epsilon{\kern -.25em}:{\kern -.25em} G(R){\,\s\to}\, 1_{XA}$ are given by the left diagram below and the counit $\mu{\kern -.25em}:{\kern -.25em} KG(R){\,\s\to}\, R$ is given by the right diagram below, all in notation suppressing $\otimes$: $$\bfig \ptriangle(-1500,1000)/->`->`<-/[XA`XA`XXA;1_{XA}`dA`p_{1,3}] \morphism(-1400,1350)|a|<150,0>[`;\simeq] \morphism(-1500,1000)|l|<0,-500>[XXA`XAA;XRA] \morphism(-1000,1500)|r|<0,-1500>[XA`XA;1_{XA}] \morphism(-1375,750)|m|<250,0>[`;\tilde p_{1,3}] \btriangle(-1500,0)[XAA`XA`XA;Xd^*`p_{1,3}`1_{XA}] \morphism(-1400,150)|a|<150,0>[`;] \morphism(-1500,1500)|l|/{@{->}@/_3.5em/}/<0,-1500>[XA`XA;G(R)] \ptriangle(0,1000)/->`->`<-/[XA`X`XXA;p`dA`p_2] \morphism(0,1000)|l|<0,-500>[XXA`XAA;XRA] \morphism(500,1500)|r|<0,-1500>[X`A;R] \btriangle(0,0)[XAA`XA`A;Xd^*`p_2`r] \morphism(100,1350)|a|<150,0>[`;\simeq] \morphism(125,750)|m|<250,0>[`;\tilde p_2] \morphism(100,150)|b|<150,0>[`;] \morphism(0,1500)|l|/{@{->}@/_3.5em/}/<0,-1500>[XA`XA;G(R)] \efig$$ Moreover, the mate $rG(R)p^*{\,\s\to}\, R$ of the counit $\mu$ is invertible. In the left diagram, the $p_{1,3}$ collectively denote projection from the three-fold product in $\mathbf{G}$ to the product of the first and third factors. In the right diagram, the $p_2$ collectively denote projection from the three-fold product in $\mathbf{G}$ to the second factor. The upper triangles of the two diagrams are the canonical isomorphisms. The lower left triangle is the mate of the canonical isomorphism $1{\scalefactor{0.5} \to^{\simeq}}p_{1,3}(Xd)$. The lower right triangle is the mate of the canonical isomorphism $r{\scalefactor{0.5} \to^{\simeq}} p_2(Xd)$. 
\eth \prf Given a comonad $H{\kern -.25em}:{\kern -.25em} T{\,\s\to}\, T$ and an arrow $$\bfig \square(0,0)[T`X`T`A;x`H`R`a] \morphism(125,250)|a|<250,0>[`;\psi] \efig$$ in $\mathbf{G}$, we verify the adjunction claim by showing that there is a unique arrow $$\bfig \square(0,0)[T`XA`T`XA;f`H`G(R)`f] \morphism(125,250)|a|<250,0>[`;\phi] \efig$$ in $\mathbf{D}$, whose composite with the putative counit $\mu$ is $(x,\psi,a)$. It is immediately clear that the unique solution for $f$ is $(x,a)$ and to give $\phi{\kern -.25em}:{\kern -.25em}(x,a)H{\,\s\to}\, Xd^*(XRA)dA(x,a)$ is to give the mate $Xd(x,a)H{\,\s\to}\, (XRA)dA(x,a)$ which is $(x,a,a)H{\,\s\to}\, (XRA)(x,x,a)$ and can be seen as a $\mathbf{G}$ arrow: $$\bfig \square(0,0)[T`XXA`T`XAA;(x,x,a)`H`XRA`(x,a,a)] \morphism(125,250)|a|<250,0>[`;(\alpha,\beta,\gamma)] \efig$$ where we exploit the description of products in $\mathbf{G}$. From this description it is clear, since $\tilde p_2(\alpha,\beta,\gamma)=\beta$ as a composite in $\mathbf{G}$, that the unique solution for $\beta$ is $\psi$. We have seen in Theorem \ref{simcom} that the conditions (\ref{comarrow}) hold automatically in $\mathbf{D}$ under the assumptions of the Theorem. From the first of these we have: $$\bfig \square(0,0)|almb|[T`XXA`T`XAA;(x,x,a)`H`XRA`(x,a,a)] \morphism(125,250)|a|<250,0>[`;(\alpha,\beta,\gamma)] \square(500,0)|amrb|[XXA`XA`XAA`XA;p_{1,3}`XRA`1_{XA}`p_{1,3}] \morphism(625,250)|a|<250,0>[`;p_{1,3}] \place(1375,250)[=] \square(2000,0)|arrb|[T`XA`T`XA;(x,a)`1_T`1_{XA}`(x,a)] \morphism(2125,250)|a|<250,0>[`;\kappa_{(x,a)}] \morphism(2000,500)|l|/{@{->}@/_3em/}/<0,-500>[T`T;H] \morphism(1750,250)|a|<200,0>[`;\epsilon_H] \efig$$ So, with a mild abuse of notation, we have $(\alpha,\gamma)=(1_x\epsilon_H, 1_a\epsilon_H)$, uniquely, and thus the unique solutions for $\alpha$ and $\gamma$ are $1_x\epsilon_H$ and $1_a\epsilon_H$ respectively. 
This shows that $\phi$ is necessarily the mate under the adjunctions considered of $(1_x\epsilon_H,\psi,1_a\epsilon_H)$. Since $\mathbf{D}$ and $\mathbf{G}$ are essentially locally discrete this suffices to complete the claim that $K\dashv G$. It only remains to show that the mate $rG(R)p^*{\,\s\to}\, R$ of the counit $\mu$ is invertible. In the three middle squares of the diagram $$\bfig \morphism(0,500)|l|/{@{->}@/_3.5em/}/<0,-1500>[XA`XA;G(R)] \square(0,0)|alrm|/<-`->`->`<-/[XA`X`XXA`XX;p^*`dA`d`p^*] \morphism(125,250)|a|<250,0>[`;\tilde p^*_{d,1_A}] \square(0,-500)|mlmm|/<-`->`->`<-/[XXA`XX`XAA`XA;p^*`XRA`XR`p^*] \morphism(125,-250)|a|<250,0>[`;\tilde p^*_{XR,1_A}] \square(0,-1000)|mlrb|/<-`->`->`->/[XAA`XA`XA`A;p^*`Xd^*`r`r] \morphism(175,-750)|a|<150,0>[`;\simeq] \square(500,-500)|amrb|[XX`X`XA`A;r`XR`R`r] \morphism(625,-250)|a|<250,0>[`;r_{1_X,R}] \morphism(500,500)|r|<500,-500>[X`X;1_X] \morphism(1000,-500)|r|<-500,-500>[A`A;1_A] \efig$$ the top two are invertible 2-cells by Proposition 4.18 of \cite{ckww} while the lower one is the obvious invertible 2-cell constructed from $Xd^*p^*\iso1_{XA}$. The right square is an invertible 2-cell by Proposition 4.17 of \cite{ckww}. This shows that the mate $rG(R)p^*{\,\s\to}\, R$ of $\mu$ is invertible. \frp \rmk \label{unit} It now follows that the unit of the adjunction $K\dashv G$ is given (in notation suppressing $\otimes$) by: $$\bfig \qtriangle(1500,1000)[T`TT`TTT;d`d_3`dT] \dtriangle(1500,0)/<-`->`->/[TTT`T`TT;d_3`Td^*`d] \morphism(1500,1500)|l|<0,-1500>[T`T;H] \morphism(2000,1000)|l|<0,-500>[TTT`TTT;HHH] \morphism(2000,1000)|r|/{@{->}@/^3em/}/<0,-500>[TTT`TTT;THT] \morphism(1750,1350)|a|<150,0>[`;\simeq] \morphism(1550,750)|m|<175,0>[`;\tilde d_3] \morphism(2050,750)|m|<225,0>[`;\epsilon H\epsilon] \morphism(1750,150)|a|<150,0>[`;\simeq] \efig$$ where the $d_3$ collectively denote 3-fold diagonalization $(1,1,1)$ in $\mathbf{G}$. 
The top triangle is a canonical isomorphism while the lower triangle is the mate of the canonical isomorphism $(T\otimes d)d{\scalefactor{0.5} \to^{\simeq}}d_3$ and is itself invertible, by separability of $T$. \eth Before turning to the question of an adjunction $J\dashv F$, we note: \lem\label{maplikeanm} In a cartesian bicategory in which Maps are Comonadic, if $gF\cong h$ with $g$ and $h$ maps, then $F$ is also a map. \eth \prf By Theorem 3.11 of \cite{ww} it suffices to show that $F$ is a comonoid homomorphism, which is to show that the canonical 2-cells $\tilde t_F{\kern -.25em}:{\kern -.25em} tF{\,\s\to}\, t$ and $\tilde d_F{\kern -.25em}:{\kern -.25em} dF{\,\s\to}\,(F\otimes F)d$ are invertible. For the first we have: $$tF\cong tgF\cong th\cong t$$ Simple diagrams show that we do get the right isomorphism in this case and also for the next: $$(g\otimes g)(dF)\cong dgF\cong dh\cong(h\otimes h)d\cong (g\otimes g)(F\otimes F)d$$ which gives $dF\cong(F\otimes F)d$ since the map $g\otimes g$ reflects isomorphisms. \frp \thm\label{emasbm} If $\mathbf{B}$ is a cartesian bicategory which has Eilenberg-Moore objects for Comonads and for which Maps are Comonadic then $\mathbf{B}$ has Eilenberg-Moore objects for Comonads as Seen by $\mathbf{M}$, which is to say that $J{\kern -.25em}:{\kern -.25em}\mathbf{M}{\,\s\to}\,\mathbf{D}$ has a right adjoint. Moreover, the counit for the adjunction, say $JF{\,\s\to}\, 1_{\mathbf{D}}$, necessarily having components of the form $\gamma{\kern -.25em}:{\kern -.25em} g{\,\s\to}\, Gg$ with $g$ a map, has $gg^*{\,\s\to}\, G$ invertible. \eth \prf It suffices to show that the adjunction $I\dashv E{\kern -.25em}:{\kern -.25em}\mathbf{C}{\,\s\to}\,\mathbf{B}$ restricts to $J\dashv F{\kern -.25em}:{\kern -.25em}\mathbf{D}{\,\s\to}\,\mathbf{M}$. 
For this it suffices to show that, given $(h,\theta){\kern -.25em}:{\kern -.25em} JT{\,\s\to}\,(A,G)$, the $F{\kern -.25em}:{\kern -.25em} T{\,\s\to}\, A_G$ with $gF\cong h$ which can be found using $I\dashv E$ has $F$ a map. This follows from Lemma \ref{maplikeanm}. \frp \thm\label{tabulation} A cartesian bicategory which has Eilenberg-Moore objects for Comonads and for which Maps are Comonadic has tabulation in the sense that the inclusion $\iota{\kern -.25em}:{\kern -.25em}\mathbf{M}{\,\s\to}\,\mathbf{G}$ has a right adjoint $\tau$ and the counit components $\omega_R{\kern -.25em}:{\kern -.25em} v_R{\,\s\to}\, Ru_R$ as in (\ref{tabcounit}) have the property that the mates $v_Ru^*_R{\,\s\to}\, R$, with respect to the adjunctions $u_R\dashv u^*_R$, are invertible. \eth \prf Using Theorems \ref{G(R)} and \ref{emasbm} we can construct the adjunction $\iota\dashv\tau$ by composing $J\dashv F$ with $K\dashv G$. Moreover, the counit for $\iota\dashv\tau$ is the pasting composite: $$\bfig \Ctriangle(0,0)|lmb|/<-`->`->/<500,250>[X\otimes A`T`X\otimes A;(u,v)`G(R)`(u,v)] \morphism(0,250)|l|/{@{->}@/^4.0em/}/<1000,250>[T`X;u] \morphism(0,250)|l|/{@{->}@/_4.0em/}/<1000,-250>[T`A;v] \morphism(300,160)|m|<0,180>[`;\gamma] \square(500,0)|amrb|<500,500>[X\otimes A`X`X\otimes A`A;p`G(R)`R`r] \morphism(650,250)|m|<200,0>[`;\mu] \efig$$ where the square is the counit for $K\dashv G$; and the triangle, the counit for $J\dashv F$, is an Eilenberg-Moore coalgebra for the comonad $G(R)$. The arrow component of the Eilenberg-Moore coalgebra is necessarily of the form $(u,v)$, where $u$ and $v$ are maps, and it also follows that we have $(u,v)(u,v)^*\cong G(R)$. Thus we have $$vu^*\cong r(u,v)(p(u,v))^*\cong r(u,v)(u,v)^*p^*\cong rG(R)p^*\cong R$$ where the first two isomorphisms are trivial, the third arises from the invertibility of the mate of $\gamma$ as an Eilenberg-Moore structure, and the fourth is invertibility of $\mu$, as in Theorem \ref{G(R)}. 
\frp \thm\label{mapbhaspb} For a cartesian bicategory $\mathbf{B}$ with Eilenberg-Moore objects for Comonads and for which Maps are Comonadic, $\mathrm{Map}\mathbf{B}$ has pullbacks satisfying the Beck condition (meaning that for a pullback square \begin{equation}\label{beckforpb} \bfig \square(0,0)[P`M`N`A;r`p`b`a] \morphism(200,250)<100,0>[`;\simeq] \efig \end{equation} the mate $pr^*{\,\s\to}\, a^*b$ of $ap\cong br$ in $\mathbf{B}$, with respect to the adjunctions $r\dashv r^*$ and $a\dashv a^*$, is invertible). \eth \prf Given the cospan $a{\kern -.25em}:{\kern -.25em} N{\,\s\to}\, A\,{\s\toleft}\, M{\kern -.25em}:{\kern -.25em} b$ in $\mathrm{Map}\mathbf{B}$, let $P$ together with $(r,\sigma,p)$ be a tabulation for $a^*b{\kern -.25em}:{\kern -.25em} M{\,\s\to}\, N$. Then $pr^*{\,\s\to}\, a^*b$, the mate of $\sigma{\kern -.25em}:{\kern -.25em} p{\,\s\to}\, a^*br$ with respect to $r\dashv r^*$, is invertible by Theorem \ref{tabulation}. We have also $ap{\,\s\to}\, br$, the mate of $\sigma{\kern -.25em}:{\kern -.25em} p{\,\s\to}\, a^*br$ with respect to $a\dashv a^*$. Since $A$ is discrete, $ap{\,\s\to}\, br$ is also invertible and is the only 2-cell between the composite maps in question. If we have also $u{\kern -.25em}:{\kern -.25em} N\,{\s\toleft}\, T{\,\s\to}\, M{\kern -.25em}:{\kern -.25em} v$, for maps $u$ and $v$ with $au\cong bv$, then the mate $u{\,\s\to}\, a^*bv$ ensures that the span $u{\kern -.25em}:{\kern -.25em} N\,{\s\toleft}\, T{\,\s\to}\, M{\kern -.25em}:{\kern -.25em} v$ factors through $P$ by an essentially unique map $w{\kern -.25em}:{\kern -.25em} T{\,\s\to}\, P$ with $pw\cong u$ and $rw\cong v$. 
\frp \prp\label{tabonadic} In a cartesian bicategory with Eilenberg-Moore objects for Comonads and for which Maps are Comonadic, every span of maps $x{\kern -.25em}:{\kern -.25em} X\,{\s\toleft}\, S{\,\s\to}\, A{\kern -.25em}:{\kern -.25em} a$ gives rise to the following tabulation diagram: $$\bfig \Ctriangle(0,0)|lmb|/<-`->`->/<500,250>[X`S`A;x`ax^*`a] \morphism(300,160)|m|<0,180>[`;a\eta_x] \efig$$ \eth \prf A general tabulation counit $\omega_R{\kern -.25em}:{\kern -.25em} v_R{\,\s\to}\, Ru_R$ is given in terms of the Eilenberg-Moore coalgebra for the comonad $(u,v)(u,v)^*$ and necessarily $(u,v)(u,v)^*\cong G(R)$. It follows that for $R=ax^*$, it suffices to show that $G(ax^*)\cong (x,a)(x,a)^*$. Consider the diagram (with $\otimes$ suppressed): $$\bfig \Atriangle(0,0)|bba|/->`->`/[XSA`XXA`XAA;XxA`XaA`] \Vtriangle(0,500)|mmm|/`->`->/[SA`XS`XSA;`(x,S)A`X(S,a)] \Atriangle(0,1000)|lrm|/->`->`/[S`SA`XS;(S,a)`(x,S)`] \Ctriangle(-500,0)|lml|/->``->/[SA`XA`XXA;xA``dA] \Dtriangle(1000,0)|mrr|/`->`->/[XS`XA`XAA;`Xa`Xd] \morphism(500,1500)|l|/{@{->}@/_3em/}/<-1000,-1000>[S`XA;(x,a)] \morphism(500,1500)|r|/{@{->}@/^3em/}/<1000,-1000>[S`XA;(x,a)] \efig$$ The comonad $G(ax^*)$ can be read, from left to right, along the `W' shape of the lower edge as $G(ax^*)\cong Xd^*.XaA.Xx^*A.dA$. But each of the squares in the diagram is a (product-absolute) pullback so that with Theorem \ref{mapbhaspb} at hand we can continue: $$Xd^*.XaA.Xx^*A.dA\cong Xa.X(S,a)^*.(x,S)A.x^*A\cong Xa.(x,S).(S,a)^*. x^*A\cong (x,a)(x,a)^*$$ as required. 
\frp \section{Characterization of Bicategories of Spans}\label{charspan} \subsection{}\label{Cfun} If $\mathbf{B}$ is a cartesian bicategory with $\mathrm{Map}\mathbf{B}$ essentially locally discrete then each slice $\mathrm{Map}\mathbf{B}/(X\otimes A)$ is also essentially locally discrete and we can write $\mathrm{Span}\,\mathrm{Map}\mathbf{B}(X,A)$ for the categories obtained by taking the quotients of the equivalence relations comprising the hom categories of the $\mathrm{Map}\mathbf{B}/(X\otimes A)$. Then we can construct functors $C_{X,A}{\kern -.25em}:{\kern -.25em}\mathrm{Span}\,\mathrm{Map}\mathbf{B}(X,A){\,\s\to}\,\mathbf{B}(X,A)$, where for an arrow in $\mathrm{Span}\,\mathrm{Map}\mathbf{B}(X,A)$ as shown, $$\bfig \Ctriangle(0,0)|lml|/->`->`<-/[M`A`N;a`h`b] \Dtriangle(500,0)|mrr|/->`->`<-/[M`X`N;h`x`y] \efig$$ we define $C(y,N,b)=by^*$ and $C(h){\kern -.25em}:{\kern -.25em} ax^*=(bh)(yh)^*\cong bhh^*y^*\to^{b\epsilon_hy^*} by^*$. If $\mathrm{Map}\mathbf{B}$ is known to have pullbacks then the $\mathrm{Span}\,\mathrm{Map}\mathbf{B}(X,A)$ become the hom-categories for a bicategory $\mathrm{Span}\,\mathrm{Map}\mathbf{B}$ and we can consider whether the $C_{X,A}$ provide the effects on homs for an identity-on-objects pseudofunctor $C{\kern -.25em}:{\kern -.25em}\mathrm{Span}\,\mathrm{Map}\mathbf{B}{\,\s\to}\,\mathbf{B}$. Consider \begin{equation}\label{beck} \bfig \Atriangle(0,0)/->`->`/[N`Y`A;y`b`] \Vtriangle(500,0)/`->`->/[N`M`A;``] \Atriangle(1000,0)/->`->`/[M`A`X;a`x`] \Atriangle(500,500)/->`->`/[P`N`M;p`r`] \efig \end{equation} where the square is a pullback. In somewhat abbreviated notation, what is needed further are coherent, invertible 2-cells $\widetilde C{\kern -.25em}:{\kern -.25em} CN.CM{\,\s\to}\, C(NM)=CP$, for each composable pair of spans $M$, $N$, and coherent, invertible 2-cells $C^\circ{\kern -.25em}:{\kern -.25em} 1_A{\,\s\to}\, C(1_A)$, for each object $A$. 
Since the identity span on $A$ is $(1_A,A,1_A)$, and $C(1_A)=1_A.1^*_A\iso1_A.1_A\iso1_A$ we take the inverse of this composite for $C^\circ$. To give the $\widetilde C$ though is to give 2-cells $yb^*ax^*{\,\s\to}\, ypr^*x^*$ and since spans of the form $(1_N,N,b)$ and $(a,M,1_M)$ arise as special cases, it is easy to verify that to give the $\widetilde C$ it is necessary and sufficient to give coherent, invertible 2-cells $b^*a{\,\s\to}\, pr^*$ for each pullback square in $\mathrm{Map}\mathbf{B}$. The inverse of such a 2-cell $pr^*{\,\s\to}\, b^*a$ is the mate of a 2-cell $bp{\,\s\to}\, ar$. But by discreteness a 2-cell $bp{\,\s\to}\, ar$ must be essentially an identity. Thus, definability of $\widetilde C$ is equivalent to the invertibility in $\mathbf{B}$ of the mate $pr^*{\,\s\to}\, b^*a$ of the identity $bp{\,\s\to}\, ar$, for each pullback square as displayed in (\ref{beck}). In short, if $\mathrm{Map}\mathbf{B}$ has pullbacks and these satisfy the Beck condition as in Theorem \ref{mapbhaspb} then we have a canonical pseudofunctor $C{\kern -.25em}:{\kern -.25em}\mathrm{Span}\,\mathrm{Map}\mathbf{B}{\,\s\to}\,\mathbf{B}$. \thm\label{spanmain} For a bicategory $\mathbf{B}$ the following are equivalent: \begin{enumerate}[$i)$] \item There is a biequivalence $\mathbf{B}\simeq\mathrm{Span}\,{\cal E}$, for ${\cal E}$ a category with finite limits; \item The bicategory $\mathbf{B}$ is cartesian, each comonad has an Eilenberg-Moore object, and every map is comonadic; \item The bicategory $\mathrm{Map}\mathbf{B}$ is an essentially locally discrete bicategory with finite limits, satisfying in $\mathbf{B}$ the Beck condition for pullbacks of maps, and the canonical $$C{\kern -.25em}:{\kern -.25em}\mathrm{Span}\,\mathrm{Map}\mathbf{B}{\,\s\to}\,\mathbf{B}$$ is a biequivalence of bicategories. \end{enumerate} \eth \prf That $i)$ implies $ii)$ follows from our discussion in the Introduction. That $iii)$ implies $i)$ is trivial, so we show that $ii)$ implies $iii)$. 
We have already observed in Theorem \ref{odibld} that, for $\mathbf{B}$ cartesian with every object discrete, $\mathrm{Map}\mathbf{B}$ is essentially locally discrete and we have seen by Propositions \ref{mcifro} and \ref{mcisep} that, in a cartesian bicategory in which Maps are Comonadic, every object is discrete. In Theorem~\ref{mapbhaspb} we have seen that, for $\mathbf{B}$ satisfying the conditions of $ii)$, $\mathrm{Map}\mathbf{B}$ has pullbacks, and hence all finite limits, and that in $\mathbf{B}$ the Beck condition holds for pullbacks of maps. Therefore we have the canonical pseudofunctor $C{\kern -.25em}:{\kern -.25em}\mathrm{Span}\,\mathrm{Map}\mathbf{B}{\,\s\to}\,\mathbf{B}$ developed in \ref{Cfun}. To complete the proof it suffices to show that the $C_{X,A}{\kern -.25em}:{\kern -.25em}\mathrm{Span}\,\mathrm{Map}\mathbf{B}(X,A){\,\s\to}\,\mathbf{B}(X,A)$ are equivalences of categories. Define functors $F_{X,A}{\kern -.25em}:{\kern -.25em}\mathbf{B}(X,A){\,\s\to}\,\mathrm{Span}\,\mathrm{Map}\mathbf{B}(X,A)$ by $F(R)=F_{X,A}(R)=(u,\tau R,v)$ where $$\bfig \Ctriangle/<-`->`->/<500,250>[X`\tau R`A;u`R`v] \morphism(200,250)|m|<200,0>[`;\omega] \efig$$ is the $R$-component of the counit for $\iota\dashv\tau{\kern -.25em}:{\kern -.25em}\mathbf{G}{\,\s\to}\,\mathrm{Map}\mathbf{B}$. 
For a 2-cell $\alpha{\kern -.25em}:{\kern -.25em} R{\,\s\to}\, R'$ we define $F(\alpha)$ to be the essentially unique map satisfying $$\bfig \morphism(-500,500)|m|[\tau R`\tau R';F(\alpha)] \Ctriangle(0,0)|rrr|/<-`->`->/[X`\tau R'`A;u'`R'`v'] \morphism(175,400)|a|<150,0>[`;\omega'] \morphism(-500,500)|l|<1000,500>[\tau R`X;u] \morphism(-500,500)|l|<1000,-500>[\tau R`A;v] \place(750,500)[=] \Ctriangle(1000,0)/<-``->/[X`\tau R`A;u``v] \morphism(1500,1000)|l|/{@{->}@/_1em/}/<0,-1000>[X`A;R] \morphism(1500,1000)|r|/{@{->}@/^1.5em/}/<0,-1000>[X`A;R'] \morphism(1150,400)|a|<150,0>[`;\omega] \morphism(1450,400)|a|<150,0>[`;\alpha] \efig$$ (We remark that essential uniqueness here means that $F(\alpha)$ is determined to within a unique invertible 2-cell.) Since $\omega{\kern -.25em}:{\kern -.25em} v{\,\s\to}\, Ru$ has mate $vu^*{\,\s\to}\, R$ invertible, because $(v,\tau R,u)$ is a tabulation of $R$, it follows that we have a natural isomorphism $CFR{\,\s\to}\, R$. On the other hand, starting with a span $(x,S,a)$ from $X$ to $A$ we have as a consequence of Proposition \ref{tabonadic} that $(x,S,a)$ is part of a tabulation of $ax^*{\kern -.25em}:{\kern -.25em} X{\,\s\to}\, A$. It follows that we have a natural isomorphism $(x,S,a){\,\s\to}\, FC(x,S,a)$, which completes the demonstration that $C_{X,A}$ and $F_{X,A}$ are inverse equivalences. \frp \section{Direct sums in bicategories of spans} In the previous section we gave a characterization of those (cartesian) bicategories of the form $\mathrm{Span}\,{\cal E}$ for a category ${\cal E}$ with finite limits. In this final section we give a refinement, showing that $\mathrm{Span}\,{\cal E}$ has direct sums if and only if the original category ${\cal E}$ is lextensive \cite{ext}. Direct sums are of course understood in the bicategorical sense. A {\em zero object} in a bicategory is an object which is both initial and terminal. 
In a bicategory with finite products and finite coproducts in which the initial object is also terminal there is a canonical induced arrow $X+Y\to X\times Y$, and we say that the bicategory has {\em direct sums} when this map is an equivalence. \begin{remark} Just as in the case of ordinary categories, the existence of direct sums gives rise to a calculus of matrices. A morphism $X_1+\ldots+X_m\to Y_1+\ldots+Y_n$ can be represented by an $m\times n$ matrix of morphisms between the summands, and composition can be represented by matrix multiplication. \end{remark} \thm Let ${\cal E}$ be a category with finite limits, and $\mathbf{B}=\mathrm{Span}\,{\cal E}$. Then the following are equivalent: \begin{enumerate}[$i)$] \item $\mathbf{B}$ has direct sums; \item $\mathbf{B}$ has finite coproducts; \item $\mathbf{B}$ has finite products; \item ${\cal E}$ is lextensive. \end{enumerate} \eth \prf $[i)\Longrightarrow$ $ii)]$ is trivial. $[ii)\Longleftrightarrow$ $iii)]$ follows from the fact that $\mathbf{B}{^\mathrm{op}}$ is biequivalent to $\mathbf{B}$. $[ii)\Longrightarrow$ $iv)]$ Suppose that $\mathbf{B}$ has finite coproducts, and write $0$ for the initial object and $+$ for the coproducts. For every object $X$ there is a unique span $0\,{\s\toleft}\, D{\,\s\to}\, X$. By uniqueness, any map into $D$ must be invertible, and any two such with the same domain must be equal. Thus when we compose the span with its opposite, as in $0\,{\s\toleft}\, D{\,\s\to}\, X\,{\s\toleft}\, D{\,\s\to}\, 0$, the resulting span is just $0\,{\s\toleft}\, D{\,\s\to}\, 0$. Now by the universal property of $0$ once again, this must just be $0\,{\s\toleft}\, 0{\,\s\to}\, 0$, and so $D\cong 0$, and our unique span $0\to X$ is a map. Clearly coproducts of maps are maps, and so the coproduct injections $X+0\to X+Y$ and $0+Y\to X+Y$ are also maps. 
Thus the coproducts in $\mathbf{B}$ will restrict to ${\cal E}$ provided that the codiagonal $u{\kern -.25em}:{\kern -.25em} X+X\,{\s\toleft}\, E {\,\s\to}\, X{\kern -.25em}:{\kern -.25em} v$ is a map for all objects $X$. Now the fact that the codiagonal composed with the first injection $i:X\to X+X$ is the identity tells us that we have a diagram as on the left below $$\xymatrix @!R=1pc @!C=1pc { && X \ar[dr]_{i'} \ar@{=}[dl] \ar@/^2pc/[ddrr]^{1} && & && X \ar[dr]_{i'} \ar@{=}[dl] \ar@/^2pc/[ddrr]^{i} \\ & X \ar[dr]^{i} \ar@{=}[dl] && E \ar[dl]_{u} \ar[dr]_{v} & & & X \ar[dr]^{i} \ar@{=}[dl] && E \ar[dl]_{u} \ar[dr]_{u} \\ X && X+X && X & X && X+X && X+X }$$ in which the square is a pullback; but then the diagram on the right shows that the composite of $u{\kern -.25em}:{\kern -.25em} X+X\,{\s\toleft}\, E{\,\s\to}\, X+X{\kern -.25em}:{\kern -.25em} u$ with the injection $i:X\to X+X$ is just $i$. Similarly its composite with the other injection $j:X\to X+X$ is $j$, and so $u{\kern -.25em}:{\kern -.25em} X+X\,{\s\toleft}\, E{\,\s\to}\, X+X{\kern -.25em}:{\kern -.25em} u$ is the identity. This proves that the codiagonal is indeed a map, and so that ${\cal E}$ has finite coproducts; we have already assumed that it has finite limits. To see that ${\cal E}$ is lextensive observe that we have equivalences $${\cal E}/(X+Y)\simeq \mathbf{B}(X+Y,1) \simeq \mathbf{B}(X,1)\times\mathbf{B}(Y,1)\simeq {\cal E}/X\times {\cal E}/Y.$$ $[iv)\Longrightarrow$ $i)]$ Suppose that ${\cal E}$ is lextensive. Then in particular, it is distributive, so that $(X+Y)\times Z\cong X\times Z+Y\times Z$, and we have \begin{align*} \mathbf{B}(X+Y,Z) &\simeq {\cal E}/\bigl((X+Y)\times Z\bigr) \simeq {\cal E}/(X\times Z+Y\times Z) \\ &\simeq {\cal E}/(X\times Z)\times {\cal E}/(Y\times Z) \simeq \mathbf{B}(X,Z)\times\mathbf{B}(Y,Z) \end{align*} which shows that $X+Y$ is the coproduct in $\mathbf{B}$; but a similar argument shows that it is also the product. 
\frp \rmk The implication $iv)\Rightarrow i)$ was proved in \cite[Section~3]{SP07}. \eth \rmk The equivalence $ii)\Leftrightarrow iv)$ can be seen as a special case of a more general result \cite{HS} characterizing colimits in ${\cal E}$ which are also (bicategorical) colimits in $\mathrm{Span}\,{\cal E}$. \rmk There is a corresponding result involving partial maps in lextensive categories, although the situation there is more complicated as one does not have direct sums but only a weakened relationship between products and coproducts, and a similarly weakened calculus of matrices. See \cite[Section~2]{restiii}. \eth There is also a nullary version of the theorem. We simply recall that an initial object in a category ${\cal E}$ is said to be {\em strict}, if any morphism into it is invertible, and then leave the proof to the reader. Once again the equivalence $ii)\Leftrightarrow iv)$ is a special case of \cite{HS}. \thm Let ${\cal E}$ be a category with finite limits, and $\mathbf{B}=\mathrm{Span}\,{\cal E}$. Then the following are equivalent: \begin{enumerate}[$i)$] \item $\mathbf{B}$ has a zero object; \item $\mathbf{B}$ has an initial object; \item $\mathbf{B}$ has a terminal object; \item ${\cal E}$ has a strict initial object. \end{enumerate} \eth \references \bibitem[CKWW]{ckww} A. Carboni, G.M. Kelly, R.F.C. Walters, and R.J. Wood. Cartesian bicategories II, {\em Theory Appl. Categ.\/} 19 (2008), 93--124. \bibitem[CLW]{ext} A. Carboni, Stephen Lack, and R.F.C. Walters. Introduction to extensive and distributive categories. {\em J. Pure Appl. Algebra\/} 84 (1993), 145--158. \bibitem[C\&W]{caw} A. Carboni and R.F.C. Walters. Cartesian bicategories. I. {\em J. Pure Appl. Algebra\/} 49 (1987), 11--32. \bibitem[C\&L]{restiii} J.R.B. Cockett and Stephen Lack. Restriction categories III: colimits, partial limits, and extensivity, {\em Math. Struct. in Comp. Science\/} 17 (2007), 775--817. \bibitem[LSW]{lsw} I. Franco Lopez, R. Street, and R.J. 
Wood, Duals Invert, {\em Applied Categorical Structures\/}, to appear. \bibitem[H\&S]{HS} T. Heindel and P. Soboci\'nski, Van Kampen colimits as bicolimits in Span, {\em Lecture Notes in Computer Science,\/} (CALCO 2009), 5728 (2009), 335--349. \bibitem[P\&S]{SP07} Elango Panchadcharam and Ross Street, Mackey functors on compact closed categories, {\em J. Homotopy and Related Structures\/} 2 (2007), 261--293. \bibitem[ST]{ftm} R. Street. The formal theory of monads, {\em J. Pure Appl. Algebra\/} 2 (1972), 149--168. \bibitem[W\&W]{ww} R.F.C. Walters and R.J. Wood. Frobenius objects in cartesian bicategories, {\em Theory Appl. Categ.\/} 20 (2008), 25--47. \endreferences \end{document}
\section{\label{Sec:Intro}Introduction} Liquid crystalline gels\cite{2003LCEWarner.M;Terentjev.E} are soft materials that incorporate the symmetry properties of liquid crystalline\cite{1993PhysLC_Gennes.P;Prost.J} phases into crosslinked polymeric backbones, so that the translational response of the crosslinked polymer network and the orientational response of the liquid crystalline mesogens are coupled. Among all possible liquid crystalline gels, the nematic gel has the simplest symmetry: the crosslinked polymeric backbones are spontaneously elongated along one particular direction (usually the nematic director $\mathbf{\hat{n}}$) by the symmetry-breaking properties of the nematic solvent. The uniaxial prolate ellipsoidal polymer backbones can be described by a step length tensor $l_{ij}=l_\perp\delta_{ij}+(l_\parallel-l_\perp)n_in_j$, and the anisotropy parameter $r=l_\parallel/l_\perp$ is defined as the ratio of the effective step lengths of the polymer coil parallel ($l_{\parallel}$) and perpendicular ($l_{\perp}$) to the nematic director $\mathbf{\hat{n}}$. The value of $r$ depends on the symmetry properties of the nematic solvent: $r$ increases as the system becomes more ordered, i.e. at lower temperature for a thermotropic liquid crystal. When there are no rigid mechanical constraints, this relationship can be verified by observing the macroscopic shape change of the nematic gel between the isotropic phase and the nematic phase\cite{__In-Preparation_Vol:_Pg:_Meng.G;Meyer.R,2002_11_Physical-Review-Letters_Vol:89_Pg:225701_Selinger.J;Jeon.H;etal}. As the temperature decreases, a mono-domain nematic gel sample elongates along the direction of the nematic director as it passes from the isotropic phase into the nematic phase. 
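For a quantitative sense of this spontaneous shape change, one may quote the standard result of the ideal (neo-classical) theory of incompressible nematic elastomers and gels \cite{2003LCEWarner.M;Terentjev.E}; this is an orientation formula from the general theory, not a measurement from the present experiment. If the backbone anisotropy changes from $r_i$ at network formation to $r_f$ at a lower temperature, the stress-free sample elongates along $\mathbf{\hat{n}}$ by

```latex
$$\lambda \;=\; \left(\frac{r_f}{r_i}\right)^{1/3},
  \qquad \lambda>1 \ \text{for}\ r_f>r_i ,$$
```

so even a modest growth of the backbone anisotropy on cooling produces a measurable macroscopic uniaxial strain.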
When the material is confined by rigid boundaries along the direction of elongation during cooling, a buckling transition is expected, and it has been observed experimentally as stripe patterns under polarized light microscopy\cite{2006_04_Physical-Review-Letters_Vol:96_Pg:147802_Verduzco.R;Meng.G;etal}. Here, we report another buckling transition in thin layers of the same nematic liquid crystalline gel in a different confined geometry. The physical origin of this buckling transition can be qualitatively understood from the coupling between the mechanical response of the crosslinked polymeric backbones and the orientational response of the nematic solvent. An instability analysis is applied to explain the experimental observations, such as the temperature dependence of the critical point and of the wavelength of the periodic patterns. The study of this buckling phenomenon provides insight into buckling transitions found in other soft materials, e.g. microtubules\cite{1996_Physical-Review-Letters_Vol.76_No.21_Pg.4078-4081_Elbaum.M;Fygenson.D;Libchaber.A_} and F-actin networks\cite{2007_Nature_Vol.445_No.7125_Pg.295-298_Chaudhuri.O;Parekh.S;Fletcher.D_}. \section{Material and Experimental} The nematic gel material was synthesized in Kornfield's group\cite{2004_11_Macromolecules_Vol:37_Pg:8730--8738_Kempe.M;Kornfield.J;etal,2004_05_Macromolecules_Vol:37_Pg:3569--3575_Kempe.M;Kornfield.J;etal,2004_03_Nature-Materials_Vol:3_Pg:139--140_Palffy-Muhoray.P;Meyer.R,2004_03_Nature-Materials_Vol:3_Pg:177--182_Kempe.M;Scruggs.N;etal}. Briefly, 5-wt\% of an ABA triblock copolymer, which consists of polystyrene end blocks and a side-group liquid crystalline polymer middle block, was dissolved in a nematic solvent (4-\emph{n}-pentyl-4'-cyanobiphenyl, 5CB). The formation of the weak physical network is controlled by the order parameter of the solvent: the polystyrene end blocks are soluble in the isotropic phase and aggregate in the nematic phase. 
The phase transition temperature of this nematic gel ($T_{\mathrm{NI}}\approx 37$\textcelsius) is very close to that of 5CB ($T_{\mathrm{NI}}\approx 35$\textcelsius), and the reversibility of the physical crosslinking mechanism allows repeatable experiments to be conducted easily on the same sample. The nematic gel is loaded into a 25$\mu\mathrm{m}$ thick homeotropic electro-optical cell, in which a thin layer of \emph{n},\emph{n}-dimethyl-\emph{n}-octadecyl-3-aminopropyl-trimethoxysilyl chloride was spin-coated on the surface of the transparent indium-tin-oxide conductors of the glass slides. In the presence of a strong applied electric field ($E_{0}=3\mathrm{V}/\mu\mathrm{m}$, 1kHz) across the cell, the nematic mesogens are easily aligned vertically throughout the cell, with their long axes pointing perpendicular to the boundary surfaces. The temperature of the sample was controlled by a Peltier-based microscope stage during observation. Initially, the sample was heated to 45\textcelsius\ in the isotropic phase, with no crosslinked polymer network present. The sample was then cooled (2\textcelsius/minute) across its $T_{\mathrm{NI}}$ to a final temperature ($T_{f}$), and a thin film of mono-domain nematic gel was obtained during gelation throughout the cell volume, in which $\mathbf{\hat{n}}$ points perpendicular to the boundary surfaces. The sample appeared homogeneously dark under crossed polarized optical microscopy while the aligning field maintained its original magnitude. When the electric field was turned off, birefringent stripe patterns with a wavelength of about 5$\mu\textrm{m}$ appeared throughout the sample, as shown in Fig.~\ref{Fig:BucklingTransition}. 
Both the wavelength of the stripe pattern and the critical field ($E_{\mathrm{C}}$), at which the sample changes from homogeneously dark to birefringent patterned, depend on the sample's final cooling temperature ($T_{f}$); this temperature dependence is recorded and plotted in Fig.~\ref{Fig:TempExperiment}. It can be seen that both $E_{\mathrm{C}}$ and the wavelength stay in a plateau for 10\textcelsius$<T_{f}<$24\textcelsius, while for $T_{f}>24$\textcelsius, $E_{\mathrm{C}}$ decreases as $T_{f}$ increases and the wavelength increases. \begin{figure} \def 0.225\textwidth {0.45\textwidth} \includegraphics[width=0.225\textwidth]{Figure1.pdf} \caption{\label{Fig:BucklingTransition} Optical micrographs of the birefringent stripe pattern of the buckled nematic gel observed through polarized optical microscopy with the applied electric field decreased to zero at different temperatures. The scale bars represent 50$\mu\mathrm{m}$ in all the images. } \end{figure} \begin{figure} \def 0.225\textwidth {0.225\textwidth} \subfigure[]{\label{Fig:WavelengthTemp}\includegraphics[width=0.225\textwidth]{Figure2a.pdf}} \subfigure[]{\label{Fig:FieldTemp}\includegraphics[width=0.225\textwidth]{Figure2b.pdf}} \caption{\label{Fig:TempExperiment} Experimental measurements of the temperature dependence of \subref{Fig:WavelengthTemp} the wavelength of the stripes in the buckled state and \subref{Fig:FieldTemp} the critical field ($E_{\mathrm{C}}$) of the buckling transition in the nematic gel. } \end{figure} \section{Discussions} \begin{figure} \def 0.225\textwidth {0.225\textwidth} \subfigure[ $T_{i}\lessapprox T_{\mathrm{NI}}$]{\label{Fig:BucklingInitial}\includegraphics[width=0.225\textwidth]{Figure3a}} \subfigure[ $T_{f}<T_{i}$]{\label{Fig:BucklingFinal}\includegraphics[width=0.225\textwidth]{Figure3b}} \caption{\label{Fig:BucklingDiagram} Diagrams of the buckling transition in the nematic gel.
The rigid boundaries cause the material to buckle within itself as the polymeric backbones elongate in the more ordered state \subref{Fig:BucklingFinal}, in comparison to the less ordered gelation state \subref{Fig:BucklingInitial}, at which the liquid crystal mesogens are aligned vertically by the applied electric field $\mathbf{E}$. } \end{figure} The driving force of this transition can be attributed to the thermo-mechanical-optical coupling between the nematic liquid crystalline solvent and the anisotropic crosslinked polymeric backbones in the nematic gel within a confined boundary condition. The diagrams in Fig.~\ref{Fig:BucklingDiagram} are used to interpret the physical origin of the transition. Initially, the monodomain nematic gel is formed at a higher initial temperature ($T_{i}\lessapprox T_{\mathrm{NI}}$) within the glass cell while the electric field is applied across the cell, as shown in Fig.~\ref{Fig:BucklingInitial}. The microscopic picture of the nematic gel is sketched as the ellipsoid, and the macroscopic shape of the gel is represented by the square in the diagram. As the temperature is lowered to the final temperature ($T_{f}<T_{i}$), the system becomes more ordered and the anisotropic polymeric coil elongates along the nematic director's direction, as shown in the ellipsoid of Fig.~\ref{Fig:BucklingFinal}. If there were no boundaries to confine the shape of the material, the macroscopic shape of the mono-domain gel sample would elongate vertically, as illustrated by the dashed rectangle. Such an ``artificial muscle'' effect has been observed experimentally in nematic elastomers and gels\cite{2002_11_Physical-Review-Letters_Vol:89_Pg:225701_Selinger.J;Jeon.H;etal}. When the sample is placed in an environment with rigid constraints, e.g. a cell with two glass slides glued together, the material has to buckle within the boundaries in order to gain the elongation along the direction perpendicular to the rigid boundaries.
Due to the coupling between the translational response of the polymeric coil and the rotational response of the liquid crystalline mesogens, the nematic director $\hat{\mathbf{n}}$ rotates correspondingly. An applied electric field of sufficient magnitude can keep the nematic director aligned vertically, and the material buckles spontaneously as the electric field is decreased, due to the instability. To support this physical explanation, detailed analytical calculations are conducted and discussed in the following. \subsection{Modeling and Free Energy Calculation} The coordinate origin is chosen at the middle of the cell gap, with the $z$-axis perpendicular to the cell boundaries ($z=\pm d/2$). Initially, the aligning electric field is applied along the $z$-axis ($\mathbf{E}=E_{0}\hat{\mathbf{z}}$) and the nematic director is aligned vertically ($\mathbf{\hat{n}}^0=\hat{\mathbf{z}}$) as well, as shown in Fig.~\ref{Fig:BucklingInitial}. The superscript $0$ is attached to parameters of the initial gelation state of the material. The polymeric networks are formed during the gelation process at temperature $T_{i}$, with $r^{0}$ as the anisotropy parameter. When the material is cooled to a lower final temperature $T_{f}<T_{i}$, the crosslinked polymeric network becomes more elongated along the nematic director $\hat{\mathbf{n}}$ as the nematic solvent becomes more ordered, and the anisotropy parameter $r$ becomes larger than its initial value, $r>r^0$. When the electric field is decreased to zero ($\mathbf{E}=0$), the elongated polymeric coils within the nematic liquid crystalline gel buckle with a displacement field within the $xz$-plane: $\mathbf{R}=\zeta\cos{kx}\cos{qz}\hat{\mathbf{x}}+\eta\sin{kx}\sin{qz}\hat{\mathbf{z}}$, in which $k$ is the wavevector of the shear wave within the $xy$-plane, which we select along the $x$-axis, and $q$ is the wavevector along the $z$-direction.
The value of $q$ is determined by the sample thickness $d$ as $q=\pi/d$ for the first harmonic mode. $\zeta$ and $\eta$ are related by the incompressibility condition\cite{2003LCEWarner.M;Terentjev.E} as $q\eta=k\zeta$. Such a shear motion of the polymeric network induces the nematic director to rotate within the $xz$-plane with small amplitude $\xi$ about the $z$-axis: $\mathbf{\hat{n}}=\xi\cos{kx}\sin{qz}\hat{\mathbf{x}}+\hat{\mathbf{z}}$. We plug these conditions into the formulas for the free energy density, which include the Frank curvature elastic energy of the nematic solvent\cite{1993PhysLC_Gennes.P;Prost.J} and the nematic rubber elastic energy of the crosslinked polymeric backbones\cite{2003LCEWarner.M;Terentjev.E}: \begin{widetext} \begin{eqnarray}\label{Eq:NematicCurvature} F&=&\frac{1}{2}\left( K_{S}\left(\nabla\cdot\mathbf{\hat{n}}\right)^2+K_{B}\left(\mathbf{\hat{n}}\times\left(\nabla\times\mathbf{\hat{n}}\right)\right)^2\right)-\frac{1}{2}\epsilon_a(\mathbf{E\cdot\hat{n}})^2\nonumber\\ &&+\frac{1}{2}\mu\cdot Tr(\underline{\underline{l^0}}\cdot\underline{\underline{\Lambda}}^T\cdot\underline{\underline{l}}^{-1}\cdot \underline{\underline{\Lambda}})-\frac{1}{2}A\mu\cdot Tr(\underline{\underline{\Lambda}}^T\cdot\underline{n}\cdot\underline{\underline{\Lambda}}-\underline{n^0}\cdot\underline{\underline{\Lambda}}^T\cdot\underline{n}\cdot \underline{\underline{\Lambda}}). \end{eqnarray} \end{widetext} In Eq.~\ref{Eq:NematicCurvature}, $K_{S}$ and $K_{B}$ are the splay and bend curvature elastic constants of the nematic solvent; $\mu$ is the shear modulus of the gel and $A$ is the semisoftness coefficient; $\underline{\underline{\Lambda}}$ is the Cauchy strain tensor, $\Lambda_{ij}=\delta_{ij}+\partial R_{i}/\partial x_{j}$. By averaging this free energy density over space, minimizing with respect to $\xi$, and keeping terms only to second order in $\zeta$, the free energy density $f$ in the material can be written as in Eq.~\ref{Eq:FinalFreeEnergy}.
\begin{widetext} \begin{eqnarray} \label{Eq:FinalFreeEnergy} f&=&\frac{\mu\zeta^2}{4 q^2r\big(1-r^{0}+r(A-1+\frac{\epsilon_aE^2+K_{S}k^2+K_{B}q^2}{\mu}+r^{0})\big)}\times\nonumber\\ &&\bigg(k^6r(1+Ar)\frac{K_{S}}{\mu} +q^4r^{0}\big(r+(A-1+\frac{\epsilon_aE^2+K_{B}q^2}{\mu})r^2+(r-1)r^{0}\big) \nonumber\\ &&+k^4r\big((1+Ar)(1+r^{0})-r-Ar^{0}+(1+Ar)\frac{\epsilon_aE^2+K_{B}q^2}{\mu}+(r+r^{0})\frac{K_{S}q^2}{\mu}-\frac{r^{0}}{r}\big)\nonumber\\ &&+k^2q^2\Big((3-r^{0})r^{0}+r\big(1+r^{0}(3A-6+\frac{\epsilon_aE^2+K_{B}q^2}{\mu}+r^{0})\big)\Big)\nonumber\\ &&+r^2\big(A-1+\frac{\epsilon_aE^2}{\mu}+3r^{0}-2Ar^{0}+\frac{q^2}{\mu}(K_{B}+K_{S}r^{0})\big)\bigg) +o(\zeta^4). \end{eqnarray} \end{widetext} It can be seen that the free energy is proportional to the square of the perturbation amplitude, $\zeta^2$. If $f<0$, the perturbed state is energetically favorable and the system undergoes a transition to a state with $\zeta\ne0$; on the other hand, if $f>0$, the perturbation is energetically unfavorable and the system stays in its initial state with $\zeta=0$. $f=0$ is the critical point at which the transition starts. In this way, the values of the material's physical parameters determine the instability behavior under different external experimental conditions, e.g. applied field and temperature. We can plug the known parameters of the nematic liquid crystalline gel\cite{2004_03_Nature-Materials_Vol:3_Pg:177--182_Kempe.M;Scruggs.N;etal,2006_04_Physical-Review-Letters_Vol:96_Pg:147802_Verduzco.R;Meng.G;etal} into Eq.~\ref{Eq:FinalFreeEnergy}: $d=25 \mu\mathrm{m}$, $\mu=50\mathrm{Jm^{-3}}$, $A=0.1$, $K_{B}=10^{-11}\mathrm{Jm^{-1}}$, $K_{S}=1.5\times10^{-11}\mathrm{Jm^{-1}}$, $\epsilon_a=15\epsilon_0$. We choose $r^0=1.5$ for the initial gelation state at temperature $T_{i}$, while $r$ depends on the order parameter of the nematic solvent in the gel at the specific final temperature $T_{f}$ of the elongated polymeric coil.
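As a quick numerical sketch (ours, not from the original analysis), the sign change of the $k\to0$ limit of the free energy can be located by finding where the combination $r+(r-1)r^0+r^2\big(A-1+(\epsilon_aE^2+K_Bq^2)/\mu\big)$ vanishes, using the parameter values listed above in an assumed SI reading. The resulting numbers differ somewhat from the full calculation quoted below, since the complete expression contains further terms, but the qualitative trend, $E_{\mathrm{C}}$ increasing with $r$, is reproduced.

```python
import math

# Material parameters quoted in the text (our assumed SI reading).
d = 25e-6              # cell thickness [m]
mu = 50.0              # shear modulus [J m^-3]
A = 0.1                # semisoftness coefficient
K_B = 1e-11            # bend elastic constant [J/m]
eps_a = 15 * 8.854e-12 # dielectric anisotropy, 15*eps_0
r0 = 1.5               # anisotropy parameter at gelation
q = math.pi / d        # first-harmonic wavevector along z

def critical_field(r):
    """Field at which the k -> 0 free energy changes sign.

    Solve r + (r-1)*r0 + r^2*(A - 1 + x) = 0 for
    x = (eps_a*E^2 + K_B*q^2)/mu, then extract E.
    """
    x = -(r + (r - 1.0) * r0) / r**2 - (A - 1.0)
    E2 = (mu * x - K_B * q**2) / eps_a
    return math.sqrt(E2) if E2 > 0 else 0.0

for r in (3.5, 5.8):
    print(f"r = {r}: E_C ~ {critical_field(r)/1e6:.2f} V/um")
```

With these assumptions the estimate gives a critical field of a few tenths of a $\mathrm{V}/\mu\mathrm{m}$ that grows with $r$, consistent with the qualitative behavior in Fig.~\ref{Fig:FieldRPlot}.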
\subsection{Instability Analysis: Critical Field} For the case of a homogeneous birefringence change as the basic mode of the transition, $f$ can be further simplified by setting $k=0$: \begin{widetext} \begin{equation}\label{Eq:EnergyField} f_{k\to0}=\frac{\mu \zeta^2q^2r^0}{4r} \left(\frac{r+(r-1)r^0+r^2\left(A-1+\frac{\epsilon_{a}E^2+K_{B}q^2}{\mu}\right)}{1-r^0+r\left(A-1+r^0+\frac{\epsilon_{a}E^2+K_{B}q^2}{\mu}\right)}\right). \end{equation} \end{widetext} Fig.~\ref{Fig:FreeEnergyField} shows the plot of $f_{k\to0}$ as a function of the electric field intensity $E$, which corresponds to decreasing the electric field at a given final temperature. The free energy is positive while the electric field is still large enough to keep the vertical alignment of the nematic directors across the cell, so the buckling transition is energetically unfavorable; as the electric field is decreased further, the energy becomes negative, which means the system transforms into the buckled state to minimize the free energy. The critical electric field $E_{\mathrm{C}}$ is found at the point where the free energy equals zero. The relationship between $E_{\mathrm{C}}$ and $r$ can be studied numerically and is plotted in Fig.~\ref{Fig:FieldRPlot}, where $E_{\mathrm{C}}$ increases with $r$; in other words, $E_{\mathrm{C}}$ decreases with the final temperature $T_{f}$. This agrees qualitatively with our experimental measurement in Fig.~\ref{Fig:FieldTemp}: for $r=3.5$, the critical field is calculated as $E_{\mathrm{C}}=0.48\textrm{V}/\mu\mathrm{m}$, compared with the experimental value $0.56\textrm{V}/\mu\mathrm{m}$ at 31\textcelsius ; for $r=5.8$, corresponding to a lower $T_{f}$, the critical field is calculated as $E_{\mathrm{C}}=0.621\textrm{V}/\mu\mathrm{m}$, compared with the experimental value $0.852\textrm{V}/\mu\mathrm{m}$ at 25\textcelsius . \begin{figure} \def 0.225\textwidth {0.225\textwidth} \centering \subfigure[ $f_{k\to0}$ vs.
$E$]{\label{Fig:FreeEnergyField}\includegraphics[width=0.225\textwidth]{Figure4a}} \subfigure[ $E_{\mathrm{C}}$ vs. $r$]{\label{Fig:FieldRPlot}\includegraphics[width=0.225\textwidth]{Figure4b}} \subfigure[ $f_{E\to0}$ vs. $k$]{\label{Fig:FreeEnergyK}\includegraphics[width=0.225\textwidth]{Figure4c}} \subfigure[ $\lambda$ vs. $r$]{\label{Fig:KR}\includegraphics[width=0.225\textwidth]{Figure4d}} \caption{ The free energy density ($f$) depends both on the \subref{Fig:FreeEnergyField} electric field and the \subref{Fig:FreeEnergyK} wavevector $k$. The free energy is minimized by decreasing the applied field to zero and by a spatial modulation of the director field $\hat{\mathbf{n}}$ with a finite $k$. Both the critical field $E_{\mathrm{C}}$ and the wavelength depend on the anisotropic properties of the final buckled state: as the nematic gel becomes more ordered in its final state (corresponding to lower $T_{f}$), \subref{Fig:FieldRPlot} $E_{\mathrm{C}}$ increases and \subref{Fig:KR} the wavelength decreases. } \end{figure} \subsection{Instability Analysis: Stripe Wavelength} Since the nematic gel buckles when the applied electric field is removed ($E=0$), the free energy $f$ depends on the wavevector $k$ in the $xy$-plane: \begin{widetext} \begin{eqnarray}\label{Eq:EnergyK} f_{E\to0}&=&\frac{\mu\zeta^2}{4 q^2r\big(1-r^{0}+r(A-1+\frac{K_{S}k^2+K_{B}q^2}{\mu}+r^{0})\big)}\times\bigg(k^6K_{S}r\frac{1+Ar}{\mu} \nonumber \\ &&+k^4r\big(1+r^{0}-Ar^{0}+\frac{q^2}{\mu}(K_{B}+K_{S}r^{0})\big)+k^2q^2\Big((3-r^{0})r^{0}+r\big(1+r^{0}(3A-6+\frac{K_{B}q^2}{\mu}+r^{0})\big)\Big) \nonumber \\ &&+q^4r^{0}\big(r+(A-1+\frac{K_{B}q^2}{\mu})r^2+(r-1)r^{0}\big)+r^2\big(A-1+3r^{0}-2Ar^{0}+\frac{q^2}{\mu}(K_{B}+K_{S}r^{0})\big)\nonumber \\ &&+r^2\big(\frac{K_{S}q^2}{\mu}-1+A(1+\frac{K_{B}q^2r^{0}}{\mu})\big)-r^{0}\bigg). \end{eqnarray} \end{widetext} Fig.~\ref{Fig:FreeEnergyK} shows the plot of $f_{E\to0}$ as a function of the wavevector $k$.
It can be seen that the free energy is negative at a finite $k$, which corresponds to the stripe pattern observed experimentally. The spatial modulation of the translational order ($\mathbf{R}$) and the orientational order ($\hat{\mathbf{n}}$) further minimizes the free energy, similar to the stripe pattern observed in previous experiments on planar-aligned samples\cite{2006_04_Physical-Review-Letters_Vol:96_Pg:147802_Verduzco.R;Meng.G;etal}. Furthermore, the wavelength can be calculated numerically as a function of the sample's anisotropy parameter $r$ in the final state, which is plotted in Fig.~\ref{Fig:KR}. It can be seen that the wavelength decreases as $r$ increases; in other words, the stripe wavelength is smaller for lower final temperatures $T_{f}$. This agrees with our experimental observations in Fig.~\ref{Fig:WavelengthTemp}: for $r=3.5$, the wavelength is calculated as $6.48\mu\mathrm{m}$, compared with the experimental value $6.46\mu\mathrm{m}$ at 31\textcelsius ; for $r=5.8$, corresponding to a lower $T_{f}$, the wavelength is calculated as $3.82\mu\mathrm{m}$, compared with the experimental value $3.82\mu\mathrm{m}$ at 25\textcelsius . Currently, the analytic relationship between the temperature $T$ (or the order parameter of the nematic liquid crystal) and the anisotropy parameter $r$ of the polymeric network is not known experimentally. Therefore we cannot fit the measured temperature dependence of the critical field and the wavelength; the theoretical calculation can only be compared with the experimental data qualitatively, with which it agrees well. \section{\label{Sec:Conclusion}Conclusions} In summary, spontaneous buckling transitions of thin layers of nematic liquid crystalline gel in a homeotropic cell were observed by polarized light microscopy.
This is a good example of the coupling between the liquid crystalline ordering and the crosslinked polymer backbones inside the nematic gel material. As the nematic mesogens become more ordered when the gel is cooled down from the initial crosslinking stage at a higher temperature, the polymer network tends to elongate along the direction parallel to the initial nematic director, which is perpendicular to the rigid glass surfaces in the experimental setup. The shape change of such a confined gel sample leads to the spontaneous buckling transition. The applied electric field changes the instability behavior, and the spatially modulated stripe pattern in the orientational ordering of the nematic solvent helps to accommodate the buckling transformation of the gel network and minimize the free energy. The experimental observations and measurements at different temperatures can be explained qualitatively. \section{Acknowledgments} \begin{acknowledgments} We gratefully thank Julia~A.~Kornfield's group at the California Institute of Technology for providing the nematic gel material. This research was supported by NSF Grant No. DMR-0322530. \end{acknowledgments} \bibliographystyle{apsrev}
\section{Introduction} \subsection{Motivation} The study of the Sine-Gordon model has a long history. It has in particular served as an important toy model for interacting quantum field theories. The integrability of this model gives access to detailed non-perturbative information about various characteristic quantities, which allows one to check physical ideas about quantum field theory against exact quantitative results. It is particularly fascinating to compare the Sine-Gordon model with the Sinh-Gordon model. The Hamiltonian density $h_{SG}$ of the Sine-Gordon model and the corresponding object $h_{ShG}$ of the Sinh-Gordon model, \begin{equation}\label{Hdef} H\,=\,\int_0^{R}\frac{dx}{4\pi}\;h(x)\,,\qquad \begin{aligned} h_{SG}^{} &\,=\,\Pi^2+(\pa_x\phi)^2+8\pi\mu\cos(2\be\phi)\,,\\ h_{ShG}^{}& \,=\,\Pi^2+(\pa_x\phi)^2+8\pi\mu\cosh(2b\phi)\,, \end{aligned} \end{equation} are related by analytic continuation w.r.t. the parameter $\beta$ and setting $\beta=ib$. The integrability of both models is governed by the same algebraic structure $\CU_q(\widehat{\fsl}_2)$ with $q=e^{-\pi i \be^2}$. This leads one to expect that both models should be closely related, or at least have the same ``degree of complexity''. The physics of these two models turns out to be very different, though. Many of the key objects characteristic for the respective quantum field theories are not related by analytic continuation in the usual sense. While the Sine-Gordon model has a much richer spectrum of excitations and scattering theory in the infrared (infinite $R$) limit, one may observe rather intricate structures in the UV-limit of the Sinh-Gordon model \cite{Z2}, which turn out to be related to the Liouville theory \cite{ZZ,T,BT}.
These differences can be traced back to the fact that the periodicity of the interaction term $8\pi\mu\cos(2\be\phi)$ of the Sine-Gordon model allows one to treat the variable $\phi$ as an angular variable parameterizing a compact space, while $\phi$ is truly non-compact in the Sinh-Gordon model. The qualitative differences between the Sine-Gordon and the Sinh-Gordon model can be seen as a simple model for the differences between Nonlinear Sigma-Models on compact and non-compact spaces respectively. This forms part of our motivation to revisit the Sine-Gordon model in a way that makes comparison with the Sinh-Gordon model easier. \subsection{Open problems} A lot of important exact results are known about the Sine-Gordon model. Particularly well understood is the scattering theory in infinite volume. The spectrum of elementary particle excitations and the S-matrix of the theory are known exactly \cite{KT77,Za77,FST,Ko80}. Relatedly, there is a wealth of information on the form-factors of local fields, see e.g. \cite{Sm92,BFKZ,LZ01} for the state of the art and further references. In the case of finite spatial volume, the nonlinear integral equations\footnote{This type of equation was introduced earlier in a different framework in \cite{KP91,KBP91}} derived by Destri and De Vega \cite{DDV92,DDV94,DDV97,FMQR97,FRT98,FRT99} give a powerful tool for the study of the finite-size corrections to the spectrum of the Sine-Gordon model. However, there are several questions, some of them fairly basic, where our understanding does not seem to be fully satisfactory. We do not have exact results on correlation functions on the one hand, or on expectation values of local fields in finite volume on the other hand at present. Even the present level of understanding of the spectrum of the model is not fully satisfactory.
The truth of the commonly accepted hypothesis that the equations derived by Destri and De Vega describe all of the states of the Sine-Gordon model has not been demonstrated yet. The approach of Destri and De Vega is based on the Bethe ansatz in the fermionized version of the Sine-Gordon model, the massive Thirring model \cite{DDV87}. This approach a priori only allows one to describe the states with even topological charge, and it inherits from its roots in the algebraic Bethe ansatz some basic difficulties like the issue of its completeness. In the Bethe ansatz approach it is a long-standing problem to prove that the set of states that is obtained in this way is complete. Early attempts to show completeness used the so-called string hypothesis which is hard to justify, and sometimes even incorrect. At the moment there are only a few examples of integrable models where the completeness of the Bethe ansatz has been proven, including the XXX Heisenberg model, see \cite{MTV} and references therein. A similar result has not been available for the Sine-Gordon model or its lattice discretizations yet. One of the main results in this paper is the completeness result for the lattice Sine-Gordon model. We prove a one-to-one correspondence between eigenstates of the transfer matrix and the solutions to a system of algebraic equations of the Bethe ansatz type. For brevity, we will refer to this result as {\it completeness} of the Bethe ansatz. We furthermore show that the spectrum of the transfer matrix is simple in the case of odd number of lattice sites, and find the operator which resolves the possible double degeneracy of the spectrum of the transfer matrix in the case of even number of lattice sites. \subsection{Our approach} We will use a lattice regularization of the Sine-Gordon model that is different from the one used by Destri and De Vega. It goes back to \cite{FST,IK}, and it has more recently been studied in \cite{F94,FV94}. 
For an even number of lattice sites the model is related to the Fateev-Zamolodchikov model \cite{FZ}, as was observed in \cite{FV94}, or more generally to the Chiral Potts model, as discussed in the more recent works \cite{BBR,Ba08}. This allows one to use some powerful algebraic tools developed for the study of the Chiral Potts model \cite{BS} in the analysis of the lattice Sine-Gordon model. The issue of completeness of the Bethe ansatz had not been solved in any of these models yet. What allows us to address this issue is the combination of the Separation of Variables (SOV) method of Sklyanin \cite{Sk1,Sk2,Sk3} with the use of the $\SQ$-operators introduced by Baxter \cite{Ba72}. We will throughout be working with a certain number of inhomogeneity parameters. It turns out that the SOV-method works in the case of generic inhomogeneity parameters where the algebraic Bethe ansatz method fails. It replaces the algebraic Bethe ansatz as a tool to construct the eigenstates of the transfer matrix which correspond to the solutions of Bethe's equations. In a future publication we will show that the results of our approach are consistent with those of Destri and De Vega. Another advantage of the lattice discretization used in this paper, which may become useful in the future, is that one works directly with the discretized Sine-Gordon degrees of freedom, which is not the case in the lattice formulation used by Destri and De Vega. Working more directly with the Sine-Gordon degrees of freedom should in particular be useful for the problem of calculating expectation values of local fields. This in particular requires the determination of the SOV-representation of local fields, analogously to what has been done in the framework of the algebraic Bethe ansatz in \cite{KMT99,MT00}. The SOV-method in principle offers a rather direct way to the construction of the expectation values, as illustrated in the case of the Sinh-Gordon model by the work \cite{Lu}.
\vspace*{1mm} {\par\small {\em Acknowledgements.} We would like to thank V. Bazhanov and F. Smirnov for stimulating discussions, and J.-M. Maillet for interest in our work. We gratefully acknowledge support from the EC by the Marie Curie Excellence Grant MEXT-CT-2006-042695.} \section{Definition of the model} \setcounter{equation}{0} \subsection{Classical Sine-Gordon model} The classical counterpart of the Sine-Gordon model is a dynamical system whose degrees of freedom are described by the field $\phi(x,t)$ defined for $(x,t)\in[0,R]\times \BR$ with periodic boundary conditions $\phi(x+R,t)=\phi(x,t)$. The dynamics of this model may be described in the Hamiltonian form in terms of variables $\phi(x,t)$, $\Pi(x,t)$, the Poisson brackets being \[ \{\,\Pi(x,t)\,,\,\phi(x',t)\,\}\,\,=\,2\pi\,\de(x-x')\,. \] The time-evolution of an arbitrary observable $O(t)$ is then given as \[ \pa_tO(t)\,=\,\{\,H\,,\,O(t)\,\}\,, \] with Hamiltonian $H$ being defined in \rf{Hdef}. The equation of motion for the Sine-Gordon model can be represented as a zero curvature condition, \begin{equation} [\,\pa_t-V(x,t;\lambda)\,,\,\pa_x-U(x,t;\lambda)\,]\,=\,0\,, \end{equation} with matrices $U(x,t;\lambda)$ and $V(x,t;\lambda)$ being given by \begin{equation} \begin{aligned}\label{ZCC} &U(x,t;\lambda)\,=\,\left( \begin{matrix}i\frac{\be}{2}\Pi & -{i}m(\la e^{-i\be\phi}-\la^{-1}e^{i\be\phi})\\ -{i}m(\la e^{i\be\phi}-\la^{-1}e^{-i\be\phi}) & - i\frac{\be}{2}\Pi \end{matrix}\right)\\ &V(x,t;\la)\,=\,\left(\begin{matrix} i\frac{\be}{2}\phi' & +{i}m(\la e^{-i\be\phi}+\la^{-1}e^{i\be\phi}) \\ +{i}m(\la e^{i\be\phi}+\la^{-1}e^{-i\be\phi}) & -i\frac{\be}{2}\phi' \end{matrix}\right) \end{aligned} \end{equation} and $m$ related to $\mu$ by $m^2=\pi \be^2\mu$. \subsection{Discretization and canonical quantization} In order to regularize the ultraviolet divergences that arise in the quantization of these models we will pass to integrable lattice discretizations. 
We first discretize the field variables according to the standard recipe \begin{equation*} \phi_n \equiv \phi(n\Delta) \,, \quad \Pi_n \equiv \De\Pi(n\Delta) \,, \end{equation*} where $\Delta=R/\SRN$ is the lattice spacing. In the canonical quantization one would replace $\phi_n$, $\Pi_n$ by the corresponding quantum operators with commutation relations \begin{equation} [\,\phi_n\,,\,\Pi_m\,]\,=\,2\pi i\de_{n,m}\,. \end{equation} Planck's constant can be identified with $\be^2$ by means of a rescaling of the fields. The scheme of quantization of the Sine-Gordon model considered in this paper will deviate from the canonical quantization by using $\su_n\equiv e^{i\frac{\be}{2}\Pi_n}$ and $\sv_n\equiv e^{-i\beta\phi_n}$ as basic variables. For technical reasons we will consider representations where both $\su_n$ and $\sv_n$ have discrete spectrum. Let us therefore take a moment to explain why one may nevertheless expect that the resulting quantum theory will describe the quantum Sine-Gordon model in the continuum limit. First note (following the discussion in \cite{Za94}) that the periodicity of the potential $8\pi\mu \cos(2\beta\phi)$ in \rf{Hdef} implies that shifting the zero mode $\phi_0\equiv \frac{1}{R} \int_0^R dx \,\phi(x)$ by the amount $\pi/\beta$ is a symmetry. In canonical quantization one could build the unitary operator $\SW=e^{\frac{i}{2\be}R\spp_\0}$ which generates this symmetry out of the zero mode $\spp_\0 \equiv\frac{1}{R}\int_0^R dx \,\Pi(x)$ of the conjugate momentum $\Pi$. $\SW$ should commute with the Hamiltonian $\SH$. One may therefore diagonalize $\SW$ and $\SH$ simultaneously, leading to a representation for the space of states in the form \begin{equation} \CH\,\simeq\,\int_{S_1}d\alpha\;\CH_\al\,\qquad{\rm where}\qquad \SW\cdot\CH_\al\,=\,e^{i\al}\CH_\al\,. \end{equation} An alternative way to take this symmetry into account in the construction of the quantum theory is to construct the quantum theory separately for each $\al$-sector.
This implies that the field $\phi$ should be treated as periodic with periodicity $\pi/\beta$, and that the conjugate variables $\Pi_n$ have eigenvalues quantized in units of $\beta$, with spectrum contained in $\{\,2\al\be/ \SRN+4\pi \beta k\,;\,k\in\BZ\,\}$. The spectrum of $\Pi_n$ is such that the operator $\SW=e^{\frac{i}{2\be}R\spp_\0}$, with $R\spp_\0$ approximated by $\sum_{n=1}^\SRN \Pi_n$, is realized as the operator of multiplication by $e^{i\al}$. Let us furthermore note that it is possible, and technically useful to assume that the lattice field observable $\phi_n$ has discrete spectrum, which we will take to be quantized in units of $\beta$. In order to see this, note that the field $\phi(x)$ is not a well-defined observable due to short-distance singularities, whereas smeared fields like $\int_I dx\,\phi(x)$, $I\subset [0,R]$ may be well-defined. The observable $\int_I dx\,\phi(x)$ would in the lattice discretization be approximated by \begin{equation} \phi[I]\,\sim\,\sum_{n\Delta\in I} \Delta\phi_n\,. \end{equation} So even if $\phi_n$ is discretized in units of $\beta$, say, we find that the observable $\phi[I]$ is quantized in units of $\Delta\beta$, which fills out a continuum for $\Delta\ra 0$. \subsection{Non-canonical quantization} As motivated above, we will use a quantization scheme based on the quantum counterparts of the variables $u_n$, $v_n$ $n=1,\dots,\SRN$ related to $\Pi_n$, $\phi_n$ as \begin{equation} u_n\,=\,e^{i\frac{\be}{2}\Pi_n}\,,\qquad v_n\,=\,e^{-i\beta\phi_n}\,. \end{equation} The quantization of the variables $u_n$, $v_n$ produces operators $\su_n$, $\sv_m$ which satisfy the relations \begin{equation}\label{Weyl} \su_n\sv_m=q^{\de_{nm}}\sv_m\su_n\,,\qquad{\rm where}\;\; q=e^{-\pi i \beta^2}\,. \end{equation} We are looking for representations for the commutation relations \rf{Weyl} which have discrete spectrum both for $\su_n$ and $\sv_n$. 
Such representations exist provided that the parameter $q$ is a root of unity, \begin{equation} \beta^2\,=\,\frac{p'}{p}\,,\qquad p,p'\in\BZ^{>0}\,. \end{equation} We will restrict our attention to the case $p$ odd and $p'$ even, so that $q^{p}=1$. It will often be convenient to parameterize $p$ as \begin{equation} p\;=\;2l+1\,,\qquad l\in\BZ^{\geq 0}\,. \end{equation} Let us consider the subset $\BS_p=\{q^{2n};n=0,\dots,2l\}$ of the unit circle. Note that $\BS_p=\{q^{n};n=0,\dots,2l\}$ since $q^{2l+2}=q$. This allows us to represent the operators $\su_n$, $\sv_n$ on the space of complex-valued functions $\psi:\BS_p^{\SRN}\ra \BC$ as \begin{equation}\label{reprdef} \begin{aligned} &\su_n\cdot\psi(z_1,\dots,z_\SRN)\,=\,u_n z_n\psi(z_1,\dots, z_n,\dots,z_\SRN)\,,\\ &\sv_n\cdot\psi(z_1,\dots,z_\SRN)\,=\,v_n\psi(z_1,\dots, q^{-1} z_n,\dots,z_\SRN)\,. \end{aligned} \end{equation} The representation is such that the operator $\su_n$ is represented as a multiplication operator. The parameters $u_n$, $v_n$ introduced in \rf{reprdef} can be interpreted as ``classical expectation values'' of the operators $\su_n$ and $\sv_n$. The discussion in the previous subsection suggests that the $v_n$ will be irrelevant in the continuum limit, while the average value of $u_n$ will be related to the eigenvalue $e^{i\al}$ of $\SW$ via $u_n=\exp(i\beta^2{\al}/{\SRN})$. \subsection{Lattice dynamics}\label{dyn} There is a beautiful discrete time evolution that can be defined in terms of the variables introduced above which reproduces the Sine-Gordon equation in the classical continuum limit \cite{FV94}. It is simplest in the case where $u_n=1$, $v_n=1$, $n=1,\dots,\SRN$. We will mostly\footnote{Except for Section \ref{SOV}.} restrict to this case in the rest of this paper. More general cases were treated in \cite{BBR,Ba08}.
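As a concrete sanity check (our illustration, not part of the original construction), a single-site pair $\su$, $\sv$ obeying \rf{Weyl} with $u_n=v_n=1$ can be realized by $p\times p$ clock and shift matrices acting on $\psi:\BS_p\to\BC$. The sketch below makes the simplest illustrative choice $p'=2$, i.e. $q=e^{-2\pi i/p}$, and verifies the Weyl relation $\su\sv=q\,\sv\su$ numerically.

```python
import numpy as np

l = 2
p = 2 * l + 1                  # p = 5, so q^p = 1
q = np.exp(-2j * np.pi / p)    # q = e^{-pi i beta^2} with beta^2 = 2/p (illustrative)

# Represent psi on S_p = {q^n} as a vector psi[n] = psi(q^n).
z = q ** np.arange(p)
u = np.diag(z)                 # (u psi)(z) = z psi(z): the "clock" matrix
# (v psi)(q^n) = psi(q^{n-1}): the "shift" matrix, V[n, n-1] = 1
v = np.roll(np.eye(p), 1, axis=0).astype(complex)

# Weyl relation u v = q v u
assert np.allclose(u @ v, q * (v @ u))

# Both operators have discrete spectrum on the unit circle,
# namely the p-th roots of unity.
assert np.allclose(np.sort(np.angle(np.linalg.eigvals(v))),
                   np.sort(np.angle(z)))
```

Multi-site operators $\su_n$, $\sv_n$ would then be obtained as tensor products of one such pair with identities on the remaining factors, which commute for different sites as required.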
\subsubsection{Parameterization of the initial values} As a convenient set of variables let us introduce the observables $f_{k}$ defined as \begin{equation} f_{2n}\,\equiv\,e^{-2i\beta\phi_n}\,,\qquad f_{2n-1}\,\equiv\, e^{i\frac{\beta}{2}(\Pi_n+\Pi_{n-1}-2\phi_n-2\phi_{n-1})}\,. \end{equation} These observables turn out to represent the initial data for the time evolution in a particularly convenient way. The quantum operators $\sf_n$ which correspond to the classical observables $f_n$ satisfy the algebraic relations \begin{equation}\label{funalg} \sf_{2n\pm 1}\,\sf_{2n}\,=\,q^2\,\sf_{2n}\,\sf_{2n\pm 1}\,,\quad q=e^{-\pi i \beta^2}\,, \qquad \sf_{n}\,\sf_{n+m}\,=\,\sf_{n+m}\,\sf_{n}\;\;{\rm for}\;\;m\geq 2\,. \end{equation} There exist simple representations of the algebra \rf{funalg} which may be constructed out of the operators $\su_n$, $\sv_n$, given by \begin{equation}\label{f-uv} \sf_{2n}\,=\,\sv_n^2\,,\qquad\sf_{2n-1}\,=\,\su_n^{}\su_{n-1}\,. \end{equation} The change of variables defined in \rf{f-uv} is invertible if $\SRN$ is odd. \subsubsection{Discrete evolution law} Let us now describe the discrete time evolution proposed by Faddeev and Volkov \cite{FV94}. Space-time is replaced by the cylindrical lattice \[ \CL\,\equiv\,\big\{\,(\nu,\tau)\,,\,\nu\in\BZ/\SRN\BZ\,,\,\tau\in\BZ\,,\,\nu+\tau={\rm even}\,\big\}\,. \] The condition that $\nu+\tau$ is even means that the lattice is rhombic: the lattice points closest to $(\nu,\tau)$ are $(\nu\pm 1,\tau+1)$ and $(\nu\pm 1,\tau-1)$. We identify the variables $\sf_n$ with the initial values of a discrete ``field'' $\sf_{\nu,\tau}$ as \[ \sf_{2r,0}\,\equiv\,\sf_{2r}\,,\qquad \sf_{2r-1,1}\,\equiv\,\sf_{2r-1}\,.
\] One may then extend the definition recursively to all $(\nu,\tau)\in\CL$ by means of the evolution law \begin{equation}\label{Hirota} {\sf}_{\nu,\tau+1}\,\equiv\,g_\kappa^{}\big(q\sf_{\nu-1,\tau}^{}\big)\cdot \sf_{\nu,\tau-1}^{-{1}}\cdot g_\kappa^{}\big(q\sf_{\nu+1,\tau}^{}\big) \,, \end{equation} with function $g$ defined as \begin{equation}\label{gkappadef} g_\kappa(z)\,=\,\frac{\kappa^2+z}{1+\kappa^2 z} \end{equation} where $\kappa$ plays the role of a scale-parameter of the theory. We refer to \cite{FV94} for a nice discussion of the relation between the lattice evolution equation \rf{Hirota} and the classical Hirota equation, explaining in particular how to recover the Sine-Gordon equation in the classical continuum limit. \subsubsection{Construction of the evolution operator} In order to construct the unitary operators $\SU$ that generate the time evolution \rf{Hirota} let us introduce the function \begin{align}\label{W-big} & W_{\la}(q^{2n})\,=\,\prod_{r=1}^{n} \frac{1+\la q^{2r-1}}{\la+ q^{2r-1}}\,, \end{align} which is cyclic, i.e. defined on $\BZ_p$. The function $W_\la(z)$ is a solution to the functional equation \begin{align}\label{funrel} & (z+\la)W_\la(qz)\,=\,(1+\la z)W_\la(q^{-1}z)\,, \end{align} which satisfies the unitarity relation \begin{equation} (W_\la(z))^*_{}\,=\,(W_{\la^*}^{}(z))^{-1}\,. \end{equation} Note in particular that $W_\la(z)$ is "even", i.e. $W_\la(z)=W_{\la}(1/z)$. Further properties of this function are collected in Appendix A. Let us then consider the operator $\SU$, defined as \begin{equation} \SU\,=\,\prod_{n=1}^{\SRN}W_{\kappa^{-2}}(\sf_{2n}) \cdot\SU_0\cdot\prod_{n=1}^{\SRN}W_{\kappa^{-2}}(\sf_{2n-1})\,, \end{equation} where $\SU_0$ is the parity operator that acts as $\SU_0^{}\cdot\sf_k^{}=\sf^{-1}_k\cdot\SU_0^{}$. It easily follows from \rf{funrel} that $\SU$ is indeed the generator of the time-evolution \rf{Hirota}, \begin{equation} \sf_{\nu,\tau+1}\,=\,\SU^{-1}\cdot \sf_{\nu,\tau-1}\cdot\SU\,. 
\end{equation} One of our tasks is to exhibit the integrability of this discrete time evolution. \section{Integrability} \setcounter{equation}{0} The integrability of the lattice Sine-Gordon model is known \cite{IK,FV,BKP93,BBR}. The most convenient way to formulate it uses the Baxter $\SQ$-operators \cite{Ba72}. These operators have been constructed for the closely related Chiral Potts model in \cite{BS}. By means of the relation between the lattice Sine Gordon model and the Fateev-Zamolodchikov model summarized in Appendix \ref{FZ} one may adapt these constructions to the formulation used in this paper. For the reader's convenience we will give a self-contained summary of the construction of the $\ST$- and $\SQ$-operators and of their relevant properties in the following section. \subsection{$\ST$-operators}\label{T-op} As usual in the quantum inverse scattering method, we will represent the family $\CQ$ by means of a Laurent-polynomial $\ST(\la)$ which depends on the spectral parameter $\la$. The definition of operators $\ST(\la)$ for the models in question is standard. It is of the general form \begin{equation}\label{Mdef} \ST^{}(\la)\,=\,{\rm tr}_{\BC^2}^{}\SM(\la)\,,\qquad \SM(\la)\,\equiv\, L_\SRN^{}(\la/\xi_\SRN)\dots L_1^{}(\la/\xi_1)\,, \end{equation} where we have introduced inhomogeneity parameters $\xi_1,\dots,\xi_\SRN$ as a useful technical device. The Lax-matrix may be chosen as \begin{equation}\label{Lax}\begin{aligned} L^{\rm\sst SG}_n(\la) &= \frac{\kappa_n}{i} \left( \begin{array}{cc}i\,\su_n^{}(q^{-\frac{1}{2}}\kappa_n^{}\sv_n^{}+q^{+\frac{1}{2}}\kappa^{-1}_n\sv_n^{-1}) & \la_n^{} \sv_n^{} - \la^{-1}_n \sv_n^{-1} \\ \la_n^{} \sv_n^{-1} - \la^{-1}_n \sv_n^{} & i\,\su_n^{-1}(q^{+\frac{1}{2}}\kappa^{-1}_n\sv_n^{}+q^{-\frac{1}{2}}\kappa_n^{}\sv_n^{-1}) \end{array} \right) . 
\end{aligned}\end{equation} An important motivation for the definitions \rf{Mdef}, \rf{Lax} comes from the fact that the Lax-matrix $L^{\rm\sst SG}_n(\la)$ reproduces the Lax-connection $U(x)$ in the continuum limit. The elements of the matrix $\SM(\la)$ will be denoted by \begin{equation}\label{ABCD} \SM(\la)=\left(\begin{matrix}\SA(\la) & \SB(\la)\\ \SC(\la) & \SD(\la)\end{matrix}\right)\,. \end{equation} They satisfy commutation relations that may be summarized in the form \begin{equation}\label{YBA} R(\la/\mu)\,(\SM(\la)\ot 1)\,(1\ot\SM(\mu))\,=\,(1\ot\SM(\mu))\,(\SM(\la)\ot 1)R(\la/\mu)\,, \end{equation} where the auxiliary R--matrix is given~by \begin{equation}\label{Rlsg} R(\la) = \left( \begin{array}{cccc} q^{}\la-q^{-1}\la^{-1} & & & \\ [-1mm] & \la-\la^{-1} & q-q^{-1} & \\ [-1mm] & q-q^{-1} & \la-\la^{-1} & \\ [-1mm] & & & q\la-q^{-1}\la^{-1} \end{array} \right) \,. \end{equation} It will be useful for us to regard the definition \rf{Mdef} as the construction of operators which generate a representation $\CR_\SRN$ of the so-called Yang-Baxter algebra defined by the quadratic relations \rf{YBA}. The representation $\CR_\SRN$ is characterized by the $4\SRN$ parameters $\kappa=(\kappa_1,\dots,\kappa_\SRN)$, $\xi=(\xi_1,\dots,\xi_\SRN)$, $u=(u_1,\dots,u_\SRN)$ and $v=(v_1,\dots,v_\SRN)$. The fact that the elements of $\SM(\la)$ satisfy the commutation relations \rf{YBA} forms the basis for the application of the quantum inverse scattering method. The mutual commutativity of the $\ST$-operators, \begin{equation} [\,\ST(\la)\,,\,\ST(\mu)\,]\,=\,0\,, \end{equation} follows from \rf{YBA} by standard arguments. The expansion of $\ST(\la)$ into powers of $\la$ produces $\SRN$ algebraically independent operators $\ST_1,\dots,\ST_{\SRN}$. Our main objective in the following will be the study of the spectral problem for $\ST(\la)$. 
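The R-matrix \rf{Rlsg} is the standard trigonometric six-vertex R-matrix in multiplicative parameterization, and as such satisfies the Yang-Baxter equation on $\BC^2\ot\BC^2\ot\BC^2$, which underlies the consistency of \rf{YBA}. A quick numerical sanity check (a sketch, with arbitrarily chosen values of $q$, $\la$ and $\mu$):

```python
import numpy as np

def R(lam, q):
    """The R-matrix (Rlsg) in the basis |00>, |01>, |10>, |11>."""
    a = q * lam - 1 / (q * lam)
    b = lam - 1 / lam
    c = q - 1 / q
    M = np.zeros((4, 4), dtype=complex)
    M[0, 0] = M[3, 3] = a
    M[1, 1] = M[2, 2] = b
    M[1, 2] = M[2, 1] = c
    return M

q = np.exp(0.37j)                  # arbitrary point on the unit circle
lam, mu = 1.3 + 0.4j, 0.7 - 0.2j   # arbitrary spectral parameters

I2 = np.eye(2)
P = np.eye(4)[[0, 2, 1, 3]]        # permutation swapping the two C^2 factors

def R12(x): return np.kron(R(x, q), I2)
def R23(x): return np.kron(I2, R(x, q))
def R13(x): return np.kron(I2, P) @ R12(x) @ np.kron(I2, P)

# Yang-Baxter equation in multiplicative parameters:
# R12(lam/mu) R13(lam) R23(mu) = R23(mu) R13(lam) R12(lam/mu)
lhs = R12(lam / mu) @ R13(lam) @ R23(mu)
rhs = R23(mu) @ R13(lam) @ R12(lam / mu)
assert np.allclose(lhs, rhs)
```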
The importance of this spectral problem follows from the fact that the time-evolution operator $\SU$ of the lattice Sine-Gordon model will be shown to commute with $\ST(\la)$ in the next section. \subsection{$\SQ$-operators} Let us now introduce the Baxter $\SQ$-operators $\SQ(\mu)$. These operators are mutually commuting for arbitrary values of the spectral parameters $\la$ and $\mu$, and satisfy a functional relation of the form \begin{equation} \ST(\la)\SQ(\la)\,=\,{\tt a}(\la)\SQ(q^{-1}\la)+{\tt d}(\la)\SQ(q\la)\,, \end{equation} with ${\tt a}(\la)$ and ${\tt d}(\la)$ being certain model-dependent coefficient functions. The generator of lattice time evolution will be constructed from the specialization of the $\SQ$-operators to certain values of the spectral parameter $\la$, making the integrability of the evolution manifest. \subsubsection{Construction} In order to construct the $\SQ$-operators let us introduce the following renormalized version of the function $W_{\la}(z)$, \begin{align} & w_{\la}(q^{2n})\,=\, \prod_{r=1}^{n}\frac{1+\la q^{2r-1}}{\la+ q^{2r-1}} \prod_{r=1}^{l}\frac{\la+ q^{2r-1}}{1+q^{2r-1}}\,. \end{align} The function $w_\la(z)$ is the unique solution to the functional equation \rf{funrel} which is a polynomial of order $l$ in $\la$ and which satisfies the normalization condition $w_1(q^{2n})=1$.
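The stated properties of $w_\la$ are easy to check numerically. The following sketch (illustrative choices $l=2$ and $q=e^{2\pi i/p}$) verifies the functional equation \rf{funrel} on the cyclic set $\{q^{2n}\}$ as well as the normalization $w_1(q^{2n})=1$:

```python
import numpy as np

l = 2
p = 2 * l + 1
q = np.exp(2j * np.pi / p)   # illustrative primitive p-th root of unity

def w(n, lam):
    """w_lam(q^{2n}) from the product formula; n is taken mod p (cyclicity)."""
    n = n % p
    val = np.prod([(1 + lam * q**(2 * r - 1)) / (lam + q**(2 * r - 1))
                   for r in range(1, n + 1)])
    val *= np.prod([(lam + q**(2 * r - 1)) / (1 + q**(2 * r - 1))
                    for r in range(1, l + 1)])
    return val

lam = 0.8 + 0.3j             # arbitrary value of the parameter

# half-step shifts on the exponent:
# q * q^{2n} = q^{2(n+(p+1)/2)},  q^{-1} * q^{2n} = q^{2(n+(p-1)/2)}  (mod p)
for n in range(p):
    z = q**(2 * n)
    # functional equation (funrel): (z + lam) w(q z) = (1 + lam z) w(q^{-1} z)
    assert np.isclose((z + lam) * w(n + (p + 1) // 2, lam),
                      (1 + lam * z) * w(n + (p - 1) // 2, lam))
    # normalization: w_1(q^{2n}) = 1
    assert np.isclose(w(n, 1.0), 1.0)
```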
The $\SQ$-operators can then be constructed in the form \begin{equation} \SQ(\la,\mu)\,=\,\SY(\la)\cdot(\SY(\mu^*))^{\dagger}\,, \end{equation} where $\SY(\la)$ is defined by its matrix elements $Y_\la^{}(\bz,\bz')\,\equiv\,\langle\,{\mathbf z}\,|\,\SY(\la)\,|\,{\mathbf z}'\,\rangle$ which read \begin{equation}\label{Ydef} Y_\la^{}(\bz,\bz')\,=\,\prod_{n=1}^{\SRN}\overline{w}_{\ep\la/\kappa_n\xi_n}^{}(z_n^{}/z_n')\,w_{\ep\la\kappa_n/\xi_n}^{}(z_n^{}z_{n+1}')\,, \end{equation} where $\ep=-iq^{-\frac{1}{2}}$, and $\overline{w}_\la(z)$ is the discrete Fourier transformation of $w(z)$, \begin{align}\label{FT} \overline{w}_{\la}(z)\,=\,\frac{1}{p}\sum_{r=-l}^l z^r\,w_{\la}(q^{r})\,,\qquad w_\la(y)\,=\,\sum_{r=-l}^{l} y^{-r}\,\overline{w}_{\la}(q^r)\,. \end{align} Note in particular the normalization condition $\overline{w}_1(q^r)=\de_{r,0}$. Despite the fact that $\SQ(\la,\mu)$ is symmetric in $\la$ and $\mu$, $\SQ(\la,\mu)=\SQ(\mu,\la)$ as follows from the identity \rf{Yex} proven in Appendix B, we will mostly consider $\mu$ as a fixed parameter which will later be chosen conveniently. This being understood we will henceforth write $\SQ(\la)$ whenever the dependence of $\SQ(\la,\mu)$ on $\mu$ is not of interest. \subsubsection{Properties} \begin{thm} \label{Qprop} $\quad$ --- $\quad${\bf Properties of $\ST$- and $\SQ$-operators}$\quad$ --- {\sc (A) Analyticity} The operator $\la^{\tilde{\SRN}}\ST(\la)$ is a polynomial in $\la^2$ of degree\footnote{Here, we use the notation ${\rm e}_\SRN=1$ for even $\SRN$, ${\rm e}_\SRN=0$ otherwise.} $\tilde{\SRN}:=\SRN+\en-1$ while the operator $\SQ(\la)$ is a polynomial in $\la$ of maximal degree $2l\SRN$. In the case $\SRN$ odd the operators $\SQ_{2l\SRN}:=\lim_{\la\ra\infty}\la^{-2l\SRN}\SQ(\la)$ and $\SQ_0:=\SQ(0)$ are invertible operators and the normalization of the $\SQ$-operator can be fixed by $\SQ_{2l\SRN}={\rm id}$. 
{\sc (B) Baxter equation} The operators $\ST(\la)$ and $\SQ(\la)$ are related by the Baxter equation \begin{equation}\label{BAX} \ST(\la)\SQ(\la)\,=\,{\tt a}_\SRN^{}(\la)\SQ(q^{-1}\la)+{\tt d}_\SRN^{}(\la)\SQ(q\la)\,, \end{equation} with coefficient functions \begin{equation}\label{addef}\begin{aligned} & {\tt a}_\SRN^{}(\la)\,=\,(-i)^\SRN\prod_{r=1}^{\SRN}\kappa_r/\la_r(1+iq^{-\frac{1}{2}}\la_r\kappa_r)(1+iq^{-\frac{1}{2}}\la_r/\kappa_r)\,,\\ & {\tt d}_\SRN^{}(\la)\,=\,(+i)^\SRN\prod_{r=1}^{\SRN}\kappa_r/\la_r(1-iq^{+\frac{1}{2}}\la_r\kappa_r)(1-iq^{+\frac{1}{2}}\la_r/\kappa_r)\,. \end{aligned} \end{equation} {\sc(C) Commutativity} \begin{equation} \begin{aligned} &[\, \SQ(\la)\,,\,\SQ(\mu)\,]\,=\,0\,,\\ &[\, \ST(\la)\,,\,\SQ(\mu)\,]\,=\,0\,, \end{aligned}\qquad \forall \la,\mu\,. \end{equation} {\sc (S) Self-adjointness}\\ Under the assumption that the parameters $\xi_{r}$ and $\kappa_{r}$ are real or purely imaginary, the following holds: \begin{equation} (\ST(\la))^{\dagger}\,=\,\ST(\la^*)\,,\qquad(\SQ(\la))^{\dagger}\,=\,\SQ(\la^*)\,. \end{equation} \end{thm} For the reader's convenience we have included a self-contained proof in Appendix \ref{Qproofs}. It follows from these properties that $\ST(\la)$ and $\SQ(\mu)$ can be diagonalized simultaneously for all $\la,\mu$. The eigenvalues $Q(\la)$ of $\SQ(\la)$ must satisfy \begin{equation}\label{BaxEV} t(\la)Q(\la)\,=\,{\tt a}_{\SRN}(\la)Q(q^{-1}\la)+{\tt d}_{\SRN}(\la)Q(q\la)\,. \end{equation} It follows from the property (A) of $\SQ(\la)$ that any eigenvalue $Q(\la)$ must be a polynomial of order $2l\SRN$ normalized by the condition $Q_{2l\SRN}=1$. Such a polynomial is fully characterized by its zeros $\la_1,\dots,\la_{2l\SRN}$, \begin{equation}\label{Qfromzeros} Q(\la)\,=\,\prod_{k=1}^{2l\SRN}(\la-\la_k)\,.
\end{equation} It follows from the Baxter equation \rf{BaxEV} that the zeros must satisfy the Bethe equations \begin{equation} \label{BAE} \frac{{\tt a}(\la_r)}{{\tt d}(\la_r)}\,=\,- \prod_{s=1}^{2l\SRN}\frac{\la_s-\la_rq}{\la_s-\la_r/q}\,. \end{equation} What is not clear at this stage is whether for each solution of the Bethe equations \rf{BAE} there indeed exists an eigenstate of $\ST(\la)$ and $\SQ(\mu)$. In order to show that this is the case we need a method to construct eigenstates from solutions to \rf{BAE}. The Separation of Variables method will give us such a construction, replacing the algebraic Bethe ansatz in the cases we consider. \subsection{Integrability} In order to recover the light-cone dynamics discussed in subsection \ref{dyn}, let us temporarily return to the homogeneous case where $\xi_n=1$ and $\kappa_n=\kappa$ for $n=1,\dots,\SRN$. Let us note that the operators $\SY(\la)$ simplify when $\la$ is sent to $0$ or $\infty$. Multiplying by suitable normalization factors one finds the {\it unitary} operators \[ \SY_0\,\equiv\,\ga_{0}^{\SRN}\,\SY(0)\quad{\rm and}\quad \SY_\infty\,\equiv\,\lim_{\mu\ra\infty}\ga_{\infty}^{\SRN}\,\mu^{-2l\SRN}\SY(\mu)\,, \] where $\ga_{0}=\prod_{r=1}^l(1-q^{4r})$ and $\ga_{\infty}=(-1)^{l}q^{l}\prod_{r=1}^l(1-q^{4r-2})$.
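In the classical limit, where all variables commute, the light-cone evolution \rf{Hirota} becomes an explicit rational map built from the function $g_\kappa$ of \rf{gkappadef}. The following sketch (with the even/odd sublattice bookkeeping suppressed and illustrative values of $\SRN$ and $\kappa$) checks that this map is invertible, and verifies the elementary identity $g_\kappa(z)\,g_\kappa(1/z)=1$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, kappa = 7, 0.6                          # illustrative chain length and coupling

def g(z):
    # the function g_kappa of (gkappadef)
    return (kappa**2 + z) / (1 + kappa**2 * z)

def forward(f_prev, f_cur):
    """Classical (commuting-variable) light-cone step, cf. (Hirota);
    the rhombic sublattice bookkeeping is suppressed for simplicity."""
    return np.array([g(f_cur[(n - 1) % N]) / f_prev[n] * g(f_cur[(n + 1) % N])
                     for n in range(N)])

f_prev = rng.uniform(0.5, 2.0, N)          # positive data, away from the poles of g
f_cur = rng.uniform(0.5, 2.0, N)
f_next = forward(f_prev, f_cur)

# the evolution is reversible: the same formula run backwards recovers f_prev
assert np.allclose(forward(f_next, f_cur), f_prev)

# "unitarity"-type identity for g: g(z) g(1/z) = 1
z = 1.7 + 0.4j
assert np.isclose(g(z) * g(1 / z), 1.0)
```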
The operators $\SY_0$ and $\SY_\infty$ have the simple matrix elements \begin{equation}\label{Y0matel} \begin{aligned} \langle\,{\mathbf z}\,|\,\SY_0\,|\,{\mathbf z}'\,\rangle\,& =\,\prod_{n=1}^{\SRN}q^{-2k_n^{}(k_n'+k_{n+1}')}\,,\\ \langle\,{\mathbf z}\,|\,\SY_\infty\,|\,{\mathbf z}'\,\rangle\,&=\,\prod_{n=1}^{\SRN}q^{+2k_n^{}(k_n'+k_{n+1}')}\,, \end{aligned}\quad{\rm if} \quad \left\{\;\begin{aligned} &{\mathbf z}\,=\,(q^{2k_1},\dots,q^{2k_{\SRN}}),\\[1ex] &{\mathbf z}'\,=\,(q^{2k_1'},\dots,q^{2k_{\SRN}'}), \end{aligned}\;\right\}\end{equation} and \begin{equation} \SQ^+(\la)\,=\,\SY(\la)\cdot\SY_\infty^\dagger\,,\qquad \SQ^-(\la)\,=\,\big(\SY(\la)\cdot\SY_0^\dagger\,\big)^{-1}\,. \end{equation} Integrability follows immediately from the following observation: \begin{equation}\label{UfromQ} \boxed{ \qquad\SU\,=\,\alpha_{\kappa} \,\SU^+\cdot\SU^-,\qquad\SU^+\,=\,\SQ^+(1/\kappa\ep),\qquad\SU^-\,=\,\SQ^-(\kappa/\ep),\qquad } \end{equation} where $\alpha_{\kappa}\equiv \prod_{r=1}^{l}(1-q^{4r-2})^{2\SRN}/( \kappa^2-q^{4r-2})^{2\SRN}$. The proof can be found in Appendix B. It is very important to remark that there is of course no problem in constructing time evolution operators in the inhomogeneous cases by specializing the spectral parameter of the $\SQ$-operator in a suitable way. We are just not able to represent the time evolution as simply as in \rf{Hirota}. One will still have a lattice approximation to the time evolution in the continuum field theory as long as the inhomogeneity parameters are scaled to unity in the continuum limit. \section{Separation of variables I --- Statement of results}\label{SOV} \setcounter{equation}{0} The Separation of Variables (SOV) method of Sklyanin \cite{Sk1}-\cite{Sk3} as developed for the lattice Sine-Gordon model in this section will allow us to take an important step towards the simultaneous diagonalization of the $\ST$- and $\SQ$-operators.
The separation of variables method is based on the observation that the spectral problem for $\ST(\la)$ simplifies considerably if one works in an auxiliary representation where the commutative family $\SB(\la)$ of operators introduced in \rf{ABCD} is diagonal. In the following subsection we will discuss a family of representations of the Yang-Baxter algebra \rf{YBA} that has this property. We will refer to this class of representations as the SOV-representations. We will subsequently show that our original representation introduced in \rf{Mdef}, \rf{Lax} is indeed equivalent to a certain SOV-representation. \subsection{The SOV-representation} The operators representing \rf{YBA} in the SOV-representation relevant for the case of a lattice with $\SRN$ sites will be denoted as \begin{equation} \SM^{\rm\sst SOV}(\la)=\left(\begin{matrix}\SA_\SRN(\la) & \SB_\SRN(\la)\\ \SC_\SRN(\la) & \SD_\SRN(\la)\end{matrix}\right)\,. \end{equation} We will now describe the representation of the algebra \rf{YBA} in which $\SB_\SRN(\la)$ acts diagonally. \subsubsection{The spectrum of $\SB_\SRN(\la)$} By definition, we require that $\SB_\SRN(\la)$ is represented by a diagonal matrix. In order to parameterize the eigenvalues, let us fix a tuple ${\mathbf \zeta}=(\zeta_1^{},\dots,\zeta_\SRN^{})$ of complex numbers such that $\zeta_a^p\neq\zeta_b^p$ for $a\neq b$. The vector space $\BC^{p^\SRN}$ underlying the SOV-representation will be identified with the space of functions $\Psi(\eta)$ defined for $\eta$ taken from the discrete set \begin{equation} {\mathbb B_\SRN}\,\equiv\,\big\{\,(q^{k_1}\zeta_1,\dots,q^{k_\SRN}\zeta_\SRN)\,;\,(k_1,\dots,k_\SRN)\in\BZ_p^\SRN\,\big\}\,. 
\end{equation} The SOV-representation is characterized by the property that $\SB(\la)$ acts on the functions $\Psi(\eta)$, $\eta=(\eta_1,\dots,\eta_\SRN)\in\BB_\SRN$ as a multiplication operator, \begin{equation}\label{Bdef} \SB_\SRN(\la)\,\Psi(\eta)\,=\,\eta_\SRN^{{\rm e}_\SRN}\,b_\eta(\la)\,\Psi(\eta)\,,\qquad b_\eta(\la)\,\equiv\, \prod_{n=1}^{\SRN}\frac{\kappa _{n}}{i}\prod_{a=1}^{[\SRN]}\left( \la/\eta_a-\eta_a/\la\right)\,; \end{equation} where $[\SRN]\equiv\SRN-\en$. We see that $\eta_1^{},\dots,\eta_{[\SRN]}^{}$ represent the zeros of $b_\eta(\la)$. In the case of even $\SRN$ it turns out that we need a supplementary variable $\eta_\SRN$ in order to be able to parameterize the spectrum of $\SB(\la)$. \subsubsection{Representation of the remaining operators} Given that $\SB_\SRN(\la)$ is represented as in \rf{Bdef}, it can be shown \cite{Sk1}-\cite{Sk3}\footnote{See \cite{BT} for the case of the Sinh-Gordon model which is very similar to the case at hand.} that the representation of the remaining operators $\SA_\SRN(\la)$, $\SC_\SRN(\la)$ and $\SD_\SRN(\la)$ is to a large extent determined by the algebra \rf{YBA}. First note (see e.g. \cite[Appendix C.2]{BT} for a proof) that the so-called quantum determinant \begin{equation}\label{qdetdef} {\rm det_q}(\SM(\la))\,\equiv\, \SA(\la)\SD(q^{-1}\la)-\SB(\la)\SC(q^{-1}\la) \end{equation} generates central elements of the algebra \rf{YBA}. In the representation defined by \rf{Mdef}, \rf{Lax} we find that $\la^{2\SRN}{\rm det_q}(\SM(\la))$ is a polynomial in $\la^2$ of order $2\SRN$. We therefore require that \begin{equation}\label{detcond} \SA_\SRN(\la)\SD_\SRN(q^{-1}\la)-\SB_\SRN(\la)\SC_\SRN(q^{-1}\la)\,=\,\Delta_\SRN(\la)\cdot {\rm id}\,, \end{equation} with $\la^{2\SRN}\Delta_\SRN(\la)$ being a polynomial in $\la^2$ of order $2\SRN$.
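The centrality of the quantum determinant can be made completely explicit on a single site: a short computation with \rf{Lax} in the representation \rf{reprdef} gives ${\rm det_q}(L^{\rm\sst SG}_n(\la))=\kappa_n^2\big(\kappa_n^2+\kappa_n^{-2}+q^{-1}\la_n^2+q\,\la_n^{-2}\big)\cdot{\rm id}$, with $\la_n=\la/\xi_n$. The following numerical sketch (arbitrary illustrative parameter values, $q=e^{2\pi i/p}$) confirms this:

```python
import numpy as np

p = 5
q = np.exp(2j * np.pi / p)             # illustrative primitive p-th root of unity
qs = np.exp(1j * np.pi / p)            # a fixed square root of q
kap, x = 0.8, 0.9 + 0.3j               # kappa_n and lambda_n = lambda/xi_n
u0, v0 = np.exp(0.3j), np.exp(0.7j)    # classical expectation values u_n, v_n

# one-site operators from (reprdef): u diagonal, v a twisted cyclic shift
U = u0 * np.diag([q**k for k in range(p)])
V = np.zeros((p, p), dtype=complex)
for k in range(p):
    V[k, (k - 1) % p] = v0
Ui, Vi = np.linalg.inv(U), np.linalg.inv(V)

def lax(x):
    """Entries of the one-site Lax matrix (Lax) as p x p matrices, x = lambda_n."""
    pref = kap / 1j
    A = pref * 1j * U @ (kap / qs * V + qs / kap * Vi)
    B = pref * (x * V - Vi / x)
    C = pref * (x * Vi - V / x)
    D = pref * 1j * Ui @ (qs / kap * V + kap / qs * Vi)
    return A, B, C, D

A1, B1, _, _ = lax(x)
_, _, C2, D2 = lax(x / q)
detq = A1 @ D2 - B1 @ C2               # quantum determinant (qdetdef), one site

scalar = kap**2 * (kap**2 + kap**-2 + x**2 / q + q / x**2)
assert np.allclose(detq, scalar * np.eye(p))
```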
The algebra \rf{YBA} furthermore implies that $\SA_\SRN(\la)$ and $\SD_\SRN(\la)$ can be represented in the form \begin{align}\label{SAdef} \SA_\SRN(\la)\,=\,&\,{\rm e}_{\SRN}^{}\,b_\eta(\la)\left[ \frac{\la}{\eta_\SA^{}}\ST^+_\SRN-\frac{\eta_\SA^{}}{\la}\ST^-_\SRN\right] +\sum_{a=1}^{[\SRN]}\prod_{b\neq a}\frac{\la/\eta_b-\eta_b/\la}{\eta_a/\eta_b-\eta_b/\eta_a} \,a_\SRN^{}(\eta_a)\,\ST_a^-\,,\\ \SD_\SRN(\la)\,=\,&\,{\rm e}_{\SRN}^{}\,b_\eta(\la)\left[\frac{\la}{\eta_\SD^{}}\ST^-_\SRN-\frac{\eta_\SD^{}}{\la}\ST^+_\SRN\right] +\sum_{a=1}^{[\SRN]}\prod_{b\neq a}\frac{\la/\eta_b-\eta_b/\la}{\eta_a/\eta_b-\eta_b/\eta_a} \,d_\SRN^{}(\eta_a)\,\ST_a^+\,, \label{SDdef}\end{align} where $\ST_a^{\pm}$ are the operators defined by \[ \ST_a^{\pm}\Psi(\eta_1,\dots,\eta_\SRN)=\Psi(\eta_1,\dots,q^{\pm 1}\eta_a,\dots,\eta_\SRN)\,. \] The expressions \rf{SAdef} and \rf{SDdef} contain complex-valued coefficients $\eta_\SA^{}$, $\eta_\SD^{}$, $a_\SRN(\eta_r)$ and $d_\SRN(\eta_r)$. The coefficients $a_\SRN(\eta_r)$ and $d_\SRN(\eta_r)$ are restricted by the condition \begin{equation}\label{addet} \Delta_\SRN(\eta_r)\,=\, a_\SRN(\eta_r)d_\SRN(q^{-1}\eta_r)\,, \quad\forall r=1,\dots,\SRN\,, \end{equation} as follows from the consistency of \rf{detcond}, \rf{Bdef}, \rf{SAdef} and \rf{SDdef}. This leaves some freedom in the choice of $a_\SRN(\eta_r)$ and $d_\SRN(\eta_r)$ that will be further discussed later. The operator $\SC_\SRN(\la)$ is finally uniquely\footnote{Note that the operator $\SB_\SRN(\la)$ is invertible except when $\la$ coincides with a zero of $\SB_\SRN$, so in general $\SC_\SRN(\la)$ is defined by \rf{detcond} simply by inverting $\SB_\SRN(\la)$. This is enough to fix the operator $\SC_\SRN$ in a unique way, since it is a Laurent polynomial of degree $[\SRN]$ in $\la$.} defined such that the quantum determinant condition \rf{detcond} is satisfied. \subsubsection{Central elements} \label{center} For the representations in question, the algebra \rf{YBA} has a large center.
For its description let us, following \cite{Ta}, define the average value $\CO$ of the elements of the monodromy matrix $\SM^{\rm\sst SOV}(\la)$ as \begin{equation}\label{avdef} \CO(\Lambda)\,=\,\prod_{k=1}^{p}\SO(q^k\la)\,,\qquad \Lambda\,=\,\la^p, \end{equation} where $\SO$ can be $\SA_\SRN$, $\SB_\SRN$, $\SC_\SRN$ or $\SD_\SRN$. \begin{propn} The average values $\CA_\SRN(\Lambda)$, $\CB_\SRN(\Lambda)$, $\CC_\SRN(\Lambda)$, $\CD_\SRN(\Lambda)$ of the monodromy matrix $\SM(\la)$ elements are central elements. \end{propn} The Proposition is proven in \cite{Ta}, see Subsection \ref{Avvalapp} for an alternative proof. The average values are of course unchanged by similarity transformations. They therefore represent parameters of the representation. Let us briefly discuss how these parameters are related to the parameters of the SOV-representation introduced above. First, let us note that $\CB_\SRN(\Lambda)$ is easily found from \rf{Bdef} to be given by the formula \begin{equation}\label{CB} \CB_\SRN(\Lambda)\,=\,Z_{{\SRN}}^{{\rm e}_{\SRN}} \prod_{n=1}^{\SRN}\frac{K_n}{i^p}\prod_{a=1}^{{[}\SRN{]}}(\Lambda/Z_a-Z_a/\Lambda)\,,\qquad \begin{aligned} & Z_a\equiv \eta_a^p\,,\\ & K_a\equiv\kappa_a^p\,. \end{aligned} \end{equation} The values $\CA_\SRN^{}(Z_r)$ and $\CD_\SRN^{}(Z_r)$ are related to the coefficients $a_\SRN(q^k\eta_r)$ and $d_\SRN(q^k\eta_r)$ by \begin{equation}\label{ADaver} \CA_\SRN^{}(Z_r)\,\equiv\,\prod_{k=1}^{p}a_\SRN(q^k\eta_r)\,,\qquad \CD_\SRN^{}(Z_r)\,\equiv\,\prod_{k=1}^{p}d_\SRN(q^k\eta_r)\,. \end{equation} Note that the condition \rf{addet} leaves some remaining arbitrariness in the choice of the coefficients $a_\SRN(\eta)$, $d_\SRN(\eta)$. 
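The Proposition can be illustrated already on a one-site chain: the average value \rf{avdef} of the off-diagonal entry $\SB(\la)$ of \rf{Lax} collapses, eigenvalue by eigenvalue of $\sv$, to the scalar $(K/i^p)\big(\Lambda V/X-X/(\Lambda V)\big)$ with $K=\kappa^p$, $X=\xi^p$, $V=v^p$ and $\Lambda=\la^p$. A numerical sketch (illustrative parameter values):

```python
import numpy as np

p = 5
q = np.exp(2j * np.pi / p)             # illustrative primitive p-th root of unity
kap, xi, lam = 0.8, 1.2, 0.7 + 0.5j    # arbitrary one-site parameters
v0 = np.exp(0.4j)                      # classical expectation value of v_1

# v as a twisted cyclic shift, so that v^p = v0^p * id
V = np.zeros((p, p), dtype=complex)
for k in range(p):
    V[k, (k - 1) % p] = v0
Vi = np.linalg.inv(V)

def B(la):
    """Off-diagonal entry of the one-site Lax matrix (Lax), lambda_1 = la/xi."""
    x = la / xi
    return (kap / 1j) * (x * V - Vi / x)

# average value (avdef): product over the shifted spectral parameters q^k * lam
Bavg = np.eye(p, dtype=complex)
for k in range(1, p + 1):
    Bavg = Bavg @ B(q**k * lam)

# centrality: the average is proportional to the identity
Lam = lam**p
scalar = (kap**p / 1j**p) * (Lam * v0**p / xi**p - xi**p / (Lam * v0**p))
assert np.allclose(Bavg, scalar * np.eye(p))
```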
The gauge transformations \begin{equation} \Psi(\eta)\,\equiv\,\prod_{r=1}^{\SRN}f(\eta_r)\Psi'(\eta)\,, \end{equation} induce a change of coefficients \begin{equation}\label{gauge} a_\SRN'(\eta_r)\,=\,a_\SRN(\eta_r)\frac{f(q^{-1}\eta_r)}{f(\eta_r)}\,, \qquad d_\SRN'(\eta_r)\,=\,d_\SRN(\eta_r)\frac{f(q^{+1}\eta_r)}{f(\eta_r)}\,, \end{equation} but clearly leave $\CA_\SRN^{}(Z_r)$ and $\CD_\SRN^{}(Z_r)$ unchanged. The data $\CA_\SRN^{}(Z_r)$ and $\CD_\SRN^{}(Z_r)$ therefore characterize gauge-equivalence classes of representations for $\SA_\SRN(\la)$ and $\SD_\SRN(\la)$ in the form \rf{SAdef}. \subsection{Existence of SOV-representation for the lattice Sine-Gordon model} \label{SOVex} We are looking for an invertible transformation $\SW^{\rm\sst SOV}$ that maps the lattice Sine-Gordon model defined in the previous sections to a SOV-representation, \begin{equation}\label{inter} (\SW^{\rm\sst SOV})^{-1}\cdot\SM^{\rm\sst SOV}(\la)\cdot\SW^{\rm\sst SOV}\,=\,\SM(\la)\,. \end{equation} Constructing $\SM^{\rm\sst SOV}(\la)$ is of course equivalent to the construction of a basis for $\CH$ consisting of eigenvectors $\langle\,\eta\,|$ of $\SB(\la)$, \begin{equation} \langle\,\eta\,|\,\SB(\la)\,=\,\eta_\SRN^{{\rm e}_\SRN}b_\eta(\la)\,\langle\,\eta\,|\,. \end{equation} The transformation $\SW^{\rm\sst SOV}$ is then described in terms of $\langle \,\eta\,|\,z\,\rangle$ as \begin{equation}\label{Ws} (\SW^{\rm\sst SOV}\psi)(\eta)\,=\,\sum_{z\in(\BS_p)^{\SRN}} \,\langle \,\eta\,|\,z\,\rangle\,\psi(z)\,. \end{equation} The existence of an eigenbasis for $\SB(\la)$ is not trivial since $\SB(\la)$ is not a normal operator. It turns out that such a similarity transformation exists for generic values of the parameters $u$, $v$, $\xi$ and $\kappa $. 
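The invariance of the average values \rf{ADaver} under the gauge transformations \rf{gauge} is a telescoping effect: the factors $f(q^{k-1}\eta_r)/f(q^k\eta_r)$ cancel around a full cycle. A minimal numerical sketch (random values on one cycle $\{q^k\eta_r\}$):

```python
import numpy as np

rng = np.random.default_rng(2)
p = 5

# values of a coefficient a_N and of a gauge function f on the cycle
# {q^k eta_r : k = 0, ..., p-1}
a = rng.normal(size=p) + 1j * rng.normal(size=p)
f = rng.normal(size=p) + 1j * rng.normal(size=p)

# gauge transformation (gauge): a'(q^k eta_r) = a(q^k eta_r) f(q^{k-1} eta_r)/f(q^k eta_r)
a_prime = a * np.roll(f, 1) / f

# average values (ADaver) are unchanged: the f-factors telescope around the cycle
assert np.isclose(np.prod(a), np.prod(a_prime))
```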
\begin{thm} \label{SOVthm} $\;\;$ -- $\;\;${\bf Existence of SOV-representation for the lattice Sine-Gordon model}$\;\;$ --\\[1ex] For generic values of the parameters $u$, $v$, $\xi$ and $\kappa$ there exists an invertible operator $\SW^{\rm\sst SOV}:\CH\ra\CH^{\rm\sst SOV}$ which satisfies \rf{inter}. \end{thm} The proof is given in the following Section \ref{SOVapp}. It follows from \rf{SAdef}, \rf{SDdef} that the wave-functions $\Psi(\eta)=\langle\,\eta\,|\,t\,\rangle$ of eigenstates $|\,t\,\rangle$ must satisfy the discrete Baxter equations \begin{equation}\label{SOVBax1} t(\eta_n)\Psi(\eta)\,=\,a(\eta_n)\ST_n^-\Psi(\eta) +d(\eta_n)\ST_n^+\Psi(\eta)\,, \end{equation} where $n=1,\dots,\SRN$. Equation \rf{SOVBax1} represents a system of $p^\SRN$ linear equations for the $p^\SRN$ different components $\Psi(\eta)$ of the vector $\Psi$. It may be written in the form ${D}_t\cdot\Psi=0$, where ${D}_t$ is a $p^\SRN\times p^\SRN$-matrix that depends on $t=t(\la)$. The condition for existence of solutions ${\rm det}{D}_t=0$ is a polynomial equation of order $p^\SRN$ on $t(\la)$. We therefore expect to find $p^\SRN$ different solutions, just enough to get a basis for $\CH$. We will return to the analysis of the spectral problem of $\ST(\la)$ in Section \ref{Spec}. Let us now describe more precisely the set of values of the parameters for which a SOV-representation exists. \subsection{Calculation of the average values} Necessary condition for the existence of $\SW^{\rm\sst SOV}$ is of course the equality \begin{equation}\label{CM=CM} \CM(\Lambda)\,=\,\CM^{\rm\sst SOV}(\Lambda)\,, \end{equation} of the matrices formed out of the average values of $\SM(\lambda)$ and $\SM^{\rm\sst SOV}(\lambda)$, respectively. 
It turns out that $\CM(\Lambda)$ can be calculated recursively from the average values of the elements of the Lax matrices $L_n^{\rm\sst SG}(\la)$, which are explicitly given by \begin{align}\label{L_n} \CL_n(\Lambda)& \,=\,\frac{1}{i^p} \left( \begin{matrix} i^p U_n^{}(K_n^{2}V_n^{}+V_n^{-1}) & K_n(\Lambda V_n/X_n-X_n/V_n\Lambda) \\ K_n(\Lambda/ X_nV_n-X_nV_n/\Lambda) & i^p U_n^{-1}(K_n^{2}V_n^{-1}+V_n^{}) \end{matrix} \right)\,, \end{align} where we have used the notations $K_n=\kappa_n^p$, $X_n=\xi_n^p$, $U_n=u_n^p$ and $V_n=v_n^p$. Indeed, we have: \begin{propn}\label{Avrec} We have \begin{align}\label{RRel1a} \CM_{\SRN}^{}(\Lambda)\,=\, \CL_{\SRN}^{}(\Lambda)\,\CL_{\SRN-1}^{}(\Lambda)\,\dots\,\CL_1^{}(\Lambda)\,. \end{align} \end{propn} This has been proven in \cite{Ta}, see Subsection \ref{Avvalapp} for an alternative proof. The equality \rf{CM=CM} defines the mapping between the parameters $u,v,\kappa$ and $\xi$ of the representation defined in Subsection \ref{T-op} and the parameters of the SOV-representation. Formula \rf{RRel1a} in particular allows us to calculate $\CB(\Lambda)$ in terms of $u,v,\kappa$ and $\xi$. Equation \rf{CB} then defines the numbers $Z_a\equiv\eta_a^p$ uniquely up to permutations of $a=1,\dots,[\SRN]$. Existence of a SOV-representation in particular requires that $Z_a\neq Z_b$ for all $a\neq b$, $a,b=1,\dots,[\SRN]$. It can be shown (see Subsection \ref{nondegapp} below) that the subspace of the space of parameters $u$, $v$, $\kappa$ and $\xi$ for which this is not the case has codimension at least one. Sufficient for the existence of a SOV-representation is the condition that the representations $\CR_\SRM$ exist for all $\SRM=1,\dots,\SRN-1$. \section{Separation of variables II --- Proofs}\label{SOVapp} \setcounter{equation}{0} We now prove Theorem \ref{SOVthm} by constructing a set of $p^\SRN$ linearly independent vectors $\langle\,\eta\,|$ which are eigenvectors of $\SB(\la)$ with distinct eigenvalues.
This will be equivalent to a recursive construction of the matrix of elements $\langle \,\eta\,|\,z\,\rangle$ and so of the invertible operator $\SW^{\rm\sst SOV}:\CH\ra\CH^{\rm\sst SOV}$ by relation \rf{Ws}. \subsection{Construction of an eigenbasis for $\SB(\la)$} We will construct the eigenstates $\langle\,\eta\,|$ of $\SB(\la)\equiv \SB_\srn(\la)$ recursively by induction on $\SRN$. The corresponding eigenvalues $B(\la)$ are parameterized by the tuple $\eta=(\eta_a)^{}_{a=1,\dots,\SRN}$ as \begin{equation}\label{Bparam} B(\la)\,=\,\eta_\SRN^{{\rm e}_\SRN}\,b_\eta(\la)\,,\qquad b_\eta(\la)\,\equiv\, \prod_{n=1}^{\SRN}\frac{\kappa _{n}}{i}\prod_{a=1}^{[\SRN]}\left( \la/\eta_a-\eta_a/\la\right)\,; \end{equation} We remind that $e_\srn$ is zero for $\SRN$ odd and $1$ for $\SRN$ even. In the case $\SRN=1$ we may simply take $\langle\,\eta_1\,|\,=\,\langle\, v\,|, $ where $\langle\, v\,|$ is an eigenstate of the operator $\sv_1$ with eigenvalue $v$. It is useful to note that the inhomogeneity parameter determines the subset of $\BC$ on which the variable $\eta_1$ lives, $\eta_1\in\xi_1\BS_p$. Now assume we have constructed the eigenstates $\langle\,\chi\,|$ of $\SB_{\srm}(\la)$ for any $\SRM<\SRN$. The eigenstates $\langle\,\eta\,|$, $\eta=(\eta_\srn,\dots,\eta_1)$, of $\SB_{\srn}(\la)$ may then be constructed in the following form \begin{equation} \langle\,\eta\,|\,=\,\sum_{\chi_{\1}^{}}\sum_{\chi_\2^{}}\,K_\srn^{}(\,\eta\,|\,\chi_{\2}^{};\chi_\1^{}\,)\, \langle \,\chi_{\2}^{}\,|\ot\langle\,\chi_\1^{}\,|\;, \end{equation} where $\langle \,\chi_{\2}^{}\,|$ and $\langle \,\chi_{\1}^{}\,|$ are eigenstates of $\SB_\srm(\la)$ and $\SB_{\srn-\srm}(\la)$ with eigenvalues parameterized as in \rf{Bparam} by the tuples $\chi_\2^{}=(\chi_{\2 a}^{})_{a=1,\dots ,\srm}^{}$ and $\chi_\1^{}=(\chi_{\1 a}^{})_{a=1,\dots,\srn-\srm}^{}$, respectively. It suffices to consider the cases where $\SRN-\SRM$ is odd. 
It follows from the formula \begin{equation}\begin{aligned} \SB_\srn(\la)&\,=\,\SA_\srm(\la)\ot\SB_{\srn-\srm}(\la)+{\SB}_\srm(\la)\ot\SD_{\srn-\srm}(\la)\\ &\,\equiv\,\SA_{\2 \ \srm}(\la)\SB_{\1 \ \srn-\srm}(\la)+{\SB}_{\2 \ \srm}(\la)\SD_{\1 \ \srn-\srm}(\la) \end{aligned} \end{equation} that the matrix elements $K_\srn(\,\eta\,|\,{\chi}_{\2}^{};{\chi}_\1^{}\,)$ have to satisfy the relations \begin{equation}\begin{aligned}\label{recrel} \big(\SA_{\2 \ \srm}(\la)\SB_{\1 \ \srn-\srm}(\la) +{\SB}_{\2 \ \srm}(\la)& \SD_{\1 \ \srn-\srm}(\la)\big)^t\,K_\srn^{}(\,\eta\,|\,\chi_{\2}^{};\chi_\1^{}\,) \\ &\,=\,\eta_\srn^{{\rm e}_\srn}\prod_{n=1}^{\SRN}\frac{\kappa _{n}}{i}\prod_{a=1}^{[\SRN]}\left( \la/\eta_a-\eta_a/\la\right)\,K_\srn^{}(\,\eta\,|\,\chi_{\2}^{};\chi_{\1}^{}\,)\,, \end{aligned}\end{equation} where we used the notation $\SO^t$ for the transpose of an operator $\SO$. Let us assume that \begin{equation} \chi_{\1a}q^{h_{1}}\notin \Delta _{\1}, \ \ \chi _{\2b}q^{h_{2}}\notin \Delta _{\2}\ \ \text{and} \ \ \chi _{\1a}q^{h_{1}}\neq \chi _{\2b}q^{h_{2}}, \end{equation} where $h_{i}\in \{1,\dots,p\}$, $a\in \{1,\dots,\SRN-\SRM\}$, $b\in \{1,\dots,\SRM\}$ and $\Delta _{i}$ is the set of zeros of the quantum determinant on the subchain $i$, with $i=\1,\2$. Under these assumptions\footnote{The subspace within the space of parameters where these conditions are not satisfied has codimension at least one.} the previous equations yield recursion relations for the dependence of the kernels on the variables $\chi_{\1 a}$ and $\chi_{\2 b}$ simply by setting $\la=\chi_{\1 a}$ and $\la=\chi_{\2 b}$.
Indeed for $\la=\chi_{\1 a}$ the first term on the left of \rf{recrel} vanishes leading to \begin{equation}\label{recrel1}\begin{aligned} {\ST_{\1 a}^{^-} K_\srn^{}(\,\eta\,|\,\chi_{\2}^{};\chi_{\1}^{}\,)}\;&{d_\1^{}(q^{-1}\chi_{\1a}^{})}\; \,\chi_{\2\srm}^{{\rm e}_\srm}\prod_{n=1}^{\SRN-\SRM} \frac{i}{\kappa _{n}}\prod_{b=1}^{[\SRM]}(\chi_{\1 a}^{}/\chi_{\2 b}^{}-\chi_{\2 b}^{}/\chi_{\1 a}^{})\, \\[-1ex] & \,=\, {K_\srn^{}(\,\eta\,|\,\chi_{\2}^{};\chi_{\1}^{}\,)}\; \,\eta_\srn^{{\rm e}_\srn}\prod_{b=1}^{[\SRN]}(\chi_{\1 a}^{}/\eta_b^{}-\eta_b^{}/\chi_{\1 a}^{}) \,, \end{aligned}\end{equation} while for $\la=\chi_{\2a}$ one finds similarly \begin{equation}\label{recrel2}\begin{aligned} {\ST_{\2 a}^{^+} K_\srn^{}(\,\eta\,|\,\chi_{\2}^{};\chi_{\1}^{}\,)}\; &{a_\2^{}(q^{+1}\chi_{\2a}^{})}\; \prod_{n=1}^{\SRM}\frac{i}{\kappa _{n}} \prod_{b=1}^{\SRN-\SRM}(\chi_{\2 a}^{}/\chi_{\1 b}^{}-\chi_{\1 b}^{}/\chi_{\2 a}^{} )\, \\[-1ex] &\,=\,{K_\srn^{}(\,\eta\,|\,\chi_{\2}^{};\chi_{\1}^{}\,)}\; \,\eta_\srn^{{\rm e}_\srn}\prod_{b=1}^{[\SRN]} (\chi_{\2 a}^{}/\eta_b^{}-\eta_b^{}/\chi_{\2 a}^{}) \,. \end{aligned}\end{equation} If $\SRM$ is even we find the recursion relation determining the dependence on $\chi_{\2\srm}$ by sending $\la\ra\infty$ in \rf{recrel}, leading to \begin{equation}\label{recrelzero} {\ST_{\2 \srm}^{^+} K_\srn^{}(\,\eta\,|\,\chi_{\2}^{};\chi_{\1}^{}\,)}\; \frac{1}{\chi_{\2 \srm}^{}}\,\prod_{a=1}^{\SRM-1}\frac{1}{\chi_{\2 a}^{}}\,\prod_{b=1}^{\SRN-\SRM}\frac{1}{\chi_{\1 b}^{}}\, \,=\,{K_\srn^{}(\,\eta\,|\,\chi_{\2}^{};\chi_{\1}^{}\,)}\; \prod_{b=1}^{\SRN}\frac{1}{\eta_b^{}} \,.
\end{equation} The recursion relations \rf{recrel1}, \rf{recrel2} have solutions compatible with the requirement of cyclicity, $({\ST_{\1 a}^{^-}})^p=1$ and $({\ST_{\2 a}^{^+}})^p=1$ for all values of $a$, provided that the algebraic equations \begin{equation}\label{Exrecrel1}\begin{aligned} & {D_\1^{}(\chi_{\1a}^{})}\; (\chi_{\2\srm}^{{\rm e}_\srm})^{p}\prod_{n=1}^{\SRN-\SRM} \frac{i^p}{\kappa _{n}^p}\prod_{b=1}^{[\SRM]} (\chi_{\1 a}^{p}/\chi_{\2 b}^{p}-\chi_{\2 b}^{p}/\chi_{\1 a}^{p})\, \,=\, (\eta_\srn^{{\rm e}_\srn})^{p}\prod_{b=1}^{[\SRN]}(\chi_{\1 a}^{p}/\eta_b^{p}-\eta_b^{p}/\chi_{\1 a}^{p})\,,\\ & {\rm where}\;\;D_\1(\chi_{\1 a})\, \equiv\,\prod_{k=1}^p d_\1(q^k\chi_{\1 a})\,, \end{aligned}\end{equation} and \begin{equation}\label{Exrecrel2}\begin{aligned} &{A_\2^{}(\chi_{\2 a})}\; \prod_{n=1}^{\SRM} \frac{i^p}{\kappa _{n}^p} \prod_{b=1}^{\SRN-\SRM}(\chi_{\2 a}^{p}/\chi_{\1 b}^{p}-\chi_{\1 b}^{p}/\chi_{\2 a}^{p} )\,=\, (\eta_\srn^{{\rm e}_\srn})^{p}\prod_{b=1}^{[\SRN]}(\chi_{\2 a}^{p}/\eta_b^{p}-\eta_b^{p}/\chi_{\2 a}^{p})\,,\\ & {\rm where}\;\;A_\2(\chi_{\2 a})\,\equiv\,\prod_{k=1}^p a_\2(q^k\chi_{\2 a}) \,, \end{aligned} \end{equation} are satisfied. If $\SRM$ is even the recursion relation \rf{recrelzero} yields the additional relation \begin{equation}\label{Exrecrel3} \frac{1}{\chi_{\2 \srm}^{p}}\,\prod_{a=1}^{\SRM-1}\frac{1}{\chi_{\2 a}^{p}}\, \prod_{b=1}^{\SRN-\SRM}\frac{1}{\chi_{\1 b}^{p}}\, \,=\, \prod_{b=1}^{\SRN}\frac{1}{\eta_b^{p}} \,. \end{equation} We will show in the next subsection that the equations \rf{Exrecrel1}-\rf{Exrecrel3} completely determine $\eta_a^p$ in terms of $\chi_{\2 a}^p$, $\chi_{\1 a}^p$.
By using \rf{CB} and \rf{AADD} it is easy to see that the conditions \rf{Exrecrel1} and \rf{Exrecrel2} are nothing but the equations \begin{equation}\label{Brecrel} \mathcal{B}_{\SRN}(\Lambda ) =\mathcal{A}_{\SRM}(\Lambda )\mathcal{B}_{\SRN-\SRM}(\Lambda )+\mathcal{B}_{\SRM}(\Lambda )\mathcal{D}_{\SRN-\SRM}(\Lambda ), \end{equation} evaluated at $\Lambda=\chi_{\1a}^p$ and $\Lambda=\chi_{\2a}^p$, respectively. The relation \rf{Exrecrel3} follows from \rf{Brecrel} in the limit $\la\ra\infty$. The relations \rf{Brecrel} are implied by \rf{RRel1a}. We conclude that our construction of $\SB(\la)$-eigenstates will work if the representations $\CR_{\SRN}$, $\CR_{\SRM}$ and $\CR_{\SRN-\SRM}$ are all non-degenerate. Theorem 2 follows by induction. \subsection{On average value formulae}\label{Avvalapp} \begin{propn} The average values of the Yang-Baxter generators are central elements which satisfy the following recursive equations: \begin{eqnarray} \mathcal{B}_{\SRN}(\Lambda ) &=&\mathcal{A}_{\SRM}(\Lambda )\mathcal{B}_{\SRN-\SRM}(\Lambda )+\mathcal{B}_{\SRM}(\Lambda )\mathcal{D}_{\SRN-\SRM}(\Lambda ), \label{average value-B} \\ \mathcal{C}_{\SRN}(\Lambda ) &=&\mathcal{D}_{\SRM}(\Lambda )\mathcal{C}_{\SRN-\SRM}(\Lambda )+\mathcal{C}_{\SRM}(\Lambda )\mathcal{A}_{\SRN-\SRM}(\Lambda ), \label{average value-C} \\ \mathcal{A}_{\SRN}(\Lambda ) &=&\mathcal{A}_{\SRM}(\Lambda )\mathcal{A}_{\SRN-\SRM}(\Lambda )+\mathcal{B}_{\SRM}(\Lambda )\mathcal{C}_{\SRN-\SRM}(\Lambda ), \label{average value-A} \\ \mathcal{D}_{\SRN}(\Lambda ) &=&\mathcal{D}_{\SRM}(\Lambda )\mathcal{D}_{\SRN-\SRM}(\Lambda )+\mathcal{C}_{\SRM}(\Lambda )\mathcal{B}_{\SRN-\SRM}(\Lambda ), \label{average value-D} \end{eqnarray} where $\SRN-\SRM$ or $\SRM$ is odd. \end{propn} \begin{proof} In the previous subsection we have proven the existence of SOV-representations, i.e. the diagonalizability of the $\SB$-operator.
First of all let us point out that $\SA(\lambda )$, $\SB(\lambda )$, $\SC(\lambda )$ and $\SD(\lambda )$ are one parameter families of commuting operators. This implies that the corresponding average values are functions of $\Lambda =\lambda ^{p}$. The fact that $\CB_\SRN(\Lambda)$ is central trivially follows from the fact that $\SB_\SRN(\la)$ is diagonal in the SOV-representation, while for the operators $\SA$ and $\SD$ we have that for $\SRN$ odd, $\mathcal{A}_{\SRN}(\Lambda )\Lambda ^{\SRN-1}$ and $\mathcal{D}_{\SRN}(\Lambda )\Lambda ^{\SRN-1}$ are polynomials in $\Lambda ^{2} $ of degree $\SRN-1$. It follows that the special values given by \rf{ADaver} characterize them completely, \begin{equation} \begin{aligned} \mathcal{A}_{\SRN}(\Lambda )&\,=\, \sum_{a=1}^{[\SRN]}\prod_{b\neq a}\frac{(\Lambda/Z _{b}-Z _{b}/\Lambda )}{(Z_{a}/Z_{b}-Z _{b}/Z_{a})}\emph{A}_{\SRN}(Z_{a})\,,\\ \mathcal{D}_{\SRN}(\Lambda )&\,=\, \sum_{a=1}^{[\SRN]}\prod_{b\neq a} \frac{(\Lambda/Z_{b}-Z_{b}/\Lambda)}{(Z_{a}/Z_{b}-Z_{b}/Z_{a})} \emph{D}_{\SRN}(Z_{a}), \end{aligned} \end{equation} where $\emph{A}_{\SRN}(Z_{a})$ and $\emph{D}_{\SRN}(Z_{a})$ are the average values of the coefficients of the SOV-representation. In the case of $\SRN$ even we just have to add the asymptotic property of $\mathcal{A}_{\SRN}(\Lambda )$ and $\mathcal{D}_{\SRN}(\Lambda )$\ discussed in appendix \ref{Asymp-A-D} to complete the statement. Finally, the fact that $\mathcal{C}_{\SRN}(\Lambda )$ is central follows by its diagonalizability in the cyclic representations. Now the above recursive formulae (\ref{average value-B}-\ref{average value-D}) are a simple consequence of the centrality of the average values of the monodromy matrix elements. Let us consider only the case of the average value of $\SA_{\SRN}(\lambda )$.
We have the expansion: \begin{equation} \SA_{\SRN}(\lambda )=\SA_{\2 \ \SRM}(\lambda )\SA_{\1 \ \SRN-\SRM}(\lambda )+\SB_{\2 \ \SRM}(\lambda )\SC_{\1 \ \SRN-\SRM}(\lambda ), \label{A-expan} \end{equation} in terms of the entries of the monodromy matrix of the subchains $\1$ and $\2$ with $(\SRN-\SRM)$-sites and $\SRM$-sites, respectively. It follows directly from definition \rf{avdef} of the average value together with \rf{A-expan} that $\mathcal{A}_{\SRN}(\Lambda )$ can be represented in the form \begin{equation} \mathcal{A}_{\SRN}(\Lambda )=\mathcal{A}_{\2 \ \SRM}(\Lambda )\mathcal{A}_{\1 \ \SRN-\SRM}(\Lambda )+\mathcal{B}_{\2 \ \SRM}(\Lambda )\mathcal{C}_{\1 \ \SRN-\SRM}(\Lambda )+\Delta _{\SRN}(\lambda ) \label{Central-D} \end{equation} where $\Delta _{\SRN}(\lambda )$ is a sum over monomials which contain at least one and at most $p-2$ factors of $\SA_{\2 \ \SRM}(\la q^m)$. As before, we may work in a representation where the $\SB_{\2 \ \SRM}(\la q^n)$ are diagonal, spanned by the states $\langle\,\chi_\2\,|$ introduced in the previous subsection. As the factors $\SA_{\2 \ \SRM}(\la q^m)$ contained in $\Delta _{\SRN}(\lambda )$ produce states with modified eigenvalue of $\SB_{\2 \ \SRM}(\la q^n)$, none of the states produced by acting with $\Delta _{\SRN}(\lambda )$ on $\langle\,\chi_\2\,|$ can be proportional to $\langle\,\chi_\2\,|$. This would be in contradiction to the fact that $\mathcal{A}_{\SRN}(\Lambda )$ is central unless $\Delta _{\SRN}(\lambda )=0$. \end{proof} \subsection{Non-degeneracy condition}\label{nondegapp} \begin{propn} \label{B-simplicity}The condition $Z_{r}=Z_{s}$ for certain $r\neq s$ with $r,s\in \{1,...,[\SRN]\}$ defines a subspace in the space of the parameters $\{\kappa_{1},...,\kappa_{\SRN},\xi_{1},...,\xi_{\SRN}\}\in \mathbb{C}^{2\SRN}$ of codimension at least one.
\end{propn} \begin{proof} The parameters $Z_r$ are related to the expectation value $\CB_\SRN(\Lambda)$ by means of the equation \begin{equation}\label{CB'} \CB_\SRN(\Lambda)\,=\,Z_{{\SRN}}^{{\rm e}_{\SRN}} \prod_{n=1}^{\SRN}\frac{K_n}{i^p} \prod_{a=1}^{{[}\SRN{]}}(\Lambda/Z_a-Z_a/\Lambda)\,. \end{equation} It follows from \rf{RRel1a} and \rf{L_n} that $\CB_\SRN(\Lambda)$ is a Laurent polynomial in $X_n$ that depends polynomially on each of the parameters $K_n$. Equation \rf{CB'} defines the tuple $Z=(Z_1^{},\dots,Z_{[\SRN]}^{})$ uniquely up to permutations of $Z_1^{},\dots,Z_{[\SRN]}^{}$ as a function of the parameters $X=(X_1,\dots,X_\SRN)$ and $K=(K_1,\dots,K_\SRN)$. We are going to show that\footnote{It should be noted that for even $\SRN$ it is indeed sufficient to consider the dependence w.r.t. $X_1,\dots,X_{{\SRN}-1}$.} \begin{equation} J(X;K)\,\equiv\, {\rm det}\left(\frac{\pa Z_r}{\pa X_s}\right)_{r,s=1,\dots,[\SRN]}\neq\,0\,. \end{equation} The functional dependence\footnote{Let $\sigma_n^{{[\SRN]}}(Z)$ be the degree $n$ elementary symmetric polynomial in the variables $Z$, then $\sigma_n^{{[\SRN]}}(Z)/\sigma_{[\SRN]}^{[\SRN]}(Z)$ are Laurent polynomials of degree 1 in all the parameters $X$ and $K$.} of the $Z_1^{},\dots,Z_{[\SRN]}^{}$ w.r.t. the parameters $K$ implies that it is sufficient to show that $J(X;K)\neq 0$ for special values of $K$ in order to prove that $J(X;K)\neq 0$ except for values of $K$ within a subset of $\BC^\SRN$ of dimension less than $\SRN$. Let us choose $K_{n}=i^p$ for $n=1,...,[\SRN]$; then the average values \rf{L_n} of the Lax operators simplify to \begin{equation} \mathcal{L}_{n}^{SG}(\Lambda )=\left( \begin{array}{cc} 0 & \Lambda /X_{n}-X_{n}/\Lambda \\ \Lambda /X_{n}-X_{n}/\Lambda & 0 \end{array} \right) \,. \end{equation} Inserting this into \rf{RRel1a} yields \begin{equation}\label{CBsimp} \mathcal{B}_{\SRN}(\Lambda )\,=\,(K_\SRN^2+1)^{{\rm e}_\SRN} \prod_{n=1}^{[\SRN]}(\Lambda /X_{n}-X_{n}/\Lambda )\,.
\end{equation} For the case under consideration, the fact that $J(X;K)\neq 0$ follows easily from \rf{CBsimp}. Whenever $J(X;K)\neq 0$, we have invertibility of the mapping $Z=Z(X_1,\dots,X_{[\SRN]})$. The claim follows from this observation. \end{proof} \section{The spectrum --- odd number of sites}\label{Spec} \setcounter{equation}{0} Let us now return to the analysis of the spectrum of the model. For simplicity we will consider here the case of odd $\SRN$, while we will discuss the case of even $\SRN$ in the next section. The existence of the SOV-representation allows one to reformulate the spectral problem for $\ST(\la)$ as the problem of finding all solutions of the {\it discrete} Baxter equation \rf{SOVBax1}. This equation may be written in the form \begin{equation}\label{SOVBax} \CD_r\,\Psi(\eta)\,=\,0\,,\qquad \CD_r\equiv\,a(\eta_r)\ST_r^-+d(\eta_r)\ST_r^+-t(\eta_r)\,, \end{equation} where $r=1,\dots,\SRN$. Previous experience with the SOV method suggests considering the ansatz \begin{equation}\label{Qeigenstate1} \Psi(\eta)=\prod_{r=1}^{\SRN}Q_t(\eta_r)\,, \end{equation} where $Q_t(\la)$ is the eigenvalue of the corresponding $\SQ$-operator which satisfies the {\it functional} Baxter equation \begin{equation}\label{BaxEV2a} \begin{aligned} &t(\la)Q_t(\la)\,=\,{\tt a}_\SRN(\la)Q_t(q^{-1}\la)+ {\tt d}_\SRN(\la)Q_t(q\la)\,. \end{aligned}\end{equation} This approach will turn out to work, but in a way that is more subtle than in previously analyzed cases. \subsection{States from solutions of the Baxter equation} First, in the present case it is not immediately clear whether the functional Baxter equation \rf{BaxEV2a} and the discrete Baxter equation \rf{SOVBax} are compatible. The question is whether one can always assume that the coefficients $a(\eta_r)$ and $d(\eta_r)$ in \rf{SOVBax} coincide with the coefficients ${\tt a}_\SRN(\eta_r)$, ${\tt d}_\SRN(\eta_r)$ appearing in the functional equation \rf{BaxEV2a} satisfied by the $\SQ$-operator.
The key point to observe is contained in the following Lemma. \begin{lem}\label{A=A-B} Let ${\tt A}_\SRN^{}(\Lambda)$ and ${\tt D}_\SRN^{}(\Lambda)$ be the average values of the coefficients ${\tt a}_\SRN(\la)$ and ${\tt d}_\SRN(\la)$ of the Baxter equation \rf{BaxEV2a}, \begin{equation} {\tt A}_\SRN^{}(\Lambda)\,\equiv\,\prod_{k=1}^{p}{\tt a}_\SRN(q^k\lambda)\,,\qquad {\tt D}_\SRN^{}(\Lambda)\,\equiv\,\prod_{k=1}^{p}{\tt d}_\SRN(q^k\lambda)\,. \end{equation} We then have \begin{equation} {\tt A}_\SRN^{}(\Lambda)\,=\,\CA_\SRN(\Lambda)-\CB_\SRN(\Lambda)\,,\qquad {\tt D}_\SRN^{}(\Lambda)\,=\,\CA_\SRN(\Lambda)+\CB_\SRN(\Lambda)\,. \end{equation} \end{lem} \begin{proof} The claim is checked for $\SRN=1$ by straightforward computation. Let us assume now that the statement holds for $\SRN-1$ and let us show it for $\SRN$. The average values ${\tt A}_{\SRN}(\Lambda )$ and ${\tt D}_{\SRN}(\Lambda )$ satisfy by definition the factorization: \begin{equation} {\tt A}_{\SRN} (\Lambda )={\tt A}_{1}^{(\SRN)}(\Lambda ){\tt A}_{\SRN-1}^{(\SRN-1,...,1)}(\Lambda ),\text{ \ }{\tt D}_{\SRN} (\Lambda )={\tt D}_{1}^{(\SRN)}(\Lambda ){\tt D}_{\SRN-1}^{(\SRN-1,...,1)}(\Lambda ), \end{equation} where the upper indices refer to the quantum sites involved, while the lower indices refer to the total number of sites.
We can now use the induction hypothesis to get the result: \begin{align} {\tt A}_{\SRN} (\Lambda )& =(\mathcal{A}_{1}^{(\SRN)}(\Lambda )-\mathcal{B}_{1}^{(\SRN)}(\Lambda ))(\mathcal{A}_{\SRN-1}^{(\SRN-1,...,1)}(\Lambda )-\mathcal{B}_{\SRN-1}^{(\SRN-1,...,1)}(\Lambda ))=\mathcal{A}_{\SRN}(\Lambda )-\mathcal{B}_{\SRN}(\Lambda ), \\ {\tt D}_{\SRN} (\Lambda )& =(\mathcal{A}_{1}^{(\SRN)}(\Lambda )+\mathcal{B}_{1}^{(\SRN)}(\Lambda ))(\mathcal{A}_{\SRN-1}^{(\SRN-1,...,1)}(\Lambda )+\mathcal{B}_{\SRN-1}^{(\SRN-1,...,1)}(\Lambda ))=\mathcal{A}_{\SRN}(\Lambda )+\mathcal{B}_{\SRN}(\Lambda ), \end{align} where in the last formulae we have used \rf{RRel1a} together with the fact that $\CA_\SRN(\Lambda)=\CD_\SRN(\Lambda)$ and $\CB_\SRN(\Lambda)=\CC_\SRN(\Lambda)$ for $u_n=1$, $v_n=1$, $n=1,\dots,\SRN$. \end{proof} The Lemma implies in particular \begin{equation}\label{AADD} {\tt A}_\SRN^{}(Z_r)\,=\,\CA_\SRN(Z_r)\,,\qquad {\tt D}_\SRN^{}(Z_r)\,=\,\CD_\SRN(Z_r)\,, \end{equation} for all $r=1,\dots,\SRN$. We may therefore always find a gauge transformation \rf{gauge} such that the coefficients $a_\SRN^{}(\eta_r)$ and $d_\SRN^{}(\eta_r)$ in \rf{SOVBax} become equal to \begin{equation}\label{aadd} a_\SRN^{}(\eta_r)\,=\,{\tt a}_\SRN^{}(\eta_r)\,,\qquad d_\SRN^{}(\eta_r)\,=\,{\tt d}_\SRN^{}(\eta_r)\,, \end{equation} respectively. From now on we will therefore also denote the coefficients in \rf{SOVBax1} by ${\tt a}$ and ${\tt d}$, omitting the index $\SRN$ unless necessary. The ansatz \rf{Qeigenstate1} therefore indeed yields an eigenstate of $\ST(\la)$ for each solution $Q_t(\la)$ of the functional Baxter equation \rf{BaxEV}. We are going to show that {\it all} eigenstates can be obtained in this way. \subsection{Non-degeneracy of $\ST(\la)$-eigenvalues}\label{Compatib} In order to analyze the equations \rf{SOVBax}, let us note that the matrix representation of the operator $\CD_r$ defined in \rf{SOVBax} is block diagonal with blocks labeled by $n=1,\dots,\SRN$.
Let $\Psi_{n}(\eta)\in\BC^{p}$ be the vector with components \[ \Psi_{n,k}(\eta)\,=\, \Psi(\eta_1,\dots,\eta_{n-1},\zeta_nq^k,\eta_{n+1},\dots,\eta_\SRN)\,. \] Equation \rf{SOVBax} is then equivalent to the set of linear equations \begin{equation}\label{BAXmatrix} D^{(r)}\cdot\Psi_r(\eta)\,=\,0\,,\qquad r=1,\dots,\SRN\,. \end{equation} where $D^{(r)}$ is the $p\times p$-matrix \begin{equation}\label{D-matrix} \begin{pmatrix} t(\zeta_r) &-{\tt d}(\zeta_r)& 0 &\cdots & 0 & -{\tt a}(\zeta_r)\\ -{\tt a}(q\zeta_r)& t(q\zeta_r)&-{\tt d}(q\zeta_r)& 0 &\cdots & 0 \\ 0 & {\quad} \ddots & & & & \vdots \\ \vdots & & \cdots & & & \vdots \\ \vdots & & & \cdots & & \vdots \\ \vdots & & & & \ddots{\qquad} & 0 \\ 0&\ldots&0& -{\tt a}(q^{2l-1}\zeta_r)& t(q^{2l-1}\zeta_r) & -{\tt d}(q^{2l-1}\zeta_r)\\ -{\tt d}(q^{2l}\zeta_r) & 0 &\ldots & 0 & -{\tt a}(q^{2l}\zeta_r)& t(q^{2l}\zeta_r) \end{pmatrix} \end{equation} The equation \rf{BAXmatrix} can have solutions only if ${\rm det}(D^{(r)})=0$. The determinant ${\rm det}(D^{(r)})$ is a polynomial of degree $p$ in each of the $\SRN$ coefficients of the polynomial $t(\la)$. \begin{propn}\label{Simply1} Given that ${\rm det}(D^{(r)})=0$, the dimension of the space of solutions to the equation \rf{BAXmatrix} for any $r=1,\dots,\SRN\,$ is one for generic values of the parameters $\xi$ and $\kappa$. \end{propn} \begin{proof} Let us decompose the $p\times p$ matrix ${D}^{(r)}$ into the block form \begin{equation} {D}^{(r)}\,=\,\left(\begin{matrix} v^{(r)} & E^{(r)} \\ d^{(r)} & w^{(r)}\end{matrix}\right)\,, \end{equation} where the submatrix $E^{(r)}$ is a $(p-1)\times (p-1)$ matrix, $v^{(r)}$ and $w^{(r)}$ are column and row vectors with $p-1$ components, respectively. We assume that ${\rm det}(D^{(r)})=0$, so existence of a solution to ${D}^{(r)}\Psi=0$ is ensured. It is easy to see that the equation ${D}^{(r)}\Psi=0$ has a unique solution provided that ${\rm det}(E^{(r)})\neq 0$. 
It remains to show that ${\rm det}(E^{(r)})\neq 0$ holds for generic values of the parameters $\xi$ and $\kappa$. To this aim let us observe that the coefficients ${\tt a}(q^k\zeta_r^{})$ and ${\tt d}(q^k\zeta_r^{})$ appearing in \rf{BAXmatrix} depend analytically on the parameters $\kappa$. If ${\rm det}(E^{(r)})$ is not identically zero, it can therefore only vanish at isolated points. It therefore suffices to prove the statement in a neighborhood of the values for the parameters $\kappa$ which are such that \begin{equation}\label{adzero} {\tt a}(\zeta_r^{})\,=\,0\,,\qquad {\tt d}(q^{-1}\zeta_r^{})\,=\,0\,. \end{equation} Such values of $\kappa$ and $\xi$ exist: Setting $\kappa_n=\pm i$ for $n=1,\dots,\SRN$, one finds that \begin{equation} \CB_\SRN(\Lambda)\,=\, \prod_{n=1}^{\SRN}\left( \Lambda/X_n-X_n/\Lambda\right)\,, \end{equation} which vanishes for $\la=q^{\frac{1}{2}}\xi_n$. We may therefore choose\footnote{Note that this choice implies that $v_{n}\in (-1)^{p^{\prime }/2}q^{1/2}\BS_p$.} $\zeta_n=q^{\frac{1}{2}}\xi_n$. We then find \rf{adzero} from \rf{addef}, \rf{aadd}. Given that \rf{adzero} holds, it is easy to see that ${\rm det}(E^{(r)})\neq 0$. Indeed, the submatrix $E^{(r)}_{kl}$ is lower triangular if \rf{adzero} is valid, and it has $-{\tt d}(q^k\zeta_r^{})$, $k=0,\dots,p-2$ as its diagonal elements. It follows that ${\rm det}(E^{(r)})=\prod_{k=0}^{p-2}{\tt d}(q^k\zeta_r^{})$, which is always nonzero if \rf{adzero} is satisfied. \end{proof} The previous results admit the following reformulation which is central for the classification and construction of the spectrum of $\ST(\lambda )$: \begin{thm} For generic values of the parameters $\kappa $ and $\xi $ the spectrum of $\ST(\lambda )$ is simple and all the wave-functions $\Psi _{t}(\eta )$ can be represented in the factorized form \rf{Qeigenstate1} with $Q_{t}$ being the eigenvalue of the $\SQ$-operator on the eigenstate $|\,t\,\rangle $.
The eigenvectors $|\,t\,\rangle $ of $\ST(\la)$ are in one-to-one correspondence with the polynomials $Q_{t}(\la)$ of order $2l\SRN$, with $Q_{t}(0)\neq 0$, which satisfy the Baxter equation \rf{BaxEV} with $t(\la)$ being an even Laurent polynomial in $\lambda $ of degree $\SRN-1$. \end{thm} \begin{proof} Proposition \ref{Simply1} implies that the spectrum of $\ST(\lambda )$ is simple. Let $|\,t\,\rangle $ be an eigenstate of $\ST(\la)$. Self-adjointness and mutual commutativity of $\ST(\la)$ and $\SQ(\mu )$ imply that $|\,t\,\rangle $ is also an eigenstate of $\SQ(\la)$. Let $Q_{t}(\la)$ be the $\SQ$-eigenvalue on $|\,t\,\rangle $. The polynomial $Q_{t}(\la)$ is related to $t(\la)$ by the Baxter equation \rf{BaxEV} which, specialized to the values $\la=\eta _{r}$, yields the equations \rf{BAXmatrix}. It follows that there must exist nonzero numbers $\nu_{r}$ such that \begin{equation}\label{QvsPsi} Q_{t}^{}(\zeta _{r}q^{k})\,=\,\nu _{r}\Psi _{r,k}(\zeta _{1},\dots , \zeta _{\SRN})\,. \end{equation} This implies that the wave-functions $\Psi (\eta )$ can be represented in the form \rf{Qeigenstate1} with $Q_{t}$ being the eigenvalue of the $\SQ$-operator on the eigenstate $|\,t\,\rangle $. \end{proof} \begin{rem} It may be worth noting that the equivalence with the Fateev-Zamolodchikov model does not hold for an odd number of lattice sites. The spectrum of the two models is qualitatively different, being doubly degenerate in the Fateev-Zamolodchikov model but simple in the lattice Sine-Gordon model, as illustrated in Appendix \ref{FZ}. \end{rem} \subsection{Completeness of the Bethe ansatz} Assume we are given a solution $(\la_1,\dots,\la_{2l\SRN})$ of the Bethe equations \rf{BAE}. Let us construct the polynomial $Q(\la)$ via equation \rf{Qfromzeros}. Define \begin{equation}\label{TfromQ} t(\la)\,:=\,\frac{{\tt a}(\la)Q(q^{-1}\la)+{\tt d}(\la)Q(q\la)}{Q(\la)}\,.
\end{equation} $t(\la)$ is nonsingular for $\la=\la_k$, $k=1,\dots,2l\SRN$, thanks to the Bethe equations \rf{BAE}. The pairs $(Q(\eta_r),t(\eta_r))$ satisfy the discrete Baxter equation by construction. Inserting this solution into \rf{Qeigenstate1} produces an eigenstate $|\,t\,\rangle$ of the transfer matrix $\ST(\la)$ within the SOV-representation. Conversely, let $|\,t\,\rangle$ be an eigenvector of $\ST(\la)$ with eigenvalue $t(\la)$. Let $Q_t'(\la)$ be the eigenvalue of $\SQ(\la)$ on $|\,t\,\rangle$. Thanks to the properties of $\SQ(\la)$ listed in Theorem \ref{Qprop} one may factorize $Q_t'(\la)$ in the form \rf{Qfromzeros}. The tuple of zeros $(\la_1',\dots,\la_{2l\SRN}')$ of $Q_t'(\la)$ must satisfy the Bethe equations \rf{BAE}, as follows from the Baxter equation \rf{BAX} satisfied by $\SQ(\la)$. Inserting $Q_t'(\eta_r)$ into \rf{Qeigenstate1} produces an eigenstate $|\,t'\,\rangle$ that must be proportional to $|\,t\,\rangle$ due to the simplicity of the spectrum of $\ST(\la)$. It follows that there is a one-to-one correspondence between the solutions to \rf{BAE} and the eigenstates of the transfer matrix ({\it Completeness of the Bethe ansatz}). \section{The spectrum --- even number of sites}\label{ap-even} \setcounter{equation}{0} We will now generalize these results to the case of a chain with an even number $\SRN$ of sites. It turns out that the spectrum of $\ST(\la)$ is degenerate in this case, but the degeneracy is resolved by introducing an operator $\Theta$ which commutes both with $\ST(\la)$ and $\SQ(\la)$. The joint spectrum of $\ST(\la)$, $\SQ(\la)$ and $\Theta$ is found to be simple. \subsection{The $\Theta $-charge} In the case of a lattice with an even number $\SRN$ of quantum sites, we can introduce the operator: \renewcommand{\su}{{\mathsf u}} \renewcommand{\sv}{{\mathsf v}} \begin{equation} \Theta =\prod_{n=1}^{\SRN}\sv_{n}^{(-1)^{1+n}}.
\label{topological-charge} \end{equation} \begin{propn} \label{Lemma-Theta}$\Theta $ commutes with the transfer matrix and satisfies the following commutation relations with the entries of the monodromy matrix: \begin{eqnarray} \Theta \SC(\lambda ) &=&q\SC(\lambda )\Theta \text{, \ \ \ }[\SA(\lambda ),\Theta ]=0, \\ \SB(\lambda )\Theta &=&q\Theta \SB(\lambda ),\text{ \ \ }[\SD(\lambda ),\Theta ]=0. \end{eqnarray} \end{propn} \begin{proof} The claim can be easily verified explicitly for $\SRN=2$. The proof for the case of general even $\SRN=2\SRM$ follows by induction. Indeed, \begin{equation*} \SM_{\2\,2\SRM}\,\SM_{\1\,2(\SRN-\SRM)}\,=\,\left(\begin{matrix} \SA_{\2\,2\SRM}\,\SA_{\1\,2(\SRN-\SRM)}+\SB_{\2\,2\SRM}\,\SC_{\1\,2(\SRN-\SRM)} & \SA_{\2\,2\SRM}\,\SB_{\1\,2(\SRN-\SRM)}+\SB_{\2\,2\SRM}\,\SD_{\1\,2(\SRN-\SRM)} \\ \SC_{\2\,2\SRM}\,\SA_{\1\,2(\SRN-\SRM)}+\SD_{\2\,2\SRM}\,\SC_{\1\,2(\SRN-\SRM)} & \SC_{\2\,2\SRM}\,\SB_{\1\,2(\SRN-\SRM)}+\SD_{\2\,2\SRM}\,\SD_{\1\,2(\SRN-\SRM)} \end{matrix} \right)\,, \end{equation*} which easily allows one to deduce that the claim holds if it holds for all $\SRM<\SRN$. \end{proof} \subsection{$T$-$\Theta $-spectrum simplicity} \begin{lem} Let $k\in \{-l,...,l\}$ and $|t_{k}\rangle $ be a simultaneous eigenstate of the transfer matrix $\ST(\lambda )$ and of the $\Theta $-charge with eigenvalues $t_{|k|}(\lambda )$ and $q^{k}$, respectively; then $\lambda ^{\SRN}t_{|k|}(\lambda )$ is a polynomial in $\lambda ^{2}$ of degree $\SRN$ which is a solution of the system of equations \begin{equation} \det(D^{(r)})\,=\,0\text{ \ \ }\forall r\in \{1,...,[\SRN]\}\,, \label{system-t} \end{equation} where the $p\times p$ matrices $D^{(r)}$ are defined in \rf{D-matrix}, with asymptotics of $t_{|k|}(\la)$ given by: \begin{equation} \lim_{\log\lambda\rightarrow \pm\infty}\lambda ^{\mp\SRN}t_{|k|}(\lambda )=\left( \prod_{a=1}^{\SRN}\frac{\kappa _{a}\xi _{a}^{\mp 1}}{i}\right) \left( q^{k}+q^{-k}\right).
\label{asymptotics-t} \end{equation} \end{lem} \begin{proof} The fact that the generic eigenvalue of the transfer matrix has to satisfy the system \rf{system-t} has been discussed in Section \ref{Spec}; so we just have to verify the asymptotics (\ref{asymptotics-t}) for the $\ST$-eigenvalue $t_{|k|}(\lambda )$. This follows from the assumption that $|t_{k}\rangle $ is an eigenstate of $\Theta $ with eigenvalue $q^{k}$, and from the formula \begin{equation} \lim_{\log{\lambda} \rightarrow \pm\infty}\lambda ^{\mp\SRN}\ST(\lambda )=\left( \prod_{a=1}^{\SRN}\frac{\kappa _{a}\xi _{a}^{\mp 1}}{i}\right) \left( \Theta +\Theta ^{-1}\right) , \end{equation} derived in appendix \ref{Asymp-A-D}. \end{proof} The previous Lemma implies in particular the following: \begin{thm} For generic values of the parameters $\kappa $ and $\xi $ the simultaneous spectrum of the $\ST$ and $\Theta $ operators is simple and the generic eigenstate $|t_{k}\rangle$ of the $\ST$-$\Theta $-eigenbasis has a wave-function of the form \begin{equation}\label{Psi-factor} \Psi(\eta )=\eta _{\SRN}^{-k}\prod_{a=1}^{\SRN-1}\psi_{|k|}(\eta _{a}), \end{equation} where, for any $r\in \{1,...,\SRN-1\}$, the vector $(\psi_{|k|}(\zeta _{r}),\psi_{|k|}(\zeta _{r}q),...,\psi_{|k|}(\zeta _{r}q^{2l}))$ is the unique (up to normalization) solution of the linear equations (\ref{BAXmatrix}) corresponding to $t_{|k|}(\lambda )$. \end{thm} \begin{proof} Let us use the SOV-construction of $\ST$-eigenstates and let us observe that an analog of Proposition \ref{Simply1} also holds\footnote{The proof given previously holds for both cases, $\SRN$ even and odd, upon changing $\SRN$ into $[\SRN]$ everywhere.} for even $\SRN$. This implies that the wave-function $\Psi(\eta )$ can be represented in the form \begin{equation} \Psi(\eta )=f_{t_{k}}(\eta _{\SRN})\prod_{a=1}^{\SRN-1}\psi_{|k|}(\eta _{a})\,.
\end{equation} Finally, using that $|t_{k}\rangle $ is an eigenstate of $\Theta $ with eigenvalue $q^{k}$ we get $f_{t_{k}}(\eta _{\SRN})\propto \eta _{\SRN}^{-k}$. \end{proof} Thanks to the explicit construction of the simultaneous $\ST$-$\Theta $ eigenstates given in \rf{Psi-factor}, we have that the eigenstates of $\ST(\lambda )$ with $\Theta $-charge eigenvalue 1 are simple, while all the others are doubly degenerate with eigenspaces generated by a pair of $\ST$-eigenstates with $\Theta $-charge eigenvalues $q^{\pm k}$. \subsection{$\SQ$-operator and Bethe ansatz} Let us point out a peculiarity of the $\SQ$-operator in the case of an even chain. In order to see this, we need the following Lemma, which is of interest in its own right. \begin{lem}\label{Unilem} For a given $t(\la)$, there is at most one polynomial of degree $2l\SRN$ which satisfies the Baxter equation \rf{BaxEV}. \end{lem} \begin{proof} Let us define the q-Wronskian \begin{equation}\label{q-W} W(\la)\,=\,Q_1(\la)Q_2(q^{-1}\la)-Q_2(\la)Q_1(q^{-1}\la)\,, \end{equation} written in terms of two solutions $Q_{1}(\lambda )$ and $Q_{2}(\lambda )$ of the Baxter equation; then $W(\lambda)$ satisfies the equation \begin{equation} {\tt a}(\lambda )\,W(\lambda )\,=\,{\tt d}(\lambda )\,\ST^{+}W(\lambda )\,. \end{equation} Note now that Lemma \ref{A=A-B} implies \begin{equation} \prod_{k=0}^{2l}{\tt a}(\lambda q^{k})\,\neq \,\prod_{k=0}^{2l}{\tt d}(\lambda q^{k}) \text{ \ \ \ }\forall \lambda \notin \mathbb{B}_{\SRN}, \label{degcond} \end{equation} so for any $\lambda \notin \mathbb{B}_{\SRN}$ the only solution consistent with cyclicity $(\ST^{+})^{p}=1$ is $W(\lambda )\equiv 0$.
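Let us note, for the reader's convenience, that the first-order equation for $W(\la)$ used above follows from the Baxter equation \rf{BaxEV} by a one-line computation: writing the Baxter equation in the form ${\tt a}(\la)Q_i(q^{-1}\la)\,=\,t(\la)Q_i(\la)-{\tt d}(\la)Q_i(q\la)$ for $i=1,2$, one finds \begin{equation*}\begin{aligned} {\tt a}(\la)\,W(\la)\,&=\,Q_1(\la)\,{\tt a}(\la)Q_2(q^{-1}\la)-Q_2(\la)\,{\tt a}(\la)Q_1(q^{-1}\la)\\ &=\,{\tt d}(\la)\big(Q_1(q\la)Q_2(\la)-Q_2(q\la)Q_1(\la)\big)\,=\,{\tt d}(\la)\,\ST^{+}W(\la)\,, \end{aligned}\end{equation*} since the terms proportional to $t(\la)Q_1(\la)Q_2(\la)$ cancel.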
It is then easy to see that this implies that $Q_1(\la)=Q_2(\la)$.\end{proof} Now we can prove the following: \begin{propn} The $\SQ$-operators commute with the $\Theta $-charge and $|t_{\pm |k|}\rangle $ are $\SQ$-eigenstates with common eigenvalue $Q_{|k|}(\lambda )$ of degree $2l\SRN-k(a_{\infty }^{\pm }p\pm 1)$ in $\lambda $ and a zero of order $k(a_{0}^{\pm }p\pm 1)$ at $\lambda =0$, where $a_{0}^{+}$ and $a_{\infty }^{+}$ are non-negative integers, while $a_{0}^{-}$ and $a_{\infty }^{-}$ are positive integers. \end{propn} \begin{proof} The commutativity of the $\ST$ and $\SQ$-operators implies that the $\ST$-eigenspace $\mathcal{L}(|t_{\pm |k|}\rangle )$ corresponding to the eigenvalue $t_{|k|}(\lambda )$ is invariant under the action of $\SQ$, and so for $k=0$ any $\ST$-eigenstate $|t_{0}\rangle $ is directly a $\SQ$-eigenstate. Let us observe that the self-adjointness of $\SQ$ implies that in the two-dimensional $\ST$-eigenspace $\mathcal{L}(|t_{\pm |k|}\rangle )$ with $k\neq 0$ we can always take two linear combinations of the states $|t_{|k|}\rangle $ and $|t_{-|k|}\rangle $ which are $\SQ$-eigenstates. Now, thanks to Lemma \ref{Unilem}, for fixed $\ST$-eigenvalue $t_{|k|}(\lambda )$ the corresponding $\SQ$-eigenvalue $Q_{|k|}(\lambda )$ is unique, which implies that $|t_{\pm|k|}\rangle $ are themselves $\SQ$-eigenstates. The commutativity of the $\SQ$-operator with the $\Theta $-charge follows by observing that the $|t_{\pm |k|}\rangle $ define a basis. Let us complete the proof by showing that the conditions on the polynomial $Q_{|k|}(\lambda )$ stated in the Proposition are simple consequences of the fact that $|t_{\pm |k|}\rangle $ are eigenstates of the $\Theta $-charge with eigenvalues $q^{\pm |k|}$.
Indeed, the compatibility of the asymptotics conditions (\ref{asymptotics-t}) with the $\ST\SQ$ Baxter equation implies \begin{equation}\label{Asymp-Q_k} \lim_{\lambda \rightarrow 0}\frac{Q_{|k|}(\lambda q)}{Q_{|k|}(\lambda )}=q^{\pm |k|},\text{ \ \ }\lim_{\lambda \rightarrow \infty }\frac{Q_{|k|}(\lambda q)}{Q_{|k|}(\lambda )}=q^{-(\SRN\pm |k|)}, \end{equation} which are equivalent to the conditions on the polynomial $Q_{|k|}(\lambda )$ stated in the Proposition. \end{proof} Note that the uniqueness of the $\SQ$-eigenvalue $Q_{|k|}(\lambda )$ corresponding to a given $\ST$-eigenvalue $t_{|k|}(\lambda )$ implies that each vector $(\psi_{|k|}(\zeta_{r}),\psi_{|k|}(\zeta _{r}q),...,\psi_{|k|}(\zeta _{r}q^{2l}))$ appearing in \rf{Psi-factor} must be proportional to the vector $(Q_{|k|}(\zeta _{r}),Q_{|k|}(\zeta _{r}q),...,Q_{|k|}(\zeta _{r}q^{2l}))$, so that the previous results admit the following reformulation: \begin{thm} The pairs of eigenvectors $|t_{|k|}\rangle $ and $|t_{-|k|}\rangle $ of $\ST(\la)$ are in one-to-one correspondence with the polynomials $Q_{|k|}(\la)$ of maximal order $2l\SRN$ which have the asymptotics (\ref{Asymp-Q_k}) and satisfy the Baxter equation \rf{BaxEV} with $t_{|k|}(\la)$ being an even Laurent polynomial in $\lambda $ of degree $\SRN$. \end{thm} As in the case of $\SRN$ odd this reformulation allows the classification and construction of the spectrum of $\ST(\lambda )$ by the analysis of the solutions to the system of the Bethe equations.
\section{Introduction and Main Results} We consider ground states for the {\em massless boson star equation} in $d=3$ dimensions given by \begin{equation} \label{eq:eq} \left \{ \begin{array}{l} \displaystyle \sqrt{-\Delta} \, u - \big ( |x|^{-1} \ast |u|^2 \big ) u = -u , \\[1ex] u \in H^{1/2}(\mathbb{R}^3), \quad u \geq 0, \quad u \not \equiv 0 . \end{array} \right . \end{equation} Here $H^s(\mathbb{R}^3)$ is the inhomogeneous Sobolev space of order $s \in \mathbb{R}$, and the symbol $\ast$ denotes the convolution on $\mathbb{R}^3$. The nonlinear equation \eqref{eq:eq} plays a central role in the mathematical theory of gravitational collapse of boson stars, which we briefly summarize as follows. In the seminal work of Lieb and Yau \cite{LiYa}, the universal constant \begin{equation} \label{eq:Nstar} N_* = \| u \|_{2}^2 \end{equation} was found to be the so-called {\em ``Chandrasekhar limiting mass''} for boson stars in the time-independent setting. Here the ground state $u \in H^{1/2}(\mathbb{R}^3)$, appearing in equation \eqref{eq:Nstar}, is a certain optimizer that solves problem \eqref{eq:eq}. As one main result, it was proven in \cite{LiYa}, based on variational arguments and many-body quantum theory, that boson stars with total mass strictly less than $N_*$ are gravitationally stable, whereas boson stars whose total mass exceeds $N_*$ may undergo a ``gravitational collapse''. Moreover, it was conjectured by Lieb and Yau in \cite{LiYa}, as an open problem, that uniqueness of ground states holds.
More recently, the mathematical theory of boson stars has entered the field of nonlinear dispersive equations: In \cite{ElSc}, it was shown that the {\em dynamical evolution} of boson stars is effectively described by the nonlinear evolution equation (with mass parameter $m \geq 0$) \begin{equation} \label{eq:boson} i \partial_t \psi = \sqrt{-\Delta + m^2} \, \psi - \big ( |x|^{-1} \ast |\psi|^2 \big ) \psi \end{equation} for the wave field $\psi : [0,T) \times \mathbb{R}^3 \to \mathbb{C}$. In fact, this dispersive nonlinear $L_2$-critical PDE displays a rich variety of phenomena such as stable/unstable traveling solitary waves and finite-time blowup. In particular, the ground states $u(x) \geq 0$ for \eqref{eq:eq} and the constant $N_* >0$ given by \eqref{eq:Nstar} both play a fundamental role as follows: First, the ground state solutions $u(x)$ of \eqref{eq:eq} give rise to {\em ground state solitary waves} of the form \begin{equation} \psi(t,x) = e^{it} u(x) \end{equation} for the evolution equation \eqref{eq:boson} in the case of vanishing mass $m=0$. Second, the universal constant $N_* > 0$ sets the scale between ``small'' and ``large'' solutions of the $L_2$-critical nonlinear dispersive PDE \eqref{eq:boson}, irrespective of the value of $m \geq 0$. More precisely, as shown in \cite{FrLe,Le2}, all solutions $\psi \in C_0^t H^{1/2}_x([0,T) \times \mathbb{R}^3)$ with small $L_2$-mass $$\| \psi(t) \|_{2}^2 < N_*$$ extend globally in time (i.\,e. we have $T=\infty$); whereas solutions with $$\| \psi(t) \|_{2}^2 > N_*$$ can lead to blowup at some finite time $T < \infty$. (This singularity formation indicates the dynamical ``gravitational collapse'' of a boson star.) Thus, any analytical insight into some key properties (e.\,g., uniqueness up to translation) of the ground states $u(x) \geq 0$ and the spectrum of their linearization will be of considerable importance for a detailed blowup analysis of the nonlinear dispersive equation \eqref{eq:boson}.
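To illustrate the $L_2$-critical nature of \eqref{eq:boson} with $m=0$, we recall the standard scaling computation: the mass-preserving rescaling $\psi_\lambda(x) = \lambda^{3/2} \psi(\lambda x)$ satisfies $\| \psi_\lambda \|_2 = \| \psi \|_2$, while the kinetic and interaction terms of the associated energy scale identically, \begin{equation*} \int_{\mathbb{R}^3} \big | (-\Delta)^{1/4} \psi_\lambda \big |^2 \, dx = \lambda \int_{\mathbb{R}^3} \big | (-\Delta)^{1/4} \psi \big |^2 \, dx , \qquad \iint \frac{|\psi_\lambda(x)|^2 |\psi_\lambda(y)|^2}{|x-y|} \, dx \, dy = \lambda \iint \frac{|\psi(x)|^2 |\psi(y)|^2}{|x-y|} \, dx \, dy . \end{equation*} Hence no rescaling can separate the two terms, and the scale-invariant quantity is precisely the $L_2$-mass $\| \psi \|_{2}^2$.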
\medskip Our main result is as follows. \begin{theorem}\label{main} {\bf (Radiality and Analyticity).} Every solution $u \in H^{1/2}(\mathbb{R}^3)$ of problem \eqref{eq:eq} is of the form $u(x)=Q(x-a)$ for some $a \in \mathbb{R}^3$, where $Q$ satisfies the following properties. \begin{enumerate} \item[(i)] \label{eq:symmdecr} $Q$ is positive, radial and strictly decreasing. \item[(ii)] $Q$ is real-analytic. More precisely, there exist a constant $\sigma>0$ and an analytic function $\tilde Q$ on $\{z\in\mathbb{C}^3 : |\im z_j|<\sigma, 1\leq j\leq 3 \}$ such that $\tilde Q(x)=Q(x)$ if $x\in\mathbb{R}^3$. \end{enumerate} \end{theorem} \begin{remark} A natural open question is the uniqueness of the ground state $Q=Q(|x|) > 0$. We refer to our recent work \cite{FraLe}, where uniqueness has been proven for ground states of nonlinear equations with fractional Laplacians $(-\Delta)^s$ in $d=1$ dimension. \end{remark} \begin{remark} Our proof that any solution of problem \eqref{eq:eq} must be radially symmetric (with respect to some point) employs the classical method of moving planes introduced in \cite{GiNiNi}; see Section~\ref{sec:symmetry} below. See also \cite{BiLoWa} for a similar symmetry result, obtained by the method of moving planes, for equations with fractional Laplacians on the unit ball $\{ x \in \mathbb{R}^3 : |x| < 1\}$. We remark that the arguments, which we present in Section \ref{sec:symmetry} below, are able to deal with the unbounded domain $\mathbb{R}^3$, thus settling an open problem stated in \cite{BiLoWa}. While finalizing the present paper, we learned that the authors of \cite{MaZh} have very recently and independently established a symmetry result for the equation $-\Delta \, u - \big ( |x|^{-1} \ast |u|^2 \big ) u = -u$ in $\mathbb{R}^3$. They also briefly sketch \cite[Sec. 5]{MaZh} how to extend their approach to more general equations, including \eqref{eq:eq}.
Their method is different from ours and uses the integral version of the method of moving planes developed in \cite{ChLiOu}. We believe that our non-local Hopf lemma, on which our differential version of the method of moving planes is based, might have applications beyond the context of the present paper. \end{remark} \begin{remark} \label{rem:rescale} Note that Theorem \ref{main} implies an analogous statement for positive solutions of the equation $$\sqrt{-\Delta}\, u - \kappa(u^2*|x|^{-1}) u = -\lambda u$$ with constants $\kappa,\lambda>0$. Indeed, $u$ solves this equation if and only if $\kappa^{1/2} \lambda^{-3/2} u(x/\lambda)$ solves \eqref{eq:eq}. One might also ask whether this equation can have a solution for $-\lambda=E\geq0$. The answer is negative, even if the positivity assumption on $u$ is dropped, as shown by the next result (whose proof is given in Subsection \ref{sec:nonexist} below). Without loss of generality, we can put $\kappa=1$ in the following. \end{remark} \begin{proposition}\label{nonexist} Let $E\geq0$. If $u\in H^{1/2}(\mathbb{R}^3)$ is radial and satisfies $\sqrt{-\Delta} \, u - (|u|^2 * |x|^{-1})u = Eu$, then $u\equiv 0$. \end{proposition} \subsection*{Organization of the Paper} In Sections \ref{sec:prelim}--\ref{sec:anal}, we organize the proof of Theorem \ref{main} as follows. In Section \ref{sec:prelim}, we collect some preliminary results on \eqref{eq:eq} concerning the existence, regularity, and spatial decay of solutions. Moreover, we give the proof of Proposition \ref{nonexist}. In Section \ref{sec:symmetry}, we implement the method of moving planes and prove that every solution of \eqref{eq:eq} is spherically symmetric with respect to some point. In Section \ref{sec:anal}, we prove the real analyticity of solutions. In Section \ref{sec:anal2}, we provide further analyticity results about elements in the kernel of the linearization of \eqref{eq:eq}.
\subsection*{Notation and Conventions} For the Fourier transform on $\mathbb{R}^3$, we use the convention \begin{equation} \label{eq:fourier} \hat u(\xi) := (2\pi)^{-3/2} \int_{\mathbb{R}^3} u(x) e^{-i\xi\cdot x} \,dx . \end{equation} As usual, the fractional derivative operators $(-\Delta)^s$ and $(1-\Delta)^s$ are defined via their multipliers $|\xi|^{2s}$ and $(1+|\xi|^2)^s$ in Fourier space, respectively. Lebesgue spaces of functions on $\mathbb{R}^3$ will be denoted by $L_p = L_p(\mathbb{R}^3)$ with norm $\| \cdot \|_{p}$ and $1 \leq p \leq \infty$. For the sake of brevity, we shall use the notation $\|u \| \equiv \| u \|_2$ occasionally. We employ inhomogeneous Sobolev norms $\| u \|_{H^s} := \| (1-\Delta)^{s/2} u \|_2$, as well as homogeneous Sobolev norms $\| u \|_{\dot{H}^s} = \| (-\Delta)^{s/2} u \|_2$. The equation \eqref{eq:eq} is always understood to hold in the $H^{-1/2}$ sense. That is, we say that $u \in H^{1/2}(\mathbb{R}^3)$ solves the equation in \eqref{eq:eq} if $$ \int_{\mathbb{R}^3} |\xi| \overline{\hat v(\xi)} \hat u(\xi) \,d\xi - \iint_{\mathbb{R}^3\times\mathbb{R}^3} \frac{\overline{v(x)} u(x) |u(y)|^2}{|x-y|} \,dx\,dy = - \int_{\mathbb{R}^3} \overline{v(x)} u(x) \,dx \,, $$ for all $v \in H^{1/2}(\mathbb{R}^3)$. In what follows, the letter $C$ denotes a constant which is allowed to change from inequality to inequality. With the usual abuse of notation, we shall not distinguish between the functions $f(|x|)$ and $f(x)$ whenever $f : \mathbb{R}^3 \rightarrow \mathbb{C}$ is radial. \subsection*{Acknowledgments} R.\,F. gratefully acknowledges support through DFG grant FR 2664/1-1 and NSF grant PHY 06 52854. E.\,L.~is supported by a Steno fellowship of the Danish research council and NSF grant DMS-0702492. \section{Preliminary results} \label{sec:prelim} To prepare the proof of our main results, we first collect some preliminary results on the existence, regularity, and decay of solutions to problem \eqref{eq:eq}. 
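Before doing so, let us make the scaling relation claimed in Remark \ref{rem:rescale} explicit, since it will be used repeatedly below; the following elementary computation is recorded only for the reader's convenience. Suppose $u$ solves $\sqrt{-\Delta} \, u - \kappa (u^2 \ast |x|^{-1}) u = -\lambda u$ with constants $\kappa, \lambda > 0$, and set $v(x) := \kappa^{1/2} \lambda^{-3/2} u(x/\lambda)$. Since the symbol of $\sqrt{-\Delta}$ is homogeneous of degree one, we have $(\sqrt{-\Delta} \, v)(x) = \kappa^{1/2} \lambda^{-5/2} (\sqrt{-\Delta} \, u)(x/\lambda)$, and the substitution $y = \lambda z$ in the convolution yields
$$
(v^2 \ast |x|^{-1})(x) = \kappa \lambda^{-3} \int_{\mathbb{R}^3} \frac{u(y/\lambda)^2}{|x-y|} \, dy = \kappa \lambda^{-1} \, (u^2 \ast |x|^{-1})(x/\lambda) \,.
$$
Hence
$$
\sqrt{-\Delta} \, v - (v^2 \ast |x|^{-1}) v = \kappa^{1/2} \lambda^{-5/2} \big ( \sqrt{-\Delta} \, u - \kappa (u^2 \ast |x|^{-1}) u \big )(x/\lambda) = -v \,,
$$
i.\,e., $v$ solves \eqref{eq:eq}; reversing the scaling gives the converse direction.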
Since all these facts follow from the literature and standard arguments, we will keep our exposition brief throughout this section. \subsection{Existence and properties of a minimizing solution} The existence of a nonnegative, radial solution $Q(|x|) \geq 0$ of problem \eqref{eq:eq} can be established by direct variational arguments, as remarked in \cite[App. A.2]{LiYa}. More precisely, we consider the minimization problem \begin{equation} \label{eq:varprob} \inf \big \{ I[u]: u\in H^{1/2}(\mathbb{R}^3), \, u \not \equiv 0 \big \} , \end{equation} where \begin{equation} I[u] := \frac{\|(-\Delta)^{1/4} u\|^2 \ \|u\|^2}{ \iint_{\mathbb{R}^3 \times \mathbb{R}^3} |u(x)|^2 |x-y|^{-1} |u(y)|^2 \, dx\,dy} \,. \end{equation} Thanks to strict rearrangement inequalities (see \cite{Li,FrSe}), we have that $I[u^*] \leq I[u]$ with equality if and only if $u(x)$ equals its symmetric-decreasing rearrangement $u^*(|x|)\geq 0$ (modulo translation in space and multiplication by a complex number). As pointed out in \cite[App. A.2]{LiYa}, this fact permits us to imitate the proof in \cite{LiOx} to deduce the existence of a symmetric-decreasing minimizer $Q = Q^* \in H^{1/2}(\mathbb{R}^3)$ for problem \eqref{eq:varprob}. Moreover, an elementary calculation shows that any minimizer for \eqref{eq:varprob} satisfies the Euler-Lagrange equation $$ \sqrt{-\Delta} \, Q - \kappa(|Q|^2*|x|^{-1}) Q = -\lambda Q, $$ with some constants $\lambda >0$ and $\kappa>0$ that both depend on $Q$. By Remark \ref{rem:rescale}, we see that any symmetric-decreasing minimizer $Q = Q^* \in H^{1/2}(\mathbb{R}^3)$ for \eqref{eq:varprob} furnishes (after suitable rescaling) a solution of problem \eqref{eq:eq}. \subsection{Regularity and Decay} In this subsection, we collect some basic regularity and decay estimates for solutions $u \in H^{1/2}(\mathbb{R}^3)$ of the nonlinear equation \begin{equation} \label{eq:eq1} \sqrt{-\Delta} \, u - \big ( |u|^2 \ast |x|^{-1} \big ) u = -u . \end{equation} Note that we do not require $u$ to be non-negative or even real-valued in this section, unless we explicitly say so. \begin{lemma}[Smoothness of solutions] \label{smooth} Let $u\in H^{1/2}(\mathbb{R}^3)$ be a solution of \eqref{eq:eq1}. Then $u \in H^s(\mathbb{R}^3)$ for all $s \geq 1/2$. \end{lemma} \begin{proof} This follows from a simple bootstrap argument.
Indeed, note that $u$ satisfies \begin{equation} \label{eq:uresolvent} u = (\sqrt{-\Delta} + 1)^{-1} F(u), \end{equation} where we put $F(u) = (|u|^2 \ast |x|^{-1}) u$. Since $F(u)$ maps $H^s(\mathbb{R}^3)$ into itself for any $s \geq 1/2$ (see, e.\,g., \cite{Le2}) and thanks to the smoothing property $(\sqrt{-\Delta} + 1)^{-1} : H^s(\mathbb{R}^3) \rightarrow H^{s+1}(\mathbb{R}^3)$, we obtain the desired result. \end{proof} Next, we record a decay estimate for solutions of \eqref{eq:eq1}. \begin{lemma}[Decay rates] \label{decay} Any solution $u \in H^{1/2}(\mathbb{R}^3)$ of \eqref{eq:eq1} satisfies \begin{equation} \label{ineq:upper} |u(x)| \leq C (1 + |x|)^{-4} \end{equation} and \begin{equation} \label{ineq:newton} (|u|^2 \ast |x|^{-1})(x) \leq C (1 + |x|)^{-1} . \end{equation} Moreover, if we assume that $u(x) \geq 0$ and $u \not \equiv 0$, then we also have the lower bound \begin{equation} \label{ineq:lower} u(x) \geq C (1+|x|)^{-4}. \end{equation} In particular, any such solution $u(x)$ is strictly positive. \end{lemma} \begin{proof} Note that $u \in L_2(\mathbb{R}^3)$ is an eigenfunction for the ``relativistic'' Schr\"odinger operator $$ H := \sqrt{-\Delta} + V $$ with the local potential $V(x) = -(|u|^2 \ast |x|^{-1})(x)$. Furthermore, by Lemma \ref{smooth} and Sobolev embeddings, we have $u \in L_p(\mathbb{R}^3)$ for all $p \geq 2$, which implies that $V(x)$ is continuous and $V(x) \rightarrow 0$ as $|x| \rightarrow \infty$. Hence $u$ is an eigenfunction corresponding to the eigenvalue $-1$ below the bottom of the essential spectrum of $H$. From \cite[Proposition IV.1]{CaMaSi} we now deduce the bound \eqref{ineq:upper}. Next, we see that deriving the bound \eqref{ineq:newton} amounts to estimating the function $V(x)$ defined above. 
First, we note that the Hardy-Kato inequality (see, e.\,g., \cite{He}) gives us $$ \sup_{x \in \mathbb{R}^3} | V(x) | \leq \sup_{x \in \mathbb{R}^3} \int_{\mathbb{R}^3} \frac{|u(y)|^2}{|x-y|} \, dy \leq C \int_{\mathbb{R}^3} \overline{u}(y) (\sqrt{-\Delta} \, u)(y) \,dy \leq C \| u \|_{H^{1/2}}^2. $$ Also, from \eqref{ineq:upper} we have a radially symmetric bound for $u(x)$. Thus, by Newton's theorem (see, e.\,g., \cite[Theorem 9.7]{LiLo}), we deduce $$ | V(x) | \leq \frac{1}{|x|} \int_{\mathbb{R}^3} \frac{C}{(1+|y|)^4} \, dy \leq \frac{C}{|x|} . $$ Combining these two estimates for $V(x)$, we obtain the bound \eqref{ineq:newton}. Finally, let us also assume that $u(x) \geq 0$ and $u \not \equiv 0$. By standard Perron-Frobenius arguments, we conclude that $u(x)$ is the unique ground state eigenfunction for the Schr\"odinger operator $H$. In particular, invoking \cite[Proposition IV.3]{CaMaSi} yields the lower bound \eqref{ineq:lower}. \end{proof} \begin{remark} As an alternative to the probabilistic arguments used in \cite{CaMaSi}, we could also provide a more ``hands-on'' proof of Lemma \ref{decay}, which is based on bootstrapping equation \eqref{eq:uresolvent} and using the explicit formula for the Green's function \begin{align} (\sqrt{-\Delta} + \tau )^{-1}(x,y) & = \int_0^\infty e^{-t \tau} \exp (-t \sqrt{-\Delta} )(x,y) \, dt \nonumber \\ & = \frac{1}{\pi^2} \int_0^\infty e^{-t \tau} \frac{t}{(t^2 + |x-y|^2)^2} \, dt . \label{eq:reskernel} \end{align} We refer to \cite{Le3} for details of this alternative proof; see, e.\,g., \cite[Section 7.11]{LiLo} for the explicit formula of the kernel. \end{remark} \subsection{Proof of Proposition \ref{nonexist}}\label{sec:nonexist} Suppose $u \in H^{1/2}(\mathbb{R}^3)$ is radial and solves $$ \sqrt{-\Delta} \, u - \big (|x|^{-1} \ast |u|^2 \big ) u = E u $$ with some constant $E \geq 0$.
With $V:= - |u|^2 * |x|^{-1}$ one has the virial identity $$ \|(-\Delta)^{1/4} u\|^2 = \int |x| \partial_r V |u|^2 \,dx \,, $$ which can be proved along the lines of \cite[Thm. 4.21]{Th}. (The assumptions on $V$ follow easily from Newton's theorem.) Next, integrating the equation against $u$ shows that $\|(-\Delta)^{1/4} u\|^2 + \int V |u|^2 \,dx = E\|u\|^2$. Hence, $$ \int_{\mathbb{R}^3} (V+|x| \partial_r V -E) |u|^2 \,dx =0 \,. $$ But Newton's theorem gives us (with $r = |x|$) $$ V(x) + |x| \partial_r V(x) = -4\pi \int_r^\infty |u(s)|^2 s\,ds \leq 0 \,. $$ Therefore we have $(V+|x| \partial_r V -E) |u|^2 = 0$ almost everywhere. If $E > 0$, this shows directly that $u \equiv 0$. If $E=0$ holds, then we conclude $(\int_r^\infty |u(s)|^2 s \,ds ) u(r) = 0$ for almost every $r \geq 0$, which again implies $u \equiv 0$. This completes the proof of Proposition \ref{nonexist}. \hfill $\blacksquare$ \section{Symmetry} \label{sec:symmetry} We now establish the symmetry part of Theorem \ref{main}; that is, any nonnegative solution $u(x) \geq 0$ of problem \eqref{eq:eq} is radially symmetric up to translation. The basic strategy rests on the method of moving planes, which was applied in \cite{GiNiNi2} to obtain a similar statement for local elliptic equations of the form $-\Delta u + f(u) = 0$. To make the method of moving planes work successfully in our case, we establish a suitable ``non-local Hopf lemma'' below. The goal of this section is to establish the following result. \begin{theorem}[Symmetry]\label{symm} Any solution of problem \eqref{eq:eq} is radial with respect to some point and strictly decreasing with respect to the distance from that point. \end{theorem} Since radial symmetry around a point means reflection symmetry with respect to any plane passing through that point, we start by proving a result about reflections. For the sake of concreteness, we consider reflections on the plane $\{x_1=0\}$. The following assertion will immediately imply Theorem \ref{symm}.
\begin{proposition}\label{symm1} Let $u \in H^{1/2}(\mathbb{R}^3)$ be a solution of problem \eqref{eq:eq} and assume that the function $f := (u^2*|x|^{-1}) u$ satisfies \begin{equation} \label{eq:com} \int_{\mathbb{R}^3} y_1 f(y) \,dy = 0 \,. \end{equation} Then, for each $x'\in\mathbb{R}^2$ fixed, the function $u(\cdot,x')$ is symmetric with respect to the point $x_1=0$ and strictly decreasing for $x_1>0$. \end{proposition} Before we turn to the proof of Proposition \ref{symm1}, we first give the proof of Theorem \ref{symm}. \begin{proof}[Proof of Theorem \ref{symm} assuming Proposition \ref{symm1}] Let $u$ be a solution of problem \eqref{eq:eq} and define $f := (u^2*|x|^{-1}) u$. Since $u \geq 0$ and $u \not \equiv 0$, we have $\int_{\mathbb{R}^3} f(y) \, dy > 0$. Thus there exists a translation $a\in\mathbb{R}^3$ such that \begin{equation} \label{eq:comall} \int_{\mathbb{R}^3} y_j f(y-a) \,dy = 0, \qquad \text{for}\ j=1,2,3 \,. \end{equation} (We note that the integrals converge absolutely in view of the estimates from Lemma \ref{decay}.) For any orthogonal matrix $R\in O(3)$, the function $v_R(x) := u(Rx-a)$ is a solution of \eqref{eq:eq} and the normalization \eqref{eq:comall} implies that $f_R:= (v_R^2*|x|^{-1}) v_R$ satisfies \eqref{eq:com}. Hence, by Proposition \ref{symm1}, the function $v_R(x)$ is symmetric with respect to $x_1=0$ and strictly decreasing for $x_1>0$. Since the rotation $R \in O(3)$ is arbitrary, this means that $u$ is radial with respect to $a$ and strictly decreasing as a function of $|x-a|$. \end{proof} The proof of Proposition \ref{symm1} will be given in Subsection \ref{sec:symm1proof} after having proved two preliminary results. In this section we use the following notation. For any $\lambda\in\mathbb{R}$ and any point $x=(x_1,x') \in \mathbb{R} \times \mathbb{R}^2$, we denote by \begin{equation} \label{eq:xrefl} x^\lambda := (2\lambda-x_1,x') \end{equation} its reflection with respect to the hyperplane $\{x_1=\lambda\}$. 
Moreover, the reflection of a function $u$ on $\mathbb{R}^3$ with respect to the hyperplane $\{ x_1=\lambda\}$ will be denoted by \begin{equation} \label{eq:urefl} u_\lambda(x) := u(x^\lambda). \end{equation} \subsection{Asymptotics of the solution}\label{sec:asymp} Recall from Lemma \ref{decay} that any solution $u$ of \eqref{eq:eq} decays like $|x|^{-4}$. To make the method of moving planes work, we need more precise asymptotics of $u$ and its first derivative. To this end, we consider equations of the following general form: \begin{equation}\label{eq:asymp} \sqrt{-\Delta} \, u =-u +f , \end{equation} where the inhomogeneity $f(x)$ is some given measurable function on $\mathbb{R}^3$. Clearly, this equation coincides with the equation in \eqref{eq:eq} if we put $f:=(u^2*|x|^{-1}) u$; and according to our a-priori estimates from Lemma \ref{decay}, we then have $0< f(x) \leq C (1+|x|)^{-5}$. In fact, our asymptotics will be valid for more general inhomogeneities $f(x)$. The precise statement is as follows. \begin{lemma}\label{asymp} Assume that $u\in H^1(\mathbb{R}^3)$ satisfies \eqref{eq:asymp} with $|f(x)| \leq C (1+|x|)^{-\rho}$ for some $\rho>4$. Then \begin{enumerate} \item[(i)] $\lim_{|x|\to\infty} |x|^4 u(x) = \pi^{-2} \int f(y) \,dy$. \item[(ii)] $\lim_{x_1\to\infty} \frac{|x|^6}{x_1} \frac{\partial u}{\partial x_1}(x) = - 4 \pi^{-2} \int f(y) \,dy$. \item[(iii)] If $\lambda^j\to\lambda$ and $|x^j|\to\infty$ with $x_1^j<\lambda^j$, then \begin{equation*} \lim_{j\to\infty} \frac{|x^j|^6}{2(\lambda^j -x_1^j)} \left(u(x^j)-u_{\lambda^j}(x^j)\right) = \frac4{\pi^2} \int_{\mathbb{R}^3} f(y) (\lambda-y_1) \,dy \,, \end{equation*} where $u_{\lambda^j}(x)$ is defined in \eqref{eq:urefl} above. \end{enumerate} \end{lemma} \begin{proof} We write $u=(\sqrt{-\Delta}+1)^{-1}f$ and use the explicit formula \eqref{eq:reskernel} for the resolvent kernel.
Calculating with this explicit kernel, we easily obtain the assertion of the lemma for $f$ with compact support. We omit the details. The extension to more general $f$ uses a density argument in the same spirit as in \cite[Lem. 2.1]{GiNiNi2}. \end{proof} \subsection{A non-local Hopf lemma} As a next step, we derive the following non-local Hopf lemma. \begin{lemma}\label{hopf} Let $w\in H^{1}(\mathbb{R}^3) \cap C^1(\mathbb{R}^3)$ be odd with respect to the plane $\{x_1=0\}$ and assume that, for some $\tau \in \mathbb{R}$, we have \begin{equation} \label{eq:hopf} \begin{split} \sqrt{-\Delta} \, w & \geq -\tau w \qquad \text{in} \ \{x_1>0\} \,, \\ w & \geq 0 \qquad \text{in} \ \{x_1>0\} \,. \end{split} \end{equation} Then either $w\equiv 0$, or else $w>0$ in $\{x_1>0\}$ and $ \frac{\partial w}{\partial x_1} \big |_{x_1=0}>0$. \end{lemma} A different extension of Hopf's lemma to the non-local context is proved in \cite{BiLoWa}. Their approach does not allow for positive values of $\tau$, which, however, will be crucial for us. \begin{proof} Since $w \geq 0$ in $\{x_1>0\}$, the hypothesis \eqref{eq:hopf} for some $\tau$ implies the same hypothesis for every larger value of $\tau$; hence it is sufficient to give the proof assuming that $\tau > 0$ holds. Next, we assume that $w\not\equiv 0$ and define $h:=(\sqrt{-\Delta} +\tau)w$. We note that $h$ is odd with respect to the plane $\{x_1=0\}$ and that $h\geq 0$ in $\{x_1>0\}$. Moreover, one easily sees that $h\not\equiv 0$; e.\,g.~via the Fourier transform. Next, we write $$ w=(\sqrt{-\Delta} +\tau)^{-1}h = \int_0^\infty e^{- t\tau} \exp(-t\sqrt{-\Delta})h \,dt \, . $$ This shows that it is enough to prove that $\exp(-t\sqrt{-\Delta})h$ is strictly positive in $\{x_1>0\}$ and has a strictly positive $x_1$-derivative on $\{x_1=0\}$.
Using that $h$ is odd with respect to the plane $\{ x_1 = 0\}$ and writing $x=(x_1,x')$, $y=(y_1,y')$, we find (recalling formula \eqref{eq:reskernel} for the integral kernel) that \begin{align*} \exp(-t\sqrt{-\Delta})h (x) & = \pi^{-2} \int_{\mathbb{R}^3} \frac{t}{(t^2+|x-y|^2)^2} h(y) \,dy \\ & = \pi^{-2} \int_{y_1>0} \left(\frac{t}{(t^2+|x-y|^2)^2} \right .\\ & \quad \left . - \frac{t}{(t^2+(x_1+y_1)^2+|x'-y'|^2)^2} \right) h(y) \,dy \,. \end{align*} If $x_1>0$, then the integrand is non-negative and $\not \equiv 0$, and hence $\exp(-t\sqrt{-\Delta})h (x)>0$. Differentiating the above expression under the integral sign (which can be justified by dominated convergence), we find $$ \frac{\partial}{\partial x_1} \exp(-t\sqrt{-\Delta})h (0,x') = \frac{8}{\pi^{2}} \int_{y_1>0} \frac{t y_1}{(t^2+y_1^2+|x'-y'|^2)^3} h(y) \,dy \,, $$ which again is strictly positive. \end{proof} \subsection{Proof of Proposition \ref{symm1}} \label{sec:symm1proof} Now we are ready to implement the method of moving planes. Let $u$ be a solution of problem \eqref{eq:eq} and assume that $f := (u^2*|x|^{-1}) u$ satisfies \eqref{eq:com}. Recalling the definition of $u_\lambda$ before Subsection \ref{sec:asymp}, we define the set $$ \Lambda := \{ \mu>0 :\ \text{for all} \ \lambda > \mu \ \text{and for all}\ x \ \text{with}\ x_1 < \lambda \ \text{one has}\ u(x)\geq u_\lambda(x) \} \,. $$ We divide the proof of Proposition \ref{symm1} into three steps as follows. \emph{Step 1. $\Lambda$ is non-empty.} We first note that according to Lemma \ref{asymp} (ii) there is a $\overline\lambda>0$ such that \begin{equation} \label{eq:decrease} \frac{\partial u}{\partial x_1}(x) < 0 \qquad \text{if}\ x_1 \geq \overline\lambda \,. \end{equation} We now prove that $\Lambda$ is non-empty by contradiction. If $\Lambda$ were empty, there would exist sequences of numbers $(\lambda^j)\to\infty$ and points $(x^j)$ with $x^j_1<\lambda^j$ such that \begin{equation} \label{eq:cont1} u(x^j)<u_{\lambda^j}(x^j) \,. 
\end{equation} Next, we claim that \begin{equation} \label{eq:cont1inf} |x^j|\to\infty \end{equation} and, with $\overline\lambda$ from \eqref{eq:decrease}, \begin{equation} \label{eq:cont1bdd} x^j_1 <\overline\lambda \,. \end{equation} To prove our claim, we note that $(x^j)^{\lambda^j}_1>\lambda^j\to\infty$ together with the decay estimate in Lemma \ref{decay} implies that $u_{\lambda^j}(x^j)\to 0$. Therefore, by \eqref{eq:cont1}, we also have $u(x^j)\to 0$. Since $u$ is continuous by Lemma \ref{smooth} and strictly positive by Lemma \ref{decay}, we obtain \eqref{eq:cont1inf}. Hence the bound \eqref{eq:cont1bdd} follows from \eqref{eq:decrease} and \eqref{eq:cont1}. Now choose $j$ sufficiently large such that $\lambda^j>\overline\lambda$ holds. Then \eqref{eq:cont1bdd} implies that $\overline\lambda<(x^j)^{\overline\lambda}_1< (x^j)^{\lambda^j}_1$. Thus, by \eqref{eq:decrease}, we conclude \begin{equation} \label{eq:cont1est} u_{\overline\lambda}(x^j)> u_{\lambda^j}(x^j) \,. \end{equation} On the other hand, \eqref{eq:cont1inf}, \eqref{eq:cont1bdd} and \eqref{eq:com} together with Lemma \ref{asymp} (iii) (and $\overline\lambda$ instead of $\lambda^j$) imply that $$ \frac{|x^j|^6}{2(\overline\lambda -x_1^j)} \left(u(x^j)-u_{\overline\lambda}(x^j)\right) \to \frac{4\overline\lambda}{\pi^2} \int f(y) \,dy >0 \,, $$ contradicting \eqref{eq:cont1} and \eqref{eq:cont1est}. Hence the set $\Lambda$ is non-empty. \emph{Step 2. $\lambda_1:=\inf\Lambda=0$.} Again, we argue by contradiction and assume that $\lambda_1>0$. We note that $u(x)\geq u_\lambda(x)$ for all $x$ with $x_1<\lambda$ and all $\lambda>\lambda_1$. Hence, by continuity (see Lemma \ref{smooth}), we also have $u(x)\geq u_{\lambda_1}(x)$ if $x_1<\lambda_1$. 
Note that the function $w:= u-u_{\lambda_1}$ satisfies the equation $$ \sqrt{-\Delta} w + Vw=-w +f\,, \qquad V := -\frac12 (u^2+u_{\lambda_1}^2)*|x|^{-1} $$ with inhomogeneity $$ f(x) := \frac12 (u(x)+u_{\lambda_1}(x)) \left( (u^2-u_{\lambda_1}^2)*|x|^{-1} \right)(x) . $$ Next, a calculation shows that \begin{multline*} f(x) = \frac12 (u(x)+u_{\lambda_1}(x)) \int_{y_1<\lambda_1} \left( \frac1{|x-y|} \right . - \\ \left . \frac1{\sqrt{(x_1+y_1-2\lambda_1)^2 +|x'-y'|^2}} \right) \left( u(y)^2-u_{\lambda_1}(y)^2 \right) \,dy \,. \end{multline*} Since $|x-y|<\sqrt{(x_1+y_1-2\lambda_1)^2 +|x'-y'|^2}$ if $x_1,y_1<\lambda_1$, we see that $f\geq 0$ in $\{x_1<\lambda_1\}$ and hence $\sqrt{-\Delta} w \geq-w-Vw\geq -w$ in that set. Moreover, recall that $w = u - u_{\lambda_1}$ belongs to all $H^s(\mathbb{R}^3)$. Therefore, by the non-local Hopf lemma (Lemma \ref{hopf}), we either have $w\equiv 0$, or else \begin{equation} \label{eq:cont2hopf} w>0 \ \text{in}\ \{x_1<\lambda_1\} \qquad \text{and} \qquad \frac{\partial w}{\partial x_1}(x) < 0 \ \text{on}\ \{x_1=\lambda_1\} \,. \end{equation} The first case cannot occur, since $w\equiv 0$ together with $\lambda_1>0$ and \eqref{eq:com} would imply $u\equiv 0$, contradicting $u \not \equiv 0$. Hence we may assume that \eqref{eq:cont2hopf} holds. By definition of $\lambda_1$, there exist sequences of numbers $(\lambda^j)\to\lambda_1-$ and points $(x^j)$ with $x^j_1<\lambda^j$ such that \begin{equation} \label{eq:cont2} u(x^j)<u_{\lambda^j}(x^j) \,. \end{equation} Passing to a subsequence if necessary, we may either assume that $x^j\to x$ or else that $|x^j|\to\infty$. If $x^j\to x$, then \eqref{eq:cont2} implies $u(x)\leq u_{\lambda_1}(x)$. Moreover, since $x_1\leq\lambda_1$, the first relation in \eqref{eq:cont2hopf} allows us to deduce that $x_1=\lambda_1$ and $u(x)= u_{\lambda_1}(x)$. Now \eqref{eq:cont2} yields $\frac{\partial u}{\partial x_1}(x)\geq 0$, contradicting the second relation in \eqref{eq:cont2hopf}.
If $|x^j|\to\infty$, then we argue as in Step 1 (using Lemma \ref{asymp} (iii) with the sequence $(\lambda^j)$) to arrive at a contradiction. \emph{Step 3. Conclusion.} In the previous step we have shown that $u(x)\geq u_\lambda(x)$ if $x_1<\lambda$ for any $\lambda>0$. Hence by continuity $u(x)\geq u(-x_1,x')$ if $x_1<0$. Repeating the same argument with $u(x)$ replaced by $u(-x_1,x')$ (and noting that the choice of the origin in \eqref{eq:com} is not affected by this replacement) yields the reverse inequality $u(-x_1,x')\geq u(x)$ if $x_1<0$. Hence $u(\cdot,x')$ is symmetric with respect to $x_1=0$. Using the nonlocal Hopf lemma (Proposition \ref{hopf}) as in Step 2, we find that if $\lambda>0$ then $u(x)> u_\lambda(x)$ for all $x_1<\lambda$. This means that $u(\cdot,x')$ is strictly decreasing for $x_1>0$. The proof of Proposition \ref{symm1} is complete. \hfill $\blacksquare$ \iffalse \section{Uniqueness} \label{sec:unique} Theorem \ref{symm} of the previous section shows that any solution $u$ of problem \eqref{eq:eq} is a radial function $u=u(|x|)$, after a suitable translation. In the present section, we now prove the uniqueness of this radial solution. The main idea of the proof will be based on the harmonic extension of $u$ to the upper half-space $\mathbb{R}^4_+ = \mathbb{R}^3 \times \mathbb{R}_+$. By variational arguments for an associated action functional $\mathcal{A}_u[\psi]$, which will be introduced in \eqref{eq:forma} below, we obtain the desired uniqueness result, which is as follows. \begin{theorem}\label{unique} The problem \begin{equation}\label{eq:unique} \sqrt{-\Delta}u - (u^2 * |x|^{-1})u = -u \,, \qquad 0<u\in H^{1/2}(\mathbb{R}^3) \,, \qquad u \ \text{radial} \,, \end{equation} has at most one solution. \end{theorem} Actually, we shall prove uniqueness for an equivalent problem. 
More precisely, for a given function $v(x)$ on $\mathbb{R}^3$, we put \begin{equation} \label{eq:Phi} \Phi_v(x) := \int_{|y|<|x|} \left(|y|^{-1} - |x|^{-1} \right) |v(y)|^2 \,dy \,. \end{equation} Of course, if $v$ is radial, then so is $\Phi_v(x)$. In Subsection \ref{sec:uniquemodproof}, we shall prove the following uniqueness result. \begin{theorem}\label{uniquemod} The problem \begin{equation} \label{eq:uniquemod} \sqrt{-\Delta}u + \Phi_u u = u \,, \qquad 0<u\in H^{1/2}(\mathbb{R}^3) \,,\qquad u \ \text{radial} \,, \end{equation} has at most one solution. \end{theorem} \begin{proof}[Equivalence of Theorems \ref{unique} and \ref{uniquemod}] First note that if $u$ is radial, then by Newton's theorem $$ \int_{\mathbb{R}^3} \frac{|u(y)|^2}{|x-y|} \,dy = \int_{\mathbb{R}^3} \frac{|u(y)|^2}{|y|} \,dy - \Phi_u(x) \,. $$ Hence if $u$ and $\tilde u$ solve \eqref{eq:unique}, then they solve $$ \sqrt{-\Delta}u + \Phi_u u = \mu u \,, \qquad \sqrt{-\Delta}\tilde u + \Phi_{\tilde u} \tilde u = \tilde \mu \tilde u \,, $$ with $\mu:=\int |y|^{-1} |u(y)|^2 \,dy - 1$ and $\tilde\mu:=\int |y|^{-1} |\tilde u(y)|^2 \,dy - 1$, respectively. Note that $\mu>0$ since $u^2 * |x|^{-1} \leq \int |y|^{-1} |u(y)|^2 \,dy$ and therefore by \eqref{eq:unique} $$ -\int |y|^{-1} |u(y)|^2 \,dy \ \|u\|^2 < \|(-\Delta)^{1/4} u\|^2 - \iint \frac{u(x)^2 u(y)^2}{|x-y|} \,dx\,dy = -\|u\|^2 \,, $$ and similarly $\tilde\mu>0$. If we define $v(x):=\mu^{-3/2} u(x/\mu)$ and $\tilde v(x):=\tilde\mu^{-3/2} \tilde u(x/\tilde\mu)$, then $v$ and $\tilde v$ are two solutions of \eqref{eq:uniquemod} and hence $v(x)=\tilde v(x)$, i.e., $u(x)=\beta^{-3/2}\tilde u(x/\beta)$ with $\beta:=\tilde\mu/\mu$. But the equation \begin{align*} -u(x) & =\sqrt{-\Delta}u(x) - (u^2 * |x|^{-1})(x) u(x) \\ & = \beta^{-5/2} \left( (\sqrt{-\Delta}\tilde u)(x/\beta) - (\tilde u^2 * |\cdot|^{-1})(x/\beta) u(x/\beta) \right) \\ & = -\beta^{-5/2} \tilde u(x/\beta) = -\beta^{-1} u(x) \end{align*} implies that $\beta=1$, i.e., $u=\tilde u$. 
Conversely, assume that $u$ and $\tilde u$ solve \eqref{eq:uniquemod}. Then they solve \eqref{eq:unique} with $$ \sqrt{-\Delta}u - (u^2 * |x|^{-1})u = -\lambda u \,, \qquad \sqrt{-\Delta}\tilde u - (\tilde u^2 * |x|^{-1})\tilde u = -\tilde\lambda\tilde u \,, $$ with $\lambda:=\int |y|^{-1} |u(y)|^2 \,dy - 1$ and $\tilde\lambda:=\int |y|^{-1} |\tilde u(y)|^2 \,dy - 1$, respectively. It follows from Proposition \ref{nonexist} that $\lambda>0$ and $\tilde\lambda>0$. By scaling one can set both parameters $\lambda$ and $\tilde\lambda$ to one and then one can deduce $u=\tilde u$ similarly as before, this time using Theorem \ref{unique}. \end{proof} \subsection{The harmonic extension}\label{sec:harmext} Let $u$ be a solution of \eqref{eq:uniquemod}. We consider the harmonic extension $U$ of $u$ to the halfspace $\mathbb{R}^4_+:=\{ (x,t) : x\in\mathbb{R}^3, \, t>0 \}$, defined by $$ U(x,t) := \left(\exp(-t\sqrt{-\Delta})u \right) (x) = \frac1{\pi^2} \int_{\mathbb{R}^3} \frac{t}{(t^2+|x-y|^2)^2} u(y) \,dy \,; $$ see \eqref{eq:reskernel}. Note that $U \in H^1(\mathbb{R}^4_+)$ holds. Indeed, from Lemma \ref{decay} we see that $u \in L_1$ and hence $\hat{u} \in L_\infty$. In particular, this implies that $u \in \dot{H}^{-1/2}$ since $\hat{u} \in L_2 \cap L_\infty$. Therefore, by Plancherel's identity, we conclude that \begin{align*} \| U \|_{H^1(\mathbb{R}^4_+)}^2 & = \| \nabla_{(x,t)} U \|_{L^2(\mathbb{R}^4_+)}^2 + \| U \|_{L^2(\mathbb{R}^4_+)}^2 = \int_0^\infty \int_{\mathbb{R}^3} \left( 2|\xi|^2 + 1 \right) e^{-2t|\xi|} |\hat u(\xi)|^2 \,d\xi \,dt \\ & = \int_{\mathbb{R}^3} \left( |\xi| + (2|\xi|)^{-1} \right) |\hat u(\xi)|^2 \,d\xi \leq C ( \| u \|_{\dot{H}^{1/2}}^2 + \|u \|_{\dot{H}^{-1/2}}^2 ) < \infty . 
\end{align*} Moreover, one easily checks that $U$ solves the following elliptic boundary problem: \begin{equation}\label{eq:harmext} \left \{ \begin{array}{ll} -\Delta U = 0 & \quad \mbox{in $\mathbb{R}^4_+$}, \\[1ex] \displaystyle -\frac{\partial}{\partial t} U + \Phi_u U = U & \quad \mbox{on $\partial \mathbb{R}^4_+ = \mathbb{R}^3\times\{0\}$,} \end{array} \right . \end{equation} where $\Phi_u : \mathbb{R}^3 \rightarrow \mathbb{R}$ is given by \eqref{eq:Phi} with $u(x) = U(x,0)$. As an essential ingredient for our uniqueness proof, we introduce the quadratic form \begin{equation}\label{eq:forma} \mathcal{A}_u[\psi] := \iint_{\mathbb{R}^4_+} |\nabla_{(x,t)} \psi|^2 \,dx\,dt + \int_{\mathbb{R}^3} \left(\Phi_u(x)-1\right) |\psi(x,0)|^2 \,dx \end{equation} with form domain $H^1(\mathbb{R}^4_+)$. Note that the second term in $\mathcal{A}_u$ is well-defined according to the Sobolev trace theorem. As an important fact, we now derive a lower bound. \begin{lemma}\label{nonneg} Suppose $u$ solves \eqref{eq:uniquemod} and define $\mathcal{A}_u[\psi]$ as in \eqref{eq:forma}. Then $\mathcal{A}_u[\psi]\geq 0$ for all $\psi\in H^1(\mathbb{R}^4_+)$. \end{lemma} The proof below shows that one has equality if and only if $\psi \in H^1(\mathbb{R}^4_+)$ is a multiple of $U \in H^1(\mathbb{R}^4_+)$, i.\,e., the harmonic extension of $u \in H^{1/2}(\mathbb{R}^3)$ to the half-space $\mathbb{R}^4_+$. In fact, this argument is reminiscent of Allegretto-Piepenbrink theory: If the Schr\"odinger equation has a positive supersolution, then the corresponding Schr\"odinger operator is non-negative. \begin{proof} Let $\psi\in H^1(\mathbb{R}^4_+)$. Since $U$ is strictly positive, we can define $\phi:=U^{-1} \psi$. Next, by abbreviating $\nabla=\nabla_{(x,t)}$, we find $$ |\nabla \psi|^2 = U^2 |\nabla \phi|^2 + |\phi|^2 |\nabla U|^2 + U\nabla U\cdot \nabla \left(|\phi|^2\right) \,. 
$$ Integrating by parts and using \eqref{eq:harmext}, we obtain \begin{align*} \iint_{\mathbb{R}^4_+} |\nabla_{(x,t)} \psi|^2 \,dx\,dt & = \iint_{\mathbb{R}^4_+} \left(U^2 |\nabla \phi|^2 + |\phi|^2 |\nabla U|^2 - \Div (U\nabla U) |\phi|^2 \right) \,dx\,dt \\ & \qquad - \int_{\mathbb{R}^3} \frac{\partial U}{\partial t}(x,0) U(x,0) |\phi|^2 \,dx \\ & = \iint_{\mathbb{R}^4_+} U^2 |\nabla \phi|^2 \,dx\,dt - \int_{\mathbb{R}^3} (\Phi_u-1) |\psi(x,0)|^2 \,dx . \end{align*} Thus we find $\mathcal{A}_u[\psi] = \iint U^2 |\nabla \phi|^2 \,dx\,dt$, which implies the assertion of Lemma \ref{nonneg}. \end{proof} \subsection{Proof of Theorem \ref{uniquemod}}\label{sec:uniquemodproof} We argue by contradiction. Let $u$ and $v$ be solutions of \eqref{eq:uniquemod} with $u\not\equiv v$. We first note that there exists an $x_0\neq 0$ such that $u(x_0)=v(x_0)$. Indeed, otherwise we have, by continuity (see Lemma \ref{smooth}), that $u > v$ almost everywhere; and hence $\Phi_u>\Phi_v$ almost everywhere. But the weak definition of \eqref{eq:uniquemod} implies $$ (u, \Phi_v v) = - \int_{\mathbb{R}^3} \left((-\Delta)^{1/4} u (-\Delta)^{1/4} v - uv \right) \,dx = (\Phi_u u, v) \,, $$ which is a contradiction. By real analyticity (Theorem \ref{anal}) there is a smallest positive $R$ such that $u(x)=v(x)$ for $|x|=R$. Exchanging the roles of $u$ and $v$ if necessary, we can assume that $u(x)>v(x)$ for $0<|x|<R$. We define $B_R:=\{x\in\mathbb{R}^3 : \ |x|< R \}$ and denote by $U$ and $V$ the harmonic extensions of $u$ and $v$. Thanks to the continuity of $u$ and $v$, the set $$S = \{(x,t)\in\mathbb{R}^4_+ :\ U(x,t)>V(x,t) \}$$ is open in $\mathbb{R}^4_+$, and $S$ can be decomposed into connected components. Let $\Omega \subset \overline{\mathbb{R}^4_+}$ denote the closure of the connected component of $S$ that contains $B_R \times \{0\}$ in its closure in $\overline{\mathbb{R}^4_+}$. We set $$ W(x,t) := \begin{cases} U(x,t)-V(x,t) & \text{if}\ (x,t)\in\Omega \,,\\ 0 & \text{otherwise}\,. 
\end{cases} $$ According to the properties established in Subsection \ref{sec:harmext}, we have $W\in H^1(\mathbb{R}^4_+)$ and \begin{equation}\label{eq:sigma} \begin{split} -\Delta W & = 0 \qquad \text{in} \ \mathbb{R}^4_+\setminus\partial\Omega \\ -\frac{\partial}{\partial t} W + \frac12 (\Phi_u+\Phi_v)W & = W - f \qquad \text{on} \ B_R\times\{0\} \,, \end{split} \end{equation} where $f:= \frac12 (\Phi_u-\Phi_v)(u+v)$. Indeed, the last property follows from the fact that if $x\in B_R$, then $W(x,t)=U(x,t)-V(x,t)$ for all sufficiently small $t>0$ and hence \begin{align*} -\frac{\partial}{\partial t} W & = -\frac{\partial}{\partial t} U +\frac{\partial}{\partial t} V = -\Phi_u U + \Phi_v V + W \\ & = -\frac12 (\Phi_u+\Phi_v)W - \frac12 (\Phi_u-\Phi_v)(U+V) + W \,. \end{align*} With \eqref{eq:sigma} at hand, we calculate $$ \iint_{\mathbb{R}^4_+} |\nabla_{(x,t)} W|^2 \,dx\,dt = -\int_{\mathbb{R}^3} \frac{\partial W}{\partial t} W \,dx = - \int_{\mathbb{R}^3} \left(\frac12 (\Phi_u+\Phi_v)-1 \right) W^2 \,dx - \int_{\mathbb{R}^3} fW \,dx \,, $$ which shows $$ \mathcal{A}_u[W] + \mathcal{A}_v[W] = -2 \int_{\mathbb{R}^3} fW \,dx = -2\int_{B_R} f W \, dx . $$ Here the last equality follows from our construction of $W$. Since $u>v$ on $B_R\setminus\{0\}$, we have $\Phi_u>\Phi_v$ and hence $f>0$ on that set. Moreover, we have $W \geq 0$ and $W \not \equiv 0$ on $B_R$. Therefore we infer that $\int_{B_R} fW \,dx>0$, which shows that $\mathcal{A}_u[W] + \mathcal{A}_v[W] < 0$. But this contradicts Lemma \ref{nonneg}. The proof of Theorem \ref{uniquemod} is now complete. \hfill $\blacksquare$ \fi \section{Real analyticity}\label{sec:anal} In this section, we prove that any real-valued solution of the equation \eqref{eq:eq1} is real-analytic, which is a substantial improvement of Lemma \ref{smooth} above. Our proof will derive pointwise exponential decay in Fourier space.
A similar argument has been applied in the analyticity proof of solitary waves for some nonlinear water wave equations in $d=1$ spatial dimension; see \cite{LiBo}. However, apart from the higher dimensionality, our case also involves a nonlocal nonlinearity. To deal with this difficulty, we derive exponential bounds in Fourier space for a coupled system of equations. Our main result on analyticity is as follows. \begin{theorem}\label{anal} Let $u\in H^{1/2}(\mathbb{R}^3)$ be a real-valued solution of \eqref{eq:eq1}. Then there exists a constant $\sigma>0$ and an analytic function $\tilde u$ on $\{z\in\mathbb{C}^3 :\ |\im z_j|<\sigma, 1\leq j\leq 3 \}$ such that $\tilde u(x)=u(x)$ if $x\in\mathbb{R}^3$. \end{theorem} Note that we do not assume $u$ to be non-negative. Moreover, our proof is independent of the radial symmetry established in Section \ref{sec:symmetry}. We follow the technique developed in \cite{LiBo}. The heart of the argument is contained in the following statement. \begin{proposition}\label{anallemma} Let $\lambda, \alpha >0$, $0\leq f\in L_1(\mathbb{R}^3)$ and $0\leq W\in L_1(\mathbb{R}^3, (1+|\xi|)d\xi)$ such that \begin{equation}\label{eq:anallemma} (|\xi|+\lambda)f \leq W * f \,, \qquad |\xi|^2 W \leq \alpha f * f \,. \end{equation} Then there exist non-negative functions $g_n$, $n\in\mathbb{N}_0$, and constants $a,b>0$ such that \begin{equation} \label{eq:analind} |\xi|^n f \leq g_n * f \,, \qquad \|g_n\|_1 \leq a b^{n} (2n+1)^{n-1} \,. \end{equation} In particular, if $f\in L_p(\mathbb{R}^3)$ for some $1\leq p \leq \infty$, then \begin{equation} \label{eq:analp} \| |\xi|^n f \|_p \leq a b^{n} (2n+1)^{n-1} \|f\|_p \,. \end{equation} \end{proposition} At several places in this proof we will use the so-called Abel identity, \begin{equation}\label{eq:abel} \sum_{l=0}^n \binom{n}{l} (l+a)^{l-1} (n-l+b)^{n-l-1} = \frac{a+b}{ab} (n+a+b)^{n-1} \,, \end{equation} see \cite[p. 18]{Ri}.
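Since Abel's identity is used repeatedly below, it is convenient to have an independent check. The following sketch (not part of the argument; illustrative parameter values only) verifies \eqref{eq:abel} in exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

def abel_lhs(n, a, b):
    # sum_{l=0}^n binom(n,l) * (l+a)^(l-1) * (n-l+b)^(n-l-1)
    return sum(comb(n, l) * (l + a) ** (l - 1) * (n - l + b) ** (n - l - 1)
               for l in range(n + 1))

def abel_rhs(n, a, b):
    # (a+b)/(ab) * (n+a+b)^(n-1)
    return (a + b) / (a * b) * (n + a + b) ** (n - 1)

# exact check for small n and a few rational parameters (illustrative values)
for n in range(8):
    for a, b in [(Fraction(1), Fraction(1)),
                 (Fraction(1, 2), Fraction(1, 2)),
                 (Fraction(2), Fraction(3, 4))]:
        assert abel_lhs(n, a, b) == abel_rhs(n, a, b)
```

Working in `Fraction` arithmetic avoids any floating-point ambiguity in the negative powers arising at $l=0$ and $l=n$.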
\begin{proof} We prove \eqref{eq:analind} by induction over $n$. For $n=0$, \eqref{eq:analind} follows from \eqref{eq:anallemma} with $g_0:=\lambda^{-1} W$ and any $a \geq \lambda^{-1} \|W\|_1$. Now let $n\geq 1$ and assume that \eqref{eq:analind} has already been shown for all smaller values of $n$. By the triangle inequality one has $|\xi|^{n-1} \leq \sum_{l=0}^{n-1} \binom{n-1}{l} |\xi-\eta|^l |\eta|^{n-1}$ and therefore by \eqref{eq:anallemma} and the induction hypothesis $$ |\xi|^n f \leq \sum_{l=0}^{n-1} \binom{n-1}{l} \left( |\eta|^l W\right) * \left( |\eta|^{n-1-l} f\right) \leq g_n * f $$ where $$ g_n := \sum_{l=0}^{n-1} \binom{n-1}{l} \left( |\eta|^l W\right) * g_{n-1-l} \,. $$ Hence \begin{equation} \label{eq:gnnorm} \| g_n \|_1 \leq \sum_{l=0}^{n-1} \binom{n-1}{l} \| |\eta|^l W\|_1 \|g_{n-1-l} \|_1 \,. \end{equation} Next, we estimate $\| |\eta|^l W\|_1$ for $l\geq 2$. For $m\in\mathbb{N}_0$ one has again by \eqref{eq:anallemma}, the triangle inequality and the induction hypothesis for $m<n$ $$ |\xi|^{m+2} W \leq \alpha \sum_{k=0}^m \binom{m}{k} \left(|\eta|^k f \right) * \left(|\eta|^{m-k} f \right) \leq \alpha \sum_{k=0}^m \binom{m}{k} g_k*f*g_{m-k}*f \,. $$ Hence \begin{align*} \| |\xi|^{m+2} W \|_1 & \leq \alpha \|f\|_1^2 \sum_{k=0}^m \binom{m}{k} \|g_k\|_1 \|g_{m-k}\|_1 \\ & \leq \alpha a^2 b^{m} \|f\|_1^2 \sum_{k=0}^m \binom{m}{k} (2k+1)^{k-1} (2(m-k)+1)^{m-k-1} \\ & = 2 \alpha a^2 b^{m} \|f\|_1^2 (2m+2)^{m-1} \,, \end{align*} where we used Abel's identity \eqref{eq:abel} in the last calculation. In order to simplify some arithmetics below, we estimate this by \begin{align}\label{eq:wdecay} \| |\xi|^l W \|_1 & \leq 2 \alpha a^2 b^{l-2} \|f\|_1^2 (2l+2)^{l-1} \end{align} for $l\geq 2$. If we choose $a^2 b^{l-2}$ large enough, then this holds also for $l=0$ and $l=1$. 
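The particular evaluation of Abel's identity used in the last display above — \eqref{eq:abel} with $a=b=\tfrac12$ after factoring out powers of $2$ — states that $\sum_{k=0}^m \binom{m}{k}(2k+1)^{k-1}(2(m-k)+1)^{m-k-1} = 2(2m+2)^{m-1}$. This can be confirmed in exact arithmetic as well (illustrative range of $m$ only):

```python
from fractions import Fraction
from math import comb

def weighted_sum(m):
    # sum_{k=0}^m binom(m,k) * (2k+1)^(k-1) * (2(m-k)+1)^(m-k-1)
    return sum(comb(m, k) * Fraction(2 * k + 1) ** (k - 1)
               * Fraction(2 * (m - k) + 1) ** (m - k - 1)
               for k in range(m + 1))

def closed_form(m):
    # the value 2*(2m+2)^(m-1) produced by Abel's identity
    return 2 * Fraction(2 * m + 2) ** (m - 1)

for m in range(12):
    assert weighted_sum(m) == closed_form(m)
```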
Plugging this into \eqref{eq:gnnorm} and using the induction hypothesis and again Abel's identity, we arrive at \begin{align*} \| g_n \|_1 & \leq 2 \alpha a^3 b^{n-3} \|f\|_1^2 \sum_{l=0}^{n-1} \binom{n-1}{l} (2l+2)^{l-1} (2(n-1-l)+1)^{n-l-2} \\ & = 3 \alpha a^3 b^{n-3} \|f\|_1^2 (2n+1)^{n-2} \,. \end{align*} This proves the assertion provided we have \begin{equation}\label{eq:abchoice} 3 \alpha a^2 \|f\|_1^2 \leq b^{3} (2n+1) \,. \end{equation} Let us show that such a choice of parameters $a$ and $b$ is possible. We fix the ratio $a/b$ by $a^2/b^2=\|W\|_1/(\alpha\|f\|_1^2)=:c^2$ with $c>0$, so that \eqref{eq:wdecay} holds for $l=0$. Now we choose $a$ (keeping the ratio $a/b$ fixed) to be $$ a := \max\{ \lambda^{-1} \|W\|_1,\, \||\xi|W\|_1/(2\alpha c\|f\|_1^2), \, \alpha c^3 \|f\|_1^2 \} \,. $$ Hence \eqref{eq:analind} holds for $n=0$, \eqref{eq:wdecay} holds for $l=1$, \eqref{eq:abchoice} holds for all $n\geq 1$, and the proof is complete. \end{proof} \begin{proof}[Proof of Theorem \ref{anal}] Let $u\in H^{1/2}$ be a real-valued solution of \eqref{eq:eq1} and $\hat u$ its Fourier transform \eqref{eq:fourier}. Then $$ |\xi| \hat u - w*\hat u=-\hat u $$ with $$ w(\xi) := \frac{1}{(2\pi)^3} \iint_{\mathbb{R}^3\times\mathbb{R}^3} \frac{u(y)^2 e^{-i\xi\cdot x}}{|x-y|} \,dx\,dy = \frac{1}{|\xi|^{2} 2 \pi^2} \int_{\mathbb{R}^3} u(y)^2 e^{-i\xi\cdot y} \,dy = \frac{\hat u * \hat u (\xi)}{2\pi^2 |\xi|^2 } \,. $$ Here we used that $u$ is real-valued. Hence $f:=|\hat u|$ satisfies \eqref{eq:anallemma} with $W:=|w|$, $\alpha= (2\pi^2)^{-1}$ and $\lambda=1$. We claim that the assumptions of Proposition \ref{anallemma} are satisfied. Indeed, by Lemma \ref{decay}, we have $u\in L_1$ and hence $f\in L_\infty$. Also, by Lemma \ref{smooth}, we conclude $f\in L_1$. This implies that $\hat u *\hat u\in L_1\cap L_\infty$ and hence $W\in L_1(\mathbb{R}^3, (1+|\xi|)d\xi)$.
Therefore we can apply Proposition \ref{anallemma} and obtain constants $a$ and $b$ such that $$ \sup_{\xi} \left| \exp(\tau |\xi|) \hat u(\xi) \right| \leq \sum_n \frac{\tau^n}{n!} \||\xi|^n f\|_\infty \leq \sum_n \alpha_n \tau^n $$ with $\alpha_n := (n!)^{-1} a b^{n} (2n+1)^{n-1} \|f\|_\infty$. Since we find $$ \frac{\alpha_{n+1}}{\alpha_n} = \frac{b (2n+3)^{n} }{ (n+1) (2n+1)^{n-1}} \to 2be \, , $$ the above supremum is finite for $\tau<\sigma:=(2be)^{-1}$. Thus the function $$ \tilde u(z) := (2\pi)^{-3/2} \int_{\mathbb{R}^3} e^{i(\xi_1 z_1+\xi_2 z_2+\xi_3 z_3)} \hat u(\xi) \,d\xi $$ is analytic in $\{z\in\mathbb{C}^3 :\ |\im z_j|<\sigma, 1\leq j\leq 3 \}$ and it coincides with $u$ on $\mathbb{R}^3$ by Plancherel's theorem. \end{proof} \begin{remark} \label{rem:Qmanal} As a further corollary of Proposition \ref{anallemma}, we note that real-analyticity also follows for real-valued solutions $u \in H^{1/2}(\mathbb{R}^3)$ satisfying $$ \sqrt{-\Delta + m^2} \, u - \big ( u^2 \ast |x|^{-1} \big ) u = -\mu u, $$ where $m>0$ and $\mu > -m$ are given parameters. \end{remark} \iffalse \section{Nondegeneracy}\label{sec:nondeg} Our goal in this section is to prove Theorem \ref{kernel} characterizing the kernel of the self-adjoint (nonlocal) operator $L_+$ given by $$ L_+ \xi = \sqrt{-\Delta} \, \xi + \xi - \big ( Q^2 \ast |x|^{-1} \big ) \xi - 2 Q \big ( (Q \xi) \ast |x|^{-1} \big ), $$ which acts on $L^2(\mathbb{R}^3)$ with operator domain $H^1(\mathbb{R}^3)$. Here $Q(x) > 0$ is the unique radial solution of \eqref{eq:eq} as given by Theorem \ref{unique}.
Since $L_+$ commutes with rotations (due to the radiality of $Q$), there are self-adjoint operators $L_{+,\ell}$ in $L^2(\mathbb{R}_+, r^2 dr)$, with $\ell = 0,1,2,\ldots$, such that if we decompose $u\in H^1(\mathbb{R}^3)$ into spherical harmonics \begin{equation*} u(x) = \sum_{\ell = 0}^\infty \sum_{m=-\ell}^\ell f_{\ell m}(r) Y_{\ell m}(\Omega)\,, \quad \mbox{with $x=r\Omega,\ r=|x|$ and $\Omega\in\mathbb{S}^2\, ,$} \end{equation*} then \begin{equation} \left(L_+ u \right)(x) = \sum_{\ell = 0}^\infty \sum_{m=-\ell}^\ell \left( L_{+,\ell} f_{\ell m} \right)(r) Y_{\ell m}(\Omega) \,. \end{equation} More explicitly, we have that \begin{equation} L_{+,\ell} = \sqrt{-\Delta_{\ell}} + 1 + V + W_\ell \,, \end{equation} where \begin{equation} \label{eq:Delta_ell} -\Delta_\ell := -\partial_r^2 - (2/r) \partial_r + \ell (\ell + 1)/r^2 \end{equation} denotes the action of $-\Delta$ on the sector of angular momentum $\ell$, and $V$ denotes multiplication with the function \begin{equation} \label{eq:defV} V(r) = -\big ( Q^2*|x|^{-1}\big )(r) \,, \end{equation} and $W_\ell$ denotes the nonlocal linear operator defined by \begin{equation} \label{eq:defW} (W_\ell f)(r) = -\frac{8\pi}{2 \ell + 1} Q(r) \int_0^\infty \frac{r^\ell_<}{r^{\ell+1}_>} Q(s) f(s) s^2 \, ds \,, \end{equation} where we put $r_< = \min\{r,s\}$ and $r_> = \max\{r,s\}$. (The formula for $W_\ell$ is an easy consequence of the expansion of the kernel $|x-y|^{-1}$ into spherical harmonics; see also \cite{Le} for more details.) We break the proof of Theorem \ref{kernel} into two parts as follows. When $L_+$ is restricted to radial functions (i.\,e., the angular momentum sector corresponding to $\ell = 0$), the kernel of $L_+$ is trivial: \begin{proposition}\label{trivker} We have $\ker L_{+,0} = \{0\}$. \end{proposition} Furthermore, the kernel of $L_{+,\ell}$ in the sectors of higher angular momentum $\ell \geq 1$ is found to be the following.
\begin{proposition}\label{kernelnonrad} We have $\ker L_{+,1} = \spa \{\partial_r Q \}$ and $\ker L_{+,\ell} = \{0\}$ for all $\ell\geq 2$. \end{proposition} \subsection{Proof of Proposition \ref{trivker}} Our strategy for proving that the equation $L_+ v =0$ has no non-trivial radially symmetric solution is as follows. Assume $v$ is a radially symmetric solution. In Lemma \ref{inhom} we shall find a radial solution $R$ of the inhomogeneous equation $L_+ R = - Q$. Putting $$ \alpha := 1 -2 \int_{\mathbb{R}^3} \frac{Q R}{|x|} \,dx \,, \qquad \beta := 2 \int_{\mathbb{R}^3} \frac{Q v}{|x|} \,dx \,, $$ and introducing the (non-self-adjoint) operator $$ \mathcal L_+ u := L_+ u + 2 Q \int_{\mathbb{R}^3} \frac{Q u}{|x|} \,dx \, , $$ we find that $\mathcal L_+ (\alpha v+\beta R) = 0$. Note that $\alpha v+\beta R$ is radially symmetric. The key step will be to show that $\mathcal L_+ w = 0$ has only trivial radially symmetric solutions; see Proposition \ref{trivkermod}. Once this is known, we conclude that $\alpha v+\beta R\equiv 0$. This implies $0=\alpha L_+ v=-\beta L_+ R=\beta Q$ and therefore $\beta=0$. But then $\mathcal L_+ v= L_+ v =0$ and again, by Proposition \ref{trivkermod}, we conclude $v\equiv 0$, which proves the assertion. We turn now to the proof of the two ingredients of this argument, Lemma \ref{inhom} and Proposition \ref{trivkermod}. \begin{lemma}\label{inhom} There exists a radial, real-analytic $R\in H^2(\mathbb{R}^3)$ such that $L_+ R= -Q$. \end{lemma} \begin{proof} Let $Q_\mu(x) = \mu^{3/2} Q(\mu x)$ for $\mu >0$, and define $$ R := \frac\partial{\partial\mu} \big |_{\mu=1} Q_\mu = \frac32 Q + x \cdot\nabla Q = \frac32 Q + |x|\partial_r Q \,. $$ Obviously, the function $R$ is radial and according to the properties of $Q$ it belongs to $H^2$. Moreover, by Theorem \ref{anal}, we deduce that $R$ is real-analytic. 
Differentiating the equation $$ \sqrt{-\Delta} Q_\mu - \big (Q_\mu^2 * |x|^{-1} \big ) Q_\mu = -\mu Q_\mu $$ with respect to $\mu$ and evaluating at $\mu=1$ we obtain $L_+ R= -Q$, as claimed. \end{proof} Here is the key step in the proof of Proposition \ref{trivker}. \begin{proposition}\label{trivkermod} If $v\in \ker \mathcal L_+$ is radial, then $v\equiv 0$. \end{proposition} \begin{proof} Let $v\in\ker\mathcal L_+$ be radial. By considering the real and the imaginary part of $v$ separately, we may assume that $v$ is real-valued. We begin with three preliminaries. First, by Newton's theorem, \begin{equation} \label{eq:trivkereq} 0=\mathcal L_+ v = L_- v + f \end{equation} where \begin{equation} L_- = \sqrt{-\Delta} + 1 - \big (Q^2 \ast |x|^{-1} \big ) \end{equation} and \begin{equation}\label{eq:f} f(x):= 2 Q(x) \int_{|y|<|x|} \left(|y|^{-1} - |x|^{-1} \right) Q(y)v(y) \,dy \,. \end{equation} Second, note that $v$ is real-analytic. Indeed, if $\tilde v := v + \gamma R$ with $R$ from Lemma \ref{inhom} and $\gamma:= - 2 \int |x|^{-1} Q v\,dx$, then $L_+\tilde v=0$. As we shall see in Proposition \ref{anal2} below, $\tilde v$, and hence by Lemma \ref{inhom} also $v$, are real-analytic. Third, we have $v\in \dot H^{-1/2}$. Indeed, in the proof of Proposition \ref{anal2} we have shown that $\hat v \in L_2\cap L_\infty$, so in particular it is square-integrable with respect to the measure $|\xi|^{-1}\,d\xi$. After these preliminaries, we now turn to the proof of Proposition \ref{trivkermod}. We shall argue by contradiction assuming that $v\not\equiv 0$ and distinguish two cases according to whether $v$ changes sign or not. We begin by assuming that $v$ does not change sign. To this end, we observe that $L_-$ is self-adjoint and satisfies $L_- Q=0$. Therefore, by \eqref{eq:trivkereq}, we obtain \begin{equation*} 0 =(L_-Q,v)=(Q,L_-v) = - 2 \int_{\mathbb{R}^3} Q^2(x) \int_{|y|<|x|} \left(|y|^{-1} - |x|^{-1} \right) Q(y)v(y) \,dy \, dx.
\end{equation*} This, however, is a contradiction, since the integrand in the last integral is of definite sign and not identically zero. Hence from now on we assume that $v$ changes sign. We distinguish two cases according to whether $v$ vanishes at the origin or not. \emph{Case 1: $v(0) \neq 0$}. Assume that $v(0)\neq 0$. Since $\mathcal{L}_+ v =0$ is a linear equation, we can multiply $v$ by any constant. In particular, we can assume that $v(0) > Q(0)$ holds. Moreover, since $v$ is continuous and changes its sign, there is a first intersection radius $R >0$ such that $v>Q$ in $B_R:=\{|x|<R\}$ and $v=Q$ on $\partial B_R$. Let $V$ and $\Psi$ be the harmonic extensions of $v$ and $Q$ to the halfspace $\mathbb{R}^4_+$. The set $S=\{(x,t)\in\mathbb{R}^4_+: V(x,t)>\Psi(x,t) \}$ is open in $\mathbb{R}^4_+$, and we denote by $\Omega\subset \overline{\mathbb{R}^4_+}$ the closure of the connected component of $S$ in $\mathbb{R}^4_+$ which contains $B_R\times\{0\}$ in its closure in $\overline{\mathbb{R}^4_+}$. Next, we define $$ \Sigma(x,t) := \begin{cases} V(x,t)-\Psi(x,t) & \text{if}\ (x,t)\in\Omega \,,\\ 0 & \text{otherwise}\,. \end{cases} $$ Similarly as in Subsection \ref{sec:harmext}, we see that $\Sigma\in H^1(\mathbb{R}^4_+)$ holds (here we use that $v\in \dot H^{-1/2}\cap \dot H^{1/2}$, as remarked above) and \begin{equation*} \begin{split} -\Delta \Sigma & = 0 \qquad \text{in} \ \mathbb{R}^4_+\setminus\partial\Omega \\ -\frac{\partial}{\partial t} \Sigma - \big (Q^2*|x|^{-1} \big ) \Sigma & = -\Sigma - f \qquad \text{on} \ B_R\times\{0\} \end{split} \end{equation*} with $f$ from \eqref{eq:f}. This implies $$ \iint_{\mathbb{R}^4_+} |\nabla_{(x,t)} \Sigma|^2 \,dx\,dt = -\int_{\mathbb{R}^3} \frac{\partial\Sigma}{\partial t} \Sigma \,dx = \int_{\mathbb{R}^3} \left(Q^2*|x|^{-1} - 1 \right) \Sigma^2 \,dx - \int_{\mathbb{R}^3} f\Sigma \,dx \,. 
$$ With $\mathcal{A}_Q$ from \eqref{eq:forma} and by the construction of $\Sigma$, we find $$ \mathcal{A}_Q[\Sigma] = -\int_{\mathbb{R}^3} f\Sigma \,dx = -\int_{B_R} f \Sigma \,. $$ Similarly as in the proof of Theorem \ref{uniquemod}, the right side is strictly negative. But this contradicts Lemma \ref{nonneg}. Hence the case $v(0) \neq 0$ cannot occur. \emph{Case 2: $v(0) = 0$.} Finally, we assume that $v(0)=0$. Then, by real analyticity, there exist $\epsilon,\rho>0$ such that, after replacing $v$ by $-v$ if necessary, $0=v(0)<v(x)<\epsilon$ if $0<|x|<\rho$ and $v(x)=\epsilon$ for $|x|=\rho$. If $V$ denotes the harmonic extension of $v$, then the set $S=\{(x,t) : V(x,t)<\epsilon\}$ is open in $\mathbb{R}^4_+$. Let $\Omega\subset \overline{\mathbb{R}^4_+}$ be the closure of the connected component of $S$ whose closure in $\overline{\mathbb{R}^4_+}$ contains $B_\rho\times\{0\}$. By the strong maximum principle, we have $V(x,t)>0$ for all $(x,t)$ in the interior of $\Omega$. Moreover, the classical Hopf lemma for harmonic functions (see, e.\,g., \cite[Section 6.4.2]{Ev}) says that $$\frac{\partial V}{\partial t}(0,0)>0.$$ (Note that $(0,0)$ satisfies an interior ball condition in $\Omega$, since $V(x,t)<\epsilon$ for all $(x,t)$ in a neighborhood of $(0,0)$.) On the other hand, we have $$\frac{\partial V}{\partial t}(0,0)=-\sqrt{-\Delta}v(0)=(-Q^2*|x|^{-1}(0)+1) v(0) +f(0)=0,$$ using the equation $\mathcal{L}_+ v=0$. Again, this is a contradiction and completes the proof of Proposition \ref{trivkermod}. \end{proof} \subsection{Proof of Proposition \ref{kernelnonrad}} With Proposition \ref{trivkermod} at hand, the arguments in this subsection closely follow those given in \cite{Le}. The key ingredient in the proof of Proposition \ref{kernelnonrad} is the following property. \begin{lemma} \label{lem:posimp} For each $\ell \geq 0$, the operator $L_{+,\ell}$ in $L_2(\mathbb{R}_+, r^2 dr)$ enjoys the Perron-Frobenius property.
That is, if $\inf \spec\left(L_{+,\ell}\right)$ is an eigenvalue, then it is nondegenerate and the corresponding eigenfunction can be chosen strictly positive. \end{lemma} \begin{proof} By \cite[Thm. XIII.43]{ReSi} it suffices to prove that $(L_{+,\ell} + \mu)^{-1}$ is positivity improving for some $\mu > 0$. First, we claim that \begin{equation}\label{posimpr} \mbox{$(\sqrt{-\Delta_\ell} + \mu)^{-1}$ is positivity improving for all $\mu > 0$,} \end{equation} where $\sqrt{-\Delta_\ell}$ denotes the restriction of $\sqrt{-\Delta}$ to the sector of angular momentum $\ell$; cf.~\eqref{eq:Delta_ell}. To prove our claim \eqref{posimpr}, we recall the subordination formula $$ e^{-|a|} = \frac{1}{\sqrt{\pi}} \int_0^\infty \frac{e^{-u}}{\sqrt{u}} e^{-a^2/4u} \, du \,, $$ for any real number $a$. Hence we obtain \begin{equation} \label{eq:sub} \exp(-t \sqrt{-\Delta_\ell}) = \frac{1}{\sqrt{\pi}} \int_0^\infty \frac{e^{-u}}{\sqrt{u}} \exp( t^2 \Delta_\ell /4u) \, du \,. \end{equation} From \cite{Le} we know that $\exp(\tau \Delta_\ell)$ is positivity improving for all $\tau > 0$, and hence the subordination formula \eqref{eq:sub} implies that $\exp(-t \sqrt{-\Delta_\ell})$ is positivity improving for all $t>0$. The claim \eqref{posimpr} now follows by writing $(\sqrt{-\Delta_\ell}+\mu)^{-1} = \int_0^\infty e^{-t \mu} \exp(-t \sqrt{-\Delta_\ell}) \, dt$. Next, we recall the formulas \eqref{eq:defV} and \eqref{eq:defW}. Since $-V$ is a positive function, the operator of multiplication with this function is positivity improving. Moreover, the operator $-W_\ell$ has a positive integral kernel and hence it is positivity improving as well. Using that $-V-W_\ell$ is a bounded operator, a simple resolvent expansion now shows that $(L_{+,\ell} + \mu)^{-1}= (\sqrt{-\Delta_{\ell}} + 1 + V + W_\ell +\mu)^{-1}$ is positivity improving, provided that $\mu >0$ is sufficiently large.
\end{proof} We are now in position to give the \begin{proof}[Proof of Proposition \ref{kernelnonrad}] We start with the case $\ell=1$. By differentiating the equation $L_- Q=0$ with respect to $x$, we obtain $L_+ \partial_{x_i} Q = 0$ for $i=1,2,3$. Since $\partial_{x_i} Q = Q'(r) \frac{x_i}{|x|}$ belongs to the sector of angular momentum $\ell =1$, we conclude that $L_{+,1} Q' = 0$ and therefore $Q'$ is an eigenfunction of $L_{+,1}$. Moreover, since $Q$ is decreasing, we have $Q' \leq 0$. In view of Lemma \ref{lem:posimp}, we conclude that $Q'$ is (up to a normalization) the unique ground state eigenfunction of $L_{+,1}$, which yields \begin{equation}\label{eq:kernelnonrad1} \inf\spec L_{+,1} = 0\,. \end{equation} Next, let $\ell\geq 2$. We show that $\inf\spec L_{+,\ell} > 0$ holds. In particular, this will imply that $L_{+,\ell}$ has a trivial kernel. Clearly, we may assume that $\inf\spec L_{+,\ell} <1$ (since otherwise there is nothing to prove). Since $\inf\text{ess-}\spec L_{+,\ell} =1$, we see that Lemma \ref{lem:posimp} implies that $\inf\spec L_{+,\ell}$ is a simple eigenvalue corresponding to a strictly positive eigenfunction $\phi_{\ell} > 0$. Next, we observe that $-\Delta_\ell\geq-\Delta_1$. Since taking square-roots is operator-monotone, we find $$ A_\ell := ( \phi_{\ell}, \sqrt{-\Delta_\ell} \phi_{\ell} ) - ( \phi_{\ell}, \sqrt{-\Delta_{1}} \phi_{\ell} ) \geq 0. $$ (In fact, the inequality is strict, but we do not need this here.) Next, we consider the difference $$ B_\ell := ( \phi_{\ell}, W_\ell \phi_{\ell} ) - ( \phi_{\ell}, W_{1} \phi_{\ell} ) = \frac{8\pi}3\int_0^\infty \int_0^\infty \phi_{\ell}(r) Q(r) w_\ell(r,s) Q(s) \phi_\ell(s) r^2\,dr\,s^2\,ds $$ with $w_\ell(r,s)=\frac{r_<}{r_>^2} (1- \frac{3}{2\ell+1} \frac{r_<^{\ell-1}}{r_>^{\ell-1}})>0$ for $r,s >0$. Since $\phi_\ell$ and $Q$ are strictly positive, we deduce that $B_\ell > 0$ holds.
In summary, this shows \begin{align*} \inf\spec L_{+,\ell} & = ( \phi_{\ell}, L_{+,\ell} \phi_{\ell} ) = ( \phi_\ell, L_{+,1} \phi_\ell ) + A_\ell + B_\ell \\ & > ( \phi_\ell, L_{+,1} \phi_\ell ) \geq \inf\spec L_{+,1} =0 \,. \end{align*} Thus $0$ cannot be an eigenvalue of $L_{+,\ell}$ when $\ell \geq 2$. This completes the proof of Proposition~\ref{kernelnonrad}. \end{proof} \fi \section{Real analyticity II} \label{sec:anal2} In this section, we establish (as an additional result) the analyticity of kernel elements of the linearized operator associated with $Q$ solving \eqref{eq:eq}. Although the arguments will closely follow those of Section \ref{sec:anal}, we provide the details of the (tedious) adaptation. Suppose that $Q \in H^{1/2}(\mathbb{R}^3)$ is a real-valued solution to \eqref{eq:eq1}. According to Theorem \ref{anal}, $Q$ is real-analytic. We consider the associated linearized operator $$ L_+ \xi = \sqrt{-\Delta} \, \xi + \xi - \big ( Q^2 \ast |x|^{-1} \big ) \xi - 2 Q \big ( (Q \xi) \ast |x|^{-1} \big ). $$ This defines a self-adjoint operator in $L^2(\mathbb{R}^3)$ with operator domain $H^1(\mathbb{R}^3)$. We have the following result. \begin{proposition}\label{anal2} If $v\in\ker L_+$, then $v$ is real-analytic. More precisely, there exists a constant $\sigma>0$ and an analytic function $\tilde v$ on $\{z\in\mathbb{C}^3 : |\im z_j|<\sigma, \, 1\leq j\leq 3 \}$ such that $\tilde v(x)=v(x)$ if $x\in\mathbb{R}^3$. \end{proposition} The proof of Proposition \ref{anal2} will be given at the end of the section. First, we establish the following auxiliary fact. \begin{lemma}\label{nondegdecay} Assume that $v\in\ker L_+$ is radial. Then $v\in L_1$ and hence $\hat v\in L_\infty$. \end{lemma} \begin{proof} We have $$ v=(\sqrt{-\Delta}+1)^{-1} (f_1 + f_2) $$ where $f_1:= (Q^2*|x|^{-1}) v$ and $f_2:=2((Qv)*|x|^{-1})Q$.
Let us first consider $f_2$, for which we note that \begin{equation*} |f_2(x)| \leq C (1+|x|)^{-4} | (Qv) \ast |x|^{-1}|(x) \leq C (1 + |x|)^{-5} \,. \end{equation*} Here, the pointwise bound on $Q$ comes from Lemma \ref{decay} and the pointwise bound on $(Qv) \ast |x|^{-1}$ follows from combining Hardy's inequality (to get an $L_\infty$-bound) and Newton's theorem to conclude that $|(Qv) \ast |x|^{-1}|(x) \leq C/|x|$ for $|x| >0$. Next, we note that $f_1\in L_{6/5+}$, since $Q^2*|x|^{-1}\in L_{3+}$ and $v\in L_2$. Since $(\sqrt{-\Delta}+1)^{-1}$ is given by convolution with a function in $L_1$ (indeed, a function bounded by a constant times $\min\{|x|^{-2},|x|^{-4}\}$), we conclude that $v\in L_{6/5+}$. But this implies that $f_1\in L_1$, so that $v$ is the convolution of an $L_1$ kernel with the $L_1$ function $f_1+f_2$; hence $v$ must be in $L_1$. \end{proof} As a next step, and similarly as in Section \ref{sec:anal}, we prove the following statement. \begin{lemma}\label{analker} Let $W$ and $V$ be non-negative functions on $\mathbb{R}^3$ satisfying for some $a,b>0$ and all $n\in\mathbb{N}_0$ and $1\leq p\leq \infty$ \begin{equation}\label{eq:analkerass} \| |\xi|^n W \|_1 \leq ab^{n} (2n+1)^{n-1} \,, \qquad \| |\xi|^n V \|_p \leq ab^{n} (2n+1)^{n-1} \,. \end{equation} Let $\lambda >0$, $0\leq f\in L_2(\mathbb{R}^3)\cap L_\infty(\mathbb{R}^3)$ and $g\geq 0$ measurable such that \begin{equation}\label{eq:analker} \begin{split} (|\xi|+\lambda)f \leq W * f + V*g \,, \qquad |\xi|^2 g \leq V*f \,. \end{split} \end{equation} Then there exist $\tilde a,\tilde b>0$ such that for all $n\in\mathbb{N}_0$, \begin{equation} \label{eq:analkerind} \| |\xi|^n f \|_\infty \leq \tilde a \tilde b^{n} (2n+1)^{n-1} \,, \qquad \| |\xi|^{n+2} g \|_\infty \leq \tilde a \tilde b^{n+2} (2(n+2)+1)^{(n+2)-1} \,. \end{equation} \end{lemma} \begin{proof} We begin by showing that $g$ and $|\xi|g$ are integrable and that $|\xi|^2 g$ is bounded.
To see this, note that since $V\in L_1\cap L_2$ and $f\in L_2$, $h:=|\xi|^2 g \leq V*f \in L_2 \cap L_\infty$ and therefore $$ \int_{\mathbb{R}^3} g \,d\xi \leq \|h\|_\infty \int_{|\xi|<1} |\xi|^{-2} \,d\xi + \|h\|_2 \left( \int_{|\xi|>1} |\xi|^{-4} \,d\xi \right)^{1/2} <\infty \,. $$ Using this information, as well as $W\in L_1$, $f\in L_2$, $V\in L_2$, we find $|\xi| f \leq W * f + V*g \in L_2$. By the triangle inequality, $|\xi| h \leq |\xi|(V*f) \leq (|\eta|V)*f + V*(|\eta| f) \in L_2\cap L_\infty$, and therefore $$ \int_{\mathbb{R}^3} |\xi| g \,d\xi \leq \|h\|_\infty \int_{|\xi|<1} |\xi|^{-1} \,d\xi + \||\xi| h\|_2 \left( \int_{|\xi|>1} |\xi|^{-4} \,d\xi \right)^{1/2} <\infty \,. $$ We define $$ \tilde a := \max\{\|f\|_\infty , \| g \|_1 \} \,. $$ Since \eqref{eq:analkerass} remains true if $b$ is increased, we may assume that $$ b \geq \max\{ \left(\| |\xi|^2 g \|_\infty/ (5 \tilde a) \right)^{1/2}, \| |\xi| g \|_1 / \tilde a , 2\tilde a, \left( 2\tilde a \right)^{1/2}/7 \} $$ Note that these choices imply that \begin{equation}\label{eq:g1est} \| |\xi|^{l} g \|_1 \leq \tilde a b^{l} (2l+1)^{l-1} \qquad \text{for } l=0,1 \,. \end{equation} Having modified $b$ in this way, we shall prove \eqref{eq:analkerind} with $\tilde b=b$ (and $\tilde a$ as defined above). We proceed by induction with respect to $n\in\mathbb{N}_0$. For $n=0$ the assertion is an immediate consequence of our choices for $\tilde a$ and $b$. Now let $n\geq 1$ and assume that \eqref{eq:analkerind} has already been shown for all smaller values of $n$. 
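Both explicit integrals appearing in the integrability bounds above are finite; in fact, in three dimensions each equals $4\pi$ (namely $\int_{|\xi|<1}|\xi|^{-2}\,d\xi$ and $\int_{|\xi|>1}|\xi|^{-4}\,d\xi$). As a quick numerical aside (not part of the proof; midpoint rule with an arbitrary illustrative cutoff $R=200$ for the outer integral):

```python
import math

def radial_integral(f, a, b, n=100000):
    # midpoint rule for int_a^b f(r) * 4*pi*r^2 dr (radial integral in R^3)
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        r = a + (i + 0.5) * h
        total += f(r) * 4.0 * math.pi * r * r
    return total * h

# int_{|xi|<1} |xi|^{-2} dxi = 4*pi, and int_{1<|xi|<R} |xi|^{-4} dxi -> 4*pi as R grows
inner = radial_integral(lambda r: r ** -2, 0.0, 1.0)
outer = radial_integral(lambda r: r ** -4, 1.0, 200.0)  # R = 200 is an illustrative cutoff
assert abs(inner - 4.0 * math.pi) < 1e-6
assert abs(outer - 4.0 * math.pi) < 0.1
```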
By \eqref{eq:analker} and the triangle inequality $$ |\xi|^n f \leq \sum_{l=0}^{n-1} \binom{n-1}{l} \left( \left( |\eta|^l W \right) * \left(|\eta|^{n-1-l} f \right) + \left(|\eta|^l V \right) * \left(|\eta|^{n-1-l} g \right) \right) $$ and therefore \begin{align*} \| |\xi|^n f \|_\infty \leq \sum_{l=0}^{n-1} \binom{n-1}{l} \| |\xi|^l W \|_1 \||\xi|^{n-1-l} f \|_\infty & + \sum_{l=0}^{n-3} \binom{n-1}{l} \| |\xi|^l V \|_1 \| |\xi|^{n-1-l} g \|_\infty \\ & + \sum_{l=n-2}^{n-1} \binom{n-1}{l} \| |\xi|^l V \|_\infty \| |\xi|^{n-1-l} g \|_1 \,. \end{align*} (The middle sum should be discarded if $n\leq 2$.) Hence by the induction hypothesis, by \eqref{eq:analkerass} and \eqref{eq:g1est} $$ \| |\xi|^n f \|_\infty \leq 2a \tilde a b^{n-1} \sum_{l=0}^{n-1} \binom{n-1}{l} (2l+1)^{l-1} (2(n-l-1)+1)^{n-l-2} = 4 a \tilde a b^{n-1} (2n)^{n-2} \,. $$ In the last calculation we used Abel's identity \eqref{eq:abel}. This proves the first assertion in \eqref{eq:analkerind}, provided we have $$ 4 a \tilde a b^{n-1} (2n)^{n-2} \leq a b^n (2n+1)^{n-1} $$ for all $n\geq 1$. It is easy to see that this holds if $2\tilde a\leq b$, which holds by the choice of $b$. To prove the second assertion in \eqref{eq:analkerind} we proceed similarly as before, using the triangle inequality to get $$ |\xi|^{n+2} g \leq \sum_{l=0}^{n} \binom{n}{l} \left( |\eta|^l V \right) * \left(|\eta|^{n-l} f \right) $$ and hence $$ \| |\xi|^{n+2} g \|_\infty \leq \sum_{l=0}^{n} \binom{n}{l} \| |\eta|^l V \|_1 \| |\eta|^{n-l} f \|_\infty \,. $$ Relation \eqref{eq:analkerass} together with the estimates for $|\xi|^{n-l} f$ (which we have already proved) and Abel's identity \eqref{eq:abel} imply that $$ \| |\xi|^{n+2} g \|_\infty \leq a \tilde a b^{n} \sum_{l=0}^{n} \binom{n}{l} (2l+1)^{l-1} (2(n-l) +1)^{n-l-1} = 2 a \tilde a b^{n} (2n+2 )^{n-1} \,.
$$ This proves the second assertion in \eqref{eq:analkerind}, provided we have $$ 2 a \tilde a b^{n} ( 2n+2 )^{n-1} \leq a b^{n+2} (2(n+2)+1)^{(n+2)-1} $$ for all $n\geq 1$. It is easy to see that this holds if $\tilde a\leq 49 b^2/2$, which holds by the choice of $b$. This completes the proof of Lemma \ref{analker}. \end{proof} \begin{proof}[Proof of Proposition \ref{anal2}] For the Fourier transform $\hat v$, the equation $L_+ v=0$ leads to $$ |\xi| \hat v - w*\hat v -\tilde w*\hat Q = - \hat v $$ with $w$ as in the proof of Theorem \ref{anal} and $$ \tilde w(\xi) := \frac{ \hat Q * \hat v (\xi)}{\pi^2 |\xi|^2 } \,. $$ Hence $f:=|\hat v|$ and $g:=|\tilde w|$ satisfy \eqref{eq:analker} with $W:=|w|$ and $V:= \pi^{-1}|\hat Q|$. (Indeed, we know that $\hat Q>0$, but we do not need this fact.) Now the assumptions \eqref{eq:analkerass} can be deduced from \eqref{eq:analp} and \eqref{eq:wdecay} after modifying $a$ and $b$. (Strictly speaking, we use the equation before \eqref{eq:wdecay} which yields the term $(2l+2)$ in \eqref{eq:wdecay} replaced by $(2l+1)$.) Moreover, $f\in L_\infty$ by Lemma \ref{nondegdecay}. Now the statement of Proposition \ref{anal2} follows as in the proof of Theorem \ref{anal}. \end{proof} \bibliographystyle{amsalpha}
\section{Introduction} \label{sect_intro} Stellar explosions, broadly referred to as supernovae (SNe), are understood to stem from a sudden release of energy either associated with the collapse of the degenerate core of a massive star or from the thermonuclear combustion of fresh fuel deep inside the stellar envelope. Whether one or the other mechanism occurs seems to depend on the main-sequence mass of the progenitor star, with core collapse occurring systematically if its value is above $\sim$8\,M$_{\odot}$\, \citep[hereafter WHW02]{WHW02}. Interestingly, whatever the mechanism, the typical kinetic energy of SN ejecta is on the order of 10$^{51}$\,erg, as inferred for example for the well-studied SN 1987A (Type II peculiar; \citealt{blinnikov_etal_2000}), for SN 1999em (Type II-Plateau, hereafter II-P; \citealt{utrobin_07}), or for the very uniform set of events that Type Ia SNe constitutes \citep{WKB07_Ia_lc}. The cause of this apparent degeneracy in explosion energy is, paradoxically, perhaps not so much tied to the mechanism itself, but instead to the rather uniform total envelope binding energy of the progenitor stars, on the order of 10$^{51}$\,erg; anything falling short of that leads to a fizzle and no SN display. \begin{figure*} \epsfig{file=f1a.ps,width=8.5cm} \epsfig{file=f1b.ps,width=8.5cm} \caption{{\it Left:} Comparison of absolute-$V$-band-magnitude light curves for an illustrative and non-exhaustive sample of SNe, SN impostors and/or erupting LBVs (violet: SN1999em, \citealt{leonard_etal_02a,DH06_SN1999em}; blue: SN1999gi, \citealt{leonard_etal_02b}; turquoise: SN 2005cs, \citealt{pastorello_etal_09}; green: SN1999br, \citealt{pastorello_etal_09}; light green: $\eta$ Car, \citealt{frew_04}; yellow: SN 1997bs, \citealt{vandyk_etal_00}; red: NGC2363-V1, \citealt{drissen_etal_01,petit_etal_06}). For each, we adopt the distance and reddening given in the associated references. The time origin is that of maximum recorded brightness.
Note that all these objects have comparable effective temperatures on the order of 10,000\,K, hence comparable bolometric correction, making the comparison of their absolute $V$-band magnitude meaningful. We also show on the left side and in black the absolute visual magnitude of the galactic red-supergiant stars studied by \citet[black]{levesque_etal_05}, as well as the {\it bolometric} magnitude of the O star models computed by \citet[gray; we use the bolometric magnitude here since O stars are hot and have large bolometric corrections]{martins_etal_05}. {\it Right:} Same as left, but now zooming in on the time of maximum brightness (the color coding is the same as in the left panel). Notice the stark contrast between SN light curves, associated with shorter/brighter events, and erupting massive stars, associated with longer/fainter events. Importantly, notice the overlap between the intrinsic brightness of $\eta$ Car and that of the low-luminosity Type II-P SN 1999br. In this work, we propose that this diversity of radiative displays may be accommodated by a {\it common, explosive, origin}. \label{fig_obs_lc} } \end{figure*} The last decade of observations of such transient phenomena has shown, however, that the radiative signatures associated with SNe (as classified in circulars) are very diverse, from very faint to very luminous, from fast-evolving to slow-evolving or fast-expanding to slow-expanding. Little of this diversity has been observed in Type Ia SNe, apart from a few peculiar events such as SN 2002ic (presence of narrow hydrogen lines in an otherwise standard Type Ia spectrum; \citealt{hamuy_iau_02ic}) or SNLS-03D3bb (possible Type Ia SN from a super-Chandrasekhar white dwarf star; \citealt{howell_etal_06}). In contrast, there has been a rich diversity in explosions associated (perhaps erroneously at times) with massive stars and the mechanism of core collapse.
We have observed 1) Type Ic SNe, associated or not with a long-soft $\gamma$-ray burst, and with a standard or a very large kinetic energy (SN 1998bw, \citealt{WES_99}; SN 2002ap, \citealt{mazzali_etal_02}); 2) a large population of Type II-Plateau (II-P) SNe in what seems to be the generic explosion of a moderate-mass red-supergiant (RSG) star (e.g. SN 2005cs, \citealt{maund_etal_05,UC_08}); 3) a growing number of low-luminosity SNe that share properties with standard Type II-P SNe except for being significantly and globally less energetic (e.g. SN1997D, \citealt{chugai_utrobin_00}; SN 1999br, \citealt{pastorello_etal_04}; OT2006-1 in M85, whose status is ambiguous, see \citealt{kulkarni_etal_2007_m85,pastorello_etal_07_m85}). We show a sample of $V$-band absolute-magnitude light curves of such core-collapse SNe in Fig.~\ref{fig_obs_lc}, with representative peak values of $-$14 to $-$17\,mag and a 100-day plateau duration (best seen in the right panel of that figure), hence about 6-10\,mag brighter than their proposed RSG progenitors (shown as black crosses). From this expanded SN sample, the range of corresponding explosion energies has considerably widened, extending above and below the standard 10$^{51}$\,erg value. Within the core-collapse SN context, this modulation is thought to stem from modulations in the energy revival of the stalled shock above the nascent proto-neutron star, in turn modulated by the stellar-core structure \citep{burrows_etal_07a}. The stretching to low explosion energies of potential core-collapse SN events is intriguing. For the most energetic explosions belonging to points 1 and 2 above, the classification as a SN is unambiguous. However, some transient events show an ejecta/outflow kinetic energy and a peak magnitude that are SN-like, although the events did not stem from core collapse (a star is observed at that location on post-explosion/eruption images); the community calls these SN impostors (e.g. 
SN1997bs; Fig.~\ref{fig_obs_lc}; \citealt{vandyk_etal_00}). Conversely, this raises the issue whether {\it low-energy} Type II-P SNe are associated with core collapse - they might but they need not. We illustrate this overlap in radiative properties for a sample of such objects in Fig.~\ref{fig_obs_lc}. One such case is the Luminous Blue Variable (LBV) $\eta$ Car, whose properties during its 1843 eruption rival those of the low-luminosity Type II-P SN1999br. $\eta$ Car survived this gigantic eruption, which shed about 10\,M$_{\odot}$\, of material in what now constitutes the homunculus nebula \citep{smith_etal_2003}. In contrast to core-collapse SNe, such eruptive phenomena in massive stars have been associated with the proximity of the star to the Eddington luminosity $L_{\rm Edd} = 4 \pi c G M / \kappa$ ($\kappa$ is the mass absorption coefficient). Due to the steep dependence of luminosity on mass (e.g. with an exponent of 3.5 for main-sequence objects), this limit is easily reached by very massive stars such as $\eta$ Car, or more generally massive blue-supergiant stars. In this context, massive stars are thought to undergo considerable mass loss when their luminosity overcomes the Eddington limit,\footnote{Note, however, that energy will have to be supplied to the stellar envelope to push it over this limit, and in large amounts to explain such a nebula as the homunculus.} giving rise to a porosity-modulated continuum-driven outflow \citep{shaviv_00,owocki_etal_04}. Here, this super-Eddington wind constitutes a quasi steady-state outflow, and has therefore been thought to be of a fundamentally different nature from core-collapse SN ejecta. And indeed, one refers to a wind for the former and to an ejecta for the latter. This dichotomy has been exacerbated by the stark contrast in typical light curves of eruptive stars (long lived with large brightness) and core-collapse SN explosions (short lived with huge brightness).
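As a quick numerical check of the Eddington-luminosity argument above, the sketch below evaluates $L_{\rm Edd}$ in CGS units; the electron-scattering opacity $\kappa \approx 0.34$\,cm$^2$\,g$^{-1}$ and the 100\,M$_{\odot}$\, test mass are illustrative assumptions, not values taken from this work:

```python
import math

# Physical constants in CGS units.
G = 6.674e-8        # cm^3 g^-1 s^-2
c = 2.998e10        # cm s^-1
Msun = 1.989e33     # g
Lsun = 3.828e33     # erg s^-1
kappa = 0.34        # cm^2 g^-1, electron scattering in a H-rich envelope (assumed)

def L_edd(M_g, kappa):
    """Eddington luminosity L_Edd = 4*pi*c*G*M/kappa, in erg/s."""
    return 4.0 * math.pi * c * G * M_g / kappa

# For a very massive star of ~100 Msun (illustrative, eta-Car-like):
L = L_edd(100 * Msun, kappa)
print(f"L_Edd ~ {L:.2e} erg/s ~ {L / Lsun:.1e} Lsun")
```

Since $L_{\rm Edd}$ grows only linearly with $M$ while the stellar luminosity scales roughly as $M^{3.5}$, the ratio $L/L_{\rm Edd}$ rises steeply with mass, which is why only the most massive stars approach this limit.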
In Fig.~\ref{fig_obs_lc}, we show two known eruptive massive stars that highlight this contrast. However, recent observations may be challenging such a strict segregation. First, the recent identification of very fast outflowing material ahead of $\eta$ Car's homunculus now suggests that such material was accelerated by a shock, rather than driven in a quasi-steady wind, and thus connects the giant outburst to an explosive origin \citep{smith_08_blast}. Second, the existence of interacting SNe tells us that a massive eruption can occur merely a few years before explosion. For some, e.g. SN 2006gy \citep{smith_etal_07a,smith_etal_07b, woosley_etal_07} or SN 1994W \citep{dessart_etal_09}, the amount is thought to be large enough to decelerate the energetic (and necessarily faster-expanding) subsequent ejection. This very strict timing of merely a few years, {\it which is orders of magnitude smaller than evolutionary or transport time-scales}, suggests a connection between the mechanisms at the origin of the two ejections. For SN2006gy, Woosley et al. propose recurrent pair-instability pulsations, a mechanism germane to super-massive stars and therefore extremely rare. For lower-mass massive stars, this short delay of a few years seems to exclude a very long, secular evolution for the production of the first ejection since this would have no natural timing to the comparatively instantaneous event of core collapse. Motivated by these recent observations, we explore in this paper whether this diversity of events could be reproduced by a unique and deeply-rooted mechanism, associated with the sudden energy release above the stellar core and the subsequent shock heating of the progenitor envelope. Such a mechanism would communicate a large energy to the stellar envelope on a shock-crossing time-scale of days rather than on a very long diffusion time-scale of thousands of years or more.
Although different in their origin, this energy release could be a weak analogue of what results in pair-instability pulsations, i.e. a nuclear flash, as identified in the 8-12\,M$_{\odot}$\, range by \citet{weaver_woosley_79}. In this paper, following this shock-heating hypothesis, we use 1D radiation-hydrodynamics simulations to explore the production of explosions/eruptions in stars more massive than $\sim$10\,M$_{\odot}$\, on the main sequence. Rather than focusing on specific models, like those potentially associated with failed supernovae \citep{fryer_etal_09}, we parameterize the problem through a simple energy deposition, taking place with a given magnitude, over a given time, and at a given depth in a set of pre-SN progenitor star models. We do not aim at reproducing any specific observation but, through a systematic approach, try to identify important trends, in a spirit similar to that of \citet{falk_arnett_77}. However, we depart from these authors by studying ``non-standard'' explosions. In practice, we consider cases where the energy deposited can be either smaller or larger than the binding energy of the overlying envelope, but must imperatively be released on a very short time-scale to trigger the formation of a shock. Doing so, we identify three regimes, with ``standard'' SN explosions (short-lived transients) at the high energy end, objects that we will group in the category SN ``impostors'' (long-lived transients) at intermediate energy, and variable stars at the very low energy end. Let us stress here that we do not make the claim that all massive-star eruptions, or all transients in general, stem from a strong, sudden, and deeply-rooted energy release in their envelope. Here, we make this our working hypothesis and investigate how much such an explosive scenario can explain observations.
We do find that this scenario has great potential and should be considered as a possibility when examining the origin of massive-star eruptions and associated transient phenomena. \begin{figure} \epsfig{file=f2.ps,width=8.5cm} \caption{Density distribution as a function of Lagrangian mass at the onset of collapse for the models of WHW02 evolved at solar metallicity. Notice the flattening density distribution above the core for increasing mass. In low-mass massive stars, the star is structured as a dense inner region (the core), and a tenuous extended H-rich envelope. \label{fig_rho_mr} } \end{figure} The paper is structured as follows. In \S\ref{sect_input}, we briefly present the stellar evolutionary models of WHW02 that we use as input for our 1D 1-group radiation-hydrodynamics simulations. We discuss the properties of the progenitor massive stars of WHW02, such as density structure and binding energy, that are relevant for the present study. We then describe in \S\ref{sect_model}, the numerical technique and setup for our energy deposition study. In \S\ref{sect_s11}, we present the results of a sequence of simulations based primarily on the 11\,M$_{\odot}$\, model of WHW02, discussing the properties of the shocked progenitor envelope for different values of the strength (\S\ref{var_edep}), the depth (\S\ref{var_mcut}) and the duration (\S\ref{var_dt}) of the energy deposition. We also discuss in \S\ref{var_mprog} the results obtained for more massive pre-SN progenitors, ranging from 15 to 25\,M$_{\odot}$\, on the main sequence. In \S\ref{sect_rad}, we present synthetic spectra computed separately with CMFGEN \citep{HM98_lb,DH05_qs_SN,dessart_etal_09} for a few models at a representative time after shock breakout. In \S\ref{sect_discussion}, we discuss the implications of our results for understanding transient phenomena, and in \S\ref{sect_conclusion} we summarize our conclusions. 
\section{Input stellar evolutionary models and physical context} \label{sect_input} \subsection{Input Models} The energy-deposition study presented here starts off from the stellar evolutionary calculations of WHW02, who provide a set of pre-SN massive stars, objects evolved all the way from the main sequence until the onset of core collapse. We refer the reader to WHW02 for a discussion of the physics included in such models. These physically-consistent models give envelope structures that obey the equations and principles of stellar structure and evolution. The exact details of these models are not relevant since we aim at developing a general, qualitative, understanding of stellar envelope behaviour in response to shock-heating - we do not aim at connecting a specific observation to a specific input. Here, we focus on 10-40\,M$_{\odot}$\, progenitor stars evolved at solar metallicity and in the absence of rotation. These calculations include the effect of radiation-driven mass loss through a treatment of the outer-boundary mass-flux, so that the final mass in this set reaches a maximum of $\sim$15\,M$_{\odot}$\, for a $\sim$20\,M$_{\odot}$\, progenitor star. Mass loss leads also to a considerable size reduction of the stellar envelope, with partial or complete loss of the hydrogen rich layers, producing smaller objects at core collapse for main-sequence masses above $\sim$30\,M$_{\odot}$. In the sequence of pre-SN models of WHW02, surface radii vary from 4--10$\times 10^{13}$\,cm in the 10-30\,M$_{\odot}$\, range, decreasing to $\sim 10^{11}$\,cm in the 30-40\,M$_{\odot}$\, range. This set of models is not exhaustive. All our input stellar-evolutionary models have reached the end of a massive-star life and all have a degenerate core that collapses. They are thus not exactly suited for post-main sequence massive stars that may be burning hydrogen/helium in the core/shell. 
The sampled mass range does not stretch to the most massive stars known, which also happen to exhibit variability and eruptions. In particular, it does not include massive blue supergiants, which are connected observationally with eruptive phenomena. It does not include highly-massive stars close to the Eddington limit, nor objects that for some reason would be near critical rotation. It does not include any object resulting from close binary evolution. These are clear limitations for an association of transient phenomena with specific progenitors, but this is not the goal of this study. Observations have likely been biased toward the most massive stars undergoing massive ejections, missing fainter progenitors undergoing more modest eruptions. Rather than tightly linking specific models to specific observations, we attempt to develop a qualitative understanding. Our study is thus conceptual and explores the behavior of a gravitationally-bound envelope subject to shock-heating. As we explain below, the present set covers a wide enough range of properties to reveal important systematics in this context, which can then be applied more generally to other stars, be they of larger mass, etc. \begin{figure*} \epsfig{file=f3a.ps,width=8.75cm} \epsfig{file=f3b.ps,width=8.75cm} \caption{{\it Left:} Same as Fig.~\ref{fig_rho_mr}, but now for the binding energy as a function of the Lagrangian-mass coordinate $M_r$ of the envelope exterior to $M_r$: $e_{\rm binding}(M_r) = \int_{r}^{r_{\rm max}} ( G M_r/r - e_{\rm int} ) dM_r$. In our determination of the binding energy, we have included the internal energy \citep{ZN_71}. Symbols refer to the base of the corresponding shells, with diamond for the H-rich shell, triangles for the He-rich shell, and dots for the O-rich shell. {\it Right:} Variation of the envelope binding energy at the onset of collapse in the models of WHW02 and shown as a function of the progenitor mass on the main sequence.
Here, the ``envelope'' refers to the total mass exterior to the inner 1.8\,M$_{\odot}$. The colorbar applies to both left and right panels. \label{fig_eb_mr} } \end{figure*} The density distribution above the iron core is an important property that distinguishes the structure of massive stars. Low-mass massive stars show a steep density fall-off above their degenerate core, while the more massive stars boast very flat density profiles (Fig.~\ref{fig_rho_mr}; recall that at the time shown, the Fe or ONeMg core is degenerate and starts to collapse). In the context of core-collapse SN explosions, this has been recognized as one of the key aspects of the problem, since this density profile directly connects to the mass-accretion rate onto the proto-neutron star in the critical first second that follows core bounce \citep{burrows_etal_07a}. This has led \citet{burrows_goshy_93}, and more recently \citet{murphy_burrows_08}, to identify a criterion for explosion through a competition between neutrino luminosity and mass accretion rate. Such a criterion, although tuned by the global hydrostatic configuration of the progenitor star, isolates the conditions for a successful explosion to those in the direct vicinity of the pre-collapsed core. A more global, and more fundamentally meaningful, criterion for a successful explosion is whether the energy deposited at the base of the progenitor envelope is smaller or greater than its binding energy. This matter is rarely addressed explicitly, supposedly because the 10$^{51}$\,erg explosion energy aimed for is thought to be much in excess of the binding energy of the envelope; this may in fact not be the case, and such an assumption ignores the key role the binding energy plays in producing the diversity of characters observed in core-collapse SNe, and potentially in numerous transients.
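The binding-energy integral given in the caption of Fig.~\ref{fig_eb_mr} is straightforward to evaluate on a discrete stellar model. The sketch below does so by trapezoidal integration on a toy power-law envelope; the profile and its parameters are invented for illustration only and do not correspond to any WHW02 model:

```python
import numpy as np

G = 6.674e-8          # cm^3 g^-1 s^-2
Msun = 1.989e33       # g

# Toy envelope: shells from an inner mass cut out to the surface (all invented).
# e_binding(M_cut) = int_{M_cut}^{M_surf} (G*M_r/r - e_int) dM_r
Mr = np.linspace(2.0, 10.0, 400) * Msun       # Lagrangian mass grid (toy)
r = 1.0e10 * (Mr / (2.0 * Msun))**3.0         # radius vs mass (toy power law, cm)
e_int = 1.0e13 * (r / r[-1])**-1.0            # specific internal energy (toy, erg/g)

def binding_energy(Mr, r, e_int):
    """Envelope binding energy exterior to Mr[0], in erg (positive = bound)."""
    integrand = G * Mr / r - e_int
    # Trapezoidal rule over the mass grid.
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(Mr))

e_b = binding_energy(Mr, r, e_int)
print(f"e_binding ~ {e_b:.2e} erg")
```

On a real model one would simply feed in the tabulated $M_r$, $r$, and $e_{\rm int}$ columns; a positive result indicates a bound envelope, and its magnitude sets the minimum sudden energy deposition required for ejection.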
\subsection{The Binding Energy Barrier} \label{sect_ebind} The variation in the progenitor envelope density structure (Fig.~\ref{fig_rho_mr}) is echoed in the corresponding binding energy of the stellar envelope. In the left panel of Fig.~\ref{fig_eb_mr}, we show the binding energy of the envelope exterior to a Lagrangian-mass coordinate $M_r$, as a function of $M_r$, for the same set of pre-SN massive star models. To differentiate mass shells of different compositions, we use symbols to mark the inner edge of the hydrogen shell (diamond), of the He-rich shell (triangle), and of the O-rich shell (dot). The binding energy shows a strong dependence on progenitor main-sequence mass, and we can identify two classes of objects. Objects with main-sequence mass below $\sim$25\,M$_{\odot}$\, possess a sizable hydrogen envelope, whose fraction of the total mass at the onset of collapse grows as we go to lower-mass progenitors. It represents at collapse $\sim$80\% of the total mass in the 11\,M$_{\odot}$\, model, but only $\sim$30\% in the 25\,M$_{\odot}$\, model (these masses refer to the main-sequence mass). In all these cases, whenever present, the hydrogen shell has a very small binding energy, on the order of 10$^{47}$\,erg, and is hence very loosely bound to the star. In contrast, for objects with main-sequence masses in excess of 25-30\,M$_{\odot}$, the hydrogen envelope is shed through mass loss during the pre-SN evolution, and the pre-SN star is essentially a ``compact'' star, with an envelope whose binding energy is now 3-4 orders of magnitude larger. This is perhaps the single most important characteristic of pre-SN massive star envelopes resulting from the single-star evolution calculations of WHW02, namely that hydrogen-deficient (generally higher-mass originally) stars are very tightly-bound objects while hydrogen-rich (generally lower-mass originally) stars are very loosely bound. Moving deeper into the envelope, i.e.
in the He or O shells, the same trend persists, with a systematic increase in binding energy for each shell edge as we move up in mass from low-mass to high-mass massive stars. As shown in the right panel of Fig.~\ref{fig_eb_mr}, the binding energy of the entire envelope exterior to the degenerate core varies from $\lesssim$10$^{49}$\,erg for the 11\,M$_{\odot}$\, model (a RSG star with a final mass of $\sim$10.6\,M$_{\odot}$) up to $\sim$10$^{51}$\,erg in the 40\,M$_{\odot}$\, model (a H-deficient Wolf-Rayet star with a final mass of $\sim$8\,M$_{\odot}$). For an explosion to ensue, an energy greater than the binding energy must be deposited suddenly at the base of the envelope, the excess energy above the binding energy being used to accelerate the envelope and turn it into outflow/ejecta. While a very modest energy of $\sim$10$^{49}$\,erg is sufficient to unbind the envelope of a 10\,M$_{\odot}$\, RSG star, such a deposition in a Wolf-Rayet star would produce nothing remarkable. In the present work, the connection between energy deposition and binding energy is key to the understanding of the properties of the resulting mass ejections. More generally, the low binding energy of massive star envelopes is of primary importance for the understanding of their stability/variability, as in the context of pulsations, for example. The origin of the energy deposition that we artificially treat in this work may be a sequel of the gravitational collapse of the core, in which case it can happen only once, leading either to an explosion or to a fizzle. Alternative sources in massive stars will likely be less energetic, but in objects with low binding energy, may still be of relevance to a wide spectrum of astrophysical events. The most suited mechanism in the present context would be the prompt thermonuclear combustion of a small amount of material.
In the context of very massive stars like $\eta$ Car (not directly addressed through our progenitor set), \citet{guzik_05} propose that non-radial gravity mode oscillations could lead to mixing into the hydrogen-shell burning layer of fresh fuel located directly above it, thereby releasing suddenly a large amount of energy to unbind a fraction of the above layers. In lower-mass massive stars at the end of their lives, this combustion could concern already processed material, like carbon, oxygen, or silicon, at a location just above the degenerate core. Such ``nuclear flashes'' have indeed been encountered in stellar-evolutionary calculations of 8-12\,M$_{\odot}$\, massive stars \citep{weaver_woosley_79}. Interestingly, with a (nuclear) binding energy of $\sim$1\,MeV/nucleon, the combustion of $^{12}$C or $^{16}$O to $^{56}$Fe liberates $E_{\rm nuc} \sim 1.9 \times 10^{49}\, (M / 0.01\,M_{\odot})$\,erg. Hence, as little as a few percent of a solar mass of carbon or oxygen burnt to iron can yield an energy in excess of the binding energy of the lowest-mass massive stars, and of comparable magnitude to that of the weakest core-collapse SN explosions \citep{pastorello_etal_04,kitaura_etal_06} or LBVs. \subsection{Energy Transport by Diffusion/Convection versus Shock-heating} \label{sect_tdiff} In radiative stellar envelopes, energy is transported outward by diffusion. Any increase in energy release from the core provokes a steepening of the temperature profile and an enhanced radiative flux, carrying outward this extra energy, often aided by convection too. However, radiative diffusion or convection can only be effective for small increases in energy, due to the short mean-free path of photons and/or the modest sound speed. Associated time-scales for such energy transport are therefore long and the means poorly efficient.
In Fig.~\ref{fig_tdiff_mr}, we show the diffusion time as a function of depth in the 10-40\,M$_{\odot}$\, model sequence of WHW02, computed by adopting a constant mass-absorption coefficient of 0.1\,cm$^2$\,g$^{-1}$ (intermediate between that for the hydrogen-rich and silicon-rich shells) but a depth-dependent mean-free-path \citep[this is controlled primarily by the twenty-order-of-magnitude variation in density between the core and the surface]{mitalas_sills_92}. Resulting diffusion times from regions immediately above the core range from 10$^4$ to 10$^5$ years. In contrast, the shock crossing time-scale of the envelope is $\sim R_{\star}$/$\langle v_{\rm shock}\rangle$, which for Wolf-Rayet to RSG progenitor stars with radii of 10$^{11}$ to 10$^{14}$\,cm and even modest, but supersonic, shock waves with $\langle v_{\rm shock}\rangle \sim$1000\,km~s$^{-1}$, is typically on the order of minutes to days. In this study, we want to explore what happens when the deposition of energy leads to an energy increase that cannot be remedied either by diffusion or convection transport, but instead has to lead to shock formation (even for small energy deposition). This leads to a completely different regime since in this situation the shock can communicate this extra energy to the {\it entire} envelope on a short time-scale of days: The stellar envelope hence responds immediately to the change of conditions at depth, instead of secularly evolving to a bigger/smaller or cooler/hotter star. This dichotomy was recently discussed in the context of the helium flash, whose associated energy was found to be too small to lead to the formation of a shock \citep{mocak_etal_08,mocak_etal_09}. In our artificial approach, the energy release has to occur on a very short time-scale to trigger the formation of a shock. 
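The contrast between the two transport channels discussed above can be made concrete with a few lines of arithmetic; this sketch simply uses the representative numbers quoted in the text ($\langle v_{\rm shock}\rangle = 1000$\,km~s$^{-1}$ and the two bracketing stellar radii):

```python
# Shock crossing time t ~ R_star / <v_shock>, compared with the diffusive
# time-scales of 1e4-1e5 yr quoted for the layers just above the core.
v_shock = 1.0e8                     # cm/s (1000 km/s, modest but supersonic)
radii = {"Wolf-Rayet": 1.0e11,      # cm
         "RSG": 1.0e14}             # cm

for name, R in radii.items():
    t = R / v_shock                 # seconds
    print(f"{name}: t_shock ~ {t:.1e} s ~ {t / 86400.0:.2f} d")

yr = 3.156e7                        # seconds per year
t_diff = 1.0e4 * yr                 # lower bound on the diffusion time (s)
ratio = t_diff / (radii["RSG"] / v_shock)
print(f"diffusion/shock time ratio (RSG, lower bound): {ratio:.1e}")
```

Even in the most extended RSG case, the shock crosses the envelope in under two weeks, several orders of magnitude faster than diffusion can respond.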
How short will depend on the progenitor since, as shown in Fig.~\ref{fig_eint_mr}, the depth-dependent internal energy (including both radiation and thermal components) varies considerably between progenitors, increasing at a given $M_r$ with main-sequence mass. Forming a shock will require a stronger energy deposition, a shorter deposition time, and/or a more exterior deposition site in higher mass progenitors. \begin{figure} \epsfig{file=f4.ps,width=8.5cm} \caption{Same as Fig.~\ref{fig_rho_mr}, but now showing the variation of the diffusion time. The computation assumes unequal photon mean-free-path as in \citet{mitalas_sills_92}, but a constant and representative mass absorption coefficient of 0.1\,cm$^2$\,g$^{-1}$. \label{fig_tdiff_mr} } \end{figure} \begin{figure} \epsfig{file=f5.ps,width=8.5cm} \caption{Same as Fig.~\ref{fig_rho_mr}, but now showing the variation of the internal energy (including radiation and thermal parts). \label{fig_eint_mr} } \end{figure} \subsection{Origin of energy deposition} In the present study on pre-SN massive-star progenitors, we find that an energy deposition that is marginally larger than the envelope binding energy yields ejecta that are reminiscent of SN impostors, but in the present context, are also reminiscent of the mass ejections that must take place prior to core collapse in interacting SNe. The short delay between the two ejections, of no more than a few years to produce a luminous interacting SN, suggests a shock-heating solution for both ejections. Any other energy-transport means in a massive star that could propel mass out of the gravitational potential would take place on a time-scale of thousands of years or more. The disconnection between the surface and the core thus suggests that both events are tied to a phenomenon that happens in the core or in its direct vicinity. 
We propose thermonuclear flashes associated with shell burning in the last stages of massive star evolution (pair-instability pulsations are analogous to this scenario but may only apply to stars with a main-sequence mass in the range 95--130\,M$_{\odot}$; \citealt{woosley_etal_07}). These burning stages are very short (month to year time-scales) and always occur at the end of the life of all massive stars. They offer a natural tuning and a reproducible circumstance for the production of interacting SNe, with at least one major interaction between the penultimate shell ejection and that resulting from the gravitational collapse of the degenerate core. The relatively large number of interacting SNe, given the strict time requirements to produce them (a few-year delay is no more than an instant relative to the lifetime of a star!), suggests that the conditions must indeed be met often rather than encountered by chance. The stellar-evolutionary calculations of \citet{weaver_woosley_79} for 8-12\,M$_{\odot}$\, massive stars support the occurrence of nuclear flashes in association with Ne and Si core/shell burning, and occurring merely a few years prior to core collapse. These simulations are old and would need to be revisited in a thorough fashion, with full hydrodynamics, multi-dimensionality, and high resolution. In the context of more massive stars like $\eta$ Car, \citet{guzik_05} propose that non-radial gravity mode oscillations above the core and near the hydrogen-burning shell could lead to mixing of hydrogen-rich material downward into hotter denser layers. Combustion of this fresh fuel would yield a burst of energy triggering mass ejection and a bolometric brightening. In this work, we emphasize the dire consequences of any such energy release on loosely-bound massive stars, potentially triggering single eruptions as evidenced in transients or multiple eruptions as evidenced in interacting SNe \citep{dessart_etal_09}.
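The energy scale invoked for such flashes can be verified directly from the $\sim$1\,MeV/nucleon figure quoted in \S\ref{sect_ebind}; the sketch below uses only physical constants and that figure:

```python
# E_nuc from burning 0.01 Msun of C/O to Fe, releasing ~1 MeV per nucleon.
Msun = 1.989e33         # g
m_u = 1.6605e-24        # g, atomic mass unit
MeV = 1.602e-6          # erg

M_burnt = 0.01 * Msun
n_nucleons = M_burnt / m_u
E_nuc = n_nucleons * 1.0 * MeV      # ~1 MeV released per nucleon
print(f"E_nuc ~ {E_nuc:.2e} erg")   # close to the 1.9e49 erg quoted in the text
```

This confirms that a percent-level mass of burnt fuel suffices to rival the $\lesssim$10$^{49}$\,erg envelope binding energy of the lowest-mass massive stars.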
More generally, the hydrodynamical investigations of the late stages of nuclear burning in massive stars by \citet{bazan_arnett_94,bazan_arnett_98,asida_arnett_00,arnett_etal_05,meakin_arnett_06} reveal the multi-dimensional and time-dependent nature of the problem, issues that have been largely overlooked in stellar evolutionary calculations so far. In particular, they find that considerable energy can leak out of the convectively-burning shells in the form of waves, with a potential to alter the otherwise quasi-steady hydrostatic configuration of the stellar envelope. Similar investigations for lower-mass massive stars are highly needed. These works provide a physical motivation for our investigations. For the simulations presented here, we do not, however, build a fully-consistent picture of the problem, but instead start off by imposing the modalities of such an energy deposition. In practice, we follow a very simple approach, adopting a constant energy-deposition rate for a given duration. We thus neglect any feedback of the dynamics on the mechanism causing the energy release. \section{Numerical Technique and Computational Setup} \label{sect_model} The numerical simulations presented in this work were all performed with Vulcan/1D (V1D), a one dimensional radiation-hydrodynamics code that works in the framework of Newtonian physics. It incorporates many methods and techniques, using both Lagrangian and Eulerian schemes for planar, cylindrical, and spherical geometries. The Lagrangian hydrodynamics scheme is based on the staggered-grid method described in \citet{RM_67}, which uses artificial viscosity for shock waves. Some test problems of an early version of this code are described in \citet{livne_93}. For the treatment of non-LTE radiative-transfer problems we have recently implemented several solvers with different levels of approximations for the radiation fluxes. 
The coarsest method is our gray flux-limited-diffusion solver, which also has a multi-group extension. We have also constructed more accurate schemes for the radiation field using moment methods and full angle-dependent solvers, similar in nature (but not in detail) to the scheme described in \citet{ensman_phd,ensman_94}. These, however, were not used in this work. The radiation field is coupled to matter in a fully implicit fashion, which guarantees stability and large time steps. Since the important physics relevant to this study occurs at large optical depth, the multi-group capability of the code is not used. Opacities/emissivities are interpolated from a set of tables prepared with a separate program. This program computes the populations of a large number of atomic levels (up to 1000 per species), under the assumption of LTE, for typically fifteen species (H, He, C, N, O, Ne, Na, Mg, Si, S, Ar, Ca, Cr, Fe, and Ni), and for a set of compositions representative of the pre-SN envelope of a 15\,M$_{\odot}$\, main-sequence star (we generate tables for compositions corresponding to a mean atomic weight between 1.3 and 25 atomic-mass units). For each species, up to 18 ionization stages are included. At present, we account for scattering opacity (due to electrons), and absorptive opacity due to bound-free and free-free processes. Although lines can be included, we have neglected them in this study. The equation of state takes into account excitation/ionization to all available levels of the different atomic/ionic species in the composition. The distribution function of these levels is computed using a method similar to the one described in \citet{kovetz_shaviv_94}. The resulting electron density is then used, together with the temperature, to extract the pressure, energy, chemical potential, and their derivatives with respect to the electron density and temperature from a table computed in advance by solving the Fermi-Dirac integrals.
The pressure, energy, and entropy of the ions are then added as an ideal gas. All our simulations use as inputs the massive-star models of WHW02, evolved from the main sequence until the onset of core collapse. At present, V1D adopts the same grid as the one read in from the chosen input model of WHW02. The resolution is therefore quite low at the progenitor surface, which affects the accuracy of the breakout signal. The range of mass encompassed by the V1D grid is set by the inner mass cut where energy is deposited, the excised region being shrunk to a point mass at the origin. We do not compute the explosive nucleosynthetic yields, and thus do not alter the original composition of the massive-star envelopes computed by WHW02. In particular, there is no production of $^{56}$Ni and no associated heating through radioactive decay accounted for in our simulations. In the main parameter study presented here, we use the 11\,M$_{\odot}$\, model (named s11.0 by WHW02). We first explore the evolution of the progenitor envelope after depositing an energy $E_{\rm dep}$ between 5$\times 10^{48}$\,erg and 1.28$\times 10^{51}$\,erg at a mass cut $M_{\rm cut}=1.8$\,M$_{\odot}$\, (uniformly deposited in the range 1.8 to 2.3\,M$_{\odot}$) and for a duration of 10\,s (\S\ref{var_edep}). We also explore the sensitivity of the outcome to the location of the energy deposition, from a Lagrangian mass coordinate of 1.8 to 3, 7, and 9\,M$_{\odot}$\, (\S\ref{var_mcut}), and to the duration of the energy deposition, from ten seconds to one hour, one day, one week, and one month (\S\ref{var_dt}). Finally, we perform a few simulations using more massive progenitors, with 15, 20, and 25\,M$_{\odot}$\, main-sequence mass (\S\ref{var_mprog}). A summary of the model initial parameters, as well as quantities characterizing the main results, is given in Tables~\ref{tab_s11} and \ref{tab_s11_wcomp}.
Note that in the text, when we mention the masses associated with the WHW02 models, we mean the main-sequence masses, as in the 11\,M$_{\odot}$\, or the 25\,M$_{\odot}$\, models; these do not correspond to the star mass at the onset of collapse, which is shown for all models in, e.g., Fig.~\ref{fig_rho_mr}. \section{Results for the 11\,M$_{\odot}$\, sequence} \label{sect_s11} \begin{table*} \begin{minipage}{160mm} \caption{Summary of the parameters and key results for the sequence started with the 11\,M$_{\odot}$\, progenitor model (pre-SN mass of 10.6\,M$_{\odot}$). Note that not all models are run for the same total time ($t_{\rm end}$), owing to numerical problems at very late times as the density and temperature drop in the outer ejecta. Numbers in parentheses refer to powers of ten in the unit shown in the first column of the same row. The effective mass on the V1D grid depends on the adopted inner mass cut $M_{\rm cut}$. The time origin corresponds to the start of the simulation, when energy is deposited for $t_{\rm edep}$ seconds.
From the top row down, we give the total energy deposited (in 10$^{50}$\,erg), the inner mass shell where it is deposited and the duration of that deposition (in s), the time at the end of the simulation (in days), the sum of the mass of all shells that at the end of the simulation have a velocity larger than the local escape speed $\sqrt{2 G M_r / r}$, the ejecta mass-weighted average velocity (in km~s$^{-1}$), the kinetic energy of the ejecta (in 10$^{50}$\,erg) at the end of the simulations, the maximum ejecta velocity (in km~s$^{-1}$), the time (in days) and the peak luminosity (in L$_{\odot}$) of shock breakout, the average speed of the shock (in km~s$^{-1}$), the kinetic and internal energies (thermal plus radiation contributions) at the time of breakout (in erg), the time of peak luminosity in the post-breakout plateau (in days) and the corresponding luminosity (in L$_{\odot}$), the duration of high brightness for the transient (the time during which the luminosity is greater than 1/50th of $L_{\rm peak, plateau}$), and the time-integrated bolometric luminosity (in 10$^{49}$\,erg; note that the time interval varies between models). [See text for discussion.]
\label{tab_s11}} \begin{tabular}{lcccccccccc} \hline Model Name & s11\_0 & s11\_01 & s11\_1 & s11\_2 & s11\_3 & s11\_4 & s11\_5 & s11\_6 & s11\_7 & s11\_8 \\ \hline $E_{\rm dep}$ (10$^{50}$\,erg) & 5.0(-2) & 7.5(-2) & 1.0(-1) & 2.0(-1) & 4.0(-1) & 8.0(-1) & 1.6(0) & 3.2(0) & 6.4(0)& 1.28(1)\\ $M_{\rm cut}$ (M$_{\odot}$) & 1.8 & 1.8 & 1.8 & 1.8 & 1.8 & 1.8 & 1.8 & 1.8 & 1.8 & 1.8 \\ $t_{\rm edep}$ (s) & 10 & 10 & 10 & 10 & 10 & 10 & 10 & 10 & 10 & 10 \\ $t_{\rm end}$ (d) & 730 & 193 & 730 & 730 & 160 & 730 & 117 & 103 & 730 & 686 \\ $M_{\rm ejected}$ (M$_{\odot}$) & 0.13 & 5.95 & 7.75 & 8.60 & 8.72 & 8.79 & 8.80 & 8.80 & 8.80 & 8.80 \\ $\langle v \rangle_{M}$ (km~s$^{-1}$) & 42 & 76 & 121 & 296 & 510 & 789 & 1165 & 1687 & 2412 & 3432 \\ $(E_{\rm kin})_{\rm end}$ (10$^{50}$\,erg) & 2.30(-5)&4.00(-3) & 1.33(-2) & 9.25(-2) & 2.79(-1) & 6.74(-1) & 1.47(0)&3.08(0) & 6.30(0) & 1.276(1) \\ $\langle v_{\rm max}\rangle$ (km~s$^{-1}$) & 60 & 410 & 560 & 1310 & 1800 & 2740 & 4100 & 5970 & 8740 & 12700 \\ $t_{\rm SBO}$ (d) & 54.7 & 32.1 & 21.4 & 9.6 & 5.8 & 3.8 & 2.6 & 1.8 & 1.3 & 0.9 \\ $L_{\rm peak, SBO}$ (L$_{\odot}$) & 1.7(5) & 3.3(6) & 5.2(7) & 1.2(9) & 4.9(9) & 1.6(10) & 4.2(10) &1.1(11)& 2.5(11)& 5.7(11) \\ $\langle v_{\rm shock}\rangle$ (km~s$^{-1}$) & 86 & 147 & 221 & 492 & 812 & 1238 & 1815 & 2622 & 3717 & 5244 \\ $(E_{\rm kin})_{\rm SBO}$ (erg) & 2.1(46) & 2.1(47) & 6.7(47) & 4.4(48) & 1.3(49) & 3.1(49) & 6.7(49) & 1.4(50) & 2.8(50) & 5.8(50) \\ $(E_{\rm int})_{\rm SBO}$ (erg) & 5.2(48) & 4.6(48) & 3.8(48) & 7.5(48) & 1.7(49) & 3.9(49) & 8.3(49) & 1.7(50) & 3.5(50) & 7.1(50) \\ $t_{\rm peak,plateau}$ (d) & 208 & 198 & 243 & 115 & 97 & 82 & 73 & 64 & 55 & 49 \\ $L_{\rm peak, plateau}$ (L$_{\odot}$) & 3.6(5) & 1.5(6) & 2.9(6) & 1.2(7) & 2.9(7) & 6.0(7) & 1.1(8) & 2.0(8)& 3.6(8) & 6.2(8) \\ $\Delta t_{\rm plateau}$ (d) & 675 & 517 & 341 & 287 & $>$155 & 180 & $>$115 & $>$106 & 111 & 100 \\ $\int L_{\rm bol}$ dt (10$^{49}$\,erg) & 6.6(-3) & 5.6(-3) & 2.5(-2)& 6.7(-2) & 
1.3(-1) & 2.5(-1) & 4.0(-1) & 6.6(-1)& 1.1(0) & 1.8(0) \\ \hline \end{tabular} \end{minipage} \end{table*} \subsection{Effect of varying the Energy Deposition Magnitude} \label{var_edep} In this section, we present the results for the sequence of simulations based on the 11\,M$_{\odot}$\, model for a range of energy deposition magnitudes. A compendium of parameters describing the set up and the results is given in Table~\ref{tab_s11}. Note that, initially (prior to energy deposition), the envelope (total) internal energy is $\sim 10^{49}$\,erg, and its gravitational binding energy is -2.2$\times 10^{49}$\,erg in this 11\,M$_{\odot}$\, model (the model mass at the time of collapse is 10.6\,M$_{\odot}$). In our simulations, energy deposition leads to a strong increase in internal energy (temperature, pressure), leading to the formation of a shock which moves outward. In all runs, the shock crosses the entire envelope and eventually emerges at the surface of the progenitor, giving rise to a shock-breakout signal (more precisely, a radiative precursor precedes the emergence of the shock, but one generally refers to this whole phase as shock breakout). The shocked envelope left behind has been turned into a radiation-dominated plasma, with an extra energy that varies from a small ($E_{\rm dep}=5\times 10^{48}$\,erg in the s11\_0 model) to a large value ($E_{\rm dep}=1.28\times 10^{51}$\,erg in the s11\_8 model). After energy deposition, all the stored energy is internal. As the shock forms and moves out, that stored internal energy is converted into kinetic energy behind the shock, and as the shock progresses outward, more and more material is shock-heated and the total internal energy goes up at the expense of the kinetic energy. As time progresses further, the kinetic energy eventually increases due to radiation work on the ejecta. 
Figure~\ref{fig_s11_5_enr} illustrates this evolution until just after shock breakout for the case in which $E_{\rm dep}=1.6\times 10^{50}$\,erg (model s11\_5). \begin{figure} \epsfig{file=f6.ps,width=8.5cm} \caption{Time evolution of the mass-integrated total (black), thermal (orange), radiative (blue), kinetic (green), and gravitational (red; its absolute value) energies for the model s11\_5. Notice the interplay between radiative (which dominates the internal energy over the thermal component) and kinetic energy. The time of shock breakout is 2.6\,d, at which the total internal (kinetic) energy is 8.3$\times 10^{49}$\,erg (6.7$\times 10^{49}$\,erg). The time origin is the start of the simulation. \label{fig_s11_5_enr} } \end{figure} As the shock approaches the surface, the envelope properties become qualitatively and quantitatively different, depending on the adopted $E_{\rm dep}$ value. For a large $E_{\rm dep}$, we find that at shock emergence, the total envelope energy is in rough equipartition between kinetic and internal. As time progresses, all the energy is eventually converted into kinetic as radiation, nearly entirely trapped at large optical depth, does work to accelerate the ejecta to its asymptotic velocity. In contrast, for small $E_{\rm dep}$, the kinetic energy of the envelope at breakout and later is very small. Here, the stored internal energy is weakly enhanced after energy deposition (i.e. $E_{\rm dep} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} E_{\rm int}$ originally), and this excess energy is essentially all exhausted in doing work against gravity, merely lifting off the envelope from the gravitational potential well. At shock emergence, the envelope kinetic energy is thus negligible. Hence, while in all cases from low to high energy deposition, a shock forms and eventually emerges at the progenitor surface, the quantitative differences are very large between cases. 
Moving from the dynamic to the quasi-static diffusion regime of energy transport, we identify three situations: 1) the energy deposition is much larger than the binding energy and a SN-like explosion ensues; 2) the energy deposition is on the order of the binding energy and a low-energy explosion/eruption results; and 3) the energy deposition is lower than the binding energy and the excess energy merely shifts the star to a new quasi-equilibrium state from which it relaxes on a very long time-scale. In the following subsections, we describe each of these regimes individually. \subsubsection{$E_{\rm dep} > E_{\rm binding}$} This regime applies to our simulations with $E_{\rm dep}$ between 4$\times 10^{49}$\,erg (model s11\_3) and 1.28$\times 10^{51}$\,erg (model s11\_8). The separation between this sample and cases with lower energy deposition is drawn at the threshold value of $E_{\rm dep}$ for which the entire envelope is ejected. In Table~\ref{tab_s11}, we give a census of the properties for this set. Although qualitatively similar, these simulations give rise to 8.8\,M$_{\odot}$\, ejecta with a mass-weighted average velocity in the range 500--3500\,km~s$^{-1}$, and a kinetic energy in the range 2.8$\times 10^{49}$\,erg ($E_{\rm dep}=4\times 10^{49}$\,erg) up to 1.276$\times 10^{51}$\,erg ($E_{\rm dep}=1.28\times 10^{51}$\,erg). For the former, the binding energy consumes a sizable share of the energy deposited, so that the asymptotic kinetic energy is appreciably smaller than $E_{\rm dep}$. For the latter, the energy deposited is overwhelmingly large and represents 99.7\% of the final kinetic energy. In all cases, the initial prompt deposition of energy turns the stellar envelope into a true ``ejecta'', and this material is lost from the star without any noticeable fallback.
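The asymptotic kinetic energies of this set can be recovered from a simple global energy budget (a sketch, not a result of the simulations themselves; it uses the initial envelope internal energy $E_{\rm int,0}\sim 10^{49}$\,erg and binding energy $-2.2\times 10^{49}$\,erg quoted above, and neglects the few-per-cent radiative losses):

```latex
% Approximate energy budget in the E_dep > E_binding regime: the
% deposited energy plus the initial internal energy, minus the
% envelope binding energy, ends up as ejecta kinetic energy.
\begin{equation*}
  (E_{\rm kin})_{\rm end} \approx E_{\rm dep} + E_{\rm int,0} - |E_{\rm grav}| .
\end{equation*}
% Model s11_3: (4.0 + 1.0 - 2.2) x 10^49 = 2.8 x 10^49 erg, versus
% the tabulated 2.79 x 10^49 erg; model s11_8: (128 + 1 - 2.2) x 10^49
% ~ 1.27 x 10^51 erg, versus the tabulated 1.276 x 10^51 erg.
```

For these two models the match with Table~\ref{tab_s11} is at the per-cent level.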
We find that the shock-crossing time through the envelope is between 0.9 and 5.8\,d (the distance between the deposition site and the progenitor surface is 4.7$\times 10^{13}$\,cm), with a time-averaged shock velocity between $\sim$5000 and 800\,km~s$^{-1}$ (in the same order, from high to low $E_{\rm dep}$).\footnote{Note that the shock is always supersonic and may propagate at very different speeds depending on location and initial energy deposition.} This therefore modulates the delay between the energy release (which may be a sequel of gravitational collapse or otherwise) and the time of shock breakout, which is the first detectable signal testifying to the violent energy deposition that took place at depth. Importantly, this breakout signal is the unambiguous signature that a shock wave emerged from the progenitor surface, a signature that would directly eliminate steady-state wind solutions for the origin of the subsequent outflow/ejecta. \begin{figure} \epsfig{file=f7.ps,width=8.5cm} \caption{Time evolution of the normalized intrinsic bolometric luminosity around shock breakout, as computed with V1D, and revealing the hour-to-day range in duration of the shock-breakout signal, caused here by the change in shock speed (or explosion energy). Light-travel-time delays are not accounted for. The variation in the breakout-peak luminosity is shown in Fig.~\ref{fig_s11_lum_all}. A color coding is used to differentiate models. \label{fig_s11_lum_sbo} } \end{figure} Owing to the sizable range in shock speeds, the breakout signal varies considerably in duration. From a random-walk argument, it is born when the decreasing envelope optical depth $\tau$ at the shock depth $\Delta r$ below the surface eventually makes the diffusion time $t_{\rm diff} = \tau \Delta r /c$ shorter than the shock-propagation time to the progenitor surface $t_{\rm shock} = \Delta r / v_{\rm shock}$.
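Equating these two time-scales gives the optical depth at which the breakout signal is released; evaluating it for the extreme time-averaged shock speeds of this model set (a rough estimate, taking $c = 3\times 10^{5}$\,km~s$^{-1}$):

```latex
% Breakout condition: t_diff = t_shock, i.e.
%   tau (Delta r)/c = (Delta r)/v_shock  =>  tau_SBO ~ c/v_shock.
\begin{equation*}
  \tau_{\rm SBO} \sim \frac{c}{v_{\rm shock}} \approx
  \begin{cases}
    60,  & v_{\rm shock} \simeq 5000\,{\rm km\,s}^{-1} , \\
    375, & v_{\rm shock} \simeq 800\,{\rm km\,s}^{-1} ,
  \end{cases}
\end{equation*}
% so slower shocks release their breakout signal from deeper, higher
% optical-depth layers, lengthening the observed signal.
```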
This occurs at an optical depth $\tau \sim c / v_{\rm shock}$, which can be very large for a small shock-propagation speed. As shown in Fig.~\ref{fig_s11_lum_sbo} (note that the luminosity in this figure is normalized to the peak value to better illustrate its time variation), we find shock-breakout durations from $\sim$0.5 up to $\sim$2\,hr in the sequence (we refer to the intrinsic breakout duration, hence neglect light-travel-time effects, which would broaden the signal seen by a distant observer over a duration of at least $\sim R_{\star}/c \sim$1\,hr). Similarly, the peak luminosity at breakout varies enormously with the energy deposited or the shock strength, as shown in the left panel of Fig.~\ref{fig_s11_lum_all}, where the time axis is the time since energy deposition. To a higher energy-deposition strength correspond stronger shocks, shorter shock-crossing times (earlier shock emergence), and both shorter-duration and greater peak breakout luminosities. For the range of models discussed in this section, we obtain values between 4.9$\times 10^{9}$ up to 5.7$\times 10^{11}$\,L$_{\odot}$, a contrast of about a hundred, which is comparable to the corresponding contrast in asymptotic ejecta kinetic energy. So far, the systematics of shock breakout have been tied to variations in the progenitor properties, such as surface radius or atmospheric scale height, while obviously the explosion energy, if stretched to low enough values, can considerably lengthen the breakout signal. But importantly, while the delay between energy deposition and breakout varies by a factor of a hundred in this model set, it remains on a time-scale of days and hence much shorter than the diffusion time on the order of thousands of years for the corresponding layers (see \S\ref{sect_tdiff}). \begin{figure*} \epsfig{file=f8a.ps,width=8.5cm} \epsfig{file=f8b.ps,width=8.5cm} \caption{Same as Fig.~\ref{fig_s11_lum_sbo}, but this time showing the un-normalized bolometric luminosity.
In the left panel, we use a logarithmic scale for the abscissa, showing the time since the start of the simulations. In the right panel, we use a linear scale for the abscissa, showing the time since shock breakout. Stronger explosions show shorter and stronger breakout signals, followed by a brighter plateau brightness but a shorter plateau length. At intermediate values of the energy deposition, the plateau brightness is followed by a ledge, which corresponds to the time when the photosphere recedes into the more tightly-bound (and slowly-expanding) helium-rich part of the envelope - neither artificial nor physical mixing occurs in our 1D simulations, a process that could smear out such a feature. Note that for cases in which $E_{\rm dep} \sim E_{\rm binding}$, the breakout luminosity is barely larger than the following plateau, which we find here can last for up to two years in this 11\,M$_{\odot}$\, progenitor star. \label{fig_s11_lum_all} } \end{figure*} These properties at and subsequent to breakout are visible in the conditions at the photosphere, which we show in Fig.~\ref{fig_s11_phot} for the velocity $V_{\rm phot}$, the radius $R_{\rm phot}$, the temperature $T_{\rm phot}$, and the Lagrangian mass coordinate $M_{\rm phot}$. Because of the infrequent writing of model quantities to file, the breakout phase is not well resolved. This matters little for the velocity, the radius, and the mass at the photosphere, which all evolve slowly. However, the photospheric temperature shows a peak whose strength is underestimated in the high energy-deposition cases, characterized by short breakout durations (note that the low spatial resolution at the progenitor surface prevents an accurate modeling of this breakout phase in any case). Hence, at breakout, we find an approximate range of photospheric temperatures between 5$\times 10^4$ and a few times 10$^5$\,K.
In scattering-dominated atmospheres, the color temperature of the emergent spectral-energy distribution corresponds roughly to the gas temperature at the thermalization depth \citep[p 149]{mihalas_78}, which is up to a factor of two greater than $T_{\rm phot}$ \citep{DH05_epm}. For a thermal (blackbody) emitter at $T$, the peak of the spectral-energy distribution (SED) is at $\sim 2900/T_{4}$\,\AA\, (where $T_4$ is $T/10^4$ K). At breakout, the spectral-energy distribution for the present set of models will thus peak in the range 100-600\AA. It is at breakout that a fraction of the material at the progenitor surface gets accelerated to large velocities. In the sequence of models s11\_3 to s11\_8, we find increasing maximum ejecta speeds from 1800\,km~s$^{-1}$\, up to $\sim$13000\,km~s$^{-1}$, typically a factor of four higher than the mean mass-weighted velocity of the ejecta. The general evolution of the long-term, post-breakout, photospheric conditions is qualitatively similar for this set of models. After breakout and for a few days, the internal energy stored in the shock-heated stellar envelope does work to accelerate the ejecta to its asymptotic velocity. For larger energy deposition, the expansion rate is greater and leads to more efficient cooling of the ejecta, thereby mitigating the impact of the larger internal energy initially provided by the shock. This cooling is quasi-adiabatic since little radiation is lost from the photon decoupling layers - it all occurs at large optical depth. After a few days, the ejecta velocity increases monotonically with radius (homologous expansion is only approximately attained since the progenitor radius may represent up to 5--10\% of the current photospheric radius at all times). As the ejecta expand, cool, and recombine to a lower ionization state, the photospheric velocity decreases with time. 
The photospheric radius first increases, reaching values in excess of 10$^{15}$\,cm for the high energy cases, but eventually decreases due to recombination and the diminishing optical depth of the ejecta. The location of the photosphere is that of full ionization, which for the H-rich composition of this stellar envelope occurs around 5000\,K. Hence, the photospheric temperature, after its initial peak at breakout, slowly decreases to level off at $\sim$5000\,K. Ultimately, the photosphere reaches the inner region of the ejecta and the ``event'' enters its nebular phase. This transition occurs earlier for higher values of $E_{\rm dep}$, after about 100 days in model s11\_8 but up to $\sim$200 days in the s11\_3 model (since the code crashed for that model, this is an estimate based on the noticeable trend with energy deposition for that quantity; see Table~\ref{tab_s11}). This evolution in photospheric properties parallels the bolometric evolution of the explosion, which we show in Fig.~\ref{fig_s11_lum_all}. In all cases, the early breakout peak in luminosity is followed by a fading (initially from radiative cooling at the shock-heated surface layers) followed by a sustained brightness as the ejecta expand (the rate of increase in photospheric radius compensates the rate of decrease in photospheric temperature): This is the so-called plateau phase of Type II-P SNe. Peak luminosities, reached at a later time for a lower energy deposition (from 50 to 100 days after breakout), are in the range 3$\times 10^7$ up to 6$\times 10^8$\,L$_{\odot}$. Similarly, the plateau duration lengthens with lower energy deposition, ranging from 100 up to about 200 days (recall that this occurs in models s11\_3 to s11\_8 for the same ejected mass of $\sim$8.8\,M$_{\odot}$). 
At the end of our simulations, we find time-integrated bolometric luminosities in the range 10$^{48}$--10$^{49}$\,erg, which represent, in the same order, 1/20th down to 1/70th of the corresponding asymptotic ejecta kinetic energy. Hence, at the high-energy end, we obtain light curves that are reminiscent of Type II-P SNe (Fig.~\ref{fig_obs_lc}), ranging from energetic events like SN 2006bp \citep{dessart_etal_08}, to the more standard SN 1999em \citep{leonard_etal_02a}, through to the low-luminosity SN 1999br \citep{pastorello_etal_09}. The luminosity contrast of a factor of $\sim$20 in this set of models is on the order of that inferred for Type II-P SNe (Fig.~\ref{fig_obs_lc}; \citealt{pastorello_etal_09}). For moderate to high $E_{\rm dep}$ values, the plateau durations we obtain are on the order of those observed for Type II-P SNe, although we find a lengthening for lower $E_{\rm dep}$ values that has not been definitively observed. For example, the observations during the plateau phase of SN 1999br are unfortunately truncated after 100 days, at a time when the SN was not showing any clear sign of fading. The subsequent observation 250 days later, when the SN had faded, does not exclude the possibility that SN 1999br kept its plateau brightness for a longer time, perhaps for a total of 150 or 200 days (see Fig.~\ref{fig_obs_lc}). To summarize, the regime of explosions described in this section corresponds to that of Type II-P SNe, with a range of explosion energies and bolometric luminosities that are in line with inferences from observations. The rather large energy-deposition magnitude (i.e. $\geq 4 \times 10^{49}$\,erg) in this regime favors a scenario involving the gravitational collapse of the stellar core and, given the resulting complete envelope ejection, the formation of a neutron star remnant.
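The quoted radiative efficiencies follow directly from the entries of Table~\ref{tab_s11} (a consistency check using the time-integrated bolometric luminosity and the final kinetic energy):

```latex
% Radiated fraction f = (time-integrated L_bol) / (E_kin)_end:
\begin{align*}
  f({\rm s11\_3}) &\approx \frac{1.3\times 10^{48}\,{\rm erg}}
                                {2.79\times 10^{49}\,{\rm erg}}
                   \approx \frac{1}{21} , \\
  f({\rm s11\_8}) &\approx \frac{1.8\times 10^{49}\,{\rm erg}}
                                {1.276\times 10^{51}\,{\rm erg}}
                   \approx \frac{1}{71} ,
\end{align*}
% i.e. the ~1/20 down to ~1/70 range quoted in the text.
```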
We wish to emphasize that there is no fundamental limitation preventing explosions in this 11\,M$_{\odot}$\, model with energies as low as a few times 10$^{49}$\,erg, as in model s11\_3. The corresponding light curve would be reminiscent of that of SN 1999br, only slightly fainter. \begin{figure*} \epsfig{file=f9a.ps,width=8.5cm} \epsfig{file=f9b.ps,width=8.5cm} \epsfig{file=f9c.ps,width=8.5cm} \epsfig{file=f9d.ps,width=8.5cm} \caption{{\it Top left:} Time evolution of the velocity at the photosphere for the sequence of models associated with the 11\,M$_{\odot}$\, model of WHW02. Only the energy deposited during the initial ten seconds after the start of the simulation differs between models, all other initial model characteristics being the same. A color coding is used to distinguish the different cases, with total energy deposition encompassing the range 5$\times 10^{48}$ (black) up to 1.28$\times 10^{51}$\,erg (red). {\it Top right:} Same as top left, but now for the radius at the photosphere. {\it Bottom left:} Same as top left, but now for the temperature at the photosphere. Note that the time of breakout is poorly sampled (dumps written every 10000\,s), so that the temperature shown at breakout is an underestimate. {\it Bottom right:} Same as top left, but now showing the Lagrangian mass coordinate of the photosphere as a function of time. Note that the mass interior to the inner boundary of the V1D grid is 1.8\,M$_{\odot}$\, for this sequence of models, and is not accounted for in this last panel. \label{fig_s11_phot} } \end{figure*} \subsubsection{$E_{\rm dep} \sim E_{\rm binding}$} In this section, we continue the sequence toward lower energy deposition, entering a regime where its magnitude is on the order of the binding energy of the envelope and, incidentally, on the same order as the total internal energy of the envelope.
In contrast with the results for high $E_{\rm dep}$ values, only a fraction of the progenitor envelope is ejected in models s11\_01 to s11\_2, ranging from 5.95 up to 8.6\,M$_{\odot}$. Despite failing to explode fully, all models go through a sequence similar to that of powerful explosions. A shock crosses the progenitor envelope, heating it and propelling it outward. The shock eventually breaks out, after 10--30\,d (time-averaged shock speeds of 150--500\,km~s$^{-1}$), with an accompanying peak luminosity of 3$\times 10^6$ up to 10$^9$\,L$_{\odot}$. The fastest ejected material is accelerated modestly, from 400 (model s11\_01) up to 1300\,km~s$^{-1}$ (model s11\_2). At breakout, the kinetic energy of the envelope is a diminishing fraction of the total envelope energy, which is primarily stored internally. Asymptotically, the total kinetic energy of the ejected material (a fraction only of the total) is in the range 4$\times 10^{47}$ up to 10$^{49}$\,erg, with mass-weighted average velocities in the range 70--300\,km~s$^{-1}$. The long-term light curves are characterized by peak luminosities in the range of 10$^6$--10$^7$\,L$_{\odot}$\, and very long plateau durations of 1--2 years, yielding low-luminosity, long-lived transients (Fig.~\ref{fig_s11_lum_all}). Hence, these events look somewhat like SNe because they attain luminosities/expansion rates that are comparable to those of low-luminosity/energy Type II-P SNe. But they also look like the giant eruptions seen in the most massive luminous stars, characterized by super-Eddington luminosities on the order of 10$^7$\,L$_{\odot}$, with ejected shells weighing a few solar masses and outflowing velocities of a few hundred km~s$^{-1}$.
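These luminosities can be gauged against the Eddington limit of the pre-SN star; as an explicit numerical check of the value quoted in the text (plugging in $M = 10.6$\,M$_{\odot}$\, and the electron-scattering-dominated opacity $\kappa \approx 0.34$\,cm$^2$\,g$^{-1}$ appropriate for this hydrogen-rich envelope):

```latex
% Eddington luminosity of the 10.6 Msun pre-SN star (cgs units):
\begin{equation*}
  L_{\rm Edd} = \frac{4\pi c G M}{\kappa}
  \approx \frac{4\pi \, (3\times 10^{10}) \, (6.67\times 10^{-8}) \,
                (2.11\times 10^{34})}{0.34}\,{\rm erg\,s}^{-1}
  \approx 1.6\times 10^{39}\,{\rm erg\,s}^{-1}
  \approx 4.1\times 10^{5}\,L_{\odot} ,
\end{equation*}
% so plateau luminosities of 10^6-10^7 Lsun exceed L_Edd by a factor
% of a few up to ~25.
```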
The Eddington luminosity for this 10.6\,M$_{\odot}$\, pre-SN star is $L_{\rm Edd}= 4 \pi c G M / \kappa \sim 4.07\times 10^5$\,L$_{\odot}$\, (we use a representative mass-absorption coefficient $\kappa=0.34$\,cm$^{2}$\,g$^{-1}$ for this hydrogen-rich stellar envelope), and hence the luminosities quoted above, in the range 10$^6$--10$^7$\,L$_{\odot}$, are all significantly super-Eddington. In contrast to the case of radiatively-driven {\it outflows}, the super-Eddington nature of the luminosity is only an incidental feature in the current context, since the {\it ejecta} is already unbound after shock passage (this does not exclude secondary effects that could stem from the high luminosity). The outcome of such a weak-energy ``explosion'' can be one of two things. If the origin is a very weak energy release following core collapse, then the fraction of the mass that was expelled will lead to a one-time transient, and the proto-neutron star will accumulate the inner part of the envelope, perhaps transitioning to a black hole if it becomes sufficiently massive (in model s11\_01, it would form a black hole with a 3.85\,M$_{\odot}$\, baryonic mass). But if the weak energy release is caused by something else, e.g. thermonuclear combustion of material at the surface of the degenerate (and hydrostatically-stable) core, the outcome is a massive eruption in a massive star. In this case, the luminosity eventually levels off, as the photosphere receding in mass through the ejecta layers eventually reaches the inner envelope layer that failed to eject. This second category of objects would be called SN impostors.\footnote{In this case, the stellar core would eventually collapse, potentially ejecting the residual envelope, which is by then of much lower mass and nearly hydrogen-deficient.
Depending on the time delay between the two events, this could also produce a strong interaction with the previously ejected shell, yielding an interacting SN.} Unless the event occurs nearby and allows the inspection of the transient site, one does not know whether or not a star survived. Hence, such events are ambiguous and may be interpreted either as a one-time, non-repeatable explosion (a SN) or as a potential recidivist like $\eta$ Car. These are impostors in the sense that they would not result from the collapse of a degenerate core, but, in the present context, they share with core-collapse SNe the important property of resulting from shock-heating of the envelope. Hence, a breakout signal would systematically herald the forthcoming ejection, excluding its origin as a steady-state super-Eddington wind accelerated from a hydrostatic base. The longer duration of $\sim$1\,d and the softer spectral-energy distribution (UV-peaked) of the breakout signal should allow the detection of such events in large-scale sky surveys, providing a clear discriminant between an explosive or a steady-state-wind origin for such mass ejections. \begin{table*} \begin{minipage}{140mm} \caption{Same as Table~\ref{tab_s11}, but now showing the results for the sequence of models with differing energy-deposition depth in the 11\,M$_{\odot}$\, progenitor star (see \S\ref{var_mcut} for discussion).
\label{tab_s11_wcomp}} \begin{tabular}{lcccccccc} \hline Model Name & s11\_1 & s11\_1\_w3 & s11\_1\_w7 & s11\_1\_w9 & s11\_3 & s11\_3\_w3 & s11\_3\_w7 & s11\_3\_w9 \\ \hline $E_{\rm dep}$ (10$^{50}$\,erg) & 1(-1) & 1(-1) & 1(-1) & 1(-1) & 4(-1) & 4(-1) & 4(-1) & 4(-1) \\ $M_{\rm cut}$ (M$_{\odot}$) & 1.8 & 3.0 & 7.0 & 9.0 & 1.8 & 3.0 & 7.0 & 9.0 \\ $t_{\rm edep}$ (s) & 10 & 10 & 10 & 10 & 10 & 10 & 10 & 10 \\ $t_{\rm end}$ (d) & 730 & 450 & 450 & 354 & 160 & 200 & 200 & 200 \\ $M_{\rm ejected}$ (M$_{\odot}$) & 7.75 & 7.60 & 3.60 & 1.60 & 8.72 & 7.60 & 3.60 & 1.60 \\ $\langle v \rangle_{M}$ (km~s$^{-1}$) & 121 & 350 & 520 & 804 & 510 & 715 & 1000 & 1600 \\ $(E_{\rm kin})_{\rm end}$ (10$^{50}$\,erg) &1.3(-2) & 1.0(-1) & 1.0(-1) & 1.0(-1) & 2.8(-1)& 4.1(-1) & 4.1(-1) & 4.3(-1) \\ $\langle v_{\rm max} \rangle$ (km~s$^{-1}$) & 560 & 1200 & 1430 & 1650 & 1800 & 2000 & 2400 & 3200 \\ $t_{\rm SBO}$ (d) & 21.4 & 9.7 & 4.3 & 1.8 & 5.8 & 5.0 & 2.2 & 0.9 \\ $L_{\rm peak, SBO}$ (L$_{\odot}$) & 5.2(7) & 1.0(9) & 1.8(9) & 3.7(9) & 4.9(9) & 6.9(9) & 1.2(10) & 2.4(10) \\ $\langle v_{\rm shock}\rangle$ (km~s$^{-1}$) & 221 & 447 & 520 & 740 & 812 & 870 & 1000 & 1460 \\ $(E_{\rm kin})_{\rm SBO}$ (erg) & 6.7(47) & 4.3(48) & 3.3(48) &3.0(48) & 1.3(49) & 1.7(49) & 1.3(49) & 1.2(49) \\ $(E_{\rm int})_{\rm SBO}$ (erg) & 3.8(48) & 6.7(48) & 7.6(48) &8.3(48) & 1.7(49) & 2.6(49) & 3.0(49) & 3.3(49) \\ $t_{\rm peak,plateau}$ (d) & 243 & 177 & 122 & 80 & 97 & 135 & 96 & 64 \\ $L_{\rm peak, plateau}$ (L$_{\odot}$) & 2.9(6) & 1.8(7) & 3.2(7) & 4.4(7) & 2.9(7) & 5.4(7) & 9.8(7) & 1.3(8) \\ $\Delta t_{\rm plateau}$ (d) & 341 & 181 & 126 & 87 & $>$155 & 142 & 102 & 71 \\ $\int L_{\rm bol}$ dt (10$^{49}$\,erg) & 2.5(-2) & 7.1(-2) & 7.7(-2) & 8.0(-2) & 1.3(-1) & 1.8(-1) & 2.0(-1) & 2.2(-1) \\ \hline \end{tabular} \end{minipage} \end{table*} \subsubsection{$E_{\rm dep} < E_{\rm binding}$} For the smallest energy deposition value in our sequence (model s11\_0; $E_{\rm dep}=$5$\times$10$^{48}$\,erg), 
only a tiny fraction of the envelope ($\sim$0.13\,M$_{\odot}$) is ejected, concomitantly with shock breakout, while the bulk of the envelope remains attached to the star. The energy deposition is used to lift the envelope out of the gravitational potential well, but falls short of unbinding it. This results in a ``bloated'' star whose radius has increased by about a factor of two in 50 days after breakout, while the surface temperature has hardly changed. Consequently, the corresponding star, with its puffed-up envelope, brightens from a luminosity of $\sim$10$^5$\,L$_{\odot}$\, up to $\sim$1.37$\times$10$^5$\,L$_{\odot}$, which remains well below its Eddington luminosity of 4.07$\times$10$^5$\,L$_{\odot}$. Over the two-year time-scale covered by the simulation, the photosphere retains a similar radius and temperature. Since the energy left over by the shock is stored at large optical depth, and since it was insufficient to communicate any kinetic energy to the envelope, the transport regime is no longer of dynamic diffusion (as in the high energy-deposition cases) but instead of static diffusion, with characteristic time-scales of $\sim$100\,yr for this massive star envelope (Fig.~\ref{fig_tdiff_mr}). Albeit weak, such an energy deposition causes a significant translation of the star in the HR diagram and may be linked, in some cases, to the photometric and spectroscopic variability of H-rich massive stars. The central element that enables such low-energy perturbations to have any effect is the very low binding energy of massive star envelopes, in particular at the low-mass end.
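The static-diffusion time-scale quoted above can be understood from the standard order-of-magnitude estimate for radiative diffusion through an envelope of radius $R$ and total optical depth $\tau$ (the fiducial values adopted below are illustrative RSG-envelope numbers, not extracted from the simulation):
\begin{equation}
t_{\rm diff} \sim \frac{\tau R}{c} \approx 70 \left(\frac{\tau}{10^{6}}\right) \left(\frac{R}{1000\,{\rm R}_{\odot}}\right)\,{\rm yr},
\end{equation}
on the order of the $\sim$100\,yr value quoted for this massive-star envelope.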
\subsection{Effect of varying the Energy-Deposition depth} \label{var_mcut} \begin{figure*} \epsfig{file=f10a.ps,width=8.5cm} \epsfig{file=f10b.ps,width=8.5cm} \caption{Same as Fig.~\ref{fig_s11_lum_all} for the 11\,M$_{\odot}$ model, but now showing the emergent bolometric-light evolution resulting from cases in which the energy is deposited at a Lagrangian mass of 1.8 (black), 3 (blue), 7 (green), and 9\,M$_{\odot}$\, (red). The left (right) panel corresponds to an energy deposition of 1 (4) $\times$10$^{49}$\,erg. In all cases, we choose the time origin as the time of shock breakout. \label{fig_s11_wmin_phot_lum} } \end{figure*} We have argued in the introduction and in \S\ref{sect_input} that, physically, a deposition of energy from deep regions in the progenitor star is more likely since this is where most of the energy is stored or can be made available. This can occur through gravitational collapse of the degenerate core from its original radius of a few 1000\,km to $\sim$10\,km (the standard initiation of a core-collapse SN explosion), or from violent shell burning above the core (our preferred trigger for transient phenomena in massive stars). In contrast, the envelope layers well above the core, characterized by lower densities and temperatures, tend to merely react to energy changes occurring at depth. It is, however, interesting to investigate the dependency of our results on the adopted site of energy deposition. We have thus performed two sequences for $E_{\rm dep}=10^{49}$\,erg (model s11\_1) and $E_{\rm dep}=4\times10^{49}$\,erg (model s11\_3), with a deposition at 1.8 (as before), 3, 7, and 9\,M$_{\odot}$\, Lagrangian mass coordinate. These correspond to locations in the helium or in the hydrogen shell. The deposition time in all cases is 10\,s. We show the resulting light curves for both sequences in Fig.~\ref{fig_s11_wmin_phot_lum}, with the corresponding key parameters given in Table~\ref{tab_s11_wcomp}.
For the s11\_1 model sequence (black curve, left panel of Fig.~\ref{fig_s11_wmin_phot_lum}), the energy deposition is on the order of the binding energy of the {\it entire} envelope above the 1.8\,M$_{\odot}$\, mass cut. As we discussed before, this led to a SN-impostor-like event, with only a fraction of the total envelope ejected to infinity. The envelope outside of a 3\,M$_{\odot}$\, mass cut has, however, only a 10$^{47}$\,erg binding energy so, as we move the site of energy deposition outward (from 3, to 7, and 9\,M$_{\odot}$\, Lagrangian mass coordinate), we switch to a successful explosion with sizable kinetic energy and rapid light-curve evolution. As given in Table~\ref{tab_s11_wcomp}, the models with a mass cut at and beyond 3\,M$_{\odot}$\, lead to full ejection of the layers exterior to the energy-deposition site, with representative ejecta expansion rates of 400--800\,km~s$^{-1}$ and kinetic energy of 10$^{49}$\,erg. Interestingly, the phase of high brightness lasts from 90 up to 180 days despite the low amount of ejected material, i.e. between 1.6 and 7.6\,M$_{\odot}$. The long duration of these events, even for low ejected masses, is caused by two factors. First, the expansion is relatively slow so that radiative diffusion is modestly facilitated by the ejecta density reduction with time (we are closer to a static regime of radiative diffusion). Second, the energy losses through expansion are minimal since we start from a very extended (i.e. loosely-bound) stellar envelope. Note here that in the more compact (i.e. tightly-bound) progenitor configurations for Type I SNe (Wolf-Rayet stars or white dwarfs), the cooling through expansion is tremendous and the events owe their large brightness at a few weeks after explosion entirely to the heating caused by unstable isotopes with week-long half-lives.
{\it A corollary is that weak-energy ejections with little $^{56}$Ni can only reach a large luminosity if they arise from a loosely-bound initial configuration, e.g. a big star.} For the s11\_3 model sequence (light curve shown in the right panel of Fig.~\ref{fig_s11_wmin_phot_lum}), the energy deposition is greater than the binding energy of the envelope above the 1.8\,M$_{\odot}$\, mass cut so that no matter what the position of the mass cut is outside of 1.8\,M$_{\odot}$, we obtain full ejection of the shocked layers, with representative velocities in the range 500--1600\,km~s$^{-1}$. To conclude, we find that moving the energy-deposition mass cut $M_{\rm cut}$ outwards for a given energy-deposition magnitude $E_{\rm dep}$ produces a similar effect to increasing $E_{\rm dep}$ at a given $M_{\rm cut}$. Quantitatively, this is modulated by the binding energy of the envelope exterior to $M_{\rm cut}$. We observe in particular that with such RSG progenitors it is possible to obtain light curves with a very extended plateau even for very small ejecta mass. Note that this remains an experiment though, since we believe the energy deposition in a massive star must occur in the vicinity of the core, hence interior to the helium shell of pre-SN massive stars. \begin{figure} \epsfig{file=f11.ps,width=8.5cm} \caption{Same as Fig.~\ref{fig_s11_lum_all}, but now showing the bolometric light evolution resulting from models in which the energy is deposited at a fixed rate over 10\,s (black), 1\,hr (blue), 1\,d (light green), 1\,week (green), and 1 month (red). The site of energy deposition is at Lagrangian-mass coordinate 3\,M$_{\odot}$\, and the energy deposition is 4$\times$10$^{49}$\,erg in all cases. The time origin is the time of shock breakout.
\label{fig_var_dt} } \end{figure} \subsection{Effect of varying the Energy-Deposition duration} \label{var_dt} A fundamental element of our study is that energy deposition must occur on a short time-scale to ensure the formation of a shock, rather than on a long time-scale which would allow radiative diffusion and convection to carry outward the extra energy. In this section, we present results with V1D for a model with $E_{\rm dep}=4\times 10^{49}$\,erg and $M_{\rm cut}=3$\,M$_{\odot}$, but an energy-deposition time of 10 seconds, one hour, one day, one week, and one month. Since the resulting ejecta and bolometric properties are similar to those of the model s11\_3\_w3 (see Table~\ref{tab_s11_wcomp}), we do not tabulate the results for this sequence of models. However, we show the synthetic light curves in Fig.~\ref{fig_var_dt}. In all cases, a phase of shock breakout takes place, followed by a bright plateau that persists over a similar duration. No qualitative difference is visible, and quantitative differences are small. Hence, shock formation persists even for very long deposition times, despite the large range covered. In other words, a deposition of energy that takes place over a time-scale much longer than the expected shock revival time in core-collapse SN explosion (i.e. one second) still leads to shock formation. We are aware that the neglect of feedback effects makes this exploration somewhat artificial. In the context of a nuclear flash, expansion could, for example, endanger the continuation of nuclear burning. For energy-deposition time-scales much longer than a month, a diffusion wave, rather than a shock, would form, together perhaps with convection. Such a regime, not of interest in the present study, would have to be followed in 2D or 3D (see, for example, \citealt{mocak_etal_08,mocak_etal_09}) to properly model the interplay between these two means of energy transport and the resulting effects on the envelope structure. 
\begin{figure} \epsfig{file=f12.ps,width=8.5cm} \caption{Same as Fig.~\ref{fig_s11_lum_all}, but now showing the bolometric light evolution for different progenitor-mass models: 11 (black), 15 (blue), 20 (green), and 25\,M$_{\odot}$\, (red; these masses correspond to main-sequence masses). The corresponding total progenitor masses at the time of collapse for this set is 10.6, 12.6, 14.7, and 12.5\,M$_{\odot}$, respectively. In all models, we deposit the same energy of 10$^{50}$\,erg, and at the same {\it physical} location in the envelope, corresponding to the base of the helium shell. In the same order, this corresponds to the Lagrangian-mass coordinate of 1.8, 3.1, 5.0, and 7.2\,M$_{\odot}$. The time origin is the time of shock breakout, which corresponds to a range of delays since energy-deposition in the four models. \label{fig_var_mprog} } \end{figure} \subsection{Dependency on Progenitor Mass} \label{var_mprog} The choice of the 11\,M$_{\odot}$\, model for the investigations presented so far was motivated by the lower density of the progenitor envelope. This produces larger Courant-time increments for the hydrodynamics and easier computing. However, as evidenced by the variation in envelope binding energy for 10-40\,M$_{\odot}$\, massive stars (recall that these model masses correspond to those on the main sequence; the corresponding masses at the onset of collapse can be seen from Fig.~\ref{fig_rho_mr}) and by the dependencies presented in the previous sections, a given energy deposition will obviously produce a very different outcome for different progenitor masses. In this section, we thus explore the radiative and dynamical properties of ejecta produced by the deposition of 10$^{50}$\,erg at the base of the helium shell in 11, 15, 20, and 25\,M$_{\odot}$\, massive-star progenitors. 
In the same order, this corresponds to mass cuts at 1.8, 3.1, 5.0, and 7.2\,M$_{\odot}$\, in the envelope, to envelope binding energies (we quote absolute values) of 9.9$\times 10^{48}$\,erg, 4.6$\times 10^{49}$\,erg, 6.6$\times 10^{49}$\,erg, and 1.4$\times 10^{50}$\,erg, and to total progenitor masses at the time of collapse of 10.6, 12.6, 14.7, and 12.5\,M$_{\odot}$. We show the resulting light curves in Fig.~\ref{fig_var_mprog}. As before with the exploration of outcomes for varying energy-deposition magnitudes (\S\ref{var_edep}) or sites (\S\ref{var_mcut}), adopting different progenitor stars and depositing the energy in regions that are more or less bound leads to a similar modulation in light-curve properties. With the adopted value $E_{\rm dep}=$10$^{50}$\,erg, we obtain light curves reminiscent of low-luminosity Type II-P SNe if it is larger than the binding energy of the overlying envelope (11 and 15\,M$_{\odot}$\, models), and long-lived and fainter events reminiscent of SN impostors or low-energy transients otherwise (20 and 25\,M$_{\odot}$\, models). Note that if, using the same progenitors (11, 15, 20, and 25\,M$_{\odot}$\, main-sequence mass models), we deposit a given energy at a radius where the Lagrangian mass is 2\,M$_{\odot}$\, lower than the total progenitor mass (same shell mass underneath the progenitor surface), we obtain essentially the same light curves and ejecta properties for all models. This degeneracy results from the generic $\sim$10$^{47}$\,erg binding energy of the corresponding (outer) H-rich shell in these different stars. To summarize, and perhaps paradoxically, the consideration of these additional progenitors (with different envelope structure/binding energy) does not add any new perspective to our results so far since the diversity of outcomes was fully revealed earlier on with the 11\,M$_{\odot}$\, model through the variation of the values of $E_{\rm dep}$ and $M_{\rm cut}$. 
Our sample of progenitor models does not include more massive stars. Such models were not added to retain a homogeneous sample, focus on conceptual aspects (i.e. shock heating of a representative gravitationally-bound stellar envelope) and leave aside the specificities of the numerous different categories of stars undergoing eruptions. One such category is that of blue-supergiant stars in the phase of hydrogen/helium core/shell burning. These objects are found at moderate masses, covered by our sample, but also extend to much larger masses. For example, the stars associated with events as diverse as SN1987A and the erupting $\eta$ Car are BSGs, with associated masses in the range 15--20\,M$_{\odot}$\ and $\sim$100\,M$_{\odot}$. However, compared to our sample of RSG stars, these objects are all more tightly bound, as reflected by their smaller surface radii of at most a hundred rather than a thousand times that of the sun. Hence, the various regimes highlighted above would apply with all energies scaled upwards. The duration of the transient would depend on the amount of material ejected, reflecting the dependencies discussed in \S\ref{var_mcut}. Because of their larger binding energy, very low-energy transients resulting from the mechanism presented here would be less likely to occur in such objects. For the most massive objects, the proximity to the Eddington limit will considerably reduce the binding energy of the envelope and will make it more prone to eruptions if perturbed by a deep-seated energy source. To provide more quantitative results and comparison with observations, we will present in a forthcoming study the gas and radiation properties we predict for very massive post main-sequence stars subject to shock heating. \section{Sample of Non-LTE Synthetic Spectra at 5 Days after Shock Breakout} \label{sect_rad} In this section, we provide synthetic spectra that illustrate the spectral-energy distribution of the various models discussed above.
We choose the representative time of five days after shock breakout since it is a typical discovery time for SNe, and perhaps as well for optical transients etc. This also offers a unique reference time for comparison in all models. We focus on the models presented in \S\ref{var_edep} in which only the energy deposition magnitude is varied, using the 11\,M$_{\odot}$\, model and adopting a deposition site at the 1.8\,M$_{\odot}$\, Lagrangian-mass coordinate. The computation we perform follows the approach described in various contexts in \citet{DH06_SN1999em,dessart_etal_08,dessart_etal_09}. Using the results from our V1D computations, we extract the gas properties at a total optical depth of a few tens and at a time of about five days after shock breakout. We then use these characteristics of radius, density, temperature, and velocity (see Fig.~\ref{fig_s11_phot}) to set up the initial conditions for the non-LTE radiative-transfer calculation. At such early times, the photosphere resides in the outer hydrogen-rich part of the progenitor envelope, characterized by a typical RSG surface composition (see, e.g., \citealt{dessart_etal_08}). We adopt a density exponent of ten and a homologous expansion. Recall that the line and continuum formation regions are rather confined and hence such characteristics, in practice, really matter {\it locally}, i.e. the density/velocity distributions may vary differently in a global sense (for details, see \citealt{dessart_etal_09}). We show in Fig.~\ref{fig_v1d_sed} the synthetic spectra (i.e. the emergent flux in the observer's frame), assuming a distance of 10\,Mpc and neglecting any reddening. At that reference time, and from the high to the low energy-deposition case, we obtain a trend towards smaller ejecta velocity (narrower line profiles), smaller temperature (redder peak flux distribution), combined with a smaller radius (smaller flux).
At higher energy deposition, the synthetic spectra are reminiscent of Type II-P SNe (from standard to low-luminosity ones), while for lower energy deposition, the spectra are qualitatively similar to those of cool mass-losing stars like the present day $\eta$ Car or SN impostors, e.g. 1961 \citep{zwicky_61}, 1997bs \citep{vandyk_etal_00}, 2000ch \citep{wagner_etal_04}, or 2002kg \citep{maund_etal_06}. We thus recover spectroscopically the various regimes identified in bolometric light curves and ejecta properties, primarily through modulations in luminosity and line width. An interesting aspect of this set of simulations is that for H-rich massive stars (and as long as the temperatures are not too low to foster the formation of molecules and dust), we obtain a very uniform set of spectral features (with the dominance of hydrogen and metals, whose abundance is that of the environment in which the star was born), merely varying in widths, with slight changes in the flux distribution mostly associated with variations in ionization. For example, the LBV $\eta$ Car has the same surface composition as the progenitor of SN 1999em \citep {hillier_etal_01,DH06_SN1999em}; it also has similar line-profile morphology and flux distribution as that of the interacting SN 1994W \citep{dessart_etal_09}. This spectroscopic degeneracy/uniformity adds to the difficulty of inferring the dynamical origin of the event. \begin{figure*} \epsfig{file=f13.ps,width=18cm} \caption{Non-LTE synthetic spectra for the models presented in \S\ref{var_edep}, based on the ejecta properties computed with V1D and extracted at a representative time of 5 days after shock breakout (we scale the synthetic fluxes adopting a distance to the object of 10\,Mpc). 
Notice how, despite the 2-order-of-magnitude range in energy deposition in this model sequence, the resulting spectral-energy distributions are qualitatively similar, differing essentially in the absolute flux level (the larger the energy deposition, the more luminous the event) and in line-profile width (the larger the energy deposition, the larger the ejecta kinetic energy). This similarity is also seen in observations of hydrogen-rich transients and results from the comparable envelope composition of the progenitor star and the comparable ionization state at the ejecta at the photosphere. \label{fig_v1d_sed} } \end{figure*} \section{Discussion} \label{sect_discussion} In this section, we discuss the ramifications of our results for explosions and eruptions in massive stars. Let us stress again that we do not make the claim that all massive-star mass ejections, or all transients in general, stem from a strong, sudden, and deeply-rooted energy release in the progenitor-star envelope. But here, given this working hypothesis, we explore its potential for explaining the properties and the diversity of observed transient phenomena associated with massive stars. \subsection{Binding energy considerations} The binding energy is the most fundamental quantity controlling the outcome of a given energy deposition. Although this conclusion seems trivial, its implications are far-reaching. Since the binding energy of massive stars increases with progenitor mass, SN explosions arising from more massive progenitors will, on average, be more energetic. Indeed, the energy deposition at the base of the envelope to be ejected has to be a few units of the binding energy, one unit to unbind the envelope, and the remaining units to give it its kinetic energy (fast expansion) and internal energy (bright display). 
In the sample of massive stars studied here, the envelope binding energy (right panel of Fig.~\ref{fig_eb_mr}) sets a higher energy threshold for increasing progenitor main-sequence mass. Consequently, stars originally more massive than $\sim$15\,M$_{\odot}$\, require an energy deposition of at least 10$^{51}$\,erg to unbind the envelope while 100 times less is sufficient in a 10\,M$_{\odot}$\, progenitor. A corollary is that quite generally, low-energy explosions in gravitationally-bound objects are more likely to be associated with loosely-bound objects. This appears counterintuitive since one may argue that the ejecta kinetic energy at infinity should be anything from zero ($E_{\rm dep} = E_{\rm binding}$) up to arbitrarily large values (set by $E_{\rm dep} - E_{\rm binding}$). This is true in theory, but in practice, the range of energies for events classified as a SN or as a low-energy transient differs, and varies greatly with the progenitor binding energy. At present, we have no reason to believe that $E_{\rm dep}$ and $E_{\rm binding}$ are exactly tuned in any way. This holds very obviously in the context of the core-collapse SN explosion mechanism, where the conditions for a successful explosion are set by what happens in the vicinity of the core and are thus entirely disconnected from the {\it global} threshold set by the envelope binding energy. So, we can assume that $E_{\rm dep}$ is {\it uniformly} distributed (the number of events having an energy between $E_{\rm dep}$ and $E_{\rm dep} + \Delta E$ scales linearly with the energy bin $\Delta E$ but is independent of $E_{\rm dep}$), with values above and below the envelope-binding energy. Now, mass ejection will only occur if $E_{\rm dep} > E_{\rm binding}$.
Consider an object with a binding energy of 10$^{51}$\,erg, and focus on the {\it uniform} distribution of events with an energy deposition between 10$^{51}$\,erg and 2$\times$10$^{51}$\,erg (we ignore those events that lead to no eruption and limit the maximum energy to that for a typical SN ejecta, e.g. for SN1987A). We can say that all ejecta with a kinetic energy in the range 10$^{50}$\,erg up to 10$^{51}$\,erg represent powerful explosions and will tend to be viewed/classified as a SN: This represents 90\% of the events in our distribution. Ejecta with an (asymptotic) kinetic energy less than 10$^{50}$\,erg will be produced with a 10\% probability, those with less than 10$^{49}$\,erg with a 1\% probability, and those with less than 10$^{48}$\,erg with a 0.1\% probability. This does not mean that a low-energy transient cannot take place in a highly bound object, but it strongly suggests that unless there is a mechanism that biases the energy deposition to match the binding energy (all SN explosions strongly support the contrary!), low-energy ejecta are {\it statistically more likely} to occur in loosely-bound objects. Another way to consider this is that to produce a low-energy transient in a highly-bound object, the excess energy to deposit is so small compared to the binding energy of the envelope that the required tuning to make this low-energy ejection can hardly be met. Hence, low-energy transients associated with stars should stem from ejection of loosely-bound regions, for example the envelope of low-mass massive stars or, in another context, low-mass shells at the surface of a white dwarf. Obviously, this is relevant only if any such energy deposition can take place but observations of transients demonstrate that it can. How it does it is a question left for future work. 
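The percentages quoted above follow directly from this assumption: for $E_{\rm dep}$ uniformly distributed over $[E_{\rm b}, 2E_{\rm b}]$ with $E_{\rm b}=10^{51}$\,erg, the asymptotic kinetic energy $E_{\rm kin} = E_{\rm dep}-E_{\rm b}$ is itself uniform over $[0, E_{\rm b}]$, so that
\begin{equation}
P(E_{\rm kin} < E) = \frac{E}{E_{\rm b}},
\end{equation}
i.e. 10\% for $E=10^{50}$\,erg, 1\% for 10$^{49}$\,erg, and 0.1\% for 10$^{48}$\,erg.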
As more and more energy is needed to eject the envelopes of more massive progenitors, the extreme case of tightly bound Wolf-Rayet stars can only be associated with very energetic explosions. The fact that (nearly) all hypernovae are associated with Type Ib/c SNe is in this respect quite compelling. It is not so much that they somehow can achieve higher energy deposition; it is that they {\it must} (the fact that they do achieve a higher energy-deposition to explode is an interesting problem that concerns the explosion mechanism, neutrino physics, magneto-rotational effects etc). In other words, of all possible scenarios, {\it only} those that lead to a very large energy deposition (well in excess of the binding energy) can yield a successful, and visible, explosion. This may act as a natural tuning ingredient to favor fast-rotating cores in tightly-bound progenitors like Wolf-Rayet stars \citep{burrows_etal_07b}. The fraction of Wolf-Rayet stars that fail to explode following core collapse is unknown at present. This binding-energy argument may also be a very natural explanation for the lack of detection of Type II SN explosions for objects with main-sequence mass greater than $\sim$20\,M$_{\odot}$\, \citep{smartt_etal_09}. The standard neutrino mechanism may, after all, fail to deliver an energy that is comparable to the binding energy of the more massive progenitor stars. And indeed, this may not be so surprising given the ten-order-of-magnitude density contrast in the region 2--5\,M$_{\odot}$\, between the 11 and the 25\,M$_{\odot}$\, models of WHW02 (Fig.~\ref{fig_rho_mr}). This would suggest that a lot of core-collapse events associated with the higher-mass massive stars do not yield explosions, but instead form a black hole with no transient electromagnetic display (see \citealt{smartt_etal_09}). The proposition of \citet{kochanek_etal_08} to monitor a large number of red supergiants for failed explosions would nourish this option.
It is the low binding energy (hence extended envelope structure) of massive stars that causes the bright displays of their ejected envelopes, even for low energy deposition, and even in the absence of any heating from unstable isotopes. In general, the change from a modest-size progenitor star to an extended ejecta causes a dramatic cooling, primarily due to expansion. And indeed, in Type Ia/b/c SNe, this cooling is dramatic and the large SN luminosity results only because of the presence of unstable isotopes, whose half-life of days to weeks makes the SN bright on a comparable time-scale. In contrast, only modest radial expansion (typically by 2 orders of magnitude compared to 5--7 in Type I SNe) occurs after the explosion of the already extended envelopes of H-rich massive stars. In such objects, large luminosities of 10$^7$--10$^8$\,L$_{\odot}$\, can be achieved without the contribution from unstable isotopes, and even for modest ejecta kinetic energies. Hence, in general, low-energy (stellar) explosions/eruptions should occur from regions that are weakly gravitationally-bound. Given the very low envelope binding energy of low-mass massive stars and hence the very low energy threshold for producing explosion/eruption, we argue that these should be prime candidates for transient events in the Universe, perhaps even in connection to interacting SNe. The stellar initial-mass function is also biased in favor of such low-mass massive stars, which are therefore not rare (about one for every 200 solar-type stars). This argumentation likely applies to SN 2008S \citep{prieto_etal_08, smith_etal_09b,boticella_etal_09}, M85-OT2006-1 \citep{kulkarni_etal_2007_m85,pastorello_etal_07_m85}, or the enigmatic SN2008ha \citep{foley_etal_09,valenti_etal_09}. 
As argued above, the association of a low-energy low-luminosity ejecta with a Wolf-Rayet progenitor, believed at present to always be a highly-bound object, seems unlikely (although not excluded in theory) in the context where the energy deposition occurs deep within the progenitor star. In that context, it would require the presence of a loosely-bound shell at the surface of the progenitor star and a pretty exquisite tuning to allow the energy deposition occurring at depth to be no more than $\sim$1.001 times the envelope binding energy. Some of these postulates may not apply (e.g. could the energy deposition occur just below the surface of the WR star?), but it is unclear at present which is faulty. Specific radiation-hydrodynamical simulations based on suitable progenitors are needed to address this problem. \subsection{Explosion/eruption associated with transient phenomena} A growing sample of transient objects possesses intermediate luminosities between SNe and novae. Their origin is debated. Owing to their low energy, the community has argued for either a weak-energy core-collapse SN explosion (e.g. \citealt{boticella_etal_09,pastorello_etal_09}), or for a super-Eddington wind in a SN impostor (e.g. \citealt{maund_etal_06,smith_etal_09}). The ambiguity stems from the common energy-ground in which events associated with these two scenarios fall, the $\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$10$^{50}$\,erg kinetic energy of the homunculus associated with $\eta$ Car rivaling that of low-luminosity Type II-P SN ejecta. Reversing the argument, how many of these low-luminosity Type II-P SNe are in fact impostors, boasting ejecta kinetic energies that are well below those of LBVs? The ``SN'' status in this energy regime seems to be often based on faith rather than on unambiguous observational or theoretical evidence.
In the shock-heated solutions we presented in \S\ref{var_edep}, the ejecta are systematically turned into a radiation-dominated plasma. At shock breakout, they are essentially unbound (their total energy $E_{\rm grav} + E_{\rm kin} + E_{\rm int} $ is positive) and possess a large amount of internal energy. In those models with low-expansion velocity and super-Eddington luminosities of 10$^6$--10$^7$\,L$_{\odot}$, the photosphere has properties that are comparable to those of erupting LBVs, yet their super-Eddington luminosity plays no role in driving the material, which was already unbound after shock passage. From an inspection of the long-term light curve, it seems difficult to distinguish between the two since the time-scale over which the radiative display evolves is very long and related to radiative diffusion: Apart from the breakout signature, it bears no imprint of the explosive origin, i.e. the sudden and deeply-rooted release of energy. The recent observation of fast-moving material ahead of $\eta$ Car's homunculus points very strongly toward a shock-heated envelope as the origin of the giant eruption \citep{smith_08_blast}. The invocation of a super-Eddington wind in the low-mass progenitor star associated with SN2008S may be supporting the same hypothesis since low-mass massive stars are well below their Eddington limit, while a shock could easily take them beyond that limit. An interesting and generic component of all the shock-heating scenarios that we explored in this paper is the associated shock-breakout signal, whose detection could therefore disentangle situations associated with a quasi-steady wind from a hydrostatically-bound super-Eddington atmosphere on the one hand from those associated with a slowly-moving shock-heated ejecta on the other hand. 
\subsection{Variability in Massive stars} While the binding energy of massive star envelopes increases with progenitor mass, the mean binding energy is low for all objects that have retained a hydrogen envelope. In particular, and in the models of WHW02, that outer H-rich shell has a very uniform and very low binding energy of 10$^{47}$\,erg. Owing to mass loss, the mass of that shell decreases with time for higher-mass massive stars, but it always remains loosely bound. In our simulations, we found that even for a very modest energy deposition, well below the binding-energy value, important structural changes of the envelope occurred. We suspect that hydrodynamic-fluid instabilities taking place deep in the stellar interior \citep{bazan_arnett_94,bazan_arnett_98,asida_arnett_00,arnett_etal_05,meakin_arnett_06} may provide the energy seed for at least a fraction of the variability observed in massive stars. This has not been fully realized so far in part because studies of stellar variability tend to isolate the surface layers and ignore the interior, while studies of stellar interiors reduce all the complicated physics occurring near the surface to imposed and simplified conditions at the grid outer boundary (see, however, \citealt{cantiello_etal_09}). We expect that, owing to their huge surface radii and therefore low binding energy, supergiant and hypergiant stars should be very sensitive to such perturbations from the stellar interior, and indeed observations of such stars reveal a rich and violent mass-loss history \citep{smith_etal_09}. Similarly, objects that through the course of their {\it quasi-steady} evolution happen to lie close to the Eddington limit and/or reach critical rotation are naturally more exposed to potential {\it time-dependent} deeply-rooted energy leaks. 
This does not exclude the role of other mechanisms for mass ejections, for example, pulsations, rotation, radiation-driven mass loss etc., although it is at present not clear how these processes can produce the extreme properties required in the context of high-luminosity interacting SNe or giant eruptions of LBVs. \section{Conclusions} \label{sect_conclusion} In this paper, we have presented one-dimensional one-group (gray) two-temperature radiation-hydrodynamics simulations of pre-SN massive-star envelopes subject to a sudden release of energy above their degenerate core. Although at the high energy end, the likely phenomenon is core collapse, we more generally have in mind the thermonuclear incineration of intermediate-mass elements present in shells above the core. The motivation for this study is: 1) The existence of interacting SNe (which must eject a shell at least once before core-collapse, but more likely eject multiple shells, by some means not elucidated so far); 2) the identification of fast-moving material exterior to $\eta$ Car's homunculus (which cannot stem from radiation-driving in a wind; \citealt{smith_08_blast}); 3) the broad range of energies inferred for Type II-P SNe, overlapping at the low-energy end with high-energy transients and massive-star eruptions. The bulk of our results stem from work on the 11\,M$_{\odot}$\, model of WHW02, although tests with higher-mass progenitors yield the same qualitative conclusions and regimes, merely shifted quantitatively. This work is an exploration of the outcome of a strong, sudden, and deeply-rooted energy deposition taking place at the base of a massive-star envelope. We are not proposing this scenario holds for all massive-star mass ejections, but we investigate what results when this circumstance obtains. There is no doubt it does, but how frequently and how robustly is left for future study. 
Although this result is not new, we find that the fundamental quantity controlling the outcome of a strong, sudden, and deeply-rooted energy deposition is the binding energy of the stellar envelope, which increases considerably with progenitor mass in the range 10--40\,M$_{\odot}$. What is new, however, is our study of the long-term evolution of the gas and radiative properties that result from configurations where the energy deposited is greater, on the order of, or less than the envelope binding energy. We identify three regimes, with a continuous progression from 1) SN explosions at high energy ($E_{\rm dep} > E_{\rm binding}$), with a complete envelope ejection associated with a 100-day long high-plateau luminosity; 2) SN impostors at the intermediate energy range ($E_{\rm dep} \sim E_{\rm binding}$), with a partial envelope ejection, and a more modest but longer-lived plateau luminosity; and 3) bloated/variable stars at the low-energy end ($E_{\rm dep} < E_{\rm binding}$), with little or no mass ejection but with a residual puffed-up envelope of weakly-modified properties. What conditions the results is not the magnitude of the energy deposition itself but how it compares with the binding energy of the envelope exterior to the site of deposition. Hence, to achieve the same result requires more energy in a more massive progenitor star. These properties are summarized in Fig.~\ref{fig_summary_ekin}. \begin{figure} \epsfig{file=f14.ps,width=8.5cm} \caption{Correlation between various energies (normalized to the corresponding adopted energy deposition) and the energy deposition (normalized to the corresponding envelope binding energy). In black, we show the normalized asymptotic energy of the ejecta/outflow, and in blue (red) the normalized envelope/ejecta internal (kinetic) energy at the time of shock breakout. Only the models presented in Table~1 are shown here.
The black curve illustrates the three regimes we have presented here, with SN explosions, SN impostors, and variable/perturbed stars as the energy deposition varies from being much larger, comparable, and smaller than the envelope binding energy. Notice how the internal energy always dominates over the kinetic energy at the time of shock breakout in the simulations performed here. \label{fig_summary_ekin} } \end{figure} In all simulations presented, a shock forms systematically with a strength that depends on the energy deposited. It crosses the envelope in $\sim$1 to $\sim$50 days, hence communicating its energy to the entire progenitor envelope {\it quasi-instantaneously}, i.e. as opposed to, e.g., a diffusion time-scale for energy transport of 10$^4$ years or more at the corresponding depth. This shock eventually emerges at the progenitor surface with a breakout signal that varies from a duration of an hour up to a few days (modulated here by the shock speed rather than by the atmospheric-scale height), with a flux peaking at longer wavelengths for weaker shock strengths. This breakout signal is the (and may be the only) {\it unambiguous} evidence that the subsequent ``ejection" was triggered by shock-heating, and thus has an explosive origin. At shock breakout, the luminosity reaches a peak, then fades, before stabilizing/rising again forming an extended plateau. This plateau phase corresponds to the recombination epoch of the ejected mass, the internal energy decreasing primarily through expansion and little through radiation. It is the large stored internal energy (in the form of radiation) that keeps the ejecta optically thick and luminous (radioactive decay is neglected in our work). We find a continuum of light curves, faster-expanding ejecta being more luminous both at breakout (stronger shock) and during the plateau phase (Fig.~\ref{fig_summary_lpeak}). 
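Schematically, the three regimes discussed above are controlled by the single ratio of the deposited energy to the envelope binding energy:
\[
\frac{E_{\rm dep}}{E_{\rm binding}}\;
\left\{
\begin{array}{ll}
>1: & \mbox{SN explosion (complete envelope ejection, $\sim$100-d bright plateau),}\\[1mm]
\sim 1: & \mbox{SN impostor (partial ejection, fainter but longer-lived plateau),}\\[1mm]
<1: & \mbox{variable/bloated star (little or no ejection, puffed-up envelope).}
\end{array}
\right.
\]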
The models presented in \S\ref{var_edep} corroborate the correlation between plateau luminosity and mid-plateau photospheric velocity identified by \citet{hamuy_pinto_02}, and refined by \citet{nugent_etal_06,poznanski_etal_09}. We also find that the plateau duration is anti-correlated with energy deposition (Fig.~\ref{fig_summary_dt}). At larger energy, faster expansion leads to faster cooling and recombination so that the ejecta photosphere recedes faster in mass/radius after reaching its peak earlier. For small energy variations in this regime, the interplay between kinetic and internal energy (which are comparable at breakout) yields a plateau duration that is $\sim$100\,d, which is on the order of Type II-P plateau lengths. For lower energy deposition, we switch slowly from a regime of dynamic diffusion to that of quasi-static diffusion. The more slowly-expanding ejecta, characterized by a slowly-decreasing optical depth with time, gives the bolometric luminosity a modest peak value and a slow evolution, with plateau durations of up to 1-2 years. The plateau phase is thus more extended and fainter for lower energy deposition, echoing the light-curve trend going from SNe to SN impostors (Fig.~\ref{fig_obs_lc}). Note that, in our simulations, these time-scales are always 6-7 orders of magnitude larger than the time-scale for the energy deposition, which was chosen to be ten seconds in most cases. In other words, the time-scale over which the light curve evolves (basically that of radiative diffusion in an expanding medium) has no connection to the time-scale over which the energy was deposited in the first place. \begin{figure} \epsfig{file=f15.ps,width=8.5cm} \caption{Correlation between the peak luminosity during the plateau phase (black dots) and the mass-weighted average ejecta velocity. We also show the correlation for the peak luminosity at shock breakout (red dots; scaled down by a factor of 1000 for convenience).
Note that we include models from Tables~1 and 2. We overplot the line $L \propto v^{1.6}$, which follows closely the distribution of points for $L_{\rm Peak, Plateau}$ versus $\langle v \rangle_M$. Our radiation-hydrodynamics simulations support the correlation identified by \citet{hamuy_pinto_02} and subsequently improved upon by \citet{nugent_etal_06,poznanski_etal_09}. Our slope is in close agreement with that proposed in this last reference. Impressively, the relation holds over the entire domain explored. Note that there is no consideration of radioactive decay from unstable isotopes or departures from spherical symmetry, and only data points associated with the 11\,M$_{\odot}$-progenitor star are used. Relaxing these choices would likely introduce some scatter. \label{fig_summary_lpeak} } \end{figure} \begin{figure} \epsfig{file=f16.ps,width=8.5cm} \caption{Correlation between the plateau duration (black) and the time-like quantity $R_{\rm phot,P}/V_{\rm phot,P}$ (ratio of the radius and the velocity at the photosphere at the time of peak-plateau brightness) versus the time-like quantity equal to the asymptotic ejecta kinetic energy divided by the peak-plateau luminosity brightness. \label{fig_summary_dt} } \end{figure} From our exploration and with our set of models, we find that explosions of varying strength can yield the broadest range of outcomes in {\it low-mass} massive stars because they are characterized by a very low envelope binding energy (Fig.~\ref{fig_eb_mr}). We indeed obtain light curves evolving from week to year time-scales (Fig.~\ref{fig_s11_lum_all}) and ejecta expansion rates ranging from a few tens to a few thousand km~s$^{-1}$ (Fig.~\ref{fig_s11_phot}). An explosion/eruption producing a transient requires merely 10$^{49}$\,erg in such objects, a value that is so low that gravitational collapse of the stellar core may not be required. 
And indeed, in this mass range, stellar-evolutionary calculations reveal the existence of nuclear flashes in the last nuclear-burning stages \citep{weaver_woosley_79}, which could represent such an energy source. We therefore propose that low-mass massive stars are prime candidates for transient phenomena in the Universe, as well as prime candidates for interacting SNe such as 1994W \citep{dessart_etal_09}. In our simulations, sudden energy deposition above the core leads to shock-heating of the {\it entire} envelope. Whenever the energy deposited is greater than its binding energy, the {\it entire} envelope is ejected, with an asymptotic kinetic energy that is commensurate with the energy deposited. If a subsequent energy deposition occurs (e.g. a nuclear flash followed by gravitational collapse, as needed in at least a fraction of interacting SNe), the second ejection would have low mass and little or no hydrogen. Depending on the time between the two ejections, one could get an interacting SN for a short delay (i.e. a few years) or two transients at the same location for a long delay (the first one being dim if the explosion energy is small and the second one being potentially very dim due to the low ejected mass). These scenarios are rather speculative, but they are warranted since at present most, if not all, transients have an unknown origin and are poorly characterized. In our simulations, and within the context of this work, we obtain small mass ejections only when depositing the energy close to the progenitor surface, an eventuality that seems difficult to justify physically in such massive stars. Such low-mass ejections would seem to be better suited, for example, to the surface layers of an extended white dwarf. Observationally, low-mass ejections are likely to be associated with fast transients. For transients that are both fast and faint, a low-mass ejection in a highly- or moderately-bound object seems required.
At the least, our simulations for (loosely-bound) RSG stars perturbed by a small deeply-rooted energy release produce large mass ejections (the overlying hydrogen envelope) and long-faint transients. Synthetic spectra computed for a sequence of models with varying energy deposition reveal a continuous evolution from Type II-P SN-like spectra at high energy ($L\sim$10$^8$\,L$_{\odot}$), to low-luminosity Type II-P SN spectra at intermediate energy ($L\sim$10$^7$\,L$_{\odot}$), to SN-impostor-like spectra at low energy ($L\sim$10$^6$\,L$_{\odot}$), with, in the same order, narrower line profiles and redder/cooler spectral-energy distributions (Fig.~\ref{fig_v1d_sed}). The results from this work should not be compromised by the approximations made in our radiation-hydrodynamics simulations. First, by restricting to one dimension we prevent convective transport and any structure formation through, e.g., Rayleigh-Taylor instabilities. This may alter the properties of models characterized by low expansion speeds (longer evolution time-scale); we thus plan to study this eventuality with 2D and 3D simulations in the future. Second, we deposit the energy at a large and fixed rate, independent of any feedback effects. In the case of nuclear flashes, such feedback effects could shut off the burning prematurely. We are aware of this artifact and will attempt in future work to develop a more physically-consistent approach by investigating the conditions that may lead to shock formation, rather than assuming a setup that systematically leads to it. However, provided a shock forms, we think our results apply. Third, progenitor models may differ on a mass-by-mass comparison with other groups but the general trend of increasing binding energy with main sequence mass should hold.
Fourth, one-group transport should be accurate enough since it has been shown to capture the essentials of such radiation hydrodynamics simulations \citep{utrobin_chugai_09}; the key physics presented here takes place at large optical depth, under LTE conditions. Our finding that very modest energy perturbations can dramatically affect the structure of a massive star motivates detailed multi-dimensional hydrodynamical investigations of massive-star interiors, in particular of the last burning stages preceding core-collapse (see, e.g., \citealt{bazan_arnett_94,bazan_arnett_98,asida_arnett_00, arnett_etal_05,meakin_arnett_06}). As shown in these hydrodynamical simulations, and more generally, we surmise that the quasi-steady state approach of standard stellar-evolutionary codes (which keep an eye on the longer-term evolution of stars) may be missing important ingredients for our understanding of massive-star evolution. This has relevance for understanding mass loss and in particular massive-star eruptions, stellar variability/stability, and interacting SNe. Such pre-SN mass ejections would also modify, and perhaps sizably, the envelope mass and structure, thereby affecting the conditions surrounding core collapse and explosion. Ongoing and forthcoming surveys/missions like Pan-STARRS, Sky Mapper, the Palomar Transient Factory, GALEX, or the Large Synoptic Survey Telescope will better reveal the diversity of explosions, eruptions, and more generally transient phenomena in the Universe. We surmise that surveys up to now have missed a large number of low-energy long-lived transients, such as low-luminosity Type II-P SNe (objects even less energetic than SN1999br) and SN impostors. It is somewhat surprising that we have not yet detected Type II SNe with plateau durations well in excess of 100 days. Moreover, for the shock-heating solutions presented here, a breakout signal systematically takes place.
At SN-like energies, the signal may be too short to be resolved \citep{gezari_etal_08}, but for lower-energy transients, the reduced shock speed and strength would lengthen the breakout duration up to about a day, and move the peak of the spectral-energy distribution from $\sim$100\AA\ to the 1000-3000\AA\ range. Hence, the breakout signal should be more easily detectable in such transients, allowing one to distinguish between an explosive event and, e.g., a super-Eddington wind. \section*{Acknowledgments} LD acknowledges financial support from the European Community through an International Re-integration Grant, under grant number PIRG04-GA-2008-239184.
\section{Acknowledgements} Thank you to Safa Motesharrei for his corrections and comments. J.A.Y. was partially supported by NSF Grant DMS-0616585 and NIH Grant R01-HG0294501. E.S. was partially supported by NSF Grants DMS-0639300 and DMS-0907818, and NIH Grant R01-MH79502. \begin{flushleft} \addcontentsline{toc}{subsection}{References} \footnotesize \bibliographystyle{aps}
\section{Conclusions} Almost all networks operate with partial network information at different nodes, requiring nodes to make distributed decisions. While a rich literature exists on the design of network protocols and their analysis, there is no prior work to understand the impact of distributed decisions on the Shannon-theoretic capacity region. In this paper, we laid the foundation to characterize partial network information and studied its impact in several network connectivities. Seeking universal optimality, where local decisions with certain side information are always globally optimal, we discovered that there appears to be a critical minimum amount of information required for the network to allow globally optimal decisions. Our current approach is based on compound capacity, and our next step is to understand the impact of partial information on fading interference channels.
\section{Basic features of the set-up} The field content is as follows. The ${\mathcal N}=2\;$ vector multiplet consists of the U(1) gauge field $A_{\mu}$ and the SU$(N)$ gauge field $A^a_{\mu}$, where $a=1,..., N^2-1$, and their Weyl fermion superpartners plus the complex scalar fields $a$ and $a^a$ and their Weyl superpartners. The $N_f$ quark multiplets of the U$(N)$ theory consist of the complex scalar fields $q^{kA}$ and $\tilde{q}_{Ak}$ (squarks) and their fermion superpartners, all in the fundamental representation of the SU$(N)$ gauge group. Here $k=1,..., N$ is the color index while $A$ is the flavor index, $A=1,..., N_f$. We will treat $q^{kA}$ and $\tilde{q}_{Ak}$ as rectangular matrices with $N$ rows and $N_f$ columns. Let us discuss the vacuum structure of this theory. The vacua of the theory (\ref{model}) are determined by the zeros of the potential $V$. With the generic choice of the quark masses we have $C_{N_f}^{N}= N_f!/(N!\,\tilde N!)$ isolated $r$-vacua in which $r=N$ quarks (out of $N_f$) develop vacuum expectation values (VEVs). Consider, say, the (1,2,...,$N$) vacuum in which the first $N$ flavors develop VEVs. We can exploit gauge rotations to make all squark VEVs real. Then in the problem at hand they take the form \begin{equation} \langle q^{kA}\rangle =\sqrt{ \xi}\, \left( \begin{array}{cccccc} 1 & \ldots & 0 & 0 & \ldots & 0\\ \ldots & \ldots & \ldots & \ldots & \ldots & \ldots\\ 0 & \ldots & 1 & 0 & \ldots & 0\\ \end{array} \right), \qquad \langle \bar{\tilde{q}}^{kA}\rangle =0, \label{qvev} \end{equation} where we write down the quark fields as matrices in color and flavor indices ($k=1,..., N\,,\,\, A=1,...,N_f$). The FI term $\xi$ singles out the $r=N$ vacua from the set of all $r$-vacua.
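For illustration, with the values $N=3$ and $N_f=5$ (so that $\tilde N=2$) adopted later in \S\ref{bulkdual}, this counting gives
\[
C_{N_f}^{N}\;=\;\frac{N_f!}{N!\,\tilde N!}\;=\;\frac{5!}{3!\,2!}\;=\;10
\]
isolated $r=N$ vacua.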
In the vacuum under consideration the adjoint fields also develop VEVs, namely, \begin{equation} \left\langle \left(\frac12\, a + T^a\, a^a\right)\right\rangle = - \frac1{\sqrt{2}} \left( \begin{array}{ccc} m_1 & \ldots & 0 \\ \ldots & \ldots & \ldots\\ 0 & \ldots & m_N\\ \end{array} \right), \label{avev} \end{equation} For generic values of the quark masses, the SU$(N)$ subgroup of the gauge group is broken down to U(1)$^{N-1}$. However, in the special limit \begin{equation} m_1=m_2=...=m_{N_f}, \label{equalmasses} \end{equation} the SU$(N)\times$U(1) gauge group remains unbroken by the adjoint field. In this limit the theory acquires a global flavor SU$(N_f)$ symmetry. While the adjoint VEVs do not break the SU$(N)\times$U(1) gauge group in the limit (\ref{equalmasses}), the quark condensate (\ref{qvev}) results in the spontaneous breaking of both gauge and flavor symmetries. A diagonal global SU$(N)$ combining the gauge SU$(N)$ and an SU$(N)$ subgroup of the flavor SU$(N_f)$ group survives, however. We refer to this diagonal global symmetry as to $ {\rm SU}(N)_{C+F}$. The color-flavor locking takes place in a slightly different way than in the case $N_f = N$ (or $\tilde N =0$). The presence of the global SU$(N)_{C+F}$ group is instrumental for formation of the non-Abelian strings. More exactly, the pattern of breaking of the color and flavor symmetry is as follows: \begin{equation} {\rm U}(N)_{\rm gauge}\times {\rm SU}(N_f)_{\rm flavor}\to {\rm SU}(N)_{C+F}\times {\rm SU}(\tilde{N})_F\times {\rm U}(1)\,, \label{c+f} \end{equation} where $\tilde{N}=N_f-N$. Here SU$(\tilde N )_F$ factor stands for the flavor rotation of the $\tilde N$ quarks. For unequal quark masses the global symmetry (\ref{c+f}) is broken down to U(1)$^{N_f-1}$. All gauge bosons in the bulk are massive, \begin{equation} m_{\gamma} =g_1\,\sqrt{\frac{N}{2}\,\xi\,},\qquad m_{W}=g_2\sqrt{\xi}. 
\label{phmass} \end{equation} The adjoint fields $a$ and $a^a$ as well as $N^2$ components of the quark matrix $q$ acquire the same masses as the corresponding gauge bosons. \section{A journey through various regimes} We will single out a group of $N$ quarks and another one of $\tilde N$ quarks. We generically refer to the masses in the first and second groups as $m_P$ and $m_K$, respectively; $P=1, ..., N$ enumerates the quark flavors which develop VEVs, while $K=N+1, ..., N_f$ enumerates ``extra" quark flavors. The mass differences inside the first group (or inside the second group) are called $\Delta M_{\rm inside}$. The mass differences $m_P-m_K$ are referred to as $\Delta M_{\rm outside}$. The transitions we will study are those in $\xi$ (the vertical axis) and $\Delta M_{\rm inside}$ (the horizontal axis in Fig.~\ref{four}). \begin{figure} \includegraphics[width=0.65\textwidth,height=1.1in]{FigTmp2.eps} \caption{The transitions in the $\{\xi ,\,\, \Delta M_{\rm inside}\}$ plane.} \label{four} \end{figure} At $\xi =0$ we arrive at the Seiberg--Witten solution. Note that $m_P-m_K\equiv \Delta M_{\rm outside}$ is kept fixed in the process. Eventually, we take $\Delta M_{\rm outside}\ll\Lambda$. This is necessary to get the dual theory non-Abelian. If $m_P-m_K\equiv \Delta M_{\rm outside}\neq 0$, the extra quark flavors acquire masses determined by the mass differences $m_P-m_K$. Note that all states come in representations of the unbroken global group (\ref{c+f}), namely, the singlet and adjoint representations of SU$(N)_{C+F}$ \begin{equation} (1,\, 1), \quad (N^2-1,\, 1), \label{onep} \end{equation} and bifundamentals \begin{equation} \quad (\bar{N},\, \tilde N), \quad (N,\, \bar{\tilde N})\,, \label{twop} \end{equation} where we mark the representations with respect to the two non-Abelian factors in (\ref{c+f}). In the beginning we have the gauge group U$(N)$, with $N_f$ matter hypermultiplets.
The light part of the spectrum includes the vector supermultiplet ($16\times (N^2-1)$ degrees of freedom with mass $\sim g\sqrt \xi$) plus extra bifundamentals (\ref{twop}). In addition, there are $\tilde N^2-1$ composites of the type presented in Fig.~\ref{figmeson}. Their mass is heavy (i.e. $\sim \sqrt\xi$). In the end we have the gauge group U$(\tilde N)$, with $\tilde N$ matter hypermultiplets. The light part of the spectrum includes the vector supermultiplet ($16\times (\tilde N^2-1)$ degrees of freedom with mass $\sim g\sqrt \xi$). In addition, there are extra bifundamentals (\ref{twop}) and $N^2-1$ heavy composites $D\bar D$. \begin{figure} \includegraphics[width=0.25\textwidth,height=0.5in]{mmeson.eps} \caption{Meson formed by antimonopole and dyon connected by two strings. Open and closed circles denote dyon and antimonopole, respectively. } \label{figmeson} \end{figure} \section {Non-Abelian strings} \label{strings} The $Z_N$-string solutions in the theory with $N_f=N$ break the SU$(N)_{C+F}$ global group. Therefore, strings have orientational zero modes, associated with rotations of their color flux inside the non-Abelian SU($N$). The global group is broken down to ${\rm SU}(N-1)\times {\rm U}(1)$. As a result, the moduli space of the non-Abelian string is described by the coset space \begin{equation} \frac{{\rm SU}(N)}{{\rm SU}(N-1)\times {\rm U}(1)}\sim CP(N-1)\,. \label{modulispace} \end{equation} The low-energy effective theory on the world sheet of the non-Abelian string is ${\mathcal N}=2\;$ SUSY two-dimensional $CP(N-1)$ model \cite{HT1,SYmon}. Now we add $\tilde N$ ``extra" quark flavors (first, with degenerate masses). Then the strings become semilocal, whose transverse size is a modulus. In this case these strings do not generate linear confinement. However, at the end, $\Delta M_{\rm outside}\neq 0$ will lift the size moduli, so that linear confinement will ensue. 
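As a simple consistency check on the identification (\ref{modulispace}), one can count the orientational moduli explicitly. The complex dimension of the coset is half its real dimension, $\dim {\rm SU}(N)-\dim\left[{\rm SU}(N-1)\times {\rm U}(1)\right]$:
\[
\frac{(N^2-1)-\left[(N-1)^2-1\right]-1}{2}\;=\;N-1\,,
\]
i.e. the $N-1$ complex orientational moduli of $CP(N-1)$; the semilocal string then carries $\tilde N$ additional complex size moduli on top of these.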
Non-Abelian semilocal strings have two types of moduli: orientational and size moduli. The orientational zero modes are parametrized by a complex vector $n^P$, $P=1,...,N$, while its $\tilde N=(N_f-N)$ size moduli are parametrized by another complex vector $\rho^K$, $K=N+1,...,N_f$. The effective two-dimensional theory which describes internal dynamics of the non-Abelian semilocal string is an ${\mathcal N}=(2,2)\;$ ``toric" sigma model including both types of fields. Its bosonic action in the gauge formulation (which assumes taking the limit $e^2\to\infty$) has the form \begin{eqnarray} &&S = \int d^2 x \left\{ \left|\nabla_{\alpha} n^{P}\right|^2 +\left|\tilde{\nabla}_{\alpha} \rho^K\right|^2 +\frac1{4e^2}F^2_{\alpha\beta} + \frac1{e^2}\, \left|{\partial}_{\alpha}\sigma\right|^2 \right. \nonumber\\[3mm] &+&\left. 2\left|\sigma+\frac{m_P}{\sqrt{2}}\right|^2 \left|n^{P}\right|^2 + 2\left|\sigma+\frac{m_{K}}{\sqrt{2}}\right|^2\left|\rho^K\right|^2 + \frac{e^2}{2} \left(|n^{P}|^2-|\rho^K|^2 -2\beta\right)^2 \right\}, \nonumber\\[4mm] && P=1,...,N\,,\qquad K=N+1,...,N_f\,,\qquad \tilde{\nabla}_k={\partial}_k+iA_k\,. \label{wcp} \end{eqnarray} The fields $n^{P}$ and $\rho^K$ have charges +1 and $-1$ with respect to the auxiliary U(1) gauge field, hence, the difference in the covariant derivatives, $ \nabla_i=\partial_i-iA_i$ and $\tilde{\nabla}_j=\partial_j+iA_j$ respectively. The $D$-term condition \begin{equation} |n^P|^2 - |\rho^K|^2=2\beta\,, \label{unitvec} \end{equation} is implemented in the limit $e^2\to\infty$. Moreover, in this limit the gauge field $A_{\alpha}$ and its ${\mathcal N}=2\;$ bosonic superpartner $\sigma$ become auxiliary and can be eliminated. The two-dimensional coupling constant $\beta$ is related to the four-dimensional one as $ \beta= {2\pi}/{g_2^2}\,. $ \section{Monodromies, or what becomes of the quark fields in the journey} \label{bulkdual} To simplify presentation, we will consider a particular example, $N=3$ and $\tilde N=2$. 
In analyzing the transition from domain I to III (see Fig.~\ref{figphasediag}) we proceed in two steps. First, we take the quark mass differences to be large, passing to domain II. In this domain the theory stays at weak coupling, and we can safely decrease the value of the FI parameter $\xi$. Next, we use the exact Seiberg--Witten solution of the theory on the Coulomb branch \cite{SW1,SW2} (i.e. at $\xi=0$) to perform the passage from domain II to III. In this journey we will have to carefully consider two Argyres--Douglas points, to deal with two monodromies, as we vary $m_3$, see Fig.~\ref{five}. \begin{figure} \includegraphics[width=0.55\textwidth,height=0.7in]{FigTmp3.eps} \caption{Two Argyres--Douglas points on the way from domain I to III. } \label{five} \end{figure} In each passage we split the masses, first $m_1$ and $m_4$, and then $m_2$ and $m_5$. At the end we tend $m_1\to m_2\to m_3$ and $m_5\to m_4$. We investigate the monodromies using the approach of Ref.~\cite{SYcross} which is similar to that of Ref.~\cite{CKM}. We start with the $r=3$ vacuum at large $\xi$ in domain I, where three quarks with charges \begin{eqnarray} &&\left(n_e,n_m;\,n_e^3,n_m^3;\,n_e^8,n_m^8\right)= \left(\frac12,0;\,\frac12,0;\,\frac1{2\sqrt{3}},0\right), \nonumber\\[2mm] &&\left(n_e,n_m;\,n_e^3,n_m^3;\,n_e^8,n_m^8\right)= \left(\frac12,0;\,-\frac12,0;\,\frac1{2\sqrt{3}},0\right), \nonumber \\[2mm] &&\left(n_e,n_m;\,n_e^3,n_m^3;\,n_e^8,n_m^8\right)= \left(\frac12,0;\,0,0;\,-\frac1{\sqrt{3}},0\right), \label{quarkcharges} \end{eqnarray} develop VEV's. Here $n_e$ and $n_m$ denote electric and magnetic charges of a given state with respect to the U(1) gauge group, while $n_e^3$, $n_m^3$ and $n_e^8$, $n_m^8$ stand for the electric and magnetic charges with respect to the Cartan generators of the SU(3) gauge group (broken down to U(1)$\times$U(1) by $\Delta m_{AB}$).
Then in domain III these quarks transform into light dyons with charges \begin{eqnarray} && D^{11}:\,\,\, \left(\frac12,0;\,\frac12,\frac12;\,\frac1{2\sqrt{3}},\frac{\sqrt{3}}{2}\right), \nonumber\\[2mm] && D^{22}:\,\,\, \left(\frac12,0;\,-\frac12,-\frac12;\,\frac1{2\sqrt{3}},\frac{\sqrt{3}}{2}\right), \nonumber\\[2mm] && D_{33}:\,\,\, \, \left(\frac12,0;\,0,0;\,-\frac1{\sqrt{3}},-\sqrt{3}\right). \label{dyons} \end{eqnarray} For consistency of our analysis it is instructive to consider another route from the domain I to the domain III, namely the one along the line $\Delta M_{\rm inside}=0$. On this line we keep the global color-flavor locked group unbroken. Then we obtain a surprising result: the quarks and gauge bosons which form the adjoint $(N^2-1)$ representation of SU($N$) at large $\xi$ and the dyons and gauge bosons which form the adjoint $(\tilde N^2-1)$ representation of SU($\tilde N$) at small $\xi$ are, in fact, {\em distinct} states. How can this occur? Since we have a crossover between the domains I and III rather than a phase transition, this means that in the full microscopic theory the $(N^2-1)$ adjoints of SU($N$) become heavy and decouple as we pass from the domain I to III along the line $\Delta m_{AB}=0$. Moreover, some composite $(\tilde N^2-1)$ adjoints of SU($\tilde N$), which are heavy and invisible in the low-energy description in the domain I become light in the domain III and form the $D^{lK}$ dyons ($K=N+1,...,N_f$) and gauge bosons $B^p_{\mu}$. The phenomenon of level crossing takes place. Although this crossover is smooth in the full theory, from the standpoint of the low-energy description the passage from the domain I to III means a dramatic change: the low-energy theories in these domains are completely different; in particular, the degrees of freedom in these theories are different. This logic leads us to the following conclusion. 
In addition to light dyons and gauge bosons included in the low-energy theory in the domain III at small $\xi$, we have heavy fields (with masses of the order of $\Lambda$) which form the adjoint representation $(N^2-1,1)$ of the global symmetry. These are screened (former) quarks and gauge bosons from the domain I continued into III. Let us denote them as $M_P^{P'}$ ($P,P'=1,...,N$). What is their physical nature in the region III? Before answering this question let us note that by the same token, it is seen that in domain I, in addition to the light quarks and gauge bosons, we have heavy fields $M_K^{K'}$ ($K,K'=N+1,...,N_f$), which form the adjoint $(\tilde N^2-1)$ representation of SU($\tilde N$). This is schematically depicted in Fig.~\ref{figevol}. \begin{figure} \includegraphics[width=0.27\textwidth,height=1in]{evol.eps} \caption{Evolution of the SU$(N)$ and SU$(\tilde N)$ $W$ bosons vs. $\xi$. } \label{figevol} \end{figure} Now we come back to the physical nature of the adjoint fields $M_P^{P'}$ in the region III. It is well known that the $W$ bosons usually do not exist as localized states in the strong coupling regime on the Coulomb branch (speaking in jargon, they ``decay"). They split into antimonopoles and dyons on the curves of marginal stability (CMS) on which the Argyres--Douglas points lie \cite{SW1,BF}. Consider, for example, the $W$-boson associated with the $T^3$ generator ($T^3$ $W$ boson for short) with the charge $(0,0;\,1,0;\,0,0)$ in the domain II. As we go through the CMS, this $W$ boson decays into the $T^3$ antimonopole and dyon with the charges $(0,0;\, 0,-1;\,0,0)$ and $(0,0;\,1,1;\,0,0)$, respectively. This means that the $W$ boson is absent in domain III, in full accord with the analysis of the SU(2) theory in \cite{BF}. This picture is valid on the Coulomb branch at $\xi=0$. As we switch on small $\xi\neq 0$ the monopoles and dyons become confined by strings.
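As a quick bookkeeping check, the $W$-boson splitting quoted above conserves each of the charge entries:
\[
\left(0,0;\,1,0;\,0,0\right)_{W}\;=\;\left(0,0;\,0,-1;\,0,0\right)_{\rm antimonopole}\;+\;\left(0,0;\,1,1;\,0,0\right)_{\rm dyon}\,,
\]
so no net electric or magnetic charge is created in the decay on the CMS.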
In fact, the elementary monopoles/dyons are represented by junctions of two different elementary non-Abelian strings \cite{T,SYmon}; see also a detailed discussion of the monopole/dyon confinement in \cite{SYdual}. This means that, as we move from the domain II into III at small nonvanishing $\xi$, the $W$ boson ``decays'' into an antimonopole and a dyon; however, these states cannot abandon each other and move far apart because they are confined. Therefore, the $W$ boson evolves into a stringy meson formed by an antimonopole and a dyon connected by two strings, as shown in Fig.~\ref{figmeson}; see \cite{SYrev} for a discussion of these stringy mesons. These stringy mesons have nonvanishing U(1) global charges with respect to the Cartan generators of the SU(3) subgroup of the global group (\ref{c+f}) (above we discussed only one $W$ boson of this type, related to the $T^3$ generator; in fact, we have six different charged gauge boson/quark states of this type). In the equal mass limit these globally charged stringy mesons combine with neutral (with respect to the group U(1)$^{N_f-1}$) stringy mesons formed by pairs of monopoles and antimonopoles (or dyons and antidyons) connected by two strings, to form the octet representation of the SU(3) subgroup of the global group (\ref{c+f}) (in general, the adjoint representation of SU$(N)$). They are heavy in the domain III, with masses of the order of $\Lambda$. We identify these stringy mesons with the $(N^2-1)$ adjoints $M_P^{P'}$ ($P,P'=1,...,N$) of the SU$(N)$ subgroup which we have seen {\em en route} from the domain I to III along the line $\Delta m_{AB}=0$. The same applies to the $q^{kK}$ quarks ($K=N+1,...,N_f$) of the domains I and II. As we go through the crossover into the domain III at small $\xi$, the $q^{kK}$ quarks evolve into stringy mesons formed by pairs of antimonopoles and dyons connected by two strings; see Fig.~\ref{figmeson}. However, these states are unstable.
To see that this is indeed the case, observe that in the equal mass limit these stringy mesons fill the bifundamental representations $(N,\bar{\tilde N})$ and $(\bar{N},\tilde N)$ of the global group; hence, they can decay into light dyons/dual gauge bosons with the same quantum numbers. It is quite plausible to suggest that these fields $M_P^{P'}$ and $M_K^{K'}$ are Seiberg's mesonic fields \cite{Sdual,IS}, which occur in the dual theory upon breaking of ${\mathcal N}=2\;$ supersymmetry by the mass-term superpotential $\mu{\mathcal A}^2$ for the adjoint fields when we take the limit $\mu\to\infty$. In this limit our theory becomes ${\mathcal N}=1\;$ SQCD. Previously, these $M_{AB}$ fields were not identified in the ${\mathcal N}=2\;$ theory. In conclusion, we demonstrated that non-Abelian confinement in our theory is a combined effect of Higgs screening, ``decay'' processes on the CMS, and confining string formation. The strings that are dynamically formed always confine monopoles or dyons (whose charges can be represented as a sum of those of a monopole plus a $W$ boson), both in the original and dual theories, rather than quarks. \vspace{-3mm} \small
\section{Introduction} Collimated, bipolar outflows accompany the birth of young stars from the earliest stages of star formation to the end of their accretion phase \citep[e.g.][]{reipurth2001}. While the birth of isolated low-mass stars is becoming well understood, the formation of massive stars ($>10 \msun$) and clusters remains a topic of intense study. Observations show that moderate to high-mass stars tend to form in dense clusters \citep{lada2003}. In a clustered environment, the dynamics of the gas and stars can profoundly impact both accretion and mass-loss processes. Feedback from these massive clusters may play a significant role in momentum injection and turbulence driving in the interstellar medium. Outflows from massive stars are less studied than those from low mass stars largely because massive stars accrete most of their mass while deeply embedded. Therefore, unlike low mass young stars that are accessible in the optical, massive stellar outflows can only be seen at infrared and longer wavelengths. Direct evidence for jets from massive young stellar objects (YSOs) from \hh\ or optical emission is generally lacking \citep[e.g.][]{alvarez2005,kumar2002,wang2003}, although there is evidence that massive stars are the sources of collimated molecular outflows from millimeter observations \citep[e.g.][]{beuther2002b}. Outflows from massive stars may allow accretion to continue after their radiation pressure would otherwise halt accretion in a spherically symmetric system \citep{krumholz2009}. They therefore represent a crucial component in understanding how stars above $\sim$10 \msun\ can form. IRAS 05358\ is a double cluster of embedded infrared sources located at a distance of 1.8 kpc in the Auriga molecular cloud complex \citep{heyer1996} associated with the HII regions Sh-2 231 through 235 at Galactic coordinates around $l,~b$ = 173.48,+2.45 in the Perseus arm. 
Sh~2-233IR~NE\ is the collection of highly obscured and mm-bright sources slightly northeast of Sh~2-233IR~SW, which is the location of the IRAS 05358+3543 point source and the optical emission nebula (see Figure \ref{fig:overview_ha}). The IRAS source is probably a blend of the three brightest infrared objects in the MSX A-band and MIPS 24 \um\ images, which are located at Sh~2-233IR~NE, IR 41, and IR 6. For the purposes of this paper, the whole complex including both sources is referred to as IRAS 05358; individual objects are otherwise referred to specifically. Early observations revealed the presence of OH \citep{Wouterloot1993}, \HtwoO\ \citep{Scalise1989, Henning1992}, and methanol \citep{Menten1991} masers about an arcminute northeast of the IRAS source, indicating that massive stars are likely present at that location. Near infrared observations revealed the presence of two embedded clusters \citep{porras2000,jiang2001} labeled Sh~2-233IR~SW\ for the southwestern cluster associated with the IRAS source, and Sh~2-233IR~NE\ for the northeastern cluster located near the OH, \HtwoO, and CH$_3$OH masers. Stars identified in \citet{porras2000} are referred to by the designation ``IR (number)'' corresponding to the catalog number in that paper. \citet{porras2000} also included scanning Fabry-Perot velocity measurements of the inner $\sim1$\arcmin. CO observations revealed broad line wings indicative of a molecular outflow \citep{casoli1986,shepherd1996}. \citet{kumar2002} and \citet{khanzadyan2004} presented narrow band images of 2.12 \um\ \htwo\ emission that revealed the presence of multiple outflows. Interferometric imaging of CO and SiO confirmed the presence of at least three flows emerging from the northeast cluster centered on the masers \citep{beuther2002}, having a total mass of about 20 \msun. \citet{beuther2002} also presented MAMBO 1.2 mm maps and a mass estimate of 610 \msun\ for the whole region.
\citet{williams2004} presented SCUBA maps and mass estimates of the clusters of 195/126 \msun\ for Sh~2-233IR~NE\ and 24/12 \msun\ for Sh~2-233IR~SW\ (850 \um/450 \um). \citet{Zinchenko1997} measured the dense gas properties using the NH\ensuremath{_3}\ (1,1) and (2,2) lines. They measured a mean density $n \approx 10^{3.60}$ \ensuremath{\textrm{cm}^{-3}}, temperature 26.5 K, and a mass of 600 \msun. The total luminosity of the two clusters is about 6300 \lsun, indicating that the region is giving birth to massive stars \citep{porras2000}. Millimeter wavelength interferometry with arcsecond angular resolution has revealed a compact cluster of deeply embedded sources centered on the \HtwoO\ and methanol maser position \citep{beuther2002,beuther2007,leurini2007}. \citet{beuther2002} identified three mm continuum cores, labeled mm1--mm3 (shown in Figure \ref{fig:outflowsh2}). \citet{beuther2007} resolved these cores into smaller objects. Source mm1a is associated with a cm continuum point source and will be discussed in detail below. IRAS 05358\ has previously been observed at low spatial resolution in the CO J=2-1 and J=3-2 transitions with the KOSMA 3 m telescope \citep{Mao2004}. While the general presence of outflows was recognized and a total mass estimated, the specific outflows were not resolved. \citet{beuther2002} observed the CO J=6-5, J=2-1, and J=1-0 transitions at moderate resolution in the inner few arcminutes. \citet{Thomas2008} observed C$^{17}$O in the J=2-1 and J=3-2 transitions with a single pointing using the JCMT. \section{Observations} A collection of data acquired by the authors and from publicly available archives is presented. An overview of the data is presented in figure \ref{fig:overview_ha}. The goal was to develop a complete picture of the outflows in IRAS 05358\ and their probable sources. CO data were acquired to estimate the total outflowing mass and to identify outflowing molecular material unassociated with \hh\ shocks.
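Outflow masses of the kind quoted above are commonly derived from the integrated CO line-wing intensity. The following is a minimal sketch of that standard conversion, not the exact procedure used for these data; the X$_{\rm CO}$ factor of $2\times10^{20}$ cm$^{-2}$ (K \kms)$^{-1}$, the CO 3-2/1-0 line ratio of unity, and the mean mass of 2.8 $m_{\rm H}$ per H$_2$ molecule (accounting for helium) are all assumed values.

```python
# Hedged sketch: outflow gas mass from integrated CO intensity.
# All conversion factors below are assumptions, not values from this paper.

M_H = 1.6726e-24   # g, hydrogen atom mass
MSUN = 1.989e33    # g, solar mass
PC = 3.086e18      # cm per parsec
X_CO = 2.0e20      # cm^-2 (K km/s)^-1, assumed CO 1-0 conversion factor

def outflow_mass_msun(w_co, area_pc2, line_ratio=1.0):
    """Mass [Msun] from integrated intensity w_co [K km/s] over area [pc^2].

    line_ratio is the assumed CO 3-2 / 1-0 integrated intensity ratio used
    to scale the observed line-wing emission to the 1-0 X-factor.
    """
    n_h2 = X_CO * w_co / line_ratio        # H2 column density [cm^-2]
    area_cm2 = area_pc2 * PC**2
    mass_g = 2.8 * M_H * n_h2 * area_cm2   # mu = 2.8 m_H per H2 includes He
    return mass_g / MSUN

# e.g. 10 K km/s of line-wing emission over 0.25 pc^2:
print(f"{outflow_mass_msun(10.0, 0.25):.1f} Msun")  # prints ~11.2 Msun
```

Illustrative numbers of this order reproduce the few-to-tens of \msun\ flow masses reported by \citet{beuther2002}.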
Archival Spitzer IRAC and MIPS 24 \um\ data were used to identify probable YSOs as candidate outflow sources. Near-infrared spectra were acquired primarily to determine \hh\ kinematics and develop a 3D picture of the region. Optical spectra were acquired to attempt to identify stellar types in the unobscured Sh~2-233IR~SW\ region. Finally, archival VLA data were used to acquire better constraints on the position and physical properties of the known ultracompact HII (UCHII) region, and to detect or set limits on other UCHIIs. \subsection{Sub-millimeter Observations} The 345 GHz J = 3-2 rotational transition of CO was observed with the James Clerk Maxwell Telescope (JCMT) on 4 January, 2008 with the 16 element (14 functional) HARP-B heterodyne focal plane array. Two 12\arcmin\ $\times$ 10\arcmin\ raster scans in R.A. and Dec. were taken with orthogonal orientations to assure complete coverage in the region of interest; this resulted in a usable field 11.7\arcmin\ $\times$ 11.3\arcmin\ with higher noise along the edges. The beam size at 345 GHz is about 15\arcsec. Observations were conducted during grade 3 conditions with the 225 GHz zenith optical depth of the atmosphere $\tau\sim0.1$. A channel width of 488 kHz corresponding to 0.423 \kms\ was used. The maps required a total of 1 hour to acquire and resulted in an effective integration time of 4.6 seconds per pixel (there are 12,000 $6\times6\arcsec$\ pixels in the final grid), resulting in a noise per pixel of 0.36 K \kms. The optical depth and telescope efficiency corrections were applied by the JCMT pipeline to convert the recorded antenna temperatures to the corrected antenna temperature scale, $T_A^*$\footnote{See \\ \url{http://docs.jach.hawaii.edu/JCMT/OVERVIEW/tel\_overview/} for a discussion of JCMT parameters}. An additional main-beam correction has been applied, $$T_{mb}=\frac{T_A^*}{\eta_{mb}}$$ where $\eta_{mb}$ was measured by observing Mars to be $\approx0.60$ at 345 GHz.
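As an illustrative cross-check (not part of the original reduction), the two conversions quoted above can be sketched in a few lines; the CO J=3-2 rest frequency of 345.796 GHz is an assumed value, while the channel width and main-beam efficiency are taken from the text.

```python
# Sanity-check sketch of the quoted channel width and main-beam correction.
C_KMS = 299792.458   # speed of light [km/s]
NU_CO32 = 345.796e9  # Hz, CO J=3-2 rest frequency (assumed, not from the text)
ETA_MB = 0.60        # main-beam efficiency measured on Mars (from the text)

def channel_width_kms(dnu_hz, nu_hz=NU_CO32):
    """Velocity width of a frequency channel: dv = c * dnu / nu."""
    return C_KMS * dnu_hz / nu_hz

def t_mb(t_a_star, eta_mb=ETA_MB):
    """Main-beam temperature from the corrected antenna temperature T_A*."""
    return t_a_star / eta_mb

print(round(channel_width_kms(488e3), 3))  # 0.423 km/s, as quoted
```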
Emission in the sidelobes is expected to be small at the outflow velocities. On September 25 and November 15, 2008, the CO, $^{13}$CO, and C$^{18}$O J=2-1 transitions were observed in the central 3\arcmin\ of IRAS 05358. The beam size at 220 GHz is about 23\arcsec. The sideband configuration used also included the \linebreak \nolinebreak{ SO~\ensuremath{5_6-4_5} } and $^{13}$CS 5-4 transitions. Conditions during these observations were grade 5 ($\tau \sim 0.24-0.28$) and therefore too poor to use the HARP instrument, but acceptable for the A3 detector. Data reduction used the Starlink package following the standard routines recommended by the JCMT support scientists \footnote{ \url{http://www.jach.hawaii.edu/JCMT/spectral\_line/data\_reduction/acsisdr/}}. The CO 3-2 data cube was extracted over a velocity range from --50 to 10 \kms\ LSR and spectral baselines were fit over the velocity ranges --50 to --40 and 0 to 10 \kms\ and subtracted. The data were re-gridded into 6\arcsec\ pixels and 2 pixel Gaussian smoothing was used to fill in the gaps left by the two bad detectors in the 4$\times$4 array. The data cube was cropped to remove undersampled edges which have high noise and bad baselines. The beam efficiency was 0.68 at 230 GHz. The A3 data cubes were extracted over the velocity range --60 to 20 \kms\ and baselines were calculated over --60 to --40 and 0 to 20 \kms. The data were gridded into 10\arcsec\ pixels with 2 pixel Gaussian smoothing to reduce sub-resolution noise variations. \subsection{Spitzer} Spitzer IRAC bands 1 to 4 and MIPS band 1 data were retrieved from the Spitzer Science Center archive. \citet{qiu2008} acquired the data as part of a study of many high-mass star forming regions; they identified YSO candidates based on IRAC colors. The version 18 post-BCD data products were used to produce images; the photometric catalog from \citet{qiu2008}, which was made from a more carefully reduced data set, was used for SED analysis.
\subsection{Near-IR images} Near-infrared data were acquired using the Wide-field Infrared Camera (WIRCam) on the Canada-France-Hawaii Telescope (CFHT) on Mauna Kea. The field of view is 20\arcmin$\times$20\arcmin\ and the pixel scale 0.3\arcsec. Data were acquired on November 18, 19 and December 20, 2005. The seeing was 0.5-0.7\arcsec\ during the observations. A 0.032 \um\ wide filter centered at 2.122 \um\ was used to take images of the \hh\ S(1) 1-0 rovibrational transition. Each \hh\ exposure was 58 seconds, and dithered images were taken for a total exposure time of 1755 seconds. The data were reduced with the WIRCam pipeline. \subsection{Near-IR spectra} Near-infrared spectra were acquired using the TripleSpec instrument at Apache Point Observatory. TripleSpec simultaneously acquires J, H, and K band spectra over a 42\arcsec\ long slit. A slit width of 1.1\arcsec\ with an approximate spectral resolution $\lambda/\Delta\lambda=2700$ was used. Observations were taken on the nights of December 2, 2008 and January 7, 2009. Data on December 2 were taken in an ABBA nod pattern, but because of the need to observe extended structure across the slit a stare strategy was selected on January 7. The data were reduced using the {\sc twodspec} package in IRAF. HD31135, an A0 star, was used as a flux calibrator. Wavelength calibration was performed using night sky lines. Lines filling the slit were subtracted to remove atmospheric emission lines. Telluric absorption correction was \emph{not} performed, but telluric absorption is considered in the analysis. The transformations from the observed geocentric reference frame to $v_{LSR}$ were computed to be 0.78 \kms\ on Dec 2 and 19.74 \kms\ on Jan 8. \subsection{Optical Spectra} Optical spectra were acquired using the Double Imaging Spectrograph instrument at APO.
The high-resolution red and blue gratings were centered at 6564 \AA\ and 5007 \AA\ with a coverage of about 1200 \AA\ and resolution $\lambda/\Delta\lambda \approx 5000$. Sets of three 900s exposures and three 200s exposures were acquired on the targets and on the spectrophotometric calibrator G191-b2b with a 1.5\arcsec\ slit. Observations were taken on the night of January 17, 2009 under clear conditions. Optical spectra were also reduced using the {\sc twodspec} package in IRAF. Wavelength calibration was done with HeNeAr lamps and night sky lines in the red band, and HeNeAr lamps in the blue band. Lines filling the slit were subtracted to remove atmospheric lines, though some astrophysical lines also filled the slit and these were measured before background subtraction. The $v_{LSR}$ correction for this date was 24.4 \kms. \subsection{Optical imaging} CCD images were obtained on the nights of 14 and 15 September 2009 with the NOAO Mosaic 1 Camera at the f/3.1 prime focus of the 4 meter Mayall telescope at the Kitt Peak National Observatory (KPNO). The Mosaic 1 camera is an 8192$\times$8192 pixel array (consisting of eight 2048$\times$4096 pixel CCD chips) with a pixel scale of 0.26$''$ pixel$^{-1}$ and a field of view 35.4$'$ on a side. Narrow-band filters centered on 6569\AA\ and 6730\AA, both with a FWHM of 80\AA, were used to obtain H$\alpha$ and [SII] images. An SDSS i' filter, centered on 7732\AA\ with a FWHM of 1548\AA, was used for continuum imaging. A set of five dithered 600 second exposures were obtained in H$\alpha$ and [SII] using the standard MOSDITHER pattern to eliminate cosmic rays and the gaps between the individual chips in Mosaic. A dithered set of five 180 second exposures were obtained in the broad-band SDSS i-band filter to discriminate between H$\alpha$, [SII], and continuum emission. Images were reduced in the standard manner by the NOAO Mosaic reduction pipeline \citep{valdes2007}.
\Figure{f1.eps}{The CFHT \hh\ (bold), CO 3-2 HARP (thin), and CO 2-1 A3 (dashed) fields overlaid on the KPNO \ensuremath{\textrm{H}\alpha}\ mosaic with selected objects identified by their SIMBAD names. Sh~2-233IR~SW\ is coincident with IRAS 05358+3543. }{fig:overview_ha}{1.0} \subsection{VLA data} VLA archival data from projects AR482, AR513, AS831, and AM697 were re-reduced to perform a deeper search for UCHII regions and acquire more data points on the known UCHII's SED. Data from AR482 were previously published in \citet{beuther2007}; the other data are unpublished. The data were reduced using the VLA pipeline in AIPS ({\sc vlarun}). The observations used and the sensitivities and beam sizes achieved are listed in Tables \ref{tab:vlatimes} and \ref{tab:vla}. There appeared to be calibration errors in the AR482 observations (the phase calibrator was 2-3 times brighter than in all other observations) and these data were therefore not used in the final analysis, but they produced consistent pointing results.
\Table{cccccccc} {VLA Observation Program Names, Dates, and Times} {\colhead{VLA } & \colhead{Observation} & \colhead{Time } & \colhead{Array} & \colhead{Band} & \colhead{Fluxcal} & \colhead{Phase cal} & \colhead{Phase cal } \\ Observation & Date &on&&&&& Percent \\ Name &&Source&&&&& Uncertainty \\} {tab:vlatimes} { AR482 & August 2 2001 & 2580s & B & X &3c286 & 0555+398 & 22 \\ AR513 & June 21 2003 & 7770s & A & X &3c286 & 0555+398 & 0.8 \\ AS831 & February 26 2005 & 2640s & B & X &3c286 & 0555+398 & 0.7 \\ AS831 & August 5 2005 & 2660s & C & X &3c286 & 0555+398 & 0.3 \\ AS831 & May 11 2006 & 2610s & A & X &3c286 & 0555+398 & 3.0 \\ AL704 & August 7 2007 & 6423s & A & Q &3c273 & 0555+398 & 18 \\ AL704 & September 1 2007 & 6423s & A & Q &3c273 & 0555+398 & 13 \\ AM697 & November 26 2001 & 2880s & D & Q &3c286 & 0555+398 & 2.2 \\ AM697 & November 28 2001 & 1530s & D & K &3c286 & 0555+398 & 2.1 \\ AM697 & November 28 2001 & 1530s & D & U &3c286 & 0555+398 & 5.8 \\ }{} \section{Results} \subsection{Near Infrared Imaging: Outflows and Stars} Eleven distinct outflows have been identified in IRAS 05358\ in the images. Outflows are identified from a combination of J=3-2 CO data, shock excited \htwo\ emission, and published interferometric maps \citep{beuther2002}. Suspected CO outflows were identified by the presence of wings on the CO J=3-2 emission lines that extended beyond the typical velocity range of emission associated with the line core. The single dish data were compared to the interferometric maps of \citet{beuther2002}. The CFHT \hh\ image was then used to search for shock-excited emission associated with the outflow lobes. \begin{figure*}[htpb] \center \epsscale{0.75} \hspace{-1.2in} \plotone{f2.eps} \caption{The outflows described in section \ref{sec:outflows} overlaid on the CFHT \hh\ image. Numbers followed by {\it r} and {\it b} (red and blue), {\it n} and {\it s} (north and south), or {\it e} and {\it w} (east and west) are thought to be counterflows. 
Red and blue vectors indicate red and blue Doppler shifts. Green vectors indicate where the Doppler shift is ambiguous or cannot be determined. Magenta circles are Spitzer 24\um\ sources. Red squares are \citet{beuther2002} mm sources (from left to right, mm1, mm2, mm3). The blue diamond is a YSO candidate detected only in IRAC bands. The length of the vectors corresponds to the approximate length of the outflows. Sources 1 and 6 correspond to \citet{porras2000} IR 6 and IR 41 respectively, and they are discussed under these names in section \ref{sec:outflows}. The bows of Outflow 1n and 4n are detected in \ensuremath{\textrm{H}\alpha}\ and [S II] emission and are therefore identified as Herbig-Haro objects HH 993 and 994 respectively. \label{fig:outflowsh2}} \end{figure*} \Figure{f3.eps} {CO contours integrated from $v_{LSR}=$ -13 to -4 \kms\ (red) and -34 to -21 \kms\ (blue) at levels of 2,4,6,10,20,30,40,50 K \kms\ overlaid on the \hh\ image. Specific outflows are labeled in Figure \ref{fig:outflowsh2} on the same scale.} {fig:COonH2}{0.75} \begin{figure*}[htpb] \hspace{-0.6in} \includegraphics[scale=0.40,clip=true]{f4.eps} \includegraphics[scale=0.40,clip=true]{f5.eps} \caption{(a) \hh\ image with SO~\ensuremath{5_6-4_5} \ peak flux contours at 0.5-1.4 K in intervals of 0.15 K overlaid. With a critical density $\sim3.5\ee{6}$ \ensuremath{\textrm{cm}^{-3}}\ \citep{leidendb}, this transition is a dense gas tracer. (b) The [S II] image with outflow vectors overlaid. Diffuse emission can be seen at the north ends of Outflows 1, 4, and 6 and around the reflection nebula near source IR 41.} \label{fig:so_on_h2} \end{figure*} \label{sec:outflows} Figure \ref{fig:outflowsh2} shows the \htwo\ S(1) 1-0 2.1218 \um\ (a rovibrational transition in the electronic ground state from the $v=1$, $J=3$ to the $v=0$, $J=1$ state) emission in the vicinity of IRAS 05358\ with outflows and possible outflow sources labeled. The mm cores from \citet{beuther2002} are identified by red squares.
The flow vectors in figure \ref{fig:outflowsh2} were chosen on the basis of the \htwo\ bow shock morphologies and orientations of chains of \htwo\ features, association with arcsecond-scale CO features on the \citet{beuther2002} Figure 8 CO map, and/or association with lobes of Doppler-shifted CO emission in the CO 3-2 data. The color of the vector indicates the suspected Doppler shift; red and blue correspond to red and blueshifts and green vectors indicate that the Doppler shift is uncertain. {\it IRAS 05358\ Outflow 1:} The most prominent flow in \htwo\ is associated with the bright bow-shocks N1 and N6 \citep{khanzadyan2004} located towards PA $\approx$ 345\arcdeg\ and 170\arcdeg\ respectively from the sub-mm source mm1b \citep{beuther2002}. This flow, \citet{beuther2002} outflow A, is associated with redshifted and blueshifted CO emission. The northern shock is seen in \ensuremath{\textrm{H}\alpha}\ and [S II] emission (figure \ref{fig:so_on_h2}b) and is given a Herbig-Haro designation HH 993. This flow is indicated by oppositely directed green vectors from the vicinity of mm1, mm2, and mm3. It is listed as ``Jet 1'' in \citet{qiu2008}. \citet{kumar2002} identified the knot immediately behind the bow shock as a Mach disk. In the \citet{beuther2002} interferometric maps, the north flow contains redshifted features and the south flow contains primarily blueshifted features. There are also blueshifted CO features to the west of the \hh\ knots that are probably part of a different flow that is not seen in \hh\ emission. The velocity of the flow as measured from \hh\ emission is blueshifted as much as 80 \kms\ (LSR), but one component is blueshifted only 14 \kms\ (see table \ref{tab:OutflowH2}), which is consistent with the cloud velocity. A redshifted SiO lobe is present in the south counterflow.
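The \hh\ velocities quoted throughout this section follow from the measured line centroids in the usual way. A minimal sketch of that conversion (the rest wavelength of the 1-0 S(1) line and the Dec 2 geocentric-to-LSR offset are taken from the text; the observed wavelength below is purely illustrative):

```python
# Hedged sketch: LSR velocity from an H2 1-0 S(1) line centroid.
C_KMS = 299792.458       # speed of light [km/s]
LAM_REST = 2.12183e-6    # m, H2 1-0 S(1) rest wavelength

def v_lsr_kms(lam_obs, v_corr_kms):
    """Optical-convention Doppler velocity plus the geocentric-to-LSR offset."""
    return C_KMS * (lam_obs - LAM_REST) / LAM_REST + v_corr_kms

# An illustrative centroid blueshifted by ~0.57 nm, with the 0.78 km/s
# Dec 2 frame correction, gives roughly -80 km/s:
print(round(v_lsr_kms(2.12126e-6, 0.78), 1))  # ~ -79.8 km/s
```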
The presence of \ensuremath{\textrm{H}\alpha}, [S II], and [O III] emission in the north shock and corresponding nondetections in the south shock suggest that there is substantially greater extinction towards the south knot. While the velocities in three of the four apertures picked along the TripleSpec slit are blueshifted, there are also knots with velocities consistent with the cloud velocity. \citet{porras2000} measure the velocity of the counterflow to be -17.3 \kms, which is consistent with the cloud velocity. Outflow 1 is propagating very nearly in the plane of the sky. A line connecting the two bow shocks in Outflow 1 goes directly through \citet{beuther2007} source mm2a despite the clear association in the \citet{beuther2002} interferometric CO map (their Figure 8) with mm1a. The currently available data do not clarify which is the source of the outflow: while the bent CO outflow appears to trace Outflow 1 back to mm1a, there are additional parallel CO outflows towards the confused central region that could originate from either mm1a or mm2a. A Spitzer 4.5 \um\ and 24 \um\ source is barely detected in \hh\ 2.5\arcmin\ to the north of Outflow 1. It is only apparent when the \hh\ image is smoothed and would have been dismissed as noise except for the association with a probable 4.5 \um\ extended source. It is labeled 24\um\ source 7 in figure \ref{fig:outflowsh2}. It appears to be slightly resolved at 4.5\um, and is therefore likely shocked emission. The object may be a protostellar source with an associated outflow, but its proximity to the projected path of Outflow 1 suggests that it may be an older outflow knot. {\it IRAS 05358\ Outflow 2:} The second brightest \htwo\ features trace a bipolar flow emerging from the immediate vicinity of the sub-mm cluster at PA $\approx$ 135\arcdeg\ (red lobe) and 315\arcdeg\ (blue lobe). It is listed as ``Jet 2'' in Figure 6 of \citet{qiu2008}.
The counterflow probably overlaps in the line of sight with the counterflow from Outflow 3. It is shorter on the counterflow side either because it has already penetrated the cloud and is no longer impacting any ambient gas or, more likely, it has slowly drilled its way out of the molecular cloud and has not been able to propagate as quickly as the northwest flow. The \hh\ velocities measured for these knots are $\sim$ 30 \kms\ blueshifted, or marginally blue of the cloud LSR velocity. The disk identified in \citet{Minier2000} is approximately perpendicular to the measured angle of Outflow 2 assuming that mm1a is the source of this flow. It is therefore an excellent candidate for the outflow source. A diagram of the mm1a region is shown in figure \ref{fig:mm1adiagram}. See Section \ref{sec:vlaresults} for detailed discussion. {\it IRAS 05358\ Outflow 3:} The \citet{beuther2002} CO and SiO maps reveal a third flow, their outflow B at PA $\approx$ 135\arcdeg\ (red lobe) and 315\arcdeg\ (blue lobe). The \htwo\ features N3D and N3E of \citet{khanzadyan2004} are probably shocks in this flow. It is listed as ``Jet 3'' in \citet{qiu2008}. The two chains of \htwo\ emission indicate that outflows 2 and 3 are distinct. There also appears to be a counterflow at a shorter distance from the mm cores similar to counterflow 2. Outflows 2 and 3 may be associated with either redshifted or blueshifted features in the \citet{beuther2002} CO and SiO maps. High velocity flows with both parities are present near both the northwest (\citet{beuther2002} outflow C) and southeast flow for these jets, but the resolution of the millimeter observations is inadequate to determine which flow is in which direction. \citet{porras2000} measures $v_{LSR} = -7.5$ \kms\ for their knot 4A, which corresponds to the blended southeast counterflow of outflows 2 and 3.
Their Figure 7 shows a wide line that is probably better represented by two or three blended lines, one consistent with the cloud velocity and the other(s) redshifted. Since Outflow 2 has a measured blueshift and Outflow 3 is significantly fainter, the redshifted counterflow emission is probably associated with Outflow 2 and the blueshifted with Outflow 3. {\it IRAS 05358\ Outflow 4:} The JCMT CO data and \htwo\ images reveal a large outflow lobe consisting of blue lobes 1 and 4 that form a tongue of blueshifted emission propagating to the northeast at PA $\approx$ 20\arcdeg\ (Figure \ref{fig:outflowsh2}) from the cluster of sub-mm cores. A faint chain of \htwo\ features runs along the axis of the CO tongue and terminates in a bright \htwo\ bow shock located at the northern edge of Figure \ref{fig:outflowsh2}. Several \htwo\ knots lie along the expected counterflow direction, but that portion of the field contains multiple outflows and is highly confused. If the counterflow is symmetric with the northeast knot, it extends 2.1 parsecs on the sky. The bow shock of Outflow 4 is seen in the \ensuremath{\textrm{H}\alpha}\ and [S II] images, implying that the extinction is much lower than in the cluster. Two apertures placed along the bow shock reveal that it is blueshifted about 70 \kms\ and may be extincted by as little as $A_V\sim0.5$. It is designated HH 994. {\it IRAS 05358\ Outflow 5:} Figure \ref{fig:outflowsh2} shows a bright chain of \htwo\ knots and bow shocks starting about 10\arcsec\ west of mm3 and propagating south at PA $\approx$ 190\arcdeg. The SiO maps of \citet{beuther2002} show a tongue of blueshifted emission along this chain (their Outflow C). The outflow projects back to H$^{13}$CO$^+$ source 3, which is also a weak mm source. A lack of an obvious counterflow and the possibility that the knots identified with Outflow 5 could be associated with a number of different crossing flows make this identification very tentative.
Higher spatial resolution observations will be required to determine the association of this outflow. {\it IRAS 05358\ Outflow 6:} The fourth brightest source in the Spitzer 24\um\ data is located at J(2000) = 05:39:08.5, +35:46:38 (source 5 in the IRAS 05358\ section of the \citet{qiu2008} catalog, referred to in table \ref{tab:OutflowH2} as Q5) in the middle of the molecular ridge that extends from IRAS 05358\ towards the northwest (24\um\ object 4 in figure \ref{fig:outflowsh2}). The star is located at the northwest end of the tongue of 1.2 mm emission mapped by \citet{beuther2002} with the MAMBO instrument on the IRAM telescope. This part of the cloud is also seen in silhouette against brighter surrounding emission at 8\um. At wavelengths below 2\um, it is fainter than 14th magnitude and therefore is not listed in the 2MASS catalog, and it is not detected in \citet{yan2009} down to 19th magnitude in K. Spitzer data indicate very red colors between 3.6 and 70 \um, indicating that this object is likely to be a Class I protostar. The SED was fit using the online tool provided by \citet{robitaille2007}. Unfortunately, a wide variety of parameters all achieved equally good fits, so no conclusions are drawn about the stellar mass or other very uncertain parameters. However, the top models all had $A_V > 20$ and many had $A_V$ in the range 30-50, indicating that the line of sight is probably through a thick envelope or disk towards this source. This source lies at the base of the tongue of blueshifted CO 3-2 emission that extends northwest of IRAS 05358\ at PA $\approx$ 345\arcdeg\ and has mass $\sim$0.5 \msun. A pair of \htwo\ features, \citet{khanzadyan2004} N12A and N12B, are located 30 and 55\arcsec\ from the suspected YSO, forming a chain along the axis of the blueshifted CO tongue. \citet{khanzadyan2004} \htwo\ knot N3F lies along the flow axis in the redshifted direction.
{\it IRAS 05358\ Outflow 7:} The 20\arcsec\ long chain of \htwo\ knots labeled \citet{khanzadyan2004} N11 appears to trace part of a jet at PA $\approx$ 345\arcdeg\ that propagates parallel to Outflow 6 about 20\arcsec\ to the east. The northwest portion of Outflow C in the \citet{beuther2002} SiO map is in approximately the same direction as Outflow 7, and it may represent a redshifted counterflow to the northwest-pointing \hh\ knots. The jet axis passes within a few arcseconds of a faint and red YSO located at J(2000) = 05 39 10.0, +35 46 27 (blue diamond in figure \ref{fig:outflowsh2} about 35\arcsec\ south of the southern end of the \htwo\ feature). It may be a 24\um\ source but is lost in the PSF of the bright source at the center of Sh~2-233IR~NE. This object is also undetected down to 19th magnitude in the \citet{yan2009} K-band image. {\it IRAS 05358\ Outflow 8:} A prominent jet-like \htwo\ feature protrudes from the vicinity of Sh~2-233IR~SW\ at PA $\approx$ 335\arcdeg\ and ends in bright knot N9. The feature N5B is located just outside the ring of \htwo\ emission that surrounds the IRAS source at the base of the jet. Towards the southeast, knot N6 is located opposite knot N9 with respect to the southwest cluster. IR 41, the \ensuremath{\textrm{H}\alpha}\ emission source, labeled 24\um\ source 6 in figure \ref{fig:outflowsh2}, is probably the source of this outflow. {\it IRAS 05358\ Outflow 9:} In the Spitzer and K$_s$ images, an infrared reflection nebula opens towards the southwest at PA $\approx$ 245\arcdeg\ and points towards a blueshifted CO region. The reflection nebula is also seen in \ensuremath{\textrm{H}\alpha}. It is likely that the CO emission in CO Region 1 (table \ref{tab:comeas}) traces a fossil cavity whose walls provide the scattering surface of the reflection nebula.
{\it IRAS 05358\ outflow 10 and IR 6:} A bright \htwo\ filament protrudes at PA $\approx$ 15\arcdeg\ towards the northeast of IR 6 (24\um\ source 1, \citet{qiu2008} source 8). The star is the third brightest 24\um\ source in the IRAS 05358\ region. Since it is visible at visual wavelengths, it is not heavily embedded. Its H$\alpha$ emission and association with an outflow lobe and \htwo\ emission suggest that it is a moderate mass Herbig AeBe star associated with the IRAS 05358\ complex. The optical spectrum confirms this hypothesis: the star has \ensuremath{\textrm{H}\alpha}\ absorption wings on either side of a very bright, asymmetric \ensuremath{\textrm{H}\alpha}\ emission profile (see section \ref{sec:dis}). IR 6 is seen to be the source of Outflow 10. Data for this source are available from $\sim$0.45-24\um, so the \citet{robitaille2007} spectral fitter puts strong constraints on the star's mass and luminosity. The measured mass and luminosity are $M=4.5\pm0.5$ \msun\ and $L = 10^{2.3\pm0.25} L_\odot$, parameters consistent with a B7V ($\pm 1$ spectral class) main sequence star. The range of ages in the models covers $10^4-10^7$ years but favors stars in the range $10^5-10^6$ years. While there is a small clump of redshifted CO emission to the northeast of the object, the \htwo\ spectrum shows that the north flow is blueshifted at $v_{LSR}\sim -40$ \kms, and the lack of a visible counterflow suggests that the counterflow may be masked behind an additional extincting medium. The counterflow drawn in figure \ref{fig:outflowsh2} is not seen in emission but is identified as a probable location for a counterflow because of the confident association of outflow 10n with source IR 6. {\it IRAS 05358\ outflow 11:} A chain of \hh\ knots is seen at 2.12\um\ and in the Spitzer 4.5\um\ image. They trace back to either IR 78 or 24\um\ source 4. There is a tongue of redshifted CO 3-2 emission in the same direction as this flow that suggests it may be redshifted.
{\it IR 41}: There is an arc-like \hh\ emission feature surrounding the \ensuremath{\textrm{H}\alpha}\ emission line star IR 41. This implies that the star is probably a late B-type star with too little Lyman continuum emission to generate a photon-dominated region (PDR) but enough soft UV to excite \hh. From the measured \ensuremath{\textrm{H}\alpha}\ and the nondetection of \ensuremath{\textrm{H}\beta}\ at the star's location down to a 5-$\sigma$ limit of 1\ee{-17} erg s$^{-1}$ \ensuremath{\textrm{cm}^{-2}} \AA$^{-1}$, a lower limit on the extinction column of $A_V=15$ is derived. The \citet{robitaille2007} fitter yields a mass estimate of 7.4$\pm 0.6$ \msun\ and luminosity $L=10^{2.97\pm0.16}L_\odot$ among the 222 best fits out of a grid of 200,000 model SEDs (fits with $\chi^2<5000$). The luminosity is very well constrained, varying only modestly to $L=10^{2.99\pm0.15}L_\odot$ for the 904 best fits ($\chi^2 < 10000$), while the mass shifts down to $6.5\pm1.0\,\msun$. The mass estimate may be biased by the lower number of high-mass models computed. The star's mass is most compatible with a main sequence B4V star, though its luminosity is closer to a B5V star. The disk mass is constrained to be $>10^3 \msun$. The age is reasonably well constrained to be $T = 10^{5.78\pm0.12}$ years for the best 904 models, but is essentially unconstrained for the best 222. Similarly, the stellar temperature is entirely unconstrained by the fitting process. The very high values of $\chi^2$ would normally be worrisome, but the $\chi^2$ statistic only represents statistical error, while the data are dominated by various systematic errors including calibration offsets in the optical/NIR and poor resolution in the far-IR. Therefore, it is not possible to find a perfect model fit, but it is still possible to put constraints on the physical properties of the source.
\Figure{f6.eps}{The \ensuremath{\textrm{H}\alpha}\ image with CO contours at redshifted, blueshifted, and middle velocities in red, blue, and green respectively. Contours are at 2,4,8,12,20 K \kms\ for the red and blue, and 20,25,30,40,50,60,70 K \kms\ for the green. Red is integrated from -12 to -4 \kms, Blue from -31 to -21 \kms, and green from -21 to -12 \kms. }{fig:HA_with_CO}{1.0} {\it South of IRAS 05358}: There is a symmetric flow with one faint \hh\ knot and a bright central source about 4\arcmin\ south of IRAS 05358. The \hh\ knot is at J(2000) = 05:39:15.63 +35:42:13.2. The flow has a clear red and blue region as identified in figure \ref{fig:cofig}; the red flow extends from -9 to -14 \kms\ and the blue from -19 to -23 \kms\ (the outflow is swamped by ambient emission in the intermediate velocity range). The outflow is $\sim 2\arcmin$ long, though the probable source identified is not directly between the two lobes. The ellipses used are labeled in table \ref{tab:comeas} as Red S and Blue S. \subsection{Imaging results: Optical} Deep [S II] images show that some of the outflows have pierced through the obscuring dust layers or excited extremely bright sulfur emission. \citet{khanzadyan2004} knot N1 at the end of Outflow 1 is visible in [S II] emission. The bow of outflow 4 and the northwest end of outflow 6 are detected in [S II]. Only the Outflow 1 and 4 bows are detected in \ensuremath{\textrm{H}\alpha}\ emission, indicating that the emission is most likely from shock heating, not external photoionizing radiation. If the shocks were externally irradiated, we would expect the emission to be dominated by the recombination lines. Because they have been detected in the optical, these two flows can be classified as Herbig-Haro objects. \subsection{CO results} IRAS 05358\ is located at the center of the CO 3-2 integrated velocity maps (Figure \ref{fig:cofig}).
The parent molecular cloud, centered at $v_{LSR} = -17.5$ \kms , extends from the southeast towards the northwest with the brightest emission coming from the core associated with Sh~2-233IR~SW, while the highest integrated emission is associated with Sh~2-233IR~NE. Sh~2-233IR~NE\ has a central velocity of $\sim -16.0$ \kms\ from the optically thin \ensuremath{\textrm{C}^{18}\textrm{O}}\ 2-1 measurements. Material that has been swept up and accelerated by jets and outflows can be seen at velocities $v_{LSR} < -21$ \kms and $v_{LSR} > -12$ \kms\ (Figure \ref{fig:cofig}). The integrated CO 3-2 map peaks at J(2000) = 05:39:12.8 +35:45:55, while the highest observed brightness temperature is at J(2000) = 5:39:09.4 +35:45:12. This offset is discussed in the context of CO isotopologues in section \ref{sec:co21} and in section \ref{sec:discussion-outflows}. Regions with line wings relative to the ambient cloud within 5\arcmin\ of the northeast cluster were assumed to be associated with outflows from the cluster. Further than 5\arcmin, it is likely that the high velocity wings are accelerated by neighboring HII regions (see section \ref{sec:surroundings}). These line wings were integrated over the velocity range -34 to -21 \kms\ (blue) and -12 to 1 \kms\ (red) to acquire estimates of the outflowing mass under the assumption that outflowing gas is optically thin. The extracted regions are displayed in Figure \ref{fig:cofig}b and measurements in table \ref{tab:comeas}. The line wings in the central arcminute and central 5 arcminutes were measured for comparison with lower resolution data and to compute a total outflow mass in the central region. The objects in Table \ref{tab:comeas} labeled CO Region 1, 2, and 3 have uncertain associations with outflows. CO Region 1 is tentatively associated with outflow 11. CO region 2 may be associated with Outflow 3 but is in a highly confused region and may have many contributors. CO region 3 is probably associated with outflow 10. 
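The optically thin line-wing mass estimates tabulated in table \ref{tab:comeas} follow the standard LTE recipe: an integrated line-wing intensity is converted to an H$_2$ column density and multiplied by the projected area of the extraction region. A minimal sketch is given below; the conversion factor \texttt{X\_THIN} and the 1.8 kpc distance are assumptions for illustration, since the adopted excitation temperature and CO abundance are not restated in this section.

```python
# Generic optically thin outflow mass estimate from an integrated line wing:
#   N(H2) = X * Int T_mb dv ;  M = mu * m_H * N(H2) * area
# X_THIN is an ASSUMED conversion from K km/s to N(H2) for thin CO 3-2 in
# LTE; the paper's actual excitation temperature and CO/H2 abundance are
# not given in this section, so treat the normalization as illustrative.
M_H = 1.6726e-24          # hydrogen atom mass, g
MSUN = 1.989e33           # solar mass, g
PC = 3.086e18             # parsec, cm
X_THIN = 3.3e18           # cm^-2 (K km/s)^-1, assumed N(H2) conversion

def wing_mass(int_tmb, area_arcsec2, d_pc=1800.0):
    """Mass (Msun) in a line wing of integrated intensity int_tmb (K km/s)
    over an extraction region of area_arcsec2 at distance d_pc."""
    n_h2 = X_THIN * int_tmb                          # H2 column, cm^-2
    cm_per_arcsec = d_pc / 206265.0 * PC             # small-angle scale
    area_cm2 = area_arcsec2 * cm_per_arcsec ** 2
    # 2.8 m_H: mean mass per H2 molecule including the He contribution
    return 2.8 * M_H * n_h2 * area_cm2 / MSUN
```

The momentum and energy columns of the table follow the same pattern, weighting each velocity channel by $|v - v_{0}|$ and $\frac{1}{2}(v - v_{0})^2$ respectively before summing.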
In contrast, the associations with outflows 4 and 6/7 are more certain because they are further from the central region and less confused. Outflow 1 is seen at high velocities in \citet{beuther2002} interferometer maps. Outflow 9 is selected primarily based on CO emission. \Figure{f7.eps} { The JCMT HARP CO J=3-2 map integrated over all velocities with significant emission (-34 \kms\ to -4 \kms) shown in gray log scale from 0 to 150 K \kms. The elliptical regions over which line wings were integrated are shown with blue and red circles corresponding to blue and red line wings. The measurements are presented in table \ref{tab:comeas}.}{fig:cofig}{1.0} \Figure{f8.eps}{SCUBA 850\um\ image in linear grayscale from -1 to +10 mJy/beam, with a saturated peak of 24 mJy/beam, with \ensuremath{^{12}\textrm{CO}}\ 2-1 (orange solid, contours at 45,60,85,100,115,130,145 K \kms) and \ensuremath{^{13}\textrm{CO}}\ 2-1 (green dashed, contours at 20,30,40,45 K \kms) integrated contours. The box shows the region plotted in Figure \ref{fig:co21map}.}{fig:scuba_co21}{1.0} \Figure{f9.eps}{CO spectra of Sh~2-233IR~NE\ in \ensuremath{^{12}\textrm{CO}}\ (blue), \ensuremath{^{13}\textrm{CO}}\ (green), and \ensuremath{\textrm{C}^{18}\textrm{O}}\ (red). The top-left plot is the pixel centered at J(2000) = 5:39:13.67 +35:46:26.0 and each pixel is 10\arcsec\ on a side. The region mapped here is shown with a box in Figure \ref{fig:scuba_co21}. Redshifted self-absorption, a possible infall tracer, is evident in the \ensuremath{^{12}\textrm{CO}}\ spectra in the outer pixels. The inner pixels show self-absorption only at central velocities: this may be an indication that emission from outflows dominates any infall signature, or simply that there is no bulk infall towards Sh~2-233IR~NE.}{fig:co21map}{0.75} \Figure{f10.eps}{CO spectra of inner 12\arcsec\ centered on Sh~2-233IR~NE\ for all observed CO lines. 
The CO 3-2 and 2-1 beams are not matched, but in both cases the area integrated over is 1-2 resolution elements across. The divisions demarcating the red and blue line wings are shown with vertical dashed lines at $v_{LSR}=-21$ and -12 \kms. }{fig:co21_all3}{1.0} \begin{deluxetable}{lccccc} \tabletypesize{\footnotesize} \centering \tablecaption{Measured properties of CO flows \label{tab:comeas} } \tablehead{ \colhead{\tablenotemark{a}Region Name} & \colhead{$\int T_{mb}*$} & \colhead{$M (M_\odot)$} & \colhead{p ($M_\odot$ \kms)} & \colhead{N (\ensuremath{\textrm{cm}^{-2}})} & \colhead{E ($10^{42}$ erg)}} \startdata \tablenotemark{b}A. Outflow 4a & 4.27 & .022 & .15 & 1.4\ee{19} & 11 \\ \tablenotemark{b}B. Outflow 4b & 4.60 & .032 & .21 & 1.5\ee{19} & 13 \\ \tablenotemark{b}C. Outflow 1n & 14.5 & .088 & .71 & 4.8\ee{19} & 66 \\ \tablenotemark{b}D. Outflow 6/7 & 4.45 & .045 & .30 & 1.5\ee{19} & 29 \\ \tablenotemark{r}E. CO Region 3 & 1.31 & .016 & .112 & 4.3\ee{18} & 8.5 \\ \tablenotemark{b}F. Sh~2-233IR~NE\ & 41.8 & .464 & 3.72 & 1.4\ee{20} & 330 \\ \tablenotemark{m}F. Sh~2-233IR~NE\ & 132.9& 1.47 & - & 4.4\ee{20} & -\\ \tablenotemark{r}F. Sh~2-233IR~NE\ & 30.0 & .333 & 2.03 & 9.9\ee{19} & 135 \\ \tablenotemark{b}G. Outflow1s & 14.6 & .064 & .48 & 4.8\ee{19} & 40 \\ \tablenotemark{r}H. CO Region 2 & 4.54 & .012 & .074 & 1.5\ee{19} & 5 \\ \tablenotemark{b}I. Outflow 9 & 6.33 & .039 & .39 & 2.1\ee{19} & 43\\ \tablenotemark{b}J. CO Region 1 & 3.61 & .015 & .12 & 1.2\ee{19} & 11\\ \tablenotemark{r}K. Red S & 5.26 & .051 & .34 & 1.7\ee{19} & 26 \\ \tablenotemark{b}L. 
Blue S & 3.66 & .053 & .47 & 1.2\ee{19} & 47 \\ \tablenotemark{b}1\arcmin\ aperture\tablenotemark{c}& 15.1 & .96 & 7.6 & 5.0\ee{19} & 670\\ \tablenotemark{b}3\arcmin\ aperture& 2.7 & 1.6 & 12 & 9.0\ee{18} & 1000\\ \tablenotemark{b}5\arcmin\ aperture& 1.7 & 2.7 & 20 & 5.6\ee{18} & 1600 \\ \tablenotemark{r}1\arcmin\ aperture & 11.8 & 0.75 & 4.7 & 3.9\ee{19} & 320\\ \tablenotemark{r}3\arcmin\ aperture & 1.9 & 1.1 & 6.8 & 6.2\ee{18} & 460\\ \tablenotemark{r}5\arcmin\ aperture & 0.96 & 1.5 & 10 & 3.2\ee{18} & 640\\ \tablenotemark{b}1\arcmin\ $^{12}$CO 2-1 & 10.4 & .94 & 7.1 & 4.9\ee{19} & 590 \\ \tablenotemark{m}1\arcmin\ $^{12}$CO 2-1 & 97.78 & 8.83 & - & 4.6\ee{20} & - \\ \tablenotemark{r}1\arcmin\ $^{12}$CO 2-1 & 9.17 & 0.83 & 5.52 & 4.3\ee{19} & 430 \\ \tablenotemark{m}1\arcmin\ $^{13}$CO 2-1 & 41.12 & 211 & - & 1.1\ee{22} & - \\ \tablenotemark{m}1\arcmin\ C$^{18}$O 2-1 & 5.31 & 271 & - & 1.4\ee{22} & - \\ \enddata \tablenotetext{a}{Unless labeled otherwise, regions are extracted from CO 3-2 data as shown in figure \ref{fig:cofig}b } \tablenotetext{b}{Blue integration over velocity range -34 to -21 \kms} \tablenotetext{c}{Apertures are centered on J(2000) = 05:39:11.238 +35:45:41.80 in Sh~2-233IR~NE} \tablenotetext{r}{Red integration over velocity range -13 to -4 \kms} \tablenotetext{m}{Middle range integration over -21 \kms\ to -13 \kms. Assumed not to be outflowing, so no momentum is computed} \end{deluxetable} \subsection{Near-infrared spectroscopy: Velocities} \label{sec:tspecresults} The slit positions used and apertures extracted from those slits are displayed in Figure \ref{fig:tspecslits}. Position-velocity diagrams of the 1-0 S(1) line are displayed in Figure \ref{fig:outflows_h2_pv}. Velocity measurements are presented in Table \ref{tab:OutflowH2}. \Figure{f11.eps}{TripleSpec slits (blue) overlaid on the \hh\ image. The red boxes indicate the apertures extracted from those slits to fit and measure \hh\ properties. 
The apertures are also indicated in the position-velocity diagrams.}{fig:tspecslits}{0.75} The near-IR spectrum of Outflow 1 has the largest signal. All of the K-band \hh\ lines except the 2-1 S(0) 2.3556 \um\ (too weak) and 1-0 S(4) 1.8920 \um\ (poor atmospheric transmission) lines were detected (see Table \ref{tab:nirmeas}). Velocities from gaussian fits to each line are reported. In the central portion of Sh~2-233IR~NE, outflowing \hh\ emission at $v_{LSR}\approx-30$ \kms\ is detected. This material may be associated with a line-of-sight flow, or may originate from the base of the already identified flows 1-3. In source IR 58, Br$\gamma$ and He I 2.05835 \um\ are detected, indicating that there is an embedded PDR in this source. There is a hint of a second, fainter star adjacent to IR 58. IR 93 is observed to be a double source in the TripleSpec spectrum, but the spectrum is too weak for any identification. Br$\gamma$ and possibly He I are detected at fainter levels. Table \ref{tab:nirmeas} shows the measured line strengths (when detected) of all \hh\ lines in each aperture. The errors listed are statistical errors that do not include the systematic errors introduced by a failure to correct for narrow atmospheric absorption lines. \Figure{f12.eps}{Position-velocity diagrams of the \hh\ 2.1218 \um\ line in Outflows 1,2, 4, IR 6, and IR93/IR58.
The velocity range is from -340 to 190 \kms.}{fig:outflows_h2_pv}{1.0} \Table{cccccc} {TripleSpec fitted \htwo\ outflow velocities} {Outflow Number & Aperture Number & \tablenotemark{a}v$_{LSR}$ (\kms) &\tablenotemark{b}v$_{LSR}$ (\kms) } {tab:OutflowH2} { 1 & 1 & -33.54 (0.15) & -31.85 (0.32) \\ 1 & 2 & -13.60 (0.57) & -13.56 (0.96) \\ 1 & 3 & -40.51 (0.41) & -36.13 (0.81) \\ 1 & 4 & -88.7 (2.8) & -83.7 (7.9) \\ 2 & 1 & -82.6 (7.6) & -81 (21) \\ 2 & 2 & -30.41 (0.57) & -28.9 (1.4) \\ 2 & 3 & -33.89 (0.62) & -35.2 (3.7) \\ 4 & 1 & -73.34 (0.48) & -70.2 (1.1) \\ 4 & 2 & -64.08 (0.61) & -67.8 (2.2) \\ IR6 & 1 & -39.4 (1.6) & -39.4 (4.2) \\ IR93 & 2 & -26.07 (0.43) & -26.85 (0.97) \\ IR93 & 3 & -30.6 (1.5) & -32.0 (2.5) \\ IR93 & 4 & -29.14 (0.77) & -30.3 (2.1) \\ IR93 & 6 & -47.7 (7.9) & -71 (37) \\ }{ \tablenotetext{a}{Measured from the \hh\ 1-0 S(1) 2.1218 \um\ line} \tablenotetext{b}{Measured from all detected \hh\ lines fit with the model described in section \ref{sec:tspecresults}} } \clearpage \begin{deluxetable}{ccccccccc} \centering \tabletypesize{\scriptsize} \tablecaption{Measured properties of \hh\ flows} \tablehead{ Outflow & \tablenotemark{a}Center & \tablenotemark{b}PA & \tablenotemark{c}Length & \tablenotemark{d}Source & \tablenotemark{e}Flow & \tablenotemark{e}Counterflow & \tablenotemark{f}Age & \tablenotemark{g}LOS \\ & & & & & Length & Length & (50 \kms) & Velocity \\ } \startdata 1 & 05:39:13.023 +35:45:38.66 & -13.3 &142.3" &mm2? &58 &84.2 &1.4e4 &- \\ 2 & 05:39:13.058 +35:45:51.28 & -47.0 &44.6" &mm1a &44.6 &- &6.6e3 &Blue \\ 3 & 05:39:12.48 +35:45:54.9 & -62 &44" &mm3? &44 &- &6.5e3 &Red \\ 4 & ambiguous & 17.8-21.8 &141-144" & ? &141-144 &- &2.1e4 &Blue \\ 5 & 05:39:12 +35:45:51 & 170 &38-48 &mm3? &38-48 &- &6.5e3 &Blue \\ 6 & 05:39:09.7 +35:45:17 & 14.5 &197 &Q5 &197 &- &2.9e4 &Blue \\ 8 & 05:39:10.002 +35:45:10.87 & -154.6 &105.5" &IR41?
&54.7 &52.9 &7.9e3 &- \\ \enddata \tablenotetext{a}{Midpoint of bipolar outflow if symmetric, position of jet source candidate if asymmetric} \tablenotetext{b}{Position angle uncertainties are $\sim 5\ensuremath{^{\circ}}$ because they are not perfectly collimated, causing an ambiguity in their true directions. The exact angles used to draw vectors in figure \ref{fig:outflowsh2} are listed for reproducibility.} \tablenotetext{c}{Total length of outflow on the sky, including counterflow} \tablenotetext{d}{Candidate jet source object. Outflows 2 and 6 have clear associations, the others are weaker candidates.} \tablenotetext{e}{Flow length is the distance from the CENTER position to the last \hh\ knot in the position angle direction as listed. Counterflow length is the distance from the CENTER position to the opposite far knot.} \tablenotetext{f}{Timescale of jet assuming it is propagating at 50 \kms, an effective lower limit to see \hh\ emission. If two lengths are available, uses the longer of the two. These are lower limits to the true timescale \citep{parker1991}.} \tablenotetext{g}{The parity of the outflow along the line of sight. 
Outflow 1 and 8 have counterflows with parities as indicated in figure \ref{fig:outflowsh2}} \end{deluxetable} \clearpage \begin{deluxetable}{ccccccccccccc}\setlength\tabcolsep{3pt} \scriptsize \tabletypesize{\tiny} \tablewidth{0pt} \centering \tablecaption{Measured \hh\ line strengths \label{tab:nirmeas}} \tablehead{ &1-0 S(0)&1-0 S(1)&1-0 S(2)&1-0 S(3)&1-0 S(6)&1-0 S(7)&1-0 S(8)&1-0 S(9)&1-0 Q(1)&1-0 Q(2)&1-0 Q(3)&1-0 Q(4) \\ aperture&2.2233&2.12183&2.03376&1.95756&1.78795&1.74803&1.71466&1.68772&2.40659&2.41344&2.42373&2.43749} \startdata outflow1ap1 & 3.60E-15& 9.80E-15& 5.50E-15& 1.20E-14& 4.70E-15& 3.10E-15& 8.60E-16& 1.10E-15& 9.20E-15& 6.10E-15& 1.10E-14& 6.90E-15 \\ & ( 2.4e-17)&( 3.4e-17)&( 6.8e-17)&( 2e-16)&( 2e-16)&( 2.8e-17)&( 2.8e-17)&( 2.7e-17)&( 1.4e-16)&( 7.4e-17)&( 8.8e-17)&( 7.8e-17) \\ outflow1ap2 & 7.10E-16& 1.80E-15& 9.90E-16& 1.80E-15& -& -& -& -& 3.00E-15& 1.90E-15& 3.20E-15& 2.00E-15 \\ & ( 2.1e-17)&( 2.7e-17)&( 6.8e-17)&( 1.7e-16)& -& -& -& -&( 1.3e-16)&( 4e-17)&( 7.8e-17)&( 3.8e-17) \\ outflow1ap3 & 1.60E-15& 4.10E-15& 2.20E-15& 4.70E-15& -& 8.30E-16& -& -& 5.60E-15& 3.70E-15& 6.60E-15& 4.80E-15 \\ & ( 2.4e-17)&( 3.4e-17)&( 6.3e-17)&( 1.8e-16)& -&( 2.8e-17)& -& -&( 1.4e-16)&( 5.9e-17)&( 8.2e-17)&( 5.2e-17) \\ outflow1ap4 & -& 9.00E-16& -& -& -& -& -& -& -& -& -& - \\ & -&( 3e-17)& -& -& -& -& -& -& -& -& -& - \\ outflow2ap1 & -& 3.60E-16& -& -& -& -& -& -& -& -& -& - \\ & -&( 1.5e-17)& -& -& -& -& -& -& -& -& -& - \\ outflow2ap2 & 9.40E-16& 2.40E-15& 1.50E-15& 1.80E-15& -& 4.00E-16& -& -& 3.00E-15& -& 3.70E-15& - \\ & ( 1.7e-17)&( 2.2e-17)&( 5.8e-17)&( 1.1e-16)& -&( 2.3e-17)& -& -&( 4.7e-17)& -&( 7.9e-17)& - \\ outflow2ap3 & 2.10E-15& 1.90E-15& 1.80E-15& 2.20E-15& -& 6.70E-16& -& -& 5.70E-15& -& 7.30E-15& - \\ & ( 1.7e-17)&( 2.2e-17)&( 5.1e-17)&( 1.3e-16)& -&( 2.9e-17)& -& -& -8.00E-16& -& -8.00E-16& - \\ outflow4ap1 & 5.50E-16& 2.00E-15& 8.50E-16& 2.00E-15& -& 9.40E-16& 1.90E-16& 3.40E-16& 1.40E-15& -& 1.40E-15& - \\ & ( 2e-17)&( 
2e-17)&( 5e-17)&( 1.3e-16)& -&( 2.8e-17)&( 1.8e-17)&( 2.3e-17)& -4.00E-16& -&( 6.9e-17)& - \\ outflow4ap2 & 5.60E-16& 2.00E-15& 5.30E-16& 2.10E-15& -& 5.80E-16& -& 1.10E-16& -& -& 2.00E-15& - \\ & ( 2e-17)&( 2.2e-17)&( 2.4e-17)&( 1.2e-16)& -&( 2.3e-17)& -&( 1.8e-17)& -& -& -2.00E-16& - \\ IR6ap1 & -& 1.10E-15& -& 9.30E-16& -& 4.30E-16& -& -& -& -& -& - \\ & -&( 3e-17)& -&( 1.4e-16)& -&( 3.2e-17)& -& -& -& -& -& - \\ IR93ap1 & -& 6.60E-15& -& 2.70E-15& -& -& -& -& -& -& 5.80E-15& - \\ & -&( 3.5e-17)& -&( 1e-16)& -& -& -& -& -& -&( 7.4e-17)& - \\ IR93ap2 & 4.40E-15& 6.60E-15& 3.90E-15& 3.30E-15& -& 1.10E-15& -& -& 7.60E-15& 5.10E-15& 6.90E-15& 5.50E-15 \\ & ( 3.2e-17)&( 3.7e-17)&( 9.2e-17)&( 1.4e-16)& -&( 2.7e-17)& -& -&( 8e-17)&( 5.2e-17)&( 7.4e-17)&( 6.1e-17) \\ IR93ap3 & 1.00E-15& 1.70E-15& -& 9.00E-16& -& -& -& -& 2.00E-15& 1.70E-15& 1.90E-15& - \\ & ( 2.3e-17)&( 3.6e-17)& -&( 1.2e-16)& -& -& -& -&( 8e-17)&( 3.8e-17)&( 8.8e-17)& - \\ IR93ap4 & 2.60E-15& 3.70E-15& -& -& -& -& -& -& 4.30E-15& 3.50E-15& 4.70E-15& - \\ & ( 3.2e-17)&( 3.6e-17)& -& -& -& -& -& -&( 8.5e-17)&( 5.2e-17)&( 7.4e-17)& - \\ IR93ap5 & -& 1.90E-15& -& 1.00E-15& -& -& -& -& -& -& -& - \\ & -&(2.4e-17) & -&(1.00e-16)& -& -& -& -& -& -& -& - \\ IR93ap6 & -& 4.10E-16& -& -& -& -& -& -& -& -& -& - \\ & -&( 3e-17)& -& -& -& -& -& -& -& -& -& - \\ \enddata \tablecomments{Fluxes are in units erg s$^{-1} \ensuremath{\textrm{cm}^{-2}} $\AA$^{-1}$. Errors are listed on the second row for each aperture. 
Errors of (0) indicate that the line was detected, but that the fluxes should not be trusted because the background was probably oversubtracted.} \end{deluxetable}\addtocounter{table}{-1} \begin{deluxetable}{ccccccccccccccc} \scriptsize \tabletypesize{\tiny} \tablewidth{0pt} \centering \tablecaption{Measured \hh\ line strengths (cont'd) \label{tab:nirmeas2}} \tablehead{ &2-1 S(1)&2-1 S(3)&3-2 S(3)&3-2 S(4)&4-3 S(5) & [Fe II] & [Fe II] \\ &2.24772&2.07351&2.2014&2.12797&2.20095 & 1.6435 & 1.2567 \\ } \startdata outflow1ap1 & 2.00E-15& 1.20E-15& 6.20E-16& 2.60E-16& 7.10E-16 & 4.4e-15 &3.5e-15 \\ & ( 2.5e-17)&( 3.5e-17)&( 2.2e-17)&( 1.6e-17)&( 1.9e-17) & ( 7.9e-17) &( 4e-17) \\ outflow1ap2 & -& -& -& -& - & 6.7e-16 &3.1e-16 \\ & -& -& -& -& - & ( 7.8e-17) &( 3.3e-17) \\ outflow1ap3 & 9.90E-16& -& 5.70E-16& 2.50E-16& 6.40E-16 & 1.3e-15 &5.7e-16 \\ & ( 2.6e-17)& -&( 0)&( 1.2e-17)&( 0) & ( 8.9e-17) &( 4e-17) \\ outflow1ap4 & -& -& -& -& - & - & - \\ & -& -& -& -& - & - & - \\ outflow2ap1 & -& -& -& -& - & - & - \\ & -& -& -& -& - & - & - \\ outflow2ap2 & 6.40E-16& -& -& -& - & - & - \\ & ( 1.9e-17)& -& -& -& - & - & - \\ outflow2ap3 & -& -& -& -& - & - & - \\ & -& -& -& -& - & - & - \\ outflow4ap1 & 4.30E-16& 4.20E-16& -& -& 1.60E-16 & - & - \\ & ( 1.9e-17)& (0) & -& -& (0) & - & - \\ outflow4ap2 & -& -& -& -& - & - & - \\ & -& -& -& -& - & - & - \\ IR6ap1 & -& -& -& -& - & - & - \\ & -& -& -& -& - & - & - \\ IR93ap1 & -& -& -& -& - & - & - \\ & -& -& -& -& - & - & - \\ IR93ap2 & 3.80E-15& -& 3.10E-15& -& - & - & - \\ & ( 2.2e-17)& -& (0) & -& - & - & - \\ IR93ap3 & -& -& -& -& - & - & - \\ & -& -& -& -& - & - & - \\ IR93ap4 & -& -& -& -& - & - & - \\ & -& -& -& -& - & - & - \\ IR93ap5 & -& -& -& -& - & - & - \\ & -& -& -& -& - & - & - \\ IR93ap6 & -& -& -& -& - & - & - \\ & -& -& -& -& - & - & - \\ \enddata \tablecomments{Fluxes are in units erg s$^{-1} \ensuremath{\textrm{cm}^{-2}} $\AA$^{-1}$. Errors are listed on the second row for each aperture. 
Errors of (0) indicate that the line was detected, but that the fluxes should not be trusted because the background was probably oversubtracted.} \end{deluxetable} \clearpage \subsection{Spectroscopic Results: Optical} \label{sec:dis} IR 6 and IR 41 (objects 1 and 6 in Figure \ref{fig:outflowsh2}) both show \ensuremath{\textrm{H}\alpha}\ in emission. IR 41 is close to the reflection nebula in the southeast portion of IRAS 05358\ and is probably the reflected star. The reflection nebula's spectrum is very similar to IR 41's spectrum at \ensuremath{\textrm{H}\alpha}\ in both width and brightness (see Figure \ref{fig:outflow10_pv}). \Figure{f13.eps}{A position-velocity diagram of IR 6 and 41 including the reflection nebula near IR 41 ($\approx 7.4\msun$). IR 6 shows a two-peaked \ensuremath{\textrm{H}\alpha}\ emission profile, but is the less massive ($\approx 4.5\msun$) of the pair. The separation between the two sources is 55\farcs3, and each pixel is 0\farcs4.}{fig:outflow10_pv}{0.25} There are three components in the \ensuremath{\textrm{H}\alpha}\ profile of IR 6: a broad absorption feature seen far ($\sim400\kms$ from the line center) on the wings and two emission peaks. The peaks are separated by 190 \kms\ and the blueshifted peak is weaker than the redshifted (Table \ref{tab:IR6}). The H$\beta$ profile shows much deeper absorption and weaker emission but with similar characteristics. The presence of the \ensuremath{\textrm{H}\alpha}\ emission makes identification of the stellar type from the \ensuremath{\textrm{H}\alpha}\ line profile uncertain. The derived extinction to IR 6 is at least $A_V=7$ from an assumed \ensuremath{\textrm{H}\alpha}/\ensuremath{\textrm{H}\beta}\ ratio of 2.87 \citep{agnsquared}. The \ensuremath{\textrm{H}\beta}\ flux was measured from zero to the peaks of the emission profile and therefore probably overestimates the \ensuremath{\textrm{H}\beta}\ flux and underestimates the extinction. 
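The extinction limits quoted here and elsewhere in this section from the Balmer decrement follow from comparing the observed \ensuremath{\textrm{H}\alpha}/\ensuremath{\textrm{H}\beta}\ ratio to the intrinsic case-B value of 2.87. A minimal sketch of that conversion is given below; the $R_V = 3.1$ reddening coefficients $k(\textrm{H}\alpha) \approx 2.53$ and $k(\textrm{H}\beta) \approx 3.61$ are assumed typical values, not taken from the text.

```python
import math

# Extinction from the Balmer decrement relative to the case-B value 2.87.
# k = A_lambda / E(B-V); the coefficients below are ASSUMED typical
# R_V = 3.1 values, not quoted from the text.
K_HA, K_HB, R_V = 2.53, 3.61, 3.1

def av_from_decrement(obs_ratio, intrinsic=2.87):
    """A_V (mag) implied by an observed Halpha/Hbeta flux ratio."""
    ebv = 2.5 / (K_HB - K_HA) * math.log10(obs_ratio / intrinsic)
    return R_V * ebv

# An observed ratio equal to the intrinsic one implies zero extinction,
# and a lower limit on the ratio maps to a lower limit on A_V:
print(av_from_decrement(2.87))  # 0.0 by construction
```

Because \ensuremath{\textrm{H}\beta}\ here is measured from zero to the emission peaks, the observed ratio is a lower limit, so the derived $A_V$ is likewise a lower limit, as stated above.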
\Table{cccccccc} {IR 6 Deblended Profiles} {& \tablenotemark{a} Blue & \tablenotemark{b}Blue & \tablenotemark{a} Red & \tablenotemark{b}Red & Absorption & Gaussian / & \tablenotemark{b}Absorption \\ &Emission & Wavelength & Emission & Wavelength & & Lorentzian FWHM & Wavelength \\ } {tab:IR6} {\ensuremath{\textrm{H}\alpha}\ & 4.4\ee{-14} & 6559.79 & 1.3\ee{-13} & 6564.23 & -2.6\ee{-14} & 1.5 / 0.19 & 6563.02 \\ \ensuremath{\textrm{H}\beta} & \tablenotemark{c}2.4\ee{-14} & 4857.68 & \tablenotemark{c}1.8\ee{-14} & 4864.28 & \tablenotemark{d} -4.6\ee{-14} & 0.17 / 16.5 & 4861.91 \\} { \linebreak \tablecomments{Measurements are made using a Voigt profile fit in the IRAF {\sc splot} task.} \tablenotetext{a}{Flux measurements are in units of erg s$^{-1}$ \ensuremath{\textrm{cm}^{-2}} \AA$^{-1}$} \tablenotetext{b}{Wavelengths are in Geocentric coordinates. Subtract 0.53\AA\ from \ensuremath{\textrm{H}\alpha}\ and 0.39\AA\ from \ensuremath{\textrm{H}\beta}\ to put in LSR coordinates.} \tablenotetext{c}{\ensuremath{\textrm{H}\beta}\ emission was measured assuming a continuum of zero and therefore represents an upper limit in the \ensuremath{\textrm{H}\beta}\ emission} \tablenotetext{d}{\ensuremath{\textrm{H}\beta}\ deblending may contain systematic errors from a guessed subtraction of the \ensuremath{\textrm{H}\beta}\ emission} } \Table{cccccccc} {Lines observed in the optical spectra} {Source & \ensuremath{\textrm{H}\alpha}\ & \ensuremath{\textrm{H}\beta}\ & [S II] 6716\AA & [S II] 6731\AA & [O I] 6300\AA & [O I] 6363\AA & [O I] 5577\AA } {tab:optical} { Outflow1 ap1 & 4.3\ee{-16} & - & 5.7\ee{-16} & 6.3\ee{-16} & 5.3\ee{-16} & 1.8\ee{-16} & - \\ & 6561.49 & - & 6715.3 & 6729.6 & 6299.7 & 6363.3 & - \\ Outflow1 ap2 & 4.5\ee{-16} & - & 4.5\ee{-16} & 4.6\ee{-16} & 3.1\ee{-16} & 1.2\ee{-16} & - \\ & 6561.22 & - & 6714.9 & 6729.3 & 6299.4 & 6363.2 & - \\ Ambient Medium - slit 1 & 6.7\ee{-17} & 5.3\ee{-18} & 1.0\ee{-17} & 7.9\ee{-18} & 3.5\ee{-16} & 1.2\ee{-16} & 4.8\ee{-17} \\ 
& 6562.87 & 4861.7 & 6716.7 & 6731.2 & 6300.3 & 6363.8 & 5578.0 \\ IR 41 nebula & 2.6\ee{-15} & - & - & - & 4.4\ee{-16} & 1.9\ee{-16} & - \\ & 6562.85 & - & - & - & 6300.3 & 6363.9 & - \\ IR 41 & 6.5\ee{-15} & - & - & - & 1.1\ee{-16} & 7\ee{-17} & - \\ & 6562.9 & - & - & - & 6300.0 & 6363.3 & - \\ IR 6 & 1.76\ee{-13} & \tablenotemark{a} 4.1\ee{-14} &-&-&-&- & \\ & - & - &-&-&-&- & \\ }{ \linebreak \tablecomments{ Wavelengths listed are in \AA\ and are geocentric. To convert to LSR velocities, subtract 24.35 \kms. The ambient medium fluxes represent averages across the slit. Fluxes are in erg s$^{-1} \ensuremath{\textrm{cm}^{-2}} $\AA$^{-1}$. } \tablenotetext{a}{\ensuremath{\textrm{H}\beta}\ measurement in IR 6 is an upper limit} } \subsection{Radio Interferometry} \label{sec:vlaresults} A point source was detected in the X, U, K and Q band VLA maps with high significance at the same location as the X-band point source reported in \citet{beuther2007}. Seven-parameter gaussians were fit to each image to measure the beam sizes and positions and flux densities. The measurements are listed in Table \ref{tab:vla}. The locations of the point source and the shape of the beams from the re-reduced X and Q band images are displayed in figure \ref{fig:mm1adiagram}. A Class II 6.7 GHz methanol maser was detected in IRAS 05358\ by \citet{Menten1991}. It was observed with the European VLBI Network (EVN) by \citet{Minier2000} and seen to consist of a linear string of maser spots that trace a probable disk in addition to maser spots scattered around a line perpendicular to the proposed disk (see Figure \ref{fig:mm1adiagram}). The VLA source is more than a VLA beam away from the VLBI CH$_3$OH maser disk identified by \citet{Minier2000}. It is to the southeast in the opposite direction of Outflow 2. Outflow 2 is at position angle -47$\ensuremath{^{\circ}}$, while the disk is at PA 25\ensuremath{^{\circ}}. 
The 8$\ensuremath{^{\circ}}$ difference from being perpendicular is well within the error associated with determining the angle of the outflow in this confused region, so the VLBI disk is a strong candidate for the source of Outflow 2. \begin{figure*}[htpb] \epsscale{0.75} \plotone{f14.eps} \caption{A diagram of the region surrounding mm1a from \citet{beuther2007}. The ellipses are centered at the measured source centers and their sizes represent the beam sizes of the Plateau de Bure interferometer at 1.2mm \citep[blue,][]{beuther2007}, Gemini MICHELLE at 7.9\um\ \citep[red,][]{Longmore2006}, the VLA at 3.6cm (green), and the VLA at 7mm (orange). The maser disk was measured with the European VLBI Network by \citet{Minier2000}, so the size and direction of the disk are very well constrained. The black circle is centered on the pointing center of the VLBI observation and represents the absolute pointing uncertainty. The arrow pointing in the direction of Outflow 2 traces clumps along the outflow back to the mm emission region. The vector is not to scale - Outflow 2 is about 45\arcsec\ long. \label{fig:mm1adiagram}} \end{figure*} The astrometric uncertainty in VLA measurements is typically $\lesssim0.1$\arcsec. Different epochs of high-resolution X-band and Q-band data confirmed that the pointing accuracy is substantially better than 0.1\arcsec\ in this case. The VLBI absolute pointing uncertainty is reported to have an upper limit of $\sim 0.03$\arcsec\ \citep{Minier2000}. The separation between the VLA Q-band center and the VLBI disk center is 0.22\arcsec, whereas the separation between the combined X and Q band pointing centers is only 0.027\arcsec, which can be viewed as a characteristic uncertainty. This is evidence for at least two distinct massive stars in a binary separated by $\sim$400 AU. While the statistical significance of the binary separation is quite high using formal errors, the systematic errors cannot be constrained nearly as well.
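The $\sim$400 AU projected separation follows directly from the 0.22\arcsec\ angular offset and the distance to IRAS 05358; the $\sim$1.8 kpc distance used below is the commonly adopted literature value, assumed here rather than restated from this section.

```python
# Projected separation of the VLA Q-band source and the VLBI maser disk.
# Assumes d ~ 1.8 kpc for IRAS 05358 (literature value, an assumption here).
DISTANCE_PC = 1800.0
sep_arcsec = 0.22                   # Q-band center vs. VLBI disk center
sep_au = sep_arcsec * DISTANCE_PC   # small angle: 1 arcsec at 1 pc = 1 AU
print(sep_au)  # ~400 AU
```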
This object is a candidate binary system but is not yet confirmed. \begin{deluxetable}{lllll} \tablecolumns{5} \tabletypesize{\footnotesize} \tablecaption{VLA measurements near IRAS 05358\ mm1a \label{tab:vla}} \footnotesize \tablehead{ \colhead{Frequency } & \colhead{Beam major /} & \colhead{RA (error)} & \colhead{Peak flux } & \colhead{Map RMS } \\ \colhead{Observed} & \colhead{minor / PA} & \colhead{Dec (error)} & \colhead{(error)} & \colhead{(mJy/beam)} \\ } \startdata 43.3 GHz & 0.022\arcsec\ / 0.029\arcsec\ / -10.4 \ensuremath{^{\circ}} & 05:39:13.065425 (0.000015) & 1.319 (0.027) & 0.179 \\ &&35:45:51.14732 (0.00031) & \\ 22.5 GHz & 1.52\arcsec\ / 1.28\arcsec\ / 232 \ensuremath{^{\circ}} & 05:39:13.05521 (0.0029) & 1.26 (0.04) & 0.091 \\ &&35:45:51.378 (0.046) &\\ 15.0 GHz & 1.58\arcsec\ / 2.00\arcsec\ / 0 \ensuremath{^{\circ}} & 05:39:13.062 (0.005) & 1.274 (0.065) & 0.124 \\ &&35:45:51.4 (0.1) &\\ 8.4 GHz & 0.107\arcsec\ / 0.122\arcsec\ / 7.9 \ensuremath{^{\circ}} & 05:39:13.064548 (0.000036) & 0.506 (0.003) & 0.015 \\ &&35:45:51.170356 (0.000613) &\\ \enddata \tablecomments{Errors reported here are fit errors. Absolute flux calibration errors are negligible for the X-band data, are about equivalent to the measurement errors for the K and U bands, and are dominant in the Q band.} \end{deluxetable} \section{Analysis} \subsection{Near-Infrared Spectroscopic Extinction Measurements} Extinction along a line of sight can be calculated using the 1-0 Q(3) / 1-0 S(1) line ratio. \begin{equation} A_\lambda = 1.09 \left[ -\textrm{ln} \frac{S_\nu(S)/S_\nu(Q)}{A_{ul}(S)\lambda_Q/A_{ul}(Q) \lambda_S} \right] \left[ \left(\frac{\lambda_S}{\lambda_Q}\right)^{-1.8} -1 \right]^{-1} \end{equation} Because the 1-0 S(1) and 1-0 Q(3) lines arise from the same upper state, their intensity ratio should be set by their Einstein A values times the relative energies of the transitions.
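Evaluated numerically, this expression gives the differential extinction at the S(1) wavelength directly from a measured flux ratio. A sketch follows; the Einstein $A$ coefficients used ($3.47\times10^{-7}$ s$^{-1}$ for 1-0 S(1) and $2.78\times10^{-7}$ s$^{-1}$ for 1-0 Q(3)) are assumed from standard \hh\ line lists, not quoted in the text, and the further conversion to $A_V$ depends on the adopted $\lambda^{-1.8}$ law.

```python
import math

# Sketch of the extinction formula in the text for the 1-0 S(1)/Q(3) pair.
# The Einstein A values (s^-1) are ASSUMED standard H2 line-list numbers,
# not given in the text; wavelengths in microns are from tab:nirmeas.
A_S, A_Q = 3.47e-7, 2.78e-7
LAM_S, LAM_Q = 2.12183, 2.42373

def a_lambda(flux_s, flux_q):
    """Differential extinction (mag) at the S(1) wavelength."""
    intrinsic = A_S * LAM_Q / (A_Q * LAM_S)      # expected S/Q flux ratio
    ratio = (flux_s / flux_q) / intrinsic
    return 1.09 * (-math.log(ratio)) / ((LAM_S / LAM_Q) ** -1.8 - 1.0)

# Outflow 1 aperture 1 fluxes from tab:nirmeas give about 1.9 mag at 2.12 um:
print(a_lambda(9.80e-15, 1.10e-14))
```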
However, as shown by \citet{luhman1998}, narrow atmospheric absorption lines in the long wavelength portion of the K band, where the Q branch lines lie, can create a significant bias. Because the lines have not been corrected for atmospheric absorption, the Q branch fluxes should actually be lower limits. Since the 1-0 S(1) transition at 2.1218 \um\ is affected very little by atmospheric absorption, and the extinction measured is proportional to the Q/S line ratio, the measured extinction should be a lower limit. The [Fe II] 1.6435 and 1.2567 \um\ lines were detected in Outflow 1, allowing for another direct measurement of the extinction. The measured ratio $FR$ = 1.26\um/1.64\um\ in Outflow 1 was 0.8, while the true value is at least 1.24 but may be as high as 1.49 \citep{Smith2006,luhman1998,Giannini2008}. The extinction measured from this ratio ranges from $A_V = 4.1$ ($FR=1.24$) to 5.8 ($FR=1.49$). The S(1)/Q(3) ratio uncorrected for telluric absorption is 0.91, which yields an extinction lower limit of $A_V = 18.7$ that is inconsistent with the measurement from [Fe II]. The \ensuremath{\textrm{H}\alpha}\ detection and \ensuremath{\textrm{H}\beta}\ upper limit give a lower limit on the extinction of $A_V = 6.6$, which is consistent with both of the other methods to within the calibration uncertainty. It is possible that the two measurements come from unresolved regions with different levels of extinction, though a change by a factor of at least 3 over an area of $\sim$100 AU far from the millimeter cores seems unlikely. A strong IR radiation field could plausibly change the line ratio from the expected Einstein A value. The question is not resolved but may be possible to address with near-IR observations of nearby bright HH flows with more careful atmospheric calibration. \subsection{Optical Spectra} \subsubsection{Stellar Type} IR 6 is suspected to be the source of the bright \hh\ finger at PA $\approx$ 15\arcdeg.
IR 6 is also a 24\um\ source and was detected by MSX (designation G173.4956+02.4218). We identify this star as a Herbig Ae/Be star. \subsubsection{Density and Extinction Measurements} The spectrum of knot N1 (the bow of Outflow 1) allowed a measurement of electron density in the shocks from the [S II] 6716/6731 line ratio. Densities were determined to be $n=$ 700 \ensuremath{\textrm{cm}^{-3}}\ in the forward lump and $n=$500 \ensuremath{\textrm{cm}^{-3}}\ in the second lump. \ensuremath{\textrm{H}\alpha}, [N II] 6583, [O I] 6300, and [O I] 6363 were also detected, but no lines were detected in the blue portion of the spectrum, presumably because of extinction. The measured velocities from [S II] are faster than the \hh\ velocity measurements, at about $v_{LSR} = -68 \pm 5$ \kms. There is also an ambient ionized medium that uniformly fills the slit with a [S II]-measured density $n_e=120\ \ensuremath{\textrm{cm}^{-3}}$. Evidently, nearby massive stars are ionizing the low-density ISM located in front of IRAS 05358. This material is moving at velocity $v_{LSR} = -7 \pm 5$ \kms\ and is extincted by $A_V=1.5$ as determined from the \ensuremath{\textrm{H}\alpha}/\ensuremath{\textrm{H}\beta}\ ratio, assuming an intrinsic value of 2.87 for gas at 10$^4$ K. \subsection{UCHII region measurement} A uniform density, ideal HII region will have an intensity curve $I = I_0 ( 1 - e^{-\tau_\nu})$ where \begin{equation} \tau = 8.235\times10^{-2} \left(\frac{T_e}{K}\right)^{-1.35} \left(\frac{\nu}{\textrm{GHz}}\right)^{-2.1} \left(\frac{\textrm{EM}}{\textrm{pc~cm}^{-6}}\right) a(\nu,T) \end{equation} following \citet{rohlfs2004} equation 9.35, where $a(\nu,T) \approx 1$ is a correction factor. By assuming an excitation temperature $T_{ex} = 8500$ K, a blackbody with a turnover to an optically thin thermal source was fit to the centimeter SED. The turnover frequency from this fit is 15.5 GHz, corresponding to an emission measure $EM=7.4 \times 10^8$ pc cm$^{-6}$.
This turnover frequency is lower than the $\sim35$ GHz reported by \citet{beuther2007}. The turnover is clearly visible in the U, K and Q data points in figure \ref{fig:HIIregionfit}. By assuming the X-band emission is optically thick, a source size can be derived. \begin{equation} \label{eqn:uchiirad} 2 r = \left[\frac{S_\nu}{2 k_B T_{ex}} \lambda^2 D^2\right]^{1/2} \end{equation} where D is the distance to the source. Assuming a spherical UCHII region and a distance of 1.8 kpc, the source has radius $r=$30 AU (for comparison, the Q band beam minor axis is $\sim$90 AU, so the region could in principle be resolved by the VLA + Pie Town configuration). The measured density is $n=(EM / r)^{1/2} = 2.2\times10^6$ \ensuremath{\textrm{cm}^{-3}}, with a corresponding emitting mass $M = n \mu m_H 4/3 \pi r^3 = 1.0\times 10^{-6}$ \msun\ using $\mu=1.4$. Using \citet{kurtz1994} equation 1, \begin{equation} N_{Lyc} = (8.04\times10^{46} s^{-1}) T_e^{-0.85} \left(\frac{r}{pc} \right)^3 n_e^2 \end{equation} the number of Lyman continuum photons per second required to maintain ionization is estimated to be $N_{Lyc} = 5.9\times10^{44}$ s$^{-1}$, a factor of $\sim4$ lower than measured by \citet{beuther2007} and closer to a B2 ZAMS star ($\sim11\msun$) than B1 using Table 2 of \citet{panagia1973}. If the star has not yet reached the main sequence, it could be significantly more massive \citep{hosokawa2009}, so our stellar mass estimate is a lower limit. The gravitational binding radius of an 11 \msun\ star is $r_g = 2 G M / c_s^2 \approx 190$ AU (the HII region is assumed to be supported entirely by thermal pressure, which provides an upper limit on the binding radius since turbulent pressure can exceed thermal pressure). The UCHII region radius of 30 AU is much smaller, indicating that, under the assumption of spherical symmetry, the HII region is bound.
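The chain of estimates above (source size from Equation \ref{eqn:uchiirad}, then density, emitting mass, and $N_{Lyc}$) can be rederived directly from the quoted inputs; a hedged sketch in cgs units (the physical constants are standard values, so small offsets from the quoted figures reflect rounding only):

```python
import math

# Rederive the UCHII numbers from the stated inputs: X-band flux,
# distance, T_ex, and emission measure. cgs units throughout.
K_B, M_H = 1.3807e-16, 1.6726e-24            # erg/K, g
AU, PC, MSUN = 1.496e13, 3.086e18, 1.989e33  # cm, cm, g
JY = 1.0e-23                                 # erg s^-1 cm^-2 Hz^-1

S_NU = 0.506e-3 * JY      # X-band peak flux, 0.506 mJy
LAM = 3.6                 # wavelength, cm
D = 1.8e3 * PC            # 1.8 kpc
T_EX = 8500.0             # K
EM = 7.4e8                # pc cm^-6

r = 0.5 * math.sqrt(S_NU * LAM**2 * D**2 / (2 * K_B * T_EX))  # cm
n = math.sqrt(EM / (r / PC))                                  # cm^-3
mass = n * 1.4 * M_H * (4.0 / 3.0) * math.pi * r**3 / MSUN    # Msun
n_lyc = 8.04e46 * T_EX**-0.85 * (r / PC)**3 * n**2            # s^-1
print(r / AU, n, mass, n_lyc)   # ~31 AU, ~2.2e6, ~1e-6, ~6e44
```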
\citet{leurini2007} noted that the CH$_3$CN line profile around this source could be fit with a binary system with separation $<1100$ AU and a total mass of 7-22 \msun. This is entirely consistent with our picture of a massive binary system with an 11 \msun\ star in a UCHII region and another high mass star with a maser disk. There are no other sources in the IRAS 05358\ region to a 5$\sigma$ limit of 0.075 mJy in the X-band, which provides the strictest upper limit. From equation \ref{eqn:uchiirad}, this corresponds to an optically thick source size of 24 AU. The maser disk has a spatial extent of around 140 AU, so it is quite unlikely that either an undetected UCHII region or the observed UCHII is associated with the maser disk. Assuming the same turnover point for undetected sources, an upper limit is set on $N_{Lyc}$ for undetected sources: \begin{equation} N_{Lyc} = (8.04\times10^{46} s^{-1}) \left(\frac{S_\nu }{ 2 k_B T_{ex}^{1.85} } \lambda_{cm}^2 D_{pc}^2\right) EM \end{equation} Our 5$\sigma$ upper limit is $N_{Lyc} = 1.38\ee{44}$ s$^{-1}$, indicating that any stars present must be of a later class than B3, or lower than about 8 \msun. For an emission measure as much as 3 times higher, the corresponding stellar mass would be less than 10 \msun. It is likely that no other massive stars have formed in Sh~2-233IR~NE. After independently determining the best-fit UCHII model to the VLA data, we included the PdBI data points from \citet{beuther2007} and fit a power-law to both data sets. If the emission measure were allowed to vary, the derived parameters were $EM=6.3\ee{8}$ and $\beta=0.7$. However, doing this visibly worsened the UCHII region fit without significantly improving the power-law fit, so the fit was repeated holding a fixed emission measure, yielding $\beta=0.8$ (plotted in Figure \ref{fig:HIIregionfit}b).
This power-law is much shallower than the $\beta=1.6$ measured by \citet{beuther2007} without access to the 44 GHz data point, and suggests that there is a significant population of large grains in source mm1a. However, we caution that the fits were performed only accounting for statistical errors, not the significant and unknown systematic errors that are likely to be present in mm interferometric data. The PdBI beams are much larger than the VLA beams, so the larger beams could be systematically shifted up by including additional emission, which would reduce $\beta$. Nonetheless, the new VLA data constrains the UCHII emission to contribute no more than 10\% of the 3.1mm flux. \begin{figure*}[htpb] \epsscale{0.75} \plotone{f15.eps} \plotone{f16.eps} \caption{(a) The HII region fit to measured X, K, U, and Q band data. Error bars represent statistical error in the flux measurement. The Q band error is dominated by flux calibration uncertainty (see Table \ref{tab:vlatimes}). The measured turnover is at 9.5 GHz. (b) A fit to both the VLA data presented in this paper and the (sub)mm points from \citet{beuther2007}. 
The best fit spectral index for the dust emission is $\alpha=2.8$ ($\beta=0.8$), significantly lower than the $\alpha=3.6$ measured by \citet{beuther2007} without the 0.7 mm data point.} \label{fig:HIIregionfit} \end{figure*} \subsection{Mass, Energy, and Momentum estimates from CO} \subsubsection{Equations} The column density for CO J=3-2 is estimated using the equation \begin{equation} \label{eqn:column} N_{\hh} = \frac{\hh}{\textrm{CO}}\frac{8\pi\nu^3k_B}{3c^3hB_eA_{ul}}(1-e^{-h\nu/k_BT_{ex}})^{-1} \frac{1}{\eta_{mb}} \int T_A^*(v) dv \end{equation} where $A_{ul}=A_{32}=2.5\times10^{-6}\textrm{s}^{-1}$ and $A_{21}=6.9\ee{-7}$ s$^{-1}$ \citep{turner1977}, the rotational constant $B_e = 57.64$, 55.10, and 54.89 GHz for \ensuremath{^{12}\textrm{CO}}, \ensuremath{^{13}\textrm{CO}}, and \ensuremath{\textrm{C}^{18}\textrm{O}}\ respectively, $\eta_{mb} = 0.68$, and $T_{ex}$ is assumed to be 20 K. The partition function is approximated as \begin{equation} Z=\sum_{J=1}^\infty (2J+1) \exp \left(\frac{-J(J+1)hB_e}{k_B T_{ex}}\right) \approx \int_0^\infty (2J+1)\exp \left(\frac{-J(J+1)hB_e}{k_B T_{ex}}\right) dJ \end{equation} which is valid when $T_{ex} \gg hB_e/k_B \sim 2.8$ K. Equation \ref{eqn:column} becomes \begin{equation} N_{\hh} = ( 3.27\times10^{18} \ensuremath{\textrm{cm}^{-2}}) \frac{1}{\eta_{mb}} \int T_A^*(v) dv \end{equation} where the integrand is in units of K \kms. The mass is then \begin{equation} M = \mu\ m_{\hh}\ A\ N_{\hh} = 1.42\times10^{-5} A \frac{1}{\eta_{mb}} \int T_A^*(v) dv \end{equation} where A is the area in cm$^2$, $\mu=1.4$ is a constant to account for the presence of helium, and again velocity is in \kms. \subsubsection{CO J = 2-1 Isotopologue Comparison} \label{sec:co21} \citet{Thomas2008} observed C$^{17}$O in the J=2-1 and 3-2 transitions each with a single pointing using the JCMT centered at J(2000) = 05:39:10.8 +35:45:16 and measured a column density $N_{\hh}=4.03\times10^{22}$ cm$^{-2}$.
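The numerical prefactor $3.27\times10^{18}$ in the simplified form of Equation \ref{eqn:column} can be checked by direct evaluation at $T_{ex}=20$ K; the sketch below assumes an \hh/CO abundance of $10^4$, a standard value that is not stated explicitly above:

```python
import math

# Evaluate the CO J=3-2 column-density prefactor in cgs. The H2/CO
# abundance of 1e4 is an assumed standard value; A_ul, B_e, and T_ex
# are the numbers given in the text.
H, K_B, C = 6.626e-27, 1.3807e-16, 2.9979e10
NU = 345.796e9            # CO J=3-2 rest frequency, Hz
B_E = 57.64e9             # 12CO rotational constant, Hz
A_UL = 2.5e-6             # A_32, s^-1
T_EX = 20.0               # K
ABUND = 1.0e4             # assumed H2/CO abundance

stim = 1.0 / (1.0 - math.exp(-H * NU / (K_B * T_EX)))
coef = ABUND * 8 * math.pi * NU**3 * K_B / (3 * C**3 * H * B_E * A_UL) * stim
coef_kms = coef * 1.0e5   # integrand in K km/s rather than K cm/s
print(coef_kms)           # ~3.3e18, matching the quoted 3.27e18
```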
The peak column density is $1.7\ee{22}$\ensuremath{\textrm{cm}^{-2}}\ in \ensuremath{^{13}\textrm{CO}}\ and $2.2\ee{22}$ in \ensuremath{\textrm{C}^{18}\textrm{O}}\ at J(2000) = 5:39:10.2 +35:45:26, which is reasonably consistent with the C$^{17}$O measurement considering abundance uncertainties. The peaks of the integrated spectra for \ensuremath{\textrm{C}^{18}\textrm{O}}\ and \ensuremath{^{13}\textrm{CO}}\ are coincident, but the \ensuremath{^{12}\textrm{CO}}\ integrated peak is at J(2000) = 5:39:12.6 +35:45:46 (Figure \ref{fig:scuba_co21}, discussed more in section \ref{sec:discussion-outflows}). Measurements of the column density, mass, momentum, and energy are performed as in Equation \ref{eqn:column}. Assuming a \ensuremath{^{12}\textrm{CO}}/\ensuremath{^{13}\textrm{CO}}\ ratio of 60 \citep{lucas1998} and optically thin \ensuremath{^{13}\textrm{CO}}, the mean column density across the region is $N_{\hh} = 4.5\times10^{21}$ \ensuremath{\textrm{cm}^{-2}}. The resulting total mass of the central $\sim3$\arcmin\ is about 320 \msun, which is substantially smaller than the 600 \msun\ measured by \citet{beuther2002} and \citet{Zinchenko1997}, but it is nearly consistent with 870\um\ and NH$_3$ estimates of 450 and 400 \msun\ from \citet{Mao2004} and is within the systematic uncertainties of these measurements. Assuming \ensuremath{\textrm{C}^{18}\textrm{O}}\ is optically thin and the \ensuremath{\textrm{C}^{18}\textrm{O}}/\ensuremath{^{13}\textrm{CO}}\ ratio is 10, the column density is 5.2\ee{21} \ensuremath{\textrm{cm}^{-2}}\ and the mass is 360 \msun, which is consistent with the \ensuremath{^{13}\textrm{CO}}\ measurements, indicating that optical depth effects are probably not responsible for the discrepancy with the dust mass estimate. \subsubsection{CO Mass and Energy Measurements for Specific Outflows} Table \ref{tab:comeas} lists measurements of mass and momentum in apertures shown in figure \ref{fig:cofig}. 
Where red and blue masses are listed, there is an outflow in the red and blue along the line of sight. Where only one is listed, an excess to one side of the cloud rest velocity was detected and assumed to be accelerated gas from a protostellar outflow. Blue velocities are integrated from -33 to -21 \kms. Red velocities are integrated from -12 \kms\ to 1 \kms. All masses are computed assuming CO is optically thin in the outflow, which leads to a lower bound on the mass; \ensuremath{^{13}\textrm{CO}}\ 2-1 was measured to have an optical depth of 0.1 in 7 very high velocity outflows in \citet{choi1993}, so if a relative abundance \ensuremath{^{12}\textrm{CO}}/\ensuremath{^{13}\textrm{CO}} = 60 is assumed \citep{lucas1998}, masses increase by a factor of 6. It is not possible to completely distinguish outflowing matter from the ambient medium. While the outflowing matter is generally at higher velocities, the outflow and ambient line profiles are blended. A uniform selection of high velocities was applied across the region, but this may include some matter from the cloud, biasing the mass measurements upward. Outflows in the plane of the sky and low-velocity components of outflows will be blended with the cloud profile, which would lead to underestimates of the outflowing mass. The momentum measurements, however, should be more robust because they are weighted by velocity, and higher velocity material is more certainly outflowing. The momentum measurements are referenced to the central velocity of Sh~2-233IR~NE, $v_{LSR}=-16.0$ \kms. \section{Discussion} \subsection{Outflow Mass and Momentum} \label{sec:discussion-outflows} \citet{beuther2002} reported a total outflowing mass of 20 \msun\ in Sh~2-233IR~NE.
We measure a significantly lower outflow mass of 2 \msun\ under the assumption that the gas is optically thin, but this assumption is not valid: the weak \ensuremath{^{13}\textrm{CO}}\ 2-1 outflow detection sets a lower limit of $\sim 4$ \msun\ on the outflowing mass (a lower limit because not all of the outflowing material is detected). \citet{choi1993} measure an optical depth of \ensuremath{^{13}\textrm{CO}}\ 2-1 $\tau \approx 0.1$ in 7 very high velocity outflows. Our \ensuremath{^{13}\textrm{CO}}\ data suggest that the optical depth is somewhat lower, $\tau\approx0.07$. The abundance \ensuremath{^{12}\textrm{CO}}/\ensuremath{^{13}\textrm{CO}} = 60 is used \citep{lucas1998} to derive a total outflowing mass estimate $M\approx25$ \msun. The total outflowing mass is therefore $\sim 4\%$ of the total cloud mass, though most of the outflowing material is coming from Sh~2-233IR~NE, so as much as 13\% of the material in Sh~2-233IR~NE\ may be outflowing. The most prominent outflow in IRAS 05358, Outflow 1, is primarily along the plane of the sky, so the high velocity CO is likely associated with the other outflows that have significant components along the line of sight. As pointed out in \citet{beuther2002}, the integrated and peak CO are aligned with the main mm core. High-velocity \hh\ near the mm cores and the blueshifted outflows 2 and 4 all suggest that there are many distinct outflows that together are responsible for the high velocity CO gas. The offset between the integrated \ensuremath{^{13}\textrm{CO}}\ peak and \ensuremath{^{12}\textrm{CO}}\ peak in the J=2-1 integrated maps, which corresponds with an offset in the peak of the integrated CO 3-2 map and the peak temperature observed in CO 3-2, suggests that the gas mass is largely associated with Sh~2-233IR~SW, but the outflowing gas is primarily associated with Sh~2-233IR~NE.
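The opacity corrections invoked above amount to multiplying an optically thin mass estimate by the standard factor $\tau/(1-e^{-\tau})$; a minimal sketch:

```python
import math

# Mass correction factor for a line of optical depth tau relative to
# the optically thin estimate: tau / (1 - exp(-tau)).
def opacity_correction(tau):
    return tau / (1.0 - math.exp(-tau))

# 13CO tau = 0.1 with 12CO/13CO = 60 gives a 12CO outflow optical depth
# of ~6 and a correction factor of ~6, the "factor of 6" used in the
# text; the tau ~ 0.07 case gives a somewhat smaller correction.
print(opacity_correction(60 * 0.1))    # ~6.0
print(opacity_correction(60 * 0.07))   # ~4.3
```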
The integrated and maximum brightness temperatures in \ensuremath{^{13}\textrm{CO}}\ and \ensuremath{\textrm{C}^{18}\textrm{O}}\ are also centered near Sh~2-233IR~SW, which rules out optical depth as the cause of this offset. CO may be depleted in the dense mm cores, which would help account for the lower mass estimate from CO isotopologues relative to dust mass and NH$_3$. Alternately, the gas temperature in Sh~2-233IR~SW\ may be significantly higher than in Sh~2-233IR~NE\ except in the outflows, which are probably warm. In this case, the outflowing \ensuremath{^{12}\textrm{CO}}\ enhances the integrated intensity because of its high temperature and reduced effective optical depth, but it does not set the peak brightness because of the low filling-factor of the high-temperature gas. Because the outflows are seen in \hh, which requires shock velocities $\sim30$ \kms\ to be excited \citep{bally2007}, and because the association between the high-velocity CO and the plane-of-the-sky \hh\ is unclear, a velocity of 30 \kms\ is used when estimating the dynamical age. Assuming the outflow is about 0.5 pc long in one direction (e.g. Outflow 1), the dynamical age is 1.6\ee{4} years. Outflow 4, which is around 1 pc long, is also seen at a velocity of -70 \kms\ LSR, or about -50 \kms\ with respect to the cloud, and therefore has a dynamical age of 2\ee{4} years, which is consistent. \subsection{Energy Injection / Ejection} Using an assumed outflow lifetime of $5\times10^3$ years for $v=100\ \kms$ as a lower limit (because the full extent of the flows is not necessarily observed) and $1\times10^5$ years as an upper limit (for the CO velocities $\sim10\ \kms$ and the longest $\sim1$ pc flows), mechanical luminosities of the outflows $L=E/t$ are derived. The summed mechanical luminosity of the outflows is compared to the turbulent decay luminosity within a 12\arcsec, 1\arcmin, and 5\arcmin\ radius centered on Sh~2-233IR~NE\ in Table \ref{tab:turb}.
\setlength\tabcolsep{3pt} \Table{ccccccc}{Comparison of turbulent decay and outflow injection} {Radius (pc) & $t_{turb}$\tablenotemark{a} (yr) & $L_{turb} (\lsun)$ & $L_{outflows}$\tablenotemark{b} $(\lsun) $ & Binding Energy (ergs) \tablenotemark{c} & Outflow Energy (ergs) & Turbulent Energy (ergs) \tablenotemark{d}} {tab:turb} { 0.10 & 2\ee{4} & 20 & 0.03-0.6 & 3.4 \ee{46} & 3.5 \ee{44} & 5.0\ee{46}\\ 0.52 & 1\ee{5} & 12 & 0.6 - 9.4 & 5.9 \ee{46} & 5.9 \ee{45} & 1.5\ee{47}\\ 2.62 & 5\ee{5} & 2.3 & 1-22 & 1.2 \ee{46} & 1.4 \ee{46} & 1.5\ee{47}\\ } { \tablenotetext{a}{Masses are assumed to be 600 \msun\ for the 1\arcmin\ and 5\arcmin\ apertures, and 200\msun\ for the 12\arcsec\ aperture.} \tablenotetext{b}{Outflow luminosities are given as a range with a lower limit $L=E_{out} / 10^5 \textrm{yr}$ and upper limit $L=E_{out}/ 5\times10^3 \textrm{yr}$, where $E_{out}$ is from Table \ref{tab:comeas} multiplied by 6 to correct for outflow opacity. } \tablenotetext{c}{Binding energy is the order-of-magnitude estimate GM$^2$/R} \tablenotetext{d}{Turbulent energy is computed using the measured 5 \kms\ line width as the turbulent velocity.} } The rate of turbulent decay can be estimated from the crossing-time of the region, $L / v$, where $L$ is the length scale and $v$ is the typical turbulent velocity. On the largest ($\sim$ few pc) scales, the mechanical luminosity from high-velocity outflowing material is approximately capable of balancing turbulent decay and upholding the cloud against collapse. However, at the size scales of the Sh~2-233IR~NE\ clump ($\sim 0.1$ pc), turbulent decay occurs on more than an order of magnitude faster timescales than outflow energy injection. On the smallest scales, outflow energy can be lost from the cluster through collimated outflows, though wide-angle flows and wrapped up magnetic fields will not propagate outside of the core region.
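The 0.52 pc row of the table can be reproduced with the crossing-time prescription just described; a sketch in cgs, using the 5 \kms\ line width and the 600 \msun\ mass from the table notes:

```python
# Reproduce the 0.52 pc row of the turbulence table: crossing time
# t = R/v, turbulent energy (1/2) M v^2, decay luminosity E/t, and the
# order-of-magnitude binding energy G M^2 / R. cgs units throughout.
G, MSUN, PC, LSUN, YR = 6.674e-8, 1.989e33, 3.086e18, 3.846e33, 3.156e7

R = 0.52 * PC             # cm
M = 600.0 * MSUN          # g (mass assumed in the table notes)
V = 5.0e5                 # cm/s (5 km/s line width)

t_turb = R / V                    # ~1e5 yr, in seconds
e_turb = 0.5 * M * V**2           # ~1.5e47 erg
l_turb = e_turb / t_turb / LSUN   # ~12 Lsun
e_bind = G * M**2 / R             # ~5.9e46 erg
print(t_turb / YR, e_turb, l_turb, e_bind)
```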
Once collimated flows impact the local interstellar medium in a bow shock, their energy and momentum are distributed more isotropically and again contribute to turbulence. The imbalance on a small size scale is consistent with the observed infall signature (Figure \ref{fig:co21_all3}) in the inner 12\arcsec\ around Sh~2-233IR~NE\ and the lack of a similar profile elsewhere. \subsection{Comparison to other clumps} The classification scheme laid out in \citet{klein2005} is used to identify Sh~2-233IR~NE\ as a Protocluster and Sh~2-233IR~SW\ as a Young Cluster. \citet{maury2009} performed a similar analysis of the Early Protocluster NGC 2264-C. They also found that the outflow mechanical luminosity could provide the majority of the turbulent decay luminosity ($L_{turb}\sim1.2\ \lsun$) within the protocluster, which has a radius of 0.7 pc and a mass of 2300 \msun. \citet{williams2003} performed an outflow study of the OMC 2/3 region with radius 1.2 pc and mass 1100 \msun, which is also an Early Protocluster, and concluded that $L_{turb} \sim L_{flow} \sim 1.3 \lsun$. While all three regions have nearly the same turbulent decay luminosities and outflow mechanical luminosities, Sh~2-233IR~NE\ in IRAS 05358\ is significantly more compact and lower mass than the Early Protoclusters, and is the only one of the three that contains signatures of massive star formation. \subsection{Surrounding Regions} \label{sec:surroundings} About 8\arcmin\ to the southeast of IRAS 05358\ is another embedded star forming region, G173.58+2.45. Interferometric and stellar population studies have been performed by \citet{shepherd1996} and \citet{shepherd2002}. The bipolar outflow detected in their interferometric maps is also cleanly resolved in our figure \ref{fig:cofig}. In our wide-field \hh\ maps, there is a complex of outflows similar to that of IRAS 05358, but fainter. The large HII region Sharpless 231 to the northeast can be seen in the \ensuremath{\textrm{H}\alpha}\ image (figure \ref{fig:overview_ha}).
The expanding HII region is pushing against the molecular ridge that includes IRAS 05358\ and accelerating the CO gas in the blue direction (e.g. the northern blueshifted clumps in figures \ref{fig:cofig} and \ref{fig:HA_with_CO}). It can be seen from the IRAC 8\um\ data that UV radiation from the HII region reaches the IRAS 05358\ clusters. The expanding HII region's pressure on the molecular ridge may be responsible for triggering the collapse of IRAS 05358\ and G173. The size gradient from S232 ($\sim 30\arcmin$\ across) to S231 ($\sim 10\arcmin$) to S233 ($\sim 2-3\arcmin$) is suggestive of an age gradient assuming uniform HII region expansion velocities and a common distance. Investigation of this hypothesis will require detailed stellar population studies in the HII regions with proper regard for eliminating foreground and background sources. \subsection{Massive Star Binary} Our identification of a probable massive star binary with an associated outflow contributes to a very small sample of known maser disks with \hh\ emission perpendicular to the disk. \citet{debuizer2003} observed 28 methanol maser sources with linear distributions of maser spots in the \hh\ 2.12 \um\ line, and he identified only 2 sources with \hh\ emission perpendicular to the maser lines. None of the outflows identified in his survey were as collimated as Outflow 2, so the methanol disk / outflow combination presented here may be the most convincing association of a massive protostellar disk with a collimated outflow. The association of a massive star with a UCHII region and a methanol maser disk and the very small size of the UCHII region both suggest that the massive stellar system is very young. \citet{walsh1998} suggested that the development of a UCHII region leads to the destruction of maser emission regions. Their conclusion is consistent with our interpretation of mm1a as a binary system.
\section{Summary \& Conclusion} We have presented a multiwavelength study of the IRAS 05358\ star forming region. IRAS 05358\ contains an embedded cluster of massive stars and is surrounded by outflows. We linked the outflows to probable sources and determined that at least one outflow is probably associated with a massive ($\sim 10 \msun$) star. Added kinematic information and a wide field view of the infrared outflows have been used to develop a more complete picture of the region. \begin{itemize} \item Sh~2-233IR~NE\ is a Protocluster and Sh~2-233IR~SW\ is a Young Cluster \item Energy injection on the scales of IRAS 05358\ can maintain turbulence, but on the small scales of the Sh~2-233IR~NE\ protocluster it is inadequate by $\sim 2$ orders of magnitude. Sh~2-233IR~NE\ is collapsing. \item There are 11 candidate outflows, 7 of which have candidate counterflows, in the IRAS 05358\ complex \item There is a probable massive binary in mm1a, with one member of mass 12 \msun\ and the other the source of Outflow 2 \item There are at least two moderate-mass ($\sim$5\msun) young stars in IRAS 05358\ \end{itemize} We have identified additional middle- and high-mass young stars with outflows, and presented a case for a high-mass binary system within the millimeter core mm1a. \section{Acknowledgements} We would like to thank Vincent Minier for providing us with the positions of the VLBI maser spots and Steve Myers and George Moellenbrock for their assistance with VLA data reduction. We would also like to thank Cara Battersby, Devin Silvia, Mike Shull, and Jeremy Darling for helpful comments on early drafts. This work made use of SAOIMAGE DS9 (\url{http://hea-www.harvard.edu/RD/ds9/}), IRAF (\url{http://iraf.net/}), scipy (\url{http://www.scipy.org}), and APLpy (\url{http://aplpy.sourceforge.net/}).
\section*{I. Introduction} The relativistic mean-field theory (RMF) \cite{walecka,brian}, which is one of the main applications of quantum hadrodynamics (QHD), is successful in nuclear structure studies \cite{sugahara,lalazissis,sharma}. The nonlinear (NL) Walecka model (NLWM), based on the RMF approach, has been extensively used to study the properties of nuclear and neutron matter and $\beta$-stable nuclei, and has been extended to the drip-line regions \cite{boguta,bodmer,mueller95,furnstahl,mueller96,liu02,liu05}. In recent years some authors \cite{liu02,menezes,gaitanos,baran,liu05} have stressed the importance of including the scalar isovector virtual $\delta(a_{0}(980))$ field in hadronic effective field theories when asymmetric nuclear matter is studied. The inclusion of the $\delta$ meson leads, in the isovector channel, to the characteristic structure of relativistic interactions, where a balance between an attractive (scalar) and a repulsive (vector) potential exists. The $\delta$ meson plays a role in the isospin channel and mainly affects the behavior of the system at high density regions, and so is of great interest in nuclear astrophysics. The properties of nuclear matter at high density play a crucial role in building models of neutron stars (NS). Neutron stars are objects of highly compressed matter. The structure of a compact star is characterized by its mass and radius, which are obtained from an appropriate equation of state (EOS) at high densities. The EOS can be derived either from relativistic or potential models. In order to describe the medium dependence of nuclear interactions, a density dependent relativistic hadron (DDRH) field theory has been recently proposed \cite{liu0702,fuchs,jons,type}. Recently the authors \cite{liu0702} used the density dependent coupling models with the $\delta$ meson being included to study the neutron stars.
They found that the introduction of the $\delta$ meson in the constant coupling model leads to heavier neutron stars in a nucleon-lepton picture. The neutron star masses in the density dependent models can be reduced when the $\delta$ meson is taken into account. The in-medium modification to the masses of $\sigma$, $\omega$, and $\rho$ mesons has been studied in experiments and theoretical approaches for a decade \cite{brown,hatsuda,sarkar,ozawa,trnka,krusche,nasseripour}. Recently the authors of Ref. \cite{abhijit} investigated the effect of in-medium meson masses on the properties of the nuclear matter in the Walecka model and the effective masses of $\sigma$ and $\omega$ mesons in the nuclear medium were calculated by taking into account the effects of the vacuum fluctuation (VF). In this work we want to see the VF effects on asymmetric matter and neutron stars. We also want to clarify the density dependence of in-medium nucleon and meson masses. The VF effects are naturally introduced by considering loop corrections to the self-energies of in-medium nucleons and mesons. The effective masses of nucleons and mesons ($\sigma$, $\omega$, $\rho$, and $\delta$) in the nuclear medium will be calculated in the VF-RMF model. The VF effects on asymmetric matter and neutron stars will be studied. This paper is organized as follows. In Sec. II, we derive the in-medium effective masses of nucleons and mesons and the EOS for nuclear matter in VF-RMF model. In Sec. III, we compare our results with those of the NL-RMF model. In Sec. IV, a brief summary is presented. \section*{II. 
Hadron effective masses and EOS of nuclear matter} The relativistic Lagrangian density of the interacting many-particle system consisting of nucleons, isoscalar (scalar $\sigma$, vector $\omega$), and isovector (scalar $\delta$, vector $\rho$) mesons used in this work is \begin{widetext} \begin{eqnarray}\label{eq:1} {\cal L}&=&\bar{\psi}\Bigl[i\gamma^{\mu}\partial_{\mu}-g_{\omega}\gamma^{\mu}\omega_{\mu} -g_\rho\gamma^{\mu}\vec{\tau}\cdot\vec{b}_{\mu}-(M-g_{\sigma}\phi -g_{\delta}\vec{\tau}\cdot\vec{\delta})\Bigr]\psi \nonumber\\ &&+\frac{1}{2}(\partial_{\mu}\phi\partial^{\mu}\phi-m_{\sigma}^{2}\phi^2)-U(\phi) +\frac{1}{2}m_{\omega}^{2}\omega_{\mu}\omega^{\mu} \nonumber\\ &&+\frac{1}{2}m_{\rho}^{2}\vec{b}_{\mu}\cdot\vec{b}^{\mu} +\frac{1}{2}(\partial_{\mu}\vec{\delta}\partial^{\mu}\vec{\delta} -m_{\delta}^{2}\vec{\delta}^2) \nonumber\\ &&-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-\frac{1}{4}\vec{G}_{\mu\nu}\vec{G}^{\mu\nu} + \delta{\cal L}, \end{eqnarray} \end{widetext} where $\phi$, $\omega_{\mu}$, $\vec{b}_{\mu}$, and $\vec{\delta}$ represent $\sigma$, $\omega$, $\rho$, and $\delta$ meson fields, respectively, $U(\phi)=\frac{1}{3}a\phi^{3}+\frac{1}{4}b\phi^{4}$ is the nonlinear potential of the $\sigma$ meson, $F_{\mu\nu}\equiv\partial_{\mu}\omega_{\nu}-\partial_{\nu}\omega_{\mu}$, and $\vec{G}_{\mu\nu}\equiv\partial_{\mu}\vec{b}_{\nu}-\partial_{\nu}\vec{b}_{\mu}$. In order to remove the divergence in the loop calculation, the counterterm for the Lagrangian density, $\delta{\cal L}$, included in Eq. 
(1) reads \begin{eqnarray}\label{eq:2} \delta{\cal L}&=&\alpha_{1}\phi+\frac{1}{2!}\alpha_{2}\phi^{2}+\frac{1}{3!}\alpha_{3}\phi^{3} +\frac{1}{4!}\alpha_{4}\phi^{4}+\frac{\zeta_{\sigma}}{2}(\partial\phi)^2\nonumber\\ &&+\beta_{1}\vec{\delta}+\frac{1}{2!}\beta_{2}\vec{\delta}^{2}+\frac{1}{3!}\beta_{3}\vec{\delta}^{3} +\frac{1}{4!}\beta_{4}\vec{\delta}^{4}+\frac{\zeta_{\delta}}{2}(\partial\vec{\delta})^2\nonumber\\ &&+\frac{\zeta_{\omega}}{2}(\partial\omega_{\mu})^2+\frac{\zeta_{\rho}}{2}(\partial \vec{b}_{\mu})^2, \end{eqnarray} where the parameters $\alpha_{i}$, $\beta_{i}$, and $\zeta_{j}$ ($i$=1, 2, 3, 4; $j$=$\sigma$, $\omega$, $\rho$, $\delta$) are fixed by the renormalization methods suggested in Refs. \cite{brian, haruki}. The field equations in the RMF approximation are \begin{eqnarray}\label{eq:3} [i\gamma^{\mu}\partial_{\mu}-(M- g_{\sigma}\phi -g_\delta{\tau_3}\delta_3)-g_\omega\gamma^{0}{\omega_0}-g_\rho\gamma^{0}{\tau_3}{b_0}]\psi&=&0,\label{eq:6-1}\nonumber\\ m_{\sigma}^{2}\phi+a\phi^2+b\phi^3&=&g_{\sigma}\rho^{s},\label{eq:6-2}\nonumber\\ m_{\omega}^{2}\omega_{0}&=&g_{\omega}\rho,\label{eq:6-3}\nonumber\\ m_{\rho}^{2} b_{0}&=&g_{\rho}\rho_{3},\label{eq:6-4}\nonumber\\ m_{\delta}^{2} \delta_{3}&=&g_{\delta}\rho_{3}^{s}, \end{eqnarray} where $\rho^{(s)}=\rho_{p}^{(s)}+\rho_{n}^{(s)}$ and $\rho_{3}^{(s)}=\rho_{p}^{(s)}-\rho_{n}^{(s)}$, where $\rho_{i}$ and $\rho^{s}_{i}$ (the index (subscript or superscript) $i$ denotes proton or neutron throughout this paper) are the nucleon and scalar densities, respectively. 
The nucleon and the scalar densities are given by, respectively, \begin{eqnarray}\label{eq:4} \rho_{i}&=&\frac{k_{F_{i}}^{3}}{3\pi^2}, \end{eqnarray} and \begin{eqnarray}\label{eq:5} \rho_{i}^{s}&=& - i\int \frac{{\rm d} ^4k}{(2\pi)^4}{\rm Tr}G^{i}(k), \end{eqnarray} where $k_{F_{i}}$ is the Fermi momentum of the nucleon and $G^{i}(k)$ is the nucleon propagator in the VF-RMF model: \begin{eqnarray}\label{eq:6} G^{i}(k)&=&(\gamma_{\mu}k^{\mu}+M_{i}^{\star})\Bigl[\frac{1}{k^2-M_{i}^{\star2}+i\eta} +\frac{i\pi}{E_{F_{i}}^{\star}}\delta(k^0-E_{F_{i}})\theta(k_{F_{i}}-|\vec{k}|)\Bigr]\nonumber\\ &\equiv&G^{i}_{F}(k)+G^{i}_{D}(k), \end{eqnarray} where $E_{F_{i}}^{\star}=\sqrt{k_{F_{i}}^2+M_{i}^{\star2}}$, $M_{i}^{\star}$ are the nucleon effective masses, and $\eta$ is an infinitesimal. Here $G^{i}_{F}(k)$ describes the free propagation of positive and negative energy quasi-nucleons. $G^{i}_{D}(k)$ describes quasi-nucleon `holes' inside the Fermi sea and corrects the propagation of positive energy quasi-nucleons by the Pauli exclusion principle. \begin{figure}[hbtp] \begin{center} \includegraphics[scale=0.8]{Fig01.eps} \caption{Loop-diagram corrections to the self-energy of nucleons (a) and mesons (b) in nuclear medium, where $N$ denotes nucleon and $k$ is the four momentum of the meson.}\label{fig01} \end{center} \end{figure} The nucleon effective mass without the $\delta$ field in the RMF approach is $M^\star=M-g_\sigma \phi$. When the VF effects are considered, the loop-diagram corrections to the self-energy of nucleons and mesons shown in Fig. 1 are naturally introduced. The nucleon effective mass without the $\delta$ field in the RMF approach including the vacuum fluctuations can be calculated from Fig. 1(a).
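The kinematic quantities entering the densities and the propagator above are simple to evaluate. A short sketch in natural units (fm$^{-1}$), using the Table I saturation values $\rho_{0}=0.16$ fm$^{-3}$ and $M^{\star}/M=0.75$ for symmetric matter:

```python
import math

# Nucleon density and Fermi energy from the Fermi momentum, Eq. (4):
#   rho_i = k_F^3 / (3 pi^2),  E_F* = sqrt(k_F^2 + M*^2).

def fermi_momentum(rho_i):
    """Fermi momentum (fm^-1) of one nucleon species with density rho_i (fm^-3)."""
    return (3.0 * math.pi**2 * rho_i) ** (1.0 / 3.0)

def fermi_energy(k_f, m_star):
    """Quasi-nucleon Fermi energy E_F* = sqrt(k_F^2 + M*^2)."""
    return math.sqrt(k_f**2 + m_star**2)

# Symmetric matter at saturation, rho_0 = 0.16 fm^-3: each species carries 0.08 fm^-3.
k_F = fermi_momentum(0.08)
M_star = 0.75 * 939.0 / 197.327   # M*/M = 0.75 (Table I), converted to fm^-1
E_F = fermi_energy(k_F, M_star)
```

This gives the familiar $k_{F}\approx 1.33$ fm$^{-1}$ per species at saturation.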
Thus, we have \begin{eqnarray}\label{eq:7} M^{\star}&=&M + \frac{ig_{\sigma} ^{2}}{m_{\sigma}^{\star 2}}\int \frac{{\rm d} ^4k}{(2\pi)^4}[{\rm Tr}G^{p}(k)+{\rm Tr}G^{n}(k)] +\frac{a}{m_{\sigma}^{\star2}g_{\sigma}}(M^{\star}-M)^2 -\frac{b}{m_{\sigma}^{\star2}g_{\sigma}^{2}}(M^{\star}-M)^3\nonumber\\ &=&M-\frac{g_{\sigma}^{2}}{2\pi^2m_{\sigma}^{\star2}} \Bigl[k_{F_{p}}M^{\star}E_{F_{p}}^{\star}-M^{\star3}{\rm ln}(\frac{k_{F_{p}}+E_{F_{p}}^{\star}}{M^{\star}})\Bigr] \nonumber\\ &&-\frac{g_{\sigma}^{2}}{2\pi^2m_{\sigma}^{\star2}} \Bigl[k_{F_{n}}M^{\star}E_{F_{n}}^{\star}-M^{\star3}{\rm ln}(\frac{k_{F_{n}}+E_{F_{n}}^{\star}}{M^{\star}})\Bigr] \nonumber\\ &&+\frac{g_{\sigma}^{2}}{\pi^2m_{\sigma}^{\star2}} \Bigl[M^{\star3}\ln(\frac{M^{\star}}{M})-M^2(M^{\star}-M)-\frac{5}{2}M(M^{\star}-M)^2-\frac{11}{6}(M^{\star}-M)^3\Bigr]\nonumber\\ &&+\frac{a}{m_{\sigma}^{\star2}g_{\sigma}}(M^{\star}-M)^2 -\frac{b}{m_{\sigma}^{\star2}g_{\sigma}^{2}}(M^{\star}-M)^3, \end{eqnarray} where the second term on the right-hand side of the first line in Eq. (7) is given by Fig. 1(a), and the third and fourth terms are the contributions from the nonlinear potential of the $\sigma$ meson. As is well known, the $\delta$ meson leads to a definite splitting of the proton and neutron effective masses; the nucleon effective masses with the $\delta$ meson in the RMF approach are given by, respectively, \begin{eqnarray}\label{eq:8} M_p^{\star}&=&M - g_{\sigma} \phi -g_{\delta} \delta_3, \end{eqnarray} and \begin{eqnarray}\label{eq:9} M_n^{\star}&=&M- g_{\sigma} \phi + g_{\delta} \delta_3. \end{eqnarray} The nucleon effective masses with the $\delta$ meson in the RMF approach including the vacuum fluctuations can be calculated from Fig.
1(a), \begin{eqnarray}\label{eq:10} M_{p}^{\star}&=&M - g_{\sigma} \phi-g_{\delta} \delta_3\nonumber\\ &=&M + \frac{ig_{\sigma} ^{2}}{m_{\sigma}^{\star 2}}\int \frac{{\rm d} ^4k}{(2\pi)^4}[{\rm Tr}G^{p}(k)+{\rm Tr}G^{n}(k)]\nonumber\\ && + \frac{ig_{\delta} ^{2}}{m_{\delta}^{\star 2}}\int \frac{{\rm d} ^4k}{(2\pi)^4}[{\rm Tr}G^{p}(k)-{\rm Tr}G^{n}(k)]\nonumber\\ &&-a\frac{2g_{\sigma}^{2}}{m_{\sigma}^{\star2}}\frac{(M_{p}^{\star}+M_{n}^{\star}-2M)^2}{(-2g_{\sigma})^3} -b\frac{2g_{\sigma}^{2}}{m_{\sigma}^{\star2}}\frac{(M_{p}^{\star}+M_{n}^{\star}-2M)^3}{(-2g_{\sigma})^4}, \end{eqnarray} and \begin{eqnarray}\label{eq:11} M_{n}^{\star}&=&M - g_{\sigma} \phi+ g_{\delta} \delta_3\nonumber\\ &=&M + \frac{ig_{\sigma} ^{2}}{m_{\sigma}^{\star 2}}\int \frac{{\rm d} ^4k}{(2\pi)^4}[{\rm Tr}G^{n}(k)+{\rm Tr}G^{p}(k)]\nonumber\\ &&+ \frac{ig_{\delta} ^{2}}{m_{\delta}^{\star 2}}\int \frac{{\rm d} ^4k}{(2\pi)^4}[{\rm Tr}G^{n}(k)-{\rm Tr}G^{p}(k)]\nonumber\\ &&-a\frac{2g_{\sigma}^{2}}{m_{\sigma}^{\star2}}\frac{(M_{p}^{\star}+M_{n}^{\star}-2M)^2}{(-2g_{\sigma})^3} -b\frac{2g_{\sigma}^{2}}{m_{\sigma}^{\star2}}\frac{(M_{p}^{\star}+M_{n}^{\star}-2M)^3}{(-2g_{\sigma})^4}. 
\end{eqnarray} Thus, we get after the momentum integral \begin{eqnarray}\label{eq:12} M^{\star}_{p}&=&M-\frac{1}{2\pi^2}(\frac{g_{\sigma}^{2}}{m_{\sigma}^{\star2}} +\frac{g_{\delta}^{2}}{m_{\delta}^{\star2}}) \Bigl[k_{F_{p}}M^{\star}_{p}E_{F_{p}}^{\star}-M_{p}^{\star3}{\rm ln}(\frac{k_{F_{p}}+E_{F_{p}}^{\star}}{M_{p}^{\star}})\Bigr] \nonumber\\ &&+\frac{1}{2\pi^2}(\frac{g_{\sigma}^{2}}{m_{\sigma}^{\star2}} +\frac{g_{\delta}^{2}}{m_{\delta}^{\star2}}) \Bigl[M^{\star3}_{p}\ln(\frac{M^{\star}_{p}}{M})-M^2(M^{\star}_{p}-M)\nonumber\\ &&-\frac{5}{2}M(M^{\star}_{p}-M)^2-\frac{11}{6}(M^{\star}_{p}-M)^3\Bigr]\nonumber\\ &&-\frac{1}{2\pi^2}(\frac{g_{\sigma}^{2}}{m_{\sigma}^{\star2}} -\frac{g_{\delta}^{2}}{m_{\delta}^{\star2}}) \Bigl[k_{F_{n}}M^{\star}_{n}E_{F_{n}}^{\star}-M_{n}^{\star3}{\rm ln}(\frac{k_{F_{n}}+E_{F_{n}}^{\star}}{M_{n}^{\star}})\Bigr] \nonumber\\ &&+\frac{1}{2\pi^2}(\frac{g_{\sigma}^{2}}{m_{\sigma}^{\star2}} -\frac{g_{\delta}^{2}}{m_{\delta}^{\star2}}) \Bigl[M^{\star3}_{n}\ln(\frac{M^{\star}_{n}}{M})-M^2(M^{\star}_{n}-M)\nonumber\\ &&-\frac{5}{2}M(M^{\star}_{n}-M)^2-\frac{11}{6}(M^{\star}_{n}-M)^3\Bigr]\nonumber\\ &&-a\frac{2g_{\sigma}^{2}}{m_{\sigma}^{\star2}}\frac{(M_{p}^{\star}+M_{n}^{\star}-2M)^2}{(-2g_{\sigma})^3} -b\frac{2g_{\sigma}^{2}}{m_{\sigma}^{\star2}}\frac{(M_{p}^{\star}+M_{n}^{\star}-2M)^3}{(-2g_{\sigma})^4}, \end{eqnarray} and \begin{eqnarray}\label{eq:13} M^{\star}_{n}&=&M-\frac{1}{2\pi^2}(\frac{g_{\sigma}^{2}}{m_{\sigma}^{\star2}} -\frac{g_{\delta}^{2}}{m_{\delta}^{\star2}}) \Bigl[k_{F_{p}}M^{\star}_{p}E_{F_{p}}^{\star}-M_{p}^{\star3}{\rm ln}(\frac{k_{F_{p}}+E_{F_{p}}^{\star}}{M_{p}^{\star}})\Bigr] \nonumber\\ &&+\frac{1}{2\pi^2}(\frac{g_{\sigma}^{2}}{m_{\sigma}^{\star2}} -\frac{g_{\delta}^{2}}{m_{\delta}^{\star2}}) \Bigl[M^{\star3}_{p}\ln(\frac{M^{\star}_{p}}{M})-M^2(M^{\star}_{p}-M)\nonumber\\ &&-\frac{5}{2}M(M^{\star}_{p}-M)^2-\frac{11}{6}(M^{\star}_{p}-M)^3\Bigr]\nonumber\\ &&-\frac{1}{2\pi^2}(\frac{g_{\sigma}^{2}}{m_{\sigma}^{\star2}} 
+\frac{g_{\delta}^{2}}{m_{\delta}^{\star2}}) \Bigl[k_{F_{n}}M^{\star}_{n}E_{F_{n}}^{\star}-M_{n}^{\star3}{\rm ln}(\frac{k_{F_{n}}+E_{F_{n}}^{\star}}{M_{n}^{\star}})\Bigr] \nonumber\\ &&+\frac{1}{2\pi^2}(\frac{g_{\sigma}^{2}}{m_{\sigma}^{\star2}} +\frac{g_{\delta}^{2}}{m_{\delta}^{\star2}}) \Bigl[M^{\star3}_{n}\ln(\frac{M^{\star}_{n}}{M})-M^2(M^{\star}_{n}-M)\nonumber\\ &&-\frac{5}{2}M(M^{\star}_{n}-M)^2-\frac{11}{6}(M^{\star}_{n}-M)^3\Bigr]\nonumber\\ &&-a\frac{2g_{\sigma}^{2}}{m_{\sigma}^{\star2}}\frac{(M_{p}^{\star}+M_{n}^{\star}-2M)^2}{(-2g_{\sigma})^3} -b\frac{2g_{\sigma}^{2}}{m_{\sigma}^{\star2}}\frac{(M_{p}^{\star}+M_{n}^{\star}-2M)^3}{(-2g_{\sigma})^4}, \end{eqnarray} where $m_{j}^{\star} ~(j=\sigma, \omega, \rho, \delta)$ are the in-medium meson masses. The effective mass of the meson (or in-medium meson mass) is defined as the pole position of the meson propagator at zero three-momentum (on-shell) or zero four-momentum (off-shell). We calculate the in-medium meson masses in the random phase approximation (RPA) \cite{abhijit, haruki}. The corresponding diagrams are given in Fig. 1(b). Thus, we obtain the in-medium masses of scalar mesons: \begin{equation}\label{eq:14} m_{j}^{\star2}=m_{j}^{2}+\Pi_{j}(q^{\mu})\ \, (j=\sigma,\delta), \end{equation} where \begin{equation}\label{eq:15} \Pi_{j}(q^{\mu})=-ig_{j}^{2}\sum\limits_{i=n,p}\int\frac{{\rm d^4}k}{(2\pi)^4}{\rm Tr} G^{i}(k+q)G^{i}(k). 
\end{equation} The expressions of $\Pi_{j}$ (on-shell and off-shell) are, respectively, \begin{eqnarray}\label{eq:16} \Pi_{j}(\vec{q}=0; q^{0}=m_{j}^{\star})&=&\frac{g_{j}^{2}}{4\pi^2}\sum\limits_{i=n,p}\Bigl[\frac{1}{m_{j}^{\star}}(4M_{i}^{\star2}-m_{j}^{\star2})^{3/2} \arctan(\frac{m_{j}^{\star}k_{F_{i}}}{E_{F_{i}}^{\star}\sqrt{ 4M_{i}^{\star2}-m_{j}^{\star2}}})\nonumber\\ &&+(m_{j}^{\star2}-6M_{i}^{\star2}){\rm ln}\Bigl(\frac{k_{F_{i}}+E_{F_{i}}^{\star}}{M_{i}^{\star}}\Bigr)+2k_{F_{i}}E_{F_{i}}^{\star}\Bigr]\nonumber\\ &&-\frac{g^{2}_{j}}{2\pi^2}\sum\limits_{i=n,p}\biggl\{\frac{3}{2}(M_{i}^{\star2}-M^{2})\int\limits_{0}^{1} {\rm d}x \ln\Bigl(1-\frac{m_{j}^{2}}{M^{2}}x(1-x)\Bigr)\nonumber\\ &&+\frac{3}{2}\int\limits_{0}^{1}{\rm d}x\Bigl[(M_{i}^{\star2}-m_{j}^{\star2}x(1-x)) \ln\Bigl(\frac{M_{i}^{\star2}-m_{j}^{\star2}x(1-x)}{M^{2}-m_{j}^{2}x(1-x)}\Bigr)\Bigr]\nonumber\\ &&-\frac{m_{j}^{2}-m_{j}^{\star2}}{4}-3M(M_{i}^{\star}-M) -\frac{9}{2}(M_{i}^{\star}-M)^2\biggr\}, \end{eqnarray} and \begin{eqnarray}\label{eq:17} \Pi_{j}(q^{\mu}=0)&=&\frac{g_{j}^{2}}{2\pi^2}\sum\limits_{i=n,p}\Bigl[\frac{3M_{i}^{\star2}k_{F_{i}}+k_{F_{i}}^{3}}{E_{F_{i}}^{\star}} -3M_{i}^{\star2}{\rm ln}\Bigl(\frac{k_{F_{i}}+E_{F_{i}}^{\star}}{M_{i}^{\star}}\Bigr)\Bigr]\nonumber\\ &&-\frac{3g_{j}^{2}}{4\pi^2}\sum\limits_{i=n,p}\Bigl[2M_{i}^{\star2} \ln\Bigl(\frac{M_{i}^{\star}}{M}\Bigr)-M^{2}+4MM_{i}^{\star}-3M_{i}^{\star2}\Bigr]. \end{eqnarray} The effective masses of vector mesons can be obtained from Fig. 1(b) \begin{equation}\label{eq:18} m_{j}^{\star2}=m_{j}^{2}+\Pi_{j T}(q^{\mu})\ \ (j=\omega,\rho), \end{equation} where $\Pi_{j T}$ is the transverse part of the polarization tensor as follows: \begin{equation}\label{eq:19} \Pi_{\mu\nu}(q^{\mu})=-ig_{j}^{2}\sum\limits_{i=n,p}\int\frac{{\rm d^4}k}{(2\pi)^4}{\rm Tr} \gamma_{\mu}G^{i}(k+q)\gamma_{\nu}G^{i}(k). 
\end{equation} So the expressions of $\Pi_{j T}$ (on-shell and off-shell) are, respectively, \begin{eqnarray}\label{eq:20} \Pi_{j T}(\vec{q}=0; q^{0}=m_{j}^{\star})&=&-\frac{g_{j}^{2}}{6\pi^2}\sum\limits_{i=n,p}\Bigl[ \frac{8M_{i}^{\star4}+2M_{i}^{\star2}m_{j}^{\star2}-m_{j}^{\star4}}{m_{j}^{\star}\sqrt{4M_{i}^{\star2}-m_{j}^{\star2}}} \arctan\Bigl(\frac{m_{j}^{\star}k_{F_{i}}}{E_{F_{i}}^{\star}\sqrt{4M_{i}^{\star2}-m_{j}^{\star2}}}\Bigr)\nonumber\\ &&-2k_{F_{i}}E_{F_{i}}^{\star}-m_{j}^{\star2}{\rm ln}\Bigl(\frac{k_{F_{i}}+E_{F_{i}}^{\star}}{M_{i}^{\star}}\Bigr)\Bigr]\nonumber\\ &&-\frac{g_{j}^{2}m_{j}^{\star2}}{2\pi^2}\sum\limits_{i=n,p}\int\limits_{0}^{1}{\rm d}xx(x-1) \ln\Bigl[\frac{M_{i}^{\star2}-m_{j}^{\star2}x(1-x)}{M^{2}-m_{j}^{2}x(1-x)}\Bigr], \end{eqnarray} and \begin{equation}\label{eq:21} \Pi_{j T}(q^{\mu}=0)=\frac{g_{j}^{2}}{3\pi^2}\sum\limits_{i=n,p}\frac{k_{F_{i}}^{3}}{E_{F_{i}}^{\star}}. \end{equation} We note that the VF contributions are included in the second summations of $\Pi_{j}(q^{\mu})$ and $\Pi_{jT}(q^{\mu})$ (Eqs. (16), (17), and (20)), while the VF contribution in Eq. (21) vanishes. The in-medium meson propagator is significantly modified by the interaction of the mesons with the nucleons. Clearly, this modification of the meson propagators will change the nucleon effective mass as well as the energy density. The meson propagators in the tadpole diagram are calculated at zero four-momentum transfer. So we must replace the meson mass appearing in the nucleon effective mass and the energy density by the meson effective mass defined as \cite{abhijit}: \begin{equation}\label{eq:22} m_{j}^{\star2}=m_{j}^{2}+\Pi_{j}(q^{\mu}=0)\ \, (j=\sigma,\delta), \end{equation} and \begin{equation}\label{eq:23} m_{j}^{\star2}=m_{j}^{2}+\Pi_{j T}(q^{\mu}=0)\ \ (j=\omega,\rho).
\end{equation} Therefore, the energy-momentum tensor in the VF-RMF model can be expressed as \begin{eqnarray}\label{eq:24} T_{\mu\nu}&=&i{\bar \psi}\gamma_{\mu}\partial_{\nu}\psi+g_{\mu\nu}\biggl[\frac{1}{2}m_{\sigma}^{\star2}\phi^{2} +\frac{1}{2}m_{\delta}^{\star2}{\vec \delta}^{2}-\frac{1}{2}m_{\omega}^{\star2}\omega_{\lambda}\omega^{\lambda} -\frac{1}{2}m_{\rho}^{\star2}{\vec b_{\lambda}}{\vec b^{\lambda}}+U(\phi)\biggr]. \end{eqnarray} Thus the EOS for nuclear matter in the VF-RMF model can be obtained. The energy density is given by \begin{eqnarray}\label{eq:25} \varepsilon&=&\langle T^{00}\rangle=\frac{g_{\omega}^2}{2m_{\omega}^{\star2}}\rho^2 +\frac{g_{\rho}^2}{2m_{\rho}^{\star2}}\rho_{3}^2 +\frac{1}{2}m_{\sigma}^{\star2}\phi^2 +\frac{1}{2}m_{\delta}^{\star2}\delta_{3}^2\nonumber\\ &&+\frac{1}{8\pi^2}\sum\limits_{i=n,p}\Bigl[k_{F_{i}}E_{F_{i}}^{\star}(M_{i}^{\star2}+2k_{F_{i}}^{2}) -M_{i}^{\star4}{\rm ln}(\frac{k_{F_{i}}+E_{F_{i}}^{\star}}{M_{i}^{\star}}) \Bigr]+U(\phi) \nonumber\\ &&-\frac{1}{8\pi^2}\sum\limits_{i=n,p}\Bigl[M_{i}^{\star4}{\rm ln}(\frac{M_{i}^{\star}}{M})+M^{3}(M-M_{i}^{\star}) -\frac{7}{2}M^2(M-M_{i}^{\star})^2\nonumber\\ &&+\frac{13}{3}M(M-M_{i}^{\star})^3-\frac{25}{12}(M-M_{i}^{\star})^4\Bigr], \end{eqnarray} and the pressure is \begin{eqnarray}\label{eq:26} P&=&\frac{1}{3}\langle T^{ii}\rangle=\frac{g_{\omega}^2}{2m_{\omega}^{\star2}}\rho^2 +\frac{g_{\rho}^2}{2m_{\rho}^{\star2}}\rho_{3}^2 -\frac{1}{2}m_{\sigma}^{\star2}\phi^2 -\frac{1}{2}m_{\delta}^{\star2}\delta_{3}^2\nonumber\\ &&+\frac{1}{8\pi^2}\sum\limits_{i=n,p}\Bigl[M_{i}^{\star4}{\rm ln}(\frac{k_{F_{i}} +E_{F_{i}}^{\star}}{M_{i}^{\star}})-E_{F_{i}}^{\star}k_{F_{i}}(M_{i}^{\star2}-\frac{2}{3}k_{F_{i}}^{2})\Bigr]-U(\phi)\nonumber\\ &&+\frac{1}{8\pi^2}\sum\limits_{i=n,p}\Bigl[M_{i}^{\star4}{\rm ln}(\frac{M_{i}^{\star}}{M})+M^{3}(M-M_{i}^{\star}) -\frac{7}{2}M^2(M-M_{i}^{\star})^2\nonumber\\ &&+\frac{13}{3}M(M-M_{i}^{\star})^3-\frac{25}{12}(M-M_{i}^{\star})^4\Bigr].
\end{eqnarray} The density dependence of the nuclear symmetry energy, $E_{sym}$, is one of the basic properties of asymmetric nuclear matter for studying the structure of neutron stars. Empirically, we have information on $E_{sym}$ only at the saturation point, where it ranges from 28 to 35 MeV according to the nuclear mass table \cite{myers}. The nuclear symmetry energy is defined through the expansion of the binding energy in terms of the asymmetry parameter $\alpha$ \cite{liu05}: \begin{equation}\label{eq:27} E/A(\rho,\alpha)=E/A(\rho,0)+E_{sym}(\rho)\alpha^2+{\cal O}(\alpha^4)+\cdots, \end{equation} where the binding energy per nucleon is defined as $E/A=\varepsilon/\rho-M$, and the asymmetry parameter $\alpha=(\rho_{n}-\rho_{p})/\rho$. The nuclear symmetry energy is defined by \begin{equation}\label{eq:28} E_{sym}\equiv\frac{1}{2}\frac{\partial^{2}(E/A)}{\partial\alpha^{2}}\Big|_{\alpha=0} =\frac{1}{2}\rho\frac{\partial^{2}\varepsilon}{\partial\rho^{2}_{3}}\Big|_{\rho_{3}=0}. \end{equation} According to the definition, an explicit expression for the symmetry energy in the VF-RMF model is obtained as \begin{eqnarray}\label{eq:29} E_{sym}&=&\frac{1}{2}\frac{g_{\rho}^{2}}{m_{\rho}^{\star2}}\rho+\frac{1}{6}\frac{k_{F}^{2}}{E_{F}^{\star}} -\frac{1}{2}\frac{g_{\delta}^{2}}{m_{\delta}^{\star2}}\frac{M^{\star2}\rho}{E_{F}^{\star2} (1+\frac{g_{\delta}^{2}}{m_{\delta}^{\star2}}C-\frac{1}{\pi^{2}}\frac{g_{\delta}^{2}}{m_{\delta}^{\star2}}D)}\nonumber\\ &&+\frac{\rho}{2\pi^2}\frac{g_{\delta}^{4}}{m_{\delta}^{\star4}}\frac{M^{\star2}}{E_{F}^{\star2}} \frac{D}{1+\frac{g_{\delta}^{2}}{m_{\delta}^{\star2}}C-\frac{1}{\pi^{2}}\frac{g_{\delta}^{2}}{m_{\delta}^{\star2}}D}\nonumber\\ &&-\frac{\rho}{4\pi^2}\frac{g_{\delta}^{4}}{m_{\delta}^{\star4}} \frac{M^{\star2}}{E_{F}^{\star2}}\frac{1}{(1+\frac{g_{\delta}^{2}}{m_{\delta}^{\star2}}C-\frac{1}{\pi^{2}} \frac{g_{\delta}^{2}}{m_{\delta}^{\star2}}D)^2}\Bigl[12M^{\star2}{\rm ln}\frac{M^{\star}}{M}+7M^{\star2}\nonumber\\
&&-7M^{2}+26M(M-M^{\star})-25(M-M^{\star})^2\Bigr], \end{eqnarray} where \begin{equation} C=\frac{1}{\pi^2}\Big[\frac{k_{F}E_{F}^{\star2}+2M^{\star2}k_{F}}{E_{F}^{\star}} -3M^{\star2}{\rm ln}\Bigl(\frac{k_{F}+E_{F}^{\star}}{M^{\star}}\Bigr)\Big], \end{equation} and \begin{equation}\label{eq:30} D=3M^{\star2}{\rm ln}\frac{M^{\star}}{M}+M^{\star2}-M^{2}-5M(M^{\star}-M)-\frac{11}{2}(M^{\star}-M)^2. \end{equation} As discussed in Refs. \cite{norman,kouno}, the incompressibility $K$ is one of the important ingredients for the nuclear EOS. The incompressibility of nuclear matter is defined by \cite{norman,kouno} \begin{equation}\label{eq:31} K=9\rho_{0}^{2}\frac{\partial^{2}(\varepsilon/\rho)}{\partial \rho^{2}}\Big|_{\rho=\rho_{0}}=9\frac{\partial P}{\partial \rho}\Big|_{\rho=\rho_{0}}=9\rho_{0}\frac{\partial \mu}{\partial \rho}\Big|_{\rho=\rho_{0}}, \end{equation} where $\mu=(\varepsilon+P)/\rho$ is the baryon chemical potential and $\rho_{0}$ is the saturation density. We can easily obtain the incompressibility from the EOS in the VF-RMF model as \begin{equation} \label{eq:32} K=9\rho_{0}\Bigl(\frac{k_{F}^{2}}{3\rho E_{F}^{\star}}+\frac{g_{\omega}^{2}}{m_{\omega}^{\star2}}+\frac{M^{\star}}{E_{F}^{\star}}\frac{\partial M^{\star}}{\partial \rho}\Bigr)\Big|_{\rho=\rho_{0}}, \end{equation} where \begin{equation}\label{eq:33} \frac{\partial M^{\star}}{\partial \rho}=-\frac{g_{\sigma}^{2}}{m_{\sigma}^{\star2}}\frac{M^{\star}}{E_{F}^{\star}}Q^{-1}, \end{equation} and \begin{eqnarray}\label{eq:34} Q&=&1+\frac{g_{\sigma}^{2}}{\pi^{2}m_{\sigma}^{\star2}}\Bigl(k_{F}E_{F}^{\star}+\frac{2k_{F}M^{\star2}}{E_{F}^{\star}}-3M^{\star2}{\rm ln}(\frac{k_{F}+E_{F}^{\star}}{M})+\frac{9}{2}M^{\star2}+\frac{3}{2}M^{2}-6MM^{\star}\Bigr)\nonumber\\ &&+\frac{2a}{m_{\sigma}^{\star2}g_{\sigma}}(M-M^{\star})+\frac{3b}{m_{\sigma}^{\star2}g_{\sigma}^{2}}(M-M^{\star})^{2}. \end{eqnarray} The final outcome of a supernova explosion can be a neutron star or a black hole. 
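The incompressibility definition above, $K=9\rho_{0}^{2}\,\partial^{2}(\varepsilon/\rho)/\partial\rho^{2}|_{\rho_{0}}$, is easy to check numerically by finite differences. The sketch below uses a toy parabolic binding-energy curve around saturation (not the VF-RMF EOS itself); since $\varepsilon/\rho=M+E/A$ and the constant $M$ drops out of the second derivative, working with $E/A$ suffices:

```python
# Finite-difference check of the incompressibility definition,
#   K = 9 rho_0^2 * d^2(eps/rho)/d rho^2 at rho = rho_0.
# The binding-energy curve below is a toy parabolic expansion around the
# saturation point, not the VF-RMF EOS of the text.

RHO0, E0, K0 = 0.16, -16.0, 240.0   # saturation inputs from Table I

def binding_energy(rho):
    """Toy E/A (MeV): parabolic expansion around the saturation point."""
    x = (rho - RHO0) / RHO0
    return E0 + (K0 / 18.0) * x**2

def incompressibility(e_per_a, rho0, h=1e-4):
    """K = 9 rho0^2 times the second derivative of E/A (central difference)."""
    d2 = (e_per_a(rho0 + h) - 2.0 * e_per_a(rho0) + e_per_a(rho0 - h)) / h**2
    return 9.0 * rho0**2 * d2

K = incompressibility(binding_energy, RHO0)
```

For the parabolic toy curve the finite difference recovers $K=K_{0}=240$ MeV exactly, confirming that the factor $9\rho_{0}^{2}$ converts the curvature of $E/A$ at saturation into the incompressibility.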
The neutron star is believed to evolve from an initially hot protoneutron star. The matter in cold neutron stars is in the ground state in nuclear equilibrium. Matter in equilibrium with respect to the weak interaction is called $\beta$-equilibrium matter. The composition of the $\beta$-equilibrium system is determined by the requirements of charge neutrality and chemical-potential equilibrium \cite{norman,lattimer}. The balance processes for the $\beta$-equilibrium ($npe$) system are the following weak reactions: \begin{eqnarray}\label{eq:35} n&\longrightarrow &p+e^{-}+\bar{\nu}_{e},\\ p+e^{-}&\longrightarrow& n+\nu_{e}. \end{eqnarray} The chemical-potential equilibrium condition for the ($npe$) system can be written as \begin{equation}\label{eq:36} \mu_{e}=\mu_{n}-\mu_{p}, \end{equation} where the electron chemical potential is $\mu_{e}=\sqrt{k_{F_{e}}^{2}+m^{2}_{e}}$, with $k_{F_{e}}$ the electron Fermi momentum and $m_{e}$ the electron mass. The charge neutrality condition is \begin{equation}\label{eq:37} \rho_{e}=\rho_{p}=X_{p}\rho, \end{equation} where $\rho_{e}$ is the electron density, and the proton fraction $X_{p}=Z/A=\rho_{p}/\rho$. In the ultra-relativistic limit for noninteracting electrons, the electron density can be expressed as a function of its chemical potential \begin{equation}\label{eq:38} \rho_{e}=\frac{1}{3\pi^{2}}\mu_{e}^{3}. \end{equation} Then, we can obtain the relation between the proton fraction $X_{p}$ and the nuclear symmetry energy $E_{sym}$ \begin{equation}\label{eq:39} 3\pi^{2}\rho X_{p}-[4E_{sym}(1-2X_{p})]^{3}=0. \end{equation} The EOS for $\beta$-equilibrium ($npe$) matter can be estimated by using the values of $X_{p}$, which can be obtained by solving Eq. (41). The properties of the neutron stars can then be studied by solving the Tolman-Oppenheimer-Volkoff (TOV) equations \cite{tolman} with the derived nuclear EOS as an input. \section*{III.
Results and discussions} In order to make a comparison with the NL-RMF model \cite{liu02,liu05}, the same saturation properties of nuclear matter and hadron masses, listed in Table I, are used to determine the model parameters. The obtained model parameters are presented in Table II together with the NL-RMF model parameters for comparison. The coupling constants are defined as $f_{j}=g_{j}^{2}/m_{j}^{2}$ ($j=\sigma$, $\omega$, $\rho$, $\delta$) in Refs. \cite{liu02,liu05}. The parameters of the self-interacting terms in Table II are defined as $A=a/g_{\sigma}^{3}$ and $B=b/g_{\sigma}^{4}$. \begin{center} {{\large \bf Table I.}~Saturation properties of nuclear matter and hadron masses.} \par \vspace{0.5cm} \noindent \begin{tabular}{ c c c } \hline $saturation ~properties$ &\cite{liu05} \\ \hline $\rho_{0}~(fm^{-3})$ &0.16 \\ \hline $E/A ~(MeV)$ &-16.0 \\ \hline $K~(MeV)$ &240.0 \\ \hline $E_{sym}~(MeV)$ &31.3 \\ \hline $M^{\star}/M $ &0.75 \\ \hline $M~(MeV)$ &939 \\ \hline $m_{\sigma}~(MeV)$ &550 \\ \hline $m_{\omega}~(MeV)$ &783 \\ \hline $m_{\rho}~(MeV)$ &770 \\ \hline $m_{\delta}~(MeV)$ &980 \\ \hline \end{tabular} \end{center} \par \vspace{0.3cm} \noindent {{\large \bf Table II.}~Model Parameters in the VF-RMF and NL-RMF models.} \par \begin{center} \vspace{0.5cm} \noindent \begin{tabular}{c|c|c|c|c} \hline $Parameter$ &\multicolumn{2}{|c}{$VF-RMF$ model} &\multicolumn{2}{|c}{$NL-RMF$ model}\cite{liu05} \\ \cline{2-5} &$VF\rho$ &$VF\rho\delta$ &$NL\rho$ &$NL\rho\delta$ \\\hline $g_\sigma$ &12.33 &12.33 &8.96 &8.96 \\\hline $g_\omega$ &10.52 &10.52 &9.24 &9.24 \\\hline $g_\rho$ &4.01 &6.80 &3.80 &6.93 \\\hline $g_\delta$ &0.00 &7.85 &0.00 &7.85 \\\hline $A~(fm^{-1})$ &0.048 &0.048 &0.033 &0.033 \\\hline $B$ &-0.021 &-0.021 &-0.0048 &-0.0048 \\\hline \end{tabular} \end{center} \vspace{0.5cm} We use the parameters in Table II to perform the self-consistent calculations in the present work.
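The coupling-constant combinations $f_{j}=g_{j}^{2}/m_{j}^{2}$ follow directly from Tables I and II once the meson masses are converted from MeV to fm$^{-1}$. A quick arithmetic check for the VF$\rho\delta$ parameter set (the resulting $f_{j}$ values are our own arithmetic, not numbers quoted in the text):

```python
# f_j = g_j^2 / m_j^2 in fm^2, built from the Table I meson masses and the
# Table II couplings of the VF-rho-delta parameter set.
HBARC = 197.327  # MeV fm

def f_coupling(g, m_mev):
    """f = g^2 / m^2 in fm^2, with the meson mass converted from MeV to fm^-1."""
    return g**2 / (m_mev / HBARC) ** 2

f_sigma = f_coupling(12.33, 550.0)
f_omega = f_coupling(10.52, 783.0)
f_rho   = f_coupling(6.80, 770.0)
f_delta = f_coupling(7.85, 980.0)
```

The conversion factor $\hbar c \approx 197.327$ MeV$\cdot$fm is the only subtlety; with it, $f_{\sigma}\approx 19.6$ fm$^{2}$ and $f_{\delta}\approx 2.5$ fm$^{2}$ for this set.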
The masses of hadrons (nucleons and mesons) in the medium can be obtained in the relativistic mean field approach (RMFA) with the VF effects by calculating the loop-diagrams in Fig. 1. We first turn to the self-consistent calculations of the in-medium meson masses. The obtained in-medium meson masses are presented in Fig. 2. It is evident from Eqs. (16), (17), (20), and (21) that the in-medium meson masses depend on the asymmetry parameter $\alpha$. The results in Fig. 2 are for $\alpha$=0.0 (symmetric matter), given by the VF$\rho$ model, and $\alpha$=1.0 (asymmetric matter), given by the VF$\rho\delta$ model. \vspace{3cm} \begin{figure}[hbtp] \begin{center} \includegraphics[width=4in]{Fig0201.eps} \includegraphics[width=4in]{Fig0202.eps} \vglue -3.5cm \caption{Meson effective masses as a function of the baryon density in the VF-RMF model.} \end{center} \end{figure} Fig. 2(a) shows the in-medium meson effective masses (on-shell: ${\vec q}=0,~q^{0}=m_{j}^{\star},~j=\sigma,~\omega,~\rho,~\delta$) as a function of the baryon density in the VF-RMF model. The in-medium modification to the masses of $\sigma$, $\omega$, and $\rho$ mesons has been studied in other theoretical models \cite{hatsuda,brown,sarkar}. The in-medium effective mass decrease at the normal density is $\sim$18\% for $\rho$ and $\omega$ mesons in the model based on QCD sum rules \cite{hatsuda}. The mass decrease is $\sim$20\% for $\omega$ and $\rho$ mesons at the normal density according to the Brown-Rho (BR) scaling law \cite{brown}. In our model, the decreases of the in-medium meson effective masses at the normal density are $\sim$25\% for $\sigma$, $\sim$20\% for $\omega$, and only $\sim$5\% for $\rho$ mesons in the symmetric VF$\rho$ case. In the VF$\rho\delta$ case, the decreases of the $\rho$ and $\delta$ meson masses are 11$\sim$13\% and 25$\sim$27\% at the normal density, respectively.
Most experiments and theoretical approaches have indicated a decrease of the in-medium meson effective masses around the normal density compared with the masses at zero density \cite{hatsuda,brown,sarkar,ozawa,trnka,krusche}. However, in the latest experiment \cite{nasseripour}, no significant mass shift was observed for the $\rho$ meson with momenta ranging from 0.8 to 3.0 GeV. Up to now, the experimental results have not yet converged, and more work is needed to obtain a consistent understanding of the in-medium behavior of vector mesons \cite{hayano}. In general, the medium modifications to the masses of mesons are momentum dependent \cite{post}. We note that the on-shell meson effective masses are obtained for mesons at rest in our model. This is not in the momentum range of the CLAS experiment \cite{nasseripour}. To date, there has been no experimental measurement of the in-medium modification to the mass of the $\delta$ meson. Our model indicates a significant decrease of the $\delta$ meson effective mass around the normal density. We note that the effective meson masses begin to increase in the high-density region in the VF-RMF model. Unfortunately, the high-density region is beyond the reach of current experiments. It will be very interesting to test our prediction in future experiments. Fig. 2(b) shows the in-medium meson effective masses (off-shell: $q^{\mu}=0$), which are used for the calculations of the nuclear EOS, as a function of the baryon density in the VF-RMF model. The off-shell meson masses differ from the on-shell meson masses because the four momenta carried by the meson propagators in the two situations are different. Since the meson propagators in the nucleon self-energy are computed at zero four-momentum transfer (see Fig. \ref{fig01}(a)), we have to use the off-shell meson masses in the tadpole loop calculation for self-consistency.
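At its core, this self-consistency requirement is a fixed-point problem: the scalar density depends on $M^{\star}$, which in turn depends on the scalar density. A stripped-down sketch for symmetric matter, with the vacuum meson mass, no VF terms, and a purely illustrative coupling $C$ (not a Table II parameter):

```python
import math

# Fixed-point sketch of a self-consistent effective-mass calculation:
#   M* = M - (g_sigma^2 / m_sigma^2) * rho_s(M*),
# with the scalar density of symmetric matter evaluated at the current M*.
# Bare mean-field loop only (no vacuum-fluctuation terms, vacuum meson mass);
# the coupling C below is an illustrative number, not a fitted parameter.

M = 939.0 / 197.327                              # nucleon mass, fm^-1
K_F = (3.0 * math.pi**2 * 0.08) ** (1.0 / 3.0)   # symmetric matter at rho_0
C = 11.0                                         # illustrative g^2/m^2, fm^2

def scalar_density(m_star, k_f):
    """rho_s for symmetric matter (spin-isospin degeneracy 4), Fermi-sea part."""
    e_f = math.sqrt(k_f**2 + m_star**2)
    return (m_star / math.pi**2) * (
        k_f * e_f - m_star**2 * math.log((k_f + e_f) / m_star)
    )

m_star = M
for _ in range(200):
    new = M - C * scalar_density(m_star, K_F)
    if abs(new - m_star) < 1e-12:
        m_star = new
        break
    m_star = 0.5 * (m_star + new)   # damping keeps the iteration stable

ratio = m_star / M
```

The damped iteration converges to a unique self-consistent $M^{\star}$; the full calculation in the text replaces the vacuum $m_{\sigma}$ by the density-dependent off-shell $m_{\sigma}^{\star}$ and adds the VF and nonlinear terms, but the fixed-point structure is the same.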
\vspace{3cm} \begin{figure}[htbp] \begin{center} \includegraphics[width=4in]{Fig0301.eps} \includegraphics[width=4in]{Fig0302.eps} \vglue -3.5cm \caption{Nucleon effective masses as a function of the baryon density in different models. The upper (lower) dashed and dash-dotted lines correspond to the masses of the proton (neutron). (a) VF-RMF model. (b) NL-RMF model.} \end{center} \end{figure} The nucleon effective masses play an important role in the calculations of the EOS (see Eqs. (25) and (26)). We calculate the loop-diagram corrections to the self-energies of nucleons in medium. The nucleon effective masses without the $\delta$ meson can be calculated from Eq. (7). One can see from Eqs. (12) and (13) that the presence of the $\delta$ meson leads to proton and neutron effective mass splitting. We present the baryon density dependence of proton and neutron effective masses for different proton fractions in the two models for a comparison in Fig. 3. The solid lines in Fig. 3 are the nucleon effective mass for symmetric matter ($X_{p}$=0.5). Fig. 3(a) shows that the proton and neutron effective masses given by the VF$\rho\delta$ model decrease slowly with the increase of the baryon density, at variance with the NL$\rho\delta$ model that presents a much faster decrease (Fig. 3(b)). This main difference between the VF$\rho\delta$ and the NL$\rho\delta$ models actually comes from the in-medium meson masses (see Eqs. (22) and (23)). The in-medium meson masses (off-shell) increase with the increase of the baryon density in the VF-RMF model (see Fig. 2(b)). So it is natural that the VF effects lead to a slow decrease of the nucleon effective masses as the baryon density increases. \vspace{-2.5cm} \begin{figure}[htbp] \includegraphics[scale=0.42]{Fig04.eps} \vglue -3.0cm \caption{The symmetry energy as a function of the baryon density in different models. 
The inset is the corresponding proton fraction.} \end{figure} The density dependence of the symmetry energy for the VF-RMF and NL-RMF models is presented in Fig. 4. We see a similar behavior of $E_{sym}$ at saturation density in the two models. With the increase of the baryon density, the symmetry energy given by the VF-RMF model increases more slowly than that given by the NL-RMF model. From Fig. 4 we see that the symmetry energy with the $\delta$ meson is stiffer than that without the $\delta$ meson for both the VF-RMF and the NL-RMF cases. The symmetry energy in the VF$\rho\delta$ case is softer than that in the NL$\rho\delta$ case. This is due to the VF effects. The presence of the $\delta$ meson affects the symmetry energy and consequently the EOS of asymmetric nuclear matter. \vspace{2cm} \begin{figure}[htbp] \includegraphics[scale=0.35]{Fig05.eps} \vglue -3.5cm \caption{Equation of state for ($npe$) matter.} \end{figure} \vspace{-2cm} The $\beta$-equilibrium matter is relevant to the composition of the neutron stars. The EOS, pressure vs density, for ($npe$) matter in the VF-RMF and the NL-RMF models is presented in Fig. 5 for a comparison. We see that the EOS in the VF-RMF model is lower. Due to the VF effects, the EOS of asymmetric matter becomes softer. In the present work, only two pictures for the neutron star composition are considered: pure neutron and $\beta$-equilibrium matter, i.e., without strangeness-bearing baryons and deconfined quarks (see Refs. \cite{lattimer,maieron}). Furthermore, we limit the constituents to be neutrons, protons, and electrons in the latter case. In fact, in $\beta$-equilibrium matter, nucleons and electrons dominate at low temperature. The structure of neutron stars can be calculated by solving the TOV equations. The correlation between the neutron star mass and the corresponding radius for the pure neutron and the $\beta$-equilibrium ($npe$) matter in the VF-RMF model is shown in Fig. 6.
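The proton fraction entering the ($npe$) EOS follows from Eq. (41); since $E_{sym}$ is in MeV and $\rho$ in fm$^{-3}$, a factor of $(\hbar c)^{3}$ restores consistent units. A bisection sketch using the Table I value $E_{sym}=31.3$ MeV at saturation density:

```python
import math

# Proton fraction of beta-equilibrium (npe) matter from Eq. (41):
#   3 pi^2 rho X_p - [4 E_sym (1 - 2 X_p)]^3 = 0.
# E_sym is in MeV and rho in fm^-3, so (hbar c)^3 restores consistent units.

HBARC = 197.327  # MeV fm

def proton_fraction(rho, e_sym):
    """Solve the beta-equilibrium condition for X_p by bisection on (0, 0.5)."""
    def f(xp):
        return (3.0 * math.pi**2 * rho * xp * HBARC**3
                - (4.0 * e_sym * (1.0 - 2.0 * xp)) ** 3)
    lo, hi = 0.0, 0.5
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Saturation density with the Table I symmetry energy:
x_p = proton_fraction(0.16, 31.3)
```

The left-hand side grows and the right-hand side shrinks with $X_{p}$, so the root is unique; at saturation this gives a proton fraction of a few percent, the familiar result for $\beta$-equilibrium matter.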
The obtained maximum mass, the corresponding radius, and the central density are reported in Table III. We see from Fig. 6 and Table III that the VF-RMF model leads to the decrease of the neutron star masses for both the pure neutron and the ($npe$) matter. However, the NL-RMF model leads to heavier neutron stars (see Refs. \cite{liu05,liu0702}). This is mainly because the symmetry energy, and hence the EOS of asymmetric matter, becomes softer in the VF-RMF model. \vspace{3.0cm} \begin{figure}[htbp] \includegraphics[scale=0.35]{Fig06.eps} \vglue -3.5cm \caption{The mass of the neutron star as a function of the radius of the neutron star.} \end{figure} \begin{table} \begin{center} {{\large \bf Table III.}~~The maximum mass, the corresponding radius and the central density of the neutron star in the VF-RMF model.} \par \vspace{0.5cm} \noindent \begin{tabular}{c|c|c|c} \hline $ $ &$Model$ &\multicolumn{2}{c}{$VF-RMF$} \\ \hline $neutron~star$ &$properties$ &$VF\rho$ &$VF\rho\delta$ \\ \hline $pure ~neutron$ &$M_{S}/M_{\bigodot}$ &1.82 &2.07 \\ \cline {2-4} &$R (km)$ &11.57 &12.49 \\ \cline {2-4} &$\rho_c/\rho_0$ &6.41 &5.48 \\ \hline $(npe)~matter$ &$M_{S}/M_{\bigodot}$ &1.45 &1.51 \\\cline{2-4} &$R (km)$ &10.22 &10.77 \\ \cline {2-4} &$\rho_c/\rho_0$ &8.98 &8.20 \\ \hline \end{tabular} \end{center} \end{table} We note that the coupling constant $f_{j}=g_{j}^{2}/m_{j}^{2}$ in the NL-RMF model is a constant fixed by the saturation properties of nuclear matter \cite{liu02,liu05}. In order to make a comparison, we define $F_{j}^{\star}=g_{j}^{2}/m_{j}^{\star 2}(\rho)$ ($j$=$\sigma$, $\omega$, $\rho$, $\delta$) in our model, where $m_{j}^{\star}$ is the off-shell meson mass. We are interested in the VF effects on the coupling constants. It is well known that the coupling constants in the DDRH model are density dependent (see \cite{liu0702}), whereas the meson masses are constant.
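The TOV integration behind results like Table III is straightforward once an EOS is supplied. The sketch below uses a standard $\Gamma=2$ polytropic toy EOS in $G=c=1$ units (with $K=100$ km$^{2}$), not the VF-RMF EOS of the text; the chosen central density is a common test model whose gravitational mass comes out near $1.4\,M_{\odot}$:

```python
import math

# Minimal TOV integration (G = c = 1, lengths in km) with a polytropic toy
# EOS, P = K rho^Gamma -- a stand-in for the tabulated VF-RMF EOS.

K_POLY, GAMMA = 100.0, 2.0
MSUN_KM = 1.4766  # solar mass expressed in km

def eos_eps(p):
    """Total energy density of the polytrope: eps = rho + P/(Gamma - 1)."""
    rho = (p / K_POLY) ** (1.0 / GAMMA)
    return rho + p / (GAMMA - 1.0)

def tov_star(rho_c, dr=1.0e-3):
    """Integrate dP/dr and dm/dr outward until the pressure vanishes."""
    p = K_POLY * rho_c**GAMMA
    r = dr
    m = (4.0 / 3.0) * math.pi * r**3 * eos_eps(p)
    while p > 1.0e-12:
        eps = eos_eps(p)
        dp = -(eps + p) * (m + 4.0 * math.pi * r**3 * p) / (r * (r - 2.0 * m))
        m += 4.0 * math.pi * r**2 * eps * dr
        p += dp * dr
        r += dr
    return m / MSUN_KM, r  # gravitational mass in M_sun, areal radius in km

mass, radius = tov_star(1.28e-3)  # central rest-mass density in km^-2
```

In a realistic calculation the polytrope would be replaced by an interpolation of the tabulated $\varepsilon(P)$ from the model EOS, and the step size reduced until the mass and radius converge.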
In the present model, the coupling constants are constant, but the meson masses are density dependent. To distinguish the two cases, we define $F_{j}=g_{j}^{\star 2}(\rho)/m_{j}^{2}$ ($j=\sigma,~\omega,~\rho$; $g_{j}^{\star}$ is the density-dependent coupling constant) for the DDRH$\rho$ case. We present a comparison between $F_{j}^{\star}$ and $F_{j}$ in Fig. 7. We see from Fig. 7 that $F_{j}^{\star}$ decreases with the increase of the baryon density for both $\sigma$ and $\omega$ mesons, which is due to the increase of the off-shell meson masses in the high-density region. The trends of variation of $F_{j}^{\star}$ and $F_{j}$ are roughly the same for $\sigma$, $\omega$, and $\rho$ mesons. If we attribute the density dependence of the coupling constants in the DDRH model to the VF effects in our model, the density dependence of $F_{j}^{\star}$ and $F_{j}$ should have the same physical origin. \vspace{2.5cm} \begin{figure}[hbtp] \includegraphics[scale=0.35]{Fig07.eps} \vglue -3.5cm \caption{A comparison between the ratios $F_{j}^{\star}$ and $F_{j}$.} \end{figure} \section*{IV. Summary} The VF corrections are investigated in the framework of the RMF approximation by using the relativistic Lagrangian density with the $\delta$ field in this work. By taking into account the loop corrections to the self-energies of the in-medium nucleons and mesons, the VF effects are naturally introduced into the RMF model. In order to make a comparison with the NL-RMF model, the same saturation properties of nuclear matter are used to determine the parameters of the VF model. We calculate the contributions from self-energy diagrams to the masses of the nucleons and mesons. The effective masses of the nucleons and mesons, especially the $\rho$ and $\delta$ mesons, in the nuclear medium are obtained. We find that the nucleon effective masses decrease more slowly with the increase of the baryon density compared with the NL-RMF case.
We also find that the dependence of the off-shell meson masses on the density in the medium is different from that of the on-shell meson masses (see Fig. 2). The effects of the in-medium hadron masses on the nuclear EOS and the properties of the neutron stars are studied. The VF effects soften the symmetry energy and the EOS of asymmetric matter. Due to such softness, the neutron star masses are substantially reduced. This indicates that the VF effects on the neutron stars are important. We see from Fig. 2(b) that the off-shell in-medium meson masses increase with the increase of the baryon density. The off-shell in-medium meson masses can be used to calculate $F_{j}^{\star}$. The variation trends of $F_{j}^{\star}$ for $\sigma$, $\omega$, and $\rho$ mesons are roughly consistent with $F_{j}$ in the DDRH$\rho$ case \cite{liu0702}. The density dependence of the off-shell in-medium meson masses is very interesting indeed. In the present work we adopt the Hartree approximation, in which we only consider the dominant contributions from tadpole diagrams to the nucleon self-energy when we investigate the VF effects. In fact, the exchange diagram contributions can only provide small corrections to the EOS in the RMF approach at high densities \cite{brian}. It is well known that the symmetries of the infinite nuclear matter system can simplify the mean-field Lagrangian considerably. As pointed out in Ref. \cite{brian}, when translational and rotational invariance of the nuclear matter is taken into account, the expectation values of all three-vector fields must vanish. Therefore, the expectation value of $\vec{G}_{\mu\nu}$ in Eq. (1) is zero and nonlinear $\rho$ meson interactions (three or four $\rho$ vertices) do not appear.
Furthermore, since we only take into account the tadpole diagrams, only the neutral iso-vector meson ($\rho^{0}$) is involved in the nucleon self-energy diagrams even in the asymmetric nuclear matter ($\rho^{\pm}$ could contribute to the exchange diagrams, which are ignored in the Hartree approximation). It is suggested that the tensor coupling to the nucleon should be taken into account for the study of the vector mesons in nuclear medium \cite{machl}. In the present work, since we focus our attention on the VF effects on the properties of neutron stars, we do not include the tensor coupling effects in the calculations for the effective masses of the vector mesons. This will be studied in future work, especially for the study of the $\rho$ meson masses in the medium. A future study of the VF effects on the meson-nucleon coupling constants will also be carried out. In addition, a more careful study of the high density behavior of the meson-nucleon effective couplings, especially $g_{\delta}$, is important and appealing. \begin{acknowledgments} This project is supported by the National Natural Science Foundation of China (Project Nos. 10675022 and 10875160), the Key Project of Chinese Ministry of Education (Project No. 106024), and the Special Grants from Beijing Normal University. \end{acknowledgments}
\end{document} \section{Introduction} The black holes in the X-ray binaries M33~X-7 and IC10~X-1 are two of the most massive stellar-mass black holes: $15.65 \pm 1.45\mbox{$\mathrm{M}_{\odot}$}$ \citep{Orosz+07} and 23-34 \mbox{$\mathrm{M}_{\odot}$} \citep{Prestwich+07,Silverman+Filippenko08} respectively. Such high masses require that the progenitor star was very massive and experienced only a moderate mass-loss rate \citep[e.g.][]{Belczynski+09}. However, both black holes orbit a massive companion star in a close orbit: 3.45 days in the case of M33 X-7 and 1.43 days in the case of IC10~X-1. These orbits are so tight that the radius of the progenitor star must have been larger than the current separation between the stars. This implies that the progenitor experienced severe mass loss via Roche-lobe overflow, which contradicts the very moderate mass-loss rate required to achieve such a high mass for the black hole. Explaining both the high mass of the black hole and the tight orbit simultaneously is a major challenge for binary evolution models. Here, we discuss an alternative evolutionary scenario for very close massive binaries in which mass loss by Roche-lobe overflow is avoided. \section{Rotational mixing in massive binaries} In models of rapidly rotating, massive stars, rotational mixing can efficiently transport centrally produced helium throughout the stellar envelope. Instead of expanding during core H-burning as non-rotating models do, they stay compact, become more luminous and move blue-wards in the Hertzsprung-Russell diagram \citep{Maeder87}. This type of evolution is often referred to as (quasi-)chemically homogeneous evolution and has been proposed for the formation of long gamma-ray burst progenitors \citep{Yoon+06,Woosley+06}.
High rotation rates can be readily achieved in binary systems due to mass and angular momentum transfer \citep{Cantiello+07} and also by tidal interaction in close binaries \citep{Detmers+08}. In \citet{DeMink+09} we demonstrated that even in detached, tidally-locked binaries, rotational mixing can lead to chemically homogeneous evolution. In these models it is the less massive star, in which the effects of rotational mixing are less pronounced, that fills its Roche lobe first, contrary to what classical binary evolution theory predicts. In single stars this type of evolution only occurs at low metallicity, because at solar metallicity mass and angular momentum loss in the form of a stellar wind spins down the stars and prevents initially rapidly rotating stars from following nearly chemically homogeneous evolutionary tracks \citep{Yoon+06, Brott+09}. In a close binary tides can replenish the angular momentum, opening the possibility for chemically homogeneous evolution in the solar neighbourhood. \section{The formation of short-period black-hole binaries} The binary models presented by \citet{DeMink+09} all evolve into contact, due to expansion of the secondary star. However, Roche-lobe overflow may be avoided altogether in systems in which the secondary stays compact, either because it also evolves chemically homogeneously, which may occur if $M_1 \approx M_2$, or because it evolves on a much longer timescale than the primary, when $M_2\ll M_1$. Whereas standard binary evolution theory predicts that the smaller the orbital period, the earlier mass transfer sets in, these models show that binaries with the smallest orbital periods may avoid the onset of mass transfer altogether, see Fig.~1.
This evolutionary scenario does not fit in the traditional classification of interacting binaries (Case~{\it A}, {\it B} and {\it C}), which is based on the evolutionary stage of the primary component at the onset of mass transfer \citep{Kippenhahn+Weigert67,Lauterborn70}. In the remainder of this paper we will refer to this new case of binary evolution, in which mass transfer is delayed or avoided altogether as a result of very efficient internal mixing, as Case~{\it M}. The massive and tight systems in which Case~{\it M} can occur are rare \citep{DeMink+08}. Additional mixing processes induced by the presence of the companion star, which may be important in such systems, will widen the parameter space in which Case~{\it M} can occur: it would lower the minimum mass for the primary star and increase the orbital period below which this type of evolution occurs. The massive LMC binary [L72]~LH~54-425, with an orbital period of 2.25~d \citep{Williams+08}, may be a candidate for this type of evolution. Another interesting case is the galactic binary WR20a, which consists of two core hydrogen burning stars of $82.7\pm5.5$\mbox{$\mathrm{M}_{\odot}$} and $81.9\pm5.5$\mbox{$\mathrm{M}_{\odot}$} in an orbit of 3.69~d. Both stars are so compact that they are detached. The surface abundances show evidence for rotational mixing: a nitrogen abundance six times the solar value, while carbon is depleted \citep{Bonanos+04, Rauw+05}. If Roche-lobe overflow is avoided throughout the core hydrogen-burning phase of the primary star, both stars will stay compact while the primary gradually becomes a helium star and can be observed as a Wolf-Rayet star. Initially the Wolf-Rayet star will be more massive than its main sequence companion, but mass loss due to the strong stellar wind may reverse the mass ratio, especially in systems which started with nearly equal masses.
Examples of observed short-period Wolf-Rayet binaries with a main-sequence companion are CQ~Cep, CX~Cep, HD~193576 and the very massive system HD~311884 \citep{vanderHucht01}. Such systems are thought to be the result of very non-conservative mass transfer or a common envelope phase \citep[e.g.][]{Petrovic+05_WR}. Case~{\it M} constitutes an alternative formation scenario which does not involve mass transfer. Case~{\it M} is particularly interesting for the formation of massive black-hole binaries, such as M33~X-7 and IC~10~X-1. The explanation for the formation of these systems with standard binary evolutionary models or synthetic models \citep[e.g.][]{Abubekerov+09} involves a common-envelope phase that sets in after the end of core helium burning, as the progenitor of the black hole must have had a radius much larger than the current orbital separation. This scenario is problematic as it requires the black-hole progenitor to lose roughly ten times less mass before the onset of Roche-lobe overflow than what is currently predicted by stellar evolution models \citep{Orosz+07}. An additional problem is that the most likely outcome of the common envelope phase would be a merger, as the envelopes of massive stars are tightly bound \citep{Podsiadlowski+03}. In the Case~{\it M} scenario the black hole progenitor stays compact and avoids Roche-lobe overflow, at least until the end of core helium burning. \section {Conclusion} We propose an alternative formation scenario for close massive black hole binaries, such as M33~X-7 and IC10~X-1. In this scenario the system starts initially as a very close binary in which tides force the stars to rotate rapidly. This induces mixing processes in the progenitor of the black hole, which as a result stays compact within its Roche lobe.
Whereas the short orbital period is a major challenge for classical binary evolution scenarios, in this scenario it constitutes an essential ingredient: it results in tidal locking of the stellar rotation to the fast revolution of the orbit. The high mass of the black hole is naturally explained by this scenario: efficient mixing leads to the formation of very massive helium stars and consequently massive black holes. Opportunities to test the validity of this scenario for M33~X-7 may come from the properties of the companion star \citep{Valsecchi+09} and the high Kerr parameter of the black hole \citep{Liu+08}, which according to \citet{Mendez09} can only be explained with hypercritical accretion onto the black hole. Further modelling is needed to validate this statement in the light of this new evolutionary scenario. \bibliographystyle{rjr-asp-bib}
\section{Introduction} \label{sect:intro} \defcitealias{Vaughan05}{V05} A perennial problem in observational astrophysics is detecting periodic or almost-periodic signals in noisy time series. The standard analysis tool is the periodogram \citep[see e.g. ][]{Jenkins69, Priestley81, Press92, Bloomfield00, Chatfield03}, and the problem of period detection amounts to assessing whether or not some particular peak in the periodogram is due to a periodic component or a random fluctuation in the noise spectrum \citep[see][]{Fisher29, Priestley81, Leahy83, vanderklis89, Percival93, Bloomfield00}. If the time series is the sum of a random (stochastic) component and a periodic one we may write $y(t) = y_R(t) + y_P(t)$ and, due to the independence of $y_R(t)$ and $y_P(t)$, the power spectrum of $y(t)$ is the sum of the power spectra of the stochastic and periodic processes: $S_Y(f) = S_R(f) + S_P(f)$. This is a \emph{mixed} spectrum \citep[][section 4.4]{Percival93} formed from the sum of $S_P(f)$, which comprises only narrow features, and $S_R(f)$, which is a continuous, broad spectral function. Likewise, we may consider an evenly sampled, finite time series $y(t_i)$ ($i=1,2,\ldots,N$) as the sum of two finite time series: one is a realisation of the periodic process, the other a random realisation of the stochastic process. We may compute the periodogram (which is an estimator of the true power spectrum) from the squared modulus of the Discrete Fourier Transform (DFT) of the time series, and, as with the power spectra, the periodograms of the two processes add linearly: $I(f_j) = I_R(f_j) + I_P(f_j)$. The periodogram of the periodic time series will contain only narrow ``lines'' with all the power concentrated in only a few frequencies, whereas the periodogram of the stochastic time series will show power spread over many frequencies.
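These definitions can be made concrete with a short numerical sketch (in Python; the sampling step, signal frequency and amplitude below are arbitrary illustrative choices, and the periodogram normalisation is just one common convention):

```python
import numpy as np

rng = np.random.default_rng(42)
N, dt = 1024, 1.0
t = np.arange(N) * dt

# Mixed process: a stochastic (here white noise) component plus a
# periodic one; the amplitude and frequency are illustrative only.
y_R = rng.normal(0.0, 1.0, N)                # realisation of the random process
y_P = 2.0 * np.sin(2.0 * np.pi * 0.1 * t)    # periodic component at f = 0.1
y = y_R + y_P

def periodogram(x, dt):
    """Periodogram at the Fourier frequencies f_j = j / (N dt),
    j = 1, ..., N/2, from the squared modulus of the DFT."""
    n = len(x)
    dft = np.fft.rfft(x - x.mean())
    f = np.fft.rfftfreq(n, dt)
    I = (2.0 * dt / n) * np.abs(dft) ** 2
    return f[1:], I[1:]                      # drop the zero frequency

f, I = periodogram(y, dt)
```

The periodic component concentrates its power in a narrow line near $f = 0.1$, while the white-noise power is spread over all frequencies, so in this sketch the largest ordinate sits at the line frequency.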
Unfortunately the periodogram of stochastic processes fluctuates wildly around the true power spectrum, making it difficult to distinguish random fluctuations in the noise spectrum from genuinely periodic spectral components. See \cite{vanderklis89} for a thorough review of these issues in the context of X-ray astronomy. Particular attention has been given to the special case that the spectrum of the stochastic process is flat (a \emph{white noise} spectrum $S(f) = const$), which is the case when the time series data $y_R(t_i)$ are independently and identically distributed (IID) random variables. Reasonably well-established statistical procedures have been developed to help identify spurious spectral peaks and reduce the chance of false detections \citep[e.g.][]{Fisher29, Priestley81, Leahy83, vanderklis89, Percival93}. In contrast there is no comparably well-established procedure in the general case that the spectrum of the stochastic process is not flat. In a previous paper, \cite{Vaughan05} (henceforth \citetalias{Vaughan05}), we proposed what is essentially a generalisation of Fisher's method to the case where the noise spectrum is a power law: $S_R(f) = \beta f^{-\alpha}$ (where $\alpha$ and $\beta$ are the power law index and normalisation parameters). Processes with power spectra that show a power law dependence on frequency with $\alpha > 0$ (i.e. increasing power to lower frequencies) are called \emph{red noise} and are extremely common in astronomy and elsewhere \citep[see][]{Press78}. In this paper we expand upon the ideas in \citetalias{Vaughan05} and, in particular, address the problem from a Bayesian perspective that allows further generalisation of the spectral model of the noise. The rest of this paper is organised as follows. In section~\ref{sect:bayes} we introduce some of the basic concepts of the Bayesian approach to statistical inference; readers familiar with this topic may prefer to skip this section.
Section~\ref{sect:stat} gives a brief overview of classical significance testing using $p$-values (tail area probabilities) and test statistics, and section~\ref{sect:ppp} discusses the posterior predictive $p$-value, a Bayesian counterpart to the classical $p$-value. Section \ref{sect:pc} reviews the conventional (classical) approaches to testing for periodogram peaks. Section \ref{sect:ml} outlines the theory of maximum likelihood estimation from periodogram data, which is developed into the basis of a fully Bayesian analysis in sections~\ref{sect:ba} and \ref{sect:ppper}. The Bayesian method is then applied to two real observations of AGN in section \ref{sect:data}. Section \ref{sect:disco} discusses the limitations of the method, and alternative approaches to practical data analysis. A few conclusions are given in section \ref{sect:conc}, and two appendices describe details of the simulation algorithms used in the analysis. \section{Bayesian basics, briefly} \label{sect:bayes} \begin{table*} \caption[]{Definitions used throughout the paper.} \label{table:def} \centering \begin{tabular}{l l} \hline \hline Term & Definition \\ \hline $f_j$ & The $j$th Fourier frequency $f_j = j / N \Delta T$ ($j=1,\ldots,N/2$)\\ $I_j$ & Periodogram at frequency $f_j$ \\ $\mathbf{I}$ & vector of periodogram values $\mathbf{I}= \{ I_1,\ldots,I_{N/2} \}$ \\ ${\mbox{\boldmath $\theta$}}$ & Model parameters ${\mbox{\boldmath $\theta$}} = \{ \theta_1, \ldots, \theta_M \}$ \\ $\hat{{\mbox{\boldmath $\theta$}}}_{\rm MLE}$ & Maximum Likelihood Estimates of parameters (equation \ref{eqn:mle}) \\ ${\mbox{\boldmath $x$}}$ & Data (e.g. time series) ${\mbox{\boldmath $x$}} = \{ x_1, \ldots, x_N \}$ \\ $p_C$ & Frequentist/classical (conditional) $p$-value (equation \ref{eqn:classic})\\ $p_B$ & Bayesian (posterior predictive) $p$-value (equation \ref{eqn:ppp})\\ $S_j$ & Model spectral density at frequency $f_j$, i.e.
$S(f_j; {\mbox{\boldmath $\theta$}})$ \\ $\hat{S}_j$ & The model computed at the estimate $\hat{{\mbox{\boldmath $\theta$}}}$ (equation \ref{eqn:Rstat})\\ $D({\mbox{\boldmath $x$}}, {\mbox{\boldmath $\theta$}})$ & Deviance ($-2 \log p({\mbox{\boldmath $x$}} | {\mbox{\boldmath $\theta$}})$) given model $H$ (equation \ref{eqn:mlogl}) \\ $p({\mbox{\boldmath $x$}} | {\mbox{\boldmath $\theta$}}, H)$ & Likelihood for parameters ${\mbox{\boldmath $\theta$}}$ of model $H$ given data ${\mbox{\boldmath $x$}}$ (equation \ref{eqn:bayes_eqn}) \\ $p({\mbox{\boldmath $\theta$}}|H)$ & Prior probability density for parameters ${\mbox{\boldmath $\theta$}}$ (equation \ref{eqn:bayes_eqn}) \\ $p({\mbox{\boldmath $\theta$}} | {\mbox{\boldmath $x$}}, H)$ & Posterior probability density for parameters ${\mbox{\boldmath $\theta$}}$ given data ${\mbox{\boldmath $x$}}$ (equation \ref{eqn:bayes_eqn}) \\ $p({\mbox{\boldmath $x$}} | H)$ & Prior predictive density (aka marginal likelihood) of the data ${\mbox{\boldmath $x$}}$ (equation \ref{eqn:bayes_eqn}) \\ ${\mbox{\boldmath $x$}}^{\rm rep}$ & Replicated data (from repeat observations or simulations) (equation \ref{eqn:ppd})\\ $p({\mbox{\boldmath $x$}}^{\rm rep} | {\mbox{\boldmath $x$}}^{\rm obs})$ & Posterior predictive distribution given data ${\mbox{\boldmath $x$}}^{\rm obs}$ (equation \ref{eqn:ppd})\\ $T(\mathbf{x})$ & A test statistic \\ \hline \end{tabular} \end{table*} There are two main tasks in statistical inference: parameter estimation and model checking (or comparison). Bayesian parameter estimation is concerned with finding the probability of the parameters given the model $p({\mbox{\boldmath $\theta$}} | {\mbox{\boldmath $x$}}, H)$, where ${\mbox{\boldmath $x$}}$ ($= \{ x_1, \ldots, x_N \}$) are data values, ${\mbox{\boldmath $\theta$}}$ ($= \{ \theta_1, \ldots, \theta_M \}$) are parameter values and $H$ represents the model.
In contrast, \emph{frequentist} (or \emph{classical}) statistics restricts attention to the sampling distribution of the data given the model and parameters $p({\mbox{\boldmath $x$}} | {\mbox{\boldmath $\theta$}}, H)$. These two probability functions are related by Bayes' Theorem \begin{equation} p({\mbox{\boldmath $\theta$}} | {\mbox{\boldmath $x$}}, H) = \frac{p( {\mbox{\boldmath $x$}} | {\mbox{\boldmath $\theta$}}, H) p( {\mbox{\boldmath $\theta$}}| H)}{p({\mbox{\boldmath $x$}} | H)}. \label{eqn:bayes_eqn} \end{equation} Each of the terms in Bayes' theorem has a name when used in Bayesian data analysis: $p({\mbox{\boldmath $\theta$}} | {\mbox{\boldmath $x$}}, H)$ is the \emph{posterior} distribution of the parameters; $p( {\mbox{\boldmath $x$}} | {\mbox{\boldmath $\theta$}}, H)$ is the \emph{likelihood} function of the parameters\footnote{ Note that when considered as a function of the data for known parameters, $p( {\mbox{\boldmath $x$}} | {\mbox{\boldmath $\theta$}}, H)$ is the sampling distribution of the data, but when considered as a function of the parameters for fixed data, $p( {\mbox{\boldmath $x$}} | {\mbox{\boldmath $\theta$}}, H)$ is known as the likelihood, sometimes denoted $L({\mbox{\boldmath $\theta$}})$.}; $p( {\mbox{\boldmath $\theta$}}| H)$ is the \emph{prior} distribution of the parameters, and $p({\mbox{\boldmath $x$}} | H)$ is a normalising constant sometimes referred to as the \emph{marginal likelihood} (of the data) or the \emph{prior predictive} distribution\footnote{Some physicists use the term \emph{evidence} for this, e.g. \citet{Sivia96}, \citet{Trotta08}.}. General introductions to Bayesian analysis for the non-specialist include \citet{Jeffreys92}, \citet{Berger88} and \citet{Howson91}; more thorough treatments include \cite{Berry96}, \citet{Carlin00}, \citet{Gelman04}, and \citet{Lee89}; and discussions more focussed on physics and astrophysics problems include \citet{Sivia96}, \citet{Gregory05}, and \citet{Loredo90, Loredo92}.
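As a toy numerical illustration of Bayes' theorem (not an example taken from the paper), the posterior for a single parameter can be evaluated on a grid; the data, the exponential model and the flat prior below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: 100 draws from an exponential distribution with mean 2
# (all numbers here are illustrative).
x = rng.exponential(2.0, 100)

# Grid of candidate values for the unknown mean S.
S_grid = np.linspace(0.1, 10.0, 2000)
dS = S_grid[1] - S_grid[0]

# Log-likelihood of the exponential model: prod_i (1/S) exp(-x_i / S).
log_like = -len(x) * np.log(S_grid) - x.sum() / S_grid
prior = np.ones_like(S_grid)   # flat prior (constant over the grid)

# Posterior: likelihood times prior, normalised to unit integral.
# The normalising sum plays the role of p(x | H) in Bayes' theorem.
post = np.exp(log_like - log_like.max()) * prior
post /= post.sum() * dS
```

Under the flat prior the posterior mode coincides (up to grid resolution) with the maximum likelihood estimate, here the sample mean.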
In Bayesian analysis the posterior distribution is a complete summary of our inference about the parameters given the data ${\mbox{\boldmath $x$}}$, model $H$, and any prior information. But this can be further summarised using a point estimate for the parameters such as the mean, median or mode of the posterior distribution. For one parameter the posterior mean is \begin{equation} \label{eqn:post-mean} \mathrm{E}[ \theta | {\mbox{\boldmath $x$}}, H ] = \int \theta p(\theta | {\mbox{\boldmath $x$}}, H) d\theta \end{equation} A slightly more informative summary is a \emph{credible interval} (or credible region for multiple parameters). This is an interval in parameter space that contains a specified probability mass (e.g. $90$ per cent) of the posterior distribution. These intervals give an indication of the uncertainty in the point inferences. \begin{equation} \label{eqn:ci} \int_R p( \theta | {\mbox{\boldmath $x$}}, H) d\theta = C, \end{equation} where $C$ is the probability content (e.g. $C=0.9$) and $R$ is an interval in parameter space. One common approach is to select the interval satisfying equation~\ref{eqn:ci} that contains the highest (posterior) density (i.e. the posterior density at any point inside is higher than at any point outside). This will give the smallest interval that contains a probability $C$, usually called the highest posterior density region (abbreviated to HDR or HPD interval by different authors). An alternative is the equal tail posterior interval, which is defined by the two values above and below which is $(1-C)/2$ of the posterior probability. These two types of interval are illustrated in \citet[][see their Fig. 1]{Park08}. If we have multiple parameters but are interested in only one parameter we may \emph{marginalize} over the other parameters.
For example, if ${\mbox{\boldmath $\theta$}} = \{ \theta_1, \theta_2 \}$ then the posterior distribution for $\theta_1$ is \begin{eqnarray} \label{eqn:margin} p( \theta_1 | {\mbox{\boldmath $x$}} , H ) & = & \int p( \theta_1, \theta_2 | {\mbox{\boldmath $x$}}, H ) d\theta_2 \nonumber \\ & = & \int p( \theta_1 | \theta_2, {\mbox{\boldmath $x$}}, H ) p(\theta_2 | {\mbox{\boldmath $x$}}, H) d\theta_2. \end{eqnarray} This is the average of the \emph{joint} posterior $p( \theta_1, \theta_2 | {\mbox{\boldmath $x$}}, H )$ over $\theta_2$. In the second formulation the joint posterior has been factored into two distributions, the first is the conditional posterior of $\theta_1$ given $\theta_2$ and the second is the posterior density for $\theta_2$. Most present day Bayesian analysis is carried out with the aid of Monte Carlo methods for evaluating the necessary integrals. In particular, if we have a method for simulating a random sample of size $N$ from the posterior distribution $p(\theta | {\mbox{\boldmath $x$}}, H)$ then the posterior density may be approximated by a histogram of the random draws. This gives essentially complete information about the posterior (for a sufficiently large $N$). The posterior mean may be approximated by the sample mean \begin{equation} \mathrm{E}[\theta | {\mbox{\boldmath $x$}} , H] \approx \frac{1}{N} \sum_{i=1}^{N} \theta^i \end{equation} where $\theta^i$ are the individual simulations from the posterior. If the parameter is a vector ${\mbox{\boldmath $\theta$}} = \{ \theta_1, \ldots, \theta_M \}$, the $m$th component of each vector is a sample from the marginal distribution of the $m$th parameter. This means the posterior mean of each parameter is approximated by the sample mean of each component of the vector. Intervals may be calculated from the sample quantiles, e.g. the $90$ per cent equal tail area interval on a parameter may be approximated by the interval between the $0.05$ and $0.95$ quantiles of the sample.
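A minimal sketch of these sample-based summaries, assuming posterior draws are already available (a Gamma distribution with arbitrary shape and scale stands in for the posterior here, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for draws from a posterior p(theta | x, H); the Gamma shape
# and scale are arbitrary illustrative choices (mean = 5 * 2 = 10).
draws = rng.gamma(shape=5.0, scale=2.0, size=100_000)

# Posterior mean approximated by the sample mean.
post_mean = draws.mean()

# 90 per cent equal-tail interval from the 0.05 and 0.95 sample quantiles.
lo, hi = np.quantile(draws, [0.05, 0.95])

def hpd_interval(samples, prob=0.9):
    """Shortest interval containing a fraction `prob` of the samples:
    a sample-based approximation to the highest posterior density region."""
    s = np.sort(samples)
    n_in = int(np.ceil(prob * len(s)))
    widths = s[n_in - 1:] - s[: len(s) - n_in + 1]
    i = np.argmin(widths)
    return s[i], s[i + n_in - 1]

hpd_lo, hpd_hi = hpd_interval(draws)
```

For a skewed posterior such as this Gamma stand-in, the HPD interval is shifted towards the mode and is never wider than the equal-tail interval with the same probability content.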
In this manner the difficult (sometimes insoluble) integrals of equations \ref{eqn:post-mean}, \ref{eqn:ci} and \ref{eqn:margin} may be replaced by trivial operations on the random sample. The accuracy of these approximations is governed by the accuracy with which the distribution of the simulations matches the posterior density, and the size of the random sample $N$. Much of the work on practical Bayesian data analysis methods has been devoted to the generation and assessment of accurate Monte Carlo methods, particularly the use of Markov chain Monte Carlo (MCMC) methods, which will be discussed and used later in this paper. For model comparison we may again use Bayes theorem to give the posterior probability for model $H_i$ \begin{equation} p(H_i | {\mbox{\boldmath $x$}}) = \frac{ p( {\mbox{\boldmath $x$}} | H_i) p(H_i) }{ p({\mbox{\boldmath $x$}}) }, \end{equation} and then compare the posterior probabilities for two (or more) competing models, say $H_0$ and $H_1$ (with parameters ${\mbox{\boldmath $\theta$}}_0$ and ${\mbox{\boldmath $\theta$}}_1$, respectively). (In effect we are treating the choice of model, $H_i$, as a discrete parameter.) The ratio of these two eliminates the term in the denominator (which has no dependence on model selection): \begin{equation} O = \frac{ p(H_1 | {\mbox{\boldmath $x$}} )}{ p(H_0 | {\mbox{\boldmath $x$}} )} = \frac{p({\mbox{\boldmath $x$}} | H_1)}{ p({\mbox{\boldmath $x$}} | H_0)} \frac{p(H_1)}{p(H_0)} \label{eqn:oddsratio} \end{equation} The first term on the right hand side of equation~\ref{eqn:oddsratio} is the ratio of likelihoods and is often called the \emph{Bayes factor} \citep[see][]{Kass95} and the second term is the ratio of the priors. 
However, in order to obtain $p(H_i|{\mbox{\boldmath $x$}})$ we must first remove the dependence of the posterior distributions on their parameters, often called \emph{nuisance} parameters in this context (we are not interested in making inferences about ${\mbox{\boldmath $\theta$}}_i$, but they are necessary in order to compute the model). In order to do this the full likelihood function must be integrated or {\it marginalized} over the joint prior probability density function (PDF) of the parameters: \begin{equation} p({\mbox{\boldmath $x$}} | H_i) = \int p( {\mbox{\boldmath $x$}} | {\mbox{\boldmath $\theta$}}_i, H_i) p({\mbox{\boldmath $\theta$}}_i | H_i) d{\mbox{\boldmath $\theta$}}_i \label{eqn:marginal} \end{equation} Here, $p({\mbox{\boldmath $x$}} | {\mbox{\boldmath $\theta$}}_i, H_i)$ is the likelihood and $p({\mbox{\boldmath $\theta$}}_i | H_i)$ the prior for the parameters of model $H_i$. \section{Test statistics and significance testing} \label{sect:stat} We return briefly to the realm of frequentist statistics and consider the idea of significance testing using a test statistic. A test statistic $T({\mbox{\boldmath $x$}})$ is a real-valued function of the data chosen such that extreme values are unlikely when the \emph{null hypothesis} $H_0$ is true. If the sampling distribution of $T$ is $p(T|H_0)$, under the null hypothesis, and the observed value is $T^{\rm obs} = T({\mbox{\boldmath $x$}}^{\rm obs})$, then the classical $p$-value is \begin{equation} p_C({\mbox{\boldmath $x$}}^{\rm obs}) = \int_{T^{\rm obs}}^{+\infty} p(T|H_0) dT = \Pr \{ T({\mbox{\boldmath $x$}}^{\rm rep}) \ge T({\mbox{\boldmath $x$}}^{\rm obs}) | H_0 \}, \end{equation} where $\Pr \{ x|y \}$ is the probability of event $x$ given that event $y$ occurred. The second formulation is in terms of \emph{replicated} data that could have been observed, or could be observed in repeat experiments \citep{Meng94, Gelman96, Gelman04}.
The $p$-value gives the fraction of $p(T|H_0)$ lying above the observed value $T^{\rm obs}$. As such, $p$-values are \emph{tail area} probabilities, and one usually uses small $p_C$ as evidence against the null hypothesis. If the null hypothesis is \emph{simple}, i.e. has no free parameters, or the sampling distribution of $T$ is independent of any free parameters, then the test statistic is said to be \emph{pivotal}. If the distribution of the test statistic does depend on the parameters of the model, i.e. $p(T| {\mbox{\boldmath $\theta$}}, H_0)$, as is often the case, then we have a \emph{conditional} $p$-value \begin{equation} \label{eqn:classic} p_C({\mbox{\boldmath $x$}}^{\rm obs}, {\mbox{\boldmath $\theta$}}) = \int_{T^{\rm obs}}^{+\infty} p(T|{\mbox{\boldmath $\theta$}}) dT = \Pr \{ T({\mbox{\boldmath $x$}}^{\rm rep}) \ge T({\mbox{\boldmath $x$}}^{\rm obs}) | {\mbox{\boldmath $\theta$}} \} \end{equation} (For clarity we have omitted the explicit conditioning on $H_0$.) In order to compute this we must have an estimate for the nuisance parameters ${\mbox{\boldmath $\theta$}}$. \section{Posterior predictive $p$-values} \label{sect:ppp} In Bayesian analysis the \emph{posterior predictive distribution} is the distribution of ${\mbox{\boldmath $x$}}^{\rm rep}$ given the available information which includes ${\mbox{\boldmath $x$}}^{\rm obs}$ and any prior information. \begin{equation} \label{eqn:ppd} p( {\mbox{\boldmath $x$}}^{\rm rep} | {\mbox{\boldmath $x$}}^{\rm obs}) = \int p( {\mbox{\boldmath $x$}}^{\rm rep} | {\mbox{\boldmath $\theta$}} ) p( {\mbox{\boldmath $\theta$}} | {\mbox{\boldmath $x$}}^{\rm obs}) d{\mbox{\boldmath $\theta$}}, \end{equation} \citep[e.g. section 6.3 of][]{Gelman04}.
Here, $p({\mbox{\boldmath $\theta$}} | {\mbox{\boldmath $x$}}^{\rm obs})$ is the posterior distribution of the parameters (see eqn.~\ref{eqn:bayes_eqn}) and $p( {\mbox{\boldmath $x$}}^{\rm rep} | {\mbox{\boldmath $\theta$}} )$ is the sampling distribution of the data given the parameters. The Bayesian $p$-value is the (tail area) probability that replicated data could give a test statistic at least as extreme as that observed. \begin{eqnarray} \label{eqn:ppp} p_B({\mbox{\boldmath $x$}}) & = & \int p_C({\mbox{\boldmath $x$}}^{\rm obs}, {\mbox{\boldmath $\theta$}}) p({\mbox{\boldmath $\theta$}} | {\mbox{\boldmath $x$}}^{\rm obs}) d{\mbox{\boldmath $\theta$}} \nonumber \\ & = & \Pr \{ T({\mbox{\boldmath $x$}}^{\rm rep}) \ge T({\mbox{\boldmath $x$}}^{\rm obs}) | {\mbox{\boldmath $x$}}^{\rm obs}, H_0 \} \end{eqnarray} This is just the classical $p$-value (eqn.~\ref{eqn:classic}) averaged over the posterior distribution of ${\mbox{\boldmath $\theta$}}$ (eqn.~\ref{eqn:bayes_eqn}), i.e. the posterior mean $\mathrm{E}[p_C|{\mbox{\boldmath $x$}}^{\rm obs}, H_0]$ which may be calculated using simulations. In other words, it gives the average of the conditional $p$-values evaluated over the range of parameter values, weighted by the (posterior) probability of the parameter values. The aim of the posterior predictive $p$-value (or, more generally, comparing the observed value of a test statistic to its posterior predictive distribution) is to provide a simple assessment of whether the data are similar (in important ways) to the data expected under a particular model. This tail area probability does not depend on the unknown value of parameters ${\mbox{\boldmath $\theta$}}$, and is often called the \emph{posterior predictive} $p$-value \citep[see][]{Rubin84, Meng94, Gelman96, Gelman04, Protassov02}. The (classical) conditional $p$-value and the (Bayesian) posterior predictive $p$-value are in general different but are equivalent in two special cases.
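The Monte Carlo recipe behind eqn.~\ref{eqn:ppp} can be sketched in a toy setting (not the analysis of this paper): the "data" are white-noise periodogram-like ordinates with unknown mean level $S$, the test statistic is the largest ordinate, and $S$ is drawn from its posterior under a Jeffreys prior. All of these choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy observed data: n exponential ordinates about an unknown level S.
n = 256
S_true = 3.0
I_obs = rng.exponential(S_true, n)
T_obs = I_obs.max()          # test statistic: the largest ordinate

# Posterior draws of S under a Jeffreys prior p(S) ~ 1/S: with an
# exponential likelihood, S | I_obs is inverse-gamma with shape n and
# scale sum(I_obs), so S = sum(I_obs) / G with G ~ Gamma(n, 1).
n_sim = 5000
S_post = I_obs.sum() / rng.gamma(n, 1.0, n_sim)

# For each posterior draw, simulate a replicated data set and its statistic.
T_rep = np.array([rng.exponential(S, n).max() for S in S_post])

# Posterior predictive p-value: fraction of replications at least as
# extreme as the observed statistic.
p_B = np.mean(T_rep >= T_obs)
```

Because each replication uses a different posterior draw of $S$, the uncertainty in the nuisance parameter is automatically propagated into $p_B$, which is the point of the construction.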
If the null hypothesis is simple or the test statistic $T$ is pivotal, then the sampling and posterior predictive distributions of $T$ are the same, $p_C = p_B$. Like the classical (conditional) $p$-value, $p_B$ is used for model checking but has the advantage of having no dependence on unknown parameters. The posterior predictive distribution of $T$ includes the uncertainty in the classical $p$-value $p_C$ due to the unknown nuisance parameters \citep{Meng94}. The posterior predictive $p$-value is a single summary of the agreement between data and model, and may be used to assess whether the data are consistent with being drawn from the model: a $p$-value that is not extreme (i.e. not very close to $0$ or $1$) shows the observed value $T^{\rm obs}$ is not an outlier in the population $T^{\rm rep}$. \citet{Gelman04} and \citet{Protassov02} argue that model checking based on the posterior predictive distribution is less sensitive to the choice of priors (on the parameters), and more useful in identifying deficiencies in the model, compared to Bayes factors or posterior odds (eqn.~\ref{eqn:oddsratio}). \section{Conditional significance of periodogram peaks} \label{sect:pc} We now return to the problem of assessing the significance of peaks in periodograms of noisy time series. The null hypothesis, $H_0$, in this case is that the time series was produced by a stochastic process. It is well known that the periodogram of any stochastic time series of length $N$, denoted $I_j = I(f_j)$ at Fourier frequency $f_j = j/(N \Delta T)$ (with $j=1,\ldots,N/2$), is exponentially distributed\footnote{ The exponential distribution $p(x|\lambda) = \lambda e^{-\lambda x}$ is a special case of the chi square distribution $\chi_{\nu}^2$ with $\nu=2$ degrees of freedom, and a special case of the gamma distribution, $\Gamma(1,1/\lambda)$. See e.g. \citet{Eadie71}, \citet{Carlin00}, \citet{Gelman04} or \citet{Lee89} for more on specific distribution functions.
} about the true spectral density $S_j = S(f_j)$ \begin{equation} \label{eqn:pdist} p(I_j | S_j) = \frac{1}{S_j} \exp( -I_j / S_j ), \end{equation} \citep[see][]{Jenkins69, Groth75, Priestley81, Leahy83, vanderklis89, Press92, Percival93, Timmer95, Bloomfield00, Chatfield03}. Strictly speaking this is valid for the Fourier frequencies other than the zero and Nyquist frequencies ($j=0$ and $j=N/2$), which follow a different distribution, although in the limit of large $N$ this difference is almost always inconsequential. This distribution means that the ratio of the periodogram ordinates $I_j$ to the true spectrum $S_j$ will be identically distributed. If we have a parametric spectral model with known parameters, $S_j({\mbox{\boldmath $\theta$}})$, the ratio \begin{equation} R_j^{\rm obs} = 2 I_{j}^{\rm obs} / S_j({\mbox{\boldmath $\theta$}}) \end{equation} will be distributed as $\chi_{\nu}^2$ with $\nu = 2$ degrees of freedom (see \citetalias{Vaughan05}) and it is trivial to integrate this density function to find the classical tail area $p$-value corresponding to a given observed datum $I_j^{\rm obs}$. This simple fact is the basis of many ``textbook'' frequentist tests for periodicity. However, $p_C$ depends on the parameters ${\mbox{\boldmath $\theta$}}$ (and, more generally, the model $H$), which in general we do not know. The standard solution is to estimate the parameters, e.g. by fitting the periodogram data, and thereby estimate the spectral density $S_j$ under the null hypothesis, call this $\hat{S}_j$, and use this estimate in the test statistic \begin{equation} \label{eqn:Rstat} \hat{R}_j^{\rm obs} = 2 I_{j}^{\rm obs} / \hat{S}_j. \end{equation} The problem is that the distribution of $\hat{R}_j$ will not be simply $\chi_2^2$ since that does not account for the uncertainty in the spectral estimate $\hat{S}_j$.
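When $S_j$ is known exactly, the $\chi^2_2$ tail area has a simple closed form, $\Pr\{R_j \ge R_j^{\rm obs}\} = \exp(-I_j^{\rm obs}/S_j)$. A minimal numerical check, with hypothetical values for the periodogram ordinate and spectrum:

```python
import numpy as np
from scipy.stats import chi2

I_obs, S_j = 12.0, 2.5            # hypothetical ordinate and known spectrum
R = 2.0 * I_obs / S_j             # chi^2 with 2 dof when S_j is known
p_classical = chi2.sf(R, df=2)    # tail-area (survival function) p-value

# the chi^2_2 tail area reduces to a simple exponential
assert np.isclose(p_classical, np.exp(-I_obs / S_j))
```

This closed form is what makes the ``textbook'' tests trivial; the difficulty discussed in the text arises only once $S_j$ must be replaced by an estimate $\hat{S}_j$.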
\citetalias{Vaughan05} presented a partial solution to this, by treating the statistic $\hat{R}_j$ as the ratio of two random variables under certain simplifying assumptions. In what follows we use Bayesian methods to develop a much more general method for estimating the parameters of a power spectral model, and posterior predictive model checking to check the quality of a model fit and to map out the distribution of $\hat{R}_j$ conditional on the observed data. \section{Periodogram analysis via the likelihood function} \label{sect:ml} As discussed in \citetalias{Vaughan05}, and based on the results of \citet{Geweke83}, a very simple way to obtain a reasonable estimate of the index and normalisation of a power law power spectrum, $S(f) = \beta f^{-\alpha}$, is by linear regression of $\log I_j^{\rm obs}$ on $\log f_j$ \citep[see also][]{Pilgram98}. This provides approximately unbiased and normally distributed estimates of the power law index ($\alpha$) and normalisation (actually $\log \beta$) even for relatively few periodogram points (i.e. short time series). The log periodogram regression method has the advantage of being extremely simple computationally, so that estimates of the power law parameters (and their uncertainties) can be found with minimal effort. However, the method does not easily generalise to other model forms and does not give the same results as direct maximum likelihood analysis\footnote{\citet{Andersson02} provided a modification of the \citet{Geweke83} fitting method based on the fact that the logarithm of the periodogram ordinates follow a Gumbel distribution. He gives the log likelihood function for the logarithm of the periodogram fitted with a linear function. Maximising this function should give the maximum likelihood estimates of the power law parameters.} even in the special case of a power law model. 
As discussed in \citet{Anderson90}, and also Appendix A of \citetalias{Vaughan05}, maximum likelihood estimates (MLEs) of the parameters of a model $S({\mbox{\boldmath $\theta$}})$ may be found by maximizing the joint likelihood function \begin{equation} \label{eqn:like} p( \mathbf{I} | {\mbox{\boldmath $\theta$}}, H ) = \prod_{j=1}^{N/2} p(I_j | S_j) \end{equation} (cf. eqn \ref{eqn:pdist}), or equivalently minimising the following function \begin{equation} \label{eqn:mlogl} D( \mathbf{I}, {\mbox{\boldmath $\theta$}}, H) = -2 \log p( \mathbf{I} | {\mbox{\boldmath $\theta$}}, H) = 2 \sum_{j=1}^{N/2} \left\{ \frac{I_j}{S_j} + \log S_j \right\}, \end{equation} which is twice the minus log likelihood\footnote{ The periodogram is $\chi^2_2$ distributed (equation~\ref{eqn:pdist}) for Fourier frequencies $j=1, 2, \ldots, N/2-1$. At the Nyquist frequency ($j=N/2$) it has a $\chi^2_1$ distribution. One could choose to ignore the Nyquist frequency (sum over $j=1,\ldots,N/2-1$ only), or modify the likelihood function to account for this. But in the limit of large $N$ the effect on the overall likelihood should be negligible, and so we ignore it here and sum over all non-zero Fourier frequencies.}. This is sometimes known as the Whittle likelihood method, after \citet{Whittle53} and \citet{Whittle57}, and has been discussed in detail elsewhere \citep[e.g.][]{Hannan73, Pawitan94, Fan04, Contreras06}. Here we use the notation $D( \mathbf{I}, {\mbox{\boldmath $\theta$}})$ for consistency with \citet[][section 6.7]{Gelman04} where it is used as the \emph{deviance}, a generalisation of the common weighted square deviation (or chi square) statistic. 
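Eqn.~\ref{eqn:mlogl} is simple to implement directly. The sketch below, using a hypothetical power-law spectrum, checks term by term that the deviance agrees with minus twice the log of the product of exponential densities in eqn.~\ref{eqn:pdist}:

```python
import numpy as np

def whittle_deviance(I, S):
    """D = -2 log L = 2 * sum( I_j/S_j + log S_j ), cf. eqn. (mlogl)."""
    return 2.0 * np.sum(I / S + np.log(S))

rng = np.random.default_rng(1)
f = np.arange(1, 129) / 256.0             # Fourier frequencies (hypothetical)
S = 0.1 * f ** -2.0                       # assumed power-law spectrum
I = rng.exponential(scale=S)              # periodogram drawn from eqn. (pdist)

# log of the joint exponential likelihood, summed term by term
log_like = np.sum(-np.log(S) - I / S)
assert np.isclose(whittle_deviance(I, S), -2.0 * log_like)
```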
Finding the MLEs of the parameters is the same as finding\footnote{For a function $f(x)$, $\operatorname{arg\,min} f(x)$ gives the set of points $x$ for which $f(x)$ attains its minimum value.} \begin{eqnarray} \label{eqn:mle} \hat{{\mbox{\boldmath $\theta$}}}_{\rm MLE} & = & \underset{{\mbox{\boldmath $\theta$}}}{\operatorname{arg\,min}} ~ D( \mathbf{I}^{\rm obs}, {\mbox{\boldmath $\theta$}}, H) \nonumber \\ & = & \underset{{\mbox{\boldmath $\theta$}}}{\operatorname{arg\,max}} ~ p( \mathbf{I} = \mathbf{I}^{\rm obs}| {\mbox{\boldmath $\theta$}}, H) \end{eqnarray} \section{Bayesian periodogram analysis through MCMC} \label{sect:ba} We have now laid the groundwork for a fully Bayesian periodogram analysis. Equation~\ref{eqn:like} gives the likelihood function for the data given the model $S({\mbox{\boldmath $\theta$}})$, or equivalently, equation~\ref{eqn:mlogl} gives the minus log likelihood function, which is often easier to work with. Once we assign a prior distribution on the model parameters we can obtain their joint posterior distribution using Bayes theorem (eqn~\ref{eqn:bayes_eqn}) \begin{equation} p({\mbox{\boldmath $\theta$}} | \mathbf{I}, H) \propto p(\mathbf{I} | {\mbox{\boldmath $\theta$}}, H) p({\mbox{\boldmath $\theta$}} | H) = q({\mbox{\boldmath $\theta$}} | \mathbf{I}, H), \end{equation} where $q({\mbox{\boldmath $\theta$}} | \mathbf{I}, H)$ is the unnormalised (joint posterior) density function (the normalisation does not depend on ${\mbox{\boldmath $\theta$}}$). This can be summarised by the posterior mean (or mode) and credible intervals (equations \ref{eqn:post-mean} and \ref{eqn:ci}). We may also assess the overall fit using a posterior predictive $p$-value for some useful test quantity.
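With flat (uniform) priors the posterior mode coincides with the MLE of eqn.~\ref{eqn:mle}, and the minimisation can be carried out numerically. A sketch for a pure power law $S(f)=\beta f^{-\alpha}$ fitted to a simulated periodogram (the parameter values and starting guess are hypothetical):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
f = np.arange(1, 513) / 1024.0
S_true = 1e-2 * f ** -1.5                  # "true" spectrum: alpha = 1.5
I_obs = rng.exponential(scale=S_true)      # simulated periodogram, eqn. (pdist)

def deviance(theta):
    alpha, log_beta = theta                # fit log(beta), a scale parameter
    S = np.exp(log_beta) * f ** -alpha
    return 2.0 * np.sum(I_obs / S + np.log(S))   # eqn. (mlogl)

# arg-min of the deviance = MLE (eqn. mle); x0 is a rough starting guess
res = minimize(deviance, x0=[1.0, -4.0], method="Nelder-Mead")
alpha_mle, log_beta_mle = res.x
```

With $N/2 = 512$ frequencies the recovered index should lie close to the input value of $1.5$.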
We may now write an expression for the joint posterior density (up to a normalisation term), or its negative logarithm (up to an additive constant) \begin{equation} \label{eqn:mlogpost} - \log q({\mbox{\boldmath $\theta$}} | \mathbf{I}, H) = D( \mathbf{I}, {\mbox{\boldmath $\theta$}}, H)/2 - \log p({\mbox{\boldmath $\theta$}}|H). \end{equation} The posterior mode may then be found by minimising this function (e.g. using a good numerical non-linear minimisation algorithm, or Monte Carlo methods in the case of complex, multi-parameter models). In the limit of large sample size ($N \rightarrow \infty$) the posterior density will tend to a multivariate Normal under quite general conditions \citep[see chapter 4 of][]{Gelman04}. For finite $N$ we may make a first approximation to the posterior using a multivariate Normal distribution centred on the mode and with a covariance matrix $\Sigma$ equal to the inverse of the curvature (i.e. minus the inverse Hessian) of the log posterior at the mode (see \citealt{Gelman04}, section 12.2 and \citealt{Albert07}, section 5.5). (Approximating the posterior as a Normal in this way is often called the \emph{Laplace approximation}.) This can be used as the basis of a \emph{proposal distribution} in a Markov chain Monte Carlo (MCMC) algorithm that can efficiently generate draws from the posterior distribution $q({\mbox{\boldmath $\theta$}} | \mathbf{I} , H)$, given some data $\mathbf{I} = \mathbf{I}^{\rm obs}$. The MCMC was generated by a random-walk Metropolis-Hastings algorithm using a multivariate Normal (with the covariance matrix as above, but centred on the most recent iteration) as the proposal distribution. More details on posterior simulation using MCMC are given in Appendix \ref{sect:mc}.
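The random-walk algorithm described above can be sketched compactly. In this illustration the target is a standard bivariate Normal standing in for the unnormalised posterior $q$, and the proposal covariance is an arbitrary choice of the sketch, not a value from the paper:

```python
import numpy as np

def metropolis(log_post, theta0, Sigma, n_steps, rng):
    """Random-walk Metropolis: multivariate Normal proposals centred
    on the current point, accepted with probability min(1, q'/q)."""
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    L = np.linalg.cholesky(Sigma)            # factor of proposal covariance
    chain = np.empty((n_steps, theta.size))
    for i in range(n_steps):
        proposal = theta + L @ rng.standard_normal(theta.size)
        lp_prop = log_post(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            theta, lp = proposal, lp_prop
        chain[i] = theta
    return chain

rng = np.random.default_rng(3)
chain = metropolis(lambda t: -0.5 * t @ t,   # log density of a 2-d N(0, I)
                   np.zeros(2), 0.5 * np.eye(2), 20000, rng)
```

After discarding an initial stretch of the chain, the sample moments should recover the target's zero mean and unit standard deviation.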
For each set of simulated parameters we may generate the corresponding spectral model $S({\mbox{\boldmath $\theta$}})$ and use this to generate a periodogram $\mathbf{I}^{\rm rep}$ from the posterior predictive distribution (which in turn may be used to generate a time series if needed, see Appendix \ref{sect:sim-ts}). \section{Posterior predictive periodogram checks} \label{sect:ppper} With the data simulated from the posterior predictive distribution, $\mathbf{I}^{\rm rep}$, we may calculate the distribution of any test statistic. Of course, we wish to use statistics that are sensitive to the kinds of model deficiency we are interested in detecting, such as breaks/bends in the smooth continuum, and narrow peaks due to QPOs. Given the arguments of section~\ref{sect:pc}, a sensible choice of statistic for investigating QPOs is $T_{\rm R} = \max_j \hat{R}_j$ (see equation \ref{eqn:Rstat}). Notice that there is no need to perform a multiple-trial (Bonferroni) correction to account for the fact that many frequencies are tested before the strongest candidate is selected, as long as exactly the same procedure is applied to the simulated data as to the real data. Another useful statistic is based on the traditional $\chi^2$ statistic, i.e. the sum of the squared standardised residuals \begin{equation} \label{eqn:sse} \chi^2(\mathbf{I},{\mbox{\boldmath $\theta$}}) = \sum_{j=1}^{N/2} \frac{ ( I_j - \mathrm{E}[I_j|{\mbox{\boldmath $\theta$}}] )^2 }{\mathrm{V}[I_j|{\mbox{\boldmath $\theta$}}]} = \sum_{j=1}^{N/2} \left( \frac{ I_j - S_j({\mbox{\boldmath $\theta$}}) }{ S_j({\mbox{\boldmath $\theta$}}) } \right)^2 \end{equation} where $\mathrm{E}[\cdot]$ and $\mathrm{V}[\cdot]$ indicate expectation and variance, respectively. We use $T_{\rm SSE} = \chi^2(\mathbf{I}, \hat{{\mbox{\boldmath $\theta$}}})$ where $\hat{{\mbox{\boldmath $\theta$}}}$ is the mode of the posterior distribution.
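The construction of the posterior predictive distributions of these statistics can be sketched as follows. The continuum model and the ``posterior'' draws below are hypothetical stand-ins for the MCMC output, and for brevity every replicate is compared against a single fixed $\hat{S}_j$ rather than being refitted, which the full procedure requires:

```python
import numpy as np

rng = np.random.default_rng(4)
f = np.arange(1, 257) / 512.0

def S_model(alpha, beta):                 # hypothetical power-law continuum
    return beta * f ** -alpha

# stand-ins for MCMC draws from the posterior of (alpha, beta)
alphas = rng.normal(1.5, 0.05, size=2000)
betas = rng.normal(1e-2, 1e-3, size=2000)

I_obs = rng.exponential(scale=S_model(1.5, 1e-2))
S_hat = S_model(alphas.mean(), betas.mean())    # spectrum at posterior mean
T_R_obs = np.max(2.0 * I_obs / S_hat)           # T_R = max_j Rhat_j
T_SSE_obs = np.sum(((I_obs - S_hat) / S_hat) ** 2)

T_R_rep = np.empty(alphas.size)
T_SSE_rep = np.empty(alphas.size)
for i in range(alphas.size):
    # one replicated periodogram per posterior draw (eqn. pdist)
    I_rep = rng.exponential(scale=S_model(alphas[i], betas[i]))
    T_R_rep[i] = np.max(2.0 * I_rep / S_hat)
    T_SSE_rep[i] = np.sum(((I_rep - S_hat) / S_hat) ** 2)

p_R = np.mean(T_R_rep >= T_R_obs)               # posterior predictive
p_SSE = np.mean(T_SSE_rep >= T_SSE_obs)         # p-values
```

The same maximisation (and, in the full method, refitting) is applied to each replicate as to the observed data, which is what removes the need for an explicit multiple-trial correction.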
$T_{\rm SSE}$ is an ``omnibus'' test of the overall data-model match (``goodness-of-fit'') and will be more sensitive to inadequacies in the continuum modelling, since all data points are included (not just the largest outlier as in $T_{\rm R}$). It is the same as the merit function used by \citet[][eqn. 16]{Anderson90}; the name stands for Summed Square Error. The above two statistics are useful for assessing different aspects of model fitness. By contrast the Likelihood Ratio Test (LRT) statistic \citep{Eadie71, Cowan98, Protassov02} is a standard tool for comparing nested models. As such it may be used to select a continuum model prior to investigating the residuals for possible QPOs. The LRT statistic is minus twice the logarithm of the ratio of the maximised likelihoods of the two models, equivalent to the difference between their deviance minima \begin{eqnarray} \label{eqn:lrt} T_{\rm LRT} & = & -2 \log \frac{p(\mathbf{I}|\hat{{\mbox{\boldmath $\theta$}}}_{\rm MLE}^0,H_0)}{p(\mathbf{I}|\hat{{\mbox{\boldmath $\theta$}}}_{\rm MLE}^1,H_1)} \nonumber \\ & = & D_{\rm min}(H_0) - D_{\rm min}(H_1). \end{eqnarray} Asymptotic theory shows that, provided certain regularity conditions are met, this statistic should be distributed as a chi square variable, $T_{\rm LRT} \sim \chi_{\nu}^2$, where the number of degrees of freedom $\nu$ is the difference between the number of free parameters in $H_1$ and $H_0$. When the regularity conditions are not met \citep[see][]{Freeman99, Protassov02, Park08} we do not expect the distribution to be that of the asymptotic theory. Nevertheless, the LRT is a powerful statistic for comparing models and can be calibrated by posterior predictive simulation, as shown by \citet{Protassov02} and \citet{Rubin94}.
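As a concrete arithmetic check, eqn.~\ref{eqn:lrt} can be evaluated from the deviance minima reported in section~\ref{sect:rej1034} for RE J$1034+396$, alongside the $\chi^2_1$ tail area that the asymptotic theory would naively assign (one extra parameter, so $\nu = 1$):

```python
import numpy as np
from scipy.stats import chi2

D_min_H0, D_min_H1 = 504.89, 495.22   # values quoted for RE J1034+396
T_LRT = D_min_H0 - D_min_H1           # eqn. (lrt)
assert np.isclose(T_LRT, 9.67)

# Naive chi^2_1 tail area: NOT reliable here, because the regularity
# conditions behind the asymptotic theory fail for this comparison;
# the posterior predictive calibration is needed instead.
p_naive = chi2.sf(T_LRT, df=1)
```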
\section{Application to AGN data} \label{sect:data} In this section we apply the method detailed above to two example datasets, both long observations of nearby, variable Seyfert 1 galaxies, obtained from the {\it XMM-Newton}\ Science Archive\footnote{See {\tt http://xmm.esac.esa.int/}.}. \subsection{The power spectrum model} We shall restrict ourselves to two simple models for the high frequency power spectrum of the Seyferts. The first ($H_0$) is a power law plus a constant (to account for the Poisson noise in the detection process) \begin{equation} \label{eqn:m0} S(f) = \beta f^{-\alpha} + \gamma \end{equation} with three parameters ${\mbox{\boldmath $\theta$}} = \{\alpha, \beta, \gamma \}$, where $\beta$ (the power law normalisation) and $\gamma$ (the additive constant) are constrained to be non-negative. The second model ($H_1$) is a bending power law as advocated by \citet{Mchardy04} \begin{equation} \label{eqn:m1} S(f) = \beta f^{-1} \left( 1 + \left\{ \frac{f}{\delta} \right\}^{\alpha-1} \right)^{-1} + \gamma \end{equation} with four parameters ${\mbox{\boldmath $\theta$}} = \{ \alpha, \beta, \gamma, \delta \}$. For this model $\beta$, $\gamma$ and $\delta$ (the bending frequency) are all non-negative. The parameter $\alpha$ gives the slope at high frequencies ($f \gg \delta$) in model $H_1$, and the low frequency slope is assumed to be $-1$. (This assumption simplifies the model fitting process, and seems reasonable given the results of \citealt{Uttley02, Markowitz03, Mchardy04} and \citealt{Mchardy06}, but could be relaxed if the model checking process indicated a significant model misfit.) In the limit of $\delta \rightarrow 0$ the form of $H_1$ tends to that of the simple power law $H_0$. Following the advice given in \citet{Gelman04} we apply a logarithmic transformation to the non-negative parameters. 
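The two continuum models of eqns.~\ref{eqn:m0} and \ref{eqn:m1} are straightforward to code, and the limiting slopes of $H_1$ can be checked numerically (the parameter values below are arbitrary illustrations, not fitted values):

```python
import numpy as np

def S_H0(f, alpha, beta, gamma):
    """Power law plus constant (eqn. m0)."""
    return beta * f ** -alpha + gamma

def S_H1(f, alpha, beta, gamma, delta):
    """Bending power law (eqn. m1): slope -1 below the bend at
    f = delta, slope -alpha above it."""
    return beta * f ** -1.0 / (1.0 + (f / delta) ** (alpha - 1.0)) + gamma

def local_index(S, f):
    # -d log S / d log f measured over one octave
    return np.log2(S(f) / S(2.0 * f))

# check the limiting slopes (gamma = 0 isolates the continuum shape)
S = lambda f: S_H1(f, 3.0, 1.0, 0.0, 1e-4)
assert np.isclose(local_index(S, 1e-7), 1.0, atol=0.01)   # f << delta
assert np.isclose(local_index(S, 1e-1), 3.0, atol=0.01)   # f >> delta
```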
The motivation for this transformation is that the posterior should be more symmetric (closer to Normal), and so easier to summarise and handle in computations, if expressed in terms of the transformed parameters. We assign a uniform (uninformative) prior density\footnote{Although strictly speaking these prior densities are improper, meaning they do not integrate to unity, we may easily define the prior density to be positive only within some large but reasonable range of parameter values, and zero elsewhere, and thereby arrive at a proper prior density. In the limit of large $N$ the likelihood will dominate over the uninformative prior and hence the exact form of the prior density will become irrelevant to the posterior inferences.} to the transformed parameters, e.g. $p(\alpha, \log \beta, \log \gamma) = const$ for model $H_0$. This corresponds to a uniform prior density on the slope $\alpha$ and a Jeffreys prior on the parameters restricted to be non-negative (e.g. $p(\beta) = 1/\beta$), which is the conventional prior for a scale factor \citep{Lee89, Sivia96, Gelman04, Gregory05, Albert07}. \begin{figure} \centering \includegraphics[width=6.5cm,angle=270]{qpo1.ps} \caption{Posterior predictive distribution of the LRT statistic under $H_0$ for the RE J$1034+396$ data (computed using $5,000$ datasets simulated under $H_0$). The observed value $T_{\rm LRT}^{\rm obs}=9.67$ is shown with the vertical line, alongside the corresponding $p$-value. The distribution is not $\chi^2_1$ as would be predicted by the standard theory but instead resembles a mixture of distributions with half the probability in a $\chi^2_1$ distribution and half concentrated around zero. This might be expected given the arguments of \citet[][section 5.4]{Titterington85} and \citet[][Appendix B]{Protassov02}. } \label{fig:lrt} \end{figure} \begin{table} \caption{Posterior summaries of parameters for model $H_1$ for the RE J$1034+396$ data.
The four parameters are as follows: $\alpha =$ power law index, $\beta =$ normalisation (in power density units at $1$ Hz, i.e. $[$rms$/$mean$]^2$ Hz$^{-1}$), $\gamma$ (Poisson noise level in power density units, $[$rms$/$mean$]^2$ Hz$^{-1}$), $\delta$ (bend frequency in Hz). The columns give the parameter name, the posterior mean and the lower and upper bounds of the $90$ per cent credible intervals.} \label{tab:h1} \centering \begin{tabular}{l l l l} \hline\hline Parameter & mean & $5$\% & $95$\% \\ \hline $\alpha$ & $3.4$ & $2.2$ & $5.2$ \\ $\beta$ & $2.3 \times 10^{-3}$ & $1.4 \times 10^{-3}$ & $3.7 \times 10^{-3}$ \\ $\gamma$ & $0.40$ & $0.34$ & $0.45$ \\ $\delta$ & $4.3 \times 10^{-4}$ & $2.0 \times 10^{-4}$ & $6.5 \times 10^{-4}$ \\ \hline \end{tabular} \end{table} \subsection{Application to {\it XMM-Newton}\ data of RE J1034+396} \label{sect:rej1034} The first test case we discuss is the interesting {\it XMM-Newton}\ observation of the ultrasoft Seyfert 1 galaxy RE J$1034+396$. \citet{Gierlinski08} analysed these data and reported the detection of a significant QPO which, if confirmed in repeat observations and by independent analyses, would be the first robust detection of its kind. For the present analysis a $0.2-10$ keV time series was extracted from the archival data using standard methods \citep[e.g.][]{Vaughan03b} and binned to $100$ s, to match that used by \citet{Gierlinski08}. The two candidate continuum models discussed above, $H_0$ and $H_1$, were compared to the data, which gave $D_{\rm min}^{\rm obs}(H_0) = 504.89$ and $D_{\rm min}^{\rm obs}(H_1) = 495.22$, therefore $T_{\rm LRT}^{\rm obs} = 9.67$. The MCMC was used to draw from the posterior of model $H_0$, and these draws were used to generate posterior predictive periodogram data, which were also fitted with the two models and the results used to map out the posterior predictive distribution of $T_{\rm LRT}$, which is shown in Fig. \ref{fig:lrt}.
The corresponding tail area probability for the observed value is $p = 0.001$, small enough that the observed reduction in $D_{\rm min}$ between $H_0$ and $H_1$ is larger than might be expected by chance if $H_0$ were true. We therefore favour $H_1$ and use this as the continuum model. In the absence of complicating factors (see below) this amounts to a significant detection of a power spectral break. Using $H_1$ as the continuum model we then map out the posterior distribution of the parameters using another MCMC sample. Table \ref{tab:h1} presents the posterior means and intervals for the parameters of model $H_1$, and Figure \ref{fig:post} shows the pairwise marginal posterior distributions for the parameters of the model. Figure \ref{fig:rej-fit} shows the data and model evaluated at the posterior mode. \begin{figure*} \centering \includegraphics[width=9cm,angle=270]{qpo2.ps} \caption{Pairwise marginal posterior distributions for the parameters of $H_1$: $\alpha =$ power law index, $\beta =$ normalisation (in power density units at $1$ Hz, i.e. $[$rms$/$mean$]^2$ Hz$^{-1}$), $\gamma$ (Poisson noise level in power density units, $[$rms$/$mean$]^2$ Hz$^{-1}$), $\delta$ (bend frequency in Hz). The parameters $\beta$ and $\delta$ are shown on a logarithmic scale. The lower-left panels show the contours evaluated using all $75,000$ posterior simulations, and the upper-right panels show some of the simulated posterior data (for clarity only $1,000$ points are shown). } \label{fig:post} \end{figure*} \begin{figure} \centering \includegraphics[width=6.0cm,angle=270]{fit1.ps} \caption{RE J$1034+396$ data and model ($H_1$) computed at the posterior mode. The data are shown as the histogram and the model is shown with the smooth curve. The lower panel shows the data/model residuals on a logarithmic scale. See \citet{Gierlinski08} for details of the observation.
} \label{fig:rej-fit} \end{figure} Clearly there is a large outlier at $\sim 2.5 \times 10^{-3}$ Hz in the residuals after dividing out the model ($H_1$, computed at the posterior mode) which may be due to additional power from a QPO. We therefore calculated the posterior predictive distributions of the two test statistics $T_{\rm R}$ and $T_{\rm SSE}$ and compared these to the observed values ($T_{\rm R}^{\rm obs} = 18.41$ and $T_{\rm SSE}^{\rm obs} = 542.3$). The posterior predictive distributions of these two statistics, derived from $5,000$ simulations, are shown in Fig. \ref{fig:ppdist}. Both these statistics give moderately low $p$-values ($p_{\rm R} = 0.035$ and $p_{\rm SSE} = 0.025$), indicating that there is room for improvement in the model and that the largest outlier is indeed rather unusual under $H_1$. This may indicate the presence of power from a QPO or some other deficiency in the continuum model. Very similar results were obtained after repeating the posterior predictive $p$-value calculations with a variant of $H_1$ in which the low frequency index (at $f \ll \delta$) is fixed at $0$ rather than $-1$, indicating that the $p$-values are not very sensitive to this aspect of the continuum model. \begin{figure*} \centering \hbox{ \includegraphics[width=6.0cm,angle=270]{qpo3.ps} \includegraphics[width=6.0cm,angle=270]{qpo4.ps} } \caption{Posterior predictive distributions of the $T_{\rm R}$ and $T_{\rm SSE}$ statistics under $H_1$ for the RE J$1034+396$ data. The observed value of each is shown with a vertical line. } \label{fig:ppdist} \end{figure*} \citet{Gierlinski08} split the time series into two segments and focussed their analysis on the second of these, for which the periodogram residual was largest and concentrated in one frequency bin only.
The division of the data into segments is based on a partial analysis of the data -- it is in effect the application of a data-dependent ``stopping rule'' -- and it is extremely difficult to see how such a procedure could be included in the generation of replicated data $\mathbf{I}^{\rm rep}$ used to calibrate the posterior predictive $p$-values. We therefore consider $p$-values only for the analysis of the entire time series and do not try to replicate exactly the analysis of \citet{Gierlinski08}. \subsection{Application to {\it XMM-Newton}\ data of Mrk $766$} \label{sect:mrk766} A similar analysis was performed on the {\it XMM-Newton}\ observation of Mrk 766 discussed previously by \citet{Vaughan03}, who claimed to have detected a power spectral break using frequentist (classical) statistical tools such as $\chi^2$ fitting. The LRT statistic for the data was $T_{\rm LRT}^{\rm obs} = 18.56$, and the posterior predictive distribution for this statistic had the same shape as in the case of RE J$1034+396$ (Figure \ref{fig:lrt}). The $p$-value for the LRT comparison between $H_0$ and $H_1$ was $p < 2 \times 10^{-4}$ (i.e. not one of the $5,000$ simulations gave a larger value of $T_{\rm LRT}$). This amounts to a very strong preference for $H_1$ over $H_0$, i.e. a solid detection of a spectral break. Table \ref{tab:h2} summarises the posterior inferences for the parameters of $H_1$ and Figure \ref{fig:mrk-fit} shows the data, model and residuals. The residuals show no extreme outliers, and indeed the observed values of the test statistics $T_{\rm R}$ and $T_{\rm SSE}$ were not outliers in their posterior predictive distributions ($p_{\rm R}=0.93$ and $p_{\rm SSE}=0.89$). These suggest that $H_1$ provides an adequate description of the data (i.e. without any additional components). \begin{table} \caption{Posterior summaries of parameters for model $H_1$ for the Mrk $766$ data.
The columns are as in Table~\ref{tab:h1}.} \label{tab:h2} \centering \begin{tabular}{l l l l} \hline\hline Parameter & mean & $5$\% & $95$\% \\ \hline $\alpha$ & $2.7$ & $2.4$ & $3.1$ \\ $\beta$ & $1.6 \times 10^{-2}$ & $0.95 \times 10^{-2}$ & $2.7 \times 10^{-2}$ \\ $\gamma$ & $0.10$ & $0.084$ & $0.12$ \\ $\delta$ & $2.1 \times 10^{-4}$ & $0.97 \times 10^{-4}$ & $3.4 \times 10^{-4}$ \\ \hline \end{tabular} \end{table} \begin{figure} \centering \includegraphics[width=6.0cm,angle=270]{fit2.ps} \caption{Mrk $766$ data and model ($H_1$) computed at the posterior mode. The panels are the same as in Figure \ref{fig:rej-fit}.} \label{fig:mrk-fit} \end{figure} \subsection{Sensitivity to choice of priors} \label{sect:prior} It is important to check the sensitivity of the conclusions to the choice of the prior densities, by studying, for example, the effect of a different or modified choice of prior on the posterior inferences. We have therefore repeated the analysis of the RE J$1034+396$ data using a different choice of priors. In particular, we used independent Normal densities on the four transformed parameters of $H_1$; this is equivalent to a Normal density on the index $\alpha$ and log-normal densities on the non-negative valued parameters $\beta$, $\gamma$ and $\delta$. In other words, for each of the transformed parameters $p(\theta_i|H_1) = N(\mu_i, \sigma_i^2)$ where the \emph{hyperparameters} $\mu_i$ and $\sigma_i$ control the mean and width of the prior density functions. After choosing values for the hyperparameters based on knowledge gained from previous studies of nearby, luminous Seyfert galaxies \citep[e.g.][]{Uttley02, Markowitz03, Papadakis04, Mchardy06}, as outlined below, the posterior summaries (parameter means and intervals, pairwise marginal posterior contours, and posterior predictive $p$-values) were essentially unchanged, indicating that the inferences are relatively stable to the choice of prior.
Previous studies usually gave a high frequency index parameter in the range $\alpha \sim 1-3$, and so we assigned $p(\alpha|H_1) = N(2,4)$, i.e. a prior centred on the typical index of $2$ but with a large dispersion (standard deviation of $2$). The normalisation of the $f^{-1}$ part of the power spectrum is thought to be similar between different sources, with $\beta \sim 0.005 - 0.03$ \citep[see][]{Papadakis04}, so we assigned $p(\log \beta | H_1) = N(-2,1)$, i.e. a decade dispersion around the mean of $\beta \sim 10^{-2}$. The Poisson noise level is dependent on the count rate, which can be predicted very crudely based on previous X-ray observations; we assigned a prior $p(\log \gamma | H_1) = N(0,1)$. The bend/break frequency $\delta$ is thought to correlate with other system parameters such as $M_{\rm BH}$, bolometric luminosity $L_{\rm Bol}$ and optical line width (e.g. the FWHM of ${\rm H}\beta$). Using the estimated luminosity, and assuming RE J$1034+396$ is radiating close to the Eddington limit \citep{Middleton09} gave a prediction for the bend timescale of $T_{\rm b} \sim 1.6 \times 10^{3}$ s, and using the optical line width of \citet{Veron01} gave $T_{\rm b} \sim 1.2 \times 10^{3}$ s, using the relations of \citet{Mchardy06}. Both these (independent) predictions suggest $\delta = 1/T_{\rm b} \sim 10^{-3}$ Hz, and we therefore assigned a prior density $p(\log \delta | H_1) = N(-3,1)$. All of these priors are reasonably non-informative -- they have quite large dispersion around the mean values, to account for the fact that the empirical relations used to make these predictions are themselves rather uncertain and also contain intrinsic scatter (i.e. there are significant source to source differences) -- yet they do include salient information about the model obtained from other sources.
\section{Discussion} \label{sect:disco} We have described, in sections \ref{sect:ml}-\ref{sect:ppper}, a Bayesian analysis of periodogram data that can be used to estimate the parameters of a power spectral model of a stochastic process, compare two competing continuum models, and test for the presence of a narrow QPO (or strict periodicity). \subsection{Limitations of the method} The Whittle likelihood function (equation \ref{eqn:like}) is only an approximation to the true sampling distribution of a periodogram. In the absence of distortions due to the sampling window (more on this below), the ordinates of the periodogram of all stationary, linear (and many non-linear) stochastic processes become independently distributed following equation \ref{eqn:pdist} as $N \rightarrow \infty$. With finite $N$ (i.e. for real data) this is only approximately true, although with reasonable sample sizes (e.g. $N > 100$) it is a very good approximation. More serious worries about the distribution of the periodogram, and hence the validity of the Whittle likelihood, come from distortions due to the sampling effects known as aliasing and leakage \citep[e.g.][]{Uttley02}. It is fairly well established that X-ray light curves from Seyfert galaxies are stationary once allowance has been made for their red noise character and the linear ``rms-flux'' relation \citep[see][]{Vaughan03b, Uttley05}. Distortions in the expectation of the periodogram can be modelled by simulating many time series for a given power spectral model, resampling these in time as for the original data, and then calculating the average of their periodograms \citep[][and Appendix \ref{sect:sim-ts}]{Uttley02}. This does not account for distortions in the distribution of the periodogram ordinates (away from equation \ref{eqn:pdist} predicted by asymptotic theory), which is a more challenging problem with (as yet) no accepted solution. 
However, these effects will be minor or negligible for the data analysed in section \ref{sect:data}, which are contiguously binned, as the effect of aliasing will be lost in the Poisson noise spectrum which dominates at high frequencies \citep{vanderklis89, Uttley02}, and the leakage of power from lower to higher frequencies is very low in cases where the power spectrum index is $\alpha \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 1$ at the lowest observed frequencies. The task of fully accounting for sampling distortions in both the expectation and distribution of the periodogram, and hence having a more general likelihood function, is left for future work. We should also point out that the usual limitations on the use and interpretation of the periodogram apply. These include the (approximate) validity of the Whittle likelihood only when the time series data are evenly sampled. It may be possible to adjust the likelihood function to account for the non-independence of ordinates in the modified periodogram usually used with unevenly sampled time series \citep[e.g.][]{Scargle82}, but here we consider only evenly sampled data. It is also the case that the periodogram, based on a decomposition of the time series into sinusoidal components, is most sensitive to sinusoidal oscillations, especially when they lie close to a Fourier frequency (i.e. the time series spans an integer number of cycles; see \citealt{vanderklis89}). In situations where the time series is large and spans many cycles of any possible periods (the large $N$ regime), there is no reason to go beyond the standard tools of time series processing such as the (time and/or frequency) binned periodogram with approximately normal error bars \citep{vanderklis89}. The current method uses the raw periodogram of a single time series (with the Whittle likelihood) in order to preserve the frequency resolution and bandpass of the data, which is more important in the low $N$ regime (e.g.
when only a few cycles of a suspected period are observed). The time series data analysed in section \ref{sect:data} were binned up to $100$ s prior to computing the periodogram; this in effect ignores frequencies above $5 \times 10^{-3}$ Hz which are sampled by the raw data from the detectors (recorded in counts per CCD frame at a much higher rate). The choice of bin size does affect the sensitivity to periodic signals of the method described in sections \ref{sect:ml}-\ref{sect:ppper}. Obviously one loses sensitivity to periodic components at frequencies higher than the Nyquist frequency. But also as more frequencies are included in the analysis there are more chances to find high $T_{\rm R}$ values from each simulation, which means the posterior predictive distribution of the test statistic does depend on the choice of binning. One could mitigate this by imposing a priori restrictions on the frequencies of any allowed periods, for example by altering the test statistic to be $T_{\rm R} = \max_{j < J_0} R_j$ where $J_0$ is some upper limit. (The lower frequency of the periodogram is restricted by the duration of the time series, which is often dictated by observational constraints.) But these must be specified independently of the data, otherwise this is in effect another data-dependent stopping rule (the effect of limiting the frequency range of the search is illustrated below in the case of RE J$1034+396$). This sensitivity to choice of binning could be handled more effectively by considering the full frequency range of the periodogram (i.e. no rebinning of the raw data) and explicitly modelling the periodic component of the spectrum with an appropriate prior on the frequency range (or an equivalent modelling procedure in the time domain). But this suffers from the practical drawbacks discussed below.
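The restricted statistic just described is a one-line change to the unrestricted one. A sketch (assuming the residual takes the form $R_j = 2I_j/\hat{S}_j$; the exact definition is given in section \ref{sect:ppper}):

```python
import numpy as np

def t_r(I, S, f=None, f_limit=None):
    """T_R = max_j R_j with R_j = 2 I_j / S_j (our assumed form of the
    data/model residual).  If f and f_limit are given, the maximum is
    taken only over frequencies below f_limit, i.e. T_R = max_{j<J0} R_j."""
    R = 2.0 * I / S
    if f is not None and f_limit is not None:
        R = R[f < f_limit]
    return float(np.max(R))
```

As emphasised in the text, `f_limit` must be chosen independently of the data, otherwise the restriction acts as a data-dependent stopping rule.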
\subsection{Alternative approaches to model selection} In many settings the Likelihood Ratio Test (LRT, or the closely related $F$-test) is used to choose between two competing models: the observed value of the LRT statistic is compared to its theoretical sampling (or reference) distribution, and this is usually summarised with a tail area probability, or $p$-value. As discussed above this procedure is not valid unless certain specific conditions are satisfied by the data and models. In the case of comparing a single power law ($H_0$ of section \ref{sect:data}) to a bending power law ($H_1$) the simpler model is reproduced by setting the extra parameter $\delta \rightarrow 0$ in the more complex model, which violates one of the conditions required by the LRT (namely that null values of the extra parameters should not lie at the boundaries of the parameter space). In order to use the LRT we must find the distribution of the statistic appropriate for the given data and models, which can be done using posterior predictive simulations. This method has the benefit of naturally accounting for nuisance parameters by giving the expectation of the classical $p$-value over the posterior distribution of the (unknown) nuisance parameters. One could in principle use the posterior predictive checks to compare a continuum only model (e.g. $H_0$ or $H_1$) to a continuum plus line (QPO) model ($H_2$) and thereby test for the presence of an additional QPO. \citet{Protassov02} and \citet{Park08} tackled just this problem in the context of X-ray energy spectra with few counts. However, we deliberately do not define and use a model with an additional line for the following reasons. Firstly, this would require a specific line model and a prior density on the line parameters, and it is hard to imagine these being generally accepted. 
Unless the line signal is very strong the resulting posterior inferences may be more sensitive to the (difficult) choice of priors than we would generally wish. Secondly, as shown by \citet{Park08}, there are considerable computational difficulties when using models with additional, narrow features and data with high variance (as periodograms invariably do), due to the highly multi-modal structure of the likelihood function. Our pragmatic alternative is to leave the continuum plus line model unspecified, but instead choose a test statistic that is particularly sensitive to narrow excesses in power such as might be produced under such a model \citep[see][and associated discussions, for more on the choice of test statistic in identifying model deficiency]{Gelman96}. This has the advantages of not requiring us to specify priors on the line parameters and simplifying the computations, but means the test is only sensitive to specific types of additional features that have a large effect on the chosen test statistic. (It is also worth pointing out that the periodogram ordinates are randomly distributed about the spectrum of the stochastic process $S_R(f)$. If a deterministic process is also present, e.g. producing a strictly periodic component to the signal, this will not in general follow the same $\chi^2$ distribution and the Whittle likelihood function would need to be modified in order to explicitly model such processes in the spectral domain.) Among the most popular Bayesian methods for choosing between competing models are Bayes factors \citep{Kass95, Carlin00, Gelman04, Lee89}. These provide a direct comparison of the weight of evidence in favour of one model compared to its competitor, in terms of the ratios of the marginal likelihoods for the two models (equation \ref{eqn:oddsratio}).
This may be more philosophically attractive than the posterior predictive model checking approach but in practice suffers from the same problems outlined above, namely the computational challenge of handling a multi-modal likelihood, and the sensitivity to priors on the line parameters, which may be even greater for Bayes factors than other methods \citep[see arguments in][]{Protassov02, Gelman04}. \subsection{Comparison with \citetalias{Vaughan05}} \citetalias{Vaughan05} tackled the same problem -- the assessment of red noise spectra and detection of additional periodic components from short time series -- using frequentist methods. The method developed in the present paper is superior in a number of ways. The new method is more general in the sense that the model for the continuum power spectrum (i.e. the ``null hypothesis'' model that contains no periodicities) may in principle take any parametric form but was previously restricted to a power law. It also provides a natural framework for assessing the validity of the continuum model, which should be a crucial step in assessing the evidence for additional spectral features (see below). Also, by using the Whittle likelihood rather than the \citet{Geweke83} fit function, the new method actually gives smaller mean square errors on the model parameters \citep[see][]{Andersson02}. \subsection{Comparison with other time series methods} Previous work on Bayesian methods for period detection \citep[e.g.][]{Bretthorst88, Gregory92, Gregory99} has focussed on cases where the stochastic process is assumed to be white (uncorrelated) noise on which a strictly periodic signal is superposed. They do not explicitly tackle the more general situation of a non-white continuum spectrum that is crucial to analysing data from compact accreting X-ray sources. The only non-Bayesian (i.e. 
frequentist) methods we are aware of for assessing evidence for periodicities in data with a non-white spectrum involve applying some kind of smoothing to the raw periodogram data. This gives a non-parametric estimate of the underlying spectrum, with some associated uncertainty on the estimate, which can then be compared to the unsmoothed periodogram data and used to search for outlying periodogram points. The Multi-Taper Method (MTM) of \citet{Thompson82} \citep[also][]{Thompson90} achieves the smoothing by averaging multiple periodograms, each computed using one member of a set of orthogonal data tapers. See \citet[][chapter 7]{Percival93} for a good discussion of this method. The data tapers are designed to reduce spectral leakage and so reduce bias in the resulting spectrum estimate. The method proposed by \citet{Israel96} involves a more straightforward running mean of the periodogram data. Both of these are non-parametric methods, meaning that they do not involve a specific parametric model for the underlying spectrum. This lack of model dependence might appear to be an advantage, but in fact may be a disadvantage in cases where we do have good reasons for invoking a particular type of parametric model (e.g. the bending power laws seen in the Seyfert galaxy data). The continuum model's few parameters may be well constrained by the data, where the non-parametric (smoothed) estimate at each frequency is not. The non-parametric methods also leave a somewhat arbitrary choice of how to perform the smoothing, i.e. the type and number of data tapers in the MTM, or the size/shape of the smoothing kernel in the \citet{Israel96} method. Also, it is less obvious how to combine the sampling distribution of the periodogram ordinate (line component) and the spectrum estimate (continuum), and how to account for the number of ``independent'' frequencies searched. These are all automatically included in the posterior predictive $p$-value method as outlined above.
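To make the last point concrete, here is a minimal sketch of the posterior predictive calibration (the function names are ours, and for brevity the observed statistic is evaluated at a single best-fitting model rather than refitted for each draw, a simplification relative to the full procedure). Each simulated periodogram is drawn as $I_j = S_j \chi^2_2/2$, the asymptotic distribution of the ordinates (equation \ref{eqn:pdist}):

```python
import numpy as np

def posterior_predictive_pvalue(stat, I_obs, S_fit, posterior_S, rng=None):
    """Calibrate a test statistic with posterior predictive simulations.

    stat        : function (I, S) -> scalar, e.g. T_R
    I_obs       : observed periodogram ordinates
    S_fit       : best-fitting model spectrum, used for the observed statistic
    posterior_S : (n_sim, n_freq) model spectra at posterior parameter draws
    """
    rng = np.random.default_rng() if rng is None else rng
    t_obs = stat(I_obs, S_fit)
    n_sim, n_freq = posterior_S.shape
    # simulate periodograms: I_j = S_j * chi^2_2 / 2 for each posterior draw
    I_sim = posterior_S * rng.chisquare(2, size=(n_sim, n_freq)) / 2.0
    t_sim = np.array([stat(I, S) for I, S in zip(I_sim, posterior_S)])
    return float(np.mean(t_sim >= t_obs))
```

The returned fraction is the posterior predictive $p$-value for the chosen statistic; the effect of the number of frequencies searched, and of the uncertain continuum parameters, is automatically folded in.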
In the present paper we have deliberately concentrated on the periodogram since this is the standard tool for time series analysis in astronomy. But the periodogram is by no means the best or only tool for the characterisation of stochastic processes or the identification of periodicities. Methods that explicitly model the original time series data in the time domain \citep[see e.g.][]{Priestley81, Chatfield03} may yet prove to be valuable additions to the astronomer's toolkit. Indeed the raw form of the {\it XMM-Newton}\ data used in the AGN examples is counts per CCD frame, for the source (and possibly background region if this is a non-negligible contribution). The most direct data analysis would therefore model this process explicitly as a Poisson process with a rate parameter that varies with time (i.e. the ``true'' X-ray flux) that is itself a realisation of some stochastic process with specific properties (e.g. power spectrum or, equivalently, autocorrelation function, and stationary distribution). \subsection{The importance of model assessment} The posterior predictive approach provides an attractive scheme for model checking. In particular, it allows us to select a continuum model that is consistent with the observed data\footnote{Strictly, we compare the observed data to simulations drawn from the posterior predictive distribution under the chosen model $H$ using test statistics. If the observed data do not stand out from the simulations, by having extreme values of the statistics when compared to the simulations, we may assume that the data are consistent with the model (as far as the particular test statistics are concerned).} before testing for the presence of additional features.
This is crucial since any simple test statistic, whether used in a frequentist significance test or a posterior predictive test, will be sensitive to certain kinds of deficiencies in the model without itself providing any additional information about the specific nature of any deficiency detected (a $p$-value is after all just a single number summary). A low $p$-value (i.e. a ``significant'' result) may be due to the presence of interesting additional features or just an overall poor match between the data and the continuum model \citep[for more on this in the context of QPO detection see][]{Vaughan06}. The use of more than one test statistic, properly calibrated using the posterior predictive simulations, as well as other model diagnostics (such as data/model residual plots), is useful in identifying the cause of the data/model mismatch. \subsection{Analysis of two Seyfert galaxies} \begin{figure} \centering \includegraphics[width=6.0cm,angle=270]{fake.ps} \caption{Simulated time series generated from the posterior predictive distribution of the RE J$1034+396$ periodogram data. The (grey) histogram shows the simulated data in $100$ s bins and the smooth (red) curve shows the $6$ bin moving average of these data. Compare with Figure 1 of \citet{Gierlinski08}. The power spectrum used to generate these data is a smoothly bending power law (plus white ``measurement'' noise) with no periodic or quasi-periodic components, and yet the time series appears to show oscillatory structure.} \label{fig:sim} \end{figure} Section \ref{sect:data} presents an analysis of {\it XMM-Newton}\ data for the Seyfert galaxies RE J$1034+396$ and Mrk $766$. The former has produced the best evidence to date for a QPO in a Seyfert galaxy \citep{Gierlinski08}, while the latter showed no indication of QPO behaviour \citep{Vaughan03, Vaughan05b}.
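The apparently periodic light curve in Figure \ref{fig:sim} was generated from a spectrum with no periodic component. A standard recipe for such simulations, consistent with drawing each ordinate from equation \ref{eqn:pdist} and sketched here up to an overall normalisation and without the Poisson noise step, is to draw independent Gaussian Fourier amplitudes with variance set by the model spectrum and invert the transform (Timmer \& K\"onig 1995):

```python
import numpy as np

def simulate_time_series(S, rng=None):
    """Generate one realisation of a Gaussian stochastic process with
    power spectrum S (given at the n positive Fourier frequencies of a
    length-2n series), up to an overall normalisation: draw independent
    Gaussian real/imaginary Fourier amplitudes with variance S/2 and
    invert the FFT."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(S)
    amp = np.sqrt(S / 2.0)
    spec = amp * rng.standard_normal(n) + 1j * amp * rng.standard_normal(n)
    spec[-1] = spec[-1].real          # Nyquist component must be real
    x = np.fft.irfft(np.concatenate(([0.0 + 0.0j], spec)))
    return x                          # length 2n, zero mean by construction
```

Running this repeatedly with spectra drawn from the posterior and inspecting the realisations that exceed the observed statistic is how figures like Figure \ref{fig:sim} can be produced.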
\citet{Gierlinski08} used the method presented in \citetalias{Vaughan05} to show that the observed peak in the periodogram was highly unlikely under the assumption that the underlying power spectrum continuum is a power law, but the present analysis gave somewhat less impressive evidence to suggest a QPO. The posterior predictive $p \approx 0.03$ comes from the fact that $\sim 150$ out of the $5,000$ posterior predictive simulations of the RE J$1034+396$ periodogram data showed $T_{\rm R} \ge T_{\rm R}^{\rm obs}$ (and approximately the same figure was obtained using $T_{\rm SSE}$). This might at first seem doubtful given how periodic the observed time series appears (see Figure 1 of \citealt{Gierlinski08}). But to demonstrate that such apparently periodic time series may indeed be generated from non-periodic processes we simulated time series from the posterior predictive periodogram data (for model $H_1$) that showed $T_{\rm R} \ge T_{\rm R}^{\rm obs}$. (The time series simulation method is given in Appendix \ref{sect:sim-ts}.) One example of these time series, chosen at random from the subset that had the largest residual $R_j$ occurring at a frequency of the same order as that seen in RE J$1034+396$ (in this case $\approx 1.3 \times 10^{-4}$ Hz), is shown in Figure \ref{fig:sim}. There are several reasons for the very different $p$-values between the analyses. One of these factors is that we based our calculation on a more general form of the continuum model. In the absence of a QPO (spectral line component) the power spectrum continuum is better modelled by a power law with a steep slope ($\alpha \sim 3$) that smoothly changes to a flatter slope (assumed index of $-1$) below a frequency $\delta \sim 4 \times 10^{-4}$ Hz than by a single power law. The bend frequency is close to that of the candidate QPO, which does have a large effect on the ``significance'' of the QPO as summarised in the $p$-value \citep[see][for previous examples of this effect]{Vaughan05b}.
Indeed, the posterior predictive $p$-value was $2 \times 10^{-3}$ when recalculated assuming a simple power law continuum ($H_0$). A second factor is that \citet{Gierlinski08} gave special consideration to a particular subset of the time series chosen because of its apparently coherent oscillations, which in effect enhanced the apparent significance of the claimed periodicity, while the entire time series is treated uniformly in the present analysis (for reasons discussed in section \ref{sect:data}). A third factor is that we made no restriction on the allowed frequency of a period component, and so openly searched $457$ frequencies, whereas \citet{Gierlinski08} concentrated on the $\approx 60$ frequencies in their periodogram below $10^{-3}$ Hz. This will result in a factor $\sim 8$ change in the $p$-value (since the probability of finding a $T_{\rm R}$ value in a simulation that is larger than that observed in the real data scales approximately linearly with the number of frequencies examined). If we take $T_{\rm R}$ to be the largest residual at frequencies below $10^{-3}$ Hz (but including all the data in the rest of the modelling process), we find $21/5000$ of the RE J$1034+396$ simulations showed $T_{\rm R} \ge T_{\rm R}^{\rm obs}$ under these restricted conditions, corresponding to $p=0.004$, which is smaller by about the expected factor. A relatively minor difference is the more complete treatment of parameter uncertainties using the posterior distribution \citepalias[which are treated in an approximate fashion in the method of][]{Vaughan05}. One is therefore left with a choice between two models that could plausibly explain the data, a power law spectrum with a strong QPO or a bending power law spectrum (with weaker evidence for a QPO).
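The ``expected factor'' quoted above follows from the approximately linear scaling of the tail probability with the number of frequencies searched, $p_n \approx 1 - (1-p_1)^n \approx n\,p_1$ for small $p_1$. A quick check with the numbers from the text:

```python
# An open search over 457 frequencies gave p = 150/5000; a search
# restricted to the ~60 frequencies below 1e-3 Hz gave p = 21/5000.
# Linear scaling predicts their ratio should be ~457/60.
p_open = 150 / 5000                      # ~0.03
p_restricted = 21 / 5000                 # ~0.004
observed_ratio = p_open / p_restricted   # ~7.1
predicted_ratio = 457 / 60               # ~7.6
print(observed_ratio, predicted_ratio)
```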
The most powerful and least ambiguous confirmation of the reality of the QPO feature would come from an independent observation capable of both constraining the continuum more precisely and allowing a sensitive search for the candidate QPO. The results of the present analysis of the Mrk $766$ data agree reasonably well with those previously reported by \citet{Vaughan03b} which were obtained using standard frequentist methods (e.g. binning the data and estimating parameters by minimising the $\chi^2$ statistic). The high frequency slopes are essentially the same, but the frequency of the bend differs by a factor of $\sim 2.5$. This is most likely due to the slightly different models used, i.e. bending vs. sharply broken power laws. (Repeating the frequentist analysis of \citet{Vaughan03b} using the bending power law model gave a lower characteristic frequency, more consistent with that of the present analysis). \subsection{Other applications of this method} The techniques discussed in this paper may find application well beyond the specific field for which they were devised (namely, the analysis of X-ray light curves from Seyfert galaxies), since the problems of estimating a noisy continuum spectrum and assessing the evidence for additional narrow features over and above that continuum are common to many fields. Other examples from X-ray astronomy include analysis of long timescale light curves from Galactic X-ray binaries and Ultra-Luminous X-ray sources (ULXs) in order to characterise the low frequency power spectrum and search for periodicities (e.g. due to orbital modulation). But the applications are by no means restricted to astronomy. For example, in geology there is considerable interest in detecting and characterising periodicities in stratigraphic records of environmental change, which may be connected to periodicities in external forcing such as might be expected from Milankovich cycles \citep[see e.g.][]{Weedon03}.
However, there is controversy over the statistical and physical significance of the periodicities in these data, which are often dominated by stochastic red noise variations \citep{Bailey09}. \section{Conclusions} \label{sect:conc} We have presented Bayesian methods for the modelling of periodogram data that can be used for both parameter estimation and model checking, and may be used to test for narrow spectral features embedded in noisy data. The model assessment is performed using simulations of posterior predictive data to calibrate (sensibly chosen) test statistics. This does however leave some arbitrariness in the method, particularly in the choice of test statistic\footnote{In situations where two competing models can be modelled explicitly the LRT provides a natural choice of statistic.} (and in some situations the choice of what constitutes a simulation of the data). Such issues were always present, if usually ignored, in the standard frequentist tests. The posterior predictive approach has the significant advantage of properly treating nuisance parameters, and provides a clear framework for checking the different aspects of the reasonableness of a model fit. The issue of choosing a test statistic does not arise in more ``purist'' Bayesian methods such as Bayes factors, which concentrate on the posterior distributions and marginal likelihoods, but such methods of model selection carry their own burden in terms of the computational complexity and the difficulty of selecting (and the sensitivity of inferences to) priors on the model parameters. 
The method presented in this paper, making use of posterior predictive checking, is an improvement over the currently popular methods that use classical $p$-values; but Bayesian model selection is an area of active research and it is not unreasonable to expect that new, powerful and practical computational tools will be developed or adapted to help bridge the gap between the pragmatic and the purist Bayesian approaches. The routines used to perform the analysis of the real data presented in section \ref{sect:data} will be made available as an {\tt R}\footnote{{\tt R} is a powerful, open-source computing environment for data analysis and statistics that may be downloaded for free from {\tt http://www.r-project.org/} \citep{r, r2}.} script from the author upon request. \section*{Acknowledgements} The author wishes to thank David van Dyk and Phil Uttley for valuable discussions during the final stages of writing this paper, and an anonymous referee for a helpful report. \bibliographystyle{mn2e}
\section*{\hspace*{-0.72cm} \normalsize\bf\arabic{section}.$\;$#1}\vspace*{-0.3cm}} \def\subsec#1{\addtocounter{subsection}{1}\subsection*{\hspace*{-0.4cm} \normalsize\bf\arabic{section}.\arabic{subsection}.$\;$#1}\vspace*{-0.3cm}} \vspace{-0.7cm} \begin{flushright} $\vcenter{ { \hbox{{\footnotesize FUT-09-01}} } { \hbox{{\footnotesize TOKUSHIMA Report}} } { \hbox{(arXiv:0910.3049)} } }$ \end{flushright} \vskip 0.8cm \begin{center} {\large\bf Search for anomalous top-gluon couplings at LHC revisited} \end{center} \vspace{0.6cm} \begin{center} \renewcommand{\thefootnote}{\alph{footnote})} Zenr\=o HIOKI$^{\:1),\:}$\footnote{E-mail address: \tt [email protected]}\ and\ Kazumasa OHKUMA$^{\:2),\:}$\footnote{E-mail address: \tt [email protected]} \end{center} \vspace*{0.4cm} \centerline{\sl $1)$ Institute of Theoretical Physics,\ University of Tokushima} \centerline{\sl Tokushima 770-8502, Japan} \vskip 0.2cm \centerline{\sl $2)$ Department of Information Science,\ Fukui University of Technology} \centerline{\sl Fukui 910-8505, Japan} \vspace*{2.8cm} \centerline{ABSTRACT} \vspace*{0.2cm} \baselineskip=21pt plus 0.1pt minus 0.1pt Through top-quark pair productions at LHC, we study possible effects of nonstandard top-gluon couplings yielded by $SU(3)\times SU(2)\times U(1)$ invariant dimension-6 effective operators. We calculate the total cross section and also some distributions for $pp\to t\bar{t}X$ as functions of two anomalous-coupling parameters, i.e., the chromoelectric and chromomagnetic moments of the top, which are constrained by the total cross section $\sigma(p\bar{p} \to t\bar{t}X)$ measured at Tevatron. We find that LHC might give us some chances to observe sizable effects induced by those new couplings. 
\vfill PACS: 12.38.-t, 12.38.Bx, 12.38.Qk, 12.60.-i, 14.65.Ha, 14.70.Dj Keywords: anomalous top-gluon couplings, Tevatron, LHC, effective operators \\ \newpage \renewcommand{\thefootnote}{$\sharp$\arabic{footnote}} \pagestyle{plain} \setcounter{footnote}{0} \sec{Introduction} With the Large Hadron Collider (LHC) now about to operate \cite{LHC}, we will soon be able to study physics beyond the standard model of the strong and electroweak interactions in the TeV world. Studies of such new physics can be classified into two categories: model-dependent and model-independent approaches. It is of course meaningless to ask which is more efficient: each has both advantages and disadvantages. That is, the former enables very precise calculations and analyses, but we have to start again from the beginning if the wrong framework was chosen, while in the latter we would rarely fail to obtain meaningful information, though very precise analyses are not easy there since we usually need to treat many unknown parameters together. Therefore these two approaches to new physics should be complementary to each other. One reasonable way to decrease the number of such unknown parameters in a model-independent analysis is to assume a new physics characterized by an energy scale ${\mit\Lambda}$ and write down $SU(3)\times SU(2)\times U(1)$-symmetric effective (non-renormalizable) operators for the world below ${\mit\Lambda}$. Those operators with dimension 6 were systematically listed in \cite{Buchmuller:1985jz}. Although we still have to treat several operators (parameters) even in this framework, some of the operators given there were found to be dependent on each other through equations of motion \cite{Grzadkowski:2003tf}. This suggests that the number of independent operators might be reduced further, and indeed this was recently done in \cite{AguilarSaavedra:2008zc}.
In this effective-operator framework, not only electroweak couplings but also QCD couplings receive nonstandard corrections. It may be hard to imagine that the QCD couplings of light quarks are affected by those anomalous interactions, since the standard QCD interaction form has so far been tested very well against a lot of experimental data. The top-quark couplings might however be exceptional, because this quark has not yet been studied precisely enough, and its extremely heavy mass seems to tell us something about a new physics beyond the standard model. That is, the $t$ quark could serve as a valuable window to non-SM physics once LHC starts to give us fruitful data. With this in mind, we perform here an analysis of anomalous top-gluon couplings produced by the dimension-6 effective operators through top-quark pair productions at LHC. We first describe our calculational framework in section 2. In section 3, we calculate the total cross section of $p\bar{p}\to t\bar{t}X$ at Tevatron energy and compare the result with the corresponding CDF/D0 data \cite{CDF-D0}, which gives a constraint on the anomalous-coupling parameters. We then use them to compute the total cross section and also some distributions for $pp \to t\bar{t}X$ at LHC, i.e., the top-angular, the top-transverse-momentum, and the $t\bar{t}$-invariant-mass distributions. There we will find that LHC might give us some chances to observe sizable effects induced by the new couplings. Finally, a summary is given in section 4. \sec{Framework} Let us clarify our basic framework in this section. Three effective operators contributing to strong interactions were given in ref.~\cite{Buchmuller:1985jz}. Those operators produce top-pair production amplitudes which include $\gamma^\mu$, $\sigma^{\mu\nu}q_\nu$, $(p_i + p_j)^\mu$ and $q^\mu$ terms (or more complicated Lorentz structures), where $p_{i,j}$ are the momenta of top quarks $i$, $j$ and $q$ is the gluon momentum.
However, two of them were shown not to be independent in \cite{AguilarSaavedra:2008zc}, and we only need to take into account one operator \begin{equation} {\cal O}^{33}_{uG\phi} =\sum_a [\:\bar{q}_{L3}(x)\lambda^a \sigma^{\mu\nu} u_{R3}(x) \tilde{\phi}(x) G^a_{\mu\nu}(x)\:], \end{equation} where we followed the notation of \cite{AguilarSaavedra:2008zc}. This is quite a reduction. Now the anomalous top-gluon couplings are given by \begin{equation} {\cal O}_{gt} =\frac1{2\sqrt{2}} v \sum_a \bar{\psi}_t(x) \lambda^a \sigma^{\mu\nu} (1+\gamma_5) \psi_t(x) G_{\mu\nu}^a(x), \end{equation} and our starting Lagrangian thereby becomes, with an unknown coefficient $C_{uG\phi}^{33}$, \begin{eqnarray} &&\!\!\!\!{\cal L} ={\cal L}_{\rm SM} + \frac1{{\mit\Lambda}^2} [\:C_{uG\phi}^{33}{\cal O}_{gt} + C_{uG\phi}^{33*}{\cal O}_{gt}^{{\protect\mbox{\tiny \dag}}}\:] \nonumber\\ &&\!\!\!\!\phantom{{\cal L}} ={\cal L}_{\rm SM} + \frac1{\sqrt{2}{\mit\Lambda}^2}v \sum_a[\:{\rm Re}(C_{uG\phi}^{33}) \bar{\psi}_t(x) \lambda^a \sigma^{\mu\nu} \psi_t(x) \nonumber \\ &&\phantom{{\cal L}_{\rm SM} + \frac1{\sqrt{2}{\mit\Lambda}^2}v\sum_a[} + i\,{\rm Im}(C_{uG\phi}^{33}) \bar{\psi}_t(x) \lambda^a \sigma^{\mu\nu} \gamma_5 \psi_t(x)\:] G_{\mu\nu}^a(x). \label{Lag} \end{eqnarray} Here $v$ is the Higgs vacuum expectation value ($=246$ GeV), and ${\rm Re}(C_{uG\phi}^{33})$ and ${\rm Im}(C_{uG\phi}^{33})$ correspond to the top-quark chromomagnetic and chromoelectric moments respectively. As a matter of fact, a number of analyses including nonstandard couplings have been performed for $t\bar{t}$ productions at high-energy hadron colliders since more than a decade ago \cite{Haberl:1995ek,Atwood:1992vj}. However, the couplings used there were not always the same. The precision of the CDF/D0 data used there was not that high either. In contrast, we can now state that the analysis using the two moments is the most general model-independent one within the framework of effective operators.
Therefore, it is worthwhile to revisit the CDF/D0 data, to refine the constraints on the anomalous couplings, and to apply the resultant information to $pp \to t\bar{t}X$ at LHC, which is about to operate. Apart from QCD higher order corrections, the $q\bar{q}\to g\to t\bar{t}$ process is expressed by one Feynman diagram (Fig.\ref{Feynqq}), and the corresponding invariant amplitude is given by \begin{eqnarray} &&{\cal M}_{q\bar{q}} =\frac1{4\hat{s}}g_s^2 \sum_a \bar{u}(\mib{p}_t) \lambda^a {\mit\Gamma}^\mu(q) v(\mib{p}_{\bar{t}}) \, \bar{v}(\mib{q}_2) \lambda^a \gamma_\mu u(\mib{q}_1), \end{eqnarray} where $q\equiv q_1 + q_2 (=p_t + p_{\bar{t}}),\: \hat{s}\equiv q^2$, $[a]$ is the color label of the intermediate gluon,\footnote{Here (and hereafter) we do not show the color-component indices of $u$/$v$ spinors, and also all the spin variables for simplicity.} we expressed the anomalous-coupling parameters as \[ d_V = \frac{\sqrt{2}vm_t}{g_s{\mit\Lambda}^2} {\rm Re}(C^{33}_{uG\phi}),\ \ \ \ d_A = \frac{\sqrt{2}vm_t}{g_s{\mit\Lambda}^2} {\rm Im}(C^{33}_{uG\phi}), \] and we defined \[ {\mit\Gamma}^\mu(q) \equiv \gamma^\mu - \frac{2i\sigma^{\mu\nu}q_\nu}{m_t} (d_V + id_A \gamma_5). \] \vskip 0.4cm \begin{center} \begin{figure}[htbp] \begin{minipage}{14.8cm} \hspace*{3.5cm} {\includegraphics[width=7cm]{qqtt.eps}} \caption{Feynman diagram of $q\bar{q} \to t\bar{t}$.
The bullet $\bullet$ expresses the vertex which includes the anomalous couplings.}\label{Feynqq} \end{minipage} \end{figure} \end{center} On the other hand, $g g\to t\bar{t}$ consists of four intermediate states (Fig.\ref{Feyngg} a,b,c,d), and the corresponding amplitudes are \begin{eqnarray} &&{\cal M}_{gg} ={\cal M}_{gg}^{\rm a} + {\cal M}_{gg}^{\rm b} + {\cal M}_{gg}^{\rm c} + {\cal M}_{gg}^{\rm d}, \nonumber \\ &&\ \ \ {\cal M}_{gg}^{\rm a} = -\frac{g_s^2}{2\hat{s}} \sum_a \bar{u}(\mib{p}_t) \lambda^a {\mit\Gamma}^\mu(q) v(\mib{p}_{\bar{t}}) \nonumber \\ &&\phantom{{\cal M}_{gg}^1}\ \ \ \ \times if_{abc}[\:2q_{2\,\nu} \epsilon^\nu(\mib{q}_1) \epsilon_\mu(\mib{q}_2) - 2q_{1\,\nu} \epsilon_\mu (\mib{q}_1) \epsilon^\nu(\mib{q}_2) \nonumber \\ &&\phantom{{\cal M}_{gg}^1}\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +(q_1 -q_2)_\mu \epsilon_\nu (\mib{q}_1) \epsilon^\nu(\mib{q}_2)\:] \\ &&\ \ \ {\cal M}_{gg}^{\rm b} =\frac14 g_s^2 \,\bar{u}(\mib{p}_t) \lambda^b\lambda^c {\mit\Gamma}^\mu(q_1) \frac1{m_t-\sla{k}_1} {\mit\Gamma}^\nu(q_2) v(\mib{p}_{\bar{t}}) \, \epsilon_\mu (\mib{q}_1) \epsilon_\nu(\mib{q}_2) \\ &&\ \ \ {\cal M}_{gg}^{\rm c} =\frac14 g_s^2 \,\bar{u}(\mib{p}_t) \lambda^c\lambda^b {\mit\Gamma}^\mu(q_2) \frac1{m_t-\sla{k}_2} {\mit\Gamma}^\nu(q_1) v(\mib{p}_{\bar{t}}) \, \epsilon_\nu (\mib{q}_1) \epsilon_\mu(\mib{q}_2) \\ &&\ \ \ {\cal M}_{gg}^{\rm d} =- g_s^2 \sum_a f_{abc} \bar{u}(\mib{p}_t) \lambda^a {\mit\Sigma}^{\mu\nu} v(\mib{p}_{\bar{t}})\, \epsilon_\mu (\mib{q}_1) \epsilon_\nu(\mib{q}_2). \end{eqnarray} Here $k_1 \equiv p_t -q_1$,\ $k_2 \equiv p_t -q_2$,\ $[a]$ and $[b,\ c]$ are the color labels of the intermediate gluon and the incident gluons with momenta $q_1,\ q_2$, $\epsilon(\mib{q}_{1,2})$ are the incident-gluon polarization vectors, and \[ {\mit\Sigma}^{\mu\nu} \equiv \frac{\sigma^{\mu\nu}}{m_t} (d_V + id_A \gamma_5). 
\] \vskip 0.6cm \begin{center} \begin{figure}[htbp] \begin{minipage}{14.8cm} \hspace*{1.6cm} {\includegraphics[width=11cm]{ggtt.eps}} \caption{Feynman diagrams of $gg \to t\bar{t}$. The bullet $\bullet$ expresses the vertex which includes the anomalous couplings.}\label{Feyngg} \end{minipage} \end{figure} \end{center} Based on these invariant amplitudes, the differential cross sections are calculated: the one for $q\bar{q} \to t\bar{t}$ in the $q\bar{q}$-CM frame is \begin{eqnarray} &&\frac{\ \ d\sigma_{q\bar{q}}}{dE_t^* d\cos\theta_t^*} = \frac{\beta_t^*}{16\pi \hat{s}}\delta(\sqrt{\hat{s}}-2E_t^*) \Bigl(\frac13\Bigr)^2\sum_{\rm color}\Bigl(\frac12\Bigr)^2 \sum_{\rm spin}|{\cal M}_{q\bar{q}}|^2,~~~~~ \label{qqtt} \end{eqnarray} and the one for $gg \to t\bar{t}$ in the $gg$-CM frame is \begin{eqnarray} &&\frac{\ \ d\sigma_{gg}}{dE_t^* d\cos\theta_t^*} = \frac{\beta_t^*}{16\pi \hat{s}}\delta(\sqrt{\hat{s}}-2E_t^*) \Bigl(\frac18\Bigr)^2\sum_{\rm color}\Bigl(\frac12\Bigr)^2 \sum_{\rm spin}|{\cal M}_{gg}|^2,~~~~~ \label{ggtt} \end{eqnarray} where an asterisk indicates quantities evaluated in the parton-CM frame, $\beta_t^* \equiv |\mib{p}_t^*|/E_t^* (= \sqrt{1-4m_t^2/\hat{s}})$ is the magnitude of the produced top-quark velocity in this frame, and we have already performed the azimuthal-angle integration since nothing depends non-trivially on this angle. After carrying out the color summation, we use the algebraic calculation system FORM \cite{FORM} to evaluate $|{\cal M}|^2$ and then perform the numerical computations.
Concerning analytical expression of $\sum|{\cal M}|^2$, compact formulas are found in \cite{Haberl:1995ek,Gluck:1977zm}, which lead to \begin{eqnarray} &&\!\!\!\!\!\!\!{\sum_{\rm color}\sum_{\rm spin}|{\cal M}_{q\bar{q}}|^2} = 16 g_s^4\,\Bigl[\,1-2(v-z)-8(d_V-d_V^2+d_A^2) +8(d_V^2+d_A^2)v/z\,\Bigr],~~~~~ \label{Mqq} \\ &&\!\!\!\!\!\!\!{\sum_{\rm color}\sum_{\rm spin}|{\cal M}_{gg}|^2} = \frac{32}{3}g_s^4\,\Bigl[\,(4/v-9)\,[\,1-2v+4z(1-z/v)-8d_V(1-2d_V)\,] \nonumber\\ &&\!\!\!\!\!\!\!\!\!\! \phantom{\sum_{\rm color}\sum_{\rm spin}|{\cal M}_{gg}|^2= \frac{32}{3}g_s^2\,\Bigl[\,} +4(d_V^2+d_A^2)\,[\,14(1-4d_V)/z+(1+10d_V)/v\,] \nonumber\\ &&\!\!\!\!\!\!\!\!\!\! \phantom{\sum_{\rm color}\sum_{\rm spin}|{\cal M}_{gg}|^2= \frac{32}{3}g_s^2\,\Bigl[\,} -32(d_V^2+d_A^2)^2(1/z-1/v-4v/z^2)\,\Bigr], \label{Mgg} \end{eqnarray} where $z\equiv m_t^2/\hat{s}$, $v\equiv (\hat{t}-m_t^2)(m_t^2-\hat{s}-\hat{t})/\hat{s}^2$, $\hat{t}\equiv (q_1-p_t)^2$, we have re-expressed their anomalous-coupling parameters in our notation, and the overall 4-momentum conservation has been taken into account. We confirmed that our FORM results are in complete agreement with them. When we derive hadron cross sections from parton cross sections, we first need to connect parton cross sections in the parton-CM frame and hadron-CM frame. Among the quantities we are going to compute here, the total cross section, the top-quark $p_{\rm T}$ distribution, and the $t\bar{t}$ invariant-mass distributions are all Lorentz-transformation invariant,\footnote{Concerning the $p_{\rm T}$ distribution, note that we do not take into account the parton transverse momenta.}\ therefore we do not have to worry about the difference between these two frames. 
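The compact expressions (\ref{Mqq}) and (\ref{Mgg}) are easy to mistranscribe, so a short numerical cross-check may be useful. The following Python sketch is not part of the analysis above; $g_s$ is set to 1 and the sample kinematic point is an arbitrary choice. It codes both formulas, checks that the QCD limit $d_V=d_A=0$ of eq.(\ref{Mqq}) reproduces the standard result written in terms of $\hat{t}-m_t^2$ and $\hat{u}-m_t^2$, and confirms that $d_A$ enters only through $d_A^2$:

```python
import numpy as np

def M2_qq(z, v, dV, dA):
    # Eq. (Mqq): color/spin-summed squared amplitude for q qbar -> t tbar, g_s = 1
    return 16.0*(1 - 2*(v - z) - 8*(dV - dV**2 + dA**2)
                 + 8*(dV**2 + dA**2)*v/z)

def M2_gg(z, v, dV, dA):
    # Eq. (Mgg): color/spin-summed squared amplitude for g g -> t tbar, g_s = 1
    d2 = dV**2 + dA**2
    return (32.0/3)*((4/v - 9)*(1 - 2*v + 4*z*(1 - z/v) - 8*dV*(1 - 2*dV))
                     + 4*d2*(14*(1 - 4*dV)/z + (1 + 10*dV)/v)
                     - 32*d2**2*(1/z - 1/v - 4*v/z**2))

# sample point: z = m_t^2/shat, c = cos(theta) in the parton-CM frame
z, c = 0.2, 0.3
beta = np.sqrt(1 - 4*z)
v = (1 - beta**2*c**2)/4        # v = (that-m_t^2)(m_t^2-shat-that)/shat^2 in the CM frame

# QCD limit of Eq. (Mqq) against the standard (that, uhat) form, with shat = 1
t1 = -(1 - beta*c)/2            # that - m_t^2
u1 = -(1 + beta*c)/2            # uhat - m_t^2
assert abs(M2_qq(z, v, 0, 0) - 16*(t1**2 + u1**2 + 2*z)) < 1e-12

# the cross sections depend on d_A only through d_A^2
assert abs(M2_qq(z, v, 0.2, 0.3) - M2_qq(z, v, 0.2, -0.3)) < 1e-12
assert abs(M2_gg(z, v, 0.2, 0.3) - M2_gg(z, v, 0.2, -0.3)) < 1e-12
```

The last two assertions are the property used below when constraining $|d_A|$ from the total cross section.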
In case of the $p_{\rm T}$ distribution, we have \begin{eqnarray} &&\frac{d}{dp_{\rm T}}\sigma_{q\bar{q},gg} = \frac{d}{dp^*_{\rm T}}\sigma_{q\bar{q},gg} = \int^{c_{\rm max}^*}_{c_{\rm min}^*} d\cos\theta_t^* \frac{\ \:\:d\sigma_{q\bar{q},gg}}{dp^*_{\rm T} d\cos\theta_t^*} \nonumber \\ &&\phantom{\frac{d}{dp_{\rm T}}\sigma_{q\bar{q},gg}} = \int^{c_{\rm max}^*}_{c_{\rm min}^*} d\cos\theta_t^* \frac{|\mib{p}_t^*|}{E_t^* \sin\theta_t^*} \frac{\ \:\:d\sigma_{q\bar{q},gg}}{dE_t^* d\cos\theta_t^*}, \end{eqnarray} where the quantities without $*$ are those in the hadron-CM frame, we have chosen the $z$ axis in the direction of $\mib{q}_1$ so that $p_{\rm T}^*(=p_{\rm T}) = |\mib{p}_t^*|\sin\theta_t^*$ and \begin{equation} c_{\rm max}^* = -c_{\rm min}^* = \sqrt{1-4p_{\rm T}^{*2}/(\beta_t^*\sqrt{\hat{s}})^2}, \end{equation} and for the $t\bar{t}$ invariant-mass distribution, \begin{equation} \frac{d}{d\mu_{t\bar{t}}}\sigma_{q\bar{q},gg} = \frac{d}{d\mu^*_{t\bar{t}}}\sigma_{q\bar{q},gg} = \frac12 \frac{d}{dE^*_t}\sigma_{q\bar{q},gg} = \frac12 \int^{+1}_{-1} d\cos\theta_t^* \frac{d\sigma_{q\bar{q},gg}}{dE_t^* d\cos\theta_t^*}, \end{equation} where $\mu_{t\bar{t}}=\mu_{t\bar{t}}^* = \sqrt{\hat{s}} = 2E_t^*$. On the other hand, we need the appropriate Jacobian connecting the two frames when the angular distribution is considered as follows: The top energy and scattering angle in the parton-CM frame are expressed in terms of those in the hadron-CM frame as \begin{eqnarray} &&E_t^*=(E_t - \beta |\mib{p}_t|\cos\theta_t)/\sqrt{1-\beta^2}, \\ &&\cos\theta_t^*=(|\mib{p}_t|\cos\theta_t-\beta E_t)/(|\mib{p}_t^*|\sqrt{1-\beta^2}), \label{costhetat} \end{eqnarray} where $\beta$ is the Lorentz-transformation boost factor connecting the two frames, and we used $|\mib{p}_t^*|\,(=\sqrt{E_t^{*2}-m_t^2})$ in the denominator on the right-hand side of eq.(\ref{costhetat}) to make the formula compact. 
These relations lead to the Jacobian \begin{equation} \partial(E_t^*,\,\cos\theta_t^*)/\partial(E_t,\,\cos\theta_t) =|\mib{p}_t|/|\mib{p}_t^*| \end{equation} and the cross-section relation \begin{equation} \frac{\ \ \ d\sigma_{q\bar{q},gg}}{dE_t d\cos\theta_t} =\frac{|\mib{p}_t|}{|\mib{p}_t^*|} \frac{\ \ \ d\sigma_{q\bar{q},gg}}{dE_t^* d\cos\theta_t^*}\,. \end{equation} Then the hadron cross sections are obtained by integrating the product of {\it the parton distribution functions} and {\it the parton cross sections in the hadron-CM frame} over the momentum fractions $x_1$ and $x_2$ carried by the partons. Let us explicitly show the result for the above $E_t$ and $\theta_t$ double distribution, since the other quantities are easier to handle: \begin{equation} \frac{\ \ \ d\sigma_{p\bar{p}/pp}}{dE_t d\cos\theta_t} = \sum_{a,b} \int^1_{4m_t^2/s}\! \!\!\!dx_1 \int^1_{4m_t^2/(x_1 s)} \!\!\!\!\!\!\!\!dx_2 \:N_a(x_1) N_b(x_2) \frac{|\mib{p}_t|}{|\mib{p}_t^*|} \frac{\ d\sigma_{ab}}{dE_t^* d\cos\theta_t^*}, \end{equation} where $N_{a,b}(x)$ are the parton distribution functions of partons $a$ and $b$ ($a,b=u,\bar{u}$, $d,\bar{d}$, $s,\bar{s}$, $c,\bar{c}$, $b,\bar{b}$ and $g$).
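The Jacobian above is just the statement that $|\mib{p}_t|\,dE_t\,d\cos\theta_t$ is invariant under a boost along the beam axis. It can be checked numerically with finite differences; the sketch below is an illustration only (the boost factor and the phase-space point are arbitrary sample values, not taken from the analysis):

```python
import numpy as np

m_t, beta = 172.0, 0.4              # GeV; beta is an illustrative boost along z

def to_parton_cm(E, c):
    """Boost (E_t, cos theta_t) from the hadron-CM to the parton-CM frame."""
    g = 1.0/np.sqrt(1.0 - beta**2)
    p = np.sqrt(E**2 - m_t**2)
    E_s = g*(E - beta*p*c)
    c_s = g*(p*c - beta*E)/np.sqrt(E_s**2 - m_t**2)
    return np.array([E_s, c_s])

E, c = 400.0, 0.3                   # sample point (GeV, dimensionless)
hE, hc = 1e-3, 1e-6
dE = (to_parton_cm(E + hE, c) - to_parton_cm(E - hE, c))/(2*hE)
dc = (to_parton_cm(E, c + hc) - to_parton_cm(E, c - hc))/(2*hc)
jac = dE[0]*dc[1] - dE[1]*dc[0]     # det of d(E*, cos*)/d(E, cos)

p  = np.sqrt(E**2 - m_t**2)
ps = np.sqrt(to_parton_cm(E, c)[0]**2 - m_t**2)
assert abs(jac - p/ps) < 1e-4*(p/ps)   # matches |p_t|/|p_t*|
```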
Thanks to the energy-conservation delta function in Eqs.(\ref{qqtt}) and (\ref{ggtt}), we can immediately perform the $x_2$ integration and arrive at \begin{eqnarray} &&\frac{\ \ \ d\sigma_{p\bar{p}/pp}}{dE_t d\cos\theta_t} = \sum_{a,b} \int^1_{x_{1\,{\rm min}}} \!\!\!dx_1 N_a(x_1) N_b(x_2) \frac{x_2 \beta_t \displaystyle\sqrt{1-\beta^2}}{(1+\beta)(1-\beta_t\cos\theta_t)} \nonumber \\ && \phantom{\frac{d\sigma}{dE_t d\cos\theta_t} = \sum_{a,b} \int^1_{x_{1\,{\rm min}}}} \times \frac1{8\pi \hat{s}\sqrt{\hat{s}}} \Bigl(\frac1{f_c}\Bigr)^2\sum_{\rm color} \Bigl(\frac1{f_s}\Bigr)^2 \sum_{\rm spin} |{\cal M}_{ab}|^2,~~ \label{Dsigma2} \end{eqnarray} where $\beta_t \equiv |\mib{p}_t|/E_t$, $\beta = (x_1-x_2)/(x_1+x_2)$, $\hat{s}$ is related to $s$, defined via the hadron momenta $p_{1,2}$ as $s \equiv (p_1 + p_2)^2$, through $\hat{s}=x_1 x_2 s$, $f_c$ and $f_s$ are the color and spin degrees of freedom of the incident partons respectively, and $x_2$ is now given by \begin{equation} x_2=\frac{x_1 E_t (1-\beta_t\cos\theta_t)}{x_1\sqrt{s}-E_t(1+\beta_t\cos\theta_t)}. \end{equation} Since $x_1$ and $x_2$ must satisfy $4m_t^2/(x_1 s) \leq x_2 \leq 1$, we have \begin{equation} x_{1\,{\rm min}} =\frac{E_t (1+\beta_t\cos\theta_t)}{\sqrt{s}-E_t(1-\beta_t\cos\theta_t)}. \end{equation} The top angular distribution is obtained by integrating eq.(\ref{Dsigma2}) over $E_t$ in the range \[ m_t \leq E_t \leq \sqrt{s}/2. \] \sec{Analyses} We are now ready to perform numerical computations. We first compare the total cross section of $p\bar{p}\to t\bar{t}X$ at Tevatron energy with CDF/D0 data to get improved constraints on $d_{V,A}$, then compute the total cross section, the top angular distribution, the top $p_{\rm T}$ distribution, and the $t\bar{t}$ invariant-mass distribution of $pp \to t\bar{t}X$ at LHC energy. These top cross sections are not quantities which directly probe the $C\!P$ nature of the interactions, and they therefore depend on both $d_V$ and $d_A$.
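The expressions for $x_2$ and $x_{1\,{\rm min}}$ follow from requiring $2E_t^*=\sqrt{\hat{s}}=\sqrt{x_1x_2s}$, with $\beta=(x_1-x_2)/(x_1+x_2)$, and from setting $x_2=1$, respectively. A quick numerical consistency check (illustration only; the sample values of $\sqrt{s}$, $E_t$, $\cos\theta_t$ and $x_1$ are arbitrary) can be sketched as:

```python
import numpy as np

s, m_t = 1.96e3**2, 172.0           # sample values: Tevatron s (GeV^2), top mass (GeV)

def x2_of(x1, E_t, cth):
    p_t = np.sqrt(E_t**2 - m_t**2)
    bt = p_t/E_t
    return x1*E_t*(1 - bt*cth)/(x1*np.sqrt(s) - E_t*(1 + bt*cth))

E_t, cth = 300.0, 0.5
p_t = np.sqrt(E_t**2 - m_t**2); bt = p_t/E_t
x1min = E_t*(1 + bt*cth)/(np.sqrt(s) - E_t*(1 - bt*cth))

x1 = 0.6
x2 = x2_of(x1, E_t, cth)
beta = (x1 - x2)/(x1 + x2)
g = 1/np.sqrt(1 - beta**2)
E_star = g*(E_t - beta*p_t*cth)     # top energy boosted to the parton-CM frame

# energy conservation: 2 E_t^* = sqrt(x1 x2 s)
assert abs(2*E_star - np.sqrt(x1*x2*s)) < 1e-6*np.sqrt(x1*x2*s)
# x2 -> 1 exactly at the lower endpoint x1 = x1_min
assert abs(x2_of(x1min, E_t, cth) - 1.0) < 1e-9
```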
This may seem inefficient, but it allows us to obtain useful information on both parameters at the same time. \subsec{Analysis of Tevatron data} The latest data on $t\bar{t}$ pair production at Tevatron for $\sqrt{s}=1.96$ TeV are \cite{Teva-data} \begin{eqnarray} &&\!\!\!\!\!\!\!\!\! \sigma_{\rm exp} = 7.02 \pm 0.63\ {\rm pb}\: \ \ ({\rm CDF}:\:m_t=175\:{\rm GeV}) \\ &&\!\!\!\!\!\!\!\!\! \phantom{\sigma_{\rm exp}} = 8.18^{\ +\ 0.98}_{\ -\ 0.87}\ {\rm pb}\ \ \ \ ({\rm D0}:\:m_t=170\:{\rm GeV}). \end{eqnarray} We could decrease the uncertainty by combining them according to the standard statistical formula, and such averaging is indeed often seen in the literature. We, however, prefer to stay conservative and do not follow this procedure, because it is not easy to properly treat (average) systematic errors from different detectors. In addition, different values of $m_t$ are used in their analyses, which also makes the averaging difficult. At any rate, the uncertainties of the CDF and D0 data were $+3.6/-2.4$ pb and $\pm 2.2$ pb respectively when Haberl et al. performed their analysis \cite{Haberl:1995ek}, which tells us that it is high time to revisit this analysis. On the other hand, the total cross section in the framework of QCD with higher-order corrections has been studied in detail in \cite{Kidonakis:2008mu} (see also \cite{Moch:2008qy}). We take the results using the latest set of parton-distribution functions ``CTEQ6.6M'' (NNLO approximation) \cite{Nadolsky:2008zw} \begin{eqnarray} &&\!\!\!\!\!\!\!\!\! \sigma_{\rm QCD} = 6.73^{\ +\ 0.51}_{\ -\ 0.46}\ {\rm pb}\: \ \ (m_t=175\:{\rm GeV}) \\ &&\!\!\!\!\!\!\!\!\! \phantom{\sigma_{\rm QCD}} = 7.87^{\ +\ 0.60}_{\ -\ 0.55}\ {\rm pb}\: \ \ (m_t=170\:{\rm GeV}), \end{eqnarray} and combine these theoretical errors with the above experimental errors as \begin{eqnarray} &&\!\!\!\!\!\!\!\!\! \sigma_{\rm exp} = 7.02^{\ +\ 0.81}_{\ -\ 0.78}\ {\rm pb}\: \ \ ({\rm CDF}:\:m_t=175\:{\rm GeV}) \\ &&\!\!\!\!\!\!\!\!\!
\phantom{\sigma_{\rm exp}} = 8.18^{\ +\ 1.15}_{\ -\ 1.03}\ {\rm pb}\: \ \ ({\rm D0}:\:m_t=170\:{\rm GeV}). \label{sigmadata} \end{eqnarray} Comparing them with our calculation $\sigma(d_V, d_A)$, i.e., the sum of the central value of the above $\sigma_{\rm QCD}$ and the non-SM part of our cross section at the lowest order of perturbation, we find that $d_{V,A}$ are restricted as \begin{eqnarray} &&-0.01\ \rs\ d_V\ \rs\:+0.01 \ \ \ {\rm or}\ \ \ +0.38\ \rs\ d_V\ \rs\:+0.41 \end{eqnarray} when we put $d_A=0$. Similarly we have \begin{equation} |d_A|\ \rs\:+0.12 \end{equation} when we put $d_V=0$. Here, since $\sigma(d_V, d_A)$ depends not on $d_A$ itself but on $d_A^2$, as seen from eqs.(\ref{Mqq}) and (\ref{Mgg}), we only obtain constraints on $|d_A|$. Finally, when we keep both $d_{V,A}$ non-zero, these two parameters produce corrections which tend to cancel each other as long as $|d_V|$ is not too large, and consequently rather large $d_{V,A}$ are allowed: \begin{equation} d_V \simeq +0.2\ \ \ {\rm and}\ \ \ |d_A| \simeq +0.3. \end{equation} We show the experimentally allowed $d_{V,A}$ region in Fig.\ref{contor1}. We find that there still remains some area for these anomalous-coupling parameters, though the standard-model (QCD) prediction, i.e., $d_{V,A}=0$, is also consistent with the data. \vskip 3cm \begin{figure}[htbp] \begin{minipage}{14.8cm} \begin{center} \psfrag{dv}{\begin{large}\hspace*{-0.0cm}$d_V$\end{large}} \psfrag{da}{\begin{large}\hspace*{-0.0cm}$d_A$\end{large}} {\includegraphics[width=13.95cm]{contor.eps}} \caption{Experimentally allowed region for $d_{V,A}$.
The region between two solid/dashed curves is from CDF/D0 data.}\label{contor1} \end{center} \end{minipage} \end{figure} \newpage \subsec{LHC I: Total cross sections} Let us compute the total cross section and the differential distributions of $pp \to t\bar{t}X$ at LHC energy ($\sqrt{s}=$ 10 and 14 TeV) for \begin{center} $(d_V,\:d_A)=$\ \ \ (a)\ $(-0.01,\:0)$,\ \ \ (b)\ $(0.41,\:0)$,\ \ \ (c)\ $(0,\:0.12)$,\ \ \ (d)\ $(0.2,\:0.3)$ \end{center} as typical examples. Concerning the top-quark mass, we use the present world average $m_t=172$ GeV \cite{:2008vn}. First, the total cross sections of top pair productions are: $\sqrt{s}=$ 10 TeV \begin{equation} \begin{array}{llrl} {\rm (a)}\ \ d_V = -0.01,\ d_A=0~~~~~~& \sigma =& 447 &{\rm pb}\\ {\rm (b)}\ \ d_V = 0.41, \ d_A=0 & \sigma =& 1240 &{\rm pb}\\ {\rm (c)}\ \ d_V = 0 ,\ d_A= 0.12 & \sigma =& 637 &{\rm pb}\\ {\rm (d)}\ \ d_V = 0.2,\ d_A = 0.3 & \sigma =& 1835 &{\rm pb} \\ \end{array} \end{equation} $\sqrt{s}=$ 14 TeV \begin{equation} \begin{array}{llrl} {\rm (a)}\ \ d_V = -0.01,\ d_A=0~~~~~~& \sigma =& 991 &{\rm pb}\\ {\rm (b)}\ \ d_V = 0.41, \ d_A=0 & \sigma =& 3479 &{\rm pb}\\ {\rm (c)}\ \ d_V = 0 ,\ d_A= 0.12 & \sigma =& 1458 &{\rm pb}\\ {\rm (d)}\ \ d_V = 0.2,\ d_A = 0.3 & \sigma =& 4744 &{\rm pb} \end{array} \end{equation} They are much larger than the latest QCD predictions \cite{Kidonakis:2008mu} \[ \begin{array}{l} \sigma_{\rm QCD}(\sqrt{s}=10\ {\rm TeV})= 415^{\ +\ 34}_{\ -\ 29}\ {\rm pb}, \\ \sigma_{\rm QCD}(\sqrt{s}=14\ {\rm TeV})= 919^{\ +\ 76}_{\ -\ 55}\ {\rm pb}. \\ \end{array} \] In particular, the result with $(d_V,\:d_A)=(0.2,\:0.3)$ is several times larger than $\sigma_{\rm QCD}$, which means that we might encounter a surprising observation at LHC. It is indeed remarkable that the present Tevatron data still allow such a huge cross section at LHC, but this also indicates that coming measurements at LHC might give us a much stronger constraint on $d_{V,A}$. 
In order to see this possibility clearly, we assume that we have \[ \begin{array}{l} \sigma (\sqrt{s}=10\ {\rm TeV})= 415 \pm 100 \ {\rm pb}, \\ \sigma (\sqrt{s}=14\ {\rm TeV})= 919 \pm 100 \ {\rm pb} \\ \end{array} \] at LHC (including possible theoretical uncertainties), and draw figures similar to Fig.\ref{contor1} in Figs.\ref{contor2} \& \ref{contor3}. They show that LHC will actually give a very good opportunity to perform precise analyses of top-gluon couplings. \newpage \begin{figure}[htbp] \begin{minipage}{14.8cm} \begin{center} \psfrag{dv}{\begin{large}\hspace*{-0.0cm}$d_V$\end{large}} \psfrag{da}{\begin{large}\hspace*{-0.0cm}$d_A$\end{large}} {\includegraphics[width=10.cm]{contor10.eps}} \caption{Allowed region for $d_{V,A}$ which LHC ($\sqrt{s}=$10 TeV) might give us.}\label{contor2} \end{center} \end{minipage} \end{figure} \vspace{1.cm} \begin{figure}[htbp] \begin{minipage}{14.8cm} \begin{center} \psfrag{dv}{\begin{large}\hspace*{-0.0cm}$d_V$\end{large}} \psfrag{da}{\begin{large}\hspace*{-0.0cm}$d_A$\end{large}} {\includegraphics[width=10.cm]{contor14.eps}} \caption{Allowed region for $d_{V,A}$ which LHC ($\sqrt{s}=$14 TeV) might give us.}\label{contor3} \end{center} \end{minipage} \end{figure} \vspace{0.8cm} In this {\it virtual} analysis, one may expect that $d_V \simeq d_A \simeq 0$, i.e., an area around the QCD prediction, would be chosen as the best and unique solution, since we used the very QCD result for the central value of the {\it assumed} data; what the two figures show us, however, seems to contradict this expectation. This is due to the cancellation between the $d_V$ and $d_A$ terms mentioned in the previous subsection. Is it then impossible to single out QCD, even with much more precise data, as long as we rely on the total cross section alone?
Fortunately, this is not the case: superposing the constraints from Tevatron and LHC, we find that only a small region around $d_V=d_A=0$ would survive, as shown in Fig.\ref{allow}, if the above {\it assumed} LHC data were true. \vspace{1.5cm} \begin{figure}[htbp] \begin{minipage}{14.8cm} \begin{center} {\includegraphics[width=14cm]{allow.eps}} \caption{The $d_{V,A}$ region allowed by Tevatron and {\it assumed} LHC data (the shaded part).}\label{allow} \end{center} \end{minipage} \end{figure} \vspace{1cm} \subsec{LHC I\hspace{-0.06cm}I: Differential distributions} Next, we give the top angular distributions in $pp \to t\bar{t}X$ normalized by $\sigma_0 = \sigma(d_V=d_A=0)$, i.e., $\sigma_0^{-1}d\sigma/d\cos\theta_t$, in Figs.\ref{Dis10} and \ref{Dis14} for $\sqrt{s}=$ 10 TeV and 14 TeV, where both $d\sigma$ and $\sigma_0$ are tree-level quantities (concerning this approximation, see the comments below). As was just shown, the cross section becomes larger when $d_{V,A} \neq 0$, so the corresponding distributions normalized by $\sigma_0$ also exceed the QCD result (solid curve); moreover, their shapes differ from the QCD distribution.
\vfill \begin{figure}[h] \begin{minipage}{14.8cm} \begin{center} \psfrag{dsig}{\begin{Large}\hspace*{-1.7cm} $\frac1{\sigma_0}\frac{d\sigma}{d\cos\theta_t}$\end{Large}} \psfrag{cs}{\begin{large}\hspace*{-0.0cm}$\cos\theta_t$\end{large}} \psfrag{c1}{\begin{small}QCD\end{small}} \psfrag{c2}{\begin{small}$d_V=-0.01,\:d_A=0$\end{small}} \psfrag{c3}{\begin{small}$d_V=0.41,\:d_A=0$\end{small}} \psfrag{c4}{\begin{small}$d_V=0,\:d_A=0.12$\end{small}} \psfrag{c5}{\begin{small}$d_V=0.2,\:d_A=0.3$\end{small}} {\includegraphics[width=8.3cm]{LHC10a.eps}} \caption{The top angular distribution normalized by $\sigma_0$: LHC energy $\sqrt{s}=10$ TeV}\label{Dis10} \end{center} \end{minipage} \end{figure} \vspace{0.8cm} \begin{figure}[h] \begin{minipage}{14.8cm} \begin{center} \psfrag{dsig}{\begin{Large}\hspace*{-1.7cm} $\frac1{\sigma_0}\frac{d\sigma}{d\cos\theta_t}$\end{Large}} \psfrag{cs}{\begin{large}\hspace*{-0.0cm}$\cos\theta_t$\end{large}} \psfrag{c1}{\begin{small}QCD\end{small}} \psfrag{c2}{\begin{small}$d_V=-0.01,\:d_A=0$\end{small}} \psfrag{c3}{\begin{small}$d_V=0.41,\:d_A=0$\end{small}} \psfrag{c4}{\begin{small}$d_V=0,\:d_A=0.12$\end{small}} \psfrag{c5}{\begin{small}$d_V=0.2,\:d_A=0.3$\end{small}} {\includegraphics[width=8.3cm]{LHC14a.eps}} \caption{The top angular distribution normalized by $\sigma_0$: LHC energy $\sqrt{s}=14$ TeV}\label{Dis14} \end{center} \end{minipage} \end{figure} \newpage \begin{figure}[h] \begin{minipage}{14.8cm} \begin{center} \psfrag{dsig}{\begin{Large}\hspace*{-1.7cm} $\frac1{\sigma_0}\frac{d{\mit\Delta}\sigma}{d\cos\theta_t}$\end{Large}} \psfrag{cs}{\begin{large}\hspace*{-0.0cm}$\cos\theta_t$\end{large}} \psfrag{c2}{\begin{small}$d_V=-0.01,\:d_A=0$\end{small}} \psfrag{c3}{\begin{small}$d_V=0.41,\:d_A=0$\end{small}} \psfrag{c4}{\begin{small}$d_V=0,\:d_A=0.12$\end{small}} \psfrag{c5}{\begin{small}$d_V=0.2,\:d_A=0.3$\end{small}} {\includegraphics[width=8.3cm]{LHC10b.eps}} \caption{Nonstandard effects in the top angular distribution normalized by 
$\sigma_0$: LHC energy $\sqrt{s}=10$ TeV}\label{Deldis10} \end{center} \end{minipage} \end{figure} \vspace{0.8cm} \begin{figure}[h] \begin{minipage}{14.8cm} \begin{center} \psfrag{dsig}{\begin{Large}\hspace*{-1.7cm} $\frac1{\sigma_0}\frac{d{\mit\Delta}\sigma}{d\cos\theta_t}$\end{Large}} \psfrag{cs}{\begin{large}\hspace*{-0.0cm}$\cos\theta_t$\end{large}} \psfrag{c2}{\begin{small}$d_V=-0.01,\:d_A=0$\end{small}} \psfrag{c3}{\begin{small}$d_V=0.41,\:d_A=0$\end{small}} \psfrag{c4}{\begin{small}$d_V=0,\:d_A=0.12$\end{small}} \psfrag{c5}{\begin{small}$d_V=0.2,\:d_A=0.3$\end{small}} {\includegraphics[width=8.3cm]{LHC14b.eps}} \caption{Nonstandard effects in the top angular distribution normalized by $\sigma_0$: LHC energy $\sqrt{s}=14$ TeV}\label{Deldis14} \end{center} \end{minipage} \end{figure} \vspace{0.95cm} Here some comments on the QCD radiative corrections are in order. We calculated these distributions at the lowest order in perturbation theory, assuming that most of the corrections to the standard-model cross section $\sigma_0$ cancel through the normalization between the numerator and the denominator, as the authors of \cite{Haberl:1995ek} did. According to \cite{Beenakker:1990maa}, however, we should not rely on this approximation too much. Therefore, we also show in Figs.\ref{Deldis10} \& \ref{Deldis14} the pure nonstandard contribution $d{\mit\Delta}\sigma(d_V,d_A)\equiv d\sigma-d\sigma_0$, normalized by the same lowest-order $\sigma_0$ so that Figs.\ref{Dis10} \& \ref{Dis14} and Figs.\ref{Deldis10} \& \ref{Deldis14} can be compared directly. We find that all the curves there are similar to those in the previous figures, except that the curve for $d_V=0.41$ and $d_A=0$ (the dashed curve) behaves differently as $|\cos\theta_t|$ gets close to 1. In the same way, let us show the top $p_{\rm T}$ distributions $\sigma_0^{-1}d\sigma/dp_{\rm T}$ in Figs.\ref{Dispt10} \& \ref{Dispt14}.
There we give only the full distributions, not the pure nonstandard contributions to them, since the figures on the angular distributions showed that the difference is not significant. We find that the shapes of the four curves look rather alike, but the one for $(d_V,\,d_A)=(0.41,\,0)$ behaves differently and is therefore clearly distinguishable from the others. In contrast, the difference between the QCD curve and the one for $(d_V,\,d_A)=(-0.01,\,0)$ is so small that it will be hard to extract meaningful information from it. \vfill \begin{figure}[h] \begin{minipage}{14.8cm} \begin{center} \psfrag{dsig}{\begin{Large}\hspace*{-0.8cm} $\frac1{\sigma_0}\frac{d\sigma}{dp_{\rm T}}$\end{Large}} \psfrag{c1}{\begin{small}QCD\end{small}} \psfrag{c2}{\begin{small}$d_V=-0.01,\:d_A=0$\end{small}} \psfrag{c3}{\begin{small}$d_V=0.41,\:d_A=0$\end{small}} \psfrag{c4}{\begin{small}$d_V=0,\:d_A=0.12$\end{small}} \psfrag{c5}{\begin{small}$d_V=0.2,\:d_A=0.3$\end{small}} \psfrag{pt}{\begin{Large}$p_{\rm T}$\end{Large} (GeV)} {\includegraphics[width=12cm]{LHC10pt.eps}} \caption{The top $p_{\rm T}$ distribution normalized by $\sigma_0$: LHC energy $\sqrt{s}=10$ TeV}\label{Dispt10} \end{center} \end{minipage} \end{figure} \newpage \begin{figure}[h] \begin{minipage}{14.8cm} \begin{center} \psfrag{dsig}{\begin{Large}\hspace*{-0.8cm} $\frac1{\sigma_0}\frac{d\sigma}{dp_{\rm T}}$\end{Large}} \psfrag{c1}{\begin{small}QCD\end{small}} \psfrag{c2}{\begin{small}$d_V=-0.01,\:d_A=0$\end{small}} \psfrag{c3}{\begin{small}$d_V=0.41,\:d_A=0$\end{small}} \psfrag{c4}{\begin{small}$d_V=0,\:d_A=0.12$\end{small}} \psfrag{c5}{\begin{small}$d_V=0.2,\:d_A=0.3$\end{small}} \psfrag{pt}{\begin{Large}$p_{\rm T}$\end{Large} (GeV)} {\includegraphics[width=12cm]{LHC14pt.eps}} \caption{The top $p_{\rm T}$ distribution normalized by $\sigma_0$: LHC energy $\sqrt{s}=14$ TeV}\label{Dispt14} \end{center} \end{minipage} \end{figure} \vskip 0.4cm Finally, Figs.\ref{Dism10} and \ref{Dism14} show the $t\bar{t}$
invariant-mass distributions $\sigma_0^{-1}d\sigma/d\mu_{t\bar{t}}$. Here again the one for $(d_V,\,d_A)=(0.41,\,0)$ behaves a bit differently, and the others, except for $(d_V,\,d_A)=(-0.01,\,0)$, will also be usable for our analysis. It is not surprising that both the $p_{\rm T}$ and $t\bar{t}$ invariant-mass distributions for $(d_V,\,d_A)=(0.41,\,0)$ have their peaks at a higher $p_{\rm T}/\mu_{t\bar{t}}$ point than the other curves: the $d_{V,A}$ terms can be enhanced by the top energy, as is understood from ${\cal M}_{q\bar{q},gg}$, and partial cancellation occurs between the $d_V$ and $d_A$ contributions when they take similar non-zero values like $(d_V,\,d_A)=(0.2,\,0.3)$, as mentioned in the discussion of the total cross section. This is particularly interesting for the invariant-mass distribution, which is a mere delta-function distribution in the parton-CM frame since $\mu_{t\bar{t}}=2E_t^*=\sqrt{\hat{s}}$ there. Therefore, what we observe in Figs.\ref{Dism10} and \ref{Dism14} is mainly the boost effect coming from the parton distribution functions, but the above enhancement still produces some difference. As a result, we may conclude our analyses as follows: these three differential distributions indicate that there will be some chance to observe anomalous-coupling effects unless $|d_{V,A}|$ is very small, although their effects are not as drastic in these quantities as in the total cross section.
\vspace*{0.5cm} \begin{figure}[h] \begin{minipage}{14.8cm} \begin{center} \psfrag{dsig}{\begin{Large}\hspace*{-0.8cm} $\frac1{\sigma_0}\frac{d\sigma}{d\mu_{t\bar{t}}}$\end{Large}} \psfrag{cs}{\begin{large}\hspace*{-0.0cm}$\cos\theta_t$\end{large}} \psfrag{c1}{\begin{small}QCD\end{small}} \psfrag{c2}{\begin{small}$d_V=-0.01,\:d_A=0$\end{small}} \psfrag{c3}{\begin{small}$d_V=0.41,\:d_A=0$\end{small}} \psfrag{c4}{\begin{small}$d_V=0,\:d_A=0.12$\end{small}} \psfrag{c5}{\begin{small}$d_V=0.2,\:d_A=0.3$\end{small}} \psfrag{mtt}{\begin{Large}$\mu_{t\bar{t}}$\end{Large} (GeV)} \psfrag{mt}{\begin{small}$2m_t$\end{small}} {\includegraphics[width=9.cm]{LHC10m.eps}} \caption{The $t\bar{t}$ invariant-mass distribution normalized by $\sigma_0$: LHC energy $\sqrt{s}=10$ TeV}\label{Dism10} \end{center} \end{minipage} \end{figure} \vspace*{0.3cm} \begin{figure}[h] \begin{minipage}{14.8cm} \begin{center} \psfrag{dsig}{\begin{Large}\hspace*{-0.8cm} $\frac1{\sigma_0}\frac{d\sigma}{d\mu_{t\bar{t}}}$\end{Large}} \psfrag{cs}{\begin{large}\hspace*{-0.0cm}$\cos\theta_t$\end{large}} \psfrag{c1}{\begin{small}QCD\end{small}} \psfrag{c2}{\begin{small}$d_V=-0.01,\:d_A=0$\end{small}} \psfrag{c3}{\begin{small}$d_V=0.41,\:d_A=0$\end{small}} \psfrag{c4}{\begin{small}$d_V=0,\:d_A=0.12$\end{small}} \psfrag{c5}{\begin{small}$d_V=0.2,\:d_A=0.3$\end{small}} \psfrag{mtt}{\begin{Large}$\mu_{t\bar{t}}$\end{Large} (GeV)} \psfrag{mt}{\begin{small}$2m_t$\end{small}} {\includegraphics[width=9.cm]{LHC14m.eps}} \caption{The $t\bar{t}$ invariant-mass distribution normalized by $\sigma_0$: LHC energy $\sqrt{s}=14$ TeV}\label{Dism14} \end{center} \end{minipage} \end{figure} \vspace{0.3cm} \sec{Summary} We have studied in this article anomalous top-gluon coupling effects in the total cross section and several differential distributions of $pp \to t\bar{t}X$ at LHC energies $\sqrt{s}=$ 10 and 14 TeV in the framework of dimension-6 effective operators. 
We first obtained an experimentally allowed region for the chromoelectric and chromomagnetic moments from Tevatron (CDF/D0) data on the total cross section of $p\bar{p} \to t\bar{t}X$, and then computed the above-mentioned quantities accordingly. We found that the total cross section could become much larger than the standard (QCD) prediction. The top distributions could also show behavior different from QCD, though the non-SM effects are not as drastic there as in the total cross section. It would be quite exciting if we actually observed such a huge cross section at LHC. Conversely, if we observe a cross section close to the QCD prediction, we will obtain a much stronger constraint on $d_V$ and $d_A$. In that case, an analysis combining the Tevatron and LHC data will work very effectively. We focused here on the top quark itself in the final state, and did not go into detailed analyses of its various decay processes, since this helps to maximize the number of events available for our studies. Indeed we have thereby shown that there would be some chances to observe interesting phenomena. However, if we get any nonstandard signal, we of course have to perform more systematic analyses including the decay products, i.e., leptons/$b$ quarks. We should get ready for such an exciting situation as our next subject before the actual experiments start. \vspace{0.7cm} \centerline{ACKNOWLEDGMENTS} \vspace{0.3cm} The algebraic calculations using FORM were carried out on the computer system at Yukawa Institute for Theoretical Physics (YITP), Kyoto University. \baselineskip=20pt plus 0.1pt minus 0.1pt \vskip 0.5cm \centerline{Note added} After the completion of this work, we were informed that anomalous top-gluon couplings were also studied in \cite{Lillie:2007hd} to explore the possibility that the right-handed top quark is composite. We would like to thank Tim Tait for calling our attention to these two papers. \vspace*{0.8cm}
\section{Introduction} In the videos (\href{http://ecommons.library.cornell.edu/bitstream/1813/14101/2/mpeg-1.mpg}{low-res.} and \href{http://ecommons.library.cornell.edu/bitstream/1813/14101/3/mpeg-2.mpg}{high-res.}), we first present an example of free hovering of a rigid, spatially asymmetric body in an oscillating airflow. The rigid object is a hollow ``pyramid'' of height $h=3.2$ cm, constructed of carbon fibers and wax paper. The background air oscillates at a frequency $f=20$ Hz. When the air speed is high enough ($V_\textrm{max}=1.7$ m/s), the paper pyramid hovers against its weight ($W=284$ dynes, or $m=0.29$ g). The pyramid aligns itself spontaneously in the lifting position, exhibiting surprising stability. The video is recorded by a high-speed camera and then played back at 1/13 of the real speed. In the second part of the video, we show the vortical structures produced by the interaction between a ``V''-shaped object and an oscillating background flow. The flow visualization is realized by shadowgraph imaging of an oscillating water flow around a two-dimensional pyramid of height $h=3.0$ cm, composed of two faces. In the example, shown in real time in the video, the oscillation frequency of the water flow is $0.8$ Hz, with an amplitude of 2 cm. The system is thus in approximately the same Reynolds-number regime as the experiment in air. Vortices generated during each period coalesce into paired vortices before detaching and propagating downward, producing the momentum flux needed to support the body. \end{document}
\section{Introduction} The only pure states which saturate the Schr\"{o}dinger-Robertson \cite{Robertson} uncertainty relation for the canonically conjugated coordinate and momentum are the Gaussian states. The latter, due to Hudson's theorem \cite{Hudson}, are also the only pure states with strictly positive Wigner function \cite{strictly}. One may naturally wonder how these theorems can be extended to mixed states, and whether the \textquotedblleft Gaussian\textquotedblright\ link between them persists. In the space of mixed states, different forms of the uncertainty relation have been suggested, varying with the choice of the quantities characterizing the mixed states, i.e., purity, purities of higher order, or various entropies (see \cite{Dodonov} for a review). Among these proposals, a basic one, i.e., one expressed in a closed form, is the \textit{purity-bounded uncertainty relation} suggested by Dodonov and Man'ko \cite{Man'ko}. According to this relation, the minimum uncertainty is a function of the purity characterizing the degree of mixedness of a state. In a more recent work Dodonov \cite{Dodonov} has proven that the mixed states which minimize this relation possess positive Wigner functions and that they are not extremely \textquotedblleft far\textquotedblright\ from Gaussian states of the same purity. If, instead of the purity, the von Neumann entropy is employed as a measure of the degree of mixedness, the results are not identical. Bastiaans \cite{Bastiaans} has derived an uncertainty relation based on the von Neumann entropy of mixed states, the \textit{entropy-bounded uncertainty relation}, and has proven that the mixed states of minimum uncertainty are the thermal (Gaussian) states.
These results already give a preliminary answer to the questions we posed, suggesting that the link between the ability to minimize the relevant uncertainty relations and the positivity of Wigner functions seems to persist in the space of mixed states, though Gaussianity is no longer a necessary requirement for extremality of the uncertainty. However, to answer our questions in a more complete way, we need a closer look at the set of mixed states with positive Wigner functions. In a recent work \cite{PRA} we have formulated an extension of Hudson's theorem by identifying the maximum degree of deviation from a Gaussian state which can be reached by a mixed state with positive Wigner function, given its purity and its uncertainty. This problem does not have a simple solution, and in \cite{PRA} we have identified only the bounds for any continuous classical distribution, which may not necessarily correspond to a valid Wigner function; therefore the bounds are not tight for the set of quantum states. In view of this discussion, we extend in this work the purity-bounded uncertainty relation by adding one more parameter, the non-Gaussianity, characterizing the distance of the state from a Gaussian state with the same covariance matrix. In this way we are able to draw more complete conclusions on the overlap between the states minimizing this uncertainty relation and the set of states with positive Wigner function. In addition, this extension permits us to show explicitly how the Gaussian states smoothly become extremal pure states in the context of uncertainty and positivity of Wigner functions. In Sec.~\ref{SecI}, we introduce the quantities which we choose to characterize one-dimensional mixed states and briefly review existing results concerning uncertainty relations. In Sec.~\ref{SecII} we derive a new bound on the uncertainty of a mixed state characterized by its purity and non-Gaussianity.
We name this relation the \textit{non-Gaussianity bounded uncertainty relation}. In Sec.~\ref{SecIII} we identify numerically the parts of the bound realized by states with positive Wigner function, thus visualizing the overlap of the set of states with positive Wigner function and the states minimizing the non-Gaussianity bounded uncertainty relation. In a recent work \cite{SPIE} we have extended the entropy-bounded uncertainty relation using methods similar to those used here. However, that extension, as well as the entropy-bounded uncertainty relation itself, requires numerical methods to be expressed in a proper form. In Sec.~\ref{SecIV}, we discuss the main outcomes of this work and compare them with those in \cite{SPIE}. \section{Our parametric space \label{SecI}} For our purposes we first need to choose a convenient parametrization of the space of mixed states that will permit us to formulate the extensions of the uncertainty relation and Hudson's theorem in a unified manner. Let us start with a general density matrix $\hat{\rho}$ determining a quantum state. Apparently, a measure of the degree of mixedness of the state is an indispensable parameter characterizing a mixed state. We choose as the most convenient measure for us the \textit{purity} $\mu=\mathrm{Tr}\left( \hat{\rho}^{2}\right) $. The second basic characteristic of the state $\hat{\rho}$ is its \textit{covariance matrix} $\mathbf{\gamma}$ defined as \begin{equation} \gamma_{ij}=\mathrm{Tr}(\{(\hat{r}_{i}-d_{i}),(\hat{r}_{j}-d_{j})\}\hat{\rho}) \label{Co} \end{equation} where $\hat{\mathbf{r}}$ is the vector of the canonically conjugate observables position and momentum (or quadrature operators in quantum optics), $\hat{\mathbf{r}}=(\hat{x},\hat{p})^{T}$, $\mathbf{d}=\mathrm{Tr}(\hat{\mathbf{r}}\hat{\rho})$ is the displacement vector, and $\{\cdot,\cdot\}$ is the anticommutator.
We can put the displacement vector to zero with no loss of generality, since the purity (as well as the other quantities we will be interested in) is invariant with respect to $\mathbf{d}$. The importance of the covariance matrix stems from the fact that it is measurable experimentally and that it is tightly connected, as we show below, to the uncertainty relation. Throughout the text we consider quantum systems with only one pair of conjugate variables, like the system of one particle or a one-mode state in quantum optics; therefore we consider $2\times2$ covariance matrices. The covariance matrix of $\hat{\rho}$ uniquely determines a Gaussian state $\hat{\rho}_{G}$ which henceforth we will refer to as the \emph{reference Gaussian state} for the state $\hat{\rho}$, and we choose as a second parameter the purity of $\hat{\rho}_{G}$, \begin{equation} \mu_{G}=1/\sqrt{\gamma_{11}\gamma_{22}-\left\vert \gamma_{12}\right\vert ^{2}}. \label{Ga} \end{equation} As a third parameter we introduce a measure of the \textit{non-Gaussianity}, i.e., the distance of the state from its reference Gaussian state. As a first step we are going to work with the trace overlap $\mathrm{Tr}\left( \hat{\rho}\hat{\rho}_{G}\right) $ between the states $\hat{\rho}$ and $\hat{\rho}_{G}$. Although the trace overlap is not a measure of distance, it can be employed together with $\mu_{G}$, $\mu$ for evaluating a correct measure of non-Gaussianity, the normalized Hilbert-Schmidt distance \cite{Dodonov},\cite{nG}, \begin{equation} \mathbf{\delta}=\frac{1}{2\mu}\mathrm{Tr}\left( \left( \hat{\rho}-\hat{\rho}_{G}\right) ^{2}\right) =\frac{\mu_{G}+\mu-2\mathrm{Tr}\left( \hat{\rho}\hat{\rho}_{G}\right) }{2\mu}. \label{di} \end{equation} Therefore, while a state $\hat{\rho}$ requires an infinite set of parameters to be fully described, it is enough for our purposes to work in the reduced parametric space $\{\mu$, $\mu_{G}$, $\mathbf{\delta}\}$. In \cite{PRA} we have used the same parametric space.
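As a concrete numerical illustration of the parametric space $\{\mu,\mu_{G},\mathbf{\delta}\}$, consider the following Python sketch (the choice of the Fock state $\left\vert 1\right\rangle$ and the truncation dimension are arbitrary example assumptions, not taken from the text). It builds the reference thermal state in a truncated Fock basis and evaluates the three parameters directly from their definitions:

```python
import numpy as np

N = 80                                   # Fock-space truncation (assumed large enough)
k = np.arange(N)

# Example state: the Fock state |1>; its covariance matrix has
# gamma11 = gamma22 = 2n+1 = 3, so the reference Gaussian is thermal with mu_G = 1/3
rho = np.zeros((N, N)); rho[1, 1] = 1.0
mu = float(np.trace(rho @ rho))          # purity Tr(rho^2) = 1 for a pure state

mu_G = 1.0 / 3.0
nbar = (1.0 / mu_G - 1.0) / 2.0          # thermal state with the same covariance matrix
p_th = nbar**k / (1.0 + nbar)**(k + 1)   # thermal weights in the Fock basis
rho_G = np.diag(p_th)

overlap = float(np.trace(rho @ rho_G))   # trace overlap Tr(rho rho_G)
delta = (mu_G + mu - 2.0 * overlap) / (2.0 * mu)   # normalized Hilbert-Schmidt distance
```

For $\left\vert 1\right\rangle$ this gives $\mathrm{Tr}(\hat{\rho}\hat{\rho}_{G})=1/4$ and $\mathbf{\delta}=5/12$, in agreement with the closed-form expressions above.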
There we have formulated an extension of Hudson's theorem to mixed states by identifying an upper bound, though not a tight one, on the non-Gaussianity $\mathbf{\delta}$ for states with positive Wigner function. This bound is represented by a surface in the space $\{\mu$, $\mu_{G}$, $\mathbf{\delta}\}$. Let us note that the existing purity-bounded uncertainty relation \cite{Man'ko} can be represented as a line-bound in the plane $\left\{ \mu,\mu_{G}\right\} $. Before we justify this statement, we shortly review the historical evolution of the knowledge concerning uncertainty relations; for more details we refer the interested reader to \cite{Man'ko}. For pure states of covariance matrix $\mathbf{\gamma}$, the first uncertainty relation \begin{equation} \gamma_{11}\gamma_{22}\approx4 \label{Heis} \end{equation} (where $\hbar=1$) was proposed by Heisenberg \cite{Heisenberg} and proven in its more strict form, $\gamma_{11}\gamma_{22}\geqslant1$, by Kennard \cite{Kennard} in 1927. This uncertainty relation is saturated by coherent states and by squeezed states whose major and minor axes in phase space are parallel to the $x$- and $p$-axes. A few years later, the Heisenberg uncertainty relation was generalized by Schr\"{o}dinger and Robertson, independently, to a relation valid for any set of non-commuting observables, which, in the case of $(\hat{x},\hat{p})$, takes the form \begin{equation} \sqrt{\gamma_{11}\gamma_{22}-\left\vert \gamma_{12}\right\vert ^{2}}\geq1.\label{Scr} \end{equation} The Schr\"{o}dinger-Robertson uncertainty relation accounts for the possible correlations between the coordinates and therefore is saturated by all Gaussian pure states. Even though the inequality Eq.~(\ref{Scr}) is also valid for mixed states, it cannot be saturated by any mixed state. Therefore, for the sets of states with purity less than one, this uncertainty relation is not tight.
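This non-tightness is easy to check numerically. In the sketch below (Python; the mean occupation $\bar{n}=1$ is an arbitrary example value), a thermal state has uncertainty $\sqrt{\gamma_{11}\gamma_{22}}=2\bar{n}+1=1/\mu$, strictly above the Schr\"{o}dinger-Robertson limit of $1$ whenever $\mu<1$:

```python
import numpy as np

nbar = 1.0                                   # mean occupation of a thermal (Gaussian mixed) state
k = np.arange(400)
p = nbar**k / (1.0 + nbar)**(k + 1)          # thermal weights in the Fock basis

mu = float(np.sum(p**2))                     # purity Tr(rho^2) = 1/(2*nbar + 1)
uncertainty = 2.0 * nbar + 1.0               # sqrt(gamma11 * gamma22) for a thermal state
# uncertainty * mu = 1, so the gap above the pure-state limit 1 is exactly 1/mu - 1
```

Here $\mu=1/3$ and the uncertainty equals $3$, so the Schr\"{o}dinger-Robertson bound of $1$ is far from saturated for this mixed state.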
A tight uncertainty relation for mixed states is the purity-bounded uncertainty relation derived by Dodonov and Man'ko \cite{Man'ko} \begin{equation} \sqrt{\gamma_{11}\gamma_{22}-\left\vert \gamma_{12}\right\vert ^{2}}\geq \Phi\left( \mu\right) .\label{Hei} \end{equation} For mixed states the lower limit on the uncertainties depends on the purity of the state via a monotonic function $\Phi$ satisfying \begin{equation} \Phi\left( 1\right) =1\text{ and }1\leq\Phi\left( \mu\right) \leq1/\mu. \end{equation} In view of Eq.~(\ref{Ga}) it is then evident that the purity-bounded uncertainty relation Eq.~(\ref{Hei}) is equivalent to a bound given by a line in the $\left\{ \mu,\mu_{G}\right\} $ plane. Dodonov and Man'ko have proven \cite{Man'ko} that this bound is realized by states with phase-independent and positive Wigner functions, and they have derived an exact but piecewise expression for the function $\Phi$ in Eq.~(\ref{Hei}). In Appendix A we present an alternative, though parametric, expression for this line-bound whose construction permits a generalization to the three-dimensional parametric space. We also note here a quite unexpected fact visualized by the purity-bounded uncertainty relation. It is known that, on one hand, the pure states which saturate the Schr\"{o}dinger-Robertson uncertainty relation for pure states are the Gaussian states. On the other hand, the Gaussian mixed states (thermal states) maximize the von Neumann entropy \cite{Bastiaans}, being, in a sense, maximally \textquotedblleft disordered\textquotedblright\ \cite{Wolf}\ given $\mu_{G}$, Eq.~(\ref{Ga}). However, contrary to our intuition, the purity-bounded uncertainty relation dictates that the mixed Gaussian states are not the states which maximize the purity. In the next section, we extend the purity-bounded uncertainty relation by including into it the non-Gaussianity $\delta$ of the state \begin{equation} \sqrt{\gamma_{11}\gamma_{22}-\left\vert \gamma_{12}\right\vert ^{2}}\geq F\left( \mu,\delta\right) .
\label{rel} \end{equation} For our derivation, and in view of Eq.~(\ref{Ga}), it is convenient (as we explain in more detail in Sec.~\ref{SecII}) to conceive the inequality given by Eq.~(\ref{rel}) as a bound in the introduced parametric space $\left\{ \mu,\mu_{G},\mathbf{\delta}\right\} $. This bound, valid for all mixed states, is then compared, in Sec.~\ref{SecIII}, with the bound for states with positive Wigner functions. \section{Non-Gaussianity bounded uncertainty relation\label{SecII}} In order to find the function $F$ in Eq.~(\ref{rel}) it is enough to find the bound for the whole set of mixed quantum states in the parametric space $\left\{ \mu,\mu_{G},\mathbf{\delta}\right\} $. It is obvious that such a bound will be a surface (since the space is three-dimensional) and, if it is single-valued in the parameters $\mu$ and $\delta$, it represents the function $F\left( \mu,\delta\right) $. Our results show that the bound is in general single-valued; however, there is a particular domain of $\left\{ \mu,\mathbf{\delta}\right\} $ where the bound is double-valued. This problem does not emerge if one works with $\mathrm{Tr}\left( \hat{\rho}\hat{\rho}_{G}\right) $ instead of $\mathbf{\delta}$ (see Eq.~(\ref{di})). That is why we derive our results employing the trace overlap. With the help of Eq.~(\ref{di}) the results can be directly translated to the initial parametric space $\left\{ \mu,\mu_{G},\mathbf{\delta}\right\} $, and we do so mainly for the visualization of the results. A possible way to proceed with the derivation of the bound is to search for the states which minimize or maximize the purity $\mu$ when the values of the other two parameters, $\mu_{G}$ and $\mathrm{Tr}\left( \hat{\rho}\hat{\rho}_{G}\right) $, are kept constant.
We then conceive the total bound as consisting of two regions, namely, \textit{region I} where the purity $\mu$ is minimized and \textit{region II} where the purity is maximized, given $\mu_{G}$ and $\mathrm{Tr}\left( \hat{\rho}\hat{\rho}_{G}\right) $. At the final step we recombine both regions and present the whole bound, which we name the \textit{non-Gaussianity bounded uncertainty relation} as an abbreviation of the more exact title \emph{non-Gaussianity and purity bounded uncertainty relation}. \subsection{\textit{Region I}: States of minimum purity} Let us consider a general density matrix $\hat{\rho}$ characterized by the purity $\mu_{G}$ of the reference Gaussian state $\hat{\rho}_{G}$ and the trace overlap between $\hat{\rho}$ and $\hat{\rho}_{G}$, $\mathrm{Tr}\left( \hat{\rho}\hat{\rho}_{G}\right) $. By using symplectic transformations we can set the covariance matrix $\mathbf{\gamma}$ of $\hat{\rho}$ into a symmetric form where $\gamma_{11}=\gamma_{22}$ and $\gamma_{12}=0$. Symplectic transformations, such as squeezing and rotation, do not change the quantities of interest (see Ref.~\cite{PRA}), and therefore without loss of generality we can assume that the reference Gaussian state for $\hat{\rho}$, $\hat{\rho}_{G}$, is a thermal state. In the Wigner representation such a state, $\hat{\rho}_{G}$, is \begin{equation} W_{G}\left( r\right) =\frac{\mu_{G}}{\pi}e^{-r^{2}\mu_{G}},\qquad r=\sqrt{x^{2}+p^{2}} \label{WG} \end{equation} where $\mu_{G}$ is its purity defined in Eq.~(\ref{Ga}). The next step is to prove the \emph{Lemma} stating that for a given state $\hat{\rho}$ with these characteristics, there is always a state with the same characteristics possessing a phase-independent Wigner function, whose purity is equal to or lower than that of $\hat{\rho}$. To prove the Lemma it is more convenient to work in the Wigner phase-space representation.
In the general case, a quantum state $\hat{\rho}$ possesses an angular-dependent Wigner function $W\left( r,\varphi\right) $ and its purity can be expressed as \begin{equation} \mu=2\pi\iint W\left( r,\varphi\right) ^{2}r\,\mathrm{d}r\,\mathrm{d}\varphi. \end{equation} The trace overlap between $\hat{\rho}$ and $\hat{\rho}_{G}$ is \begin{equation} \mathrm{Tr}\left( \hat{\rho}\hat{\rho}_{G}\right) =2\pi\iint W\left( r,\varphi\right) W_{G}\left( r\right) r\,\mathrm{d}r\,\mathrm{d}\varphi. \label{TrWW} \end{equation} Now let us construct a new state $\hat{\rho}_{s}$ with phase-invariant Wigner function $W_{s}\left( r\right) $ by phase-averaging the Wigner function $W\left( r,\varphi\right) $, \begin{equation} W_{s}\left( r\right) =\frac{1}{2\pi}\int W\left( r,\varphi\right) d\varphi. \label{Ws} \end{equation} The reference Gaussian state for $\hat{\rho}_{s}$ will be the same as for $\hat{\rho}$, since phase-averaging cannot affect the angular-independent Wigner function Eq.~(\ref{WG}). The trace overlap between the symmetrized state $\hat{\rho}_{s}$ and $\hat{\rho}_{G}$ remains the same as well. This is a straightforward result of the substitution of the phase-independent Wigner function given by Eq.~(\ref{Ws}) into Eq.~(\ref{TrWW}). However, the purity $\mu_{s}$ of the symmetrized state $\hat{\rho}_{s}$ is constrained to be smaller than, or equal to, that of $\hat{\rho}$. Indeed, by applying the Cauchy-Schwarz inequality we have \begin{align*} \mu_{s} & =2\pi\iint W_{s}\left( r\right) ^{2}r\,\mathrm{d}r\,\mathrm{d}\varphi\\ & =\iiint W\left( r,\varphi\right) W\left( r,\Phi\right) r\,\mathrm{d}r\,\mathrm{d}\varphi\,\mathrm{d}\Phi\\ & \leq\sqrt{\iiint W\left( r,\varphi\right) ^{2}r\,\mathrm{d}r\,\mathrm{d}\varphi\,\mathrm{d}\Phi}\sqrt{\iiint W\left( r,\Phi\right) ^{2}r\,\mathrm{d}r\,\mathrm{d}\varphi\,\mathrm{d}\Phi}\\ & =2\pi\iint W\left( r,\varphi\right) ^{2}r\,\mathrm{d}r\,\mathrm{d}\varphi=\mu. \end{align*} This concludes the proof of our Lemma.
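The Lemma can also be checked numerically. In the Fock basis a phase-space rotation acts as $e^{\mathrm{i}\theta\hat{n}}$, so phase-averaging the Wigner function amounts to deleting all off-diagonal elements of $\hat{\rho}$. The sketch below (Python; the randomly chosen pure state and the small truncation are illustrative assumptions) verifies that this symmetrization preserves the trace and $\langle\hat{n}\rangle$, and hence the reference Gaussian state, while never increasing the purity:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6
psi = rng.normal(size=d) + 1j * rng.normal(size=d)   # random pure state on a truncated Fock space
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

# Phase averaging (1/2pi) * int W(r, phi) dphi corresponds, in the Fock basis,
# to erasing every off-diagonal matrix element of rho
rho_s = np.diag(np.diag(rho))

mu = float(np.trace(rho @ rho).real)        # purity of the original state (= 1 here)
mu_s = float(np.trace(rho_s @ rho_s).real)  # purity of the symmetrized state
nmean = float(np.sum(np.diag(rho).real * np.arange(d)))
nmean_s = float(np.sum(np.diag(rho_s).real * np.arange(d)))
```

For a generic pure state the inequality is strict, $\mu_{s}<\mu$, since the off-diagonal elements discarded by the averaging carry a positive contribution to $\mathrm{Tr}(\hat{\rho}^{2})$.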
From this Lemma it is straightforward to deduce that the bound corresponding to the states of minimum purity is achieved by states with angular-independent Wigner functions, and we proceed by applying the method of Lagrange multipliers to identify these states. We also note here that this Lemma is well in accordance with the result of Dodonov and Man'ko \cite{Man'ko}, who have shown that for any given $\mu_{G}$ one can always choose a coordinate system where the states with angular-independent Wigner functions minimize the purity $\mu$. Any state possessing an angular-independent Wigner function can be expressed \cite{Werner} as a convex combination of the \textit{number} (Fock) states \begin{equation} \hat{\rho}=\sum_{n=0}^{\infty}\rho_{n}\left\vert n\right\rangle \left\langle n\right\vert . \label{rho} \end{equation} In order to identify the weights $\rho_{n}$ of the states which minimize the purity \begin{equation} \mu=\sum_{n=0}^{\infty}\rho_{n}^{2} \label{purity} \end{equation} we apply the method of Lagrange multipliers with the following constraints imposed on the solution: \begin{enumerate} \item Normalization \begin{equation} \sum_{n=0}^{\infty}\rho_{n}=1. \label{normal} \end{equation} \item Fixed purity for the reference Gaussian state $\hat{\rho}_{G}$ (an un-shifted thermal state as in Eq.~(\ref{WG})) \begin{equation} \sum_{n=0}^{\infty}\rho_{n}\left( 2n+1\right) =1/\mu_{G}. \label{Gau} \end{equation} \item Fixed overlap between $\hat{\rho}$ and $\hat{\rho}_{G}$ \begin{equation} \mathrm{Tr}\left( \hat{\rho}\hat{\rho}_{G}\right) =\sum_{n=0}^{\infty}\rho_{n}\frac{2\mu_{G}\left( 1-\mu_{G}\right) ^{n}}{\left( 1+\mu_{G}\right) ^{n+1}}.
\label{overlap} \end{equation} \medskip \end{enumerate} Applying the Lagrange multipliers method to Eq.~(\ref{purity}) with the constraints Eqs.~(\ref{normal})-(\ref{overlap}), we obtain the weights which realize the extremum solution \begin{equation} \rho_{n}=A_{1}+A_{2}n+A_{3}\frac{\mu_{G}\left( 1-\mu_{G}\right) ^{n}}{\left( 1+\mu_{G}\right) ^{n+1}} \label{rhon} \end{equation} with the coefficients $A_{i}$ determined by the conditions $1$--$3$. In order to restrict ourselves to positive density matrices we replace the index $n$ in Eq.~(\ref{rhon}) by a continuous variable $x$ and introduce instead of $\rho_{n}$ the function \begin{equation} f\left( x\right) =A_{1}+A_{2}x+A_{3}\frac{\mu_{G}\left( 1-\mu_{G}\right) ^{x}}{\left( 1+\mu_{G}\right) ^{x+1}}. \label{f} \end{equation} The same ansatz is employed in Appendix A to derive an alternative expression for the purity-bounded uncertainty relation, Eqs.~(\ref{N1})-(\ref{N2}). One can see that the solution Eq.~(\ref{rhon}) corresponds to a valid density matrix if and only if $f\left( x\right) $ is a convex and positive function in an interval $\left[ x_{1},x_{2}\right] $ of the positive semi-axis. When this condition is satisfied, the integer parts of the roots $x_{1},x_{2}$ of the equation $f\left( x\right) =0$ define respectively the lower and upper limits of summation in Eq.~(\ref{rho}). In the case that the equation has only one positive root we denote this root as $x_{2}$ and put $x_{1}$ equal to zero. With this ansatz, the density operator that extremizes $\mu$ in Eq.~(\ref{purity}) under the constraints Eqs.~(\ref{normal})-(\ref{overlap}) can be re-expressed as \begin{equation} \hat{\rho}=\sum\limits_{n=n_{\min}}^{n_{\max}}\left( A_{1}+A_{2}n+A_{3}\frac{\mu_{G}\left( 1-\mu_{G}\right) ^{n}}{\left( 1+\mu_{G}\right) ^{n+1}}\right) \left\vert n\right\rangle \left\langle n\right\vert \label{MaxMin} \end{equation} with $n_{\min}=\left\lceil x_{1}\right\rceil $, $n_{\max}=\left\lfloor x_{2}\right\rfloor $.
(Here $\left\lceil x\right\rceil $ denotes the smallest integer greater than or equal to $x$, and $\left\lfloor x\right\rfloor $ the integer part of $x$.) The rank of the solution is $\mathrm{rank}\left( \hat{\rho}\right) =n_{\max}-n_{\min}$. This ansatz can alternatively be interpreted as adding two unknowns, $n_{\min}$ and $n_{\max}$, to the initial extremization problem and setting two more constraints, $f(x_{1})=0$ and $f(x_{2})=0$, with $f(x)$ given by Eq.~(\ref{f}); for brevity we do not repeat the procedure. Now we are ready to determine the coefficients $A_{i}$. For every integer value of $n_{\min}\geq0$ we express the unknown Lagrange multipliers $A_{1}$, $A_{2}$ and $A_{3}$ in terms of $\mu_{G}$ and $x_{2}$. More precisely, we employ the conditions Eqs.~(\ref{normal})-(\ref{Gau}) and (\ref{f}), which now read \begin{align} \sum_{n=n_{\min}}^{\left\lfloor x_{2}\right\rfloor }\left( A_{1}+A_{2}n+A_{3}\frac{\mu_{G}\left( 1-\mu_{G}\right) ^{n}}{\left( 1+\mu_{G}\right) ^{n+1}}\right) & =1\label{co1}\\ \sum_{n=n_{\min}}^{\left\lfloor x_{2}\right\rfloor }\left( A_{1}+A_{2}n+A_{3}\frac{\mu_{G}\left( 1-\mu_{G}\right) ^{n}}{\left( 1+\mu_{G}\right) ^{n+1}}\right) \left( 2n+1\right) & =1/\mu_{G}\label{co2}\\ A_{1}+A_{2}x_{2}+A_{3}\frac{\mu_{G}\left( 1-\mu_{G}\right) ^{x_{2}}}{\left( 1+\mu_{G}\right) ^{x_{2}+1}} & =0. \label{co3} \end{align} This system of equations in a more explicit form (after the summation over $n$) is presented in Appendix B. For each non-negative integer $n_{\min}$ one has to solve this linear system of equations. After this, the last step is to substitute the derived expressions for $A_{i}$ into Eqs.~(\ref{purity}) and (\ref{overlap}) and obtain parametric expressions for the extremal $\mu^{ex}$ and $\mathrm{Tr}(\hat{\rho}_{G}\hat{\rho})^{ex}$ via $\mu_{G}$ and $x_{2}$.
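These steps are easy to carry out numerically. The Python sketch below (the values $\mu_{G}=1/2$, $x_{2}=3$, $n_{\min}=0$ are arbitrary example choices) solves the three linear conditions above for $A_{1},A_{2},A_{3}$ and then evaluates the extremal purity and overlap:

```python
import numpy as np

mu_G, x2, n_min = 0.5, 3, 0                 # example point on the n_min = 0 branch
n = np.arange(n_min, int(np.floor(x2)) + 1)
g = mu_G * (1 - mu_G)**n / (1 + mu_G)**(n + 1)
g_x2 = mu_G * (1 - mu_G)**x2 / (1 + mu_G)**(x2 + 1)

# 3x3 linear system for A1, A2, A3: normalization, fixed 1/mu_G, and f(x2) = 0
M = np.array([
    [len(n), n.sum(), g.sum()],
    [(2 * n + 1).sum(), (n * (2 * n + 1)).sum(), ((2 * n + 1) * g).sum()],
    [1.0, x2, g_x2],
], dtype=float)
A1, A2, A3 = np.linalg.solve(M, [1.0, 1.0 / mu_G, 0.0])

rho_n = A1 + A2 * n + A3 * g                # extremal weights of the Fock-diagonal state
mu_ex = float(np.sum(rho_n**2))             # extremal purity
overlap_ex = float(np.sum(rho_n * g))       # extremal overlap Tr(rho rho_G)
```

In this example the resulting weights are non-negative and the last weight vanishes (since $f(x_{2})=0$), so the solution is a valid density matrix; for other parameter values the positivity and convexity of $f$ must be checked separately, as discussed above.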
In order to ensure that $n_{\min}$ is the same in all three equations, one has to require that the constraints $f\left( n_{\min}\right) >0$ and $f\left( n_{\min}-1\right) <0$ are satisfied. The exact solution presented in Appendix B is rather cumbersome. However, one can derive an approximate solution simply by setting $\left\lfloor x_{2}\right\rfloor =x_{2}$. It turns out that for $n_{\min}=0$ one arrives at the following parametric expression\begin{widetext} \begin{align} \mu^{ex} & =\frac{\mu_{G}\left( 8\left( 2x_{2}+1\right) \mu_{G}\left( \mu_{G}+1\right) y^{-x_{2}}\left( 2x_{2}\mu_{G}+\mu_{G}-3\right) +y^{-2x_{2}}\left( -\left( 2x_{2}+1\right) ^{2}\mu_{G}^{4}+4\left( 4x_{2}^{3}+6x_{2}^{2}-1\right) \mu_{G}^{3}+18\left( 2x_{2}^{2}+2x_{2}+1\right) \mu_{G}^{2}+12\left( 2x_{2}+1\right) \mu_{G}-9\right) +\left( \mu_{G}+1\right) ^{2}\left( 2x_{2}\mu_{G}+\mu_{G}-3\right) ^{2}\right) }{\left( y^{-x_{2}}\left( \left( 2x_{2}^{2}+2x_{2}-1\right) \mu_{G}^{2}+\left( 4x_{2}+2\right) \mu_{G}+3\right) +\left( \mu_{G}+1\right) \left( 2x_{2}\mu_{G}+\mu_{G}-3\right) \right) ^{2}}\label{apro1}\\ \mathrm{Tr}(\hat{\rho}\hat{\rho}_{G})^{ex} & =-\frac{\mu_{G}}{2}\,\frac{-4\left( 2x_{2}+1\right) \mu_{G}y^{-x_{2}}+\left( \mu_{G}-1\right) y^{-2x_{2}}\left( 2x_{2}\mu_{G}+\mu_{G}+3\right) -\left( \mu_{G}+1\right) \left( 2x_{2}\mu_{G}+\mu_{G}-3\right) }{y^{-x_{2}}\left( \left( 2x_{2}^{2}+2x_{2}-1\right) \mu_{G}^{2}+\left( 4x_{2}+2\right) \mu_{G}+3\right) +\left( \mu_{G}+1\right) \left( 2x_{2}\mu_{G}+\mu_{G}-3\right) }\label{apro2} \end{align} \end{widetext}where
$y=(1+\mu_{G})/(1-\mu_{G})$, $x_{2}\geq2$. These equations represent the global solution to a very good approximation. In Fig.~\ref{Ubound} we present the derived solution, which we call \textit{region I} of the bound, projected on the planes \textit{(a)} $\left\{ \mu_{G},\mathrm{Tr}\left( \hat{\rho}\hat{\rho}_{G}\right) \right\} $ and \textit{(b)} $\left\{ \mu_{G},\mu\right\} $. In Fig.~\ref{Ubound}(a), the whole bound is confined between the dotted straight line, which corresponds to the Gaussian states, and the solid line, which represents the solution of rank $2$ realized by states that are mixtures of any two successive number states \begin{equation} \hat{\rho}_{r2}=a\left\vert n\right\rangle \left\langle n\right\vert +\left( 1-a\right) \left\vert n+1\right\rangle \left\langle n+1\right\vert . \label{r2} \end{equation} In Fig.~\ref{Ubound}(b), the lowest dashed line represents the purity-bounded \cite{Man'ko} uncertainty relation. One can see that it is the lower limit of the more general bound derived here. The upper limit of the bound derived so far (the upper solid line) is the \emph{rank} $2$ solution Eq.~(\ref{r2}). This curve reaches the line $\mu=1$ of pure states (not shown in the graph) only at the points realized by the number states (indicated on the graph as $n=1,2,\dots$). The area between this line and the $\mu=1$ line remains \textit{not covered} by the derived bound. In the same figure (Fig.~\ref{Ubound}(b)) one can notice that the solid lines form loops which are arranged in horizontal rows. These lines define the borders between different ranks $\mathrm{rank}\left( \hat{\rho}\right) =n_{\max}-n_{\min}$ of the solution in Eq.~(\ref{MaxMin}). Each horizontal row corresponds to a specific rank indicated in the figure. This classification of the solution according to the ranks is comparable with the results presented in \cite{Dodonov}. One can also see that the loops form \textquotedblleft columns\textquotedblright.
Each column corresponds to a different value of $n_{\min}$, which is equal to the number $n$ of the number state indicated at the upper limit of the right border of the column. \begin{figure}[h] {\centering{\includegraphics*[ width=0.35\textwidth]{Fig1.ps}}} \vspace{0.1cm}\caption{The \textit{region I} of the bound projected \textit{(a)} on the $\left\{ \mu_{G},\mathrm{Tr}\left( \rho\rho_{G}\right) \right\} $ plane and \textit{(b)} on the $\left\{ \mu,\mu_{G}\right\} $ plane. The dotted (straight) line stands for the Gaussian states, the bold line for the solution of rank $2$, Eq.~(\ref{r2}), and the dashed line for the purity-bounded uncertainty relation. In part \textit{(b)}, the solid lines split the bound into vertical segments according to different $n_{\min}$ and into horizontal segments according to the rank $n_{\max}-n_{\min}$ of the density matrices of the solution Eq.~(\ref{MaxMin}). In the figure only the ranks $2$--$6$ and $n_{\min}$: $0$--$3$ are depicted. \label{Ubound}} \end{figure} \subsection{Region II: States of maximum purity} In Fig.~\ref{Ubound}(b) we have observed that the states which minimize the purity do not realize the whole desired bound, meaning that we have \textit{not} obtained a bound on $\mathrm{Tr}\left( \hat{\rho}\hat{\rho}_{G}\right) $ for all pairs of values of $\left\{ \mu_{G},\mu\right\} $. This fact is not surprising, since the trial solution, composed of mixtures of number states, can achieve purity $\mu=1$ only for the number states. The rest of the bound, which we call \emph{region II}, must be realized by states which maximize the purity when $\mu_{G}$ and $\mathrm{Tr}\left( \hat{\rho}\hat{\rho}_{G}\right) $ are given. In order to obtain these states we first apply the Lagrange multipliers method to a \textit{generic} density matrix $\hat{\rho}$. Thus we obtain the extremum, i.e. the minimum or maximum value of the purity for a state $\hat{\rho}$ with given $\mu_{G}$ and $\mathrm{Tr}\left( \hat{\rho}\hat{\rho}_{G}\right) $.
In addition we require, without loss of generality, that the covariance matrix of $\hat{\rho}$ is symmetric in $x$ and $p$. We express these constraints on the solution in the following way: \begin{enumerate} \item The state is normalized, $\mathrm{Tr}\left( \hat{\rho}\right) =1$. \item The reference Gaussian state is a thermal non-displaced one, \begin{equation} \mathrm{Tr}\left( \hat{\rho}\hat{x}\right) =\mathrm{Tr}\left( \hat{\rho}\hat{p}\right) =\mathrm{Tr}\left( \hat{\rho}\hat{x}\hat{p}\right) =0 \label{cond} \end{equation} and its purity is fixed, \begin{equation} \mathrm{Tr}\left( \hat{\rho}\left( 2\hat{n}+1\right) \right) =\frac{1}{\mu_{G}}. \label{condII} \end{equation} \item Fixed overlap with $\hat{\rho}_{G}$, \[ \mathrm{Tr}\left( \hat{\rho}\hat{\rho}_{G}\right) =\mathrm{Tr}\left( \hat{\rho}\frac{e^{\beta\hat{n}}}{N}\right) \] where $\mathrm{e}^{\beta}=\frac{1-\mu_{G}}{1+\mu_{G}}$ and $N=\frac{1+\mu_{G}}{2\mu_{G}}$ is the normalization factor. \end{enumerate} The solution to this extremization problem is of the form \begin{equation} \hat{\rho}_{ex}=\alpha_{0}+\alpha_{1}\hat{x}+\alpha_{2}\hat{p}+\alpha_{3}\hat{x}\hat{p}+\alpha_{4}\hat{n}+\alpha_{5}e^{\beta\hat{n}}. \label{rhoex} \end{equation} However, the conditions given by Eq.~(\ref{cond}) dictate that $\alpha_{1}=\alpha_{2}=\alpha_{3}=0$ and therefore that the solution must be diagonal in the eigenbasis of the harmonic oscillator. In other words, the extremization problem for a generic density matrix is reduced to the problem we have solved in Sec.~\ref{SecII}A, and it provides the \textit{region I} of the bound. Therefore, the method of Lagrange multipliers does not add anything new, and in order to find the states in \textit{region II} maximizing the purity we have to consider the possibility of the existence of a degenerate (invariant) subspace of solutions.
\subsubsection{Pure states} Let us now proceed in a different way and recast the extremization problem so as to reveal the degeneracy. Instead of working with the mixing coefficients of the density matrix, we consider them fixed and vary the state vectors into which a mixed state can be decomposed. To clarify the procedure we start with the special case of pure states, where $\hat{\rho}=\left\vert \psi\right\rangle \left\langle \psi\right\vert $ and $\left\vert \psi\right\rangle =\sum\nolimits_{n}\psi_{n}\left\vert n\right\rangle $ in an eigenbasis which for convenience we choose to be the one of the quantum harmonic oscillator. If we restrict ourselves to states which possess a non-displaced reference Gaussian state, then the non-vanishing amplitudes $\psi_{n}$ in the superposition should be separated by at least two vanishing ones, i.e. $\left\vert \psi\right\rangle =\sum\nolimits_{m}\psi_{m}\left\vert i_{m}\right\rangle $ where $i_{m+1}-i_{m}\geq3$. Since the purity is fixed to $1$, we have to extremize another quantity, which we choose to be the overlap \begin{align} \mathrm{Tr}\left( \hat{\rho}\hat{\rho}_{G}\right) & =\left\langle \psi\right\vert e^{\beta\hat{n}}\left\vert \psi\right\rangle /N\nonumber\\ & =\frac{2\mu_{G}}{1+\mu_{G}}\sum\limits_{n}\left\vert \psi_{n}\right\vert ^{2}\left( \frac{1-\mu_{G}}{1+\mu_{G}}\right) ^{n} \label{overlGen} \end{align} assuming in addition that the pure states are normalized, $\left\langle \psi\right\vert \left. \psi\right\rangle =1$, and of fixed covariance matrix \begin{align} 1/\mu_{G} & =\left\langle \psi\right\vert \left( 2\hat{n}+1\right) \left\vert \psi\right\rangle \nonumber\\ & =\sum\limits_{n}\left\vert \psi_{n}\right\vert ^{2}(2n+1).
\label{purityGen} \end{align} The functional to vary is now the following \begin{align} f\left( \left\{ \psi_{i}\right\} \right) & =\left\langle \psi\right\vert \frac{e^{\beta\hat{n}}}{N}\left\vert \psi\right\rangle \nonumber\\ & +a_{1}\left\langle \psi\right\vert \left. \psi\right\rangle +a_{2}\left\langle \psi\right\vert \left( 2\hat{n}+1\right) \left\vert \psi\right\rangle . \end{align} After differentiating over the amplitudes $\psi_{n}$, which without loss of generality we can consider to be real numbers, we derive the following condition on the solution $\left\vert \psi\right\rangle $ of the extremization problem \begin{equation} \left( e^{\beta\hat{n}}+c_{1}+c_{2}\hat{n}\right) \left\vert \psi\right\rangle =0. \end{equation} In this form one can see that, apart from the obvious solution where $\left\vert \psi\right\rangle =\left\vert n\right\rangle $, there is one more possibility that was not revealed when we applied the extremization procedure to the density matrices, i.e., a superposition of two number states \begin{equation} \left\vert \psi\right\rangle =\psi_{n}\left\vert n\right\rangle +\psi_{n+i}\left\vert n+i\right\rangle \label{pu} \end{equation} with $i\geqslant3$ (so that the covariance matrix is symmetric) and $\left\vert \psi_{n}\right\vert ^{2}+\left\vert \psi_{n+i}\right\vert ^{2}=1$. It remains to identify the integer numbers $n$ and $i$ in Eq.~(\ref{pu}) which achieve the lowest overlap, Eq.~(\ref{overlGen}), for a given value of $\mu_{G}$, Eq.~(\ref{purityGen}). According to our search, the states which achieve minimum overlap almost everywhere are of the form $\psi_{n}\left\vert n\right\rangle +\psi_{n+3}\left\vert n+3\right\rangle $.
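The structural claims above are easy to verify numerically for the simplest case. In the Python sketch below (the superposition $(\left\vert 0\right\rangle+\left\vert 3\right\rangle)/\sqrt{2}$ is an arbitrary example with spacing $i=3$), the first moments vanish, the covariance matrix is symmetric, and the overlap with the reference thermal state agrees with the closed-form sum given above:

```python
import numpy as np

N = 12
a_op = np.diag(np.sqrt(np.arange(1.0, N)), 1)          # annihilation operator, truncated Fock space
x_op = (a_op + a_op.T) / np.sqrt(2)
p_op = (a_op - a_op.T) / (1j * np.sqrt(2))

psi = np.zeros(N); psi[0] = psi[3] = 1 / np.sqrt(2)    # (|0> + |3>)/sqrt(2): spacing i = 3
rho = np.outer(psi, psi)

mean = lambda op: float(np.trace(rho @ op).real)
g11 = 2 * mean(x_op @ x_op)                            # = 4: symmetric, non-displaced covariance
g22 = 2 * mean(p_op @ p_op)
g12 = mean(x_op @ p_op + p_op @ x_op)
mu_G = 1 / np.sqrt(g11 * g22 - g12**2)                 # = 1/4

# overlap with the reference thermal state, computed two ways
t = (1 - mu_G) / (1 + mu_G)
overlap_sum = (2 * mu_G / (1 + mu_G)) * sum(abs(psi[k])**2 * t**k for k in range(N))
nbar = (1 / mu_G - 1) / 2
p_th = nbar**np.arange(N) / (1 + nbar)**(np.arange(N) + 1)
overlap_th = float(np.sum(np.diag(rho) * p_th))        # rho_G is diagonal in the Fock basis
```

The spacing $i\geq3$ is what guarantees that all first moments and all second cross moments vanish, since $\hat{x}$, $\hat{p}$ connect Fock states differing by one quantum and $\hat{x}^{2}$, $\hat{p}^{2}$, $\hat{x}\hat{p}$ by at most two.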
More specifically, one finds that in the segment \begin{equation} \frac{1}{2n+3}\leq\mu_{G}\leq\frac{1}{2n+1}, \end{equation} bridging the number states $\left\vert n\right\rangle $ and $\left\vert n+1\right\rangle $ (see Fig.~\ref{Ubound}b), the states which achieve the minimum are \begin{align} \left\vert \psi\right\rangle _{a} & =\psi_{n}\left\vert n\right\rangle +\psi_{n+3}\left\vert n+3\right\rangle \label{a}\\ \mathrm{for}\quad r_{n} & \leq\mu_{G}\leq\frac{1}{2n+1}\nonumber \end{align} and \begin{align} \left\vert \psi\right\rangle _{b} & =\psi_{n-2}\left\vert n-2\right\rangle +\psi_{n+1}\left\vert n+1\right\rangle \label{b}\\ \mathrm{for}\quad\frac{1}{2n+3} & \leq\mu_{G}\leq r_{n}\nonumber \end{align} where the parameter $r_{n}$ is the real root of the equation \[ x^{4}-2(1+n)x^{3}-4x^{2}-6(1+n)x+3=0 \] in the interval $[\frac{1}{2n+3},\frac{1}{2n+1}]$. We note here that such a root always exists, though we do not present its form explicitly. The overlaps that correspond to the states $\left\vert \psi\right\rangle _{a}$ and $\left\vert \psi\right\rangle _{b}$ can be easily evaluated with the help of Eqs.~(\ref{overlGen})-(\ref{purityGen}), \begin{align} \mathrm{Tr}\left( \hat{\rho}\hat{\rho}_{G}\right) _{a} & =\frac{1}{3\left( 1+\mu_{G}\right) }\left( \left( 7\mu_{G}+2n\mu_{G}-1\right) \left( \frac{1-\mu_{G}}{1+\mu_{G}}\right) ^{n}\right. \nonumber\\ & +\left. \left( 1-\mu_{G}-2n\mu_{G}\right) \left( \frac{1-\mu_{G}}{1+\mu_{G}}\right) ^{n+3}\right) , \label{minaa} \end{align} and \begin{align} \mathrm{Tr}\left( \hat{\rho}\hat{\rho}_{G}\right) _{b} & =\frac{1}{3\left( 1+\mu_{G}\right) }\left( \left( 1+3\mu_{G}-2n\mu_{G}\right) \left( \frac{1-\mu_{G}}{1+\mu_{G}}\right) ^{n+1}\right. \nonumber\\ & \left. +\left( -1+3\mu_{G}+2n\mu_{G}\right) \left( \frac{1-\mu_{G}}{1+\mu_{G}}\right) ^{n-2}\right) .
\label{minb}
\end{align}
However, the state $\left\vert \psi\right\rangle _{b}$, Eq.(\ref{b}), is not
defined when $n=0$ or $1$. In this case the minimum $\mathrm{Tr}\left(
\hat{\rho}\hat{\rho}_{G}\right) $ is rather obtained by the states
\begin{equation}
\left\vert \psi\right\rangle _{\beta}=\psi_{n}\left\vert n\right\rangle
+\psi_{n+1}\left\vert n+1\right\rangle \label{beta}
\end{equation}
with $n=0$, $1$ respectively. The states in Eq.(\ref{beta}) do not possess an
un-shifted thermal state as reference Gaussian state and, apart from numerical
evidence, we do not have a formal proof of their extremal properties. Using
the Wigner function for the states of Eq.(\ref{beta}), expressed in terms of
Laguerre polynomials as
\begin{align}
W_{n}\left( x,p\right)   & =\frac{\left( -1\right) ^{n}}{\pi}
\mathrm{e}^{-x^{2}-p^{2}}\nonumber\\
\times & \left( \left\vert \psi_{n}\right\vert ^{2}L_{n}\left(
2x^{2}+2p^{2}\right) \right. \nonumber\\
& -\left\vert \psi_{n+1}\right\vert ^{2}L_{n+1}\left( 2x^{2}+2p^{2}\right)
\nonumber\\
& \left. +\frac{2\sqrt{2}}{\sqrt{n+1}}\operatorname{Re}\left( \psi_{n}
\psi_{n+1}^{\ast}\left( x-\mathrm{i}p\right) \right) L_{n}^{1}\left(
2x^{2}+2p^{2}\right) \right) , \label{Wibeta}
\end{align}
we have calculated the corresponding overlaps
\begin{align}
\mathrm{Tr}\left( \hat{\rho}\hat{\rho}_{G}\right) _{\beta,0}  &
=-e^{\frac{(\alpha-1)\alpha}{2\alpha^{2}-3\alpha+2}}\nonumber\\
& \times\frac{-2\alpha^{5}+8\alpha^{4}-12\alpha^{3}+5\alpha^{2}+4\alpha
-4}{(2-\alpha)^{3/2}\left( 2\alpha^{2}-3\alpha+2\right) ^{5/2}},
\label{betaO0}
\end{align}
\begin{align}
\mathrm{Tr}\left( \hat{\rho}\hat{\rho}_{G}\right) _{\beta,1}  &
=-\frac{2e^{\frac{2(\alpha-1)\alpha}{4\alpha^{2}-5\alpha+3}}}{(3-\alpha
)^{5/2}\left( 4\alpha^{2}-5\alpha+3\right) ^{9/2}}\nonumber\\
& \times(64\alpha^{10}-560\alpha^{9}+2156\alpha^{8}-4668\alpha^{7}\nonumber\\
& +6004\alpha^{6}-4211\alpha^{5}+494\alpha^{4}+1938\alpha^{3}\nonumber\\
& -1908\alpha^{2}+837\alpha-162).
\label{betaO1}
\end{align}
The parameter $\alpha=\left\vert \psi_{n}\right\vert ^{2}=1-\left\vert
\psi_{n+1}\right\vert ^{2}$ is defined by the corresponding equations for
$\mu_{G}$,
\begin{align}
\mu_{G_{\beta,0}}  & =-1/\sqrt{2\alpha-3}\sqrt{4(1-\alpha)\alpha+2\alpha
-3},\label{betaP0}\\
\mu_{G_{\beta,1}}  & =-1/\sqrt{2\alpha-5}\sqrt{8(1-\alpha)\alpha+2\alpha-5}.
\label{betaP1}
\end{align}
We unify the results, Eqs.(\ref{minaa})-(\ref{minb}) and (\ref{betaO0}
)-(\ref{betaP1}), and present them graphically in Fig.~\ref{Pure}. This graph
simply represents the minimum value of the uncertainty $\sqrt{\gamma
_{11}\gamma_{22}-\left\vert \gamma_{12}\right\vert ^{2}}=1/\mu_{G}$ of a pure
state given the value of its overlap $\mathrm{Tr}\left( \hat{\rho}\hat{\rho
}_{G}\right) $ (or the non-Gaussianity $\delta$). The line in the inset
formally stands for the function $F\left( \mu=1,\delta\right) $ in
\[
\sqrt{\gamma_{11}\gamma_{22}-\left\vert \gamma_{12}\right\vert ^{2}}\geqslant
F\left( 1,\delta\right) \label{SRP}
\]
(see Eq.(\ref{rel})), which we have not derived explicitly since this
requires an inversion of Eqs.(\ref{minaa})-(\ref{minb}). \textit{Interestingly,
this extended version of the Schr\"{o}dinger-Robertson uncertainty relation is
saturated not only by the ground state of the harmonic oscillator but by all
number states.}
\begin{figure}[h]
{\centering{\includegraphics*[ width=0.4\textwidth]{FigPure.ps}}}
\vspace{0.1cm}\caption{An extended version of the Schr\"{o}dinger-Robertson
relation for \textit{pure} states, Eq.(\ref{SRP}), where the minimum value of
the uncertainty $\sqrt{\gamma_{11}\gamma_{22}-\left\vert \gamma_{12}
\right\vert ^{2}}$ depends on the quantity $\mathrm{Tr}(\hat{\rho}_{G}
\hat{\rho})$ (or $\delta$).
The set of minimizing states comprises the states $\left\vert \psi
\right\rangle _{a}$, Eq.(\ref{minaa}) (\textit{solid blue line}), $\left\vert
\psi\right\rangle _{b}$, Eq.(\ref{minb}) (\textit{solid gray line}), and
$\left\vert \psi\right\rangle _{\beta}$, Eqs.(\ref{betaO0})-(\ref{betaP1})
(\textit{dotted line}). The number states are included in the set of
minimizing states and are marked on the figure as dots. \label{Pure}}
\end{figure}

\subsubsection{Mixed states}

Moving away from the plane of pure states (density matrices of rank $1$), it
is logical to proceed with the search for extremal states starting from
density matrices of low rank. The first case to be considered is that of
\emph{rank} $2$. According to the spectral theorem, a unique pair of mutually
orthogonal pure states $\left\vert \psi_{1}\right\rangle ,$ $\left\vert \psi
_{2}\right\rangle $ always exists such that $\hat{\rho}=p_{1}\left\vert
\psi_{1}\right\rangle \left\langle \psi_{1}\right\vert +p_{2}\left\vert
\psi_{2}\right\rangle \left\langle \psi_{2}\right\vert $. We now consider the
probability mixing coefficients $p_{1}$ and $p_{2}$ to be fixed and look for
the states $\left\vert \psi_{i}\right\rangle $ for which the overlap becomes
minimum. As before, we assume that $\hat{\rho}$ possesses a reference Gaussian
state with no angular dependence on the phase space, and we impose the
constraints of normalization, $\left\langle \psi_{i}\right. \left\vert \psi
_{i}\right\rangle =1$, and of a fixed covariance matrix, Eq.(\ref{condII}).
As we did in the special case of pure states, we decompose the states
$\left\vert \psi_{i}\right\rangle $ in the eigenbasis of the harmonic
oscillator, $\left\vert \psi_{i}\right\rangle =\sum\limits_{n}\psi
_{i,n}\left\vert n\right\rangle $, and apply the method of Lagrange
multipliers by differentiating over the amplitudes $\psi_{i,n}$.
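The spectral decomposition invoked here is easy to exercise numerically. The following sketch (illustrative only; the $6$-dimensional truncation and the mixing probabilities are arbitrary choices) recovers $p_{1}$, $p_{2}$ and the purity $\mu=p_{1}^{2}+p_{2}^{2}$ from a randomly generated rank-$2$ density matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random rank-2 density matrix rho = p1 |psi1><psi1| + p2 |psi2><psi2|
# on a 6-dimensional truncated Fock space.
dim, p1, p2 = 6, 0.7, 0.3
q, _ = np.linalg.qr(rng.normal(size=(dim, 2)))  # two orthonormal columns
psi1, psi2 = q[:, 0], q[:, 1]
rho = p1 * np.outer(psi1, psi1) + p2 * np.outer(psi2, psi2)

# The spectral theorem recovers the mixing probabilities as the nonzero
# eigenvalues, with mutually orthogonal eigenvectors:
w = np.linalg.eigvalsh(rho)
w_nonzero = np.sort(w[w > 1e-12])[::-1]
assert np.allclose(w_nonzero, [p1, p2])
assert np.isclose(np.trace(rho), 1.0)
assert np.isclose(np.trace(rho @ rho), p1**2 + p2**2)  # purity mu
```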
The conditions on the vectors $\left\vert \psi_{i}\right\rangle $ that we
obtain are the following
\begin{align}
\left( e^{\beta\hat{n}}+c_{1}+c_{2}\hat{n}\right) \left\vert \psi
_{1}\right\rangle  & =0,\label{mix1}\\
\left( e^{\beta\hat{n}}+c_{3}+c_{2}\hat{n}\right) \left\vert \psi
_{2}\right\rangle  & =0. \label{mix2}
\end{align}
From Eqs.(\ref{mix1})-(\ref{mix2}) we conclude that, in addition to simple
mixtures of number states ($\left\vert \psi_{1}\right\rangle =\left\vert
i\right\rangle $ and $\left\vert \psi_{2}\right\rangle =\left\vert
j\right\rangle $) which we have discussed already in Sec.~\ref{SecII}A, there
is an additional solution of the form
\begin{align}
\left\vert \psi_{1}\right\rangle  & =\psi_{1,i}\left\vert i\right\rangle
+\psi_{1,j}\left\vert j\right\rangle \label{mixed1}\\
\left\vert \psi_{2}\right\rangle  & =\left\vert k\right\rangle \label{mixed2}
\end{align}
where $j\geqslant i+3$ and $k\neq i,j$. By simple inspection we have arrived
at the conclusion that, most probably, among the \emph{rank} $2$ states of
this form the ones which achieve the minimum overlap (with some exceptions
that we discuss below) are the states of Eqs.(\ref{mixed1})-(\ref{mixed2})
with $j=i+3$ and $k=i+1$ or $i+2$. More precisely, the states of minimum
overlap are
\begin{align}
\hat{\rho}_{i}  & =p\left( \left( \psi_{n}\left\vert n\right\rangle
+\psi_{n+3}\left\vert n+3\right\rangle \right) \left( \psi_{n}^{\ast
}\left\langle n\right\vert +\psi_{n+3}^{\ast}\left\langle n+3\right\vert
\right) \right) \nonumber\\
& +(1-p)\left\vert n+i\right\rangle \left\langle n+i\right\vert
\label{mixed12}
\end{align}
where $i=1$ or $2$ and $\left\vert \psi_{n}\right\vert ^{2}+\left\vert
\psi_{n+3}\right\vert ^{2}=1$.
When $p=0$ in Eq.(\ref{mixed12}), the solution Eq.(\ref{a}) for pure states
is recovered, while for $\psi_{n}=0$ and $i=2$ (or $\psi_{n+3}=0$ and $i=1$)
one arrives at the rank $2$ states, Eq.(\ref{r2}), of \textit{Region I}. As
for pure states, for mixed states the \textit{Region II} of the bound is not
covered totally by the states suggested by the optimization problem; there is
a part where the (shifted) states
\begin{align}
\hat{\rho}_{3}  & =a\left\vert n\right\rangle \left\langle n\right\vert
+\left( 1-a\right) \left\vert n+1\right\rangle \left\langle n+1\right\vert
\nonumber\\
+  & b\left\vert n\right\rangle \left\langle n+1\right\vert +b^{\ast
}\left\vert n+1\right\rangle \left\langle n\right\vert , \label{assy}
\end{align}
seem to intervene, where $\left\vert b\right\vert \in\left[ 0,\sqrt{a\left(
1-a\right) }\right] $ and $n=0,1$. In the limiting case where $\left\vert
b\right\vert =\sqrt{a\left( 1-a\right) }$, the corresponding states
$\left\vert \psi\right\rangle _{\beta}$, Eq.~(\ref{beta}), are recovered from
Eq.(\ref{assy}). For the states $\hat{\rho}_{1}$, $\hat{\rho}_{2}$ the
quantities of interest, $\mathrm{Tr}(\hat{\rho}_{G}\hat{\rho})$ and $\mu_{G}$,
are given by Eqs.(\ref{overlGen})-(\ref{purityGen}).
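Restricted to the $\{\left\vert n\right\rangle ,\left\vert n+1\right\rangle\}$ subspace, $\hat{\rho}_{3}$ is a $2\times2$ matrix, so its purity and its pure limit are easy to verify numerically. A short check (not from the paper; the values of $a$ and $b$ are arbitrary):

```python
import numpy as np

def rho3(a, b):
    """The 2x2 block of rho_3 in the {|n>, |n+1>} subspace."""
    return np.array([[a, b], [np.conj(b), 1 - a]])

a, b = 0.6, 0.3
r = rho3(a, b)
mu = np.trace(r @ r).real
# Purity of rho_3: mu = a^2 + (1-a)^2 + 2|b|^2.
assert np.isclose(mu, a**2 + (1 - a)**2 + 2 * abs(b)**2)

# In the limiting case |b| = sqrt(a(1-a)) the state becomes pure and
# coincides with |psi>_beta = sqrt(a)|n> + sqrt(1-a)|n+1>:
b_max = np.sqrt(a * (1 - a))
r_pure = rho3(a, b_max)
psi = np.array([np.sqrt(a), np.sqrt(1 - a)])
assert np.isclose(np.trace(r_pure @ r_pure).real, 1.0)
assert np.allclose(r_pure, np.outer(psi, psi))
```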
For the states $\hat{\rho}_{3}$ the calculations are more involved, and for
this reason we just present the results below,
\begin{align}
\mathrm{Tr}(\hat{\rho}_{G}\hat{\rho})_{3,0}  & =e^{\frac{\left\vert
b\right\vert ^{2}}{a+2\left\vert b\right\vert ^{2}-2}}\nonumber\\
& \times\frac{2(a-1)\left\vert b\right\vert ^{4}+2(a-2)a\left\vert
b\right\vert ^{2}+(a-2)^{2}}{(2-a)^{3/2}\left( -a-2\left\vert b\right\vert
^{2}+2\right) ^{5/2}},
\end{align}
\begin{align}
\mathrm{Tr}(\hat{\rho}_{G}\hat{\rho})_{3,1}  & =-\frac{e^{\frac{2\left\vert
b\right\vert ^{2}}{a+4\left\vert b\right\vert ^{2}-3}}}{(3-a)^{5/2}\left(
-a-4\left\vert b\right\vert ^{2}+3\right) ^{9/2}}\nonumber\\
\times & 2\left( 16\left( 4a^{2}-15a+15\right) \left\vert b\right\vert
^{8}\right. \nonumber\\
& +2(a-3)^{2}\left( 10a^{2}-9a-25\right) \left\vert b\right\vert
^{4}\nonumber\\
& +2(a-3)^{3}\left( a^{2}+3a-10\right) \left\vert b\right\vert ^{2}\nonumber\\
& +(a-3)^{4}(a-2)\nonumber\\
& \left. +8\left( 8a^{3}-45a^{2}+70a-21\right) \left\vert b\right\vert
^{6}\right) ,
\end{align}
where the parameters $a$ and $b$ are the same as in Eq.(\ref{assy}). The
latter connect the overlap, indirectly and in a parametric way, with the
other quantities of interest,
\begin{align}
\mu & =a^{2}+(1-a)^{2}+2\left\vert b\right\vert ^{2},\\
\mu_{G_{3,n}}  & =1/\sqrt{(2a-2n-3)\left( 2a+4\left\vert b\right\vert
^{2}(n+1)-2n-3\right) },
\end{align}
with $n=0$ or $1$.
\begin{figure}[h]
{\centering{\includegraphics*[ width=0.35\textwidth]{FigMixed.eps}}}
\vspace{0.1cm}\caption{The \textit{Region II} of the bound projected on the
plane $\{\mu,\mu_{G}\}$. We attribute three different colors to the bound:
\textit{blue} for the part realized by the states $\hat{\rho}_{1}$,
Eq.(\ref{mixed12}), \textit{green} for the states $\hat{\rho}_{2}$,
Eq.(\ref{mixed12}), and \textit{red} for the states $\hat{\rho}_{3}$,
Eq.(\ref{assy}). \label{Mixed}}
\end{figure}
If the analogous extremization procedure is applied to mixed states of higher
rank, the number of conditions on the solution increases linearly with the
rank. For instance, for the rank $3$ states, which can always be expressed as
$\hat{\rho}=p_{1}\left\vert \psi_{1}\right\rangle \left\langle \psi
_{1}\right\vert +p_{2}\left\vert \psi_{2}\right\rangle \left\langle \psi
_{2}\right\vert +p_{3}\left\vert \psi_{3}\right\rangle \left\langle \psi
_{3}\right\vert $ with $\left\langle \psi_{i}\right. \left\vert \psi
_{j}\right\rangle =\delta_{ij}$ and $\sum p_{i}=1$, one obtains, in addition
to the conditions Eqs.(\ref{mix1})-(\ref{mix2}), the condition
\[
\left( e^{\beta\hat{n}}+c_{4}+c_{2}\hat{n}\right) \left\vert \psi
_{3}\right\rangle =0.
\]
This suggests an extreme solution of the form, e.g., $\hat{\rho}_{i}
+\left\vert m\right\rangle \left\langle m\right\vert $, where $\hat{\rho}
_{i}$ is defined in Eq.(\ref{mixed12}) and $\left\vert m\right\rangle $ is a
Fock state with $m\neq n,$ $n+3,$ $n+i$. After investigating different
possibilities, we saw that such states do not achieve an overlap lower than
the \emph{rank} $2$ ones. This fact led us to the conclusion that the
identified \emph{rank} $2$ density matrices, Eqs.(\ref{mixed12})-(\ref{assy}),
are the extreme solutions which realize the \textit{Region II} of the bound.
In Fig.~\ref{Mixed}, in analogy with Fig.~\ref{Ubound}(b), we present how the
\textit{Region II} of the bound, projected on the plane $\{\mu,\mu_{G}\}$, is
separated into three different parts, realized by the states $\hat{\rho}
_{1}$, $\hat{\rho}_{2}$, Eq.(\ref{mixed12}), and $\hat{\rho}_{3}$,
Eq.(\ref{assy}), respectively.

\subsection{The total bound}

\begin{figure}[h]
{\centering{\includegraphics*[ width=0.4\textwidth]{Fig2.ps}}} \vspace
{0.1cm}\caption{The non-Gaussianity uncertainty relation as a bound in the
$\left\{ \mu_{G},\mu,\delta\right\} $ parametric space.
The part of the bound that is to the right of the curly black line represents
the \textit{Region II} of the bound and the rest the \textit{Region I}. In
the \textit{inset} the intersection of the bound with the plane $\mu=1$ is
presented. \label{TBound}}
\end{figure}
The states which minimize the purity (\textit{Region I}) together with the
states which maximize the purity (\textit{Region II}) form a surface which
bounds from below all the mixed states in the parametric space $\left\{
\mu,\mu_{G},\mathrm{Tr}\left( \hat{\rho}\hat{\rho}_{G}\right) \right\} $, or
equivalently bounds them from above in the space $\left\{ \mu,\mu_{G}
,\delta\right\} $. In Fig.~\ref{TBound} we present the total bound, which we
name \textit{the non-Gaussianity bounded uncertainty relation}. Clearly the
minimal uncertainty, $F(\mu,\delta)$, depends on the degree of
non-Gaussianity of a state. Although the dependence is \textquotedblleft
smooth\textquotedblright\ in the low-purity region (up to purity $1/2$), the
structure of the bound becomes more complex for higher values of purity, and
for some very small regions the bound projected on the plane $\left\{
\mu,\delta\right\} $ is double valued. For these areas the function $F\left(
\mu,\delta\right) $ in Eq.(\ref{rel}) clearly cannot be defined. This problem
does not appear when one uses the quantity $\mathrm{Tr}\left( \hat{\rho}
\hat{\rho}_{G}\right) $ instead.

\section{Hudson's theorem and the uncertainty relation for mixed
states\label{SecIII}}

According to the arguments in \cite{PRA}, Hudson's theorem for mixed states
can be formulated as an upper bound on the non-Gaussianity of the states with
a positive Wigner function. The idea behind this formulation is that, since
among the pure states only the Gaussian states can possess a positive Wigner
function, for mixed states one expects that the maximum degree of
non-Gaussianity of a state with a positive Wigner function is lower than that
of a generic state.
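This can be probed concretely for mixtures of number states, whose Wigner function is the series $W(x,p)=\pi^{-1}e^{-x^{2}-p^{2}}\sum_{n}p_{n}(-1)^{n}L_{n}(2x^{2}+2p^{2})$. The following numpy-only sketch (an illustration with an arbitrary phase-space grid; the thermal distribution serves as the Gaussian, positive-Wigner example) contrasts a number state with a mixture whose Wigner function stays positive:

```python
import numpy as np

def laguerre(n, x):
    """Laguerre polynomial L_n(x) via the standard three-term recurrence:
    (k+1) L_{k+1} = (2k+1-x) L_k - k L_{k-1}."""
    l_prev, l_cur = np.ones_like(x), 1.0 - x
    if n == 0:
        return l_prev
    for k in range(1, n):
        l_prev, l_cur = l_cur, ((2 * k + 1 - x) * l_cur - k * l_prev) / (k + 1)
    return l_cur

def wigner_fock_mixture(probs, x, p):
    """Wigner function of rho = sum_n probs[n] |n><n|, using
    W_n(x,p) = ((-1)^n / pi) exp(-x^2 - p^2) L_n(2x^2 + 2p^2)."""
    s = 2.0 * (x**2 + p**2)
    w = sum(pn * (-1.0)**n * laguerre(n, s) for n, pn in enumerate(probs))
    return w * np.exp(-(x**2 + p**2)) / np.pi

xs = np.linspace(-3.0, 3.0, 121)
X, P = np.meshgrid(xs, xs)

# |1><1| is non-Gaussian and its Wigner function is negative at the origin:
w_fock1 = wigner_fock_mixture([0.0, 1.0], X, P)
assert w_fock1[60, 60] < 0.0

# A thermal mixture p_n = (1-q) q^n is Gaussian and stays positive:
q = 0.5
probs_th = (1.0 - q) * q ** np.arange(40)
w_th = wigner_fock_mixture(probs_th, X, P)
assert np.all(w_th > 0.0)
```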
In practice, this bound can be utilized as a set of experimentally verifiable
necessary conditions for a state to have a positive Wigner function or,
alternatively, as a set of sufficient conditions for a state to possess a
non-positive Wigner function. Such conditions can be essential in the
construction of \textquotedblleft non-classical\textquotedblright\ states,
which are useful resources in protocols of quantum information
\cite{Grangier}. Even though it is not trivial to identify the whole bound in
the space $\left\{ \mu,\mu_{G},\delta\right\} $ for states with non-negative
Wigner functions, we can conclude, using the same arguments as in
Sec.~\ref{SecII}A, that the purity of the states with non-negative Wigner
function for given $\mu_{G}$ and $\mathrm{Tr}\left( \hat{\rho}\hat{\rho}
_{G}\right) $ is minimized by mixtures of number states. To our knowledge
there is no necessary and sufficient condition that one can impose on the
coefficients of a mixture of number states to guarantee positivity of the
Wigner function. Due to this difficulty we cannot proceed with a further
analytical identification of the states which realize Hudson's theorem for
mixed states. On the other hand, from the results in Sec.~\ref{SecII} we can
numerically obtain some information about Hudson's theorem for mixed states.
To do so, in the bound of Fig.~\ref{TBound} for all states we identify
numerically the part realized by states with a positive Wigner function. We
depict this common area, where the bounds for all states and for states with
strictly positive Wigner function overlap, as a shaded region in
Fig.~\ref{Fig3}. In \cite{Sudbury} we have gone one step further and
identified numerically the whole \textit{Region I} of the bound for the
states with positive Wigner function.
We have concluded that the two bounds stay very close when the purity of the
state is low (for a fixed uncertainty), and we have obtained an indication
that the distance between the two bounds increases with purity. In addition,
in \cite{Sudbury} the lowest degree of $\mathrm{Tr}\left( \hat{\rho}\hat
{\rho}_{G}\right) $ that a state with strictly positive Wigner function may
achieve for a given uncertainty has been identified.
\begin{figure}[h]
{\centering{\includegraphics*[ width=0.3\textwidth]{Fig3.ps}}} \vspace
{0.1cm}\caption{The part of the bound (red) realized by states with positive
Wigner function, projected on the planes \textit{(a)} $\left\{ \mu_{G}
,\delta\right\} $ and \textit{(b)} $\left\{ \mu_{G},\mu\right\} $. In figure
\textit{(a)} it is plotted together with the upper non-Gaussianity bound for
all mixed states (solid line), achieved by pure states, and the boundary line
(dashed line) between \textit{Region I} and \textit{Region II}, realized by
the rank $2$ states of Eq.(\ref{r2}). \label{Fig3}}
\end{figure}

\section{Conclusions\label{SecIV}}

Inspired by the discussion in \cite{Dodonov} and by some recent results on
the extension of Hudson's theorem to mixed states \cite{PRA}, we have
extended the purity-bounded uncertainty relation by adding one more
parameter, namely, the degree of non-Gaussianity. According to the results
presented here, the minimal uncertainty for quantum states strongly depends
not only on the degree of mixedness of a state but also on its non-Gaussian
character. The set of states which saturate this uncertainty relation
includes the Gaussian states. From this we can conclude that, even though
Gaussianity is not a requirement for the minimization of this uncertainty
relation, Gaussian states are still included in the set of minimizing states,
and they dominate as purity tends to one.
Interestingly, the non-Gaussianity bounded uncertainty relation is also valid
for pure states, where it is saturated, among others, by all eigenstates of
the harmonic oscillator, which appear as extremal points of the bound.
Moreover, by identifying the states which possess a positive Wigner function
among the ones which saturate the non-Gaussianity bounded uncertainty
relation, we come closer to the extension of Hudson's theorem to mixed
states. Gaussian mixed states (thermal states) both saturate the
non-Gaussianity uncertainty relation and satisfy the extended Hudson's
theorem. We should note that in \cite{Sudbury} we employ numerical methods to
complement the results presented here concerning Hudson's theorem. The main
result in \cite{Sudbury} is the identification of the minimum degree of trace
overlap that a mixed state of fixed uncertainty may achieve.

In this work we have chosen purity as a measure of the degree of mixedness of
a state. If the von Neumann entropy is employed instead \cite{SPIE}, then the
results are similar to the ones presented in this work, and one arrives at
the non-Gaussianity extension of the \textquotedblleft entropy-bounded
\textquotedblright\ uncertainty relation suggested by Bastiaans
\cite{Bastiaans}. The advantage of using the entropy is that the solution is
automatically a positive matrix, and one does not need to employ the ansatz
we have used in the current work to impose positivity (Eq.(\ref{f})).
Furthermore, the solution to the extremization problem contains two
\textquotedblleft branches\textquotedblright, one bounding the trace overlap
from above and one from below for fixed uncertainty and purity. Note that in
the current work we derive only the lower bound on the trace overlap.
On the other hand, the states which minimize the extended uncertainty
relation in \cite{SPIE} are mixtures of all number states with
$n\rightarrow\infty$; the solution cannot be expressed in a closed form, and
one needs the help of numerical methods to visualize the results.

\acknowledgments
AM gratefully acknowledges financial support from the Belgian National Fund
for Scientific Research (FNRS). This work was carried out with the financial
support of the European Commission via projects COMPAS and QAP, the support
of the Belgian Federal program PAI via the Photonics@be project, and the
support of the Brussels-Capital Region, via project CRYPTASC.
\section{Introduction}
\label{sec:intro}
\input{introproc}

\begin{acknowledgments}
We are grateful for the extraordinary contributions of our PEP-II\ colleagues
in achieving the excellent luminosity and machine conditions that have made
this work possible. The success of this project also relies critically on the
expertise and dedication of the computing organizations that support
\mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em}
{\small\sl A\hspace{-0.02em}R}}. The collaborating institutions wish to thank
SLAC for its support and the kind hospitality extended to them. This work is
supported by the US Department of Energy and National Science Foundation, the
Natural Sciences and Engineering Research Council (Canada), the Commissariat
\`a l'Energie Atomique and Institut National de Physique Nucl\'eaire et de
Physique des Particules (France), the Bundesministerium f\"ur Bildung und
Forschung and Deutsche Forschungsgemeinschaft (Germany), the Istituto
Nazionale di Fisica Nucleare (Italy), the Foundation for Fundamental Research
on Matter (The Netherlands), the Research Council of Norway, the Ministry of
Education and Science of the Russian Federation, Ministerio de Educaci\'on y
Ciencia (Spain), and the Science and Technology Facilities Council (United
Kingdom). Individuals have received support from the Marie-Curie IEF program
(European Union) and the A. P. Sloan Foundation.
\end{acknowledgments}
\bigskip

\section{Precise measurement of $\tau$ mass and $\tau^{+} \tau^{-}$ mass
difference}
A key test of CPT invariance is to measure the difference in mass between a
particle and its antiparticle. Using 423 fb$^{-1}$ of data from the
\mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em}
{\small\sl A\hspace{-0.02em}R}}\ detector, a pseudomass endpoint method was
used to measure the mass of the $\tau$ lepton \cite{Aubert:2009ra}.
The significant advantage of this method is that it allows the masses of the
$\tau^{+}$ and $\tau^{-}$ to be measured independently, which in turn allows
us to test CPT invariance. The current world average of the $\tau$ mass is
1776.84 $\pm$ 0.17 MeV$/c^2$ \cite{pdg:2008}, and the relative mass
difference is $\left\vert M_{\tau^{+}} - M_{\tau^{-}}\right\vert /M_{\tau} <
2.8 \times 10^{-4}$ at 90$\%$ confidence level. The pseudomass endpoint
method was first used by the ARGUS collaboration \cite{argus}, and has since
been employed by BELLE \cite{belle07}. The premise is first to consider
reconstructing the mass of the $\tau$ from the final-state hadronic products
($\tau^{-} \rightarrow h^{-}\nu_{\tau}$):
\begin{equation}
M_{\tau} = \sqrt{M_{h}^{2} + 2(\sqrt{s}/2 - E_{h}^{*})(E_{h}^{*} -
P_{h}^{*}\cos\theta^{*})} ,
\end{equation}
\noindent where $M_{h}$, $E_{h}^{*}$ and $P_{h}^{*}$ are the mass, energy,
and magnitude of the three-momentum of the hadronic system $h$, respectively,
and $\theta^*$ is the angle between the hadronic system and the $\nu_\tau$;
the * denotes quantities in the $e^{+}e^{-}$ center-of-mass (CM) frame. The
relationship $E_{\tau}^{*} = \sqrt{s}/2$ is used, where $\sqrt{s}$ is the
initial $e^{+}e^{-}$ CM energy (10.58 GeV) and $E_{\tau}^{*}$ is the energy
of the $\tau$. In the above representation the angle $\theta^*$ is unknown,
as the neutrino escapes undetected; we therefore define the pseudomass
$M_{p}$ by imposing $\theta^{*} = 0$. This simplifies the above equation:
\begin{equation}
M_{p} =\sqrt{M_{h}^{2} + 2(\sqrt{s}/2 - E_{h}^{*})(E_{h}^{*} - P_{h}^{*})} .
\end{equation}
\noindent The distribution of $M_p$ has a sharp kinematic cutoff at $M_p =
M_\tau$, although there is smearing due to initial- and final-state radiation
and to the limited detector resolution.
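A toy Monte Carlo makes the endpoint visible (this is an illustration, not the collaboration's analysis code: the hadronic mass is fixed at an assumed 1 GeV$/c^2$, and radiation and resolution effects are ignored):

```python
import numpy as np

SQRT_S = 10.58    # e+e- CM energy [GeV]
M_TAU = 1.77668   # tau mass assumed for the toy [GeV/c^2]
M_H = 1.0         # invariant mass of the hadronic system [GeV/c^2], toy value

def pseudomass(e_h, p_h, m_h):
    """Pseudomass: the tau-mass formula evaluated with theta* = 0."""
    return np.sqrt(m_h**2 + 2.0 * (SQRT_S / 2 - e_h) * (e_h - p_h))

# Two-body decay tau -> (hadronic system) + nu in the tau rest frame,
# boosted to the CM frame where E_tau = sqrt(s)/2.
e_tau = SQRT_S / 2
gamma = e_tau / M_TAU
beta = np.sqrt(1.0 - 1.0 / gamma**2)
e_r = (M_TAU**2 + M_H**2) / (2 * M_TAU)   # hadron energy, rest frame
p_r = (M_TAU**2 - M_H**2) / (2 * M_TAU)   # hadron momentum, rest frame

cos_t = np.linspace(-1.0, 1.0, 2001)              # rest-frame decay angle
e_h = gamma * (e_r + beta * p_r * cos_t)          # CM-frame hadron energy
p_l = gamma * (p_r * cos_t + beta * e_r)          # longitudinal momentum
p_h = np.sqrt(p_l**2 + p_r**2 * (1.0 - cos_t**2)) # |three-momentum|, CM

m_p = pseudomass(e_h, p_h, M_H)
# The distribution never exceeds the tau mass; the endpoint is reached
# when the neutrino is collinear with the hadronic system:
assert np.all(m_p <= M_TAU + 1e-9)
assert np.isclose(m_p.max(), M_TAU)
```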
The signal mode chosen for this study is $\tau^{\pm} \rightarrow \pi^{+}
\pi^{-} \pi^{\pm}\nu_{\tau}$, due in part to its high branching fraction,
(9.03 $\pm$ 0.06) $\times 10^{-2}$ \cite{pdg:2008}, and the ability to obtain
high signal purity. The data are then fitted to an empirical function:
\begin{equation}
F(x) = (p_{3} + p_{4}x)\tan^{-1}\frac{\left(p_{1} - x\right)}{p_{2}} + p_{5}
+ p_{6}x ,
\end{equation}
\noindent where the $p_{i}$ are the parameters and $x$ is the pseudomass. A
relationship between $p_{1}$, the endpoint parameter, and the $\tau$ mass is
required. This is determined from Monte Carlo studies; three different
samples are generated with different $\tau$ mass values and these are fitted
with the function defined above. A straight line is then fitted to the
resulting $p_1$ values, which provides the relation to $M_\tau$. The data are
split into two samples according to the total charge of the 3$\pi$ hadronic
final state, and each sample is analyzed independently. The results of these
fits are shown in Figure \ref{fig:pseudo}. This yields $M_\tau$ = 1776.68
$\pm$ 0.12(stat)\,MeV$/c^2$.
\begin{figure}[h]
\includegraphics[width=80mm]{figure1.eps}
\put(-160.0,80.0){\mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em}
\sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}}
\put(-160.0,70.0){preliminary}
\caption{Combined $\tau^{+}$ and $\tau^{-}$ pseudomass endpoint distribution.
The points show the data, the curve is the fit to the data, and the solid
area is the background. The inset is an enlargement of the boxed region
around the edge position, showing the fit quality where $p_{1}$ is most
sensitive.}
\label{fig:pseudo}
\end{figure}
A number of systematic effects are investigated, including potential
uncertainties on energy-loss measurements for charged particles and
uncertainties in the magnetic field. However, the dominant uncertainty is
found to be due to an underestimation of the reconstructed track momenta in
the detector model.
This contributes 0.39 MeV to a total systematic uncertainty of 0.41 MeV. \section{Strange hadronic tau decays} Strange hadronic tau decays offer a very clean environment for studying the weak current. The branching ratios feed directly into a measurement of $V_{us}$, and fits to the mass spectra can yield resonance parameter values which can further our understanding of the dynamics of these systems. In this section studies are presented of the hadronic mass distributions for the decays $\tau^{-}\rightarrow K^{0}_{S}\pi^{-}\nu_{\tau}$ and $\tau^{-}\rightarrow K^{0}_{S}\pi^{-}\pi^{0}\nu_{\tau}$ (throughout the note, charge conjugate modes are implied). A fit to the invariant mass spectrum from $\tau^{-}\rightarrow K^{0}_{S}\pi^{-}\nu_{\tau}$ is presented along with precise resonance parameter values of the dominant K*(892)$^-$. Due to this mass spectrum having a peaking background from $\tau^{-}\rightarrow K^{0}_{S}\pi^{-}\pi^{0}\nu_{\tau}$, the hadronic mass spectra and branching ratio from this mode were also measured and the results used directly to improve our Monte Carlo modelling. \subsection{Analysis of $\tau^{-}\rightarrow K^{0}_{S}\pi^{-}\pi^{0}\nu_{\tau}$ } To select events of the type $e^+e^- \rightarrow \tau^+\tau^-$ with one tau-lepton decaying to $K^{0}_{S}\pi^{-}\pi^{0}\nu_{\tau}$, the event is first divided into two hemispheres in the center-of-momentum system (CMS) using the thrust axis. One hemisphere of the event is required to contain only one charged track; this is defined as the tag hemisphere. The other hemisphere is required to have three charged tracks; this is called the signal hemisphere. The tag track and at least one of the signal hemisphere tracks are required to originate from the interaction point. Approximately 35\% of $\tau$-leptons decay to fully leptonic final states. 
Requiring the track in the tag hemisphere to be identified as an electron or muon while requiring the signal hemisphere to contain only hadrons strongly reduces backgrounds from $e^+e^- \rightarrow q\overline{q}$ events. Electrons are identified using specialized likelihood selectors, whereas a neural network is used to identify muon tracks. $K^0_{S}$ candidates are constructed from any two oppositely charged tracks with an invariant mass within $25 \, \mbox{MeV}/c^2$ of the $K^0_S$ mass, $497.672 \, \mbox{MeV}/c^2$ \cite{Yao:2006px}. Only events with exactly one $K^0_S$ candidate are retained. The track from the signal side not originating from the $K^0_S$ candidate is required to be identified as a pion and originate from the interaction point. Pions are identified by $dE/dx$ in the tracking system, the shape of the shower in the calorimeter and information from the DIRC. All tracks on the signal side are required to lie within the geometrical acceptance region of the EMC and DIRC to ensure good particle identification. In addition, the net charge of the event must be zero and the thrust of the event is required to be greater than 0.85 to reduce the non-$\tau$ background. Backgrounds from Bhabha events are suppressed by requiring the momentum of the tag-side track to be less than $4.9 \, \mbox{GeV}/c$. Backgrounds from radiative Bhabha and $\mu$-pair events with a converted photon are suppressed by requiring the modulus of the cosine of the decay angle to be less than 0.97. The decay angle is defined as the angle between the momentum of the $\pi^+$ originating from the $K^0_S$ in the $K^0_S$'s rest frame and the $K^0_S$ momentum vector in the laboratory frame. When this quantity is calculated for $e^+e^-$ conversion pairs misidentified as pions, its value is concentrated near $\pm 1$. From studies of missing transverse event energy, backgrounds from two-photon events are determined to be negligible. 
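The $K^0_S$ mass-window requirement can be sketched as follows (illustrative code, not the analysis software; the candidate is built from two track three-momenta under the pion mass hypothesis):

```python
import math

M_PI = 0.13957    # charged pion mass, GeV/c^2
M_K0S = 0.497672  # K0_S mass used in the selection, GeV/c^2

def inv_mass(p1, p2, m=M_PI):
    """Invariant mass of two tracks (3-momenta in GeV/c), each assigned mass m."""
    e1 = math.sqrt(m * m + sum(c * c for c in p1))
    e2 = math.sqrt(m * m + sum(c * c for c in p2))
    psum = [a + b for a, b in zip(p1, p2)]
    return math.sqrt((e1 + e2) ** 2 - sum(c * c for c in psum))

def is_ks_candidate(p_plus, p_minus, window=0.025):
    """Accept an oppositely charged pair within 25 MeV/c^2 of the K0_S mass."""
    return abs(inv_mass(p_plus, p_minus) - M_K0S) < window
```

As a sanity check, two back-to-back pions with $|p| \approx 206$ MeV$/c$ reproduce the $K^0_S$ mass exactly.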
We also require exactly one identified $\pi^0$ in the event; the trajectory of the $\pi^0$ must be within 90 degrees of the $K^{0}_{S}\pi^{-}$ momentum vector. This ensures that the $\pi^0$ is more likely to be from the same $\tau$ as the $K_S\pi$. The neutral energy not attributed to the $K^0_S$ or the $\pi^0$ must be less than 100 MeV; this quantity should already be small, and the requirement rejects events containing additional photons. The energy of the $\pi^0$ in the center-of-mass system must be greater than 1.2 GeV, which removes the large background contribution below 1.2 GeV. \noindent Figure~\ref{fig:pi0energy} shows the distribution of the $\pi^0$ energy. \begin{figure}[htbp] \begin{center} \includegraphics[width=80mm]{figure2.eps} \put(-63.0,146.0){\mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}} \put(-63.0,132.0){preliminary} \caption{Distribution of the $\pi^0$ energy.} \label{fig:pi0energy} \end{center} \end{figure} \noindent The branching fraction $\mathcal{B}(\tau^{-}\rightarrow \bar{K^{0}}\pi^{-}\pi^{0}\nu_{\tau})$ is estimated by \begin{equation} \mathcal{B}(\tau^{-}\rightarrow \bar{K^{0}}\pi^{-}\pi^{0}\nu_{\tau}) = \frac{1}{2\ensuremath{N_{\scriptscriptstyle{\tau\tau}}}\xspace} \frac{N_{\scriptscriptstyle{\rm data}} - N_{\scriptscriptstyle{\rm bkg}}}{\ensuremath{\varepsilon_{\scriptscriptstyle{\rm sig}}}\xspace^{'}}, \label{eq:BR} \end{equation} \noindent where $\ensuremath{N_{\scriptscriptstyle{\tau\tau}}}\xspace$ is the total number of \ensuremath{\tau^+\tau^-}\xspace\ pairs in the data, $N_{\scriptscriptstyle{\rm data}}$ is the number of selected events in data, $N_{\scriptscriptstyle{\rm bkg}}$ is the number of background events estimated from Monte Carlo, and $\ensuremath{\varepsilon_{\scriptscriptstyle{\rm sig}}}\xspace^{'}$ is the corrected signal efficiency to include $K_{S}^{0}$ and $K_{L}^{0}$ mesons. 
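Equation~(\ref{eq:BR}) is a simple counting estimate; a minimal sketch (the event counts and efficiency below are placeholders, not the analysis values):

```python
def branching_fraction(n_data, n_bkg, n_tautau, eff_sig):
    """B = (N_data - N_bkg) / (2 N_tautau eff_sig); the factor of two
    counts the two tau decays available in each tau-pair event."""
    return (n_data - n_bkg) / (2.0 * n_tautau * eff_sig)
```

The statistical uncertainty then follows from the Poisson fluctuation of the selected-event count in the numerator.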
\begin{comment} The total number of \ensuremath{\tau^+\tau^-}\xspace\ pairs in the data is given by \begin{equation} \ensuremath{N_{\scriptscriptstyle{\tau\tau}}}\xspace = \sigma_{\scriptscriptstyle \ensuremath{\tau}\xspace} \lum_{\scriptscriptstyle {\rm data}} = 352 \, 900 \, 000, \end{equation} \noindent where $\sigma_{\scriptscriptstyle \ensuremath{\tau}\xspace}$ is the \ensuremath{\tau^+\tau^-}\xspace\ production cross-section\ at \mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}\ (\mbox{i.e.}\ $0.919 \ensuremath{\rm \,nb}\xspace$) and $\lum_{\scriptscriptstyle {\rm data}}$ is the (integrated) real data luminosity (\mbox{i.e.}\ $384.6 \ensuremath{\mbox{\,fb}^{-1}}\xspace$). \end{comment} \begin{table}[htbp] \begin{center} \caption{\footnotesize{$\mathcal{B}(\tau^{-}\rightarrow \bar{K^{0}}\pi^{-}\pi^{0}\nu_{\tau})$ measured in this analysis.}} \label{tab:kpi0BR} \begin{tabular}{@{}ll} \hline Sample & $\mathcal{B}(\tau^{-}\rightarrow \bar{K^{0}}\pi^{-}\pi^{0}\nu_{\tau})$ [\%] \\ \hline $e$-tag & $ 0.353 \pm 0.008 \, \ensuremath{\mathrm{(stat)}}\xspace \pm 0.016 \, \ensuremath{\mathrm{(syst)}}\xspace$ \\ $\mu$-tag & $ 0.329 \pm 0.008 \, \ensuremath{\mathrm{(stat)}}\xspace \pm 0.016 \, \ensuremath{\mathrm{(syst)}}\xspace$ \\ Combined & $ 0.342 \pm 0.006 \, \ensuremath{\mathrm{(stat)}}\xspace \pm 0.015 \, \ensuremath{\mathrm{(syst)}}\xspace$ \\ \hline \end{tabular} \end{center} \end{table} The hadronic mass distributions for the different combinations of final state hadrons are also extracted, and used to tune our Monte Carlo. Figure~\ref{fig:newmc} shows the mass distribution for $K_{S}^{0}\pi^{-}$ from $\tau^{-}\rightarrow K^{0}_{S}\pi^{-}\pi^{0}\nu_{\tau}$, overlaid with the new Monte Carlo generated using this analysis. 
\begin{figure}[htbp] \begin{center} \includegraphics[width=80mm]{figure3.eps} \label{lab:newmasskspibg} \put(-173.0,146.0){\mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}} \put(-173.0,132.0){preliminary} \caption{The data (points) and tuned Monte Carlo (blue) prediction for the $K_{S}^{0}\pi$ mass distribution from the $\tau^{-}\rightarrow K^{0}_{S}\pi^{-}\pi^{0}\nu_{\tau}$ signal mode.} \label{fig:newmc} \end{center} \end{figure} \subsection{Fit to $\tau^{-}\rightarrow K^{0}_{S}\pi^{-}\nu_{\tau}$ mass spectrum} The analysis of the decay $\tau^- \rightarrow K^{0}_{S} \pi^- \nu_{\tau}$ consists of a fit of the hadronic mass distribution to a parametric function describing the resonant structure. From this we obtain precise values for the mass and width of the K*(892) as well as information on other resonances present in the spectrum. We denote the number of events found in bin $i$ (without background subtraction) by $n_i$. The prediction for the expectation value of $n_i$, $\nu_i = E[n_i]$, can be written \begin{equation} \label{eq:nui} \nu_i = \sum_{j=1}^M R_{ij} \mu_{j} + \beta_{i} \;, \end{equation} \noindent where $\beta_i$ is the expected number of background events, $\mu_{j}$ is the predicted number of signal events in bin $j$ before detector effects (the ``true'' distribution), and $R_{ij}$ is a response matrix that reflects the limited efficiency and resolution of the detector. The value of $R_{ij}$ is \begin{equation} \label{eq:Rij} R_{ij} = P(\mbox{found in bin } i \, | \mbox{true value in bin } \, j) \;, \end{equation} \noindent and thus the efficiency for bin $j$ is found by summing over all bins where the event could be found. \begin{comment} \begin{eqnarray} \label{eq:effj} & \varepsilon_j = \sum_{i=1}^N R_{ij} = \nonumber \\ & P(\mbox{event found anywhere}\, |\, \mbox{event created in bin } \, j) \;. 
\end{eqnarray} \end{comment} The predicted number of events in bin $j$ of the true distribution can be written \begin{equation} \label{eq:muj} \mu_j = \mu_{\rm tot} \int_{{\rm bin}\, j} f(m; \vec{\theta}) \, dm \;, \end{equation} \noindent where $m$ denotes the $K^0_{S}\pi^-$ invariant mass and $\vec{\theta}$ represents a set of parameters. The probability density function (pdf) $f(m; \vec{\theta})$ can be written \cite{Epifanov:2007rf}: \begin{eqnarray} f(m; \vec{\theta}) &\propto& \frac{1}{s}{\left(1-\frac{s}{{m_\tau}^2}\right)} \left(1+2\frac{s}{{m_\tau}^2}\right) \nonumber \\ &\times& P\left(P^2{|F_V|}^2 + \frac{3({m_K}^2 - {m_\pi}^2)^2}{4s(1+2\frac{s}{{m_\tau}^2})}{|F_S|}^2\right) \nonumber \,\,\,\,\,\,\, \end{eqnarray} \noindent where $s = m^2$. Here the vector form factor $F_V$ is given by \begin{equation} \label{eq:fv} F_V= \frac{1}{1+\beta+\gamma} [BW_{K^1}(s)+\beta BW_{K^2}(s)+\gamma BW_{K^3}(s)] \;. \nonumber \end{equation} \noindent This form allows for the K*(892) and two additional vector resonances. The quantities $\beta$ and $\gamma$ are complex interference terms between the resonances, and the {\it BW} terms refer to the relativistic Breit-Wigner functions for the specific resonances, given by \begin{equation} \label{eq:bwrs} BW_R(s) = \frac{M_R^2}{s-{M_R^2} + i\sqrt{s}\Gamma_{R}(s)} \;. \end{equation} \noindent The energy-dependent width is given by \begin{equation} \label{eq:gammars} \Gamma_{R}(s)= \Gamma_{0R}\frac{M_R^2}{s}\left(\frac{P(s)}{P(M_R^2)}\right)^{2\ell+1} \;, \end{equation} \noindent where $\Gamma_{0R}$ is the width of resonance $R$ at its peak, \begin{equation} \label{eq:pofs} P(s) = \frac{1}{2\sqrt{s}}\sqrt{(s-M_+^2)(s-M_-^2)} \;, \end{equation} \noindent and where $M_- = M_K - M_\pi$, $M_+ = M_K + M_\pi$, and $\ell$ is the orbital angular momentum. Thus one has $\ell = 1$ if the $K\pi$ system is from a P-wave (vector), or $\ell = 0$ if the $K\pi$ system is from an S-wave (scalar). 
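The Breit-Wigner with energy-dependent width can be sketched directly; the snippet below is an illustration, not the analysis code, and the K*(892) parameters used in the check are rough PDG-like values. It shows the characteristic peaking of $|BW_R(s)|$ at $s = M_R^2$:

```python
import math

M_K, M_PION = 0.49767, 0.13957  # neutral kaon and charged pion masses, GeV/c^2

def two_body_p(s, m1=M_K, m2=M_PION):
    """Break-up momentum P(s) of the K pi system."""
    m_plus, m_minus = m1 + m2, m1 - m2
    return math.sqrt((s - m_plus ** 2) * (s - m_minus ** 2)) / (2.0 * math.sqrt(s))

def bw_relativistic(s, m_r, gamma_0r, ell=1):
    """Relativistic Breit-Wigner with energy-dependent width (P-wave by default)."""
    gamma_s = gamma_0r * (m_r ** 2 / s) \
        * (two_body_p(s) / two_body_p(m_r ** 2)) ** (2 * ell + 1)
    return m_r ** 2 / (s - m_r ** 2 + 1j * math.sqrt(s) * gamma_s)
```

At the pole the modulus reduces to $M_R/\Gamma_{0R}$, so a narrow K*(892) stands well above the tail of the lineshape near 1.2 GeV.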
The scalar form factor requires a different parametric function and can include contributions from the K$_{0}^{*}$(800) and K$_{0}^{*}$(1430) signals. This is \begin{eqnarray} \label{eq:fs} F_S &=& \varkappa\frac{s}{M^2_{K_{0}^{*}(800)}}BW_{K_{0}^{*}(800)}(s) \nonumber \\ &+& \lambda\frac{s}{M^2_{K_{0}^{*}(1430)}}BW_{K_{0}^{*}(1430)}(s) \;. \end{eqnarray} Each of the background modes is subtracted from the data, and then a least-squares fit is performed to the resulting mass spectrum. \noindent The fit model includes $\tau_j$ as a scale factor that relates the luminosity of the Monte Carlo sample for mode $j$ to that of the data, and $r_j$ as a factor that allows for the uncertainty in the prediction of the rate of the background process. The best estimate of $r_j$ is equal to unity, but this is treated as a Gaussian distributed quantity with a standard deviation equal to the relative uncertainty on the production rate for the $j$th background mode. The uncertainties in the values of other nominally fixed model parameters, e.g., the resonance parameters of the K*(1410), can be incorporated into the fit in a similar way. For a given parameter $\eta$ one has a previously estimated value $\hat{\eta}$ and standard deviation $\sigma_{\eta}$, taken from the PDG. One includes in the minimisation function a Gaussian term in $\eta$ centered about $\hat{\eta}$ with a standard deviation $\sigma_{\eta}$, and regards $\eta$ as an adjustable parameter. We also include terms in the minimisation which account for the uncertainty in the shapes of background mass distributions. This is particularly true for the $K^0_S \pi^- K^0_L$ mode, as it makes a larger contribution and the information on its shape is based largely on lower-statistics measurements from LEP \cite{aleph:1998}. We introduce two additional adjustable parameters, $\vec{\alpha} = (\alpha_1, \alpha_2)$, which have the effect of shifting and stretching the shape of the distribution \cite{Cowan:AltHist}. 
This transformation is applied to the $m_{ij}$ values for the $K^0_S \pi^- K^0_L$ background mode and the altered values are then used in the minimisation. The fitting procedures described above have been carried out using a variety of hypotheses. \begin{figure}[htbp] \begin{center} \subfigure[] {\label{lab:onebwfit}\includegraphics[width=80mm]{figure4a.eps}} \put(-115.0,150.0){\mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}} \put(-115.0,140.0){preliminary}\\ \subfigure[] {\label{lab:nominal}\includegraphics[width=80mm]{figure4b.eps}} \put(-115.0,150.0){\mbox{\sl B\hspace{-0.4em} {\small\sl A}\hspace{-0.37em} \sl B\hspace{-0.4em} {\small\sl A\hspace{-0.02em}R}}} \put(-115.0,140.0){preliminary} \caption{A fit to the $\tau^{-}\rightarrow K^{0}_{S}\pi^{-}\nu_{\tau}$ mass distribution using (a) a single K*(892) resonance, (b) a combination of K$_{0}^{*}$(800) + K*(892) + K*(1410).} \end{center} \end{figure} Figure~\ref{lab:onebwfit} shows that a single K*(892) is clearly not enough to model the mass spectrum accurately. This was seen by the Belle collaboration \cite{Epifanov:2007rf}, which proposed that the distribution should contain contributions from K$_{0}^{*}$(800) scalar and K*(1410) vector resonances. In the region around 1.4 GeV in Fig.~\ref{lab:onebwfit}, the data are significantly higher than the fitted curve. The addition of the K*(1410) gives a significant improvement in the high mass region, yielding a $\chi^2$ of 130.04 for 95 degrees of freedom. In these fits the rate of the K*(1410) was allowed to vary within the error given in the PDG. The inclusion of the K$_{0}^{*}$(800) further reduces the $\chi^{2}$ to 113.05 for 94 degrees of freedom. This is a significantly better goodness-of-fit value than our K*(892) + K*(1410) fit model. 
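The Gaussian constraints placed on the background rates $r_j$ and on nominally fixed parameters such as the K*(1410) amount to adding penalty terms to the minimisation function. A minimal sketch (the simple least-squares form and the numbers are illustrative, not the analysis implementation):

```python
def constrained_chi2(n_obs, nu_pred, nuisances):
    """chi^2 with Gaussian penalty terms for constrained nuisance parameters.

    n_obs, nu_pred: observed and predicted bin contents;
    nuisances: (eta, eta_hat, sigma_eta) tuples, each adding
    ((eta - eta_hat)/sigma_eta)^2 to the minimisation function.
    """
    chi2 = sum((n - nu) ** 2 / nu for n, nu in zip(n_obs, nu_pred))
    chi2 += sum(((eta - eta_hat) / sig) ** 2 for eta, eta_hat, sig in nuisances)
    return chi2
```

Letting $\eta$ float against such a penalty propagates its external uncertainty directly into the statistical errors returned by the fit.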
For the mass and width of the K$_{0}^{*}$(800) we use the measurements from the BES collaboration~\cite{Ablikim:2005ni}, $M=841\pm30^{+81}_{-73}$\,MeV/$c^2$ and $\Gamma=618\pm90^{+96}_{-144}$\,MeV/$c^2$. The result is shown in Fig.~\ref{lab:nominal}. If instead of a K*(1410) one uses a scalar K$_{0}^{*}$(1430), one finds a comparable $\chi^2$ value of 114.11 for 94 degrees of freedom. As such, the K*(1410) and K$_{0}^{*}$(1430) cannot be distinguished on the basis of their $\chi^2$ values. To study what combination of K*(1410) and K$_{0}^{*}$(1430) is present, one could exploit the different spins of the two resonances by carrying out an angular analysis. This is not part of the current study. The resulting values for the mass and width of the K*(892) are found to be \begin{eqnarray} M(K^*(892)^-) & = & 894.57 \pm 0.19 \, \mbox{(stat.)} \,\mbox{MeV/c$^2$} \nonumber\\ \Gamma(K^*(892)^-) & = & 45.89 \pm 0.43 \, \mbox{(stat.)} \,\mbox{MeV/c} \nonumber \end{eqnarray} \noindent The statistical errors quoted already cover a number of systematic uncertainties, such as those in the rates and shapes of backgrounds, which were incorporated by including corresponding adjustable parameters in the fit. Several additional sources of systematic uncertainty are also taken into consideration. The response matrix $R_{ij}$ is derived from the Monte Carlo simulation of the detector. As a conservative estimate of the uncertainty of the detector response, which is dominated by modelling of the tracking and calorimeter, we have varied the parameters of the response matrix by up to $\pm10\%$. As a check of the fitting method we have taken a fully reconstructed Monte Carlo sample of signal events, and fitted them using the signal model. As the MC generator models the $\tau^{-}\rightarrow K^{0}_{S}\pi^{-}\nu_{\tau}$ decay with only the K*(892) resonance, the fit model also only contained this resonance. 
A further uncertainty in the fit model stems from the choice of resonances. We take the difference in the mass and width values for the K*(892) between our nominal fit and the alternative models which also yield comparable $\chi^{2}$ values as an estimate of the systematic uncertainty. The quadratic sum of all these sources of systematic uncertainty leads to an error on M(K*(892)) and $\Gamma$(K*(892)) of 0.19 MeV/c$^2$ and 0.57 MeV/c respectively. \section{Summary and Conclusion} \noindent Measurements of the $\tau$ mass and the $\tau^{+} - \tau^{-}$ mass difference have been carried out, yielding: \begin{eqnarray} M_{\tau} = 1776.68 \pm 0.12\,\mbox{(stat.)} \pm 0.41\,\mbox{(syst.)}\, \mbox{MeV}/c^2. \nonumber \\*[0.2 cm] \frac{M_{\tau^-} - M_{\tau^+}}{\langle M \rangle} = \left(-3.4 \pm 1.3\,\mbox{(stat.)} \pm 0.3\,\mbox{(syst.)}\right) \times 10^{-4} \nonumber. \end{eqnarray} \noindent where $\langle M \rangle$ is the average of $M_{\tau^{+}}$ and $M_{\tau^{-}}$. The $\tau$ mass result is in good agreement with the world average. We also find the mass difference result to be consistent with the results published by the Belle Collaboration at the 1.8$\sigma$ level. We have also carried out studies of the decays $\tau^{-}\rightarrow K^{0}_{S}\pi^{-}\nu_{\tau}$ and $\tau^{-}\rightarrow K^{0}_{S}\pi^{-}\pi^0\nu_{\tau}$ using $384.6 \ensuremath{\mbox{\,fb}^{-1}}\xspace$ of data. We have measured the branching ratio for $\tau^{-}\rightarrow \bar{K^{0}}\pi^{-}\pi^0\nu_{\tau}$, which is found to be: \begin{eqnarray} &{\cal B}(\tau^{-}\rightarrow \bar{K^{0}}\pi^{-}\pi^{0}\nu_{\tau}) = \nonumber \\ &(0.342 \pm 0.006 \, (\mbox{stat.}) \pm 0.015 \, (\mbox{sys.}))\% \;. \nonumber \end{eqnarray} For the $\tau^{-}\rightarrow K^{0}_{S}\pi^{-}\pi^0\nu_{\tau}$ mode we have measured the mass distributions of different combinations of final state hadrons: $\pi^{-}\pi^{0}$, $K_{S}^{0}\pi^{-}$, $K_{S}^{0}\pi^{0}$ and $K_{S}^{0} \pi^{-} \pi^{0}$. 
These were used to make important improvements to the {\tt TAUOLA} Monte Carlo generator, which allowed for a precise estimation of the background contribution from this mode in the analysis of the $\tau^{-}\rightarrow K^{0}_{S}\pi^{-}\nu_{\tau}$ channel. We have carried out a fit of the hadronic mass distribution for $\tau^{-}\rightarrow K^{0}_{S}\pi^{-}\nu_{\tau}$. This yields precise measurements for the mass and width of the K*(892) resonance: \begin{eqnarray*} & M(K^*(892)^-) = \nonumber \\ & 894.30 \pm 0.19 \, \mbox{(stat.)} \pm 0.19 \, \mbox{(syst.)} \,\mbox{MeV/c$^2$} \;, \\*[0.2 cm] & \Gamma(K^*(892)^-) = \nonumber \\ & 45.56 \pm 0.43 \, \mbox{(stat.)} \pm 0.57 \, \mbox{(syst.)} \,\mbox{MeV/c} \;. \end{eqnarray*} \noindent These values confirm the Belle collaboration's measurements \cite{Epifanov:2007rf} that indicated a K*(892) mass several MeV higher and a width several MeV lower than the world average. The results reported here represent a factor of two improvement in precision relative to the Belle measurements. We analyse the possibility of other resonances being present in this mass spectrum, and conclude that a combination of K$_{0}^{*}$(800), K*(892) and K*(1410) provides a good description of the data. Figure~\ref{lab:comparison} shows the results of various measurements that went into calculating the 2008 PDG average values for the mass and width of the K*(892). The Belle 2007 result and our result both indicate a shift towards 895 MeV for the mass value. \begin{figure}[!h] \begin{center} {\label{lab:masscomp1}\includegraphics[width=80mm]{figure5a.eps}} {\label{lab:masscomp2}\includegraphics[width=80mm]{figure5b.eps}} \caption{Comparison of the K*(892) mass and width values which were included in the PDG~\cite{pdg:2008} calculated average value, the recent result from Belle, and our result. The majority of the PDG values are from hydrogen bubble chamber experiments.} {\label{lab:comparison}} \end{center} \end{figure}
\section{Introduction} Recent years have seen sustained interest in attempting to determine whether any of the fundamental constants of nature vary. Efforts have been focused in particular on the fine structure constant, $\alpha$, and the proton-to-electron mass ratio, $\mu$, although other dimensionless constants have also been considered. Quasars provide a method of probing the value of $\alpha$ in the early universe by illuminating gas clouds along the line of sight to Earth. In particular, certain absorption lines in these gas clouds are sensitive to changes in one or more of the fundamental constants of nature. For an atom/ion, the relativistic corrections to the energy levels of an electron are proportional to $\alpha^2$, although the magnitude of the change depends on the transition under consideration. By comparing the wavelengths of absorption lines with differing sensitivities to a change in $\alpha$, we are able to place constraints on a change in $\alpha$ between the early universe and today. That is, we are able to measure $\Delta\alpha/\alpha = (\alpha_z-\alpha_0)/\alpha_0$, where $\alpha_z$ is the value of $\alpha$ at redshift $z$ and $\alpha_0$ is the laboratory value. The tentative detection of a variation in $\alpha$ reported by \citet{Webb99} has further increased interest in this field. Subsequent efforts refined this work by increasing the number of absorption systems considered. This yielded a constraint of $\Delta\alpha/\alpha = (-0.57 \pm 0.11)\times 10^{-5}$ \citep{Murphy04} from 143 absorption systems. However, all of these observations are from the Keck/HIRES (High Resolution Echelle Spectrometer) instrument, and so it remains important to confirm such unexpected results with independent equipment. \citet{Chand04} reported $\Delta\alpha/\alpha = (-0.06 \pm 0.06) \times 10^{-5}$ from 23 measurements using the Ultraviolet and Visual Echelle Spectrograph (UVES) on the Very Large Telescope (VLT). 
These two measurements of $\Delta\alpha/\alpha$ are clearly discrepant. A later analysis \citep{Murphy08a} found that the analysis of \citeauthor{Chand04} cannot be correct as it states a statistical precision in excess of the maximum theoretical precision allowed by the data. Similarly, \citeauthor{Murphy08a} analysed the $\chi^2$ vs $\Delta\alpha/\alpha$ curves produced by the \citeauthor{Chand04} optimisation algorithm, and concluded that the shape of the curves demonstrate a failure of the algorithm to find the maximum likelihood value of $\Delta\alpha/\alpha$ implied by the model fits and data, and thus that the estimate of $\Delta\alpha/\alpha$ given by \citeauthor{Chand04} is unreliable. Although it would appear that the \citeauthor{Murphy04} results are robust, it is worth directly investigating the optimisation algorithm used in order to confirm that it is reliable. Furthermore, each measurement of $\Delta\alpha/\alpha$ will be subject to systematic errors, but some systematic errors should have an expectation value of zero, and thus averaging over many absorption systems will in principle eliminate such errors. It has become commonplace to quote values of $\Delta\alpha/\alpha$ for individual systems; in these cases, one has no method of determining the size of certain systematic errors through comparison with other systems, and so one should be particularly careful that the statistical errors are stated correctly. The MCMC method we describe herein allows one to confirm the validity of the statistical errors produced by a different method of analysis. \section{Motivation for MCMC} Ideally one would like an independent method of demonstrating whether or not a purported value of $\Delta\alpha/\alpha$ and the associated statistical uncertainty given by an optimisation algorithm are reliable. 
Optimisation algorithms seek to minimise $\chi^2 = \sum_i [I(\mathbf{x})_i - d_i]^2 / \sigma_i^2$, where $d_i$ are the spectroscopic data points, $I(\mathbf{x})_i$ is the prediction of the model at each data point, and $\sigma_i$ is the standard error associated with each spectroscopic data point. However, $\chi^2 = -2 \ln L(\mathbf{x})$ up to an additive constant, which can be neglected as we only ever consider differences of $\chi^2$ when finding model parameters. We thus use the definition of the likelihood function $L(\mathbf{x}) \equiv \exp[-\chi^2(\mathbf{x})/2]$. One option is to use an alternate optimisation algorithm. Although optimisation algorithms are in principle simple, numerical issues can cause them to fail inconsistently; this may be the case for the algorithm utilised by \citet{Chand04}. In particular, the optimisation algorithms employed by \citet{Chand04}, \citet{Murphy08a} and \citet{Murphy04} are of the Newton type, which require all first and second partial derivatives of $\chi^2$ with respect to the parameters to be known. The Voigt function used to model the absorption lines is not analytic, nor are its derivatives. As such, partial derivatives must be approximated by finite difference methods. Inappropriate choices of the step size for the finite differencing scheme can either produce poor approximations to the derivatives (step size too large) or be rendered useless by roundoff error (step size too small), leading to poor performance of the optimisation algorithm. There are a number of other numerical issues which may cause failure of the optimisation algorithm, but we do not consider these here. On account of these numerical issues, one would like to explore the parameter space itself to directly determine the confidence limits on $\Delta\alpha/\alpha$. \section{Description of the MCMC method} Traditional Monte Carlo methods suffer from the ``curse of dimensionality''. 
That is, their performance degrades exponentially with increasing dimensionality. The Markov Chain Monte Carlo (MCMC) method degrades only polynomially with increased dimensionality, at the expense of introducing correlation between samples. Additionally, MCMC methods must be tuned to the probability distribution under consideration so as to explore the parameter space efficiently. We implement a variant of the Metropolis algorithm \citep{Metropolis1953} to explore our parameter space. The Metropolis algorithm proposes a new position in the parameter space, $\mathbf{x}'$, based on the current position, $\mathbf{x}$, according to some proposal function, $T(\mathbf{x},\mathbf{x}')$. The only requirement imposed is that $T(\mathbf{x},\mathbf{x}') = T(\mathbf{x}',\mathbf{x})$ (i.e.\ the proposal distribution is symmetric). Although in principle there is a large number of possible proposal functions, $T$, in practice the most common choice is a multidimensional Gaussian centred on the current point, such that $\mathbf{x}' = \mathbf{x} + gN(0,\mathbf{\Sigma})$, where $\mathbf{\Sigma}$ is the covariance matrix obtained from the optimisation algorithm at the purported best-fit solution and $g$ is a scalar tuning factor. The choice of $T$ influences only the efficiency of the algorithm, not the formal correctness of the solution. The initial $\mathbf{\Sigma}$ may or may not be a good approximation to the true covariance matrix. The use of $\mathbf{\Sigma}$ ensures that the distribution of proposed parameters is approximately the same as the underlying distribution; the closer $\mathbf{\Sigma}$ is to the true covariance matrix, the faster the MCMC algorithm will be. The tuning factor $g$ effectively controls the size of steps taken. If $g$ is too large, most trial steps will land in regions of low likelihood, and therefore most steps will be rejected (the chain will not move). 
On the other hand, if $g$ is too small, the acceptance rate will be $\approx 100\%$, but the parameter space will be explored too slowly. If both the target and proposal distributions are Gaussian, then the ideal acceptance rate is $\approx 44\%$ \citep{GRG95}. The algorithm generates a sequence of points, $\{\mathbf{x}^t\}$, according to a two-step prescription. First, from the current point, $\mathbf{x}$, propose a new point, $\mathbf{x}'$, via $T(\mathbf{x},\mathbf{x}')$, and calculate the ratio $q=L(\mathbf{x}')/L(\mathbf{x})$. Second, with probability $\min(q,1)$ move to the new point, i.e.\ set $\mathbf{x}^{t+1}=\mathbf{x}'$; otherwise, retain the current point, i.e.\ $\mathbf{x}^{t+1}=\mathbf{x}^t$. In this fashion, proposed moves to a point which is more likely than the existing point are always accepted, whereas proposed moves to a point which is less likely than the existing point are sometimes accepted, depending on the ratio of likelihoods. For a sufficiently large number of iterations, and with proper tuning of the algorithm, the distribution of $\{\mathbf{x}^t\}$ will sample from the underlying probability distribution, up to a normalisation constant. In particular, $\{\mathbf{x}^t_{\Delta\alpha/\alpha}\}$ will sample from the probability distribution of $\Delta\alpha/\alpha$, from which we can obtain a best estimate and confidence limits. To minimise running time, for each model fit we run our MCMC algorithm several times (usually five to ten, depending on the complexity of the situation, with several hundred thousand iterations per stage) and re-estimate $\mathbf{\Sigma}$ at each stage from the chain. Prior to starting each MCMC run, we execute small runs (typically 250 iterations) in which we tune $g$ such that the acceptance rate is between $30\%$ and $50\%$ (the outputs from these small runs do not count towards each stage). 
Thus, even if the initial covariance matrix does not allow a good exploration of the parameter space, by re-estimating $\mathbf{\Sigma}$ and retuning $g$ several times we can drastically increase the chance that the final MCMC run will produce a good approximation of the underlying probability distribution. We determine whether the final MCMC run is useful by examining the chain for autocorrelation. If the autocorrelation length in the chain is much smaller than the chain length, then we deem the final run acceptable. We do not require the usual ``burn-in'' period, where one discards a certain number of samples from the start of the chain, because we believe our parameters already start at the likelihood maximum. We can determine whether this assumption is robust by examining the chain -- parameters should stay near their starting values, on average, if our initial parameter estimates were good. We implement the Multiple Try Metropolis algorithm \citep{Liu2000}, which expands the Metropolis algorithm to allow multiple attempts at each step. If the initial proposal distribution is poorly tuned, this variant of the Metropolis algorithm tends to be much more robust in larger numbers of dimensions. Similarly, we do not use a Gaussian proposal distribution, but start with a radial distribution which has $P(r) \propto (2/3)r^2\exp(-r^2/2) + (1/3)\exp(-r)$. This mixture of an exponential distribution and the radial component of a Gaussian allows the algorithm to occasionally take large steps, whilst otherwise taking steps clustered about some value; this speeds exploration of the parameter space where $\mathbf{\Sigma}$ is initially poorly tuned. For our proposal distribution, $T$, we generate our parameters from a spherically symmetric distribution with radial probability density $P(r)$ and then left-multiply by $\mathbf{L}$ (where $\mathbf{LL^T}=\mathbf{\Sigma}$) so that the proposal distribution has the correct covariance structure. 
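The proposal construction and the Metropolis accept/reject step described above can be sketched as follows (an illustration only, not the VPFIT implementation; the Multiple Try extension, the retuning stages and the priors are omitted, and $\chi^2$ here is a stand-in for the spectral fit):

```python
import math
import random

def sample_radius():
    # P(r) ∝ (2/3) r^2 exp(-r^2/2) + (1/3) exp(-r): with probability 2/3 draw
    # the norm of a 3-component unit Gaussian (density ∝ r^2 exp(-r^2/2)),
    # otherwise draw from a unit exponential.
    if random.random() < 2.0 / 3.0:
        return math.sqrt(sum(random.gauss(0.0, 1.0) ** 2 for _ in range(3)))
    return random.expovariate(1.0)

def propose(x, chol, g):
    # Spherically symmetric step of radius r, given covariance structure by
    # left-multiplying with L (where L L^T = Sigma) and scaled by g.
    d = len(x)
    u = [random.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(c * c for c in u))
    r = sample_radius()
    step = [g * r * c / norm for c in u]
    return [x[i] + sum(chol[i][j] * step[j] for j in range(d)) for i in range(d)]

def metropolis(chi2, x0, chol, g, n_steps):
    # Sample from L(x) = exp(-chi2(x)/2); accept with probability min(q, 1),
    # where q = L(x') / L(x), by comparing log(u) with -(delta chi2)/2.
    x, c2, chain = list(x0), chi2(x0), []
    for _ in range(n_steps):
        xp = propose(x, chol, g)
        c2p = chi2(xp)
        if math.log(random.random()) < -(c2p - c2) / 2.0:
            x, c2 = xp, c2p
        chain.append(list(x))
    return chain
```

Running this on a toy one-dimensional target with $\chi^2(x) = x^2$ recovers a unit Gaussian, which is the kind of consistency check one would apply before trusting the chain on real spectra.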
MCMC can be used to directly estimate posterior probabilities in the Bayesian framework with the appropriate choice of prior distribution. The likelihood ratio then becomes $L(\mathbf{x}) \rightarrow L(\mathbf{x})\pi(\mathbf{x})$, where $\pi(\mathbf{x})$ is the Bayesian prior for a particular set of parameters. We utilise improper flat priors for the column densities and redshifts of each component. Similarly, we utilise a flat prior on $\Delta\alpha/\alpha$. We utilise a flat prior for the logarithm of the Doppler parameters, rather than the Doppler parameters themselves, to suppress movements to small $b$. Otherwise, the algorithm tends to propose many jumps to $b<0$ for narrow lines, which must be rejected -- this substantially reduces the efficiency of the algorithm. This is also somewhat reasonable on physical grounds, as we do not expect large numbers of gas clouds described by arbitrarily small temperatures, and certainly our fitting procedure would reject a profile decomposition of this nature. Using a flat prior for the Doppler parameters results in unacceptably large running times, and so we use the logarithmic prior as an easy and practical option. We have modified the spectral profile fitting program VPFIT\footnote{See http://www.ast.cam.ac.uk/$\sim$rfc/vpfit.html} to incorporate our MCMC algorithm. The outputs of the optimisation algorithm are fed directly into the initialisation for the MCMC code. Our MCMC algorithm uses the same code as VPFIT to generate the Voigt profiles and calculate $\chi^2$, and thus our algorithm does not entirely eliminate the possibility of a code bug. However, with this caveat, our algorithm can determine whether the optimisation code used by VPFIT does or does not converge to the desired solution and produce appropriate uncertainty estimates. \section{Results} \begin{table*} \caption{\label{tabresults}Comparison of purported values of $\Delta\alpha/\alpha$ calculated by VPFIT, and the results of the MCMC algorithm. 
Quoted uncertainties are $1\sigma$.} \begin{center} \begin{tabular}{llll}\hline Object & Redshift & $\Delta\alpha/\alpha$ -- VPFIT & $\Delta\alpha/\alpha$ -- MCMC \\\hline LBQS\, 2206$-$1958 & 1.018 & $(-0.51 \pm 1.07) \times 10^{-5}$ & $(-0.51 \pm 0.88) \times 10^{-5}$\\ LBQS\, 0013$-$0029 & 2.029 & $(-0.86 \pm 0.94) \times 10^{-5}$ & $(-0.83 \pm 0.77) \times 10^{-5}$\\ Q\,0551$-$366 & 1.748 & $(-0.80 \pm 1.08) \times 10^{-5}$ & $(-0.89 \pm 0.84) \times 10^{-5}$\\\hline \end{tabular} \end{center} \end{table*} We have applied our MCMC algorithm as described above to the three quasar absorption systems described below. The numerical values produced by the optimisation algorithm (``VPFIT'') and the MCMC code (``MCMC'') are given in table \ref{tabresults}. In all cases we find good agreement between the VPFIT result and that produced by our MCMC code, although the statistical uncertainties produced by our MCMC code are mildly smaller than those produced by VPFIT, indicating that VPFIT may be conservative. Our fits all pass appropriate robustness tests (in particular, $\chi^2_\nu \approx 1$, where $\nu$ is the number of degrees of freedom for the fit). All of our final chains mix well, although with the second and third objects considered here the initial chains do not -- re-estimation of the covariance matrix multiple times is necessary to achieve a well mixed chain. All redshifts here refer to the redshift of the absorption system. \subsection{LBQS\,2206$-$1958 $z=1.018$} This absorption system appears to be well fitted by a single Voigt profile. We use the Mg{\sc \,ii} $\lambda\lambda2796,2803\AA$ transitions, which are relatively insensitive to $\alpha$ variation, and the Fe{\sc \,ii} $\lambda \lambda \lambda \lambda 2382,2600,2344,2587\AA$ transitions, which are strongly sensitive. The parameters are approximately jointly normally distributed. We expect this for single component fits -- the Voigt profile decomposition is effectively unique with one component. 
\subsection{LBQS\,0013$-$0029 $z=2.029$} This system appears with two obvious absorption features. We find that the bluer feature is better fitted by two components than it is by one on the basis of a statistically significant reduction in $\chi^2$ when using two components. That is, we fit three components to this absorption profile. We use a wide variety of transitions, namely: Si{\sc \,ii} $\lambda1526\AA$, Al{\sc \,iii} $\lambda \lambda 1854, 1862\AA$, Fe{\sc \,ii} $\lambda \lambda \lambda \lambda \lambda 2382,2600,2344,2587,1608\AA$ and Mg{\sc \,i} $\lambda 2852\AA$. The chain values of $\Delta\alpha/\alpha$ are approximately Gaussian-distributed. \subsection{Q\,0551$-$366 $z=1.748$} \begin{figure} \begin{center} \resizebox{\hsize}{!}{\includegraphics[hiresbb,viewport=1 1 92 151,angle=-90,width=61mm]{SAIT_KING_FIG1.eps}} \end{center} \caption{\label{figQ0551366a}Histogram of the chain for $\log_{10}(N(2)/\rm{cm}^2)$, where $N(2)$ is the column density of the central component to the fit for the $z=1.748$ absorption system toward Q\,0551$-$366.} \end{figure} \begin{figure} \begin{center} \resizebox{\hsize}{!}{\includegraphics[hiresbb,viewport=1 1 90 148,angle=-90,width=61mm]{SAIT_KING_FIG2.eps}} \end{center} \caption{\label{figQ0551366b}Histogram of the chain for $\Delta\alpha/\alpha$ for the $z=1.748$ absorption system toward Q\,0551$-$366.} \end{figure} This absorption feature appears as one weak feature next to one relatively strong feature, with some overlap. We find the bluer feature to be well modelled by one component, however the higher wavelength feature appears to require two closely spaced components to achieve a statistically acceptable fit. Hence, we use three components in total to model the observed profile. The transitions we use are: Si{\sc \,ii} $\lambda1526\AA$, Mg{\sc \,i} $\lambda 2852\AA$ and Fe{\sc \,ii} $\lambda \lambda \lambda \lambda \lambda \lambda$ $2382,2600,2344,2587,1608,2374\AA$. 
We note that the parameters corresponding to the two highest-wavelength components are manifestly not normally distributed (see Fig.\,\ref{figQ0551366a} for an example). We confirm, by inspection of the chain, that this effect is not due to permutations of corresponding parameters, which would leave $\chi^2$ unchanged. Despite the gross departures from Gaussianity for the column density and Doppler parameters corresponding to the two reddest components, the histogram of $\Delta\alpha/\alpha$ remains approximately Gaussian (see Fig.\,\ref{figQ0551366b}). The Voigt profile decomposition is not unique, and so we can only try to find the model which best describes the observations statistically. However, for a given model we would naively expect that there is a unique value of $\Delta\alpha/\alpha$ which minimises $\chi^2$, and additionally that $\Delta\alpha/\alpha$ should be approximately Gaussian. We find both of these statements to be true here. It is reassuring that we find concordance between the VPFIT and MCMC results for $\Delta\alpha/\alpha$ given the significant non-Gaussianity in some parameters. For non-Gaussian parameters, the parameter estimates produced by VPFIT will be the correct maximum likelihood estimates, however the confidence intervals will be biased. For our present purposes, we are only interested in the confidence limits on $\Delta\alpha/\alpha$, and here we find an acceptable level of agreement. \subsection{Combination of results} If we assume that a single value of $\alpha$ underlies these three systems, we can combine the three VPFIT results above using a weighted mean to estimate $\Delta\alpha/\alpha = (-0.74 \pm 0.59) \times 10^{-5}$, which is statistically consistent with no change in $\alpha$. We use the VPFIT results as a more conservative estimate. \section{Discussion \& conclusion} Given suitable knowledge of the observed distribution of column densities and Doppler parameters, we could implement these distributions as priors to our model. 
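The weighted-mean combination quoted above can be reproduced directly from the VPFIT values in table \ref{tabresults}; the short computation below is a standard inverse-variance weighting, not part of VPFIT itself.

```python
import numpy as np

# VPFIT values of 1e5 * (dalpha/alpha) and their 1-sigma errors (table 1)
values = np.array([-0.51, -0.86, -0.80])
sigmas = np.array([1.07, 0.94, 1.08])

weights = 1.0 / sigmas**2                        # inverse-variance weights
mean = np.sum(weights * values) / np.sum(weights)
err = 1.0 / np.sqrt(np.sum(weights))
# mean is about -0.74 and err about 0.59, in units of 1e-5
```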
However, our statistical constraints are generally sufficiently good that this should not significantly alter our parameter estimates. In any event, we are primarily interested in $\Delta\alpha/\alpha$, for which a flat prior is reasonable on ignorance grounds. We would like to apply our algorithm to verify the results of \citet{King08} in relation to $\Delta\mu/\mu$; however, those fits involve more than a thousand parameters each. In this context, our algorithm is hopelessly inadequate. Our running times for the objects described herein, each of which involves only a few tens of parameters, varied from a few hours to a few days. Exploration of more complicated cases must wait for advances in computing power. Our results demonstrate that VPFIT produces reliable parameter estimates and uncertainties for relatively simple situations. Experience with VPFIT suggests that there does not appear to be any indication of failure in moderately complicated circumstances, and so we would argue that the optimisation algorithm used by VPFIT is robust. The implication of this is that the results of \citet{Murphy04} are unlikely to be explained by some failure of the optimisation algorithm used by VPFIT. Thus the detected change in $\alpha$ must either be real, or due to some other unknown issue. \begin{acknowledgements} Presentation of this work at the 2009 IAU XXVII General Assembly JD9 conference was supported through financial assistance from the UNSW PRSS scheme.\end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} \large \renewcommand{\theequation}{\arabic{section}.\arabic{equation}} \setcounter{equation}{0} Change of scale in quantum field theories (QFTs) is governed by renormalization group (RG) transformations. If a space of theories is parameterized by coupling constants $\{\lambda^{i}\}$ the RG transformations are governed by a beta-function vector field \begin{equation}\label{beta_fn} \mu \frac{d\lambda^{i}}{d\mu} = \beta^{i}(\lambda) \, . \end{equation} The idea that RG flows could be gradient flows, that is \begin{equation} \label{grad_prop} \beta^{i}(\lambda) = -G^{ij}(\lambda)\frac{\partial S(\lambda)}{\partial \lambda^{j}} \end{equation} for some metric $G^{ij}(\lambda)$ and potential function $S(\lambda)$ defined on the theory space, has some history. One of the earliest papers devoted to this question was \cite{WZ}. It was suggested in that paper that RG flows are gradient flows in a wide variety of situations. Gradient flows have some special properties. Thus, if the metric $G^{ij}$ is positive definite, the scale derivative of the potential function is negative definite \begin{equation} \label{monot} \mu\frac{dS}{d\mu} = \beta^{i}\frac{\partial S}{\partial \lambda^{i}} = -G_{ij}\beta^{i} \beta^{j} \le 0 \end{equation} and therefore $S$ monotonically decreases along the flow. This demonstrates irreversibility of the RG flows and forbids limiting cycle behaviour. Another appealing property of gradient flows is that the matrix of anomalous dimensions $\partial_{i}\beta^{j}$ is symmetric and thus their eigenvalues at critical points, that give critical exponents, are always real. The first perturbative computations in support of this idea were done for four-dimensional theories \cite{WZ2}. Later more evidence was found in the context of two dimensional general sigma models \cite{FriedanNM1}, \cite{FriedanNM2}. 
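The monotonicity property (\ref{monot}) is easy to illustrate numerically: for any positive definite $G^{ij}$, integrating the flow $\dot\lambda^{i}=-G^{ij}\partial_{j}S$ drives $S$ downhill. The potential and metric in the sketch below are arbitrary toy choices, not tied to any particular QFT.

```python
import numpy as np

# A constant positive-definite "metric" G_ij on a 2D space of couplings.
G = np.array([[2.0, 0.5], [0.5, 1.0]])
G_inv = np.linalg.inv(G)

def S(l):
    # An arbitrary smooth toy potential S(l) = l1^4/4 + l2^2 - l1*l2
    return 0.25 * l[0]**4 + l[1]**2 - l[0] * l[1]

def grad_S(l):
    return np.array([l[0]**3 - l[1], 2.0 * l[1] - l[0]])

# Euler-integrate the gradient flow  dl/dt = beta(l) = -G^{-1} grad S(l);
# dS/dt = -grad S . G^{-1} grad S <= 0, so S must decrease monotonically.
l = np.array([1.5, -1.0])
history = [S(l)]
for _ in range(2000):
    l = l - 0.01 * (G_inv @ grad_S(l))
    history.append(S(l))
```

With a positive-definite metric the recorded values of $S$ are non-increasing at every step, mirroring the irreversibility argument above.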
In \cite{FriedanNM2} a gradient formula of the form (\ref{grad_prop}) was formulated for such models and shown to hold up to two loops for a particular class of sigma models. A crucial ingredient for a gradient formula for general sigma models was the introduction of the dilaton field \cite{FradkinTs1}, \cite{FradkinTs2}. It was shown in \cite{Callanetal1}, \cite{Callanetal2} that including the dilaton couplings into a general sigma model one finds that the vanishing beta function equations are equivalent to critical points of a certain functional at the leading order in $\alpha'$. A gradient formula of the form (\ref{grad_prop}) was checked for general sigma models in \cite{Tseytlin0} to the first two orders in $\alpha'$. In string theory conformal sigma models describe strings propagating on the sigma model target manifolds. The sigma model couplings parameterize a metric $G_{IJ}$, antisymmetric tensor $B_{IJ}$ and a dilaton field $\Phi$ defined on the target space manifold. The gradient property (\ref{grad_prop}) attains a special significance in this context, becoming a manifestation of the string action principle. The condition for conformal invariance is that the beta functions vanish: $\beta^{G}=\beta^{B}=\beta^{\Phi}=0$. It is equivalent to the string equations of motion. The gradient property (\ref{grad_prop}) thus means that the string equations of motion arise by varying a functional of couplings, $S$, which can be identified with the string action functional. Another reinforcement of the gradient conjecture (\ref{grad_prop}) for two-dimensional theories came from the Zamolodchikov $c$-theorem \cite{Zam}. The latter is a general theorem, applicable to unitary 2D theories, which states that there is a function $c$ on the space of theories that monotonically decreases along the RG flows and coincides with the Virasoro central charge at fixed points. (We give a slightly modified proof of this theorem in section \ref{Zam_form}). 
The theorem was proved by constructing $c$ whose scale derivative takes the form of the right hand side of (\ref{monot}) with a certain positive definite metric. It was natural to conjecture that a gradient formula of the form (\ref{grad_prop}) holds with $S$ being the $c$-function and $G_{ij}$ the Zamolodchikov metric. This was shown to hold at the leading order in conformal perturbation theory near fixed points \cite{Zam}, \cite{Zam2}. In the context of nonlinear sigma models this idea was discussed in \cite{Tseytlin_c}. It was argued in \cite{Tseytlin_c} that for the purposes of string theory the $c$-function cannot provide a suitable potential function (we comment more on this in section \ref{sec:final}). Other potential functions for RG flows of nonlinear sigma models were considered in \cite{Tseytlin1}, \cite{Tseytlin_c}, \cite{Tseytlin2}, \cite{OsbornNLSM1}, \cite{OsbornNLSM2}, which were shown to be related to the central charge and to each other. In \cite{OsbornNLSM2} a potential function for nonlinear sigma models was constructed assuming the existence of a sigma model zero mode integration measure with certain properties. It was shown that a measure with the required properties can be constructed infinitesimally, but a proof of the integrability of that construction is still lacking. An essential tool proposed in \cite{OsbornNLSM2} for deriving gradient formulas was the use of Wess-Zumino consistency conditions on local Weyl transformations in the presence of a curved metric and sources. This technique was applied in \cite{Osborn} to a class of quantum field theories subject to certain power counting restrictions. It was shown that for these theories a gradient formula holds of a slightly different form than (\ref{grad_prop}): \begin{equation}\label{eq:Osborngr0} \partial_{i}c =- g_{ij}\beta^{j} - b_{ij}\beta^{j} \end{equation} where $c$ and $g_{ij}$ are Zamolodchikov's $c$-function and metric \cite{Zam} and $b_{ij}$ is a certain antisymmetric tensor. 
The necessity to introduce an antisymmetric tensor along with the Zamolodchikov metric can be demonstrated by the use of conformal perturbation theory. Thus it was shown in \cite{Friedmanetal} by explicit perturbative calculations that the one-form $g_{ij}\partial^{j} c$ is not closed for some flows\footnote{The obstruction to closedness occurs at next-to-leading order in perturbation theory.}. Still, as we will explain in the next section, Osborn's gradient formula (\ref{eq:Osborngr0}), although very inspiring, falls short of providing a general gradient formula. The main content of the present work is a derivation of a gradient formula that generalizes formula (\ref{eq:Osborngr0}) to a much wider class of theories that includes nonlinear sigma models as well. To finish the historical overview we mention here that a general gradient formula was proven for boundary renormalization group flows in two dimensions \cite{FriedanKon}. Such flows happen in QFTs defined on a half plane (or a cylinder) when the bulk theory is conformal but the boundary condition breaks the conformal invariance. One of the implications of the boundary gradient formula is a proof of Affleck and Ludwig's $g$-theorem \cite{AL}, which is a statement analogous to Zamolodchikov's $c$-theorem. A string theory interpretation of this gradient formula is that it provides an off-shell action for open strings. The boundary gradient formula was proved under certain assumptions on the UV behaviour which are reminiscent of the power counting restrictions of \cite{Osborn}. Nevertheless we will show in the present paper that any assumptions of this kind can be dispensed with in proving a bulk gradient formula. The paper is organized as follows. In section 2 after introducing some notations we explain in more detail Osborn's gradient formula (\ref{eq:Osborngr0}) and the assumptions that went into proving it. 
We then state our main result - a general gradient formula (\ref{new_gf}) and discuss the assumptions needed to prove it. In section 3 we give a proof of Zamolodchikov's formula and recast it in the form that we use as a starting point for proving the gradient formula. In section 4 the first steps of the proof are explained. At the end of those steps we express the quantity $\partial_{i}c + g_{ij}\beta^{j} + b_{ij}\beta^{j}$ built from the elements present in (\ref{eq:Osborngr0}) via three point functions with a certain contact operator present in them. To analyze these three point functions we develop a sources and operations formalism in section 5. A short summary of the formalism is provided in subsection 5.2. After discussing the Callan-Symanzik equations in section 6 we resume the proof in section 7 putting to use the Wess-Zumino consistency conditions on the local renormalization operation and our infrared assumptions. At the end of section 7 an infrared regulated gradient formula is obtained. In section 8 the proof is concluded by removing the infrared cutoff. Section 9 contains a discussion of the properties of the gradient formula and the assumptions used in proving it. In section 9.5 the gradient formula is specialized to the nonlinear sigma model case and a proof is given of the correspondence between RG fixed points and stationary points of $c$. In section 10 we conclude with some final remarks. \section{The general gradient formula} \setcounter{equation}{0} In this paper we consider two-dimensional Euclidean quantum field theories equipped with a conserved stress-energy tensor $T_{\mu\nu}(x)$. The stress-energy tensor measures the response of the theory to metric perturbations, so that if $Z[g_{\mu\nu}]$ is a partition function defined on a 2-dimensional plane with metric $g_{\mu\nu}(x)=\delta_{\mu\nu} + \delta g_{\mu\nu}$ \begin{equation} \label{Tresp} \delta \ln Z = \frac{1}{2}\iint\!\! d^{2}x\, \langle \delta g_{\mu \nu}T^{\mu\nu}(x)\rangle \, . 
\end{equation} In two dimensions any metric can be made conformally flat so that $g_{\mu\nu}(x) = \mu^{2}(x) \delta_{\mu\nu}$ where the function $\mu(x)$ sets the local scale. A change of local scale is generated by the trace of stress-energy tensor $\Theta(x)\equiv g^{\mu\nu}T_{\mu\nu}(x)$ \begin{equation}\label{scale_tr} \mu(x)\frac{\delta \ln Z}{\delta \mu(x)} = \langle \Theta(x) \rangle \, . \end{equation} For correlation functions computed on ${\mathbb R}^{2}$ with constant scale $\mu$ the change of scale is obtained by integrating over an insertion of $\Theta(x)$ \begin{equation}\label{eq:scale} \mu\frac{\partial}{\partial \mu} \langle {\cal O}_{1}(x_1)\dots {\cal O}_{n}(x_n)\rangle_{c} = \int\!\! d^{2}x\, \langle \Theta(x) {\cal O}_{1}(x_1)\dots {\cal O}_{n}(x_n)\rangle_{c} \, . \end{equation} Here ${\cal O}_{1}\, , \dots {\cal O}_{n}$ are local operators and the subscript $c$ at the correlator brackets marks connected correlators. Assume that a family of renormalizable QFTs is parameterized by renormalized coupling constants $\lambda^{i}$, $i=1,\dots, N$. We assume that an action principle \cite{Schwinger} is satisfied. This means that for each coupling $\lambda^{i}$ there exists a local operator $\phi_{i}(x)$ such that for any set of local operators ${\cal O}_{1}\, , \dots ,{\cal O}_{n}$ \begin{eqnarray}\label{ap1} \frac{\partial}{\partial \lambda^{i}} \langle {\cal O}_{1}(x_1)\dots {\cal O}_{n}(x_n)\rangle_{c} = \int\!\! d^{2}x\, \langle \phi_{i}(x) {\cal O}_{1}(x_1)\dots {\cal O}_{n}(x_n)\rangle_{c}\, . \end{eqnarray} Note that the integrability of the integrand in (\ref{eq:scale}),(\ref{ap1}) assumes the appropriate infrared behaviour of the correlators. Assume further that the couplings $\lambda^{i}$ can be promoted to local sources $\lambda^{i}(x)$ for the fields $\phi_{i}(x)$. 
The generating functional $\ln Z$ then in general depends on the scale factor $\mu(x)$ and the sources $\lambda^{i}$, and the action principle (\ref{ap1}) means that in addition to (\ref{Tresp}) we have \begin{equation} \label{ap2} \frac{\delta \ln Z}{\delta \lambda^{i}(x)} = \langle \phi_{i}(x) \rangle \, . \end{equation} A correlation function of the form \begin{equation} \label{gen_corr} \langle \phi_{i_{1}}(x_1)\phi_{i_{2}}(x_2)\dots \phi_{i_{n}}(x_n)\Theta(y_1)\Theta(y_2)\dots \Theta(y_{m}) \rangle_{c} \end{equation} evaluated on a flat ${\mathbb R}^{2}$ can be obtained by taking variational derivatives of $\ln Z$ with respect to the sources $\lambda^{i}$ and the metric scale factor $\mu$ and then setting the sources and the scale to be constant. In a renormalized theory the correlators (\ref{gen_corr}) are distributions. They form a basic set of local physical quantities defined in a given QFT. In a renormalizable QFT a change of scale can be compensated by changing the couplings $\lambda^{i}$ according to (\ref{beta_fn}). By the action principle (\ref{ap1}) this implies that $\Theta(x) = \beta^{i}\phi_{i}(x)$ where $\beta^{i}$ are the beta functions. This equation should be understood as an operator equation, that is, as an equation that holds inside correlation functions (\ref{gen_corr}) up to contact terms (i.e. up to distributions supported on subsets of measure zero). The use of sources $\lambda^{i}(x)$ and a non-constant Weyl factor $\mu(x)$ facilitates bookkeeping of the contact terms. In the presence of non-constant $\lambda^{i}(x)$ and $\mu(x)$ one can expand the difference $\Theta(x) - \beta^{i}(\lambda(x))\phi_{i}(x)$ in terms of derivatives of the sources and metric \cite{Osborn}. The expansion must be covariant with respect to changes of coordinates. This requirement ensures that the contact terms respect the conservation of the stress-energy tensor. In \cite{Osborn} H. 
Osborn assumed that this expansion has the form \begin{equation}\label{OsbornD} \Theta(x)- \beta^{i}\phi_{i}(x) = \frac{1}{2}\mu^{2}R_{2}(x)C(\lambda) + \partial^{\mu}[W_{i}(\lambda)\partial_{\mu}\lambda^{i}] + \frac{1}{2}\partial_{\mu}\lambda^{i}\partial^{\mu}\lambda^{j}G_{ij}(\lambda) \end{equation} where \begin{equation} \mu^{2}R_{2}(x) = -2\partial_{\mu}\partial^{\mu}\ln \mu(x) \end{equation} is the two-dimensional curvature density. Note that in (\ref{OsbornD}) $C$, $W_{i}$ and $G_{ij}$ are functions of $\lambda$ evaluated on $\lambda^{i}(x)$ that depend on $x$ via $\lambda^{i}(x)$ only. Effectively equation (\ref{OsbornD}) gives a local version of renormalization group equation. Using the Wess-Zumino consistency conditions for the local renormalization group transformations (\ref{OsbornD}) H. Osborn derived a gradient formula \cite{Osborn} \begin{equation}\label{Osborn_grad} \partial_{i}c + g_{ij}\beta^{j}+ b_{ij}\beta^{j} = 0 \end{equation} where $c$ and $g_{ij}$ are the Zamolodchikov $c$-function and metric \cite{Zam} defined in terms of two-point functions as \begin{equation} c = 4\pi^{2}\left ( x^{\mu}x^{\nu} x^{\alpha}x^{\beta} - x^{2}g^{\mu\nu} x^{\alpha}x^{\beta} - \frac{1}{2}x^{2} x^{\mu}g^{\nu\alpha}x^{\beta} \right ) {\expvalc{T_{\mu\nu}(x)\,T_{\alpha\beta}(0)}}_{\big /\Lambda|x|=1} \label{eq:corig} \end{equation} \begin{equation} g_{ij} = 6\pi^{2} \Lambda^{-4} \,{\expvalc{\phi_{i}(x)\,\phi_{j}(0)}}_{\big / \Lambda |x|=1} \label{eq:gijorig} \end{equation} where $\Lambda^{-1}$ is a fixed arbitrary 2-d distance. The tensor $b_{ij}$ is an antisymmetric two-form that can be expressed as \begin{equation}\label{eq:bijorig} b_{ij}=\partial_{i}w_{j} - \partial_{j}w_{i}\, , \quad w_{i} = 3\pi \int\!\! d^{2}x\,x^{2}\theta(1-\Lambda|x|) \langle \phi_{i}(x)\Theta(0)\rangle_{c} \end{equation} where $\Lambda$ is the same mass scale used in the definition of $c$ and $g_{ij}$. 
The most restrictive assumption in \cite{Osborn} appears to be the form of expansion (\ref{OsbornD}). The fact that the expansion does not go beyond the second order in derivatives suggests a certain power counting principle. Such a principle could be provided in the vicinity of an ultraviolet fixed point by the standard power counting arguments for renormalizability. Even with such a counting principle the expansion (\ref{OsbornD}) is too restrictive. Thus it omits terms of the form $\partial_{\mu} \lambda^{i}J^{\mu}_{i}(x)$ where $J^{\mu}_{i}(x)$ are local vector fields which can be prescribed engineering dimension 1. Such terms in the scale anomaly can be generated by near marginal perturbations near fixed points. In particular they are present in generic current-current perturbations of Wess-Zumino-Witten theories \cite{FKinprep}. Another class of theories for which (\ref{OsbornD}) is too restrictive is general nonlinear sigma models. In this case one needs to allow the quantities $C$, $W_{i}$ and $G_{ij}$ in (\ref{OsbornD}) to have a non-trivial operator content. The case of sigma models was covered separately in \cite{Osborn} (see also \cite{Tseytlin1}, \cite{Tseytlin2}, \cite{Tseytlin_c}, \cite{OsbornNLSM1}, \cite{OsbornNLSM2} and references therein). It was shown that a gradient formula analogous to (\ref{Osborn_grad}) can be derived provided a sigma model integration measure with certain properties exists. In the present paper we will go beyond Osborn's UV assumptions allowing for an arbitrary local covariant expansion with operator-valued coefficients replacing (\ref{OsbornD}). Making instead assumptions about the infrared behaviour we derive a general formula \begin{equation} \label{new_gf} \partial_{i}c + (g_{ij}+\Delta g_{ij}) \beta^{j}+b_{ij}\beta^{j} = 0 \, . 
\end{equation} The metric correction $\Delta g_{ij}$ is constructed via two point functions of $\phi_{i}$ with the currents $J_{j}^{\mu}(x)$ arising from the expansion generalizing expansion (\ref{OsbornD}) (see formulas (\ref{eq:gLij}), (\ref{eq:deltagij})). Alternatively $\Delta g_{ij}$ can be expressed via 3 point functions with the pure-contact field $D(x) = \Theta(x) -\beta (x)$ (formula (\ref{eq:deltagijreg})). Formula (\ref{new_gf}) is derived under two separate assumptions on the infrared behaviour. The first assumption is that the action principle (\ref{ap1}) holds for one and two point functions of the operators $\phi_{i}$, which assumes that these functions are at least once differentiable. This ensures in particular that the $c$-function is once differentiable. The second assumption is that for any vector field $J_{\mu}(x)$ we have \begin{equation}\label{IR1} \lim_{|x|\to \infty} |x|^{3}\langle J_{\mu}(x)T_{\alpha\beta}\rangle_{c}=0 \,. \end{equation} This condition is equivalent to requiring that the long distance limit of the QFT does not exhibit spontaneously broken global conformal symmetry. (Recall that at fixed points special conformal symmetry requires $T(z)$ to decay at infinity as $|z|^{-4}$.) As a simple example in section \ref{IRexample} demonstrates, this condition is essential. If in a scale invariant theory the global conformal symmetry is broken via boundary conditions at infinity, the value of the central charge may vary with moduli. Our considerations include the nonlinear sigma model case. We thus show that in order to have a gradient formula we may replace the somewhat obscure technical assumption on the measure given in \cite{OsbornNLSM2} by a more conceptually clear assumption on the stress-energy tensor behaviour (\ref{IR1}), which we show to be a necessary assumption in section \ref{IRexample}. A question remains, of course, how one can check whether our infrared conditions hold in any given theory. 
Since in the nonlinear sigma model the expectation values of diffeomorphism invariant local operators are believed to be free of perturbative infrared divergences, they must be analytic in the couplings (\cite{Elitzur}, \cite{David}). This means that the first infrared assumption can be controlled in perturbation theory. It is less clear to us whether one can control the infrared behaviour of $T_{\mu\nu}$ perturbatively. We are planning to discuss applications of our general result (\ref{new_gf}) to nonlinear sigma models in more detail in a separate paper \cite{FKinprep}. \section{Zamolodchikov's formula}\label{Zam_form}\label{sect:cgijformulas} \setcounter{equation}{0} Zamolodchikov proved in \cite{Zam} the following formula \begin{equation} \mu\frac{\partial c}{\partial{\mu}} = -\beta^{i}g_{ij}\beta^{j} \label{eq:zam} \end{equation} where $\mu$ is the RG scale, $c$ is the $c$-function (\ref{eq:corig}) and $g_{ij}$ is the metric introduced in (\ref{eq:gijorig}). This formula implies that $c$ decreases under the renormalization group flow and is stationary exactly at the fixed points. $c$ is normalized so that at fixed points its value coincides with the value of the Virasoro central charge. Note that the $c$-function and the metric $g_{ij}$ depend on $\Lambda$ only through the dimensionless ratio $\Lambda/\mu$, because according to (\ref{Tresp}) and (\ref{ap1}) the fields $T_{\mu\nu}(x)$ and $\phi_{i}(x)$ are densities in $x$, implying that their 2-point functions take the form \begin{eqnarray} \label{eq:muscale} \expvalc{T_{\mu\nu}(x)\,T_{\alpha\beta}(0)} &=& \mu^{4} F_{\mu\nu\alpha\beta}(\mu x)\, , \nonumber \\ \expvalc{\phi_{i}(x)\,\phi_{j}(0)} &=& \mu^{4} F_{ij}(\mu x)\, , \nonumber \\ \expvalc{T_{\mu\nu}(x)\,\phi_{i}(0)} &=& \mu^{4} F_{\mu\nu, i}(\mu x) \,. \end{eqnarray} Before we set out to prove the general gradient formula it is instructive to go over a proof of formula (\ref{eq:zam}). 
One way to prove equation (\ref{eq:zam}) is to derive alternative formulas for $c$ and $g_{ij}$ \begin{eqnarray} c&=& - \int\!\! d^{2}x \,\ G_{\Lambda}(x) \,\expvalc{\Theta(x)\,\Theta(0)} \label{eq:c}\\ g_{ij}&=& -\Lambda\partialby\Lambda \int\!\! d^{2}x \, G_{\Lambda}(x) \, \expvalc{\phi_{i}(x)\,\phi_{j}(0)} \label{eq:gij} \end{eqnarray} where \begin{equation} G_{\Lambda}(x) = 3\pi x^{2} \theta(1-\Lambda |x|) \,. \end{equation} These are the formulas for $c$ and $g_{ij}$ that we will use in the proof of the gradient formula. Equation (\ref{eq:zam}) follows immediately from formulas (\ref{eq:c}) and (\ref{eq:gij}): \begin{eqnarray} \mu\frac{\partial c}{\partial \mu} = -\Lambda\frac{\partial c}{ \partial \Lambda} &=& \Lambda\partialby\Lambda \int\!\! d^{2}x \,\ G_{\Lambda}(x) \,\expvalc{\Theta(x)\,\Theta(0)} \nonumber \\ &=& \int\!\! d^{2}x \,\ \Lambda\frac{\partial G_{\Lambda}(x) }{\partial \Lambda} \,\expvalc{\beta^{i}\phi_{i}(x)\,\beta^{j}\phi_{j}(0)} \nonumber \\ &=& - \beta^{i} g_{ij} \beta^{j} \,. \end{eqnarray} Replacing $\expvalc{\Theta(x)\,\Theta(0)}$ by $\expvalc{\beta^{i}\phi_{i}(x)\,\beta^{j}\phi_{j}(0)}$ in the second line is allowed because they differ only by a contact term in $x$, which gives no contribution since the smearing function $\Lambda\partial G_{\Lambda}(x)/\partial \Lambda $ is supported away from $x=0$. While formula (\ref{eq:gij}) is evidently equivalent to formula (\ref{eq:gijorig}) the equivalence of formulas (\ref{eq:corig}) and (\ref{eq:c}) for $c$ is shown as follows. 
Combine the special identity in two space-time dimensions \begin{equation} \left ( x^{2}g^{\mu\nu}g^{\alpha\beta} -g^{\mu\nu}x^{\alpha}x^{\beta} -x^{\mu}x^{\nu}g^{\alpha\beta} + 2 g^{\mu\alpha}x^{\nu}x^{\beta} - x^{2}g^{\mu\alpha}g^{\nu\beta} \right ) \expvalc{T_{\mu\nu}(x)T_{\alpha\beta}(0)} = 0 \end{equation} with the Ward identity \begin{equation} \partial^{\mu}\expvalc{T_{\mu\nu}(x)T_{\alpha\beta}(0)} = 0 \end{equation} and CPT invariance \begin{equation} \expvalc{T_{\mu\nu}(x)T_{\alpha\beta}(0)} = \expvalc{T_{\mu\nu}(-x)T_{\alpha\beta}(0)} =\expvalc{T_{\alpha\beta}(x)T_{\mu\nu}(0)} \end{equation} to calculate \begin{equation} \label{int_rel} \partial^{\mu} \left [ \left ( 2 x^{\nu}x^{\alpha}x^{\beta} -2 x^{2} x^{\nu } g^{\alpha\beta} - x^{2} g^{\nu\alpha} x^{\beta} \right ) \expvalc{T_{\mu\nu}(x)T_{\alpha\beta}(0)} \right ] = -3 x^{2} \expvalc{\Theta (x) \Theta(0)} \, . \end{equation} It follows from (\ref{int_rel}) that \begin{align} - \int d^{2}x \,&\ G_{\Lambda}(x) \,\expvalc{\Theta(x)\,\Theta(0)}\nonumber \\ &= \pi \int d^{2}x \,\theta(1-\Lambda|x|) \,\partial^{\mu} \left [ \left ( 2 x^{\nu}x^{\alpha}x^{\beta} -2 x^{2} x^{\nu } g^{\alpha\beta} - x^{2} g^{\nu\alpha} x^{\beta} \right ) \expvalc{T_{\mu\nu}(x)T_{\alpha\beta}(0)} \right ] \nonumber \\ &= \pi \int d^{2}x \,\delta(1-\Lambda|x|) |x|^{-2} x^{\mu} \left ( 2 x^{\nu}x^{\alpha}x^{\beta} -2 x^{2} x^{\nu } g^{\alpha\beta} - x^{2} g^{\nu\alpha} x^{\beta} \right ) \expvalc{T_{\mu\nu}(x)T_{\alpha\beta}(0)} \nonumber \\ &=2\pi^{2} \left ( 2 x^{\mu}x^{\nu}x^{\alpha}x^{\beta} - x^{2} x^{\mu}x^{\nu } g^{\alpha\beta} - x^{2} g^{\mu\nu } x^{\alpha}x^{\beta } - x^{2} x^{\mu}g^{\nu\alpha} x^{\beta} \right ) {\expvalc{T_{\mu\nu}(x)T_{\alpha\beta}(0)}} _{\big / \Lambda |x|=1} \end{align} which demonstrates the equivalence of (\ref{eq:corig}) and (\ref{eq:c}). 
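The special identity used at the start of this derivation is dimension-dependent: contracted with any tensor having the symmetries of $\expvalc{T_{\mu\nu}(x)T_{\alpha\beta}(0)}$ (symmetry in $\mu\nu$, in $\alpha\beta$, and under exchange of the two index pairs), the bracketed structure vanishes identically in two dimensions. This can be verified by brute force over the $2^4$ index assignments; a small sketch in Python with exact integer arithmetic and Euclidean metric:

```python
import itertools

g = [[1, 0], [0, 1]]          # Euclidean 2-d metric
x = [3, -2]                   # a generic point with integer components
x2 = x[0]**2 + x[1]**2

def P(m, n, a, b):
    # tensor structure multiplying <T_{mn}(x) T_{ab}(0)> in the special identity
    return (x2*g[m][n]*g[a][b] - g[m][n]*x[a]*x[b] - x[m]*x[n]*g[a][b]
            + 2*g[m][a]*x[n]*x[b] - x2*g[m][a]*g[n][b])

def sym_P(m, n, a, b):
    # symmetrize over m<->n, a<->b and exchange of the index pairs,
    # i.e. over the symmetries of the stress-tensor 2-point function
    total = 0
    for mm, nn in [(m, n), (n, m)]:
        for aa, bb in [(a, b), (b, a)]:
            total += P(mm, nn, aa, bb) + P(aa, bb, mm, nn)
    return total

# the symmetrized structure vanishes identically in d = 2
assert all(sym_P(*idx) == 0 for idx in itertools.product(range(2), repeat=4))
```

Since only these symmetries of the 2-point function enter, the contraction vanishes for any such correlator, which is the content of the special identity.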
\section{The proof of the gradient formula (first steps)} \setcounter{equation}{0} We start by defining a 1-form $r_{i}$ by the equation \begin{equation} \partial_{i}c + g_{ij}\beta^{j} +b_{ij}\beta^{j} +r_{i} = 0 \end{equation} and show that the remainder term $r_{i}$ can be expressed in terms of correlation functions of $\Theta(x)$ and $\phi_{i}(x)$ with the pure-contact field $D(x) = \Theta(x) -\beta(x)$. Infrared behaviour of the correlation functions will be an important issue, so we introduce an IR cutoff at $|x|=L\gg \Lambda^{-1}$ and keep track of the error terms. Our assumptions about IR behaviour will be designed to ensure the vanishing of the IR error in the limit $L\rightarrow \infty$. We start out by recasting $g_{ij}\beta^{j}$ as \begin{equation} g_{ij}\beta^{j} = 6\pi^{2} \Lambda^{-4}\langle \phi_{i}(x)\phi_{j}(0)\beta^{j}\rangle_{\big / \Lambda |x|=1} = 6\pi^{2}\Lambda^{-4}\langle \phi_{i}(x)\Theta(0)\rangle_{\big / \Lambda |x|=1} \end{equation} which is valid because $\beta^{j}\phi_{j}(0)$ differs from $\Theta(0)$ only by contact terms. This can be further rewritten as \begin{eqnarray} g_{ij}\beta^{j} = -\Lambda\frac{\partial}{\partial \Lambda} \int\!\!d^{2}x\, G_{\Lambda}(x)\langle \phi_{i}(x) \Theta(0)\rangle_{c} = \mu\frac{\partial}{\partial \mu} \int\!\!d^{2}x\, G_{\Lambda}(x)\langle \phi_{i}(x) \Theta(0)\rangle_{c} \end{eqnarray} where the scaling property (\ref{eq:muscale}) was used in the last step. Finally using (\ref{eq:scale}) we obtain \begin{equation} \label{formula1} g_{ij}\beta^{j} = \int d^{2}y\int d^{2}x \; G_{\Lambda}(x) \, \expvalc{\Theta(y)\,\phi_{i}(x)\,\Theta(0)} \, .
\end{equation} Formula (\ref{formula1}) is infrared safe but as we want to impose the IR cutoff systematically, we write instead \begin{equation} g_{ij}\beta^{j} + E_{1} = \int_{|y|<L} d^{2}y \int d^{2}x \; G_{\Lambda}(x) \, \expvalc{\Theta(y)\,\phi_{i}(x)\,\Theta(0)} \label{eq:gijbetaj} \end{equation} The Ward identity gives the error term \begin{eqnarray} E_{1} &=& \int_{|y|<L} d^{2}y \;\partial^{\mu} \left [ y^{\nu} \int d^{2}x \; G_{\Lambda}(x) \, \expvalc{T_{\mu\nu}(y)\,\phi_{i}(x)\,\Theta(0)}\right ] \nonumber \\ &=& 2\pi\,y^{\mu}y^{\nu} \int d^{2}x \; G_{\Lambda}(x) \,{\expvalc{T_{\mu\nu}(y)\,\phi_{i}(x)\,\Theta(0)}}_{\big /|y|=L} \label{eq:E1} \end{eqnarray} which certainly vanishes in the limit $L\rightarrow\infty$. We next turn our attention to the derivative $\partial_{i}c$. Assuming that $c$ can be differentiated with respect to the coupling constants $\lambda^{i}$, we can write using formula (\ref{eq:c}) for $c$ and the action principle (\ref{ap1}) \begin{equation} \partial_{i} c = - \int d^{2}y \, \int d^{2}x \,\ G_{\Lambda}(x) \,\expvalc{\phi_{i}(y) \,\Theta(x)\,\Theta(0)} \,. \end{equation} Again, we regularize in the IR as \begin{equation} \partial^{L}_{i} c = - \int_{|y|<L} d^{2}y \, \int d^{2}x \,\ G_{\Lambda}(x) \,\expvalc{\phi_{i}(y) \,\Theta(x)\,\Theta(0)} \label{eq:ci} \, . 
\end{equation} Formulas (\ref{eq:gijbetaj}) and (\ref{eq:ci}) can be combined to get \begin{eqnarray} &&\partial^{L}_{i} c + g_{ij}\beta^{j} +E_{1} = \int_{|y|<L} d^{2}y \int d^{2}x \, G_{\Lambda}(x) \, \expvalc{\Theta(y)\,\phi_{i}(x)\,\Theta(0)-\phi_{i}(y) \,\Theta(x) \,\Theta(0)} \nonumber \\ &&= \int_{|y|<L} d^{2}y \int d^{2}x \, G_{\Lambda}(x) \, \expvalc{ \left [ \beta(y)+D(y)\right ]\,\phi_{i}(x)\,\Theta(0) -\phi_{i}(y) \,\left [\beta(x)+D(x)\right ] \,\Theta(0)} \nonumber \\ &&= -b^{L}_{ij}\beta^{j} + \int_{|y|<L} d^{2}y \int d^{2}x \, G_{\Lambda}(x) \, \expvalc{ D(y)\,\phi_{i}(x)\,\Theta(0) -\phi_{i}(y) \,D(x) \,\Theta(0)} \label{eq:riderived} \end{eqnarray} where we have introduced the 2-form $b^{L}_{ij}$ \begin{equation} b^{L}_{ij} = \int_{|y|<L} d^{2}y \int d^{2}x \, G_{\Lambda}(x) \,\expvalc{\phi_{i}(y) \,\phi_{j}(x) \,\Theta(0) -\phi_{j}(y) \,\phi_{i}(x) \,\Theta(0)} \,. \label{eq:bLij} \end{equation} Equation (\ref{eq:riderived}) can be written as \begin{equation} \partial^{L}_{i}c + g_{ij}\beta^{j} +E_{1}+ b^{L}_{ij}\beta^{j} +r^{L}_{i} = 0 \label{eq:gradformula1} \end{equation} with \begin{equation} r^{L}_{i} = \int_{|y|<L} d^{2}y \int d^{2}x \, G_{\Lambda}(x) \, \expvalc{\phi_{i}(y) \,D(x) \,\Theta(0) - D(y)\,\phi_{i}(x)\,\Theta(0)} \label{eq:ri} \end{equation} Equations (\ref{eq:gradformula1}), (\ref{eq:ri}) are the main results of this section. We will later show that under our assumptions on the infrared behaviour the limits \begin{equation} \partial_{i}c=\lim_{L\to \infty}\partial_{i}^{L}c \, , \qquad b_{ij} = \lim_{L\to \infty}b_{ij}^{L} \end{equation} exist. The error term $E_{1}$ goes to zero as $L\to \infty$. The remainder term $r_{i}^{L}$ is expressed via correlation functions involving the pure-contact field $D(x)$. In order to investigate this term we develop a sources and operations formalism for calculating correlation functions of $D(x)$. 
\section{Sources and operations} \setcounter{equation}{0} In this section we present a general formalism that allows computing correlation functions of pure contact fields using functional differential operators acting on functionals of sources and metric. The general exposition is somewhat tedious, so for the reader's convenience we present the most important ingredients necessary to understand the proof of the gradient formula in a separate subsection \ref{subsec:summary}. \subsection{General formalism} So far we have introduced the fields $\phi_{i}(x)$ as operators conjugate to the coupling constants $\lambda^{i}$ that parameterize a renormalizable 2D QFT. It will be convenient to assume that the set $\{\phi_{i}\}$ is complete in a given class of fields which we denote by ${\cal F}$. The class of fields can be a complete set of spin-0 relevant and near-marginal fields. We could define such fields without reference to a particular fixed point by requiring that the corresponding coupling constant belongs to some family of renormalizable theories with finitely many couplings (there are finitely many couplings for which $\beta^{i}$ is not identically zero). This will not work for the nonlinear sigma models, for which the set of couplings is infinite, but in that case we could talk about near-relevant and near-marginal couplings using the engineering scaling dimensions introduced via free fields. As yet another possibility we could assume that the set $\{\phi_{i}\}$ spans all spin-0 local fields and work with a Wilsonian RG. We will keep the class of fields ${\cal F}$ unspecified throughout this section, assuming only that ${\cal F}$ is closed under the RG, in a sense that we will make precise below. In general a field $O(x)$ is defined via its distributional correlation functions with other fields.
If $O(x)\in {\cal F}$, the completeness of $\{\phi_{i}\}$ means that there are unique coefficients $O^{i}$ such that the field $O(x) - O^{i}\phi_{i}(x)$ has vanishing correlation functions with all fields from ${\cal F}$ inserted away from $x$. The field $O(x) - O^{i}\phi_{i}(x)$ is thus a pure contact field, that is, its correlation functions are distributions supported on a subset of measure zero in $x$. We can define {\it ordinary} fields $O(x)$ as fields for which the correlations of $O(x) - O^{i}\phi_{i}(x)$ are zero as distributions. This means that the distributional correlation functions of such fields are obtained from those of the fields $\phi_{i}(x)$ by contracting them with the appropriate coefficients $O^{i}$. Whatever ${\cal F}$ we choose, it is essential that the trace of the stress-energy tensor can be expanded in these fields: $\Theta(x) = \beta^{i}(\lambda)\phi_{i}(x)$. It is worth noting that the set $\{\phi_{i}\}$ may include total derivative fields. Although the correlation functions are independent of the corresponding coupling constants, the beta functions may be non-trivial and total derivatives may thus contribute to $\Theta(x)$. Let us further introduce sources $\lambda^{i}(x)$ for all fields $\phi_{i}(x)$ so that the generating functional $\ln Z$ depends on these sources and the metric scale factor $\mu(x)$ with equations (\ref{scale_tr}) and (\ref{ap2}) satisfied. This means that $\phi_{i}(x)$ and $\Theta(x)$ are represented by functional derivatives \begin{equation}\label{func_der} \phi_{i}(x) = \frac{\delta }{\delta \lambda^{i}(x)} \, , \qquad \Theta(x) = \mu(x) \frac{\delta}{\delta \mu(x)} \end{equation} which we choose to denote by the same symbols. The action of these functional derivatives on $\ln Z$ generates distributional correlation functions (\ref{gen_corr}).
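The mechanics of sources and conjugate fields can be illustrated in a zero-dimensional toy "theory", where the generating functional is an ordinary integral and differentiating $\ln Z$ with respect to a coupling inserts the operator $-\partial S/\partial\lambda$. This is only an illustration of the action-principle logic, not part of the construction; the sketch assumes \texttt{numpy}:

```python
import numpy as np

# zero-dimensional toy: Z(lam) = integral dx exp(-x^2/2 - lam*x^4/4),
# so the "field" conjugate to the coupling lam is phi = -x^4/4
xs, dx = np.linspace(-8.0, 8.0, 400001, retstep=True)

def lnZ(lam):
    return np.log(np.sum(np.exp(-xs**2/2 - lam*xs**4/4))*dx)

lam0, eps = 0.3, 1e-5
numeric = (lnZ(lam0 + eps) - lnZ(lam0 - eps))/(2*eps)   # d(lnZ)/d(lam)

w = np.exp(-xs**2/2 - lam0*xs**4/4)
phi_expect = np.sum(-xs**4/4*w)/np.sum(w)               # <phi> at lam = lam0

assert abs(numeric - phi_expect) < 1e-6
```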
To facilitate the use of differential operators in computing correlation functions we introduce a shorthand notation \begin{eqnarray} \rangle\rangle &=& \ln Z \\ \langle\langle &=& \mbox{restriction of functionals to constant sources and flat 2-d metric} \end{eqnarray} so \begin{equation} \opval{\phi_{i_{1}}(x_{1})\cdots \Theta(y_{1})\cdots } = \expvalc{\phi_{i_{1}}(x_{1})\cdots \Theta(y_{1})\cdots} \end{equation} where, on the left hand side, the $\phi_{i}(x)$ and $\Theta(x)$ are functional differential operators (\ref{func_der}), while on the right hand side they are fields. Define {\it operations} ${\cal O}(x)$ to be first order local differential operators acting on functionals of the sources and 2-d metric. The word local here means that the coefficients of the functional derivatives in an operation given at $x$ can depend only on the values of $\lambda(x)$, $\mu(x)$ and finitely many derivatives thereof. An ordinary field $O(x)=O^{i}(\lambda)\phi_{i}(x)$ is naturally assigned an operation ${\cal O}(x)=O^{i}(\lambda(x))\phi_{i}$. Operations of this form we will call ordinary. An arbitrary operation $\mathcal{O} (x)$ gives rise to an ordinary field denoted $\ord{O}(x)$ via \begin{equation}\label{eq:ordinaryf} \expvalc{\ord{O}(x) \,\phi_{i_{1}}(x_{1})\cdots \Theta(y_{1})\cdots } = \opval{\mathcal{O}(x)\, \phi_{i_{1}}(x_{1})\cdots \Theta(y_{1})\cdots }\, . \end{equation} Although the above formula specifies distributional correlation functions containing only a single $\ord{O}(x)$ it defines uniquely the coefficients ${O}^{i}$ in $\ord{O}(x)={O}^{i}\phi_{i}(x)$ and thus in principle fixes the correlation functions containing arbitrarily many $\ord{O}(x)$. The ordinary operation ${O}^{i}\phi_{i}(x)$ corresponding to $\ord{O}(x)$ will be denoted by the same symbol $\ord{O}(x)$. 
Define pure-contact operations ${\cal O}(x)$ as operations satisfying $\ord{O}(x) = 0$, i.e., \begin{equation} \langle\langle\, \mathcal{O} (x) = 0\, \end{equation} Then \begin{eqnarray}\label{eq:purecontactcorr} \opval{\phi_{i_{1}}(x_{1})\cdots \Theta(y_{1})\cdots {\cal O}(0)} &=& \opval{\lbrack \phi_{i_{1}}(x_{1}),\,\mathcal{O} (0)\rbrack \cdots \Theta(y_{1})\cdots } + \cdots \nonumber \\&&\qquad{} + \opval{ \phi_{i_{1}}(x_{1})\cdots \lbrack \Theta(y_{1}),\,\mathcal{O} (0)\rbrack \cdots} + \cdots \end{eqnarray} is a sum of contact terms. We would now like to construct an operation for a given operator that can be used in computing its correlation functions from $\ln Z$. Since we know how to do this for ordinary operators, it suffices to solve this problem for a pure contact field. Let $O(x)\in {\cal F}$ be a pure contact field that does not explicitly depend on $\lambda^{i}$, that is, $[\partial_{i},O(x)]=0$. Then we can construct a pure contact operation $ {\tilde O}(x)$ by requiring \begin{equation}\label{pc1} \opval{\phi_{i_{1}}(x_{1})\cdots \Theta(y_{1})\cdots {\tilde O}(x)} = \langle \phi_{i_{1}}(x_{1})\cdots \Theta(y_{1})\cdots O(x)\rangle_{c} \, . \end{equation} This essentially fixes ${\tilde O}(x)$ because in physical correlators singularities appear only when some of the insertions coincide. The only ambiguity in ${\tilde O}(x)$ is operations annihilating $\ln Z$. Any choice, however, suffices for practical purposes. With this definition, given an arbitrary operator $A(x)\in {\cal F}$, its correlators with the fundamental fields $\phi_{i_{k}}(x_{k})$, $\Theta(y_{l})$ can be computed using the ordinary operation $\ord{A}(x)=A^{i}\phi_{i}$ and the contact operation \begin{equation}\label{calA} {\cal A}(x)\equiv \widetilde{[A-\ord{A}]}(x) \end{equation} according to \begin{equation}\label{pc2} \langle \phi_{i_{1}}(x_{1})\cdots \Theta(y_{1})\cdots A(x)\rangle_{c} = \opval{\phi_{i_{1}}(x_{1})\cdots \Theta(y_{1})\cdots [\ord{A}(x) + {\cal A}(x)]} \, .
\end{equation} In the above correlation function the contact terms proportional to $\delta(x-x_{i_{k}})$ are essentially fixed by the action principle (\ref{ap1}). The extra contributions arising from the explicit dependence of the coefficients $A^{i}$ on $\lambda^{j}$'s are accounted for by commuting the operation $ \ord{A}(x)$ to the left. Similarly, the contact terms proportional to $\delta(x-y_{i_{k}})$ are fixed by the change of scale equation (\ref{eq:scale}). All contact term contributions proportional to derivatives of delta functions are obtained by commuting the pure contact operation ${\cal A}(x)$ to the left until it annihilates $\langle\langle$. Consider now the operator $\Theta(x)$. The assumption made before, that $\Theta(x) = \beta^{i}\phi_{i}\equiv \beta(x)$ holds in the operator sense, means that $\underline{\Theta}(x)=\beta(x)$ and that the field $D(x)=\Theta(x)-\beta(x)$ is pure contact. As $\Theta(x)$ does not explicitly depend on $\lambda^{i}$ ($\mu\partial/\partial\mu$ and $\partial/\partial \lambda^i$ commute) we can define a pure contact operation $\mathcal{D}(x)$ in accordance with the general rule (\ref{calA}), (\ref{pc2}). The field $\Theta(x)$ is special in that it is represented by a variational derivative (\ref{func_der}). This implies that \begin{equation}\label{ren_oper} \opval{\phi_{i_{1}}(x_{1})\cdots \Theta(y_{1})\cdots [\Theta(x) - \beta(x) - \mathcal{D}(x)]}=0 \end{equation} which can be written more succinctly as a first order functional differential equation on the generating functional \begin{equation}\label{eq:RGopgen} [\Theta(x) - \beta(x) - \mathcal{D}(x)] \ln Z = 0 \, .
\end{equation} Knowing the pure contact operation $\mathcal{D}(x)$ the correlation functions of $D(x)$ with any number of $\phi_{i}(x)$ and $\Theta(x)$ can be calculated as \begin{eqnarray}\label{Dcorrs} && \expvalc{D(x) \,\phi_{i_{1}}(x_{1}) \dots \Theta(y_{1})\dots} = \opval{(\Theta(x)-\beta(x)) \,\phi_{i_{1}}(x_{1}) \dots \Theta(y_{1})\cdots } \nonumber \\ &&= \opval{[(\Theta(x)-\beta(x)),\,\phi_{i_{1}}(x_{1}) \dots \Theta(y_{1})\dots] } +\opval{ \phi_{i_{1}}(x_{1}) \dots \Theta(y_{1})\dots\mathcal{D}(x)} \nonumber \\ &&= \opval{ [\phi_{i_{1}}(x_{1}) \dots \Theta(y_{1})\dots,\, (\mathcal{D}(x)+\beta(x)-\Theta(x))]} \nonumber \\ &&= \opval{ [\phi_{i_{1}}(x_{1}) ,\, \mathcal{D}(x)] \dots \Theta(y_{1})\dots} + \dots + \opval{ \phi_{i_{1}}(x_{1})\dots [\Theta(y_{1}),\mathcal{D}(x)]\dots } + \dots \nonumber \\ &&+ \partial_{i_{1}}\beta^{i} \delta (x-x_{1}) \expvalc{ \phi_{i}(x_{1}) \dots \Theta(y_{1})\cdots} + \dots \end{eqnarray} where equation (\ref{ren_oper}) was used on the second line, $\langle\langle \mathcal{D}(x)=0$ was used on the third line and \begin{equation} \lbrack\phi_{i_{1}}(x_{1}),\, \beta(x) ] = \delta(x-x_{1})\partial_{i_{1}}\beta^{i} \phi_{i}(x_{1}) \,. \end{equation} was used on the last line. The form of $\mathcal{D}(x)$ is constrained by 2-d covariance and locality. In general it can be written as an expansion in derivatives of the sources $\lambda^{i}$ and covariant derivatives of the curvature with coefficients being ordinary operations. It is interesting to consider additional restrictions on $\mathcal{D}(x)$ from power counting rules. We will distinguish two such rules which we call a {\it loose power counting} and a {\it strict power counting}. In both cases the expansion of $\mathcal{D}(x)$ goes only up to two derivatives in the sources and metric. In the loose power counting rule the coefficients can have a nontrivial operator content. 
Explicitly in this case we can write \begin{equation} \mathcal{D}(x) = \frac12 \mu^{2} R_{2}(x) C(x) + \partial_{\mu}\lambda^{i}(x) J^{\mu}_{i}(x) +\partial^{\mu}\left [ W_{i}(x)\partial_{\mu}\lambda^{i}\right ] +\frac12 \partial_{\mu}\lambda^{i}\partial^{\mu}\lambda^{j} G_{ij}(x) \label{eq:Doploose} \end{equation} where $C(x)$, $W_{i}(x)$, $G_{ij}(x)$ are ordinary spin-0 fields, and $J^{\mu}_{i}(x)$ is an ordinary spin-1 field, and where the 2-d curvature is given by $$ \mu^{2}R_{2}(x) = -2\partial^{\mu}\partial_{\mu} \ln \mu(x)\, . $$ Two comments are in order here. Firstly, notice the appearance of vector fields $J^{\mu}_{i}(x)$ in the expansion. Since we defined operations only for spin-zero fields, accommodating fields and operations of nontrivial spin requires introducing new fundamental fields and new sources for those fields. While used to obtain distributional correlation functions involving operators of nontrivial spin, such sources are always set to zero at the end of a computation. The operation $\mathcal{D}(x)$ does contain terms proportional to the tensorial sources and their derivatives. However, our proof avoids using the explicit form of such terms, and we will not introduce the tensor field sources explicitly, so as not to clutter the computations. Nevertheless operations like $J^{\mu}_{i}(x)$, when they appear, should be understood in this sense. Secondly, notice that in the power counting scheme used, the operators $C(x)$, $W_{i}(x)$, $G_{ij}(x)$ must have dimension near zero. This means that, using the fixed point language, we allow for slightly irrelevant terms to appear in $\mathcal{D}(x)$. This is a common consideration used for general nonlinear sigma models \cite{FriedanNM2}. The loose power counting thus accommodates perturbative nonlinear sigma models.
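The normalization of the curvature term can be checked directly: for the conformally flat 2-d metric $g_{\mu\nu}=\mu(x)^{2}\delta_{\mu\nu}$ the scalar curvature satisfies $\mu^{2}R_{2}=-2\partial^{\mu}\partial_{\mu}\ln\mu$. A sketch of the computation with \texttt{sympy}, using the standard Christoffel and Ricci formulas (nothing specific to the present formalism):

```python
import sympy as sp

xc, yc = sp.symbols('x y')
coords = [xc, yc]
mu = sp.Function('mu')(xc, yc)

# conformally flat 2-d metric g_{ab} = mu^2 delta_{ab}
g = sp.Matrix([[mu**2, 0], [0, mu**2]])
ginv = g.inv()
d = lambda f, a: sp.diff(f, coords[a])

# Christoffel symbols Gamma^a_{bc}
Gam = [[[sum(ginv[a, s]*(d(g[s, b], c) + d(g[s, c], b) - d(g[b, c], s))
             for s in range(2))/2
         for c in range(2)] for b in range(2)] for a in range(2)]

# Ricci tensor R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba}
#                     + Gamma^a_{as} Gamma^s_{bc} - Gamma^a_{cs} Gamma^s_{ba}
def ricci(b, c):
    return sum(d(Gam[a][b][c], a) - d(Gam[a][b][a], c)
               + sum(Gam[a][a][s]*Gam[s][b][c] - Gam[a][c][s]*Gam[s][b][a]
                     for s in range(2))
               for a in range(2))

R = sum(ginv[b, c]*ricci(b, c) for b in range(2) for c in range(2))
lap_log_mu = sp.diff(sp.log(mu), xc, 2) + sp.diff(sp.log(mu), yc, 2)

# mu^2 R_2 = -2 * (flat Laplacian of ln mu)
assert sp.simplify(mu**2*R + 2*lap_log_mu) == 0
```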
If one assumes that the UV behaviour is governed by a unitary fixed point, that the only dimension-zero operator is the identity, and that the total UV dimension of $\mathcal{D}(x)$ is strictly 2, then the operators $C(x)$, $W_{i}(x)$, $G_{ij}(x)$ must all be proportional to the identity operator. We call these restrictions the strict power counting rule. It applies in a vicinity of a unitary fixed point that has a discrete spectrum of conformal dimensions. Under the additional assumption that there are no operators $J^{\mu}_{i}(x)$ appearing in $\mathcal{D}(x)$, the case of the strict power counting was investigated in \cite{Osborn}. Finally, the case when the only restrictions on $\mathcal{D}(x)$ come from the general covariance and locality can be referred to as Wilsonian. We will prove the general gradient formula (\ref{new_gf}) in the Wilsonian case. The proof is simplified if we impose loose power counting. We will discuss in parallel how our steps look in that case. As a last comment in this section note that due to equation (\ref{eq:RGopgen}) the operation $\mathcal{D}(x)$ is subject to Wess-Zumino consistency conditions \begin{equation} [\Theta(x) - \beta(x) - \mathcal{D}(x),\, \Theta(y) - \beta(y) - \mathcal{D}(y)] \ln Z = 0 \end{equation} which will be exploited in sections \ref{sect:DTheta}, \ref{sect:baregrad}. \subsection{Summary} \label{subsec:summary} Operations ${\cal O}(x)$ are local first order differential operators defined on functionals of the sources $\lambda^{i}(x)$ and metric. For the fundamental fields $\phi_{i}(x)$ and the trace of the stress-energy tensor $\Theta(x)$ the corresponding operations are the functional derivatives (\ref{func_der}).
We introduced the notation $\langle\langle {\cal O}_{1}(x_1)\dots {\cal O}_{n}(x_n) \rangle\rangle$ for a sequence of operations ${\cal O}_{i}(x_{i})$ applied to the generating functional $\ln Z \equiv \rangle\rangle$ with the result restricted to constant sources and metric (the restriction is signified by the symbol $\langle\langle\, $). Given an operation ${\cal O}(x)$ one can extract a field from it by restricting it to constant sources and metric (\ref{eq:ordinaryf}). The resulting fields are denoted $\underline{{\cal O}}(x)$ and are called ordinary fields. Such fields have the form $\underline{{\cal O}}(x) = O^{i}\phi_{i}(x)$. A pure contact operation is an operation ${\cal O}(x)$ for which $\underline{{\cal O}}(x)=0$. For ordinary fields the distributional correlation functions are completely fixed by those of the fields $\phi_{i}$. More generally, a given field $A(x)$ equals a linear combination of fundamental fields: $A(x) = A^{i}\phi_{i}(x)$ only up to contact terms. Such contact terms can be stored in a pure contact operation ${\cal A}(x)$ according to (\ref{pc2}). For the trace of the stress-energy tensor $\Theta(x)$ we have $\Theta(x)=\beta^{i}\phi_{i}(x)\equiv \beta(x)$ up to contact terms. The corresponding contact terms are stored in a pure contact operation $\mathcal{D}(x)$. The generating functional satisfies the equation $[\Theta(x) - \beta(x) - \mathcal{D}(x)]\ln Z = 0$ which can be used to compute correlation functions involving the field $D(x) = \Theta(x) - \beta(x)$ according to (\ref{Dcorrs}). The form of $\mathcal{D}(x)$ is constrained by locality and general covariance. It can be further constrained by a power counting principle. We distinguish a strict power counting, which applies to a vicinity of a unitary fixed point with discrete spectrum of conformal dimensions, and a loose power counting that is suitable for describing renormalizable nonlinear sigma models.
For the loose power counting case $\mathcal{D}(x)$ can be explicitly written as in formula (\ref{eq:Doploose}). \section{The Callan-Symanzik equations} \setcounter{equation}{0} In the operations formalism the Callan-Symanzik equations for correlators involving fields $\phi_{i}(x)$ and $\Theta(y)$ can be obtained by integrating equation (\ref{Dcorrs}) over $x$: \begin{eqnarray}\label{CSeq1} && \left ( \mu\partialby\mu - \beta^{i}\partialby{\lambda^{i}} \right ) \expvalc{\phi_{i_{1}}(x_{1})\dots \Theta(y_{1})\dots } =\int\!\! d^{2}x\, \opval{ D(x)\, \phi_{i_{1}}(x_{1})\dots \Theta(y_{1})\dots } \nonumber \\ &&= \partial_{i_{1}}\beta^{i}\expvalc{\phi_{i}(x_{1})\dots \Theta(y_{1})\dots } + \int\!\!d^{2}x\, \opval{ [\phi_{i_{1}}(x_{1}), \mathcal{D}(x)]\dots \Theta(y_{1})} + \dots \nonumber \\ && + \int\!\!d^{2}x\,\opval{ \phi_{i_{1}}(x_{1})\dots [\Theta(y_{1}),\mathcal{D}(x)]\dots } + \dots \end{eqnarray} It is convenient to define the following operations \begin{eqnarray}\label{eq:calDops} \mathcal{D}\phi_{i}(x) &=& \int\!\!d^{2}y\, \lbrack\phi_{i}(x),\, \mathcal{D}(y)]\, , \nonumber \\ \mathcal{D}\Theta(x) &=& \int\!\! d^{2}y\, \lbrack\Theta(x),\, \mathcal{D}(y)] \,. \end{eqnarray} In view of (\ref{CSeq1}) the operations $\mathcal{D}\phi_{i}(x)$ and $\mathcal{D}\Theta(x)$ can be interpreted as extra contributions to the Callan-Symanzik equations. We further notice that \begin{equation}\label{constrs} \int d^{2}x\,\langle\langle\,\mathcal{D} \Theta(x) =0 \, , \qquad \int d^{2}x\,\langle\langle\,\mathcal{D} \phi_{i}(x) = 0\, . \end{equation} This follows from the fact that $\int d^{2}x\,\Theta(x) = \mu\partial/\partial\mu$, $\int d^{2}x\,\phi_{i}(x) =\partial/\partial\lambda^{i}$, and every term in $\mathcal{D}(y)$ is proportional to derivatives of $\lambda^{i}(y)$ and $\mu(y)$.
Equations (\ref{constrs}) imply that there must be ordinary spin-1 fields (and ordinary operations respectively) $J^{\mu}(x)$ and $J_{i}^{\mu}(x)$ such that \begin{equation} \underline{\mathcal{D} \Theta}(x) = - \partial_{\mu} J^{\mu}(x)\, , \end{equation} \begin{equation} \label{Jimu} \underline{\mathcal{D} \phi_{i}}(x) = - \partial_{\mu} J_{i}^{\mu}(x) \,. \end{equation} If we impose loose power counting, so that $\mathcal{D}(x)$ is given by equation (\ref{eq:Doploose}), then \begin{eqnarray} \mathcal{D}\Theta(x) &=& -\partial^{\mu}\partial_{\mu} C(x)\\ \mathcal{D}\phi_{i}(x) &=& -\partial_{\mu}\left [ J_{i}^{\mu}(x) + \partial^{\mu}\lambda^{j} G_{ij}(x) \right ] + \partial_{\mu} \lambda^{j}\partial_{i} J_{j}^{\mu}(x) + \frac{1}{2}\partial_{\mu}\lambda^{j}\partial^{\mu}\lambda^{k}\partial_{i}G_{jk}(x) \end{eqnarray} so \begin{eqnarray} J^{\mu}(x) &=& \partial^{\mu} C(x) \end{eqnarray} while $J_{i}^{\mu}(x)$, defined in (\ref{Jimu}) without any power counting assumptions, coincides under loose power counting with the coefficient $J_{i}^{\mu}(x)$ appearing in the expansion (\ref{eq:Doploose}) of $\mathcal{D}(x)$. In general (without any power counting restrictions), since all terms in $\mathcal{D}(x)$ are proportional to derivatives of the sources and/or to derivatives of $\mu(x)$, there exists a scalar operator $C(x)$ such that $J^{\mu}(x)=\partial^{\mu}C(x)$.
The Callan-Symanzik equations (\ref{CSeq1}) for the correlation functions at non-coincident points (neglecting contact terms) can now be written as \begin{align} \label{eq:CS1} \mu\partialby\mu \expvalc{\phi_{i_{1}}(x_{1})\dots\Theta(y_{1})\dots } &= \beta^{i}\partialby{\lambda^{i}} \expvalc{\phi_{i_{1}}(x_{1})\cdots\Theta(y_{1})\dots } +\expvalc{\Gamma\phi_{i_{1}}(x_{1}) \dots \Theta(y_{1})\dots} + \dots \nonumber \\ &\qquad\qquad{}+\expvalc{\phi_{i_{1}}(x_{1}) \dots [-\partial_{\mu}J^{\mu}(y_{1})]\dots} + \dots \end{align} where \begin{equation} \label{eq:CS2} \Gamma\phi_{i_{1}}(x_{1}) = \partial_{i_{1}}\beta^{i}\phi_{i}(x_{1})-\partial_{\mu}J_{i_{1}}^{\mu}(x_{1}) \,. \end{equation} The terms involving the beta functions can be put into the Lie derivative ${\cal L}_{\beta}$ so that equation (\ref{eq:CS1}) takes a more succinct form \begin{eqnarray} \label{eq:CS3} &&[\mu\partialby\mu - {\cal L}_{\beta}] \expvalc{\phi_{i_{1}}(x_{1})\cdots\Theta(y_{1})\dots } \nonumber \\ && =\expvalc{[-\partial_{\mu}J_{i_{1}}^{\mu}(x_{1})] \dots \Theta(y_{1})\dots} + \dots +\expvalc{\phi_{i_{1}}(x_{1}) \dots [-\partial_{\mu}J^{\mu}(y_{1})]\dots} + \dots \end{eqnarray} \section{The proof continued} \setcounter{equation}{0} We now come back to the proof of the gradient formula, which we left at the end of section 4. We express the remainder term $r^{L}_{i}$ of equation (\ref{eq:ri}) in the source-operation formalism.
The 3-point functions occurring in equation~(\ref{eq:ri}) can be written as \begin{eqnarray} \expvalc{\phi_{i}(y)\,D(x)\,\Theta(0)} &=& \opval{\phi_{i}(y)\, \Theta(0) \, [\Theta(x)-\beta(x)]} +\partial_{i}\beta^{j} \delta^{2}(y-x)\opval{\Theta(0)\,\phi_{j}(x)}\nonumber \\ &=& \opval{\phi_{i}(y)\, \Theta(0)\, \mathcal{D}(x)} +\partial_{i}\beta^{j} \delta^{2}(y-x)\opval{\Theta(0)\,\phi_{j}(x)}\, , \\ \expvalc{D(y)\,\phi_{i}(x)\,\Theta(0)} &=& \opval{\phi_{i}(x)\, \Theta(0) \, \mathcal{D}(y)} +\partial_{i}\beta^{j} \delta^{2}(x-y)\opval{\Theta(0)\,\phi_{j}(y)}\, , \end{eqnarray} so \begin{equation} \expvalc{\phi_{i}(y) \,D(x) \,\Theta(0) - D(y)\,\phi_{i}(x)\,\Theta(0)} = \opval{\phi_{i}(y) \,\Theta(0)\,\mathcal{D}(x) - \phi_{i}(x)\,\Theta(0)\,\mathcal{D}(y)}\, . \end{equation} Substituting the last relation in equation (\ref{eq:ri}) and using $\langle\langle\, \mathcal{D}(x) =0$ gives \begin{eqnarray}\label{riLint} r^{L}_{i} &=& \int_{|y|<L} d^{2}y \int\!\! d^{2}x \, G_{\Lambda}(x) \, \opval{\phi_{i}(y) \,\Theta(0)\,\mathcal{D}(x) - \phi_{i}(x)\,\Theta(0)\,\mathcal{D}(y)} \nonumber\\ &=& \int_{|y|<L} d^{2}y \int\!\! d^{2}x \, G_{\Lambda}(x) \, \opval{\phi_{i}(y) \,\lbrack \Theta(0), \, \mathcal{D}(x)\rbrack +\lbrack\phi_{i}(y), \,\mathcal{D}(x)\rbrack \,\Theta(0)} \nonumber \\ && -\int_{|y|<L} d^{2}y \int\!\! d^{2}x \, G_{\Lambda}(x) \, \opval{\phi_{i}(x) \,\lbrack \Theta(0), \, \mathcal{D}(y)\rbrack +\lbrack\phi_{i}(x), \,\mathcal{D}(y)\rbrack \, \Theta(0)} \, . \end{eqnarray} Note that $\mathcal{D}(x)$ is a pure-contact operation, and $|x|\le \Lambda^{-1}\ll L$, so that \begin{eqnarray} \int_{|y|<L} d^{2}y\; \lbrack \Theta(0), \, \mathcal{D}(y)\rbrack &=& \int\!\! d^{2}y\; \lbrack \Theta(0), \, \mathcal{D}(y)\rbrack = \mathcal{D}\Theta(0)\\ \int_{|y|<L} d^{2}y\; \lbrack\phi_{i}(x),\,\mathcal{D}(y)\rbrack &=& \int\!\!
d^{2}y\; \lbrack\phi_{i}(x),\,\mathcal{D}(y)\rbrack = \mathcal{D}\phi_{i}(x)\\ \int_{|y|<L} d^{2}y\; \langle\langle\, \lbrack\phi_{i}(y), \,\mathcal{D}(x)\rbrack &=& \int\!\! d^{2}y\; \langle\langle\, \lbrack\phi_{i}(y), \,\mathcal{D}(x)\rbrack = \lbrack\partial_{i}, \,\mathcal{D}(x)\rbrack = 0 \,. \end{eqnarray} Using these relations in (\ref{riLint}) we obtain \begin{align} r^{L}_{i} = - \int_{|y|<L} d^{2}y\, &12\pi\opval{\phi_{i}(y) \,C_{2}(0)} - \int d^{2}x \, G_{\Lambda}(x) \, \opval{\phi_{i}(x) \,\mathcal{D}\Theta(0)}\nonumber \\ &{}- \int d^{2}x \, G_{\Lambda}(x) \, \opval{\mathcal{D}\phi_{i}(x)\,\Theta(0)} \label{eq:ri2} \end{align} where we have defined an operation \begin{equation} C_{2}(y) = - \int\!\! d^{2}x \,\frac14 x^{2} \,\lbrack \Theta(y), \, \mathcal{D}(x)\rbrack \,. \end{equation} If loose power counting is imposed, $\mathcal{D}(x)$ is given by equation (\ref{eq:Doploose}), and we have \begin{equation} C_{2}(y) = - \int\!\! d^{2}x \,\frac14 x^{2} \, \left [ -\partial^{\mu}\partial_{\mu}\delta^{2}(x-y) \right ] C(x) = C(y) \,. \end{equation} Thus, with loose power counting, \begin{equation} \mathcal{D}\Theta(x) = -\partial^{\mu}\partial_{\mu}C_{2}(x) \,. \label{eq:C2C} \end{equation} We separate $r^{L}_{i}$ into two parts \begin{eqnarray} r^{L}_{i} &=& r^{L}_{i,1}+r^{L}_{i,2}\label{eq:rLiseparated}\\ r^{L}_{i,1} &=& - \int d^{2}x \, G_{\Lambda}(x) \, \opval{\mathcal{D}\phi_{i}(x)\,\Theta(0)} \label{eq:rLi1}\\ r^{L}_{i,2} &=& - \int_{|y|<L} d^{2}y\, 12\pi \opval{\phi_{i}(y) \,C_{2}(0)} - \int d^{2}x \, G_{\Lambda}(x) \, \opval{\phi_{i}(x) \,\mathcal{D}\Theta(0)} \label{eq:rLi2} \end{eqnarray} then investigate each in turn. \subsection{The IR condition and the sum-rule}\label{sect:IRsum} We investigate $r^{L}_{i,1}$ first. Our goal is to show that under certain assumptions this quantity is proportional to the beta functions.
We have \begin{equation} \langle\langle\, \mathcal{D}\phi_{i}(x) = \langle\langle\, \underline{\mathcal{D}\phi_{i}}(x) = \langle\langle\, \left [ -\partial_{\mu}J_{i}^{\mu}(x) \right ] \end{equation} so \begin{equation} \opval{\mathcal{D}\phi_{i}(x)\,\Theta(0)} = - \opval{\partial_{\mu}J_{i}^{\mu}(x)\,\Theta(0)} = - \expvalc{\partial_{\mu}J_{i}^{\mu}(x)\,\Theta(0)}\, . \end{equation} Substituting this expression in equation (\ref{eq:rLi1}) we get \begin{equation} r^{L}_{i,1} = \int d^{2}x \, G_{\Lambda}(x) \, \expvalc{\partial_{\mu}J_{i}^{\mu}(x)\,\Theta(0)} \,. \label{eq:rLi12} \end{equation} Now we use a technique similar to the one used in the proof of Zamolodchikov's formula (see section \ref{sect:cgijformulas}). It is straightforward to check that the Ward identity for $T_{\mu\nu}(x)$ implies \begin{equation} x^{2} \expvalc{\partial_{\mu}J_{i}^{\mu}(x)\,\Theta(0)} = \partial_{\mu} \left [ x^{2} \expvalc{J_{i}^{\mu}(x)\,\Theta(0)} -2 x_{\alpha}x^{\beta}\expvalc{J_{i}^{\alpha}(x)\,T_{\beta}^{\mu}(0)} +x^{2}\expvalc{J_{i}^{\alpha}(x)\,T_{\alpha}^{\mu}(0)} \right ] \end{equation} which allows us to perform the integral in equation (\ref{eq:rLi12}), obtaining \begin{equation} r^{L}_{i,1} = 6\pi^2 \, x_{\mu} \left [ x^{2} \expvalc{J_{i}^{\mu}(x)\,\Theta(0)} -2 x_{\alpha}x^{\beta}\expvalc{J_{i}^{\alpha}(x)\,T_{\beta}^{\mu}(0)} {}+x^{2}\expvalc{J_{i}^{\alpha}(x)\,T_{\alpha}^{\mu}(0)} \right ] _{\big / \Lambda|x| = 1}\, . \end{equation} What we want, however, is an expression proportional to the beta functions. Recall that \begin{equation} G_{\Lambda}(x) = 3\pi x^{2}\,\theta(1-\Lambda|x|) \end{equation} so that \begin{equation} G_{0}(x) = 3\pi x^{2} \label{eq:G0} \end{equation} and \begin{equation} G_{0}(x) - G_{\Lambda}(x) = 3\pi x^{2}\,\theta(\Lambda|x|-1) \,.
\end{equation} We write \begin{eqnarray} r^{L}_{i,1} &=& E_{2} +\int_{|x|\le L} d^{2}x \, [G_{\Lambda}(x)-G_{0}(x)] \, \expvalc{\partial_{\mu}J_{i}^{\mu}(x)\,\Theta(0)}\nonumber \\ &=& E_{2} + \int_{|x|\le L} d^{2}x \, [G_{\Lambda}(x)-G_{0}(x)] \, \expvalc{\partial_{\mu}J_{i}^{\mu}(x)\,\phi_{j}(0)}\beta^{j} \label{eq:rLi1final} \end{eqnarray} with \begin{eqnarray} E_{2} &=& \int_{|x|\le L} d^{2}x \, G_{0}(x) \, \expvalc{\partial_{\mu}J_{i}^{\mu}(x)\,\Theta(0)}\\ &=&6\pi^2 \, x_{\mu} \left [ x^{2} \expvalc{J_{i}^{\mu}(x)\,\Theta(0)} -2 x_{\alpha}x^{\beta}\expvalc{J_{i}^{\alpha}(x)\,T_{\beta}^{\mu}(0)} {}+x^{2}\expvalc{J_{i}^{\alpha}(x)\,T_{\alpha}^{\mu}(0)} \right ] _{\big / |x| = L} \, . \label{eq:E2} \end{eqnarray} We are allowed to replace $\Theta(0)$ with $\beta^{j}\phi_{j}(0)$ to obtain equation (\ref{eq:rLi1final}) because $G_{\Lambda}(x)-G_{0}(x)$ vanishes for $\Lambda |x|\le 1$, so contact terms in the 2-point function make no difference. The IR error term $E_{2}$ will vanish in the limit $L\rightarrow\infty$ if the 2-point functions $\expvalc{J_{i}^{\mu}(x)\,T_{\alpha\beta}(0)}$ go to zero at large $x$ faster than $|x|^{-3}$: \begin{equation}\label{eq:IRcond} \lim\limits_{|x|\to \infty} |x|^{3}\expvalc{J_{i}^{\mu}(x)\,T_{\alpha\beta}(0)}=0 \, . \end{equation} A violation of this IR decay condition would mean that the long distance limit of the quantum field theory exhibits spontaneously broken global conformal symmetry. Our main IR assumption is that such a spontaneous breaking does not take place and equation (\ref{eq:IRcond}) is satisfied. The condition $\lim_{L\rightarrow \infty} E_{2} =0$ is equivalent to the sum rule \begin{equation} \label{eq:IRsum} \int d^{2}x \, x^{2} \, \expvalc{\partial_{\mu}J_{i}^{\mu}(x)\,\Theta(0)} = 0 \,. \end{equation} Such a sum rule holds for any spin-1 field, given our infrared assumption. 
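To make the role of condition (\ref{eq:IRcond}) explicit, note that every term in $E_{2}$ carries three powers of $x$ evaluated at $|x|=L$; a rough estimate (ours, schematic) is
\begin{equation}
|E_{2}| \;\lesssim\; 6\pi^{2}\, L^{3} \sup_{|x|=L}\left|\expvalc{J_{i}^{\mu}(x)\,T_{\alpha\beta}(0)}\right| \longrightarrow 0 \qquad \mbox{as } L\to\infty \, ,
\end{equation}
so the decay rate $|x|^{-3}$ appearing in (\ref{eq:IRcond}) is precisely the borderline rate below which the boundary term survives.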
\subsection{The term $r^{L}_{i,2}$} Similarly to (\ref{eq:rLi1final}) we want to write $r^{L}_{i,2}$ as an integral over $\Lambda |x|>1$ of an expression proportional to $\beta^{j}$. Equation (\ref{eq:C2C}), which one obtains when the loose power counting is imposed, motivates the following manipulation of equation (\ref{eq:rLi2}). Write the first term, using equation (\ref{eq:G0}) for $G_{0}(y)$, \begin{equation} - \int_{|y|<L} d^{2}y\, 12\pi \opval{\phi_{i}(y) \,C_{2}(0)} = - \int_{|y|<L} d^{2}y\,\left [\partial_{\mu}\partial^{\mu}G_{0}(y)\right ] \opval{\phi_{i}(y) \,C_{2}(0)} \end{equation} then integrate by parts. Equation (\ref{eq:rLi2}) becomes \begin{equation} r^{L}_{i,2} = E_{3} - \int_{|y|<L} d^{2}y\,G_{0}(y) \opval{\phi_{i}( y) \,\partial_{\mu}\partial^{\mu} C_{2}(0)} - \int d^{2}x \, G_{\Lambda}(x) \, \opval{\phi_{i}(x) \,\mathcal{D}\Theta(0)} \label{eq:rLi211} \end{equation} where $E_3$ is an infrared error \begin{equation} E_{3} = - \int_{|x|<L} d^{2}x \, \partial_{\mu}\left [ \partial^{\mu}G_{0}(x)\,\opval{\phi_{i}(0) \,C_{2}(x)} -G_{0}(x)\,\partial^{\mu} \opval{\phi_{i}(0) \,C_{2}(x)} \right ] \,. \label{eq:E3} \end{equation} We further rewrite equation (\ref{eq:rLi211}) as \begin{eqnarray} r^{L}_{i,2} = E_{3} +E_{4} + \int_{|x|<L} d^{2}x \, \left [ G_{0}(x)-G_{\Lambda}(x) \right ] \, \opval{\phi_{i}(x) \,\mathcal{D}\Theta(0)} \label{eq:rLi22} \end{eqnarray} where \begin{equation} E_{4} = - \int_{|x|<L} d^{2}x \, G_{0}(x) \opval{\phi_{i}(x) \, [\partial_{\mu}\partial^{\mu}C_{2}(0)+\mathcal{D}\Theta(0)]} \, . \label{eq:E4} \end{equation} The term $E_{4}$ is identically zero if we assume loose power counting, by equation (\ref{eq:C2C}). We will show in section \ref{sect:E4} that in general $E_4$ vanishes as $L\to \infty$. 
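Incidentally, the rewriting above of the prefactor $12\pi$ as $\partial_{\mu}\partial^{\mu}G_{0}(y)$ is just the elementary two-dimensional identity
\begin{equation}
\partial_{\mu}\partial^{\mu}G_{0}(y) = 3\pi\,\partial_{\mu}\partial^{\mu}y^{2} = 3\pi\cdot 2\,\delta^{\mu}_{\mu} = 12\pi \, ,
\end{equation}
since $\delta^{\mu}_{\mu}=2$ in two dimensions.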
In equation (\ref{eq:rLi22}), the integration variable $x$ is bounded away from $0$, so we can substitute \begin{equation} \opval{\phi_{i}(x) \,\mathcal{D}\Theta(0)} = \opval{\mathcal{D}\Theta(0)\, \phi_{i}(x)} = \expvalc{-\partial_{\mu}J^{\mu}(0) \, \phi_{i}(x)} \end{equation} giving \begin{equation} r^{L}_{i,2} = E_{3} +E_{4} + \int_{|x|<L} d^{2}x \, \left [ G_{\Lambda}(x) - G_{0}(x)\right ] \, \expvalc{\phi_{i}(x) \, \partial_{\mu}J^{\mu}(0)} \, . \end{equation} Finally, we will show that \begin{equation} \partial_{\mu}J^{\mu}(0) = \beta^{j} \partial_{\mu}J_{j}^{\mu}(0) \end{equation} so that $r^{L}_{i,2}$ also becomes proportional to $\beta^{j}$, up to IR errors, \begin{equation} r^{L}_{i,2} = E_{3} +E_{4} + \int_{|x|<L} d^{2}x \, \left [ G_{\Lambda}(x) - G_{0}(x)\right ] \, \expvalc{\phi_{i}(x) \, \partial_{\mu}J_{j}^{\mu}(0)}\beta^{j} \,. \label{eq:rLi2final} \end{equation} \subsection{The identity $\partial_{\mu}J^{\mu}(x) = \beta^{j}\partial_{\mu}J_{j}^{\mu}(x) $} \label{sect:DTheta} We want to show that the ordinary field \begin{equation} K(x) = \beta^{j}\partial_{\mu}J_{j}^{\mu}(x) -\partial_{\mu}J^{\mu}(x) \end{equation} is zero, which is to say that all its non-coincident correlation functions vanish: \begin{equation} \label{eq:Kcontact} \expvalc{K(x)\, \phi_{i_{1}}(x_{1}) \dots}=0 \qquad x\ne x_{1},\ldots \end{equation} In the source/operation formalism, this means that \begin{equation} \opval{K(x) \, \phi_{i_{1}}(x_{1}) \dots}=0 \qquad x\ne x_{1},\ldots \end{equation} To show this we first argue that (\ref{eq:Kcontact}) is equivalent to showing that \begin{equation}\label{eq:K1} \lbrack \mathcal{D},\, D(x) \rbrack \,\rangle\rangle = \mathcal{K}_{1}(x) \,\rangle\rangle \end{equation} for some pure-contact operation $\mathcal{K}_{1}(x)$. We then demonstrate that (\ref{eq:K1}) is a consequence of the Wess-Zumino consistency conditions on $\mathcal{D}(x)$.
It follows from (\ref{eq:calDops}) that \begin{eqnarray}\label{eq:step1} \langle\langle\, K(x) &=& \langle\langle\,\left [ -\partial_{\mu}J^{\mu}(x)+ \beta^{j}\partial_{\mu}J_{j}^{\mu}(x)\right ] \nonumber\\ &=& \langle\langle\,\left [ \mathcal{D}\Theta(x)-\beta^{j}\mathcal{D}\phi_{j}(x) \right ] \nonumber \\ &=& \langle\langle\,\lbrack \mathcal{D},\, \Theta(x)-\beta^{j}\phi_{j}(x) \rbrack \nonumber\\ &=& \langle\langle\,\lbrack \mathcal{D},\, D(x) \rbrack \end{eqnarray} where $D(x)= \Theta(x)-\beta^{j}\phi_{j}(x)$ is acting here as an operation. This last calculation implicitly uses the obvious identity \begin{equation} \langle\langle\,\beta^{j}(\lambda(x)) = \beta^{j}(\lambda) \langle\langle \end{equation} and its direct implication \begin{equation}\label{eq:betaDcomm} \langle\langle\,\lbrack\mathcal{D},\, \beta^{j}(\lambda(x))\rbrack = - \langle\langle\,\beta^{j}(\lambda(x))\, \mathcal{D} = -\beta^{j}(\lambda) \langle\langle\, \mathcal{D} = 0 \,. \end{equation} Now we have \begin{equation} \opval{K(x) \, \phi_{i_{1}}(x_{1}) \cdots} = \opval{ \lbrack \mathcal{D},\, D(x) \rbrack \, \phi_{i_{1}}(x_{1}) \cdots} \,. \end{equation} The operation $\lbrack \mathcal{D},\, D(x) \rbrack$ commutes with all the $\phi_{i_{r}}(x_{r})$ because $x\ne x_{r}$, so \begin{equation} \opval{K(x) \, \phi_{i_{1}}(x_{1}) \cdots} = \opval{\phi_{i_{1}}(x_{1}) \cdots \lbrack \mathcal{D},\, D(x) \rbrack} \,. \end{equation} We now need to show that \begin{equation} \opval{\phi_{i_{1}}(x_{1}) \cdots \lbrack \mathcal{D},\, D(x) \rbrack} =0 \qquad x\ne x_{1},\ldots \end{equation} which by (\ref{eq:purecontactcorr}) is equivalent to (\ref{eq:K1}). Equation (\ref{eq:K1}) follows from the Wess-Zumino consistency conditions. Recall that we have an equation \begin{equation}\label{eq:WZ1} 0 = [D(x)-\mathcal{D}(x)] \,\rangle\rangle \, . \end{equation} The Wess-Zumino consistency conditions are \begin{equation} \lbrack D(x)-\mathcal{D}(x), \, D(y)-\mathcal{D}(y) \rbrack \,\rangle\rangle=0 \, .
\end{equation} It follows from \begin{equation}\label{eq:3comms} [\Theta(x),\Theta(y)]=0\, , \quad [\Theta(x),\beta(y)]=0\, , \quad [\beta(x),\beta(y)]=0 \end{equation} that \begin{equation} [D(x),D(y)]=0 \end{equation} and therefore (\ref{eq:WZ1}) is equivalent to \begin{equation}\label{eq:KK1} \lbrack\mathcal{D}(y),\, D(x) \rbrack\,\rangle\rangle = - \left ( \lbrack D(y), \, \mathcal{D}(x) \rbrack +\lbrack \mathcal{D}(x), \, \mathcal{D}(y) \rbrack \right )\,\rangle\rangle \, . \end{equation} The operation $\lbrack \mathcal{D}(x), \, \mathcal{D}(y) \rbrack$ is evidently pure-contact. It also follows from (\ref{eq:scale}) and (\ref{eq:betaDcomm}) that \begin{equation} \lbrack \int d^{2}y\,D(y), \, \mathcal{D}(x) \rbrack \end{equation} is a pure contact operation. Thus integrating equation (\ref{eq:KK1}) with respect to $y$ gives \begin{equation} \lbrack \mathcal{D},\, D(x) \rbrack \,\rangle\rangle = \mathcal{K}_{1}(x) \,\rangle\rangle \end{equation} where \begin{equation} \mathcal{K}_{1}(x) = - \lbrack \int d^{2}y\,D(y), \, \mathcal{D}(x) \rbrack -\lbrack \mathcal{D}(x), \, \mathcal{D} \rbrack \end{equation} is pure contact. This completes the proof that all non-coincident correlation functions of $\beta^{j}\partial_{\mu}J_{j}^{\mu}(x) -\partial_{\mu}J^{\mu}(x)$ are identically zero. Therefore\footnote{It is worth noting that relation (\ref{eq:operrel}) is a generalization of the Curci-Paffuti relation \cite{CP} known for nonlinear sigma models. By methods similar to those employed in this section one can actually prove a stronger relation: $J^{\mu}(x) = \beta^{j}J_{j}^{\mu}(x)$. We do not need this stronger relation in the proof of the gradient formula. } \begin{equation}\label{eq:operrel} \partial_{\mu}J^{\mu}(x) = \beta^{j}\partial_{\mu}J_{j}^{\mu}(x) \,. \end{equation} \subsection{ $E_{4}$ is an IR error term}\label{sect:E4} We owe a proof that the term $E_4$ given by \begin{equation} \label{mainE4} E_4 = -\int\limits_{|x|<L}\!\!
d^2x\, G_{0}(x)\langle\langle \phi_i(x) [ \partial_{\mu}\partial^{\mu} C_2(0) + {\cal D}\Theta(0)]\rangle\rangle \end{equation} is an infrared error, that is, it vanishes as $L\to \infty$. The argument is a bit tedious, so the reader might want to skip this subsection at the first reading. Note that in general (without the assumption of loose power counting) we have \begin{equation} [\Theta(0),{\cal D}(y)]= -\partial_{\mu}\partial^{\mu}\delta(y)\, C_2(y) + \partial_{\mu}\partial_{\nu}\partial_{\gamma}\delta(y)\, C_{3}^{\mu \nu \gamma}(y) + \dots \end{equation} where the omitted terms contain derivatives of delta functions of order 4 and higher. For our purposes this expansion can be written more compactly as \begin{equation}\label{exp} [\Theta(0),{\cal D}(y)]= -\partial_{\mu}\partial^{\mu}\delta(y)\, C_2(y) + \partial_{\mu}\partial_{\nu}\partial_{\gamma}\delta(y)\, \tilde C_{3}^{\mu \nu \gamma}(y) \end{equation} where $\tilde C_{3}^{\mu \nu \gamma}(y)$ is some tensor operation. Formula (\ref{exp}) implies \begin{equation} \partial_{\mu}\partial^{\mu} C_2(0) + {\cal D}\Theta(0) = \partial_{\mu}\partial_{\nu}\partial_{\gamma} \tilde C_{3}^{\mu \nu \gamma}(0) \end{equation} and therefore \begin{equation}\label{eq:E4ax} \langle\langle \phi_i(x)[ \partial_{\mu}\partial^{\mu} C_2(0) + {\cal D}\Theta(0)]\rangle\rangle = \langle \partial_{\mu}\partial_{\nu}\partial_{\gamma} \underline{\tilde C_{3}}^{\mu \nu \gamma}(0)\phi_i(x)\rangle + \langle\langle [\phi_{i}(x), \partial_{\mu}\partial_{\nu}\partial_{\gamma} \tilde C_{3}^{\mu \nu \gamma}(0) ] \rangle\rangle \, . \end{equation} The second term on the right hand side of (\ref{eq:E4ax}) vanishes because it is proportional to a one point function of a total derivative operator. Thus we obtain \begin{equation}\label{eq:E4final} E_4 = -3\pi \int\limits_{|x|<L}\!\!
d^2x\, x^2 \langle \phi_i(x) \partial_{\mu}\partial_{\nu}\partial_{\gamma} \underline{\tilde C_{3}}^{\mu \nu \gamma}(0)\rangle \end{equation} which exhibits that $E_4$ is a linear combination of two point functions at separation $L$. Assuming that $\langle \phi_i(L) \underline{\tilde C_{3}}^{\mu \nu \gamma}(0)\rangle $ is integrable at infinity (which is consistent with $\langle \underline{\tilde C_{3}}^{\mu \nu \gamma}(0)\rangle =0$ being independent of $\lambda_i$) all combinations of two point functions entering $E_4$ go to zero as $L\to \infty$. \section{Conclusion of the proof} \setcounter{equation}{0} Combining our results for $r^{L}_{i,1}$ and $r^{L}_{i,2}$, equations (\ref{eq:rLi1final}) and (\ref{eq:rLi2final}), and substituting in equation (\ref{eq:rLiseparated}), we get \begin{equation} r^{L}_{i} = E_{2}+E_{3}+E_{4} + \left (\Delta g^{L}_{ij}\right ) \beta^{j} \label{eq:ri4} \end{equation} with \begin{equation} \Delta g^{L}_{ij} = \int_{|x|<L} d^{2}x \, [G_{\Lambda}(x)-G_{0}(x)] \, \expvalc{ \phi_{i}(x)\,\partial_{\mu}J_{j}^{\mu}(0) + \,\phi_{j}(x)\, \partial_{\mu}J_{i}^{\mu}(0) }\, . \label{eq:gLij} \end{equation} The metric correction $\Delta g^{L}_{ij}$ can also be written without any direct reference to the currents $J_{i}^{\mu}(x)$ using the Callan-Symanzik equations (\ref{eq:CS3}) \begin{eqnarray}\label{eq:deltagijreg} \Delta g^{L}_{ij}= \int_{|x|<L} d^{2}x \, [G_{\Lambda}(x)-G_{0}(x)] ({\cal L}_{\beta} - \mu\frac{\partial}{\partial \mu})\langle \phi_{i}(x)\phi_{j}(0)\rangle \, \end{eqnarray} which, using (\ref{eq:scale}), (\ref{ap1}), can be written in terms of integrated three point functions of fundamental operators up to IR error terms. Equation (\ref{eq:gradformula1}) becomes, finally, the IR-regulated gradient formula \begin{equation} \label{eq:IRreggrad} \partial^{L}_{i}c + (g_{ij}+\Delta g^{L}_{ij} + b^{L}_{ij})\beta^{j} + E(L) = 0 \end{equation} with total error \begin{equation} E(L) = E_{1}+E_{2}+E_{3}+E_{4} \,.
\end{equation} The $L$-dependent constituents of the formula are: \begin{equation} \partial^{L}_{i} c = - \int_{|y|<L} d^{2}y \, \int d^{2}x \,\ G_{\Lambda}(x) \,\expvalc{\phi_{i}(y) \,\Theta(x)\,\Theta(0)}\, , \end{equation} \begin{equation} b^{L}_{ij} = \int_{|y|<L} d^{2}y \int d^{2}x \, G_{\Lambda}(x) \,\expvalc{\phi_{i}(y) \,\phi_{j}(x) \,\Theta(0) -\phi_{j}(y) \,\phi_{i}(x) \,\Theta(0)}\, , \end{equation} \begin{equation} E_{1} = 2\pi y^{\mu}y^{\nu} \int d^{2}x \; G_{\Lambda}(x) \,{\expvalc{T_{\mu\nu}(y)\,\phi_{i}(x)\,\Theta(0)}}_{\big /|y|=L}\, , \end{equation} \begin{equation} E_{2} = 6\pi^{2} \, x_{\mu} \left [ x^{2} \expvalc{J_{i}^{\mu}(x)\,\Theta(0)} -2 x_{\alpha}x^{\beta}\expvalc{J_{i}^{\alpha}(x)\,T_{\beta}^{\mu}(0)} {}+x^{2}\expvalc{J_{i}^{\alpha}(x)\,T_{\alpha}^{\mu}(0)} \right ] _{\big / |x| = L}\, , \end{equation} \begin{equation} E_{3} = - \int_{|x|<L} d^{2}x \, \partial_{\mu}\left [ \partial^{\mu}G_{0}(x)\,\opval{\phi_{i}(0) \,C_{2}(x)} -G_{0}(x)\,\partial^{\mu} \opval{\phi_{i}(0) \,C_{2}(x)} \right ]\, , \end{equation} \begin{equation} E_{4} =-3\pi \int\limits_{|x|<L}\!\! d^2x\, x^2 \langle \phi_i(x) \partial_{\mu}\partial_{\nu}\partial_{\gamma} \underline{\tilde C_{3}}^{\mu \nu \gamma}(0)\rangle \end{equation} and $\Delta g_{ij}^{L}$ is given in (\ref{eq:gLij}) (see equations (\ref{eq:ci}), (\ref{eq:bLij}), (\ref{eq:E1}), (\ref{eq:E2}), (\ref{eq:E3}), (\ref{eq:E4final})). Now that the infrared regulated formula (\ref{eq:IRreggrad}) is derived we can study its $L\to \infty$ limit. Let us recapitulate our assumptions on the infrared behavior. Firstly, we assume that the action principle holds at least for one and two point functions so that the one and two-point functions are at least once differentiable. Secondly, the infrared behavior of the stress-energy tensor correlators should satisfy (\ref{eq:IRcond}). 
The first assumption means that 2-, 3- and 4-point functions involving $\phi_{i}(x)$ or $T_{\mu\nu}(x)$ decay faster than $|x|^{-2}$ when $|x| \to \infty$. This, together with formula (\ref{eq:IRcond}), implies that \begin{eqnarray} \label{eq:IRlimits} && \lim\limits_{L\to \infty} E(L)=0 \, , \nonumber \\ && \lim\limits_{L\to \infty} \partial^{L}_{i}c = \partial_{i} c \, , \nonumber \\ && \lim\limits_{L\to \infty} b_{ij}^{L} = b_{ij} \end{eqnarray} where $b_{ij}$ is given by Osborn's formula\footnote{The 2-form $b_{ij}$ is exact provided $w_{j}$ defined in (\ref{eq:bijorig}) is differentiable. If one relaxes the differentiability assumptions there is room for the limit $b_{ij}= \lim_{L\rightarrow\infty}b^{L}_{ij}$ to exist without $w_{j}$ being differentiable, in which case $b_{ij}$ would be closed but not exact. The failure of differentiability of $w_{j}$ could come from some non-perturbative effects.} (\ref{eq:bijorig}). Note that in showing (\ref{eq:IRlimits}) formula (\ref{eq:IRcond}) is needed only to argue that $E_2$ vanishes at infinity, while the first infrared assumption alone suffices to show all other limits. Note that although the same set of assumptions implies \begin{equation} \lim\limits_{L\to \infty} \Delta g_{ij}^{L} \beta^{j} < \infty \end{equation} there is no guarantee that the $L\to \infty$ limit of $\Delta g_{ij}^{L}$ is finite. However, infrared divergences, if present in $\Delta g_{ij}^{L}$, are orthogonal to the beta function. Therefore they can be subtracted to obtain a finite quantity $\Delta g_{ij}$ so that the following gradient formula holds \begin{equation}\label{eq:finalgradf} \partial_{i}c = -(g_{ij} + \Delta g_{ij} + b_{ij}) \beta^{j} \, \end{equation} where \begin{equation}\label{eq:deltagij} \Delta g_{ij} = \lim\limits_{L\to \infty} [\Delta g_{ij}^{L} - \mbox{ subtractions }]\, . \end{equation} This completes the derivation of the general gradient formula.
\section{Discussion} \setcounter{equation}{0} \subsection{Contact term ambiguities and scale dependence} \label{contterms} As the proof of the gradient formula uses distributional correlation functions which have contact term ambiguities, one should ask if the formula itself is free from such ambiguities. The contact term ambiguities arise from the choice of renormalization scheme and are generated by adding to the generating functional finite local counterterms of the form \begin{equation}\label{contact_redef} \ln Z[\lambda, g_{ij}] \mapsto \ln Z[\lambda, g_{ij}] + \int d^2x [ f(\lambda) \mu^{2}R_{2}(x) + \frac{1}{2}c_{ij}(\lambda) \partial_{\mu}\lambda^{i} \partial^{\mu}\lambda^{j}(x) + \dots] \end{equation} where $f(\lambda)$ and $c_{ij}(\lambda)$ are arbitrary functions\footnote{We assume that these functions are at least once differentiable.} (scalar and tensor respectively) and the omitted terms contain higher order derivatives of the metric and sources. The redefinition (\ref{contact_redef}) shifts the terms in the renormalization operation ${\cal D}(x)$. The low-order terms shift as \begin{eqnarray} C(x) &\mapsto & C(x) + \beta^{i}\partial_{i}f(x)\, , \\ W_{i}(x) &\mapsto & W_{i}(x) -\partial_{i}f(x) - c_{ij}\beta^{j}(x) \, , \\ G_{ij}(x) & \mapsto & G_{ij}(x) - {\cal L}_{\beta}c_{ij}(x) \, , \end{eqnarray} with all shifts proportional to the identity operator. The $c$-function and the metric tensors $g_{ij}$, $\Delta g_{ij}$ can each be written in a form involving two point correlators at non-zero separation only (see formulas (\ref{eq:corig}), (\ref{eq:gijorig}), (\ref{eq:gLij})). Thus these quantities are independent of the contact term ambiguities. The 1-form $w_{i}$ defined in (48) changes under (\ref{contact_redef}) as \begin{equation} w_{i} \mapsto w_{i} - \partial_{i}f \end{equation} and the antisymmetric form $b_{ij}$ thus does not change.
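Explicitly, the invariance of $b_{ij}$ under the shift $w_{i}\mapsto w_{i}-\partial_{i}f$ is just the symmetry of second derivatives (assuming $f$ is twice differentiable):
\begin{equation}
b_{ij} = \partial_{i}w_{j} - \partial_{j}w_{i} \;\mapsto\; \partial_{i}(w_{j}-\partial_{j}f) - \partial_{j}(w_{i}-\partial_{i}f) = b_{ij} - (\partial_{i}\partial_{j} - \partial_{j}\partial_{i})f = b_{ij} \, .
\end{equation}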
Since the redefinition (\ref{contact_redef}) is the most general one\footnote{The higher order terms omitted in (\ref{contact_redef}) do not contribute to the change of $w_{i}$.} the two-form $b_{ij}$ is also independent of the contact term ambiguities. Another property that we would like to check is whether the quantities we defined depend on the scales $\mu$ and $\Lambda$ only via their ratio $\mu/\Lambda$. For the $c$-function (\ref{eq:corig}), the metric (\ref{eq:gijorig}) and the antisymmetric form (\ref{eq:bijorig}) this immediately follows from the scaling properties (\ref{eq:muscale}). As for the metric correction $\Delta g_{ij}$, it may happen that the infrared regulated quantity $\Delta g_{ij}^{L}$ contains a logarithmic divergence $\sim \ln L$ whose subtraction requires introducing a new scale. If this happens, the subtracted correction will not depend on $\mu$ and $\Lambda$ via the ratio $\mu/\Lambda$ only. The physical significance of this is unclear to us. \subsection{The infrared condition: an example}\label{IRexample} Here we discuss a simple example that demonstrates the necessity of the infrared condition (\ref{eq:IRcond}) for a gradient formula to hold. Consider a free compact boson $X$ defined on a two-dimensional curved surface with metric $g_{\mu\nu}$ by the action functional \begin{equation} S[X, g_{\mu\nu}]=\frac{1}{8\pi}\int\!\! d^{2}x\, (\lambda\sqrt{g}g^{\mu\nu}\partial_{\mu}X\partial_{\nu}X + QX\sqrt{g}R_{2}) \end{equation} where $\lambda$ is the coupling constant corresponding to the radius of compactification squared, $R_{2}$ is the curvature of $g_{\mu\nu}$, and $Q$ is a parameter. Promoting $\lambda$ to a local source $\lambda(x)$ we can define a generating functional \begin{equation} \label{Zfunc} \ln Z[\lambda(x), g_{\mu\nu}(x)]= \int\! [dX]\, e^{-S[ \lambda(x), g_{\mu\nu}(x)]}\, .
\end{equation} For the zero mode integral to be well defined we assume that the theory is defined only on a surface with the topology of a plane so that \begin{equation} \int\!\!d^2 x\, \sqrt{g}R_{2} = 0 \end{equation} and the zero mode integral in (\ref{Zfunc}) only yields an overall numerical factor. Note that $Q$ cannot be considered a coupling constant since it does not multiply a local operator. The functional integral is Gaussian so the anomaly can be readily computed (e.g. using the heat kernel method) with the result \begin{equation} D(x)=\Theta(x) = \frac{1}{2}C(\lambda)\sqrt{g}R_{2}(x) + J_{\lambda}^{\mu}(x)\partial_{\mu}\lambda + \frac{1}{2}g_{\lambda\lambda}\partial_{\mu}\lambda\partial^{\mu}\lambda + \partial_{\mu}(w_{\lambda}\partial^{\mu}\lambda) \end{equation} where \begin{eqnarray}\label{Dcoefs} && C(\lambda) = \frac{1}{12\pi} + \frac{Q^2}{4\pi \lambda} \\ && J_{\lambda}^{\mu}(x)= -\frac{Q}{4\pi \lambda}\partial^{\mu} X(x)\\ && g_{\lambda\lambda} = \frac{1}{64\pi \lambda^2} \end{eqnarray} The value of $w_{\lambda}$ is essentially scheme dependent. It can be shifted by adding to $S$ a local counterterm $\int\! d^2 x \, f(\lambda(x))R_{2}(x)$ dependent on an arbitrary function $f(\lambda)$. In the context of nonlinear sigma models such a term can be fixed by target space diffeomorphism invariance. For the model at hand this gives $w_{\lambda}= (8\pi\lambda)^{-1} $. We see from (\ref{Dcoefs}) that while the theory has a vanishing beta function, its $c$-function $c=12\pi C(\lambda)$ has a nontrivial derivative with respect to the modulus $\lambda$. We can further observe that it is the broken global conformal symmetry that is responsible for the breakdown of the gradient property.
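Concretely, from (\ref{Dcoefs}) the $c$-function of this model is
\begin{equation}
c = 12\pi\, C(\lambda) = 1 + \frac{3Q^{2}}{\lambda} \, , \qquad \frac{\partial c}{\partial \lambda} = -\frac{3Q^{2}}{\lambda^{2}} \ne 0 \, ,
\end{equation}
while $\beta^{\lambda}=0$, so no gradient formula of the type derived above can hold for this theory.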
The stress-energy tensor on a flat surface is \begin{equation} T_{\mu\nu}= \frac{\lambda}{4\pi}(:\partial_{\mu} X\partial_{\nu}X: -\frac{\delta_{\mu\nu}}{2} :\partial_{\gamma}X\partial^{\gamma}X:) + \frac{Q}{4\pi}(\delta_{\mu\nu}\partial_{\lambda}\partial^{\lambda} - \partial_{\mu}\partial_{\nu})X \, . \end{equation} It has exactly the same form as the background charge model \cite{Dotsenko} with imaginary background charge. Note that in our theory there is no background charge. Moreover, since our theory is defined on a topological plane the field $X$ can be taken to be compact with an arbitrary radius. The correlation function \begin{equation} \langle T(z) J_{\lambda, z}(0) \rangle = -\frac{Q^{2}}{4\pi \lambda^2}\frac{1}{z^{3}} \end{equation} means that special conformal transformations are broken by the boundary condition at infinity.\footnote{The charge $\oint dz\, z^{2} T(z)$ does not vanish at infinity.} In particular, $|z|^{3}\langle T(z) J_{\lambda, z}(0) \rangle$ does not vanish as $|z|\to\infty$, so the infrared condition (\ref{eq:IRcond}) fails for this theory. Another way to see the necessity to have a theory defined on a sphere of large radius is in the context of the nonlinear sigma model. There it is essential for the gradient formula to hold (at least in the leading order in the $\alpha'$ expansion) that the zero mode measure includes the dilaton contribution corresponding to spherical topology \cite{Osborn}. \subsection{Bare gradient formula} \label{sect:baregrad} Here we will show how the Wess-Zumino consistency condition for the local renormalization operation can be used to derive a different gradient formula. The main quantities in the new gradient formula are constructed using the anomalous contact terms present in $\cal D$ rather than correlation functions at finite separation. For this reason we call it a bare gradient formula. As a consequence of that the terms in that formula are defined modulo the contact term ambiguities discussed in section \ref{contterms}. The new formula also suffers from potential infrared divergences in the metric.
In this section, however, for the sake of brevity we will not introduce an explicit infrared cutoff and our manipulations with integrals will be formal. It is straightforward, however, to introduce such a cutoff, with the main result correct up to error terms that vanish when the cutoff is removed. Using (\ref{eq:3comms}) the Wess-Zumino consistency condition \begin{equation} \label{eq:WZgen} [(D(x_2)- {\cal D}(x_2) ), (D(x_1) - {\cal D}(x_1)) ] \, \rangle \rangle = 0 \end{equation} can be rewritten as\footnote{Note that this form of the Wess-Zumino condition is linear in ${\cal D}$. This leads to essential simplifications in computations and also ensures that terms with tensorial sources in ${\cal D}$ do not contribute to the final gradient formula.} \begin{eqnarray} && \Bigl[ [\Theta(x_2), {\cal D}(x_1)] - [\Theta(x_1), {\cal D}(x_2)] - [\beta(x_2), {\cal D}(x_1)] \nonumber \\ && + \beta(x_1)D(x_2) - {\cal D}(x_2) \Theta(x_1) + {\cal D}(x_1)D(x_2) \Bigr]\, \rangle \rangle =0 \, . \end{eqnarray} Applying $\langle \langle \phi_{i}(y)$ to the above equation on the left and integrating over $x_{1}$ we obtain \begin{eqnarray}\label{eq:WZnext} && \langle \langle \phi_{i}(y)[ {\cal D}\Theta(x_2) - \beta^{j}{\cal D}\phi_{j}(x_2)]\rangle\rangle + \langle\langle \phi_{i}(y) \int\!\! d^{2} x_1\, \beta(x_1) D(x_2)\rangle \rangle + \langle \langle {\cal D} \phi_{i}(y) D(x_2) \rangle \rangle \nonumber \\ && -\mu \frac{\partial}{\partial \mu} ( \langle D(x_2) \phi_{i}(y)\rangle_{c} - \delta^{2}(y-x_2)\partial_{i}\beta^{j}\langle \phi_{j}\rangle) = 0 \end{eqnarray} where we used the identities \begin{equation} \int\!\! d^{2}x_1 [\Theta(x_1), {\cal D}(x_2)] = 0 \, , \quad \langle \langle \phi_i(y) \int\!\! d^{2}x_1 [\beta^{j}(x_2), {\cal D}(x_1) ] \phi_{j}(x_2)\rangle\rangle = 0\, . \end{equation} As we know from section \ref{sect:DTheta}, $ {\cal D}\Theta - \beta^{j}{\cal D}\phi_{j}$ is a pure-contact operation.
Its field part $ \beta^{j}\partial_{\mu}J^{\mu}_{j} - \partial_{\mu}J^{\mu}$ vanishes (is pure contact). Equation (\ref{eq:WZnext}) expresses the contact terms with $\phi_{i}(y)$ via the operation $\cal D$. Integrating the above formula over $x_2$ with the weight $(x_2-y)^{2}$ and using \begin{equation} \mu \frac{\partial}{\partial \mu} \int\!\! d^{2}x_2\, \langle D(x_{2})\phi_{i}(y) \rangle_{c} (x_2-y)^{2} = 0 \end{equation} we obtain\footnote{Recall that the currents $J_{i}^{\mu}$ and the metric $G_{ij}$ in (\ref{eq:Doploose}) are ordinary operations so that $\langle \partial_{i}J_{j}^{\mu}\rangle = 0$.} \begin{equation} \partial_{i}\langle C_{2} \rangle = - H_{ij}\beta^{j} + {\cal L}_{\beta}W_{i} + Q_{i} \end{equation} where \begin{eqnarray} H_{ij} &=& -G_{ij} - \frac{1}{4} \int\!\! d^{2} y \, y^{2} [ \langle \partial_{\mu}J^{\mu}_{j}(0) \phi_{i}(y) \rangle + \langle \partial_{\mu}J^{\mu}_{i}(0) \phi_{j}(y) \rangle \,] \, , \nonumber \\ G_{ij}&=& - \frac{1}{4} \int\!\! d^{2} y \, y^{2} \langle \langle [\phi_{i}(0), {\cal D}\phi_{j}(y)]\, \rangle \rangle \, , \nonumber \\ W_i &=& \frac{1}{4} \int\!\! d^{2} y \, y^{2} \langle D(y) \phi_{i}(0)\rangle_{c} \, , \nonumber \\ Q_{i} &=& \frac{1}{4}\int\!\! d^2 y\, y^{2} \langle \partial_{\mu}J^{\mu}_{i}(y) \Theta(0)\rangle_{c} \, . \end{eqnarray} Note that the tensor $G_{ij}$ is symmetric. This follows from the fact that the operations $\phi_{i}(y)$, $\phi_{j}(x)$ commute. The metric tensor $H_{ij}$ can also be written in terms of integrated correlation functions \begin{equation} H_{ij} = \frac{1}{4}\int\!\! d^{2} y \, y^{2} \Bigl[ \int\!\! d^{2}x\, \langle D(x)\phi_{i}(y)\phi_{j}(0)\rangle_{c} - \partial_{i}\beta^{k}\langle \phi_{k}(y)\phi_{j}(0)\rangle_c - \partial_{j}\beta^{k}\langle \phi_{i}(y)\phi_{k}(0)\rangle_{c} \Bigr] \, .
\end{equation} According to our main infrared assumption (\ref{eq:IRcond}), $Q_{i}$ vanishes and we have a gradient formula \begin{equation} \label{eq:baregrad} \partial_{i} c^{(0)} + g_{ij}^{(0)}\beta^{j} + b_{ij}^{(0)}\beta^{j} = 0 \end{equation} where \begin{equation} c^{(0)}= \langle C_{2}\rangle - W_{i}\beta^{i}\, , \quad g_{ij}^{(0)}= H_{ij}\, , \quad b_{ij}^{(0)}= \partial_{i}W_{j} - \partial_{j}W_{i} \,. \end{equation} The metric $H_{ij}$ potentially suffers from the same infrared divergences as the correction to Zamolodchikov's metric defined in (\ref{eq:deltagij}). We define the finite quantity entering (\ref{eq:baregrad}) by subtracting these divergences. When loose power counting applies, the above quantities can be computed more explicitly using (\ref{eq:Doploose}). In this case we have \begin{equation} H_{ij} = G_{ij} - \frac{1}{4}\int\!\! d^{2}y \, y^{2} [\langle \partial_{\mu}J^{\mu}_{i}(y) \phi_{j}(0) \rangle + \langle \partial_{\mu}J^{\mu}_{j}(y) \phi_{i}(0) \rangle] \, , \end{equation} $\langle C_{2}\rangle = \langle C \rangle$ and $G_{ij}$, $W_{i}$ coincide with the respective quantities defined in formula (\ref{eq:Doploose}). In the case when the currents $J_{i}^{\mu}$ are absent formula (\ref{eq:baregrad}) matches the one obtained by Osborn \cite{Osborn}. \subsection{Dressing transformations} For any gradient formula \begin{equation} \partial_{i}c + g_{ij}\beta^{j} + b_{ij} \beta^{j}=0 \end{equation} with a symmetric tensor $g_{ij}$ and an antisymmetric tensor $b_{ij}$ one can redefine $c$, $b_{ij}$ and $g_{ij}$ as \begin{eqnarray}\label{eq:dressing} \tilde c &=& c + \beta^{i}c_{ij}\beta^{j} \, , \nonumber \\ \tilde g_{ij} &= &g_{ij} - {\cal L}_{\beta} c_{ij} \, , \nonumber \\ \tilde b_{ij} &=& b_{ij} - (d i_{\beta} c)_{ij} \end{eqnarray} so that a gradient formula $\partial_{i}\tilde c + \tilde g_{ij}\beta^{j} +\tilde b_{ij} \beta^{j} = 0$ holds.
The tensor $c_{ij}$ above is any tensor on the space of couplings that may depend on the couplings and the renormalization scale $\mu$. We will refer to the redefinitions (\ref{eq:dressing}) as dressing transformations. One can show that formula (\ref{eq:finalgradf}) is related to formula (\ref{eq:baregrad}) by means of a dressing transformation specified by \begin{equation} c_{ij}^{\Lambda} = \int\!\! d^{2}x\, G_{\Lambda}(x) \langle \phi_{i}(x) \phi_{j}(0)\rangle_{c} \end{equation} so that \begin{equation} c= c^{(0)} - \beta^{i}c_{ij}^{\Lambda}\beta^{j} \, . \end{equation} Using dressing transformations it is not hard to construct a class of $c$-functions that decrease monotonically under the RG flow. Such functions $c^{f}$ can be defined as \begin{equation} c^{f} = -3\pi \int\!\! d^{2}x\, x^{2}f(x^{2})\langle \Theta(x) \Theta(0)\rangle_{c} \end{equation} where $f(x^2)$ is a function such that $f(0)=1$, $f(x^{2})$ decays fast at infinity\footnote{An exponential decrease would suffice for all purposes.} and \begin{equation} x^{\mu}\partial_{\mu} f(x^{2}) < 0 \, . \end{equation} These potential functions satisfy a gradient formula \begin{equation} \partial_{i}c^{f} = -(g_{ij}^{f} + \Delta g_{ij}^{f} + b_{ij}^{f} )\beta^{j} \end{equation} where \begin{equation} g_{ij}^{f}= -3\pi \int\!\! d^{2}x\, x^{2} [x^{\mu}\partial_{\mu} f(x^{2})] \langle \phi_{i}(x)\phi_{j}(0)\rangle_{c} \, , \end{equation} \begin{equation} \Delta g_{ij}^{f} = 3\pi \int\!\! d^{2}x\, x^{2}[f(x^{2})-1] (\langle \partial_{\mu}J^{\mu}_{i}(x)\phi_{j}(0)\rangle + \langle \partial_{\mu}J^{\mu}_{j}(x)\phi_{i}(0)\rangle) \, , \end{equation} \begin{equation} b_{ij}^{f} = \partial_{i}w_{j}^{f} - \partial_{j}w_{i}^{f} \, , \end{equation} \begin{equation} w_{i}^{f}= 3\pi \int\!\! d^{2}x\, x^{2} f(x^{2}) \langle \phi_{i}(x)\Theta(0)\rangle_{c} \, . \end{equation} Such smeared $c$-functions were first considered in \cite{cspec}.
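As a heuristic check of the monotonicity (ours, schematic: we use only the dimensional statement $\mu\frac{d}{d\mu}\expvalc{\Theta(x)\Theta(0)} = (x^{\mu}\partial_{\mu}+4)\expvalc{\Theta(x)\Theta(0)}$ for the total scale derivative and ignore contact terms), integration by parts gives
\begin{equation}
\mu\frac{d c^{f}}{d\mu} = -3\pi \int\!\! d^{2}x\, x^{2}f(x^{2})\,(x^{\mu}\partial_{\mu}+4)\expvalc{\Theta(x)\Theta(0)} = 3\pi \int\!\! d^{2}x\, x^{2}\,[x^{\mu}\partial_{\mu}f(x^{2})]\,\expvalc{\Theta(x)\Theta(0)} \le 0 \, ,
\end{equation}
where the last inequality uses $x^{\mu}\partial_{\mu}f(x^{2})<0$ together with reflection positivity of $\expvalc{\Theta(x)\Theta(0)}$ at separated points.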
\subsection{Renormalization group transformation as a flow of couplings} As one can observe from the form of the Callan-Symanzik equations (\ref{eq:CS3}), the scale transformation of correlation functions $$ \langle \phi_{i_1}(x_1)\phi_{i_2}(x_2) \dots \Theta(y_1)\Theta(y_2) \dots \rangle_{c} $$ even at finite separation is not fully compensated by the change of couplings $\lambda^i$. In addition to changing the couplings according to their beta functions and rotating the fields $\phi_{i}$ by the anomalous dimension matrices $\partial_{i}\beta^{j}$, the operators $\phi_{i}(x)$ and $\Theta(y)$ each shift by an additional total derivative: $\partial_{\mu}J^{\mu}_{i}(x)$ and $\partial_{\mu}J^{\mu}(y)$ respectively. If the currents $J^{\mu}_{i}$, $J^{\mu}$ are not conserved these shifts affect the scale transformation of the correlation functions taken at finite separation. This signals that more couplings need to be introduced to parameterize such additional terms in the Callan-Symanzik equations. Thus, to account for the current $J^{\mu}(y)$, it is customary to introduce dilaton couplings $\lambda^{i}_{D}$ that couple to $\phi_{i}(x)\mu^{2}R_{2}(x)$ terms in the Lagrangian\footnote{Completeness of the set $\phi_{i}$ is assumed here, as discussed in section 5.}. The generating functional $Z$ depends on these couplings according to the functional differential equation \begin{equation}\label{eq:dil} \frac{\delta \ln Z}{\delta \lambda^{i}_{D}(x)} = \frac{1}{2}\mu^{2}R_{2}(x)\frac{\delta \ln Z}{\delta \lambda^{i}(x)} \, . \end{equation} The introduction of this new set of couplings is natural if one bears in mind that coupling constant redefinitions are responsible for having different RG schemes. To renormalize a theory on a curved space one needs counterterms of the form $\phi_{i}(x)\mu^{2}R_{2}(x)$. As usual such counterterms are defined up to arbitrary finite parts. Changing the dilaton couplings $\lambda^{i}_{D}$ accounts for changing the finite parts in such counterterms.
(Previously we assumed that such counterterms are fixed somehow, which amounts to partially fixing the RG scheme. This resulted in the extra terms in the Callan-Symanzik equations.) Expanding the operator $C(x)$ in (\ref{eq:Doploose}) as $C(x) = \beta^{i}_{D}\phi_{i}(x)$ we see that the coefficients $\beta^{i}_{D}$ can now be naturally interpreted as the beta functions for the dilaton couplings. For the loose power counting case, the Callan-Symanzik equation for correlators of the stress-energy tensor takes the form \begin{eqnarray}\label{eq:CSwithdil} &&\mu\frac{\partial}{\partial \mu} \langle T_{\mu \nu}(y_1) T_{\alpha \beta}(y_2) \dots \rangle_{c} = \beta^{i}\frac{\partial}{\partial \lambda^{i}}\langle T_{\mu \nu}(y_1) T_{\alpha \beta}(y_2) \dots \rangle_{c} + \langle \Gamma_{\mu\nu}^{C}(y_1)T_{\alpha \beta}(y_2) \dots \rangle_{c} \nonumber \\ && + \langle T_{\mu \nu}(y_1) \Gamma_{\alpha \beta}^{C}(y_2) \dots \rangle_{c} + \dots =(\beta^{i}\frac{\partial}{\partial \lambda^{i}} + \beta^{i}_{D}\frac{\partial}{\partial \lambda^{i}_{D}})\langle T_{\mu \nu}(y_1) T_{\alpha \beta}(y_2) \dots \rangle_{c} \end{eqnarray} where \begin{equation} \Gamma_{\mu \nu}^{C}(x) = (\partial_{\mu}\partial_{\nu} - g_{\mu \nu}\partial_{\alpha}\partial^{\alpha})C(x) \, . \end{equation} We used (\ref{eq:dil}) and (\ref{eq:Doploose}) to obtain the last equality in (\ref{eq:CSwithdil}). We see that the dilaton couplings account for mixings of the stress-energy tensor with the trivially conserved currents $\Gamma_{\mu \nu}^{C}(x)$. With the enlarged set of couplings $(\lambda, \lambda_{D})$, the change in scale for correlators of stress-energy tensor components (at finite separation) is exactly compensated by the change in coupling constants. In particular for the $c$-function (\ref{eq:corig}) we have \begin{equation}\label{eq:ccov} \mu \frac{\partial c}{\partial \mu} = (\beta^{i}\frac{\partial}{\partial \lambda^{i}} + \beta^{i}_{D}\frac{\partial}{\partial \lambda^{i}_{D}}) c\, .
\end{equation} We can also compute the derivatives of the $c$-function (\ref{eq:corig}) with respect to the dilaton couplings. Using (\ref{eq:c}), (\ref{eq:dil}) and the identity \begin{equation} \frac{\partial}{\partial \lambda^{i}_{D}} = \int\!\! d^{2}x \, \frac{\delta}{\delta \lambda^{i}_{D}(x)} \end{equation} we obtain \begin{equation}\label{eq:partialc2} \frac{\partial c}{\partial \lambda^{i}_{D}} = -\frac{\partial}{\partial \lambda^{i}_{D}} \int\!\! d^2 x\, G_{\Lambda}(x) \langle \Theta(x) \Theta(0)\rangle_{c} = 2 \int\!\! d^2 x\, G_{\Lambda}(x) \langle \Theta(x) \partial_{\mu}\partial^{\mu}\phi_{i}(0)\rangle_{c} \, . \end{equation} Integrating by parts in (\ref{eq:partialc2}), using \begin{equation} \mu \frac{\partial}{\partial \mu} \langle \phi_i\rangle = \beta^{j} \partial_{j} \langle \phi_i\rangle \end{equation} and the assumption that \begin{equation} \int\!\! d^{2} x \, \langle \phi_{j}(x) \phi_{i}(0)\rangle = \partial_{j}\langle \phi_{i}\rangle = \partial_{i} \langle \phi_j \rangle < \infty \, , \end{equation} we obtain \begin{equation} \label{eq:dilgrad} \frac{\partial c}{\partial \lambda^{i}_{D}} = -g_{ij}^{D}\beta^{j} \end{equation} where \begin{equation} \label{eq:dilmetric} g_{ij}^{D} = 2\int\!\!d^{2}x\, [G_{0}(x) - G_{\Lambda}(x)]\langle \phi_{j}(x) \partial_{\mu}\partial^{\mu}\phi_{i}(0)\rangle_{c} \end{equation} is a symmetric tensor. We can further show that the contraction of gradient formula (\ref{eq:finalgradf}) with the beta functions $\beta^{i}$ gives Zamolodchikov's formula (\ref{eq:zam}). This boils down to the identity \begin{equation} \label{eq:beta_contr} \beta^{i}_{D}\frac{\partial c}{\partial \lambda_{D}^{i}} = \beta^{i}\Delta g_{ij}\beta^{j}\, . 
\end{equation} Using equations (\ref{eq:dilgrad}), (\ref{eq:dilmetric}) the left hand side of equation (\ref{eq:beta_contr}) can be written as \begin{equation} \label{eq:beta_contr2} \beta^{i}_{D}\frac{\partial c}{\partial \lambda_{D}^{i}} = 2\int\!\!d^{2}x\, [G_{\Lambda}(x) - G_{0}(x)] \langle \Theta(x) \partial_{\mu}\partial^{\mu}C(0)\rangle_{c} \end{equation} while for the right hand side we have \begin{equation} \label{eq:beta_contr3} \beta^{i}\Delta g_{ij}\beta^{j}=2\int\!\!d^{2}x\, [G_{\Lambda}(x) - G_{0}(x)] \langle \Theta(x)\beta^{i} \partial_{\mu} J^{\mu}_{i}(0)\rangle_{c} \, . \end{equation} The last expression coincides with (\ref{eq:beta_contr2}) by virtue of the identity $\partial_{\mu}\partial^{\mu}C(x) =\beta^{i}\partial_{\mu} J^{\mu}_{i}(x)$ proven in section \ref{sect:DTheta}. This identity can be used because the two point function in (\ref{eq:beta_contr3}) is taken at finite separation. It is not hard to extend the proof of identity (\ref{eq:beta_contr}) to the more general case in which loose power counting is not assumed. Formula (\ref{eq:beta_contr}) shows in particular that the metric correction $\Delta g_{ij}$ is necessary to account for the flow of the dilaton coupling constants when the latter are present. The additional gradient formula (\ref{eq:dilgrad}) together with the main formula (\ref{eq:finalgradf}) imply that the $c$-function is stationary with respect to the couplings $(\lambda, \lambda_{D})$ at fixed points $\beta^{i}=0$. The converse follows from Zamolodchikov's formula (\ref{eq:zam}) combined with formula (\ref{eq:ccov}). Thus, under our main set of assumptions and with loose power counting, the stationary points of the $c$-function are in one-to-one correspondence with the fixed points. \section{Final comments}\label{sec:final} As we said in the introduction, one of the motivations to obtain a general gradient formula came from string theory.
With regard to potential applications of our result to the problem of constructing string effective actions, it should be stressed that we worked throughout with normalized connected correlation functions, while it is the unnormalized and disconnected ones which are relevant to string theory. This fact also explains why our results {\it seem} to be at odds with the conclusion of \cite{Tseytlin_c} that the Zamolodchikov $c$-function does not give a suitable string effective action. In the unnormalized correlators the dilaton zero mode $\phi_{0}$ contributes an overall factor $e^{-2\phi_{0}}$, which results in the same factor appearing in $c$. Thus stationarity of $c$ with respect to $\phi_{0}$ implies that $c$ has to vanish at stationary points. This factor and the related problem disappear when one builds $c$ out of normalized correlators, as we do in this paper. The aforementioned problem with $c$ prompted various authors to switch to using what we call the bare gradient formula, which was discussed in section 9.3. The negative side of this is that the metric that appears in that formula, being built from contact terms, does not have any positivity properties. In the present paper we focused on a formal derivation of the new gradient formula and a discussion of its general properties. It would be instructive to illustrate how it works in concrete examples in conformal perturbation theory and nonlinear sigma models. We are planning to do this in a separate publication \cite{FKinprep}. It is also interesting to understand better the implications of the new formula for string theory. We leave this question to future studies. \begin{center} {\bf \large Acknowledgments} \end{center} The work of D.F. was supported by the Rutgers New High Energy Theory Center. Both authors acknowledge the support of the Edinburgh Mathematical Society. \bibliographystyle{plain}
\section{Introduction} In \cite{AbrCoe1} we showed that vector space \em projectors \em \[ {{\rm P}:V\otimes W\to V\otimes W} \] which have a one-dimensional subspace of $V\otimes W$ as fixed-points, suffice to implement any linear map, and also the categorical trace \cite{JSV} of the category $({\bf FdVec}_\mathbb{K},\otimes)$ of finite-dimensional vector spaces and linear maps over a base field $\mathbb{K}$. The interest of this is that projectors of this kind arise naturally in quantum mechanics (for $\mathbb{K} = \mathbb{C}$), and play a key role in information protocols such as \cite{BBC} and \cite{Swap}, and also in measurement-based schemes for quantum computation. We showed how both the category $({\bf FdHilb},\otimes)$ of finite-dimensional \em complex \em Hilbert spaces and linear maps, and the category $({\bf Rel},\times)$ of relations with the cartesian product as tensor, can be physically realized in this sense. In \cite{AbrCoe2} we showed that such projectors can be defined and their crucial properties proved at the abstract level of \em strongly compact closed categories\em. This categorical structure is a major ingredient of the categorical axiomatization in \cite{AbrCoe2} of quantum theory \cite{vN}. It captures quantum entanglement and its behavioral properties \cite{Coe}. In this paper we improve on the definition of strong compact closure, enabling a characterization in terms of adjoints (in the linear algebra sense, suitably abstracted) and yanking, without explicit reference to compact closure, and allowing a cleaner treatment of bipartite projectors, coherent with the treatment of arbitrary projectors in \cite{AbrCoe2}. We are then able to show that the constructions in \cite{AbrCoe1} for realizing arbitrary morphisms and the trace by projectors also carry over to the abstract level, and that these constructions admit an information-flow interpretation in the spirit of the one for additive traces \cite{Abr,AHS}.
It is the information flow due to (strong) compact closure which is crucial for the abstract formulation, and for the proofs of correctness of protocols such as quantum teleportation \cite{AbrCoe2}. A concise presentation of (very) basic quantum mechanics which supports the developments in this paper can be found in \cite{AbrCoe1,Coe}. However, the reader with a sufficient categorical background might find the abstract presentation in \cite{AbrCoe2} more enlightening. \section{Strongly compact closed categories} As shown in \cite{KellyLaplaza}, in any monoidal category ${\bf C}$, the endomorphism monoid ${\bf C} ({\rm I} , {\rm I} )$ is commutative. Furthermore any $s:{\rm I}\to{\rm I}$ induces a natural transformation \begin{diagram} s_A :A & \rTo^{\simeq} & {\rm I} \otimes \!A & \rTo^{s \otimes 1_A} & {\rm I} \otimes\! A & \rTo^{\!\!\simeq\ } & A\,.\ \ \ \ \ \end{diagram} Hence, setting $s \bullet f$ for $f \circ s_A=s_B\circ f$ for $f : A \rightarrow B$, we have \[ (s \bullet g)\circ(r \bullet f)=(s\circ r)\bullet(g\circ f) \] for $r:{\rm I}\to{\rm I}$ and $g:B\to C$. We call the morphisms $s\in {\bf C} ({\rm I} , {\rm I} )$ \em scalars \em and $s \bullet-$ \em scalar multiplication\em. In $(\mathbf{FdVec}_{\mathbb{K}},\otimes)$, linear maps $s:\mathbb{K} \to \mathbb{K}$ are uniquely determined by the image of $1$, and hence correspond biuniquely to elements of $\mathbb{K}$. In $(\mathbf{Rel},\times)$, there are just two scalars, corresponding to the Booleans $\mathbb{B}$. Recall from \cite{KellyLaplaza} that a \em compact closed category \em is a symmetric monoidal category ${\bf C}$, in which, when ${\bf C}$ is viewed as a one-object bicategory, every one-cell $A$ has a left adjoint $A^*$. 
Explicitly this means that for each object $A$ of ${\bf C}$ there exists a \em dual object \em $A^*$, a \em unit \em $\eta_A:{\rm I}\to A^*\otimes A$ and a \em counit \em $\epsilon_A:A\otimes A^*\to {\rm I}$, and that the diagrams \begin{equation}\label{ccc1} \begin{diagram} A&\rTo^{\simeq\ }&A\otimes{\rm I}&\rTo^{1_A\otimes\eta_A}&A\otimes(A^*\otimes A)\\ \dTo^{1_A}&&&&\dTo_{\simeq}\\ A&\lTo_{\simeq\ }&{\rm I}\otimes A&\lTo_{\epsilon_A\otimes 1_A}&(A\otimes A^*)\otimes A \end{diagram} \end{equation}\par\noindent and \begin{equation}\label{ccc2} \begin{diagram} A^*&\rTo^{\simeq\ }&{\rm I} \otimes A^*&\rTo^{\eta_A\otimes 1_{A^*}}&(A^*\otimes A)\otimes A^*\\ \dTo^{1_{A^*}}&&&&\dTo_{\simeq}\\ A^*&\lTo_{\simeq\ }& A^*\otimes{\rm I}&\lTo_{1_{A^*}\otimes\epsilon_A}&A^*\otimes (A\otimes A^*) \end{diagram} \end{equation}\par\noindent both commute. Alternatively, a compact closed category may be defined as a $*$-autonomous category \cite{Barr} with a self-dual tensor, hence a model of `degenerate' linear logic \cite{Seely}. For each morphism $f:A\to B$ in a compact closed category we can construct a \em dual \em $f^*$, a \em name \em $\ulcorner f \urcorner$ and a \em coname \em $\llcorner f \lrcorner$, respectively as \begin{diagram} B^*&\rTo^{\simeq}&{\rm I}\otimes B^*&\rTo^{\eta_A\otimes 1_{B^*}}&A^*\otimes A\otimes B^*\\ \dTo^{f^*}&&&&\dTo_{1_{A^*}\!\otimes f\otimes 1_{B^*}\hspace{-1.3cm}}\\ A^*&\lTo_{\simeq}&A^*\otimes {\rm I}&\lTo_{1_{A^*}\otimes \epsilon_B}&A^*\otimes B\otimes B^* \end{diagram} \begin{diagram} A^*\!\!\otimes\! A&\rTo^{1_{A^*}\!\!\otimes\! f}&A^*\!\otimes\! B&&&&&{\rm I}\\ \uTo^{\eta_A}&\ruTo_{\ulcorner f\urcorner}&&&&&\ruTo^{\llcorner f\lrcorner}&\uTo_{\epsilon_B} \\ {\rm I}&&&&&A\!\otimes\! B^*&\rTo_{f\!\otimes\! 1_{B^*}}&B\!\otimes\! B^*&& \end{diagram} In particular, the assignment $f\mapsto f^*$ extends $A\mapsto A^*$ into a contravariant endofunctor with $A\simeq A^{**}$. 
In any compact closed category, we have \[ {\bf C} (A \otimes B^\ast , {\rm I} ) \simeq {\bf C} (A, B) \simeq {\bf C} ({\rm I} , A^\ast \otimes B)\,, \] so `elements' of $A \otimes B$ are in biunique correspondence with names/conames of morphisms $f : A \to B$. Typical examples are $({\bf Rel},\times)$, where $X^*=X$ and where for $R\subseteq X\times Y$, \begin{eqnarray*} &&\hspace{-6mm}\ulcorner R\urcorner=\{(*,(x,y))\mid xRy,x\in X, y\in Y\}\\ &&\hspace{-6mm}\llcorner R\lrcorner=\{((x, y),*)\mid xRy,x\in X, y\in Y\} \end{eqnarray*}\par\noindent and $({\bf FdVec}_\mathbb{K},\otimes)$, where $V^*$ is the dual vector space of linear functionals $v:V\to \mathbb{K}$ and where for $f:V\to W$ with matrix $(m_{ij})$ in bases $\{e_i^V\}_{i=1}^{i=n}$ and $\{e_j^W\}_{j=1}^{j=m}$ of $V$ and $W$ respectively we have \begin{eqnarray*} &&\hspace{-6mm}\ulcorner f\urcorner:\mathbb{K}\to V^*\otimes W::1\mapsto\sum_{i,j=1}^{\!i,j=n,m\!}m_{ij}\cdot \bar{e}_i^V\otimes e_j^W\\ &&\hspace{-6mm}\llcorner f\lrcorner:V\otimes W^*\to\mathbb{K}::e_i^V\otimes\bar{e}_j^W\mapsto m_{ij} . \end{eqnarray*}\par\noindent where $\{\bar{e}_i^{V}\}_{i=1}^{i=n}$ is the basis of $V^*$ satisfying $\bar{e}_i^{V}(e^{V}_j)=\delta_{ij}$, and similarly for $W$. Another example is the category $n${\bf Cob} of $n$-dimensional \em cobordisms\em, which is regularly considered in mathematical physics, e.g.~\cite{Baez}. Each compact closed category admits a categorical trace, that is, for every morphism $f:A\otimes C\to B\otimes C$ a trace ${\rm Tr}_{A,B}^C(f):A\to B$ is specified and satisfies certain axioms \cite{JSV}. Indeed, we can set \begin{equation}\label{eq:trace} {\rm Tr}_{A,B}^C(f):=\rho^{-1}_B\circ(1_B\otimes\epsilon_C)\circ (f\otimes 1_{C^*})\circ (1_A\otimes (\sigma_{C^*\!,C}\circ\eta_C))\circ\rho_A \end{equation}\par\noindent where $\rho_X:X\simeq X\otimes{\rm I}$ and $\sigma_{X,Y}:X\otimes Y\simeq Y\otimes X$.
In $({\bf Rel},\times)$ this yields \[ x\,{\rm Tr}_{X,Y}^Z(R)y\ \Leftrightarrow\ \exists z\in Z.(x,z)R(y,z) \] for $R\subseteq (X\times Z)\times (Y\times Z)$ while in $({\bf FdVec}_\mathbb{K},\otimes)$ we obtain \[ {\rm Tr}^U_{V,W}(f):e_i^V\mapsto {\sum}_\alpha m_{i\alpha j\alpha}\, e_j^W \] where $(m_{ikjl})$ is the matrix of $f$ in bases $\{e_i^V\!\otimes e_k^U\}_{ik}$ and $\{e_j^W\!\otimes e_l^U\}_{jl}$. \begin{definition}[Strong Compact Closure I]\label{def:sccc1} \em A \em strongly compact closed category \em is a compact closed category ${\bf C}$ in which $A=A^{**}$ and $(A\otimes B)^*\!\!=A^*\otimes B^*$, and which comes together with an involutive covariant compact closed functor $(\ )_*:{\bf C}\to {\bf C}$ which assigns each object $A$ to its dual $A^*$. \end{definition} So in a strongly compact closed category we have two involutive functors, namely a contravariant one $(\ )^*:{\bf C}\to {\bf C}$ and a covariant one $(\ )_*:{\bf C}\to {\bf C}$ which coincide in their action on objects. Recall that $(\ )_*$ being \em compact closed functor \em means that it preserves the monoidal structure strictly, and unit and counit i.e. \begin{equation}\label{sccceq} \ulcorner 1_{A_*}\urcorner=(\ulcorner 1_A\urcorner)_*\circ u_{\rm I}^{-1}\qquad{\rm and}\qquad\llcorner 1_{A_*}\lrcorner=u_{\rm I}\circ(\llcorner 1_A\lrcorner)_* \end{equation}\par\noindent where $u_{\rm I}:{\rm I}^*\simeq{\rm I}$. This in particular implies that $(\ )_*$ commutes with $(\ )^*$ since $(\ )^*$ is definable in terms of the monoidal structure, $\eta$ and $\epsilon$ --- in \cite{AbrCoe2} we only assumed commutation of $(\ )_*$ and $(\ )^*$ instead of the stronger requirement of equations (\ref{sccceq}). For each morphism $f:A\to B$ in a strongly compact closed category we can define an \em adjoint \em --- as in linear algebra --- as \[ f^\dagger:=(f_*)^*=(f^*)_*:B\to A\,. \] It turns out that we can also define strong compact closure by taking the adjoint to be a primitive. 
\begin{theorem}[Strong Compact Closure II]\label{def:SCC1} A strongly compact closed category can be equivalently defined as a symmetric monoidal category ${\bf C}$ which comes with \begin{enumerate} \item a monoidal involutive assignment $A\mapsto A^*$ on objects, \item an identity-on-objects, contravariant, strict monoidal, involutive functor $f\mapsto f^\dagger$, and, \item for each object $A$ a unit $\eta_A:{\rm I}\to A^*\otimes A$ with $\eta_{A^*}=\sigma_{A^*\!,A}\circ\eta_A$ and such that either the diagram \begin{equation}\label{sccc1} \begin{diagram} A&\rTo^{\simeq}&A\otimes{\rm I}&\rTo^{1_A\otimes\eta_A}&A\otimes(A^*\otimes A)\\ \dTo^{1_A}&&&&\dTo_{\simeq}\\ A&\lTo_{\simeq}&{\rm I}\otimes A&\lTo_{(\eta_A^\dagger\circ\sigma_{A,A^*})\otimes 1_A}&(A\otimes A^*)\otimes A \end{diagram} \end{equation}\par\noindent or the diagram \begin{equation}\label{sccc2} \begin{diagram} A&\rTo^{\!\!\!\!\!\!\simeq\!\!}&{\rm I}\otimes A&\rTo^{\eta_A\otimes 1_A}&(A^*\!\otimes A)\otimes A&\rTo^{\simeq}&A^*\!\otimes(A\otimes A)\\ \dTo^{1_A}&&&&&&\dTo~{1_{A^*}\!\otimes\sigma_{A,A}\!\!\!}\\ A&\lTo_{\!\!\!\!\!\!\simeq\!\!}&{\rm I}\otimes A&\lTo_{\eta_A^\dagger\otimes 1_A}&(A^*\!\otimes A)\otimes A&\lTo_{\simeq}&A^*\!\otimes(A\otimes A) \end{diagram} \end{equation}\par\noindent commutes, where $\sigma_{A,A}:A\otimes A\simeq A\otimes A$ is the twist map. \end{enumerate}\par\noindent \end{theorem} While diagram (\ref{sccc1}) is the analogue to diagram (\ref{ccc1}) with $\eta_A^\dagger\circ\sigma_{A,A^*}$ playing the role of the coname, diagram (\ref{sccc2}) expresses \em yanking \em with respect to the canonical trace of the compact closed structure. We only need one commuting diagram as compared to diagrams (\ref{ccc1}) and (\ref{ccc2}) in the definition of compact closure and hence in Definition \ref{def:sccc1} since due to the strictness assumption (i.e.~$A\mapsto A^*$ being involutive) we were able to replace the second diagram by $\eta_{A^*}=\sigma_{A^*\!,A}\circ\eta_A$. 
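As a concrete sanity check on the component formula for the trace in $({\bf FdVec}_\mathbb{K},\otimes)$ given above, the following sketch (helper names are ours, not part of the formal development) computes ${\rm Tr}^U_{V,W}(f)$ as a partial contraction of the matrix $(m_{ikjl})$, and verifies two standard consequences: the trace of the twist map on $U\otimes U$ is $1_U$ (yanking), and tracing $U$ out of the identity on $V\otimes U$ yields $\dim(U)\cdot 1_V$.

```python
import numpy as np

def categorical_trace(F):
    """Trace Tr^U_{V,W}(f) for f: V (x) U -> W (x) U in FdVec.

    F has shape (dimW, dimU, dimV, dimU) with F[j, l, i, k] = m_{ikjl},
    i.e. f(e_i (x) e_k) = sum_{j,l} m_{ikjl} e_j (x) e_l.  The trace is
    Tr(f): e_i |-> sum_j (sum_alpha m_{i alpha j alpha}) e_j.
    """
    return np.einsum('jaia->ji', F)

# Yanking: the trace of the twist sigma: U (x) U -> U (x) U is 1_U.
d = 3
twist = np.zeros((d, d, d, d))
for i in range(d):
    for k in range(d):
        twist[k, i, i, k] = 1.0        # sigma(e_i (x) e_k) = e_k (x) e_i
assert np.allclose(categorical_trace(twist), np.eye(d))

# Tracing out U from the identity on V (x) U yields dim(U) . 1_V.
idVU = np.einsum('ji,lk->jlik', np.eye(2), np.eye(4))
assert np.allclose(categorical_trace(idVU), 4 * np.eye(2))
```

The single `einsum` contraction is exactly the sum over the repeated index $\alpha$ in the displayed formula.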
Returning to the main issue of this paper, we are now able to construct a \em bipartite projector \em (\textit{i.e.}~ a projector on an object of type $A\otimes B$) as \[ {\rm P}_f:=\ulcorner f\urcorner\circ(\ulcorner f\urcorner)^\dagger=\ulcorner f\urcorner\circ\llcorner f_*\lrcorner:A^*\otimes B\to A^*\otimes B\,, \] that is, we have an assignment \[ {\rm P}_{\_}:{\bf C}({\rm I},A^*\otimes B)\longrightarrow {\bf C}(A^*\otimes B,A^*\otimes B)::\Psi\mapsto \Psi\circ \Psi^\dagger \] from bipartite elements to bipartite projectors. Note that the use of $(\ )_*$ is essential in order for ${\rm P}_f$ to be endomorphic. We can \em normalize \em these projectors ${\rm P}_f$ by considering $s_f\bullet {\rm P}_f$ for $s_f:=(\llcorner f_*\lrcorner\circ\ulcorner f\urcorner)^{-1}$ (provided this inverse exists in ${\bf C} ({\rm I} , {\rm I} )$), yielding \[ (s_f\bullet {\rm P}_f)\circ(s_f\bullet {\rm P}_f)=s_f\bullet(\ulcorner f\urcorner\circ (s_f\bullet(\llcorner f_*\lrcorner\circ\ulcorner f\urcorner))\circ\llcorner f_*\lrcorner) =s_f\bullet {\rm P}_f\,, \] and also \[ (s_f\bullet {\rm P}_f)\circ\ulcorner f\urcorner=\ulcorner f\urcorner\qquad{\rm and}\qquad \llcorner f_*\lrcorner\circ(s_f\bullet {\rm P}_f)=\llcorner f_*\lrcorner\,. \] Any compact closed category in which $(\ )^{\ast}$ is the identity on objects is trivially strongly compact closed. Examples include relations and finite-dimensional real inner-product spaces, and also the interaction category SProc from \cite{}. So, importantly, are finite-dimensional \emph{complex} Hilbert spaces and linear maps $({\bf FdHilb},\otimes)$. 
We take ${\cal H}^*$ to be the \emph{conjugate space}, that is, the Hilbert space with the same elements as ${\cal H}$ but with the scalar multiplication and the inner-product in ${\cal H}^*$ defined by \[ \alpha \bullet_{{\cal H}^*} \phi := \bar{\alpha} \bullet_{{\cal H}} \phi \qquad \qquad\qquad \langle \phi \mid \psi \rangle_{{\cal H}^*} := \langle \psi \mid \phi \rangle_{{\cal H}}\,, \] where $\bar{\alpha}$ is the complex conjugate of $\alpha$. Hence we can still take $\epsilon_{\cal H}$ to be the \em sesquilinear \em inner-product. Conversely, an \em abstract notion of inner product \em can be defined in any strongly compact closed category. Given `elements' $\psi,\phi:{\rm I}\to A$, we define \[ \langle\psi\mid\phi\rangle:=\psi^\dagger\circ\phi\,\in{\bf C}({\rm I},{\rm I})\,. \] As an example, the inner-product in $(\mathbf{Rel},\times)$ is, for $x,y\subseteq\{*\}\times X$, \[ \langle x\mid y\rangle= 1_{{\rm I}} \quad{\rm for}\quad x \cap y \neq \varnothing \qquad {\rm and} \qquad \langle x\mid y\rangle= 0_{{\rm I}} \quad{\rm for}\quad x \cap y = \varnothing \] with $1_{{\rm I}}:=\{*\}\times\{*\}\subseteq\{*\}\times\{*\}$ and $0_{{\rm I}}:=\emptyset\subseteq\{*\}\times\{*\}$. When defining \em unitarity of an isomorphism \em $U:A\to B$ by $U^{-1}=U^\dagger$ we can prove the defining properties both of inner-product space adjoints and inner-product space unitarity: \[ \langle f^\dagger\!\circ\psi\mid\phi\rangle_B=(f^\dagger\!\circ\psi)^\dagger\! \circ\phi= \psi^\dagger\!\circ f\circ\phi=\langle \psi\mid f\circ\phi\rangle_A\,, \] \[ \langle U\circ\psi\mid U\circ\varphi\rangle_B= \langle U^\dagger\!\circ U\circ\psi\mid \varphi\rangle_A= \langle \psi\mid \varphi\rangle_A\,, \] for $\psi,\varphi:{\rm I}\to A$, $\phi:{\rm I}\to B$, $f:B\to A$ and $U:A\to B$. As shown in \cite{AbrCoe2}, an alternative way to define the abstract inner-product is \begin{diagram} \ {\rm I} \!& \rTo^{\rho_{{\rm I}}} & {\rm I} \otimes {\rm I} & \rTo^{1_{\rm I} \!\otimes\! 
u_{{\rm I}}} & {\rm I} \otimes {\rm I}^*\! & \rTo^{\phi \!\otimes\! \psi_*} & A \otimes A^*\! & \rTo^{\epsilon_A} & \!{\rm I} \end{diagram} where $u_{\rm I}:{\rm I}\simeq {\rm I}^*$ and $\rho_{\rm I}:{\rm I}\simeq {\rm I}\otimes{\rm I}$. Here the key data we use is the coname $\epsilon_A:A \otimes A^*\to{\rm I}$\,, and also $(\ )_*$: cf.~also the above examples of both real and complex inner-product spaces where $\epsilon_A:=\langle-\mid-\rangle$. Hence it is fair to say that \[ {{\rm strong\ compact\ closure}\over{\rm compact\ closure}} \ \simeq\ {{\rm inner\mbox{\rm -}product\ space}\over{\rm vector\ space}}\,. \] \begin{comment} Using this abstract inner product, the explicit definition of a bipartite projector (with one-dimensional range) in ${\bf FdHilb}$, that is, \[ {\rm P}_\Psi:={\langle \Psi\mid -\rangle\over|\Psi|^2}\bullet\Psi: V\otimes W\to V\otimes W \] with $\Psi\in V\otimes W$ (cf.~\cite{AbrCoe1}), also carries over to the abstract setting as \[ (s_f\circ(\llcorner f_*\lrcorner\circ-))\bullet\ulcorner f\urcorner: {\bf C} ({\rm I} , A^*\otimes B)\to {\bf C} ({\rm I} , A^*\otimes B) \] where we used $(\ulcorner f\urcorner)^\dagger=\llcorner f_*\lrcorner$\,. (??) \end{comment} Finally, note that abstract bipartite projectors ${\rm P}_f$ have two components: a `name'-component and a `coname'-component. While in most algebraic treatments involving projectors these are taken to be primitive, in our setting projectors are composite entities, and this decomposition will carry over to their crucial properties (see below). 
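In $({\bf FdHilb},\otimes)$ these constructions reduce to a few lines of linear algebra. The sketch below (illustrative helper names, not from the text) builds the name of $f$ by flattening its matrix, forms the normalized bipartite projector $s_f\bullet{\rm P}_f$, and checks idempotence, the fixed-point property on $\ulcorner f\urcorner$, and the defining adjoint equation $\langle f^\dagger\circ\psi\mid\phi\rangle=\langle\psi\mid f\circ\phi\rangle$ with $f^\dagger$ realized as the conjugate transpose.

```python
import numpy as np

def name(F):
    """Name of f: A -> B as a vector in A* (x) B (flattened matrix of f)."""
    return F.reshape(-1).astype(complex)

def normalized_projector(F):
    """s_f . P_f = |name(f)><name(f)| / <name(f)|name(f)>."""
    psi = name(F)
    return np.outer(psi, psi.conj()) / np.vdot(psi, psi).real

F = np.array([[1 + 2j, 0.5], [0, 1j]])
P, psi = normalized_projector(F), name(F)
assert np.allclose(P @ P, P)          # idempotent
assert np.allclose(P @ psi, psi)      # fixes the name of f

# Abstract adjoint = conjugate transpose: <f† ψ | φ> = <ψ | f φ>.
g = np.array([[1 + 1j, 2], [3j, 4]])
x, y = np.array([1j, 2.0]), np.array([0.5, 1 - 1j])
assert np.isclose(np.vdot(g.conj().T @ x, y), np.vdot(x, g @ y))
```

Note that `np.vdot` conjugates its first argument, matching the sesquilinear inner product of the conjugate-space convention above.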
We depict names, conames, and projectors as follows: \vspace{3.5mm}\noindent{ \begin{minipage}[b]{1\linewidth} {\epsfig{figure=Pic1_1.eps,width=380pt}} \begin{picture}(380,0) \put(33.5,55.5){$\ulcorner f\urcorner$} \put(102.5,30){$\llcorner f\lrcorner$} \put(216.5,41){\large${\rm P}_f$} \put(273,41){\LARGE$:=$} \put(330.5,53){$\ulcorner f\urcorner$} \put(330.5,30){$\llcorner f_{\!*\!}\lrcorner$} \end{picture} \end{minipage}} \vspace{-2.5mm}\noindent In this representation, diagrams (\ref{ccc1}) and (\ref{sccc2}) can be expressed as the respective pictures \vspace{3.0mm}\noindent{ \begin{minipage}[b]{1\linewidth} \centering{\epsfig{figure=Pic8.eps,width=300pt}} \begin{picture}(300,0) \put(33.5,71.5){$\ \epsilon_A$} \put(93.5,43.5){$\ \eta_A$} \put(211,90.5){$\ \eta_A^\dagger$} \put(212.5,33.5){$\ \eta_A$} \put(260.5,63.5){$\ \sigma_{A,A}$} \end{picture} \end{minipage}} \vspace{-4mm}\noindent being equal to the identity. Below we will express equalities in this manner. \section{Information-flow through projectors} \begin{lemma}[Compositionality - Abramsky and Coecke LiCS`04] In a compact closed category \[ \lambda^{-1}_C\circ (\llcorner f\lrcorner\otimes 1_C)\circ(1_A\otimes\ulcorner g\urcorner)\circ\rho_A=g\circ f \] for $A\rTo^{f}B\rTo^{g}C$, $\rho_A:A\simeq A\otimes{\rm I}$ and $\lambda_C:C\simeq{\rm I}\otimes C$, i.e., \vspace{3.0mm}\noindent{ \begin{minipage}[b]{1\linewidth} \centering{\epsfig{figure=Pic2.eps,width=200pt}} \begin{picture}(200,0) \put(32.5,59.5){$\llcorner f\lrcorner$} \put(93,33.5){$\ulcorner g\urcorner$} \put(153.5,44){\LARGE$=$} \put(187,62){$g$} \put(187,31.5){$f$} \end{picture} \end{minipage}} \vspace{-4mm}\noindent in our graphical representation. \end{lemma} Following \cite{AbrCoe2,Coe} we can think of the information flowing along the grey line in the diagram below, being acted on by the morphisms which label the coname and the name respectively. 
\vspace{0.5mm}\noindent{ \begin{minipage}[b]{1\linewidth} \centering{\epsfig{figure=Pic3.eps,width=140pt}} \begin{picture}(140,0) \put(32.5,60.5){$\llcorner f\lrcorner$} \put(91,31){$\ulcorner g\urcorner$} \end{picture} \end{minipage}} \vspace{-3.5mm}\noindent We refer to this as the \em information-flow interpretation of compact closure\em. Many variants can also be derived \cite{AbrCoe2,Coe}. The pictures expressing the non-trivial branches of diagrams (\ref{ccc1}) and (\ref{sccc2}) become \vspace{2.0mm}\noindent{ \begin{minipage}[b]{1\linewidth} \centering{\epsfig{figure=Pic9.eps,width=300pt}} \begin{picture}(300,0) \put(35.5,86.5){$\ \epsilon$} \put(95.5,56.5){$\ \eta$} \put(211,106){$\ \eta^\dagger$} \put(214,46){$\ \eta$} \put(263.5,81.5){$\ \sigma$} \end{picture} \end{minipage}} \vspace{-5mm} Lemma 2 of \cite{AbrCoe1}, which states that we can realize any linear map $g:V\to W$ using only $({\bf FdHilb},\otimes)$-projectors, follows trivially by setting $f:=1_{V}$ while viewing both $\llcorner 1_{V\!\!}\lrcorner$ and $\ulcorner g\urcorner$ as being parts of projectors --- all this is up to a scalar multiple which depends on the input of ${\rm P}_g$. Note that by functoriality $1_{V^*}=(1_V)_*$ and hence ${\rm P}_{(1_V)_*}\!\!={\rm P}_{1_{V^*}}$. As discussed in \cite{Coe} this feature constitutes the core of \em logic-gate teleportation\em, which is a fault-tolerant universal quantum computational primitive \cite{Gottesman}. Explicitly, \begin{lemma} In a strongly compact closed category ${\bf C}$ for $f:A\to B$, \[ f\otimes(\ulcorner 1_{A^*\!\!}\urcorner\circ \llcorner \xi\lrcorner)=s(f,\xi) \bullet\left(\sigma_{A,B}\circ({\rm P}_{1_{A^*}}\otimes 1_B)\circ(1_A\otimes {\rm P}_f)\right) \] where $s(f,\xi)\in{\bf C}({\rm I},{\rm I})$ is a scalar, $\sigma_{A,B}:A\otimes A^*\!\otimes B\to B\otimes A^*\!\otimes A$ is symmetry, $\xi:A^*\!\to B^*$ is arbitrary, and $s(f,f_*)=1_{\rm I}$.
\end{lemma} \noindent Lemma 1 of \cite{AbrCoe1}, which states that we can realize the $({\bf FdHilb},\otimes)$-trace by means of projectors, follows trivially from eq.(\ref{eq:trace}), noting that $\eta=\ulcorner 1\urcorner$ and $\epsilon=\llcorner 1\lrcorner$ and again viewing these as parts of projectors. Explicitly: \begin{lemma} In a strongly compact closed category ${\bf C}$ for $f:A\otimes C\to B\otimes C$, \[ {\rm Tr}_{A,B}^C(f)\otimes(\ulcorner 1_{C^*}\urcorner\circ \llcorner \xi\lrcorner)=s(\xi) \bullet\left((1_B\otimes {\rm P}_{1_{C^*}})\circ (f\otimes 1_{C^*})\circ(1_A\otimes {\rm P}_{1_{C^*}})\right) \] where $s(\xi)\in{\bf C}({\rm I},{\rm I})$ is a scalar, $\xi:C\to C$ is arbitrary, and $s(1_C)=1_{\rm I}$. \end{lemma} \noindent Indeed, since $\sigma_{A^*\!,A}\circ\ulcorner 1_A\urcorner=\ulcorner (1_{\!A})^{\!*\!}\urcorner=\ulcorner 1_{\!A^*\!}\!\!\urcorner$ by functoriality, eq.(\ref{eq:trace}) is \vspace{3.5mm}\noindent{ \begin{minipage}[b]{1\linewidth} \centering{\epsfig{figure=Pic4.eps,width=210pt}} \begin{picture}(210,0) \put(37.5,62){$f$} \put(92,33.5){$\ulcorner 1\urcorner$} \put(92,90.5){$\llcorner 1\lrcorner$} \put(151.5,59){\LARGE$=$} \put(182,61.5){${\rm Tr}(f)$} \end{picture} \end{minipage}} \vspace{-3.5mm}\noindent Interestingly, using the information-flow interpretation of compact closure, this construction admits an information-flow interpretation too, provided $f$ itself admits one, and can be regarded as a feed-back construction.
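The Compositionality Lemma above can also be verified concretely in $({\bf FdVec}_\mathbb{K},\otimes)$: postcomposing $1_A\otimes\ulcorner g\urcorner$ with $\llcorner f\lrcorner\otimes 1_C$ reproduces $g\circ f$. The sketch below (helper names are ours) uses the convention $f(e_i)=\sum_j m_{ij}e_j$ that matches the matrix elements used for names and conames earlier.

```python
import numpy as np

def compose_via_name_coname(F, G, x):
    """Apply (coname(f) (x) 1_C) . (1_A (x) name(g)) to x in A.

    F is the matrix of f: A -> B and G that of g: B -> C, with the
    row convention f(e_i) = sum_j F[i, j] e_j.
    """
    # 1_A (x) name(g): x |-> x (x) sum_{jk} G[j,k] e_j* (x) e_k
    state = np.einsum('i,jk->ijk', x, G)       # element of A (x) B* (x) C
    # coname(f) (x) 1_C: e_i (x) e_j* |-> F[i,j], leaving the C factor
    return np.einsum('ij,ijk->k', F, state)

F = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, -1.0]])   # f: A -> B, dim 2 -> 3
G = np.array([[2.0, 0.0], [1.0, 1.0], [0.0, 3.0]])  # g: B -> C, dim 3 -> 2
x = np.array([1.0, -2.0])
# Compositionality: the information flows through the coname/name pair.
assert np.allclose(compose_via_name_coname(F, G, x), x @ F @ G)
```

The contraction over the $B^*$ index is precisely the `grey line' along which the information flows in the pictures.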
As an example, for $f:=(g_1\otimes g_2)\circ\sigma\circ(f_1\otimes f_2)$, we have (use naturality of $\sigma$, the definition of (co)name and compositionality) \vspace{3.5mm}\noindent{ \begin{minipage}[b]{1\linewidth} \centering{\epsfig{figure=Pic5.eps,width=210pt}} \begin{picture}(210,0) \put(16,62){$f_1$} \put(57,62){$f_2$} \put(16,103){$g_1$} \put(57,103){$g_2$} \put(38.5,87.7){$\sigma$} \put(91,31.5){$\ulcorner 1\urcorner$} \put(91,131.5){$\llcorner 1\lrcorner$} \put(151.5,79){\LARGE$=$} \put(196,37){$f_1$} \put(196,67){$g_2$} \put(196,97){$f_2$} \put(196,127){$g_1$} \end{picture} \end{minipage}} \vspace{-3.5mm}\noindent When taking $f$ itself to be a projector ${\rm P}_g=\ulcorner g\urcorner\circ\llcorner g_*\!\lrcorner$ we have \vspace{3.5mm}\noindent{ \begin{minipage}[b]{1\linewidth} \centering{\epsfig{figure=Pic6.eps,width=210pt}} \begin{picture}(210,0) \put(32,81){$\ulcorner f\urcorner$} \put(32,63.5){$\llcorner f_{\!*\!}\lrcorner$} \put(91.5,31.5){$\ulcorner 1\urcorner$} \put(91.5,112.5){$\llcorner 1\lrcorner$} \put(151.5,71){\LARGE$=$} \put(196,58){$f_*$} \put(196,88){$f^*$} \end{picture} \end{minipage}} \vspace{-3.5mm}\noindent using $\sigma\circ\ulcorner f\urcorner=\ulcorner f^{\!*\!}\urcorner$, naturality of $\sigma$ and compositionality. Note that the information-flow in the loop is in this case `forward' as compared to `backward' in the previous example. 
For $f$ of type $A\otimes (C_1\otimes \ldots \otimes C_n)\to B\otimes (C_1\otimes \ldots \otimes C_n)$ we can have multiple looping: \vspace{3.5mm}\noindent{ \begin{minipage}[b]{1\linewidth} \centering{\epsfig{figure=Pic7.eps,width=290pt}} \begin{picture}(290,0) \put(107.8,74.3){${\rm P}$} \put(28.3,120.0){$\sigma$} \put(68.3,100.0){$\sigma$} \put(152,31.5){$\ulcorner 1\urcorner$} \put(152,157.5){$\llcorner 1\lrcorner$} \end{picture} \end{minipage}} \noindent Note the resemblance between this behavior and that of \em additive traces \em \cite{Abr,AHS} such as the one on $({\bf Rel},+)$, namely \[ x\,{\rm Tr}_{X,Y}^Z(R)y\Leftrightarrow \exists z_1, \ldots, z_n\in Z.xRz_1R \ldots R z_n Ry \] for $R\subseteq (X+Z)\times (Y+Z)$. In this case we can think of a particle traveling through a network, where the elements $x\in X$ are the possible states of the particle. The morphisms $R\subseteq X\times Y$ are processes that impose a (non-deterministic) change of state $x\in X$ to $y\in R(x)$, emptiness of $R(x)$ corresponding to undefinedness. The sum $X+Y$ is the disjoint union of state sets and $R+S$ represents parallel composition of processes. The trace ${\rm Tr}_{X,Y}^Z(R)$ is \em feedback\em, that is, entering at a state $x\in X$, the particle will either halt, exit at $y\in Y$, or exit at $z_1\in Z$, in which case it is fed back into $R$ at the $Z$ entrance, and so on, until it halts or exits at $y\in Y$. For a more conceptual view of the matter, note that the examples illustrated above all live in the \emph{free compact closed category} generated by a suitable category in the sense of \cite{KellyLaplaza}. Indeed our diagrams, which are essentially `proof nets for compact closed logic' \cite{AD}, give a presentation of this free category. Of course, these diagrams will then have representations in any compact closed category. For a detailed discussion of free constructions for traced and strongly compact closed categories, see the forthcoming paper \cite{Abr05}.
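The feedback reading of the additive trace on $({\bf Rel},+)$ described above can be computed directly: writing $R$ in block form $R_{XY},R_{XZ},R_{ZY},R_{ZZ}$, the trace is $R_{XY}$ together with all chains through $R_{XZ}$, the reflexive-transitive closure of $R_{ZZ}$, and $R_{ZY}$. A sketch with Boolean matrices follows (helper names are ours; we include the direct passage $R_{XY}$, matching the particle description in the text).

```python
import numpy as np

def bmm(A, B):
    """Boolean matrix product = relational composition."""
    return (A.astype(int) @ B.astype(int)) > 0

def additive_trace(RXY, RXZ, RZY, RZZ):
    """Feedback trace of R: X+Z -> Y+Z given in block form.

    x Tr(R) y iff x R y directly, or x R z1 R ... R zn R y for some
    chain of feedback states z1, ..., zn in Z.
    """
    n = RZZ.shape[0]
    star = np.eye(n, dtype=bool)           # RZZ* = I | RZZ | RZZ^2 | ...
    while True:
        new = star | bmm(star, RZZ)
        if np.array_equal(new, star):
            break
        star = new
    return RXY | bmm(bmm(RXZ, star), RZY)

# One input, one output, two feedback states: the particle enters,
# loops z0 -> z1, and exits, so the trace relates x to y.
RXY = np.array([[False]])
RXZ = np.array([[True, False]])
RZZ = np.array([[False, True], [False, False]])
RZY = np.array([[False], [True]])
assert additive_trace(RXY, RXZ, RZY, RZZ)[0, 0]
```

Cutting the feedback edge (setting $R_{ZZ}=\varnothing$) breaks the chain, and the trace no longer relates $x$ to $y$.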
\section{$({\bf FRel},\times,{\rm Tr})$ from $({\bf FdHilb},\otimes,{\rm Tr})$} In \cite{AbrCoe1} \S3.3 we provided a lax functorial passage from the category $({\bf FdHilb},\otimes,{\rm Tr})$ to the category of finite sets and relations $({\bf FRel},\times,{\rm Tr})$. This passage involved choosing a basis for each Hilbert space. When restricting the morphisms of ${\bf FdHilb}$ to those for which the matrices in the chosen bases are $\mathbb{R}^+$-valued we obtain a true functor. The results in \cite{AbrCoe2}, together with the ideas developed in this paper, provide a better understanding of this passage. In any monoidal category, ${\bf C}({\rm I},{\rm I})$ is an abelian monoid \cite{KellyLaplaza} (Prop.~6.1). If ${\bf C}$ has a zero object $0$ and biproducts ${\rm I}\oplus\cdots\oplus{\rm I}$ we obtain an abelian semiring with zero $0_{\rm I}:{\rm I}\to{\rm I}$ and sum $-+-\,:=\,\nabla_{\rm I}\circ (-\oplus-)\circ\Delta_{\rm I}:{\rm I}\to{\rm I}$. When in such a category every object is isomorphic to one of the form ${\rm I}\oplus \cdots \oplus{\rm I}$ (finitary), as is the case for both $({\bf FdHilb},\otimes)$ and $({\bf FRel},\times)$, then this category is equivalent (as a monoidal category) to the category of ${\bf C}({\rm I},{\rm I})$-valued matrices with the usual matrix operations. Note that this equivalence involves choosing a basis isomorphism for each object. For $({\bf FdHilb},\otimes)$ we have ${\bf C}({\rm I},{\rm I})\simeq\mathbb{C}$ and for $({\bf FRel},\times)$ we have ${\bf C}({\rm I},{\rm I})\simeq\mathbb{B}$, the semiring of booleans. Such a category of matrices is trivially strongly compact closed for $(\bigoplus_{i=1}^{i=n}{\rm I})^*:=\bigoplus_{i=1}^{i=n}{\rm I}$, \[ \eta:=(\delta_{i,j})_{i,j}:{\rm I}\to \left(\bigoplus_{i=1}^{i=n}{\rm I}\right)\!\otimes\! 
\left(\bigoplus_{j=1}^{j=n}{\rm I}\right) \] (using distributivity and ${\rm I}\otimes{\rm I}\simeq{\rm I}$), and \[ \epsilon:\left(\bigoplus_{i=1}^{i=n}{\rm I}\right)\otimes \left(\bigoplus_{i=1}^{i=n}{\rm I}\right)\to{\rm I}:: (\psi,\phi)\mapsto \phi^T\circ \psi \] where $\phi^T$ denotes the transpose of $\phi$. In the case of $({\bf FRel},\times)$, this yields the strong compact closed structure described above. If the abelian semiring ${\bf C}({\rm I},{\rm I})$ also admits a non-trivial involution $(\ )_*$, an alternative compact closed structure arises by defining $\epsilon:: (\psi,\phi)\mapsto (\phi^T)_*\circ \psi$, where $(\ )_*$ is applied pointwise. The corresponding strong compact closed structure involves defining the adjoint of a matrix $M$ to be $M^T_*$, \textit{i.e.}~the involution is applied componentwise to the transpose of $M$. In this way we obtain (up to categorical equivalence) the strong compact closed structure on $({\bf FdHilb},\otimes)$ described above, taking $(\ )_*$ to be complex conjugation. Now we can relate trace-preserving and (strongly) compact closed functors to (involution-preserving) semiring homomorphisms. Any such homomorphism $h : R \to S$ lifts to a functor on the categories of matrices. Moreover, such a functor preserves compact closure (and strong compact closure if $h$ preserves the given involution), and hence also the trace. Clearly there is no semiring embedding $\xi:\mathbb{B}\rightarrow\mathbb{C}$ since $\xi(1+1)\not=\xi(1)+\xi(1)$. Conversely, for $\xi:\mathbb{C}\rightarrow\mathbb{B}$ neither $\xi(-1)=0$ nor $\xi(-1)=1$ provides a true homomorphism. But setting $\xi(c)=1$ for $c\not=0$ and $\xi(0)=0$ we have $\xi(x+y)\leq \xi(x)+\xi(y)$ and $\xi(x\cdot y)=\xi(x)\cdot\xi(y)$, which lifts to a \emph{lax} functor; this makes sense since {\bf FRel} is order-enriched. Restricting from $\mathbb{C}$ to $\mathbb{R}^+$ we obtain a true homomorphism, and hence a compact closed functor. 
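The lax functoriality of $\xi$ can be seen concretely on matrices: taking entrywise supports commutes with matrix multiplication over $\mathbb{R}^+$, while over $\mathbb{C}$ cancellation can make the support of a product strictly smaller than the product of the supports. A sketch of this comparison (the helper names are ours):

```python
import numpy as np

def supp(M):
    """xi applied entrywise: the boolean support of a matrix."""
    return M != 0

def bool_matmul(A, B):
    """Matrix multiplication over the boolean semiring (or, and)."""
    return (A.astype(int) @ B.astype(int)) > 0

# Over R+ (all entries non-negative), xi is a true homomorphism:
# supp(M @ N) coincides with the boolean product of the supports.
M = np.array([[1.0, 2.0], [0.0, 3.0]])
N = np.array([[0.5, 0.0], [1.0, 4.0]])
assert np.array_equal(supp(M @ N), bool_matmul(supp(M), supp(N)))

# With signs (a fortiori over C), cancellation breaks equality:
# supp(A @ B) <= bool_matmul(supp(A), supp(B)) only pointwise.
A = np.array([[1.0, -1.0]])
B = np.array([[1.0], [1.0]])
assert supp(A @ B).sum() == 0                    # 1 + (-1) = 0
assert bool_matmul(supp(A), supp(B)).sum() == 1  # supports predict nonzero
```

The inequality in the last two lines is exactly the laxness $\xi(x+y)\leq\xi(x)+\xi(y)$, lifted to matrices via the order-enrichment of {\bf FRel}.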
\refs \bibitem [Abramsky 1996]{Abr} Abramsky, S. (1996) {\em Retracing some paths in process algebra}. Proceedings of the Seventh International Conference on Concurrency Theory, LNCS {\bf 1119}, 1--17. \bibitem [Abramsky 2005]{Abr05} Abramsky, S. (2005) {\em Abstract scalars, loops, and free traced and strongly compact closed categories}. To appear. \bibitem [Abramsky and Coecke CTCS`02]{AbrCoe1} Abramsky, S. and Coecke, B. (2003) {\em Physical traces: quantum vs.~classical information processing}. Electronic Notes in Theoretical Computer Science {\bf 69} (CTCS`02 issue). \texttt{arXiv:cs/0207057} \bibitem [Abramsky and Coecke LiCS`04]{AbrCoe2} Abramsky, S. and Coecke, B. (2004) {\em A categorical semantics of quantum protocols}. Proceedings of the 19th Annual IEEE Symposium on Logic in Computer Science (LiCS`04), IEEE Computer Science Press. (extended version at \texttt{arXiv:quant-ph/0402130}) \bibitem [Abramsky and Coecke (nd)]{AC2} Abramsky, S. and Coecke, B. (2004) {\em Abstract quantum mechanics}. In preparation. (major improvements and additions as compared to \cite{AbrCoe2}) \bibitem [Abramsky and Duncan 2004]{AD} Abramsky, S. and Duncan, R.~W. (2004) {\em Categorical quantum logic}. To appear in the Proceedings of the Second International Workshop on Quantum Programming Languages (QPL 04). \bibitem [Abramsky, Haghverdi and Scott 2002]{AHS} Abramsky, S., Haghverdi, E. and Scott, P.~J. (2002) {\em Geometry of interaction and linear combinatory algebras}. Mathematical Structures in Computer Science {\bf 12}, 625--665. \bibitem [Baez 2004]{Baez} Baez, J. (2004) {\em Quantum quandaries: a category-theoretic perspective}. Structural Foundations of Quantum Gravity, eds. S. French, D. Rickles and J. Saatsi, Oxford University Press. \texttt{arXiv:quant-ph/0404040} \bibitem [Barr 1979]{Barr} Barr, M. (1979) {\em $*$-Autonomous Categories}. Lecture Notes in Mathematics {\bf 752}, Springer-Verlag. 
\bibitem [Bennett et al.~1993]{BBC} Bennett, C.~H., Brassard, G., Cr\'epeau, C., Jozsa, R., Peres, A. and Wootters, W.~K. (1993) {\em Teleporting an unknown quantum state via dual classical and Einstein-Podolsky-Rosen channels}. Physical Review Letters {\bf 70}, 1895--1899. \bibitem [Coecke 2003]{Coe} Coecke, B. (2003) {\em The logic of entanglement. An invitation}. Research Report PRG-RR-03-12, Oxford University Computing Laboratory. \texttt{web.comlab.ox.ac.uk/oucl/publications/tr/rr-03-12.html} (short version at \texttt{arXiv:quant-ph/0402014}) \bibitem [Gottesman and Chuang 1999]{Gottesman} Gottesman, D. and Chuang, I.~L. (1999) {\em Quantum teleportation is a universal computational primitive}. Nature {\bf 402}, 390--393. \texttt{arXiv:quant-ph/9908010} \bibitem [Joyal, Street and Verity 1996]{JSV} Joyal, A., Street, R. and Verity, D. (1996) {\em Traced monoidal categories}. Mathematical Proceedings of the Cambridge Philosophical Society {\bf 119}, 447--468. \bibitem [Kelly and Laplaza 1980]{KellyLaplaza} Kelly, G.~M. and Laplaza, M.~L. (1980) {\em Coherence for compact closed categories}. Journal of Pure and Applied Algebra {\bf 19}, 193--213. \bibitem [Seely 1989]{Seely} Seely, R.~A.~G. (1989) {\em Linear logic, $*$-autonomous categories and cofree coalgebras}. Categories in Computer Science and Logic, Contemporary Mathematics {\bf 92}, 371--382. \bibitem [von Neumann 1932]{vN} von Neumann, J. (1932) {\it Mathematische Grundlagen der Quantenmechanik}. Springer-Verlag. English translation (1955): {\it Mathematical Foundations of Quantum Mechanics}. Princeton University Press. \bibitem [\.{Z}ukowski et al.~1993]{Swap} \.{Z}ukowski, M., Zeilinger, A., Horne, M.~A. and Ekert, A.~K. (1993) {\em `Event-ready-detectors' Bell experiment via entanglement swapping}. Physical Review Letters {\bf 71}, 4287--4290. \endrefs \end{document}